Lecture on Thurs, Oct 13, 2011. (Slides)

Readings

  • Required

    • Perception in visualization. Healey. (html)

    • Graphical Perception: Theory, Experimentation and the Application to the Development of Graphical Models. William S. Cleveland, Robert McGill, J. Am. Stat. Assoc. 79:387, pp. 531-554, 1984. (pdf)

    • Chapter 3: Layering and Separation, In Envisioning Information. Tufte.
  • Optional

    • Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visualization Design. Jeffrey Heer, Michael Bostock. CHI 2010. (html)

    • Gestalt and composition. In Course #13, SIGGRAPH 2002. Durand. (1-up pdf) (6-up pdf)

    • The psychophysics of sensory function. S.S. Stevens. (pdf)

Comments

jkeeshin wrote:

I thought the Cleveland paper on Graphical Perception was overall informative and a little surprising. My main takeaway was that in most situations we should prefer certain chart types (dot charts, framed-rectangle charts) over most others, and that this preference has a basis in the way we perceive things and in our snap judgements during 'elementary perceptual tasks.'

The most surprising suggestion I thought they had was to replace shaded maps with framed bar charts. The least surprising was that angle judgments are overall imprecise, but I think I still have a soft-spot for pie charts somewhere.

Also, I had seen this statistic before, but found it interesting that human perception of magnitude is governed by the "power law of theoretical psychophysics," p = k·a^α.
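
A minimal sketch of that law, with exponent values that approximate Stevens' published figures (the exact constants here are illustrative assumptions, and the scaling factor k is set to 1):

```python
# Stevens' power law: perceived magnitude p = k * a**alpha.
# Exponents below are approximate values from Stevens' work.
EXPONENTS = {"length": 1.0, "area": 0.7, "brightness": 0.5, "shock": 3.5}

def perceived_magnitude(a, sense, k=1.0):
    """Predicted sensation for a stimulus of physical magnitude a."""
    return k * a ** EXPONENTS[sense]

# Doubling a stimulus doubles perceived length, but perceived area grows
# only about 1.6x, one reason area encodings tend to be underestimated.
print(perceived_magnitude(2.0, "length"))  # 2.0
print(perceived_magnitude(2.0, "area"))    # ~1.62
```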

In the Tufte reading, I enjoyed the discussion of "1+1=3," but found the best part to be the use of simple color for layering. This seemed to be the most effective in the Marshaling Signals diagram.

jnriggs wrote:

I enjoyed Tufte's chapter on layering and separation even more than I usually like his writing. The biggest takeaway for me was the idea that layering and separation affect everything from the macro to micro and everywhere in between.

More importantly, though, I think he did a great job of explaining and demonstrating how these different "zoom levels" of layering and separation affect each other. If we alter something as small as the line width of individual elements, we can enhance or disturb the comparisons being exposed at the most macro levels of the layering. For this reason, I think it's really clever that he included the optical illusions on page 60 to reveal how important negative space is in the context of how elements relate to one another. "1+1=3 effects" is one of the most valuable design ideas to remind myself of when an in-progress visualization doesn't feel quite right.

zgalant wrote:

I really liked Tufte's explanations and examples of "1+1=3." Negative space can be extremely powerful, and he does a good job of showing it. Here are some other examples of negative space used in art that are pretty cool: http://creativedusk.com/the-art-of-negative-space/

Also, in Tufte's proof of 1+1=3 on page 61, he doesn't even mention the overlap between the 2 crossing lines as more information.

@jkeesh - I also agree that adding simple color is extremely helpful, and I think it emphasizes the different layers and negative space. I especially think grays and one contrasting color is very helpful -- like the blue water or the red on the marshalling signals.

abless wrote:

I am generally very interested in and fascinated by (human) visual perception. It is stunning what elements we humans can easily perceive and recognize, and what other elements we simply ignore. For example, I enjoyed the Change Blindness examples in Healey's paper: the two images appear exactly the same, but once you have found the difference, it is obvious and impossible to ignore. The imperfection and peculiarity of our visual system also raises the question of to what extent what we perceive can be called reality. I think one theory in neuroscience states that reality is more of a projection: "vision is graphics, in that what we see is really our own internally generated 3D graphics, modelled to fit the 2D retinal data, with the model testing and updating occurring here in the thalamus." [Daugman, http://www.cl.cam.ac.uk/teaching/1011/CompVision/ComputerVision2011.pdf].

I wasn't as impressed by the 1+1=3 idea as the commenters above. I just didn't perceive the negative space, and it took me some time to understand how one could perceive the two separated bars as one bar of width 3. Clearly, though, negative space is very powerful, with one great recent example being the Steve Jobs logo tribute:

  • steve-jobs-apple-logo-tribute.

abless wrote:

Also, another great example of how our perception of reality can be heavily skewed:

1) Lightness illusion:

[image: checker shadow illusion]

2) Color constancy:

[image: color constancy cube]

"The brown tile at the center of the illuminated upper face of the cube and the orange tile at the center of the shadowed face are actually returning the same spectral light to the eye (as is the tan tile lying on the ground-plane in the foreground). Readers who find this hard to believe can convince themselves by cutting holes in a sheet of paper such that the rest of the scene is masked out, in which case the two tiles on the faces of the cube look identical in both color and brightness. This illustration provides a dramatic example of the influence of context on the color perceived." (http://www.ncbi.nlm.nih.gov/books/NBK11059/box/A767/)

abless wrote:

Again, apologies for spamming the comments, but here's an example somewhat related to the Change Blindness seen in lecture (Derren Brown, Person Swap):

http://www.youtube.com/watch?v=vBPG_OBgTWg

blakec wrote:

I really enjoyed Tufte's description of how negative space can both add and subtract value from a graphic. If the negative space is created using light borders, it can direct the viewer to the negative space without overwhelming them with a dark and powerful border. The map of Tokyo shows how visualizing houses closely packed together can induce the perception of streets separating the houses. It's interesting to me, though, that in some cases negative space can really enhance an image, while in other cases, if it is overemphasized, it can clutter the image. This is shown in the air instruction manual's index, in which the black boxes that contain the white words create large negative spaces that distract from the important words. Tufte's examples show how careful someone has to be when generating a visualization so that they don't create unwanted negative spaces that distract from the image.

jojo0808 wrote:

@abless: Argh, you posted the video I was going to post. :) Definitely a really interesting watch.

I really liked this statement in the Healey paper: "The goal of human vision is not to create a replica or image of the seen world in our heads. A much better metaphor for vision is that of a dynamic and ongoing construction project, where the products being built are short-lived models of the external world that are specifically designed for the current visually guided tasks of the viewer."

This makes so much sense, and I think if this weren't so, we would have an extremely hard time understanding things like comics, where artists often omit things like backgrounds or other details because they're not important to the moment. The comic artist considers what the aim (or the task, if you will) of a panel or page is -- is this portraying something about a character? is the plot being moved forward? do we want to depict something about the story's world? -- and THEN includes or excludes visual elements based on that. In this way, the reader can fully construct a world using the bits and pieces they pick up while reading each page over time, and filling in the blanks eventually becomes automatic (at least in the context of a story).

Even in our day-to-day vision, I think the majority of the world is invisible to us until we feel like we need to interact with it or unless it interacts with us for some reason.

This is only marginally related but I'm going to take this chance to plug a wonderful book I read a while ago: There's a chapter in a book called Einstein's Dreams that starts like this: "Ten minutes past six, by the invisible clock on the wall. Minute by minute, new objects gain form. Here, a brass wastebasket appears. There, a calendar on a wall. Here, a family photograph, a box of paper clips, an inkwell, a pen. There, a typewriter, a jacket folded on a chair. In time, the ubiquitous bookshelves emerge from the night mist that hangs on the walls."

ajoyce wrote:

As I was looking through the perception examples on the Healey webpage, something struck me: representing data as raw numbers is still a type of visualization, and actually one which humans have developed a reasonable ability to perceive effectively. Encoding data as series of numerical digits is essentially a form of shape encoding, but one that carries additional intrinsic meaning that is almost universally understood. We have been trained to perceive the shape "9" as carrying greater quantitative value than the shape "6", despite the two shapes' similarities. Furthermore, the grouping of two shapes, such as "11", carries greater value than the sum of two individual "1" shapes and greater meaning than the grouping of two simple vertical lines ("||").

It is interesting that this is presumably not a natural aspect of human perception, as the common system of numeric digits must be learned by children before they can intuitively understand what "3" or "5" means. However, once this knowledge has been acquired, it seems like one does develop an intuitive sense that "5" is greater than "3" — a sense that is required to interpret almost any raw quantitative data effectively without visualizing it in an alternative way.

mlchu wrote:

I enjoyed the readings, and the illustrations in class reinforced my understanding of the theories. While I am amazed by how the use of multiple attributes can yield both gains and interference in a visualization, I am more inclined to keep the clarity and simplicity of the presentation by adopting a single dimension for each variable. An additional attribute can trigger unnecessary interpretation and reasoning, which can be avoided by using one single attribute, as long as it is clear enough to convey the message. Another takeaway from the class illustrations and the Cleveland paper is that, while these principles serve as guidelines for visualization design, there is no universal prescription for how to make a good graph. Perception of a graph depends highly on personal preference. To create better visualizations that effectively convey the message, one good suggestion mentioned in class is customizing the visualization according to personal preference.

ardakara wrote:

A visual illusion that I want to draw attention to is what's called the Ebbinghaus Illusion. Here is an example:

[image: Ebbinghaus illusion]

There seems to be debate over why exactly this happens, but the basic idea is that a circle surrounded by larger circles will look smaller than another circle of the same size surrounded by smaller circles. It's very similar to the color illusion @abless gave examples of, except related to size. Contextual comparison seems to have a big role in our cognitive system and may skew our comparisons.
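
The display is easy to reproduce. A minimal matplotlib sketch, with all sizes and positions chosen arbitrarily; both center circles are drawn with the same radius:

```python
# Minimal Ebbinghaus display: two identical orange circles, one ringed by
# large gray circles, one by small ones. All dimensions are arbitrary.
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(8, 4))
for cx, ring_r, surround_r, n in [(2.0, 1.5, 0.7, 6), (6.0, 0.9, 0.2, 8)]:
    ax.add_patch(plt.Circle((cx, 2.0), 0.5, color="orange"))  # same size!
    for t in np.linspace(0, 2 * np.pi, n, endpoint=False):
        center = (cx + ring_r * np.cos(t), 2.0 + ring_r * np.sin(t))
        ax.add_patch(plt.Circle(center, surround_r, color="gray"))
ax.set_xlim(-0.5, 8.5)
ax.set_ylim(-0.5, 4.5)
ax.set_aspect("equal")
ax.set_axis_off()
plt.show()
```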

One can think of cases where this illusion will actually throw off the perceived information from a visualization. We saw that Tableau often offers circles of different sizes overlaid onto a map for data with geographical and quantitative dimensions. If a city or a state with a certain size circle is surrounded by others with larger circles, though, that will make it seem smaller than others with the same value.

If we take some artistic license (which we've been strongly advised against thus far, but just bear with me), we can even believe that this is not a bad thing. Our eyes are doing a local filtering, where we are grading the subsections of a visualization on a curve. So having an X value where everyone else nearby has 2X maps to a final value that's lower than having an X value where everyone around has X/2. Yes, objectively it's distorting the data, but one can definitely argue that the distortion is a useful one.

mbarrien wrote:

Someone in class asked about how long a flicker is necessary for change blindness. There's a downloadable demo I found here that allows you to change the length of time of the flicker and judge for yourself: http://cognitrn.psych.indiana.edu/CogsciSoftware/ChangeBlindness

(I know I've seen a better demo elsewhere that allows you to change the length on the fly using a slider as well as change the color of the flash, but I can't find it now. Has anyone else seen this?)

Another link I posted for an earlier lecture, but which seems more relevant here, shows that change blindness can happen without any flicker, occurring instead over a long period of time. Demo video here: http://www.newscientist.com/blogs/nstv/2011/06/friday-illusion-can-you-spot-the-change.html

awpharr wrote:

@abless: Wow, both of the images and the video you posted are fascinating! I cut two holes in a piece of paper to test the cube example, and found myself moving the piece back and forth to see what the visual threshold is for the orange tile to appear brown. It is quite incredible what our brains can do with context.

I played around with the perception example applet on the Healey website. I set the exposure to the lowest amount (50ms) and went through each search type with both the lowest (10) and highest (99) number of elements possible. With color as the search type, I could determine every time whether the red circle was there. With shape as the search type, it was simple to find the red circle when the number of elements was low, but very difficult when that number was high. For the conjoined search type, I could find the red circle about 50% of the time with 10 elements, and could never find it with the highest number. These findings reaffirm some of the things we talked about in class today. Color is easily distinguishable when one color is set against another. Shape is a bit more difficult, but similarly distinguishable when one shape is set against another. However, when both are mixed together, our brains have a difficult time picking out a single data point. This shows how much difficulty we have quickly sorting by two separate variables.
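
For anyone who wants to build similar displays outside the applet, here is a rough sketch of the conjunction condition (element counts, colors, and layout are my own arbitrary choices, not Healey's actual code):

```python
# Conjunction search display: red squares and blue circles as distractors,
# one red circle as the target (findable only by color AND shape together).
import random
import matplotlib.pyplot as plt

random.seed(0)
fig, ax = plt.subplots(figsize=(6, 6))
for _ in range(99):  # the applet's hardest setting
    x, y = random.random(), random.random()
    if random.random() < 0.5:
        ax.scatter(x, y, c="red", marker="s", s=60)
    else:
        ax.scatter(x, y, c="blue", marker="o", s=60)
ax.scatter(random.random(), random.random(), c="red", marker="o", s=60)
ax.set_axis_off()
plt.show()
```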

olliver wrote:

It makes sense to me that people might underestimate area, especially with circles, but I would think that humans must have some intuitive sense of volume because of the need to guess at how much more one thing weighs than another. Indeed, heaviness was seen to be overestimated in the graph of the perception curve. I'm curious as to whether an illustration of a 3D object that emphasizes its physical weight would trigger that heaviness sensation and allow a viewer to have a more accurate impression of increased volume. It might even be that when something looks like a heavier material, its relative volume seems larger. This would encode another dimension, not texture really, but a material or substance.

  • The more I think about it, the more I feel that generating a finite list of possible visual dimensions is really quite impossible when one considers all the ways an image can conjure sensations and feelings, such as the facial expression example. Just think about how many dimensions an artist can encode into a single painting. Lumping this all under Shape or Form is just really unhelpful and inaccurate.

ifc wrote:

Wow, abless' photos are awesome. I couldn't quite believe the claims even after trying to cover up my screen with my hands, so I had to turn to the DigitalColor Meter application to verify manually. Very neat stuff. Clearly our brains are wired to fill in the perceived blanks/structures behind the scenes. I'm kind of surprised not to see M.C. Escher's name somewhere on the comment board.

Though these illusions/eye tricks are a fascinating part of human perception, I personally doubt they have much of an effect on general data visualization. It just seems like they largely have to do with the brain reconstructing physical objects (which don't appear much in visualizations), and they would be readily apparent to a visualization creator. Nonetheless, it's a very cool area of study.

Anyway, I wasn't impressed with Tufte's 1+1=3 discussion. I think the concept itself was more of an attempt to illustrate how negative space (or, more generally, visual objects) can add more to a visualization than you might realize if you aren't careful. To me this message just didn't come through enough. In his examples, it seemed like a matter of reducing unnecessary noise that can draw users' attention more than anything else. Also, the layering section was much more comprehensive in class than in the text.

crfsanct wrote:

Change blindness was fascinating to experience in the flickering images. Even though we had an easy time in lecture finding the change when the two static images were next to each other, change blindness seems to be very much at play in "spot the difference" games as well: http://en.wikipedia.org/wiki/Spot_the_difference

The movement of the eyes from one image to the other, even though it is a short distance, seems to be enough of a distraction. We need to move our eyes because we actually can't see much detail in our periphery.

Another thing I liked was that the Healey reading presented many preattentive visual tasks. I think that many of these should be made available as options for encoding. It might only be possible through software, but I think motion would be a nice encoding variable for ordinal dimensions. We could rotate shapes at various speeds, for example. Movement can also be negative, which can be a useful property since most encodings are not two-sided.

luyota wrote:

I really, really, really enjoyed the lecture today. One of the most interesting topics in class was the brain's naturally double-buffered visual perception system. It is not news to me that we can easily spot the difference between two very similar images if we switch directly between them, but after considering this double-buffer analogy, and seeing it used to produce videos that put grey screens between two similar images, I was surprised to learn that the brain's buffered images are flushed out by these grey screens. It's amazing.

Also, in response to @abless' shared video, I found another very similar video: http://www.youtube.com/watch?v=tDObotwpOPQ except that the two girls in this video are wearing the same set of clothes and look similar. Anyway, this seems like a fun experiment that might inspire more ideas on human perception dilution. This is just so cool!

Finally, I think there's one type of visual perception dilution not discussed in class: an animation effect like the example below:

[image: animated optical illusion]

I guess it is yet another way human perception builds up the world automatically, while not being totally rational :)

zhenghao wrote:

It was definitely interesting to see all the ways the human visual system could be fooled or misled especially in all the cool optical illusions posted above. It was especially cool to see how the move towards formalizing and rationalizing the use of visual elements started by Bertin has culminated in these scientific and quantifiable perceptual experiments.

In particular, it seemed that there were really two main kinds of basic effects at play described in today's lecture: "low level" effects like pre-attentive effects and "top down" effects like gestalt effects (e.g. the phantom contour experiments). Have studies been done to show how these two different classes of effects might interact?

Also, just as a side comment, the example at the beginning of class about counting the number of 3s reminded me of a cool experiment described in a talk I attended by V.S. Ramachandran a while ago. Basically, it tested number-color synaesthesia (http://en.wikipedia.org/wiki/Synesthesia), a phenomenon where certain people "see" certain numbers in certain colors. What Ramachandran did was perform the exact same experiment Prof. Heer did, but with people with number-color synaesthesia, and he found the same "pop-out" effect, which was evidence that:

a) these people weren't faking it and weren't insane, and

b) it was a sensory/perceptual effect rather than a cognitive one.

The paper can be found at http://www.imprint.co.uk/rama/synaesthesia.pdf starting from page 7 and it's a fascinating read!

chanind wrote:

I always find it fascinating how optical illusions can give insight into how human vision works. Studying computer vision and trying to create algorithms that perceive images the same way humans do has made me realize just how incredibly powerful the human visual system really is.

For example, in different lightings colors appear completely different in terms of RGB value, but humans are able to distinguish colors across all sorts of lightings with no issues whatsoever. As far as I know there are still no algorithms in computer vision which can accomplish this consistently. However, as a result, it's easy to "trick" the human visual system by showing colors which have identical RGB values but appear completely different due to lighting or other changes which are common in the real world, such as in the examples @abless posted.

schneibe wrote:

This quarter I am also following a class on visualization given by Prof. Dan Schwartz (a cognitive psychologist) in the School of Education. He mentioned a few things about the perceptual system that may be of interest here. First, our perceptual system is for action. It helps us structure the world in a fast and effortless way (no cognitive load). Because it needs to be efficient, it is also a deterministic system; it provides us with only one representation of the world. The consequence of this "effortless structure" is that we cannot ignore optical illusions (a good example is the Necker cube: people cannot see both possible cubes at once - http://en.wikipedia.org/wiki/Necker_Cube). This website (http://www.michaelbach.de/ot/) is also a gold mine for optical illusions. The verbal system, on the other hand, admits ambiguity, because people can interpret textual information in many different ways. Another consequence is that the perceptual system (in the pre-attentive stage) provides enough information for people to formulate pre-interpretations; those pre-interpretations are usually strong and difficult to override. Thus, in most cases an efficient visualization should avoid giving a wrong first impression, because it is difficult and effortful to recover from a wrong pre-interpretation.

netj wrote:

I really enjoyed reading the Change Blindness part and testing the examples on myself. It instantly reminded me of the "Awareness Test", a public ad in the UK I saw some years ago. It was disappointing to see that exact video at the end of the class, because I had been so excited about sharing such an amazing video via my comment. Anyway, I found some other interesting videos about change blindness, as well as some more examples with two images, that are still interesting.

In class, I remember discussing how change blindness may affect interactive visualizations by making it hard for some people to see highlights, so a possible solution could be using transitions. But I was surprised to see Gradual Change Blindness, similar to the example @mbarrien already shared. It shows our vision is also blind to changes that are individually small but significant when accumulated over a long period of time, which is somewhat opposed to the proposed solution. I think this teaches us to be careful not to trigger this blindness when we design animations or transitions to show important changes over a long time, especially in a large or dense visualization. But it might be hard for authors to determine whether the animation they produce has this problem, since all their attention will already be on the changing part. I wonder if there's any quantitative analysis of this problem, which might allow us to automate the detection someday.
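
As one guess at what such automated detection could look like (an assumed approach, not an established method): compare frames far apart in time, since adjacent frames may each differ imperceptibly while the accumulated change is large.

```python
# Score an animation for "gradual" change: small frame-to-frame difference
# but large difference across a wide gap. Frames are numpy arrays.
import numpy as np

def mean_frame_diff(frames, gap):
    """Mean absolute pixel difference between frames `gap` steps apart."""
    return [
        np.abs(frames[i + gap].astype(float) - frames[i].astype(float)).mean()
        for i in range(len(frames) - gap)
    ]

# A sequence whose gap=1 scores are tiny but whose gap=100 scores are big
# is a candidate for triggering gradual change blindness.
```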

pcish wrote:

The Healey site was a fascinating read with a lot of background information on what was covered in class. Some of the claims it makes appear to be disputable, though, such as terminators and lighting direction being preattentively processed attributes, or the effectiveness of their color selection technique. It was also interesting to note that all of the theories about our visual system come from psychological studies that treat the visual system as a black box, and that there are no biological studies (conducted / presented) that are able to provide better insight into how the system works.

The presentation slides on Gestalt gave an overview of the topic and many examples. However, the material is light on explanations, which made some of the examples hard to understand; I feel a more detailed treatment of the topic would be fascinating to read.

yangyh wrote:

I found the lecture yesterday really amazing and useful. Knowing preattentive properties can definitely help our visualizations be perceived more effectively. I also played around with the Java applet example on the Healey website, and noticed the significant difference in the time it takes our eyes to spot the target. Color is indeed a very effective encoding at first glance; however, as abless suggested (with great examples), color can be deceiving sometimes. It's just interesting to know how easily human eyes can be deceived or misled, and I think from now on I will start doubting what I see!

As for change blindness, I also want to share a video which I found impressive: http://www.youtube.com/watch?v=Hu2oo4Zp0Ao I think this example supports Healey's perspective that once the difference is found, it becomes obvious and impossible to ignore. Interesting!

junjie87 wrote:

While we are on the topic of illusions, there's one more called the vertical-horizontal illusion. Basically, humans perceive vertical lines to be longer than horizontal lines of equivalent length. What's more amazing is that even if you know about it, you can't help but think the two lines are different!

I think this has some implications for data visualization: We should not use angle/length to represent multiple dimensions. I'm not sure what we would call it ("misleading"?), but I feel this is different from the filtering interference that was mentioned in class.

arvind30 wrote:

I really enjoyed reading Tufte's chapter on the use of negative space. It's a topic that's interested me for a very long time and I especially enjoy seeing it used in the mainstream. Some of my favourite examples involve logos of companies - FedEx's arrow, WWF's Panda and NBC's peacock.

While the use of negative space can be powerful, Tufte seems to take it for granted that it immediately jumps out at people. Personally speaking, I didn't notice the FedEx arrow or even the NBC peacock until someone pointed it out to me (this is especially true of the peacock, which I've only started seeing in the last year or so!). This is also true for some of the examples Tufte presented: as I got further into the chapter, I started to analyse the image examples before reading the text, because sometimes I would notice the noise/vibrations/negative-space usage only AFTER he described it (and then I couldn't "unsee" it). This was most true with the 1+1=3 examples with the strips.

dsmith2 wrote:

I thoroughly enjoyed the lecture portions pertaining to perception and our failures therein. It was fun and surprising to see different ways in which images could fool the eye, but I'd like to take a step back to think about how these ideas can help design more effective visualizations.

The idea of how it is easier to compare when images are flashed one after another (as in the sphinx example) is useful in that it makes one realize that placing two graphs side by side often doesn't emphasize change in a way that most effectively utilizes our perception.

This seems to make quite a case for the value of animation in visualization. It brings me back to the example of the census data visualization shown in class week 1: we were able to get a map of so many dimensions for one slice of time, and animating/overlaying the changes in a way that didn't simply juxtapose time slices allowed for a unique and EFFECTIVE visualization of changes through time.

In terms of some of the visual illusions people have been posting in response to lecture:

@ardakara's and @abless' examples show how human perception of both size and color/value can be skewed by surroundings, which skews our ability to compare. This, while seeming gimmicky, is also useful for thinking about visualization because it shows where some visualizations can deceive the human eye.

I was initially somewhat confused as to how optical illusion games could inform visualization design decisions, but when you reframe optical illusions as inaccuracies in human perception, they become critical to understand when trying to make use of human perception to encode information.

aliptsey wrote:

@pcish We did only read papers referencing psych studies where the visual system is treated as a black box, but there are plenty of interesting neurobio studies on the biology of visual processing (i.e., how rods and cones in your eye process color, how edge detection works); they are just even more low-level than the studies we are looking at.

I agree with the comments above that it would be really interesting to understand more about gestalt principles, and how they interact with the low-level preattentive processing. Relevant for the class, it would be useful to understand these interactions in more depth. I keep thinking about the use of trendlines (which would probably be described by the gestalt principle of continuity), and how they affect the importance of the variable encoding of the data.

Some really basic gestalt principles, illustrated: http://graphicdesign.spokanefalls.edu/tutorials/process/gestaltprinciples/gestaltprinc.htm

kpoppen wrote:

The one thing that stuck out to me from the reading, apart from what others mention above, is the huge variance across users in the perceptual data that Cleveland and McGill collected. On top of the fact that our perceptual system does a number of very unintuitive things to our processing of various stimuli (as many have noted and added to above), we don't even all respond to things in the same way. Although this complicates the art / process of visualization, I think the real takeaway is that the easiest way to create an effective visualization is to keep it as simple as possible (as Tufte espouses as well). By simple I don't mean so much not displaying things as only displaying what truly matters, with an encoding that directly and effectively encodes the data that you're trying to visualize. I suppose that's much, much easier said than done, but as Cleveland and McGill point out, there are a number of things you can do that are asymptotically better in this regard without really complicating the task much. I just wish people like whoever makes the infographics for USA Today (for example :-) would read some Tufte and save us all some pain.

jofo wrote:

Like many of the other participants, I'm fascinated by how our visual/perceptual system can trick us. Reading through Healey, I keep coming back to the question of what we do with this knowledge and these examples. What impact on design does it (or should it) have? I found the discussion in class of compensating for size underestimation useful, but it seems like the community has not really decided on best practices here. I would like to see more of "the perception system works like this and this, and thus you should ...". Sometimes there is also a conflict between design suggestions, such as small multiples vs. overlay (or usage of the time dimension).

The US election results visualization in the Healey article tries in my view to fit too many dimensions in one picture. The reader has to carefully read the textual description to understand how it works.

The section on shapes vs. color made me think of chess "visualization": a board position is commonly visualized using a layered grid of two colors, with varying form (piece figures) and hue (piece color). Could there be other ways to represent the same data (the configuration of pieces) that are more effective, perhaps for certain tasks? It would be interesting to explore.

Btw, @zhenghao thanks for interesting comment about Synesthesia!

jneid wrote:

One amazing realization I have gotten from this class is the significance of GRID LINES. I had never really paid much attention to them before, but they make a big difference. My favorite quote from today's Tufte reading was his calling the marshalling signal table an "information prison". :) It's true: line separators are meant to distinguish the data, not to distract from it or to isolate the different parts of the data from relation to the other data. To this effect, we have seen that lighter, thinner grid lines suffice, if any are necessary at all. The example used when we were discussing maximizing data density even had negative grid lines of white space!

Multiple topics from perception also relate to grid lines or separation lines in visualizations. I would expect a "Just Noticeable Difference" to be the perfect display of a grid, so that it is useful but does not attract attention. It is also important to avoid using lines of varying length, slope, and angle UNLESS they are meant to convey specific information, especially since pre-attentive processing may cause the viewer to focus on these aspects. Perhaps a good way to create implicit lines is with gestalt grouping, so that the data elements themselves form lines. Finally, it is important to avoid accidentally creating lines or effects (like a moiré effect) which could be distracting.

I feel like grid lines tend to hold on to data, like my dog holding on to his favorite toy or like Vanna White, showing themselves off rather than showing the data. Instead, they need to hand you the data.

sakshia wrote:

The Healey paper provided a good lens for reading the Cleveland paper; however, in and of itself, the theory was confusing and a bit 'theoretical' at times. Tufte was more instructive, while still referencing perceptual work, and a smoother read. It's interesting to see how various researchers are trying to formalise a system surrounding visual perception (Healey) and graphical perception (Cleveland). The illustrations and proposals are supposed to be models of reality, and it's strange to see different ideas related to 'reality' (it starts to make one question the other 'scientific' models we learn and accept, particularly those at a level we can't physically perceive). Of the various theories proposed, the boolean map theory was one of the more understandable and concrete. I appreciated that after presenting a theory, Healey would draw out a conclusion that one could apply to visualization. It was great to see the use of experiments in Cleveland's paper, and also to see concrete suggestions for redesigning graph forms. I liked the idea of proposing these perceptual tasks, hypothesizing an ordering, and then using experiments to test it. I wonder, however, if the distinction between the tasks can get muddied?

bsee wrote:

It seems like this page has become an optical illusion page. :P

I really liked the paper about perception in visualization. I have always found the human visual system fascinating, and that paper really helped me understand why people perceive things as they do.

When we were doing the color/shape exercise in class, I was reminded of one of the exhibits in a science center. It was basically an experiment to teach children what visual cues humans are most sensitive to. Everyone should give this a try too. I managed to find the instructions on this website: http://www.glassescrafter.com/resources/test-your-peripheral-vision.html .

Although the test is about peripheral vision, it really says a lot about how the human visual system works. You would realize that most people tend to pick out color first, then shapes, and lastly words. It seems ingrained into the human visual system to notice changes in color, especially relative to the surroundings. (Which is why the optical illusion posted above works.) Some argue that it is because our ancestors depended on this visual cue to detect prey/predators for survival. Even though the real reason has never been settled, it is cool to know what visual cues human beings are most sensitive to.

Also, I have a feeling that Stevens' power law can somehow be explained by how our ancestors needed those sensory cues to survive. For example, I would imagine that shock has such a high exponent because most shocks that human beings experience are signs of danger, so the body naturally developed increased sensitivity towards shock for survival. This is just a wild guess, so it would be cool if someone could give a concrete explanation of why the power law works the way it does. :)

I guess this is why understanding the human visual system is so important in creating awesome visualizations!

jsadler wrote:

I was surprised by the "Stevens Power Law" graph showing how our different senses might over- or underestimate stimuli. I work a lot with prosthetics for my research, and we frequently have a hard time deciding "what is the best mode to give feedback to a patient". For example, for a patient missing an arm, what kind of mechanism do we use to encode limb position feedback to the patient?

People often try separate or combined haptic, auditory, and visual feedback for "proprioception", the sense of where a body part is spatially over time. Looking at the power graph shown below, I wonder if the undistorted nature of length encoding can be applied in visual feedback mechanisms for devices such as prosthetics, robotics, telerobotics, etc. This may extend beyond the topic of data visualization a bit, but I think it is interesting to think of how the best "psychophysical sensory practices" we are uncovering may be applied to other fields....

On another note I personally am looking forward to a data visualization that I can smell...

[attachment: Stevens.png (Stevens' power law graph)]

stubbs wrote:

It would be interesting to use (and measure) these same aspects of perception in the service of temporal operational control of the viewer (akin to Tufte's micro/macro readings or Treisman's serialization of feature detection), thereby intentionally serializing the dimensions of data not by position within the spatial substrate, but by feature and/or elementary task.

Healey's hierarchy of features, Cleveland's hierarchy of elementary tasks, and Tufte's subjective aesthetic inferences are presented by their respective authors as explicit and rather strict 'best to worst' guidance for building visualizations that lead to efficient and accurate interpretation/perception. However, instead of using these as absolute guides (albeit useful ones for avoiding blatant mistakes), why not leverage these principles to tell a story over time?

As @schneibe noted, preattentive features give rise to a visceral impression which is difficult to override; thus, one may wish to ascribe preattentive features to higher levels of abstraction or key, underpinning concepts. Or, as suggested in Boolean map theory, in which we sequentially partition sets of objects and subsequently intersect them, we may want to ascribe visual encodings that require easier feature-recognition tasks to higher 'layers'/dimensions of data, and slightly more abstruse representations to lower layers of data, to guide the viewer's set-building cadence in uncovering levels of detail. Indeed, we intentionally introduce "inefficient" features; however, this gives the designer control to tell a temporal story in a fixed space, "possibly matching an ordering of information content".

A side note: I once studied animal behavior, and I believe the way a frog decides to catch a fly has been measured to be a simple flipping of binary switches in a deterministic order: 1. size of object, 2. movement of object, 3. shape of object, 4. brightness of object, etc.

bgeorges wrote:

I found the portion of lecture covering redundancy gains to be very interesting, and in particular, the way that different types of dimensions interact to speed classification. The graphic on the lecture slide entitled "Summary of Integral-Separable" (reproduced below for convenience) will be especially useful to me when working on visualizations in the future, although I found it somewhat surprising that it did not include dimension pairs like size|color (which I used during my exploratory data analysis). I would say that size|color falls somewhere in the middle of the integral-separable spectrum.

[image: "Summary of Integral-Separable" lecture slide]

fcai10 wrote:

Having seen some of the perception results (pre-attentive processing, change blindness, Stevens' power law) in another class (Psych 205, Cognitive Psychology), it was very satisfying to see them applied to the design of effective visualizations.

While the Healey paper was a good overview, I did not find it as engaging as the other readings. It spent a lot of time explaining the different models of how pre-attentive processing works; I was not sure how these theories can be verified (this seems to be out of the realm of psychology; the ball may be more in biology's court, perhaps with the recent advent of optogenetics). However, it devoted only the shorter last sections to how these psychology results can be applied to visualizations. To its credit, the Healey paper did have interesting Java applets.

The Tufte reading interested me more, as it has throughout the class so far. I find that readings about visualization benefit especially from having, well, visualizations, and the examples presented were very good at making the case for using color to separate data, using lighter gridlines, and freeing data from the tyranny of tables.

blouie wrote:

I found the discussion surrounding negative space really interesting, but I can definitely understand how it's not immediately intuitive to everyone. I think this might be because our mental models of a visualization can only really suggest one perception at a time, so it's hard to let both the positive and negative images coexist in our minds. Regardless, human perception is tricky business. It's hard to develop visualizations that are universally compelling and informative.

I, too, am interested in the whole discussion about Stevens' power law with regard to sense perception. How could it be used to make "visualizations" more interesting? (The quotation marks are there since the involvement of other senses would imply that these are no longer strictly visualizations.) Could things that we know are perceived at higher magnitudes per unit of sensation be used to make perception of certain depictions of data more acute?

tpurtell wrote:

The low level optical illusions are all very neat. They imply a need for scrutiny of a visualization based on the initial visual impact for people not familiar with the data. It's easy to overlook an unclear graphic because the creator already knew where to look. It seems like the biggest implications will be for animated sequences. Things in the periphery just can't deliver as much information from motion as we naturally expect.

I was curious whether the color of the flashed solid background made a big difference in the change blindness test, so I made a little experiment out of it. You can try it out here if you want. For that example, yellow seems to clear my visual scratchpad faster than blue. That makes sense given the opponent color processing native to the visual system.
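
A minimal version of such a flicker experiment can be put together in matplotlib; the images, timings, and flash colors below are arbitrary placeholders:

```python
# Flicker paradigm: alternate two nearly identical images with a solid
# "flash" frame in between; vary the flash color to test its effect.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
img_a = rng.random((50, 50, 3))
img_b = img_a.copy()
img_b[20:25, 20:25] = [1.0, 0.0, 0.0]  # the single change to spot

flash = np.ones_like(img_a)
flash[:] = [1.0, 1.0, 0.0]  # yellow flash; swap in blue to compare

fig, ax = plt.subplots()
ax.set_axis_off()
for _ in range(20):  # A, flash, B, flash, ...
    for frame, dt in [(img_a, 0.6), (flash, 0.1), (img_b, 0.6), (flash, 0.1)]:
        ax.imshow(frame)
        plt.pause(dt)  # flash held ~100 ms, as in typical demos
```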

kchen12 wrote:

When read in conjunction with the previous Micro/Macro Readings chapter, the Tufte reading offers a variety of ideas on integrating visualization of high-level trends and low-level specifics. However, I wish that he had presented more examples of tactics for presenting different layers of varying data entities/data types, as opposed to focusing on examples of layering visual points, e.g. data points upon light grids or train schedules upon lightly colored alternating boxes.

The "1+1=3 or more" principle was an interesting way to reframe the idea of negative space existing as its own entity. I found that my eyes seemed less tuned to allowing the white space to become its own shape, although I'm unable to measure their visual fatigue. Tufte suggests not boxing type, to prevent the unnecessary creation of white space, but I feel this is effective primarily because it helps reduce visual clutter.

Finally, many of the principles introduced in Healey's paper and in lecture (pre-attentive processing, change blindness, boundary detection and grouping, etc.) were principles presented in Psych 55: Intro to Cognition and the Brain. I really appreciate that we are going beyond surface-level principles of color choice and placement and trying to understand the cognitive reasons why certain techniques are more effective than others.

phillish wrote:

I enjoyed reading about how our sense of perception can easily be tricked by optical illusions. It brings up another layer of complexity to consider when designing visualizations. Are the design choices (e.g. color, spacing, texture, ordering) implicitly sending a wrong message? Understanding Gestalt principles is a good way to double check for unintentional groupings that could misrepresent data to the audience.

The law of Prägnanz, as proposed by Gestalt, was definitely an interesting topic. The brain can fill in gaps to visualize entire shapes. While this allows our minds to better envision a scene, such as the shape of a park bench that is partially occluded by a tree, these mental assumptions can sometimes guide our sense of perception in the wrong direction. For example, the bench could actually be two short benches that stand side by side. If a tree covers the middle seam, our mind would see one single long bench.

babchick wrote:

I have to agree with @kpoppen in that, at the end of the day, the best way to internalize and practice the heuristics suggested by Tufte in "Layering and Separation" is to boil your "ink" down to only the data that actually needs to be interpreted; everything else is meaningless at worst, and at best, serves only an auxiliary role in focusing our attention. Yes, there are other important considerations such as color and saturation to take away from the reading, however, I feel like these skills are an acquired taste more than they are a firm set of teachable principles.

In regards to lecture, pre-attentive visual processing seems to have a lot of potential in building real-world monitoring systems. Infrared security cameras are a good example of a tool that exploits this phenomenon, but it seems like there's a lot of opportunity to build other types of feeds that can be mindlessly browsed while highlighting the interesting data. I suppose one reason why this is not the case is that in order to figure out which data points to pre-attentively highlight, you need to be able to identify these outlier data points on some ordinal scale, and if that's possible, there probably is no point in asking someone to scrub through a haystack when a computer can find and present the needle. Like most of these unintuitive optical illusions, it seems to be more of a mesmerizing design flair than a useful perceptual hack.

cmellina wrote:

This discussion has been interesting. One thing that came to mind was this article, about how EEG signals have been used to speed image search. One thing that has come up in this discussion is that "visualization" as such is targeted at vision, but we have other modalities and they at least provide the opportunity for encoding dimensions of the data as, e.g., texture or sounds. Although the EEG image search does not leverage another modality, it does leverage biological signals to make best use of vision. Is it an example of visualization? Or should it be taken as an activity in a broader class of ways humans interact with computerized data?

mkanne wrote:

As Tufte said, 1 + 1 = 3 is truly a visual rule that has been exploited by designers in almost every visual discipline. I found it interesting, however, that his examples in this chapter gravitated towards the use of this principle in maps and tables, when so many other types of visual data display make good and bad use of separation/negative space. The one non-map example I did enjoy was the box-and-whisker plot redesign, which hit on the head some of the issues we were complaining about two classes back when trying to find the best way to visualize Professor Heer's univariate data from mTurk. Using a box to show the middle 50% adds a second dimension to the representation which encodes no data and causes perceptual confusion.

I understood the Cleveland paper's argument that certain types of plots are more useful in most situations than others, and that we should prefer them when conditions are right to provide the most accurate perceptual decoding experience, but I disagree. This really goes back to Tufte's chartjunk arguments about showing the data and not providing "visual interest". I believe that we must balance visual interest with data display (but obviously never erroneously display data). If this means that we branch out from the few designs advocated by the paper, I see no problem, especially if it leads to creativity in data display.

rc8138 wrote:

Having taken an optometry course as an undergraduate student, many of the optical illusions had already been introduced to me, but not in the context of data visualization. This lecture was both entertaining and educational, and it allowed us to see how optical illusions play an important role in creating effective visualizations.

"Stevens Power Law" was quite interesting, and it make a lot sense how we tend to underestimate our sensation for area, volume, and brightness while overestimating our senses for more drastic stimulus. Makinlay's ranking of encodings is also very insightful, and gave me a better understanding on why some encodings are better than the others.

I was particularly interested in the concept of using "redundancy gain" to speed classification. I find myself often using redundant encoding to serve this exact purpose, but I wonder if this "redundancy gain" violates Tufte's minimalist principle? I think it goes back again to my previous point about whether the visualization is more about persuasion or data analysis. If it is the former, then redundancy gain could be a great tool to use.

jhlau wrote:

I'm very interested in the idea of preattentive processing, because I do believe that most of our perception happens unconsciously. I think one of the most interesting areas in perception research has to do with exploring the limits between unconscious perception and conscious perception. Another interesting question is how our methods of perception developed. For example, the human brain has been shown to be incredibly adept at recognizing and reading anything that looks remotely like a face. I wonder if our preattentive processing abilities have any such interesting story behind them.

Also, I deeply enjoy this quote from the Healey paper: "The goal of human vision is not to create a replica or image of the seen world in our heads. A much better metaphor for vision is that of a dynamic and ongoing construction project, where the products being built are short-lived models of the external world that are specifically designed for the current visually guided tasks of the viewer." I feel this is absolutely spot-on with regard not just to visualization, but to any type of art. For example, one of my favorite films is Hero, because of how focused it is on creating an effect using visuals instead of trying to recreate reality. I find this type of filmmaking to be much more effective. See here for an example: http://www.youtube.com/watch?v=LDagtOmhkGo&feature=related. Obviously the people can't fly and the leaves/air don't turn red in a matter of seconds, but it paints a much more vivid picture in the viewer's mind.

These ideas also remind me of the human scales of perception, which I believe should inform design decisions that may seem very far away from the field of visualization. For example, I was driving the other day and noticed that my car's AC seemed to be on a linear scale: the 8 settings from low to high seemed to just let the cold air through in proportion to the setting (1 = 1/8, 2 = 2/8, etc.). To me, however, it seemed that I would want more granular control at the lowest settings, because my perception of cold is similar to my perception of pain (for example, 2x the cold feels like 4x the cold), so I would prefer my AC to be on something like an exponential scale.
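
In numbers, the two scales might compare like this (the exponential base is an arbitrary illustrative choice):

```python
# Linear vs. exponential mapping of 8 AC settings to airflow fraction.
linear = [i / 8 for i in range(1, 9)]              # 0.125, 0.25, ..., 1.0
exponential = [2 ** (i - 8) for i in range(1, 9)]  # ~0.008, ..., 0.5, 1.0

# The exponential scale spends most of its settings below 25% airflow,
# giving finer control where small changes are felt most.
for lin, exp in zip(linear, exponential):
    print(f"{lin:.3f}  {exp:.3f}")
```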

insunj wrote:

The first half of the lecture wasn't too new to me, but I really enjoyed the second half on visual processing, as it was extremely helpful and practical. By understanding what humans can and cannot detect well, a visualization can be designed effectively, finding the right balance in the amount of data shown. We see these issues arise in real-world visualization problems. For example, when representing differences across states on a map, we know color or size is a great way to do it, but we don't want to encode with more than one element, because decoding is expensive brain work. I enjoyed Cleveland's paper, as it dealt with practical issues that come up frequently and proposed various design choices!
