
Lecture on Oct 5, 2009. (Slides)

Readings

  • Required

    • Perception in visualization. Healey. (html)

    • Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods. William S. Cleveland, Robert McGill, J. Am. Stat. Assoc. 79:387, pp. 531-554, 1984. (pdf)

    • Chapter 3: Layering and Separation, In Envisioning Information. Tufte.
  • Optional

    • Gestalt and composition. In Course #13, SIGGRAPH 2002. Durand. (1-up pdf) (6-up pdf)

    • The psychophysics of sensory function. S.S. Stevens. (pdf)

Comments

fxchen wrote:

The Healey article was fascinating in giving a brief overview of the different theories of preattentive processing, postattentive processing, feature hierarchies, and change blindness. The presentation of each of these topics was particularly enlightening, both explaining the theory and giving the reader an example of it in "action".

The subsection I found particularly intriguing was the one on change blindness theory and practice. It usually took me a full minute (or sometimes a few minutes) of concentrated effort to actually find the change in a few of the examples. On the takeaway from subjects failing to perceive change, Healey writes: "prior expectations cannot be used to guide their analyses. Instead, we strive to direct the eye, and therefore the mind, to areas of interest or importance within a visualization." Tufte emphasizes a similar point throughout Chapter 3 by stressing the importance of forming relationships between different data representations to guide a reader's train of thought.

As a sidenote, I cannot help but wonder how these theories factor into the visual portion of IQ tests. IQ tests have visual portions that require finding feature hierarchies or identifying changes to a scene.

nmarrocc wrote:

So we have all this theory about separable and integral dimensions, Weber's law, pre-attentive processing, etc. -- a toolbox of visualization techniques. Can we now use these things to automate the creation of graphs? If so, then I think some interesting questions would have to do with trade-offs between the various techniques. For example, suppose our theory tells us that position is less error-prone to decode, but some other theory tells us that using length can make things easier to understand by exploiting intuition. How do we decide which element to use? Also, maybe we could build graphs using tags so that computers could more easily build and critique them. We could build a graph and tag all of its elements, so the program knows what shapes and colors we're using, where the data points are, where the lines are, and so on. The program could then run through a checklist of theory and let us know what elements we could change and why.
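
A minimal sketch (in Python) of what such a tag-and-critique tool might look like; the tag format, rule names, and thresholds below are all invented for illustration and are not taken from the readings:

    # Each chart element is described by a dict of tags the program can inspect.
    # Cleveland & McGill's rough ranking of encodings by decoding accuracy.
    ACCURACY_RANK = ["position", "length", "angle", "area", "color_saturation"]

    def check_encoding_accuracy(elements):
        """Flag quantitative fields encoded with low-accuracy channels."""
        advice = []
        for el in elements:
            enc = el.get("encoding")
            if el.get("data_type") == "quantitative" and enc in ACCURACY_RANK:
                if ACCURACY_RANK.index(enc) > 1:
                    advice.append(f"{el['name']}: consider position or length instead of {enc}")
        return advice

    def check_hue_count(elements):
        """Too many distinct hues defeats preattentive 'pop-out'."""
        hues = {el["color"] for el in elements if "color" in el}
        return ["too many hues for preattentive discrimination"] if len(hues) > 7 else []

    def critique(elements):
        report = []
        for rule in (check_encoding_accuracy, check_hue_count):  # the "checklist of theory"
            report.extend(rule(elements))
        return report

    # Example: a quantitative field encoded by area draws a suggestion.
    chart = [
        {"name": "sales",  "data_type": "quantitative", "encoding": "area",     "color": "red"},
        {"name": "region", "data_type": "nominal",      "encoding": "position", "color": "blue"},
    ]
    print(critique(chart))

Each additional perceptual finding (Weber's law thresholds, separable vs. integral channel pairings, and so on) would simply become another rule function in the checklist.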

jieun5 wrote:

I feel that maps are perfect examples that must employ effective layering of information and avoid "1+1=3 or more" effects that arise from unintentional interaction of elements on the flat surface. There are many web-based maps available-- with interactivity and options to turn on supplementary layers. As a map-lover, I was curious to take a quick look at three different kinds (Google, MS, Yahoo) to examine the extent to which they are successful at adhering to principles in visual perception.

Perhaps I'm biased as I'm a Google Maps user, but I feel that this map has many strengths. As shown in the screen capture, the color contrast is decent, such that no one feature pops out because of color. I like the slight white outline around labels, so that the letters do not get confused with the roads in the background. Finally, I really like the addition of optional supplementary layers (for photos, videos, Wikipedia, webcams, and transit under "More"). By the way, it is interesting how heavily Google relies on line width to denote road types, such that highways are substantially thicker.

Microsoft's Bing Maps probably has the best use of color contrast among the three; it is easy on the eye. Unlike Google, it relies more on color intensity to denote road types, such that highways are darker orange; I like this better because it minimizes distortions in road widths. However, Tufte would likely criticize this map for its poor display of labels: not only do they clutter the map because there are no white outlines separating the letters from the background, but they are also forced to follow the shape of curvy roads (this reminded me of the labeling of the Rhein River in Tufte, p. 62). I do like the use of bold for city names, though, which is in my opinion less visible in Google Maps and Yahoo Maps. Finally, though this map also features interactivity, it has more to do with different views of the map (Aerial, Bird's Eye, 3D) than with added layers of information.

Lastly, Yahoo Maps made the worst first impression on me, simply because of its strong color contrast. Less important features, such as bodies of water, seemed to stand out too much in bright blue, and highways stand out in bold red. Apart from these, however, this map does a good job of using various shades of pastel colors to convey different types of land regions. Word labeling is also clean, with white outlines. This map offers the fewest extra-layer (traffic) or extra-view (hybrid, satellite) interactivity options.

vad wrote:

Here is a cool (and very relevant) way to win money and beer off people down at the pub, based on people's flawed perception of curved distances.

http://www.youtube.com/watch?v=Cj1huB36eXk

cabryant wrote:

Two Thoughts:

Josef Albers's "artistic proof" that 1 + 1 = 3 underscores the importance of understanding psychophysics (and, in the absence of recorded evidence, aesthetics) in creating effective, layered images. One lesson from the field of aesthetics (curiously, not mentioned explicitly in any of the course readings) is the interaction between "warm" (red, orange, yellow) and "cool" (green, blue, violet) colors. The former tend to advance to the fore of the perceptual field, while the latter recede in contrast. This effect helps explain the added dimension in the IBM Parts Manual diagram, where the red-colored annotations are not only separable from the black-colored figure in two-dimensional space but also create the illusion that they lie in a plane above that of the figure.

The presentation on Change Detection from lecture invites comparison with Small Multiples from the previous readings. Although changes between images may be detected more readily if they are presented in sequence in the same visual space, comparisons between non-sequential images become increasingly difficult due to limitations on working memory. The decision of which to employ, as with all encodings, depends on the nature of the data and the types of comparison the presenter wishes to elicit.

wchoi25 wrote:

At the end of this week's Tufte reading is a quote from Kepes's "The Language of Vision" that highlights eastern visual cultures' use of empty spaces to direct the viewer's eyes to different places on an image at varying speeds. I thought this was particularly interesting in light of the perception theories put forth in the Healey paper. How can complex visualizations best portray information that cannot be pre-attentively processed? Pre-attentive processing is powerful in that certain patterns or even a single outlier can immediately be recognized and draw the viewer's attention for further study. But for visualizations that rely on post-attentive search, the challenge is to figure out a way to direct the viewer's "viewing experience": where should the gaze be headed, and in what order? Where should the gaze fall for a shorter or longer duration? A complex but good visualization should be like exploring a museum with a good curator, where attention is naturally drawn to interesting features in an order that makes sense and facilitates one's understanding of the whole. I think a good example is the Napoleon's march visualization, which clearly affords a visual tracing of the eye along the path, focusing on the thicker parts, the branching parts, etc.

joeld wrote:

I really liked Healey's web page, first because of the dramatic insight that preattentive processing is what makes visual display so compelling, but also because of the thorough summary of how the different visual elements interact. Treisman's model seems to have the most promise in this respect. For example, the difference between the time it takes to find a single element and the time it takes to decide that an element type is not present is very interesting, especially when compared to Mackinlay's feature orderings. The main point is that the prominence of a feature is closely tied to the distribution of the data and the questions being asked of it.

aallison wrote:

If there is one thing I've learned, if your data is awful, just blink a grey screen in between each chart you are displaying.

I think the sections on pre-attentive processing have big implications for how we design multi-dimensional data visualizations. First of all, we can take advantage of these properties to obtain the macro/micro aspect of visualizations that Tufte encourages in his writing. The features that are easiest to perceive (those that are pre-attentively processed) should be used to show macro relationships. Within those groups, we can then use other features to give a more in-depth view of the data, providing the micro view. Large relationships should be easy to see, while we'd like to keep the "noise" of the micro data out of our perception unless we *want* to analyze it.

This section has also made me wonder how many variables you could vary in a small-multiples display while still achieving a good visualization. Could you take advantage of pre/post-attentive processing to vary multiple variables in each one? Perhaps you could vary a "macro" and a "micro" variable in conjunction without creating an interference effect.

gankit wrote:

I find Stevens's experiments and his derivation of the power law for our ability to measure differences in various types of information very informative. Although it is questionable whether one would be able to repeat the experiments and get the same results, the general progression of the power law exponents gives a rough estimate of what works and what doesn't.

Looking more deeply at the power law exponents, I realized that there is a distinct pattern in them: things that affect us negatively tend mostly to be overestimated! For instance, shock, saltiness, and warmth are overestimated. Quite natural as well.
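
For reference, the relationship being described is Stevens' power law, which relates perceived magnitude to physical stimulus intensity; the exponents quoted below are the approximate values commonly cited from Stevens' experiments:

    \psi(I) = k\,I^{a}

Here ψ is the perceived magnitude, I is the stimulus intensity, and k is a scaling constant. Exponents a < 1 compress differences (brightness, a ≈ 0.33), a ≈ 1 is roughly veridical (apparent length), and a > 1 expands differences so the stimulus is overestimated (electric shock, a ≈ 3.5), which is exactly the pattern noted above.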

gankit wrote:

Something interesting is that in all the theories of preattentive processing, the task is usually to decide whether an object with feature X and/or Y is present (e.g., is a red circle present?). Even in the final model of boolean maps, one is expected to construct a certain boolean map based on the given task.

But with real visualizations, the user is simply exposed to the data graphic and, in most cases, is not given a specific task. What would be really interesting is to figure out what kinds of feature differentiations a user can "pick up" instantaneously without any prompting. Or else, we could embed prompting cues in the visualization to bring out the questions the author of the data graphic wants the user to ask.

bowenli wrote:

Interesting ted talk on perception: http://www.ted.com/talks/beau_lotto_optical_illusions_show_how_we_see.html

I feel that this talk, the paper, and the subject of preattentive features are a bit frustrating in how case-by-case they are. It is difficult to make any kind of quantitative conclusions from these studies that will aid in creating visualizations. It seems to be the case, as often in HCI, that the answer is "it depends." It seems that an automatic visualization generator will never even come close to being as good as someone designing by hand.

Postattentive: It is an interesting conclusion that "learning" a visualization doesn't really help you search for a value, since I'm sure most people would like to believe that it would help them. Also, regarding the traditional vs. postattentive comparison, I wonder if the results have anything to do with switching from left brain (words) to right brain (images) in one case and making the opposite switch in the other. It would be interesting to see a language search, or a visual primer, in either case.

Change blindness: These were pretty fun. I probably spent 2-3 minutes on each one. Incredibly difficult when you don't know what the change is. I tend to agree with the first-impression hypothesis. I tested myself and found it was somewhat easier to see the change if I was further back and took my glasses off. This made me recognize things less, so I was more attuned to what was actually happening rather than what was stored in my mind (which was nothing, if I couldn't recognize the symbols in the image).

Tufte, Ch. 3: It seems the traits he talks about in this chapter can be applied to other types of design as well. Placement of type with respect to other objects, making separation lines visually lighter, etc. seem to be taken to heart in many web designs. It would seem that these guidelines can be applied recursively, to larger sections of a graphic as well as to individual elements.

cabryant wrote:

Regarding bowenli's comment on Change Blindness: it may be that the suggested strategy was more effective because it involved greater rod-based perception rather than that of cones. Rods are more sensitive to environmental changes, require less light to detect visual cues, and more effectively detect motion and directionality. Staring intently at the change blindness images would favor foveal processing, which is ill-equipped for detecting rapid changes.

vagrant wrote:

I was a big fan of Healey's exploration. I felt that it covered a spectrum of topics from a psycho-scientific perspective that encouraged me to experiment with the way I present my information. It is simply a matter of learning how to do any of that. However, as Bowen mentioned, it is difficult to say if a systematic algorithm capitalizing on these studies could ever reliably result in better data visualization software.

Tufte's chapter on layering convinced me to try a little something with my second assignment: I tried adding mini-captions, and I think the result came out rather cool; we'll see how others feel. Honestly, looking at the examples in the text, I feel awfully tempted to try layering with any data visualization.

In particular, the change blindness phenomenon described was a good deal of entertainment, particularly in a room of friends. One of my dreams is to design video games, and my imagination was spurred by the demos provided.

dmac wrote:

I think one of my favorite observations about many of the visualizations we have seen so far is how daunting, cluttered, and opaque they seem at first, but upon closer examination (sometimes by literally moving your face closer to the page) the information in the visualization slowly reveals itself. Two examples that spring to mind are the train schedule and the medical chart on page 56 in Envisioning Information. I showed the train schedule to my friend and his first reaction was to wonder how anyone is supposed to be able to read it. After I explained some of the features to him, however, he thought it was really cool. This pertains very much to micro/macro, but it has a lot to do with layering and separation as well. Layers allow a reader to experience different views of the data while looking at the same visualization. They allow a reader to peer *inside* the visualization, layer by layer, extracting and comparing more and more information as each layer is read in turn. It hearkens back a couple chapters to when Tufte commented that simplicity is a design decision, not a quality of good visualizations itself. In fact it would seem quite the opposite: the best examples of visualizations we have seen so far are not simple at all, and often appear quite chaotic. Establishing the invisible framework upon which to hang the chaos is where the challenge lies.

alai24 wrote:

I enjoyed reading Healey's page, mostly because the images and interactive bits demonstrated most of the information in the text. A lot of the terminology alluded to a brain-as-computer analogy, where RAM gets 'flushed' by seeing gray and perception is separated into various maps. It's fun to think about the evolutionary roots of how we perceive things like bright colors, motion, and patterns. For example, fruits are usually bright, so it would be beneficial to quickly spot them in a forest.

In San Francisco, instead of the standard red brake lights, MUNI buses have lights that move outward like <<< >>>, which supposedly makes drivers more aware of when the bus is stopping. Supposedly the lights make it appear as if they were moving towards the driver, which has a greater effect than a simple red light.

akothari wrote:

jieun5's comments on maps published by the big three resonated with me. I am a big Google Maps user myself, and I hardly use any other map service. I wonder how much of that has to do with me just getting used to the service and its colors, and how much with the design actually assisting me better. It's harder to switch to a new set of colors and layouts once you get used to something. An example of this is changing Gmail layouts: a lot of different themes came out recently, making your Gmail interface look very fancy. I tried a couple, but found myself switching back to the original plain theme.

nornaun wrote:

What I really like from the reading is the concept of "1+1=3". I think I have heard that concept somewhere before but managed to erase it from my mind. It does not appear intuitive at first. However, if you think about it, it is so obvious that it makes you wonder why no one has thought about it this way before. I believe one merit of the data visualization field is to help people convey ideas more effectively through the data that they have. Isn't it intriguing to think that we have to learn to make things appear visually intuitive by using a concept that is not so intuitive itself?

cwcw wrote:

I love the layout of the Healey reading--he does a remarkable job of using visualization examples to enhance the understandability of the concepts he's outlining. The diagrams are visually appealing, but moreover they are simple. They highlight the concept of preattentive processing beautifully.

This goes hand-in-hand with the video clips (basketball passing; whodunit) Jeff showed us in class. They are highly effective illustrations of the tricks the human brain plays, as well as the strict limitations of the human brain. These fundamental principles are key in visualization designers' ability to communicate information, to tell a story, effectively. I'm noticing how many decisions are involved in the process of designing my own visualizations, all of which contribute to the phenomenon of preattentive processing. Excellent exercises.

tessaro wrote:

As an art student I was taught many of the principles underlying color and visual design by teachers who were themselves students of Josef Albers. His brilliant book "Search Versus Re-Search", referenced in the '1 + 1 = 3 or more' excerpt, is used by Tufte to illustrate the perils of unintended effects, like the distracting negative spaces which interfere with foreground reading in the airplane instruction manual. It should be noted, however, that much of Albers's pedagogical work, including his seminal "Interaction of Color", is really a call to students to manipulate effects rather than indivisible units, to focus on relations rather than parts. The example of the arithmetic fallacy for visual sums is not only a warning to visual designers but also an admonition to reject a simple linear accounting in all matters visual. This means that graphic effects, like the illusory edges of subjective contours, can themselves be exploited for positive visual effect. To see an example of this in Albers's own work, click the link below to look at a typeface he designed while at the Bauhaus. It is made from a ridiculously minimal set of elements and, like a construction toy, builds up the face by exploiting '1 + 1 = 3 or more' again and again. Subjective contours are key to achieving whole letters, which in some cases can, like a perceptual game, oscillate right at the edge between wholes and shapes. This design illustrates a tenet of Albers's teaching that I think applies equally well to visualization design: achieving maximum effects from minimum means.

http://www.p22.com/products/albers.html

malee wrote:

I agree with Healey's section on nonphotorealism, because it is easy to assume that something that better matches reality is a better visualization. Although the data should of course stay true to reality, the representation of it does not have to.

Healey's CT scan example demonstrated how adding detail on top of the original can be an effective mechanism. It is also possible that removing details from a photorealistic representation can be effective. One simple example of this is the fact that effective road maps are not just satellite pictures. A fun example is the way in which most comic strip characters' faces are simple circles plus a few strokes. The abstracted representation of the face is as effective (if not more so at times) than a photorealistic image.

anuraag wrote:

The Healey reading was fascinating. His description of the perceptual difficulties posed by conjunction search, feature hierarchies, and other forms of visual interference seems to me an effective argument against some of the principles of data density that Tufte and others present. I suppose this illustrates a broader tradeoff between visualizations that effectively communicate (i.e., visualizations that make it easy to quickly grasp the central insight of the data) and visualizations that effectively record or encode data (a category where one might be more inclined to go data-dense).

Healey's description of change blindness also seems to have some interesting consequences for how we should think about the role of animation in visualization. If we buy that nothing is stored in memory (or even the weaker theory that nothing is stored until subjects' memories are triggered by specific reference to what changed), then animation does not seem like an effective tool for observing and analyzing subtle variation over time. It may still be an effective tool for demonstrating already-detected variation, if one can alert the viewer to a specific area of focus before beginning the animation.

zdevito wrote:

Most of the examples of change blindness we have seen are limited to 2D displays: either carefully staged video or pictures with a blank screen interspersed. It is tempting to think that change blindness is limited to these pre-staged 2D representations. However, Simons and Levin (1998) showed that change blindness occurs even in real-world interaction (http://www.psychonomic.org/backissues/2129/R381.pdf). This video shows the effect: http://viscog.beckman.illinois.edu/flashmovie/10.php

Note that the man talking to the woman changes, but she doesn't actually notice.

rnarayan wrote:

Per Healey, one way to induce an interruption in what is being seen (producing change blindness) is via an eye saccade, yet the microsaccade is the mechanism by which the neurons are refreshed (inducing change). This process enables visual fixation and thereby visibility (since the rods and cones only respond to changes in luminance). These two concepts seem contradictory; how are they reconciled in the perceptual psychophysics model?

Follow-up comment on bowenli and cabryant: the fovea also enables color perception, and sensitivity to chromaticity changes is lower than sensitivity to changes in luminance (detected by rods).

The boolean map theory (and the color symmetry mismatch example) seems to fit the biological model: the cones are layered into two zones, with the first sensitive to RGB; these signals either excite or inhibit a second layer of neurons, producing opponent signals.

One nagging issue about the choice of color is accounting for gamut reduction. Since not all devices employ a perceptually uniform color space as the intermediary for gamut mapping (say, display RGB to printer CMYK or a projector's color space), we can see these visual artifacts in almost all slideware...
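
A minimal sketch (in Python) of the kind of check this points to, assuming nothing about any particular product's pipeline: routing colors through a perceptually uniform space such as CIELAB lets you measure how visible a gamut-mapping shift actually is, instead of comparing raw RGB values. The example colors are made up for illustration.

    def srgb_to_lab(r, g, b):
        """Convert an sRGB triple (components in 0-1) to CIELAB, D65 white point."""
        def linearize(c):  # undo the sRGB gamma curve
            return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
        r, g, b = (linearize(c) for c in (r, g, b))
        # linear RGB -> CIE XYZ
        x = 0.4124564 * r + 0.3575761 * g + 0.1804375 * b
        y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * b
        z = 0.0193339 * r + 0.1191920 * g + 0.9503041 * b
        def f(t):  # CIELAB nonlinearity
            return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
        fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
        return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

    def delta_e(c1, c2):
        """CIE76 color difference between two sRGB colors."""
        l1, a1, b1 = srgb_to_lab(*c1)
        l2, a2, b2 = srgb_to_lab(*c2)
        return ((l1 - l2) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2) ** 0.5

    intended  = (0.20, 0.45, 0.80)  # color chosen on the authoring display
    projected = (0.25, 0.45, 0.70)  # what a projector might actually show
    print(f"Delta E: {delta_e(intended, projected):.1f}")

A CIE76 difference of roughly 2-3 is the commonly quoted just-noticeable threshold, which is why equal steps in device RGB are a poor proxy for equal steps in perception.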

codeb87 wrote:

I found Healey's discussion of Boolean Map Theory the most interesting of all those presented. The aspect of Boolean Map Theory that I found intriguing was the way in which it scales with additional binary parameters. If we assume that each parameter is distributed evenly across the data set, then each map that is applied eliminates half of the data from consideration, so the number of maps needed to isolate a target is only about the logarithm of the number of items in the display, one map per relevant parameter. That's amazing! The work depends on the number of parameters rather than on the size of the data set, and assuming that the human brain performs each set intersection in constant time, the result is striking - as good as a computer performing binary search! I would be interested to see experiments that confirm or reject this result, but either way, it made me appreciate the capacity of the human cognitive/visual system.
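
A minimal sketch (in Python) of the halving arithmetic; this only illustrates the counting argument, not the boolean map model itself, and the attribute names and display size are made up:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1024  # items in the display

    # Three hypothetical binary attributes, each split evenly across the items.
    maps = {
        "red":    rng.permutation(np.repeat([True, False], n // 2)),
        "circle": rng.permutation(np.repeat([True, False], n // 2)),
        "large":  rng.permutation(np.repeat([True, False], n // 2)),
    }

    candidates = np.ones(n, dtype=bool)
    for name, m in maps.items():
        candidates &= m  # one set intersection per attribute
        print(f"after selecting '{name}': {candidates.sum()} candidates remain")

    # With evenly split attributes, about log2(n) such intersections isolate a
    # single item, which is what makes the binary-search comparison apt.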

saip wrote:

In the reading by Cleveland and McGill, the authors argue that the use of dot charts and bar charts as replacements for divided bar charts and pie charts conveys more quantitative information. They argue that our perceptual ability is better at judging position than angle or length. It makes much sense to me now. After thinking about it, though, I'm not very sure, for example, whether in a donut chart we perceive the difference in angle or the difference in segment area, or whether we perceive the length of each segment as the arc length rather than as a straight line. I think divided bar charts and pie charts could still be favored in certain situations. Some pie charts turn out to be more intuitive when minute variations in value are not very important and when values are inherently represented as percentages.
