Lecture on Oct 7, 2010. (Slides)

Readings

  • Required

    • Perception in visualization. Healey. (html)

    • Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods. William S. Cleveland, Robert McGill. J. Am. Stat. Assoc. 79(387), pp. 531-554, 1984. (pdf)

    • Chapter 3: Layering and Separation, In Envisioning Information. Tufte.
  • Optional

    • Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visualization Design. Jeffrey Heer, Michael Bostock. ACM CHI 2010. (html)

    • Gestalt and composition. In Course #13, SIGGRAPH 2002. Durand. (1-up pdf) (6-up pdf)

    • The psychophysics of sensory function. S.S. Stevens. (pdf)

Comments

rakasaka wrote:

The readings highlight why Polaris/Tableau does not favor pie charts: it makes sense that bar charts encode information in many ways that pie charts do not. What I found most fascinating about the paper was being able to quantify visual perception (albeit in a limited way), so that they could establish the importance of position relative to length and angle. (Did anyone notice that Tufte suggested dot charts as effective at just about the same time Cleveland did, in 1983? pp. 62)

Tufte's approach to positioning and layering, however, had me concerned: it seems harder and harder not to chase being "pixel-perfect" (the Rhine type is a good example). Wouldn't it be easy to be critical of every single imperfection in a visualization? Is there such a thing as a "perfect" visualization?
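
To feel Cleveland and McGill's ranking directly, it helps to render the same near-equal values once with a position/length encoding and once with an angle encoding. A minimal sketch in Python (assuming matplotlib; the values are made up):

    import matplotlib.pyplot as plt

    values = [23, 19, 21, 17, 20]   # deliberately similar values
    labels = ["A", "B", "C", "D", "E"]

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3.5))
    ax1.bar(labels, values)          # position/length: ordering is easy to read
    ax1.set_title("Bar chart (position, length)")
    ax2.pie(values, labels=labels)   # angle: ordering is much harder to judge
    ax2.set_title("Pie chart (angle)")
    plt.tight_layout()
    plt.show()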

skairam wrote:

I really appreciated the reading this week about elementary visual perception. In the past, I have often felt as Kruskal did (as quoted in Cleveland) that "in choosing, constructing, and comparing graphical methods, we have little to go on but intuition, rule of thumb, and a kind of master-to-apprentice passing along information." While I usually find myself agreeing with Tufte's points, the underlying justification often just seems to be common sense or "it looks better that way".

Cleveland's clear connections between the psychological underpinnings of perception and implications for information visualization are helpful in answering questions such as @rakasaka's above - even if there is no such thing as a "perfect visualization", there are clear metrics regarding how accurately, easily, and persuasively a visualization might convey information to the average person.

nchen11 wrote:

The Healey paper was extremely interesting, especially the interactive Java applets. The pre-attentive processing experiments became much more meaningful once I saw how short 200 ms really is.

Change blindness was also something that didn't make much sense while I was reading about it, but the point was driven home when I watched some of the GIFs and started to get a headache from staring at the blinking frames so intensely. I thought it was interesting that some of the changes were almost immediately obvious, whereas others I couldn't figure out even after poring over the picture in great detail.

I do, however, disagree with this comment about motion: "For example, consider searching for a red circle in a background of red squares and blue circles, a situation that normally produces a time-consuming serial search for the target (Fig. 3). If the red elements are animated to move up and the blue elements are animated to move down, however, the target is immediately visible." I think the fact that the elements are traveling in different directions becomes an overriding feature relative to their color, and I'd be interested to see whether search times are affected if the colors were removed and replaced with distinct movement patterns instead.
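
A rough mock-up of the kind of conjunction-search display Healey describes (an invented approximation of his Fig. 3, not the actual stimulus; assumes matplotlib):

    import random
    import matplotlib.pyplot as plt

    random.seed(0)
    n = 40
    target = random.randrange(n)     # index of the lone red circle
    fig, ax = plt.subplots(figsize=(5, 5))
    for i in range(n):
        x, y = random.random(), random.random()
        if i == target:
            ax.plot(x, y, "o", color="red", markersize=10)    # target: red circle
        elif random.random() < 0.5:
            ax.plot(x, y, "s", color="red", markersize=10)    # distractor: red square
        else:
            ax.plot(x, y, "o", color="blue", markersize=10)   # distractor: blue circle
    ax.set_xticks([]); ax.set_yticks([])
    plt.show()

Animating the red glyphs upward and the blue ones downward would add the motion cue in question; dropping the color while keeping the two motion directions would test the hypothesis above.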

ankitak wrote:

The discussion in class about encodings for quantities made me think about how brightness can be used very differently from the other quantitative encodings (position, length, area, volume):

1. First, brightness is the only encoding among these that can be used only for comparisons between quantities; it cannot easily depict the absolute value of a quantity (unlike the others, where it is easy to specify a scale with absolute quantities).

2. While trying to encode certain quantitative measures on a map (for example, the graduated sphere map shown in class), it might be easier to perceive the quantities if brightness is used rather than the others. This may be because color can provide a sense of continuity and cover the entire space on the map, encoding the quantity at every pixel. On the other hand, circles or spheres are like blots on the map, encoding data only at certain points; also, bigger circles/spheres can hide other circles/spheres or even the area whose quantity they encode.

Any opinions on this?
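
A toy rendering of the two encodings contrasted above (invented grid data, not the in-class map; assumes numpy and matplotlib):

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    field = rng.random((10, 10))      # one quantity per "map" cell

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
    ax1.imshow(field, cmap="gray")    # brightness: continuous, covers every pixel
    ax1.set_title("Brightness encoding")

    xs, ys = np.meshgrid(range(10), range(10))
    ax2.scatter(xs, ys, s=field * 400, alpha=0.6)   # circles: discrete, can occlude
    ax2.invert_yaxis()
    ax2.set_title("Graduated-circle encoding")
    for ax in (ax1, ax2):
        ax.set_xticks([]); ax.set_yticks([])
    plt.show()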

gdavo wrote:

Today in class we saw, for the second time, a slide claiming that color (hue) should be used for nominal variables. Some people debated this claim, and later I remembered something else that contradicts it. In the first reading for the class (Chapter 1: Information Visualization, in Readings in Information Visualization, by Stuart Card, Jock Mackinlay, and Ben Shneiderman), on page 6 there is an interesting "cotidal chart" (figure 1.9). It maps the tidal phase onto the color wheel (rainbow) "because the color wheel is continuous without a zero point".

Contrary to most charts using a "rainbow" color encoding scheme, here there is no minimum/maximum color and no zero, just as a phase has no absolute reference and cycles modulo 360 degrees. In that way the designer did a really good job of choosing an encoding that has the same properties as the data. But I agree that this is a very peculiar example, and that color is not a good way to encode a quantity most of the time. I hope we will learn more about this during the lecture on color.
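
A minimal sketch of that cyclic mapping (invented phase field, not the cotidal data; assumes numpy and matplotlib, whose "hsv" colormap is cyclic):

    import numpy as np
    import matplotlib.pyplot as plt

    # Stand-in for a cyclic quantity such as tidal phase, in degrees.
    x, y = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
    phase = (np.degrees(np.arctan2(y, x)) + 360) % 360

    # 0 and 360 degrees map to the same hue, so no value reads as a maximum.
    plt.imshow(phase, cmap="hsv", vmin=0, vmax=360)
    plt.colorbar(label="phase (degrees)")
    plt.xticks([]); plt.yticks([])
    plt.show()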

yanzhudu wrote:

The "Feature Integration Theory" actually has some biological backing. Human visual cortex performs a lot of preprocessing on our vision data: detect color, shape, orientation, even trying to connect line segments, resulting in the "phantom square" we saw during the lecture. These preprocessing allows us to react to "features" quickly.

When we design visualization, we should exploit this human capability. We should make use of feature that can be easily recognizable by human, rather than requiring logical inference on viewers' part. This reduce the mental burden for viewer and make visualization easier to read.

gneokleo wrote:

The paper by Healey reminded me of museum exhibitions like the ones at the Exploratorium (http://www.exploratorium.edu/explore/seeing/). Healey covers a lot of interesting explanations of why we distinguish some things better than others, and I liked how he gives examples for every explanation and every approach. I found the boolean map explanation particularly interesting and could immediately see how it might be plausible. However, after reading this paper I started wondering whether the effects also depend on the person looking at the images and processing them, or whether all the techniques are common to all viewers. For example, the author gives average processing times for some examples, but I wonder how much that varies from person to person.

Tufte's chapter on Layering and Separation was directly related to what Healey and Cleveland et al. were saying, but in a more applied manner, drawing on real examples of visualizations. It is clear from Tufte's book that noise reduction, using the methods described in the readings, can, as he describes, significantly reduce a viewer's fatigue.

jbastien wrote:

In class we talked about Stevens' power law and how people perceive the apparent magnitude represented by different encodings (like length, area, and volume). We then discussed how each of these encodings could be scaled to compensate for these flaws of perception. The conclusion reached was that different people probably don't follow the same power law, and that scaling is therefore bad.

What I'm wondering is: is the error in estimating magnitude smaller when the encoding is scaled than when it isn't? It would be interesting to see a frequency distribution of error rates with and without scaling.
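
One way to explore that question is a toy Monte Carlo (all numbers assumed, not taken from the papers): give each simulated viewer an area exponent drawn around the often-quoted 0.7, pre-scale the encoding for exactly 0.7, and compare estimation errors with and without the scaling:

    import numpy as np

    rng = np.random.default_rng(0)
    true_ratio = 2.0                       # datum B is twice datum A
    assumed_beta = 0.7                     # design-time exponent for area
    betas = rng.normal(0.7, 0.1, 10000)    # per-viewer exponents (assumption)

    # Unscaled: area ratio equals the data ratio, perceived ratio = r**beta.
    err_unscaled = true_ratio ** betas - true_ratio
    # Scaled: area ratio = r**(1/assumed_beta), so perceived ratio =
    # r**(beta/assumed_beta); exact only when beta == assumed_beta.
    err_scaled = true_ratio ** (betas / assumed_beta) - true_ratio

    print("mean |error|, unscaled:", np.mean(np.abs(err_unscaled)))
    print("mean |error|, scaled:  ", np.mean(np.abs(err_scaled)))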

As a neat meta-discussion, I'm also wondering whether Stevens, Cleveland, and McGill analyzed the perceived magnitudes of the visualizations they produced to present their results. That would be almost as neat as the self-descriptive graphs: http://xkcd.com/688/

iofir wrote:

Regarding the readings and the discussion in class about perception of different encodings, does anyone else feel that some of these encodings are really hard to perceive? For example, I really don't think orientation is a very good encoding, since in most cases it reads as a flow, like a vector field, rather than an amount. The weather charts in the readings use orientation for the amount of precipitation, but it really looks like the direction of the wind. Other bad examples are minor variations in glyphs, like parallel lines, closure, different lighting directions, or glyphs like "L" vs. "T".

lekanw wrote:

The topic of perception reminds me of the classic experiment by Simons & Chabris (1999) on selective attention. If you haven't seen it before, watch this video before reading on: http://www.youtube.com/watch?v=vJG698U2Mvo

The failure of many viewers to see the gorilla in the video demonstrates how easily we can focus on one pattern initially without noticing another obvious pattern. This provides another reason why it is important to choose an encoding that can be easily perceived, and to minimize perceptual noise that can lead the viewer astray.

msavva wrote:

I found the discussion on change blindness and the related examples really interesting. Healey claims that the very narrow attention area of the human visual system has an important implication for visualization: it requires us to direct the eye and mind of the viewer to the areas of importance. It occurred to me that an alternative to this "passive" approach would be the more "active" approach of preemptively detecting when and where user attention is focused, through gaze and blink tracking, and then tweaking visualization elements to nudge the user towards seeing the intended parts of the visualization. For example, a solution to the common problem of having to jump back and forth between elements for comparison, or between elements and a legend for classification, might be to detect these gaze patterns and then bring the elements closer together or overlay a translucent legend widget on the area of attention. Of course, that would only work for single-user scenarios, but one could also imagine advanced systems that track multiple gazes and compute the optimal display for all viewers (okay, maybe my imagination is crossing over into sci-fi land with this example).

mariasan wrote:

I'm fascinated by the concept of preattentive processing and would love to read more about it. I wonder what it is that makes us so sensitive to differences in a specific set of visual features.

felixror wrote:

I find the discussion of human graphical perception fascinating. The fact that it is so easy for us to detect targets preattentively by unique features, yet so hard to find conjunction targets, is very surprising. In this regard, Treisman's feature integration theory does a good job of explaining the phenomenon. However, I find the classification of feature maps a bit problematic. On the one hand, color is treated very elaborately, being subdivided into yellow, green, red, and blue, while the classification of other features is relatively coarse in comparison. For example, I feel that orientation is too broad a concept: it carries no information about the associated shape or the type of orientation. In addition, there should be a description of a hierarchy of visual features, since I believe humans do not react uniformly across all of them. When conjunction patterns are encountered, there should be a dominant visual feature. Knowing the hierarchy of visual features would allow us to choose visual encodings more effectively.

ericruth wrote:

Pre-attentive processing seems like one large advantage visualizations have over text-based data displays. The demonstrations in class today were really compelling; however, I did notice significant differences in the effectiveness of pre-attentive features. This makes me wonder whether there's a ranking of pre-attentive features similar to Mackinlay's ranking of encoding methods. Personally, I found color to be one of the most compelling pre-attentive features we saw today, but I'd be curious to see research that compares various pre-attentive features in a more structured way.

Something else I'm wondering is what types of information can be effectively processed pre-attentively. We saw many effective examples of nominal groupings represented by pre-attentive features, but most of the quantitative/ordinal variables didn't seem to come across clearly until after thinking about them. This makes me wonder whether "effective" pre-attentive processing is limited to groupings, or whether there are other patterns that humans can process pre-attentively.
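
For reference, the standard analysis behind "preattentive" claims fits response time against display size: a near-zero slope suggests parallel (preattentive) search, while a positive slope suggests serial search. A sketch with simulated response times (the numbers are invented; assumes numpy):

    import numpy as np

    rng = np.random.default_rng(2)
    set_sizes = np.array([4, 8, 16, 32])
    rt_popout = 450 + rng.normal(0, 15, 4)                        # flat: color pop-out
    rt_conjunction = 450 + 25 * set_sizes + rng.normal(0, 15, 4)  # rises: serial search

    for name, rts in [("pop-out", rt_popout), ("conjunction", rt_conjunction)]:
        slope = np.polyfit(set_sizes, rts, 1)[0]
        print(f"{name}: {slope:.1f} ms per added distractor")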

hyatt4 wrote:

I think the biggest takeaway for me on this topic is that there is a lot left to be understood about mixing and matching different visual encodings. While there has been some research on the effectiveness of one or two encodings at a time, the combined effects of aggregating more visualization encodings are less understood. I'm curious whether there is some perfect combination that has yet to be uncovered, or whether our best bet is to continue by trial and error, using Mechanical Turk (or some other form of subjective feedback) as a tool to determine whether we are heading in the right direction.

amirg wrote:

The topic of graphical perception reminds me of a lecture I heard last year from Professor Robert Sapolsky, in which he talks about how people can more easily find things in images that evoke fear, such as snakes. Obviously, this type of perception has survival benefits: someone who notices when a snake is nearby is less likely to be bitten by it. I think it raises an interesting question about how our visual processing systems may have evolved to be more adept at some pre-attentive features than others. Is there something about being able to pre-attentively pick out different colors that confers some sort of survival benefit likely to have been passed down? I would imagine there is.

In this vein, I also found the discussion of different theories of pre-attentive visual processing quite interesting, as they really try to get at this issue from the perspective of how our eyes and brain coordinate to process images.

selassid wrote:

I was at "adult night" at the Exploratorium science museum in San Francisco yesterday (which sounds much sketchier than it is; once a month they're open from 6-10pm and you have to be over 18 to enter, similar to "adult swim") and I was really excited to see an exhibit that demonstrated pre-attentive visual processing! I took some pictures of the exhibit: Left half and right half.

Although they didn't go into detail explaining the nuances of the effect, they demonstrated that color, shape, and orientation were pre-attentive, and that combinations of them were not (although they didn't use the term "pre-attentive"). I was very interested in the difficulty of finding the bold serif R in the sea of sans-serif Rs in the upper right section of the right half of the exhibit. I'm not sure whether the shapes of the two fonts were not different enough to make the processing pre-attentive for me, or whether font and letter (meaning?) are integrally related. I feel I'm much faster at recognizing the semantics of a word or letter than the font it is written in, though. In the upper rightmost section of the right half of the exhibit, it was very easy to identify the R in the sea of Es, even though they too are pretty similar shapes. This might tap into the explanation behind the Stroop effect, as the processing of word meaning seems to happen more robustly.

acravens wrote:

@gneokleo, @felixror... In class and in your comments we seem to be touching on the implications of individual differences in human perception for designing visualizations. I was thinking about perceptual variation during the lecture in terms of child development and learning. I don't know enough about the range of experiments that have been run, but I was struck that all the subjects in the Cleveland and McGill paper were adults. When and how do these "typical" perceptual capabilities develop, and are there systematic factors one can point to that influence where one falls on the spectrum of variability we were discussing in class?

trcarden wrote:

I noticed that no one commented on the "Set" game that was demonstrated in class. One of my old roommates used to love playing that game and got considerably better at finding the seemingly integral attributes of each card. I wonder whether training can actually produce pre-attentiveness in other dimensions, or perhaps even in otherwise integral dimensions.

amaeda10 wrote:

Speaking of graphical perception, I highly recommend the book "Mind Hacks: Tips & Tools for Using Your Brain": http://www.amazon.com/Mind-Hacks-Tricks-Using-Brain/dp/0596007795

Chapter 2 of the book covers the basic theory of visual processing, with many examples and experiments about vision that you can try.

Please note that this book is not only about visual perception but, more broadly, about human cognitive processes, including grouping (Gestalt grouping is introduced), hearing, integrating, reasoning, remembering, attention, etc.

jtamayo wrote:

We talked about how it is a bad idea to compensate for Stevens' power law by pre-scaling our visualizations. It's interesting to note that for some variables, computers will do this for us whether we like it or not. In particular, gamma correction is present in every operating system, and even in compression formats like JPEG.

Further confirming the imprecision of the exponents in Stevens' power law, until very recently OS X and Windows/Linux used different exponents for gamma correction.
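
A minimal illustration of the gamma round trip in plain Python (using 1.8 and 2.2, the commonly cited older OS X and Windows defaults; stored values are encoded with exponent 1/gamma and the display decodes with gamma):

    linear = [0.0, 0.25, 0.5, 0.75, 1.0]    # linear-light intensities

    for gamma in (1.8, 2.2):
        encoded = [v ** (1 / gamma) for v in linear]   # what gets stored
        decoded = [e ** gamma for e in encoded]        # what the display emits
        print(f"gamma {gamma}: encoded = {[round(e, 3) for e in encoded]}, "
              f"round trip = {[round(d, 3) for d in decoded]}")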

asindhu wrote:

@iofir I definitely agree. I also found that some of the encodings formally classified as at least somewhat effective, like orientation, are actually not very easy for me to decode. The rain map was a perfect example: a visualization is supposed to reduce the cognitive load on the viewer, but I found myself constantly having to translate the visual encoding in my mind to remember what it actually meant. I think this is just further evidence that there is huge variation among people, so all of these findings can only be general guidelines; you'll always find people for whom a supposedly "effective" encoding doesn't work very well.

On that same note, I wanted to reiterate a comment someone made in class about how the context of the data is actually very important. Prof. Heer mentioned (in response to that comment) that all of this work is somewhat "context independent", meaning these findings about accuracy of perception or effectiveness of encoding are all based on abstract quantitative values. However, just as was mentioned in class, even though angle isn't as good as position according to experimental trials, it may be best if your visualization is actually encoding an angle. Here I mean "best" not in the sense that angle encodings suddenly become more accurately perceived when they encode actual angles, but that there are likely other benefits to a direct correspondence between the type of data and the type of visualization (it may make it easier to spot patterns and trends, etc.). So I guess what I'm saying is that while all of this work on perception is important, at the end of the day we should also consider what the underlying data actually is and in what context we are creating the visualization, because these factors should be weighed alongside the scientifically determined effectiveness of the chosen encodings.

andreaz wrote:

I was curious to learn more about color perception in visualization research, and I came across some interesting work on the interference effects of color on achromatic codes. In multidimensional displays, a literature review by Christ (1975) found that color coding interferes with participants' accuracy in judging size codes, alphanumeric codes, and shape codes. Moreover, increasing the number of colors in a display decreased accuracy in both digit- and size-identification tasks. The interference effects of color illustrate how difficult it is for humans to inhibit its processing, and suggest that color is an effective encoding in certain cases (when you want certain elements to jump out) but detrimental in others (when it interferes with accurate judgments of other encodings).

Source: http://www.ingentaconnect.com/content/hfes/hf/1975/00000017/00000006/art00002

emrosenf wrote:

After class and the reading, I have a good appreciation for the pitfalls of pie charts. But now I'm having trouble understanding why they were so popular in the first place.

I found this site the other day that I would like to contribute to the class wiki: http://www.informationisbeautiful.net/

In particular, I would like to single out this infographic: http://www.informationisbeautiful.net/visualizations/colours-in-cultures/

After you look at it for a few minutes, you'll realize that emotion and region are both encoded with position. Color is actually used to represent... an actual color, for a change.

I think it's a terrible chart. The wedges are distorted, and we naturally assume that area means something, but it doesn't. The gaps where there are no "values" are very confusing. I see no advantage to having the graph wrap around itself -- there's nothing periodic about it. And the fact that a few of the colors need their own legend? Shameless.

What do you think? Am I off base?

jsnation wrote:

When reading the chapter on Layering and Separation, I was embarrassed to find that I had made one of the worst color-choice mistakes in my Assignment 1! Basically, as Tufte quotes Windisch on the "first rule of color composition" - bright, strong colors have a negative effect when they are used in large areas adjacent to each other - that is exactly what I did in that assignment. I had thought at the time that I was selecting colors to be as "different" as possible, but I see now that I was drawing extra attention where it was not needed, basically making the graph noisier than it would have been with more muted colors. If I had to redo that assignment now, I would select duller colors, with the lines between colors being brighter, because that was the part of the graph that should have been accentuated by color.

I also found Cleveland's article on graphical perception really interesting, too. I don't think I had ever encountered the dot chart, curve-difference chart, or framed-rectangle chart before, but I liked his reasoning for suggesting them. I wonder why these graph types have not been adopted more readily. I think it is most likely because it is very hard to change the norm, but I also wonder whether the artistic value of a graph has anything to do with it. At least for the framed-rectangle charts, I find that the patch maps look much more aesthetically pleasing, even if they are harder to read and create false "groupings".

On a side note, I looked for the "Set" game that was shown in lecture and found a few versions online, but they all seemed to have clunky interfaces and not work very well (or I am very bad at it). Does anyone know where to find a good version of the Set game I could try?

esegel wrote:

Regarding change blindness: while I can find many examples of change blindness online, I can't find a good explanation of what causes the phenomenon. In particular, I would like to know the "recipe" for constructing change blindness illusions. Why is it, for example, that we can hide the engine on an airplane but not a man's tie? Are there predictable patterns governing which elements in a scene can be changed and which cannot?

Regarding pre-attentive processing: these effects are strikingly strong. This vision research provides convincing empirical evidence for certain design choices, while others in the field (e.g. Tufte) appeal to "design principles" and "aesthetic sense" instead.

Regarding gestalt perception: these effects are neat. They are also as immediate, strong, and convincing as the pre-attentive processing effects. Is one stronger than the other? The pre-attentive effects seem "bottom up", meaning they detect "irreducible components" rather than composite images; the gestalt effects seem "top down", meaning they group many compositional elements together as one. So even though both effects appear immediate and quick, I predict that pre-attentive effects are quicker than gestalt effects. This could be tested empirically.

riddhi wrote:

I enjoyed revisiting the Small Multiples concept in class. While doing Project 1, I was trying to communicate everything I wanted through one graph. However, the illustration of the small multiples concept in class, especially in conjunction with layering principles and the integral-separable spectrum on which multiple attributes lie, has made me understand their effectiveness better. I thought Stevens' power law was particularly interesting, and the in-class exercise that had us rank position, length, area, color, etc. by effectiveness for magnitude estimation, before revealing the power law, was useful. My first thought was area as well - except I now realize area is good for loose ordinal estimation, not precise quantitative estimation.

arievans wrote:

I agree with @iofir: some of these encodings are really hard to perceive. In particular, the weather charts are extremely difficult for me to decipher. I think what the last few lectures and readings have revealed is that visual perception and our designs need to be considered on a case-by-case basis. It seems like we're always playing tug of war between generalizing visualization principles and tailoring them to specific instances or situations. That is somewhat disheartening, because ideally we'd like to keep moving in the direction of generalization, i.e., make things as general as possible while still being useful. Maybe we just need to get better at categorizing data? Different classes of data lend themselves better to certain types of visualizations...

Also, the change blindness phenomenon is fascinating to me, but I'd really like to see some creative applications of it. Has anyone found any on the net?

anomikos wrote:

I loved the idea of using Mechanical Turk to try out and test different concepts around human perception. It was interesting to discover, though, the level of planning that these tests require to produce accurate and scientific results. I know that Prof. Klemmer teaches an interesting course on HCI experiment design later in the year. If you are also interested in this, maybe you should check it out.

Regarding the lesson in class: it was interesting to see how people responded to the different tests that Prof. Heer had prepared for us. It perfectly matched the scientific evidence! Still, many questions were raised regarding human perception, both on an individual basis and as a whole - e.g., do people who generally overestimate, say, area overestimate other encodings as well? More interesting experiments to tackle as part of the final project, I guess.

@jsnation I was TAing iPhone programming last spring, and one of the teams worked on a Set game implementation for the iPhone. They were planning to release it in the App Store. I could look into that if you are interested.

nikil wrote:

I am fascinated by the research on preattentive features. I'm curious to see what other features in that space can be classified as preattentive: even though a lot of the definite primitive ones have been covered, there are other aspects of daily life that jump out at us instantaneously and, if analyzed, could lead to the discovery of important new preattentive features. I also think that 3D depth cues have an interesting visual effect in relation to stereo vision and could possibly be the basis for a new set of preattentive features. All of the work done so far focuses on two-dimensional projections; I really think 3D space offers interesting opportunities.

avogel wrote:

The research on preattentive features is pretty cool, but I expect it's the sort of thing that could be horribly misapplied, like the classic 7 +/- 2 rule of thumb. In the case of 7 +/- 2, I remember hearing (probably in CS147, several years ago now) that designers used it as a justification for changing menu layouts. People can see all the menu options available when they open a menu - they're not keeping the menu layout in short-term memory - yet designers were under the impression that's exactly what people did. In the same manner, I could see designers mistaking "preattentive features" for "subliminal messaging", or worse, purposely trying to make confusing charts with that in mind.

On another note, the second Tufte book has a far different feel than the first. At times I felt he was abandoning, or making exceptions to, his minimalist approach, but I'm reconsidering that interpretation. Rather than looking at straight data, he's looking at more complicated representations and how non-data ink can be used to create worthwhile art (e.g., the Japanese train map).

I don't recall whether the chart I'm thinking of was in the reading for today (it is reproduced in this blog post, which argues for some use of "chartjunk" - an interesting read in and of itself), but it made me think the people at The Onion are familiar with Tufte's work, or at least sensitive to the same trends in data presentation.

msewak wrote:

In Healey's perception-in-visualization paper, I thought it was very interesting that during pre-attentive processing, humans pick out color boundaries more quickly than shape distinctions. I also found change blindness interesting: some features are processed pre-attentively, while others cannot be found even when people actively look for them. For example, it was very hard to see the difference between the two pictures when they were interleaved with blank images. This is a good observation to keep in mind when designing images.

heddle wrote:

Ha! During this lecture, all I kept doing was trying to figure out the evolutionary advantage of each pre-attentive feature. For example, the ability to pick out color (although they don't mention any difference in the ability to pick out specific colors over others) probably came from the need to spot food and poisonous creatures :) I imagine that the ability to recognize whether something is a face is also pre-attentive, but that's just a guess.

Anyway, the topic I thought was dealt with the least in both the Healey paper and the Graphical Perception paper was depth and volume. As we get better and better at 3D modeling on computers, I think more time and research should be spent understanding how users interpret data with depth. In another class, we learned that humans are inherently bad at perceiving depth without a reference point. The example given was scuba diving in the ocean: you can't see the bottom, so your vision is just a wall of blue, and a creature swims by. Is it tiny and close, or huge and far away? People can't actually tell. I imagine that, although you don't scuba dive when you read data, our inability to accurately perceive depth would affect our interpretation. This was touched on with people's inability to estimate volume, and to tell whether one circle was twice as large as another.

adh15 wrote:

@emrosenf I especially like the legend that identifies some of the more ambiguous colors in words. Surely that's a sign that something could be improved...

In the Tufte chapter, I found the use of color to separate layers of information extremely effective. It seems that often multiple colors are used within the same layer of a visualization, but going forward, I am interested to explore how color can be used to present more information in a way that is still uncluttered.

jeffwear wrote:

While the lecture did not present a ranking of the preattentive features by ease of perception, I'd venture a guess that motion is the most salient. Along the lines of @heddle's analysis, being able to detect the motion of predator and prey was paramount to our species' historical survival. The strength of motion as a preattentive feature is also supported by the way it augments the spatial conjunctions of 3D disparity, color, and shape.

It strikes me as unfortunate that many visualizations, as they are printed, cannot make use of motion - although they might suggest it (this blog post analyzes a terrific example of an action shot). At the same time, it is ironic that modern reading takes place within interactive electronic media where many elements 'surrounding' the visualization are in motion - notifications, ads, etc. And this disadvantage of static visualizations experienced on computers is compounded by the interactions between the visualization and its surroundings in color, shape, and spacing. As Tufte notes, these patterns become "dynamically obtrusive" (53).
