Lecture on Dec 2, 2010. (Slides)

Readings

  • Required

    • A Nested Model for Visualization Design and Validation. Tamara Munzner. InfoVis 2009 (pdf)

  • Optional

    • The Value of Visualization. Jarke van Wijk. Visualization 2005 (pdf)

    • The Challenge of Information Visualization Evaluation. Catherine Plaisant. AVI 2004 (pdf)

Comments

wulabs wrote:

In "the challen of information visuzalization evaluation", #5 regarding Learnings From Examples of Technology Transfer. This is very interesting to see existing viz technologies used for different purposes and domains. I do not usually see the same complex visulization being used for different domains nowadays because people have gotten acustomed to using very simple online web interfaces. Anything such as the treemap or a scattergram would seem to be a custom java or flash applet. perhaps it is because nobody has really built out such toolkits that are easily embeddable as widgets to a cloud based platform? I think if somebody did this, we would see an uptake in increasily more complex visuzliation tools as well as more of the domain transfer, as noted in this paper

yanzhudu wrote:

The framework in "The Value of Visualization" is interesting. If this framework can be automated, then it can be used to assess a visualization automatically. This could assess, and maybe improve, the quality of automatically generated visualizations.
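As a rough illustration of the economic framing in van Wijk's paper, here is a minimal sketch (Python; the function name is my own and the formula is a paraphrase of the gain/cost model as I read it, with per-session exploration costs folded into a single term, not the paper's equations verbatim):

    # Rough paraphrase of van Wijk's cost/gain model of visualization value.
    # Parameter names follow the paper's notation as I understand it; the exact
    # formulation here is a simplification.
    def visualization_profit(n_users, n_sessions, W_dK, C_i, C_u, C_s, C_e):
        """Estimate the net value ("profit") of a visualization method.

        n_users    -- number of users of the method
        n_sessions -- sessions per user
        W_dK       -- value of the knowledge gained per session, W(delta K)
        C_i        -- initial development cost of the method
        C_u        -- initial cost per user (learning, installation)
        C_s        -- initial cost per session (data conversion, setup)
        C_e        -- perception and exploration cost per session
        """
        gain = n_users * n_sessions * W_dK
        cost = C_i + n_users * C_u + n_users * n_sessions * (C_s + C_e)
        return gain - cost

    # Toy example: a widely reused method amortizes its development cost.
    print(visualization_profit(n_users=500, n_sessions=20, W_dK=5.0,
                               C_i=10_000, C_u=2.0, C_s=0.5, C_e=1.0))

Automating an assessment like this would of course require estimating W(delta K), which is exactly the hard, insight-dependent part.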

sholbert wrote:

The map morphing experiment was really interesting. My guess is that, because of distractions in the atrium, users' cognitive integration between the two different views suffered, since their recollection of details from one view to the other was compromised.

In a lab setting, perhaps they would be able to focus more on those details.

msavva wrote:

The research on inferring cognitive load from pupil dilation mentioned during lecture reminded me of a very interesting article I saw some time back on research in "Affective Computing". It's quite a broad field which concerns itself with trying to infer user emotional state and then driving interactions or algorithms with emotion signals. The particular application was a system named "EmoRate" which could index videos by the type of emotional response of the viewer and then allow creation of emotion timestamps for looking up scenes that match a given emotion. A byproduct of tracking viewer emotion was that you could construct an emotional response signature for an entire video (or any interaction for that matter) and use that to classify the overall response. Though the technique itself was relatively crude and only handled 4 types of emotion (mostly due to limitations in the EEG sensors used), it certainly inspired a great many questions about future research. If it became possible to track cognitive responses from visualization viewers at a fine resolution then the research directions that would be facilitated are mind-boggling.

jbastien wrote:

I think this was one of the most important lectures, yet also one of the least informative, and by that I don't mean to say that the teaching staff did a bad job: my takeaway is that there's no easy way to evaluate visualizations. I think the examples and the discussions were great, and they show that evaluating visualizations is still an open problem that can be approached like any other scientific problem. There's no easy rule or recipe that can be followed, and there probably never will be.

asindhu wrote:

I really liked the "nested" validation model because the way it's presented lends itself very well to some kind of automation. As @jbastien said above, there's certainly no simple recipe to follow and no guarantee of producing a perfect visualization, but developing models such as this one brings us a step closer to automated visualization validation, at least as a very first step in evaluating visualizations. I do agree that, at the end of the day, evaluating a visualization is somewhat subjective and highly dependent on many factors, like the nature of the data and the context, so we may always need human evaluation as the final word. However, as visualizations become more and more prevalent and as toolkits for creating them get better and better, more and more people will be creating visualizations, and the issue of quickly and reliably validating them is sure to become increasingly important.

abhatta1 wrote:

In the paper "A Nested Model for Visualization Design and Validation", four levels are proposed as a model: domain problem characterization, data/operation abstraction design, encoding/interaction technique design, and algorithm design. The part of this model that most confuses me is domain problem characterization. The paper mentions interviews as a major way for non-domain designers to arrive at an effective characterization. Besides interviews, what other quick methods can non-domain designers use to characterize data effectively? This is not clear to me and seems like a major issue. For example, I sometimes find it difficult to comprehend complex biological data without the appropriate background (e.g., the gram-staining data presented in class, which I understood only after I learned what gram-staining is supposed to do). I suppose there is similarly complex data in many fields that I would not be able to decipher easily.
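For reference, here is a shorthand summary (my own paraphrase, not the paper's wording) of the four nested levels, the threat to validity at each level, and example validation approaches; the key property of the nesting is that an error at an outer level invalidates all the levels inside it:

    # Shorthand summary of Munzner's nested model: each level, its main threat
    # to validity, and example validation approaches (paraphrased).
    # Level order matters: an error at an outer level propagates inward.
    NESTED_MODEL = [
        ("domain problem characterization", "wrong problem",
         ["interview and observe target users", "measure adoption"]),
        ("data/operation abstraction design", "wrong abstraction",
         ["field study of the deployed tool with target users"]),
        ("encoding/interaction technique design", "wrong encoding/interaction",
         ["justify against perceptual and design guidelines", "lab user study"]),
        ("algorithm design", "wrong algorithm",
         ["analyze computational complexity", "benchmark time and memory"]),
    ]

    def downstream_levels(level_name):
        """Levels whose validity depends on `level_name` being right."""
        names = [name for name, _, _ in NESTED_MODEL]
        return names[names.index(level_name) + 1:]

    print(downstream_levels("data/operation abstraction design"))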

hyatt4 wrote:

I was intrigued by the Xerox R&D team's continued research efforts after the results came in from their competition with Microsoft. My initial thoughts lined up with their hypotheses about the usefulness of the overview window. Once the data domain was taken into account, it made more sense to me why the overview window was not as useful. I think there is still a lot of room for exploration here in terms of other data domains, such as editing pictures. It is very useful to have a zoomed-out view of a person's face when I am trying to clean up a small part of the picture that has been magnified to fill the screen. When I get interrupted by someone else in the middle of an edit and then turn back to my work and ask, "Was I removing nose hair or ear hair?", it is nice to have that quick reference point.

andreaz wrote:

The aspect of Munzner's model I found most useful was how its organization provides a good framework for rapidly determining which validation methods are appropriate. However, I feel that the main drawback of her model is that it doesn't characterize the typical nature of the design process, which tends not to proceed in a strict top-down order. Nonetheless, the model's emphasis on the cascading nature of upstream errors and their impact on downstream levels is useful if it's taken into account when interpreting the validity of assessment results at the more downstream levels.

acravens wrote:

@jbastien - I think you captured my sentiments quite well, and the nested framework helps emphasize why, especially for complex problems, where you must validate both that you have the right problem and that you have the right operation abstraction (which seems to me similar to having an overall metaphor that matches someone's mental model).

iofir wrote:

I had an interesting discussion today with a colleague at HP about 2D vs. 3D visualizations for navigational maps. We debated whether a 2D top-down view would make a more effective viz than the distorted perspective view that the GPS in the car was showing, which emphasized closer roads by making them bigger. We ended up agreeing that it depends on the use case, but I did wish there was a way to prove that one was superior to the other.

amirg wrote:

@jbastien - I agree: there is not always a clear cut way to evaluate a visualization. This is one of the things I think is hardest about visualization and HCI in general. I would also add that a good evaluation method is very much dependent on the goal of the visualization task and what the visualization intends to show.

I think one of the other challenges at play here is how to validate design decisions. We saw several examples in lecture of design choices that we thought would be better than others but turned out not to be. For me, the answer to the question posed at the end of the lecture, "What did you find most difficult in creating visualizations and designing techniques?", is the difficulty of making design decisions that involve a tradeoff (which is probably most of them). There are so many tradeoffs in each decision (encoding, data transformation, etc.) that it can often be difficult to know which choices will yield the best results with respect to the goal of the visualization. Often it is possible to rationalize many different decisions based on these tradeoffs. I think this is where evaluation and users can really be helpful. As I said above and as we saw in lecture, we are not always good at weighing these tradeoffs rationally, so some measure of formal evaluation can be particularly useful.

ankitak wrote:

It does seem true that evaluating visualizations is not a problem that has been solved yet: we do not have an easy way to evaluate visualizations concretely. However, if we take a step back and think about the aim of visualization, we can probably understand the origin of this difficulty better. As Jarke J. van Wijk proposed in his paper "The Value of Visualization", "Insight is the traditional aim of visualization." And human insight is not a formally measurable quantity, at least not yet.

sklesser wrote:

Mirrored and offset horizon graphs seem like one of the less intuitive visualization techniques we've seen so far. I don't think the average person would see one and understand what is going on, though the experimental results show that, with an explanation and some practice, consistent readings can be obtained. I think this brings up the important idea of evaluating for a particular audience and function: intuitive techniques are needed to convey information to the general population through avenues such as newspapers and the news, while more advanced, less intuitive techniques can be used for domain experts. Along the same lines, different evaluation heuristics should be used, such as looking for intuitive understanding versus optimizing analysis time.
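To make the technique concrete, here is a minimal sketch (Python/matplotlib; a simplification of my own, not code from the study) of a two-band mirrored horizon graph: negative values are reflected above the baseline and drawn in red, and the portion of each value that exceeds the first band is wrapped back down and overplotted in a more saturated color:

    import numpy as np
    import matplotlib.pyplot as plt

    # Minimal two-band *mirrored* horizon graph: negatives are reflected above
    # the baseline (red), and the part of |y| beyond the first band is wrapped
    # back down and overplotted in a more saturated color.
    def horizon(ax, x, y, band):
        for sign, colors in [(1, ["#9ecae1", "#3182bd"]),    # positive: light/dark blue
                             (-1, ["#fcae91", "#de2d26"])]:  # negative: light/dark red
            mag = np.where(np.sign(y) == sign, np.abs(y), 0.0)
            for i, color in enumerate(colors):                # band 0, then band 1
                layer = np.clip(mag - i * band, 0.0, band)
                ax.fill_between(x, 0.0, layer, color=color, linewidth=0)
        ax.set_ylim(0, band)

    x = np.linspace(0, 4 * np.pi, 400)
    y = np.sin(x) * np.linspace(0.5, 2.0, 400)                # toy series in [-2, 2]
    fig, ax = plt.subplots(figsize=(8, 1.5))
    horizon(ax, x, y, band=1.0)
    plt.show()

The compactness of the chart comes precisely from this layering, which is also what makes it hard to read without an explanation.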
