Lecture on Nov 2, 2010. (Slides)


  • Required

    • Multi-Scale Banking to 45 Degrees. Heer & Agrawala. (pdf)

    • Escaping Flatland. Envisioning Information. Tufte.
  • Optional

    • A Fisheye Follow-up. George Furnas. CHI 2006. (acm)

    • Space-scale diagrams: Understanding Multi-Scale Interfaces, Furnas & Bederson, CHI 1995. (pdf)

    • Guidelines for Using Multiple Views in Information Visualization, M. Q. Wang Baldonado, A. Woodruff, A. Kuchinsky, Proceedings of AVI 2000, Palermo, Italy, May 2000, pp. 110-119. (acm)

    • Stacked Graphs – Geometry & Aesthetics. Byron & Wattenberg. InfoVis 08. (pdf)

    • The Table Lens: Merging Graphical and Symbolic Representations in an Interactive Focus + Context Visualization for Tabular Information. Ramana Rao and Stuart K. Card, SIGCHI '94, pp. 318-322. (pdf)

    • The visual design and control of the trellis display. Becker, Cleveland and Shyu. (pdf)

  • Links


ankitak wrote:

This reminds me of a TED talk on Photosynth - http://www.ted.com/talks/blaise_aguera_y_arcas_demos_photosynth.html. I feel that it demonstrates a new way of "escaping flatland" on our computer screens by providing users with a multi-resolution experience.

Looking at the bigger picture, I think this is a neat project addressing many interesting research issues. Among its applications, I find that using social and other freely available data to create rich photographic experiences of various locations is great!

gneokleo wrote:

Tufte gives a lot of interesting examples that try to "escape flatland," the limitation of portraying information in 2D. What I noticed in some of the examples is that the visualization often sacrifices detail or accuracy for that, for example the map of Ise, Japan, the dancing instructions, or even the map of Rockefeller Center in chapter 2. In some of the examples I think it would have been helpful for the reader to have two perspectives, especially in the maps, so that they can switch between a more accurate representation and an easier-to-read one.

The Heer and Agrawala paper stresses the importance of helping readers analyze data more efficiently by giving them an appropriate perception of trends, which I think is very helpful. It also reminded me of Google Finance and the really good job they do with the stock-zooming feature, which smooths the line according to the zoom level.

wulabs wrote:

Multi-scale banking to the optimal 45 degrees is a very interesting idea. I do believe it is somewhat subjective, though, because it only affects how the data is perceived rather than changing the actual display of the data or the interaction. One good application of this I have seen is the use of sparklines for things such as stock prices. Users can very quickly see a real-time trend in a very small amount of space, and sparklines can usually be put in a table or list with other sparklines while remaining simple enough not to be confusing or overwhelming.

I would have liked the authors to do a case study with a control and a test group to see the effectiveness of the two different graphs in making business decisions. The paper essentially says "graph A looks different from graph B, and most people are likely to draw these conclusions" rather than offering concrete data.

brandonh wrote:

If you've ever seen a Tufte graphic design seminar, then you'll definitely recall the effectiveness of the "Criminal Activity of Government Informants" chart on pg 31. Ingeniously, no matter which way you read the chart, you read a story of violence and crime that supports the idea that these are not trustworthy informants. It's almost like the information designer is the protagonist in the movie Inception: they're creating a world designed to plant an idea. This idea is not explicitly labeled, but is made so clear by the presentation of facts that you take it as your own conclusion, independently derived, and it sticks.

Given the timing, I have to wonder if there are any examples of political ads employing the same techniques. Sure, we've got direct messages like "bobble-head Meg is sending jobs overseas" and "Jerry is paying for illegals to go to college", but has anyone come across a visualization meant to plant an idea in voters' heads about candidates?

As for multi-scale banking, what's memorable about this paper is that sometimes less is more: fewer pixels with the right banking can show an aspect of a line more effectively. This seems like a useful insight for the design of information displays on limited-resolution media like mobile phones. Still, it would have been helpful to do a quantitative user study to validate some of the perceptual assertions.

msavva wrote:

The Tufte chapter provided some rather impressive examples of very data dense visualizations that are simultaneously eminently readable at a micro scale and yet also manage to convey a sense of the macro scale structure within the data. In particular, I found the Tokyo weather charts to be excellent examples of how it's possible to encode data elements into "small multiple"-like visual elements and then aggregate them over a chart so that they end up portraying the average and distribution of the data. It's amazing how easy it is to just step back from the chart, squint, and get a sense of the overall pattern (okay, maybe you wouldn't really squint when actually trying to read the chart but the fact that you can still perceive large scale patterns is a testament to how well designed these visualizations are). Perhaps in this case the visualizations work so nicely because a relatively intuitive mapping between weather patterns and visual encodings is easy to construct (and there's also a clear way to map the elements out over the chart by indexing in time over both dimensions). These examples reminded me of the "Harvey Balls" and how they are used by Consumer Reports to rank cars by reliability.

skairam wrote:

In the Multi-Scale Banking Paper, I also would have been curious to have seen the results of a study with users to determine how the different aspect ratios affected graph understanding. Specifically, it would be interesting to note differences with respect to performance between periodic (e.g. sunspots, CO2) and non-periodic (e.g. mutual funds, downloads) data and with different types of tasks.

esegel wrote:

Some thoughts from lecture:

(1) It seems like the Tufte reading "Micro/Macro Readings" would have been more applicable to today's lecture than for the "Exploratory Data Analysis" lecture. This Tufte chapter is all about techniques for expressing different details, structures, and data on different levels of analysis. This is very similar to the types of "zooming" techniques we discussed today.

(2) Today's class had an example of finding nearby locations on a PDA (is that term used anymore?). I was thinking that a useful strategy would be a fish-eye lens approach, with closer places being less distorted and more visible. (Fish-eye perspectives were talked about later on...)

(3) "The World as seen by a New Yorker" seems related to a fish-eye perspective: things further out on the periphery are more distorted. However, it has the additional strategy of giving *less detail* to these periphery objects. This seems to be a good combination strategy (i.e., fish-eye + dense details only locally). Was this explicitly brought up in lecture and I missed it?

yanzhudu wrote:

The "Multi-Scale Banking to 45 Degrees" paper details an algorithm for automatically selecting an aspect ratio that maximizes the angle differences between line segments. Sometimes we may want to stress a certain trend in a graph. For example, if we want to emphasize that the trend is growing, we may want to increase the resolution of the vertical axis. (Sounds like the "how to lie with statistics" examples we saw during the lecture.) I wonder whether it is possible to add a parameter to banking to 45 degrees to generate such emphasis.
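As a purely illustrative sketch of this idea, Cleveland's classic median-absolute-slope banking heuristic can be written in a few lines, with a hypothetical `emphasis` parameter bolted on. (The Heer & Agrawala paper itself uses spectral analysis and locally weighted regression; the `emphasis` knob is invented here, not part of any published method.)

```python
def bank_to_45(xs, ys, emphasis=1.0):
    """Return a y-scale factor that banks the median absolute
    segment slope to 45 degrees. A hypothetical emphasis > 1
    steepens the plot to stress growth; emphasis < 1 flattens it.
    Assumes a non-flat series (median slope != 0)."""
    slopes = []
    for i in range(len(xs) - 1):
        dx = xs[i + 1] - xs[i]
        if dx != 0:
            slopes.append(abs((ys[i + 1] - ys[i]) / dx))
    slopes.sort()
    mid = len(slopes) // 2
    if len(slopes) % 2:
        median = slopes[mid]
    else:
        median = (slopes[mid - 1] + slopes[mid]) / 2
    # Scaling y by 1/median makes the median slope exactly 1 (45 deg);
    # emphasis then biases that target up or down.
    return emphasis / median
```

For a line with constant slope 2, this returns 0.5, i.e. compress the y-axis by half so the line runs at 45 degrees; passing `emphasis=2.0` doubles that to exaggerate the climb.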

jtamayo wrote:

The Table Lens paper presents a great example of the focus+context idea for displaying data. Two levels of information, however, seem like little when compared with the 19 zoom levels available in Google Maps. What's even more interesting is that the information at each zoom level of a map changes, depending on what's interesting at that scale.

I wonder whether it would be possible to automatically generate meaningful zoom levels for large data sets that don't go too well with simply reducing the resolution of the data (like images).

rakasaka wrote:

I can't help but always think, with additional readings, how I could have improved upon previous iterations of assignments, and the more I read Tufte the more I feel that "extensive compromise" in an effort to display multiple levels of information is not only an option but may also be necessary. That being said, I think small multiples are a good solution for using space effectively, though I am somewhat saddened by the realization that my previous assignment was nowhere near effective in its use of them.

I like the notion that "if numbers are boring, you've got the wrong numbers"; however, I've also come across the issue where, even though the numbers are interesting, the manipulation done with them renders the result less than effective. I wonder if a successful visualization has ever been done with just numbers?


estrat wrote:

The Datelens slide reminded me of logarithmic calendars (they might be the same concept, I don't remember everything said in lecture). The idea is that while most calendars display time linearly, items far in the future are exponentially less useful to us. The difference between an event in 30 and 31 days is usually negligible, while the difference between today and tomorrow can be huge. So it might make more sense for a calendar to show you your schedule in such a way. I guess the NYC map we saw in lecture does something similar but for distance instead of time.

Edit: Actually, looking at the Datelens website, the concepts seem to be the same.
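The logarithmic-calendar idea described above can be sketched as a simple position mapping. (This is only an illustration; the function name, the 600 px height, and the one-year horizon are all invented, and DateLens itself uses a different, table-based distortion.)

```python
import math

def log_position(days_ahead, height=600.0, horizon=365):
    """Vertical pixel offset for an event `days_ahead` days away,
    compressing the far future logarithmically so that nearby
    days get most of the space. log1p keeps day 0 at offset 0."""
    return height * math.log1p(days_ahead) / math.log1p(horizon)
```

Under this mapping the gap between today and tomorrow is many times wider than the gap between day 30 and day 31, matching the intuition that near-term differences matter far more than far-future ones.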

hyatt4 wrote:

I liked the Multi-Scale Banking to 45 Degrees paper, and the approach of finding the optimal aspect ratio based on the power found in the data. At one point, the paper discussed the ability to locate a local trend/aspect ratio as opposed to a global aspect ratio. I think this could be very useful if you only wanted to look at a certain piece of the data. However, there may be complications and unwanted artifacts if the ratio were repeatedly optimized as the data changed. For example, suppose we wanted to analyze music data and found an exceptional amount of low-frequency activity at the beginning, mid-range activity in the middle, and high-frequency activity toward the end. Using a spectrogram-like technique (as discussed in the paper, taking just chunks at a time), we could optimize the aspect ratio throughout the music as we looked at the beginning, middle, and end. However, while we would have an optimal aspect ratio (in the space-utilization sense being addressed), we would lose some ability to compare the data directly. I think it might be useful to find the various strong components throughout the data and offer optimal options when viewing each section (in time), while also providing the option to keep aspect ratios from prior or later sections so that you could compare these sections directly.

strazz wrote:

@rakasaka I agree with your comment that additional readings make me reflect on the design choices I made on previous assignments and how I could've made better visualizations. If we consider sizing the most important visual encoding, I definitely would've made different choices. I also think that the "Powers of Ten" video is an excellent example of how to manipulate data by zooming in and out of it while providing visual cues about what is happening and when. Another great example of this is Photosynth (mentioned above), which allows for impressive image data visualization using position and size encodings. What I was left wondering after lecture was: if size is the most important encoding because of how our brains interpret visual information, then which secondary encoding would be most useful when working with size-encoded items, for example color, shape, or hue?

anomikos wrote:

I found the justification behind the banking-to-45-degrees theory slightly confusing, both while reading the paper and during the presentation, but arguably the examples seem to indicate that it produces far better results. However, as we have seen in class, there are strong arguments on both sides when it comes to the appropriate use of aspect ratio. In the end it comes down to what the creator of the visualization intended to focus on. I think that, more than anything else, the use of space in a visualization is tightly coupled with the point the designer wants to make and where they want to shift the focus, as indicated by the "World through the eyes of a New Yorker" and "Human cortex usage" examples.

abhatta1 wrote:

In the Stacked Graphs paper, I learned an important lesson. Initially the visualization seemed esoteric. I found it very hard to understand what was going on, especially in terms of the music (Last.fm) data. But when I read the description in the paper, it seemed to encode several parameters in a single visualization, which otherwise would never have been possible. In particular, the box office graph seemed to bring out trends that would not have been easy to see in a line or bar chart. I believe that some visualizations, even if apparently incomprehensible at first, might be ultimately beneficial.

acravens wrote:

The most important point I took away from the lecture was the summary point to think about focus+context not just in terms of how to show data (e.g., a fish-eye lens) but also in terms of filtering: what to show and how to filter it.

Here's another fun cartoon along the lines of the one shown in class; this is the World According to San Francisco and I think part of its power is that parts of it are just true enough to be profound.

These seem useful to me to remind us that implicit in any dataset is a point of view, determined by both what it contains as well as what it leaves out. How might our design choices minimize or maximize the effect of that point of view in the data itself?

asindhu wrote:

In going over the plots of carbon dioxide concentration and payroll changes in lecture, the main takeaway seemed to be that it's hard to say whether one aspect ratio or choice of banking is "good" or "bad", but that it depends on what you're trying to do with the data. I think there's a related but slightly different point as well -- it depends on the data itself and what kind of variance is meaningful in relation to your actual data.

For example, take the payroll graph. Instead of asking "is one graph more misleading than the other?" or even "what message are we trying to show here?" we could ask, "in the grand scheme of things, what kind of change in payrolls is actually significant?" This is often difficult, obviously, because it requires a much broader perspective and a consideration of the context. If we make that effort, though, we might be able to say that a certain percentage or dollar-amount increase could be considered "significant" in that it would actually have an effect on the employees and on taxpayers. Based on that analysis, we can then decide on an aspect ratio that appropriately shows the change actually present in the data as flat, shallow, or very steep, based on our own judgment of significance.

jdudley wrote:

After seeing some of the videos in class on Tuesday I was reminded about one of the great data viz TED talks.

David McCandless: The beauty of data visualization


felixror wrote:

I find the idea detailed in the "Multi-Scale Banking to 45 Degrees" paper very inspiring. Finding the right aspect ratio can help viewers distinguish and detect useful information (such as a trend) more effectively. The paper outlines two methods, namely spectral analysis and iterative locally weighted regression, for trend detection in order to decide the optimal aspect ratio at which to display a line plot. I think one of its biggest applications is the visualization of stock prices, where the perception of trend is crucial to the overall impression of a stock's performance. I subsequently looked up the stock charts on Google and Yahoo Finance and found that Google's charts adopt a lower aspect ratio than Yahoo's. My perception of the stock's performance varied as a result of seeing the two charts.

amirg wrote:

I found the Wang et al paper on guidelines for using multiple views very interesting. They present a series of rules for determining when it is useful to present multiple views for visualizing data and when it can be problematic. I like that they use a very structured approach to determine the costs and benefits of using multiple views in various scenarios (some of the benefits include aiding memory and comparison while the costs often include learning curve, computational cost, and display overhead). Though the authors present a really cool framework for understanding how multiple views can improve the power of a visualization, I would love to see more information on how we can analyze these guidelines more quantitatively, since the authors justify these guidelines mostly through examples. Overall, I think the paper makes the critical point that more content is not always better, but rather that one has to be judicious when choosing when and how to add to a visualization in order to use the space as effectively as possible.

jsnation wrote:

The banking-to-45-degrees concept highlights how important aspect ratio is to the message your figure conveys. Reading the paper, I wondered why the most common graphing tools for line graphs, like Microsoft Excel, don't use these concepts. The average Excel user probably isn't even aware of the effect of aspect ratio on their chart, and may just blindly accept Excel's default scaling for the plot. The last time I used Excel (which admittedly was 4-5 years ago, so things may have changed), I think the chart aspect ratio was based on a fixed size for graphs, and on whether you included legends and labels and how large those were. It seems like a tool like Excel, which is geared more toward the average person than the data visualization expert, should support features like banking to 45 degrees, which can greatly improve the effectiveness of a figure. Also, multi-scale banking would be a great benefit to the average Excel user, who might not be readily aware of the trends in their data they are trying to highlight, or of how to highlight those trends. I was surprised by how well multi-scale banking seemed to pull out a relevant scale even in data sets that are not inherently periodic, like the prefuse downloads or the financial data.

andreaz wrote:

Automating aspect ratio through the multiscale banking techniques sounds incredibly helpful in creating visualizations since aspect ratio is one less design element the designer must think about. Although the appropriate aspect ratio is ultimately dependent on the context of the data, it nevertheless is useful to have a scientifically-sound starting point. 

In lecture, I was surprised to hear that the overview+detail technique is not as beneficial to viewers as they would like to believe. Now that I've thought about it some more, though, the intuition behind that fact has started to make sense since the overview+detail technique forces viewers to constantly reorient themselves in space. As a result, the user's cognitive load and visual search times are likely to be adversely impacted. The focus+context technique seems like a better alternative since the overview and details are presented within the same image. Although the success of focus+context visualizations is highly dependent on the filtering applied, I don't necessarily believe that filtering techniques should always be unbiased. For example, I think the success of the New Yorker and the Ise Shrine visualizations is largely due to the artists' bias in filtering out certain details.

emrosenf wrote:

Does anyone else want one of these? I think I'm going to print one

ericruth wrote:

A lot of the use of logarithmic scales in this lecture seemed rather misleading to me. I can see how such scales could be useful to a highly trained eye that's used to interpreting a log scale, but overall it seems to me like the benefit does not outweigh the added confusion for most audiences. Log scales without lines to indicate the change of scale are even worse, and don't seem like they'd ever make data easier to correctly interpret.

In particular, I really didn't like the log-scale method of displaying stock prices, especially in the example where it removed resolution from the more recent data and thus hid a lot of the recent change in stock price. I understand that percentage changes can be important for stocks, but absolute changes can be equally important for knowing exactly how much money you are gaining or losing. In addition, I feel like people naturally form the "percentage" comparisons in their heads when looking at an absolute scale. In the example slide, the absolute scale clearly shows the stock's price dropping to half its value (because of the obvious *position* encoding), while the log scale barely shows this change. In my mind, it would be much more effective to display stock performance on an absolute scale and allow zooming for added detail.
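The linear-versus-log contrast described above can be made concrete with a tiny sketch (the prices, range, and function names are invented for illustration). On a log scale a halving in price always spans the same number of pixels regardless of price level, while on a linear scale it spans pixels proportional to the absolute drop:

```python
import math

def linear_pos(price, pmin, pmax, height=100.0):
    """Pixel position of `price` on a linear scale over [pmin, pmax]."""
    return height * (price - pmin) / (pmax - pmin)

def log_pos(price, pmin, pmax, height=100.0):
    """Pixel position of `price` on a logarithmic scale over [pmin, pmax]."""
    return height * (math.log(price) - math.log(pmin)) / (
        math.log(pmax) - math.log(pmin))
```

For a stock falling from 100 to 50 on a 10-to-100 axis, the linear encoding moves the line farther (a bigger visual drop) than the log encoding does, which is exactly the "barely shows this change" effect; and halving from 40 to 20 spans the same log-scale distance as halving from 100 to 50.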

sholbert wrote:

This reading and the lectures reminded me a bit of a couple visualizations I saw recently of social networks: 1. http://www.christianmarcschmidt.com/invisiblecities/#screenshots 2. http://www.independent.co.uk/life-style/gadgets-and-tech/news/munroes-map-for-social-networksrsquo-lost-souls-2111356.html

The first visualization bases the height of a location on the data density from Twitter and Flickr. By zooming in, one can navigate and view some of these posts.

The second is more of a comic, but bears some resemblance to real data. Because viewers aren't familiar with the topography of this fantasy landscape, many of the theories from the cartogram paper don't apply. The geological representation provides more of an interesting analogy to examine in the context of social networks. It is much more interesting than just looking at market-share demographic data about networks. Deductions can be made from the map, some accurate, some not so accurate. The important factor is that it engages users to try to dissect the analogy in the context of each social network.

adh15 wrote:

@esegel Thanks for reminding me of the iPAQ Halo interface. When Professor Heer presented that example during class, I was pretty dissatisfied with the result. However, after taking a brief look at the paper (Baudisch 03) I now understand a bit better what an elegant solution it presents. I was convinced by a figure in the paper that demonstrated the interaction as the interface zooms from a view that shows all five destinations down to street-level view. The prerequisite of seeing the first view that explicitly shows the big picture helped me understand how useful the halos could be to remember locations and distances. In the paper, Figure 6 also shows how on a circular display, the arc length of each halo is proportional to its distance. This property does not hold on the rectangular iPAQ display, but is interesting nonetheless.

selassid wrote:

I've been trying to think of situations in which view distortion accompanying data distortion eases interpretation. I wonder if a view distortion might be a better way of visually communicating a transformation. Parts that become visibly strange will stand out and give the viewer a sense of which regions of a transformation are stable and which allow direct comparisons to other regions. Seeing twisted or bent grid lines and ticks would be a cool way to demonstrate this, and animation might help too. One way to rationalize this is by thinking "the transformation is the data."

nikil wrote:

I was rather surprised that Halo (the Pocket PC program that used circles to communicate distance) was found to work so well. We've heard in class that humans have great difficulty telling the difference between curves and that it is much easier to distinguish different lines. I wonder if that is only difficult when judging differences between curves that are lined up on a graph and maybe aren't circles. Since the Halo team claimed that people were actually able to estimate the distances and differences correctly, maybe circles are key?
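For reference, the geometry behind Halo can be sketched roughly as follows. This is a simplification assuming a target straight off the right screen edge; the actual technique (Baudisch '03, mentioned in a comment above) handles arbitrary directions and all four borders, and the 20 px intrusion margin is an invented value:

```python
import math

def halo_radius(target_x, screen_right, intrusion=20):
    """Radius of a circle centered on the off-screen target so that
    its arc pokes exactly `intrusion` px past the right screen edge."""
    return (target_x - screen_right) + intrusion

def visible_arc_height(radius, intrusion=20):
    """Length of the chord the halo cuts on the screen edge (requires
    radius >= intrusion). Farther targets yield larger radii, hence
    longer, flatter visible arcs, which is the distance cue."""
    return 2 * math.sqrt(radius ** 2 - (radius - intrusion) ** 2)
```

So the viewer never sees the whole circle, only an arc whose curvature and extent encode how far away the target is, which may be why circle-versus-circle comparison on a graph is a poor predictor of how well this works.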

jbastien wrote:

Something I find very difficult is not just what proportions to give a single graph, but how to size and position different graphs that interact.

arievans wrote:

I really enjoyed the idea that Halo implemented, using circles to communicate the distance of landmarks that are off-screen. It got me thinking in general about ways we can represent information that is not currently in view. But then that got me thinking... I don't think we are often used to having representations of data with information off the view, besides mapping. Is that because we don't know very well how to do it effectively? Or because most data doesn't lend itself to having information off the screen?

I'm thinking it's actually a mix of both. But I think we're underutilizing the principle of "hey, there's more info off-screen in this direction that you might be interested in." Given that screen real estate is always a concern (especially, obviously, on mobile devices), it seems to me that there is substantial room to explore this domain.

If I had more time, I'd explore it myself! But in the meantime it's food for thought for everyone else :)

msewak wrote:

I never realized that distorting the aspect ratio of a graph could have such a large effect on how it is perceived. The CO2 level graph really put it in perspective for me. And for the sunspots, it was impossible to see the cycles without the correct aspect ratio. These techniques should be used in Excel; I'm not sure why they aren't!

sklesser wrote:

I think one has to be very explicit when using a non-zero or otherwise non-standard baseline to make trends easier to spot. One has to make very clear that the visualization shows relative changes and not absolute values. Maybe one should also consider including a smaller version of the graph with a standard baseline to give a sense of the absolute changes and make sure things are put in perspective.

mscher wrote:

I think the issue that arievans brought up about how to communicate that more data is off screen is really interesting. It brings to mind websites that face the question: how do we get the user to see the data below the fold? A lot of homepages these days have an incredibly long vertical, and they need to encourage users to scroll down the page. This is an interesting article that discusses the UX concerns with designs that go below the fold: http://iampaddy.com/lifebelow600/

While this is a relatively simple case of trying to reveal data that is off screen, these subtle hints and nuanced interaction techniques can be important ways of making sure that a user sees all the info you want them to.
