Lecture on Sep 21, 2009. (Slides)

Readings

  • Required

    • Chapter 1: Information Visualization, In Readings in Information Visualization. Stuart Card, Jock Mackinlay, and Ben Shneiderman. (pdf)

  • Optional

    • Decision to launch the Challenger, In Visual Explanations. Edward Tufte. (pdf)

    • The Value of Visualization. Jarke van Wijk. IEEE Visualization 2005 (pdf)

    • Graphs in Statistical Analysis. F. J. Anscombe. The American Statistician, Vol. 27, No. 1 (Feb., 1973), pp. 17-21 (jstor)

Comments

jheer wrote:

Card, Mackinlay, and Shneiderman posit a dichotomy between "scientific visualization" and "information visualization" based primarily on the input data: physical "scientific" data (e.g., air flow over an airplane wing) often has a natural spatial mapping, whereas "abstract" data (e.g., stock prices or an online social graph) requires that spatial mappings be designed. To what extent do you agree or disagree with this distinction?

Card et al. also mention Larkin and Simon's study of people solving physics problems with and without the use of diagrams. If you are interested, I recommend reading their paper Why a Diagram is (Sometimes) Worth Ten Thousand Words.

cabryant wrote:

I believe that this dichotomy does exist, but as a function of visualization objectives rather than of the characteristics of the measured phenomenon. Science, by most definitions, is a systematic approach to codifying knowledge within a specific domain. Whether or not the domain is inherently physical, once the collected data is codified, it is abstracted from its original form. Visualization is subsequently employed with the ultimate objective of conveying specific information to the human mind. This objective should inform the decision to employ a spatial or abstract form of visualization. Temperature, for instance, is a physical phenomenon, which is often effectively conveyed in an abstract thermometric form. However, if the objective were to convey the activity of molecules with respect to temperature, an interactive, spatial visualization would likely be more appropriate. Alternatively, spatial representations are often most effective for conveying abstract concepts (e.g., the use of hills and valleys to represent the “landscape” of a search domain).

The spirit of this position may be used to question the doctrinal nature of Tufte’s claim: “There are right ways and wrong ways to show data; there are displays that reveal truth and displays that do not.” Such a position must take into account the objective of the display. It is faulty to assume that engineers will abstract what is likely relevant data into a simplified diagram of the form that Tufte devises. Tufte claims that “Thiokol’s numbers . . . for O-Ring damage . . . break the evidence into stupefying fragments [erosion, soot, depth, location, extent, view].” Yet this level of information is likely critical to making informed engineering decisions with respect to O-Ring development and deployment. Furthermore, Tufte’s claim that the temporal scale of Thiokol’s visualizations is irrelevant may not ring true to a Thiokol engineer, who may be interested in whether or not subtle changes in internal O-Ring development/deployment, over time, have had an impact on O-Ring integrity (e.g., if all O-Ring failures were isolated to a particular time, then variables other than ambient temperature should be taken into account). To the engineers, Tufte’s abstracted diagram would likely qualify as “not revealing the truth.”

Strikingly, a more effective diagram to support Tufte’s position would be a single binary representation indicating whether or not to launch, as his own version still leaves open the potential (however remote) for misinterpretation! I would posit that an effective compromise would be a tiered set of diagrams, ranging from the binary (launch: yes/no) to the detailed diagrams provided by Thiokol engineers, which may be referenced when the more abstracted diagrams yield more questions than answers.

malee wrote:

Armed with the CMS reading, it seems like we could create visualizations (and do Assignment 1) with a bit more insight. We would be more critical about how expressive and effective the visualization is, whether the mappings make sense, whether pattern processing is automatic or controlled, etc.

However, it still feels like there is a lot of trial and error associated with creating a good visualization. Even if we have a categorization of nominal/ordinal/quantitative, there is so much choice left in figuring out mappings and the final visualization. It would be interesting to use Mackinlay's Automated Design work (mentioned in lecture 2) to assist in exploring possibilities in the design space, particularly when it's unclear what the message of the final product should be.
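As a rough illustration of what that kind of automated exploration might look like, here is a minimal sketch in Python (a toy example of my own, not Mackinlay's actual APT system; the effectiveness orderings, channel names, and sample fields are simplified assumptions) that enumerates channel assignments for a set of typed fields and ranks them by an assumed effectiveness ordering:

```python
from itertools import permutations

# Simplified per-type effectiveness orderings (most to least effective),
# loosely inspired by Mackinlay's rankings; the exact lists are assumptions.
EFFECTIVENESS = {
    "quantitative": ["position", "length", "angle", "area", "color"],
    "ordinal":      ["position", "color", "area", "length"],
    "nominal":      ["position", "color", "shape", "texture"],
}

def candidate_designs(fields, max_designs=5):
    """fields: list of (name, type) pairs. Return the best-ranked channel
    assignments, never reusing a channel within one design."""
    channels = sorted({c for ranks in EFFECTIVENESS.values() for c in ranks})
    scored = []
    for assignment in permutations(channels, len(fields)):
        score, ok = 0, True
        for (name, ftype), channel in zip(fields, assignment):
            ranking = EFFECTIVENESS[ftype]
            if channel not in ranking:
                ok = False
                break
            score += ranking.index(channel)   # lower index = more effective
        if ok:
            scored.append((score, assignment))
    scored.sort(key=lambda s: s[0])
    return [dict(zip([n for n, _ in fields], a)) for _, a in scored[:max_designs]]

# Hypothetical fields for a small stock data set.
fields = [("price", "quantitative"), ("sector", "nominal"), ("rating", "ordinal")]
for design in candidate_designs(fields):
    print(design)
```

Mackinlay's real system also checks expressiveness (whether a channel can represent a field at all) and handles compositions of encodings; this sketch only captures the flavor of ranking alternatives automatically rather than by trial and error.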

joeld wrote:

I recently had a baby and we are going through sleep training now, so I was very pleased to see the baby's sleep-wake patterns in figure 1.8. The red lines in the plot are a little misleading though - most infants go through multiple sleep-wake cycles per day on about a five-hour interval rather than the 25-hour interval that was suggested.

Contrast the visual starkness of most of the black and white plots with the stunning colors of figure 1.9. In our discussion of information "channels" from Tuesday we neglected the "emotional" or aesthetic channel. That is to say - people form judgements about whether a diagram is "beautiful" or "cluttered" or "interesting". These judgements are a gestalt of the specific data axes (size/shape/color/texture) that make up the data elements, and, because they operate on an emotional level, probably dictate a great deal about whether a visualization is persuasive or useful.

:)

gankit wrote:

There was an interesting section (under "Perception", pages 24-25) in the CMS reading on our visual system (i.e., the eyes). Summarizing that section, I have come to understand that our eyes are constantly observing images at three levels: (1) the retina generates a highly blurred overview of what is happening around us, focusing on some points and blurring out the tiny details; (2) movements of the eye (and head, if required) change the center of vision (the fovea) to stay alert to changes in the visual field and start observing them - the eyes have about a 200-degree field of view and register changes about 70 times per second; (3) finally, receptors in the fovea keep observing the finer details and reconstruct the detailed picture, and they also keep moving to keep the picture refreshed and recent.

Understanding how the eyes function (even at a basic level) is great for guiding visualization design, as we can immediately derive some basic insights: (1) Don't move too many things at a time, since the eyes are highly receptive to change; more than one localized change will get you into trouble. (2) Keep the main "rolled-up" view in focus and steady if you want the user to understand it well. (3) Too much variation (too many colors, or different shapes, textures, etc.) is also distracting and hence should be minimized. (4) Changes, if any, should be slow enough for the eyes to be able to fully follow along.

Any others?

aallison wrote:

I agree with most of the points in your second paragraph, but have a few comments. I think (1) is a great insight to have, but many visualizations benefit from moving many things at once (e.g., Hans Rosling's visualization presented at TED). You have to treat a lot of movement as a cognitive cost and design with that cost in mind. Minimize all other elements of the visualization so that the viewer can focus on what is important.

I think number (3) is synonymous with keeping your visual design minimal. Only include what is necessary. If you have reason for it, you might have to use 10-20 shape/shading/color changes.

I take (4) to be referring to animation, and I agree with your point. However, animation should only be used if it has a clearly defined purpose. Sliding information in a left-to-right fashion suggests a certain layout of information in the viewer's mind, which in turn implies certain things about that information.

rajsat wrote:

My comment on sleep-wake cycles (page 4, CMS): I feel the red lines in the diagram provide a misleading wake-cycle period for infants in the first 5-6 weeks after birth. From a visual inspection of the dots and lines in the first few weeks it is quite clear that the sleeping times are totally haphazard; essentially, we could draw a different red line between two of those drawn and it would also seem to fit the wake-sleep cycle model. All this said, I do feel that this visualization is probably the best way to represent such a massive data set. For infants over 7 weeks, the visualization perfectly illustrates the 25-hour cycle slowly progressing into the 24-hour cycle.

By the way, thanks Allison for mentioning Hans Rosling. His talks on poverty and on HIV prevalence in developing countries illustrate how simple, dynamic animation helps in amplifying cognition. In this video, Joann Kuchera-Morin demonstrates the AlloSphere, which she describes as a "large dynamically varying digital microscope that's connected to a supercomputer". From what I understand, it basically maps mathematical algorithms in time and space into visual and sonic representations to reveal new patterns in information obtained from fMRI data from regions of the brain, emission spectra from bonds, and even the electron spins inside the hydrogen atom. What I like about this demo is the way they use sound to represent things like density levels in the brain and to indicate orbital-level electron spins in atoms.

As a side note, I think this demo clearly demonstrates that scientific data may not be geometric in nature. Of course, a vast majority of scientific data - like air/fluid flow, stress tensors, geological phenomena like earthquakes, erosion, and drift, or even aspects of human anatomy like lymph or hemoglobin content - can be represented using natural spatial mappings. But some of the data described in this video are not necessarily intuitive. Take, for instance, their representation of the n-dimensional Schrodinger equation in time. The positions of the electrons in the lower 3 orbitals are not spatial/geometric, as is evident from the video. The sounds emitted from these undulating electrons give an insight as to when a photon will be emitted, and again this is not spatial.

bowenli wrote:

I don't agree with the article's portrayal of "scientific" data vs. "abstract" data. While I agree that measurements taken from physical space may be easier to map into physical space, I don't think this is always the case. I think there is always a choice to be made when displaying any type of data. In the case of the ozone layer, the 3D representation was chosen, but another one could have done just as well, if not better. It also feels like the distinctions between "data graphics," "scientific visualization," etc. are unnecessary and designed to impress marketing-type folk.

I think an interesting question to ask is how closely our current visualizations are tied to technology limitations. The article mentions cycling through different variables or using special glasses to view 3D. Will future technologies allow us to see or feel an even greater number of variables through means we can't even imagine?

Maybe a bit beyond the scope of this class, but it's interesting to think about which of the traits discussed for an effective visualization really come from "physical" properties of a human, and which are cultural. It is known that language plays a huge role in how people think about things and can change your beliefs. Also, certain repetitive actions in modern-day life (like driving, computer use, etc.) make us good at certain actions and poor at others (hunting in the jungle). What is the net effect of these situations? And also, can you induce (i.e., train) people to be efficient at a certain type of display?

The discussion of the spatial substrate and of distortion is interesting because it makes me think of the types of underlying semantics that we assign to visual cues. These can be used to pack in more data than would otherwise be possible. For example, because we understand that a "3D" graph is supposed to look 3D on a 2D screen, we can glean more information from it than from the distances between pixels alone. This response is a cultural agreement that such images mean "3D". Can we form other such agreements that do some of the mental processing for us and let us pack in more data?

wchoi25 wrote:

I like how CMS posits the information visualization problem in contrast to scientific visualization. While I agree, as discussed during class, that the boundary between the two may not be as crisp as presented in the paper, I think it is definitely helpful to see the single distinguishing feature of the kinds of visualization that are particularly interesting: they provide visual representations of otherwise non-visual concepts or data.

In that context, perhaps the most important contribution of this chapter is its reference model for visualization (Diagram 1.23), outlining the process through which raw data are transformed into views that aid human tasks. I found this to be particularly helpful as I read through the rest of the chapter where CMS provides more details on each step.

Yet this model, to me, presents one of the more difficult generalizations to accept. In many cases, visualization does not seem to follow these steps so closely. For one, some visualization tasks do not really go through what CMS would call data tables, as there may be no real measurements or data points of any kind. In fact, the raw data itself could start with something visual, such as pictures or movies. What if the task at hand is to come up with a visualization that captures and summarizes a few hundred photographs or long footage from a surveillance camera? While these are already in a visual form, I would still label these tasks "visualization" as they strive to encode some data into a new visual. Going forward, I think visualizations not only of tuples of numbers (i.e., recorded data) but also of other media like prose text, spoken language, images, video, or music will be increasingly important. CMS's reference model does not seem to capture these sorts of tasks well.

vagrant wrote:

At the risk of repeating opinions already shared, the primary lesson I gained from the main reading by Card, Mackinlay, and Shneiderman is that data visualization aims foremost to provide insight. In this context, the debate between “scientific visualization” and “information visualization” is best settled by whichever option better fosters decision-making, discovery, or explanatory communication.

That said, let us consider the case where one puts aside whatever predetermined schemas or objectives may exist for a given data set and instead focuses purely on representing the data first (from which schemas or objectives may later be derived). From here, I would argue that there is some intuitive merit to the proposal that physical data naturally suggests a spatial mapping whereas abstract data requires a design.

Allow us to focus on the process of mapping data to visual form. In regard to the first assignment, it appears to me that the most basic approach is to plug the non-spatial data values into a software program in order to experiment with visual mappings, determining the variety of visual structures available. From the selections given, one may divine that a particular representation best demonstrates a specific pattern, or that another keenly detects a unique idea. Whether the visualization ultimately chosen is classifiable as a scientific or an informative process is beside the point of the exercise, as the question loses sight of the goals of data visualization in general. However, for most students, this approach to information visualization will likely lead to a satisfactory result.
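For what it's worth, here is a minimal sketch of that kind of experimentation loop (the library, field names, and numbers are my own assumptions, not anything from the assignment): plot the same small data set under several different visual mappings and compare them side by side.

```python
import matplotlib.pyplot as plt

# Hypothetical nominal and quantitative fields standing in for assignment data.
categories = ["A", "B", "C", "D", "E"]
values = [23, 48, 12, 35, 29]

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
axes[0].bar(categories, values)              # position + length encoding
axes[0].set_title("bar chart")
axes[1].scatter(range(len(values)), values)  # position encoding only
axes[1].set_xticks(range(len(values)))
axes[1].set_xticklabels(categories)
axes[1].set_title("dot plot")
axes[2].pie(values, labels=categories)       # angle/area encoding
axes[2].set_title("pie chart")
plt.tight_layout()
plt.show()
```

The point is less the specific charts than the loop itself: cheap to regenerate, so you can judge which mapping best exposes the pattern you care about before polishing anything.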

Beyond debating the validity of visualization taxonomies, I was wondering how the figures given in the reading were generated. As a novice at creating graphical data representations, I wonder: are most others in my situation? What software might people recommend I practice with for this assignment, and for producing data visualizations at large?

hhzhang wrote:

I am also not quite sure about the distinction between "scientific visualization" and "information visualization". Here is my understanding, which may conflict with the reading.

My intuition is that "scientific visualization" is used when the set of data variables is quite limited and the relations among the data are somewhat fixed. Depicting such data requires deep insight into the specific discipline, and many prerequisite concepts and conventions come into play when interpreting the visualization. Hence there is not much point in seeking generally applicable visualization techniques in that area right now.

For "information visualization", by contrast, the target data often have a plain or direct meaning. We can view them in different combinations of relations as we wish, so we can summarize and visualize these data using general "information visualization" schemas, which are the focus of this course.

To design visualizations, knowledge of existing work and theory is important, as is being comfortable with popular visualization tools. I agree with vagrant; some software recommendations would be helpful.

jqle09 wrote:

I agree that there is a class of visualizations, scientific visualizations, that are often based on the inherent physical space from which they are derived, but I think the distinction from what we call information visualization only exists because we tend to manipulate concepts into forms with which we are familiar. Familiar forms provide base descriptions from which we can derive general insights, and for scientific visualizations it is natural to derive our visual abstraction from the corresponding physical space. But I think the separation from information visualization is not as clear as described in CMS.

I think scientific visualization is a subset of information visualization in which there exists a mapping of data to a space that our mind does not have to create from scratch, because a visual form already exists in what we can see in the world. I also agree with vagrant and bowenli: depending on what we want to discover in our data, tending toward a physical representation may be inadequate.

@vagrant - I am definitely in the same boat as you with regard to creating data visualizations. I mostly use Excel, probably because I think it is good for trying a lot of different visualizations. Maybe the graphs are not the prettiest, but being able to experiment is nice. I also hope someone with more experience can guide us toward something better, but I suppose this is also why we are taking the class.

dmac wrote:

I also hesitate to make a meaningful distinction between scientific and information visualization. It may seem reasonable to classify inherently spatial data into their own category, but as stated by others I think this temptation exists largely due to the following:

1. A large percentage of all data that is interesting to visualize happens to be spatial.

2. We naturally interact in a spatial world, which makes it easier for us to interpret data in a spatial way.

I think there is a fine line between pointing out the large subset of all visualizations which are spatial and dividing all visualizations into two disjoint sets of scientific and informational.

Something else about the reading that struck me was that the authors define "visualization" somewhat narrowly as "computer-supported" and "interactive" (page 7 in the pdf). While it is entirely within their rights to do so (perhaps they only wanted to explore the subset of computer-supported, interactive visualizations), I think it brings up the interesting question of how hand-drawn, static visualizations differ. In Playfair's time it seems that much of a visualization designer's job was in creating a view of some data that would relate a message to an audience, whereas with computer-supported, interactive visualizations we are now better equipped to explore the data, asking questions and receiving answers, creating a "conversation" with the visualization.

Of course, just because a visualization is not computer supported doesn't mean it can't be interactive. Mechanical models of our solar system are one example.

alai24 wrote:

I am also hesitant to differentiate between scientific and information visualization. It appears that most of the examples given boil down to: if the data is spatial, one can easily find a spatial representation for it, which is not much of a statement at all.

dmac also brings up a great point about what should be included under the umbrella of 'data visualization.' A lot of static visualizations in the form of 'infographics', like the ones you'd find in popular science magazines, encode a lot of information (like how stuff works), though they aren't what I think of when I think of data vis.

@bowen I think now should be a pretty interesting time to examine how current technological limitations affect our interaction with data, due to the explosion of touch-screen devices and the arrival of Microsoft's Natal. Personally, I hope Minority Report's style of interacting with data (though probably more mundane than examining future murders) comes true.

fxchen wrote:

@alai24 Wow. This is the first time I looked at Project Natal, and I just spent the past half hour looking at different videos and reading about it. For those who are clueless: it's Microsoft's motion-sensing / interaction system for Xbox that includes facial and voice recognition, gesture control, and skeletal mapping.

Back to critiquing the Tufte reading: it is essentially a very thorough walk-through of every step in the process leading up to the Challenger explosion, along with Tufte's analysis of the situation. I appreciated the author's reflective narrative (though very long) in the piece. The most interesting part for me was reading the author's thought process in picking apart the situation and Tufte's arguments.

vad wrote:

The infant sleep cycle chart really impressed me, and I can see from the comments above that I'm not the only one. I'm having a hard time understanding why three days were chosen as the width of the chart; does anyone have any theories?

jieun5 wrote:

I had mentioned in class that "abstract data" in information visualization could arguably be mapped in a naturally intuitive way, consistent with how they are organized and/or processed by our brains. I wonder to what extent this is possible, especially with recent developments in neuroimaging such as fMRI and EEG.

For instance, fMRI provides very detailed spatial resolution of our brain as we carry out a task, while EEG provides detailed temporal resolution. In this way, neural imaging could be considered a visualization of abstract data being processed by our brain. Abstract data, then, could be visualized in terms of the spatial (and temporal) relationships between cortical structures. And it is often the case that structures that are closer together (or with dense neural connections between them) are more highly "related".

There is a fascinating example of visualizing event segmentation in music through fMRI data (Sridharan et al., "Neural dynamics of event segmentation in music: converging evidence for dissociable ventral and dorsal networks", Neuron, 2006, 50(4):643). Figure 2 on page 4 of this article shows images of cortical responses as a function of time (from -6 to +6 seconds) around a movement transition in classical music. Neural imaging data such as these make me doubt any kind of clear distinction between "information visualization" and "scientific visualization".

zdevito wrote:

Though much has been said on CMS's distinction between scientific visualization and information visualization, it seems like the main objection to this dichotomy is the naming scheme. There clearly are "scientific" visualizations that are not based on some physical analogy. For instance, the data we are looking at for our first assignment is the result of biology experiments, but the actual values being visualized do not have a simple real-world analogy that lends itself to a "scientific visualization." Rather, it seems that the distinction is between physically based visualization (i.e., visualization that associates data with some real-world image) and abstract visualization. As we have seen in the readings, visualizations based on maps and other physically based data preceded more abstract forms such as scatter plots by a long period of time, so it is natural to want to treat them specially.

mpolcari wrote:

It seems the differentiation is actually between data with a natural spatial mapping and data without - "Physical/Scientific Data" may be a misnomer. If I were charting the percentage of clicks on a webpage that went to various links, the underlying webpage could be the spatial mapping (e.g., http://www.labsmedia.com/clickheat/index.html). I'm not sure such a visualization would be described as "Physical/Scientific".
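To make that concrete, here is a rough sketch of the click-heatmap idea (the page size, click data, grid resolution, and libraries are all my own assumptions, not taken from the ClickHeat tool): bin click coordinates over the page and overlay the counts on the page's own coordinate system.

```python
import numpy as np
import matplotlib.pyplot as plt

page_w, page_h = 1024, 768                      # assumed page dimensions in pixels
rng = np.random.default_rng(1)
# Synthetic clicks concentrated around two hypothetical links on the page.
clicks = np.vstack([
    rng.normal((200, 150), 30, size=(500, 2)),  # a navigation link
    rng.normal((700, 500), 50, size=(300, 2)),  # a content link
])

# Bin clicks into a coarse grid over the page.
heat, _, _ = np.histogram2d(
    clicks[:, 0], clicks[:, 1],
    bins=(64, 48), range=[[0, page_w], [0, page_h]],
)

# Overlay the counts on page coordinates (origin at the top-left, like a browser).
plt.imshow(heat.T, extent=[0, page_w, page_h, 0], cmap="hot", alpha=0.8)
plt.colorbar(label="clicks per cell")
plt.title("Click heatmap over page coordinates (sketch)")
plt.show()
```

In a real tool a screenshot of the page would be drawn underneath the heat layer, so the spatial substrate really is the page itself rather than any physical space.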

anuraag wrote:

It seems to me that the real distinction between what the article calls "scientific visualization" and what it calls "information visualization" has less to do with the nature of the data (whether it is physical, abstract, spatial, etc.) than with the intent behind visualizing it. What we've been calling scientific or spatial visualization is all about taking data that represents some physical source scene and returning it to a state closer to the initial physical source. What was once "visual" in the sense that the natural phenomenon could be observed with one's eyes was compacted into a quantitative description, and scientific visualization is about re-inflating this description into something that resembles the original visual phenomenon. This form of visualization does not seem to have as its primary concern "cognitive amplification", as do some of the other instances noted in the article.

Some of the visualizations having to do with telecoms or bonds, by contrast, take data with non-visual source phenomena and amplify cognition by making them visual. My sense from the article is that information visualization creates a "new" mental model to amplify cognition, while scientific visualization simply restores data to a form closer to our original mental model.

mikelind wrote:

One of the things that really interested me in the Card, Mackinlay, and Shneiderman article came in the description and visualization of the cotidal chart. I thought it was an extremely interesting chart, first off, but I was very curious about why there was a tracing of Magellan's route on the map. When reading over the description and explanation, I found nothing about the reason for this addition, so I wondered why it was there and found myself searching over and over for some trend that made sense with the Magellan information. I couldn't find anything, so I would be extremely interested to know if there is a connection or if it just happens to be there. But I also thought it was interesting that a piece of somewhat cryptic data can make a visualization more interesting and engaging. I'm now wondering if there are certain situations where the best visualizations are not necessarily those which show the data most easily, but rather those that are most engaging to the viewer, even if certain useful information may be slightly obscured. Does having to actively think about the information cause us to retain it better?

mattgarr wrote:

I found it ironic, returning to the CMS reading, to find that pages 24-25, which include a discussion of the eye's function, contain a key example of what Tufte refers to as chartjunk. While reading these pages, I found that my eye constantly returned to figure 1.29, a graphic depicting the geometry of the retinal surface. It was a striking example of how a moire pattern that "dances" distracts a reader or viewer of graphics toward non-pertinent information. Where Tufte does not discuss how the eye functions, we find here, in the context of a discussion of that function, a graphic that works counter to it.

It is interesting that the moire pattern "dances" regardless of whether it is in the region of the fovea or in the periphery. In fact, there is a shimmering quality in the periphery that only seems to magnify when the graphic is looked at directly. CMS provides a good explanation of why the eye is compelled to return again and again to the chart in the example of driving a car. Our visual perception is based on accumulated data: when we see a moving car changing lanes, as the example goes, we dedicate more visual resources to understanding this rapidly changing condition. In similar fashion, because the moire pattern creates an illusion of motion, we feel compelled to return again and again to it, distracting us from the text we are trying to process.

tessaro wrote:

I was struck by CMS's focus on scientific data being "often physically based", with the attendant claim that the data's source defines its categorical utility. It appears that the notion of abstraction has been placed in opposition to a true spatial mapping with a real-world referent. Whether this is a distinction that makes a difference is challenged by the question of what constitutes a scientific data space, given that so much of modern science is not tied to a simple spatial analogy.

In the lecture slides on the Value of Visualization, the example of Watson and Crick's DNA molecule was shown as a three-dimensional model aiding cognition of a then-unseen molecular world. Interestingly, the visual aid that allowed Watson and Crick to posit the double-helical structure was based on Rosalind Franklin's X-ray diffraction image of the molecule. The image generated from this tool of crystallography was itself an abstract spatial mapping of DNA, a projection which left the trace signature of a helical structure. The distinction between abstract and spatial mappings as synonymous with informational and scientific visualizations can become blurred in the very process of doing science.

rnarayan wrote:

I agree with many of the comments that have gone before, and at the risk of beating a dead horse, it seems to me that this nomenclature and taxonomy may be a bit superficial. A natural spatial mapping doesn't necessarily seem to be an intrinsic characteristic of all scientific data. From a simple voltage-current-time series plot to a more involved entropy-heat-temperature visual, classes of scientific data that are oblivious to spatial relations aren't too difficult to find. Meanwhile, the information visualization examples in CMS that have abstract data input (e.g., telecom fraud, home finder) were not entirely isolated from inherent spatial mappings either.

Further, several forms of abstract data can be comprehended without a spatial or other mapping. Certainly, a stock market derivatives chart doesn't require one for an economist or a stockbroker. Likewise, for a scientist, a visual of a higher-order harmonic function abstracted as a Fourier series doesn't require one. To a certain extent, the compaction of information here is into a visual representation that a person in a specific domain is capable of comprehending using the notations and tools of that domain - which may or may not bear a relation to spatial mapping.

nornaun wrote:

I think the distinction between scientific and information visualization is useful in setting up a framework for visualizers. However, it appears to me to be a merely convenient distinction rather than something you should hold to religiously. Also, as others have already mentioned, the line dividing the two categories by this criterion is blurry.

One distinction that I like is by the purpose of the visualization: to communicate or to solve a problem. This distinction is even more ambiguous. However, it serves two purposes. First, readers who are informed of the type of a visualization know what to look for in the graphics. (This will be especially useful in textbooks.) Second, it can help a visualizer keep the end goal of her work in mind. Working with a lot of data, visualizers might easily lose sight of their goal under a pile of information.

rmnoon wrote:

I'm having a hard time reconciling many of these readings with my scientist's desire for a "right way". Everyone seems to agree on the fundamental concepts of WHY we should take an active interest in visualization (and there's even a general psychological/neurological consensus), but everyone seems to have his or her own formal system for constructing and evaluating reliable visualizations.

Perhaps this is the fate of those who live/work/think at the intersection of art and science. I can see parallels between Bertin/Tufte/etc. and the scientist-artists of the Renaissance. The Vitruvian Man, for example, seems to me to be the epitome of a functional data visualization.

Exciting stuff. Really exciting stuff.

nmarrocc wrote:

I like how Card and Mackinlay talk about knowledge crystallization as removing, reducing, or abstracting data. When we look at a visualization and are attempting to use what we see as an aid to cognition, we are really trying to form patterns in our mind. Keeping this in mind, maybe we could design tools that help us arrange data into patterns more easily. We could build interactive visualizations that let us move around and manipulate data visually, but also have the computer suggest ways that you could move the data, or perhaps automatically draw lines around patterns that it finds interactively as you move around the data - a sort of combination of visual analysis and data mining. A tool like this could also aid in making charts or graphs if your goal is not to aid cognition but to convey a story. For example, let's say you have some data and you want to show how much greater one pattern is than another; the most effective way of doing this is to remove as much superfluous information as possible. In other words, you want the abstraction to be as clear as possible. Once you identify the pattern you want to convey to the user, the tool could then attempt to produce the simplest view (orientation, spacing, coloring) of those data possible.
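As a very rough sketch of the "automatically draw lines around patterns" idea (the clustering method, synthetic data, and libraries are my own assumptions; nothing like this appears in the reading), the following clusters 2-D points and outlines each cluster with its convex hull:

```python
import numpy as np
from scipy.spatial import ConvexHull
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Synthetic 2-D data standing in for whatever the user has loaded.
points = np.vstack([rng.normal(c, 0.4, size=(60, 2)) for c in ((0, 0), (3, 1), (1, 3))])

# A deliberately naive k-means; a real tool would choose k and the method itself.
k = 3
centers = points[rng.choice(len(points), k, replace=False)]
for _ in range(20):
    dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    labels = dists.argmin(axis=1)
    centers = np.array([
        points[labels == i].mean(axis=0) if np.any(labels == i) else centers[i]
        for i in range(k)
    ])

# Scatter the data and "draw lines around" each discovered pattern.
plt.scatter(points[:, 0], points[:, 1], c=labels, s=12)
for i in range(k):
    cluster = points[labels == i]
    if len(cluster) >= 3:                       # a hull needs at least 3 points
        hull = ConvexHull(cluster)
        ring = np.append(hull.vertices, hull.vertices[0])
        plt.plot(cluster[ring, 0], cluster[ring, 1], linewidth=1)
plt.title("Automatically outlined clusters (sketch)")
plt.show()
```

An interactive version would recompute the clustering and the outlines as the user filters, brushes, or rearranges the data, which is the visual-analysis-plus-data-mining combination described above.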
