Lecture on Oct 28, 2009. (Slides)



vad wrote:

I would like to add two of my favourite text visualizations to the amazing material that we saw in lectures.

The first one is a visualization of E.E. Cummings' collection of poems, entitled "I X I".


Look here for a description.

The second one is a website that is dedicated to speaking without words (so I probably should not describe it). Instead here is a taste:



vagrant wrote:

It is almost inexplicable how text visualization has become a fetishistic curiosity in my mind. I've always been a fan of animated and interactive data visualizations, but until very recently the idea of representing bodies of text had never entered my mind.

Now, with the readings and examples displayed in class, I wonder if the topic could become an obsession for me.

Text visualization is a field that strikes me as exciting in its comparative, even competitive spirit. More than any other form of visualization we've studied thus far, text representations seem to congregate around solving the same focused set of problems. Given any test bed of data, the strengths and weaknesses of multiple text visualization solutions can be immediately observed. What tickles me in particular is that there seem to be few observable limits to the ideas that exist for a body as apparently bound as text.

Now, beyond my surprise and admiration, there is a part of me that wonders if perhaps a very large and very redundant relational database could encapsulate many of the capabilities of visual implementations. Perhaps there is a reason why many of the visualizations we have seen stem out of or depend on some sort of query interface.

jieun5 wrote:

As mentioned in class, there are many parameters about text beyond size (encoding of color, width, position, and font type) that are left "unused" when creating text clouds.

Is this because it is difficult to find intuitive mappings between those other visual parameters and the features of the corpus? Or has this possibility been under-explored?

For instance, are there examples that use color to denote the emotional valence of words, or width to denote the relative timing each syllable takes up in speaking out the words, or different font types and families to illustrate the overall "atmosphere" in which the texts are presented (e.g. Comic Sans for informal and young, Old English for formal and historical)?

If anyone could give me links to examples such as these that make use of other useful parameters, I'd really appreciate it-- as I may be incorporating that into my final project. :)

bowenli wrote:

I asked the question about text visualization in other languages. In my mind I was just thinking how much of a failure the Sarah Palin speech word cloud would be if it were in Chinese. In Chinese, the spacing and placement of each stroke in a character is crucial to its coherence, so writing something with uneven spacing or sideways really kills readability. Another interesting point is that Chinese characters are all supposed to be written in a square. Every. Word. So the word cloud will not suffer from the problem of longer words taking up more area, though it may be more boring because everything's so square.

@jieun5 I think part of this space overlaps with typography and more traditional graphic design themes. is a pretty good place to start for blogs to check out.

I really liked the example in class relating to abortion discussions in Congress. It's interesting that they were able to make a chart with the most used words and the least used words. Unlike data sets involving numbers, each word has a meaning, and it may be possible to assign importance values or degrees of interest to each word. Based on that, you can tailor the visualization to be more effective. In that sense, I think just using the count as an indication of interest is kind of simple given the NLP available today.
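Concretely, the count-as-interest scheme might look like this (a hypothetical Python sketch, not any particular tool's code -- a real system would at least filter stopwords, and could swap in NLP-derived degree-of-interest scores):

```python
from collections import Counter
import re

def word_weights(text, top_n=10):
    """Naive word-cloud weighting: importance = raw count,
    normalized so the most frequent word gets weight 1.0."""
    words = re.findall(r"[a-z']+", text.lower())
    top = Counter(words).most_common(top_n)
    max_count = top[0][1]
    return [(w, c / max_count) for w, c in top]

# weights drive font size: 'war' would render at full size,
# 'peace' at a third of it.
weights = word_weights("war war war peace")
```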

Edit: I just remembered this which is absolutely incredible.

joeld wrote:

Jieun's idea is really interesting if you stop to consider that text itself is really a visualization of the sound of speech. This is perhaps more obvious in languages like Japanese where the mapping between shapes and sounds is more strict. Have you ever noticed that the letter "O" is shaped the same way your mouth is shaped when making the sound "O"?

It's not exactly a visualization of the structure of text, but Google Trends shows some history of people's interaction (or attempts to interact) with the Google corpus:

I do like Viegas' Mountain view much better though.

wchoi25 wrote:

The Hearst reading points out many text visualizations (especially for search interfaces) that are flashy and visually appealing, but ultimately not so usable. There just seems to be one fundamental hurdle for these clever uses of space to organize and represent text. We are so familiar with reading text left to right, top to bottom, neatly organized in horizontal lines on a 2D surface, that an organization that diverges from this too much becomes exponentially harder to understand. Particularly challenging is fitting all those long text labels in your visualization while still keeping them readable. Perhaps these labels could be delegated to another channel such as audio, where hearing bits of words with disparate source positions and distances is a relatively more familiar task for us.

rnarayan wrote:

@wchoi25 - playing with the TextArc viz, I found that it has an option to play the pitch of the sound when you roll over a word - but such encoding issues aside, uncovering meaning using concordance seems to have limited applicability. By their own admission, it is as much an intellectual toy as it is a tool.

In contrast, PhraseNet seems to have a lot more potential as a tool for text mining and analysis, given that the linkages between words can be user-defined/specified and the pattern matching can allow for semantic mapping besides syntactic constructs. Without semantic mapping, constructs in a negative context can yield unintended results (contextual variations of "not war" could pick up a high TF.IDF value for "war" in a document on "peace"). It is not difficult to envision using PhraseNet+WordTree in a legal discovery process for uncovering precedents for court cases (albeit the legal field is already richly endowed with large cross-referenced databases and mining/discovery tools).
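To make the "not war" pitfall concrete, here is the standard TF.IDF formula in a minimal Python sketch (generic, not tied to any of the tools above); being purely syntactic, it cannot distinguish "war" from "not war":

```python
import math
from collections import Counter

def tf_idf(term, doc, corpus):
    """Classic TF.IDF score of `term` in `doc` (a token list),
    where `corpus` is a list of token lists. Term frequency
    times log inverse document frequency."""
    tf = Counter(doc)[term] / len(doc)
    df = sum(1 for d in corpus if term in d)
    return tf * math.log(len(corpus) / df) if df else 0.0

# A document about peace that keeps negating "war" still
# scores high for "war", since the negations are invisible:
peace_doc = "we must not choose war no not war never war".split()
corpus = [peace_doc, "trade talks resumed".split(), "the match ended".split()]
score = tf_idf("war", peace_doc, corpus)  # positive, despite the negations
```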

It is interesting to contrast the two-dimensional architecture of TextArc with a 3D system such as Ben Fry's Valence, where the least-often used words get pushed to the center, exactly the opposite of TextArc. The organizing principle of a self-evolving map seems appropriate for large volumes/datasets. Like the Enron e-mail example shown in class, exposing/tracing anomalies in complex (adaptive) systems could lead to significant discoveries.

Re: the geography of science/Small, MIMIR, etc. - a more recent approach uses clickstreams instead of citation relationships. Here, they cite (pardon the pun) immediacy/currency and the actual observational (as opposed to motivational) perspective of the data as two of the advantages over several citation-based visualizations.

Looking at IBM WebFountain (Hearst), the question that comes to mind is: why don't Google et al. yet provide indicator glyphs for page length, recency, etc. for search results?

gankit wrote:

The Hearst readings were extremely instructive!

I loved the discussion on allowing search queries to be expressed using visualizations. It is commonly known that people don't necessarily find the information they are looking for on their first attempt. They keep changing the keywords, hoping to eventually get to the information they are looking for. I always wondered if this could be done in a much more interactive way, with a visualization giving the user hints as to what related queries he/she could explore, how the underlying data is distributed in the database, and what he/she could expect with a change in query. Reading the discussion on visualization for search query expression, I realized that a good visualization also restricts the expressibility of the query in order to make it easy for the user. It also talked about using pre-attentive artifacts such as color (slight shading/bolding) of related words to guide the user.

jqle09 wrote:

As @gankit said, I think there is definitely a lot of improvement that can be made in query refinement visualization. Though I agree any visualization would restrict the expressiveness of your query, I think search queries as they are right now are already not very expressive. Right now Google only offers suggestions for possible query refinements at the bottom of the page, the place where you would be clicking on "next page" to find possibly better results. This is nice, but I think it would be cool if there were an interactive way to explore refinements for a query, maybe like the visual thesaurus example, where better refinements (possibly ranked by how many people then used a given query after failing to find what they wanted with their first one) are closer to the initial query node. Then you could search for websites by exploring the "refined query" space, instead of paging through the list of web pages. I think this might be really useful as a tool for better searching. I think I may be a little obsessed with node-link graphs, as I think they look really cool.

I thought the Phrase-Net visualizations were really effective (I particularly liked whatever font they were using). They used space really effectively by compressing nodes and edges such that the main relationships could still be determined. I was thinking that I've had problems with visualizing textual relationships because I was always overloaded with information to display, and this problem seems present in many of the visual systems shown in Hearst's chapter 11 (though they are really informative). But I think Phrase-Net does a pretty good job of keeping relationships cleanly displayed. I checked out some of the visualizations on Many Eyes.

Also in the Phrase-Net paper, I thought it was interesting that using simple regular expressions instead of a full semantic parser to parse the text revealed many of the same main terms. As mentioned in the paper, in many cases naive/superficial approaches produce useful results when given enough data. I use this as a rule of thumb in a lot of problems I approach.
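The regex-over-parser point is easy to make concrete. Assuming Python's `re`, an "X and Y" matcher in the Phrase-Net spirit might look like the following (the exact regex is an illustrative guess, not the authors' actual code):

```python
import re

# Superficial pattern matching: harvest (X, Y) word pairs joined by
# a connective, with no parsing or semantics at all.
AND_PATTERN = re.compile(r"\b(\w+) and (\w+)\b")

def extract_edges(text):
    """Return (X, Y) pairs for an 'X and Y' phrase net."""
    return AND_PATTERN.findall(text)

edges = extract_edges("bread and butter, thunder and lightning")
# edges -> [('bread', 'butter'), ('thunder', 'lightning')]
```

Each pair becomes a directed edge in the phrase net; with enough text, the frequent pairs dominate even though the matcher knows nothing about grammar.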

nornaun wrote:

One random thought that I had while going through the reading is how we can visualize the relationship between words and pictures/diagrams. Pictures can serve as a visual aid to help browsing and to detect relations between words, since we tend to relate the concept of something with its picture better than with an abstract word (an unproven claim -- I think I read about this somewhere but still need to find the source). It can be helpful for crossing language barriers as well. The challenge in visualizing words with pictures is the format of the picture itself. Displaying a picture to represent a concept certainly takes much more space than a word; a picture (even a thumbnail) will clutter the screen.

One way to work around that is using the methods described in Phrase-Net with picture as a subnode to the word. We can hide the pictures in a sub-node and let users click on it themselves if they want to look at the related pictures. However, there should be a more effective interaction that can solve this issue. I am still pondering over this.

tessaro wrote:

Picking up on some previous posts about how text as typography relates to text as visualizations of speech sounds, here are two links to interesting investigations of the relationship of spoken language to visual shape.

The first link is to a survey of the evolution of the 26 letters of the modern English alphabet through sound and form. Translating the complex vocal structures of speech to the visual shorthand of an alphabet is itself a collective visualization project crafted over millennia.

Shapes for Sounds by Timothy Donaldson

The second link is to an exhibit from the Exploratorium that attempted to model, through a kind of reed organ, the shapes our mouths make to produce vowel sounds. The air travels through the clear plastic chambers shown in the photos from right to left. (Click the images to hear the sounds.) The profiles of the vowel chambers can be seen as abstracted three-dimensional visualizations of the complex interaction between your vocal cords and mouth that creates the nuanced vowel sounds in English speech. Note how each vowel begins with a similar shape profile and modulates until its release through the "lips" of the chamber.

exploratorium vocal vowels

malee wrote:

I would posit that word visualizations are so visually compelling because of their deviation from our expectations of left-to-right/justified-margin text.

In any case, it would be interesting to apply word visualizations to non-prose text, e.g. code. How can we better get the gist of a large chunk of code? Also, how can we better visualize the changes in this code base over time? The structure and the relationships within text are represented differently in code vs prose, and it would be interesting to explore what methodologies work better for one over the other.

cabryant wrote:

Text is an intriguing data format for a variety of reasons, but I think its primary attraction (as reflected by the preponderance of textual visualizations presented on sites like Swivel and Many Eyes) is the power of language to outwardly manifest inward realities. Text, therefore, is the gateway to understanding individual thoughts and collective cultures. The challenge of visualization is to describe at least two aspects of text: the underlying meanings treated within a corpus, and the way in which text conveys that meaning.

In linguistics, this dichotomy is described (at least by Chomskyans) as the distinction between Surface and Deep structure, which describe syntactic form and underlying meaning, respectively. One implication of this view is that there may exist multiple surface structures reflecting a common deep structure. In a sense, Bag-of-Words models provide insight into neither, as both the semantic and superficial relationships between words are stripped away. Phrase nets and word trees attempt to isolate similarities in surface structure, but fail to aggregate multiple surface structures with common underlying meanings.

Although the trend in textual visualization appears to be moving toward identifying common deep structures, a focus on surface structures can be equally informative. The field of literature (and linguistics too, to some extent) has long recognized that surface structure also contributes to meaning. Take, for example, the effect of the passive versus active voice, or techniques such as alliteration and rhyme. These constructs contribute greatly to the meaning of the text and the understanding and sentiment evoked in the reader/listener. Furthermore, these constructs reflect the unique application of language by the individual, serving as a literary or oratory signature. I think there is a great deal of potential in visualizing these aspects of text, as well.

zdevito wrote:

The non-text visualizations we have studied focused on taking a large body of data and making it easier to draw conclusions from that data through the use of visual encodings. For instance, a dot plot of data in a table can quickly show relationships between two variables. To some degree text encoding attempts to do the same thing: a word cloud can give an overview of the topics in the text. However, this information is comparatively limited. Let's take a novel as an example text and consider what an ideal visualization of the novel might be: we might want to see the relationships between the characters in graph form, artistic renderings of the people/places, or a timeline of the interactions between people. It is clear that current text-mining technology cannot approach this level of detail automatically and has to resort to simpler relationships. Phrase Net, for instance, attempts to recover the family trees from the Bible using an "X begat Y" search, but it can only come up with partial results. Right now much of the focus is on text data mining to allow even the simplest of text visualizations to be possible. I wonder what text visualization would look like if some of the semantic meaning of the text was already pre-processed.

mikelind wrote:

I thought the readings this week were quite interesting, especially about information visualization for search specifically. I think that it is interesting to see these comparisons between different ways of visualizing search data, but also important to think about why they fail to make things more effective for users. Web searching is one of the most practiced activities that people do now, and changing something like this in any dramatic way will likely slow a user down or make them less effective, simply because it is a larger mental shift from what they do already. It causes them to think about their information differently than how they have learned it, and it seems like no surprise that some of the more dramatic changes in search interface were some of the least favorably reviewed in the Hearst reading. I also find it interesting how text visuals and text clouds for search were used. Often I find myself searching through many search results not because the sites don't match what I searched for, but rather because of very specific details in the site I eventually found. It may be more interesting to see a text visualization of the specific differences between search results, rather than just what text appears most frequently in each.

alai24 wrote:

Text itself is a visualization of spoken language. Some poems in Chinese and Japanese evoke emotions not only through the meaning of the words, but also through the aesthetics of the calligraphy, an example being "Cry for noble Saichō" (Koku_Saitcho_shounin). I have no idea how to read Japanese, but one gets a feeling for the poem just by looking at it. E.E. Cummings utilizes the typewriter similarly in his writings.

I think the most effective text visualizations are the ones with a clear purpose. The phrase nets used to map ancestry and locales are very interesting because they tell a clear story.

aallison wrote:

First of all, a link to a fun tool for visualizing svn repository changes. This video shows the historical development of the python code base:

It is an interesting question: How do we think when searching for information? Do we think in words? Or do we think in images? The implied question is: Is a text-based or visual-based interface better for human-performed search?

We grow up in school asking questions of our teachers. We learn to search on Google through a text box. When users search for answers, some find serendipitous anomalies, while others form hypotheses: "I wonder how the programmer population has increased in the past few years?" The questions we want to ask are the easy part. It's sorting through the results where visualization serves as a true aid.
