Text Visualization of Song Lyrics
Looking up song lyrics is a common task for musicians and non-musicians alike. Currently, however, a search for song lyrics returns plain text (often broken into stanzas like poems). From this plain text alone it is difficult to tell how the lyrics actually fit in with the musical elements, or what the resulting perceptual experience of hearing the song is.
I would like to design a text visualization of song lyrics (perhaps akin to tag-clouds, but not based on word frequencies, and experimenting more with the encoding of various parameters) that maximally conveys the song as an interplay between the linguistic features of the lyrics and the musical features of the tune. My goal is a "what you see is what you hear" visualization.
For the purpose of this project, I will work with symbolic data from music scores (as opposed to audio signals). Ideally, I would use a Music OCR application to scan a score and convert the data into either a MusicXML or MIDI format, which would serve as an input to a visualization algorithm. Depending on the ease of this data-transformation procedure, I may have to manually construct this input data for prototyping.
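To get a feel for what the MusicXML input would look like, here is a minimal sketch of extracting sung syllables with their pitches and durations. The XML fragment is hypothetical hand-constructed data (standing in for Music OCR output), and the element paths follow the standard MusicXML note/lyric structure:

```python
import xml.etree.ElementTree as ET

# Hand-written MusicXML fragment standing in for scanned-score output
# (hypothetical prototyping data, as described above).
MUSICXML = """<score-partwise>
  <part id="P1">
    <measure number="1">
      <note>
        <pitch><step>C</step><octave>4</octave></pitch>
        <duration>2</duration>
        <lyric><text>Hel</text></lyric>
      </note>
      <note>
        <pitch><step>E</step><octave>4</octave></pitch>
        <duration>2</duration>
        <lyric><text>lo</text></lyric>
      </note>
    </measure>
  </part>
</score-partwise>"""

def extract_syllables(xml_text):
    """Pair each sung syllable with its pitch name and duration."""
    root = ET.fromstring(xml_text)
    events = []
    for note in root.iter("note"):
        lyric = note.find("lyric/text")
        if lyric is None:
            continue  # skip rests and notes without new lyric text
        step = note.findtext("pitch/step")
        octave = note.findtext("pitch/octave")
        duration = int(note.findtext("duration"))
        events.append((lyric.text, f"{step}{octave}", duration))
    return events

print(extract_syllables(MUSICXML))
# → [('Hel', 'C4', 2), ('lo', 'E4', 2)]
```

A list of (syllable, pitch, duration) tuples like this would be the natural input to the visualization algorithm.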
The following are potential features to encode and map to a visual parameter:
- Musical: chords, rhythm, pitch contours, meter/pulse/beats, expressive markings (articulations, dynamics, and tempo/style markings)
- Linguistic: lexical accents, semantic (emotional) valence & degree, word frequency (e.g. "love" is extremely common in songs; "computer" might not be...)
- Structural: refrains, stanzas, and rhyming scheme
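As one illustration of such a feature-to-visual mapping, the sketch below renders each syllable as an SVG text element, with pitch height driving vertical position and note duration driving font size. The specific mapping, scale factors, and function names are all hypothetical design choices for prototyping, not a final encoding:

```python
# Hypothetical mapping: pitch height → vertical position on the page,
# note duration → font size. Constants are illustrative only.
STEP_TO_SEMITONE = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def midi_number(pitch):
    """Convert a pitch name like 'C4' to its MIDI note number."""
    step, octave = pitch[0], int(pitch[1:])
    return 12 * (octave + 1) + STEP_TO_SEMITONE[step]

def syllable_to_svg(syllable, pitch, duration, x):
    """Render one syllable as an SVG <text> element."""
    y = 400 - 4 * midi_number(pitch)  # higher pitch sits higher on the page
    size = 8 + 4 * duration           # longer notes get larger type
    return f'<text x="{x}" y="{y}" font-size="{size}">{syllable}</text>'

print(syllable_to_svg("Hel", "C4", 2, 10))
# → <text x="10" y="160" font-size="16">Hel</text>
```

Swapping in a different feature (say, emotional valence mapped to color) would only require changing the attribute computation, which keeps the encoding easy to experiment with.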
(Initial List of) Related Works
Note: a more complete list of related works and references can be found on the Initial Problem Presentation page (see below for link).
Initial Problem Presentation (11/11)
- Project Webpage (with links to everything)
- Poster (Oh_poster.pdf)