Visual Songs

To explore techniques from computational intelligence (text mining and machine learning), some fellow Lifestyle Informatics students and I initiated Visual Songs.

While artists translate their lyrics into sound themselves, we challenged ourselves with the reverse problem: visualizing songs by automatically generating a mood board from the written lyrics alone.

While humans can grasp the emotions of a song simply by listening to it, a Natural Language Processing system requires a different approach, since it can only make use of the textual component of the lyrics. Without sound, extracting the sentiment of a song is harder: all that is left are words. Another problem is that lyrics do not follow the same syntactic rules as informative texts; their language structure is closer to that of poetry. Lyrics can be ambiguous, since they often contain metaphors, idioms and polysemous words, and their interpretation is left to the listener, who may understand them differently from someone else.

Our approach involves scraping the lyrics from the web, preprocessing the scraped texts, and analyzing their main topics and overall sentiment. Images corresponding to the main topics are then scraped from the web and placed onto the mood board, and each image's saturation is adjusted to match the sentiment score.
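The pipeline above can be sketched in miniature. The snippet below is a self-contained illustration, not the project's actual implementation: the tiny sentiment lexicon, the stopword list, and the use of word frequency as a topic proxy are all simplifying assumptions (a real system would use full NLP resources such as a sentiment lexicon and a topic model), but it shows how tokenized lyrics can yield topics, a sentiment score, and a matching saturation level.

```python
from collections import Counter
import re

# Tiny illustrative sentiment lexicon (hypothetical; a real system
# would use a full resource, not a handful of hand-picked words).
LEXICON = {"love": 1.0, "happy": 0.8, "light": 0.4,
           "sad": -0.8, "alone": -0.6, "cry": -0.9}

# Minimal stopword list, again purely for illustration.
STOPWORDS = {"the", "a", "an", "and", "i", "you", "my", "in", "of", "to"}

def preprocess(lyrics):
    """Lowercase the lyrics and split them into word tokens."""
    return re.findall(r"[a-z']+", lyrics.lower())

def main_topics(tokens, n=3):
    """Most frequent non-stopword tokens, as a crude topic proxy."""
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return [word for word, _ in counts.most_common(n)]

def sentiment_score(tokens):
    """Mean lexicon score of the scored tokens, in [-1, 1]."""
    scores = [LEXICON[t] for t in tokens if t in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

def saturation(score):
    """Map a sentiment score in [-1, 1] to a saturation factor in
    [0, 1]: gloomy songs get desaturated images, upbeat ones stay vivid."""
    return (score + 1) / 2

lyrics = "I cry alone in the dark, sad and alone"
tokens = preprocess(lyrics)
print(main_topics(tokens))          # most frequent content words
print(sentiment_score(tokens))      # negative for these lyrics
print(saturation(sentiment_score(tokens)))
```

The saturation mapping is the simplest possible choice, a linear rescale; any monotone mapping from sentiment to saturation would serve the same purpose on the mood board.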

by Kayleigh Beard & Nathalie Post

