Visual Songs

To explore techniques from computational intelligence (text mining and machine learning), some fellow Lifestyle Informatics students and I initiated Visual Songs.

While artists translate their own lyrics into sound, we set ourselves the challenge of visualizing songs by automatically generating a mood board based on the written lyrics.

While humans can grasp the emotions of a song simply by listening to it, a Natural Language Processing system needs a different approach, since it can only work with the textual component of the lyrics. Without sound, extracting the sentiment of a song becomes harder: all that is left are words. Another problem is that lyrics do not follow the same syntactic rules as informative texts; their language structure is closer to that of poetry. Lyrics can also be ambiguous, since they often contain metaphors, idioms and polysemous words; interpretation is left to the listener, and different people may interpret the same song differently.

The approach to this problem involves scraping the lyrics from the web, preprocessing the scraped texts, and analysing their topics and overall sentiment. Based on the main topics, corresponding images are scraped from the web and placed in the mood board, and based on the sentiment score these images are given a matching level of saturation.
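The post does not name the tools behind each step, so the sketch below is only an illustration of the pipeline's shape under assumed library choices: NLTK's VADER for sentiment, gensim's LDA for topic extraction, and Pillow for the saturation adjustment. The helper names, the placeholder verses, and the mapping from sentiment score to saturation factor are all hypothetical; the image-scraping step is omitted entirely.

```python
# Rough sketch of the Visual Songs pipeline described above.
# Library choices (NLTK VADER, gensim LDA, Pillow) are assumptions,
# not necessarily what the project actually used.
import re

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
from gensim import corpora, models
from PIL import Image, ImageEnhance

# Tiny stop-word list just for the example; a real run would use a fuller set.
STOPWORDS = {"the", "a", "an", "and", "or", "but", "i", "you", "it",
             "in", "on", "to", "of", "is", "are", "was", "my", "me"}


def preprocess(lyrics: str) -> list[str]:
    """Lowercase, strip punctuation and drop stop words."""
    tokens = re.findall(r"[a-z']+", lyrics.lower())
    return [t for t in tokens if t not in STOPWORDS]


def main_topics(token_lists: list[list[str]], num_topics: int = 3) -> list[str]:
    """Fit a small LDA model and return the top word of each topic."""
    dictionary = corpora.Dictionary(token_lists)
    corpus = [dictionary.doc2bow(tokens) for tokens in token_lists]
    lda = models.LdaModel(corpus, num_topics=num_topics, id2word=dictionary)
    return [lda.show_topic(i, topn=1)[0][0] for i in range(num_topics)]


def sentiment_score(lyrics: str) -> float:
    """Compound VADER score in [-1, 1]: negative = gloomy, positive = upbeat."""
    return SentimentIntensityAnalyzer().polarity_scores(lyrics)["compound"]


def apply_mood(image_path: str, score: float) -> Image.Image:
    """Scale colour saturation with the sentiment score:
    -1 -> greyscale, 0 -> original colours, +1 -> boosted saturation."""
    img = Image.open(image_path)
    return ImageEnhance.Color(img).enhance(1.0 + score)


if __name__ == "__main__":
    nltk.download("vader_lexicon", quiet=True)

    # Placeholder verses standing in for scraped lyrics.
    verses = [
        "sunlight on the water and we dance all night",
        "shadows in the morning and the rain keeps falling down",
    ]
    tokens = [preprocess(v) for v in verses]
    topics = main_topics(tokens)
    score = sentiment_score(" ".join(verses))
    print("main topics:", topics)
    print("sentiment score:", score)
    # Images scraped for each topic would then be passed through
    # apply_mood(path, score) before being placed in the mood board.
```

In this sketch the compound sentiment score is mapped linearly onto Pillow's saturation factor, so a gloomy song drifts toward greyscale and an upbeat one toward vivid colour; the actual project may have used a different mapping.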

by Kayleigh Beard & Nathalie Post

Published by Kayleigh Beard

Music, Art, Technology. With a scientific background in Artificial Intelligence, an artistic background in music and sound, and a deep interest in human health, psyche and spirituality, Kayleigh now aims to create experiences, give performances, and inspire herself and her audience to live more purely and authentically.
