Work in Progress: Automatic Music Generation

This song comes out of a first-draft Max project in which music is randomly generated and played through Ableton instruments.

I started this experiment to playfully find out what happens if you let a computer randomly generate music within a specific set of musical rules and parameters. But I would like music generation that goes beyond the bleeps of experimental music; I would like it to make sense musically to our ears 🙂 The question then is: what rules and/or input should or shouldn’t be added?

It might not stay completely random; it would be nice to use input such as external data, human interactivity, etc. I see this as an extension of (or even a replacement for?) composers. It could allow a composer to experiment more efficiently. It could also let the audience experience a musical performance in a different way, being more a part of the music instead of just consumers. Maybe this can even be turned into an interactive installation so that the audience become the composers. Body movement, body state, and many other kinds of interactivity could be integrated. For example, it would be nice to make music by dancing, instead of dancing to music. It’s also a nice idea to generate music in a style I like: that way I would be the listener who enjoys new music in my style, but at the same time the artist who produces the piece.

I have created a Max project, which currently consists of (see the sketch after this list):

  • song basics: scale, beats per minute
  • percussion
  • melody
  • chords
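
To make the song basics concrete, here is a minimal sketch of how they could be represented, written in Python rather than Max; all of the names and values below are my own illustrative assumptions, not taken from the patch:

    C_MAJOR_INTERVALS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets within one octave

    def scale_notes(root=60, intervals=C_MAJOR_INTERVALS, octaves=2):
        """MIDI note numbers of a scale starting at `root` (60 = middle C)."""
        return [root + 12 * o + i for o in range(octaves) for i in intervals]

    BPM = 120                  # beats per minute
    BEAT_SECONDS = 60.0 / BPM  # duration of one beat in seconds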

Segments of the song have a random length. Every time a segment ends, the patch randomly chooses which parts (percussion and/or melody and/or chords) will change. When a part changes, new note values for each beat are calculated based on the scale. The notes are sent out as MIDI and played by instruments in Ableton, so any instrument can be attached. A ‘drunken walk’ was also added to create some dynamics in the velocity, for a more ‘human’ touch. Of course, like all the other randomised parameters in the song, this ‘drunken walk’ can be replaced by other kinds of input.
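
For readers who don’t use Max, here is a rough Python sketch of my understanding of that loop. It is only an illustration: the part names, the segment lengths of 1 to 4 motifs, and the ‘drunken walk’ step size are assumptions, not the exact values from the patch.

    import random

    MOTIF_BEATS = 16                      # motifs are currently fixed at 16 beats
    PARTS = ["percussion", "melody", "chords"]
    SCALE = [60, 62, 64, 65, 67, 69, 71]  # C major as MIDI note numbers (assumed)

    def new_motif(scale, beats=MOTIF_BEATS):
        """Pick a scale note for each beat of the motif."""
        return [random.choice(scale) for _ in range(beats)]

    def drunken_walk(start=80, step=6, beats=MOTIF_BEATS, lo=40, hi=110):
        """Velocity curve that wanders up and down to 'humanise' the dynamics."""
        v, out = start, []
        for _ in range(beats):
            v = max(lo, min(hi, v + random.randint(-step, step)))
            out.append(v)
        return out

    motifs = {part: new_motif(SCALE) for part in PARTS}

    for segment in range(4):
        length = random.randint(1, 4) * MOTIF_BEATS  # segments have a random length
        # when a segment ends, randomly choose which part(s) get a new motif
        for part in random.sample(PARTS, k=random.randint(1, len(PARTS))):
            motifs[part] = new_motif(SCALE)
        velocities = drunken_walk()
        print(f"segment {segment}: {length} beats, melody = {motifs['melody']}")
        # in the real patch, each (note, velocity) pair goes out as MIDI
        # and is played by whatever instrument is attached in Ableton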

Plans for further experimentation:

  • integrate a certain style that I like?
  • add human interaction as input?
  • how to integrate certain emotions?
  • experiment with note lengths
  • create more dynamic possibilities for the chords (right now they’re only played once every 4 beats, and held for a long time)
  • experiment with motif length; right now the motifs are always 16 beats
  • investigate why the music sometimes slows down when a change happens; I suspect the computer is struggling with something
  • experiment with the number of notes within a motif
  • and last but not least…

…receive feedback and maybe work together with people with nice ideas 🙂

