Experiments with Music by People in Space (Kinect)

I am really enthusiastic about the concept of involving the audience in a music performance and allowing people to create music together in an intuitive way (without needing musical knowledge). That’s why I started to play around with the Kinect, to track how people walk through a space and let their movements create music. (This fascination with movement and music seems to relate to a previous project of mine: TardiSpaces.)

1. One person = one instrument

I started off by building a system in which each person in a room represents a musical instrument, so that the X and Y position of a person could influence elements of that instrument. I linked the X and Y values to MIDI values in Ableton and played around with controlling volume, parameters of an arpeggiator, and effects such as reverb or an equalizer. With only one person in the room it is easy to figure out how that person influences the music, but it gets very chaotic and unclear when more people join. The interaction also wasn’t intuitive: it felt strange that walking to a certain corner of the space made the music louder.
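To give an impression of how simple this position-to-MIDI mapping is, here is a minimal sketch in Python using the mido library. This is an illustration of the idea, not my actual patch: the port name and CC numbers are made up, and in Ableton you would MIDI-learn them to whatever parameters you like.

```python
import mido

# Open a virtual MIDI port that Ableton listens to (the port name is
# an example; on macOS this could be an IAC Driver bus).
outport = mido.open_output("Kinect to Ableton")

def send_position(x, y):
    """Map a normalized (0.0-1.0) position to two MIDI CC messages.

    The CC numbers 20 and 21 are arbitrary: in Ableton you would
    MIDI-learn them to volume, reverb amount, arpeggiator rate, etc.
    """
    cc_x = int(max(0.0, min(1.0, x)) * 127)
    cc_y = int(max(0.0, min(1.0, y)) * 127)
    outport.send(mido.Message("control_change", control=20, value=cc_x))
    outport.send(mido.Message("control_change", control=21, value=cc_y))
```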

So Jan Klug helped me create more intuitive interaction. One example was to let the speed of movement determine the volume of the instrument. That felt more intuitive, since the music becomes more intense when you move in a more intense way. I also added regions in the space that trigger audio samples. The video below shows a simple experiment with this: a monotone synth plays, and it gets louder the faster I move. Regions in the space determine whether voice samples are played. There is also a continuous heartbeat, whose EQ effect I can control by changing my XY position.
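The speed-to-volume idea is easy to sketch: take the distance a person moved since the last frame, smooth it, and scale it to a MIDI value. The sketch below only illustrates the principle; it assumes the tracker delivers positions at a known frame interval, and the smoothing factor and gain are values you would tune by ear.

```python
import math

prev_pos = None
smoothed = 0.0

def speed_to_volume(x, y, dt=1 / 30, alpha=0.2, gain=4.0):
    """Turn movement speed into a MIDI volume value (0-127).

    dt is the frame interval in seconds; exponential smoothing (alpha)
    keeps the volume from jittering, and gain scales raw speed into
    the 0-1 range. Both would be tuned by ear.
    """
    global prev_pos, smoothed
    if prev_pos is None:
        prev_pos = (x, y)
        return 0
    speed = math.dist(prev_pos, (x, y)) / dt
    prev_pos = (x, y)
    smoothed = alpha * min(1.0, speed * gain) + (1 - alpha) * smoothed
    return int(smoothed * 127)
```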

I wasn’t satisfied with where this seemed to be going. You very quickly figure out what effect you, as a person, have on the music, and then it just becomes repetitive. Still, I wanted to try it out with more people and different instruments, each mapped to movement in a different way. I wondered if that would make it less repetitive and more interesting for the participants to interact with each other.

So the next video is an experiment with more people. One person controls the volume of an instrument by their speed of movement, one controls the arpeggiator by their XY position, one controls the EQ effects of the synthesizer by their XY position, et cetera.
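Conceptually this amounts to a routing table: one mapping function per tracked person. A minimal sketch of that idea follows, reusing speed_to_volume from the previous sketch; the target names are made up, and a print stands in for the actual MIDI send.

```python
from dataclasses import dataclass

@dataclass
class Person:
    blob_id: int
    x: float  # normalized 0-1 position in the room
    y: float

# One mapping per tracked person, keyed by the tracker's blob ID.
# speed_to_volume is the helper from the previous sketch; the other
# target names and formulas are invented for illustration.
MAPPINGS = {
    1: ("synth_volume", lambda p: speed_to_volume(p.x, p.y)),
    2: ("arp_rate",     lambda p: int(p.x * 127)),
    3: ("eq_frequency", lambda p: int(p.y * 127)),
}

def route(person: Person):
    if person.blob_id in MAPPINGS:
        target, value_fn = MAPPINGS[person.blob_id]
        print(target, value_fn(person))  # stand-in for sending a MIDI CC
```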

But I was not satisfied with these results either. Not only did the participants have no clue which musical element they were influencing; even I, as the creator of the system, couldn’t follow what was going on! It was very chaotic. I also found it very uninspiring, because it just sounded like one ongoing soundscape.

I think one reason for this was that the blob detection software couldn’t reliably track each person. When it labels person A as blob 1 and that person closely passes person B, who is blob 2, the two can switch numbers. Person A, who represented a synth, suddenly takes over the instrument of person B, who represented the arpeggiator, and vice versa. That makes it really hard to keep track of what each person represents, and the sound often changed very abruptly because of this problem.
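I don’t know exactly how the blob software matches detections between frames, but the usual minimal approach is greedy nearest-neighbour matching with a maximum jump distance, which reduces (though never fully prevents) these identity swaps. A sketch, assuming normalized coordinates:

```python
import math

def match_blobs(tracked, detections, max_jump=0.15):
    """Greedy nearest-neighbour matching between frames.

    tracked:    {person_id: (x, y)} positions from the previous frame
    detections: [(x, y), ...] positions from the current frame
    Returns an updated {person_id: (x, y)}. Pairs further apart than
    max_jump are refused, so a person who briefly disappears loses
    their track instead of stealing a neighbour's identity.
    """
    pairs = sorted(
        (math.dist(pos, det), pid, i)
        for pid, pos in tracked.items()
        for i, det in enumerate(detections)
    )
    updated, used = {}, set()
    for dist, pid, i in pairs:
        if pid in updated or i in used or dist > max_jump:
            continue
        updated[pid] = detections[i]
        used.add(i)
    return updated
```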

So again, I asked Jan Klug for advice. He wondered if the whole blob tracking wasn’t a bit too complex for what I wanted to do (well, the results were definitely very complex and chaotic). What if I could measure the general movement of a group of people? And what could I do with that?

2. Using general movement

The next video demonstrates how the average position of a group of people influences the loudness of the different instruments being played. For this I used regions in the space and created a simple mapping between the regions and the volume of the instruments. On the right side you can see a square with regions: when the average position of the people is at the center of a particular region, the instrument that belongs to that region plays at its loudest.
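The mapping itself is simple: compute the centroid of all tracked positions, then let each instrument’s volume fall off with the centroid’s distance to the centre of that instrument’s region. Here is a minimal sketch; the 2x2 grid of instruments and the linear falloff are assumptions for illustration.

```python
import math

# An example 2x2 grid of regions: each instrument has a region centre
# in normalized room coordinates. The layout is an assumption.
REGIONS = {
    "drums": (0.25, 0.25),
    "bass":  (0.75, 0.25),
    "pads":  (0.25, 0.75),
    "lead":  (0.75, 0.75),
}

def group_volumes(positions, radius=0.5):
    """Map the average position of everyone to per-instrument volumes.

    An instrument is loudest (127) when the group centroid sits on its
    region centre and fades linearly to silence at `radius` away.
    """
    cx = sum(x for x, _ in positions) / len(positions)
    cy = sum(y for _, y in positions) / len(positions)
    return {
        name: int(max(0.0, 1.0 - math.dist((cx, cy), centre) / radius) * 127)
        for name, centre in REGIONS.items()
    }

# Example: two people standing in the lower-left corner
# -> "drums" plays at full volume, "lead" stays silent.
print(group_volumes([(0.2, 0.3), (0.3, 0.2)]))
```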

I really like this approach and the possibilities it offers. This way of audience interaction invites the participants to also interact with each other: their position in relation to the other people determines the musical output. In this experiment I only connected the regions to the volume of instruments, but now I want to experiment with different kinds of mapping. I showed this to the master students of the conservatoire (NAIP programme) with whom I collaborate for a Street Performance festival in May.

3. Brainstorm session with conservatoire students (Master programme NAIP) 

Based on my latest experiments, I had a brainstorm session with two students from the NAIP programme at the conservatoire. We discussed the following ideas and topics for further experimentation:

  • Polyphony in the melody when people stand in a certain way in relation to each other.
  • We need a stable factor, something like background music. We could adjust that based on the atmosphere of the day (for example morning, afternoon, evening)?
  • Together with participants from the audience, real musicians react to musical areas in the space, and in that way interact and collaborate with the audience.
  • We could give workshops about this system?
  • We could add a visual element to make the interaction clearer, for example by drawing the areas on the ground and/or using icons (of instruments).
  • We could also drop the Ableton music and only let real live musicians play while they react to cues from the system (for example playing softer or louder).
  • Breaking up the melody and, during interaction with people, slowly adding more notes to it.

4. Literature studies

While working on this project I have also been reading literature related to this fascination with movement and music and with interactive performances. I am definitely not the first to be inspired by this; I even found experiments and works dating back to well before I was born!

I read two articles, and below I list a few aspects that I found inspiring and that are relevant to my project.

1. Sensing and mapping for interactive performance by Kia Ng (2002)

  • “With such systems, the modes of interfaces, sensitivities and reactions (output) are highly flexible and can be configured or personalised, allowing better access to musical instrument playing with shorter learning time requirement.” This explains my fascination with this topic: I like the idea of intuitive ways of making music.
  • Very Nervous System: already in 1986 people were doing experiments with connecting body movement to musical output!
  • Different kinds of mapping strategies to consider when designing an interactive music system:
    • one-to-one
    • one-to-many
    • many-to-one
    • many-to-many
    • (My first experiments were one-to-one, but now they start to shift to many-to-one and many-to-many.)
  • “Musical mapping can be enhanced with a database of composed musical phrases and several mapping layers can be overlaid in order to produce multi-layered effects.”
  • One-to-one mapping generates musical output that can be easily related to dance, but when many people are involved the output becomes complex and difficult to follow. Therefore a performance started with simple movements to demonstrate the output. This is exactly what I encountered in my first experiments, just like the next point:
  • Constant one-to-one mapping can become tiresome and uninspiring; therefore background music was composed.
  • Mapping is part of the compositional design and can be integrated with background music and choreography.
  • In the design process you could use a mixture of one-to-one direct sound triggering, special effects, and pre-composed music.
  • I became inspired by using different types of input:
    • Facial expressions as input
      • An interactive music head using pattern recognition techniques, with many-to-many and non-linear mapping functions.
    • Colour detection as input
    • Pressure maps on the floor
      • “Special sound effects and short audio segments can be activated by members of the audience who step onto pressure-maps or touch sensors, while pausing to read text or images related to the story.”
    • Use of accent detector to create accents in music
  • “There may also be applications for music therapists, to encourage movement, using this motion-sensitive system to provide interactivity and creative feedback.” I like the idea that my project could encourage people to move and have a therapeutic effect.
  • Applying AI to this project: learning certain expressions of dancers. For this I could have a look at Labanotation, Laban’s dance notation method.

2. Bio-sensed and Embodied Participation in Interactive Performance (2017)

  • “Three main issues emerging from design ideas of interactive performances based on bio-sensing and bodily tracking technologies:
    • i) Temporality of Input – the extent of the feedback loop between audience action and its influence on the performance;
    • ii) Autonomy and Control – that is the degree to which audience members, performers and directors can act upon the performance;
    • iii) Visibility of Input – that is the degree to which the use of such sensors can make audience’s actions socially visible.”
  • “More recently the focus on participation-through-technology in interactive performances has been seen as a way to understand other people and engage in a dialogue with them.”
  • “A Kinect sensor capturing movement qualities of the audience modified the music, which in turn influenced the dancers on stage.”
  • Use of proximity between participants.
  • Yoga.
  • Locus of control: who has primary control over the input, the audience or the artist? This sparked a question for my project: how much control do I want as an artist?
  • Consciousness of control: active (body movement) vs. passive (galvanic skin response). Which type of control do I want in this project?
  • “Alter the boundaries between the creative artist and the audience.” This is exactly what I wanted to achieve when I started involving the audience in the performance. As an artist, I don’t want to do it alone: I want my audience to be responsible for the output as well.
  • “By changing the act of interaction from active, as in the example above, to passively being sensed these three aspects of interaction can be disconnected from each other. It is this disconnection between action and reaction, between cause and effect, that opens the possibility for the artists to manipulate, blur, and play with boundaries between traditional roles.”

Published by Kayleigh Beard

I sing, perform, and make my own digital instruments. Through my voice, and making music with intuitive body gestures, I seek to create a serene and tranquil atmosphere to retreat from today’s overwhelming world.
