Audio signal processing and sound spatialization

 

Also known as audio processing and recently referred to as live-electronics, audio signal processing is a series of procedures by which sounds are captured via microphones, manipulated algorithmically with various electronic devices, and then played back over loudspeakers. Curtis Roads considers this process essential for the electronic composer:

The artificial division between “composition” on the one hand, and “orchestration” on the other, need not apply in computer music. To generate and process acoustical signals is to compose – more directly than inscribing ink on paper. (1996, 349)

Perhaps the most common examples of audio processing are audio effects widely used in electronic music, such as ring modulation, delay, reverb, chorus and, more recently, granular synthesis. Composers are drawn not only to the power to alter audio signals, but also to acoustic sound analysis, which has multiple applications in the field of electronic music. An illustration of this is the software Macaque, which I used in one of my pieces to translate recordings of animals into pitches and rhythms. Written by Georg Hajdu in Max/MSP, it allows the transcription of sound spectra and partial-tracking data into standard musical notation (Didkovsky & Hajdu 2008).
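To give a sense of how simple some of these effects are at the signal level, the following is a minimal sketch of ring modulation in Python with NumPy; the carrier frequency, test tone and sample rate are illustrative assumptions, not values taken from any particular piece. Multiplying an input signal sample by sample with a sine-wave carrier produces the characteristic sum and difference sidebands.

```python
import numpy as np

def ring_modulate(signal, carrier_freq, sample_rate=44100):
    """Multiply a signal by a sine carrier, producing sum/difference sidebands."""
    t = np.arange(len(signal)) / sample_rate
    carrier = np.sin(2 * np.pi * carrier_freq * t)
    return signal * carrier

# Example: ring-modulate a 440 Hz test tone with a 150 Hz carrier,
# which yields spectral components at 290 Hz and 590 Hz.
sr = 44100
t = np.arange(sr) / sr                      # one second of audio
tone = np.sin(2 * np.pi * 440.0 * t)
modulated = ring_modulate(tone, carrier_freq=150.0, sample_rate=sr)
```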

 

Sound spatialization is the process by which the position and movement of sounds are simulated in a confined space. Acoustic signals can optionally be recorded with dedicated microphones for more accurate reproduction, but any acoustic or electronic sound source, be it pre-recorded or live, can be moved in 3D space. The sounds are played back with the help of algorithms that precisely control arrays of specially positioned loudspeakers. The practice of spatialization was broadly commercialized by the film industry to enhance the audio-visual experience in cinemas, but it had already been pioneered in electroacoustic music since the 1950s by composers like Karlheinz Stockhausen, Edgar Varèse or Iannis Xenakis, who wrote the first pieces for multi-channel playback. Some of the contemporary techniques for spatializing audio in electroacoustic music are:

•  Ambisonics – invented by Michael Gerzon from the Mathematical Institute, Oxford;

•  Wave field synthesis – first researched at the Delft University of Technology in the Netherlands;

•  VBAP (Vector Base Amplitude Panning) – developed by Ville Pulkki at the Helsinki University of Technology.

Ambisonics and VBAP have already been implemented in music programming environments such as Max/MSP, allowing composers to expand their range of expression by adding sonic depth to their pieces.
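As a rough illustration of the idea behind amplitude-panning techniques such as VBAP, here is a minimal Python sketch of the simplest two-loudspeaker case; the loudspeaker angles and source direction are illustrative assumptions, not taken from Pulkki's implementation. The source direction is expressed as a weighted sum of the two loudspeaker direction vectors, and the resulting gains are normalized for constant power.

```python
import numpy as np

def vbap_2d(source_deg, speaker_degs=(-30.0, 30.0)):
    """Pair-wise amplitude panning in 2D: solve for the loudspeaker gains
    that reproduce the virtual source direction, then normalize them."""
    src = np.radians(source_deg)
    p = np.array([np.cos(src), np.sin(src)])          # unit vector towards the virtual source
    L = np.array([[np.cos(np.radians(a)), np.sin(np.radians(a))]
                  for a in speaker_degs]).T           # loudspeaker unit vectors as columns
    gains = np.linalg.solve(L, p)                     # p = L @ gains
    gains = np.clip(gains, 0.0, None)                 # discard negative (out-of-pair) gains
    return gains / np.linalg.norm(gains)              # constant-power normalization

# A virtual source 10 degrees off centre, panned between a standard
# +/-30 degree stereo pair: the nearer loudspeaker receives the larger gain.
print(vbap_2d(10.0))
```

In a full VBAP system the same calculation is carried out in three dimensions over triplets of loudspeakers, with the active triplet chosen according to the source direction.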

 

Both audio signal processing and sound spatialization are highly valuable tools for the multimedia composer. They also offer the means to magnify the narrativity of acoustic music. By processing their live signals, instruments can become actors that play intriguing roles, or they can generate unexpected electronic sounds and sound conglomerates. Adding the spatial element to instrument signals and electronic sounds yields a further aspect of narrative: the personification of musical elements.

 
