Visual music


People form associations between the experiences they perceive through the five senses, and some of these associations have become conventions. For instance, we associate colors such as red or blue with sensations of warmth and cold, or the taste of certain foods with sharpness or strength. In the same way, associations have been made between sound and image. A study by Scott Lipscomb and Eugene Kim (2004), intended to lay the groundwork for algorithms that would transform music into visual animation, showed how people perceive the correspondence between audio and visual parameters:

Audio parameter        Visual parameter
---------------        ----------------
Loudness               Size, colour
Timbre                 Shape
Duration               Not conclusive
Pitch                  Vertical location, colour
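
As a rough illustration of how such correspondences might be put into practice, the sketch below maps loudness to size and pitch to vertical location, the two clearest pairings in the table. It is a minimal Python example; the pixel ranges, the piano-range frequency bounds and the function names are illustrative assumptions, not values taken from the study.

import math

def loudness_to_size(loudness: float, min_px: float = 10.0, max_px: float = 200.0) -> float:
    """Map a normalised loudness value in [0, 1] to a shape size in pixels."""
    loudness = max(0.0, min(1.0, loudness))
    return min_px + loudness * (max_px - min_px)

def pitch_to_vertical(freq_hz: float, low_hz: float = 27.5, high_hz: float = 4186.0,
                      screen_height: int = 720) -> float:
    """Map a frequency in Hz to a vertical coordinate: higher pitch sits higher on screen."""
    # Perceived pitch is roughly logarithmic in frequency, so interpolate on a log scale.
    t = (math.log2(freq_hz) - math.log2(low_hz)) / (math.log2(high_hz) - math.log2(low_hz))
    t = max(0.0, min(1.0, t))
    return (1.0 - t) * screen_height   # y = 0 is the top of the canvas

print(loudness_to_size(0.8))     # loud -> large: 162.0 px
print(pitch_to_vertical(440.0))  # A4 lands a little above mid-screen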


The first proposals for transposing music into visuals were made by Sir Isaac Newton in his Opticks (1704). “He was the first to observe a correspondence between the proportionate width of the seven prismatic rays and the string lengths required to produce the musical scale D, E, F, G, A, B, C” (Kenneth Peacock, cited in McDonnell 2007). His correlations were red to C, orange to D, yellow to E, green to F, blue to G, indigo to A and violet to B. Also in the 18th century, the French Jesuit Louis Bertrand Castel, wanting to show the world the color of music, built an Ocular Harpsichord containing 60 small colored windows, each revealed by pressing a specific key of the instrument. In the following century, with the discovery of electricity, more advanced instruments were built, based on lamps instead of natural light.

The visualization of music was further explored by abstract painters such as Paul Klee, Wassily Kandinsky and Roy De Maistre. All of them shared a passion for music and visual art, and each tried to develop ways of transposing musical parameters into painting. Their methods generated abstract images based on the principles of musical composition.


One characteristic of music remained unexplored at the beginning of the 20th century: the time-based development of a composition. It is considered one of the most important aspects for achieving a close correlation between music and visual elements. Avant-garde filmmakers experimented with the principles of composition in the newly available medium of film. Hans Richter, Jean Cocteau, Marcel Duchamp, Viking Eggeling, Fernand Léger, Man Ray and others extended the visual universe, once confined to static paintings, into short experimental motion pictures. The abstract stories, at first silent and colorless, offered the viewer studies of shape, repetition, scaling and motion that were full of rhythm and musicality. Richter’s first film, Rhythm 21, “involved expanding and contracting forms on a black or white background in a contrapuntal interplay. Much of the tension of the film results from the way that background forms develop into foreground figures and foreground elements into background (much as the lines in a polyphonic composition do)” (Elder 2008, 162). Oskar Fischinger was another advocate of the harmonization of motion pictures and music: “He created synthetic sound by modifying a camera that was able to photograph his ornament drawings and other geometric shapes right onto the film’s soundtrack” (McDonnell 2007).


Contemporary visual-music performances rely on both hardware and software to synchronize music and video in real time. In the digital era, sound and image are, in essence, products of numbers and algorithms; they share the same root and can be modelled in the same way. Programmers create software that allows the simultaneous processing of audio and visuals. The user-friendly interfaces of these programs make them accessible, and artists around the world use them as tools for creating audio-visual narratives.
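
As a minimal sketch of what such software does under the hood, the snippet below uses NumPy to extract two features from a block of audio samples: loudness as the root-mean-square amplitude and a crude pitch estimate from the strongest FFT bin. The sample rate, buffer size and the sine-wave stand-in for a live audio stream are illustrative assumptions; a real program would read from a sound card or file and redraw the visuals for every buffer.

import numpy as np

SAMPLE_RATE = 44_100   # audio samples per second
BUFFER_SIZE = 2_048    # samples analysed per video frame (about 46 ms at 44.1 kHz)

def analyse_buffer(buffer: np.ndarray) -> tuple[float, float]:
    """Return (rms_loudness, dominant_frequency_hz) for one buffer of audio samples."""
    # Loudness: root-mean-square amplitude of the buffer.
    rms = float(np.sqrt(np.mean(buffer ** 2)))
    # Crude pitch estimate: the frequency bin with the largest spectral magnitude.
    spectrum = np.abs(np.fft.rfft(buffer))
    freqs = np.fft.rfftfreq(len(buffer), d=1.0 / SAMPLE_RATE)
    dominant_hz = float(freqs[int(np.argmax(spectrum))])
    return rms, dominant_hz

# Stand-in for a live audio stream: one buffer of a 440 Hz sine tone at half amplitude.
t = np.arange(BUFFER_SIZE) / SAMPLE_RATE
buffer = 0.5 * np.sin(2 * np.pi * 440.0 * t)

rms, dominant_hz = analyse_buffer(buffer)
print(f"RMS loudness: {rms:.3f}   dominant pitch: {dominant_hz:.1f} Hz")
# These numbers would then drive visual parameters (size, vertical position, colour)
# frame by frame, for example through mappings like the ones sketched earlier.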

