The following post is an edited version of the third of four chapters from my honours thesis, originally written in 2013. The thesis as a whole acts as a kind of “how to” guide for composing in a few different styles, each of which removes some human aspect of music composition while exploring ideas of musical universals – those aspects of music that seem to be ubiquitous across all cultures, or even shared across different species! This chapter details one method of musical data sonification, which I used to create musical representations of the orbits of planets around distant stars.
Chapter 3: Biomusic, Music in Nature and Musica Universalis
“Universals are rooted in nature, but have effects in culture” (Leman, 2003, unpaginated)
Given that music can exist in numerical data and in the sounds produced by animals, naturally occurring patterns can also hold musical information, and could have had a profound impact on the creation of early music. Natural patterns such as the rhythm of a heartbeat, an individual’s natural walking pace, the night/day cycle and the changing of the seasons all hold potential musical information, and all are intrinsic parts of life.
Biomusic here differs from zoömusic in that it refers to sounds, pitches or rhythms created biologically, but without an intended aesthetic aspect. While the ‘voices’ of many animals used for mating calls are widely considered to have an intended aesthetic aspect, other sounds that are purely functional or biological come under the heading of biomusic.
Musica Universalis is an archaic philosophical concept relating the movements of celestial bodies – the Sun, the Moon, and the planets – to a form of music. This ‘music’ is of course not audible, but rather it can be described in the same terms as music – through mathematical and harmonic principles. The implications of this have historically been thought of as astrological rather than purely mathematical (Kepler, 1997). The harmonically described motions of celestial bodies (the rotations, orbits and resonances with other objects) are yet another example of patterns in nature which contain musical information that can be used in compositions.
Life thrives based upon principles of repetition, cycles, and patterns, as does most music. Humans and many animals are adept at pattern recognition, and in artistic works, great importance is placed upon the skilful use of repetition:
Memory affects the music-listening experience so profoundly that it would not be hyperbole to say that without memory there would be no music. As scores of theorists and philosophers have noted, […] music is based on repetition. Music works because we remember the tones we have just heard and are relating them to the ones that are just now being played. Those groups of tones—phrases—might come up later in the piece in a variation or transposition that tickles our memory system at the same time as it activates our emotional centers […] Repetition, when done skilfully by a master composer, is emotionally satisfying to our brains, and makes the listening experience as pleasurable as it is. (Levitin, 2006, pp. 166-167)
Indeed, music in every culture on Earth is built strongly around repetition (Middleton, 1990). Repetition and variation of ideas form the backbone of music, from simple folk melodies to more sophisticated forms such as the fugue or sonata. Some composers have sought to suppress inherent repetition through compositional methods such as the twelve-tone technique, devised by Arnold Schoenberg in the 1920s, which gives equal importance to all twelve chromatic notes (Schoenberg, 1975), avoiding a tonal centre and allowing composers to impart a pseudo-randomness to the music – more so still with extended techniques such as the all-interval row (Carter, 2002). This was pushed to its logical extreme in Scott Rickard’s The World’s Ugliest Music (2011), a piece that sounds all 88 keys of the piano without repeating any, and avoids repetition in its rhythms as well.
Musical notation, especially in its earliest form, could not have existed without repetition. A theoretical structure built upon repeated and repeatable ideas is key to any system of symbolic notation. Repetition of only a subset of all available notes defines not only musical scales, but tuning systems. Musical structures such as these are perhaps the most ingrained forms of pattern and repetition in all forms of music.
Evidence of repetition in all aspects of human culture hints at a deep-rooted predisposition for pattern-recognition within the human brain. In fact, the human mind is hard-wired to make unfamiliar things seem familiar; to recognise one human face amongst thousands, and to recognise those thousands, with their countless differences, as examples of the same thing (Sagan, 1995). Perhaps music’s patterns and repetitive nature stem from patterns found in the physical world.
Studies have shown links between the tempo of music and the heart rate of the listener (Bernardi et al., 2006). Musical tempi may lie within the range they do because of the human heart rate: such tempi feel natural because their rhythms are found in the body itself, particularly during the brain’s formative childhood years. There exists an upper limit to musical tempo, or at least to the number of sounds per second. If a rhythmic sound repeats at a rate at or above the lower frequency limit of human hearing (roughly 20Hz), it is heard as one continuous tone rather than as a rhythmic structure. This is closely related to the visual flicker fusion threshold, the psychophysical phenomenon that allows animated pictures to appear as fluid motion rather than as a series of still images in succession. There should be no corresponding lower limit, apart from whatever remains practical for human musicians and listeners. Several ‘impractical’ pieces exist, including John Cage’s As Slow As Possible (1987), an organ performance of which, begun at Halberstadt in 2001, is scheduled to last 639 years and humorously opened with a rest lasting seventeen months, and Jem Finer’s Longplayer (1999), which takes a thousand years to play at its intended tempo.
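The 20Hz fusion figure lends itself to a quick back-of-the-envelope calculation (a sketch; the threshold value is the approximate figure cited above):

```python
# At roughly 20 Hz, discrete rhythmic events fuse into a continuous tone.
FUSION_HZ = 20.0

interval_ms = 1000.0 / FUSION_HZ      # gap between successive sounds, in ms
events_per_minute = FUSION_HZ * 60.0  # the same rate expressed as a "tempo"

print(f"fusion occurs below ~{interval_ms:.0f} ms between events,")
print(f"i.e. above ~{events_per_minute:.0f} events per minute")
```

For comparison, sixteenth notes at 120 BPM arrive only eight times per second, well below the fusion threshold.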
Music of the Spheres
“The heavenly motions are nothing but a continuous song for several voices, perceived by the intellect, not by the ear; a music which […] sets landmarks in the immeasurable flow of time.” Johannes Kepler, The Harmony of the World, 1619.
Johannes Kepler’s 1619 Harmonices Mundi in part examines physical resonances between planets in the solar system in a musical sense (Kepler, 1997). While the implications Kepler drew from these ‘harmonies’ seem more like astrology than modern astronomy, the idea is interesting, and with modern technology and software simulations, similar methods have been used to codify quantitative data into musical pieces, from which information can be more easily garnered. Astronomer Alex Parker’s Kepler 11: A Six-Planet Sonata (2012) does just this, taking orbital data from a stellar system with six planets and mapping each planet onto a musical note, each drawn in this instance from a minor 11 chord. The piece allows informed listeners to infer information from the data by ear: orbital resonances, in particular, would manifest as obvious rhythmic patterns in the sound (in this system, their absence is itself audible). Such patterns would otherwise be more difficult to detect through standard methods of analysis.
The resonances found between the orbits of planets and moons are an example of explicit and exact repeated patterns in nature, and have influenced cultures throughout the world. The roughly 13:1 ratio between the lunar month and the Earth year has defined calendars, religious festivals and the planting and harvesting habits of farmers for thousands of years. Perhaps resonances and patterns such as these have had some impact on the development of musical cultures; or perhaps the patterns unfold on too long a scale to associate with musical tradition.
Composing music based on orbital resonance data
Data sonification is a way of codifying data as sound, which can then be interpreted aurally or decoded back into the original data. Under Ruwet’s analytic methodology (1987), this encoding/decoding occurs every time music is composed, performed, analysed, interpreted, or misunderstood. Many methods of data sonification exist, and most involve some kind of pseudo-arbitrary “mapping” of digits to musical data (pitches, note lengths, loudness, etc.), usually chosen to ensure that the given data set sounds “musical” in terms of the standard codes and conventions of the western musical tradition. While musical attributes are not a requirement of sonification, they are widely used, as they allow designers to tap into the inherent pattern-recognition capabilities of the human brain. For certain types of information, audio can far exceed the usefulness of visual or numerical displays of the same data. The human brain can hear and differentiate between a number of different sound sources at once: most people are able to pick out the instruments of a rock band (drums, guitars, vocals, keyboards), and a trained professional may be able to distinguish each instrument in an orchestra. Because of this hard-wired versatility in auditory processing, sonification can be a useful method of displaying particular kinds of data. Sonification can also, of course, be performed to create aesthetically pleasing and traditionally “musical” representations of a data set, without the primary focus being a usable display of data.
To create music through data sonification, an appropriate set of data must be found, and a matching appropriate mapping system must be fitted to the type of data being used. Orbital resonance data, such as the numbers which describe planetary motion around a star, or a moon’s motion around a planet, can be expressed in either durations or frequencies, as well as ratios describing the relationship of each object to the others. There are a number of ways that this data could be mapped to a musical representation; all of the values (frequency, duration, ratios) can be used to describe musical data (pitch, rhythm, and intervals, respectively).
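As a minimal sketch of such a mapping (the function name, the three-body data and the time scale here are illustrative placeholders, not values from the pieces below):

```python
# Illustrative sketch: map orbital data onto musical parameters.
# The periods, pitches and time scale are hypothetical placeholders.

def sonify(periods_days, pitches_midi, seconds_per_day):
    """Pair each orbital period with a pitch and a repeat interval.

    periods_days    -- orbital period of each body, innermost first
    pitches_midi    -- MIDI note numbers, highest given to the innermost body
    seconds_per_day -- time compression (e.g. 0.2 means 1 second = 5 days)
    """
    mapping = []
    for period, pitch in zip(periods_days, pitches_midi):
        interval_s = period * seconds_per_day  # seconds between note triggers
        mapping.append({"pitch": pitch, "interval_s": interval_s})
    return mapping

# Hypothetical three-body system at a scale of 1 second = 5 days:
example = sonify([4.0, 10.0, 75.0], [72, 67, 60], 1 / 5)
for m in example:
    print(m)  # intervals ≈ 0.8 s, 2.0 s and 15.0 s respectively
```

Ratios between the resulting intervals preserve the orbital resonances, which is what makes them audible as rhythmic patterns.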
Over the last two decades, one of the frontiers of science has been the detection of exoplanets (planetary systems orbiting other stars). Detection methods are growing in number, variety and sensitivity as technology advances, and because both the field and its survey missions are relatively young, discoveries become more frequent as time passes. My interest in this field led me to choose exoplanetary orbital data as the basis for generative music composition, using orbital period data and each planet’s distance from its star to inform musical choices, and to create interesting sonic textures through repetition and irregular rhythms.
Ephemeris 1: Kepler-20
The first of the extrasolar planetary systems I have chosen as a basis for my compositions is Kepler-20, notable for being the first system discovered to contain small (Earth-sized) exoplanets. Kepler-20 is a star very similar in size and composition to the Sun, located 950 light-years from Earth. The system comprises the star and five planets, two roughly Earth-sized and three much larger. Because of their proximity to their host star and to each other, all of the planets orbiting Kepler-20 have near resonances; proceeding outwards from the star, they are 3:2, 4:2, 2:1, 4:1.
I found that a time scale of 1 second = 5 days was suitable for creating music with this data set; at that rate, the least frequently played note is heard once every 15 seconds. I assigned the notes of a C minor 11 chord to the planets based on their distance from the star, the highest notes going to the closer planets and the lower notes to those further out, and created a system of low frequency oscillators (LFOs) to trigger MIDI notes played by a virtual instrument sampler. Each time a planet completes one full orbit around the star (this can be thought of as the planet passing a certain “trigger point” in its orbit, represented in Fig. 3.1 by a vertical line), its assigned note is played. Because each planet takes a different length of time to complete an orbit, this creates interesting and unusual rhythmic textures.
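The orbit-as-trigger scheme can be sketched in code. The periods below are approximate published values for Kepler-20’s five planets, and the MIDI pitches are one possible C minor 11 voicing; both are my reconstruction, not necessarily the exact patch used:

```python
# Sketch of the orbit-as-LFO trigger scheme, at 1 second = 5 days.
# Periods are approximate published values for Kepler-20's five planets,
# innermost first; pitches are one possible C minor 11 voicing (MIDI numbers),
# with the highest note assigned to the innermost (fastest) planet.

PERIODS_DAYS = [3.70, 6.10, 10.85, 19.58, 77.61]  # planets b, e, c, f, d
PITCHES      = [77, 74, 70, 67, 60]               # F5, D5, Bb4, G4, C4
SECONDS_PER_DAY = 1 / 5                           # 1 s of audio = 5 days

def trigger_events(duration_s):
    """Return (time_s, midi_pitch) events: one note per completed orbit."""
    events = []
    for period, pitch in zip(PERIODS_DAYS, PITCHES):
        interval = period * SECONDS_PER_DAY  # seconds between triggers
        t = interval                         # first full orbit completes here
        while t <= duration_s:
            events.append((round(t, 3), pitch))
            t += interval
    return sorted(events)

for t, pitch in trigger_events(3.0):
    print(f"{t:6.3f} s  note {pitch}")
```

At this scale the outermost planet (period ≈ 77.6 days) triggers roughly every 15.5 seconds, consistent with the “once every 15 seconds” figure above.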
Ephemeris 2: HD10180
The second system I have chosen is HD10180, notable for its large number of planets. With at least seven planets and a further two unconfirmed “planet candidates,” it is markedly larger than any other known extrasolar system, and possibly larger than our own Solar System. The planets themselves are quite spread out; the furthest orbits at 157 times the distance of the closest.
Given the system’s wide range of orbital periods, I used a time scale of 1 second = 7 days to generate the rhythms of the piece, and a B major 9 chord to generate pitches, again assigning the highest pitches to the planets nearest the star. At this scale the highest note is played very rapidly, almost six times per second, while the lowest is played only once every five and a half minutes.
Ephemeris 3: KOI-500
The recently discovered planets orbiting KOI-500 are notable for forming the most tightly packed extrasolar planetary system yet found. The outer four of its five planets are locked into a so-far unique resonance, returning to the same formation every 191 days, making this a system of interest for astronomers studying resonances between bodies.
Because this system is so tightly packed, I used a scale of 1 second = 1 day to define the rhythms of the piece. There is little variance between the shortest and longest orbital periods, so the lowest and least frequently heard note plays every 9½ seconds, meaning that the resonances between the planets can be heard easily at this time scale. I took pitches from an Eb minor 7 chord.
Ephemeris 4: Sol
To complete the set of 4 Ephemerides, I chose to create a similar composition based on our own Solar System, taking pitches from a C Major scale and triggering these based on the planets’ orbits at a scale of 1 second = 1 year.
The Solar System is naturally the stellar system with which we are most familiar, and it spans an enormous range of orbits, Mercury’s orbital period being 684 times shorter than Neptune’s. Because of this large difference, the lowest note in the piece (a G) is heard only once every 2.8 minutes at the given time scale.
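The figures above follow directly from standard orbital periods (approximate values; a quick check rather than the actual patch):

```python
# Approximate orbital periods in Earth years.
MERCURY_YEARS = 0.2408   # ~88 days
NEPTUNE_YEARS = 164.8

# Period ratio between the outermost and innermost planet:
ratio = NEPTUNE_YEARS / MERCURY_YEARS
print(f"Neptune's period is ~{ratio:.0f} times Mercury's")  # ~684

# At a scale of 1 second = 1 year, Neptune's note repeats
# every NEPTUNE_YEARS seconds of audio:
minutes = NEPTUNE_YEARS / 60
print(f"lowest note every ~{minutes:.2f} minutes")  # ~2.75
```

The ~2.75-minute interval is the roughly 2.8 minutes quoted above.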
References
Bernardi, L., Porta, C., and Sleight, P. (2006). Cardiovascular, cerebrovascular, and respiratory changes induced by different types of music in musicians and non-musicians. Heart, 92, pp. 445-452.
Cage, J. (1987). As Slow As Possible. St. Burchardi church, Halberstadt, Germany.
Carter, E. (2002). Harmony Book. Carl Fischer Music, New York.
Finer, J. (1999). Longplayer. Trinity Buoy Lighthouse, London. [Available: http://longplayer.org/]
Kepler, J. (1997). The Harmony of the World (Dr. Juliet Field, Trans.). The American Philosophical Society, Philadelphia.
Leman, M. (2003). Foundations of Musicology as Content Processing Science. JMM: The Journal of Music and Meaning, 1(1).
Levitin, D. (2006). This Is Your Brain On Music: Understanding a Human Obsession. Atlantic Books, London.
Middleton, R. (1990). Studying Popular Music. Open University Press, Philadelphia.
Parker, A. (2012). Kepler 11: A Six-Planet Sonata. [Online]. Available: http://www.astro.uvic.ca/~alexhp/new/kepler_sonata.html [Accessed 3/07/2012].
Rickard, S. (2011). The World’s Ugliest Music [Online]. TEDxMiami. Available: http://tedxtalks.ted.com/video/TEDxMIAMI-Scott-Rickard-The-Wor [Accessed 17/11/2011].
Ruwet, N., and Everist, M. (1987). Methods of Analysis in Musicology. Music Analysis, 6(1), pp. 3-36.
Sagan, C. (1995). The Demon-Haunted World – Science as a Candle in the Dark. Random House, New York.
Schoenberg, A. (1975). Style and Idea, University of California Press, Berkeley.