The timbre and envelope possibilities of electroacoustic music are rich and multifaceted. Timbre and envelope are intricately related, and both are major determinants of how effective a sound event in a piece of music will be.
The term “envelope,” when applied to music, refers to the attack, sustain, and decay of a sound. When a note is performed on an acoustic musical instrument, depending on what the composer has notated in the score, its envelope could begin with a sforzando attack, followed by a soft piano sustain and an even softer pianissimo decay.
The term “timbre” (also called tone quality or tone color) refers to the quality that distinguishes different types of tone production, such as the difference between a flute and a trumpet playing the same note. The timbre of a sound depends on the number and relative strengths of that sound’s component frequencies, as determined by resonance. Acoustic musical instruments are typically based on an oscillator, such as a column of air or a string, which oscillates simultaneously at many frequencies. In acoustic instruments these resonant frequencies are mostly limited to integer multiples (“harmonics”) of the lowest frequency, and they largely determine what the human ear hears as the timbre of that particular instrument. At different dynamic levels, the relations among the resonant and other frequencies of an acoustic instrument typically shift in a predictable manner.
In acoustic music composition, the composer has a wide spectrum to choose from in both envelope and timbre. In digital electroacoustic music, the composer’s options are wider still. The timbre and envelope possibilities of electroacoustic music are almost unlimited: the composer can invent her or his own timbres, which need not be limited to those that resonate in integer multiples of the lowest frequency.
To create such timbres, in pre-digital electroacoustic music one method was to submit a sound to changes which would occur a certain number of times per second, and see what happened. Any change more than 7 times per second was likely to be heard by the human ear as a change in timbre or even in pitch, whereas if imposed less than 7 times per second, the change might be heard as vibrato or tremolo. Many intriguing results could be obtained and used in a composition. But with digital electroacoustic technology, this kind of musical experimentation can be applied with much greater musical control, because smaller changes can be made at a time, which can be more likely to result in subtler, more musically useable results.
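This kind of experiment is easy to try digitally. Below is a minimal Python sketch of the threshold effect described above; the sample rate, modulation depth, and the 5 Hz and 30 Hz rates are illustrative values of my own, not settings from any particular device or toolkit.

```python
import math

SR = 44100  # sample rate in Hz

def modulated_sine(carrier_hz, mod_hz, depth, seconds):
    """Apply sinusoidal amplitude modulation to a sine carrier.

    Below roughly 7 changes per second, the ear tends to hear the
    modulation as tremolo; above that rate it fuses into a change
    of timbre (sidebands appear around the carrier frequency).
    """
    n = int(SR * seconds)
    out = []
    for i in range(n):
        t = i / SR
        # modulator swings the gain between (1 - depth) and 1.0
        mod = 1.0 - depth * (0.5 - 0.5 * math.cos(2 * math.pi * mod_hz * t))
        out.append(mod * math.sin(2 * math.pi * carrier_hz * t))
    return out

tremolo = modulated_sine(440.0, 5.0, 0.5, 1.0)   # sub-7 Hz: heard as tremolo
timbral = modulated_sine(440.0, 30.0, 0.5, 1.0)  # above 7 Hz: heard as a timbral change
print(len(tremolo), len(timbral))
```

Listening to renderings of the two buffers, rather than reasoning about the numbers, is the point of the exercise: small changes in the modulation rate can be auditioned one at a time.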
Using digital technology, a composer has access to a previously unimaginable level of control over both envelope and timbre. The digital options for varying timbre range from timbral modification of a single sample, to timbral modification of the attack, sustain, or decay of a specific sound’s envelope, to timbral modification of a section of a work, or even of an entire work. Likewise, in the imposition of envelope designs, the digital options range from imposing an envelope shape on a single sample of a sound (which will typically be heard as a timbral change), to the imposition of an envelope on a single note, on a whole section of a piece, or even on an entire piece.
But at the present time, some composers have not yet taken advantage of these rich digital options for varying timbre and envelope. For example, as regards timbre, one can frequently hear pieces in which the composer takes a sine wave in a low pitch register and attempts to massively boost the volume of this simple sound, which is essentially a fundamental without overtones. The composer has increased the dynamic of the fundamental pitch, presumably to make it sound more powerful, but such a simple sound lacks the overtones that, if increased in volume, could actually make the sound seem louder.
Such a low-pitched sine wave at high volume presents a health risk of noise-induced hearing loss by damaging the microscopic hair-like projections (stereocilia) in the inner ear. But in its musical effect, as long as that low-pitched sine wave at high volume continues, it obliterates all other musical activity, creating a loud, simple monophony, like the effect of an explosion, which when it’s over leaves a sense of emptiness, a depleted musical energy.
Part of the issue here may be a lack of awareness that there is a difference between the measured sound pressure level (SPL) and volume as perceived by the human ear. Over centuries, acoustic composers have developed techniques of orchestration, as well as musical instruments, that follow the human ear’s perception of volume. Pre-digital and digital electroacoustic technology makes it possible simply to turn up the dial, as it were, on a fundamental pitch, and it’s understandable that some composers have made this weaker musical choice.
The desire for an increase in volume is easily satisfied by other means. The digital options for more effective musical results in timbre and orchestration are enormous, and one needn’t settle for less powerful musical results. A basic tenet of the orchestration of instrumental music, when an increase in volume is desired, is not only for the composer to direct instrumentalists to play louder (which as mentioned above is likely to include increasing the resonance of the higher frequencies in their instrument’s timbre) but also for the composer to bring in other instruments which play higher frequencies than the instruments which are playing the fundamental pitch. If there are other instruments available, acoustic orchestration does not have to simply increase the volume of the instrument playing the fundamental pitch in order to increase the perceived volume.
The same principle can be applied to the orchestration of electroacoustic music: adding higher frequencies as a way of increasing perceived volume. Not only can adding higher frequencies be experienced by the human ear as a more powerful sound and a richer timbre, but since the energy is distributed over multiple frequencies and stereocilia in the inner ear, rather than concentrated on a single fundamental frequency, there is likely to be less risk of damage to hearing.
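As a rough illustration of this orchestrational principle in code: at the same peak level, a tone containing many harmonics is heard as louder and richer than a bare fundamental, because the energy excites many regions of the inner ear. The 1/n amplitude rolloff and the partial counts below are arbitrary choices of mine, made only for the sake of the sketch.

```python
import math

SR = 44100  # sample rate in Hz

def tone(freq_hz, partials, seconds=0.5):
    """Sum `partials` harmonics of freq_hz with 1/n amplitudes,
    then normalize so the peak level matches a pure sine's.

    Same measured peak, but the harmonic-rich version distributes
    its energy across many frequencies, and so tends to be
    perceived as the louder, fuller sound."""
    n = int(SR * seconds)
    out = [sum(math.sin(2 * math.pi * freq_hz * k * i / SR) / k
               for k in range(1, partials + 1))
           for i in range(n)]
    peak = max(abs(s) for s in out)
    return [s / peak for s in out]

pure = tone(55.0, 1)    # bare low fundamental, no overtones
rich = tone(55.0, 12)   # fundamental plus eleven harmonics
print(max(abs(s) for s in pure), max(abs(s) for s in rich))
```

Both buffers peak at the same level; the difference one hears on playback is entirely a matter of spectral distribution, which is the point of the orchestrational principle.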
As with timbre, in many current electroacoustic works little attention is given to the shaping of a sound’s envelope. The character of an envelope, its onset or attack, its duration or sustain, and its decay or ending, is determined by its gradual or abrupt (linear or exponential) nature. But in many current electroacoustic works, sounds are reduced in power and expressiveness by having unclear envelopes, with indistinct beginnings and endings and unclear shapes. This can be remedied in digital technology by simply drawing in the desired envelope shape to sculpt the volume of the sound, and listening critically to the results.
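Drawing in an envelope can be approximated in code. The sketch below imposes a clear attack and release on a buffer of samples; the attack/release fractions and the squared “exponential” curve are illustrative simplifications of my own, not a standard envelope generator.

```python
import math

def apply_envelope(samples, attack, release, shape="linear"):
    """Impose a distinct attack and release on a list of samples.

    `attack` and `release` are fractions of the total length.
    'linear' ramps are gradual; the squared 'exponential' release
    reads as a more abrupt, percussive ending."""
    n = len(samples)
    a = max(1, int(n * attack))
    r = max(1, int(n * release))
    out = list(samples)
    for i in range(a):                 # fade in: a clear onset
        out[i] *= i / a
    for i in range(r):                 # fade out: a clear ending
        g = (r - i) / r
        if shape == "exponential":
            g = g * g                  # steeper decay curve
        out[n - r + i] *= g
    return out

buf = [math.sin(2 * math.pi * 440 * i / 44100) for i in range(4410)]
shaped = apply_envelope(buf, attack=0.1, release=0.3, shape="exponential")
print(abs(shaped[0]), abs(shaped[-1]))  # both near zero: distinct beginning and ending
```

Listening critically after each redrawing of the shape, as the text suggests, is what turns this mechanical operation into sculpting.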
The timbre of a sound is greatly affected by its envelope. The physical characteristics of sound that determine the perception of timbre include both the frequency spectrum and envelope. The perception of timbre is closely related to the physical phenomena of the unfolding of the frequency spectrum over the duration of the envelope. This can be called the spectral envelope. The musical interest of a specific sound event of any duration can be greatly enhanced by introducing different frequencies in the onset, duration and ending of the event, especially by careful construction of the transient frequencies in the rise time of the attack.
The Development of Musical Material from Non-pitched Sound Events
If the unfolding frequencies of a sound are in the harmonic spectrum (integer multiples of the fundamental pitch), the fundamental pitch may remain perceptible to the human ear, and the composer can therefore manipulate this sound event in any of the ways in which clearly perceptible pitch can be used in classical music.
But if such frequencies are not in the harmonic spectrum of the fundamental pitch, then the composer will have to consider manipulating this sound event with non-harmonic spectral envelopes in ways that are not dependent upon the perception of pitch. There are a number of procedures she or he can use to generate musical material, in addition to the preeminent procedures based on pitch.
2. Variation Techniques
Variation Through 16th-Century Counterpoint Techniques
Since the sound events used in electroacoustic music often have little or no distinct pitch characteristics, contrapuntal sequencing devices which are based on pitch—inversion, retrograde and retrograde inversion—may well not generate identifiable or interesting variations. But in my experience, even sound events with non-harmonic spectral envelopes can sometimes generate unpredictably interesting musical results when submitted to the traditional manipulations of 16th-century counterpoint, and therefore should be run through inversion, retrograde and retrograde inversions to see what happens.
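As a rough sketch of “running them through to see what happens”: treating a sound event as a list of samples and a phrase as a contour of measured parameters, the traditional manipulations might be tried digitally as below. The function names and the ratio-based inversion are my own illustrative choices, not a standard library.

```python
def retrograde(events):
    """Reverse the order of sound events in time, and also reverse
    each event's own samples (play each one backwards)."""
    return [list(reversed(e)) for e in reversed(events)]

def invert(contour, axis=None):
    """Mirror a parameter contour around an axis in ratio (log-
    frequency) terms: intervals above the axis become equal
    intervals below it. For pitched material this is classical
    melodic inversion; for unpitched events it can be applied to
    any measurable parameter, e.g. filter center frequencies."""
    if axis is None:
        axis = contour[0]
    return [axis * axis / v for v in contour]

# Hypothetical bandpass-filter center frequencies (Hz) of four
# noise-based sound events with no clear pitch:
centers = [200.0, 850.0, 430.0, 1200.0]
print(invert(centers))                       # inversion
print(invert(list(reversed(centers))))       # retrograde inversion
```

Even when, as here, no fundamental pitch is present, the mirrored and reversed contours are often recognizably related to the original when heard, which is exactly the experiment proposed above.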
Variation Through Analog and Digital Technology
In earlier electroacoustic music there were a number of procedures used to create variants of unpitched musical material. At the Columbia-Princeton Electronic Music Center in the 1960s and 1970s, Vladimir Ussachevsky and Bülent Arel taught younger composers like myself to create a limited number of sound events to use in a piece, and to experiment with each one to generate as many variations as possible. Usually these variations were created by running a unique sound event through various timbre, speed, and pitch modification devices, or by splicing them into sequences or rhythms, or by mixing them into larger events or even into new timbres.
Similar variations can now be made by digital means, if the composer can muster enough imagination to replace the efficient analog sensorium of eye, hand, and ear. It’s my impression that the use of digital technology may tend toward a reduction in an individual composer’s sensitivity and bodily response to physical phenomena, including sound. If this is true, then I expect it is at least in part a result of the physically enervating, facile nature of using the computer. With touch keyboards there is little pressure or contact even between the composer’s fingers and the material world around him or her, and a minimum of awareness of the composer’s own body or physical surroundings.
I look on as Ussachevsky carefully adjusts pitch settings on an analog synthesizer. In the background are three sine-squarewave oscillators.
Variation Through Placement Techniques Modeled on Rhetorical Devices
Beyond the digital possibilities for timbre, speed, and pitch modification, or the digital editing of sound events into sequences or rhythms or their transformation into new timbres, there are a number of techniques, especially regarding placement in time, that are used in the practice of rhetoric.
The study of rhetoric in language was traditionally divided into five parts. I summarize below some of the placement and relational techniques treated in the division of rhetoric called “Style” (Elocutio). These variation techniques may be usefully applied to generate material from carefully sculpted pitched and non-pitched sounds.
I’ve adapted into musical terminology selected definitions from Edward Corbett and Robert Connors’s Classical Rhetoric for the Modern Student. These are offered as brief adaptations of rhetorical devices to the medium of sound, and they focus on the placement and relationship of sounds. They are close to electroacoustic teaching exercises used in earlier analog electronic music studios, such as the Columbia-Princeton Center where I taught such things, and they can easily be performed digitally.
Rhetoric Exercises in Placement and Relation of Sound
Parallelism: Create a series of sound events with similar envelopes and timbres, but with dissimilar pitches, octaves, or locations (e.g., on different loudspeakers).
(same envelope and timbre, BUT different pitches, octaves, and locations)
Antithesis: Create and alternate two sound events which contrast in envelope and timbre.
(same pitch, pitch register and location BUT different envelope and timbre)
Anastrophe: Create five sound events that are heard three times in the same order. Then reverse the order.
ABCDE, ABCDE, ABCDE,
EDCBA, EDCBA, EDCBA
Parenthesis: Create five sound events that are heard three times in the same order. Then insert a new sound event in a position that interrupts the previously heard flow of events in that phrase.
ABCDE, ABCDE, ABCDE,
ABXCDE, ABXCDE, ABXCDE
Apposition: Create two separate sound events, then put them together so that the second modifies the first. For example, Sound Event #1 begins with a 2-second crescendo and ends with a 2-second diminuendo, but when immediately followed by Sound Event #2, Sound Event #1 ends with a 1-second crescendo.
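The placement exercises above reduce neatly to list manipulations, which is one way to perform them digitally. In this minimal Python sketch the letter labels stand in for sound events, exactly as in the diagrams above; the function signatures are my own.

```python
def anastrophe(phrase, repeats=3):
    """Anastrophe: state the phrase `repeats` times in its original
    order, then the same number of times in reversed order."""
    return [list(phrase)] * repeats + [list(reversed(phrase))] * repeats

def parenthesis(phrase, new_event, position, repeats=3):
    """Parenthesis: establish the phrase, then interrupt its
    familiar flow by inserting a new event at `position`."""
    interrupted = list(phrase)
    interrupted.insert(position, new_event)
    return [list(phrase)] * repeats + [interrupted] * repeats

phrase = ["A", "B", "C", "D", "E"]
print(anastrophe(phrase))            # ABCDE x3, then EDCBA x3
print(parenthesis(phrase, "X", 2))   # ABCDE x3, then ABXCDE x3
```

In practice each label would be replaced by an actual sound event (a sample or a synthesized gesture), and the resulting orderings rendered and auditioned.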
I want to start this post with a challenge to my musicologist colleagues (I hope there are musicologists reading NewMusicBox), but it is really a call to action for us all. The exploration of electroacoustic music, its historical and social dimensions, is long overdue. In fact, as so many pivotal figures pass away, I cannot fathom why there has not been a rush to collect primary source material, let alone to interpret it. The lack of this activity spurred the creation of the Video Archive of Electroacoustic Music and gave the collecting of oral histories urgency when my wife and I started in the 1990s. Much as we both care about this work, we are no longer able to collect these oral histories, yet this work is increasingly important today. It was alarming to us at that time that no one had captured the stories of electroacoustic music’s pioneering composers and engineers. Though aware of the great work being done by Vivian Perlis at Yale, we knew that no one had yet filmed the stories of figures such as Bebe Barron (composer of experimental electronic film scores and collaborator with John Cage, Earle Brown and others) who was very ill. Neither was there much about the founders of any of the first studios in the USA, including the Columbia-Princeton Electronic Music Center, San Francisco Tape Music Center, University of Illinois studio, and Bell Laboratories. We rushed to capture what we could, given scant resources and the many other demands on our time. We were certain then that whole careers could be made mining these materials if only someone could preserve the stories. And today there remain untapped opportunities to do critical archival work, to interpret the stories, and to study the music itself.
Bill McGinnis, the first engineer of the San Francisco Tape Music Center (before Don Buchla came in). Photo taken in 1998 when we interviewed him in his home studio in San Francisco.
Twenty years since we began our collecting, electroacoustic music is still essentially unclaimed territory—especially outside of its popular music dimension. It seems ironic that, in a time when we fetishize even the most mundane activities and record them with our phones, there is still such little effort made to capture and interpret this pivotal transformation of music. There have been a few modest bright spots, including the conference on the late electroacoustic works of Luigi Nono at Tufts that I was pleased to participate in this past March. Alas, this seems to be the exception, which leads me to revive my speculations on some of the forces that have delayed the development of a subspecialty in electroacoustic musicology. I write this with the hope that things may be poised for a change and that some of you will take up the challenge.
Why then has this work been slow to start, and what might change the situation? For one thing, composers and musicologists still do not communicate with each other very much or very well. This is true even at my own educational institution, where relations between programs are excellent and we actually all like one another. I contend that there is still too much of a separation, often due to historical animosity and unfortunate, longstanding battles over turf. These are destructive and if we do not put them to rest and start doing more to think as one profession, we will continue to fade into irrelevancy. I do not object to the study of popular music, but I do not subscribe to an idea, gaining in currency, that turning to the study of popular genres is the path to relevancy in music education. Instead, I propose that a more robust conversation about musical ideas and the creation of new work, the kind of conversation found in all of the other arts, is essential. Such a profession-wide conversation is currently lacking. I mention this larger issue here because there are lots of dimensions to electroacoustic music that make it fertile ground for engendering a lively and productive scholarly argument of the kind we desperately need.
Another factor to consider is the technology of electroacoustic music, which presents both opportunity and obstacle. On one hand, the novelty inherent in the constant flow of new music hardware and software products is attractive to a broad segment of the technology-worshiping public. The energy of invention around this marketplace is alluring, lending a pseudo-scientific legitimacy that other music does not enjoy. Old technology—say, a violin—becomes invisible to us, but this means that instead of being diverted by the instrument itself, we are more likely to engage with the music. The violin is very little changed over its several hundred years and has ceased to be a novelty, so we are able to ask instead: what new musical ideas can this instrument express? In fact, fewer people than ever study the music itself, which is part of a larger problem, but this also explains something about why we are stuck in the starting gate with electroacoustic music.
The obsession with the devices of electroacoustic music is as much a problem for composers as it is for scholars. Music by definition is highly abstract, and thinking in music is hard, even more so when there are frequently no scores to consult. It is much easier to become immersed in the features of some new “toy.” The non-abstract, concrete aspects of music hardware and software make these much easier to relate to than music itself.
I confess that the allure of the first synthesizers in the 1960s was one of the things that drew me into composing electroacoustic music. At the time I was in high school; I had been composing for a couple of years and quite accidentally was lent an ARP Odyssey synthesizer. I had heard Mort Subotnick’s Silver Apples of the Moon and was frustrated not to be able to get similar results from this keyboard-reliant minisynth. Over the subsequent years I was introduced to many other analog synthesizers from Moog, Buchla (the one used by Subotnick), Serge, EML, and others, and each time found I was working against rather than with the differing architectures. I really wanted the machine to become almost invisible and to allow me to make the music I was hearing. My relationship to cars is a pretty good analogy: they look shiny, sexy, and inviting at first, but once I drive one a little, it becomes just a way to get from point A to point B—at least until something goes wrong.
Still, as a steward of one of the remaining Buchla 100 systems, I have an acute awareness that the history of the technology is also in need of attention. There are many stories that need to be told through this lens: How is it that the Buchla and the Moog are fundamentally similar, yet so different? What are the relationships among the musical approaches composers take and the idiosyncrasies of particular technology? I am ultimately, however, much more interested in the music that this technology allows us to create, and I imagine that at least some musicologist colleagues would be, too. But if there is little work being done on the history of the instruments, there is even less addressing the music itself. For perspective, consider again how a historical change in our relationship to the instruments is part of this mix. It would seem odd to point out that there is a much larger body of work on the piano music of Mozart than on the evolution of the piano, yet in electroacoustic music, this balance is reversed. All of the energy in the room is constantly sucked up by the vast and ever-expanding array of new products, and so little is left to consider what is being achieved musically with these tools. I think we all recognize this phenomenon, but that recognition has not changed anything much in the forty years I have been in the field.
Electroacoustic musicology, far from being a narrow subspecialty, could develop as a broad range of investigative possibilities, many organically interdisciplinary in nature. The relationship to the history of science and technology is perhaps the most obvious, but there are many other linkages. And as much as I argue here for this work as a branch of so-called “art music,” the connections to the world of popular music represent the richest pathway between the academy and the larger public. Having collected the oral histories, I often feel too that one especially ripe approach would be ethnographic.
As I mentioned in my previous post, electroacoustic music inhabits a network of communities, some organized around institutions and others around compositional approaches or particular technologies. If one adds the recognition that there is a large body of music without notated scores, the similarity to the world of the ethnomusicologist is inescapable. Ethnographic work, though, is only one possibility of many. The field is wide open and accessible on multiple levels to any ambitious and proactive scholar who is willing to eschew the conservative canon in favor of a somewhat more recent and—arguably—field-changing history.
I’ve composed works using electroacoustic technologies since 1963, and I want to share with you over the next several weeks some of my thoughts about the current state of the medium. Since I am trained as a Western classical composer, my comments will be from that perspective.
1. Structural Issues in Current Electroacoustic Music
The first subject about which I’d like to share my thoughts with you is the issue of structure in current electroacoustic music. I serve on the board of The Association for the Promotion of New Music (APNM). We at APNM will soon be issuing a call for composers to submit their work for an electroacoustic concert in Spring 2017. We intend to award the performances to electroacoustic compositions of structural clarity and elegance.
Why electroacoustic music focusing on structural clarity? Because, in our opinion, many current electroacoustic works are weakened by not having clear structure. Even if some may be promising in other ways, we believe many current electroacoustic works suffer from an overall sameness of events throughout the duration of the piece.
If what I am describing is true, then why is this happening, and why now? In the following I’ll consider a number of possible reasons. I’ll start with some thoughts about what might be the smallest structural unit in music, and I will focus most of this discussion on structural issues involving timbre.
Microstructure: the changing of small patterns in short periods of time
Music is a time-based art form. However long a piece of music lasts, the listener’s interest must be engaged, second by second, moment to moment, led on in time. If this teasing of the listener’s ear does not happen, the immediate result could be assumed to be boredom and disinterest. How do we as composers—of either acoustic or electroacoustic music—create this teasing, this interest, moment to moment?
I believe it’s through a certain degree of change, in a limited number of sonic parameters, at one time. For example, if pitch is the parameter being changed at that moment to tantalize the listener, then some or all other parameters of sound are held relatively constant, such as tempo, duration, timbre, or volume.
The lack of change in the held parameters frames and intensifies awareness of the change which is taking place in the chosen parameter. The degree of change, the choice of which parameters are changed, and the choice of which parameters are held relatively constant will become important identifying characteristics of a particular piece of music. The audible distinction between changing and non-changing parameters of sound could be considered the baseline of musical structure, the momentary movement from one musical event to another.
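As a minimal illustration of this baseline of musical structure (all values below are arbitrary): pitch is varied from note to note while duration, timbre, and volume are held constant, so that the held parameters frame the pitch movement.

```python
import math

SR = 44100  # sample rate in Hz

def note(freq_hz, dur_s=0.25, amp=0.5):
    """One note with fixed duration, fixed timbre (a bare sine),
    and fixed volume: the held, non-changing parameters."""
    n = int(SR * dur_s)
    return [amp * math.sin(2 * math.pi * freq_hz * i / SR) for i in range(n)]

# Pitch is the single changing parameter, moment to moment.
pitches = [220.0, 247.5, 277.2, 220.0, 330.0]
sequence = []
for p in pitches:
    sequence.extend(note(p))
print(len(sequence))
```

Swapping which parameter varies (letting duration change while pitch holds still, say) produces a phrase with a recognizably different structural character, which is the identifying choice the text describes.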
But, you might say, isn’t the ability to separate out and manipulate the different parameters of sound the very focus of professional training for a classical composer? How can a composer with years of professional training fail to create this micro-level of structure?
One reason microstructural clarity may be more difficult to create is that in current digital technologies the composer largely creates her/his own timbres, whereas in writing for acoustic instruments, a composer uses already highly developed, distinctive timbres that can be easily heard by the audience to be different from the timbres of other acoustic instruments. The use of acoustic instruments gives a wide palette of different timbral colors, and thereby enables acoustic instrumental timbres to be easily used as a changing parameter in microstructural momentary movement through time.
The composer writing for acoustic instruments can then use additional orchestration skills to further separate the musical materials played by different acoustic instruments, maintaining clarity and contrast between them. Such momentary microstructural contrasts can then be further extended into larger, macrostructural sections of contrast, large structural units.
The role of inadequate timbral differentiation in weakening microstructure
In contrast, the electroacoustic composer using current digital technology generally has to create his or her own timbres. At present there are a number of ways of doing this, ranging from commercially available off-the-shelf electronic timbres, to recorded sound samples of musical instruments or sounds from daily life, to sounds generated through the use of software synthesis programs such as Csound and Cmix. Thus any composer using digital technology has quick and easy access to sound samples of all sorts, easy access to fairly crude built-in modes of timbrally modifying those samples, and easy access to crude amounts of reverberation in which to drench the sound samples. Without careful consideration and restraint, by using these current digital technologies the composer can easily fall into the error of constructing sounds whose timbral spectrum will conflict with others, resulting in the masking and muddying of whatever microstructural change might take place in the music at that moment.
Through such blurring of timbral contrast, the otherwise forward movement of microstructural change can be reduced to sameness, to non-change, to some degree of boredom.
Is this any different, you might ask, than in pre-digital analog technologies? I would say it’s quite different. When I listen to current electroacoustic works, I often hear the blurring and muddying of timbres, significantly more than in pieces made with pre-digital analog technologies. I think the main reason for this is speed: current digital technology supplies the composer with very fast access to ready-made or minimally developed timbral solutions. If the composer does not have rigorous discipline and cannot resist caving in to this fast but undistinguished solution, the result is a set of timbres that are almost identical to those in many other composers’ pieces, timbres that are relatively undeveloped in formant structure, that perhaps contain unfiltered white noise, are indistinct and can readily mask others, and that are experienced to some extent as boring.
In contrast, with the use of pre-digital analog technology much more attention, consideration, effort and time was required of the composer to create almost any timbre at all. The analog process of creating a timbre could involve something like setting three or four sinewave oscillators to create a cluster of throbbing interference patterns, or choosing a sound sample from a tape recording of an animal sound and, say, recording it backwards while running it through a filter. The only sound source in the analog studio that was as fast to access as sound sources are in the digital studio was the analog synthesizer, which could immediately generate extremely simple timbres, identical to those of any other composer using such a synthesizer.
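The sinewave-cluster setup described above is trivial to reproduce digitally, though in the analog studio it demanded patient tuning by ear. A minimal Python version follows; the particular frequencies and duration are arbitrary choices of mine.

```python
import math

SR = 44100  # sample rate in Hz

def sine_cluster(freqs_hz, seconds):
    """Mix several closely tuned sine oscillators. Their small
    detunings produce slow interference (beating) patterns at the
    difference frequencies, the throbbing clusters described in
    the analog-studio practice above."""
    n = int(SR * seconds)
    out = []
    for i in range(n):
        t = i / SR
        s = sum(math.sin(2 * math.pi * f * t) for f in freqs_hz) / len(freqs_hz)
        out.append(s)
    return out

# Three oscillators a few Hz apart: beats at 2.5 Hz, 3.5 Hz, 6 Hz.
cluster = sine_cluster([220.0, 222.5, 226.0], 2.0)
print(len(cluster))
```

The digital version removes the slowness of the analog process, which is exactly the double-edged convenience the surrounding discussion is concerned with: the timbre is as quick to make as it is to leave undeveloped.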
The Columbia-Princeton Electronic Music Center in 1985.
The diminishment of macrostructure through timbral blurring and overuse of reverberation
Timbral blurring can thus result in the perpetuation of sameness by weakening the microstructural movement from one moment to the next. If timbral blurring is perpetuated throughout a piece of music, it can reduce the pattern of the structure of an entire piece into simply a beginning, an enduring in the same way for a period of time, and then an ending.
Another issue that I hear in current electroacoustic music that weakens timbral contrast and can easily diminish the perception of sectionality over an entire piece is the overuse of reverberation. In some current works the composer seems to have run the entirety of their piece through the same degree of reverberation, thereby blurring whatever timbral range and locational contrast might have otherwise existed. The wholesale immersion of all music material into reverberation moves it perceptually away from the listener and into the middle or far distance by reducing high frequencies and reducing volume. As with the blurring of a particular timbre, the result of excessive reverberation is an increase in blandness and sameness, a lack of contrast—in addition the lack of “presence” or closeness of the sound to the listener.
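To make the trade-off concrete, here is a toy sketch, not any real reverberation algorithm: a single feedback delay with a one-pole lowpass in the loop, mixed wet against dry. The parameter values are arbitrary; the only point is that each pass through the loop removes high frequencies, so a mostly-wet mix pushes the material away from the listener, while a low wet level preserves presence.

```python
def reverberate(dry, wet_mix, delay_samples=2205, feedback=0.6, damp=0.4):
    """Toy reverberator: a lowpass-damped feedback delay line.

    Each recirculation loses high frequencies (the one-pole
    lowpass), mimicking how heavy reverberation dulls timbre and
    reduces 'presence'. wet_mix in [0, 1] blends reverb vs. dry."""
    buf = [0.0] * delay_samples
    lp = 0.0
    out = []
    for i, x in enumerate(dry):
        echo = buf[i % delay_samples]       # signal from delay_samples ago
        lp = lp + damp * (echo - lp)        # high-frequency damping
        buf[i % delay_samples] = x + feedback * lp
        out.append((1.0 - wet_mix) * x + wet_mix * lp)
    return out

click = [1.0] + [0.0] * 44099               # an impulse, one second at 44.1 kHz
distant = reverberate(click, wet_mix=0.8)   # drenched: blurred, far away
present = reverberate(click, wet_mix=0.2)   # drier: close, distinct
print(distant[0], present[0])
```

Auditioning the two renderings side by side makes the perceptual distancing audible in a way that the settings alone do not, which is why hearing one’s work at concert volume in a good room matters.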
Why do some composers not realize the weakening effect of immersing their musical material in lots of reverberation, with the resulting loss of high frequencies and presence? I suspect it may have something to do with never hearing their compositions in public concert at the proper volume in an acoustically optimal space.
2. The Importance of Public Presentation of Electroacoustic Music
I’m more and more convinced of the importance of such public concert playback of electroacoustic music, in contrast to solitary listening on one’s home loudspeakers or on one’s computer speakers, or—worse—on earbuds or on a smart phone, or even on high-quality headphones. A recent performance of my piece The River of Memory for trombone and fixed audio media at Opera America’s fine recital hall made it stunningly clear to me that electroacoustic music, just like acoustic music, sounds most exciting when shared in public—in live concert and in an excellent acoustical space. The acoustic spaces that are superb for the shared experience of a public concert of electroacoustic music sound even more exciting when there is a live instrument also playing. The quality and quantity and placement of the loudspeakers of course are the other major factors for top-level electroacoustic music playback in an acoustically appropriate hall.
The concert playback of electroacoustic works originally written for dance or theater, presented as audio alone, presents other issues related to structure. On June 14 at 12:30 p.m. my composition The Mud Oratorio will be presented in concert playback by the New York City Electroacoustic Music Festival (NYCEMF) at the Abrons Art Center in Manhattan. This 51-minute computer piece for dance-theater, for which I wrote both music and libretto, was commissioned by Dance Alloy of Pittsburgh and Frostburg State University in Maryland. I created the work around two Nature Conservancy swamps in Frostburg, Maryland, whose flora and fauna survived the ice age. My voice narrates, with bird and animal imitations by a local biologist and sounds constructed by digital sampling and software synthesis.
This will be, of course, a concert presentation, with no staging. The work is in four sections: “Spring,” “Summer,” “Fall,” and “Winter.” The four sections of the music are indeed very different from each other, differentiated by tempo, timbres, and the like, and will come across well if the venue is acoustically intimate and the loudspeakers are of high quality. But the musical structure of this piece was created around the verbal structure of the libretto and the visual structure of the dancers onstage. Since this is a concert performance, to substitute for the visual macrostructure of this large work I intend to have the libretto projected upon a screen.
I hope to share some more thoughts about electroacoustic music with you soon.
Alice Shields is considered one of the pioneers of electroacoustic music. Her works include some of the first electronic operas, as well as vocal, chamber, and electronic music influenced by world music, dance, and theater. She received a doctorate in composition from Columbia University and has been associate director of the Columbia-Princeton Electronic Music Center and director of development of the Columbia University Computer Music Center. Recent performances include the world premiere of Quartet for Piano and Percussion by Iktus Percussion on June 4, 2016; The Mud Oratorio at the New York City Electroacoustic Music Festival on June 14, 2016; and The River of Memory for trombone and computer music by the Association for the Promotion of New Music on May 18, 2016. For more information please visit www.aliceshields.com and https://soundcloud.com/user-aliceshieldscomposer.
Electronic works by ten American composers will be presented during the 10th Forum Wallis, an international festival for new music taking place May 12–16, 2016, at the historic Leuk Castle in the Canton of Valais, Switzerland. For the second time in the festival’s history, an international competition for electronic compositions was held. Out of 289 submissions from 45 countries, a total of 24 works were chosen.
Below is a list of the ten featured works by American composers. (Click on a title to hear the piece.)
The other composers featured during the festival are James Andean (Finland), Laurence Bouckaert (France), Mikel Chamizo (Spain), Manfredi Clemente (Italy/UK), Jannik Giger (Switzerland), Orestis Karamanlis (Greece), Alain Michon (France), Marco Molteni (Italy), Mirjana Nardelli (Italy), Yasuhiro Otani (Japan), Emilie Payeur (Canada), and Leonie Roessler (Germany/Netherlands).
A conversation on the second floor of the historic Ear Inn (est. 1817) in New York City
April 17, 2015—1:00 p.m.
Video presentations and photography (unless otherwise stated)
by Molly Sheridan
Transcription by Julia Lu
The variety of activities that Charlie Morrow has been involved in for more than half a century is staggering even by today’s standards, when wearing numerous hats is almost a prerequisite for success as a composer. The almost always bowler-hat-clad Morrow was writing conceptual pieces that predicted Fluxus as a high school student in the 1950s and twelve-tone scores under the tutelage of Stefan Wolpe at Mannes in the 1960s. He went on to develop alternative performance spaces, environmental music (including a widely publicized concert involving performances with fish), and music for multiples of the same instrument in the 1970s. While immersing himself in all those activities, he built one of the first private electronic music studios and wrote hit arrangements for Simon and Garfunkel, as well as The Rascals and Vanilla Fudge. He also penned some of the most earworm-inducing commercial jingles, which promoted everything from Diet Coke and Hefty garbage bags to special express subway service to JFK Airport.
Although I had never had a lengthy conversation with Morrow until we met up with him for this NewMusicBox presentation, he was a major role model for the choices I have made in my own life: he was a Columbia grad who, during his time there, immersed himself in world music; a musical creator who was never beholden to any particular musical genre or the limitations that adherence to any genre demands; for many years he was also the publisher of EAR Magazine, a seminal publication for new music which was one of the main inspirations for NewMusicBox. So I had tons of questions I wanted to ask him. Some of his answers led in directions I didn’t anticipate. For example, when I asked him about his earliest musical experiences, he actually spoke about events from the first year of his life and even shared a memory he had of being born.
“I always wanted to remember my birth,” Morrow explained. “I spent a number of years working back towards it. Using milestones of memory, you can find your way back to things that are lost in your memory by locating things; you can be very certain. … I remember that the physician who delivered me stank; at least he smelled bad to me as a living creature who had never smelled anything outside of amniotic fluid before. Then I remembered feeling crushed and totally thrashed in the birthing process. Then I remembered floating and hearing voices outside of my mother and having the sense of the world beyond the place where I was as my consciousness evolved.”
When we talked about his 1967 Marilyn Monroe Collage, which he created at the invitation of Andy Warhol to accompany an exhibition of Warhol’s legendary iterative Monroe silkscreens, I thought it would lead to a discussion of his gorgeous Wave Music pieces, which are scored for multiples of the same instrument—a process that seems aurally analogous to filling up a wall with iterations of the same visual image. Instead, he said, his impetus came from attempting to perform concerts with toadfish!
“I had decoded the language of toadfish and did a fish concert,” said Morrow. “In the course of doing that, I would get my audiences to make the sounds and then I decided that I would do a herd of the same instruments. It all grew from having heard the fish … as groups of individuals all signaling and communicating with each other. … Every living creature has evolved being able to receive vibrations from all of their vibratory receptors in a certain bandwidth and a certain sensitivity level and then a certain selectivity level. … We’re in two different parallel universes with different band widths, different perception and reception. But if you do get a message back—it seemed that we were able to understand in both field frogs and toadfish a kind of communication.”
As luck would have it, the fish concert took place right after Richard Nixon resigned from the United States presidency, and it became an international news story since it was a quirky distraction from current events.
“It was a total accident,” Morrow acknowledged. “He resigned the night before. He didn’t send anyone an invitation about his resignation. I mean, it wasn’t like ‘a month from now, would you all like to watch me resign?’ You know what I mean? What had happened was at that time, as part of my jingle business, I discovered PR. There was a guy named Morty Wax, who was my press agent at that time and who was very clever. … Since he was a respected press agent, everybody knew that it was going to happen. And on that morning, it became a world press event because everybody needed some distraction from the horrors of politics. … I heard reports of it from all over the world: Nixon resigned last night and this morning a group of artists in New York gave a concert for fish. It was that kind of ironic spin.”
Although Charlie Morrow is the quintessential DIY composer, he often thinks big—extremely big. Over the past decade, he has developed a revolutionary three-dimensional soundscape design, and his recent projects have included everything from a 72-speaker immersive environment as part of Nokia World in Barcelona to a permanent sound installation at the new display of the Magna Carta at Lincoln Castle in England. For next year’s summer solstice, June 21, 2016, he is mounting an unprecedented 24-hour concert that will take place in 24 different time zones.
“A mass performance should be either a totally composed piece like the Monkey Chant or Berlioz’s Requiem or something that’s created by the people who are doing it,” Morrow opined. “I’m sort of in the middle, but I think the pieces themselves have to achieve an audience. … My job has been to keep people surprised and interested as a sound maker. Whatever I turn my attention to, the idea is to bring something to it that makes it worthy of attention and, at the same time, to find some balance where it doesn’t burn itself out from multiple hearings.”
Morrow demonstrates 3-D sound for us using his laptop.
Frank J. Oteri: We usually tend to begin at the beginning, in so far as we can begin at the beginning. There are many beginnings. But where I wanted to start our talk isn’t exactly at the beginning. I wanted to talk with you about your years as an undergraduate at Columbia, because I’ve read in several places that you studied with Colin Turnbull, who wrote a very popular ethnography about the Ituri rainforest pygmies and made some amazing field recordings of their music. So I was curious about how you, as an undergrad, became interested in the music of other cultures.
Charlie Morrow: Well, I’ve always been interested in the music of the world because I’ve been interested in radio. I’m a radio amateur. I started out as a shortwave buff; I would listen on many frequencies to sounds from all over the world. I came quickly to understand that there was a wide variety of music that was—I would say—misunderstood, or marginalized, or made other than mainstream by—at the time—the prejudices that divide anthropology from sociology. It was almost as if there was a racist component to it. If you weren’t white and from Western Europe, or amongst the elite of Asia, then what you did was somehow in a second category.
This impulse has been running through all of my work. A large part of it is because some of the more excellent things that music’s about are actually part of world music and older cultures, and it has been lost by the commodification, commercialization, and conversion to listener-directed product making. I come out of signaling—bugling, music for the time of day and for the location that you’re in, the idea of it being involved in some social structure like the Boy Scouts, the military, or the church calendar. I come from a multi-cultural city, Passaic, New Jersey. We had representatives of practically every religion and many, many countries there. So there was a sense, just walking through Passaic, of a wide variety of people. There were many small communities—Catholic, Russian Orthodox, Greek Orthodox, many flavors of Judaism, small synagogues the size of this room. But the one thing that characterized most of these groups, I found later on, was the incredible insularity of “we’re right and everybody else is wrong,” which is what I think created an atmosphere, when there finally was a kind of elite majority that controlled the pantheon of Western arts, that they said, “Well, this is ours.” And there were basically too few of everybody else holding onto their own traditions.
That’s a long introduction to the fact that I studied principally with Willard Rhodes at Columbia, because he was my ethnomusicology teacher, and then I met Colin Turnbull informally through the Museum of Natural History. I met him and I would go through the storeroom. Our discussions were based on the functionality of music. Functionality is a huge issue with me. Not just signaling, but ceremonial aspects and particularly the power of materials. A lot of my early writings concerned how, for example, something living has to die in order to become a musical instrument.
It’s a big theme that runs through my work. The relationship of death and life in Western music and, in particular, instruments—that’s what was so fantastic. You know, people play elephant tusk horns, Tibetan thigh horns, and I’m a horn-trumpet-wind person. The idea of blowing the breath of a living person through part of something dead was a connection to a larger world, rather than something morbid for me. I think this is what brought Colin Turnbull and I to our relationship because he felt very much the same. He saw magic everywhere. And he also saw clearly the way people treated each other. I think that he, in his own life—particularly in choosing a male pygmy as his husband—was putting himself on the line. He was a high-risk guy.
FJO: It’s fascinating to hear you say that as a teenager you already had the idea of infusing the past into the present and that it’s been a running theme in all of your creative and theoretical work ever since then.
CM: Yeah, actually it was earlier. I think it came from one particular question which I had had until I answered it, which was that I always wanted to remember my birth, and remember before I was born. I spent a number of years working back towards it. Using milestones of memory, you can find your way back to things that are lost in your memory by locating things; you can be very certain. And I finally went back and was able to remember my own birth. I remember that the physician who delivered me stank; at least he smelled bad to me as a living creature who had never smelled anything outside of amniotic fluid before. Then I remembered feeling crushed and totally thrashed in the birthing process. Then I remembered floating and hearing voices outside of my mother and having the sense of the world beyond the place where I was as my consciousness evolved, so going backwards into that process led inextricably to an explanation of why I thought this way.
FJO: This is amazing! Usually we begin these discussions at the beginning, but we’ve never talked to anyone about the very beginning.
CM: My beginning, anyway.
A maverick from the very beginning. (Photo courtesy Charlie Morrow.)
FJO: So, alright, since you went all the way back there, I’m going to try to go back there with you. Do you remember the first time you heard something that was described as music?
CM: Yes, I do. I remember that my parents had a record. I must have been about three years old, and they said, in playing the record, that this was music. I remember hearing a recording of Stravinsky, a narrated record about music, and then they said there’s some new and wild things like Stravinsky. And it went on from there. I had limited experience of music making outside of our house. But actually, my first real experience of the power of music was much, much earlier, when I was about a year old. I was born in ’42, and my father and mother both were psychiatrists. My dad wanted to practice psychiatry in the military. He had volunteered for the Navy, but he was too short by some tiny amount. So he wound up in the Army. They put him into an army psychiatric hospital in Lexington, Kentucky. And my mom and I took a long train trip down to visit. It must have been the summer after I was born. There was a military parade for the officers. I remember how I could not keep the sound of the drums outside of me. It seemed to penetrate my body. I had never experienced anything that loud or that close. That became the earliest experience for me of music, just that very, very intense military drumming.
FJO: What’s so interesting about that is that one of the instruments you would have heard in that military parade—trumpet—became your first instrument, and also that you wound up doing so many outdoor, environmental pieces. So the seeds of those later developments go all the way back to this initial musical encounter.
CM: I think you’re right. It also came from the intense liberation that I felt as a bugler in the Boy Scouts. The experience of blowing “Reveille” or “Taps” from a hanging metal cone megaphone, and blowing it in three different directions—I think that that completely convinced me of where I was going. It sounded so different in each direction. Having three shots at playing it, having learned and heard the first and then the second, and the third, was an iterative experience that made me well aware of what environments are about.
FJO: That also ties into your whole development of the 3D sound cube and directionality much later on.
CM: You’re absolutely right, because what I wanted to achieve with the 3D sound cube was a natural feeling that you could locate where sound came from. Because it makes you nervous if you can’t, because your life is threatened on a very primordial level if you don’t know where sound is and what’s making it. It could get you, or you might not get it if you needed to eat it or any number of things that are defined by instantly resolving where something is, and instantly making a judgment about what it’s about. You know, is it threatening? Is it appetizing? Is it intriguing?
The schematics for the MorrowSound 8.1 System Single Cube array.
FJO: In that sense, sound is very different from visual information because we’re trained to sense perspective, which enables us to know how far away something is just by seeing where it is. Sound, on the other hand, we perceive as a non-corporeal, disembodied thing. But of course it is physical, too, but it’s not something we can necessarily see.
CM: Also our eyes are frontal, but our ears—divided left and right—resolve sound in a full spherical environment. Your eye is not trying to invent anything for the portion that it doesn’t see, unless you stick it in an Oculus or another kind of enhanced, immersive experience. But your ear does that all the time; your ear is resolving x, y, z, w, and t at least—w is where the observer is, t is over time.
FJO: Before we get too theoretical, I want to head back to Columbia. There’s another person whom you studied with there at that time who is one of my heroes because of his incredible open-mindedness—Otto Luening. So I’m curious to learn more about your relationship with him and what his influence was on you.
CM: Well, I had a class in music history with him, which was quite nice because he was able to speak very personally about the materials in music. I think that his most fascinating teaching was the multi-level interpretation of everything. You don’t just hear the music or see it in one way; he always explained what it was for, who did it, and what the environment was at the time. He was also interested in the gestural aspect of the music. And he had a great sense of humor. I remember once he was talking about one of the Scarlattis, about the tight little playing of very delicate and carefully honed keyboard music. And he said, “They did that ‘cause in those days you couldn’t go like this: bang-bang-bang-bang.” He always had little side trips like that. He was constantly riffing on what he taught, which created open doors because he took everything he said with a big grain of salt.
FJO: Did you have any involvement with the electronic music studio that he was developing?
CM: I didn’t work in it, but I became very familiar with it. Being a techie, I was fascinated and I met a number of the people who worked there. I knew Bülent Arel and some of the South American guys who were working there, and I continued to have a connection to the studio. I maintained a steady relationship with Charles Dodge. I stayed connected because they were proactive in creating a world of their own. Early sound studios were very particularly made to the interest of their creators. I had one of the first privately-owned sound studios in New York. When I moved to 365 West End Avenue, we built a studio there and I had a team of people working with me. Our studio was totally different from what Columbia was about; it was concerned with programmability, repeatability, and the accuracy of a lot of the work. All of those issues distinguish what I’ve eventually done with 3D immersive sound from what the entire industry is doing with it.
Charlie Morrow at his NYC studio, circa 1969. (Photo courtesy Charlie Morrow.)
In a way, Columbia’s studio set me off on a path of being a staunch independent, doing more gesturally-based things, working with cheaper equipment, working with approaches that were more connected to the natural world. I always wanted to get closer to electrons as part of nature; my field at Columbia was chemistry. Chemistry’s an extraordinary embodiment of metaphysics.
FJO: So you weren’t a music major?
CM: No, I was a pre-med student.
FJO: So you were trying to follow in the footsteps of your parents?
FJO: Interesting. I didn’t realize that. So you studied music history with Luening, not composition?
CM: Yes, but I had studied composition. My first composition teacher was a guy named Carlo Lombardi, when I was in Newark Academy. Carlo was a student of Dallapiccola, so I got a really interesting education right away—a really Italian take on Viennese 12-note music. Carlo was also a very good keyboard player; he could play anything I could write, so I suddenly started to have good performances of what I was doing. And he encouraged me to go to Interlochen where I was in high school composition and orchestration classes. I worked with a number of teachers there and things got played. What really took my career along was the idea of having the work played, because I’d written for years before that but I didn’t have anybody to play it, except if I wrote it for trumpet, which was my instrument.
FJO: But what interests me is that already, before you went to college, as a high school student, you were writing conceptually based pieces. And this was music that was 180 degrees away from 12-tone music. You were writing downtown music in high school back before the words uptown and downtown took on polemic meanings.
CM: Yes, I was. I did a slow Gabrieli piece.
FJO: That piece [Very Slow Gabrieli] actually reminds me of the music of a younger composer, Jacob Cooper, who—nearly 60 years after you wrote that piece—has created a whole body of fascinating repertoire based on slowing down older music.
CM: Interesting. It’s nice to know that the door, once it was opened, is happening. But my favorite [among my early pieces] is the surprise music where at a pre-arranged time, when an orchestra’s playing, everyone stops and squirms and belches and makes funny little noises, unknown to the conductor. It’s a guerilla event in the middle of an orchestra performance and it really worked out well.
FJO: That’s a very Fluxus idea, but this is pre-Fluxus.
CM: It is. We’re talking the 1950s.
FJO: But it makes me wonder. I went to Columbia in the ‘80s, which was at the tail end of what some people perceived as the period of 12-tone hegemony in many academic institutions. It was a time when many folks still didn’t really look too kindly on alternative compositional approaches. So I could only imagine what the reaction was there to the wilder side of your music at that earlier time.
CM: Well, basically I divorced myself from the non-ethnomusicological part of Columbia. I played with Philip Corner, James Tenney, and Malcolm Goldstein and was part of the Tone Roads concert series. I guess we had our own world. I met Cage through them, and it was like finding my people.
FJO: Yet in the middle of all that, you wound up going to Mannes and studying with Stefan Wolpe, a fascinating composer who was at the other end of the aesthetic spectrum.
Charlie Morrow. Photo by Colin Still (courtesy Charlie Morrow).
CM: I was bouncing back and forth. I have works in different styles from that time. I guess what I was discovering was that I could work in a number of styles. It’s how I wound up in the jingle and film scoring business. I could work authentically and non-imitatively in other styles, and that became interesting for me. Having done that for a while as the business became more codified and referential, that stopped being fun. It was fun as long as the door was open. When I first went into that world, I went into it as a combination composer and sound designer, because those were two separate things: two people that got a job. I could get a job, and they could pay me once, instead of paying two people. But once the ’80s came, I began to look for something else to do with myself because it had become pretty much like Columbia and 12-note music. The commercial music scene had become formulaic.
FJO: But we’re jumping ahead here. You weren’t doing commercial jingles when you were studying with Wolpe at Mannes.
CM: No, that hadn’t happened yet. That happened after I did a piece for tenor and orchestra that won a prize and brought me out to San Francisco. When I came back to New York, I had imagined that since I’d met Leonard Bernstein and had suddenly been introduced to the mainstream world that I’d get phone calls and letters and requests for commissions. It was a wild fantasy. It never happened. I think at that point, I wrote an essay called “View from the Bottom of the Heap” which was published in 1966 in the American Music Center’s newsletter. John Duffy encouraged me to put my ideas out there about being an independent composer and earning a living from it. So it was at that time that I began to part company with the concert hall. I did a protest concert called “For the Two Charlies,” with Ives’s music and my own, and that was the end of my life in the concert hall. I just devoted myself to music outside the concert hall.
FJO: At the time you complained about the constricting of sound in the concert hall, that it’s a very artificial idea to create a blank slate for music to fill up, since in every other environment outside of a concert hall music co-exists with many other sounds. So the concert hall environment artificializes the listening experience.
CM: It’s true. Later on, as an outdoor event-maker turned soundscaper, I began to realize the concert hall was just one of many possible environments. When I started to build things in 3D, the idea was that you make a location and then you populate it with sounds and sound scenery. But first you make an environment. Every place is an environment. I think it was a conceit on my part to see the concert hall as being too quiet for what I had in mind.
Charlie Morrow’s journey outside the concert hall has led him to create music with bathtubs and tractors as well as experiment with new ways to hear sound. (Photos courtesy Charlie Morrow.)
FJO: So how did you make the transition from writing for tenor and orchestra to creating music for experiences that were outside the concert hall? Building your own studio and production company takes money and time. And to be successful at it also requires connections. I know one of your classmates at Columbia was Art Garfunkel.
CM: Right. I also met people who had studios. I learned how the studio world worked. At the same time, I had a bit of interesting input from my mother who was a psychiatrist and introduced me to a fellow named Andy Mashberg, whom she’d met at a medical convention. After I came back to New York and actually saw Leonard Bernstein sitting at the Bavarian Inn at the next table from me, I met Andy Mashberg whom my mother had talked to. Andy had said to my mother, “I know how he can survive without teaching.” He met me and he said, “You can be a writer of jingles and corporate music and film scores. This is what you have to do.” And he talked me through it. He gave me a list of people. He told me how to make a demo reel. So I was basically walked right to the door of work. And fortunately, within a half a year, I started to have some good opportunities. I did a kind of humorous Cinzano radio commercial. “Please, don’t pinch Cinzano ashtrays. Try Cinzano vermouth instead. Cinzano vermouth is better than ashtrays. Get it into your American head.” These were the lyrics of a mad man named David Altschuler. We became lifelong friends. And fortunately he had work for me. My career has been meeting people who thought that what I did could be useful for what they did. So in terms of being a producer, I quickly learned to do what was needed by people who liked me and thought I could do it.
FJO: So you were already doing commercial work when you started doing production on pop records in the late ‘60s?
CM: No, the other way around. What happened was my then-wife didn’t like me up all night and away, you know, because in the daytime I was also trying to find work and it was stretching our relationship. So what I’d learned from the pop music world was that I wanted to work in the daytime if I was going to keep a home together. So that’s what happened. I more or less started out by getting into the commercial studios through the pop music connection, but then making connections into the advertising world. I already knew good performers from all the worlds that I was in. And that was from a long history of being a producer as well. I had helped Charlotte Moorman produce an avant-garde festival and I had worked in Norman Seaman’s office, who was a promoter—all of this with my mother behind the scenes trying to figure out how I might survive doing what I wanted to do. She was a great admirer of [Sol] Hurok, and she said, “Look at that guy. He finds the talent, he finds the venue, he finds a sponsor, he spends other people’s money, and he makes money for himself.” She was constantly encouraging me to figure out what was on the table and how to move it around.
FJO: So she never tried to get you to go back to med school?
CM: No. My father did, but not my mother. When I was 38, my father, having seen a concert of mine at MoMA, said, “Haven’t you had enough fun now?” I was trying to figure out what he meant. At the time, I was making a very good living, so it couldn’t have been about money. I think he was embarrassed by my eccentricity.
FJO: So getting into the pop world was through Garfunkel?
CM: Yeah, it happened through Garfunkel. And then I had a business partner named Barry Minsky and through him I wound up doing an orchestra piece for The Young Rascals. Then I met other people. Through Atlantic Records, I wound up working for Vanilla Fudge. Then it went kind of back and forth. Studios would put together teams, and so I wound up doing arrangements on various records; the Record Plant studio became a kind of home for me. It evolved from A & R studios where Simon and Garfunkel had recorded originally. I think it was a Columbia studio on lease, or they bought time at A & R. But from A & R, it led to the Record Plant. And everybody hung out there. It was kind of the club house for all different kinds of music production for the pop scene.
FJO: In terms of its production, Simon and Garfunkel’s record Parsley, Sage, Rosemary and Thyme was radical at that time; it was the first eight-track record. Considering your ideas about the directionality of sound, which an eight-track recording would have emulated much better than any previous technology, did you have something to do with that?
CM: I created hit charts for them. I talked to Paul Simon about the sounds, using a Renaissance keyboard instrument. None of them read music; it was all about sharing ideas. So I had something to do with it, yes. But I didn’t write a note.
FJO: But did you have anything to do with the multi-tracking? It was a vital step toward the way most pop music recordings were subsequently made. Nowadays, with digital studios, you can theoretically record an infinite number of different tracks and then mix them together however you want to during post-production. But before that album, most recordings were one-track, or two-track once stereo came in. Then George Martin made the first four-track recordings of The Beatles in 1963. But Simon and Garfunkel beat even The Beatles to eight-track, and from then on there was no turning back.
CM: Well, I think it came from the engineering side; that wasn’t my idea. I was just a hired hand. I would come in and do the sessions, or talk on the phone before. For the real artisanal work that was done in the studio, there was an engineer involved. I think his name was Stan Tonkel. He was extremely far thinking. Of course, Columbia Records themselves bought a lot of multi-track machines. They had the money. Commercials lent themselves to multi-track machines also because you wanted to be covered for different versions and be able to do very polished work based on a lot of fragments. Directionality was not such an issue. It was more about layers. Layering is still very important in the work that I do, as you’ll see in the software that I’ll show you later. We layer in 3D. We can create as many layers as we like in order to be able to create a world of sound, and that is similar to what an eight-track machine has to offer.
A poster for an all-Morrow concert at Town Hall in NYC before he decided to create music outside the concert hall.
FJO: One of the reasons I thought there might have been a connection here was that you used multi-tracking in the multi-layered Marilyn Monroe piece you did around that time [Marilyn Monroe Collage].
CM: Well, actually, I had to do that piece as a series. I remember, I had a classmate, Mike Shapiro from Columbia, and Shapiro had gone to work for a sound library. They had an excellent mono studio. I think we did the Marilyn Monroe piece by creating all of the elements and rolling them in on two-track machines, doing them as very careful sound on sound. So that was because I had a guy who was really good at being my hands and he engineered the whole thing for me. It was such a juggle.
FJO: It certainly doesn’t sound like it’s just two tracks.
CM: No, it doesn’t. But I think that might have been just prior to eight-track recording. I knew about four-track recording.
FJO: That piece opens up doors to all kinds of things, like taking found sounds and using them as sonic objects for your own ends, which is a very post-modern idea and something you’re still doing now with your recent re-compositions. It’s an idea that has a lot of currency right now—sampling something and turning it into a new creation by remixing it and making it your own. Everything in your Marilyn Monroe piece came from something she actually said that was recorded, but you turned it into something that she never said.
CM: That’s right.
FJO: Also since she was so iconic, and was someone that everyone could immediately identify, there was something very populist about your piece, even though it was experimental conceptual music.
CM: It’s true. It had grown out of an invitation by Andy Warhol to create a piece for his Marilyn Monroe show at a gallery on 57th Street. My motivation for it was actually seeing everything and, in terms of ceremony, thinking of the artist as a sacrificial lamb. And I thought, coming back always with this death image, that I was taking Marilyn Monroe and reviving her for my own benefits. She was a beautiful vehicle for the thoughts I had about her, which concerned, in my way, the exploitation that show business does.
FJO: Another interesting aspect about what you did with Marilyn Monroe, which makes more sense now that you’ve referenced an exhibition of Warhol’s Marilyn Monroe silkscreens, is that you’ve taken something very popular and turned it into something much more rarified and abstract, just as Warhol did by silkscreening those images of her. Her image became just a form in which to explore a process, just as he had done earlier by painting sequences of Campbell’s soup cans or Brillo boxes. Which connects to another thing you have done as a composer—all the stuff you’ve composed for multiples of the same instrument. Having 30 harps or 40 cellos, all the same sonority, is the sonic equivalent of a whole room filled with the same visual image.
CM: That’s a very interesting reading. I was on a panel about animal communication. I had decoded the language of toadfish and did a fish concert and had before that done a lot of field work with peepers where I could get into dialogue with them. In the course of doing that, I would get my audiences to make the sounds and then I decided that I would do a herd of the same instruments. It all grew from having heard the fish and the peepers as groups of individuals all signaling and communicating with each other.
But then you remind me that when I first went to Mannes School of Music, I met a guy from the neighborhood whose mother had an empty flat. He said, “You’ve got to come over here. My mom let this crazy art director from advertising do a show in one of her empty flats. It’s a block away. Come with me, Charlie.” I walked in and there was Andy Warhol, and it was his first show. And the walls were exactly as you say. And I remember thinking about the simultaneity of duplicates at the time. But until our conversation it had not surfaced that this is a piece of that, because I’d always seen it through the herds and other multiple images from nature, rather than from the manipulation of the artist.
FJO: That fish concert happened during the period between Richard Nixon’s resignation and Gerald Ford being sworn in as president. Was that some sort of an artistic statement?
CM: It was a total accident. He resigned the night before. He didn’t send anyone an invitation about his resignation. I mean, it wasn’t like “a month from now, would you all like to watch me resign?” You know what I mean? What had happened was at that time, as part of my jingle business, I discovered PR. There was a guy named Morty Wax, who was my press agent at that time and who was very clever. Morty himself was the one who said, “You’re working with all these sounds. Why don’t you do a concert for fish?” And I said, “What a great idea, Morty. We’ll do it.” At the time, I had been working for large industrial, multi-media projects with a guy named John Doswell. Doswell just died a month ago, but he’s been significant in my life because he was very active in the harbor life here. And he arranged tugboat races and so forth later, but Doswell said, “Come on out. Let’s do it from my boat.” So I suddenly had a boat, and I knew the technology, and so Morty Wax’s suggestion turned into reality. And since he was a respected press agent, everybody knew that it was going to happen. And on that morning, it became a world press event because everybody needed some distraction from the horrors of politics.
FJO: Although I would imagine in terms of it being a world press event, it was overshadowed by Nixon’s resignation.
CM: Of course.
FJO: So that’s the bad part of it happening the same day.
CM: But it was mentioned worldwide. I heard reports of it from all over the world: Nixon resigned last night and this morning a group of artists in New York gave a concert for fish. It was that kind of ironic spin.
FJO: At that point, you had already done stuff with birds, the Central Park pieces.
CM: Well, I had done the solstice events. Let’s see; let me put it together: ’74 was when Nixon resigned and we did the fish concert. I had already started The New Wilderness Foundation and the New Wilderness Band. We had already been doing solstice events, and we were communicating with birds in those events.
The New Wilderness Band in performance sometime in the mid 1970s. (Photo courtesy Charlie Morrow.)
FJO: So in terms of how that works, you say communicating with birds or with fish. I’d like to unpack this a bit.
We’re hearing their sounds, but do you have any sense of what they’re hearing from us? Is there really a two-way aspect to this or is it all just our interpretation of what this is?
CM: Well, I would say that it’s both. First of all, I believe in the bandwidth of perception. Every essay in the book I’m working on has to do with the fact that every creature hears and sees vibrations on different wavelengths. Every living creature has evolved being able to receive vibrations from all of their vibratory receptors in a certain bandwidth and a certain sensitivity level and then a certain selectivity level. It’s absolutely true, for example, that a horse and a human might be riding and spending days together, but they’re not getting the same world because their ears are in different positions and the evolution of our sensory systems is different. So going then beyond a fellow warm-blooded animal to reptiles, talking to each other through a black hole, so to speak, we’re in two different parallel universes with different bandwidths, different perception and reception. But if you do get a message back—it seemed that we were able to understand in both field frogs and toadfish a kind of communication. They basically both have a simple language and it was the complexity of such a simple language that turned my interest.
Toadfish make a sound [demonstrates] and they tend to have a lead toadfish that’s making a sound and the others want to reply. So groups follow the leader. And then that toadfish, once he’s got a group, starts to increase the tempo, jumping the beat, and the group follows until another starts over here at a much slower tempo. This goes on every day. That’s an auditory transactional state that those folks are in, or those critters. So if you make those sounds, and they answer you, they play; if you can play in that band, you’re there, at least for the part that you hear and the part that they hear.
FJO: That’s absolutely fascinating. What’s perhaps even more fascinating is that at the same time that you’re doing this really out there stuff like this concert for fish, you’re making a living doing commercial jingles. I’m curious about the spillover. Some of your commercial stuff is quite avant-garde in some ways. Your Hefty garbage bag theme, in particular, is pretty wacky musically; it’s full of really unusual harmonies which never resolve.
CM: My job has been to keep people surprised and interested as a sound maker. Whatever I turn my attention to, the idea is to bring something to it that makes it worthy of attention and, at the same time, to find some balance where it doesn’t burn itself out from multiple hearings. Soundscapes are like that. We build soundscapes that people will hear for a permanent installation. There is this balance that has to be achieved between every element in relation to the other elements.
That’s something you learn from orchestration. This is just a contemporary equivalent of orchestration. Whether it’s a trumpet orchestra in West Africa, or a Western orchestra, or an opera. It functions transactionally. Everybody’s got to have a role in it and have a good time somehow. Like in a good gamelan piece, the social fabric is illuminated when all the pieces come together and the music ticks. A trick with being in the jingle world was always to find that balance. However, I don’t think it’s possible in the same way now. I have a job right now, which shall go nameless. The environmental pieces of it were created and were fine with the client. But all the tiny sound effects that were in it they wanted copied exactly from today’s latest high-tech, game-oriented feature films.
I had an argument with people who were half my age—actually in this room—in which I said to them, “I think that you’re making a terrible mistake. I think that just simply copying that without a reason other than that you think people will identify with that is basically to burn them out faster. What you’re doing will be trivialized faster in my experience.” At which point, I put somebody else from our team on the job. And a note came back. “It’s a problem working with Charlie; we have to listen to philosophy.” From my point of view, I’d given them sound economic advice, and from their point of view, I was wasting their time because they were in a hurry to do something they thought was right. It kind of epitomized what has happened at all stages of my career.
At one point I was asked by an agency guy to write a Pan Am commercial. He said, “Would you make me a commercial? I want you to do this with an original flair. It shouldn’t sound like anything that anyone’s heard before.” I said, “Do you really mean that? He said, “I do.” I said, “Well, it’s opportunities like this that I live for.” And so I wrote two pieces, for the same ensemble. We read through the first one, and the guy came out screaming. He said, “What is this shit? I’ve never heard anything like it in my life.” I said, “Well, did you hear what you just said? Wasn’t that my assignment?” He said, “Don’t fuck with my head.” I said, “Well, I’m just teasing.” We played the other one. He said, “Don’t ever do that again.”
FJO: Do you have a recording of those? I’d love to hear them.
CM: I actually do. I have to dig them out.
FJO: The jingles you created for Hefty and WINS radio were both used for years. Those were really successful.
CM: They still are.
FJO: And they’re both instantly identifiable. So there were—and clearly still are—folks that were accepting of these more unusual kinds of sounds. People obviously liked them, because they’re still popular.
CM: There are tastemakers who do it right. There’s also such a thing as good luck. But my style has always been to create something that’s a little bit on the edge. Generally that seems to work to keep it fresh for as long as it lives. I mean, that’s in my mind and what I’ve learned from composers of the past. The good stuff still sounds fresh and sounds right. So I try to impart that, whether it’s a three-second logo or a ten-day event.
FJO: At that same time, you also wrote a tune that could very well have been a Billboard hit single, if it wasn’t written for a commercial—your “Take the Train to the Plane” jingle for the New York City subway system. It’s actually almost a pop song.
CM: It really is. Well, that was a remarkable situation where I wrote for two very bright marketing guys who were great fans of things that were just like that. They wanted me to write something that’s memorable, that people would sing, and that would possibly have a life outside of the use by the MTA.
FJO: And did it?
CM: Yeah, there were a number of releases of it. It’s been licensed for a number of feature films. It hasn’t been what you call an avalanche of coverage, but it has lived.
FJO: So you worked in all these different genres. You created avant-garde music, and you have this academic music that you’d written earlier, you did pop music production, you did improvisatory stuff with the New Wilderness Group, and then commercial jingle work. Then you were part of the creation of EAR Magazine, which was a publication for new music that embraced it all. It makes sense now, because your background was doing it all. But I’m wondering what made you decide to participate in a publication for this stuff.
CM: Well, it’s just like my mother or Morty Wax suggesting something. I’m not so much an inventor of things, but a selector of good ideas that float past my nose. First of all, Beth Anderson and a guy named Charles Shere from San Francisco developed a community mimeo publication called EAR. And Beth came and lived in this house. R.I.P. Hayman has had many people come and share his roof with him. He’s a very generous guy. And Beth and he put out EAR together, and then Beth wanted to get out. Magazine fever is something people usually have for short periods of time—I mean, a certain number of years. But Rip wanted to keep it going. So Rip asked me, since we were already working together on so many things, whether New Wilderness could be its fiscal agent, its bank account, and its tax status. And I said of course. Then I wound up working with EAR a lot. I took an interest in EAR because I believe in thematic publication. So under my tenure with EAR, we had issues on music of healing, poetry, politics; the EARs were basically anthologies.
I also have to say that I’ve been very much inspired by my long-term relationship with the poet Jerome Rothenberg, who was a master anthologist. Poetry appears in the first person. You print the poem and there it is before you. The idea of EAR was that we’d have essays and actual compositions, a direct communication from the creator to the reader, which is quite different from the way music is generally handled. Music is generally written about—it’s critiqued, it’s promoted, but the actual primary content is very rarely presented other than in books of scores. So I would say that in that way we fell together as people who were interested in what the other was doing and then seeking a community through publication.
FJO: Well, as I’ve told you before, EAR was one of the main inspirations for the creation of NewMusicBox—people who create this work talking and writing about it themselves, rather than there being filters. Initially, for the first few years that NewMusicBox was online, each month was a thematic issue. Ultimately, though, we realized having a monthly thematic format wasn’t the perfect fit for the instantaneous 24/7/365 communication mechanism that the internet was evolving into, so now we post stuff almost every weekday and the pieces don’t all connect to each other in the same way. But I’m curious; you talk about EAR on an aesthetic level. I always thought of it as a socio-political act. We have this world outside of what we do that doesn’t necessarily understand what we do. So the media often gets it wrong in terms of how they describe it. Sometimes they’re dismissive and at times they’re downright hostile toward it. But we can create our own publication. We can create our own world. Let people know about what we do by telling them ourselves. Rather than relying on tastemakers to do it for us, we can be our own tastemakers.
CM: I agree, and program makers, too. Our solstice broadcasts were lengthy compilations of material done in celebration of a holiday, and promoted to the world through broadcast and getting people to physically show up. So I agree with you. The idea of artists curating artists, and artists writing in the first person was definitely in the air and I felt very strongly about it. After all, my whole career has been stepping out, making my own studio, making my own way as a producer, and I thought that in this sense a community is built by people who are able to do that and then sharing the skill sets. Bringing that together, how wonderful EAR became under different editorial leadership and different art direction!
It was quite unusual for a publication within music to take on such great graphic interest. This is where R.I.P. Hayman’s particular inspiration—and all of us, in that way—all feeds back to Philip Corner. Rip and I met through Philip Corner’s Sounds Out of Silent Spaces. Philip had learned calligraphy in Korea and made calligraphic music; his calligraphic scores had opened the door between graphics and communicating and music and sound. So we were looking to get the word out. At that time, I became the music critic for the Soho News, and I wrote essays about Jackson Mac Low, Alison Knowles, and a number of other people who were important in their thinking to me, because no one knew who they were. They weren’t mainstream artists. They were just doing their work and I think that publishing the work and also writing about it within a framework of the art community is very positive.
Ear magazine was much more than a publication; it served as a central hub for the entire new music community. The April 8, 1983 benefit concert for Ear brought together a widely diverse group of music creators including Laurie Anderson, Derek Bailey, David Hykes and the Harmonic Choir, Nam June Paik, and the Gamelan Son of Lion, as well as John Cage, who created the composition ear for EAR especially for the event.
FJO: The other thing that I found so inspirational about EAR was its definition of what new music was. It was really open-ended. I first became aware of the rock band Sonic Youth through EAR magazine. EAR was a portal into a broad range of genres, not a place that passed judgment on what was “uptown” or “downtown” or what had pedigree or lacked it. It presented everything on an equal footing, which was incredibly mind opening and made for a more inclusive new music community.
CM: I think you’re right, and in that respect, EAR also demonstrated that a community could be supportive of each other. While there had been this weird uptown-downtown split, it was really a tiny fissure in a community of people who otherwise were quite frankly really hungry to be more connected to other people and to know about things. I mean, your own experience shows what we found with EAR, because the readership went up. People were hungry to learn, and it was easier to understand it if you could see the real work, and understand that it had been either an impulse or been thoughtfully put together. It was transparently primary materials. What made it exciting for all of us was that we were constantly amazed by the breadth of the community and the diversity of what was called new music. It didn’t have to be pigeon-holed. Every time anyone would describe it, it would become something else.
FJO: But EAR eventually went away. We were talking before we began recording about finding ways to digitize the EAR archive to make those incredible issues available again, but it’s weird. All of this existed before the internet. It almost predicted the internet in terms of its interactivity and its attempt to join communities. At one point, I know there had even been an EAR Music East, and an EAR Music West on the West Coast. All these things are so much easier to do now that there’s an internet, yet by the time the web became used by the general public, EAR no longer existed, which is a tragic irony.
CM: It was unfortunate. You know, EAR kept evolving, and at a certain point EAR wanted to separate itself from New Wilderness Foundation and I think it was a time of a changing world. In my own case, I became a father in 1989 and, producing the big solstice project that year for June 21, I barely was able to attend my daughter’s birth and be there for when she came home. I suddenly had a whole other world. After that, it kind of came to an end. EAR got a board together and then it went bankrupt. One of the differences throughout the whole project was that Rip and I, when there was no money, would put money in. That’s a necessary and magical ingredient; no matter what happened, we would keep it floating. The new board for EAR had a situation. The printer had gone out of business for some reason and EAR was impounded by a creditor. But EAR had already sold substantial advertising—tens of thousands of dollars’ worth. In order to collect it, EAR had to appear. Generally speaking, EAR paid for its printing after it got its advertising money. So, this chicken-and-egg effect worked out that then the board, who were a lot of nice people, a lot of them with money, when it came time to put their hands in their pockets, they put their hands in the air. And something very wonderful came to an end. I think no project like this can exist unless there’s somebody who’s a tireless fool who will pay the bills.
FJO: The other amazing thing about it is that magazine was created in this space, the Ear Inn—a building that’s been here since the second decade of the 19th century. In a way, it’s a remarkable parallel that connects back to what you were saying earlier about creating new work through a relationship with old things. We’re in this really old place, certainly by New York City standards, that became one of the meccas for really new music. It seems wonderfully contradictory and yet it makes total sense.
CM: True. It’s a nice thought. I think that very much has to do with R.I.P. Hayman and his great generosity, imagination, and tenacity with keeping a space like this from being totally wiped off the face of the earth, which it’s been threatened with so many times. It makes this a very vital location for doing things.
FJO: I know your feelings about the concert hall and what it represents. So you created a musical existence beyond it and I think, to some extent, that idea translated into your idea about recordings as well because for a very long time your music was never available on recordings. Once again, just like a concert hall captures sound and puts it in this one place, a recording does that even more so because it captures time. John Philip Sousa railed against canned music a century ago, but unfortunately in our world, unless you can your music and commodify it that way, people aren’t aware that it exists. A few years ago, after all these incredible things you did across many decades, XI finally put out a three-CD retrospective so people who weren’t around to hear these things when they happened could actually hear them.
In June 2011, XI Records finally issued the first-ever album devoted exclusively to the music of Charlie Morrow, Toot! (XI 135), a generous 3-CD retrospective containing works spanning half a century.
CM: I think that I’m not so good at making records. My whole career has been in making soundtracks, making events, and broadcasts. For all of these things I’m an expert, but I was never a good producer for my own work. They say that a lawyer who represents himself in court has a fool for a client. I think there are people who are excellent record producers for themselves, but it just was a skill that I lacked. There was no other reason for there not being recordings of my work. There were small editions of my work along the way, as part of anthologies or some collection or another, but my work was primarily in broadcast and in media and in public spaces, because that’s what I knew how to do. It is a picture of my limitations, about presenting what I do to a wider audience in that medium.
Now I have marvelous spatial systems. I’m quite capable of presenting large spatial events. I do them once, and I have not attempted to publish them and make them repeatable. It’s just simply a limitation that I’ve got. I think that if I had somebody helping me over those years, that side would have been much better handled. Without Phill Niblock saying that he would like to do a triple CD of my work, I would simply have not done it because it’s just a little bit off of what I do well. So I worked on it with a group of people and they helped select things that would be good on CD. I think what makes me a terrible record person is that I’m a terrible A & R guy. I can’t figure out what belongs on a disc, what’s a reasonably good experience and so forth.
Harpist Alyssa Hess with Charlie Morrow, John Cage and R.I.P. Hayman at MoMA in 1984. (Photo courtesy Charlie Morrow.)
FJO: Well there are certainly some pieces on there that work wonderfully as stand-alone sonic experiences, particularly that gorgeous multiple harp piece [Wave Music VII]. But of course it makes me eager to hear more. I read that there are three string quartets that you wrote early on. Are there recordings of those? Might those be released on another recording one day?
CM: Well I’ve assembled an archive now. I’ve started to put together collections on SoundCloud that are private. Jerome Rothenberg and I have done a lot of collaborations, so I’ve put all the Rothenberg ones together. A friend of mine has an online radio station, so we did a Rothenberg celebration for a bunch of months. But radio is a funny medium because people aren’t necessarily going to listen to long works on radio. But everything’s available in the archive. So we have a number of solutions. David Rothenberg thought there should be a retrospective museum. Owen Bush has suggested since I’m working in virtual reality that we create in virtual reality our own virtual museum, and put all the work in there, since it is site specific. It could then be performed in a more or less site-specific way. And we’re building that virtual reality museum right now with the help of the Unity Studio in Denmark. I think that will come along, but if any of the pieces are interesting to you, and you had some idea how they might be best presented to others, I’m totally into it. I just haven’t taken that step.
On the other hand, we’re remastering all the audiographics cassettes. We had 42 of them. It’s probably the seminal series of sound art and anthropological music: Philip Corner’s first recordings, Dick Higgins’s stuff, Alison Knowles’s stuff. I’m going to make all that available, because there’s a French label that’s interested in doing a sampler and then helping to collect orders for it. I have such big chunks of things that making them meaningful and making them available in a way that’s sensible is just slowly coming to me.
FJO: Since you mentioned Denmark and a French label, there’s one last thing I want to ask you about. You’ve spent a considerable amount of time doing projects in Europe. For some of the larger-scale activities that you’ve done there—like that piece of yours that involves 2,000 people—we certainly have the people and the enthusiasm to make it happen here, and yet these kinds of things seem to happen more in Europe these days.
CM: I think as always, it has to do with who the organizers are. I’ve been recently talking to Aaron Friedman, who was my successor to Summer Solstice celebrations as large scale music events. But I discovered that one of the biggest differences between the stuff that I did and the stuff that he does is that we paid people. He doesn’t pay anybody anything. So therefore the group that’s going to organize itself as 15 percussionists are going to play their own works because they’re there for free and they’re going to want to organize what they’re doing. So it’s a pinch point in doing a curated performance. We were able to do what we wanted because we paid for it—we went to the music performance trust fund and got half the money from the musicians’ union, and they matched the funds. Nobody got a lot of money, but it made it easier to rehearse, say, with cellists or more or less mainstream performers whose time is very precious and now even more so as it is even harder to live in New York.
But I don’t think it’s any easier to organize these large-scale things in Europe any more. First of all, anything like that tends to bear the label that they’re retro, ‘60s events; they’re post-hippie stuff. I mean, there’s a variety of ways in which mass performances are described. And in a way, a mass performance should in fact be either a totally composed piece like the [Balinese Ramayana] Monkey Chant or Berlioz’s Requiem or something that’s created by the people who are doing it. I’m sort of in the middle, but I think the pieces themselves have to achieve an audience. The fact that you and I are sitting here talking about it hopefully will lead people to go to the website. Because now on my website, I have a sample of all of the major works. You can see the piece—not on video, but there are photos—and you can hear a good sample of what they’re like. At this point there’s now a lot of material, so hopefully people will find it useful and want to bring it to life.
Niche music genres are nothing new. They existed before hipsters, before Stravinsky, and before Mozart. However, in the last two decades there has been a blossoming of niche music genres, made possible by technological advancements such as personal computers and Digital Audio Workstations as well as the decreasing cost of building a home studio and widespread use of the internet. As more and more people create music, they are beholden less and less to the genre-defining artists of the status quo. The result is the emergence of countless niche genres, each with its own unique following.
Perhaps one of the most fascinating niche genres to surface recently is Black MIDI. Created by self-proclaimed “blackers,” Black MIDI exists almost exclusively on YouTube in the English-speaking world, with total video views numbering in the millions while total subscribers for teams (groups of blackers who collaborate on Black MIDI tracks) remain fewer than 50,000. Black MIDI is presented on YouTube as a video recording of a MIDI file containing millions of individual notes played back through a sequencer.
The term “Black MIDI” refers to the moments in a piece where the notes, if displayed on a traditional two-stave piano score, are so dense that there appears to be just a mass of black noteheads. The increased density of notes also affects the computer, which is sometimes unable to process all of the notes within a particularly complex section. The goal of Black MIDI is to approach this processing failure without actually crossing that line. “We try to make it insane—but not too insane,” says Jason Nguyen, the person behind the major Black MIDI distribution YouTube channel Gingeas.
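Black MIDI tracks are assembled in sequencer GUIs rather than written as code, but the underlying idea, packing enormous clusters of simultaneous note events into a Standard MIDI File, can be sketched programmatically. The snippet below is a minimal illustration of that idea, not any blacker’s actual tooling; the function name and parameters are hypothetical, and it uses only the raw Standard MIDI File format (header chunk, track chunk, variable-length delta times):

```python
import struct

def vlq(n):
    """Encode an integer as a MIDI variable-length quantity (delta time)."""
    out = bytearray([n & 0x7F])
    n >>= 7
    while n:
        out.insert(0, 0x80 | (n & 0x7F))
        n >>= 7
    return bytes(out)

def dense_midi(num_chords=1000, notes_per_chord=32, path="black.mid"):
    """Write a format-0 Standard MIDI File packed with dense note clusters.

    Returns the total number of note events written.
    """
    events = bytearray()
    for _ in range(num_chords):
        # Fire a whole cluster of note-ons at the same tick (delta time 0),
        # which is what renders as a solid black mass on a piano-roll score.
        for pitch in range(60, 60 + notes_per_chord):
            events += vlq(0) + bytes([0x90, pitch, 100])  # note-on, ch. 1
        # Release the cluster one tick later.
        events += vlq(1) + bytes([0x80, 60, 0])           # first note-off
        for pitch in range(61, 60 + notes_per_chord):
            events += vlq(0) + bytes([0x80, pitch, 0])    # remaining offs
    events += vlq(0) + bytes([0xFF, 0x2F, 0x00])          # end-of-track meta
    track = b"MTrk" + struct.pack(">I", len(events)) + bytes(events)
    # Header: chunk length 6, format 0, one track, 480 ticks per quarter note.
    header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, 480)
    with open(path, "wb") as f:
        f.write(header + track)
    return num_chords * notes_per_chord
```

Even this toy generator makes the scaling obvious: 1,000 chords of 32 notes each already yields 32,000 note events, and real black MIDIs multiply that by a hundredfold or more, which is exactly where sequencers start to buckle.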
The origin of Black MIDI can be traced back to Japan in 2009 when the first blacker, Shirasagi Yukki @ Kuro Yuki Gohan, created the first black MIDI and uploaded it to the Japanese video site Nico Nico Douga. The piece is based on U.N. Owen Was Her?, the theme song from the extra boss level in the Touhou Project, a vertically scrolling Japanese shooter video game. The use of Japanese video game music has since remained iconic to Black MIDI.
For the next couple of years, Black MIDI spilled over from Japan into China and Korea, where it continued to grow. It was not until 2011 that the genre took off in the West, the first major hit being an upload by YouTube user Kakakakaito1998. Typical of Black MIDI’s early style, the video features a traditionally notated two-stave piano score rather than a MIDI piano scroll alone.
Once Black MIDI made its way to the West, it was not long before blackers began refining the creation and presentation of their niche form of art. Blackers sought to solidify their identity, which led to the creation of Guide to Black MIDI and Impossible Music Wiki, the latter of which was created by Nguyen and the other blackers with whom he frequently collaborates. Both sites serve as an introduction to and codification of Black MIDI.
Blackers also began pushing the limits of their art, adding more notes (numbering in the millions) and making the visual presentation as important as the sonic presentation. Black MIDI became a marriage of visuals and sound, a cascade of colors and patterns paired with an ordered complexity of notes. While the popular songs of choice remained music from Japanese video games, blackers also started making black MIDIs based on recent pop songs.
As computer-processing power increased, black MIDIs also became larger, containing more notes than before. In addition, much of the software was updated to 64-bit, which allowed the programs to address more RAM and play back even larger files. The continued growth and evolution of technology also allowed blackers to develop tricks for filling their videos with more notes.
“My videos are edited for no lag,” says Nguyen. “They aren’t real-time: I record the MIDI program slowed down, and then speed it up in a video editor.” This technique takes less of a toll on computer processing power and RAM.
In addition to software and visual changes in Black MIDI in the West, English-speaking blackers established their own team, BMT (Black MIDI Team). Teams, including BMT, consist of a number of blackers who serve various roles, from blackening songs to creating the videos and hosting them on YouTube. This collaboration creates a virtual production and distribution chain that ensures blackers get their work out to as many people as possible through several main YouTube accounts—including Gingeas—while also being credited for their work. Additionally, while BMT is separate from the other major teams that exist in China and Korea, they frequently collaborate with each other on videos and MIDI tracks.
The lack of a major Japanese team brings up an interesting observation: Black MIDI has since disappeared from Japan where it originated. According to Nguyen, Japanese blackers “are analogous to those TV shows where there’s a mysterious founder of a civilization that is not really known throughout the course of the show.” The Japanese blackers have now assumed this role of a silent creator. Although the forebears of Black MIDI are long gone, the Black MIDI community has spread around the globe and is thriving.
One can’t help but draw comparisons between Black MIDI and Conlon Nancarrow’s studies for player piano. Both Nancarrow and blackers have tested the possibilities of note density in their pieces, creating astounding polyrhythms and textures in the process. In addition, the method of note entry is essentially the same between the two. However, Nancarrow’s medium was acoustic while the blackers’ is digital. In some regards, black MIDI could be construed as the 21st century’s response to Nancarrow.
Despite this apparent connection to Nancarrow, the Guide to Black MIDI denies that any such lineage exists, insisting that Black MIDI was an independent evolution: “We believe that references to Conlon Nancarrow and piano rolls are too deep and black midi origins must be found in digital MIDI music world” [sic]. Notwithstanding the blackers’ contentions, there are obviously significant similarities between Nancarrow and Black MIDI.
More recently, other artists have been creating music that combines Nancarrow’s acoustic techniques with the blackers’ digital ones to achieve intricate musical effects. For example, electronic composer Dan Deacon has written multi-layered player piano tracks that create an acoustic sound more complex than Nancarrow’s, made possible only through the addition of modern MIDI technology and a Digital Audio Workstation. While Deacon’s style is entirely different from both Nancarrow’s and the blackers’, the techniques he employs remain the same.
Though only one of many niche music genres that are internet-exclusive, Black MIDI stands out as unique. The simple melodies and tonal harmonies combined with the possibility of near or total computer processing failure are captivating. Additionally, Black MIDI’s connection to visual art adds a third dimension that makes the art form even more engaging. For a genre that has only existed for six years, it is difficult to tell where black MIDI is headed or where its influence will plant its seed, but for the time being I’ll enjoy the ride and listen to this along the way.
A conversation in Spiegel’s Lower Manhattan loft
September 9, 2014—3:00 p.m.
Video presentation and photography by Molly Sheridan and Alexandra Gardner
Transcription by Julia Lu
People often speak about computers and technology as though these things are completely antithetical to nature and tradition, though this is largely a false dichotomy. Electronic music pioneer Laurie Spiegel began her musical life as a folk guitar player and has never abandoned that music. But she fell in love with machines the first time she saw a mainframe tape-operated computer at Purdue University on a field trip there with her high school physics class and has been finding ways to humanize them in her own musical compositions and software development ever since. She sees a lot of common ground between the seemingly oppositional aesthetics of folk traditions and the digital realm. In fact, when we met up with her last month in her Lower Manhattan loft crammed full of computers, musical instruments, and toys of all sorts, she frequently spoke about how in her world view the computer is actually a folk instrument.
“The electronic model is very similar to the folk model,” she insists. “People will come up with new lyrics for the same melody, or they’ll change it from a ballad to a dance piece. Nobody can remember what the origin is. There is no single creator. … In the way that electronic sounds go around—people sample things, they do remixes or sampling, they borrow snatches of sound from each other’s pieces—the concept of a finite fixed-form piece with an identifiable creator that is property and a medium of exchange or the embodiment of economic value really disappears … in similar ways. … Prior to electronic instruments, you had to go through the bottleneck of written notation. So electronic music did for getting things from the imagination to the ears of an audience what the internet later did for everybody being able to self-publish, democratizing it in ways that obviously have pros and cons.”
A realist as well as an idealist, Spiegel is well aware of the cons as well as the pros of our present digitally saturated society. “[W]hen I was young,” she recalls, “you had a great deal of time to focus on what was happening in your mind and information could proliferate, amplify itself, and take form in your imagination without that much interruption from outside. … Our culture is at this point full of people who are focused outward and are processing incoming material all the time. Would somebody feel a desire to hear a certain kind of thing and go looking for it? Would they hear something inside their head and want to hear it in sound? It seems that people are fending off a great deal now. The dominant process is overload compensation: how can I rule out things that I don’t want to focus on so that I can ingest a manageable amount of information and really be involved in it? Information used to be the scarce commodity. Attention is now the scarce commodity.”
The imagination is very important to Spiegel. It is what has fueled her pioneering sonic experiments, such as her haunting microtonal Voices Within: A Requiem from 1979 or her landmark 1974 Appalachian Grove, created at Bell Labs soon after she returned from the mountains of western North Carolina, where she traveled with “my banjo over one shoulder and my so-called ‘portable’ reel-to-reel tape recorder over the other shoulder, listening to and enjoying older music and the culture that comes from early music.” It is also why she created the Music Mouse computer software, a tool that transformed early personal computers such as the Mac, Atari, and Commodore Amiga into fully functional musical instruments and idea generators for musical compositions. It also led her to create a realization of Johannes Kepler’s “Music of the Spheres,” the 17th-century German astronomer’s conversion of planetary motion into harmonic ratios; this electronic score and a song by Chuck Berry are the only pieces by living composers that were sent into outer space on the two Voyager spacecraft. (Although Spiegel insists that her realization, which was included as part of “Sounds of the Earth” rather than “Music of the Earth,” is not her musical composition.)
But perhaps even more important to Spiegel than the imagination is emotional engagement. “I always wanted to make music that was beautiful and emotionally meaningful,” she explained. “The emotional level is the level at which I am primarily motivated and always have been. I’m still the teenage girl who, after a fight with my father, would take my guitar out on the porch and just play to make myself feel better. That’s who I am musically. I kind of knew what I liked as a listener, and what I liked was music that would express emotions that I didn’t have a way of expressing, where somebody understood me and expressed in their music what I was feeling in ways that I couldn’t express myself. So, to some degree, I think I see the role of the composer as giving vicarious self-expression to people, although at this point, with the technology we have, there’s no reason for anybody who wants to make music not to be able to.”
Laurie Spiegel’s equipment in 1980. Photo by Carlo Carnivali, courtesy Laurie Spiegel.
Frank J. Oteri: The meta-narrative of electronic music, and technological developments overall, is that we went from big anti-personal mainframe computers that took up entire buildings to home computers to handhelds and even smaller.
Laurie Spiegel: And I went that whole journey. I started using punch cards and paper tape. The first computer I ever saw was at Purdue University in Lafayette, Indiana, when I was in high school. I went down there for a weekend and they had a tape-operated computer on which I attempted to do an assignment for my high school physics class. In this class there was me and just one other girl. All of the others were guys, and the teacher really thought we didn’t belong there. It was just so weird. But I always loved science.
FJO: But before you got involved with making music with electronics, you were a guitar player, and the acoustic guitar is one of the smallest, most intimate instruments that one can play by oneself and have a full sound, all alone. So it seemed to me like there’s a connection between that and how electronic music came to be made on smaller and smaller devices.
LS: Personal and private are important aspects of music to me. When I was little, I started with a plastic ukulele, which was even smaller. Then my grandmother, who was from Lithuania, played mandolin, and she gave me a mandolin when I was maybe nine years old or so. That had the advantage that I could keep it under my bed and take it out at night and play it quietly with nobody hearing me playing it. I had the total freedom to just improvise and make stuff up. I don’t think I even told anybody when she gave it to me. It was like my secret instrument, my private means of expression – whereas the piano in the living room was this large, sacred object where everybody in the house heard you and didn’t necessarily want to hear kids practicing. The guitar was similarly private, and I could play it in my room.
The freedom of not being heard, for a person who’s basically somewhat self-conscious, is really important, and so is the portability.
Despite having computers and other electronic musical instruments from half a century scattered throughout her loft, Laurie Spiegel still loves to play the guitar.
I used to take the guitar with me everywhere I went during high school, college, young adulthood, up until I hit classical music circles and discovered that a lot of the people who were studying music, and were the best at it, didn’t seem to do it for personal enjoyment. They were so serious about it. In the folk music-type circles and improvising circles, people would bring their instruments with them and people jammed all the time. But once I hit Juilliard, I didn’t find that people really did that kind of stuff. They didn’t improvise. They were seriously working on their trills. And they were seriously working on their performance pieces. It wasn’t integrated into their lives the same way as for amateurs who really love music. I guess I still regard myself somewhat as an amateur, just doing it for the love of it really, which is the technical definition of that word. I’ve always been an improviser too, which electronic instruments were perfect for because you were actually interacting live with the sound in electronic music; whereas, when I write music on paper, for instruments, I don’t get to hear it, or not for a long time, or not while I’m working on it. Of course, that’s no longer true because all the notation software now lets you hear stuff while you’re working on it, and you know that a rhythm isn’t what you meant right away. But in the old days, when I was learning notated composing, it was in your head.
FJO: It’s interesting that that came much later for you though, long after you were playing music.
LS: I was playing music, I was improvising, I was making stuff up, and at a certain point I wanted to learn to write things down so I wouldn’t forget them. So I started trying to teach myself to write stuff down. One of my roommates in the house that I lived in pointed out to me that they call that composing. You make things up and write them down. I was living in England and studying philosophy and history, doing a social sciences degree basically.
I said, “No, I’m not composing. I’m just writing things down so that I don’t forget them. I’m not a composer.” But eventually it became undeniable, and composing took over.
FJO: And so the social sciences became less of a concern for you once music took over?
LS: No, it never really went away. I’m still very interested in politics, sociology, economics, statistics, anthropology, psychology, all that stuff, and animals. I’m a complete sucker for animals.
FJO: But it was still a transition. You were at Oxford and then you were studying with John Duarte.
LS: In London, during the second year that I was over there. He was probably the perfect teacher for me. He had a partly classical, partly folk, and partly jazz background. He taught me counterpoint and theory and a bit about composing, as well as classical guitar. Once a week I would take the train into London for the weekend and spend a whole day in his house. And we stayed in touch. Much later, when he was in his 80s, he started to learn to use personal computers and began doing his composing directly into the computer. It was amazing. He was an English composer not obsessed with avant-gardism, firmly rooted in some kind of folk—folk is not a general enough word, but a grassroots sense of musical meaningfulness, or maybe it is more accurate to say he was connected to tradition very organically and naturally in his music, like quite a few other British composers. I identify with that.
Laurie Spiegel in the early 1970s. Photo by Louis Forsdale, courtesy Laurie Spiegel.
FJO: So that’s a very different experience from then enrolling in a composition program at Juilliard, of all places.
LS: Yeah, well, I was completely not expecting the dominance of the post-Webernite, serialist, atonal, blip and bleep school of music. I wasn’t interested in that. I mean, I knew what I wanted to learn. I wanted to learn harmony, structure, form, process, history, and repertoire, lots of stuff. But it wasn’t really considered cool to be interested in learning to write tonal music. I remember a teacher—who shall remain nameless—who, when I brought in a piece in E minor for guitar, said, “Hmm, key signature. Doesn’t mean for sure that you don’t have any musical imagination, but it’s not a good sign.”
It was so much more uptight then. I was in a way intellectually prepared for it because at Oxford there was a comparable phenomenon going on. The logical positivists were in charge talking about how many definitions can dance on the head of a … whatever. I was more interested in phenomenologists and Asian philosophy, and all kinds of stuff that was about the opposite of the dominant philosophers at Oxford at the time. Logical positivism is divorced from gut feelings, which were my personal link to music. As a teenager, when I was miserable I would take my guitar out on the porch and play and express my emotions. And when I heard great classical repertoire, it could vicariously express emotion for me. And so music was really about emotion. It was also about structure, because I love structure. That’s the computer programmer in me. So the things that I was most attracted to in music were slightly at odds with the music that was in with the dominant power structure when I went to Juilliard.
Then there were also all these child prodigies wandering around. I already had finished a degree in the social sciences. I was older, which made me immediately suspect because it’s a highly child prodigy-oriented atmosphere; if you weren’t discovered by 12, you were a has-been. But there were a number of things that saved me from giving up and going crazy. One was that through electronic music I was able to create music people could hear, and I became active in the Downtown scene while I was still up there. And people liked my work. I played music in other people’s ensembles, played guitar or banjo or whatever for Tom Johnson and with Rhys Chatham. I would do these filigree patterns, and Rhys would do these long drone-like lines against the stuff. That balanced it. Also I was making a living. I got a job with a small company that did educational films and filmstrip soundtracks. I composed all of their soundtracks for, I think, three and a half years or about that, and it paid decently. And again, when you do soundtracks, all that really matters is emotional content, and to a lesser degree the style. It’s the opposite of the aesthetic that was dominant uptown with Boulez, Wuorinen, and Milton Babbitt, although I liked Milton and a lot of these people. I was friendly with and hung out with the Speculum Musicae people, but our musical tastes were just in contrast to each other.
FJO: But your primary teacher at Juilliard was Jacob Druckman, who was really all over the map aesthetically.
LS: Yeah, boy, Jake was amazing. I was also his assistant and spent a lot of time in his house up in Washington Heights. I proofread the parts for Windows. He let me use his extra studio time when he wasn’t using it at the Columbia-Princeton studios, so I got to know Vladimir [Ussachevsky] and Otto Luening pretty well, and of course Alice [Shields] and Pril [Smiley]. I have a reel of pieces I recorded up there that at some point I’ll transfer and see what they sound like.
FJO: I’d love to hear those!
LS: I also studied with Vincent Persichetti, who was a wonderful teacher. He really did his best to try to help each of his students find themselves individually and learn to make the music that they personally wanted to make. He didn’t push you in any direction. He didn’t want to create a clone of himself, unlike some of the teachers there, and he was great. And I also had some lessons with Hall Overton, who appreciated that I was one of the very few students there who could improvise and enjoy it. But at the same time, I was going downtown to meet Mort Subotnick and visit his studio when it was still upstairs from the Bleecker Street Cinema. I fell in love with the Buchla, so I was doing that too. I was doing all of these different kinds of music at once. Unlike most people who might be immersed in the atmosphere of Juilliard, it was one of the places that I was active musically, but it wasn’t the place. It didn’t dominate me.
Laurie Spiegel with various synthesizers and reel-to-reel tape recorders in the 1970s. Photo by Louis Forsdale, courtesy Laurie Spiegel.
FJO: You played piano, but it wasn’t your major instrument.
LS: No, I had to kind of begin to learn piano because it was useful for theory, harmony, and composing and studying. And I love the repertoire, but it wasn’t like anything with strings on it, which attracted me like a magnet. But pianos—I mean, I love them, but they came later.
FJO: But in terms of compositional paradigms, a keyboard configuration creates a certain kind of mindset. I want to discuss this more when we talk about the Music Mouse software you developed and your algorithmic compositions. If you think in terms of a seven-five keyboard, whether you’re improvising on it or even composing in your head and coming from a keyboard-oriented background, certain patterns are going to emerge. And if your frame of reference is a guitar fretboard, other kinds of things are going to happen.
LS: If philosophically you’re a determinist, you could say that absolutely everything is algorithmic, but we do have a sense of free will and we do have the perception that we’re making decisions. But yeah, you could argue that if everything is deterministic, including the workings of the mind, then all music is algorithmic.
That seven-five pattern you see on the keyboard is only visible there because it’s the structure of the diatonic scales that we hear. It’s a pattern within the musical model our culture is dominated by. It’s not that pattern, but how it fits the hands, and the habits of the hands that become actual reflexes, that can be limiting. They can become so ingrained that they keep the imagination from roaming. That happens with the guitar fretboard too, though with different patterns, and with an instrument such as “Music Mouse” too, I suppose. Each instrument somehow biases our music in its own unique direction. Some composers manage to transcend those kinds of habits, some compose away from any instrument, others invent new instruments. But the physiological interface is sort of an algorithmic constraint all on its own, and I would think there are also similar cognitive constraints.
Some of the analog synthesizers in Laurie Spiegel’s loft.
FJO: You were telling me when we spoke the other day that there was a music composition teacher who was so upset with you because if his students used Music Mouse he wouldn’t know if they were coming up with their own music. So when you mentioned falling in love with the Buchla, I remembered that when we did our talk with Morton Subotnick, he said that he was very determined to avoid the standard piano interface, that it was very important to him for it not to have that interface in order to free people’s creativity, that you would have to deal with the instrument in a completely new way. Otherwise the paradigm would force you into familiar patterns.
LS: I believe that was some of Schoenberg’s rationale for coming up with the 12-tone system, too. It breaks you out of all of your customary habits and the patterns that are ingrained. Every time I pick up the guitar, my hands tend to fall into patterns of things that I’ve played before, which can be good. But you are looking for something new when you’re composing, unlike when you’re just performing. Yeah, that was one of the wonderful things about the Buchla versus the Moog and Arp and other early electronic instruments. It was modular and there was no keyboard, and so you really worked with timbre and texture and sonic shapes and architectures, as opposed to falling into melody and harmony.
FJO: You came to these various pieces of equipment and you’ve done new things with them, but you also wrote music that was instantly beautiful. But beauty is also something that is in part acculturated.
LS: I always wanted to make music that was beautiful and emotionally meaningful. It was out of fashion to do that. A lot of people were simply trying to avoid doing that at the time, whereas I was willing to go for it. Newness was being pursued for its own sake.
FJO: You even composed a short piano piece that addresses the whole history of music and shows a way out of that.
LS: Oh, The History of Music in One Movement.
FJO: I love the program note you included in the score and how even though the music is inspired by all these periods in history, every note of it is yours. There are moments that almost get into sort of a modernist place, but it doesn’t end there. Writing something like that when modernism was acknowledged as the final phase in music’s evolution was very brave.
LS: That piece was one of the most fun composing experiences and one of the most interesting that I’ve ever had. At every point when I was writing something evocative of a certain period, I had to sort of try to feel through what it would feel like to need to go on to break through into what happened in the next period. I had to want the freedoms that the next musical era took. There are many transitions in there. The hardest part of writing that was that horrible little place where I did an actual pair of serial rows that retrograde and invert against each other and that sound so ugly and harsh to me. For historical accuracy, I thought I really had to put that in. And at that point in the piece, it says “Oh my God, we can’t do this,” and it retrogrades back and it takes a different direction and kind of goes off into a sort of Impressionist-tinged blues, and then into minimalism, texture, pure sonic fabric. But of course, when we wrote that, we hadn’t yet gotten to “post-minimalism,” whatever that means.
FJO: Musicologists point to the late ‘50s and early ‘60s as the beginnings of minimalism, but the ‘70s were really when it had its greatest impact with audiences. In fact, its full flowering seems to have gone hand-in-hand with the sudden availability of electronic instruments. This is also true for other kinds of music that were evolving at that time, like prog rock.
LS: Electronic instruments gave people the freedom to create works and sound on an unprecedented scale. Prior to electronic instruments, you had to go through the bottleneck of written notation. You had to go through the bottleneck of a limited number of orchestras with very conservative tendencies, because they had their subscribers to please. Electronic instruments were a great democratizing force. That’s one of the reasons why you began to see so many more women composers: you could go from an idea for a piece to the point where you could actually play it for another human being. I mean, that had been true all along if you limited yourself to writing only for the instruments you played yourself. But when it came to writing things on an orchestral scale of sonority, to be able to realize something and then play it for other people all on your own was a brand new phenomenon. So electronic music did for getting things from the imagination to the ears of an audience what the internet later did for everybody being able to self-publish, democratizing it in ways that obviously have pros and cons. The economic models of these various ways of getting something from the inside of my mind to the inside of someone else’s mind, for whom it would be meaningful, have been completely upset and will have to settle down differently. Analog electronics were revolutionary, and now the digital ones are also. It’s amazing how quickly so many changes have taken place, and they’re very disorienting to a lot of people, understandably.
Laurie Spiegel at the McLeyvier Music System, an early digital synthesizer with a computer terminal, in the early 1980s. Photo by Rob Onadera, courtesy Laurie Spiegel.
About what you asked, minimalism and electronic instruments: it was liberating for us players of plucked instruments and pianos to work with sustained tones. Instead of composing additively by writing down one tiny sound at a time, we could start with a rich fabric of sound and subtractively sculpt form into it, or we could set up a process and let it just slowly evolve on its own.
FJO: The other big change happened with how those electronics were situated. In the early stages you had to be attached to some kind of university system or, if you got lucky, you could afford a Moog or a Buchla.
LS: One of the things that I think made the ‘70s a really special period was that electronic instruments were too expensive for most people to own. Sure, there were people who had their own—Mort had one, Suzanne Ciani had one, a lot of rock groups could between them get one. But for a lot of us, the way to get access to electronic instruments was through shared studios. There was PASS—the Public Access Synthesizer Studio—which later evolved into Harvestworks. There was the NYU Composers’ Workshop. There was WNET’s Experimental TV Lab, where I was a video artist in residence for a while, though I ended up really not doing much video but doing soundtracks for everybody else’s videos. There was Mort’s little studio and its community of people upstairs from the Bleecker Street Cinema. The Kitchen was another one. The Kitchen started as a center for video and then expanded into music. So there was community. There were interactions between people. People would meet each other and they would get ideas and bounce ideas off each other and work together in ways that I would think must be much more difficult to achieve now that everyone has an extremely powerful studio—beyond our wildest dreams back then—in their bedroom or sitting on their desk. To be working in the studio and, okay, I’m coming in and Eliane Radigue is just finishing up, and she shows me what she’s doing.
Then she watches me put up what I’m doing, and then when I’m done, Rhys Chatham comes in and he’s like, “Oh, you could do this and this, and by the way, you know, we’re trying this; do you want to come and play with us?”—I mean, things just happened between people and I think that made the ‘70s a really special period, the fact that there were so many shared studios where people worked together, interacted with each other, commented on each other’s work, and helped each other with their work, as opposed to everybody sitting by themselves in their rooms with their computers.
Spiegel at work in the era of mainframe synthesizers. Photo by Emmanuel Ghent, courtesy Laurie Spiegel.
FJO: Even some companies, like Bell Labs, became hotbeds of activity for composers at that time. LS: Well, there was no place like Bell Labs. You can’t really even consider it a company. Bell Labs was pure research, with a level of autonomy given to each person working there that probably no longer exists anywhere. There was no need to do anything with any commercial buy-in. You could do whatever you were interested in, everyone was brilliant, and everyone was interested in stuff. You didn’t last that long or do that well at Bell Labs if you weren’t self-motivated and a self-starter. You were expected to have your own ideas and be able to realize them. I’m still in very close touch with my friends from that lab. We email all the time and toss ideas around. I just don’t know if there is any other place quite like that, although I think places like Apple and Google like to think they have the level of freedom that they had at the lab. I’ve never really been around them on a workaday basis to find out. FJO: I love that they would just let artists come and do their thing.
Robert Moog, Laurie Spiegel and Max Mathews. Photo courtesy Laurie Spiegel.
LS: Well, they did and they didn’t. The arts were a little on the hushed side because of their regulated monopoly status, and moving into the ‘70s, they began to be under attack by the various powers that wanted to divide “Ma Bell” into a number of small, separate, competing companies, which ultimately did happen and was a great loss in my opinion. They were under a certain mandate; there were a number of considerations. One was that everything they did should be oriented to communications research. So when they came up with Unix and the C language, they just gave them away for free. Another was that they were not really supposed to be doing digital communications so much, I think, as improving existing analog telephone service. I’m not really that sure. I wasn’t at the managerial level of the lab. Max Mathews was, though; he was a fairly high-up person. He ran twelve sub-departments that did all kinds of amazing stuff: acoustic research, speech synthesis and analysis, non-verbal communications, and various cognitive studies, like studies of the characteristics of long-term versus short-term human memory and, in vision, stereopsis and eidetic memory. You would just walk around, or ask whoever happened to be at the coffee machine when you were getting a cup of coffee, “What do you do?” and they would tell you something absolutely fantastically fascinating that they were very much into. It was an amazing place. FJO: In addition to music, you were also doing video work at Bell Labs. I love the name of the program you worked on there. LS: VAMPIRE! (Video And Music Program for Interactive Realtime Exploration). It was a system that could only be used at night. That was the mandate. We artist types could use the computers during the hours when they were not in use for legitimate Bell Telephone research. FJO: I think my favorite work of yours from that period though is that gorgeous Appalachian Grove. LS: Yeah?
At that point I had a graduate research fellowship starting in I think ’73 at the Institute for Studies in American Music with Wiley Hitchcock, whom I greatly admired. Anybody who hasn’t read his book, Music in the United States: An Historical Introduction, should read it. He put me back in touch with and made me feel better about my banjo playing and the folk level, which had been basically kind of ridiculed in some of the other circles I’d been in during that era.
Laurie Spiegel playing the banjo in October 1962. Photo courtesy Laurie Spiegel.
I had just been down in the mountains in western North Carolina—with my banjo over one shoulder and my so-called “portable” reel-to-reel tape recorder over the other shoulder—listening to and enjoying older music and the culture that comes from early music. I mean, music from Europe went into those hills before the Baroque era and evolved on its own there, amazing music. I had just come back from there when I did Appalachian Grove and wanted to capture some of the feeling of being down there.
The wonderful thing about being surrounded by scientists, and not being in a computer music studio in a music department, is that a lot of scientists really love music. They are unabashedly lovers of fine music that’s meaningful in all the ways that I find music meaningful. They go to classical concerts, and they play instruments themselves. They love music the way ordinary people do. Whereas, something happens when you put music into an academic context in which down the hall is a science lab where everything has to be provable and rationalizable. You begin to get pieces where every note needs to be able to be explained; a certain level of self-consciousness begins to be laid on the musical experience. I’m not saying that always happens, but it seemed to be a tendency in academia during that period which was not present at Bell Labs. FJO: What’s nice about the re-issue of your first album, The Expanding Universe, that came out last year is that we can finally hear all of the compositions you created at Bell Labs. LS: Well, most of them. I did an awful lot of stuff. Two and a half hours, or a little more than that, was all we could fit on two CDs. FJO: Only a tiny portion of that material was issued on the original LP, which curiously was released by the folk music label Philo. LS: Another thing that I keep harping on is that the computer is a folk instrument. One of my favorite subjects in college had been anthropology. You have all these various techniques of going into an alien society and trying to figure out what’s important. One of the techniques is to try to figure out the cultural premises, the rock-bottom assumptions that members of that culture would make.
So I took a look at a number of different distribution media for music: classical concert venues; grassroots organizations like community sings, bands, and church groups; parlor music, music that is done at home with people gathering around a piano singing or playing guitar together; and electronic media—phonography, radio, and electronic music. I looked at the characteristics of the music that is disseminated by each of these methods, and certain patterns began to fall out.
The classical model is a finite piece of music with a fixed form that is attributable to one creator—Beethoven, for example. But the electronic model is very similar to the folk model. You have material that floats around and is transmitted from person to person. It’s in variable form; it’s constantly being transformed and modified to be useful to whoever is working with it, the same way folk songs are. People will come up with new lyrics for the same melody, or they’ll change it from a ballad to a dance piece. Nobody can remember what the origin is. There is no single creator. There’s no owner. The concept of ownership doesn’t come in. In the way that electronic sounds go around—people sample things, they do remixes or sampling, they borrow snatches of sound from each other’s pieces—the concept of a finite fixed-form piece with an identifiable creator that is property and a medium of exchange or the embodiment of economic value really disappears in both folk music and electronic and computer music in similar ways. FJO: But certainly in the earliest era of electronic music, there would be these musique concrète and studio-generated electronic music tape pieces that are even more fixed than a piece by Beethoven because not only is there one piece, there’s only one interpretation of it because the interpretation is a fixed form.
Laurie Spiegel with her analog synthesizer and reel-to-reel tape recorders in 1971. Photo by Stan Bratman, courtesy Laurie Spiegel.
LS: That was pretty much true back when electronic music could only be disseminated on reel-to-reel, up until cassettes were invented, since you had to actually own two reel-to-reel machines to make a copy and very few people did. You would have tape concerts where you could play pieces for people, or it might get on the radio or a record as a medium of dissemination. But once there were cassettes, you started to get people doing mixes and overdubs, excerpting things and chopping things together. Not a lot of people applied the techniques of classic studio practice—lots of splicing and cutting—to cassette. To edit a cassette tape is pretty unusual. Then when you got digital recording, the first wave of digital excerpting was samplers, before personal computers and the internet made other ways more feasible. The business end of the music industry is trying very hard to make everything identifiable and institute royalty systems and stuff. But I think, even though I’d benefit from receiving royalties, it’s to some degree a losing battle and a superimposition of a model that no longer really fits. We don’t have a new model yet that provides economic support back, but maybe we don’t need one, because music production is so much cheaper and faster. FJO: I definitely want to talk more about these issues with you, but let’s get back to Philo. It’s really unusual for them to have released an LP of electronic music. That record proves in a way that the divide between folk music and electronic music was a fake war that was created in part by the media overblowing some people’s negative reactions to Dylan plugging in at the 1965 Newport Folk Festival. LS: Well, I was a folk person and a banjo person.
The lowest, most grassroots technology and the most sophisticated electronic technology you would think would be diametrical opposites, but the fact that you can make music independently at home, and make music locally with other people in an informal way without any of the traditional skills such as keyboard skills and music notation, that’s a great commonality. FJO: And some of the popular rock groups at that time were also doing some very sophisticated stuff with electronics. LS: Pink Floyd. FJO: Perhaps even more so some of the German groups like Tangerine Dream and Kraftwerk, many of whose recordings were purely electronic music without vocals or anything else. There isn’t that much of a sonic difference between some of their music and some of the stuff on the Expanding Universe LP. LS: Yeah, there is and there isn’t. In a way, it’s almost closer to minimalism. I’m thinking the earlier Terry Riley pieces like Poppy Nogood and In C, which are pretty much open form. My pieces tend to actually be relatively short and have pretty clear forms and the processes in them tend more toward melodic evolution than repetition. FJO: But in terms of the surface sound, I think the music on that LP could appeal to anybody who’s a fan of Tangerine Dream, and having that recording appear on Philo rather than one of the labels that was releasing electronic music that had been created in university settings, like CRI, seems like a reaching out to this broader audience.
Laurie Spiegel playing the electric guitar at a NAMM showcase in Anaheim in the late 1980s. Photo courtesy Laurie Spiegel.
LS: I have been in multiple musical worlds simultaneously throughout most of my career. I haven’t lived in the classical world, although I still totally love classical music, probably really the best. But none of those labels would have had me. Philo were willing. And then Rounder took it and kept it right up until Unseen Worlds Records put out the CD re-release. I mean, listen to Appalachian Grove and Patchwork and Drums. They’re clearly closer to a grassroots, folk sensibility than they are to any of the post-Webernite composers. But I did get it through personal connections that were more in the folk world. I had a roommate for about 14 months, Steve Rathe of Murray Street Productions, who was at that point working for NPR. He decided to move to New York and stayed here “for 2 weeks” until he could find a place, which turned into 14 months, which was actually great. I like him a lot. And he connected me up with Philo. He went to them and said, “You gotta hear this stuff.” That’s how that really happened. He still invites bunches of people over to his loft to just have an old fashioned country music evening with banjos and fiddles, and I play banjo or fiddle or guitar at those. FJO: You’ve never gone over to one of these things and played with Music Mouse. LS: No. Music Mouse doesn’t work like that. I have jammed playing Music Mouse, but it doesn’t lend itself well to playing with other people, because it tends to not be good for standard chord changes. FJO: Now in terms of how worlds opened up, I’m curious about how your music wound up getting sent into outer space on Voyager.
The gold-plated Sounds of Earth Record containing Laurie Spiegel’s realization of Johannes Kepler’s Harmonices Mundi and its gold-aluminum cover (left). Photo by NASA (Public Domain). A copy of this record was sent into outer space on both the Voyager 1 and 2 spacecraft in 1977. The cover was designed to protect the record from micrometeorite bombardment and also provides any potential extraterrestrial finder with a key to playing the record. The explanatory diagram appears on both the inner and outer surfaces of the cover, since the outer diagram will erode over time.
LS: I was visiting friends up in Woodstock on a lovely summer’s afternoon, and somehow a phone call got forwarded to me and they said, “We’re with NASA, and we would like to use some of your work for the purpose of contacting extraterrestrial life.” And I said, “What kind of a crank call is this? If you’re really from NASA, send something to my address on NASA letterhead. Okay, goodbye.” And they did, which really surprised me.
There are a number of algorithmic works. One type might start with a truly logical progression that generates the information for a piece. Another kind is to use the patterns we find in nature and translate those into the auditory modality, like the Kepler piece [which is what was put on Voyager]. Kepler of course didn’t have the means to do that back in the 17th century. But we do. FJO: And so you realized that. LS: Yeah, yeah. Ann Druyan, Timothy Ferris, and Carl Sagan liked it for the opening cut on the Sounds from Earth record. There are two records on Voyager. One is Music from Earth. It’s not in the music part. It’s in the Sounds from Earth. FJO: That’s always bothered me. LS: No. It really is simply a translation into sound of the angular velocities of the planets. It’s a transcription really. I don’t think of it as a composition. It’s an orchestration I did, and I think I did a good one, because I have listened to some other ones and they seem rather dry and academic sounding; whereas I somehow, being me, managed to get some sense of feeling into the ways that I mixed it and the pace at which I let it unfold, and the decisions I made, such as only including the planets that were known in Kepler’s time instead of all of the planets we later came to know. FJO: There was an LP that came out in the late ‘70s of another realization of Kepler’s Harmony of the World, and in that realization the three additional planets discovered after Kepler’s lifetime were represented as percussion tracks. There is some similarity between that recording and what you did. LS: It’s the same solar system. FJO: But still I hear your sensibility in your version somehow. LS: But it’s not an original piece by me. If anybody composed it, it was Kepler who created this score, or as Kepler would have said, “It’s a composition by God rendered audible to man,” although I don’t know if he really believed in God. His mother was almost burnt at the stake as a witch.
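Spiegel describes the Kepler piece as a simple translation of the planets’ angular velocities into sound. As a hypothetical sketch only (not her actual Bell Labs realization), the mapping might look like this, where the 220 Hz anchor on Earth, the linear pitch scaling, and the use of standard modern orbital figures are all my assumptions:

```python
import math

# The six planets known in Kepler's time: orbital eccentricity e and
# period in years (standard modern values, used purely for illustration).
PLANETS = {
    "Mercury": {"e": 0.2056, "period": 0.2408},
    "Venus":   {"e": 0.0068, "period": 0.6152},
    "Earth":   {"e": 0.0167, "period": 1.0},
    "Mars":    {"e": 0.0934, "period": 1.8808},
    "Jupiter": {"e": 0.0489, "period": 11.862},
    "Saturn":  {"e": 0.0565, "period": 29.457},
}

def perihelion_aphelion_ratio(e):
    # Kepler's "harmony" for each planet: conservation of angular momentum
    # gives omega proportional to 1/r^2, so the ratio of angular velocity
    # at perihelion to aphelion is ((1+e)/(1-e))^2.
    return ((1 + e) / (1 - e)) ** 2

def mean_angular_velocity(period_years):
    # Mean angular velocity around the Sun, in radians per year.
    return 2 * math.pi / period_years

def to_pitch_hz(omega, omega_ref, base_hz=220.0):
    # Map angular velocity to frequency, anchoring the reference planet
    # (here Earth) at base_hz; pitch scales linearly with angular velocity.
    return base_hz * omega / omega_ref

omega_earth = mean_angular_velocity(PLANETS["Earth"]["period"])
for name, data in PLANETS.items():
    hz = to_pitch_hz(mean_angular_velocity(data["period"]), omega_earth)
    ratio = perihelion_aphelion_ratio(data["e"])
    print(f"{name}: {hz:.1f} Hz, perihelion/aphelion ratio {ratio:.3f}")
```

Under this scaling, Mercury sounds roughly two octaves above Earth and Saturn several octaves below, which is why slow-moving outer planets read as bass voices in such realizations.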
FJO: That leads us into this whole question of who can claim compositional ownership of algorithmic compositions. LS: Well, if the piece is generated by a process, then whoever creates the process you would think composes the piece. It gets more complicated when it’s an interactive algorithmic situation. I have never called Music Mouse an algorithmic music generator. It’s interactive. It’s an “intelligent instrument,” an instrument with a certain amount of musical intelligence embedded in it—mostly a model of what I would call “music space”: music theory, rhythmic structures, and orchestrational parameters that one can interact with. If someone composes with that, to some degree, it’s a remote collaboration, because there is certain decision making I put into that program that they’re stuck with. And the rest of it is up to them. So there is decision making from both me and them, in that the computer is really almost passive. I would say it only does what you tell it to in simple situations like Music Mouse. In complex situations, such as the entire world internet system, things become so complex that things will happen that the system was not instructed to do. But that’s on a different scale from a program where you actually describe a process of music generation, or a program such as Music Mouse, where if you do exactly the same thing you will always get exactly the same result, as with other instruments.
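The distinction Spiegel draws here, an instrument with musical knowledge built in that is nevertheless fully deterministic, can be illustrated with a toy sketch. This is my own invention, not Music Mouse’s actual logic: pointer coordinates are quantized onto a diatonic scale, so the harmonic knowledge lives in the instrument while identical gestures always produce identical notes.

```python
# Toy "intelligent instrument" sketch (assumptions mine, not Music Mouse's
# actual algorithm): the instrument embeds a scale, and raw 2-D gestures
# are constrained to it.
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # scale degrees as semitone offsets

def snap_to_scale(value, lo=36, hi=84, scale=C_MAJOR):
    """Map a position in [0.0, 1.0] to the nearest scale tone (MIDI number)."""
    midi = lo + value * (hi - lo)
    octave, semitone = divmod(round(midi), 12)
    # min() is deterministic: ties resolve to the first minimal element.
    nearest = min(scale, key=lambda s: abs(s - semitone))
    return octave * 12 + nearest

def play(x, y):
    # Two voices: a melody note from the x coordinate, and a second voice
    # from the y coordinate transposed an octave lower, both snapped to
    # the scale. A pure function of its input: same gesture, same notes.
    melody = snap_to_scale(x)
    harmony = snap_to_scale(y) - 12
    return melody, harmony

print(play(0.5, 0.5))  # middle of the pad in both dimensions
```

Because the mapping is a pure function of the gesture, replaying the same movements gives the same music, which is the sense in which such an instrument “only does what you tell it to,” even though the user never chooses individual pitches.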
Music Mouse running under the STEEM Atari STe Emulator on a Windows Vista PC. Photo courtesy Laurie Spiegel.
FJO: Allow me to play devil’s advocate. LS: Go for it. FJO: Even if you’re creating music on a piano, there are things that are built into that piano that sort of predetermine the kind of things you can do with it. LS: Sure, each instrument really does have an aesthetic domain. You obviously can’t do the same music with a flute and with a harp. But you say that you could hear my sensibility in the Kepler. You probably hear a related sensibility when you listen to my piano pieces or my orchestral writing. FJO: Yes. LS: So the medium interacts certainly with the individual person expressing themselves through the medium. It’s sort of a collaboration between a structure and a person. FJO: Well, the reason I’m bringing this up is the story you told me last week about a music composition teacher being upset with you because your software made it difficult for him to know if his students were actually composing the music he assigned them to write. LS: I wrote Music Mouse for my own use, and then I showed it to people and they wanted copies of it, and then they showed it to people, and it got to the point where more people wanted copies of it than I could sit down and explain how to use it to and so I wrote a manual. Then it kept snowballing, and it needed a publisher, so I gave it to Dave Oppenheim at OpCode to publish. And then a lot more people had it. At one point, later when Dr. T’s Music Software were publishing it, Music Mouse was bundled with [Commodore] Amiga computers, and something like 10,000 copies of it shipped. A lot of people used that program.
So I began to get feedback from all manner of people who I didn’t know. The program was in many contexts I had never dreamed it would be in. So I get a somewhat upset letter from a college music teacher telling me that because of my program, he doesn’t know how to grade his students. He can’t tell if they know harmony, or if they’re relying on my software for the harmony that they’re using in the compositional exercises they’re submitting to him. What is he supposed to do about that? How is he supposed to grade them? Music isn’t really something that’s supposed to be graded anyway. But yeah, a lot of unexpected and interesting things happened as a result of that program going out in the world on such a large scale.
Floppy discs for two of Laurie Spiegel’s software programs, Music Mouse and MIDI Terminal, as issued in the 1980s. Photo courtesy Laurie Spiegel.
FJO: One of the things that I find fascinating about it is it can help you get out of habits that you had. LS: I used to call it an idea generator. You’re certainly not going to be able to do anything you ever did on a keyboard or guitar, and you will be doing other kinds of things. And you’ll be focusing not on the level of the individual notes, but on the shapes of the phrases and the architecture of the musical gesture. It forces you to conceptualize on a larger scale. Music composing often really bogs down at the level of the note, and people lose perspective and they muddle around. If it’s really beautifully done, you can be utterly fascinated and transfixed by what’s happening on the level of the notes. But you also find an awful lot of pieces that seem to just kind of go on and on and wander around because the person creating them has lost perspective in terms of an overall form. Music Mouse orients you to think on a slightly larger scale of the phrase or the gesture. Of course, you can still wander around, making a mess for a really long time. We’ve all done that. But it’s an improvising instrument and it’s a brainstorming instrument. FJO: In terms of how it affected your own composition process, are there things in your music that are different before Music Mouse and after Music Mouse? LS: Music Mouse had things in common with the various FORTRAN IV and C programs I wrote at Bell Labs, but I can’t begin to say how much the orchestration of electronic sounds that could be dealt with in real time changed in a single decade. I mean, you talk about 1975 when I was doing pieces like Patchwork at Bell Labs. In 1985, I was doing pieces like Cavis Muris and the orchestration of real-time electronic sounds, real-time digital sounds, was just light years more advanced. It’s amazing what happened orchestrationally in that decade with the development of real-time digital audio. FJO: I love the back story of Cavis Muris. LS: I’m very fond of mice actually. 
There was a little family of mice living in the loft at that point. But the mouse of Music Mouse initially was the mouse input device of the early Apple Macintosh. It occurred to me when I got my first Mac. It was not the very first one, the very limited 128k. By the 512k Mac, it became usable. So what would be the most logical thing you’d want to do with a mouse-controlled instrument? You would want to push sound around with the mouse. So, it was Music Mouse, and then I just kept refining it and refining it. That’s how it got its name. Now, of course, nobody uses mice. Well, some people still use mice. And of course there are still plenty of real mice. FJO: I still use one, but I also still use a PalmPilot. LS: I always used a trackball, which I guess I would have had to call “Music Rat,” because it is definitely bigger than a mouse. I was thinking of doing a Rhythm Rat at one point, but I never got that far. There were too many other things going on. I might do a Counterpoint Chipmunk at some point. I don’t know. I would love to get back to coding. It’s just been so busy, and the technology changes under me faster than I can learn to keep up with it in my spare time with so many other things always going on.
One of Laurie Spiegel’s current compositional work stations.
FJO: The constant change in technology raises other issues about the future of musicality. Being adept at something because you’ve mastered it over the course of many years is an alien concept to a lot of people nowadays. But in a society where the technology changes at the drop of a hat, it’s really difficult to become proficient in any specific thing. LS: You are right. People used to learn a tool or technique and refine and develop their use of it for the rest of their lives. Now we can’t even run the software we used most just a few years ago. We are always beginners, over and over.
This constant transitioning fits with the attention span of the channel flipper or the web browser. And the process of facing the blank page until some creation takes form on it is now rare. More and more of today’s digital tools come up with a menu of selections, like GarageBand. Here’s a library of instruments, pick one—multiple-choice initial templates. Do you want to make this kind of piece or that? Things start with “here are some options you can select among” as opposed to starting with something in my mind which I’m hearing in the silence in my imagination. Back in the dark ages when I was young, you had a great deal of time to focus on what was happening in your mind, and information could proliferate, amplify itself, and take form in your imagination without that much interruption from outside. You had your mind to yourself. I don’t think kids walk home from school anymore. I don’t know. All parents seem to be hell-bent on making sure they’re safe and picking them up. And they are constantly interacting, with people or with devices or with people via devices.
Our culture is at this point full of people who are focused outward and are processing incoming material all the time. So you’ve got musical forms which are mixes, mash-ups, remixes, collages, processed versions, and sampling, all kinds of making of new pieces out of pre-existing materials rather than starting with some sound that you begin to hear in your imagination. I’m a little concerned about this because there’s just nothing like the imagination—being able to focus inward and listen to what your own auditory mechanism wants to hear—listening for what it wants to hear and what it would generate on its own for itself. You can do processing of the stuff coming at you ‘til the cows come home, but are you going to get something that’s really the expression of your individuality and your sensibility the same way as listening to your own inner ear? Are you going to come up with something original and authentically uniquely you? FJO: But you were saying before that we’ve moved to this point where nobody owns a sound and that reconnects us with much earlier folk music traditions. LS: Well, people still do, but it seems to be very hard to enforce ownership of sounds. FJO: I loved the story you told Simon Reynolds about wanting to listen to an LP you thought you had and when you were not able to find that recording, you made your own music instead. LS: That’s where the piece The Expanding Universe came from. I was looking back and forth through my LPs, and I wanted to hear something like that—not a drone piece, not a static piece, not like La Monte Young, and also not something that was a symphony. It just needed to be this organic, slowly growing thing, and I couldn’t find it, so—do-it-yourselfer attitude—I made one. FJO: So do you think it’s less likely that somebody would do that now? LS: Would somebody feel a desire to hear a certain kind of thing and go looking for it? Would they hear something inside their head and want to hear it in sound? 
It seems that people are fending off a great deal now. The dominant process is overload compensation: how can I rule out things that I don’t want to focus on so that I can ingest a manageable amount of information and really be involved in it. Attention is now the scarce commodity. Information used to be the scarce commodity, “information” including music of course.
Laurie Spiegel’s loft is an oasis of books, musical instruments, electronic equipment, and toys.
FJO: In terms of finding that original sound, there’s a piece of yours that I certainly think is one of the most original sounding pieces and it’s one of my favorites—Voices Within. It’s also one of the only pieces that you did using alternative tunings. LS: Wandering in Our Time is similar, although not as highly structured as Voices Within. It’s easier to use tonality or modality. Microtonality is hard to deal with. I didn’t use any particular microtonal scale. It was really by feel. FJO: But there’s a real sense of it being another world. LS: It was a very internal world. I keep using the word emotions, but emotionally, subjectively, it’s the kind of unformed sense of experience you can’t even identify or label or describe, but it’s something haunting you inside that you feel music is the way to express. Does that make any sense? FJO: Yes, but the reason I bring it up is because of what you were saying about attention being so hard to come by these days. That piece really struck me because I didn’t have a framework for listening to it since it was so unlike anything else. With technology today and where we are in terms of being offered all these possibilities and having to choose from a set of options rather than striking out on our own paths, I wonder how possible it is for a piece like that to be created now. LS: You wouldn’t have come up with that piece on a keyboard-based synthesizer. It needed a synthesizer without a keyboard. To some degree, all of these computer programs for music out there now are virtually keyboard synthesizers; they all give you a scale. You have to really work to get out of the scales, those normal diatonic scales that are in every software package on the market. There are a lot of assumptions about the nature of music in most of the commercial software.
They’re perfectly fine for making music that’s a lot like previous music, but not in terms of finding those places on the edge of what we know where we’re feeling for something that is so subjective and so tenuously there that we can’t begin to describe it. Those kinds of aesthetic experiences in sound are not really what the software that most music is done on today is optimized for. I suppose I’m guilty of using existing software by other people as much as anyone, but you do have to really work to get beyond the assumptions inherent in most software tools for any of the creative arts these days. FJO: In terms of working within conventions, it was fascinating for me to discover Waves and Hearing Things, your pieces for orchestra.
LS: I can thank Jake Druckman for giving me both of those opportunities, actually. He agented both of those. Everybody just wants to hear my electronic stuff, pretty much. FJO: But those pieces are extraordinary, too. They’re very interesting musical paths that might not have been intuitive had you not immersed yourself in electronic music. I hear the same kinds of transformations of timbres—an instrument emerges out of a cloud of sound the way that a timbre would emerge in electronic pieces from that time. Yet it’s all done acoustically. LS: But that happens in classical music, too. Much as I would have never admitted it to other kids at Juilliard, I absolutely love Rimsky-Korsakov and how he orchestrates. His orchestration is one of my great inspirations. And I love his orchestration book, too. It’s just really about sound and feeling it; it’s not about instrument ranges or any kind of nuts-and-bolts level stuff. You could say that what he does in some of his orchestrations is virtually electronic. It’s so focused on the sounds that you practically forget that they’re instruments.
Orchestras are great because you have all these timbres—wow! Then again, I love writing for solo instruments, too. Concerts are good, and I’ve enjoyed many concerts, but to me the most important music was always the music that happened at home where I would just pick up my guitar and play it to feel better, or I would sit there and sight read at the keyboard, which I used to love to do a lot, but haven’t had much time for in recent years. Or playing music for just one other person. Or playing music with one other person at home. Writing music that somebody can just put on the piano, trying to write things that are not that hard to play so that more people can play them. I’m not interested in virtuosity. I’m not interested in writing show pieces for concert halls. I’m interested in writing something that someone can sit down and play at home and enjoy the musical experience of playing it. That’s more important for me as a composer, so I tend to write pieces just for guitar or piano, the instruments that I have played the most. FJO: That’s a beautiful statement because so many people talk about getting into electronic music so that they could write music that they weren’t able to get players to play, creating a music that is even too hard for the virtuosos, music that’s beyond human ability. You’re saying the exact opposite. LS: Well, that too. It’s not an either/or. They’re both valid. That’s one of the reasons to do Music Mouse. It’s as close as you can come to playing an entire orchestra live in real time. I have all this timbral control. Nine of the twelve tracks on my CD “Unseen Worlds” were created with just Music Mouse and it was like playing a pretty full orchestra. FJO: So if you somehow had the time to take those pieces and orchestrate them and have them played by actual orchestras, would that be aesthetically satisfying you? 
LS: That seems like an awful lot of time and work to do something that already exists, as opposed to doing something different if given the opportunity to do something for orchestra. But yeah, that would be interesting. They would be different pieces, I would think. But that would be a lot of work. Well, it might not be. Actually you could automate an awful lot more of the transcription than you used to be able to do. Writing notes down, God, it’s so much slower than playing. That’s partly why I’ve always been an improviser. Jack Duarte, my teacher in London, said “composition is improvisation slowed down with a chance to go back and fix the bad bits”. Or “bad notes,” I think he may have said.
Laurie Spiegel playing the lute in 1991. Photo by Paul Colin, courtesy Laurie Spiegel.
FJO: So we’ve talked about the composer and the interpreter, what about the listener? LS: Well, I think one of my advantages as a composer was that I didn’t accept the identity professionally until I had already grown up as a listener and a player. The emotional level is the level at which I am primarily motivated and always have been. I’m still the teenage girl who, after a fight with my father, would take my guitar out on the porch and just play to make myself feel better. That’s who I am musically. I kind of knew what I liked as a listener, and what I liked was music that would express emotions that I didn’t have a way of expressing, where somebody understood me and expressed in their music what I was feeling in ways that I couldn’t express myself. So, to some degree, I think I see the role of the composer as giving vicarious self-expression to people, although at this point, with the technology we have, there’s no reason for anybody who wants to make music not to be able to. But there really still are levels of ability. Not everybody’s going to be Beethoven or Bach. There still will always be room for truly amazing artists of composition and sound who can do things that other people can’t. It’s just that I really kind of rail against the old dichotomy of the small elite of highly skilled makers of music and this vast number of passive listeners that have no way to actively express some thoughts in music. That seems really wrong to me, and that no longer needs to be the case. But that’s not to say that it isn’t still worth listening, because there aren’t that many truly great works out there, percentage-wise.
In addition to her musical compositions, computer software, and extensive writings about music, nature and many other topics, Laurie Spiegel is also a visual artist. These are two of her Xerographs.
Today marks the 40th anniversary of Nicolas Collins’s Pea Soup, a piece that uses electronics to “play” the signature acoustics of a space. In honor of that milestone, Collins today unveils Pea Soup To Go, a free virtual jukebox programmed with recordings of 70 different versions of the work, iterations which span decades and continents.
Since the composition relies on linked microphones and loudspeakers in a “self-stabilizing feedback network” to map and respond to changes in the room and produce the sonic content featured in the piece, it might just be one of the purest forms of ambient music available. The jukebox shuffles the various collected recordings, masking transitions between each with long crossfades, allowing listeners to dip into this historic stock pot and feast until they are full.
Molly Sheridan: How do you tend to explain this piece to people who haven’t yet heard it, especially those without a great deal of technical background?
Nicolas Collins: Technically it’s pretty simple. Everybody seems to have heard the squeal of feedback at some point, and most are familiar with the fact that moving the microphone (or electric guitar) usually changes the pitch of the feedback. I explain that the phase shifter (the electronic gizmo at the heart of the piece) emulates a hand moving the mike every time the feedback starts to swell. The piece has a sufficiently dreamy, non-threatening quality that most people don’t worry too much about the how and why.
MS: And that idea led you to the title Pea Soup?
NC: The immersive quality of the sound field brought to mind the cliché of a fog “as thick as pea soup.” Rather silly, in retrospect, but I was pretty young and now I’m stuck with it.
MS: While reading up on the history of Pea Soup, I was surprised to discover that the work can involve (or always does?) live musicians. This was something I didn’t quite pick out in the first few iterations of the piece I heard via the jukebox. They are charged with interacting with the electronics (or later the software) in some specific ways. Can you explain why you prescribe their actions in the way that you do? And then this of course made me curious about the impact of the audience in the space and therefore on the work itself.
NC: Left to its own devices the Pea Soup feedback network creates simple, languid melodies whose pitches are derived from the resonant frequencies of the room (and the tempo reflects the reverberation time–larger rooms play slower tunes.) A small change in the room acoustics can cause a pitch to be added to or dropped from the melody, like some slow hocket music. I ask performers to “play” the acoustics by walking around the room, since interfering with the reflecting paths of the feedback often causes a change in the patterns. They play notes as well: playing a unison with a feedback pitch, then bending slightly out of tune, can stop the feedback; playing an octave or fifth above a feedback pitch can cause the feedback to break to the upper interval; and introducing a pitch that hasn’t been heard in the feedback for several minutes often brings it back into the melodic pattern.
Audience sounds and movement obviously influence the patterns as well–a performance in a noisy bar unfolds very differently than in a quiet, formal concert hall. I’ve also installed the work in gallery settings, where interaction with the audience becomes central.
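The “keys” Collins describes come straight from the physics of standing waves. As a rough illustration (my own sketch, not anything from Collins’s software), the axial resonances along one dimension of a rectangular room fall at f_n = n·c/(2L), where c is the speed of sound and L the wall spacing, so every room offers its own fixed set of available pitches:

```python
# Illustrative sketch only (not Collins's actual code): axial resonant
# frequencies along one dimension of a rectangular room fall at
# f_n = n * c / (2 * L), where c is the speed of sound and L the dimension.

SPEED_OF_SOUND = 343.0  # meters per second, near room temperature

def axial_modes(length_m, count=5):
    """First `count` axial-mode frequencies (Hz) for a room dimension in meters."""
    return [n * SPEED_OF_SOUND / (2 * length_m) for n in range(1, count + 1)]

# A 10 m wall spacing resonates at ~17.15 Hz and its integer multiples;
# halving the room raises every mode an octave, so each room has its own "key".
print(axial_modes(10.0, count=3))
print(axial_modes(5.0, count=3))
```

This also hints at why larger rooms feel slower and lower: their fundamental modes sit deeper, and their longer reflection paths stretch the feedback’s response time.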
In performance I usually let the feedback system stabilize for a few minutes, as a sort of alap introducing the scale of the room, before the players start. The web app (Pea Soup To Go) shuffles a library of around 70 performance recordings, with long fade-ins and fade-outs. The sequence is random (or as close as I can get), as is the selection of in- and out-points for each file, so the recordings always start at different times–sometimes one drops right in on a musician’s sounds, but sometimes you have to wait a few minutes to hear a player. Plus the players are instructed to play “inside” the feedback texture, rather than soloing on top, so it’s not always easy to distinguish the instrumental voices. MS: Okay, now for the gear snobs in the crowd, this piece offers some interesting insights into the punishment time can dish out on work that involves specific electronic components that can break down and become obsolete. This led you to some particular extremes—I especially loved the correspondence you exchanged with Carl Countryman, the maker of the phase shifter you originally employed in the piece. Can you tell us a bit about that evolution and how it affected the work?
NC: This will make me sound even older than I am, but back in 1974 there were no digital delays (or at least no affordable ones). The studio at Wesleyan had three Countryman Phase Shifters that Alvin Lucier had bought to do what’s called “Haas-effect Panning,” which is a way to pan sounds quite realistically using very short time delays. I had been working a lot with feedback, and discovered that changing the phase shifter’s delay setting could emulate moving a mike, opening up a whole new vista of quasi-automated feedback manipulation. Pea Soup emerged as one of the major products of my undergraduate education.
After college I moved on to other materials and technologies (early microcomputer music, live sampling and signal processing, collaboration with improvisers.) But I’d return to feedback from time to time, and when, through my day job in New York, I ran into Carl Countryman at trade shows I’d always ask if he had any of the Phase Shifters back at his warehouse. By the 1980s he was making very popular high-quality Direct Boxes and lavaliere microphones, and the phase shifters were long gone and, it seems, not missed–his answer was always “no.”
Then in the late 1990s I was in Berlin with a DAAD fellowship, and an ensemble with which I was working (Kammerensemble Neue Musik Berlin) asked if they could revive Pea Soup. At first I tried to reconstruct the original analog circuit. I emailed Mr. Countryman, who obviously still remembered my unwanted nagging, and he sent me the schematic with the explicit understanding that I was never to bother him about this device again. The circuit is not complicated, but it has one odd custom-made part that was difficult to duplicate. I did a few performances with my best attempt in the analog domain, but after a few years I wrote a software emulation of the original analog boxes that, with enough code tweaking, evolved into a pretty convincing substitute.
Software has allowed me to add a few features that would have been great to have back in 1974 but were out of reach then (such as a filter that automatically nulls pitches that would otherwise dominate the texture.) Programs are not as cute as little metal boxes, but they’re lighter and can be distributed more freely, like old-fashioned paper scores: I’ve posted the program on my web site, where anyone who’s interested can download it and perform Pea Soup without the need to fly in Nic and his gear. MS: How does the experience of Pea Soup via this clever website relate to the performance experience of hearing it live for you?
NC: In a big space with big speakers Pea Soup can be a very immersive and interactive experience—“church of sound,” as one friend once called it. The web app (Pea Soup To Go) is obviously more like listening to a recording of a concert than experiencing a live event, but this is a record that never ends, never repeats—a multi-disk CD changer in “shuffle mode” with a twist: the long crossfades knit the 70 files into one continuous performance. Since every room is in a different, architecturally determined “key,” you end up hearing a series of odd, vaguely modal chord changes that stretch out over an almost glacial time scale.
MS: Even before I started reading the background on Pea Soup, I kept thinking of Cage and Lucier associations related to “hearing” a space–using a space and its contents as so essential to the end sonic result. Do you hear this piece as in that evolutionary line? In what ways does it intersect and/or diverge?
NC: Yes, it certainly is in that line. I was a young, impressionable student of Lucier’s at the time I made Pea Soup. I was drawn to feedback under the twin influences of Lucier and Cage. I loved Lucier’s extraction of musical material from fundamental acoustical phenomena (think of Vespers and I Am Sitting In A Room). My parents were both architectural historians, and the link between music and architecture was critical to my finding a comfortable place to work. And feedback became the solution to my Cage-induced ambivalence about making personal musical decisions in a world where all sounds could be “musical sounds”: turn up the volume and let nature/god/architects do the rest—a sort of acoustical I Ching.
Divergence? I think my generation of musicians and composers is (and always was) much more comfortable with the idea of improvisation than our teachers were: Cage hated it; Lucier kept trying to come up with other words to describe it. In Pea Soup and most of my other work I embrace improvisation, I hand a lot of responsibility off to my players, and live with the consequences.
I also see each musical generation incorporating a new generation of technology. My peers and I embraced synthesizers, effect boxes, homemade circuitry, computers. And technological shifts often beget stylistic changes – some modest, some significant. There’s a certain kind of technological interactivity that I believe is, for better or for worse, the gift of my generation of experimental music composers.
MS: Even though this was originally a student piece, you note that the lessons of architectural acoustics have continued to engage you, making this piece of ongoing interest even 40 years later. What have some of those lessons been?
NC: I still have difficulty making certain musical decisions, and I often return to acoustics to clarify the edges or underpinnings of a piece. In the end no sound gets to the ear without engaging with acoustics, and the physical reality of sound keeps me grounded. There’s a certain primordial consonance or orderliness or reassuring “rightness” in it, that I find helpful when I’m feeling lost.
Recently, while tweaking the software for Pea Soup, I discovered a simple way of mapping the resonant frequencies of a room to conventional music notation. I’ve written a piece (Roomtone Variations) that uses this technique to create a site-specific score for any concert space, in real time, in the presence of the audience. The score is projected on a screen for all to see as it unfolds, and after the analytical intro (which takes about two minutes) an ensemble performs purely acoustic variations on this “architectural tone row” – a kind of “Pea Soup Unplugged.”
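The mapping Collins mentions can be illustrated with the standard equal-temperament conversion: a frequency f corresponds to MIDI note 69 + 12·log2(f/440), which then snaps to the nearest note name. A hedged sketch of the general technique (my own example, not Collins’s actual software):

```python
import math

# Illustrative sketch (not Collins's software): snap a measured room
# resonance in Hz to the nearest equal-tempered pitch for notation.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_note(freq_hz, a4_hz=440.0):
    """Nearest equal-tempered note name (with octave) for a frequency in Hz."""
    midi = round(69 + 12 * math.log2(freq_hz / a4_hz))
    return f"{NOTE_NAMES[midi % 12]}{midi // 12 - 1}"

print(freq_to_note(440.0))   # A4
print(freq_to_note(261.63))  # C4, middle C
```

Since real room resonances rarely land exactly on tempered pitches, any such “architectural tone row” involves rounding; the scale the notation shows is an approximation of what the room actually sings.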
Another new piece, Speak, Memory, uses room reverberation as short-term memory for image files and sound bites. In the course of the performance I display the transformation of the original pictures and sounds as they are “forgotten” by the room. (I hope to include both these pieces on my first concert in New York in many years, at Roulette on March 9.)
You could look at this obsession in one of two ways, I suppose: either I am somewhat pathetic for, at the age of 60, still being hung up on my first true love from age 20; or it’s a sign of deep commitment to one’s fundamental beliefs. Take your choice.
(New Amsterdam 057) Performed by:
with cameos by Anthony LaMarca, Aaron Roche, and Jay Hammond Order on Bandcamp
The work of electronic musician/sound artist Michael Hammond first engaged my ears while listening to Sarah Kirkland Snider’s large-scale work Penelope, to which Hammond contributed elegantly subtle electronic textures. Negative Space is the first full-length album of Hammond’s own recording project No Lands; it features nine electronic works that combine song format and ambient soundscape—the work of, as Hammond states in the liner notes, “Three years and a hurricane.”
Much of this music was created in the wake of Hurricane Sandy, an event that greatly affected Hammond, a Red Hook, Brooklyn resident and member of the New Amsterdam Records team. The dreamy nature of the music, with restless patches of multi-textured noise, synth washes, and eerie pitch-shifted voices, is both graceful and slightly disturbing at times. While the music has a surface level techno/dance music feel, substantial composerly attention is devoted to form, color, and line, making Negative Space a gratifying listening experience.
My vote for song of the summer (at least for this morning) comes courtesy of Boston-based pop omnivores Pulitzer Prize Fighter and their first single since their late-2012 EP, All Sweetness and Light. “Movies” ticks off all the boxes for a good summer song: a relentless hook, genial amounts of volume, sing-along lyrics proclaiming the merits of shrugging off thoughts of mortality by just doing stuff, a low-key, meandering haze of disposable leisure. Not least, it packages up some nice musical nostalgia, be it a sunny ’70s squall of parallel-harmony guitars, a cool, noir-ish pour of muted trumpet, or the comforting psychedelic worry of a fully diminished seventh chord. (Listen carefully, at the dominant pause just before the end of the bridge, and you can hear a lovely, chromatically descending keyboard decoration buried in the mix like some unexploded ordnance from the British Invasion.)
Summer music, for me anyway, tends to rise and fall on its leveraging of nostalgia, even more so now that actual summer vacation time is an increasingly distant memory. I’m already nostalgic for the beginning of this summer, when a lazy, sun-dappled respite was still a naïve possibility rather than an unattainable grail. In that spirit, here’s a handful of more recent local releases of varying retro commitment and/or critique. Lewis Spratlan: Apollo and Daphne Variations; A Summer’s Day; Concerto for Saxophone and Orchestra
Eliot Gattegno, saxophones
Boston Modern Orchestra Project; Gil Rose, conductor
(BMOP/sound 1035) Buy now: Excerpt from Lewis Spratlan’s A Summer’s Day
Spratlan’s musical version of A Summer’s Day (2008), commissioned and premiered by BMOP, has the instant nostalgia of a strongly evoked, specific time and place. His “Pre-Dawn Nightmare” includes fragments of the theme song to The Sopranos; “At the Computer” evokes the sounds of an already-obsolete desktop machine. And the connective tissue of the piece, the folk-like tune presented at the outset (“Hymn to the Summer Solstice”), is a memory of summer romanticized into an abstraction. But the tune is repeatedly interrupted and contradicted; and Spratlan is more interested in reversing the usual polarity of such tone poems, taking trompe-l’oeil musical literalisms (and some flat-out literalisms, as with the rhythmically dribbled ball in “Pick-up Basketball Game at the Park”) and working them into a fluid, chromatic musical texture until they turn back into pure sound. (BMOP’s stylistic facility is a boon here, shifting effortlessly between limpid lushness and a more incisive, new music briskness.)
The Concerto for Saxophones and Orchestra (well-assayed, on both soprano and tenor instruments, by saxophonist Eliot Gattegno) and the Apollo and Daphne Variations do something similar with nostalgic styles, the inevitable jazz references in the former, a deliberately Schumann-esque Romanticism in the latter. Three very different pieces, but all engaged in a rich dance between the memory of something, the actuality of the thing being remembered, and the persistent present that the memory can’t quite mask. Mehmet Ali Sanlıkol: Whatsnext
Download from Bandcamp
To be sure, only a couple of tracks on Mehmet Ali Sanlıkol’s big-band album, released this spring, directly traffic in nostalgia, and the nostalgia is pretty specific: “Kozan March” convincingly reimagines a Cypriot folk song as a Neal-Hefti-ish workout; “Gone Crazy: a Noir Fantasy” tosses out handfuls of noir signifiers, with some sirens and police whistles to boot. But much of the fizz of the album—which alternates between a 17-piece traditional band and a 13-piece ensemble that includes traditional Turkish instruments—is Sanlıkol’s use of various vintage sounds, from an eerily formal harpsichord on “Better Stay Home” to the pastoral warblings of a Turkish ney on “The Blue Soul of Turkoromero” to a pellucidly primeval analog synth lead on “N.O.H.A.”
And, anyway, Whatsnext is just superb summer music. Sanlıkol—Turkish-born, Berklee- and NEC-educated—slips Turkish sounds and ideas into a polished, modern big-band idiom with wrinkle-free ease. Relaxed and cool, it turns out, is a universal, cross-cultural virtue. Neil Cicierega: Mouth Silence Buy now:
Download available from the artist for a donation.
A good mash-up is a double-shot of impressive cleverness, making two disparate pieces of music play nice with one another. A great mash-up uses that superimposition to tap into some deep commonality across the genre spectrum. Somerville-based Neil Cicierega, though, has devoted 2014 to a style of mash-up even more outlandishly transcendent, as if tapping into a conspiracy theory explaining some alternate history of pop culture.
Like this spring’s Mouth Sounds, Mouth Silence makes esoteric use of deliberately banal material, a churn of nostalgia refashioned into something resembling the soundtrack to a Hanna-Barbera adaptation of a Don DeLillo novel. Mouth Sounds—while positing the formerly annoyingly ubiquitous Smash Mouth hit “All Star” as the hidden key to four decades of pop-music history—repeatedly dredged up musical madeleines from the ’80s, ’90s, and ’00s, only to immediately undercut and profane them. Mouth Silence goes one step further, wreaking havoc on numerous songs that themselves capitalize on nostalgia in one way or another: “Crocodile Rock,” “Born to Run,” “Wonderwall.” REM’s “End of the World” and Billy Joel’s “We Didn’t Start the Fire” end up in a Street Fighter match of boomer timelines; the good old dark days of Pokémon panic are re-animated into a golem-like stand-in for every fleetingly misunderstood fad. Cicierega’s mischief is so deep that even the moments that don’t quite mesh feel more like elusive clues for any would-be cultural Dale Cooper. And the 24:03 mark? We all go a little mad sometimes. Boston Symphony Chamber Players
Music by Mozart, Beethoven, Brahms, Copland, Fine, Carter, and Piston (1964)
Music by Mozart, Brahms, Schubert, Poulenc, Colgrass, Villa-Lobos, Haieff, and Barber (1968)
(BSO Classics) Buy now:
Download directly from the Boston Symphony Orchestra.
Back in April, to mark the 50th anniversary of the Boston Symphony Chamber Players, the BSO began re-releasing re-mastered editions of four recordings the group made for RCA in the 1960s. The bulk of the repertoire is Austro-Germanic bread and butter: Beethoven, Brahms, Mozart. But the recordings also included some then-contemporary repertoire, and the result is some prime Boston-School neo-classicism, in rich, time-capsule performances. On the first set, Aaron Copland’s Vitebsk gets a sharp, grim reading; Walter Piston’s 1946 Divertimento is vigorous fun. One of the century’s more notable collections of principal winds—including flutist Doriot Anthony Dwyer and oboist Ralph Gomberg—takes on Elliott Carter’s 1948 Wind Quintet. The best is an exhilarating, athletic account of Irving Fine’s 1957 Fantasia for String Trio, with violinist Joseph Silverstein, violist Burton Fine, and cellist Jules Eskin (today the group’s sole remaining founding member). Excerpt from Irving Fine’s Fantasia for String Trio
The second re-issue includes Gomberg and Sherman Walt on Alexei Haieff’s lean, light Three Bagatelles for oboe and bassoon, along with Burton Fine and Vic Firth on Michael Colgrass’s Variations for Four Drums and Viola. As a bonus, there is a previously unreleased live recording of Samuel Barber’s Summer Music, a truly excellent performance, as bright and cool and languid as a gin and tonic on the lawn.