Author: Eric Chasalow

Memories of Milton

Our community, no, our world, has always been defined in one way or another by the presence of Milton Byron Babbitt. We have all, at one time or another, had to come to terms with his music, with his ideas about music, and with the place he helped define for composers within the American university. Anyone who can make us question the way forward so profoundly is helping us become ourselves, whether we like it or not. And with Milton Babbitt’s passing a few years ago, our world is diminished in ways we have yet to realize.  From my tone, my great affection may be obvious, but even people who—to put it kindly—were not fond of Milton owe him a debt of gratitude.

For those of us making our way in the studio, Milton represented something very special.  His leadership was instrumental in establishing the Columbia-Princeton Electronic Music Center. While Luening and Ussachevsky were responsible for the series of Rockefeller grants that established the Center, as well as arranging the loan of the RCA Mark II synthesizer and its move up to the Prentice building on 125th Street in New York, Milton helped establish the C-P EMC as a major center of musical inquiry. (This in spite of the fact that ultimately only Milton, Charles Wuorinen, David Lewin, and perhaps a few others of whom I am unaware ever used the RCA.)  In his role as co-director of the EMC, as in every role he took on, Milton served as an example of someone with the very highest standards for the art of composition.

The fledgling profession needed a public advocate and Milton Babbitt was electronic music’s most articulate spokesperson. He appeared on radio and television and also sat on government and foundation panels.  His lecture topic was often ostensibly technical.  To quote a line from a lecture recorded at the New England Conservatory in the 1960s, “[T]he joy of the electronic medium is that once you can capture sound… it’s not susceptible to change” (by which he meant no longer susceptible to inaccuracy).   In other words, Milton used the newness of electronic music as a way to speak about cherished ideas about music that, while given new power by the evolving technology, were already part of his musical thinking.  When Milton spoke, as he often did, about the importance of “time and order in music” he could just as easily have been speaking about Schoenberg as electronic music.  This too was the power of his example—musical thinking, while potentially expanded by changing technology, is always at the core of what we do.  Milton, and others at Columbia, always asked, “How can I use the tools that I now find in front of me to explore musical ideas?”

In spite of what many might think, Milton’s position on rigor in electronic music practice was as much practical as ideological.  The world of electroacoustic music is still tainted by the notion that the studio is the playground of dilettantes, while real composers write for “real” instruments.  In the late 1950s, when the EMC was established, it was critical that the new venture have the highest aspirations if it were to survive and flourish.  For all that was being invested in grant dollars, facilities, personnel, and all the public notice, there would also need to be a tangible payoff.

Arguably that payoff came quickly.  Composers flocked to the Columbia-Princeton studio—Stravinsky and Shostakovich among many others visited.  At Babbitt’s suggestion at Tanglewood in the summer of 1958, the young Argentine composer Mario Davidovsky came to work there, a move that helped define his career. Davidovsky, along with another émigré composer, Bülent Arel (who came to the C-P EMC from Turkey in 1959), shaped a major electronic music tradition. The studio also drew countless students who were inspired to make their own musical and technical contributions, among them Wendy Carlos, Charles Dodge, Halim El-Dabh, Alice Shields, and Robert Moog.

One aspect of the legacy of the early EMC is the notion of a studio as a musical instrument rather than simply a facility for research.  Hundreds of pieces were composed at Columbia through its first few decades, and a number of these have become acknowledged as masterpieces.  I would include on this list many of Babbitt’s electronic works, especially the pieces for live and pre-recorded resources, such as Philomel (soprano and tape) and Reflections (piano and tape).

Babbitt’s electronic works, all realized on the mammoth RCA Mark II synthesizer, are idiosyncratic and highly personal.  The RCA, as Milton put it in the 1997 interview he contributed to our Video Archive of Electroacoustic Music, “was the perfect instrument for me.” It was the RCA’s degree of rhythmic precision that most interested Milton.  Ironically, while most composers were excited about the new sounds that the studio made possible, Milton claimed no interest in the sounds themselves.  I suspect that, as in many such things, he was overstating his case.  I cannot hear pieces like Ensembles for Synthesizer without feeling great affection for the sound of the piece.  And one aspect that makes Philomel such a coherent world is the way in which the electronic sounds interact with the vocal sounds (recorded by Bethany Beardslee) that Milton took pains to process through the RCA.

I have avoided the temptation to personalize this essay to this point, but while I never studied with Milton, he had a profound effect on my life and work both through the example of his music, which I heard performed live frequently through the 1970s and ’80s, and through his consistent personal encouragement over our friendship of thirty years—this in spite of a vast difference in our personal aesthetics. (I suspect that many readers could report the same.) Rarely a day goes by when I do not think, usually fondly, of some thought-provoking phrase of Milton’s.  In writing my tribute piece for his 80th birthday, Left to His Own Devices, I spent an exhaustingly intensive few months with samples from archival interviews Milton had given over many years.  In hearing those same phrases over and over again, intoned in that beautiful, resonant voice, a layered and nuanced music emerged and became one with the composer himself.  It is this unified yet ineffable complexity, together with a core of generosity and kindness, that will always be the way I remember the man.


Electroacoustic Music is Not About Sound

A table with a variety of electroacoustic music gear. (Image courtesy Blake Zidell & Associates for NYCEMF and the New York Philharmonic Biennial.)

Yes, I do mean this title to be provocative, but my intention is to question some of our priorities and assumptions about composing, not to be polemical or to suggest some correct way of composing. Rather, I am sharing some thinking that I have found serves my students and me well. The main thing I want to explore is my own attitude about musical time. Admittedly, this is a huge topic, with whole books and dissertations rightly devoted to it. I can only scratch the surface in a blog post, so I will just try to (re-)start a conversation about something which seems, strangely, to have become accepted as settled business. While I am at it, I am wondering too about our seeming complacency at having given up control of pitch.

There are basic aspects of compositional thinking that seem to have become almost extinct—particularly, but not exclusively, in the realm of electroacoustic music.   To put it plainly, the ideas of narrative structure and of pitch specificity are now rarely considered. To claim that pitch specificity is important is to risk being labeled a reactionary or, worse yet, conventional. An even more profound change has taken place in our discourse regarding time—the most salient feature of music.  There is the strong suggestion that it is quaint to think of music as a narrative form, unfolding in time. The notion seems so old fashioned that the use of time-denying alternative terminology, adapted from the non-time-based arts, has become accepted practice. (The term sound-object comes to mind.) But, there is still a lot to be gained by an awareness of and the ability to control pitch, no matter how abstract and seemingly “unpitched” musical materials may be. And the unfolding of structure moment by moment is still what music is about—that is, it is about time. I love inventing sounds as much as anyone, but without attention to time we just have sounds. Sound unfolding in time, on the other hand, produces musical thought.  I write this while fully realizing that some readers will find this statement obvious, while others will find it either shortsighted or just plain wrong.

Scrutiny of the nature of sound itself has intensified over the years, especially in the context of electroacoustic music, where the possibilities for the creation and manipulation of sound are truly endless. In fact, the very experience of composing in the studio encourages this focus.  It is an incredibly gratifying experience to work directly with sound, listening to and changing material in real time.  The immediacy of this experience is one of the things that sets work in the studio apart from instrumental writing. This change in our way of making music convinced many composers that a fundamental change was also at hand in the very way in which a piece could embody meaning. The de-emphasis of pitch as the main carrier of an idea, in favor of a more foreground function for timbre, was already well underway in the early 20th century. (The Farben movement in Schoenberg’s Opus 16 is one of the usual examples, while Scelsi demonstrates a further development.) From the 1950s on, the development of technology to capture and manipulate sound accelerated this conceptual transformation.

A mixing console, processor, speaker, video screen and other equipment in Eric Chasalow's music studio.

New materials do demand new approaches, but this does not erase the necessity of paying attention to shaping the narrative. On the contrary, distinctive sounds, each potentially occupying its own perceived space, allow for a new narrative clarity. Just as in film, our more famous time-based cousin, music can have multiple narratives intertwining and adding complexity to the flow of ideas. With crosscutting, flashback, and the like, one can create powerful illusions of nonlinearity, but in no case are we able to escape the reality that time only moves forward.  When we acknowledge this fact, we face the necessity of structuring musical time with great care.  If we do, it is more likely that the music will require and reward an intensified engagement by the listener. This allows us to invoke memory in subtle and powerful ways.

I am very well aware of philosophies that propose to disrupt older notions about musical time, deriving from work that goes back at least to the mid-20th century.  There are tropes on the static as “the eternal” (Messiaen), “moment form” (Stockhausen), and “discontinuity” (my old friend, Jonathan Kramer). It’s just that no matter how many alternative philosophies I encounter, I am always led back to the fact that there is still power in the flow of one moment of experience to the next. It is true that our brains can hold multiple impressions at once, and reorder and reconsider them fluidly. Still, we experience a piece as a succession of elements, and the ordering of these drives the overall experience. If I can get you to care about how time increments in my piece, you will become an engaged listener. Conversely, if I cannot convince you to follow the narrative journey, you will not hear what I have to say. If I only convince you to listen some of the time—to drop in and out of awareness—I have provided, at best, an assemblage of moments rather than a cohesive argument. Another way of thinking about this is in relation to aleatoric relationships we encounter every day. We may be surrounded by objects, and it is possible that by being awake to our surroundings we will become aware of inherent, even beautiful structures, but it is more likely that the chance experience will not rise above the mundane. (Apologies to John Cage, whom I heard express otherwise many times.) The artist is able to create and reveal meaningful connections where we may not otherwise find them, and for composers, time is the most powerful domain with which to achieve this.

All of the preceding, however, cannot exist unless listeners allow the time necessary to experience a piece of music. This has certainly become more and more rare in lives mediated by devices and experienced in five-second chunks. My most naïve idea may be that anyone is willing to concentrate and truly listen through a piece of music at all. If we cannot make this assumption, however, we lose musical experience, so to abandon this hope is to abandon music. There is a larger topic here about where we are when we hear music—in a concert hall (or an alternative formal space), online, on the subway, or in a variety of other informal contexts.

Let’s turn then to the matter of pitch. Why does an increased interest in sound, or the foregrounding of one of its elements, timbre, mean that pitch is now an unimportant element? Am I the only one who finds it ironic that, as we pay such close attention to sound, so little attention is given to pitch specificity? Pitch is such an important part of the complex we call “sound.” Yes, timbre and pitch are not independent in the physical embodiment of a sound, but we can and do think of controlling them independently, and there are many computer tools for doing so. Isn’t ignoring pitch structure a kind of dumbing down? Aren’t we asking listeners to stop paying attention to important details when we fail to make choices regarding pitch? Are we perhaps giving up the precise control of pitch because new technologies make other things easier? Do the newer contexts and new technologies distract us? Perhaps some of us have emerged from the highly politicized prominence of serialism with such distaste for pitch that we feel relief in its seeming erasure.  Perhaps it is just the pendulum swinging from one extreme to the polar opposite. Whatever the reason, I find the lack of attention to pitch impoverishing.  We need every detail, every nuance at our disposal as musicians. Performers know the importance of nuance very well, while composers sometimes are too willing to let some things slide. What is especially great about electroacoustic music, though, is that it expands what can make up the layers of meaning in music. Sound of any source and quality can be brought into dialog with any other.  Spoken texts can collide with environmental sounds, familiar instruments, or synthesized sounds that seem completely nonreferential.  Even with these diverse and complex sources, pitch is still very much present and need not be ignored.

While many of my electroacoustic pieces provide good examples of what I am discussing, the beginning of one older piece, Crossing Boundaries (2000), is particularly clear.  The piece layers sounds from many sources, including recordings of spoken text from archives and answering machines, and bits extracted from historical recordings.  It starts with a quick succession of pitched sources that combine into a complex that we can hear as mostly an Eb chord, oscillating between minor and major. The sounds are more fluid and ever-changing than one would get in an instrumental piece, yet the Eb moving to D, then elsewhere (one can follow the motion quite specifically), creates a harmonic framework that provides the feeling of an upbeat, focusing attention on the entrance of speaking voices.  The whole middle of the piece lingers around G, but as this starts to move, it changes the sense of time passing dramatically. For much of the piece, our attention is on the voices as they speak various short phrases, many of which refer to the concept of time. The piece is, then, an expansion of word-painting technique, and the underpinning for this “metamusical narrative” is a framework of sonorities that is always kaleidoscopic and never imitative of traditional instruments, but where the pitch choices matter a great deal. It is an example of pitch structure shaping the larger musical trajectory of an electroacoustic piece.  I must add, too, that in spite of this example, I do not mean to suggest that tempered pitches are necessary. The entire universe of microtonal tunings is wide open, especially with tools that allow precise control of frequency.
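To make that last point concrete, here is a minimal Python sketch of how exactly a tuning can be specified in software. The function name and the choice of 19 equal divisions of the octave are my own illustrative assumptions, not anything prescribed above:

```python
# Frequencies for one octave of an equal-division-of-the-octave (EDO) tuning.
# Each step multiplies the previous frequency by the same ratio, 2^(1/divisions).
def edo_scale(f0, divisions):
    """Return divisions + 1 frequencies spanning one octave above f0."""
    return [f0 * 2 ** (k / divisions) for k in range(divisions + 1)]

# A hypothetical 19-tone equal temperament built above A = 440 Hz.
for freq in edo_scale(440.0, 19):
    print(round(freq, 2))
```

With 12 divisions this reproduces familiar equal temperament; any other integer (or, with small changes, any arbitrary list of ratios) yields a microtonal scale whose frequencies are specified to whatever precision the synthesis engine supports.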

What is true of composing is also true in analysis. One may discover meaningful relationships within a piece by considering the dimension of pitch where one might not expect to find it. My former student, John Mallia, did his dissertation on Varèse’s Poème électronique, a piece most often discussed in terms of the wide array of sound sources it employs. John discussed these too, but much of his work looked at aspects of the structure where harmonic relationships were clearly very important, particularly in shaping phrases.  The analysis even finds precedents for these structures in Varèse’s instrumental pieces. It should not be so surprising that composers carry what they know about music from working with instruments into their studio work. The trick is to use the new context to spawn new musical possibilities, but figuring these out does not require throwing out old concerns as much as we might imagine. There have been numerous examples throughout history of new forms developing through a tension between evolutionary and revolutionary thinking, and there is no reason to think we have somehow recently escaped the value of historical precedents.

The Opportunity of Electroacoustic Musicology


I want to start this post with a challenge to my musicologist colleagues (I hope there are musicologists reading NewMusicBox), but it is really a call to action for us all. The exploration of electroacoustic music, its historical and social dimensions, is long overdue. In fact, as so many pivotal figures pass away, I cannot fathom why there has not been a rush to collect primary source material, let alone to interpret it. The lack of this activity spurred the creation of the Video Archive of Electroacoustic Music and gave the collecting of oral histories urgency when my wife and I started in the 1990s. Much as we both care about this work, we are no longer able to collect these oral histories, yet this work is increasingly important today. It was alarming to us at that time that no one had captured the stories of electroacoustic music’s pioneering composers and engineers.  Though aware of the great work being done by Vivian Perlis at Yale, we knew that no one had yet filmed the stories of figures such as Bebe Barron (composer of experimental electronic film scores and collaborator with John Cage, Earle Brown and others) who was very ill. Neither was there much about the founders of any of the first studios in the USA, including the Columbia-Princeton Electronic Music Center, San Francisco Tape Music Center, University of Illinois studio, and Bell Laboratories. We rushed to capture what we could, given scant resources and the many other demands on our time. We were certain then that whole careers could be made mining these materials if only someone could preserve the stories. And today there remain untapped opportunities to do critical archival work, to interpret the stories, and to study the music itself.

Engineer Bill McGinnis in the electronic music studio in his home.

Bill McGinnis, the first engineer of the San Francisco Tape Music Center (before Don Buchla came in). Photo taken in 1998 when we interviewed him in his home studio in San Francisco.

Twenty years after we began our collecting, electroacoustic music is still essentially unclaimed territory—especially outside of its popular music dimension. It seems ironic that, in a time when we fetishize even the most mundane activities and record them with our phones, there is still so little effort made to capture and interpret this pivotal transformation of music.  There have been a few modest bright spots, including the conference on the late electroacoustic works of Luigi Nono at Tufts that I was pleased to participate in this past March. Alas, this seems to be the exception, which leads me to revive my speculations on some of the forces that have delayed the development of a subspecialty in electroacoustic musicology. I write this with the hope that things may be poised for a change and that some of you will take up the challenge.

Composers and musicologists still do not communicate with each other very much or very well.

Why then has this work been slow to start, and what might change the situation? For one thing, composers and musicologists still do not communicate with each other very much or very well. This is true even at my own educational institution, where relations between programs are excellent and we actually all like one another.  I contend that there is still too much of a separation, often due to historical animosity and unfortunate, longstanding battles over turf.  These are destructive and if we do not put them to rest and start doing more to think as one profession, we will continue to fade into irrelevancy. I do not object to the study of popular music, but I do not subscribe to an idea, gaining in currency, that turning to the study of popular genres is the path to relevancy in music education. Instead, I propose that a more robust conversation about musical ideas and the creation of new work, the kind of conversation found in all of the other arts, is essential. Such a profession-wide conversation is currently lacking. I mention this larger issue here because there are lots of dimensions to electroacoustic music that make it fertile ground for engendering a lively and productive scholarly argument of the kind we desperately need.

Another factor to consider is the technology of electroacoustic music, which presents both opportunity and obstacle.  On one hand, the novelty inherent in the constant flow of new music hardware and software products is attractive to a broad segment of the technology-worshiping public.  The energy of invention around this marketplace is alluring, lending a pseudo-scientific legitimacy that other music does not enjoy. Old technology—say, a violin—becomes invisible to us, and this means that instead of being diverted by the instrument itself, we are more likely to engage with the music. The violin is very little changed over its several hundred years and has ceased to be a novelty, so we are able to ask instead: what new musical ideas can this instrument express? In fact, fewer people than ever study the music itself, which is part of a larger problem, but this also explains something about why we are stuck in the starting gate with electroacoustic music.

My relationship to cars is a pretty good analogy to how I’ve worked with synthesizers: They look shiny, sexy, and inviting at first, but once I drive one a little, it becomes just a way to get from point A to point B—at least until something goes wrong.

The obsession with the devices of electroacoustic music is as much a problem for composers as it is for scholars. Music by definition is highly abstract, and thinking in music is hard, even more so when there are frequently no scores to consult.  It is much easier to become immersed in the features of some new “toy.” The non-abstract, concrete aspects of music hardware and software make these much easier to relate to than music itself.

I confess that the allure of the first synthesizers in the 1960s was one of the things that drew me into composing electroacoustic music. At the time, I was in high school; I had been composing for a couple of years and quite accidentally was lent an ARP Odyssey synthesizer. I had heard Mort Subotnick’s Silver Apples of the Moon and was frustrated not to be able to get similar results from this keyboard-reliant minisynth.  Over the subsequent years, I was introduced to many other analog synthesizers from Moog, Buchla (the one used by Subotnick), Serge, EML, and others, and each time I found I was working against rather than with their differing architectures. I really wanted the machine to become almost invisible and to allow me to make the music I was hearing. My relationship to cars is a pretty good analogy:  they look shiny, sexy, and inviting at first, but once I drive one a little, it becomes just a way to get from point A to point B—at least until something goes wrong.

Still, as a steward of one of the remaining Buchla 100 systems, I have an acute awareness that the history of the technology is also in need of attention. There are many stories that need to be told through this lens: How is it that the Buchla and the Moog are fundamentally similar, yet so different? What are the relationships among the musical approaches composers take and the idiosyncrasies of particular technology? I am ultimately, however, much more interested in the music that this technology allows us to create, and I imagine that at least some musicologist colleagues would be, too. But if there is little work being done on the history of the instruments, there is even less addressing the music itself. For perspective, consider again how a historical change in our relationship to the instruments is part of this mix. It would seem odd to point out that there is a much larger body of work on the piano music of Mozart than on the evolution of the piano, yet in electroacoustic music, this balance is reversed. All of the energy in the room is constantly sucked up by the vast and ever-expanding array of new products, and so little is left to consider what is being achieved musically with these tools. I think we all recognize this phenomenon, but that recognition has not changed anything much in the forty years I have been in the field.

Closeup image of an old patch-cord synthesizer

Electroacoustic musicology, far from being a narrow subspecialty, could develop as a broad range of investigative possibilities, many organically interdisciplinary in nature.  The relationship to the history of science and technology is perhaps the most obvious, but there are many other linkages.  And as much as I argue here for this work as a branch of so-called “art music,” the connections to the world of popular music represent the richest pathway between the academy and the larger public.  Having collected the oral histories, I often feel too that one especially ripe approach would be ethnographic.

It would seem odd to point out that there is a much larger body of work on the piano music of Mozart than on the evolution of the piano, yet in electroacoustic music, this balance is reversed.

As I mentioned in my previous post, electroacoustic music inhabits a network of communities, some organized around institutions and others around compositional approaches or particular technologies.  If one adds the recognition that there is a large body of music without notated scores, the similarity to the world of the ethnomusicologist is inescapable.  Ethnographic work, though, is only one possibility of many.  The field is wide open and accessible on multiple levels to any ambitious and proactive scholar who is willing to eschew the conservative canon in favor of a somewhat more recent and—arguably—field-changing history.