
I Came Here With Nothing: 21st-Century Paths in Music Education

A woman and a man at a mixing desk

I applied to the San Francisco Conservatory of Music’s Technology and Applied Composition (TAC) program as a very lost graphic design major transferring from the University of San Francisco. While I didn’t come from a strict classical background, I was a multi-instrumentalist, a songwriter, and a passionate electronic music maker. Thankfully, SFCM saw a spark of originality and talent in my work, and I was accepted into one of the newest and most groundbreaking music technology programs a conservatory has ever seen.

Why should a conservatory have a technology composition program? Music is always evolving, and conservatories should be the place where we can explore and pioneer this evolution. For 98 years, what you’d generally hear at SFCM was music from the 17th to the 20th centuries radiating out of its practice rooms and into the hallways. The game changed with the birth of TAC, founded by Executive Director MaryClare Brzytwa. With programs like TAC, students get the best of both worlds—the opportunity to study foundational classical music as well as the ability to explore cutting-edge music technology. Coming from a graphic design background, using the latest tools and software had always been second nature to me. What I deeply hungered for was access to the knowledge of the classical music world, something most other music technology programs severely lacked. Every conservatory should make a technology and applied composition program an integral part of its bachelor’s degree offerings.

The TAC program is designed to push students out of their comfort zones and into the realm of growth, experience, and discovery. From building musical applications in Max/MSP to cranking out a fully orchestrated score for a five-minute film cue, students are constantly challenged to learn more and perform better.

As freshmen, we are thrown immediately into the fire as one of our first tasks is to compose a one-minute suite for a fictitious video game. Only the work of a select few will be chosen to be recorded by professional musicians at Sony PlayStation’s studio in San Mateo. This assignment is especially challenging because most of us are scrambling to learn how to use a digital audio workstation for the first time.

It’s a fun ride to be on: we walk into the classroom not knowing what to expect, then suddenly have a ton of opportunities thrown at us. Last semester we worked with a local filmmaker on his documentary. The filmmaker himself had contacted the school to see if any composition students were interested in scoring his film, as the film had a heavy opera theme throughout. This became one of TAC’s main semester projects and, like every project, we poured everything we had into it. I ended up scoring 20 minutes of music for this project, and it became my first real professional credit.

Recently, TAC students have been collaborating with the San Francisco Art Institute (SFAI) on scoring a student film. The students at SFAI are creating a modern Macbeth, and our scores will be recorded at Skywalker Ranch. Opportunities like these are continually laid out on the table for us, and we are responsible for grasping as many as we can and then running with them.

These are just some of the experiences we get, on top of having the traditional conservatory experience of studying music theory, orchestration, and harmony, as well as history and other general education classes.

If I were to offer advice to other conservatories based on my experiences in the TAC program, I’d say there are a few core ideas that will need to be implemented:

Recording Department
It’s important to teach students the most foundational recording studio skills. Recording, mixing, and editing know-how are invaluable for a lifelong career in music. Every live performance at the conservatory needs to be recorded, so teaching and allowing students to run this process achieves both goals at the same time. The recording department at SFCM is now almost completely student-run, with the head audio engineer being third-year TAC student Seira McCarthy.

Special Projects
Find outside projects that people in the community are passionate about and see if they would like to collaborate, or have students put together a seasonal concert of live electronic and avant-garde music. It’s important to be open and allow students to explore their creativity in a program like this. There is no straight path to achieving music nirvana, so it’s important to have an accepting and open program for students to show off their performance and composition skills.

A Badass Executive Director
My director and mentor MaryClare Brzytwa has pushed me further than I ever dreamed of going, and has never let me get away with cutting corners. As an example, when I applied for a job as a department assistant, she assigned me to be a recording engineer: during my freshman year, when other students were setting up audio equipment, she had seen me hiding because I didn’t know how to do it. She wanted to be sure I would do the work I had run away from. And it worked. By forcing me to break down those barriers, I’m no longer afraid of setting up mics or running Pro Tools sessions.

Danielle Ferrari at the mixing desk

Photo by Carlin Ma

After four years of TAC, when you look around the conservatory, classical music is still radiating out of practice rooms. But now there are also students playing synthesizers, building MIDI instruments, and collaborating as creators and musicians.

As that lost graphic design major, TAC was the gold mine I found that allowed me to learn everything I needed and more to be happy as a music creator.

Your Computer is Listening. Are you?

Six years ago, I wrote an article stemming from a lively discussion that I had with a few friends on the work of David Cope’s artificial intelligence compositional program “Emily Howell.” My intention had been twofold: to approach the philosophical challenges of our society accepting music originating from an extra-human source, and to discuss whether “Emily Howell’s work” met the definition of a composed piece—or whether extraordinary human effort was involved in the final product.

This inquiry will take a very different approach.

We begin with the hypothesis that, due to the rate of growth and development of A.I. technology, #resistanceisfutile. Which is to say that computer-composed music is here, and the conversation needs to change.

Need proof? When I wrote the article six years ago, there were roughly two or three A.I. programs, mostly theoretical and almost exclusively confined to academic institutions. In the two weeks between agreeing to write this article and sitting down to flesh out my notes, a new program using Google’s A.I. open platform was released. In the week and a half between writing my first draft and coming back for serious revisions, another A.I. music system was publicly announced with venture capital funding of $4 million. The speed at which new technology in this field is developed and released is staggering, and we can no longer discuss whether it might change the musical landscape, but rather how we will adapt to it.

Advances in the capacity and ease of use in digitally based media have fundamentally changed the ways that creators and producers interact with audiences and each other and—in many ways—they have bridged some of the gaps between “classical” and “popular” music.

Ted Hearne introduced me to the beauty and artistic possibilities of Auto-Tune in The Source (digital processing design by Philip White). After seeing a demo of Kamala Sankaram’s virtual reality operetta The Parksville Murders, I programmed a session at OPERA America’s New Works Forum, bringing in the composer, producers (Opera on Tap), and director (Carri Ann Shim Sham) to introduce their work to presenters and producers of opera from around the country. While still a beta product, it led to a serious discussion about the capacity of new technologies to engage audiences outside of a more traditional performance space.

The Transactional Relationship 

In the tech world, A.I. is equated with the Holy Grail, “poised to reinvent computing itself.” It will not just automate processes, but continually improve upon itself, freeing the programmer and the consumer from constantly working out idiosyncrasies or bugs. It is already a part of our daily lives—including Google’s search function, Siri, and fraud detection on credit cards. This kind of intuitive learning will be essential to the mass acceptance of self-driving cars, which will save tens of thousands of lives annually.

So why is A.I. composition not the next great innovation to revolutionize the music industry? Let’s return to the “Prostitute Metaphor” from my original article. To summarize, I argued that emotional interactions are based on a perceived understanding of shared reality, and if one side is disingenuous or misrepresenting the situation, the entire interaction has changed ex post facto. The value we give to art is mutable.

A.I.’s potential to replace human function has become a recurring theme in our culture. In the last 18 months, Westworld and Humans have each challenged their viewers to ask how comfortable they are with autonomous, human-esque machines (while Lars and the Real Girl explores the artificial constructs of relationships with people who may or may not ever have lived).

I’ll conclude this section with a point about how we want to feel a connection to people that move us, as partners and as musicians. Can A.I. do this? Should A.I. do this? And (as a segue to the next section), what does it mean when the thing that affects us—the perfectly created partner, the song or symphony that hits you a certain way—can be endlessly replicated?

Audiences are interested in a relationship with the artist, living or dead, to the point that the composer’s “brand” determines the majority of the value of the work (commissioning fees, recording deals, royalty percentages, etc.), and the “pre-discovery” works of famous creators have been sought after as important links to the creation of the magnum opus.

Supply and Demand

What can we learn about product and consumption (supply and demand) as we relate this back to composition in the 21st century?

If you don’t know JukeDeck, it’s worth checking out. It was the focal point of Alex Marshall’s January 22, 2017, New York Times article “From Jingles to Pop Hits, A.I. Is Music to Some Ears.” Start with the interface:

 Two JukeDeck screenshots--the first shows the following list of genres: piano, folk, rock, ambient, cinematic, pop, chillout, corporate, drum and bass, and synth pop; and the second shows the following list of moods: uplifting, melancholic, dark, angry, sparse, meditative, sci-fi, action, emotive, easy listening, tech, aggressive, and tropical

Doesn’t it seem like an earlier version of Spotify?

Two smartphone screenshots from an earlier version of Spotify, the first one features an album called Swagger with a shuffle play option and a list of four of the songs: "Ain't No Rest for the Wicked," "Beat The Devil's Tattoo," "No Good," and "Wicked Ones"; the second one features an album called Punk Unleashed with a shuffle play option and a list of five of the songs: "Limelight," "Near to the Wild Heart of Life," "Buddy," "Not Happy," and "Sixes and Sevens."

“Spotify is a new way of listening to music.” This was their catchphrase (see the Wayback Machine, 6/15/11). They dropped that phrase once it became the primary way that people consume music. The curation can be taken out of the consumer’s hands—not only is it easier, but it is also smarter. The consumer should feel worldlier for learning about new groups and hearing new music.

The problem, at least in practice, is that this was not the outcome. The same songs keep coming up, and with prepackaged playlists for “gym,” “study,” “dim the lights,” etc., the listener does not need to engage as the music becomes a background soundtrack instead of a product to focus on.

My contention is not that the quality of music decreased, but that the changing consumption method devalues each moment of recorded sound. The immense quantity of music now available makes the pool larger, and thus each individual item (song/track/work) inherently has less value.

We can’t close the Pandora’s box that Spotify opened, so it is important to focus on how consumption is changing.

A.I. Composition Commercial Pioneers

Returning to JukeDeck: what exactly are they doing and how does it compare to our old model of Emily Howell?

Emily Howell was limited (as of 2011) to exporting melodic, harmonic, and rhythmic ideas, requiring someone to ultimately render them playable by musicians. JukeDeck is more of a full-stack service. The company has looked at monetization and determined that creating digital-instrument outputs in lieu of any notated music offers the immediate gratification that audiences are increasingly looking for.

I encourage you to take a look at the program and see how it creates music in different genres. Through my own exploration of JukeDeck, I felt that the final product was something between cliché spa music and your grandparents’ attempt at dubstep, yet JukeDeck is signing on major clients (the Times article mentions Coca-Cola). While a composer might argue that the music lacks any artistic merit, at least one company with a large marketing budget has determined that it gets more value out of this than from a living composer (acknowledging that a composer will most likely charge more than $21.99 for a lump-sum royalty buyout). So in this situation, the ease of use and cost outweigh the creative input.

The other company mentioned in the article that hopes to (eventually) monetize A.I. composition is Flow Machines, funded by the European Research Council (ERC) and coordinated by François Pachet (Sony CSL Paris – UPMC).

Flow Machines is remarkably different. Instead of creating a finished product, its intention is to be a musical contributor, generating ideas that others will then expand upon and make their own. Pachet told the Times, “Most people working on A.I. have focused on classical music, but I’ve always been convinced that composing a short, catchy melody is probably the most difficult task.” His intention seems to be to draw on the current pop music model of multiple collaborators/producers offering input on a song that often will be performed by a third party.

While that may be true, I think that the core concept might be closer to “classical music” than he thinks.

While studying at École D’Arts Americaines de Fontainebleau, I took classes in the pedagogy of Nadia Boulanger. Each week would focus on the composition of a different canonical composer. We would study each composer’s tendencies, idiosyncrasies, and quirks through a series of pieces, and were then required to write something in their style. The intention was to internalize what made them unique and inform some of our own writing, if only through expanding our musical language. As Stravinsky said, “Lesser artists borrow, greater artists steal.”

What makes Flow Machines or JukeDeck (or Emily Howell?) different from Boulanger’s methodology? Idiosyncrasies. Each student took something different from that class. They would remember, internalize, and reflect different aspects of what was taught. The intention was never to compose the next Beethoven sonata or Mahler symphony, but to allow for the opportunity to incorporate the compositional tools and techniques into a palette as the student developed. While JukeDeck excludes the human component entirely, Flow Machines removes the learning process that is fundamental to the development of a composer. In creating a shortcut for the origination of new, yet ultimately derivative ideas or idioms, composers may become less capable of making those decisions themselves. The long-term effect could be a generation of composers who cannot create – only expand upon an existing idea.

What would happen if two A.I. programs analyzed the same ten pieces with their unique neural networks and were asked to export a composite? Their output would be different, but likely more closely related than if the same were asked of two human composers. As a follow up, if the same ten pieces were run through the same program on the same day, would they export the same product? What about a week later, after the programs had internalized other materials and connections in their neural networks?

What makes Flow Machines unique is the acknowledgment of its limitations. It is the Trojan Horse of A.I. music. It argues that it won’t replace composition, but will help facilitate it with big-data strategies. If we were discussing any non-arts industry, it might be championed as a “disruptive innovator.” Yet this becomes a slippery slope. Once we accept that a program can provide an artistic contribution instead of facilitating the production of an existing work, the precedent has been set. At what point might presenters begin to hire arrangers and editors in lieu of composers?

No one can effectively predict whether systems like Flow Machines will be used by classical composers to supplement their own creativity. Both recording and computer notation programs changed the way that composers compose and engage – each offering accessibility as a trade-off for some other technical element of composition.

I could foresee a future when multiple famous “collaborators” might input a series of musical ideas or suggestions into a program (e.g., a playlist of favorite works), and the musically literate person becomes an editor or copyist, working in the background to make it cohesive. Does that sound far-fetched? Imagine the potential for a #SupremeCourtSymphony or #DenzelWashingtonSoundtrack. They could come on stage after the performance and discuss their “musical influences” as one might expect from any post-premiere talkback.

So what does it all mean?

In the short term, the people who make their living creating the work that is already uncredited and replicable by these programs may be in a difficult situation.

A classically trained composer who writes for standard classical outlets (symphony, opera, chamber music, etc.) will not be disadvantaged any further than they already are. Since Beethoven’s death in 1827 and the deification/canonization/historical reflection that followed, living composers have been in constant competition with their non-living counterparts, and even occasionally with their own earlier works. It will (almost) always be less expensive to perform something known than to take the risk to invest in something new. There may be situations where A.I.-composed music is ultimately used in lieu of a contemporary human creation, if only because the cost is more closely comparable to utilization of existing work, but I suspect that the priorities of audiences will not change quite as quickly in situations where music is considered a form of art.

Show me the money

I focused on JukeDeck and Flow Machines over the many other contributors to this field because they are the two with the greatest potential for monetization. (Google’s Magenta is a free-form “let’s make something great together” venture only possible with the funding of Google’s parent company Alphabet behind it, and various other smaller programs are working off of this open-source system.)

Monetization is the key question when considering a future outside of academia. The supposed threat of A.I. music is that it might eliminate the (compensated) roles that composers play in the 21st century, and the counter-perspective asks how to create more paying work for these artists.

Whether it is a performing arts organization looking to strengthen its bottom line or composers trying to support themselves through their work, acknowledging shifts in consumer priorities is essential to ensuring long-term success. We need to consider that many consumers are seeking a specific kind of experience in both their recorded and live performance that has diverged more in the last 15 years than in the preceding 50.

It is cliché, but we need more disruptive innovations in the field. Until we reach the singularity, A.I. systems will always be aggregators, culling vast quantities of existing data but limited in their ability to create anything fundamentally new.

Some of the most successful examples of projects that have tried to break out of the confines of how we traditionally perceive performance (in no particular order):

  • Hopscotch, with a group of six composers, featuring multiple storylines presented in segments via limousines, developed and produced by The Industry.
  • Ghosts of Crosstown, a site-specific collaboration between six composers, focusing on the rise and fall of an urban center, developed and produced by Opera Memphis.
  • As previously mentioned, Ted Hearne’s The Source, a searing work about Chelsea Manning and her WikiLeaks contributions, with a compiled libretto by Mark Doten. Developed and produced by Beth Morrison Projects (obligatory disclaimer – I worked on this show).
  • David Lang’s anatomy theater—an immersive experience (at the L.A. premiere, the audience ate sausages while a woman was hanged and dissected)—attempting to delve not just into a historical game of grotesque theater, but also to recreate the mass hysteria that surrounded it (the sheer number of people who were “unsettled” by this work seems to be an accomplishment – and once again, while I did not fully develop this show, I was a part of the initial planning at Beth Morrison Projects).

Craft is not enough. Quoting Debussy, “Works of art make rules but rules do not make works of art.” As we enter this brave new world of man versus machine, competing for revenue derived not just from brawn but increasingly from intellect, composers will ultimately be confronted—either directly or indirectly—with the need to validate their creations as something beyond that of an aggregate.

I am optimistic about the recent trend of deep discussion about who our audiences are and how we can engage them more thoroughly. My sincere hope is that we can continue to move the field forward, embracing technologies that allow creators to grow and develop new work, while finding ways to contextualize the truly magnificent history that extends back to the origins of polyphony. While I am doubtful about the reality of computer origination of ideas upending the system, I’m confident that we can learn from these technological innovations and their incorporation in our lives to understand the changes that need to be made to secure the role of contemporary classical music in the 21st century.

 

The Electric Heat of Creativity—Remembering Donald Buchla (1937-2016)

In 1962-63, in a vacated Elizabethan house on Russian Hill, Ramon Sender and I joined our equipment to make a shared studio; this became The San Francisco Tape Music Center. After the house burned down in 1963-64, we moved to Divisadero Street, where we spent days and nights wiring a patch bay console we had gotten from the AT&T graveyard; we needed to tie all our equipment together. A bit like Dr. Frankenstein, we were putting all kinds of discarded equipment together to create an instrument that would allow for the composer to be a “studio artist.” The device or devices could not have an interface that was associated with any traditional music making, especially not a black and white keyboard. It would have to have the capacity to control all the musical dimensions as equal partners. We thought, we talked, and we read. Our first imagined system came from what we knew about graphic synthesis.

We knew the work of Norman McLaren and were aware of many of the other experiments taking place. Drawing seemed like an intriguing approach to a personalized music maker.

We outlined the following process:

• Create a pattern of holes on a flat round disc.
• Spin the disc with a variable speed motor.
• Pass light through the rotating disc.
• Convert the resulting light pattern to sound by placing a photo cell to receive the light pattern passing through the disc.

A pattern could be made for each sound; the size of the pattern would represent amplitude, the shape would determine timbre, and the speed of rotation would produce some kind of frequency change.
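
To make that mapping concrete in modern terms, here is a minimal sketch of the idea simulated digitally (an illustrative Python/NumPy example, not the hardware being imagined here): the shape of the pattern around the disc fixes the harmonic content, its overall size scales the amplitude, and the rotation speed sets the pitch.

    # Illustrative simulation of the light-synth idea (not the original hardware).
    import numpy as np
    import wave

    SAMPLE_RATE = 44100

    def disc_pattern(angle):
        """Light transmission (0..1) at a given angle around the disc.
        A few 'holes' modeled as bumps give the shape, i.e. the timbre."""
        return 0.5 + 0.3 * np.cos(angle) + 0.15 * np.cos(3 * angle) + 0.05 * np.cos(7 * angle)

    def render(rotation_hz, size, seconds=2.0):
        """Spin the disc at rotation_hz (pitch); 'size' scales the pattern (amplitude)."""
        t = np.arange(int(seconds * SAMPLE_RATE)) / SAMPLE_RATE
        angle = 2 * np.pi * rotation_hz * t          # angular position of the light beam
        return size * (disc_pattern(angle) - 0.5)    # photocell output, centered at zero

    if __name__ == "__main__":
        audio = np.concatenate([render(110, 0.8), render(220, 0.5), render(330, 0.3)])
        pcm = (np.clip(audio, -1, 1) * 32767).astype(np.int16)
        with wave.open("light_synth_sketch.wav", "wb") as f:
            f.setnchannels(1)
            f.setsampwidth(2)
            f.setframerate(SAMPLE_RATE)
            f.writeframes(pcm.tobytes())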

Our soldering skills, starting from zero, quickly grew to modest, but, alas, never to excellent nor even good.  Where and how were we to start?

Instead of continuing our Frankensteinian kludge approach to hunting and gathering in electronic graveyards, we decided to put an ad in the San Francisco Chronicle to find someone who might be interested in building our device. The first person to answer the ad seemed to have some sort of eye dysfunction; his eyes were focusing on two different and constantly changing places at the same time. Unaware that the ’60s drug scene had begun, we described what we were after.  The fellow seemed interested and, after waiting several days without another answer to the ad, he was the only one we had. So we gave him a key to the studio and told him to go ahead and see what he could do.  On arriving at the studio the next morning, we were horrified to discover that he had cut a bunch of wires in the back of our newly wired patch-bay. We took back our key and began the tedious task of putting the patch bay back together.

A short time later, another engineer appeared. He seemed quite normal; that is, he appeared to see and hear in appropriate ways.  We presented our idea and, quietly, he said, “Yeah, I can do it.”

The next day he arrived with a machine: a paper disc attached to a little rotary motor mounted on a board, a couple of batteries, a flashlight, a small loudspeaker, and a small amount of circuitry. He turned it on, and it made a nasty sound!!! Amazed and thrilled, we declared, “It works!” And he dryly responded, “Yeah, but this isn’t the way to do it.” That was the arrival of Don Buchla!

A handwritten drawing on a piece of graph paper showing the foot pedal, motor, lamp, lens, and audio output for the light synth.

Don Buchla’s original drawing of the Light Synth

After this, Ramon went on to work on upgrading the studio and I immersed myself in the task of understanding what Don was talking about. He introduced me to the world of voltage control. An entirely new vocabulary was suddenly entering my ears. The only vocabulary I had for musical sounds was a handful of Italian words—piano, forte, crescendo. This new vocabulary consisted of words from outer space—transistors, resistors, capacitors, diodes, and integrated circuits. Don was “the man who fell to earth.”

I bought the Navy manual on electronics, but, after starting it, realized that I had to take a step back and get some basics, and bought the Navy manual on electricity! The bedtime reading was intense. After a few weeks of the basics of electricity, I plunged into the manual on electronics. After a bit of scanning and surface exploration, I found myself struggling with that new vocabulary of transistors and diodes. It took a lot of aspirin [for the nightly headaches] and searching to be able to follow what Don was explaining. The long nights morphed from struggling with the steepest learning curve I have ever experienced into a dialog between myself and Don in an attempt to conceptualize a new composer’s creative tool. With Don’s help, even with only a rudimentary understanding of electronics, it was possible to see the power of control voltage as shaping the energy of musical gestures. What had traditionally been the result of the fingers on the keyboard, the arm energizing the bow that energizes the strings of a violin, or the air blown into a flute could now be understood as metaphors for gesturally shaped control voltages. It was elegant; it appeared to satisfy the characteristics of all musical dimensions: pitch, amplitude, timbre, timing, and—a brand new dimension—spatial positioning.
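
In today’s terms, the idea translates almost directly into a few lines of code. A minimal sketch (illustrative Python/NumPy, with invented names) of a single gesture-shaped control curve steering pitch, timbre, and amplitude at once:

    # Illustrative sketch: one gesture-shaped control curve drives several
    # musical dimensions at once, the way a bow stroke or a breath does.
    import numpy as np

    SAMPLE_RATE = 44100

    def gesture(seconds):
        """A slow swell-and-decay curve, standing in for a control voltage."""
        t = np.linspace(0.0, 1.0, int(seconds * SAMPLE_RATE))
        return np.sin(np.pi * t) ** 2                # rises, peaks, falls

    def voice(seconds=3.0):
        cv = gesture(seconds)
        freq = 110.0 * (1.0 + 0.5 * cv)              # the same curve bends pitch...
        phase = 2 * np.pi * np.cumsum(freq) / SAMPLE_RATE
        brightness = 0.5 * cv                        # ...opens up the timbre...
        tone = np.sin(phase) + brightness * np.sin(2 * phase)
        return cv * tone                             # ...and shapes the amplitude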

The idea, suddenly, and without aspirin, was coming into focus. We worked regularly for almost a year; I would describe the functionality I thought was necessary to do something musically, and Don would look up as if looking at the ceiling or somewhere within himself, return his gaze to me, and say, “I made a module that does that.” Was he saying he had made it some time ago and had just remembered it, or had he designed it at that moment? I never knew, and when I would ask him, he would always just smile, that coy half smile of his. But, somehow, within a few days he would bring me a drawing of the new module.

With every meeting a new module would arrive, and eventually he designed an entire analogue computer-like music making machine. It was all on paper. We would need $500 for him to make it. With the help of the Rockefeller Foundation we finally were able to pay for the parts. Don never built a prototype; he just arrived one day with the entire machine. At the bottom of every module it read, “San Francisco Tape Music Center, Inc.” I was upset that we were suddenly in “business.” “OK,” he said, and the next ones he made were labeled “Buchla and Associates,” and the now historic Buchla 100 was born.

Within a few weeks of the delivery and public unveiling of the 100, I moved to New York and installed a large 100 system. There I worked (played, really) with it continuously, creating Silver Apples of the Moon and The Wild Bull. My relationship with Don remained constant, but now over the phone. I kept finding things that we had not considered or just plain got wrong.

He would say, “I just made a new envelope generator with a pulse out at the end of the envelope.”

“Great, how soon can I get it?”

Don: “I have already mailed it to you.”

I would call him and say, “Could you make a module that would allow me to convert my voice into a control voltage?”

Long pause. No doubt he was looking at the ceiling.

“I have made that.”

A week later one of the first envelope followers arrived and, in addition to knobs, I could use my voice and finger pressure to control all the dimensions of music.
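
An envelope follower is, at heart, a simple idea: rectify a signal and smooth it so that only its loudness contour remains, and that slow curve can then drive anything. A minimal sketch of the principle (illustrative Python/NumPy, not Buchla’s circuit):

    # Illustrative envelope-follower sketch (not Buchla's circuit): rectify the
    # input, then low-pass filter it, leaving a slow curve that tracks loudness.
    import numpy as np

    def envelope_follower(signal, sample_rate=44100, smoothing_hz=10.0):
        """Return a control curve that follows the amplitude of `signal`."""
        rectified = np.abs(signal)                       # full-wave rectification
        alpha = 1.0 - np.exp(-2 * np.pi * smoothing_hz / sample_rate)
        env = np.zeros(len(rectified))
        level = 0.0
        for i, x in enumerate(rectified):                # one-pole low-pass smoothing
            level += alpha * (x - level)
            env[i] = level
        return env                                       # drive pitch, amplitude, filter...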

The Buchla 200

Within a few years of back-and-forth additions to the 100, he went on to make what many of us consider to be the Stradivarius of analog machines: the Buchla 200.

Don had an unusual genius for the creation of interfaces. In adapting our hands to a rectangular piano keyboard, it takes the first several years just to master the art of using the thumb. That made sense as the evolution of music and musical instruments morphed together through time. But with the explosion of electronic technology in the second half of the 20th century, we no longer needed to be bound to the music or the instruments of those traditions; yet the piano keyboard was brought forward and became the instrument for the new technology. As McLuhan said, “We look at the present through a rear-view mirror. We march backwards into the future.” Don’s answer to a new interface for a new music was Thunder, his ergonomic interface.

A photo of Buchla's Thunder interface

Buchla’s Thunder

It went on and on, but for me, the three most revolutionary interfaces were: Thunder; Lightning, a baton that could be waved in the air producing a “joystick” X, Y array of voltages; and his “Kinesthetic Multi-Dimensional Input Port Module with Motion Sensing Rings,” which produced X, Y, Z control voltages.

After a lifetime of designing and building, Don went back to his masterpiece of the 1970s to create the 200e. He had been eager for me to have the 200e, but I resisted. In 2010, my opera Jacob’s Room was going to get its premiere in Austria. The sponsors wanted me to make a short European tour of solo performances with the video artist Lillevan, who had done the live video for the opera, but starting in the late ’80s I had made a transition to computers and had stopped performing in public. So I decided to give the 200e a spin and flew out to spend a few days with Don as he showed me how it worked and I picked the modules I thought I could use.

I took it with me to Europe to tour with Lillevan and made a patch in the hotel the first night I arrived. With no real time to work with the 200e, I had decided to work mostly with sound files on my Mac using Ableton, but maybe still use the 200e in some way. Our first concert was at the Modern Art Museum in Liechtenstein. I did the whole performance with only the Mac, but at the end of the concert the audience kept cheering. “How about an encore?” Lillevan said. An encore?! I had never done an encore; what could it be? I looked at the 200e, made a few adjustments to the patch, and said, “Let’s do it.” It was as if it was 1966 in my studio on Bleecker Street. I turned knobs and even repatched as I played. I was ecstatic; the audience was ecstatic.

Don and I had remained close for 53 years, although for about 30 of those years the friendship was without the virtual electric connection we had in the early days. But from that performance in Liechtenstein until his recent death, we shared again that wonderful electric heat of creativity.

A group of six musicians playing very bizarre looking instruments.

Don’s “popcorn” performance at the 1980 Festival of the Bourges International Institute of Electroacoustic Music.

Ramon and I had brought Don home from the hospital after his cancer treatment and I began to fly out regularly to be with him and his wife Nannick Bonnel. He was determined to live as fully as he could for as long as he could.  Early in his recovery when he was home from the hospital but still not able to walk, I remember calling him from the airport to tell him that I was on my way up to see him in their hilltop house in Berkeley.  My greeting was, as always, the idiotic “How are you doing?”

“Great!” he said, “I just got back from a walk.”

“A walk?” I said in true amazement, “My God! I would have trouble walking on that incredibly steep hill. Where did you walk?”

“Oh,” he answered proudly, “I walked from the bedroom to the kitchen; it didn’t take so long!”

We both laughed.

Over the next several years, with a lot of help from Nannick, he got himself around. Every time I performed in the Bay Area, I stayed with them and he came to performances.  He also did his own performances from time to time and traveled up until the last few years.

Donald Buchla (left) and Morton Subotnick at NAMM.

The picture above was taken at the NAMM show, when he was signing on with a company that would be selling his equipment. He continued to create complex, imaginative modules, the last one being the “Polyphonic Rhythm Generator”: a set of interconnected rings of sequenced pulses which was his homage to the great North Indian tala tradition! He just kept going.

Toward the end he began using a walker. When I came out to visit, he wanted to go to the Berkeley Museum. It was a very rainy few days, but he walked, one tiny slow step at a time, to their car. Nannick got his wheelchair into the rear. I tried to help him into the car, but he waved me off fiercely as he pulled himself slowly from the walker into the front seat. We went to the museum and I wheeled him from painting to painting, bringing him as close as possible to each one so that he could see it.

After that, we decided to go to a movie! Michael Moore’s Where to Invade Next. It was at a theatre in Berkeley, on the 2nd floor, without an elevator. While Nannick found a place to park the car, Don and I walked up those stairs, one painful step after another. In the theater we had to go up again to get a seat. He sat forward, staring at the screen, trying to comprehend and see. Afterward, we began the painful steps down.

I saw him again a few months later; Joan was able to make the trip with me. Don was clearly deteriorating rapidly. He wanted to go out to a restaurant where we could see the sunset over San Francisco.  We went, even more painfully, wheelchair to walker, step by step.

He finally gave in to the big sleep. Rest well, my dear friend!

Morton Subotnick and Donald Buchla

One of the last photos of Morton Subotnick and Donald Buchla together.