Pamela Z: Expanding Our Imaginations

The only thing almost as exciting as watching and listening to a multimedia performance by Pamela Z is hearing her talk about it, which she does for almost an hour in a fascinating conversation that spans a wide range of topics: creating and performing during the pandemic; her artistic beginnings as a singer-songwriter and her transition into experimental composition; a difficult encounter with TSA agents; dealing with constant changes in technology; and her obsession with old telephones.

Although Pamela is a composer who is mostly focused on creating new sounds by new means, it was extremely interesting to hear her describe her occasional frustration with the ephemerality of so many of the devices on which we all have become so dependent.

At one point she exclaims, “There are a lot of people in the world who all they care about is changing things. They don’t get attached to something. They really think everything is oh so yesterday, so six months ago. That is not compatible in a way with becoming virtuosic on anything. Building an instrument that you can become virtuosic on without having to pause every few minutes to update it and then change all of the things that no longer work with the update and blah, blah, blah, blah, blah. I always jokingly say: ‘Wouldn’t it be weird if you were a violinist or a cellist or something and every six months somebody would show up at your house and take your cello away from you and say, Here, this is the new cello, and you need to learn to play this one. And by the way, we’ve made the fretboard a little narrower because you don’t need all that extra space?’”

And yet, those technological changes, and sometimes the strange glitches and disconnects that result from them, have informed so much of this San Francisco Bay Area-based maverick’s creative work. Attention, a work she created for the Del Sol String Quartet, will forever change your perception of telephones ringing. Baggage Allowance will make you rethink your next airplane trip when it is safe to take one again. She hopes that Times3, her sonic installation created for the 2021 Prototype Festival to accompany a walk around Times Square (now extended through April 30, 2021), “cues people into the thought of expanding their imagination to past, present, and future of whatever place they’re in.”

Pamela Z’s quest for new solutions, which create problems that in turn become an integral part of the resultant work, also informs her brand-new Ink, a piece that includes some surreal reflections on how musicians interact with notated scores and that will be premiered by the San Francisco-based chorus Volti in an online performance on April 24.

Aside from learning more about all of these one-of-a-kind compositions, it’s a delight to hear all of her stories since, as anyone who has experienced her work already knows, she is an extremely engaging storyteller. Our time together over Zoom was a non-stop adventure except for, perhaps appropriately, the occasional internet connection hiccup, most of which we were able to fix in post-production editing.

New Music USA · SoundLives — Pamela Z: Expanding Our Imaginations
Frank J. Oteri in conversation with Pamela Z
March 16, 2021—4:00pm EDT via a Zoom conference call between San Francisco, CA and New York, NY
Additional voiceovers by Brigid Pierce and Jonathan Stone; audio editing by Anthony Nieves

Hearing a Person—Remembering Ruth Anderson (1928-2019)

A woman sitting cross-legged in a yellow armchair

The last music Ruth Anderson heard before she died was Judith Blegen singing “Kein’ Musik ist ja nicht auf Erden…,” with which Mahler’s Fourth Symphony ends, a song which had been a touchstone for us for many years and which I had been unable to find for weeks among our record collection despite just about reorganizing the collection in my search. Then I looked among the contemporary LPs, and there it was, next to David Behrman’s beautiful On the Other Ocean—the wrong part of the century but very much the right context.

Listening to it now, again, I find I am immersed in her text piece Sound Portrait: Hearing a Person, which she created first for her students, then for others, in 1973:

“In a darkened room, find a comfortable, totally relaxed position.

Listen to a piece of music.

Think of someone you love.
Do not think of the music.

When you find your thought of the person is gone, bring it
back gently.
Let other thoughts come, and then let them go.

As the music progresses, let the thought image of the person
be central.
Be unaware of the music.

Let anything which happens happen, except keep easily bringing
back, letting, the person image occupy you.

You will find explanations of the person—the music will explain the person.

The music ideas, counterpoint, extensions, contrasts, repetitions, variants,
Rhythms, textures, qualities of sound, all music elements are of the person,

sometimes very literally, sometimes suggesting, sometimes exact, sometimes
understood, sometimes leading to understanding, sometimes verging on language, always primarily nonverbal, always a known sense, a coming of a known sense.

You will find after, an understanding of the person you did not have,
and a personal relationship to the music.
The music, too, will be known.”

For her, it was always that movement of the Mahler 4th.

Ruth Anderson, composer, teacher, flutist, and orchestrator, died peacefully at Calvary Hospital, New York, on November 29, 2019, aged 91. She was Professor Emerita of Hunter College, CUNY, where she was the director of the Hunter College Electronic Music Studio from 1968 to 1979, the first operative electronic music studio in the CUNY system and one of the first in the USA to be founded and directed by a woman.

Our earliest meeting was in that studio, where I was to substitute for her while she went on sabbatical. Ruth had first asked Pauline Oliveros to run the studio for her, but Pauline too was on sabbatical and suggested that Ruth contact me—I was then still living in England and eager to come over here. I went to the studio to meet her, nervous and a bit apprehensive, not having worked hands-on with voltage-controlled equipment, a key part of the studio’s design then. Ruth turned up in white shorts, a blue shirt, and sneakers with a hole in the right toe. I relaxed.

It was a very good studio, beautifully equipped, with a dedicated technician, Jan Hall, who designed new gear and kept everything running very smoothly. Ruth loved tech, tools of any kind really (hence the house we built in Montana, her birthplace), and the studio was her home. It was home to some of the students also, who brought in a couch, a lamp. Jan brought in his slippers.

She was an inspired teacher. Of her studio seminar, she wrote (in a letter to me in 1973), “I give lots of facts, and am very demanding—that they know, that they have self-respect, that they only DO—and sometimes I see it’s not yet the time to DO and they will… and leave them alone, or help when I see they need it… and from students who have been with me before, begin to understand this is not a course, but some equipment and a safe place to be. As soon as the students so-called begin to know each other, to hear a great variety of music, or also experience acoustics, experience each other through sound, like the skin-resistance oscillators, they learn and do, or dream.”

Born in Kalispell, Montana on March 21, 1928, Ruth received a BA in flute, subsequently studying privately with John Wummer and Jean-Pierre Rampal, and then an MA in composition from the University of Washington. At UW she took courses with the poet Theodore Roethke, and she later came to know many other poets including Jean Garrigue, May Swenson (with whom she did Pregnant Dream), W.S. Merwin, and Louise Bogan (whose haunting poem “Little Lobelia” is the source of Ruth’s I come out of your sleep). She was one of the first four women admitted to the Princeton University Graduate School program in composition, where she received a fellowship. Two Fulbright Scholarships took her to Paris (1958–60), where she studied composition privately with Darius Milhaud and Nadia Boulanger, who encouraged Ruth to also study Gregorian chant at the Abbey of Solesmes.

Ruth’s was a multi-faceted career. She toured as a flutist with the Totenberg Instrumental Ensemble from 1951 to 1958 and was principal flutist with the Boston Pops (1957–58). As a freelance instrumental and choral arranger, she also worked as an orchestrator for NBC-TV and for the Lincoln Center Theater productions of Annie Get Your Gun with Ethel Merman (1966) and Show Boat.

Ruth Anderson. Photo by Manny Albam.

Her establishment of the Hunter College Electronic Music Studio and her involvement with the downtown music scene brought a burst of creative activity when her studies of psychoacoustics, Zen Buddhism, and her teaching intersected, sparking a number of works for tape which are truly innovative. SUM: State of the Union Message from 1973 is a hilarious collage, a send-up of both Nixon and TV commercials, its duration being exactly that of Nixon’s State of the Union Message that year and saying, as she put it, “as little, and by extension, as much as the president, and using the one medium we all share.” Ruth was a superb analog editor, and I recall coming into the Hunter studio at some point while she was working on SUM and finding a fishing line strung across the room with an amazing number of pieces of tape, some very small, delicately suspended by splicing tape, and trying to figure out how she could keep them all straight. She knew what each was, being both persistent and precise, a perfectionist.

She wrote of her work, “It has evolved from an understanding of sound as energy which affects one’s state of being. [These are] pieces intended to further wholeness of self and unity with others.” Hearing a Person is a beautiful, practical example of this key intention, as is the classic tape work Points (1973-4), created entirely from sine tones at a time when few others were interested in so seemingly basic a waveform. But to Ruth, sines are “the basic building blocks of all sound… a sine tone is a single frequency focal point of high energy… Separate sine waves enter at five-second intervals, accumulate in a long veil on one channel while another set of sines is introduced on the second channel and continuing this way with the veils of sound shifting in and out of each other at a very low dynamic level. The high focus of energy of a sine wave, the outsize breathing interval of five-second entries, the calm of the veils and timeless quality are some of the elements I can isolate which have made this a healing piece, one that consistently generates in listeners a sense of repose and quiet energy.”
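
Anderson’s description is, in effect, a simple additive recipe, and the layering scheme she outlines can be sketched in a few lines of code. The following Python sketch is purely illustrative: the pitches, fade times, and one-minute duration are placeholder assumptions of mine and bear no relation to the actual materials or proportions of Points.

```python
import numpy as np
from scipy.io import wavfile

SR = 44100            # sample rate (Hz)
ENTRY_GAP = 5.0       # seconds between sine entries, per Anderson's description
DUR = 60.0            # length of this short sketch; the actual piece is far longer

# Hypothetical pitches chosen only for illustration; the real frequencies of
# Points are not documented here.
freqs_left = [220.0, 277.2, 330.0, 415.3]
freqs_right = [246.9, 311.1, 370.0, 466.2]

t = np.arange(int(SR * DUR)) / SR
mix = np.zeros((len(t), 2))   # stereo buffer: column 0 = left, column 1 = right

def add_veil(channel, freqs, offset):
    """Let sine tones enter one by one and accumulate into a sustained 'veil'."""
    for i, f in enumerate(freqs):
        start = offset + i * ENTRY_GAP
        env = np.clip((t - start) / 2.0, 0.0, 1.0)   # gentle 2-second fade-in
        mix[:, channel] += env * np.sin(2 * np.pi * f * t)

add_veil(0, freqs_left, 0.0)                            # first veil on one channel
add_veil(1, freqs_right, len(freqs_left) * ENTRY_GAP)   # second set enters later, other channel

mix *= 0.05 / np.max(np.abs(mix))   # keep the overall level very low, as in the piece
wavfile.write("points_sketch.wav", SR, (mix * 32767).astype(np.int16))
```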

Jan Hall has said of Ruth, “She was brave,” and certainly to undertake such a challenge as a piece consisting entirely of sine tones at, necessarily, a soft level is to work entirely exposed. Indeed, she wrote to Charles Amirkhanian in 1977, “Sines are extremely difficult to record, and then it’s difficult to maintain a copy of sines without collecting burbles on tape—this is a strong reason for wanting the piece on a record where burbles don’t collect.” Points was released in 1977 on New Music for Electronic and Recorded Media, the seminal LP produced by Charles Amirkhanian for the 1750 Arch Records label, which was actually the first collection of electronic music by women. She was a bit thrown when that quote appeared in the sleeve notes, but I’ve always loved it. With that LP and two releases on Max Schubel’s label Opus One, her work began to be known in the US and internationally.

In addition to Points, a rich array of works in diverse media appeared during the ‘70s and early ‘80s, such as I come out of your sleep, a four-channel tape piece from 1979; the text pieces Silent Sound (1978), the Sound Portraits, and Greetings from the Right Hemisphere (1979); and interactive biofeedback pieces such as Centering (1979) for four performers wearing galvanic skin resistance oscillators through which they respond involuntarily to a dancer who is, in turn, responding to their sounds. Ruth’s delight in play comes through in the sonic installations and games she created for exhibitions, collaborating with Jan Hall and Bob Bielecki. For example, in Tuneable Hopscotch (1975), the individual squares generate pitches as you land on them, but someone else is at controls on the wall, changing the pitches even as you select them. In Time and Tempo (1984), the viewer’s biofeedback controls the movement of a clock’s hands, which slow as you approach stillness. Together we also created a number of Hearing Studies for the Introduction to Music and other courses at Hunter. Grants from the National Endowment for the Arts, Creative Artists Public Service (CAPS), the Martha Baird Rockefeller Foundation, and the Alice M. Ditson Fund, along with residencies at Yaddo and the MacDowell Colony, supported much of this work.

To her great pleasure, Here, a solo vinyl album of five pieces, two never before released plus three works mentioned above—SUM, Points, and I come out of your sleep—will be released on the Arc Light Editions label in 2020. The test pressings came shortly before she went into the hospital and we were able to listen to them together. She was on morphine and deeply tired, but as she listened, she started moving her hands to re-shape a phrase, murmuring how she would like to have changed this or that sound, composing right to the end. It was wonderful to see, and she was delighted with the quality of the pressings. Thank you, Jennifer Lucy Allan, visionary producer of Arc Light Editions.

Annea Lockwood and Ruth Anderson

And then there is Flathead Lake, where Ruth spent much of her childhood and where we built our house. She needed a home of her own there, and I needed to be among mountains at least part of the year, so in 1975, barely a year after we’d begun to live together (we moved fast!), we bought about two acres on the lake, then added forty feet to it after I assumed that the boundary survey stakes must have marked the halfway point of the property and started felling dead trees on the adjacent land; we bought that too. We mapped out a floor plan—Ruth did all the designing meticulously—and started to build. About that detailed planning, I remember going to the local lumber yard and saying we wanted to buy exactly four hundred and sixty-four six-inch nails. Tim smiled, estimated that at so many pounds, and we were then his customers for the next fourteen or so years.

The house that Ruth Anderson and Annea Lockwood built together on Flathead Lake in Montana.

We built wall and floor frames, siding, and flooring. We got help from her family and from professionals (who would often just take off to go elk hunting mid-project) with the really tough things: roofs (there was more than one; the place kept growing like a plant), plumbing, and electricity. Tired of building by then, we finally turned the garage over to the pros. Ruth finished the interior tongue-and-groove walls for the bedroom just in time for Christmas in ’89, having spent the whole year there while I went back and forth, teaching. But what astounds and moves me very much to think back on it is that she did that while enduring chronic fatigue, the cause of her year off!

Strong of spirit, self-reliant, a brilliant mind, Ruth loved structures of all kinds. She was also tenacious, very funny with a dry wit, and delighted in the absurd. Her last piece was Furnishing the Garden (2002–approximately 2012), installed at our New York home. Discarded chairs and an old stripped sofa frame began to appear, leaning against a tree here, planted among wild roses there, and best of all, a child’s tiny wooden chair wedged part-way up a large tulip tree. It makes a handy launching pad for local wildlife.

A detail of Anderson’s outdoor installation, “Furnishing the Garden.”

I am so grateful to have been able to share her life for forty-six years. We were finally able to marry legally in 2005 (in Canada), something Pauline and Ione had done and recommended, not long before.

Ruth’s archive will be placed in the Music Division of The New York Public Library.

GREETINGS FROM THE RIGHT HEMISPHERE

“You are invited to a party.

I will furnish wine, cheese, bread and a SPLENDID opportunity for all who come to KNOW one another.

As you enter, leave your left hemisphere – all your words – speaking, reading, writing – at the door.

Let yourself be known, and know others, through your right hemisphere – YOUR SENSES – through all forms of non-verbal communication.

Have a lovely time.”
–Ruth Anderson

Ruth Anderson at Flathead Lake

Daria Semegen: So Many Awareness Pixels Going On at the Same Time

Featuring video presentations and photography (unless otherwise noted)
by Molly Sheridan
Conversation transcribed by Julia Lu

Back in the fall of 2000, a 1976 LP with the curious title Electronic Music Winners got something of a second life when it was revealed that Radiohead had sampled two of the tracks from it in the song “Idioteque” from their then just-released album Kid A—specifically Arthur Kreiger’s Short Piece and Paul Lansky’s earliest computer composition mild und leise, both of which were on the album’s second side. The story goes that Jonny Greenwood found the LP in a used record bin while on tour in the United States. While the news sent folks scrambling to look for the then long out-of-print record (which can still fetch a fair sum on sites like eBay and Discogs, presumably because of the Radiohead connection), it actually got me to buy Kid A (and soon thereafter everything else in Radiohead’s discography) because I was a big fan of that Electronic Music Winners LP, having bought my copy for a dollar at a Salvation Army store when I was in high school. But it also made me wonder what might have happened had Greenwood sampled material from the album’s first side, specifically a piece with a rather formalistic name, Electronic Composition No. 1, by Daria Semegen, which had always been my favorite track on it. That piece, along with a longer electronic piece called Arc, which I had fallen in love with when I listened to it on LP at the Columbia University music library as an undergrad, had long been the only music I had ever heard by Daria Semegen, but I always wanted to hear more.

Then, a little over a month ago, I attended the BMI Student Composer Awards ceremony and reception. It’s always a great evening, not just because it’s an opportunity to meet all the new awardees, but also because, from time to time, people who have won the award in previous years show up to honor their new award compatriots. When BMI Foundation President Deirdre Chadwick announced from the podium that one of the previous winners in attendance that evening was Daria Semegen, my jaw dropped. Semegen, it turns out, won the award twice, in 1967 and 1969, for compositions that had nothing to do with electronic music—a duo for flute and piano and two song cycles (for soprano and baritone respectively, both scored for large chamber ensembles). So as soon as the ceremony ended, I rushed up to Semegen, whom I had never previously met, and told her what a fan I was of those two electronic pieces. And she said, “Well, if you ever want to see a real electronic music studio, come out and visit me at Stony Brook University.” She went on to describe some of the vintage synthesizers and oscillators there, as well as the splicing stations for reel-to-reel tapes, and I was transfixed.

Frustratingly, there isn’t a ton of detailed information about Daria Semegen either online or off-line. There has never been a commercially released recording (on LP or CD) devoted exclusively to her music, and the handful of pieces by her that appear on compilations are now mostly out of print. She doesn’t have her own website, and the page about her on Wikipedia is somewhat scant, as is the entry in the 1980 Grove Dictionary of American Music. But she does figure prominently in Women Composers and Music Technology in the United States: Crossing the Line (Routledge 2006) by Elizabeth Hinkle-Turner (whose 1991 D.M.A. dissertation for the University of Illinois at Urbana-Champaign was devoted to Semegen’s output), although not in most of the standard musicological literature on electronic music. And scores for nine of Semegen’s acoustic compositions—solo piano pieces as well as works for chamber ensembles (including the pieces for which she received her two BMI Student Composer Awards)—are available through the American Composers Alliance (who proved extremely helpful to me in preparing my talk with her). Other than that, there’s a short biography of her on the website for Stony Brook University, where she has taught since 1974.

But I got the sense after spending a fascinating afternoon chatting with her that the typical goalposts by which so many careerist composers measure their success do not really matter to her.  “Basically I share, but that is not my main drive,” she quipped toward the end of our talk.  “I don’t sit around and think about, ‘Hey, I’m going to be sharing this.’ You listen to Electronic Composition No. 1, and that gets pretty bizarre.  When I was making some of those sounds, I would say, ‘Whoa, this is really kicking it around here.  Gee, I wonder how an audience would react?’  But then I’d basically let them worry about it.  I’m not going to tell them what to do or how to react.  That’s not my job!”

I do, however, feel that part of my job in life is to call attention to people who have created extraordinary music and have insightful things to say about it, and Daria Semegen is certainly one of them. I wish we could have continued talking for several hours, and I look forward to revisiting her studio one of these days and learning more. But in the meantime, there’s a lot of information to process here.


Daria Semegen in conversation with Frank J. Oteri
In the Electronic Music Studio at Stony Brook University in Stony Brook, New York
June 14, 2018 at 1:00 p.m.
Video presentation by Molly Sheridan

Frank J. Oteri: I think you were five or six years old when you immigrated to the United States. Do you have any early memories from before you arrived here?

Daria Semegen:  Oh yeah, definitely.  I remember riding on a baby elephant at a zoo.  For me it was a really out-of-this-world experience, so I’ll never forget that.  I also remember running on a low wall and then falling on the right side of my face.  That’s something to remember, childhood accidents.  And then other traumas—my dad committed suicide in the refugee camp we were in. He was sick. He had appendicitis and peritonitis. They did not have enough antibiotics in those days, and so he possibly had an infection in the brain.  So that happened and that was a very bizarre experience because I also experienced the different way people were behaving, as well as different ways of dressing—the whole ceremonial thing with funerals.  Then for a year, I was wearing a black band on my left arm signifying a death.

FJO:  I know there are people who wear all black for a period of time after a death of someone significant in their lives, but I had never heard of wearing just a band.

DS:  It isn’t done these days, but I wore the band. I also generally had dark clothes.  I choose to wear dark clothes now for a variety of reasons, one of them being that I see the world outside myself as where I want to see colorful things.  It would be too much if I had to deal with managing a colorist wardrobe. I also relate in a different way to colors such as black and white. I even tell my students that one technique to understand something is to push away a lot of other things from one’s consciousness.  I go into a mode automatically that I taught myself, especially when I want to appreciate sonic things, any kind of sounds, which is really like starting from a blank sheet of paper.  So it’s a very important technique, especially when re-hearing sounds and appreciating them—meaning understanding them and feeling them intuitively or technically, depending on what purpose you set out to approach these sounds.  I do it as tabula rasa, which means an erased blackboard, a blank sheet of paper.  It’s a wonderful mode to be able to snap into without having other things crowding you. Your judgment can be fresher and you can really enjoy that experience or not, depending on what is going on with these sounds.  You can have a more authentic experience rather than being influenced all the time by everything, because we’re constantly being assaulted by so many other things going on.

“I think our audio and visual world, as well as our reactions to it, are coming from defense mechanisms.”

If you’re someone who’s sensitive in different ways, you will have so many awareness pixels going on at the same time that you have to manage the situation in order to have a really authentic, focused awareness of whatever you’re dealing with as an artist.  I think our audio and visual world, as well as our reactions to it, are coming from defense mechanisms.  Say I’m walking in the woods and I become more alert.  It’s because it’s unfamiliar; it’s really a kind of experimental environment if I don’t know this place or have any particular set expectations. The alertness is there in terms of the appreciation of different things.  Beauty and danger is this type of alertness that is there when I’m dealing with appreciating—meaning really experiencing and trying to understand—visual and audio art.

FJO:  It’s amazing how you arrived at such a deeply conceptualized approach to creating and experiencing other people’s creations all from telling the story of wearing a black arm band after your father died and then deciding to wear dark clothes throughout your life. That experience was clearly very significant for you.

DS:  That was just one landmark event.  I also have the experience of being someone who could not speak the language of the country I was in because in the home, we were speaking a different language.  Out there was either West Germany or America, these two different places.  So I had to observe, I had to listen and watch, in order to understand what was going on.  When I first went to an American school, I could not speak English. It’s a very different experience, and it involves listening and learning, especially kids being very curious about their world, their situation, and people’s expressions, their tone of voice, all of these things.  I would watch and listen much more than verbally communicate.

Daria Semegen in her electronic music studio at Stony Brook University.

FJO:  So what language did you grow up speaking at home?

DS:  I would speak Ukrainian.  I also can manage different Slavic languages, because it’s really a family of languages. During my Fulbright scholar year, I went to Poland specifically to be with Lutosławski, whose music I had heard at orchestral concerts as an undergrad at the Eastman School of Music.  We had Rochester Philharmonic performances, so I had heard his Trois Poèmes d’Henri Michaux and also Jeux Vénitiens and I was totally blown away with the sound, the complexity, and the expression, the nuances, and the subtleties; this is without having seen any of the scores.  Anyway, when I applied for a Fulbright, I actually passed Czechoslovakian language.  I had to be language approved and at the University of Rochester, they only had a Czechoslovak scholar, so I had to speak and read in that language that I had never read before, but later I had to really learn Polish.

FJO:  They had Czech, but they didn’t have Ukrainian or Russian?

DS:  Well, maybe they had Russian, but the person they had was a professor of Slavic languages.

FJO:  But of course, we’re jumping way ahead in your life story, so let’s go back to your speaking Ukrainian at home growing up. Aside from what you were saying about that language sounding totally different than the German and English you were exposed to in the outside world, it also looks different since Ukrainian doesn’t use the same alphabet.

DS:  Of course not.  It’s Cyrillic.

FJO:  So you had a double whammy. In addition to not being able to understand the language, you couldn’t even read the letters to get a sense of how it sounded.

DS:  That’s right.

FJO:  Perhaps in some way this made you susceptible to being more open to and empathic towards completely new ways to experience sound.

“We are every day and every moment, a different listener and a different viewer.”

DS:  Empathic, to me, means becoming more connected, and appreciating, to me, means understanding more in different situations and at different times, because we are every day and every moment, a different listener and a different viewer, a different observer.  So I never have a fixed idea of things being only a particular way.  They could be a little bit different the next time I experience them.  And I’m not upset by that fact.  I’m intrigued by the fact that these things can be different and varied. There is an interesting variance when I re-appreciate different art objects, such as paintings or videos.  They’re all different, unique experiences within the general aspect of supposedly knowing these things.

FJO:  Of course with a painting or a video, although your perception of it will vary each time you experience it, the work is an unchanging set object.

DS:  That’s correct.

FJO:  But with music, you have this extra layer. If it’s a performed piece of music, a piece of music that is interpreted by musicians other than yourself (and even if it’s yourself), it’s never going to be exactly the same twice.  Of course, that becomes a different issue in fixed media electronic music, and navigating between these two realms has been a duality in your creative life.  If you’re writing pieces for musicians who are going to interpret it, maybe they’re going to hold a note a little longer, play it a little faster, put some sort of element of themselves in it. This is very different from something that exists in an invariable form.  Yet, as you point out, perception will always be different when you come back to it.

DS:  Yes, it is.  I don’t place particularly one value or another on fixed media or performed works.  I think these are different experiences in terms of how they’re being thought about, conceived, and what they are as finished products.  Fixed media to me is really having the opportunity to get things the way you’d like them without compromising, which is what I usually end up having to do if I’m dealing with live performance.

But in my pieces with instruments and electronic sounds, I don’t want the electronic sounds to sound like an accompaniment to a live instrument.  That just doesn’t work for me.  I tend to compose the electronic part as a piece almost on its own.  And I like to have the instrumentalists, if possible, improvise, so their creativity is involved with the fixed media.  It’s a kind of response and also a combination of complement and contrast to different degrees as the player chooses to interpret.

FJO:  It’s also really highlighting the fixed and non-fixed natures of these two different realms. The piece involves fixed media which, by its nature, always stays the same, but the player brings something new to it each time.

DS:  Exactly.

FJO:  To bring this conversation back to your childhood for just a little while longer, I’m curious about your earliest musical memory.

DS:  One memory I have is my reaction to very early piano lessons when I was seven or eight.

FJO:  But you were already in the States by then.

DS:  Yes.

It seems like a great distance from Daria Semegen’s earliest music training, piano lessons, to this vintage patch cord analog synthesizer in her electronic music studio at Stony Brook University. But after learning about her earliest sonic memory, it all makes sense.

FJO:  Do you have any music-related memories from before you came to America?

DS:  Well, I know that people sang.  I usually would be around adults having meetings because my father was involved in journalism.  He was a lawyer originally, and also he taught school as my mother did.  But we were in a refugee camp.  We lived in three or four of these places, being moved to different ones.  So that was an interesting disruption and interruption, and meant traveling around.  I thought that this was a normal way of being, and it was interesting for me.  Perhaps adult refugee people would be miserable, but I think for kids, this was a very interesting oddball experience.

I remember one particular thing—a guy moving a ladder around, and the sounds that that made.  What he was doing was improvising electrical wiring connections between different living spaces, made-up rooms whose walls were Army blankets.  You’d have a large hall in a building, which had these separate dwelling places, whose walls were blankets.  And this guy was making some kind of shielding.  Later on I figured out what he was doing, but it was fascinating to me. I guess he was connecting light bulbs with each other.  Suddenly these lights went on.  I thought it was fantastic.  It was really fun to see that, and see how somebody can make something like this as really an improv.  Of course, when you’re a kid, you don’t know what that is, or what steps it takes, but this thing is happening, and later on, I understood better what was going on.  And I remembered the interesting noise-scraping sounds on the floor with this ladder.  He would move it along several times.  I was totally fascinated with this odd experience.

FJO:  I love that this earliest musical memory is of something really experimental and unusual. And one could claim that it pre-destined you for a life devoted to exploring sonic phenomena.

DS:  Well, that’s called interpretation.

FJO:  Of course, but those kinds of early memories are the things that stick and have a lasting impact.  It’s like the famous story of La Monte Young as a little boy in Idaho listening, fascinated, to the drones created by electrical power transformers, which eventually left an indelible mark on all of his music.

DS:  There’s a wonderful scene in a Satyajit Ray movie with electrical wires humming on these electronic grids in the middle of vast fields with a train that’s coming in the distance, and then you hear the sound, and it’s kind of a Doppler effect. The Doppler effect is also in the Pierre Schaeffer piece, the train piece.

FJO:  One of the earliest examples of musique concrète.

DS:  Yes.

FJO:  Which was actually created right around the time you were born, entering into an existence when that way of making music became a possibility.

DS:  Mhmm.

A reel-to-reel containing recordings of several classic 1950s tape music compositions by Luciano Berio, Henri Pousseur, and Bruno Maderna is one of the treasures on display in Daria Semegen’s electronic music studio at Stony Brook University.

FJO:  But of course at that time you wouldn’t have known about any of that. And soon after that, you left Europe and came to the United States where you mentioned something about piano lessons when you were seven or eight.

DS:  Oh, that was very interesting to me.  My mother decided to get a piano as basically a piece of furniture, because this was in vogue—everyone should have a piano and a kid who’s practicing or can play tunes.  So I was taking piano lessons. It was particularly convenient for me because we were living on the third floor of an apartment building and the piano teacher was on the first floor.  So I merely had to slide down a banister a couple of times to get to my piano lessons.

“My piano teacher wasn’t always thrilled when I’d come in with a little sketch of something.”

As a child, you’re learning the different note names, coordinating with your body positions, and learning what’s called technique.  But I found out that music had something to do with paper.  It was like a drawing to me.  And so right away I wanted to try this out. All this stuff started coming together, but I was always being drawn away from the idea of only focusing on the instrument itself and instead was really starting to focus on how these things looked.  So it was a visual experience, as well as connecting it with sounds.  The possibility of varying these things became very interesting to me.  So I would re-write some of these tunes that I was learning, and since this was taking up some of my time, my piano teacher wasn’t always thrilled when I’d come in with a little sketch of something, because that was not considered the goal of what I was supposed to be paying attention to as a piano student, which was practicing and perfecting, not necessarily varying something and being creative with it.

FJO:  So your teacher never remarked that maybe you’re a composer.

DS:  Oh, hell no.

FJO:  So when did the concept of being a composer enter your consciousness?

DS:  I gradually did these things on my own.  And after listening to recordings, I became interested in what this music looked like.  So I ordered some pocket scores so I could see the music notation.  I had a few different scores—Haydn symphonies, Mozart and Beethoven string quartets.  I decided to make a little project for myself.  I asked the music shop person to order me music paper so I could copy the scores, which were tiny, into a bigger size.  I don’t know why I did that.  I just wanted to be with this stuff as notation and to try to understand it in some way.  It was a kind of experiment, but I actually learned a hell of a lot from that in terms of how the material is organized, which instruments are playing when and why, and how this expression is being managed by the composer.  When I was a kid, I was not necessarily having all of these descriptions or vocabulary come to me right away, but I was getting intense impressions and non-verbalized insights that built a kind of intuitive base for appreciating, meaning knowing different things, and also comparing.

FJO:  And at some point, it morphed into composition.

DS:  Oh, this was going on all the time because I would be writing small pieces for piano and then I wrote a couple of string quartet movements.  These things were done on my own, including two orchestral movements.  Then, when I was a freshman in high school, I asked to study theory and so I began studying with a school music supervisor in my area who was also an Episcopal organist and a choir director, and he was the conductor of the civic symphony.  So they played these orchestral movements.  At one of their concerts, they played Respighi and a Beethoven symphony. I went to their rehearsals.  It was great for me to do that because I had listened to these pieces on recordings.  I was given a set of Toscanini Beethoven Symphonies with the NBC Symphony.  My stepfather was associated with a radio station.  He was given a Saturday show, a Ukrainian program on which I sometimes would recite poetry, by the way.  That’s another thing.  But this was interesting, going to rehearsals and learning the musical culture—what the etiquette is and how things are managed.  Observing is a fascinating experience for me.

A group of “benders” on top of a canister in Daria Semegen’s electronic music studio at Stony Brook University is one of the many personal “intuitive” quirks scattered around the space.

FJO:  It’s extraordinary that you had an experience hearing music you wrote for orchestra live so early in your life.

DS:  Very.

FJO:  It’s interesting to me because orchestral music doesn’t seem to have remained a focus for you as a composer, even though you had this early experience.

DS:  Well, it was.  Later on, as an undergrad at Eastman, I had written a three-movement, large orchestral piece called Triptych for Orchestra that had won an award. They had a symposium and that piece was played there.  I also have a piece for orchestra and baritone voice. I have different ensemble pieces, as well as instrumental music.  My first experiences were really with instruments.  They weren’t with electronics, except for a record player.  This came later, from my tendency to experiment and search for new things.  It’s basically curiosity, being inspired by certain things and asking a few questions—what can I do with this?  And how? I would not worry about everything being perfect right away.  It’s possible to not have certain kinds of boundaries, which makes it then possible to think and experience beyond the current stage of experiences that I’ve had.  And that was very interesting for me.

When I was a [college] sophomore, I think, because this was 1965, one of my upperclassmen was Bob Ludwig—the audio engineer who’s got a zillion Grammys by now.  He was given a couple of Ampex portable machines by, I think, his uncle.  He became interested in doing some kind of project, so he came to me and he said, “Hey, you have any ideas for what we can do with these machines?  I want to record et cetera, and try to learn how to edit.”  So I said, “Okay, let me get an idea.” I put together a kind of spatial notation piece for six instruments and then had the idea where these instruments would play and we would record.  So we recorded.  Then we would mess around with the tape, meaning editing.  We took splicings and changed the tape in different ways.  We’re talking really basic, non-studio work, really very experimental, approaching some kind of tape music.  We literally glued together the tape part, then had the instruments play live from the score with the tape parts.  The piece is called Six Plus.

FJO:  I’ve read about it.  I’ve never heard it, but I know of its existence.  When you did that, were you aware of Pierre Schaeffer and all the musique concrète pieces?

“I had never heard the words ‘electronic music.’ There were no courses in this thing. This was basically experimenting from scratch.”

DS:  No.  I had never heard the words “electronic music.”  There were no courses in this thing.  This was basically experimenting from scratch.  What happened after that is some students in town who were from the Rochester Institute of Technology, RIT, were in photography and other visual arts.  And by the way, the term visual arts didn’t exist.  These students had this big studio where they’re taking slides and sorting them.  And they were going to have a show with five projectors.  They explained to me what they had in mind.  They wanted a composer who would work on a sound score.  So I said okay. I didn’t think of it as writing the music from scratch.  I said, “What do you have here as sounds that you want me to work with?”  And they said, “Well, here’s a bunch of records.”  I made a soundtrack for them using tape, but it was really ad-libbing and trying things out, seeing what would happen, and basically learning how to organize and create an expression or expressions that would respond to and be compatible, or not, in different ways with what was going on in their visual expression.

FJO: I’ve looked at some of your earliest instrumental scores. There’s one that probably pre-dates this.  You have a series of pieces that you compiled in the 1960s, but I imagine they’re significantly earlier than that, the Five Early Pieces for solo piano.

DS:  I must have been a sophomore or a junior at Eastman.  They were written after I got bored with the kinds of student piano literature that existed. One of my friends, who was teaching at the Hochstein Music School in town, went away for a few weeks and said, “Will you take my piano students?” My instrument was piano, so I said okay. I went over there and decided that if I’m going to do this for a bunch of weeks, and they have this boring music to play as etudes, let me write my own stuff.  That’s where the Five Early Pieces come from.  So these were not virtuoso pieces.

FJO:  Sure.  But don’t be too dismissive of them. When Peter Schickele gave the keynote address at the Chamber Music America conference some years back, he lamented that so few composers exploring new compositional techniques wrote easier pieces for young players.

DS:  Oh yeah?

FJO:  Such pieces are really a way to introduce these techniques to musicians. And I think your pieces do that.  One of them is in septuple time, and the last one is actually a 12-tone piece.

DS:  That’s right. It’s a 12-tone piece for kids. And it’s actually not unpleasant to play it. It has lyricism in it as well.  I have another piece that I wrote, which is something like a 17-minute long movement for piano and violin, called Music for Violin and Piano.  And I used a few phrases from that 12-tone study for piano students.  I used these phrases toward the end of the piece, because they are very lyrical and they fit into that place in the piece.

FJO:  The other thing I was wondering about when I was looking at the score for that piece is how you were first exposed to 12-tone music?  Was it when you were at Eastman?

DS: It was actually a dilemma for me at Tanglewood.  I was a student there.  So many things happened to me the summer after freshman year and also during sophomore year.  But I think the summer after freshman year, I ended up at Tanglewood with people around like Aaron Copland, Donald Martino, Gunther Schuller, and Elliott Carter who was giving talks on his Double Concerto, which was presented there.  So this was very fascinating for me, and it was like hitting a wall of this suddenly very complex, chromatic music.  So I had little conversations with Schuller asking about 12-tone music.  I was interested in knowing why people chose this way of expression.  And he really couldn’t answer this.  That is stuff that I had to discover on my own.  I had spent two solid years in that phenomenal Sibley Music Library at Eastman, looking and listening to every 20th-century piece I could get a hold of, and in this way learned a lot merely from observation.  Much more than from taking a class, let’s say. At that point, it really was the most valuable thing I could have done because that gradually revealed to me lots of details. Then I started trying these things out on my own.  More than reading a book about something, or being told about something, it was really experiencing a certain reality and coming to different realizations.

FJO:  Of course the other thing that happened when you were at Eastman is you studied with Samuel Adler, who is one of the most significant authorities on orchestration.  Did that have any impact?

DS:  He was a very perceptive person, and he would let me do whatever I was going to do because I was doing things with intent.  Although what I was writing was not at all in his stylistic practice, and I was doing experimentation, he could appreciate my situation let’s say.  I was just doing what I was doing.  So it was not a matter of teaching so much as a matter of suggestion here and there, which I think is very valuable—to have someone stand out of the way, and then make different comments here and there, and in some cases, little practical things such as maybe not two double basses, but three double basses because of certain pitch situations where possibly they may be perceived out of tune or out of focus.  So these little tidbits, and later on, I think in junior year, I got into the graduate orchestration class which I wanted to be in, which I knew would be a lot better than being in instrumentation class.

I tried a couple of weeks of that with Aldo Provenzano, who was visiting from Juilliard and was a chain-smoking, Henry Mancini/101 Strings-type arranger and composer.  I really did not need what I considered baby shit level.  By that time I had written a couple of orchestra movements and was into a long orchestral piece with everything in it, including contrabassoon.  It was ridiculous.  I wanted to show that I could get into an orchestration class.  How absurd.  I was told, well, you’ve got to write an orchestral piece, so I said, “Okay, I’ll do it.”  This is the way they want it, here it is.  Boom!

In that class with grads, there was a lab orchestra every week.  That was the deal.  So I could do my experimental organizing and arranging and trying out orchestral effects with an orchestra there.  I had volunteers in my dorm: my dorm friends copied the parts for my one-minute, two-minute, or three-minute musical experiments every week.

FJO:  Wow.

DS:  I think that kind of thing was also fabulous because Sam Adler was conducting and we were getting recordings.  It would be taped.

FJO:  One of the most ambitious pieces you wrote during that time was Lieder auf der Flucht, a song cycle for soprano and eight instruments with German poetry, which actually got you your first BMI Student Composer Award back in 1967.

DS:  Yeah.  That was one.  And then there was another one.  I guess there were several.  And I think for that, I had sent in a couple.  In a way, I was very inexperienced as a freshman and sophomore. I remember meeting Sam Adler for a lesson and he asked me, “Are you applying to BMI?  Are you applying to something else?”  And I said, “I don’t know.  What is that?”  And he said, “Don’t come back for a lesson next week unless you’ve mailed out this stuff.”  He was behaving in a very annoying way—like, “What do you mean you didn’t apply?”  So I said, “Well, I didn’t apply because the piece I wrote was kind of short.  Don’t they want really long pieces?”  He said, “Come on.  Send this thing in.” So I said, “Maybe I’ll send in two pieces.”

FJO: You must have also sent in your flute and piano duo, Quattro, because that piece was also acknowledged that year.

“I remember meeting Sam Adler for a lesson and he asked me, ‘Are you applying to BMI?’
And I said, ‘I don’t know.  What is that?’ And he said, ‘Don’t come back for a lesson next week unless you’ve mailed out this stuff.’”

DS:  Oh, there’s also that.  They were written very near each other, and they’re cool pieces.  So I sent that in with the song cycle, but my idea was, “Unless a piece is 15 minutes long, why bother sending it?”—which is absurd.

FJO:  A striking aspect of that song cycle, which was your first really substantive vocal piece, is that you chose to set another language instead of English.  You set German.

DS:  That’s because what English feels like for me is not the same as German.  And it’s not the same as French.  I also have songs in French.  I have songs in English, but English for me has a different intensity than much more intense languages like German, French, and Polish for instance.  Once I learned Polish, I was reading Polish poetry; it’s no wonder that they have Nobelists who were poets, because it’s really splendid stuff.  Even the sound of it, the nuances are way beyond anything in English.  It may be not nice to say that, but in terms of the comparative experience, the nuances in different languages vary.  And the details in languages such as Polish give it much more intensity for me.  It’s a kind of increased thesaurus of feelings that are available in certain languages.

FJO:  So you’re fresh out of Eastman.  You’ve got your degree.  You’re off to Poland on a Fulbright after you passed the Czech language exam, which, weirdly as we discussed, was what allowed you to go to Poland to study with Lutosławski.  You’d mentioned that you really admired his Jeux Vénitiens. There’s a piece of yours called Jeux des Quatres; it’s a chamber piece, but it’s scored for a very odd instrumental combination.

DS:  Clarinet, cello, piano, trombone.

FJO:  It’s full of extended techniques, and there’s indeterminacy in it as well.  One of the pages looks like a sort of mobile.

DS:  This highly drunk page of gestures, which is one of the movements.

FJO: There definitely seems to be a relationship to Lutosławski in that piece. Was that one of the pieces you were working on when you were studying with him?

DS:  No, that was the year after.  At that point I was at Yale as a grad student. So this was a piece which was different from Lutosławski, but having experienced Lutosławski was part of that difference.  I was pushing in this other way, this other direction of writing scores.  The visual object was very important to getting the sonic result—why that notation and why not some other notation?

FJO:  But then when you got to Yale, you became really deeply immersed in electronic music.

DS:  I started becoming involved. There was a very rudimentary studio.  There was this oddball situation of an ARP synthesizer.  A big whopper ARP, so not a mini, and I thought it was awfully clumsy in different ways.  I also found the ARP sonically too homogenous, believe it or not.  I didn’t describe it to myself that way because it was too early for me to realize why I wasn’t terribly attracted to this thing, which seemed to be able to do all sorts of things.  But in terms of expression, it was not the most exciting thing.  What I did have in there were a few oscillators, which could be made to fool around with each other in different ways.  And these were not part of a synthesizer; they were just an analog, mini-studio.  Then there were filters and noise generators, and also a spring reverb that was kind of strange sounding.  But the strangeness actually enhanced some of my loop sounds that I was making from scratch, from splicing.  I did a lot of splicing. It gave me enough time to hear things and be able to get an up-close-and-personal experience with the sounds because there was absolutely no automaticity involved.

Daria Semegen still manipulates sound using reel-to-reel tapes and teaches her students to do so as well. And, after more than a half century of experience, she has well-tested preferences for what the best angles are for splicing. This is why there is a slab of wood positioned on this reel-to-reel machine (one of four at her Electronic Music Studio at Stony Brook University) which enables people working with the tape deck to precisely eyeball where to position a razor blade in order to make a splice on the tape.

FJO:  You already mentioned doing this piece at Eastman for the six players and messing around with tape, so that was really your first electro-acoustic piece.

DS:  Yeah.

FJO: But it was quite a transition to go from being a composer who was involved with playing the piano and working with an orchestra, and who did one experimental piece involving manipulating taped sounds, to being somebody who knows how to cut and splice tape and mess around with oscillators.  These seem like totally different skill sets.

DS:  Oh, I think it’s part of putting things together.  I also realize, for instance, my Electronic Composition No. 1, which is pretty elaborate, is named that because it is an experience that to me was in a way parallel to visual art composition.  It was much more about construction, rather than a conventional musical composition that stays in its own little prescribed world of what’s expected and what will be subject to a lot of rules and regulations.

“Tonal music is a world that has definite, prescribed behaviors, etiquettes, and expectations.”

We know, for instance, that tonal music is a world that has definite, prescribed behaviors, etiquettes, and expectations. On the whiteboard is [the phrase] “tonal obligations,” which is what I advise students to be aware of when they are writing music which is essentially not tonal, and then they put in something which creates the psychological expectation of tonal music.  Awareness is so vital in terms of the sonic expression.  So I was dealing with this other array of possibilities that was not coming from that world and that I could organize in different ways instead of dealing with absolute pitches and absolute rhythms.  Working in this way was a fascinating experience for me.

The whiteboard in Daria Semegen's electronic music studio at Stony Brook University which includes the phrase "tonal obligations" as well as a couple of twelve-tone rows and a diatonic scale.

The whiteboard in Daria Semegen’s electronic music studio at Stony Brook University.

FJO:  Writing music for you stemmed from being obsessed with figuring out how notation works and how that then gets translated into sound.  But electronic music is conceptualized in a very different way.  There have been attempts to notate it, but it sort of defies notation, even though there are these bizarre scores for some of the early electronic pieces by Ligeti and Stockhausen, which could probably never be used to recreate a performance.

DS:  Oh, it’s absurd.

FJO:  Right?  Crazy.  But once upon a time, it had to be written down in some form in order to obtain a copyright for it.

DS:  That’s right.  Even if you wrote BS, which a lot of these scores were—only a title page where you scribbled something, and then wrote a copyright © and your name.  As you point out, otherwise you couldn’t get a copyright.

FJO:  So is there a score for Electronic Composition No. 1 floating around?

DS:  You don’t need a score.

FJO:  No, but is there one?

DS:  No.  Why would I need it? It’s impossible to notate all of the things that are going on in the piece.  It would just be a superficial skeletal sketch.  I’ve had students who would write analyses of that piece. For instance, one student analyzed densities. The piece has so many parameters going on.  And for me, a parameter is any area that’s available to be varied that you can realize and work with.  That means you have to be aware that a certain area is a viable, working zone for that particular piece.

FJO:  So, in the creation of it, did you make any written-down sketches?

DS: No need.  For me, the most important thing in doing electronic music is that I don’t need some kind of thing on paper.  This is purely a sonic art, just as visual art would be in working with colors, rather than painting by numbers.  You can have a lot of in-between things happening which don’t have to comport with conventional ways of writing or visualizing the thing. You can be as specific or as vague as you need to be for musical expression at any moment in your sonic work.

FJO:  So in a way, it must have been greatly liberating for you since you were so fixated on notation.

DS:  Oh, it was not the intent—”Hey, I’m bored with notation; let me break away.”  This was like, “Wow, I don’t even have to pay any attention to notation.  I can just listen.” Now that’s the thrill of it all in the studio.  For me, this is really the purest way of dealing with musical sounds, where you’re only dealing with the sounds.  You’re not being distracted by visual stuff, even though I’ve done soundtracks with visual things and also with choreography.  But creating the electronic music pieces by themselves is not dependent on having to translate a notation or to re-translate the sounds into a notation except if somebody wants to do this for analytical purposes.  The visual stuff is not the piece.

FJO:  Then of course, the other thing is you had the experience of participating in an orchestration lab where you had musicians play whatever you brought in for them every week.  Most people have to wait a lot longer than that. Even folks who receive a commission to write an orchestra piece usually have to turn in the piece many months in advance, and then they eventually get to hear the piece or simply hear a lot of people struggling.  Others may write a piece and it could be more than a decade before they hear it.  Whereas if you create an electronic thing—

DS:  It’s there. So the immediacy of that is very much like mixing your own colors if you’re a painter.  Or creating shapes if you’re a sculptor, or even in working with light as a medium, which is an interesting kind of thing.  Years ago I became aware of light in museum exhibits, and I realized this is an extremely important factor in creating the expression of the visual art, whether it’s a sculpture or whether it’s a painting hanging on a wall.  The color of the light, the shading—the Kelvins now as we say—is part of the expression in showing a visual art piece.

FJO:  To get back to Electronic Composition No. 1. You gave it that title, but it’s not the first electronic piece you did.

DS:  There’s actually a sketch before that which is from Yale.  In Electronic Composition No. 1, I use sounds from the Yale studio, but the rest of the sounds and the manipulation are from the Columbia-Princeton Electronic Music Center.

FJO:  So was Trill Study a Yale piece?

DS:  Oh, no.  Trill Study was after. It was during the composition of a score for an animation called Out of Into. Trill Study is an intense loop piece.  The trill is literally spliced.  Would you believe that it is more intense a trill than anything from a synthesizer?  It’s very easy on a Buchla synthesizer to create a trill. But it’s not got the same bite—meaning the timbral color of the attack points when you’re alternating from one note to another note of the two-note trill.

The Buchla Synthesizer at Daria Semegen’s Electronic Music Studio at Stony Brook University.

By comparative listening, I said, “Hey, what would this trill be like if I actually spliced the freaking thing together?” It’s a lot more work, but once you have it spliced, you have this beautiful sounding passage and then you can do all kinds of looping and variable speed to create different lengths of loops, and have several loops on different tape recorders, just making sound clouds.

Then recording the result of that, doing some improv and then organizing phrases from that by chopping stuff out and putting it in a different order.  So it’s a lot for me like the visual arts.  There’s something that you can experience with that, but the ultimate thing is: what does it sound like?  If the mechanical uniformity of something like a repeated pattern on any synthesizer or system is too perfect, it doesn’t sound nuanced.

“By comparative listening, I said, ‘Hey, what would this trill be like if I actually spliced the freaking thing together?’”

Now mind you, I’m not trying to sound like natural instruments.  I’m trying to appreciate the aesthetic qualities that I experience with those instruments and trying to break the uniformity and the expectation of uniformity in the more mechanistic musical world of electronic gadgets—which can make very perfect things for you, but those things can be too perfect. So it’s better for me to have spliced that trill. That’s my one loop splice; one loop can create other loops in variance with that.

FJO:  Electronic Composition No. 1 was a watershed piece for you in a lot of ways.  At that point, you were working at the Columbia-Princeton Electronic Music Center established by Luening and Ussachevsky, which is a legendary place.

DS:  Mostly Ussachevsky, who actually had training in engineering from Pomona College, California, and a Ph.D. from Eastman later on.  He was very much instrumental in the actual workings of the place, and the tone, the ambience, the welcoming nature of that place, in encouraging composers, really worldwide, who could come and work there if they were interested.  I think it was run in a way that encouraged creative possibilities.

FJO:  Now when you were at Yale, you didn’t have access to a studio on that level, but you did work with Bülent Arel, who had also created work at the Columbia-Princeton center.

DS:  That Yale studio was rudimentary, and it also had that giant ARP, which was not engaging to work with.  I think it was an ergonomics issue.  I had a similar issue with Peter Zinovieff’s machine from his electronic music lab in Britain, which he had loaned to Ussachevsky. This giant synthesizer was put into the faculty studio that I worked in.  And so there it was.  It had wonderful capabilities, but was somewhat ergonomically tedious to work with.

“When some systems become ergonomically too gadgety, it does not intrigue me very much.”

I need a certain degree of immediacy. Getting sonic results now in some of the things that I do obviously takes more time, but when some systems become ergonomically too gadgety, it does not intrigue me very much.  Software often has problems in that way, because the people who design the software have a different way of working than I do, different degrees of intuiting what they go for first and what happens next. Remember I was talking about starting tabula rasa, and then putting certain things in.  I’m also focused on organizing things that I choose, if the software allows me to.  When things are too classified, it becomes impractical.  So I like to organize my gadgets for each piece in a different way.  In a way, it’s organizing your own studio of materials in the software.

A vintage ARP 2600 is still in use in Daria Semegen’s electronic music studio at Stony Brook University.

FJO:  Of course it’s funny talking about this in the context of the Columbia-Princeton Electronic Music Center, because the whole thing got started with the ergonomically impractical giant Mark II.

DS:  Yes. That RCA Mark II used rollers, kind of like a piano roll that you would type.  So they had two typewriters with rolls for that.

FJO:  Did you ever work on that?

DS:  No, very few people did.  Milton Babbitt.  And then Wuorinen had a piece on there, but that was a one-time deal.  It was a machine that Milton worked on.

FJO:  So what machines were you working on then?

DS:  Oh, the analog studio.  And there was a Buchla 100 there, and that was terrific.  And that’s used in the Out of Into score.  So yes, a lot of sounds from that place with some beautiful machines, including a terrific filter that was so incredibly discrete.  It was a slider, a kind of graphic filter-type machine, where I could get wonderful sound changes in time.  If I wanted my sound expression in filtering to change in a certain kind of intensity, I could get that from this type of filter.  It was one of a kind.  I’m sure that it’s in storage somewhere.  I don’t think anybody would want to throw it out.

Some of the extraordinary vintage equipment in Daria Semegen’s electronic music studio at Stony Brook University.

There are some incredible pieces of equipment sonically, like Elektro-Mess-Technik plate reverb units.  We have one of those here, which is this large gray box with a handle. That’s a very special sound.  We also have several models of the giant reverb plate from the same company, EMT, that are in storage.  Those have to be in a separate room, because otherwise such a reverb unit will pick up sounds from the studio through the walls of the unit.

FJO:  To go back to the early 1970s at the Columbia Princeton Electronic Music Center, you had also mentioned that you were still cutting and splicing reel to reels when you were there.

DS:  Yeah.

FJO:  But since synthesizers were now on the market and were getting smaller and smaller—to the point that people were starting to set up studios in their own homes and even travel around with Moogs and Buchlas—wasn’t it somewhat old fashioned at that point to still be working that way?

DS:  I think that perhaps is a typical perception of things, but I think in terms of the creative experience of working with devices.  I tend not to like to work boxed into one box. Whatever work I’m doing on a Buchla or any software is only the beginning of what I do.  I have to do a tremendous amount of editing and varying in order to get to a completion with a lot of these sounds that I use.  Patterns, textures, and combinations of timbre—it’s not enough for me to limit myself to the Buchla and for my piece to be about that.  My piece is not going to stick with only one type of expression or one unit.  It involves different types of comparative listening and different listening techniques, which are not conventional ear training.  Then choosing sounds, characteristics, and expressions through comparisons.  Comparative listening is very important.

“Editing is a constituent part of what my pieces are about.”

There are all these techniques that are involved in editing sound materials, and paying attention to things like attack characteristics and the expression of simultaneities—slices of sound, as well as how long the sound landscapes work.  And then what their densities are as they vary and what sorts of expressions they create.  For instance, for me, sonic intensity is only one single parameter.  So I’m aware of that through the piece and in designing and choosing different sound components.  I feel that what my sound sources are, or even whether they are analog or digital, doesn’t matter as much as having the possibility to control these things in different ways once they are stored, let’s say.  Because I work with stored sounds, basically on tape or digitally.  Editing is a constituent part of what my pieces are about.

FJO:  I’m going to jump ahead to 1990 then, because what you just said reminds me of a fascinating statement you made in the CD booklet notes for the recording of a piece for MIDI grand piano that you wrote for Loretta Goldberg called Rhapsody: “These new tools cannot change or solve the perplexing compositional problems often encountered in creating a new work, whose ultimate purpose is to communicate with my audience once more with feeling.”

DS:  That’s right.

FJO:  That sentiment is worlds away from “who cares if you listen”!

“You cannot own your sounds or your work too much and be so possessive that you will not change things and ignore your intuitive reactions.”

DS:  Oh, I know.  But I think Milton would say that this was overblown, or improperly interpreted, et cetera.  And, of course, he felt it was sensationalized in various ways.  But in giving a bit of a perspective on that—who is the initial audience to my sounds?  Who is that?  Me.  I have to be the first listener to my sounds.  And I modify them according to the reactions that I have as a listener who is, again, approaching the experience of listening without being, let me say, possessive about the sounds.  I tell my students that you cannot own your sounds or your work too much and be so possessive that you will not change things and ignore your intuitive reactions.  I think intuitive reactions are vital in creating a personal fingerprint on your art instead of being so possessive that you own it too much to improve it.

A collection of loops created by Stony Brook students using magnetic tape saturates a wall in Daria Semegen’s electronic music studio.

FJO:  Now in terms of taking work and giving it a personal fingerprint, one of the pieces of yours I find super fascinating and wonderful is Arc. And yet this was a piece that was created to accompany dancers, and the choreography was already completed before you composed this piece. You had to make your music precisely fit with that.

DS:  That’s right. It was very interesting. I was given large graph paper, almost like Chinese scrolls, and each square was a beat. I had these little graph paper squares with annotations by the choreographer Mimi Garrard indicating things like lighting changes—because this choreography was also synchronized with a digitally controlled lighting system which was called CORTLI, as in courtly dancing, but it was also an acronym.

A page from the movement and lighting “score” for Arc which was fixed before Daria Semegen began composing the music for it. (Image courtesy Daria Semegen.)

It was put together at Bell Labs by James Seawright, who was the head of visual arts at Princeton and was also on the staff at Columbia-Princeton as one of their technical people.  He’s a phenomenal kinetic sculptor.  Just amazing.  I remember as a kid at Eastman I would look through Time magazine and I saw a picture of Jimmy Seawright.  I didn’t know who he was, but he was there with one of his electronic sculptures, and there was a little write up about it.  I had no idea that six or so years later I’d be collaborating with James Seawright and doing two scores for two different choreographies, including Arc.  Anyway, the scores that I came up with had to be really on target in terms of the tempo and their work.  Arc consists of an A-B-C-B-A tempo shape, let’s say, starting with slow movements on the outer ends and then faster, and then the fastest.

FJO:  Arc has deeply resonated with me for decades, and I’ve sometimes wondered—especially after reading that comment of yours about communication and feeling—whether a piece like this could somehow serve as a gateway for listeners who love the standard orchestra and chamber music repertoire but might not be initially amenable to electronic music.

DS:  It’s more accessible.  It’s more familiar.  But the timbral world there is not; it’s otherworldly, let’s say, when compared with instrumental sounds.  It’s a simpler score in different ways than something like Electronic Composition No. 1 and things like Arabesque, which is way different.  Arc has more clearly displayed sounds that you can hear as they change and modify, morphing in expression with timbre, which was interesting.  And it’s a Buchla piece.

An archival photo from the original Mimi Garrard Dance Theatre production of Arc in May of 1977 which featured a portable computer-controlled lighting system by Mimi Garrard and James Seawright and an electronic musical score by Daria Semegen. (Photo courtesy Daria Semegen.)

FJO:  Arabesque is a much more recent piece. But when I was looking at a score of the second movement of your set of Three Piano Pieces from the 1960s, the one that jumps all over the place, I was struck by how reminiscent it was of Arabesque.

DS:  That aesthetic?

FJO:  Yeah.

DS:  You got it!

FJO:  Of course, in the electronic realm there are many things that you can do that you couldn’t do in quite the same way in a solo piano piece. You’re not limited to the timbres of a piano. You’re constantly manipulating timbre, and the electronic medium is also not limited to 12-tone equal temperament. Arabesque is filled with all sorts of microtonal intervals. But the gestures are still somewhat similar. It also doesn’t sound like any other electronic music I know from what I’ll call, for lack of a better term, the post-analog era. You’re still continuing to explore all those wonderful old-school electronic music sounds, yet your music has continued to evolve within that medium.

“I don’t think of old school and new school or in between schools.”

DS:  I don’t think of old school and new school or in between schools.  I simply relate to the material in my piece rather than worrying about or being aware of this or that school.  What you describe in this piano piece that’s way early is a musical behavior that is toccata-like.  Well, there you go.  If you’re going to compare it to body language musically, then you can say, “Well, yeah.  This piece [Arabesque] is kind of toccata-like.” But it’s only a section of the piece and that describes a sort of body language, which I’m aware of in terms of how things move musically.

FJO:  Okay, old school and new school are probably the wrong words for me to be using to explain this, but here we are, we’re sitting in this amazing studio with reel-to-reel machines.  You know, I haven’t really seen a lot of those around elsewhere so much these days.

DS:  Well, maybe people don’t know how to work with them!  And, of course, this is only one part of the experience.  We also have the digital world.  We also have digital editing available. To my taste, people don’t use it intricately enough. They could be experimenting with digital editing in a way that goes deeper and gives a greater array of possibilities to choose from, and that is an exciting thing to do.  That’s what I like to do with digital editing because it quickly expands the choices that I have, but I have to instigate the changes myself, because hey, the job of a composer is to choose!

Various vintage oscillators and reverb units surround a state of the art digital mixing console.

Analog and digital equipment co-exists in Daria Semegen’s electronic music studio at Stony Brook University.

FJO:  So one of the choices you’ve made is to still work with reel to reels.

DS:  Yes, depending on what techniques I’m using.  Because, for instance, I can very easily make elaborate improvisations in a studio that create more complex material or generations of complexity.  Let’s say you’re starting with original simpler material, going to several generations of layers.  And then you can extract chunks from that to be used in other ways.

“The job of a composer is to choose!”

I think of improv as a very viable technique.  When you hear my pieces, they don’t sound improvised; they sound deliberate because the sounds were deliberately chosen. But then we go to consider how these sounds were made.  That’s where anything goes.  It could be improv. It could be something that’s just the opposite, something very precise.  I go between these two different worlds of improvisation and precision, using randomness as a tool to generate material of different characters.  And not staying in one particular catechism of rules.

FJO:  That’s a very inspiring thing, not only from a creative standpoint but also from a pedagogical one. So we should conclude by talking a bit about teaching.  You’ve been here at Stony Brook since 1974.

DS:  Yes.

FJO:  That’s a very long time—44 years.

DS:  Mhmm.

FJO:  And although the studio has grown and has lots of newer equipment, there is equipment here that goes back to when you first got here, and stuff from even earlier than that, that you still use and that your students can also use.

DS:  That’s right.  It’s nice to have an array of possibilities available.  The instruments that you have also influence your perception, your thinking, and the way you can work.  For instance, my digital editing has a lot to do with my experiences splicing tapes.  I used to mess around, changing transients by cutting slivers off the attack points of tape just to see what the heck would happen.  And then using different angle cuts on the tape attacks or sometimes endings.  So these little playing-around experiments are all lessons in sonic experience.  Because ultimately that’s what happens: You make changes.  You listen to it.  My digital editing is very much influenced by this.  I also have one piano piece which is influenced by working with electronic sound textures.  I explained that a little bit in a program note. These two seemingly disparate worlds are interconnected here and there, sometimes more intensely or less intensely.  All these things exist.  So it’s having in my head these various experiences, including this.

FJO:  So a final question.  You said the audience begins with you.

“I’m not going to tell an audience what to do or how to react. That’s not my job!”

DS:  Yeah, I’ll be the first listener.  Then basically I share, but that is not my main drive.  I don’t sit around and think about, “Hey, I’m going to be sharing this.” You listen to Electronic Composition No. 1, and that gets pretty bizarre.  When I was making some of those sounds, I would say, “Whoa, this is really kicking it around here.  Gee, I wonder how an audience would react?”  But then I’d basically let them worry about it.  I’m not going to tell them what to do or how to react.  That’s not my job!

Daria Semegen with a bunch of wires in her mouth sitting in front of the Buchla synthesizer at the Stony Brook Electronic Music Center.

Resonating Filters: How to Listen and Be Heard

I have been writing all this month about how a live sound processing musician could develop an electroacoustic musicianship—and learn to predict musical outcomes for a given sound and process—just by learning a few things about acoustics/psychoacoustics and how some of these effects work. Coupled with some strategies about listening and playing, this can make it possible for the live processor to create a viable “instrument.” Even when processing the sounds of other musicians, it enables the live sound processing player to behave and react musically like any other musician in an ensemble and not be considered as merely creating effects. 

In the previous post, we talked about the relationship between delays, feedback, and filters.   We saw how the outcome of various configurations of delay times and feedback is directly affected by the characteristics of the sounds we put into them, whether they be short or long, resonant or noise.   We looked at pitch-shifts created by Doppler effect in Multi-tap delays and how one might use any of these things when creating live electroacoustic music using live sound processing techniques.  As I demonstrated, it’s about the overlap of sounds, about operating in a continuum from creating resonance to creating texture and rhythm.  It’s about being heard and learning to listen. Like all music. Like all instruments.

It’s about being heard and learning to listen. Like all music. Like all instruments.

To finish out this month of posts about live sound processing, I will talk about a few more effects, and some strategies for using them.  I hope this information will be useful to live sound processors (because we need to know how to be heard as a separate musical voice and also be flexible with our control, especially in live sound processing).  This information should also be useful to instrumentalists processing their own sound (because it will speed the process of finding what sounds good on your instrument and help with predicting outcomes of various sound processing techniques). It should be especially helpful for preparing for improvisation, or any live processing project without the luxury of a long time schedule, and so too, I hope, for composers who are considering writing for live processing, or creating improvisational settings for live electroacoustics.

Resonance / Filtering in More Detail

We saw in the last post how delays and filters are intertwined in their construction and use, existing in a continuum from short delays to long delays, producing rhythm, texture, and resonance depending on the length of the source audio events being processed, and the length of the delays (as well as feedback).

A special case is that of a very short delay (1-30ms) when combined with lots of feedback (90% or more).  The sound circulates so fast through the delay that it creates resonance at the speed of the circulation, creating clear pitches we can count on.

The effect is heard best with a transient (a very short sound such as a hand clap, the vocal fricatives “t” or “k”, or a snare drum hit).   For instance, if I have a 1ms delay and lots of feedback and input a short transient sound, we will hear a ringing at 1000Hz.   That is how fast the sample is going through the delay (1,000 times per second).  This is roughly the same pitch as “B” on the piano (a little sharp).  Interestingly, if we change the delay to 2ms, the pitch heard will be 500Hz (also a “B” but an octave lower), 3ms yields an “E” (333Hz), 4ms yields another “B” (250Hz), and 5ms a “G” (200Hz), and so on, in a kind of upside-down overtone series.
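To make that arithmetic concrete, here is a rough sketch in Python (rather than Max, purely for readability on the page) that converts a delay time into the frequency it will ring at, plus the nearest equal-tempered pitch:

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def delay_to_pitch(delay_ms, a4=440.0):
    """Resonant frequency of a short, high-feedback delay, plus the nearest note."""
    freq = 1000.0 / delay_ms                 # one recirculation per delay period
    semitones = 12 * math.log2(freq / a4)    # distance from A4 in semitones
    nearest = round(semitones)
    cents = 100 * (semitones - nearest)      # how sharp or flat of that note
    name = NOTE_NAMES[(nearest + 9) % 12]    # A4 sits at index 9
    octave = 4 + (nearest + 9) // 12
    return freq, f"{name}{octave}", cents

for ms in (1, 2, 3, 4, 5):
    freq, note, cents = delay_to_pitch(ms)
    print(f"{ms} ms -> {freq:.0f} Hz ~ {note} ({cents:+.0f} cents)")
```

Running it reproduces the series above: 1ms rings at a slightly sharp B5, 2ms at B4, 3ms near E4, and so on down the upside-down overtone series.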

Karplus-Strong Algorithm / Periodicity Pitch

A very short delay combined with high feedback resembles physical modeling synthesis techniques, which are very effective for simulating plucked string and drum sounds.  One such method, the Karplus-Strong Algorithm, consists of a recirculating delay line with a filter in its feedback loop.  The delay line is filled with samples of noise.  As the samples recirculate through the filter in the feedback loop, the delay line produces a periodic sample pattern whose period is directly related to how many samples there are.  Even though the input signal is pure noise, the algorithm creates a complex sound with pitch content that is related to the length of the delay. “Periodicity pitch” has been well studied in the field of psychoacoustics, and it is known that even white noise, if played with a delayed copy of itself, will have pitch. This is true even if each copy is sent separately to each ear. The low pass filter in the feedback loop robs the noise of a little of its high frequency components at each pass through the circuit, replicating the acoustical properties of a plucked string or drum.

If we set up a very short delay and use lots of feedback, and input any short burst of sound—a transient, click, or vocal fricative—we can get a similar effect of a plucking sound or a resonant click.  If we input a longer sound at the same frequency as what the delay is producing (or at multiples of that frequency), then those overtones will be accentuated, in the same way some tones are louder when we sing in the shower, because they are being reinforced.   The length of the delay determines the pitch; the feedback amount (and any filter we use in the feedback loop) determines the sustain and length of the note.
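Here is a minimal Karplus-Strong sketch in Python (with NumPy), just to show the shape of the algorithm as described—a noise-filled delay line whose recirculating samples pass through a simple averaging low pass filter:

```python
import numpy as np

def karplus_strong(freq, duration, sr=44100, feedback=0.996):
    """Pluck-like tone: a noise burst recirculating through a delay line
    with a two-point averaging (low pass) filter in the feedback loop."""
    n = int(sr / freq)                   # delay length in samples sets the pitch
    buf = np.random.uniform(-1, 1, n)    # fill the delay line with noise
    out = np.empty(int(sr * duration))
    for i in range(len(out)):
        out[i] = buf[i % n]
        # averaging two adjacent samples dulls the highs a little on each pass
        buf[i % n] = feedback * 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out

tone = karplus_strong(220.0, 1.5)   # a pluck at 220 Hz, about 1.5 seconds
```

The feedback value and the averaging filter together determine how long the “string” rings, exactly as with the delay-plus-filter circuits described above.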

Filtering & Filter Types

Besides any types of resonance we might create using short delays, there are also many kinds of audio filters we might use for any number of applications including live sound processing: Low Pass Filters, High Pass Filters, etc.

A diagram of various filter types.

But by far the most useful tools for creating a musical instrument out of live processing are resonant filters, specifically the BandPass and Comb filters, so let’s just focus on those. When filters have sharp cutoffs, they will also boost certain frequencies near their cutoff points to be louder than the input, which adds resonance.  BandPass filters allow us to “zoom in” on one region of a sound’s spectrum and reject the rest.  Comb filters, created when a delayed copy of a sound is added to itself, result in many evenly spaced regions (“teeth”) of the sound being cancelled out, creating a characteristic sound.

The most useful tools for creating a musical instrument out of live processing are resonant filters.

The primary elements of a BandPass filter that we would want to control would be center frequency, bandwidth, and Filter Q (which is defined as center frequency divided by bandwidth, but which we can just consider to be how narrow or “sharp” the peak is or how resonant it is).    When the “Q” is high (very resonant), we can make use of this property to create or underscore certain overtones in a sound that we want to bring out or to experiment with.
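As a sketch of those parameters in code, here is one way (in Python with SciPy, offered as an illustration rather than a recipe) to design a resonant peak from a center frequency and a Q, and “ping” it with noise:

```python
import numpy as np
from scipy import signal

sr = 44100
center = 880.0           # center frequency in Hz
q = 30.0                 # high Q -> narrow, very resonant peak
bandwidth = center / q   # the Q relationship described above

b, a = signal.iirpeak(center, q, fs=sr)   # second-order resonant bandpass

noise = np.random.uniform(-1, 1, sr)      # one second of noise to "ping" it
rung = signal.lfilter(b, a, noise)        # the region around 880 Hz rings through
```

Raising q narrows the band and makes the ringing more pitched; lowering it lets more of the source through.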

Phasing / Flanging / Chorus: These are all filtering-type effects, using very short and automatically varying delay times.  A phase-shifter delays the sound by less than one cycle (cancelling out some frequencies through the overlap and producing a non-uniform, but comb-like, filter). A flanger, which sounds a bit more extreme, uses delays around 5-25ms, producing a more uniform comb filter (evenly spaced peaks and troughs in the spectrum). It is named after the original practice of audio engineers who would press down on one reel (flange) of an analog tape deck, slowing it down slightly as it played nearly in sync with an identical copy of the audio on a second tape deck.  Chorus uses even longer delay times and multiple copies of the sound.

A tutorial on Phasing Flanging and Chorus
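For readers who want to see the mechanics, here is a bare-bones flanger sketch in Python—an LFO sweeping a short interpolated delay, mixed back with the dry signal. The parameter values are plausible starting points, not prescriptions:

```python
import numpy as np

def flanger(x, sr=44100, min_ms=1.0, max_ms=8.0, rate_hz=0.3, feedback=0.5):
    """Short delay swept by a slow LFO, mixed with the dry signal."""
    max_delay = int(sr * max_ms / 1000) + 2
    buf = np.zeros(max_delay)            # circular delay line
    out = np.zeros_like(x)
    write = 0
    for i in range(len(x)):
        lfo = 0.5 * (1 + np.sin(2 * np.pi * rate_hz * i / sr))
        delay = (min_ms + lfo * (max_ms - min_ms)) * sr / 1000
        read = (write - delay) % max_delay
        j = int(read)
        frac = read - j
        delayed = (1 - frac) * buf[j] + frac * buf[(j + 1) % max_delay]
        buf[write] = x[i] + feedback * delayed
        write = (write + 1) % max_delay
        out[i] = 0.5 * (x[i] + delayed)  # dry + wet makes the comb
    return out
```

Shrink the sweep to under a millisecond and it behaves more like the phase-shifter described above; lengthen the delays and stack several modulated copies and you are on the way to a chorus.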

For my purposes, as a live processor trying to create an independent voice in an improvisation, I find these three effects most useful if I treat them the same as filters, except that since they are built on delays I can change, there might be the possibility to increase or decrease delay times and get a Doppler effect, too, or play with feedback levels to accentuate certain tones.

I use distortion the same way I would use a filter—as a non-temporal transformation.

Distortion

From my perspective, whatever methods are used to get distortion add and subtract overtones from our sound, so for my live processing purposes, I use them the same way I would use filters—as non-temporal transformations. Below is a gorgeous example of distortion, not used on a guitar. The only instruction in the score for the electronics is to gradually bring up the distortion in one long crescendo.  I ran the electronics for the piece a few times in the ‘90s for cellist Maya Beiser, and got to experience how strongly the overtones pop out because of the distortion pedal, and move around nearly on their own.

Michael Gordon Industry

Pitch-Shift / Playback Speed Changes / Reversing Sounds

I once heard composer and electronic musician Nic Collins say that to make experimental music one need only “play it slow, play it backwards.” Referring to pre-recorded sounds, these are certainly time-honored electroacoustic approaches, born of a time when only tape recorders, microphones, and a few oscillators were used to make electronic music masterpieces.

For live processing of sound, pitch-shift and/or time-stretch continue to be simple and valuable processes.  Time compression and pitch-shift are connected by physics; sounds played back slower are correspondingly lower in pitch, and sounds played back faster are higher in pitch. (With analog tape, or a turntable, if you play a sound back at twice the speed, it plays back an octave higher because the soundwaves are playing back twice as fast, which doubles the frequency.)
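The coupling is easy to see in code. This Python fragment (illustrative only) plays a buffer back at a different speed by stepping through it at a different rate; the pitch change comes along for free:

```python
import numpy as np

def play_at_speed(x, speed):
    """Read the buffer at a different rate; pitch follows speed."""
    idx = np.arange(0, len(x) - 1, speed)        # speed 2.0 -> half the samples
    i = idx.astype(int)
    frac = idx - i
    return (1 - frac) * x[i] + frac * x[i + 1]   # linear interpolation

sr = 44100
t = np.arange(sr) / sr
a440 = np.sin(2 * np.pi * 440 * t)       # one second of A440
octave_up = play_at_speed(a440, 2.0)     # half a second long, now at 880 Hz
```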

The relationship between speed of playback and time-stretch was decoupled in the mid-‘90s.

This relationship between speed of playback and time-stretch was decoupled in the mid-‘90s by faster computers, realtime spectral analysis, and other methods, making it possible to more easily do one without the other.  It is also now the norm. In much of the commercial music software my students use, it is possible to slow down a sound and not change its pitch (certainly more useful for changing tempo in a song with pre-recorded acoustic drums), and being able to pitch-shift or Autotune a voice without changing its speed is also a very useful tool for commercial production.  Each of these decoupled programs/methods (with names like “Warp”, “Flex”, etc.) is sometimes based on granular synthesis or phase vocoders, each of which adds its own sonic residue (essentially errors or noises endemic to the method when using extreme parameter settings).  Sometimes these mistakes, noise, and glitch sounds are useful and fun to work with, too.

An example of making glitch music with Ableton’s Warp mode (their pitch-shift with no time-compression/expansion mode).

Some great work by Philip White and Ted Hearne using Autotune gone wild on their record R We Who R We

Justin Bieber 800% slower (using PaulStretch extreme sound stretch) is a favorite of mine, but trying to use a method like this for a performance (even if it were possible in real-time) might be a bit unwieldy and make for a very long performance, or very few notes performed. Perhaps we could just treat this like a “freeze” delay function for our purposes in this discussion. Nevertheless, I want to focus here on old-school, time-domain, interconnected pitch-shift and playback speed changes, which are still valuable tools.

I am quite surprised at how many of my current students have never tried slowing down the playback of a sound in realtime.  It’s not easy to do with their software in realtime, and some have never had access to a variable speed tape recorder or a turntable, and so they are shockingly unfamiliar with this basic way of working. Thankfully there are some great apps that can be used to do this and, with a little poking around, it’s also possible to do using most basic music software.

A Max patch demo of changing playback speeds and reversing various kinds of sound.

Some sounds sound nearly the same when reversed, some not.

There are very big differences in what happens when pitch-shifting various kinds of sounds (or changing speed or direction of playback).  The response of speech-like sounds (with lots of formants, pitch, and overtone changes within the sound) differs from what happens to string-like (plucked or bowed) or percussive sounds.  Some sound nearly the same when reversed, some not. It is a longer conversation to discuss global predictions about what the outcome of our process will be for every possible input sound (as we can more easily do with delays/filters/feedback) but here are a few generalizations.

Strings can be pitch-shifted up or down, and sound pretty good, bowed and especially plucked.  If the pitch-shift is done without time compression or expansion, then the attack will be slower, so it won’t “speak” quickly in the low end.  A vibrato might get noticeably slow or fast with pitch-changes.

Pitch-shifting a vocal sound up or down can create a much bigger and more iconic change in the sound, personality, or even gender of the voice. Pitching a voice up, we get the iconic (or annoying) sound of Alvin and the Chipmunks.

Pitch-shifting a voice down, we get slow, slurry speech sounding like Lurch from the Addams Family, or what’s heard in all of DJ Screw’s chopped and screwed mixtapes (or even a gender change, as in John Oswald’s Dolly Parton think piece from his 1988 Plunderphonics).

John Oswald: Pretender (1988) featuring the voice of Dolly Parton

But if the goal is to create a separate voice in an improvisation, I would prefer to pitch-shift the sound and then also put it through a delay with feedback. That way I can create sound loops of modulated arpeggios moving up and up and up (or down, or both) in a symmetrical movement using the original pitch interval difference (not just whole tone and diminished scales, but everything in between as well). Going up in pitch, the sound gets higher until it’s just shimmery (since the overtones are gone as it nears the limits of the system).  Going down in pitch, the sound gets lower and also slower. Rests and silences are slow, too. In digital systems, noise may build up as some samples must be repeated to play back the sound at that speed.  These all relate back to Hugh Le Caine’s early electroacoustic work Dripsody for variable speed tape recorder (1955) which, though based on a single sample of a water drop, makes prodigious use of ascending arpeggios created using only tape machines.

Hugh Le Caine: Dripsody (1955)
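The symmetrical-arpeggio behavior of a pitch-shift inside a feedback loop is simple to reason about: every trip through the loop applies the same interval again. A few lines of Python show the resulting frequency ladder (any interval works, not just scale steps):

```python
base_hz = 440.0
semitones = 3.0                  # the pitch-shift interval in the loop
ratio = 2 ** (semitones / 12)    # equal-tempered frequency ratio

freq = base_hz
for echo in range(8):
    print(f"echo {echo}: {freq:.1f} Hz")
    freq *= ratio                # each echo is shifted once more
```

With a downward interval the ladder descends instead, and in a real system each rung also arrives later, quieter, and (as noted above) slower.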

Which brings me to the final two inter-related topics of these posts—how to listen and how to be heard.

How to Listen

Acousmatic or Reduced listening – The classic discussion by Pierre Schaeffer (taken up in the writings of Michel Chion) is where I start with every group of students in my Electronic Music Performance classes. We need to be able to hear the sounds we are working on for their abstracted essences.  This is in sharp contrast to the normal listening we do every day, which he called causal listening (what is the sound source?) and semantic listening (what does the sound mean?).

We need to be able to hear the sounds we are working on for their abstracted essences.

We learn to describe sounds in terms of their pitch (frequency), volume (amplitude), and tone/timbre (spectral qualities).  Very importantly, we also listen to how these parameters change over time and so we describe envelope, or what John Cage called the morphology of the sound, as well as describing a sound’s duration and rhythm.

Listening to sound acousmatically can directly impact how we make ourselves heard as a separate, viable “voice” using live processing.  So much of what a live sound processor improvising in real-time needs is the ability to provide contrast with the source sound. This requires knowledge of what the delays and filters and processes will produce with many types of possible input sounds (what I have been doing here), a good technical setup that is easy to change quickly and reactively, active acousmatic listening, and good ear/hand coordination (as with every instrument) to find the needed sounds at the right moment. (And that just takes practice!)

All the suggestions I have made relate back to the basic properties we listen for in acousmatic listening. Keeping that in mind, let’s finish out this post with how to be heard, and specifically what works for me and my students, in the hope it will be useful for some of you as well.

How to be Heard
(How to Make a Playable Electronic Instrument Out of Live Processing)

Sound Decisions: Amplitude Envelope / Dynamics

A volume pedal, or some way to control volume quickly, is the first tool I need in my setup, and the first thing I teach my students. Though useful for maintaining the overall mix, more importantly it enables me to shape the volume and subtleties of my sound to be different than that of my source audio. Shaping the envelope/dynamics of live-processed sounds of other musicians is central to my performing, and an important part of the musical expression of my sound processing instrument.  If I cannot control volume, I cannot do anything else described in these blog posts.  I use volume pedals and other interfaces, as well as compressor/limiters for constant and close control over volume and dynamics.

Filtering / Pitch-Shift (non-temporal transformations)

To be heard when filtering or pitch-shifting with the intention of being perceived as a separate voice (not just an effect) requires displacement of some kind. Filtering or pitch-shifting with no delay transforms the sound and gesture being played, but it does not create a new gesture, because both the original and the processed sound are taking up the same space, temporally or spectrally or both.  So, we need to change the sound in some way to create some contrast. We can do this by changing parameters of the filter (Q, bandwidth, or center frequency), or by delaying the sound with a long enough delay that we hear the processed version as a separate event; see the sketch after the list below.  That delay time should be more than 50-100ms, depending on the length of the sound event. Shorter delays would just give us more filtering if the sounds overlap.

  • When filtering or pitch shifting a sound, we will not create a second voice unless we displace it in some way. Think of how video feedback works: the displacement makes it easier to perceive.
  • Temporal displacement: We can delay the sound we are filtering (same as filtering a sound we have just delayed). The delay time must be long enough so there is no overlap and it is heard as a separate event. Pitch-shifts that cause the sound to play back faster or slower might introduce enough temporal displacement on their own if the shift is extreme.
  • Timbral displacement: If we create a new timbral “image” that is so radically different from the original, we might get away with it.
  • Changes over time / modulations: If we do filter sweeps, or changes to the pitch-shift that contrast with what the instrument is doing, we can be heard better.
  • Contrast: If the instrument is playing long tones, then I would choose to do a filter sweep, or change delay times, or pitch-shift. This draws attention to my sound as a separate electronically mediated sound.  This can be done manually (literally a fader), or as some automatic process that we turn on/off and then control in some way.
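As promised above, here is a small sketch of temporal displacement in Python (with SciPy), using hypothetical parameter values: the source is filtered, then delayed far enough that the processed copy reads as its own event rather than as coloration:

```python
import numpy as np
from scipy import signal

def displaced_voice(x, sr=44100, delay_ms=150, center=1200.0, q=8.0):
    """Filter the source, then delay it by more than ~100 ms so the
    processed copy is heard as a separate event, not an effect."""
    b, a = signal.iirpeak(center, q, fs=sr)
    filtered = signal.lfilter(b, a, x)
    pad = int(sr * delay_ms / 1000)
    return np.concatenate([np.zeros(pad), filtered])   # displaced copy
```

Mix the returned signal with the live source and the filtered voice answers the player instead of merely tinting them.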

Below is an example of me processing Gordon Beeferman’s piano in an unreleased track. I am using very short delays with pitch-shift to create a hazy resonance of pitched delays and I make small changes to the delay and pitch-shift to contrast with what he does in terms of both timbre and rhythm.

Making it Easier to Play

Saved States/Presets

I cannot possibly play or control more than a few parameters at once.

Since I cannot possibly play or control more than a few parameters at once, and I am using a computer, I find it easier to create groupings of parameters, my own created “presets” or “states” that I can move between, and know I can get to them, as I want to.
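The idea is nothing more exotic than named groups of parameter values. A toy version in Python, with made-up parameter names, might look like this:

```python
# Hypothetical parameters, grouped into recallable states.
presets = {
    "resonant": {"delay_ms": 12,   "feedback": 0.95, "pitch_shift": 0.0},
    "texture":  {"delay_ms": 450,  "feedback": 0.60, "pitch_shift": 3.0},
    "sparse":   {"delay_ms": 1200, "feedback": 0.20, "pitch_shift": -12.0},
}

current = {}

def recall(name):
    """Jump the whole instrument to a saved state with one gesture."""
    current.update(presets[name])

recall("texture")   # one action, several parameters move together
```

The point is playability: one pedal press or key moves everything at once, instead of five faders.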

Trajectories

Especially if I play solo, sometimes it is helpful if some things can happen on their own. (After all, I am using a computer!)  If possible, I will set up a very long trajectory or change in the sound, for instance, a filter-sweep, or slow automated changes to pitch shifts.   This frees up my hands and mind to do other things, and assures that not everything I am doing happens in 8-bar loops.

Rhythm

I cannot express strongly enough how important control over rhythm is to my entire concept. It is what makes my system feel like an instrument. My main modes of expression are timbre and rhythm.  Melody and direct expression of pitch using electronics are slightly less important to me, though the presence of pitches is never to be ignored. I choose rhythm as my common ground with other musicians. It is my best method to interact with them.

Nearly every part of my system allows me to create and change rhythms by altering delay times on-the-fly, or simply tapping/playing the desired pulse that will control my delay times or other processes.  Being able to directly control the pulse or play sounds has helped me put my body into my performance, and this too helps me feel more connected to my setup as an instrument.

Even using an LFO (Low Frequency Oscillator) to make tremolo effects and change volume automatically can be interesting, and I would consider it part of my rhythmic expression (the speed of which I’d want to be able to control while performing).

I am strongly attracted to polyrhythms. (Not surprisingly, my family is Greek, and there was lots of dancing in odd time signatures growing up.) Because it is so prevalent in my music, I implemented a mechanism that allows me to tap delay times and rhythms that are complexly related to what is happening in the ensemble at that moment.  After pianist Borah Bergman once explained a system he thought I could use for training myself to perform complex rhythms, I created a Max patch to implement what he taught me, and I started using this polyrhythmic metronome to control the movement between any two states/presets quickly, creating polyrhythmic electroacoustics. Other rhythmic control sources I have used include Morse Code as rhythm, algorithmic processes, a rhythm engine influenced by North Indian Classical Tala, and whatever else interests me for a particular project.
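I won’t reproduce the actual Max patch here, but one simple way to derive polyrhythmically related delay times from a single tapped pulse looks like this in Python:

```python
def polyrhythmic_delays(tap_ms, pulses=3, against=2):
    """Delay times whose repeats fit 'pulses' even hits into the
    span of 'against' tapped beats (e.g. 3 against 2)."""
    span = tap_ms * against
    return [span / pulses * (k + 1) for k in range(pulses)]

# Tapping a 500 ms pulse and asking for 3-against-2:
print(polyrhythmic_delays(500, pulses=3, against=2))
# approximately [333.3, 666.7, 1000.0] milliseconds
```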

With rhythm, it is about locking it in.

With rhythm, it is about locking it in.  It’s important that I can control my delays and rhythm processes so I can have direct interaction with the rhythm of other musicians I am playing with (or that I make a deliberate choice not to do so).

Chuck, a performance I like very much by Shackle (Anne La Berge on flute & electronics and Robert van Heumen on laptop-instrument) which does many of the things I have written about here.

Feedback Smears / Beautiful Madness

Filters and delays are always interconnected and feedback is the connective tissue.

As we have been discussing, filters and delays are always interconnected, and feedback is the connective tissue.  I make liberal use of feedback with Doppler shift (interpolating delays) for weird pitch-shifts, I use feedback to create resonance (with short delays), and I use feedback to quickly build up density or texture when using longer delays.  With pitch-shift, as mentioned above, feedback can create symmetrical arpeggiated movement of the original pitch difference.   And feedback is just fun because it’s, well, feedback!  It’s slightly dangerous and uncontrollable, and brings lovely surprises.  That being said, I use a compressor or have a kill-switch at hand so as not to blow any speakers or lose any friends.

David Behrman: Wave Train (1966)

A recording of me with Hans Tammen’s Third Eye Orchestra.  I am using only a phaser on my microphone and lots of feedback to create this sound, and try to keep the timing with the ensemble manually.

Here are some strategies for using live processing that I hope will be useful:

Are you processing yourself and playing solo?

Do any transformation, go to town!

The processes you choose can be used to augment your instrument, or to create an independent voice.  You might want to create algorithms that can operate independently, especially for solo performing, so some things will happen on their own.

Are you playing in an ensemble, but processing your own sound?

  • What frequencies / frequency spaces are already being used?
  • Keep control over timbre and volume at all times to shape your sound.
  • Keep control of your overlap into other players’ sound (reverb, long delays, noise).
  • Keep control over the rhythm of your delays, and your reverb.  They are part of the music, too.

Are you processing someone else’s sound?

  • Make sure your transformations maintain the separate sonic identity of other players and your sound, as I have been discussing in these posts.
  • Build an instrument/setup that is playable and flexible.
  • Create some algorithms that can operate independently.

How to be heard / How to listen: redux

  • If my performer is playing something static, I feel free to make big changes to their sound.
  • If my live performer is playing something that is moving or changing (in pitch, timbre, or rhythm), I choose either to create something static out of their sound, or to move differently (contrast their movement by moving faster or slower or in a different direction, or work with a different parameter). This can be as simple as slowing down my playback speed.
  • If my performer is playing long tones on the same pitch, or a dense repeating or legato pattern, or some kind of broad spectrum sound, I might filter it, or create glissando effects with pitch-shifts ramping up or down.
  • If my performer is playing short tones or staccato, I can use delays or live-sampling to create rhythmic figures.
  • If my performer is playing short bursts of noise, or sounds with sharp fast attacks, that is a great time to play super short delays with a lot of feedback, or crank up a resonant filter to ping it.
  • If they are playing harmonic/focused sound with clear overtones, I can mess it up with all kinds of transformations, but I’ll be sure to delay it / displace it.
When you are done, know when to turn it off.

In short and in closing: Listen to the sound.  What is static? Change it! Do something different.   And when you are done, know when to turn it off.

On “Third Eye” from Bitches Re-Brewed (2004) by Hans Tammen, I’m processing saxophonist Martin Speicher

Suggested further reading

Michel Chion (translated by Claudia Gorbman): Audio-Vision: Sound on Screen (Columbia University Press, 1994)
(Particularly his chapter, “The Three Listening Modes” pp. 25–34)

Dave Hunter: “Effects Explained: Modulation—Phasing, Flanging, and Chorus” (Gibson website, 2008)

Dave Hunter: “Effects Explained: Overdrive, Distortion, and Fuzz” (Gibson website, 2008)

Delays, Feedback, and Filters: A Trifecta

My last post, “Delays as Music,” was about making music using delays as an instrument, specifically in the case of the live sound processor. I discussed a bit about how delays work and are constructed technically, how they have been used in the past, a bit about how we perceive sound, and how we perceive different delay times when used with sounds of various lengths. This post is a continuation of that discussion. (So please do read last week’s post first!)

We are sensitive to delay times as short as a millisecond or less.

I wrote about our responsiveness to minuscule differences in time, volume, and timbre between the sounds arriving in our ears, which is our skill set as humans for localizing sounds—how we use our ears to navigate our environment. Sound travels at approximately 1,125 feet per second, and though all the sound waves we hear in a sound are travelling at the same speed, the low frequency waves (which are longer) tend to bend and wrap around objects, while high frequencies are absorbed by or bounce off of objects in our environment. We are sensitive to delay times as short as a millisecond or less, as related to the size of our heads and the physical distance between our ears.  We are able to detect tiny differences in volume between the ear that is closer to a sound source and the other.  We are able to discern small differences in timbre, too, as some high frequency sounds are literally blocked by our heads. (To notice this phenomenon in action, cover your left ear with your hand and, with your free hand, rustle your fingers first in the uncovered ear and then in the covered one.  Notice what is missing.)

These psychoacoustic phenomena (interaural time difference, interaural level difference, and head shadow) are useful not only for an audio engineer, but are also important for us when considering the effects and uses of delay in electroacoustic musical contexts.
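The numbers involved are tiny, which is worth appreciating. A back-of-the-envelope calculation in Python, assuming a rough six-tenths-of-a-foot ear spacing:

```python
speed_of_sound_ft_s = 1125.0   # as cited above
ear_spacing_ft = 0.6           # rough head width; an assumption

max_itd_ms = ear_spacing_ft / speed_of_sound_ft_s * 1000
print(f"maximum interaural time difference ~ {max_itd_ms:.2f} ms")
# ~0.53 ms: localization runs on sub-millisecond differences
```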

My “aesthetics of delay” are similar to the rules of thumb audio engineers use for delay as an audio effect or to add spatialization.  The difference in my approach is that I want to be able to recognize and find sounds I can put into a delay, so that I can predict what will happen to them in real time as I am playing with various parameter settings. I use changes in delay times as a tool to create and control rhythm, texture, and timbral changes. I’ve tried to develop a kind of electronic musicianship which incorporates acousmatic listening and quick responses, and I hope to share some of this here.

It’s all about the overlap of sound.

As I wrote, it’s all about the overlap of sound.  If a copy of a sound, delayed by 1-10ms, is played with the original, we simply hear it as a unified sound, changed in timbre. Short delayed sounds nearly always overlap. Longer delays might create rhythms or patterns; medium length delays might create textures or resonance.  It depends on the length of the sound going into the delay, and what that length is with respect to the length of the delay.

This post will cover more ground about delays and how they can be used to play dynamic, gestural, improvised electroacoustic music. We also will look at the relationship between delays and filtering, and in the next and last post I’ll go more deeply into filtering as a musical expression and how to listen and be heard in that context.

Mostly, I’ll focus on the case of the live processor who is using someone else’s sound, or a sound that cannot be completely foreseen (and not always using acoustic instruments as a source—Joshua Fried does this beautifully with sampling/processing live radio in his Radio Wonderland project).  However, despite this focus, I am optimistic that this information will also be useful to solo instrumentalists using electronics on their own sound, as well as to composers wanting to build improvisational systems into their work.

No real tips and tricks here (well, maybe a few), but I do hope to communicate some ideas I have about how to think about effects and live audio manipulation in a way that outlasts current technologies. Some of the examples below use the Max programming language, because it is my main programming environment, but also because it is well suited to diagramming and explaining my points.

We want more than one, we want more than one, we want…

As I wrote last week, musicians often want to be able to play more than one delayed sound, or to repeat that delayed sound several times. To do this, we either use more delays, or we use feedback to route a portion of our output back into the input.

When using feedback to create many delays, we route a portion of our output back into the input of the delay. By routing only some of the sound (not 100%), the repeated sound is a little quieter each time, and eventually the sound dies out in decaying echoes.  If our feedback level is high, the sound may recirculate for a while in an almost endless repeat, and might even overload/clip if we add new sounds (like a too-full fountain).
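How quickly those echoes die out is simple arithmetic. A quick Python check, treating -60 dB as effectively silent:

```python
import math

def echoes_until_silence(feedback, floor_db=-60.0):
    """Number of repeats before a recirculating echo falls below a floor."""
    floor_amp = 10 ** (floor_db / 20)    # -60 dB is about 1/1000 amplitude
    return math.ceil(math.log(floor_amp) / math.log(feedback))

print(echoes_until_silence(0.5))    # 10 repeats and it's gone
print(echoes_until_silence(0.95))   # 135 repeats: the near-endless case
```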

Using multi-tap delays, or a few delays in parallel, we can make many copies of the sound from the same input, and play them simultaneously.  We could set up different delay lengths with odd spacings, and if the delays are longer than the sound we put in, we might get some fun rhythmic complexity (and polyrhythmic echoes).  With very short delays, we’ll get a filtered sound from the multiple copies being played nearly simultaneously.
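
A multi-tap version is even simpler to sketch: the same input is read at several delay times and the copies are summed (again a rough Python sketch, names and values mine):

    import numpy as np

    def multitap_delay(x, sr, tap_times=(0.21, 0.34, 0.55), tap_gains=(0.8, 0.6, 0.4)):
        x = np.asarray(x, dtype=float)
        taps = [int(t * sr) for t in tap_times]       # tap delays in samples
        y = np.concatenate([x, np.zeros(max(taps))])  # leave room for the tails
        for d, g in zip(taps, tap_gains):
            y[d:d + len(x)] += g * x                  # sum in each delayed copy
        return y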

Any of these delayed signals (taps) could in turn be sent back into the multi-tap delay’s input in a feedback network.   It is possible to put any number and combination of additional delays and filters in the feedback loop as well, and these complex designs are what make the difference between all the flavors of delay types in common use.

It doesn’t matter how we choose to create our multiple delays.  If the delays are longer than the sounds going into them, then we don’t get overlap, and we’ll hear a rhythm or pattern.  If the delays are medium length (compared to our input sound), we’ll hear some texture or internal rhythms or something undulating.  If the delays are very short, we get filtering and resonance.

Overlap is what determines the musical potential for what we will get out of our delay.

The overlap is what determines the musical potential for what we will get out of our delay. For live sound processing in improvised music, it is critical to listen analytically (acousmatically) to the live sound source we are processing.  Based on what we hear, it is possible to make real-time decisions about what comes next and know exactly what we will get out.

Time varying delay – interpolating delay lines

Most cheaper delay pedals and many plugins make unwanted noise when the delay times are changed while a sound is playing. Usually described as clicks, pops, crackling, or “zipper noise,” these artifacts occur because the delays are “non-interpolating”: the changes in delay time are not smooth, so the audio plays back with abrupt changes in volume.  If you never change delay times during performance, a simple, fixed, non-interpolating delay is fine.

Changing delay times is very useful for improvisation and for turning a delay into an instrument. To avoid the noise and clicks, we need to use “interpolating” delays, which might mean a slightly more expensive pedal or plugin, or a little more programming. As performers or users of commercial gear, we may not be privy to all the different techniques used inside every piece of technology we encounter. (Linear or higher-order interpolation, windowing/overlap, and selection among several parallel delay lines are a few.) For the live sound processor/improviser, what matters is: Can I change my delay times live?  What artifacts are introduced when I change them?  Are they musically useful to me?  (Sometimes we like glitches, too.)
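
Here is the core of the idea in a Python sketch: a delay read from a circular buffer, with and without linear interpolation (names, and the choice of simple linear interpolation, are mine):

    def read_delay(buf, write_idx, delay_in_samples, interpolate=True):
        # Read `delay_in_samples` (possibly fractional) behind the write position.
        N = len(buf)
        pos = (write_idx - delay_in_samples) % N
        if not interpolate:
            return buf[int(pos)]   # truncating read: steps/clicks as the delay moves
        i = int(pos)
        frac = pos - i             # fractional part of the read position
        return (1 - frac) * buf[i] + frac * buf[(i + 1) % N]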

Doppler shift!  Making delays fun.

A graphic representation of the Doppler Shift

An interesting feature/artifact of interpolating delays is the characteristic pitch shift that many of them make.  This pitch shift is similar to how the Doppler shift phenomenon works.

The characteristic pitch shift that many interpolating delays make is similar to how the Doppler Effect works.

A stationary sound source normally sends out sound waves in all directions around itself, at the speed of sound. If that sound source starts to move toward a stationary listener (or if the listener moves toward the sound), the successive wave fronts start getting compressed in time and hit the listener’s ears with greater frequency.  Due to the relative motion of the sound source to the listener, the sound’s frequency has in effect been raised.  If the sound source instead moves away from the listener, the opposite holds true: the wave fronts are encountered at a slower rate than previously, and the pitch seems to have been lowered. [Moore, 1990]

OK, but in plainer English: when a car drives past you on the street or highway, you hear the sound go up in pitch as it approaches, and back down as it passes.   This is the Doppler Effect.  The sound waves always travel at the same speed, but because they are coming from a moving object, their frequency is raised as the car approaches you and lowered as it moves away.

A sound we put into a delay line (software / pedal / tape loop) is like a recording.  If you play it back faster, the pitch goes higher as the sound waves hit your ears in faster succession; if you slow it down, it plays back lower.  Just like the sound of a passing siren or car horn that rises as it approaches and falls as it passes you: when delayed sounds are varied in time, the same auditory illusion is created. The pitch goes down as the delay time is increased and up as the delay time is decreased, the same Doppler Effect as with a stationary listener and a moving sound source.
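
The amount of transposition is easy to estimate: if the delay time changes steadily, the delayed sound is effectively resampled at a rate of one minus the change in delay per unit of time. (A quick sketch; the numbers are only an example.)

    import math

    def doppler_ratio(delay_change_s, ramp_time_s):
        # Playback rate while a delay time is ramped at a constant speed.
        return 1.0 - delay_change_s / ramp_time_s

    ratio = doppler_ratio(0.3 - 0.1, 1.0)   # sweep 100ms -> 300ms over 1s: 0.8
    semitones = 12 * math.log2(ratio)       # about -3.9 semitones: pitch drops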

Using a Doppler Effect makes the delay more of an “instrument.”

Using a Doppler Effect makes the delay more of an “instrument” because it’s possible to repeat the sound and also alter it.  In my last post I discussed many types of reflections and repetitions in the visual arts, some exact and natural, others more abstract and transformed. Being able to alter the repetition of a sound in this way is of key importance to me.  Adding additional effects in with the delays is important for building a sound that is musically identifiable as separate from that of the musician I am using as my source.

Using classic electroacoustic methods for transforming sounds, we can create new structures and gestures out of a live sound source. Methods such as pitch-shifting, speeding sounds up or slowing them down, or a number of filtering techniques, work better if we also use delays and time displacement as a way to distinguish these elements from the source sounds.

Many types of delay and effects plugins and pedals on the market are based on simple combinations of the principal parameters I have been outlining (e.g. how much feedback, how short a delay, how it is routed). For example, Ping-Pong Delay delays a signal 50-100ms or more and alternates sending it back and forth between the left and right channels, sometimes with high feedback so it goes on for a while. Flutter Echo is very similar to the Ping-Pong Delay but with shorter delay times, causing more filtering to occur (an acoustic effect sometimes heard in very live-sounding public spaces).  Slapback Echo has a longer delay time (75ms or more) with no feedback.
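
As a sketch, a ping-pong delay is just two delay lines that feed each other across the stereo field (Python again; the parameter values are arbitrary):

    import numpy as np

    def ping_pong(x, sr, delay_s=0.08, feedback=0.6):
        d = int(delay_s * sr)
        n = len(x) + 20 * d                 # leave room for the echo tail
        L, R = np.zeros(n), np.zeros(n)
        L[:len(x)] = x                      # dry signal enters on the left
        for i in range(d, n):
            R[i] += feedback * L[i - d]     # left's past echoes to the right...
            L[i] += feedback * R[i - d]     # ...and right's past echoes to the left
        return L, R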

FREEZE!  Infinite Delay and Looping

Some delay devices will let us hold a sample indefinitely in the delay.  We can loop a sound and “freeze” it, adding more sounds later if we choose. The layer cake of loops that builds up lends itself to an easy kind of improvisation, which can be very beautiful.

“Infinite” delay is used by an entire catalog of genres and musical scenes.

Looping with infinite delay is used in an entire catalog of genres and musical scenes, from noise to folk music to contemporary classical.  In the past few years especially, it has been all over YouTube and elsewhere online, thanks to software like Ableton Live and hardware like Line 6’s popular looper pedals. Engaging in a form of live composing/production, musicians generate textures and motifs and construct them into entire arrangements, often based upon the sound of one instrument, in many tracks, all played live and in the moment.  In popular electronic music practice, looping and grid interfaces have been the most salient performance and interface paradigms since the late 2000s.

Looping music is often about building up an entire arrangement from scratch, with no sounds heard that are not first played live by the instrumentalist before their repetition (a repetition whose sound is possibly slightly different, mediated by being heard over speakers).

With live sound processing, we use loops too, of course. The moment I start to loop a sound “infinitely,” I am theoretically no longer working with live sound processing; I am processing something that happened in the past—this is sometimes called “live sampling,” and we could quibble about the differences.  To make dynamic live-looping in improvised music, whether by sampling/looping other musicians or by processing one’s own sound, it is essential to be flexible: able and willing to change the loops in some way, perhaps quickly, and to alter the recorded audio in real time.  These alterations can be a significant part of the expressiveness of the sound.

For me, the most important part of working with long delays (or infinite loops) is that I be able to create and control rhythms with those delays.  I need to lock in (synchronize) my delay times while I play. Usually I do this manually, by listening, and then using a Tap Tempo patch I wrote (which is what I’ll do when I perform as part of Nick Didkovsky’s Deviant Voices Festival at Spectrum on October 21, and the following day with Ras Moshe as part of the Quarry Improvised Music Series at Triskelion Arts).
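
The logic behind a tap-tempo patch is simple enough to sketch (this is not my Max patch, just the idea in Python): average the time between the last few taps and use that as the delay time.

    import time

    _taps = []

    def tap(max_taps=4):
        # Call once per tap; returns the averaged tap interval in seconds.
        _taps.append(time.time())
        del _taps[:-max_taps]               # keep only the most recent taps
        if len(_taps) < 2:
            return None
        intervals = [b - a for a, b in zip(_taps, _taps[1:])]
        return sum(intervals) / len(intervals)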

Short delays are mostly about resonance. In my next and final post, I will talk more about filters and resonance, why using them together with delay is important, as well as strategies for how to be heard when live processing acoustic sound in an improvisation.

In closing, here is an example from What is it Like to be a Bat?, my digital chamber punk trio with Kitty Brazelton (active 1996-2009 and continuing in spirit). In one piece, I turned the feedback up on my delay as high as I could get away with (nearly causing the microphones and sound system to feed back, too), yelled “Ha!” into my microphone, and set off a sequence of extreme delay changes with an interpolating delay, in a timing we liked. Joined by drummer Danny Tunick, who wrote a part to go with it, we’d repeat this sequence four times, each time louder, noisier, different, yet somehow repeatable at each performance. It became a central theme in that piece, and was recorded as the track “Batch 4,” part of our “She Said – She Said, Can You Sing Sermonette With Me?”, on the Bat CD for the Tzadik label.

Some recommended further reading and listening

Thom Holmes, Electronic and Experimental Music (Routledge, 2016)

Jennie Gottschalk, Experimental Music Since 1970 (Bloomsbury Academic, 2016)

Geoff Smith, “Creating and Using Custom Delay Effects” (for the website Sound on Sound, May 2012) Smith writes: “If I had to pick a single desert island effect, it would be delay. Why? Well, delay isn’t only an effect in itself; it’s also one of the basic building blocks for many other effects, including reverb, chorus and flanging — and that makes it massively versatile.”

He also includes many good recipes and examples of different delay configurations.

Phil Taylor, “History of Delay” (written for the website for Effectrode pedals)

Daniel Steinhardt and Mick Taylor, “Delay Basics: Uses, Misuses & Why Quick Delay Times Are Awesome” (from their YouTube channel, That Pedal Show)

Delays as Music

As I wrote in my previous post, I view performing with “live sound processing” as a way to make music by altering and affecting the sounds of acoustic instruments—live, in performance—and to create new sounds, often without the use of pre-recorded audio. These new sounds have the potential to forge an independent and unique voice in a musical performance. However, their creation requires, especially in improvised music, a unique set of musicianship skills and knowledge of the underlying acoustics and technology being used. And it requires that we consider the acoustic environment and spectral qualities of the performance space.

Delays and Repetition in Music

The use of delays in music is ubiquitous.  We use delays to locate a sound’s origin, create a sense of size/space, to mark musical time, create rhythm, and delineate form.

The use of delays in music is ubiquitous.

As a musical device, echo (or delay) predates electronic music. The repetition of short phrases has been used in folk music around the world for millennia: from Swiss yodels to African call and response, to songs in the round and complex canons, as well as in performances taking advantage of unusual acoustic spaces (e.g. mountains and canyons, churches, and unusual buildings).

In contemporary music, too, delay and reverb effects from unusual acoustic spaces have been put to use: in the Deep Listening cavern music of Pauline Oliveros, in experiments using the infinite reverb of the Tower of Pisa (Leonello Tarabella’s Siderisvox), and in organ works at the Cathedral of St. John the Divine in New York that exploit its 7-second delay. For something new, I’ll recommend the forthcoming work of my colleague, trombonist Jen Baker (Silo Songs).

Of course, delay was also an important tool in the early studio tape experiments of Pierre Schaeffer (Étude aux chemins de fer) as well as Terry Riley and Steve Reich. The list of early works using analog and digital delay systems in live performance is long and encompasses many genres of music outside the scope of this post—from Robert Fripp’s Frippertronics to Miles Davis’s electric bands (where producer Teo Macero altered the sound of Sonny Sharrock’s guitar and many other instruments) and Herbie Hancock’s later Mwandishi Band.

The use of delays changed how the instrumentalists in those bands played.  In Miles’s work we hear not just the delays, but also improvised instrumental responses to the sounds of the delays and—completing the circle—the electronics performers responding in kind by manipulating their delays. Herbie Hancock used delays to expand the sound of his own electric Rhodes, and as Bob Gluck has pointed out (in his 2014 book You’ll Know When You Get There: Herbie Hancock and the Mwandishi Band), he “intuitively realized that expressive electronic musicianship required adaptive performance techniques.” This is something I hope we can take for granted now.

I’m skipping any discussion of the use of echo and delay in other styles (as part of the roots of Dub, ambient music, and live looping) in favor of talking about the techniques themselves, independent of the trappings of a specific genre, and favoring how they can be “performed” in improvisation and as electronic musical sounds rather than effects.

Sonny Sharrock, processed through an Echoplex by Teo Macero, on Miles Davis’s “Willie Nelson” (which is not unlike some recent work by Jonny Greenwood)

By using electronic delays to create music, we can create exact copies or severely altered versions of our source audio and still recognize them as repetitions, just as we might recognize each return of the theme in a piece organized as a theme and variations, or a leitmotif repeated throughout a work. Beyond the relationship of delays to acoustic music, the vastly different types of sounds we can create via these sonic reflections and repetitions have a corollary in visual art, both conceptually and gesturally. I find these analogies especially useful when teaching. The comparisons below, drawn from visual and performing artworks that have inspired my own work, include images, video, and dance: repetitions (exact or distorted), Mandelbrot-like recursion (reflections, altered or displaced and re-reflected), shadows, and delays.  They are analogous to many sound processes I find possible and interesting for live performance.

Sounds we create via sonic reflections and repetitions have a corollary in visual art.

I am a musician, not an art critic/theorist, but I grew up in New York being taken to MoMA weekly by my mother, a modern dancer who studied with Martha Graham and José Limón.  It is not an accident that I want to make these connections. There are many excellent essays on the subject of repetition in music and electronic music, which I have listed at the end of this post.  I include the images and links below to show that the influences on my electroacoustic work are not only in music and audio.

In “still” visual art works:

  • The reflected, blurry trees in the water of a pond in Claude Monet’s Poplars series create new composite and extended images, a recurring theme in the series.
  • Both the woman and her reflection in Pablo Picasso’s Girl Before a Mirror are abstracted; interestingly, the mirror itself is both the vehicle for the reiteration and an object on display.
  • There are also repetitions, patterns, and “rhythms” in work by Chuck Close, Andy Warhol, Sol LeWitt, M.C. Escher, and many other painters and photographers.

In time-based/performance works:

  • Fase, Four Movements to the Music of Steve Reich is a dance choreographed in 1982 by Anne Teresa De Keersmaeker to four early works by Steve Reich. De Keersmaeker uses shadows with the dancers: the shadows create a third (and fourth and fifth) dancer, shifting in and out of focus and turning the reflected images that partner the live dancers into a kind of sleight-of-hand.
  • Iteration plays an important role in László Moholy-Nagy’s short films, shadow play constructions, and his Light-Space Modulator (1930).
  • Reflection, repetition, and displacement are inherent to the work of countless experimental video artists working with video synthesis, feedback, and modified TVs/equipment, starting with Nam June Paik.

Another thing to be considered is that natural and nearly exact reflections can also be experienced as beautifully surreal. On a visit to the Okefenokee swamp in Georgia long ago, my friends and I rode in small flat boats on Mirror Lake and felt we were part of a Roger Dean cover for a new Yes album.

Okefenokee Swamp

Natural reflections, even when nearly exact, usually have some small change—a play in the light or color, or a slight asymmetry—that gives them away. In all of my examples, the visual reflection is not “the same” as the original.   These nonlinear differences are part of the allure of the images.

These images are all related to how I understand live sound processing to act upon my audio sources:

  • Perfect mirrors create surreal new images/objects extending away from the original.
  • Distorted reflections (anamorphosis) give the created image a more separate identity, one that can be understood as emanating from the source image but that is inherently different in its new form.
  • Repetition/mirrors: many exact or near-exact copies of the same image/sound form patterns, rhythms, or textures, creating a new composite sound or image.
  • Phasing/shadows (time-based or time-connected): the reflected image changes over time in its physical placement with regard to the original, creating a potentially new composite sound.

Most of these ways of working require more than simple delay and benefit from speed changes, filtering, pitch-shift/time-compression, and other things I will delve into in the coming weeks.

The myths of Echo and Narcissus are both analogies and warning tales for live electroacoustic music.

We should consider the myths of Echo and Narcissus both as analogies and as warning tales for live electroacoustic music. When we use delays and reverb, we hear many copies of our own voice/sound overlapping each other; we create simple musical reflections of our own sound, smoothed out by the overlaps and amplified into a more beautiful version of ourselves!  Warning!  Just as when we sing in the shower, we might fall in love with the sound (to the detriment of the overall sound of the music).


Getting Techie Here – How Does Delay Work?

Early Systems: Tape Delay

A drawing of the trajectory of a piece of magnetic tape between the reels, passing the erase, record, and playback heads.

A drawing by Mark Ballora which demonstrates how delay works using a tape recorder. (Image reprinted with permission.)

The earliest method used to artificially create the effect of an echo or simple delay was to take advantage of the spacing between the record and playback heads on a multi-track tape recorder. The output of the playback head could be read by the record head and rerecorded on a different track of the same machine.  That signal would then be read again by the playback head (on its new track).  The signal will have been delayed by the amount of time it took for the tape to travel from the record head to the playback head.

The delay time is determined by the physical distance between the tape heads and by the tape speed being used.  One limitation is that the available delay times are only those that can be created at the playback speeds of the tape. (E.g., at a tape speed of 15 inches per second (ips), tape heads spaced 3/4 to 2 inches apart create echoes of 50ms to 133ms; the same spacing at 7ips yields 107ms to 285ms, etc.)
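
The arithmetic is simple enough to check (a quick sketch; the function name is mine):

    def tape_delay_ms(head_spacing_inches, tape_speed_ips):
        # Delay time = distance between heads / tape speed.
        return 1000.0 * head_spacing_inches / tape_speed_ips

    tape_delay_ms(0.75, 15)   # 50.0 ms
    tape_delay_ms(2.0, 15)    # ~133.3 ms
    tape_delay_ms(0.75, 7)    # ~107.1 ms
    tape_delay_ms(2.0, 7)     # ~285.7 ms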

Here is an example of analog tape delay in use:

Longer/more delays: By using a second tape recorder, we can make a longer sequence of delays, but it would be difficult to emulate natural echoes and reverberation because all our delay lengths would be simple multiples of the first delay. Reverbs have a much more complex distribution of many, many small delays, and the output volume of those delays decreases differently in a tape system (more linearly) than it would in a natural acoustic environment (more exponentially).

More noise: Another side effect of creating delays by re-recording audio is that after many re-recordings/repetitions the audio signal starts to degrade, affecting its overall spectral quality as the high and low frequencies die out more quickly, eventually decaying into, as Hal Chamberlin aptly described it in his 1985 book Musical Applications of Microprocessors, a “howl with a periodic amplitude envelope.”

The noise added by degradation, overlapped voices, and room acoustics is turned into something beautiful in I Am Sitting in a Room, Alvin Lucier’s seminal 1969 work.  Though not technically using delay, the piece is a slowed-down microcosm of what happens to sound when we overlap and re-record many, many copies of the same sound and its associated room acoustics.

A degree of unpredictability certainly enhances the use of any musical device for improvisation, including echo and delay. Digital delay makes it possible to overcome the inherent inflexibility and static quality of most tape delay systems, which nonetheless remain popular for other reasons (e.g. audio quality, or the nostalgia discussed below).

The list of influential pieces using tape machines for delay is canonically long.  A favorite of mine is Terry Riley’s Music for the Gift (1963), written for trumpeter Chet Baker. It was the first use of very long delays on two tape machines, a setup Riley dubbed the “Time Lag Accumulator.”

Terry Riley: Music for the Gift III with Chet Baker

Tape delay was used by Pauline Oliveros and others at the San Francisco Tape Music Center for pieces that were created live as well as in the studio, with no overdubs, and which therefore could be considered performances and not just recordings.   The Echoplex, created around 1959, was one of the first commercially manufactured tape delay systems and was widely used in the ‘60s and ‘70s. Advances in the design of commercial tape delays, including the addition of more (and movable) tape heads, increased the number of available delays and the flexibility of changing delay times on the fly.

Stockhausen’s Solo (1966), for soloist and “feedback system,” was first performed live in Tokyo using seven tape recorders (the “feedback” system) with specially adjustable tape heads to allow music played by the soloist to “return” at various delay times and combinations throughout the piece.  Though technically not improvised, Solo is an early example of tape music for performed “looping.”  All the music was scored, and a choice of which tape recorders would be used and when was determined prior to each performance.

I would characterize the continued use of analog tape delay as nostalgia.

Despite many advances in tape delay, today digital delay is much more commonly used, whether in an external pedal unit or computer-based. This is because it is convenient—it’s smaller, lighter, and easier to carry around—and because it is much more flexible. Multiple outputs don’t require multiple tape heads or additional tape recorders. Digital delay enables quick access to a greater range of delay times, and the maximum delay time is simply a function of the available memory (and memory is much cheaper than it used to be).   Yet, in spite of the convenience and expandability of digital delay, analog tape delay continues to be used in some circles.  I would simply characterize this as nostalgia (for the physicality of the older devices and of dealing with analog tape, and for the warmth of analog sound, all of which we associate with music from an earlier time).

What is a Digital Delay?

Delay is the most basic component of most digital effects systems, and so it’s critical to discuss it in some detail before moving on to some of the effects that are based upon it.   Below, and in my next post, I’ll also discuss some physical and perceptual phenomena that need to be taken into consideration when using delay as a performance tool / ersatz instrument.

Basic Design

In the simplest terms, a “delay” is simple digital storage: one audio sample, or a small block of samples, is stored in memory and can then be read back and played at some later time as output. A one-second delay (1000ms), mono, requires storing one second of audio. (At the 16-bit CD sample rate of 44.1kHz, this means about 88KB of data.) These sizes are teeny by today’s standards, but if we use many delays or very long delays it adds up. (It is not infinite or magic!)
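
In other words, a delay line is just a circular buffer: write the incoming sample, read back the one stored a delay-length ago. (A minimal Python sketch; the class and names are mine, for illustration.)

    import numpy as np

    class Delay:
        def __init__(self, delay_samples):
            # e.g. 44100 samples = one second of delay at 44.1kHz
            self.buf = np.zeros(delay_samples)
            self.idx = 0

        def process(self, sample):
            out = self.buf[self.idx]        # the sample stored delay_samples ago
            self.buf[self.idx] = sample     # overwrite it with the new input
            self.idx = (self.idx + 1) % len(self.buf)
            return out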

Besides being used to create many types of echo-like effects, a simple one-sample delay is also a key component of the underlying structure of all digital filters and many reverbs.  An important distinction between these applications is the length of the delay. As described below, when a delay time is short, the input sound gets filtered; with longer delay times, other effects such as echo can be heard.

Perception of Delay — Haas (a.k.a. Precedence) Effect

Did you ever drop a pin on the floor?   You can’t see it, but you still know exactly where it is. We humans naturally have a set of skills for sound localization.  These psychoacoustic phenomena have to do with how we perceive the very small time, volume, and timbre differences between the sounds arriving at our ears.

In 1949, Helmut Haas made observations about how humans localize sound by using simple delays of various lengths and a simple 2-speaker system.  He played the same sound (speech, short test tones), at the same volume, out of both speakers. When the two sounds were played simultaneously (no delay), listeners reported hearing the sound as if it were coming from the center point between the speakers (an audio illusion not very different from how we see).  His findings give us some clues about stereo sound and how we know where sounds are coming from.  They also relate to how we work with delays in music.

  • Between 1-10ms delay: If the delay between the sounds was anywhere from 1ms to 10ms, the sound appeared to emanate from the first speaker (the first sound we hear is where we locate the sound).
  • Between 10-30ms delay: The sound source continues to be heard as coming from the primary (first sounding) speaker, with the delay/echo adding a “liveliness” or “body” to the sound. This is similar to what happens in a concert hall—listeners are aware of the reflected sounds but don’t hear them as separate from the source.
  • Between 30-50ms delay: The listener becomes aware of the delayed signal, but still senses the direct signal as the primary source. (Think of the sound in a big-box store: “Attention shoppers!”)
  • At 50ms or more: A discrete echo is heard, distinct from the first heard sound, and this is what we often refer to as a “delay” or slap-back echo.

The important fact here is that when the delay between speakers is lowered to 10ms (1/100th of a second), the delayed sound is no longer perceived as a discrete event. This is true even when the volume of the delayed sound is the same as the direct signal. [Haas, “The Influence of Single Echo on the Audibility of Speech” (1949)].

A diagram of the Haas effect showing how the position of the listener in relationship to a sound source affects the perception of that sound source.

The Haas Effect (a.k.a. Precedence Effect) is related to our skill set for sound localization and other psychoacoustic phenomena. Learning a little about these phenomena (interaural time difference, interaural level difference, and head shadow) is useful not only for an audio engineer, but is also important for us when considering the effects and uses of delay in electroacoustic musical contexts.

What if I Want More Than One?

Musicians usually want the choice to play more than one delayed sound, or to repeat their sound several times. We do this by adding more delays, or we can use feedback, routing a portion of our output right back into the input. (Delaying our delayed sound is something like an audio hall of mirrors.) We usually route back only some of the sound (not 100%) so that each repetition is a little quieter and the sound eventually dies out in decaying echoes.  If our feedback level is high, the sound may recirculate for a while in an almost endless repeat, and may even overload/clip if new sounds are added.

When two or more copies of the same sound event play at nearly the same time, they comb filter each other. Our sensitivity to the small differences in timbre that result is a key to understanding, for instance, why the many reflections in a performance space don’t usually get mistaken for the real thing (the direct sound).   Likewise, if we work with multiple delays or feedback, when multiple copies of the same sound play over each other they necessarily interact and filter each other, causing changes in the timbre. (This relates again to I Am Sitting in a Room.)
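
For the simplest case, mixing a sound with one copy delayed by D samples (output = input + input delayed by D), the cancellations fall at odd multiples of sr/(2×D), which is easy to verify (a quick sketch; names mine):

    def comb_notches(delay_samples, sr, count=5):
        # Frequencies where a copy delayed by delay_samples cancels the original.
        base = sr / (2.0 * delay_samples)
        return [base * (2 * k + 1) for k in range(count)]

    comb_notches(44, 44100)   # ~1ms delay: notches near 501, 1503, 2506, ... Hz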

In the end, all of the above (delay length, using feedback or additional delays, overlap) determine how we perceive the music we make using delays as a musical instrument. I will discuss feedback and room acoustics, and their potential role as musical devices, in the next post later this month.


My Aesthetics of Delay

To close this post, here are some opinionated conclusions of mine based upon what I have read/studied and borne out in many, many sessions working with other people’s sounds.

  • Short delay times tend to change our perception of the sound: its timbre, and its location.
  • Sounds that are delayed longer than 50ms (or even up to 100ms for some musical sounds) become echoes, or musically speaking, textures.
  • At the in-between delay times (the 30-50ms range, give or take), it is the input (the performed sound itself) that determines what will happen. Speech sounds or other percussive sounds with a lot of transients (high-amplitude, short-duration) will respond differently than long resonant tones (which will likely overlap and be filtered). It is precisely in this domain that the live sound-processing musician needs to do extra listening and evaluating, to gain experience and predict what the outcome might be. Knowing what might happen in many different scenarios is critical to creating a playable sound-processing “instrument.” (These rules of thumb are summarized in the sketch after this list.)
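
Condensed into a rule of thumb (the boundaries are approximate and, as described above, depend heavily on the input sound):

    def delay_character(delay_ms, input_overlaps=False):
        # Rough mapping of delay time to the perceived musical result.
        if delay_ms < 30:
            return "changed timbre/location (filtering, resonance)"
        if delay_ms <= 50:
            return "depends on input: transients ring as echoes, long tones get filtered"
        return "texture" if input_overlaps else "echo / rhythm"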

It’s About the Overlap

Using feedback on long delays, we create texture or density, as we overlap sounds and/or extend the echoes to create rhythm.  With shorter delays, using feedback instead can be a way to move toward the resonance and filtering of a sound.  With extremely short delays, control over feedback to create resonance is a powerful way to create predictable, performable, electronic sounds from nearly any source. (More on this in the next post.)

Live processing (for me) all boils down to small differences in delay times.

Live processing (for me) all boils down to these small differences in delay times—between an original sound and its copy (very short, medium, and long delays).  It is a matter of the sounds overlapping in time or not.   When they overlap (due to short delay times or use of feedback), we hear filtering.   When the sounds do not overlap (delay times longer than the discrete audio events), we hear texture.   A good deal of my own musical output depends on these two facts.


Some Further Reading and Listening

On Sound Perception of Rhythm and Duration

Karlheinz Stockhausen’s 1972 lecture The Four Criteria of Electronic Music (Part I)
(I find intriguing Stockhausen’s discussion of unified time structuring and his description of the continuum of rhythms: from those played very fast (creating timbre), to medium fast (heard as rhythms), to very, very slow (heard as form). This lecture both expanded and confirmed my long-held ideas about the perceptual boundaries between short and long repetitions of sound events.)

Pierre Schaeffer’s 1966 Solfège de l’Objet Sonore
(A superb book and accompanying CDs with 285 tracks of example audio. Particularly useful for my work and the discussion above are sections on “The Ear’s Resolving Power” and “The Ear’s Time Constant” and many other of his findings and examples. [Ed. note: Andreas Bick has written a nice blog post about this.])

On Repetition in All Its Varieties

Jean-François Augoyard and Henri Torgue, Sonic Experience: A Guide to Everyday Sounds (McGill-Queen’s University Press, 2014)
(See their terrific chapters on “Repetition”, “Resonance” and “Filtration”)

Elizabeth Hellmuth Margulis, On Repeat: How Music Plays the Mind (Oxford University Press, 2014)

Ben Ratliff, Every Song Ever (Farrar, Straus and Giroux, 2016)
(Particularly the chapter “Let Me Listen: Repetition”)

Other Recommended Reading

Bob Gluck’s book You’ll Know When You Get There: Herbie Hancock and the Mwandishi Band (University of Chicago Press, 2014)

Michael Peters’s essay “The Birth of the Loop”
http://preparedguitar.blogspot.de/2015/04/the-birth-of-loop-by-michael-peters.html

Phil Taylor’s essay “History of Delay”

My chapter “What if your instrument is Invisible?” in the 2017 book Musical Instruments in the 21st Century, as well as my 2010 Leonardo Music Journal essay “A View on Improvisation from the Kitchen Sink,” co-written with Hans Tammen.

LiveLooping.org
(A musician community built site around the concept of live looping with links to tools, writing, events, etc.)

Some listening

John Schaefer’s WNYC radio program New Sounds has featured several episodes on looping:
Looping and Delays
Just Looping Strings
Delay Music

And finally something to hear and watch…

Stockhausen’s former assistant Volker Müller performing on generator, radio, and three tape machines