
Jane Ira Bloom: Valuing Choices Made in the Moment  

Video presentations and photography by Molly Sheridan
Transcription by Julia Lu

While thinking beyond musical genres is a hallmark of a great many of today’s musical creators, Jane Ira Bloom clearly maneuvers within a genre while at the same time subverting any attempt at making generalizations about her work. The primary mode of music-making she engages in is performing her own instrumental compositions on the soprano saxophone in the company of a small group of like-minded collaborative improvisers, and those compositions are clearly indebted to the jazz tradition. But there are important exceptions to just about every detail of that description that are key to defining who she is as a musician.

She primarily performs her own musical creations, but just about every album she has ever recorded, as well as most of her live performances, also includes at least one example of her own extremely personal interpretations of an American standard or a classic jazz composition. But while the American songbook has been an unending fount of inspiration for her improvisations and has even informed the ways she has constructed melodies in her own compositions, she has never featured a singer in any of her projects. And, with the exception of her most recent recording, Wild Lines, which includes recitations of poetry by Emily Dickinson, all her performances are un-texted instrumentals. She performs almost exclusively on the soprano saxophone (there’s been a stray track here and there over the years of her on alto), but she began her musical studies on the piano, and the grand piano she keeps in her living room is the main instrument on which she composes. She primarily performs with and composes for a small cadre of fellow travelers with whom she has worked for decades (e.g. Fred Hersch, Mark Dresser, Bobby Previte), but she has also written music for orchestra, wind band, dance, and film, and has participated in improvisatory world music collaborations with Chinese pipa virtuoso Min Xiao-Fen and South Indian vocalist and vina player Geetha Ramanathan Bennett (who died just a day after we recorded our talk with Jane Ira Bloom). Bloom acknowledges and embraces the jazz tradition, but for more than 30 years her saxophone improvisations have incorporated an electronic music component which she triggers in real time through the use of foot pedals, and sometimes the other musicians in her combos operate electronic devices as well.

“I’m definitely a lateral thinker,” Bloom acknowledged when we visited her to talk about her various musical experiences and how they have shaped her aesthetics as a composer and a performer. “There’s no question in my mind that my strong background as a melodist, as someone who’s loved and studied melody in many forms, takes me wherever I go. I’m a saxophonist who’s very much interested in sound, and I’ve spent a long time working on a particular sound that I really invested a lot of thought in on the instrument I play—the soprano saxophone. And I’m interested in phrasing and breath. All those things travel with me wherever I go, and when I’m using the live electronics, that’s where they’re compelled from. It’s me; it’s not a black box. It’s not an idea. I’ve learned an awful lot from the Afro-American music tradition and the American songbook, as well as exposing myself to world musics and all kinds of contemporary classical music. … I know what’s authentic and real about who I am, and I take that with me wherever my imagination takes me.”

In addition to the aforementioned 2017 Emily Dickinson-inspired album, Bloom’s imagination has led her to create a series of responses to abstract expressionist paintings by Jackson Pollock (“the freedom he was in touch with … is something that, as jazz musicians, we can tap into so easily”) as well as motion-inspired melodic improvisation (“I collaborated with choreographers who were much more cognizant of this quality … you could make sound change by moving”). Her use of real-time live electronic processing in her saxophone playing has been an ongoing component of her musical explorations. Her description of it makes it seem a lot simpler than it actually sounds:

Basically what I do with the electronics is I still play the saxophone, but I play through microphones that access electronic sounds that I blend and combine with my acoustic sound. And I trigger them using foot pedals, live and in the moment. Over the years, I’ve gotten skillful playing on one foot and tapping my toe on some pre-programmed settings that I’ve designed—on basically an old harmonizer and an old digital delay—and combining them in unusual ways. … I’ve spent some time trying to get the way I use them as an improviser as fluid as if it was a key on my saxophone. … It makes sense to me when the sounds appear and when they don’t, when I choose to use them and when I choose not to use them. It’s got to be fast. It’s got to be intuitive, because I’m using them very much in the moment of improvising.

Perhaps the most unusual place Bloom’s imagination has taken her was the American space program, which happened, as she explained to us, as a result of an unsolicited letter to NASA that her friend, actor Brian Dennehy, suggested she write.

“I thought he was nuts,” she remembered. “But some time went by and I actually sat down and I wrote a letter in the dark—a letter in a bottle, right?—inquiring whether NASA had ever done any research on the future of the arts and space, in zero gravity environments. Something I was always fascinated with. Six months later, I get this envelope back, which has the NASA logo on the front of the envelope from a guy by the name of Robert Schulman, director of the NASA Art Program. … Bob and I corresponded for years. He was interested in jazz musicians—lucky me, you know. Eventually I posed the idea, how about NASA commissioning the first musician for the Art Program? And he loved the idea.”

Dennehy’s “nutty” suggestion ultimately culminated in a 1989 concert at the Kennedy Space Center featuring the Brevard Symphony Orchestra in a performance of Fire and Imagination, an original work by Bloom scored for soprano saxophone, electronics, orchestra and “a whole bunch of ringers, the jazz musicians that were in the piece.” Although the work has not been performed in its original version since the premiere and has never been commercially recorded (though some reworkings of that material surfaced on her landmark 1992 album Art and Aviation), Bloom’s association with NASA has had some unusual ripple effects. In 1998, an asteroid discovered on September 25, 1984, by B. A. Skiff at the Anderson Mesa Station of Lowell Observatory was named after her—6083 Janeirabloom!

As for what her next project will be, she has no firm ideas and, as an adherent to valuing choices made in the moment, she seems to like it that way.

A conversation with Frank J. Oteri in Bloom’s Manhattan apartment
August 14, 2018—5:00 p.m.

Frank J. Oteri:  You do a variety of different things.  You’re a composer, a saxophonist, and a bandleader. Is there one word that you gravitate toward more than any other to describe what you do?  If you were to meet somebody randomly, say on an airplane, and that person asked what you did, what would you say?

“I’m definitely a lateral thinker”

Jane Ira Bloom:  Wow, nobody ever asked me that before!  I’ve got to think about that.  Usually I always call myself a saxophonist-composer, but I’m definitely a lateral thinker because I’ve always been interested in multi-disciplinary thinking.  It’s an interesting question, but I haven’t got an immediate answer.

FJO:  That’s fine, but there’s a corollary to that, which is perhaps equally unanswerable. You have been inspired by so many different things—such as electronics and non-Western musical traditions—and you’ve even composed works for symphony orchestra and wind band, as well as collaborated with filmmakers and choreographers, but your music primarily exists within a rubric that, for lack of a better term, we call jazz.  So if that same somebody asked about what kind of music you do, what would you say to that?

JIB:  I can’t come up with words.  I think the world of my imagination goes wherever it goes and has been its own explanation for itself, whether I’m interested in dance, lighting, theater, film, movement, painting, or whatever grabs my attention.  I’m just trying to keep myself interested. I think, as time has gone on, I’m just letting that process happen more fluidly than it did in the beginning when there were more careful definitions to the different areas where I worked, whether I’m working with world music musicians or with jazz or new music improvisers or in an environment that looks even slightly more classical.  It’s just me being interested and still being curious.  Maybe that’s why it’s not so easy for me to find the categorical word for what it is, but I can tell you how it feels.

FJO:  So how does it feel?

JIB:  It feels open.  It feels like there are possibilities.  It feels like I can’t always anticipate what’s going to happen next.  I go through periods of time where I get interested in a topic and go down the rabbit hole. Then there are also fallow periods where I don’t know what’s coming next, and I start getting nervous.  It’s a kind of ebb and flow.

FJO:  So are you okay with the word “jazz” to describe your music?

JIB:  Sure.  Creative improvisation.  We’re improvisers who make up musical ideas in the moment and value that—that’s the important thing.  We value those choices.  I guess the thing I’ve learned over time is that the more you’ve done it, the more environments and the more experience you’ve had doing it, sometimes you can make better choices.

FJO:  I would posit that in addition to what you said about valuing the choices that you arrive at in the moment, you also value the choices that other musicians make in the moment who are performing with you. That seems to be a very big part of it.

JIB:  Absolutely.  I’m a completely collaborative animal.

FJO:  One of the reasons I wanted to begin our discussion by asking these questions is that we have these conversations on NewMusicBox so that music creators have an opportunity to describe their music in their own words, rather than having it filtered through someone else’s ideas about them. In preparing for our talk, I was reading a lot of things that others have said about you, and one thing that struck me, which I read in a few different places, was seeing you described as “an avant-garde jazz composer.” While there are certainly elements of what you do that are extraordinarily progressive and very innovative, I personally don’t think the term avant-garde accurately describes it since, no matter how far out you go into some of these worlds, you’re always very clearly mindful of the tradition at the same time.

JIB:  Well, there’s no question in my mind that my strong background as a melodist, as someone who’s loved and studied melody in many forms, takes me wherever I go.  I’m a saxophonist who’s very much interested in sound, and I’ve spent a long time working on a particular sound that I really invested a lot of thought in on the instrument I play—the soprano saxophone.  And I’m interested in phrasing and breath.  All those things travel with me wherever I go, and when I’m using the live electronics, that’s where they’re compelled from.  It’s me; it’s not a black box.  It’s not an idea.  I’ve learned an awful lot from the Afro-American music tradition and the American songbook, as well as exposing myself to world musics and all kinds of contemporary classical music.  But I don’t reflect a lot on what I call myself.  I know what’s authentic and real about who I am, and I take that with me wherever my imagination takes me.

FJO:  One thing that definitely strikes me about your love for the jazz tradition and the American songbook is that although most of your recorded output is devoted to your own compositions, with the exception of your album Modern Drama, I can’t think of any recording of yours that doesn’t include at least one reinvention of either a song standard or a classic jazz composition.

JIB:  You’re absolutely right.  I guess I can’t let go of that.  And Sixteen Sunsets was a compilation of American songbook standards.  It was my ballads album.

FJO:  So what motivates you to keep going back to that material?

“I didn’t learn it; I grew up listening to it. It’s in my bones.”

JIB:  Those are primary sounds for me.  That understanding about how melodies work comes from knowing that music on the most primary level.  I didn’t learn it; I grew up listening to it.  It’s in my bones.  I know the lyrics to all the songs.  So I think the knowledge of that music and that largely Jewish songwriting tradition—whether it comes from cantorial song or not—also follows me, and it informs me even when I’m writing. The kind of linear line-writing that you hear on many of my original compositions—they have this different kind of motion and flow, but it’s informed by the same kind of pearl stringing that I’ve learned from studying Harold Arlen or Richard Rodgers, their great melodies and why they work.  That stuff still informs even the melodies that I write that don’t sound anything like that.

The pile of pencils and erasers that Jane Ira Bloom stuffs inside her piano, on the frame in front of the strings, along with some music manuscript paper.

FJO:  It’s interesting to hear you talk about melody and line and breath as I stare to my right at your beautiful old grand piano, which has manuscript paper on it and a bunch of pencils and an eraser stacked inside it.  And I’m remembering reading somewhere that although you’ve been playing the saxophone since you were a child, your first instrument was actually the piano.

JIB:  A composer needs to know the piano, and I studied piano for a while. I started when I was very young. But I must have been 9 or 10 years old when I started studying saxophone in public school.  Then it wasn’t long after I began studying that I started to study with this master teacher Joe Viola, when I was living outside Boston.  Saxophone players know about this guy.  He was a great woodwind virtuoso, and he had this special feeling for the saxophone. Why did I pick up the saxophone in the first place? I was in third grade and it was shiny, that’s why.  But the soprano saxophone—I think when I heard that sound, I said, “Yeah, I like that!”

FJO:  Of course, the soprano saxophone has the most unusual history of the entire saxophone family in jazz.  There isn’t this through line the way there is with alto players or tenor players.  There was Sidney Bechet early on, but later a huge gap during the bop era. Then all of a sudden Steve Lacy appeared on the scene, and soon after that John Coltrane took up the soprano sax, though not as his primary instrument.  And starting in the ‘70s, the soprano sax has had this whole other life as a smooth jazz instrument due to Grover Washington and, later on, Kenny G, who is almost an exact contemporary of yours.  But what you do sounds nothing like that.  Going back to running into that random person talking to you at the airport, when you say that you play the soprano sax, I’m sure the first thing that person is going to say is, “Oh, like Kenny G?”

JIB:  Not any more.  Actually, the latest thing people say is, “Do you play pool?”  They see the soprano case, and it looks like a pool cue case.  But it used to happen a while back, and the fact that people knew what a soprano saxophone looked like was pretty interesting—just on a general audience level.  That’s certainly what Kenny G brought to the instrument, so thank you.

I’ve always thought that if you’re the kind of person who’s interested in playing an instrument that doesn’t have too much of a stylistic lineage attached to it—unlike all the great saxophone players on the tenor and the alto—and if you’re interested in doing something new, soprano is maybe not a bad choice.  It suits me, for sure, that it has the history that it does and that I’ve been able to create a sound on it.  I suppose you could think that, not having been over-influenced by a whole stylistic lineage, you’d be free to create a new sound on it.

FJO:  That’s a very inspiring thought, although you were not completely without influences. You mentioned Joe Viola.

JIB:  A primary influence, yeah.

FJO:  But since there isn’t this lineage in terms of who you grew up listening to and who you gravitated toward musically, it probably wasn’t other soprano players.

“I pick my own notes.”

JIB:  No, not at all.  I was listening to Sonny Rollins.  I was listening to all kinds of things.  I was listening to violin players, but especially trumpet players.  And I was listening to vocalists.  I was getting ideas from other places that I’ve attached to this instrument.  I spent some time studying how people negotiated on a different instrument.  For example, I’ve always loved the sense of struggle that’s in the trumpet.  That’s what I’ve always loved about Booker Little and Miles Davis, so I’ve gleaned something from them.  Same thing with Sonny Rollins.  It’s not necessarily looking around for influences to imitate the notes that people play; it’s more getting a kinesthetic feel for where they were that informs me and what I do.  I pick my own notes.

FJO:  Now in terms of picking those notes, you said that the piano is a necessary thing for composing.

JIB:  Yeah, there it is.

FJO:  So you compose your music at the piano, not at the saxophone, or do you do a little bit of both?

JIB:  Sometimes ideas come from the horn, too, so a little of both.  But primarily I sit at the piano.

Jane Ira Bloom sitting in front of her grand piano.

FJO:  One of the most interesting comments we’ve recorded in the last few years came when we did a talk with Béla Fleck, who’s now writing for orchestra.  He talked about how he came up with clarinet lines in the orchestration at the banjo.  He composes from the banjo; he jots down ideas in banjo tablature, and then someone else turns it into something that other players can read from.

JIB:  Cool. That’s so unique.

FJO:  I thought that your compositional process might have been somewhat similar, but then I learned you had a background in piano. When we walked in and saw the piano with all the manuscripts on it, I realized that the way you write music was completely different and that the piano plays a significant role in how you compose.

JIB:  Well, for the harmonic information that you hear on my original compositions, yeah.  But let’s face it, I’m a line player.  I’m a horn player, so I play the piano like a horn player.  They inform each other, believe me.

FJO:  In terms of what informs your musical ideas, for almost a century people have come up through improvisatory music by woodshedding and apprenticing as a side person in other people’s ensembles.  What’s amazing to me is that you really didn’t do that at all.  You seem to have emerged fully formed. I’ve only heard two albums that you’re a side person on, and I think there are only three.

JIB:  There are a few.

FJO:  Well, the two that I am aware of are both really wonderful records, but you recorded them after you had already released recordings under your own name.  The first one is this really odd record from pretty early on in your career, Frederick Hand’s Jazz Antiqua.

JIB:  Oh my goodness, yeah. This flute player, Keith Underwood, was a friend of mine from New Haven, from Yale.  He was doing this work with Fred Hand, so when the call went out for soprano saxophone, I think Keith told Fred about me.  That was a long time ago.  I’m trying to think of some other ones.  I apprenticed with vibes player David Friedman and recorded with him.  I also recorded some albums, but it wasn’t at that early time, with vocalist Jay Clayton and did some guest appearances on some other people’s albums. But you’re right.  Largely I had a different path.

Coming out of New Haven in the ‘70s, I was around a fascinating community of new music improvisers and jazz musicians.  I’ve read books about this. They now call this the New Haven Renaissance. If I listed all the musicians who were actually in New Haven at that one time in the ‘70s—it was this fascinating creative music community and everybody was inspiring everybody else.  At that time, Wadada Leo Smith was in New Haven, and he was making albums on his own—LPs; there were no such things as CDs then.  He had important music to document that he was playing, and there were no record companies that were getting Leo to record for them.  So he was making his own albums and documenting his own music. Everybody got inspired by him: George Lewis, Gerry Hemingway, Pheeroan akLaff, myself, Mark Dresser, and Mark Helias—loads and loads of musicians were there, and it inspired all of us.  I was inspired to start my own record company.  It was like 1976.  I had been playing duets with a bass player named Kent McLagan.  We had important music that we were making.  Why not document it?  And I learned how to make a record and how to promote my own music. Trial by fire, I learned how to do it myself, by asking a lot of questions and making a lot of mistakes and figuring it out. They turned out to be my calling cards when I moved to New York City.  That’s a really different path than going off to apprentice with some great. I have a few early stories. I remember I sat in once with Mercer Ellington. But I knew that wasn’t my path.  It just wasn’t me, so I followed this different direction.

FJO:  I have to confess that I don’t know either of those first two records, aside from the little snippets from them that you posted on your website—one of which was a very intriguing gamelan-tinged piece.

JIB: Oh, “Shan Dara.” That’s with David Friedman.

FJO:  I’d really love to hear the whole thing one day. But after these two completely self-produced and self-released albums, you recorded an album for a very highly respected independent label, Enja, with an unbelievable cast of characters.  Two of the members of the quartet on that album had been part of the landmark Ornette Coleman Quartet—Charlie Haden and Ed Blackwell.  And the other player was Fred Hersch, who went on to become a very important collaborator of yours. So how did this come together?

JIB:  Thank you, Matthias Winckelmann, the head of Enja Records. He knew about me through David Friedman, the vibes player, because I’d been on tour with David.  He said, “I’d like to make a record; who’d you like in your rhythm section?”  I was given the chance to name my dream rhythm section. So wow, hell, I want to play with Charlie Haden and Ed Blackwell!  I want Fred Hersch playing piano with me! It was just me having my chance to pick the dream rhythm section of all time.

FJO:  So you didn’t know those people?  You’d never worked with any of them before?

JIB:  I had met Blackwell and I had played with him in New Haven.  And Fred and I had also done some playing together.  I don’t think I had played with Charlie, but I knew I wanted to play with him.

FJO:  To stray a little bit from the chronology here, I find your history of making recordings to be somewhat emblematic of our times.  You formed your own record label.  After that, you recorded an album on this really prestigious independent label.  Then you got picked up by one of the global Goliaths, Columbia/CBS, now Sony.  You did two albums with them.  Then you went back to doing stuff on indies—a series of really important albums on Arabesque, a terrific label which no longer exists, and then a disc on ArtistShare. But your recent albums are back on your own label. So you’ve come full circle.

“I was the only self-producing jazz artist at CBS.”

JIB:  Complete circle.  But having all the skills as a producer from the get-go has been an asset throughout everything.  I was the only self-producing jazz artist at CBS.  I produced those albums myself.  It was unheard of.  But it was because I had the skills.  At the time, George Butler was the A & R person at CBS.  He knew I could do it.  He had evidence. But isn’t it interesting—the full circle?  I started off on Outline Records, went around the block, and now I’m just back doing what I always did on Outline Records.  And, you know, it just has kind of worked.  I’ve been making albums for so long now that I’ve been fortunate enough that even with an independent label, when I’m ready I can produce an album and it comes to the attention of people in the writing community and the jazz radio community and they look forward to it.  I have a long-time history with people.

And I work with a terrific team. Max Horowitz at Crossover Media has been working with me for over 15 years, and now my niece Amanda Bloom is working with him. So I’m not doing it by myself anymore.  I’ve got good help.  And I also work with Jim Eigo at Jazz Promo Services. These are people who are very, very helpful.

FJO:  I imagine the same has been true for how you’ve published your music.  You’ve written several works for wind ensemble, as well as for orchestra, so you had to prepare scores and parts for all of these.  Is there a place where people can go to get this material?  I imagine it’s all self-published.

JIB:  Yeah, I’ve got them.  All the scores and parts are sitting behind those two cabinets over there.

Shelves in Jane Ira Bloom's cabinet filled with her orchestral scores and parts.

FJO:  So you had a whole self-publishing operation, preparing performance materials, renting them out, etc.?

JIB:  Well, at that time I was getting grants and I got help from some great copyists to find my way through the orchestra.  I remember a particularly wonderful copyist by the name of Randa Kirschbaum, who is the best there was and who helped me get through my orchestra experiences.  That’s a whole other issue.  But I didn’t find a continuation of that work that was easy for me at that time, and I was less successful about recording a large ensemble work.  So the stuff that you hear is for smaller ensembles.

FJO:  It’s all very personal and very intimate: the exact opposite of orchestral music. You’ve mostly recorded quartets—you with piano, bass, and drums. But you also frequently feature unaccompanied soprano saxophone solos on many of your recordings, and Early Americans, the recording you made just prior to your most recent one, is with a trio of just you, bass, and drums—no piano.

JIB:  Yeah, I’m just getting comfortable with that.  I’ve been playing in a trio for years and years with Mark Helias and Bobby Previte, and finally the guys said, “Hey, Jane, it’s time to document this thing.”  So we literally just went into the studio and did what we do.  It was a long time coming, but you can feel how natural it is. And winning a Grammy for surround sound for that, I can’t tell you how it makes me smile on the inside, collaborating with the engineer Jim Anderson and my co-producer Darcy Proper.  These were people who took me to a new place.

FJO:  So in your experience does winning a Grammy still have the ability to get significant attention for a recording? Does it increase sales? What role does it play at this point?

JIB:  Well, I did start getting more calls. It’s just more public awareness of my work, that’s all.  There’s just something about the mystique of it.  The fact that this jazz trio album won in a category of music against musics from all other kinds of disciplines was really a very satisfying moment for us.  We didn’t expect it.  There were all kinds of music, but it was about the surround sound technology and the music that made it happen.

Jane Ira Bloom's Grammy

FJO:  Going back to talking about your earlier large ensemble music for a moment, creating music with a small ensemble of people you’ve worked with for a long time is such a stark contrast to how orchestral music, especially, gets rehearsed, performed, and—if you’re fortunate enough—recorded. Creating music for a large group of people you might never have met before is a very different experience from working with a small group of creative improvisers whom you’ve known for years. You know what they can do and you have an idea about what they’re going to bring to your music, as opposed to when you’re dealing with a large ensemble, for whom you have to have everything worked out in advance and very clearly notated and with whom you’re lucky if you get two rehearsals.

JIB:  Oh believe me, I know.  You spend several years writing a piece of music, you get a few hours of rehearsal, and boom.  That was a startling realization.  They’re completely different worlds, and the task and the skill of the colorist, the orchestrator—their knowledge of instruments and their combinations and the unique qualities that create sonic originality in the orchestra—is a skill like no other.  I was dabbling.  I was just taking my world and seeing where I could go in that playground.  But the world that I largely work in is, as you say, more long-term collaborations with people who I’ve gotten to know over long periods of time.

“My greatest excitement comes from playing with musicians who I know really well.”

I tend to stay playing with people a lot longer than most.  I think it’s because of what you’re talking about, that unconscious communication that develops among improvising musicians over long periods of time.  Not that it shouldn’t be informed by new input and new ideas, because we’re all growing and are going in different directions at times, but I do truly value what’s very special about musicians who’ve known each other and played with each other for a long time—particularly when you go into the studio, which has its own set of issues.  How do you get spontaneity and creativity and the unexpected to still happen in places where just about everything in the environment is trying to tell you the opposite of that?  I tend to find my greatest excitement comes from playing with musicians who I know really well.

FJO:  That’s very different from that first non-self-released recording where you picked your dream team, and then they just showed up at the studio and you recorded an album with them.

JIB:  Yeah, I think I got together with Blackwell and Fred a couple of times, but I don’t think Charlie was ever there for any of the rehearsals!

FJO:  Now, for Modern Drama, was that an ensemble that had been touring or was that also put together just to make the recording?

JIB:  We’d been playing together some.  It was a combination of some of my work with vibist David Friedman and some developing work over a long period of time with Fred Hersch, and at that time it was Ratzo Harris on bass and Tom Rainey on drums.  That was an expression of things I was doing with live electronics, compositions that expressed that, and I wanted to document that and this very special chemistry with those people.

FJO:  It would be great to have you explain how you operate the electronics in a performance, but first, how did you first become interested in working with electronics and how did you learn about it?

JIB:  I always loved electronic sound—I’m talking early electronics, analog electronics.  I’m talking about when the Moog synthesizer first hit and when some of the first composers integrated electronics into their music, like [Morton Subotnick’s] Silver Apples of the Moon.  I can remember being in college and studying electronic music with Robert Moore, having our first hands-on sessions with these synthesizers that looked like refrigerators.  There were lots of faders and dials.  That’s how I learned about electronics. It was really old fashioned.  So I have a predilection in my thinking toward this less digital and more analog approach to these Forbidden Planet kinds of sounds.  That’s what appeals to me.  So I worked with some specialists who helped me design what you would call an effects processing setup.

Basically what I do with the electronics is I still play the saxophone, but I play through microphones that access electronic sounds that I blend and combine with my acoustic sound.  And I trigger them using foot pedals, live and in the moment. Over the years, I’ve gotten skillful playing on one foot and tapping my toe on some pre-programmed settings that I’ve designed—on basically an old harmonizer and an old digital delay—and combining them in unusual ways.  What you’re hearing on the recordings is balancing that electronic sound with the acoustic.  It blends a little easier because I’m dealing with more analog kinds of electronic sounds.  They’re not as cold and digital sounding as some can sound.  I’ve spent some time trying to get the way I use them as an improviser as fluid as if it was a key on my saxophone.  I wanted to have the breath that still compels my saxophone sound to the electronic sound.  I still wanted to have the phrasing that’s behind who I am as a saxophonist.  I’m still a saxophone player.  That’s really what’s at the core of it.  It’s just I hear this expanse of electronic sound that can open up from the acoustic.  And that’s why I feel like it makes sense to me.  It makes sense to me when the sounds appear and when they don’t, when I choose to use them and when I choose not to use them.  It’s got to be fast. It’s got to be intuitive, because I’m using them very much in the moment of improvising. And it has to have a warmth and a breath that is still compelled from being a saxophone player.

FJO:  So in terms of it being in the moment, you’ve got these pre-set things, but you might decide to take it out of the recording studio into a live performance, let’s say, which comes with another whole set of baggage.  How do you make sure the space can handle the balances with that?

JIB:  It’s always a balancing act.

FJO:  But it could be that the spirit moves you in a live setting and there are tons of electronics in some of them, or it could be that the spirit doesn’t move you and you’re completely acoustic.  That decision happens in the moment.

JIB:  It does.  And also the composer in me is thinking about a set of music that has a beginning, a middle, and an end, and also hears when the ear needs to relax from being saturated with electronic sound, when things need to thin out, just as an orchestrator would go from a thicker density to a thinner density.  There’s a lot of skill to thinking about how you go from an acoustic to an electronic place in a piece that helps listeners’ ears not feel jarred.  I have thought about that a lot.  When you hear the electronics on the recordings, there’s a lot of extra help from Jim Anderson, now my almost life-long engineer. How we work, how we record the saxophone, how the electronics appear in the sonic picture, lots and lots of detailed thinking goes into making this thing that I’m talking about in a recorded fashion.

FJO:  I wish I could have heard this material live, if it was done live, but one of my all-time favorite recordings of yours is Like Silver Like Song, which is the one record where you’re not the only person using electronics.

JIB:  Jamie Saft, what a foil.  Mark Dresser. Bobby Previte.  All master composers, by the way, in their own right.  And, interestingly enough, whether they’re playing acoustically or not, all are clearly influenced by electronic thinking in their sonic palette.  It was another dream team.  I love that recording, too.  I treasure listening to the music that we made together.

FJO:  How did that material work live?  Did it work live?

JIB:  It was easy.  When the guys are on the same page with you, it’s just fun.

FJO:  But in order to make it cohere in a live performance setting, did you have a live mixer with you on stage?

JIB:  I would have loved to have had an onstage mixer.  But we were all composers balancing our instrumental contribution live somehow and doing the best we could.  We played in all kinds of spaces.  I remember once playing in the Rose Planetarium with Jamie and Mark. Somehow we made it work.

FJO:  To take it back to Modern Drama, there’s a lot of stuff on there that seems like it would be hard to replicate live.

JIB:  The only thing that would be hard to replicate was the gizmo designed by my friend Kent McLagan, a bassist with whom I spent my early years performing and who is also a mechanical engineer and physicist.  We designed this strain gauge attachment that we put on the bell of the soprano so that, based on how fast I was sweeping the bell of the horn, it would create a flurry of sound regeneration in the harmonizer.  So I kind of hot-rodded my harmonizer to be controlled by this strain gauge—Kent called it a strain gauge; it was measuring velocity.

FJO:  So that’s the wacky sound on “Rapture of the Flat”?

JIB:  Yeah, and it appeared on many things.  On “Over Stars,” a lot of the electronic, silvery, shimmering sounds that you hear, that’s the strain gauge of me swirling the soprano around.

FJO:  I’m a huge fan of “Rapture of the Flat” since it’s such a strange combination of things. It starts out with this kind of straight-ahead rock and roll riff, but then all of a sudden it becomes this insane, out-there electronic thing.

JIB:  It’s one of the pieces I dearly love listening to.  I’ll never forget Fred Hersch playing the Hammond B3.  That was a great time we had doing that. But the strain gauge wasn’t very portable.  It looked like a piece of equipment out of War of the Worlds actually.  But I still travel with the harmonizer and the digital delay. They look like antiques.  And I have these foot pedals and stuff, it’s very old-fashioned, live-electronics effects processing.  It’s not fancy, but I can still do it.

FJO:  Now when you say War of the Worlds, where my mind immediately goes is thinking about how you got connected to NASA.

JIB:  Wow, that’s a story.  Flashback to me in the 1980s.  Things were not going great with my career. I was having dinner with a friend of mine, the actor Brian Dennehy.  And I said, “Brian, things just aren’t looking so good.”  This is a true story.  Brian said to me, “Well, what are you interested in?”  And I said, “Well, I’ve always been interested in the space program.  I’ve watched every launch since the Mercury days, and I’ve always been fascinated with space exploration.”  He says, “Well, why don’t you write a letter to NASA?”  I said, “What do you mean, write a letter to NASA?”  “Just write a letter.  Tell ‘em what you’re interested in.” I thought he was nuts.

“Brian Dennehy said, ‘Why don’t you write a letter to NASA?’ … I thought he was nuts.”

But some time went by and I actually sat down and I wrote a letter in the dark—a letter in a bottle, right?—inquiring whether NASA had ever done any research on the future of the arts and space, in zero gravity environments.  Something I was always fascinated with.  Six months later, I get this envelope back, which has the NASA logo on the front of the envelope from a guy by the name of Robert Schulman, director of the NASA Art Program.  I didn’t even know what that was.  I’d just basically written this letter saying I’m a jazz artist and I’ve been interested in exploring. Anyway, turns out a correspondence develops between me and Robert Schulman, and I learn about this organization that’s been in existence at NASA since the beginning of the space program called the NASA Art Program where they commission visual artists, famous ones, to experience what goes on with the space program and everything, from the launch, the landing, the deep space program, astronaut training.  They invite artists to observe this, and from this, to create a work of art, a visual work of art that they would contribute to NASA’s Space Art Collection, which I didn’t even know existed.

Bob and I corresponded for years.  He was interested in jazz musicians—lucky me, you know.  He started sending me all kinds of wonderful stuff, press releases and stuff from NASA. Eventually I posed the idea, how about NASA commissioning the first musician for the Art Program?  And he loved the idea.  That was the start of it.  We had all kinds of corporate sponsors for this big concert to happen.  I basically joined a NASA art team that came down to the Kennedy Space Center for the first launch after the Challenger accident.  It was the space shuttle Discovery.  I traveled with the artists and went to all the facilities, to the launch and the landing at Edwards Air Force Base.  I went to the Jet Propulsion Laboratory to see the deep space telemetry.  It was a peak experience in my life, no question about it.  And from that, I created a new work, which we premiered at the Kennedy Space Center.

FJO:  Now when you say NASA commissioned it, there was a concert, but then what happened?  Did they send it into space?  What was NASA’s role in it?

JIB:  Well, I can tell you about the concert.  It was an experience like no other.  It was this wonderful special NASA audience concert that was held at the Kennedy Space Center with the Brevard Symphony Orchestra. I brought down a whole bunch of ringers, the jazz musicians that were in the piece.  In addition to the visual artists who were also there contributing to the evening, there were several astronauts who gave talks before the concert took place.  I remember meeting Astronaut Robert Crippen and Astronaut John Young. I shook hands with a guy who went to the moon.  It was a NASA evening that was documented; it was video-ed.  Where did the piece ultimately land in NASA’s Space Art Collection? Wherever it goes. There’s a piece of my score that’s there, and there’s this video recording of the piece.  But more importantly, it turned out to be an experience that’s informed almost all my musical thinking and writing since then.  It was one of my first large orchestration experiences, and it was also a time when I was integrating live electronics and surround sound. So many concepts that were channeled into that experience are still with me in work that I’m exploring today. I cite that experience as incredibly pivotal in my thinking.

FJO:  And yet it has still never been released in the original format you conceived it.

JIB:  No, just the electronic trio piece that’s in the middle of it—a piece that I performed with Jerry Granelli on electro-acoustic percussion and Rufus Reid on bass and prepared electronic tape, and me on electronics—that’s called “Most Distant Galaxy.”  That’s recorded on my album Art and Aviation. That was the second or third movement.  I forget which.

FJO:  Although most of the pieces on Art and Aviation also have space-inspired names.

JIB:  Yeah, it was right around that time, but that’s the only one that’s directly from that material. Art and Aviation was a spin-off of the work that I did for NASA.  I did a huge piece at Town Hall.  Oh, I’ll never forget that one.

FJO:  I was at that concert.  It was the first time I heard you perform live.

JIB:  Wow!  Yeah, that was a fun one.  That was the first time I integrated getting the brass section up in the balconies to do some surround sound effects.

FJO:  Now the other thing that’s on that record, which I find funny because it’s quite a contrast from all these space exploration-inspired things, is a piece called “I Believe Anita.”

JIB:  That piece was very important, and it’s important today.  I still perform that piece, and I still believe her.  Absolutely.

FJO:  Anita Hill was just in the news again recently. They were talking to her about how back then there were no hashtags.  There was no #MeToo back then. A lot of people believed her, but it ultimately didn’t make a difference. Clarence Thomas still got confirmed to the Supreme Court.

JIB:  Hard to believe, but I believe Anita.

FJO:  So when you play that piece now, how do you frame it?

JIB:  History.  It’s bearing out history—sticking to your convictions and seeing how history plays things out.

FJO:  You were talking earlier about being a melodist. That’s another area I would love to talk about in greater detail with you because you developed this whole technique that you call motion-inspired melodies, which you’ve also described as painting with sound.

JIB:  On a detailed level, there’s always been an interest in melodic lines that have their own unique sense of motion flow—accelerando and decelerando, groups of fives and eights and nines, not just chugging along in eighth notes and sixteenth notes.  It’s been a characteristic of my melodic line writing for a long time.  You can hear it in almost all—I can show it to you.  It has informed so much.  It comes from this sense of motion filled-ness, physical motion.  I’ve always been interested both in my own body when I play and then translating that into sound and how that compels melodies in different ways, too.  It’s all one thing.

“Even before I even thought about it, I always moved a lot when I played.”

Intuitively, even before I even thought about it, I always moved a lot when I played.  I didn’t know why I was doing it.  I just felt things in my body when I play.  As time went on, I collaborated with choreographers who were much more cognizant of this quality and interested in it and actually made me look at it in a much more concrete way, to think about what you could do, to look at it and think about it, and how you could make sound change by moving. It was really choreographers like Richard Bull—who did Improvisational Dance Ensemble—that got me really thinking about it.  So much other compositional thought was generated from the movement, whether it was making melodies or being inspired by Jackson Pollock in the Chasing Paint album, trying to think of arcing sound in space the way Pollock moved a brush.  I was always a visual thinker, so this was a real natural place for me to go, to think of sculpting sound with movement and then augmenting that with electronics and melodic line writing.

FJO:  Your first Pollock piece goes all the way back to your first combo album, and then it grew into this larger six-movement suite that’s on Chasing Paint.

JIB:  Yeah.  I was always interested in Jackson Pollock. He spoke to me, I guess, as he’s spoken to many improvisers.

FJO:  A painting of his was even used for the cover of Ornette Coleman’s album Free Jazz.

JIB:  Absolutely. He speaks to improvisers.

FJO:  So, in terms of this arcing sound, do you encourage the other players in the group to also move around?  If you’re sitting at a piano, that seems like it might be hard to do.

JIB:  Well, I don’t dictate.  But I know there was a period of time when I was recording with Fred, I can remember one piece called “The Race (for Shirley Muldowney),” where we put some of the effects processing in the strings of the piano, so Fred was actually playing with effects processing in that piece.  I can think of times where Bobby Previte—although he himself was not using any extended electronic sounds, his compositional thinking on the set is so compelled by visual thought.  It’s just in his head.

FJO:  Yeah, well he’s created a whole cycle of pieces based on paintings by Joan Miró.

JIB:  Oh yeah.  Right.  I was on one of his Joan Miró pieces.  I’m with like-minded collaborators.  So again, I don’t dictate to people about that, but clearly there’s something in the air.

FJO:  So were your Pollock pieces inspired by specific paintings?

JIB:  Absolutely.  And when we played the pieces, I made some really good color printouts, the best I could, so people had them on their stands. And then at one point, we did play at the Museum of Modern Art in Houston, where we actually played in front of a Pollock. It was not one of the ones that I’d literally written a piece about, but it was right behind us.  You could just turn around and look at it.  And that was so cool.

FJO:  And the group you performed those pieces with was another dream team.

JIB:  Yeah. Fred, Mark Dresser, and Bobby Previte—wonderful quartet.

FJO:  There’s a real chemistry between the four of you.

JIB:  Absolutely.  And sometimes it’s not what people think, that you put likeminded people together.  Sometimes it’s the very unique characteristics of each of the players, and the strengths that they bring that are very different from one another.  And those people had it.  That’s what I remember about that quartet. I think very fondly about that collaboration now.

FJO:  You’ve recorded at least two albums with that exact lineup, and then others where there’s almost all of them.

JIB:  Yeah, it shifted a little bit.  But we did the Red Quartets and then the Pollock album, Chasing Paint.

FJO:  Another thing that’s probably related to your being inspired by painting is that you are also a photographer.  When did that start?

JIB:  High school.  I was one of those people who spent a lot of time in the old days in the dark room sniffing chemicals.  I just had a passion for black and white photography.

FJO: Interestingly though, everything we were saying about the melody line, hearing something, having it be balanced and wanting it to be just right, shows how mindful you are of the world around you—in the way that a photographer also usually is, but in a way that perhaps abstract expressionist painters aren’t as much.  Their processes inform their work, and the work is what it is.  So even though you’ve been inspired by Pollock, your aesthetic is very different from his.  Or at least it seems to be.

“The freedom Pollock was in touch with is something that, as jazz musicians, we can tap into so easily.”

JIB:  Who knows?  He just speaks to me.  The freedom he was in touch with, this motion in nature is something that, as jazz musicians, we can tap into so easily.  I know so much about what he was talking about, that fractal nature of the movement of wind and moving grasses or branches or trees, and how that manifests visually in the natural world, and also feeling how that might be in sound.  You don’t know how people inspire you.  It’s not that you’re like them; it’s that they speak to you about something.  Thank you, Jackson Pollock.  That’s all I can say.

One of the many pieces of art that is hanging in Jane Ira Bloom’s apartment.

FJO:  Jumping to the present moment, when I first heard about this I thought it was so incongruous, yet it totally works.  Another person who’s inspired you, another great American cultural icon, is Emily Dickinson.  But I never would have made that connection.

JIB:  I think the first time I was exposed to her poetry was through The Belle of Amherst with Julie Harris on WGBH in Boston. It was basically a one-woman show about the life and poetry of Emily Dickinson.  I think that’s where it began.  It took a long time simmering, but I think I went to a lecture on Dickinson’s poetry given at the Philoctetes Society.  I’ve forgotten who the poet was who gave that lecture, but that’s what sparked it.  I forget when that was.  But then I started re-reading.  Somehow I didn’t understand her, but I got her.  I don’t know why.  I don’t intellectually understand her, but there’s something about the way she used words that feels like the way jazz musicians abstract notes and ideas.  That’s where I started from.

FJO:  And it’s so fascinating that you issued performances both with the words being recited and without them, so listeners can either hear it with the words or not.  You can have two completely different experiences with it.

JIB:  Those fragments of the poetry inspire the music that you hear, where we go with it.  But it’s a different approach to intersecting music and words than traditional settings of poems.  I was not interested in that approach at all.  It’s really a much more abstract relationship to her and to her poetry.

FJO:  You mentioned performing with Jay Clayton, but on your own music you’ve never worked with a singer.

JIB:  No, nor had I ever done anything with words.  Never.  This was the first time.  And my husband is an actor and a director!  But this was the first time that I actually did a collaboration with literature, and it was very meaningful to me.

FJO:  I find it somewhat strange that you’ve never included a singer in your music, especially after hearing all of the stuff that you’ve said about melody, as well as being inspired by the American Songbook.  I could imagine a recording of you with a singer that would be as symbiotic as the album that John Coltrane recorded with Johnny Hartman, which really sounds like two singers—Hartman singing the words and Coltrane singing on his saxophone.

JIB:  That’s right.  It may be in the future.  In truth, I do think when I play ballads that I am singing those songs into the saxophone.  But what collaboration might be in the future, who knows?

FJO:  Okay, so what would be a dream project that you’d love to do that you haven’t done yet?

JIB:  I just went this weekend to the MOMIX Dance Theatre.  Years ago, I wrote some music for the dance company Pilobolus and one of the original dancers, Moses Pendleton, started this company called MOMIX, which is dedicated not only to dance but a high use of stagecraft in lighting and illusion, to create very magical looking effects on stage.  I remember thinking when I left, “I wonder if I could get a grant to get together with a really powerful design team, lighting designers and stage production designers, people who do this kind of thing.  How fascinating it might be to create the music that I create with this other kind of visual element—simultaneously.  But we’d definitely have to get a grant for this one.”  That’s the latest thing that occurred to me.

Along with all the music manuscript paper, Jane Ira Bloom also keeps a pocket-sized audio-recorder at her piano.

FJO:  One area that we didn’t touch on that we should are those fascinating world music collaborations that you did about ten years ago, which really took you in new directions.  I actually heard a connection between those performances and your Early Americans trio album, where there’s finally no piano, which means you can freely venture beyond the 12-tone equal-tempered scale and improvise on other modes. I did hear things that hinted at this terrain in several of the pieces on that album, like “Dangerous Times” or “Other Eyes,” which perhaps came from your experience in those world music collaborations.

JIB:  Well, I’ve always been interested in world musics.  Not that I’ve studied any in great detail as some of my colleagues have, who have gone to different parts of the world to study shakuhachi or Indian music. But I’ve always had this open ear.  It all started probably in the 1970s when I listened to the Nonesuch World Music Explorer series in the library.  I used to listen to music from all over the world and let it into my musical thought.  Over the years, I’ve collaborated with musicians who were more studied than I in traditional world musics, whether it’s Geetha Ramanathan Bennett and her husband Frank Bennett and being exposed to beautiful South and North Indian music, whether it has to do with the years listening to Asian music, the shakuhachi or the Chinese guqin, having experiences improvising with the master pipa player and improviser Min Xiao-Fen or Korean music, being exposed to it through my friend Jin Hi Kim. Again, it’s all learning by doing and being around the musicians themselves.  And they themselves were interested in collaborating with jazz artists.  I was improvising together with musicians who wanted to share vocabulary with me.  That’s how it happened.

FJO:  It was so incredible hearing Geetha Ramanathan Bennett play “Cheek to Cheek” on one of those performances. That blew my mind.

JIB:  Wasn’t that amazing listening to “Cheek to Cheek” on the veena, how she can handle the harmonic changes on a veena?

FJO:  That would be a great thing to take into the studio and record.

JIB:  I know.  I still talk with Geetha every now and then.  She’s out on the West Coast with Frank.  We’re longtime friends and collaborators from 1970-something.  Again, the collaborations that I really value are deep, long-term ones.

FJO:  So we’ve already planned at least three new projects for you, something with a singer, and a multi-media improvisation with music and lighting, and a cross-cultural recording.

JIB:  Thank you.

Two shelves in Jane Ira Bloom’s living room reveal some of the sheet music and books that have been important to her.

FJO:  A last area I was curious about, because it’s been a part of your life for a very long time, is your teaching at the New School.

JIB:  I’ve been there 20 years.

FJO:  So what keeps you doing it?  What inspires you?

JIB:  I’m the most reluctant educator there is, but what inspires me is I like being around young people.  I like being around unfettered enthusiasm, the idealism, all of the energy.  It fuels me. I give it back to them, but they give it to me.

FJO:  So what sort of projects do you do with them to get them thinking out of the box?

“I like being around unfettered enthusiasm.”

JIB:  There are several courses I’ve taught over the years to do just that.  A class called “Linear Composition for Improvisers”—definitely getting improvisers into a composing mode and thinking outside of their comfort zones.  I’ve taught the music of Ornette Coleman.  I’ve taught a course on how to play ballads.  Teaching young people how to play slow.  I have a course that I designed that I teach with my husband called “Improvisatory Artist Lab” where we combine classical artists, jazz artists, and drama students, to do new creative work together.  For them to learn about each other’s vocabularies, cross-disciplinary projects and thinking.  There’s a course I designed taking young composers up to the New York Public Library for the Performing Arts at Lincoln Center, having them research a topic of their choice and then creating a new work of art that we perform at Lincoln Center at the end of the semester.  All of it is pushing the boundaries.

Resonating Filters: How to Listen and Be Heard

I have been writing all this month about how a live sound processing musician could develop an electroacoustic musicianship—and learn to predict musical outcomes for a given sound and process—just by learning a few things about acoustics/psychoacoustics and how some of these effects work. Coupled with some strategies for listening and playing, this can make it possible for the live processor to create a viable “instrument.” Even when processing the sounds of other musicians, it enables the live sound processing player to behave and react musically like any other musician in an ensemble, rather than being regarded as merely creating effects.

In the previous post, we talked about the relationship between delays, feedback, and filters.   We saw how the outcome of various configurations of delay times and feedback is directly affected by the characteristics of the sounds we put into them, whether they be short or long, resonant or noisy.   We looked at the pitch shifts created by the Doppler effect in multi-tap delays, and at how one might use any of these things when creating live electroacoustic music with live sound processing techniques.  As I demonstrated, it’s about the overlap of sounds, about operating in a continuum from creating resonance to creating texture and rhythm.  It’s about being heard and learning to listen. Like all music. Like all instruments.

It’s about being heard and learning to listen. Like all music. Like all instruments.

To finish out this month of posts about live sound processing, I will talk about a few more effects and some strategies for using them.  I hope this information will be useful to live sound processors, because we need to know how to be heard as a separate musical voice and how to stay flexible with our control.  It should also be useful to instrumentalists processing their own sound, because it will speed the process of finding what sounds good on your instrument and help with predicting the outcomes of various sound processing techniques.  It should be especially helpful for preparing for improvisation or any live processing project without the luxury of a long time schedule, and, I hope, for composers who are considering writing for live processing or creating improvisational settings for live electroacoustics.

Resonance / Filtering in More Detail

We saw in the last post how delays and filters are intertwined in their construction and use, existing in a continuum from short delays to long delays, producing rhythm, texture, and resonance depending on the length of the source audio events being processed, and the length of the delays (as well as feedback).

A special case is that of a very short delay (1-30ms) combined with lots of feedback (90% or more).  The sound circulates so fast through the delay that it creates resonance at the speed of the circulation, producing clear pitches we can count on.

The effect is heard best with a transient (a very short sound such as a hand clap, vocal fricatives like “t” or “k”, or a snare drum hit).   For instance, if I have a 1ms delay with lots of feedback and input a short transient sound, we will hear a ringing at 1000Hz.   That is how fast the sample is circulating through the delay (1000 times per second).  This is roughly the same pitch as “B” on the piano (a little sharp).  Interestingly, if we change the delay to 2ms, the pitch heard will be 500Hz (also a “B,” but an octave lower); 3ms yields an “E” (333Hz), 4ms another “B” (250Hz), 5ms a “G” (200Hz), and so on, in a kind of upside-down overtone series.
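The arithmetic behind those pitches is simply that the resonant frequency is the reciprocal of the delay time.  A quick sketch (the function name is mine, just for illustration):

```python
# A short delay with heavy feedback rings at the reciprocal of the
# delay time: a 1 ms loop circulates 1000 times per second, so it
# resonates at 1000 Hz; doubling the delay halves the pitch.
def resonant_freq_hz(delay_ms):
    return 1000.0 / delay_ms

for ms in (1, 2, 3, 4, 5):
    print(f"{ms} ms -> {resonant_freq_hz(ms):.0f} Hz")
# prints 1000, 500, 333, 250, and 200 Hz
```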

Karplus-Strong Algorithm / Periodicity Pitch

A very short delay combined with high feedback resembles physical modeling synthesis techniques, which are very effective for simulating plucked string and drum sounds.  One such method, the Karplus-Strong algorithm, consists of a recirculating delay line with a filter in its feedback loop.  The delay line is filled with samples of noise.  As the samples recirculate through the filter in the feedback loop, they form a “periodic sample pattern” whose period is directly related to how many samples the line holds.  Even though the input signal is pure noise, the algorithm creates a complex sound with pitch content that is related to the length of the delay. “Periodicity pitch” has been well studied in the field of psychoacoustics, and it is known that even white noise, if played along with a delayed copy of itself, will have pitch. This is true even if the two copies are sent separately to each ear. The low-pass filter in the feedback loop robs the noise of a little of its high-frequency content at each pass through the circuit, replicating the acoustical properties of a plucked string or drum.
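Here is a minimal sketch of that recirculating structure in Python, assuming the simplest possible low-pass filter (a two-point average) in the feedback loop; the names and parameters are mine, not from any canonical implementation:

```python
import random
from collections import deque

def karplus_strong(freq_hz, sr=44100, duration_s=0.5):
    """Plucked-string sketch: a delay line filled with noise, whose
    samples recirculate through an averaging (low-pass) filter."""
    n = int(sr / freq_hz)                       # delay length sets the pitch
    line = deque(random.uniform(-1.0, 1.0) for _ in range(n))
    out = []
    for _ in range(int(sr * duration_s)):
        first = line.popleft()
        out.append(first)
        # averaging two adjacent samples removes a little high-frequency
        # energy on every pass, so the "string" decays naturally
        line.append(0.5 * (first + line[0]))
    return out

pluck = karplus_strong(220.0)
```

Written to a sound file at 44.1kHz, `pluck` is a decaying plucked tone near 220Hz; shortening the delay line raises the pitch, exactly as with the short feedback delays described above.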

If we set up a very short delay, use lots of feedback, and input any short burst of sound—a transient, click, or vocal fricative—we can get a similar effect of a plucking sound or a resonant click.  If we input a longer sound at the same frequency as what the delay is producing (or at multiples of that frequency), then those overtones will be accentuated, in the same way some tones are louder when we sing in the shower, because they are being reinforced.   The length of the delay determines the pitch, and the feedback amount (along with any filter we use in the feedback loop) determines the sustain and length of the note.

Filtering & Filter Types

Besides any types of resonance we might create using short delays, there are also many kinds of audio filters we might use for any number of applications including live sound processing: Low Pass Filters, High Pass Filters, etc.

A diagram of various filter types.

But by far the most useful tools for creating a musical instrument out of live processing are resonant filters, specifically the bandpass and comb filters, so let's focus on those. Filters with sharp cutoffs also boost certain frequencies near their cutoff points to be louder than the input, adding resonance.  Bandpass filters let us "zoom in" on one region of a sound's spectrum and reject the rest.  Comb filters, created when a delayed copy of a sound is added to itself, cancel out many evenly spaced regions ("teeth") of the spectrum, creating a characteristic sound.

The most useful tools for creating a musical instrument out of live processing are resonant filters.

The primary elements of a bandpass filter we want to control are center frequency, bandwidth, and filter Q (defined as center frequency divided by bandwidth; in practice, how narrow, "sharp," or resonant the peak is).    When the Q is high (very resonant), we can use this property to create or underscore certain overtones in a sound that we want to bring out or experiment with.
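As a rough illustration of how these parameters interact, here is a resonant bandpass biquad in plain Python using the widely circulated "Audio EQ Cookbook" formulas, in which Q is set directly. The names and defaults are my own choices, not from any particular piece of software.

```python
import math

def bandpass_coeffs(center_hz, q, sample_rate=44100):
    """Biquad bandpass (constant 0 dB peak gain), per the common
    'Audio EQ Cookbook' formulas; q = center frequency / bandwidth."""
    w0 = 2 * math.pi * center_hz / sample_rate
    alpha = math.sin(w0) / (2 * q)
    b = [alpha, 0.0, -alpha]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    # normalize so that a0 == 1
    return [x / a[0] for x in b], [1.0, a[1] / a[0], a[2] / a[0]]

def biquad(signal, b, a):
    """Direct Form I filtering of a list of samples."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in signal:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out
```

With a high Q (say 10 or more), frequencies at the center pass through at full level while everything else is sharply attenuated, which is exactly the "zooming in on one overtone" behavior described above.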

Phasing / Flanging / Chorus: These are all filtering-type effects, built on very short, automatically varying delay times.  A phase-shifter delays the sound by less than one cycle (cancelling out some frequencies through the overlap and producing a non-uniform but comb-like filter). A flanger, which sounds a bit more extreme, uses delays of roughly 5-25 ms, producing a more uniform comb filter (evenly spaced peaks and troughs in the spectrum). It is named after the original practice of audio engineers who would press down on one reel (flange) of an analog tape deck, slowing it slightly as it played in near sync with an identical copy of the audio on a second tape deck.  Chorus uses still longer delay times and multiple copies of the sound.
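A flanger of the kind described can be sketched in a few lines: a delayed copy whose delay length is swept by a slow LFO, mixed with the dry signal. This is a simplified, non-interpolating version, and the parameter choices are illustrative.

```python
import math

def flanger(signal, sample_rate=44100, min_delay_ms=1.0,
            max_delay_ms=10.0, rate_hz=0.5):
    """Mix the input with a copy whose delay sweeps slowly between min and
    max delay, producing a moving comb filter (the classic flange sweep)."""
    out = []
    for n, x in enumerate(signal):
        # sinusoidal LFO sweeps the delay length
        sweep = 0.5 * (1 + math.sin(2 * math.pi * rate_hz * n / sample_rate))
        d_ms = min_delay_ms + sweep * (max_delay_ms - min_delay_ms)
        d = int(d_ms * sample_rate / 1000)
        delayed = signal[n - d] if n >= d else 0.0
        out.append(0.5 * (x + delayed))
    return out
```

The comb "teeth" fall at odd multiples of half the inverse delay time, so as the LFO moves the delay, the notches sweep through the spectrum; slower LFO rates give the lazy jet-engine whoosh associated with the effect.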

A tutorial on Phasing Flanging and Chorus

For my purposes, as a live processor trying to create an independent voice in an improvisation, I find these three effects most useful if I treat them the same as filters, except that, since they are built on delays I can change, I might also increase or decrease the delay times to get a Doppler effect, or play with feedback levels to accentuate certain tones.

I use distortion the same way I would use a filter—as a non-temporal transformation.

Distortion

From my perspective, whatever method is used to get distortion adds and subtracts overtones from our sound, so for my live processing purposes I use distortion the same way I would use filters: as a non-temporal transformation. Below is a gorgeous example of distortion not used on a guitar. The only instruction in the score for the electronics is to gradually bring up the distortion in one long crescendo.  I ran the electronics for the piece a few times in the '90s for cellist Maya Beiser, and got to experience how strongly the overtones pop out because of the distortion pedal and move around nearly on their own.

Michael Gordon Industry

Pitch-Shift / Playback Speed Changes / Reversing Sounds

I once heard composer and electronic musician Nic Collins say that to make experimental music one need only "play it slow, play it backwards." Referring to pre-recorded sounds, these are certainly time-honored electroacoustic approaches, born in a time when only tape recorders, microphones, and a few oscillators were used to make electronic music masterpieces.

For live processing of sound, pitch-shift and time-stretch continue to be simple and valuable processes.  Playback speed and pitch are connected by physics: a sound played back slower is correspondingly lower in pitch, and played back faster it is higher. (With analog tape or a turntable, playing a sound back at twice the speed raises it an octave, because the sound waves play back twice as fast, doubling the frequency.)
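This coupled, tape-style behavior can be modeled as simple resampling, as in the sketch below using linear interpolation (the function name is illustrative). Playing at twice the speed halves the duration and doubles every frequency at once.

```python
def varispeed(signal, ratio):
    """Tape-style speed change: resample by `ratio` with linear
    interpolation. ratio=2.0 plays twice as fast and an octave higher;
    ratio=0.5 plays half as fast and an octave lower."""
    out = []
    pos = 0.0
    while pos < len(signal) - 1:
        i = int(pos)
        frac = pos - i
        # read between samples i and i+1
        out.append(signal[i] * (1 - frac) + signal[i + 1] * frac)
        pos += ratio
    return out
```

Note that the cycle count of the input is preserved: the same waves simply go by faster or slower, which is exactly why speed and pitch cannot be changed independently this way.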

The relationship between speed of playback and time-stretch was decoupled in the mid-'90s.

This relationship between speed of playback and time-stretch was decoupled in the mid-'90s, when faster computers, realtime spectral analysis, and other methods made it easier to do one without the other.  That decoupling is now the norm. In much of the commercial music software my students use, it is possible to slow down a sound without changing its pitch (certainly useful for changing the tempo of a song with pre-recorded acoustic drums), and being able to pitch-shift or Auto-Tune a voice without changing its speed is also a very useful tool in commercial production.  These decoupled methods (with names like "Warp," "Flex," etc.) are often based on granular synthesis or phase vocoders, each of which adds its own sonic residue (essentially the errors and noises endemic to the method at extreme parameter settings).  Sometimes those mistakes, noises, and glitch sounds are useful and fun to work with, too.
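A naive granular time-stretch, one of the decoupling methods just mentioned, can be sketched as overlap-added windowed grains re-read at a different hop. Real implementations are far more refined (transient handling, pitch-synchronous grains, phase alignment); the grain size and overlap here are arbitrary choices.

```python
import math

def granular_stretch(signal, stretch, grain=1024, overlap=4):
    """Naive granular time-stretch: re-read overlapping Hann-windowed grains
    at a slower/faster input hop while keeping each grain's contents (and
    thus its pitch) unchanged."""
    hop_out = grain // overlap
    hop_in = hop_out / stretch                 # read slower => longer output
    win = [0.5 - 0.5 * math.cos(2 * math.pi * i / grain) for i in range(grain)]
    out = [0.0] * int(len(signal) * stretch + grain)
    pos_in, pos_out = 0.0, 0
    while pos_in + grain < len(signal):
        start = int(pos_in)
        for i in range(grain):
            # overlap-add; 2/overlap normalizes the summed Hann windows
            out[pos_out + i] += signal[start + i] * win[i] * (2.0 / overlap)
        pos_in += hop_in
        pos_out += hop_out
    return out
```

The "sonic residue" mentioned above comes precisely from the seams of this process: when grains repeat or skip material, you hear the characteristic granular flutter, especially at extreme stretch factors.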

An example of making glitch music with Ableton's Warp mode (its time-stretching with no pitch-shift mode).

Some great work by Philip White and Ted Hearne using Autotune gone wild on their record R We Who R We

Justin Bieber 800% slower (using PaulStretch extreme sound stretch) is a favorite of mine, but trying to use a method like this in performance (even if it were possible in real time) might be a bit unwieldy, making for a very long performance, or very few notes performed. Perhaps we can just treat this as a "freeze" delay function for the purposes of this discussion. Nevertheless, I want to focus here on old-school, time-domain, interconnected pitch-shift and playback speed changes, which are still valuable tools.

I am quite surprised at how many of my current students have never tried slowing down the playback of a sound in realtime.  It's not easy to do with their software, and some have never had access to a variable-speed tape recorder or a turntable, so they are shockingly unfamiliar with this basic way of working.  Thankfully, there are some great apps that can do this and, with a little poking around, it's also possible in most basic music software.

A Max patch demo of changing playback speeds and reversing various kinds of sound.

Some sounds sound nearly the same when reversed, some not.

There are very big differences in what happens when we pitch-shift various kinds of sounds (or change their speed or direction of playback).  Speech-like sounds (with lots of formants and changes of pitch and overtones within the sound) respond differently than string-like (plucked or bowed) or percussive sounds.  Some sound nearly the same when reversed, some not. Making global predictions about the outcome of our process for every possible input sound (as we can more easily do with delays, filters, and feedback) is a longer conversation, but here are a few generalizations.

Strings can be pitch-shifted up or down and sound pretty good, bowed and especially plucked.  If the pitch-shift is done without time compression or expansion, the attack will be slower, so it won't "speak" quickly in the low end.  A vibrato might become noticeably slow or fast with pitch changes.

Pitch-shifting a vocal sound up or down can create a much bigger and iconic change in the sound, personality, or even gender of the voice. Pitching a voice up we get the iconic (or annoying) sound of Alvin and the Chipmunks.

Pitch-shifting a voice down, we get slow, slurry speech, like Lurch from The Addams Family or DJ Screw's chopped-and-screwed mixtapes (or even a gender change, as in John Oswald's 1988 Plunderphonics Dolly Parton think piece).

John Oswald: Pretender (1988) featuring the voice of Dolly Parton

But if the goal is to create a separate voice in an improvisation, I prefer to pitch-shift the sound and then also put it through a delay with feedback. That way I can create sound loops of modulated arpeggios moving up and up and up (or down, or both) in a symmetrical movement built on the original pitch interval (not just whole-tone and diminished scales, but everything in between as well). Going up, the pitch gets higher until it's just shimmery (the overtones disappear as the sound nears the limits of the system).  Going down, the pitch gets lower and the sound also gets slower; rests and silences slow down, too. In digital systems, noise may build up, as some samples must be repeated to play back the sound at that speed.  All of this relates back to Hugh Le Caine's early electroacoustic work Dripsody for variable-speed tape recorder (1955) which, though based on a single sample of a water drop, makes prodigious use of ascending arpeggios created using only tape machines.
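The symmetrical-arpeggio behavior is easy to model: each trip through the feedback loop applies the shift interval again, so the repeats trace an equal-interval arpeggio until they leave the audible range. A sketch (the range limits and step cap are illustrative):

```python
def feedback_arpeggio(start_hz, semitones, nyquist=22050.0, max_steps=50):
    """Each pass through a pitch-shifter inside a feedback delay loop shifts
    the sound again by the same interval, so repeats climb (or fall) in a
    symmetrical arpeggio until they leave the audible range."""
    ratio = 2 ** (semitones / 12.0)
    pitches = [start_hz]
    while 20.0 < pitches[-1] * ratio < nyquist and len(pitches) < max_steps:
        pitches.append(pitches[-1] * ratio)
    return pitches

# a minor-third (3 semitone) shift on A220 climbs in a diminished arpeggio
print([round(p, 1) for p in feedback_arpeggio(220.0, 3)])
```

With a 3-semitone shift the repeats outline a diminished chord; a 2-semitone shift outlines a whole-tone scale; a fractional shift outlines the "everything in between" intervals mentioned above.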

Hugh Le Caine: Dripsody (1955)

Which brings me to the final two inter-related topics of these posts: how to listen and how to be heard.

How to Listen

Acousmatic or reduced listening, the classic concept from Pierre Schaeffer (and from the writings of Michel Chion), is where I start with every group of students in my Electronic Music Performance classes. We need to be able to hear the sounds we are working on for their abstracted essences.  This stands in sharp contrast to the normal listening we do every day, which Schaeffer called causal listening (what is the sound source?) and semantic listening (what does the sound mean?).

We need to be able to hear the sounds we are working on for their abstracted essences.

We learn to describe sounds in terms of their pitch (frequency), volume (amplitude), and tone/timbre (spectral qualities).  Very importantly, we also listen to how these parameters change over time and so we describe envelope, or what John Cage called the morphology of the sound, as well as describing a sound’s duration and rhythm.

Listening to sound acousmatically directly impacts how we can make ourselves heard as a separate, viable "voice" using live processing.  So much of what a live sound processor improvising in real time needs is the ability to provide contrast with the source sound. This requires knowledge of what the delays, filters, and processes will produce for many types of input sounds (what I have been doing here), a good technical setup that is easy to change quickly and reactively, active acousmatic listening, and good ear/hand coordination (as with every instrument) to find the needed sounds at the right moment. (And that just takes practice!)

All the suggestions I have made relate back to the basic properties we listen for in acousmatic listening. Keeping that in mind, let’s finish out this post with how to be heard, and specifically what works for me and my students, in the hope it will be useful for some of you as well.

How to be Heard
(How to Make a Playable Electronic Instrument Out of Live Processing)

Sound Decisions: Amplitude Envelope / Dynamics

A volume pedal, or some way to control volume quickly, is the first tool I need in my setup, and the first thing I teach my students. Though useful for maintaining the overall mix, more importantly it enables me to shape the volume and subtleties of my sound to be different than that of my source audio. Shaping the envelope/dynamics of live-processed sounds of other musicians is central to my performing, and an important part of the musical expression of my sound processing instrument.  If I cannot control volume, I cannot do anything else described in these blog posts.  I use volume pedals and other interfaces, as well as compressor/limiters for constant and close control over volume and dynamics.

Filtering / Pitch-Shift (non-temporal transformations)

To be heard when filtering or pitch-shifting with the intention of being perceived as a separate voice (not just an effect) requires displacement of some kind. Filtering or pitch-shifting with no delay transforms the sound and gesture being played, but it does not create a new gesture, because the original and the processed sound occupy the same space, temporally, spectrally, or both.  So we need to change the sound in some way to create contrast. We can do this by changing parameters of the filter (Q, bandwidth, or center frequency), or by delaying the sound long enough that we hear the processed version as a separate event.  That delay time should be more than 50-100ms, depending on the length of the sound event; shorter delays would just give us more filtering where the sounds overlap.

  • When filtering or pitch-shifting a sound, we will not create a second voice unless we displace it in some way. Think of how video feedback works: the displacement makes it easier to perceive.
  • Temporal displacement: We can delay the sound we are filtering (same as filtering a sound we have just delayed). The delay time must be long enough so there is no overlap and it is heard as a separate event. Pitch-shifts that cause the sound to play back faster or slower might introduce enough temporal displacement on their own if the shift is extreme.
  • Timbral displacement: If we create a new timbral “image” that is so radically different from the original, we might get away with it.
  • Changes over time / modulations: If we do filter sweeps, or change the pitch-shift, in ways that contrast with what the instrument is doing, we can be heard better.
  • Contrast: If the instrument is playing long tones, I would choose to do a filter sweep, change delay times, or pitch-shift. This draws attention to my sound as a separate, electronically mediated sound.  It can be done manually (literally a fader), or as an automatic process that we turn on/off and then control in some way.

Below is an example of me processing Gordon Beeferman's piano on an unreleased track. I am using very short delays with pitch-shift to create a hazy resonance of pitched delays, making small changes to the delay and pitch-shift to contrast with what he plays in terms of both timbre and rhythm.

Making it Easier to Play

Saved States/Presets

I cannot possibly play or control more than a few parameters at once.

Since I cannot possibly play or control more than a few parameters at once, and I am using a computer, I find it easier to create groupings of parameters, my own "presets" or "states," that I can move between, knowing I can get to them as I want to.
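One simple way to implement such groupings is a table of named parameter states, with interpolation for moving smoothly between any two of them. The parameter names below are hypothetical, for illustration only, not from my actual setup.

```python
def blend_states(state_a, state_b, mix):
    """Move between two saved parameter groupings ('states') by linear
    interpolation; mix=0.0 gives state_a, mix=1.0 gives state_b."""
    return {k: state_a[k] + mix * (state_b[k] - state_a[k]) for k in state_a}

# two hypothetical states: a tight slap and a washy long-delay texture
dry = {"delay_ms": 80.0, "feedback": 0.2, "filter_hz": 8000.0}
wash = {"delay_ms": 450.0, "feedback": 0.85, "filter_hz": 1500.0}

halfway = blend_states(dry, wash, 0.5)  # one knob morphs the whole grouping
```

Mapping `mix` to a single pedal or fader means one gesture can move many parameters at once, which is exactly the point of saved states.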


Especially if I play solo, it is sometimes helpful if some things can happen on their own. (After all, I am using a computer!)  If possible, I will set up a very long trajectory or change in the sound, for instance a filter sweep, or slow automated changes to pitch-shifts.   This frees up my hands and mind to do other things, and ensures that not everything I am doing happens in 8-bar loops.


I cannot express strongly enough how important control over rhythm is to my entire concept. It is what makes my system feel like an instrument. My main modes of expression are timbre and rhythm.  Melody and direct expression of pitch using electronics are slightly less important to me, though the presence of pitches is never to be ignored. I choose rhythm as my common ground with other musicians. It is my best method to interact with them.

Nearly every part of my system allows me to create and change rhythms by altering delay times on-the-fly, or simply tapping/playing the desired pulse that will control my delay times or other processes.  Being able to directly control the pulse or play sounds has helped me put my body into my performance, and this too helps me feel more connected to my setup as an instrument.

Even using an LFO (low frequency oscillator) to make tremolo effects and change volume automatically can be interesting, and I consider it part of my rhythmic expression (the speed of which I want to be able to control while performing).

I am strongly attracted to polyrhythms. (Not surprisingly, my family is Greek, and there was lots of dancing in odd time signatures growing up.) Because polyrhythm is so prevalent in my music, I implemented a mechanism that allows me to tap delay times and rhythms that are complexly related to what is happening in the ensemble at that moment.  After pianist Borah Bergman once explained a system he thought I could use for training myself to perform complex rhythms, I created a Max patch to implement what he taught me, and I began using this polyrhythmic metronome to control the movement between any two states/presets quickly, creating polyrhythmic electroacoustics. Other rhythmic control sources I have used include Morse code as rhythm, algorithmic processes, a rhythm engine influenced by North Indian classical tala, and whatever else interests me for a particular project.
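As a sketch of the tap-tempo idea: derive a pulse from the tapped intervals, then compute delay times standing in simple m:n relationships to that pulse. The ratio list is my own illustrative choice, not Bergman's system.

```python
def tapped_interval_ms(tap_times_ms):
    """Average interval between successive taps: the felt pulse."""
    gaps = [b - a for a, b in zip(tap_times_ms, tap_times_ms[1:])]
    return sum(gaps) / len(gaps)

def polyrhythmic_delays(pulse_ms, ratios=((3, 2), (4, 3), (5, 4))):
    """Delay times in simple m:n relationships to the tapped pulse,
    e.g. 3-against-2, 4-against-3, 5-against-4."""
    return {f"{m}:{n}": pulse_ms * n / m for m, n in ratios}

pulse = tapped_interval_ms([0, 500, 1010, 1490, 2000])  # taps in ms
delays = polyrhythmic_delays(pulse)
```

Feeding one of these derived times to a delay line locks the electronics into a polyrhythm against the ensemble's pulse, rather than a plain echo of it.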

With rhythm, it is about locking it in.

With rhythm, it is about locking it in.  It's important that I can control my delays and rhythm processes so that I can interact directly with the rhythm of the other musicians I am playing with (or make a deliberate choice not to).

Chuck, a performance I like very much by Shackle (Anne La Berge on flute & electronics and Robert van Heumen on laptop-instrument), does many of the things I have written about here.

Feedback Smears / Beautiful Madness

Filters and delays are always interconnected and feedback is the connective tissue.

As we have been discussing, filters and delays are always interconnected, and feedback is the connective tissue.  I make liberal use of feedback with Doppler shift (interpolating delays) for weird pitch-shifts; I use feedback to create resonance (with short delays); and I use feedback to quickly build up density or texture when using longer delays.  With pitch-shift, as mentioned above, feedback can create symmetrical arpeggiated movement based on the original pitch difference.   And feedback is just fun because it's, well, feedback!  It's slightly dangerous and uncontrollable, and brings lovely surprises.  That being said, I use a compressor or keep a kill-switch at hand so as not to blow any speakers or lose any friends.

David Behrman: Wave Train (1966)

A recording of me with Hans Tammen's Third Eye Orchestra.  I am using only a phaser on my microphone and lots of feedback to create this sound, trying to keep time with the ensemble manually.

Here are some strategies for using live processing that I hope you will find useful.

Are you processing yourself and playing solo?

Do any transformation; go to town!

The processes you choose can be used to augment your instrument or to create an independent voice.  You might want to create algorithms that can operate independently, especially for solo performing, so some things will happen on their own.

Are you playing in an ensemble, but processing your own sound?

  • What frequencies / frequency spaces are already being used?
  • Keep control over timbre and volume at all times to shape your sound.
  • Keep control of your overlap into other players' sound (reverb, long delays, noise).
  • Keep control over the rhythm of your delays and your reverb.  They are part of the music, too.

Are you processing someone else’s sound?

  • Make sure your transformations maintain the separate sonic identities of the other players and of your sound, as I have been discussing in these posts.
  • Build an instrument/setup that is playable and flexible.
  • Create some algorithms that can operate independently.

How to be heard / How to listen: redux

  • If my performer is playing something static, I feel free to make big changes to their sound.
  • If my live performer is playing something that is moving or changing (in pitch, timbre, or rhythm), I choose to either create something static out of their sound, or to move differently (contrast their movement by moving faster or slower or in a different direction, or by working with a different parameter). This can be as simple as slowing down my playback speed.
  • If my performer is playing long tones on the same pitch, or a dense repeating or legato pattern, or some kind of broad spectrum sound, I might filter it, or create glissando effects with pitch-shifts ramping up or down.
  • If my performer is playing short tones or staccato, I can use delays or live-sampling to create rhythmic figures.
  • If my performer is playing short bursts of noise, or sounds with sharp fast attacks, that is a great time to play super short delays with a lot of feedback, or crank up a resonant filter to ping it.
  • If they are playing a harmonic/focused sound with clear overtones, I can mess it up with all kinds of transformations, but I'll be sure to delay it / displace it.

When you are done, know when to turn it off.

In short and in closing: Listen to the sound.  What is static? Change it! Do something different.   And when you are done, know when to turn it off.

On “Third Eye” from Bitches Re-Brewed (2004) by Hans Tammen, I’m processing saxophonist Martin Speicher

Suggested further reading

Michel Chion (translated by Claudia Gorbman): Audio-Vision: Sound on Screen (Columbia University Press, 1994)
(Particularly his chapter, “The Three Listening Modes” pp. 25–34)

Dave Hunter: “Effects Explained: Modulation—Phasing, Flanging, and Chorus” (Gibson website, 2008)

Dave Hunter: “Effects Explained: Overdrive, Distortion, and Fuzz” (Gibson website, 2008)

Delays as Music

As I wrote in my previous post, I view performing with "live sound processing" as a way to make music by altering and affecting the sounds of acoustic instruments—live, in performance—and to create new sounds, often without the use of pre-recorded audio. These new sounds have the potential to forge an independent and unique voice in a musical performance. However, creating them requires, especially in improvised music, a unique set of musicianship skills and knowledge of the underlying acoustics and technology being used. And it requires that we consider the acoustic environment and spectral qualities of the performance space.

Delays and Repetition in Music

The use of delays in music is ubiquitous.  We use delays to locate a sound's origin, to create a sense of size and space, to mark musical time, to create rhythm, and to delineate form.

The use of delays in music is ubiquitous.

As a musical device, echo (or delay) predates electronic music. It has been used in folk music around the world for millennia for the repetition of short phrases: from Swiss yodels to African call and response, to songs in the round and complex canons, as well as in performances taking advantage of unusual acoustic spaces (e.g. mountains and canyons, churches, and unusual buildings).

Contemporary music, too, has embraced the delay and reverb effects of unusual acoustic spaces: the Deep Listening cavern music of Pauline Oliveros, experiments using the infinite reverb of the Tower of Pisa (Leonello Tarabella's Siderisvox), and organ work at the Cathedral of St. John the Divine in New York using its 7-second delay. For something new, I'll recommend the forthcoming work of my colleague, trombonist Jen Baker (Silo Songs).

Of course, delay was also an important tool in the early studio tape experiments of Pierre Schaeffer (Étude aux chemins de fer) as well as of Terry Riley and Steve Reich. The list of early works using analog and digital delay systems in live performance is long and encompasses many genres of music outside the scope of this post—from Robert Fripp's Frippertronics to Miles Davis's electric bands (where producer Teo Macero altered the sound of Sonny Sharrock's guitar and many other instruments) and Herbie Hancock's later Mwandishi band.

The use of delays changed how the instrumentalists in those bands played.  In Miles's work we hear not just the delays, but also improvised instrumental responses to the sounds of the delays and, completing the circle, the electronics performers responding in kind by manipulating their delays. Herbie Hancock used delays to expand the sound of his own electric Rhodes, and as Bob Gluck has pointed out (in his 2014 book You'll Know When You Get There: Herbie Hancock and the Mwandishi Band), he "intuitively realized that expressive electronic musicianship required adaptive performance techniques." This is something I hope we can take for granted now.

I'm skipping any discussion of the use of echo and delay in other styles (as part of the roots of dub, ambient music, and live looping) in favor of talking about the techniques themselves, independent of the trappings of any specific genre, and focusing on how they can be "performed" in improvisation as electronic musical sounds rather than effects.

Sonny Sharrock processed through an Echoplex by Teo Macero on Miles Davis's "Willie Nelson" (which is not unlike some recent work by Jonny Greenwood)

By using electronic delays to create music, we can create exact copies or severely altered versions of our source audio and still recognize them as repetitions, just as we might recognize each repetition of the theme in a piece organized as a theme and variations, or a leitmotif repeated throughout a work. Beyond the relationship of delays to acoustic music, the vastly different types of sounds we can create via these sonic reflections and repetitions have a corollary in visual art, both conceptually and gesturally. I find these analogies especially useful when teaching. The comparisons from the visual and performing arts that have inspired my work include images, video, and dance: repetitions (exact or distorted), Mandelbrot-like recursion (reflections, altered or displaced and re-reflected), shadows, and delays.  The examples below are analogous to many sound processes I find possible and interesting for live performance.

Sounds we create via sonic reflections and repetitions have a corollary in visual art.

I am a musician, not an art critic or theorist, but I grew up in New York, taken weekly to MoMA by my mother, a modern dancer who studied with Martha Graham and José Limón.  It is no accident that I want to make these connections. There are many excellent essays on the subject of repetition in music and electronic music, which I have listed at the end of this post.  I include the images and links below to show that the influences on my electroacoustic work are not only in music and audio.

In “still” visual art works:

  • The reflected, blurry trees in the water of a pond in Claude Monet's Poplar series create new composite and extended images, a recurring theme in the series.
  • Both the woman and her reflection in Pablo Picasso's Girl Before a Mirror are abstracted; interestingly, the mirror itself is both the vehicle for the repetition and an object depicted in the painting.
  • There are also repetitions, patterns, and "rhythms" in work by Chuck Close, Andy Warhol, Sol LeWitt, M.C. Escher, and many other painters and photographers.

In time-based/performance works:

  • Fase, Four Movements to the Music of Steve Reich is a dance choreographed in 1982 by Anne Teresa De Keersmaeker to four early works by Steve Reich. De Keersmaeker uses shadows with the dancers; the shadows create a third (and fourth and fifth) dancer that shifts in and out of focus, turning the reflected image into a kind of sleight-of-hand partnering with the live dancers.
  • Iteration plays an important role in László Moholy-Nagy's short films, shadow-play constructions, and his Light Space Modulator (1930).
  • Reflection, repetition, and displacement are inherent to the work of countless experimental video artists, starting with Nam June Paik, who work with video synthesis, feedback, and modified TVs and equipment.

Another thing to consider is that natural and nearly exact reflections can also be experienced as beautifully surreal. On a visit to the Okefenokee Swamp in Georgia long ago, my friends and I rode small flat boats on Mirror Lake and felt we were part of a Roger Dean cover for a new Yes album.

Okefenokee Swamp

Natural reflections, even when nearly exact, usually have some small change—a play in the light or color, or slight asymmetry—that gives them away. In all of my examples, the visual reflection is not "the same" as the original.   These nonlinear differences are part of the allure of the images.

These images all relate to how I understand live sound processing to impact my audio sources. Perfect mirrors create surreal new images/objects extending away from the original.  Distorted reflections (anamorphosis) create a more separate identity for the created image, one that can be understood as emanating from the source image but that is inherently different in its new form. Repetition/mirrors: many exact or near-exact copies of the same image/sound form patterns, rhythms, or textures, creating a new composite sound or image.  Phasing/shadows (time-based or time-connected): the reflected image changes over time in its physical placement relative to the original, creating a potentially new composite sound.   Most of these ways of working require more than simple delay and benefit from speed changes, filtering, pitch-shift/time-compression, and other things I will delve into in the coming weeks.

The myths of Echo and Narcissus are both analogies and warning tales for live electroacoustic music.

We should consider the myths of Echo and Narcissus as both analogies and warning tales for live electroacoustic music. When we use delays and reverb, we hear many copies of our own voice/sound overlapping each other, creating simple musical reflections of our own sound, smoothed out by the overlaps and amplified into a more beautiful version of ourselves!  Warning!  Just as when we sing in the shower, we might fall in love with the sound (to the detriment of the overall sound of the music).

Getting Techie Here – How Does Delay Work?

Early Systems: Tape Delay

A drawing of the trajectory of a piece of magnetic tape between the reels, passing the erase, record, and playback heads.

A drawing by Mark Ballora which demonstrates how delay works using a tape recorder. (Image reprinted with permission.)

The earliest method used to artificially create the effect of an echo or simple delay took advantage of the spacing between the record and playback heads on a multi-track tape recorder. The output of the playback head could be fed to the record head and re-recorded onto a different track of the same machine.  That signal would then be read again by the playback head (on its new track), delayed by the amount of time it took the tape to travel from the record head to the playback head.

The delay time is determined by the physical distance between the tape heads and by the tape speed being used.  One limitation is that the available delay times are restricted to those the playback speed of the tape can produce. (e.g. At a tape speed of 15 inches per second (ips), tape heads spaced 3/4 inch to 2 inches apart create echoes of 50ms to 133ms; the same spacing at 7.5 ips yields 100ms to 267ms, etc.)
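The arithmetic is simple enough to sketch:

```python
def tape_delay_ms(head_gap_inches, tape_speed_ips):
    """Tape-head delay = distance between record and playback heads
    divided by tape speed (inches per second), converted to ms."""
    return 1000.0 * head_gap_inches / tape_speed_ips

# at 15 ips, heads 0.75 inches apart give a 50 ms slapback
print(tape_delay_ms(0.75, 15))
```

This also makes the limitation obvious: with fixed heads, the only ways to change the delay are to change the tape speed (which changes fidelity) or physically move the heads.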

Here is an example of analog tape delay in use:

Longer/More delays: By using a second tape recorder, we can make a longer sequence of delays, but it would be difficult to emulate natural echoes and reverberation because all of our delay lengths would be simple multiples of the first delay. Reverbs have a much more complex distribution of many, many small delays, and the output volume of those delays decreases differently in a tape system (more linearly) than it would in a natural acoustic environment (more exponentially).

More noise: Another side effect of creating the delays by re-recording audio is that after many recordings/repetitions the audio signal will start to degrade, affecting its overall spectral qualities, as the high and low frequencies die out more quickly, eventually degrading into, as Hal Chamberlin has aptly described it in his 1985 book Musical Applications of Microprocessors, a “howl with a periodic amplitude envelope.”

Added noise from degradation, overlapped voices, and room acoustics is turned into something beautiful in I Am Sitting In A Room, Alvin Lucier’s seminal 1969 work.  Though not technically using delay, the piece is a slowed-down microcosm of what happens to sound when we overlap and re-record many, many copies of the same sound and its related room acoustics.

A degree of unpredictability certainly enhances the use of any musical device being used for improvisation, including echo and delay. Digital delay makes it possible to overcome the inherent inflexibility and static quality of most tape delay systems, which remain popular for other reasons (e.g. audio quality or nostalgia as noted above).

The list of influential pieces using a tape machine for delay is canonically long.  A favorite of mine is Terry Riley’s piece, Music for the Gift (1963), written for trumpeter Chet Baker. It was the first use of very long delays on two tape machines, something Riley dubbed the “Time Lag Accumulator.”

Terry Riley: Music for the Gift III with Chet Baker

Tape delay was used by Pauline Oliveros and others from the San Francisco Tape Music Center for pieces that were created live as well as in the studio, with no overdubs, and which therefore could be considered performances and not just recordings.   The Echoplex, created around 1959, was one of the first commercially manufactured tape delay systems and was widely used in the ‘60s and ‘70s. Advances in the design of commercial tape delays, including the addition of more (and moveable) tape heads, increased the number of available delays and the flexibility to change delay times on the fly.

Stockhausen’s Solo (1966), for soloist and “feedback system,” was first performed live in Tokyo using seven tape recorders (the “feedback” system) with specially adjustable tape heads to allow music played by the soloist to “return” at various delay times and combinations throughout the piece.  Though technically not improvised, Solo is an early example of tape music for performed “looping.”  All the music was scored, and a choice of which tape recorders would be used and when was determined prior to each performance.

I would characterize the continued use of analog tape delay as nostalgia.

Despite many advances in tape delay, digital delay is much more commonly used today, whether in an external pedal unit or computer-based. This is because it is convenient (smaller, lighter, and easier to carry around) and because it is much more flexible. Multiple outputs don’t require multiple tape heads or more tape recorders. Digital delay enables quick access to a greater range of delay times, and the maximum delay time is simply a function of the available memory (and memory is much cheaper than it used to be).   Yet, in spite of the convenience and expandability of digital delay, there is continued use of analog tape delay in some circles.  I would simply characterize this as nostalgia (for the physicality of the older devices and of handling analog tape, and for the warmth of analog sound, all of which we associate with music from an earlier time).

What is a Digital Delay?

Delay is the most basic component of most digital effects systems, and so it’s critical to discuss it in some detail before moving on to some of the effects that are based upon it.   Below, and in my next post, I’ll also discuss some physical and perceptual phenomena that need to be taken into consideration when using delay as a performance tool / ersatz instrument.

Basic Design

In the simplest terms, a “delay” is simple digital storage.  One audio sample, or a small block of samples, is stored in memory and can then be read back and played at some later time as output. A one-second (1000ms) mono delay requires storing one second of audio. (At the 16-bit CD sample rate of 44.1kHz, this means about 88 KB of data.) These sizes are tiny by today’s standards, but if we use many delays, or very long delays, it adds up. (It is not infinite or magic!)
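
The store-then-read-back cycle described above can be sketched as a minimal circular buffer. This is a hypothetical illustration of my own (not any particular product’s implementation): each incoming sample overwrites the oldest stored sample, which is read out just before being overwritten.

```python
class DelayLine:
    """Minimal circular-buffer delay: each call returns the sample
    that was written delay_samples calls ago."""
    def __init__(self, delay_samples):
        self.buf = [0.0] * delay_samples  # storage for the delayed audio
        self.pos = 0

    def process(self, x):
        y = self.buf[self.pos]            # read the oldest stored sample
        self.buf[self.pos] = x            # overwrite it with the new input
        self.pos = (self.pos + 1) % len(self.buf)
        return y

# A 1-second mono delay at 44.1kHz would need a 44,100-sample buffer
# (44,100 samples x 2 bytes at 16 bits = ~88 KB, as noted above).
d = DelayLine(3)  # a tiny 3-sample delay, just to show the behavior
print([d.process(x) for x in [1, 2, 3, 4, 5]])  # [0.0, 0.0, 0.0, 1, 2]
```

The input reappears at the output exactly delay_samples later; everything else in this post (echo, feedback, filtering) is built on top of this one mechanism.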

Besides being used to create many types of echo-like effects, a simple one-sample delay is also a key component of the underlying structure of all digital filters and many reverbs.  An important distinction between these applications is the length of the delay: as described below, when a delay time is short, the input sound gets filtered, and with longer delay times other effects such as echo can be heard.

Perception of Delay — Haas (a.k.a. Precedence) Effect

Did you ever drop a pin on the floor?   You can’t see it, but you still know exactly where it is. We humans naturally have a set of skills for sound localization.  These psychoacoustic phenomena have to do with how we perceive the very small time, volume, and timbre differences between the sounds arriving at our two ears.

In 1949, Helmut Haas made observations about how humans localize sound by using simple delays of various lengths and a simple 2-speaker system.  He played the same sound (speech, short test tones), at the same volume, out of both speakers. When the two sounds were played simultaneously (no delay), listeners reported hearing the sound as if it were coming from the center point between the speakers (an audio illusion not very different from how we see).  His findings give us some clues about stereo sound and how we know where sounds are coming from.  They also relate to how we work with delays in music.

  • Between 1-10ms delay: If the delay between the sounds is anywhere from 1ms to 10ms, the sound appears to emanate from the first speaker (we locate the sound wherever the first sound we hear comes from).
  • Between 10-30ms delay: The sound source continues to be heard as coming from the primary (first sounding) speaker, with the delay/echo adding a “liveliness” or “body” to the sound. This is similar to what happens in a concert hall—listeners are aware of the reflected sounds but don’t hear them as separate from the source.
  • Between 30-50ms delay: The listener becomes aware of the delayed signal, but still senses the direct signal as the primary source. (Think of the sound in a big-box store: “Attention shoppers!”)
  • At 50ms or more: A discrete echo is heard, distinct from the first heard sound, and this is what we often refer to as a “delay” or slap-back echo.

The important fact here is that when the delay between the speakers is lowered to 10ms (1/100th of a second), the delayed sound is no longer perceived as a discrete event. This is true even when the volume of the delayed sound is the same as the direct signal. [Haas, “The Influence of a Single Echo on the Audibility of Speech” (1949)].

A diagram of the Haas effect showing how the position of the listener in relationship to a sound source affects the perception of that sound source.

The Haas Effect (a.k.a. Precedence Effect) is related to our skill set for sound localization and other psychoacoustic phenomena. Learning a little about these phenomena (Interaural Time Difference, Interaural Level Difference, and Head Shadow) is useful not only for audio engineers, but is also important for us when considering the effects and uses of delay in electroacoustic musical contexts.

What if I Want More Than One?

Musicians usually want the option to play more than one delayed sound, or to repeat their sound several times. We can do this by adding more delays, or by using feedback: routing a portion of our output right back into the input. (Delaying our delayed sound is something like an audio hall of mirrors.) We usually route back only some of the sound (not 100%), so that each repetition is a little quieter and the sound eventually dies out in decaying echoes.  If our feedback level is high, the sound may recirculate for a while in an endless repeat, and may even overload/clip if new sounds are added.
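
The decaying-echo behavior is just repeated multiplication by the feedback amount. A quick sketch (hypothetical names, assuming a constant feedback gain below 1.0):

```python
def feedback_echoes(amplitude, feedback_gain, passes):
    """Amplitude of each successive echo when a fraction of the
    output is routed back to the input."""
    return [amplitude * feedback_gain ** n for n in range(1, passes + 1)]

# With 60% feedback, each repeat is 0.6x the previous one:
print([round(a, 3) for a in feedback_echoes(1.0, 0.6, 5)])
# [0.6, 0.36, 0.216, 0.13, 0.078]
```

With a gain at or above 1.0 the series no longer decays, which is the “endless repeat” (and potential overload) case mentioned above.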

When two or more copies of the same sound event play at nearly the same time, they comb filter each other. Our sensitivity to the small differences in timbre that result is a key to understanding, for instance, why the many reflections in a performance space don’t usually get mistaken for the real thing (the direct sound).   Likewise, if we work with multiple delays or feedback, the multiple copies of the same sound playing over each other necessarily interact and filter each other, causing changes in the timbre. (This relates again to I Am Sitting In A Room.)
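
For the curious, the comb-filter cancellations are easy to predict: when a sound is mixed with a copy delayed by time t, the notches fall at odd multiples of 1/(2t). A small sketch (my own helper, purely illustrative):

```python
def comb_notches(delay_ms, count=4):
    """First few cancellation (notch) frequencies in Hz when a signal
    is mixed with a copy of itself delayed by delay_ms.
    Notches occur at odd multiples of 1/(2 * delay)."""
    return [(2 * k + 1) * 1000.0 / (2.0 * delay_ms) for k in range(count)]

# A 1ms delay carves notches at 500, 1500, 2500, 3500 Hz ...
print(comb_notches(1.0))  # [500.0, 1500.0, 2500.0, 3500.0]
```

This is why very short delays read as a change in timbre rather than as an echo: the evenly spaced notches (the “comb”) recolor the spectrum of the sound.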

In the end, all of the above (delay length, the use of feedback or additional delays, overlap) determine how we perceive the music we make using delays as a musical instrument. I will discuss feedback and room acoustics and their potential role as musical devices in the next post later this month.

My Aesthetics of Delay

To close this post, here are some opinionated conclusions of mine based upon what I have read/studied and borne out in many, many sessions working with other people’s sounds.

  • Short delay times tend to change our perception of the sound: its timbre, and its location.
  • Sounds that are delayed longer than 50ms (or even up to 100ms for some musical sounds) become echoes, or musically speaking, textures.
  • At the in-between delay times (the 30-50ms range, give or take a little), it is the input (the performed sound itself) that determines what will happen. Speech sounds or other percussive sounds with a lot of transients (high amplitude, short duration) will respond differently than long resonant tones (which will likely overlap and be filtered). It is precisely in this domain that the live sound-processing musician needs to do extra listening and evaluating to gain experience and predict what the outcome might be. Knowing what might happen in many different scenarios is critical to creating a playable sound-processing “instrument.”

It’s About the Overlap

Using feedback on long delays, we create texture or density, as we overlap sounds and/or extend the echoes to create rhythm.  With shorter delays, using feedback instead can be a way to move toward the resonance and filtering of a sound.  With extremely short delays, control over feedback to create resonance is a powerful way to create predictable, performable, electronic sounds from nearly any source. (More on this in the next post.)

Live processing (for me) all boils down to small differences in delay times.

Live processing (for me) all boils down to these small differences in delay times—between an original sound and its copy (very short, medium and long delays).  It is a matter of the sounds overlapping in time or not.   When they overlap (due to short delay times or use of feedback) we hear filtering.   When the sounds do not overlap (delay times are longer than the discrete audio events), we hear texture.   A good deal of my own musical output depends on these two facts.
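
That overlap-or-not distinction can be stated as a one-line rule of thumb (a deliberate simplification of my own, not a complete model):

```python
def delay_result(delay_ms, event_duration_ms):
    """Rough prediction: copies that overlap in time filter each other;
    copies that don't overlap are heard as discrete repeats (texture)."""
    if delay_ms < event_duration_ms:
        return "filtering (copies overlap)"
    return "texture (discrete repeats)"

# A 20ms delay on a 500ms sustained tone: the copies overlap.
print(delay_result(20, 500))    # filtering (copies overlap)
# A 300ms delay on a 50ms percussive hit: discrete echoes.
print(delay_result(300, 50))    # texture (discrete repeats)
```

Feedback complicates this (it extends the effective overlap), but as a first approximation this is the decision I am making, by ear, in performance.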

Some Further Reading and Listening

On Sound Perception of Rhythm and Duration

Karlheinz Stockhausen’s 1972 lecture The Four Criteria of Electronic Music (Part I)
(I find intriguing Stockhausen’s discussion of unified time structuring and his description of the continuum of rhythms: from those played very fast (creating timbre), to medium fast (heard as rhythms), to very very slow (heard as form). This lecture both expanded and confirmed my long-held ideas about the perceptual boundaries between short and long repetitions of sound events.)

Pierre Schaeffer’s 1966 Solfège de l’Objet Sonore
(A superb book and accompanying CDs with 285 tracks of example audio. Particularly useful for my work and the discussion above are sections on “The Ear’s Resolving Power” and “The Ear’s Time Constant” and many other of his findings and examples. [Ed. note: Andreas Bick has written a nice blog post about this.])

On Repetition in All Its Varieties

Jean-Francois Augoyard and Henri Torgue, Sonic Experience: a Guide to Everyday Sounds (McGill-Queen’s University Press, 2014)
(See their terrific chapters on “Repetition”, “Resonance” and “Filtration”)

Elizabeth Hellmuth Margulis, On Repeat: How Music Plays the Mind (Oxford University Press, 2014)

Ben Ratliff, Every Song Ever (Farrar, Straus and Giroux, 2016)
(Particularly the chapter “Let Me Listen: Repetition”)

Other Recommended Reading

Bob Gluck’s book You’ll Know When You Get There: Herbie Hancock and the Mwandishi Band (University of Chicago Press, 2014)

Michael Peters’s essay “The Birth of the Loop”

Phil Taylor’s essay “History of Delay”

My chapter “What if your instrument is Invisible?” in the 2017 book Musical Instruments in the 21st Century as well as my 2010 Leonardo Music Journal essay “A View on Improvisation from the Kitchen Sink” co-written with Hans Tammen.

(A musician community built site around the concept of live looping with links to tools, writing, events, etc.)

Some listening

John Schaefer’s WNYC radio program “New Sounds” has featured several episodes on looping:
Looping and Delays
Just Looping Strings
Delay Music

And finally something to hear and watch…

Stockhausen’s former assistant Volker Müller performing on generator, radio, and three tape machines