Tag: signal processing

Delays as Music

As I wrote in my previous post, I view performing with “live sound processing” as a way to make music by altering and affecting the sounds of acoustic instruments—live, in performance—and to create new sounds, often without the use of pre-recorded audio. These new sounds have the potential to forge an independent and unique voice in a musical performance. However, their creation requires, especially in improvised music, a unique set of musicianship skills and knowledge of the underlying acoustics and technology being used. And it requires that we consider the acoustic environment and spectral qualities of the performance space.

Delays and Repetition in Music

The use of delays in music is ubiquitous.  We use delays to locate a sound’s origin, to create a sense of size and space, to mark musical time, to create rhythm, and to delineate form.

The use of delays in music is ubiquitous.

As a musical device, echo (or delay) predates electronic music. It has been used in folk music around the world for millennia for the repetition of short phrases: from Swiss yodels to African call and response, for songs in the round and complex canons, as well as in performances sometimes taking advantage of unusual acoustic spaces (e.g. mountains/canyons, churches, and unusual buildings).

In contemporary music, too, delay and reverb effects from unusual acoustic spaces have featured in the Deep Listening cavern music of Pauline Oliveros, in experiments using the infinite reverb of the Tower of Pisa (Leonello Tarbella’s Siderisvox), and in organ work at the Cathedral of St. John the Divine in New York that takes advantage of its seven-second delay. For something new, I’ll recommend the forthcoming work of my colleague, trombonist Jen Baker (Silo Songs).

Of course, delay was also an important tool in the early studio tape experiments of Pierre Schaeffer (Étude aux chemins de fer) as well as of Terry Riley and Steve Reich. The list of early works using analog and digital delay systems in live performance is long and encompasses many genres of music outside the scope of this post—from Robert Fripp’s Frippertronics to Miles Davis’s electric bands (where producer Teo Macero altered the sound of Sonny Sharrock’s guitar and many other instruments) and Herbie Hancock’s later Mwandishi Band.

The use of delays changed how the instrumentalists in those bands played.  In Miles’s work we hear not just the delays, but also improvised instrumental responses to the sounds of the delays and—completing the circle—the electronics performers responding in kind by manipulating their delays. Herbie Hancock used delays to expand the sound of his own electric Rhodes, and as Bob Gluck has pointed out (in his 2014 book You’ll Know When You Get There: Herbie Hancock and the Mwandishi Band), he “intuitively realized that expressive electronic musicianship required adaptive performance techniques.” This is something I hope we can take for granted now.

I’m skipping any discussion of the use of echo and delay in other styles (as part of the roots of Dub, ambient music, and live looping) in favor of talking about the techniques themselves, independent of the trappings of a specific genre, and favoring how they can be “performed” in improvisation and as electronic musical sounds rather than effects.

Sonny Sharrock processed through an Echoplex by Teo Macero on Miles Davis’s “Willie Nelson” (which is not unlike some recent work by Johnny Greenwood)

By using electronic delays to create music, we can create exact copies or severely altered versions of our source audio and still recognize them as repetitions, just as we might recognize each repetition of the theme in a piece organized as a theme and variations, or a leitmotif repeated throughout a work. Besides the relationship of delays to acoustic music, the vastly different types of sounds that we can create via these sonic reflections and repetitions have a corollary in visual art, both conceptually and gesturally. I find these analogies especially useful when teaching. The comparisons from the visual and performing arts that have inspired my own work include images, video, and dance pieces: repetitions (exact or distorted), Mandelbrot-like recursion (reflections altered or displaced and re-reflected), shadows, and delays.  The examples below are analogous to many sound processes I find possible and interesting for live performance.

Sounds we create via sonic reflections and repetitions have a corollary in visual art.

I am a musician, not an art critic or theorist, but I grew up in New York, being taken to MoMA weekly by my mother, a modern dancer who studied with Martha Graham and José Limon.  It is not an accident that I want to make these connections. There are many excellent essays on the subject of repetition in music and electronic music, which I have listed at the end of this post.  I include the images and links below as a way to denote that the influences in my electroacoustic work are not only in music and audio.

In “still” visual art works:

  • The reflected, blurry trees in the water of a pond in Claude Monet’s Poplar series create new composite and extended images, a recurring theme in the series.
  • Both the woman and her reflection in Pablo Picasso’s Girl Before a Mirror are abstracted, and, interestingly, the mirror itself is both the vehicle for the reiteration and a depicted object in its own right.
  • There are also repetitions, patterns, and “rhythms” in work by Chuck Close, Andy Warhol, Sol Lewitt, M.C. Escher, and many other painters and photographers.

In time-based/performance works:

  • Fase, Four Movements to the Music of Steve Reich, is a dance choreographed in 1982 by Anne Teresa De Keersmaeker to early works by Steve Reich. De Keersmaeker uses the dancers’ shadows: they create a third (and fourth and fifth) dancer that shifts in and out of focus, turning the reflected image—presented as partnering with the live dancers—into a kind of sleight-of-hand.
  • Iteration plays an important role in László Moholy-Nagy’s short films, shadow play constructions, and his Light Space Modulator (1930).
  • Reflection/repetition/displacement are inherent to the work of countless experimental video artists, starting with Nam June Paik, who work with video synthesis, feedback and modified TVs/equipment.

Natural and nearly exact reflections can also be experienced as beautifully surreal. On a visit to the Okefenokee swamp in Georgia long ago, my friends and I rode in small flat boats on Mirror Lake and felt we were part of a Roger Dean cover for a new Yes album.

Okefenokee Swamp


Natural reflections, even when nearly exact, usually have some small change—a play in the light or color, or a slight asymmetry—that gives them away. In all of my examples, the visual reflection is not “the same” as the original. These nonlinear differences are part of the allure of the images.

These images all relate to how I understand live sound processing to act upon my audio sources:

  • Perfect mirrors create surreal new images/objects extending away from the original.
  • Distorted reflections (anamorphosis) create a more separate identity for the new image—one that can be understood as emanating from the source, but that is inherently different in its new form.
  • Repetition/mirrors: many exact or near-exact copies of the same image/sound form patterns, rhythms, or textures, creating a new composite sound or image.
  • Phasing/shadows (time-based or time-connected): the reflected image changes over time in its physical placement with regard to the original, potentially creating a new composite sound.

Most of these ways of working require more than a simple delay and benefit from speed changes, filtering, pitch-shift/time-compression, and other techniques I will delve into in the coming weeks.

The myths of Echo and Narcissus are both analogies and warning tales for live electroacoustic music.

We should consider the myths of Echo and Narcissus both as analogies and as warning tales for live electroacoustic music. When we use delays and reverb, we hear many copies of our own voice/sound overlapping each other, and we create simple musical reflections of our own sound, smoothed out by the overlaps and amplified into a more beautiful version of ourselves!  Warning!  Just as when we sing in the shower, we might fall in love with the sound (to the detriment of the overall sound of the music).


Getting Techie Here – How Does Delay Work?

Early Systems: Tape Delay

A drawing of the trajectory of a piece of magnetic tape between the reels, passing the erase, record, and playback heads.

A drawing by Mark Ballora which demonstrates how delay works using a tape recorder. (Image reprinted with permission.)

The earliest method used to artificially create the effect of an echo or simple delay was to take advantage of the spacing between the record and playback heads on a multi-track tape recorder. The output of the playback head could be fed back to the record head and re-recorded on a different track of the same machine.  That signal would then be read again by the playback head (on its new track), delayed by the amount of time it took the tape to travel from the record head to the playback head.

The delay time is determined by the physical distance between the tape heads and by the tape speed being used.  One limitation is that the available delay times are tied to the playback speeds of the tape machine. (e.g. At a tape speed of 15 inches per second (ips), tape heads spaced 3/4 to 2 inches apart can create echoes of 50ms to 133ms; the same spacing at 7 ips yields 107ms to 285ms, etc.)
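
To make that arithmetic concrete, here is a minimal Python sketch (my own illustration—the function name and values are hypothetical, chosen to match the examples above): delay time is simply head spacing divided by tape speed.

```python
def tape_delay_ms(head_spacing_inches: float, tape_speed_ips: float) -> float:
    """Delay time (ms) for a given record-to-playback head spacing and tape speed."""
    return head_spacing_inches / tape_speed_ips * 1000.0

for speed in (15.0, 7.0):            # tape speeds in inches per second
    for spacing in (0.75, 2.0):      # head spacings in inches
        ms = tape_delay_ms(spacing, speed)
        print(f'{spacing}" heads at {speed} ips -> {ms:.0f} ms')
```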

Here is an example of analog tape delay in use:

Longer/More delays: By using a second tape recorder, we can make a longer sequence of delays, but it would be difficult to emulate natural echoes and reverberation, because all our delay lengths would be simple multiples of the first delay. Reverbs have a much more complex distribution of many, many small delays, and the output volume of those delays decreases differently in a tape system (more linearly) than it would in a natural acoustic environment (more exponentially).
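
A rough numerical sketch of that contrast (my own illustration; the values are hypothetical, not measurements of any actual machine or room): chained tape delays land at simple multiples of one delay time with roughly linear level drops, while a room produces a dense, irregular cloud of reflections with a roughly exponential decay.

```python
import numpy as np

def tape_style_taps(base_delay_ms: float, num_taps: int = 4):
    """Tap times at simple multiples of the base delay; levels fall off linearly."""
    return [(base_delay_ms * (i + 1), 1.0 - i / num_taps) for i in range(num_taps)]

def room_style_taps(rt60_ms: float = 800.0, num_taps: int = 200, seed: int = 0):
    """Irregularly spaced reflections whose levels decay exponentially (very crudely)."""
    rng = np.random.default_rng(seed)
    times = np.sort(rng.uniform(5.0, rt60_ms, num_taps))
    levels = 10.0 ** (-3.0 * times / rt60_ms)   # reaches -60 dB at rt60_ms
    return list(zip(times, levels))

print(tape_style_taps(120.0))   # -> taps at 120, 240, 360, 480 ms
```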

More noise: Another side effect of creating the delays by re-recording audio is that after many recordings/repetitions the audio signal will start to degrade, affecting its overall spectral qualities, as the high and low frequencies die out more quickly, eventually degrading into, as Hal Chamberlin has aptly described it in his 1985 book Musical Applications of Microprocessors, a “howl with a periodic amplitude envelope.”

Added noise from degradation and overlapped voice and room acoustics is turned into something beautiful in I Am Sitting In A Room, Alvin Lucier’s seminal 1969 work.  Though not technically using delay, the piece is a slowed down microcosm of what happens to sound when we overlap / re-record many many copies of the same sound and its related room acoustics.

A degree of unpredictability certainly enhances the use of any musical device being used for improvisation, including echo and delay. Digital delay makes it possible to overcome the inherent inflexibility and static quality of most tape delay systems, which nonetheless remain popular for other reasons (e.g. audio quality, or the nostalgia discussed below).

The list of influential pieces using a tape machine for delay is canonically long.  A favorite of mine is Terry Riley’s piece, Music for the Gift (1963), written for trumpeter Chet Baker. It was the first use of very long delays on two tape machines, something Riley dubbed the “Time Lag Accumulator.”

Terry Riley: Music for the Gift III with Chet Baker

Tape delay was used by Pauline Oliveros and others from the San Francisco Tape Music Center for pieces that were created live as well as in the studio, with no overdubs, and which therefore could be considered performances and not just recordings.   The Echoplex, created around 1959, was one of the first commercially manufactured tape delay systems and was widely used in the ‘60s and ‘70s. Advances in the design of commercial tape delays—including the addition of more (and movable) tape heads—increased the number of delays and the flexibility to change delay times on the fly.

Stockhausen’s Solo (1966), for soloist and “feedback system,” was first performed live in Tokyo using seven tape recorders (the “feedback” system) with specially adjustable tape heads to allow music played by the soloist to “return” at various delay times and combinations throughout the piece.  Though technically not improvised, Solo is an early example of tape music for performed “looping.”  All the music was scored, and a choice of which tape recorders would be used and when was determined prior to each performance.

I would characterize the continued use of analog tape delay as nostalgia.

Despite many advances in tape delay, today digital delay is much more commonly used, whether in an external pedal unit or computer-based. This is because it is convenient—smaller, lighter, and easier to carry around—and because it is much more flexible. Multiple outputs don’t require multiple tape heads or more tape recorders. Digital delay enables quick access to a greater range of delay times, and the maximum delay time is simply a function of the available memory (and memory is much cheaper than it used to be).   Yet, in spite of the convenience and expandability of digital delay, there is continued use of analog tape delay in some circles.  I would simply characterize this as nostalgia (for the physicality of the older devices and of handling analog tape, and for the warmth of analog sound—all of which we associate with music from an earlier time).

What is a Digital Delay?

Delay is the most basic component of most digital effects systems, and so it’s critical to discuss it in some detail before moving on to some of the effects that are based upon it.   Below, and in my next post, I’ll also discuss some physical and perceptual phenomena that need to be taken into consideration when using delay as a performance tool / ersatz instrument.

Basic Design

In the simplest terms, a “delay” is simple digital storage.  One audio sample, or a small block of samples, is stored in memory and then read back and played at some later time as output. A one-second (1000ms) mono delay requires storing one second of audio. (At the 16-bit, 44.1kHz CD sample rate, this means about 88 KB of data.) These sizes are tiny by today’s standards, but if we use many delays or very long delays it adds up. (It is not infinite or magic!)
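
Here is a minimal sketch of that storage idea as a circular buffer (my own illustration in Python with numpy; the class name and 44.1kHz default are assumptions, not any particular product’s implementation):

```python
import numpy as np

class DelayLine:
    """A bare-bones digital delay: write incoming samples into a circular
    buffer and read back whatever was stored one buffer-length ago."""

    def __init__(self, delay_seconds: float, sample_rate: int = 44100):
        self.buffer = np.zeros(max(1, int(delay_seconds * sample_rate)), dtype=np.float32)
        self.index = 0  # shared read/write position

    def process(self, x: float) -> float:
        y = float(self.buffer[self.index])  # sample stored delay_seconds ago
        self.buffer[self.index] = x         # overwrite it with the new input
        self.index = (self.index + 1) % len(self.buffer)
        return y

# One second of mono 16-bit audio at 44.1kHz is 44,100 samples x 2 bytes ~= 88 KB.
```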

Besides being used to create many types of echo-like effects, a simple one-sample delay is also a key component of the underlying structure of all digital filters and of many reverbs.  An important distinction between these applications is the length of the delay: as described below, when the delay time is short, the input sound gets filtered; with longer delay times, other effects such as echo can be heard.
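
As a toy illustration of a one-sample delay acting as a filter component (my own sketch of the general principle, not any particular filter design): averaging a signal with a copy of itself delayed by one sample yields a gentle lowpass.

```python
import numpy as np

def one_sample_delay_filter(x: np.ndarray) -> np.ndarray:
    """y[n] = 0.5 * (x[n] + x[n-1]): mixing the input with a one-sample
    delayed copy cancels energy near the Nyquist frequency, i.e. a lowpass."""
    delayed = np.concatenate(([0.0], x[:-1]))  # x shifted by one sample
    return 0.5 * (x + delayed)
```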

Perception of Delay — Haas (a.k.a. Precedence) Effect

Did you ever drop a pin on the floor?   You can’t see it, but you still know exactly where it landed. We humans naturally have a set of skills for sound localization.  These psychoacoustic phenomena have to do with how we perceive the very small time, volume, and timbre differences between the sounds arriving at our two ears.

In 1949, Helmut Haas made observations about how humans localize sound, using simple delays of various lengths and a simple two-speaker system.  He played the same sound (speech, short test tones), at the same volume, out of both speakers. When the two sounds were played simultaneously (no delay), listeners reported hearing the sound as if it were coming from the center point between the speakers (an audio illusion not very different from how we see).  His findings give us some clues about stereo sound and how we know where sounds are coming from.  They also relate to how we work with delays in music.

  • Between 1-10ms delay: If the delay between the sounds is anywhere from 1ms to 10ms, the sound appears to emanate from the first speaker (we locate the sound wherever we hear it first).
  • Between 10-30ms delay: The sound source continues to be heard as coming from the primary (first sounding) speaker, with the delay/echo adding a “liveliness” or “body” to the sound. This is similar to what happens in a concert hall—listeners are aware of the reflected sounds but don’t hear them as separate from the source.
  • Between 30-50ms delay: The listener becomes aware of the delayed signal, but still senses the direct signal as the primary source. (Think of the sound in a big box store “Attention shoppers!”)
  • At 50ms or more: A discrete echo is heard, distinct from the first heard sound, and this is what we often refer to as a “delay” or slap-back echo.

The important fact here is that when the delay between speakers is lowered to 10ms (1/100th of a second) or less, the delayed sound is no longer perceived as a discrete event. This is true even when the volume of the delayed sound is the same as the direct signal. [Haas, “The Influence of a Single Echo on the Audibility of Speech” (1949)].
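
To try this yourself, here is a small sketch (my own illustration with numpy; the function name and 44.1kHz rate are assumptions) that builds a stereo pair from a mono signal with one channel delayed by a few milliseconds—with equal levels and a delay in the 1–10ms range, the sound tends to be localized toward the earlier channel:

```python
import numpy as np

def haas_pan(mono: np.ndarray, delay_ms: float, sample_rate: int = 44100) -> np.ndarray:
    """Return a (num_samples, 2) stereo array: the right channel is a copy of
    the left, delayed by delay_ms at equal level (the Haas/precedence setup)."""
    d = int(sample_rate * delay_ms / 1000.0)
    left = np.concatenate((mono, np.zeros(d, dtype=mono.dtype)))
    right = np.concatenate((np.zeros(d, dtype=mono.dtype), mono))
    return np.stack((left, right), axis=1)
```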

A diagram of the Haas effect showing how the position of the listener in relationship to a sound source affects the perception of that sound source.

The Haas Effect (a.k.a. Precedence Effect) is related to our skill set for sound localization and other psychoacoustic phenomena. Learning a little about these phenomena (Interaural Time Difference, Interaural Level Difference, and Head Shadow) is useful not only for an audio engineer, but is also important for us when considering the effects and uses of delay in Electroacoustic musical contexts.

What if I Want More Than One?

Musicians usually want the choice to play more than one delayed sound, or to repeat their sound several times. We do this by adding more delays, or by using feedback—routing a portion of our output right back into the input. (Delaying our delayed sound is something like an audio hall of mirrors.) We usually route back only some of the sound (not 100%), so that each repetition is a little quieter and the sound eventually dies out in decaying echoes.  If our feedback level is high, the sound may recirculate for a while in an endless repeat, and may even overload/clip if new sounds are added.
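
A bare-bones sketch of that feedback routing (my own illustration; the function name and parameter values are hypothetical): each pass through the loop is scaled by a feedback coefficient below 1.0, so the echoes decay.

```python
import numpy as np

def feedback_echo(x, delay_ms: float, feedback: float = 0.5,
                  sample_rate: int = 44100) -> np.ndarray:
    """y[n] = x[n] + feedback * y[n - d]: a single delay whose output is
    routed back to its input, producing a train of decaying repeats."""
    d = max(1, int(sample_rate * delay_ms / 1000.0))
    y = np.zeros(len(x) + 10 * d)            # leave room for the echo tail
    for n in range(len(y)):
        dry = x[n] if n < len(x) else 0.0    # input, then silence for the tail
        y[n] = dry + (feedback * y[n - d] if n >= d else 0.0)
    return y
```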

When two or more copies of the same sound event play at nearly the same time, they comb filter each other. Our sensitivity to the small differences in timbre that result is key to understanding, for instance, why the many reflections in a performance space don’t usually get mistaken for the real thing (the direct sound).   Likewise, if we work with multiple delays or feedback, the multiple copies of the same sound playing over each other necessarily interact and filter each other, causing changes in the timbre. (This relates again to I Am Sitting In A Room.)
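
For a sense of the numbers involved, here is a small sketch (my own; it assumes the simplest case of a signal summed with a single, equal-level delayed copy) listing where the cancellations of such a comb fall:

```python
def comb_notch_frequencies(delay_ms: float, max_hz: float = 20000.0) -> list:
    """For y[n] = x[n] + x[n - d], cancellations fall at odd multiples of
    1 / (2 * delay time) -- the 'teeth' that color the combined timbre."""
    delay_s = delay_ms / 1000.0
    f = 1.0 / (2.0 * delay_s)
    notches = []
    while f < max_hz:
        notches.append(f)
        f += 1.0 / delay_s
    return notches

print(comb_notch_frequencies(1.0)[:3])   # 1 ms delay -> notches at 500, 1500, 2500 Hz
```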

In the end, all of the above—delay length, the use of feedback or additional delays, overlap—determines how we perceive the music we make when using delays as a musical instrument. I will discuss feedback and room acoustics, and their potential role as a musical device, in the next post later this month.


My Aesthetics of Delay

To close this post, here are some opinionated conclusions of mine based upon what I have read/studied and borne out in many, many sessions working with other people’s sounds.

  • Short delay times tend to change our perception of the sound: its timbre, and its location.
  • Sounds that are delayed longer than 50ms (or even up to 100ms for some musical sounds) become echoes, or musically speaking, textures.
  • At the in-between delay times (the 30-50ms range, give or take a little) it is the input (the performed sound itself) that determines what will happen. Speech sounds or other percussive sounds with a lot of transients (high amplitude, short duration) will respond differently than long resonant tones (which will likely overlap and be filtered). It is precisely in this domain that the live sound-processing musician needs to do extra listening and evaluating to gain experience and predict what the outcome might be. Knowing what might happen in many different scenarios is critical to creating a playable sound processing “instrument.”

It’s About the Overlap

Using feedback on long delays, we create texture or density, as we overlap sounds and/or extend the echoes to create rhythm.  With shorter delays, using feedback instead can be a way to move toward the resonance and filtering of a sound.  With extremely short delays, control over feedback to create resonance is a powerful way to create predictable, performable, electronic sounds from nearly any source. (More on this in the next post.)
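
Here is a minimal sketch of that last idea (my own illustration; the function name and values are hypothetical): an extremely short delay with heavy feedback behaves as a pitched resonator, ringing near sample_rate / delay_in_samples—essentially the feedback comb at the heart of Karplus–Strong-style string models.

```python
import numpy as np

def short_delay_resonator(x, freq_hz: float, feedback: float = 0.95,
                          sample_rate: int = 44100) -> np.ndarray:
    """A very short feedback delay: the loop reinforces energy at multiples of
    sample_rate / d, so the output rings at (roughly) freq_hz."""
    d = max(1, int(round(sample_rate / freq_hz)))
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n] + (feedback * y[n - d] if n >= d else 0.0)
    return y

# e.g. excite it with a short noise burst followed by silence to hear the ring:
# burst = np.concatenate((np.random.randn(500) * 0.1, np.zeros(44100)))
# ring = short_delay_resonator(burst, 220.0)
```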

Live processing (for me) all boils down to small differences in delay times.

Live processing (for me) all boils down to these small differences in delay times—between an original sound and its copy (very short, medium and long delays).  It is a matter of the sounds overlapping in time or not.   When they overlap (due to short delay times or use of feedback) we hear filtering.   When the sounds do not overlap (delay times are longer than the discrete audio events), we hear texture.   A good deal of my own musical output depends on these two facts.


Some Further Reading and Listening

On Sound Perception of Rhythm and Duration

Karlheinz Stockhausen’s 1972 lecture The Four Criteria of Electronic Music (Part I)
(I find intriguing Stockhausen’s discussion of unified time structuring and his description of the continuum of rhythms: from those played very fast (creating timbre), to medium fast (heard as rhythms), to very very slow (heard as form). This lecture both expanded and confirmed my long-held ideas about the perceptual boundaries between short and long repetitions of sound events.)

Pierre Schaeffer’s 1966 Solfège de l’Objet Sonore
(A superb book and accompanying CDs with 285 tracks of example audio. Particularly useful for my work and the discussion above are sections on “The Ear’s Resolving Power” and “The Ear’s Time Constant” and many other of his findings and examples. [Ed. note: Andreas Bick has written a nice blog post about this.])

On Repetition in All Its Varieties

Jean-Francois Augoyard and Henri Torgue, Sonic Experience: a Guide to Everyday Sounds (McGill-Queen’s University Press, 2014)
(See their terrific chapters on “Repetition”, “Resonance” and “Filtration”)

Elizabeth Hellmuth Margulis, On Repeat: How Music Plays the Mind (Oxford University Press, 2014)

Ben Ratliff, Every Song Ever (Farrar, Straus and Giroux, 2016)
(Particularly the chapter “Let Me Listen: Repetition”)

Other Recommended Reading

Bob Gluck’s book You’ll Know When You Get There: Herbie Hancock and the Mwandishi Band (University of Chicago Press, 2014)

Michael Peters’s essay “The Birth of the Loop”
http://preparedguitar.blogspot.de/2015/04/the-birth-of-loop-by-michael-peters.html

Phil Taylor’s essay “History of Delay”

My chapter “What if Your Instrument is Invisible?” in the 2017 book Musical Instruments in the 21st Century, as well as my 2010 Leonardo Music Journal essay “A View on Improvisation from the Kitchen Sink,” co-written with Hans Tammen.

LiveLooping.org
(A musician community built site around the concept of live looping with links to tools, writing, events, etc.)

Some listening

John Schaefer’s WNYC radio program “New Sounds” has featured several episodes on looping.
Looping and Delays
Just Looping Strings
Delay Music

And finally something to hear and watch…

Stockhausen’s former assistant Volker Müller performing on generator, radio, and three tape machines

Live Sound Processing and Improvisation

Intro to the Intro

I have been mulling over writing about live sound processing and improvisation for some time, and finally I have my soapbox!  For two decades, as an electronic musician working in this area, I’ve been trying to convince musicians, sound engineers, and audiences that working with electronics to process and augment the sound of other musicians is a fun and viable way to make music.

Also a vocalist, I often use my voice to augment and control the sound processes I create in my music which encompasses both improvised and composed projects. I also have been teaching (Max/MSP, Electronic Music Performance) for many years. My opinions are influenced by my experiences as both an electronic musician who is performer/composer and a teacher (who is forever a student).

A short clip of my duo project with trombonist Jen Baker, “Clip Mouth Unit,” where I process both her sound and my voice.

Over the past 5-7 years there has been an enormous surge in interest among musicians, outside of computer music academia, in discovering how to enhance their work with electronics and, in particular, how to use electronics and live sound processing as a performable “real” instrument.

So many gestural controllers have become part of the fabric of our everyday lives.

The interest has increased because (of course) so many more musicians have laptops and smartphones, and so many interesting game and gestural controllers have become part of the fabric of our everyday lives. With so many media tools at our disposal, we have all become amateur designers/photographers/videographers, and also musicians—both democratizing creativity (at least for those with the funds for laptops/smartphones) and exponentially increasing, and therefore diluting, the resulting pool of new work.

Image of a hatted and bespectacled old man waving his index finger with the caption, "Back in my day... no real-time audio on our laptops (horrors!)"

Back when I was starting out (in the early ’90s), we did not have real-time audio manipulations at our fingertips—nothing easy to download, purchase, or create ourselves (unlike the plethora of tools available today).  Although the SensorLab and iCube were available (though not widely), we did not have powerful sensors on our personal devices, conveniently with us at all times, that could be used to control our electronic music with the wave of a hand or the swipe of a finger. (Note: this is quite shocking to my younger students.) There is now also a wave of audio analysis tools using Music Information Retrieval (MIR) and alternative controllers—previously seen only at research institutions and academic conferences—going mainstream. Tools such as the Sunhouse Sensory Percussion drum controller, which turns audio into a control source, are becoming readily available and popular in many genres.

In the early ’90s, I was a performing rock-pop-jazz musician, experimenting with free improv/post-jazz. In grad school, I became exposed for the first time to “academic” computer music: real-time, live electroacoustics, usually created by contemporary classical composers with assistance from audio engineers-turned-computer programmers (many of whom were also composers).

My professor at NYU, Robert Rowe, and his colleagues George Lewis, Roger Dannenberg, and others were composer-programmers dedicated to developing systems to get their computers to improvise, or to building other kinds of interactive music systems.  Others, like Cort Lippe, were developing pieces for an early version of Max running on a NeXT computer, using complex real-time audio manipulations of a performer’s sound as the sole electroacoustic—and live—sound source and as the source of all control (a concept that I personally became extremely interested and invested in).

As an experiment, I decided to see if I could create simplified versions of the live sound processing ideas I was learning about. I started to bring them to my free avant-jazz improv sessions and to my gigs, using a complicated Max patch I made to control an Eventide H3000 effects processor (which was much more affordable than the NeXT machine—plus we had one at NYU). I did many performances with a core group of people willing to let me put microphones on everyone and process them during our performances.

Collision at Baktun 1999. Paul Geluso (bass), Daniel Carter (trumpet), Tom Beyer (drums), Dafna Naphtali (voice, live sound processing), Kristin Lucas (video projection / live processing), and Leopanar Witlarge (horns).

Around that time I also met composer/programmer/performer Richard Zvonar, who had made a similarly complex Max patch as “editor/librarian” software for the H3000, to enable him to create all the mind-blowing live processing he used in his work with Diamanda Galás, Robert Black (State of the Bass), and others. Zvonar was very encouraging about my quest to control the H3000 in real-time via a computer. (He was “playing” his unit from the front panel.)  I created what became my first version of a live processing “instrument” (which I dubbed “kaleid-o-phone” at some point). My subsequent work with Kitty Brazelton and Danny Tunick, in What is it Like to be a Bat?, really stretched me to find ways to control live processing in extreme and repeatable ways that became central and signature elements of our work together, all executed while playing guitar and singing—no easy feat.

Six old laptops all open and lined up in two rows of three on a couch.

Since then—over 23 years, 7 laptops, many gigs and ensembles, and a few CD releases—I’ve worked all along on that same “instrument,” updating my Max patch, trying out many different controllers and ideas, and adding real-time computer-based audio (once that became possible on a laptop, in the late ’90s). I’m just that kinda gal; I like to tinker!

In the long run, what is more important to me than the Max programming I did for this project is that I was able to develop an aesthetic practice and rules for my live sound processing—about respecting the sound and independence of the other musicians—that help me make good music when processing other people’s sound.

The omnipresent “[instrument name] plus electronics”, like a “plus one” on a guest list, fills many concert programs.

Many people, of course, use live processing on their own sound, so what’s the big deal? Musicians are excited to extend their instruments electronically and there is much more equipment on stage in just about every genre to prove it. The omnipresent “[instrument name] plus electronics”, like a “plus one” on a guest list, fills many concert programs.

However, I am primarily interested in learning how a performer can use live processing on someone else’s sound, in a way that it can become a truly independent voice in an ensemble.

What is Live Sound Processing, really?

To perform with live sound processing is to alter and affect the sounds of acoustic instruments, live, in performance (usually without the aid of pre-recorded audio), and in this way create new sounds, which in turn become independent and unique voices in a musical performance.

Factoring in the acoustic environment of the performance space, it’s possible to view each performance as site-specific, as the live sound processor reacts not only to the musicians and how they are playing but also to the responsiveness and spectral qualities of the room.

Although, in the past, the difference between live sound processing and other electronic music practices has not been readily understood by audiences (or even many musicians), in recent years the complex role of the “live sound processor” musician has evolved to often be that of a contributing, performing musician, sitting on stage within the ensemble and not relegated, by default, to the sound engineer position in the middle or back of the venue.

Performers as well as audiences can now recognize electroacoustic techniques when they hear them.

With faster laptops and more widespread use and availability of classic live sound processing as software plugins, these live sound processing techniques have gradually become more acceptable over 20 years—and in many music genres practically expected (not to mention the huge impact these technologies have had in more commercial manifestations of electronic dance music or EDM). Both performers and audiences have become savvier about many electroacoustic techniques and sounds and can now recognize them by hearing them.

We really need to talk…

I’d like to encourage a discourse about this electronic musicianship practice, to empower live sound processors to use real-time (human/old-school) listening and analysis of sounds (being played by others), and to develop skills for real-time (improvised) decisions about how to respond and manipulate those sounds in a way that facilitates their electronic-music-sounds being heard—and understood—as a separate performing (and musicianly) voice.

In this way, the live sound processor is not always dependent on and following the other musicians (who are their sound source), their contributions not simply “effects” that are relegated to the background. Nor will the live sound processor be brow-beating the other musicians into integrating themselves with, or simply following, inflexible sounds and rhythms of their electronics expressed as an immutable/immobile/unresponsive block of sound that the other musicians must adapt to.

My Rules

My self-imposed guidelines, developed over several years of performances and sessions, are:

  1. Never interfere with a musician’s own musical sound, rhythm or timbre. (Unless they want you to!)
  2. Be musically identifiable to both co-players and audience (if possible).
  3. Incorporate my body—some kind of physical interaction between the technology and myself—either through controllers, the acoustics of the sound itself, or my own voice.

I wrote about these rules in “What if Your Instrument is Invisible?”, my chapter contribution to the excellent book Musical Instruments in the 21st Century: Identities, Configurations, Practices (Springer 2016).

The first two rules, in particular, are the most important ones and will inform virtually everything I will write in coming weeks about live sound processing and improvisation.

My specific area of interest is live processing techniques used in improvised music, and in other settings in which the music is not all pre-composed. Under such conditions, many decisions must be made by the electronic musician in real-time. My desire is to codify the use of various live sound processing techniques into a pedagogical approach that blends listening techniques, a knowledge of acoustics / psychoacoustics, and tight control over the details of live sound processing of acoustic instruments and voice. The goal is to improve communication between musicians and optional scoring of such work, to make this practice easier for new electronic musicians, and to provide a foundation for them to develop their own work.

You are not alone…

There are many electronic musicians who work as I do with live sound processing of acoustic instruments in improvised music. Though we share a bundle of techniques as our central mode of expression, there is a very wide range of possible musical approaches and aesthetics, even within my narrow definition of “Live Sound Processing” as real-time manipulation of the sound of an acoustic instrument to create an identifiable and separate musical voice in a piece of music.

In 1995, I read a preview of what Pauline Oliveros and the Deep Listening Band (with Stuart Dempster and David Gamper) would be doing at their concert at the Kitchen in New York City. Still unfamiliar with DLB’s work, I was intrigued to hear about E.I.S., their “Expanded Instrument System” described as an “interactive performer controlled acoustic sound processing environment” giving “improvising musicians control over various parameters of sound transformation” such as “delay time, pitch transformation” and more. (It was 1995, and they were working with the Reson8 for real-time processing of audio on a Mac, which I had only seen done on NeXT machines.) The concert was beautiful and mesmerizing. But lying on the cushions at the Kitchen, bathing in the music’s deep tones and sonically subtle changes, I realized that though we were both interested in the same technologies and methods, my aesthetics were radically different from those of DLB. I was, from the outset, more interested in noise/extremes and highly energetic rhythms.

It was an important turning point for me: I realized that assuming what I was aiming to do was musically equivalent to DLB, simply because the technological ideas were similar, was a little like lumping together two very different guitarists just because they both use Telecasters. Later, I was fortunate enough to get to know both David Gamper and Bob Bielecki through the Max User Group meetings I ran at Harvestworks, and to have my many questions answered about the E.I.S. system and their approach.

There is now more improvisation than I recall witnessing 20 years ago.

Other musicians important for me to mention, who have been working with live sound processing of other instruments and improvisation for some time, include: Lawrence Casserley, Joel Ryan (both in their own projects and in long associations with saxophonist Evan Parker’s “ElectroAcoustic” ensemble), Bob Ostertag (influential in all his modes of working), and Satoshi Takeishi and Shoko Nagai’s duo Vortex. More recently: Sam Pluta (who creates “reactive computerized sound worlds” with Evan Parker, Peter Evans, Wet Ink, and others), and Hans Tammen. (Full disclosure: we are married to each other!)

Joel Ryan and Evan Parker at STEIM.

In academic circles, computer musicians, always interested in live processing, have more often taken to the stage as performers operating their software (moving from the central/engineer position). It seems there is also more improvisation than I recall witnessing 20 years ago.

But as for me…

In my own work, I gravitate toward duets and trios, so that it is very clear what I am doing musically, and there is room for my vocal work. My duos are with pianist Gordon Beeferman (our new CD, Pulsing Dot, was just released), percussionist Luis Tabuenca (Index of Refraction), and Clip Mouth Unit—a project with trombonist Jen Baker. I also work occasionally doing live processing with larger ensembles (with saxophonist Ras Moshe’s Music Now groups and Hans Tammen’s Third Eye Orchestra).

Playing with live sound processing is like building a fire on stage.

I have often described playing with live sound processing as like “building a fire on stage,” so I will close by taking the metaphor a bit further. There are two ways to start a fire: with a lot of planning, or with improvisation. Which method we choose depends on environmental conditions (wind, humidity, location), the tools we have at hand, and also what kind of person we are (a planner/architect, or someone more comfortable thinking on their feet).

In the same way, every performance environment impacts the responsiveness and acoustics of the musical instruments used there. This is much more pertinent when “live sound processing” is the instrument. The literal weather, humidity, room acoustics, even how many people are watching the concert all affect the de facto responsiveness of a given room, and can greatly affect the outcome, especially when working with feedback or short delays and resonances. Personally, I am a bit of both personality types—I start with a plan, but I’m also ready to adapt. With that in mind, I believe the improvising mindset is needed for working most effectively with live sound processing as an instrument.

A preview of upcoming posts

What follows in my posts this month will be ideas about how to play better as an electronic musician using live acoustic instruments as sound sources. These ideas are (I hope) useful whether you are:

  • an instrumentalist learning to add electronics to your sound, or
  • an electronic musician learning to play more sensitively and effectively with acoustic musicians.

In these upcoming posts, you can read some of my discussions/explanations and musings about delay as a musical instrument, acoustics/psychoacoustics, feedback fun, filtering/resonance, pitch-shift and speed changes, and the role of rhythm in musical interaction and being heard. These are all ideas I have tried out on many of my students at New York University and The New School, where I teach Electronic Music Performance, as well as from a Harvestworks presentation, and from my one-week course on the subject at the UniArts Summer Academy in Helsinki (August 2014).


Dafna Naphtali creating music from her laptop which is connected to a bunch of cables hanging down from a table. (photo by Skolska/Prague)

Dafna Naphtali is a sound-artist, vocalist, electronic musician and guitarist.   As a performer and composer of experimental, contemporary classical and improvised music since the mid-1990s, she creates custom Max/MSP programming incorporating polyrhythmic metronomes, Morse Code, and incoming audio signals to control her sound-processing of voice and other instruments, and other projects such as music for robots, audio augmented reality sound walks and “Audio Chandelier” multi-channel sound projects.  Her new CD Pulsing Dot with pianist Gordon Beeferman is on Clang Label.