Author: Dafna Naphtali

Resonating Filters: How to Listen and Be Heard

I have been writing all this month about how a live sound processing musician could develop an electroacoustic musicianship—and learn to predict musical outcomes for a given sound and process—just by learning a few things about acoustics/psychoacoustics and how some of these effects work. Coupled with some strategies about listening and playing, this can make it possible for the live processor to create a viable “instrument.” Even when processing the sounds of other musicians, it enables the live sound processing player to behave and react musically like any other musician in an ensemble and not be considered as merely creating effects. 

In the previous post, we talked about the relationship between delays, feedback, and filters.   We saw how the outcome of various configurations of delay times and feedback is directly affected by the characteristics of the sounds we put into them, whether they are short or long, resonant or noisy.   We looked at pitch-shifts created by the Doppler effect in multi-tap delays, and at how one might use any of these things when creating live electroacoustic music using live sound processing techniques.  As I demonstrated, it’s about the overlap of sounds, about operating in a continuum from creating resonance to creating texture and rhythm.  It’s about being heard and learning to listen. Like all music. Like all instruments.

It’s about being heard and learning to listen. Like all music. Like all instruments.

To finish out this month of posts about live sound processing, I will talk about a few more effects and some strategies for using them.  I hope this information will be useful to live sound processors (who need to know how to be heard as a separate musical voice and to stay flexible in their control).  It should also be useful to instrumentalists processing their own sound (because it will speed the process of finding what sounds good on your instrument and will help with predicting the outcomes of various sound processing techniques). It should be especially helpful for preparing for improvisation or any live processing project without the luxury of a long time schedule, and so too, I hope, for composers who are considering writing for live processing or creating improvisational settings for live electroacoustics.

Resonance / Filtering in More Detail

We saw in the last post how delays and filters are intertwined in their construction and use, existing in a continuum from short delays to long delays, producing rhythm, texture, and resonance depending on the length of the source audio events being processed, and the length of the delays (as well as feedback).

A special case is that of a very short delay (1-30ms) combined with lots of feedback (90% or more).  The sound circulates so fast through the delay that it resonates at the speed of the circulation, producing clear pitches we can count on.

The effect is heard best with a transient (a very short sound such as a hand clap, vocal consonants like “t” or “k”, or a snare drum hit).   For instance, if I have a 1ms delay with lots of feedback and input a short transient sound, we will hear a ringing at 1000Hz.   That is how fast the sound is circulating through the delay (1,000 times per second).  This is roughly the same pitch as “B” on the piano (a little sharp).  Interestingly, if we change the delay to 2ms, the pitch heard will be 500Hz (also “B,” but an octave lower); 3ms yields “E” (333Hz), 4ms yields another “B” (250Hz), 5ms a “G” (200Hz), and so on, in a kind of upside-down overtone series.
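
To check these numbers yourself, here is a minimal sketch in Python (the function names are my own, not from any particular delay plugin) that converts a delay time into the resonant fundamental it will ring at, plus the nearest equal-tempered note:

```python
import math

def delay_resonance_hz(delay_ms):
    """Fundamental of the resonance from a very short delay with high feedback:
    the sound recirculates once per delay period."""
    return 1000.0 / delay_ms

def nearest_note(freq_hz, a4=440.0):
    """Name of the closest equal-tempered pitch, for comparison with the text."""
    names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    semitones_from_a4 = round(12 * math.log2(freq_hz / a4))
    return names[(semitones_from_a4 + 9) % 12]

for ms in (1, 2, 3, 4, 5):
    f = delay_resonance_hz(ms)
    print(f"{ms}ms delay -> {f:.0f}Hz (close to {nearest_note(f)})")
# 1ms -> 1000Hz (B), 2ms -> 500Hz (B), 3ms -> 333Hz (E), 4ms -> 250Hz (B), 5ms -> 200Hz (G)
```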

Karplus-Strong Algorithm / Periodicity Pitch

A very short delay combined with high feedback resembles physical modeling synthesis techniques, which are very effective for simulating plucked string and drum sounds.  One such method, the Karplus-Strong algorithm, consists of a recirculating delay line with a filter in its feedback loop.  The delay line is filled with samples of noise.  As those samples recirculate through the filter in the feedback loop, a periodic pattern emerges whose period is directly related to the number of samples in the delay line.  Even though the input signal is pure noise, the algorithm creates a complex sound with pitch content that is related to the length of the delay. “Periodicity pitch” has been well studied in the field of psychoacoustics, and it is known that even white noise, if played with a delayed copy of itself, will have pitch. This is true even if the two copies are sent separately to each ear. The low-pass filter in the feedback loop robs the noise of a little of its high-frequency content at each pass through the circuit, replicating the acoustical behavior of a plucked string or drum.
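
Here is a minimal Karplus-Strong sketch in Python (with numpy), just to make the structure concrete: a noise-filled buffer read in a loop, with a two-sample average standing in for the low-pass filter in the feedback path. The function name and parameters are my own choices for illustration.

```python
import numpy as np

def karplus_strong(freq_hz, duration_s=1.0, sr=44100, seed=0):
    """Minimal Karplus-Strong pluck: a noise-filled delay line with a
    simple low-pass (two-sample average) in its feedback loop."""
    rng = np.random.default_rng(seed)
    n = int(sr / freq_hz)                 # delay length in samples sets the pitch
    buf = rng.uniform(-1.0, 1.0, n)       # fill the delay line with noise
    out = np.zeros(int(sr * duration_s))
    for i in range(len(out)):
        out[i] = buf[i % n]
        # feedback: averaging two adjacent samples loses a little high end each pass
        buf[i % n] = 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out

pluck = karplus_strong(220.0)   # roughly A3; the delay line is about 200 samples at 44.1kHz
```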

If we set up a very short delay, use lots of feedback, and input any short burst of sound—a transient, click, or vocal consonant—we can get a similar effect of a plucking sound or a resonant click.  If we input a longer sound at the same frequency as what the delay is producing (or at multiples of that frequency), then those overtones will be accentuated, in the same way some tones are louder when we sing in the shower, because they are being reinforced.   The length of the delay determines the pitch, and the feedback amount (along with any filter we use in the feedback loop) determines the sustain and length of the note.

Filtering & Filter Types

Besides any type of resonance we might create using short delays, there are also many kinds of audio filters we might use for any number of applications, including live sound processing: Low Pass Filters, High Pass Filters, etc.

A diagram of various filter types.

But by far the most useful tools for creating a musical instrument out of live processing are resonant filters, specifically BandPass and Comb filters, so let’s focus on those. Filters with sharp cutoffs also boost certain frequencies near their cutoff points so that they are louder than in the input; this added resonance is a byproduct of the sharp cutoff.  BandPass filters allow us to “zoom in” on one region of a sound’s spectrum and reject the rest.  A Comb filter, created when a delayed copy of a sound is added to itself, results in many evenly spaced regions (“teeth”) of the sound being cancelled out, creating its characteristic sound.

The most useful tools for creating a musical instrument out of live processing are resonant filters.

The primary elements of a BandPass filter we would want to control are center frequency, bandwidth, and filter Q (which is defined as center frequency divided by bandwidth, but which we can simply think of as how narrow or “sharp” the peak is, or how resonant it is).    When the Q is high (very resonant), we can use this property to create or underscore certain overtones in a sound that we want to bring out or experiment with.
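
As a concrete (if simplified) illustration, here is a sketch using scipy's peaking-filter design to place a narrow, high-Q resonance on one region of a noisy sound. The center frequency and Q values are arbitrary choices for the example, not recommendations:

```python
import numpy as np
from scipy.signal import iirpeak, lfilter

sr = 44100
center_hz = 1200.0   # the part of the spectrum we want to "zoom in" on
q = 30.0             # high Q = narrow bandwidth = strong resonance

b, a = iirpeak(center_hz, q, fs=sr)        # resonant band-pass (peaking) filter

noise = np.random.uniform(-1.0, 1.0, sr)   # one second of broadband noise
ringing = lfilter(b, a, noise)             # the output "sings" around 1200 Hz
```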

Phasing / Flanging / Chorus: These are all filtering-type effects, using very short and automatically varying delay times.  A phase-shifter delays the sound by less than one cycle (cancelling out some frequencies through the overlap and producing a non-uniform, but comb-like, filter). A flanger, which sounds a bit more extreme, uses delays of around 5-25ms, producing a more uniform comb filter (evenly spaced peaks and troughs in the spectrum). It is named after the original practice of audio engineers who would press down on one reel (flange) of an analog tape deck, slowing it down slightly as it played nearly in sync with an identical copy of the audio on a second tape deck.  Chorus uses even longer delay times and multiple copies of the sound.
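
For the curious, here is a bare-bones flanger sketch in Python: a short delay swept by a slow LFO, mixed back with the dry signal, with a little feedback. It is only meant to show the moving comb-filter structure described above, and all the parameter values are illustrative:

```python
import numpy as np

def flanger(x, sr, min_ms=1.0, max_ms=8.0, rate_hz=0.25, feedback=0.3, mix=0.5):
    """Sweep a short delay (1-8 ms here) with a slow LFO and mix it back with the
    dry signal, producing a moving comb filter."""
    max_delay = int(sr * max_ms / 1000.0) + 2
    buf = np.zeros(max_delay)                   # circular delay buffer
    out = np.zeros(len(x))
    write = 0
    for n in range(len(x)):
        lfo = 0.5 * (1 + np.sin(2 * np.pi * rate_hz * n / sr))      # 0..1
        delay = (min_ms + (max_ms - min_ms) * lfo) * sr / 1000.0    # delay in samples
        read = (write - delay) % max_delay
        i = int(read)
        frac = read - i
        delayed = (1 - frac) * buf[i] + frac * buf[(i + 1) % max_delay]  # interpolate
        buf[write] = x[n] + feedback * delayed  # a little feedback sharpens the comb
        out[n] = (1 - mix) * x[n] + mix * delayed
        write = (write + 1) % max_delay
    return out
```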

A tutorial on Phasing Flanging and Chorus

For my purposes, as a live processor trying to create an independent voice in an improvisation, I find these three effects most useful if I treat them the same as filters, except that, since they are built on delays whose times I can change, there is also the possibility of increasing or decreasing the delay times to get a Doppler effect, too, or of playing with feedback levels to accentuate certain tones.

I use distortion the same way I would use a filter—as a non-temporal transformation.

Distortion

From my perspective, whatever methods are used to get distortion add and subtract overtones from our sound, so for my live processing purposes, I use them the same way I would use filters—as non-temporal transformations. Below is a gorgeous example of distortion, not used on a guitar. The only instruction in the score for the electronics is to gradually bring up the distortion in one long crescendo.  I ran the electronics for the piece a few times in the ‘90s for cellist Maya Beiser, and got to experience how strongly the overtones pop out because of the distortion pedal, and move around nearly on their own.

Michael Gordon: Industry

Pitch-Shift / Playback Speed Changes / Reversing Sounds

I once heard composer and electronic musician Nic Collins say that to make experimental music one need only “play it slow, play it backwards.” He was referring to pre-recorded sounds, and these are certainly time-honored electroacoustic approaches, born of a time when only tape recorders, microphones, and a few oscillators were used to make electronic music masterpieces.

For live processing of sound, pitch-shift and/or time-stretch continue to be simple and valuable processes.  Playback speed and pitch are connected by physics: sounds played back slower are correspondingly lower in pitch, and sounds played back faster are higher in pitch. (With analog tape, or a turntable, if you play a sound back at twice the speed, it plays back an octave higher because the soundwaves are playing back twice as fast, doubling the frequency.)
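
The arithmetic behind that relationship is simple enough to check in a couple of lines (Python here; the function name is mine):

```python
import math

def speed_to_semitones(speed_ratio):
    """Playback-speed ratio -> pitch change in semitones (tape/turntable behavior)."""
    return 12 * math.log2(speed_ratio)

print(speed_to_semitones(2.0))          # +12.0: double speed = up an octave
print(speed_to_semitones(0.5))          # -12.0: half speed = down an octave
print(speed_to_semitones(45 / 33.333))  # about +5.2: a 33 1/3 rpm record played at 45
```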

The relationship between speed of playback and time-stretch was decoupled in the mid-‘90s.

This relationship between speed of playback and time-stretch was decoupled in the mid-‘90s, when faster computers, realtime spectral analysis, and other methods made it possible to do one without the other more easily.  This is now the norm. In much of the commercial music software my students use, it is possible to slow down a sound and not change its pitch (certainly useful for changing the tempo of a song with pre-recorded acoustic drums), and being able to pitch-shift or Autotune a voice without changing its speed is also a very useful tool for commercial production.  These decoupled methods (with names like “Warp,” “Flex,” etc.) are often based on granular synthesis or phase vocoders, each of which adds its own sonic residue (essentially errors or noises endemic to the method when using extreme parameter settings).  Sometimes these mistakes, noise, and glitch sounds are useful and fun to work with, too.

An example of making glitch music with Ableton’s Warp mode (their mode for pitch-shifting with no time-compression/expansion).

Some great work by Philip White and Ted Hearne using Autotune gone wild on their record R We Who R We

Justin Bieber 800% slower (using PaulStretch extreme sound stretch) is a favorite of mine, but trying to use a method like this in performance (even if it were possible in real time) might be a bit unwieldy and make for a very long performance, or very few notes performed. Perhaps we could just treat this like a “freeze” delay function for our purposes in this discussion. Nevertheless, I want to focus here on old-school, time-domain, interconnected pitch-shift and playback speed changes, which are still valuable tools.

I am quite surprised at how many of my current students have never tried slowing down the playback of a sound in realtime.  It’s not easy to do with their software in realtime, and some have never had access to a variable speed tape recorder or a turntable, and so they are shockingly unfamiliar with this basic way of working. Thankfully there are some great apps that can be used to do this and, with a little poking around, it’s also possible to do using most basic music software.

A Max patch demo of changing playback speeds and reversing various kinds of sound.

Some sounds sound nearly the same when reversed, some not.

There are very big differences in what happens when pitch-shifting various kinds of sounds (or changing speed or direction of playback).  The response of speech-like sounds (with lots of formants, pitch, and overtone changes within the sound) differs from what happens to string-like (plucked or bowed) or percussive sounds.  Some sound nearly the same when reversed, some not. It is a longer conversation to discuss global predictions about what the outcome of our process will be for every possible input sound (as we can more easily do with delays/filters/feedback) but here are a few generalizations.

Strings can be pitch-shifted up or down, and sound pretty good, bowed and especially plucked.  If the pitch-shift is done without time compression or expansion, then the attack will be slower, so it won’t “speak” quickly in the low end.  A vibrato might get noticeably slow or fast with pitch-changes.

Pitch-shifting a vocal sound up or down can create a much bigger and iconic change in the sound, personality, or even gender of the voice. Pitching a voice up we get the iconic (or annoying) sound of Alvin and the Chipmunks.

Pitch-shifting a voice down, we get slow slurry speech sounding like Lurch from the Addams Family, or what’s heard in all of DJ Screw’s chopped and screwed mixtapes (or even a gender change, as in John Oswald’s Plunderphonics Dolly Parton think piece from 1988).

John Oswald: Pretender (1988) featuring the voice of Dolly Parton

But if the goal is to create a separate voice in an improvisation, I would prefer to pitch-shift the sound and then also put it through a delay, with feedback. That way I can create sound loops of modulated arpeggios moving up and up and up (or down, or both) in a symmetrical movement using the original pitch interval difference (not just whole tone and diminished scales, but everything in between as well). Going up, the pitch gets higher until it’s just shimmery (since the overtones are gone as it nears the limits of the system).  Going down, the pitch gets lower and the sound also gets slower. Rests and silences are slowed, too. In digital systems, noise may build up as some samples must be repeated to play back the sound at that speed.  These can all relate back to Hugh Le Caine’s early electroacoustic work Dripsody for variable speed tape recorder (1955) which, though based on a single sample of a water drop, makes prodigious use of ascending arpeggios created using only tape machines.

Hugh Le Caine: Dripsody (1955)
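
The symmetrical movement described above is easy to see with a little arithmetic: each trip around a pitch-shifter-plus-delay feedback loop transposes the previous copy by the same interval again. A quick sketch (the values are chosen only for illustration):

```python
base_hz = 220.0           # the input note
shift_semitones = 3.0     # the interval set on the pitch-shifter (a minor third here)
ratio = 2 ** (shift_semitones / 12)

# Every pass through the delay + pitch-shift + feedback loop transposes again,
# so the loop climbs by the same interval each time.
freqs = [round(base_hz * ratio ** n, 1) for n in range(8)]
print(freqs)   # [220.0, 261.6, 311.1, 370.0, 440.0, 523.3, 622.3, 740.0]
```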

Which brings me to the final two inter-related topics of these posts—how to listen and how to be heard.

How to Listen

Acousmatic or Reduced listening – The classic discussion by Pierre Schaeffer (and in the writings of Michel Chion) is where I start with every group of students in my Electronic Music Performance classes. We need to be able to hear the sounds we are working on for their abstracted essences.  This is in sharp contrast to the normal listening we do every day, which he called causal listening (what is the sound source?) and semantic listening (what does the sound mean?).

We need to be able to hear the sounds we are working on for their abstracted essences.

We learn to describe sounds in terms of their pitch (frequency), volume (amplitude), and tone/timbre (spectral qualities).  Very importantly, we also listen to how these parameters change over time and so we describe envelope, or what John Cage called the morphology of the sound, as well as describing a sound’s duration and rhythm.

Listening to sound acousmatically directly impacts how we can make ourselves heard as a separate, viable “voice” when using live processing.  So much of what a live sound processor improvising in real time needs to control is the ability to provide contrast with the source sound. This requires knowledge of what the delays, filters, and processes will produce with many types of possible input sounds (which is what I have been describing here); a good technical setup that is easy to change quickly and reactively; active acousmatic listening; and good ear/hand coordination (as with every instrument) to find the needed sounds at the right moment. (And that just takes practice!)

All the suggestions I have made relate back to the basic properties we listen for in acousmatic listening. Keeping that in mind, let’s finish out this post with how to be heard, and specifically what works for me and my students, in the hope it will be useful for some of you as well.

How to be Heard
(How to Make a Playable Electronic Instrument Out of Live Processing)

Sound Decisions: Amplitude Envelope / Dynamics

A volume pedal, or some way to control volume quickly, is the first tool I need in my setup, and the first thing I teach my students. Though useful for maintaining the overall mix, more importantly it enables me to shape the volume and subtleties of my sound to be different than that of my source audio. Shaping the envelope/dynamics of live-processed sounds of other musicians is central to my performing, and an important part of the musical expression of my sound processing instrument.  If I cannot control volume, I cannot do anything else described in these blog posts.  I use volume pedals and other interfaces, as well as compressor/limiters for constant and close control over volume and dynamics.

Filtering / Pitch-Shift (non-temporal transformations)

To be heard when filtering or pitch-shifting with the intention of being perceived as a separate voice (not just an effect) requires displacement of some kind. Filtering or pitch-shifting with no delay transforms the sound and gesture being played, but it does not create a new gesture, because the original and the processed sound are taking up the same space, temporally or spectrally or both.  So we need to change the sound in some way to create contrast. We can do this by changing parameters of the filter (Q, bandwidth, or center frequency), or by delaying the sound with a long enough delay that we hear the processed version as a separate event.  That delay time should be more than 50-100ms, depending on the length of the sound event; shorter delays would just give us more filtering if the sounds overlap.

  • When filtering or pitch shifting a sound, we will not create a second voice unless we displace it in some way. Think of how video feedback works: the displacement makes it easier to perceive.
  • Temporal displacement: We can delay the sound we are filtering (same as filtering a sound we have just delayed). The delay time must be long enough so there is no overlap and it is heard as a separate event. Pitch-shifts that cause the sound to play back faster or slower might introduce enough temporal displacement on their own if the shift is extreme.
  • Timbral displacement: If we create a new timbral “image” that is so radically different from the original, we might get away with it.
  • Changes over time / modulations: If we do filter sweeps, or change the pitch-shift in ways that contrast with what the instrument is doing, we can be heard better.
  • Contrast: If the instrument is playing long tones, then I would choose to do a filter sweep, or change delay times, or pitch-shift. This draws attention to my sound as a separate electronically mediated sound.  This can be done manually (literally a fader), or as some automatic process that we turn on/off and then control in some way.

Below is an example of me processing Gordon Beeferman’s piano in an unreleased track. I am using very short delays with pitch-shift to create a hazy resonance of pitched delays and I make small changes to the delay and pitch-shift to contrast with what he does in terms of both timbre and rhythm.

Making it Easier to Play

Saved States/Presets

I cannot possibly play or control more than a few parameters at once.

Since I cannot possibly play or control more than a few parameters at once, and I am using a computer, I find it easier to create groupings of parameters, my own “presets” or “states” that I can move between, knowing I can get to them when I want to.

Trajectories

Especially if I play solo, sometimes it is helpful if some things can happen on their own. (After all, I am using a computer!)  If possible, I will set up a very long trajectory or change in the sound, for instance, a filter-sweep, or slow automated changes to pitch shifts.   This frees up my hands and mind to do other things, and assures that not everything I am doing happens in 8-bar loops.

Rhythm

I cannot express strongly enough how important control over rhythm is to my entire concept. It is what makes my system feel like an instrument. My main modes of expression are timbre and rhythm.  Melody and direct expression of pitch using electronics are slightly less important to me, though the presence of pitches is never to be ignored. I choose rhythm as my common ground with other musicians. It is my best method to interact with them.

Nearly every part of my system allows me to create and change rhythms by altering delay times on-the-fly, or simply tapping/playing the desired pulse that will control my delay times or other processes.  Being able to directly control the pulse or play sounds has helped me put my body into my performance, and this too helps me feel more connected to my setup as an instrument.

Even using an LFO (Low Frequency Oscillator) to make tremolo effects and change volume automatically can be interesting, and I consider it part of my rhythmic expression (and its speed is something I want to be able to control while performing).

I am strongly attracted to polyrhythms. (Not surprisingly, my family is Greek, and there was lots of dancing in odd time signatures growing up.) Because it is so prevalent in my music, I implemented a mechanism that allows me to tap delay times and rhythms that are complexly related to what is happening in the ensemble at that moment.  After pianist Borah Bergman once explained a system he thought I could use for training myself to perform complex rhythms, I created a Max patch to implement what he taught me, and I started using this polyrhythmic metronome to control the movement between any two states/presets quickly, creating polyrhythmic electroacoustics. Other rhythmic control sources I have used include Morse code as rhythm, algorithmic processes, a rhythm engine influenced by North Indian Classical Tala, and whatever else interests me for a particular project.

With rhythm, it is about locking it in.

With rhythm, it is about locking it in.  It’s important that I can control my delays and rhythm processes so I can have direct interaction with the rhythm of other musicians I am playing with (or that I make a deliberate choice not to do so).

Chuck, a performance I like very much by Shackle (Anne La Berge on flute & electronics and Robert van Heumen on laptop-instrument) which does many of the things I have written about here.

Feedback Smears / Beautiful Madness

Filters and delays are always interconnected and feedback is the connective tissue.

As we have been discussing, filters and delays are always interconnected, and feedback is the connective tissue.  I make liberal use of feedback with Doppler shift (interpolating delays) for weird pitch-shifts; I use feedback to create resonance (with short delays); and I use feedback to quickly build up density or texture when using longer delays.  With pitch-shift, as mentioned above, feedback can create symmetrical arpeggiated movement of the original pitch difference.   And feedback is just fun because it’s, well, feedback!  It’s slightly dangerous and uncontrollable, and brings lovely surprises.  That being said, I use a compressor or have a kill-switch at hand so as not to blow any speakers or lose any friends.

David Behrman: Wave Train (1966)

A recording of me with Hans Tammen’s Third Eye Orchestra.  I am using only a phaser on my microphone and lots of feedback to create this sound, and try to keep the timing with the ensemble manually.

Here are some strategies for using live processing that I hope you will find useful.

Are you processing yourself and playing solo?

Do any transformation, go to town!

The processes you choose can be used to augment your instrument, or to create an independent voice.  You might want to create algorithms that can operate independently, especially for solo performing, so that some things will happen on their own.

Are you playing in an ensemble, but processing your own sound?

What frequencies / frequency spaces are already being used?
Keep control over timbre and volume at all times to shape your sound.
Keep control of your overlap into other players’ sound (reverb, long delays, noise).

Keep control over the rhythm of your delays, and your reverb.  They are part of the music, too.

Are you processing someone else’s sound?

Make sure your transformations maintain the separate sonic identity of other players and your sound as I have been discussing in these posts.

Build an instrument/setup that is playable and flexible.

Create some algorithms that can operate independently.

How to be heard / How to listen: redux

  • If my performer is playing something static, I feel free to make big changes to their sound.
  • If my live performer is playing something that is moving or changing (in pitch, timbre, or rhythm), I choose either to create something static out of their sound, or to move differently (contrasting their movement by moving faster or slower or in a different direction, or working with a different parameter). This can be as simple as slowing down my playback speed.
  • If my performer is playing long tones on the same pitch, or a dense repeating or legato pattern, or some kind of broad spectrum sound, I might filter it, or create glissando effects with pitch-shifts ramping up or down.
  • If my performer is playing short tones or staccato, I can use delays or live-sampling to create rhythmic figures.
  • If my performer is playing short bursts of noise, or sounds with sharp fast attacks, that is a great time to play super short delays with a lot of feedback, or crank up a resonant filter to ping it.
  • If they are playing harmonic/focused sound with clear overtones, I can mess it up with all kinds of transformations, but I’ll be sure to delay it / displace it.
When you are done, know when to turn it off.

In short and in closing: Listen to the sound.  What is static? Change it! Do something different.   And when you are done, know when to turn it off.

On “Third Eye” from Bitches Re-Brewed (2004) by Hans Tammen, I’m processing saxophonist Martin Speicher

Suggested further reading

Michel Chion (translated by Claudia Gorbman): Audio-Vision: Sound on Screen (Columbia University Press, 1994)
(Particularly his chapter, “The Three Listening Modes” pp. 25–34)

Dave Hunter: “Effects Explained: Modulation—Phasing, Flanging, and Chorus” (Gibson website, 2008)

Dave Hunter: “Effects Explained: Overdrive, Distortion, and Fuzz” (Gibson website, 2008)

Delays, Feedback, and Filters: A Trifecta

My last post, “Delays as Music,” was about making music using delays as an instrument, specifically in the case of the live sound processor. I discussed how delays work and are constructed technically, how they have been used in the past, a bit about how we perceive sound, and how we perceive different delay times when used with sounds of various lengths. This post is a continuation of that discussion. (So please do read last week’s post first!)

We are sensitive to delay times as short as a millisecond or less.

I wrote about our responsiveness to minuscule differences in time, volume, and timbre between the sounds arriving in our ears, which is our skill set as humans for localizing sounds—how we use our ears to navigate our environment. Sound travels at approximately 1,125 feet per second, and though all the sound waves we hear are travelling at the same speed, the low frequency waves (which are longer) tend to bend and wrap around objects, while high frequencies are absorbed by or bounce off of objects in our environment. We are sensitive to delay times as short as a millisecond or less, as related to the size of our heads and the physical distance between our ears.  We are able to detect tiny differences in volume between the ear that is closer to a sound source and the other.  We are able to discern small differences in timbre, too, as some high frequency sounds are literally blocked by our heads. (To notice this phenomenon in action, cover your left ear with your hand and, with your free hand, rustle your fingers first in the uncovered ear and then in the covered one.  Notice what is missing.)

These psychoacoustic phenomena (interaural time difference, interaural level difference, and head shadow) are useful not only for an audio engineer, but are also important for us when considering the effects and uses of delay in electroacoustic musical contexts.

My “aesthetics of delay” are similar to the rules of thumb audio engineers use for delay as an audio effect or for adding spatialization.  The difference in my approach is that I want to be able to recognize sounds I can put into a delay, so that I can predict what will happen to them in real time as I play with various parameter settings. I use changes in delay times as a tool to create and control rhythm, texture, and timbral changes. I’ve tried to develop a kind of electronic musicianship which incorporates acousmatic listening and quick responses, and I hope to share some of this.

It’s all about the overlap of sound.

As I wrote, it’s all about the overlap of sound.  If a copy of a sound, delayed by 1-10ms, is played with the original, we simply hear it as a unified sound, changed in timbre. Short delayed sounds nearly always overlap. Longer delays might create rhythms or patterns; medium length delays might create textures or resonance.  It depends on the length of the sound going into the delay, and what that length is with respect to the length of the delay.

This post will cover more ground about delays and how they can be used to play dynamic, gestural, improvised electroacoustic music. We also will look at the relationship between delays and filtering, and in the next and last post I’ll go more deeply into filtering as a musical expression and how to listen and be heard in that context.

Mostly, I’ll focus on the case of the live processor who is using someone else’s sound or a sound that cannot be completely foreseen (and not always using acoustic instruments as a source– Joshua Fried does this beautifully with sampling/processing live radio in his Radio Wonderland project).  However, despite this focus, I am optimistic that this information will also be useful to solo instrumentalists using electronics on their own sound, as well as to composers wanting to build improvisational systems into their work.

No real tips and tricks here (well, maybe a few), but I do hope to communicate some ideas I have about how to think about effects and live audio manipulation in a way that outlasts current technologies. Some of the examples below use the Max programming language because it is my main programming environment, but also because it is well suited to diagramming and explaining my points.

We want more than one, we want more than one, we want…

As I wrote last week, musicians often want to be able to play more than one delayed sound, or to repeat that delayed sound several times. To do this, we either use more delays, or we use feedback to route a portion of our output back into the input.

When using feedback to create many delays, we route a portion of our output back into the input of the delay. By routing only some of the sound (not 100%), the repeated sound is a little quieter each time and eventually the sound dies out in decaying echoes.  If our feedback level is high, the sound may recirculate for a while in an almost endless repeat, and might even overload/clip if we add new sounds (like a too full fountain).
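
A minimal sketch of that feedback routing, in Python with numpy (a toy example, not how any particular pedal does it): one delay line whose output is partly re-injected into its input, so each repeat comes back a little quieter.

```python
import numpy as np

def feedback_delay(x, sr, delay_ms=350.0, feedback=0.6, mix=0.5):
    """One delay line with part of its output routed back to its input:
    each echo is quieter by the feedback factor and eventually dies away."""
    d = int(sr * delay_ms / 1000.0)
    buf = np.zeros(d)                   # circular buffer holding the last d samples
    out = np.zeros(len(x))
    for n in range(len(x)):
        delayed = buf[n % d]            # the sample written d samples ago
        buf[n % d] = x[n] + feedback * delayed   # re-inject a portion of the output
        out[n] = (1 - mix) * x[n] + mix * delayed
    return out
```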

Using multi-tap delays, or a few delays in parallel, we can make many copies of the sound from the same input, and play them simultaneously.  We could set up different delay lengths with odd spacings, and if the delays are longer than the sound we put in, we might get some fun rhythmic complexity (and polyrhythmic echoes).  With very short delays, we’ll get a filtered sound from the multiple copies being played nearly simultaneously.
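
And a comparable multi-tap sketch: one buffer read at several odd spacings, as described above, with no feedback so the taps simply layer (the tap times and gains are arbitrary):

```python
import numpy as np

def multitap_delay(x, sr, taps_ms=(230.0, 370.0, 610.0), gains=(0.7, 0.5, 0.35)):
    """Several taps read from a single delay line and mixed with the dry signal."""
    taps = [int(sr * t / 1000.0) for t in taps_ms]
    size = max(taps) + 1
    buf = np.zeros(size)
    out = np.zeros(len(x))
    for n in range(len(x)):
        buf[n % size] = x[n]
        wet = sum(g * buf[(n - t) % size] for t, g in zip(taps, gains))
        out[n] = x[n] + wet
    return out
```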

Any of these delayed signals (taps) could in turn be sent back into the multi-tap delay’s input in a feedback network.   It is possible to put any number and combination of additional delays and filters in the feedback loop as well, and these complex designs are what make the difference between all the flavors of delay types that are commonly used.

It doesn’t matter how we choose to create our multiple delays.  If the delays are longer than the sounds going into them, then we don’t get overlap, and we’ll hear a rhythm or pattern.  If the delays are medium length (compared to our input sound), we’ll hear some texture or internal rhythms or something undulating.  If the delays are very short, we get filtering and resonance.

Overlap is what determines the musical potential for what we will get out of our delay.

The overlap is what determines the musical potential for what we will get out of our delay. For live sound processing in improvised music, it is critical to listen analytically (acousmatically) to the live sound source we are processing.  Based on what we hear, it is possible to make real-time decisions about what comes next and know exactly what we will get out.

Time varying delay – interpolating delay lines

Most cheaper delay pedals and many plugins make unwanted noise when the delay times are changed while a sound is playing. Usually described as clicks, pops, crackling, or “zipper noise,” these artifacts occur because the delays are “non-interpolating”: the changes in delay time are not smooth, so the audio is played back with abrupt changes in volume.  If you never change delay times during performance, a simple, fixed, non-interpolating delay is fine.

Changing delay times is very useful for improvisation and for turning delay into an instrument. To avoid the noise and clicks we need to use “interpolating” delays, which might mean a slightly more expensive pedal or plugin, or a little more programming. As performers or users of commercial gear, we may not be privy to all the different techniques being used in every piece of technology we encounter. (Linear or higher-order interpolation, windowing/overlap, and selection of delayed sounds from several parallel delay lines are a few techniques.) For the live sound processor / improviser what matters is: Can I change my delay times live?  What artifacts are introduced when I change them?  Are they musically useful to me?  (Sometimes we like glitches, too.)
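
To show the difference in the simplest possible terms, here is a sketch of a single delayed read from a circular buffer, non-interpolating versus linearly interpolating (this is just one of the techniques mentioned above, not what any specific pedal does):

```python
def read_tap(buf, write_index, delay_samples, interpolate=True):
    """Read one delayed sample from a circular buffer; delay_samples may be fractional.
    Without interpolation the fractional part is simply thrown away, which is
    what produces the stepped 'zipper' artifacts when delay times change."""
    size = len(buf)
    pos = (write_index - delay_samples) % size
    if not interpolate:
        return buf[int(pos)]         # jumps to the nearest stored sample
    i = int(pos)
    frac = pos - i
    return (1 - frac) * buf[i] + frac * buf[(i + 1) % size]   # blend two neighbors
```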

Doppler shift!  Making delays fun.

A graphic representation of the Doppler Shift

An interesting feature/artifact of interpolating delays is the characteristic pitch shift that many of them make.  This pitch shift is similar to how the Doppler shift phenomenon works.

The characteristic pitch shift that many interpolating delays make is similar to how the Doppler Effect works.

A stationary sound source normally sends out sound waves in all directions around itself, at the speed of sound. If that sound source starts to move toward a stationary listener (or if the listener moves toward the sound), the successive wave fronts start getting compressed in time and hit the listener’s ears with greater frequency.  Due to the relative motion of the sound source to the listener, the sound’s frequency has in effect been raised.  If the sound source instead moves away from the listener, the opposite holds true: the wave fronts are encountered at a slower rate than previously, and the pitch seems to have been lowered. [Moore, 1990]

OK, but in plainer English: When a car drives past you on the street or highway, you hear the sound go up in pitch as it approaches, and as it passes, it goes back down.   This is the Doppler effect.  The sound waves always travel at the same speed, but because they are coming from a moving object, their frequency goes up as the object approaches and back down as it moves away from you.

A sound we put into a delay line (software / pedal / tape loop) is like a recording.  If you play it back faster, the pitch goes higher as the sound waves hit your ears in faster succession, and if you slow it down, it plays back lower.  Just as with a passing siren from a train or a car horn that gets higher as it approaches and passes you, when delayed sounds are varied in time the same auditory illusion is created: the pitch goes down as the delay time is increased and up as the delay time is decreased, with the same Doppler effect as in the case of the stationary listener and moving sound source.
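
As a rough rule of thumb (assuming a smoothly swept interpolating delay), the read position moves at 1 minus the rate of change of the delay time, and that ratio is the pitch shift you hear:

```python
import math

def doppler_ratio(delay_change_s, over_s):
    """Approximate pitch ratio while a delay time is being swept smoothly."""
    return 1.0 - (delay_change_s / over_s)

for change, dur in ((0.5, 1.0), (-0.5, 1.0), (0.02, 1.0)):
    r = doppler_ratio(change, dur)
    print(f"delay change {change:+.2f}s over {dur:.0f}s -> ratio {r:.2f}, "
          f"{12 * math.log2(r):+.1f} semitones")
# lengthening the delay lowers the pitch; shortening it raises the pitch
```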

Using a Doppler Effect makes the delay more of an “instrument.”

Using a Doppler Effect makes the delay more of an “instrument” because it’s possible to repeat the sound and also alter it.  In my last post I discussed many types of reflections and repetitions in the visual arts, some exact and natural and others more abstract and transformed as reflections. Being able to alter the repetition of a sound in this way is of key importance to me.  Adding additional effects in with the delays is important for building a sound that is musically identifiable as separate from that of the musician I use as my source.

Using classic electroacoustic methods for transforming sounds, we can create new structures and gestures out of a live sound source. Methods such as pitch-shifting, speeding sounds up or slowing them down, or a number of filtering techniques, work better if we also use delays and time displacement as a way to distinguish these elements from the source sounds.

Many types of delay and effects plugins and pedals on the market are based on simple combinations of the principal parameters I have been outlining (e.g. how much feedback, how short a delay, how it is routed). For example, Ping Pong Delay delays a signal 50-100ms or more and alternates sending it back and forth between the left and right channels, sometimes with high feedback so it goes on for a while. Flutter Echo is very similar to the Ping Pong Delay, but with shorter delay times that cause more filtering to occur—an acoustic effect that is sometimes found in very live-sounding public spaces.  Slapback Echo has a longer delay time (75ms or more) with no feedback.

FREEZE!  Infinite Delay and Looping

Some delay devices will let us hold a sample indefinitely in the delay.  We can loop a sound and “freeze” it, adding additional sounds sometime later if we choose. The layer cake of loops built up lends itself to an easy kind of improvisation which can be very beautiful.

“Infinite” delay is used by an entire catalog of genres and musical scenes.

Looping with infinite delay is used by an entire catalog of genres and musical scenes, from noise to folk music to contemporary classical.  The past few years especially, it’s been all over YouTube and elsewhere online, thanks to software like Ableton Live and hardware like Line 6’s popular looper pedals. Engaging in a form of live composing/production, musicians generate textures and motifs, constructing them into entire arrangements, often based upon the sound of one instrument, in many tracks, all played live and in the moment.  In terms of popular electronic music practice, looping and grid interfaces seem to have been the most salient performance and interface paradigms since the late 2000s.

Looping music is often about building up an entire arrangement, from scratch, and with no sounds heard that are not first played by the instrumentalist, live, before their repetition (the sound of which is possibly slightly different and mediated by being heard over speakers).

With live sound processing, we use loops, too, of course. The moment I start to loop a sound “infinitely,” I am, theoretically, no longer working with live sound processing, but I am processing something that happened in the past—this is sometimes called “live sampling” and we could quibble about the differences.  To make dynamic live-looping for improvised music, whether done by sampling/looping other musicians, or by processing one’s own sound, it is essential to be flexible and be able/willing to change the loops in some way, perhaps quickly, and to make alterations to the audio recorded in real-time.  These alterations can be a significant part of the expressiveness of the sound.

For me, the most important part of working with long delays (or infinite loops) is that I be able to create and control rhythms with those delays.  I need to lock in (synchronize) my delay times while I play. Usually I do this manually, by listening, and then using a Tap Tempo patch I wrote (which is what I’ll do when I perform this weekend as part of Nick Didkovsky’s Deviant Voices Festival on October 21 at Spectrum, and the following day with Ras Moshe as part of the Quarry Improvised Music Series at Triskelion Arts).

Short delays are mostly about resonance. In my next and final post, I will talk more about filters and resonance, why using them together with delay is important, as well as strategies for how to be heard when live processing acoustic sound in an improvisation.

In closing, here is an example from What is it Like to be a Bat?, my digital chamber punk trio with Kitty Brazelton (active 1996-2009 and which continues in spirit). In one piece, I turned the feedback up on my delay as high as I could get away with (nearly causing the microphones and sound system to feed back too), then yelled “Ha!” into my microphone, and set off a sequence of extreme delay changes with an interpolating delay in a timing we liked. Joined by drummer Danny Tunick, who wrote a part to go with it, we’d repeat this sequence four times, each time louder, noisier, different but somehow repeatable at each performance. It became a central theme in that piece, and was recorded as the track “Batch 4,” part of our She Said – She Said, “Can You Sing Sermonette With Me?” on the Bat CD for the Tzadik label.

Some recommended further reading and listening

Thom Holmes, Electronic and Experimental Music (Routledge, 2016)

Jennie Gottschalk, Experimental Music Since 1970 (Bloomsbury Academic, 2016)

Geoff Smith, “Creating and Using Custom Delay Effects” (for the website Sound on Sound, May 2012) Smith writes: “If I had to pick a single desert island effect, it would be delay. Why? Well, delay isn’t only an effect in itself; it’s also one of the basic building blocks for many other effects, including reverb, chorus and flanging — and that makes it massively versatile.”

He also includes many good recipes and examples of different delay configurations.

Phil Taylor, “History of Delay” (written for the website for Effectrode pedals)

Daniel Steinhardt and Mick Taylor, “Delay Basics: Uses, Misuses & Why Quick Delay Times Are Awesome” (from their YouTube channel, That Pedal Show). Funny.

Delays as Music

As I wrote in my previous post, I view performing with “live sound processing” as a way to make music by altering and affecting the sounds of acoustic instruments—live, in performance—and to create new sounds, often without the use of pre-recorded audio. These new sounds have the potential to forge an independent and unique voice in a musical performance. However, their creation requires, especially in improvised music, a unique set of musicianship skills and knowledge of the underlying acoustics and technology being used. And it requires that we consider the acoustic environment and spectral qualities of the performance space.

Delays and Repetition in Music

The use of delays in music is ubiquitous.  We use delays to locate a sound’s origin, create a sense of size/space, to mark musical time, create rhythm, and delineate form.

The use of delays in music is ubiquitous.

As a musical device, echo (or delay) predates electronic music. It has been used in folk music around the world for millennia for the repetition of short phrases: from Swiss yodels to African call and response, for songs in the round and complex canons, as well as in performances sometimes taking advantage of unusual acoustic spaces (e.g. mountains/canyons, churches, and unusual buildings).

In contemporary music, too, delay and reverb effects from unusual acoustic spaces have included the Deep Listening cavern music of Pauline Oliveros, experiments using the infinite reverbs in the Tower of Pisa (Leonello Tarbella’s Siderisvox), and organ work at the Cathedral of St. John the Divine in NY using its 7-second delay. For something new, I’ll recommend the forthcoming work of my colleague, trombonist Jen Baker (Silo Songs).

Of course, delay was also an important tool in the early studio tape experiments of Pierre Schaeffer (Etude aux Chemin de Fer) as well as Terry Riley and Steve Reich. The list of early works using analog and digital delay systems in live performances is long and encompasses many genres of music outside the scope of this post—from Robert Fripp’s Frippertronics to Miles Davis’s electric bands (where producer Teo Macero altered the sound of Sonny Sharrock’s guitar and many other instruments) and Herbie Hancock’s later Mwandishi Band.

The use of delays changed how the instrumentalists in those bands played.  In Miles’s work we hear not just the delays, but also improvised instrumental responses to the sounds of the delays and—completing the circle—the electronics performers responding in kind by manipulating their delays. Herbie Hancock was using delays to expand the sound of his own electric Rhodes, and as Bob Gluck has pointed out (in his 2014 book You’ll Know When You Get There: Herbie Hancock and the Mwandishi Band), he “intuitively realized that expressive electronic musicianship required adaptive performance techniques.” This is something I hope we can take for granted now.

I’m skipping any discussion of the use of echo and delay in other styles (as part of the roots of Dub, ambient music, and live looping) in favor of talking about the techniques themselves, independent of the trappings of a specific genre, and favoring how they can be “performed” in improvisation and as electronic musical sounds rather than effects.

Sonny Sharrock processed through an Echoplex by Teo Macero on Miles Davis’s “Willie Nelson” (which is not unlike some recent work by Jonny Greenwood)

By using electronic delays to create music, we can create exact copies or severely altered versions of our source audio, and still recognize it as a repetition, just as we might recognize each repetition of the theme in a piece organized as a theme and variations, or a leitmotif repeated throughout a work. Besides the relationship of delays to acoustic music, the vastly different types of sounds that we can create via these sonic reflections and repetitions have a corollary in visual art, both conceptually and gesturally. I find these analogies to be useful especially when teaching. Comparisons to work from the visual and performing arts that have inspired me in my work include images, video, and dance works.  These are repetitions (exact or distorted), Mandelbrot-like recursion (reflections, altered or displaced and re-reflected), shadows, and delays.  The examples below are analogous to many sound processes I find possible and interesting for live performance.

Sounds we create via sonic reflections and repetitions have a corollary in visual art.

I am a musician not an art critic/theorist, but I grew up in New York, being taken to MoMA weekly by my mother, a modern dancer who studied with Martha Graham and José Limon.  It is not an accident that I want to make these connections. There are many excellent essays on the subject of repetition in music and electronic music, which I have listed at the end of this post.  I include the images and links below as a way to denote that the influences in my electroacoustic work are not only in music and audio.

In “still” visual art works:

  • The reflected, blurry trees in the water of a pond in Claude Monet’s Poplar series create new composite and extended images, a recurring theme in the series.
  • Both the woman and her reflection in Pablo Picasso’s Girl Before a Mirror are abstracted and interestingly the mirror itself is both the vehicle for the reiteration and an exemplified object.
  • There are also repetitions, patterns, and “rhythms” in work by Chuck Close, Andy Warhol, Sol Lewitt, M.C. Escher, and many other painters and photographers.

In time-based/performance works:

  • Fase, Four Movements to the Music of Steve Reich, is a dance choreographed in 1982 by Anne Teresa De Keersmaeker to early works by Steve Reich. De Keersmaeker uses shadows with the dancers. The shadows create a third (and fourth and fifth) dancer, shifting in and out of focus and turning the reflected image into a kind of sleight-of-hand partner for the live dancers.
  • Iteration plays an important role in László Moholy-Nagy’s short films, shadow play constructions, and his Light Space Modulator (1930)
  • Reflection/repetition/displacement are inherent to the work of countless experimental video artists, starting with Nam June Paik, who work with video synthesis, feedback and modified TVs/equipment.

Another thing to be considered is that natural and nearly exact reflections can also be experienced as beautifully surreal. On a visit to the Okefenokee swamp in Georgia long ago, my friends and I rode in small flat boats on Mirror Lake and felt we were part of a Roger Dean cover for a new Yes album.

Okefenokee Swamp

Natural reflections, even when nearly exact, usually have some small change—a play in the light or color, or a slight asymmetry—that gives it away. In all of my examples, the visual reflection is not “the same” as the original.   These nonlinear differences are part of the allure of the images.

These images are all related to how I understand live sound processing to impact my audio sources. Perfect mirrors create surreal new images/objects extending away from the original.  Distorted reflections (anamorphosis) create a more separate identity for the created image, one that can be understood as emanating from the source image, but that is inherently different in its new form. Repetition/mirrors: many exact or near-exact copies of the same image/sound form patterns, rhythms, or textures, creating a new composite sound or image.  Phasing/shadows—time-based or time-connected: the reflected image changes over time in its physical placement with regard to the original, creating a potentially new composite sound.   Most of these ways of working require more than simple delay and benefit from speed changes, filtering, pitch-shift/time-compression, and other things I will delve into in the coming weeks.

The myths of Echo and Narcissus are both analogies and warning tales for live electroacoustic music.

We should consider the myths of Echo and Narcissus both as analogies and as warning tales for live electroacoustic music. When we use delays and reverb, we hear many copies of our own voice/sound overlapping each other, and we create simple musical reflections of our own sound, smoothed out by the overlaps and amplified into a more beautiful version of ourselves!  Warning!  Just as when we sing in the shower, we might fall in love with the sound (to the detriment of the overall sound of the music).


Getting Techie Here – How Does Delay Work?

Early Systems: Tape Delay

A drawing of the trajectory of a piece of magnetic tape between the reels, passing the erase, record, and playback heads.

A drawing by Mark Ballora which demonstrates how delay works using a tape recorder. (Image reprinted with permission.)

The earliest method used to artificially create the effect of an echo or simple delay was to take advantage of the spacing between the record and playback heads on a multi-track tape recorder. The output of the playback head could be routed back to the record head and re-recorded on a different track of the same machine.  That signal would then be read again by the playback head (on its new track), delayed by the amount of time it took the tape to travel from the record head to the playback head.

The delay time is determined by the physical distance between the tape heads and by the tape speed being used.  One limitation is that the available delay times are limited to those that can be created at the playback speeds of the tape. (e.g. At a tape speed of 15 inches per second (ips), tape heads spaced 3/4 to 2 inches apart can create echoes of 50ms to 133ms; 7.5 ips yields 100ms to 267ms, etc.)
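
That head-spacing arithmetic is easy to verify, since the delay time is just the distance between the heads divided by the tape speed:

```python
def tape_delay_ms(head_gap_inches, tape_speed_ips):
    """Delay created by the gap between record and playback heads."""
    return 1000.0 * head_gap_inches / tape_speed_ips

for speed in (15.0, 7.5, 3.75):
    lo = tape_delay_ms(0.75, speed)
    hi = tape_delay_ms(2.0, speed)
    print(f"{speed} ips: {lo:.0f}ms to {hi:.0f}ms for heads 3/4 to 2 inches apart")
# 15 ips: 50-133ms, 7.5 ips: 100-267ms, 3.75 ips: 200-533ms
```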

Here is an example of analog tape delay in use:

Longer/More delays: By using a second tape recorder, we can make a longer sequence of delays, but it would be difficult to emulate natural echoes and reverberation because all our delay lengths would be simple multiples of the first delay. Reverbs have a much more complex distribution of many, many small delays. The output volume of those delays decreases differently (more linearly) in a tape system than it would in a natural acoustic environment (more exponentially).

More noise: Another side effect of creating the delays by re-recording audio is that after many recordings/repetitions the audio signal will start to degrade, affecting its overall spectral qualities, as the high and low frequencies die out more quickly, eventually degrading into, as Hal Chamberlin has aptly described it in his 1985 book Musical Applications of Microprocessors, a “howl with a periodic amplitude envelope.”

Added noise from degradation and overlapped voice and room acoustics is turned into something beautiful in I Am Sitting In A Room, Alvin Lucier’s seminal 1969 work.  Though not technically using delay, the piece is a slowed down microcosm of what happens to sound when we overlap / re-record many many copies of the same sound and its related room acoustics.

A degree of unpredictability certainly enhances the use of any musical device being used for improvisation, including echo and delay. Digital delay makes it possible to overcome the inherent inflexibility and static quality of most tape delay systems, which remain popular for other reasons (e.g. audio quality or nostalgia as noted above).

The list of influential pieces using a tape machine for delay is canonically long.  A favorite of mine is Terry Riley’s piece, Music for the Gift (1963), written for trumpeter Chet Baker. It was the first use of very long delays on two tape machines, something Riley dubbed the “Time Lag Accumulator.”

Terry Riley: Music for the Gift III with Chet Baker

Tape delay was used by Pauline Oliveros and others from the San Francisco Tape Music Center for pieces that were created live as well as in the studio, with no overdubs, and which therefore could be considered performances and not just recordings.   The Echoplex, created around 1959, was one of the first commercially manufactured tape delay systems and was widely used in the ‘60s and ‘70s. Advances in the design of commercial tape delays, such as the addition of more (and moveable) tape heads, increased the number of delays and the flexibility of changing delay times on the fly.

Stockhausen’s Solo (1966), for soloist and “feedback system,” was first performed live in Tokyo using seven tape recorders (the “feedback” system) with specially adjustable tape heads to allow music played by the soloist to “return” at various delay times and combinations throughout the piece.  Though technically not improvised, Solo is an early example of tape music for performed “looping.”  All the music was scored, and a choice of which tape recorders would be used and when was determined prior to each performance.

I would characterize the continued use of analog tape delay as nostalgia.

Despite many advances in tape delay, today digital delay is much more commonly used, whether in an external pedal unit or computer-based. This is because it is convenient—it’s smaller, lighter, and easier to carry around—and because it is much more flexible. Multiple outputs don’t require multiple tape heads or more tape recorders. Digital delay enables quick access to a greater range of delay times, and the maximum delay time is simply a function of the available memory (and memory is much cheaper than it used to be).   Yet, in spite of the convenience and expandability of digital delay, there is continued use of analog tape delay in some circles.  I would simply characterize this as nostalgia (for the physicality of the older devices and of dealing with analog tape, and for the warmth of analog sound; all of these we relate to music from an earlier time).

What is a Digital Delay?

Delay is the most basic component of most digital effects systems, and so it’s critical to discuss it in some detail before moving on to some of the effects that are based upon it.   Below, and in my next post, I’ll also discuss some physical and perceptual phenomena that need to be taken into consideration when using delay as a performance tool / ersatz instrument.

Basic Design

In the simplest terms, a “delay” is simple digital storage.  An audio sample, or a small block of samples, is stored in memory, then read back and played at some later time as output. A one-second delay (1000ms), mono, requires storing one second of audio. (At CD quality, 16-bit samples at a 44.1kHz sample rate, this means about 88 KB of data.) These sizes are teeny by today’s standards, but if we use many delays or very long delays it adds up. (It is not infinite or magic!)
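
To make the storage idea concrete, here is a rough sketch of a delay line as a circular buffer, written in Python rather than in the Max/MSP I actually use; the class name, parameters, and defaults are illustrative only, not any particular tool’s API.

```python
# A minimal sketch of a digital delay line as a circular buffer.
# (Illustrative only; names and defaults are my own choices.)

class DelayLine:
    def __init__(self, delay_ms, sample_rate=44100):
        # One second of mono audio at 44.1kHz is 44,100 samples;
        # at 16 bits (2 bytes) per sample, that is roughly 88 KB of storage.
        self.size = max(1, int(sample_rate * delay_ms / 1000.0))
        self.buffer = [0.0] * self.size   # the stored samples
        self.index = 0                    # current read/write position

    def process(self, x):
        # Read the sample that was written `size` samples ago,
        # then overwrite that slot with the new input.
        delayed = self.buffer[self.index]
        self.buffer[self.index] = x
        self.index = (self.index + 1) % self.size
        return delayed
```

Feeding audio through process() one sample at a time simply returns the same audio, delay_ms later.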

Besides being used to create many types of echo-like effects, a simple one-sample delay is also a key component of the underlying structure of all digital filters and of many reverbs.  An important distinction between these applications is the length of the delay. As described below, when the delay time is short, the input sounds get filtered; with longer delay times, other effects such as echo can be heard.
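
The simplest filter you can build from that one-sample delay is just an average of each sample with the one before it: quickly changing (high-frequency) content partially cancels, leaving a gentle low-pass. The sketch below is a textbook first-order example, not a production filter design.

```python
# Average each sample with its one-sample-delayed copy: a gentle low-pass filter.
# (A textbook first-order FIR example; real filter designs get far more elaborate.)

def one_sample_average(signal):
    prev = 0.0          # the "delay line" here is a single stored sample
    out = []
    for x in signal:
        out.append(0.5 * (x + prev))   # current sample mixed with the delayed copy
        prev = x
    return out
```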

Perception of Delay — Haas (a.k.a. Precedence) Effect

Did you ever drop a pin on the floor?   You can’t see it, but you still know exactly where it is? We humans naturally have a set of skills for sound localization.  These psychoacoustic phenomena have to do with how we perceive the very small time, volume, and timbre differences between the sounds arriving in our ears.

In 1949, Helmut Haas made observations about how humans localize sound by using simple delays of various lengths and a simple 2-speaker system.  He played the same sound (speech, short test tones), at the same volume, out of both speakers. When the two sounds were played simultaneously (no delay), listeners reported hearing the sound as if it were coming from the center point between the speakers (an audio illusion not very different from how we see).  His findings give us some clues about stereo sound and how we know where sounds are coming from.  They also relate to how we work with delays in music.

  • Between 1-10ms delay: If the delay between the two sounds is anywhere from 1ms to 10ms, the sound appears to emanate from the first speaker (the first sound we hear is where we locate the sound).
  • Between 10-30ms delay: The sound source continues to be heard as coming from the primary (first sounding) speaker, with the delay/echo adding a “liveliness” or “body” to the sound. This is similar to what happens in a concert hall—listeners are aware of the reflected sounds but don’t hear them as separate from the source.
  • Between 30-50ms delay: The listener becomes aware of the delayed signal, but still senses the direct signal as the primary source. (Think of the sound in a big box store: “Attention shoppers!”)
  • At 50ms or more: A discrete echo is heard, distinct from the first heard sound, and this is what we often refer to as a “delay” or slap-back echo.

The important fact here is that when the delay between speakers is lowered to 10ms (1/100th of a second), the delayed sound is no longer perceived as a discrete event. This is true even when the volume of the delayed sound is the same as the direct signal. [Haas, “The Influence of a Single Echo on the Audibility of Speech” (1949)].
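
If you want to audition these thresholds yourself, one quick way (a sketch of my own devising, not Haas’s original apparatus) is to write a stereo file whose right channel is a delayed copy of the left, then step the delay through 1ms, 10ms, 30ms, and 50ms and listen on speakers for the point where the copy separates into an echo.

```python
# Write a stereo WAV in which the right channel is the left channel delayed by delay_ms.
# Uses only the Python standard library; the filename and click-train material are arbitrary.
import struct
import wave

def haas_demo(delay_ms, filename="haas_demo.wav", sr=44100, dur=2.0):
    n = int(sr * dur)
    offset = int(sr * delay_ms / 1000.0)
    # A short click every half second: transients make the delay easiest to hear.
    mono = [0.8 if (i % (sr // 2)) < 100 else 0.0 for i in range(n)]
    with wave.open(filename, "w") as w:
        w.setnchannels(2)
        w.setsampwidth(2)    # 16-bit samples
        w.setframerate(sr)
        frames = bytearray()
        for i in range(n):
            left = mono[i]
            right = mono[i - offset] if i >= offset else 0.0
            frames += struct.pack("<hh", int(left * 32767), int(right * 32767))
        w.writeframes(bytes(frames))

# e.g. haas_demo(1), haas_demo(10), haas_demo(30), haas_demo(50)
```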

A diagram of the Haas effect showing how the position of the listener in relationship to a sound source affects the perception of that sound source.

The Haas Effect (a.k.a. Precedence Effect) is related to our skill set for sound localization and other psychoacoustic phenomena. Learning a little about these phenomena (Interaural Time Difference, Interaural Level Difference, and Head Shadow) is useful not only for an audio engineer, but is also important for us when considering the effects and uses of delay in Electroacoustic musical contexts.

What if I Want More Than One?

Musicians usually want the choice to play more than one delayed sound, or to repeat their sound several times. We do this by adding more delays, or by using feedback: routing a portion of our output right back into the input. (Delaying our delayed sound is something like an audio hall of mirrors.) We usually route back only some of the sound (not 100%) so that each repeat is a little quieter and the sound eventually dies out in decaying echoes.  If the feedback level is high, the sound may recirculate for a while in an endless repeat, and may even overload/clip if new sounds are added.
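
In code, that feedback routing looks something like the sketch below (again Python, with the variable names and the 0.5 defaults chosen just for illustration): a fraction of each delayed sample is written back into the buffer along with the new input.

```python
# A delay with feedback: a portion of the delayed output is routed back into the
# delay line, so each echo repeats a little more quietly and eventually dies away.
# (Illustrative only; the feedback and mix defaults are arbitrary.)

def delay_with_feedback(signal, delay_samples, feedback=0.5, mix=0.5):
    buffer = [0.0] * delay_samples
    index = 0
    out = []
    for x in signal:
        delayed = buffer[index]
        # Feedback below 1.0 gives decaying echoes; near (or above) 1.0 the sound
        # recirculates almost endlessly and can clip as new input keeps arriving.
        buffer[index] = x + feedback * delayed
        index = (index + 1) % delay_samples
        out.append((1 - mix) * x + mix * delayed)
    return out
```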

When two or more copies of the same sound event play at nearly the same time, they will comb filter each other. Our sensitivity to the small differences in timbre that result is a key to understanding, for instance, why the many reflections in a performance space don’t usually get mistaken for the real thing (the direct sound).   Likewise, if we work with multiple delays or feedback, the multiple copies of the same sound playing over each other necessarily interact and filter each other, causing changes in the timbre. (This relates again to I Am Sitting In A Room.)
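
As a back-of-the-envelope check on where that comb filtering lands: when a sound is summed with a single delayed copy of itself at equal level, cancellations fall where the delay equals half a period, i.e. at odd multiples of 1/(2 × delay). The little helper below (my own simplification; real rooms and feedback paths are messier) lists the first few notch frequencies for a given delay time.

```python
# First few comb-filter notch frequencies for a sound summed with one delayed copy.
# Notches fall at odd multiples of 1 / (2 * delay).  (A simplification: equal levels,
# a single copy, no feedback; real rooms are far messier.)

def comb_notches(delay_ms, count=5):
    delay_s = delay_ms / 1000.0
    return [(2 * k + 1) / (2 * delay_s) for k in range(count)]

print(comb_notches(1))    # 1 ms  -> 500, 1500, 2500, 3500, 4500 Hz
print(comb_notches(10))   # 10 ms -> 50, 150, 250, 350, 450 Hz
```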

In the end, all of the above (delay length, the use of feedback or additional delays, overlap) determine how we perceive the music we make using delays as a musical instrument. I will discuss feedback and room acoustics, and their potential role as a musical device, in the next post later this month.


My Aesthetics of Delay

To close this post, here are some opinionated conclusions of mine based upon what I have read/studied and borne out in many, many sessions working with other people’s sounds.

  • Short delay times tend to change our perception of the sound: its timbre, and its location.
  • Sounds that are delayed longer than 50ms (or even up to 100ms for some musical sounds) become echoes, or musically speaking, textures.
  • At the in-between delay times (the 30-50ms range, give or take a little) it is the input (the performed sound itself) that determines what will happen. Speech sounds or other percussive sounds with a lot of transients (high amplitude, short duration) will respond differently than long resonant tones (which will likely overlap and be filtered). It is precisely in this domain that the live sound-processing musician needs to do extra listening/evaluating to gain experience and predict what might be the outcome. Knowing what might happen in many different scenarios is critical to creating a playable sound processing “instrument.” (A rough sketch of this guesswork follows below.)
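
Here is that guesswork written out as a toy decision function in Python, with thresholds taken from the bullet points above; it is a rule of thumb of mine, not a law of acoustics.

```python
# A rule-of-thumb guess at what a listener will hear, comparing the delay time
# to the length of the sound being played.  (Thresholds follow the list above;
# treat this as a starting point for listening, not a law.)

def likely_result(delay_ms, event_length_ms):
    if delay_ms < 30 and delay_ms < event_length_ms:
        return "filtering: changed timbre and perceived location"
    if delay_ms <= 50:
        return "in-between: transients separate, long tones overlap and get filtered"
    if delay_ms > event_length_ms:
        return "discrete echoes: texture and rhythm"
    return "overlapping echoes: density, moving toward resonance with feedback"

print(likely_result(5, 400))     # short delay on a long tone
print(likely_result(40, 20))     # the ambiguous zone: the input decides
print(likely_result(250, 80))    # long delay on a short sound
```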

It’s About the Overlap

Using feedback on long delays, we create texture or density, as we overlap sounds and/or extend the echoes to create rhythm.  With shorter delays, using feedback instead can be a way to move toward the resonance and filtering of a sound.  With extremely short delays, control over feedback to create resonance is a powerful way to create predictable, performable, electronic sounds from nearly any source. (More on this in the next post.)
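
Because that resonant ring sits at 1 / delay time, a very short delay setting can be chosen almost like a pitch. The helper below (a convenience sketch of mine, not any standard tool) converts a delay in milliseconds to its resonant frequency and the nearest equal-tempered note.

```python
# Convert a very short delay time (with high feedback) into the pitch it will ring at:
# resonant frequency = 1 / delay time.  (A convenience sketch; the note naming is mine.)
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def delay_to_pitch(delay_ms):
    freq = 1000.0 / delay_ms                   # e.g. 2 ms -> 500 Hz
    midi = 69 + 12 * math.log2(freq / 440.0)   # semitone distance from A440
    nearest = int(round(midi))
    name = NOTE_NAMES[nearest % 12] + str(nearest // 12 - 1)
    cents = round((midi - nearest) * 100)      # how far off the tempered pitch it rings
    return round(freq, 1), name, cents

for ms in (1, 2, 3, 4, 5):
    print(ms, "ms ->", delay_to_pitch(ms))
```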

Live processing (for me) all boils down to small differences in delay times.

Live processing (for me) all boils down to these small differences in delay times—between an original sound and its copy (very short, medium and long delays).  It is a matter of the sounds overlapping in time or not.   When they overlap (due to short delay times or use of feedback) we hear filtering.   When the sounds do not overlap (delay times are longer than the discrete audio events), we hear texture.   A good deal of my own musical output depends on these two facts.


Some Further Reading and Listening

On Sound Perception of Rhythm and Duration

Karlheinz Stockhausen’s 1972 lecture Four Criteria of Electronic Music (Part I)
(I find intriguing Stockhausen’s discussion of unified time structuring and his description of the continuum of rhythms: from those played very fast (creating timbre), to medium fast (heard as rhythms), to very very slow (heard as form). This lecture both expanded and confirmed my long-held ideas about the perceptual boundaries between short and long repetitions of sound events.)

Pierre Schaeffer’s 1966 Solfège de l’Objet Sonore
(A superb book and accompanying CDs with 285 tracks of example audio. Particularly useful for my work and the discussion above are sections on “The Ear’s Resolving Power” and “The Ear’s Time Constant” and many other of his findings and examples. [Ed. note: Andreas Bick has written a nice blog post about this.])

On Repetition in All Its Varieties

Jean-Francois Augoyard and Henri Torgue, Sonic Experience: a Guide to Everyday Sounds (McGill-Queen’s University Press, 2014)
(See their terrific chapters on “Repetition”, “Resonance” and “Filtration”)

Elizabeth Hellmuth Margulis, On Repeat: How Music Plays the Mind (Oxford University Press, 2014)

Ben Ratliff, Every Song Ever (Farrar, Straus and Giroux, 2016)
(Particularly the chapter “Let Me Listen: Repetition”)

Other Recommended Reading

Bob Gluck’s book You’ll Know When You Get There: Herbie Hancock and the Mwandishi Band (University of Chicago Press, 2014)

Michael Peters’s essay “The Birth of the Loop”
http://preparedguitar.blogspot.de/2015/04/the-birth-of-loop-by-michael-peters.html

Phil Taylor’s essay “History of Delay”

My chapter “What if your instrument is Invisible?” in the 2017 book Musical Instruments in the 21st Century as well as my 2010 Leonardo Music Journal essay “A View on Improvisation from the Kitchen Sink” co-written with Hans Tammen.

LiveLooping.org
(A site built by the musician community around the concept of live looping, with links to tools, writing, events, etc.)

Some listening

John Schaefer’s WNYC radio program “New Sounds” has featured several episodes on looping.
Looping and Delays
Just Looping Strings
Delay Music

And finally something to hear and watch…

Stockhausen’s former assistant Volker Müller performing on generator, radio, and three tape machines

Live Sound Processing and Improvisation

Intro to the Intro

I have been mulling over writing about live sound processing and improvisation for some time, and finally I have my soapbox!  For two decades, as an electronic musician working in this area, I’ve been trying to convince musicians, sound engineers, and audiences that working with electronics to process and augment the sound of other musicians is a fun and viable way to make music.

Also a vocalist, I often use my voice to augment and control the sound processes I create in my music, which encompasses both improvised and composed projects. I also have been teaching (Max/MSP, Electronic Music Performance) for many years. My opinions are influenced by my experiences as both an electronic musician who is a performer/composer and a teacher (who is forever a student).

A short clip of my duo project with trombonist Jen Baker, “Clip Mouth Unit,” where I process both her sound and my voice.

Over the past 5-7 years there has been an enormous surge in interest among musicians, outside of computer music academia, in discovering how to enhance their work with electronics and, in particular, how to use electronics and live sound processing as a performable “real” instrument.

So many gestural controllers have become part of the fabric of our everyday lives.

The interest has increased because (of course) so many more musicians have laptops and smartphones, and so many interesting game and gestural controllers have become part of the fabric of our everyday lives. With so many media tools at our disposal, we have all become amateur designers/photographers/videographers, and also musicians, both democratizing creativity (at least to those with the funds for laptops/smartphones) and exponentially increasing and therefore diluting the resulting output pool of new work.

Image of a hatted and bespectacled old man waving his index finger with the caption, "Back in my day... no real-time audio on our laptops (horrors!)"

Back when I was starting out (in the early ’90s), we did not have real-time audio manipulations at our fingertips—nothing easy to download or purchase or create ourselves (unlike the plethora of tools available today).  Although Sensorlab and iCube were available (but not widely), we did not have powerful sensors on our personal devices, conveniently with us at all times, that could be used to control our electronic music with the wave of a hand or the swipe of a finger. (Note: this is quite shocking to my younger students.) There is also a wave of audio analysis tools using Music Information Retrieval (MIR) and alternative controllers, previously only seen at research institutions and academic conferences, all going mainstream. Tools such as the Sunhouse sensory percussion/drum controller, which turns audio into a control source, are becoming readily available and popular in many genres.

In the early ’90s, I was a performing rock-pop-jazz musician, experimenting with free improv/post-jazz. In grad school, I became exposed for the first time to “academic” computer music: real-time, live electroacoustics, usually created by contemporary classical composers with assistance from audio engineers-turned-computer programmers (many of whom were also composers).

My professor at NYU, Robert Rowe, and his colleagues George Lewis, Roger Dannenberg and others were composer-programmers dedicated to developing systems to get their computers to improvise, or building other kinds of interactive music systems.  Others, like Cort Lippe, were developing pieces for an early version of Max running on a NeXT computer using complex real-time audio manipulations of a performer’s sound, and using that as the sole electroacoustic—and live—sound source and for all control (a concept that I personally became extremely interested and invested in).

As an experiment, I decided to see if I could create simplified versions of the live sound processing ideas I was learning about. I started to bring them to my free avant-jazz improv sessions and to my gigs, using a complicated Max patch I made to control an Eventide H3000 effects processor (which was much more affordable than the NeXT machine, plus we had one at NYU). I did many performances with a core group of people willing to let me put microphones on everyone and process them during our performances.

Collision at Baktun 1999. Paul Geluso (bass), Daniel Carter (trumpet), Tom Beyer (drums), Dafna Naphtali (voice, live sound processing), Kristin Lucas (video projection / live processing), and Leopanar Witlarge (horns).

Around that time I also met composer/programmer/performer Richard Zvonar, who had made a similarly complex Max patch as “editor/librarian” software for the H3000, to enable him to create all the mind-blowing live processing he used in his work with Diamanda Galás, Robert Black (State of the Bass), and others. Zvonar was very encouraging about my quest to control the H3000 in real-time via a computer. (He was “playing” his unit from the front panel.)  I created what became my first version of a live processing “instrument” (which I dubbed “kaleid-o-phone” at some point). My subsequent work with Kitty Brazelton and Danny Tunick, in What is it Like to be a Bat?, really stretched me to find ways to control live processing in extreme and repeatable ways that became central and signature elements of our work together, all executed while playing guitar and singing—no easy feat.

Six old laptops all open and lined up in two rows of three on a couch.

Since then, over 23 years and 7 laptops, many gigs and ensembles, and a few released CDs, I’ve worked all along on that same “instrument,” updating my Max patch, trying out many different controllers and ideas, and adding real-time computer-based audio once that became possible on a laptop in the late ’90s. I’m just that kinda gal; I like to tinker!

In the long run, what is more important to me than the Max programming I did for this project is that I was able to develop for myself an aesthetic practice and rules for my live sound processing, centered on respecting the sound and independence of the other musicians, that help me make good music when processing other people’s sound.

The omnipresent “[instrument name] plus electronics”, like a “plus one” on a guest list, fills many concert programs.

Many people, of course, use live processing on their own sound, so what’s the big deal? Musicians are excited to extend their instruments electronically and there is much more equipment on stage in just about every genre to prove it. The omnipresent “[instrument name] plus electronics”, like a “plus one” on a guest list, fills many concert programs.

However, I am primarily interested in learning how a performer can use live processing on someone else’s sound so that it can become a truly independent voice in an ensemble.

What is Live Sound Processing, really?

To perform with live sound processing is to alter and affect the sounds of acoustic instruments, live, in performance (usually without the aid of pre-recorded audio), and in this way create new sounds, which in turn become independent and unique voices in a musical performance.

Factoring in the acoustic environment of the performance space, it’s possible to view each performance as site-specific, as the live sound processor reacts not only to the musicians and how they are playing but also to the responsiveness and spectral qualities of the room.

Although, in the past, the difference between live sound processing and other electronic music practices has not been readily understood by audiences (or even many musicians), in recent years the complex role of the “live sound processor” musician has evolved to often be that of a contributing, performing musician, sitting on stage within the ensemble and not relegated, by default, to the sound engineer position in the middle or back of the venue.

Performers as well as audiences can now recognize electroacoustic techniques when they hear them.

With faster laptops and more widespread availability of classic live sound processing as software plugins, these live sound processing techniques have gradually become more accepted over the past 20 years—and in many music genres practically expected (not to mention the huge impact these technologies have had in more commercial manifestations of electronic dance music or EDM). Both performers and audiences have become savvier about many electroacoustic techniques and sounds and can now recognize them when they hear them.

We really need to talk…

I’d like to encourage a discourse about this electronic musicianship practice, to empower live sound processors to use real-time (human/old-school) listening and analysis of sounds (being played by others), and to develop skills for real-time (improvised) decisions about how to respond and manipulate those sounds in a way that facilitates their electronic-music-sounds being heard—and understood—as a separate performing (and musicianly) voice.

In this way, the live sound processor is not always dependent on and following the other musicians (who are their sound source), their contributions not simply “effects” that are relegated to the background. Nor will the live sound processor be brow-beating the other musicians into integrating themselves with, or simply following, inflexible sounds and rhythms of their electronics expressed as an immutable/immobile/unresponsive block of sound that the other musicians must adapt to.

My Rules

My self-imposed guidelines, developed over several years of performing and sessions, are:

  1. Never interfere with a musician’s own musical sound, rhythm or timbre. (Unless they want you to!)
  2. Be musically identifiable to both co-players and audience (if possible).
  3. Incorporate my body: use some kind of physical interaction between the technology and myself, whether through controllers, the acoustics of the sound itself, or my own voice.

I wrote about these rules in “What if Your Instrument is Invisible?”, my chapter contribution to the excellent book Musical Instruments in the 21st Century: Identities, Configurations, Practices (Springer, 2017).

The first two rules, in particular, are the most important ones and will inform virtually everything I will write in coming weeks about live sound processing and improvisation.

My specific area of interest is live processing techniques used in improvised music, and in other settings in which the music is not all pre-composed. Under such conditions, many decisions must be made by the electronic musician in real-time. My desire is to codify the use of various live sound processing techniques into a pedagogical approach that blends listening techniques, a knowledge of acoustics / psychoacoustics, and tight control over the details of live sound processing of acoustic instruments and voice. The goal is to improve communication between musicians and optional scoring of such work, to make this practice easier for new electronic musicians, and to provide a foundation for them to develop their own work.

You are not alone…

There are many electronic musicians who work as I do with live sound processing of acoustic instruments in improvised music. Though we share a bundle of techniques as our central mode of expression, there is a very wide range of possible musical approaches and aesthetics, even within my narrow definition of “Live Sound Processing” as real-time manipulation of the sound of an acoustic instrument to create an identifiable and separate musical voice in a piece of music.

In 1995, I read a preview of what Pauline Oliveros and the Deep Listening Band (with Stuart Dempster and David Gamper) would be doing at their concert at the Kitchen in New York City. Still unfamiliar with DLB’s work, I was intrigued to hear about E.I.S., their “Expanded Instrument System” described as an “interactive performer controlled acoustic sound processing environment” giving “improvising musicians control over various parameters of sound transformation” such as “delay time, pitch transformation” and more. (It was 1995, and they were working with the Reson8 for real-time processing of audio on a Mac, which I had only seen done on NeXT machines.) The concert was beautiful and mesmerizing. But lying on the cushions at the Kitchen, bathing in the music’s deep tones and sonically subtle changes, I realized that though we were both interested in the same technologies and methods, my aesthetics were radically different from those of DLB. I was, from the outset, more interested in noise/extremes and highly energetic rhythms.

It was an important turning point for me as I realized that to assume what I was aiming to do was musically equivalent to DLB simply because the technological ideas were similar was a little like lumping together two very different guitarists just because they both use Telecasters. Later, I was fortunate enough to get to know both David Gamper and Bob Bielecki through the Max User Group meetings I ran at Harvestworks, and to have my many questions answered about the E.I.S. system and their approach.

There is now more improvisation than I recall witnessing 20 years ago.

Other musicians important for me to mention who have been working with live sound processing of other instruments and improvisation for some time: Lawrence Casserley, Joel Ryan (both in their own projects and in long associations with saxophonist Evan Parker’s “ElectroAcoustic” ensemble), Bob Ostertag (influential in all his modes of working), and Satoshi Takeishi and Shoko Nagai’s duo Vortex. More recently: Sam Pluta (who creates “reactive computerized sound worlds” with Evan Parker, Peter Evans, Wet Ink, and others), and Hans Tammen. (Full disclosure: we are married to each other!)

Joel Ryan and Evan Parker at STEIM.

In academic circles, computer musicians, always interested in live processing, have more often taken to the stage as performers operating their software (moving from the central/engineer position). It seems there is also more improvisation than I recall witnessing 20 years ago.

But as for me…

In my own work, I gravitate toward duets and trios, so that it is very clear what I am doing musically, and there is room for my vocal work. My duos are with pianist Gordon Beeferman (our new CD, Pulsing Dot, was just released), percussionist Luis Tabuenca (Index of Refraction), and Clip Mouth Unit—a project with trombonist Jen Baker. I also work occasionally doing live processing with larger ensembles (with saxophonist Ras Moshe’s Music Now groups and Hans Tammen’s Third Eye Orchestra).

Playing with live sound processing is like building a fire on stage.

I have often described playing with live sound processing as like “building a fire on stage,” so I will close by taking the metaphor a bit further. There are two ways to start a fire: with a lot of planning, or with improvisation. Which method we choose depends on environmental conditions (wind, humidity, location), the tools we have at hand, and also what kind of person we are (a planner/architect, or someone more comfortable thinking on their feet).

In the same way, every performance environment impacts the responsiveness and acoustics of the musical instruments used there. This is much more pertinent when “live sound processing” is the instrument. The literal weather, humidity, room acoustics, even how many people are watching the concert all affect the de facto responsiveness of a given room, and can greatly affect the outcome, especially when working with feedback or short delays and resonances. Personally, I am a bit of both personality types—I start with a plan, but I’m also ready to adapt. With that in mind, I believe the improvising mindset is needed for working most effectively with live sound processing as an instrument.

A preview of upcoming posts

What follows in my posts this month will be ideas about how to play better as an electronic musician using live acoustic instruments as sound sources. These ideas are (I hope) useful whether you are:

  • an instrumentalist learning to add electronics to your sound, or
  • an electronic musician learning to play more sensitively and effectively with acoustic musicians.

In these upcoming posts, you can read some of my discussions/explanations and musings about delay as a musical instrument, acoustics/psychoacoustics, feedback fun, filtering/resonance, pitch-shift and speed changes, and the role of rhythm in musical interaction and being heard. These are all ideas I have tried out on many of my students at New York University and The New School, where I teach Electronic Music Performance, as well as from a Harvestworks presentation, and from my one-week course on the subject at the UniArts Summer Academy in Helsinki (August 2014).


Dafna Naphtali creating music from her laptop which is connected to a bunch of cables hanging down from a table. (photo by Skolska/Prague)

Dafna Naphtali is a sound-artist, vocalist, electronic musician and guitarist.   As a performer and composer of experimental, contemporary classical and improvised music since the mid-1990s, she creates custom Max/MSP programming incorporating polyrhythmic metronomes, Morse Code, and incoming audio signals to control her sound-processing of voice and other instruments, and other projects such as music for robots, audio augmented reality sound walks and “Audio Chandelier” multi-channel sound projects.  Her new CD Pulsing Dot with pianist Gordon Beeferman is on Clang Label.