
Resonating Filters: How to Listen and Be Heard


Written By

Dafna Naphtali

I have been writing all this month about how a live sound processing musician could develop an electroacoustic musicianship—and learn to predict musical outcomes for a given sound and process—just by learning a few things about acoustics/psychoacoustics and how some of these effects work. Coupled with some strategies about listening and playing, this can make it possible for the live processor to create a viable “instrument.” Even when processing the sounds of other musicians, it enables the live sound processing player to behave and react musically like any other musician in an ensemble and not be considered as merely creating effects. 

In the previous post, we talked about the relationship between delays, feedback, and filters. We saw how the outcome of various configurations of delay times and feedback is directly affected by the characteristics of the sounds we put into them, whether they are short or long, resonant or noisy. We looked at pitch-shifts created by the Doppler effect in multi-tap delays, and at how one might use any of these things when creating live electroacoustic music with live sound processing techniques. As I demonstrated, it’s about the overlap of sounds, about operating in a continuum from creating resonance to creating texture and rhythm. It’s about being heard and learning to listen. Like all music. Like all instruments.

It’s about being heard and learning to listen. Like all music. Like all instruments.

To finish out this month of posts about live sound processing, I will talk about a few more effects, and some strategies for using them. I hope this information will be useful to live sound processors (because we need to know how to be heard as a separate musical voice, and also to stay flexible in our control, especially in live sound processing). It should also be useful to instrumentalists processing their own sound, because it will speed the process of finding what sounds good on your instrument and help with predicting the outcomes of various sound processing techniques. It should be especially helpful when preparing for improvisation, or any live processing project without the luxury of a long time schedule, and so too, I hope, for composers who are considering writing for live processing, or creating improvisational settings for live electroacoustics.

Resonance / Filtering in More Detail

We saw in the last post how delays and filters are intertwined in their construction and use, existing in a continuum from short delays to long delays, producing rhythm, texture, and resonance depending on the length of the source audio events being processed, and the length of the delays (as well as feedback).

A special case is that of a very short delay (1-30ms) combined with lots of feedback (90% or more). The sound circulates so fast through the delay that it resonates at the frequency of that circulation, producing clear pitches we can count on.

The effect is heard best with a transient (a very short sound such as a hand clap, vocal fricatives like “t” or “k”, or a snare drum hit). For instance, if I have a 1ms delay with lots of feedback and input a short transient sound, we will hear a ringing at 1000Hz, because that is how fast the sound is circulating through the delay (1000 times per second). This is roughly the same pitch as “B” on the piano (a little sharp). Interestingly, if we change the delay to 2ms, the pitch heard will be 500Hz (also “B”, but an octave lower); 3ms yields “E” (333Hz), 4ms yields another “B” (250Hz), 5ms a “G” (200Hz), and so on, in a kind of upside-down overtone series.
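If you want to check the arithmetic, here is a tiny Python sketch (just an illustration of the math above, assuming the resonance sits at one over the delay time) that converts a delay time into its ringing frequency and the nearest equal-tempered note:

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def delay_to_freq(delay_ms):
    """Resonant frequency (Hz) of a short delay with heavy feedback: one trip per period."""
    return 1000.0 / delay_ms

def nearest_note(freq_hz):
    """Nearest equal-tempered note name, assuming A4 = 440 Hz."""
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

for ms in (1, 2, 3, 4, 5):
    f = delay_to_freq(ms)
    print(f"{ms} ms -> {f:.0f} Hz (~{nearest_note(f)})")
# 1 ms -> 1000 Hz (~B5), 2 ms -> 500 Hz (~B4), 3 ms -> 333 Hz (~E4),
# 4 ms -> 250 Hz (~B3), 5 ms -> 200 Hz (~G3)
```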

Karplus-Strong Algorithm / Periodicity Pitch

A very short delay combined with high feedback resembles physical modeling synthesis techniques, which are very effective for simulating plucked string and drum sounds. One such method, the Karplus-Strong algorithm, consists of a recirculating delay line with a filter in its feedback loop. The delay line is filled with samples of noise. As the samples recirculate through the filter in the feedback loop, they form a periodic sample pattern whose period is directly related to how many samples the delay line holds. Even though the input signal is pure noise, the algorithm creates a complex sound with pitch content that is related to the length of the delay. “Periodicity pitch” has been well studied in the field of psychoacoustics, and it is known that even white noise, if played together with a delayed copy of itself, will have pitch; this is true even if the two copies are sent separately to each ear. The low-pass filter in the feedback loop robs the noise of a little of its high-frequency content on each pass through the circuit, replicating the acoustical behavior of a plucked string or drum.
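To make that concrete, here is a bare-bones Karplus-Strong sketch in Python with NumPy (a minimal illustration, not a production instrument): a delay line filled with noise, recirculated through a simple two-point averaging low-pass.

```python
import numpy as np

def karplus_strong(freq_hz, duration_s, sr=44100):
    """Minimal Karplus-Strong pluck: a noise-filled delay line whose contents are
    recirculated through a two-point average (a gentle low-pass) on every pass."""
    n = int(sr / freq_hz)                    # delay length in samples sets the pitch
    buf = np.random.uniform(-1.0, 1.0, n)    # fill the delay line with noise
    out = np.zeros(int(duration_s * sr))
    for i in range(len(out)):
        out[i] = buf[i % n]
        # feedback path: average the current sample with its neighbor and write it back
        buf[i % n] = 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out

pluck = karplus_strong(220.0, 1.5)           # roughly A3, decaying like a plucked string
```

The pitch comes entirely from the delay-line length (sample rate divided by frequency), and the averaging filter is what makes each pass a little duller, just like a real string losing its high frequencies.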

If we set up a very short delay and use lots of feedback, and input any short burst of sound—a transient, click, or vocal fricative—we can get a similar effect of a plucking sound or a resonant click. If we input a longer sound at the same frequency as what the delay is producing (or at multiples of that frequency), then those overtones will be accentuated, in the same way some tones are louder when we sing in the shower, because they are being reinforced. The length of the delay determines the pitch, and the feedback amount (along with any filter we use in the feedback loop) determines the sustain and length of the note.

Filtering & Filter Types

Besides any types of resonance we might create using short delays, there are also many kinds of audio filters we might use for any number of applications including live sound processing: Low Pass Filters, High Pass Filters, etc.

A diagram of various filter types.

But by far the most useful tools for creating a musical instrument out of live processing are resonant filters, specifically the BandPass and Comb filters, so let’s just focus on those. When filters have sharp cutoffs, they also boost certain frequencies near their cutoff points so that these are louder than in the input; this added resonance is a byproduct of the sharp cutoff. BandPass filters allow us to “zoom in” on one region of a sound’s spectrum and reject the rest. Comb filters, created when a delayed copy of a sound is added to itself, cancel out many evenly spaced regions (“teeth”) of the spectrum, creating a characteristic sound.

The most useful tools for creating a musical instrument out of live processing are resonant filters.

The primary elements of a BandPass filter we would want to control are center frequency, bandwidth, and filter Q (defined as center frequency divided by bandwidth, but which we can just think of as how narrow or “sharp” the peak is, i.e. how resonant it is). When the Q is high (very resonant), we can use this property to create or underscore certain overtones in a sound that we want to bring out or to experiment with.
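As a rough illustration (the values here are arbitrary, and this uses SciPy rather than a live-processing environment), here is a narrow, high-Q resonant filter excited by noise; note that Q is just center frequency divided by bandwidth:

```python
import numpy as np
from scipy.signal import iirpeak, lfilter

sr = 44100
center_hz = 1200.0                         # the region of the spectrum to bring out
bandwidth_hz = 60.0                        # narrow band -> high Q -> strongly resonant
Q = center_hz / bandwidth_hz               # Q = center frequency / bandwidth

b, a = iirpeak(center_hz, Q, fs=sr)        # second-order resonant (peaking) filter

noise = np.random.uniform(-1.0, 1.0, sr)   # one second of noise as a stand-in input
ringing = lfilter(b, a, noise)             # the output "rings" around 1200 Hz
```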

Phasing / Flanging / Chorus: These are all filtering-type effects, using very short and automatically varying delay times. A phase-shifter delays the sound by less than one cycle, cancelling out some frequencies through the overlap and producing a non-uniform, but comb-like, filter. A flanger, which sounds a bit more extreme, uses delays of around 5-25ms, producing a more uniform comb filter (evenly spaced peaks and troughs in the spectrum). It is named after the original practice of audio engineers who would press down on one reel (flange) of an analog tape deck, slowing it down slightly as it played nearly in sync with an identical copy of the audio on a second tape deck. Chorus uses multiple copies of the sound at even longer delay times.
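A flanger can be sketched in just a few lines (a rough illustration with no feedback path, assuming a floating-point input signal): the input is mixed with a copy of itself whose short delay time is slowly swept by an LFO, which moves the comb-filter notches up and down.

```python
import numpy as np

def flanger(x, sr=44100, depth_ms=5.0, rate_hz=0.25, mix=0.5):
    """Bare-bones flanger: mix the signal with a short, LFO-swept delayed copy of itself."""
    max_delay = int(sr * depth_ms / 1000.0)
    y = np.zeros_like(x)
    for i in range(len(x)):
        # the delay sweeps between 0 and depth_ms milliseconds, once per 1/rate_hz seconds
        d = int((max_delay / 2.0) * (1.0 + np.sin(2.0 * np.pi * rate_hz * i / sr)))
        delayed = x[i - d] if i - d >= 0 else 0.0
        y[i] = (1.0 - mix) * x[i] + mix * delayed
    return y
```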

A tutorial on Phasing Flanging and Chorus

For my purposes, as a live processor trying to create an independent voice in an improvisation, I find these three effects most useful if I treat them the same as filters, except that, since they are built on delays I can change, I may also be able to increase or decrease the delay times and get a Doppler effect, or play with feedback levels to accentuate certain tones.

I use distortion the same way I would use a filter—as a non-temporal transformation.

Distortion

From my perspective, whatever methods are used to get distortion add and subtract overtones from our sound, so for my live processing purposes, I use them the same way I would use filters—as non-temporal transformations. Below is a gorgeous example of distortion, not used on a guitar. The only instruction in the score for the electronics is to gradually bring up the distortion in one long crescendo. I ran the electronics for the piece a few times in the ’90s for cellist Maya Beiser, and got to experience how strongly the overtones pop out because of the distortion pedal, and move around nearly on their own.

Michael Gordon: Industry

Pitch-Shift / Playback Speed Changes / Reversing Sounds

I once heard composer and electronic musician Nic Collins say that to make experimental music one need only “play it slow, play it backwards.” Referring to pre-recorded sounds, these are certainly time-honored electroacoustic approaches, born of a time when only tape recorders, microphones, and a few oscillators were used to make electronic music masterpieces.

For live processing of sound, pitch-shift and/or time-stretch continue to be simple and valuable processes. Time compression and pitch-shift are connected by physics: sounds played back slower are correspondingly lower in pitch, and sounds played back faster are higher. (With analog tape or a turntable, if you play a sound back at twice the speed, it plays back an octave higher, because the soundwaves go by twice as fast, doubling the frequency.)
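Here is one way to write that old-school tape/turntable behavior as a simple resampling sketch (pitch and duration change together, exactly as described above):

```python
import numpy as np

def change_speed(x, speed):
    """Old-school variable-speed playback: read through the buffer faster or slower.
    speed=2.0 plays back twice as fast and an octave higher; speed=0.5 is half speed
    and an octave lower, like tape or a turntable."""
    read_positions = np.arange(0, len(x) - 1, speed)
    return np.interp(read_positions, np.arange(len(x)), x)
```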

The relationship between speed of playback and time-stretch was decoupled in the mid-’90s.

This relationship between speed of playback and time-stretch was decoupled in the mid-’90s, with faster computers, realtime spectral analysis, and other methods making it possible to more easily do one without the other. That decoupling is now the norm. In much of the commercial music software my students use, it is possible to slow down a sound and not change its pitch (certainly more useful for changing tempo in a song with pre-recorded acoustic drums), and being able to pitch-shift or Autotune a voice without changing its speed is also a very useful tool for commercial production. These decoupled programs/methods (with names like “Warp”, “Flex”, etc.) are often based on granular synthesis or phase vocoders, which each add their own sonic residue (essentially errors or noises endemic to the method when using extreme parameter settings). Sometimes these mistakes, noises, and glitch sounds are useful and fun to work with, too.

An example of making glitch music with Ableton’s Warp mode (their pitch-shift with no time-compression/expansion mode).

Some great work by Philip White and Ted Hearne using Autotune gone wild on their record R We Who R We

Justin Bieber 800% slower (using PaulStretch extreme sound stretch) is a favorite of mine, but trying to use a method like this for a performance (even if it were possible in real-time) might be a bit unwieldy and make for a very long performance, or very few notes performed. Perhaps we could just treat this like a “freeze” delay function for our purposes in this discussion. Nevertheless, I want to focus here on old-school, time-domain, interconnected pitch-shift and playback speed changes, which are still valuable tools.

I am quite surprised at how many of my current students have never tried slowing down the playback of a sound in realtime.  It’s not easy to do with their software in realtime, and some have never had access to a variable speed tape recorder or a turntable, and so they are shockingly unfamiliar with this basic way of working. Thankfully there are some great apps that can be used to do this and, with a little poking around, it’s also possible to do using most basic music software.

A Max patch demo of changing playback speeds and reversing various kinds of sound.

Some sounds sound nearly the same when reversed, some not.

There are very big differences in what happens when pitch-shifting various kinds of sounds (or changing the speed or direction of playback). The response of speech-like sounds (with lots of formants, pitch, and overtone changes within the sound) differs from what happens to string-like (plucked or bowed) or percussive sounds. Some sound nearly the same when reversed, some not. It is a longer conversation to discuss global predictions about what the outcome of our process will be for every possible input sound (as we can more easily do with delays/filters/feedback), but here are a few generalizations.

Strings can be pitch-shifted up or down, and sound pretty good, bowed and especially plucked.  If the pitch-shift is done without time compression or expansion, then the attack will be slower, so it won’t “speak” quickly in the low end.  A vibrato might get noticeably slow or fast with pitch-changes.

Pitch-shifting a vocal sound up or down can create a much bigger and iconic change in the sound, personality, or even gender of the voice. Pitching a voice up we get the iconic (or annoying) sound of Alvin and the Chipmunks.

Pitch-shifting a voice down, we get slow, slurry speech sounding like Lurch from the Addams Family, or what’s heard in all of DJ Screw’s chopped and screwed mixtapes (or even a gender change, as in John Oswald’s 1988 plunderphonic treatment of Dolly Parton).

John Oswald: Pretender (1988) featuring the voice of Dolly Parton

But if the goal is to create a separate voice in an improvisation, I would prefer to pitch-shift the sound, then also put it through a delay, with feedback. That way I can create sound loops of modulated arpeggios moving up and up and up (or down, or both) in a symmetrical movement using the original pitch interval difference (not just whole tone and diminished scales, but everything in between as well). Going up in pitch gets higher until it’s just shimmery (since overtones are gone as it nears the limits of the system).  Going down in pitch gets lower and the sound also gets slower. Rests and silences are slow, too. In digital systems, the noise may build up as some samples must be repeated to play back the sound at that speed.  These all can relate back to Hugh Le Caine’s early electroacoustic work Dripsody for variable speed tape recorder (1955) which, though based on a single sample of a water drop, makes prodigious use of ascending arpeggios created using only tape machines.

Hugh Le Caine: Dripsody (1955)
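Here is a sketch of the pitch-shifter-inside-a-delay idea described above (a minimal illustration using a crude resampling shifter, not any particular patch): each recirculation is shifted again by the same interval, so successive echoes step up or down symmetrically while the feedback amount makes them fade.

```python
import numpy as np

def resample(x, ratio):
    """Crude time-domain pitch shift: reading faster raises the pitch and shortens the sound."""
    return np.interp(np.arange(0, len(x) - 1, ratio), np.arange(len(x)), x)

def arpeggio_loop(x, sr=44100, semitones=3, delay_s=0.5, feedback=0.7, passes=8):
    """Pitch-shifter in a delay's feedback path: every recirculation is shifted again
    by the same interval, so the echoes climb (or fall) in equal steps while decaying."""
    ratio = 2 ** (semitones / 12.0)
    segments, seg, gain = [], np.asarray(x, dtype=float), 1.0
    for _ in range(passes):
        segments.append(gain * seg)
        seg = resample(seg, ratio)       # shift once more before the next echo
        gain *= feedback                 # each echo is also a little quieter
    out = np.zeros(int(passes * delay_s * sr) + max(len(s) for s in segments))
    for p, s in enumerate(segments):
        start = int(p * delay_s * sr)
        out[start:start + len(s)] += s   # echoes land delay_s apart
    return out
```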

Which brings me to the final two inter-related topics of these posts—how to listen and how to be heard.

How to Listen

Acousmatic or Reduced listening – the classic discussion by Pierre Schaeffer (also found in the writings of Michel Chion) – is where I start with every group of students in my Electronic Music Performance classes. We need to be able to hear the sounds we are working on for their abstracted essences. This is in sharp contrast to the normal listening we do every day, which he called causal listening (what is the sound source?) and semantic listening (what does the sound mean?).

We need to be able to hear the sounds we are working on for their abstracted essences.

We learn to describe sounds in terms of their pitch (frequency), volume (amplitude), and tone/timbre (spectral qualities).  Very importantly, we also listen to how these parameters change over time and so we describe envelope, or what John Cage called the morphology of the sound, as well as describing a sound’s duration and rhythm.

Listening to sound acousmatically directly impacts how we can make ourselves heard as a separate, viable “voice” using live processing. Much of what a live sound processor improvising in real time needs is the ability to provide contrast with the source sound. This requires knowledge of what the delays, filters, and processes will produce with many types of possible input sounds (which is what I have been describing here), a good technical setup that is easy to change quickly and reactively, active acousmatic listening, and good ear/hand coordination (as with every instrument) to find the needed sounds at the right moment. (And that just takes practice!)

All the suggestions I have made relate back to the basic properties we listen for in acousmatic listening. Keeping that in mind, let’s finish out this post with how to be heard, and specifically what works for me and my students, in the hope it will be useful for some of you as well.

How to be Heard
(How to Make a Playable Electronic Instrument Out of Live Processing)

Sound Decisions: Amplitude Envelope / Dynamics

A volume pedal, or some way to control volume quickly, is the first tool I need in my setup, and the first thing I teach my students. Though useful for maintaining the overall mix, more importantly it enables me to shape the volume and subtleties of my sound to be different than that of my source audio. Shaping the envelope/dynamics of live-processed sounds of other musicians is central to my performing, and an important part of the musical expression of my sound processing instrument.  If I cannot control volume, I cannot do anything else described in these blog posts.  I use volume pedals and other interfaces, as well as compressor/limiters for constant and close control over volume and dynamics.

Filtering / Pitch-Shift (non-temporal transformations)

To be heard when filtering or pitch-shifting with the intention of being perceived as a separate voice (not just an effect) requires displacement of some kind. Filtering or pitch-shifting with no delay transforms the sound and gesture being played, but it does not create a new gesture, because the original and the processed sound are taking up the same space, temporally or spectrally or both. So, we need to change the sound in some way to create contrast. We can do this by changing parameters of the filter (Q, bandwidth, or center frequency), or by delaying the sound with a long enough delay that we hear the processed version as a separate event. That delay time should be more than 50-100ms, depending on the length of the sound event. Shorter delays would just give us more filtering if the sounds overlap.

  • When filtering or pitch-shifting a sound, we will not create a second voice unless we displace it in some way. Think of how video feedback works: the displacement makes it easier to perceive.
  • Temporal displacement: We can delay the sound we are filtering (same as filtering a sound we have just delayed). The delay time must be long enough so there is no overlap and it is heard as a separate event. Pitch-shifts that cause the sound to play back faster or slower might introduce enough temporal displacement on their own if the shift is extreme.
  • Timbral displacement: If we create a new timbral “image” that is so radically different from the original, we might get away with it.
  • Changes over time / modulations: If we do filter sweeps, or change the pitch-shift in ways that contrast with what the instrument is doing, we can be heard better.
  • Contrast: If the instrument is playing long tones, then I would choose to do a filter sweep, or change delay times, or pitch-shift. This draws attention to my sound as a separate electronically mediated sound.  This can be done manually (literally a fader), or as some automatic process that we turn on/off and then control in some way.

Below is an example of me processing Gordon Beeferman’s piano in an unreleased track. I am using very short delays with pitch-shift to create a hazy resonance of pitched delays and I make small changes to the delay and pitch-shift to contrast with what he does in terms of both timbre and rhythm.

Making it Easier to Play

Saved States/Presets

I cannot possibly play or control more than a few parameters at once.

Since I cannot possibly play or control more than a few parameters at once, and I am using a computer, I find it easier to create groupings of parameters, my own “presets” or “states”, that I can move between, knowing I can get to them as I want to.

Trajectories

Especially if I play solo, sometimes it is helpful if some things can happen on their own. (After all, I am using a computer!) If possible, I will set up a very long trajectory or change in the sound, for instance a filter sweep, or slow automated changes to pitch-shifts. This frees up my hands and mind to do other things, and ensures that not everything I am doing happens in 8-bar loops.

Rhythm

I cannot express strongly enough how important control over rhythm is to my entire concept. It is what makes my system feel like an instrument. My main modes of expression are timbre and rhythm.  Melody and direct expression of pitch using electronics are slightly less important to me, though the presence of pitches is never to be ignored. I choose rhythm as my common ground with other musicians. It is my best method to interact with them.

Nearly every part of my system allows me to create and change rhythms by altering delay times on-the-fly, or simply tapping/playing the desired pulse that will control my delay times or other processes.  Being able to directly control the pulse or play sounds has helped me put my body into my performance, and this too helps me feel more connected to my setup as an instrument.

Even using an LFO (Low Frequency Oscillator) to make tremolo effects and change volume automatically can be interesting, and I consider it part of my rhythmic expression (the speed of which I’d want to be able to control while performing).
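A tremolo like that is just an LFO multiplying the signal’s amplitude (a minimal sketch; the rate would be the parameter to keep under your hands while performing):

```python
import numpy as np

def tremolo(x, sr=44100, rate_hz=4.0, depth=0.8):
    """LFO-driven volume: a slow sine wave scales the amplitude up and down,
    turning level control into a rhythmic gesture."""
    t = np.arange(len(x)) / sr
    lfo = 1.0 - depth * 0.5 * (1.0 + np.sin(2.0 * np.pi * rate_hz * t))
    return x * lfo
```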

I am strongly attracted to polyrhythms. (Not surprisingly, my family is Greek, and there was lots of dancing in odd time signatures growing up.) Because polyrhythm is so prevalent in my music, I implemented a mechanism that allows me to tap delay times and rhythms that are complexly related to what is happening in the ensemble at that moment. After pianist Borah Bergman once explained a system he thought I could use for training myself to perform complex rhythms, I created a Max patch to implement what he taught me, and I started using this polyrhythmic metronome to control the movement between any two states/presets quickly, creating polyrhythmic electroacoustics. Other rhythmic control sources I have used include Morse Code as rhythm, algorithmic processes, a rhythm engine influenced by North Indian classical tala, and whatever else interests me for a particular project.
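As a toy illustration of the idea (a guess at the shape of such a tool, not the actual Max patch), a polyrhythmic metronome can be as simple as two pulse grids laid over one bar, with each tick tagged with the preset or state it should trigger:

```python
def polyrhythm_ticks(bar_s, divisions=(3, 4), presets=("A", "B")):
    """Toy polyrhythmic metronome: overlay pulse grids on one bar and tag each tick
    with the preset it should trigger (e.g. 3-against-4 switching between two states)."""
    ticks = []
    for preset, div in zip(presets, divisions):
        ticks += [(round(i * bar_s / div, 3), preset) for i in range(div)]
    return sorted(ticks)

# polyrhythm_ticks(2.0) ->
# [(0.0, 'A'), (0.0, 'B'), (0.5, 'B'), (0.667, 'A'), (1.0, 'B'), (1.333, 'A'), (1.5, 'B')]
```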

With rhythm, it is about locking it in.

With rhythm, it is about locking it in.  It’s important that I can control my delays and rhythm processes so I can have direct interaction with the rhythm of other musicians I am playing with (or that I make a deliberate choice not to do so).

Chuck, a performance I like very much by Shackle (Anne La Berge on flute & electronics and Robert van Heumen on laptop-instrument), which does many of the things I have written about here.

Feedback Smears / Beautiful Madness

Filters and delays are always interconnected and feedback is the connective tissue.

As we have been discussing, filters and delays are always interconnected, and feedback is the connective tissue. I make liberal use of feedback with Doppler shift (interpolating delays) for weird pitch-shifts, and I use feedback to create resonance (with short delays) or to quickly build up density or texture when using longer delays. With pitch-shift, as mentioned above, feedback can create symmetrical arpeggiated movement of the original pitch difference. And feedback is just fun because it’s, well, feedback! It’s slightly dangerous and uncontrollable, and brings lovely surprises. That being said, I use a compressor or have a kill-switch at hand so as not to blow any speakers or lose any friends.

David Behrman: Wave Train (1966)

A recording of me with Hans Tammen’s Third Eye Orchestra.  I am using only a phaser on my microphone and lots of feedback to create this sound, and try to keep the timing with the ensemble manually.

Here are some strategies for using live processing that I hope will be useful.

Are you processing yourself and playing solo?

Do any transformation, go to town!

The processes you choose can be used to augment your instrument, or to create an independent voice. You might want to create algorithms that can operate independently, especially for solo performing, so that some things will happen on their own.

Are you playing in an ensemble, but processing your own sound?

What frequencies / frequency spaces are already being used?
Keep control over timbre and volume at all times to shape your sound.
Keep control of your overlap into other players’ sound (reverb, long delays, noise).

Keep control over the rhythm of your delays, and your reverb.  They are part of the music, too.

Are you processing someone else’s sound?

Make sure your transformations maintain the separate sonic identities of the other players and of your own sound, as I have been discussing in these posts.

Build an instrument/setup that is playable and flexible.

Create some algorithms that can operate independently.

How to be heard / How to listen: redux

  • If my performer is playing something static, I feel free to make big changes to their sound.
  • If my live performer is playing something that is moving or changing (in pitch, timbre, or rhythm), I choose to either create something static out of their sound, or I choose to move differently (contrast their movement by moving faster or slower or in a different direction, or work with a different parameter). This can be as simple as slowing down my playback speed.
  • If my performer is playing long tones on the same pitch, or a dense repeating or legato pattern, or some kind of broad spectrum sound, I might filter it, or create glissando effects with pitch-shifts ramping up or down.
  • If my performer is playing short tones or staccato, I can use delays or live-sampling to create rhythmic figures.
  • If my performer is playing short bursts of noise, or sounds with sharp fast attacks, that is a great time to play super short delays with a lot of feedback, or crank up a resonant filter to ping it.
  • If they are playing harmonic/focused sound with clear overtones, I can mess it up with all kinds of transformations, but I’ll be sure to delay it / displace it.

When you are done, know when to turn it off.

In short and in closing: Listen to the sound.  What is static? Change it! Do something different.   And when you are done, know when to turn it off.

On “Third Eye” from Bitches Re-Brewed (2004) by Hans Tammen, I’m processing saxophonist Martin Speicher

Suggested further reading

Michel Chion (translated by Claudia Gorbman): Audio-Vision: Sound on Screen (Columbia University Press, 1994)
(Particularly his chapter, “The Three Listening Modes” pp. 25–34)

Dave Hunter: “Effects Explained: Modulation—Phasing, Flanging, and Chorus” (Gibson website, 2008)

Dave Hunter: “Effects Explained: Overdrive, Distortion, and Fuzz” (Gibson website, 2008)