Why bother replicating environmental sounds through electronic music synthesis when recording something is faster and more accurate? What is the point of recreating something when that thing already exists? To these questions, I have a philosophical answer and a practical answer.
On the philosophical side, fabricating simulacra of the sounds around us is at its core a meditative process, built equally around practices of listening and analysis. It pays respect to the omnipresence of the invisible and honors the complexity of seemingly simple things. It unlocks new techniques for interaction with our instruments and enriches our experience of the world apart from them: “what makes up that sound” becomes something of a walking mantra impressing itself on everything you hear.
On the practical side, a recording is a life-like portrait, fixed and unchanging. It denies us the agency to restructure the world it captures. It relegates our creative interactions to the realm of post-processing (i.e., filtering, adding reverb, etc.) to emphasize or hide aspects of the events captured on tape.
The technique I’ll explain in this article takes the opposite approach: utilizing filtering, reverb, etc. as foundational elements for creating real-world portraiture while retaining the freedom of dream-logic malleability. Can you record the sound of a tin room in which a prop plane idles while its engine keeps changing size? Maybe. Can you synthesize it? Definitely.
Approaching a sound with the goal of recreating it is like listening to an exploded diagram, where a sonic totality is divided into components and considered individually. It is with an ear to this deliberate listening that I share with you words that have guided my work for the past decade, passed along to me by the great Bob Snyder, a Chicago-based artist, educator, and friend, in the form of his “Ear Training” synthesis exercises. He started with a simple question through which the components of any sound can be observed, one that serves as a roadmap for from-scratch fabrication: “Is a sound noisy or tonal, and is its movement (if it has any) regular or irregular?”
Let’s do a quick exercise: listen to a sound, any sound (a baby crying, a phone ringing), and ask yourself: can I hum it? Trace the movement of the sound with your hand in the air and observe: is it rising and falling in a pattern? The answers to these questions point toward the equipment needed to recreate it. If the sound is tonal (if you can hum it), select an oscillator; if it isn’t, choose a noise generator. There are of course plenty of sounds that have both (a howling wind, the word “cha,” etc.), but for this initial thought experiment choose the tone or noise source that best fits the sound’s dominant component.
Next, is something about the sound changing? It could be its amplitude, its pitch, its timbre, etc., but if you find yourself tracing out this motion with your hand, note how your hand is moving: regularly (up and down, like a car alarm) or less regularly (like shoes clanking away in a dryer). A repeating motion would point toward a looping, cyclical modulator (a low frequency oscillator, a sequencer, etc.), while irregular motion would indicate something either noise-based or a mixture of otherwise unrelated things. Either jot these observations down or keep them in your head, whatever works best for you—the important thing is to remain cognizant of them as they accumulate.
To recreate a sound from scratch is to assemble these observations as discrete instructional steps. Try not to get bogged down by the totality of the sound itself. Instead focus on these component parts: the sound is nothing more than a list of them in aggregate.
Start with the basics—tone or noise, and what about it is changing—and slowly zoom in on the details from there. Wind blowing through a grove of trees is noisy and irregular. Sometimes the leaves rustle with more treble, sometimes with more mid-range. These various noisy timbres seem to happen sequentially, rather than simultaneously, as if the branches pushed one way sound different than when the wind changes direction and pushes them the other, and so on. Study the sound, note these characteristics, and think of your observations as a decoder ring.
Hopefully this provides something of an overview of the opportunities that are possible in synthesizing environmental sounds and lays out some of the aspects of sound to focus on in your listening. Now let’s try our hand at a concrete example and patch something up!
I’d like to synthesize the sounds of the beach, in particular a memory I have of an afternoon spent there as a child. We’ll begin with the sound of ocean waves from the listening perspective of the shoreline. It’s low tide and the surf is mild. The sun hangs in the air, lazily.
Once we have a working version of our central sound component, I find it helpful to surround it with supporting contextual sonics. These reinforce our creation’s place in this fabricated soundscape and allow for a degree of set-dressing whose details are entirely ours to decide. Are these ocean waves happening on a beach or are they crashing in an office? Those decisions are executed through the inclusion of these background characters.
For this patch, I’ll play it straight and set the sound stereotypically. To create the sense of a shoreline, the focus will be on a pair of hallmarks—things you might hear (and in this case things I remember hearing) while sitting on the beach and listening to the waves: the dull roar of the ocean and the whipping hiss of the wind.
In tuning these sounds I’ll be utilizing Low and High Pass filters, and doing so with an ear for how each filter type represents distance: using Low Pass filters for sounds that are far away (and whose top end has rolled off), and High Pass filters for sounds that are close-up (and whose top end is accentuated). Additionally, setting the relative level of these sounds against each other paints a portrait of attention: the sounds being focused on (in this case the waves) can seem louder than their neighbors (the wind, the ocean), and should that observation shift for any reason this balance can be adjusted accordingly.
Finally, the addition of narrative elements can lend to this sound-portrait some much-appreciated variety: if the background is always there, the things that come and go can pull us into a far more immersive listening experience.
To illustrate this point we’ll create the sound of a single-passenger plane in flight, passing overhead. Unlike our wave, wind and ocean patches, this one is definitely hummable and will require tone sources to synthesize. While there are myriad ways to go about recreating engine sonics, each essentially contains at least an oscillator and at least some timbral complexity, especially if that engine is full of moving parts! The aspects that you choose to focus on in your own engine synthesis work will depend greatly on your listening work: what about the sound jumps out to you? What is essential? In the case of the single-passenger plane, I’ll be celebrating its beat-frequency-like movement, its stereo position adjustments and the Doppler Effect that occurs as it passes from one side of the beach to the other.
Now that we have our waves, our environment and our wildcard narrative element, let’s combine them into a performance. The world we create in the mixing of these sounds is at any point re-definable: on a whim the ocean can become tiny, the wind can whip itself up into a terrifying wall, the waves can pause and hold mid-crash. While the example illustrated below is one that tilts towards accuracy it can at any moment morph into something else entirely: a far more fantastical collage of sonic impossibilities or simply the next memory that comes to mind. The fluidity of the portrait is entirely yours to decide.
Like any skill, decoding and fabricating environmental sounds is an exercise that rewards practice. I encourage you to start as soon as you finish this article. Close your eyes and, whatever you hear or imagine first, ask yourself: what makes up that sound? Thanks for listening.
I have been writing all this month about how a live sound processing musician could develop an electroacoustic musicianship—and learn to predict musical outcomes for a given sound and process—just by learning a few things about acoustics/psychoacoustics and how some of these effects work. Coupled with some strategies about listening and playing, this can make it possible for the live processor to create a viable “instrument.” Even when processing the sounds of other musicians, it enables the live sound processing player to behave and react musically like any other musician in an ensemble and not be considered as merely creating effects.
In the previous post, we talked about the relationship between delays, feedback, and filters. We saw how the outcome of various configurations of delay times and feedback is directly affected by the characteristics of the sounds we put into them, whether they be short or long, resonant or noisy. We looked at pitch-shifts created by the Doppler effect in multi-tap delays and how one might use any of these things when creating live electroacoustic music using live sound processing techniques. As I demonstrated, it’s about the overlap of sounds, about operating in a continuum from creating resonance to creating texture and rhythm. It’s about being heard and learning to listen. Like all music. Like all instruments.
To finish out this month of posts about live sound processing, I will talk about a few more effects and some strategies for using them. I hope this information will be useful to live sound processors (who need to know how to be heard as a separate musical voice, and to stay flexible in their control) and to instrumentalists processing their own sound (it will speed the process of finding what sounds good on your instrument and help with predicting the outcomes of various processing techniques). It should be especially helpful in preparing for improvisation or any live processing project without the luxury of a long time schedule, and, I hope, for composers who are considering writing for live processing or creating improvisational settings for live electroacoustics.
Resonance / Filtering in More Detail
We saw in the last post how delays and filters are intertwined in their construction and use, existing in a continuum from short delays to long delays, producing rhythm, texture, and resonance depending on the length of the source audio events being processed, and the length of the delays (as well as feedback).
A special case is that of a very short delay (1-30ms) combined with lots of feedback (90% or more). The sound circulates so fast through the delay that it resonates at the speed of the circulation, producing clear pitches we can count on.
The effect is heard best with a transient (a very short sound such as a hand clap, the vocal fricatives “t” or “k”, or a snare drum hit). For instance, if I have a 1ms delay with lots of feedback and input a short transient sound, we will hear a ringing at 1000Hz, which is how fast that sound circulates through the delay (1000 times per second). This is roughly the same pitch as “B” on the piano (a little sharp). Interestingly, if we change the delay to 2ms, the pitch heard will be 500Hz (also a “B”, but an octave lower); 3ms yields an “E” (333Hz), 4ms another “B” (250Hz), 5ms a “G” (200Hz), and so on, in a kind of upside-down overtone series.
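The arithmetic above reduces to a one-line formula: the resonant frequency is 1000 divided by the delay time in milliseconds. Here is a small illustrative Python sketch (the helper name is my own, not from any audio library) that also finds the nearest equal-tempered note name:

```python
import math

NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def delay_to_pitch(delay_ms):
    """Resonant frequency of a short delay with heavy feedback, plus the
    nearest equal-tempered note name (A4 = 440 Hz)."""
    freq = 1000.0 / delay_ms                 # one circulation per delay period
    semitones = 12 * math.log2(freq / 440.0)  # distance from A4 in semitones
    name = NAMES[round(semitones) % 12]
    return freq, name

for ms in (1, 2, 3, 4, 5):
    print(ms, "ms ->", delay_to_pitch(ms))
```

Running it reproduces the series above: 1ms and 2ms land on B’s, 3ms on an E, 4ms on another B, 5ms on a G.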
Karplus-Strong Algorithm / Periodicity Pitch
A very short delay combined with high feedback resembles physical modeling synthesis techniques, which are very effective for simulating plucked string and drum sounds. One such method, the Karplus-Strong Algorithm, consists of a recirculating delay line with a filter in its feedback loop. The delay line is filled with samples of noise. As the samples recirculate through the filter in the feedback loop, the samples that are passed through the delay line create a “periodic sample pattern” which is directly related to how many samples there are. Even though the input signal is pure noise, the algorithm creates a complex sound with pitch content that is related to the length of the delay. “Periodicity pitch” has been well studied in the field of psychoacoustics, and it is known that even white noise, if played with a delayed copy of itself, will have pitch. This is true even if it is sent separately to each ear. The low pass filter in the feedback loop robs the noise of a little of its high frequency components at each pass through the circuit, replicating the acoustical properties of a plucked string or drum.
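The algorithm described above can be sketched in a few lines of Python. This is a minimal, non-optimized rendering with the feedback-loop filter reduced to a simple two-sample average, not a production implementation:

```python
import random

def karplus_strong(freq, sr=44100, dur=1.0):
    """Minimal Karplus-Strong pluck: a delay line filled with noise,
    recirculated through a gentle low-pass (two-sample average)."""
    n = int(sr / freq)                      # delay length in samples sets the pitch
    buf = [random.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for _ in range(int(sr * dur)):
        first = buf.pop(0)
        # averaging adjacent samples is the low-pass in the feedback loop;
        # it robs a little high end on every pass, like a decaying string
        buf.append(0.5 * (first + buf[0]))
        out.append(first)
    return out

random.seed(1)                              # deterministic noise for the demo
samples = karplus_strong(220.0, dur=0.5)    # a half-second "pluck" near 220 Hz
```

Even though the buffer starts as pure noise, the recirculation imposes a periodic pattern whose pitch is set by the buffer length, and the averaging makes it decay like a plucked string.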
If we set up a very short delay and use lots of feedback, and input any short burst of sound—a transient, click, or vocal fricative—we can get a similar effect of a plucking sound or a resonant click. If we input a longer sound at the same frequency as what the delay is producing (or at multiples of that frequency), those overtones will be accentuated, in the same way some tones are louder when we sing in the shower, because they are being reinforced. The length of the delay determines the pitch, and the feedback amount (along with any filter we use in the feedback loop) determines the sustain and length of the note.
Filtering & Filter Types
Besides any types of resonance we might create using short delays, there are also many kinds of audio filters we might use for any number of applications including live sound processing: Low Pass Filters, High Pass Filters, etc.
But by far the most useful tools for creating a musical instrument out of live processing are resonant filters, specifically BandPass and Comb filters, so let’s focus on those. Filters with sharp cutoffs also boost certain frequencies near their cutoff points to be louder than the input; this added resonance results from the sharp cutoff. BandPass filters allow us to “zoom in” on one region of a sound’s spectrum and reject the rest. Comb filters, created when a delayed copy of a sound is added to itself, cancel out many evenly spaced regions (“teeth”) of the spectrum, creating a characteristic sound.
The primary elements of a BandPass filter we would want to control are center frequency, bandwidth, and filter Q (defined as center frequency divided by bandwidth, but which we can just consider to be how narrow or “sharp” the peak is, i.e. how resonant it is). When the Q is high (very resonant), we can use this property to create or underscore certain overtones in a sound that we want to bring out or experiment with.
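The Q arithmetic is simple enough to sketch in Python. These helpers are illustrative (my own names, and `band_edges` uses a simple symmetric approximation rather than the geometric centering of real filter designs):

```python
def bandpass_q(center_hz, bandwidth_hz):
    """Q = center frequency / bandwidth: higher Q means a narrower,
    more resonant peak."""
    return center_hz / bandwidth_hz

def band_edges(center_hz, q):
    """Approximate lower/upper edges of the peak for a given Q
    (simple symmetric version)."""
    bw = center_hz / q
    return center_hz - bw / 2.0, center_hz + bw / 2.0

print(bandpass_q(1000, 50))    # a 50 Hz-wide peak at 1 kHz: Q = 20, quite resonant
print(band_edges(1000, 20.0))  # roughly 975 Hz to 1025 Hz
```

The intuition: for the same center frequency, raising Q shrinks the band that passes, which is exactly the “zooming in” on one overtone region described above.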
Phasing / Flanging / Chorus: These are all filtering-type effects, using very short, automatically varying delay times. A phase-shifter delays the sound by less than one cycle, cancelling out some frequencies through the overlap and producing a non-uniform but comb-like filter. A flanger, which sounds a bit more extreme, uses delays of around 5-25ms, producing a more uniform comb filter (evenly spaced peaks and troughs in the spectrum). It is named after the original practice of audio engineers, who would press down on one reel (flange) of an analog tape deck, slowing it down slightly as it played nearly in sync with an identical copy of the audio on a second tape deck. Chorus uses even longer delay times and multiple delayed copies of the sound.
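Since the comb filter here comes from mixing a sound with a delayed copy of itself, its cancellations fall at odd multiples of half the delay’s base frequency. A short illustrative Python sketch (hypothetical helper name) shows where a flanger’s notches land for a given delay time:

```python
def comb_notches(delay_ms, max_hz=20000.0):
    """Cancellation frequencies of a comb filter formed by adding a sound
    to a copy of itself delayed by delay_ms: odd multiples of the
    half-period frequency."""
    base = 1000.0 / (2.0 * delay_ms)    # first notch, in Hz
    notches = []
    k = 1
    while k * base <= max_hz:
        notches.append(k * base)
        k += 2                           # odd multiples only
    return notches

print(comb_notches(5.0))   # a 5 ms delay: notches at 100, 300, 500 ... Hz
```

Sweeping the delay time, as a flanger’s LFO does, slides this whole evenly spaced set of notches up and down the spectrum, which is the characteristic “jet plane” sweep.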
A tutorial on Phasing Flanging and Chorus
For my purposes, as a live processor trying to create an independent voice in an improvisation, I find these three effects most useful if I treat them the same as filters, except that, since they are built on delays I can change, I can also increase or decrease the delay times to get a Doppler effect, or play with feedback levels to accentuate certain tones.
Distortion – From my perspective, whatever methods are used to get distortion add and subtract overtones from our sound, so for my live processing purposes, I use them the same way I would use filters—as non-temporal transformations. Below is a gorgeous example of distortion, not used on a guitar. The only instruction in the score for the electronics is to gradually bring up the distortion in one long crescendo. I ran the electronics for the piece a few times in the ‘90s for cellist Maya Beiser, and got to experience how strongly the overtones pop out because of the distortion pedal, and move around nearly on their own.
I once heard composer and electronic musician, Nic Collins say that to make experimental music one need only “play it slow, play it backwards.” Referring to pre-recorded sounds, these are certainly time-honored electroacoustic approaches borne out of a time when only tape recorders, microphones, and a few oscillators were used to make electronic music masterpieces.
For live processing of sound, pitch-shift and/or time-stretch continue to be simple and valuable processes. Time compression and pitch-shift are connected by physics: sounds played back slower are correspondingly lower in pitch, and sounds played back faster are higher. (With analog tape or a turntable, if you play a sound back at twice the speed, it plays back an octave higher, because the soundwaves pass twice as fast, doubling the frequency.)
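That coupled speed/pitch relationship reduces to a one-line formula. A quick Python sketch (illustrative function name of my own):

```python
import math

def speed_to_semitones(speed_ratio):
    """Pitch change, in semitones, from playing back at speed_ratio times
    the original speed on a coupled system (tape, turntable)."""
    return 12.0 * math.log2(speed_ratio)

print(speed_to_semitones(2.0))   # double speed: an octave up
print(speed_to_semitones(0.5))   # half speed: an octave down
```

Any in-between ratio gives an in-between interval, which is why variable-speed playback glides through all the microtones rather than stepping through a scale.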
This relationship between speed of playback and time-stretch was decoupled in the mid-‘90s, when faster computers, realtime spectral analysis, and other methods made it possible to do one without the other. That is now the norm. In much of the commercial music software my students use, it is possible to slow down a sound without changing its pitch (certainly useful for changing the tempo of a song with pre-recorded acoustic drums), and being able to pitch-shift or Autotune a voice without changing its speed is also a very useful tool for commercial production. These decoupled methods (with names like “Warp,” “Flex,” etc.) are often based on granular synthesis or phase vocoders, each of which adds its own sonic residue (essentially errors or noises endemic to the method, especially at extreme parameter settings). Sometimes these mistakes, noises, and glitch sounds are useful and fun to work with, too.
An example of making glitch music with Ableton’s Warp mode (their pitch-shift with no time-compression/expansion mode).
Some great work by Philip White and Ted Hearne using Autotune gone wild on their record R We Who R We
Justin Bieber 800% slower (using PaulStretch extreme sound stretch) is a favorite of mine, but trying to use a method like this for a performance (even if it were possible in real-time) might be a bit unwieldy and make for a very long performance, or very few notes performed. Perhaps we could just treat this like a “freeze” delay function for our purposes in this discussion. Nevertheless, I want to focus here on old-school, time-domain, interconnected pitch-shift and playback-speed changes, which are still valuable tools.
I am quite surprised at how many of my current students have never tried slowing down the playback of a sound in realtime. It’s not easy to do with their software in realtime, and some have never had access to a variable speed tape recorder or a turntable, and so they are shockingly unfamiliar with this basic way of working. Thankfully there are some great apps that can be used to do this and, with a little poking around, it’s also possible to do using most basic music software.
A Max patch demo of changing playback speeds and reversing various kinds of sound.
There are very big differences in what happens when pitch-shifting various kinds of sounds (or changing speed or direction of playback). The response of speech-like sounds (with lots of formants, pitch, and overtone changes within the sound) differs from what happens to string-like (plucked or bowed) or percussive sounds. Some sound nearly the same when reversed, some not. It is a longer conversation to discuss global predictions about what the outcome of our process will be for every possible input sound (as we can more easily do with delays/filters/feedback) but here are a few generalizations.
Strings can be pitch-shifted up or down, and sound pretty good, bowed and especially plucked. If the pitch-shift is done without time compression or expansion, then the attack will be slower, so it won’t “speak” quickly in the low end. A vibrato might get noticeably slow or fast with pitch-changes.
Pitch-shifting a vocal sound up or down can create a much bigger and iconic change in the sound, personality, or even gender of the voice. Pitching a voice up we get the iconic (or annoying) sound of Alvin and the Chipmunks.
Pitch-shifting a voice down, we get slow slurry speech sounding like Lurch from the Addams Family, or what’s heard in all the DJ Screw’s chopped and screwed mixtapes (or even a gender change, as in John Oswald’s Plunderphonics Dolly Parton think piece from 1988).
John Oswald: Pretender (1988) featuring the voice of Dolly Parton
But if the goal is to create a separate voice in an improvisation, I would prefer to pitch-shift the sound, then also put it through a delay, with feedback. That way I can create sound loops of modulated arpeggios moving up and up and up (or down, or both) in a symmetrical movement using the original pitch interval difference (not just whole tone and diminished scales, but everything in between as well). Going up in pitch gets higher until it’s just shimmery (since overtones are gone as it nears the limits of the system). Going down in pitch gets lower and the sound also gets slower. Rests and silences are slow, too. In digital systems, the noise may build up as some samples must be repeated to play back the sound at that speed. These all can relate back to Hugh Le Caine’s early electroacoustic work Dripsody for variable speed tape recorder (1955) which, though based on a single sample of a water drop, makes prodigious use of ascending arpeggios created using only tape machines.
Hugh Le Caine: Dripsody (1955)
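The symmetrical arpeggio movement described above—a pitch-shifter inside a feedback delay, re-shifting the sound by the same interval on every pass—can be sketched numerically. This is an illustrative Python helper of my own that ignores amplitude decay and the system’s frequency limits:

```python
def feedback_arpeggio(start_hz, interval_semitones, passes):
    """Frequencies heard when a pitch-shifter sits inside a feedback delay:
    every pass through the loop shifts the sound again by the same
    interval, climbing (or falling) in equal steps."""
    freqs = [start_hz]
    for _ in range(passes):
        freqs.append(freqs[-1] * 2.0 ** (interval_semitones / 12.0))
    return freqs

# shifting up a major third (+4 semitones) sweeps an augmented arpeggio:
print(feedback_arpeggio(220.0, 4, 3))   # ~220, 277.2, 349.2, 440 Hz
```

Because the interval can be any fractional number of semitones, the loop traces not just whole-tone or diminished symmetries but everything in between, as noted above.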
Which brings me to the final two inter-related topics of these posts—how to listen and how to be heard.
How to Listen
Acousmatic or Reduced listening – The classic discussion by Pierre Schaeffer (taken up in the writings of Michel Chion) is where I start with every group of students in my Electronic Music Performance classes. We need to be able to hear the sounds we are working on for their abstracted essences. This is in sharp contrast to the normal listening we do every day, which Schaeffer called causal listening (what is the sound source?) and semantic listening (what does the sound mean?).
We learn to describe sounds in terms of their pitch (frequency), volume (amplitude), and tone/timbre (spectral qualities). Very importantly, we also listen to how these parameters change over time and so we describe envelope, or what John Cage called the morphology of the sound, as well as describing a sound’s duration and rhythm.
Listening to sound acousmatically directly impacts how we can make ourselves heard as a separate, viable “voice” using live processing. Much of what a live sound processor improvising in real time needs to control is contrast with the source sound. This requires knowledge of what the delays, filters, and processes will produce with many types of input sounds (what I have been doing here), a good technical setup that is easy to change quickly and reactively, active acousmatic listening, and good ear/hand coordination (as with every instrument) to find the needed sounds at the right moment. (And that just takes practice!)
All the suggestions I have made relate back to the basic properties we listen for in acousmatic listening. Keeping that in mind, let’s finish out this post with how to be heard, and specifically what works for me and my students, in the hope it will be useful for some of you as well.
How to be Heard
(How to Make a Playable Electronic Instrument Out of Live Processing)
Sound Decisions: Amplitude Envelope / Dynamics
A volume pedal, or some way to control volume quickly, is the first tool I need in my setup, and the first thing I teach my students. Though useful for maintaining the overall mix, more importantly it enables me to shape the volume and subtleties of my sound to be different than that of my source audio. Shaping the envelope/dynamics of live-processed sounds of other musicians is central to my performing, and an important part of the musical expression of my sound processing instrument. If I cannot control volume, I cannot do anything else described in these blog posts. I use volume pedals and other interfaces, as well as compressor/limiters for constant and close control over volume and dynamics.
To be heard when filtering or pitch-shifting with the intention of being perceived as a separate voice (not just an effect) requires displacement of some kind. Filtering or pitch-shifting with no delay transforms the sound and gesture being played, but it does not create a new gesture, because both the original and the processed sound are taking up the same space, temporally, spectrally, or both. So we need to change the sound in some way to create contrast. We can do this by changing parameters of the filter (Q, bandwidth, or center frequency), or by delaying the sound with a long enough delay that we hear the processed version as a separate event. That delay time should be more than 50-100ms, depending on the length of the sound event. Shorter delays would just give us more filtering if the sounds overlap.
When filtering or pitch-shifting a sound, we will not create a second voice unless we displace it in some way. Think of how video feedback works: the displacement makes each copy easier to perceive.
Temporal displacement: We can delay the sound we are filtering (same as filtering a sound we have just delayed). The delay time must be long enough so there is no overlap and it is heard as a separate event. Pitch-shifts that cause the sound to play back faster or slower might introduce enough temporal displacement on their own if the shift is extreme.
Timbral displacement: If we create a new timbral “image” that is so radically different from the original, we might get away with it.
Changes over time / modulations: If we do filter sweeps, or change the pitch-shift in ways that contrast with what the instrument is doing, we can be heard better.
Contrast: If the instrument is playing long tones, then I would choose to do a filter sweep, or change delay times, or pitch-shift. This draws attention to my sound as a separate electronically mediated sound. This can be done manually (literally a fader), or as some automatic process that we turn on/off and then control in some way.
Below is an example of me processing Gordon Beeferman’s piano in an unreleased track. I am using very short delays with pitch-shift to create a hazy resonance of pitched delays and I make small changes to the delay and pitch-shift to contrast with what he does in terms of both timbre and rhythm.
Making it Easier to Play
Since I cannot possibly play or control more than a few parameters at once, and I am using a computer, I find it easier to create groupings of parameters, my own “presets” or “states” that I can move between, knowing I can get to them as I want to.
Especially if I play solo, sometimes it is helpful if some things can happen on their own. (After all, I am using a computer!) If possible, I will set up a very long trajectory or change in the sound, for instance, a filter-sweep, or slow automated changes to pitch shifts. This frees up my hands and mind to do other things, and assures that not everything I am doing happens in 8-bar loops.
I cannot express strongly enough how important control over rhythm is to my entire concept. It is what makes my system feel like an instrument. My main modes of expression are timbre and rhythm. Melody and direct expression of pitch using electronics are slightly less important to me, though the presence of pitches is never to be ignored. I choose rhythm as my common ground with other musicians. It is my best method to interact with them.
Nearly every part of my system allows me to create and change rhythms by altering delay times on-the-fly, or simply tapping/playing the desired pulse that will control my delay times or other processes. Being able to directly control the pulse or play sounds has helped me put my body into my performance, and this too helps me feel more connected to my setup as an instrument.
Even using an LFO (Low Frequency Oscillator) to make tremolo effects and change volume automatically can be interesting, and I would consider it part of my rhythmic expression (the speed of which I’d want to be able to control while performing).
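A tremolo of this kind is just a slow sine wave scaling the amplitude. Here is a minimal Python sketch (illustrative and offline rather than realtime; the function name is my own):

```python
import math

def tremolo(samples, rate_hz, depth, sr=44100):
    """Scale each sample by a slow sine LFO; depth 0.0 (no effect)
    through 1.0 (full amplitude swing)."""
    out = []
    for i, s in enumerate(samples):
        # LFO swings between 0 and 1 at rate_hz
        lfo = 0.5 * (1.0 + math.sin(2.0 * math.pi * rate_hz * i / sr))
        out.append(s * (1.0 - depth + depth * lfo))
    return out

wobbled = tremolo([1.0] * 44100, rate_hz=5.0, depth=0.8)  # one second of 5 Hz wobble
```

In performance terms, `rate_hz` is the parameter worth mapping to a controller, since the speed of the wobble is the rhythmic statement.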
I am strongly attracted to polyrhythms. (Not surprisingly, my family is Greek, and there was lots of dancing in odd time signatures growing up.) Because it is so prevalent in my music, I implemented a mechanism that allows me to tap delay times and rhythms that are complexly related to what is happening in the ensemble at that moment. After pianist Borah Bergman once explained a system he thought I could use for training myself to perform complex rhythms, I created a Max patch to implement what he taught me, and I started using this polyrhythmic metronome to control the movement between any two states/presets quickly, creating polyrhythmic electroacoustics. Other rhythmic control sources I have used include Morse Code as rhythm, algorithmic processes, a rhythm engine influenced by North Indian Classical Tala, and whatever else interests me for a particular project.
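One simple way to derive polyrhythmic delay times from a tapped pulse (my own illustrative formula, not a description of Bergman’s system or my actual Max patch) is to fit a number of evenly spaced events into the span of a number of pulses:

```python
def polyrhythm_delay(pulse_ms, against, per):
    """Delay/event spacing for an `against`-over-`per` polyrhythm locked to
    a tapped pulse: `against` evenly spaced events fill the span of
    `per` pulses."""
    return pulse_ms * per / against

print(polyrhythm_delay(500.0, 3, 2))   # 3-over-2 at a 500 ms pulse: ~333.3 ms
print(polyrhythm_delay(500.0, 5, 4))   # 5-over-4: 400 ms
```

Tapping the pulse live and feeding the result to a delay line keeps the electronics locked to the ensemble while moving in a contrasting subdivision.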
With rhythm, it is about locking it in. It’s important that I can control my delays and rhythm processes so I can have direct interaction with the rhythm of other musicians I am playing with (or that I make a deliberate choice not to do so).
Chuck, a performance I like very much by Shackle (Anne La Berge on flute & electronics and Robert van Heumen on laptop-instrument) which does many of the things I have written about here.
Feedback Smears / Beautiful Madness
As we have been discussing, filters and delays are always interconnected, and feedback is the connective tissue. I make liberal use of feedback with Doppler shift (interpolating delays) for weird pitch-shifts, and I use feedback to create resonance (with short delays) or to quickly build up density and texture with longer delays. With pitch-shift, as mentioned above, feedback can create symmetrical arpeggiated movement of the original pitch difference. And feedback is just fun because it’s, well, feedback! It’s slightly dangerous and uncontrollable, and brings lovely surprises. That being said, I use a compressor or keep a kill-switch at hand so as not to blow any speakers or lose any friends.
David Behrman: Wave Train (1966)
A recording of me with Hans Tammen’s Third Eye Orchestra. I am using only a phaser on my microphone and lots of feedback to create this sound, and try to keep the timing with the ensemble manually.
Here are some strategies for using live processing that I hope you will find useful.
Are you processing yourself and playing solo?
Do any transformation, go to town!
The processes you choose can be used to augment your instrument or to create an independent voice. You might want to create algorithms that can operate independently, especially for solo performing, so some things will happen on their own.
Are you playing in an ensemble, but processing your own sound?
What frequencies / frequency spaces are already being used?
Keep control over timbre and volume at all times to shape your sound.
Keep control of your overlap into other players’ sound (reverb, long delays, noise)
Keep control over the rhythm of your delays, and your reverb. They are part of the music, too.
Are you processing someone else’s sound?
Make sure your transformations maintain the separate sonic identity of other players and your own sound, as I have been discussing in these posts.
Build an instrument/setup that is playable and flexible.
Create some algorithms that can operate independently
How to be heard / How to listen: redux
If my performer is playing something static, I feel free to make big changes to their sound.
If my live performer is playing something that is moving or changing (in pitch, timbre, or rhythm), I choose either to create something static out of their sound, or to move differently (contrasting their movement by going faster, slower, or in a different direction, or working with a different parameter). This can be as simple as slowing down my playback speed.
If my performer is playing long tones on the same pitch, or a dense repeating or legato pattern, or some kind of broad spectrum sound, I might filter it, or create glissando effects with pitch-shifts ramping up or down.
If my performer is playing short tones or staccato, I can use delays or live-sampling to create rhythmic figures.
If my performer is playing short bursts of noise, or sounds with sharp fast attacks, that is a great time to play super short delays with a lot of feedback, or crank up a resonant filter to ping it.
If they are playing harmonic/focused sound with clear overtones, I can mess it up with all kinds of transformations, but I’ll be sure to delay it / displace it.
When you are done, know when to turn it off.
In short and in closing: Listen to the sound. What is static? Change it! Do something different. And when you are done, know when to turn it off.
On “Third Eye” from Bitches Re-Brewed (2004) by Hans Tammen, I’m processing saxophonist Martin Speicher
Suggested further reading
Michel Chion (translated by Claudia Gorbman): Audio-Vision: Sound on Screen (Columbia University Press, 1994)
(Particularly his chapter, “The Three Listening Modes” pp. 25–34)