
Resonating Filters: How to Listen and Be Heard

I have been writing all this month about how a live sound processing musician could develop an electroacoustic musicianship—and learn to predict musical outcomes for a given sound and process—just by learning a few things about acoustics/psychoacoustics and how some of these effects work. Coupled with some strategies about listening and playing, this can make it possible for the live processor to create a viable “instrument.” Even when processing the sounds of other musicians, it enables the live sound processing player to behave and react musically like any other musician in an ensemble and not be considered as merely creating effects. 

In the previous post, we talked about the relationship between delays, feedback, and filters. We saw how the outcome of various configurations of delay times and feedback is directly affected by the characteristics of the sounds we put into them, whether they be short or long, resonant or noisy. We looked at pitch-shifts created by the Doppler effect in multi-tap delays, and how one might use any of these things when creating live electroacoustic music using live sound processing techniques. As I demonstrated, it’s about the overlap of sounds, about operating in a continuum from creating resonance to creating texture and rhythm. It’s about being heard and learning to listen. Like all music. Like all instruments.

It’s about being heard and learning to listen. Like all music. Like all instruments.

To finish out this month of posts about live sound processing, I will talk about a few more effects, and some strategies for using them. I hope this information will be useful to live sound processors (because we need to know how to be heard as a separate musical voice and to stay flexible with our control, especially in live sound processing). It should also be useful to instrumentalists processing their own sound (because it will speed the process of finding what sounds good on your instrument and will help with predicting the outcomes of various sound processing techniques). It should be especially helpful when preparing for improvisation, or for any live processing project without the luxury of a long time schedule, and so too, I hope, for composers who are considering writing for live processing or creating improvisational settings for live electroacoustics.

Resonance / Filtering in More Detail

We saw in the last post how delays and filters are intertwined in their construction and use, existing in a continuum from short delays to long delays, producing rhythm, texture, and resonance depending on the length of the source audio events being processed, and the length of the delays (as well as feedback).

A special case is that of a very short delay (1-30ms) when combined with lots of feedback (90% or more).  The sound circulates so fast through the delay that it creates resonance at the speed of the circulation, creating clear pitches we can count on.

The effect is heard best with a transient (a very short sound such as a hand clap, vocal fricatives “t” or “k”, or a snare drum hit). For instance, if we have a 1ms delay with lots of feedback and input a short transient sound, we will hear a ringing at 1000Hz, which is how fast that sample is circulating through the delay (1000 times per second). This is roughly the same pitch as “B” on the piano (a little sharp). Interestingly, if we change the delay to 2ms, the pitch heard will be 500Hz (also “B” but an octave lower), 3ms yields “E” (333Hz), 4ms yields another “B” (250Hz), and 5ms a “G” (200Hz), and so on, in a kind of upside-down overtone series.
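The underlying math is simply frequency = 1 ÷ delay time. Here is a minimal Python sketch (an outside illustration of my own, not taken from these posts) that converts a delay time in milliseconds into its resonant frequency and nearest equal-tempered pitch:

```python
import math

A4 = 440.0
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def delay_to_pitch(delay_ms):
    """Resonant frequency of a short feedback delay and its nearest piano pitch."""
    freq = 1000.0 / delay_ms                  # one full trip through the delay per period
    midi = 69 + 12 * math.log2(freq / A4)     # convert Hz to a MIDI note number
    nearest = round(midi)
    name = NOTE_NAMES[nearest % 12] + str(nearest // 12 - 1)
    cents = 100 * (midi - nearest)            # how sharp or flat of that pitch we are
    return freq, name, cents

for ms in (1, 2, 3, 4, 5):
    freq, name, cents = delay_to_pitch(ms)
    print(f"{ms} ms -> {freq:.0f} Hz, ~{name} ({cents:+.0f} cents)")
```

Running it reproduces the series above: 1ms lands a little sharp of B5, 2ms an octave below that, 3ms near E4, and so on.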

Karplus-Strong Algorithm / Periodicity Pitch

A very short delay combined with high feedback resembles physical modeling synthesis techniques, which are very effective for simulating plucked string and drum sounds. One such method, the Karplus-Strong algorithm, consists of a recirculating delay line with a filter in its feedback loop. The delay line is filled with samples of noise. As those samples recirculate through the filter in the feedback loop, the delay line repeats them as a periodic pattern whose period is directly related to the number of samples in the line. Even though the input signal is pure noise, the algorithm creates a complex sound with pitch content that is related to the length of the delay. “Periodicity pitch” has been well studied in the field of psychoacoustics, and it is known that even white noise, if played with a delayed copy of itself, will have pitch. This is true even if the original and the delayed copy are sent separately to each ear. The low-pass filter in the feedback loop robs the noise of a little of its high-frequency content at each pass through the circuit, replicating the acoustical properties of a plucked string or drum.
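For readers who want to hear this in code, here is a minimal Karplus-Strong sketch in Python (my own illustration; the function name and parameter values are arbitrary), using a two-point average as the low-pass filter in the feedback loop:

```python
import numpy as np

def karplus_strong(freq=110.0, sr=44100, dur=2.0, feedback=0.996):
    """Minimal Karplus-Strong pluck: a noise burst recirculating through a delay line
    with a simple low-pass (two-point average) in the feedback loop."""
    n = int(sr / freq)                     # delay length in samples sets the pitch
    buf = np.random.uniform(-1, 1, n)      # fill the delay line with noise
    out = np.zeros(int(sr * dur))
    for i in range(len(out)):
        out[i] = buf[i % n]
        # average the current sample with its neighbor, scaled by the feedback amount
        buf[i % n] = feedback * 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out

pluck = karplus_strong(220.0)              # roughly an A3 pluck; write to a WAV to audition
```

Lowering the feedback value shortens the decay, exactly as described above: the filter and feedback together determine how long the “string” rings.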

If we set up a very short delay and use lots of feedback, and input any short burst of sound—a transient, click, or vocal fricative—we can get a similar effect of a plucking sound or a resonant click. If we input a longer sound at the same frequency as what the delay is producing (or at multiples of that frequency), then those overtones will be accentuated, in the same way some tones are louder when we sing in the shower, because they are being reinforced. The length of the delay determines the pitch, and the feedback amount (and any filter we use in the feedback loop) determines the sustain and length of the note.

Filtering & Filter Types

Besides any types of resonance we might create using short delays, there are also many kinds of audio filters we might use for any number of applications including live sound processing: Low Pass Filters, High Pass Filters, etc.

A diagram of various filter types.

But by far the most useful tools for creating a musical instrument out of live processing are resonant filters, specifically the BandPass and Comb filters, so let’s just focus on those. Filters with sharp cutoffs also boost certain frequencies near their cutoff points to be louder than the input; this added resonance is a byproduct of the sharp cutoff. BandPass filters allow us to “zoom in” on one region of a sound’s spectrum and reject the rest. Comb filters, created when a delayed copy of a sound is added to itself, cancel out many evenly spaced regions (“teeth”) of the spectrum, creating a characteristic sound.

The most useful tools for creating a musical instrument out of live processing are resonant filters.

The primary elements of a BandPass filter that we would want to control would be center frequency, bandwidth, and Filter Q (which is defined as center frequency divided by bandwidth, but which we can just consider to be how narrow or “sharp” the peak is or how resonant it is).    When the “Q” is high (very resonant), we can make use of this property to create or underscore certain overtones in a sound that we want to bring out or to experiment with.
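As a rough illustration of that Q relationship (a sketch of my own in Python with scipy, not the tools used in these posts), here is a high-Q resonant peak filter pulling one narrow band out of white noise, the shower-resonance effect in miniature:

```python
import numpy as np
from scipy import signal

sr = 44100
center = 880.0                    # center frequency in Hz
bandwidth = 20.0                  # bandwidth in Hz
Q = center / bandwidth            # Q as defined above: center frequency / bandwidth

# Design a resonant peaking filter and run one second of white noise through it.
b, a = signal.iirpeak(center, Q, fs=sr)
noise = np.random.randn(sr)
ringing = signal.lfilter(b, a, noise)   # only the narrow band around 880 Hz survives
```

Narrowing the bandwidth raises Q and makes the surviving band ring longer and more pitch-like; widening it lowers Q and lets more of the noise through.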

Phasing / Flanging / Chorus: These are all filtering-type effects, using very short and automatically varying delay times. A phase-shifter delays the sound by less than one cycle (cancelling out some frequencies through the overlap and producing a non-uniform, but comb-like, filter). A flanger, which sounds a bit more extreme, uses delays of around 5-25ms, producing a more uniform comb filter (evenly spaced peaks and troughs in the spectrum). It is named after the original practice of audio engineers who would press down on one reel (flange) of an analog tape deck, slowing it down slightly as it played in near sync with an identical copy of the audio on a second tape deck. Chorus uses even longer delay times and multiple delayed copies of the sound.

A tutorial on Phasing Flanging and Chorus
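A bare-bones flanger can be sketched as a single delay line whose read position is slowly swept by an LFO, with a little feedback. The Python below is only an illustration of that idea (the parameter values are my own, and the integer-sample delay makes it non-interpolating, so it will produce some of the zipper artifacts discussed in the companion post on delays):

```python
import numpy as np

def flanger(x, sr=44100, min_ms=5.0, max_ms=15.0, rate_hz=0.25, feedback=0.4, mix=0.5):
    """Plain flanger sketch: the input is mixed with a copy of itself whose delay time
    is swept by a sine LFO, with a portion of the delayed signal fed back."""
    min_d, max_d = int(sr * min_ms / 1000), int(sr * max_ms / 1000)
    buf = np.zeros(max_d + 1)
    out = np.zeros(len(x))
    write = 0
    for i in range(len(x)):
        # the LFO sweeps the delay length between min_d and max_d samples
        lfo = 0.5 * (1 + np.sin(2 * np.pi * rate_hz * i / sr))
        delay = int(min_d + lfo * (max_d - min_d))
        delayed = buf[(write - delay) % len(buf)]
        buf[write] = x[i] + feedback * delayed      # feedback into the delay line
        out[i] = (1 - mix) * x[i] + mix * delayed   # dry/wet mix
        write = (write + 1) % len(buf)
    return out
```

Slow the LFO down and raise the feedback and it drifts toward the resonant, pitched territory described earlier; speed it up and it becomes a more obvious “jet plane” sweep.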

For my purposes, as a live processor trying to create an independent voice in an improvisation, I find these three effects most useful if I treat them the same as filters, except that, since they are built on delays I can change, I may also be able to increase or decrease the delay times and get a Doppler effect, or play with feedback levels to accentuate certain tones.

I use distortion the same way I would use a filter—as a non-temporal transformation.

Distortion

From my perspective, whatever methods are used to get distortion add and subtract overtones from our sound, so for my live processing purposes, I use them the same way I would use filters—as non-temporal transformations. Below is a gorgeous example of distortion, not used on a guitar. The only instruction in the score for the electronics is to gradually bring up the distortion in one long crescendo.  I ran the electronics for the piece a few times in the ‘90s for cellist Maya Beiser, and got to experience how strongly the overtones pop out because of the distortion pedal, and move around nearly on their own.

Michael Gordon: Industry

Pitch-Shift / Playback Speed Changes / Reversing Sounds

I once heard composer and electronic musician Nic Collins say that to make experimental music one need only “play it slow, play it backwards.” Referring to pre-recorded sounds, these are certainly time-honored electroacoustic approaches, born of a time when only tape recorders, microphones, and a few oscillators were used to make electronic music masterpieces.

For live processing of sound, pitch-shift and/or time-stretch continue to be simple and valuable processes.  Time compression and pitch-shift are connected by physics; sounds played back slower also are correspondingly lower in pitch and when played back faster are higher in pitch. (With analog tape, or a turntable, if you play a sound back at twice the speed, it plays back an octave higher because the soundwaves are playing back twice as fast, so it doubles the frequency.)
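The arithmetic behind that coupling is simply 12 × log2(speed ratio) semitones. A tiny Python illustration (my own, not from the post):

```python
import math

def speed_to_semitones(speed_ratio):
    """Tape-style playback: changing speed changes pitch by 12 * log2(ratio) semitones."""
    return 12 * math.log2(speed_ratio)

print(speed_to_semitones(2.0))    # +12.0  -> double speed = up an octave
print(speed_to_semitones(0.5))    # -12.0  -> half speed = down an octave
print(speed_to_semitones(1.5))    # ~+7.02 -> 1.5x speed = up roughly a perfect fifth
```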

The relationship between speed of playback and time-stretch was decoupled in the mid-‘90s.

This relationship between speed of playback and time-stretch was decoupled in the mid-‘90s, when faster computers, realtime spectral analysis, and other methods made it possible to more easily do one without the other. It is also now the norm. In much of the commercial music software my students use, it is possible to slow down a sound and not change its pitch (certainly more useful for changing tempo in a song with pre-recorded acoustic drums), and being able to pitch-shift or Autotune a voice without changing its speed is also a very useful tool for commercial production. These decoupled programs/methods (with names like “Warp”, “Flex”, etc.) are sometimes based on granular synthesis or phase vocoders, each of which adds its own sonic residue (essentially errors or noises endemic to the method when using extreme parameter settings). Sometimes these mistakes, noise, and glitch sounds are useful and fun to work with, too.

An example of making glitch music with Ableton’s Warp mode (their pitch-shift with no time-compression/expansion mode).

Some great work by Philip White and Ted Hearne using Autotune gone wild on their record R We Who R We

Justin Bieber 800% slower (using PaulStretch extreme sound stretch) is a favorite of mine, but trying to use a method like this for a performance (even if it were possible in real-time) might be a bit unwieldy and make for a very long performance, or very few notes performed. Perhaps we could just treat this like a “freeze” delay function for our purposes in this discussion. Nevertheless, I want to focus here on old-school, time-domain, interconnected pitch-shift and playback speed changes, which are still valuable tools.

I am quite surprised at how many of my current students have never tried slowing down the playback of a sound in realtime.  It’s not easy to do with their software in realtime, and some have never had access to a variable speed tape recorder or a turntable, and so they are shockingly unfamiliar with this basic way of working. Thankfully there are some great apps that can be used to do this and, with a little poking around, it’s also possible to do using most basic music software.

A Max patch demo of changing playback speeds and reversing various kinds of sound.

Some sounds sound nearly the same when reversed, some not.

There are very big differences in what happens when pitch-shifting various kinds of sounds (or changing speed or direction of playback).  The response of speech-like sounds (with lots of formants, pitch, and overtone changes within the sound) differs from what happens to string-like (plucked or bowed) or percussive sounds.  Some sound nearly the same when reversed, some not. It is a longer conversation to discuss global predictions about what the outcome of our process will be for every possible input sound (as we can more easily do with delays/filters/feedback) but here are a few generalizations.

Strings can be pitch-shifted up or down and sound pretty good, bowed and especially plucked. If the pitch-shift is done without compensating time compression or expansion, then shifting down also slows the attack, so the sound won’t “speak” quickly in the low end. A vibrato might get noticeably slow or fast with pitch changes.

Pitch-shifting a vocal sound up or down can create a much bigger and iconic change in the sound, personality, or even gender of the voice. Pitching a voice up we get the iconic (or annoying) sound of Alvin and the Chipmunks.

Pitch-shifting a voice down, we get slow, slurry speech sounding like Lurch from the Addams Family, or what’s heard in all of DJ Screw’s chopped and screwed mixtapes (or even a gender change, as in John Oswald’s Plunderphonics Dolly Parton think piece from 1988).

John Oswald: Pretender (1988) featuring the voice of Dolly Parton

But if the goal is to create a separate voice in an improvisation, I would prefer to pitch-shift the sound, then also put it through a delay, with feedback. That way I can create sound loops of modulated arpeggios moving up and up and up (or down, or both) in a symmetrical movement using the original pitch interval difference (not just whole tone and diminished scales, but everything in between as well). Going up in pitch gets higher until it’s just shimmery (since overtones are gone as it nears the limits of the system).  Going down in pitch gets lower and the sound also gets slower. Rests and silences are slow, too. In digital systems, the noise may build up as some samples must be repeated to play back the sound at that speed.  These all can relate back to Hugh Le Caine’s early electroacoustic work Dripsody for variable speed tape recorder (1955) which, though based on a single sample of a water drop, makes prodigious use of ascending arpeggios created using only tape machines.

Hugh Le Caine: Dripsody (1955)

Which brings me to the final two inter-related topics of these posts—how to listen and how to be heard.

How to Listen

Acousmatic or Reduced listening – The classic discussion by Pierre Schaeffer (taken up in the writings of Michel Chion) is where I start with every group of students in my Electronic Music Performance classes. We need to be able to hear the sounds we are working on for their abstracted essences.  This is in sharp contrast to the normal listening we do every day, which Schaeffer called causal listening (what is the sound source?) and semantic listening (what does the sound mean?).

We need to be able to hear the sounds we are working on for their abstracted essences.

We learn to describe sounds in terms of their pitch (frequency), volume (amplitude), and tone/timbre (spectral qualities).  Very importantly, we also listen to how these parameters change over time and so we describe envelope, or what John Cage called the morphology of the sound, as well as describing a sound’s duration and rhythm.

Listening to sound acousmatically can directly impact how we make ourselves heard as a separate, viable “voice” using live processing. So much of what a live sound processor improvising in real time needs is the ability to provide contrast with the source sound. This requires knowledge of what the delays, filters, and processes will produce with many types of possible input sounds (what I have been doing here); a good technical setup that is easy to change quickly and reactively; active acousmatic listening; and good ear/hand coordination (as with every instrument) to find the needed sounds at the right moment. (And that just takes practice!)

All the suggestions I have made relate back to the basic properties we listen for in acousmatic listening. Keeping that in mind, let’s finish out this post with how to be heard, and specifically what works for me and my students, in the hope it will be useful for some of you as well.

How to be Heard
(How to Make a Playable Electronic Instrument Out of Live Processing)

Sound Decisions: Amplitude Envelope / Dynamics

A volume pedal, or some way to control volume quickly, is the first tool I need in my setup, and the first thing I teach my students. Though useful for maintaining the overall mix, more importantly it enables me to shape the volume and subtleties of my sound to be different than that of my source audio. Shaping the envelope/dynamics of live-processed sounds of other musicians is central to my performing, and an important part of the musical expression of my sound processing instrument.  If I cannot control volume, I cannot do anything else described in these blog posts.  I use volume pedals and other interfaces, as well as compressor/limiters for constant and close control over volume and dynamics.

Filtering / Pitch-Shift (non-temporal transformations)

To be heard when filtering or pitch-shifting with the intention of being perceived as a separate voice (not just an effect) requires displacement of some kind. Filtering or pitch-shifting with no delay transforms the sound and gesture being played, but it does not create a new gesture, because both the original and the processed sound are taking up the same space, temporally or spectrally or both. So, we need to change the sound in some way to create some contrast. We can do this by changing parameters of the filter (Q, bandwidth, or center frequency), or by delaying the sound with a long enough delay that we hear the processed version as a separate event. That delay time should be more than 50-100ms, depending on the length of the sound event. Shorter delays would just give us more filtering if the sounds overlap.

  • When filtering or pitch shifting a sound, we will not create a second voice unless we displace it in some way. Think of how video feedback works: the displacement makes it easier to perceive.
  • Temporal displacement: We can delay the sound we are filtering (same as filtering a sound we have just delayed). The delay time must be long enough so there is no overlap and it is heard as a separate event. Pitch-shifts that cause the sound to play back faster or slower might introduce enough temporal displacement on their own if the shift is extreme.
  • Timbral displacement: If we create a new timbral “image” that is so radically different from the original, we might get away with it.
  • Changes over time / modulations: If we do filter sweeps, or change the pitch-shift in ways that contrast with what the instrument is doing, we can be heard better.
  • Contrast: If the instrument is playing long tones, then I would choose to do a filter sweep, or change delay times, or pitch-shift. This draws attention to my sound as a separate electronically mediated sound.  This can be done manually (literally a fader), or as some automatic process that we turn on/off and then control in some way.

Below is an example of me processing Gordon Beeferman’s piano in an unreleased track. I am using very short delays with pitch-shift to create a hazy resonance of pitched delays and I make small changes to the delay and pitch-shift to contrast with what he does in terms of both timbre and rhythm.

Making it Easier to Play

Saved States/Presets

I cannot possibly play or control more than a few parameters at once.

Since I cannot possibly play or control more than a few parameters at once, and I am using a computer, I find it easier to create groupings of parameters, my own “presets” or “states” that I can move between, knowing I can get to them when I want to.

Trajectories

Especially if I play solo, sometimes it is helpful if some things can happen on their own. (After all, I am using a computer!)  If possible, I will set up a very long trajectory or change in the sound, for instance, a filter-sweep, or slow automated changes to pitch shifts.   This frees up my hands and mind to do other things, and assures that not everything I am doing happens in 8-bar loops.

Rhythm

I cannot express strongly enough how important control over rhythm is to my entire concept. It is what makes my system feel like an instrument. My main modes of expression are timbre and rhythm.  Melody and direct expression of pitch using electronics are slightly less important to me, though the presence of pitches is never to be ignored. I choose rhythm as my common ground with other musicians. It is my best method to interact with them.

Nearly every part of my system allows me to create and change rhythms by altering delay times on-the-fly, or simply tapping/playing the desired pulse that will control my delay times or other processes.  Being able to directly control the pulse or play sounds has helped me put my body into my performance, and this too helps me feel more connected to my setup as an instrument.

Even using an LFO (Low Frequency Oscillator) to make tremolo effects and change volume automatically can be interesting, and I consider it part of my rhythmic expression (and its speed is something I want to be able to control while performing).

I am strongly attracted to polyrhythms. (Not surprisingly, my family is Greek, and there was lots of dancing in odd time signatures growing up.) Because it is so prevalent in my music, I implemented a mechanism that allows me to tap delay times and rhythms that are complexly related to what is happening in the ensemble at that moment.  After pianist Borah Bergman once explained a system he thought I could use for training myself to perform complex rhythms, I created a Max patch to implement what he taught me, and I started using this polyrhythmic metronome to control the movement between any two states/presets quickly, creating polyrhythmic electroacoustics. Other rhythmic control sources I have used include Morse code as rhythm, algorithmic processes, a rhythm engine influenced by North Indian classical tala, and whatever else interests me for a particular project.

With rhythm, it is about locking it in.

With rhythm, it is about locking it in.  It’s important that I can control my delays and rhythm processes so I can have direct interaction with the rhythm of other musicians I am playing with (or that I make a deliberate choice not to do so).

Chuck, a performance I like very much by Shackle (Anne La Berge on flute & electronics and Robert van Heumen on laptop-instrument) which does many of the things I have written about here.

Feedback Smears / Beautiful Madness

Filters and delays are always interconnected and feedback is the connective tissue.

As we have been discussing, filters and delays are always interconnected, and feedback is the connective tissue. I make liberal use of feedback with Doppler shift (interpolating delays) for weird pitch-shifts, I use feedback to create resonance (with short delays), and I use feedback to quickly build up density or texture when using longer delays. With pitch-shift, as mentioned above, feedback can create symmetrical arpeggiated movement of the original pitch difference. And feedback is just fun because it’s, well, feedback!  It’s slightly dangerous and uncontrollable, and brings lovely surprises. That being said, I use a compressor or have a kill-switch at hand so as not to blow any speakers or lose any friends.

David Behrman: Wave Train (1966)

A recording of me with Hans Tammen’s Third Eye Orchestra.  I am using only a phaser on my microphone and lots of feedback to create this sound, and try to keep the timing with the ensemble manually.

Here are some strategies for using live processing that I hope will be useful.

Are you processing yourself and playing solo?

Do any transformation, go to town!

The processes you choose can be used to augment your instrument or to create an independent voice. You might want to create algorithms that can operate independently, especially for solo performing, so some things will happen on their own.

Are you playing in an ensemble, but processing your own sound?

What frequencies / frequency spaces are already being used?
Keep control over timbre and volume at all times to shape your sound.
Keep control of your overlap into other players’ sound (reverb, long delays, noise)

Keep control over the rhythm of your delays, and your reverb.  They are part of the music, too.

Are you processing someone else’s sound?

Make sure your transformations maintain the separate sonic identities of the other players and of your own sound, as I have been discussing in these posts.

Build an instrument/setup that is playable and flexible.

Create some algorithms that can operate independently

How to be heard / How to listen: redux

  • If my performer is playing something static, I feel free to make big changes to their sound.
  • If my live performer is playing something that is moving or changing (in pitch, timbre, or rhythm), I choose either to create something static out of their sound, or to move differently (contrast their movement by moving faster or slower or in a different direction, or work with a different parameter). This can be as simple as slowing down my playback speed.
  • If my performer is playing long tones on the same pitch, or a dense repeating or legato pattern, or some kind of broad spectrum sound, I might filter it, or create glissando effects with pitch-shifts ramping up or down.
  • If my performer is playing short tones or staccato, I can use delays or live-sampling to create rhythmic figures.
  • If my performer is playing short bursts of noise, or sounds with sharp fast attacks, that is a great time to play super short delays with a lot of feedback, or crank up a resonant filter to ping it.
  • If they are playing harmonic/focused sound with clear overtones, I can mess it up with all kinds of transformations, but I’ll be sure to delay it / displace it.
When you are done, know when to turn it off.

In short and in closing: Listen to the sound.  What is static? Change it! Do something different.   And when you are done, know when to turn it off.

On “Third Eye” from Bitches Re-Brewed (2004) by Hans Tammen, I’m processing saxophonist Martin Speicher

Suggested further reading

Michel Chion (translated by Claudia Gorbman): Audio-Vision: Sound on Screen (Columbia University Press, 1994)
(Particularly his chapter, “The Three Listening Modes” pp. 25–34)

Dave Hunter: “Effects Explained: Modulation—Phasing, Flanging, and Chorus” (Gibson website, 2008)

Dave Hunter: “Effects Explained: Overdrive, Distortion, and Fuzz” (Gibson website, 2008)

From the Machine: Realtime Algorithmic Approaches to Harmony, Orchestration, and More

As we discussed last week, the development of a realtime score, in which compositional materials can be continuously modified, re-arranged, or created ex nihilo during performance and displayed to musicians as musical notation, is no longer the stuff of fantasy. The musical and philosophical implications of such an advance are only beginning to be understood and exploited by composers. This week, I’d like to share some algorithmic techniques that I’ve been developing in an attempt to grapple with some of the compositional possibilities offered by realtime notation. These range from the more linear and performative to the more abstract and computation-intensive; they deal with musical parameters ranging from harmony and form to orchestration and dynamics. Given the relative novelty and almost unlimited nature of the subject matter (to say nothing of the finite space allotted for the task), consider this a report from one person’s laboratory, rather than anything like a comprehensive survey.

HARMONY & VOICE LEADING

How might we begin to create something musically satisfying from just this raw vocabulary?

I begin with harmony, as it is the area that first got me interested in modeling musical processes using computer algorithms. I have always been fascinated by the way in which a mechanistic process like connecting the tones of two harmonic structures, according to simple rules of motion, can produce such profound emotional effects in listeners. It is also an area that seems to still hold vast unexplored depths—if not in the discovery of new vertical structures[1], at the very least in their horizontal succession. The sheer combinatorial magnitude of harmonic possibilities is staggering: consider each pitch class set from one to twelve notes in its prime form, multiplied by the number of possible inversional permutations of each one (including all possible internal octave displacements), multiplied by the possible chromatic transpositions for each permutation—for just a single vertical structure! When one begins to consider the horizontal dimension, arranging two or more harmonic structures in succession, the numbers involved are almost inconceivable.

The computer is uniquely suited to dealing with the calculation of just such large data sets. To take a more realistic and compositionally useful example: what if we wanted to calculate all the inversional permutations of the tetrachord {C, C#, D, E} and transpose them to each of the twelve chromatic pitch levels? This would give us all the unique pitch class orderings, and thus the complete harmonic vocabulary, entailed by the pitch class set {0124}, in octave-condensed form. These materials might be collected into a harmonic database, one which we can sort and search in musically relevant ways, then draw on in performance to create a wide variety of patterns and textures.

First we’ll need to find all of the unique orderings of the tetrachord {C, C#, D, E}. A basic law of combinatorics states that there will be n! distinct permutations of a set of n items. This (to brush up on our math) means that for a set of 4 items, we can arrange them in 4! (4 x 3 x 2 x 1 = 24) ways. Let’s first construct an algorithm that will return the 24 unique orderings of our four-element set and collect them into a database.

example 1

Branciforte-example-1
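In code, that first step might be sketched as follows (a Python illustration of mine, not the environment used for the examples), with itertools doing the combinatorial work:

```python
from itertools import permutations

# Pitch classes for {C, C#, D, E} -> {0, 1, 2, 4}
tetrachord = (0, 1, 2, 4)

# All 4! = 24 distinct orderings, collected into a list: our database of orderings
orderings = list(permutations(tetrachord))
print(len(orderings))        # 24
print(orderings[:3])         # (0, 1, 2, 4), (0, 1, 4, 2), (0, 2, 1, 4), ...
```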

Next, we need to transpose each of these 24 permutations to each of the 12 chromatic steps, giving us a total of 288 possible structures. To work something like this out by hand might take us fifteen or twenty minutes, while the computer can calculate such a set near-instantly.

example 2

Branciforte-example-2
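Continuing the same sketch, the transposition step reuses the `orderings` list from above and produces the 288-entry vocabulary:

```python
# Transpose each of the 24 orderings to all 12 chromatic levels: 24 x 12 = 288 structures.
database = []
for ordering in orderings:
    for t in range(12):
        database.append(tuple((pc + t) % 12 for pc in ordering))

print(len(database))         # 288
```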

The question of what to do with this database of harmonic structures remains: how might we begin to create something musically satisfying from just this raw vocabulary? The first thing to do might be to select a structure (1-288) at random and begin to connect it with other structures by a single common tone. For instance, if the first structure we draw is number 126 {F# A F G}, we might create a database search tool that allows us to locate a second structure with a common tone G in the soprano voice.

example 3:
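A common-tone search over that database might be sketched like this (assuming, purely for illustration, that each structure is stored bass-to-soprano as pitch classes and that `database` comes from the previous sketch):

```python
import random

NOTE = {0: "C", 1: "C#", 2: "D", 3: "D#", 4: "E", 5: "F", 6: "F#",
        7: "G", 8: "G#", 9: "A", 10: "A#", 11: "B"}

def common_tone_candidates(current, voice, database):
    """Every structure in the database that shares the chosen voice's pitch class
    with the current structure (voice 0 = bass ... voice 3 = soprano)."""
    held = current[voice]
    return [s for s in database if s[voice] == held and s != current]

current = (6, 9, 5, 7)                    # {F#, A, F, G}, bass-to-soprano
options = common_tone_candidates(current, voice=3, database=database)
next_structure = random.choice(options)   # pick one candidate at random
print([NOTE[pc] for pc in next_structure])
```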

To add some composer interactivity, let’s add a control that allows us to specify which voice to connect on the subsequent chord using the numbers 1-4 on the computer keypad. If we want to connect the bass voice, we can press 1; the tenor voice, 2; the alto voice, 3; or the soprano voice, 4. Lastly, let’s orchestrate the four voices to string quartet, with each structure lasting a half note.

example 4:

This is a very basic example of a performance tool that can generate a series of harmonically self-similar structures, connect them to one another according to live composer input, and orchestrate them to a chamber group in realtime. While our first prototype produces a certain degree of musical coherence by holding one of the four voices constant between structures, it fails to specify any rules governing the movement of the other three voices. Let’s design another algorithm whose goal is to control the horizontal plane more explicitly and keep the overall melodic movement a bit smoother.

A first approach might be to calculate the total melodic movement between the current structure and each candidate structure in the database, filtering out candidates whose total movement exceeds a certain threshold. We can calculate the total melodic movement for each candidate by measuring the distance in semitones between each voice in the current structure and the corresponding voice in the candidate structure, then adding together all the individual distances.[2]

example 5.0

Branciforte-example-5.0
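A sketch of that total-movement filter (structures assumed here to be tuples of MIDI pitches, voice by voice):

```python
def total_movement(current, candidate):
    """Sum of absolute semitone distances between corresponding voices (see note 2)."""
    return sum(abs(a - b) for a, b in zip(current, candidate))

def movement_filter(current, candidates, min_total=1, max_total=10):
    """Keep only candidates whose total melodic movement falls inside a window."""
    return [c for c in candidates
            if min_total <= total_movement(current, c) <= max_total]

# e.g. keep candidates that move between 4 and 10 total semitones away from 'current':
# close_candidates = movement_filter(current, all_structures, min_total=4, max_total=10)
```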

While this technique will certainly reduce the overall disjunction between structures, it still fails to provide rules that govern the movement of individual voices. For this we will need an interval filter, an algorithm that determines the melodic intervals created by moving from the current structure to each candidate and only allows through candidates that adhere to pre-defined intervallic preferences. We might want to prevent awkward melodic intervals such as tritones and major sevenths. Or perhaps we’d like the soprano voice to move by step (ascending or descending minor and major seconds) while allowing the other voices to move freely. We will need to design a flexible algorithm that allows us to specify acceptable/unacceptable melodic intervals for each voice, including ascending movement, descending movement, and melodic unisons.

example 5.1

Branciforte-example-5.1
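And a sketch of the per-voice interval filter, where each voice’s permitted melodic intervals can be constrained (the dictionary format is my own illustration):

```python
def interval_filter(current, candidates, allowed):
    """'allowed' maps a voice index (0 = bass ... 3 = soprano) to the set of permitted
    signed melodic intervals in semitones; voices without an entry move freely."""
    def ok(candidate):
        for v, (a, b) in enumerate(zip(current, candidate)):
            if v in allowed and (b - a) not in allowed[v]:
                return False
        return True
    return [c for c in candidates if ok(c)]

# e.g. force the soprano to move by ascending or descending step only:
# stepwise = interval_filter(current, candidates, {3: {-2, -1, 1, 2}})
```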

A final consideration might be the application of contrapuntal rules, such as the requirement that the lowest and highest voices move in either contrary or oblique motion. This could be implemented as yet another filter for candidate structures, allowing a contrapuntal rule to be specified for each two-voice combination.

example 5.2

Branciforte-example-5.2

Let’s create another musical example that implements these techniques to produce smoother movement between structures. We’ll broaden our harmonic palette this time to include four diatonic tetrachords—{0235}, {0135}, {0245}, and {0257}—and orchestrate our example for solo piano. We can use the same combinatoric approach as we did earlier, computing the inversional permutations of each tetrachord to develop our vocabulary. To keep the data set manageable, let’s limit generated material to a specific range of the piano, perhaps C2 to C6.

We’ll start by generating all of the permutations of {0235}, transposing each one so that its lowest pitch is C2, followed by each of the remaining three tetrachords. Before adding a structure to the database, we will add a range check to make sure that no generated structure contains any pitch above C6. If it does, it will be discarded; if not, it will be added to the database. We will repeat this process for each chromatic step from C#2 to A5 (A5 being the highest chromatic step that will produce in-range structures) to produce a total harmonic vocabulary of 2976 structures.
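One plausible way to code that generation step is sketched below. The voicing convention (each voice placed in the lowest octave above the voice beneath it) is my own assumption, so the final count will depend on how each ordering is actually realized; the procedure of transposing, range-checking, and collecting is the part being illustrated:

```python
from itertools import permutations

TETRACHORDS = [(0, 2, 3, 5), (0, 1, 3, 5), (0, 2, 4, 5), (0, 2, 5, 7)]
LOW, HIGH = 36, 84                 # C2 to C6 as MIDI note numbers
TOP_STEP = 81                      # A5, the highest chromatic starting step

def realize(ordering, bass):
    """Transpose the ordering so its first pitch class falls on 'bass', then place each
    later pitch class in the lowest octave above the voice below it (one possible voicing)."""
    t = (bass - ordering[0]) % 12
    pcs = [(pc + t) % 12 for pc in ordering]
    pitches = [bass]
    for pc in pcs[1:]:
        step_up = (pc - pitches[-1]) % 12
        pitches.append(pitches[-1] + (step_up if step_up else 12))
    return tuple(pitches)

piano_db = []
for tet in TETRACHORDS:
    for ordering in permutations(tet):
        for bass in range(LOW, TOP_STEP + 1):   # chromatic steps C2 through A5
            struct = realize(ordering, bass)
            if max(struct) <= HIGH:             # discard anything that exceeds C6
                piano_db.append(struct)

print(len(piano_db))
```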

Let’s begin our realtime compositional process by selecting a random structure from among the 2976. In order to determine the next structure, we’ll begin by running all of the candidates through our semitonal movement algorithm, calculating the distances between the voices of the first structure and those of every other structure in the database. To reduce disjunction between structures, but avoid repetitions and extremely small harmonic movements, let’s allow total movements of between 4 and 10 semitones. All structures that fall within that range will then be passed through to the interval check algorithm, where they will be tested against our intervallic preferences for each voice. Finally, all remaining candidates will be checked for violation of any contrapuntal rules that have been defined for each voice pair. Depending on how narrowly we set each one of these filters, we might reduce our candidate set from 2976 to somewhere between 5 and 50 harmonic structures. We can again employ an aleatoric variable to choose freely among these, given that each has met all of our horizontal criteria.

To give this algorithm a bit more musical interest, let’s also experiment with arpeggiation and slight variations in harmonic rhythm. We can define four arpeggio patterns and allow the algorithm to choose one randomly for each structure that is generated.

example 6:

While this example still falls into the category of initial experiment or étude, it might be elaborated to produce more compositionally satisfying results. Instead of a meandering harmonic progression, perhaps we could define formal relationships such as multi-measure repeats, melodic cells that recur in different voices, or the systematic use of transposition or inversion. Instead of a constant stream of arpeggios, the musical texture could be varied in realtime by the composer. Perhaps the highest note (or notes) of each arpeggio could be orchestrated to another monophonic instrument as a melody, or the lowest note (or notes) re-orchestrated to a bass instrument. These are just a few extemporaneous examples; the possibility of achieving more sophisticated results is simply a matter of identifying and solving increasingly abstract musical problems algorithmically.

Here’s a final refinement to our piano étude, with soprano voice reinterpreted as a melody and bass voice reinforced an octave below on the downbeat of each chord change.

example 6.1:

ORCHESTRATION

In all of the above examples, we limited our harmonic vocabulary to structures that we knew were playable by a given instrument or instrument group. Orchestration was thus a pre-compositional decision, fixed before the run time of the algorithm and invariable during performance. Let’s now turn to the treatment of orchestration as an independent variable, one that might also be subject to algorithmic processing and realtime manipulation.

There are inevitably situations where theoretical purity must give way to expediency if one wishes to remain a composer rather than a full-time software developer.

This is an area of inquiry unlikely to arise in electronic composition, due to the theoretical lack of a fixed range in electronic and virtual instruments. In resigning ourselves to working with traditional acoustic instruments, the abstraction of “pure composition” must be reconciled with practical matters such as instrument ranges, questions of performability, and the creation of logical yet engaging individual parts for performers. This is a potentially vast area of study, one that cuts across disciplines such as mathematics/logic, acoustics, aesthetics, and performance practice. Thus, I must here reprise my caveat from earlier: the techniques I’ll present were developed to provide practical solutions to pressing compositional problems in my own work. While reasonable attempts were made to seek robust solutions, there are inevitably situations where theoretical purity must give way to expediency if one wishes to remain a composer rather than a full-time software developer.

The basic problem of orchestration might be stated as follows: how do we distribute n number of simultaneous notes (or events) to i number of instruments with fixed ranges?

Some immediate observations that follow:

a) The number of notes to be orchestrated can be greater than, less than, or equal to the number of instruments.
b) Instruments have varying degrees of polyphony, ranging from the ability to play only a single pitch to many pitches simultaneously. For polyphonic instruments, idiosyncratic physical properties of the instrument govern which kind of simultaneities can occur.
c) For a given group of notes and a fixed group of instruments, there may be multiple valid orchestrations. These can be sorted by applying various search criteria: playability/difficulty, adherence to the relationships among instrument ranges, or a composer’s individual orchestrational preferences.
d) Horizontal information can also be used to sort valid orchestrations. Which orchestration produces the least/most amount of melodic disjunction from the last event per instrument? From the last several events? Are there specific intervals that are to be preferred for a given instrument moving from one event to another?
e) For a given group of notes and a fixed group of instruments, there may be no valid orchestration.

Given the space allotted, I’d like to focus on the last three items, limiting ourselves to scenarios in which the number of notes to be orchestrated is the same as the number of instruments available and all instruments are acting monophonically.[3]

Let’s return to our earlier example of four-part harmonic events orchestrated for string quartet, with each instrument playing one note. By conservative estimate, a string quartet has a composite range of C2 to E7 (36 to 100 as MIDI pitch values). This does not mean, however, that any four pitches within that range will be playable by the instrument vector {Violin.1, Violin.2, Viola, Cello} in a one note/one instrument arrangement.

example 7

Branciforte-example-7

The most efficient way to determine whether a structure is playable by a given instrument vector—and, if so, which orchestrations are in-range—is to calculate the n! permutations of the structure and pass each one through a per-note range check corresponding to each of the instruments in the instrument vector. If each note of the permutation is in-range for its assigned instrument, then the permutation is playable. Here’s an example of a range check procedure for the MIDI structure {46 57 64 71} for the instrument vector {Violin.1 Violin.2 Viola Cello}.

example 8

Branciforte-example-8
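A sketch of that per-note range check in Python, with approximate instrument ranges of my own choosing (MIDI note numbers):

```python
from itertools import permutations

# Approximate ranges as MIDI note numbers (assumptions for illustration only)
RANGES = {"Violin.1": (55, 100), "Violin.2": (55, 100),
          "Viola": (48, 88), "Cello": (36, 76)}
QUARTET = ["Violin.1", "Violin.2", "Viola", "Cello"]

def in_range(note, instrument):
    lo, hi = RANGES[instrument]
    return lo <= note <= hi

def playable_orchestrations(structure, instruments):
    """Every assignment of the structure's notes to the instruments that is in range
    for all parts (one note per instrument)."""
    return [perm for perm in permutations(structure)
            if all(in_range(n, inst) for n, inst in zip(perm, instruments))]

options = playable_orchestrations((46, 57, 64, 71), QUARTET)
print(len(options), options)
```

With these assumed ranges the low Bb (46) can only land in the cello, so the three upper notes permute freely among the violins and viola, giving the six in-range orchestrations mentioned below.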

By generating the twenty-four permutations of the harmonic structure ({46 57 64 71}, {57 64 71 46}, {64 71 46 57}, etc.) and passing each through a range check for {Violin.1 Violin.2 Viola Cello}, we discover that there are only six permutations that are collectively in-range. There is a certain satisfaction in knowing that we now possess all of the possible orchestrations of this harmonic structure for this group of instruments (leaving aside options like double stops, harmonics, etc.).

Although the current example only produces six in-range permutations, larger harmonic structures or instrument groups could bring the number of playable permutations into the hundreds, or even thousands. Our task, therefore, becomes devising systems for searching the playable permutations in order to locate those that are most compositionally useful. This will allow us to automatically orchestrate incoming harmonic data according to various criteria in a realtime performance setting, rather than pre-auditioning and choosing among the playable permutations manually.

There are a number of algorithmic search techniques that I’ve found valuable in this regard. These can be divided into two broad categories: filters and sorts. A filter is a non-negotiable criterion; in our current example, perhaps a rule such as “Violin 1 or Violin 2 must play the highest note.” A sort, on the other hand, is a method of ranking results according to some criterion. Perhaps we want to rank possible orchestrations by their adherence to the natural low-to-high order of the instruments’ ranges; we might order the instruments by the average pitch in their range and then rank permutations according to their deviation from that order. For a less common orchestration, we might decide to choose the permutation that deviates as much as possible from the instruments’ natural order.

example 9

Branciforte-example-9
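Here is one way to sketch that filter-and-sort, using an inversion count as the deviation measure (a metric of my own choosing). Run on the six in-range permutations from the previous sketch, it happens to return the same voicing described next:

```python
def highest_in_violins(perm):
    """Filter: the highest note must land in Violin 1 or Violin 2 (indices 0 and 1)."""
    return perm.index(max(perm)) in (0, 1)

def deviation_from_natural_order(perm):
    """Sort key: count voice pairs that violate the ensemble's natural high-to-low
    layout (Violin 1 above Violin 2 above Viola above Cello)."""
    return sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
               if perm[i] < perm[j])

candidates = [p for p in options if highest_in_violins(p)]      # 'options' from above
# For a less common orchestration, pick the candidate that deviates the most:
chosen = max(candidates, key=deviation_from_natural_order)
print(chosen)                                                   # (57, 71, 64, 46)
```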

By applying this filter and sort, the permutation {57 71 64 46} is returned for the instrument vector {Violin.1 Violin.2 Viola Cello}. As we specified, the highest note is played by either Violin 1 or Violin 2 (Violin 2 in this case), while the overall distribution of pitches deviates significantly from the instruments’ natural low-to-high order. Mission accomplished.

Let’s also expand our filtering and sorting mechanisms from vertical criteria to include horizontal criteria. Vertical criteria, like the examples we just looked at, can be applied with information about only one structure; horizontal criteria take into account movement between two or more harmonic structures. As we saw in our discussion of harmony, horizontal criteria can provide useful information such as melodic movement for each voice, contrapuntal relationships, total semitonal movement, and more; this kind of data is equally useful in assessing possible orchestrations. In addition to optimizing the intervallic movement of each voice to produce more playable parts, horizontal criteria can be used creatively to control parameters such as voice crossing or harmonic spatialization.

example 10

Branciforte-example-10

Such horizontal considerations can be combined with vertical rules to achieve sophisticated orchestrational control. Each horizontal and vertical criterion can be assigned a numerical weight denoting its importance when used as a sorting mechanism. We might assign a weight of 0.75 to the rule that Violin 1 or Violin 2 contains the highest pitch, a weight of 0.5 to the rule that voices do not cross between structures, and a weight of 0.25 to the rule that no voice should play a melodic interval of a tritone. This kind of weighted search more closely models the multivariate process of organic compositional decision-making. Unlike the traditional process of orchestration, it has the advantage of being executable in realtime, thus allowing a variety of indeterminate data sources to be processed according to a composer’s wishes.
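A weighted scoring function along those lines might be sketched as follows (the crossing and tritone checks are simplified stand-ins of my own, and the weights are the illustrative values above):

```python
def same_voice_order(perm, prev):
    """True if every pair of voices keeps its relative high/low position (no crossing)."""
    pairs = [(i, j) for i in range(len(perm)) for j in range(i + 1, len(perm))]
    return all((perm[i] - perm[j]) * (prev[i] - prev[j]) >= 0 for i, j in pairs)

def weighted_score(perm, prev):
    """Weighted sum of soft criteria; higher is better."""
    s = 0.0
    if perm.index(max(perm)) in (0, 1):                         # a violin has the top note
        s += 0.75
    if same_voice_order(perm, prev):                            # voices do not cross
        s += 0.5
    if all(abs(a - b) % 12 != 6 for a, b in zip(perm, prev)):   # no melodic tritones
        s += 0.25
    return s

# Given the previous structure, rank the playable candidates by weighted score:
# best = max(candidates, key=lambda p: weighted_score(p, prev=previous_structure))
```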

While such an algorithm is perfectly capable of running autonomously, it can also be performatively controlled by varying parameters such as search criteria, weighting, and sorting direction. Other basic performance controls can be devised to quickly re-voice note data to different parts of the ensemble. Mute and solo functions for each instrument or instrument group can be used to modify the algorithm’s behavior on the fly, paying homage to a ubiquitous technique used in electronic music performance. The range check algorithm we developed earlier could alternatively be used to transform a piece’s instrumentation from performance to performance, instantly turning a work for string quartet and voice into a brass quintet. The efficacy of any of these techniques will, of course, vary according to instrumentation and compositional aims, but there is undoubtedly a range of compositional possibilities waiting to be explored in the domain of algorithmic orchestration.

IDEAS FOR FURTHER EXPLORATION

The techniques outlined above barely scratch the surface of the harmonic and orchestrational applications of realtime algorithms—and we have yet to consider several major areas of musical architecture such as rhythm, dynamics, and form! Another domain that holds great promise is the incorporation of live performer feedback into the algorithmic process. Given my goal of writing a short-form post and not a textbook, however, I’ll have to be content to conclude with a few rapid-fire ideas as seeds for further exploration.

Dynamics:

Map MIDI values (0-127) to musical dynamics markings (say, ppp to fff) and use a MIDI controller with multiple faders/pots to control musical dynamics of individual instruments during performance. Alternatively, specify dynamics algorithmically/pre-compositionally and use the MIDI controller only to modify them, re-balancing the ensemble as needed.
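For instance, a minimal mapping sketch (in Python, with an arbitrary eight-step dynamic scale):

```python
DYNAMICS = ["ppp", "pp", "p", "mp", "mf", "f", "ff", "fff"]

def cc_to_dynamic(value):
    """Map a 0-127 MIDI controller value onto eight dynamic markings."""
    return DYNAMICS[min(value * len(DYNAMICS) // 128, len(DYNAMICS) - 1)]

print(cc_to_dynamic(0), cc_to_dynamic(64), cc_to_dynamic(127))   # ppp mf fff
```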

Rhythm:

Apply the combinatoric approach used for harmony and orchestration to rhythm, generating all the possible permutations of note attacks and rests within a given temporal space. Use probabilistic techniques to control rhythmic density, beat stresses, changes of grid, and rhythmic variations. Assign different tempi and/or meters to individual members of an ensemble, with linked conductor cursors providing an absolute point of reference for long-range synchronization.

Form:

Create controls that allow musical “snapshots” to be stored, recalled, and intelligently modified during performance. As an indeterminate composition progresses, a composer can save and return to previous material later in the piece, perhaps transforming it using harmonic, melodic, or rhythmic operations. Or use an “adaptive” model, where a composer can comment on an indeterminate piece as it unfolds, using a “like”/”dislike” button to weight future outcomes towards compositionally desirable states.

Performer Feedback:

Allow one or more members of an ensemble to improvise within given constraints and use pitch tracking to create a realtime accompaniment. Allow members of an ensemble to contribute to an adaptive algorithm, where individual or collective preferences influence the way the composition unfolds.

Next week, we will wrap up the series with a roundtable conversation on algorithms in acoustic music with pianist/composer Dan Tepfer, composer Kenneth Kirschner, bassist/composer Florent Ghys, and Jeff Snyder, director of the Princeton Laptop Orchestra.



1. These having been theoretically derived and catalogued by 20th century music theorists such as Allen Forte. I should add here, however, that while theorists like Forte may have definitively designated all the harmonic species (pitch class sets of one to twelve notes in their prime form), the totality of individual permutations within those species still remains radically under-theorized. An area of further study that would be of interest to me is the definitive cataloguing of the n! inversional permutations of each pitch-class set of n notes. The compositional usefulness of such a project might begin to break down with structures where n > 8 (octachords already producing 40,320 discrete permutations), but it would nonetheless remain useful from an algorithmic standpoint, where knowledge of not only a structure’s prime form but also its inversional permutation and chromatic transposition could be highly relevant.


2. In calculating the distance between voices, we are not concerned with the direction a voice moves, just how far it moves. So whether the pitch C moves up a major third to E (+4 semitones) or down a major third to Ab (-4 semitones) makes no difference to us in this instance; we can simply take the absolute value, yielding a distance of 4.


3. Scenarios in which the number of available voices does not coincide with the number of pitches to be orchestrated necessitate the use of the mathematical operation of combination and a discussion of cardinality, which is beyond the scope of the present article.

Delays, Feedback, and Filters: A Trifecta

My last post, “Delays as Music,” was about making music using delays as an instrument, specifically in the case of the live sound processor. I discussed a bit about how delays work and are constructed technically, how they have been used in the past, how we perceive sound, and how we perceive different delay times when used with sounds of various lengths. This post is a continuation of that discussion. (So please do read last week’s post first!)

We are sensitive to delay times as short as a millisecond or less.

I wrote about our responsiveness to minuscule differences in time, volume, and timbre between the sounds arriving in our ears, which is our skill set as humans for localizing sounds—how we use our ears to navigate our environment. Sound travels at approximately 1,125 feet per second, and though all the sound waves we hear in a sound are travelling at the same speed, the low-frequency waves (which are longer) tend to bend and wrap around objects, while high frequencies are absorbed by or bounce off of objects in our environment. We are sensitive to delay times as short as a millisecond or less, as related to the size of our heads and the physical distance between our ears.  We are able to detect tiny differences in volume between the ear that is closer to a sound source and the other.  We are able to discern small differences in timbre, too, as some high-frequency sounds are literally blocked by our heads. (To notice this phenomenon in action, cover your left ear with your hand and, with your free hand, rustle your fingers first next to the uncovered ear and then next to the covered one. Notice what is missing.)

These psychoacoustic phenomena (interaural time difference, interaural level difference, and head shadow) are useful not only for an audio engineer, but are also important for us when considering the effects and uses of delay in electroacoustic musical contexts.

My “aesthetics of delay” are similar to the rules of thumb audio engineers use when adding delay as an audio effect or for spatialization. The difference in my approach is that I want to be able to recognize and find sounds I can put into a delay, so that I can predict what will happen to them in real time as I am playing with various parameter settings. I use changes in delay times as a tool to create and control rhythm, texture, and timbral changes. I’ve tried to develop a kind of electronic musicianship which incorporates acousmatic listening and quick responses, and I hope to share some of this.

It’s all about the overlap of sound.

As I wrote, it’s all about the overlap of sound.  If a copy of a sound, delayed by 1-10ms, is played with the original, we simply hear it as a unified sound, changed in timbre. Short delayed sounds nearly always overlap. Longer delays might create rhythms or patterns; medium length delays might create textures or resonance.  It depends on the length of the sound going into the delay, and what that length is with respect to the length of the delay.

This post will cover more ground about delays and how they can be used to play dynamic, gestural, improvised electroacoustic music. We also will look at the relationship between delays and filtering, and in the next and last post I’ll go more deeply into filtering as a musical expression and how to listen and be heard in that context.

Mostly, I’ll focus on the case of the live processor who is using someone else’s sound or a sound that cannot be completely foreseen (and not always using acoustic instruments as a source; Joshua Fried does this beautifully with sampling/processing live radio in his Radio Wonderland project). However, despite this focus, I am optimistic that this information will also be useful to solo instrumentalists using electronics on their own sound, as well as to composers wanting to build improvisational systems into their work.

No real tips and tricks here (well, maybe a few), but I do hope to communicate some ideas I have about how to think about effects and live audio manipulation in a way that outlasts current technologies. Some of the examples below use the Max programming language because it is my main programming environment, but also because it is well suited to diagramming and explaining my points.

We want more than one, we want more than one, we want…

As I wrote last week, musicians often want to be able to play more than one delayed sound, or to repeat that delayed sound several times. To do this, we either use more delays, or we use feedback to route a portion of our output back into the input.

When using feedback to create many delays, we route a portion of our output back into the input of the delay. By routing only some of the sound (not 100%), the repeated sound is a little quieter each time and eventually the sound dies out in decaying echoes.  If our feedback level is high, the sound may recirculate for a while in an almost endless repeat, and might even overload/clip if we add new sounds (like a too full fountain).

Using multi-tap delays, or a few delays in parallel, we can make many copies of the sound from the same input, and play them simultaneously.  We could set up different delay lengths with odd spacings, and if the delays are longer than the sound we put in, we might get some fun rhythmic complexity (and polyrhythmic echoes).  With very short delays, we’ll get a filtered sound from the multiple copies being played nearly simultaneously.

Any of these delayed signals (taps) could in turn be sent back into the multi-tap delay’s input in a feedback network. It is possible to put any number and combination of additional delays and filters in the feedback loop as well, and these complex designs are what make the difference between all the flavors of delay types that are commonly used.
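For readers who want something concrete to poke at, here is a small multi-tap delay sketch with feedback, written in Python rather than Max; the tap times and gains are arbitrary placeholders:

```python
import numpy as np

def multitap_delay(x, sr=44100, taps_ms=(125, 280, 410), gains=(0.6, 0.5, 0.4),
                   feedback=0.35, mix=0.5):
    """Sketch of a multi-tap delay: several read points off one delay line, with a
    portion of the summed taps fed back into the line's input."""
    taps = [int(sr * t / 1000) for t in taps_ms]
    buf = np.zeros(max(taps) + 1)
    out = np.zeros(len(x))
    write = 0
    for i in range(len(x)):
        wet = sum(g * buf[(write - d) % len(buf)] for d, g in zip(taps, gains))
        buf[write] = x[i] + feedback * wet           # feed a portion of the taps back in
        out[i] = (1 - mix) * x[i] + mix * wet
        write = (write + 1) % len(buf)
    return out
```

With tap times longer than the input sound you get the polyrhythmic echoes described above; shrink them toward a few milliseconds and the same structure turns into filtering and resonance.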

It doesn’t matter how we choose to create our multiple delays.  If the delays are longer than the sounds going into them, then we don’t get overlap, and we’ll hear a rhythm or pattern.  If the delays are medium length (compared to our input sound), we’ll hear some texture or internal rhythms or something undulating.  If the delays are very short, we get filtering and resonance.

Overlap is what determines the musical potential for what we will get out of our delay.

The overlap is what determines the musical potential for what we will get out of our delay. For live sound processing in improvised music, it is critical to listen analytically (acousmatically) to the live sound source we are processing.  Based on what we hear, it is possible to make real-time decisions about what comes next and know exactly what we will get out.

Time varying delay – interpolating delay lines

Most cheaper delay pedals and many plugins make unwanted noise when the delay time is changed while a sound is playing. Usually described as clicks, pops, crackling, or “zipper noise,” these artifacts occur because the delays are “non-interpolating”: the changes in delay time are not smoothed, so the audio is played back with abrupt discontinuities in amplitude. If you never change delay times during performance, a simple, fixed, non-interpolating delay is fine.

Changing delay times is very useful for improvisation and for turning delay into an instrument. To avoid the noise and clicks we need to use “interpolating” delays, which might mean a slightly more expensive pedal or plugin, or a little more programming. As performers or users of commercial gear we may not be privy to all the different techniques being used in every piece of technology we encounter. (Linear or higher-order interpolation, windowing/overlap, and selection of delayed sounds from several parallel delay lines are a few.) For the live sound processor / improviser what matters is: Can I change my delay times live?  What artifacts are introduced when I change them?  Are those artifacts musically useful to me?  (Sometimes we like glitches, too.)
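As a sketch of what “interpolating” can mean under the hood, here is one common approach, linear interpolation of a fractional read position; real pedals and plugins may instead use higher-order interpolation or crossfading, as noted above.

import numpy as np

def read_interpolated(buf, write_pos, delay_samples):
    """Read from a circular delay buffer at a possibly fractional delay.

    A non-interpolating delay rounds delay_samples to a whole number, which clicks
    when the delay time moves; blending the two neighboring samples lets the read
    position glide smoothly instead.
    """
    pos = (write_pos - delay_samples) % len(buf)
    i = int(pos)
    frac = pos - i
    return (1.0 - frac) * buf[i] + frac * buf[(i + 1) % len(buf)]

# a fractional delay of 10.25 samples, read from a simple ramp signal:
buf = np.arange(64, dtype=float)
print(read_interpolated(buf, write_pos=40, delay_samples=10.25))   # 29.75, between samples 29 and 30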

Doppler shift!  Making delays fun.

A graphic representation of the Doppler Shift

An interesting feature/artifact of interpolating delays is the characteristic pitch shift that many of them make.  This pitch shift is similar to how the Doppler shift phenomenon works.

The characteristic pitch shift that many interpolating delays make is similar to how the Doppler Effect works.

A stationary sound source normally sends out sound waves in all directions around itself, at the speed of sound. If that sound source starts to move toward a stationary listener (or if the listener moves toward the sound), the successive wave fronts start getting compressed in time and hit the listener’s ears with greater frequency.  Due to the relative motion of the sound source to the listener, the sound’s frequency has in effect been raised.  If the sound source instead moves away from the listener, the opposite holds true: the wave fronts are encountered at a slower rate than previously, and the pitch seems to have been lowered. [Moore, 1990]

OK, but in plainer English: when a car drives past you on the street or highway, you hear its sound go up in pitch as it approaches and back down as it passes. This is the Doppler Effect. The sound waves always travel at the same speed, but because they come from a moving object, they reach you compressed (higher in frequency) while the car approaches and stretched out (lower in frequency) once it is moving away from you.

A sound we put into a delay line (software / pedal / tape loop) is like a recording.  If you play it back faster, the pitch goes up because the sound waves hit your ears in faster succession; if you slow it down, it plays back lower. When delay times are varied while a sound is playing, the same auditory illusion is created as with a passing train whistle or car horn: the pitch goes down as the delay time is increased and up as it is decreased, the same Doppler Effect as the case of the stationary listener and moving sound source.
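For a rough sense of how much pitch shift to expect, assume the delay time is swept at a steady rate; the playback rate is then approximately 1 minus the change in delay (in seconds) per second of sweep. A small Python sketch, with made-up numbers:

import math

def doppler_ratio(delay_change_ms, sweep_time_s):
    """Approximate pitch ratio while a delay time is swept linearly.

    Increasing the delay (positive change) slows playback, so the pitch drops;
    decreasing it raises the pitch for the duration of the sweep.
    """
    slope = (delay_change_ms / 1000.0) / sweep_time_s    # seconds of delay added per second
    return 1.0 - slope

def ratio_to_semitones(ratio):
    return 12.0 * math.log2(ratio)

# sweeping a delay upward by 100ms over half a second:
r = doppler_ratio(100, 0.5)                  # 0.8, i.e. playback slows to 80% speed
print(ratio_to_semitones(r))                 # about -3.9 semitones while the sweep lasts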

Using a Doppler Effect makes the delay more of an “instrument.”

Using a Doppler Effect makes the delay more of an “instrument” because it’s possible to repeat the sound and also alter it.  In my last post I discussed many types of reflections and repetitions in the visual arts, some exact and natural and others more abstract and transformed as reflections. Being able to alter the repetition of a sound in this way is of key importance to me.  Adding additional effects in with the delays is important for building a sound that is musically identifiable as separate from that of the musician I use as my source.

Using classic electroacoustic methods for transforming sounds, we can create new structures and gestures out of a live sound source. Methods such as pitch-shifting, speeding sounds up or slowing them down, or a number of filtering techniques, work better if we also use delays and time displacement as a way to distinguish these elements from the source sounds.

Many types of delay and effects plugins and pedals on the market are based on simple combinations of the principal parameters I have been outlining (e.g. how much feedback, how short a delay, how it is routed). For example, Ping Pong Delay delays a signal 50-100ms or more and alternates sending it back and forth between the left and right channels, sometimes with enough feedback that it goes on for a while. Flutter Echo is similar to Ping Pong Delay but uses shorter delay times, so more filtering occurs; it mimics an acoustic effect sometimes found in very live-sounding public spaces.  Slapback Echo has a longer delay time (75ms or more) with no feedback.
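Here is a toy Python sketch of the ping-pong idea, just to show how little separates these flavors; the 300ms delay and 0.5 feedback are arbitrary values, and commercial pedals add their own refinements.

import numpy as np

def ping_pong(mono_in, sr, delay_ms=300, feedback=0.5):
    """Bounce a delayed signal back and forth between the left and right channels."""
    d = int(sr * delay_ms / 1000)
    n = len(mono_in)
    left = np.zeros(n)
    right = np.zeros(n)
    for i in range(n):
        left[i] = mono_in[i]                      # dry signal sits in the left channel
        if i >= d:
            right[i] += feedback * left[i - d]    # first echo appears on the right
            left[i] += feedback * right[i - d]    # the next one bounces back to the left
    return np.stack([left, right], axis=1)        # stereo output, shape (n, 2)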

FREEZE!  Infinite Delay and Looping

Some delay devices will let us hold a sample indefinitely in the delay.  We can loop a sound and “freeze” it, adding additional sounds sometime later if we choose. The layer cake of loops built up lends itself to an easy kind of improvisation which can be very beautiful.

“Infinite” delay is used by an entire catalog of genres and musical scenes.

Looping with infinite delay is used by an entire catalog of genres and musical scenes, from noise to folk music to contemporary classical.  The past few years especially, it has been all over YouTube and elsewhere online thanks to software like Ableton Live and hardware loopers like those made by Line 6. Engaging in a form of live composing/production, musicians generate textures and motifs and construct them into entire arrangements, often based upon the sound of one instrument, in many tracks, all played live and in the moment.  In popular electronic music practice, looping and grid interfaces have been among the most salient performance and interface paradigms since the late 2000s.

Looping music is often about building up an entire arrangement from scratch, with no sound heard that was not first played live by the instrumentalist before its repetition (though the repetition may sound slightly different, mediated as it is by being heard over speakers).

With live sound processing, we use loops, too, of course. The moment I start to loop a sound “infinitely,” I am, theoretically, no longer working with live sound processing, but I am processing something that happened in the past—this is sometimes called “live sampling” and we could quibble about the differences.  To make dynamic live-looping for improvised music, whether done by sampling/looping other musicians, or by processing one’s own sound, it is essential to be flexible and be able/willing to change the loops in some way, perhaps quickly, and to make alterations to the audio recorded in real-time.  These alterations can be a significant part of the expressiveness of the sound.

For me, the most important part of working with long delays (or infinite loops) is that I be able to create and control rhythms with those delays.  I need to lock in (synchronize) my delay times while I play. Usually I do this manually, by listening, and then using a Tap Tempo patch I wrote (which is what I’ll do when I perform this weekend as part of Nick Didkovsky’s Deviant Voices Festival on October 21 at Spectrum, and the following day with Ras Moshe as part of the Quarry Improvised Music Series at Triskelion Arts).

Short delays are mostly about resonance. In my next and final post, I will talk more about filters and resonance, why using them together with delay is important, as well as strategies for how to be heard when live processing acoustic sound in an improvisation.

In closing, here is an example from What is it Like to be a Bat?, my digital chamber punk trio with Kitty Brazelton (active 1996-2009 and continuing in spirit). In one piece, I turned the feedback up on my delay as high as I could get away with (nearly causing the microphones and sound system to feed back as well), then yelled “Ha!” into my microphone and set off a sequence of extreme delay changes with an interpolating delay, in a timing we liked. Joined by drummer Danny Tunick, who wrote a part to go with it, we’d repeat this sequence four times, each time louder, noisier, different but somehow repeatable at each performance. It became a central theme in that piece, and was recorded as the track “Batch 4,” part of our She Said – She Said, “Can You Sing Sermonette With Me?” on the Bat CD for the Tzadik label.

Some recommended further reading and listening

Thom Holmes, Electronic and Experimental Music (Routledge, 2016)

Jennie Gottschalk, Experimental Music Since 1970 (Bloomsbury Academic, 2016)

Geoff Smith, “Creating and Using Custom Delay Effects” (for the website Sound on Sound, May 2012) Smith writes: “If I had to pick a single desert island effect, it would be delay. Why? Well, delay isn’t only an effect in itself; it’s also one of the basic building blocks for many other effects, including reverb, chorus and flanging — and that makes it massively versatile.”

He also includes many good recipes and examples of different delay configurations.

Phil Taylor, “History of Delay” (written for the website for Effectrode pedals)

Daniel Steinhardt and Mick Taylor, “Delay Basics: Uses, Misuses & Why Quick Delay Times Are Awesome” (from their YouTube channel, That Pedal Show)
(Funny!)

From the Machine: Realtime Networked Notation

Last week, we looked at algorithms in acoustic music and the possibility of employing realtime computation to create works that combine pre-composition, generativity, chance operations, and live data input. This week, I will share some techniques and software tools I’ve developed that make possible what might be called an interactive score. By interactive score, I mean a score that is continuously updatable during performance according to a variety of realtime input. Such input might be provided from any number of digitized sources: software user interface, hardware controllers, audio signals, video stream, light sensors, data matrices, or mobile apps; the fundamental requirement is that the score is able to react to input instantaneously, continuously translating fluctuations in input data into a musical notation that is intelligible to performers.

THE ALGORITHMIC/ACOUSTIC DIVIDE

It turns out that this last requirement has historically been quite elusive. As early as the 1950s, composers were turning to computer algorithms to generate and process compositional data. The resultant information could either be translated into traditional music notation for acoustic performance (in the early days, completely by hand; in later years, by rendering the algorithm’s output as MIDI data and importing it into a software notation editor) or realized as an electronic composition. Electronic realizations emerged as perhaps the more popular approach, for several reasons. First, by using electronically generated sounds, composers gained the ability to precisely control and automate the timbre, dynamics, and spatialization of sound materials through digital means. Second, and perhaps more importantly, by jettisoning human performers—and thus the need for traditional musical notation—composers were able to reduce the temporal distance between a musical idea and its sonic realization. One could now audition the output of a complex algorithmic process instantly, rather than undertake the laborious transcription process required to translate data output into musical notation. Thus, the bottleneck between algorithmic idea and sonic realization was reduced, fundamentally, to the speed of one’s CPU.

As computation speeds increased, the algorithmic paradigm was extended to include new performative and improvisational possibilities. By the mid-1970s, with the advent of realtime computing, composers began to create algorithms that included not only sophisticated compositional architectures, but also permitted continuous manipulation and interaction during performance. To take a simple example: instead of designing an algorithm that harmonizes a pre-written melody according to 18th-century counterpoint rules, one could now improvise a melody during performance and have the algorithm intelligently harmonize it in realtime. If multiple harmonizations could satisfy the voice-leading constraints, the computer might use chance procedures to choose among them, producing a harmonically indeterminate, yet perhaps melodically determinate, musical passage.
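As a deliberately crude stand-in for that idea (nothing like real 18th-century voice-leading rules), here is a Python sketch that harmonizes each incoming melody note by choosing at random among the diatonic triads that contain it; the scale and chord definitions are my own simplifications.

import random

MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]            # C major, as pitch classes

def diatonic_triads(scale):
    """The seven triads built on each scale degree, as sets of pitch classes."""
    return [{scale[i], scale[(i + 2) % 7], scale[(i + 4) % 7]} for i in range(7)]

TRIADS = diatonic_triads(MAJOR_SCALE)

def harmonize(melody_note):
    """Pick one triad containing the melody note's pitch class.

    Where several candidates satisfy the 'rule,' chance decides among them,
    leaving the harmony indeterminate while the melody stays determinate.
    """
    pc = melody_note % 12
    candidates = [t for t in TRIADS if pc in t]
    return sorted(random.choice(candidates)) if candidates else None

# an improvised E (MIDI 64) might come back harmonized as C major, E minor, or A minor
print(harmonize(64))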

Realtime computation and machine intelligence signal a new era in music composition and performance, one in which novel philosophical questions might be raised and answered.

This is just one basic example of combining live performance input with musically intelligent realtime computation; more complex and compositionally innovative applications can easily be imagined. What is notable with even a simple example like our realtime harmonizer, however, is the degree to which such a process resists neat distinctions such as “composition”/“performance”/“improvisation” or “fixed”/“indeterminate.” It is all of these at once, it is each of these to varying degrees, and yet it is also something entirely new as well. Realtime computation and machine intelligence signal a new era in music composition and performance, one in which novel philosophical questions might be raised and answered. I would argue that the possibility of instantiating realtime compositional intelligence in machines holds the most radically transformative promise for a paradigmatic shift in music in the years ahead.

All of this, of course, has historically involved a bit of a trade-off: composers who wished to explore such realtime compositional possibilities were forced to limit themselves to electronic and virtual sound sources. For those who found it preferable to continue to work exclusively with acoustic instruments—whether for their complex yet identifiable spectra, their rich histories in music composition and performance, or the interpretative subtleties of human performers—computer algorithms offered an elaborate pre-compositional device, but nothing more.[1]

BRIDGING THE GAP

This chasm between algorithmic music realized electronically (where sophisticated manipulation of tempi, textural density, dynamics, orchestration, and form could be achieved during performance) and algorithmic music realized acoustically (where algorithmic techniques were only to be employed pre-compositionally to inscribe a fixed work) is something that has frustrated and fascinated me for years. As a student of algorithmic composition, I often wished that I could achieve the same enlarged sense of compositional possibility offered by electronically realized systems—including generativity, stochasticity, and performative plasticity—using traditional instruments and human performers.

This, it seemed, hinged upon a digital platform for realtime notation: a software-based score that could accept abstract musical information (such as rhythmic values, pitch data, and dynamics) as input and convert it into a readable measure of notation. The notational mechanism must also be continuously updatable: it must allow for a composer’s live data input to change the notation of subsequent measures during performance. It must here strike a balance between temporal interactivity for the composer and readability for the performer, since most performers are accustomed to reading at least a few notes ahead in the score. Lastly, the platform must be able to synchronize notational outputs for two or more performers, allowing an ensemble to be coordinated rhythmically.

Fortunately, technologies do now exist—some commercially available and others that can be realized as custom software—that satisfy each of these notational requirements.

I have chosen to develop work in Cycling ’74’s Max/MSP environment, for several reasons. First, Max supports realtime data input and output, which provides the possibility of transcending the merely pre-compositional use of algorithms. Second, two third-party notation objects, bach.score[2] and MaxScore[3], have recently been developed for the Max environment, which allow for numerical data to be translated into traditional (as well as more experimental forms of) musical notation. For years, this remained a glaring inadequacy in Max, as native objects do not provide anything beyond the most basic notational support. Third, Max has several objects designed to facilitate communication among computers on a local network; although most of these objects are low-level in their implementation, they can be coaxed into a lightweight, low-latency, and relatively intelligent computer network with some elaboration.

REALTIME INTERACTIVE NOTATION: UNDER THE HOOD

Let’s take a look at the basic mechanics of interactive notation using the bach.score object instantiated in Max/MSP. (For those unfamiliar with the Max/MSP programming environment, I will attempt to sufficiently summarize/contextualize the operations involved so that this process can be understood in more general terms.) bach.score is a user-interface object that can be used to display and edit musical notation. While not quite as robust as commercial notation software such as Sibelius or Finale, it features many of the same operations: manual note entry with keyboard and mouse, clef and instrument name display, rhythmic and tuplet notation, accidentals and microtones, score text, MIDI playback, and more. However, bach.score’s most powerful feature is its ability to accept formatted text messages to control almost every aspect of its operation in realtime.

To take a basic example, if we wanted to display the first four notes of an ascending C major arpeggio as quarter notes in 4/4 (with quarter note = 60 BPM) in Sibelius, we would first have to set the tempo and time signature manually, then enter the pitches using the keyboard and mouse. With bach.score, we could simply send a line of text to accomplish all of this in a single message:

(( ((4 4) (60)) (1/4 (C4)) (1/4 (E4)) (1/4 (G4)) (1/4 (C5)) ))

example 1:

And if we wanted to display the first eight notes of an ascending C major scale as eighth notes:

(( ((4 4) (60)) (1/8 (C4)) (1/8 (D4)) (1/8 (E4)) (1/8 (F4)) (1/8 (G4)) (1/8 (A4)) (1/8 (B4)) (1/8 (C5)) ))

example 2:

Text strings are sent in a format called a Lisp-like linked list (llll, for short). This format uses nested brackets to express data hierarchically, in a branching tree-like structure. This turns out to be a powerful metaphor for expressing the hierarchy of a score, which bach.score organizes in the following way:

voices > measures > rhythmic durations > chords > notes/rests > note metadata (dynamics, etc.)

The question might be raised: why learn an arcane new text format and be forced to type long strings of hierarchically arranged numbers and brackets to achieve something that might be accomplished by an experienced Finale user in 20 seconds? The answer is that we now have a method of controlling a score algorithmically. The process of formatting messages for bach.score can be simplified by creating utility scripts that translate between the language of the composer (“ascending”; “subdivision”; “F major”) and that of the machine. This allows us to control increasingly abstract compositional properties in powerful ways.

Let’s expand upon our arpeggio example above, and build an algorithm that allows us to change the arpeggio’s root note, the chord type (and corresponding key signature), the rhythmic subdivision used, and the arpeggio’s direction (ascending, descending, or random note order).
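Before looking at the patch itself, here is a rough sense of what such a utility script might do, sketched in Python rather than in Max; the helper names and chord tables are hypothetical, key-signature handling is omitted, and the llll text it assembles follows the format of the examples above.

import random

CHORD_TYPES = {"major": [0, 4, 7, 12], "minor": [0, 3, 7, 12]}   # simplified interval sets
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]  # sharps only, for brevity

def note_name(midi):
    return f"{NOTE_NAMES[midi % 12]}{midi // 12 - 1}"            # 60 -> "C4"

def arpeggio_llll(root=60, chord="major", subdivision=4, direction="ascending", bpm=60):
    """Assemble one measure of arpeggio as an llll-style text message."""
    notes = [root + i for i in CHORD_TYPES[chord]]
    if direction == "descending":
        notes.reverse()
    elif direction == "random":
        random.shuffle(notes)
    body = " ".join(f"(1/{subdivision} ({note_name(n)}))" for n in notes)
    return f"(( ((4 4) ({bpm})) {body} ))"

print(arpeggio_llll())                                           # reproduces example 1 above
print(arpeggio_llll(root=62, chord="minor", direction="descending"))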

example 3:

Let’s add a second voice to create a simple canonic texture. The bottom voice adds semitonal transposition and rhythmic rotation (relative to the first voice) as variables.

example 4:

To add some rhythmic variety, we might also add a control that allows us to specify the probability of rest for each note. Finally, let’s add basic MIDI playback capabilities so we can audition the results as we modify musical parameters.

example 5:

While our one-measure canonic arpeggiator leaves a lot to be desired compositionally, it gives an indication of the sorts of processes that can be employed once we begin thinking algorithmically. (In the next post, we will explore more sophisticated examples of algorithmic harmony, voice-leading, and orchestration.) It is important to keep in mind that unlike similar operations for transposition, inversion, and rotation in a program like Sibelius, the functions we have created here will respond to realtime input. This means that our canonic function could be used to process incoming MIDI data from a keyboard or a pitch-tracked violin, creating a realtime accompaniment that is canonically related to the input stream.

PRACTICAL CONSIDERATIONS: RHYTHMIC COORDINATION AND REALTIME SIGHT-READING

Before going any further with our discussions of algorithmic compositional techniques, we should return to more practical considerations related to a realtime score’s performability. Even if we are able to generate satisfactory musical results using algorithmic processes, how will we display the notation to a group of musicians in a way that allows them to perform together in a coordinated manner? Is there a way to establish a musical pulse that can be synced across multiple computers/mobile devices? And if we display notation to performers precisely at the instant it is being generated, will they be able to react in time to perform the score accurately? Should we, instead, generate material in advance and provide a notational pre-display, so that an upcoming bar can be viewed before having to perform it? If so, how far in advance?

I will share my own solutions to these problems—and the thinking that led me to them—below. I should stress, however, that a multiplicity of answers is no doubt possible, each of which might lead to novel musical results.

I’ve addressed the question of basic rhythmic coordination by stealing a page from Sibelius’s/Finale’s book: a vertical cursor that steps through the score at the tempo indicated. By programming the cursor to advance according to a quantized rhythmic grid (usually either quarter or eighth note), one can visually indicate both the basic pulse and the current position in the score. While this initially seemed a perfectly effective and minimal solution, rehearsal and concert experience has indicated that it is good practice to also have a large numerical counter to display the current beat. (This is helpful for those 13/4 measures with 11 beats of rest.)
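A bare-bones sketch of the timing arithmetic behind such a cursor; in practice this would be driven by Max’s scheduler rather than a Python loop, and the tempo and grid values here are arbitrary.

import time

def run_conductor(bpm=60, beats_per_measure=4, grid_per_beat=2, total_measures=2):
    """Step a 'cursor' through the score on a quantized grid and print the current beat."""
    step = 60.0 / bpm / grid_per_beat                # seconds per grid step (eighth notes here)
    start = time.monotonic()
    total_steps = total_measures * beats_per_measure * grid_per_beat
    for n in range(total_steps):
        measure = n // (beats_per_measure * grid_per_beat) + 1
        beat = (n // grid_per_beat) % beats_per_measure + 1
        print(f"measure {measure}, beat {beat}")     # the large numerical counter
        # sleep until the next grid step, measured from the start time to avoid drift
        time.sleep(max(0.0, start + (n + 1) * step - time.monotonic()))

run_conductor()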

example 6:

With a “conductor cursor” in place to indicate metric pulse and current score position, we turn to the question of how best to synchronize multiple devices (e.g. laptops, tablets) so that each musician’s cursor can be displayed at precisely the same position across devices. This is a critical question, as deviations in the range of a few milliseconds across devices can undermine an ensemble’s rhythmic precision and derail any collective sense of groove. In addition to synchronizing cursor positions, communication among devices will likely be needed to pipe score data (e.g. notes/rests, time signatures, dynamics, expression markings) from a central computer—where the master score is being generated and compiled—to performers’ devices as individual parts.

Max/MSP has several objects that provide communication across a network, including udpsend and udpreceive, jit.net.send and jit.net.recv, and a suite of Java classes that use the mxj object as a host—each of these has its advantages and drawbacks. Udpsend and udpreceive allow Max messages to be sent to another device on a network by specifying its IP address; they provide the fastest transfer speeds and are therefore perhaps the most commonly used. The downside to using UDP packets is that there is no specification for error-checking, such as guaranteed message delivery or guaranteed ordered delivery. This means that, while it provides the fastest transfer speeds, UDP does not necessarily guarantee that data packets will make it to their destination, or that packets will be received in the correct order. jit.net.send and jit.net.recv are very similar in their Max/MSP implementation, but use the TCP/IP transfer protocol, which does provide for error-checking; the tradeoff is that they have slightly slower delivery times. mxj-based objects provide useful functionality such as the ability to query one’s own IP address (net.local) and multicasting (net.multi.send and net.multi.recv), but require Java to be installed on performers’ machines—something which, experience has shown, cannot always be assumed.

I have chosen to use jit.net.send and jit.net.recv exclusively in all of my recent work. The slight tradeoff in speed is offset by the reliability they provide during performance. Udpsend and udpreceive might work flawlessly for 30 minutes and then drop a data packet, causing the conductor cursor to skip a beat or a blank measure to be unintentionally displayed. This is, of course, unacceptable in a critical performance situation. To counteract the slightly slower performance of jit.net.send and jit.net.recv (and to further increase network reliability), I have also chosen to use wired Ethernet connections between devices via a 16-port Ethernet switch.[4]

Lastly, we come to the question of how much notational pre-display to provide musicians for sight-reading purposes. We must bear in mind that the algorithmic paradigm makes possible an indeterminate compositional output, so it is entirely possible that musicians will be sight-reading music during performance that they have not previously seen or rehearsed together. Notational pre-display might provide musicians with information about the most efficient fingerings for the current measure, alert them to an upcoming change in playing technique or a cue from a fellow musician, or allow them to ration their attention more effectively over several measures. In fact, it is not uncommon for musicians to glance several systems ahead, or even quickly scan an entire page, to gather information about upcoming events or gain some sense of the musical composition as a whole. The drawback to providing an entire page of pre-generated material, from a composer’s point of view, is that it limits one’s ability to interact with a composition in realtime. If twenty measures of music have been pre-generated, for instance, and a composer wishes to suddenly alter the piece’s orchestration or dynamics, he/she must wait for those twenty measures to pass before the orchestrational or dynamic change takes effect. In this way, we can note an inherent tension between a performer’s desire to read ahead and a composer’s desire to exert realtime control over the score.

Since it was the very ability to exert realtime control over the score which attracted me to networked notation in the first place, I’ve typically opted to keep the notational pre-display to a bare minimum in my realtime works. I’ve found that a single measure of pre-display is usually a good compromise between realtime control for the composer and readability for the performer. (Providing the performer with one measure of pre-display does prohibit certain realtime compositional possibilities that are of interest to me, such as a looping function that allows the last x measures heard during performance to be repeated on a composer’s command.) Depending on tempo and musical material, less than a measure of pre-display might be feasible; this necessitates updating data in a measure as it is being performed, however, which runs the risk of being visually distracting to a performer.

An added benefit of limiting pre-display to one measure is that a performer need only see two measures at any given time: the current measure and the following measure. This has led to the development of what I call an “A/B” notational format, an endless repeat structure comprising two measures. Before the start of the piece, the first two measures are pre-generated and displayed. As the piece begins, the cursor moves through measure 1; when it reaches the first beat of measure 2, measure 3 is pre-generated and replaces measure 1. When the cursor reaches the first beat of measure 3, measure 4 is pre-generated and replaces measure 2, and so on. In this way, a performer can always see two full bars of music (the current bar and the following bar) at the downbeat of any given measure. This system also keeps the notational footprint small and consistent on a performer’s screen, allowing for their part to be zoomed to a comfortable size for reading, or for the inclusion of other instruments’ parts to facilitate ensemble coordination.
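A minimal sketch of that A/B bookkeeping, with the measure generation and the on-screen display reduced to hypothetical placeholder functions (generate_measure and show are not real objects):

def run_ab_score(total_measures, generate_measure, show):
    """Two-slot display: odd-numbered measures live in slot A, even ones in slot B.

    generate_measure(n) -> notation data for measure n   (placeholder)
    show(slot, data)    -> draw that data in slot A or B (placeholder)
    """
    show("A", generate_measure(1))
    show("B", generate_measure(2))
    for current in range(2, total_measures + 1):
        # at the downbeat of measure `current`, the slot holding the measure just
        # finished (current - 1) is refilled with the measure after the current one
        upcoming = current + 1
        if upcoming <= total_measures:
            slot = "A" if (current - 1) % 2 == 1 else "B"
            show(slot, generate_measure(upcoming))
        # ... the conductor cursor then steps through measure `current` ...

# usage with trivial placeholders:
run_ab_score(8,
             generate_measure=lambda n: f"measure {n} data",
             show=lambda slot, data: print(f"slot {slot} <- {data}"))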

example 7:

SO IT’S POSSIBLE… NOW WHAT?

Given this realtime notational bridge from the realm of computation to the realm of instrumental performance, a whole new world of compositional possibilities begins to emerge. In addition to traditional notation, non-standard notational forms such as graphical, gestural, or text-based can all be incorporated into a realtime networked environment. Within the realm of traditional notation, composers can begin to explore non-fixed, performable approaches to orchestration, dynamics, harmony, and even spatialization in the context of an acoustic ensemble. Next week, we will look at some of these possibilities more closely, discussing a range of techniques for controlling higher-order compositional parameters, from the linear to the more abstract.



1. Notable exceptions to this include the use of mechanical devices and robotics to operate acoustic instruments through digital means (popular examples: Yamaha Disklavier, Pat Metheny’s Orchestrion Project, Squarepusher’s Music for Robots, etc.). The technique of score following—which uses audio analysis to correlate acoustic instruments’ input to a position in a pre-composed score—should perhaps also be mentioned here. Score following provides for the compositional integration of electronic sound sources and DSP into acoustic music performance; since it fundamentally concerns itself with a pre-composed score, however, it cannot be said to provide a truly interactive compositional platform.


2. Freely available through the bach project website.


3. Info and license available at the MaxScore website.


4. A wired Ethernet connection is not strictly necessary for all networked notation applications. If precise timing of events is not compositionally required, a higher-latency wireless network can yield perfectly acceptable results. Moreover, recent technologies such as Ableton Link make possible wireless rhythmic synchronization among networked devices, with impressive perceptual precision. Ableton Link does not, however, allow for the transfer of composer-defined data packets, an essential function for the master/slave data architecture employed in my own work. At the time of this writing, I have not found a wireless solution for transferring data packets that yields acceptable (or even consistent) rhythmic latencies for musical performance.

Delays as Music

As I wrote in my previous post, I view performing with “live sound processing” as a way to make music by altering and affecting the sounds of acoustic instruments—live, in performance—and to create new sounds, often without the use of pre-recorded audio. These new sounds have the potential to forge an independent and unique voice in a musical performance. However, their creation requires, especially in improvised music, a unique set of musicianship skills and knowledge of the underlying acoustics and technology being used. And it requires that we consider the acoustic environment and spectral qualities of the performance space.

Delays and Repetition in Music

The use of delays in music is ubiquitous.  We use delays to locate a sound’s origin, to create a sense of size and space, to mark musical time, to create rhythm, and to delineate form.

The use of delays in music is ubiquitous.

As a musical device, echo (or delay) predates electronic music. It has been used in folk music around the world for millennia for the repetition of short phrases: from Swiss yodels to African call and response, for songs in the round and complex canons, as well as in performances sometimes taking advantage of unusual acoustic spaces (e.g. mountains/canyons, churches, and unusual buildings).

In contemporary music, too, delay and reverb effects from unusual acoustic spaces have been explored in works including the Deep Listening cavern music of Pauline Oliveros, experiments using the infinite reverbs in the Tower of Pisa (Leonello Tarbella’s Siderisvox), and organ work at the Cathedral of St. John the Divine in NY using its 7-second delay. For something new, I’ll recommend the forthcoming work of my colleague, trombonist Jen Baker (Silo Songs).

Of course, delay was also an important tool in the early studio tape experiments of Pierre Schaeffer (Étude aux chemins de fer) as well as Terry Riley and Steve Reich. The list of early works using analog and digital delay systems in live performances is long and encompasses many genres of music outside the scope of this post—from Robert Fripp’s Frippertronics to Miles Davis’s electric bands (where producer Teo Macero altered the sound of Sonny Sharrock’s guitar and many other instruments) and Herbie Hancock’s later Mwandishi Band.

The use of delays changed how the instrumentalists in those bands played.  In Miles’s work we hear not just the delays, but also the instrumentalists’ improvised responses to the sounds of the delays and, completing the circle, the electronics performers responding in kind by manipulating their delays. Herbie Hancock used delays to expand the sound of his own electric Rhodes, and as Bob Gluck has pointed out (in his 2014 book You’ll Know When You Get There: Herbie Hancock and the Mwandishi Band), he “intuitively realized that expressive electronic musicianship required adaptive performance techniques.” This is something I hope we can take for granted now.

I’m skipping any discussion of the use of echo and delay in other styles (as part of the roots of Dub, ambient music, and live looping) in favor of talking about the techniques themselves, independent of the trappings of a specific genre, and favoring how they can be “performed” in improvisation and as electronic musical sounds rather than effects.

Sonny Sharrock processed through an Echoplex by Teo Macero on Miles Davis’s “Willie Nelson” (which is not unlike some recent work by Jonny Greenwood)

By using electronic delays to create music, we can create exact copies or severely altered versions of our source audio, and still recognize it as a repetition, just as we might recognize each repetition of the theme in a piece organized as a theme and variations, or a leitmotif repeated throughout a work. Besides the relationship of delays to acoustic music, the vastly different types of sounds that we can create via these sonic reflections and repetitions have a corollary in visual art, both conceptually and gesturally. I find these analogies to be useful especially when teaching. Comparisons to work from the visual and performing arts that have inspired me in my work include images, video, and dance works.  These are repetitions (exact or distorted), Mandelbrot-like recursion (reflections, altered or displaced and re-reflected), shadows, and delays.  The examples below are analogous to many sound processes I find possible and interesting for live performance.

Sounds we create via sonic reflections and repetitions have a corollary in visual art.

I am a musician not an art critic/theorist, but I grew up in New York, being taken to MoMA weekly by my mother, a modern dancer who studied with Martha Graham and José Limon.  It is not an accident that I want to make these connections. There are many excellent essays on the subject of repetition in music and electronic music, which I have listed at the end of this post.  I include the images and links below as a way to denote that the influences in my electroacoustic work are not only in music and audio.

In “still” visual art works:

  • The reflected, blurry trees in the water of a pond in Claude Monet’s Poplar series create new composite and extended images, a recurring theme in the series.
  • Both the woman and her reflection in Pablo Picasso’s Girl Before a Mirror are abstracted and interestingly the mirror itself is both the vehicle for the reiteration and an exemplified object.
  • There are also repetitions, patterns, and “rhythms” in work by Chuck Close, Andy Warhol, Sol LeWitt, M.C. Escher, and many other painters and photographers.

In time-based/performance works:

  • Fase, Four Movements to the Music of Steve Reich, is a dance choreographed in 1982 by Anne Teresa De Keersmaeker to four early compositions by Steve Reich. De Keersmaeker uses the dancers’ shadows, which create a third (and fourth and fifth) dancer shifting in and out of focus, turning the reflected image into a kind of sleight-of-hand partner for the live dancers.
  • Iteration plays an important role in László Moholy-Nagy’s short films, shadow play constructions, and his Light Space Modulator (1930).
  • Reflection/repetition/displacement are inherent to the work of countless experimental video artists, starting with Nam June Paik, who work with video synthesis, feedback and modified TVs/equipment.

Another thing to be considered is that natural and nearly exact reflections can also be experienced as beautifully surreal. On a visit to the Okefenokee swamp in Georgia long ago, my friends and I rode in small flat boats on Mirror Lake and felt we were part of a Roger Dean cover for a new Yes album.

Okefenokee Swamp


Natural reflections, even when nearly exact, usually have some small change—a play in the light or color, or a slight asymmetry—that gives them away. In all of my examples, the visual reflection is not “the same” as the original.   These nonlinear differences are part of the allure of the images.

These images are all related to how I understand live sound processing to act upon my audio sources. Perfect mirrors create surreal new images/objects extending away from the original.  Distorted reflections (anamorphosis) create a more separate identity for the created image, one that can be understood as emanating from the source image, but that is inherently different in its new form. Repetition/mirrors: many exact or near-exact copies of the same image/sound form patterns, rhythms, or textures, creating a new composite sound or image.  Phasing/shadows—time-based or time-connected: the reflected image changes over time in its physical placement with regard to the original, creating a potentially new composite sound.   Most of these ways of working require more than simple delay and benefit from speed changes, filtering, pitch-shift/time-compression, and other things I will delve into in the coming weeks.

The myths of Echo and Narcissus are both analogies and warning tales for live electroacoustic music.

We should consider the myths of Echo and Narcissus both as analogies and as warning tales for live electroacoustic music. When we use delays and reverb, we hear many copies of our own voice/sound overlapping each other, and we create simple musical reflections of our own sound, smoothed out by the overlaps and amplified into a more beautiful version of ourselves!  Warning!  Just like when we sing in the shower, we might fall in love with the sound (to the detriment of the overall sound of the music).


Getting Techie Here – How Does Delay Work?

Early Systems: Tape Delay

A drawing by Mark Ballora demonstrating how tape delay works: the tape travels between the reels, passing the erase, record, and playback heads. (Image reprinted with permission.)

The earliest method used to artificially create the effect of an echo or simple delay was to take advantage of the spacing between the record and playback heads on a multi-track tape recorder. The output of the playback head could be routed back to the record head and re-recorded on a different track of the same machine.  That signal would then be read again by the playback head (on its new track), delayed by the amount of time it took the tape to travel from the record head to the playback head.

The delay time is determined by the physical distance between the tape heads and by the tape speed being used.  One limitation is that delay times are limited to those that can be created at the playback speeds of the tape. (e.g. At a tape speed of 15 inches per second (ips), tape heads spaced 3/4 to 2 inches apart can create echoes of 50ms to 133ms; the same spacing at 7 ips yields 107ms to 285ms, etc.)
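The arithmetic is simple enough to sketch: the delay time is just the head spacing divided by the tape speed. A quick Python check of the figures above:

def tape_delay_ms(head_spacing_inches, tape_speed_ips):
    """Delay produced by the record-to-playback head gap at a given tape speed."""
    return 1000.0 * head_spacing_inches / tape_speed_ips

print(round(tape_delay_ms(0.75, 15)))   # 50 ms
print(round(tape_delay_ms(2.0, 15)))    # 133 ms
print(round(tape_delay_ms(2.0, 7)))     # ~286 ms at the slower speed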

Here is an example of analog tape delay in use:

Longer/More delays: By using a second tape recorder, we can make a longer sequence of delays, but it would be difficult to emulate natural echoes and reverberation because all our delay lengths would be simple multiples of the first delay. Reverbs have a much more complex distribution of many, many small delays. The output volume of those delays decreases differently (more linearly) in a tape system than it would in a natural acoustic environment (more exponentially).

More noise: Another side effect of creating the delays by re-recording audio is that after many recordings/repetitions the audio signal will start to degrade, affecting its overall spectral qualities, as the high and low frequencies die out more quickly, eventually degrading into, as Hal Chamberlin has aptly described it in his 1985 book Musical Applications of Microprocessors, a “howl with a periodic amplitude envelope.”

Added noise from degradation and overlapped voice and room acoustics is turned into something beautiful in I Am Sitting In A Room, Alvin Lucier’s seminal 1969 work.  Though not technically using delay, the piece is a slowed down microcosm of what happens to sound when we overlap / re-record many many copies of the same sound and its related room acoustics.

A degree of unpredictability certainly enhances the use of any musical device being used for improvisation, including echo and delay. Digital delay makes it possible to overcome the inherent inflexibility and static quality of most tape delay systems, which remain popular for other reasons (e.g. perceived audio quality or nostalgia, as discussed below).

The list of influential pieces using a tape machine for delay is canonically long.  A favorite of mine is Terry Riley’s piece, Music for the Gift (1963), written for trumpeter Chet Baker. It was the first use of very long delays on two tape machines, something Riley dubbed the “Time Lag Accumulator.”

Terry Riley: Music for the Gift III with Chet Baker

Tape delay was used by Pauline Oliveros and others from the San Francisco Tape Music Center for pieces that were created live as well as in the studio, with no overdubs, and which therefore could be considered performances and not just recordings.   The Echoplex, created around 1959, was one of the first commercially manufactured tape delay systems and was widely used in the ’60s and ’70s. Advances in the design of commercial tape delays, including the addition of more (and movable) tape heads, increased the number of delays and the flexibility to change delay times on the fly.

Stockhausen’s Solo (1966), for soloist and “feedback system,” was first performed live in Tokyo using seven tape recorders (the “feedback” system) with specially adjustable tape heads to allow music played by the soloist to “return” at various delay times and combinations throughout the piece.  Though technically not improvised, Solo is an early example of tape music for performed “looping.”  All the music was scored, and a choice of which tape recorders would be used and when was determined prior to each performance.

I would characterize the continued use of analog tape delay as nostalgia.

Despite many advances in tape delay, today digital delay is much more commonly used, whether in an external pedal unit or computer-based. This is because it is convenient (smaller, lighter, and easier to carry around) and because it is much more flexible. Multiple outputs don’t require multiple tape heads or more tape recorders. Digital delay enables quick access to a greater range of delay times, and the maximum delay time is simply a function of the available memory (and memory is much cheaper than it used to be).   Yet, in spite of the convenience and expandability of digital delay, there is continued use of analog tape delay in some circles.  I would simply characterize this as nostalgia (for the physicality of the older devices and of dealing with analog tape, and for the warmth of analog sound, all of which we associate with music from an earlier time).

What is a Digital Delay?

Delay is the most basic component of most digital effects systems, and so it’s critical to discuss it in some detail before moving on to some of the effects that are based upon it.   Below, and in my next post, I’ll also discuss some physical and perceptual phenomena that need to be taken into consideration when using delay as a performance tool / ersatz instrument.

Basic Design

In the simplest terms, a “delay” is simple digital storage.  One audio sample, or a small block of samples, is stored in memory and can then be read back and played at some later time as output. A one-second delay (1000ms), mono, requires storing one second of audio. (At the 16-bit CD sample rate of 44.1kHz, this means about 88 KB of data.) These sizes are tiny by today’s standards, but if we use many delays or very long delays it adds up. (It is not infinite or magic!)

Besides being used in creating many types of echo-like effects applications, a simple one-sample delay is also a key component of the underlying structure of all digital filters, and many reverbs.  An important distinction between each of these applications is the length of the delay. As described below, when a delay time is short, the input sounds get filtered, and with longer delay times other effects such as echo can be heard.
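Here is a bare-bones Python sketch of that storage idea: a circular buffer with a write position and a read position some number of samples behind it. The class name and numbers are for illustration only.

import numpy as np

class Delay:
    """One mono delay line: samples written in are read back delay_samples later."""
    def __init__(self, max_delay_samples):
        # stored as floats here; at 16 bits, one second at 44.1kHz is roughly 88 KB, as noted above
        self.buf = np.zeros(max_delay_samples)
        self.write = 0

    def process(self, sample, delay_samples):
        read = (self.write - delay_samples) % len(self.buf)
        out = self.buf[read]
        self.buf[self.write] = sample
        self.write = (self.write + 1) % len(self.buf)
        return out

# usage: a 500ms echo at 44.1kHz (up to one second of storage)
sr = 44100
d = Delay(sr)
echo = [d.process(x, int(0.5 * sr)) for x in np.random.randn(sr)]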

Perception of Delay — Haas (a.k.a. Precedence) Effect

Did you ever drop a pin on the floor?   You can’t see it, but you still know exactly where it is, just from the sound. We humans naturally have a set of skills for sound localization.  These psychoacoustic phenomena have to do with how we perceive the very small differences in time, volume, and timbre between the sounds arriving at our two ears.

In 1949, Helmut Haas made observations about how humans localize sound by using simple delays of various lengths and a simple 2-speaker system.  He played the same sound (speech, short test tones), at the same volume, out of both speakers. When the two sounds were played simultaneously (no delay), listeners reported hearing the sound as if it were coming from the center point between the speakers (an audio illusion not very different from how we see).  His findings give us some clues about stereo sound and how we know where sounds are coming from.  They also relate to how we work with delays in music.

  • Between 1-10ms delay: If the delay between the two sounds is anywhere from 1ms to 10ms, the sound appears to emanate from the first speaker (we locate the sound wherever we hear it first).
  • Between 10-30ms delay: The sound source continues to be heard as coming from the primary (first sounding) speaker, with the delay/echo adding a “liveliness” or “body” to the sound. This is similar to what happens in a concert hall—listeners are aware of the reflected sounds but don’t hear them as separate from the source.
  • Between 30-50ms delay: The listener becomes aware of the delayed signal, but still senses the direct signal as the primary source. (Think of “Attention, shoppers!” echoing through a big box store.)
  • At 50ms or more: A discrete echo is heard, distinct from the first heard sound, and this is what we often refer to as a “delay” or slap-back echo.

The important fact here is that when the delay between speakers is lowered to 10ms (1/100th of a second), the delayed sound is no longer perceived as a discrete event. This is true even when the volume of the delayed sound is the same as the direct signal. [Haas, “The Influence of Single Echo on the Audibility of Speech” (1949)].
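One way to hear this for yourself: write out a stereo file in which the right channel is a delayed copy of the left and listen over two speakers while varying the delay. This sketch uses Python’s standard wave module and a simple click train; the 10ms value and click shape are just starting points, not part of Haas’s experiment.

import math
import struct
import wave

def haas_demo(filename="haas_demo.wav", delay_ms=10, sr=44100, seconds=4):
    """Left channel: a short click every half second. Right channel: the same clicks, delayed."""
    d = int(sr * delay_ms / 1000)
    n = sr * seconds
    left = [0.0] * n
    for start in range(0, n, sr // 2):                 # one decaying click every 500ms
        for i in range(200):
            if start + i < n:
                left[start + i] = math.exp(-i / 30.0)
    right = [0.0] * d + left[:n - d]                   # delayed copy at the same level
    with wave.open(filename, "w") as w:
        w.setnchannels(2)
        w.setsampwidth(2)
        w.setframerate(sr)
        frames = b"".join(struct.pack("<hh", int(l * 16000), int(r * 16000))
                          for l, r in zip(left, right))
        w.writeframes(frames)

haas_demo()    # below roughly 10ms the clicks fuse and pull toward the left speaker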

A diagram of the Haas effect showing how the position of the listener in relationship to a sound source affects the perception of that sound source.

The Haas Effect (a.k.a. Precedence Effect) is related to our skill set for sound localization and other psychoacoustic phenomena. Learning a little about these phenomena (Interaural Time Difference, Interaural Level Difference, and Head Shadow) is useful not only for an audio engineer, but is also important for us when considering the effects and uses of delay in Electroacoustic musical contexts.

What if I Want More Than One?

Musicians usually want the choice to play more than one delayed sound, or to repeat their sound several times. We do this by adding more delays, or we can use feedback, and route a portion of our output right back into the input. (Delaying our delayed sound is something like an audio hall of mirrors.) We usually route only some of the sound (not 100%) so that each time the output is a little quieter and the sound eventually dies out in decaying echoes.  If our feedback level is high, the sound may recirculate for a while in an endless repeat, and may even overload/clip if new sounds are added.

When two or more copies of the same sound event play at nearly the same time, they will comb filter each other. Our sensitivity to the small differences in timbre that result is a key to understanding, for instance, why the many reflections in a performance space don’t usually get mistaken for the real thing (the direct sound).   Likewise, if we work with multiple delays or feedback, the multiple copies of the same sound playing over each other necessarily interact and filter each other, causing changes in the timbre. (This relates again to I Am Sitting In A Room.)

In the end, all of the above (delay length, the use of feedback or additional delays, overlap) determines how we perceive the music we make using delays as a musical instrument. I will discuss feedback and room acoustics, and their potential role as a musical device, in the next post later this month.


My Aesthetics of Delay

To close this post, here are some opinionated conclusions of mine based upon what I have read/studied and borne out in many, many sessions working with other people’s sounds.

  • Short delay times tend to change our perception of the sound: its timbre, and its location.
  • Sounds that are delayed longer than 50ms (or even up to 100ms for some musical sounds) become echoes, or musically speaking, textures.
  • At the in-between delay times (the 30-50ms range, give or take a little) it is the input (the performed sound itself) that determines what will happen. Speech sounds or other percussive sounds with a lot of transients (high-amplitude, short-duration) will respond differently than long resonant tones (which will likely overlap and be filtered). It is precisely in this domain that the live sound-processing musician will need to do extra listening and evaluating to gain experience and predict what might be the outcome. Knowing what might happen in many different scenarios is critical to creating a playable sound-processing “instrument.”

It’s About the Overlap

Using feedback on long delays, we create texture or density, as we overlap sounds and/or extend the echoes to create rhythm.  With shorter delays, using feedback instead can be a way to move toward the resonance and filtering of a sound.  With extremely short delays, control over feedback to create resonance is a powerful way to create predictable, performable, electronic sounds from nearly any source. (More on this in the next post.)

Live processing (for me) all boils down to small differences in delay times.

Live processing (for me) all boils down to these small differences in delay times—between an original sound and its copy (very short, medium and long delays).  It is a matter of the sounds overlapping in time or not.   When they overlap (due to short delay times or use of feedback) we hear filtering.   When the sounds do not overlap (delay times are longer than the discrete audio events), we hear texture.   A good deal of my own musical output depends on these two facts.
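To summarize that rule of thumb as a toy function (the thresholds are the rough ones from my list above, not hard physics):

def predict_result(delay_ms, event_length_ms, feedback=0.0):
    """Rough guess at what a delay setting will do, per the overlap rule of thumb above."""
    if delay_ms < 30:
        return "timbre change / filtering (add heavy feedback and it becomes resonance)"
    if delay_ms <= 50:
        return "depends on the input: transients stay distinct, long tones overlap and filter"
    if feedback > 0.7 or delay_ms < event_length_ms:
        return "overlap: texture and density"
    return "no overlap: discrete echoes, rhythm, pattern"

print(predict_result(2, 500, feedback=0.95))    # resonance territory
print(predict_result(400, 80))                  # discrete echoes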


Some Further Reading and Listening

On Sound Perception of Rhythm and Duration

Karlheinz Stockhausen’s 1972 lecture Four Criteria of Electronic Music (Part I)
(I find intriguing Stockhausen’s discussion of unified time structuring and his description of the continuum of rhythms: from those played very fast (creating timbre), to medium fast (heard as rhythms), to very very slow (heard as form). This lecture both expanded and confirmed my long-held ideas about the perceptual boundaries between short and long repetitions of sound events.)

Pierre Schaeffer’s 1966 Solfège de l’Objet Sonore
(A superb book and accompanying CDs with 285 tracks of example audio. Particularly useful for my work and the discussion above are sections on “The Ear’s Resolving Power” and “The Ear’s Time Constant” and many other of his findings and examples. [Ed. note: Andreas Bick has written a nice blog post about this.])

On Repetition in All Its Varieties

Jean-Francois Augoyard and Henri Torgue, Sonic Experience: a Guide to Everyday Sounds (McGill-Queen’s University Press, 2014)
(See their terrific chapters on “Repetition”, “Resonance” and “Filtration”)

Elizabeth Hellmuth Margulis, On Repeat: How Music Plays the Mind (Oxford University Press, 2014)

Ben Ratliff, Every Song Ever (Farrar, Straus and Giroux, 2016)
(Particularly the chapter “Let Me Listen: Repetition”)

Other Recommended Reading

Bob Gluck’s book You’ll Know When You Get There: Herbie Hancock and the Mwandishi Band (University of Chicago Press, 2014)

Michael Peters’s essay “The Birth of the Loop”
http://preparedguitar.blogspot.de/2015/04/the-birth-of-loop-by-michael-peters.html

Phil Taylor’s essay “History of Delay”

My chapter “What if your instrument is Invisible?” in the 2017 book Musical Instruments in the 21st Century as well as my 2010 Leonardo Music Journal essay “A View on Improvisation from the Kitchen Sink” co-written with Hans Tammen.

LiveLooping.org
(A musician community built site around the concept of live looping with links to tools, writing, events, etc.)

Some listening

John Schaefer’s WNYC radio program “New Sounds” has featured several episodes on looping.
Looping and Delays
Just Looping Strings
Delay Music

And finally something to hear and watch…

Stockhausen’s former assistant Volker Müller performing on generator, radio, and three tape machines

From the Machine: Computer Algorithms and Acoustic Music

The possibility of employing an algorithm to shape a piece of music, or certain aspects of a piece of music, is hardly new. If we define algorithmic composition broadly as “creating from a set of rules or instructions,” the technique is in some sense indistinguishable from musical composition itself. While composers prior to the 20th century were unlikely to have thought of their work in explicitly algorithmic terms, it is nonetheless possible to view aspects of their practice in precisely this way. From species counterpoint to 14th-century isorhythm, from fugue to serialization, Western music has made use of rule-based compositional techniques for centuries. It might even be argued that a period of musical practice can be roughly defined by the musical parameters it derives axiomatically and the parameters left open to “taste,” serendipity, improvisation, or chance.

A relatively recent development in rule-based composition, however, is the availability of raw computational power capable of millions of calculations per second and its application to compositional decision-making. If a compositional process can be broken down into a specific enough list of instructions, a computer can likely perform them—and usually at speeds fast enough to appear instantaneous to a human observer. A computer algorithm is additionally capable of embedding non-deterministic operations such as chance procedures (using pseudo-random number generators), probability distributions (randomness weighted toward certain outcomes), and realtime data input into its compositional hierarchy. Thus, any musical parameter—e.g. harmony, form, dynamics, or orchestration—can be controlled in a number of meaningful ways: explicitly pre-defined, generated according to a deterministic set of rules (conditional), chosen randomly (aleatoric), chosen according to weighted probability tables (probabilistic), or continuously controlled in real time (improvisational). This new paradigm allows one to conceive of the nature of composition itself as a higher-order task, one requiring adjudication among ways of choosing for each musically relevant datum.
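As a schematic illustration (a hypothetical sketch in Python, not drawn from any existing composition software), each of these control modes can be expressed as a different way of producing the same musical datum—here, the next pitch:

    # Hypothetical sketch of the control modes described above,
    # applied to a single musical parameter: the next pitch.

    import random

    scale = [60, 62, 64, 65, 67, 69, 71]   # MIDI pitches, C major (assumed)

    def pre_defined(i):
        """Explicitly pre-defined: read from a fixed list."""
        melody = [60, 64, 67, 71]
        return melody[i % len(melody)]

    def conditional(prev):
        """Deterministic rule: rise by a scale third, wrapping at the top."""
        idx = scale.index(prev)
        return scale[(idx + 2) % len(scale)]

    def aleatoric():
        """Chance procedure: uniform pseudo-random choice."""
        return random.choice(scale)

    def probabilistic():
        """Weighted probability table: tonic and dominant favored."""
        weights = [5, 1, 2, 1, 4, 1, 1]
        return random.choices(scale, weights=weights, k=1)[0]

    def improvisational(fader_value):
        """Realtime input: a fader (0.0-1.0) mapped onto the scale."""
        return scale[int(fader_value * (len(scale) - 1))]

The "higher-order task" is then deciding, for each parameter of a piece, which of these functions (or which mixture of them) is in charge.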

Our focus here will be the application of computers toward explicitly organizational, non-sonic ends.

Let us here provisionally distinguish between the use of computers to generate/process sound and to generate/process compositional data. While, it is true, computers do not themselves make such distinctions, doing so will allow us to bracket questions of digital sound production (synthesis or playback) and digital audio processing (DSP) for the time being. While there is little doubt that digital synthesis, sampling, digital audio processing, and non-linear editing have had—and will continue to have—a profound influence on music production and performance, it is my sense that these areas have tended to dominate discussions of the musical uses of computers, overshadowing the ways in which computation can be applied to questions of compositional structure itself. Our focus here will therefore be the application of computers toward explicitly organizational, non-sonic ends; we will be satisfied leaving sound production to traditional acoustic instruments and human performers. (This, of course, requires an effective means of translating algorithmic data into an intelligible musical notation, a topic which will be addressed at length in next week’s post.)

Let us further distinguish between two compositional applications of algorithms: pre-compositional use and performance use. Most currently available and historical implementations of compositional data processing are of the former type: they are designed to aid in an otherwise conventional process of composition, where musical data might be generated or modified algorithmically, but is ultimately assembled into a fixed work by hand, in advance of performance.[1]

A commonplace pre-compositional use of data processing might be the calculation of a musical motif’s retrograde inversion in commercial notation software, or the transformation of a MIDI clip in a digital audio workstation using operations such as transposition, rhythmic augmentation/diminution, or randomization of pitch or note velocity. On the more elaborate end of the spectrum, one might encounter algorithms that translate planets’ orbits into rhythmic relationships, prime numbers into harmonic sequences, probability tables into melodic content, or pixel data from a video stream into musical dynamics. Given the temporal disjunction between the run time of the algorithm and the subsequent performance of the work, such operations can be auditioned by a composer in advance, selecting, discarding, editing, re-arranging, or subjecting materials to further processing until an acceptable result is achieved. Pre-compositional algorithms are thus a useful tool when a fixed, compositionally determinate output is desired: the algorithm is run, the results are accepted or rejected, and a finished result is inscribed—all prior to performance.[2]
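On the simpler end of that spectrum, such operations amount to a few lines of code. The sketch below (hypothetical, in Python) treats a motif as a list of (MIDI pitch, duration, velocity) tuples and applies the kinds of transformations mentioned above:

    # Hypothetical pre-compositional operations on a motif of
    # (MIDI pitch, duration in beats, velocity) tuples.

    import random

    motif = [(60, 1.0, 80), (62, 0.5, 72), (64, 0.5, 72), (67, 2.0, 96)]

    def transpose(notes, semitones):
        return [(p + semitones, d, v) for p, d, v in notes]

    def retrograde(notes):
        return list(reversed(notes))

    def inversion(notes, axis):
        """Reflect each pitch around a fixed axis pitch."""
        return [(2 * axis - p, d, v) for p, d, v in notes]

    def retrograde_inversion(notes, axis):
        return retrograde(inversion(notes, axis))

    def augment(notes, factor):
        """Rhythmic augmentation (factor > 1) or diminution (factor < 1)."""
        return [(p, d * factor, v) for p, d, v in notes]

    def randomize_velocity(notes, spread):
        return [(p, d, max(1, min(127, v + random.randint(-spread, spread))))
                for p, d, v in notes]

    print(retrograde_inversion(motif, axis=60))
    # [(53, 2.0, 96), (56, 0.5, 72), (58, 0.5, 72), (60, 1.0, 80)]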

It is now possible for a composer to build performative or interactive variables into the structure of a notated piece, allowing for the modification of almost any imaginable musical attribute during performance.

With the advent of realtime computing and modern networking technologies, however, new possibilities can be imagined beyond the realm of algorithmic pre-composition. It is now possible for a composer to build performative or interactive variables into the structure of a notated piece, allowing for the modification of almost any imaginable musical attribute during performance. A composer might trigger sections of a musical composition in non-linear fashion, use faders to control dynamic relationships between instruments, or directly enter musical information (e.g. pitches or rhythms) that can be incorporated into the algorithmic process on the fly. Such techniques have, of course, been common performance practice in electronic music for decades; given the possibility of an adequate realtime notational mechanism, they might become similarly ubiquitous in notated acoustic composition in the coming years.

Besides improvisational flexibility, performance use of compositional algorithms offers composers the ability to render aleatoric and probabilistic elements anew during each performance. Rather than such variables being frozen into fixed form during pre-composition, they will be allowed to retain their fundamentally indeterminate nature, producing musical results that vary with each realization. By precisely controlling the range, position, and function of random variables, composers can define sophisticated hierarchies of determinacy and indeterminacy in ways that would be unimaginable to early pioneers of aleatoric or indeterminate composition.

Thus, in addition to strictly pre-compositional uses of algorithms, a composer’s live data input can work in concert with conditional, aleatoric, probabilistic, and pre-composed materials to produce what might be called a “realtime composition” or an “interactive score.”

We may, in fact, be seeing the beginnings of a new musical era, one in which pre-composition, generativity, indeterminacy, and improvisation are able to interact in heretofore unimaginable ways. Instances in which composers sit alongside a chamber group or orchestra during performance, modifying elements of a piece such as dynamics, form, and tempo in real time via networked devices, may become commonplace. Intelligent orchestration algorithms equipped with transcription capabilities might allow a pianist to improvise on a MIDI-enabled keyboard and have the results realized by a string quartet in (near) real time. A musical passage might be constructed by composing a fixed melody along with a probabilistic table of possible harmonic relationships (or, conversely, by composing a fixed harmonic progression with variable voice leading and orchestration), creating works that blur the lines between indeterminacy and fixity, composition and improvisation, idea and realization. Timbral or dynamic aspects of a work might be adjusted during rehearsal in response to the specific acoustic character of a performance space. Formal features, such as the order of large-scale sections, might be modified by a composer mid-performance according to audience reaction or whim.

While the possibilities are no doubt vast, the project of implementing a coherent, musically satisfying realtime algorithmic work is still a formidable one: many basic technological pieces remain missing or underdeveloped (requiring a good deal of programming savvy on a composer/musician’s part), the practical requirements for performance and notation are not yet standardized, and even basic definitions and distinctions remain to be theorized.

In this four-part series, I will present a variety of approaches to employing computation in the acoustic domain, drawn both from my own work as well as that of fellow composer/performers. Along the way, I will address specific musical and technological questions I’ve encountered, such as strategies for networked realtime notation, algorithmic harmony and voice leading, rule-based orchestration, and more. While I have begun to explore these compositional possibilities only recently, and am surely only scratching the surface of what is possible, I have been fascinated and encouraged by the early results. It is my hope that these articles might be a springboard for conversation and future experimentation for those who are investigating—or considering investigating—this promising new musical terrain.



1. One might similarly describe a piece of music such as John Cage’s Music of Changes, or the wall drawings of visual artist Sol LeWitt, as works based on pre-compositional (albeit non-computer-based) algorithms.


2. Even works such as Morton Feldman’s graph pieces can be said to be pre-compositionally determinate in their formal dimension: while they leave freedom for a performer to choose pitches from a specified register, their structure and pacing are fixed and cannot be altered during performance.


Joseph Branciforte

Joseph Branciforte is a composer, multi-instrumentalist, and recording/mixing engineer based out of New York City. As composer, he has developed a unique process of realtime generative composition for instrumental ensembles, using networked laptops and custom software to create an “interactive score” that can be continuously updated during performance. As producer/engineer, Branciforte has lent his sonic touch to over 150 albums, working with such artists as Ben Monder, Vijay Iyer, Tim Berne, Kurt Rosenwinkel, Steve Lehman, Nels Cline, Marc Ribot, Mary Halvorson, Florent Ghys, and Son Lux along the way. His production/engineering work can be heard on ECM, Sunnyside Records, Cantaloupe Music, Pi Recordings, and New Amsterdam. He is the co-leader and drummer of “garage-chamber” ensemble The Cellar and Point, whose debut album Ambit was named one of the Top 10 Albums of 2014 by WNYC’s New Sounds and praised by outlets from the BBC to All Music Guide. His current musical efforts include a collaborative chamber project with composer Kenneth Kirschner and an electronic duo with vocalist Theo Bleckmann.

Live Sound Processing and Improvisation

Intro to the Intro

I have been mulling over writing about live sound processing and improvisation for some time, and finally I have my soapbox!  For two decades, as an electronic musician working in this area, I’ve been trying to convince musicians, sound engineers, and audiences that working with electronics to process and augment the sound of other musicians is a fun and viable way to make music.

Also a vocalist, I often use my voice to augment and control the sound processes I create in my music, which encompasses both improvised and composed projects. I have also been teaching (Max/MSP, Electronic Music Performance) for many years. My opinions are influenced by my experiences as both an electronic musician who is a performer/composer and a teacher (who is forever a student).

A short clip of my duo project with trombonist Jen Baker, “Clip Mouth Unit,” where I process both her sound and my voice.

Over the past 5-7 years there has been an enormous surge in interest among musicians, outside of computer music academia, in discovering how to enhance their work with electronics and, in particular, how to use electronics and live sound processing as a performable “real” instrument.

So many gestural controllers have become part of the fabric of our everyday lives.

The interest has increased because (of course) so many more musicians have laptops and smartphones, and so many interesting game and gestural controllers have become part of the fabric of our everyday lives. With so many media tools at our disposal, we have all become amateur designers/photographers/videographers, and also musicians, both democratizing creativity (at least for those with the funds for laptops/smartphones) and exponentially increasing—and therefore diluting—the resulting pool of new work.

Image of a hatted and bespectacled old man waving his index finger with the caption, "Back in my day... no real-time audio on our laptops (horrors!)"

Back when I was starting out (in the early ’90s), we did not have real-time audio manipulations at our fingertips—nothing easy to download or purchase or create ourselves (unlike the plethora of tools available today).  Although Sensorlab and iCube were available (but not widely), we did not have powerful sensors on our personal devices, conveniently with us at all times, that could be used to control our electronic music with the wave of a hand or the swipe of a finger. (Note: this is quite shocking to my younger students.) Now there is also a wave of audio analysis tools using Music Information Retrieval (MIR), and of alternative controllers—previously only seen at research institutions and academic conferences—all going mainstream. Tools such as the Sunhouse Sensory Percussion drum controller, which turns audio into a control source, are becoming readily available and popular in many genres.

In the early ’90s, I was a performing rock-pop-jazz musician, experimenting with free improv/post-jazz. In grad school, I became exposed for the first time to “academic” computer music: real-time, live electroacoustics, usually created by contemporary classical composers with assistance from audio engineers-turned-computer programmers (many of whom were also composers).

My professor at NYU, Robert Rowe, and his colleagues George Lewis, Roger Dannenberg, and others were composer-programmers dedicated to developing systems to get their computers to improvise, or building other kinds of interactive music systems.  Others, like Cort Lippe, were developing pieces for an early version of Max running on a NeXT computer, using complex real-time audio manipulations of a performer’s sound as the sole—and live—electroacoustic sound source and as the source of all control (a concept that I personally became extremely interested and invested in).

As an experiment, I decided to see if I could create simplified versions of these live sound processing ideas I was learning about. I started to bring them to my free avant-jazz improv sessions and to my gigs, using a complicated Max patch I made to control an Eventide H3000 effects processor (which was much more affordable than the NeXT machine, plus we had one at NYU). I did many performances with a core group of people who were willing to let me put microphones on everyone and process them during our performances.

Collision at Baktun 1999. Paul Geluso (bass), Daniel Carter (trumpet), Tom Beyer (drums), Dafna Naphtali (voice, live sound processing), Kristin Lucas (video projection / live processing), and Leopanar Witlarge (horns).

Around that time I also met composer/programmer/performer Richard Zvonar, who had made a similarly complex Max patch as “editor/librarian” software for the H3000, to enable him to create all the mind-blowing live processing he used in his work with Diamanda Galás, Robert Black (State of the Bass), and others. Zvonar was very encouraging about my quest to control the H3000 in real-time via a computer. (He was “playing” his unit from the front panel.)  I created what became my first version of a live processing “instrument” (which I dubbed “kaleid-o-phone” at some point). My subsequent work with Kitty Brazelton and Danny Tunick, in What is it Like to be a Bat?, really stretched me to find ways to control live processing in extreme and repeatable ways that became central and signature elements of our work together, all executed while playing guitar and singing—no easy feat.

Six old laptops all open and lined up in two rows of three on a couch.

Since then—over 23 years, 7 laptops, many gigs and ensembles, and a few CD releases—I’ve worked on that same “instrument” all along, updating my Max patch, trying out many different controllers and ideas, and adding real-time computer-based audio (once that became possible on a laptop, in the late ’90s). I’m just that kinda gal; I like to tinker!

In the long run, what is more important to me than the Max programming I did for this project is that I was able to develop for myself an aesthetic practice and a set of rules for my live sound processing—rules about respecting the sound and independence of the other musicians—that help me make good music when processing other people’s sound.

The omnipresent “[instrument name] plus electronics”, like a “plus one” on a guest list, fills many concert programs.

Many people, of course, use live processing on their own sound, so what’s the big deal? Musicians are excited to extend their instruments electronically and there is much more equipment on stage in just about every genre to prove it. The omnipresent “[instrument name] plus electronics”, like a “plus one” on a guest list, fills many concert programs.

However, I am primarily interested in learning how a performer can use live processing on someone else’s sound, in a way that it can become a truly independent voice in an ensemble.

What is Live Sound Processing, really?

To perform with live sound processing is to alter and affect the sounds of acoustic instruments, live, in performance (usually without the aid of pre-recorded audio), and in this way create new sounds, which in turn become independent and unique voices in a musical performance.

Factoring in the acoustic environment of the performance space, it’s possible to view each performance as site-specific, as the live sound processor reacts not only to the musicians and how they are playing but also to the responsiveness and spectral qualities of the room.

Although, in the past, the difference between live sound processing and other electronic music practices was not readily understood by audiences (or even by many musicians), in recent years the complex role of the “live sound processor” has evolved to often be that of a contributing, performing musician, sitting on stage within the ensemble and not relegated, by default, to the sound engineer position in the middle or back of the venue.

Performers as well as audiences can now recognize electroacoustic techniques when they hear them.

With faster laptops and more widespread use and availability of classic live sound processing as software plugins, these live sound processing techniques have gradually become more acceptable over the past 20 years—and in many music genres practically expected (not to mention the huge impact these technologies have had on more commercial manifestations of electronic dance music, or EDM). Both performers and audiences have become savvier about many electroacoustic techniques and sounds and can now recognize them when they hear them.

We really need to talk…

I’d like to encourage a discourse about this electronic musicianship practice, to empower live sound processors to use real-time (human/old-school) listening and analysis of sounds (being played by others), and to develop skills for real-time (improvised) decisions about how to respond and manipulate those sounds in a way that facilitates their electronic-music-sounds being heard—and understood—as a separate performing (and musicianly) voice.

In this way, the live sound processor is not always dependent on and following the other musicians (who are their sound source), their contributions not simply “effects” that are relegated to the background. Nor will the live sound processor be brow-beating the other musicians into integrating themselves with, or simply following, inflexible sounds and rhythms of their electronics expressed as an immutable/immobile/unresponsive block of sound that the other musicians must adapt to.

My Rules

My self-imposed guidelines, developed over several years of performing and sessions, are:

  1. Never interfere with a musician’s own musical sound, rhythm or timbre. (Unless they want you to!)
  2. Be musically identifiable to both co-players and audience (if possible).
  3. Incorporate my body, using some kind of physical interaction between the technology and myself—through controllers, the acoustics of the sound itself, or my own voice.

I wrote about these rules in “What if Your Instrument is Invisible?”, my chapter contribution to the excellent book Musical Instruments in the 21st Century: Identities, Configurations, Practices (Springer, 2016).

The first two rules, in particular, are the most important ones and will inform virtually everything I will write in coming weeks about live sound processing and improvisation.

My specific area of interest is live processing techniques used in improvised music, and in other settings in which the music is not all pre-composed. Under such conditions, many decisions must be made by the electronic musician in real-time. My desire is to codify the use of various live sound processing techniques into a pedagogical approach that blends listening techniques, a knowledge of acoustics / psychoacoustics, and tight control over the details of live sound processing of acoustic instruments and voice. The goal is to improve communication between musicians and optional scoring of such work, to make this practice easier for new electronic musicians, and to provide a foundation for them to develop their own work.

You are not alone…

There are many electronic musicians who work as I do with live sound processing of acoustic instruments in improvised music. Though we share a bundle of techniques as our central mode of expression, there is a very wide range of possible musical approaches and aesthetics, even within my narrow definition of “Live Sound Processing” as real-time manipulation of the sound of an acoustic instrument to create an identifiable and separate musical voice in a piece of music.

In 1995, I read a preview of what Pauline Oliveros and the Deep Listening Band (with Stuart Dempster and David Gamper) would be doing at their concert at the Kitchen in New York City. Still unfamiliar with DLB’s work, I was intrigued to hear about E.I.S., their “Expanded Instrument System” described as an “interactive performer controlled acoustic sound processing environment” giving “improvising musicians control over various parameters of sound transformation” such as “delay time, pitch transformation” and more. (It was 1995, and they were working with the Reson8 for real-time processing of audio on a Mac, which I had only seen done on NeXT machines.) The concert was beautiful and mesmerizing. But lying on the cushions at the Kitchen, bathing in the music’s deep tones and sonically subtle changes, I realized that though we were both interested in the same technologies and methods, my aesthetics were radically different from that of DLB. I was, from the outset, more interested in noise/extremes and highly energetic rhythms.

It was an important turning point for me as I realized that to assume what I was aiming to do was musically equivalent to DLB simply because the technological ideas were similar was a little like lumping together two very different guitarists just because they both use Telecasters. Later, I was fortunate enough to get to know both David Gamper and Bob Bielecki through the Max User Group meetings I ran at Harvestworks, and to have my many questions answered about the E.I.S. system and their approach.

There is now more improvisation than I recall witnessing 20 years ago.

Other musicians important for me to mention, who have been working with live sound processing of other instruments and improvisation for some time: Lawrence Casserley, Joel Ryan (both in their own projects and long associations with saxophonist Evan Parker’s “ElectroAcoustic” ensemble), Bob Ostertag (influential in all his modes of working), and Satoshi Takeishi and Shoko Nagai’s duo Vortex. More recently: Sam Pluta (who creates “reactive computerized sound worlds” with Evan Parker, Peter Evans, Wet Ink, and others), and Hans Tammen. (Full disclosure: we are married to each other!)

Joel Ryan and Evan Parker at STEIM.

In academic circles, computer musicians, always interested in live processing, have more often taken to the stage as performers operating their software (moving from the central/engineer position). It seems there is also more improvisation than I recall witnessing 20 years ago.

But as for me…

In my own work, I gravitate toward duets and trios, so that it is very clear what I am doing musically, and there is room for my vocal work. My duos are with pianist Gordon Beeferman (our new CD, Pulsing Dot, was just released), percussionist Luis Tabuenca (Index of Refraction), and Clip Mouth Unit—a project with trombonist Jen Baker. I also work occasionally doing live processing with larger ensembles (with saxophonist Ras Moshe’s Music Now groups and Hans Tammen’s Third Eye Orchestra).

Playing with live sound processing is like building a fire on stage.

I have often described playing with live sound processing as like “building a fire on stage”, so I will close by taking the metaphor a bit further. There are two ways to start a fire: with a lot of planning, or by improvising. Which method we choose depends on environmental conditions (wind, humidity, location), the tools we have at hand, and also what kind of person we are (a planner/architect, or someone more comfortable thinking on our feet).

In the same way, every performance environment impacts the responsiveness and acoustics of the musical instruments used there. This is even more pertinent when “live sound processing” is the instrument. The literal weather, humidity, room acoustics, even how many people are watching the concert, all affect the de facto responsiveness of a given room, and can greatly affect the outcome, especially when working with feedback or short delays and resonances. Personally, I am a bit of both personality types—I start with a plan, but I’m also ready to adapt. With that in mind, I believe the improvising mindset is needed for working most effectively with live sound processing as an instrument.

A preview of upcoming posts

What follows in my posts this month will be ideas about how to play better as an electronic musician using live acoustic instruments as sound sources. These ideas are (I hope) useful whether you are:

  • an instrumentalist learning to add electronics to your sound, or
  • an electronic musician learning to play more sensitively and effectively with acoustic musicians.

In these upcoming posts, you can read some of my discussions/explanations and musings about delay as a musical instrument, acoustics/psychoacoustics, feedback fun, filtering/resonance, pitch-shift and speed changes, and the role of rhythm in musical interaction and being heard. These are all ideas I have tried out on many of my students at New York University and The New School, where I teach Electronic Music Performance, as well as from a Harvestworks presentation, and from my one-week course on the subject at the UniArts Summer Academy in Helsinki (August 2014).


Dafna Naphtali creating music from her laptop which is connected to a bunch of cables hanging down from a table. (photo by Skolska/Prague)

Dafna Naphtali is a sound-artist, vocalist, electronic musician and guitarist.   As a performer and composer of experimental, contemporary classical and improvised music since the mid-1990s, she creates custom Max/MSP programming incorporating polyrhythmic metronomes, Morse Code, and incoming audio signals to control her sound-processing of voice and other instruments, and other projects such as music for robots, audio augmented reality sound walks and “Audio Chandelier” multi-channel sound projects.  Her new CD Pulsing Dot with pianist Gordon Beeferman is on Clang Label.

Chris Brown: Models are Never Complete

Despite his fascination with extremely dense structures, California-based composer Chris Brown is surprisingly tolerant about loosely interpreting them. Chalk it up to being realistic about expectations, or a musical career that has been equally devoted to composing and improvising, but to Brown “the loose comes with the tight.” That seemingly contradictory dichotomy informs everything he creates, whether it’s designing elaborate electronic feedback systems that respond to live performances and transform them in real time or—for his solo piano tour-de-force Six Primes—calculating a complete integration of pitch and meter involving 13-limit just intonation and a corresponding polyrhythm of, say, 13 against 7.

“I’ve always felt that being a new music composer, part of the idea is to be an explorer,” Brown admitted when we chatted with him in a Lower East Side hotel room at a break before a rehearsal during his week-long residency at The Stone.  “It’s so exciting and fresh to be at that point where you have this experience that is new.  It’s not easy to get there.  It takes a lot of discipline, but actually to have the discipline is the virtue itself, to basically be following something, testing yourself, looking for something that’s new, until eventually you find it.”

Yet despite Brown’s dedication and deep commitment to uncharted musical relationships that are often extraordinarily difficult to perform, Brown is hardly a stickler for precision.

“If you played it perfectly, like a computer, it wouldn’t sound that good,” he explained. “I always say when I’m working with musicians, think of these as targets. … It’s not about getting more purity.  There’s always this element that’s a little out of control. … If we’re playing a waltz, it’s not a strict one-two-three; there’s a little push-me pull-you in there.”

Brown firmly believes that the human element is central and that computers should never replace people.  As he put it, “It’s really important that we don’t lose the distinction of what the model is rather than the thing it’s modeled on. I think it’s pretty dangerous to do that, actually.”

So for Brown, musical complexity is ultimately just a means to an end, one which is about giving listeners greater control of their own experiences with what they are hearing. In the program notes for a CD recording of his electro-acoustic sound installation Talking Drum, Brown claimed that the reason he is attracted to complex music is “because it allows each listener the freedom to take their own path in exploring a sound field.”

Brown’s aesthetics grew out of his decades of experience as an improviser—over the years he’s collaborated with an extremely wide range of musicians including Wayne Horvitz, Wadada Leo Smith, and Butch Morris—and from being one of the six composers who collectively create live networked computer music as The Hub. Long before he got involved in any of these projects, Brown was an aspiring concert pianist who was obsessed with Robert Schumann’s Piano Concerto which he performed with the Santa Cruz Symphony as an undergrad. Now he has come to realize that even standard classical works are not monoliths.

“Everybody in that Schumann Piano Concerto is hearing something slightly different, too, but there’s this idea somehow that this is an object that’s self-contained,” he pointed out.  “It’s actually an instruction for a ritual that sounds different every time it’s done.  Compositions are more or less instructions for what they should do, but I’m not going to presume that they’re going to do it exactly the same way every time.”

Chris Brown’s first album was released in 1989, ironically the same year as the birth of another musical artist who shares his name, a Grammy Award-winning and Billboard chart-topping R & B singer-songwriter and rapper.  This situation has led to some funny anecdotes involving mistaken identity—calls to his Mills College office requesting he perform Sweet Sixteen parties—as well as glitches on search engines including the one on Amazon.

“These are basically search algorithm anomalies,” he conceded wryly. To me it’s yet another reason to heed his advice about machines and not to overly rely on them to solve all the world’s problems.


Chris Brown in conversation with Frank J. Oteri
Recorded at Off Soho Suites Hotel, New York, NY
June 22, 2017—3:00 p.m.
Video presentations and photography by Molly Sheridan
Transcribed by Julia Lu.

Frank J. Oteri:  Once I knew you were coming to New York City for a week-long residency at The Stone and that we’d have a chance to have a conversation, I started looking around to see if there were any recordings of your music that I hadn’t yet heard. When I did a search on Amazon, I kept getting an R & B singer-songwriter and rapper named Chris Brown, who was actually born the year that the first CD under your name was released.

Chris Brown:  Say no more.

FJO:  I brought it up because I think it raises some interesting issues about celebrity. There is now somebody so famous who has your name, and you’ve had a significant career as a composer for years before he was born.  But maybe there’s a silver lining in it. Perhaps it’s brought other people to your music who might not otherwise have known about it—people who were looking for the other Chris Brown, especially on Amazon since both your recordings and his show up together.

CB:  These are basically search algorithm anomalies, but the story behind that is that when the famous Chris Brown started to become famous, I started getting recorded messages on my office phone machine at Mills, because people would search for Chris Brown’s music and it would take them to the music department at Mills.  They would basically be fan gushes for the most part.  Sometimes they would involve vocalizing, because they were trying to get a chance to record.  Sometimes they would ask if he could play their Sweet Sixteen party.  There were tons of them.  At the beginning, every day, there were long messages of crying and doing anything so that they could get close to Chris Brown in spite of the fact that my message was always a professorial greeting.  It didn’t matter.  So it was a hassle.  Occasionally I would engage with the people by saying this is not the right Chris Brown and trying to send them somewhere else.

It’s a common name. When I was growing up, there weren’t that many Chrises, but somehow it got really popular in the ‘80s and ‘90s.  Anyway, these days not much happens, except that what it’s really meant is kind of a blackout for me on internet searches.  It’s hard to find me if somebody’s looking.  When I started working at Mills, the first thing that David Rosenboom said to me when I came in is there’s this thing called the internet and you should get an email account.  Everybody was making funny little handles for themselves as names.  From that day, mine was cbmuse for Chris Brown Music.  I still have that same email address at Mills.edu.  So I go by cbmuse.  That’s the best I can do.  Sometimes some websites say Christopher Owen Brown, using the John Luther Adams approach to too many John Adamses.  It’s kind of a drag, but on the other hand, it’s a little bit like living on the West Coast anyway, which is that you’re out of the main commercial aspect of your field, which is really in New York. On the West Coast, there’s not as much traffic so you have more time and space.  To some extent, you’re not so much about your handle; you still get to be an individual and be yourself. I could have made a new identity for myself, but I sort of felt like I don’t want to do that.  I’ve always gone by Chris Brown.  I’ve never really attached to Christopher Brown.  Maybe this is a longer answer than you were looking for.

FJO:  It’s more than I thought I’d get. I thought it could have led to talking about your piece Rogue Wave, which features a DJ. Perhaps Rogue Wave could be a gateway piece for the fans of the other Chris Brown to discover your music.

CB:  I don’t think that happens though.  That was not an attempt to do something commercial.  I could talk about that if you like, since we’re on it.  Basically, the DJ on it, Eddie Def, was somebody I met through a gig where I was playing John Zorn’s music at a rock club in San Francisco and through Mike Patton, who knew about him. He invited Eddie to play in the session and he just blew me away.  I was playing samples and he was playing samples.  I was playing mine off my Mac IIci, with a little keyboard, and he was playing off records.  He was cutting faster than I was some of the time.  Usually you think, “Okay, I’ve got a sample in every key. I can go from one to the other very quickly.”  He just matched me with every change.  So we got to be friends and really liked each other.  We did a number of projects together.  That was just one of them. He’s a total virtuoso, so that’s why I did a piece with him.

FJO:  You’ve worked with so many different kinds of musicians over the years.  From a stylistic perspective, it’s been very open-ended.  The very first recording I ever heard you on, which was around the time it came out, was Wayne Horvitz’s This New Generation, which is a fascinating record because it mixes these really out there sounds with really accessible grooves and tunes.

CB:  I knew Wayne from college at UC Santa Cruz. He was kind of the ringmaster of the improv scene in the early ‘70s in Santa Cruz.  I wasn’t quite in that group, but I would join it and I picked up a lot about what was going on in improvised music through participating with them in some of their jam sessions.  Wayne and I were friends, so when he moved to New York, I’d sometimes come to visit him.  Eventually, he moved out of New York to San Francisco.  I had an apartment available in my building, so he lived in it.  He was basically living above us. He was continuing to do studio projects, and this was one of them.  He had his little studio setup upstairs and one day he said, “Would you come upstairs and record a couple of tracks for me?” He played his stuff and he asked me to play one of the electro-acoustic instruments that I built, so I did.  I didn’t think too much more of it than that, but then it appeared on this Electra-Nonesuch record and there was a little check for it. It was my little taste of that part of the new music scene that was going on in New York.  Eventually Wayne moved out and now he lives in Seattle. We still see each other occasionally.  It’s an old friendship.

FJO:  You’ve actually done quite a bit of work with people who have been associated with the jazz community, even though I know that word is a limiting word, just like classical is a limiting word. You’ve worked with many pioneers of improvisational music, including Wadada Leo Smith and Butch Morris, and you were also a member of the Glenn Spearman Double Trio, which was a very interesting group.  It’s very sad.  He died very young.

CB:  Very.

FJO:  So how did you become involved with improvised music?

CB:  Well, I was a classically trained pianist and I eventually wound up winning a scholarship and played the [Robert] Schumann Piano Concerto with the Santa Cruz Symphony. But I was starting to realize that that was not going to be my future because I was interested in humanities and the new wave of philosophy—Norman O. Brown.  I got to study with him when I was there, and he told me I should really check out John Cage because he was a friend of Cage’s: “If you’re doing music, you should know what this is.”  So I went out and got the books, and I was completely beguiled and entranced by them.  It was a whole new way of listening to sound as well as music, or music as sound, erasing the boundary.  So I was very influenced by that, but almost at the same time I was getting to know these other friends in the department who were coming more out of rock backgrounds.  They were influenced by people like Cecil Taylor and the Art Ensemble of Chicago and the free jazz improvisers.  These jam sessions that Wayne would run were in some way related.  There were a lot of influences on that musical strain, but that’s where I started improvising.

To me, improvisation seems like the most natural thing in the world.

I was also studying with Gordon Mumma and with a composer named William Brooks, who was a Cage scholar as well as a great vocalist and somebody who’d studied with Kenneth Gaburo. With Brooks, I took a course that was an improvisation workshop where the starting point was no instruments, just movement and words—that part was from the Gaburo influence.  That was a semester of every night getting together and improvising with an ensemble.  I think it was eight people.  I’d love if that had been documented.  I have never seen or heard it since then, but it influenced me quite a bit.  To me, improvisation seems like the most natural thing in the world. Why wouldn’t a musician want to do it?  Then, on the other side of this, people from the New York school were coming by and were really trying to distinguish what they did from improvisation.  I think there was a bit of an uptown/downtown split there.  They were trying to say this is more like classical music and not like improvisation.  It’s a discipline of a different nature.  Ultimately I think it’s a class difference that was being asserted.  And I think Cage had something to do with that, trying to distinguish what he did from jazz.  He was trying to get away from jazz.

I didn’t have much of a jazz background, but I had an appreciation for it growing up in Chicago. I had some records.  At the beginning I’d say my taste in jazz was a little more Herbie Hancock influenced than Cecil Taylor.  But once I discovered Cecil Taylor, when I put that next to Karlheinz Stockhausen, I started to see that this is really kind of the same. This is music of the same time.  It may have been made in totally different ways, and it results from a different energy and feeling from those things, but it’s not that different.  And it seems to me that there’s more in common than there is not.  So I really never felt there was that boundary.  So I participated in sessions with musicians who were improvising with or without pre-designed structures. It was just something I did.

Once I discovered Cecil Taylor, when I put that next to Karlheinz Stockhausen, I started to see that this is really kind of the same.

The first serious professional group I got involved with was a group called Confluence.  This came about in the late 1970s with some of my older friends from Santa Cruz, who’d gone down and gotten master’s degrees at UC San Diego. It was another interesting convergence of these two sides of the world.  They worked with David Tudor on Rainforest, the piece where you attach transducers to an object, pick up the sound after it’s gone through the object, and then amplify it again.  Sometimes there’s enough sound out of the object itself that it has an acoustic manifestation.  Anyway, it’s a fantastic piece and they were basically bringing that practice into an improvisation setting.  The rule of the group was no pre-set compositional design and no non-homemade instruments.  You must start with an instrument you made yourself and usually those instruments were electro-acoustic, so they had pickups on them, somewhat more or less like Rainforest instruments.  The other people in that group were Tom Nunn and David Poyourow.  When David got out of school he wanted to move up to the Bay Area and continue this group.  One of the members of it then had been another designer, a very interesting instrument maker named Prent Rodgers.  And he bailed.  He didn’t want to be a part of it.  So they needed a new member.  So David asked me if I’d be interested, and I was.  I always had wanted to get more involved with electronic music, but being pretty much a classical nerd, I didn’t really have the chops for the technology.  David, on the other hand, came from that background.  His father was a master auto mechanic, from the electrical side all the way to the mechanical side. David really put that skill into his instrument building practice and then he taught it to me, basically.  He showed me how to solder, and I learned from Tom how to weld, because some of these instruments were made out of sheet metal with bronze brazing rods.  I started building those instruments in a sort of tradition they’d begun, searching for my own path with it, which eventually came about when I started taking pianos apart and making electric percussion instruments from it.

So, long story short, I was an improviser before I was a notes-on-paper composer.  That’s how I got into composing.  I started making music directly with instruments and with sound.  It was only as that developed further that I started wanting to structure them more.

FJO:  So you composed no original music before you started improvising?

CB:  There were a few attempts, but they were always fairly close to either Cageian influence or a minimalist influence.  I was trying out these different styles.  Early on, I was a follower and appreciator of Steve Reich’s music. Another thing I did while I was at Santa Cruz was play the hell out of Piano Phase.  We’d go into a practice room and play for hours, trying to perfect those phase transitions with two upright pianos.  I was also aware of Steve’s interest in music from Bali and from Africa. These were things that I appreciated also.

FJO:  I know that you spent some time in your childhood in the Philippines.

CB:  I grew up between the years of five and nine in the Philippines.  It wasn’t a long time, as life goes, but it was also where I started playing the piano.  I was five years old in the Philippines and taking piano lessons there.  I was quite taken with the culture, or with the cultural experience I had let’s say, while I was there.  I went to school with Filipino kids, and it was not isolated in some kind of American compound.  I grew up on the campus of the University of the Philippines, which is a beautiful area outside of the main city, Manila.

FJO:  Did you get to hear any traditional music?

Being an improviser is a great way to get into a cultural interaction.

CB:  Very little because the Philippines had their music colonized.  It exists though, and later I reconnected with musicians at that school, particularly José Maceda, which is another long story in my history.  I’ve made music with Filipino instruments and Filipino composers.  One of the nice things about being an improviser is that collaboration comes much easier than if you’re trying to control everything about the design of the piece of music, so I’ve collaborated with a lot of people all over the place, including performances before we really knew what we were doing.  It’s an exploratory thing you do with people, and it’s a great way to get into a cultural interaction.

Chris Brown in performance with Vietnamese-American multi-instrumentalist Vanessa Vân-Ánh Võ at San Francisco Exploratorium’s Kanbar Forum on April 13, 2017

FJO:  I want to get back to your comment about your first pieces being either Cageian or influenced by minimalism.  I found an early piano piece of yours called Sparks on your website, which is definitely a minimalist piece, but it’s a hell of a lot more dissonant than anything Reich would have written at that time. It’s based on creating gradual variance through repetition, but you’re fleshing out pitch relations in ways that those composers wouldn’t necessarily have done.

CB:  I’m very glad you brought that up.  I think that was probably the first piece that I still like and that has a quality to it that was original to me.  From Reich I was used to the idea of a piece of music as a continuous flow of repetitive action.  But it really came out of tuning pianos, basically banging on those top notes of the piano as you’re trying to get them into tune. I started to hear the timbre up there as being something that splits into different levels.  You can actually hear the pitch if you care to attend to it.  A lot of times the pitch is hard to get into tune there, especially with pianos that have three strings [per note]. They’re never perfectly in tune.  They’re also basically really tight, so their harmonic overtones are stretched.  They’re wider than they should be.  They’re inharmonic, rather than harmonic, so it’s a kind of a timbral event.  So what I was doing was kind of droning on a particular timbre that exists at the top of the piano, trying to move into a kind of trance state while I was moving as fast as I can repeating these notes. The piece starts at the very top two notes, and then it starts widening its scope, until it goes down an octave, and then it moves back up.  It was a process-oriented piece.  There wasn’t a defined harmonic spectrum to it except that which is created when you make that shape over a chromatically tuned top octave of the piano.  It didn’t have the score.  It was something that was in my brain.  It would be a little different every time, but basically it was a process, like a Steve Reich process piece, one of the earliest ones.

FJO:  So when did you create the notated score for it?

CB:  Well, I tried a couple of times, but it wasn’t very satisfactory. I made the first version for a pianist who lives in Germany named Jennifer Hymer. She played it first probably around 2000. Then 15 years later, another pianist at Mills—Julie Moon—played it, and she played the heck out of it. So now there is a score, but I still feel like I need to fix that score.

FJO:  I think it’s really cool, and I was thrilled that there was a score for it online that I could see. You also included a recording of it.

CB:  I just don’t think the score reflects as well as it could what the piece is about.  I always intended for there to be a little bit of freedom in it that isn’t apparent when you just write one set of notes going to the next set of notes.  There has to be a certain sensibility that needs to be described better.

FJO:  Bouncing off of this, though it might seem like a strange connection to make, when I heard that piece and thought about how it’s taking this idea of really hardcore early minimalist process music, but adding more layers of dissonance to it, it seemed in keeping with a quote that you have in your notes for the published recording of Talking Drum, which I thought was very interesting:  “I favor densely complex music, because it allows each listener the freedom to take their own path in exploring a sound field.”  I found that quote very inspiring because it focuses on the listener and giving the listener more choices about what to focus on.

CB:  I think I still agree with that. I’m not always quite going for the most complex thing I can find, but I do have an attraction to it. Most of the pieces that I do wind up being pretty complicated in terms of how I get to the result I’m after, even though those results may require more or less active listening. I was kind of struck last night by the performance I did of Six Primes with Zeena Parkins and Nate Wooley. The harmonic aspect of the music is much more prominent and much more beauty-oriented than the piano version is. When I play the piano version, it’s more about the intensity of the rhythms and of the dissonance of the piano, as opposed to the more harmonious timbre of the harp or the continuous and purer sound of the trumpet; the timbre makes the way that you play the notes different.

An excerpt from Chris Brown, Zeena Parkins and Nate Wooley’s trio performance of Structures from Six Primes at The Stone on June 21, 2017.

FJO: But I think also that this strikes to the heart of the difference between composition and improvisation.  I find it very interesting that you’ve gravitated toward these really completely free and open structures as an improviser, but your notated compositions are so highly structured.  There’s so much going on, and in a piece like Six Primes, you’re reflecting these ratios not just in the pitch relations, but also in the rhythmic relationships. Such complicated polyrhythms are much harder to do in the moment.

CB:  Of course.  But that’s why I’m doing it. I’m interested in doing things that haven’t been done before.  I’ve always felt that being a new music composer, part of the idea is to be an explorer.  Sometimes that motivation is going to get warped by the marketing of the music or by the necessity to make a career, but that was always what I was attracted to about it. From the first moment that I heard Cage’s music, I said, “This is an inventor.  This is somebody who’s inventing something new.”  It’s so exciting and fresh to be at that point where you have this experience that is new.  It’s not easy to get there.  It takes a lot of discipline, but actually to have the discipline is the virtue itself, to basically be following something, testing yourself, looking for something that’s new, until eventually you find it.

I’ve always felt that being a new music composer, part of the idea is to be an explorer.

This is the third cycle of me learning to play these pieces. At first, I just wanted to know it was possible. And next, I wanted to record it. This time, I’m looking to do a tour where I can perform it more than once. Each time I do it, it gets easier. At this point, I’m finally getting to what I want, for example with 13 against 7, I know perfectly how it sounds, but I don’t have to play it mechanically. It can breathe like any other rhythm does, but it has an identity that I can recognize because I’ve been doing it long enough. It seems strange to me that music is almost entirely dominated by divisions of two and three. We have five every once in a while, but most people can’t really do a five against four, except for percussionists. There are a lot of complex groupings of notes in Chopin, but those are gestures, almost improvisational gestures I think, rather than actual overlays of divisions of a beat. Some of this is influenced by my love and interest for African-based musics that have this complexity of rhythm that is simply beyond the capability of a standard European-trained musician, actually getting into the divisions of the time and executing them perfectly and doing them so much that they become second nature so that they can be alive in performance, rather than just reproduced. It’s a big challenge, but I’m looking for a challenge and I’m looking for a new experience that way.

An excerpt from Chris Brown’s premiere solo piano performance of Six Primes in San Francisco in 2014.

FJO:  So do you think you will eventually be able to improvise those polyrhythms?

CB:  Maybe, eventually, but I think you have to learn it first. The improvising part is after you’ve learned to do the thing already.  Yesterday I was improvising some of the time. What you do is you start playing one of the layers of the music. In Six Primes part of the idea is you have this 13 against 7, but 13 kind of exists as a faster tempo of the music, and 7 is a slower one.  They’re just geared and connected at certain places, but at any one time in your brain, while you’re playing that rhythm, it might be a little bit more involved in inflecting the 13 than the 7. Sometimes, when things are really pure, you get a feeling for both of them and they’re kind of talking to each other.  As a performer, I would say that that’s the goal.  It’s probably rarer than I wish at this point.  But the only way you can get there is by lots of practice and eventually it starts happening by itself.  I think it’s the same as if you’re playing the Schumann Piano Concerto.  You’re not aware of every gesture you’re making to make that music.  You’ve put it into your body, and it kind of comes out by rote.  You know you’re experiencing the flow of the music, and your body knows how to do it because you trained it.  So it’s the same with Six Primes, but it’s just the materials are different and the focus is different.
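[Ed. note: as a small arithmetic sketch of the “gearing” Brown describes—not his score—both layers can be placed on their common grid of 13 × 7 = 91 ticks per cycle; since 13 and 7 share no common factor, the two streams coincide only at the start of each cycle.]

    # Worked sketch of 13 against 7 on a shared grid of 13 * 7 = 91 ticks.
    # The 13-layer attacks every 7 ticks, the 7-layer every 13 ticks;
    # they meet only at tick 0, because gcd(13, 7) = 1.

    thirteen_layer = [t for t in range(91) if t % 7 == 0]    # 13 evenly spaced attacks
    seven_layer    = [t for t in range(91) if t % 13 == 0]   # 7 evenly spaced attacks

    shared = sorted(set(thirteen_layer) & set(seven_layer))
    print(shared)   # [0] -- the two streams lock up only once per cycle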

An excerpt from Chris Brown's piano score for Six Primes

An excerpt from the piano score for Six Primes © 2014 by Chris Brown (BMI). Published by Frog Peak Music. All rights reserved. International copyright secured. Reprinted with permission.

FJO:  And similarly to listen to it, you might not necessarily hear that’s what’s going on.  But maybe that’s okay.

CB:  Yes, that goes to the quote that there’s a multi-focal way of listening that I’m promoting; the music isn’t designed to have one focal point. It’s designed to have many layers and that basically means that listeners are encouraged to explore themselves. It’s an active listening rather than that you should be listening primarily to this part and not aware of that part.

The music isn’t designed to have one focal point.

FJO:  In a way, this idea of having such an integral relationship between pitches and rhythms is almost a kind of serialism, but the results are completely different. I also think your aesthetics, and what you’re saying about how one listens to it, is totally different.

CB:  I wouldn’t say it’s modeled on that, but I do like the heavy use of structure. It’s a sculptural aspect of making music. I do a lot of pre-composition. This stuff isn’t just springing out of nowhere. Six Primes actually has a very methodical formal design that’s explained in the notes to the CD. The basic idea is that you have these six prime numbers: 2, 3, 5, 7, 11, and 13. Those are the first six prime numbers. They’re related to intervals that are tuned by relationships that include that number as their highest prime factor. I know that sounds mathematical, but I’m trying to say it as efficiently as possible. For example, the interval of a perfect fifth is made of a relationship of a frequency that’s in the ratio of 3 to 2. So the highest prime of that ratio is a 3. Similarly, a major third is defined by the ratio of 5 to 4. So 5 is the highest prime. There’s also the 2 in there, but the 5 is the higher prime and that defines the major third. There are other intervals that are related to it, such as a 6 to 5, which is a minor third, where the 5 is also the highest prime. And 5 to 3, the major sixth, etc. Basically Western music is based around using 2, 3, and 5 and intervals that are related to that. Intervals that use 7 as the highest prime are recognizable to most western music listeners, but they’re also out of tune by as much as a third of a semi-tone. Usually people start saying, “Oh, I like the sound of that. I can hear it. It’s a harmony, but it sounds a little weird.” Particularly the 7 to 6 interval, which is a minor third that’s smaller than any of the standard ones that Western people are used to, is very attractive to most people but also kind of curious and possibly scary. When you take it to 11, you get into things that are halfway between the semitones of the equal tempered chromatic scale. And 13 is somewhere even beyond that. Okay, so there are all these intervals. The tuning for Six Primes is a twelve-note scale that contains at least two pitches from each of these first six prime factors, which results in a total of 75 unique intervals between each note and every other one in the set.
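
For anyone who wants to check the arithmetic behind those ratios, here is a minimal sketch (Python, not Brown’s own tools) that converts each interval he mentions into cents and reports its prime limit:

```python
from math import log2
from fractions import Fraction

def cents(ratio: Fraction) -> float:
    """Size of a frequency ratio in cents (1200 cents per octave)."""
    return 1200 * log2(float(ratio))

def highest_prime(n: int) -> int:
    """Largest prime factor of n (returns 1 if n == 1)."""
    largest, p = 1, 2
    while n > 1:
        if n % p == 0:
            largest = p
            n //= p
        else:
            p += 1
    return largest

def prime_limit(r: Fraction) -> int:
    """Highest prime appearing in the numerator or denominator."""
    return max(highest_prime(r.numerator), highest_prime(r.denominator))

# The fifth, major third, minor third, septimal ("small") minor third,
# and the septimal seventh that Brown talks about:
for r in [Fraction(3, 2), Fraction(5, 4), Fraction(6, 5), Fraction(7, 6), Fraction(7, 4)]:
    print(f"{r}: {cents(r):7.2f} cents ({prime_limit(r)}-limit)")
```

Running it shows the 7/6 third at roughly 267 cents, about a third of a semitone narrower than the 300-cent equal-tempered minor third, which is the "small" minor third he describes.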

Last year, New World Records released a CD of Chris Brown performing Six Primes.

FJO:  Cellists and violinists tune their instruments all the time and, since their instruments are fretless, any pitch is equally possible. The same is true for singers. But pianists play keyboards that are restricted to 12 pitches per octave and that are tuned to 12-tone equal temperament. And since pianists rarely tune their own instruments, 12-tone equal temperament is basically a pre-condition for making music and it’s really hard to think beyond it. As a classically-trained pianist, how were you able to open your ears to other possibilities?

CB: It was hard. It was very frustrating. It took me a long time, and it started by learning to tune my instrument myself. The first thing was what are these pitches? Why do I not understand what everybody’s talking about when they’re talking about in tune and out of tune? I’m just not listening to it, because I’m playing on an instrument that’s usually somewhat out of tune. Basically pianists don’t develop the same kind of ear that violinists have to because they don’t have to tune the pitch with every note. So I was frustrated by my being walled off from that. But I guess not frustrated enough to pick up the violin and change instruments.

While I was an undergraduate and started getting interested through Cage in 20th-century American music, I discovered Henry Cowell’s piano music, the tone cluster pieces, and I loved them.  I just took to them like a duck to water, and I got to be good at it.  I had a beautiful experience playing some of his toughest tone cluster pieces at the bicentennial celebration of him in Menlo Park in 1976. I really bonded with that music and played it like I owned it.  I could play it on the spot. I had it memorized.   The roar of a tone cluster coming out of the piano was like liberation to me.

FJO:  And you recorded some of those for New Albion at some point.

CB:  That recording came from a concert Sarah Cahill put together of different pianists playing; it was nice that it came out.

FJO:  It’s interesting that you mention Cowell because he was another one of these people like Wayne Horvitz who could take really totally whacked out ideas and find a way to make them sound very immediate and very accessible. It’s never off-putting, it’s more like “Oh, that’s pretty cool.” It might consist of banging all over the piano, but it’s also got a tune that you can walk away humming.

CB:  I like that a lot about Cowell.  He’s kind of unaffected in the way that something attracted him. He wrote these tunes when he was a teenager, for one thing.  But he wrote tunes for the rest of his life, too.  Sometimes he wrote pieces that have no tune at all.  The piece Antinomy, for example, is amazingly harsh. There’s definitely some proto-Stockhausen there, but it’s not serial.  I think that the ability to not feel like you need to restrict yourself to any particular part of the language that you happen to be employing at the moment is something that is really an admirable achievement.  There’s something so tight about the Western tradition that once you start developing this personal language, you must not waver, that this is the thing that you have to offer and it’s the projection of your personality, how will you be recognized otherwise? I think that’s ultimately a straitjacket, so I’ve always admired people like Cowell and Anthony Braxton. Yesterday I was talking to Nate Wooley about the latest pieces that Braxton is putting out where he’s entirely abandoned the pulse; it’s all become just pure melody. He’s changing.  Why do we think that’s a bad idea?  Eclecticism—if you can do it well and can do it without feeling like you’re just making a collage with stuff you don’t understand—is the highest form, to be able to integrate more than one kind of musical experience into your work.

FJO:  It’s interesting that you veered into a discussion about discovering Cowell’s piano music after I asked you about how you got away from 12-tone equal temperament. Most of Cowell’s music was firmly rooted in 12-tone equal, but he did understand the world beyond it and even tried to explore synchronizing pitch and rhythmic ratios in his experiments with the rhythmicon that Leon Theremin had developed right before he was kidnapped and brought back to the Soviet Union.

CB:  I was definitely influenced by [Cowell’s book] New Musical Resources. As I read about the higher harmonics and integrating them into chords, I would reflect back on what it sounds like when you play it on the piano.  It is very dissonant because of the tuning.  And I realized that.  So I thought, “Well, okay, he just never got there.  He didn’t learn to tune his own piano, maybe I should do that, you know.” I get some of that in Six Primes, I think, because there’s an integral relationship between all the notes. Even though the strings are inharmonic, there’s more fusion in the upper harmonics that can happen.  So these very dissonant chords also sound connected to me.  They’re not dissonant in the same way that an equal tempered version of it is.  They have a different quality.

I’m also noticing from the other piece we played the night you attended that was using the Partch scale, if you build tone cluster chords within the Partch scale, you get things that sound practically like triads, only they buzz with a kind of fusion that you can only have when the integral version of major seconds is applied carefully.  You get all kinds of different chords out of that.  It’s wonderful.

FJO:  Now when you say Partch scale, we’re basically talking about 11-limit just intonation, in terms of the highest primes, since the highest prime in his scale is 11.

CB:  Right, but it’s more than that. He did restrict himself to the 11-limit, but he didn’t include everything that’s available within that.  He made careful, judicious selections so that he could have symmetrical possibilities inside of the scale.  It’s actually more carefully and interestingly foundationally selected than I knew before I really studied it closely.

FJO:  But he worked with his own instruments which were designed specifically to play his 43-note scale whereas you are playing this score on a standard 7-white, 5-black keyed keyboard.

CB:  I took an 88-key MIDI controller and I was using it to trigger two octaves of 43 notes.  So I’ve mapped two octaves to the 88 keys. It winds up being 86, but it is possible to do that. I’m thinking in the future of figuring out a way to be able to shift those octaves so I’m not stuck in the same two-octave range, which I haven’t done yet, but that’s kind of trivial programming-wise.
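
A hedged sketch of the kind of keyboard mapping he describes, with an assumed 1/1 of roughly 392 Hz (Partch’s G) and a stand-in scale where the 43 Partch ratios would go:

```python
from typing import List, Optional

BASE_FREQ: float = 392.0   # assumed 1/1 (Partch tuned 1/1 to G, roughly 392 Hz)
# Stand-in scale: 43 equal steps per octave. Swap in the 43 Partch ratios here.
SCALE_RATIOS: List[float] = [2 ** (i / 43) for i in range(43)]
LOWEST_KEY = 21            # A0, the bottom note of an 88-key controller (21-108)

def key_to_freq(midi_note: int) -> Optional[float]:
    """Map a MIDI key to a frequency in a 43-notes-per-octave scale."""
    index = midi_note - LOWEST_KEY               # 0..87 from left to right
    if not 0 <= index < 2 * len(SCALE_RATIOS):
        return None                              # two of the 88 keys go unused
    octave, degree = divmod(index, len(SCALE_RATIOS))
    return BASE_FREQ * (2 ** octave) * SCALE_RATIOS[degree]
```

Shifting the two-octave window he mentions would then just be a matter of adding an octave offset before the lookup.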

FJO:  Of course, the other problem with that is the associations the standard keyboard has with specific intervals.

CB:  You have to forget that part, and that’s why I didn’t do it in Six Primes.  And also, if I’d done it on an acoustic piano, it really messes up the string tension on the piano.

FJO:  Julian Carrillo re-tuned a piano to 96 equal and that piano still exists somewhere.

CB:  Yeah, but you can’t re-tune it easily, let’s put it that way. And it loses its character throughout the range because the character of the piano is set up by the variable tension of the different ranges of its strings.

FJO:  But aside even from that, it changes the basic dexterity of what it means to play an octave and what it means to play a fifth.  Once you throw all those relationships out the window, your fingers are not that big, even if you have the hands of Rachmaninoff.

CB:  It becomes a different technique for sure. I’m not trying to extend the technique. What I’m doing with this is essentially I’m making another chromelodeon, which was Partch’s instrument that he used to accompany his ensemble and to also give them the pitch references that they needed, especially the singers, to be able to execute the intervals that he was writing.

FJO:  Well that’s one of the things I’m curious about.  When you’re working with other musicians obviously you can re-tune the keyboard.  You can re-tune a piano, you can work with an electronic keyboard where all these things are pre-set. But the other night, you were working with a cellist who sang as well and an oboist.  To get these intervals on an oboe requires special fingerings, but most players don’t know them.  With a cello there’s no fretboard, so anything’s possible but you really have to hear the intervals in order to reproduce them.  That’s even truer for a singer.  So how do those things translate when you work with other musicians, and how accurate do those intervals need to be for you?

CB:  Those are two questions really.  But I think the key is that you’ve got to have musicians who are interested in being able to hear and to play them.  You can’t expect to write them and then just get exactly what you want from any musician.  Until we wake up 150 years from now and maybe everybody will be playing in the Partch scale so you could write it and everybody can do it!  That’s a fantasy, but I think we’re moving more in that direction.  There are more and more musicians who are interested in learning to play these intervals and all I’m doing is exploiting what’s there.  I’m interested in it.  I talk to my friends who are, and they want to learn how to play like that and that’s what’s happening.  It’s a great thing to be able to have that experience, but it’s not something you can create by yourself.  You have to work with the people who can play the instruments.  For example, you mentioned the oboe. I asked Kyle [Bruckmann] what fingerings he’s using.  “Shouldn’t I put this in the score?”  And he said, “Most of the time what I’m doing is really more about embouchure.  And it’s maybe something that’s not so easily described.”  So it comes down to he’s getting used to what he needs to do with his mouth to make this pitch come out; he’s basically looking at a cents deviation.  So I’ll write the note, and I’ll put how many cents from the pitch that he’s fingering, or the pitch that he knows needs to be sounded.  He’s playing it out of tune with what the horn is actually designed to create and he’s limited in the way that notes sound.  He can’t do fortissimo on each of these notes.  He’s working with an instrument that’s designed for a tuning that he’s trying to play outside of.  It’s crazy. But so far, I would say it’s challenging, but not frustrating so much if I’m translating his experience correctly.  He seems to be very eager to be able to do it, and he’s nailing the pitches.  Sometimes I test him against my electronic chromelodeon and he’s almost always right on the pitch. He’s looking at a meter while he’s playing.  It’s something that a musician couldn’t have done 10 or 15 years ago before those pitch meters became so cheap and readily available.
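
The bookkeeping he describes can be sketched in a few lines (an illustration, not his notation software): take the target just-intonation frequency, find the nearest equal-tempered note, and print the cents deviation the player will match against a tuner.

```python
from math import log2

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
A4 = 440.0   # assumed concert pitch reference

def nearest_et_note(freq: float):
    """Return (equal-tempered note name, cents deviation) for a frequency."""
    midi_float = 69 + 12 * log2(freq / A4)           # fractional MIDI number
    midi_nearest = round(midi_float)
    cents_dev = 100 * (midi_float - midi_nearest)    # positive means sharp
    name = NOTE_NAMES[midi_nearest % 12] + str(midi_nearest // 12 - 1)
    return name, cents_dev

# The 7/6 third above A4 lands about 33 cents below an equal-tempered C5:
print(nearest_et_note(440 * 7 / 6))
```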

More and more musicians are interested in learning to play these intervals.

FJO:  James Tenney had this theory that people heard within certain bands of deviations. If you study historical tunings like Werckmeister III, the key of C has a major third that’s 390 cents. In equal temperament, it’s 400 cents which is way too sharp since a pure major third is 386. You can clearly hear the difference, but a third of 390 is close enough to 386 for most people.

CB:  I always say when I’m working with musicians, think of these as targets. If you played it perfectly, like a computer, it wouldn’t sound that good. For example, last night, we had to re-tune the harp to play in the Six Primes tuning. Anybody who knows about harp tuning realizes there’s seven strings in the octave and you get all the other notes by altering one semitone sharp or flat on one of those strings. So it was a very awkward translation. Basically we had a total of 10 of the 12 Six Primes pitches represented. Two of them we couldn’t get. And the ones that we had were sometimes as much as 10 cents out, which is definitely more than it should be to be an accurate representation. But again, this is where the loose comes in with the tight.

In certain cases that wouldn’t work, but in a lot of cases it does. A slight out-of-tuneness can result in a chorus effect as part of the music, and I like that; it gives a shimmer. It’s like Balinese tuning. If that’s what we have to accept on this note, well then so be it, you know. It actually enriches the music in a way. It’s not about getting more purity. That’s what I feel like. There’s a thing I never quite agreed with Lou Harrison about, because he was always saying these are the real pure sounds. These are the only right ones. But they can get kind of sterile by themselves. He didn’t like the way the Balinese mistuned things. But from all those years of tuning pianos, I love the sound of a string coming into tune, the changes that happen; it makes the music alive on a micro-level. It’s important to be able to hear where the in-tune place is, but to play around that place is part of what I like. I don’t expect it to be perfectly in tune. Maybe it’s because I play a piano and on the extreme ranges of the piano, you can’t help that the harmonics are out of tune. They just are. There’s always this element that’s a little out of control, as well as the part that we can master and make truly evoke harmonic relationships.

FJO:  Now in terms of those relationships, is that sense of flexibility and looseness true for these rhythms as well?  Could there be rubatos in 17?

I don’t expect it to be perfectly in tune.

CB:  Yeah, I think that’s what I was saying about being able to play the rhythm in a lively way.  They can shift.  They can talk to each other.  Little micro-adjustments to inflect the rhythm.  If we’re playing a waltz, it’s not a strict one-two-three; there’s a little push-me pull-you in there. That’s how you give energy to the piece.  I think that it’s hard to get there with these complex relationships, but it’s definitely possible.

FJO:  So is your microtonal music always based on just intonation?  Have you ever explored other equal temperaments?

CB:  I’ve looked at them, but they don’t interest me as much because I’m more attracted to the uneven divisions than to the even ones.  Within symmetrical divisions, you can represent all kinds of things and you can even make unevenness out of the evenness if you like.  But it seems like composers get drawn to symmetrical kinds of structures, rather than asymmetrical ones.  Symmetry is fine, but somehow it reminds me of the Leonardo figure inside the triangle and the circle.  It’s ultimately confining.  I like the roughness and the unevenness of harmonic relationships.

FJO:  We only briefly touched on electronics when you said that you had a rough start with it as a classical music nerd. But I was very intrigued the other night by how Kyle Bruckmann’s oboe performance was enhanced and transformed by real-time electronic manipulations in Snakecharmer, and I was very curious after you mentioned that you had figured out how to make this old piece work again. I know the recording that Willie Winant made of that piece that was released in 1989, but to my ears it sounds like a completely different piece.  I think I like the new piece even more because it sounds more like a snake charmer to me this time; I didn’t quite understand the title before.

CB:  There are three recorded versions of that old piece.

FJO:  That was the only one I’ve heard.

CB:  They’re on the Room record.

FJO:  I don’t know that record.

CB:  Okay, that was rare.  It was a Swiss release.  But that’s kind of an important one for me in my development with electro-acoustic and interactive music. I should get it to you.  Anyway, the basic idea is any soloist can be the snake charmer, the person who’s instigating the feedback network to go through its paces and sort of guiding it.  Probably the strangest was when Willie did it because he can’t sustain.  He’s basically playing percussion, and he’s just basically playing whatever he hears and interacting with it intuitively.  But another version of it was with Larry Ochs playing sopranino saxophone so that’s probably closer; you might hear the relationship there.  It’s more the traditional image of the snake charmer.  It sounds an awful lot like a high oboe; that was a good version.  There’s also the version that I performed, singing and whistling as the input.  Those were three different tracks, but they all start out in a similar way.  Basically the programming aspect is that it goes through a sequence of voices.  And each of those voices transposes the input that it’s receiving from the player in different intervals as the piece goes on.  So there’s a shape of starting with a high transposition going down to where it’s no transposition and below and up again.  It’s a simple sinusoid-type shape.  The next voice comes in and does the same thing with a slightly different rhythmic inflection, then two voices come in together and fill out the field.  That’s the beginning of Snakecharmer in every version so far.  There are about six different voicing changes which are in addition to transposing in slightly different ways to provide rhythmic inflections.  They only respond on the beat. Whatever sound is coming in when it’s time for them to play, that’s the sound that gets transposed.  There are four of these processes going on at once.  Once again, it’s that complexity going on in the chaos created by these different orderings, transpositions of the source.  The other thing is the reason it’s a feedback network is that there comes a point where the player is playing, the sound responds to it, and then the sound that it responds with is louder than what the player’s doing, and that follows itself.  So you start getting a kind of data encoded feedback network that I think of as the snake, an ouroboros snake that’s eating its own tail.
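
A schematic sketch of the voice behavior he describes (illustrative names and ranges, not Brown’s actual program): each voice wakes only on the beat, grabs whatever input is sounding, and transposes it along a slow sinusoid that starts high, dips below unison, and rises again.

```python
import math

class SnakecharmerVoice:
    """One voice: wakes on each beat, transposes whatever input is sounding."""

    def __init__(self, depth_semitones=12.0, cycle_beats=64, phase_offset=0.0):
        self.depth = depth_semitones   # how far above/below unison it swings
        self.cycle = cycle_beats       # beats per full sweep of the sinusoid
        self.phase = phase_offset      # stagger the voices against each other
        self.beat = 0

    def transposition(self) -> float:
        """Current transposition in semitones: starts high, dips below, rises."""
        angle = 2 * math.pi * (self.beat / self.cycle + self.phase)
        return self.depth * math.cos(angle)

    def on_beat(self, input_pitch: float) -> float:
        """Called once per beat with whatever pitch is sounding right then."""
        out = input_pitch + self.transposition()
        self.beat += 1
        return out

# Four voices, staggered in phase; when their combined output in the room gets
# louder than the player, the network starts responding to itself.
voices = [SnakecharmerVoice(phase_offset=i / 16) for i in range(4)]
```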

FJO:  How much improvisation is involved?

CB:  Quite a bit.  I’ve never provided a score. I just tell the person what’s going on and ask them to explore the responsiveness of the network. Usually I’m tweaking different values in response to what they’re doing, so it’s a bit of a duet.

FJO:  Taking it back to Talking Drum, you have these notes explaining how people are walking around in this environment. There are these field recordings, and then there are musicians who are responding to them.  I can partially hear that, but I’m not exactly sure what I’m hearing.  Maybe that’s the point of it to some extent.

CB:  That’s not quite right.  We have the recording called Talking Drum.  That is a post-performance production piece that uses things that were recorded at different Talking Drum performances.  That uses field recordings.  In a performance of Talking Drum, there are no field recordings. Basically, the idea is that there are four stations that are connected with one MIDI cable. That cable allows them to share the same tempo. At each of the stations is a laptop computer, and a pitch follower, and somebody who’s playing into the microphone. So, the software that’s running is a rhythmic program I designed that I can give a basic tempo and beat structure to that can change automatically at different points in time, but that also responds to input from the performer, the basic idea being that if the player plays on a beat that’s a downbeat, that beat will be strengthened in the next iteration of the cycle. It basically adjusts to what it hears in relationship to its own beat cycle. The idea of the multiplicity of those stations where that’s happening, is that they are integrated by staying on the same pulse through the cable. The idea is that the audience is moving around the space that this installation is in and the mix they hear is different in each location. As they move, it shifts. It’s as if they were in a big mixing console, turning up one station and then turning down the other. What I was trying to do was to create a big environment that an audience can actively explore in the same way that I’ve talked about creating this dense listening environment and asking people to listen to different parts on their own. That actually came about from the experience of going to Cuba in the early ’90s, and being at some rumba parties where there were a lot of musicians spread out in different places. I wandered around with a binaural recorder and I recorded the sound as I was moving. Then when I listened to the recording, I was getting this shifting, tumbling sound field and I thought: “There’s no way you could ever reproduce this in a studio. It’s a much richer immersive way of listening. Why can’t I use this as a way to model some experience for live performance or for live audiences?”
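
The adaptive rhythm idea he describes can be sketched as a cycle of per-beat weights (a loose illustration, not Brown’s software): beats the performer lands on are strengthened in the next pass, while the rest slowly relax, and all four stations would read the same clock over the shared MIDI cable.

```python
class TalkingDrumStation:
    """One station: per-beat accent weights that adapt to what the player does."""

    def __init__(self, beats_per_cycle=16, boost=0.2, decay=0.95):
        self.weights = [1.0] * beats_per_cycle   # accent strength for each beat
        self.boost = boost                       # gain for a beat the player hit
        self.decay = decay                       # unplayed beats relax toward 1.0

    def end_of_cycle(self, played_beats):
        """played_beats: indices where the pitch follower heard the performer
        land on the station's beat; those get strengthened next time around."""
        for i in range(len(self.weights)):
            if i in played_beats:
                self.weights[i] += self.boost
            else:
                self.weights[i] = 1.0 + (self.weights[i] - 1.0) * self.decay

station = TalkingDrumStation()
station.end_of_cycle({0, 4, 7})   # the player landed on beats 0, 4, and 7
```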

In 2005, Pogus Productions issued a CD realization of Chris Brown’s Talking Drum.

FJO:  It actually reminds me of when I first heard Inuksuit, the John Luther Adams piece for all the percussionists.  It was impossible to hear everything that was going on at any one moment as a listener. That’s part of the point of it which, in a way, frustrates the whole Western notion of a composition being a totality that a composer conceives, interpreters perform, and listeners are intended to experience in full like, say, the Robert Schumann Piano Concerto. Interpretations of the Schumann might differ and listeners might focus on different things at different times, but it is intended to be experienced as a graspable totality, and a closed system. Whereas creating a musical paradigm where you can never experience it all is more open-ended, it’s more like life itself since we can never fully experience everything that’s going on around us.  But I have to confess that as a listener I’m very omnivorous and voracious so it’s kind of frustrating, because I do want to hear it all!

Compositions are more or less instructions, but I’m not going to presume that they’re going to do it exactly the same way every time.

CB:  Sorry! I think that’s part of the Cage legacy, too. You don’t expect to have it all and what you have is a lot.  Everybody in that Schumann Piano Concerto is hearing something slightly different, too, but there’s this idea somehow that this is an object that’s self-contained.  It’s actually an instruction for a ritual that sounds different every time it’s done.  But I think the ritual aspect of making music is something that really interests me and I would hate to be without it.  Compositions are more or less instructions for what they should do, but I’m not going to presume that they’re going to do it exactly the same way every time.  Maybe some of them think they do, but I don’t think performing artists do that really. It’s mostly about making something that’s appropriate to the moment even if it’s coming from something that’s entirely determined in its tonal and rhythmic structure. That to me is what makes live music always more interesting than fixed media music.  It’s actually not an object.  It’s not something that doesn’t change as a result of being performed.   Of course, fixed media depends on how it’s projected.

FJO:  Perhaps an extreme example of that would be the kinds of work that you do as part of the Hub—electronic music created in real time by a group of people who are physically separated from each other yet all networked together, but there’s really no centralized control, and that’s kind of part of the point of it.

CB:  That’s right.  The idea is to set up the composition process, if you can call it that. It’s not really the same as composing, but it’s a designing.  You’re designing a system that you believe will be an interesting one for these automated instruments to interact inside of.  What we do is usually a specification; each piece has verbal instructions about how to design a system to interact with the other systems.  Then we get it together and get them working and they start making the sound of that piece which is never the same exactly, but it’s always recognizable to us as the piece that it is, because it’s a behavior. I would say within our group we get used to the kinds of sounds that everybody chooses to use to play their role in the piece, so it starts to get an ad hoc like personality from those personal choices that each person makes.

An excerpt of a networked computer performance by John Bischoff, Chris Brown and Tim Perkis (co-founders of the legendary computer network band The Hub) from the Active Music Series in Oakland’s Duende, February 2014.

FJO:  In terms of focusing listening, and perhaps you’ll debate this with me, it seems that, as listeners, we’re trained to focus on a text when a piece has a text. If someone’s singing words, those words become the focal point.  I hadn’t heard much music of yours featuring a text, but I did hear your new Jackson Mac Low song cycle the other night.

CB:  I don’t write a lot of songs, but when I do I find it’s usually a pleasure to work with a pre-set structure that you admire; it’s like you’re dressing up what’s already there rather than having to decide where it goes next.  Of course, you’re making decisions—like what is this going to be, is it going to be different, how is it going to be different, how is it going to be the same?—but it’s nice to have that kind of foundation to build on.  It’s like collaboration.

FJO:  I thought it was beautiful, and I thought Theresa Wong’s voice was gorgeous. It was exquisite to hear those intervals sung in a pure tone and her diction was perfect, which was even more amazing since she was simultaneously playing the cello. But, at the same time, the Stone has weird acoustics.  It’s a great place, but it’s a hole in the wall that isn’t really thought out in terms of sound design so it was obviously beyond your control. I was sitting in the second row and I know Jackson Mac Low’s poems. So when I focused in, I could hear every word she was pronouncing. But I still couldn’t quite hear the words clearly, as opposed to the vocals on Music of the Lost Cities where I heard every word, since obviously, in post-production, you can change the levels. But it made me wonder, especially since you have this idea of a listener getting lost in the maze of what’s going on, how important is it for you that the words are comprehensible?

Music of the Lost Cities from Johanna Poethig on Vimeo.

CB:  Maybe it’s just me, but even in the best of circumstances, I have trouble getting all the words in songs that are staged.  Maybe it’s because I’m listening as a composer, so I’m always more drawn to the totality than I am just to the words.  Most regular people who are into music mostly through song are very wrapped up in the words.  But I’m not sure Mac Low’s words work that way anyway.  I think they are musical and they are kind of ephemeral in the way that they glow at different points.  And if you don’t get every one of them, in terms of what its meaning is, it’s not surprising.  It’s kind of a musical and sonorous object of its own.  So I guess I’m not exceptionally worried about that, although in the recording, I probably do want a better projection of that part of the music than what happened at the Stone.  I was sitting behind her and I was not hearing exactly what the balance was.  In the Stone, there are two speakers that are not ideally set up for the audience, so it’s not always there exactly the way you want it to be.

FJO:  So is this song cycle going to be on the next recording you do?

Most regular people who are into music mostly through song are very wrapped up in the words.

CB:  I hope we’re going to record it this summer, actually.  It’ll be a chance to get everything exactly right.  I’m very pleased that people are recognizing the purity of these chords that are being generated through the group, but there hasn’t been a perfect performance yet.  Maybe there never will be.  But the recording will get closer than any other one will, and that’ll be nice to hear, too.

FJO:  It’s like the recording project of all the Ben Johnston string quartets that finally got done. For the 7th quartet, which has over a thousand different intervals, they were tuning to intervals they heard on headphones and using click tracks in order to be able to do it. And they recorded sections at a time and then patched it all together. Who knows if any group will ever be able to perform this piece live, but at least there’s finally an audio document of what Ben Johnston was hearing in his head.

CB:  I think that’s really a monumental release.  Ben Johnston’s the one who has forged the path for those of us trying to make Western instruments play Harry Partch and other kinds of just intonation relationships.  It’s fantastic.  But I think the other thing that seems to be true is that if you make a record of it, people will learn to play it.  For example, Zeena and Nate the other night, in preparation for that performance, I was sending them music-minus-one practice MP3 files so that they could basically hear the relationships that they should be playing.  It helps a lot.  Recordings also definitely help to get these rhythmic relationships. I often listen to Finale play them back, just to check myself to see if I’m doing them correctly.  A lot of times, I’m not.  It drifts a little bit.

FJO:  But you said before that that’s okay.

CB:  But I want to know where it’s drifting.  I want to know where the center is as part of my learning process.  I use a metronome a lot, and I use the score a lot to check myself, and get better at it.

FJO:  You’ve put several scores of yours on your website. Sparks is on there.  Six Primes is on there.  And there’s another piece that you have on there that’s a trio in 7-limit just intonation—Chiaroscuro. Theoretically anybody could download these scores, work out the tunings for their instruments and play them.

CB:  Sure. Go for it. But they’re published by Frog Peak, so they can get the official copy there. I would like to support my publisher. Because of the way that my compositional practice has developed, a lot of my scores are kind of a mess. I have a lot of scores, but I haven’t released them because they’re kind of incomplete. They often involve electronic components that are difficult to notate, and I haven’t really figured out the proper way to do that. Where there are interactive components, how do you notate that? I’m not that interested in making pieces for electronics where the electronics is fixed and the performer just syncs to it. There’s only one piece I’ve played where I really like doing that and that’s the Luc Ferrari piece Cellule 75 that I recorded, where the tape is so much like a landscape that you can just vary your synchronization with it.

FJO:  It’s interesting to hear you say that because back in 1989, you said…

CB:  Okay.  Here it comes.

FJO:  “I want electronics to enhance our experience of acoustics and of playing instruments.  Extending what we already do, instead of trying to imitate, improve upon, or replace it.”

A model is never a complete reading of the world.

CB:  Yeah, that was important.  That came out at a time when the industry was definitely moving towards more and more electronic versions of all the instruments, usually cheap imitations.  Eventually those become personalities of their own, but it seems to me they always start like much lesser versions of the thing they’re modeled on.  Maybe it has something to do with this idea of models.  We’re moving more and more into a virtual reality kind of world and I think it’s really important that we don’t lose the distinction between the model and the thing it’s modeled on. I think it’s pretty dangerous to do that, actually.  The more people live in exclusively modeled environments, the more out of touch they’re going to get and probably the sicker they’re going to get because a model is never a complete reading of the world.  It’s a way to try to understand something about that world. If you’re a programmer, you’re always creating models.  In a sense, a synthesizer is modeled on an acoustic reality. But once it comes out of the box into the world, it’s its own thing.  It’s that distinction I’m trying to get at.  I think we’re often seduced by the idea that the synthesized thing will replace the real thing rather than the synthesized thing just becoming another reality.  That’s why I’m interested in mixing these things:  singing with the synthesis. Becoming part of a feedback system with a synthetic instrument brings that into a space and into a physical interaction. That seems to be more of a holistic way of expanding our ability to play music with ourselves, with our models of ourselves, with each other through models, or just seeing the models execute music of their own.  The danger comes when you try to make them somehow perfect an idea of what reality is and it becomes the new reality instead of becoming just a new part of the real world.

Martha Mooke: Walls, Windows, and Doors

Putting together a career as a working musician has never been easy, but one of the mantras for making it possible in the 21st century is: you must multitask. Most musicians multitask out of necessity, but for others it’s actually the source of their inspiration. And then there’s someone like Martha Mooke, who is engaged in so many different types of musical activities on a regular basis that it’s difficult for anyone else to keep track of them all. In any given week, she could be performing a solo concert on her electric five-string viola, playing in the viola section of a symphony orchestra or a Broadway pit orchestra, touring with a famous rock musician or with one of her own improvisational groups, and/or giving educational clinics to young string players on how to find their musical voice.

“I’ve had to come to terms with my different personalities,” Mooke acknowledges when we caught up with her in between gigs at the New York offices of ASCAP. ASCAP was actually a fitting place for us to talk, since it was through her ASCAP-produced Thru The Walls, a series of concerts that focused on composer-performers who worked in a variety of musical genres, that she first met David Bowie, which ultimately led to her performing and recording with him and then with a whole host of other luminaries.

“I wanted to have that juxtaposition of music worlds … all types of influences: jazz, electronics, rock, all kinds of things,” Mooke remembers. “I spoke with Tony [Visconti], who had a very broad background and broad interests. What could be better than having a renowned, legendary rock and roll producer introducing a new music concert? … Tony was living up in Rockland County at that time. I’d gone over to Tony’s house … and as I was leaving, he said, ‘By the way, I mentioned to my friend David this event tonight, and he said, he might come.’ And I’m like, ‘Right; sure.’ But sure enough, two minutes before the lights go down, in walks David Bowie.”

Within a year, the string quartet she put together to perform with Bowie appeared with him on the stage of Carnegie Hall for the annual Tibet House benefit and also in the recording studio for his 2002 album Heathen. She described similar chains of circumstances that led to her appearing on tour around the United States and Europe with Barbra Streisand in 2006 and 2007, her extensive educational work under the auspices of Yamaha (which is still ongoing), and one of her more recent obsessions—writing for symphonic wind band.

“I think it began almost as a joke, in a way,” she recalls. “I had never thought about writing for concert band. I had really, at that point, never written for an ensemble larger than a string quartet or a chamber ensemble. Then finally we said, ‘Let’s just do it.’ So I came up with the concept of X-ING … It’s the crossing of the worlds between electric viola and concert band. What happens when you cross those worlds? One of the things that happens is you don’t have to tell the band to play quieter because a string instrument is soloing. I just crank my volume; I go to 11.”

But no matter what musical activity she is involved in, she always views it as an opportunity not just to break through walls, but to open doors or to look out through a window in a new way. It’s a crucial life lesson that she taught herself very early on and one that she hopes to impart to others.

“I never accepted limitations and boundaries no matter what I was doing, whether it was because I was female or because whatever. If I liked doing something and had an interest in it, I just did it. I found opportunities. If the opportunities don’t exist, I make them. … I’m about overcoming those barriers and breaking through that inhibition factor, which seems to get more built into students as they go through school. … Unlimited possibilities. I would say you never know what you’ve been missing until you know what you’ve been missing.”


Martha Mooke in conversation with Frank J. Oteri at the NYC offices of ASCAP
February 14, 2017—1:00 p.m.
Video presentations by Molly Sheridan
Transcribed by Julia Lu

Frank J. Oteri:  I’m going to begin in a very unlikely place; I’m going to compare you to Gunther Schuller.

Martha Mooke:  Wow.  I’m actually honored.

FJO:  Well, one of the pieces of trivia regarding Gunther Schuller is that he was the only person who performed with both Toscanini and Miles Davis.  Plus the instrument he played was the French horn, which is not an instrument that you normally think of as being able to genre hop. That’s also true for your instrument, the viola. And yet, you’ve performed with David Bowie.  You’ve worked with Osvaldo Golijov, Lady Gaga and Tony Bennett, Alvin Singleton, and Barbra Streisand—so many people most people would never think of in the same sentence.

MM:  Right.  Actually it’s interesting that you mention Gunther Schuller because I’ve been doing a lot of research for this new piece that I’m writing for Symphony Space called Beats per Revolution.  It’s for electric viola, beat boxer, and chamber ensemble. All the musicians in the ensemble will be improvising, so one of the people I’ve been studying is Charles Mingus.  I actually purchased his score of Epitaph, which is a two-and-a-half-hour monster piece.  Gunther Schuller helped to finish that and he conducted it, so it’s kind of cool that you started out with that.  I’ve been immersed in Mingus and Gunther Schuller for the last few weeks.

FJO:  The other thing about Schuller is that in the ‘50s he codified this notion of there being a Third Stream—there was classical music, there was jazz, and then there was this third thing that emerged from connecting the other two. But for you, it’s not three; it’s not even four or five. You’ve gone beyond streams; you’re in an ocean of music!

MM:  I’m calling one of the segments of Beats per Revolution “Third Stream of Consciousness,” and that will be a little homage to that genre.  But I love instruments that you don’t think of as being in the forefront, improvising or playing in non-traditional ways, like a bassoon playing jazz.  Or a French horn.  It’s a wonderful opportunity to open things up from the inside.

FJO:  Interesting that you say open things up from the inside, because the horn and the viola are both essentially mid-range instruments.  We won’t get into viola jokes.

MM: We can laugh at them; we’ve overcome that.

FJO:  But the thing about the viola is that most people don’t know what it is.  If they see it, they’ll probably think that it’s a violin that’s a little too big.  Most people would probably just say it’s a violin, if they know the word violin.  On top of that, the viola is the only instrument that plays music written in this oddball clef that no one else can read.

MM:  It’s the only clef that really makes sense because middle C is actually on the middle line in alto clef.

FJO:  It really is in the middle, yet it’s a total outsider in a way.

“I love instruments that you don’t think of as being in the forefront, improvising or playing in non-traditional ways.”

MM:  Right. I think whatever instrument you’re playing needs to resonate with your soul. I started on the viola because in my public school class, when I was in fifth grade, the music teacher came in and said, “We have violins, violas, cellos, and basses; who wants to play violin?”  And pretty much everybody raised their hand.  Nobody knew what a viola was.  A few people knew what a cello was.  I always go the route of most resistance, so I picked the viola.  I worked with it and it resonated with me to the point where the music teacher wanted me to switch to violin because I was actually progressing a little more rapidly than the violin players were.  So I took one home one weekend, but I brought it back because it didn’t resonate under my ear; it didn’t do anything to my soul.  I’ve overcome that now, by adding the fifth string, but that’s how I began as a violist.

FJO:  I’m sure the reason why most students gravitate to the violin is that they are hoping to become soloists. A viola soloist is rare, but at that point you were just playing viola in your school’s string orchestra.

MM:  Yes. The middle school teacher came to the elementary school and started the program to feed us into the middle school.  Then that fed into the high school.  They were all public New York City schools.

FJO:  Wow, you’re a poster child for public school education and for music education programs in the school system.

MM:  Absolutely.

FJO:  This is something that we don’t quite have to the same extent anymore.

MM: There are a few programs still around, but it’s definitely not on the same level as it was in those days.

FJO:  It’s fascinating to me that playing the viola resonated with you so much that when the teacher asked you to switch to the violin, you tried the instrument and it didn’t speak to you.  At what point did you think to yourself that playing this instrument was what you want to do for the rest of your life?

MM:  I just kept doing it.  I was studying other things in school, but there was just something about music and playing the viola at Tottenville High School in Staten Island. I was a member of the string quartet and in the orchestra I got to play Bach’s Sixth Brandenburg with my sister, who at that time also played viola with the orchestra.  They also had music theory in this high school, so I just kept going because I was proficient and I loved it. So I was exploring and I was learning, taking private lessons and playing with community orchestras.

FJO:  The Sixth Brandenburg has no violins and so the violas are really carrying the melodies, which is pretty rare in the repertoire.

MM: Right.

From very early in Martha Mooke’s career: a very young Martha Mooke playing viola at home in front of a bunch of potted plants.

FJO: The interesting thing about identifying with the viola and it being your instrument is that it really didn’t function so much as a foreground instrument until the mid-20th century.  I doubt that either the school or the community orchestra you were involved with was performing the Bartók Viola Concerto.

MM:  No. But I never accepted limitations and boundaries no matter what I was doing, whether it was because I was female or because whatever.  If I liked doing something and had an interest in it, I just did it.  I found opportunities.  If the opportunities don’t exist, I make them.  It never occurred to me that viola could not be a solo instrument. Then somebody gave me an album of Jean-Luc Ponty in my last year of high school, I think.  That opened up all kinds of new worlds for me, and I started delving into non-traditional string playing.

FJO:  Had you written any of your own music by that point?  Had you improvised?  Or were you just playing other people’s music?

“If the opportunities don’t exist, I make them.”

MM:  I used to write songs with a guitar.  I wrote a lot of singer-songwriter songs, and then I stopped because I felt like I got a little bit stuck.  I loved to sing.  My sister and I would sing together, but I didn’t see that that was going to be my career path.  I wanted to do something a little more than write songs.  After listening and exploring the world of Jean-Luc Ponty, I went and explored any jazz violinist I could find because I don’t think there were that many jazz or electric violists at that time. I hadn’t yet encountered The Velvet Underground with John Cale, but Turtle Island String Quartet was also popular back in those early days. So I went out and bought all the albums that I could, closed the shades and closed the doors, put on music and just started playing with it, improvising to it.  When I went away to college, I did that as well.

FJO:  Cale’s stint in The Velvet Underground pre-dates the Turtle Island String Quartet, but that probably wouldn’t have been something anyone would have exposed you to by the time you were in high school.

MM:  No, not at that point.  In fact, I didn’t discover them until the day I got a call to go on tour with John Cale.

FJO:  Really!

MM:  Then a whole other world opened up.  I ended up recording and doing a bunch of tours with John Cale and the Soldier String Quartet.

FJO:  Without having heard The Velvet Underground?

MM: Yeah, I didn’t really know about that world.

FJO:  Even though you grew up in New York City, your family probably didn’t listen to that music. Were they interested in classical music? What did your family listen to?

MM:  Neither of my parents were into music.  My father loved Gilbert and Sullivan, so we listened a lot to The Mikado and The Pirates of Penzance. And we watched the Boston Pops.  That was my classical music. That’s how I got to love Stravinsky and The Rite of Spring and started tuning into that world.  But I just gravitated towards music and my parents always supported me in that and paid for lessons.

FJO:  So there was no rock and roll in the household?

MM:  Not really.  There was more traditional stuff like Peter, Paul and Mary.

FJO: And I suppose even for people who were fans of harder rock, The Velvet Underground wouldn’t have been the mainstream.

MM:  Well, don’t forget, I also grew up in Staten Island.  At that point, to get to the city you almost needed a passport!  And when you’re not driving, it’s that much harder.  It takes two hours to get to the city from Staten Island, so there’s a big culture gap in many ways.

FJO:  But hearing Jean-Luc Ponty opened your world up. Not just to improvisation but also to amplification and electronics.

MM:  The album that changed my world was called A Taste for Passion.  On the cover, Jean-Luc Ponty is cradling a beautiful blue five-string Barcus-Berry violin.  Within a year or so I convinced my parents to take me to Manny’s on 48th Street, and I went in and I bought a Barcus-Berry, the same color five-string. It was my first electric instrument.  I still have it in my instrument closet.

FJO:  So what’s the difference between a five-string violin and a five-string viola?

MM:  The range is the same, but on the viola you’ll have a longer fingerboard, which is pretty much the difference.  You don’t need to have a bigger body because you’re amplifying it, although the size of the body and the material of the body of an electric instrument does impact the sound.  But that instrument was an electric violin.  I couldn’t find any violas.  But it was five-string, which meant I could have the range of the violin and viola.  So I started exploring. I bought an old delay [unit] called an Effectron.  It was digital and analog. You had to push the buttons and you could only do one effect at a time.  But I made my very first demo tape with that. I didn’t have a recording studio or anything, and I wanted to apply for a residency at Harvestworks.  So I took headphones, plugged them into the headphone jack, put them on the floor, put the microphone to my tape recorder, put towels on the bottom and on the top, and that’s how I made my demo record.  They accepted it, and I got the residency. And because of that, I started recording my very first CD, Enharmonic Vision.

Martha Mooke in performance on an electric five-string viola.

FJO:  Now before we make that jump, we’ve already made another jump, because at this point you’re creating your own music.  You went from playing viola in orchestras and performing classical music with lots of other people, to hearing Jean-Luc Ponty and wanting to improvise on an electric five-string. You’d written songs on a guitar and you sang, but when did you get the idea that you could make your own music for this instrument, and when did you decide to create music specifically for you to perform by yourself?

MM:  I just started to explore the world of improvisation in combination with the electronics.  I never studied formally.  I didn’t study jazz.  I didn’t study composition.  I was self-schooled in a way.  I discovered it on my own, so there was no wrong way.  I asked a lot of people what amplifier and what effects to get, but every person I asked had something totally different to say.  So I ended up doing just lots of trial and error, experimentation with sounds.  I discovered digital delay and that became a looping device; it was like an infinite echo.  You couldn’t start and stop it at any time; the loops were just four seconds or eight seconds. But that’s where I started exploring.  Then I went to my first AES convention—Audio Engineering Society. I walked in and there was a guitar player there with this looping device called the JamMan made by Lexicon. I stood in front of this guy, and it was this big thing—eight seconds of delay that you could start and stop and so have control over.  So as soon as it became available on the market, I bought it and started working with it.  I was able to expand that to 32 seconds and, adding more electronics and just experimenting and building sounds, I started—through improvisation—creating works.
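
What she describes is essentially a fixed-length delay line with the feedback turned all the way up: whatever goes in comes back around at full strength and repeats as a loop. A minimal sketch of that idea, with illustrative numbers rather than the JamMan’s actual specs:

```python
SR = 44100                        # sample rate in Hz
LOOP_SECONDS = 8                  # fixed loop length, like an early hardware unit
buffer = [0.0] * (SR * LOOP_SECONDS)
write_pos = 0

def process(sample: float, feedback: float = 1.0) -> float:
    """One sample of a looping delay: read what was written one loop-length ago,
    mix the new input in, and write it back. feedback=1.0 repeats indefinitely."""
    global write_pos
    delayed = buffer[write_pos]                  # what went in LOOP_SECONDS ago
    buffer[write_pos] = sample + feedback * delayed
    write_pos = (write_pos + 1) % len(buffer)
    return delayed
```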

FJO:  There had been many people messing with delay units independently of one another by that time; it’s been part of the zeitgeist since the late ‘60s when Terry Riley experimented with his time lag accumulator and when Robert Fripp and Brian Eno had done concerts together in the early ‘70s. In fact, this was well after Fripp had started doing his solo Frippertronics, which was also a way of being an orchestra of one by controlling various effects units.  You hadn’t heard any of that stuff yet?

MM:  No.

FJO:  Well, all of that was improvisation-based music. No one was “writing” music involving delay units, at least not that anyone was aware of at the time.

MM: There was no repertoire, so again, just out of necessity, I started creating repertoire. Then, having a lot of composer friends, I started asking composers to write for me.

FJO:  The initial impulse came more from wanting to perform than out of wanting to compose?

I wasn’t calling myself a composer. I wasn’t calling myself anything. I was a player. I was a violist.

MM: I wasn’t calling myself a composer.  I wasn’t calling myself anything. I was a player.  I was a violist. Looking back on it now, I think I was just tapping into a way of expressing myself that I didn’t know I was able to do.  I was finding this voice within me.  The electric viola’s unlimited possibilities, the colors and the textures, were allowing me to really explore different worlds.  What was interesting was whenever I would find a new piece of equipment, I would always find the limitations of it right away.  So I would have to overcome that somehow.

FJO:  Like being restricted to looping either a four- or eight-second phrase.

MM: Exactly.  I just developed ways of working around it. Creating just kept going in that direction, because I had accessed something that I needed to get out—my inner voice.

FJO: And instead of avoiding the limitations of what you could do alone with this equipment by creating music to play with other people, you found workarounds so you could still do it yourself.

MM:  I think that was part of the exploration of my voice as a creative entity. I was just exploring by trial and error, listening to Jean-Luc Ponty, discovering Laurie Anderson, then Kronos Quartet big time, and following the Turtle Island String Quartet.

FJO:  Your first record came out in 1998. I remember the first time I heard you perform.  It was a year later at the Henry Street Settlement.  You were doing Vertical Corridors, which is still active in your repertoire and which you’ve since expanded and done other things with. I was so intrigued by what I heard you do that I immediately bought your CD there from someone who was selling stuff at a table.

MM:  Oh cool.

FJO:  But I was so bummed because the piece that I heard wasn’t on the CD.

MM:  Right.  It’s just come out this past year. It’s on No Ordinary Window.

FJO:  But it’s a different version than what I heard.

MM:  Well, every time I play it, it’s different.

FJO:  Anyway, the thing that struck me about that performance, even though you started creating so that you’d have music to play on your instrument, was that it made me forget the instrument. You were making music in real time, but because there were all these other effects, it didn’t sound like one person playing a viola.  Instead it was an immersive and all-encompassing sound world that sounded like a large group of people.  It could probably have been triggered from any instrument, so in a way it didn’t matter what the instrument was.  I felt the same way when I heard the pieces on that CD. The music was so harmonically—as well as contrapuntally—rich.

Martha Mooke’s debut CD Enharmonic Vision

MM:  When I’m writing for myself, it really starts out as a lot of experimentation, looking for different sounds and finding a recipe for a combination of sounds.  When you’re working with different electronic devices, you pluck one note and it could trigger a whole episode of beautiful harmonies or delays or a really interesting rhythm.  So when I find something and get that “Aha!” moment, then I start exploring that. A lot of my music I’ll notate after the fact, and I have to go in and figure out what it was that I did.  Sometimes it’s complicated to notate because, if it’s based on some harmonization or multi-effects processor, there are a lot of elements involved. With No Ordinary Window, I created a score and I did snapshots of the parameters that I use as far as reverb and delay and things like that.  So if somebody wants to perform the work other than me, they can do that with any other piece of equipment.
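
One plain-data way to think about the parameter snapshots she describes (the names and values below are made up, not her settings): store each pedal’s settings per section alongside the score so another player can approximate the patch on different gear.

```python
import json

# Hypothetical snapshot data: effect settings per section of a piece, stored
# alongside the score so another player can approximate the patch on any gear.
snapshots = {
    "section_A": {
        "delay":  {"time_ms": 480, "feedback": 0.65, "mix": 0.4},
        "reverb": {"decay_s": 3.5, "predelay_ms": 20, "mix": 0.3},
    },
    "section_B": {
        "delay":      {"time_ms": 120, "feedback": 0.9, "mix": 0.5},
        "harmonizer": {"interval_semitones": 7, "mix": 0.35},
    },
}

print(json.dumps(snapshots, indent=2))   # e.g. printed in the score's preface
```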

An excerpt from Martha Mooke’s score for No Ordinary Window showing notations for various pedals as well as what the musician should play versus the actual resultant sound. © 2014 by Martha Mooke, Vener Music (ASCAP). International copyright secured. All rights reserved. Reprinted with permission.

FJO:  So other people could play these pieces?

MM:  Yeah, but there’s a certain sound when I play—everything is my instrument.  It’s all an extension of me as a player—the instrument going into the electronics and into the speakers or whatever system it is.  It’s all me as an instrument.

FJO:  But at the point when you were creating the pieces that are included on Enharmonic Vision, they weren’t all necessarily written down.  I imagine that they were all amalgams of pre-conceived ideas, improvisation, and studio experimentation. You probably weren’t thinking of other people playing them.

MM:  Not at that point.  It was just something I was compelled to do.  Again because there weren’t that many people doing it at that time.  Then it was at around that time that I installed a pickup on my acoustic viola that I play in orchestras.  I would show up to an orchestra rehearsal and people would look at the pickup on my bridge and think right away that I play jazz.  That wasn’t part of the mainstream, so it sort of perked a little bit of interest at that time.

FJO:  A lot of violin and viola repair people are horrified by the notion of putting a pickup on a classical music instrument, as if doing so were somehow tainting it.

MM:  I found very friendly luthiers that welcomed that, actually.  They loved the fact that I had a pickup on my bridge, and they could fix it if it needed to be fixed.  There’s one actually around the corner from here, Mathias Lehner. I bring my acoustic viola to him, and I bring my electric. If something happens to one of my Yamahas, I bring it to him and he can put the tailpiece on, which is all connected to the electronics. It’s a whole other world for them, and the ones that welcome that have that much more business, I guess!

FJO:  Before we leave Enharmonic Vision, the CD booklet has all these wonderful quotes from other people, but it doesn’t have quotes from you. So I want to ask you about certain aspects of those pieces.  There are so many different kinds of music on there.  Raindance sort of sounds like bluegrass to me, a little bit.  It comes out of that whole double-stop fiddling sound world.  Winds of Arden sounds like ambient soundscape-y kind of stuff.  And then Bones is filled with all these pizzicatos and extended techniques; it’s pretty avant-garde sounding. They’re all different, but you don’t seem to think of them that way, and it’s all a seamless and cohesive whole.

MM:  Because it’s me.  I guess this is both a plus and the bane of my existence.  When I released Enharmonic Vision, I did so as a solo entity.  I was the artist, the composer, the publisher, the record label.  Very naïve.  I took a few copies with me down to Tower Records in the Village.  I went up to the classical section, because that’s where the Kronos Quartet and Philip Glass were. I met the manager and I said, “I have this CD.  Would you listen to it?  Could you sell it here?”  He listened to it and he said, “Okay, I’ll take five copies on consignment.”  So I signed five copies over to him. He called me within the week and said, “We sold out; bring five more.”  He had liked it so much that he put it in the listening station between Kronos and Philip Glass, so people who would not have known to look under the filing of Martha Mooke saw me there.  They listened to it, and they bought it.  It was kind of neat.  I still have a few copies with the Tower Records price tag.

FJO:  But it’s interesting that you took your CD to the classical section. Listening to that album and even looking at the cover, I wouldn’t necessarily think it was something for the classical section.

MM:  Yeah.  I guess that as a violist, I came out of the classical world.  That was the thing with Tower; you had to fit into one of those slots.  Somewhere I guess it would have been great to be in different rooms, different slots, but that’s how it worked out.  So that’s where I first started selling my CDs.  And then they hooked me up with their distributor, Bayside, and that connected me with a distribution company.

FJO:  This was before you connected to anybody in the pop world.

MM:  Yeah.

FJO:  Even though the album looks more like an alternative rock album than a classical record.

MM:  Well, I guess unintentionally. It turned out the way that I dreamed of it.

FJO: I noticed that Bill Duckworth and Nora Farrell were connected to that first album. Bill Duckworth was such an extraordinary person.  He came out of classical music, but he was open to so many other things and he really opened the door for people creating music who weren’t necessarily writing it down for other people to play. And he, together with Nora Farrell, conceived and built one of the earliest musical performance interfaces on the internet. That was around that same time.

MM:  I became friends with them through the new music world.  And I really struck up a friendship with Nora who, I guess through conversations, joined on as the producer of Enharmonic Vision.  She actually designed the cover, too.  I had this really cool picture that had been taken at the World Trade Center during an orchid show.  I think I was lying on the floor.  Anyway, she came up with this cool idea and created the cover art as well.  And produced the recording.

FJO:  That makes sense, because one of her mantras was that classical music had to stop looking like classical music.

MM:  Yeah, she had a big influence. And Bill, too.  They were great friends.

FJO:  Still, at that point, you’re not thinking of yourself as a composer per se.  You’re a performer who is creating pieces for yourself, and you’re occasionally asking other people to write pieces for you.  So at what point did you start identifying yourself as a composer?

MM:  I guess it was around the time I became a member of ASCAP, which is where we happen to be sitting.  I became a member of ASCAP so I became a composer.  But I didn't quite fit in with the classical concert world as a composer.  I wasn't writing orchestral pieces or string quartets at that time, so that wasn't really a place for me.  Shortly thereafter, I happened to be at a big membership meeting of ASCAP and listening to something that just made me say, "Wait a minute, I have to figure out where my voice is in this organization and in this music world."  I remember Marilyn Bergman was the president.  She was walking up the aisle and—that's where you need your 20-second elevator pitch—I just sort of stepped in front of her and said, "I'm an ASCAP classical composer, but I'm doing things that are beyond classical and I have this idea of doing something." And she's like, "Okay, talk to John LoFrumento."  So I went over and talked to John.  He's like, "Okay, that sounds good. Talk to—" and it went down the pike.  That's how [my club concert series] Thru the Walls was conceived.  Out of necessity, because I needed a place where I could have my voice heard that was accepted and was legitimized in a way.

FJO: I didn’t realize that Thru the Walls came about so soon after you joined ASCAP.

MM:  Within a few years, I guess. It was at a membership meeting. There were a lot of people in the room, and they didn't discuss concert music at all.  And I think I got upset, because I thought, "I'm part of this terrific pro-composer, pro-writer organization, but I don't know where my voice is in it."  It was just kind of spontaneous.  I'm usually pretty shy.  But there was something that really pushed me. I had that moment with Marilyn to block her path and somehow explain with enough clarity that I was then able to make appointments with Lauren [Iossa] and with Fran [Richard] and Cia [Toscanini] where we sat down and came up with this idea.  I came up with the name: Thru the Walls—listening to something through the walls, not being able to easily identify what it is.  It was based on ASCAP composers who were also performers.  This is not a new concept—the composer as performer, or the performer as composer—but the idea was to take it into another context in the contemporary scene, bringing it down to The Cutting Room, which was a venue that was more likely to produce jazz and rock concerts.  You wouldn't think of going to that venue to hear a classical music concert.

Tony Visconti and Martha Mooke

FJO: Nowadays everybody’s playing in clubs. But at the time Thru the Walls came into being, it wasn’t as typical. The other thing that made this series unusual, I think, is that it was officially embraced and directly supported by ASCAP, so it had this official imprimatur; others who were playing classical concerts in clubs didn’t have that kind of endorsement.  It also attracted a very diverse audience, which included people like David Bowie.

MM: Right.  Well, I understand a lot of things now about what I was doing that I really didn't understand then. It's all about reframing the situation.  Again, as far as I'm concerned, as a musician I can be playing Beethoven one day, rock and roll the next day, and my own music the following day or something else.  So I don't have these [walls] and I didn't at that time, either—and this was pre-2000.  I had been doing sessions with Tony Visconti.  I had met him backstage at some concert that I played with the lead singer of the Zombies.  I had been asked to play in the string quartet. He got interested when I said I also play electric viola, so I started doing string sessions for Tony.  When Thru the Walls started developing, I wanted to have that juxtaposition of music worlds, composers who weren't just doing classical.  It was all types of influences: jazz, electronics, rock, all kinds of things.  I spoke with Tony who had a very broad background and broad interests. What could be better than having a renowned, legendary rock and roll producer introducing a new music concert?  That sparked a lot of interest in both worlds.  People who knew Tony were like, "Why is he doing this?" And people from the classical world thought, "Why is this happening at The Cutting Room?"  Kudos to [The Cutting Room's owner] Steve Walter who embraced us; that's how it began.

FJO: I imagine that Bowie showed up because Tony produced some of Bowie’s records.

MM:  Right.  Tony was living up in Rockland County at that time. I had actually gone over to Tony’s house.  Tony also did Alexander Technique, and I was a little nervous, and he was sort of calming me down a bit.  As I was leaving, he said, “By the way, I mentioned to my friend David this event tonight, and he said, he might come.”  And I’m like, “Right; sure.”  But sure enough, two minutes before the lights go down, in walks David Bowie. He sits down at my table, and the rest is history.

FJO:  Well, not completely.  We’re going to make it history now.  How did it go from him being there to you performing and recording with him?

MM:  I guess you’d call it fate.  You’d call it circumstance.  January 2001 was the first Thru the Walls, and shortly after that I got a call from Tony that David was slated to play the Tibet House benefit concert that Philip Glass produces at Carnegie Hall.  That was going to be at the end of February.  He wanted to know if I could put a string quartet together to play with David.  So I said, “Yeah, I could do that.”  I did and it was amazing.  We rehearsed at Philip’s studio a few days beforehand. We played “Heroes”—string quartet and Tony played stand-up bass.  Can you imagine playing “Heroes” with David Bowie?  Moby was also on that concert.  Moby played guitar.  And we played another song, “Silly Boy Blue,” with David.  It was absolutely magical.

Martha Mooke and David Bowie backstage at Carnegie Hall.

FJO:  So that’s what opened the doors to your being a go-to side person for all these pop stars?

MM:  Yeah, that was a big door opener.  Then there was another Thru the Walls, which happened right after that. And that led to a bunch of other opportunities.  A little documentary was done for DCTV, downtown television. At that point I was recording Osvaldo Golijov’s Rocketekya, so they came up and they filmed the recording session with Alicia Svigals, David Krakauer, and Pablo Aslan. It was cool because the beginning of the tape is Rocketekya.  It’s a rocket ship taking off, so we don’t count in.  We count down—five, four, three, two, one.  Then it takes off.  Then we got a call from David to play with him at Tibet House again and, in the middle of that, he asked us to record with him on Heathen, which happened the weekend after 9/11.  It was very emotional in a lot of ways.  Then we just kept being asked back.  We became David Bowie’s quartet; then we became the quartet of Tibet House.  People were asking, “Can we borrow the quartet to play?”  Even after David didn’t do the benefit, which he did three years in a row, we kept coming back.  Philip kept calling us to come back; this is going to be our 17th year.

FJO:  Wow.  And the Streisand connection.  Did that happen through Marilyn Bergman?

MM:  No, but I did end up on tour with Marilyn.  It came about through my work as a Broadway player.  When Barbra was putting a U.S. tour together for 2006 and then the 2007 European tour, she hadn’t toured that much and she wanted to have a Broadway orchestra.  They used a rhythm section from L.A., and they culled from the different pit orchestras on Broadway.  I feel like I hit the lotto on that one.  It was just such an amazing experience.

Tony Bennett with Martha Mooke

FJO:  Well, talk about putting together a career doing music. One day you could be on stage with the Westchester Philharmonic or the American Composers Orchestra, with whom I’ve seen you play.  Then in the pit with a Broadway orchestra another day.  Or backing up a rock band. Or part of a jazz group.  Or playing your own music by yourself.  Or writing music for other ensembles. But it seems that carrying out a specific role in each of these musical projects would require different approaches to where you personally fit in.  Do you feel you need to be in different mental spaces for each of these activities or is it all part of a continuum?

“I’ve had to come to terms with my different personalities.”

MM: I’ve had to come to terms with my different personalities.  As a player, I have to approach it from a different point of view.  If I haven’t created it, there’s an obligation to be true to the printed notes as long as they’re all printed out.  So I have to do my due diligence—woodshed and practice and, if it’s with an ensemble, rehearse.  But if it’s a piece that I’ve composed and I’m playing either with my quartet Scorchio or my piece e-chi—which is with a percussion ensemble—or something with a combination of notation and improvisation, that gets a little tricky for me. Because I’m coming at it as the composer, I have to work twice as hard to realize I can also take some license with what I play.  But realizing that I have written parts for the other players, I need to make sure that we’re literally on the same page.  With Bowing, the duo with Randy Hudson, we started just as improvisers and built the pieces which became part of the Café Mars record. Then I retro-notated Quantum, which is on that duo CD, for string quartet and then for string quintet, when we add a bass player.  Time in a Black Hole, which is with bass and percussion, is all just improvisation.  We don’t have a plan. We just get together; we meet, hit, and leave.

One of Martha Mooke’s long standing groups is her duo Bowing with electric guitarist Randy Hudson

FJO:  I love this term retro-notated.  How much of this music is retro-notated? Can all of these pieces be retro-notated?

“Ultimately I retro-notated it, so it exists in a notated version.”

MM:  Sure. Sometimes there’ll be a bare bones notation, like in jazz you have a chart.  That’s how No Ordinary Window began, or Virtual Corridors.  For many years, when I played Virtual Corridors, it existed just as words on a page with maybe a couple of lines that I sketched out.  It was really more a description of what I’m doing.  Ultimately I retro-notated Virtual Corridors, so it exists in a notated version.  No Ordinary Window existed just basically as a solo line. Then I needed to figure out how to notate the electronics that I used—this amazing pedal by Eventide called the H9 that opened up a whole other world of sound.  I figured out a way of bringing that into the score by describing what kind of effect I’m using and then actually taking screen shots of my iPad, which has the exact parameters of reverb and what kind of effect or filter I’m using.

FJO: But considering all the improvisational passages you include in your own music, as well as all the educational workshops you do about improvisation, you’re somebody who wants to engender improvisation in other people.  When you retro-notate something and fix it on the page, aren’t you losing something in terms of what an interpreter is bringing to it?

MM: Well, the beauty of live performance, especially when you’re an improviser, is the energy of that and the communication with the audience.  I think sometimes that gets lost in more traditional concert settings where the audience comes in and they know they’re going to hear a Mozart symphony. Sometimes the tempi are different from what they remember, but they don’t realize there is communication going on between the performers and the audience.

When I’m performing solo, I make sure the audience is aware that they’re actually part of the performance.  I’m informed by them sitting out there.  I get feedback.  You can call it biofeedback or whatever.  For the audience, it becomes more of an experience than just being played at or played to.  You can’t notate that and that’s okay.  Likewise, a recording is just a moment in time.  But that’s okay, too, because hopefully people will have heard the music live and they’ll take that as a memory.  At some point the album will exist when I no longer exist.  Hopefully there’ll be enough material out there, whether it’s videos on YouTube or other iterations of performing the same piece, and they’ll draw their own conclusions.  People think that everybody’s hearing the same piece when it is the same notes being played, but everybody hears differently.  We all have our filters and our own way of processing—if you wake up in a bad mood, if you have a headache, if the temperature’s too hot in the room, or the person sitting next to you is a little odorous. It’s never going to be the same.  You’ll never process that experience the same way.  That’s something that, more and more, I’m putting an emphasis on, because it’s so easy just to stay home and watch on your screen, but you don’t get that same experience as being in the room when it’s happening. So as long as I’m alive, I’m going to keep that happening.

FJO: I’m curious about the pieces that have been written for you by other composers.  How much does somebody who wants to write a piece for you have to know about all of the electronics you use when you perform?  Or are these elements that you then add on as an interpreter, post-composition, in a sense becoming a sort of a co-composer?

MM: I did two full concerts of works that I commissioned, all from friends and acquaintances of mine.  Most of them didn't know anything about writing for electric viola, let alone the electronics like foot pedals.  So most of them came to my studio once or twice, sitting on the floor.  Victoria Bond, Alvin Singleton, Tania León, and even Leroy Jenkins were asking me questions.  "Can you do this?  What happens if you do this?"  So there was a lot of collaboration in the pieces.

FJO:  And are those pieces fully notated, or were they retro-notated?

MM: Some of them have improvisation in them, like Alvin's piece, which I've recorded.  Leroy's piece was not notated with notes.  It was more a back and forth between the two of us, a conversation between a grandfather and a granddaughter.  But most of them were fully notated. There was one piece I remember where almost every note had a different effect.  I had to enlarge the score and color code everything. It doesn't get performed as much these days.  But getting such a variety of pieces from the different composers was an incredible experience.

FJO:  Now in terms of your writing music for others, an area you’ve been working in quite a bit—which is somewhat surprising since there are no strings—is wind band music.

MM:  I think it began almost as a joke, in a way.  In one of the orchestras I play with, there was a French horn player who is the music director of the Ridgewood Concert Band—Chris Wilhjelm. We started talking, and he was intrigued by what I was doing as an electric violist. He thought it would be cool if we did something together at some point.  I had never thought about writing for concert band.  I had really, at that point, never written for an ensemble larger than a string quartet or a chamber ensemble.  Then finally we said, “Let’s just do it.”  So I came up with the concept of X-ING—as in pedestrian crossing or deer crossing.   It’s the crossing of the worlds between electric viola and concert band.  What happens when you cross those worlds?  One of the things that happens is you don’t have to tell the band to play quieter because a string instrument is soloing.  I just crank my volume; I go to 11.  The first movement is “Pegasus X-ING”—the winged horse.  I use electronics and, in the notated score, I had to notate so the conductor is actually seeing what he’s hearing.  There’s an effect where I play one note and a series of rhythms happens.  I play dah, but what you hear is da-da-da-da-da, da-da-da.  In the ending, I use a combination of loops and different effects to get the winged horse taking flight.  I keep the loop going while I switch instruments.  I switch to an instrument that’s actually re-tuned to E-flat and B-flat, so that I can play open strings and harmonics in the middle movement with the band that tunes to B-flat.  When I was writing the middle movement, I was at the MacDowell Colony; it was at the time when my uncle, whom I was very close to, was taken very ill in Miami, and he was actually at the point of crossing over.  That became that second movement, “X-ING Over”; that’s a tribute to him.  The last movement, “Double X-ING,” is rock and roll. It starts with a crazy cadenza with overdrive and all kinds of improv and loops and things going on.  And then we’re off, trap set and all.
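The one-note-becomes-a-rhythm effect Mooke describes is, in spirit, what a multi-tap delay does: a single attack goes in and a pattern of delayed, gated copies comes out, which is why her score has to show both what the player plays and what the conductor hears. The sketch below is only a generic illustration of that idea in Python, not her actual pedal or settings; the tap times and gains are invented.

import numpy as np

SR = 44100  # sample rate in Hz

def multi_tap(dry, taps):
    """Mix delayed copies of `dry` at the given (delay_seconds, gain) taps."""
    length = len(dry) + int(max(t for t, _ in taps) * SR)
    out = np.zeros(length)
    out[:len(dry)] += dry                             # the note as played ("dah")
    for delay_s, gain in taps:
        start = int(delay_s * SR)
        out[start:start + len(dry)] += gain * dry     # each tap adds one rhythmic echo
    return out

# A short plucked "note": a decaying 220 Hz burst.
t = np.linspace(0, 0.15, int(0.15 * SR), endpoint=False)
note = np.sin(2 * np.pi * 220 * t) * np.exp(-t * 30)

# Hypothetical tap pattern: one attack in, "da-da-da-da-da, da-da-da" out.
pattern = [(0.2, 0.8), (0.4, 0.7), (0.6, 0.6), (0.8, 0.5),
           (1.1, 0.6), (1.3, 0.5), (1.5, 0.4)]
result = multi_tap(note, pattern)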

The first page of the third and final movement of the full score of Martha Mooke’s X-ING for wind band and solo electric viola. © 2012 by Martha Mooke, Vener Music (ASCAP). International copyright secured. All rights reserved. Reprinted with permission.

FJO:  Then you wrote another band piece, which you’re not playing in at all.

MM:  That was one of the hardest moments I’ve had, understanding that I wasn’t going to be playing.  I spent a lot of time working on that.  I had to come to terms with how I would approach writing it.  With X-ING, I actually was playing and composing at the same time, but Skandhas, which is the name of the piece, came out of a different world.  I was composing more at the computer, using Sibelius. It does have elements of improvisation in it as well, but I had to remove myself and that was very challenging.  There’s some really cool things that I like about it, but after the premiere, I ended up doing some revisions and, who knows, I may still revise it more at some point.

FJO:  But you liked the experience enough and felt confident enough to go on to write yet another one.

MM:  I just finished my third piece, but I got a little sneaky with it.  It’s going to exist as two entities.  It will exist as an ensemble piece, but then there’ll be another version with electric viola obbligato improvisation. It’s not quite an alternate version, because the plan is for them to be performed on the same concert.

FJO:  So for the new piece, did you return to composing with your viola in one hand like you had done with X-ING?

“When you cross electric viola and concert band you don’t have to tell the band to play quieter because a string instrument is soloing.  I just crank my volume; I go to 11.”

MM:  I think I wrote less with the viola in hand. I had a keyboard and a computer. I also had to not make it too complicated, in terms of notation, since it's not for a professional ensemble—although it could be played by professionals—and also had to bear in mind that it may be played by an ensemble that doesn't have that much experience improvising.  In many school bands and orchestras, there's not an opportunity for members of the ensemble to improvise, whether it's the full ensemble improvising or members as soloists.

FJO: You’re also now performing X-ING with an orchestra.  So you’ve taken the band score and turned it into an orchestra score. Many people have written orchestra pieces and then have made band versions of them.  But this went the other way around.

MM: I’m retro-orchestrating!  I’m not a purist in anything that I do, so I don’t have a problem.  It’s another opportunity.  That’s another thing with the band world—they love playing new music and they love living composers.  They love supporting living composers, and they rehearse a lot.  Certainly there are orchestras that play new music and commission new works, but it’s a little bit different in the orchestra world.  So I love that the orchestra world is interested in performing it. The challenge was how to re-write the piece. It wasn’t just substituting violins for flutes and things like that.  I had to rework some of the innards.  I revised the middle movement a little bit, tightened it up in ways.  I’m looking forward to the first time performing it.

FJO: It’s funny that you wrote for band before you wrote for orchestra and that your first orchestra piece turned out to be a revision of a band piece. You’ve played in so many orchestras and so you really have an insider’s knowledge of the orchestra.  That’s not something you had with band. In fact, many composers who’ve written for orchestra, even ones who are master orchestrators, are reluctant to write for band since it’s just not something in their background.

MM: Yeah, it’s a big learning curve, learning the ranges of the different instruments and the transpositions, learning that you can’t just write a slide anywhere you want to for trombone because it may not happen, it may be over the break.  It’s not just write the notes into Sibelius and this is how it’s going to sound and if it’s red you can’t write it.  It doesn’t work that way.  There’s also a harp in the band version. I had to learn the intricacies of the harp.  I was actually writing that part when I was on the Streisand tour, so I had access to the harpist Laura Sherman; she would look at it and give me some hints.

FJO:  I thought that the band stuff grew out of all the education work you do, but it didn’t.  Still, I’m curious about that part of your life. You’ve done so many education seminars, teaching string players how to improvise and use electronics.

“I’m about overcoming barriers and breaking through that inhibition factor which seems to get more built into students as they go through school.”

MM:  A lot of that is due to my involvement with Yamaha. We came together when they were just designing their electric string line.  At that point, they were calling it Silent Violin, because the whole point was that you could plug your headphones into this instrument and nobody else would have to listen to you practice.  I happened to meet one of their team and they liked what I was doing, so they sent me a prototype of it and I said, “I love it with the headphones, but I want to play it loud.  Can you do this?”  That began our collaboration.  They’ve even invited me to go to their headquarters in Hamamatsu, Japan, to work with a design team.  I’ve been with them for a long time, and I have many generations of their instruments.  They’ve also been extremely supportive in the educational realm when there are opportunities to go to schools and to conferences to demonstrate and present workshops, working with students and also working with the teachers. A lot of times, the teachers don’t know how to work with electric string instruments, if they have them in their schools, or with improvisation and having it be an opportunity for the students to create and find their voices.  Sometimes they may not be as proficient as they’d like in order to be able to express themselves. I’ve discovered ways to help them overcome that, whether it’s by banging on a table, strumming the inside of the piano, or just playing some other sounds just to help them find their creative voice.  It’s all about discovering that voice inside that a lot of times kids are afraid of accessing.

Martha Mooke demonstrating string techniques for students at a clinic.

One of my most popular workshops is called “Am I Allowed to Do That?” That literally came out of a workshop in a school. I sometimes start out with my acoustic viola, walking around the room, playing really crazy stuff just to get the students to respond without thinking, because that accesses something that they don’t know how to do usually.  They’re not supposed to do that.  They’re supposed to put that part away.  What happened is I went over to this violinist and started playing and said, “Answer me.  Don’t think.  Just answer me.”  And he looks around to see if his teacher’s looking and says, “Am I allowed to do that?” Yes, in this timeframe, you’re allowed to do that.  And you’re allowed to explore it after school, or at home.  In the school, in this class, you need to conform and do what you need to do, but I’m about overcoming those barriers and breaking through that inhibition factor which seems to get more built into students as they go through school.

FJO:  We began this conversation talking about how you started making solo music with an electric viola and various electronic effects units, which enabled you to create an almost orchestral-sounding sonic landscape all alone. It’s something you still continue to do, even though now you also do all these other projects.  The pieces on your new CD No Ordinary Window are fuller sounding than any of your solo work I had previously heard. And one of the pieces on it you perform live with video; it’s an immersive sight and sound experience that you’re triggering all by yourself which adds yet another layer.

MM:  These are actually two projects. No Ordinary Window is its own performance experience that doesn’t usually involve video.  The whole concept is finding these amazing spaces with a window, starting the concert before dark, and having the sunset be part of the lighting show.  It’s a window looking out, a window to the soul, and a window of opportunity.  I first envisioned No Ordinary Window in Sedona up in the Red Rocks.  There’s a chapel there and my dream was to play in that chapel as the sun was setting and having that be a natural lighting effect with the music.  As the concert starts, the audience sees the beautiful rocks outside as the sun is setting.  Then it gets dark outside and the windows become mirrors.  The audience sees themselves.  I was able to do that concert, though not in that chapel. I happened to be talking to the president of Eventide saying this is my dream concert.  He knew somebody that had a house on the next block and made that happen.  It happened to be the person that created Eventide.  Again, it’s all these coincidences.  But that’s the No Ordinary Window experience.

A Dream in Sound is on the recording of No Ordinary Window. Then I did a version of it that became Dreams in Sound, which was essentially the same music, but it took on a whole different form with a string quartet where everybody was using effects.  I took that a step further when I got a commission from this improvisation festival in Prague and a foundation that discovered me through an event I produced a couple of years ago with Women In Music. It was another one of those Thru the Walls moments. I was commissioned to write a piece for this festival, so I took the dream experience to the next level.  I created a 50-minute piece called Dreaming in Sound.  I had another residency at Harvestworks that was supported by that foundation, and I was able to work with one of their engineers there and designed multi-dimensional effects and looping, not just for the solo viola, but also for four-channel audio that I also controlled with a foot pedal.  I was able to launch sounds into four isolated speakers.  I had control over the speakers, rotating this way and that way; this was done through Max/MSP on computer. I knew that I had to also have a video element—that was part of the proposal—but I wasn't quite finding the right way to go about it.

A couple of years ago through these monthly salons called LISA—Leaders in Software and Art—I met the woman that runs them, Isabel Draves, and we became friends. Her husband is this amazing software artist, Scott Draves. I was asking Isabel if she could recommend any video people.  And she said, “Scott has this new program and he loves your music, and he’d love to work with you.”  That’s how that whole collaboration came about.  The program—which he calls Dots—is “listening” to all of the music that I’m creating on the spot, and it’s responding to it.  There’s a big score, but there’s also lots of improvisation, and the video is responding to me.  And then I’m watching the video and responding to the video, which takes it to another place within the improvisation.  So, again, every time I do it, it’s different.  I premiered it in Prague during this festival.  Then I was able to do it the following week at National Sawdust because there was this big Creative Tech Week going on. It’s a big ensemble piece, but I’m the only live player in the room.
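For readers wondering what "rotating this way and that way" across four isolated speakers involves under the hood, the core arithmetic can be as simple as equal-power panning between adjacent speakers as an azimuth angle sweeps around the room. The sketch below is a generic illustration of that idea, not Mooke's Harvestworks Max/MSP patch; the speaker layout and the notion of driving the angle from a pedal value are assumptions.

import numpy as np

# Four speakers at 0, 90, 180, and 270 degrees around the listener (assumed layout).
SPEAKERS = np.radians([0.0, 90.0, 180.0, 270.0])

def quad_gains(azimuth_deg):
    """Equal-power gains for a source at `azimuth_deg`, spread over four speakers."""
    az = np.radians(azimuth_deg % 360.0)
    # Angular distance from the source to each speaker, folded into [0, pi].
    diff = np.abs(np.angle(np.exp(1j * (SPEAKERS - az))))
    # Only the two nearest speakers (within 90 degrees) receive signal.
    gains = np.cos(np.clip(diff, 0.0, np.pi / 2))
    return gains / np.linalg.norm(gains)      # keep total power constant

# A "foot pedal" sweep: rotate the sound once around the room in eight steps.
for step, angle in enumerate(np.linspace(0, 360, 8, endpoint=False)):
    print(step, np.round(quad_gains(angle), 2))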

Martha Mooke’s latest CD, No Ordinary Window.

FJO: Too bad that that wasn’t released on a DVD.  Maybe it will be at some point?

MM: I need to do it.

FJO:  Might there ever be a time when you incorporate immersive video with a larger ensemble?  It would be amazing to do that kind of thing with a wind band!

“You never know what you’ve been missing until you know what you’ve been missing.”

MM: Unlimited possibilities.  I would say you never know what you’ve been missing until you know what you’ve been missing.  A lot of what I do is exploration, trial and error experimentation.  Sometimes the best thing is if I’m improvising or I’m playing something, and something goes wrong with a foot pedal.  I misfire or I play something I didn’t mean to.  I take that as an opportunity to explore the space that I might not have explored before.  I didn’t really mean to do that, but it happened for a reason. So I’m going to go in that direction.

FJO:  It’s really an extension of your workshop where you give students permission to do anything.

MM:  I was taught a long time ago, even as a classical player, if you make a mistake don’t let on, don’t make a face.  Either make the same mistake again if it comes back or just keep going.  Most of the time, people won’t ever know if I intended to do it or not.  Hopefully they don’t.  Hopefully it just becomes part of that moment and that experience.

Martha Mooke in Sedona

Joshua Fried: Let’s Dance

Joshua Fried

Joshua Fried begins each of his RADIO WONDERLAND shows with a spin of a boombox radio dial, snippets of caught commercials and DJ chatter popping out of the static and drawing his audience’s ears in on a raft of mainstream culture before he starts cutting it apart.

There is also a boombox in nearly every room of Fried's apartment, which, after a few hours in his company chatting about processing sound, seems to be not just a fun decorating choice but also an illustration of how connected he is to his music-making tools.

More than sharing space, however, it’s time that Fried has invested deeply in his music, labor-intensive processes becoming something of a hallmark. As a result, his projects have a tendency to spiral out across years of his professional life. Splicing elaborate tape loops and coding his own software have been just par for this artistic course—intimacy with the tools and materials an essential part of the work.

Yet whether in a dive for self-preservation or simply a yin-yang bit of balance, Fried sets up his musical game boards with elaborate care, but then prefers to play out the final aspects of his creative process live in front of an audience. In the ’90s that meant feeding his performers their material in real time over headphones. Since 2007, it most often finds him alone on stage, a couple pairs of men’s dress shoes concealing gate-triggering microphones and a Buick steering wheel drawing the audience’s eyes as he grabs bits of radio chatter from which he builds each RADIO WONDERLAND concert.

His creative path has led him from The Pyramid Club to more esoteric new music circles, but he hasn't abandoned his pursuit of great grooves, and it's a prime driver of RADIO WONDERLAND. "I had this metric, which is that I wanted it to be actually danceable," he explains. "As a creator, as a composer, to have that metric and believe in it, to me, it's not a cheap thing in the least. It's so helpful. Sometimes you need a framework to hang your musical efforts on." In live performance and in track after track on his just-released album SEiZE THE MEANS, the drive of the pulse, the transparency of the process, and the common commercial radio core prove to amplify rather than dilute what is unique about the music.

Fried anticipated that his lack of interest in “high-end signal processing of very theoretical stuff that you do your Ph.D. thesis about” might result in his work being dismissed in certain circles, but while that has happened, he has actually felt accepted and free to pursue the work he wants even if it comes attached to a beat that encourages serious toe tapping. It’s not something he’s looking to transcend. “I love dancing. I sometimes find myself bopping my head in music concerts when it’s not really thought of as head-bopping music, but I’m hearing a pulse. Okay, maybe in that situation, maybe you could argue that I’m missing something. But there are many cases where I feel like no, I’m not. I am moved and I’m moving, and I’m immersed and involved. And I just love it.”

Joshua Fried in conversation with Molly Sheridan
November 10, 2016—11:00 a.m.
Video presentations and photography by Molly Sheridan
Transcribed by Julia Lu

Joshua Fried: I think I have long had this idea that I’m going to be the thorn in the side of some establishment that isn’t going to like me, and it turns out they do.

Molly Sheridan: But you don’t trust that?

JF: I have a little bit of imposter’s syndrome, but I’m on much more solid ground than I was when I started. It’s funny because “new music” is awash in people doing sophisticated things in funny meters and odd things with tonality and pitch, and whether I do or don’t, I tend to be accepted and no one has a problem with 4/4. It’s kind of amazing to me. I’m sort of waiting to be dismissed—and that’s happened to me—but I feel very accepted and able to pursue what I want. It just so happens that what I want is rather clubby, especially with RADIO WONDERLAND.

MS: I actually wanted to start by just talking about the evolution of RADIO WONDERLAND, especially for readers who may not be familiar with this project. It seems to me there’s a sort of ritual to these performances and to the pieces you create, including the equipment that you use and have used for a number of years now.

JF: Oh, yeah.

MS: So I want to trace the evolution of that visually and sonically, whether you have to go back to 1987 to do that, or just 2007.

JF: I have been cutting up sound and processing sound since I first started composing, and I started using radio really early on. I did one piece where I would start with FM radio playing the easy listening station—cascading strings and completely mellow “beautiful music”—and then cut to this underlying tape loop that was cut up very precisely. I would do it several times and it was random what I got from the cascading strings station. Then I was performing in clubs in New York with multi-channel tape-loop processing. Basically I was taking the technical structure of dub reggae, only instead of remixing an existing reggae song, I would remix a multi-channel tape loop that I had constructed laboriously and do that live.

I also had a thing where I would use something to trigger a gate. Like I would speak into the microphone, but it would be opening up a gate on a tape loop. It was theatrical. As a performative schtick, I started hiding the mic inside various objects. I put the mic inside a shoe and took it to the Pyramid Club where I was performing live, and I was whacking the shoe with a drumstick so the tape loop could be in time with my underlying groove. Then as I evolved as a composer, I wanted to do more with gates, so I said, let’s have four shoes. And this is 1988 at the La MaMa New Music Festival. I had the shoes and a radio—two channels of shoe-controlled gates from radio and two pre-recorded ongoing sounds.

Fried’s stage set-up with shoes (Yes, that’s Todd Reynolds in the background!)
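The shoe-as-controller setup Fried describes is, at bottom, an envelope-triggered gate: a microphone hidden in the shoe picks up the whack, and whenever that signal jumps past a threshold, a short window of the loop (or, later, the radio) is let through. Here is a minimal sketch of that logic; the threshold, hold time, and stand-in signals are invented for illustration.

import numpy as np

SR = 44100  # sample rate in Hz

def gate_by_hits(trigger, source, threshold=0.5, hold_s=0.25):
    """Let `source` through for `hold_s` seconds after each spike in `trigger`."""
    hold = int(hold_s * SR)
    out = np.zeros_like(source)
    open_until = -1
    for i, x in enumerate(np.abs(trigger)):
        if x > threshold:
            open_until = i + hold      # a whack (re)opens the gate
        if i < open_until:
            out[i] = source[i]         # gate open: the loop is audible
    return out

# Stand-ins: a two-second "loop" of noise, and whacks at 0.5 and 1.2 seconds.
loop = np.random.uniform(-0.3, 0.3, 2 * SR)
whacks = np.zeros(2 * SR)
whacks[int(0.5 * SR)] = 1.0
whacks[int(1.2 * SR)] = 1.0
gated = gate_by_hits(whacks, loop)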

Then a few years later, I realized I could do something that’s all radio. What I had to do next was the club-oriented funky tape loops that I had done in the ‘80s, only make those collages in real time in front of an audience and all out of commercial radio. I could do that with technology. I didn’t know what technology, but I knew I could do it with technology. I could trigger the radio with the shoes, but I wanted to do more. What I was doing in the ‘80s in clubs, these tape loops that I mentioned where I did things based on dub reggae, got increasingly intricate and I would do very high-precision tape splicing. As digital sampling was taking off, I would kind of say to myself, oh, I can do that with splicing and I would end up with something that was like those samplers, only more hi-fi because I had a quarter-inch tape deck, which was giving me better quality than the 8-bit or 12-bit samplers at the time. So there was this kind of odd period where, because I felt that I would live forever and it didn’t matter how long a project took, I would just do even more labor intensive, high-precision tape splicing.

But I slowly transitioned to MIDI and sampling, and so getting back to the beginnings of RADIO WONDERLAND, I realized that I could use technology to precisely cut up the found sound that I got off the radio and turn that into a groove. I have notebooks full of notes about what I could do and the more I thought about it, the more I got serious about it. I went through a period where I thought: how far am I willing to really elaborately process? Because what I love most in processing is the cutting up, running backwards, playing at different speeds, collaging as opposed to the high-end signal processing of very theoretical stuff that you do your Ph.D. thesis about. The simple processing that has a big musical payoff is more fascinating to me. What’s the least I can do, the most transparent processing I can do, and have it give me my musical result?
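The "simple processing that has a big musical payoff" Fried favors (cutting up, reversing, repeating, re-speeding) maps onto very little code. The sketch below is only a toy version of that cut-up-into-a-groove idea in Python; RADIO WONDERLAND itself is Fried's own Max/MSP software, and the tempo, slice choices, and noise stand-in for radio here are all invented.

import numpy as np

SR = 44100
BPM = 120
beat = int(SR * 60 / BPM)              # samples per beat at the assumed tempo

# Stand-in for a few seconds of captured radio: just noise here.
radio = np.random.uniform(-0.5, 0.5, 8 * beat)

# Cut the capture into one-beat slices.
slices = [radio[i * beat:(i + 1) * beat] for i in range(8)]

# Assemble a one-bar groove: reuse a slice, reverse another, and so on.
groove = np.concatenate([
    slices[0],
    slices[3][::-1],                   # played backwards
    slices[0],                         # the same slice again, loop-style
    slices[5],
])
bar = np.tile(groove, 4)               # repeat the bar to make it danceable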

Sometimes you need a framework to hang your musical efforts on. And sometimes I think it doesn’t matter so much what that framework is.

And I had this metric, which is that I wanted it to be actually danceable. As a creator, as a composer, to have that metric and believe in it, to me, it’s not a cheap thing in the least. It’s so helpful. Sometimes you need a framework to hang your musical efforts on. And sometimes I think it doesn’t matter so much what that framework is. You need it. Especially when it comes to structuring things over time.

I was doing the tape loop stuff in clubs, and that was more or less the '80s, and in the '90s it was the headphone-driven performance, [concert work that requires performers to try and imitate vocal sounds that are played over headphones]. Then halfway through that, I realized the next thing I wanted to do was club-oriented again, but by that time, I was so steeped in sort of the new music scene, it was no longer the Pyramid Club, it was the Bang on a Can Festival. And so when I first started doing RADIO WONDERLAND, it was music festivals and electronic nights, the Juilliard Electronic Music Festival and Boston Cyberarts. It didn't really steer back to the clubs until I went through this long, long period of software development and then started channeling it to the clubs, and that's a transition I'm sort of still making because I had so many years with the—if you want to call it—new music audience. The NewMusicBox audience! I still sort of feel like I'm steering back. In the late '80s, I was known if you read Billboard and not if you read the American Composers Forum newsletter. And then that switched. I still sort of feel I'm switching back.

MS: Was that all self-selected or did you feel pushed?

JF: It’s funny because I’ve sort of been following my nose the whole time as far as what I do. I was so involved with the clubs in the ‘80s, and to me it was equivalent with innovation. No, that’s not right. It’s not that simple. I was doing experimental stuff, and I was working a lot with Linda Fisher who’s a composer who worked with Cunningham and David Tudor and Douglas Dunn, who was a Cunningham dancer. But I was focused on the clubs; I was working in clubs. I could go on stage in any open-minded nightclub if I had my tape-loop act—I say open-minded, because at the time there was a certain population of people who enjoyed popular music but had to see a drum kit and/or a guitar on stage. There was one guy who said to me at the end of a gig, “If you had just had someone with a guitar on stage, even if they were just standing there, it would have made me feel more comfortable with what you did.” I was amazed at that. And I also really appreciated his honesty. He knew how absurd it was, and he was being completely real about it.

And then I got a record deal with a big record label. It went nowhere and it’s a long story, but it was a great thing that happened to me. I think I was kind of blown away emotionally, because I had this major label deal and I sort of didn’t know what to do with it. I didn’t have the skill to adapt. I tried to write some conventional pop songs for the occasion, but I didn’t do very many. They didn’t really fit. I needed to be like Howard Jones or M, the guy who did the song “Pop Muzik,” but I wasn’t versatile enough to do that. So I was just the tape loop guy doing my innovative stuff—which certain people really loved—marketed the wrong way.

It took me a long time to sort of get over it and decide what to do next. I didn’t have a next step for the record label, or I guess for the clubs. And then the headphone-driven stuff kind of took off, although it’s a slow motion take off. Over a few years, I did a lot of that stuff, and then the Bang on a Can All-Stars said, “Well, can we perform it?” And I said okay and I worked with them. I basically won’t let people perform this work unless I feel that they can do it—because it’s so awful if people don’t have the proper training. It’s hideously boring and uncomfortable, and it gives me and it gives the music a bad name. But if performers can handle it and they have worked with me or someone that I’ve worked with to know what I want from it, it can be this compelling, rigorous, worthy stuff. So anyway yes. The Bang on a Can All-Stars did it and then other people said they wanted to do it, and it had this life, including a 16-week series at HERE Arts Center in 2001.

It was so enormously labor intensive. It was amazing to be able to do it, but each performer can do each headphone role only once, so I rotate through performers. We had a total of 64 people over the course of this run. I would have to get more and more performers. How could I tour with this? I decided that this piece, if it can't walk on its own, is going to have to be set by the side of the road where if it wants to walk, it can walk, but I can no longer be pushing it along. I need something more practical, and that was going to be this radio, found sound, groove-based thing.

That’s also solo, so it makes so much more sense. Then all that was left was the years of doing the software programming. I did it myself in Max/MSP and it was a wonderful adventure, but it took years. It was absurd. By fall of 2007 I realized I have not utterly, thoroughly 100% debugged my own code. However, the state of performing this is hampered more by my lack of knowing how to do it and lack of rehearsal than by the bugs. I could put this on stage, work around the bugs, and six months of being on stage is going to put this out in the world. And it’s going to get that much better. Better than six more months of programming to iron out the last few bugs or add the last few features that I want. So all of sudden, I realized, oh, it’s not a matter of being done and then going on stage. I’m going on stage now. Let’s start gigging!

I decided that for a year I would just perform any and all performances—paid, unpaid, bring my own PA, what have you. This adventure started and I was going to do this for a year and then record. So that was fall of 2007. And then 2011—that’s a year after, right?—I realized I was doing more and more gigs. I started going out of town. I performed at this big sort of techno/rave-y complex in Venice, Italy. It was so great, but it was also crazy. I didn’t have a record to sell at the gigs. It seemed almost counterproductive. And also I didn’t mention, I made a deal with myself: not only was I going to stop coding—only since it’s Max/MSP, it’s drag another line with the mouse—but I was declaring a technology freeze. I wasn’t going to upgrade any piece of hardware or software until I had that record out. So I figured I’d gig for a year, do the record, upgrade the software. Instead it was a few years of gigging. Now, it’s antique software and a G3 Powerbook. It’s the same thing with my tape-loop stuff. When I started doing tape loops, it was high tech, but then I did it for so many years. Same thing kind of happened with RADIO WONDERLAND where I had a Powerbook that was state of the art and I just kept it. And I was so glad that I did.

Now my case might be extreme, but there are musicians and composers who are upgrading so fast, I feel like they’re not going into depth. On the other hand, they don’t need to go into depth the way I do. I get really involved with materials, the tools, and that is a big part of what I’m doing. Other composers are different. They’re pursuing other things, and they can have a—not a derogatory use of the term—more shallow connection with the nuts and bolts of their technology and it’s not such a wrenching big deal to upgrade. If they throw out their old software and have new software, great. They take advantage of that.

Fried's boombox collection

For me, it just couldn’t be that way. I wrote this software myself. I’m very intimate with it. It’s just not the same deal. I love that kind of intimacy with tools and materials. I guess for some composers, the intimacy is on the level of the score, or the concept, and the technology is secondary.

MS: Okay, that was a lot of answers to a lot of questions.

JF: Whew. So we’re done?

MS: We’re done! No, we’re not done. You were talking about intimacy, which makes me think about your use of commercial radio as your raw material. I’m curious, of all the things you could pick, what is your attraction to that specifically as your primary source?

JF: Well, there are a couple things that really dovetail nicely. Since I was a kid, I've had this attraction to the commercial stuff and just reframing it as something that's funny. When I was in fourth grade, we had a field trip to the L.A. airport and we got to walk inside an airplane. Then the next day, or maybe that afternoon, we were back in our homeroom in my elementary school, and we were asked to write about it. I wrote some spiel and at the end of it I wrote, "Welcome to the friendly skies of United." It was a laugh line that has a certain needling twist to it.

Maybe that’s the whole sort of appropriative, ironic shtick that we’re all so tired of now, but I think I am of a generation where that is compelling to me. It’s a way of talking and of negotiating the world by quoting the mainstream stuff in this kind of snarky way. I feel in many ways, culturally we’re past that, but that kind of appropriation is like a language. And maybe this is a loaded word, but it is subversive. It is knocking, needling, and when I am cutting it up, it is cutting up the mainstream culture. It may be very basic, but great—be basic. Also, it’s ubiquitous, so it’s something that’s familiar and when I process the familiar, the process is that much more transparent. Just like when you do a cover tune, if you have an odd musical bent, your odd musical bent can be revealed by performing someone else’s work.

That’s why Devo’s version of “Satisfaction” is so satisfying, because we know this song and you get what Devo is. FM radio is dynamically compressed and has a decent frequency range. It is made to be grabbed and sampled. It’s so technically easy to grab the pre-compressed feed from FM radio. I know exactly where I have to put the volume control on my boombox. I don’t change the input level on my rig. I haven’t had to. And that’s great. It is perfectly pre-processed for the stuff that I’m doing.

MS: Is your choice of controllers born out of that same instinct—the steering wheel, the shoes? I mean, is that a joke? Is that a commentary? Is that playing off familiarity?

JF: It’s not the subversive appropriation kind of thing. I’m not knocking the industrial age because the steering wheel is a symbol of something evil. Arguably, it is. But I am doing it because of the transparency of the process when the controller is so large. I don’t want a tiny little knob that no one can see, so I want this object that’s the wheel.

Instead of the shoes, I could use electronic drum pads, but they have this sort of added message to me that you have to have something that looks like fancy high tech music hardware in order to whack something. But this is a completely un-acoustic instrument. The sound that you’re triggering has nothing to do with the physical makeup of the thing that you’re hitting. There’s this disconnect between the controller and the sound that results, and I want to underscore that disconnect. It’s a funny thing, and I’d rather have it be that funny thing than have it be like the cool drum pad. If you had the money to buy this in the music store, you could have this cool drum pad. I don’t like that.

Fried takes the wheel

Once I had the shoes, I knew that I wanted to have not just a large knob, but an ordinary object taken from life and give it that surreal feeling. I was really taken by surrealism when I was a kid. It's that kind of twist I was talking about before with appropriation. There's a different, maybe related sort of twist when there's something absurd. I just love it so much.

Another thing about the wheel is that, technically, it’s no different from the little knob you can get in the portable controller, which is a lot easier to pack on an airplane than a steering wheel, but you would never play a melody on that little knob. With the steering wheel, I can, and so now I practice the wheel, and it’s become this whole other level of instrument that I didn’t even realize. The quantitative difference of size is a real qualitative difference, and it’s so much fun.
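The reason size turns the wheel into a melodic instrument is that a big continuous controller can be quantized into discrete pitches with enough physical travel per note to be playable. Below is a minimal, made-up mapping; the scale, range, and MIDI numbers are not Fried's, only assumptions for illustration.

# Map a wheel angle (0 to 360 degrees) onto two octaves of a pentatonic scale.
SCALE = [0, 3, 5, 7, 10]                   # minor-pentatonic intervals in semitones
BASE_NOTE = 48                             # MIDI C3, chosen arbitrarily

def wheel_to_midi(angle_deg, octaves=2):
    """Quantize a wheel position into one of len(SCALE) * octaves MIDI notes."""
    steps = len(SCALE) * octaves
    idx = int((angle_deg % 360.0) / 360.0 * steps)
    octave, degree = divmod(idx, len(SCALE))
    return BASE_NOTE + 12 * octave + SCALE[degree]

# Sweeping the wheel walks up a playable staircase of pitches.
for angle in range(0, 360, 36):
    print(angle, wheel_to_midi(angle))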

MS: You’ve been working with commercial radio for a long time now. I’m curious if you’ve noted any changes to that particular stream of media and how that’s impacted your work.

JF: Well, part of it’s a little sad because when I started doing this, radio was more monolithic. Everybody knew half the songs on any of the pop stations. I don’t feel that’s the same thing now. Radio, even mainstream commercial radio, is in its niches. There was a sort of lingua franca of pop in the heyday of Michael Jackson and Madonna and Culture Club. They were so ubiquitous and corporate and massively popular. I was dismantling this common mainstream.

I have developed my aesthetic, but I haven’t really adapted. That’s just the way it goes.

I have developed my aesthetic, but I haven’t really adapted. That’s just the way it goes. My projects take absurd numbers of years to fully play out, and that’s more acceptable in the movie business than it is in the music business. But I’m here, and so part of what RADIO WONDERLAND signifies has evolved out from under me. I’m using vintage technology now in a way that I wasn’t back then by virtue of not changing the technology. Very recently, I decided to use AM radio because I need more topical stuff because of what’s happening in the world. That’s one thing that I decided only in the last few months. It’s not enough for me to know that crazy stuff is happening in the world. They’re kind of talking about it on NPR, but I want to be dealing with more commercial culture and they’re not talking about global warming on the rock station.

MS: Not just RADIO WONDERLAND but also your work with headphone-driven performance leads me to thoughts of how it pushes and pulls on the ideas of Cage, which is something you address specifically on your website:

It celebrates randomness in a way that’s utterly different from Cage. Chance choices can be simply better in the right context.

What are the elements of that “right context”?

JF: Well, there’s no one right context. But if you can create a context in which the best choice is going to be by the roll of the dice, you’ve created a beautiful situation.

I guess what I’m talking about is hey, we’re stochastic instead of completely random. I like the negotiation of what’s chance and what’s not chance, and also the extremes of how much I prepare, how much I work on my algorithms, and then how much I’m dependent on what happens to be on the radio or, with headphone-driven performance, how rigorous my input is and how it interacts with the complete lack of control of the performers. The chance choice can be the right choice, if in the right context. Building the kind of context that can do that gives me something that to my ears is just better than any other way. And it’s such a beautiful thing. You feel like you’re tapping into something, instead of sort of cheating it. Well, there’s my chord progression and if I avoid all the leading tones in the first half of the phrase, and then I hit octaves in the second half, then it will kind of cover up the fact that this is a lame chord progression. No, no, no! I want this. I want the dappled sunlight to fall on my fabric and it just has to be good enough fabric so that it looks good, however the sunlight falls on it. Something like that.

MS: I want to dig further into the process of the headphone-driven performance and learn more about what is really happening in those headphones—the audio score, if you will—that is generating the performance you want. Can you pull the curtain back? I’m sure that there’s a lot of thinking that went on with why you’re even doing that in the first place.

JF: You want to understand the mechanics.

MS: Yes, but you can be philosophical too.

JF: What the performers are hearing is mostly spoken word and some singing, and a lot of the spoken word is taken from very expressive, emotional parts of old movies. Like Richard Burton bellowing.

Just to be clear, I have six different channels of headphone material, all independent. So they can be unison or not, and they can have conversations and such. But it’s completely, rigorously timed because they’re not separate tape decks that are running out of synch; they’re all coming from the same multi-track sound source. The synch is maintained, and the accompanying music is on two additional tracks for left and right playback over a PA system. So the musical accompaniment and all six headphone tracks are audio scores—or audio parts, you could say—sent out via a headphone feed to the performers.
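
To picture the plumbing, here is a minimal sketch of that single-clock idea in Python, assuming an eight-channel source file played through the sounddevice and soundfile libraries. The file name and channel layout are assumptions for illustration; this is not Fried’s actual setup.

```python
# Six headphone "audio scores" plus a stereo PA accompaniment, all read from
# one eight-channel file and started by one playback call, so nothing drifts.
import sounddevice as sd
import soundfile as sf

audio, sr = sf.read("audio_score_8ch.wav")  # assumed file; shape: (frames, 8)

# Channels 0-5: the six independent headphone parts.
# Channels 6-7: stereo musical accompaniment for the house PA.
sd.play(audio, sr)  # one clock for all eight tracks
sd.wait()           # block until playback finishes
```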

My instruction to them is not to repeat immediately after the input, which would be a sensible thing to do, but my instruction is to talk along with the input, which is not sensible. It’s ridiculous. It’s impossible. I’m asking them to be listening and talking at the same time, which kind of ruins their chances of hearing most of it, because they’re talking over it. But the headphones are fairly loud. They’re listening, they’re picking up stuff, and they’re vocalizing and catching stuff as they can, and as the headphone material repeats—and it repeats a lot—they get more of it and their proportion of gibberish to regular language gets more towards the regular language. I work with performers, one-on-one or in a group of two or three people, I demonstrate, I have them try this. It takes some understanding and most people don’t really believe until they try it that this really means doing this ridiculous thing of talking over. Now, sometimes your cue to start talking is the input itself. So obviously at that moment, the performer will enter late. I know that. That’s just the laws of physics. But I tell them, don’t think about that. You are there the whole time; just imagine that and keep on jumping ahead to the present moment. Try this for about a minute, and then you’ll kind of find a place where you can just go.

Headphone Driven Performance (demo)

Practice track for two performers (stereo)

One thing I say to them is you are doing this with utter confidence, believing that you’re absolutely getting it. That input, as you are saying it to yourself, is you. You are that accurate and you have that much confidence. At the same time, I’m not saying just pretend everything’s perfect because I told you to. I want the performers to really be trying. It takes effort. It takes a lot of concentration. You’re tuned into what’s happening. You’re picking up stuff, so you’re keeping these two things going. You’re working, but you also are constantly outputting with complete gusto. This kind of conversation over a couple of hours of demonstrating gets good performers in a place where I need them to be to do this, and so it comes out this sort of proto-language—half gibberish, half non-gibberish.

This evolved from a party game with these performance artists that I was collaborating with, and they called this party game the Nancy Sinatra game, because they were using a cassette tape of Nancy Sinatra’s greatest hits. I kind of took the idea for my own compositions and started making my own source tapes with the musical accompaniment. That covers a lot of it, doesn’t it?

MS: That does cover a lot of it, and it leads me very neatly to my next question, because even before knowing that bit of backstory, I was already struck by how big a role game play, or a puzzle to solve in the moment, figures in both the headphone-driven performance and RADIO WONDERLAND. Because you have a structure and there are rules, but then you’re getting chance-y things thrown into the mix, and then you’re having to do something with that for an audience.

JF: The game is how I handle the input. That makes it exciting for me. One thing I sometimes say is that I feel like I come from a planet where it’s not live music unless it’s completely unexpected. If it comes from a score and you’ve rehearsed it, what’s that? You can’t do that. That’s just cheating. That’s not anything. Where we come from, live is where you deal with life as it comes, or something like that.

I feel like I come from a planet where it’s not live music unless it’s completely unexpected. If it comes from a score and you’ve rehearsed it? That’s just cheating.

I don’t actually come from that planet, but this sort of thing is compelling to me. It is such a great discipline, and it also puts the emphasis on things that I think should be emphasized. In this case, when it comes to RADIO WONDERLAND, it’s the process. It’s the juxtaposition. It’s what I do with it, as opposed to choosing the perfect sample—which would be, I think, just an awful way for me to compose. I’m kind of a perfectionist. So, given that, what would I do? I’d go over what’s in the commercial media and decide what’s best to dismantle because it’s sonically good, but more importantly, the content is what I think is just the thing that needs to be interrogated and subverted. I’m exhausted just thinking about that. I don’t want that. It’s not a good compositional challenge for me. It might be sort of a moneymaker, if I can grab something that’s so telling and it’s so hysterically funny. Then maybe I have to bargain to get the rights to it. Then I cut it up, and I make it into a dance track that could be fun and maybe get a lot of attention, but that’s so not the discipline that I want. To me, if I can develop the algorithms and train myself as a performer to deal with it as it comes in, those are good musical processes. That’s good performance training. It’s going to be a good performance.

It’s amazing how well things fit together, how the synchronicity seems to come up again and again. I remember one time when Will Smith, the movie star, was in the headlines a lot. I got the name Will Smith off the radio, and someone said to me, “Unbelievable! How did you get that? It’s so amazing that you got that because he’s iconic, and it’s such a coup.” Well, but that’s how this thing works because the stuff that’s the most popular comes up the most. And I love that. I find I’ve really learned a lesson that you can take two different pop songs from two different times—let’s say a commercial or a station ID and a pop song—cut them up, try to juxtapose them tonally, and your odds are better than even that they will somehow work.

Inside Fried’s home studio

Now maybe I’ve had this sort of brilliance at improvising and choosing things that I don’t give myself credit for, but I think a big part of it is that there’s more sense in the stuff that we would grab by chance than we ever imagined. When I first made RADIO WONDERLAND, I made sure that there would be a means to take any of the individual bits and suck away the pitch—the De-Pitcher, I called it. Turns out what I used was ring modulation. Boom! Computationally, it’s incredibly cheap and easy, but I found after a while—it took me a long time to even believe it—I almost never have to use it! The pop song that I get 15 minutes after I grab the other pop song is gonna work. Or I can transpose with the wheel, so I have these five different bits from a pop song or a commercial from 15 minutes ago. Here’s a new slab of audio. I take a couple of different bits, juxtapose them, they’re in rhythm and maybe two thirds of the time I need to transpose with the wheel. And that’s it. I never suspected it would be that easy. I was kind of terrified. I figured you take two random songs, even if they’re both based on A-440, then we have like 24 different choices of different modes and stuff, different keys. They’re not going to match. They’re going to be badly dissonant in that way that’s just not fun musically, especially when I’m trying to be funky and groovy and melodic in a more-or-less conventional sense. It’s just not going to work out, and I’m going to need the De-Pitcher. I’m going to have to transpose like mad, that’s just how it is. That’s going to be part of the game of RADIO WONDERLAND. And it turns out that it wasn’t. It just tends to work.
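
Ring modulation really is that cheap: multiply the input by a sine-wave carrier and every partial in the signal is replaced by sum and difference frequencies, which usually land on no clear pitch. Here is a minimal NumPy sketch of such a “de-pitcher,” with an arbitrary carrier frequency and function name that are not taken from the RADIO WONDERLAND software:

```python
import numpy as np

def de_pitch(signal, sample_rate, carrier_hz=441.0):
    """Ring modulation: multiply the input by a sine carrier.

    Each input partial at frequency f comes out as (f + carrier_hz) and
    (f - carrier_hz), which generally sounds inharmonic and pitch-less.
    """
    t = np.arange(len(signal)) / sample_rate
    return signal * np.sin(2.0 * np.pi * carrier_hz * t)

# Example: "de-pitch" one second of a 440 Hz tone at 44.1 kHz.
sr = 44100
tone = np.sin(2.0 * np.pi * 440.0 * np.arange(sr) / sr)
no_pitch = de_pitch(tone, sr)
```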

MS: Does this process ever feel like it “fails”? Or maybe just that you couldn’t easily see how you were going to make it work in a way that was going to satisfy you and you had to sweat through that on stage? It sounds like that hasn’t happened.

JF: Oh, it happens and of course I blame myself. To the extent that I take credit when it works well, I also blame myself when I think it isn’t funky. I’m highly self-critical and I also have this absurd metric where I want it to be as danceable as my favorite dance track, even though that was worked over in a studio for three weeks and I have five minutes in front of people. I do have to scramble, and a lot of it has to do with timing. It’s also a question of how well I can hear, because it’s a most unforgiving setup in terms of monitoring.

If you’re in a rock band, or even if you’re playing from a score in a formal concert setting, you know your instrument is tuned. You know where the underlying beat is. You know what the conductor’s doing. You know where your hands are. You’re okay, even if you can’t hear that great. In a rock band, things are loud and chaotic, but your guitar has frets and you have your tuner. You feel the kick drum. You’re good to go. But with me, I don’t know what my instrument is until I’m on stage with it. I’m taking a piece of radio, usually around one second, and I cut it into eight bits and deploy them. I need to get a sense of how they differ from each other and what they sound like, and then decide how I want to further deploy them and transpose them. I have to hear them really well. I can’t decide that since my finger’s on the right fret and I know my Telecaster and it’s in tune, I’m okay. Otherwise I’m kind of sunk. So it really depends on hearing them well.
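
To make the “eight bits” concrete: one second of audio is simply divided into eight slices of roughly 125 ms each, which can then be auditioned, reordered, and transposed. A toy sketch in NumPy, where a random buffer stands in for a grabbed second of radio (none of this is the actual RADIO WONDERLAND code):

```python
import numpy as np

def slice_grab(buffer, n_bits=8):
    """Cut a grabbed buffer into n_bits roughly equal slices."""
    return np.array_split(buffer, n_bits)

sr = 44100
grab = np.random.randn(sr)  # stand-in for ~1 second of sampled radio
bits = slice_grab(grab)     # eight ~125 ms chunks, ready to deploy
```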

MS: Why is the dancing so core to you?

JF: It’s a metric that I can believe in, and it’s so great to have that metric as a composer. I almost feel a little embarrassed because it’s so basic. A lot of my favorite music has never been assessed on the basis of whether or not people dance, and it’s successful on the basis of much more subtle things, but I’m in this situation.

But in addition to that metric, I love dancing. I sometimes find myself bopping my head in music concerts when it’s not really thought of as head-bopping music, but I’m hearing a pulse. Okay, maybe in that situation, maybe you could argue that I’m missing something. But there are many cases where I feel like no, I’m not. I am moved and I’m moving, and I’m immersed and involved. And I just love it.

And when the emotion isn’t completely positive, when it’s not just catharsis or love, when it’s sad, angry, difficult, and it’s danceable, oh that’s so powerful. It’s dark, but there’s this cathartic dancing. It can work so, so well. And I go out dancing; I’m still going to clubs. I feel a connection to that culture or cultures. I am also looking forward to going back to other stuff. There are areas I want to go with it that aren’t quite so dance-y, but the initial concept is so focused on that, mostly because of this idea of a metric.

And what a great guide it is. Because otherwise, if I was going to do a sound collage with radio and sophisticated algorithms, it doesn’t matter where you go with it. Putting RADIO WONDERLAND through this almost absurd metric of having to be done in real time, without choice of material, and having it be danceable, and sort of making it through to the other side, gives me these incredibly powerful tools: software which I intend to finally develop further now that I have the album out. I think I’ll be able to do longer-scale things and different time scales. It won’t be as much about dancing, which is a little bit like the dance music artists that branch out.

I kind of imagine that trajectory. This first album is basically a bunch of dance tracks with kind of a slower one at the end, but even the slower one at the end has this boom-boom bass drum. I like that trajectory, not because it matches to some sort of commercial flight pattern, but artistically, that discipline and those rules are putting me in a great position for the next step.

It’s a little bit like my performance technology which, believe it or not, does not allow me to loop anything that I have just played. It allows me to loop what was just on the radio, but when I process the radio with the shoe or the wheel, that doesn’t loop. It’s crazy if you think of the current state of Ableton Live and live processing technology, which is all about the live looping. You’re a soloist with your instrument and a bunch of pedals and software. You play your thing, you loop your thing, you play over the thing you looped. I don’t do that with RADIO WONDERLAND. If I’m not hitting the shoe, that sound doesn’t come out, and it has been such a discipline over the past few years to perform that way.

Now I’m ready to revise my software and say okay, I’m going to include the ability to retain that pattern. When I transpose on the wheel, I’ll make a riff, and here’s this piece of radio, it’s deployed over one bar. It’s got some nice syncopation, but it’s all taken from one second of radio. Then I transpose it with the wheel, so all of a sudden we have a four-bar phrase, and it’s fun, it’s tonal, and there’s something cool about the transition because it’s transposing a whole chord, which is a little bit like classic house music where there’s a sample and the musician just has one finger on the keyboard transposing that sample. That’s part of the house music sound that I really like. I do that with the wheel right now, but if I have that four-bar pattern, it stops being a four-bar pattern when I turn away from the wheel and go back to the shoes, or what have you. But I think it’s been more interesting, at least for now, that I got to this point without these various crutches or enhancements.
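
That one-finger, whole-chord transposition can be approximated with naive resampling, which shifts pitch and playback length together, much like the old hardware samplers behind the house sound he mentions. A rough sketch in NumPy; the function and the semitone values are illustrative assumptions, not Fried’s software:

```python
import numpy as np

def transpose(bit, semitones):
    """Resample a slice to shift it by the given number of semitones.

    Pitch and duration change together, as on a classic sampler keyboard.
    """
    ratio = 2.0 ** (semitones / 12.0)          # frequency ratio
    positions = np.arange(0, len(bit), ratio)  # read through faster or slower
    return np.interp(positions, np.arange(len(bit)), bit)

# Example: re-strike one radio slice at different transpositions to build a
# longer phrase (semitone choices here are arbitrary).
sr = 44100
bit = np.random.randn(sr // 8)  # stand-in for one ~125 ms radio slice
phrase = np.concatenate([transpose(bit, s) for s in (0, 0, 5, -2)])
```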

Fried's software

MS: So a few times since we’ve started, you’ve mentioned the milestone position this record holds in your mind. Let’s talk about the fact that you have a new record out.

JF: That’s right.

MS: Congratulations!

JF: Thank you very much.

MS: Why did this record become so important for you? Every bit of the philosophy you’re underlining here is how exciting it is that it’s live. It’s live radio. You’re doing all the processing live. Why the hell did you want to make a record?

JF: You know, it’s funny, the turntablist Maria Chavez has talked about how she does not release recordings. And boy, I respect that. I’m a good candidate for not releasing recordings, but I wanted to. For one thing, and I’m glad you reminded me of this, one of the motivations of RADIO WONDERLAND was to become prolific because my process became slower and slower. I had this thing that became Headset Sextet. I finished it—or so I thought—in ’94, and then about three days before the opening night at La MaMa, I realized no, this is too good not to make it right. So I renamed it Work In Progress, and then I spent about another five years revising it, but the time scale is indefensible. It’s just absurd, but I’m proud I finally finished it.

But with RADIO WONDERLAND, I thought okay, let this be a ticket to being prolific. The album is part of that process. Can I be prolific in that I generate this new material and can have it out on recordings, which do this great job of representing you when you’re not there playing it? I never had a full album out, which seems crazy because in the ‘80s I had a record deal on a major label. I worked on remixes for famous recording artists. I work with recording technology, and yet I didn’t have my own album.

So the emotional stakes became kind of high, and it’s too bad because I’m older now, and maybe I’m less resilient as far as the sheer emotional strain of getting it all done. Part of the test of RADIO WONDERLAND is: Are these algorithms, or the algorithms plus me manipulating them, are they so robust that this can be a dance groove even without the loud PA and me up there in the excitement and electricity of live radio? I love that electricity. I live for it, and it is still fundamentally a live show. But I wanted to put it to that test.

Given that I wanted this album to sound good to my ears, I knew there was going to be some post-production. Well, how much? That is something I had to answer by doing it. One thing I’m happy about—and this had a lot to do with my co-producer Marcelo Anez—is that each track really is taken from a single concert without any non-radio overdubs. Some of it is highly processed—more processed by a long shot than anything I was able to do on stage. But a lot of this extra processing I can do on stage in the future. So it’s somewhat of a prospectus for new projects.

Seize the Means cover

Listen/Buy via Clang/iTunes/Spotify. Also available on vinyl or USB drive. No CDs!

MS: What about the fact that you’re going back and revisiting the work for this, because you’ve avoided that in the live version quite explicitly. It was all about the new, the first brush, and now you’re going back and not just looking at these pieces once, but looking at them many times as you crafted them into an album.

JF: Well, I did resist that. I did a sort of test album—it was just three songs—a few years ago, where I chose three different concerts that I edited, not very carefully. I have hundreds of concert recordings, so isn’t it the perfect test of RADIO WONDERLAND to pick concerts at random and see how well they work as recordings? That was really dumb. What I want to do is choose the best concerts, and for me, a lot of that was the best grooves. It makes it a heck of a lot easier to go through hundreds of hours of concerts when you’re looking for good grooves, as opposed to simply looking for the best music. In order to favorably represent RADIO WONDERLAND, I realized what I had to do was listen more and edit less. So I went through and listened and listened and listened, and chose the best shows, the ones which needed the least amount of editing. And that felt fine. I’m very focused on live and real time and all the ephemeral stuff that we talked about, but I also like to geek out in a studio. I’ve long used recording technology and I love making records. This was a good reason to go and get into that headspace.

Some of the issues that I had to address on the album were almost purely technical having to do with the low end, and I can address that with the next iteration of the software, and that’s a really exciting prospect. So maybe instead of working on a track for three weeks before it’s really ready for a final mix, I can work on a track for a day before it’s ready for a final mix. My fantasy is that I will be able to put out as a live recording whatever I did that night without any post-production.

MS: But weren’t you distilling to a larger degree, because these tracks are like seven minutes, and it does seem like there’s a ritual to RADIO WONDERLAND performance. I don’t know if they’re always 30 minutes, but it has that kind of scope. And then you’re condensing it in some way.

JF: Oh, absolutely. Part of the process is to distill a 30-minute concert into a four- to eight-minute album track and not to pretend that they’re mini RADIO WONDERLAND concerts. The idea is to take a half hour to create a great groove, and that’s going to create a monster five-minute radio mix and twelve-minute remix of a dance track. It is perhaps an easy adjunct to the RADIO WONDERLAND concert format, but that is the needle I seem to be trying to thread. And it’s worked out okay. But you’re absolutely right. That’s a crucial part of it. Yes, I’m condensing them.

Oh, you brought that up because I was talking about releasing a live concert as is. Yeah, that would have to be a different thing. But that’s not what the album was. The album was to see, if I throw you right into the middle of the groove, is this going to make sense without the construction of the groove and without me jumping around and spinning wheels and stuff?

Fried's desk