Category: NewMusicBox

From the Machine: Realtime Algorithmic Approaches to Harmony, Orchestration, and More

As we discussed last week, the development of a realtime score, in which compositional materials can be continuously modified, re-arranged, or created ex nihilo during performance and displayed to musicians as musical notation, is no longer the stuff of fantasy. The musical and philosophical implications of such an advance are only beginning to be understood and exploited by composers. This week, I’d like to share some algorithmic techniques that I’ve been developing in an attempt to grapple with some of the compositional possibilities offered by realtime notation. These range from the more linear and performative to the more abstract and computation-intensive; they deal with musical parameters ranging from harmony and form to orchestration and dynamics. Given the relative novelty and almost unlimited nature of the subject matter (to say nothing of the finite space allotted for the task), consider this a report from one person’s laboratory, rather than anything like a comprehensive survey.

HARMONY & VOICE LEADING

How might we begin to create something musically satisfying from just this raw vocabulary?

I begin with harmony, as it is the area that first got me interested in modeling musical processes using computer algorithms. I have always been fascinated by the way in which a mechanistic process like connecting the tones of two harmonic structures, according to simple rules of motion, can produce such profound emotional effects in listeners. It is also an area that seems to still hold vast unexplored depths—if not in the discovery of new vertical structures[1], at the very least in their horizontal succession. The sheer combinatorial magnitude of harmonic possibilities is staggering: consider each pitch class set from one to twelve notes in its prime form, multiplied by the number of possible inversional permutations of each one (including all possible internal octave displacements), multiplied by the possible chromatic transpositions for each permutation—for just a single vertical structure! When one begins to consider the horizontal dimension, arranging two or more harmonic structures in succession, the numbers involved are almost inconceivable.

The computer is uniquely suited to dealing with the calculation of just such large data sets. To take a more realistic and compositionally useful example: what if we wanted to calculate all the inversional permutations of the tetrachord {C, C#, D, E} and transpose them to each of the twelve chromatic pitch levels? This would give us all the unique pitch class orderings, and thus the complete harmonic vocabulary, entailed by the pitch class set {0124}, in octave-condensed form. These materials might be collected into a harmonic database, one that we can sort and search in musically relevant ways, then draw on in performance to create a wide variety of patterns and textures.

First we’ll need to find all of the unique orderings of the tetrachord {C, C#, D, E}. A basic law of combinatorics states that there will be n! distinct permutations of a set of n items. This (to brush up on our math) means that for a set of 4 items, we can arrange them in 4! (4 x 3 x 2 x 1 = 24) ways. Let’s first construct an algorithm that will return the 24 unique orderings of our four-element set and collect them into a database.

example 1

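For readers who want to see this step spelled out in code rather than as a patch, here is a minimal sketch in Python (a stand-in for the Max example above, not the actual implementation), using the standard itertools library:

from itertools import permutations

# The tetrachord {C, C#, D, E} expressed as pitch classes (C = 0).
tetrachord = [0, 1, 2, 4]

# All 4! = 24 distinct orderings, collected into a simple database (a list).
ordering_db = [list(p) for p in permutations(tetrachord)]

print(len(ordering_db))   # -> 24
print(ordering_db[:3])    # -> [[0, 1, 2, 4], [0, 1, 4, 2], [0, 2, 1, 4]]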

Next, we need to transpose each of these 24 permutations to each of the 12 chromatic steps, giving us a total of 288 possible structures. To work something like this out by hand might take us fifteen or twenty minutes, while the computer can calculate such a set near-instantly.

example 2

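Continuing the sketch, the transposition step can be expressed as one more loop over the twelve chromatic levels (again a Python approximation of the patch, not the patch itself):

from itertools import permutations

tetrachord = [0, 1, 2, 4]                                   # {C, C#, D, E} as pitch classes
orderings = [list(p) for p in permutations(tetrachord)]     # the 24 orderings from above

# Transpose each ordering to all 12 chromatic levels (mod 12), yielding the
# complete 288-structure vocabulary for the set class {0124}.
harmonic_db = [
    [(pc + t) % 12 for pc in ordering]
    for ordering in orderings
    for t in range(12)
]

print(len(harmonic_db))   # -> 288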

The question of what to do with this database of harmonic structures remains: how might we begin to create something musically satisfying from just this raw vocabulary? The first thing to do might be to select a structure (1-288) at random and begin to connect it with other structures by a single common tone. For instance, if the first structure we draw is number 126 {F# A F G}, we might create a database search tool that allows us to locate a second structure with a common tone G in the soprano voice.

example 3:

To add some composer interactivity, let’s add a control that allows us to specify which voice to connect on the subsequent chord using the numbers 1-4 on the computer keypad. If we want to connect the bass voice, we can press 1; the tenor voice, 2; the alto voice, 3; or the soprano voice, 4. Lastly, let’s orchestrate the four voices to string quartet, with each structure lasting a half note.

example 4:
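A rough sketch of such a search-and-connect tool might look like this in Python (the voice numbering 1-4, bass to soprano, follows the keypad mapping described above; the note names and random choice among candidates are illustrative assumptions, and the string quartet orchestration and half-note durations are left out):

import random
from itertools import permutations

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Rebuild the 288-structure database; each entry is an ordering from bass to soprano.
orderings = [list(p) for p in permutations([0, 1, 2, 4])]
harmonic_db = [[(pc + t) % 12 for pc in o] for o in orderings for t in range(12)]

def next_structure(current, voice):
    """Choose a structure that shares the selected voice (1 = bass ... 4 = soprano)
    with the current structure, picked at random from all candidates."""
    idx = voice - 1
    candidates = [s for s in harmonic_db if s[idx] == current[idx] and s != current]
    return random.choice(candidates)

current = random.choice(harmonic_db)
print("current:", [NOTE_NAMES[pc] for pc in current])
print("next (hold soprano):", [NOTE_NAMES[pc] for pc in next_structure(current, 4)])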

This is a very basic example of a performance tool that can generate a series of harmonically self-similar structures, connect them to one another according to live composer input, and orchestrate them to a chamber group in realtime. While our first prototype produces a certain degree of musical coherence by holding one of the four voices constant between structures, it fails to specify any rules governing the movement of the other three voices. Let’s design another algorithm whose goal is to control the horizontal plane more explicitly and keep the overall melodic movement a bit smoother.

A first approach might be to calculate the total melodic movement between the current structure and each candidate structure in the database, filtering out candidates whose total movement exceeds a certain threshold. We can calculate the total melodic movement for each candidate by measuring the distance in semitones between each voice in the current structure and the corresponding voice in the candidate structure, then adding together all the individual distances.[2]

example 5.0

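In code, the total-movement calculation and filter might be sketched like this (assuming each structure is a list of numeric pitches, e.g. MIDI note numbers, ordered bass to soprano; the threshold value is just an example):

def total_movement(a, b):
    """Sum of the absolute semitone distances between corresponding voices."""
    return sum(abs(x - y) for x, y in zip(a, b))

def filter_by_movement(current, candidates, max_total=8):
    """Keep only candidates whose total melodic movement stays within the threshold."""
    return [c for c in candidates if total_movement(current, c) <= max_total]

# e.g. {F#4 A4 F4 G4} -> {G4 A4 E4 G4} moves 1 + 0 + 1 + 0 = 2 semitones in total
print(total_movement([66, 69, 65, 67], [67, 69, 64, 67]))   # -> 2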

While this technique will certainly reduce the overall disjunction between structures, it still fails to provide rules that govern the movement of individual voices. For this we will need an interval filter, an algorithm that determines the melodic intervals created by moving from the current structure to each candidate and only allows through candidates that adhere to pre-defined intervallic preferences. We might want to prevent awkward melodic intervals such as tritones and major sevenths. Or perhaps we’d like the soprano voice to move by step (ascending or descending minor and major seconds) while allowing the other voices to move freely. We will need to design a flexible algorithm that allows us to specify acceptable/unacceptable melodic intervals for each voice, including ascending movement, descending movement, and melodic unisons.

example 5.1

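One possible shape for such an interval filter, sketched in Python (voice indices run 0 = bass to 3 = soprano; intervals are signed, so negative values are descending and 0 is a melodic unison; the "allowed" dictionary shown is illustrative, and the same idea could just as easily be expressed as a list of forbidden intervals):

def interval_filter(current, candidates, allowed):
    """Keep candidates whose per-voice melodic intervals are all permitted.
    `allowed` maps a voice index to the set of signed intervals that voice may
    move by; voices not listed are unconstrained."""
    passed = []
    for cand in candidates:
        intervals = [c - p for p, c in zip(current, cand)]
        if all(iv in allowed[v] for v, iv in enumerate(intervals) if v in allowed):
            passed.append(cand)
    return passed

# Soprano (voice 3) may only move by step; the other voices move freely.
stepwise_soprano = {3: {-2, -1, 1, 2}}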

A final consideration might be the application of contrapuntal rules, such as the requirement that the lowest and highest voices move in either contrary or oblique motion. This could be implemented as yet another filter for candidate structures, allowing a contrapuntal rule to be specified for each two-voice combination.

example 5.2

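A sketch of that contrapuntal filter (again in Python, with 0 = bass and 3 = soprano; the classification of motion types is standard, though the exact rule set is of course up to the composer):

def motion_type(prev_a, next_a, prev_b, next_b):
    """Classify the motion between two voices as static, oblique, contrary, or similar."""
    da, db = next_a - prev_a, next_b - prev_b
    if da == 0 and db == 0:
        return "static"
    if da == 0 or db == 0:
        return "oblique"
    return "contrary" if (da > 0) != (db > 0) else "similar"

def contrapuntal_filter(current, candidates, rules):
    """`rules` maps a (voice_i, voice_j) pair to the set of motion types it allows."""
    passed = []
    for cand in candidates:
        if all(motion_type(current[i], cand[i], current[j], cand[j]) in allowed
               for (i, j), allowed in rules.items()):
            passed.append(cand)
    return passed

# Require the outer voices (bass and soprano) to move in contrary or oblique motion.
outer_voice_rule = {(0, 3): {"contrary", "oblique"}}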

Let’s create another musical example that implements these techniques to produce smoother movement between structures. We’ll broaden our harmonic palette this time to include four diatonic tetrachords—{0235}, {0135}, {0245}, and {0257}—and orchestrate our example for solo piano. We can use the same combinatoric approach as we did earlier, computing the inversional permutations of each tetrachord to develop our vocabulary. To keep the data set manageable, let’s limit generated material to a specific range of the piano, perhaps C2 to C6.

We’ll start by generating all of the permutations of {0235} and transposing each one so that its lowest pitch is C2, then do the same for each of the remaining three tetrachords. Before adding a structure to the database, we will apply a range check to make sure that no generated structure contains any pitch above C6. If it does, it will be discarded; if not, it will be added to the database. We will repeat this process for each chromatic step from C#2 to A5 (A5 being the highest chromatic step that will produce in-range structures) to produce a total harmonic vocabulary of 2976 structures.
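Here is one way that vocabulary-building step might be sketched in code. The "octave-condensed" voicing below, in which each voice sits at the smallest ascending interval above the voice beneath it, is my own assumption about how the orderings are realized as pitches; under that assumption the count comes out to the 2976 structures described above.

from itertools import permutations

C2, C6 = 36, 84                      # range limits as MIDI note numbers
TETRACHORDS = [[0, 2, 3, 5], [0, 1, 3, 5], [0, 2, 4, 5], [0, 2, 5, 7]]

def voice_from_bass(ordering, bass):
    """Realize an ordering of pitch classes as an octave-condensed voicing:
    each voice is placed at the smallest ascending interval above the previous one."""
    pitches = [bass]
    for prev_pc, next_pc in zip(ordering, ordering[1:]):
        pitches.append(pitches[-1] + (next_pc - prev_pc) % 12)   # 1-11 semitones upward
    return pitches

piano_db = []
for tetrachord in TETRACHORDS:
    for ordering in permutations(tetrachord):
        for bass in range(C2, C6 + 1):               # every chromatic bass note
            structure = voice_from_bass(ordering, bass)
            if max(structure) <= C6:                 # range check: discard anything above C6
                piano_db.append(structure)

print(len(piano_db))   # -> 2976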

Let’s begin our realtime compositional process by selecting a random structure from among the 2976. In order to determine the next structure, we’ll begin by running all of the candidates through our semitonal movement algorithm, calculating the distance between the voices of the first structure and those of every other structure in the database. To reduce disjunction between structures, but avoid repetitions and extremely small harmonic movements, let’s allow total movements of between 4 and 10 semitones. All structures that fall within that range will then be passed through to the interval check algorithm, where they will be tested against our intervallic preferences for each voice. Finally, all remaining candidates will be checked for violation of any contrapuntal rules that have been defined for each voice pair. Depending on how narrowly we set each of these filters, we might reduce our candidate set from 2976 to somewhere between 5 and 50 harmonic structures. We can again employ an aleatoric variable to choose freely among these, given that each has met all of our horizontal criteria.
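Chaining those filters together, the whole decision step might be sketched as a single function (reusing the total_movement, interval_filter, and contrapuntal_filter helpers from the earlier sketches; the parameter values are only defaults):

import random

def choose_next(current, database, movement_range=(4, 10),
                interval_rules=None, counterpoint_rules=None):
    """Run every candidate through the three horizontal filters described above,
    then choose freely (aleatorically) among whatever survives."""
    candidates = [c for c in database if c != current]

    # 1. total semitonal movement between 4 and 10 semitones
    lo, hi = movement_range
    candidates = [c for c in candidates if lo <= total_movement(current, c) <= hi]

    # 2. per-voice intervallic preferences
    if interval_rules:
        candidates = interval_filter(current, candidates, interval_rules)

    # 3. contrapuntal rules for each voice pair
    if counterpoint_rules:
        candidates = contrapuntal_filter(current, candidates, counterpoint_rules)

    return random.choice(candidates) if candidates else None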

To give this algorithm a bit more musical interest, let’s also experiment with arpeggiation and slight variations in harmonic rhythm. We can define four arpeggio patterns and allow the algorithm to choose one randomly for each structure that is generated.

example 6:

While this example still falls into the category of initial experiment or étude, it might be elaborated to produce more compositionally satisfying results. Instead of a meandering harmonic progression, perhaps we could define formal relationships such as multi-measure repeats, melodic cells that recur in different voices, or the systematic use of transposition or inversion. Instead of a constant stream of arpeggios, the musical texture could be varied in realtime by the composer. Perhaps the highest note (or notes) of each arpeggio could be orchestrated to another monophonic instrument as a melody, or the lowest note (or notes) re-orchestrated to a bass instrument. These are just a few extemporaneous examples; the possibility of achieving more sophisticated results is simply a matter of identifying and solving increasingly abstract musical problems algorithmically.

Here’s a final refinement to our piano étude, with soprano voice reinterpreted as a melody and bass voice reinforced an octave below on the downbeat of each chord change.

example 6.1:

ORCHESTRATION

In all of the above examples, we limited our harmonic vocabulary to structures that we knew were playable by a given instrument or instrument group. Orchestration was thus a pre-compositional decision, fixed before the run time of the algorithm and invariable during performance. Let’s now turn to the treatment of orchestration as an independent variable, one that might also be subject to algorithmic processing and realtime manipulation.

There are inevitably situations where theoretical purity must give way to expediency if one wishes to remain a composer rather than a full-time software developer.

This is an area of inquiry unlikely to arise in electronic composition, due to the theoretical lack of a fixed range in electronic and virtual instruments. In resigning ourselves to working with traditional acoustic instruments, the abstraction of “pure composition” must be reconciled with practical matters such as instrument ranges, questions of performability, and the creation of logical yet engaging individual parts for performers. This is a potentially vast area of study, one that cuts across disciplines such as mathematics/logic, acoustics, aesthetics, and performance practice. Thus, I must here reprise my caveat from earlier: the techniques I’ll present were developed to provide practical solutions to pressing compositional problems in my own work. While reasonable attempts were made to seek robust solutions, there are inevitably situations where theoretical purity must give way to expediency if one wishes to remain a composer rather than a full-time software developer.

The basic problem of orchestration might be stated as follows: how do we distribute n number of simultaneous notes (or events) to i number of instruments with fixed ranges?

Some immediate observations that follow:

a) The number of notes to be orchestrated can be greater than, less than, or equal to the number of instruments.
b) Instruments have varying degrees of polyphony, ranging from the ability to play only a single pitch to many pitches simultaneously. For polyphonic instruments, idiosyncratic physical properties of the instrument govern which kinds of simultaneities can occur.
c) For a given group of notes and a fixed group of instruments, there may be multiple valid orchestrations. These can be sorted by applying various search criteria: playability/difficulty, adherence to the relationships among instrument ranges, or a composer’s individual orchestrational preferences.
d) Horizontal information can also be used to sort valid orchestrations. Which orchestration produces the least/most amount of melodic disjunction from the last event per instrument? From the last several events? Are there specific intervals that are to be preferred for a given instrument moving from one event to another?
e) For a given group of notes and a fixed group of instruments, there may be no valid orchestration.

Given the space allotted, I’d like to focus on the last three items, limiting ourselves to scenarios in which the number of notes to be orchestrated is the same as the number of instruments available and all instruments are acting monophonically.[3]

Let’s return to our earlier example of four-part harmonic events orchestrated for string quartet, with each instrument playing one note. By conservative estimate, a string quartet has a composite range of C2 to E7 (36 to 100 as MIDI pitch values). This does not mean, however, that any four pitches within that range will be playable by the instrument vector {Violin.1, Violin.2, Viola, Cello} in a one note/one instrument arrangement.

example 7


The most efficient way to determine whether a structure is playable by a given instrument vector—and, if so, which orchestrations are in-range—is to calculate the n! permutations of the structure and pass each one through a per-note range check corresponding to each of the instruments in the instrument vector. If each note of the permutation is in-range for its assigned instrument, then the permutation is playable. Here’s an example of a range check procedure for the MIDI structure {46 57 64 71} for the instrument vector {Violin.1 Violin.2 Viola Cello}.

example 8

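In code, the range check might be sketched as follows. The written ranges here are my own conservative assumptions (roughly G3-E7 for the violins, C3-E6 for viola, C2-E5 for cello); with these values, the structure {46 57 64 71} yields exactly the six in-range permutations discussed below.

from itertools import permutations

INSTRUMENTS = ["Violin.1", "Violin.2", "Viola", "Cello"]
RANGES = {                        # assumed conservative ranges, as MIDI note numbers
    "Violin.1": (55, 100),
    "Violin.2": (55, 100),
    "Viola":    (48, 88),
    "Cello":    (36, 76),
}

def playable_orchestrations(structure, instruments=INSTRUMENTS):
    """Return every permutation of the structure that is in range, note for note,
    for the given instrument vector (one note per instrument)."""
    valid = []
    for perm in permutations(structure):
        if all(RANGES[inst][0] <= note <= RANGES[inst][1]
               for inst, note in zip(instruments, perm)):
            valid.append(perm)
    return valid

print(len(playable_orchestrations([46, 57, 64, 71])))   # -> 6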

By generating the twenty-four permutations of the harmonic structure ({46 57 64 71}, {57 64 71 46}, {64 71 46 57}, etc.) and passing each through a range check for {Violin.1 Violin.2 Viola Cello}, we discover that there are only six permutations that are collectively in-range. There is a certain satisfaction in knowing that we now possess all of the possible orchestrations of this harmonic structure for this group of instruments (leaving aside options like double stops, harmonics, etc.).

Although the current example only produces six in-range permutations, larger harmonic structures or instrument groups could bring the number of playable permutations into the hundreds, or even thousands. Our task, therefore, becomes devising systems for searching the playable permutations in order to locate those that are most compositionally useful. This will allow us to automatically orchestrate incoming harmonic data according to various criteria in a realtime performance setting, rather than pre-auditioning and choosing among the playable permutations manually.

There are a number of algorithmic search techniques that I’ve found valuable in this regard. These can be divided into two broad categories: filters and sorts. A filter is a non-negotiable criterion; in our current example, perhaps a rule such as “Violin 1 or Violin 2 must play the highest note.” A sort, on the other hand, is a method of ranking results according to some criterion. Perhaps we want to rank possible orchestrations by their adherence to the natural low-to-high order of the instruments’ ranges; we might order the instruments by the average pitch in their range and then rank permutations according to their deviation from that order. For a less common orchestration, we might decide to choose the permutation that deviates as much as possible from the instruments’ natural order.

example 9

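Building on the range-check sketch above, here is one way the filter and sort might be expressed. The deviation measure (comparing each note's rank within the chord against its instrument's natural rank by register) is just one plausible choice, but it does reproduce the result described below:

# Natural low-to-high order of the quartet, judged by the register of each range.
NATURAL_RANK = {"Cello": 0, "Viola": 1, "Violin.2": 2, "Violin.1": 3}

def highest_note_in_violins(perm, instruments=INSTRUMENTS):
    """Filter: Violin 1 or Violin 2 must play the highest note."""
    return instruments[perm.index(max(perm))] in ("Violin.1", "Violin.2")

def deviation_from_natural_order(perm, instruments=INSTRUMENTS):
    """Sort key: how far each instrument's note sits from where the natural
    low-to-high ordering of the instruments would place it."""
    pitch_rank = {note: i for i, note in enumerate(sorted(perm))}
    return sum(abs(pitch_rank[note] - NATURAL_RANK[inst])
               for inst, note in zip(instruments, perm))

candidates = [p for p in playable_orchestrations([46, 57, 64, 71])
              if highest_note_in_violins(p)]
least_common = max(candidates, key=deviation_from_natural_order)
print(least_common)   # -> (57, 71, 64, 46)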

By applying this filter and sort, the permutation {57 71 64 46} is returned for the instrument vector {Violin.1 Violin.2 Viola Cello}. As we specified, the highest note is played by either Violin 1 or Violin 2 (Violin 2 in this case), while the overall distribution of pitches deviates significantly from the instruments’ natural low-to-high order. Mission accomplished.

Let’s also expand our filtering and sorting mechanisms from vertical criteria to include horizontal criteria. Vertical criteria, like the examples we just looked at, can be applied with information about only one structure; horizontal criteria take into account movement between two or more harmonic structures. As we saw in our discussion of harmony, horizontal criteria can provide useful information such as melodic movement for each voice, contrapuntal relationships, total semitonal movement, and more; this kind of data is equally useful in assessing possible orchestrations. In addition to optimizing the intervallic movement of each voice to produce more playable parts, horizontal criteria can be used creatively to control parameters such as voice crossing or harmonic spatialization.

example 10


Such horizontal considerations can be combined with vertical rules to achieve sophisticated orchestrational control. Each horizontal and vertical criterion can be assigned a numerical weight denoting its importance when used as a sorting mechanism. We might assign a weight of 0.75 to the rule that Violin 1 or Violin 2 contains the highest pitch, a weight of 0.5 to the rule that voices do not cross between structures, and a weight of 0.25 to the rule that no voice should play a melodic interval of a tritone. This kind of weighted search more closely models the multivariate process of organic compositional decision-making. Unlike the traditional process of orchestration, it has the advantage of being executable in realtime, thus allowing a variety of indeterminate data sources to be processed according to a composer’s wishes.
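A weighted version of the search might be sketched like this, again building on the previous two sketches (the weights mirror the hypothetical ones just mentioned, and the previous orchestration used here is an arbitrary example):

def weighted_score(perm, prev_perm, instruments=INSTRUMENTS):
    """Rank a candidate orchestration by a weighted sum of soft criteria."""
    score = 0.0

    # weight 0.75: Violin 1 or Violin 2 plays the highest pitch
    if highest_note_in_violins(perm, instruments):
        score += 0.75

    # weight 0.5: no voices cross between the previous structure and this one
    prev_order = sorted(range(len(prev_perm)), key=lambda i: prev_perm[i])
    new_order = sorted(range(len(perm)), key=lambda i: perm[i])
    if prev_order == new_order:
        score += 0.5

    # weight 0.25: no voice moves by a melodic tritone (6 semitones)
    if all(abs(n - p) != 6 for n, p in zip(perm, prev_perm)):
        score += 0.25

    return score

previous = (55, 67, 62, 48)                      # hypothetical previous orchestration
best = max(playable_orchestrations([46, 57, 64, 71]),
           key=lambda p: weighted_score(p, previous))
print(best)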

While such an algorithm is perfectly capable of running autonomously, it can also be performatively controlled by varying parameters such as search criteria, weighting, and sorting direction. Other basic performance controls can be devised to quickly re-voice note data to different parts of the ensemble. Mute and solo functions for each instrument or instrument group can be used to modify the algorithm’s behavior on the fly, paying homage to a ubiquitous technique used in electronic music performance. The range check algorithm we developed earlier could alternatively be used to transform a piece’s instrumentation from performance to performance, instantly turning a work for string quartet and voice into a brass quintet. The efficacy of any of these techniques will, of course, vary according to instrumentation and compositional aims, but there is undoubtedly a range of compositional possibilities waiting to be explored in the domain of algorithmic orchestration.

IDEAS FOR FURTHER EXPLORATION

The techniques outlined above barely scratch the surface of the harmonic and orchestrational applications of realtime algorithms—and we have yet to consider several major areas of musical architecture such as rhythm, dynamics, and form! Another domain that holds great promise is the incorporation of live performer feedback into the algorithmic process. Given my goal of writing a short-form post and not a textbook, however, I’ll have to be content to conclude with a few rapid-fire ideas as seeds for further exploration.

Dynamics:

Map MIDI values (0-127) to musical dynamics markings (say, ppp to fff) and use a MIDI controller with multiple faders/pots to control musical dynamics of individual instruments during performance. Alternatively, specify dynamics algorithmically/pre-compositionally and use the MIDI controller only to modify them, re-balancing the ensemble as needed.
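As a minimal sketch of the mapping itself (the eight-level dynamic scale is an arbitrary choice):

DYNAMICS = ["ppp", "pp", "p", "mp", "mf", "f", "ff", "fff"]

def midi_to_dynamic(value):
    """Map a MIDI controller value (0-127) onto a dynamic marking."""
    return DYNAMICS[min(value * len(DYNAMICS) // 128, len(DYNAMICS) - 1)]

print(midi_to_dynamic(0), midi_to_dynamic(64), midi_to_dynamic(127))   # -> ppp mf fff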

Rhythm:

Apply the combinatoric approach used for harmony and orchestration to rhythm, generating all the possible permutations of note attacks and rests within a given temporal space. Use probabilistic techniques to control rhythmic density, beat stresses, changes of grid, and rhythmic variations. Assign different tempi and/or meters to individual members of an ensemble, with linked conductor cursors providing an absolute point of reference for long-range synchronization.

Form:

Create controls that allow musical “snapshots” to be stored, recalled, and intelligently modified during performance. As an indeterminate composition progresses, a composer can save and return to previous material later in the piece, perhaps transforming it using harmonic, melodic, or rhythmic operations. Or use an “adaptive” model, where a composer can comment on an indeterminate piece as it unfolds, using a “like”/”dislike” button to weight future outcomes towards compositionally desirable states.

Performer Feedback:

Allow one or more members of an ensemble to improvise within given constraints and use pitch tracking to create a realtime accompaniment. Allow members of an ensemble to contribute to an adaptive algorithm, where individual or collective preferences influence the way the composition unfolds.

Next week, we will wrap up the series with a roundtable conversation on algorithms in acoustic music with pianist/composer Dan Tepfer, composer Kenneth Kirschner, bassist/composer Florent Ghys, and Jeff Snyder, director of the Princeton Laptop Orchestra.



1. These having been theoretically derived and catalogued by 20th-century music theorists such as Allen Forte. I should add here, however, that while theorists like Forte may have definitively designated all the harmonic species (pitch class sets of one to twelve notes in their prime form), the totality of individual permutations within those species still remains radically under-theorized. An area of further study that would be of interest to me is the definitive cataloguing of the n! inversional permutations of each pitch-class set of n notes. The compositional usefulness of such a project might begin to break down with structures where n > 8 (octachords already producing 40,320 discrete permutations), but would nonetheless remain useful from an algorithmic standpoint, where knowledge of not only a structure’s prime form but also its inversional permutation and chromatic transposition could be highly relevant.


2. In calculating the distance between voices, we are not concerned with the direction a voice moves, just how far it moves. So whether the pitch C moves up a major third to E (+4 semitones) or down a major third to Ab (-4 semitones) makes no difference to us in this instance; we can simply take the absolute value, yielding a distance of 4.


3. Scenarios in which the number of available voices does not coincide with the number of pitches to be orchestrated necessitate the use of the mathematical operation of combination and a discussion of cardinality, which is beyond the scope of the present article.

Delays, Feedback, and Filters: A Trifecta

My last post, “Delays as Music,” was about making music using delays as an instrument, specifically in the case of the live sound processor. I discussed a bit about how delays work and are constructed technically, how they have been used in the past, how we perceive sound, and how we perceive different delay times when used with sounds of various lengths. This post is a continuation of that discussion. (So please do read last week’s post first!)

We are sensitive to delay times as short as a millisecond or less.

I wrote about our responsiveness to minuscule differences in time, volume, and timbre between the sounds arriving in our ears, which is our skill set as humans for localizing sounds—how we use our ears to navigate our environment. Sound travels at approximately 1,125 feet per second, and though all the sound waves we hear are travelling at the same speed, the low frequency waves (which are longer) tend to bend and wrap around objects, while high frequencies are absorbed by or bounce off of objects in our environment. We are sensitive to delay times as short as a millisecond or less, as related to the size of our heads and the physical distance between our ears.  We are able to detect tiny differences in volume between the ear that is closer to a sound source and the other.  We are able to discern small differences in timbre, too, as some high frequency sounds are literally blocked by our heads. (To notice this phenomenon in action, cover your left ear with your hand and with your free hand, rustle your fingers first in the uncovered ear and then in the covered one.  Notice what is missing.)

These psychoacoustic phenomena (interaural time difference, interaural level difference, and head shadow) are useful not only for an audio engineer, but are also important for us when considering the effects and uses of delay in electroacoustic musical contexts.

My “aesthetics of delay” are similar to what audio engineers use, as a rule of thumb, for using delay as an audio effect or to add spatialization.  The difference in my approach is that I want a way to recognize and choose sounds I can put into a delay, so that I can predict what will happen to them in real time as I am playing with various parameter settings. I use the changes in delay times as a tool to create and control rhythm, texture, and timbral changes. I’ve tried to develop a kind of electronic musicianship, which incorporates acousmatic listening and quick responses, and I hope to share some of this.

It’s all about the overlap of sound.

As I wrote, it’s all about the overlap of sound.  If a copy of a sound, delayed by 1-10ms, is played with the original, we simply hear it as a unified sound, changed in timbre. Short delayed sounds nearly always overlap. Longer delays might create rhythms or patterns; medium length delays might create textures or resonance.  It depends on the length of the sound going into the delay, and what that length is with respect to the length of the delay.

This post will cover more ground about delays and how they can be used to play dynamic, gestural, improvised electroacoustic music. We also will look at the relationship between delays and filtering, and in the next and last post I’ll go more deeply into filtering as a musical expression and how to listen and be heard in that context.

Mostly, I’ll focus on the case of the live processor who is using someone else’s sound or a sound that cannot be completely foreseen (and not always using acoustic instruments as a source; Joshua Fried does this beautifully with sampling/processing live radio in his Radio Wonderland project).  However, despite this focus, I am optimistic that this information will also be useful to solo instrumentalists using electronics on their own sound as well as to composers wanting to build improvisational systems into their work.

No real tips and tricks here (well, maybe a few), but I do hope to communicate some ideas I have about how to think about effects and live audio manipulation in a way that outlasts current technologies. Some of the examples below will use the Max programming language, because it is my main programming environment, but also because it is well suited to diagramming and explaining my points.

We want more than one, we want more than one, we want…

As I wrote last week, musicians often want to be able to play more than one delayed sound, or to repeat that delayed sound several times. To do this, we either use more delays, or we use feedback to route a portion of our output back into the input.

When using feedback to create many delays, we route a portion of our output back into the input of the delay. By routing only some of the sound (not 100%), the repeated sound is a little quieter each time and eventually the sound dies out in decaying echoes.  If our feedback level is high, the sound may recirculate for a while in an almost endless repeat, and might even overload/clip if we add new sounds (like a too full fountain).
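In signal terms, this routing is a feedback comb filter: y[n] = x[n] + g * y[n - D], where D is the delay in samples and g (between 0 and 1) is the feedback amount. Here is a bare-bones Python sketch of that idea, not any particular pedal or patch:

def feedback_delay(x, delay_samples, feedback):
    """A single delay line with feedback: y[n] = x[n] + feedback * y[n - delay]."""
    y = [0.0] * len(x)
    for n in range(len(x)):
        delayed = y[n - delay_samples] if n >= delay_samples else 0.0
        y[n] = x[n] + feedback * delayed
    return y

# A single impulse through a 4-sample delay at 50% feedback dies out in decaying echoes:
print([round(v, 3) for v in feedback_delay([1.0] + [0.0] * 15, 4, 0.5)])
# -> 1.0 at the start, then 0.5, 0.25, 0.125 arriving every 4 samples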

Using multi-tap delays, or a few delays in parallel, we can make many copies of the sound from the same input, and play them simultaneously.  We could set up different delay lengths with odd spacings, and if the delays are longer than the sound we put in, we might get some fun rhythmic complexity (and polyrhythmic echoes).  With very short delays, we’ll get a filtered sound from the multiple copies being played nearly simultaneously.

Any of these delayed signals (taps) could in turn be sent back into the multi-tap delay’s input in a feedback network.   It is possible to put any number and combination of additional delays and filters in the feedback loop as well, and these complex designs are what make the difference between all the flavors of delay types that are commonly used.

It doesn’t matter how we choose to create our multiple delays.  If the delays are longer than the sounds going into them, then we don’t get overlap, and we’ll hear a rhythm or pattern.  If the delays are medium length (compared to our input sound), we’ll hear some texture or internal rhythms or something undulating.  If the delays are very short, we get filtering and resonance.

Overlap is what determines the musical potential for what we will get out of our delay.

The overlap is what determines the musical potential for what we will get out of our delay. For live sound processing in improvised music, it is critical to listen analytically (acousmatically) to the live sound source we are processing.  Based on what we hear, it is possible to make real-time decisions about what comes next and know exactly what we will get out.

Time varying delay – interpolating delay lines

Most cheaper delay pedals and many plugins make unwanted noise when the delay times are changed while a sound is playing. Usually described as clicks, pops, crackling, or “zipper noise,” these sounds occur because the delays are “non-interpolating”: the changes in the delay times are not smooth, causing the audio to be played back with abrupt changes in volume. If you never change delay times during performance, a simple, non-interpolating delay is fine.

Changing delay times is very useful for improvisation and turning delay into an instrument. To avoid the noise and clicks we need to use “interpolating” delays, which might mean a slightly more expensive pedal or plugin or a little more programming. As performers or users of commercial gear we may not be privy to all the different techniques being used in every piece of technology we encounter. (Linear or higher order interpolation, windowing/overlap, and selection of delayed sounds from several parallel delay lines are a few techniques.) For the live sound processor / improviser what matters is: Can I change my delay times live?  What artifacts are introduced when I change them?  Are they musically useful to me?  (Sometimes we like glitches, too.)

Doppler shift!  Making delays fun.

A graphic representation of the Doppler Shift

An interesting feature/artifact of interpolating delays is the characteristic pitch shift that many of them make.  This pitch shift is similar to how the Doppler shift phenomenon works.

The characteristic pitch shift that many interpolating delays make is similar to how the Doppler Effect works.

A stationary sound source normally sends out sound waves in all directions around itself, at the speed of sound. If that sound source starts to move toward a stationary listener (or if the listener moves toward the sound), the successive wave fronts start getting compressed in time and hit the listener’s ears with greater frequency.  Due to the relative motion of the sound source to the listener, the sound’s frequency has in effect been raised.  If the sound source instead moves away from the listener, the opposite holds true: the wave fronts are encountered at a slower rate than previously, and the pitch seems to have been lowered. [Moore, 1990]

OK, but in plainer English: When a car drives past you on the street or highway, you hear the sound go up in pitch as it approaches, and as it passes, it goes back down.   This is the Doppler Effect.  The sound waves always travel at the same speed, but because they are coming from a moving object, their frequency goes up as it approaches and then down as it moves away from you.

A sound we put into a delay line (software / pedal / tape loop) is like a recording.  If you play it back faster, the pitch goes higher as the sound waves hit your ears in faster succession, and if you slow it down, it plays back lower.  Just like what happens to the sound of a passing siren from a train or car horn that gets higher as it approaches and passes you: when delayed sounds are varied in time, the same auditory illusion is created. The pitch goes down as the delay time is increased and up as the delay time is decreased, with the same Doppler Effect as in the case of the stationary listener and moving sound source.
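One way to see this in code (a Python sketch, not any particular device's implementation): if the delay time glides at a rate of r seconds of added delay per second of real time, the delayed copy is effectively resampled at a rate of 1 - r, so a shrinking delay raises the pitch and a growing delay lowers it.

import math

SR = 48000                                 # sample rate chosen arbitrarily for the sketch

def interpolated_delay(x, delay_start, delay_end):
    """Read a signal through a linearly interpolating delay line whose delay time
    glides from delay_start to delay_end (in seconds) over the length of the signal."""
    n = len(x)
    out = []
    for i in range(n):
        d = delay_start + (delay_end - delay_start) * i / n   # current delay in seconds
        pos = i - d * SR                                       # fractional read position
        lo = math.floor(pos)
        frac = pos - lo
        a = x[lo] if 0 <= lo < n else 0.0
        b = x[lo + 1] if 0 <= lo + 1 < n else 0.0
        out.append(a + frac * (b - a))                         # linear interpolation
    return out

# A 440 Hz sine read while the delay shrinks from 100 ms to 50 ms over one second
# comes out transposed up by a factor of 1.05 (440 Hz -> 462 Hz), Doppler-style.
x = [math.sin(2 * math.pi * 440 * t / SR) for t in range(SR)]
y = interpolated_delay(x, 0.100, 0.050)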

Using a Doppler Effect makes the delay more of an “instrument.”

Using a Doppler Effect makes the delay more of an “instrument” because it’s possible to repeat the sound and also alter it.  In my last post I discussed many types of reflections and repetitions in the visual arts, some exact and natural and others more abstract and transformed as reflections. Being able to alter the repetition of a sound in this way is of key importance to me.  Adding additional effects in with the delays is important for building a sound that is musically identifiable as separate from that of the musician I use as my source.

Using classic electroacoustic methods for transforming sounds, we can create new structures and gestures out of a live sound source. Methods such as pitch-shifting, speeding sounds up or slowing them down, or a number of filtering techniques, work better if we also use delays and time displacement as a way to distinguish these elements from the source sounds.

Many types of delay and effects plugins and pedals on the market are based on simple combinations of the principal parameters I have been outlining (e.g. how much feedback, how short a delay, how it is routed). For example, Ping Pong Delay delays a signal 50-100ms or more and alternates sending it back and forth between the left and right channels, sometimes with high feedback so it goes on for a while. Flutter Echo is very similar to the Ping Pong Delay, but with shorter delay times to cause more filtering to occur—an acoustic effect that is sometimes found in very live-sounding public spaces.  Slapback Echo has a longer delay time (75ms or more) with no feedback.

FREEZE!  Infinite Delay and Looping

Some delay devices will let us hold a sample indefinitely in the delay.  We can loop a sound and “freeze” it, adding additional sounds sometime later if we choose. The layer cake of loops built up lends itself to an easy kind of improvisation which can be very beautiful.

“Infinite” delay is used by an entire catalog of genres and musical scenes.

Looping with infinite delay is used by an entire catalog of genres and musical scenes from noise to folk music to contemporary classical.  The past few years especially, it’s been all over YouTube and elsewhere online thanks to applications like Ableton Live and hardware loopers like those made by Line 6. Engaging in a form of live-composing/production, musicians generate textures and motifs, constructing them into entire arrangements, often based upon the sound of one instrument, in many tracks, all played live and in the moment.  In terms of popular electronic music practice, looping and grid interfaces seem to be the most salient and popularly-used paradigms for performance and interface since the late 2000s.

Looping music is often about building up an entire arrangement, from scratch, and with no sounds heard that are not first played by the instrumentalist, live, before their repetition (the sound of which is possibly slightly different and mediated by being heard over speakers).

With live sound processing, we use loops, too, of course. The moment I start to loop a sound “infinitely,” I am, theoretically, no longer working with live sound processing, but I am processing something that happened in the past—this is sometimes called “live sampling” and we could quibble about the differences.  To make dynamic live-looping for improvised music, whether done by sampling/looping other musicians, or by processing one’s own sound, it is essential to be flexible and be able/willing to change the loops in some way, perhaps quickly, and to make alterations to the audio recorded in real-time.  These alterations can be a significant part of the expressiveness of the sound.

For me, the most important part of working with long delays (or infinite loops) is that I be able to create and control rhythms with those delays.  I need to lock in (synchronize) my delay times while I play. Usually I do this manually, by listening, and then using a Tap Tempo patch I wrote (which is what I’ll do when I perform this weekend as part of Nick Didkovsky’s Deviant Voices Festival on October 21 at Spectrum, and the following day with Ras Moshe as part of the Quarry Improvised Music Series at Triskelion Arts).

Short delays are mostly about resonance. In my next and final post, I will talk more about filters and resonance, why using them together with delay is important, as well as strategies for how to be heard when live processing acoustic sound in an improvisation.

In closing, here is an example from What is it Like to be a Bat?, my digital chamber punk trio with Kitty Brazelton (active 1996-2009 and which continues in spirit). In one piece, I turned the feedback up on my delay as high as I could get away with (nearly causing the microphones and sound system to feed back too), then yelled “Ha!” into my microphone and set off a sequence of extreme delay changes with an interpolating delay in a timing we liked. Joined by drummer Danny Tunick, who wrote a part to go with it, we’d repeat this sequence four times, each time louder, noisier, different but somehow repeatable at each performance. It became a central theme in that piece, and was recorded as the track “Batch 4,” part of our She Said – She Said, “Can You Sing Sermonette With Me?” on the Bat CD for the Tzadik label.

Some recommended further reading and listening

Thom Holmes, Electronic and Experimental Music (Routledge, 2016)

Jennie Gottschalk, Experimental Music Since 1970 (Bloomsbury Academic, 2016)

Geoff Smith, “Creating and Using Custom Delay Effects” (for the website Sound on Sound, May 2012) Smith writes: “If I had to pick a single desert island effect, it would be delay. Why? Well, delay isn’t only an effect in itself; it’s also one of the basic building blocks for many other effects, including reverb, chorus and flanging — and that makes it massively versatile.”

He also includes many good recipes and examples of different delay configurations.

Phil Taylor, “History of Delay” (written for the website for Effectrode pedals)

Daniel Steinhardt and Mick Taylor, “Delay Basics: Uses, Misuses & Why Quick Delay Times Are Awesome” (from their YouTube channel, That Pedal Show)

Questions I Ask Myself

In May 2017, I gave a talk at the New Music Gathering to share what I’ve learned about the practical work of being a composer. I’ve had some success writing percussion music, and I wanted to share exactly what I do for promotion, community building, professionalization, and business stuff in the hope that it could help others also have some success. I wanted to de-mystify this work (which is not hard, but can be mysterious), and so I pushed myself to be as open and transparent as I could.

That desire to open up took me farther than my spreadsheets. At the end of the session, I stepped back to reflect. I opened up about a sea change I’m going through, which is re-arranging my own ideas about what I think music is good at, what I have to offer through it, and what I want out of life.

I was nervous to share these very personal reflections, but I’m glad I did. It sounds like a lot of us are thinking similar thoughts but maybe not talking about them so much. Many people who were at that talk wrote to me to ask if I would share the text of one particular section. It was a list of tough questions that dog me about making my home in new music. They all more or less boil down to: is this a place where I am living my values?

I want to say up front: I’m not sharing them because I think they’re necessarily all fair questions, or kind questions, or because they add up to some kind of coherent critique of anything. I’m sharing them because they’re the ones that I wrestle with. Maybe you do, too.


THE QUESTIONS

Am I just trying to impress people and get famous?

Do the experiences I create draw people in or push people out?

My musical world seems to get smaller and smaller, and look more and more like me. Is that what I wanted?

Do I want to be in a culture where hierarchy and prestige have so much power? If not, would I ever have the guts to give up mine?

Do we composers really earn the reverence we are shown?

Our whole disposable capitalist culture is obsessed with novelty and progress. Is a value system based on the newness of music really as countercultural as I think it is?

How many of my ideas about new music really stand up to critical thought and how many are magical thinking?

Do I want to live a life in front of my computer making scores and sending emails?

How does the range of meaningful feeling I experience at a new music concert (or create for others to experience) stack up against other things I love—say, being outside on a summer night, cooking a big meal for friends, or swimming in a lake?

What’s the difference between being a champion of my community and being a partisan, fighting to expand the size and status of a little kingdom just because I happen to belong to it?

Am I OK making music basically with and for people who have received similar educations to me?

When I say “21st-century music” why do I really mean “21st-century music, except for everything made by people who aren’t educated in the culture I was”?

Have I used esoteric musical preferences and interests to feel different from (superior to?) other people? Has that isolated me? Can I in good faith encourage others to do the same?

Am I OK with an aesthetic ideology that values making people uncomfortable more than making people happy?

I LOVE to dance to music, maybe more than anything else in the world. Why am I in a musical culture with no dancing??

For all its education, the words of value I hear more than any others in this field are “weird,” “crazy,” and “cool.” Why is a sound cool if it’s crazy?   Is “weird” actually an interesting idea? Is “cool” enough for me?

Do the technical fixations I inherited—extended technique, virtuosity, hockets, structure, technology, etc.—actually relate to what I find meaningful and powerful in a musical experience? Do I use them to connect with people, or just to impress them?

I feel so much more joy and warmth and connection with others in informal musical situations and with amateurs than I do sitting on a stage in a big hall. So why do I focus so much of my energy on the big hall?

Do I want to learn from other people/traditions/cultures, or do I just want them to do music the way I do it?

Is my ignorance of other musical cultures just ignorance, or is it indifference? Is there a shade of contempt in that indifference?

Am I a snob?

If what I value most is connecting with other people deeply and sharing meaningful experiences, is the way I’m doing music really achieving that?


Forest Trails

Photo by Jens Lelie

I have been wrestling with these questions for the past few years. My ideas about success, music, life, what I want and what I have to offer—I feel like they’ve been melting and are only now, maybe, starting to take a new shape.

When I was a teenager I wanted music to be my passport to the world. I had friends who spent months at a time as happy vagabonds busking in Europe, traveling all over, never needing a hotel, always discovered and taken in by their counterparts after flying their hippie flags. That was my dream—to be able to walk up to any campfire, join in with my guitar, and, by the end of the night, turn some strangers into friends.

I snuck into music school in college and my dream changed. Instead of a passport, I wanted a VIP pass: access to those imagined Arcadias with names that glowed, words I’d never heard but that all of a sudden seemed very, very important—Aspen! Tanglewood! Darmstadt! I dreamed of a future where I’d gain admittance to a sequence of ever-smaller and ever-more-enviable rooms.

I never went to those places, but I have been in some very small and enviable rooms. I don’t want to sound ungrateful: some of them have been to my great personal and professional benefit and I’ll probably never give up the perks, no matter how conflicted I feel about them. But the truth is, I just don’t feel motivated anymore by the prospect of impressing people enough with my CV to move into the next, higher, smaller, more exclusive room. It’s unnerving, honestly, to look inside myself at the hole where that ambition used to be. I ask myself, a little bitterly: Are you getting lazy? Are you giving up?

What’s unsatisfying to me about those rooms is that they’re all in the same country. I’m back to wanting a passport. I don’t want to be a partisan for a territory, I want to travel to new ones. I want to connect with people who aren’t just like me. Music is a great way to do that, but it’s not enough to just open our doors to others; we have to be willing to leave our comfort zone, and take the risk of stepping into theirs. And when I arrive, the last thing I want to do is impress them with my CV. I want to impress them because I can listen deeply. Learn quickly. Fail happily. Connect openly.

As these feelings have become conscious in me, I have started to steer my life in a different direction. I’ve found two new ways to be a musician that feel much more like passports. Both lead me out of new music. One is learning to play the berimbau, which led me into capoeira, the national martial art of Brazil (at which I am a happily failing beginner). The other is teaching music at a prison, which has shown me that what I love about music—where I think it has power and where I have the most to offer—has nothing to do with newness, with style, or with my CV. It’s in how music creates a common ground for ambition, learning, creativity, self-discovery, and joy. It’s in how music can bypass guards and barbed wire, stigma and shame, and give people a way to celebrate and inspire each other, no matter how cut off they are.

I’ll write more about that in the next post.

From the Machine: Realtime Networked Notation

Last week, we looked at algorithms in acoustic music and the possibility of employing realtime computation to create works that combine pre-composition, generativity, chance operations, and live data input. This week, I will share some techniques and software tools I’ve developed that make possible what might be called an interactive score. By interactive score, I mean a score that is continuously updatable during performance according to a variety of realtime input. Such input might be provided from any number of digitized sources: software user interface, hardware controllers, audio signals, video stream, light sensors, data matrices, or mobile apps; the fundamental requirement is that the score is able to react to input instantaneously, continuously translating fluctuations in input data into a musical notation that is intelligible to performers.

THE ALGORITHMIC/ACOUSTIC DIVIDE

It turns out that this last requirement has historically been quite elusive. As early as the 1950s, composers were turning to computer algorithms to generate and process compositional data. The resultant information could either be translated into traditional music notation for acoustic performance (in the early days, completely by hand; in later years, by rendering the algorithm’s output as MIDI data and importing it into a software notation editor) or realized as an electronic composition. Electronic realizations emerged as perhaps the more popular approach, for several reasons. First, by using electronically generated sounds, composers gained the ability to precisely control and automate the timbre, dynamics, and spatialization of sound materials through digital means. Second, and perhaps more importantly, by jettisoning human performers—and thus the need for traditional musical notation—composers were able to reduce the temporal distance between a musical idea and its sonic realization. One could now audition the output of a complex algorithmic process instantly, rather than undertake the laborious transcription process required to translate data output into musical notation. Thus, the bottleneck between algorithmic idea and sonic realization was reduced, fundamentally, to the speed of one’s CPU.

As computation speeds increased, the algorithmic paradigm was extended to include new performative and improvisational possibilities. By the mid-1970s, with the advent of realtime computing, composers began to create algorithms that included not only sophisticated compositional architectures, but also permitted continuous manipulation and interaction during performance. To take a simple example: instead of designing an algorithm that harmonizes a pre-written melody according to 18th-century counterpoint rules, one could now improvise a melody during performance and have the algorithm intelligently harmonize it in realtime. If multiple harmonizations could satisfy the voice-leading constraints, the computer might use chance procedures to choose among them, producing a harmonically indeterminate—yet, perhaps, melodically determinate—musical passage.

Realtime computation and machine intelligence signal a new era in music composition and performance, one in which novel philosophical questions might be raised and answered.

This is just one basic example of combining live performance input with musically intelligent realtime computation; more complex and compositionally innovative applications can easily be imagined. What is notable with even a simple example like our realtime harmonizer, however, is the degree to which such a process resists neat distinctions such as “composition”/“performance”/“improvisation” or “fixed”/“indeterminate.” It is all of these at once, it is each of these to varying degrees, and yet it is also something entirely new as well. Realtime computation and machine intelligence signal a new era in music composition and performance, one in which novel philosophical questions might be raised and answered. I would argue that the possibility of instantiating realtime compositional intelligence in machines holds the most radically transformative promise for a paradigmatic shift in music in the years ahead.

All of this, of course, has historically involved a bit of a trade-off: composers who wished to explore such realtime compositional possibilities were forced to limit themselves to electronic and virtual sound sources. For those who found it preferable to continue to work exclusively with acoustic instruments—whether for their complex yet identifiable spectra, their rich histories in music composition and performance, or the interpretative subtleties of human performers—computer algorithms offered an elaborate pre-compositional device, but nothing more.[1]

BRIDGING THE GAP

This chasm between algorithmic music realized electronically (where sophisticated manipulation of tempi, textural density, dynamics, orchestration, and form could be achieved during performance) and algorithmic music realized acoustically (where algorithmic techniques were only to be employed pre-compositionally to inscribe a fixed work) is something that has frustrated and fascinated me for years. As a student of algorithmic composition, I often wished that I could achieve the same enlarged sense of compositional possibility offered by electronically realized systems—including generativity, stochasticity, and performative plasticity—using traditional instruments and human performers.

This, it seemed, hinged upon a digital platform for realtime notation: a software-based score that could accept abstract musical information (such as rhythmic values, pitch data, and dynamics) as input and convert it into a readable measure of notation. The notational mechanism must also be continuously updatable: it must allow for a composer’s live data input to change the notation of subsequent measures during performance. It must here strike a balance between temporal interactivity for the composer and readability for the performer, since most performers are accustomed to reading at least a few notes ahead in the score. Lastly, the platform must be able to synchronize notational outputs for two or more performers, allowing an ensemble to be coordinated rhythmically.

Fortunately, technologies do now exist—some commercially available and others that can be realized as custom software—that satisfy each of these notational requirements.

I have chosen to develop work in Cycling ’74’s Max/MSP environment, for several reasons. First, Max supports realtime data input and output, which provides the possibility of transcending the merely pre-compositional use of algorithms. Second, two third-party notation objects—bach.score[2] and MaxScore[3]—have recently been developed for the Max environment, which allow for numerical data to be translated into traditional (as well as more experimental forms of) musical notation. For years, this remained a glaring inadequacy in Max, as native objects do not provide anything beyond the most basic notational support. Third, Max has several objects designed to facilitate communication among computers on a local network; although most of these objects are low-level in their implementation, they can be coaxed into a lightweight, low-latency, and relatively intelligent computer network with some elaboration.

REALTIME INTERACTIVE NOTATION: UNDER THE HOOD

Let’s take a look at the basic mechanics of interactive notation using the bach.score object instantiated in Max/MSP. (For those unfamiliar with the Max/MSP programming environment, I will attempt to sufficiently summarize/contextualize the operations involved so that this process can be understood in more general terms.) bach.score is a user-interface object that can be used to display and edit musical notation. While not quite as robust as commercial notation software such as Sibelius or Finale, it features many of the same operations: manual note entry with keyboard and mouse, clef and instrument name display, rhythmic and tuplet notation, accidentals and microtones, score text, MIDI playback, and more. However, bach.score‘s most powerful feature is its ability to accept formatted text messages to control almost every aspect of its operation in realtime.

To take a basic example, if we wanted to display the first four notes of an ascending C major arpeggio as quarter notes in 4/4 (with quarter note = 60 BPM) in Sibelius, we would first have to set the tempo and time signature manually, then enter the pitches using the keyboard and mouse. With bach.score, we could simply send a line of text to accomplish all of this in a single message:

(( ((4 4) (60)) (1/4 (C4)) (1/4 (E4)) (1/4 (G4)) (1/4 (C5)) ))

example 1:

And if we wanted to display the first eight notes of an ascending C major scale as eighth notes:

(( ((4 4) (60)) (1/8 (C4)) (1/8 (D4)) (1/8 (E4)) (1/8 (F4)) (1/8 (G4)) (1/8 (A4)) (1/8 (B4)) (1/8 (C5)) ))

example 2:

Text strings are sent in a format called a Lisp-like linked list (llll, for short). This format uses nested brackets to express data hierarchically, in a branching tree-like structure. This turns out to be a powerful metaphor for expressing the hierarchy of a score, which bach.score organizes in the following way:

voices > measures > rhythmic durations > chords > notes/rests > note metadata (dynamics, etc.)

The question might be raised: why learn an arcane new text format and be forced to type long strings of hierarchically arranged numbers and brackets to achieve something that might be accomplished by an experienced Finale user in 20 seconds? The answer is that we now have a method of controlling a score algorithmically. The process of formatting messages for bach.score can be simplified by creating utility scripts that translate between the language of the composer (“ascending”; “subdivision”; “F major”) and that of the machine. This allows us to control increasingly abstract compositional properties in powerful ways.
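
To make the idea of such a utility script concrete, here is a minimal sketch in Python (the same logic could live inside the Max patch, e.g. in a [js] object); the function name and defaults are hypothetical, and the output simply follows the llll format of the examples above.

def measure_llll(pitches, duration="1/4", meter=(4, 4), bpm=60):
    # Build one measure as an llll string: (( ((num den) (bpm)) (dur (note)) ... ))
    header = f"(({meter[0]} {meter[1]}) ({bpm}))"
    notes = " ".join(f"({duration} ({p}))" for p in pitches)
    return f"(( {header} {notes} ))"

# The ascending C major arpeggio from the first message above:
# measure_llll(["C4", "E4", "G4", "C5"])
# -> (( ((4 4) (60)) (1/4 (C4)) (1/4 (E4)) (1/4 (G4)) (1/4 (C5)) ))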

Let’s expand upon our arpeggio example above, and build an algorithm that allows us to change the arpeggio’s root note, the chord type (and corresponding key signature), the rhythmic subdivision used, and the arpeggio’s direction (ascending, descending, or random note order).

example 3:

Let’s add a second voice to create a simple canonic texture. The bottom voice adds semitonal transposition and rhythmic rotation from the first voice as variables.

example 4:
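
For readers who prefer to see the transformations spelled out, here is a rough sketch of what the second voice does, assuming pitches are handled as MIDI note numbers and each note is a (duration, pitch) pair; the function name is hypothetical, and the conversion to a bach.score message is omitted.

def canonic_voice(notes, transpose=0, rotate=0):
    # Shift every pitch by `transpose` semitones and rotate the
    # sequence of durations by `rotate` positions.
    durations = [d for d, _ in notes]
    pitches = [p + transpose for _, p in notes]
    rotated = durations[rotate:] + durations[:rotate]
    return list(zip(rotated, pitches))

# voice1 = [("1/2", 60), ("1/4", 64), ("1/8", 67), ("1/8", 72)]
# canonic_voice(voice1, transpose=1, rotate=1)
# -> [("1/4", 61), ("1/8", 65), ("1/8", 68), ("1/2", 73)]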

To add some rhythmic variety, we might also add a control that allows us to specify the probability of rest for each note. Finally, let’s add basic MIDI playback capabilities so we can audition the results as we modify musical parameters.

example 5:

While our one-measure canonic arpeggiator leaves a lot to be desired compositionally, it gives an indication of the sorts of processes that can be employed once we begin thinking algorithmically. (In the next post, we will explore more sophisticated examples of algorithmic harmony, voice-leading, and orchestration.) It is important to keep in mind that unlike similar operations for transposition, inversion, and rotation in a program like Sibelius, the functions we have created here will respond to realtime input. This means that our canonic function could be used to process incoming MIDI data from a keyboard or a pitch-tracked violin, creating a realtime accompaniment that is canonically related to the input stream.

PRACTICAL CONSIDERATIONS: RHYTHMIC COORDINATION AND REALTIME SIGHT-READING

Before going any further with our discussions of algorithmic compositional techniques, we should return to more practical considerations related to a realtime score’s performability. Even if we are able to generate satisfactory musical results using algorithmic processes, how will we display the notation to a group of musicians in a way that allows them to perform together in a coordinated manner? Is there a way to establish a musical pulse that can be synced across multiple computers/mobile devices? And if we display notation to performers precisely at the instant it is being generated, will they be able to react in time to perform the score accurately? Should we, instead, generate material in advance and provide a notational pre-display, so that an upcoming bar can be viewed before having to perform it? If so, how far in advance?

I will share my own solutions to these problems—and the thinking that led me to them—below. I should stress, however, that a multiplicity of answers is no doubt possible, each of which might lead to novel musical results.

I’ve addressed the question of basic rhythmic coordination by stealing a page from Sibelius’s/Finale’s book: a vertical cursor that steps through the score at the tempo indicated. By programming the cursor to advance according to a quantized rhythmic grid (usually either quarter or eighth note), one can visually indicate both the basic pulse and the current position in the score. While this initially seemed a perfectly effective and minimal solution, rehearsal and concert experience has indicated that it is good practice to also have a large numerical counter to display the current beat. (This is helpful for those 13/4 measures with 11 beats of rest.)

example 6:
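
The clocking behind such a cursor reduces to a few lines of arithmetic. The sketch below is a Python stand-in for what the Max patch actually does (names are hypothetical), assuming a constant tempo and a fixed number of beats per measure.

import time

def cursor_position(start_time, bpm=60, beats_per_measure=4, grid=1):
    # grid=1 advances the cursor by quarter notes, grid=2 by eighth notes, etc.
    elapsed_beats = (time.time() - start_time) * bpm / 60.0
    step = int(elapsed_beats * grid) / grid        # quantize to the rhythmic grid
    measure = int(step // beats_per_measure) + 1   # 1-indexed measure number
    beat = int(step % beats_per_measure) + 1       # 1-indexed beat, for the large counter
    return measure, beat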

With a “conductor cursor” in place to indicate metric pulse and current score position, we turn to the question of how best to synchronize multiple devices (e.g. laptops, tablets) so that each musician’s cursor can be displayed at precisely the same position across devices. This is a critical question, as deviations in the range of a few milliseconds across devices can undermine an ensemble’s rhythmic precision and derail any collective sense of groove. In addition to synchronizing cursor positions, communication among devices will likely be needed to pipe score data (e.g. notes/rests, time signatures, dynamics, expression markings) from a central computer—where the master score is being generated and compiled—to performers’ devices as individual parts.

Max/MSP has several objects that provide communication across a network, including udpsend and udpreceive, jit.net.send and jit.net.recv, and a suite of Java classes that use the mxj object as a host—each of these has its advantages and drawbacks. Udpsend and udpreceive allow Max messages to be sent to another device on a network by specifying its IP address; they provide the fastest transfer speeds and are therefore perhaps the most commonly used. The downside to using UDP packets is that there is no specification for error-checking, such as guaranteed message delivery or guaranteed ordered delivery. This means that UDP cannot ensure that data packets will make it to their destination, or that packets will be received in the correct order. jit.net.send and jit.net.recv are very similar in their Max/MSP implementation, but use the TCP/IP transfer protocol, which does provide for error-checking; the tradeoff is that they have slightly slower delivery times. mxj-based objects provide useful functionality such as the ability to query one’s own IP address (net.local) and multicasting (net.multi.send and net.multi.recv), but require Java to be installed on performers’ machines—something which, experience has shown, cannot always be assumed.

I have chosen to use jit.net.send and jit.net.recv exclusively in all of my recent work. The slight tradeoff in speed is offset by the reliability they provide during performance. Udpsend and udpreceive might work flawlessly for 30 minutes and then drop a data packet, causing the conductor cursor to skip a beat or a blank measure to be unintentionally displayed. This is, of course, unacceptable in a critical performance situation. To counteract the slightly slower performance of jit.net.send and jit.net.recv (and to further increase network reliability), I have also chosen to use wired Ethernet connections between devices via a 16-port Ethernet switch.[4]
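
For readers outside the Max world, the tradeoff can be illustrated with ordinary sockets. The following is only a conceptual sketch in Python, not the actual jit.net.send/udpsend implementation; host, port, and the message are placeholders.

import socket

MEASURE = "(( ((4 4) (60)) (1/4 (C4)) (1/4 (E4)) (1/4 (G4)) (1/4 (C5)) ))"

def send_measure_tcp(host, port, measure=MEASURE):
    # TCP: connection-oriented, error-checked, ordered delivery (the jit.net.* tradeoff)
    with socket.create_connection((host, port)) as conn:
        conn.sendall(measure.encode("utf-8"))

def send_measure_udp(host, port, measure=MEASURE):
    # UDP: fire-and-forget; fastest, but a dropped or reordered packet can mean
    # a skipped beat or a blank measure on a performer's screen
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(measure.encode("utf-8"), (host, port))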

Lastly, we come to the question of how much notational pre-display to provide musicians for sight-reading purposes. We must bear in mind that the algorithmic paradigm makes possible an indeterminate compositional output, so it is entirely possible that musicians will be sight-reading music during performance that they have not previously seen or rehearsed together. Notational pre-display might provide musicians with information about the most efficient fingerings for the current measure, alert them to an upcoming change in playing technique or a cue from a fellow musician, or allow them to ration their attention more effectively over several measures. In fact, it is not uncommon for musicians to glance several systems ahead, or even quickly scan an entire page, to gather information about upcoming events or gain some sense of the musical composition as a whole. The drawback to providing an entire page of pre-generated material, from a composer’s point of view, is that it limits one’s ability to interact with a composition in realtime. If twenty measures of music have been pre-generated, for instance, and a composer wishes to suddenly alter the piece’s orchestration or dynamics, he/she must wait for those twenty measures to pass before the orchestrational or dynamic change takes effect. In this way, we can note an inherent tension between a performer’s desire to read ahead and a composer’s desire to exert realtime control over the score.

Since it was the very ability to exert realtime control over the score which attracted me to networked notation in the first place, I’ve typically opted to keep the notational pre-display to a bare minimum in my realtime works. I’ve found that a single measure of pre-display is usually a good compromise between realtime control for the composer and readability for the performer. (Providing the performer with one measure of pre-display does prohibit certain realtime compositional possibilities that are of interest to me, such as a looping function that allows the last x measures heard during performance to be repeated on a composer’s command.) Depending on tempo and musical material, less than a measure of pre-display might be feasible; this necessitates updating data in a measure as it is being performed, however, which runs the risk of being visually distracting to a performer.

An added benefit of limiting pre-display to one measure is that a performer need only see two measures at any given time: the current measure and the following measure. This has led to the development of what I call an “A/B” notational format, an endless repeat structure comprising two measures. Before the start of the piece, the first two measures are pre-generated and displayed. As the piece begins, the cursor moves through measure 1; when it reaches the first beat of measure 2, measure 3 is pre-generated and replaces measure 1. When the cursor reaches the first beat of measure 3, measure 4 is pre-generated and replaces measure 2, and so on. In this way, a performer can always see two full bars of music (the current bar and the following bar) at the downbeat of any given measure. This system also keeps the notational footprint small and consistent on a performer’s screen, allowing for their part to be zoomed to a comfortable size for reading, or for the inclusion of other instruments’ parts to facilitate ensemble coordination.

example 7:
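
A minimal sketch of the A/B update logic, again in Python: the two-element list stands in for the two on-screen measure slots, and generate_measure is a hypothetical callback returning the notation (e.g. an llll string) for a given measure number.

def ab_update(slots, current_measure, generate_measure):
    # At the downbeat of `current_measure`, generate the following bar and
    # overwrite the slot holding the bar that has just finished playing.
    upcoming = current_measure + 1
    slots[(upcoming - 1) % 2] = generate_measure(upcoming)
    return slots

# Before the piece begins: slots = [generate_measure(1), generate_measure(2)]
# At the downbeat of measure 2, measure 3 replaces measure 1 (slot A);
# at the downbeat of measure 3, measure 4 replaces measure 2 (slot B); and so on.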

SO IT’S POSSIBLE… NOW WHAT?

Given this realtime notational bridge from the realm of computation to the realm of instrumental performance, a whole new world of compositional possibilities begins to emerge. In addition to traditional notation, non-standard notational forms such as graphical, gestural, or text-based can all be incorporated into a realtime networked environment. Within the realm of traditional notation, composers can begin to explore non-fixed, performable approaches to orchestration, dynamics, harmony, and even spatialization in the context of an acoustic ensemble. Next week, we will look at some of these possibilities more closely, discussing a range of techniques for controlling higher-order compositional parameters, from the linear to the more abstract.



1. Notable exceptions to this include the use of mechanical devices and robotics to operate acoustic instruments through digital means (popular examples: Yamaha Disklavier, Pat Metheny’s Orchestrion Project, Squarepusher’s Music for Robots, etc.). The technique of score following—which uses audio analysis to correlate acoustic instruments’ input to a position in a pre-composed score—should perhaps also be mentioned here. Score following provides for the compositional integration of electronic sound sources and DSP into acoustic music performance; since it fundamentally concerns itself with a pre-composed score, however, it cannot be said to provide a truly interactive compositional platform.


2. Freely available through the bach project website.


3. Info and license available at the MaxScore website.


4. A wired Ethernet connection is not strictly necessary for all networked notation applications. If precise timing of events is not compositionally required, a higher-latency wireless network can yield perfectly acceptable results. Moreover, recent technologies such as Ableton Link make possible wireless rhythmic synchronization among networked devices, with impressive perceptual precision. Ableton Link does not, however, allow for the transfer of composer-defined data packets, an essential function for the master/slave data architecture employed in my own work. At the time of this writing, I have not found a wireless solution for transferring data packets that yields acceptable (or even consistent) rhythmic latencies for musical performance.

Delays as Music

As I wrote in my previous post, I view performing with “live sound processing” as a way to make music by altering and affecting the sounds of acoustic instruments—live, in performance—and to create new sounds, often without the use of pre-recorded audio. These new sounds have the potential to forge an independent and unique voice in a musical performance. However, their creation requires, especially in improvised music, a unique set of musicianship skills and knowledge of the underlying acoustics and technology being used. And it requires that we consider the acoustic environment and spectral qualities of the performance space.

Delays and Repetition in Music

The use of delays in music is ubiquitous.  We use delays to locate a sound’s origin, to create a sense of size/space, to mark musical time, to create rhythm, and to delineate form.

The use of delays in music is ubiquitous.

As a musical device, echo (or delay) predates electronic music. It has been used in folk music around the world for millennia to repeat short phrases: from Swiss yodels to African call and response, from songs in the round to complex canons, as well as in performances that take advantage of unusual acoustic spaces (e.g. mountains/canyons, churches, and unusual buildings).

In contemporary music, too, delay and reverb effects from unusual acoustic spaces have been featured in the Deep Listening cavern music of Pauline Oliveros, in experiments using the infinite reverbs in the Tower of Pisa (Leonello Tarbella’s Siderisvox), and in organ work at the Cathedral of St. John the Divine in NY using its 7-second delay. For something new, I’ll recommend the forthcoming work of my colleague, trombonist Jen Baker (Silo Songs).

Of course, delay was also an important tool in the early studio tape experiments of Pierre Schaeffer (Étude aux chemins de fer) as well as Terry Riley and Steve Reich. The list of early works using analog and digital delay systems in live performances is long and encompasses many genres of music outside the scope of this post—from Robert Fripp’s Frippertronics to Miles Davis’s electric bands (where producer Teo Macero altered the sound of Sonny Sharrock’s guitar and many other instruments) and Herbie Hancock’s later Mwandishi Band.

The use of delays changed how the instrumentalists in those bands played.  In Miles’s work we hear not just the delays, but also improvised instrumental responses to the sounds of the delays and—completing the circle—the electronics performers responding in kind by manipulating their delays. Herbie Hancock used delays to expand the sound of his own electric Rhodes, and as Bob Gluck has pointed out (in his 2014 book You’ll Know When You Get There: Herbie Hancock and the Mwandishi Band), he “intuitively realized that expressive electronic musicianship required adaptive performance techniques.” This is something I hope we can take for granted now.

I’m skipping any discussion of the use of echo and delay in other styles (as part of the roots of Dub, ambient music, and live looping) in favor of talking about the techniques themselves, independent of the trappings of a specific genre, and about how they can be “performed” in improvisation as electronic musical sounds rather than effects.

Sonny Sharrock processed through an Echoplex by Teo Macero on Miles Davis’s “Willie Nelson” (which is not unlike some recent work by Jonny Greenwood)

By using electronic delays to create music, we can create exact copies or severely altered versions of our source audio, and still recognize them as repetitions, just as we might recognize each repetition of the theme in a piece organized as a theme and variations, or a leitmotif repeated throughout a work. Beyond the relationship of delays to acoustic music, the vastly different types of sounds that we can create via these sonic reflections and repetitions have a corollary in visual art, both conceptually and gesturally. I find these analogies especially useful when teaching. The work from the visual and performing arts that has inspired my own includes images, video, and dance: repetitions (exact or distorted), Mandelbrot-like recursion (reflections, altered or displaced and re-reflected), shadows, and delays.  The examples below are analogous to many sound processes I find possible and interesting for live performance.

Sounds we create via sonic reflections and repetitions have a corollary in visual art.

I am a musician, not an art critic/theorist, but I grew up in New York, taken to MoMA weekly by my mother, a modern dancer who studied with Martha Graham and José Limón.  It is not an accident that I want to make these connections. There are many excellent essays on the subject of repetition in music and electronic music, which I have listed at the end of this post.  I include the images and links below as a way to denote that the influences on my electroacoustic work are not only in music and audio.

In “still” visual art works:

  • The reflected, blurry trees in the water of a pond in Claude Monet’s Poplar series create new composite and extended images, a recurring theme in the series.
  • Both the woman and her reflection in Pablo Picasso’s Girl Before a Mirror are abstracted, and, interestingly, the mirror itself is both the vehicle for the reiteration and an object depicted in its own right.
  • There are also repetitions, patterns, and “rhythms” in work by Chuck Close, Andy Warhol, Sol LeWitt, M.C. Escher, and many other painters and photographers.

In time-based/performance works:

  • Fase, Four Movements to the Music of Steve Reich, a dance choreographed in 1982 by Anne Teresa De Keersmaeker to four early compositions by Steve Reich, uses shadows with the dancers. The shadows create a third (and fourth and fifth) dancer, shifting in and out of focus and turning the reflected image that partners the live dancers into a kind of sleight-of-hand.
  • Iteration plays an important role in László Moholy-Nagy’s short films, shadow play constructions, and his Light Space Modulator (1930).
  • Reflection/repetition/displacement are inherent to the work of countless experimental video artists working with video synthesis, feedback, and modified TVs/equipment, starting with Nam June Paik.

Natural and nearly exact reflections can also be experienced as beautifully surreal. On a visit to the Okefenokee Swamp in Georgia long ago, my friends and I rode in small flat boats on Mirror Lake and felt we were part of a Roger Dean cover for a new Yes album.

Okefenokee Swamp

Natural reflections, even when nearly exact, usually have some small change—a play in the light or color, or slight asymmetry—that gives them away. In all of my examples, the visual reflection is not “the same” as the original. These nonlinear differences are part of the allure of the images.

These images are all related to how I understand live sound processing to impact my audio sources.

  • Perfect mirrors create surreal new images/objects extending away from the original.
  • Distorted reflections (anamorphosis) create a more separate identity for the created image, one that can be understood as emanating from the source image but that is inherently different in its new form.
  • Repetition/mirrors: many exact or near-exact copies of the same image/sound form patterns, rhythms, or textures, creating a new composite sound or image.
  • Phasing/shadows (time-based or time-connected): the reflected image changes over time in its physical placement with regard to the original, creating a potentially new composite sound.

Most of these ways of working require more than simple delay and benefit from speed changes, filtering, pitch-shift/time-compression, and other things I will delve into in the coming weeks.

The myths of Echo and Narcissus are both analogies and warning tales for live electroacoustic music.

We should consider the myths of Echo and Narcissus both as analogies and warning tales for live electroacoustic music. When we use delays and reverb, we hear many copies of our own voice/sound overlapping each other and create simple musical reflections of our own sound, smoothed out by the overlaps, and amplified into a more beautiful version of ourselves!  Warning!  Just like when we sing in the shower, we might fall in love with the sound (to the detriment of the overall sound of the music).


Getting Techie Here: How Does Delay Work?

Early Systems: Tape Delay

A drawing by Mark Ballora demonstrating how delay works using a tape recorder: the trajectory of a piece of magnetic tape between the reels, passing the erase, record, and playback heads. (Image reprinted with permission.)

The earliest method used to artificially create the effect of an echo or simple delay was to take advantage of the spacing between the record and playback heads on a multi-track tape recorder. The output of the playback head could be routed to the record head and re-recorded on a different track of the same machine.  That signal would then be read again by the playback head (on its new track), having been delayed by the amount of time it took for the tape to travel from the record head to the playback head.

The delay time is determined by the physical distance between the tape heads, and by the tape speed being used.  One limitation is that delay times are restricted to those that can be created at the playback speed of the tape. (e.g. at a tape speed of 15 inches per second (ips), tape heads spaced 3/4 to 2 inches apart can create echoes of 50ms to 133ms; at 7 ips, the same spacing yields 107ms to 285ms, etc.)
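
The arithmetic is simply head spacing divided by tape speed; a one-line helper (purely illustrative) reproduces the figures above.

def tape_delay_ms(head_gap_inches, tape_speed_ips):
    return 1000.0 * head_gap_inches / tape_speed_ips

# tape_delay_ms(0.75, 15) -> 50.0      tape_delay_ms(2, 15) -> ~133.3
# tape_delay_ms(0.75, 7)  -> ~107.1    tape_delay_ms(2, 7)  -> ~285.7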

Here is an example of analog tape delay in use:

Longer/More delays: By using a second tape recorder, we can make a longer sequence of delays, but it would be difficult to emulate natural echoes and reverberation because all our delay lengths would be simple multiples of the first delay. Reverbs have a much more complex distribution of many, many small delays. The output volume of those delays decreases differently (more linearly) in a tape system than it would in a natural acoustic environment (more exponentially).

More noise: Another side effect of creating the delays by re-recording audio is that after many recordings/repetitions the audio signal will start to degrade, affecting its overall spectral qualities, as the high and low frequencies die out more quickly, eventually degrading into, as Hal Chamberlin has aptly described it in his 1985 book Musical Applications of Microprocessors, a “howl with a periodic amplitude envelope.”

Added noise from degradation and overlapped voice and room acoustics is turned into something beautiful in I Am Sitting In A Room, Alvin Lucier’s seminal 1969 work.  Though not technically using delay, the piece is a slowed-down microcosm of what happens to sound when we overlap and re-record many, many copies of the same sound and its related room acoustics.

A degree of unpredictability certainly enhances the use of any musical device being used for improvisation, including echo and delay. Digital delay makes it possible to overcome the inherent inflexibility and static quality of most tape delay systems, which remain popular for other reasons (e.g. audio quality and nostalgia, as discussed below).

The list of influential pieces using a tape machine for delay is canonically long.  A favorite of mine is Terry Riley’s piece, Music for the Gift (1963), written for trumpeter Chet Baker. It was the first use of very long delays on two tape machines, something Riley dubbed the “Time Lag Accumulator.”

Terry Riley: Music for the Gift III with Chet Baker

Tape delay was used by Pauline Oliveros and others from the San Francisco Tape Music Center for pieces that were created live as well as in the studio, with no overdubs, and which therefore could be considered performances and not just recordings.   The Echoplex, created around 1959, was one of the first commercially manufactured tape delay systems and was widely used in the ’60s and ’70s. Advances in the design of commercial tape delays, including the addition of more (and moveable) tape heads, increased the number of delays and the flexibility to change delay times on the fly.

Stockhausen’s Solo (1966), for soloist and “feedback system,” was first performed live in Tokyo using seven tape recorders (the “feedback” system) with specially adjustable tape heads to allow music played by the soloist to “return” at various delay times and combinations throughout the piece.  Though technically not improvised, Solo is an early example of tape music for performed “looping.”  All the music was scored, and a choice of which tape recorders would be used and when was determined prior to each performance.

I would characterize the continued use of analog tape delay as nostalgia.

Despite many advances in tape delay, today digital delay is much more commonly used, whether in an external pedal unit or computer-based. This is because it is convenient—it’s smaller, lighter, and easier to carry around—and because it is much more flexible. Multiple outputs don’t require multiple tape heads or more tape recorders. Digital delay enables quick access to a greater range of delay times, and the maximum delay time is simply a function of the available memory (and memory is much cheaper than it used to be).   Yet, in spite of the convenience and expandability of digital delay, there is continued use of analog tape delay in some circles.  I would simply characterize this as nostalgia: for the physicality of older devices and of handling analog tape, and for the warmth of analog sound, all of which we associate with music from an earlier time.

What is a Digital Delay?

Delay is the most basic component of most digital effects systems, and so it’s critical to discuss it in some detail before moving on to some of the effects that are based upon it.   Below, and in my next post, I’ll also discuss some physical and perceptual phenomena that need to be taken into consideration when using delay as a performance tool / ersatz instrument.

Basic Design

In the simplest terms, a “delay” is simple digital storage.  One audio sample, or a small block of samples, is stored in memory and can then be read back and played at some later time as output. A one-second delay (1000ms), mono, requires storing one second of audio. (At the 16-bit CD sample rate of 44.1kHz, this means about 88 KB of data.) These sizes are teeny by today’s standards, but if we use many delays or very long delays it adds up. (It is not infinite or magic!)

Besides being used to create many types of echo-like effects, a simple one-sample delay is also a key component of the underlying structure of all digital filters and many reverbs.  An important distinction among these applications is the length of the delay. As described below, when the delay time is short, the input sounds get filtered; with longer delay times, other effects such as echo can be heard.
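
As a sketch of that basic design, a digital delay can be modeled as a circular buffer of samples. The toy Python class below (hypothetical names, 44.1kHz assumed) is for illustration only, not production audio code.

class DelayLine:
    # A bare-bones digital delay: a circular buffer holding delay_ms of audio.
    def __init__(self, delay_ms, sample_rate=44100):
        self.buffer = [0.0] * max(1, int(sample_rate * delay_ms / 1000.0))
        self.index = 0

    def process(self, sample):
        delayed = self.buffer[self.index]   # read the sample stored delay_ms ago
        self.buffer[self.index] = sample    # overwrite it with the new input
        self.index = (self.index + 1) % len(self.buffer)
        return delayed

# A 1000ms mono delay at 44.1kHz stores 44,100 samples -- about 88 KB at 16 bits.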

Perception of Delay — Haas (a.k.a. Precedence) Effect

Did you ever drop a pin on the floor?   You can’t see it, but you still know exactly where it is? We humans naturally have a set of skills for sound localization.  These psychoacoustic phenomena have to do with how we perceive the very small time, volume, and timbre differences between the sounds arriving in our ears.

In 1949, Helmut Haas made observations about how humans localize sound by using simple delays of various lengths and a simple 2-speaker system.  He played the same sound (speech, short test tones), at the same volume, out of both speakers. When the two sounds were played simultaneously (no delay), listeners reported hearing the sound as if it were coming from the center point between the speakers (an audio illusion not very different from how we see).  His findings give us some clues about stereo sound and how we know where sounds are coming from.  They also relate to how we work with delays in music.

  • Between 1-10ms delay: If the delay between the two sounds is anywhere from 1ms to 10ms, the sound appears to emanate from the first speaker (the first sound we hear is where we locate the sound).
  • Between 10-30ms delay: The sound source continues to be heard as coming from the primary (first sounding) speaker, with the delay/echo adding a “liveliness” or “body” to the sound. This is similar to what happens in a concert hall—listeners are aware of the reflected sounds but don’t hear them as separate from the source.
  • Between 30-50ms delay: The listener becomes aware of the delayed signal, but still senses the direct signal as the primary source. (Think of the sound in a big box store “Attention shoppers!”)
  • At 50ms or more: A discrete echo is heard, distinct from the first heard sound, and this is what we often refer to as a “delay” or slap-back echo.

The important fact here is that when the delay between speakers is lowered to 10ms (1/100th of a second), the delayed sound is no longer perceived as a discrete event. This is true even when the volume of the delayed sound is the same as the direct signal. [Haas, “The Influence of a Single Echo on the Audibility of Speech” (1949)].

A diagram of the Haas effect showing how the position of the listener in relationship to a sound source affects the perception of that sound source.

The Haas Effect (a.k.a. Precedence Effect) is related to our skill set for sound localization and other psychoacoustic phenomena. Learning a little about these phenomena (Interaural Time Difference, Interaural Level Difference, and Head Shadow) is useful not only for an audio engineer, but is also important for us when considering the effects and uses of delay in Electroacoustic musical contexts.

What if I Want More Than One?

Musicians usually want the choice to play more than one delayed sound, or to repeat their sound several times. We do this by adding more delays, or by using feedback: routing a portion of our output right back into the input. (Delaying our delayed sound is something like an audio hall of mirrors.) We usually route only some of the sound (not 100%) so that each time the output is a little quieter and the sound eventually dies out in decaying echoes.  If our feedback level is high, the sound may recirculate for a while in an endless repeat, and may even overload/clip if new sounds are added.

When two or more copies of the same sound event play at nearly the same time, they will comb filter each other. Our sensitivity to the small differences in timbre that result is a key to understanding, for instance, why the many reflections in a performance space don’t usually get mistaken for the real thing (the direct sound).   Likewise, if we work with multiple delays or feedback, when multiple copies of the same sound play over each other, they necessarily interact and filter each other, causing changes in the timbre. (This relates again to I Am Sitting In A Room.)
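
In code, feedback amounts to re-injecting a portion of the delayed output into the buffer, which is exactly the structure of a comb filter. The sketch below extends the toy delay line from earlier under the same assumptions; the feedback coefficient is kept below 1.0 so the echoes decay.

class FeedbackDelay:
    # Delay line with feedback: long delay times give decaying echoes,
    # very short delay times give comb-filter resonance on the input.
    def __init__(self, delay_ms, feedback=0.5, sample_rate=44100):
        self.buffer = [0.0] * max(1, int(sample_rate * delay_ms / 1000.0))
        self.index = 0
        self.feedback = feedback  # keep below 1.0, or the sound recirculates indefinitely

    def process(self, sample):
        delayed = self.buffer[self.index]
        self.buffer[self.index] = sample + self.feedback * delayed  # route output back in
        self.index = (self.index + 1) % len(self.buffer)
        return delayed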

In the end, all of the above (delay length, use of feedback or additional delays, overlap) determines how we perceive the music we make using delays as a musical instrument. I will discuss feedback and room acoustics, and their potential roles as musical devices, in the next post later this month.


My Aesthetics of Delay

To close this post, here are some opinionated conclusions of mine based upon what I have read/studied and borne out in many, many sessions working with other people’s sounds.

  • Short delay times tend to change our perception of the sound: its timbre, and its location.
  • Sounds that are delayed longer than 50ms (or even up to 100ms for some musical sounds) become echoes, or musically speaking, textures.
  • At the in-between delay times (the 30-50ms range, give or take a little) it is the input (the performed sound itself) that determines what will happen. Speech sounds or other percussive sounds with a lot of transients (high amplitude, short duration) will respond differently than long resonant tones (which will likely overlap and be filtered). It is precisely in this domain that the live sound-processing musician will need to do extra listening/evaluating to gain experience and predict what might be the outcome. Knowing what might happen in many different scenarios is critical to creating a playable sound processing “instrument.”

It’s About the Overlap

Using feedback on long delays, we create texture or density, as we overlap sounds and/or extend the echoes to create rhythm.  With shorter delays, using feedback instead can be a way to move toward the resonance and filtering of a sound.  With extremely short delays, control over feedback to create resonance is a powerful way to create predictable, performable, electronic sounds from nearly any source. (More on this in the next post.)

Live processing (for me) all boils down to small differences in delay times.

Live processing (for me) all boils down to these small differences in delay times—between an original sound and its copy (very short, medium and long delays).  It is a matter of the sounds overlapping in time or not.   When they overlap (due to short delay times or use of feedback) we hear filtering.   When the sounds do not overlap (delay times are longer than the discrete audio events), we hear texture.   A good deal of my own musical output depends on these two facts.


Some Further Reading and Listening

On Sound Perception of Rhythm and Duration

Karlheinz Stockhausen’s 1972 lecture The Four Criteria of Electronic Music (Part I)
(I find intriguing Stockhausen’s discussion of unified time structuring and his description of the continuum of rhythms: from those played very fast (creating timbre), to medium fast (heard as rhythms), to very very slow (heard as form). This lecture both expanded and confirmed my long-held ideas about the perceptual boundaries between short and long repetitions of sound events.)

Pierre Schaeffer’s 1966 Solfège de l’Objet Sonore
(A superb book and accompanying CDs with 285 tracks of example audio. Particularly useful for my work and the discussion above are sections on “The Ear’s Resolving Power” and “The Ear’s Time Constant” and many other of his findings and examples. [Ed. note: Andreas Bick has written a nice blog post about this.])

On Repetition in All Its Varieties

Jean-François Augoyard and Henri Torgue, Sonic Experience: A Guide to Everyday Sounds (McGill-Queen’s University Press, 2014)
(See their terrific chapters on “Repetition”, “Resonance” and “Filtration”)

Elizabeth Hellmuth Margulis, On Repeat: How Music Plays the Mind (Oxford University Press, 2014)

Ben Ratliff, Every Song Ever (Farrar, Straus and Giroux, 2016)
(Particularly the chapter “Let Me Listen: Repetition”)

Other Recommended Reading

Bob Gluck’s book You’ll Know When You Get There: Herbie Hancock and the Mwandishi Band (University of Chicago Press, 2014)

Michael Peters’s essay “The Birth of the Loop”
http://preparedguitar.blogspot.de/2015/04/the-birth-of-loop-by-michael-peters.html

Phil Taylor’s essay “History of Delay”

My chapter “What if your instrument is Invisible?” in the 2017 book Musical Instruments in the 21st Century as well as my 2010 Leonardo Music Journal essay “A View on Improvisation from the Kitchen Sink” co-written with Hans Tammen.

LiveLooping.org
(A musician community built site around the concept of live looping with links to tools, writing, events, etc.)

Some listening

John Schaefer’s WNYC radio program “New Sounds” has featured several episodes on looping.
Looping and Delays
Just Looping Strings
Delay Music

And finally something to hear and watch…

Stockhausen’s former assistant Volker Müller performing on generator, radio, and three tape machines

Progressive Chamber Music

[Ed. Note: Later this week (October 14-15, 2017), Sirius Quartet will present their second annual Progressive Chamber Music Festival for two nights at the Greenwich House Music School in New York City. We asked the quartet’s four members to tell the story of the evolution of the group into a post-genre ensemble and why they decided to create their own music festival. Founding violist Ron Lawrence describes how the quartet came into being and the underlying aesthetics that inform what/how the group plays as well as the music festival they curate. Second violinist Gregor Huebner explains how the quartet evolved into a group of composer-performers. Cellist Jeremy Harman, the quartet’s most recent addition, describes why he joined the group seven years ago. And first violinist Fung Chern Hwei explains how the idea for a festival emerged over a conversation between the four of them while they were on tour in Kuala Lumpur, Malaysia. Along the way, each describes their own personal musical journeys and the directions those journeys have taken as a result of playing music with each other. As with the four separate parts that seamlessly weave together in a string quartet performance, whether it’s pre-composed or improvised, their four independent narratives inform and enhance each other.-FJO.]


The members of the Sirius Quartet standing in front of a wall.

The Sirius Quartet (from left to right): Fung Chern Hwei, Jeremy Harman, Ron Lawrence, and Gregor Huebner.

Ron Lawrence

The conflict/merging of the sacred and the profane has been a major theme in western culture since the rise of Christianity.  In the modern world, one expression of this conversation has been the gradual breakdown of the barriers between contemporary academic music and popular and folk music traditions.  The aesthetic of the Sirius Quartet and our Progressive Chamber Music Festival is an expression of this ongoing blending.  We created the festival to be an annual opportunity to showcase the diversity and depth of the community of like-minded composer/performers.  On a more prosaic level it is an attempt to create a new “bin in the record store” for this mulatto style (perhaps labeled “omnivores’ delight”).

The Sirius Quartet revels in the musical smorgasbord that the digital tidal wave has brought to the internet.

The Sirius Quartet revels in the musical smorgasbord that the digital tidal wave has brought to the internet.  With a few taps on a keyboard, anyone can access the entire canon of humanity’s musical experience.  The opportunities for cross-fertilization of musical styles, performance techniques, and creating new social contexts for musical performance are abundant.  The Sirius Quartet has been dedicated to exploring this new world, and the artists we’ve presented during the Progressive Chamber Music Festival for the past two years all embrace and explore these possibilities.

However, there are dangers in the digital tidal wave that has washed over the new millennium.  Beyond the obvious steering of a complacent audience into the “if you like that, you’ll love this” cul-de-sac, the configuration of the software programs and their default settings creates a huge temptation to allow the machines and plug-ins to make crucial aesthetic decisions.

Unless the composer makes a conscious decision, the medium can become the message.  For example, the editing process can dictate what should be musical/emotional decisions.  The click map is a wonderful tool when writing music to picture, but expressing rubato is time consuming.  It’s easier to just loop some cool beats and lay them over the click map.  The technology has dictated the musical style. Plug-in technology is also insidious.  Rather than make a conscious decision about the color palette, the composer/producer will just plug in the funky ’70s Fender Twin bass sound from his or her library.  It would take hours of painstaking listening to get under the hood and tweak the software to find an original sound. Once again the technology has preemptively dictated choices, homogenizing the style. The composers/performers of Sirius and our colleagues use improvisation and the spontaneity of extended techniques to combat this homogenization.  We want our music to feel homemade and to give the audience the sensation of “fresh from the pot.”

I think my personal journey to becoming a creative musician began while driving around Michigan as a teenager with my car radio blaring rock and roll. I reveled in the breathtaking tonal and emotional palette of the electric guitar. When I arrived in New York in the early 1980s, the classical conservatory training of instrumentalists was increasingly specialized and recording techniques were creating a style and sound that worshiped velocity and close-miked sizzle over warmth and soulfulness.  There was a “correct violin sound” and one’s education and technical training focused exclusively on producing that timbre and emotional quality.  I yearned for that wider palette of the electric guitar. As a listener, I was as drawn to Sonny Boy Williamson or Bata drumming as I was to Babbitt or Boulez.  New York, always a nexus for the melting pot of cultures, gave me the opportunity for an almost anthropological exploration of the roots of popular and folk music styles.

Playing with a few charanga and tango bands taught me that each particular style has its own unique technical challenges distinct from the classical tradition.  Not only does each folkloric tradition have a unique rhythmic feel, but one’s physical approach to the instrument must be flexible enough to step outside of the classical concept of “good violin” playing.  As a performer and composer, choices of bow distribution, quality of attack and decay, and tonal variety inform the rhythmic feel and emotional content of any style.

Eventually, I was asked to join the Dave Soldier Electric String Quartet.  As a mainstay of the downtown, Knitting Factory music scene, Dave introduced me to that diverse, eclectic collection of urban, postmodern creative “folk” musicians.  Here was my wider palette.  When the Soldier Quartet disbanded, I founded the Sirius Quartet as a vehicle to continue these explorations for composers/performers.


Gregor Huebner’s reharmonization of Lennon & McCartney’s “Eleanor Rigby” performed by Sirius Quartet

Gregor Huebner 

I joined the Sirius Quartet around 2004, shortly after I started working with jazz pianist Richie Beirach.  Richie and I recorded the albums Round about Bartók and Round about Federico Mompou, which were very much about exploring the intersection of composition and improvisation in more of a jazz context.  At the time, Sirius Quartet was really focused on contemporary classical and avant-garde jazz composers.  As a player and a composer, this was a perfect group for me to explore my own musical identity and ideas.  I started composing pieces for Sirius which included both the extended techniques of the contemporary classical “language” as well as the spirit of improvisation from my jazz experiences with Beirach, Randy Brecker, Billy Hart, and George Mraz, with whom I play in a quintet.

These days we are a string quartet which writes its own music and incorporates improvisation in many different forms.

When Jeremy and Chern Hwei—two fantastic composers and improvisers—joined the quartet, it felt like focusing on our own music was the way forward. So these days we are a string quartet which writes its own music and incorporates improvisation in many different forms.  That is my own personal definition of what we are calling “progressive chamber music” as it applies to Sirius and we can stretch that term very broadly to include all kinds of creative small ensemble music, which is the focus of our annual festival.


Jeremy Harman’s composition More Than We Are performed by Sirius Quartet

Jeremy Harman

I grew up spending equal amounts of time immersed in classical music (via cello lessons, playing in my school orchestras, and playing in a quartet with high school friends) and in the rock/metal world, which was a very different circle of people, most of whom were self-taught and more focused on writing original music.  I always felt equally at home in both worlds, and at the same time felt that in each world I wasn’t able to fully be myself as a musician, due to both collective and personal misperceptions that the two were incompatible.  Throughout my life, I’ve sought to bridge this gap on a personal level, and when I auditioned for Sirius Quartet in 2010, I found some like-minded string players who each came from a pretty unique background of musical influences, but who shared my desire to build bridges between genres, and more specifically to blur the lines between supposed high-brow and low-brow art and music.  We all have a classical background, but each of us has spent our lives reaching beyond that in our own ways, which have included exploring various types of improvisation (from soloing over chord changes to playing completely free with no premeditated musical goals or expectations), exploring alternative and extended techniques of playing to widen our sonic palette, and composing our own music, which we hope reflects our unique identities as both individuals and as a quartet.

Each of us have spent our lives reaching beyond our classical backgrounds.

With seven years in the group, I am still the newest member of the Sirius Quartet, and most of its history predates me.  As Ron has already mentioned, the quartet initially came out of the Soldier String Quartet, run by violinist Dave Soldier in the late ’80s. As the resident “rookie” here, I can say they did some very interesting work as part of the early Knitting Factory/“downtown” scene, working with artists such as Elliott Sharp and Nick Didkovsky and playing a lot of music that was more on the experimental side.

As has been said, in recent years the quartet has focused more on original works by members of the quartet itself and has leaned more toward the jazz side of things, collaborating with a lot of phenomenal musicians, including Linda Oh, Steve Wilson, Richard Sussman, and Rufus Reid, all of whom have written really incredible music incorporating the string quartet into more traditional jazz ensembles and instrumentations.

As a quartet, I think we occupy a somewhat unique position in the New York City music scene, so we wanted to put together a festival that brings together musicians from the various corners of the musical worlds we occupy.  There are already some fantastic music festivals in the city, but we thought there was plenty of room for another one.  If each of these festivals occupied its own circle in a Venn diagram, I think the circle occupied by the Progressive Chamber Music Festival would have significant overlap with all of them.


Violinist Fung Chern Hwei playing with other members of the Sirius Quartet.

Fung Chern Hwei

The genesis of the Progressive Chamber Music Festival happened one fine October morning in 2015.  The place was Kuala Lumpur, Malaysia, where we were on tour and had a day off.  Before the sun displayed its full equatorial glory, we were enjoying breakfast at a South Indian-Malaysian roadside food stall—more commonly known as a “mamak” stall.  Words were exchanged over a topic as old as the quartet’s two-decade-long career: How does one define the musical direction the quartet is taking? How does that fit into the current musical landscape in New York and elsewhere?  Many of our fans and listeners would agree that it can be difficult to place the group in a certain category.  Sirius has a fascinating lineage of former members who have developed their own projects writing and/or performing contemporary classical music and/or various non-traditional genres—Todd Reynolds, Meg Okura, Jennifer Choi, Dave Eggar, just to name a few.

The current incarnation of the quartet is primarily focused on music that is internally written, as three of the four of us are composers.  We all compose in different styles and methods, since each of us came from a slightly different musical background.  The end result is, I think, an eclectic body of music that pulls listeners in many directions and hopefully both challenges and intrigues them.

We don’t rule anything out.

We don’t rule anything out in the music we write: tonality, atonality, groove, form, etc., and we like to incorporate improvisation in various ways to achieve various goals in our music.  This could range from creating vamps in the midst of otherwise through-composed music (to bring about a change of pace or vibe) to finding ways of embellishing or improvising on a previously written part in one of our pieces, to linking various movements and/or pieces together with free improvisation, which we’ve found can create a nice heightened sense of focus in the audience since what is composed and what is improvised becomes less and less distinct.

We have had the absolute pleasure to work with accomplished creative jazz musicians like Uri Caine and John Escreet, both of whom in their own way share our affinity for line-blurring. They have each written some amazing music that we have performed together over the years which consists of very interesting mixtures of composed and improvised material.  I certainly don’t think this is unique to our quartet; I think there is a growing movement of creative musicians of all stripes blending these elements in a myriad of really interesting ways.

Getting back to our breakfast in Kuala Lumpur, we didn’t necessarily come to any explicit conclusions when talking about our place in the larger world of creative music, but we found the discussion to be really enjoyable and it gave us a chance to reflect upon and really appreciate the musical community that we are a part of.  New York City has long been an incubator for cross-genre pollination and experimentation in all corners of the music community. It is not difficult to find artists and groups, many of them personal friends of ours, who fall outside of the mainstream categories of “concert” or “art” music.  So someone probably half-jokingly mentioned putting together a festival with a bunch of friends and colleagues whose music resonates with us and who we respect very much as artists, and we thought it actually sounded like a good idea!

Currently the festival is a total DIY operation, but the goal is basically to give each artist the chance to do solely what best represents them and their creative identity without having to compromise anything.  The name “Progressive Chamber Music Festival” retains the ambiguity of the types of music presented, therefore giving musicians absolute freedom of expression, while at the same time it clearly defines the philosophy that I think we and our musical comrades stand for—progressiveness within but also regardless of convention.  We hope to challenge the common notion of what chamber music should be, while inviting old and new voices to partake.

Ron Lawrence, Gregor Huebner, Jeremy Harman, and Fung Chern Hwei standing in back of empty chairs.

From the Machine: Computer Algorithms and Acoustic Music

The possibility of employing an algorithm to shape a piece of music, or certain aspects of a piece of music, is hardly new. If we define algorithmic composition broadly as “creating from a set of rules or instructions,” the technique is in some sense indistinguishable from musical composition itself. While composers prior to the 20th century were unlikely to have thought of their work in explicitly algorithmic terms, it is nonetheless possible to view aspects of their practice in precisely this way. From species counterpoint to 14th-century isorhythm, from fugue to serialism, Western music has made use of rule-based compositional techniques for centuries. It might even be argued that a period of musical practice can be roughly defined by the musical parameters it derives axiomatically and the parameters left open to “taste,” serendipity, improvisation, or chance.

A relatively recent development in rule-based composition, however, is the availability of raw computational power capable of millions of calculations per second and its application to compositional decision-making. If a compositional process can be broken down into a specific enough list of instructions, a computer can likely perform them—and usually at speeds fast enough to appear instantaneous to a human observer. A computer algorithm is additionally capable of embedding non-deterministic operations such as chance procedures (using pseudo-random number generators), probability distributions (randomness weighted toward certain outcomes), and realtime data input into its compositional hierarchy. Thus, any musical parameter—e.g. harmony, form, dynamics, or orchestration—can be controlled in a number of meaningful ways: explicitly pre-defined, generated according to a deterministic set of rules (conditional), chosen randomly (aleatoric), chosen according to weighted probability tables (probabilistic), or continuously controlled in real time (improvisational). This new paradigm allows one to conceive of the nature of composition itself as a higher-order task, one requiring adjudication among ways of choosing for each musically relevant datum.
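
As a toy illustration (in Python, with made-up values), here is a single musical datum, the next chord root, decided in three of the ways just listed: explicitly, by chance, and by a weighted probability table.

import random

explicit_root = "C"                                                  # explicitly pre-defined
aleatoric_root = random.choice(["C", "D", "E", "F", "G", "A", "B"])  # chance procedure
probabilistic_root = random.choices(                                 # weighted probability table
    ["C", "F", "G", "A"], weights=[0.4, 0.25, 0.25, 0.1], k=1)[0]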

Our focus here will be the application of computers toward explicitly organizational, non-sonic ends.

Let us here provisionally distinguish between the use of computers to generate/process sound and to generate/process compositional data. While, it is true, computers do not themselves make such distinctions, doing so will allow us to bracket questions of digital sound production (synthesis or playback) and digital audio processing (DSP) for the time being. While there is little doubt that digital synthesis, sampling, digital audio processing, and non-linear editing have had—and will continue to have—a profound influence on music production and performance, it is my sense that these areas have tended to dominate discussions of the musical uses of computers, overshadowing the ways in which computation can be applied to questions of compositional structure itself. Our focus here will therefore be the application of computers toward explicitly organizational, non-sonic ends; we will be satisfied leaving sound production to traditional acoustic instruments and human performers. (This, of course, requires an effective means of translating algorithmic data into an intelligible musical notation, a topic which will be addressed at length in next week’s post.)

Let us further distinguish between two compositional applications of algorithms: pre-compositional use and performance use. Most currently available and historical implementations of compositional data processing are of the former type: they are designed to aid in an otherwise conventional process of composition, where musical data might be generated or modified algorithmically, but is ultimately assembled into a fixed work by hand, in advance of performance.[1]

A commonplace pre-compositional use of data processing might be the calculation of a musical motif’s retrograde inversion in commercial notation software, or the transformation of a MIDI clip in a digital audio workstation using operations such as transposition, rhythmic augmentation/diminution, or randomization of pitch or note velocity. On the more elaborate end of the spectrum, one might encounter algorithms that translate planets’ orbits into rhythmic relationships, prime numbers into harmonic sequences, probability tables into melodic content, or pixel data from a video stream into musical dynamics. Given the temporal disjunction between the run time of the algorithm and the subsequent performance of the work, a composer can audition such operations in advance, selecting, discarding, editing, re-arranging, or subjecting materials to further processing until an acceptable result is achieved. Pre-compositional algorithms are thus a useful tool when a fixed, compositionally determinate output is desired: the algorithm is run, the results are accepted or rejected, and a finished result is inscribed—all prior to performance.[2]
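
The retrograde inversion mentioned above, for instance, reduces to a few lines once a motif is represented as MIDI note numbers; a hypothetical sketch in Python:

def retrograde_inversion(motif, axis=None):
    # Invert each pitch around an axis (by default the motif's first note),
    # then reverse the result.
    axis = motif[0] if axis is None else axis
    return [axis - (p - axis) for p in reversed(motif)]

# retrograde_inversion([60, 64, 67, 72]) -> [48, 53, 56, 60]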

It is now possible for a composer to build performative or interactive variables into the structure of a notated piece, allowing for the modification of almost any imaginable musical attribute during performance.

With the advent of realtime computing and modern networking technologies, however, new possibilities can be imagined beyond the realm of algorithmic pre-composition. It is now possible for a composer to build performative or interactive variables into the structure of a notated piece, allowing for the modification of almost any imaginable musical attribute during performance. A composer might trigger sections of a musical composition in non-linear fashion, use faders to control dynamic relationships between instruments, or directly enter musical information (e.g. pitches or rhythms) that can be incorporated into the algorithmic process on the fly. Such techniques have, of course, been common performance practice in electronic music for decades; given the possibility of an adequate realtime notational mechanism, they might become similarly ubiquitous in notated acoustic composition in the coming years.
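
One minimal way to picture this kind of performance-time control, sketched in Python with invented section names and helper functions: pre-composed sections sit in a lookup table and can be triggered in any order, while a fader value continuously rescales a notated dynamic.

    # Pre-composed sections, stored by name; their order is decided live.
    sections = {
        "A": ("chorale", "16 bars"),
        "B": ("canon", "12 bars"),
        "C": ("coda", "8 bars"),
    }

    performance_order = []          # built up in real time, not fixed in advance

    def trigger(name):
        """Append a section to the running form when it is cued mid-performance."""
        performance_order.append(name)
        return sections[name]

    def scale_dynamic(base_velocity, fader):
        """Rescale a notated dynamic (as MIDI velocity) by a fader position (0.0 to 1.0)."""
        return max(1, min(127, round(base_velocity * fader)))

    trigger("B"); trigger("A"); trigger("B"); trigger("C")   # one possible non-linear form
    print(performance_order)                                 # ['B', 'A', 'B', 'C']
    print(scale_dynamic(96, 0.6))                            # a softened forte: 58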

Besides improvisational flexibility, performance use of compositional algorithms offers composers the ability to render aleatoric and probabilistic elements anew during each performance. Rather than being frozen into fixed form during pre-composition, such variables retain their fundamentally indeterminate nature, producing musical results that vary with each realization. By precisely controlling the range, position, and function of random variables, composers can define sophisticated hierarchies of determinacy and indeterminacy in ways that would have been unimaginable to the early pioneers of aleatoric or indeterminate composition.
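
One possible shape for such a freshly rendered probabilistic element, sketched in Python with invented chords and weights: the weighted table is fixed in advance by the composer, but the actual succession is drawn only at performance time, so each realization differs while staying within the composer-defined bounds.

    import random

    # A composer-defined probability table: candidate chords (as pitch-class sets)
    # and their weights. Stored with the piece, but sampled only in performance.
    harmony_table = {
        (0, 4, 7): 0.5,     # most likely
        (0, 3, 7): 0.3,
        (0, 2, 4, 6): 0.2,  # least likely
    }

    def realize_progression(table, length, seed=None):
        """Draw a fresh chord succession for this performance.
        Passing a fixed seed would instead freeze a single realization."""
        rng = random.Random(seed)
        chords = list(table.keys())
        weights = list(table.values())
        return [rng.choices(chords, weights=weights, k=1)[0] for _ in range(length)]

    # Two performances of the "same" passage yield different successions:
    print(realize_progression(harmony_table, 8))
    print(realize_progression(harmony_table, 8))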

Thus, in addition to strictly pre-compositional uses of algorithms, a composer’s live data input can work in concert with conditional, aleatoric, probabilistic, and pre-composed materials to produce what might be called a “realtime composition” or an “interactive score.”

We may, in fact, be seeing the beginnings of a new musical era, one in which pre-composition, generativity, indeterminacy, and improvisation are able to interact in heretofore unimaginable ways. Instances in which composers sit alongside a chamber group or orchestra during performance, modifying elements of a piece such as dynamics, form, and tempo in real time via networked devices, may become commonplace. Intelligent orchestration algorithms equipped with transcription capabilities might allow a pianist to improvise on a MIDI-enabled keyboard and have the results realized by a string quartet in (near) real time. A musical passage might be constructed by composing a fixed melody along with a probabilistic table of possible harmonic relationships (or, conversely, by composing a fixed harmonic progression with variable voice leading and orchestration), creating works that blur the lines between indeterminacy and fixity, composition and improvisation, idea and realization. Timbral or dynamic aspects of a work might be adjusted during rehearsal in response to the specific acoustic character of a performance space. Formal features, such as the order of large-scale sections, might be modified by a composer mid-performance according to audience reaction or whim.

While the possibilities are no doubt vast, the project of implementing a coherent, musically satisfying realtime algorithmic work is still a formidable one: many basic technological pieces remain missing or underdeveloped (requiring a good deal of programming savvy on a composer/musician’s part), the practical requirements for performance and notation are not yet standardized, and even basic definitions and distinctions remain to be theorized.

In this four-part series, I will present a variety of approaches to employing computation in the acoustic domain, drawn both from my own work as well as that of fellow composer/performers. Along the way, I will address specific musical and technological questions I’ve encountered, such as strategies for networked realtime notation, algorithmic harmony and voice leading, rule-based orchestration, and more. While I have begun to explore these compositional possibilities only recently, and am surely only scratching the surface of what is possible, I have been fascinated and encouraged by the early results. It is my hope that these articles might be a springboard for conversation and future experimentation for those who are investigating—or considering investigating—this promising new musical terrain.



1. One might similarly describe a piece of music such as John Cage’s Music of Changes, or the wall drawings of visual artist Sol LeWitt, as works based on pre-compositional (albeit non-computer-based) algorithms.


2. Even works such as Morton Feldman’s graph pieces can be said to be pre-compositionally determinate in their formal dimension: while they leave freedom for a performer to choose pitches from a specified register, their structure and pacing is fixed and cannot be altered during performance.


Joseph Branciforte

Joseph Branciforte is a composer, multi-instrumentalist, and recording/mixing engineer based out of New York City. As composer, he has developed a unique process of realtime generative composition for instrumental ensembles, using networked laptops and custom software to create an “interactive score” that can be continuously updated during performance. As producer/engineer, Branciforte has lent his sonic touch to over 150 albums, working with such artists as Ben Monder, Vijay Iyer, Tim Berne, Kurt Rosenwinkel, Steve Lehman, Nels Cline, Marc Ribot, Mary Halvorson, Florent Ghys, and Son Lux along the way. His production/engineering work can be heard on ECM, Sunnyside Records, Cantaloupe Music, Pi Recordings, and New Amsterdam. He is the co-leader and drummer of “garage-chamber” ensemble The Cellar and Point, whose debut album Ambit was named one of the Top 10 Albums of 2014 by WNYC’s New Sounds and praised by outlets from the BBC to All Music Guide. His current musical efforts include a collaborative chamber project with composer Kenneth Kirschner and an electronic duo with vocalist Theo Bleckmann.

Live Sound Processing and Improvisation

Intro to the Intro

I have been mulling over writing about live sound processing and improvisation for some time, and finally I have my soapbox!  For two decades, as an electronic musician working in this area, I’ve been trying to convince musicians, sound engineers, and audiences that working with electronics to process and augment the sound of other musicians is a fun and viable way to make music.

Also a vocalist, I often use my voice to augment and control the sound processes I create in my music, which encompasses both improvised and composed projects. I have also been teaching (Max/MSP, Electronic Music Performance) for many years. My opinions are influenced by my experiences as an electronic musician who is both performer and composer, and as a teacher (who is forever a student).

A short clip of my duo project with trombonist Jen Baker, “Clip Mouth Unit,” where I process both her sound and my voice.

Over the past 5-7 years there has been an enormous surge in interest among musicians, outside of computer music academia, in discovering how to enhance their work with electronics and, in particular, how to use electronics and live sound processing as a performable “real” instrument.

So many gestural controllers have become part of the fabric of our everyday lives.

The interest has increased because (of course) so many more musicians have laptops and smartphones, and so many interesting game and gestural controllers have become part of the fabric of our everyday lives. With so many media tools at our disposal, we have all become amateur designers/photographers/videographers, and also musicians, both democratizing creativity (at least for those with the funds for laptops/smartphones) and exponentially increasing, and therefore diluting, the resulting pool of new work.

Image of a hatted and bespectacled old man waving his index finger with the caption, "Back in my day... no real-time audio on our laptops (horrors!)"

Back when I was starting out (in the early ’90s), we did not have real-time audio manipulations at our fingertips—nothing easy to download or purchase or create ourselves (unlike the plethora of tools available today). Although Sensorlab and iCube were available (though not widely), we did not have powerful sensors on our personal devices, conveniently with us at all times, that could be used to control our electronic music with the wave of a hand or the swipe of a finger. (Note: this is quite shocking to my younger students.) There is now also a wave of audio analysis tools using Music Information Retrieval (MIR) and alternative controllers, previously seen only at research institutions and academic conferences, all going mainstream. Tools such as the Sunhouse Sensory Percussion drum controller, which turns audio into a control source, are becoming readily available and popular in many genres.

In the early ’90s, I was a performing rock-pop-jazz musician, experimenting with free improv/post-jazz. In grad school, I became exposed for the first time to “academic” computer music: real-time, live electroacoustics, usually created by contemporary classical composers with assistance from audio engineers-turned-computer programmers (many of whom were also composers).

My professor at NYU, Robert Rowe, and his colleagues George Lewis, Roger Dannenberg, and others were composer-programmers dedicated to developing systems that got their computers to improvise, or to building other kinds of interactive music systems. Others, like Cort Lippe, were developing pieces for an early version of Max running on a NeXT computer, using complex real-time manipulations of a performer’s sound as the sole electroacoustic—and live—sound source and as the basis for all control (a concept I personally became extremely interested and invested in).

As an experiment, I decided to see if I could create simplified versions of the live sound processing ideas I was learning about. I started to bring them to my free avant-jazz improv sessions and to my gigs, using a complicated Max patch I had made to control an Eventide H3000 effects processor (which was much more affordable than the NeXT machine, plus we had one at NYU). I did many performances with a core group of people who were willing to let me put microphones on everyone and process them during our performances.

Collision at Baktun 1999. Paul Geluso (bass), Daniel Carter (trumpet), Tom Beyer (drums), Dafna Naphtali (voice, live sound processing), Kristin Lucas (video projection / live processing), and Leopanar Witlarge (horns).

Around that time I also met composer/programmer/performer Richard Zvonar, who had made a similarly complex Max patch as “editor/librarian” software for the H3000, to enable him to create all the mind-blowing live processing he used in his work with Diamanda Galás, Robert Black (State of the Bass), and others. Zvonar was very encouraging about my quest to control the H3000 in real-time via a computer. (He was “playing” his unit from the front panel.)  I created what became my first version of a live processing “instrument” (which I dubbed “kaleid-o-phone” at some point). My subsequent work with Kitty Brazelton and Danny Tunick, in What is it Like to be a Bat?, really stretched me to find ways to control live processing in extreme and repeatable ways that became central and signature elements of our work together, all executed while playing guitar and singing—no easy feat.

Six old laptops all open and lined up in two rows of three on a couch.

Since then, across 23 years, 7 laptops, many gigs and ensembles, and a few CD releases, I have kept working on that same “instrument,” updating my Max patch, trying out many different controllers and ideas, and adding real-time computer-based audio once that became possible on a laptop in the late ’90s. I’m just that kinda gal; I like to tinker!

In the long run, what is more important to me than the Max programming I did for this project is that I was able to develop an aesthetic practice and a set of rules for my live sound processing, rules about respecting the sound and independence of the other musicians, which help me make good music when processing other people’s sound.

The omnipresent “[instrument name] plus electronics”, like a “plus one” on a guest list, fills many concert programs.

Many people, of course, use live processing on their own sound, so what’s the big deal? Musicians are excited to extend their instruments electronically and there is much more equipment on stage in just about every genre to prove it. The omnipresent “[instrument name] plus electronics”, like a “plus one” on a guest list, fills many concert programs.

However, I am primarily interested in learning how a performer can use live processing on someone else’s sound, in a way that it can become a truly independent voice in an ensemble.

What is Live Sound Processing, really?

To perform with live sound processing is to alter and affect the sounds of acoustic instruments, live, in performance (usually without the aid of pre-recorded audio), and in this way create new sounds, which in turn become independent and unique voices in a musical performance.
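
To give a purely conceptual sketch (in Python, and not a description of any of the actual patches mentioned here): the essence of the practice is that an incoming acoustic signal is transformed as it arrives, and the transformed stream becomes a new voice alongside the original. Below, the transformation is a simple delay line with feedback; the delay length, feedback, and mix values are arbitrary placeholders.

    from collections import deque

    def delay_with_feedback(input_samples, delay_samples=11025, feedback=0.5, mix=0.7):
        """Run an input stream through a delay line with feedback.
        delay_samples is roughly 0.25 s at 44.1 kHz; the return value is the
        processed signal, the 'new voice' heard alongside the acoustic source."""
        buffer = deque([0.0] * delay_samples, maxlen=delay_samples)
        output = []
        for x in input_samples:
            delayed = buffer[0]                    # oldest sample in the delay line
            output.append(x + mix * delayed)       # blend dry input with the delayed signal
            buffer.append(x + feedback * delayed)  # write back into the line, with feedback
        return output

    # In performance this would run block by block on live microphone input;
    # here a one-sample "impulse" stands in, so the decaying echoes are visible.
    impulse = [1.0] + [0.0] * 44100
    processed = delay_with_feedback(impulse)
    print(processed[0], processed[11025], processed[22050])   # 1.0, 0.7, 0.35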

Factoring in the acoustic environment of the performance space, it’s possible to view each performance as site-specific, as the live sound processor reacts not only to the musicians and how they are playing but also to the responsiveness and spectral qualities of the room.

Although, in the past, the difference between live sound processing and other electronic music practices has not been readily understood by audiences (or even many musicians), in recent years the complex role of the “live sound processor” musician has evolved to often be that of a contributing, performing musician, sitting on stage within the ensemble and not relegated, by default, to the sound engineer position in the middle or back of the venue.

Performers as well as audiences can now recognize electroacoustic techniques when they hear them.

With faster laptops and the more widespread availability of classic live sound processing as software plugins, these techniques have gradually become more accepted over the past 20 years—and in many music genres practically expected (not to mention the huge impact these technologies have had on more commercial manifestations of electronic dance music, or EDM). Both performers and audiences have become savvier about many electroacoustic techniques and sounds and can now recognize them by ear.

We really need to talk…

I’d like to encourage a discourse about this electronic musicianship practice: to empower live sound processors to use real-time (human/old-school) listening and analysis of the sounds being played by others, and to develop skills for making real-time (improvised) decisions about how to respond to and manipulate those sounds, so that their electronic-music sounds are heard—and understood—as a separate performing (and musicianly) voice.

In this way, the live sound processor is not always dependent on, and following, the other musicians (who are their sound source), and their contributions are not simply “effects” relegated to the background. Nor does the live sound processor browbeat the other musicians into integrating themselves with, or simply following, the inflexible sounds and rhythms of their electronics, expressed as an immutable, immobile, unresponsive block of sound that the other musicians must adapt to.

My Rules

My self-imposed guidelines, developed over several years of performances and sessions, are:

  1. Never interfere with a musician’s own musical sound, rhythm or timbre. (Unless they want you to!)
  2. Be musically identifiable to both co-players and audience (if possible).
  3. Incorporate my body: use some kind of physical interaction between the technology and myself, whether through controllers, the acoustics of the sound itself, or my own voice.

I wrote about these rules in “What if Your Instrument is Invisible?”, my chapter contribution to the excellent book Musical Instruments in the 21st Century: Identities, Configurations, Practices (Springer, 2016).

The first two rules, in particular, are the most important ones and will inform virtually everything I will write in coming weeks about live sound processing and improvisation.

My specific area of interest is live processing techniques used in improvised music, and in other settings in which the music is not all pre-composed. Under such conditions, many decisions must be made by the electronic musician in real time. My desire is to codify the use of various live sound processing techniques into a pedagogical approach that blends listening techniques, a knowledge of acoustics and psychoacoustics, and tight control over the details of live sound processing of acoustic instruments and voice. The goal is to improve communication between musicians, to allow for the optional scoring of such work, to make this practice easier for new electronic musicians, and to provide a foundation for them to develop their own work.

You are not alone…

There are many electronic musicians who work as I do with live sound processing of acoustic instruments in improvised music. Though we share a bundle of techniques as our central mode of expression, there is a very wide range of possible musical approaches and aesthetics, even within my narrow definition of “Live Sound Processing” as the real-time manipulation of the sound of an acoustic instrument to create an identifiable and separate musical voice in a piece of music.

In 1995, I read a preview of what Pauline Oliveros and the Deep Listening Band (with Stuart Dempster and David Gamper) would be doing at their concert at the Kitchen in New York City. Still unfamiliar with DLB’s work, I was intrigued to hear about E.I.S., their “Expanded Instrument System,” described as an “interactive performer controlled acoustic sound processing environment” giving “improvising musicians control over various parameters of sound transformation” such as “delay time, pitch transformation” and more. (It was 1995, and they were working with the Reson8 for real-time processing of audio on a Mac, which I had previously only seen done on NeXT machines.) The concert was beautiful and mesmerizing. But lying on the cushions at the Kitchen, bathing in the music’s deep tones and sonically subtle changes, I realized that though we were both interested in the same technologies and methods, my aesthetics were radically different from those of DLB. I was, from the outset, more interested in noise, extremes, and highly energetic rhythms.

It was an important turning point for me as I realized that to assume what I was aiming to do was musically equivalent to DLB simply because the technological ideas were similar was a little like lumping together two very different guitarists just because they both use Telecasters. Later, I was fortunate enough to get to know both David Gamper and Bob Bielecki through the Max User Group meetings I ran at Harvestworks, and to have my many questions answered about the E.I.S. system and their approach.

There is now more improvisation than I recall witnessing 20 years ago.

Other musicians I should mention who have been working with live sound processing of other instruments and improvisation for some time: Lawrence Casserley, Joel Ryan (both in their own projects and in long associations with saxophonist Evan Parker’s “ElectroAcoustic” ensemble), Bob Ostertag (influential in all his modes of working), and Satoshi Takeishi and Shoko Nagai’s duo Vortex. More recently: Sam Pluta (who creates “reactive computerized sound worlds” with Evan Parker, Peter Evans, Wet Ink, and others), and Hans Tammen. (Full disclosure: we are married to each other!)

Joel Ryan and Evan Parker at STEIM.

In academic circles, computer musicians, always interested in live processing, have more often taken to the stage as performers operating their software (moving from the central/engineer position). It seems there is also more improvisation than I recall witnessing 20 years ago.

But as for me…

In my own work, I gravitate toward duets and trios, so that it is very clear what I am doing musically, and there is room for my vocal work. My duos are with pianist Gordon Beeferman (our new CD, Pulsing Dot, was just released), percussionist Luis Tabuenca (Index of Refraction), and Clip Mouth Unit—a project with trombonist Jen Baker. I also work occasionally doing live processing with larger ensembles (with saxophonist Ras Moshe’s Music Now groups and Hans Tammen’s Third Eye Orchestra).

Playing with live sound processing is like building a fire on stage.

I have often described playing with live sound processing as like “building a fire on stage,” so I will close by taking the metaphor a bit further. There are two ways to start a fire: with a lot of planning, or with improvisation. Which method we choose depends on environmental conditions (wind, humidity, location), the tools we have at hand, and also what kind of person we are (a planner/architect, or someone more comfortable thinking on our feet).

In the same way, every performance environment affects the responsiveness and acoustics of the musical instruments used there. This is all the more pertinent when “live sound processing” is the instrument. The literal weather, the humidity, the room acoustics, even how many people are watching the concert, all affect the de facto responsiveness of a given room and can greatly affect the outcome, especially when working with feedback or with short delays and resonances. Personally, I am a bit of both personality types—I start with a plan, but I’m also ready to adapt. With that in mind, I believe the improvising mindset is needed to work most effectively with live sound processing as an instrument.

A preview of upcoming posts

What follows in my posts this month will be ideas about how to play better as an electronic musician using live acoustic instruments as sound sources. These ideas are (I hope) useful whether you are:

  • an instrumentalist learning to add electronics to your sound, or
  • an electronic musician learning to play more sensitively and effectively with acoustic musicians.

In these upcoming posts, you can read some of my discussions, explanations, and musings about delay as a musical instrument, acoustics/psychoacoustics, feedback fun, filtering/resonance, pitch-shift and speed changes, and the role of rhythm in musical interaction and in being heard. These are all ideas I have tried out on many of my students at New York University and The New School, where I teach Electronic Music Performance, as well as in a Harvestworks presentation and in my one-week course on the subject at the UniArts Summer Academy in Helsinki (August 2014).


Dafna Naphtali creating music from her laptop which is connected to a bunch of cables hanging down from a table. (photo by Skolska/Prague)

Dafna Naphtali is a sound-artist, vocalist, electronic musician, and guitarist. A performer and composer of experimental, contemporary classical, and improvised music since the mid-1990s, she creates custom Max/MSP programming incorporating polyrhythmic metronomes, Morse code, and incoming audio signals to control her sound processing of voice and other instruments. Her other projects include music for robots, audio augmented-reality sound walks, and “Audio Chandelier” multi-channel sound works. Her new CD Pulsing Dot with pianist Gordon Beeferman is on Clang Label.