Tag: indeterminacy

inti figgis-vizueta: the ability to grow


Composer inti figgis-vizueta creates music that carefully balances experimentation and practicality. She likens her compositions to plants which have the ability to grow and change when different people perform them.

“We’re able to continue to revisit them and see how they’ve changed,” she explained when we met over Zoom in mid-June. “I’ll hear people come back and play something that I haven’t heard in years. I thought I had a stable sense of that piece in my mind and suddenly someone just blows me away with a completely different place that they go with it. And to me, that has to feel really exciting because the idea that like, we’re just writing something to exist in one form and then it just, you know, like time passes, just stops moving–it’s very strange.”

inti’s openness to collaboration and belief in interpretative agency have made her music particularly attractive to soloists and ensembles ranging from Andrew Yee and Conrad Tao to Roomful of Teeth, Ensemble Dal Niente, and even the Kronos Quartet, who asked her to compose a piece for their Fifty for the Future project.

“I remember hearing about this project and being like, ‘God, I wish I could do that, but I’m never going to be in this thing,'” inti remembered. “It was kind of a short turnaround … I went through all of the other pieces that were up, because this project had been going on for five years and there was a gamut of pieces. There were ones that were so hard. Maybe a graduate string quartet could do it, with a lot of practice. To like very beautiful and simple and quite lyrical pieces with a 16th note pulse or something. … I ended up kind of going from this really complicated score to this very simple score of a single stave that everyone was reading from. … How it happens over time can be determined by the ensemble.”

Over the past few years, inti has gravitated toward string quartets and percussion ensembles, two groups that might seem at opposite ends of the sonic spectrum to some composers but not to her. “I do feel like there’s a certain level of a kind of shared musicality, a shared sense of tone and timbre and attack and all of these things that contribute to a group mentality of how to kind of play with and affect texture in like all of their kind of individual ways.”

But she is also interested in vocal music and has begun exploring it again after a hiatus of several years during which she focused mostly on instrumental music.

“I felt like instrumentalists were down to clown a little bit, where I just didn’t always feel that with vocal ensembles,” she acknowledged. “Then this year and last year has been this kind of a big resurgence of that in my music and in some ways, it’s teaching me things all over again, which has been really, really fun. … I get to kind of luxuriate a little bit in the quality of two people singing together, actually using all of the complexities of a word to push forward meaning. But to me it’s not narrative meaning, and that’s what I was afraid of, that when I had to engage language, I had to be tied to a narrative, instead of being tied to the complexities of thinking about something like love, or lots of other things.”

Ultimately, whatever the medium, inti is interested in constructing open structures that take performers and listeners to new places.

“For the most part my pieces are workshops in some ways,” she said. “It’s almost like a loose suit and then we fit it over the rehearsal.”

Giving Singers Creative Control

Full Disclosure: many of the samples I share in this article are from the See-A-Dot Music Catalog, a company for which I am the director.

What happens when performers can exercise some level of agency?

My usual experience in the choral world is one of strict hierarchy: composers create works, conductors interpret them as best they can to deliver the composers’ intentions, and the singers do what they are told. But what happens when a piece makes room for individual performers to exercise some level of agency over the work? As a conductor, I find that presenting my younger singers with indeterminacy creates amazing teaching moments in which my students break through to new levels of understanding. For instance, when presented with a moment of free choice-making in a piece, my singers’ first concern is almost always that they are doing it wrong. Teachers and conductors alike emphasize accuracy, intonation, and making sure students “know the notes.” A piece of music that is less interested in “getting it right” and more interested in giving performers creative agency can therefore cause a great deal of consternation in rehearsals.

Composers can open up a whole new world of sound and textural possibilities through the use of indeterminate sections in a piece, whereby individual voices break away from others in their section and make creative choices on their own (usually within given parameters). This technique is common in instrumental music, but less so in choral scores. (Perhaps we’ve sung the chorus “All We Like Sheep” one too many times.) The degree of indeterminacy and individual choice can vary quite a bit, from pieces that present strict guidelines to pieces that include undefined symbols that are left solely to the performer’s interpretation.

This example, from Karen Siegel’s Maskil of David, shows a more constrained form of indeterminacy where each line is given a short phrase that is performed individually by each person in the section, though with the same sense of tempo and meter.

Another example, from Drew Corey’s piece Of All Of Them: the first, the last, uses a similar technique, but without the strict rhythmic component. In the passage below, we have a few different events happening simultaneously. First, there is a warm harmonic background texture provided by the middle voices, created from repeating notes or phrases sung independently by each performer in the section. Then, there are rhythmic and melodic unison gestures guided by the conductor and indicated with vertical dotted lines to show their simultaneity. The last phrase in the upper voice, however, is a solo part and is rhythmically independent from what occurs beneath it. While the conductor gives cues for the unison moments in the lower parts, the soloists freely sing the melody in whatever time they choose.

Performers who are actively part of the creative process connect with each other and the music in entirely new ways.

Similar to what I had discussed in my previous article on New Polyphony in new choral music, this compositional technique opens up a wide palette of textural opportunities, polyrhythms, and dense harmonies that might not have been as possible or easily achieved if strictly notated. More importantly, there is a level of engagement and presence that such techniques require from the singers. When performers are actively part of the creative process, they connect with each other and the music in entirely new ways.

The New Polyphony

Full Disclosure: many of the samples I share in this article are from the See-A-Dot Music Catalog, a company for which I am the director.

The choral composer’s concern is often one of accessibility.

A few years ago I attended a choral conference, and during one presentation on new music an attendee raised his hand and asked, “Is polyphony in choral music dead?” He was referring to the increased use of homophonic textures in choral music, which I believe comes from composers wanting to create increasingly dense harmonic textures to fit in with other mediums in the contemporary music world. The choral composer’s concern is often one of accessibility, since the majority of ensembles in the USA are avocational groups.

I don’t believe polyphony is dead, but I do think the future of choral music will embrace techniques that preserve the horizontal approach to writing, while maintaining accessibility and not falling into anachronistic musical styles like traditional tonal polyphony. Such techniques are already arising in today’s choral music. Our traditional interest in polyphonic textures and increased harmonic complexity can, for instance, be satisfied with thick layers of otherwise tonal material. When done well, these layers create dense, multi-faceted textures, without demanding a high level of virtuosity from the singers. Instead, these techniques will engage choristers as thoughtful and musical artists.

Layering melodic cells offset from one another is reminiscent of rounds and canons.

One technique that mimics the horizontal focus and function of polyphony, without quite as much attention to the consequent vertical harmonies, is the layering of melodic cells offset from one another. This is reminiscent of using rounds and canons, but instead of using the same material repeated in every part, each voice has its own repeated phrase. While the technique can be used in combination with other types of writing or just in specific sections of a composition, it can also stand on its own as the sole organizational idea behind a piece. This is seen in minimalist instrumental compositions such as Terry Riley’s famous In C, but is also found in vocal ensemble music and vocal adaptations of instrumental tunes.

Meredith Monk is a pioneer of this technique, which can be found in one of her most popular works for chorus, Panda Chant II. Panda Chant II uses a series of short cells, most in different meters, that are layered additively one on top of another sequentially, creating a dense wall of sound. Here’s a performance of the piece by the San Francisco Girls Chorus:

What I love about this piece as a conductor is how simple it is to teach to the singers. I have taught this piece to a variety of groups, from traditional community-style choirs to workshops for dancers with little to no singing experience. Other recent compositions for choir, like Bettina Sheppard’s Love is Anterior to Life, use the same technique, but with a more familiar notation and choral structure. In this piece, each voice is given a melodic phrase, anywhere between one and eight beats in length; these phrases are layered on top of one another, creating an undulating texture.

Pieces like this make for a great first rehearsal “win,” yet still remain musically engaging through the rehearsal process. However, while singers can generally memorize their parts after a few minutes of repetition, the simplicity of the piece emphasizes how ensemble listening is what makes or breaks a successful performance. It’s not enough in a piece like this to get your own part right; engaging with how your part fits in with the others and maintaining a sense of meter, blend, and balance are key to its success.

These examples use layering as their sole method of composition, and this idea of combining various repeated phrases of different lengths can also be used to great effect without organizing them additively. Ethan Gans-Morse’s Kyrie is a similarly constructed minimalist piece, yet instead of layering each phrase on top of another in sequential order, these cells act as the basic musical material and are combined in different permutations to create a narrative flow. The effect is a relentless and haunting experience.

These are just a few examples of ways to explore this idea. It can be a creative and effective way to bring a different kind of polyphonic texture to your work without increasing the harmonic and rhythmic difficulty of the individual lines, thus making the piece more accessible to avocational singers and students.

 

From the Machine: Conversations with Dan Tepfer, Kenneth Kirschner, Florent Ghys, and Jeff Snyder

Over the last three weeks, we’ve looked at various techniques for composing and performing acoustic music using computer algorithms, including realtime networked notation and algorithmic approaches to harmony and orchestration.

Their methods differ substantially, from the pre-compositional use of algorithms, to the realtime generation of graphic or traditionally notated scores, to the use of digitally controlled acoustic instruments and musical data visualizations.

This week, I’d like to open up the conversation to include four composer/performers who are also investigating the use of computers to generate, manipulate, process, and display musical data for acoustic ensembles. While all four share a similar enthusiasm for the compositional and performance possibilities offered by algorithms, their methods differ substantially, from the pre-compositional use of algorithms, to the realtime generation of graphic or traditionally notated scores, to the use of digitally controlled acoustic instruments and musical data visualizations.

Pianist/composer Dan Tepfer, known both for his expressive jazz playing and his interpretations of Bach’s Goldberg Variations, has recently unveiled his Acoustic Informatics project for solo piano. In it, Tepfer uses realtime algorithms to analyze and respond to note data played on his Yamaha Disklavier piano, providing him with an interactive framework for improvisation. Through the use of musical delays, transpositions, inversions, and textural elaborations of his input material, he is able to achieve composite pianistic textures that would be impossible to realize with a human performer or computer alone.

Composer Kenneth Kirschner has been using computers to compose electronic music since the 1990s, manipulating harmonic, melodic, and rhythmic data algorithmically to create long-form works from minimal musical source material. Several of his electronic works have recently been adapted to the acoustic domain, raising questions of musical notation for pieces composed without reference to fixed rhythmic or pitch grids.

Florent Ghys is a bassist and composer who works in both traditional and computer-mediated compositional contexts. His current research is focused on algorithmic composition and the use of realtime notation to create interactive works for acoustic ensembles.

Jeff Snyder is a composer, improviser, and instrument designer who creates algorithmic works that combine animated graphic notation and pre-written materials for mixed ensembles. He is also the director of the Princeton Laptop Orchestra (PLOrk), which has given him a wealth of experience in computer networking for live performance.

THE ROLE OF ALGORITHMS

JOSEPH BRANCIFORTE: How would you describe the role that computer algorithms play in your compositional process?

KENNETH KIRSCHNER: I come at this as someone who was originally an electronics guy, with everything done on synthesizers and realized electronically. So this computer-driven approach is just the way I work, the way I think compositionally. I’ve never written things with pencil and paper. I work in a very non-linear way, where I’m taking patterns from the computer and juxtaposing them with other patterns—stretching them, twisting them, transposing them.

I have to have that feedback loop where I can try it, see what happens, then try it again and see what happens.

A lot of my obsession over the last few years has been working with very reduced scales, often four adjacent semitones, and building patterns from that very restricted space. I find that as you transpose those and layer them over one another, you get a lot of very interesting emergent patterns. In principle, you could write that all out linearly, but I can’t imagine how I would do it, because so much of my process is experimentation and chance and randomness: you take a bunch of these patterns, slow this one down, transpose this one, layer this over that. It’s very fluid, very quick to do electronically—but hopelessly tedious to do if you’re composing in a linear, notated way. My whole development as a composer presupposes that realtime responsiveness. I have to have that feedback loop where I can try it, see what happens, then try it again and see what happens.

FLORENT GHYS: That’s very interesting, because we don’t come from the same background, but we ended up with algorithmic music for the same reasons. I come from a background of traditional acoustic music composition: writing down parts and scores for musicians. But I realized that the processes I was using as I was composing—canons, isorhythms, transpositions, stretching out durations—were very easy to reproduce in Max/MSP. I began by working with virtual instruments on the computer, fake sounds that gave me an idea of what it might sound like with a real ensemble. It was fascinating to listen to the results of an algorithmic process in real time—changing parameters such as density of rhythm, rhythmic subdivision, transposition, canonic relationships—and being able to hear the results on the spot. Even something as simple as isorhythm—a cell of pitches and a cell of rhythms that don’t overlap—writing something like that down takes some time. With an algorithmic process, I can go much faster and generate tons of material in a few minutes, rather than spending hours in Sibelius just to try out an idea.

DAN TEPFER: I’ve used algorithms in a number of ways. I’ve done stuff where I’ve generated data algorithmically that then gets turned into a relatively traditional composition, with notes on a page that people play. I’ve also experimented with live notation, which is more improvisationally based, but with some algorithmic processing in there too. And then there’s the stuff I’ve been doing recently with the Disklavier, where the algorithms react to what I’m improvising on the piano in real time.

With the live notation stuff, I’ve done it with string quartet, or wind quartet, and me on piano. I did one show where it was both of them together, and I could switch back and forth or have them both playing. I have a controller keyboard on top of the piano, and I can play stuff that gets immediately sent out as staff notation. There’s some processing where it’ll adapt what I’m playing to the ranges of each instrument, doubling notes or widening the register. Then there are musical controls where I can save a chord and transform it in certain ways just by pushing a button. At the rhythmic level, there’s usually a beat happening and this stuff is floating above it, a bit of an improvisational element where the musicians can sink into the groove.

JEFF SNYDER: I’ve got two main pieces that I would say fall into this category of realtime notation. The first is called Ice Blocks, which combines graphic notation with standard notation for open instrumentation. And then another one called Opposite Earth, which uses planets’ orbits as a graphic notation device. There are ten concentric circles, each one assigned to a performer. Each musician is a particular planet on an orbit around the sun. As the conductor, I can introduce vertical or horizontal lines from the center. The idea is that when your planet crosses one of those lines, you play a note. I have control over how fast each planet’s orbit is, as well as the color of the lines, which refer to pitch materials. There are five different colors that end up being five different chords. So it sets up a giant polyrhythm based on the different orbits and speeds.

Each planet can also rotate within itself, with additional notches functioning the same way as the lines do, although using unpitched sounds. That basically gives me another rhythmic divider to play with. I can remove or add orbits to thin out the texture or add density. It’s interesting because the piece allows me to do really complicated polyrhythms that couldn’t be executed as accurately with traditional notation. You might be playing sixteen against another person’s fifteen, creating this really complicated rhythmic relationship that will suddenly line up again. This makes it really easy: all you’re doing is watching a line, and each time you cross, you make a sound. You can do it even if the players aren’t particularly skilled.

PERFORMANCE PRACTICE AND USER EXPERIENCE

JB: I’m really interested in this question of performer “user experience” when working with realtime notational formats. What were the performers’ responses to dealing with your dynamic graphic notation, Jeff?

JS: The piece was played by PLOrk, which is a mix of composition grad students, who are up for anything, and then undergrads who are a mix of engineers and other majors. They get excited about the fact that it’s something different. But I’ve worked with more conservative ensembles and had performers say, “I’ve worked for so many years at my instrument, and you’re wasting my skills.” So people can have that response as well when you move away from standard notation.

With PLOrk, I was able to workshop the piece over a few months and we would discover together: “Is this going to be possible? Is this going to be too difficult? Is this going to be way too easy?” I could experiment with adding staff notation or using different colors to represent musical information. For me, it was super valuable because I wasn’t always able to gauge how effective certain things would be in advance. None of this stuff has a history, so it’s hard to know whether people can do certain things in a performance situation. Can people pay attention to different gradations of blue on a ring while they’re also trying to perform rhythms? I just have to test it, and then they’ll tell me whether it works.

JB: There’s always that initial hurdle to overcome with new notational formats. I’ve been using traditional notation in my recent work, albeit a scrolling version where performers can only see two measures at a time, but I remember a similar adjustment period during the first rehearsal with a string quartet. We set everyone up, got the Ethernet connections between laptops working, tested the latencies—everything looked good. But for the first fifteen minutes of rehearsal, the performers were all complaining that the software wasn’t working properly. “It just feels like it’s off. Maybe it’s not synced or something?” So I did another latency check, and everything was fine, under two milliseconds of latency.

DT: So the humans weren’t synced!

It’s just a new skill. Once performers get used to it, then they don’t want it to change.

JB: I reassured them that everything was working properly, and we kept rehearsing. After about 30 minutes, they started getting the hang of the scrolling notation—things were beginning to sound much more comfortable. So after rehearsal, as everyone was packing up, I said, “Is there anything you’d like me to change in the software, anything that would make the notation easier to deal with?” And they all said, “No! Don’t change a thing. It’s perfect!” And then I realized: it’s just a new skill. Once performers get used to it, then they don’t want it to change. They just need to know that it works and that they can rely on it.

But beyond the mechanics of using the software, I sometimes wonder whether it’s harder for a performer to commit to material that they haven’t seen or rehearsed in advance. They have no idea what’s coming next and it’s difficult to gain any sense of the piece as a whole.

FG: I think you’re touching on something related to musicianship. In classical music, the more you play a piece, the better you’re going to understand the music, the more you’re going to be able to make it speak and refine the dynamics. And within the context of the ensemble, you’ll understand the connections and coordination between all the musicians. So the realtime notation is going to be a new skill for musicians to learn—to be able to adapt to material that’s changing. It’s also the job of the composer to create a range of possibilities that musicians can understand. For instance, the piece uses certain types of rhythms or scales or motives; a performer might not know exactly what it’s going to be, but they understand the range of things that can happen.

KK: They need to be able to commit to the concept of the piece, rather than any of the specific details of the narrative.

DT: I think a key word here is culture. You’re seeing a microcosm of that when, in the time span of a rehearsal, you see a culture develop. At the beginning of the rehearsal, musicians are like, “It’s not working,” and then after a certain time they’re like, “Oh, it is working.” Culture is about expectations about what is possible. And if you develop something in the context of a group, where it is understood to be fully possible, then people will figure out ways to do it. It might start with a smaller community of musicians who can do it at first. But I think we’re probably not far from the time when realtime sight-reading will just be a basic skill. That’s going to be a real paradigm shift.

I think we’re probably not far from the time when realtime sight-reading will just be a basic skill. That’s going to be a real paradigm shift.

JB: How do you deal with the question of notational pre-display in your live notation work, Dan?

DT: It happens pretty much in real time.

JB: So you play a chord on your MIDI keyboard and it gets sent out to the musicians one measure at a time?

DT: They’re just seeing one note. There’s no rhythmic information. The real difficulty is that I have to send the material out about a second early in order to have any chance of maintaining consistency in the harmonic rhythm. It takes some getting used to, but it’s surprisingly intuitive after a while.

JS: That’s something I wasn’t able to address in the planets piece by the time of the performance: there was no note preparation for them, so lines just show up. I told the performers, “Don’t worry if a line appears right before your planet is about to cross it. Just wait until the next time it comes around again.” But it still stressed them out. As performers, they’re worried about “missing a note,” especially because the audience could see the notation too. So perhaps in the next version I could do something where the lines slowly fade in to avoid that issue.

JB: I have to sometimes remind myself that the performers are part of the algorithm, too. As much as we want the expanded compositional possibilities that come from working with computers, I think all of us value the process of working with real musicians.

KK: With these recent acoustic adaptations of my pieces, it was a whole different experience hearing it played with an actual pianist and cellists. It was a different piece. And I thought, “There is something in here that I want to pursue further.” There’s just a level of nuance you’re getting, a level of pure interpretation that’s not going to come through in my electronic work. But the hope is that by composing within the electronic domain, I’m stumbling upon compositional approaches that one may not find writing linearly.

COMPUTER AS COMPOSITIONAL SURROGATE

JB: I want to discuss the use of the computer as a “compositional surrogate.” The premise is that instead of working out all of the details of a piece in advance, we allow the computer to make decisions on our behalf during performance, based on pre-defined rules or preferences. There’s an argument that outsourcing these decisions to the computer is an abdication of the fundamental responsibility of being a composer, the subjective process of selection. But I’ve begun to see algorithm design as a meta-compositional process: uncovering the principles that underlie my subjective preferences and then embedding them into the algorithmic architecture itself.

KK: Right. There’s a sense that when something works musically, there’s a reason for it. And what we’re trying to do is uncover those reasons; the hope is that some of those rules that are affecting our aesthetic judgment are able to be discovered. Once you begin to codify some of that, you can offload it and shift some of the compositional responsibility to the computer. The idea is to build indeterminate pieces that have a degree of intelligence and adaptation to them. But that requires us to understand what some of those underlying mechanisms are that make us say “this is good” or “this is bad.”

For me, something might sound good one day, and another day I might hate it. I don’t know if you’re ever going to find a “rule” that can explain that.

FG: I don’t know. I’m a little skeptical. For me, something might sound good one day, and another day I might hate it. I don’t know if you’re ever going to find a “rule” that can explain that; there are so many factors that go into musical perception.

JB: A dose of skepticism is probably warranted if we’re talking about machines being able to intervene in questions of aesthetics. But to me, the beauty of designing a composer-centric framework is that it allows you to change your preferences from day to day. You can re-bias a piece to conform to whatever sounds good to you in the moment: a different tempo, more density, a slightly different orchestration. I’m not sure that we even need to understand the nature of our preferences, or be able to formalize them into rules, in order to have the computer act as an effective surrogate. Economists have a concept called “revealed preference,” where instead of looking at what consumers say they want, you look at their purchasing habits. That kind of thing could be applied to algorithm design, where the algorithm learns what you like simply by keeping track of your responses to different material.

KK: I’ve had a similar thought when working on some of my indeterminate pieces—that you want a button for “thumbs up” or “thumbs down.” If you could record the aggregate of all those decisions, you could begin to map them to a parameter space that has a greater chance of giving you good outcomes. You could also have different profiles for a piece. For example, I could do my “composer’s version” that contains my preferences and builds the piece in a certain direction; then I could hand it off to you, hit reset, and have you create your own version of the piece.

FG: In a lot of the algorithms I’ve been designing lately, I have a “determinacy-to-randomness” parameter where I can morph from something I’ve pre-written, like a melody or a series of pitches, to a probabilistic set of pitches, to a completely random set of pitches. With the probabilities, I allow the computer to choose whatever it wants, but I tell it, “I’d like to have more Gs and G#s, but not too many Cs.” So, weighted probabilities. We know that the random number generator in Max/MSP, without any scaling or probabilities, sounds like crap.

KK: It needs constraints.

JB: Finding ways to constrain randomness—where it’s musically controlled, but you’re getting new results with every performance—that’s become a major compositional concern for me. As an algorithm grows from initial idea to a performance-ready patch, the parameters become more abstract and begin to more closely model how I hear music as a listener. At the deepest level of aesthetic perception, you have things like balance, long-range form, tension/resolution, and expectation. I think probabilistic controls are very good at dealing with balance, and maybe not as good with the others.

FG: Yeah, when you deal with algorithms you go to a higher level of thinking. I’ve done things where I have a pattern that I like, and I want the computer to generate something else like it. And then eventually I know I want it to transform into another pattern or texture. But the tiny details of how it gets from A to B don’t really matter that much. It’s more about thinking of the piece as a whole.

NETWORKED NOTATION

JB: Jeff, I wanted to ask you about something a little more technical: when dealing with live notation in PLOrk, are you using wired or wireless connections to the performers’ devices?

JS: I’ve done live notation with both wireless and wired connections. In any kind of networking situation, we look at that question on a case-by-case basis. If we’re going to do wired, it simplifies things because we can rely on reasonable timing. If we’re going to do wireless, we usually have issues of sync that we have to deal with. For a long time, our solution has been LANdini, which was developed by Jascha Narveson. Recently, Ableton Link came out and that simplifies things. So if you don’t need certain features that LANdini offers—if you just need click synchronization—then Link is the simpler solution. We’ve been doing that for anything in which we just need to pulse things and make sure that the pulses show up at the same time, like metronomes.

JB: In my notation system, there’s a cursor that steps through the score, acting as a visual metronome to keep the musicians in sync. So transfer speed is absolutely critical there to make sure there’s as little latency as possible between devices. I’ve been using wired Ethernet connections, which ensures good speed and reliability, but it quickly becomes a real mess on stage with all the cables. Not to mention the hundreds I’ve spent on Ethernet adapters! Perhaps the way to do it is to have Ableton Link handle the metronome and then use wireless TCP/IP to handle the notation messages.

JS: That’s what I was just about to suggest. With Link, you can actually get information about which beat number you’re on, it’s not just a raw pulse.

JB: Does it work well with changing time signatures?

JS: That’s a good question, I haven’t tested that. I have discovered that any tempo changes make it go nuts. It takes several seconds to get back on track when you do a tempo change. So it’s limited in that way. But there are other possibilities that open up when you get into wireless notation. Something I’ve really wanted to do is use wireless notation for spatialization and group dynamics. So say you had a really large ensemble and everybody is looking at their own iPhone display, which is giving them graphic information about their dynamics envelopes. You could make a sound move through an acoustic ensemble, the same way electronic composers do with multi-speaker arrays, but with a level of precision that couldn’t be achieved with hand gestures as a conductor. It’d be easily automated and would allow complex spatial patterns to be manipulated, activating different areas of the ensemble with different gestures. That’s definitely doable, technically speaking, but I haven’t really seen it done.

BRINGING THE COMPOSER ON STAGE

Do you think that having the composer on stage as a privileged type of performer is potentially in conflict with the performers’ ownership of the piece?

JB: With this emerging ability for the composer to manipulate a score in realtime, I wonder what the effects will be on performance culture. Do you think that having the composer on stage as a privileged type of performer is potentially in conflict with the performers’ ownership of the piece?

FG: Bringing the composer on stage changes the whole dynamic. Usually instrumentalists rule the stage; they have their own culture. Now you’re up there with them, and it totally changes the balance. “Whoa, he’s here, he’s doing stuff. Why is he changing my part?”

JB: Right, exactly. In one of my early realtime pieces, I mapped the faders of a MIDI controller to the individual dynamic markings of each member of the ensemble. This quickly got awkward in rehearsal when one of the violinists said half-jokingly, “It seems like I’m playing too loudly because my dynamic markings keep getting lower and lower.”

DT: It’s like Ligeti-style: you go down to twelve ps! [laughs]

JB: From that point, I became very self-conscious about changing anything. I suddenly became aware of this strange dynamic, where I’m in sole control of the direction of the piece but also sitting on stage alongside the musicians.

DT: You know, it’s interesting—come to think of it, in everything I’ve done with live notation, I’m performing as well. I think that makes a huge difference, because I can lead by example.

KK: And you’re also on stage and you’re invested as a performer. Whereas Joe is putting himself in this separate category—the puppet master!

FG: I wonder if it’s not also the perception of the instrumentalists in what they understand about what you’re doing. In Dan’s case, they totally get what he’s doing: he’s playing a chord, it’s getting distributed, they have their note. It’s pretty simple. With more complex algorithmic stuff, they might not get exactly what you’re doing. But then they see an obvious gesture like lowering a fader, and they think, “Oh, he’s doing that!”

DT: Something nice and simple to complain about!

FG: Otherwise, you’re doing this mysterious thing that they have no idea about, and then they just have to play the result.

KK: This is why I think it’s really important to start working with a consistent group of musicians, because we’ll get past this initial level and start to see how they feel about it in the longer term as they get used to it. And that might be the same response, or it might be a very different response.

DT: Has anyone taken that step of developing this kind of work over a couple of years with the same group of people? I think then you’ll see performers finding more and more ways of embracing the constraints and making it their own. That’s where it gets exciting.


Well, that about does it for our four-part series. I hope that these articles have initiated conversation with respect to the many possible uses of computer algorithms in acoustic music, and perhaps provided inspiration for future work. I truly believe that the coupling of computation and compositional imagination offers one of the most promising vistas for musical discovery in the coming years. I look forward to the music we will collectively create with it.

Comments and questions about the series are very much welcome, either via the comments section below or any of the following channels:

josephbranciforte.com // facebook // twitter // instagram

From the Machine: Realtime Algorithmic Approaches to Harmony, Orchestration, and More

As we discussed last week, the development of a realtime score, in which compositional materials can be continuously modified, re-arranged, or created ex nihilo during performance and displayed to musicians as musical notation, is no longer the stuff of fantasy. The musical and philosophical implications of such an advance are only beginning to be understood and exploited by composers. This week, I’d like to share some algorithmic techniques that I’ve been developing in an attempt to grapple with some of the compositional possibilities offered by realtime notation. These range from the more linear and performative to the more abstract and computation-intensive; they deal with musical parameters ranging from harmony and form to orchestration and dynamics. Given the relative novelty and almost unlimited nature of the subject matter (to say nothing of the finite space allotted for the task), consider this a report from one person’s laboratory, rather than anything like a comprehensive survey.

HARMONY & VOICE LEADING

How might we begin to create something musically satisfying from just this raw vocabulary?

I begin with harmony, as it is the area that first got me interested in modeling musical processes using computer algorithms. I have always been fascinated by the way in which a mechanistic process like connecting the tones of two harmonic structures, according to simple rules of motion, can produce such profound emotional effects in listeners. It is also an area that seems to still hold vast unexplored depths—if not in the discovery of new vertical structures[1], at the very least in their horizontal succession. The sheer combinatorial magnitude of harmonic possibilities is staggering: consider each pitch class set from one to twelve notes in its prime form, multiplied by the number of possible inversional permutations of each one (including all possible internal octave displacements), multiplied by the possible chromatic transpositions for each permutation—for just a single vertical structure! When one begins to consider the horizontal dimension, arranging two or more harmonic structures in succession, the numbers involved are almost inconceivable.

The computer is uniquely suited to dealing with the calculation of just such large data sets. To take a more realistic and compositionally useful example: what if we wanted to calculate all the inversional permutations of the tetrachord {C, C#, D, E} and transpose them to each of the twelve chromatic pitch levels? This would give us all the unique pitch class orderings, and thus the complete harmonic vocabulary, entailed by the pitch class set {0124}, in octave-condensed form. These materials might be collected into a harmonic database, one which we can sort and search in musically relevant ways, then draw on in performance to create a wide variety of patterns and textures.

First we’ll need to find all of the unique orderings of the tetrachord {C, C#, D, E}. A basic law of combinatorics states that there will be n! distinct permutations of a set of n items. This (to brush up on our math) means that for a set of 4 items, we can arrange them in 4! (4 x 3 x 2 x 1 = 24) ways. Let’s first construct an algorithm that will return the 24 unique orderings of our four-element set and collect them into a database.

example 1


Next, we need to transpose each of these 24 permutations to each of the 12 chromatic steps, giving us a total of 288 possible structures. To work something like this out by hand might take us fifteen or twenty minutes, while the computer can calculate such a set near-instantly.

example 2
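For readers who want to tinker with these ideas outside of a patching environment, here is a minimal sketch of the same two steps in Python; the pitch-class representation and names are my own, not those of the system behind the original examples.

    from itertools import permutations

    # The tetrachord {C, C#, D, E} as pitch classes, with C = 0.
    tetrachord = [0, 1, 2, 4]

    # Step 1: the 4! = 24 unique orderings (example 1).
    orderings = list(permutations(tetrachord))
    assert len(orderings) == 24

    # Step 2: transpose every ordering to each of the 12 chromatic steps,
    # giving the 24 x 12 = 288 octave-condensed structures (example 2).
    database = [tuple((pc + t) % 12 for pc in ordering)
                for ordering in orderings
                for t in range(12)]
    assert len(database) == 288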


The question of what to do with this database of harmonic structures remains: how might we begin to create something musically satisfying from just this raw vocabulary? The first thing to do might be to select a structure (1-288) at random and begin to connect it with other structures by a single common tone. For instance, if the first structure we draw is number 126 {F# A F G}, we might create a database search tool that allows us to locate a second structure with a common tone G in the soprano voice.

example 3:

To add some composer interactivity, let’s add a control that allows us to specify which voice to connect on the subsequent chord using the numbers 1-4 on the computer keypad. If we want to connect the bass voice, we can press 1; the tenor voice, 2; the alto voice, 3; or the soprano voice, 4. Lastly, let’s orchestrate the four voices to string quartet, with each structure lasting a half note.

example 4:
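A sketch of the common-tone search and voice-selection control described above, under the same assumptions as the previous sketch (structures are read bottom to top as bass, tenor, alto, soprano; the voice argument stands in for the 1-4 keypad control):

    import random
    from itertools import permutations

    # Rebuild the 288-structure vocabulary from the previous sketch.
    tetrachord = [0, 1, 2, 4]
    database = [tuple((pc + t) % 12 for pc in ordering)
                for ordering in permutations(tetrachord)
                for t in range(12)]

    def common_tone_candidates(current, voice):
        """All structures that keep the chosen voice's pitch class.
        voice: 1 = bass, 2 = tenor, 3 = alto, 4 = soprano (as on the keypad)."""
        i = voice - 1
        return [s for s in database if s[i] == current[i] and s != current]

    # Draw a first structure at random, then connect it to a second structure
    # that shares its soprano tone, as in the common-tone example above.
    current = random.choice(database)
    options = common_tone_candidates(current, voice=4)
    following = random.choice(options)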

This is a very basic example of a performance tool that can generate a series of harmonically self-similar structures, connect them to one another according to live composer input, and orchestrate them to a chamber group in realtime. While our first prototype produces a certain degree of musical coherence by holding one of the four voices constant between structures, it fails to specify any rules governing the movement of the other three voices. Let’s design another algorithm whose goal is to control the horizontal plane more explicitly and keep the overall melodic movement a bit smoother.

A first approach might be to calculate the total melodic movement between the current structure and each candidate structure in the database, filtering out candidates whose total movement exceeds a certain threshold. We can calculate the total melodic movement for each candidate by measuring the distance in semitones between each voice in the current structure and the corresponding voice in the candidate structure, then adding together all the individual distances.[2]

example 5.0
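In code, the total-movement filter might look something like this; the structures are assumed to be tuples of actual pitches, and the 8-semitone threshold is only an example.

    def total_movement(current, candidate):
        """Sum of each voice's absolute distance in semitones (cf. footnote 2)."""
        return sum(abs(b - a) for a, b in zip(current, candidate))

    def filter_by_total_movement(current, candidates, max_semitones):
        """Keep candidates whose combined melodic movement stays at or below the threshold."""
        return [c for c in candidates
                if 0 < total_movement(current, c) <= max_semitones]

    # e.g. allow at most 8 semitones of combined movement between structures
    survivors = filter_by_total_movement((60, 64, 67, 72),
                                         [(59, 65, 67, 74), (53, 58, 62, 65)], 8)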


While this technique will certainly reduce the overall disjunction between structures, it still fails to provide rules that govern the movement of individual voices. For this we will need an interval filter, an algorithm that determines the melodic intervals created by moving from the current structure to each candidate and only allows through candidates that adhere to pre-defined intervallic preferences. We might want to prevent awkward melodic intervals such as tritones and major sevenths. Or perhaps we’d like the soprano voice to move by step (ascending or descending minor and major seconds) while allowing the other voices to move freely. We will need to design a flexible algorithm that allows us to specify acceptable/unacceptable melodic intervals for each voice, including ascending movement, descending movement, and melodic unisons.

example 5.1
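A sketch of the per-voice interval filter; the particular allowed-interval sets below are illustrative, and pitches are assumed to be actual MIDI notes rather than pitch classes.

    def interval_filter(current, candidates, allowed):
        """Keep candidates whose melodic intervals, voice by voice, are all permitted.
        allowed maps a voice index to the set of permitted signed intervals in semitones
        (0 = melodic unison, positive = ascending, negative = descending);
        voices not listed may move freely."""
        kept = []
        for cand in candidates:
            moves = [b - a for a, b in zip(current, cand)]
            if all(moves[v] in allowed[v] for v in allowed):
                kept.append(cand)
        return kept

    # e.g. require the soprano (index 3) to move by step while barring tritones
    # and major sevenths in the bass (index 0)
    rules = {3: {-2, -1, 1, 2},
             0: {m for m in range(-12, 13) if abs(m) not in (6, 11)}}
    smooth = interval_filter((48, 64, 67, 72), [(50, 62, 67, 71), (54, 62, 67, 71)], rules)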


A final consideration might be the application of contrapuntal rules, such as the requirement that the lowest and highest voices move in either contrary or oblique motion. This could be implemented as yet another filter for candidate structures, allowing a contrapuntal rule to be specified for each two-voice combination.

example 5.2
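And a sketch of the contrapuntal filter, limited here to the outer-voice rule just described:

    def motion(a0, a1, b0, b1):
        """Classify the motion between two voices moving from (a0, b0) to (a1, b1)."""
        da, db = a1 - a0, b1 - b0
        if da == 0 and db == 0:
            return "static"
        if da == 0 or db == 0:
            return "oblique"
        return "contrary" if (da > 0) != (db > 0) else "similar"

    def outer_voice_filter(current, candidates):
        """Require contrary or oblique motion between bass (index 0) and soprano (index -1)."""
        ok = {"contrary", "oblique"}
        return [c for c in candidates
                if motion(current[0], c[0], current[-1], c[-1]) in ok]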


Let’s create another musical example that implements these techniques to produce smoother movement between structures. We’ll broaden our harmonic palette this time to include four diatonic tetrachords—{0235}, {0135}, {0245}, and {0257}—and orchestrate our example for solo piano. We can use the same combinatoric approach as we did earlier, computing the inversional permutations of each tetrachord to develop our vocabulary. To keep the data set manageable, let’s limit generated material to a specific range of the piano, perhaps C2 to C6.

We’ll start by generating all of the permutations of {0235}, transposing each one so that its lowest pitch is C2, followed by each of the remaining three tetrachords. Before adding a structure to the database, we will add a range check to make sure that no generated structure contains any pitch above C6. If it does, it will be discarded; if not, it will be added to the database. We will repeat this process for each chromatic step from C#2 to A5 (A5 being the highest chromatic step that will produce in-range structures) to produce a total harmonic vocabulary of 2976 structures.

Let’s begin our realtime compositional process by selecting a random structure from among the 2976. In order to determine the next structure, we’ll begin by running all of the candidates through our semitonal movement algorithm, calculating the distance between each voice of the first structure and the corresponding voice of every other structure in the database. To reduce disjunction between structures, but avoid repetitions and extremely small harmonic movements, let’s allow total movements of between 4 and 10 semitones. All structures that fall within that range will then be passed through to the interval check algorithm, where they will be tested against our intervallic preferences for each voice. Finally, all remaining candidates will be checked for violation of any contrapuntal rules that have been defined for each voice pair. Depending on how narrowly we set each one of these filters, we might reduce our candidate set from 2976 to somewhere between 5 and 50 harmonic structures. We can again employ an aleatoric variable to choose freely among these, given that each has met all of our horizontal criteria.
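Here is a compressed, self-contained sketch of the whole étude pipeline. The voicing convention (each voice placed in the nearest octave above the voice below) and the exact filter settings are assumptions of mine, since the article leaves those details to the patch, so the resulting structure count and music will differ from the recorded example.

    import random
    from itertools import permutations

    LOW, HIGH = 36, 84                                   # C2 to C6 as MIDI pitches
    TETRACHORDS = [(0, 2, 3, 5), (0, 1, 3, 5), (0, 2, 4, 5), (0, 2, 5, 7)]

    def voice_upward(perm, bass):
        """Voice an ordering bottom to top from the given bass note, each voice
        in the nearest octave above the voice below (an assumed convention)."""
        pitches = [bass]
        for prev_pc, pc in zip(perm, perm[1:]):
            step = (pc - prev_pc) % 12
            pitches.append(pitches[-1] + (step if step else 12))
        return tuple(pitches)

    # Build the range-checked vocabulary.
    database = []
    for tet in TETRACHORDS:
        for perm in permutations(tet):
            bass = LOW
            while True:
                structure = voice_upward(perm, bass)
                if max(structure) > HIGH:
                    break
                database.append(structure)
                bass += 1

    def total_movement(a, b):
        return sum(abs(y - x) for x, y in zip(a, b))

    def contrary_or_oblique(a, b):
        d_bass, d_sop = b[0] - a[0], b[-1] - a[-1]
        return d_bass == 0 or d_sop == 0 or (d_bass > 0) != (d_sop > 0)

    def next_structure(current):
        """Chain the filters, then choose freely among the survivors."""
        pool = [c for c in database if 4 <= total_movement(current, c) <= 10]
        pool = [c for c in pool if abs(c[-1] - current[-1]) <= 2]       # stepwise soprano
        pool = [c for c in pool if contrary_or_oblique(current, c)]
        return random.choice(pool) if pool else current

    progression = [random.choice(database)]
    for _ in range(15):
        progression.append(next_structure(progression[-1]))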

To give this algorithm a bit more musical interest, let’s also experiment with arpeggiation and slight variations in harmonic rhythm. We can define four arpeggio patterns and allow the algorithm to choose one randomly for each structure that is generated.

example 6:
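The arpeggiation layer is just as easy to sketch; the four patterns below are invented for illustration.

    import random

    # Four arpeggio patterns as orderings of the voices (0 = bass ... 3 = soprano).
    ARPEGGIOS = [(0, 1, 2, 3), (3, 2, 1, 0), (0, 2, 1, 3), (0, 3, 1, 2)]

    def arpeggiate(structure):
        """Spell one harmonic structure as a randomly chosen arpeggio pattern."""
        pattern = random.choice(ARPEGGIOS)
        return [structure[i] for i in pattern]

    print(arpeggiate((48, 55, 62, 65)))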

While this example still falls into the category of initial experiment or étude, it might be elaborated to produce more compositionally satisfying results. Instead of a meandering harmonic progression, perhaps we could define formal relationships such as multi-measure repeats, melodic cells that recur in different voices, or the systematic use of transposition or inversion. Instead of a constant stream of arpeggios, the musical texture could be varied in realtime by the composer. Perhaps the highest note (or notes) of each arpeggio could be orchestrated to another monophonic instrument as a melody, or the lowest note (or notes) re-orchestrated to a bass instrument. These are just a few extemporaneous examples; the possibility of achieving more sophisticated results is simply a matter of identifying and solving increasingly abstract musical problems algorithmically.

Here’s a final refinement to our piano étude, with soprano voice reinterpreted as a melody and bass voice reinforced an octave below on the downbeat of each chord change.

example 6.1:

ORCHESTRATION

In all of the above examples, we limited our harmonic vocabulary to structures that we knew were playable by a given instrument or instrument group. Orchestration was thus a pre-compositional decision, fixed before the run time of the algorithm and invariable during performance. Let’s now turn to the treatment of orchestration as an independent variable, one that might also be subject to algorithmic processing and realtime manipulation.

There are inevitably situations where theoretical purity must give way to expediency if one wishes to remain a composer rather than a full-time software developer.

This is an area of inquiry unlikely to arise in electronic composition, due to the theoretical lack of a fixed range in electronic and virtual instruments. In resigning ourselves to working with traditional acoustic instruments, the abstraction of “pure composition” must be reconciled with practical matters such as instrument ranges, questions of performability, and the creation of logical yet engaging individual parts for performers. This is a potentially vast area of study, one that cuts across disciplines such as mathematics/logic, acoustics, aesthetics, and performance practice. Thus, I must here reprise my caveat from earlier: the techniques I’ll present were developed to provide practical solutions to pressing compositional problems in my own work. While reasonable attempts were made to seek robust solutions, there are inevitably situations where theoretical purity must give way to expediency if one wishes to remain a composer rather than a full-time software developer.

The basic problem of orchestration might be stated as follows: how do we distribute n simultaneous notes (or events) among i instruments with fixed ranges?

Some immediate observations that follow:

a) The number of notes to be orchestrated can be greater than, less than, or equal to the number of instruments.
b) Instruments have varying degrees of polyphony, ranging from the ability to play only a single pitch at a time to the ability to play many pitches simultaneously. For polyphonic instruments, idiosyncratic physical properties of the instrument govern which kinds of simultaneities can occur.
c) For a given group of notes and a fixed group of instruments, there may be multiple valid orchestrations. These can be sorted by applying various search criteria: playability/difficulty, adherence to the relationships among instrument ranges, or a composer’s individual orchestrational preferences.
d) Horizontal information can also be used to sort valid orchestrations. Which orchestration produces the least/most amount of melodic disjunction from the last event per instrument? From the last several events? Are there specific intervals that are to be preferred for a given instrument moving from one event to another?
e) For a given group of notes and a fixed group of instruments, there may be no valid orchestration.

Given the space allotted, I’d like to focus on the last three items, limiting ourselves to scenarios in which the number of notes to be orchestrated is the same as the number of instruments available and all instruments are acting monophonically.[3]

Let’s return to our earlier example of four-part harmonic events orchestrated for string quartet, with each instrument playing one note. By conservative estimate, a string quartet has a composite range of C2 to E7 (36 to 100 as MIDI pitch values). This does not mean, however, that any four pitches within that range will be playable by the instrument vector {Violin.1, Violin.2, Viola, Cello} in a one note/one instrument arrangement.

example 7


The most efficient way to determine whether a structure is playable by a given instrument vector—and, if so, which orchestrations are in-range—is to calculate the n! permutations of the structure and pass each one through a per-note range check corresponding to each of the instruments in the instrument vector. If each note of the permutation is in-range for its assigned instrument, then the permutation is playable. Here’s an example of a range check procedure for the MIDI structure {46 57 64 71} for the instrument vector {Violin.1 Violin.2 Viola Cello}.

example 8
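A sketch of that range-check procedure; the range limits are conservative guesses of my own (the article fixes only the composite range C2-E7), but with them the note Bb2 (MIDI 46) fits only the cello, which yields the six in-range permutations discussed below.

    from itertools import permutations

    # Assumed conservative ranges as MIDI pitches (my guesses, not the article's).
    RANGES = {"Violin.1": (55, 100),   # G3 - E7
              "Violin.2": (55, 100),
              "Viola":    (48, 88),    # C3 - E6
              "Cello":    (36, 76)}    # C2 - E5
    INSTRUMENTS = ["Violin.1", "Violin.2", "Viola", "Cello"]

    def in_range(perm):
        """Per-note range check: note i must fit the range of instrument i."""
        return all(RANGES[inst][0] <= note <= RANGES[inst][1]
                   for note, inst in zip(perm, INSTRUMENTS))

    structure = (46, 57, 64, 71)
    playable = [p for p in permutations(structure) if in_range(p)]
    print(len(playable), playable)   # with these assumed ranges, 6 permutations survive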


By generating the twenty-four permutations of the harmonic structure ({46 57 64 71}, {57 64 71 46}, {64 71 46 57}, etc.) and passing each through a range check for {Violin.1 Violin.2 Viola Cello}, we discover that there are only six permutations that are collectively in-range. There is a certain satisfaction in knowing that we now possess all of the possible orchestrations of this harmonic structure for this group of instruments (leaving aside options like double stops, harmonics, etc.).

Although the current example only produces six in-range permutations, larger harmonic structures or instrument groups could bring the number of playable permutations into the hundreds, or even thousands. Our task, therefore, becomes devising systems for searching the playable permutations in order to locate those that are most compositionally useful. This will allow us to automatically orchestrate incoming harmonic data according to various criteria in a realtime performance setting, rather than pre-auditioning and choosing among the playable permutations manually.

There are a number of algorithmic search techniques that I’ve found valuable in this regard. These can be divided into two broad categories: filters and sorts. A filter is a non-negotiable criterion; in our current example, perhaps a rule such as “Violin 1 or Violin 2 must play the highest note.” A sort, on the other hand, is a method of ranking results according to some criterion. Perhaps we want to rank possible orchestrations by their adherence to the natural low-to-high order of the instruments’ ranges; we might order the instruments by the average pitch in their range and then rank permutations according to their deviation from that order. For a less common orchestration, we might decide to choose the permutation that deviates as much as possible from the instruments’ natural order.

example 9
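One way to sketch the filter and the natural-order sort; the deviation measure here (counting instruments that sit outside their natural low-to-high position) is my own simplification, so its ranking will not necessarily match the result shown in example 9.

    from itertools import permutations

    RANGES = {"Violin.1": (55, 100), "Violin.2": (55, 100),
              "Viola": (48, 88), "Cello": (36, 76)}          # assumed, as before
    INSTRUMENTS = ["Violin.1", "Violin.2", "Viola", "Cello"]

    def playable(structure):
        return [p for p in permutations(structure)
                if all(RANGES[i][0] <= n <= RANGES[i][1]
                       for n, i in zip(p, INSTRUMENTS))]

    # Filter: Violin 1 or Violin 2 must take the highest note.
    def violins_on_top(p):
        return max(p) in (p[0], p[1])

    # Sort: rank permutations by how far they depart from the instruments'
    # natural low-to-high order (ordered here by the midpoint of each range).
    natural = sorted(range(len(INSTRUMENTS)),
                     key=lambda i: sum(RANGES[INSTRUMENTS[i]]) / 2)

    def deviation(p):
        by_pitch = sorted(range(len(p)), key=lambda i: p[i])
        return sum(1 for a, b in zip(by_pitch, natural) if a != b)

    candidates = [p for p in playable((46, 57, 64, 71)) if violins_on_top(p)]
    most_conventional = min(candidates, key=deviation)
    least_conventional = max(candidates, key=deviation)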


By applying this filter and sort, the permutation {57 71 64 46} is returned for the instrument vector {Violin.1 Violin.2 Viola Cello}. As we specified, the highest note is played by either Violin 1 or Violin 2 (Violin 2 in this case), while the overall low-to-high distribution of pitches deviates significantly from the instruments’ natural low-to-high order. Mission accomplished.

Let’s also expand our filtering and sorting mechanisms from vertical criteria to include horizontal criteria. Vertical criteria, like the examples we just looked at, can be applied with information about only one structure; horizontal criteria take into account movement between two or more harmonic structures. As we saw in our discussion of harmony, horizontal criteria can provide useful information such as melodic movement for each voice, contrapuntal relationships, total semitonal movement, and more; this kind of data is equally useful in assessing possible orchestrations. In addition to optimizing the intervallic movement of each voice to produce more playable parts, horizontal criteria can be used creatively to control parameters such as voice crossing or harmonic spatialization.

example 10
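Two horizontal criteria sketched in the same spirit: per-instrument melodic movement between consecutive events and a voice-crossing test (the threshold of a fifth is illustrative).

    def melodic_movement(prev, nxt):
        """Semitones each instrument moves between consecutive events."""
        return [abs(b - a) for a, b in zip(prev, nxt)]

    def voices_cross(prev, nxt):
        """True if any pair of instruments swaps its high/low relationship."""
        n = len(prev)
        return any((prev[i] - prev[j]) * (nxt[i] - nxt[j]) < 0
                   for i in range(n) for j in range(i + 1, n))

    def acceptable(prev, candidate, max_leap=7):
        """Avoid crossings and keep every instrument within a fifth of its last note."""
        return (not voices_cross(prev, candidate)
                and max(melodic_movement(prev, candidate)) <= max_leap)

    print(acceptable((46, 57, 64, 71), (48, 55, 64, 72)))   # True with these values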


Such horizontal considerations can be combined with vertical rules to achieve sophisticated orchestrational control. Each horizontal and vertical criterion can be assigned a numerical weight denoting its importance when used as a sorting mechanism. We might assign a weight of 0.75 to the rule that Violin 1 or Violin 2 contains the highest pitch, a weight of 0.5 to the rule that voices do not cross between structures, and a weight of 0.25 to the rule that no voice should play a melodic interval of a tritone. This kind of weighted search more closely models the multivariate process of organic compositional decision-making. Unlike the traditional process of orchestration, it has the advantage of being executable in realtime, thus allowing a variety of indeterminate data sources to be processed according to a composer’s wishes.
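The weighted search might be sketched like this; the weights echo the figures in the text, while the scoring scheme itself (summing the weights of the rules a candidate satisfies) is an assumption of mine.

    RULES = [
        (0.75, lambda prev, cand: max(cand) in (cand[0], cand[1])),   # Vn1/Vn2 on top
        (0.50, lambda prev, cand: not any(                            # no voice crossing
            (prev[i] - prev[j]) * (cand[i] - cand[j]) < 0
            for i in range(4) for j in range(i + 1, 4))),
        (0.25, lambda prev, cand: all(abs(b - a) != 6                 # no melodic tritones
                                      for a, b in zip(prev, cand))),
    ]

    def score(prev, cand):
        """Sum the weights of every rule a candidate orchestration satisfies."""
        return sum(weight for weight, rule in RULES if rule(prev, cand))

    def best_orchestration(prev, candidates):
        return max(candidates, key=lambda c: score(prev, c))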

While such an algorithm is perfectly capable of running autonomously, it can also be performatively controlled by varying parameters such as search criteria, weighting, and sorting direction. Other basic performance controls can be devised to quickly re-voice note data to different parts of the ensemble. Mute and solo functions for each instrument or instrument group can be used to modify the algorithm’s behavior on the fly, paying homage to a ubiquitous technique used in electronic music performance. The range check algorithm we developed earlier could alternatively be used to transform a piece’s instrumentation from performance to performance, instantly turning a work for string quartet and voice into a brass quintet. The efficacy of any of these techniques will, of course, vary according to instrumentation and compositional aims, but there is undoubtedly a range of compositional possibilities waiting to be explored in the domain of algorithmic orchestration.

IDEAS FOR FURTHER EXPLORATION

The techniques outlined above barely scratch the surface of the harmonic and orchestrational applications of realtime algorithms—and we have yet to consider several major areas of musical architecture such as rhythm, dynamics, and form! Another domain that holds great promise is the incorporation of live performer feedback into the algorithmic process. Given my goal of writing a short-form post and not a textbook, however, I’ll have to be content to conclude with a few rapid-fire ideas as seeds for further exploration.

Dynamics:

Map MIDI values (0-127) to musical dynamics markings (say, ppp to fff) and use a MIDI controller with multiple faders/pots to control musical dynamics of individual instruments during performance. Alternatively, specify dynamics algorithmically/pre-compositionally and use the MIDI controller only to modify them, re-balancing the ensemble as needed.
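A minimal Python sketch of such a mapping; the eight-step dynamic scale and the equal-sized zones are assumptions, not a fixed convention.

DYNAMICS = ["ppp", "pp", "p", "mp", "mf", "f", "ff", "fff"]

def midi_to_dynamic(value):
    # 0-127 divided into eight equal zones, ppp through fff.
    index = min(value * len(DYNAMICS) // 128, len(DYNAMICS) - 1)
    return DYNAMICS[index]

print(midi_to_dynamic(0), midi_to_dynamic(64), midi_to_dynamic(127))  # ppp mf fff

In performance, each fader or pot on the controller would drive one instrument’s instance of this mapping.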

Rhythm:

Apply the combinatoric approach used for harmony and orchestration to rhythm, generating all the possible permutations of note attacks and rests within a given temporal space. Use probabilistic techniques to control rhythmic density, beat stresses, changes of grid, and rhythmic variations. Assign different tempi and/or meters to individual members of an ensemble, with linked conductor cursors providing an absolute point of reference for long-range synchronization.
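A sketch of the combinatoric idea in Python; the grid size, density bounds, and downbeat weighting are illustrative assumptions.

import random
from itertools import product

# Every attack/rest pattern on an eight-slot grid (e.g. one 4/4 bar of eighth notes):
# 1 = attack, 0 = rest, giving 2**8 = 256 possibilities.
patterns = list(product([0, 1], repeat=8))

# Control rhythmic density by filtering, then weight selection toward patterns
# that stress the downbeat (slot 0).
candidates = [p for p in patterns if 3 <= sum(p) <= 5]
weights = [2.0 if p[0] == 1 else 1.0 for p in candidates]
print(random.choices(candidates, weights=weights, k=1)[0])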

Form:

Create controls that allow musical “snapshots” to be stored, recalled, and intelligently modified during performance. As an indeterminate composition progresses, a composer can save and return to previous material later in the piece, perhaps transforming it using harmonic, melodic, or rhythmic operations. Or use an “adaptive” model, where a composer can comment on an indeterminate piece as it unfolds, using a “like”/”dislike” button to weight future outcomes towards compositionally desirable states.
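One possible shape for the snapshot and “like”/“dislike” mechanics, sketched in Python; the state representation and the 1.25/0.8 weighting factors are arbitrary assumptions rather than a worked-out system.

import random

snapshots = {}     # name -> stored musical state (represented here as a plain dict)
preferences = {}   # state fingerprint -> weight, nudged by composer feedback

def save_snapshot(name, state):
    snapshots[name] = dict(state)

def recall_snapshot(name):
    return dict(snapshots[name])

def feedback(state, liked):
    key = tuple(sorted(state.items()))
    preferences[key] = preferences.get(key, 1.0) * (1.25 if liked else 0.8)

def choose_next(candidate_states):
    # Future outcomes drift toward states the composer has "liked".
    weights = [preferences.get(tuple(sorted(s.items())), 1.0) for s in candidate_states]
    return random.choices(candidate_states, weights=weights, k=1)[0]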

Performer Feedback:

Allow one or more members of an ensemble to improvise within given constraints and use pitch tracking to create a realtime accompaniment. Allow members of an ensemble to contribute to an adaptive algorithm, where individual or collective preferences influence the way the composition unfolds.

Next week, we will wrap up the series with a roundtable conversation on algorithms in acoustic music with pianist/composer Dan Tepfer, composer Kenneth Kirschner, bassist/composer Florent Ghys, and Jeff Snyder, director of the Princeton Laptop Orchestra.



1. These having been theoretically derived and catalogued by 20th-century music theorists such as Allen Forte. I should add here, however, that while theorists like Forte may have definitively designated all the harmonic species (pitch class sets of one to twelve notes in their prime form), the totality of individual permutations within those species still remains radically under-theorized. An area of further study that would be of interest to me is the definitive cataloguing of the n! inversional permutations of each pitch-class set of n notes. The compositional usefulness of such a project might begin to break down with structures where n > 8 (octachords already producing 40,320 discrete permutations), but would nonetheless remain useful from an algorithmic standpoint, where knowledge of not only a structure’s prime form but also its inversional permutation and chromatic transposition could be highly relevant.


2. In calculating the distance between voices, we are not concerned with the direction a voice moves, just how far it moves. So whether the pitch C moves up a major third to E (+3 semitones) or down a major third to Ab (-3 semitones) is of no difference to us in this instance; we can simply calculate its absolute value, yielding a value of 3.


3. Scenarios in which the number of available voices does not coincide with the number of pitches to be orchestrated necessitate the use of the mathematical operation of combination and a discussion of cardinality, which is beyond the scope of the present article.

From the Machine: Realtime Networked Notation

Last week, we looked at algorithms in acoustic music and the possibility of employing realtime computation to create works that combine pre-composition, generativity, chance operations, and live data input. This week, I will share some techniques and software tools I’ve developed that make possible what might be called an interactive score. By interactive score, I mean a score that is continuously updatable during performance according to a variety of realtime input. Such input might be provided from any number of digitized sources: software user interface, hardware controllers, audio signals, video stream, light sensors, data matrices, or mobile apps; the fundamental requirement is that the score is able to react to input instantaneously, continuously translating fluctuations in input data into a musical notation that is intelligible to performers.

THE ALGORITHMIC/ACOUSTIC DIVIDE

It turns out that this last requirement has historically been quite elusive. As early as the 1950s, composers were turning to computer algorithms to generate and process compositional data. The resultant information could either be translated into traditional music notation for acoustic performance (in the early days, completely by hand; in later years, by rendering the algorithm’s output as MIDI data and importing it into a software notation editor) or realized as an electronic composition. Electronic realizations emerged as perhaps the more popular approach, for several reasons. First, by using electronically generated sounds, composers gained the ability to precisely control and automate the timbre, dynamics, and spatialization of sound materials through digital means. Second, and perhaps more importantly, by jettisoning human performers—and thus the need for traditional musical notation—composers were able to reduce the temporal distance between a musical idea and its sonic realization. One could now audition the output of a complex algorithmic process instantly, rather than undertake the laborious transcription process required to translate data output into musical notation. Thus, the bottleneck between algorithmic idea and sonic realization was reduced, fundamentally, to the speed of one’s CPU.

As computation speeds increased, the algorithmic paradigm was extended to include new performative and improvisational possibilities. By the mid-1970s, with the advent of realtime computing, composers began to create algorithms that included not only sophisticated compositional architectures, but also permitted continuous manipulation and interaction during performance. To take a simple example: instead of designing an algorithm that harmonizes a pre-written melody according to 18th-century counterpoint rules, one could now improvise a melody during performance and have the algorithm intelligently harmonize it in realtime. If multiple harmonizations could satisfy the voice-leading constraints, the computer might use chance procedures to choose among them, producing a harmonically indeterminate—yet, perhaps, melodically determinate—musical passage.

Realtime computation and machine intelligence signal a new era in music composition and performance, one in which novel philosophical questions might be raised and answered.

This is just one basic example of combining live performance input with musically intelligent realtime computation; more complex and compositionally innovative applications can easily be imagined. What is notable with even a simple example like our realtime harmonizer, however, is the degree to which such a process resists neat distinctions such as “composition”/“performance”/“improvisation” or “fixed”/“indeterminate.” It is all of these at once, it is each of these to varying degrees, and yet it is also something entirely new as well. Realtime computation and machine intelligence signal a new era in music composition and performance, one in which novel philosophical questions might be raised and answered. I would argue that the possibility of instantiating realtime compositional intelligence in machines holds the most radically transformative promise for a paradigmatic shift in music in the years ahead.

All of this, of course, has historically involved a bit of a trade-off: composers who wished to explore such realtime compositional possibilities were forced to limit themselves to electronic and virtual sound sources. For those who found it preferable to continue to work exclusively with acoustic instruments—whether for their complex yet identifiable spectra, their rich histories in music composition and performance, or the interpretative subtleties of human performers—computer algorithms offered an elaborate pre-compositional device, but nothing more.[1]

BRIDGING THE GAP

This chasm between algorithmic music realized electronically (where sophisticated manipulation of tempi, textural density, dynamics, orchestration, and form could be achieved during performance) and algorithmic music realized acoustically (where algorithmic techniques were only to be employed pre-compositionally to inscribe a fixed work) is something that has frustrated and fascinated me for years. As a student of algorithmic composition, I often wished that I could achieve the same enlarged sense of compositional possibility offered by electronically realized systems—including generativity, stochasticity, and performative plasticity—using traditional instruments and human performers.

This, it seemed, hinged upon a digital platform for realtime notation: a software-based score that could accept abstract musical information (such as rhythmic values, pitch data, and dynamics) as input and convert it into a readable measure of notation. The notational mechanism must also be continuously updatable: it must allow for a composer’s live data input to change the notation of subsequent measures during performance. It must here strike a balance between temporal interactivity for the composer and readability for the performer, since most performers are accustomed to reading at least a few notes ahead in the score. Lastly, the platform must be able to synchronize notational outputs for two or more performers, allowing an ensemble to be coordinated rhythmically.

Fortunately, technologies do now exist—some commercially available and others that can be realized as custom software—that satisfy each of these notational requirements.

I have chosen to develop work in Cycling ’74’s Max/MSP environment, for several reasons. First, Max supports realtime data input and output, which provides the possibility of transcending the merely pre-compositional use of algorithms. Second, two third-party notation objects —bach.score[2] and MaxScore[3]—have recently been developed for the Max environment, which allow for numerical data to be translated into traditional (as well as more experimental forms of) musical notation. For years, this remained a glaring inadequacy in Max, as native objects do not provide anything beyond the most basic notational support. Third, Max has several objects designed to facilitate communication among computers on a local network; although most of these objects are low-level in their implementation, they can be coaxed into a lightweight, low-latency, and relatively intelligent computer network with some elaboration.

REALTIME INTERACTIVE NOTATION: UNDER THE HOOD

Let’s take a look at the basic mechanics of interactive notation using the bach.score object instantiated in Max/MSP. (For those unfamiliar with the Max/MSP programming environment, I will attempt to sufficiently summarize/contextualize the operations involved so that this process can be understood in more general terms.) bach.score is a user-interface object that can be used to display and edit musical notation. While not quite as robust as commercial notation software such as Sibelius or Finale, it features many of the same operations: manual note entry with keyboard and mouse, clef and instrument name display, rhythmic and tuplet notation, accidentals and microtones, score text, MIDI playback, and more. However, bach.score’s most powerful feature is its ability to accept formatted text messages to control almost every aspect of its operation in realtime.

To take a basic example, if we wanted to display the first four notes of an ascending C major arpeggio as quarter notes in 4/4 (with quarter note = 60 BPM) in Sibelius, we would first have to set the tempo and time signature manually, then enter the pitches using the keyboard and mouse. With bach.score, we could simply send a line of text to accomplish all of this in a single message:

(( ((4 4) (60)) (1/4 (C4)) (1/4 (E4)) (1/4 (G4)) (1/4 (C5)) ))

example 1:

And if we wanted to display the first eight notes of an ascending C major scale as eighth notes:

(( ((4 4) (60)) (1/8 (C4)) (1/8 (D4)) (1/8 (E4)) (1/8 (F4)) (1/8 (G4)) (1/8 (A4)) (1/8 (B4)) (1/8 (C5)) ))

example 2:

Text strings are sent in a format called a Lisp-like linked list (llll, for short). This format uses nested brackets to express data hierarchically, in a branching tree-like structure. This turns out to be a powerful metaphor for expressing the hierarchy of a score, which bach.score organizes in the following way:

voices > measures > rhythmic durations > chords > notes/rests > note metadata (dynamics, etc.)

The question might be raised: why learn an arcane new text format and be forced to type long strings of hierarchically arranged numbers and brackets to achieve something that might be accomplished by an experienced Finale user in 20 seconds? The answer is that we now have a method of controlling a score algorithmically. The process of formatting messages for bach.score can be simplified by creating utility scripts that translate between the language of the composer (“ascending”; “subdivision”; “F major”) and that of the machine. This allows us to control increasingly abstract compositional properties in powerful ways.
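To make the idea of a utility script concrete, here is a small Python sketch that builds a one-measure llll message of the kind shown in examples 1 and 2. The flats-only pitch spelling and the fixed meter/tempo defaults are simplifications of my own, not bach conventions.

NOTE_NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def midi_to_name(midi):
    # Pitch name with octave (middle C = C4 = MIDI 60); flats only, for brevity.
    return f"{NOTE_NAMES[midi % 12]}{midi // 12 - 1}"

def measure_llll(pitches, duration="1/4", meter=(4, 4), tempo=60):
    # Build a one-measure llll message of the kind shown in examples 1 and 2.
    header = f"(({meter[0]} {meter[1]}) ({tempo}))"
    notes = " ".join(f"({duration} ({midi_to_name(p)}))" for p in pitches)
    return f"(( {header} {notes} ))"

print(measure_llll([60, 64, 67, 72]))
# (( ((4 4) (60)) (1/4 (C4)) (1/4 (E4)) (1/4 (G4)) (1/4 (C5)) ))

A fuller script would also handle rests, accidentals, ties, and per-note metadata, but the translation principle is the same.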

Let’s expand upon our arpeggio example above, and build an algorithm that allows us to change the arpeggio’s root note, the chord type (and corresponding key signature), the rhythmic subdivision used, and the arpeggio’s direction (ascending, descending, or random note order).

example 3:

Let’s add a second voice to create a simple canonic texture. The bottom voice adds semitonal transposition and rhythmic rotation from the first voice as variables.

example 4:

To add some rhythmic variety, we might also add a control that allows us to specify the probability of rest for each note. Finally, let’s add basic MIDI playback capabilities so we can audition the results as we modify musical parameters.

example 5:
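The Max patches in examples 3 through 5 are easiest to show on video, but their underlying logic can be sketched in a few lines of Python. The chord tables, parameter names, and rest-probability scheme below are illustrative assumptions rather than the patches themselves.

import random

CHORDS = {"major": [0, 4, 7, 12], "minor": [0, 3, 7, 12]}   # illustrative chord tables

def arpeggio(root=60, chord="major", direction="ascending"):
    notes = [root + interval for interval in CHORDS[chord]]
    if direction == "descending":
        notes.reverse()
    elif direction == "random":
        random.shuffle(notes)
    return notes

def canonic_voice(lead, transpose=-12, rotate=1, rest_probability=0.0):
    # Second voice: semitonal transposition plus rhythmic rotation of the lead,
    # with an optional per-note probability of rest (None marks a rest).
    rotated = lead[rotate:] + lead[:rotate]
    return [None if random.random() < rest_probability else note + transpose
            for note in rotated]

lead = arpeggio(62, "minor", "ascending")
follow = canonic_voice(lead, transpose=-7, rotate=2, rest_probability=0.25)
print(lead, follow)
# Each list would then be translated into llll messages (rest handling omitted
# from the helper above) and sent to bach.score for display and MIDI playback.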

While our one-measure canonic arpeggiator leaves a lot to be desired compositionally, it gives an indication of the sorts of processes that can be employed once we begin thinking algorithmically. (In the next post, we will explore more sophisticated examples of algorithmic harmony, voice-leading, and orchestration.) It is important to keep in mind that unlike similar operations for transposition, inversion, and rotation in a program like Sibelius, the functions we have created here will respond to realtime input. This means that our canonic function could be used to process incoming MIDI data from a keyboard or a pitch-tracked violin, creating a realtime accompaniment that is canonically related to the input stream.

PRACTICAL CONSIDERATIONS: RHYTHMIC COORDINATION AND REALTIME SIGHT-READING

Before going any further with our discussions of algorithmic compositional techniques, we should return to more practical considerations related to a realtime score’s performability. Even if we are able to generate satisfactory musical results using algorithmic processes, how will we display the notation to a group of musicians in a way that allows them to perform together in a coordinated manner? Is there a way to establish a musical pulse that can be synced across multiple computers/mobile devices? And if we display notation to performers precisely at the instant it is being generated, will they be able to react in time to perform the score accurately? Should we, instead, generate material in advance and provide a notational pre-display, so that an upcoming bar can be viewed before having to perform it? If so, how far in advance?

I will share my own solutions to these problems—and the thinking that led me to them—below. I should stress, however, that a multiplicity of answers is no doubt possible, each of which might lead to novel musical results.

I’ve addressed the question of basic rhythmic coordination by stealing a page from Sibelius’s/Finale’s book: a vertical cursor that steps through the score at the tempo indicated. By programming the cursor to advance according to a quantized rhythmic grid (usually either quarter or eighth note), one can visually indicate both the basic pulse and the current position in the score. While this initially seemed a perfectly effective and minimal solution, rehearsal and concert experience has indicated that it is good practice to also have a large numerical counter to display the current beat. (This is helpful for those 13/4 measures with 11 beats of rest.)
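A bare-bones Python sketch of the cursor-plus-counter idea; in practice the clock lives inside Max and redraws the notation display, so printing and time.sleep merely stand in for that here, and a real implementation would compensate for timer drift.

import time

def conductor(tempo_bpm=60, beats_per_bar=4, bars=2):
    # Step a cursor along a quantized quarter-note grid and show a large beat counter.
    beat_duration = 60.0 / tempo_bpm
    for bar in range(1, bars + 1):
        for beat in range(1, beats_per_bar + 1):
            print(f"bar {bar} | beat {beat}")   # stands in for moving the cursor
            time.sleep(beat_duration)

conductor(tempo_bpm=120, beats_per_bar=4, bars=1)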

example 6:

With a “conductor cursor” in place to indicate metric pulse and current score position, we turn to the question of how best to synchronize multiple devices (e.g. laptops, tablets) so that each musician’s cursor can be displayed at precisely the same position across devices. This is a critical question, as deviations in the range of a few milliseconds across devices can undermine an ensemble’s rhythmic precision and derail any collective sense of groove. In addition to synchronizing cursor positions, communication among devices will likely be needed to pipe score data (e.g. notes/rests, time signatures, dynamics, expression markings) from a central computer—where the master score is being generated and compiled—to performers’ devices as individual parts.

Max/MSP has several objects that provide communication across a network, including udpsend and udpreceive, jit.net.send and jit.net.recv, and a suite of Java classes that use the mxj object as a host—each of these has its advantages and drawbacks. The udpsend and udpreceive objects allow Max messages to be sent to another device on a network by specifying its IP address; they provide the fastest transfer speeds and are therefore perhaps the most commonly used. The downside is that UDP provides no error-checking: there is no guarantee that packets will reach their destination, or that they will arrive in the correct order. jit.net.send and jit.net.recv are very similar in their Max/MSP implementation, but use the TCP/IP transfer protocol, which does provide error-checking; the tradeoff is slightly slower delivery times. mxj-based objects provide useful functionality such as the ability to query one’s own IP address (net.local) and multicasting (net.multi.send and net.multi.recv), but require Java to be installed on performers’ machines—something which, experience has shown, cannot always be assumed.

I have chosen to use jit.net.send and jit.net.recv exclusively in all of my recent work. The slight tradeoff in speed is offset by the reliability they provide during performance. Udpsend and udpreceive might work flawlessly for 30 minutes and then drop a data packet, causing the conductor cursor to skip a beat or a blank measure to be unintentionally displayed. This is, of course, unacceptable in a critical performance situation. To counteract the slightly slower performance of jit.net.send and jit.net.recv (and to further increase network reliability), I have also chosen to use wired Ethernet connections between devices via a 16-port Ethernet switch.[4]
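For readers outside the Max world, the same reliability tradeoff can be seen in a few lines of Python’s standard socket library. The host, port, and message here are placeholders, and this is only an analogy to the behavior of the Max objects, not their implementation.

import socket

def send_score_data_tcp(host, port, message: bytes):
    # TCP (the transport behind jit.net.send/recv): ordered, error-checked delivery,
    # at the cost of slightly higher latency.
    with socket.create_connection((host, port)) as conn:
        conn.sendall(message)

def send_score_data_udp(host, port, message: bytes):
    # UDP (the transport behind udpsend/udpreceive): faster and connectionless,
    # but a packet can be dropped or reordered -- a skipped cursor beat on stage.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, (host, port))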

Lastly, we come to the question of how much notational pre-display to provide musicians for sight-reading purposes. We must bear in mind that the algorithmic paradigm makes possible an indeterminate compositional output, so it is entirely possible that musicians will be sight-reading music during performance that they have not previously seen or rehearsed together. Notational pre-display might provide musicians with information about the most efficient fingerings for the current measure, alert them to an upcoming change in playing technique or a cue from a fellow musician, or allow them to ration their attention more effectively over several measures. In fact, it is not uncommon for musicians to glance several systems ahead, or even quickly scan an entire page, to gather information about upcoming events or gain some sense of the musical composition as a whole. The drawback to providing an entire page of pre-generated material, from a composer’s point of view, is that it limits one’s ability to interact with a composition in realtime. If twenty measures of music have been pre-generated, for instance, and a composer wishes to suddenly alter the piece’s orchestration or dynamics, he/she must wait for those twenty measures to pass before the orchestrational or dynamic change takes effect. In this way, we can note an inherent tension between a performer’s desire to read ahead and a composer’s desire to exert realtime control over the score.

Since it was the very ability to exert realtime control over the score which attracted me to networked notation in the first place, I’ve typically opted to keep the notational pre-display to a bare minimum in my realtime works. I’ve found that a single measure of pre-display is usually a good compromise between realtime control for the composer and readability for the performer. (Providing the performer with one measure of pre-display does prohibit certain realtime compositional possibilities that are of interest to me, such as a looping function that allows the last x measures heard during performance to be repeated on a composer’s command.) Depending on tempo and musical material, less than a measure of pre-display might be feasible; this necessitates updating data in a measure as it is being performed, however, which runs the risk of being visually distracting to a performer.

An added benefit of limiting pre-display to one measure is that a performer need only see two measures at any given time: the current measure and the following measure. This has led to the development of what I call an “A/B” notational format, an endless repeat structure comprising two measures. Before the start of the piece, the first two measures are pre-generated and displayed. As the piece begins, the cursor moves through measure 1; when it reaches the first beat of measure 2, measure 3 is pre-generated and replaces measure 1. When the cursor reaches the first beat of measure 3, measure 4 is pre-generated and replaces measure 2, and so on. In this way, a performer can always see two full bars of music (the current bar and the following bar) at the downbeat of any given measure. This system also keeps the notational footprint small and consistent on a performer’s screen, allowing for their part to be zoomed to a comfortable size for reading, or for the inclusion of other instruments’ parts to facilitate ensemble coordination.
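In rough Python, the A/B replacement logic looks something like this; generate_measure is a hypothetical stand-in for whatever algorithm produces the next bar.

def generate_measure(n):
    return f"<measure {n}>"     # stand-in for an algorithmically generated bar

def slot_of(measure_number):
    return (measure_number - 1) % 2       # odd measures -> slot A, even -> slot B

# Before the piece starts, measures 1 and 2 are pre-generated and displayed.
display = {slot_of(1): generate_measure(1), slot_of(2): generate_measure(2)}

def on_downbeat(n):
    # At the downbeat of measure n (n >= 2), measure n + 1 is generated and
    # overwrites the slot of measure n - 1, which the cursor has just left behind.
    display[slot_of(n + 1)] = generate_measure(n + 1)

for n in range(2, 6):
    on_downbeat(n)
    print(f"downbeat of measure {n}: {display}")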

example 7:

SO IT’S POSSIBLE… NOW WHAT?

Given this realtime notational bridge from the realm of computation to the realm of instrumental performance, a whole new world of compositional possibilities begins to emerge. In addition to traditional notation, non-standard notational forms such as graphical, gestural, or text-based can all be incorporated into a realtime networked environment. Within the realm of traditional notation, composers can begin to explore non-fixed, performable approaches to orchestration, dynamics, harmony, and even spatialization in the context of an acoustic ensemble. Next week, we will look at some of these possibilities more closely, discussing a range of techniques for controlling higher-order compositional parameters, from the linear to the more abstract.



1. Notable exceptions to this include the use of mechanical devices and robotics to operate acoustic instruments through digital means (popular examples: Yamaha Disklavier, Pat Metheny’s Orchestrion Project, Squarepusher’s Music for Robots, etc.). The technique of score following—which uses audio analysis to correlate acoustic instruments’ input to a position in a pre-composed score—should perhaps also be mentioned here. Score following provides for the compositional integration of electronic sound sources and DSP into acoustic music performance; since it fundamentally concerns itself with a pre-composed score, however, it cannot be said to provide a truly interactive compositional platform.


2. Freely available through the bach project website.


3. Info and license available at the MaxScore website.


4. A wired Ethernet connection is not strictly necessary for all networked notation applications. If precise timing of events is not compositionally required, a higher-latency wireless network can yield perfectly acceptable results. Moreover, recent technologies such as Ableton Link make possible wireless rhythmic synchronization among networked devices, with impressive perceptual precision. Ableton Link does not, however, allow for the transfer of composer-defined data packets, an essential function for the master/slave data architecture employed in my own work. At the time of this writing, I have not found a wireless solution for transferring data packets that yields acceptable (or even consistent) rhythmic latencies for musical performance.

From the Machine: Computer Algorithms and Acoustic Music

The possibility of employing an algorithm to shape a piece of music, or certain aspects of a piece of music, is hardly new. If we define algorithmic composition broadly as “creating from a set of rules or instructions,” the technique is in some sense indistinguishable from musical composition itself. While composers prior to the 20th century were unlikely to have thought of their work in explicitly algorithmic terms, it is nonetheless possible to view aspects of their practice in precisely this way. From species counterpoint to 14th-century isorhythm, from fugue to serialization, Western music has made use of rule-based compositional techniques for centuries. It might even be argued that a period of musical practice can be roughly defined by the musical parameters it derives axiomatically and the parameters left open to “taste,” serendipity, improvisation, or chance.

A relatively recent development in rule-based composition, however, is the availability of raw computational power capable of millions of calculations per second and its application to compositional decision-making. If a compositional process can be broken down into a specific enough list of instructions, a computer can likely perform them—and usually at speeds fast enough to appear instantaneous to a human observer. A computer algorithm is additionally capable of embedding non-deterministic operations such as chance procedures (using pseudo-random number generators), probability distributions (randomness weighted toward certain outcomes), and realtime data input into its compositional hierarchy. Thus, any musical parameter—e.g. harmony, form, dynamics, or orchestration—can be controlled in a number of meaningful ways: explicitly pre-defined, generated according to a deterministic set of rules (conditional), chosen randomly (aleatoric), chosen according to weighted probability tables (probabilistic), or continuously controlled in real time (improvisational). This new paradigm allows one to conceive of the nature of composition itself as a higher-order task, one requiring adjudication among ways of choosing for each musically relevant datum.
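The same point in miniature, as a Python sketch: a single parameter (here, the dynamic of the next note) handled in each of the five ways just listed. The thresholds, weights, and dynamic values are arbitrary illustrations.

import random

previous_pitch = 74   # hypothetical context for the conditional rule

explicit      = "mf"                                                    # pre-defined
conditional   = "f" if previous_pitch > 72 else "p"                     # deterministic rule
aleatoric     = random.choice(["pp", "mp", "f", "ff"])                  # chance procedure
probabilistic = random.choices(["pp", "mp", "f", "ff"],
                               weights=[0.1, 0.5, 0.3, 0.1])[0]         # weighted probability
def improvisational(fader_value):                                       # realtime control (0-127)
    return ["pp", "mp", "f", "ff"][min(fader_value // 32, 3)]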

Our focus here will be the application of computers toward explicitly organizational, non-sonic ends.

Let us here provisionally distinguish between the use of computers to generate/process sound and to generate/process compositional data. While, it is true, computers do not themselves make such distinctions, doing so will allow us to bracket questions of digital sound production (synthesis or playback) and digital audio processing (DSP) for the time being. While there is little doubt that digital synthesis, sampling, digital audio processing, and non-linear editing have had—and will continue to have—a profound influence on music production and performance, it is my sense that these areas have tended to dominate discussions of the musical uses of computers, overshadowing the ways in which computation can be applied to questions of compositional structure itself. Our focus here will therefore be the application of computers toward explicitly organizational, non-sonic ends; we will be satisfied leaving sound production to traditional acoustic instruments and human performers. (This, of course, requires an effective means of translating algorithmic data into an intelligible musical notation, a topic which will be addressed at length in next week’s post.)

Let us further distinguish between two compositional applications of algorithms: pre-compositional use and performance use. Most currently available and historical implementations of compositional data processing are of the former type: they are designed to aid in an otherwise conventional process of composition, where musical data might be generated or modified algorithmically, but is ultimately assembled into a fixed work by hand, in advance of performance.[1]

A commonplace pre-compositional use of data processing might be the calculation of a musical motif’s retrograde inversion in commercial notation software, or the transformation of a MIDI clip in a digital audio workstation using operations such as transposition, rhythmic augmentation/diminution, or randomization of pitch or note velocity. On the more elaborate end of the spectrum, one might encounter algorithms that translate planets’ orbits into rhythmic relationships, prime numbers into harmonic sequences, probability tables into melodic content, or pixel data from a video stream into musical dynamics. Given the temporal disjunction between the run time of the algorithm and the subsequent performance of the work, a composer can audition such operations in advance, selecting, discarding, editing, re-arranging, or subjecting materials to further processing until an acceptable result is achieved. Pre-compositional algorithms are thus a useful tool when a fixed, compositionally determinate output is desired: the algorithm is run, the results are accepted or rejected, and a finished result is inscribed—all prior to performance.[2]
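A one-function Python sketch of the first of those pre-compositional operations, the retrograde inversion of a motif; inverting around the motif’s opening pitch is one common convention, chosen here for brevity.

def retrograde_inversion(motif):
    # Invert each pitch around the motif's opening pitch, then reverse the result.
    axis = motif[0]
    inverted = [axis - (pitch - axis) for pitch in motif]
    return list(reversed(inverted))

print(retrograde_inversion([60, 62, 65, 64]))   # -> [56, 55, 58, 60]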

It is now possible for a composer to build performative or interactive variables into the structure of a notated piece, allowing for the modification of almost any imaginable musical attribute during performance.

With the advent of realtime computing and modern networking technologies, however, new possibilities can be imagined beyond the realm of algorithmic pre-composition. It is now possible for a composer to build performative or interactive variables into the structure of a notated piece, allowing for the modification of almost any imaginable musical attribute during performance. A composer might trigger sections of a musical composition in non-linear fashion, use faders to control dynamic relationships between instruments, or directly enter musical information (e.g. pitches or rhythms) that can be incorporated into the algorithmic process on the fly. Such techniques have, of course, been common performance practice in electronic music for decades; given the possibility of an adequate realtime notational mechanism, they might become similarly ubiquitous in notated acoustic composition in the coming years.

Besides improvisational flexibility, performance use of compositional algorithms offers composers the ability to render aleatoric and probabilistic elements anew during each performance. Rather than such variables being frozen into fixed form during pre-composition, they will be allowed to retain their fundamentally indeterminate nature, producing musical results that vary with each realization. By precisely controlling the range, position, and function of random variables, composers can define sophisticated hierarchies of determinacy and indeterminacy in ways that would be unimaginable to early pioneers of aleatoric or indeterminate composition.

Thus, in addition to strictly pre-compositional uses of algorithms, a composer’s live data input can work in concert with conditional, aleatoric, probabilistic, and pre-composed materials to produce what might be called a “realtime composition” or an “interactive score.”

We may, in fact, be seeing the beginnings of a new musical era, one in which pre-composition, generativity, indeterminacy, and improvisation are able to interact in heretofore unimaginable ways. Instances in which composers sit alongside a chamber group or orchestra during performance, modifying elements of a piece such as dynamics, form, and tempo in real time via networked devices, may become commonplace. Intelligent orchestration algorithms equipped with transcription capabilities might allow a pianist to improvise on a MIDI-enabled keyboard and have the results realized by a string quartet in (near) real time. A musical passage might be constructed by composing a fixed melody along with a probabilistic table of possible harmonic relationships (or, conversely, by composing a fixed harmonic progression with variable voice leading and orchestration), creating works that blur the lines between indeterminacy and fixity, composition and improvisation, idea and realization. Timbral or dynamic aspects of a work might be adjusted during rehearsal in response to the specific acoustic character of a performance space. Formal features, such as the order of large-scale sections, might be modified by a composer mid-performance according to audience reaction or whim.

While the possibilities are no doubt vast, the project of implementing a coherent, musically satisfying realtime algorithmic work is still a formidable one: many basic technological pieces remain missing or underdeveloped (requiring a good deal of programming savvy on a composer/musician’s part), the practical requirements for performance and notation are not yet standardized, and even basic definitions and distinctions remain to be theorized.

In this four-part series, I will present a variety of approaches to employing computation in the acoustic domain, drawn both from my own work as well as that of fellow composer/performers. Along the way, I will address specific musical and technological questions I’ve encountered, such as strategies for networked realtime notation, algorithmic harmony and voice leading, rule-based orchestration, and more. While I have begun to explore these compositional possibilities only recently, and am surely only scratching the surface of what is possible, I have been fascinated and encouraged by the early results. It is my hope that these articles might be a springboard for conversation and future experimentation for those who are investigating—or considering investigating—this promising new musical terrain.



1. One might similarly describe a piece of music such as John Cage’s Music of Changes, or the wall drawings of visual artist Sol LeWitt, as works based on pre-compositional (albeit non-computer-based) algorithms.


2. Even works such as Morton Feldman’s graph pieces can be said to be pre-compositionally determinate in their formal dimension: while they leave freedom for a performer to choose pitches from a specified register, their structure and pacing is fixed and cannot be altered during performance.


Joseph Branciforte

Joseph Branciforte is a composer, multi-instrumentalist, and recording/mixing engineer based out of New York City. As composer, he has developed a unique process of realtime generative composition for instrumental ensembles, using networked laptops and custom software to create an “interactive score” that can be continuously updated during performance. As producer/engineer, Branciforte has lent his sonic touch to over 150 albums, working with such artists as Ben Monder, Vijay Iyer, Tim Berne, Kurt Rosenwinkel, Steve Lehman, Nels Cline, Marc Ribot, Mary Halvorson, Florent Ghys, and Son Lux along the way. His production/engineering work can be heard on ECM, Sunnyside Records, Cantaloupe Music, Pi Recordings, and New Amsterdam. He is the co-leader and drummer of “garage-chamber” ensemble The Cellar and Point, whose debut album Ambit was named one of the Top 10 Albums of 2014 by WNYC’s New Sounds and praised by outlets from the BBC to All Music Guide. His current musical efforts include a collaborative chamber project with composer Kenneth Kirschner and an electronic duo with vocalist Theo Bleckmann.

John King: It All Becomes Music

John King standing against a wall covered with a few branches.

About a month ago I was surfing through my Facebook news feed. Being afraid of rabbit holes, I tend never to do this very attentively or very frequently, but nevertheless something wound up catching my eye. It was just two lines, not even parsed into a proper sentence, about a recent performance. Even though I see tons of these every day, this one probably stood out because it included an image from the score. I was immediately drawn to it because there were no bar lines and it was just a single vocal part that was almost entirely in monotone. Then I noticed the post was from the eclectic composer John King and it had a link to his website. I loved several string quartets by King that I heard Ethel perform over a decade ago, both live and on recordings, and I also remembered an earlier, rather bizarre “double opera” that he had co-written with Polish composer Krzysztof Knittel which premiered at the Warsaw Autumn in 1999. But I had never before seen any of his scores. So I took a deep breath and clicked on the link to his website.

A screen capture of a John king Facebook post featuring an excerpt from a musical score preceded by the following text: "recent premiere of "The Park" by Loadbang....this excerpt entitled "larry" named after one of the denizens of Tompkins Square Park...]"

What I found there astonished me for a variety of reasons. I knew that he embraced a wide variety of styles—from Cagean indeterminacy and post-minimalism to rock, blues, and free improvisation. But I was hardly prepared for all the other kinds of music I found there: canons for chorus, orchestral pieces, a North Indian classical raga exposition, Baroque imitations, and numerous experimental operas. That was just the tip of the iceberg. Not only could I not believe how much music he had written—for example, 14 pieces totaling some six hours of music just last year—but also how open he was about all of it, including a piccolo concerto he composed when he was in high school. Everything was there with no hierarchy other than chronology. It was one of the most interesting composer websites I had ever visited. I had to talk to him for NewMusicBox!

Normally I prepare for these talks by attempting to listen to every single related scrap of music I can get my hands on. But there was no way I could do this with John King’s work; there was just too much of it. What was the secret to his being so immensely prolific and also so non-judgmental about it all? What could the rest of us learn from his equanimity?

I’m still not sure I got conclusive answers to these particular questions after spending a couple of hours with him at his home in the East Village, right across the street from Tompkins Square Park, which for him is a muse.  In fact, if anything, I walked away with even more questions. But I did leave there feeling inspired and more excited than I have in quite some time about the creative process. Our conversation was periodically drowned out by construction taking place in the neighborhood. Somehow that seemed appropriate though.  Initially Molly and I deliberated about how to proceed and what we might do to minimize the disturbances. King, however, was completely unfazed by any of these additional unwanted sounds. For him it just added another sonic element, one that could potentially lead somewhere that was interesting. He told us stories about how birds chirping outside his window became part of one of his compositions. Occasionally he’d even point out moments when a hammer hit was synchronous with one of his syllables. Being so open, even to what others would perceive as noise or interruption, is perhaps as open-minded as you can be musically.


A conversation at John King’s apartment in New York City
June 15, 2016—10:00 a.m.
Video presentations and photography by Molly Sheridan
Transcribed by Julia Lu

Frank J. Oteri: Might it be too noisy for us to talk?

John King: I think we should just treat it as New York.

FJO:  True, and considering that indeterminate sound is actually an important part of your aesthetic, the random construction sounds might actually be appropriate.

JK:  I remember John Cage speaking about car alarms and store alarms in the ‘70s.  That store alarm always went off on a Friday evening at six, and it would be going all weekend long because the people wouldn’t come back.  I remember Cage saying that for a while it sort of got to him, but then the way he managed it was to imagine where the sound was coming from. He just thought about the spatialization of that alarm—it’s on, say, 17th Street, between Fifth and Sixth Avenues.  Then the unnervingness of it would just disappear for him.  Sometimes I try to do that.  Sometimes I’m successful.

FJO: For me, when I can hear slight variations over time, it goes from being this constant, annoying thing to being music.

JK:  Right, you can all of a sudden concentrate on the overtones or the inconsistent nature of the pitch.  Yeah.  And it becomes music. We all know and love The Stone, but it’s on a very busy corner and some people want there to be complete silence before the beginning of a piece or before the beginning of a concert.  I had a residency there last year and some of the greatest moments in my own music were when the string quartet faded out and the sound of a car faded up and then the car faded out and the string quartet kept going.  So, for the experience of the music and the environment in that particular moment, I think it’s fine.  For recordings probably not, because you want those to be a little bit more indicative of the piece.  When people listen to that recording and a fire engine goes by, that becomes part of the sound world.  But I don’t think of it as a distraction; I think of it as an addition.

FJO: But since a lot of your scores involve indeterminate elements, there are often elements of surprise to the realization of what you’ve put on the page.  So when you say that you want a recording to be more indicative of the piece, what exactly is the piece?

“I’ve been fortunate to have some pieces done many times, so we can hear many, many versions of the same piece. It’s like looking at a globe or a sphere and just turning it and turning it or pulling it, like it’s taffy; you see it’s still the thing, but you’re seeing all these different possibilities within it.”

JK:  For quite a while at least some elements within almost every single thing that I’ve written have been chance determined.  That to me is opening the door to all kinds of experiences. I’ve been fortunate to have some pieces done many times, so we can hear many, many versions of the same piece. It’s like looking at a globe or a sphere and just turning it and turning it or pulling it, like it’s taffy; you see it’s still the thing, but you’re seeing all these different possibilities within it.  The hammer downstairs just hit on one of my syllables, so that makes it beautiful for that second, that accident, that simultaneity. It’s all beautiful.

FJO: The thing that finally prompted me to talk to you was total serendipity. I was scrolling through my Facebook feed and there was a post mentioning some piece of yours that included an image from the score. It looked interesting, so I followed the link which took me to your website. When I got there, I was floored by how much music there is on it and how much of it was created in a relatively short amount of time.  I’ve followed your work for years, but I had no idea how much stuff you’ve done.  Last year you wrote 14 different pieces which total over six hours of music. That’s an incredible amount of work.

JK:  Well, it was a productive year, I guess. A lot of that music was for dance companies. I have what I consider to be the great fortune and opportunity of working with choreographers, which began in college at CalArts, the California Institute of the Arts, where I graduated in 1976.  Then I moved to New York, but I kept up relationships with some of those people and I also formed new relationships. Then I sent a cassette to John Cage.  He wanted to come over and listen to some of my stuff and that led to a commission from Merce Cunningham, and then that led to an almost 25-year association while I worked with other choreographers, too.  I’ve worked a lot with a choreographer named Kevin O’Day.  Each time someone says, “Do you want to write a piece for this choreography that I’m working on?”—and a lot of them are evening-length pieces—I go, “Well, sure.”  Then I say what I’m interested in. “I want to write something for choir and string quartet.”  “Okay.  Great.  Why don’t you do that? We’ll work with a young people’s choir in Mannheim, Germany, and we’ll get students from the Hochschule and have them be the string quartet.”

Then live electronics. I have a long-evolving electronics scenario that works through chance, but for every piece I can go in and change and manipulate little things, and then it becomes the electronics environment for that particular piece. Then some other piece will come along and I’ll continue the evolution of that particular way of manipulating, processing, and locating sound.

I’ve got these other ideas for this other long series of pieces called Free Palestine that I started in 2013. I’m still writing them.  I get an idea and for some reason the string quartet is the ensemble that I go to for fulfilling an idea.  I’ll have an idea for a compositional structure or a compositional motive or what I sometimes call an epiphany, and it somehow crystallizes into the string quartet.  So I write a lot of string quartets for that reason.

FJO: Part of that I’m sure grows out of your being a string player.

JK: Yeah, I’m sure.  And I use improvisation. I played violin for a while, very poorly, and now I play viola very poorly.  But I play the instruments, so the physicality of working out some things, getting the fingerings, feeling how the bow works—I do have that visceral, physical connection in parts of the creative process so then I can go, “Oh man, this feels great.”  And then it goes right into the piece.

FJO: So pieces evolve from playing around with ideas physically before you get to the page?

JK:  Well, that’s the way it works sometimes. Then other times I’m just walking through Tompkins Square Park, which is a great source of inspiration.  I just walk and often how things are put together comes from free, mindless thinking.  I’ve been working with a way of organizing time, which I call time vectors. I used that in the piece for six pianos that was done last year at Knockdown Center called Piano Vectors—six Steinway Ds in 40,000 square feet of space. I had this idea of how to organize them temporally and that’s where I began. Not a note was written.  I went through the whole thinking process of how it was to work with just time, like how to fill the time, and then from there it got more crystallized and I got to the actual notes for the piano and how it was going to be put onto the page.  Most of that was chance determined, then some of it was also a kind of physicality at the piano, with me playing.

John King leaning on a fence outside Tompkins Square Park.

FJO: This compositional process seems somewhat reminiscent of the micro-macrocosmic rhythmic structures that John Cage was composing with before he started using chance and which, in fact, led him directly to compose using chance: having a larger-scale structure of the piece in place before there are any specific notes.

JK:  Right.  The pre-composition or meta-composition, the composition before the composition, the overriding way things are organized.

FJO: So, maybe this is a silly question, but with a piece that lasts a relatively long amount of time—let’s say an hour—how long does it take to compose it?

JK:  Well, sometimes it doesn’t take very long at all.  It can take a couple of weeks or a month. Piano Vectors was the first piece that I fully realized within these time structures that I call vectors.  Then from there, there was a series of string quartets that were written, and then I also thought, “Well I’m going to write a string orchestra piece.”  Then I wrote a brass ensemble piece, because I was thinking about that, then a piece for percussion quartet. It’s like what Cage used to do with Fontana Mix. It can be done by itself, with Aria, or with Concert for Piano and Orchestra, because the system that he used to create one was the system he used to create all of them, so why can’t those things be played simultaneously?  The piece could be an hour long or 25 minutes long or an hour and a half long.  All those things are able to be accordionated—stretched or compressed—yet the structure, which to me is the overriding thing, is maintained.

FJO: Certainly the most extreme example of this that I can think of is the performance of Cage’s piece Organ2/ASLSP (As SLow aS Possible) in Halberstadt, Germany.

JK:  They’re still doing it…

FJO: They’re going to be doing it for 639 years.  But obviously he didn’t spend more than six centuries composing it.  Theoretically you could have a structure for something that lasts six hours, but maybe it took you only an hour to work it out.  Is it possible for the process of creating one of your pieces to actually be shorter than the realization of it?

“It’s surprising what happens when the mind has to get ideas together and you have to have something within a deadline. I can’t be like, ‘Is it a b-flat or a c-natural?’ All those things are gone—all those kind of self-doubts, self-criticisms.”

JK:  I don’t know if I’ve ever experienced that.  I taught at Dartmouth.  I took over Larry Polansky’s composition seminar for one semester.  I did these things with the graduate students that I called lightning composition.  If you play chess, you know what lightning chess is—it’s super fast.  You go with what you know, whatever your experience is with chess.  I had them do that with compositions, just as a way to kick start some ideas.  I gave them 15 minutes to come up with a piece.  I did it myself, too.  I never like giving students anything that I don’t participate in myself.  It’s surprising what happens when the mind has to get ideas together and you have to have something within a deadline.  A lot of people say, “Uh oh, I’ve got this deadline.” But I think, “Wow, I’ve got this deadline.”  You know, it’s got to be there.  I’ve got until Monday morning to finish it.  Great.  Because that means that I can’t be like, “Is it a b-flat or a c-natural?” All those things are gone—all those kind of self-doubts, self-criticisms. You just have to go with what you believe is coming from you as purely and as transparently as possible, and just do it.

FJO: So then do pieces ever get revised?

JK:  Sometimes they do.  The more notated the piece is, the more likely it is to be revised.  Some of the Free Palestine pieces were very open in terms of their interpretation. Then when we got into rehearsing them and then finally recording them, I had to do more arranging to make sure that everything worked.  So I took away some of the openness, but that was more pragmatic. I don’t think I’ll edit them.  But sometimes pieces change.

FJO: So, 14 pieces last year.  Six hours of music.  A productive year.  But I was just using 2015 as an example. It seems like you’ve been almost equally prolific every year for at least the past decade. How much time do you spend composing music in a given week?

JK:  Well, if I’m not traveling and don’t have something else going on, if it’s a week that’s just more or less a normal week and I’ve been commissioned to do something, I would guess between six and eight hours a day, sometimes more and sometimes a little bit less.  But I don’t divide the week into five days with two days.  I divide the week into seven days.

The upright piano in John King's apartment which has books of scores by Thelonious Monk and Charles Mingus.

FJO: And, in terms of inspiration, you mentioned to me before we were on camera that the piano you have here is a little bit beat up which could be a good thing—it could take you out of pre-conceptions about what a piano is supposed to sound like. I see a laptop over there and I saw some recording equipment in another room. So I imagine that this is your composition studio as well as your home; this is where you create your music.

“I don’t divide the week into five days with two days. I divide the week into seven days.”

JK:  Yeah.  There’s a little, mini-studio that I use, but even that. I remember—I think it was last year or it might have been the year before—I worked on some poetry of Wang Wei, who’s a Tang-era poet, a really beautiful poet, and he was also an artist and a kind of a political consultant to various people.  So I’m in there working on the piece, and it was eventually going to be for soprano and then myself on viola and Robert Dick on flute, and with live electronics.  But I’m in there working on it, I think it was April, and there were all these birds singing back there.  So I just threw out a microphone and grabbed some of the birds singing and tried to bring that into the piece. And that did become part of the piece.  Then when I was editing the piece, I remember wanting to make sure the bird sounds were there, but then the bird sounds were outside, too, and also inside what I was editing, so that was an interesting process of hearing what inspired something and then hearing where it ended up.

FJO: You’re able to create amid construction noise and birds singing—that’s all potential compositional fodder. So do you ever go off to artist colonies?

JK:  Well, I’ve had the good fortune of being at three residencies this past year—in Florida, Venice, and Bellagio, Italy.  That focuses you there for a certain amount of time and focuses the work that you can do there.  But New York is very inspiring, too, in terms of walking through the park.  I hear great things both from the park and what’s buzzing around inside.  I think both are great experiences.

A window in John King's apartment.

FJO: I’d like to take it all the way back to the past. Your website is an incredible time portal and archive of almost everything you’ve ever done.  I couldn’t believe it!

JK:  1972 is, I think, the earliest.

FJO: You included a piccolo concerto that you wrote when you were in high school. You included an image from a page of your manuscript.

JK:  And a very poor recording made on a cassette player.  Remember those cassette players that had one red button? It was my mother’s cassette player. I can still remember the piece. It’s in a very rudimentary baroque style.

FJO: And then there was this six-minute piece for guitar and piano that survives only on the recording from, I think, 1973.  I’m trying to remember.  It was hard for me to keep track of it all.  You might finally be the person who defeated me.  I always like to listen to everything somebody does before I talk to that person.  But with you I couldn’t possibly do it.  I would have wound up spending the rest of my life just listening to your music.

JK:  I’m so grateful and fortunate. Tony Kramer, a friend of mine from Philadelphia, looked at my old website and said, “You’ve got to get this organized and get it together.” And he helped support that new website.  It took many, many months.  I went back to reel-to-reel tapes and cassettes.  I digitized it all.  I remember putting a cassette into the machine.  I hit play and I didn’t know if the whole thing would come off the spindle—that was the age of some of these recordings!

FJO: But to me the most interesting thing about it is that you decided to include all this stuff, even something you wrote in high school.  You still have a sense of it being you and you’re O.K. with it being out in the world representing you. And yet, there are some things that appear to be missing.  You posted a String Quartet No. 2, but there isn’t a No. 1.

JK:  There’s also a String Quartet No. 3.  I stopped numbering after three.

FJO: Right.  But what happened to No. 1?

JK:  I do have the score somewhere.  It was written in what they call a gap year now, between high school and my first year of college.  I was not in school, but I was studying with a composition guy in Minneapolis.  He introduced me to Lutoslawski. I used to take people I liked and treat them as models.  I would write something that was sort of in that style to say this is what I liked and this is what I didn’t like.  Then I’d retain what felt like my own voice.  So that first string quartet was such a piece. It was written on manila manuscript in pencil and I never got a string quartet to play it.  But I still have it.

FJO: But you haven’t put a thing about it on your website. Why did that piece get left out, when you were open to everything else?

JK: I tried to find pieces that had some audio. I don’t think there’s anything there that doesn’t have either audio or video.  But I guess I should maybe scan it or something.

FJO: Now in terms of having it all out there, you include recordings and a page from the score. You don’t put up full scores, which means that people have to contact you to get the materials if they’re interested.

JK: Yeah.

FJO: So do people contact you about some of these older pieces?  Has having this resource given you this opportunity?

JK: Well, to a certain extent.  I do sometimes get string quartets that want to get the score for this piece called HardWood which, again, began as a piece for the Pennsylvania Ballet for Kevin O’Day’s choreography.  That was at the time that ETHEL was forming, and they performed it as a piece for the ballet.  Then they really liked three of the movements, so I said you can treat those like a concert version because it was, I think, a 25- or 30-minute-long piece.  They made it into a 15-minute suite. And that piece is the one that most people contact me about because it’s got this blues movement that’s got some really driving stuff in it.

FJO: What if someone came to you and asked to do that piccolo concerto?

JK: Wow, all that stuff that’s on paper with my pencil looks like I was drawing these big fat notes.  And the pencil is kind of smeared a little bit. I was so into Bach and all this stuff at the time.  I don’t know who would want to play that piccolo concerto. But if somebody did, I’d put it together.  I have the score. I bound it with twine.

FJO: One of the things that I thought was so sweet—it’s not as old as those pieces, but it goes back quite a ways—is a piano sonata after Mozart that you wrote for your mother’s 70th birthday.

JK: Oh yeah.

FJO: Once again, it’s totally unlike any other work of yours that I know.  But you put it up along with everything else anyway.

JK: I also wrote an adagio for my father—I think for his 80th birthday.  He really likes Wagner and the high Romantics. It’s not really Wagnerian, but it’s in that world.  It’s for a string orchestra, but I didn’t have the strings, so it’s just a sampler version of the string orchestra piece.  My mother used to tell me that they would listen to it at top volume.  Yeah.  Why not?

FJO: So you grew up in a household with parents who appreciated classical music.

JK: Yeah, my mother was a pianist, and we usually heard her play just at Christmas time, because she would play Christmas carols.  I played guitar in a rock band and they weren’t too happy about that, for many reasons.  But then I took up violin and started playing rudimentary things.  My mom and I played duets together and that was really fun.  My father loved to listen to music.  They had season tickets to the St. Paul Chamber Orchestra and the Minnesota Orchestra, and they were members of the Walker Art Center and the Guthrie Theater.  So those were the places where I got my initial [exposure]. I remember seeing a Bertolt Brecht play for the first time—I was probably 14 or 15—at the Walker.  And I was just so blown away.  It was The Resistible Rise of Arturo Ui. It had projections and went slam back and forth between 1930s Chicago and Nazi Germany, and I was this kid going, “Wow.  This is so cool.”  And I saw touring operas at this place at the University of Minnesota called the Northrop Auditorium.  Madame Butterfly was my first opera.

John King's kitchen. A table with flowers and some bananas, a bookcase, and a wall filled with framed photographs.

FJO: Were you writing your own things yet at this point?

JK: I was.  When I first started playing violin, I had a friend in high school who played viola.  And I was studying counterpoint.  I wrote all these canons, because canons were these cool things that if you just kind of did them and followed all the rules—that my teacher was always correcting me about—you had a little composition. So I wrote lots of canons for her and me.  In high school, I was in a free education program where you could choose your own classes.  Those were the days of Summerhill.  It was an educational system out of England where kids were given the opportunity to make their own curriculum.  You decide what you want to do with your time, and so I studied violin, piano, and counterpoint.  And I was in a rock band, so that was part of my curriculum.  I was also reading Plato and studying Chinese history, but all on my own.  I just decided to do those things.

During that time, I also helped organize the talent show. So the rock band played and I played these little funny canons for violin and viola. There were people that were there studying tap dance. And I wrote some stuff for brass ensemble.  I was getting into Stravinsky, too.  I was experimenting with polytonality. The band teacher hated my music.  He would make fun of it in front of the band.  He would come over to the piano and play two chords that were meant to be played together.  And he would bang on them and say, “This is how you’re supposed to sound.  This is how he wants you to sound.  Isn’t that pretty?  Isn’t this nice?”  And of course, he got a laugh from everybody.  But I said, “Yeah, that’s how I wanted this.”

FJO: Good for you.  I love how some of these early crystallizing moments stayed with you. Just a few years ago, you wrote this piece for chorus that’s a three-part canon which was totally breaking rules and, in so doing, you created these wonderful textures. It’s canon your way.  And that can be traced all the way back to those violin-viola duets.

JK:  Yeah.  It’s still hard for me to write parallel fifths.  There’s a big feeling of freedom to have parallel fifths or parallel octaves or things like that, because all that stuff was driven out of me.  It was counterpoint from the 1600s on, all those rules.  But yeah, those structural things like canons or how to unify a piece of music, it’s still there.

FJO: One thing that I find that’s so interesting about your story is that you were immersed in this world of playing violin and viola and in string quartets.  But you also had another foot in this world of the electric guitar and playing in a rock band.  Of course, they’re not really separate worlds.  And in the music you would later come to write, they definitely aren’t separate worlds. There are passages in string quartets of yours that sound like deep Delta blues and even hard rock.  Then there are things with electric guitar that almost sound baroque.  I came across a piece of yours called Dance Piece that sounds like square dance music, but it was done with all electronic instruments—electric guitar and synthesizers. It’s totally out of context for those instruments, yet it totally works.  So there are no walls that compartmentalize music into different genres for you. It’s all one big continuum; even within a piece, it can suddenly go from one thing to another.

“My mother liked me playing violin, but she didn’t like me playing Jimi Hendrix.”

JK: In high school, I was playing in two or three bands simultaneously.  I was getting as much playing as I possibly could.  Chicago blues was what gave me my inspiration.  Jimi Hendrix and Eric Clapton were my big idols, so I was learning their stuff and playing their stuff.  My mother liked me playing violin, but she didn’t like me playing Jimi Hendrix.  She tried to ban Jimi Hendrix in the household, but I said no.  She had heard these things about what he did and what he stood for and all that. But I was not copying that; I was just listening to the music and playing in bands.  Later on I played on and off in blues bands here in New York for years and years.  It was that period in the late ‘70s/early ‘80s when the electric guitar also could become an instrument—as were lots of jazz and rock instruments—for pure improvisation, free noise, and noise that was mixed with all sorts of other elements from the universe.  It was not just about one thing. Let’s put everything together.  Let’s have there be a continuum where there are no walls, no borders. One thing just flows to the next as quickly or as slowly as you want to make them.

Another recent piece that’s on the website that’s just for guitars is Requiem for Eric Garner. I had discovered Erik Satie, probably through Cage.  Then I found these pieces that had no bar lines—Ogives.  So immediately I loved it—1880s and no bar lines!  And then I read that ogives are things in Notre Dame [Cathedral], the kind of arcs that were used.  Satie went in there and just got inspired by l’Ecole de Notre Dame composers and he wrote this thing.  And it’s just so beautiful.  If I transpose it a little bit, I’ve got it all on the guitar.  So I learned those, and incorporated that. It goes from Ogives, then 11th century, then back to root pedals and strangeness with the guitar.  It’s all lots of fun.

The sun reflected in John King's glasses.

FJO: I have a notation question regarding your electric guitar music.  The tradition of writing for string quartets is an old tradition, and it is very clearly and very precisely notated, down to all the articulations and bowings. You can break that down in various ways and open it up, and you get all sorts of other things.  But with electric guitar, there are elements of performance practice that notation really hasn’t caught up with, like settings for amps and pedals that are so individualized. All the great players have a very personalized sound on the instrument. If you want someone to recreate your sound, there is a great deal of information you’d need to convey that isn’t part of standard musical notation.  How much of an issue is that for you?

JK:  Well, I think that if any guitar player were to pick something up, I think they’d just have to have the recording and take it from there.  I have another piece called White Buffalo Calf Woman Blues. I think the recording is up on the website.  I got an email from a guitar player in Italy who said he wanted to play that piece and did I have the music for it?  Well, I didn’t. I didn’t have the sheet music for it because it was kind of an improvised piece.  But then I said, “If somebody wants to play it, I’ll put it together.”  And you know, it can be put together. Some things were written out and then there were some areas that were improvisational.  But something like tone—guitar players are so particular about that, so to notate it would be like telling the guitar player to throw that away. So I wouldn’t want to be too precise with that kind of stuff; I would encourage the player to change the settings.  I have this kind of guitar, I have this kind of amp, and I have these kinds of pedals.  Maybe I’ll try this version of it.  If then out from that someone wants to interpret it various ways, I think that’s just a great thing.

But when I do string quartet music or orchestral music, I try to really go through and make sure that everything is correct about the bowings and things like that.  But then you go to rehearsals, and you hear the string players go, “Let’s take these bowings and let’s not take those bowings.” If they feel like they can get the grit, the beauty, or whatever the musicality is that they find in it with a different bowing, I’m fine with it.  I’m not really into saying, “This is the way it has to be.”  I’m not that kind of guy.

“I’m not really into saying, ‘This is the way it has to be.’ I’m not that kind of guy.”

FJO: With an orchestra, the more precisely something is notated, the less rehearsal time it requires.  As soon as you give people choices, they have to take time to debate what they’re going to do in terms of those choices. Because they’re on a clock, they’re forced into certain kinds of music-making paradigms.

JK: That’s exactly right.

FJO: While you have written for orchestra, it’s not a ton of what you do.  I imagine that’s probably because you prefer for there to be more freedom with the players.

JK: This is again something that is on the composer.  What kind of freedom do you want, and how much time are they going to have to digest it and to be able to understand it and do something with it?  That means the notation has to be really clear.  You can’t waffle at all about things and you have to be maybe like, “I want them to do this, this, and this, but maybe I should just have them do this and this.”  Go into those things, and then they’ll get it more precisely.  With these things that I call time vectors, I’ll try to explain it to the musicians and they’ll play through it once or twice. Then I’d say, “That one thing that you did, you’re not understanding what I meant.”  A clarification comes, and then they get it.  But you’re right, it’s about time.  It’s about being on the clock. If it’s an orchestra, how many orchestras in colleges or conservatories work with a digital clock? How much experience do they have with it and when are they going to use it?  Maybe no orchestras will ever want to do a piece that’s on a digital clock or that has anything but bar lines in it.  How much music do they get that has no bar lines?  How long will it take?  But then what happens if the players come out and they say, “Oh yeah, I’ve done that 50 times before; this is nothing to me.  Let’s go.  Let’s explore this.  It says that I can choose any articulation.  Well great.  Let’s do it”?  If someone were to commission me to do an orchestra piece—and it’s been done, but in Mannheim, Germany—usually what I do is I end up in 4/4, trying to put it into that kind of configuration. I get the sense that things are changing, but I don’t know how fast it’ll change.

FJO: Well, I’d love for you to explain time vectors to me.

JK: Okay, well I’ll explain it by way of where it comes from.  The first piece that I did for the Cunningham company was for a dance called Native Green.  The music was called gliss in sighs, and it was written for an electric prepared violin.  John Cage hooked me up with Max Mathews.  Max was making all these electric violins. The violin that he gave to me was so cool; every string had a separate microphone on it.  The way the Cunningham company works is they have speakers all over the theater.  So by making a broken chord across those four strings, you make the sound go around the auditorium.  It was just so beautiful. Playing a double stop, we had sounds coming from two sides of the auditorium.

“With time vectors, the direction is that you begin after or before a certain time and you end before or after another time.”

That was the first piece where I began to use time as the way things were organized.  I had a grouping of material—what I called a time window: Like from zero to 30 seconds, this can take place.  From 30 seconds to 45, these materials can be improvised or used.  That’s how it worked.  Cage later had those things that he called time brackets, where you had to start within a particular window; that was the way that time was organized.  With time vectors, the direction is that you begin after or before a certain time and you end before or after another time.  So, you can think of it this way: You have to begin after zero and end before 30.  You have to place this material within that.  Then, another kind of vector is you have to begin before a certain time and end before a certain time.  Another way is that you have to begin before a certain time, and end after a certain time.  And the last vector, the last possibility, is you have to begin after a certain time and end after a certain time.  So I give you a musical phrase, and I say this has to fit like this, or you can stretch it here, or you can compress it here, or you can place it here, or it could become the entire piece sometimes.  Or it could be that you’d have to stretch those three notes if you wanted to be really extreme with your interpretation of these time vectors.  You can play three notes over the entire duration of the piece.  Or you can place it here, or place it there, stretch it this way, or compress it that way.  Have it fall at this particular point, have it fall within another particular point, but within these chance-determined timing points.
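
For readers who want to see the four vector types side by side, here is a minimal sketch in Python; the function, the timings, and the chance procedure are illustrative assumptions, not King’s actual scores or software.

```python
# A hypothetical illustration of the four "time vector" types described above,
# assuming t1 <= t2 and times given in seconds. Not King's actual materials.
import random

def realize(kind, t1, t2, piece_length):
    """Return one chance-determined (start, end) placement for a phrase."""
    if kind == "begin after t1, end before t2":
        start = random.uniform(t1, t2)
        end = random.uniform(start, t2)
    elif kind == "begin before t1, end before t2":
        start = random.uniform(0, t1)
        end = random.uniform(start, t2)
    elif kind == "begin before t1, end after t2":
        start = random.uniform(0, t1)
        end = random.uniform(t2, piece_length)
    elif kind == "begin after t1, end after t2":
        start = random.uniform(t1, piece_length)
        end = random.uniform(max(start, t2), piece_length)
    else:
        raise ValueError(kind)
    return start, end

# The same three-note phrase could be compressed into a few seconds or
# stretched across most of the piece, depending on where chance lands.
print(realize("begin after t1, end before t2", 0, 30, piece_length=300))
```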

FJO: So you were doing this before Cage’s Number Pieces?

JK: Well, what I call the time windows thing was done before them.  But the idea of how to stretch these vectors was after.  It was maybe four or five years ago.

FJO: So would you say that that grew out of the influence of the Cage Number Pieces?

JK: I’m sure it did. And because of being with Cunningham, we played this pretty famous piece of his called Four3, which is based on chance-determined reworkings of the Erik Satie 24-hour piece Vexations. Cage took the cantus firmus, and he made all these different single lines where the pitches are chance determined, either above or below the cantus firmus. The rhythmic element of the cantus firmus is intact, but it is stretched out over a minute and a half or two minutes.  I used to play it for a while with David Tudor.  The last time I saw Cage was after a performance of that that I’d done with David Tudor at City Center.  When you’re playing that piece, you put yourself in this very interesting mind frame.  You get the piece of music that you’re about to play, but very little. There are maybe 16 or 18 different phrases you can choose, and so you get ready to play and then you look at the clock, and you look at the time score, and you think, “Okay, is this between 35 and 45?  Yes it is, so I can begin now. And then how long do I have to play?  Well, I have to make it last until one minute, or until one minute and 20 seconds, and so I have to stretch it out.  I’m going to end at 1:20.  I’m going to go to the very end.” And you make that decision, then you play, and then you end.  So it puts you in a place where you really have to be focused. David Tudor’s doing his version of it and I’m doing my version of it. I’ve also played it with Christian Wolff and with David Behrman. You’re a performer and you’re also completely an audience.  You’re somehow aware of what’s happening around you.  You’re not reacting to it, but you’re aware of it.  That always fascinated me about that piece.  You have to be totally invested in that decision that you make.  “Okay, I’m going to do this one.  I’m going to do it here.  I’m going to do it for this length.”  But then what’s happening?  What else is out there in the world that’s co-existing with this thing, with this decision that I made?  That experience was really fascinating.

A small painting leans against one of the window panes in John King's apartment.

FJO: When you mention being the audience you open up a whole other Pandora’s box full of questions. We talked about how performers respond to the score, but not really about how audiences respond when they’re hearing things. How much concern do you have about audiences knowing how these pieces were put together? What does an audience coming to this music need in terms of advance planning, if anything at all, to really experience what you’re doing?

JK: Well, in those kinds of pieces, I think if an audience understands that they don’t have to understand the particulars of how the time’s being organized, but that the organization of the sounds that they’re hearing, the simultaneities, is chance determined, then what they gain from their experience is unique and totally valid. What do you hear? What is interesting to you? What do you notice? What those things are is completely valid and the best is if someone is sitting next to you and hears a completely different thing. That’s fine. It’s the experience you bring to it, then what you get out of it is valid. But there are also pieces that I’ve written that have more of an emotional or dramatic trajectory.

FJO: Your string quartet AllSteel immediately comes to mind. Once someone reads that you composed half of the movements of the piece before 9/11 and the other half afterwards and that the before and after movements alternate, there’s no way to un-know that information. It becomes a very significant part of the listening experience.

JK: But I don’t know how much connectivity there is with the more abstracted, time-organized pieces. At the performances of Piano Vectors at Knockdown Center, the audience was just wandering.  It was like an installation. People would park themselves in different places. One person fell asleep under one of the pianos. They were constructing their own journey through these expanses of music that were done at different times and in different ways. Those kinds of pieces I think are the ones that are the most open to the individual creating their own experience and getting from it what they notice. But there are other pieces that have a program or a beginning inspiration behind them that does impact the way they’re experienced, like Requiem for Eric Garner or the Free Palestine pieces.

FJO: I’d like to continue talking for a bit about AllSteel. It’s interesting to me that before 9/11 you were writing a certain kind of music but that afterwards it was a completely different kind of music. You’ve described it as the point where the 20th century ended and the 21st century began. It somehow changed the music you were writing. So I wanted to explore what exactly you meant by that.

JK: The piece was a commission. Some ideas I was gathering before, but I sat down on September 10 to begin the piece. It was a Monday. I remember going through those four movements and writing a good minute or two into each of those four parts. The beginning was this groove that I had in my head. I knew that there was going to be a very kind of sleazy blues thing from a pizzicato cello. Then I had this other kind of technical thing that I was going to use in the fourth movement. I really had a lot of it planned out. Those four movements were pretty well in place. Then 9/11 happened and I just couldn’t go ahead with it as planned, because it was pretty aggressive. So I wanted those other movements to be reflective and I think it did well for the piece.

FJO: There’s another string quartet of yours that has an even greater variety of musical styles co-existing, 10 Mysteries, but I don’t have the same kind of window into it that I do with AllSteel because of your comments about that piece. These kinds of back stories are certainly helpful to me as a listener, but I wonder how important they are to you?

JK: Well, I remember 10 Mysteries was one of the pieces that I also wrote when I had this idea of the convergence of composed music, indeterminate music, and improvised music. I wanted those three things to be present, but you couldn’t tell what was what.  I called it the trilogic unity—having these three ways of making music be so connected that it was a unified thing.  I would like an audience to know, to the extent that they are able to understand, something that’s written down is going to be the same thing every time but there’s also something that comes purely spontaneously from improvisation and then other things that were embedded into the music with indeterminacy.

“I had this idea of the convergence of composed music, indeterminate music, and improvised music. I wanted those three things to be present, but you couldn’t tell what was what. It’s like you make a reservation at a steak house. And just before you get in the door, you think, ‘I feel like having a vegetarian burger down the street.’ Then on the way there, you run into a friend whom you haven’t seen in 20 years who just happens to pass by, and he says, ‘Let’s go have a drink.’”

I had this metaphor that I said once in a composition class.  It’s like you make a reservation at a restaurant.  It’s a steak house.  You plan it in advance: Friday I’m going to go to this steakhouse.  And just before you get in the door, you think, “I feel like having a vegetarian burger down the street.”  A spontaneous thought comes in.  I determined to do this, but now I’m going to do that.  Then on the way there, you run into a friend whom you haven’t seen in 20 years who just happens to be in town, who just happens to pass by, and he says, “Let’s go have a drink.”  Three kinds of ways of interacting with the world.  I just wanted to put that in the music somehow, that little compression of possibility.  Let’s put those close together so they’re always present.  That’s what I was trying to do with that piece.

FJO: Now, the way my brain works, you messed with my head by calling this piece 10 Mysteries even though it only has nine movements. Where’s the tenth movement?

JK: I know. I threw off everybody with that. At the very end of the piece, it finishes and the quartet just holds—if it’s done live—for 30 to 45 seconds because I wanted there to be a moment where people would just collect all that at the end, all the stuff going on in people’s experience.  So the tenth mystery was what happens in the listener when the piece is finished.  It’s like the seventh direction for Native Americans. There are seven directions: north, south, east, west, up, and down, and then there’s where you are.

FJO: Another piece of yours that has thrown me off somewhat is The HeartPiece, which you co-created with the Polish composer Krzysztof Knittel for the Warsaw Autumn festival at which it was described as a “double opera.” I’m not sure what that means.

JK: There’s this great text by Heiner Müller called Herzstück and it has two characters in it: A and B. I was at the Warsaw Autumn performing with a friend of mine—Krzysztof Zarebski—who is a performance artist. He’s good friends with Krzysztof Knittel, a composer who lives in Warsaw.  I remember speaking to him about this crazy idea I had: “What if we were to write a double opera, kind of like an exquisite corpse. We take this text, and we do different things with it. You do your version, and I do my version, and we just go back and forth. And we’d use a string quartet.” He played electric keyboards, I was playing guitar, and then the singers would be David Moss, who’s a friend of mine, and he knew a well-known Polish soprano, Olga Pasisniek, who was open to doing something really wild and crazy. So that’s how we did it. We had some disagreements; it wasn’t like [John Cage and Lou Harrison’s] Double Music, so we had to come up with some kind of structural agreements, and then we just put it together.  I thought of that A-B text being like the structure for how the piece would be composed: A-B composers. The text is very open and it’s very funny. A could be a man and B a woman.  It could be two men; it could be two women.  It is without any gender, though people have their own thoughts about how that could be.  We put it together very quickly before we played it in a small theater in Warsaw and, fortunately, we did two performances.  And it was done for Polish TV. The set was designed by Krzysztof Zarebski. The string quartet was inside this big tent made out of paper. They start playing, then they poke through the paper and it reveals them as the paper is torn apart. It was an all-female string quartet called Dafo.

FJO: Doing that project seems to have opened up a whole other world for you. Since then you’ve composed a bunch of these weird kinds of operas that are experimenting with texts in a completely different way. Or works that play with narrative or a lack of narrative, like impropera, where the text and the staging also have indeterminate elements. This has now become a central part of what you do.

JK:  It is. For me, that fascinating juncture of staging, lighting, text, and music is what opera is supposed to be. It was done in a certain way in the Baroque period, and different ways as we’ve gone along in history. But both the team of Brecht/Weill and Cunningham/Cage took the idea of staging in kind of similar ways. They wanted to treat the music separately from the text. They wanted to treat the text separately from the stage design. The stage design with a Brecht piece wasn’t meant to be naturalistic: “Oh, we’re really in someone’s room.” Instead, it took the opportunity of staging something and saying, “Let’s put a cow skull on the top of a pole and that will represent what goes on in this room.” The audience wasn’t being told how to think. The audience was encouraged to think about what goes on in that room, not because it’s got chairs and sofas, but because it’s got a cow skull on the top of a stick.  That puts people in a slightly different place.

Another opera that I did was called Dice Thrown. It was based on this Stéphane Mallarmé poem.  Mallarmé was very particular about where the words appeared on the page, what font they were in, whether they were italicized or bold. I found it at the end of a collection of poetry by him.  When I opened the book for the first time, I looked at it and I said, “This is a musical score.”  And when I looked at his notes, his introduction to it, he said that it is like a piece of music.  The space on the page is meant to be like silence.  The way that the words are written is meant to be like how they could be read.

A page from a poem by Stéphane Mallarmé that uses space and various types of typography.

I looked at this and I said I just had to do something with it.  So I took the text apart.  I put all the italicized words together, because that’s a kind of a narrative, even though it stretches from top to bottom.  And then there’s the title—it’s also embedded from the beginning to the end of the poem.  I just used that as this rich inspiration for how the music was done.  I also had video incorporated in ways that exemplified that, and I divided the stage similarly to Cage’s Europeras.  He divided the stage into 64 parts.  I didn’t want to get that complicated, so the stage was divided into 16 squares. Every time the piece is played, there’s a projection behind the audience that shows where the singer is singing from.  They look at the score, which has a time code as well as a stage breakdown.  “I sing Aria One from this place tonight.  Then I’m joined.  There’s a chorus. The three singers can occupy these parts of the stage.”  There was a choreographic element. They had all the negative space—any place the singers weren’t occupying, that’s where a dance movement could be done.  Steve Koplowitz was the choreographer.  He had to do choreography for his six dancers that could exist in one square, or along the strips, or along the back, like he had to have it be mobilized and transformable, so that it fits every night. We did two different performances, and each performance has a kind of an A and a B part.  We’d do a version at the beginning and then a version at the end, so that the audience could experience two passes at this way of organizing material.

The set design people and even the choreographer didn’t think that it was possible in the beginning.  It happened when everyone understood how it was to be organized.  It was beautiful and seamless, and everything about it worked. You just had to make sure that everything’s organized, and people understood.  The singers understood, “I might sing this aria tonight.  I might not.  I might sing it someplace else.  I might sing it from this part of the stage. It might last two minutes, it might last six minutes, but I have to make it go along with all the different variations that are possible.”

John King reading a poem by Stéphane Mallarmé that inspired his opera Dice Thrown.

That was one of those things where all those elements were organized completely separately, but then unified in the performance itself. The audience can notice, “Oh that word was projected on the back wall, and something else was sung, but I made a connection between this appearance that was projected and what the person was singing in French simultaneously with that projection.”  Maybe the dancers were doing something that, again, emphasized something for this person, but the person here didn’t get that, they got something else.  That was how I organized that opera.
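
As a rough illustration only, here is a hypothetical sketch of that kind of chance-determined staging plan in Python: arias assigned to time windows and to squares on a four-by-four grid, with whatever remains unoccupied treated as negative space for the dancers. Every name and number below is invented for the example, not taken from the actual Dice Thrown materials.

```python
# A hypothetical sketch of a chance-determined staging plan for a 16-square
# stage; all names, durations, and probabilities are illustrative assumptions.
import random

SQUARES = [(row, col) for row in range(4) for col in range(4)]  # 4 x 4 grid

def plan_evening(arias, piece_length=1200):
    """Assign each aria a stage square and a time window for one performance."""
    plan, occupied = [], set()
    for aria in arias:
        square = random.choice([s for s in SQUARES if s not in occupied])
        occupied.add(square)
        start = random.uniform(0, piece_length - 120)
        duration = random.uniform(120, 360)  # "two minutes ... six minutes"
        plan.append((aria, square, start, min(start + duration, piece_length)))
    # unoccupied squares are the "negative space" available to the dancers
    negative_space = [s for s in SQUARES if s not in occupied]
    return plan, negative_space

tonight, free_squares = plan_evening(["Aria One", "Aria Two", "Aria Three"])
```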

“For me, that fascinating juncture of staging, lighting, text, and music is what opera is supposed to be.”

Theatrical work is of great interest to me. With the most recent micro-operas that I did, lighting was also a big element.  Chance-determined lighting. Getting that incorporated into the piece and noticing what happened. Getting reactions from the audience about how they experienced that. I will hopefully do many, many more of these.

FJO: We’re now almost at the halfway point of this year, and you’ve already written three pieces—an hour of music. Maybe you’ve written more, but you haven’t gotten them on your website yet. What are you working on next?  Are there going to be more operas? How far in advance do you plan the next thing you’re doing?  Do you know what the next six months are going to be?

JK: I know that there are certain things that I’m hoping to realize. I’ve had a project in mind for quite a while. I’ve worked with the Brooklyn Youth Chorus a lot. And I’ve made contact with a choir in Ramallah, Palestine. I have contact with people at a place called Culture Hub—that’s the new media part of LaMaMa Theater.  They do multi-site performances, which they call telematic performances. I’ve written a choral piece that’s similar to a lot of the stuff that I’ve written for the Brooklyn Youth Chorus based on a poem by Mahmoud Darwish, who was a great Palestinian poet. I’m hoping that I can get these two choirs to sing together at some point. The idea of a choir in Brooklyn singing a piece or two of theirs for the choir in Palestine and the choir in Palestine singing a piece or two of theirs for the choir in Brooklyn, and then having them sing something together is something that I’m hoping to do in the next six months. The music is finished. It’s now just the technology that we’re waiting on to get everyone to be able to talk to each other. I’m also working on a piece for the Belgrade Philharmonic with my partner Aleksandra Vrebalov. We’re working on the entire piece together without divisions of responsibility, trying to create a work without identifiable “creators” but blended so well that even we won’t be able to tell who wrote what! Plus the recording of the Free Palestine string quartets is another thing that will have to be edited, probably in the fall. Those are the main things right now.

FJO: So never a free moment. I know you drink lots of coffee.

JK:  Café Bustelo.

A can of Cafe Bustelo on the kitchen counter in John King's apartment.

Indeterminacy 2.0: The Music of Catastrophe

variant:blue

Image from variant:blue by Joshue Ott and Kenneth Kirschner

“An indeterminate music can lead only to catastrophe.”
—Morton Feldman

It’s a catchy quote, coming as it does from one of the founders of indeterminate music—but to be fair, we should perhaps let the tape run a little further: “An indeterminate music can lead only to catastrophe. This catastrophe we allowed to take place. Behind it was sound—which unified everything.”

To Feldman, indeterminacy was a means to an end—a way to break through the walls of traditional composition in order to reach the pure physicality of sound beyond. Just as Wittgenstein had dismissed his Tractatus as a ladder to be thrown away after it was climbed, Feldman climbed the ladder of indeterminacy and, having reached the top, discarded it.

Few of us will ever see as far as Feldman did from those heights, but in this week’s final episode of our series on indeterminacy, I want to talk a little about some of the smaller lessons I’ve learned from my own experience writing this kind of music.

My earliest experiments with indeterminate digital music grew as a logical progression out of the chance-based techniques I was using in my work at the time. For years I had used chance procedures to create rich, complex, unexpected musical material—material that I would then painstakingly edit. Chance was a source, a method for generating ideas, not an end in itself. But with each roll of the dice that I accepted as a final, fixed result, and from which I built a carefully constructed, fully determined piece, there was always a nagging question: What if the dice had fallen differently? Was there another—maybe better—composition lurking out there that I’d failed to find? Had I just rolled the dice one more time, rolled them differently, could I perhaps have discovered something more?

Indeterminacy became for me a way to have my musical cake and eat it, too. Rather than accept the roll of the dice as a source of raw material, I could accept all possible rolls of the dice and call them the composition itself. With an indeterminate work, there is no longer one piece, but a potentially infinite number of pieces, all of them “the” piece. Here was an entirely different way of thinking about what music could be.

Where does the composition reside? Is it in the specific notes or sounds of one particular score, one particular performance, one particular recording? Or is it in the space of possibilities that a particular composition inscribes, in the wider set of potential outcomes that a given system of musical constraints, techniques, or limits marks out? Even with a lifetime’s constant listening, it would be impossible to hear every imaginable realization of even a moderately complex indeterminate piece—and yet we still somehow feel we have grasped the piece itself, have “heard” it, even if we’ve directly experienced only a tiny subset of its possible realizations. Such a composition resides in a space of pure potentiality that we can never fully explore—yet in glimpsing parts of it, we may experience more music, and perhaps intermittently better music, than a single fixed composition could ever give us.

Accepting this is not without costs. The first, and very important, lesson I learned in writing indeterminate music was that I missed editing. There are for me few more rewarding aspects of composing than that slow, painstaking, often desperate process of getting things right—and ultimately, few more joyful. I love crafting and honing musical material, pushing ever asymptotically closer to some non-existent point of perfection that you imagine, however falsely, the piece can achieve. I love to edit; it’s what I do. And you can’t edit an indeterminate composition. An indeterminate composition is never right; it’s in fact about letting go of the entire idea of rightness.

So you gain something—a vaster piece, a piece that occupies a range of possibilities rather than being limited to just one—and you lose something. You lose a feeling of conclusion, of finality, that moment of true satisfaction when you realize you’ve pushed a work as close as you possibly can to what you want it to be. So yes, there are pros and cons here; this was an unexpected lesson for me.

I’ve always found, in writing music, that there’s an ever-present temptation to believe that we can find the right way of composing—the right method, the right process, the right set of techniques that will produce great results each and every time without doubt or hesitation. A final system of writing that will work perfectly, reliably, and consistently, once and for all. This is an illusion, and not just because there is no right method. There are many methods. Many musics. Many ways of composing. They all have their strengths and they all have their weaknesses.

Perhaps the best lesson that I’ve learned from my more recent indeterminate music actually has to do with my non-indeterminate music. I now feel, when I write this music—call it “fixed” music, call it “determinate” music, call it plain-old music—that I want to write music that can’t be indeterminate. I want to write music that could only be written in a fixed way, that has some inescapable element of complexity, contingency, detail, design—some aspect that just plain wouldn’t work indeterminately. If a piece can be indeterminate, let it be indeterminate. But find those pieces—those harder, more elusive pieces—that require more than chance, more than uncertainty, that take thought, intelligence, planning, and a carefully crafted architecture to realize. In other words, my hope is that composing indeterminate music has made me a more thoughtful, more aware composer—of any music. Perhaps, then, writing indeterminate music can be both a rewarding end in itself, and a path to finding that which indeterminacy can’t give us.

Consider the Goldberg Variations. One could easily imagine making a Goldberg machine, a program or system for building new indeterminate Goldberg variations based on the same set of structures and constraints that Bach himself brought to the work. But then consider that final turn, those final notes, in the final aria of the Goldbergs. Could any blind system of chance find just those notes, just that turn, just that precise musical moment that so perfectly communicates with, speaks to, and completes the musical experience of the entire work? That one point in musical space is singular, and to find it requires the greatest intelligence and the greatest art. We haven’t yet built any machine, generative system, or set of axioms that could in a million years locate that one point in musical space; perhaps one day we will, but for now it is a place that no indeterminacy would ever stumble upon. It required a composer, and Bach found it.

What I’m trying to say is this: that indeterminate music is wonderful and exciting and compelling, especially when you couple it with the vast possibilities that digital technology opens up for us. But it’s not the only music. There’s a place for a music of chance, a place for a music of catastrophe—and there’s a place for music as we know it, and have always seemed to know it. A place for a music in which a composer finds a point in the space of all possible music, a singular moment, a perfect event, and says, “Yes. This.”

Indeterminacy 2.0: Under the Hood

variant:SONiC

Image from variant:SONiC by Joshue Ott and Kenneth Kirschner

This week, I want to talk about some of the actual work I’ve done with indeterminate digital music, with a focus on both the technologies involved and the compositional methods that have proven useful to me in approaching this sort of work. Let me open with a disclaimer that this is going to be a hands-on discussion that really dives into how these pieces are built. It’s intended primarily for composers who may be interested in writing this kind of music, or for listeners who really want to dig into the mechanics underlying the pieces. If that’s not you, feel free to just skim it or fast-forward ahead to next week, when we’ll get back into a more philosophical mode.

For fellow composers, here’s a first and very important caveat: as of right now, this is not music for which you can buy off-the-shelf software, boot it up, and start writing—real, actual programming will be required. And if you, like me, are someone who has a panic attack at the sight of the simplest Max patch, much less actual code, then collaboration may be the way to go, as it has been for me. You’ll ideally be looking to find and work with a “creative coder”—someone who’s a programmer, but has interest and experience in experimental art and won’t run away screaming (or perhaps laughing) at your crazy ideas.

INITIAL CONCEPTS

Let me rewind a little and talk about how I first got interested in trying to write this sort of music. I had used chance procedures as an essential part of my compositional process for many years, but I’d never developed an interest in working with true indeterminacy. That changed in the early 2000s, when my friend Taylor Deupree and I started talking about an idea for a series we wanted to call “Music for iPods.” An unexpected side effect of the release of the original iPod had been that people really got into the shuffle feature, and suddenly you had all these inadvertent little Cageans running around shuffling their whole music collections right from their jean pockets. What we wanted to do was to write specifically for the shuffle feature on the iPod, to make a piece that was made up of little fragments designed to be played in any order, and that would be different every time you listened. Like most of our bright ideas, we never got around to it—but it did get me thinking on the subject.

And as I thought about it, it seemed to me that having just one sound at a time wasn’t really that interesting compositionally; there were only so many ways you could approach structuring the piece, so many ways you could put the thing together. But what if you could have two iPods on shuffle at once? Three? More? That would raise some compositional questions that struck me as really worth digging into. And under the hood, what was this newfangled iPod thing but a digital audio player—a piece of software playing software. It increasingly seemed like the indeterminate music idea was something that should be built in software—but I had no clue how to do it.

FIRST INDETERMINATE SERIES (2004–2005)

In 2004, while performing at a festival in Spain, I met a Flash programmer, Craig Swann, who had just the skills needed to try out my crazy idea. The first piece we tried—July 29, 2004 (all my pieces are titled by the date on which they’re begun)—was a simple proof of concept, a realization of the “Music for iPods” idea; it’s basically an iPod on shuffle play built in Flash. The music itself is a simple little piano composition which I’ve never found particularly compelling—but it was enough to test out the idea.

Here’s how it works: the piece consists of 35 short sound files, each about 10 seconds long, and each containing one piano chord. The Flash program randomly picks one mp3 at a time and plays it—forever. You can let this thing go as long as you like, and it’ll just keep going—the piece is indefinite, not just indeterminate. Here’s an example of what it sounds like, and for this and all the other pieces in my first indeterminate series, you can download the functioning generative Flash app freely from my website and give it a try. I say “functioning,” but these things are getting a bit long in the tooth; you may get a big security alert that pops up when you press the play button, but click “OK” on it and it still works fine. Also potentially interesting for fellow composers is that, by opening up the subfolders on each piece, you can see and play all of the underlying sound files individually and hopefully start to get a better sense of how these things are put together.
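
For fellow composers who want a more concrete picture, here is a minimal sketch of that shuffle logic, in Python rather than Flash; the file names and the play_file placeholder are assumptions, not the contents of the actual app.

```python
# A minimal sketch of an "iPod on shuffle" player: pick one of 35 short files
# at random, play it, repeat forever. play_file() is a stand-in for whatever
# audio playback your environment provides.
import random

FRAGMENTS = [f"chord_{n:02d}.mp3" for n in range(1, 36)]  # 35 ten-second chords

def play_file(path):
    print(f"now playing {path}")  # replace with real audio playback

def shuffle_forever():
    while True:  # the piece is indefinite, not just indeterminate
        play_file(random.choice(FRAGMENTS))

# shuffle_forever()  # uncomment to let it run as long as you like
```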

It was with the next piece, August 26, 2004, that this first series of indeterminate pieces for me really started to get interesting (here’s a fixed excerpt, and here’s the generative version). It’s one thing to play just one sound, then another, then another, ad infinitum. But what if you’ve got a bunch of sounds—two or three or four different layers at once—all happening in random simultaneous juxtapositions and colliding with one another? It’s a much more challenging, much more interesting compositional question. How do you structure the piece? How do you make it make sense? All these sounds have to “get along,” to fit together in some musically meaningful way—and yet you don’t want it to be homogenous, static, boring. How do you balance the desire for harmonic complexity and development with the need to avoid what are called, in the technical parlance of DJs, “trainwrecks”? Because sooner or later, anything that can happen in these pieces will happen, and you have to build the entire composition with that knowledge in mind.

August 26, 2004 was one possible solution to this problem. There are three simultaneous layers playing—three virtual “iPods” stacked shuffling on top of each other. One track plays a series of piano recordings, which here carry most of the harmonic content; there are 14 piano fragments, most around a minute long, each moving within a stable pitch space, and each able to transition more or less smoothly into the next. On top of that are two layers of electronics, drawn from a shared set of 21 sounds, and these I kept very sparse: each is harmonically open and ambiguous enough that it should, in theory, be able to hover over whatever piano fragment is playing as well as bump into the other electronic layer without causing too much trouble.
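
Sketched in the same hedged way, the layering might look something like this: one piano layer carrying the harmony and two sparse electronic layers drawing from a shared pool, each shuffling independently. The file names, durations, and threading mechanics are assumptions for illustration, not the original Flash implementation.

```python
# A rough sketch of three virtual "iPods on shuffle" running at once.
import random
import threading
import time

PIANO = [f"piano_{n:02d}.mp3" for n in range(1, 15)]       # 14 fragments, roughly a minute each
ELECTRONICS = [f"elec_{n:02d}.mp3" for n in range(1, 22)]  # 21 shared electronic sounds

def layer(name, pool, approx_len):
    """One shuffling layer: pick a fragment, let it run, repeat."""
    while True:
        fragment = random.choice(pool)
        print(f"{name} starts {fragment}")
        time.sleep(approx_len)  # stand-in for actually playing the audio

# one piano layer carrying the harmony, plus two sparse electronic layers
for name, pool, length in [("piano", PIANO, 60),
                           ("electronics A", ELECTRONICS, 30),
                           ("electronics B", ELECTRONICS, 30)]:
    threading.Thread(target=layer, args=(name, pool, length), daemon=True).start()

time.sleep(300)  # let the three layers collide for five minutes
```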

As the series continued, however, I found myself increasingly taking a somewhat different approach: rather than divide up the sounds into different functional groups, with one group dominating the harmonic space, I instead designed all of the underlying fragments to be “compatible” with one another—every sound would potentially work with every other, so that any random juxtaposition of sounds that got loaded could safely coexist. To check out some of these subsequent pieces, you can scan through 2005 on my website for any compositions marked “indet.” And again, for all of them you can freely download the generative version and open up the folders to explore their component parts.

INTERMISSION (2006–2014)

By late 2005, I was beginning to drift away from this sort of work, for reasons both technological and artistic (some of which I’ll talk about next week), and by 2006 I found myself again writing nothing but fully “determinate” work. Lacking the programming skills to push the work forward myself, indeterminacy became less of a focus—though I still felt that there was great untapped potential there, and hoped to return to it one day.

Another thing holding the pieces back was, quite simply, the technology of the time. They could only be played on a desktop computer, which just wasn’t really a comfortable or desirable listening environment then (or, for that matter, now). These pieces really cried out for a mobile realization, for something you could throw in your pocket, pop some headphones on, and hit the streets with. So I kept thinking about the pieces, and kept kicking around ideas in my head and with friends. Then suddenly, over the course of just a few years, we all looked up and found that everyone around us was carrying in their pockets extremely powerful, highly capable computers—computers that had more firepower than every piece of gear I’d used in the first decade or two of my musical life put together. Except they were now called “phones.”

THE VARIANTS (2014–)

In 2014, after years of talking over pad kee mao at our local Thai place, I started working with my friend Joshue Ott to finally move the indeterminate series forward. A visualist and software designer, Josh is best known in new music circles for superDraw, a “visual instrument” on which he improvises live generative imagery for new music performances and on which he has performed at venues ranging from Mutek to Carnegie Hall. Josh is also an iOS developer, and his app Thicket, created with composer Morgan Packard, is one of the best examples out there of what can be achieved when you bring together visuals, music, and an interactive touch screen.

Working as artists-in-residence at Eyebeam, Josh and I have developed and launched what we’re calling the Variant series. Our idea was to develop a series of apps for the iPhone and iPad that would bring together the generative visuals of his superDraw software with my approach to indeterminate digital music, all tightly integrated into a single interactive experience for the user. Our concept for the Variant apps is that each piece in the series will feature a different visual composition of Josh’s, a different indeterminate composition of mine—and, importantly, a different approach to user interactivity.

When I sat down to write the first sketches for these new apps, my initial instinct was to go back and basically rewrite August 26, 2004, which had somehow stuck with me as the most satisfying piece of the first indeterminate series. And when I did, the results were terrible—well, terribly boring. It took me a little while to realize that I’d perhaps learned a thing or two in the intervening decade, and that I needed to push myself harder—to try to move the indeterminate pieces forward not just technologically, but compositionally as well. So I went back to the drawing board, and the result was the music for our first app, variant:blue (here’s an example of what it sounds like).

It’s immediately clear that this is much denser than anything I’d tried to do in the first indeterminate series—even beyond the eight tracks of audio running simultaneously. It’s denser compositionally, with a more dissonant and chromatic palette than I would have had the courage to attempt ten years earlier. But the piece is actually not that complex once you break it down: each audio file contains a rhythmically loose, repeating instrumental pattern (you can hear an example of one isolated component here to give you a sense of it), with lots of silent spaces in between the repetitions. The rhythms, however, are totally free (there’s no overarching grid or tempo), so as you start to layer this stuff, the patterns begin to overlap and interfere with each other in complex, unpredictable ways. For variant:blue, there are now 48 individual component audio files; the indeterminate engine grabs one sound file at random and assigns it to one of the eight playback tracks, then grabs the next and assigns it to the next available track, and so forth. One handy feature of all of the Variant apps is that, when you click the dot in the lower right, a display will open that shows the indeterminate engine running in real time, which should hopefully give you a better sense of how the music is put together.
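
As a rough Python sketch of the track-assignment logic just described (the real engine is Josh’s iOS code, and the file names here are placeholders):

```python
# A simplified sketch of an eight-track indeterminate engine: grab a random
# component file, put it on the next free track, free the track when it ends.
import random

FILES = [f"component_{n:02d}.mp3" for n in range(1, 49)]  # 48 audio fragments
NUM_TRACKS = 8
tracks = [None] * NUM_TRACKS  # what each playback track is currently playing

def assign_next():
    """Load a chance-chosen file onto the next available track, if any."""
    for i, current in enumerate(tracks):
        if current is None:
            tracks[i] = random.choice(FILES)
            return i, tracks[i]
    return None  # all eight tracks are busy; wait for one to finish

def track_finished(i):
    tracks[i] = None  # the engine can now reload this track
```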

In one way, though, the music for variant:blue is very much like my earlier indeterminate pieces: it’s straight-up indeterminate, not interactive. The user has no control over the audio, and the music evolves only according to the indeterminate engine’s built-in chance procedures. For variant:blue, the interaction design focuses solely on the visuals, giving you the ability to draw lines that are in turn modified by the music. True audio interactivity, however, was something that would become a major struggle for us in our next app, variant:flare.

The music for variant:flare has a compositional structure that is almost the diametrical opposite of variant:blue’s, showing you a very different solution to the problem of how to bring order to these indeterminate pieces. Where the previous piece was predominantly atonal and free-floating, this one is locked to two absolute grids: a diatonic scale (C# minor, though sounding more like E major much of the time), and a tight rhythmic grid (at 117 bpm). So you can feel very confident that whatever sound comes up is going to get along just fine with the other sounds that are playing, in terms of both pitch and rhythm. Within that tightly quantized, completely tonal space, however, there’s actually plenty of room for movement—and each of these sounds gets to have all sorts of fun melodically and especially metrically. The meters, or lack thereof, are where it really gets interesting, because the step sequencer that was used to create each audio file incorporated chance procedures that occasionally scrambled whatever already-weird meter the pattern was playing in. Thus every individual line runs in a different irregular meter, and also occasionally changes and flips around into new and confusingly different patterns. Try following the individual lines (like this one); it’s a big fun mess, and you can listen to an example of the full app’s music here.
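
Here is a hedged sketch of that meter-scrambling idea: one line locked to the piece’s diatonic pitch set, running in an odd pattern length that chance occasionally reshuffles. The scale spelling, probabilities, and pattern lengths are guesses for the sake of illustration, not the settings of the actual step sequencer.

```python
# An illustrative sketch of a chance-scrambled step-sequencer line; every value
# here is an assumption, not the actual variant:flare materials.
import random

SCALE = ["C#", "D#", "E", "F#", "G#", "A", "B"]  # C# minor / E major pitch set
BPM = 117  # in the real app, every event snaps to this shared grid

def make_pattern(length):
    """An irregular pattern: each step is either a scale tone or a rest."""
    return [random.choice(SCALE + [None, None]) for _ in range(length)]

def run_line(steps=128):
    """One line whose odd meter occasionally flips into a new pattern."""
    pattern = make_pattern(random.choice([5, 7, 9, 11]))  # pick a weird meter
    for step in range(steps):
        if random.random() < 0.05:  # now and then, scramble the meter entirely
            pattern = make_pattern(random.choice([5, 7, 9, 11]))
        note = pattern[step % len(pattern)]
        print(note or ".", end=" ")
    print()

run_line()
```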

We were very happy with the way both the music and the visuals for the app came together—individually. But variant:flare unexpectedly became a huge challenge in the third goal of our Variant series: interactivity. Try as we might, we simply couldn’t find a way to make both the music and the visuals meaningfully interactive. The musical composition was originally designed to just run indeterminately, without user input, and trying to add interactivity after the fact proved incredibly difficult. What brought it all together in the end was a complete rethink that took the piece from a passive musical experience to a truly active one. The design we hit on was this: each tap on the iPad’s screen starts one track of the music, up to six. After that point, each tap resets a track: one currently playing track fades out and is replaced by another randomly selected one. This allows you to “step” through the composition yourself, to guide its evolution and development in a controlled, yet still indeterminate fashion (because the choice of sounds is still governed by chance). If you find a juxtaposition of sounds you like, one compelling point in the “compositional space” of the piece, leave it alone—the music will hover there, staying with that particular combination of sounds until you’re ready to nudge it forward and move on. The visuals, conversely, now have no direct user interactivity and are controlled completely by the music. While this was not at all the direction we initially anticipated taking, we’re both reasonably satisfied with how the app’s user experience has come together.
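
That tap behavior reduces to a very small state machine, sketched below under the same caveats (the pool size and file names are assumptions): the first six taps each start a track, and every tap after that swaps one playing track for another chance-chosen sound.

```python
# A sketch of the "step through the composition" tap logic described above.
import random

FILES = [f"flare_{n:02d}.mp3" for n in range(1, 49)]  # assumed sound pool
MAX_TRACKS = 6
playing = []  # sounds currently running

def on_tap():
    if len(playing) < MAX_TRACKS:
        playing.append(random.choice(FILES))    # first six taps build up tracks
    else:
        victim = random.randrange(MAX_TRACKS)   # afterwards, fade one track out...
        playing[victim] = random.choice(FILES)  # ...and replace it by chance
    return list(playing)
```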

After this experience, my goal for the next app was to focus on building interactivity into the music from the ground up—not to struggle with adding it into something that was already written, but to make it an integral part of the overall plan of the composition from the start. variant:SONiC, our next app, was commissioned by the American Composers Orchestra for the October 2015 SONiC Festival, and my idea for the music was to take sounds from a wide cross-section of the performers and composers in the festival and build the piece entirely out of those sounds. I asked the ACO to send out a call for materials to the festival’s participants, asking each interested musician to send me one note—a single note played on their instrument or sung—with the idea of building up a sort of “virtual ensemble” to represent the festival itself. I received a wonderfully diverse array of material to work with—including sounds from Andy Akiho, Alarm Will Sound (including Miles Brown, Michael Clayville, Erin Lesser, Courtney Orlando, and Jason Price), Clarice Assad, Christopher Cerrone, The Crossing, Melody Eötvös, Angelica Negron, Nieuw Amsterdams Peil (including Gerard Bouwhuis and Heleen Hulst), and Nina C. Young—and it was from these sounds that I built the app’s musical composition.

When you boot up variant:SONiC, nothing happens. But tap the screen and a sound will play, and that sound will trigger Josh’s visuals as well. Each sound is short, and you can keep tapping—there are up to ten sounds available to you at once, one for each finger, so you can almost begin to play the piece like an instrument. As with our other apps, each tap triggers one randomly selected element of the composition at a time—but here there are 153 total sounds, so there’s a lot for you to explore. And with this Variant you get one additional interactive feature: hold down your finger, and whatever sound you’ve just triggered will start slowly looping. Thus you can use one hand, for example, to build up a stable group of repeating sounds, while the other “solos” over it by triggering new material. variant:SONiC is a free app, so it’s a great way to try out these new indeterminate pieces—but for those who don’t have access to an iOS device, here’s what it sounds like.
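Here is a compact Swift sketch of that touch behavior (ten fingers, one random sound each, hold to loop), with the actual audio playback again stubbed out. The structure and names are illustrative guesses rather than the app’s real implementation.

```swift
// A sketch of the variant:SONiC touch behavior: each finger triggers one
// randomly chosen sound (up to ten at once), and holding a finger down makes
// that sound loop. Touch handling and playback are placeholders.
final class SonicInteraction {
    private let pool: [String]                       // the 153 collected sounds
    private var active: [Int: String] = [:]          // finger id -> sound
    private var looping: Set<Int> = []               // fingers currently held down

    init(pool: [String]) {
        precondition(!pool.isEmpty)
        self.pool = pool
    }

    func fingerDown(_ finger: Int) {
        guard active.count < 10 else { return }      // one sound per finger, ten at most
        let sound = pool.randomElement()!
        active[finger] = sound
        print("finger \(finger): play \(sound)")
    }

    func fingerHeld(_ finger: Int) {
        guard let sound = active[finger] else { return }
        looping.insert(finger)
        print("finger \(finger): loop \(sound)")     // held sounds repeat slowly
    }

    func fingerUp(_ finger: Int) {
        looping.remove(finger)
        active[finger] = nil
    }
}

// Usage: one hand holds a looping bed while the other taps new material.
let sonic = SonicInteraction(pool: (1...153).map { "sonic_\($0).caf" })
sonic.fingerDown(0); sonic.fingerHeld(0)             // a looping layer
sonic.fingerDown(1); sonic.fingerUp(1)               // a passing "solo" note
```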

variant:SONiC is, for me, the first of our apps where the audio interactivity feels natural, coherent, and integral to the musical composition. And to me it illustrates how—particularly when working with touchscreen technology—indeterminacy quite naturally slides into interactivity with this kind of music. I’m not sure whether that’s just because iPhone and iPad users expect to be able to touch their screens and make things happen, or whether there’s something inherent in the medium that draws you in this direction. Maybe it’s just that having the tools on hand tempts you to use them; to a composer with a hammer, everything sounds like a nail.

In the end, though, much as I’m finding interactive music to be intriguing and rewarding, I do still believe that there’s a place for indeterminate digital music that isn’t interactive. I hope to work more in this direction in the future—though to call it merely “passive” indeterminate music sounds just as insulting as calling a regular old piece of music “determinate.” I guess what I’m trying to say is that, despite all these wonderfully interactive technologies we have available to us today, there’s still something to be said for just sitting back and listening to a piece of music. And maybe that’s why I’ve called this series Indeterminacy 2.0 rather than Interactivity 1.0.

Next week, our season finale: “The Music of Catastrophe.”