Author: Kenneth Kirschner

Indeterminacy 2.0: The Music of Catastrophe

variant:blue

Image from variant:blue by Joshue Ott and Kenneth Kirschner

“An indeterminate music can lead only to catastrophe.”
—Morton Feldman

It’s a catchy quote, coming as it does from one of the founders of indeterminate music—but to be fair, we should perhaps let the tape run a little further: “An indeterminate music can lead only to catastrophe. This catastrophe we allowed to take place. Behind it was sound—which unified everything.”

To Feldman, indeterminacy was a means to an end—a way to break through the walls of traditional composition in order to reach the pure physicality of sound beyond. Just as Wittgenstein had dismissed his Tractatus as a ladder to be thrown away after it was climbed, Feldman climbed the ladder of indeterminacy and, having reached the top, discarded it.

Few of us will ever see as far as Feldman did from those heights, but in this week’s final episode of our series on indeterminacy, I want to talk a little about some of the smaller lessons I’ve learned from my own experience writing this kind of music.

My earliest experiments with indeterminate digital music grew as a logical progression out of the chance-based techniques I was using in my work at the time. For years I had used chance procedures to create rich, complex, unexpected musical material—material that I would then painstakingly edit. Chance was a source, a method for generating ideas, not an end in itself. But with each roll of the dice that I accepted as a final, fixed result, and from which I built a carefully constructed, fully determined piece, there was always a nagging question: What if the dice had fallen differently? Was there another—maybe better—composition lurking out there that I’d failed to find? Had I just rolled the dice one more time, rolled them differently, could I perhaps have discovered something more?

Indeterminacy became for me a way to have my musical cake and eat it, too. Rather than accept the roll of the dice as a source of raw material, I could accept all possible rolls of the dice and call them the composition itself. With an indeterminate work, there is no longer one piece, but a potentially infinite number of pieces, all of them “the” piece. Here was an entirely different way of thinking about what music could be.

Where does the composition reside? Is it in the specific notes or sounds of one particular score, one particular performance, one particular recording? Or is it in the space of possibilities that a particular composition inscribes, in the wider set of potential outcomes that a given system of musical constraints, techniques, or limits marks out? Even with a lifetime’s constant listening, it would be impossible to hear every imaginable realization of even a moderately complex indeterminate piece—and yet we still somehow feel we have grasped the piece itself, have “heard” it, even if we’ve directly experienced only a tiny subset of its possible realizations. Such a composition resides in a space of pure potentiality that we can never fully explore—yet in glimpsing parts of it, we may experience more music, and perhaps intermittently better music, than a single fixed composition could ever give us.

Accepting this is not without costs. The first, and very important, lesson I learned in writing indeterminate music was that I missed editing. There are for me few more rewarding aspects of composing than that slow, painstaking, often desperate process of getting things right—and ultimately, few more joyful. I love crafting and honing musical material, pushing ever asymptotically closer to some non-existent point of perfection that you imagine, however falsely, the piece can achieve. I love to edit; it’s what I do. And you can’t edit an indeterminate composition. An indeterminate composition is never right; it’s in fact about letting go of the entire idea of rightness.

So you gain something—a vaster piece, a piece that occupies a range of possibilities rather than being limited to just one—and you lose something. You lose a feeling of conclusion, of finality, that moment of true satisfaction when you realize you’ve pushed a work as close as you possibly can to what you want it to be. So yes, there are pros and cons here; this was an unexpected lesson for me.

I’ve always found, in writing music, that there’s an ever-present temptation to believe that we can find the right way of composing—the right method, the right process, the right set of techniques that will produce great results each and every time without doubt or hesitation. A final system of writing that will work perfectly, reliably, and consistently, once and for all. This is an illusion, and not just because there is no right method. There are many methods. Many musics. Many ways of composing. They all have their strengths and they all have their weaknesses.

Perhaps the best lesson that I’ve learned from my more recent indeterminate music actually has to do with my non-indeterminate music. I now feel, when I write this music—call it “fixed” music, call it “determinate” music, call it plain-old music—that I want to write music that can’t be indeterminate. I want to write music that could only be written in a fixed way, that has some inescapable element of complexity, contingency, detail, design—some aspect that just plain wouldn’t work indeterminately. If a piece can be indeterminate, let it be indeterminate. But find those pieces—those harder, more elusive pieces—that require more than chance, more than uncertainty, that take thought, intelligence, planning, and a carefully crafted architecture to realize. In other words, my hope is that composing indeterminate music has made me a more thoughtful, more aware composer—of any music. Perhaps, then, writing indeterminate music can be both a rewarding end in itself, and a path to finding that which indeterminacy can’t give us.

Consider the Goldberg Variations. One could easily imagine making a Goldberg machine, a program or system for building new indeterminate Goldberg variations based on the same set of structures and constraints that Bach himself brought to the work. But then consider that final turn, those final notes, in the final aria of the Goldbergs. Could any blind system of chance find just those notes, just that turn, just that precise musical moment that so perfectly communicates with, speaks to, and completes the musical experience of the entire work? That one point in musical space is singular, and to find it requires the greatest intelligence and the greatest art. We haven’t yet built any machine, generative system, or set of axioms that could in a million years locate that one point in musical space; perhaps one day we will, but for now it is a place that no indeterminacy would ever stumble upon. It required a composer, and Bach found it.

What I’m trying to say is this: that indeterminate music is wonderful and exciting and compelling, especially when you couple it with the vast possibilities that digital technology opens up for us. But it’s not the only music. There’s a place for a music of chance, a place for a music of catastrophe—and there’s a place for music as we know it, and have always seemed to know it. A place for a music in which a composer finds a point in the space of all possible music, a singular moment, a perfect event, and says, “Yes. This.”

Indeterminacy 2.0: Under the Hood

variant:SONiC

Image from variant:SONiC by Joshue Ott and Kenneth Kirschner

This week, I want to talk about some of the actual work I’ve done with indeterminate digital music, with a focus on both the technologies involved and the compositional methods that have proven useful to me in approaching this sort of work. Let me open with a disclaimer that this is going to be a hands-on discussion that really dives into how these pieces are built. It’s intended primarily for composers who may be interested in writing this kind of music, or for listeners who really want to dig into the mechanics underlying the pieces. If that’s not you, feel free to just skim it or fast-forward ahead to next week, when we’ll get back into a more philosophical mode.

For fellow composers, here’s a first and very important caveat: as of right now, this is not music for which you can buy off-the-shelf software, boot it up, and start writing—real, actual programming will be required. And if you, like me, are someone who has a panic attack at the sight of the simplest Max patch, much less actual code, then collaboration may be the way to go, as it has been for me. You’ll ideally be looking to find and work with a “creative coder”—someone who’s a programmer, but has interest and experience in experimental art and won’t run away screaming (or perhaps laughing) at your crazy ideas.

INITIAL CONCEPTS

Let me rewind a little and talk about how I first got interested in trying to write this sort of music. I had used chance procedures as an essential part of my compositional process for many years, but I’d never developed an interest in working with true indeterminacy. That changed in the early 2000s, when my friend Taylor Deupree and I started talking about an idea for a series we wanted to call “Music for iPods.” An unexpected side effect of the release of the original iPod had been that people really got into the shuffle feature, and suddenly you had all these inadvertent little Cageans running around shuffling their whole music collections right from their jean pockets. What we wanted to do was to write specifically for the shuffle feature on the iPod, to make a piece composed of little fragments designed to be played in any order, and that would be different every time you listened. Like most of our bright ideas, we never got around to it—but it did get me thinking on the subject.

And as I thought about it, it seemed to me that having just one sound at a time wasn’t really that interesting compositionally; there were only so many ways you could approach structuring the piece, so many ways you could put the thing together. But what if you could have two iPods on shuffle at once? Three? More? That would raise some compositional questions that struck me as really worth digging into. And under the hood, what was this newfangled iPod thing but a digital audio player—a piece of software playing software. It increasingly seemed like the indeterminate music idea was something that should be built in software—but I had no clue how to do it.

FIRST INDETERMINATE SERIES (2004–2005)

In 2004, while performing at a festival in Spain, I met a Flash programmer, Craig Swann, who had just the skills needed to try out my crazy idea. The first piece we tried—July 29, 2004 (all my pieces are titled by the date on which they’re begun)—was a simple proof of concept, a realization of the “Music for iPods” idea; it’s basically an iPod on shuffle play built in Flash. The music itself is a simple little piano composition which I’ve never found particularly compelling—but it was enough to test out the idea.

Here’s how it works: the piece consists of 35 short sound files, each about 10 seconds long, and each containing one piano chord. The Flash program randomly picks one mp3 at a time and plays it—forever. You can let this thing go as long as you like, and it’ll just keep going—the piece is indefinite, not just indeterminate. Here’s an example of what it sounds like, and for this and all the other pieces in my first indeterminate series, you can download the functioning generative Flash app freely from my website and give it a try. I say “functioning,” but these things are getting a bit long in the tooth; you may get a big security alert that pops up when you press the play button, but click “OK” on it and it still works fine. Also potentially interesting for fellow composers is that, by opening up the subfolders on each piece, you can see and play all of the underlying sound files individually and hopefully start to get a better sense of how these things are put together.
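The mechanics here are simple enough to sketch in a few lines of Python—purely as an illustration, since the actual piece was built in Flash, and the file names below are placeholders, not the piece's real components:

```python
import random

# Hypothetical stand-ins for the piece's 35 ten-second piano-chord mp3s.
FRAGMENTS = [f"chord_{i:02d}.mp3" for i in range(1, 36)]

def shuffle_play():
    """Yield an endless, indeterminate stream of fragments to play."""
    while True:
        # actual audio playback of the chosen file would happen here
        yield random.choice(FRAGMENTS)

player = shuffle_play()
first_five = [next(player) for _ in range(5)]
```

That endless loop is the whole engine: the piece never repeats deterministically and never ends, which is why it's indefinite as well as indeterminate.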

It was with the next piece, August 26, 2004, that this first series of indeterminate pieces for me really started to get interesting (here’s a fixed excerpt, and here’s the generative version). It’s one thing to play just one sound, then another, then another, ad infinitum. But what if you’ve got a bunch of sounds—two or three or four different layers at once—all happening in random simultaneous juxtapositions and colliding with one another? It’s a much more challenging, much more interesting compositional question. How do you structure the piece? How do you make it make sense? All these sounds have to “get along,” to fit together in some musically meaningful way—and yet you don’t want it to be homogenous, static, boring. How do you balance the desire for harmonic complexity and development with the need to avoid what are called, in the technical parlance of DJs, “trainwrecks”? Because sooner or later, anything that can happen in these pieces will happen, and you have to build the entire composition with that knowledge in mind.

August 26, 2004 was one possible solution to this problem. There are three simultaneous layers playing—three virtual “iPods” stacked shuffling on top of each other. One track plays a series of piano recordings, which here carry most of the harmonic content; there are 14 piano fragments, most around a minute long, each moving within a stable pitch space, and each able to transition more or less smoothly into the next. On top of that are two layers of electronics, drawn from a shared set of 21 sounds, and these I kept very sparse: each is harmonically open and ambiguous enough that it should, in theory, be able to hover over whatever piano fragment is playing as well as bump into the other electronic layer without causing too much trouble.
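The layered structure described above can be sketched roughly like this—again with invented names, not the piece's actual sound files—with one harmonic piano layer and two electronic layers drawn from a shared pool:

```python
import random

# Placeholder names mirroring the structure: 14 harmonic piano fragments,
# plus 21 electronic sounds shared by two sparse upper layers.
PIANO = [f"piano_{i:02d}" for i in range(1, 15)]
ELECTRONICS = [f"elec_{i:02d}" for i in range(1, 22)]

def next_moment():
    """One random 'moment' of the piece: three simultaneous layers."""
    return {
        "piano": random.choice(PIANO),
        "elec_a": random.choice(ELECTRONICS),
        "elec_b": random.choice(ELECTRONICS),  # may collide with elec_a; by design
    }

moment = next_moment()
```

The compositional work, of course, is entirely in the sounds themselves—making sure that any `elec_a`/`elec_b` collision over any piano fragment remains musically safe.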

As the series continued, however, I found myself increasingly taking a somewhat different approach: rather than divide up the sounds into different functional groups, with one group dominating the harmonic space, I instead designed all of the underlying fragments to be “compatible” with one another—every sound would potentially work with every other, so that any random juxtaposition of sounds that got loaded could safely coexist. To check out some of these subsequent pieces, you can scan through 2005 on my website for any compositions marked “indet.” And again, for all of them you can freely download the generative version and open up the folders to explore their component parts.

INTERMISSION (2006–2014)

By late 2005, I was beginning to drift away from this sort of work, for reasons both technological and artistic (some of which I’ll talk about next week), and by 2006 I found myself again writing nothing but fully “determinate” work. Lacking the programming skills to push the work forward myself, indeterminacy became less of a focus—though I still felt that there was great untapped potential there, and hoped to return to it one day.

Another thing holding the pieces back was, quite simply, the technology of the time. They could only be played on a desktop computer, which just wasn’t really a comfortable or desirable listening environment then (or, for that matter, now). These pieces really cried out for a mobile realization, for something you could throw in your pocket, pop some headphones on, and hit the streets with. So I kept thinking about the pieces, and kept kicking around ideas in my head and with friends. Then suddenly, over the course of just a few years, we all looked up and found that everyone around us was carrying in their pockets extremely powerful, highly capable computers—computers that had more firepower than every piece of gear I’d used in the first decade or two of my musical life put together. Except they were now called “phones.”

THE VARIANTS (2014–)

In 2014, after years of talking over pad kee mao at our local Thai place, I started working with my friend Joshue Ott to finally move the indeterminate series forward. A visualist and software designer, Josh is best known in new music circles for superDraw, a “visual instrument” on which he improvises live generative imagery for new music performances and on which he has performed at venues ranging from Mutek to Carnegie Hall. Josh is also an iOS developer, and his app Thicket, created with composer Morgan Packard, is one of the best examples out there of what can be achieved when you bring together visuals, music, and an interactive touch screen.

Working as artists-in-residence at Eyebeam, Josh and I have developed and launched what we’re calling the Variant series. Our idea was to develop a series of apps for the iPhone and iPad that would bring together the generative visuals of his superDraw software with my approach to indeterminate digital music, all tightly integrated into a single interactive experience for the user. Our concept for the Variant apps is that each piece in the series will feature a different visual composition of Josh’s, a different indeterminate composition of mine—and, importantly, a different approach to user interactivity.

When I sat down to write the first sketches for these new apps, my initial instinct was to go back and basically rewrite August 26, 2004, which had somehow stuck with me as the most satisfying piece of the first indeterminate series. And when I did, the results were terrible—well, terribly boring. It took me a little while to realize that I’d perhaps learned a thing or two in the intervening decade, and that I needed to push myself harder—to try to move the indeterminate pieces forward not just technologically, but compositionally as well. So I went back to the drawing board, and the result was the music for our first app, variant:blue (here’s an example of what it sounds like).

It’s immediately clear that this is much denser than anything I’d tried to do in the first indeterminate series—even beyond the eight tracks of audio running simultaneously. It’s denser compositionally, with a more dissonant and chromatic palette than I would have had the courage to attempt ten years earlier. But the piece is actually not that complex once you break it down: each audio file contains a rhythmically loose, repeating instrumental pattern (you can hear an example of one isolated component here to give you a sense of it), with lots of silent spaces in between the repetitions. The rhythms, however, are totally free (there’s no overarching grid or tempo), so as you start to layer this stuff, the patterns begin to overlap and interfere with each other in complex, unpredictable ways. For variant:blue, there are now 48 individual component audio files; the indeterminate engine grabs one sound file at random and assigns it to one of the eight playback tracks, then grabs the next and assigns it to the next available track, and so forth. One handy feature of all of the Variant apps is that, when you click the dot in the lower right, a display will open that shows the indeterminate engine running in real time, which should hopefully give you a better sense of how the music is put together.

In one way, though, the music for variant:blue is very much like my earlier indeterminate pieces: it’s straight-up indeterminate, not interactive. The user has no control over the audio, and the music evolves only according to the indeterminate engine’s built-in chance procedures. For variant:blue, the interaction design focuses solely on the visuals, giving you the ability to draw lines that are in turn modified by the music. True audio interactivity, however, was something that would become a major struggle for us in our next app, variant:flare.

The music for variant:flare has a compositional structure that is almost the diametrical opposite of variant:blue’s, showing you a very different solution to the problem of how to bring order to these indeterminate pieces. Where the previous piece was predominantly atonal and free-floating, this one is locked to two absolute grids: a diatonic scale (C# minor, though sounding more like E major much of the time), and a tight rhythmic grid (at 117 bpm). So you can feel very confident that whatever sound comes up is going to get along just fine with the other sounds that are playing, in terms of both pitch and rhythm. Within that tightly quantized, completely tonal space, however, there’s actually plenty of room for movement—and each of these sounds gets to have all sorts of fun melodically and especially metrically. The meters, or lack thereof, are where it really gets interesting, because the step sequencer that was used to create each audio file incorporated chance procedures that occasionally scrambled whatever already-weird meter the pattern was playing in. Thus every individual line runs in a different irregular meter, and also occasionally changes and flips around into new and confusingly different patterns. Try following the individual lines (like this one); it’s a big fun mess, and you can listen to an example of the full app’s music here.

We were very happy with the way both the music and the visuals for the app came together—individually. But variant:flare unexpectedly became a huge challenge in the third goal of our Variant series: interactivity. Try as we might, we simply couldn’t find a way to make both the music and the visuals meaningfully interactive. The musical composition was originally designed to just run indeterminately, without user input, and trying to add interactivity after the fact proved incredibly difficult. What brought it all together in the end was a complete rethink that took the piece from a passive musical experience to a truly active one. The design we hit on was this: each tap on the iPad’s screen starts one track of the music, up to six. After that point, each tap resets a track: one currently playing track fades out and is replaced by another randomly selected one. This allows you to “step” through the composition yourself, to guide its evolution and development in a controlled, yet still indeterminate fashion (because the choice of sounds is still governed by chance). If you find a juxtaposition of sounds you like, one compelling point in the “compositional space” of the piece, leave it alone—the music will hover there, staying with that particular combination of sounds until you’re ready to nudge it forward and move on. The visuals, conversely, now have no direct user interactivity and are controlled completely by the music. While this was not at all the direction we initially anticipated taking, we’re both reasonably satisfied with how the app’s user experience has come together.

After this experience, my goal for the next app was to focus on building interactivity into the music from the ground up—not to struggle with adding it into something that was already written, but to make it an integral part of the overall plan of the composition from the start. variant:SONiC, our next app, was commissioned by the American Composers Orchestra for the October 2015 SONiC Festival, and my idea for the music was to take sounds from a wide cross-section of the performers and composers in the festival and build the piece entirely out of those sounds. I asked the ACO to send out a call for materials to the festival’s participants, asking each interested musician to send me one note—a single note played on their instrument or sung—with the idea of building up a sort of “virtual ensemble” to represent the festival itself. I received a wonderfully diverse array of material to work with—including sounds from Andy Akiho, Alarm Will Sound (including Miles Brown, Michael Clayville, Erin Lesser, Courtney Orlando, and Jason Price), Clarice Assad, Christopher Cerrone, The Crossing, Melody Eötvös, Angelica Negron, Nieuw Amsterdams Peil (including Gerard Bouwhuis and Heleen Hulst), and Nina C. Young—and it was from these sounds that I built the app’s musical composition.

When you boot up variant:SONiC, nothing happens. But tap the screen and a sound will play, and that sound will trigger Josh’s visuals as well. Each sound is short, and you can keep tapping—there are up to ten sounds available to you at once, one for each finger, so you can almost begin to play the piece like an instrument. As with our other apps, each tap is triggering one randomly selected element of the composition at a time—but here there are 153 total sounds, so there’s a lot for you to explore. And with this Variant you get one additional interactive feature: hold down your finger, and whatever sound you’ve just triggered will start slowly looping. Thus you can use one hand, for example, to build up a stable group of repeating sounds, while the other “solos” over it by triggering new material. variant:SONiC is a free app, so it’s a great way to try out these new indeterminate pieces—but for those that don’t have access to an iOS device, here’s what it sounds like.

variant:SONiC is, for me, the first of our apps where the audio interactivity feels natural, coherent, and integral to the musical composition. And to me it illustrates how—particularly when working with touchscreen technology—indeterminacy quite naturally slides into interactivity with this kind of music. I’m not sure whether that’s just because iPhone and iPad users expect to be able to touch their screens and make things happen, or whether there’s something inherent in the medium that draws you in this direction. Maybe it’s just that having the tools on hand tempts you to use them; to a composer with a hammer, everything sounds like a nail.

In the end, though, much as I’m finding interactive music to be intriguing and rewarding, I do still believe that there’s a place for indeterminate digital music that isn’t interactive. I hope to work more in this direction in the future—though to call it merely “passive” indeterminate music sounds just as insulting as calling a regular old piece of music “determinate.” I guess what I’m trying to say is that, despite all these wonderfully interactive technologies we have available to us today, there’s still something to be said for just sitting back and listening to a piece of music. And maybe that’s why I’ve called this series Indeterminacy 2.0 rather than Interactivity 1.0.

Next week, our season finale: “The Music of Catastrophe.”

Indeterminacy 2.0: In Which We Agonize Over Terminology

variant:flare

Image from variant:flare by Joshue Ott and Kenneth Kirschner

Discover the recipes you are using and abandon them.

—Brian Eno and Peter Schmidt, Oblique Strategies

I have a certain tendency to refer to my indeterminate pieces as “my indeterminate pieces.” But wait, aren’t they generative? Haven’t we forgotten about Brian Eno?

In this week’s episode, I want to try to hash out some of the terminology floating around this sort of music. This is very much intended as a conversation: I don’t feel I have the right answers myself, and I’d like to hear in the comments below your thoughts, disagreements, and differing interpretations of what all these words mean—or should mean. A certain amount of this is squares and rectangles, and more than one of these terms could apply to any given piece of music. So let me pick a few of the key concepts out there and start to try to pin them down a little…

CHANCE PROCEDURES

Let’s start with an easy one: the Cagean distinction between chance procedures and indeterminacy. Chance procedures are the incorporation of a random element into either the process of creating a composition or into the actual structure of the composition itself; as such, they can just as much be used to create a fixed work as an indeterminate one. The classic example is Cage’s own Music of Changes, in which he employed coin tosses and the I Ching to decide the notes of a fixed, fully determined score. Chance procedures can give you a piece that’s indeterminate or a piece that’s not, and an indeterminate piece can be based on chance or on some other uncertain element (such as performer choice, as in Feldman’s graph pieces).

INDETERMINATE MUSIC

Let’s start by calling an indeterminate composition a piece of music that differs with every realization, whether that realization is a performance or a recording—and whether that difference comes from chance procedures, performer or listener choice, or a generative process. Cage called The Art of the Fugue an indeterminate composition because, while each and every note is fixed by the score, Bach never specified the instrumentation, and so every realization is potentially different. Push this too far and the concept disintegrates: after all, every performance of any score will be different every time, with big huge millisecond differences between even the most robotic human performances. So we need to try to isolate the concept a little more, perhaps by saying that there’s an aspect of the composition’s fundamental structure that opens itself to uncertainty and difference, and that this structural rather than performative element is what properly characterizes indeterminacy.

Throwing out some examples beyond the odd one of The Art of the Fugue:

  • Feldman’s early indeterminate pieces—starting from the Projections series, in which the performer chooses their own notes among “high,” “middle,” and “low” pitch ranges—or the Durations pieces, in which the pitches themselves are fixed but the tempos and rhythms are free.
  • A huge number of Cage pieces, from Imaginary Landscape No. 4 with its radios, up through the late Number Pieces with their wide-open spaces and vast freedoms of time and sound.
  • Stockhausen’s Klavierstück XI, which forms a sort of musical mobile of individually fixed elements that can be permutated into endlessly varying combinations (and which, as we’ll see next week, is similar to the approach I’ve been trying with software-based indeterminate composition).

Or there’s Terry Riley’s In C, there’s Ives and Cowell, Brown and Wolff—the list goes on. But let’s change gears…

GENERATIVE MUSIC

Okay, now we get to Eno. His concept of generative music starts with 1975’s Discreet Music and runs to the present with interactive apps such as Bloom and Scape. I’m tempted to say that generative music is music that uses a clearly defined process—an algorithm, a set of axioms or rules, a list of instructions—to create a composition that evolves strictly according to that process but not necessarily deterministically; the end result can be fixed or indeterminate, autonomous or interactive. Eno cites wind chimes as an example of a generative music. Or consider Reich’s It’s Gonna Rain: we can think of this as a generative work in that it takes a specified process (two tape loops gradually phasing) and lets that process run—but interestingly, it’s not an indeterminate composition; the result is a fixed recording. Conversely, a generative piece can be indeterminate—through built-in randomness, external inputs, or other means. We’ll need a Venn diagram for all this.

Perhaps one way to differentiate generativity from indeterminacy is as process vs. result: generativity is a way of making something; indeterminacy is a trait of the thing made. So you can have a generative indeterminate work, or a generative determinate work. Could you have a non-generative indeterminate work? Perhaps Feldman’s approach to indeterminacy might fit the bill: his focus was on performer choice, rather than system or chance. By the time you try to force a Projections or a Durations into the category of generative music, the whole concept has become blurred beyond recognition.

INTERACTIVE MUSIC

In a way, interactivity is an even bigger jump than the leap from certainty to indeterminacy. Let’s call an interactive composition a piece that actively integrates the listener or audience member into the structure of the composition itself. Or a piece in which the listener, rather than just the composer or performer, participates in the realization of the work. But perhaps a better word than “listener” would be “user”—because questions of interactivity arise naturally and almost automatically when dealing with indeterminate digital music, in a way that they don’t necessarily in a concert hall. The iPad apps I’ll talk about next week have, as we’ve worked on them, become far more focused on interactivity than on a more traditional, “passive” sense of indeterminacy, all through a process that has seemed both natural and inevitable.

Distinguishing interactive music becomes a question of the “stance” of the listener: Are they an active participant in the realization of the music? Do their actions alter or guide the development of the piece? Or do they experience the piece as they would a traditional work—sitting back, letting it unfold, just listening? Having now worked on both non-interactive indeterminate music and interactive indeterminate music (cue the Venn diagram again), I can tell you that this is a very, very significant difference from a compositional point of view. But more on that next week.

I want to throw out one final term—though more as a proposal, as I’m not sure it actually exists yet:

ADAPTIVE MUSIC

My indeterminate pieces all use software-based chance procedures to reshuffle their component parts, creating random juxtapositions of different fragments of material—a sort of digital mobile (pronounced like Calder rather than your phone). But in listening to them, I can’t escape the feeling that sometimes the results are better and sometimes worse; the roll of the dice sometimes gets it right—sometimes that random number generator is just on. From this, I got to thinking about how I could cheat—how I could bias the piece toward better results, more reliably and more consistently than pure chance alone. One idea was to “weight” the outcomes, perhaps with something as crude as a “thumbs up” button: the piece hits upon a good concatenation of material and you hit “yes,” which then statistically biases it to move more often toward that part of its “compositional space” in the future. As you develop the piece, you’d gradually refine these “likes,” and the piece would slowly adapt itself toward a set of outcomes that sound better and better. So it’s random, but a structured, learned, constrained randomness. You could also have different versions or interpretations of the piece: the composer could painstakingly develop an adaptive profile—their vision for the piece—but then the listener could reset it and develop their own interpretation, their own way of weighting the dice and biasing the outcomes of chance toward a better or different musicality. We haven’t tried this yet, and I’m not sure if anyone else has, but I feel like it’s a promising idea.
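A minimal sketch of that "thumbs up" idea might look like the following. Everything here is hypothetical—the fragment names, the weight increment, the idea of keying weights to individual fragments rather than to regions of a compositional space—but it shows the basic mechanism: a weighted random choice whose weights are nudged by listener approval.

```python
import random

class AdaptivePlayer:
    """Weighted chance: random selection, gradually biased by 'likes'."""

    def __init__(self, fragments):
        # Start with uniform weights: pure, unbiased chance.
        self.weights = {f: 1.0 for f in fragments}
        self.current = None

    def next_fragment(self, rng=random):
        # Weighted random choice over the pool of fragments.
        pool = list(self.weights)
        self.current = rng.choices(pool, weights=[self.weights[f] for f in pool])[0]
        return self.current

    def thumbs_up(self, amount=0.5):
        # The listener liked this result: bias future rolls toward it.
        self.weights[self.current] += amount

# Hypothetical fragment names, standing in for chunks of audio material.
player = AdaptivePlayer(["strings_a", "piano_b", "bells_c"])
fragment = player.next_fragment()
player.thumbs_up()  # the piece now drifts, statistically, toward this material
```

Resetting the weights to uniform would give a listener a blank slate on which to develop their own interpretation, as described above.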

There are plenty of terms still out there—aleatoric music, stochastic music, algorithmic music—that I don’t have the space or, in truth, the clarity to start locking down right now. But maybe in the end it’s better to think of all these conflicting and converging words as adjectives rather than nouns—ways of describing a piece rather than static categories or classifications. Because, again, push any of them too far and they become absurd. What piece of music doesn’t have an element of chance back there somewhere in the history of its creation? What performance of a scored composition isn’t innately indeterminate in its microphysics, its performative nuance? And don’t forget that any playing of any recording is indeterminate as well—across different speakers or headphones, in different rooms, amidst different sonic environments, with different interruptions, pauses, and distractions.

And doesn’t every composition, “generative” or not, involve a formal system that structures it, that organizes and limits it within the vast space of all possible music? Consider the following perfectly legitimate set of rules and constraints: Take a harp and knock it on its side. Now, instead of playing it with your fingers, build an elaborate system of mechanical hammers to strike the strings. Tune those strings to a strange compromise in temperament that allows modulation between keys at the cost of twisting certain intervals away from their ostensibly more natural whole number ratios. Oh, and 88 might be a good number of strings to use.

Is this an interactive composition? A generative one? It has a clearly defined system of axioms and rules that constrain the possible space of musical results. But is it really necessary to say that a piano is one big indeterminate composition?

I’d like to see the answers to this and every other question I’ve asked today in the comments below, so that I can get it all figured out myself.

Indeterminacy 2.0: How to Burn Your Harpsichord

variant:blue

Image from variant:blue by Joshue Ott and Kenneth Kirschner

Do you know the Brandenburg Concerto where Bach kicks over the harpsichord and lights it on fire? You know, No. 5, with its ripping keyboard solo that can only be described as a sort of “Baroque shredding.” I’ve heard that solo referred to as an audition piece for Bach himself, who was known as a fearsome improviser and may have used it to pummel prospective patrons into submission.

I open a discussion of indeterminacy and digital technology with this anecdote because I think it’s important to remember that indeterminacy’s country cousin—improvisation—goes to the very roots of music as we know it and beyond. In fact, the question to ask may rather be one of where our notions of fixity and certainty in music come from. We barely even have a word for it—we don’t exactly talk much about “determinacy”—and yet beneath all our ideas about music runs this assumption: that a piece of music is a stable thing, that it has a fixed essence, that when we talk about the Brandenburg Concerto No. 5 we’re talking about THE Brandenburg Concerto No. 5. Not some crazy ephemeral solo Bach might have thrown in there one day.

Which is too bad, really, because it is only through chance, chaos, and the unexpected that music, like life, evolves. Perhaps there was once a time when music seemed less rigid, less fixed—a liquid, not a solid. Perhaps in an age of folk songs, of an unpredictably malleable oral tradition, a more fluid image of music might have held. But something changed that, and I would suggest that it was the advent of the score—of a written tradition in music. With that, the possibility of a true text, a stable essence of a fixed piece of music, locked down once and for all forever, comes into being. And the fixity of the score, the perceived tyranny of its certainty and stability, was very much what Feldman, Cage, and the composers of the New York School of the 1950s were rebelling against when they pioneered our modern ideas of indeterminate music.

But there is another form of stability in music as we know it today, another kind of “determinacy” that underlies our sense of what music is, and can be: the recording. From 1877’s first needle drop onwards, we have known music as much from recordings as we have from scores—and for non-musicians, much more so. The recording has conquered the world, and in doing so has become music’s new fixity—its new certainty.

But why should a recording be the same every time you listen to it? Until recently, this question wouldn’t even have made sense. You had to physically scratch the sound onto those old wax cylinders, and one can only imagine the mess it would have been to try un-scratching it. You can’t re-lathe a vinyl record, or reach into an old-fashioned compact disc and start moving those microscopic pits around. But our notions of what our recordings are have not kept up with what our recordings actually are: digital data. Code. Our recordings are no longer hardware—they’re software. And yet we listen to an mp3 or an online music stream in exactly the same way we have listened to CDs, vinyl records, cassettes, 78 rpm phonographs, and wax cylinders—starting at the beginning, playing linearly to the end, and hearing music that’s exactly the same on each listen.

But there’s no reason why this must be the case. With digital music, it’s possible to build complexity, chance, and intelligence into the recording itself, to create a music that is ever-changing and open-ended, indefinite in duration and indeterminate in composition—to create an indeterminate recording. A listener can press play on a piece of recorded music that will be different on every listen, that can be heard for as long or as short a time as they wish, and that will continually grow and evolve for as long as they choose to listen.
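In miniature, an "indeterminate recording" of this kind could be sketched as a pool of audio fragments plus a chance procedure that strings them together anew on every listen. The filenames below are hypothetical placeholders, and a real piece would need far richer logic than a uniform random choice; the sketch only shows the shape of the idea: press play, get a new realization, listen for as long as you like.

```python
import random

# Hypothetical fragment pool, standing in for the piece's audio material.
FRAGMENTS = ["intro_a.wav", "chord_b.wav", "texture_c.wav", "silence_d.wav"]

def realization(seed=None):
    """Yield an endless, ever-different sequence of fragments to play.

    Each call to this function is one 'press of play': a fresh realization
    of the same piece, indefinite in duration and different on every listen.
    """
    rng = random.Random(seed)
    while True:
        yield rng.choice(FRAGMENTS)

# One listener's session: take as many fragments as they care to hear.
listen = realization()
session = [next(listen) for _ in range(12)]
```

The seed parameter is only there to make the sketch testable; a listener's actual realization would be unseeded, and so unrepeatable.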

On and off over the last decade, I’ve been experimenting with developing just this sort of music—with some successes, plenty of failures, and hopefully a little insight along the way. This series of articles will describe some of those experiments, others that haven’t yet been tried, and the hopes and ideas underlying them, all on the theme of the possibilities that exist at the intersection of technology and indeterminacy. Tune in next week for an attempt to figure out exactly what it is we’re talking about here.

***


Kenneth Kirschner
Photo by Molly Sheridan

Composer Kenneth Kirschner was born in 1970 and lives in New York City. His music is freely available at kennethkirschner.com.