
From the Machine: Conversations with Dan Tepfer, Kenneth Kirschner, Florent Ghys, and Jeff Snyder

Over the last three weeks, we’ve looked at various techniques for composing and performing acoustic music using computer algorithms, including realtime networked notation and algorithmic approaches to harmony and orchestration.

Their methods differ substantially, from the pre-compositional use of algorithms, to the realtime generation of graphic or traditionally notated scores, to the use of digitally controlled acoustic instruments and musical data visualizations.

This week, I’d like to open up the conversation to include four composer/performers who are also investigating the use of computers to generate, manipulate, process, and display musical data for acoustic ensembles. While all four share a similar enthusiasm for the compositional and performance possibilities offered by algorithms, their methods differ substantially, from the pre-compositional use of algorithms, to the realtime generation of graphic or traditionally notated scores, to the use of digitally controlled acoustic instruments and musical data visualizations.

Pianist/composer Dan Tepfer, known both for his expressive jazz playing and his interpretations of Bach’s Goldberg Variations, has recently unveiled his Acoustic Informatics project for solo piano. In it, Tepfer uses realtime algorithms to analyze and respond to note data played on his Yamaha Disklavier piano, providing him with an interactive framework for improvisation. Through the use of musical delays, transpositions, inversions, and textural elaborations of his input material, he is able to achieve composite pianistic textures that would be impossible for either human performer or computer to realize alone.

Composer Kenneth Kirschner has been using computers to compose electronic music since the 1990s, manipulating harmonic, melodic, and rhythmic data algorithmically to create long-form works from minimal musical source material. Several of his electronic works have recently been adapted to the acoustic domain, raising questions of musical notation for pieces composed without reference to fixed rhythmic or pitch grids.

Florent Ghys is a bassist and composer who works in both traditional and computer-mediated compositional contexts. His current research is focused on algorithmic composition and the use of realtime notation to create interactive works for acoustic ensembles.

Jeff Snyder is a composer, improviser, and instrument designer who creates algorithmic works that combine animated graphic notation and pre-written materials for mixed ensembles. He is also the director of the Princeton Laptop Orchestra (PLOrk), providing him with a wealth of experience in computer networking for live performance.

THE ROLE OF ALGORITHMS

JOSEPH BRANCIFORTE: How would you describe the role that computer algorithms play in your compositional process?

KENNETH KIRSCHNER: I come at this as someone who was originally an electronics guy, with everything done on synthesizers and realized electronically. So this computer-driven approach is just the way I work, the way I think compositionally. I’ve never written things with pencil and paper. I work in a very non-linear way, where I’m taking patterns from the computer and juxtaposing them with other patterns—stretching them, twisting them, transposing them.

I have to have that feedback loop where I can try it, see what happens, then try it again and see what happens.

A lot of my obsession over the last few years has been working with very reduced scales, often four adjacent semitones, and building patterns from that very restricted space. I find that as you transpose those and layer them over one another, you get a lot of very interesting emergent patterns. In principle, you could write that all out linearly, but I can’t imagine how I would do it, because so much of my process is experimentation and chance and randomness: you take a bunch of these patterns, slow this one down, transpose this one, layer this over that. It’s very fluid, very quick to do electronically—but hopelessly tedious to do if you’re composing in a linear, notated way. My whole development as a composer presupposes that realtime responsiveness. I have to have that feedback loop where I can try it, see what happens, then try it again and see what happens.

FLORENT GHYS: That’s very interesting, because we don’t come from the same background, but we ended up with algorithmic music for the same reasons. I come from a background of traditional acoustic music composition: writing down parts and scores for musicians. But I realized that the processes I was using as I was composing—canons, isorhythms, transpositions, stretching out durations—were very easy to reproduce in Max/MSP. I began by working with virtual instruments on the computer, fake sounds that gave me an idea of what it might sound like with a real ensemble. It was fascinating to listen to the results of an algorithmic process in real time—changing parameters such as density of rhythm, rhythmic subdivision, transposition, canonic relationships—and being able to hear the results on the spot. Even something as simple as isorhythm—a cell of pitches and a cell of rhythms that don’t overlap—writing something like that down takes some time. With an algorithmic process, I can go much faster and generate tons of material in a few minutes, rather than spending hours in Sibelius just to try out an idea.

DAN TEPFER: I’ve used algorithms in a number of ways. I’ve done stuff where I’ve generated data algorithmically that then gets turned into a relatively traditional composition, with notes on a page that people play. I’ve also experimented with live notation, which is more improvisationally based, but with some algorithmic processing in there too. And then there’s the stuff I’ve been doing recently with the Disklavier, where the algorithms react to what I’m improvising on the piano in real time.

With the live notation stuff, I’ve done it with string quartet, or wind quartet, and me on piano. I did one show where it was both of them together, and I could switch back and forth or have them both playing. I have a controller keyboard on top of the piano, and I can play stuff that gets immediately sent out as staff notation. There’s some processing where it’ll adapt what I’m playing to the ranges of each instrument, doubling notes or widening the register. Then there are musical controls where I can save a chord and transform it in certain ways just by pushing a button. At the rhythmic level, there’s usually a beat happening and this stuff is floating above it, a bit of an improvisational element where the musicians can sink into the groove.

JEFF SNYDER: I’ve got two main pieces that I would say fall into this category of realtime notation. The first is called Ice Blocks, which combines graphic notation with standard notation for open instrumentation. And then another one called Opposite Earth, which uses planets’ orbits as a graphic notation device. There are ten concentric circles, each one assigned to a performer. Each musician is a particular planet on an orbit around the sun. As the conductor, I can introduce vertical or horizontal lines from the center. The idea is that when your planet crosses one of those lines, you play a note. I have control over how fast each planet’s orbit is, as well as the color of the lines, which refer to pitch materials. There are five different colors that end up being five different chords. So it sets up a giant polyrhythm based on the different orbits and speeds.

Each planet can also rotate within itself, with additional notches functioning the same way as the lines do, although using unpitched sounds. That basically gives me another rhythmic divider to play with. I can remove or add orbits to thin out the texture or add density. It’s interesting because the piece allows me to do really complicated polyrhythms that couldn’t be executed as accurately with traditional notation. You might be playing sixteen against another person’s fifteen, creating this really complicated rhythmic relationship that will suddenly line up again. This makes it really easy: all you’re doing is watching a line, and each time you cross, you make a sound. You can do it even if the players aren’t particularly skilled.

PERFORMANCE PRACTICE AND USER EXPERIENCE

JB: I’m really interested in this question of performer “user experience” when working with realtime notational formats. What were the performers’ responses to dealing with your dynamic graphic notation, Jeff?

JS: The piece was played by PLOrk, which is a mix of composition grad students, who are up for anything, and then undergrads who are a mix of engineers and other majors. They get excited about the fact that it’s something different. But I’ve worked with more conservative ensembles and had performers say, “I’ve worked for so many years at my instrument, and you’re wasting my skills.” So people can have that response as well when you move away from standard notation.

With PLOrk, I was able to workshop the piece over a few months and we would discover together: “Is this going to be possible? Is this going to be too difficult? Is this going to be way too easy?” I could experiment with adding staff notation or using different colors to represent musical information. For me, it was super valuable because I wasn’t always able to gauge how effective certain things would be in advance. None of this stuff has a history, so it’s hard to know whether people can do certain things in a performance situation. Can people pay attention to different gradations of blue on a ring while they’re also trying to perform rhythms? I just have to test it, and then they’ll tell me whether it works.

JB: There’s always that initial hurdle to overcome with new notational formats. I’ve been using traditional notation in my recent work, albeit a scrolling version where performers can only see two measures at a time, but I remember a similar adjustment period during the first rehearsal with a string quartet. We set everyone up, got the Ethernet connections between laptops working, tested the latencies—everything looked good. But for the first fifteen minutes of rehearsal, the performers were all complaining that the software wasn’t working properly. “It just feels like it’s off. Maybe it’s not synced or something?” So I did another latency check, and everything was fine, under two milliseconds of latency.

DT: So the humans weren’t synced!

It’s just a new skill. Once performers get used to it, then they don’t want it to change.

JB: I reassured them that everything was working properly, and we kept rehearsing. After about 30 minutes, they started getting the hang of the scrolling notation—things were beginning to sound much more comfortable. So after rehearsal, as everyone was packing up, I said, “Is there anything you’d like me to change in the software, anything that would make the notation easier to deal with?” And they all said, “No! Don’t change a thing. It’s perfect!” And then I realized: it’s just a new skill. Once performers get used to it, then they don’t want it to change. They just need to know that it works and that they can rely on it.

But beyond the mechanics of using the software, I sometimes wonder whether it’s harder for a performer to commit to material that they haven’t seen or rehearsed in advance. They have no idea what’s coming next and it’s difficult to gain any sense of the piece as a whole.

FG: I think you’re touching on something related to musicianship. In classical music, the more you play a piece, the better you’re going to understand the music, the more you’re going to be able to make it speak and refine the dynamics. And within the context of the ensemble, you’ll understand the connections and coordination between all the musicians. So the realtime notation is going to be a new skill for musicians to learn—to be able to adapt to material that’s changing. It’s also the job of the composer to create a range of possibilities that musicians can understand. For instance, the piece uses certain types of rhythms or scales or motives; a performer might not know exactly what it’s going to be, but they understand the range of things that can happen.

KK: They need to be able to commit to the concept of the piece, rather than any of the specific details of the narrative.

DT: I think a key word here is culture. You’re seeing a microcosm of that when, in the time span of a rehearsal, you see a culture develop. At the beginning of the rehearsal, musicians are like, “It’s not working,” and then after a certain time they’re like, “Oh, it is working.” Culture is about expectations about what is possible. And if you develop something in the context of a group, where it is understood to be fully possible, then people will figure out ways to do it. It might start with a smaller community of musicians who can do it at first. But I think we’re probably not far from the time when realtime sight-reading will just be a basic skill. That’s going to be a real paradigm shift.

I think we’re probably not far from the time when realtime sight-reading will just be a basic skill. That’s going to be a real paradigm shift.

JB: How do you deal with the question of notational pre-display in your live notation work, Dan?

DT: It happens pretty much in real time.

JB: So you play a chord on your MIDI keyboard and it gets sent out to the musicians one measure at a time?

DT: They’re just seeing one note. There’s no rhythmic information. The real difficulty is that I have to send the material out about a second early in order to have any chance of maintaining consistency in the harmonic rhythm. It takes some getting used to, but it’s surprisingly intuitive after a while.

JS: That’s something I wasn’t able to address in the planets piece by the time of the performance: there was no note preparation for them, so lines just show up. I told the performers, “Don’t worry if a line appears right before your planet is about to cross it. Just wait until the next time it comes around again.” But it still stressed them out. As performers, they’re worried about “missing a note,” especially because the audience could see the notation too. So perhaps in the next version I could do something where the lines slowly fade in to avoid that issue.

JB: I have to sometimes remind myself that the performers are part of the algorithm, too. As much as we want the expanded compositional possibilities that come from working with computers, I think all of us value the process of working with real musicians.

KK: With these recent acoustic adaptations of my pieces, it was a whole different experience hearing it played with an actual pianist and cellists. It was a different piece. And I thought, “There is something in here that I want to pursue further.” There’s just a level of nuance you’re getting, a level of pure interpretation that’s not going to come through in my electronic work. But the hope is that by composing within the electronic domain, I’m stumbling upon compositional approaches that one may not find writing linearly.

COMPUTER AS COMPOSITIONAL SURROGATE

JB: I want to discuss the use of the computer as a “compositional surrogate.” The premise is that instead of working out all of the details of a piece in advance, we allow the computer to make decisions on our behalf during performance, based on pre-defined rules or preferences. There’s an argument that outsourcing these decisions to the computer is an abdication of the fundamental responsibility of being a composer, the subjective process of selection. But I’ve begun to see algorithm design as a meta-compositional process: uncovering the principles that underlie my subjective preferences and then embedding them into the algorithmic architecture itself.

KK: Right. There’s a sense that when something works musically, there’s a reason for it. And what we’re trying to do is uncover those reasons; the hope is that some of those rules that are affecting our aesthetic judgment are able to be discovered. Once you begin to codify some of that, you can offload it and shift some of the compositional responsibility to the computer. The idea is to build indeterminate pieces that have a degree of intelligence and adaptation to them. But that requires us to understand what some of those underlying mechanisms are that make us say “this is good” or “this is bad.”

For me, something might sound good one day, and another day I might hate it. I don’t know if you’re ever going to find a “rule” that can explain that.

FG: I don’t know. I’m a little skeptical. For me, something might sound good one day, and another day I might hate it. I don’t know if you’re ever going to find a “rule” that can explain that; there are so many factors that go into musical perception.

JB: A dose of skepticism is probably warranted if we’re talking about machines being able to intervene in questions of aesthetics. But to me, the beauty of designing a composer-centric framework is that it allows you to change your preferences from day to day. You can re-bias a piece to conform to whatever sounds good to you in the moment: a different tempo, more density, a slightly different orchestration. I’m not sure that we even need to understand the nature of our preferences, or be able to formalize them into rules, in order to have the computer act as an effective surrogate. Economists have a concept called “revealed preference,” where instead of looking at what consumers say they want, you look at their purchasing habits. That kind of thing could be applied to algorithm design, where the algorithm learns what you like simply by keeping track of your responses to different material.

KK: I’ve had a similar thought when working on some of my indeterminate pieces—that you want a button for “thumbs up” or “thumbs down.” If you could record the aggregate of all those decisions, you could begin to map them to a parameter space that has a greater chance of giving you good outcomes. You could also have different profiles for a piece. For example, I could do my “composer’s version” that contains my preferences and builds the piece in a certain direction; then I could hand it off to you, hit reset, and have you create your own version of the piece.

FG: In a lot of the algorithms I’ve been designing lately, I have a “determinacy-to-randomness” parameter where I can morph from something I’ve pre-written, like a melody or a series of pitches, to a probabilistic set of pitches, to a completely random set of pitches. With the probabilities, I allow the computer to choose whatever it wants, but I tell it, “I’d like to have more Gs and G#s, but not too many Cs.” So, weighted probabilities. We know that the random number generator in Max/MSP, without any scaling or probabilities, sounds like crap.

KK: It needs constraints.

JB: Finding ways to constrain randomness—where it’s musically controlled, but you’re getting new results with every performance—that’s become a major compositional concern for me. As an algorithm grows from initial idea to a performance-ready patch, the parameters become more abstract and begin to more closely model how I hear music as a listener. At the deepest level of aesthetic perception, you have things like balance, long-range form, tension/resolution, and expectation. I think probabilistic controls are very good at dealing with balance, and maybe not as good with the others.

FG: Yeah, when you deal with algorithms you go to a higher level of thinking. I’ve done things where I have a pattern that I like, and I want the computer to generate something else like it. And then eventually I know I want it to transform into another pattern or texture. But the tiny details of how it gets from A to B don’t really matter that much. It’s more about thinking of the piece as a whole.

NETWORKED NOTATION

JB: Jeff, I wanted to ask you about something a little more technical: when dealing with live notation in PLOrk, are you using wired or wireless connections to the performers’ devices?

JS: I’ve done live notation with both wireless and wired connections. In any kind of networking situation, we look at that question on a case-by-case basis. If we’re going to do wired, it simplifies things because we can rely on reasonable timing. If we’re going to do wireless, we usually have issues of sync that we have to deal with. For a long time, our solution has been LANdini, which was developed by Jascha Narveson. Recently, Ableton Link came out and that simplifies things. So if you don’t need certain features that LANdini offers—if you just need click synchronization—then Link is the simpler solution. We’ve been doing that for anything in which we just need to pulse things and make sure that the pulses show up at the same time, like metronomes.

JB: In my notation system, there’s a cursor that steps through the score, acting as a visual metronome to keep the musicians in sync. So transfer speed is absolutely critical there to make sure there’s as little latency as possible between devices. I’ve been using wired Ethernet connections, which ensures good speed and reliability, but it quickly becomes a real mess on stage with all the cables. Not to mention the hundreds I’ve spent on Ethernet adapters! Perhaps the way to do it is to have Ableton Link handle the metronome and then use wireless TCP/IP to handle the notation messages.

JS: That’s what I was just about to suggest. With Link, you can actually get information about which beat number you’re on, it’s not just a raw pulse.

JB: Does it work well with changing time signatures?

JS: That’s a good question, I haven’t tested that. I have discovered that any tempo changes make it go nuts. It takes several seconds to get back on track when you do a tempo change. So it’s limited in that way. But there are other possibilities that open up when you get into wireless notation. Something I’ve really wanted to do is use wireless notation for spatialization and group dynamics. So say you had a really large ensemble and everybody is looking at their own iPhone display, which is giving them graphic information about their dynamics envelopes. You could make a sound move through an acoustic ensemble, the same way electronic composers do with multi-speaker arrays, but with a level of precision that couldn’t be achieved with hand gestures as a conductor. It’d be easily automated and would allow complex spatial patterns to be manipulated, activating different areas of the ensemble with different gestures. That’s definitely doable, technically speaking, but I haven’t really seen it done.

BRINGING THE COMPOSER ON STAGE

Do you think that having the composer on stage as a privileged type of performer is potentially in conflict with the performers’ ownership of the piece?

JB: With this emerging ability for the composer to manipulate a score in realtime, I wonder what the effects will be on performance culture. Do you think that having the composer on stage as a privileged type of performer is potentially in conflict with the performers’ ownership of the piece?

FG: Bringing the composer on stage changes the whole dynamic. Usually instrumentalists rule the stage; they have their own culture. Now you’re up there with them, and it totally changes the balance. “Whoa, he’s here, he’s doing stuff. Why is he changing my part?”

JB: Right, exactly. In one of my early realtime pieces, I mapped the faders of a MIDI controller to the individual dynamic markings of each member of the ensemble. This quickly got awkward in rehearsal when one of the violinists said half-jokingly, “It seems like I’m playing too loudly because my dynamic markings keep getting lower and lower.”

DT: It’s like Ligeti-style: you go down to twelve ps! [laughs]

JB: From that point, I became very self-conscious about changing anything. I suddenly became aware of this strange dynamic, where I’m in sole control of the direction of the piece but also sitting on stage alongside the musicians.

DT: You know, it’s interesting—come to think of it, in everything I’ve done with live notation, I’m performing as well. I think that makes a huge difference, because I can lead by example.

KK: And you’re also on stage and you’re invested as a performer. Whereas Joe is putting himself in this separate category—the puppet master!

FG: I wonder if it’s not also the perception of the instrumentalists in what they understand about what you’re doing. In Dan’s case, they totally get what he’s doing: he’s playing a chord, it’s getting distributed, they have their note. It’s pretty simple. With more complex algorithmic stuff, they might not get exactly what you’re doing. But then they see an obvious gesture like lowering a fader, and they think, “Oh, he’s doing that!”

DT: Something nice and simple to complain about!

FG: Otherwise, you’re doing this mysterious thing that they have no idea about, and then they just have to play the result.

KK: This is why I think it’s really important to start working with a consistent group of musicians, because we’ll get past this initial level and start to see how they feel about it in the longer term as they get used to it. And that might be the same response, or it might be a very different response.

DT: Has anyone taken that step of developing this kind of work over a couple of years with the same group of people? I think then you’ll see performers finding more and more ways of embracing the constraints and making it their own. That’s where it gets exciting.


Well, that about does it for our four-part series. I hope that these articles have initiated conversation about the many possible uses of computer algorithms in acoustic music, and perhaps provided inspiration for future work. I truly believe that the coupling of computation and compositional imagination offers one of the most promising vistas for musical discovery in the coming years. I look forward to the music we will collectively create with it.

Comments and questions about the series are very much welcome, either via the comments section below or any of the following channels:

josephbranciforte.com // facebook // twitter // instagram

From the Machine: Realtime Algorithmic Approaches to Harmony, Orchestration, and More

As we discussed last week, the development of a realtime score, in which compositional materials can be continuously modified, re-arranged, or created ex nihilo during performance and displayed to musicians as musical notation, is no longer the stuff of fantasy. The musical and philosophical implications of such an advance are only beginning to be understood and exploited by composers. This week, I’d like to share some algorithmic techniques that I’ve been developing in an attempt to grapple with some of the compositional possibilities offered by realtime notation. These range from the more linear and performative to the more abstract and computation-intensive; they deal with musical parameters ranging from harmony and form to orchestration and dynamics. Given the relative novelty and almost unlimited nature of the subject matter (to say nothing of the finite space allotted for the task), consider this a report from one person’s laboratory, rather than anything like a comprehensive survey.

HARMONY & VOICE LEADING

How might we begin to create something musically satisfying from just this raw vocabulary?

I begin with harmony, as it is the area that first got me interested in modeling musical processes using computer algorithms. I have always been fascinated by the way in which a mechanistic process like connecting the tones of two harmonic structures, according to simple rules of motion, can produce such profound emotional effects in listeners. It is also an area that seems to still hold vast unexplored depths—if not in the discovery of new vertical structures[1], at the very least in their horizontal succession. The sheer combinatorial magnitude of harmonic possibilities is staggering: consider each pitch class set from one to twelve notes in its prime form, multiplied by the number of possible inversional permutations of each one (including all possible internal octave displacements), multiplied by the possible chromatic transpositions for each permutation—for just a single vertical structure! When one begins to consider the horizontal dimension, arranging two or more harmonic structures in succession, the numbers involved are almost inconceivable.

The computer is uniquely suited to dealing with the calculation of just such large data sets. To take a more realistic and compositionally useful example: what if we wanted to calculate all the inversional permutations of the tetrachord {C, C#, D, E} and transpose them to each of the twelve chromatic pitch levels? This would give us all the unique pitch class orderings, and thus the complete harmonic vocabulary, entailed by the pitch class set {0124}, in octave-condensed form. These materials might be collected into a harmonic database, one which we can sort and search in musically relevant ways, then draw on in performance to create a wide variety of patterns and textures.

First we’ll need to find all of the unique orderings of the tetrachord {C, C#, D, E}. A basic law of combinatorics states that there will be n! distinct permutations of a set of n items. This (to brush up on our math) means that for a set of 4 items, we can arrange them in 4! (4 x 3 x 2 x 1 = 24) ways. Let’s first construct an algorithm that will return the 24 unique orderings of our four-element set and collect them into a database.

example 1

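For readers who prefer a text-based sketch, the same combinatoric step might look like this in Python; the pitch-class encoding {C, C#, D, E} = {0, 1, 2, 4} is the only assumption.

from itertools import permutations

# The tetrachord {C, C#, D, E} as pitch classes
tetrachord = [0, 1, 2, 4]

# All 4! = 24 distinct orderings, collected into a simple "database" (a list)
database = [list(p) for p in permutations(tetrachord)]

print(len(database))   # 24
print(database[:3])    # [[0, 1, 2, 4], [0, 1, 4, 2], [0, 2, 1, 4]]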

Next, we need to transpose each of these 24 permutations to each of the 12 chromatic steps, giving us a total of 288 possible structures. To work something like this out by hand might take us fifteen or twenty minutes, while the computer can calculate such a set near-instantly.

example 2

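Continuing the sketch, the transposition step is a single pass over the 24 orderings and the 12 chromatic levels (mod 12, since the vocabulary is octave-condensed):

from itertools import permutations

orderings = [list(p) for p in permutations([0, 1, 2, 4])]   # the 24 permutations

# Transpose each ordering to all 12 chromatic levels:
# 24 x 12 = 288 structures, the complete octave-condensed vocabulary of {0124}.
vocabulary = [[(pc + t) % 12 for pc in perm]
              for t in range(12)
              for perm in orderings]

print(len(vocabulary))  # 288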

The question of what to do with this database of harmonic structures remains: how might we begin to create something musically satisfying from just this raw vocabulary? The first thing to do might be to select a structure (1-288) at random and begin to connect it with other structures by a single common tone. For instance, if the first structure we draw is number 126 {F# A F G}, we might create a database search tool that allows us to locate a second structure with a common tone G in the soprano voice.

example 3:
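A minimal sketch of such a common-tone search, under the assumption that each structure is stored low-to-high as bass, tenor, alto, soprano (so the last element is the soprano voice):

import random
from itertools import permutations

# Rebuild the 288-structure vocabulary of {0124}; voices are read here as
# bass -> tenor -> alto -> soprano (an assumption for this sketch).
orderings = [list(p) for p in permutations([0, 1, 2, 4])]
vocabulary = [[(pc + t) % 12 for pc in perm]
              for t in range(12) for perm in orderings]

def with_soprano(database, soprano_pc):
    """Return every structure whose top voice matches the given pitch class."""
    return [s for s in database if s[-1] == soprano_pc]

current = random.choice(vocabulary)              # e.g. [6, 9, 5, 7] = {F# A F G}
options = [s for s in with_soprano(vocabulary, current[-1]) if s != current]
next_structure = random.choice(options)          # common tone kept in the soprano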

To add some composer interactivity, let’s add a control that allows us to specify which voice to connect on the subsequent chord using the numbers 1-4 on the computer keypad. If we want to connect the bass voice, we can press 1; the tenor voice, 2; the alto voice, 3; or the soprano voice, 4. Lastly, let’s orchestrate the four voices to string quartet, with each structure lasting a half note.

example 4:

This is a very basic example of a performance tool that can generate a series of harmonically self-similar structures, connect them to one another according to live composer input, and orchestrate them to a chamber group in realtime. While our first prototype produces a certain degree of musical coherence by holding one of the four voices constant between structures, it fails to specify any rules governing the movement of the other three voices. Let’s design another algorithm whose goal is to control the horizontal plane more explicitly and keep the overall melodic movement a bit smoother.

A first approach might be to calculate the total melodic movement between the current structure and each candidate structure in the database, filtering out candidates whose total movement exceeds a certain threshold. We can calculate the total melodic movement for each candidate by measuring the distance in semitones between each voice in the current structure and the corresponding voice in the candidate structure, then adding together all the individual distances.[2]

example 5.0

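A sketch of this total-movement filter, assuming each structure is a list of MIDI note numbers with one entry per voice:

def total_movement(current, candidate):
    """Sum of absolute semitone distances, voice by voice (see note 2)."""
    return sum(abs(a - b) for a, b in zip(current, candidate))

def filter_by_movement(current, candidates, max_total):
    """Keep only candidates whose total melodic movement stays at or below the threshold."""
    return [c for c in candidates if total_movement(current, c) <= max_total]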

While this technique will certainly reduce the overall disjunction between structures, it still fails to provide rules that govern the movement of individual voices. For this we will need an interval filter, an algorithm that determines the melodic intervals created by moving from the current structure to each candidate and only allows through candidates that adhere to pre-defined intervallic preferences. We might want to prevent awkward melodic intervals such as tritones and major sevenths. Or perhaps we’d like the soprano voice to move by step (ascending or descending minor and major seconds) while allowing the other voices to move freely. We will need to design a flexible algorithm that allows us to specify acceptable/unacceptable melodic intervals for each voice, including ascending movement, descending movement, and melodic unisons.

example 5.1

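One possible shape for such an interval filter, expressed as a per-voice set of allowed signed intervals (the specific preference shown, soprano by step only, mirrors the example above):

def interval_filter(current, candidate, allowed_by_voice):
    """Pass a candidate only if each voice moves by an allowed signed interval.

    allowed_by_voice holds one set of signed semitone values per voice
    (negative = descending, 0 = melodic unison); None leaves a voice free.
    """
    return all(allowed is None or (b - a) in allowed
               for a, b, allowed in zip(current, candidate, allowed_by_voice))

# Example preferences: the soprano (last voice) moves by step only,
# while the lower three voices are unconstrained.
soprano_steps = {-2, -1, 1, 2}
preferences = [None, None, None, soprano_steps]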

A final consideration might be the application of contrapuntal rules, such as the requirement that the lowest and highest voices move in either contrary or oblique motion. This could be implemented as yet another filter for candidate structures, allowing a contrapuntal rule to be specified for each two-voice combination.

example 5.2

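The outer-voice rule might be sketched as one more predicate; classifying motion as contrary, oblique, or similar is standard, though treating it as a filter per voice pair is just one possible implementation:

def motion(a1, a2, b1, b2):
    """Classify the motion between two voices moving a1->a2 and b1->b2."""
    da, db = a2 - a1, b2 - b1
    if da == 0 and db == 0:
        return "static"
    if da == 0 or db == 0:
        return "oblique"
    return "contrary" if (da > 0) != (db > 0) else "similar"

def outer_voices_ok(current, candidate):
    """Require contrary or oblique motion between the bass and soprano."""
    return motion(current[0], candidate[0],
                  current[-1], candidate[-1]) in ("contrary", "oblique")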

Let’s create another musical example that implements these techniques to produce smoother movement between structures. We’ll broaden our harmonic palette this time to include four diatonic tetrachords—{0235}, {0135}, {0245}, and {0257}—and orchestrate our example for solo piano. We can use the same combinatoric approach as we did earlier, computing the inversional permutations of each tetrachord to develop our vocabulary. To keep the data set manageable, let’s limit generated material to a specific range of the piano, perhaps C2 to C6.

We’ll start by generating all of the permutations of {0235}, transposing each one so that its lowest pitch is C2, followed by each of the remaining three tetrachords. Before adding a structure to the database, we will add a range check to make sure that no generated structure contains any pitch above C6. If it does, it will be discarded; if not, it will be added to the database. We will repeat this process for each chromatic step from C#2 to A5 (A5 being the highest chromatic step that will produce in-range structures) to produce a total harmonic vocabulary of 2976 structures.

Let’s begin our realtime compositional process by selecting a random structure from among the 2976. In order to determine the next structure, we’ll run all of the candidates through our semitonal movement algorithm, calculating the voice-by-voice distances between the first structure and every other structure in the database. To reduce disjunction between structures, but avoid repetitions and extremely small harmonic movements, let’s allow total movements of between 4 and 10 semitones. All structures that fall within that range will then be passed through to the interval check algorithm, where they will be tested against our intervallic preferences for each voice. Finally, all remaining candidates will be checked for violation of any contrapuntal rules that have been defined for each voice pair. Depending on how narrowly we set each of these filters, we might reduce our candidate set from 2976 to somewhere between 5 and 50 harmonic structures. We can again employ an aleatoric variable to choose freely among these, given that each has met all of our horizontal criteria.
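Chained together, the three filters might look something like the following self-contained sketch; the 4-to-10 semitone window mirrors the values above, and allowed_by_voice holds the per-voice interval preferences:

import random

def choose_next(current, vocabulary, allowed_by_voice):
    """Run the three horizontal filters, then choose freely among the survivors."""
    def movement(c):                        # total semitonal movement
        return sum(abs(a - b) for a, b in zip(current, c))

    def intervals_ok(c):                    # per-voice interval preferences
        return all(allowed is None or (b - a) in allowed
                   for a, b, allowed in zip(current, c, allowed_by_voice))

    def counterpoint_ok(c):                 # outer voices: contrary or oblique only
        lo, hi = c[0] - current[0], c[-1] - current[-1]
        return lo == 0 or hi == 0 or (lo > 0) != (hi > 0)

    pool = [c for c in vocabulary
            if 4 <= movement(c) <= 10
            and intervals_ok(c)
            and counterpoint_ok(c)]
    return random.choice(pool) if pool else None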

To give this algorithm a bit more musical interest, let’s also experiment with arpeggiation and slight variations in harmonic rhythm. We can define four arpeggio patterns and allow the algorithm to choose one randomly for each structure that is generated.

example 6:

While this example still falls into the category of initial experiment or étude, it might be elaborated to produce more compositionally satisfying results. Instead of a meandering harmonic progression, perhaps we could define formal relationships such as multi-measure repeats, melodic cells that recur in different voices, or the systematic use of transposition or inversion. Instead of a constant stream of arpeggios, the musical texture could be varied in realtime by the composer. Perhaps the highest note (or notes) of each arpeggio could be orchestrated to another monophonic instrument as a melody, or the lowest note (or notes) re-orchestrated to a bass instrument. These are just a few extemporaneous examples; the possibility of achieving more sophisticated results is simply a matter of identifying and solving increasingly abstract musical problems algorithmically.

Here’s a final refinement to our piano étude, with soprano voice reinterpreted as a melody and bass voice reinforced an octave below on the downbeat of each chord change.

example 6.1:

ORCHESTRATION

In all of the above examples, we limited our harmonic vocabulary to structures that we knew were playable by a given instrument or instrument group. Orchestration was thus a pre-compositional decision, fixed before the run time of the algorithm and invariable during performance. Let’s now turn to the treatment of orchestration as an independent variable, one that might also be subject to algorithmic processing and realtime manipulation.

There are inevitably situations where theoretical purity must give way to expediency if one wishes to remain a composer rather than a full-time software developer.

This is an area of inquiry unlikely to arise in electronic composition, due to the theoretical lack of a fixed range in electronic and virtual instruments. Once we commit to working with traditional acoustic instruments, however, the abstraction of “pure composition” must be reconciled with practical matters such as instrument ranges, questions of performability, and the creation of logical yet engaging individual parts for performers. This is a potentially vast area of study, one that cuts across disciplines such as mathematics/logic, acoustics, aesthetics, and performance practice. Thus, I must here reprise my caveat from earlier: the techniques I’ll present were developed to provide practical solutions to pressing compositional problems in my own work. While reasonable attempts were made to seek robust solutions, there are inevitably situations where theoretical purity must give way to expediency if one wishes to remain a composer rather than a full-time software developer.

The basic problem of orchestration might be stated as follows: how do we distribute n number of simultaneous notes (or events) to i number of instruments with fixed ranges?

Some immediate observations that follow:

a) The number of notes to be orchestrated can be greater than, less than, or equal to the number of instruments.
b) Instruments have varying degrees of polyphony, ranging from the ability to play only a single pitch to many pitches simultaneously. For polyphonic instruments, idiosyncratic physical properties of the instrument govern which kind of simultaneities can occur.
c) For a given group of notes and a fixed group of instruments, there may be multiple valid orchestrations. These can be sorted by applying various search criteria: playability/difficulty, adherence to the relationships among instrument ranges, or a composer’s individual orchestrational preferences.
d) Horizontal information can also be used to sort valid orchestrations. Which orchestration produces the least/most amount of melodic disjunction from the last event per instrument? From the last several events? Are there specific intervals that are to be preferred for a given instrument moving from one event to another?
e) For a given group of notes and a fixed group of instruments, there may be no valid orchestration.

Given the space allotted, I’d like to focus on the last three items, limiting ourselves to scenarios in which the number of notes to be orchestrated is the same as the number of instruments available and all instruments are acting monophonically.[3]

Let’s return to our earlier example of four-part harmonic events orchestrated for string quartet, with each instrument playing one note. By conservative estimate, a string quartet has a composite range of C2 to E7 (36 to 100 as MIDI pitch values). This does not mean, however, that any four pitches within that range will be playable by the instrument vector {Violin.1, Violin.2, Viola, Cello} in a one note/one instrument arrangement.

example 7


The most efficient way to determine whether a structure is playable by a given instrument vector—and, if so, which orchestrations are in-range—is to calculate the n! permutations of the structure and pass each one through a per-note range check corresponding to each of the instruments in the instrument vector. If each note of the permutation is in-range for its assigned instrument, then the permutation is playable. Here’s an example of a range check procedure for the MIDI structure {46 57 64 71} for the instrument vector {Violin.1 Violin.2 Viola Cello}.

example 8

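A sketch of this permutation-plus-range-check procedure; the ranges below are conservative approximations in MIDI note numbers, not necessarily the exact values used in the article:

from itertools import permutations

# Assumed conservative ranges, in MIDI note numbers.
RANGES = {
    "Violin.1": (55, 100),   # G3 - E7
    "Violin.2": (55, 100),
    "Viola":    (48, 88),    # C3 - E6
    "Cello":    (36, 81),    # C2 - A5
}
INSTRUMENTS = ["Violin.1", "Violin.2", "Viola", "Cello"]

def playable_orchestrations(structure, instruments=INSTRUMENTS):
    """Return every one-note-per-instrument assignment that is in range."""
    results = []
    for perm in permutations(structure):
        if all(RANGES[inst][0] <= note <= RANGES[inst][1]
               for inst, note in zip(instruments, perm)):
            results.append(dict(zip(instruments, perm)))
    return results

options = playable_orchestrations([46, 57, 64, 71])
print(len(options))   # 6, with the ranges assumed above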

By generating the twenty-four permutations of the harmonic structure ({46 57 64 71}, {57 64 71 46}, {64 71 46 57}, etc.) and passing each through a range check for {Violin.1 Violin.2 Viola Cello}, we discover that there are only six permutations that are collectively in-range. There is a certain satisfaction in knowing that we now possess all of the possible orchestrations of this harmonic structure for this group of instruments (leaving aside options like double stops, harmonics, etc.).

Although the current example only produces six in-range permutations, larger harmonic structures or instrument groups could bring the number of playable permutations into the hundreds, or even thousands. Our task, therefore, becomes devising systems for searching the playable permutations in order to locate those that are most compositionally useful. This will allow us to automatically orchestrate incoming harmonic data according to various criteria in a realtime performance setting, rather than pre-auditioning and choosing among the playable permutations manually.

There are a number of algorithmic search techniques that I’ve found valuable in this regard. These can be divided into two broad categories: filters and sorts. A filter is a non-negotiable criterion; in our current example, perhaps a rule such as “Violin 1 or Violin 2 must play the highest note.” A sort, on the other hand, is a method of ranking results according to some criterion. Perhaps we want to rank possible orchestrations by their adherence to the natural low-to-high order of the instruments’ ranges; we might order the instruments by the average pitch in their range and then rank permutations according to their deviation from that order. For a less common orchestration, we might decide to choose the permutation that deviates as much as possible from the instruments’ natural order.

example 9

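Continuing the sketch above (reusing options and INSTRUMENTS), the filter and sort might be expressed as follows; counting pairwise inversions of the ensemble’s expected high-to-low layout is just one possible deviation measure:

def top_note_in_violins(assignment):
    """Filter: Violin 1 or Violin 2 must carry the highest pitch."""
    return max(assignment, key=assignment.get) in ("Violin.1", "Violin.2")

def order_deviation(assignment, instruments=INSTRUMENTS):
    """Count instrument pairs whose pitches invert the ensemble's expected
    high-to-low layout (Violin 1 above Violin 2 above Viola above Cello)."""
    inversions = 0
    for i, hi in enumerate(instruments):
        for lo in instruments[i + 1:]:
            if assignment[hi] < assignment[lo]:
                inversions += 1
    return inversions

# Filter, then sort for maximum deviation from the natural layout.
valid = [a for a in options if top_note_in_violins(a)]
valid.sort(key=order_deviation, reverse=True)
least_conventional = valid[0]
# -> {'Violin.1': 57, 'Violin.2': 71, 'Viola': 64, 'Cello': 46}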

Applying this filter and sort to the instrument vector {Violin.1 Violin.2 Viola Cello} returns the permutation {57 71 64 46}. As we specified, the highest note is played by either Violin 1 or Violin 2 (Violin 2 in this case), while the overall distribution of pitches deviates significantly from the instruments’ natural low-to-high ordering. Mission accomplished.

Let’s also expand our filtering and sorting mechanisms from vertical criteria to include horizontal criteria. Vertical criteria, like the examples we just looked at, can be applied with information about only one structure; horizontal criteria take into account movement between two or more harmonic structures. As we saw in our discussion of harmony, horizontal criteria can provide useful information such as melodic movement for each voice, contrapuntal relationships, total semitonal movement, and more; this kind of data is equally useful in assessing possible orchestrations. In addition to optimizing the intervallic movement of each voice to produce more playable parts, horizontal criteria can be used creatively to control parameters such as voice crossing or harmonic spatialization.

example 10


Such horizontal considerations can be combined with vertical rules to achieve sophisticated orchestrational control. Each horizontal and vertical criterion can be assigned a numerical weight denoting its importance when used as a sorting mechanism. We might assign a weight of 0.75 to the rule that Violin 1 or Violin 2 contains the highest pitch, a weight of 0.5 to the rule that voices do not cross between structures, and a weight of 0.25 to the rule that no voice should play a melodic interval of a tritone. This kind of weighted search more closely models the multivariate process of organic compositional decision-making. Unlike the traditional process of orchestration, it has the advantage of being executable in realtime, thus allowing a variety of indeterminate data sources to be processed according to a composer’s wishes.
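As a sketch, a weighted search might score each candidate orchestration against the previous structure; the three criteria and weights follow the example just given, though the additive scoring scheme itself is only one possible design:

def weighted_score(prev, cand, instruments):
    """Score a candidate orchestration (a dict of instrument -> MIDI pitch)
    against weighted vertical and horizontal criteria."""
    score = 0.0

    # Vertical rule (weight 0.75): Violin 1 or 2 carries the highest pitch.
    if max(cand, key=cand.get) in ("Violin.1", "Violin.2"):
        score += 0.75

    # Horizontal rule (weight 0.5): no voices cross between structures,
    # i.e. the low-to-high ordering of instruments is unchanged.
    if (sorted(instruments, key=lambda i: prev[i])
            == sorted(instruments, key=lambda i: cand[i])):
        score += 0.5

    # Horizontal rule (weight 0.25): no voice leaps by a tritone (6 semitones).
    if all(abs(cand[i] - prev[i]) != 6 for i in instruments):
        score += 0.25

    return score

# best = max(candidates, key=lambda c: weighted_score(previous, c, INSTRUMENTS))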

While such an algorithm is perfectly capable of running autonomously, it can also be performatively controlled by varying parameters such as search criteria, weighting, and sorting direction. Other basic performance controls can be devised to quickly re-voice note data to different parts of the ensemble. Mute and solo functions for each instrument or instrument group can be used to modify the algorithm’s behavior on the fly, paying homage to a ubiquitous technique used in electronic music performance. The range check algorithm we developed earlier could alternatively be used to transform a piece’s instrumentation from performance to performance, instantly turning a work for string quartet and voice into a brass quintet. The efficacy of any of these techniques will, of course, vary according to instrumentation and compositional aims, but there is undoubtedly a range of compositional possibilities waiting to be explored in the domain of algorithmic orchestration.

IDEAS FOR FURTHER EXPLORATION

The techniques outlined above barely scratch the surface of the harmonic and orchestrational applications of realtime algorithms—and we have yet to consider several major areas of musical architecture such as rhythm, dynamics, and form! Another domain that holds great promise is the incorporation of live performer feedback into the algorithmic process. Given my goal of writing a short-form post and not a textbook, however, I’ll have to be content to conclude with a few rapid-fire ideas as seeds for further exploration.

Dynamics:

Map MIDI values (0-127) to musical dynamics markings (say, ppp to fff) and use a MIDI controller with multiple faders/pots to control musical dynamics of individual instruments during performance. Alternatively, specify dynamics algorithmically/pre-compositionally and use the MIDI controller only to modify them, re-balancing the ensemble as needed.
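A minimal sketch of the controller-to-dynamics mapping, assuming eight markings from ppp to fff:

DYNAMICS = ["ppp", "pp", "p", "mp", "mf", "f", "ff", "fff"]

def midi_to_dynamic(value):
    """Map a 0-127 controller value onto eight dynamic markings."""
    return DYNAMICS[min(value * len(DYNAMICS) // 128, len(DYNAMICS) - 1)]

print(midi_to_dynamic(0), midi_to_dynamic(64), midi_to_dynamic(127))
# ppp mf fff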

Rhythm:

Apply the combinatoric approach used for harmony and orchestration to rhythm, generating all the possible permutations of note attacks and rests within a given temporal space. Use probabilistic techniques to control rhythmic density, beat stresses, changes of grid, and rhythmic variations. Assign different tempi and/or meters to individual members of an ensemble, with linked conductor cursors providing an absolute point of reference for long-range synchronization.
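Both ideas can be sketched briefly: enumerate every attack/rest pattern on a small grid, or draw patterns probabilistically with a density control and simple beat stresses (all values below are illustrative):

import random
from itertools import product

# Every attack/rest pattern on an eight-slot grid: 2**8 = 256 possibilities.
all_patterns = list(product((0, 1), repeat=8))

def weighted_pattern(slots=8, density=0.6, stresses=(1.0, 0.5)):
    """Draw one pattern: density scales the chance of an attack, and
    stresses cycles over the grid to favor strong beats."""
    return [1 if random.random() < density * stresses[i % len(stresses)] else 0
            for i in range(slots)]

print(weighted_pattern())   # e.g. [1, 0, 1, 1, 0, 0, 1, 0]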

Form:

Create controls that allow musical “snapshots” to be stored, recalled, and intelligently modified during performance. As an indeterminate composition progresses, a composer can save and return to previous material later in the piece, perhaps transforming it using harmonic, melodic, or rhythmic operations. Or use an “adaptive” model, where a composer can comment on an indeterminate piece as it unfolds, using a “like”/”dislike” button to weight future outcomes towards compositionally desirable states.
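The adaptive idea might be sketched as a handful of stored snapshots with weights that a like/dislike vote nudges up or down (names and update factors are hypothetical):

import random

# Hypothetical stored "snapshots" of musical parameters, with adaptive weights.
snapshots = {"A": 1.0, "B": 1.0, "C": 1.0}

def vote(name, liked):
    """Nudge a snapshot's weight up or down from a like/dislike button."""
    snapshots[name] *= 1.25 if liked else 0.8

def next_snapshot():
    """Choose the next state, biased toward well-rated material."""
    names, weights = zip(*snapshots.items())
    return random.choices(names, weights=weights, k=1)[0]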

Performer Feedback:

Allow one or more members of an ensemble to improvise within given constraints and use pitch tracking to create a realtime accompaniment. Allow members of an ensemble to contribute to an adaptive algorithm, where individual or collective preferences influence the way the composition unfolds.

Next week, we will wrap up the series with a roundtable conversation on algorithms in acoustic music with pianist/composer Dan Tepfer, composer Kenneth Kirschner, bassist/composer Florent Ghys, and Jeff Snyder, director of the Princeton Laptop Orchestra.



1. These having been theoretically derived and catalogued by 20th-century music theorists such as Allen Forte. I should add here, however, that while theorists like Forte may have definitively designated all the harmonic species (pitch class sets of one to twelve notes in their prime form), the totality of individual permutations within those species still remains radically under-theorized. An area of further study that would be of interest to me is the definitive cataloguing of the n! inversional permutations of each pitch-class set of n notes. The compositional usefulness of such a project might begin to break down with structures where n > 8 (octachords already producing 40,320 discrete permutations), but would nonetheless remain useful from an algorithmic standpoint, where knowledge of not only a structure’s prime form but also its inversional permutation and chromatic transposition could be highly relevant.


2. In calculating the distance between voices, we are not concerned with the direction a voice moves, just how far it moves. So whether the pitch C moves up a major third to E (+4 semitones) or down a major third to Ab (-4 semitones) is of no difference to us in this instance; we can simply take its absolute value, yielding a value of 4.


3. Scenarios in which the number of available voices does not coincide with the number of pitches to be orchestrated necessitate the use of the mathematical operation of combination and a discussion of cardinality, both of which are beyond the scope of the present article.

From the Machine: Realtime Networked Notation

Last week, we looked at algorithms in acoustic music and the possibility of employing realtime computation to create works that combine pre-composition, generativity, chance operations, and live data input. This week, I will share some techniques and software tools I’ve developed that make possible what might be called an interactive score. By interactive score, I mean a score that is continuously updatable during performance according to a variety of realtime input. Such input might be provided from any number of digitized sources: software user interface, hardware controllers, audio signals, video stream, light sensors, data matrices, or mobile apps; the fundamental requirement is that the score is able to react to input instantaneously, continuously translating fluctuations in input data into a musical notation that is intelligible to performers.

THE ALGORITHMIC/ACOUSTIC DIVIDE

It turns out that this last requirement has historically been quite elusive. As early as the 1950s, composers were turning to computer algorithms to generate and process compositional data. The resultant information could either be translated into traditional music notation for acoustic performance (in the early days, completely by hand; in later years, by rendering the algorithm’s output as MIDI data and importing it into a software notation editor) or realized as an electronic composition. Electronic realizations emerged as perhaps the more popular approach, for several reasons. First, by using electronically generated sounds, composers gained the ability to precisely control and automate the timbre, dynamics, and spatialization of sound materials through digital means. Second, and perhaps more importantly, by jettisoning human performers—and thus the need for traditional musical notation—composers were able to reduce the temporal distance between a musical idea and its sonic realization. One could now audition the output of a complex algorithmic process instantly, rather than undertake the laborious transcription process required to translate data output into musical notation. Thus, the bottleneck between algorithmic idea and sonic realization was reduced, fundamentally, to the speed of one’s CPU.

As computation speeds increased, the algorithmic paradigm was extended to include new performative and improvisational possibilities. By the mid-1970s, with the advent of realtime computing, composers began to create algorithms that included not only sophisticated compositional architectures, but also permitted continuous manipulation and interaction during performance. To take a simple example: instead of designing an algorithm that harmonizes a pre-written melody according to 18th-century counterpoint rules, one could now improvise a melody during performance and have the algorithm intelligently harmonize it in realtime. If multiple harmonizations could satisfy the voice-leading constraints, the computer might use chance procedures to choose among them, producing a harmonically indeterminate—yet, perhaps, melodically determinate—musical passage.

Realtime computation and machine intelligence signal a new era in music composition and performance, one in which novel philosophical questions might be raised and answered.

This is just one basic example of combining live performance input with musically intelligent realtime computation; more complex and compositionally innovative applications can easily be imagined. What is notable with even a simple example like our realtime harmonizer, however, is the degree to which such a process resists neat distinctions such as “composition”/“performance”/“improvisation” or “fixed”/“indeterminate.” It is all of these at once, it is each of these to varying degrees, and yet it is also something entirely new. Realtime computation and machine intelligence signal a new era in music composition and performance, one in which novel philosophical questions might be raised and answered. I would argue that the possibility of instantiating realtime compositional intelligence in machines holds the most radically transformative promise for a paradigmatic shift in music in the years ahead.

All of this, of course, has historically involved a bit of a trade-off: composers who wished to explore such realtime compositional possibilities were forced to limit themselves to electronic and virtual sound sources. For those who found it preferable to continue to work exclusively with acoustic instruments—whether for their complex yet identifiable spectra, their rich histories in music composition and performance, or the interpretative subtleties of human performers—computer algorithms offered an elaborate pre-compositional device, but nothing more.[1]

BRIDGING THE GAP

This chasm between algorithmic music realized electronically (where sophisticated manipulation of tempi, textural density, dynamics, orchestration, and form could be achieved during performance) and algorithmic music realized acoustically (where algorithmic techniques were only to be employed pre-compositionally to inscribe a fixed work) is something that has frustrated and fascinated me for years. As a student of algorithmic composition, I often wished that I could achieve the same enlarged sense of compositional possibility offered by electronically realized systems—including generativity, stochasticity, and performative plasticity—using traditional instruments and human performers.

This, it seemed, hinged upon a digital platform for realtime notation: a software-based score that could accept abstract musical information (such as rhythmic values, pitch data, and dynamics) as input and convert it into a readable measure of notation. The notational mechanism must also be continuously updatable: it must allow for a composer’s live data input to change the notation of subsequent measures during performance. Here it must strike a balance between temporal interactivity for the composer and readability for the performer, since most performers are accustomed to reading at least a few notes ahead in the score. Lastly, the platform must be able to synchronize notational outputs for two or more performers, allowing an ensemble to be coordinated rhythmically.

Fortunately, technologies do now exist—some commercially available and others that can be realized as custom software—that satisfy each of these notational requirements.

I have chosen to develop work in Cycling ’74’s Max/MSP environment, for several reasons. First, Max supports realtime data input and output, which provides the possibility of transcending the merely pre-compositional use of algorithms. Second, two third-party notation objects—bach.score[2] and MaxScore[3]—have recently been developed for the Max environment, which allow for numerical data to be translated into traditional (as well as more experimental forms of) musical notation. For years, this remained a glaring inadequacy in Max, as native objects do not provide anything beyond the most basic notational support. Third, Max has several objects designed to facilitate communication among computers on a local network; although most of these objects are low-level in their implementation, they can be coaxed into a lightweight, low-latency, and relatively intelligent computer network with some elaboration.

REALTIME INTERACTIVE NOTATION: UNDER THE HOOD

Let’s take a look at the basic mechanics of interactive notation using the bach.score object instantiated in Max/MSP. (For those unfamiliar with the Max/MSP programming environment, I will attempt to sufficiently summarize/contextualize the operations involved so that this process can be understood in more general terms.) bach.score is a user-interface object that can be used to display and edit musical notation. While not quite as robust as commercial notation software such as Sibelius or Finale, it features many of the same operations: manual note entry with keyboard and mouse, clef and instrument name display, rhythmic and tuplet notation, accidentals and microtones, score text, MIDI playback, and more. However, bach.score’s most powerful feature is its ability to accept formatted text messages to control almost every aspect of its operation in realtime.

To take a basic example, if we wanted to display the first four notes of an ascending C major arpeggio as quarter notes in 4/4 (with quarter note = 60 BPM) in Sibelius, we would first have to set the tempo and time signature manually, then enter the pitches using the keyboard and mouse. With bach.score, we could simply send a line of text to accomplish all of this in a single message:

(( ((4 4) (60)) (1/4 (C4)) (1/4 (E4)) (1/4 (G4)) (1/4 (C5)) ))

example 1:

And if we wanted to display the first eight notes of an ascending C major scale as eighth notes:

(( ((4 4) (60)) (1/8 (C4)) (1/8 (D4)) (1/8 (E4)) (1/8 (F4)) (1/8 (G4)) (1/8 (A4)) (1/8 (B4)) (1/8 (C5)) ))

example 2:

Text strings are sent in a format called a Lisp-like linked list (llll, for short). This format uses nested brackets to express data hierarchically, in a branching tree-like structure. This turns out to be a powerful metaphor for expressing the hierarchy of a score, which bach.score organizes in the following way:

voices > measures > rhythmic durations > chords > notes/rests > note metadata (dynamics, etc.)

The question might be raised: why learn an arcane new text format and be forced to type long strings of hierarchically arranged numbers and brackets to achieve something that might be accomplished by an experienced Finale user in 20 seconds? The answer is that we now have a method of controlling a score algorithmically. The process of formatting messages for bach.score can be simplified by creating utility scripts that translate between the language of the composer (“ascending”; “subdivision”; “F major”) and that of the machine. This allows us to control increasingly abstract compositional properties in powerful ways.
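To make the idea of such a utility script concrete, here is a minimal sketch in Python (a hypothetical stand-in; in my own work this translation happens inside the Max patch itself) that converts a time signature, tempo, and a list of duration/pitch pairs into a bach-style measure string like the ones shown above:

# Hypothetical helper: build a one-measure llll string for bach.score
# from a time signature, a tempo, and a list of (duration, pitch) pairs.
def measure_to_llll(time_sig, tempo, notes):
    header = "(({} {}) ({}))".format(time_sig[0], time_sig[1], tempo)
    body = " ".join("({} ({}))".format(dur, pitch) for dur, pitch in notes)
    return "(( {} {} ))".format(header, body)

# Reproduces example 1: an ascending C major arpeggio in quarter notes.
arpeggio = [("1/4", p) for p in ("C4", "E4", "G4", "C5")]
print(measure_to_llll((4, 4), 60, arpeggio))
# -> (( ((4 4) (60)) (1/4 (C4)) (1/4 (E4)) (1/4 (G4)) (1/4 (C5)) ))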

Let’s expand upon our arpeggio example above, and build an algorithm that allows us to change the arpeggio’s root note, the chord type (and corresponding key signature), the rhythmic subdivision used, and the arpeggio’s direction (ascending, descending, or random note order).

example 3:
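The actual patch is built in Max; purely as a sketch of its underlying logic (with hypothetical names and chord definitions), the pitch side of such an arpeggiator might look like the following in Python, its output feeding a measure formatter like the one shown earlier:

import random

# Hypothetical chord-type table: intervals in semitones above the root.
CHORD_TYPES = {
    "major":     [0, 4, 7, 12],
    "minor":     [0, 3, 7, 12],
    "dominant7": [0, 4, 7, 10],
}

def arpeggio_pitches(root_midi, chord_type, direction="ascending"):
    # One-octave arpeggio as MIDI note numbers.
    pitches = [root_midi + i for i in CHORD_TYPES[chord_type]]
    if direction == "descending":
        pitches.reverse()
    elif direction == "random":
        random.shuffle(pitches)
    return pitches

def arpeggio_durations(subdivision, count):
    # e.g. subdivision=8 -> ["1/8", "1/8", ...]
    return ["1/{}".format(subdivision)] * count

# arpeggio_pitches(60, "major") -> [60, 64, 67, 72]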

Let’s add a second voice to create a simple canonic texture. The bottom voice is derived from the first, with semitonal transposition and rhythmic rotation as its variables.

example 4:
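Again as a hypothetical sketch rather than the patch itself, deriving the second voice reduces to two list operations: transpose the pitches by some number of semitones and rotate the list of durations.

def derive_canon_voice(pitches, durations, transposition=0, rotation=0):
    # Transpose pitches (in semitones) and rotate the rhythm by n positions.
    new_pitches = [p + transposition for p in pitches]
    r = rotation % len(durations)
    new_durations = durations[r:] + durations[:r]
    return new_pitches, new_durations

# e.g. derive_canon_voice([60, 64, 67, 72], ["1/8", "1/8", "1/4", "1/2"],
#                         transposition=-5, rotation=1)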

To add some rhythmic variety, we might also add a control that allows us to specify the probability of rest for each note. Finally, let’s add basic MIDI playback capabilities so we can audition the results as we modify musical parameters.

example 5:
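The rest-probability control is equally simple in outline. A sketch (hypothetical; rests are marked here with None, and their bach-specific formatting is left to the measure formatter):

import random

def apply_rest_probability(pitches, rest_probability=0.25):
    # Each note independently becomes a rest (None) with the given probability.
    return [None if random.random() < rest_probability else p for p in pitches]

(MIDI playback, as noted earlier, is among bach.score’s built-in features, so auditioning the result requires no additional machinery in this sketch.)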

While our one-measure canonic arpeggiator leaves a lot to be desired compositionally, it gives an indication of the sorts of processes that can be employed once we begin thinking algorithmically. (In the next post, we will explore more sophisticated examples of algorithmic harmony, voice-leading, and orchestration.) It is important to keep in mind that unlike similar operations for transposition, inversion, and rotation in a program like Sibelius, the functions we have created here will respond to realtime input. This means that our canonic function could be used to process incoming MIDI data from a keyboard or a pitch-tracked violin, creating a realtime accompaniment that is canonically related to the input stream.

PRACTICAL CONSIDERATIONS: RHYTHMIC COORDINATION AND REALTIME SIGHT-READING

Before going any further with our discussions of algorithmic compositional techniques, we should return to more practical considerations related to a realtime score’s performability. Even if we are able to generate satisfactory musical results using algorithmic processes, how will we display the notation to a group of musicians in a way that allows them to perform together in a coordinated manner? Is there a way to establish a musical pulse that can be synced across multiple computers/mobile devices? And if we display notation to performers precisely at the instant it is being generated, will they be able to react in time to perform the score accurately? Should we, instead, generate material in advance and provide a notational pre-display, so that an upcoming bar can be viewed before having to perform it? If so, how far in advance?

I will share my own solutions to these problems—and the thinking that led me to them—below. I should stress, however, that a multiplicity of answers is no doubt possible, each of which might lead to novel musical results.

I’ve addressed the question of basic rhythmic coordination by stealing a page from Sibelius’s/Finale’s book: a vertical cursor that steps through the score at the tempo indicated. By programming the cursor to advance according to a quantized rhythmic grid (usually either quarter or eighth note), one can visually indicate both the basic pulse and the current position in the score. While this initially seemed a perfectly effective and minimal solution, rehearsal and concert experience has indicated that it is good practice to also have a large numerical counter to display the current beat. (This is helpful for those 13/4 measures with 11 beats of rest.)

example 6:
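Stripped of the display itself, the cursor logic is nothing more than a clock that advances along a quantized grid and reports the current beat. A rough sketch (Python, purely illustrative; the actual implementation lives in the Max patch):

import time

def run_conductor(tempo_bpm=60, beats_per_measure=4, measures=2):
    # Advance along a quarter-note grid, printing a beat counter.
    # A drift-corrected clock avoids accumulating sleep error.
    beat_len = 60.0 / tempo_bpm
    start = time.monotonic()
    tick = 0
    for measure in range(1, measures + 1):
        for beat in range(1, beats_per_measure + 1):
            print("measure {}, beat {}".format(measure, beat))
            tick += 1
            time.sleep(max(0.0, start + tick * beat_len - time.monotonic()))

run_conductor()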

With a “conductor cursor” in place to indicate metric pulse and current score position, we turn to the question of how best to synchronize multiple devices (e.g. laptops, tablets) so that each musician’s cursor can be displayed at precisely the same position across devices. This is a critical question, as deviations in the range of a few milliseconds across devices can undermine an ensemble’s rhythmic precision and derail any collective sense of groove. In addition to synchronizing cursor positions, communication among devices will likely be needed to pipe score data (e.g. notes/rests, time signatures, dynamics, expression markings) from a central computer—where the master score is being generated and compiled—to performers’ devices as individual parts.

Max/MSP has several objects that provide communication across a network, including udpsend and udpreceive, jit.net.send and jit.net.recv, and a suite of Java classes that use the mxj object as a host—each of these has its advantages and drawbacks. The udpsend and udpreceive objects allow Max messages to be sent to another device on a network by specifying its IP address; they provide the fastest transfer speeds and are therefore perhaps the most commonly used. The downside is that UDP provides no error-checking: there is no guarantee that data packets will reach their destination, or that they will arrive in the order they were sent. jit.net.send and jit.net.recv are very similar in their Max/MSP implementation, but use the TCP/IP transfer protocol, which does provide for error-checking; the tradeoff is slightly slower delivery times. The mxj-based objects provide useful functionality such as the ability to query one’s own IP address (net.local) and multicasting (net.multi.send and net.multi.recv), but require Java to be installed on performers’ machines—something which, experience has shown, cannot always be assumed.

I have chosen to use jit.net.send and jit.net.recv exclusively in all of my recent work. The slight tradeoff in speed is offset by the reliability they provide during performance. The udpsend and udpreceive objects might work flawlessly for 30 minutes and then drop a data packet, causing the conductor cursor to skip a beat or a blank measure to be unintentionally displayed. This is, of course, unacceptable in a critical performance situation. To counteract the slightly slower performance of jit.net.send and jit.net.recv (and to further increase network reliability), I have also chosen to use wired Ethernet connections between devices via a 16-port Ethernet switch.[4]
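For readers who want a feel for the master/client split outside of Max, here is a deliberately simplified Python socket sketch using TCP, the same protocol family behind jit.net.send and jit.net.recv (the function names and the newline-delimited message format are assumptions for illustration only, not the Max objects themselves):

import socket

def send_measure(host, port, llll_string):
    # Master side: open a TCP connection and push one measure of score data.
    with socket.create_connection((host, port)) as conn:
        conn.sendall((llll_string + "\n").encode("utf-8"))

def serve_measures(port, handle):
    # Performer side: accept connections and hand each measure to a callback,
    # e.g. one that forwards it to the local notation display.
    with socket.create_server(("", port)) as server:
        while True:
            conn, _ = server.accept()
            with conn, conn.makefile("r", encoding="utf-8") as stream:
                for line in stream:
                    handle(line.strip())

Because TCP guarantees delivery and ordering, a dropped or reordered packet shows up as added latency rather than as a missing measure, which is precisely the trade-off described above.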

Lastly, we come to the question of how much notational pre-display to provide musicians for sight-reading purposes. We must bear in mind that the algorithmic paradigm makes possible an indeterminate compositional output, so it is entirely possible that musicians will be sight-reading music during performance that they have not previously seen or rehearsed together. Notational pre-display might provide musicians with information about the most efficient fingerings for the current measure, alert them to an upcoming change in playing technique or a cue from a fellow musician, or allow them to ration their attention more effectively over several measures. In fact, it is not uncommon for musicians to glance several systems ahead, or even quickly scan an entire page, to gather information about upcoming events or gain some sense of the musical composition as a whole. The drawback to providing an entire page of pre-generated material, from a composer’s point of view, is that it limits one’s ability to interact with a composition in realtime. If twenty measures of music have been pre-generated, for instance, and a composer wishes to suddenly alter the piece’s orchestration or dynamics, he/she must wait for those twenty measures to pass before the orchestrational or dynamic change takes effect. In this way, we can note an inherent tension between a performer’s desire to read ahead and a composer’s desire to exert realtime control over the score.

Since it was the very ability to exert realtime control over the score which attracted me to networked notation in the first place, I’ve typically opted to keep the notational pre-display to a bare minimum in my realtime works. I’ve found that a single measure of pre-display is usually a good compromise between realtime control for the composer and readability for the performer. (Providing the performer with one measure of pre-display does prohibit certain realtime compositional possibilities that are of interest to me, such as a looping function that allows the last x measures heard during performance to be repeated on a composer’s command.) Depending on tempo and musical material, less than a measure of pre-display might be feasible; this necessitates updating data in a measure as it is being performed, however, which runs the risk of being visually distracting to a performer.

An added benefit of limiting pre-display to one measure is that a performer need only see two measures at any given time: the current measure and the following measure. This has led to the development of what I call an “A/B” notational format, an endless repeat structure comprising two measures. Before the start of the piece, the first two measures are pre-generated and displayed. As the piece begins, the cursor moves through measure 1; when it reaches the first beat of measure 2, measure 3 is pre-generated and replaces measure 1. When the cursor reaches the first beat of measure 3, measure 4 is pre-generated and replaces measure 2, and so on. In this way, a performer can always see two full bars of music (the current bar and the following bar) at the downbeat of any given measure. This system also keeps the notational footprint small and consistent on a performer’s screen, allowing for their part to be zoomed to a comfortable size for reading, or for the inclusion of other instruments’ parts to facilitate ensemble coordination.

example 7:
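The bookkeeping behind the A/B structure fits in a few lines. As a hypothetical sketch (Python; generate_measure stands in for whatever algorithm is producing the notation):

def on_downbeat(display, n, generate_measure):
    # Call at the downbeat of measure n (for n >= 2).
    # display is a two-slot list: slot 0 shows odd-numbered measures, slot 1 even.
    # Measure n+1 overwrites the slot just vacated by measure n-1, so the
    # performer always sees the current measure and the one that follows it.
    display[n % 2] = generate_measure(n + 1)

# Before the piece begins, both slots are pre-generated:
# display = [generate_measure(1), generate_measure(2)]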

SO IT’S POSSIBLE… NOW WHAT?

Given this realtime notational bridge from the realm of computation to the realm of instrumental performance, a whole new world of compositional possibilities begins to emerge. In addition to traditional notation, non-standard forms such as graphic, gestural, or text-based notation can all be incorporated into a realtime networked environment. Within the realm of traditional notation, composers can begin to explore non-fixed, performable approaches to orchestration, dynamics, harmony, and even spatialization in the context of an acoustic ensemble. Next week, we will look at some of these possibilities more closely, discussing a range of techniques for controlling higher-order compositional parameters, from the linear to the more abstract.



1. Notable exceptions to this include the use of mechanical devices and robotics to operate acoustic instruments through digital means (popular examples: Yamaha Disklavier, Pat Metheny’s Orchestrion Project, Squarepusher’s Music for Robots, etc.). The technique of score following—which uses audio analysis to correlate acoustic instruments’ input to a position in a pre-composed score—should perhaps also be mentioned here. Score following provides for the compositional integration of electronic sound sources and DSP into acoustic music performance; since it fundamentally concerns itself with a pre-composed score, however, it cannot be said to provide a truly interactive compositional platform.


2. Freely available through the bach project website.


3. Info and license available at the MaxScore website.


4. A wired Ethernet connection is not strictly necessary for all networked notation applications. If precise timing of events is not compositionally required, a higher-latency wireless network can yield perfectly acceptable results. Moreover, recent technologies such as Ableton Link make possible wireless rhythmic synchronization among networked devices, with impressive perceptual precision. Ableton Link does not, however, allow for the transfer of composer-defined data packets, an essential function for the master/slave data architecture employed in my own work. At the time of this writing, I have not found a wireless solution for transferring data packets that yields acceptable (or even consistent) rhythmic latencies for musical performance.

From the Machine: Computer Algorithms and Acoustic Music

The possibility of employing an algorithm to shape a piece of music, or certain aspects of a piece of music, is hardly new. If we define algorithmic composition broadly as “creating from a set of rules or instructions,” the technique is in some sense indistinguishable from musical composition itself. While composers prior to the 20th century were unlikely to have thought of their work in explicitly algorithmic terms, it is nonetheless possible to view aspects of their practice in precisely this way. From species counterpoint to 14th-century isorhythm, from fugue to serialism, Western music has made use of rule-based compositional techniques for centuries. It might even be argued that a period of musical practice can be roughly defined by the musical parameters it derives axiomatically and the parameters left open to “taste,” serendipity, improvisation, or chance.

A relatively recent development in rule-based composition, however, is the availability of raw computational power capable of millions of calculations per second and its application to compositional decision-making. If a compositional process can be broken down into a specific enough list of instructions, a computer can likely perform them—and usually at speeds fast enough to appear instantaneous to a human observer. A computer algorithm is additionally capable of embedding non-deterministic operations such as chance procedures (using pseudo-random number generators), probability distributions (randomness weighted toward certain outcomes), and realtime data input into its compositional hierarchy. Thus, any musical parameter—e.g. harmony, form, dynamics, or orchestration—can be controlled in a number of meaningful ways: explicitly pre-defined, generated according to a deterministic set of rules (conditional), chosen randomly (aleatoric), chosen according to weighted probability tables (probabilistic), or continuously controlled in real time (improvisational). This new paradigm allows one to conceive of the nature of composition itself as a higher-order task, one requiring adjudication among ways of choosing for each musically relevant datum.
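To make the probabilistic case concrete, here is a minimal Python sketch (illustrative only; the parameter and weights are invented) of drawing a dynamic marking from a weighted table, a choice an algorithm might re-evaluate for every measure, phrase, or instrument:

import random

# Invented weighted table for a single musical parameter (dynamics).
DYNAMICS_TABLE = {"pp": 0.1, "p": 0.3, "mf": 0.4, "f": 0.2}

def choose_dynamic(table=DYNAMICS_TABLE):
    # Draw one marking in proportion to its weight.
    markings = list(table)
    weights = [table[m] for m in markings]
    return random.choices(markings, weights=weights, k=1)[0]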

Our focus here will be the application of computers toward explicitly organizational, non-sonic ends.

Let us here provisionally distinguish between the use of computers to generate/process sound and to generate/process compositional data. While, it is true, computers do not themselves make such distinctions, doing so will allow us to bracket questions of digital sound production (synthesis or playback) and digital audio processing (DSP) for the time being. While there is little doubt that digital synthesis, sampling, digital audio processing, and non-linear editing have had—and will continue to have—a profound influence on music production and performance, it is my sense that these areas have tended to dominate discussions of the musical uses of computers, overshadowing the ways in which computation can be applied to questions of compositional structure itself. Our focus here will therefore be the application of computers toward explicitly organizational, non-sonic ends; we will be satisfied leaving sound production to traditional acoustic instruments and human performers. (This, of course, requires an effective means of translating algorithmic data into an intelligible musical notation, a topic which will be addressed at length in next week’s post.)

Let us further distinguish between two compositional applications of algorithms: pre-compositional use and performance use. Most currently available and historical implementations of compositional data processing are of the former type: they are designed to aid in an otherwise conventional process of composition, where musical data might be generated or modified algorithmically, but is ultimately assembled into a fixed work by hand, in advance of performance.[1]

A commonplace pre-compositional use of data processing might be the calculation of a musical motif’s retrograde inversion in commercial notation software, or the transformation of a MIDI clip in a digital audio workstation using operations such as transposition, rhythmic augmentation/diminution, or randomization of pitch or note velocity. On the more elaborate end of the spectrum, one might encounter algorithms that translate planets’ orbits into rhythmic relationships, prime numbers into harmonic sequences, probability tables into melodic content, or pixel data from a video stream into musical dynamics. Given the temporal disjunction between the run time of the algorithm and the subsequent performance of the work, a composer can audition such operations in advance, selecting, discarding, editing, re-arranging, or subjecting materials to further processing until an acceptable result is achieved. Pre-compositional algorithms are thus a useful tool when a fixed, compositionally determinate output is desired: the algorithm is run, the results are accepted or rejected, and a finished result is inscribed—all prior to performance.[2]
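The first of these examples, a motif’s retrograde inversion, reduces to a pair of list operations. A sketch in Python (hypothetical, operating on MIDI note numbers and inverting about the motif’s first pitch):

def retrograde_inversion(motif, axis=None):
    # Invert each pitch about the axis (default: the first note), then reverse.
    if axis is None:
        axis = motif[0]
    inverted = [axis - (p - axis) for p in motif]
    return list(reversed(inverted))

# e.g. retrograde_inversion([60, 64, 67, 72]) -> [48, 53, 56, 60]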

It is now possible for a composer to build performative or interactive variables into the structure of a notated piece, allowing for the modification of almost any imaginable musical attribute during performance.

With the advent of realtime computing and modern networking technologies, however, new possibilities can be imagined beyond the realm of algorithmic pre-composition. It is now possible for a composer to build performative or interactive variables into the structure of a notated piece, allowing for the modification of almost any imaginable musical attribute during performance. A composer might trigger sections of a musical composition in non-linear fashion, use faders to control dynamic relationships between instruments, or directly enter musical information (e.g. pitches or rhythms) that can be incorporated into the algorithmic process on the fly. Such techniques have, of course, been common performance practice in electronic music for decades; given the possibility of an adequate realtime notational mechanism, they might become similarly ubiquitous in notated acoustic composition in the coming years.

Besides improvisational flexibility, performance use of compositional algorithms offers composers the ability to render aleatoric and probabilistic elements anew during each performance. Rather than such variables being frozen into fixed form during pre-composition, they will be allowed to retain their fundamentally indeterminate nature, producing musical results that vary with each realization. By precisely controlling the range, position, and function of random variables, composers can define sophisticated hierarchies of determinacy and indeterminacy in ways that would be unimaginable to early pioneers of aleatoric or indeterminate composition.

Thus, in addition to strictly pre-compositional uses of algorithms, a composer’s live data input can work in concert with conditional, aleatoric, probabilistic, and pre-composed materials to produce what might be called a “realtime composition” or an “interactive score.”

We may, in fact, be seeing the beginnings of a new musical era, one in which pre-composition, generativity, indeterminacy, and improvisation are able to interact in heretofore unimaginable ways. Instances in which composers sit alongside a chamber group or orchestra during performance, modifying elements of a piece such as dynamics, form, and tempo in real time via networked devices, may become commonplace. Intelligent orchestration algorithms equipped with transcription capabilities might allow a pianist to improvise on a MIDI-enabled keyboard and have the results realized by a string quartet in (near) real time. A musical passage might be constructed by composing a fixed melody along with a probabilistic table of possible harmonic relationships (or, conversely, by composing a fixed harmonic progression with variable voice leading and orchestration), creating works that blur the lines between indeterminacy and fixity, composition and improvisation, idea and realization. Timbral or dynamic aspects of a work might be adjusted during rehearsal in response to the specific acoustic character of a performance space. Formal features, such as the order of large-scale sections, might be modified by a composer mid-performance according to audience reaction or whim.

While the possibilities are no doubt vast, the project of implementing a coherent, musically satisfying realtime algorithmic work is still a formidable one: many basic technological pieces remain missing or underdeveloped (requiring a good deal of programming savvy on a composer/musician’s part), the practical requirements for performance and notation are not yet standardized, and even basic definitions and distinctions remain to be theorized.

In this four-part series, I will present a variety of approaches to employing computation in the acoustic domain, drawn both from my own work as well as that of fellow composer/performers. Along the way, I will address specific musical and technological questions I’ve encountered, such as strategies for networked realtime notation, algorithmic harmony and voice leading, rule-based orchestration, and more. While I have begun to explore these compositional possibilities only recently, and am surely only scratching the surface of what is possible, I have been fascinated and encouraged by the early results. It is my hope that these articles might be a springboard for conversation and future experimentation for those who are investigating—or considering investigating—this promising new musical terrain.



1. One might similarly describe a piece of music such as John Cage’s Music of Changes, or the wall drawings of visual artist Sol LeWitt, as works based on pre-compositional (albeit non-computer-based) algorithms.


2. Even works such as Morton Feldman’s graph pieces can be said to be pre-compositionally determinate in their formal dimension: while they leave freedom for a performer to choose pitches from a specified register, their structure and pacing are fixed and cannot be altered during performance.


Joseph Branciforte

Joseph Branciforte is a composer, multi-instrumentalist, and recording/mixing engineer based out of New York City. As composer, he has developed a unique process of realtime generative composition for instrumental ensembles, using networked laptops and custom software to create an “interactive score” that can be continuously updated during performance. As producer/engineer, Branciforte has lent his sonic touch to over 150 albums, working with such artists as Ben Monder, Vijay Iyer, Tim Berne, Kurt Rosenwinkel, Steve Lehman, Nels Cline, Marc Ribot, Mary Halvorson, Florent Ghys, and Son Lux along the way. His production/engineering work can be heard on ECM, Sunnyside Records, Cantaloupe Music, Pi Recordings, and New Amsterdam. He is the co-leader and drummer of “garage-chamber” ensemble The Cellar and Point, whose debut album Ambit was named one of the Top 10 Albums of 2014 by WNYC’s New Sounds and praised by outlets from the BBC to All Music Guide. His current musical efforts include a collaborative chamber project with composer Kenneth Kirschner and an electronic duo with vocalist Theo Bleckmann.

Your Computer is Listening. Are you?

Six years ago, I wrote an article stemming from a lively discussion that I had with a few friends on the work of David Cope’s artificial intelligence compositional program “Emily Howell.” My intention had been two-fold: to approach the philosophical challenges of a society accepting music that originates from an extra-human source, while also attempting to discuss whether “Emily Howell’s” work met the definition of a composed piece—or whether extraordinary human effort was involved in the final product.

This inquiry will take a very different approach.

We begin with the hypothesis that, due to the rate of growth and development of A.I. technology, #resistanceisfutile. Which is to say that computer-composed music is here, and the conversation needs to change.

Need proof? When I wrote the article six years ago, there were roughly two or three A.I. programs, mostly theoretical and almost exclusively confined to academic institutions. In the two weeks between agreeing to write this article and sitting down to flesh out my notes, a new program using Google’s A.I. open platform was released. In the week and a half between writing my first draft and coming back for serious revisions, another A.I. music system was publicly announced with venture capital funding of $4 million. The speed at which new technology in this field is developed and released is staggering, and we can no longer ask whether it will change the musical landscape, but rather how we will adapt to it.

Advances in the capacity and ease of use in digitally based media have fundamentally changed the ways that creators and producers interact with audiences and each other and—in many ways—they have bridged some of the gaps between “classical” and “popular” music.

Ted Hearne introduced me to the beauty and artistic possibilities of Auto-Tune in The Source (digital processing design by Philip White). After seeing a demo of Kamala Sankaram’s virtual reality operetta The Parksville Murders, I programmed a session at OPERA America’s New Works Forum, bringing in the composer, producers (Opera on Tap), and director (Carri Ann Shim Sham) to introduce their work to presenters and producers of opera from around the country. While still a beta product, it led to a serious discussion about the capacity of new technologies to engage audiences outside of a more traditional performance space.

The Transactional Relationship 

In the tech world, A.I. is equated with the Holy Grail, “poised to reinvent computing itself.” It will not just automate processes, but continually improve upon itself, freeing the programmer and the consumer from constantly working out idiosyncrasies or bugs. It is already a part of our daily lives—including Google’s search function, Siri, and fraud detection on credit cards. This kind of intuitive learning will be essential to the mass acceptance of self-driving cars, which will save tens of thousands of lives annually.

So why is A.I. composition not the next great innovation to revolutionize the music industry? Let’s return to the “Prostitute Metaphor” from my original article. To summarize, I argued that emotional interactions are based on a perceived understanding of shared reality, and if one side is disingenuous or misrepresenting the situation, the entire interaction has changed ex post facto. The value we give to art is mutable.

A.I.’s potential to replace human function has become a recurring theme in our culture. In the last 18 months, Westworld and Humans have each challenged their viewers to ask how comfortable they are with autonomous, human-esque machines (while Lars and the Real Girl explores the artificial constructs of relationships with people who may or may not ever have lived).

I’ll conclude this section with a point about how we want to feel a connection to the people who move us, as partners and as musicians. Can A.I. do this? Should A.I. do this? And (as a segue to the next section), what does it mean when the thing that affects us—the perfectly created partner, the song or symphony that hits you a certain way—can be endlessly replicated?

Audiences are interested in a relationship with the artist, living or dead, to the point that the composer’s “brand” determines the majority of the value of the work (commissioning fees, recording deals, royalty percentages, etc.), and the “pre-discovery” works of famous creators have been sought after as important links to the creation of the magnum opus.

Supply and Demand

What can we learn about product and consumption (supply and demand) as we relate this back to composition in the 21st century?

If you don’t know JukeDeck, it’s worth checking out. It was the focal point of Alex Marshall’s January 22, 2017, New York Times article “From Jingles to Pop Hits, A.I. Is Music to Some Ears.” Start with the interface:

[Two JukeDeck screenshots: the first shows a list of genres (piano, folk, rock, ambient, cinematic, pop, chillout, corporate, drum and bass, synth pop); the second shows a list of moods (uplifting, melancholic, dark, angry, sparse, meditative, sci-fi, action, emotive, easy listening, tech, aggressive, tropical).]

Doesn’t it seem like an earlier version of Spotify?

[Two smartphone screenshots from an earlier version of Spotify: the first shows an album called Swagger with a shuffle play option and four of its songs ("Ain't No Rest for the Wicked," "Beat The Devil's Tattoo," "No Good," "Wicked Ones"); the second shows an album called Punk Unleashed with a shuffle play option and five of its songs ("Limelight," "Near to the Wild Heart of Life," "Buddy," "Not Happy," "Sixes and Sevens").]

“Spotify is a new way of listening to music.” This was their catchphrase (see the Wayback Machine capture from 6/15/11). They dropped that phrase once it became the primary way that people consume music. The curation can be taken out of the consumer’s hands—not only is it easier, but also smarter. The consumer should feel worldlier for learning about new groups and hearing new music.

The problem, at least in practice, is that this was not the outcome. The same songs keep coming up, and with prepackaged playlists for “gym,” “study,” “dim the lights,” etc., the listener does not need to engage as the music becomes a background soundtrack instead of a product to focus on.

My contention is not that the quality of music decreased, but that the changing consumption method devalues each moment of recorded sound. The immense quantity of music now available makes the pool larger, and thus the individuals (songs/tracks/works) inherently have less value.

We can’t close the Pandora’s box that Spotify opened, so it is important to focus on how consumption is changing.

A.I. Composition Commercial Pioneers

Returning to JukeDeck: what exactly are they doing and how does it compare to our old model of Emily Howell?

Emily Howell was limited (as of 2011) to the export of the melodic, harmonic, and rhythmic ideas, requiring someone to ultimately render it playable by musicians. JukeDeck is more of a full-stack service. The company has looked at the monetization and has determined that creating digital-instrument outputs in lieu of any notated music offers the immediate gratification that audiences are increasingly looking for.

I encourage you to take a look at the program and see how it creates music in different genres. Through my own exploration of JukeDeck, I felt that the final product was something between cliché spa music and your grandparents’ attempt at dubstep, yet JukeDeck is signing on major clients (the Times article mentions Coca-Cola). While a composer might argue that the music lacks any artistic merit, at least one company with a large marketing budget has determined that it gets more value out of this than it does from a living composer (acknowledging that a composer will most likely charge more than $21.99 for a lump-sum royalty buyout). So in this situation, the ease of use and cost outweigh the creative input.

The other company mentioned in the article that hopes to (eventually) monetize A.I. composition is Flow Machines, funded by the European Research Council (ERC) and coordinated by François Pachet (Sony CSL Paris – UPMC).

Flow Machines is remarkably different. Instead of creating a finished product, its intention is to be a musical contributor, generating ideas that others will then expand upon and make their own. Pachet told the Times, “Most people working on A.I. have focused on classical music, but I’ve always been convinced that composing a short, catchy melody is probably the most difficult task.” His intention seems to be to draw on the current pop music model of multiple collaborators/producers offering input on a song that often will be performed by a third party.

While that may be true, I think that the core concept might be closer to “classical music” than he thinks.

While studying at École D’Arts Americaines de Fontainebleau, I took classes in the pedagogy of Nadia Boulanger. Each week would focus on the composition of a different canonical composer. We would study each composer’s tendencies, idiosyncrasies, and quirks through a series of pieces, and were then required to write something in their style. The intention was to internalize what made them unique and inform some of our own writing, if only through expanding our musical language. As Stravinsky said, “Lesser artists borrow, greater artists steal.”

What makes Flow Machines or JukeDeck (or Emily Howell?) different from Boulanger’s methodology? Idiosyncrasies. Each student took something different from that class. They would remember, internalize, and reflect different aspects of what was taught. The intention was never to compose the next Beethoven sonata or Mahler symphony, but to allow for the opportunity to incorporate the compositional tools and techniques into a palette as the student developed. While JukeDeck excludes the human component entirely, Flow Machines removes the learning process that is fundamental to the development of a composer. By offering a shortcut to the origination of new, yet ultimately derivative, ideas or idioms, such tools may leave composers less capable of making those decisions themselves. The long-term effect could be a generation of composers who cannot create – only expand upon an existing idea.

What would happen if two A.I. programs analyzed the same ten pieces with their unique neural networks and were asked to export a composite? Their output would be different, but likely more closely related than if the same were asked of two human composers. As a follow-up: if the same ten pieces were run through the same program on the same day, would they export the same product? What about a week later, after the programs had internalized other materials and connections in their neural networks?

What makes Flow Machines unique is the acknowledgment of its limitations. It is the Trojan Horse of A.I. music. It argues that it won’t replace composition, but will help facilitate it with big-data strategies. If we were discussing any non-arts industry, it might be championed as a “disruptive innovator.” Yet this becomes a slippery slope. Once we accept that a program can provide an artistic contribution instead of merely facilitating the production of an existing work, the precedent has been set. At what point might presenters begin to hire arrangers and editors in lieu of composers?

No one can effectively predict whether systems like Flow Machines will be used by classical composers to supplement their own creativity. Both recording and computer notation programs changed the way that composers compose and engage – each offering accessibility as a trade-off for some other technical element of composition.

I could foresee a future in which multiple famous “collaborators” input a series of musical ideas or suggestions into a program (e.g., a playlist of favorite works), and the musically literate person becomes an editor or copyist, working in the background to make it cohesive. Does that sound far-fetched? Imagine the potential for a #SupremeCourtSymphony or #DenzelWashingtonSoundtrack. They could come on stage after the performance and discuss their “musical influences” as one might expect from any post-premiere talkback.

So what does it all mean?

In the short term, the people who make their living creating the work that is already uncredited and replicable by these programs may be in a difficult situation.

A classically trained composer who writes for standard classical outlets (symphony, opera, chamber music, etc.) will not be disadvantaged any further than they already are. Since Beethoven’s death in 1827 and the deification/canonization/historical reflection that followed, living composers have been in constant competition with their non-living counterparts, and even occasionally with their own earlier works. It will (almost) always be less expensive to perform something known than to take the risk to invest in something new. There may be situations where A.I.-composed music is ultimately used in lieu of a contemporary human creation, if only because the cost is more closely comparable to utilization of existing work, but I suspect that the priorities of audiences will not change quite as quickly in situations where music is considered a form of art.

Show me the money

I focused on JukeDeck and Flow Machine over the many other contributors to this field because they are the two with the greatest potential for monetization. (Google’s Magenta is a free-form “let’s make something great together” venture only possible with the funding of Google’s parent company Alphabet behind it, and various other smaller programs are working off of this open-source system.)

Monetization is the key question when considering a future outside of academia. The supposed threat of A.I. music is that it might eliminate the (compensated) roles that composers play in the 21st century; the counter-question is how to create more paying work for these artists.

Whether it is a performing arts organization looking to strengthen its bottom line or composers trying to support themselves through their work, acknowledging shifts in consumer priorities is essential to ensuring long-term success. We need to consider that many consumers are seeking a specific kind of experience in both recorded and live performance, and that those expectations have diverged more in the last 15 years than in the preceding 50.

It is cliché, but we need more disruptive innovations in the field. Until we reach the singularity, A.I. systems will always be aggregators, culling vast quantities of existing data but limited in their ability to create anything fundamentally new.

Some of the most successful examples of projects that have tried to break out of the confines of how we traditionally perceive performance (in no particular order):

  • Hopscotch, with a group of six composers, featuring multiple storylines presented in segments via limousines, developed and produced by The Industry.
  • Ghosts of Crosstown, a site-specific collaboration between six composers, focusing on the rise and fall of an urban center, developed and produced by Opera Memphis.
  • As previously mentioned, Ted Hearne’s The Source, a searing work about Chelsea Manning and her WikiLeaks contributions, with a compiled libretto by Mark Doten. Developed and produced by Beth Morrison Projects (obligatory disclaimer – I worked on this show).
  • David Lang’s anatomy theater—an immersive experience (at the L.A. premiere, the audience ate sausages while a woman was hanged and dissected)—attempting to delve not just into a historical game of grotesque theater, but also creating the mass hysteria that surrounded it (the sheer number of people who were “unsettled” by this work seems to be an accomplishment – and once again, while I did not fully develop this show, I was a part of the initial planning at Beth Morrison Projects).

Craft is not enough. Quoting Debussy, “Works of art make rules but rules do not make works of art.” As we enter this brave new world of man versus machine, competing for revenue derived not just from brawn but increasingly from intellect, composers will ultimately be confronted—either directly or indirectly—with the need to validate their creations as something more than an aggregate.

I am optimistic about the recent trend of deep discussion about who our audiences are and how we can engage them more thoroughly. My sincere hope is that we can continue to move the field forward, embracing technologies that allow creators to grow and develop new work, while finding ways to contextualize the truly magnificent history that extends back to the origins of polyphony. While I am doubtful about the reality of computer origination of ideas upending the system, I’m confident that we can learn from these technological innovations and their incorporation in our lives to understand the changes that need to be made to secure the role of contemporary classical music in the 21st century.