
Musical Meal Prep: Managing Rapid-Fire Deadlines for the Aspiring and Evolving Music Creative

Alexandra Petkovski at her music workstation, overlaid with the New Music Toolbox logo.

Words like “organization,” “time management,” and “prioritization” are perhaps more likely to be first associated with the job of, say, an accountant; at its core, however, the music industry rests on these integral pillars. Having worked as a composer, producer, songwriter, and artist throughout the Film/TV, contemporary, video game, and commercial music worlds, I have learned that a good creative creates, but a great creative finishes.

In my time thus far as a music creator, I have developed some go-to practices for working in the music industry. In a landscape of rapid-fire deadlines, the ability to create consistent, high-quality music is helped immensely by preparation ahead of time. Think of it like meal prep: if you have all your vegetables cut and ready to go before starting your recipe, you will expedite the cooking process itself. (Et voilà, a quick and delicious stir-fry!) Here are several fundamentals I’ve come to lean on when navigating music industry deadlines, balancing multiple projects, and multi-tasking generally.

Using A Template

There are many varied stances on music templates and their use in the creative process. On one hand, a template calls into question the amount of originality and authenticity present in a music cue or work. Are we just setting ourselves up for the cookie-cutter effect? Will all of our work start to sound the same? Will our repertoire suddenly share similar sonic palettes? On the other hand, are we actually fostering and enabling our creativity by providing ourselves a foothold moving forward? Giving ourselves a gift, a catalyst for creation, ahead of time? I can see the validity in both viewpoints, and have experienced both firsthand. Ultimately, I have found that having at least some basic structure and set-up in place before jumping into a project gives me a sense of support, and a better overall mental place to begin from. Below are the typical template elements I try to incorporate and have “at the ready.”

Digital Audio Workstation Genre-Based Template

“Have template, will travel.”

Without going overboard (but also, feel free to!), having at least a handful of basic templates at the ready for different musical directions and/or genres is a solid starting point. I have seen some insanely decked-out templates, where all the orchestral instrument groupings and their respective sample sound patches have been preloaded, along with a subfolder within the project designated for the music mix, a master subfolder, and a final stems-printing portion (yeah…make sure your machine is equipped for the CPU rollercoaster ride of its life). I’ve also seen some very rudimentary, this-is-the-basic-breakdown-of-a-band ways to do it. It really boils down to the type of project, and even further, to the types of projects you spend the majority of your time doing.

For myself, I like to have a template catered towards “trailerization” endeavors and orchestral projects. For context, “trailerization” is where an original or arranged music work is created in the sonic vein specific to Film and TV trailers, promotions, teasers, and in-show needledrops; although subject to change and definitely variable, the predominant style of music here plays in the darker, epic, and dramatic spaces. I will add that I have used templates to create music for specific briefs (a directive sent to a music creator via a supervisor, trailer house, advertising, and/or licensing agency, outlining a specific musical aim and product goal dependent on project type) within the Film/TV and commercial realms, which I have found very helpful, whereas within the scoring-to-screen world my personal preference is to work from a completely “clean slate.” This is largely due to the collaborative nature of such a project, and its overall level of customization. Scoring music for a film, for instance, relies heavily on the communication and dialogue between composer and director, possibly producer(s), and members of the film creative team in general. The sonic palette, although potentially drawing influence from music genres and references, will often be developed from ground zero. A call for song submissions for an ABC medical drama may be less pointed, and a bit more universal in the musical stylization process. Again, not always, but this has been my experience so far. Either way, having a template to open and work from when hit with multiple project types and due dates can be a real time-saver, not to mention an emotional crutch. Here’s what I like to have built into my “trailerization” template…

1. Covering the Sonic Spectrum – Sample Sounds and Sonic Palette

In my experience with “trailerizing” music cues and songs, having a definitive low and high end present in the sonic spectrum helps generate the dichotomy between tension and resolution throughout a piece of music. Contrast in sound creates musical pulse: a high, airy synthesizer juxtaposed with a low, oscillating bass can help evoke “anticipatory,” “dark” tones, while the pairing of rapidly rhythmic strings with low, booming impacts and subby, hip-hop-infused beats can create feelings of “epicness” and motion. In all cases, starting with a template where this sonic spectrum is represented (having the respective sample sounds and patches preloaded) has been an excellent jumping-off point in my work processes. The basic instrument and sample sound groups I’ve incorporated in my “trailerization” template are as follows: high synth, woodwind textures, choral/choir, piano (usually a felt piano), strings (high and low), electro percussion, orchestral percussion, band percussion, electro drums (beat kits), orchestral drums, basic drum kit, sub bass, low synth, FX (sweeps/impacts/crashes), and vocals. This is not to say I don’t add or take away instruments and patches depending on where project creation takes me (maybe I decide to layer my bass with low brass, or I don’t want to use piano), but having something to start from, with instrument presets already loaded, really makes the whole creative process more efficient and thus more enjoyable. Additionally, part of what makes a good template isn’t just having samples preloaded and/or designated instrument group tracks, but organizing within each patch/instrument grouping. Without diving into too much of the minutiae, an example of this is the way I approach my vocal groupings. Instead of having all vocals organized as one large entity, I like to create labeled subsets consisting of leads, doubles, harmonies, BGVs (background vocals, often in the form of “oohs” and “ahs”), and ad libs. I do this simply by colour-coordinating audio tracks; however, whatever technique works for you is totally acceptable. A straightforward way of keeping groupings organized in the template is via track stacks. Which brings me to my next point.

2. Track Stacks

Track stacks (and/or folders, depending on the Digital Audio Workstation, or DAW) are the gifts that keep on giving. Although they are a supremely simple notion, you’d be surprised at how long it took me to catch on to their existence. (Well, I eventually did, and now I’m never going back!) In essence, one selects a particular number of MIDI and/or audio tracks in a project, right-clicks, selects “Create Track Stack,” and, boom, said tracks are organized together in one folder. In Logic Pro, the DAW I work within, there are two types of track stacks to select from: a “folder stack” and a “summing stack.” For these purposes, selecting “summing stack” is the desired course of action. The beauty of this lies beyond just the obvious visual benefit; it can actually anticipate and set up the process for printing audio stems down the line. (There could be an entire segment on the process and description of printing stems, but for this article’s purposes, let’s keep it simple and say that track stacks can become the stem buses printed to final audio stems.) The takeaway: track stacks are where it’s at.

3. Signal Flow Setup; All Aboard the Bus(ing)

Having desired signal flow paths predetermined, particularly in the form of presets and busing to auxiliary channels, enables one to create polished, industry-standard products at a faster rate, and allows one to (at least roughly) mix tracks while composing and producing. This applies to all types of project templates generally. In a music mix, there are a couple of options to consider. One may incorporate auxiliary channels to mix wet signal with dry signal on initial tracks, which is the basis of parallel compression. Alternatively, one can route the stereo output of track stack groupings to auxiliaries, enabling the addition of group compression, reverb, delay, and any other desired effects. On this note, different instrument and/or sound groupings will typically have varying kinds of compression, parallel compression, EQ, and/or reverb and delay assigned to them. In any case, having these respective buses set up ahead of time expedites the process of taking a fully composed/produced piece of music to its mixing stage. In my “trailerization” template, I like to incorporate at least a couple of different reverb and delay types/presets assigned to track stack groupings, and have parallel compression ready to dial in for all of them. I’ve also found through experience that the ability to send stems both dry (without effects) and wet is important, so that if another mixer becomes involved with the project, one can send them dry stems to which they may apply their own respective effects. Overall, I find that having bigger reverb chamber sounds and mild delays helps create the “dramatic” tone of a trailerized cue. There may be other effective ways to set up signal flow; however, this template component works fairly well for me.
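
As a concrete (if deliberately crude) illustration of the parallel-compression idea, here is a minimal Python/NumPy sketch. It is not how any DAW’s channel strip works internally, just the underlying routing math: a compressed copy of the signal is blended back in with the untouched dry signal, as if sent to an aux bus. All the constants are arbitrary choices for demonstration.

```python
import numpy as np

def compress(signal, threshold=0.3, ratio=4.0):
    """A crude static compressor: level above the threshold is reduced by `ratio`."""
    out = signal.copy()
    over = np.abs(signal) > threshold
    out[over] = np.sign(signal[over]) * (
        threshold + (np.abs(signal[over]) - threshold) / ratio
    )
    return out

def parallel_compress(dry, wet_level=0.5):
    """Parallel ("New York") compression: blend the untouched dry signal
    with a heavily compressed copy, as if routed to an aux bus."""
    wet = compress(dry)
    return dry + wet_level * wet

# Demo: a decaying 110 Hz "drum hit"
sr = 44100
t = np.linspace(0, 1.0, sr, endpoint=False)
hit = np.exp(-5 * t) * np.sin(2 * np.pi * 110 * t)
mixed = parallel_compress(hit, wet_level=0.6)
```

Because the dry signal passes through unchanged, the blend keeps the transients’ natural attack while the compressed copy thickens the quiet tail, which is exactly why the technique is set up on an aux rather than inserted directly on the track.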

4. Presets and Chains

I find presets and “go-to” chains are a great way to save time, and they are especially beneficial for vocals. I like to have certain effect chains on the track, at the ready, but I also like to have presets saved for vocal-specific busing too. For instance, in my trailerization template, I have a Vox FX 1 preset saved, which contains VocalSynth, a means of creating a lower-octave double on vocals. I also have a Vox FX 2 and a Vox FX 3 saved to help expedite real-time production and mixing of vocals.

Additionally, as far as presets go, I find that having a mastering preset to apply quickly to a demo of the music mix (when sending a song or cue to a client, for instance) helps take a cue across the finish line, and can also help it stand out generally. This doesn’t have to be fancy at all; in fact, my own “trailerization” master preset is super simple, consisting of iZotope’s Ozone 8 plug-in (for all our racing-against-the-clock mastering needs). For those unfamiliar, Ozone 8 essentially allows one to try out different cue sound outputs, playing with potential project polish including, but not limited to, EQ, compression, and limiting. Especially within limited time parameters, it is a reliable and user-friendly way of heightening one’s music work. In a similar vein, creating vocal chain presets is also a huge time-saver under rapid-fire deadlines.

Making Playlists

Another excellent tool that I feel promotes efficiency, and thus the flowing of creative juices, is the simple yet effective act of making playlists. In short, no matter the project, it is extremely beneficial to put together sonic references in the form of songs, music cues, score, etc. to turn to for creative inspiration. Further, when working with clients on projects, it is so helpful to have material to refer to when communicating about energy, feelings, vibe, and direction for a music piece or score. There are many ways to go about doing this: some people like to have general playlists at the ready for their own creative reference, contingent on music genre or stylization, while others will primarily create playlists once a dialogue with a client is underway, shaping said playlist as a result. (Sometimes this playlist may already exist in the client’s mind, unbeknownst to them, as a subconscious sense of what they’d like to hear. It is up to us to investigate and coax it out.) I like to partake in both schools of thought: I have several playlists in place specific to a project type, which, fittingly, were developed as a result of client communication and creative collaboration. What came first, the chicken or the egg? Who cares – does it sound good?

A Creator is Only as Good as Their Calendar

Well, in the music industry this may not always be the case. However, in the story of my professional (and personal, for that matter!) life, one of the most undeniably sexy leading characters has revealed themselves to be…*drum roll please*…my calendar! It’s funny, but it is fundamentally true. I have found that keeping an up-to-date, organized calendar is essential. My recommendation: use a digital calendar on a device like your phone. Every time a new deadline comes down the pike, or a new project is underway, write it down. Have a color-coding system. Unlike a to-do list, which outlines the work that needs to be done but doesn’t convey much about when it’s getting done, a calendar creates a visual image of your day-to-day activities. When a new deadline arises, you can see what can be moved around on your schedule in order to meet it, and/or prioritize a project’s level of importance in real time. I’ll leave it at this: your calendar is your friend. Use it. Cherish it.

Applicability to the Wide Music Project Gamut

Anticipation and preparation techniques for managing rapid-fire deadlines, while specific here to creating music for media-related projects, are also very applicable to the wide gamut of music project types generally. In the case of templates, one is able to use this model, for example, when writing a musical, composing an orchestral work for live performance, and/or arranging a piece for a band or recording gig. The key ingredient in all these cases is creating a foundational framework to use in a consistent manner. For instance, The Jones Family, a roughly two-and-a-half-hour musical which I wrote, composed, cast, and recorded, began with the development and solidification of my sonic palette, and the decisions about which instruments would comprise its sound. Once I determined the instruments that would weave the fabric of the musical, I was able to use that template again and again, employing it for the musical’s respective songs. Further, in the process of translating produced instrumental mock-ups to initial notation (for live instrumental performance purposes), having a consistent outline of instrument groupings and organized MIDI data expedited “putting the music to paper” overall. In addition to templates, making playlists to help spark creative fire or provide sonic reference for a music genre can lend perspective and context to projects like composing for a string quartet, illuminating musical elements like melody, harmony, and rhythm to better serve industry expectations. Having a strong understanding of the industry standard helps better inform the musical direction and choices you make. Whatever the music project deadline type, you want to equip yourself to the best of your ability regarding the landscape you are working within. Beyond this, you want to use the tools at your disposal to cut down on time and better achieve your goals. This is why using a calendar to help outline, organize, and solidify your schedule and manage your music project deadlines is so beneficial (and, I cannot emphasize enough, so simple!).

Above all, managing rapid-fire deadlines through organization, time management, and prioritization is in service of making art to the best of one’s ability. I feel it important to also note that one of the underlying key elements of managing deadlines is consistently working on something, no matter what. Anticipating the play is half the battle. Although it can be supremely difficult sometimes (seriously), try to always have something on the go; if you’re feeling less creatively motivated (and the deadline allows it), perhaps shift gears for several hours, focusing on admin or “house-keeping” to-do lists. Understanding how you maximize your productivity, and where your time is best spent, is vital to staying as prepared as possible for when new deadlines arise. I believe that what partly defines a sustainable, long-term profession in the music industry is the act of honoring one’s craft and time, ultimately setting oneself up for success. As we all continue on our musical and artistic journeys, I hope these techniques and tips can provide some help in navigating the landscape forward.

Synthesizing Environmental Sounds

A hand manipulating a patch cord on a synthesizer covered in patch cables, with an overlay of the New Music Toolbox logo.

Why bother replicating environmental sounds through electronic music synthesis when recording them is faster and more accurate? What is the point of recreating something that already exists? To these questions, I have a philosophical answer and a practical answer.

On the philosophical side, fabricating a simulacrum of the sounds around us is, at its core, a meditative process, built equally around practices of listening and analysis. It pays respect to the omnipresence of the invisible and honors the complexity of seemingly simple things. It unlocks new techniques for interacting with our instruments and enriches our experience of the world apart from them: “what makes up that sound?” becomes something of a walking mantra, impressing itself on everything you hear.

On the practical side, a recording is a life-like portrait, fixed and unchanging. It denies us the agency to restructure the world it captures. It relegates our creative interactions to the realm of post-processing (i.e., filtering, adding reverb, etc.) to emphasize or hide aspects of the events captured on tape.

The technique I’ll explain in this article takes the opposite approach: utilizing filtering, reverb, etc. as foundational elements for creating real-world portraiture while retaining the freedom of dream-logic malleability. Can you record the sound of a tin room in which a prop plane idles while its engine keeps changing size? Maybe. Can you synthesize it? Definitely.

Approaching a sound with the goal of recreating it is like listening to an exploded diagram, where a sonic totality is divided into components and considered individually. It is with an ear to this deliberate listening that I share with you the words that have guided my work for the past decade, passed along to me by the great Bob Snyder, a Chicago-based artist, educator, and friend, in the form of his “Ear Training” synthesis exercises. He started with a simple question through which the components of any sound can be observed, one that serves as a roadmap for from-scratch fabrication: “Is a sound noisy or tonal, and is its movement (if it has any) regular or irregular?”

Let’s do a quick exercise: listen to a sound, any sound (a baby crying, a phone ringing), and ask yourself: can I hum it? Trace the movement of the sound with your hand in the air and observe: is it rising and falling in a pattern? The answers to these questions point toward the equipment needed to recreate it. If the sound is tonal (if you can hum it), select an oscillator; if it isn’t, choose a noise generator. There are of course plenty of sounds that contain both (a howling wind, the word “cha,” etc.), but for this initial thought experiment choose the tone or noise source that best fits the sound’s dominant component.

Next, is something about the sound changing? It could be its amplitude, its pitch, its timbre, etc., but if you find yourself tracing out this motion with your hand, note how your hand is moving: regularly (up and down, like a car alarm) or less regularly (like shoes clanking away in a dryer). A repeating motion points toward a looping, cyclical modulator (a low frequency oscillator, a sequencer, etc.), while irregular motion indicates something either noise-based or a mixture of otherwise unrelated things. Either jot these observations down or keep them in your head, whatever works best for you—the important thing is to remain cognizant of them as they accumulate.
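
To make the taxonomy concrete, here is a minimal Python/NumPy sketch of all four corners of the grid: a tonal source, a noisy source, regular movement (an LFO sweeping a pitch, like a car alarm), and irregular movement (noise under a randomly wandering loudness envelope). The specific frequencies are arbitrary choices, not prescriptions.

```python
import numpy as np

sr = 44100
t = np.arange(2 * sr) / sr  # two seconds

# Tonal (hummable) source: an oscillator
tone = np.sin(2 * np.pi * 440 * t)

# Noisy source: a noise generator
noise = np.random.uniform(-1, 1, t.size)

# Regular movement: an LFO sweeping the oscillator's pitch, like a car alarm
lfo = np.sin(2 * np.pi * 2 * t)                 # a 2 Hz cycle
inst_freq = 700 + 300 * lfo                     # pitch rises and falls in a pattern
car_alarm = np.sin(2 * np.pi * np.cumsum(inst_freq) / sr)

# Irregular movement: noise whose loudness wanders randomly,
# like shoes clanking around in a dryer
envelope = np.abs(np.convolve(np.random.randn(t.size),
                              np.ones(2048) / 2048, mode="same"))
dryer = noise * (envelope / envelope.max())
```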

To recreate a sound from scratch is to assemble these observations as discrete instructional steps. Try not to get bogged down by the totality of the sound itself. Instead focus on these component parts: the sound is nothing more than a list of them in aggregate.

Start with the basics (tone or noise, and what about it is changing), and slowly zoom in on the details from there. Wind blowing through a grove of trees is noisy and irregular. Sometimes the leaves rustle with more treble, sometimes with more mid-range. These various noisy timbres seem to happen sequentially rather than simultaneously, as if the branches pushed one way sound different than when the wind changes direction and pushes them the other, and so on. Study the sound, note these characteristics, and think of your observations as a decoder ring.
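
Here is one possible rendering of that wind observation as code: a sketch, assuming a simple one-pole low-pass filter whose cutoff is steered by a random walk, so the noise drifts irregularly between mid-range and treble. A modular patch, or any synth with a noise source and a filter, would do the same job.

```python
import numpy as np

sr = 44100
n = 4 * sr                       # four seconds of wind
noise = np.random.uniform(-1, 1, n)

# Irregular control: a random walk, normalized to 0..1, steering the filter
# cutoff between mid-range and treble as the "wind changes direction"
walk = np.cumsum(np.random.randn(n))
walk = (walk - walk.min()) / (walk.max() - walk.min())
cutoff = 800 + 4000 * walk       # roughly 0.8 kHz to 4.8 kHz

# Time-varying one-pole low-pass, applied sample by sample
wind = np.zeros(n)
y = 0.0
for i in range(n):
    a = np.exp(-2 * np.pi * cutoff[i] / sr)  # smoothing coefficient per sample
    y = (1 - a) * noise[i] + a * y
    wind[i] = y
```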

Hopefully this provides something of an overview of the opportunities that are possible in synthesizing environmental sounds and lays out some of the aspects of sound to focus on in your listening. Now let’s try our hand at a concrete example and patch something up!

I’d like to synthesize the sounds of the beach, and in particular a memory I have of an afternoon spent there as a child. We’ll begin with the sound of ocean waves from the listening perspective of the shoreline. It’s low tide and the surf is mild. The sun hangs lazily in the air.
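
The original article demonstrates this patch in audio; as a stand-in, here is a sketch of the classic wave recipe the description implies: low-passed noise swelled by a slow, slightly irregular envelope. All the constants are guesses to taste, not values from the original patch.

```python
import numpy as np

sr = 44100
dur = 20.0                      # twenty seconds of surf
n = int(sr * dur)
t = np.arange(n) / sr
noise = np.random.uniform(-1, 1, n)

# Darken the noise with a one-pole low-pass so it reads as distant water
a = np.exp(-2 * np.pi * 600 / sr)
surf = np.zeros(n)
y = 0.0
for i in range(n):
    y = (1 - a) * noise[i] + a * y
    surf[i] = y

# A slow swell (~0.1 Hz) whose phase drifts a little, so each wave is
# similar to, but never identical with, the last
drift = np.cumsum(np.random.randn(n)) / np.sqrt(n)
swell = 0.5 * (1 + np.sin(2 * np.pi * 0.1 * t + 2.0 * drift))
waves = surf * swell ** 2       # squaring deepens the lulls between waves
```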

Once we have a working version of our central sound component, I find it helpful to surround it with supporting contextual sonics. These reinforce our creation’s place in this fabricated soundscape and allow for a degree of set-dressing about which the details are entirely ours to decide. Are these ocean waves happening on a beach or are they crashing in an office? Those decisions are executed through the inclusion of these background characters.

For this patch, I’ll play it straight and set the scene stereotypically. To create the sense of a shoreline, the focus will be on a pair of hallmarks—things you might hear (and in this case things I remember hearing) while sitting on the beach and listening to the waves: the dull roar of the ocean and the whipping hiss of the wind.

In tuning these sounds I’ll be utilizing low-pass and high-pass filters, with an ear for how each filter type represents distance: low-pass filters for sounds that are far away (whose top end has rolled off), and high-pass filters for sounds that are close up (whose top end is accentuated). Additionally, setting the relative levels of these sounds against each other paints a portrait of attention: the sounds being focused on (in this case the waves) can seem louder than their neighbors (the wind, the ocean), and should that observation shift for any reason, this balance can be adjusted accordingly.

Finally, the addition of narrative elements can lend to this sound-portrait some much-appreciated variety: if the background is always there, the things that come and go can pull us into a far more immersive listening experience.

To illustrate this point we’ll create the sound of a single-passenger plane in flight, passing overhead.  Unlike our wave, wind and ocean patches, this one is definitely hummable and will require tone sources to synthesize.  While there are myriad ways to go about recreating engine sonics, each essentially contains at least an oscillator and at least some timbral complexity, especially if that engine is full of moving parts!  The aspects that you choose to focus on in your own engine synthesis work will depend greatly on your listening work: what about the sound jumps out to you?  What is essential?  In the case of the single-passenger plane, I’ll be celebrating its beat-frequency-like movement, its stereo position adjustments and the Doppler Effect that occurs as it passes from one side of the beach to the other.
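
Again as a stand-in for the audio demo, here is a sketch of those three observations: two detuned oscillators for the beat-frequency throb, a smooth downward pitch glide for the Doppler effect, and an equal-power pan for the stereo flyover. The exact curve shapes are invented for illustration.

```python
import numpy as np

sr = 44100
dur = 10.0                       # a ten-second flyover
n = int(sr * dur)
t = np.arange(n) / sr

# Doppler-like pitch: higher on approach, sliding lower as the plane passes
glide = 110 * (1.15 - 0.3 / (1 + np.exp(-1.5 * (t - dur / 2))))
phase = 2 * np.pi * np.cumsum(glide) / sr

# Two detuned square-ish oscillators produce the engine's beating throb
engine = 0.5 * (np.sign(np.sin(phase)) + np.sign(np.sin(1.02 * phase)))

# Loudest directly overhead, quieter at the edges
amp = 0.2 + 0.8 * np.exp(-((t - dur / 2) ** 2) / 4.0)

# Stereo flyover: equal-power pan from left to right
pan = t / dur
left = amp * engine * np.sqrt(1 - pan)
right = amp * engine * np.sqrt(pan)
stereo = np.stack([left, right], axis=1)
```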

Now that we have our waves, our environment and our wildcard narrative element, let’s combine them into a performance. The world we create in the mixing of these sounds is at any point re-definable: on a whim the ocean can become tiny, the wind can whip itself up into a terrifying wall, the waves can pause and hold mid-crash. While the example illustrated below is one that tilts towards accuracy it can at any moment morph into something else entirely: a far more fantastical collage of sonic impossibilities or simply the next memory that comes to mind. The fluidity of the portrait is entirely yours to decide.

Like any skill, decoding and fabricating environmental sounds is an exercise that rewards practice. I encourage you to start as soon as you finish this article. Close your eyes, and whatever you hear or imagine first, ask yourself: what makes up that sound? Thanks for listening.

Indeterminacy 2.0: The Music of Catastrophe


Image from variant:blue by Joshue Ott and Kenneth Kirschner

“An indeterminate music can lead only to catastrophe.”
—Morton Feldman

It’s a catchy quote, coming as it does from one of the founders of indeterminate music—but to be fair, we should perhaps let the tape run a little further: “An indeterminate music can lead only to catastrophe. This catastrophe we allowed to take place. Behind it was sound—which unified everything.”

To Feldman, indeterminacy was a means to an end—a way to break through the walls of traditional composition in order to reach the pure physicality of sound beyond. Just as Wittgenstein had dismissed his Tractatus as a ladder to be thrown away after it was climbed, Feldman climbed the ladder of indeterminacy and, having reached the top, discarded it.

Few of us will ever see as far as Feldman did from those heights, but in this week’s final episode of our series on indeterminacy, I want to talk a little about some of the smaller lessons I’ve learned from my own experience writing this kind of music.

My earliest experiments with indeterminate digital music grew as a logical progression out of the chance-based techniques I was using in my work at the time. For years I had used chance procedures to create rich, complex, unexpected musical material—material that I would then painstakingly edit. Chance was a source, a method for generating ideas, not an end in itself. But with each roll of the dice that I accepted as a final, fixed result, and from which I built a carefully constructed, fully determined piece, there was always a nagging question: What if the dice had fallen differently? Was there another—maybe better—composition lurking out there that I’d failed to find? Had I just rolled the dice one more time, rolled them differently, could I perhaps have discovered something more?

Indeterminacy became for me a way to have my musical cake and eat it, too. Rather than accept the roll of the dice as a source of raw material, I could accept all possible rolls of the dice and call them the composition itself. With an indeterminate work, there is no longer one piece, but a potentially infinite number of pieces, all of them “the” piece. Here was an entirely different way of thinking about what music could be.

Where does the composition reside? Is it in the specific notes or sounds of one particular score, one particular performance, one particular recording? Or is it in the space of possibilities that a particular composition inscribes, in the wider set of potential outcomes that a given system of musical constraints, techniques, or limits marks out? Even with a lifetime’s constant listening, it would be impossible to hear every imaginable realization of even a moderately complex indeterminate piece—and yet we still somehow feel we have grasped the piece itself, have “heard” it, even if we’ve directly experienced only a tiny subset of its possible realizations. Such a composition resides in a space of pure potentiality that we can never fully explore—yet in glimpsing parts of it, we may experience more music, and perhaps intermittently better music, than a single fixed composition could ever give us.

Accepting this is not without costs. The first, and very important, lesson I learned in writing indeterminate music was that I missed editing. There are for me few more rewarding aspects of composing than that slow, painstaking, often desperate process of getting things right—and ultimately, few more joyful. I love crafting and honing musical material, pushing ever asymptotically closer to some non-existent point of perfection that you imagine, however falsely, the piece can achieve. I love to edit; it’s what I do. And you can’t edit an indeterminate composition. An indeterminate composition is never right; it’s in fact about letting go of the entire idea of rightness.

So you gain something—a vaster piece, a piece that occupies a range of possibilities rather than being limited to just one—and you lose something. You lose a feeling of conclusion, of finality, that moment of true satisfaction when you realize you’ve pushed a work as close as you possibly can to what you want it to be. So yes, there are pros and cons here; this was an unexpected lesson for me.

I’ve always found, in writing music, that there’s an ever-present temptation to believe that we can find the right way of composing—the right method, the right process, the right set of techniques that will produce great results each and every time without doubt or hesitation. A final system of writing that will work perfectly, reliably, and consistently, once and for all. This is an illusion, and not just because there is no right method. There are many methods. Many musics. Many ways of composing. They all have their strengths and they all have their weaknesses.

Perhaps the best lesson that I’ve learned from my more recent indeterminate music actually has to do with my non-indeterminate music. I now feel, when I write this music—call it “fixed” music, call it “determinate” music, call it plain-old music—that I want to write music that can’t be indeterminate. I want to write music that could only be written in a fixed way, that has some inescapable element of complexity, contingency, detail, design—some aspect that just plain wouldn’t work indeterminately. If a piece can be indeterminate, let it be indeterminate. But find those pieces—those harder, more elusive pieces—that require more than chance, more than uncertainty, that take thought, intelligence, planning, and a carefully crafted architecture to realize. In other words, my hope is that composing indeterminate music has made me a more thoughtful, more aware composer—of any music. Perhaps, then, writing indeterminate music can be both a rewarding end in itself, and a path to finding that which indeterminacy can’t give us.

Consider the Goldberg Variations. One could easily imagine making a Goldberg machine, a program or system for building new indeterminate Goldberg variations based on the same set of structures and constraints that Bach himself brought to the work. But then consider that final turn, those final notes, in the final aria of the Goldbergs. Could any blind system of chance find just those notes, just that turn, just that precise musical moment that so perfectly communicates with, speaks to, and completes the musical experience of the entire work? That one point in musical space is singular, and to find it requires the greatest intelligence and the greatest art. We haven’t yet built any machine, generative system, or set of axioms that could in a million years locate that one point in musical space; perhaps one day we will, but for now it is a place that no indeterminacy would ever stumble upon. It required a composer, and Bach found it.

What I’m trying to say is this: that indeterminate music is wonderful and exciting and compelling, especially when you couple it with the vast possibilities that digital technology opens up for us. But it’s not the only music. There’s a place for a music of chance, a place for a music of catastrophe—and there’s a place for music as we know it, and have always seemed to know it. A place for a music in which a composer finds a point in the space of all possible music, a singular moment, a perfect event, and says, “Yes. This.”

Indeterminacy 2.0: Under the Hood


Image from variant:SONiC by Joshue Ott and Kenneth Kirschner

This week, I want to talk about some of the actual work I’ve done with indeterminate digital music, with a focus on both the technologies involved and the compositional methods that have proven useful to me in approaching this sort of work. Let me open with a disclaimer that this is going to be a hands-on discussion that really dives into how these pieces are built. It’s intended primarily for composers who may be interested in writing this kind of music, or for listeners who really want to dig into the mechanics underlying the pieces. If that’s not you, feel free to just skim it or fast-forward ahead to next week, when we’ll get back into a more philosophical mode.

For fellow composers, here’s a first and very important caveat: as of right now, this is not music for which you can buy off-the-shelf software, boot it up, and start writing—real, actual programming will be required. And if you, like me, are someone who has a panic attack at the sight of the simplest Max patch, much less actual code, then collaboration may be the way to go, as it has been for me. You’ll ideally be looking to find and work with a “creative coder”—someone who’s a programmer, but has interest and experience in experimental art and won’t run away screaming (or perhaps laughing) at your crazy ideas.

INITIAL CONCEPTS

Let me rewind a little and talk about how I first got interested in trying to write this sort of music. I had used chance procedures as an essential part of my compositional process for many years, but I’d never developed an interest in working with true indeterminacy. That changed in the early 2000s, when my friend Taylor Deupree and I started talking about an idea for a series we wanted to call “Music for iPods.” An unexpected side effect of the release of the original iPod had been that people really got into the shuffle feature, and suddenly you had all these inadvertent little Cageans running around shuffling their whole music collections right from their jean pockets. What we wanted to do was to write specifically for the shuffle feature on the iPod, to make a piece that was comprised of little fragments designed to be played in any order, and that would be different every time you listened. Like most of our bright ideas, we never got around to it—but it did get me thinking on the subject.

And as I thought about it, it seemed to me that having just one sound at a time wasn’t really that interesting compositionally; there were only so many ways you could approach structuring the piece, so many ways you could put the thing together. But what if you could have two iPods on shuffle at once? Three? More? That would raise some compositional questions that struck me as really worth digging into. And under the hood, what was this newfangled iPod thing but a digital audio player—a piece of software playing software. It increasingly seemed like the indeterminate music idea was something that should be built in software—but I had no clue how to do it.

FIRST INDETERMINATE SERIES (2004–2005)

In 2004, while performing at a festival in Spain, I met a Flash programmer, Craig Swann, who had just the skills needed to try out my crazy idea. The first piece we tried—July 29, 2004 (all my pieces are titled by the date on which they’re begun)—was a simple proof of concept, a realization of the “Music for iPods” idea; it’s basically an iPod on shuffle play built in Flash. The music itself is a simple little piano composition which I’ve never found particularly compelling—but it was enough to test out the idea.

Here’s how it works: the piece consists of 35 short sound files, each about 10 seconds long, and each containing one piano chord. The Flash program randomly picks one mp3 at a time and plays it—forever. You can let this thing go as long as you like, and it’ll just keep going—the piece is indefinite, not just indeterminate. Here’s an example of what it sounds like, and for this and all the other pieces in my first indeterminate series, you can download the functioning generative Flash app freely from my website and give it a try. I say “functioning,” but these things are getting a bit long in the tooth; you may get a big security alert that pops up when you press the play button, but click “OK” on it and it still works fine. Also potentially interesting for fellow composers is that, by opening up the subfolders on each piece, you can see and play all of the underlying sound files individually and hopefully start to get a better sense of how these things are put together.
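
For the code-curious, the logic described here fits in a few lines. This is an illustrative Python sketch, not the original Flash source; `play_file` and the folder name are hypothetical stand-ins for real audio playback.

```python
import random
import time
from pathlib import Path

FRAGMENT_SECONDS = 10   # each of the 35 files is roughly ten seconds long

def play_file(path):
    """Stand-in for actual playback (the original engine was written in Flash)."""
    print(f"now playing: {path.name}")
    time.sleep(FRAGMENT_SECONDS)

# Hypothetical folder holding the piece's 35 one-chord mp3s
fragments = list(Path("july_29_2004").glob("*.mp3"))

while True:   # indefinite, not just indeterminate: it plays forever
    play_file(random.choice(fragments))
```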

It was with the next piece, August 26, 2004, that this first series of indeterminate pieces for me really started to get interesting (here’s a fixed excerpt, and here’s the generative version). It’s one thing to play just one sound, then another, then another, ad infinitum. But what if you’ve got a bunch of sounds—two or three or four different layers at once—all happening in random simultaneous juxtapositions and colliding with one another? It’s a much more challenging, much more interesting compositional question. How do you structure the piece? How do you make it make sense? All these sounds have to “get along,” to fit together in some musically meaningful way—and yet you don’t want it to be homogenous, static, boring. How do you balance the desire for harmonic complexity and development with the need to avoid what are called, in the technical parlance of DJs, “trainwrecks”? Because sooner or later, anything that can happen in these pieces will happen, and you have to build the entire composition with that knowledge in mind.

August 26, 2004 was one possible solution to this problem. There are three simultaneous layers playing—three virtual “iPods” stacked shuffling on top of each other. One track plays a series of piano recordings, which here carry most of the harmonic content; there are 14 piano fragments, most around a minute long, each moving within a stable pitch space, and each able to transition more or less smoothly into the next. On top of that are two layers of electronics, drawn from a shared set of 21 sounds, and these I kept very sparse: each is harmonically open and ambiguous enough that it should, in theory, be able to hover over whatever piano fragment is playing as well as bump into the other electronic layer without causing too much trouble.
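
Conceptually, the three layers behave like three independent shuffle loops running at once. Here is a toy Python sketch of that structure; the file names are hypothetical, and the electronics fragments’ duration is an assumption (the article only gives lengths for the piano fragments).

```python
import random
import threading
import time

# Hypothetical file names standing in for the actual fragments
piano = [f"piano_{i:02d}.mp3" for i in range(14)]          # most ~1 minute long
electronics = [f"electronic_{i:02d}.mp3" for i in range(21)]

def shuffle_layer(name, pool, seconds):
    """One virtual 'iPod on shuffle': pick a fragment, let it play, repeat."""
    while True:
        print(f"{name}: {random.choice(pool)}")
        time.sleep(seconds)

# One piano layer plus two electronics layers drawing on a shared pool
threading.Thread(target=shuffle_layer, args=("piano", piano, 60), daemon=True).start()
threading.Thread(target=shuffle_layer, args=("electronics A", electronics, 30), daemon=True).start()
threading.Thread(target=shuffle_layer, args=("electronics B", electronics, 30), daemon=True).start()

time.sleep(300)   # let the piece run for five minutes
```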

As the series continued, however, I found myself increasingly taking a somewhat different approach: rather than divide up the sounds into different functional groups, with one group dominating the harmonic space, I instead designed all of the underlying fragments to be “compatible” with one another—every sound would potentially work with every other, so that any random juxtaposition of sounds that got loaded could safely coexist. To check out some of these subsequent pieces, you can scan through 2005 on my website for any compositions marked “indet.” And again, for all of them you can freely download the generative version and open up the folders to explore their component parts.

INTERMISSION (2006–2014)

By late 2005, I was beginning to drift away from this sort of work, for reasons both technological and artistic (some of which I’ll talk about next week), and by 2006 I found myself again writing nothing but fully “determinate” work. Lacking the programming skills to push the work forward myself, indeterminacy became less of a focus—though I still felt that there was great untapped potential there, and hoped to return to it one day.

Another thing holding the pieces back was, quite simply, the technology of the time. They could only be played on a desktop computer, which just wasn’t really a comfortable or desirable listening environment then (or, for that matter, now). These pieces really cried out for a mobile realization, for something you could throw in your pocket, pop some headphones on, and hit the streets with. So I kept thinking about the pieces, and kept kicking around ideas in my head and with friends. Then suddenly, over the course of just a few years, we all looked up and found that everyone around us was carrying in their pockets extremely powerful, highly capable computers—computers that had more firepower than every piece of gear I’d used in the first decade or two of my musical life put together. Except they were now called “phones.”

THE VARIANTS (2014–)

In 2014, after years of talking over pad kee mao at our local Thai place, I started working with my friend Joshue Ott to finally move the indeterminate series forward. A visualist and software designer, Josh is best known in new music circles for superDraw, a “visual instrument” on which he improvises live generative imagery for new music performances and on which he has performed at venues ranging from Mutek to Carnegie Hall. Josh is also an iOS developer, and his app Thicket, created with composer Morgan Packard, is one of the best examples out there of what can be achieved when you bring together visuals, music, and an interactive touch screen.

Working as artists-in-residence at Eyebeam, Josh and I have developed and launched what we’re calling the Variant series. Our idea was to develop a series of apps for the iPhone and iPad that would bring together the generative visuals of his superDraw software with my approach to indeterminate digital music, all tightly integrated into a single interactive experience for the user. Our concept for the Variant apps is that each piece in the series will feature a different visual composition of Josh’s, a different indeterminate composition of mine—and, importantly, a different approach to user interactivity.

When I sat down to write the first sketches for these new apps, my initial instinct was to go back and basically rewrite August 26, 2004, which had somehow stuck with me as the most satisfying piece of the first indeterminate series. And when I did, the results were terrible—well, terribly boring. It took me a little while to realize that I’d perhaps learned a thing or two in the intervening decade, and that I needed to push myself harder—to try to move the indeterminate pieces forward not just technologically, but compositionally as well. So I went back to the drawing board, and the result was the music for our first app, variant:blue (here’s an example of what it sounds like).

It’s immediately clear that this is much denser than anything I’d tried to do in the first indeterminate series—even beyond the eight tracks of audio running simultaneously. It’s denser compositionally, with a more dissonant and chromatic palette than I would have had the courage to attempt ten years earlier. But the piece is actually not that complex once you break it down: each audio file contains a rhythmically loose, repeating instrumental pattern (you can hear an example of one isolated component here to give you a sense of it), with lots of silent spaces in between the repetitions. The rhythms, however, are totally free (there’s no overarching grid or tempo), so as you start to layer this stuff, the patterns begin to overlap and interfere with each other in complex, unpredictable ways. For variant:blue, there are now 48 individual component audio files; the indeterminate engine grabs one sound file at random and assigns it to one of the eight playback tracks, then grabs the next and assigns it to the next available track, and so forth. One handy feature of all of the Variant apps is that, when you click the dot in the lower right, a display will open that shows the indeterminate engine running in real time, which should hopefully give you a better sense of how the music is put together.
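
Based on that description, the engine’s core loop might be sketched like this (hypothetical file names; the real implementation lives inside the iOS app):

```python
import random

NUM_TRACKS = 8
# Hypothetical names for the 48 component audio files
COMPONENTS = [f"blue_{i:02d}.wav" for i in range(48)]

class IndeterminateEngine:
    """Sketch of the engine as described: each randomly chosen component
    file is assigned to the next available of eight playback tracks."""

    def __init__(self):
        self.tracks = [None] * NUM_TRACKS

    def on_track_finished(self, index):
        self.tracks[index] = None          # a track frees up when its file ends

    def fill(self):
        for i, current in enumerate(self.tracks):
            if current is None:
                self.tracks[i] = random.choice(COMPONENTS)
                print(f"track {i}: {self.tracks[i]}")

engine = IndeterminateEngine()
engine.fill()                  # all eight tracks start with random components
engine.on_track_finished(3)    # later, the file on track 3 ends...
engine.fill()                  # ...and a new random component takes its place
```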

In one way, though, the music for variant:blue is very much like my earlier indeterminate pieces: it’s straight-up indeterminate, not interactive. The user has no control over the audio, and the music evolves only according to the indeterminate engine’s built-in chance procedures. For variant:blue, the interaction design focuses solely on the visuals, giving you the ability to draw lines that are in turn modified by the music. True audio interactivity, however, was something that would become a major struggle for us in our next app, variant:flare.

The music for variant:flare has a compositional structure that is almost the diametrical opposite of variant:blue’s, showing you a very different solution to the problem of how to bring order to these indeterminate pieces. Where the previous piece was predominantly atonal and free-floating, this one is locked to two absolute grids: a diatonic scale (C# minor, though sounding more like E major much of the time), and a tight rhythmic grid (at 117 bpm). So you can feel very confident that whatever sound comes up is going to get along just fine with the other sounds that are playing, in terms of both pitch and rhythm. Within that tightly quantized, completely tonal space, however, there’s actually plenty of room for movement—and each of these sounds gets to have all sorts of fun melodically and especially metrically. The meters, or lack thereof, are where it really gets interesting, because the step sequencer that was used to create each audio file incorporated chance procedures that occasionally scrambled whatever already-weird meter the pattern was playing in. Thus every individual line runs in a different irregular meter, and also occasionally changes and flips around into new and confusingly different patterns. Try following the individual lines (like this one); it’s a big fun mess, and you can listen to an example of the full app’s music here.
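
A toy sketch of that chance-scrambled sequencer idea: a pattern locked to a diatonic pitch set, running in an odd meter that occasionally gets rebuilt mid-stream. Only the C# minor scale and the general mechanism come from the text; the scramble probability and the candidate meter lengths are invented for illustration.

```python
import random

CSHARP_MINOR = ["C#", "D#", "E", "F#", "G#", "A", "B"]   # the piece's diatonic grid
SCRAMBLE_CHANCE = 0.15   # assumed; the article doesn't give the actual probability

def make_pattern(steps):
    """A looping pattern in an irregular meter; None entries are rests."""
    return [random.choice(CSHARP_MINOR + [None, None]) for _ in range(steps)]

pattern = make_pattern(random.choice([5, 7, 9, 11]))     # a weird meter from the start

for cycle in range(8):
    if random.random() < SCRAMBLE_CHANCE:                # occasionally scramble
        pattern = make_pattern(random.choice([5, 7, 9, 11]))
        print("-- meter scrambled --")
    print(f"cycle {cycle}:", " ".join("." if s is None else s for s in pattern))
```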

We were very happy with the way both the music and the visuals for the app came together—individually. But variant:flare unexpectedly became a huge challenge in the third goal of our Variant series: interactivity. Try as we might, we simply couldn’t find a way to make both the music and the visuals meaningfully interactive. The musical composition was originally designed to just run indeterminately, without user input, and trying to add interactivity after the fact proved incredibly difficult. What brought it all together in the end was a complete rethink that took the piece from a passive musical experience to a truly active one. The design we hit on was this: each tap on the iPad’s screen starts one track of the music, up to six. After that point, each tap resets a track: one currently playing track fades out and is replaced by another randomly selected one. This allows you to “step” through the composition yourself, to guide its evolution and development in a controlled, yet still indeterminate fashion (because the choice of sounds is still governed by chance). If you find a juxtaposition of sounds you like, one compelling point in the “compositional space” of the piece, leave it alone—the music will hover there, staying with that particular combination of sounds until you’re ready to nudge it forward and move on. The visuals, conversely, now have no direct user interactivity and are controlled completely by the music. While this was not at all the direction we initially anticipated taking, we’re both reasonably satisfied with how the app’s user experience has come together.
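
The tap logic described here is compact enough to sketch directly. One detail the text leaves open is which playing track gets reset; this sketch picks one at random, and the pool size is a hypothetical placeholder.

```python
import random

MAX_TRACKS = 6
POOL = [f"flare_{i:02d}.wav" for i in range(40)]   # hypothetical component pool

playing = []

def tap():
    """First six taps start tracks; every tap after that fades one playing
    track out and fades a randomly chosen replacement in."""
    if len(playing) < MAX_TRACKS:
        playing.append(random.choice(POOL))
        print(f"start: {playing[-1]}")
    else:
        out = playing.pop(random.randrange(len(playing)))  # which track resets is
        replacement = random.choice(POOL)                  # our guess here
        playing.append(replacement)
        print(f"fade out {out}, fade in {replacement}")

for _ in range(10):
    tap()
```

Leaving the screen alone leaves `playing` untouched, which is exactly the “hover on a compelling juxtaposition” behavior the design is after.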

After this experience, my goal for the next app was to focus on building interactivity into the music from the ground up—not to struggle with adding it into something that was already written, but to make it an integral part of the overall plan of the composition from the start. variant:SONiC, our next app, was commissioned by the American Composers Orchestra for the October 2015 SONiC Festival, and my idea for the music was to take sounds from a wide cross-section of the performers and composers in the festival and build the piece entirely out of those sounds. I asked the ACO to send out a call for materials to the festival’s participants, asking each interested musician to send me one note—a single note played on their instruments or sung—with the idea of building up a sort of “virtual ensemble” to represent the festival itself. I received a wonderfully diverse array of material to work with—including sounds from Andy Akiho, Alarm Will Sound (including Miles Brown, Michael Clayville, Erin Lesser, Courtney Orlando, and Jason Price), Clarice Assad, Christopher Cerrone, The Crossing, Melody Eötvös, Angelica Negron, Nieuw Amsterdams Peil (including Gerard Bouwhuis and Heleen Hulst), and Nina C. Young—and it was from these sounds that I built the app’s musical composition.

When you boot up variant:SONiC, nothing happens. But tap the screen and a sound will play, and that sound will trigger Josh’s visuals as well. Each sound is short, and you can keep tapping—there are up to ten sounds available to you at once, one for each finger, so you can almost begin to play the piece like an instrument. As with our other apps, each tap is triggering one randomly selected element of the composition at a time—but here there are 153 total sounds, so there’s a lot for you to explore. And with this Variant you get one additional interactive feature: hold down your finger, and whatever sound you’ve just triggered will start slowly looping. Thus you can use one hand, for example, to build up a stable group of repeating sounds, while the other “solos” over it by triggering new material. variant:SONiC is a free app, so it’s a great way to try out these new indeterminate pieces—but for those that don’t have access to an iOS device, here’s what it sounds like.
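
Sketching the described touch behavior (hypothetical file names; the audio, looping, and visual layers are omitted):

```python
import random

SOUNDS = [f"sonic_{i:03d}.wav" for i in range(153)]   # one per contributed note
fingers = {}    # finger id -> the sound that finger triggered

def touch_down(finger_id):
    """Each tap triggers one randomly selected sound, up to ten at once."""
    if len(fingers) < 10:
        fingers[finger_id] = random.choice(SOUNDS)
        print(f"finger {finger_id} triggers {fingers[finger_id]}")

def touch_hold(finger_id):
    """A held finger makes its sound start slowly looping."""
    if finger_id in fingers:
        print(f"finger {finger_id} loops {fingers[finger_id]}")

def touch_up(finger_id):
    fingers.pop(finger_id, None)

touch_down(0)
touch_down(1)
touch_hold(0)    # one hand builds a repeating bed...
touch_up(1)
touch_down(2)    # ...while the other "solos" with new material
```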

variant:SONiC is, for me, the first of our apps where the audio interactivity feels natural, coherent, and integral to the musical composition. And to me it illustrates how—particularly when working with touchscreen technology—indeterminacy quite naturally slides into interactivity with this kind of music. I’m not sure whether that’s just because iPhone and iPad users expect to be able to touch their screens and make things happen, or whether there’s something inherent in the medium that draws you in this direction. Maybe it’s just that having the tools on hand tempts you to use them; to a composer with a hammer, everything sounds like a nail.

In the end, though, much as I’m finding interactive music to be intriguing and rewarding, I do still believe that there’s a place for indeterminate digital music that isn’t interactive. I hope to work more in this direction in the future—though to call it merely “passive” indeterminate music sounds just as insulting as calling a regular old piece of music “determinate.” I guess what I’m trying to say is that, despite all these wonderfully interactive technologies we have available to us today, there’s still something to be said for just sitting back and listening to a piece of music. And maybe that’s why I’ve called this series Indeterminacy 2.0 rather than Interactivity 1.0.

Next week, our season finale: “The Music of Catastrophe.”