
Digital Audio Workstations: Notation and Engagement Reconsidered

DAW screen cap
First, a benign observation: the overwhelming majority of the music currently emanating from living room speakers, or being heard from passing car stereos, first passed through digital audio workstation (DAW) software of some shape and style. DAWs like Pro Tools, Audacity, Ableton, and GarageBand are generally defined by their use of sequenced tracks containing either sonic waveforms or MIDI code. Yet they are largely invisible to most musicians and listeners, unless one reflects on how digital audio is created and mediated on a day-to-day basis. When we think of a new work’s creation, we imagine a score being pored over by a meticulous hand, eventually realized with lyricism and aplomb by performer(s) of equivalent musical intuition and skill. We pay fleeting attention, if any, to the subsequent inscription and manipulation that occurs in the studio after both the composer and performer have gone out for drinks at the end of the recording session. Indeed, despite an engineer or producer’s best attempts, a new work cannot pass transparently through a DAW; there are always stopgaps, enhancements, deletions, and tweaks that, I think, fundamentally color the recorded piece as separate from the composer’s instruction and the performer’s execution. This raises the question of how best to characterize the DAW’s everyday impact on our musical world.

Whether a musical work began its life within a DAW (as is the case with computer or electroacoustic music), or only passed through one with minor alteration prior to public distribution, these software tools touch nearly every auditory creation with aspirations beyond a sidewalk corner, bedroom studio, or recital hall. But I would like to take their influence one step further. Not only do DAW software products mediate recorded sound, but these very same tools can be thought of as a form of digital music notation. I broadly define digital notation as any computer-generated system that inscribes information capable of being rendered musical, including but not limited to software that employs some version of conventional staff notation. In the same way we give latitude to the printed graphic scores of Cornelius Cardew, Iannis Xenakis, or Brian Eno as legitimate notation, the DAW’s world of sequenced tracks and waveforms deserves a similarly appreciative study. The fact that DAW software has utility as a performance or recording tool should not prejudice us against its additional notational qualities. Neither should the fact that DAWs are frequently used in tandem with other notational styles when realizing a work.
Xenakis Score
Two real-world situations hopefully add weight to our re-thinking of DAW software as notation. First, when a composer like Matthew Burtner creates a piece of computer or electroacoustic music through a DAW interface, with no originating staff score, should we simply say that Burtner’s piece lacks notation? Or that this music falls outside the boundaries of what notation can accomplish? Second, consider an error-prone session with a chamber group trying to record a new work by a composer like Brian Ferneyhough. By the end of the day, almost never does the recording engineer have a single unblemished take from the work’s beginning to end. More often than not, a hodgepodge of clips cutting across movements or rehearsal letters will need to be sewn back together in the DAW and made to sound convincing, both to Ferneyhough and the eventual listener. Separate versions of the work now exist: the original manuscript showing the composer’s lofty aspirations versus the listener’s reality, a sonic Frankenstein arranged within a DAW that compiles the engineer’s best approximation. Which format has a more legitimate claim as the work’s true inscription? Instead of throwing up our hands in despair at either situation, let’s expand our thinking and our musical toolbox by including DAW software, warts and all, as a digital notation.

A final clarification: the term “digital notation” is frequently thought of as encompassing only those tools of the 21st-century instrumental composer, Finale and Sibelius. Yet Finale and Sibelius are far more akin to conventional DAW software than they are to ink and manuscript notation. In fact, they represent just one fork in the road of the DAW’s development, employing the same sequenced tracks and playback capabilities as progenitor software products while sacrificing waveform sounds in favor of MIDI and virtual instruments. Since the first DAW’s unveiling in 1978, similar structural principles have been incorporated into later digital notation products such as Finale, initially released in 1988. Indeed, contemporary DAW software like Logic Pro, seamlessly blending tracks with either MIDI staff notation or waveforms in the same composition, shows the re-convergence of these two design paths. Perhaps joining “staff” software like Finale and “non-staff” software like Audacity together under the same notational umbrella seems unintuitive or even bizarre. But I counter that our understanding and classification of digital music notation should rely on how we engage with the medium rather than on the look of the “score” rendered through pixels. In what ways do we take for granted, on an experiential basis, how composers sculpt and explore the sound materials within DAW notation? By briefly exploring three core structural features of most DAWs–waveforms, sequenced tracks, and rapid playback–I want to make the case that this style of digital notation (Finale et al. included) enables a remarkable creative work process for a composer that deserves greater consideration.

Logic Pro screen cap


Starting with Max Mathews’s first forays into the MUSIC-N programming language in 1957, individuals experimenting with computer-generated sound in the 1960s and ’70s could specify individual aural events with a revelatory level of ultra-fine resolution. One could now stipulate with great precision the digital synthesis of musical parameters such as pitch, duration, envelope, and harmonic content. Csound, shown here, is a contemporary incarnation of these composition principles. A looming challenge soon arose for the early developers of digital music notation: how, in spite of this high-resolution processing, could a larger series of musical ideas be represented with clarity in the context of an entire composition? Italian electronic music composer Giancarlo Sica summarized a hopeful new method: “…a musical performer must be able to control all aspects [of a digital notation] at the same time. This degree of control, while also preserving expressivity to the fullest extent, allows continuous mutations in the interpretation of a score.” Waveforms, track sequencing, and rapid playback are precisely the DAW’s answer to this outline for increased digital notation flexibility.

CSound screen capture

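The event-by-event precision described above can be illustrated with a short Python sketch. This is not actual MUSIC-N or Csound code, and the function name and parameters are hypothetical; the point is only that every dimension of a single sound event, its pitch, duration, amplitude, and envelope, is specified numerically.

```python
import math

# Hypothetical sketch in the spirit of a MUSIC-N/Csound score statement:
# every parameter of one sound event is given as an explicit number.
def render_note(freq_hz, dur_s, amp, sample_rate=8000):
    """Synthesize one sine-tone event with a triangular amplitude envelope."""
    n = int(dur_s * sample_rate)
    samples = []
    for i in range(n):
        t = i / sample_rate
        env = amp * (1 - abs(2 * i / n - 1))  # rise to a peak, then fall away
        samples.append(env * math.sin(2 * math.pi * freq_hz * t))
    return samples

# A 440 Hz event lasting exactly half a second: 4000 samples at 8 kHz.
note = render_note(freq_hz=440.0, dur_s=0.5, amp=0.8)
```

Nothing about this event is approximate: its duration is exact to the sample, its peak loudness to the decimal. The developers' challenge, as Sica's remark suggests, was scaling this exactness up to a whole composition.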

The waveform is the first feature of DAW notation that I believe is taken for granted: how exactly does a composer engage with waveforms as opposed to our standard symbolism of staves, bars, notes, and accents? Waveforms serve as representations of sonic loudness over time with either craggy (quick attacks and decays, with short sustains) or smooth (slow attacks and decays, with long sustains) linear shapes. They flaunt the edges, curves, and dips of a performer’s dynamic treatment of audible content. Furthermore, they fulfill Sica’s earlier blueprint by being able to stretch apart and compact on a whim, displaying a microsecond of curvature or minutes of slow growth in rapid succession. Yet while waveforms provide exactness in the realms of amplitude and duration through visual peaks and valleys, the vital categories of pitch and harmonic content become inaccessible. Waveforms in DAW notation dramatically re-prioritize the musical dimensions to which composers have been traditionally most attentive, trading pointillistic melodic lines and chordal clusters for the attack, sustain, and decay of long, homogenized statements. They blend together formerly discrete notes as they resonate into and out of one another, with punctuation determined largely by phrase and cadence. In essence, our original melodic line becomes a single gestural sweep. Composers must then express themselves in this medium by sculpting a waveform’s dynamic development via fades and contour trims. Through tweaking sonic envelopes like these, waveforms in a DAW notation environment lead composers to think of musical movement in spatial or even topographical terms, rather than through traditional contrapuntal or harmonic mindsets. When a composer manipulates a waveform as the building block of a DAW’s musical dialect, I would describe it as far more akin to throwing pottery on a wheel or carving a block of ice than a typical composition metaphor such as painting with brushstrokes on a canvas.
The same cannot be said for Finale’s species of MIDI-intense DAW notation, as a composer can’t zoom deeper into a quarter-note and discover more musical information to massage. Ultimately, this is a core distinction between a composer’s engagement with waveforms versus staff notation: waveforms enable a practically limitless editing capacity within each topographical gesture, whereas staff symbols are bound by both their discreteness from one another as well as their individual immutability. Again, one simply cannot chop away at the interiors of a quarter-note to render a different sort of sound.
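To make the contrast concrete, here is a toy Python sketch of the kind of contour trim described above. The representation, a plain list of amplitude samples, is deliberately simplified and is not any DAW's actual data model.

```python
def linear_fade(samples, fade_in_n, fade_out_n):
    """Apply a linear fade-in and fade-out to a list of samples,
    the sort of envelope tweak a composer performs on a DAW waveform."""
    out = list(samples)
    n = len(out)
    for i in range(min(fade_in_n, n)):
        out[i] *= i / fade_in_n            # ramp up from silence
    for i in range(min(fade_out_n, n)):
        out[n - 1 - i] *= i / fade_out_n   # ramp back down to silence
    return out

# A flat gesture acquires a sculpted attack and decay:
shaped = linear_fade([1.0] * 10, fade_in_n=4, fade_out_n=4)
# → [0.0, 0.25, 0.5, 0.75, 1.0, 1.0, 0.75, 0.5, 0.25, 0.0]
```

Because the gesture is just a continuous series of samples, the composer can reach into any point of its interior; a quarter-note on a staff, by contrast, has no such interior to edit.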
Waveforms, in turn, strongly inform the next feature of DAW notation taken for granted: track sequencing. Track sequencing was developed in order to solve the particularly thorny problem of showing relative musical time in digital notation, especially when there are a large number of sound events packed into a relatively short segment. A thickly composed section of a musical work may be pulled apart and laid out on a plane of such tracks that are then stacked on top of one another, or sequenced, to show simultaneity and density of texture. One might understand track sequencing as analogous to the look and feel of a printed orchestral score in conventional staff notation. Yet within such a score, one ratio of detail is maintained throughout the entire work. A conductor is unable to “zoom” in and out to examine micro-fine aspects of a particular instrumental voice, while also limiting the cues and visual influence of the other instruments that bleed into view. In contrast, DAW notation accomplishes precisely this feat while grappling with global and local representations of time across often immense distances. Time-flexible track sequencing, in tandem with our topographical waveforms, enables the composer to almost effortlessly rove and leap between far-flung sections, both making pin-prick edits and rendering gaping holes in the curves of the sounds. The ease of this direct work with the sequenced materiality of the waveform prompts critic and composer Douglas Kahn to opine that, beyond merely writing with sound, users of DAW notation initiate a “word processing” of sound. He describes how “workstations can cut and paste at sub-perceptual durations… they can pivot, branch out, detour, and flesh out… there is no restriction to duration… no necessary adherence to any form of [musical] interval. [DAWs] are very conducive to admixture, stretch, continua, and transformation.” I would like to run with Kahn’s word processing metaphor and apply it specifically to how we overlook the way composers currently manipulate music through the track sequencing of DAW notation. The fluidity and depth with which we sculpt digital music acquires a kind of invisibility, just like word processing, once we become comfortable in the DAW ecosystem. It is as if the composer were tangibly poking and prodding the waveform’s topography without numerous layers of idiosyncratic and technological mediation.
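Kahn's "word processing" of sound can be modeled minimally: each clip carries a start time, tracks stack vertically, and everything sums into one master timeline. The following is a toy sketch under those assumptions, not a description of any real DAW's internals.

```python
def mix_tracks(tracks, sample_rate=1000):
    """Sum a list of tracks, where each track is a list of
    (start_time_s, samples) clips, into one master timeline."""
    length = max(int(start * sample_rate) + len(clip)
                 for track in tracks for start, clip in track)
    master = [0.0] * length
    for track in tracks:
        for start, clip in track:
            offset = int(start * sample_rate)
            for i, s in enumerate(clip):
                master[offset + i] += s  # simultaneity = summed amplitude
    return master

# Two stacked tracks; the second clip enters 2 ms later:
master = mix_tracks([[(0.0, [1.0] * 5)], [(0.002, [1.0] * 5)]])
# → [1.0, 1.0, 2.0, 2.0, 2.0, 1.0, 1.0]
```

In this model, cutting a passage and pasting it minutes away is nothing more than changing a clip's start time, which is exactly the word-processor-like fluidity Kahn has in mind.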

The final feature of DAW notation largely taken for granted involves its rapid playback capacities. A familiar console of controls–fast forward, play, stop, pause, and rewind–exists as a universal feature on DAW products and facilitates constant repeatability with even greater flexibility than a cassette player or a VCR. Once the composer zeroes in on a given segment of interest, the playback of the composition can be locked between that segment’s boundaries. An audible portion can then be looped and a brief, five-second moment may be repeated and tweaked ad infinitum. With this playback tool working in tandem alongside visual variables such as track sequencing and waveform editing, the act of listening itself becomes an inseparably notational component of software like Audacity and even Finale. To clarify: printed notation was formerly necessary as a means to preserve and later create organized sound. Now, listening is on equal footing with sight in our experience of digital notation, generating a sort of feedback loop whereby audible sound is able to dictate its own trajectory in a much more embedded way than a composer might accomplish by sitting with a sketch at the piano. This of course suggests a different sort of notation entirely: one that is multi-sensory at its very core. A single waveform gesture in DAW notation provokes dual stimuli as its visual content translates almost seamlessly into an aural dimension and vice-versa. Through the dogged playback and repetition of a particular musical segment, there is an uncanny synesthesia between sight and sound. Such episodes must certainly cause difficulty: how do I tell where one sense ends and the other begins in a musical experience mediated by DAW notation? This is the third and most pressing aspect, I think, of our creative process largely taken for granted.
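The loop-locator behavior just described reduces to a one-line operation on a list of samples. Again, this is a sketch of the idea, not any product's API.

```python
def looped_playback(samples, loop_start, loop_end, repeats):
    """Play up to the left locator, then cycle the locked region."""
    return samples[:loop_start] + samples[loop_start:loop_end] * repeats

# Lock a region and repeat it "ad infinitum" (here, three times):
print(looped_playback([0, 1, 2, 3, 4, 5], 2, 4, 3))
# → [0, 1, 2, 3, 2, 3, 2, 3]
```

The triviality of the operation is the point: repetition is so cheap in a DAW that dogged re-listening becomes the default mode of engagement.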

I hesitate to point to explicit stylistic changes that result from a composer’s use of DAW notation in lieu of ink and manuscript paper. This trepidation stems not only from the wild diversity of musical genres that employ DAW notation, but also the varying creative stages in which this software is utilized as well as the countless product permutations that mix and match the variables I just described. Rather, my point is that crucial aspects of this ubiquitous music notation technology escape our attention unless we look at them through the lens of compositional engagement. First, waveforms encourage a breadth and depth of musical control, in a topographical style, not available to discrete note values whether they be printed on a page or displayed as a MIDI veneer. Second, track sequencing permits shifting focal points of reference that in turn enable a hyperactive editing style akin to word processing. This is a situation that non-DAW notation precludes through limited visual flexibility. Third, the DAW’s rapid playback controls allow listening to mingle with and meld into the visual parameters of digital notation, as if waveforms were now tangible gestures with a physicality we can toy with beyond the pixels of the computer screen. This is, of course, to say nothing of the tactile interaction that composers experience when they employ a mouse and keyboard while listening to and sculpting the building blocks of DAW notation. Ultimately, DAW software distinguishes itself from other notational styles as a synesthetic tool akin to word processing in its application. In fact, it is a notation whose design parameters, inspired by Sica’s call for relentless flexibility, unite so seamlessly that they retreat from the user’s attention rather than become more apparent, especially as a composer grows increasingly comfortable with their use. The pervasive invisibility of DAW notation in our routine contact with sound and media compels us to shine a critical light on this ingenious device for the inscription and birth of new music.

Empty Rooms

TempWerks Cover

This week I’ve been consumed by preparations for Trees and Branches, a concert I’ve wanted to do in some form for years now. For me, the legacy of John Cage’s ideas is even more fascinating than his music. Everyone seems to learn a different lesson from Cage, making the shape of that legacy vast, diverse, and constantly changing. I imagine it as a forest of trees and branches emerging from Cage’s sonic landscapes, radiating off in countless directions.

Since a single concert devoted to the entire scope of Cage’s influence would be impossible, it made sense to focus on one particular offshoot: in this case, the Californian composer and percussionist Arthur Jarvinen. There are a few articles about Jarvinen out there if you’re not familiar with his music. What made him irresistible as a focal point for this concert was his personal, immediate influence on so many composers and performers in the Los Angeles area, myself included.

Jarvinen’s sense of humor and penchant for redeeming “non-musical” sounds seem particularly Cageian, but Jarvinen’s humor was always more sardonic, and he didn’t necessarily follow Cage’s dictum to “let sounds be themselves.” In 2009 and 2010, Jarvinen wrote a series of electroacoustic pieces that combined a variety of electronic equipment he had collected over the years–strobe lights, contact mics, telegraph keys, shortwave radios, a Geiger counter–with more modern digital technology (e.g. laptops) and more traditional acoustic instrumentation (e.g. violin). He put together a group of LA-based composer-performers (Andrew Tholl, Scott Cazan, and me) to realize and perform these pieces, and named the group TempWerks. Unfortunately, Jarvinen passed away before these pieces could be premiered, but with the addition of Casey Anderson to the group, we were still able to perform them at the Sacramento Festival of New American Music in 2010 as originally planned.

On Saturday, we’re playing three of Jarvinen’s pieces together for the first time since then. Extending the vectors of influence, each member of TempWerks has also written a new piece for the group. At least in my case, I wrote with Cage and Jarvinen specifically in mind as influences, an unusual exercise for me. Generally influences are things that creep in unnoticed in the process of composing, so it was odd to have an explicit set of influences as a starting point.

Song from Twenty-Eight Rooms was the end result of this process. The piece calls for each performer to make seven field recordings of different empty rooms. Cage’s idea of silence as sound is an obvious inspiration here, but past this the resemblance is perhaps superficial. There are no chance processes in the piece—in fact it’s rather obsessively structured. There’s also a harmonic component—each performer is asked to arrange the recordings from low to high based on the dominant ambient hum from each room.

The piece is also notated in “stopwatch time,” i.e. with minutes and seconds instead of traditional rhythmic notation. This also stems from Cage, who often used this kind of notation to try and escape musical time or, to put it another way, to replace rhythm with duration. Two of Jarvinen’s pieces—Slide Show and Blinded by Enlightenment (Again)—also use a version of stopwatch time. But perhaps due to his background as a percussionist, Jarvinen’s stopwatch time pieces feel incredibly rhythmic. The five-second interval is a particularly important structural unit in his pieces, a curious choice because it’s right on the cusp of our temporal perception—a little too long to feel comfortable as a rhythmic interval and a little too short to feel comfortable as a durational interval.

This tension between “sound as sound” and “sound as music” is also a principal component of Song from Twenty-Eight Rooms, which uses the five-second interval as a departure point and adds several levels of subdivision. I briefly considered notating it with traditional rhythms in 5/4 at 60bpm, but this turned out to be impractical. In fact, deciding what tools to use to notate the piece was surprisingly time-consuming—I ended up writing most of the piece on graph paper before transferring it to the computer. After trying out various kinds of illustration software I settled on NeoOffice Draw, which seemed to strike the right balance between limitations and flexibility for this project.

While I was figuring out how to structure the piece, another influence snuck in after all: Tom Johnson, a self-consciously minimalist composer who creates startlingly vivid musical depictions of mathematical concepts. Many of Johnson’s pieces deal with combinatorics, or the field of mathematics dealing with combinations of objects. To give a basic example, let’s say you want to find all the pairs from a set with three elements. If you work it out, you’ll wind up with three sets: {1,2}, {1,3}, and {2,3}. One thing that attracted me to combinatorics is the similarity between mathematical combination and musical variation. Certain elements are repeated, while others change, in a manner that is systematic but often hard to predict as a listener.
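The pair example above is a one-liner with Python's standard library, which makes the systematic-but-unpredictable quality of these combinations easy to see:

```python
from itertools import combinations

# All pairs drawn from a three-element set:
pairs = [set(c) for c in combinations([1, 2, 3], 2)]
print(pairs)  # → [{1, 2}, {1, 3}, {2, 3}]
```

Each pair repeats one element of its neighbor while changing the other, the same interplay of repetition and change that drives musical variation.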

TempWerks score

In the score of Song from Twenty-Eight Rooms, each recording is given its own color from low (red) to high (violet), but as the piece begins, each player is confined to one sound. The combinations here are concerned primarily with ordering (permutations)—{1,2,3,4}, {1,2,4,3}, {1,3,2,4}, etc. You’re also introduced to the three main “shapes” of the piece (left-facing triangle, right-facing triangle, square), corresponding to different amplitude envelopes.
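The orderings named above are in fact the first few permutations that Python's standard library generates in lexicographic order:

```python
from itertools import permutations

# Orderings of four sounds, as at the opening of the piece:
orders = list(permutations([1, 2, 3, 4]))
print(len(orders))  # → 24 possible orderings
print(orders[:3])   # → [(1, 2, 3, 4), (1, 2, 4, 3), (1, 3, 2, 4)]
```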

Johnson’s music is certainly engaged with more advanced mathematical concepts than this, and his musical depictions of mathematical solutions are often exhaustively complete. I’m less concerned with completeness or correctness, and more interested in the emergent musical properties that mathematical processes can generate. When all the sounds, shapes, and durations are introduced, the number of possible combinations skyrockets. Instead of doggedly cataloguing all the combinations, I superimpose several patterns simultaneously, hinting at the complete gamut of possibilities in the space of a minute.

Song from Twenty-Eight Rooms

I like works of art that suggest a larger world just outside of the frame, and that’s what I’m grasping at here—chords shifting like quicksand, faint suggestions of melody, patterns just beyond the edge of comprehension. Yes, it’s a piece about absence, but also the richness of absence. People are like empty rooms, and out of these empty rooms whole universes burst forth, universes populated with all the things we can never know about them.