Summer Rewind: 10 Posts To Read Again

What have the most read, shared, and discussed posts been on NewMusicBox over the past few years? It’s an inspiring list reflecting how passionate the field is when it comes to discussing everything from race, age, and gender diversity to industry concerns surrounding vital tools of the trade. The following ten articles, spanning the past five years, are all worth another read while considering where we stand on these issues in 2016.


1. NOTATIONAL ALTERNATIVES: BEYOND FINALE AND SIBELIUS

2. THE POWER LIST: WHY WOMEN AREN’T EQUALS IN NEW MUSIC LEADERSHIP AND INNOVATION

3. CHICAGO: THE DEAFENING SILENCE OF THE BEETHOVEN FESTIVAL MUSICIANS

4. CON VIBRATO MA NON TROPPO: RETHINKING SOPRANOS

5. WHAT IS GOING ON WITH THE RECORD INDUSTRY?

6. FOUND: THREE EXAMPLES OF 21ST-CENTURY MUSIC

7. TO UPGRADE OR NOT TO UPGRADE? A NOTATION SOFTWARE UPDATE

8. THE ‘WOMAN COMPOSER’ IS DEAD

9. AGEISM IN COMPOSER OPPORTUNITIES

10. MY BILL EVANS PROBLEM–JADED VISIONS OF JAZZ AND RACE

To Upgrade or Not to Upgrade? A Notation Software Update

There have been big changes in the notation software market in recent years, and a lot of people are confused about what is going on and what the future might hold. Sibelius is dead, and Finale has been sold off? No more updates? Where did I put my old electric eraser and Pelikan pens? As a professional engraver, I use this software 12 hours a day and am deeply invested in the state of things.

In April, Avid released a new version of Sibelius, loosely called Sibelius 8 although they are shying away from version numbers now. This is the first major upgrade of Sibelius with no new engraving features.

Yes, that is correct. No new engraving features. But if you use a computer that has a touch screen, you can now use a digital pen to annotate and mark up a score, in the same way you’d use a pen/pencil to mark up a printed copy.

Sibelius 8 touch screen

Some common tablet/smartphone gestures work on touch screens, and you can navigate with the pen and do rudimentary editing.

Despite this dearth of improvements, Avid has decided to maximize its income stream: this new version introduces a draconian licensing program in which you pay considerably more for constant upgrades that may be of little use to those of us who focus on notation. Alternatively, you can purchase a perpetual license, but you must still pay a yearly fee to continue receiving updates.

  • Just want to know if you should upgrade? Feel free to skip ahead.

HOW DID WE GET HERE?

Notation software has changed our industry in countless ways. It eliminated some methods of music typography (e.g. the Music Typewriter and the Korean "stamping" method). It has lowered the cost of music preparation, made it easier to revise existing materials, and put professional tools in the hands of novices. It has also lowered the total fees paid for commissions, which once typically included additional remuneration for copying costs, because composers took over some of the tasks of materials preparation. This last change has often meant composers doing more of the work while being paid less.

However, I think we have grown a bit complacent and forgotten how fragile the software industry is. Professional music notation with computers came to prominence in the late '80s when SCORE was released and publishers found that it was well suited to many different types of music, plus it had a good system for creating scores and extracting parts. It also had excellent guitar tablature notation, which made it ideal for companies such as Hal Leonard. SCORE's strength was that it found a way to divide all of the myriad notational elements and organize them into categories of items, which allowed for easy manipulation. It's a primitive program with little musical intelligence, and it's primarily graphics based: if you have a 900-page score and insert a few bars at the beginning, there is no automatic update; you have to manually adjust things throughout, including page numbers, bar numbers, and layout. It was not particularly composer friendly, so it was mainly adopted by professional engravers and copyists. Some publishers used it in house—a few still do.

Score (version 3)

Shortly after that, Finale came along. It was slow and cumbersome, but it had a Mac and Windows version (SCORE only runs under DOS) and seemed more user friendly because it had a graphic interface with menus and tools to perform common tasks.

Finale 2014

For many years, those two programs formed the basis for converting the industry to computerized typography. However, in 1998, twin brothers Ben and Jonathan Finn released a Windows version of their unique program called Sibelius. It was designed with the idea that we should have a “word processor” for music notation, which would also serve as a professional tool. They studied what SCORE and Finale did, improved on it, and talked to many professionals to gain a deeper understanding of the needs of the industry. The paradigm they developed—a program that is easy enough that a novice can use it, yet structured in a way that a professional can come along later and improve the quality of the notation, the look, and the layout—is still its most compelling, powerful feature. Try doing the same operations in Finale or SCORE and the work hours double or triple.

Sibelius 7

Sales of Sibelius and Finale are strong, particularly in the education market, and generate enough revenue that the companies that own these products (Avid for Sibelius, and MakeMusic for Finale) can afford to continue development and add features, support existing users, and maintain the software. Yet there have been big changes in these two companies.

MakeMusic has been sold to Peaksware/LaunchEquity Partners, and they have moved from their longtime Minnesota location to Colorado. Many people who were intimately familiar with the software left the company because of the move.

Avid decided to close the primary London office where the Sibelius development team worked, and all of the long-time programmers who knew the code intimately were let go.

Sounds grim, doesn’t it? Add to that the fact that there have been two releases of Sibelius with only minor or non-existent feature changes (7.5 and “8”), and it surely makes you wonder about the future.

MakeMusic has finally ended its once-a-year Finale upgrade cycle (which was designed to generate revenue, not to benefit users). The latest version released is 2014, and while they have announced a free 2014.5 update, it only offers bug fixes and minor improvements. Finale still suffers from an old-fashioned '80s-era interface that depends on dozens of palettes, requires continual clicking on tools to accomplish basic tasks, and lags far behind Sibelius in important features like collision avoidance.

ARE THERE OTHER OPTIONS?

There are new notation products on the market, but most of them focus on tablet computing (like StaffPad). There are free programs like MuseScore and a few others that might attract users with very limited budgets.

PreSonus’s NOTION considers itself music notation software, but I haven’t seen anything done in the program that I would consider at a professional level. These programs can be fun and have potential, but I can’t imagine they will be adequate for professional engraving/copying work.

One company that hopes to upset the marketplace is Steinberg, the German firm that manufactures Cubase and Nuendo. They took the bold step of hiring the Sibelius team in London, and set them to work creating a new notation program. There is a lot of potential here. They are led by a very knowledgeable musician, Daniel Spreadbury, who was the brilliant manager for Sibelius. And the team he’s working with has created a notation program before, so they know the pitfalls. Since they have to compete with two very entrenched programs with lots of momentum, they need to build something better. They have studied some of the subtle aspects of music engraving, talked to many professionals, and have tried to learn what most notation programs still get wrong. I could write a very long article about this last item; it’s an area of deep concern. For example, horizontal spacing is poorly understood and no program has ever done it as well as plate engravers did 100 years ago. Every music notation program handles lyrics incorrectly (in terms of spacing), and vertical spacing/justification is equally problematic. Steinberg is aware of these things, and you can read about the work they are doing on Daniel’s blog.

They have also created a new music font standard, SMuFL, and a free font called Bravura, loosely based on the old Not-a-Set dry-transfer symbols, which were in turn based on Breitkopf & Härtel's engraving tools. Dry-transfer symbols were mass produced on transparent plastic sheets so they could be applied to a music page by rubbing the back with a burnisher; this was a common autography technique before computer notation software became prevalent.

Engraving sample created with Not-a-Set

Engraving sample created with Bravura


WHAT SHOULD YOU DO NEXT?

If you use Sibelius 7, I think that’s a good version to stick with for now. (That’s the version I use for most of my work.)

If you use Sibelius 7.5, that’s fine too. (This version added some small new features, but it also changed the file format, so it’s annoying to share files with people working in earlier versions.)

If you use Sibelius 6, that’s a little tougher call. It’s acceptable to work in, but there are some limitations and it’s now several versions back. I would recommend moving on from that version before long.

If you use any version prior to 6, I would recommend you upgrade to 7 or 7.5 before you get trapped in the version 8 licensing scheme. But act quickly: you'll need to buy 7/7.5 from a retailer with existing stock, since Avid is no longer selling those versions.

FINALE

Finale 2014d is pretty stable and it's the version I tend to use for most projects. But opening old files in new versions of Finale can cause problems, or in some cases won't work at all. Surprisingly, Finale's free NotePad is the best choice for opening old Finale files, and it allows for simple editing.

If you use a version of Finale prior to ver. 2012, it’s time to upgrade.

Notation software is absolutely essential for virtually anyone who needs to write down a musical idea. I have about 70,000 music files on my computer; I'd estimate two-thirds of them are in Sibelius format, the rest in Finale and SCORE formats. I don't foresee abandoning Sibelius or Finale any time soon, and I am reasonably confident the programs will remain functional and useful even if they don't add any significant new features or fix the glaring problems that remain. Perhaps Steinberg's entry into this market will shake things up and force some serious competition among all of the programs. Despite all of the grim news here, I remain optimistic and hopeful.


Bill Holab is the owner of Bill Holab Music, a company that publishes a select group of composers and provides high-end engraving/typography/design to the industry. www.billholabmusic.com

The Score Has Got You By the Short Hairs

By Michelangelo Merisi da Caravaggio [Public domain], via Wikimedia Commons


When you think about it, the concept of music notation is pretty weird. Imagine if Andy Warhol had received commissions not for paintings, but rather for paint-by-number templates, to be realized by each art interpreter on their own canvases. Of course, we all know why music developed a notation system, but a recent email exchange with French composer Sasha Zamler-Carhart reminded me of the importance of not taking our practices for granted. Assumptions are baked into every aspect of music notation, often layered one on top of the other, and they color the kinds of music we can make.

Any notation system is about trade-offs: certain elements are emphasized over others for the sake of not overwhelming our human minds with their finite capacity for detail. After all, you could theoretically employ waveform print-outs as music notation, but that’s way too much detail to be useful in most performance contexts. By necessity then, the priorities of your practice inform its notation. But as soon as your notation exists, it throws its priorities right back in your face and informs your practice, more or less to the same extent.

Make too many wrong assumptions about a notation and you’ll quickly dig yourself into a hole. On the performing end of the equation, you’ll totally miss the point when it comes to wide swaths of repertoire, delivering lackluster interpretations that fail to reflect either the composer’s intent or your expressive talents. As a composer, you’ll limit your sound world to a small set of symbols—as the saying goes, when all you have is a hammer, everything starts to look like a nail—or conversely you’ll make it unnecessarily hard for performers to realize ideas that don’t fully fit the notation you’re using. So whether as performer or composer, you’ll have basically become the score’s whipping boy: conforming your music to the notation’s limitations instead of conforming the notation to your artistry.

New Music, Early Music

 

French composer Sasha Zamler-Carhart

In our exchange, Zamler-Carhart told me about his compositional practice, the frustrations he had as a student, and how he came to find his voice as an artist. For him, the problem lay in the seemingly uncontroversial advice his teachers offered on how new music works in the “real world”:

  • You won’t get much rehearsal time
  • Make your scores as clear as possible so ensembles can play your pieces after a few readings
  • The best interpreters are technical virtuosi and perfect sight-readers

Yet Zamler-Carhart wasn’t satisfied with the results he was getting. Eclectic by temperament, he also studied early music, and in that genre he came across a set of practices that better resonated with his aesthetic:

  • Music is rehearsed and reworked “endlessly until it sounds beautiful”
  • If the music is worth performing, the time and effort required to realize the score are immaterial
  • The best interpreters are those who bring “flawless elegance” to their playing

Zamler-Carhart has taken these principles to heart, and they have informed his practice ever since. Consequently, he prefers not to “work with musicians unless they can give me a lot of time (I mean weeks and months, not hours).” Once in rehearsal, he refines his interpretations orally, teasing out nuances via performance practice instead of ultra-precise notation, and this allows him to have meaningful exchanges with interpreters who are not new music specialists.

In Zamler-Carhart’s thinking, this works because performers who deal primarily with the music of the past expect a “triangular” relationship between notation, interpreter, and performance practice. There is the assumption that some of the information required will not be in the score—not that anything goes, mind you, but rather that sources outside of the printed page are necessary. Zamler-Carhart simply leverages this set of expectations. In his words:

Many early music performers are not used to seeing lots of dynamic and articulation markings in a score. They expect those elements to be part of performance practice and to be conveyed in rehearsal. An over-specific score can discourage them and give them the impression that the music is more difficult than it really is. Once in rehearsal, however, they will probably accept any change in dynamics, articulation, timbre and even tempo as part of normal rehearsal information, and they will incorporate that into their performance.

Naturally, Zamler-Carhart’s approach has certain implications: when he chooses to rely on oral performance practice, he also de facto excludes the resulting piece from much of the new music mainstream. The American Composers Orchestra is unlikely to spend “weeks and months” rehearsing a single piece with a single composer, so if you want to write orchestral music, the “early music” model is not for you. But that doesn’t mean it has no value. Too often we assume that the standard notational model and the performance practice it entails is the only path (or at least the unquestionably superior path). This isn’t true, and when you look closer, you’ll find that it comes with its own trade-offs and restrictions.

Realizing this, Zamler-Carhart has successfully used a range of notations (and non-notations) for his pieces, from a one-line vocal staff with neumes to modernist graphic scores, and from (selective) traditional notation to orally transmitted music to be learned solely by ear. He has even changed notational systems mid-piece when appropriate. However, what he does not do is turn the score into some kind of musical crossword puzzle: his choices are always based on the idea of making it as easy as possible to realize the musical vision at hand.

Excerpt from Zamler-Carhart's oratorio Sponsus (2012), using a single line and neumes. He explains, "The piece is in fact polyphonic, but the polyphony can be realized from a single line so there's no need to notate each voice." Posted with permission.

The Right Tool for the Job

Percussionist Steven Schick in performance.

When it comes to building a career, the best musicians navigate the biases inherent in music notation in one of two ways: either (1) they restrict themselves to repertoire that works with their notational preferences, or (2) they switch notations based on the task at hand.

Percussionist Steven Schick provides a great example of the second approach. Earlier in his professional life, regular recital tours and prodigious quantities of new repertoire were central to his practice. On one end of the spectrum, he earned a reputation for his masterful interpretations of famously complex pieces such as Brian Ferneyhough’s Bone Alphabet (written for Schick in 1991) and Xenakis’s Psappha. On the other, he served as percussionist for Bang on a Can, performing works by David Lang, Steve Reich, Louis Andriessen, and other composers of the minimalist or post-minimalist vein.

These repertoires are not notated in the same way—indeed, there is significant variation even within each. The score to Reich's Drumming is traditional yet sparse, and the phasing for which the piece is famous is simply described in text. Bone Alphabet, by contrast, takes Cold War–era notational specificity to its extreme, with articulations and dynamics in virtually every bar, nonstop nested tuplets, and constant meter changes. Psappha, on the other hand, eschews traditional notation entirely in favor of a series of grids and tablatures.

Over the last decade, Schick’s musical priorities have evolved and so have the notational practices employed. He now focuses on collaborative, large-scale projects developed in tandem with composers, directors, instrument builders, writers, and other artists over an extended period of time. Take Schick Machine, his collaboration with Paul Dresher. A one-man, concert-length theater piece scored largely for invented percussion instruments, Schick Machine tells the fantastical story of a “mad scientist” percussionist who tinkers with instruments in his garage. Schick moves across a stage cluttered with dozens of custom-made instruments, narrating and performing as he goes, often to humorous effect. At certain moments the storytelling dominates, while at others the narrative gives way to purely instrumental “numbers” that feature specific groupings of percussion instruments. There are no breaks in the piece and the performance lasts over an hour.


Notating a piece like Schick Machine poses clear logistical challenges. Of course, you could detail every movement, gesture, speech, and musical figure with diagrams and staff notation. But would such a score convey the priorities of the piece? If the goal were to create a piece that gets played at every high school percussion recital the world over, perhaps. But that's not the point—I mean, just look at the title. The piece is meant for Schick alone; the score only needs to be precise enough for him to remember how to realize a performance. Thus, they didn't bother with a traditional score. Schick explains:

The piece was derived from improvisations and is still pretty largely improvised. There is a script and a sort of standard performance video that we made at the Mondavi Center…I use them together in lieu of a score.

At best, going through the motions of Cold War notational practice would have been a waste of time and a distraction. At worst, it would have “downsampled” the artistry of the piece, flattening the nuance of the Dresher/Schick vision into a long and complex series of approximations. Now, I am by no means arguing that Schick has rejected traditional notation entirely—he is still more than willing to read a score written in the new music fashion. But it is a testament to his creative talents that he can shift gears when the music demands it, as it does for this piece. A lot of musicians, even at the highest level, can’t do that.

Breaking the Unwritten Rules

JACK Quartet. Photo by Henrik Olund.

Of course, you can have a successful music career focusing entirely on a single notational practice, whether new music specificity, early music ambiguity, structured improv, standard orchestral practice, or whatever. But you still need to understand the priorities of your notation. There are always unwritten rules, and there are consequences to violating them.

A few months ago, I saw a Facebook exchange between a handful of composers who are in the same graduate composition program. One of them had written a piece that is played at a single dynamic level throughout. As such, he had simply written f at the start of each part and left it at that; there were no further dynamics and no cautionary indications. Of course, the first thing the performers asked in rehearsal was, "Where are the dynamics?" His response was a snotty, "At the start of the piece." Technically, the composer was using traditional notation correctly, but new music practice requires more specificity than he provided. The interpreters knew that most contemporary scores have a lot of dynamic detail, so without a cautionary indication, it was entirely reasonable for them to assume there had been a printing error.

Performers can, of course, get along fine without dynamics, but you can’t just assume they’ll figure out what you want—you need to point them in the right direction. Zamler-Carhart did just this with his St. Francis String Quartets, written for the New York-based JACK Quartet. The scores have virtually no dynamics, articulations, or tempo marks. Not unexpectedly, the quartet was a bit surprised at first, but they were willing “to engage with the piece and understand the logic of why, for example, a passage would be soft or loud even if it doesn’t say so… even with a concert looming.” For Zamler-Carhart, the experience was fulfilling and “the challenge improved the quality of performance.”

These exceptional cases aside, some skill in interpreting unwritten conventions is required even for the most banal of notational practices. Take the tenuto. Composer Sarah Kirkland Snider posted this question on Facebook a while ago:

Poll of performers/conductors/composers: when you see a tenuto mark (horizontal dash over a note) without a slur, do you think of it as a request to alter the dynamic or the duration? (And if you’re a performer, tell me which instrument you play.)

Her query inspired 62 responses and a spirited debate that was never resolved. Answers ranged from “a color change” to “emotional pressing” to shorter duration, longer duration, a slight dynamic accent, “more weight,” and many other contradictory statements. The fact of the matter is that you can never get a single, objectively right answer to this question, because the tenuto has evolved as a sort of open-ended placeholder, begging to be repurposed. The only thing you can really say is that it means that something in the music should change.

Stuck Inside the Box

Taking notational practice for granted can hold you back in important ways. When I was a composition student at UC San Diego, we had a residency with the Arditti Quartet, perhaps the foremost interpreters of modernist string quartet repertoire and its diaspora. But their virtuosity in that genre doesn’t mean they excel at everything else quartet-related.

During the course of the residency, Irvine Arditti made it fairly clear that he has (or had) certain blind spots when it comes to notational practice. In particular, he seems to have bought into the Cold War ideal that the score is the music, objectively and completely. On several occasions he responded to requests for a change in interpretation with, “We’re just playing what’s in the music,” implying that the quartet’s interpretation was correct and that the composer had made a mistake in notation.

Yet my fellow composers and I, debriefing over beers, couldn’t help but feel that something was missing, that there was a degree of one-dimensionality to their playing. One of my colleagues later had his piece performed at June in Buffalo by another quartet of less lofty reputation, and he vastly preferred that interpretation to the Arditti’s. The other quartet was willing to take the time to learn how his notation worked and how to interpret the musical ideas that underlay it. Consequently, they realized it more faithfully.

Irvine Arditti might counter that we had all just written shit pieces. (You can decide for yourself, at least for my piece; their rendition is embedded below.) At numerous points throughout the residency, he complained that there were no young composers doing anything interesting anymore: the best of their works were bad copies of Lachenmann, and the worst were just plain bad.


I don’t think that’s the problem. Rather, the issue is that Irvine Arditti acts as if there were only a single, objective notational practice. Since he refuses to interpret notation in any way other than the Lachenmann/Stockhausen/Xenakis model, is it any surprise he can’t find nuance in other types of scores? The young composers he calls derivative probably are—I don’t doubt he knows Germanic post-serialism like nobody’s business. But move outside of that comfort zone, and he loses the ability to assess other styles on their own terms. Anything he is not willing to decipher becomes “not well written” and anything he can decipher is by necessity derivative. It’s a self-fulfilling prophecy.

Nor am I the only person to notice the effects of this blind spot on the Arditti Quartet’s interpretations. Their Beethoven renditions haven’t exactly met with critical acclaim, after all. Reviews like the following are typical:

…for most of the concert they seemed more concerned with just getting the notes together than with interpretation. This was especially, almost painfully, evident in the opening work, Beethoven’s Grosse Fuge in B-Flat Major, Op. 133. Beethoven, of course, is not exactly in the Arditti’s wheelhouse. But that’s still no excuse for iffy intonation, long stretches of uninflected dynamics, and questionable articulation.

If there were really only one objective way to use notation, this shouldn’t have been possible for a quartet of the technical caliber of the Arditti. Yet that’s what they served up. The Ardittis are perhaps the greatest string quartet interpreters of the Cold War modernist repertoire and its offshoots. Unfortunately, they are middling interpreters of everything else, because they assume all music works the same way.

Naturally, there are many successful paths between the Arditti Quartet and Zamler-Carhart’s ever-shifting notation. Nor am I advocating that everyone structure their careers like Steven Schick. But we as musicians in the classical tradition use notation pretty much all the time, and it’s worth reflecting on how that changes us. I don’t fault Irvine Arditti for liking the kind of music he likes, or for sticking to a single performance practice. But it is undeniable that his approach to notational practice has influenced his career.

Music notation is not the tabula rasa we pretend it to be. It is rather a tool for expressing specific kinds of sonic ideas, to specific kinds of people, for specific reasons. You don't need a fancy graphic score or some kind of alternative tablature to completely transform the priorities of a notation; you just need a performance practice. If we ignore the unwritten aspects of notation, we're likely to come away dissatisfied. If we keep them in mind, conversely, we'll be more successful at creating music that speaks to us, whether as composers or performers.

***

Aaron Gervais. Photo by Tracy Wong.

Aaron Gervais is a freelance composer based in San Francisco. He draws upon humor, quotation, pop culture, and found materials to create work that spans the gamut from somber to slapstick, and his music has been performed across North America and Europe by leading ensembles and festivals. Check out his music and more of his writing at aarongervais.com.

Digital Audio Workstations: Notation and Engagement Reconsidered

First, a benign observation: the overwhelming majority of the music currently emanating from living room speakers, or being heard from passing car stereos, first passed through digital audio workstation (DAW) software of some shape and style. DAWs like Pro Tools, Audacity, Ableton, and GarageBand are generally defined by their use of sequenced tracks containing either sonic waveforms or MIDI code. Yet they are largely invisible to most musicians and listeners, unless one reflects on how digital audio is created and mediated on a day-to-day basis. When we think of a new work's creation, we imagine a score being pored over by a meticulous hand, eventually realized with lyricism and aplomb by performer(s) of equivalent musical intuition and skill. We pay fleeting attention, if any, to the subsequent inscription and manipulation that occurs in the studio after both the composer and performer have gone out for drinks at the end of the recording session. Indeed, despite an engineer or producer's best attempts, a new work cannot pass transparently through a DAW; there are always stopgaps, enhancements, deletions, and tweaks being exerted that, I think, fundamentally color the recorded piece as separate from the composer's instruction and the performer's execution. This raises the question of how best to characterize the DAW's everyday impact on our musical world.

Whether a musical work began its life within a DAW (as is the case with computer or electroacoustic music), or only passed through one with minor alteration prior to public distribution, these software tools touch nearly every auditory creation with aspirations beyond a sidewalk corner, bedroom studio, or recital hall. But I would like to take their influence one step further. Not only do DAW software products mediate recorded sound, but these very same tools can be thought of as a form of digital music notation. I broadly define digital notation as any computer-generated system that inscribes information capable of being rendered musical, including but not limited to software that employs some version of conventional staff notation. In the same way we give latitude to the printed graphic scores of Cornelius Cardew, Iannis Xenakis, or Brian Eno as legitimate notation, the DAW’s world of sequenced tracks and waveforms deserves a similarly appreciative study. The fact that DAW software has utility as a performance or recording tool should not prejudice us against its additional notational qualities. Neither should the fact that DAWs are frequently used in tandem with other notational styles when realizing a work.
Xenakis Score
Two real-world situations hopefully add weight to our re-thinking of DAW software as notation. First, when a composer like Matthew Burtner creates a piece of computer or electroacoustic music through a DAW interface, with no originating staff score, should we simply say that Burtner’s piece lacks notation? Or that this music falls outside the boundaries of what notation can accomplish? Second, consider an error-prone session with a chamber group trying to record a new work by a composer like Brian Ferneyhough. By the end of the day, almost never does the recording engineer have a single unblemished take from the work’s beginning to end. More often than not, a hodgepodge of clips cutting across movements or rehearsal letters will need to be sewn back together in the DAW and made to sound convincing, both to Ferneyhough and the eventual listener. Separate versions of the work now exist: the original manuscript showing the composer’s lofty aspirations versus the listener’s reality, a sonic Frankenstein arranged within a DAW that compiles the engineer’s best approximation. Which format has a more legitimate claim as the work’s true inscription? Instead of throwing up our hands in despair at either situation, let’s expand our thinking and our musical toolbox by including DAW software, warts and all, as a digital notation.
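
To make that splicing concrete: below is a schematic sketch in Python/NumPy of the basic operation involved. It illustrates the general technique of comping takes with a crossfade, not any particular DAW's internals.

    import numpy as np

    def crossfade_splice(take_a, take_b, overlap):
        # Blend the tail of one take into the head of another with a
        # linear crossfade, roughly what an engineer does when sewing
        # clips from separate takes into one convincing performance.
        fade = np.linspace(1.0, 0.0, overlap)
        blended = take_a[-overlap:] * fade + take_b[:overlap] * (1.0 - fade)
        return np.concatenate([take_a[:-overlap], blended, take_b[overlap:]])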

A final clarification: the term "digital notation" is frequently thought of as encompassing only the tools of the 21st-century instrumental composer, Finale and Sibelius. Yet Finale and Sibelius are far more akin to conventional DAW software than they are to ink and manuscript notation. In fact, they represent just one fork in the road of the DAW's development, employing the same sequenced tracks and playback capabilities as their progenitor software products while sacrificing waveform sounds in favor of MIDI and virtual instruments. Since the first DAW's unveiling in 1978, similar structural principles have been incorporated into later digital notation products such as Finale, initially released in 1988. Indeed, contemporary DAW software like Logic Pro, which seamlessly blends tracks of MIDI staff notation and waveforms in the same composition, shows the re-convergence of these two design paths. Perhaps joining "staff" software like Finale and "non-staff" software like Audacity together under the same notational umbrella seems unintuitive or even bizarre. But I counter that our understanding and classification of digital music notation should rely on how we engage with the medium rather than on the look of the "score" rendered through pixels. In what ways do we take for granted, on an experiential basis, how composers sculpt and explore sound materials within DAW notation? By briefly exploring three core structural features of most DAWs–waveforms, sequenced tracks, and rapid playback–I want to make the case that this style of digital notation (Finale et al. included) enables a remarkable creative work process for a composer that deserves greater consideration.

Logic Pro screen capture

Starting with Max Mathews's first forays into the MUSIC-N programming language in 1957, the 1960s and '70s found individuals experimenting with computer-generated sound able to specify individual aural events with a revelatory level of ultra-fine resolution. One could now stipulate with great precision the digital synthesis of musical parameters such as pitch, duration, envelope, and harmonic content. Csound, shown here, is a contemporary incarnation of these composition principles. A looming challenge soon arose for the early developers of digital music notation: how, in spite of this high-resolution processing, could a larger series of musical ideas be represented with clarity in the context of an entire composition? Italian electronic music composer Giancarlo Sica summarized a hopeful new method: "…a musical performer must be able to control all aspects [of a digital notation] at the same time. This degree of control, while also preserving expressivity to the fullest extent, allows continuous mutations in the interpretation of a score." Waveforms, track sequencing, and rapid playback are precisely the DAW's answer to this outline for increased digital notation flexibility.
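
To give a rough sense of what that resolution means in practice, here is a small sketch in Python/NumPy (illustrative only, not MUSIC-N or Csound syntax) that renders a single aural event from exact numeric specifications of pitch, duration, envelope, and harmonic content:

    import numpy as np

    SR = 44_100  # sample rate: samples per second

    def render_event(freq_hz, dur_s, harmonics, attack_s=0.01, release_s=0.1):
        # One precisely specified event: additive synthesis of the given
        # harmonic amplitudes, shaped by a linear attack/release envelope.
        n = int(SR * dur_s)
        t = np.arange(n) / SR
        wave = sum(amp * np.sin(2 * np.pi * freq_hz * k * t)
                   for k, amp in enumerate(harmonics, start=1))
        env = np.minimum(np.minimum(t / attack_s, (dur_s - t) / release_s), 1.0)
        return wave * env.clip(0.0)

    # An A440 lasting half a second, with a three-partial spectrum:
    event = render_event(440.0, 0.5, harmonics=[1.0, 0.5, 0.25])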

CSound screen capture

The waveform is the first feature of DAW notation that I believe is taken for granted: how exactly does a composer engage with waveforms, as opposed to our standard symbolism of staves, bars, notes, and accents? Waveforms serve as representations of sonic loudness over time, with either craggy (quick attacks and decays, with short sustains) or smooth (slow attacks and decays, with long sustains) linear shapes. They flaunt the edges, curves, and dips of a performer's dynamic treatment of audible content. Furthermore, they fulfill Sica's earlier blueprint by being able to stretch apart and compact on a whim, displaying a microsecond of curvature or minutes of slow growth in rapid succession. Yet while waveforms provide exactness in the realms of amplitude and duration through visual peaks and valleys, the vital categories of pitch and harmonic content become inaccessible.

Waveforms in DAW notation dramatically re-prioritize the musical dimensions to which composers have traditionally been most attentive, trading pointillistic melodic lines and chordal clusters for the attack, sustain, and decay of long, homogenized statements. They blend together formerly discrete notes as they resonate into and out of one another, with punctuation determined largely by phrase and cadence. In essence, our original melodic line becomes a single gestural sweep. Composers must then express themselves in this medium by sculpting a waveform's dynamic development via fades and contour trims. Through tweaking sonic envelopes like these, waveforms in a DAW notation environment lead composers to think of musical movement in spatial or even topographical terms, rather than through traditional contrapuntal or harmonic mindsets. When a composer manipulates a waveform as the building block of a DAW's musical dialect, I would describe it as far more akin to shaping pottery on a wheel or carving a block of ice than to a typical composition metaphor such as painting with brushstrokes on a canvas. The same cannot be said for Finale's species of MIDI-intense DAW notation, as a composer can't zoom deeper into a quarter note and discover more musical information to massage. Ultimately, this is a core distinction between a composer's engagement with waveforms versus staff notation: waveforms enable a practically limitless editing capacity within each topographical gesture, whereas staff symbols are bound both by their discreteness from one another and by their individual immutability. Again, one simply cannot chop away at the interior of a quarter note to render a different sort of sound.
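
A tiny Python/NumPy sketch of that sculpting process (again, an illustration of the idea rather than any DAW's implementation): the waveform is just an array of samples, and a fade or contour trim is nothing more than multiplication by an envelope.

    import numpy as np

    SR = 44_100
    t = np.arange(SR) / SR                      # one second of audio
    tone = 0.8 * np.sin(2 * np.pi * 220 * t)    # the raw waveform

    # Sculpt the gesture's dynamic profile directly: a slow half-second
    # fade-in followed by a quick 10 ms fade-out at the tail.
    envelope = np.ones(SR)
    envelope[: SR // 2] = np.linspace(0.0, 1.0, SR // 2)
    envelope[-SR // 100 :] = np.linspace(1.0, 0.0, SR // 100)
    sculpted = tone * envelope
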
Waveforms, in turn, strongly inform the next feature of DAW notation taken for granted: track sequencing. Track sequencing was developed to solve the especially thorny problem of showing relative musical time in digital notation, especially when a large number of sound events are packed into a relatively short segment. A thickly composed section of a musical work may be pulled apart and laid out on a plane of tracks that are then stacked on top of one another, or sequenced, to show simultaneity and density of texture. One might understand track sequencing as analogous to the look and feel of a printed orchestral score in conventional staff notation. Yet within such a score, one ratio of detail is maintained throughout the entire work. A conductor is unable to "zoom" in and out to examine micro-fine aspects of a particular instrumental voice while also limiting the cues and visual influence of the other instruments that bleed into view. In contrast, DAW notation accomplishes precisely this feat while grappling with global and local representations of time across often immense distances. Time-flexible track sequencing, in tandem with our topographical waveforms, enables the composer to rove and leap almost effortlessly between far-flung sections, both making pin-prick edits and rendering gaping holes in the curves of the sounds.

The ease of this direct work with the sequenced materiality of the waveform prompts critic and composer Douglas Kahn to opine that, beyond merely writing with sound, users of DAW notation initiate a "word processing" of sound. He describes how "workstations can cut and paste at sub-perceptual durations… they can pivot, branch out, detour, and flesh out… there is no restriction to duration… no necessary adherence to any form of [musical] interval. [DAWs] are very conducive to admixture, stretch, continua, and transformation." I would like to run with Kahn's word processing metaphor and apply it specifically to how we overlook the way composers currently manipulate music through the track sequencing of DAW notation. The fluidity and depth with which we sculpt digital music acquires a kind of invisibility, just like word processing, once we become comfortable in the DAW ecosystem. It is as if the composer were tangibly poking and prodding the waveform's topography without numerous layers of idiosyncratic and technological mediation.
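
The underlying model is strikingly simple to state in code. A hypothetical sketch, with each track as a NumPy array on a shared timeline:

    import numpy as np

    SR = 44_100

    def place_clip(track, clip, start_sample):
        # Lay a clip onto a track at an absolute timeline position.
        track[start_sample : start_sample + len(clip)] += clip

    track_a = np.zeros(SR * 4)   # two four-second tracks
    track_b = np.zeros(SR * 4)
    clip = 0.5 * np.sin(2 * np.pi * 330 * np.arange(SR) / SR)
    place_clip(track_a, clip, 0)        # clip at 0:00 on track A
    place_clip(track_b, clip, SR * 2)   # same clip at 0:02 on track B
    mix = track_a + track_b             # stacked tracks sound simultaneously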

The final feature of DAW notation largely taken for granted involves its rapid playback capacities. A familiar console of controls–fast forward, play, stop, pause, and rewind–exists as a universal feature on DAW products and facilitates constant repeatability with even greater flexibility than a cassette player or a VCR. Once the composer zeroes in on a given segment of interest, playback can be locked between that segment's boundaries. An audible portion can then be looped, and a brief, five-second moment may be repeated and tweaked ad infinitum. With this playback tool working in tandem with visual variables such as track sequencing and waveform editing, the act of listening itself becomes an inseparably notational component of software like Audacity and even Finale. To clarify: printed notation was formerly necessary as a means to preserve and later create organized sound. Now, listening is on equal footing with sight in our experience of digital notation, generating a sort of feedback loop whereby audible sound is able to dictate its own trajectory in a much more embedded way than a composer might accomplish by sitting with a sketch at the piano. This suggests a different sort of notation entirely: one that is multi-sensory at its very core. A single waveform gesture in DAW notation provokes dual stimuli as its visual content translates almost seamlessly into an aural dimension and vice versa. Through the dogged playback and repetition of a particular musical segment, there is an uncanny synesthesia between sight and sound. Such episodes must certainly cause difficulty: how do I tell where one sense ends and the other begins in a musical experience mediated by DAW notation? This is the third and, I think, most pressing aspect of our creative process largely taken for granted.
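
The loop brace, too, reduces to simple slicing; a minimal sketch, reusing the mix array from the previous example:

    import numpy as np

    def loop_region(audio, start_s, end_s, repeats, sr=44_100):
        # Lock playback between two boundaries and repeat the region,
        # as a DAW's loop brace does during close listening.
        region = audio[int(start_s * sr) : int(end_s * sr)]
        return np.tile(region, repeats)

    looped = loop_region(mix, 1.0, 3.0, repeats=20)  # audition it ad infinitum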

I hesitate to point to explicit stylistic changes that result from a composer's use of DAW notation in lieu of ink and manuscript paper. This trepidation stems not only from the wild diversity of musical genres that employ DAW notation, but also from the varying creative stages in which this software is utilized, as well as the countless product permutations that mix and match the variables I just described. Rather, my point is that crucial aspects of this ubiquitous music notation technology escape our attention unless we look at them through the lens of compositional engagement. First, waveforms encourage a breadth and depth of musical control, in a topographical style, not available to discrete note values, whether they are printed on a page or displayed as a MIDI veneer. Second, track sequencing permits shifting focal points of reference that in turn enable a hyperactive editing style akin to word processing. This is a situation that non-DAW notation precludes through limited visual flexibility. Third, the DAW's rapid playback controls allow listening to mingle with and meld into the visual parameters of digital notation, as if waveforms were tangible gestures with a physicality we can toy with beyond the pixels of the computer screen. This is, of course, to say nothing of the tactile interaction that composers experience when they employ a mouse and keyboard while listening to and sculpting the building blocks of DAW notation. Ultimately, DAW software distinguishes itself from other notational styles as a synesthetic tool akin to word processing in its application. In fact, it is a notation whose design parameters, inspired by Sica's call for relentless flexibility, unite so seamlessly that they retreat from the user's attention rather than become more apparent, especially as a composer grows increasingly comfortable with their use. The pervasive invisibility of DAW notation in our routine contact with sound and media compels us to shine a critical light on this ingenious device for the inscription and birth of new music.

Finally, Movement on the Notation Front

Back in July of 2012, many notation software users were shaken by the news that Sibelius’s parent company, Avid, was dissolving the program’s London-based office and its primary development team. My “Sharpen Your Quills” post demonstrated how the news resonated throughout the composer community; whether or not a composer used Sibelius or Finale (the two primary notation software options on the market today) or one of the several secondary software alternatives, it was apparent how deeply this structural change would impact the notation software industry. A year and a half later, there are finally signs of what effects that shakeup has had and what the future holds for those who see notation software as an irreplaceable tool.

Finale

Finale has weathered numerous complaints over the years regarding their policy of yearly updates (many of which seemed superficial at best), their reliance on an outdated programming infrastructure for Mac users, a reluctance or inability to match improvements brought forth by their competitors in a timely manner, and a business model that seemed geared toward the public school market while ignoring pleas from professional engravers asking for more functionality in working with complex musical notation. While Finale’s decision to forgo their yearly update model and allow their programmers more time to make extensive changes came a couple months before news broke of Avid’s adjustment to Sibelius, the timing was a lucky break nonetheless.

On November 4, Finale announced their newest version, Finale 2014. Once the announcement was made, the knee-jerk reaction for many users was to read the software's overview by the widely respected Finale plug-in developer Jari Williamson (whose reviews are required reading for anyone interested in a new Finale update). The changes ranged from the technical (they were finally able to move from the deprecated Carbon programming interface to Cocoa, a boon for Mac users, though they neglected to create a 64-bit version) to the practical (much-improved treatment of hairpins, cutting down on the need for time-intensive manual editing) to the good-god-why-did-this-take-so-long (the beginning of backwards compatibility—limited, but it's a start). But what stuck out for me were the indications that their focus had grown to re-incorporate the needs of the professional contemporary composer/engraver.

Many of the changes addressed issues that the occasional user would probably never think about or require—merging rests across layers and cross-layer accidental changes being two of the biggest—and one of the most interesting changes, the acceptance and incorporation of "open" or non-traditional key signatures, points directly to contemporary compositional techniques that have become commonplace in the late 20th and early 21st centuries. The software still has much to address before it gets to where it should be—a user interface replete with interminable dialogue boxes, the lack of the magnetic positioning that Sibelius has introduced, and the inability to intuitively copy individual items with the selection tool are major sticking points—but the fact that Finale decided to focus on these issues rather than on ancillary changes for general public usage demonstrates that Finale and their parent company, MakeMusic, may have become more serious about improving the power and depth of their software as well as its reach and breadth.

Sibelius

Since the major adjustments last year, there’s been little news on this front…the exception being a recent comment from Avid’s director of product management, Bobby Lombardi, who decided, in light of his competitor’s announcement, to let Sibelius Blog know that “Sibelius 7.5” is coming soon. In addition to a review of Finale 2014 as seen through the lens of Sibelius users, Sibelius Blog also mentions the fate of those programmers from Sibelius who were let go when Avid closed their London office; most were hired by a newcomer to the notation software marketplace—Steinberg.

Steinberg

From Steinberg’s blog page:

Steinberg set up a new London-based research and development centre in November 2012, and hired as many of the former Sibelius development team as possible to start work on a brand new scoring application for Windows and Mac. There are currently twelve of us in the team, and all of us were formerly part of the Sibelius development team.

This is one of the more interesting developments on the music notation front in a very long time. By releasing most of their A-team developers, Avid unintentionally created a new competitor in a rapidly growing marketplace. What has been most fascinating about this new endeavor is the transparency with which the Steinberg team has chosen to build their new application…so new that it doesn't even have a name yet. That transparency can be seen most clearly in the Steinberg "Making Notes" blog run by product marketing manager Daniel Spreadbury (again, formerly of Sibelius). Taking a page from Hollywood, where production vignettes are now commonplace many months before a film is released, Steinberg is taking the unusual step of discussing their creation process as they go.

Here Spreadbury discusses the nuts and bolts of putting together aspects of a notation system that would seem very simple but are both conceptually and logistically extremely complex:

Another important prototype is a means to visualise music on staves. Several months ago, a very simple visualiser was written that shows rhythms, but not pitches, of notes belonging to a single voice on a single-line staff. Since then, we’ve done work on determining staff position and stem direction for notes and chords, and also have the capability to assign multiple voices to the same staff, but we’ve had no way to visualise the result on a staff. Now our test application can optionally show music for multiple voices on a five-line staff, and can display multiple staves together.
It’s still very crude: notes are not beamed together, the spacing is pretty terrible, and things like ties are drawn very simplistically. This is not by any means the basis for how music will eventually appear in our application. But it is an important diagnostic tool as we continue to add more and more musical building blocks…
Our ethos is that our application will be most useful if it does automatically what an experienced engraver or copyist would do anyway. If an engraver and copyist can trust the musical intelligence built in to our application to make the right decisions, it will become a truly fast and efficient tool, and hopefully the one they will come to prefer over and above the others at their disposal.

Where this new software will end up is unclear—they're still at the rough, early stages—but from what is currently available, this new addition to the pantheon of notation software applications has the potential to become a third major platform, one that combines the best characteristics of both Finale and Sibelius. What this means for composers, and subsequently the entire new music community, is as varied as the number of ways in which these applications are used. Some composers use them exclusively as engraving tools, while others eschew paper and pencil altogether and compose directly in the application. Ultimately, if software developers are able to improve both the ease of use and the quality of the finished product, then we all come out ahead.

Notational Alternatives: Beyond Finale and Sibelius

“Finale or Sibelius?” is a question that composers love to ask other composers. It’s often taken as a given that if you write music professionally, you’re already using one of these popular notation software packages. This may be about to change—with the news of Sibelius’s development team being unceremoniously dumped by Avid and subsequently scooped up by Steinberg, we may have a third variable to add to that equation. ThinkMusic, another newcomer, promises an iPad app in the near future, but has already generated controversy for seeming to use Sibelius in its video mockup.

In the meantime, there are a variety of other, lesser-known options for notation software already lurking out there. None of them may have the same clout with professionals as Sibelius and Finale—yet—but many are gaining ground. Whether they present robust alternatives for creating notation (MuseScore, LilyPond), or alternative ways of thinking about and deploying notation (Abjad, JMSL, INScore), each has its own advantages and its own dedicated following.

MuseScore: Open Source Upstart
MuseScore started out in 2002 as a spinoff of MusE, an open source sequencer created by German developer and musician Werner Schweer. Until 2007, however, MuseScore was an obscure piece of software only available on Linux. In 2008, Thomas Bonte and Nicolas Froment began to work on bringing the software to a wider audience. Now, over 5000 people download MuseScore every day. Bonte credits the software’s newfound success to its extremely dedicated developers and early adopters. Its open source community now boasts more than 250 contributors adding to the project. This includes making the software available in new languages, fixing bugs, writing documentation, creating video tutorials, and so on.


While Bonte admits that MuseScore is not yet as feature-complete as Sibelius or Finale, he highlights the price tag: MuseScore is completely free, while the others can cost as much as $600. Bonte also points out that, compared to the others, MuseScore is a fairly young piece of software. He anticipates that in a few years, "Musescore will have 80% of other notation software's feature set on board."

Another long-term advantage is MuseScore's open source status, says Bonte:

Anyone can look into the code, change it and distribute it further. This is not possible with proprietary software like Sibelius, Finale, and Score. Given the recent uproar in the Sibelius community about Avid closing the London office, it seems now more than ever appropriate to say that choosing free and open source software is the right thing to do. What happened with Sibelius may happen with any other proprietary software, but cannot happen with MuseScore or LilyPond. The source code is available to everyone; no one can take it away.

This openness made MuseScore the notation software of choice for the Open Goldberg Variations, a project to create a new, quality edition of J.S. Bach’s beloved work that would be freely available in the public domain. This time, the venerable work had a very modern path to publication: the project was crowdfunded through Kickstarter and remained open for peer review on musescore.com before being made available for download. The Open Goldberg Variations can be found on the IMSLP / Petrucci Project website, though anyone is welcome to host or share it.

Screenshot of Open Goldberg Variations iPad app

Musescore.com is MuseScore’s latest initiative. Launched in the fall of 2011, musescore.com is an online sheet music sharing platform, and the only thing that MuseScore charges for. Bonte compares the business model of the site to Flickr or SoundCloud—subscribers pay a fee ($49 per year) for more storage and features, essentially. Bonte says this revenue stream allows them to continue to develop MuseScore full time, while maintaining the open source status of the software itself.

LilyPond and Abjad: A Marriage of Composition and Code
Jan Nieuwenhuizen and Han-Wen Nienhuys are the creators of LilyPond, another open source music notation package. The project that would eventually become LilyPond had its genesis in 1992, when Nieuwenhuizen was playing the viola in the Eindhovens Jongeren Ensemble, a youth orchestra conducted by Jan van der Peet. According to Nieuwenhuizen, the players struggled to read from computer printouts so much that they soon switched back to handwritten parts. That got him thinking: “Fully automated music typesetting done right—how hard could that be?”

As it turns out, it was not terribly easy. Using the typesetting system TeX as a foundation, Nieuwenhuizen began working on the problem with Nienhuys, a French horn player in the orchestra and math student at the Eindhoven University of Technology. But it wasn’t until four years later, in 1996, that LilyPond finally emerged after four flawed prototypes. Despite being plagued by difficulties, however, they found that they couldn’t leave the problem alone. “We never realized how hard it was to produce beautifully typeset music automatically until it was too late and we were hooked,” Nieuwenhuizen admits.

Since those humble beginnings, LilyPond has matured into a full-fledged community project, with over 50 authors contributing to the latest stable release for Windows, Mac OS X, and Linux. This includes one full-time developer, David Kastrup, who makes a living—“just barely,” says Nieuwenhuizen—from donations to the project, which Nieuwenhuizen sees as a major milestone.
Because LilyPond is primarily a typesetting and engraving program rather than a compositional tool, its user paradigm differs somewhat from programs like Finale, Sibelius, and MuseScore. As with SCORE, the most common engraving program until Finale came along, musical notation is first entered as text characters; this separates the step of encoding the notation from the act of graphically displaying it, while ensuring a consistent layout. Nieuwenhuizen admits that this can be scary or intimidating at first to composers unused to working this way, but contends that in itself, LilyPond is “quite intuitive and easy to use.” He also foresees more community development of graphical front ends, web-based services, and tablet apps that will make LilyPond even more accessible to those just starting out with the software.
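
To make the text-entry paradigm concrete, here is a minimal sketch of the workflow, written in Python purely for illustration. The LilyPond fragment uses only core syntax, but the version number and file names are mine, and the sketch assumes a LilyPond installation whose lilypond command is on the PATH; none of this is prescribed by the LilyPond team.

```python
import subprocess
from pathlib import Path

# LilyPond input is plain text: pitches, durations, and layout commands are
# typed as characters, and the program compiles them into engraved output.
LY_SOURCE = r"""
\version "2.24.0"  % illustrative; use whatever version you have installed
\relative c' {
  \time 3/4
  c4 d e |
  f2 g4 |
  a2. \bar "|."
}
"""

def engrave(ly_text: str, stem: str = "example") -> Path:
    """Write a .ly file, then invoke the lilypond CLI to typeset it."""
    src = Path(f"{stem}.ly")
    src.write_text(ly_text)
    # `lilypond example.ly` produces example.pdf next to the source file.
    subprocess.run(["lilypond", src.name], check=True)
    return src.with_suffix(".pdf")

if __name__ == "__main__":
    print("Engraved:", engrave(LY_SOURCE))
```

The separation described above is visible here: the musical content lives entirely in the text, and the engraving step is just a compile.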

This community may be LilyPond’s greatest asset, with a significant amount of overlap between users of the software and those tinkering with the software itself. This new generation of composers who code is extending LilyPond’s functionality into unforeseen territory. For example, Victor Adán, Josiah Oberholtzer, and Trevor Bača are the lead architects of Abjad, which allows composers to write code that acts on notation in LilyPond in “iterative and incremental” ways. In other words, instead of creating notation directly, composers write code that Abjad then translates into a format that LilyPond can interpret to generate notation. As a result, instead of just manipulating individual notes and objects, Abjad can manipulate higher-level structures—like changing the dynamic level of every instance of a particular note, to give one basic example. Abjad uses the Python programming language, known for its readability and flexibility, as its foundation.
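
To give a flavor of what that looks like in practice, here is a minimal sketch of the dynamics example above, assuming Abjad 3.x (pip install abjad) and a working LilyPond installation. The five-note staff and the exact calls are my own illustration rather than code from the Abjad developers, and the API has shifted across versions, so treat this as a sketch:

```python
import abjad  # assumes Abjad 3.x plus a LilyPond installation

# Build a short staff, then act on a higher-level structure: every
# instance of a particular pitch receives a new dynamic.
staff = abjad.Staff([abjad.Note(n) for n in ["c'4", "d'4", "c'4", "e'4", "c'4"]])

target = abjad.NamedPitch("c'")
for note in staff:                    # a Staff is a container of components
    if note.written_pitch == target:  # every middle C in the staff...
        abjad.attach(abjad.Dynamic("ff"), note)  # ...is marked fortissimo

# Abjad works by translating its object tree into LilyPond source:
print(abjad.lilypond(staff))  # the generated LilyPond input
# abjad.show(staff)           # would hand the result to LilyPond for a PDF
```

The point is not the five notes but the loop: because the score is an ordinary Python data structure, a transformation that would take many clicks in a graphical editor is a few lines of code.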

Excerpt of Trevor Bača’s Čáry created in LilyPond

Writing music with Abjad presents a departure from the traditional compositional process. For Bača, it occupies a position “somewhere between the notation packages like Finale, Sibelius, and Score, and the composition environments like OpenMusic and PWGL.” He describes the process of working with Abjad as a “two-part loop,” alternating between writing code to model parts of a score and considering the notation as visualized in LilyPond. This iterative process of continual revision blurs the boundaries between programmatic and musical thinking, as well as between composition and pre-composition.
The creators of Abjad have also worked closely with practicing composers in the course of development. One of these, Jeffrey Treviño, is already well versed in the musical uses of technology; in the course of writing Being Pollen, a work for percussion and electronics based on the poetry of Alice Notley, he estimates that he used nine different pieces of software. With Abjad he had a specific application in mind—he hoped it would help him notate the rhythms of Notley reciting her poem. He describes part of the process here:

I used Max/MSP to tap along to her recitation and make a text file of millisecond counts for when each syllable occurred. I tightened these up in Audacity to line up precisely, and then I exported the numbers again. I wanted to use these numbers to make a notation in Abjad, but Abjad didn’t have a quantizer… We ended up looking up some research together, especially Paul Nauert’s writing on Q-Grids quantization, and Josiah ended up making the quantizer for Abjad.
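
The quantizer Oberholtzer built draws on that Q-grids research; purely for intuition about the task Treviño describes, here is a far simpler nearest-grid-point sketch in Python. The tempo, subdivisions, and onset times are all invented for illustration:

```python
from fractions import Fraction

def quantize(onsets_ms, tempo_bpm=60, divisions=(1, 2, 3, 4)):
    """Snap millisecond onsets to the nearest grid point within a beat.

    A naive nearest-neighbor quantizer, not Nauert's Q-grids: for each
    onset, try every allowed subdivision of the beat and keep the grid
    point with the smallest error.
    """
    beat_ms = 60_000 / tempo_bpm
    quantized = []
    for t in onsets_ms:
        beat, offset = divmod(t, beat_ms)
        best = min(
            (Fraction(round(offset / (beat_ms / d)), d) for d in divisions),
            key=lambda grid_point: abs(offset - float(grid_point) * beat_ms),
        )
        quantized.append(int(beat) + best)  # position in beats
    return quantized

# Hypothetical syllable onsets (ms), as if tapped along to a recitation:
print(quantize([0, 240, 510, 770, 1000]))
# -> [Fraction(0, 1), Fraction(1, 4), Fraction(1, 2), Fraction(3, 4), Fraction(1, 1)]
```

A production quantizer also has to weigh competing grids against one another and keep the results playable, which is exactly where Nauert’s research comes in.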

In this case, Treviño’s needs as a composer had a direct impact on the development of Abjad, and this in turn allowed Treviño to accomplish something musical that would have otherwise been impossible, or at least far more difficult. Treviño draws an analogy between this model of collaborative composing and high-level chess:

Remember when it was a big deal that Deep Blue beat [Grandmaster Garry] Kasparov in 1997? No one mentions that they did a tournament after this where people could use computers to assist them. When Kasparov had a computer, he beat Deep Blue—but most intriguingly, an amateur aided by a computer, not Kasparov, won the whole tournament. So, I’m a lot better at writing twenty-part counterpoint that doesn’t break any rules if a computer can help me. But the skill set it takes to get the computer to know the rules is a very different skill set than the skill set we teach students in counterpoint classes. That’s all to say—I think it’s best to think about all this as totally redefining the important skills of the creative act, so that formerly conventional amateur/master relationships might be turned on their heads. Rather than expanding or enabling skills that matter currently, this proposes a totally new set of competencies and approaches to the task.

(N.B.: Your author independently thought of this analogy, so it must be a good one.)

Video of Jeffrey Treviño’s “Being Pollen” performed by Bonnie Whiting Smith (composed with help of Abjad/LilyPond)

JMSL and INScore: Notation in Motion
Nick Didkovsky, the primary developer of the Java Music Specification Language, is a guitarist, composer, and programmer who leads the avant-rock octet Doctor Nerve and teaches computer music classes at NYU. But for many years Didkovsky’s parallel interests in music and computers remained independent, never intersecting. What finally inspired him to combine them was an article by Douglas Hofstadter in Scientific American about game theory and a particular kind of lottery called the Luring Lottery, in which the collective desire to win is inversely proportional to the amount of the prize. Didkovsky says, “[The Luring Lottery] is a beautiful and simple idea that binds people together in a simultaneously competitive and cooperative relationship… I wanted to realize that structure musically and thought computers might need to be involved.”

He turned to Pauline Oliveros for help, and she directed him to Larry Polansky. Polansky, together with Phil Burk and David Rosenboom, had created the Hierarchical Music Specification Language (HMSL), a programming language offering a suite of musical tools that turned out to be perfect for Didkovsky’s task. Today HMSL might be most easily compared to other audio programming languages like Max/MSP and SuperCollider, but in an era when these languages were in their infancy, what appealed to Didkovsky about HMSL was its open-endedness: “You can basically do anything… no two HMSL pieces sound even remotely the same because you’re not starting on a level high enough to influence musical tastes. It’s a very non-stylistically biased environment for musical experimentation. And so I think it’s kind of deliberate that it’s kind of a tough environment to work in, or at least it just doesn’t come with a lot of bells and whistles.”

For the next ten years, Didkovsky continued to develop music software with HMSL on the Commodore Amiga for Doctor Nerve as well as other ensembles like the Bang on a Can All-Stars and Meridian Arts Ensemble. When the Amiga platform began showing its age, Didkovsky and Burk had the idea to rebuild HMSL in Java, which could be run on multiple platforms, and in 1997 Java Music Specification Language was born.

The most significant change to JMSL since those days is the addition of a music notation package. With his commitment to traditional instruments, it made sense to Didkovsky to use JMSL to drive a notation environment—and the result was, in his words, a “huge catalyst” creatively. In addition to the many pieces Didkovsky has written using JMSL since then, it has also become a tool used by composers all over the world:

One of my former students, Peter McCullough… developed an extensive suite of personal tools that did very unusual things to scored music, designing both generative and mutator routines that populate the score with notes and transform them once they are present… [progressive metal guitarist and record producer] Colin Marston wrote a series of notated pieces that are impossible for a human being to play—superhuman, intensely noisy pieces performed at high tempo that just rip by and throw glorious shrieking noise in your face, while the staff notation is populated by thick clusters of notes flashing by.

Didkovsky is quick to note that, while traditional staff notation is an important feature of JMSL, it represents only part of what the software can do. Many of the applications of JMSL have little to do with standard music notation—for example, the Online Rhythmicon, a software realization of the instrument Leon Theremin built for Henry Cowell, or Didkovsky’s MandelMusic, a sonic realization of the Mandelbrot set.

Nonetheless, JMSL’s notation capabilities may end up being its most widely used feature, especially with the advent of MaxScore. Didkovsky collaborated with composer Georg Hajdu to create MaxScore, which allows JMSL’s scoring package to communicate with the more popular audio programming environment Max/MSP. Currently, most of Didkovsky’s development energies are directed towards improving MaxScore.

MaxScore Mockup

INScore, created by Dominique Fober, is a similar synthesis of ideas from notation software and audio programming, though Fober is quick to stress that it is neither a typical score editor nor a programming language. Fober is a musician with a scientific background who found himself doing more and more research related to musical pedagogy. He now works for Grame, a French national center for music creation, where he conducts research related to music notation and representation.

INScore follows from Fober’s experiments based on the idea that, by providing immediate feedback to the performer, musical instruments act as a “mirror” that facilitates learning. Fober wanted to design a musical score that could act as a similar sort of mirror of musical performance, in the form of graphic signals informed by the audio that could augment traditional musical notation. Fober refers to this approach as an “augmented music score.”

“There is a significant gap between interactive music and the static way it is usually notated,” says Fober. Even with live electroacoustic music, performers generally read from paper scores that give an approximation of the electronic events. There are tools like Antescofo that allow computers to follow a score, and tools for the graphical representation of electronic music, like Acousmograph and EAnalysis, but INScore’s approach is different. “[With INScore] the idea was to let the composer freely use any kind of graphic representation—not just symbolic notation but images, text, and video as well—to express his or her thoughts in a form suitable for performance.”
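
In practice, INScore is driven by OSC messages, so symbolic notation, text, and images all arrive the same way: as messages addressed to objects in a scene. The sketch below uses the third-party python-osc package; the /ITL address space, default UDP port 7000, and message shapes follow my reading of the INScore documentation and should be verified against it:

```python
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

# INScore listens for OSC messages and builds the score from whatever
# objects it is sent. Addresses and argument shapes are assumptions here.
client = SimpleUDPClient("127.0.0.1", 7000)  # INScore's documented default port

# A symbolic-notation object (Guido Music Notation)...
client.send_message("/ITL/scene/score", ["set", "gmn", "[ c d e f g ]"])
# ...next to an arbitrary text object: mixed representations in one score.
client.send_message("/ITL/scene/title", ["set", "txt", "Augmented score demo"])
# Graphic attributes are ordinary messages too, e.g. repositioning an object:
client.send_message("/ITL/scene/title", ["y", -0.8])
```

Because everything is a message, a running program can redraw or reposition parts of the score mid-performance, which is precisely the gap between interactive music and static notation that Fober describes.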

Montreal-based composer Sandeep Bhagwati used INScore for an entire concert of works entitled “Alien Lands” in February 2011. Meanwhile, British composer Richard Hoadley has written Calder’s Violin for violin and computer, premiered in October 2011. Calder’s Violin uses INScore to dynamically generate the violinist’s score in the course of the performance. INScore is not solely aimed at composers, however, and it has also been used for pedagogy, for sound installations, and to model analytic scores of electroacoustic music.

Videos of Richard Hoadley’s “Calder’s Violin” (created with INScore)

The Future of Notation?
Despite the vast differences among all of these notation software packages, one thing they have in common is that each offers something, small or large, that Sibelius and Finale don’t. If you’re looking for something easily accessible and free, MuseScore and LilyPond are well worth checking out. If you’re interested in algorithmic or interactive notation and are willing to deal with a somewhat sharper learning curve, Abjad, JMSL, and INScore are capable of remarkable things. Not to mention the many options I haven’t discussed—BACH Automated Composer’s Helper, Heinrich Taube’s Common Music and FOMUS, IRCAM’s OpenMusic, and the Sibelius Academy in Helsinki’s PWGL. With all of these tools at our disposal, chances are we might not be hearing “Finale or Sibelius?” for much longer.

Try To Remember

Music is not just something created by musicians; someone must perceive it in order for it to be. While this fact is obvious to most (someone will say, “Well, DUH!”), an entire culture of music making is dedicated to treating it as a shortcoming to be overcome, attempting to reify music as notation. While this practice is not limited to (and actually predates) Western civilization, the forms of notation most widely used to direct musicians about what to sing or play are modeled on its five-line stave and assortment of iconic shapes, esoteric terms, and abstruse abbreviations. That most improvising musicians can read standard Western music notation isn’t any great news, yet many music fans perceive a dichotomy between reading music and improvising it.

Of course, this is a fallacy. Playing music interpreted from even the most detail-oriented notation includes elements of improvisation (especially when sight reading). So improvisation is ubiquitous; it always comes down to a matter of degree. But we live in a world where things are most easily explained or taught in relation to binaries, such as: good/bad, high/low, left/right, or right/wrong. This either/or paradigm of acceptance and intellectual digestion profoundly shapes how the performers’ actual reification of music is perceived by their audience (itself a binary). The comparison of pianists Sviatoslav Richter and Glenn Gould is one of the classic examples of this. Richter’s stoic performance style is the near antithesis of Gould’s flamboyant mannerisms at the instrument. Gould retired from the stage for the last half of his career, performing almost exclusively in recording studios, while Richter recorded mostly in live performance. Richter championed the music of Schubert while Gould felt that its non-contrapuntal aspects weren’t part of serious music making. Gould played mostly from memory while Richter, who supposedly had a photographic memory, often read from the score in performance. Though both are among the most iconic musicians of the 20th century, there are those who insist that one was better than the other. While some of both camps’ detractors (and supporters) cite issues gleaned from the purely audible aspects of their performances as the sole criterion for these judgments, others point to what they perceived when watching them perform: Richter’s stoic style seems “uninvolved,” Gould’s mannerisms “detract” from the music.

This binary-based approach to music critiquing (which is part of what we do when we listen) influences how artists approach and present their music. One of the persistent myths about early jazz is that it was mostly group improvisation. This was reinforced by the fact that many, if not most, early jazz artists memorized their programs and didn’t need to read onstage. This allowed for another myth: that jazz musicians were unschooled and had a “genius” for music that could only be explained by a cultural heritage based on racial identity. The image of musicians playing from memory held great power for a Eurocentric audience infatuated with literacy; the pre-jazz concertgoer was used to seeing musicians read “serious” music. Musicians who didn’t read at these concerts were the soloists, “geniuses” who were playing concertos of the masters while the orchestra read the accompaniment. Closer to the truth might be that jazz musicians were trained differently than non-jazz musicians—whatever that means for musicians who were playing this music that wasn’t yet called “jazz.” For example, the difference between the enharmonic intervals of a dominant seventh and an augmented sixth didn’t get drummed into the jazz caste, so the traditional rules of their voice leading were effectively combined. This approach is now a part of American music. While this seems subtle on its face, the effect on melodic and harmonic development is clear even to the untrained ear.

Of course, almost anybody can memorize things, especially music. It’s like the ABCs and, for most, fun to do. I’ve taught music in middle school programs and have been surprised at the hefty repertoires of popular music that 12- to 16-year-olds commit to memory. To boot, they knew when I made a mistake in my part and could, with very little prompting, sing the harmonies and/or antiphonal back-ups of their material. (We are definitely hard-wired for music and should take better advantage of it in our educational institutions.) But it’s still considered a sign of extra-special talent when jazz musicians play from memory. Maybe this is because the craft of reading music forces one to engage memory for different purposes when performing. When I read music, I concentrate on remembering fingerings, clef assignments, default key signatures, and the like and, because I don’t have to remember the melody that I’m playing, I’m flexible; I’m guided by whatever notes and indications are on the paper in front of me. When I play from memory, I’m more rigid and tend to play the same things that I believe belong in an idealized performance.

This is something that I didn’t realize by myself. I first heard it described by guitarist-composer-philosopher Omar Tamez when I was playing at the festival he curates in Monterrey, Mexico. I had been researching jam-session culture at the time and we were talking about our experiences playing at them. He brought up an experience he had where he went to sit in at one (in New York, I think) and wanted to have the sheet music for a blues that was going to be played. He said that the person calling the tune thought it was ridiculous that Omar should need the sheet music for a blues, but Omar insisted on having it. When I asked him why (I usually don’t bother with sheet music at jam sessions, unless I’ve never heard the tune and don’t think I can learn its chord progression in one or two passes), he explained that when he reads from the page, he’s constantly seeing new ways to approach the tune, rather than relying on what he’s already taught himself about it. I imagine Richter, playing a Schubert sonata for the hundredth time, letting the page show his eye a fresh or deeper understanding of the music that he memorized long before the third time he performed it. I wonder, is that insight something that a member of the audience who was familiar with the 99th performance might have been able to perceive?

Omar’s music is designed to send its players into a very open improvising space. Even when he writes music over a repeating formal template (like the 32-measure AABA song structure), he prefers that everyone is willing to abandon it if the performance is better served. When the structure is abandoned, which happens often, the challenge becomes how to craft a group improvisation that “remembers” the tune. It’s very different from the standard approach where the form reigns supreme and the challenge is for the composer to write something distinctive that transcends the improvisations. I’ll be performing with Sarah Jane Cion’s trio tomorrow at the PAC House Theater in New Rochelle. Sarah is a pianist whose formidable technique has led her to write music that does just that. In this case, the challenge is to make one’s voice compatible with the vision that the composer (who is also the performer) has been able to transmit to paper.

I’ll be playing with Omar tonight in pianist Angelica Sanchez’s quartet at IBeam in Brooklyn with Satoshi Takeishi on drums. Omar’s performance style is much like Gould’s: highly animated, with nearly every musical gesture articulated by an obvious physical one. Sanchez offers the antithesis: a performance that uses little motion other than what is needed to play the piano, no matter how virtuosic she gets, and her compositions are similar to Tamez’s in terms of open-endedness. Satoshi’s performance style is somewhere in between Tamez’s and Sanchez’s, but I’m absolutely unqualified to discuss my approach. We all played together at Konceptions at Korzo on Tuesday and we’ll be playing a lot of the same music that we did then. Although nearly everything that we’ll play in tonight’s performance will be improvised, I’m still going to try to see if I notice how the printed page influences it.

Schizophrenic Composer/Performer

As an undergraduate studying saxophone and composition, I avoided writing for my own instrument. I felt that I already knew the textural nuances, sonic transformations, and difficulties that each individual technique creates. Instead, I wanted to push myself to write for instruments that were unfamiliar to me. After gaining more experience as a composer, I felt more comfortable writing for myself as a saxophonist. Later, in graduate school, I was commissioned to write a duo for tenor sax and percussion for the 2011 SoundSCAPE New Music Festival, which I would also premiere myself. This led me to discover some of the communication issues that often come between a composer and a performer: there are discrepancies between the composer’s intent and the performer’s limitations, issues regarding notational clarity, and difficulties with idiomatic and non-idiomatic writing for the instrument. Further complicating this matter was the fact that I would be both the composer and the performer, causing debate within myself as I assumed both roles and struggled with the above-mentioned dilemmas.

It was difficult to focus my enthusiasm while composing, amid the myriad technical possibilities, knowing I would premiere the piece. Utilizing performance techniques I was familiar with, such as slap tongue, multiphonics, and microtones, I began to weave the piece into a unique textural sound world. Because I enjoy these techniques, I pushed the limitations of both my own ability and that of other saxophonists. While composing, I failed to think as a performer, creating intricate lines with a high degree of difficulty. My amusement subsided with the increasing complexity as I strung quarter-tone 32nd notes into polyrhythms between the sax and percussion. It became obvious that the piece (which I titled Solitary Confinement) would put me in a position to have to decipher my own thoughts on a level that I had never encountered before.

The process of learning to play a piece that you composed yourself is different from learning to play a piece by another composer. As a point of comparison, let us examine the process of learning a piece with many extended techniques from the point of view of a performer. Recently, I learned Gérard Grisey’s Anubis et Nout for bass saxophone (originally for contrabass clarinet). The rhythmic timbral shifts, alternate fingerings, multiphonics, and slap tongue are precisely woven together. It is evident that Grisey had highly detailed knowledge of extended techniques, and his compositional style requires accuracy. With this in mind, the immediate approach to learning Anubis et Nout is one of precision in every aspect. Every rhythm and timbral shift must be exact for the full effect of the piece to come through.

Yet sometimes a composer’s artistic intent is misunderstood by the performer. Although it may be healthy for a performer to have questions about the performance practices used in the piece, it is important for the composer to notate with clarity that which he feels strongly about. This often depends on the composer’s understanding of the instrument.

This miscommunication may go the other way as well if the performer is not familiar with techniques the composer uses. As a saxophonist learning Anubis et Nout, any question I had about specific techniques, or about the attainability of any sounds or textures, was usually easy to resolve. If I had difficulty with a technique, I knew that it was an issue with me as a performer, not one with the composer. However, what if the barrier between performer and composer were nonexistent and the composer and performer were the same person? Would this create new issues in the process of learning a piece of music? How would this affect issues of communication between composer and performer?

The first time I sat down to learn Solitary Confinement as a performer, I experienced much of the same frustration as with Anubis et Nout. However, the questions of various techniques’ attainability were not as easily answered: as the composer, I found myself wondering how to change the piece to make it more feasible instead of asking myself how to attain the techniques needed. Where was the line between performer and composer? It becomes difficult to determine when a revision is desired by the composer for aesthetic reasons, or when the performer wants an edit due to technical demands.

After the first dismal practice session, I spent the next three days looking at the score—no instrument—penciling in fingerings for quarter-tones, and feeling more doomed with every additional mark. I realized I would not be able to learn what I wrote, and so I moved on to revising. Was this piece so difficult that saxophonists would not be able to learn it accurately? Or was I, as the performer, not capable of achieving the composer’s vision? Regardless, I knew I would have to simplify some of the techniques to make the piece attainable; but what was the proper balance between the composer’s and performer’s needs?

Score Example #1: quarter-tones found early in the piece. © 2012 Kevin Baldwin

I took steps to simplify awkward passages, struggling with the dichotomy of composer and performer roles. As a performer, I quickly realized the non-repetitive quarter-tone runs were impractical. It would be unreasonable for me as the composer to expect performers to learn 3 to 4 minutes of quarter-tone runs that lack a consistent pattern. Though using the 12-tone chromatic system would have made the piece more manageable, as the composer I had deliberately set out to create a less readily comprehensible language for the listener. To compromise, I used one of the earlier phrases of quarter-tone runs as a basis for subsequent phrases, with a degree of variance. This satisfied both the composer and the performer: the quarter-tones remained in the piece for aesthetic reasons, while the material was simplified for the performer.

Score Example #2: original phrase as stated, and varied as the piece progresses, as seen in Variation I below. © 2012 Kevin Baldwin

Score Example #2a: Variation I. © 2012 Kevin Baldwin

Initially, Variation I had very few of the same pitches as the original phrase. The amount of work required to execute these passages far exceeded the artistic benefit. My struggles stemmed from identifying the source of my desire to edit the music. Despite the hardship of separating musical personae, I approached the edits with both the performer’s and composer’s dilemmas clearly in mind, thus simplifying the revision process. Still holding true to the original aesthetics of form, timbre, and phrasing, I reached a compromise between composer and performer, diminishing the inner turmoil. But once I made these initial revisions, problems arose again at the start of ensemble rehearsals.

Score Example #3. © 2012 Kevin Baldwin

Polyrhythms between the sax and percussion created large ensemble problems that were difficult to remedy. The issue of keeping both parts together overshadowed the difficulty of playing the correct notes. Suddenly, the internal fight between performer and composer reawakened. As a performer, I finally became proficient at the quarter-tone runs, but the complexity of the interplay between saxophone and percussion consumed our attention. I refused to compromise my aesthetics in order to simplify the rhythmic intricacies seen in Score Example #3 because the overall chaotic texture and raucous edge were a priority. Therefore, the passage remained intact.

Problems within a piece do not only arise from the performer’s side. As a composer, I wondered if the notation and instructions in Solitary Confinement were clear enough for others to perform the piece with the same integrity and general interpretation as I intended. I knew what I wanted the piece to sound like, though I was unsure that the notation fully reflected my intention. Despite several other musicians’ reviews and approval of the score, the concern remained with me until another duo sought to perform the piece. Their interpretation fulfilled my intentions, making me feel that I had succeeded in communicating my ideas on paper well enough that they would be understood by others.

Through this experience, I set out to push my technical abilities, but in doing so I uncovered an internal struggle between composing and performing my own works. The advantages and disadvantages of the process have surfaced in other pieces written after Solitary Confinement as well. As a composer, I must adhere to my convictions and philosophies despite the level of difficulty. However, there may be times when the difficulty does not warrant the outcome, or when the techniques are not completely possible for the performer. When the performer and composer are one and the same, these issues become more apparent, and it can become more difficult to disentangle the thoughts produced by the two roles.

[Ed. Note: A full recording of Solitary Confinement is available at www.kbaldwinmusic.com.]

***

Saxophonist and composer Kevin Baldwin strives to push the boundaries of music through extended performance techniques and unique sounds. He has performed his works and those of others in the U.S., Canada, China, Italy, and France. Currently, Kevin resides in New York City and holds an MM in Contemporary Performance from the Manhattan School of Music.

The Notation of Friendship and Formality

Over the summer, I spent a lot of time thinking about notation. (Preparing parts for a full orchestra according to MOLA standards will do that to a person.) The process of slogging through an excruciating amount of notational detail brought an idea into focus: beyond notions of “over-notation,” “under-notation,” or what is or is not called for in different styles of notated music, it seems to me that in many cases music notation serves as a measure of the level of familiarity shared between composers and performers.

Music notation is a language—it is intended to communicate information to musicians, who will then translate that information to the listener. It’s a great big old-school game of telephone. In speaking or in writing any language, there is a tendency to speak differently to friends and family than to a stranger or a professional colleague. With friends one might use more slang or more colloquial phrase structures than the more formal language we kick into gear with, say, a potential boss or someone we have just met at a dinner party. The same applies to music. If a composer is writing a piece for his or her own ensemble, or for musicians with whom s/he is friends, the notation could be minimal, or graphic, or even ridiculously over-the-top complicated (or maybe there wouldn’t be any notation at all), because everyone knows one another, and presumably there is at least some time to work out the details of the music, ask questions, experiment, and so forth. An ensemble that is larger and has less firmly established relationships between composer and performers has different notational needs in order for the music to be communicated effectively. With an orchestra there are a multitude of issues at play, but add to those the fact that the composer may be a stranger to the orchestra—and a stranger bearing strange music at that—and that there is usually frighteningly little rehearsal time to prepare that strange music. If the notation isn’t crystal clear and presented for those musicians in a way that they can translate effectively within the scope of the available rehearsals, chances are the music will not come out according to the composer’s wishes.

For example, I don’t know the third horn player who will be playing my orchestra piece personally. Chances are I will not meet the third horn player. If s/he has a question during a 30-minute rehearsal that stops things for even ten seconds on the clock, that is rehearsal time lost. It won’t carry on ten seconds longer than scheduled. It’s just…poof. So am I going to notate very clearly the dynamic and the type of attack and the specific mute needed for the third horn player’s first note? Yeah, you bet I am, and everything about every other instrument at every necessary point in time. Because if I don’t, someone may ask about it. And unfortunately, that exchange, as much as it will be helpful and establish at least some tiny bit of personal connection between myself and that musician, doesn’t fit into the timetable of an orchestra rehearsal.

On the other hand, this same orchestra piece has a significant part for drum set, and I happen to know my drummer quite well. Because this drummer does not read music (it’s complicated), we have spent time together working out his part. We have also met one another’s families, drunk beer together, shared car rides—the line has crossed into friendship. His part (as it is) for the same piece conveys a totally different kind of information. It includes some extra data, like clock times so he can follow along with a mockup recording of the piece. In some spots there is much less detail than one might expect, such as a predetermined “skeleton” beat structure upon which he can expound as he sees fit, or one short passage with a basic rhythmic structure over which I have noted, “As far as I’m concerned you can go completely nuts here!” We have established a musical connection, we trust one another, and we have already dealt with questions and explorations, so that by the time we get to rehearsal, all there is to do is play the thing.

Whatever type of notation we use—minimal, maximal, standard practice, or completely made up from scratch—it is not only a road map to bring sounds to life, but it also often tells a parallel story of the lives communicating those sounds.

Throwbacks

A couple of months ago while I was on the West Coast to interview composers, Ken Ueno took me to the famed Amoeba Music record store in Berkeley. I hadn’t been to a brick-and-mortar music store in quite a while and was quickly transported back to a time in the ’80s when I’d scour the racks at Rose Records and Tower Records in Chicago. For a kid growing up in Corn Town, Illinois, such exercises were one of the only ways to discover new music, artists, and composers.

The experience became even more reminiscent of those days when I saw Silfra, Hilary Hahn’s new release with the experimental pianist Hauschka, being sold as an LP. I vaguely recalled hearing about a few pop artists who were still releasing LPs and even the occasional cassette of their albums, so I wasn’t too surprised by this throwback. For a second I thought this was a pretty risky move on Hahn’s part–the portion of the population that has never owned a turntable is ever-growing–until I noticed that the LP also came with access codes for digital downloads of the music; Hahn’s not risk-averse, but she’s also not stupid. The entire album is made up of improvisations between the two artists (a bold move in and of itself), and the non-traditional content seems to fit nicely within the retro medium through which it is being heard.

Hahn’s experiment came to mind this week when the news broke about Beck’s new album. Scheduled to be available in December, Beck Hansen’s Song Reader will consist of a collection of 20 new songs published only as sheet music. Beck’s website explains further:

In the wake of Modern Guilt and The Information, Beck’s latest album comes in an almost-forgotten form—twenty songs existing only as individual pieces of sheet music, never before released or recorded. Complete with full-color, heyday-of-home-play-inspired art for each song and a lavishly produced hardcover carrying case (and, when necessary, ukulele notation), the Song Reader is an experiment in what an album can be at the end of 2012—an alternative that enlists the listener in the tone of every track, and that’s as visually absorbing as a dozen gatefold LPs put together.

The songs here are as unfailingly exciting as you’d expect from their author, but if you want to hear “Do We? We Do,” or “Don’t Act Like Your Heart Isn’t Hard,” bringing them to life depends on you.

BECK HANSEN’S SONG READER features original art from Marcel Dzama (who created the imagery for Beck’s acclaimed Guero), Leanne Shapton, Josh Cochran, Jessica Hische, and many more, as well as an introduction by Jody Rosen (Slate, The New York Times) and a foreword by Beck. The package measures 9.5” x 12.5” with 108 pages comprising 20 individual full-color song booklets—18 featuring original lyrics, and 2 instrumentals—with covers from more than a dozen different artists.

Readers’ (and select musicians’) renditions of the songs will be featured on the McSweeney’s website.

Reactions on the intertubes to the announcement have been predictably mixed, with Beck being labeled as a cutting-edge genius or a pretentious gimmick-laden hipster. One fan laments that “this eliminates so many people from being able to participate in the music except by various recording of likely dubious quality…” while another gets right to the point: “Notation is boring.” Composers seem to be equally divided, with complaints directed towards the inherent irony of printed music being portrayed as new and unique while compliments point toward the risks Beck is willing to take as well as the gesture away from overly processed studio production techniques.

Here, for what they’re worth, are my initial takes on Beck’s project:

1) This is one more way to utilize the immense power of community through the Internet to create music. I’ve been very interested in how musicians have been experimenting with group concepts to either create new works or foster new ways to present their music. Two contrasting examples of this are composer/producer Kutiman (Israeli-born Ophir Kutiel) and composer/conductor Eric Whitacre. In 2009, Kutiman created his ThruYOU project, a series of “songs” made by splicing and layering pre-existing YouTube amateur videos. In 2010, Whitacre came out with his first “virtual choir” which combined videos of 185 choristers singing their individual parts into an online performance of his Lux Aurumque; two subsequent virtual choirs included over 2000 and almost 4000 singers from around the world. I see Beck’s experiment as taking these innovations one step further by encouraging others to interpret his lead sheets in their own way. Will there be lousy performances? Of course there will be, but that comes with the territory of letting your creations go off into the world.

2) This is not new or unique, and yet it is. Published sheet music of songs has been around for over two centuries, and today bookstores and music stores continue to be replete with lead sheet collections for every remotely popular act. That being said, the concept of the artist’s or band’s audio recording as the “work” in question has been in force in popular music since the record industry blossomed in the 1940s and ’50s, and lead sheets have always followed the recordings. To publish the sheet music not only before but instead of a recording altogether is indeed unique in Beck’s genre of music and his slice of the music industry. Which brings me to…

3) Comparing Beck’s project to what most composers do is a mistake. I’ve seen several composers already snarkily suggest that publishing printed music in the hope that others will perform it is what we do all the time, and so how is this new or bold? This almost seems too obvious. The vast majority of (read: not all) concert composers are not writing lead-sheet songs that are appropriate for the general public to perform. That, and Beck is known throughout the world and we’re not; even if this project sells a fraction of what his normal albums do, he’ll still sell more sheet music in one year than most concert composers could dream of. But I don’t see this project in terms of how much money he’s making–it’s obvious that he’s not too concerned about that himself–but rather getting as many people to perform his music as possible.

4) John Philip Sousa and John Cage would approve. Sousa’s feelings about the infant recording industry at the turn of the century now seem prescient: “I foresee a marked deterioration in American music and musical taste, an interruption in the musical development of the country, and a host of other injuries to music in its artistic manifestations, by virtue–or rather by vice–of the multiplication of the various music-reproducing machines.” The severe backlash that Beck will endure because of his decision not to record his songs will, I’m afraid, be a testament to Sousa’s musings, but ultimately the project may inspire others to do the same and usher in a new national respect for printed music. Beck’s experiment is also right in line with Cage’s ideas on allowing others to serve not as automatons but as active participants in the creative process; this was demonstrated perfectly by the two performances this week of Renga:Cage:100 by the Third Coast Percussion Ensemble at the Kennedy Center and MoMA, with 5- to 7-second compositions by 100 composers strung together into one piece.

Ultimately it will be interesting to see how Beck’s album fares; it could be dead on arrival or it could spark a new cottage industry. It bears mentioning, however, that in a year when the Academy Award for Best Picture went to a silent film, we have not completely divested ourselves of our grandparents’ and great-grandparents’ cultural gifts.