Tag: Sibelius software

Digital Audio Workstations: Notation and Engagement Reconsidered

DAW screen cap
First, a benign observation: the overwhelming majority of the music currently emanating from living room speakers, or being heard from passing car stereos, first passed through digital audio workstation (DAW) software of some shape and style. DAWs like Pro Tools, Audacity, Ableton Live, and GarageBand are generally defined by their use of sequenced tracks containing either sonic waveforms or MIDI code. Yet they are largely invisible to most musicians and listeners, unless one reflects on how digital audio is created and mediated on a day-to-day basis. When we think of a new work’s creation, we imagine a score being pored over by a meticulous hand, eventually realized with lyricism and aplomb by performer(s) of equivalent musical intuition and skill. We pay fleeting attention, if any, to the subsequent inscription and manipulation that occurs in the studio after both the composer and performer have gone out for drinks at the end of the recording session. Indeed, despite an engineer or producer’s best attempts, a new work cannot pass transparently through a DAW; there are always stopgaps, enhancements, deletions, and tweaks exerted along the way that, I think, fundamentally color the recorded piece as separate from the composer’s instruction and the performer’s execution. This raises the question of how best to characterize the DAW’s everyday impact on our musical world.

Whether a musical work began its life within a DAW (as is the case with computer or electroacoustic music), or only passed through one with minor alteration prior to public distribution, these software tools touch nearly every auditory creation with aspirations beyond a sidewalk corner, bedroom studio, or recital hall. But I would like to take their influence one step further. Not only do DAW software products mediate recorded sound, but these very same tools can be thought of as a form of digital music notation. I broadly define digital notation as any computer-generated system that inscribes information capable of being rendered musical, including but not limited to software that employs some version of conventional staff notation. In the same way we give latitude to the printed graphic scores of Cornelius Cardew, Iannis Xenakis, or Brian Eno as legitimate notation, the DAW’s world of sequenced tracks and waveforms deserves a similarly appreciative study. The fact that DAW software has utility as a performance or recording tool should not prejudice us against its additional notational qualities. Neither should the fact that DAWs are frequently used in tandem with other notational styles when realizing a work.
Xenakis Score
Two real-world situations hopefully add weight to our re-thinking of DAW software as notation. First, when a composer like Matthew Burtner creates a piece of computer or electroacoustic music through a DAW interface, with no originating staff score, should we simply say that Burtner’s piece lacks notation? Or that this music falls outside the boundaries of what notation can accomplish? Second, consider an error-prone session with a chamber group trying to record a new work by a composer like Brian Ferneyhough. By the end of the day, almost never does the recording engineer have a single unblemished take from the work’s beginning to end. More often than not, a hodgepodge of clips cutting across movements or rehearsal letters will need to be sewn back together in the DAW and made to sound convincing, both to Ferneyhough and the eventual listener. Separate versions of the work now exist: the original manuscript showing the composer’s lofty aspirations versus the listener’s reality, a sonic Frankenstein arranged within a DAW that compiles the engineer’s best approximation. Which format has a more legitimate claim as the work’s true inscription? Instead of throwing up our hands in despair at either situation, let’s expand our thinking and our musical toolbox by including DAW software, warts and all, as a digital notation.

A final clarification: the term “digital notation” is frequently thought of as encompassing only those tools of the 21st-century instrumental composer, Finale and Sibelius. Yet Finale and Sibelius are far more akin to conventional DAW software than they are to ink and manuscript notation. In fact, they represent just one fork in the road of the DAW’s development, employing the same sequenced tracks and playback capabilities as progenitor software products while sacrificing waveform sounds in favor of MIDI and virtual instruments. Since the first DAW’s unveiling in 1978, we see the incorporation of similar structural principles into later digital notation products such as Finale, initially released in 1988. Indeed, contemporary DAW software like Logic Pro, seamlessly blending tracks with either MIDI staff notation or waveforms in the same composition, shows the re-convergence of these two design paths. Perhaps joining “staff” software like Finale and “non-staff” software like Audacity together under the same notational umbrella seems unintuitive or even bizarre. But I counter that our understanding and classification of digital music notation should rely on how we engage with the medium rather than on the look of the “score” rendered through pixels. In what ways do we take for granted, on an experiential basis, how composers sculpt and explore the sound materials within DAW notation? By briefly exploring three core structural features of most DAWs–waveforms, sequenced tracks, and rapid playback–I want to make the case that this style of digital notation (Finale et al. included) enables a remarkable creative work process for a composer that deserves greater consideration.

Logic Pro screen cap


Starting with Max Mathews’s first forays into the MUSIC-N programming language in 1957, individuals experimenting with computer-generated sound through the 1960s and ’70s could specify individual aural events with a revelatory level of ultra-fine resolution. One could now stipulate with great precision the digital synthesis of musical parameters such as pitch, duration, envelope, and harmonic content. Csound, shown here, is a contemporary incarnation of these composition principles. A looming challenge soon arose for the early developers of digital music notation: how, in spite of this high-resolution processing, could a larger series of musical ideas be represented with clarity in the context of an entire composition? Italian electronic music composer Giancarlo Sica summarized a hopeful new method: “…a musical performer must be able to control all aspects [of a digital notation] at the same time. This degree of control, while also preserving expressivity to the fullest extent, allows continuous mutations in the interpretation of a score.” Waveforms, track sequencing, and rapid playback are precisely the DAW’s answer to this outline for increased digital notation flexibility.

CSound screen capture

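For the curious, the flavor of this parameter-level control can be sketched in a few lines of Python. This is my own illustration, not Csound code, and the function name and parameters are invented for the example: each note is rendered by explicitly stating its frequency, duration, amplitude, and envelope shape.

```python
import math

def synth_note(freq_hz, dur_s, amp=0.8, attack_s=0.05, decay_s=0.1, sr=44100):
    """Render one note as a list of samples. Pitch, duration, amplitude,
    and envelope are each stated explicitly, mirroring the parameter-level
    control that MUSIC-N-style languages such as Csound introduced.
    (A hypothetical helper for illustration only.)"""
    n = int(sr * dur_s)
    samples = []
    for i in range(n):
        t = i / sr
        if t < attack_s:                      # linear attack
            env = t / attack_s
        elif t > dur_s - decay_s:             # linear decay
            env = max(0.0, (dur_s - t) / decay_s)
        else:                                 # sustain at full level
            env = 1.0
        samples.append(amp * env * math.sin(2 * math.pi * freq_hz * t))
    return samples

note = synth_note(440.0, 0.5)  # half a second of A440, 22050 samples
```

Every number above is an address into the sound itself; nothing is left to a performer’s interpretation, which is exactly the resolution that so impressed (and burdened) these early experimenters.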

The waveform is the first feature of DAW notation that I believe is taken for granted: how exactly does a composer engage with waveforms as opposed to our standard symbolism of staves, bars, notes, and accents? Waveforms serve as representations of sonic loudness over time with either craggy (quick attacks and decays, with short sustains) or smooth (slow attacks and decays, with long sustains) linear shapes. They flaunt the edges, curves, and dips of a performer’s dynamic treatment of audible content. Furthermore, they fulfill Sica’s earlier blueprint by being able to stretch apart and compact on a whim, displaying a microsecond of curvature or minutes of slow growth in rapid succession. Yet while waveforms provide exactness in the realms of amplitude and duration through visual peaks and valleys, the vital categories of pitch and harmonic content become inaccessible. Waveforms in DAW notation dramatically re-prioritize the musical dimensions to which composers have been traditionally most attentive, trading pointillistic melodic lines and chordal clusters for the attack, sustain, and decay of long, homogenized statements. They blend together formerly discrete notes as they resonate into and out of one another, with punctuation determined largely by phrase and cadence. In essence, our original melodic line becomes a single gestural sweep. Composers must then express themselves in this medium by sculpting a waveform’s dynamic development via fades and contour trims. Through tweaking sonic envelopes like these, waveforms in a DAW notation environment lead composers to think of musical movement in spatial or even topographical terms, rather than through traditional contrapuntal or harmonic mindsets. When a composer manipulates a waveform as the building block of a DAW’s musical dialect, I would describe it as far more akin to throwing pottery on a wheel or carving a block of ice than a typical composition metaphor such as painting with brushstrokes on a canvas.
The same cannot be said for Finale’s species of MIDI-intense DAW notation, as a composer can’t zoom deeper into a quarter-note and discover more musical information to massage. Ultimately, this is a core distinction between a composer’s engagement with waveforms versus staff notation: waveforms enable a practically limitless editing capacity within each topographical gesture, whereas staff symbols are bound by both their discreteness from one another as well as their individual immutability. Again, one simply cannot chop away at the interiors of a quarter-note to render a different sort of sound.
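Those fades and contour trims amount, underneath the interface, to simple arithmetic on the waveform’s samples. A minimal Python sketch of a linear fade (a hypothetical helper of my own, not any DAW’s actual interface) makes the “sculpting” concrete:

```python
def fade(samples, fade_in=0, fade_out=0):
    """Reshape a waveform's contour by scaling amplitude linearly at its
    edges, the kind of envelope sculpting a DAW editor exposes directly.
    `fade_in` and `fade_out` are lengths in samples. (Illustrative only,
    not modeled on any particular product's API.)"""
    out = list(samples)
    for i in range(min(fade_in, len(out))):
        out[i] *= i / fade_in           # ramp up from silence
    for i in range(min(fade_out, len(out))):
        out[-1 - i] *= i / fade_out     # ramp down to silence
    return out

faded = fade([1.0, 1.0, 1.0, 1.0], fade_in=2)  # [0.0, 0.5, 1.0, 1.0]
```

Because the operation acts on raw samples rather than on discrete symbols, it can be applied at any zoom level, to a microsecond of curvature or to minutes of slow growth alike.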
Waveforms, in turn, strongly inform the next feature of DAW notation taken for granted: track sequencing. Track sequencing was developed to solve the especially thorny problem of showing relative musical time in digital notation, particularly when there are a large number of sound events packed into a relatively short segment. A thickly composed section of a musical work may be pulled apart and laid out on a plane of such tracks that are then stacked on top of one another, or sequenced, to show simultaneity and density of texture. One might understand track sequencing as analogous to the look and feel of a printed orchestral score in conventional staff notation. Yet within such a score, one ratio of detail is maintained throughout the entire work. A conductor is unable to “zoom” in and out to examine micro-fine aspects of a particular instrumental voice, while also limiting the cues and visual influence of the other instruments that bleed into view. In contrast, DAW notation accomplishes precisely this feat while grappling with global and local representations of time across often immense distances. Time-flexible track sequencing, in tandem with our topographical waveforms, enables the composer to almost effortlessly rove and leap between far-flung sections, both making pin-prick edits and rendering gaping holes in the curves of the sounds. The ease of this direct work with the sequenced materiality of the waveform prompts critic and composer Douglas Kahn to opine that, beyond merely writing with sound, users of DAW notation initiate a “word processing” of sound. He describes how “workstations can cut and paste at sub-perceptual durations… they can pivot, branch out, detour, and flesh out… there is no restriction to duration… no necessary adherence to any form of [musical] interval.
[DAWs] are very conducive to admixture, stretch, continua, and transformation.” I would like to run with Kahn’s word processing metaphor and apply it specifically to how we overlook the way composers currently manipulate music through the track sequencing of DAW notation. The fluidity and depth with which we sculpt digital music acquires a kind of invisibility, just like word processing, once we become comfortable in the DAW ecosystem. It is as if the composer were tangibly poking and prodding the waveform’s topography without numerous layers of idiosyncratic and technological mediation.

The final feature of DAW notation largely taken for granted involves its rapid playback capacities. A familiar console of controls–fast forward, play, stop, pause, and rewind–exists as a universal feature on DAW products and facilitates constant repeatability with even greater flexibility than a cassette player or a VCR. Once the composer zeroes in on a given segment of interest, the playback of the composition can be locked within that segment’s boundaries. An audible portion can then be looped and a brief, five-second moment may be repeated and tweaked ad infinitum. With this playback tool working in tandem alongside visual variables such as track sequencing and waveform editing, the act of listening itself becomes an inseparably notational component of software like Audacity and even Finale. To clarify: printed notation was formerly necessary as a means to preserve and later create organized sound. Now, listening is on equal footing with sight in our experience of digital notation, generating a sort of feedback loop whereby audible sound is able to dictate its own trajectory in a much more embedded way than a composer might accomplish by sitting with a sketch at the piano. This of course suggests a different sort of notation entirely: one that is multi-sensory at its very core. A single waveform gesture in DAW notation provokes dual stimuli as its visual content translates almost seamlessly into an aural dimension and vice-versa. Through the dogged playback and repetition of a particular musical segment, there is an uncanny synesthesia between sight and sound. Such episodes must certainly cause difficulty: how do I tell where one sense ends and the other begins in a musical experience mediated by DAW notation? This is the third and most pressing aspect, I think, of our creative process largely taken for granted.
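The lock-and-loop behavior itself is mechanically simple, which is part of why it disappears from our attention. A toy Python model (my own illustration, not any DAW’s actual transport code) of playback locked between two locator points:

```python
def loop_segment(samples, start, end, times):
    """Model playback locked between two locators: the region
    [start, end) repeats `times` times before playback continues.
    A toy illustration, not any DAW's real transport logic."""
    return samples[:start] + samples[start:end] * times + samples[end:]

looped = loop_segment([1, 2, 3, 4], start=1, end=3, times=2)  # [1, 2, 3, 2, 3, 4]
```

The trivial mechanics belie the perceptual consequence: it is this effortless repetition that lets listening feed back into editing as an equal partner with sight.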

I hesitate to point to explicit stylistic changes that result from a composer’s use of DAW notation in lieu of ink and manuscript paper. This trepidation stems not only from the wild diversity of musical genres that employ DAW notation, but also the varying creative stages in which this software is utilized as well as the countless product permutations that mix and match the variables I just described. Rather, my point is that crucial aspects of this ubiquitous music notation technology escape our attention unless we look at them through the lens of compositional engagement. First, waveforms encourage a breadth and depth of musical control, in a topographical style, not available to discrete note values whether they be printed on a page or displayed as a MIDI veneer. Second, track sequencing permits shifting focal points of reference that in turn enable a hyperactive editing style akin to word processing. This is a situation that non-DAW notation precludes through limited visual flexibility. Third, the DAW’s rapid playback controls allow listening to mingle with and meld into the visual parameters of digital notation, as if waveforms were now tangible gestures with a physicality we can toy with beyond the pixels of the computer screen. This is, of course, to say nothing of the tactile interaction that composers experience when they employ a mouse and keyboard while listening to and sculpting the building blocks of DAW notation. Ultimately, DAW software distinguishes itself from other notational styles as a synesthetic tool akin to word processing in its application. In fact, it is a notation whose design parameters, inspired by Sica’s call for relentless flexibility, unite so seamlessly that they retreat from the user’s attention rather than become more apparent, especially as a composer grows increasingly comfortable with their use. 
The pervasive invisibility of DAW notation in our routine contact with sound and media compels us to shine a critical light on this ingenious device for the inscription and birth of new music.

Notational Alternatives: Beyond Finale and Sibelius

“Finale or Sibelius?” is a question that composers love to ask other composers. It’s often taken as a given that if you write music professionally, you’re already using one of these popular notation software packages. This may be about to change—with the news of Sibelius’s development team being unceremoniously dumped by Avid and subsequently scooped up by Steinberg, we may have a third variable to add to that equation. ThinkMusic, another newcomer, promises an iPad app in the near future, but has already generated controversy for seeming to use Sibelius in its video mockup.

In the meantime, there are a variety of other, lesser-known options for notation software already lurking out there. None of them may have the same clout with professionals as Sibelius and Finale—yet—but many are gaining ground. Whether they present robust alternatives for creating notation (MuseScore, LilyPond), or alternative ways of thinking about and deploying notation (Abjad, JMSL, INScore), each has its own advantages and its own dedicated following.

MuseScore: Open Source Upstart
MuseScore started out in 2002 as a spinoff of MusE, an open source sequencer created by German developer and musician Werner Schweer. Until 2007, however, MuseScore was an obscure piece of software only available on Linux. In 2008, Thomas Bonte and Nicolas Froment began to work on bringing the software to a wider audience. Now, over 5000 people download MuseScore every day. Bonte credits the software’s newfound success to its extremely dedicated developers and early adopters. Its open source community now boasts more than 250 contributors adding to the project. This includes making the software available in new languages, fixing bugs, writing documentation, creating video tutorials, and so on.

While Bonte admits that MuseScore is not yet as feature-complete as Sibelius or Finale, he highlights the price tag: MuseScore is completely free, while the others can run as much as $600. Bonte also points out that when compared to the others, MuseScore is a fairly young piece of software. He anticipates that in a few years, “Musescore will have 80% of other notation software’s feature set on board.”
Another long-term advantage is MuseScore’s open source status, says Bonte:

Anyone can look into the code, change it and distribute it further. This is not possible with proprietary software like Sibelius, Finale, and Score. Given the recent uproar in the Sibelius community about Avid closing the London office, it seems now more than ever appropriate to say that choosing free and open source software is the right thing to do. What happened with Sibelius may happen with any other proprietary software, but cannot happen with MuseScore or LilyPond. The source code is available to everyone; no one can take it away.

This openness made MuseScore the notation software of choice for the Open Goldberg Variations, a project to create a new, quality edition of J.S. Bach’s beloved work that would be freely available in the public domain. This time, the venerable work had a very modern path to publication: the project was crowdfunded through Kickstarter and remained open for peer review on musescore.com before being made available for download. The Open Goldberg Variations can be found on the IMSLP / Petrucci Project website, though anyone is welcome to host or share it.

Screenshot of Open Goldberg Variations iPad app


Musescore.com is MuseScore’s latest initiative. Launched in the fall of 2011, musescore.com is an online sheet music sharing platform, and the only thing that MuseScore charges for. Bonte compares the business model of the site to Flickr or SoundCloud—subscribers pay a fee ($49 per year) for more storage and features, essentially. Bonte says this revenue stream allows them to continue to develop MuseScore full time, while maintaining the open source status of the software itself.

LilyPond and Abjad: A Marriage of Composition and Code
Jan Nieuwenhuizen and Han-Wen Nienhuys are the creators of LilyPond, another open source music notation package. The project that would eventually become LilyPond had its genesis in 1992, when Nieuwenhuizen was playing the viola in the Eindhovens Jongeren Ensemble, a youth orchestra conducted by Jan van der Peet. According to Nieuwenhuizen, the players struggled to read from computer printouts so much that they soon switched back to handwritten parts. That got him thinking: “Fully automated music typesetting done right—how hard could that be?”

As it turns out, it was not terribly easy. Using the typesetting system TeX as a foundation, Nieuwenhuizen began working on the problem with Nienhuys, a French horn player in the orchestra and math student at the Eindhoven University of Technology. But it wasn’t until four years later, in 1996, that LilyPond finally emerged after four flawed prototypes. Despite being plagued by difficulties, however, they found that they couldn’t leave the problem alone. “We never realized how hard it was to produce beautifully typeset music automatically until it was too late and we were hooked,” Nieuwenhuizen admits.

Since those humble beginnings, LilyPond has matured into a full-fledged community project, with over 50 authors contributing to the latest stable release for Windows, Mac OS X, and Linux. This includes one full-time developer, David Kastrup, who makes a living—“just barely,” says Nieuwenhuizen—from donations to the project, which Nieuwenhuizen sees as a major milestone.
Because LilyPond is primarily a typesetting and engraving program rather than a compositional tool, its user paradigm differs somewhat from programs like Finale/Sibelius/MuseScore. As in Score, the most common engraving program until Finale came along, musical notation is initially entered as text characters, separating the step of encoding the notation from the act of graphically displaying it, while ensuring a consistent layout. Nieuwenhuizen admits that this can be scary or intimidating at first to composers unused to working this way, but contends that in itself, LilyPond is “quite intuitive and easy to use.” He also foresees more community development of graphical front ends, web-based services, and tablet apps that will make LilyPond even more accessible to those just starting out with the software.

This community may be LilyPond’s greatest asset, with a significant amount of overlap between users of the software and those tinkering with the software itself. This new generation of composers who code is extending LilyPond’s functionality into unforeseen territory. For example, Victor Adán, Josiah Oberholtzer, and Trevor Bača are the lead architects of Abjad, which allows composers to write code that acts on notation in LilyPond in “iterative and incremental” ways. In other words, instead of creating notation directly, composers write code that Abjad then translates into a format that LilyPond can interpret to generate notation. As a result, instead of just manipulating individual notes and objects, Abjad can manipulate higher-level structures—like changing the dynamic level of every instance of a particular note, to give one basic example. Abjad uses the Python programming language, known for its readability and flexibility, as its foundation.
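To make that pipeline concrete, here is a small Python sketch in the spirit of the workflow. The function names are my own and this is emphatically not Abjad’s actual API; it only shows the idea of code emitting LilyPond input and operating on the score as data, such as marking every instance of a particular note:

```python
def to_lilypond(notes):
    """Join (pitch, duration) pairs into a LilyPond music expression.
    A toy stand-in for the translation Abjad automates; not Abjad's
    real API."""
    return "{ " + " ".join(f"{p}{d}" for p, d in notes) + " }"

def mark_every(notes, pitch, mark="\\f"):
    """Attach a dynamic to every instance of one pitch: the kind of
    score-wide, higher-level edit that writing code makes possible."""
    return [(p, f"{d}{mark}") if p == pitch else (p, d) for p, d in notes]

melody = [("c'", 4), ("d'", 4), ("c'", 2)]
print(to_lilypond(mark_every(melody, "c'")))  # prints { c'4\f d'4 c'2\f }
```

A human engraver would make that edit note by note; here it is a single transformation over the whole score, which is precisely the “iterative and incremental” leverage Abjad’s architects describe.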

Excerpt of Trevor Bača's Čáry created in LilyPond


Writing music with Abjad presents a departure from the traditional compositional process. For Bača, it occupies a position “somewhere between the notation packages like Finale, Sibelius, and Score, and the composition environments like OpenMusic and PWGL.” He describes the process of working with Abjad as a “two-part loop,” alternating between writing code to model parts of a score and considering the notation as visualized in LilyPond. This iterative process of continual revision blurs the boundaries between programmatic and musical thinking, as well as between composition and pre-composition.
The creators of Abjad have also worked closely with practicing composers in the course of development. One of these, Jeffrey Treviño, is already well versed in the musical uses of technology; in the course of writing Being Pollen, a work for percussion and electronics based on the poetry of Alice Notley, he estimates that he used nine different pieces of software. With Abjad he had a specific application in mind—he hoped it would help him notate the rhythms of Notley reciting her poem. He describes part of the process here:

I used Max/MSP to tap along to her recitation and make a text file of millisecond counts for when each syllable occurred. I tightened these up in Audacity to line up precisely, and then I exported the numbers again. I wanted to use these numbers to make a notation in Abjad, but Abjad didn’t have a quantizer… We ended up looking up some research together, especially Paul Nauert’s writing on Q-Grids quantization, and Josiah ended up making the quantizer for Abjad.
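The quantization step Treviño describes can be sketched naively in Python. Note that this nearest-grid snap is my own simplification for illustration; the quantizer Oberholtzer actually built for Abjad implements Nauert’s Q-grids approach, which searches among competing grids rather than snapping to a single one:

```python
def quantize(onsets_ms, grid_ms):
    """Snap millisecond onset times to the nearest grid point.
    Deliberately naive: real rhythmic quantizers (e.g. Nauert's
    Q-grids, as implemented in Abjad) weigh competing grids and
    tuplet subdivisions instead of forcing one fixed grid."""
    return [round(t / grid_ms) * grid_ms for t in onsets_ms]

snapped = quantize([0, 130, 240, 510], grid_ms=125)  # [0, 125, 250, 500]
```

Even this crude version shows the shape of the problem: tapped syllable onsets arrive as raw milliseconds, and notation requires them to be rationalized onto durations a performer can read.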

In this case, Treviño’s needs as a composer had a direct impact on the development of Abjad, and this in turn allowed Treviño to accomplish something musical that would have otherwise been impossible, or at least far more difficult. Treviño draws an analogy between this model of collaborative composing and high-level chess:

Remember when it was a big deal that Deep Blue beat [Grandmaster Garry] Kasparov in 1997? No one mentions that they did a tournament after this where people could use computers to assist them. When Kasparov had a computer, he beat Deep Blue—but most intriguingly, an amateur aided by a computer, not Kasparov, won the whole tournament. So, I’m a lot better at writing twenty-part counterpoint that doesn’t break any rules if a computer can help me. But the skill set it takes to get the computer to know the rules is a very different skill set than the skill set we teach students in counterpoint classes. That’s all to say—I think it’s best to think about all this as totally redefining the important skills of the creative act, so that formerly conventional amateur/master relationships might be turned on their heads. Rather than expanding or enabling skills that matter currently, this proposes a totally new set of competencies and approaches to the task.

(N.B.: Your author independently thought of this analogy, so it must be a good one.)

Video of Jeffrey Treviño’s “Being Pollen” performed by Bonnie Whiting Smith (composed with help of Abjad/LilyPond)

JMSL and INScore: Notation in Motion
Nick Didkovsky, the primary developer of the Java Music Specification Language, is a guitarist, composer, and programmer who leads the avant-rock octet Doctor Nerve and teaches computer music classes at NYU. But for many years Didkovsky’s parallel interests in music and computers remained independent, never intersecting. What finally inspired him to combine them was an article by Douglas Hofstadter in Scientific American about game theory and a particular kind of lottery called the Luring Lottery, in which the collective desire to win is inversely proportional to the amount of the prize. Didkovsky says, “[The Luring Lottery] is a beautiful and simple idea that binds people together in a simultaneously competitive and cooperative relationship… I wanted to realize that structure musically and thought computers might need to be involved.”

He turned to Pauline Oliveros for help, and she directed him to Larry Polansky. Polansky, together with Phil Burk and David Rosenboom, had created the Hierarchical Music Specification Language (HMSL), a programming language offering a suite of musical tools that turned out to be perfect for Didkovsky’s task. Today HMSL might be most easily compared to other audio programming languages like Max/MSP and SuperCollider, but in an era when these languages were in their infancy, what appealed to Didkovsky about HMSL was its open-endedness: “You can basically do anything… no two HMSL pieces sound even remotely the same because you’re not starting on a level high enough to influence musical tastes. It’s a very non-stylistically biased environment for musical experimentation. And so I think it’s kind of deliberate that it’s kind of a tough environment to work in, or at least it just doesn’t come with a lot of bells and whistles.”

For the next ten years, Didkovsky continued to develop music software with HMSL on the Commodore Amiga for Doctor Nerve as well as other ensembles like the Bang on a Can All-Stars and Meridian Arts Ensemble. When the Amiga platform began showing its age, Didkovsky and Burk had the idea to rebuild HMSL in Java, which could be run on multiple platforms, and in 1997 Java Music Specification Language was born.

The most significant change to JMSL since those days is the addition of a music notation package. With his commitment to traditional instruments, it made sense to Didkovsky to use JMSL to drive a notation environment—and the result was, in his words, a “huge catalyst” creatively. In addition to the many pieces Didkovsky has written using JMSL since then, it has also become a tool used by composers all over the world:

One of my former students, Peter McCullough… developed an extensive suite of personal tools that did very unusual things to scored music, designing both generative and mutator routines that populate the score with notes and transform them once they are present… [progressive metal guitarist and record producer] Colin Marston wrote a series of notated pieces that are impossible for a human being to play—superhuman, intensely noisy pieces performed at high tempo that just rip by and throw glorious shrieking noise in your face, while the staff notation is populated by thick clusters of notes flashing by.

Didkovsky is quick to note that, while traditional staff notation is an important feature of JMSL, it represents only part of what the software can do. Many of the applications of JMSL have little to do with standard music notation—for example, the Online Rhythmicon, a software realization of the instrument Leon Theremin built for Henry Cowell, or Didkovsky’s MandelMusic, a sonic realization of the Mandelbrot set.

Nonetheless, JMSL’s notation capabilities may end up being its most widely used feature, especially with the advent of MaxScore. Didkovsky collaborated with composer Georg Hajdu to create MaxScore, which allows JMSL’s scoring package to communicate with the more popular audio programming environment Max/MSP. Currently, most of Didkovsky’s development energies are directed towards improving MaxScore.

MaxScore Mockup


INScore, created by Dominique Fober, is a similar synthesis of ideas from notation software and audio programming, though Fober is quick to stress that it is neither a typical score editor nor a programming language. Fober is a musician with a scientific background who found himself doing more and more research related to musical pedagogy. He now works for Grame, a French national center for music creation, where he conducts research related to music notation and representation.

INScore follows from Fober’s experiments based on the idea that, by providing immediate feedback to the performer, musical instruments act as a “mirror” that facilitates learning. Fober wanted to design a musical score that could act as a similar sort of mirror of musical performance, in the form of graphic signals informed by the audio that could augment traditional musical notation. Fober refers to this approach as an “augmented music score.”

“There is a significant gap between interactive music and the static way it is usually notated,” says Fober. Even with live electroacoustic music, performers generally read from paper scores that give an approximation of the electronic events. There are tools like Antescofo that allow computers to follow a score, and tools for the graphical representation of electronic music, like Acousmograph and EAnalysis, but INScore’s approach is different. “[With INScore] the idea was to let the composer freely use any kind of graphic representation—not just symbolic notation but images, text, and video as well—to express his or her thoughts in a form suitable for performance.”

Montreal-based composer Sandeep Bhagwati used INScore for an entire concert of works entitled “Alien Lands” in February 2011. Meanwhile, British composer Richard Hoadley has written Calder’s Violin, for violin and computer, which premiered in October 2011. Calder’s Violin uses INScore to dynamically generate the violinist’s score during the performance. INScore is not solely aimed at composers, however; it has also been used for pedagogy, for sound installations, and to model analytic scores of electroacoustic music.


Videos of Richard Hoadley’s “Calder’s Violin” (created with INScore)

The Future of Notation?
For all their differences, these notation software packages have one thing in common: each offers something, small or large, that Sibelius and Finale don’t. If you’re looking for something easily accessible and free, MuseScore and LilyPond are well worth checking out. If you’re interested in algorithmic or interactive notation and are willing to deal with a somewhat sharper learning curve, Abjad, JMSL, and INScore are capable of remarkable things. And that’s not to mention the many options I haven’t discussed—BACH Automated Composer’s Helper, Heinrich Taube’s Common Music and FOMUS, IRCAM’s OpenMusic, and the Sibelius Academy in Helsinki’s PWGL. With all of these tools at our disposal, chances are we might not be hearing “Finale or Sibelius?” for much longer.

Keeping Score: Spreadbury Speaks on Sibelius Team Transition

Daniel Spreadbury


Daniel Spreadbury worked on the Sibelius notation software for years, both as a product* and community manager. Then, last July, the software’s parent company, Avid, announced a restructuring, and the Finsbury Park office in London that had been home to the Sibelius team was closed. News came last week, however, that the team is now opening a new office in London to work on a brand new notation program, this time under the auspices of Steinberg, a German company known primarily for the sequencer Cubase. Here’s what Spreadbury had to say about the new project:

Kevin Clark: First off—the question on everyone’s minds: what are you working on and when can we buy it? Of course things are in the very early stages, but any news would be very exciting.

Daniel Spreadbury: Obviously we shall be working on a brand new music notation and composition application, which will sit alongside Steinberg’s other products. All other aspects and strategies are currently under discussion and will be communicated in due time.

KC: Are there any existing Steinberg technologies that form a good basis for your work?

DS: Certainly – though we’re not sure which just yet. Steinberg has a rich portfolio of technologies, and we can’t wait to get to know our new colleagues and learn from them about the ways in which components or technologies from other products can enrich our new program.

Steinberg logo

KC: Is your team still intact at Steinberg?

DS: Yes, as far as was possible. Steinberg have been fantastic, and were clear from the outset that they wanted to bring the whole team over if they could. However, after it was clear that our office would be closed, a few of our former colleagues took up other jobs and subsequently chose not to re-join us. But the team is definitely intact, and between us we have decades of experience in designing and building great software for musicians, and we are looking forward to combining that experience with the know-how of our new colleagues.

KC: How would a music notation product relate to the rest of Steinberg’s software? Would it be a part of the core business or a separate direction?

DS: Speaking as somebody who has until recently merely been an observer of Steinberg, it has always been my belief that Steinberg is totally committed to providing great products for creative musicians. I see our new application as fitting right in with this ethos, but perhaps targeted at musicians who are more comfortable working with music notation than with sequencer or DAW workflows.

KC: On a separate note, what’s it been like to go through this change for your whole team?

DS: We have been welcomed with open arms by Steinberg. The company’s leaders have shown a real commitment to our team in opening a new office for us in London, and we couldn’t be happier. Many of us have been working together for more than a decade, so the prospect of the team breaking up was pretty distressing, but now we are able to look forward to working together for years to come.

KC: Lastly, what can the community do to help? Any new product will take a while, but in the meantime, if your community wants to help, what should they do?

DS: Right now, it’s very early days. We have a lot of work to do before we can really engage directly with the community in a structured way, but we plan to once our plans are a little firmer. Watch this space!

* An earlier version of this article listed Daniel Spreadbury as a former programmer on Sibelius. He was not. He was a product manager. We regret the error, although we’re glad the actual Sibelius developers got a laugh out of it.

Embracing Abandoned Technologies

Abandoned Technologies

While most folks walk around with one all-purpose handheld device, I prefer carrying around three: a Palm T/X and a FlipCam (both of which have been discontinued) plus a BlackBerry (which the rumor mill claims is also not long for this world). Photo by Kevin Clark

While reactions within the composer community to the possible discontinuation of the Sibelius music notation software program have ranged from shock and outrage to indifference, my own feelings are a bit more complicated. Though they are somewhat colored by my attraction to abandoned technologies, they are also informed by the fact that I, for the most part, abandoned pen and staff paper for music notation software more than a quarter of a century ago. These two things are actually related to each other in my personal experience.

One of the activities I prided myself on most highly as a teenager was the creation of musical scores with meticulous calligraphy. I would literally devote hours to each individual page. My materials consisted of: Passantino manuscript paper (in various dimensions depending on how many instruments I was writing for); protractors (to guarantee that every stem was perpendicular to the staff and to measure each one precisely); numerous jars of Liquid Paper (not so much to correct mistakes but rather to customize margins and sometimes to increase spaces between staff systems—very time consuming and not recommended); and Paper Mate black ballpoint pens (blue pens when I composed in 31-tone tuning, but that’s another story). If someone back then had told me that I would no longer engage in this activity once I graduated from college, I would have thought it an affront to the core of my personal identity. Yet the first significant purchase I made, the year after receiving my bachelor’s degree from Columbia in 1985, was a personal computer (an Apple IIGS) and a color dot-matrix printer (the ImageWriter II)—which to this day remains the first and last time I ever bought products made by Apple. I was talked into bringing these machines into my life by several very persuasive salespeople—I would never have listened to just one—who convinced me that owning the then brand new IIGS (September 1986), which was specifically designed for graphic and sound applications, would completely change my composing regimen. For better or worse, it did.

Before it changed my music writing habits, it forever changed my relationship to writing prose. Whatever its flaws in retrospect, AppleWorks was a vast improvement over the old Royal typewriter I had been using for anything I wrote and wanted someone else to read. (The less-than-ideal manifestation of prose originating from that particular device, which I bought for a pittance at a Salvation Army thrift store, was compounded by my having no idea how to clean typewriter keys at the time, so all the Os were invariably completely filled in, etc.) But not only was the readability of the text printed on the ImageWriter a quantum leap beyond what I had been imposing on folks, the content also improved dramatically. Using a word processor was how I learned to edit myself. It’s still an uphill battle, but at least now it’s one I’m aware needs to be fought.

The earliest piece of music notation software that I became aware of that was compatible with the IIGS was something called Pyware Music Writer. (I was amazed this morning when I searched for “Pyware” that the company is not only still around but is still producing music notation software!) After those first few months adapting to composing words at the computer, using notation software seemed a parallel evolution for writing music. And it was. I no longer worked on a composition from beginning to end in a linear fashion, but rather ideas would flow for different parts of a piece and then, via the machinations of technology, eventually become a coherent whole. And I could make changes without constantly rewriting pages and going through gallons of Liquid Paper in the process.

The biggest shortcoming of the version of Pyware Music Writer I purchased, however, was that it was limited to six staves in total. As I was mostly interested in composing chamber music, especially since no orchestras were knocking on my door at the time, this did not pose an insurmountable problem. But I sometimes wonder, even if a performance would still not have been immediately forthcoming, if my inspiration would have led me to larger ensembles had the program afforded me the ability to notate for them. Perhaps a worse problem, however, was that my Pyware-notated scores were extremely difficult to read in a dot-matrix printout. I had gone from creating engraver-quality handwritten scores to abysmal computer-generated ones. I send out heartfelt apologies to everyone whom I subjected to these scores for virtually a decade.

Still, I was reluctant to abandon ship after spending so much money on this equipment, so much time learning how to use it, and so many hours of work inputting music into it. Apple made it even harder for me to upgrade, since they discontinued the IIGS and made their subsequent operating systems completely incompatible with it—which meant that if I were to switch to a more recent machine I would need to learn a completely different system and re-enter all my music from scratch. (Because of this and my feeling that Apple’s infringement suit against the Franklin Ace was unfair, I have remained fervent in my boycott of their products.) But eventually common sense won out over my stubbornness and I switched computer systems. I still remember donating the IIGS to the same Salvation Army shop where I bought the Royal typewriter that it had replaced.

For a brief time I used Finale on my new PC, but I couldn’t get it to do what I was trying to do in a composition I was working on at the time—extensive passages involving different cross beamings in every instrument (which I believe it can now do)—so I went back to notating music by hand. And then I learned about Sibelius. It took me only a weekend to figure out the basics of how to use it, and within a year I had transferred almost everything I had composed up to that point into it. I was using version 2.0 at the time. I’ve since upgraded to 5.0, and luckily all my older files open, even though they occasionally need a few tweaks here and there. Sibelius is now up to version 7.0, but I’m not a technology chaser, so I have no intention of upgrading anytime soon. And if the doomsday scenario folks are talking about comes to pass, I won’t be able to purchase newer editions anyway.

The years of being saddled with the Apple IIGS and Pyware Music Writer taught me to do the best with what I had, something the folks who wait in line overnight to buy the latest iPhone will find incomprehensible. Often the latest technologies are not the most useful. Back in 1998, some friends who were annoyed that I kept forgetting appointments chipped in and bought me a PalmPilot. It was discontinued last year, but my alarms still go off like clockwork and guarantee that I’m where I’m supposed to be most of the time. The handy and very easily searchable memos and address applications also help me remember and quickly access a ton of information I otherwise would have forgotten. Last year I bought a FlipCam about a month before it ceased being manufactured; it also remains a constant companion. I have no intention of throwing my BlackBerry out amidst rumors that it too might not be long for this world. And I laugh at all the folks who have abandoned their CD collections for iPods, cloud services, or whatever the latest soon-to-be-discontinued technology-du-demain is; my thousands of LPs still sound great! And so do harpsichords, recorders, lutes, analog synthesizers, and myriad other musical instruments that were supposed to have been made obsolete by “improved” instrument designs.

There is no way we can ever catch up with what we think the future will be—or rather, what others (mostly marketers) tell us the future will be. The time spent chasing it could be better spent honing our skills with technologies that work for us and that we are comfortable with. I still haven’t mastered all the details of Sibelius 5.0 (or even 2.0, for that matter); its already available intricacies will probably serve my compositional needs for the rest of my life. Now all I need to do is make sure that a computer that can run the software will last as long.