Tag: software

What to Ware? A Guide to Today’s Technological Wardrobe

Circuitry for Salvage

Circuitry for Salvage (Guiyu Blues), 2007. First version of design, housed in VHS tape box. 12 probes link to a dead circuit board to be re-animated. Rotary switches select the frequency range of each of six oscillator voices. Photo by Simon Lonergan.

At some point in the late 1980s the composer Ron Kuivila told me, “we have to make computer music that sounds like electronic music.” This might appear a mere semantic distinction. At that time the average listener would dismiss any music produced with electronic technology—be it a Moog or Macintosh—as “boops and beeps.” But Kuivila presciently drew attention to a looming fork in the musical road: boops and beeps were splitting into boops and bits. Over the coming decades, as the computer evolved into an unimaginably powerful and versatile musical tool, this distinction would exert a subtle but significant influence on music.

Kuivila and I had met in 1973 at Wesleyan University, where we both were undergraduates studying with Alvin Lucier. Under the guidance of mentors such as David Tudor and David Behrman, we began building circuits in the early 1970s, and finished out the decade programming pre-Apple microcomputers like the KIM-1. The music that emerged from our shambolic arrays of unreliable homemade circuits fit well into the experimental aesthetic that pervaded the times. (The fact that we were bad engineers probably made our music better by the standards of our community.) Nonetheless we saw great potential in those crude early personal computers, and many of us welcomed the chance to hang up the soldering iron and start programming.[1]

The Ataris, Amigas, and Apples that we adopted in the course of the 1980s were vastly easier to program than our first machines, but they still lacked the speed and processor power needed to generate complex sound directly. Most “computer music” composers of the day hitched their machines to MIDI synthesizers, but even the vaunted Yamaha DX7 was no match for the irrational weirdness of a table strewn with Tudor’s idiosyncratic circuits arrayed in unstable feedback matrices. One bottleneck lay in MIDI’s crudely quantized data format, which had been optimized for triggering equal-tempered notes and was ill suited for complex, continuous changes in sound textures. On a more profound level, MIDI “exploded” the musical instrument, separating sound (synthesizer) from gesture (keyboard, drum pads, or other controller)—we gained a Lego-like flexibility to build novel instruments, but we severed the tight feedback between body and sound that existed in most traditional, pre-MIDI instruments and we lost a certain degree of touch and nuance.[2]

MIDI no longer stands between code and sound: any laptop now has the power to generate directly a reasonable simulation of almost any electronic sound—or at least to play back a sample of it. Computer music should sound like electronic music. But I’m not sure that Kuivila’s goal has yet been met. I still find myself moving back and forth between different technologies for different musical projects. And I can still hear a difference between hardware and software. Why?

Most music today that employs any kind of electronic technology depends on a combination of hardware and software resources. Although crafted and/or recorded in code, digital music reaches our ears through a chain of transistors, mechanical devices, speakers, and earphones. “Circuit Benders” who open and modify electronic toys in pursuit of new sounds often espouse a distinctly anti-computer aesthetic, but the vast majority of the toys they hack in fact consist of embedded microcontrollers playing back audio samples—one gizmo is distinguished from another not by its visible hardware but by the program hidden inside a memory chip on the circuit board. Still, while a strict hardware/software dialectic can’t hold water for very long, arrays of semiconductors and lines of code are imbued with various distinctive traits that combine to determine the essential “hardware-ness” or “software-ness” of any particular chunk of modern technology.

Some of these traits are reflected directly in sound—with sufficient attention or guidance, one can often hear the difference between sounds produced by a hardware-dominated system versus those crafted largely in software. Others influence working habits—how we compose with a certain technology, or how we interact with it in performance; sometimes this influence is obvious, but at other times it can be so subtle as to verge on unconscious suggestion. Many of these domain-specific characteristics can be ignored or repressed to some degree—just as a short person can devote himself to basketball—but they nonetheless affect the likelihood of one choosing a particular device for a specific application, and they inevitably exert an influence on the resulting music.

I want to draw attention to some distinctive differences between hardware and software tools as applied to music composition and performance. I am not particularly interested in any absolute qualities inherent in the technology, but in the ways certain technological characteristics influence how we think and work, and the ways in which the historic persistence of those influences can predispose an artist to favor specific tools for specific tasks or even specific styles of music. My observations are based on several decades of personal experience: in my own activity as a composer and performer, and in my familiarity with the music of my mentors and peers, as observed and discussed with them since my student days. I acknowledge that my perspective comes from a fringe of musical culture and I contribute these remarks in the interest of fostering discussion, rather than to prove a specific thesis.

I should qualify some of the terms I will be using. When I speak of “hardware” I mean not only electronic circuitry, but also mechanical and electromechanical devices from traditional acoustic instruments to electric guitars. By “software” I’m designating computer code as we know it today, whether running on a personal computer or embedded in a dedicated microcontroller or Digital Signal Processor (DSP). I use the words “infinite” and “random” not in their scientific sense, but rather as one might in casual conversation, to mean “a hell of a lot” (the former) and “really unpredictable” (the latter).

The Traits

Here are what I see as the most significant features distinguishing software from hardware in terms of their apparent (or at least perceived) suitability for specific musical tasks, and their often-unremarked influence on musical processes:

    • Traditional acoustic instruments are three-dimensional objects, radiating sound in every direction, filling the volume of architectural space like syrup spreading over a waffle. Electronic circuits are much flatter, essentially two-dimensional. Software is inherently linear, every program a one-dimensional string of code. In an outtake from his 1976 interview with Robert Ashley for Ashley’s Music With Roots in the Aether, Alvin Lucier justified his lack of interest in the hardware of electronic music with the statement, “sound is three-dimensional, but circuits are flat.” At the time Lucier was deeply engaged with sound’s behavior in acoustic space, and he regarded the “flatness” of circuitry as a fundamental weakness in the work of composers in thrall to homemade circuitry, a practice quite prevalent at the time. As a playing field for sounds a circuit may never be able to embody the topographic richness of standing waves in a room, but at least a two-dimensional array of electronic components on a fiberglass board allows for the simultaneous, parallel activity of multiple strands of electron flow, and the resulting sounds often approach the polyphonic density of traditional music in three-dimensional space. In software most action is sequential, and all sounds queue up through a linear pipeline for digital-to-analog conversion. With sufficient processor speed and the right programming environment one can create the impression of simultaneity, but this is usually an illusion—much like a Bach flute sonata weaving a monophonic line of melody into contrapuntal chords. Given the ludicrous speed of modern computers this distinction might seem academic—modern software does an excellent job of simulating simultaneity. Moreover, “processor farms” and certain DSP systems do allow true simultaneous execution of multiple software routines.
But these latter technologies are far from commonplace in music circles and, like writing prose, the act of writing code (even for parallel processors) invariably nudges the programmer in the direction of sequential thinking. This linear methodology can affect the essential character of work produced in software.
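That linear pipeline is easy to make concrete. In the sketch below (plain Python, with arbitrary frequencies), three oscillator voices appear to sound at once, but the program computes them strictly one after another, summing each frame into the single stream of numbers bound for the digital-to-analog converter.

```python
import math

SAMPLE_RATE = 44100

def sine_voice(freq):
    """One oscillator voice, yielding samples one at a time."""
    phase, inc = 0.0, 2 * math.pi * freq / SAMPLE_RATE
    while True:
        yield math.sin(phase)
        phase += inc

# Three "simultaneous" voices...
voices = [sine_voice(f) for f in (220.0, 277.2, 329.6)]

# ...yet each output sample is computed sequentially, voice by voice,
# then summed into the one linear stream the DAC will convert.
def render(n_samples):
    return [sum(next(v) for v in voices) / len(voices)
            for _ in range(n_samples)]

block = render(1024)
```

The polyphony survives in the sound, but the computation itself never stops being a single queue.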
    • Hardware occupies the physical world and is appropriately constrained in its behavior by various natural and universal mechanical and electrical laws and limits. Software is ethereal—its constraints are artificial, different for every programming language, the result of intentional design rather than pre-existing physical laws. When selecting a potentiometer for inclusion in a circuit, a designer has a finite number of options in terms of maximum resistance, curve of resistive change (i.e., linear or logarithmic), number of degrees of rotation, length of its slider, etc.—and these characteristics are fixed at the point of manufacture. When implementing a potentiometer in software, all these parameters are infinitely variable, and can be replaced with the click of a mouse. Hardware has real edges; software presents an ever-receding horizon.
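The contrast can be put in a few lines of code. This sketch (Python; the component values and taper curve are invented for the example) builds a software “potentiometer” in which everything a hardware pot fixes at manufacture, such as maximum resistance, taper, and degrees of rotation, is just an argument that can be changed at any time:

```python
import math

def make_pot(max_value=10_000.0, taper="log", degrees=270.0):
    """A software 'potentiometer': range, taper, and rotation are
    parameters, not physical properties fixed at the factory."""
    def read(angle):
        pos = max(0.0, min(angle / degrees, 1.0))   # rotation -> 0..1
        if taper == "log":                          # audio-style curve
            pos = (math.exp(pos) - 1.0) / (math.e - 1.0)
        return max_value * pos
    return read

linear_pot = make_pot(taper="linear")
log_pot = make_pot(taper="log")
```

Swapping a linear taper for a logarithmic one, or a 270-degree throw for a 3,600-degree one, is a one-argument change; the hardware equivalent means a trip to the parts drawer.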
    • As a result of its physicality, hardware—especially mechanical devices—often displays non-linear adjacencies similar to state-changes in the natural world (think of the transition of water to ice or vapor). Pick a note on a guitar and then slowly raise your fretting finger until the smooth decay is abruptly choked off by a burst of inharmonic buzzing as the string clatters against the fret. In the physical domain of the guitar these two sounds—the familiar plucked string and its noisy dying skronk—are immediately adjacent to one another, separated by the slightest movement of a finger. Either sound can be simulated in software, but each requires a wholly different block of code: no single variable in the venerable Karplus-Strong “plucked string algorithm”[3] can be nudged by a single bit to produce a similar death rattle; this kind of adjacency must be programmed at a higher level and does not typically exist in the natural order of a programming language. Generally speaking, adjacency in software remains very linear, while the world of hardware abounds with abrupt transitions. A break point in a hardware instrument—fret buzz on a guitar, the unpredictable squeal of the STEIM Cracklebox—can be painstakingly avoided or joyously exploited, but is always lurking in the background, a risk, an essential property of the instrument.
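For reference, here is a minimal rendering of the Karplus-Strong algorithm cited above (a Python sketch with arbitrary tuning values): a burst of noise recirculates through a delay line whose length sets the pitch, while a two-point average low-passes the loop into a string-like decay. True to the point above, no single variable here can be nudged to yield fret buzz; that adjacency would have to be written as a separate block of code.

```python
import random

random.seed(1)  # repeatable "pluck" for the example

def karplus_strong(freq, sample_rate=44100, duration=0.5, decay=0.99):
    """Karplus-Strong plucked string: a noise burst recirculating
    through a low-pass-filtered delay line whose length sets the pitch."""
    n = int(sample_rate / freq)                           # delay line ~ one period
    line = [random.uniform(-1.0, 1.0) for _ in range(n)]  # the "pluck"
    out = []
    for _ in range(int(sample_rate * duration)):
        out.append(line[0])
        # averaging two adjacent samples low-passes the loop,
        # rounding the noise into a smooth, string-like decay
        new = decay * 0.5 * (line[0] + line[1])
        line = line[1:] + [new]
    return out

samples = karplus_strong(220.0)
```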
    • Most software is inherently binary: it either works correctly or fails catastrophically, and when corrupted code crashes the result is usually silence. Hardware performs along a continuum that stretches from the “correct” behavior intended by its designers to irreversible, smoky failure; circuitry—especially analog circuitry—usually produces sound even as it veers toward breakdown. Consider overdriving an amplifier to distort a guitar (or even setting the guitar on fire), feeding back between a microphone and a speaker to play a room’s resonant frequencies, or “starving” the power supply voltage in an electronic toy to produce erratic behavior. These “misuses” of circuitry generate sonic artifacts that can be analyzed and modeled in software, but the risky processes themselves (saturation, burning, feedback, under-voltage) are very difficult to transfer intact from the domain of hardware to that of software while preserving functionality in the code. Writing software favors Boolean thinking—self-destructive code remains the purview of hackers who craft worms and Trojan Horses for the specific purpose of crashing or corrupting computers.
    • Software is deterministic, while all hardware is indeterminate to some degree. Once debugged, code runs the same almost all the time. Hardware is notoriously unrepeatable: consider recreating a patch on an analog synthesizer, restoring mixdown settings on a pre-automation mixer, or even tuning a guitar. The British computer scientist John Bowers once observed that he had never managed to write a “random” computer program that would run, but was delighted when he discovered that he could make “random” component substitutions and connections in a circuit with a high certainty of a sonic outcome (a classic technique of circuit bending).
    • Hardware is unique, software is a multiple. Hardware is constrained in its “thinginess” by number: whether handcrafted or mass-produced, each iteration of a hardware device requires a measurable investment of time and materials. Software’s lack of physical constraint gives it tremendous powers of duplication and dissemination. Lines of code can be cloned with a simple cmd-C/cmd-V: building 76 oscillators into a software instrument takes barely more time than one, and no more resources beyond the computer platform and development software needed for the first (unlike trombones, say). In software there is no distinction between an original and a copy: MP3 audio files, PDFs of scores, and runtime versions of music programs can be downloaded and shared thousands of times without any deterioration or loss of the matrix—any copy is as good as the master. If a piano is a typical example of traditional musical hardware, the pre-digital equivalent of the software multiple would lie somewhere between a printed score (easily and accurately reproduced and distributed, but at a quantifiable—if modest—unit cost) and the folk song (freely shared by oral tradition, but more likely to be transformed in its transmission). Way too many words have already been written on the significance of this trait of software—of its impact on the character and profitability of publishing as it was understood before the advent of the World Wide Web; I will simply point out that if all information wants to be free, that freedom has been attained by software, but is still beyond the reach of hardware. (I should add that software’s multiplicity is accompanied by virtual weightlessness, while hardware is still heavy, as every touring musician knows too well.)
    • Software accepts infinite undos and is eminently tweakable. But once the solder cools, hardware resists change. I have long maintained that the young circuit-building composers of the 1970s switched to programming by the end of that decade because, for all the headaches induced by writing lines of machine language on calculator-sized keypads, it was still easier to debug code than to de-solder chips. Software invites endless updates, whereas hardware begs you to close the box and never open it again. Software is good for composing and editing, for keeping things in a state of flux. Hardware is good for making stable, playable instruments that you can return to with a sense of familiarity (even if they have to be tuned)—think of bongos or Minimoogs. The natural outcome of software’s malleability has been the extension of the programming process from the private and invisible pre-concert preparation of a composition, to an active element of the actual performance—as witnessed in the rise of the “live coding” culture practiced by devotees of the SuperCollider and ChucK programming languages, for example. Live circuit building has been a fringe activity at best: David Tudor finishing circuits in the pit while Merce Cunningham danced overhead; the group Loud Objects soldering PICs on top of an overhead projector; live coding vs. live circuit building in the ongoing competition between the younger Nick Collins (UK) and myself for the Nic(k) Collins Cup.

David Tudor performance setup

    • On the other hand, once a program is burned into ROM and its source code is no longer accessible, software flips into an inviolable state. At this point re-soldering, for all its unpleasantness, remains the only option for effecting change. Circuit Benders hack digital toys not by rewriting the code (typically sealed under a malevolent beauty-mark of black epoxy) but by messing about with traces and components on the circuit board. A hardware hack is always lurking as a last resort, like a shim bar when you lock your keys in the car.
    • Thanks to computer memory, software can work with time. The transition from analog circuitry to programmable microcomputers gave composers a new tool that combined characteristics of instrument, score, and performer: memory allows software to play back prerecorded sounds (an instrument), script a sequence of events in time (a score), and make decisions built on past experience (a performer). Before computers, electronic circuitry was used primarily in an instrumental capacity—to produce sounds immediately.[4] It took software-driven microcomputers to fuse this trio of traits into a powerful new resource for music creation.
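A toy illustration of that fusion (Python; the scale, sequence, and decision rule are all invented for the example): a pitch table stands in for the instrument, a stored sequence for the score, and a memory-based rule (leap an octave rather than restate the previous pitch) for the performer.

```python
scale = [0, 2, 4, 7, 9]        # "instrument": the pitches it can sound
steps = [0, 0, 2, 3, 2, 1]     # "score": a scripted sequence of scale degrees
history = []                   # "performer": memory of what came before

def next_note(i):
    pitch = 60 + scale[steps[i % len(steps)]]
    # performer-like decision built on past experience:
    # never restate the previous pitch verbatim; jump an octave instead
    if history and pitch == history[-1]:
        pitch += 12
    history.append(pitch)
    return pitch

phrase = [next_note(i) for i in range(12)]
```

No analog circuit holds all three roles at once; a dozen lines of code do it almost by accident.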
    • Given the sheer speed of modern personal computers and software’s quasi-infinite power of duplication (as mentioned earlier), software has a distinct edge over hardware in the density of musical texture it can produce: a circuit is to code as a solo violin is to the full orchestra. But at extremes of its behavior hardware can exhibit a degree of complexity that remains one tiny but audible step beyond the power of software to simulate effectively: the initial tug of rosined bow hair on the string of the violin; the unstable squeal of wet fingers on a radio’s circuit board; the supply voltage collapsing in a cheap electronic keyboard. Hardware still does a better job of giving voice to the irrational, the chaotic, the unstable (and this may be the single most significant factor in the “Kuivila Dilemma” that prompted this whole rant).
    • Software is imbued with an ineffable sense of now—it is the technology of the present, and we are forever downloading and updating to keep it current. Hardware is yesterday, the tools that were supplanted by software. Turntables, patchcord synthesizers, and tape recorders have been “replaced” by MP3 files, software samplers, and ProTools. In the ears, minds, and hands of most users, this is an improvement—software often does the job “better” than its hardware antecedents (think of editing tape, especially videotape, before the advent of digital alternatives). Before any given tool is replaced by a superior device, qualities that don’t serve its main purpose can be seen as weaknesses, defects, or failures: the ticks and pops of vinyl records, oscillators drifting out of tune, tape hiss and distortion. But when a technology is no longer relied upon for its original purpose, these same qualities can become interesting in and of themselves. The return to “outmoded” hardware is not always a question of nostalgia, but often an indication that the scales have dropped from our ears.

Hybrids

Lest you think me a slave to the dialectic, I admit that there are at least three areas of software/hardware crossover that deserve mention here: interfaces for connecting computers (and, more pointedly, their resident software) to external hardware devices; software applications designed to emulate hardware devices; and the emergence of affordable rapid prototyping technology.

The most ubiquitous of the hardware interfaces today is the Arduino, the small, inexpensive microcontroller designed by Massimo Banzi and David Cuartielles in 2005. The Arduino and its brethren and ancestors facilitate the connection of a computer to input and output devices, such as tactile sensors and motors. Such an interface indeed imbues a computer program with some of the characteristics we associate with hardware, but there always remains a MIDI-tinged sense of mediation (a result of the conversion between the analog and digital domains) that makes performing with these hybrid instruments slightly hazier than manipulating an object directly—think of controlling a robotic arm with a joystick, or hugging an infant in an incubator while wearing rubber gloves. That said, I believe that improvements in haptic feedback technology will bring us much closer to the nuance of real touch.

The past decade has also seen a proliferation of software emulations of hardware devices, from smart phone apps that simulate vintage analog synthesizers, to iMovie filters that make your HD video recording look like scratchy Super 8 film. The market forces behind this development (nostalgia, digital fatigue, etc.) lie outside of the scope of this discussion, but it is important to note here that these emulations succeed by focusing on those aspects of a hardware device most easily modeled in the software domain: the virtual Moog synthesizer models the sound of analog oscillators and filters, but doesn’t try to approximate the glitch of a dirty pot or the pop of inserting a patchcord; the video effect alters the color balance and superimposes algorithmically generated scratches, but does not let you misapply the splicing tape or spill acid on the emulsion.

Although affordable 3D printers and rapid prototyping devices still remain the purview of the serious DIY practitioner, there is no question that these technologies will enter the larger marketplace in the near future. When they do, the barrier between freely distributable software and tactile hardware objects will become quite permeable. A look through the Etsy website reveals how independent entrepreneurs have already employed this technology to extend the publishing notion of “print on demand” to something close to “wish on demand,” with Kickstarter as the economic engine behind the transformation of wishes into businesses. (That said, I’ve detected the start of a backlash against the proliferation of web-accessed “things”—see Allison Arieff, “Yes We Can. But Should We?”).

Some Closing Observations

Trombone-Propelled Electronics rev. 3.0, 2005. Photo by Simon Lonergan.

I came of age as a musician during the era of the “composer-performer”: the Sonic Arts Union, David Tudor, Terry Riley, La Monte Young, Pauline Oliveros, Steve Reich, Philip Glass. Sometimes this dual role was a matter of simple expediency (established orchestras and ensembles wouldn’t touch the music of these young mavericks at that time), but more often it was a desire to retain direct, personal control that led to a flowering of composer-led ensembles that resembled rock bands more than orchestras. Fifty years on, the computer—with its above-mentioned power to fuse three principal components of music production—has emerged as the natural tool for this style of working.

But another factor driving composers to become performers was the spirit of improvisation. The generation of artists listed above may have been trained in a rigorous classical tradition, but by the late 1960s it was no longer possible to ignore the musical world outside the gates of academe or beyond the doors of the European concert hall. What was then known as “world music” was reaching American and European ears through a trickle of records and concerts. Progressive jazz was in full flower. Pop was inescapable. And composers of my age—the following generation—had no need to reject any older tradition to strike out in a new direction: Ravi Shankar, Miles Davis, the Beatles, John Cage, Charles Ives, and Monteverdi were all laid out in front of us like a buffet, and we could heap our plates with whatever pleased us, regardless of how odd the juxtapositions might seem. Improvisation was an essential ingredient, and we sought technology that expanded the horizons of improvisation and performance, just as we experimented with new techniques and tools for composition.

It is in the area of performance that I feel hardware—with its tactile, sometimes unruly properties—still holds the upper hand. This testifies not to any failure of software to make good on its perceived promise of making everything better in our lives, but to a pragmatic affirmation of the sometimes messy but inarguably fascinating irrationality of human beings: sometimes we need the imperfection of things.

 

This essay began as a lecture for the “Technology and Aesthetics” symposium at NOTAM (Norwegian Center for Technology in Music and the Arts), Oslo, May 26–27, 2011, and was revised for publication in Musical Listening in the Age of Technological Reproduction (Ashgate, 2015). It has been further revised for NewMusicBox.

*

Nicolas Collins

New York born and raised, Nicolas Collins spent most of the 1990s in Europe, where he was visiting artistic director of Stichting STEIM (Amsterdam) and a DAAD composer-in-residence in Berlin. An early adopter of microcomputers for live performance, Collins also makes use of homemade electronic circuitry and conventional acoustic instruments. He is editor-in-chief of the Leonardo Music Journal and a professor in the Department of Sound at the School of the Art Institute of Chicago. His book, Handmade Electronic Music: The Art of Hardware Hacking (Routledge), has influenced emerging electronic music worldwide. Collins’s indecisive career trajectory is reflected in his having played at both CBGB and the Concertgebouw.

 

*

1. Although this potential was clear to our small band of binary pioneers, the notion was so inconceivable to the early developers of personal computers that Apple trademarked its name with the specific limitation that its machines would never be used for musical applications, lest it infringe on the Beatles’ semi-dormant company of the same name—a decision that would lead to extended litigation after the introduction of the iPod and iTunes. This despite the fact that the very first non-diagnostic software written and demonstrated at the Homebrew Computer Club in Menlo Park, California, in 1975 was a music program by Steve Dompier, an event attended by a young Steve Jobs (see http://www.convivialtools.net/index.php?title=Homebrew_Computer_Club) (accessed on February 21, 2013).


2. For more on the implications of MIDI’s separation of sound from gesture see Collins, Nicolas. 1998. “Ubiquitous Electronics—Technology and Live Performance 1966–1996.” Leonardo Music Journal 8: 27–32. One magnificent exception to the gesture/sound disconnect that MIDI inflicted on most computer music composers was Tim Perkis’s “Mouseguitar” project of 1987, which displayed much of the tactile nuance of Tudor-esque circuitry. In Perkis’s words:

When I switched to the FM synth (Yamaha TX81Z), there weren’t any keydowns involved; it was all one “note”…  The beauty of that synth—and why I still use it! — is that its failure modes are quite beautiful, and that live patch editing [can] go on while a voice is sounding without predictable and annoying glitches. The barrage of sysex data—including simulated front panel button-presses, for some sound modifications that were only accessible that way—went on without cease throughout the performance. The minute I started playing the display said “midi buffer full” and it stayed that way until I stopped.

(Email from Tim Perkis, July 18, 2006.)


3. Karplus, Kevin, and Strong, Alex. 1983. “Digital Synthesis of Plucked String and Drum Timbres.” Computer Music Journal 7 (2): 43–55.


4. Beginning in the late 1960s a handful of artist-engineers designed and built pre-computer circuits that embodied some degree of performer-like decision-making: Gordon Mumma’s “Cybersonic Consoles” (1960s-70s), which as far as I can figure out were some kind of analog computers; my own multi-player instruments built from CMOS logic chips in emulation of Christian Wolff’s “co-ordination” notation (1978). The final stages of development of David Behrman’s “Homemade Synthesizer” included a primitive sequencer that varied pre-scored chord sequences in response to pitches played by a cellist (Cello With Melody Driven Electronics, 1975) presaging Behrman’s subsequent interactive work with computers. And digital delays begat a whole school of post-Terry Riley canonical performance based on looping and sustaining sounds from a performance’s immediate past into its ongoing present.

To Upgrade or Not to Upgrade? A Notation Software Update

There have been big changes in the notation software market in recent years, and a lot of people are confused about what is going on and what the future might hold. Sibelius is dead, and Finale has been sold off? No more updates? Where did I put my old electric eraser and Pelican pens? As a professional engraver, I use this software 12 hours a day and am deeply invested in the state of things.

In April, Avid released a new version of Sibelius, loosely called Sibelius 8 although they are shying away from version numbers now. This is the first major upgrade of Sibelius with no new engraving features.

Yes, that is correct. No new engraving features. But if you use a computer that has a touch screen, you can now use a digital pen to annotate and mark up a score, in the same way you’d use a pen/pencil to mark up a printed copy.

Sibelius 8 touch screen

Some of the common tablet/smartphone gestures work on touch screens; you can navigate with the pen and do rudimentary editing.

Despite this dearth of overall improvements, Avid has decided to maximize their income stream: this new version introduces a draconian licensing program in which you pay a lot more for constant upgrades that may be of little use to those of us who focus on notation. Alternatively, you can purchase a perpetual license, but you must still pay a fee every year to continue receiving updates.

  • Just want to know if you should upgrade? Feel free to skip ahead.

HOW DID WE GET HERE?

Notation software has changed our industry in countless ways. It eliminated some methods of music typography (e.g., the Music Typewriter and the Korean “stamping” method). It has lowered the cost of music preparation, eased the process of making changes to existing materials, and provided professional tools to novices. It has also lowered the total fees paid for commissions, which once typically included additional remuneration for copying costs, since composers have taken over some of the tasks of materials preparation. This last item has often resulted in composers doing more of the work while being paid less.

However, I think we have grown a bit complacent and forgotten how fragile the software industry is. Professional music notation with computers came to prominence in the late ’80s when SCORE was released and publishers found that it was well suited to many different types of music, plus it had a good system to create scores and extract parts. It also had excellent guitar tablature notation, which made it ideal for companies such as Hal Leonard. SCORE’s strength was that it found a way to divide all of the myriad notational elements and organize them into categories of items, which allowed for easy manipulation. It is a primitive but musically intelligent program, and it is primarily graphics-based. If you have a 900-page score and insert a few bars at the beginning, there is no automatic update; you have to manually adjust things throughout, including page numbers, bar numbers, and layout. It was not particularly composer friendly, so it was mainly adopted by professional engravers and copyists. Some publishers used it in house—a few still do.

Score (version 3)

Shortly after that, Finale came along. It was slow and cumbersome, but it had a Mac and Windows version (SCORE only runs under DOS) and seemed more user friendly because it had a graphic interface with menus and tools to perform common tasks.

Finale 2014


For many years, those two programs formed the basis for converting the industry to computerized typography. However, in 1998, twin brothers Ben and Jonathan Finn released a Windows version of their unique program called Sibelius. It was designed with the idea that we should have a “word processor” for music notation, which would also serve as a professional tool. They studied what SCORE and Finale did, improved on it, and talked to many professionals to gain a deeper understanding of the needs of the industry. The paradigm they developed—a program that is easy enough that a novice can use it, yet structured in a way that a professional can come along later and improve the quality of the notation, the look, and the layout—is still its most compelling, powerful feature. Try doing the same operations in Finale or SCORE and the work hours double or triple.

Sibelius 7


Sales of Sibelius and Finale are strong, particularly in the education market, and generate enough revenue that the companies that own these products (Avid for Sibelius, and MakeMusic for Finale) can afford to continue development and add features, support existing users, and maintain the software. Yet there have been big changes in these two companies.

MakeMusic has been sold to Peaksware/LaunchEquity Partners and has moved from its longtime Minnesota location to Colorado. Many people who were intimately familiar with the software left the company because of the move.

Avid decided to close the primary London office where the Sibelius development team worked, and all of the long-time programmers who knew the code intimately were let go.

Sounds grim, doesn’t it? Add to that the fact that there have been two releases of Sibelius with only minor or non-existent feature changes (7.5 and “8”), and it surely makes you wonder about the future.

MakeMusic has finally ended its once-a-year Finale upgrade cycle (which was designed to generate revenue, not to benefit users). The latest version released is 2014, and while they have announced a free 2014.5 update, it only offers some bug fixes and minor improvements. Finale still suffers from an old-fashioned ’80s-era interface that depends on dozens of palettes, requires continual clicking on tools to accomplish basic tasks, and lags far behind Sibelius in important features like collision avoidance.

ARE THERE OTHER OPTIONS?

There are new notation products on the market, but most of them focus on tablet computing (like StaffPad). There are free programs like MuseScore and a few others that might attract users with very limited budgets.

PreSonus’s NOTION considers itself music notation software, but I haven’t seen anything done in the program that I would consider at a professional level. These programs can be fun and have potential, but I can’t imagine they will be adequate for professional engraving/copying work.

One company that hopes to upset the marketplace is Steinberg, the German firm that manufactures Cubase and Nuendo. They took the bold step of hiring the Sibelius team in London, and set them to work creating a new notation program. There is a lot of potential here. They are led by a very knowledgeable musician, Daniel Spreadbury, who was the brilliant manager for Sibelius. And the team he’s working with has created a notation program before, so they know the pitfalls. Since they have to compete with two very entrenched programs with lots of momentum, they need to build something better. They have studied some of the subtle aspects of music engraving, talked to many professionals, and have tried to learn what most notation programs still get wrong. I could write a very long article about this last item; it’s an area of deep concern. For example, horizontal spacing is poorly understood and no program has ever done it as well as plate engravers did 100 years ago. Every music notation program handles lyrics incorrectly (in terms of spacing), and vertical spacing/justification is equally problematic. Steinberg is aware of these things, and you can read about the work they are doing on Daniel’s blog.

They have also created a new music font specification (SMuFL) and a free font called Bravura, loosely based on the old Not-a-Set dry-transfer symbols, which were in turn based on Breitkopf & Härtel’s engraving tools. Dry-transfer symbols were mass-produced on transparent plastic sheets and applied to a music page by rubbing the back with a burnisher, a common autography technique before computer notation software became prevalent.

Engraving sample created with Not-a-Set


Engraving sample created with Bravura



WHAT SHOULD YOU DO NEXT?

If you use Sibelius 7, I think that’s a good version to stick with for now. (That’s the version I use for most of my work.)

If you use Sibelius 7.5, that’s fine too. (This version added some small new features, but it also changed the file format, so it’s annoying to share files with people working in earlier versions.)

If you use Sibelius 6, that’s a little tougher call. It’s acceptable to work in, but there are some limitations and it’s now several versions back. I would recommend moving on from that version before long.

If you use any version prior to 6, I would recommend you upgrade to 7 or 7.5 before you get trapped in the version 8 licensing scheme. But act quickly: you’ll need to buy 7/7.5 from a retailer who has existing stock, since Sibelius is no longer selling those versions.

FINALE

Finale 2014d is pretty stable, and it’s the version I tend to use for most projects. But opening old files in new versions of Finale can cause problems, or in some cases won’t work at all. Surprisingly, Finale’s free NotePad is the best choice for opening old Finale files, and it allows simple editing.

If you use a version of Finale prior to ver. 2012, it’s time to upgrade.

Notation software is absolutely essential for virtually anyone who needs to write down a musical idea. I have about 70,000 music files on my computer; I’d estimate two-thirds of them are in Sibelius format, and the rest in Finale and SCORE formats. I don’t foresee abandoning Sibelius or Finale any time soon, and I am reasonably confident the programs will remain functional and useful even if they don’t add any significant new features or fix the glaring problems that remain. Perhaps Steinberg’s entry into this market will shake things up and force some serious competition among all of the programs. Despite all of the grim news here, I remain optimistic and hopeful.


Bill Holab is the owner of Bill Holab Music, a company that publishes a select group of composers and provides high-end engraving/typography/design to the industry. www.billholabmusic.com

We Are Sitting In (Another) Room: Improv with Architecture

Pea Soup To Go
Today marks the 40th anniversary of Nicolas Collins’s Pea Soup, a piece that uses electronics to “play” the signature acoustics of a space. In honor of that milestone, Collins today unveils Pea Soup To Go, a free virtual jukebox programmed with recordings of 70 different versions of the work, iterations that span decades and continents.

Since the composition relies on linked microphones and loudspeakers in a “self-stabilizing feedback network” to map and respond to changes in the room and produce the sonic content featured in the piece, it might just be one of the purest forms of ambient music available. The jukebox shuffles the various collected recordings, masking transitions between each with long crossfades, allowing listeners to dip into this historic stock pot and feast until they are full.

*

Molly Sheridan: How do you tend to explain this piece to people who haven’t yet heard it, especially those without a great deal of technical background?

Nicolas Collins: Technically it’s pretty simple. Everybody seems to have heard the squeal of feedback at some point, and most are familiar with the fact that moving the microphone (or electric guitar) usually changes the pitch of the feedback. I explain that the phase shifter (the electronic gizmo at the heart of the piece) emulates a hand moving the mike every time the feedback starts to swell. The piece has a sufficiently dreamy, non-threatening quality that most people don’t worry too much about the how and why.

MS: And that idea led you to the title Pea Soup?

NC: The immersive quality of the sound field brought to mind the cliché of a fog “as thick as pea soup.” Rather silly, in retrospect, but I was pretty young and now I’m stuck with it.

MS: While reading up on the history of Pea Soup, I was surprised to discover that the work can involve (or always does?) live musicians. This was something I didn’t quite pick out in the first few iterations of the piece I heard via the jukebox. They are charged with interacting with the electronics (or later the software) in some specific ways. Can you explain why you prescribe their actions in the way that you do? And that, of course, made me curious about the impact of the audience in the space and therefore on the work itself.

NC: Left to its own devices, the Pea Soup feedback network creates simple, languid melodies whose pitches are derived from the resonant frequencies of the room (and the tempo reflects the reverberation time; larger rooms play slower tunes). A small change in the room acoustics can cause a pitch to be added to or dropped from the melody, like some slow hocket music. I ask performers to “play” the acoustics by walking around the room, since interfering with the reflecting paths of the feedback often causes a change in the patterns. They play notes as well: playing a unison with a feedback pitch, then bending slightly out of tune, can stop the feedback; playing an octave or fifth above a feedback pitch can cause the feedback to break to the upper interval; and introducing a pitch that hasn’t been heard in the feedback for several minutes often brings it back into the melodic pattern.
Audience sounds and movement obviously influence the patterns as well–a performance in a noisy bar unfolds very differently than in a quiet, formal concert hall. I’ve also installed the work in gallery settings, where interaction with the audience becomes central.
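The physics behind pitches “derived from the resonant frequencies of the room” can be illustrated with the standard formula for the axial modes of a rectangular room, f_n = n·c/(2·L). This is a deliberately simplified acoustic model for illustration only; real rooms, and Pea Soup performances, are far messier.

```python
SPEED_OF_SOUND = 343.0  # metres per second, in dry air at about 20 C

def axial_modes(length_m, count=5):
    """First `count` axial resonant frequencies (Hz) along one room
    dimension of length `length_m`: f_n = n * c / (2 * L)."""
    return [n * SPEED_OF_SOUND / (2.0 * length_m) for n in range(1, count + 1)]

# A larger room puts its resonances lower, one reason (alongside longer
# reverberation time) that big rooms yield slower, deeper melodies.
print([round(f, 1) for f in axial_modes(5.0, 3)])   # a small room
print([round(f, 1) for f in axial_modes(20.0, 3)])  # a large hall
```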

In performance I usually let the feedback system stabilize for a few minutes, as a sort of alap introducing the scale of the room, before the players start. The web app (Pea Soup To Go) shuffles a library of around 70 performance recordings, with long fade-ins and fade-outs. The sequence is random (or as close as I can get), as is the selection of in- and out-points for each file, so the recordings always start at different times–sometimes one drops right in on a musician’s sounds, but sometimes you have to wait a few minutes to hear a player. Plus the players are instructed to play “inside” the feedback texture, rather than soloing on top, so it’s not always easy to distinguish the instrumental voices.
Countryman phase shifter
MS: Okay, now for the gear snobs in the crowd, this piece offers some interesting insights into the punishment time can dish out on work that involves specific electronic components that can break down and become obsolete. This led you to some particular extremes—I especially loved the correspondence you exchanged with Carl Countryman, the maker of the phase shifter you originally employed in the piece. Can you tell us a bit about that evolution and how it affected the work?

NC: This will make me sound even older than I am, but back in 1974 there were no digital delays (or at least no affordable ones). The studio at Wesleyan had three Countryman Phase Shifters that Alvin Lucier had bought to do what’s called “Haas-effect Panning,” which is a way to pan sounds quite realistically using very short time delays. I had been working a lot with feedback, and discovered that changing the phase shifter’s delay setting could emulate moving a mike, opening up a whole new vista of quasi-automated feedback manipulation. Pea Soup emerged as one of the major products of my undergraduate education.
After college I moved on to other materials and technologies (early microcomputer music, live sampling and signal processing, collaboration with improvisers.) But I’d return to feedback from time to time, and when, through my day job in New York, I ran into Carl Countryman at trade shows I’d always ask if he had any of the Phase Shifters back at his warehouse. By the 1980s he was making very popular high-quality Direct Boxes and lavaliere microphones, and the phase shifters were long gone and, it seems, not missed–his answer was always “no.”

Then in the late 1990s I was in Berlin on a DAAD fellowship, and an ensemble with which I was working (Kammerensemble Neue Musik Berlin) asked if they could revive Pea Soup. At first I tried to reconstruct the original analog circuit. I emailed Mr. Countryman, who obviously still remembered my unwanted nagging, and he sent me the schematic with the explicit understanding that I was never to bother him about this device again. The circuit is not complicated, but it has one odd custom-made part that was difficult to duplicate. I did a few performances with my best attempt in the analog domain, but after a few years I wrote a software emulation of the original analog boxes that, with enough code tweaking, evolved into a pretty convincing substitute.

Software has allowed me to add a few features that would have been great to have back in 1974 but were out of reach then (such as a filter that automatically nulls pitches that would otherwise dominate the texture). Programs are not as cute as little metal boxes, but they’re lighter and can be distributed more freely, like old-fashioned paper scores: I’ve posted the program on my web site, where anyone who’s interested can download it and perform Pea Soup without the need to fly in Nic and his gear.
Pea Soup software
MS: How does the experience of Pea Soup via this clever website relate to the performance experience of hearing it live for you?

NC: In a big space with big speakers Pea Soup can be a very immersive and interactive experience—“church of sound,” as one friend once called it. The web app (Pea Soup To Go) is obviously more like listening to a recording of a concert than experiencing a live event, but this is a record that never ends, never repeats—a multi-disk CD changer in “shuffle mode” with a twist: the long crossfades knit the 70 files into one continuous performance. Since every room is in a different, architecturally determined “key,” you end up hearing a series of odd, vaguely modal chord changes that stretch out over an almost glacial time scale.

MS: Even before I started reading the background on Pea Soup, I kept thinking of Cage and Lucier associations related to “hearing” a space–using a space and its contents as so essential to the end sonic result. Do you hear this piece as in that evolutionary line? In what ways does it intersect and/or diverge?

NC: Yes, it certainly is in that line. I was a young, impressionable student of Lucier’s at the time I made Pea Soup. I was drawn to feedback under the twin influences of Lucier and Cage. I loved Lucier’s extraction of musical material from fundamental acoustical phenomena (think of Vespers and I Am Sitting In A Room). My parents were both architectural historians, and the link between music and architecture was critical to my finding a comfortable place to work. And feedback became the solution to my Cage-induced ambivalence about making personal musical decisions in a world where all sounds could be “musical sounds”: turn up the volume and let nature/god/architects do the rest—a sort of acoustical I Ching.

Divergence? I think my generation of musicians and composers is (and always was) much more comfortable with the idea of improvisation than our teachers were: Cage hated it; Lucier kept trying to come up with other words to describe it. In Pea Soup and most of my other work I embrace improvisation, I hand a lot of responsibility off to my players, and live with the consequences.

I also see each musical generation incorporating a new generation of technology. My peers and I embraced synthesizers, effect boxes, homemade circuitry, computers. And technological shifts often beget stylistic changes – some modest, some significant. There’s a certain kind of technological interactivity that I believe is, for better or for worse, the gift of my generation of experimental music composers.

MS: Even though this was originally a student piece, you note that the lessons of architectural acoustics have continued to engage you, making this piece of ongoing interest even 40 years later. What have some of those lessons been?

NC: I still have difficulty making certain musical decisions, and I often return to acoustics to clarify the edges or underpinnings of a piece. In the end no sound gets to the ear without engaging with acoustics, and the physical reality of sound keeps me grounded. There’s a certain primordial consonance or orderliness or reassuring “rightness” in it, that I find helpful when I’m feeling lost.
Roomtone Variations
Recently, while tweaking the software for Pea Soup, I discovered a simple way of mapping the resonant frequencies of a room to conventional music notation. I’ve written a piece (Roomtone Variations) that uses this technique to create a site-specific score for any concert space, in real time, in the presence of the audience. The score is projected on a screen for all to see as it unfolds, and after the analytical intro (which takes about two minutes) an ensemble performs purely acoustic variations on this “architectural tone row” – a kind of “Pea Soup Unplugged.”
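Collins doesn’t specify his mapping in the interview, but the simplest version of the idea, snapping a measured resonant frequency to the nearest equal-tempered pitch, might look like the sketch below. The function and the resonance values are mine, invented purely for illustration.

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_pitch(freq_hz):
    """Snap a frequency to the nearest equal-tempered pitch (A4 = 440 Hz)
    and return a note name with its octave, e.g. 440.0 -> 'A4'."""
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))
    return f"{NOTE_NAMES[midi % 12]}{midi // 12 - 1}"

# Hypothetical measured resonance peaks, in Hz, becoming a "tone row":
resonances = [86.0, 115.0, 172.5, 230.0]
print([freq_to_pitch(f) for f in resonances])
```

The interesting musical consequence is the rounding step: each room’s out-of-tune resonances get quantized onto the staff, so every concert space yields a different, but conventionally notatable, row.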

Another new piece, Speak, Memory, uses room reverberation as short-term memory for image files and sound bites. In the course of the performance I display the transformation of the original pictures and sounds as they are “forgotten” by the room. (I hope to include both these pieces on my first concert in New York in many years, at Roulette on March 9.)

You could look at this obsession in one of two ways, I suppose: either I am somewhat pathetic for, at the age of 60, still being hung up on my first true love from age 20; or it’s a sign of deep commitment to one’s fundamental beliefs. Take your choice.

Finally, Movement on the Notation Front

Back in July of 2012, many notation software users were shaken by the news that Sibelius’s parent company, Avid, was dissolving the program’s London-based office and its primary development team. My “Sharpen Your Quills” post demonstrated how the news resonated throughout the composer community; whether a composer used Sibelius or Finale (the two primary notation software options on the market today) or one of the several secondary alternatives, it was apparent how deeply this structural change would impact the notation software industry. A year and a half later, there are finally signs of the effects that shakeup has had and of what the future holds for those who see notation software as an irreplaceable tool.

Finale

Finale has weathered numerous complaints over the years regarding their policy of yearly updates (many of which seemed superficial at best), their reliance on an outdated programming infrastructure for Mac users, a reluctance or inability to match improvements brought forth by their competitors in a timely manner, and a business model that seemed geared toward the public school market while ignoring pleas from professional engravers asking for more functionality in working with complex musical notation. While Finale’s decision to forgo their yearly update model and allow their programmers more time to make extensive changes came a couple months before news broke of Avid’s adjustment to Sibelius, the timing was a lucky break nonetheless.
On November 4, Finale announced their newest version, Finale 2014. Once the announcement was made, the knee-jerk reaction for many users was to read the software’s overview by the widely respected Finale plug-in developer Jari Williamson (whose reviews are required reading for anyone interested in a new software update from Finale). The changes ranged from the technical (they were finally able to move from the deprecated Carbon programming interface to Cocoa, a boon for Mac users, though they neglected to create a 64-bit version) to the practical (much-improved treatment of hairpins, cutting down on the need for time-intensive manual editing) to the good-god-why-did-this-take-so-long (the beginning of backward compatibility—limited, but it’s a start). But what stuck out for me were the indications that their focus had grown to re-incorporate the needs of the professional contemporary composer/engraver.

Many of the changes addressed issues that the occasional user would probably never think about or require—merging rests across layers and cross-layer accidental changes being two of the biggest—and one of the most interesting changes, the acceptance and incorporation of “open” or non-traditional key signatures, points directly to contemporary compositional techniques that became commonplace in the late 20th and early 21st centuries. The software still has much to address before it gets to where it should be—a user interface replete with interminable dialogue boxes, the lack of the magnetic positioning that Sibelius introduced, and the inability to copy individual items intuitively with the selection tool are major sticking points—but the fact that Finale decided to focus on the issues it did, rather than on ancillary changes for general public usage, demonstrates that Finale and their parent company, MakeMusic, may have become more serious about improving the power and depth of their software as well as its reach and breadth.

Sibelius

Since the major adjustments last year, there’s been little news on this front…the exception being a recent comment from Avid’s director of product management, Bobby Lombardi, who decided, in light of his competitor’s announcement, to let Sibelius Blog know that “Sibelius 7.5” is coming soon. In addition to a review of Finale 2014 as seen through the lens of Sibelius users, Sibelius Blog also mentions the fate of those programmers from Sibelius who were let go when Avid closed their London office; most were hired by a newcomer to the notation software marketplace—Steinberg.

Steinberg

From Steinberg’s blog page:

Steinberg set up a new London-based research and development centre in November 2012, and hired as many of the former Sibelius development team as possible to start work on a brand new scoring application for Windows and Mac. There are currently twelve of us in the team, and all of us were formerly part of the Sibelius development team.

This is one of the more interesting developments on the music notation front in a very long time. By releasing most of their A-team developers, Avid unintentionally created a new competitor in a rapidly growing marketplace. What has been most fascinating about this new endeavor is the transparency with which the Steinberg team has chosen to build their new application…so new that it doesn’t even have a name yet. That transparency can be seen most clearly in Steinberg’s “Making Notes” blog, run by product marketing manager Daniel Spreadbury (again, formerly of Sibelius). Borrowing a page from Hollywood, where production vignettes appear many months before a film is released, Steinberg is taking the unusual step of discussing its creation process as it goes.

Here Spreadbury discusses the nuts and bolts of putting together aspects of a notation system that would seem very simple but are both conceptually and logistically extremely complex:

Another important prototype is a means to visualise music on staves. Several months ago, a very simple visualiser was written that shows rhythms, but not pitches, of notes belonging to a single voice on a single-line staff. Since then, we’ve done work on determining staff position and stem direction for notes and chords, and also have the capability to assign multiple voices to the same staff, but we’ve had no way to visualise the result on a staff. Now our test application can optionally show music for multiple voices on a five-line staff, and can display multiple staves together.
It’s still very crude: notes are not beamed together, the spacing is pretty terrible, and things like ties are drawn very simplistically. This is not by any means the basis for how music will eventually appear in our application. But it is an important diagnostic tool as we continue to add more and more musical building blocks…
Our ethos is that our application will be most useful if it does automatically what an experienced engraver or copyist would do anyway. If an engraver and copyist can trust the musical intelligence built in to our application to make the right decisions, it will become a truly fast and efficient tool, and hopefully the one they will come to prefer over and above the others at their disposal.
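The stem-direction work Spreadbury mentions rests on one of the simpler engraving rules: in a single voice, a note below the middle staff line stems up, a note above it stems down, and in a chord the note farthest from the middle line decides. The sketch below is my own toy rendering of that textbook rule, not Steinberg’s code.

```python
def stem_direction(staff_positions):
    """Textbook stem rule for a single voice. `staff_positions` are steps
    above (+) or below (-) the middle staff line; for a chord, the note
    farthest from the middle line decides, and an exact tie stems down."""
    extreme = max(staff_positions, key=lambda p: (abs(p), p))
    return "up" if extreme < 0 else "down"

print(stem_direction([-3]))      # a note below the middle line
print(stem_direction([2]))       # a note above the middle line
print(stem_direction([-1, 4]))   # a chord: the farthest note decides
```

Even this “simple” rule shows why the problem compounds: add a second voice and the rule inverts by voice, add beams and the whole group must share one direction, which is exactly the kind of cascading logic the team is prototyping.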

Where this new software will end up is unclear—they’re still at the rough, early stages—but from what is currently available, this new addition to the pantheon of notation software applications has the potential to become a third major platform that combines the best characteristics of both Finale and Sibelius. What this means for composers, and subsequently the entire new music community, is as varied as the number of ways in which these applications are used. Some composers use them exclusively as engraving tools, while others eschew paper and pencil altogether and compose directly into the application. Ultimately, if software developers are able to improve the ease of use and the quality of the finished product, then we all come out ahead.

Notational Alternatives: Beyond Finale and Sibelius

“Finale or Sibelius?” is a question that composers love to ask other composers. It’s often taken as a given that if you write music professionally, you’re already using one of these popular notation software packages. This may be about to change—with the news of Sibelius’s development team being unceremoniously dumped by Avid and subsequently scooped up by Steinberg, we may have a third variable to add to that equation. ThinkMusic, another newcomer, promises an iPad app in the near future, but has already generated controversy for seeming to use Sibelius in its video mockup.

In the meantime, there are a variety of other, lesser-known options for notation software already lurking out there. None of them may have the same clout with professionals as Sibelius and Finale—yet—but many are gaining ground. Whether they present robust alternatives for creating notation (MuseScore, LilyPond), or alternative ways of thinking about and deploying notation (Abjad, JMSL, INScore), each has its own advantages and its own dedicated following.

MuseScore: Open Source Upstart
MuseScore started out in 2002 as a spinoff of MusE, an open source sequencer created by German developer and musician Werner Schweer. Until 2007, however, MuseScore was an obscure piece of software only available on Linux. In 2008, Thomas Bonte and Nicolas Froment began to work on bringing the software to a wider audience. Now, over 5000 people download MuseScore every day. Bonte credits the software’s newfound success to its extremely dedicated developers and early adopters. Its open source community now boasts more than 250 contributors adding to the project. This includes making the software available in new languages, fixing bugs, writing documentation, creating video tutorials, and so on.


While Bonte admits that MuseScore is not yet as feature-complete as Sibelius or Finale, he highlights the price tag: MuseScore is completely free, while the others can run as much as $600. Bonte also points out that when compared to the others, MuseScore is a fairly young piece of software. He anticipates that in a few years, “Musescore will have 80% of other notation software’s feature set on board.”
Another long-term advantage is MuseScore’s open source status, says Bonte:

Anyone can look into the code, change it and distribute it further. This is not possible with proprietary software like Sibelius, Finale, and Score. Given the recent uproar in the Sibelius community about Avid closing the London office, it seems now more than ever appropriate to say that choosing free and open source software is the right thing to do. What happened with Sibelius may happen with any other proprietary software, but cannot happen with MuseScore or LilyPond. The source code is available to everyone; no one can take it away.

This openness made MuseScore the notation software of choice for the Open Goldberg Variations, a project to create a new, quality edition of J.S. Bach’s beloved work that would be freely available in the public domain. This time, the venerable work had a very modern path to publication: the project was crowdfunded through Kickstarter and remained open for peer review on musescore.com before being made available for download. The Open Goldberg Variations can be found on the IMSLP / Petrucci Project website, though anyone is welcome to host or share it.

Screenshot of Open Goldberg Variations iPad app


Musescore.com is MuseScore’s latest initiative. Launched in the fall of 2011, musescore.com is an online sheet music sharing platform, and the only thing that MuseScore charges for. Bonte compares the business model of the site to Flickr or SoundCloud—subscribers pay a fee ($49 per year) for more storage and features, essentially. Bonte says this revenue stream allows them to continue to develop MuseScore full time, while maintaining the open source status of the software itself.

LilyPond and Abjad: A Marriage of Composition and Code
Jan Nieuwenhuizen and Han-Wen Nienhuys are the creators of LilyPond, another open source music notation package. The project that would eventually become LilyPond had its genesis in 1992, when Nieuwenhuizen was playing the viola in the Eindhovens Jongeren Ensemble, a youth orchestra conducted by Jan van der Peet. According to Nieuwenhuizen, the players struggled to read from computer printouts so much that they soon switched back to handwritten parts. That got him thinking: “Fully automated music typesetting done right—how hard could that be?”

As it turns out, it was not terribly easy. Using the typesetting system TeX as a foundation, Nieuwenhuizen began working on the problem with Nienhuys, a French horn player in the orchestra and math student at the Eindhoven University of Technology. But it wasn’t until four years later, in 1996, that LilyPond finally emerged after four flawed prototypes. Despite being plagued by difficulties, however, they found that they couldn’t leave the problem alone. “We never realized how hard it was to produce beautifully typeset music automatically until it was too late and we were hooked,” Nieuwenhuizen admits.

Since those humble beginnings, LilyPond has matured into a full-fledged community project, with over 50 authors contributing to the latest stable release for Windows, Mac OS X, and Linux. This includes one full-time developer, David Kastrup, who makes a living—“just barely,” says Nieuwenhuizen—from donations to the project, which Nieuwenhuizen sees as a major milestone.
Because LilyPond is primarily a typesetting and engraving program rather than a compositional tool, its user paradigm differs somewhat from programs like Finale/Sibelius/MuseScore. Similar to Score, the most common engraving program until Finale came along, musical notation is initially entered as text characters, separating out the step of encoding the notation from the act of graphically displaying the notation, while ensuring a consistent layout. Nieuwenhuizen admits that this can be scary or intimidating at first to composers unused to working this way, but contends that in itself, LilyPond is “quite intuitive and easy to use.” He also foresees more community development of graphical front ends, web-based services, and tablet apps that will make LilyPond even more accessible to those just starting out with the software.
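To give a sense of what this text-based entry looks like, here is a minimal LilyPond input file (a hypothetical fragment, not drawn from any score mentioned here); running it through the `lilypond` command engraves it as a PDF:

```lilypond
\version "2.18.2"
\relative c' {
  \key g \major
  \time 3/4
  g4 a b | d2. | c4 b a | g2. \bar "|."
}
```

Notice that pitches, durations, and barlines are all plain characters: the encoding of the music is fully separated from its graphical rendering, which is what guarantees LilyPond’s consistent layout.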

This community may be LilyPond’s greatest asset, with a significant amount of overlap between users of the software and those tinkering with the software itself. This new generation of composers who code is extending LilyPond’s functionality into unforeseen territory. For example, Victor Adán, Josiah Oberholtzer, and Trevor Bača are the lead architects of Abjad, which allows composers to write code that acts on notation in LilyPond in “iterative and incremental” ways. In other words, instead of creating notation directly, composers write code that Abjad then translates into a format that LilyPond can interpret to generate notation. As a result, instead of just manipulating individual notes and objects, Abjad can manipulate higher-level structures—like changing the dynamic level of every instance of a particular note, to give one basic example. Abjad uses the Python programming language, known for its readability and flexibility, as its foundation.
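
Abjad’s real API is far richer than this, but the underlying idea (code that models notation as data, transforms it at a high level, and emits text for LilyPond to engrave) can be caricatured in plain Python; the function names below are invented for illustration and are not Abjad’s:

```python
# Toy sketch of the Abjad idea: model notes as data, apply a
# higher-level transformation, then emit LilyPond-style input text.

def to_lilypond(notes):
    """Render (pitch, duration, dynamic) tuples as a LilyPond fragment."""
    tokens = []
    for pitch, duration, dynamic in notes:
        token = f"{pitch}{duration}"
        if dynamic:
            token += f"\\{dynamic}"  # e.g. \f for forte
        tokens.append(token)
    return "{ " + " ".join(tokens) + " }"

def mark_every(notes, target_pitch, dynamic):
    """Higher-level edit: set a dynamic on every instance of one pitch."""
    return [(p, d, dynamic if p == target_pitch else dyn)
            for p, d, dyn in notes]

phrase = [("c'", 4, None), ("e'", 8, None), ("c'", 8, None), ("g'", 2, None)]
marked = mark_every(phrase, "c'", "f")
print(to_lilypond(marked))  # { c'4\f e'8 c'8\f g'2 }
```

The point of the sketch is the division of labor: the composer’s code manipulates structures, and LilyPond handles the engraving.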

Excerpt of Trevor Bača’s Čáry created in LilyPond

Writing music with Abjad presents a departure from the traditional compositional process. For Bača, it occupies a position “somewhere between the notation packages like Finale, Sibelius, and Score, and the composition environments like OpenMusic and PWGL.” He describes the process of working with Abjad as a “two-part loop,” alternating between writing code to model parts of a score and considering the notation as visualized in LilyPond. This iterative process of continual revision blurs the boundaries between programmatic and musical thinking, as well as between composition and pre-composition.

The creators of Abjad have also worked closely with practicing composers in the course of development. One of these, Jeffrey Treviño, is already well versed in the musical uses of technology; in the course of writing Being Pollen, a work for percussion and electronics based on the poetry of Alice Notley, he estimates that he used nine different pieces of software. With Abjad he had a specific application in mind—he hoped it would help him notate the rhythms of Notley reciting her poem. He describes part of the process here:

I used Max/MSP to tap along to her recitation and make a text file of millisecond counts for when each syllable occurred. I tightened these up in Audacity to line up precisely, and then I exported the numbers again. I wanted to use these numbers to make a notation in Abjad, but Abjad didn’t have a quantizer… We ended up looking up some research together, especially Paul Nauert’s writing on Q-Grids quantization, and Josiah ended up making the quantizer for Abjad.
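
The quantization problem Treviño describes can be caricatured in a few lines of Python. This is a deliberately naive sketch that snaps onsets to a single fixed grid, whereas Nauert’s Q-Grids approach weighs many candidate grids against one another:

```python
def quantize_onsets(onsets_ms, tempo_bpm=60, divisions=4):
    """Snap millisecond onset times to the nearest grid position.

    Returns (beat index, subdivision index) pairs. A single fixed
    subdivision of the beat is assumed, unlike Q-Grids quantization,
    which searches among competing grids.
    """
    beat_ms = 60000 / tempo_bpm       # length of one beat in ms
    step_ms = beat_ms / divisions     # grid resolution, e.g. sixteenths
    quantized = []
    for t in onsets_ms:
        k = round(t / step_ms)        # nearest grid slot overall
        beat, slot = divmod(k, divisions)
        quantized.append((beat, slot))
    return quantized

# Syllable onsets tapped at roughly 0, 240, 510, and 1020 ms, at 60 bpm:
print(quantize_onsets([0, 240, 510, 1020]))
# -> [(0, 0), (0, 1), (0, 2), (1, 0)]
```

Turning those grid positions into legible rhythmic notation (tuplets, ties, rests) is where the real difficulty lies, which is why a dedicated quantizer had to be built.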

In this case, Treviño’s needs as a composer had a direct impact on the development of Abjad, and this in turn allowed Treviño to accomplish something musical that would have otherwise been impossible, or at least far more difficult. Treviño draws an analogy between this model of collaborative composing and high-level chess:

Remember when it was a big deal that Deep Blue beat [Grandmaster Garry] Kasparov in 1997? No one mentions that they did a tournament after this where people could use computers to assist them. When Kasparov had a computer, he beat Deep Blue—but most intriguingly, an amateur aided by a computer, not Kasparov, won the whole tournament. So, I’m a lot better at writing twenty-part counterpoint that doesn’t break any rules if a computer can help me. But the skill set it takes to get the computer to know the rules is a very different skill set than the skill set we teach students in counterpoint classes. That’s all to say—I think it’s best to think about all this as totally redefining the important skills of the creative act, so that formerly conventional amateur/master relationships might be turned on their heads. Rather than expanding or enabling skills that matter currently, this proposes a totally new set of competencies and approaches to the task.

(N.B.: Your author independently thought of this analogy, so it must be a good one.)

Video of Jeffrey Treviño’s “Being Pollen” performed by Bonnie Whiting Smith (composed with help of Abjad/LilyPond)

JMSL and INScore: Notation in Motion
Nick Didkovsky, the primary developer of the Java Music Specification Language, is a guitarist, composer, and programmer who leads the avant-rock octet Doctor Nerve and teaches computer music classes at NYU. But for many years Didkovsky’s parallel interests in music and computers remained separate, never intersecting. What finally inspired him to combine them was an article by Douglas Hofstadter in Scientific American about game theory and a particular kind of lottery called the Luring Lottery, in which the size of the prize is inversely proportional to the collective desire to win: the more entries submitted, the smaller the pot. Didkovsky says, “[The Luring Lottery] is a beautiful and simple idea that binds people together in a simultaneously competitive and cooperative relationship… I wanted to realize that structure musically and thought computers might need to be involved.”

He turned to Pauline Oliveros for help, and she directed him to Larry Polansky. Polansky, together with Phil Burk and David Rosenboom, had created the Hierarchical Music Specification Language (HMSL), a programming language offering a suite of musical tools that turned out to be perfect for Didkovsky’s task. Today HMSL might be most easily compared to other audio programming languages like Max/MSP and SuperCollider, but in an era when these languages were in their infancy, what appealed to Didkovsky about HMSL was its open-endedness: “You can basically do anything… no two HMSL pieces sound even remotely the same because you’re not starting on a level high enough to influence musical tastes. It’s a very non-stylistically biased environment for musical experimentation. And so I think it’s kind of deliberate that it’s kind of a tough environment to work in, or at least it just doesn’t come with a lot of bells and whistles.”

For the next ten years, Didkovsky continued to develop music software with HMSL on the Commodore Amiga for Doctor Nerve as well as other ensembles like the Bang on a Can All-Stars and Meridian Arts Ensemble. When the Amiga platform began showing its age, Didkovsky and Burk had the idea to rebuild HMSL in Java, which could be run on multiple platforms, and in 1997 Java Music Specification Language was born.

The most significant change to JMSL since those days is the addition of a music notation package. Given Didkovsky’s commitment to traditional instruments, it made sense to use JMSL to drive a notation environment—and the result was, in his words, a “huge catalyst” creatively. In addition to the many pieces Didkovsky has written using JMSL since then, it has also become a tool used by composers all over the world:

One of my former students, Peter McCullough… developed an extensive suite of personal tools that did very unusual things to scored music, designing both generative and mutator routines that populate the score with notes and transform them once they are present… [progressive metal guitarist and record producer] Colin Marston wrote a series of notated pieces that are impossible for a human being to play—superhuman, intensely noisy pieces performed at high tempo that just rip by and throw glorious shrieking noise in your face, while the staff notation is populated by thick clusters of notes flashing by.

Didkovsky is quick to note that, while traditional staff notation is an important feature of JMSL, it represents only part of what the software can do. Many of the applications of JMSL have little to do with standard music notation—for example, the Online Rhythmicon, a software realization of the instrument Leon Theremin built for Henry Cowell, or Didkovsky’s MandelMusic, a sonic realization of the Mandelbrot set.

Nonetheless, JMSL’s notation capabilities may end up being its most widely used feature, especially with the advent of MaxScore. Didkovsky collaborated with composer Georg Hajdu to create MaxScore, which allows JMSL’s scoring package to communicate with the more popular audio programming environment Max/MSP. Currently, most of Didkovsky’s development energies are directed towards improving MaxScore.

MaxScore Mockup

INScore, created by Dominique Fober, is a similar synthesis of ideas from notation software and audio programming, though Fober is quick to stress that it is neither a typical score editor nor a programming language. Fober is a musician with a scientific background who found himself doing more and more research related to musical pedagogy. He now works for Grame, a French national center for music creation, where he conducts research related to music notation and representation.

INScore follows from Fober’s experiments based on the idea that, by providing immediate feedback to the performer, musical instruments act as a “mirror” that facilitates learning. Fober wanted to design a musical score that could act as a similar sort of mirror of musical performance, in the form of graphic signals informed by the audio that could augment traditional musical notation. Fober refers to this approach as an “augmented music score.”

“There is a significant gap between interactive music and the static way it is usually notated,” says Fober. Even with live electroacoustic music, performers generally read from paper scores that give an approximation of the electronic events. There are tools like Antescofo that allow computers to follow a score, and tools for the graphical representation of electronic music, like Acousmograph and EAnalysis, but INScore’s approach is different. “[With INScore] the idea was to let the composer freely use any kind of graphic representation—not just symbolic notation but images, text, and video as well—to express his or her thoughts in a form suitable for performance.”

Montreal-based composer Sandeep Bhagwati used INScore for an entire concert of works entitled “Alien Lands” in February 2011. Meanwhile, British composer Richard Hoadley has written Calder’s Violin for violin and computer, premiered in October 2011. Calder’s Violin uses INScore to dynamically generate the violinist’s score in the course of the performance. INScore is not solely aimed at composers, however, and it has also been used for pedagogy, for sound installations, and to model analytic scores of electroacoustic music.

Videos of Richard Hoadley’s “Calder’s Violin” (created with INScore)

The Future of Notation?
Despite the vast differences among all of these notation software packages, one thing they have in common is that each offers something, small or large, that Sibelius and Finale don’t. If you’re looking for something easily accessible and free, MuseScore and LilyPond are well worth checking out. If you’re interested in algorithmic or interactive notation and are willing to deal with a somewhat steeper learning curve, Abjad, JMSL, and INScore are capable of remarkable things. And that’s not to mention the many options I haven’t discussed—the BACH Automated Composer’s Helper, Heinrich Taube’s Common Music and FOMUS, IRCAM’s OpenMusic, and PWGL from the Sibelius Academy in Helsinki. With all of these tools at our disposal, chances are we won’t be hearing “Finale or Sibelius?” for much longer.

Keeping Score: Spreadbury Speaks on Sibelius Team Transition

Daniel Spreadbury

Daniel Spreadbury worked on the Sibelius notation software for years, both as a product* and community manager. Then, last July, the software’s parent company, Avid, announced a restructuring, and the Finsbury Park office in London that had been home to the Sibelius team was closed. News came last week, however, that the team is opening a new office in London to work on a brand new notation program, this time under the auspices of Steinberg, a German company known primarily for the sequencer Cubase. Here’s what Spreadbury had to say about the new project:

Kevin Clark: First off—the question on everyone’s minds: what are you working on and when can we buy it? Of course things are in the very early stages, but any news would be very exciting.

Daniel Spreadbury: Obviously we shall be working on a brand new music notation and composition application, which will sit alongside Steinberg’s other products.  All other aspects and strategies are currently under discussion and will be communicated in due time.

KC: Are there any existing Steinberg technologies that form a good basis for your work?

DS: Certainly – though we’re not sure which just yet. Steinberg has a rich portfolio of technologies, and we can’t wait to get to know our new colleagues and learn from them about the ways in which components or technologies from other products can enrich our new program.

Steinberg logo

KC: Is your team still intact at Steinberg?

DS: Yes, as far as was possible. Steinberg have been fantastic, and were clear from the outset that they wanted to bring the whole team over if they could. However, after it was clear that our office would be closed, a few of our former colleagues took up other jobs and subsequently chose not to re-join us. But the team is definitely intact, and between us we have decades of experience in designing and building great software for musicians, and we are looking forward to combining that experience with the know-how of our new colleagues.

KC: How would a music notation product relate to the rest of Steinberg’s software? Would it be a part of the core business or a separate direction?

DS: Speaking as somebody who has until recently merely been an observer of Steinberg, it has always been my belief that Steinberg is totally committed to providing great products for creative musicians. I see our new application as fitting right in with this ethos, but perhaps targeted at musicians who are more comfortable working with music notation than with sequencer or DAW workflows.

KC: On a separate note, what’s it been like to go through this change for your whole team?

DS: We have been welcomed with open arms by Steinberg. The company’s leaders have shown a real commitment to our team in opening a new office for us in London, and we couldn’t be happier. Many of us have been working together for more than a decade, so the prospect of the team breaking up was pretty distressing, but now we are able to look forward to working together for years to come.

KC: Lastly, what can the community do to help? Any new product will take a while, but in the meantime, if your community wants to help, what should they do?

DS: Right now, it’s very early days. We have a lot of work to do before we can really engage directly with the community in a structured way, but we plan to once our plans are a little firmer. Watch this space!

* An earlier version of this article listed Daniel Spreadbury as a former programmer on Sibelius. He was not. He was a product manager. We regret the error, although we’re glad the actual Sibelius developers got a laugh out of it.

Sharpen Your Quills!

In her seminal book Composers at Work: The Craft of Musical Composition 1450-1600, Jessie Ann Owens describes the physical equipment that composers from that period used as they sketched and made initial drafts of their music:

The main instrument for writing during this time period was the quill pen, the point of which had to be cut according to the kind of letters or shapes desired. The graphite pencil that is the ancestor of the pencil in use today was developed during the second half of the sixteenth century, following the discovery of a source of graphite in England, and it came in to common use only after 1600. Other kinds of pencils, made from lead or other metal, left quite fine and faint lines, not appropriate for musical composition. All of the extant manuscripts used for composing were written with pen and ink.

The use of ink meant that erasure was difficult. There were only four ways to correct a mistake: write over it, cross it out, smudge it before the ink dried, or scrape the ink from the surface with a small knife…The choice of method was determined by the stage of work and the requirements for neatness.

When the creation of music is discussed or analyzed, the physical tools with which an artist transfers ideas from mind to page rarely enter the conversation (Owens’s book, albeit briefly, being an exception). Similarly, as commonplace as corporate mergers, acquisitions, and divestitures have become within our modern economic landscape, one would not expect such maneuvers to have a direct and potentially negative impact on the output of creative artists such as composers.

Yet these two seemingly unrelated topics were suddenly brought sharply into focus this week with the news that Avid, the parent company of the popular Sibelius notation software, would be closing Sibelius’s main British development office as part of a major streamlining move to cut costs. First widely publicized by Norman Lebrecht on his Slipped Disc blog and augmented by NewMusicBox’s own investigative team, this move and the possibility of Avid discontinuing the Sibelius product altogether have had a chilling effect throughout a portion of the new music community.

For the uninitiated, the ability to create affordable, publisher-quality engraved music notation on a personal computer arrived with the advent of Finale in 1988 in the United States and the emergence of Sibelius in the UK in 1993. Ever since Sibelius became available on the Windows and Macintosh platforms in 1998, the healthy rivalry between Finale and Sibelius has forced each to continually hone its functionality, ultimately improving both applications.

The practical result of these improvements has been raised expectations: composers are now expected to produce professional-level engraved scores and parts, and conductors, instrumentalists, and singers today will usually turn a piece down altogether if it is hand-written by anyone but the best calligraphers. Beyond performers’ expectations, these applications have given composers tools with which to hear their music through playback, quickly extract parts from a full score, and exercise an immense amount of control over the presentation of their music.

As notation software has become as ubiquitous among composers as Photoshop is among professional photographers, it will come as no surprise that the threat of one of these major applications being discontinued is of great concern to many professional composers. To that end, I sent a brief list of three questions to about fifty well-respected composers in the US and UK to get a sense of how this development might affect their creative output. Considering that they were given only two days to respond, receiving twenty replies was impressive, and the results were very much across the board.

To all of these composers, I asked the following questions:

1. Which do you use—Finale or Sibelius?
2. In what ways do you use the software before, during, or after the creative process?
3. What would the ramifications to your own current process be if, for some reason, your notation software were discontinued?

Attitudes on this topic ranged from indifference to horror; depending on how each composer used their software in their creative process, the effect of a discontinuance seemed to be anywhere from a minor annoyance to a DEFCON 1 level upheaval. Out of the twenty composers who responded to my questions, seven use Finale, twelve use Sibelius, and one uses both. Below are some of the responses I received.

***
Chen Yi (Finale)

I use Finale to copy finished scores or to work directly on computer when I arrange my own works for different instrumentation…It will be terribly inconvenient if I can’t open the existing files to make corrections later. I still remember how much time I had to spend in making corrections on my older works in the past.

Clint Needham (Sibelius)

I use the program at all stages of the creative process…ideas usually arise from the piano or from the aether and are quickly plugged into the computer and manipulated.  There is also a fair amount of keyboard time, but because my piano chops are limited, the program really allows me to explore a variety of pitch and rhythmic manipulations to my ideas.  I think this is the case for a number of non-pianist composers.

The beauty of any notation program is the command the composer has on the creation of their own score and parts that are performance ready.  This works for composers at any stage—student to professional.   This also allows us to make changes quickly to the score and parts.  The main ramification for me would be the time and exhausting experience of learning a new notation program.

Jennifer Higdon (Finale)

I do a combination for composing…I do pencil on paper and computer.  Ultimately, it all ends up on the computer in Finale.  And because I run my own publishing house, it means everything is in Finale, and we just print directly from the files (daily orders means everyday use of the program).

This is too scary to think about.  I currently have PDF backups of every work (all the scores and parts…it amounts to thousands of pages), but this would mean that I couldn’t make any changes in the pieces themselves, even when I find mistakes (just last week a performer alerted me to a missing accidental in a string quartet that’s almost ten years old).  The ramifications would be huge.

Steven Stucky (Sibelius)

Only for engraving, usually not as part of the compositional process. For all but the smallest chamber or choral pieces, I write a pencil score that goes to a copyist for input. He uses Finale.

Little or no effect on my composing, since I don’t use the software as part of my compositional process. The occasional exceptions are brief passages of dense textures built up by canonic imitation, in which cut-and-paste in Sibelius can be a more effective way to model the result than simply working it out on paper.

Jason Eckardt (Finale)

The software gives me a degree of control in the production of the final score and parts that I cannot achieve in any other satisfactory way. Since music notation is an inexact translation of abstract imaginary events into a set of instructions, I want to be able to articulate those instructions in the most precise and personal way possible.

While composing, I sketch out a very rough first draft of ideas on paper and then refine them in Finale, adding dynamics, articulations, performance instructions, refining rhythms and pitch formations, and so on. This gives me another filter for my raw ideas and allows me to further objectify my creative impulses so that I may analyze them more effectively. When I am finished with the piece, all of the input is in the file, and the final production is then a question of editing, yet another filter for self-critique.

I find software playback to be annoying at best, but it is useful for getting a sense of the large-scale design and pacing in a way that, for me, is more difficult when reading through the score. I suspect that this is because I am able to further remove myself from the viewpoint of the composer (aware of all of the processes, techniques, designs, and possible inadequacies that exist within the composition) and engage with the piece purely as a listener.

I suppose I would have to learn the new software that would replace what I am currently using. I wouldn’t look forward to that, considering I’ve more or less figured out how to manipulate Finale exactly as I desire.

Paola Prestini (Sibelius)

I use Sibelius after my first draft of writing in order to refine ideas and add elements such as backing tracks or electronics. Sibelius interfaces smoothly with my self-made and preexisting sound banks in Logic and because the electronic angle of my composition is still based on intuitive discovery, this part of my process is crucial for my electroacoustic works.

I’d be majorly slowed down if this aspect of my process was thwarted by change. I’d of course learn the next tool, but it would be an unwanted choice.

Ken Ueno (Finale)

I often work in chunks.  I compose a section, then notate it.  This way, I don’t end up doing the thing I most detest (copying) all together in the end, I get some relief from the concentration of composing by doing some mindless busy work (whilst listening to tunes), and I get to edit the recently composed section.

It would be a real pain, but maybe it will foster some grassroots projects for a platform more natively supportive of new music and its graphical challenges.

Carson Cooman (Sibelius, engraver uses SCORE)

[I use Sibelius] after [the creative process]…Sibelius 7 was such a horrific update that I intend to keep using Sibelius 6 as long as I can still get it to run. So, in that sense I had already felt they’d lost their way, and I was going to stick with the older/better version. There is still a community of professional copyists who use SCORE (which, as a DOS program, must be run in emulation on any modern system), so it may someday become an analogous situation for continuing to run old versions of Sibelius.

Alexandra Gardner (Finale)

I use it extensively during (writing directly to computer and for playback) and after (extracting parts, editing, revising, etc.)…

[Ramifications?] DEVASTATION. I hope no one has to deal with an issue like that. Ever.

David T. Little (Sibelius)

I compose almost entirely in the computer these days, using the traditional pencil and paper only to sketch beforehand, work out details during, or analyze afterward. For many of my acoustic works, the creative process lives largely within this particular software environment.

Well, in a way I’m already behind the times, since I haven’t upgraded to Sibelius 7. (Nor do I plan to.) Sibelius 6 feels very comfortable to me, and I plan to keep using it until I just can’t anymore.  I guess at that point, I will have to figure out something new.

Annie Gosfield (Finale)

I use notation software after the creative process. It’s just a matter of inputting data after a piece is finished, or after a piece is revised.

I’ve used Finale for so long it’s become automatic and intuitive. It would be terrible to have to start at zero with a new application. Over the years I have considered changing to Sibelius, but I’ve always found Finale to be more flexible, so the last thing I wanted was to add the chore of learning a new notation application.

Gabriel Kahane (Sibelius)

Depending on the scale of the piece, I will integrate Sibelius at various stages. For non-orchestral works, my strong preference is to input into Sibelius after having a full draft, though it seems that becomes less and less the way things actually work out. I’d be loath to say that Sibelius is part of my “creative” process, but I certainly depend on it for ease of part-making, etc….

It would be a major bummer if Sibelius were discontinued, though I imagine I’d just use my outdated software forever…

Jason Robert Brown (Finale)

I really just use it as a transcription and copying tool—sometimes I’ll use it to proof piano parts by doing playback, but not much…If it got discontinued, I presume I’d finally learn how to use Sibelius, reluctantly.

Tarik O’Regan (Sibelius)

In one way, I use it in a similar fashion to the way I use Microsoft Word, that is silently! But—different to the way one might use a word processor—I tend not to compose “at the computer.” Rather, I use Sibelius as a transcription tool for ideas that have already been largely worked-out elsewhere. Also, as I work with a publisher, my Sibelius files are never in final form when they leave my computer.

To the creative process, strangely, not too much, I think. However, I imagine the ramification for the print production process would be quite significant. Most importantly, I might be able to start listening to Sibelius symphonies again without thinking AVID is trying to sell me something…

Kristin Kuster (Sibelius)

I use notation software at the end of my writing process. I write by hand first, then notate in Sibelius. Every now and again, if a deadline is fast approaching, I input large completed sections into Sibelius as I go; yet I prefer to get the whole piece down by hand before hitting the computer.

I used Finale for 16 years. After finishing large pieces in Finale, I had a reverb of aching “mousearm” because getting the whole piece down took so long—too much menu drop-downing and mouse dragging. My brain and body don’t want to go back to cumbersome Finale; it’s simply not as smooth as Sibelius.

It is worth noting that a discontinuation of Sibelius would have a broad-reaching, massive impact on music education programs across the country. I estimate nearly ninety percent of our student composers at the University of Michigan use Sibelius as their primary notation software, and many faculty across the UM School of Music, Theatre, and Dance use Sibelius as a teaching tool for a wide variety of purposes.

Kurt Rohde (Sibelius)

It is a very helpful tool for teaching good notation skills, and for quick playback realizations. I have done projects with students whereby we have a single Sibelius file of a piece that everyone works on simultaneously. We pass the file around, make changes/additions, and save multiple versions, allowing us to go back and look at the process that led us to the final composition.

As far as it being helpful for my own process, I am more a sit down and play and write on paper and listen composer. I will rarely use the playback or plugins. That said, it has been helpful for providing MIDI files for pieces that involve dance, movement, theater, so that preliminary workshops and rehearsals can help with the assembly of a piece that otherwise would require the availability of (at the very least) a piano and pianist at the whim of my collaborators. This format makes it possible to do collaborative projects that involve other peoples’ schedules and separation by large distances.

What is very funny about this announcement is that I just got a new MacBook Pro, and realized I needed to get updated Sibelius software. I contacted them asking if I could upgrade to Sibelius 6 (I have not been impressed with version 7), and got a note back the day the announcement came out that I could not do that; I could only go directly to Sibelius 7 (“but this one goes to 11…”).

Oscar Bettison (Sibelius)

I used to write on paper, then transfer everything over to Sibelius, but now it’s more complicated. In the last few years I’ve found myself with a new writing process that involves me going between paper and Sibelius, especially when working with notes. I try to produce as much material as possible, much more than I’ll ever need, and that’s faster to do in the program than on paper.

[Discontinuation?] In a word, disastrous. I write very slowly, and some of that is ameliorated by the software. I can do the bookends of the process (the sketching stage and the parts stage) so much faster with it. Plus the fact that, when working with performers, I can write something, make a .pdf and a midi file, email them both and find out in a matter of hours if what I’ve done works for them or not. If Sibelius was discontinued, it would be like stepping back into the Dark Ages for me.

Alex Shapiro (Sibelius)

I’ve used Sibelius since about 2000, when I was dubbed one of their “ambassadors,” turning others on to the ease and quality of the program. I went straight from hand-copying to Sibelius, without having previously used any other notation software. Thus, were Sibelius to head six feet under to meet its namesake, I’d no doubt have to learn Finale since publishing my works is a significant part of my business.

My creative process—as well as that of my business—is greatly expanded by the use of a notation program. The manner by which I compose a particular piece is dictated by the needs of that project.

If I’m composing an acoustic piece for which it would be helpful to give the performing ensemble a very listenable mock-up to assist their rehearsal process, then I compose in Digital Performer, manipulating high-end samples, record a performance version, then quantize the heck out of a copy, save it as a Standard MIDI File, and import it into Sibelius, where I then make it look like real music on the page. This is a very streamlined process that accomplishes several tasks at once between the two programs.

If I’m composing an electroacoustic work, the process is the same as above, and since I create the accompanying audio tracks in Digital Performer, the added bonus is that I bring not only the MIDI file but also the mixed audio file into Sibelius, which syncs them and allows me to notate a solid road map of the non-instrumental sounds in the score.

If I’m composing a work like my flute quartet, Bioplasm, employing a lot of unusual instrumental techniques that would be nearly impossible to demo, then I input the music directly into Sibelius using only a typing keyboard, since I’m only concerned with the score and parts and how they will communicate my musical intentions.

In all cases, since I’m publishing my music and not only selling it directly, but getting it to distributors around the world, a notation program is the only way to accomplish this. It’s a piece of software that is directly responsible for a notable amount of my income long after the music has been composed.
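The “quantize” step Shapiro mentions, snapping loosely played note timings onto a rhythmic grid before exporting a Standard MIDI File, can be illustrated with a minimal sketch. The resolution (480 ticks per quarter note) and the sixteenth-note grid here are illustrative assumptions; sequencers like Digital Performer do this internally with far more options.

```python
# Quantizing MIDI timing: snap each note's start tick to the nearest grid line.
# Assumes 480 ticks per quarter note (a common Standard MIDI File resolution)
# and a sixteenth-note grid.

PPQ = 480           # ticks per quarter note
GRID = PPQ // 4     # sixteenth-note grid = 120 ticks

def quantize(events, grid=GRID):
    """events: list of (tick, note_number) pairs; returns snapped copies."""
    return [(round(tick / grid) * grid, note) for tick, note in events]

# A slightly "loose" performance: each note lands just off the grid.
performance = [(5, 60), (118, 64), (247, 67), (355, 72)]
print(quantize(performance))  # [(0, 60), (120, 64), (240, 67), (360, 72)]
```

Real quantization features also offer strength, swing, and note-duration handling; this sketch only snaps start times.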

Kevin Puts (Sibelius)

Once my basic ideas are generated by improvising on the keyboard, I use the program during the entire process of composing. I use paper to scrawl down ideas much of the time, but spend no time whatsoever making those ideas legible or coherent on paper. I use Sibelius’s playback feature often as a means of getting a general sense of pacing and “feel,” though I do not rely on this feature to “check orchestration” or any other aspect of composition.

I would be very disappointed if Sibelius were discontinued. Simply on a visual level, I think music notation is extremely beautiful. I love the look of beautifully engraved scores, and without boring you with details, I will say I have very idiosyncratic preferences when it comes to the look of a score. Even before I began using Sibelius in 1999, I was writing my scores meticulously by hand, using templates, stencils, rulers, pens of different thicknesses, and sheets of transfer letters that were rubbed onto the page to create professional-looking text, sometimes typing out blocks of text and cutting and pasting them onto the page. I wanted to emulate the look of hand-written scores by Joseph Schwantner and Christopher Rouse (my teachers at Eastman) and George Crumb (an almost unreachable standard). I had made a feeble attempt several years before this to use the program Score, which, to my eye, produces the most elegant scores of any program, but the learning curve was simply too steep for me, and it seemed to me that Score really amounted to an engraving program rather than a user-friendly composing tool.

Many colleagues of mine at Eastman, and then my first students (at UT Austin), were often using Finale, but—with the very rare exception where the composer was an absolute expert/computer genius with the program and could adjust the defaults to his or her liking—to me the results were completely unsatisfying and looked exactly like what they were: student works. So it wasn’t until Joe Schwantner brought Sibelius to Eastman in 1998 or 1999 and gave a demo to the composers that I was sold. Everything looked immediately beautiful and “right” to me: the shape of the noteheads, the slurs (much like those in Score), the ties (ditto), the spacing, the articulations, even the default text settings. I loved the way you could, with the mouse, manipulate the page as if it were sitting on a desk in front of you. I loved that parts could be generated almost effortlessly, and today this feature is improved to the point that it seems ludicrous to pay someone thousands of dollars to extract a set of orchestral parts when it can be done in two or three afternoons while watching AMC and Comedy Central.

In short, I LOVE Sibelius and I absolutely and positively rely on it. I am fortunate to work with Bill Holab, who is my publishing agent. Bill works with the Sibelius writers to refine the program each time a new version comes out, so he always knows the answer when I get stuck or something goes wrong, which is almost never.

Avid Keeps Sibelius, Employee Confirms UK Office Closure

Avid Technology, the company behind high-end video editing systems and the owner of the Sibelius notation software, announced Monday that it had agreed to sell its consumer audio and video product lines.

Gary Greenfield, CEO of Avid, noted that “by streamlining and simplifying operations, we expect to deliver improved financial performance and partner more closely with our enterprise and professional customers,” but no updates specific to the future of the Sibelius line were addressed in the release.

Avid PR contact Ian Bruce, responding to a direct request for comment, stated, “Specifically on Sibelius, this was not part of the sales we announced this week. Sibelius stays with Avid, and is an important brand and product for us going forward.”

Bruce did not respond to follow-up questions regarding unconfirmed reports that Avid is closing the UK Sibelius office and moving the work to a team in Ukraine to cut costs. He also did not address Avid’s plans for product support during the transition, nor did he comment on the next development release of the software.

Avid bought Sibelius in 2006 from its founders, Ben and Jonathan Finn. On Twitter, Sibelius Senior Product Manager Daniel Spreadbury confirmed the closure of the UK office and that development is moving elsewhere. It is still unknown what this means for Sibelius’s UK staff.

When asked how Sibelius connects to Avid’s stated focus on enterprise and professional customers, Bruce said, “Sibelius is widely used by our Media Enterprise customers (broadcast, film and others), educational customers, and Post/Pro, and is regarded as the de facto professional platform for writing, playing, printing and sharing music.”

On Tuesday Avid conducted a conference call with slides to announce the sales, and subsequently posted the slide deck. The slides outline Avid’s focus on professional post-production customers and a more streamlined set of products. The slides make no mention of Sibelius.

Composers took to social media to share their concerns and request information from the company.

Twitter Discussion of Sibelius

Sibelius Senior Product Manager Daniel Spreadbury, Avid representative Marianna Montague, and composer Melissa Dunphy comment on this week’s Avid news.


When the news broke, composer Melissa Dunphy took to Twitter and was contacted by Avid representative Marianna Montague:
.@avidmarianna Why are we hearing that the UK office has been shut down? These are the people who made Sibelius as wonderful as it is. (Melissa Dunphy)
@mormolyke Sibelius is not part of the sale Avid announced. Sibelius stays w/ Avid and is an important brand & product moving forward. (Marianna Montague)
@mormolyke we have restructured ops including closing some facilities but this shouldn’t be confused w/ our commit 2 products like Sibelius. (Marianna Montague)
@avidmarianna Sibelius users are extremely concerned about this, especially since the UK office was the driving force behind Sibelius. (Melissa Dunphy)
@avidmarianna For many of us, UK developers are our point of contact – people whom we trust & talk to about improvements to the software. (Melissa Dunphy)
@avidmarianna Are you planning on issuing a statement re: what will happen to Sibelius development now that the main office has been closed? (Melissa Dunphy)
A little later in the day, and after an outpouring of supportive tweets from many composers, Sibelius’s Senior Product Manager, Daniel Spreadbury, took to Twitter himself, confirming part of the story.
Thanks to everybody who has expressed concern for me and my colleagues today. It means a lot! (dspreadbury)
@dspreadbury Did Avid divest Sibelius? Their own website and investor call replay doesn’t directly mention it one way or the other. (Matt Erion)
@Stonewing No divestment, but it is closing our office and moving development elsewhere. Future of everybody in our office is unclear. (dspreadbury)
@dspreadbury FWIW, your customer service and all-round awesomeness was a big reason I always raved about Sibelius. Thank you. (Melissa Dunphy)

(reporting contributed by Molly Sheridan and Kevin Clark)
Correction: An earlier version of this article misidentified Daniel Spreadbury as David Spreadbury. Thanks to commenter Hilton Cubbitt for pointing out the mistake. We apologize for the error.