“Automation Divine”: Early Computer Music and the Selling of the Cold War

It was a love song—not what viewers expected, perhaps, who tuned into a July 1956 episode of Adventure Tomorrow, a science documentary program broadcast by KCOP, channel 13, out of Los Angeles. But, then again, it was a love song to a computer. Push-Button Bertha. Sweet machine. What a queen. Jack Owens, the lyricist (and, on that July 1956 episode, the performer), had taken his inspiration from the tune’s composer: a Datatron 205, the room-filling flagship computer of Pasadena-based ElectroData, Inc.

Bertha’s not demanding
Never wants your dough
Always understanding
Just flip a switch and she’ll go

Push Button Bertha score

Just that month, ElectroData had been acquired by the Burroughs Corporation; Burroughs, an adding-machine manufacturer, was buying a ready-made entry into the computer business.[1] The Datatron had been programmed by Martin L. Klein and Douglas Bolitho, a pair of engineers. (Klein also moonlighted as Adventure Tomorrow’s on-air host.) “Push-Button Bertha” wasn’t Datatron’s magnum opus, but rather one of thousands of pop-song melodies the program could spit out every hour. Its inspiration was purely statistical.

We set out to prove that if human beings could write ‘popular music’ of poor quality at the rate of a song an hour, we could write it just as bad with a computing machine, but faster.

In fact, it was a perceived deficit of inspiration that supposedly prompted the project. Klein explained: “[W]e set out to prove that if human beings could write ‘popular music’ of poor quality at the rate of a song an hour, we could write it just as bad with a computing machine, but faster.”[2] Klein and Bolitho went through the top one hundred pop songs of the year, looking for patterns. They came up with three:

1. There are between 35 and 60 different notes in a popular song.
2. A popular song has the following pattern: part A, which runs 8 measures and contains about 18 to 25 notes; part A, repeated; part B, which contains 8 measures and between 17 and 35 notes; part A, again repeated.
3. If five notes move successively in an upward direction, the sixth note is downward and vice versa.

To those principles were added three more timeworn rules:

4. Never skip more than six notes between successive notes.
5. The first note in part A is ordinarily not the second, fourth or flatted fifth note in a scale.
6. Notes with flats next move down a tone, notes with sharps next move up a tone.[3]

The six rules were then put to work via the Monte Carlo method, which had been developed around the speed and indefatigability of the newly invented computer, harnessing the wisdom of a crowd of countless, repeated probabilistic calculations. Fed a stream of random, single-digit integers (which limited the number of available notes to 10), Datatron would test each integer/note against its programmed criteria. If it met every guideline, it was stored in memory; if not, it was discarded, and the program would move on to the next integer. After a few dozen iterations, presto: another prospective hit. Or not. Klein and Bolitho never admitted the program’s rate of success; out of Datatron’s presumably thousands of drafts, only “Push-Button Bertha” saw the light of day.
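The generate-and-test loop described above is easy to sketch in modern terms. The following Python is a loose reconstruction, not the original Datatron program: random digits stand in for the ten available notes, and a candidate is kept only if it passes simplified versions of rules 3 through 5 (the flatted-fifth clause of rule 5 is omitted here, since it has no obvious representative among ten diatonic digits).

```python
import random

# A loose modern reconstruction of Klein and Bolitho's generate-and-test
# scheme -- not the original Datatron code. Random digits 0-9 stand in for
# the ten available notes; a candidate is kept only if it passes the
# (simplified) rules, otherwise it is discarded and the next digit drawn.

def acceptable(melody, candidate):
    """Test a candidate note against simplified versions of rules 3-5."""
    if not melody:
        # Rule 5: the opening note is ordinarily not the second or fourth
        # scale degree (zero-based here; flatted-fifth clause omitted).
        return candidate not in (1, 3)
    # Rule 4: never skip more than six notes between successive notes.
    if abs(candidate - melody[-1]) > 6:
        return False
    # Rule 3: after five notes moving in one direction, turn around.
    if len(melody) >= 5:
        last5 = melody[-5:]
        rising = all(b > a for a, b in zip(last5, last5[1:]))
        falling = all(b < a for a, b in zip(last5, last5[1:]))
        if rising and candidate >= melody[-1]:
            return False
        if falling and candidate <= melody[-1]:
            return False
    return True

def phrase(length=20, seed=None):
    """Monte Carlo-style melody: draw random digits, keep the survivors."""
    rng = random.Random(seed)
    melody = []
    while len(melody) < length:
        candidate = rng.randrange(10)   # one random single-digit integer
        if acceptable(melody, candidate):
            melody.append(candidate)    # stored in memory...
        # ...otherwise discarded; move on to the next integer
    return melody

print(phrase(20, seed=1))
```

Most drawn digits fail some test and are thrown away, which is the acceptance-rejection pattern at the heart of the Monte Carlo approach; rules 1 and 2 (total note counts and the AABA form) would be enforced by stitching phrases like these together.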

Push-Button Bertha

Piano version performed by Matthew Guerrieri

Datatron 205

The song took its place in a small but growing repertoire of computational compositions. 1956 was a banner year for statistically designed, computer-generated music. A team of Harvard graduate students, including Frederick P. Brooks, Jr. (who would go on to lead the design of IBM’s famed System/360 mainframes), programmed Harvard’s Mark IV computer to electronically analyze and then generate common-meter hymn tunes. (“It took us three years to get done,” Brooks later remembered, “but we got stuff you could pass off on any choir.”[4]) Lejaren Hiller and Leonard Isaacson used the University of Illinois’ ILLIAC I machine to create a string quartet, its movements tracing music history from basic counterpoint to modern speculation; portions of the Illiac Suite were premiered on August 9, 1956, only a month after the TV debut of “Push-Button Bertha.”

“Push-Button Bertha” was putting a cloak of high-minded research around that most hallowed of American art forms: a sales pitch.

“Push-Button Bertha” was a curiosity, but it reveals something particular about the early days of computer music in the United States. The Harvard and Illinois efforts were research and experimentation that were at least nominally driven by curiosity and the prospect of expanding academic knowledge. But all three were, in part, justifications of more hard-nosed concerns. When Owens sang of Bertha never wanting your dough, he shaded the truth a bit; Bertha wanted quite a bit of dough indeed. A Datatron 205 computer cost $135,000, and that didn’t include necessities such as a control console, punched-card input and output equipment, or magnetic tape storage. Nor did it include desirable extras, such as the capability to do floating-point calculations—that alone required an additional $21,000, more, at the time, than the median price of a house in the United States.[5] The Burroughs Corporation needed to justify the price tag of its newest product line. “Push-Button Bertha” was putting a cloak of high-minded research around that most hallowed of American art forms: a sales pitch.


Adventure Tomorrow Syndication ad, 1959

Off the air, Martin Klein wasn’t employed by ElectroData or Burroughs; at the time, he worked for Rocketdyne, North American Aviation’s rocket and missile division, based in the San Fernando Valley. The Brooklyn-born Klein had initially pursued music—as a teenager, he composed and copyrighted (though did not publish) a piano-and-accordion number entitled “Squeeze-Box Stomp”—but instead took up science. He did graduate work at Boston University, earning a master’s and a Ph.D.; his master’s thesis in particular (on methods of using optical refraction to measure the flow of air around high-speed objects—supersonic planes or missiles, say) foreshadowed his work at Rocketdyne, which was largely devoted to designing circuits to convert rocket-engine test-firing data into forms that a computer could analyze.[6]

But that hint of a performing career would eventually resurface. In 1956, Klein began to spend his weekends on television. Saturdays brought Wires and Pliers, in which Klein and his North American Aviation colleague Harry C. Morgan (with the help of electrical engineer Aram Solomonian) showed viewers how to assemble simple electronic circuits and gadgets.[7] (The show’s sponsor, the Electronic Engineering Company of California, conveniently sold kits containing the necessary components for each project.) On Sundays, Adventure Tomorrow promoted technological optimism by way of the latest advances from California’s rapidly expanding electronics and defense industries. Burroughs needed a showcase for its technology; Klein needed technology to showcase.

Klein had demonstrated a flair for technologically enhanced promotion. His first efforts with the Datatron were intended to spotlight Pierce Junior College, where Klein was an instructor. In December 1955, Klein had the computer predict the winners of New Year’s Day college football bowl games; it got four out of five correct. News reports made sure to mention that Klein was teaching computer design at Pierce, “one course of a whole program in electronics offered by the college preparing men for occupations in this industry, so vital to our country’s defense.”[8] By April, Klein, under the auspices of Pierce and backed by several electronics-industry sponsors (including ElectroData), was on the air every week. Wires and Pliers didn’t last long, but Adventure Tomorrow did. From the beginning, Adventure Tomorrow was a cheerleader for the latest military technology—“the wondrous world of missiles, jets, and atomic projects,” as a later advertisement for the program put it. It was in that spirit that Klein and Bolitho went to work extracting a bit of publicity-friendly frivolity from the Datatron 205.

If Klein knew how to engineer attention, Bolitho’s specialty was wrangling the machines. Early computers were a forest of hard-wired components, fertile ground for capricious behavior. Bolitho’s rapport with the finicky beasts was legendary. His ability to coax computers into reliability eventually led him to be tasked with leading prospective customers on tours of Burroughs’ Pasadena plant. “He had some kind of magical quality whereby he could walk up to a machine that was covered with cobwebs and dust and turn it on and that thing would work, even if it had been broken for years,” a fellow engineer remembered.[9]

Depending on his audience, Klein would oscillate between extolling computers as inhumanly infallible and comfortingly quirky. Explaining the basics of the new tools to readers of Instruments and Automation magazine, he lauded “the advantage of automatic control over control by human operators where human forces are constantly at work to disrupt the logical processes.” But, recounting the genesis of “Push-Button Bertha” for Radio Electronics—a magazine aimed more at hobbyists and amateurs—Klein struck a more whimsical note, echoing (deliberately or not) the Romantic stereotype of the sensitive, temperamental artist:

The words “electronic digital computer” immediately conjure up a picture of a forbidding, heartless device. Those of us who design computing machinery know this isn’t true. Computing machines have very human characteristics. They hate to get to work on a cold morning (we call this “sleeping sickness”). Occasionally, for unexplainable reasons, they don’t work the same problem the same way twice (we say, then, that the machine has the flu).[10]

Klein’s joke turns a little more grim knowing that, by 1961, the United States military was using no fewer than sixteen Datatron 205 computers at twelve different locations—including the Edgewood Arsenal at the U.S. Army’s Aberdeen Proving Grounds in Maryland, where the technology that produced “Push-Button Bertha” was instead used to calculate simulated dispersal patterns for airborne chemical and biological weapons.[11]

Computer Music from the University of Illinois

All the composing computers, in fact, were military machines. ILLIAC, for instance, was a copy of a computer called ORDVAC, built by the University of Illinois and shipped to the Aberdeen Proving Grounds to calculate ballistics trajectories. Hiller and Isaacson had first learned their way around ILLIAC and the Monte Carlo method trying to solve the long-standing problem of determining the size of coiled polymer molecules—a problem of more than passing interest to the United States government, which funded the research as part of a program to develop and improve synthetic rubber production.[12] (It was Hiller, who had coupled his studies of chemistry at Princeton with composition lessons with Milton Babbitt, who realized the same mathematical technique could be applied to musical composition.)

Computational composition in the United States got its start, quite literally, in the off-hour downtime of the military-industrial complex.

Harvard’s Mark IV was the last in a group of computers designed by Howard Aiken. The Mark I had helped work out the design of the first atomic bombs; Marks II and III were built for the U. S. Navy. The Mark IV, which had produced all those hymn tunes, had been funded by the U. S. Air Force; it worked out guided-missile flight patterns and helped design lenses for the U-2 spy plane.[13] The Harvard computers, it turned out, ran more reliably if they were never turned off; Aiken duly assigned Peter Neumann, a music-loving graduate student, to watch over the Mark IV from Friday night until Monday morning. Student projects—hymn-tune-generation included—happened on the weekends.[14] Computational composition in the United States got its start, quite literally, in the off-hour downtime of the military-industrial complex.

For a few years, American computer-music researchers may have looked with jealousy across the Atlantic, to Paris and Cologne and the fledgling, dedicated electronic-music studios that had blossomed under the aegis of government-supported radio stations. But there are suggestions that those European efforts, too, emerged out of a nexus of technology and defense.


The origin story of the famous WDR electronic-music studio in Cologne, for instance, starts with an American visitor. In 1948, American scientist Homer Dudley visited Germany, bringing along his invention, the vocoder; the device made a crucial impression on Werner Meyer-Eppler, who would later help create the WDR studio—and whose students would include the studio’s most famous denizen, Karlheinz Stockhausen.

Bell Labs ad, 1950

Dudley was an employee of Bell Labs, one of the great 20th-century American research-and-development shops, a hive of telecommunications innovation. The vocoder had originally been developed as part of investigations into shrinking the bandwidth of telephone signals, in order that more messages might travel over the same wires. But, especially with the onset of war, the work at Bell Labs was increasingly aligned with the desires of government. The vocoder had been pressed into wartime service as the backbone of SIGSALY, the Allied system that successfully masked high-level phone conversations from German eavesdropping, and which practically introduced numerous features of the modern digital communications landscape: compression, packet-switching, electronic key encryption.[15] (The keys for SIGSALY were stretches of electronically generated random white noise, pressed onto matched pairs of phonograph records, each pair being destroyed after a single use.)

One wonders if Meyer-Eppler had been targeted for recruitment into the development of SIGSALY’s sequels; after all, so much of the WDR studio’s work seemed aligned with and adaptable to the sort of research that Bell Labs was pursuing in the wake of its wartime work. Think of one of the WDR studio’s most celebrated productions, Stockhausen’s Gesang der Jünglinge; if the work’s combination of a transmitted human voice and electronic noise recalls SIGSALY, the way it deconstructs, processes, and reassembles that voice, the way it filters the sounds through various statistical screens—it practically outlines a research program for next-generation voice and signal encryption.

At the very least, the new music triangulated Dudley’s sonic manipulation with two other innovations, the transistor and Claude Shannon’s new information theory; all three had emerged from Bell Labs, which would also birth Max Mathews’s pioneering MUSIC software—all the while pursuing numerous military and defense projects. In later years, Bell Labs would consult on the formation of IRCAM, Pierre Boulez’s hothouse of computer music in Paris.[16]

But IRCAM, envisioned as a seedbed, was instead an endpoint, at least in terms of the sort of institutional computing that, in its interstices, had provided a home for early computer music. Already the future was in view: a computer in every home, a chip in every device, casual users commandeering the sort of processing power that the builders of the UNIVACs and the ORDVACs and the Datatrons could barely imagine. (The year IRCAM finally opened, 1977, was the same year that the Apple II was introduced.) Even the output of those institutions—for instance, Max/MSP, the descendant of an IRCAM project—was destined for laptops.

The sounds of the post-war avant-garde were never far, in concept or parentage, from the technological needs of the Cold War.

Surrounded by the surfeit of computation, it is hard to imagine the scarcity that led those first computer musicians to a marriage of convenience with the military and national-security bureaucracies—a marriage convenient to both sides. But to understand that give-and-take is to understand something about the nature of music in the middle of the 20th century, the technocratic faith that came to inform so many aspects of the culture. The sounds of the post-war avant-garde were never far, in concept or parentage, from the technological needs of the Cold War.

And what of the composer of “Push-Button Bertha”? Even as it became obsolete, the Datatron 205, with its blinking console and spinning tape drives, enjoyed a long career as a prop in movies and television, lending a technologically sophisticated aura to everything from Adam West’s television Batcave to Dr. Evil’s lair in the Austin Powers movies. That, too, may have been a result of Klein and Bolitho’s public-relations stunt. Only a few months after the Adventure Tomorrow premiere of “Push-Button Bertha,” producer Sam Katzman, a veteran impresario of low-budget genre movies, gave the 205 its big-screen debut, going to the Datatron’s Pasadena factory home to film scenes for a science-fiction production called The Night the World Exploded. In the movie, the 205—mentioned prominently, by name, in dialogue and narration—is used to determine just how long before a newly discovered and volatile “Element 112” works its way to the earth’s surface and destroys the planet. The Datatron had returned from its pop-song holiday to a more familiar role for the era’s computers: calculating the end of the world.[17]

Night the World Exploded poster


  
1. The founder of the Burroughs Corporation, William Seward Burroughs I, was the grandfather of the Beat writer William S. Burroughs. In a 1965 interview, the younger Burroughs gave his opinion of computational art:

“INTERVIEWER: Have you done anything with computers?
BURROUGHS: I’ve not done anything, but I’ve seen some of the computer poetry. I can take one of those computer poems and then try to find correlatives of it—that is, pictures to go with it; it’s quite possible.
INTERVIEWER: Does the fact that it comes from a machine diminish its value to you?
BURROUGHS: I think that any artistic product must stand or fall on what’s there.”

(See Conrad Knickerbocker, “William Burroughs: An Interview,” The Paris Review vol. 35 (1965), pp. 13-49.)

  
2. Martin L. Klein, “Syncopation by Automation,” Radio Electronics, vol. 28, no. 6 (June 1957), p. 36 (article on pp. 36-38).

  
3. Ibid., p. 37.

  
4. F. P. Brooks, A. L. Hopkins, P. G. Neumann, W. V. Wright, “An experiment in musical composition,” IRE Transactions on Electronic Computers, vol. EC-6, no. 3 (September 1957). See also Grady Booch, “Oral History of Fred Brooks,” Computer History Museum Reference number: X4146.2008 (http://archive.computerhistory.org/resources/access/text/2012/11/102658255-05-01-acc.pdf, accessed September 18, 2018).

  
5. Datatron prices from Tom Sawyer’s Burroughs 205 website (http://tjsawyer.com/B205prices.php, accessed September 10, 2018). In 1957, the median home price was approximately $17,000, as calculated from Robert Shiller’s archive of historical home prices (http://www.econ.yale.edu/~shiller/data/Fig3-1.xls, accessed September 10, 2018).

  
6. Martin L. Klein, The Determination of Refractive Indices of Dynamic Gaseous Media by a Scanning Grid, M.A. Thesis, Boston University (1949). Klein’s doctoral thesis was on zone plate antennae—forerunners of the modern flat versions used for HD television.

  
7. “TV Show Features ‘Wires and Pliers,’” Popular Electronics, vol. 4, no. 4 (April 1956), p. 37.

  
8. “Pierce College Teacher Picks Sports Results,” Van Nuys Valley News, January 10, 1956, p. 10.

  
9. Richard Waychoff, Stories about the B5000 and People Who Were There (1979), from Ed Thelen’s Antique Computers website (http://ed-thelen.org/comp-hist/B5000-AlgolRWaychoff.html, accessed September 10, 2018).

  
10. Martin L. Klein, Harry C. Morgan, and Milton H. Aronson, Digital Techniques for Computation and Control (Instruments Publishing Co.: Pittsburgh, 1958), p. 9; Klein, “Syncopation by Automation,” p. 36.

  
11. For the 205’s usage within the military, see Martin H. Weik, A Third Survey of Domestic Electronic Digital Computing Systems (Public Bulletin no. 171265, U.S. Department of Commerce, Office of Technical Services, 1961), p. 145. For the 205 at Edgewood, see Arthur K. Stuempfle, “Aerosol Wars: A Short History of Defensive and Offensive Military Applications, Advances, and Challenges,” in David S. Ensor, ed., Aerosol Science and Technology: History and Reviews (RTI Press: Research Triangle Park, NC, 2011), p. 333.

  
12. See, for instance, F. T. Wall, L. A. Hiller, Jr., and D. J. Wheeler, “Statistical Computation of Mean Dimensions of Macromolecules. 1” The Journal of Chemical Physics, vol. 22, no. 6 (June 1954), pp. 1036-1041; F. T. Wall, R. J. Rubin and L. M. Isaacson, “Improved Statistical Method for Computing Mean Dimensions of Polymer Molecules,” The Journal of Chemical Physics, vol. 27, no. 1 (January 1957), pp. 186-188. The University of Illinois had received $135,000 from the National Science Foundation for research into synthetic rubber, the largest such grant given to a university under the NSF’s synthetic rubber program; Special Commission for Rubber Research, Recommended Future Role of the Federal Government with Respect to Research in Synthetic Rubber (National Science Foundation: Washington, D. C., December 1955), p. 9.

  
13. As it turned out, Aiken’s conservative design left the Mark IV significantly slower than other computers; James G. Baker, who ran the Harvard group researching automated lens design, grew frustrated with the speed of the Mark IV (and his access to it), eventually switching to an IBM mainframe at Boston University. See Donald P. Feder, “Automated Optical Design,” Applied Optics vol. 12, no. 2 (December 1963), p. 1214; Gregory W. Pedlow and Donald E. Welzenbach, The Central Intelligence Agency and Overhead Reconnaissance: The U-2 and OXCART Programs, 1954-1974 (Central Intelligence Agency: Washington, D.C., 1992), p. 52 (declassified copy at https://www.cia.gov/library/readingroom/docs/2002-07-16.pdf, accessed September 12, 2018).

  
14. John Markoff, “When Hacking Was In its Infancy,” The New York Times, October 30, 2012.

  
15. For an historical and technical overview of SIGSALY, see J. V. Boone and R. R. Peterson, “SIGSALY—The Start of the Digital Revolution” (2016) (at https://www.nsa.gov/about/cryptologic-heritage/historical-figures-publications/publications/wwii/sigsaly-start-digital.shtml, accessed September 17, 2018).

  
16. Robin Maconie has speculated on the implications of the Bell Labs connections in a pair of articles: “Stockhausen’s Electronic Studies I and II” (2015) (at http://www.jimstonebraker.com/maconie_studie_II.pdf, accessed September 17, 2018), and “Boulez, Information Science, and IRCAM,” Tempo vol. 71, iss. 279 (January 2017), pp. 38-50.

  
17. For the 205’s film and TV history, see the Burroughs B205 page at James Carter’s Starring the Computer website (http://starringthecomputer.com/computer.php?c=45, accessed September 12, 2018). The Night the World Exploded, written by Jack Natteford and Luci Ward, and directed by Fred F. Sears, was released by Columbia Pictures in 1957.

The Opposite of Brain Candy—Decoding Black MIDI

Niche music genres are nothing new. They existed before hipsters, before Stravinsky, and before Mozart. However, in the last two decades there has been a blossoming of niche music genres, made possible by technological advancements such as personal computers and Digital Audio Workstations, as well as decreasing costs to build home studios and widespread use of the internet. As more and more people create music, they are less and less beholden to the genre-defining artists of the status quo. The result is the emergence of countless niche genres, each with its own unique following.

Perhaps one of the most fascinating niche genres to recently surface is Black MIDI. Created by self-proclaimed “blackers,” Black MIDI exists almost exclusively on YouTube in the English-speaking world, with total video views numbering in the millions while total subscribers for teams (groups of blackers who collaborate on Black MIDI tracks) remain fewer than 50,000. Black MIDI is presented on YouTube as a video recording of a MIDI file containing millions of individual notes played back through a sequencer.

The term “Black MIDI” refers to the moments in a piece where the notes, if displayed on a traditional two-stave piano score, are so dense that there appears to be just a mass of black noteheads. The increased density of notes also affects the computer, which is sometimes unable to process all of the notes within a particularly complex section. The goal of Black MIDI is to approach this processing failure without actually crossing that line. “We try to make it insane—but not too insane,” says Jason Nguyen, the person behind the major Black MIDI distribution YouTube channel Gingeas.

The origin of Black MIDI can be traced back to Japan in 2009, when the first blacker, Shirasagi Yukki @ Kuro Yuki Gohan, created the genre’s inaugural track and uploaded it to the Japanese video site Nico Nico Douga. The piece is based on U.N. Owen Was Her?, the extra-boss theme from the Touhou Project, a series of vertically scrolling Japanese shooter video games. The use of Japanese video game music has remained a hallmark of Black MIDI ever since.

For the next couple of years, Black MIDI spilled over from Japan into China and Korea, where it continued to grow. It was not until 2011 that the genre took off in the West, the first major hit being an upload by YouTube user Kakakakaito1998. Typical of Black MIDI’s early style, the video features a traditionally notated two-stave piano score rather than a MIDI piano scroll alone.

Once Black MIDI made its way to the West, it was not long before blackers began refining the creation and presentation of their niche form of art. Blackers sought to solidify their identity, which led to the creation of Guide to Black MIDI and Impossible Music Wiki, the latter of which was created by Nguyen and the other blackers with whom he frequently collaborates. Both sites serve as an introduction to and codification of Black MIDI.

Blackers also began pushing the limits of their art, adding more notes (numbering in the millions) and making the visual presentation as important as the sonic presentation. Black MIDI became a marriage of visuals and sound, a cascade of colors and patterns paired with an ordered complexity of notes. While the popular songs of choice remained music from Japanese video games, blackers also started making black MIDIs based on recent pop songs.

As computer-processing power increased, Black MIDIs also became larger and included more notes than before. In addition, much of the software was updated to 64-bit, allowing it to address more RAM and play back even larger files. The continued growth and evolution of technology also allowed blackers to develop tricks to fill their videos with more notes.

“My videos are edited for no lag,” says Nguyen. “They aren’t real-time: I record the MIDI program slowed down, and then speed it up in a video editor.” This technique takes less of a toll on computer processing power and RAM.

In addition to software and visual changes in Black MIDI in the West, English-speaking blackers established their own team, BMT (Black MIDI Team). Teams, including BMT, consist of a number of blackers who serve various roles, from blackening songs to creating the videos and hosting them on YouTube. This collaboration creates a virtual production and distribution chain that ensures blackers get their work out to as many people as possible through several main YouTube accounts—including Gingeas—while also being credited for their work. Additionally, while BMT is separate from the other major teams that exist in China and Korea, they frequently collaborate with each other on videos and MIDI tracks.

The lack of a major Japanese team brings up an interesting observation: Black MIDI has since disappeared from Japan where it originated. According to Nguyen, Japanese blackers “are analogous to those TV shows where there’s a mysterious founder of a civilization that is not really known throughout the course of the show.” The Japanese blackers have now assumed this role of a silent creator. Although the forebears of Black MIDI are long gone, the Black MIDI community has spread around the globe and is thriving.

One can’t help but draw comparisons between Black MIDI and Conlon Nancarrow’s studies for player piano. Both Nancarrow and blackers have tested the possibilities of note density in their pieces, creating astounding polyrhythms and textures in the process. In addition, the method of note entry is essentially the same between the two. However, Nancarrow’s medium was acoustic while the blackers’ is digital. In some regards, black MIDI could be construed as the 21st century’s response to Nancarrow.

Despite this apparent connection to Nancarrow, the Guide to Black MIDI claims that no such link exists and that Black MIDI evolved independently: “We believe that references to Conlon Nancarrow and piano rolls are too deep and black midi origins must be found in digital MIDI music world” [sic]. Notwithstanding the blackers’ contentions, there are obviously significant similarities between Nancarrow and Black MIDI.

More recently, other artists have been creating music from a combination of both Nancarrow’s acoustic techniques and the blackers’ digital techniques to achieve intricate musical effects. For example, electronic composer Dan Deacon has written multi-layered player piano tracks that create an acoustic sound more complex than Nancarrow’s, made possible only through the addition of modern MIDI technology and a Digital Audio Workstation. While Deacon’s style is entirely different from both Nancarrow’s and the blackers’, the techniques he employs remain the same.

Though only one of many niche music genres that are internet-exclusive, Black MIDI stands out as unique. The simple melodies and tonal harmonies combined with the possibility of near or total computer processing failure are captivating. Additionally, Black MIDI’s connection to visual art adds a third dimension that makes the art form even more engaging. For a genre that has only existed for six years, it is difficult to tell where black MIDI is headed or where its influence will plant its seed, but for the time being I’ll enjoy the ride and listen to this along the way.

New Music and Globalization 2: Networked Music

Interface for John Roach and Willy Whip’s Simultaneous Translator

Globalization today is almost synonymous with the internet. The net has not only enabled globalization’s modern-day form, but also, in its early days at least, served as its ideological model: thanks to the network, the world would become one of universal access to culture and resources, flat hierarchies, and the smooth and unimpeded flow of information.

That ideology spilled over into the first experiments in net-art and net-music in the early 1990s. As musicians began to experiment with the net itself, however, it soon became apparent that these hopes were far from realized or, in some cases, actually unattainable. In this post, I want to take a look at how networked music has addressed those early ideals and come to terms with their shortcomings. The question of information flow has received the most attention, so I will focus on it first.

It didn’t take long before the ideal of unimpeded global communication ran up against the realities of bandwidth, buffering speeds, and browser capabilities.[1] And even if those limitations were one day engineered away, we can’t get around basic physics. As Álvaro Barbosa has calculated,[2] even if data were to travel at its maximum speed, the speed of light, across a perfect network with unlimited bandwidth, the delay between two antipodal points on the globe would be at least 65 msec, more than three times the roughly 20-msec threshold below which the human ear hears two events as simultaneous. This has a direct impact on music: the tight synchronization an orchestra easily achieves on stage is simply not possible between geographically remote performers. Because such synchronicity is a founding value of most “good” musical performances, it was soon clear that net music would have to build itself on new foundations.
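Barbosa’s figure is easy to verify with back-of-envelope arithmetic: half the Earth’s equatorial circumference divided by the vacuum speed of light.

```python
# Back-of-envelope check of Barbosa's latency floor: a signal crossing
# half the Earth's circumference at the speed of light.
EARTH_CIRCUMFERENCE_M = 40_075_000      # metres, equatorial
SPEED_OF_LIGHT_M_S = 299_792_458        # metres per second

antipodal_delay_ms = (EARTH_CIRCUMFERENCE_M / 2) / SPEED_OF_LIGHT_M_S * 1000
print(f"{antipodal_delay_ms:.1f} ms")   # about 66.8 ms
```

Real-world routing, switching, and buffering push actual delays far higher, which is why the figure is a hard floor rather than an estimate.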

Bill Duckworth’s Cathedral in its first incarnation

Latency—the unavoidable delay of a networked system—is now accepted by net musicians, and either worked around or incorporated as a feature of their art. The Japanese-American artist Atau Tanaka is among those who have given it the most thought. In works like NetOsc (1999), written for the Sensorband trio of himself, Edwin van der Heide, and Zbigniew Karkowski, he treats latency as the “acoustic resonance” of the net, analogous to the resonance of a cathedral. (The analogy inadvertently points to one of the first, and most acclaimed, works of net-music, William Duckworth’s Cathedral, launched in 1997.) John Roach and Willy Whip’s Simultaneous Translator (2007) uses live data about the lag of router-to-router transactions across the net to shape certain musical parameters. Of course, by the 1990s musical innovations in open form, free improvisation, aleatory, parametrical composition, acousmatic music, and so on were well enough bedded in for there to be no real need for dramatic innovations in musical form or technique. The battles that made networked music aesthetically possible, at least, had all been fought.
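
The principle behind a piece like Simultaneous Translator can be sketched as a simple mapping from measured network lag to a musical parameter. The function below is purely my own illustrative assumption, not Roach and Whip’s implementation: it clamps a lag reading and scales it linearly into a note duration.

```python
def lag_to_duration(lag_ms: float,
                    min_dur: float = 0.1,
                    max_dur: float = 2.0,
                    max_lag_ms: float = 500.0) -> float:
    """Map a router-to-router lag reading (ms) to a note duration (seconds).

    Longer lag -> longer notes; readings are clamped to [0, max_lag_ms]
    so that outliers do not stall the music.
    """
    clamped = min(max(lag_ms, 0.0), max_lag_ms)
    return min_dur + (clamped / max_lag_ms) * (max_dur - min_dur)

# A 250 ms lag lands halfway through the duration range:
print(lag_to_duration(250.0))  # 1.05
```

In a live setting, a stream of such readings would continuously re-shape whatever parameter the composer has chosen — duration, density, filter cutoff, and so on.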

When it comes to considering the web as a flat, non-hierarchical landscape, an important early work is Randall Packer’s Telemusic #1 (2000), made in collaboration with Steve Bradley, Gregory Kuhn, and John Young. As Packer describes it, one of the principal concerns of Telemusic #1 was “dissolving the spatial and temporal constraints of the performance environment and transforming the World Wide Web into an unseen ensemble of audience participants.” Performance of the work took place in two spaces simultaneously: the physical performing venue and cyberspace. Online participant-listeners could navigate a 3D Flash environment populated by bits of text. Clicking on a text would trigger an audio sample of those words, which would be processed live and projected or streamed back into the physical and virtual spaces. (In the physical space, these texts were spoken aloud.) Participant-listeners would then hear a composite of their own activities and those of everyone else participating in the work. The idea was developed further in Telemusic #2 (2001). Here, each participant-listener’s IP address was used to create a unique sonic identifier, making it possible to hear the virtual space as a plurality of individuals rather than an undefined homogeneity.
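
The Telemusic #2 idea of giving each online participant an audible identity can be sketched by hashing an IP address into a stable set of sound parameters. The mapping below is entirely my own assumption for illustration; Packer’s actual method is not documented here.

```python
import hashlib

def sonic_identity(ip_address: str) -> dict:
    """Derive a stable, distinctive set of sound parameters from an IP address.

    Hashing guarantees the same address always maps to the same identity,
    while different addresses almost always sound different.
    """
    digest = hashlib.sha256(ip_address.encode("ascii")).digest()
    return {
        "midi_pitch": 36 + digest[0] % 49,  # one of 49 pitches, roughly C2-C6
        "pan": digest[1] / 255,             # stereo position, 0.0 (L) to 1.0 (R)
        "timbre": digest[2] % 8,            # index into 8 hypothetical sample banks
    }

# The same address always yields the same identity:
assert sonic_identity("192.0.2.1") == sonic_identity("192.0.2.1")
```

Any stable per-participant key would work the same way; the point is that the virtual audience stops being an undifferentiated mass and becomes a chorus of distinguishable voices.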

Packer’s Telemusic pieces: Screenshot of texts on the ‘telematic condition’

In net-works like these (and these are just two, relatively early, examples among many), art becomes about constructing an environment that the user enters, rather than delivering a precisely conceived message.[3] Packer expresses his own vision of this future utopia as follows: “Innovations in multi-user gaming, chat rooms, teleconferencing, MUDS and so on, point to new opportunities for radically new compositional forms for the online experience, a form of music that is no longer dependent on the location of the audience member, or even the location of the performance space.”[4] This is analogous to the shift that took place in site-specific art over the same period, from the 1970s and ’80s to the 2000s, although in music’s case it was driven as much by technological expediency as political critique.

Packer’s vision is seductive, and it may well come to pass in some form. (Although it is interesting that, more than a decade on, it hasn’t, in spite of the accelerating pace of technological change.) He is not alone in thinking it. Yet his vision is built upon the principle of technological sophistication, and therefore privilege. This brings me to the final ideal that I listed at the start of this post: universal access. This has proved the hardest issue to resolve, and appears to have received the least attention, at least among those artists I am aware of. The fact is that engaging in net art—whether as participant or creator—requires certain privileges that remain far from universal: a good-quality computer and a high-speed internet connection (or access to a relatively well-appointed gallery where such a work may be installed), sufficient leisure time, and the physical abilities to work whatever interface there may be.

It is no coincidence that, despite the ideology of access, net art’s key developments remain centered on North American and European research institutes like CCRMA at Stanford and SARC in Belfast, Northern Ireland. Until net composers address this question, net music must surely remain an incomplete metaphor for contemporary globalization.

*

[1] Helen Thorington’s article “Breaking out: The trip back,” published in Contemporary Music Review, 24(6) (2005), pp. 445–58, gives an excellent survey of the early history of networked music. An indication of the technical challenges connected to networked music can be gained by reading the publications of the SoundWIRE research group at Stanford’s Center for Computer Research in Music and Acoustics. See: https://ccrma.stanford.edu/groups/soundwire/publications/.
[2] Á. Barbosa. “Displaced Soundscapes: A Survey of Network Systems for Music and Sonic Art Creation.” Leonardo Music Journal, 13 (2003), pp. 53–9.
[3] See Pierre Lévy. “The Art and Architecture of Cyberspace.” Multimedia: From Wagner to Virtual Reality, ed. R. Packer and K. Jordan (New York: Norton, 2001).
[4] Randall Packer. “Composing with Media: Zero in Time and Space.” Contemporary Music Review, 24(6) (2005), pp. 509–25, at p. 524.

Laurie Spiegel: Grassroots Technologist

A conversation in Spiegel’s Lower Manhattan loft
September 9, 2014—3:00 p.m.
Video presentation and photography by Molly Sheridan and Alexandra Gardner
Transcription by Julia Lu

People often speak about computers and technology as though these things are completely antithetical to nature and tradition, though this is largely a false dichotomy. Electronic music pioneer Laurie Spiegel began her musical life as a folk guitar player and has never abandoned that music. But she fell in love with machines the first time she saw a mainframe tape-operated computer at Purdue University, on a field trip there with her high school physics class, and she has been finding ways to humanize them in her own musical compositions and software development ever since. She sees a lot of common ground between the seemingly oppositional aesthetics of folk traditions and the digital realm. In fact, when we met up with her last month in her Lower Manhattan loft, crammed full of computers, musical instruments, and toys of all sorts, she frequently spoke about how, in her worldview, the computer is actually a folk instrument.

“The electronic model is very similar to the folk model,” she insists. “People will come up with new lyrics for the same melody, or they’ll change it from a ballad to a dance piece. Nobody can remember what the origin is. There is no single creator. … In the way that electronic sounds go around—people sample things, they do remixes or sampling, they borrow snatches of sound from each other’s pieces—the concept of a finite fixed-form piece with an identifiable creator that is property and a medium of exchange or the embodiment of economic value really disappears … in similar ways. … Prior to electronic instruments, you had to go through the bottleneck of written notation. So electronic music did for getting things from the imagination to the ears of an audience what the internet later did for everybody being able to self-publish, democratizing it in ways that obviously have pros and cons.”

A realist as well as an idealist, Spiegel is well aware of the cons as well as the pros of our present digitally saturated society. “[W]hen I was young,” she recalls, “you had a great deal of time to focus on what was happening in your mind and information could proliferate, amplify itself, and take form in your imagination without that much interruption from outside. … Our culture is at this point full of people who are focused outward and are processing incoming material all the time. Would somebody feel a desire to hear a certain kind of thing and go looking for it? Would they hear something inside their head and want to hear it in sound? It seems that people are fending off a great deal now. The dominant process is overload compensation: how can I rule out things that I don’t want to focus on so that I can ingest a manageable amount of information and really be involved in it? Information used to be the scarce commodity. Attention is now the scarce commodity.”

The imagination is very important to Spiegel. It is what has fueled her pioneering sonic experiments such as her haunting microtonal Voices Within: A Requiem from 1979 or her landmark 1974 Appalachian Grove, created at Bell Labs soon after she returned from the mountains of western North Carolina, where she had traveled with “my banjo over one shoulder and my so-called ‘portable’ reel-to-reel tape recorder over the other shoulder, listening to and enjoying older music and the culture that comes from early music.” It is also why she created the Music Mouse computer software, a tool that transformed early personal computers such as the Mac, Atari, and Commodore Amiga into fully functional musical instruments and idea generators for musical compositions. It also led her to create a realization of Johannes Kepler’s “Music of the Spheres,” the 17th-century German astronomer’s conversion of planetary motion into harmonic ratios; this electronic score and a song by Chuck Berry are the only music by living composers sent into outer space on the two Voyager spacecraft. (Spiegel insists, however, that her realization, which was included as part of “Sounds of the Earth” rather than “Music of the Earth,” is not her musical composition.)
But perhaps even more important to Spiegel than the imagination is emotional engagement. “I always wanted to make music that was beautiful and emotionally meaningful,” she explained. “The emotional level is the level at which I am primarily motivated and always have been. I’m still the teenage girl who, after a fight with my father, would take my guitar out on the porch and just play to make myself feel better. That’s who I am musically. I kind of knew what I liked as a listener, and what I liked was music that would express emotions that I didn’t have a way of expressing, where somebody understood me and expressed in their music what I was feeling in ways that I couldn’t express myself. So, to some degree, I think I see the role of the composer as giving vicarious self-expression to people, although at this point, with the technology we have, there’s no reason for anybody who wants to make music not to be able to.”

*

Laurie Spiegel’s equipment in 1980. Photo by Carlo Carnivali, courtesy Laurie Spiegel.

Frank J. Oteri: The meta-narrative of electronic music, and technological developments overall, is that we went from big anti-personal mainframe computers that took up entire buildings to home computers to handhelds and even smaller.
Laurie Spiegel: And I went that whole journey. I started using punch cards and paper tape. The first computer I ever saw was at Purdue University in Lafayette, Indiana, when I was in high school. I went down there for a weekend and they had a tape-operated computer on which I attempted to do an assignment for my high school physics class. In this class there was me and just one other girl. All of the others were guys, and the teacher really thought we didn’t belong there. It was just so weird. But I always loved science.
FJO: But before you got involved with making music with electronics, you were a guitar player and the acoustic guitar is one of the smallest, most intimate instruments that one can play by oneself and have a full sound, all alone. So it seemed to me like there’s a connection between that and how electronic music came to be made on smaller and smaller devices.
LS: Personal and private are important aspects of music to me. When I was little, I started with a plastic ukulele which was even smaller. Then my grandmother, who was from Lithuania, played mandolin, and she gave me a mandolin when I was maybe nine years old or so. That had the advantage that I could keep it under my bed and take it out at night and play it quietly with nobody hearing me playing it. I had the total freedom to just improvise and make stuff up. I don’t think I even told anybody when she gave it to me. It was like my secret instrument, my private means of expression – whereas the piano in the living room was this large, sacred object where everybody in the house heard you and didn’t necessarily want to hear kids practicing. The guitar was similarly private, and I could play it in my room. The freedom of not being heard, for a person who’s basically somewhat self-conscious, is really important, and so is the portability.

Despite having computers and other electronic musical instruments from half a century scattered throughout her loft, Laurie Spiegel still loves to play the guitar.

I used to take the guitar with me everywhere I went during high school, college, young adulthood, up until I hit classical music circles and discovered that a lot of the people who were studying music, and were the best at it, didn’t seem to do it for personal enjoyment. They were so serious about it. In the folk music-type circles and improvising circles, people would bring their instruments with them and people jammed all the time. But once I hit Juilliard, I didn’t find that people really did that kind of stuff. They didn’t improvise. They were seriously working on their trills. And they were seriously working on their performance pieces. It wasn’t integrated into their lives the same way as for amateurs who really love music. I guess I still regard myself somewhat as an amateur, just doing it for the love of it really, which is the technical definition of that word. I’ve always been an improviser too, which electronic instruments were perfect for because you were actually interacting live with the sound in electronic music; whereas, when I write music on paper, for instruments, I don’t get to hear it, or not for a long time, or not while I’m working on it. Of course, that’s no longer true because all the notation software now lets you hear stuff while you’re working on it, and you know right away that a rhythm isn’t what you meant. But in the old days, when I was learning notated composing, it was in your head.
FJO: It’s interesting that that came much later for you though, long after you were playing music.
LS: I was playing music, I was improvising, I was making stuff up, and at a certain point I wanted to learn to write things down so I wouldn’t forget them. So I started trying to teach myself to write stuff down. One of my roommates in the house that I lived in pointed out to me that they call that composing. You make things up and write them down. I was living in England and studying philosophy and history, doing a social sciences degree basically. I said, “No, I’m not composing. I’m just writing things down so that I don’t forget them. I’m not a composer.” But eventually it became undeniable, and composing took over.
FJO: And so the social sciences became less of a concern for you once music took over?
LS: No, it never really went away. I’m still very interested in politics, sociology, economics, statistics, anthropology, psychology, all that stuff, and animals. I’m a complete sucker for animals.
FJO: But it was still a transition. You were at Oxford and then you were studying with John Duarte.
LS: In London, during the second year that I was over there. He was probably the perfect teacher for me. He had a partly classical, partly folk, and partly jazz background. He taught me counterpoint and theory and a bit about composing, as well as classical guitar. Once a week I would take the train into London for the weekend and spend a whole day in his house. And we stayed in touch. Much later, when he was in his 80s, he started to learn to use personal computers and began doing his composing directly into the computer. It was amazing. He was an English composer not obsessed with avant-gardism, firmly rooted in some kind of folk—folk is not a general enough word, but a grassroots sense of musical meaningfulness, or maybe it is more accurate to say he was connected to tradition very organically and naturally in his music, like quite a few other British composers. I identify with that.

Laurie Spiegel in the early 1970s. Photo by Louis Forsdale, courtesy Laurie Spiegel.

FJO: So that’s a very different experience from then enrolling in a composition program at Juilliard, of all places.
LS: Yeah, well, I was completely not expecting the dominance of the post-Webernite, serialist, atonal, blip and bleep school of music. I wasn’t interested in that. I mean, I knew what I wanted to learn. I wanted to learn harmony, structure, form, process, history, and repertoire, lots of stuff. But it wasn’t really considered cool to be interested in learning to write tonal music. I remember a teacher—who shall remain nameless—who, when I brought in a piece in E minor for guitar, said, “Hmm, key signature. Doesn’t mean for sure that you don’t have any musical imagination, but it’s not a good sign.”
It was so much more uptight then. I was in a way intellectually prepared for it because at Oxford there was a comparable phenomenon going on. The logical positivists were in charge talking about how many definitions can dance on the head of a … whatever. I was more interested in phenomenologists and Asian philosophy, and all kinds of stuff that was about the opposite of the dominant philosophers at Oxford at the time. Logical positivism is divorced from gut feelings, which were my personal link to music. As a teenager, when I was miserable I would take my guitar out on the porch and play and express my emotions. And when I heard great classical repertoire, it could vicariously express emotion for me. And so music was really about emotion. It was also about structure, because I love structure. That’s the computer programmer in me. So the things that I was most attracted to in music were slightly at odds with the music that was in with the dominant power structure when I went to Juilliard.
Then there were also all these child prodigies wandering around. I already had finished a degree in the social sciences. I was older, which made me immediately suspect because it’s a highly child prodigy-oriented atmosphere; if you weren’t discovered by 12, you were a has-been. But there were a number of things that saved me from giving up and going crazy. One was that through electronic music I was able to create music people could hear and I became active in the Downtown scene while I was still up there. And people liked my work. I played music in other people’s ensembles, played guitar or banjo or whatever for Tom Johnson and with Rhys Chatham. I would do these filigree patterns, and Rhys would do these long drone-like lines against the stuff. That balanced it. Also I was making a living. I got a job with a small company that did educational films and filmstrip soundtracks. I composed all of their soundtracks for, I think, three and a half years or about that, and it paid decently. And again, when you do soundtracks, all that really matters is emotional content, and to a lesser degree the style. It’s the opposite of the aesthetic that was dominant uptown with Boulez, Wuorinen, and Milton Babbitt, although I liked Milton and a lot of these people. I was friendly with and hung out with the Speculum Musicae people, but our musical tastes were just in contrast to each other.
FJO: But your primary teacher at Juilliard was Jacob Druckman, who was really all over the map aesthetically.
LS: Yeah, boy, Jake was amazing. I was also his assistant and spent a lot of time in his house up in Washington Heights. I proofread the parts for Windows. He let me use his extra studio time when he wasn’t using it at the Columbia Princeton Studios, so I got to know Vladimir [Ussachevsky] and Otto Luening pretty well, and of course Alice [Shields] and Pril [Smiley]. I have a reel of pieces I recorded up there that at some point I’ll transfer and see what they sound like.
FJO: I’d love to hear those!
LS: I also studied with Vincent Persichetti, who was a wonderful teacher. He really did his best to try to help each of his students find themselves individually and learn to make the music that they personally wanted to make. He didn’t push you in any direction. He didn’t want to create a clone of himself, unlike some of the teachers there, and he was great. And I also had some lessons with Hall Overton, who appreciated that I was one of the very few students there who could improvise and enjoy it. But at the same time, I was going downtown to meet Mort Subotnick and visit his studio when it was still upstairs from the Bleecker Street Cinema. I fell in love with the Buchla, so I was doing that too. I was doing all of these different kinds of music at once. Unlike most people who might be immersed in the atmosphere of Juilliard, it was one of the places that I was active musically, but it wasn’t the place. It didn’t dominate me.

Laurie Spiegel with various synthesizers and reel-to-reel tape recorders in the 1970s. Photo by Louis Forsdale, courtesy Laurie Spiegel.

FJO: You played piano, but it wasn’t your major instrument.
LS: No, I had to kind of begin to learn piano because it was useful for theory, harmony, and composing and studying. And I love the repertoire, but it wasn’t like anything with strings on it, which attracted me like a magnet. But pianos—I mean, I love them, but they came later.
FJO: But in terms of compositional paradigms, a keyboard configuration creates a certain kind of mindset. I want to discuss this more when we talk about the Music Mouse software you developed and your algorithmic compositions. If you think in terms of a seven-five keyboard, whether you’re improvising on it or even composing in your head and coming from a keyboard-oriented background, certain patterns are going to emerge. And if your frame of reference is a guitar fretboard, other kinds of things are going to happen.
LS: If philosophically you’re a determinist, you could say that absolutely everything is algorithmic, but we do have a sense of free will and we do have the perception that we’re making decisions. But yeah, you could argue that if everything is deterministic, including the workings of the mind, then all music is algorithmic.
That seven-five pattern you see on the keyboard is only visible there because it’s the structure of the diatonic scales that we hear. It’s a pattern within the musical model our culture is dominated by. It’s not that pattern, but how it fits the hands, and the habits of the hands that become actual reflexes, that can be limiting. They can become so ingrained that they keep the imagination from roaming. That happens with the guitar fretboard too, though with different patterns, and with an instrument such as “Music Mouse” too, I suppose. Each instrument somehow biases our music in its own unique direction. Some composers manage to transcend those kinds of habits, some compose away from any instrument, others invent new instruments. But the physiological interface is sort of an algorithmic constraint all on its own, and I would think there are also similar cognitive constraints.

Some of the analog synthesizers in Laurie Spiegel’s loft.

FJO: You were telling me when we spoke the other day that there was a music composition teacher who was so upset with you because if his students used Music Mouse he wouldn’t know if they were coming up with their own music. So when you mentioned falling in love with the Buchla, I remembered that when we did our talk with Morton Subotnick he said that he was very determined to avoid the standard piano interface, that it was very important for him not for it to have that interface in order to free people’s creativity, that you would have to deal with the instrument in a completely new way. Otherwise the paradigm would force you into familiar patterns.
LS: I believe that was some of Schoenberg’s rationale for coming up with the 12-tone system, too. It breaks you out of all of your customary habits and the patterns that are ingrained. Every time I pick up the guitar, my hands tend to fall into patterns of things that I’ve played before, which can be good. But you are looking for something new when you’re composing, unlike when you’re just performing. Yeah, that was one of the wonderful things about the Buchla versus the Moog and Arp and other early electronic instruments. It was modular and there was no keyboard, and so you really worked with timbre and texture and sonic shapes and architectures, as opposed to falling into melody and harmony.
FJO: You came to these various pieces of equipment and you’ve done new things with them, but you also wrote music that was instantly beautiful. But beauty is also something that is in part acculturated.
LS: I always wanted to make music that was beautiful and emotionally meaningful. It was out of fashion to do that. A lot of people were simply trying to avoid doing that at the time, whereas I was willing to go for it. Newness was being pursued for its own sake.
FJO: You even composed a short piano piece that addresses the whole history of music and shows a way out of that.
LS: Oh, The History of Music in One Movement.
FJO: I love the program note you included in the score and how even though the music is inspired by all these periods in history, every note of it is yours. There are moments that almost get into sort of a modernist place, but it doesn’t end there. Writing something like that when modernism was acknowledged as the final phase in music’s evolution was very brave.
LS: That piece was one of the most fun composing experiences and one of the most interesting that I’ve ever had. At every point when I was writing something evocative of a certain period, I had to sort of try to feel through what it would feel like to need to go on to break through into what happened in the next period. I had to want the freedoms that the next musical era took. There are many transitions in there. The hardest part of writing that was that horrible little place where I did an actual pair of serial rows that retrograde and invert against each other and that sound so ugly and harsh to me. For historical accuracy, I thought I really had to put that in. And at that point in the piece, it says “Oh my God, we can’t do this,” and it retrogrades back and it takes a different direction and kind of goes off into a sort of Impressionist-tinged blues, and then into minimalism, texture, pure sonic fabric. But of course, when we wrote that, we hadn’t yet gotten to “post-minimalism”, whatever that means.

Two excerpts from the score of Laurie Spiegel’s The History of Music in One Movement showing her version of medieval music and high modernism. Copyright © 1981 by Laurie Spiegel, Laurie Spiegel Publishing (ASCAP) International Copyright Secured. All Rights Reserved. Reprinted with permission.

FJO: Musicologists point to the late ‘50s and early ‘60s as the beginnings of minimalism, but the ‘70s were really when it had its greatest impact with audiences. In fact, its full flowering seems to have gone hand-in-hand with the sudden availability of electronic instruments. This is also true for other kinds of music that were evolving at that time, like prog rock.
LS: Electronic instruments gave people the freedom to create works and sound on an unprecedented scale. Prior to electronic instruments, you had to go through the bottleneck of written notation. You had to go through the bottleneck of a limited number of orchestras with very conservative tendencies, because they had their subscribers to please. Electronic instruments were a great democratizing force. That’s one of the reasons why you began to see so many more women composers because you could go from an idea for a piece to the point where you could actually play it for another human being. I mean that had been true all along if you limited yourself to writing only for the instruments you played yourself. But when it came to writing things on an orchestral scale of sonority, to be able to realize something and then play it for other people all on your own was a brand new phenomenon. So electronic music did for getting things from the imagination to the ears of an audience what the internet later did for everybody being able to self-publish, democratizing it in ways that obviously have pros and cons. The economic models of these various ways of getting something from the inside of my mind to the inside of someone else’s mind, for whom it would be meaningful, have been completely upset and will have to settle down differently. Analog electronics were revolutionary, and now the digital ones are also. It’s amazing how quickly so many changes have taken place and they’re very disorienting to a lot of people, understandably.

Laurie Spiegel at the McLeyvier Music System, an early digital synthesizer with a computer terminal, in the early 1980s. Photo by Rob Onadera, courtesy Laurie Spiegel.

About what you asked, minimalism and electronic instruments: it was liberating for us players of plucked instruments and pianos to work with sustained tones. Instead of composing additively, writing down one tiny sound at a time, we could start with a rich fabric of sound and subtractively sculpt form into it, or we could set up a process and let it just slowly evolve on its own.
FJO: The other big change happened with how those electronics were situated. In the early stages you had to be attached to some kind of university system or, if you got lucky, you could afford a Moog or a Buchla.
LS: One of the things that I think made the ‘70s a really special period was that electronic instruments were too expensive for most people to own one. Sure there were people who had their own—Mort had one, Suzanne Ciani had one, a lot of rock groups could between them get one. But for a lot of us, the way to get access to electronic instruments was through shared studios. There was PASS—the Public Access Synthesizer Studio—which later evolved into Harvestworks. There was the NYU Composers’ Workshop. There was WNET’s Experimental TV Lab, where I was a video artist in residence for a while, though I ended up not really doing much video but doing soundtracks for everybody else’s videos. There was Mort’s little studio, and its community of people, upstairs from the Bleecker Street Cinema. The Kitchen was another one. The Kitchen started as a center for video and then expanded into music. So there was community. There were interactions between people. People would meet each other and they would get ideas and bounce ideas off each other and work together in ways that I would think must be much more difficult to achieve now that everyone has an extremely powerful studio—beyond our wildest dreams back then—in their bedroom or sitting on their desk. To be working in the studio and, okay, I’m coming in and Eliane Radigue is just finishing up, and she shows me what she’s doing. Then she watches me put up what I’m doing, and then when I’m done, Rhys Chatham comes in and he’s like, “Oh, you could do this and this, and by the way, you know, we’re trying this; do you want to come and play with us?”—I mean, things just happened between people and I think that made the ‘70s a really special period, the fact that there were so many shared studios where people worked together, interacted with each other, commented on each other’s work, and helped each other with their work, as opposed to everybody sitting by themselves in their rooms with their computers.

Spiegel at work in the era of mainframe synthesizers. Photo by Emmanuel Ghent, courtesy Laurie Spiegel.

FJO: Even some companies, like Bell Labs, became hotbeds of activity for composers at that time.
LS: Well, there was no place like Bell Labs. You can’t really even consider it a company. Bell Labs was pure research with a level of autonomy given to each person working there that probably no longer exists anywhere. There was no need to do anything with any commercial buy-in. You could do whatever you were interested in, everyone was brilliant, and everyone was interested in stuff. You didn’t last that long or do that well at Bell Labs if you weren’t self-motivated and a self-starter. You were expected to have your own ideas and be able to realize them. I’m still in very close touch with my friends from that lab. We email all the time and toss ideas around. I just don’t know if there is any other place quite like that, although I think places like Apple and Google like to think they have the level of freedom that they had at the lab. I’ve never really been around them on a workaday basis to find out.
FJO: I love that they would just let artists come and do their thing.

Robert Moog, Laurie Spiegel, and Max Matthews seated together at a table.

Robert Moog, Laurie Spiegel and Max Matthews. Photo courtesy Laurie Spiegel.

LS: Well, they did and they didn’t. The arts were a little on the hushed side because of their regulated monopoly status, and moving into the ‘70s, they began to be under attack by the various powers that wanted to divide “Ma Bell” into a number of small, separate, competing companies, which ultimately did happen and was a great loss in my opinion. They were under a certain mandate; there were a number of considerations. One was that everything they did should be oriented to communications research. So when they came up with Unix and the C language, they just gave them away for free. Another was that they were not really supposed to be doing digital communications so much, I think, as improving existing analog telephone service. I’m not really that sure. I wasn’t in the managerial level of the lab. Max Matthews was, though; he was a fairly high-up person. He ran twelve sub-departments that did all kinds of amazing stuff: acoustic research, speech synthesis and analysis, non-verbal communications, various cognitive studies like studies of the characteristics of long-term versus short-term human memory and stereopsis, and, in vision, the study of eidetic memory. You would just walk around or ask whoever happened to be at the coffee machine when you were getting a cup of coffee: “What do you do?” and they would tell you something absolutely fantastically fascinating that they were very much into. It was an amazing place.
FJO: In addition to music, you were also doing video work at Bell Labs. I love the name of the program you worked on there.
LS: VAMPIRE! (Video And Music Program for Interactive Realtime Exploration.) It was a system that could only be used at night. That was the mandate. We artist types could use the computers during the hours when they were not in use for legitimate Bell Telephone research.
FJO: I think my favorite work of yours from that period though is that gorgeous Appalachian Grove.
LS: Yeah? At that point I had a graduate research fellowship starting in I think ’73 at the Institute for Studies in American Music with Wiley Hitchcock, whom I greatly admired. Anybody who hasn’t read his book, Music in the United States: An Historical Introduction, should read it. He put me back in touch with and made me feel better about my banjo playing and the folk level, which had been basically kind of ridiculed in some of the other circles I’d been in during that era.

Laurie Spiegel playing a banjo

Laurie Spiegel playing the banjo in October 1962. Photo courtesy Laurie Spiegel.

I had just been down in the mountains in western North Carolina—with my banjo over one shoulder and my so-called “portable” reel-to-reel tape recorder over the other shoulder—listening to and enjoying older music and the culture that comes from early music. I mean, music from Europe went into those hills before the Baroque era and evolved on its own there, amazing music. I had just come back from there when I did Appalachian Grove and wanted to capture some of the feeling of being down there.
The wonderful thing about being surrounded by scientists, and not being in a computer music studio in a music department, is that a lot of scientists really love music. They are unabashedly lovers of fine music that’s meaningful in all the ways that I find music meaningful. They go to classical concerts, and they play instruments themselves. They love music the way ordinary people do. Whereas, something happens when you put music into an academic context in which down the hall is a science lab where everything has to be provable and rationalizable. You begin to get pieces where every note needs to be able to be explained, a certain level of self-consciousness begins to be laid on a musical experience. I’m not saying that always happens, but it seemed to be a tendency in academia during that period which was not present at Bell Labs.
FJO: What’s nice about the re-issue of your first album, The Expanding Universe, that came out last year is that we can finally hear all of the compositions you created at Bell Labs.
LS: Well, most of them. I did an awful lot of stuff. Two and a half hours, or a little more than that, was all we could fit on two CDs.
FJO: Only a tiny portion of that material was issued on the original LP, which curiously was released by the folk music label Philo.
LS: Another thing that I keep harping on is that the computer is a folk instrument. One of my favorite subjects in college had been anthropology. You have all these various techniques of going into an alien society and trying to figure out what’s important. One of the techniques is to try to figure out the cultural premises, the rock bottom assumptions that members of that culture would make. So I took a look at a number of different distribution media for music: classical concert venues; grassroots organizations like community sings; bands and church groups; parlor music, music that is done at home with people gathering around a piano singing or playing guitar together; and electronic media—phonograph records, radio, and electronic music. I looked at the characteristics of the music that is disseminated by each of these methods and certain patterns begin to fall out.
The classical model is a finite piece of music with a fixed form that is attributable to one creator—Beethoven, for example. But the electronic model is very similar to the folk model. You have material that floats around and is transmitted from person to person. It’s in variable form; it’s constantly being transformed and modified to be useful to whoever is working with it, the same way folk songs are. People will come up with new lyrics for the same melody, or they’ll change it from a ballad to a dance piece. Nobody can remember what the origin is. There is no single creator. There’s no owner. The concept of ownership doesn’t come in. In the way that electronic sounds go around—people sample things, they do remixes, they borrow snatches of sound from each other’s pieces—the concept of a finite fixed-form piece with an identifiable creator that is property and a medium of exchange or the embodiment of economic value really disappears in both folk music and electronic and computer music in similar ways.
FJO: But certainly in the earliest era of electronic music, there would be these musique concrète and studio-generated electronic music tape pieces that are even more fixed than a piece by Beethoven because not only is there one piece, there’s only one interpretation of it because the interpretation is a fixed form.

Laurie Spiegel with her equipment including patchcord analog synthesizers, keyboard console, and a reel-to-reel tape recorder in 1971. Photo by Stan Bratman, courtesy Laurie Spiegel.

Laurie Spiegel with her analog synthesizer and reel-to-reel tape recorders in 1971. Photo by Stan Bratman, courtesy Laurie Spiegel.

LS: That was pretty much true back when electronic music could only be disseminated on reel-to-reel up until cassettes were invented, since you had to actually own two reel-to-reel machines to make a copy and very few people did. You would have tape concerts where you could play pieces for people or it might get on the radio or a record as a medium of dissemination. But once there were cassettes, you started to get people doing mixes and overdubs, excerpting things and chopping things together. Not a lot of people did the kinds of techniques that had been used in classic studio technique—lots of splicing and cutting—on cassette. To edit a cassette tape is pretty unusual. Then when you got digital recording, the first wave of digital excerpting was samplers, before personal computers and the internet made other ways more feasible. The business end of the music industry is trying very hard to make everything identifiable and institute royalty systems and stuff. But I think, even though I’d benefit from receiving royalties, it’s to some degree a losing battle and a superimposition of a model that no longer really fits. We don’t have a new model yet that provides economic support back, but maybe we don’t need one, because music production is so much cheaper and faster.
FJO: I definitely want to talk more about these issues with you, but let’s get back to Philo. It’s really unusual for them to have released an LP of electronic music. That record proves in a way that the divide between folk music and electronic music was a fake war that was created in part by the media overblowing some people’s negative reactions to Dylan plugging in at the 1965 Newport Folk Festival.
LS: Well, I was a folk person and a banjo person. The lowest, most grassroots technology and the most sophisticated electronic technology you would think would be diametrical opposites, but the fact that you can make music independently at home, and make music locally with other people in an informal way without any of the traditional skills such as keyboard skills and music notation, that’s a great commonality.
FJO: And some of the popular rock groups at that time were also doing some very sophisticated stuff with electronics.
LS: Pink Floyd.
FJO: Perhaps even more so some of the German groups like Tangerine Dream and Kraftwerk, many of whose recordings were purely electronic music without vocals or anything else. There isn’t that much of a sonic difference between some of their music and some of the stuff on the Expanding Universe LP.
LS: Yeah, there is and there isn’t. In a way, it’s almost closer to minimalism. I’m thinking the earlier Terry Riley pieces like Poppy Nogood and In C, which are pretty much open form. My pieces tend to actually be relatively short and have pretty clear forms and the processes in them tend more toward melodic evolution than repetition.
FJO: But in terms of the surface sound, I think the music on that LP could appeal to anybody who’s a fan of Tangerine Dream, and having that recording appear on Philo rather than one of the labels that was releasing electronic music that had been created in university settings, like CRI, seems like a reaching out to this broader audience.

Laurie Spiegel playing an electric guitar

Laurie Spiegel playing the electric guitar at a NAMM showcase in Anaheim in the late 1980s. Photo courtesy Laurie Spiegel.

LS: I have been in multiple musical worlds simultaneously throughout most of my career. I haven’t lived in the classical world, although I still totally love classical music, probably really the best. But none of those labels would have had me. Philo were willing. And then Rounder took it and kept it right up until Unseen Worlds Records put out the CD re-release. I mean, listen to Appalachian Grove and Patchwork and Drums. They’re clearly closer to a grassroots, folk sensibility than they are to any of the post-Webernite composers. But I did get it through personal connections that were more in the folk world. I had a roommate for about 14 months, Steve Rathe of Murray Street Productions, who was at that point working for NPR. He decided to move to New York and stayed here “for 2 weeks” until he could find a place, which turned into 14 months, which was actually great. I like him a lot. And he connected me up with Philo. He went to them and said, “You gotta hear this stuff.” That’s how that really happened. He still invites bunches of people over to his loft to just have an old fashioned country music evening with banjos and fiddles, and I play banjo or fiddle or guitar at those.
FJO: You’ve never gone over to one of these things and played with Music Mouse.
LS: No. Music Mouse doesn’t work like that. I have jammed playing Music Mouse, but it doesn’t lend itself well to playing with other people, because it tends to not be good for standard chord changes.
FJO: Now in terms of how worlds opened up, I’m curious about how your music wound up getting sent into outer space on Voyager.

The cover for the Voyager record and the record

The gold-plated Sounds of Earth Record containing Laurie Spiegel’s realization of Johannes Kepler’s Harmonices Mundi and its gold-aluminum cover (left). Photo by NASA (Public Domain). A copy of this record was sent into outer space on both the Voyager 1 and 2 spacecraft in 1977. The cover was designed to protect the record from micrometeorite bombardment and also provides a potential extra-terrestrial finder a key to playing the record. The explanatory diagram appears on both the inner and outer surfaces of the cover, as the outer diagram will erode over time.

LS: I was visiting friends up in Woodstock on a lovely summer’s afternoon, and somehow a phone call got forwarded to me and they said, “We’re with NASA, and we would like to use some of your work for the purpose of contacting extraterrestrial life.” And I said, “What kind of a crank call is this? If you’re really from NASA, send something to my address on NASA letterhead. Okay, goodbye.” And they did, which really surprised me.
There are a number of algorithmic works. One type might start with a truly logical progression that generates the information for a piece. Another kind is to use the patterns we find in nature and translate those into the auditory modality, like the Kepler piece [which is what was put on Voyager]. Kepler of course didn’t have the means to do that back in the 17th century. But we do.
FJO: And so you realized that.
LS: Yeah, yeah. Ann Druyan, Timothy Ferris, and Carl Sagan liked it for the opening cut on the Sounds of Earth record. There are two records on Voyager. One is Music from Earth. It’s not in the music part. It’s in the Sounds of Earth.
FJO: That’s always bothered me.
LS: No. It really is simply a translation into sound of the angular velocities of the planets. It’s a transcription really. I don’t think of it as a composition. It’s an orchestration I did, and I think I did a good one, because I have listened to some other ones and they seem rather dry and academic sounding; whereas, I somehow, being me, managed to get some sense of feeling into the ways that I mixed it and the pace at which I let it unfold, and the decisions I made such as only including the planets that were known during Kepler’s time instead of all of the planets we later came to know.
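[Ed. note: The mapping Spiegel describes can be sketched in a few lines of code. This is a hypothetical illustration, not her actual realization: Kepler’s “harmonies” compare each planet’s angular velocity, as seen from the Sun, at perihelion versus aphelion. Since angular momentum is conserved, angular velocity scales as 1/r², so that ratio is ((1+e)/(1-e))² for orbital eccentricity e. The octave-scaling into the audible range below is an arbitrary choice made for the example.]

```python
# Hypothetical sketch of translating planetary angular velocities into pitch.
# Data: orbital period (years) and eccentricity for the six planets Kepler knew.
PLANETS = {
    "Mercury": (0.241, 0.2056),
    "Venus":   (0.615, 0.0068),
    "Earth":   (1.000, 0.0167),
    "Mars":    (1.881, 0.0934),
    "Jupiter": (11.86, 0.0489),
    "Saturn":  (29.46, 0.0565),
}

def angular_velocity_ratio(ecc):
    """Perihelion-to-aphelion angular velocity ratio: ((1+e)/(1-e))**2,
    from conservation of angular momentum (omega ~ 1/r**2)."""
    return ((1 + ecc) / (1 - ecc)) ** 2

def planet_pitch_hz(period_years, base_hz=440.0):
    """Map mean angular velocity (1/period) to a frequency; the base_hz
    scaling that puts Earth's orbit at 440 Hz is an arbitrary choice."""
    return base_hz / period_years  # faster orbit -> higher pitch

for name, (period, ecc) in PLANETS.items():
    lo = planet_pitch_hz(period)
    hi = lo * angular_velocity_ratio(ecc)
    print(f"{name:8s} sweeps {lo:8.2f} Hz to {hi:8.2f} Hz")
```

Each planet then becomes a voice gliding between its two extreme pitches over its orbital period, which is one plausible reading of “transcription” here: all the pitch material comes from Kepler’s data; the orchestration (timbre, mix, pacing) is where the realizer’s choices enter.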
FJO: There was an LP that came out of another realization of Kepler’s Harmony of the World in the late ‘70s, and in that realization the other three additional planets discovered after Kepler’s lifetime were represented as percussion tracks. There is some similarity between that recording and what you did.
LS: It’s the same solar system.
FJO: But still I hear your sensibility in your version somehow.
LS: But it’s not an original piece by me. If anybody composed it, it was Kepler who created this score, or as Kepler would have said, “It’s a composition by God rendered audible to man,” although I don’t know if he really believed in God. His mother was almost burnt at the stake as a witch.
FJO: That leads us into this whole question of who can claim compositional ownership of algorithmic compositions.
LS: Well, if the piece is generated by a process then whoever creates the process you would think composes the piece. It gets more complicated when it’s an interactive algorithmic situation. I have never called Music Mouse an algorithmic music generator. It’s interactive. It’s an “intelligent instrument”, an instrument with a certain amount of music intelligence embedded in it, mostly through a model of what I would call “music space”—music theory, rhythmic structures, and orchestrational parameters that one can interact with. If someone composes with that, to some degree, it’s a remote collaboration because there is certain decision making I put into that program that they’re stuck with. And the rest of it is up to them. So there is decision making from both me and them, in that the computer is really almost passive. I would say it only does what you tell it to in simple situations like Music Mouse. In complex situations such as the entire world internet system, things become so complex that things will happen that the system was not instructed to do. But that’s on a different scale from a program where you actually describe a process of music generation, or a program such as Music Mouse, where if you do exactly the same thing you will always get exactly the same result, as with other instruments.
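[Ed. note: A toy example, not Spiegel’s code, of the “intelligent instrument” idea she describes: the program embeds harmonic knowledge (here, a major scale and a fixed harmonizing rule, both invented for this sketch) so that any cursor position maps deterministically to in-scale pitches. Identical gestures always produce identical output, which is her point about Music Mouse behaving like other instruments.]

```python
# Toy "intelligent instrument": embedded music theory + deterministic mapping.
MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of a major scale

def snap_to_scale(step, root=60):
    """Map an unconstrained step index to a MIDI note in the scale
    (root=60 is middle C); this is the embedded 'music space' model."""
    octave, degree = divmod(step, len(MAJOR))
    return root + 12 * octave + MAJOR[degree]

def voices_for_position(x, y):
    """Two voices driven by cursor coordinates: a melody note from x,
    plus a harmonizing voice offset by a rule the program fixes."""
    melody = snap_to_scale(x)
    harmony = snap_to_scale(x - 2 + (y % 2))  # the instrument's decision
    return melody, harmony

# Deterministic: the same gesture always yields the same notes.
assert voices_for_position(5, 3) == voices_for_position(5, 3)
print(voices_for_position(0, 0))  # (60, 57)
```

The split of authorship she describes is visible even in this sketch: the scale and the harmonizing rule are the programmer’s decisions baked in, while which positions to visit, and when, remains entirely the player’s.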

Screen shot of Music Mouse software in use

Music Mouse running under the STEEM Atari STe Emulator on a Windows Vista PC. Photo courtesy Laurie Spiegel.

FJO: Allow me to play devil’s advocate.
LS: Go for it.
FJO: Even if you’re creating music on a piano, there are things that are built into that piano that sort of predetermine the kind of things you can do with it.
LS: Sure, each instrument really does have an aesthetic domain. You obviously can’t do the same music with a flute and with a harp. But you say that you could hear my sensibility in the Kepler. You probably hear a related sensibility when you listen to my piano pieces or my orchestral writing.
FJO: Yes.
LS: So the medium interacts certainly with the individual person expressing themselves through the medium. It’s sort of a collaboration between a structure and a person.
FJO: Well, the reason I’m bringing this up is the story you told me last week about a music composition teacher being upset with you because your software made it difficult for him to know if his students were actually composing the music he assigned them to write.
LS: I wrote Music Mouse for my own use, and then I showed it to people and they wanted copies of it, and then they showed it to people, and it got to the point where more people wanted copies of it than I could sit down and explain how to use it to and so I wrote a manual. Then it kept snowballing, and it needed a publisher, so I gave it to Dave Oppenheim at OpCode to publish. And then a lot more people had it. At one point, later when Dr. T’s Music Software were publishing it, Music Mouse was bundled with [Commodore] Amiga computers, and something like 10,000 copies of it shipped. A lot of people used that program.
So I began to get feedback from all manner of people who I didn’t know. The program was in many contexts I had never dreamed it would be in. So I get a somewhat upset letter from a college music teacher telling me that because of my program, he doesn’t know how to grade his students. He can’t tell if they know harmony, or they’re relying on my software for the harmony that they’re using in the compositional exercises they’re submitting to him. What is he supposed to do about that? How is he supposed to grade them? Music isn’t really something that’s supposed to be graded anyway. But yeah, a lot of unexpected and interesting things happened as a result of that program going out in the world on such a large scale.

Two 3.5 inch floppy discs

Floppy discs for two of Laurie Spiegel’s software programs, Music Mouse and MIDI Terminal, as issued in the 1980s. Photo courtesy Laurie Spiegel.

FJO: One of the things that I find fascinating about it is it can help you get out of habits that you had.
LS: I used to call it an idea generator. You’re certainly not going to be able to do anything you ever did on a keyboard or guitar, and you will be doing other kinds of things. And you’ll be focusing not on the level of the individual notes, but on the shapes of the phrases and the architecture of the musical gesture. It forces you to conceptualize on a larger scale. Music composing often really bogs down at the level of the note, and people lose perspective and they muddle around. If it’s really beautifully done, you can be utterly fascinated and transfixed by what’s happening on the level of the notes. But you also find an awful lot of pieces that seem to just kind of go on and on and wander around because the person creating them has lost perspective in terms of an overall form. Music Mouse orients you to think on a slightly larger scale of the phrase or the gesture. Of course, you can still wander around, making a mess for a really long time. We’ve all done that. But it’s an improvising instrument and it’s a brainstorming instrument.
FJO: In terms of how it affected your own composition process, are there things in your music that are different before Music Mouse and after Music Mouse?
LS: Music Mouse had things in common with the various FORTRAN IV and C programs I wrote at Bell Labs, but I can’t begin to say how much the orchestration of electronic sounds that could be dealt with in real time changed in a single decade. I mean, you talk about 1975 when I was doing pieces like Patchwork at Bell Labs. In 1985, I was doing pieces like Cavis Muris and the orchestration of real-time electronic sounds, real-time digital sounds, was just light years more advanced. It’s amazing what happened orchestrationally in that decade with the development of real-time digital audio.
FJO: I love the back story of Cavis Muris.
LS: I’m very fond of mice actually. There was a little family of mice living in the loft at that point. But the mouse of Music Mouse initially was the mouse input device of the early Apple Macintosh. It occurred to me, when I got my first Mac. It was not the very first one, the very limited 128k. By the 512k Mac, it became usable. So what would be the most logical thing you’d want to do with a mouse-controlled instrument? You would want to push sound around with the mouse. So, it was Music Mouse, and then I just kept refining it and refining it. That’s how it got its name. Now, of course, nobody uses mice. Well, some people still use mice. And of course there are still plenty of real mice.
FJO: I still use one, but I also still use a PalmPilot.
LS: I always used a trackball, which I guess I would have had to call “Music Rat,” because it is definitely bigger than a mouse. I was thinking of doing a Rhythm Rat at one point, but I never got that far. There were too many other things going on. I might do a Counterpoint Chipmunk at some point. I don’t know. I would love to get back to coding. It’s just been so busy and the technology changes under me faster than I can learn to keep up with it in my spare time with so many other things always going on.

computer terminals and digital keyboard

One of Laurie Spiegel’s current compositional work stations.

FJO: The constant change in technology raises other issues about the future of musicality. Being adept at something because you’ve mastered it over the course of many years is an alien concept to a lot of people nowadays. But in a society where the technology changes at the drop of a hat, it’s really difficult to become proficient in any specific thing.
LS: You are right. People used to learn a tool or technique and refine and develop their use of it for the rest of their lives. Now we can’t even run the software we used most just a few years ago. We are always beginners, over and over.
This constant transitioning fits with the attention span of the channel flipper or the web browser. And the process of facing the blank page until some creation takes form on it is now rare. More and more of today’s digital tools come up with a menu of selections, like GarageBand. Here’s a library of instruments, pick one—multiple choice initial templates. Do you want to make this kind of piece or that? Things start with “here are some options you can select among” as opposed to starting with something in my mind which I’m hearing in the silence in my imagination. Back in the dark ages when I was young, you had a great deal of time to focus on what was happening in your mind and information could proliferate, amplify itself, and take form in your imagination without that much interruption from outside. You had your mind to yourself. I don’t think kids walk home from school anymore. I don’t know. All parents seem to be hell-bent on making sure they’re safe and picking them up. And they are constantly interacting, with people or with devices or with people via devices.
Our culture is at this point full of people who are focused outward and are processing incoming material all the time. So you’ve got musical forms which are mixes, mash-ups, remixes, collages, processed versions, and sampling, all kinds of making of new pieces out of pre-existing materials rather than starting with some sound that you begin to hear in your imagination. I’m a little concerned about this because there’s just nothing like the imagination—being able to focus inward and listen to what your own auditory mechanism wants to hear—listening for what it wants to hear and what it would generate on its own for itself. You can do processing of the stuff coming at you ‘til the cows come home, but are you going to get something that’s really the expression of your individuality and your sensibility the same way as listening to your own inner ear? Are you going to come up with something original and authentically uniquely you?
FJO: But you were saying before that we’ve moved to this point where nobody owns a sound and that reconnects us with much earlier folk music traditions.
LS: Well, people still do, but it seems to be very hard to enforce ownership of sounds.
FJO: I loved the story you told Simon Reynolds about wanting to listen to an LP you thought you had and when you were not able to find that recording, you made your own music instead.
LS: That’s where the piece The Expanding Universe came from. I was looking back and forth through my LPs, and I wanted to hear something like that—not a drone piece, not a static piece, not like La Monte Young, and also not something that was a symphony. It just needed to be this organic, slowly growing thing, and I couldn’t find it, so—do-it-yourselfer attitude—I made one.
FJO: So do you think it’s less likely that somebody would do that now?
LS: Would somebody feel a desire to hear a certain kind of thing and go looking for it? Would they hear something inside their head and want to hear it in sound? It seems that people are fending off a great deal now. The dominant process is overload compensation: how can I rule out things that I don’t want to focus on so that I can ingest a manageable amount of information and really be involved in it. Attention is now the scarce commodity. Information used to be the scarce commodity, “information” including music of course.

Computers, bookcases and wires scattered across Laurie Spiegel's loft

Laurie Spiegel’s loft is an oasis of books, musical instruments, electronic equipment, and toys.

FJO: In terms of finding that original sound, there’s a piece of yours that I certainly think is one of the most original sounding pieces and it’s one of my favorites—Voices Within. It’s also one of the only pieces that you did using alternative tunings.
LS: Wandering in Our Time is similar, although not as highly structured as Voices Within. It’s easier to use tonality or modality. Microtonality is hard to deal with. I didn’t use any particular microtonal scale. It was really by feel.
FJO: But there’s a real sense of it being another world.
LS: It was a very internal world. I keep using the word emotions, but emotionally, subjectively, the kind of unformed sense of experience you can’t even identify or label or describe, but it’s something haunting you inside that you feel music is the way to express. Does that make any sense?
FJO: Yes, but the reason I bring it up is because of what you were saying about attention being so hard to come by these days. That piece really struck me because I didn’t have a framework for listening to it since it was so unlike anything else. With technology today and where we are in terms of being offered all these possibilities and having to choose from a set of options rather than striking out on our own paths, I wonder how possible it is for a piece like that to be created now.
LS: You wouldn’t have come up with that piece on a keyboard-based synthesizer. It needed a synthesizer without a keyboard. To some degree, all of these computer programs for music out there now are virtually keyboard synthesizers; they all give you a scale. You have to really work to get out of the scales, those normal diatonic scales that are in every software package on the market. There are a lot of assumptions about the nature of music in most of the commercial software. They’re perfectly fine for making music that’s a lot like previous music, but not in terms of finding those places on the edge of what we know where we’re feeling for something that is so subjective and so tenuously there that we can’t begin to describe it. Those kinds of aesthetic experiences in sound are not really what the software that most music is done on today is optimized for. I suppose I’m guilty of using existing software by other people as much as anyone, but you do have to really work to get beyond the assumptions inherent in most software tools for any of the creative arts these days.
FJO: In terms of working within conventions, it was fascinating for me to discover Waves and Hearing Things, your pieces for orchestra.

page of handwritten orchestral score

An excerpt from Laurie Spiegel’s Seeing Things for chamber orchestra. Copyright © 1983 (revised 1985) by Laurie Spiegel, Laurie Spiegel Publishing (ASCAP) International Copyright Secured. All Rights Reserved. Reprinted with permission.

LS: I can thank Jake Druckman for both of those opportunities, actually. He agented both of those. Everybody just wants to hear my electronic stuff, pretty much.
FJO: But those pieces are extraordinary, too. They’re very interesting musical paths that might not have been intuitive had you not immersed yourself in electronic music. I hear the same kinds of transformations of timbres—an instrument emerges out of a cloud of sound the way that a timbre would emerge in electronic pieces from that time. Yet it’s all done acoustically.
LS: But that happens in classical music, too. Much as I would have never admitted it to other kids at Juilliard, I absolutely love Rimsky-Korsakov and how he orchestrates. His orchestration is one of my great inspirations. And I love his orchestration book, too. It’s just really about sound and feeling it, it’s not about instrument ranges or any kind of nuts-and-bolts level stuff. You could say that what he does in some of his orchestrations is virtually electronic. It’s so focused on the sounds that you practically forget that they’re instruments.
Orchestras are great because you have all these timbres—wow! Then again, I love writing for solo instruments, too. Concerts are good, and I’ve enjoyed many concerts, but to me the most important music was always the music that happened at home where I would just pick up my guitar and play it to feel better, or I would sit there and sight read at the keyboard, which I used to love to do a lot, but haven’t had much time for in recent years. Or playing music for just one other person. Or playing music with one other person at home. Writing music that somebody can just put on the piano, trying to write things that are not that hard to play so that more people can play them. I’m not interested in virtuosity. I’m not interested in writing show pieces for concert halls. I’m interested in writing something that someone can sit down and play at home and enjoy the musical experience of playing it. That’s more important for me as a composer, so I tend to write pieces just for guitar or piano, the instruments that I have played the most.
FJO: That’s a beautiful statement because so many people talk about getting into electronic music so that they could write music that they weren’t able to get players to play, creating a music that is even too hard for the virtuosos, music that’s beyond human ability. You’re saying the exact opposite.
LS: Well, that too. It’s not an either/or. They’re both valid. That’s one of the reasons to do Music Mouse. It’s as close as you can come to playing an entire orchestra live in real time. I have all this timbral control. Nine of the twelve tracks on my CD “Unseen Worlds” were created with just Music Mouse and it was like playing a pretty full orchestra.
FJO: So if you somehow had the time to take those pieces and orchestrate them and have them played by actual orchestras, would that be aesthetically satisfying to you?
LS: That seems like an awful lot of time and work to do something that already exists, as opposed to doing something different if given the opportunity to do something for orchestra. But yeah, that would be interesting. They would be different pieces, I would think. But that would be a lot of work. Well, it might not be. Actually you could automate an awful lot more of the transcription than you used to be able to do. Writing notes down, God, it’s so much slower than playing. That’s partly why I’ve always been an improviser. Jack Duarte, my teacher in London, said “composition is improvisation slowed down with a chance to go back and fix the bad bits.” Or “bad notes,” I think he may have said.


Laurie Spiegel playing the lute in 1991. Photo by Paul Colin, courtesy Laurie Spiegel.

FJO: So we’ve talked about the composer and the interpreter, what about the listener?
LS: Well, I think one of my advantages as a composer was that I didn’t accept the identity professionally until I had already grown up as a listener and a player. The emotional level is the level at which I am primarily motivated and always have been. I’m still the teenage girl who, after a fight with my father, would take my guitar out on the porch and just play to make myself feel better. That’s who I am musically. I kind of knew what I liked as a listener, and what I liked was music that would express emotions that I didn’t have a way of expressing, where somebody understood me and expressed in their music what I was feeling in ways that I couldn’t express myself. So, to some degree, I think I see the role of the composer as giving vicarious self-expression to people, although at this point, with the technology we have, there’s no reason for anybody who wants to make music not to be able to. But there really still are levels of ability. Not everybody’s going to be Beethoven or Bach. There still will always be room for truly amazing artists of composition and sound who can do things that other people can’t. It’s just that I really kind of rail against the old dichotomy of the small elite of highly skilled makers of music and this vast number of passive listeners that have no way to actively express some thoughts in music. That seems really wrong to me, and that no longer needs to be the case. But that’s not to say that it isn’t still worth listening, because there aren’t that many truly great works out there, percentage-wise.

In addition to her musical compositions, computer software, and extensive writings about music, nature and many other topics, Laurie Spiegel is also a visual artist. These are two of her Xerographs.


Paul Rudy: Life Improvisations


Composer Paul Rudy takes to heart the idea that “nature’s wisdom follows the path of least resistance.” Although he knew this intuitively in his early “pre-composer” days as a self-described “mountaineer, carpenter, and vagabond,” it took time and plenty of experimentation in order to integrate the concept into his artistic life. But despite what many may consider a very late start (he did not compose his first piece until he was 26 years old), once he found his stride, he has more than compensated for lost time with a large and ever-expanding catalog of electronic, electroacoustic, and instrumental works that have earned him numerous composer golden eggs such as the Rome Prize and a Guggenheim Fellowship. He currently holds the position of Curator’s Professor at the University of Missouri, Kansas City, where he has taught since 1999.

After years of wrestling with his composing process, Rudy had a breakthrough with the piece Degrees of Separation, for amplified cactus. (Yes, you read that right—cactus.) Realizing the folly of attempting to notate musical directions for a cactus, he threw away his score paper and created a graphic text score that left ample room for improvisation, and he considers it one of his most successful pieces. Two years later, during a residency at the Helene Wurlitzer Foundation of New Mexico, he had a transformative experience composing In lake’ch, a long-form electronic composition, which took shape quickly and without the struggle he had become accustomed to, as a result of taking a fully improvisational approach to the composing process. He remembers, “All along I had this information that was whispering, ‘Do this. Try this. Work this way.’ And when I did that, it was being affirmed by the world… If I don’t hear something, it’s because I’m not listening, because I’m not paying attention. I’m not attending to what’s going on around me.”

A page from Seven Spiritual Laws of Life: Guided solo or group meditations for any instruments or voices and fixed media by Paul Rudy. Used with permission.


And he is walking his talk; active listening and intuition play vital roles not only in his composing process, but also in his work as an educator and performer. One of the courses Rudy teaches at UMKC is a mandatory improvisation for composers class that emphasizes listening skills and hones the ability to perform spontaneously within a group setting. After sitting in on two sessions, I can say for a fact that the performances I heard—which were primarily by undergraduate students—were some of the most nuanced and sonically compelling improvisations that I’ve heard by any musicians in a long time. Rudy is also passionate about exploring the healing power of sound, and his performances now incorporate his own overtone singing, as well as improvisational material that delves into the meditative and therapeutic qualities of sound.

Following intuition has served Rudy well, and he continues to follow those quiet whisperings. He has started a monthly sound healing circle in Kansas City. He is particularly excited about the new quartz singing bowls that he plans to include in those events and in his own performances. He is completing a film score that will be premiered in the spring of 2014. When he is not teaching composition and electronic arts at UMKC, he can be found on his farm outside Kansas City. He might be hunkered in his electronic music studio, or chopping wood outside, but regardless, he’s always committed to listening.

Dan Trueman: Man Out of Time


Trueman’s office on the campus of Princeton University
November 4, 2013—2 p.m.
Transcribed by Julia Lu
Video presentation and text condensed and edited by Molly Sheridan

I readily confess that I lifted the title for this piece directly from a poetic description of Dan Trueman that appeared in Electronic Musician just a few weeks before I interviewed the composer myself. “Trueman is a man out of time,” noted Ken Micallef, “one foot in tomorrow’s software, the other in yesterday’s folk music.”

I scribbled this seeming contradiction across the top of my notes, but quickly began to wonder if these musical worlds were so very far apart after all. Trueman’s beloved Norwegian Hardanger fiddle is its own kind of remarkable technology. And the work he does with programming, particularly when building his own invented instruments or working with the Princeton Laptop Orchestra (PLOrk), often takes metal, plastic, and code into areas of incredibly organic and tactile creation. That they implied to me a type of contradiction felt narrow-minded on reflection. If there is any line to cross, Trueman certainly doesn’t trip on it.

What he does notice are some other tensions, which then influence his work from project to project. Coming from a family of musicians who regularly played chamber music together, Trueman is extremely conscious of how much we privilege professional performance over communal music making. As a result, he works to make sure those playing his music—whether on stage or in the classroom together—feel a meaningful engagement with the notes and instruments in their hands. He’s much less concerned with the preservation of his catalog for posterity and is instead focused on making sure that the new technologies he develops for it function correctly from year to year so that he can keep building and developing creatively. When time concerns him at all, it’s not in how the past meets the present, but in how a human sense of rhythm meets a metronome’s tick.

His innate intellectual curiosity keeps him exploring topics within music and beyond, but whether the eventual expression of his ideas requires old instruments or the invention of new ones, at its root is something basic and strong.

“I guess what I’m saying is that I always feel like I have to make sure I’m coming back to playing music with my body and with other people, and trying to keep myself honest about how I think I understand things,” Trueman acknowledges at one point in our conversation. Later, he hits this same lesson from a slightly different angle. “It’s funny how we get these inherited bits of wisdom about what it means to write music. In the end, we all have to find our own way.”

*

Molly Sheridan: On your website there’s a neat juxtaposition that crops up among resume bullet points where your work with the Hardanger Fiddle Association of America butts right up against other work produced for the Computer Music Journal. Anyone familiar with your career knows you are certainly not just dabbling on either side of this aisle, and ultimately it’s all just technology in a way, but I think there are still clear dividing lines for most people. Have you always just been pulling things that interested you into your toolbox or were these separate strands that eventually braided themselves together?
Dan Trueman: What you say about them both being technologies is totally true and has been something that I argue all the time. It’s really not that different. That said, I actually think that the reason I have done both over the years is that I just like them both. I’ve played the fiddle forever; my fiddles hang on the wall, always waiting to be played. So if what I’m doing is not as interesting as playing the fiddle, then I usually go play. But with the newer technologies, I like something about the process of programming, in particular. I actually like programming—writing lines of code and having it work. It’s very satisfying. It’s funny: composing is hard and it’s hard to get a sense of closure writing a piece of music. When I finish a piece of music, there’s still a sense of things to work on and trying to come to terms with what it all means. Writing code, you just write it, and it works or it doesn’t. I like that. You take the fiddle and try to imagine things you could do with it that you can’t really do right now. For example, I play in a lot of different tunings, but once you’re in one you’re sort of stuck. I remember years ago wishing that the strings could be retuned on the fly, so that while I was playing, I could go to a different scordatura. Wouldn’t it be cool if I could hit a pedal or something and then it would just change? I suppose you could do that mechanically, but building instruments in the digital realm allows you to try things like that. So in a way I’m inspired by the limitations of these real physical things, but trying to come up with new ways of just being musical.
MS: In a way, is it fair to say that straddling this roof point, this man and machine, acoustic and electronic, often encapsulates what your music is “about” or at least hints at some of its creative impulse?
DT: A lot of times I think that’s true. Certainly in this So Percussion piece [neither Anvil nor Pulley], I was specifically interested in exploring this space between moving and training as musicians do, and seeing what machines can do, and putting them against one another. I also embrace certain elements of computer stuff that I think are native to it and are sometimes avoided. For instance, the glitchy stuff which now is sort of common in a lot of music—for a long time, we always avoided things like that. But I like things that are native to the electronic or the digital realm, and I like to foreground those and pit them against more carbon-based things. Paul Lansky always used categories—I think it comes from Star Trek, actually—like “this is carbon-based music” and “this is silicon-based music.” There are identifiable features of both, which I like to have present all together at once. I don’t feel like it needs to be one or the other. But even more to the point of your question, I really like to see how new instruments that we might build engage with how we like to make music. So you take, for instance, So Percussion—these people who have years and years of experience playing in a certain way, with very virtuoso approaches to engaging with rhythm and time. Then you find that it really doesn’t even line up with how we represent time on paper or with a metronome or on a piece of software. To see what happens when we push those against each other is definitely something that’s really been at the center of my work for the last seven or eight years.

In Trueman's Princeton office, where old ideas meet new technology


MS: I was wondering about that issue of notation, because both from your folk side and also the technology side, it seems like there must be a certain tension between what’s in your ear and what’s on the page.
DT: Yeah, and the fiddle music is particularly interesting in that regard. Actually, some of it relates to the very specific fiddle music that I play. For years, I’ve been playing this Norwegian fiddle—the Hardanger fiddle—and a particular kind of dance music from a part of Norway called Telemark. Any type of musician or music lover who isn’t familiar with that music is always scratching their head because you can’t count it the way we know how to count. You have to feel it in your body in a certain way. Swing may be the closest thing that we all know about in terms of it not really being quantifiable in a clear way. But in the case of this Norwegian dance music, it’s that times ten. So to represent it on a page is really difficult. And if you forget that it’s just a really rough approximation, and you start doing what the page tells you, you actually lose all the magic—this kind of warped sense of time that you get from this dance music.

So even apart from dealing with new technologies, just the issue of representing what’s happening when we’re making music with our bodies in a certain way, trying to represent that on a page with notation, is one challenge. Then when you think about building a new instrument, say with software, you always have to work with some kind of representation because computers are dumb. We need to tell them exactly what to do. So we write these lines of code, and they have to be totally explicit. In some ways, when you write a program, more than any other way it reflects the limitations of our understanding of how that music actually works. You write it down, and the computer does something exactly the way you told it—so it reflects how you understand that music—and you listen to it and go, “Huh, that’s not really quite right.” I love that, and I find that really super interesting. I think we can sometimes get too comfortable with how we think we understand things. For instance, when we talk about meter and rhythm, we assume that we build everything up from small subdivisions. This is basically accepted wisdom, and that’s how we teach people. But if you do that, and you apply it to this Norwegian dance music, it’s just wrong. You actually do violence to that music. So there’s something that we think we understand, but it’s not lining up with this kind of music that we actually make.
I guess what I’m saying is that I always feel like I have to make sure I’m coming back to playing music with my body and with other people, and trying to keep myself honest about how I think I understand things—so that I don’t let my representation of things sort of swallow or overly constrain the thing that actually drew me to it in the first place.

Sample code from neither Anvil nor Pulley


MS: This discussion of time and perception leads nicely to my next question, which is how you ended up with such a fixation on messing with your metronome. Twisted references to this little timekeeper crop up in a number of your pieces. Did you have a traumatizing experience as a student or something? What happened with the metronome?!
DT: I had an amazing experience with a metronome! This was with a digital metronome back when I was in college studying violin. I love practicing with metronomes—there’s something almost spiritual about it—and there’s a practice that I can get into where I’ll set the metronome for a certain tempo, I’ll work on something, then I’ll increment it up a little bit, and then I’ll come back down. This experience that I had, I was playing these sixteenth notes with spiccato with the metronome and gradually increasing the tempo. And I was really trying to make my sixteenth notes as even as possible, so I was attending really closely to the details. As the tempo went up, I noticed at a certain point that every time I stopped playing, the metronome would speed up. I’m like, something’s wrong with this metronome. It would happen every time I would stop. Then I asked someone to come in and said, “What’s going on? Is there something weird going on? Have we entered the Twilight Zone with this metronome?” At the time, I hadn’t really thought about it very much, but of course it makes sense. As we attend to things, our sense of how time passes changes. So I literally slowed down my experience of time.

I’ve met a few other people who have had this same experience, you know. So I’m confident I’m not just weird here. But ever since then, I’ve just been really curious about the power of mechanical time: how we measure it, and how we represent it. Nowadays we think, well, the metronome is right. I need to practice and get very good. A lot of our contemporary music these days I think reflects that. We play with very regimented types of pulses and beats. I think it’s in part a reflection of our acceptance of the metronome and, more generally, the idea of calculated pulse that we get from sequencers and so on. So we write pulse-based music, and it doesn’t have the same kind of flow and rubato that, say, 19th-century music has, where they were very skeptical about the metronome. So yes, I had a semi-traumatic experience with a metronome.
MS: So you’re taking very precise machines, and then you’re interested, well, not in imprecision, but in non-perfection I guess.
DT: That’s right. I’m very much interested in the dirty, crunchy areas around this mechanical sense of time. If I may say, one of my favorite places in this piece that I wrote for So Percussion comes right at the interface between the first and second movements. The first movement is this sort of jaunty fiddle tune, and the So guys, they’re grooving it, feeling it the way fiddlers feel it, at 120 beats per minute. Then the very last note, Eric hits this wood block that starts the metronome for the second movement, which is also 120 bpm. There’s this moment where it’s like, wow, they’re really playing at 120 bpm, but there’s a quality in the way this changes from this sort of, you know, it’s grooving, it’s tight, but it’s not this crrrk, crrrk, crrrk type of calculated pulse that we get from metronomes. There’s this twist that I feel every time we get to that moment where two ways of articulating that pulse come right up against each other.


MS: There does seem to be a remarkable naturalness between how you integrate acoustic and electronic instruments. Do you have a personal stash of rules or guidelines for how you go about doing that at this point?
DT: That’s a really great question. I make a lot of things, and then I play with them. I think my instincts have gotten better over the years, but I still feel like maybe eight out of ten things that I make, instead of nine out of ten, are really boring. I’ve gotten a little better at anticipating, but basically I’ll have an idea: Wouldn’t it be cool to do this? Wouldn’t it be cool to play with an instrument that can do this? And then most of the time, I’ll code it up in some way, or maybe it will involve some hardware, and I’ll make it and literally, within seconds often, say, “Aw, geez. That’s boring.” Or, “I really need it to be able to do this.” And then I’ll go back and code some more.

That was actually the thing about this So Percussion piece. The second movement, this 120 bpm movement, is the first one that I wrote. I spent about three months banging my head against the wall, trying to find the thing that I thought would work for this because I wanted something that really engaged their incredible musical training and something indigenous to the computer which was pushing against them. And that’s not an easy task, but still—three months in! This is terrible. Then finally, I built this one thing I wanted to try and it was maybe three days later that I came up for air because I started playing with it and—wow—this is so fun. I wish that I were better at predicting. Maybe if I were more analytical I could come up with some principles that I could write about, but I still pretty much follow my nose on these things. Basically, I’m aiming to make something that is physically engaging in some way and that’s going to be interesting for the player to do.

Actually this does get to a fairly big thing for me. Ninety percent of my musical life is spent by myself playing fiddle or maybe trying to hack through some Bach at the piano. You know, not performing. And my enjoyment of music really is primarily there [off stage]. I think we forget that sometimes. There’s such an emphasis on performance and making things that are always going to be presented that the role of the players and their experience can really get lost. They’re executing something, as opposed to engaging with something. So one of my first principles in designing these types of instruments is really, well, what’s this going to be like for the player—is this going to be super engaging in some way to play? Again, that comes back to my fiddles hanging on the wall. I can just pick one up and play some tunes, and that’s really great. So anything else that I do, I want it to be at least similarly engaging—making me feel like I’m actually, and with some urgency, involved in the music-making process. That’s hard, but I try to develop an instinct for making things that will accomplish that. Most of the time I miss, but occasionally, I get something that, wow, three days later I’m still doing this. So there must be something right here.


Trueman’s 5-string Hardanger-inspired “5×5 fiddle” built by Salve Håkedal

MS: Do you trace that pretty directly to the fact that you’re an active performer, so you’re especially sympathetic to those considerations?
DT: I suppose that may be true. I grew up playing music, but I came to composing fairly late. My older sister is a composer and so I thought, well, that’s what she does. I can’t possibly tread on her turf. So it really wasn’t until I was almost 22 that I started writing music. Being a fiddler, I always loved playing chamber music, and I actually mean in sort of the old-fashioned sense, sitting in somebody’s living room and making music together. I grew up sight-reading music with my parents. They built a harpsichord and a clavichord and so we had these instruments in the house. My older sister was a terrific musician and so she’d play piano or harpsichord, my parents would play recorders, and I’d play violin. So there was something about that—it was something that we’d do, not something that we were rehearsing to perform to impress people. That’s what makes me tick and that is so marginalized now. In the new music world and in the electronic music world, it’s like the presumption is that, well, we’re aiming for performance. And people don’t even talk about it! I hope I’m not saying something too obnoxious here, but I just feel like maybe we’ve sort of lost hope that music making is something that people do—a vital and continuing thing. But then, I hang out with these fiddlers. The fiddle world is this incredibly vibrant place, and they’re always putting on shows and performing, but I still think they live for being in somebody’s kitchen playing tunes together. That to me is the most incredible thing, and if I’m going to do this with new technologies, well, it better at least have a chance of succeeding there.

MS: I think this kind of musical engagement happens so often among musicians behind closed doors, maybe especially among players who don’t end up pursuing professional careers, but it’s not something we often talk about.
DT: And when we get to a certain level, the assumption is, well, we’re putting on performances. I feel like our performances would be better if this part of it were well tended to. I mean, I love putting on shows and rehearsing a piece and really trying to have it be as awesome as possible. But I also like when I hear fiddlers who get up and play tunes every day, and I’m just with them in their living room.
MS: It seems to me that this attitude was perhaps further ingrained through your somewhat unconventional string training, right? Your violin teacher early on seems to have had a rather long-term influence on your own sense of ambition and career and the deeper artistic goals you developed.
DT: Oh, boy. Yeah, so this is Irene Lawton back in Stony Brook, on Long Island where I grew up. I remember her telling me once, “You know, I don’t care if you become a professional musician. In fact, I’d rather you not become a professional musician. But if you play for five minutes a day, and you go for that sound, and you do that for the rest of your life, I’ll be very happy.” Which was kind of an incredible bottom line in a way because on the one hand, you say, I’m not trying to be a professional musician. But on the other hand, I’m trying to stay awake in a certain way. She was always talking about being awake to the moment. She also had this incredible way of undermining one’s ego. I had a very healthy ego when I started studying violin with her, and in really good ways I think she wanted to make sure that I was making music for music reasons, and not just because it fed my ego.

I think she actually did some damage to me as a performer in the sense that I became very insecure. I had to reinvent myself and start playing other kinds of music because the notion of standing up and demanding attention and impressing people—basically it was equivalent to feeding my ego. I’m actually very appreciative of that, but I still wrestle with it. It’s funny, these lessons that we get from an early age. They leave a mark. My wife is a guitarist and she teaches. She has students for eight, nine, ten years. I think for a lot of them, she makes an incredible mark on them. Whether they become professional musicians or not, just from my own memories and my own experiences, it’s amazing how long that lasts.

Trueman as a young violin student


MS: Seriously. We like to hold up the fact that music lessons might improve a kid’s math scores, but there’s so much more beyond that in those intimate mentorship moments—deep life lessons that come out of that period and stick around.
DT: Totally. As my years went on with Irene Lawton, we had these long lessons and I’m not kidding you, we’d do yoga. This relates to everything we’ve just been talking about. She was very aware and interested in our bodies and the relationship of the body to the instrument. Doing some yoga to wake up your body before sitting down and doing this is really a natural thing with the violin. It’s really important. The next thing we would do would be sight-reading. We would sight-read duos. So again, it was this very in the moment, almost trying to survive type of thing. A little bit like improvisation, but from another angle. That was where the priority was.

The stuff with the body I’m still really interested in with new technologies—the music really lives in our bodies in certain ways. I think of particular fiddlers, for instance, and the way they do ornaments—two fiddlers in particular, Brittany Hass and Caoimhín Ó Raghallaigh. Both of them have this beautiful way of making ornaments, and it’s in their hands in a way. You can write it down—you can analyze it however you want—but in some way, it’s about the whole thing and how it’s put together. I developed an appreciation for that from Irene Lawton early on, because she really was all about the bow arm, thinking about the sound you were conjuring from this instrument and how it related to your breath, your shoulders, the weight of your arm, the joints in your fingers, and so on. It was all tied together.
MS: So how did composing finally get on this palette of interests for you?
DT: I started composing little bits of things when I was 12 or 13. I remember having sheets of paper with big notes on it. Actually, maybe I was even younger than that. But like I said, my sister was a composer—super talented—and also writing lots of music very early on. I also had all of these inherited notions about what it means to be a composer. You have to play piano, of course. I took some piano, but I eventually quit because I didn’t want to do it. I wanted to play violin, and there are only so many hours in the day, so now I can’t be a composer. But I tried little things here and there. I was really active as a chamber musician in college, and in my last term at Carleton College in Minnesota I needed a couple of extra credits. So I took a composition class with the same composer that my sister had taken composition with, actually: Phillip Rhodes, a wonderful composer and incredible teacher.

I remember him, sort of a stern guy; I still call him Mr. Rhodes. After the first or second class he took me aside and said, “Well, I’m not sure you should be in here. You know, everybody else has got a lot more experience.” “Just give me a couple of weeks,” I said. “Let me try.” And so he let me stay in. It was a revelation. Just to give you an idea of where I was: I was 22, just learning how to compose. I brought him one of my many things that were 15 seconds long. I couldn’t get any further. So I brought this string quartet to him—absolute beginner stuff. Stuck at 15 seconds. He looks at it, and he’s got this furrowed brow, and he says after a few minutes, “Well, you know, Dan, you don’t have to have all the instruments play all the time.” Ohhh! So all of a sudden, I can write, you know, a minute of music. After that lesson, I went home and made a whole list of things like that—just a list of ideas to remember when you’re stuck. All of a sudden, the floodgates opened, and I started writing a lot of music.
I started getting over all these hang-ups. I remember one young composer saying, “Oh, well, every good piece of music should have all of its key elements in the opening moment.” I was very impressed by that for a while, to a debilitating extent. All my pieces had to have this. I realized eventually, of course, that’s not true. That is just baloney. It’s funny how we get these inherited bits of wisdom about what it means to write music. In the end, we all have to find our own way.

Sample score page: “Feedback (In which a Famous Bach Prelude)” from neither Anvil nor Pulley

MS: Do you feel that you had a musical home then? Considering all your different interests, and then coming to composition late, you could have easily felt somewhat isolated in a sense, or deeply divided at least.
DT: Well, my whole family is musical. My parents are both amateurs, but both very accomplished. Then my sister, she’s one of these annoying people who can do anything. You hand her an instrument, she’d be able to figure it out and play well on it in short order—something I’ve never been able to do. So I was surrounded by it. My dad’s a physicist and my mom’s a painter, but they were building harpsichords. I mean, I thought that was normal. They were building harpsichords, and then I eventually inherited the task of tuning these instruments. So having music around all the time, but also having the notion that these things are things we can mess with. It all kind of makes sense to me now that I say it, because I feel like that’s sort of what I’m doing now. I’m getting under the hood, but also just wanting to make music all the time. And that was there from the beginning.
MS: You’re often an active participant in your pieces or, when you’re taking a slightly more traditional composer role, you are at least very close to the performers bringing the works to life. Has there been or will there ever be much music by Dan Trueman that does not include this particular type of intimacy?

DT: Yes. Well, maybe. We’ll see. I do find it most compelling to write for people whom I know or whom I feel like I’ve got some connection with. With So Percussion, I was so engaged by how they make music. And when I met them, I liked them as people, and so I knew I wanted to make music with them. So there’s that element of it. I remember Bill Frisell telling a story like this about meeting this pedal steel player at a party once. He didn’t know anything about him—didn’t even know what he played—and within ten minutes he said, “I know I’m going to play with this guy.” I was really impressed by that. I think it’s true. There are people you just want to make music with. The notion of me just making a score and sending it off, I don’t do it very much.

The other question is me being in it, and I’ve been wrestling with that for a long time. For many years, I mostly only did that, in part because I was making pieces where I would be playing either electric violin and laptop, or I’d be playing Hardanger fiddle. I was very adamant at times—I don’t care that this isn’t practical. I’m going to make these pieces because these are really interesting, idiosyncratic places that I want to go—so I know that these pieces are going to be really hard for anybody else to do. Maybe impossible, because they require a Hardanger fiddle—how many people have one of those?—or some weird software that, at least at the time, would have been sort of impossible to share with anybody. But I did it anyway, because I didn’t want to be governed by some lowest common denominator. I still feel that way.

Trueman and his beloved Hardanger fiddle

There’s an accepted wisdom that we want to maximize the number of performances we get. We make things that as many people as possible can do, with as few complications as possible. That’s fine, but I feel like, wow, there are really some interesting musical places that we rule out by insisting on that. So I go down my little rabbit holes and make these things that only I can play, or that require six-string violin and sensors in the bow and some weird custom software, and sure, nobody else does them. They’re not even necessarily a model for somebody writing further pieces. Some of that bothers me now, but it changes from month to month. I have this sort of idealistic belief sometimes that if I make something, it may be really hard to do and personal and idiosyncratic, but if it’s really great, then at some point, somebody else is going to want to do it, and they’ll figure it out. NewMusicBox had this thing recently about software, in particular. And that’s related to this in a sense. How do you make things that you can share, that can at least be usable for another year, or five years, or ten? If you are in it yourself, you can tend to it. If you want other people to do it, then almost by definition you have to make things less adventurous.

So there’s a tension there between wanting things that can go far and stay around, and wanting to just simply go for it and see what it is that you can find in this weird place. Then there’s the other fact that I really like playing music. It’s only in the last few years that I’ve started to have experiences as the composer where I sit in the hall and actually enjoy myself. For many years, I basically hated that more than almost any other musical experience. Now, like when So Percussion plays my piece, I love being there. They’re just so great and it always turns into something that I can’t actually believe exists.
MS: Has that shift required you to make any of those composerly concessions you’ve mentioned?
DT: I’m just finishing some pieces now that Adam Sliwinski from So is playing. They are these pieces for what I’m calling prepared digital piano, and they actually go at this whole thing from a lot of angles. I’m really excited about it because they’re for laptop and 88-key MIDI controller. That’s it. Sets up in about 30 seconds. Software—you open it up, it just works. It’s notated in traditional notation. Any pianist can sit down and play this. I actually can’t play these pieces. I feel like for the first time, I’ve got something here that is idiosyncratic and lets me explore these things in a way that I like to, but also it’s totally easy for other people to do. I’ll be able to distribute that software and I think that lots of people could play it.
MS: I want to focus in a little further on that idea of sharing and software expiration. I was listening back to some of your decade-old work, the Interface recordings in particular. Considering that the hardware and software used to create some of this music may have a much shorter shelf life than the violin, are you anxious about compositions in your catalog that even you can’t really play anymore?

Trueman’s hemispherical speaker design on display

DT: I have one like that, in part because I made it for six-string electric violin, and I don’t play six-string electric violin anymore—and I don’t really want to—but I don’t know how to do that piece without it. That’s funny because that’s actually not even a question of software. On the Interface record you mentioned with Curtis Bahn, there are even all these old sensor bows that I made that are now in various states of disrepair. I could never get back to that place where I made music with Curtis in that way. I have a couple of thoughts about that. One is that, in that case, it was all improvisational. We were building these rich software instruments that we’d improvise with, and we really felt like the instrument building was part of the whole process. So the thought that we needed to be able to do this again really didn’t matter. We didn’t care about reproducing things.
In fact, we’d rather the next gig have the next version of our software and have our sensors take us to a new place. I really like that about working improvisationally with software and viewing instrument building as part of the compositional and performance process. It’s like Coltrane working on his licks in the bathroom during intermission. He was actively building things into his hands that he could then use in the second set. So we would try to do the same thing with software—the next set is not going to be the same as the last one.

Regarding old software, it’s not even just that there might be objects in the Max patch that need to be updated or something like that. Back in the day, I was building things where the composition was really in the specific presets of how things were wired and the parameter values, so that over the course of a piece I might change a hundred numbers 20 times to slightly different values because it would basically create a different type of texture or a different type of response. They’re actually really hard to reproduce. I don’t do it anymore. Now I make the instruments I make, and whether consciously or unconsciously, I sort of avoid things that I think are just so fragile that I’m going to lose them in a year or two.
MS: So considering what you’ve just said, ultimately how concerned are you about issues of preservation and protecting your catalog?
DT: Okay, now you’re provoking me here because I’ve been known to rant about this. I sometimes talk to student composers about notating their music. So much of the time it’s about longevity—how are people going to play this music when I’m gone? I really don’t care! I mean, to me, it’s actually kind of bizarre to worry about whether people are going to play our music when we’re dead. I understand this hope for immortality and so on, so in some way, yeah, of course I want my kids to understand what I’ve done and ideally to appreciate it in some way. But the notion of prioritizing that in the creative process really does seem problematic.
The history of the new technology is that sometimes it’s actually a question of, well, this doesn’t work next week, and I do care about that. I want to be able to make things that I can build on, and that I can revisit. So for instance, this even comes down to languages that we can use. A lot of people use Max/MSP, which I use a lot. Then there’s another language called ChucK that I use a lot. These days, I mostly work in ChucK because it’s a text-based language, and I find that I can revisit my work there more easily. I can read it. I can understand what I did. I can reuse it. It basically comes forward in time with me in a way that I struggle with Max. In Max, I’ll look at a patch that I made yesterday—how does this work again? Let alone a patch that I made five years ago. So it’s not so much caring about the longevity of the catalog, because I really do think that’s sort of preposterous, but I do want to feel like I can build on my own ideas in productive ways.
MS: Somehow, we’ve gotten all this way and haven’t even referenced the Princeton Laptop Orchestra or the hemispherical speakers you designed. Though the ensemble has been around for a while and is even imitated elsewhere, I suspect for many people that the name still might conjure images of a bunch of students gazing blankly into the blue light.
DT: Totally.
MS: So would you mind taking us behind the curtain a bit there, as far as how the laptop orchestra really functions and what kinds of music it is able to create and perform?
DT: The whole thing with the Laptop Orchestra for me was to build a context for experimenting with making music together with more than one or two people—trying to find new ways of making music with new technologies. I’d been teaching computer music here [at Princeton] for a couple years, and teaching it the way it generally has been taught—and still is taught, to a certain extent: You work in isolation in a studio, you make your track, and you share it with somebody. That’s all fine and good, but to me as a fiddler, it felt very dead, in a way. I would make something and then put it on a concert, sit in the dark and listen to it. It’s hard for me to get excited about that. I wanted to get this stuff out of the studio, but how do you do that? I had done a lot of laptop improv over the years, where you get a bunch of people, you plug into a mixer, and you all come out of a couple speakers. Nobody knows what anybody’s doing.

I’m kind of conservative, I guess. I wanted it to feel old fashioned so I could be making music with somebody else, attending to what I’m doing and aware of it, while hearing what somebody else is doing. That’s hard to do with conventional speaker technology. That was a project Perry Cook—who’s this great computer music researcher and musician—and I worked on together, and ultimately it resulted in building these spherical and hemispherical speakers that radiate sound in a room roughly the way acoustic instruments do. So you can put one right near you, even on you. I’ve got one that I sit in my lap—I have sensors on it and I bow the thing—and the sound comes right out, so it’s like a cello in some ways.

The idea of the Laptop Orchestra [takes that further]. What happens if we’ve got four people, or six, or 40? With the show we did with Matmos, we had 30 or more laptop people on stage with these speakers, Matmos, and So Percussion all going at it. It has been great to do it with students because I still feel like we’ve only explored some of the corners of what we can do with this. The students come in, and they don’t have a whole lot of preconceptions about what it is we need to do, so we can try all sorts of things.

So that’s it in sum: a group of people each with a laptop, a hemispherical speaker near them for their own sound source, and maybe some kind of interface device that they’ll be using to physically engage with the sound. It has evolved and spread, and we even have a pro-level one that we started here called Sideband that is made up of former and some current graduate students and faculty and staff—between eight and twelve people at any particular performance, sometimes as few as six. We started that five or six years into the whole process of doing laptop orchestras, because you can only get so far when you’ve got new people every year. You can’t really accumulate expertise. With Sideband, we really want to see how far we can push this. That for me is where the laptop orchestra is now, really trying to develop small communities—bands, basically—where we’re trying to accumulate experience and repertoire that we can get better at and see where it takes us.

Sample score page: “120bpm (Or, What is your Metronome Thinking?)” from neither Anvil nor Pulley

Tether notation explanation for the piece.

MS: It strikes me every time you relate one of these anecdotes that while a lot of composers talk about constraints being creatively fulfilling, you’re inventing your own instruments to make a piece. It seems like that would introduce some inherent challenges. I get how that would be incredibly inspiring, but it also means that on your palette, anything is possible.
DT: Right. That’s why composing for laptop orchestra, or laptops in general, is so hard. I think one of the reasons I like working with percussion so much is that some of the questions are similar. If you’re writing for string quartet, you know what you’re writing for. If you’re writing for percussion ensemble, well, you’ve got to make a bunch of decisions about things, right? Percussionists in general are really adventurous. You can give them anything, and they’ll do something with it. Laptop orchestra is one step beyond that. Not only do we have to write the piece, we’ve got to build instruments and learn how to play them. We’ve got to teach people how to play them. We have to invent notation that makes sense for those instruments. It’s totally daunting. I mean, I love it, but I’m only up for one every year or two because it’s just so hard to do.

Though there’s a sort of myth about computer music and computers, that they can make any sound you can imagine. I actually think computers have a really limited vocabulary. Of course, you can record something and then you can do stuff with your recordings. That’s great. But basically it’s a very small palette, and a lot of times the palette is just not very interesting or you might have an allergy to it. For instance, a lot of people won’t do anything with FM synthesis, because ‘80s popular music is just marked by FM synthesis and it sounds dated. That’s a problem with a lot of computer music stuff. The vocabulary is really small, so either you embrace it or you try to find something that works for you in some way. But it’s just stupid hard.
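The FM synthesis Trueman mentions is simple enough to sketch. Below is a minimal, hypothetical illustration of the classic two-oscillator FM pair (the technique behind many of those dated ’80s timbres); the function and parameter names are my own, not anything from the interview:

```python
import numpy as np

def fm_tone(fc=440.0, fm=110.0, index=2.0, dur=0.5, sr=44100):
    """Two-oscillator FM: a sine carrier at fc whose phase is modulated
    by a sine at fm. The modulation index controls how many audible
    sidebands appear around the carrier, i.e. how bright it sounds."""
    t = np.arange(int(dur * sr)) / sr
    return np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

# A higher index yields a richer, more recognizably "FM" tone.
tone = fm_tone(index=5.0)
```

Sweeping `index` over time is what gives FM its characteristic brassy attacks; holding it constant gives the static, bell-like quality many listeners associate with the ’80s.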

MS: Until I read the interview you did with Cycling ’74 on your programming work, I don’t think I truly grasped the depth of your knowledge on the programming side; as a string player myself, I had perhaps just been more focused on your violin side. Though for a man with your background, this diversity of intellectual curiosities is perhaps not terribly surprising.
DT: Like I was saying earlier, I like programming partly because it scratches an itch. I loved studying physics—my dad is a theoretical physicist. I think it gave me a little bit of fearlessness—I never thought I couldn’t, because, well, I’d majored in physics!
MS: And maybe it even helps explain how you ended up becoming a fiddle player interested in seriously complex folk music and a computer programmer who wants to make sure the music preserves clear human interaction.
DT: That’s why I’ve been so pleased with this So Percussion piece [neither Anvil nor Pulley], because I feel that’s come across. It’s got all these things in it, and I’m really happy about that. But yeah, I guess I’m kind of a nerd. I’m drawn to the weird parts of it—probably more than most.

Trueman’s Norwegian Hardanger fiddle

Sounds Heard: Computer-Assisted

Claire Chase: DENSITY
New Focus
The newest album by flutist and leader of ICE, Claire Chase, uses the concept of density as an overarching theme. Varèse’s 1936 work Density 21.3 serves as the springboard, and from there she explores many definitions of density in music. The various-sized flutes snowball upon themselves in all of the other works on the disc: the multiple linearities we know from Steve Reich and Philip Glass; fragile, gauzy layers of texture from Marcos Balter; laser-focused swimming with sine waves from Alvin Lucier; they even transform into a noisy heavy metal guitar in Mario Diaz de León’s Luciform for flute and electronics. As pristinely produced as this recording is, don’t miss a chance to hear Chase perform these works live—her performances are riveting, and just as tight as those on the album.



Chris Arrell: Diptych
Beauport Classical
Performed by: Boston Musica Viva, Clayton State Chorale, Sonic Generator, Jacob Greenberg, Lisa Leong, and Amy Williams.
Chris Arrell’s bustling echo electric, performed by Sonic Generator, is one of five absorbing works on a portrait CD of the composer’s music. Scored for clarinet, violin, cello, vibraphone, and computer, the piece uses the story of Narcissus as a creative stepping-stone. The electronic part is derived from modeling the spectral content of the acoustic instruments, creating transformed electroacoustic “images” of the instruments, a bit like the distortions that happen in funhouse mirrors. The restless instrumental textures emit long metallic sonic tails that ripple and swirl throughout the open spaces of the music, wrapping a diaphanous film of electronic counterpoint around the soundscape.
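The analysis-resynthesis idea described here, deriving an electronic “image” from an instrument’s spectral content, can be sketched in a few lines. This is not Arrell’s actual process, just a crude stand-in under obvious simplifying assumptions (a single FFT frame, pure-sine resynthesis):

```python
import numpy as np

def loudest_partials(signal, sr, n=5):
    """Return the n strongest spectral peaks as (frequency, amplitude)
    pairs: a very rough stand-in for spectral modeling."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    order = np.argsort(spectrum)[::-1][:n]
    return [(freqs[i], spectrum[i]) for i in sorted(order)]

def resynthesize(partials, dur, sr):
    """Rebuild a sinusoidal 'image' of the analyzed sound from its
    strongest partials, with amplitudes normalized to sum to one."""
    t = np.arange(int(dur * sr)) / sr
    total = sum(a for _, a in partials)
    return sum(a / total * np.sin(2 * np.pi * f * t) for f, a in partials)

# Analyze a test tone, then rebuild a one-second sinusoidal image of it.
sr = 44100
sig = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
image = resynthesize(loudest_partials(sig, sr), 1.0, sr)
```

A real spectral-modeling workflow would track partials frame by frame and transform them (stretching, detuning, morphing) before resynthesis; that transformation step is where the “funhouse mirror” distortions come from.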



Richard Teitelbaum: Piano Plus
New World
Performed by: Richard Teitelbaum, Ursula Oppens, Aki Takahashi, Frederic Rzewski
Piano music is the focus of this album from interactive electronic and computer music pioneer Richard Teitelbaum. Specifically, technology is used to extend the range of the acoustic piano and to introduce textural complexities that exceed the ability of normal human performance. The six pieces were written between 1963 and 1998, and feature the composer himself playing three of the works, while the others are performed by contemporary music pianist superheroes Frederic Rzewski, Aki Takahashi, and Ursula Oppens. The piece presented below, SEQ TRANSIT PARAMMERS, was conceived with the intention of the player collaborating creatively by performing compositional tasks to determine the direction of the music, à la Cage, Brown, and Tudor—“a kind of toolkit for real-time interactive composition,” writes the composer in the liner notes.


Matthew Burtner: Engaging the Natural World


The music of composer Matthew Burtner is in large part inspired by his childhood experiences of the Alaskan landscape where he grew up. That influence applies not only to the content of the music, but to the way it is created. At times during his childhood, his family lived in extremely remote locations in the North and managed without electricity or running water. “So now, I really love to be surrounded by electric things!” he admits, laughing. Burtner can be found knee-deep in cables, computers, and electronic gear pretty much wherever he is—whether directing the Interactive Media Research Group at the University of Virginia where he serves as associate professor of composition and computer technologies, working with his organization EcoSono, which seeks to foster connections between the arts, technology, and environmentalism, or presenting solo performances on the metasaxophone, a computer-enhanced saxophone of his own creation.

Much of his compositional work over the past 15 years has focused on a triptych of multimedia operas, each based on one of the three different geographical regions of Alaska where he lived as a child. The third and most ambitious opera is Auksalaq, co-created with media artist Scott Deal, which employs an interdisciplinary team of scientists, media technologists, artists, and musicians to form an interactive, multimedia commentary on the environmental changes taking place in the far north of Alaska, and the long-term, worldwide effect of those shifts. Described as a “telematic opera,” the performance, which involves a combination of instrumental music, computer sound, spoken and sung texts, and extensive video footage of scientific data and dramatic arctic landscapes, takes place in several locations which are connected via the internet. The audience in each venue experiences part of the performance in person, as well as performances from other regions which are projected onto video screens. There is even an app (prompting audience members at a recent Washington, D.C., performance to coin the term “appera”) which allows people at all of the locations to share their reactions to the drama via texts that are rendered into a constantly moving thought cloud on a screen for all to see, and from which the main character, sung by soprano Lisa Edwards-Burrs, chooses words with which to construct her final aria. Everything about Auksalaq is intended to highlight the concepts of remoteness and interconnectivity, which by the end of the work do not seem to conflict in any way.

Working with and—literally—through the environment is an integral part of Burtner’s musical aesthetic, which he calls ecoacoustics. His interest in the perception of sound traveling through natural materials—such as snow, wind, and sand—has resulted in a number of compositions that make the natural world part of the musical ensemble. For example, Syntax of Snow, created for glockenspiels and fresh snow, is ideally performed outside, as is the work Sandprints, for human-computer ensemble, whistling, and sand. It requires microphones to be buried under the sand to amplify the movements of people manipulating the sand above ground, which are in turn manipulated by a series of computational processes to form the musical material. He has created a large umbrella for this methodology by turning ecoacoustics into an entire course of study, part of which is a performing ensemble called MICE (Musical Interactive Computer Ensemble) which has performed Sandprints in the Namibian desert, as well as created and performed numerous other works. He has also founded a non-profit organization called EcoSono with the mission of spreading the integration of experimental sound art with environmentalism through education, engagement, and artistic production.
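Burtner’s specific computational processes in Sandprints aren’t described, but the general pattern of turning an amplified environmental signal into control data for synthesis often begins with something as simple as a frame-by-frame envelope follower. A minimal, hypothetical sketch (the function and parameter names are mine, purely for illustration):

```python
import numpy as np

def rms_envelope(signal, frame=256):
    """Frame-by-frame RMS energy of a signal (e.g. from a buried
    microphone), usable as a control stream for synthesis parameters."""
    n = len(signal) // frame
    frames = signal[: n * frame].reshape(n, frame)
    return np.sqrt(np.mean(frames ** 2, axis=1))

# Map the envelope of (simulated) sand noise onto a synthesis parameter,
# here a hypothetical filter-cutoff range in Hz.
env = rms_envelope(np.random.default_rng(0).normal(size=4096))
cutoff = 200 + 5000 * env / env.max()
```

In an actual piece, each such control stream would be smoothed and routed to whatever synthesis process the composer chooses; the point is only that the physical gesture in the sand becomes the musical material.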
When questioned about how he reconciles the potential conflict between his passion for technology and for the environment, he is quick to point out that humans have always needed technology to survive in the world; we need clothing to protect us from the elements, boats to move across water, snowshoes to travel over arctic terrain. He sees the potential of technology to assist in bringing people closer to nature, rather than separating them from it, and is intent upon expanding those early childhood experiences listening to the sounds of nature into a shared universal perspective. “I see the issue of climate change as the defining issue of the 21st century. As an artist I think the best thing I can do is to try to engage with the public about the issue, and try to activate more and more voices talking and thinking about it.”

Alien Music

In the course of my grand conference-laden tour of London this month, I attended the CHIPS Symposium last week, which explored the performance practices (and problems) of artificial performers in mixed human-computer ensembles. Too many fascinating ideas were presented to summarize here, but I’d like to zoom in on one provocative statement.

Most of the symposium was devoted to attempts to make artificial performers more closely approach human musicality, certainly a valuable endeavor (though not without its controversies). Tim Blackwell of Goldsmiths University, however, proposed an alternative (though not mutually exclusive) goal: the creation of an “alien music” with rules and aesthetics separate and distinct from our own.

This statement immediately resonated with me, and seemed emblematic of the compositional endeavor as a whole. After all, isn’t anyone who writes music in search of something “alien,” or at least novel? Aren’t we driven to create because what we want to hear doesn’t exist yet? As a kid, this particular passage from the book My Teacher is an Alien by Bruce Coville made a big impression on me:

I flinched as another burst of horrible squawking and growling sounded above me… I shivered. That noise was like a tiger running its claws down a blackboard; it felt like aluminum foil against my teeth. What could be making it? … I had noticed that the horrible noise was coming from a pair of flat pieces of plastic hanging on the wall. But it wasn’t until Mr. Smith started ‘singing’ along with the sound that I realized the plastic sheets were speakers. That hideous sound was music! Or at least what passed for music wherever my alien teacher had come from.

A few years later, I discovered Iannis Xenakis, and ever since I have always imagined “Mr. Smith” singing along to Mycenae-Alpha.

I hated the piece at first; I could not imagine why someone would willingly subject themselves to it. And yet, something kept me coming back to it. I’m not sure why. Maybe because I wanted to understand something that seemed senseless. Maybe I had something to prove. Either way, the abstract “score” included in the liner notes provided a key, a guide. Hearing the sonic shapes transmute and transform in concert with the lines on paper…well, it changed me. And I found, to my surprise and astonishment, that I liked the piece. Was my musical taste really so plastic?

But then Xenakis isn’t really “alien” after all; Mycenae-Alpha is still the product of a human mind, albeit with some computer assistance. What about something more artificial, or even accidental?

In this clip, taken from a playthrough of a corrupted Super Mario Bros. cartridge, something like music seems to emerge from the speakers. But here the music is a byproduct, an incomplete representation of the information contained on the cartridge. And yet I find it weirdly compelling in the same way Xenakis was to me on first listen. (Not everyone feels this way, as made clear by one YouTube commenter’s plea to “KILL IT WITH FIRE.”)

When I listen to this kind of music, I imagine I am like an animal listening to human music, perceiving some dim reflection off a distant surface. Currently, the cognitive mechanism that allows us to perceive beat patterns is thought to be unique to humans, and one cockatoo named Snowball.

But maybe we don’t need to go into the animal kingdom for an alien understanding of music. On MetaFilter, user KathrynT captures one child’s remarkable reaction to Penderecki:

Some years ago, I was listening to music with my friend Emma and our respective husbands while her 5-year-old autistic son played elsewhere in my house. Well, we were trying to listen to music; every time we’d put something on, Philip would come rocketing out from wherever he was playing and shut the music off before rocketing back to playing the Star Trek theme on the piano, or whatever it was he happened to be doing. (Avenue Q didn’t make it through the first bar.)

But when I put on the Threnody [for the Victims of Hiroshima] as an illustration of how complex Penderecki can be, and how music can be extremely unpleasant to listen to but still a powerful work of art, Philip came creeping into the room. He walked up to the stereo, but instead of turning it off, he reached out and put his hand on the speaker. Then he reached across and put his other hand on the other speaker. And he stood there, transfixed, absorbing the music with his body as well as his ears, for the entire duration of the piece.

When it ended, he turned around and shot back to the piano without a word or gesture and resumed his previous activities. But he was clearly experiencing that piece in a very different way than most people do; I wish I knew what it meant to him.

But is it so different, when musical taste varies so much from person to person? I wish I knew what it meant to you.

Sounds Heard: John Bischoff—Audio Combine

John Bischoff is a composer celebrated for his work at the cutting edge of live computer music, explorations that can be traced back all the way to the late 1970s and his experiments with his first KIM-1. Audio Combine, the recent New World Records release of Bischoff pieces spanning 2004-2011, is an undeniable reminder that, though his roots run deep, his music hasn’t been anchored.

As a recorded document, Audio Combine is one of those discs well suited to the “dark room/good headphones” listening experience, each track representing an opportunity to get lost inside a foreign landscape. Bischoff’s live performance of the music was recorded at the Mills College Concert Hall and mixed by Philip Perkins, in collaboration with the composer. Due to the unique way Perkins staged the recording—via three microphones placed around the hall in addition to Bischoff’s direct feed—there is much in the sonic field that the ear feels compelled to “look” at.

To begin with, the title track plays a teasing game of Whac-A-Mole using a host of delicate sounds—most memorably the plucked pitches of a music box—which slide into view with a gauzy grace before slipping quickly around the next corner and out of earshot. Sidewalk Chatter is a bit more glitch and crackle, while Local Color is built around the ring of struck metallic tones and the wavering and decay of pitches. In all of the aforementioned cases, the music is careful in its development, never overcrowded with sound or a blurry chaos of ideas. Bischoff remains patient, not afraid to punctuate with silence. There is air left in the room for reflection and exploration. It’s a framework taken to its sparest extreme (and, frankly, most Lynchian eeriness) in Decay Trace. Perhaps because of that, it also proved to be a personal favorite. This restraint is then somewhat shrugged off for the final track, Surface Effect, when the combine goes into hyperdrive, all breakers thrown, and luxuriates in the sum of the sounds that have been generated in the course of the hour-long disc.

As might be expected when dealing with such non-traditional sound creation and construction, there is a great deal of interesting background to be explored beyond the recordings themselves, information thoroughly outlined in the liner notes penned by Ed Osborn. Though his behind-the-curtain insights definitely provide illuminating added value, it’s worth noting that they aren’t strictly necessary. Bischoff’s music is not a sterile sonic experiment reporting its results, but a kind of conversation between man, machine, and the surrounding environment. The method is intriguing, but the resulting sound world is really all that matters.

NMBx FLASHBACK: The last time I caught up with John Bischoff, New World Records had just released The League of Automatic Music Composers, 1978–1983. Bischoff, a co-founding League member, stopped by to chat about what made-at-home computer music involved before the invention of the laptop. You can listen to our conversation here: