Author: Matthew Guerrieri

“Automation Divine”: Early Computer Music and the Selling of the Cold War

It was a love song—not, perhaps, what viewers expected when they tuned into a July 1956 episode of Adventure Tomorrow, a science documentary program broadcast by KCOP, channel 13, out of Los Angeles. But, then again, it was a love song to a computer. Push-Button Bertha. Sweet machine. What a queen. Jack Owens, the lyricist (and, on that July 1956 episode, the performer), had taken his inspiration from the tune’s composer: a Datatron 205, the room-filling flagship computer of Pasadena-based ElectroData, Inc.

Bertha’s not demanding
Never wants your dough
Always understanding
Just flip a switch and she’ll go

Push-Button Bertha score

Just that month, ElectroData had been acquired by the Burroughs Corporation; Burroughs, an adding-machine manufacturer, was buying a ready-made entry into the computer business.[1] The Datatron had been programmed by Martin L. Klein and Douglas Bolitho, a pair of engineers. (Klein also moonlighted as Adventure Tomorrow’s on-air host.) “Push-Button Bertha” wasn’t Datatron’s magnum opus, but rather one of thousands of pop-song melodies the program could spit out every hour. Its inspiration was purely statistical.

We set out to prove that if human beings could write ‘popular music’ of poor quality at the rate of a song an hour, we could write it just as bad with a computing machine, but faster.

In fact, it was a perceived deficit of inspiration that supposedly prompted the project. Klein explained: “[W]e set out to prove that if human beings could write ‘popular music’ of poor quality at the rate of a song an hour, we could write it just as bad with a computing machine, but faster.”[2] Klein and Bolitho went through the top one hundred pop songs of the year, looking for patterns. They came up with three:

1. There are between 35 and 60 different notes in a popular song.
2. A popular song has the following pattern: part A, which runs 8 measures and contains about 18 to 25 notes, part A, repeated, part B, which contains 8 measures and between 17 and 35 notes; part A, again repeated.
3. If five notes move successively in an upward direction, the sixth note is downward and vice versa.

To those principles were added three more timeworn rules:

4. Never skip more than six notes between successive notes.
5. The first note in part A is ordinarily not the second, fourth or flatted fifth note in a scale.
6. Notes with flats next move down a tone, notes with sharps next move up a tone.[3]

The six rules were then put to work via the Monte Carlo method, which had been developed around the speed and indefatigability of the newly invented computer, harnessing the wisdom of a crowd of countless, repeated probabilistic calculations. Fed a stream of random, single-digit integers (which limited the number of available notes to 10), Datatron would test each integer/note against its programmed criteria. If it met every guideline, it was stored in memory; if not, it was discarded, and the program would move on to the next integer. After a few dozen iterations, presto: another prospective hit. Or not. Klein and Bolitho never admitted the program’s rate of success; out of Datatron’s presumably thousands of drafts, only “Push-Button Bertha” saw the light of day.
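
In modern terms, the procedure Klein describes is rejection sampling: draw a random candidate note, keep it only if it satisfies the rules, otherwise throw it away and draw again. Here is a minimal sketch in Python of that scheme, offered as an illustration of the technique rather than a reconstruction of Klein and Bolitho’s actual program; it applies a simplified subset of the six rules (the ten available notes, the leap limit, and the reverse-after-five-notes rule), and the constants, function names, and the crude stand-in for rule 5 are all invented for the example.

import random

# Toy sketch of the rejection-sampling ("Monte Carlo") scheme described above:
# random single-digit integers stand in for the ten available notes, and each
# candidate is kept only if it passes a few of the published rules.

MAX_LEAP = 6         # rule 4: never skip more than six notes between successive notes
PHRASE_NOTES = 20    # roughly the 18-25 notes of an eight-measure "part A"

def acceptable(candidate, melody):
    """Test a candidate note against a simplified subset of the rules."""
    if not melody:
        return candidate not in (1, 3)       # crude stand-in for rule 5
    if abs(candidate - melody[-1]) > MAX_LEAP:
        return False                         # rule 4: leap too wide
    if len(melody) >= 5:
        recent = melody[-5:]                 # rule 3: five notes in one direction...
        steps = [b - a for a, b in zip(recent, recent[1:])]
        if all(s > 0 for s in steps) and candidate > melody[-1]:
            return False                     # ...so the sixth must not keep rising
        if all(s < 0 for s in steps) and candidate < melody[-1]:
            return False                     # ...or keep falling
    return True

def generate_phrase(length=PHRASE_NOTES):
    melody = []
    while len(melody) < length:
        candidate = random.randint(0, 9)     # the stream of random single-digit integers
        if acceptable(candidate, melody):
            melody.append(candidate)         # store it; otherwise discard and draw again
    return melody

print(generate_phrase())

Run it a few times and it dutifully produces strings of digits that respect the leap and direction rules, and, like the Datatron, it offers no guarantee that any of them would make a hit.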

Push-Button Bertha

Piano version performed by Matthew Guerrieri

Datatron 205

The song took its place in a small but growing repertoire of computational compositions. 1956 was a banner year for statistically designed, computer-generated music. A team of Harvard graduate students, including Frederick P. Brooks, Jr. (who would go on to lead the design of IBM’s famed System/360 mainframes), programmed Harvard’s Mark IV computer to electronically analyze and then generate common-meter hymn tunes. (“It took us three years to get done,” Brooks later remembered, “but we got stuff you could pass off on any choir.”[4]) Lejaren Hiller and Leonard Isaacson used the University of Illinois’ ILLIAC I machine to create a string quartet, its movements tracing music history from basic counterpoint to modern speculation; portions of the Illiac Suite were premiered on August 9, 1956, only a month after the TV debut of “Push-Button Bertha.”

“Push-Button Bertha” was putting a cloak of high-minded research around that most hallowed of American art forms: a sales pitch.

“Push-Button Bertha” was a curiosity, but it reveals something particular about the early days of computer music in the United States. The Harvard and Illinois efforts were research and experimentation that were at least nominally driven by curiosity and the prospect of expanding academic knowledge. But all three were, in part, justifications of more hard-nosed concerns. When Owens sang of Bertha never wanting your dough, he shaded the truth a bit; Bertha wanted quite a bit of dough indeed. A Datatron 205 computer cost $135,000, and that didn’t include necessities such as a control console, punched-card input and output equipment, or magnetic tape storage. Nor did it include desirable extras, such as the capability to do floating-point calculations—that alone required an additional $21,000, more, at the time, than the median price of a house in the United States.[5] The Burroughs Corporation needed to justify the price tag of its newest product line. “Push-Button Bertha” was putting a cloak of high-minded research around that most hallowed of American art forms: a sales pitch.


Adventure Tomorrow Syndication ad, 1959

Off the air, Martin Klein wasn’t employed by ElectroData or Burroughs; at the time, he worked for Rocketdyne, North American Aviation’s rocket and missile division, based in the San Fernando Valley. The Brooklyn-born Klein had initially pursued music—as a teenager, he composed and copyrighted (though did not publish) a piano-and-accordion number entitled “Squeeze-Box Stomp”—but instead took up science. He did graduate work at Boston University, earning a master’s and a Ph.D.; his master’s thesis, especially (on methods of using optical refraction to measure the flow of air around high-speed objects—supersonic planes or missiles, say), foreshadowed his work at Rocketdyne, which was largely devoted to designing circuits to convert rocket-engine test-firing data into forms that a computer could analyze.[6]

But that hint of a performing career would eventually resurface. In 1956, Klein began to spend his weekends on television. Saturdays brought Wires and Pliers, in which Klein and his North American Aviation colleague Harry C. Morgan (with the help of electrical engineer Aram Solomonian) showed viewers how to assemble simple electronic circuits and gadgets.[7] (The show’s sponsor, the Electronic Engineering Company of California, conveniently sold kits containing the necessary components for each project.) On Sundays, Adventure Tomorrow promoted technological optimism by way of the latest advances from California’s rapidly expanding electronics and defense industries. Burroughs needed a showcase for its technology; Klein needed technology to showcase.

Klein had demonstrated a flair for technologically enhanced promotion. His first efforts with the Datatron were intended to spotlight Pierce Junior College, where Klein was an instructor. In December 1955, Klein had the computer predict the winners of New Year’s Day college football bowl games; it got four out of five correct. News reports made sure to mention that Klein was teaching computer design at Pierce, “one course of a whole program in electronics offered by the college preparing men for occupations in this industry, so vital to our country’s defense.”[8] By April, Klein, under the auspices of Pierce and backed by several electronics-industry sponsors (including ElectroData), was on the air every week. Wires and Pliers didn’t last long, but Adventure Tomorrow did. From the beginning, Adventure Tomorrow was a cheerleader for the latest military technology—“the wondrous world of missiles, jets, and atomic projects,” as a later advertisement for the program put it. It was in that spirit that Klein and Bolitho went to work extracting a bit of publicity-friendly frivolity from the Datatron 205.

If Klein knew how to engineer attention, Bolitho’s specialty was wrangling the machines. Early computers were a forest of hard-wired components, fertile ground for capricious behavior. Bolitho’s rapport with the finicky beasts was legendary. His ability to coax computers into reliability eventually earned him the job of leading prospective customers on tours of Burroughs’ Pasadena plant. “He had some kind of magical quality whereby he could walk up to a machine that was covered with cobwebs and dust and turn it on and that thing would work, even if it had been broken for years,” a fellow engineer remembered.[9]

Depending on his audience, Klein would oscillate between portraying computers as inhumanly infallible and as comfortingly quirky. Explaining the basics of the new tools to readers of Instruments and Automation magazine, he lauded “the advantage of automatic control over control by human operators where human forces are constantly at work to disrupt the logical processes.” But, recounting the genesis of “Push-Button Bertha” for Radio Electronics—a magazine aimed more at hobbyists and amateurs—Klein struck a more whimsical note, echoing (deliberately or not) the Romantic stereotype of the sensitive, temperamental artist:

The words “electronic digital computer” immediately conjure up a picture of a forbidding, heartless device. Those of us who design computing machinery know this isn’t true. Computing machines have very human characteristics. They hate to get to work on a cold morning (we call this “sleeping sickness”). Occasionally, for unexplainable reasons, they don’t work the same problem the same way twice (we say, then, that the machine has the flu).[10]

Klein’s joke turns a little more grim knowing that, by 1961, the United States military was using no fewer than sixteen Datatron 205 computers at twelve different locations—including the Edgewood Arsenal at the U.S. Army’s Aberdeen Proving Grounds in Maryland, where the technology that produced “Push-Button Bertha” was instead used to calculate simulated dispersal patterns for airborne chemical and biological weapons.[11]

Computer Music from the University of Illinois

All the composing computers, in fact, were military machines. ILLIAC, for instance, was a copy of a computer called ORDVAC, built by the University of Illinois and shipped to the Aberdeen Proving Grounds to calculate ballistics trajectories. Hiller and Isaacson had first learned their way around ILLIAC and the Monte Carlo method trying to solve the long-standing problem of determining the size of coiled polymer molecules—a problem of more than passing interest to the United States government, which funded the research as part of a program to develop and improve synthetic rubber production.[12] (It was Hiller, who had coupled his studies of chemistry at Princeton with composition lessons with Milton Babbitt, who realized the same mathematical technique could be applied to musical composition.)

Computational composition in the United States got its start, quite literally, in the off-hour downtime of the military-industrial complex.

Harvard’s Mark IV was the last in a group of computers designed by Howard Aiken. The Mark I had helped work out the design of the first atomic bombs; Marks II and III were built for the U.S. Navy. The Mark IV, which had produced all those hymn tunes, had been funded by the U.S. Air Force; it worked out guided-missile flight patterns and helped design lenses for the U-2 spy plane.[13] The Harvard computers, it turned out, ran more reliably if they were never turned off; Aiken duly assigned Peter Neumann, a music-loving graduate student, to watch over the Mark IV from Friday night until Monday morning. Student projects—hymn-tune-generation included—happened on the weekends.[14] Computational composition in the United States got its start, quite literally, in the off-hour downtime of the military-industrial complex.

For a few years, American computer-music researchers may have looked with jealousy across the Atlantic, to Paris and Cologne and the fledgling, dedicated electronic-music studios that had blossomed under the aegis of government-supported radio stations. But there are suggestions that those European efforts, too, emerged out of a nexus of technology and defense.


The origin story of the famous WDR electronic-music studio in Cologne, for instance, starts with an American visitor. In 1948, American scientist Homer Dudley visited Germany, bringing along his invention, the vocoder; the device made a crucial impression on Werner Meyer-Eppler, who would later help create the WDR studio—and whose students would include the studio’s most famous denizen, Karlheinz Stockhausen.

Bell Labs ad, 1950

Dudley was an employee of Bell Labs, one of the great 20th-century American research-and-development shops, a hive of telecommunications innovation. The vocoder had originally been developed as part of investigations into shrinking the bandwidth of telephone signals, in order that more messages might travel over the same wires. But, especially with the onset of war, the work at Bell Labs was increasingly aligned with the desires of government. The vocoder had been pressed into wartime service as the backbone of SIGSALY, the Allied system that successfully masked high-level phone conversations from German eavesdropping, and which practically introduced numerous features of the modern digital communications landscape: compression, pulse-code modulation, electronic key encryption.[15] (The keys for SIGSALY were stretches of electronically generated random white noise, pressed onto matched pairs of phonograph records, each pair being destroyed after a single use.)
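
The one-time-key principle described in that parenthetical can be sketched in a few lines. What follows is a conceptual illustration only, under the assumption (common in published accounts of SIGSALY) that each vocoder channel was quantized to six levels and combined with the key by modular addition; nothing here models the actual system’s signal chain, and the names and sample values are invented for the example.

import secrets

# Toy sketch of the one-time-key idea behind SIGSALY: quantized signal values
# are combined with a random key (the "white noise" pressed onto matched pairs
# of records), and the same key, used once and then destroyed, removes the
# noise at the receiving end. A conceptual illustration, not a model of the
# real system.

LEVELS = 6  # published accounts describe six-level quantization per channel

def make_key(length):
    """A one-time key: uniformly random values, one per signal sample."""
    return [secrets.randbelow(LEVELS) for _ in range(length)]

def encipher(signal, key):
    return [(s + k) % LEVELS for s, k in zip(signal, key)]

def decipher(ciphertext, key):
    return [(c - k) % LEVELS for c, k in zip(ciphertext, key)]

signal = [0, 3, 5, 2, 2, 4, 1]        # made-up quantized vocoder-channel values
key = make_key(len(signal))           # the matched pair: sender and receiver share it
sent = encipher(signal, key)          # what an eavesdropper hears: uniform noise
assert decipher(sent, key) == signal  # the receiver recovers the signal exactly
# ...after which the key is destroyed and never reused.

The security of such a scheme rests on the key being truly random, as long as the message, and never used twice; hence the matched record pairs, and their destruction.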

One wonders if Meyer-Eppler had been targeted for recruitment into the development of SIGSALY’s sequels; after all, so much of the WDR studio’s work seemed aligned with and adaptable to the sort of research that Bell Labs was pursuing in the wake of its wartime work. Think of one of the WDR studio’s most celebrated productions, Stockhausen’s Gesang der Jünglinge; if the work’s combination of a transmitted human voice and electronic noise recalls SIGSALY, the way it deconstructs, processes, and reassembles that voice, the way it filters the sounds through various statistical screens—it practically outlines a research program for next-generation voice and signal encryption.

At the very least, the new music triangulated Dudley’s sonic manipulation with two other innovations, the transistor and Claude Shannon’s new information theory; all three had emerged from Bell Labs, which would also birth Max Mathews’s pioneering MUSIC software—all the while pursuing numerous military and defense projects. In later years, Bell Labs would consult on the formation of IRCAM, Pierre Boulez’s hothouse of computer music in Paris.[16]

But IRCAM, envisioned as a seedbed, was instead an endpoint, at least in terms of the sort of institutional computing that, in its interstices, had provided a home for early computer music. Already the future was in view: a computer in every home, a chip in every device, casual users commandeering the sort of processing power that the builders of the UNIVACs and the ORDVACs and the Datatrons could barely imagine. (The year IRCAM finally opened, 1977, was the same year that the Apple II was introduced.) Even the output of those institutions—for instance, Max/MSP, the descendant of an IRCAM project—was destined for laptops.

The sounds of the post-war avant-garde were never far, in concept or parentage, from the technological needs of the Cold War.

Surrounded by a surfeit of computation, we find it hard to imagine the scarcity that led those first computer musicians to a marriage of convenience with the military and national-security bureaucracies—a marriage convenient to both sides. But to understand that give-and-take is to understand something about the nature of music in the middle of the 20th century, the technocratic faith that came to inform so many aspects of the culture. The sounds of the post-war avant-garde were never far, in concept or parentage, from the technological needs of the Cold War.

And what of the composer of “Push-Button Bertha”? Even as it became obsolete, the Datatron 205, with its blinking console and spinning tape drives, enjoyed a long career as a prop in movies and television, lending a technologically sophisticated aura to everything from Adam West’s television Batcave to Dr. Evil’s lair in the Austin Powers movies. That, too, may have been a result of Klein and Bolitho’s public-relations stunt. Only a few months after the Adventure Tomorrow premiere of “Push-Button Bertha,” producer Sam Katzman, a veteran impresario of low-budget genre movies, gave the 205 its big-screen debut, going to the Datatron’s Pasadena factory home to film scenes for a science-fiction production called The Night the World Exploded. In the movie, the 205—mentioned prominently, by name, in dialogue and narration—is used to determine just how long before a newly discovered and volatile “Element 112” works its way to the earth’s surface and destroys the planet. The Datatron had returned from its pop-song holiday to a more familiar role for the era’s computers: calculating the end of the world.[17]

Night the World Exploded poster


  
1. The founder of the Burroughs Corporation, William Seward Burroughs I, was the grandfather of the Beat writer William S. Burroughs. In a 1965 interview, the younger Burroughs gave his opinion of computational art:

“INTERVIEWER: Have you done anything with computers?
BURROUGHS: I’ve not done anything, but I’ve seen some of the computer poetry. I can take one of those computer poems and then try to find correlatives of it—that is, pictures to go with it; it’s quite possible.
INTERVIEWER: Does the fact that it comes from a machine diminish its value to you?
BURROUGHS: I think that any artistic product must stand or fall on what’s there.”

(See Conrad Knickerbocker, “William Burroughs: An Interview,” The Paris Review vol. 35 (1965), pp. 13-49.)

  
2. Martin L. Klein, “Syncopation by Automation,” Radio Electronics, vol. 28, no. 6 (June 1957), pp. 36-38 (quotation on p. 36).

  
3. Ibid., p. 37.

  
4. F. P. Brooks, A. L. Hopkins, P. G. Neumann, and W. V. Wright, “An Experiment in Musical Composition,” IRE Transactions on Electronic Computers, vol. EC-6, no. 3 (September 1957). See also Grady Booch, “Oral History of Fred Brooks,” Computer History Museum Reference number: X4146.2008 (http://archive.computerhistory.org/resources/access/text/2012/11/102658255-05-01-acc.pdf, accessed September 18, 2018).

  
5. Datatron prices from Tom Sawyer’s Burroughs 205 website (http://tjsawyer.com/B205prices.php, accessed September 10, 2018). In 1957, the median home price was approximately $17,000, as calculated from Robert Shiller’s archive of historical home prices (http://www.econ.yale.edu/~shiller/data/Fig3-1.xls, accessed September 10, 2018).

  
6. Martin L. Klein, The Determination of Refractive Indices of Dynamic Gaseous Media by a Scanning Grid, M.A. Thesis, Boston University (1949). Klein’s doctoral thesis was on zone plate antennae—forerunners of the modern flat versions used for HD television.

  
7. “TV Show Features ‘Wires and Pliers,’” Popular Electronics, vol. 4, no. 4 (April 1956), p. 37.

  
8. “Pierce College Teacher Picks Sports Results,” Van Nuys Valley News, January 10, 1956, p. 10.

  
9. Richard Waychoff, Stories about the B5000 and People Who Were There (1979), from Ed Thelen’s Antique Computers website (http://ed-thelen.org/comp-hist/B5000-AlgolRWaychoff.html, accessed September 10, 2018).

  
10. Martin L. Klein, Harry C. Morgan, and Milton H. Aronson, Digital Techniques for Computation and Control (Instruments Publishing Co.: Pittsburgh, 1958), p. 9; Klein, “Syncopation by Automation,” p. 36.

  
11. For the 205’s usage within the military, see Martin H. Weik, A Third Survey of Domestic Electronic Digital Computing Systems (Public Bulletin no. 171265, U.S. Department of Commerce, Office of Technical Services, 1961), p. 145. For the 205 at Edgewood, see Arthur K. Stuempfle, “Aerosol Wars: A Short History of Defensive and Offensive Military Applications, Advances, and Challenges,” in David S. Ensor, ed., Aerosol Science and Technology: History and Reviews (RTI Press: Research Triangle Park, NC, 2011), p. 333.

  
12. See, for instance, F. T. Wall, L. A. Hiller, Jr., and D. J. Wheeler, “Statistical Computation of Mean Dimensions of Macromolecules. I,” The Journal of Chemical Physics, vol. 22, no. 6 (June 1954), pp. 1036-1041; F. T. Wall, R. J. Rubin, and L. M. Isaacson, “Improved Statistical Method for Computing Mean Dimensions of Polymer Molecules,” The Journal of Chemical Physics, vol. 27, no. 1 (January 1957), pp. 186-188. The University of Illinois had received $135,000 from the National Science Foundation for research into synthetic rubber, the largest such grant given to a university under the NSF’s synthetic rubber program; Special Commission for Rubber Research, Recommended Future Role of the Federal Government with Respect to Research in Synthetic Rubber (National Science Foundation: Washington, D. C., December 1955), p. 9.

  
13. As it turned out, Aiken’s conservative design left the Mark IV significantly slower than other computers; James G. Baker, who ran the Harvard group researching automated lens design, grew frustrated with the speed of the Mark IV (and his access to it), eventually switching to an IBM mainframe at Boston University. See Donald P. Feder, “Automatic Optical Design,” Applied Optics, vol. 2, no. 12 (December 1963), p. 1214; Gregory W. Pedlow and Donald E. Welzenbach, The Central Intelligence Agency and Overhead Reconnaissance: The U-2 and OXCART Programs, 1954-1974 (Central Intelligence Agency: Washington, D.C., 1992), p. 52 (declassified copy at https://www.cia.gov/library/readingroom/docs/2002-07-16.pdf, accessed September 12, 2018).

  
14. John Markoff, “When Hacking Was In its Infancy,” The New York Times, October 30, 2012.

  
15. For an historical and technical overview of SIGSALY, see J. V. Boone and R. R. Peterson, “SIGSALY—The Start of the Digital Revolution” (2016) (at https://www.nsa.gov/about/cryptologic-heritage/historical-figures-publications/publications/wwii/sigsaly-start-digital.shtml, accessed September 17, 2018).

  
16. Robin Maconie has speculated on the implications of the Bell Labs connections in a pair of articles: “Stockhausen’s Electronic Studies I and II” (2015) (at http://www.jimstonebraker.com/maconie_studie_II.pdf, accessed September 17, 2018), and “Boulez, Information Science, and IRCAM,” Tempo vol. 71, iss. 279 (January 2017), pp. 38-50.

  
17. For the 205’s film and TV history, see the Burroughs B205 page at James Carter’s Starring the Computer website (http://starringthecomputer.com/computer.php?c=45, accessed September 12, 2018). The Night the World Exploded, written by Jack Natteford and Luci Ward, and directed by Fred F. Sears, was released by Columbia Pictures in 1957.

On Empathy

The crimes and misdemeanors language perpetrates against music are many and various, but one offense is more insidious than most, simply for being so insignificant. It’s a preposition. In English, invariably, we listen to a piece of music. Never with a piece of music.

That little rut of syntax conceals a speed bump on what seemingly should be a musical express lane: the generation of empathy. Empathy is something music can and ought to steadily, even effortlessly create. Performing music, particularly in any sort of ensemble, large or small, exercises the muscles of empathy like no other. But even just listening to it should give empathy a boost, one would think. Name another art form that so regularly launches even its most historically, culturally, and ethnologically distant artifacts into newly immediate vitality, again and again.

Empathy is, perhaps, the most plausible of music’s utopian promises. The universality of musical communication dissolves the barriers of isolated viewpoints. We can gain direct access to perspectives and emotions far from our own experience. Music expands our ability to empathize, to sympathize, to humanize. It’s a great story. It’s a story I’ve told enough times, certainly. And, at those times—now, for instance—when empathy seems to be a dwindlingly scarce societal resource, it’s a story we like to tell with greater insistence, and confidence, and hope.

But what if it’s just that—a story? From another angle: what if there’s no way to listen to a piece of music and with a piece of music at the same time?


For the better part of a century, psychologists and similarly inclined scholars have made a particular distinction between empathy and emotional contagion. The former is defined in the usual way: having the experience of another person’s perception, perspective, emotional reaction. The latter is a little different: experiencing an emotional response simply because everyone around you is experiencing the same emotion. It’s an illusion of empathy, one conjured completely out of one’s own emotional memories.

The distinction is important in the study of musical perception. Here’s a recent explanation of the difference, by scholar Felicity Laurence:

It seems possible that when accounting for feelings of unity arising during shared musical experience, we may be confusing the impression of actually understanding and even feeling sympathetic towards one’s fellow “musickers” with what is in fact the experience of an emotional “wave.” In doing so, we are arguably conflating this “contagious” experience with the distinct and separate phenomenon of empathy. Emotional contagion is not inherently negative, and may indeed lead to, or accompany, empathic response. However, people engaged in musicking may seek specifically to engender, and then celebrate emotional contagion in order to reduce individual sovereignty and dissolve interpersonal boundaries. Even in an apparently benign concert performance, for example, we may be able to discern such manipulative behavior on the part of the performers and the corresponding mass response of their audience.

This description, at least, maintains the optimistic possibility (“may indeed”) that emotional contagion can pull the listener in the direction of true empathy. But others have not been so sure.

flamingo reflection

Photo by Pablo Garcia Saldana

The Spanish philosopher José Ortega y Gasset, for instance, was a skeptic. And he came to doubt because of a now-familiar controversy—the shock of the new. In the early part of the 20th century, Ortega was wrestling with a problem: how to define modern music vis-à-vis the music of previous eras. “The problem was strictly aesthetic,” Ortega wrote, “yet I found the shortest road towards its solution started from a simple sociological phenomenon: the unpopularity of modern music.” Unlike music of the Romantic era, modern music had not met with wide popularity. And the reason for that was profound and inherent: “It is not a matter of the majority of the public not liking the new work and the minority liking it,” Ortega went on. “What happens is that the majority, the mass of the people, does not understand it.” After a century of Romanticism’s mass appeal, modernism was a rude awakening:

If the new art is not intelligible to everybody, this implies that its resources are not those generically human. It is not an art for men in general, but for a very particular class of men, who may not be of more worth than the others, but who are apparently distinct.

Hence the title of the essay: “La deshumanización del arte,” the dehumanization of art. And, like Milton Babbitt in “Who Cares If You Listen?” (an essay that, in some respects, Ortega anticipated by thirty years), Ortega isn’t out to demonize that dehumanization. It is what it is. And a lot of what it is has to do with how art—and music specifically—does and doesn’t engender empathy.

Romanticism, to Ortega, was popular because “people like a work of art that succeeds in involving them in the human destinies it propounds.” In the case of music, the destiny propounded was that of the composer: “Art was more or less confession.” Wagner, the adulterer, writes Tristan und Isolde, an opera about adultery, “and leaves us with no other remedy, if we wish to enjoy his work, than to become vaguely adulterous for a couple of hours.”

This seems like an empathetic response. But, upon closer examination, the music of Beethoven and Wagner is “melodrama,” and our response to it just “a contagion of feelings”:

What has the beauty of music to do with the melting mood it may engender in me? Instead of delighting in the artist’s work, we delight in our own emotions; the work has merely been the cause, the alcohol, of our pleasure… they move us to a sentimental participation which prevents our contemplating them objectively.

“[T]he perception of living reality and the perception of artistic form are, in principle, incompatible since they require a different adjustment of our vision,” Ortega insists. “An art that tries to make us see both ways at once will be a cross-eyed art.” The clarity of empathy is hopelessly blurred by reflexive emotional response: “It is no good confusing the effect of tickling with the experience of gladness.”


Ortega’s analysis is subjective, speculative criticism, but some of the ideas he turns over—especially regarding genre and empathy—have, however tentatively, been put to scientific test. In one provocative study, Shannon Clark and S. Giac Giacomantonio compared how genre preferences matched up with empathy in subjects in late adolescence and in early adulthood—across the age boundary when the psychological development of empathy is thought to settle into a mature level. Clark and Giacomantonio quizzed their subjects as to their listening preferences, classifying them according to a Musical Preference Factor Scale (MPFS) developed by Peter J. Rentfrow and Samuel D. Gosling:

Factor 1, “reflective & complex” (e.g., classical, jazz, folk, blues)

Factor 2, “intense & rebellious” (e.g., rock, alternative, heavy metal)

Factor 3, “upbeat & conventional” (e.g., country, pop, soundtracks, religious)

Factor 4, “energetic & rhythmic” (e.g., rap, soul, dance, electronica)

The result?

[I]t was shown that music genres encompassed by MPF-1 and MPF-2 are stronger in their associations with empathy than are those encompassed by MPF-3 and MPF-4. In fact, MPF-3 was negatively associated with empathy, indicating that those who have greater preferences for these genres may be lower in empathic concern. Additionally, MPF-4 was shown to have very little influence on empathy, positively or negatively, indicating that these genres of music contain little to no emotive messaging influencing empathic concern[.]

What’s more, the study hinted that “music preferences are more relevantly associated with cognitive aspects of development than affective ones.” In other words, the path to increased empathy is through thinking, not feeling. To be sure, the framework fairly smacks of unexamined stereotypes (I can think of plenty of rap music that is “reflective & complex,” and plenty of classical music that is “upbeat & conventional”). To any even slightly versatile musician, the MPFS categories (even in expanded form) can feel excessively, well, categorical. And, as with all studies of music and empathy so far, the study is far more suggestive than conclusive—the sample size is small, the data noisy. But squint your eyes, and you can just make out Ortega’s line between “objective” and “sentimental” music.

Still, Ortega’s business was philosophy, not psychology. His conception of the empathy-emotional contagion distinction was phenomenological, echoing ideas of empathy and intersubjectivity explored by Edmund Husserl and, especially, Husserl’s student Edith Stein, a fascinating thinker whose life was cut short at Auschwitz. (The philosophical consideration of empathy goes back to the Enlightenment, but it was Stein’s thesis, written at the absolute tail end of the Romantic era, that most influentially distinguished between empathy and emotional contagion.) And Ortega, it should be noted, had an ulterior motive. At the core of his analysis is his mistrust of utility. His famous Descartes-like statement of individual existence—“Yo soy yo y mi circunstancia,” I am I and my circumstances—posits existence as a contest between the self and the decisions into which the self is pushed by those circumstances. On some level, Ortega regards Romanticism as more circumstantial, more useful, than he might prefer. (Ortega’s anti-utilitarianism most shows its seams when stretched. In Ortega’s Meditations on Hunting, for example, he ends up elevating the “exemplary moral spirit” of hunting for sport over hunting for food.)

But how do you measure the utility of music, anyway? Earlier this year, I moderated a discussion panel for one of the concerts in a two-season survey of Anton Webern’s complete music, mounted by Trinity Wall Street in New York City. For a prompt, I offered a quotation from the rather contentious 1908 essay “Ornament and Crime” by the rather notorious Viennese architect Adolf Loos:

The evolution of culture is synonymous with the removal of ornament from utilitarian objects.

The target of Loos’s ire was Art Nouveau and its penchant for putting decadent, decorative swirls on everything from wallpaper and furniture to ashtrays and breadboxes. But Loos was Webern’s contemporary, and it feels like this quote should have something to do with Webern’s famously stripped-down rhetoric. But what, exactly, that connection is, I’m not sure.

What was it that, perhaps, Webern considered utilitarian about music, and that previous generations had excessively ornamented? Was it the utility of musical form, and how it had been ever-more-grandly ornamented with tonal harmony? Webern’s works are formal—often scrupulously so—but without the tonal indicators of form, without the harmonic mile markers and exit signs everyone had grown accustomed to over the previous three centuries. At the very least, this answer hints at how Webern’s music can be so wildly expressive while, compared to tonal music, doing so little. A piece of music isn’t expressive because it adheres to sonata form, say; sonata form is useful because it gives the nebulous quality of musical expression something to which to adhere.

But, with Ortega’s essay swimming in my head, here’s another idea. Maybe the utility of music is its communication—not what it communicates, which nobody can ever agree on, anyway, but just that it communicates with such power and directness. And the ornament? Emotional contagion.

I might like this answer even better, because it dissolves so many of the paradoxes of Webern’s reception—why it’s judged cool and inscrutable, when it’s anything but; why it’s judged austere and meager, when it’s anything but; why it’s judged impersonal and inhuman, when it’s anything but. Maybe the real resistance to Webern’s music (and a lot that followed) is that, in and of itself, it refuses to let the audience off the hook. To engage with it is to experience empathy without the cushion of emotional contagion. Real empathy, the experience of a world-view other than yours, is a far different and far less comfortable thing than a safe memory of your own emotional experience.

mirror reflection

Photo by Ali Syaaban


The whole landscape of this discussion is, admittedly, esoteric. Webern’s music is extreme. Ortega’s endpoint is a bit extreme. Academic studies of music and empathy, by nature, inhabit at least somewhat abstract spaces. (A number of investigations, for instance, have studied responses to music by autistic listeners in order to make any effects more readily observable.) Most of us—composers, performers, listeners—don’t live at these kinds of extremes. We roam across Rentfrow and Gosling’s Music Preference Factors, mixing and matching, picking and choosing, sometimes amplifying a mood, sometimes challenging it, sometimes throwing different approaches into the blender just to see what happens. We all, at least some of the time, like to be transported into a new perspective; at the same time, we like to be guided to that place with a sense of being met halfway.

The question is whether the distance is the only thing being halved. The implication of Ortega, and Webern, and the tentative attempts to quantify such things is that, maybe, empathy and emotional contagion, rather than working hand-in-hand, as we might assume, are instead in a mutually exclusive tug-of-war. It is both a profoundly counter-intuitive idea and one that causes a surprising amount of music history to fall into logical place. And I find that just considering the idea reduces a lot of the foundation of how I think about music to sand. How would that change how we make music? What would that music sound like? How would we perform it?

Here’s another question: would we even be able to hear it?


In the introduction to the second, 1966 edition of his study Understanding Media, Marshall McLuhan took the opportunity to try and clarify a tricky point: how, when a medium is superseded, the old medium becomes the “content” of the new. For example, “the ‘content’ of TV is the movie. TV is environmental and imperceptible, like all environments. We are aware only of the ‘content’ or the old environment.” Our entire historical relation to the world around us is simply an ongoing two-step between content and media:

Each new technology creates an environment that is itself regarded as corrupt and degrading. Yet the new one turns its predecessor into an art form.

The argument that atonal modernism “chased away” the audience for classical music/concert music/art music (choose your favorite flawed terminology) is practically a cliché at this point. But the bigger shift was technological. At the same time as the advent of atonality, our relationship with music was undergoing the greatest sea change in history: live performances were displaced by recorded, broadcast, or otherwise electrically and electronically mediated performances. And one can interpret McLuhan’s framework so that the primary feature of Romantic music—its flair for creating the illusion of startlingly immediate emotions—became the “content” of music once its dominant mode of consumption became electronically mediated. In other words, the phonograph, the radio, the recording studio made the emotional contagion of music the end, not the means. Considering McLuhan’s framework leads us to another counter-intuitive possibility: that, for a hundred years, the intended meaning of any piece of music has been lost in translation, its technological mediation filtering out everything but the emotional contagion.

It’s an esoteric interpretation. But it would explain a lot. It would explain why one of music history’s most zealous projects, the post-World War II determination to dismantle the legacy of Romanticism, foundered so completely. It would explain why some of the most thrilling and fascinating music of the past one hundred years, music that still can generate an electric response in the concert hall, found no traction on record or radio. And, more to the point, it would go a long way toward explaining why two generations and counting of conscious efforts to “reconnect” with audiences, of composers and performers producing music conceived in tonality and dedicated to the proposition that accessibility and clarity are fundamental to musical practice, have failed to forestall yet another political and historical moment in which our capacity for empathy has been ruthlessly and thoroughly crowded out by emotional contagion. But the notion also implies a dilemma: the music best able to engineer empathy might be that which is the hardest sell to a listener—because it is the most at odds with the way we have come to listen to music.

Like most dilemmas, it’s older than we think. The ancient Greeks were already worrying about it, forever theorizing how to channel music’s capacity for moral improvement, forever peppering those theories with observations that so much of the music that surrounded them eschewed morality for an easy emotional response. Aristotle, like so many after him, tried to square the circle with crude class distinctions, contrasting “the vulgar class composed of mechanics and laborers and other such persons” with “freemen and educated people,” resigned to the necessity of appealing to the former with “active and passionate” harmonies—since “people of each sort receive pleasure from what is naturally suited to them”—but insisting that, for education, “the ethical class of melodies and of harmonies must be employed.” (Not incidentally, this discussion takes place in the eighth and final book of Aristotle’s Politics.) But somehow I think that even Aristotle’s educated people were just as susceptible to emotional contagion as his mechanicals.

It’s not a class trait, or a national trait, or an aesthetic trait; it’s a human one. Emotion is easy; empathy is hard. We prefer listening to over listening with, a preference reinforced, perhaps, by the inescapable electronic web we’ve woven around ourselves. We keep believing that the one can lead to the other. But is that, in actuality, anything more than a feeling?

Spreadsheets and Skeptics: a philosophical tale of data and music

data music

Image via TrekCore

On argumente mal l’honnesteté et la beauté d’une action par son utilité
A man but ill proves the honour and beauty of an action by its utility

—Michel de Montaigne, “De l’Utile et de l’Honneste”

What do you do?

How many answers are there to that question? An occupation. A pastime. A technique. A course of action. Or maybe the question itself is a concession: a rhetorical shrug of the shoulders against the possibility of an answer.

Last August, The New York Times Magazine ran an article by Steven Johnson. “The Creative Apocalypse That Wasn’t” painted, amidst some judicious caveats, a hopeful, even rosy picture of the prospects for a musical career post-Napster, post-internet, post-streaming services. It was, in a way, an exemplar of 21st-century explanatory journalism: technologically optimistic, pleasantly contrarian—and data-driven. Very data-driven.

Both the data and the drive were concerned with that same question: what do you do? One of Johnson’s main exhibits was occupational data—that is, counting up the number of people who said that their occupation was “musician” or some equivalent. In Johnson’s analysis, that number was going up, even as digital forms of consumption seemed to be anecdotally squeezing musicians out of the marketplace. Which led to the second “what do you do?”: don’t worry (or, at least, worry less), be happy (or, at least, happier).

There were problems with the article. Johnson’s data was selective and, in at least one case (which I’ll get to below), didn’t quite say what he thought it said. And his own conception of what musicians do was somewhat disconnected from the huge variety and combinations of ways musicians make a living. I certainly raised an eyebrow (as did, I would imagine, Frank) when Johnson noted that

The growth of live music isn’t great news for the Brian Wilsons of the world, artists who would prefer to cloister themselves in the studio, endlessly tinkering with the recording process in pursuit of a masterpiece

—seemingly oblivious not only to exactly how many babies he was tossing out with that achingly lovely California-sun-dappled bathwater, but also to how many other cloisters (schools, practice rooms, composing tables) are crucial to even the most prolifically disposable musical styles.

creative apocalypse

Plenty of critiques followed Johnson’s article—most of them negative. The Future of Music Coalition led the way, prompting a back-and-forth that mainly shored up the respective trenches. Other observers weighed in. The National Endowment for the Arts Office of Research and Analysis mined some more data, some of it provocative. (The final graph in that report, showing, via Bureau of Economic Analysis data on capital investments, the decline in real investment in new music, is like a flash-card summary of the tyranny of the back catalog.)

I don’t feel the need to sift through all that data again. But I did start thinking about the data itself, the fact of it. Maybe Johnson’s article wasn’t the bellwether for the coming of Big Data to music, but it certainly was part of the flock. Data-driven analysis has seeped into every corner of the musical ecosystem, beyond arguments for (or against) increased opportunities for individual musicians. Streaming services, online retailers, social media communities—all are crunching reams of data and creating reams more, all the time. Our relationship with data has changed profoundly. Even the word itself hints at how much: it turned from plural to singular. (As a linguistic descriptivist, I find meaning in that.) Maybe we should step back, and figure out how to deal with that going forward.

So this will be a philosophical tale about data. As befits a philosophical tale, it will also be a cautionary tale. As befits a cautionary tale, it will include visits from three ghosts. There is, unfortunately, no neat moral at the end. But there will be the start of a framework for answering the question: what do you do?

*


Michel de Montaigne (1533-1592). Engraved by C. E. Wagstaff and published in The Gallery Of Portraits With Memoirs encyclopedia, United Kingdom, 1833.

Two ghosts to start: first, Michel de Montaigne, the 16th-century nobleman and bureaucrat who, in his spare time and a long retirement, pretty much invented the essay, assembling his everyday observations and close-read experiences into a volume that, upon publication, was nearly immediately recognized as a classic of humanist thought. And then, from the succeeding generation, René Descartes, the father of modern Western philosophy, who retreated into his own mind (cogito ergo sum, after all) to search for fundamental truths—and who thought that Montaigne’s way of thinking was intellectually irresponsible to a positively diabolical extent.

The source of Descartes’s discomfiture was Montaigne’s cheerful espousals of a very old philosophy: skepticism, in a version that went well beyond mere Devil’s advocacy (Descartes’s suspicions notwithstanding). In Montaigne’s lifetime, French intellectual life had been marked by a fashion for schools of ancient philosophy that, beyond pursuing insight, offered designs for living—Stoicism, Epicureanism, and Skepticism. The latter cultivated a habit of questioning everything, admitting nothing, subjecting even the most seemingly obvious statement to a barrage of sabotaging logic and rhetoric. Its most famous exponent, the 2nd-century thinker Sextus Empiricus, worked his way through the liberal and scientific arts, demonstrating how none of them (music included) could even be proven to exist.

It sounds like a game, a mental exercise. It is. Epokhē, the Skeptics called it, a suspension of judgement, a constant refusal to succumb to certainty. Get good enough at it, the Skeptics thought, and you could will yourself into a state of ataraxia, tranquility, mindfulness, open to experience rather than trying to frustratingly box it into categorical truths.

In Montaigne, Skepticism inspired a radical if puckish empathy. One of his more tangential but revealing enthusiasms is for stories about animals behaving in clever or vaguely human ways. Another classical Skeptic, Aenesidemus, formulated a defense of epokhē in the form of a chain of ten tropes; Montaigne seems to have especially taken to heart the first: “Different animals manifest different modes of perception.” If animals have a way of experiencing the world, an inner life, that we have so little access to, how can we possibly say that our way of experiencing the world is the only valid one? In Montaigne’s famous formulation: “When I play with my cat, who knows whether I do not make her more sport than she makes me?”


Rene Descartes (1596-1650). Engraved by W. Holl and published in The Gallery Of Portraits With Memoirs encyclopedia, United Kingdom, 1833.

Skepticism drove Montaigne’s perception outward; it drove Descartes’s inward. “I think, therefore I am” was Descartes’s implicit shot across Montaigne’s ruminative bow, fencing off human reason as exceptional. He started with the same sally as Montaigne—question everything—but, where Montaigne and his classical forebears took that as an everyday attitude, Descartes took it as a prompt to do what he was determined to do: answer everything as well. (In her excellent biography of Montaigne, How to Live, Sarah Bakewell puts it like this: “Trying to get away from Skepticism, [Descartes] stretched it to a hitherto unimaginable length, as one might pull a strand of gum stuck to one’s shoe.”)

That first answer, about thinking and being, was Descartes’s base camp. And he immediately questioned it: how do I know this to be true? Well, there was nothing inherent to I think, therefore I am that demonstrated its truth, except for the fact that it was so clearly true to Descartes. And, with that, he began climbing into thinner and thinner air:

I concluded that I could perhaps take, as a general rule, that all the things which we very clearly and distinctly conceive are true.

All the things which we very clearly and distinctly conceive are true.

Whatever happened to “show your work”?

*

In turning back to the data, one might well adopt Montaigne’s motto: Que sais-je? What do I know? And it doesn’t take much effort to reach a Montaigne-like conclusion, a feeling that the cat is playing with us as much as we are playing with the cat. But that’s a trap, too.

For me, the most interesting hole poked in Johnson’s article had to do with some figures Johnson gleaned from the Bureau of Labor Statistics’s Occupational Employment Statistics (OES), which derive from a yearly survey of some 800 occupational categories. Johnson:

According to the O.E.S., in 1999 there were nearly 53,000 Americans who considered their primary occupation to be that of a musician, a music director or a composer; in 2014, more than 60,000 people were employed writing, singing or playing music. That’s a rise of 15 percent, compared with overall job-market growth during that period of about 6 percent.

That’s a pretty clear trend, no? But the BLS cautions against such year-to-year comparisons of OES data, and with good reason. A New Zealand statistician named Thomas Lumley poked into those figures and found that the 15 percent increase could almost entirely be attributed to an increase in the “Music Directors and Composers” category; beginning sometime around 2009, approximately 15,000 primary and secondary schoolteachers who weren’t previously being counted as music directors suddenly were. Take out that influx, and Johnson’s upswing turns into a decline.
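
To make the arithmetic concrete, here it is in a few lines of Python, using rounded versions of the figures quoted above (roughly 53,000 musicians in 1999, roughly 60,000 in 2014, and roughly 15,000 reclassified teachers); the exact values differ, but the sign flip is the point.

# Rough arithmetic behind the two readings of the OES figures, using rounded
# versions of the numbers quoted in the article.

musicians_1999 = 53_000          # "nearly 53,000" in 1999
musicians_2014 = 60_000          # "more than 60,000" in 2014
reclassified = 15_000            # schoolteachers newly counted as music directors (approx.)

def pct_change(old, new):
    return 100 * (new - old) / old

print(f"Johnson's reading:      {pct_change(musicians_1999, musicians_2014):+.1f}%")
print(f"Minus the reclassified: {pct_change(musicians_1999, musicians_2014 - reclassified):+.1f}%")

The first line prints a rise of about 13 percent (Johnson, working from unrounded figures, quoted 15); the second prints a decline of about 15 percent.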

I got curious about that tweak, so I emailed the Bureau of Labor Statistics about it. I was hoping that it was some straightforward change in methodology, one that might say something about how, at least from the standpoint of the state, the dominant idea of a “musician” was evolving. Nope—in their message back, the OES Information Desk chalked it up to the law of unintended consequences:

In particular, in 2010 and 2011, the OES program implemented the revised 2010 version of the federal Standard Occupational Classification (SOC) system. As part of the 2010 SOC revision, the word “band” was added to the occupational description for music directors and composers. This revision was not intended to change the occupation’s content, since “band” was implied to be part of the previous definition for this occupation also. However, the addition of the word “band” and the inclusion of this occupation on the OES survey form sent to elementary and secondary schools may have caused a shift in the number of workers reported as music directors and composers rather than as teachers.

I love this. The addition of one innocuous word to the description managed to extend the fog forward and backward in time. There’s no way to tell how many band directors did get added, didn’t get added, should have been added, should have been in the category already. It brings us, full circle, back to Montaigne: the more you know, the less you know.

At this point, we might respond with a common trope: the data, we would say, is unreliable. But, really, the data is just the data. The BLS asked a question and got an answer; they asked a slightly different question and got a slightly different answer. They’re not pretending that it’s anything other than that; it’s why they specifically warn against making the kind of comparisons that Johnson made. But we, Cartesian children all, can’t resist. Johnson saw the pattern and judged it true. The Future of Music Coalition and Thomas Lumley saw a different pattern, and they did the same thing. Certainly, you can think that one interpretation is more plausible than the other, that one is closer to the truth. I know what I think. (I think it’s the latter.) And yet, at the same time, there’s Montaigne in my head saying, sure, that’s what you think—but what, exactly, do you know?

It’s not the data that’s unreliable; it’s the clarity. And when it comes to trying to figure out music, that’s a bit of a problem.

*


The problem was neatly framed by a third ghost: Louis Althusser (1918-1990), the Marxist philosopher and theorist. Althusser was a troublesome character, philosophically and otherwise. For all his insistence that he was a classical Marxist, his interpretation of Marx was rather unorthodox—and, to other scholars in the field, highly suspicious. He was unstable, going through periods of mental distress; in 1980, he killed his wife, strangling her in their apartment at the École normale supérieure in Paris, escaping prosecution by being judged to have been temporarily insane. (He described the incident with sophistic frankness in a posthumously published memoir, in which he also admitted that he hadn’t actually read all that much Marx.) His writing is pervaded by a kind of brittlely incisive gloom.

His most famous theoretical contribution—his analysis of ideology, from his essay “Ideology and Ideological State Apparatuses (Notes towards an Investigation),” first published in 1970—is a good example of how grim his philosophy could be. Althusser presents ideology as so omnipresent in society and time, without history, pinning people into identities even before birth, as to make one wonder how any ideology could ever be subverted, or superseded, or even simply adjusted. It is almost helplessly deterministic, to the point that its relationship to actually lived life starts to seem not just counterintuitive, but disconnected.

So why bring him up? Because Althusser had a real skill, almost a sixth sense, for identifying points of tension. And the point of tension at which he builds his theory of ideology is exactly the point at which the competing priorities of data-driven analysis and music collide.

One of the big ideas in Althusser’s essay is interpellation: how ideologies call out individuals as subject to those ideologies, and how individuals respond.

[I]deology “acts” or “functions” in such a way that it “recruits” subjects among the individuals (it recruits them all), or “transforms” the individuals into subjects (it transforms them all) by that very precise operation which I have called interpellation or hailing, and which can be imagined along the lines of the most commonplace everyday police (or other) hailing: “Hey, you there!”

Assuming that the theoretical scene I have imagined takes place in the street, the hailed individual will turn round. By this mere one-hundred-and-eighty-degree physical conversion, he becomes a subject. Why? Because he has recognized that the hail was “really” addressed to him, and that “it was really him who was hailed” (and not someone else).

Althusser presents his example as a sequence of events, but actually, “these things happen without any succession,” he writes. “The existence of ideology and the hailing or interpellation of individuals as subjects are one and the same thing.” So this thicket of scare quotes marks off another of Althusser’s inescapable prisons: if an ideology exists, not only will it interpellate you as subject to it, it already has.

Setting aside the turtles-all-the-way-down aspect of Althusser’s idea of ideology, interpellation is a useful way to think about the way we talk about jobs and occupations. The OES data, for instance, interpellates you, the musician, as a musician, but subject to the terms of the ideology behind the collection of OES data. The various ideologies that pervade society—free market ideologies, hangover-Calvinist ideologies, up-by-your-bootstraps-self-sufficient ideologies—are interpellating you all the time. Artists and musicians, especially in less-dominant stylistic modes, run into this all the time: think about a phrase like “doing what you love,” which so often interpellates artists. Yes, we do what we love, which, as subjects of free-market ideology, calls us out as people who shouldn’t expect to make as much money as other people who do what the free market loves. (It’s no wonder that there’s a movement in radical labor circles dedicated to “counter-interpellation,” essentially re-framing and re-naming worker-subjects in terms suited to more worker-friendly ideologies.)

But Althusser goes further. He wants to know why and how interpellation happens. So he takes a look at one of the bigger ideologies out there: Christianity. The Christian religious ideology calls out an individual, “in order to tell you that God exists and that you are answerable to Him.” The ideology is the voice by which God addresses you (through scripture and its interpretation). The ideology tells you who you are, your place in the world, your duties. Do what the ideology tells you and you will be saved. And so on.

“Now this is quite a familiar and banal discourse,” Althusser writes, “but at the same time quite a surprising one.” Why? Because the ideology is addressing individuals, interpellating individual subjects, but only “on the absolute condition that there is a Unique, Absolute, Other Subject, i.e. God.” There are big-S Subjects (ideologies) and little-s subjects (individuals), and it’s the gap between them that makes interpellation work. The big-S Subject interpellates the little-s subject such that, not only is the little-s subject inescapably linked to that identity, but the little-s subject can contemplate the big-S Subject in his or her own image, such that the ideology doesn’t seem imposed, or constructed, but just “the way things are.” Ideology ensures that, in Althusser’s words, “everything really is so, and that on condition that the subjects recognize what they are and behave accordingly, everything will be all right: Amen— ‘So be it’.”

Responding to the Future of Music Coalition’s first round of objections, Johnson left a long comment that included both of these statements:

We made a decision to focus the piece on the artists, not the ecosystem around the artists

and

[W]e wanted to stick with our principle of not relying on individual anecdotes, and report only broader, industry-wide data

—to which one might say, “well, which is it?” But it’s not either-or; it’s Althusser’s little-s subject and big-S Subject working in quintessential lockstep. Johnson wants to make you, the reader, feel better about the plight of individual artists in an era of technological optimism, and he wants to do it by analyzing large-scale, collective statistics. Does that work? Sure—as long as you’re convinced that the statistics reflect back the image of the individual artist. The artist is the subject. Data is the ideology.

*

So what do you do? Ignore the data? That seems extreme. Data-driven analysis might be an ideology, but it’s a rationally based one. And I, for one, like rational belief systems. They tend to be more useful than the alternatives. They tend to discredit a lot of opinions and behaviors that I find offensive, or unfair, or damaging. But even a rational belief system is still a belief, a faith—something the rationality of the system tends to obfuscate. Not only does that make it easy to fall into Descartes’s clarity-equals-truth trap, it’s easy for that seeming truth to subtly shift from one category to another, to jump the tracks.

Take economics, for instance, the most data-driven of social sciences. If you ask exactly what it is economists do, the best answer might be: they try and design mathematical models that return data matching that generated by real-world situations involving money and material goods and decisions and consumer behavior. But that is not quite the same thing as describing the behavior itself—a distinction that a lot of people (economists included) fail to make a lot of the time. And the models assume a level of coherence (rational actors, rational decisions, market efficiency) only sometimes (if ever) found in the actual world.

Descartes might have thought twice about that clarity thing: after all, his first book was a survey of music theory—Musicae compendium, written in 1618, published (posthumously) in 1650. And, on the very first page, Descartes wrote this (as translated by Thomas Harper in 1653):

For songs may bee made dolefull and delightfull at once; nor is it strange that two divers effects should result from this one cause, since thus Elegiographers and Tragoedians please their Auditors so much the more, by how much the more griefe they excite in them.

Music, at its core, is not a rational art. And yet its creation now necessarily happens within systems and societal frameworks evermore marked off, framed, and otherwise governed by the self-proclaimed rationality of Big Data. Sometimes the meeting will be useful; sometimes it will not. But it will always be a meeting of two fundamentally divergent belief systems. It’s not a matter of collecting more data, or better data, or finding a more sophisticated analysis of that data. The best you can hope for is ecumenical cooperation.

Montaigne would have responded to that uncertainty the way he responded to all uncertainties:

Every one is well or ill at ease, according as he so finds himself; not he whom the world believes, but he who believes himself to be so, is content; and in this alone belief gives itself being and reality. Fortune does us neither good nor hurt; she only presents us the matter and the seed, which our soul, more powerful than she, turns and applies as she best pleases

That sort of attitude is easier said than done, even with a knack for epokhē. But it’s the start of a corrective against the anxiety of data, the illusion of and need for exact, singular answers to big questions. Data requires interpretation; so do notes. Analysis is performance; performance is analysis. The application of a musical soul can make as much sense of fortune as the sorting of a spreadsheet. Sure, that’s just a belief. But that, it turns out, is what we do.

Homage to Captain Swing

A Swing letter addressed to Corpus Christi College, Cambridge: “The college that thou holdest shall be fired very shortly.”

The letters began arriving in the early autumn of 1830, addressed to magistrates, landlords, clergy across rural England:

Sir, take notice that we send you word that your threshing machine shall be burnt to ashes before the month end

Sir, This is to acquaint you that if your thrashing machines are not destroyed by you directly we shall commence our labours

Sir, Your name is down amongst the Black hearts in the Black Book and this is to advise you and the like of you, who are Parson Justasses, to make your wills

The threats all carried the same signature: Captain Swing, a supposed rebel leader, the name calling to mind a pair of macabrely mirrored rhythms: the sweep of the arm in the manual threshing of wheat—the loosening of the grain from the surrounding chaff—and the slow pendulation of a hanged body. Revenge for thee is on the wing from thy determined Captain Swing.

He wasn’t real. Captain Swing was a fiction, a symbol, a conveniently adopted veil of anonymity. He became a metaphor, an embodiment of the frustrations of England’s farm-laborers and rural poor. In 1830, that frustration boiled over, and protests swept across the English countryside. As part of their protests, the Swing rioters extended the Luddite tradition of machine-breaking, destroying the threshing machines that were stealing their livelihood. Their demands were simple: higher wages and an end to rural unemployment. They were reacting to the Industrial Revolution—but, the threshing machines notwithstanding, not so much the advent of mechanization as the change in identity, the way industrialization eroded a robust system of rural relationships and rhythms to a single, stark transaction: employer and employed, owner and tenant, capital and labor, haves and have-nots.

No one knows who invented Captain Swing. But the mascot was an unwitting and curious bit of prescience.

Bandleader and crimefighter Swing Sisson encounters a critic in Feature Comics #58 (July 1942).

On the cusp of a new academic year, Robert Blocker, dean of the Yale School of Music, offered a resolution destined, perhaps, to become a standard of its kind. Defending (in The New York Times) his institution’s decision to suspend the activities of its jazz ensemble (and its general de-emphasis of jazz in the curriculum), Blocker appealed to categorization:

Our mission is real clear…. We train people in the Western canon and in new music.

This intimation of musical haves and have-nots—placing jazz outside the pale of a posited Western canon of great works, then and now—is dumb in its own way (Alex Ross and Michael Lewanski were quick to point out how and why). It is also wrong on a deeper and more historically populous level. Both Ross and Lewanski make the eminently correct assertion that a curriculum without jazz is poor training indeed for the wonderfully kleptomaniacal repertoire of classical music. But, even beyond that, to promulgate a canon that does not change and expand its parameters in response to performed reality is, I think, missing the point of music, and missing it badly.

The notion that jazz is some kind of outside force attempting to breach Fortress Classical is not new. Take Deems Taylor, for instance—composer, critic, narrator of Fantasia, well-known classical music personality of the ’20s, ’30s, and ’40s. In his book The Well-Tempered Listener, Taylor examined the practice of “swinging the classics,” making jazz band versions of classical chestnuts. This sort of thing had apparently provoked enough indignation that the president of the Bach Society of New Jersey, Taylor reported, sent a letter to the FCC proposing penalties for radio stations that broadcast such numbers. Taylor gave that suggestion a sympathetic shrug:

If you’re going to suspend the license of a broadcasting station for permitting Bach to be played in swing time, what are you going to do to a station for permitting swing music to be played at all? (You might offer the owner of the station his choice of either listening to nothing but swing for, say, twelve hours, or else spending a month in jail.) You can’t legislate against bad taste.

Taylor’s solution was musical rope-a-dope, completely certain that the unaltered classical repertoire would win out. “I believe in letting people hear these swing monstrosities because I believe that it’s the best method of getting rid of them,” he concluded. “A real work of art is a good deal tougher than we assume that it is.”

Connoisseurs may also recall last year’s anti-jazz contretemps, culminating with composer-activist John Halle’s broadside against the current state of jazz vis-à-vis progressive politics, which, on its surface, avoided the high-low divide that Taylor repointed and Blocker tripped over. (Halle’s thesis: “It’s been years since jazz had any claim to a counter-cultural, outsider, adversarial status, or communicated a revolutionary or even mildly reformist mindset.”) But at the core of Halle’s article was a related view of score and performance, revealed when he took tenor saxophonist Joe Henderson to task for performing and recording an instrumental version of the old standard “Without a Song”—the original lyrics of which are redolent with, as Halle puts it, “vile Jim Crow racism” (“A darky’s born/ but he’s no good no how / without a song”)—at nearly the same time Henderson was, elsewhere in his music, acknowledging and endorsing the Black Power movement of the 1960s. (“A nadir of obliviousness,” Halle concluded.)

What Blocker’s comment, Taylor’s bravado, and Halle’s litmus test all share is the assumption of a kind of one-way street between intent and performance. Halle’s implication is that, no matter Henderson’s intention, the performance is politically regressive because of the original lyrics—to echo Taylor, even a poor work of art, it seems, is a good deal tougher than we assume that it is. Taylor’s confidence that the score can survive any amount of stylistic contamination nevertheless insinuates that performance, the real-world, real-time expression of style, is ultimately secondary. Blocker’s mission statement implicitly posits a musical regime setting the verities of the written-down, published, and academically vetted canon against the presumably more relativistic and transient pleasures of a performed vernacular.

The supposition throughout is that the composer’s (or lyricist’s) intent remains paramount, that even a thoroughly transformative performance is still just a reiteration of that intent. There is another possibility, though: the possibility that the performance can offset the composer’s intent, simply by virtue of who is doing the performing—and how.

There is also the possibility that this is, in fact, one of music’s highest virtues.

* * *

Here’s an interesting thing. Take two weights, connect them with a string, then run the string over a pulley, like this—

A simple Atwood’s machine: two weights joined by a string run over a single pulley.

You can intuitively guess what will happen: if both weights have the same mass, they’ll just hang there, but if one has more mass, it’ll pull the other through the pulley. This seems trivial, but it’s not, not entirely—which is why the Rev. George Atwood, a tutor at Cambridge’s Trinity College, invented this apparatus in the late 1700s, the better to teach principles of classical mechanics. Playing around with Atwood’s machine, students could measure and learn about rates of acceleration, string tension, inertial forces, and the like. One thing that you can determine with Atwood’s machine is that, in the case of unequal masses (and assuming the pulley is frictionless), the acceleration on both weights is constant and uniform. In other words, if the masses are equal, the system is at equilibrium, but if the masses are unequal, it’s a runaway system, the weights flying through the pulley, ever faster, until they run out of string or vertical space.
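
For the record, the textbook result, assuming a massless, frictionless pulley and an inextensible string, is that both weights share a single constant acceleration,

a = \frac{(m_1 - m_2)\,g}{m_1 + m_2}

zero when the masses are equal, and fixed and nonzero (however slight the imbalance) when they are not.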

But if you take the two weights, run the string over two pulleys, and start the smaller weight swinging back and forth, like this—

The same apparatus, with the string run over two pulleys and the smaller weight set swinging.

—some unexpected things start to happen. The swinging weight, via centrifugal force—more pedantically, via the apparent force that results from interpreting a rotating reference frame as an inertial frame—counteracts some of the gravitational pull on the larger mass. Which means that the Swinging Atwood’s Machine (as it was dubbed by Nicholas Tufillaro, the physicist who first started playing around with such systems back in the 1980s) can end up doing some very counterintuitive things. Even if the masses are unequal, the system can still reach an equilibrium, the smaller mass locking into periodic and sometimes seriously funky orbits:

(From Nicholas B. Tufillaro, Tyler A. Abbott, and David J. Griffiths, “Swinging Atwood’s Machine,” Am. J. Phys. 52 (10), October 1984)

To summarize: if you have two unequal masses that are inextricably bound to each other, it doesn’t necessarily mean that the larger mass always dominates the system. The smaller can still counterbalance the larger. It just needs to swing.
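
If you want to play with the system yourself, here is a minimal numerical sketch in Python of the equations of motion usually given for the idealized machine (massless, frictionless point pulleys, inextensible string), along the lines of the Tufillaro paper cited above. The mass ratio and starting conditions are illustrative choices, not values taken from that paper; change them and watch the orbit change character.

import numpy as np
from scipy.integrate import solve_ivp

g = 9.81   # gravitational acceleration, m/s^2
mu = 3.0   # mass ratio M/m: hanging weight over swinging weight

def sam(t, y):
    # State: r (length of string on the swinging side), its rate,
    # theta (angle from the downward vertical), and its rate.
    r, r_dot, theta, theta_dot = y
    r_ddot = (r * theta_dot**2 - g * (mu - np.cos(theta))) / (1.0 + mu)
    theta_ddot = -(2.0 * r_dot * theta_dot + g * np.sin(theta)) / r
    return [r_dot, r_ddot, theta_dot, theta_ddot]

def hits_pulley(t, y):
    # Many trajectories end with the swinging weight wound all the way
    # up to its pulley; stop the integration cleanly if that happens.
    return y[0] - 1e-3
hits_pulley.terminal = True

y0 = [1.0, 0.0, np.pi / 2, 0.0]   # 1 m of string, 90 degrees, starting at rest
sol = solve_ivp(sam, (0.0, 30.0), y0, events=hits_pulley,
                rtol=1e-9, atol=1e-9, max_step=0.01)

r, theta = sol.y[0], sol.y[2]
x, z = r * np.sin(theta), -r * np.cos(theta)   # path traced by the swinging weight
print("ran %.1f s; radius stayed between %.3f and %.3f m" % (sol.t[-1], r.min(), r.max()))
# Plot x against z (with matplotlib, say) to see whether these settings
# loop, wander, or terminate at the pulley.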

* * *

Leonard Bernstein’s score of Mahler’s Sixth Symphony, from the New York Philharmonic’s digital archives.

It’s only a metaphor, of course. Then again, most writing and talking about music ends up, before too long, at metaphors. “Swing” itself, musically speaking, is a pretty vague concept. It has to do with rhythm, but it has to do with so much more than rhythm: it considers the flow of musical experience through the lenses of momentum and vitality. In its most poetic sense, the metaphor is ecumenical. Those old “Mahler Grooves” bumper stickers could be at once a cheeky incongruity and a recognition that, in its own way, and in a good performance, Mahler could indeed groove, that the symphonies could swing in the grandest sense. But even in the term’s more technical sense—that calibration of the ratio between stressed and unstressed notes—“swing” hearkens all the way back to the old Baroque inégale: a variance, a perturbation, an inequality, turned into a dance of emphasis and de-emphasis that pulls the music forward.

All performance is a matter of emphasis and de-emphasis; it is, on one level, about choice. And, thanks to music’s singular strangeness—grammar and eloquence forever in search of content and meaning—that choice can extend far beyond technical choices on the part of the musicians. Take the case of classical music’s great Beleth, Richard Wagner, who embodied the human possibilities of greatness and ugliness to an exceptionally intense degree. Because his medium was music, performing and listening to Wagner’s work is an opportunity to choose the greatness over the ugliness.

This is, incidentally, what Blocker gets so wrong about the canon. To use it as a dividing line is a diminishing choice, segregating musics that might otherwise yield energetic synergy. The better choice is to view the idea of a canon as an opportunity for expansion and addition—to decide that the classics not only can survive being swung, but, in the larger sense, can positively thrive on it.

From an optimistic vantage, this ongoing process of choice might be thought of as practice, training players and audience to imagine a better world, the better to achieve it. A pessimist could point out (quite rightly) that such training is taking an awfully long time to translate into concrete change.

Captain Swing, in the long run, could not prevent industrialization and capitalism from diminishing and dehumanizing the English rural poor. But the imaginary Captain Swing and his very real foot-soldiers still offer warning and inspiration. The canon, after all, is a threshing floor, separating musical wheat from chaff. The question is whether the canon is to be winnowed by hand, as it were—by individual performers, individual choices collectively shaping repertoire and style—or by machine: by institutions, by factories of learning and production. A top-down segregation of the canon recapitulates what the Swing rioters foresaw: it makes the relationship between the performer and the repertoire excessively transactional, limited in dimension and devoid of ownership.

The steady advance of technology and prosperity in certain places, among certain classes, can make us forget that all of us, rich and poor, fortunate and unfortunate, live in a machine. Its gears are money and power. Inequality, greed, racism, misogyny, discrimination all remain institutionalized and persistent. The music this article has been talking about—jazz and classical—is, culturally speaking, on the margins, however luxurious; maybe to expect these musics, old or new, to alter the fabric of society, however incrementally, is excessively idealistic. (Confession: in that regard, I am an idealist.) But just in their performance, in jazz’s constant reinvention and classical’s constant re-creation, they mount a defense. In swinging, they swing the machine. They mitigate their lesser mass. They, perhaps, prevent the whole system from running away to a catastrophic end. Or, at least, they keep us from being pulled helplessly through the machine.

Digital to Analog: The Needle and Thread Running Through Technology

Daphne Oram making hand-drawn inputs to the Oramics apparatus. (Via.)

This is a picture of Daphne Oram, demonstrating the technology she invented: Oramics. Oram (1925-2003) learned electronics as a studio engineer at the BBC in the 1940s. She composed the first all-electronic score broadcast by the BBC—in 1957, for a production of Jean Giraudoux’s Amphitryon 38—and, a year later, co-founded the BBC Radiophonic Workshop. A year after that, dismayed at the BBC’s lack of enthusiasm for her work (which may be sensed in the fact that the Workshop was not allowed to use the word “music” in its name), she struck out on her own and began to develop Oramics.

Oram’s conception was a radical union of audio and visual. It was a synthesizer, but one in which the input was hand-drawn patterns on strips of 35mm film. The strips of film rolled past photoelectric sensors, and the resulting currents were converted to sound. The avant-garde possibilities of sound-on-film had been explored previously—by Oskar Fischinger, for example, or Arseny Avraamov and his Soviet counterparts (the latter well-chronicled in Andrey Smirnov’s essential study Sound in Z)—but Oramics was more ambitious, more innovative. Oram’s machine ran up to ten strips of film at once, controlling not only pitch, but amplitude, waveform, and various filters. Sound-wise, it was miles ahead of the voltage-controlled analog synthesizers of the time.
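
The underlying idea, stripped of all the hardware, is simply that a drawn curve becomes a control signal and the control signal becomes sound. Here is a minimal sketch of that principle in Python; it assumes nothing about Oram’s actual machine or how her film tracks were read, and the two short arrays merely stand in for curves that, on the real apparatus, would have been drawn on 35mm film and picked up by photoelectric cells.

import numpy as np
from scipy.io import wavfile

sr = 44100                 # audio sample rate
duration = 4.0             # seconds of sound to generate
n = int(sr * duration)

# "Drawn" control curves: a rising-then-falling pitch trace (in Hz) and a
# swelling amplitude trace. The values are invented for illustration.
pitch_trace = np.array([220.0, 262.0, 330.0, 392.0, 330.0, 247.0, 220.0])
amp_trace = np.array([0.0, 0.4, 0.9, 1.0, 0.7, 0.3, 0.0])

# Stretch each curve smoothly across the full duration (the "film" rolling
# past its sensor), then integrate frequency to get the oscillator's phase.
t = np.linspace(0.0, 1.0, n)
freq = np.interp(t, np.linspace(0.0, 1.0, len(pitch_trace)), pitch_trace)
amp = np.interp(t, np.linspace(0.0, 1.0, len(amp_trace)), amp_trace)
phase = 2.0 * np.pi * np.cumsum(freq) / sr
audio = amp * np.sin(phase)

wavfile.write("oramics_sketch.wav", sr, (audio * 32767).astype(np.int16))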

This is a picture of my daughter playing with the Oramics app, an iOS-based simulation. It was released in 2011, to coincide with a special exhibition at London’s Science Museum. Oram’s original apparatus was on display—now behind glass, no longer functional. Oram had stopped working on Oramics in the 1990s, after suffering a pair of strokes; by then, the advance of electronic music had left her and her machine behind. To consider why is to, perhaps, get close to something about the nature of technology, our relationship with it, how decisions about it in one place and time shape attitudes in another place and time.

Fair warning: this article is going to take the scenic route getting to its destination—more suite than sonata. But that I should feel compelled to give such a warning is not irrelevant. Because the real question is why some things are at the center, and why some things are peripheral, and how those things get to where they are. And a good starting point for answering that question is another technology: clothes.

*

The Great Masculine Renunciation, M division.

One of the great geologic-level events in the history of fashion was first named by the British psychoanalyst J. C. Flugel in his 1930 book The Psychology of Clothes. The event was, as Flugel put it, “the sudden reduction of male sartorial decorativeness which took place at the end of the eighteenth century”:

[M]en gave up their right to all the brighter, gayer, more elaborate, and more varied forms of ornamentation, leaving these entirely to the use of women, and thereby making their own tailoring the most austere and ascetic of the arts. Sartorially, this event has surely the right to be considered as ‘The Great Masculine Renunciation.’ Man abandoned his claim to be considered beautiful. He henceforth aimed at being only useful.

Flugel attributed the Great Masculine Renunciation to the spread of democratic ideals in the wake of the French Revolution: with all men now theoretically equal, male fashion converged on a kind of universal neutrality. In other words, according to Flugel, the more utilitarian style of fashion spread outward from the middle class, mirroring the rise of middle-class economic power.

Flugel was, perhaps, too optimistic. David Kuchta, in his book The Three-Piece Suit and Modern Masculinity: England, 1550–1850, traces the origins of the Great Masculine Renunciation much further back, to the 1666 introduction of the three-piece suit by Charles II. Sobriety in dress was first a symbol of masculine, aristocratic propriety. Only later would the style be adopted by the middle class, in order to criticize aristocratic wealth and assert their own political power; in turn, the upper class would re-embrace the style in their own defense. Both sides, at the same time, accused the other of being insufficiently modest in their dress, of embodying not masculinity and prudence, but effeminacy and indulgence.

And note: it is entirely a parley between middle- and upper-class men. Kuchta concludes:

The great masculine renunciation of the late eighteenth and early nineteenth centuries was thus less the triumph of middle-class culture; rather, it was the result of middle-class men’s appropriation of an earlier aristocratic culture, of aristocratic men’s appropriation of radical critiques of aristocracy, and of a combined attempt by aristocratic and middle-class men to exclude working-class men and all women from the increasingly shared institutions of power. (emphasis added)

What really solidified the Great Masculine Renunciation was the great geologic-level event in the history of technology: the Industrial Revolution. What was once a symbol of judiciously wielded privilege now became a symbol of efficiency, of diligence, of devotion to productivity. The uniform of economic and political power could also signify a complete congruence of work and life. Anthropologist David Graeber, in a recent article, put it this way:

[T]he generic quality of formal male clothing, whether donned by factory owners or functionaries, makes some sense. These uniforms define powerful men as active, productive, and potent, and at the same time define them as glyphs of power—disembodied abstractions.

Dress for the cog in the machine you want to be.

A couple of months ago, I was at the annual Fromm Foundation concerts at Harvard University, which featured the International Contemporary Ensemble; the group opted for outfits that, while realized in individual ways, still hewed close to standard new-music-ensemble dress. In fact, the few nods in the direction of rebellion—some bright leggings here, some gold-studded boots there, ICE founder Claire Chase’s metallic silver jacket—mostly just reinforced how closely the performers still orbited the standard all-black contemporary music uniform.

I’m not sure exactly when it became standard (a day of hunting through a few decades of archived newspaper reviews yielded precious little record of what performers were wearing—something, I realize, that might very well be symptomatic of what this article is discussing), but that all-black uniform has held sway for at least thirty years, which is not insignificant. Concert dress had long since conformed to the ideals of the Great Masculine Renunciation, so it makes sense that avant-garde concert dress would go even further in realizing those ideals: more stark, more neutral, more sober. And 20th-century avant-garde music, to an unprecedented extent, was a process-based movement—serialism to minimalism and everything in between—so one might expect its interpreters to take their fashion cues from the similarly streamlined and orderly world of the factory and the assembly line. But there’s something else going on with that parade of all-black, I think, and it is a bit of fallout from technological advance. And advance isn’t really the right word, in this case. We think of technological innovation as always being expansive, opening up possibilities and dimensions. But technological innovation also contracts dimensions. And the shadow of one of those contractions survives in all those black clothes.

One of the most sweeping changes wrought by audio recording and broadcasting technology was that, for the first time ever, music was no longer, by necessity, a visual as well as an aural experience. Music had always been heard only in live performance—which meant the listener was there, looking as well as hearing. (Even exceptions—Vivaldi’s female choristers singing behind a screen or Wagner’s enclosed pit orchestra or the like—were more like unusual variations of the visual context.) But with recordings and radio, the visual portion of musical performance disappeared. All one had was the sound. The technology decoupled eye and ear.

It is, actually, akin to the Great Masculine Renunciation. The process is the same: reduce a given medium—and remember, as Marshall McLuhan was fond of pointing out, clothes are just as much a form of media as any other—to its discrete components, isolate what is essential, streamline it into its most basic, direct form, cast away everything else. In this case, you have two media changing in tandem: concert dress evolved toward this extreme neutrality in order to better mimic the non-visual experience of music that recordings and radio increasingly made the norm. You could even argue that the music itself started to amplify this evolution, ever more focused on sound, how the sound is organized and produced, techniques and presentation styles following the sonic impetus toward abstraction. It echoed the favored toolbox—scientific, industrial, political—for making sense of what was turning out to be a very complicated world: divide and conquer.

*

Pythagoras; woodcut from the Wellcome Library, London.

The purest expression of philosophical allegiance to the sound-only experience was and is acousmatic music. The term was invented by Pierre Schaeffer, the French musique concrète pioneer, to describe the experience of hearing musique concrète, or any other sonic experience in which the source of the sound was hidden. The goal of acousmatic experience was to stop thinking about how the sound was produced and start noticing the sound itself, qualities and textures that might be elided or ignored in an audio-and-visual presentation. Schaeffer likened it to Pythagoras, the ancient Greek philosopher, supposedly lecturing from behind a veil in order to focus his students’ attention on the substance of his teachings. Thus, Schaeffer insisted, the modern technology of electronic sound reproduction was simply a recreation of ancient experience: “[B]etween the experience of Pythagoras and our experiences of radio and recordings, the differences separating direct listening (through a curtain) and indirect listening (through a speaker) in the end become negligible.”

Does it change the nature of Schaeffer’s thesis to note that the Pythagorean veil probably didn’t exist? The earliest references to it come long after Pythagoras’s time and make the veil more allegorical than real—an exclusionary implication, dividing Pythagoras’s followers into those who really got what he was teaching and those who didn’t. (Brian Kane has unraveled the Pythagorean veil—and much else—in his book Sound Unseen: Acousmatic Sound in Theory and Practice.) Then again, Schaeffer’s real, acknowledged philosophical reference point wasn’t Pythagoras. It was the phenomenology of Edmund Husserl.

Phenomenology is not an easily summarized thing, but at its core is the act of examining what exactly we perceive in order to bring to light ways we organize and narrate our perceptions. One of the better descriptions of the phenomenological process was given by Husserl’s disciple Maurice Merleau-Ponty, in his 1945 Phenomenology of Perception:

It is because we are through and through compounded of relationships with the world that for us the only way to become aware of the fact is to suspend the resultant activity, to refuse it our complicity…. Not because we reject the certainties of common sense and a natural attitude to things — they are, on the contrary, the constant theme of philosophy—but because, being the presupposed basis of any thought, they are taken for granted, and go unnoticed, and because in order to arouse them and bring them to view, we have to suspend for a moment our recognition of them…. [Phenomenological] reflection… slackens the intentional threads which attach us to the world and thus brings them to our notice[.]

It’s easy to see how Schaeffer’s acousmatic idea transfers this process into the realm of sound, veiling the relationship between a sound and its production in order to reveal how much of the sound’s nature gets lost in our compulsion to categorize it.

Sounds like a great idea, doesn’t it? But beneath that bright, objective surface is a nest of problems that can reiterate the sorts of presuppositions that phenomenology is meant to exorcise. Feminist interpretations of phenomenology, for instance, face the difficulty of Husserl’s idea of intersubjectivity, the assumption that other people will perceive and classify the objective world in much the same way I will. As it turns out, the “I” in that sentence is not incidental. As scholar Alia Al-Saji has written:

The consciousness that results is not only an empty, pure ego, it is also a universalized (masculine) consciousness that has been produced by the exclusion of (feminine) body, and hence implicitly relies on the elision of sexual difference. The phenomenological method’s claim to “neutrality” thus appears rooted in a form of double forgetfulness that serves to normalize, and validate, the standpoint of the phenomenological observer.

Johanna Oksala, similarly, acknowledges the suspicion “that the master’s tools could ever dismantle the master’s house.”

This might seem far away from the actual experience of music. But the thing to keep in mind is that to make some definition of the “actual experience” of music is, almost always, to make a claim of neutrality—to privilege one aspect of music (usually the sensual and aesthetic sense of timbre and rhythm and syntax) over another (usually the ramifications of the societal conditions under which the music is created or performed). And it runs into the same problem: who decides what’s essential? Every single categorical division I’ve been talking about—plain and fancy, sound and vision, parts and whole, past and present, musical and extra-musical—is similarly implicated. We call some kinds of dress sensible and some ostentatious because long-dead men (and only men) were locked in competition for who would be in and out of favor, and broadcast their convictions via the media of clothes. We analytically divide every human activity into component parts because the mechanical demands of industrial development got us in the habit. We separate aspects of musical performance by sense because a particular form of technology first did it for us, decades ago. We make divisions along lines that we never laid down.

*

From Daphne Oram, An Individual Note of Music, Sound and Electronics (1972).

Daphne Oram was temperamentally disinclined to make such divisions. Her work on Oramics turned into something resembling a new-age quest, a search for enlightenment at the boundaries of technology.  In 1972, Oram published a short book called An Individual Note of Music, Sound and Electronics. It is, on the one hand, a chatty, primer-like overview of basic ideas of sound synthesis and electronic music, but one that, at every possible opportunity, analogizes and anthropomorphizes its subject on the grandest possible scale:

In every human being there will surely be, as we have said, tremendous chords of wavepatterns ‘sounding out their notes.’ Do we control them by the formants we build up… by tuned circuits which amplify or filter? Are we forever developing our regions of resonance so that our individual consciousness will rise into being—so that we can assert our individuality? In this way does the tumult of existence resolve itself into a final personal waveshape, the embodiment of all one’s own interpretations of the art of living?

What emerges over the course of the book is that Oramics was conceptually inseparable from Oram’s critique of technology itself—but that technology could, indeed, dismantle and rebuild its own house.

If the machines, which replace the human interpreters, are incapable of conveying those aspects of life which we consider the most human, then… the machines will thwart the communication of this humanity. But need machines be so inhuman? Could we so devise a machine that, in the programming of it, all those factors which are deemed to be the most ‘human,’ could be clearly represented?

Her positive answer was the development of Oramics. Her vision of technology was—to put it as she might—one of additive, not subtractive, synthesis.

Oram ended up on the margins of the perceived mainstream of innovation, even as she pursued her uniquely holistic conception of technology. One can speculate as to why. She was too far ahead of her time for the BBC, and, perhaps, too far out of time for the electronic music community at large. Her machine was never finished. (“It is still evolving all the time,” she wrote, “for one lifetime is certainly not long enough to build it and explore all its potential.”) She had an all-or-nothing attitude—toward her work, her employers, her colleagues. She could be exacting, stubborn, single-minded, and other adjectives that would probably sound somewhat less pejorative if she had been a man.

But Oram also never got her due because she was singular, in a way that all the technocracies that make up society, explicit and implicit, couldn’t quite encompass or process. (“My machine does not really fit into any category,” she admitted.) For all her technological prowess, Oram was the opposite of what gets assigned technological value. She was integral. She was non-repeatable. She was non-modular. She was indivisible.

In the first article in this series, I wrote:

I’ve found that one really fascinating question to ask myself while listening to music that utilizes technology—old technology, new technology, high technology, low technology—is this: What’s being hidden? What’s being effaced? What’s being pushed to the foreground, and what’s being pushed to the background?

Oram is a reminder that it’s not just what gets pushed to the background. It’s also who.

Digital to Analog: Plug and Play

The Boston Daily Globe surveys the frozen scene, March 13, 1888.

Our latest blizzard in these parts hit New England while I was out of town for a wedding. The result was a lot of time on the phone: arranging new flights, arranging extended hotel reservations, and (having arrived back in Boston to a non-functioning car) arranging a tow through a somewhat overextended AAA. Did I mention the roof? The roof started leaking; start calling roofers.

What this means is that my wife and I have listened to a lot of telephone hold music over this past month: Muzak, soft rock, the allegedly calming strains of the most mainstream classical-lite repertoire imaginable. And it made me think of something that might be worth writing down, which is this: right now, in 2015, when technology is more amazing than it’s ever been, when what we call a “telephone” is, for most of us, actually a pocket-sized computer of sufficient power and capability that, twenty years ago, it would have been considered in the realm of science fiction—in spite of all this, telephone hold music is still defiantly and even hilariously low fidelity. It is still rendered back to the ear in the most tinny possible timbre.

Oddly, and surprisingly, that just might say something fairly deep and intricate about the history of music.

* * *

There’s one common feature to the way music has been made and experienced over the past century or so, a feature that cuts across genre and style, a feature so ubiquitous we don’t really have to think about it anymore. And it came into music by way of the telephone. It’s this:

Phone Plug

This is, of course, a quarter-inch phone plug. It’s what’s at the end of most patch cords. If you’ve ever worked with an electric guitar, or bass, or keyboard, or a modular synthesizer, or a mixing board—and so on—you’ve used this plug. If you’ve ever listened to music through headphones, you’ve used it as well—or its smaller, eighth-inch sibling. It’s a linchpin of amplification, recording—any musical activity that uses electricity.

It’s actually older than you might think—it’s certainly older than I thought it was. The familiar form of it dates from at least 1880: that’s when Charles E. Scribner applied for a patent for a “certain new and useful Improvement in Spring-Jack Switches” that included a diagram of a plug nearly identical to the one still used today.

Diagram from Charles E. Scribner’s spring-jack switch patent application.

The idea, though, goes back at least another couple of decades, to “plug-switches”: a metal contact and a metal spring—completing an electric circuit—and a metal wedge that one could insert between the two. Plug-switches came into common use with telegraphy; Scribner adapted them into the plug-and-jack arrays of telephone switchboards. (The etymology here preserves some technological history: the first telephone switchboards were just that, boards of manual switches that had to be flipped one by one; Scribner’s first try at a suitable plug-switch looked like a jack-knife, which is why we still call the connection a jack.)

The key part of the modern phone plug is the ring of insulation between the tip and the sleeve. It’s what lets both signal and ground flow through a single plug—the tip conducts the signal, the sleeve conducts the ground. (Add more rings of insulation and more interspersed metal rings and the plug can carry more conductors. A stereo plug, for instance, adds an extra ring between the tip and the sleeve.) The insulation—the gap—keeps everything separated, preventing short circuits, ensuring the flow of current.

The signal fidelity of the phone plug is pretty robust. But the drive behind the development of the phone plug wasn’t signal fidelity; it was efficiency. The phone plug—and the spring-jack—let more telephone connection points be packed into a smaller space, and let switchboard operators make (and break) those connections with a single physical gesture. And it let those connections be made again and again and again. The connection embodied in the phone plug is, in fact, at odds with the communicative connection of music. A connection made with a phone plug is reliable; a connection made via music is not.

* * *

"Hello! Telephones provide communication." From Gaston Serpette's "La demoiselle du téléphone," 1891.

“Hello! We are calling and giving the gift of communication.” From Gaston Serpette’s “La demoiselle du téléphone” (1891).

There have always been and always will be composers who tackle technology head on as subject matter. Gabrieli. Berlioz. Stockhausen. Tristan Perich has put the physical nature of computer technology front and center; Mikel Rouse has turned our 24/7 interaction with media technology into opera. It’s a rich, rich area of exploration.

But I find it most interesting when technology turns up in the music of composers who aren’t necessarily thought of as being particularly technologically minded, at least thematically speaking. Consider three examples, one older, two more recent:

Francis Poulenc’s 1958 La voix humaine, to a libretto by Jean Cocteau, is perhaps the most famous operatic telephone call in the repertoire, a one-woman tour de force presenting a love affair’s entire history and dissolution through a single, one-sided, technologically mediated (and, occasionally, sabotaged) conversation. Nico Muhly’s 2011 opera Two Boys (libretto by Craig Lucas) might be its descendant, a traditionally operatic tale of obsession and violence that instead swirls through the internet. Gabriel Kahane’s Craigslistlieder, a 2006 song cycle setting texts drawn from online personal ads, is precisely breezy, miniatures that capture something of the fleeting yet permanently preserved nature of online interactions.

In La voix humaine, Poulenc displays all his usual hallmarks of musical surrealism: the abrupt shifts, the use of pop music tropes to produce immediate but sometimes alienatingly oblique emotional beats, the cold comfort of standard progressions. The music of the internet in Two Boys is—at a slight but fascinating stylistic variance to the rest of the opera—the driving, rhythmically tiled common-tone harmony shifts of second-wave minimalism, ingeniously yoked to another style, plainchant: online rituals of communication as reenactments of perennial patterns. Craigslistlieder goes back further, to the aphoristic expressiveness of Romantic-era song, leveraging its touchstones of yearning and loneliness.

In other words, all three composers are not inventing new styles to illustrate their given technological connectivities, but adapting an older style that best encompasses what it is about each technology that they want to highlight. The interesting thing is that all those older styles can be heard as having their own, divergent technological antecedents. The technological precursor of Poulenc’s style was cinema, with its ability to disjoint space and time through framing and montage. (Poulenc and Cocteau’s transference of that disjointedness to their subject is casually echoed in the fact that the quintessential surrealist party game—the Exquisite Corpse—would be refracted into the more prosaic game of Telephone.) Both plainchant and minimalism have musical technologies in their genomes: notation for the former, recording and studio techniques in the latter. And Romanticism? I’ve always thought of Romanticism as reflecting the technology of the letter and the democratization of postal services: self-expression and the expressive fragment united into a potent, concentrated compound. All three works, then, as different from each other as they are, do for technology what classical music has always done: reinterpret the new in terms of the old, make the connection with the tradition.

This new/old relationship between music and technology has been around for a long time, often to the point that today we don’t even hear it anymore. Those Romantic letters, for instance: in Schubert’s Winter Journey: Anatomy of an Obsession, his new book on Franz Schubert’s Winterreise, Ian Bostridge makes a compelling point about the lied “Die Post,” namely that the jaunty horn calls implied in the piano part could, in Schubert’s time, have been heard as deeply ironic, the Romantic nostalgia traditionally attached to the sound of the horn here signaling the arrival of a disruptive new connective technology—the horse-drawn mail coach.

The paradox is that, at the same time, it’s the failure to connect that has been the characteristic expressive trope of classical music, from the entreaties of troubadours to the Byronic suffering-in-isolation of the Romantic era (epitomized by Winterreise) to the alienation of modernism. In La voix humaine, the signal is constantly being dropped or interrupted. The connections in Two Boys explain everything and nothing—the drama is in misunderstanding, not understanding. (The fact that the opera’s audience stand-in character, the police investigator Anne Strawson, is not more fluent or perceptive about the internet—something that came in for criticism in reviews of the piece—is actually one of the most operatic things about it, channeling an entire lineage of figures who can’t complete or decipher a communicative connection.) The ads Kahane set for his cycle are, literally, “Missed Connections.” This is the history of opera—all the way back to Orpheus, a signal acquired, then lost.

Not just opera: it is the history of music, forever communicating—what, exactly? But forever communicating, nonetheless, even as the message gets hopelessly lost in the translation to music. And it’s not a bug; it’s a feature. Music keeps a ring of insulation between eloquence and meaning. It’s what keeps the current, the power, flowing. It’s what makes the connection so immediate.

* * *

Advertisement ca. 1899 (via the Library of Congress).

“I Can Hear You,” the penultimate track on They Might Be Giants’ 1996 album Factory Showroom, was recorded on a wax cylinder, in the same manner that such recordings would have been made in the late 1800s. (The recording was made on a visit to the Thomas Edison National Historical Park in West Orange, New Jersey.) Song and technology combine into a crafty joke:

http://youtu.be/IZIUAhGbCcM

Like all the other music I’ve been discussing, the song is about communication technology—but, in this case, it’s a tribute to every such technology that privileged efficiency over fidelity. Sure, drive-through intercom systems have terrible sound quality, but they get the job done.

Telephone hold music is where this calculus between efficiency and fidelity breaks down: you can’t stop listening to how bad the reproduction is. But, then again, it gets the job done. The hold music for AAA of Southern New England, for instance, was a series of Mozart piano concerti—which I easily recognized, even though the piano sounded like an underwater glockenspiel, even though the strings groaned in and out of the mix like a squeaky hinge, even though the bass was practically non-existent.

It was, in other words, privileging structure and syntax over color and sensuality—or, at least, substituting a version of color and sensuality that was an awful lot more circumscribed and compressed than normal. Which, it turns out, is still a perfectly valid musical experience. I’ll be honest: I kind of got into it. I started to appreciate its weird, alien pings and pops. I started to hear just how little signal information you need to establish baselines for harmonic tension and resolution. I started wondering how you might go about writing a piece that would emulate these exact sounds and qualities. And I realized: people already have.
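
Emulating the sound, as opposed to the piece, takes only a few lines. The sketch below band-limits a recording to roughly the classic telephone voice band (about 300 to 3,400 Hz) and quantizes it coarsely; it is a generic approximation of narrowband audio, not a model of any particular phone system or hold-music pipeline, and the file names are placeholders.

import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

sr, audio = wavfile.read("some_recording.wav")   # placeholder input file
audio = audio.astype(np.float64)
if audio.ndim == 2:                              # fold stereo down to mono
    audio = audio.mean(axis=1)
audio /= np.max(np.abs(audio))                   # normalize to +/- 1

# Fourth-order Butterworth band-pass, roughly the telephone voice band.
sos = butter(4, [300.0, 3400.0], btype="bandpass", fs=sr, output="sos")
narrow = sosfiltfilt(sos, audio)

# Coarse, 8-bit-style quantization to flatten the dynamic range a little.
narrow = np.clip(np.round(narrow * 127.0) / 127.0, -1.0, 1.0)

wavfile.write("hold_music_sketch.wav", sr, (narrow * 32767).astype(np.int16))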

A fairly wide swath of the history of recorded and broadcast music was limited to something approaching a hold-music level of fidelity. Wax cylinders; acoustic recordings; early 78s; primitive radio—to our ears, they sound impoverished. But to contemporary ears (judging from contemporary accounts), they sounded amazing. And no wonder: the quantum leap from a completely ephemeral art form to one that could be fixed and reproduced ad infinitum is something we can’t really comprehend. What did it matter that the sound was brittle, stark, pointillistic?

Maybe a lot—because, around the same time, musical styles in those places where recordings and radio played began a turn toward brittle, stark, and pointillistic. Jazz, with its cranked, intricately syncopated drive, its characteristic rhythm-section foundation plucked, hammered, and struck. Neoclassicism, Romantic stock boiled back down to lean harmonies, bracing clarity, and bone-dry wit. Serialism, structure and syntax schematized into the spotlight, pitch and rhythm as points on a grid. It’s almost as if musicians listened to those early recordings and began to hear music from another angle, one stripped of sonic plushness but alive with the give-and-take of musical grammar, and realized that such give-and-take in itself could be a playground for expression.

Which means that another obsolete technology gets preserved in the repertoire and the toolbox: a style to be channeled, or adapted, or rejected, but holding at its core the substance of a long-ago technological advance, alongside Schubert’s postal delivery, Poulenc’s telephone, and—in future times—Muhly and Kahane’s internet. We don’t think we’re writing and playing and hearing an archeology of technology, but we are.

Digital to Analog: Poems and Histories

[Richard Monckton] Milnes brought [Thomas] Carlyle to the railway, and showed him the departing train. Carlyle looked at it and then said, “These are our poems, Milnes.” Milnes ought to have answered, “Aye, and our histories, Carlyle.”

—Ralph Waldo Emerson, Journals

At the height of the Iraq War, the United States Department of Defense spent over three billion dollars a year to neutralize technology I carry in my pocket. That was, at one time, the annual budget for the Joint Improvised Explosive Device Defeat Organization (JIEDDO), formed in 2006 as a clearinghouse for Pentagon and private contractor efforts to jam the electronic signals that were being used to trigger the IEDs that were causing the majority of U.S. casualties in Iraq and Afghanistan. (And JIEDDO represented only a portion of the expense.) One of the more common sources of such signals was, and remains, the cell phone. A couple of wires, some explosive material, some screws or other bits of metal, and my phone—or yours—can be made into a shrapnel-filled bomb.

* * *

I’m going to guess that improvised explosive devices were not on Andrew Pekler’s mind when he conceived his 2013 installation The Prepaid Piano. Pekler—a USSR-born, California-raised, Berlin-residing electronic-music polymath—put five mobile phones, each set to vibrate, directly on the strings of a grand piano, in five different places. Audience members were then free to call any of the phones, either from their own phones or from phones provided in the hall; contact microphones on the piano’s soundboard then passed the vibrations over to a modular synthesizer, which looped and altered the sounds, the loops changing with the proliferation of incoming calls, while more direct interventions—knocking the case, plucking the strings—provided their own cycles of punctuation.

As documented on the 2014 LP The Prepaid Piano & Replayed (co-released by the UK-based Entr’acte and the Italy-based Senufo Editions), the result is more extremely sophisticated lark (in the John-Cage-as-trickster-sensei spirit of the punning title) than ripped-from-the-headlines commentary. The amplified sounds crackle, pop, and metallically purr; the synthesis ropes it all into a loping grind. It’s engagingly textured, fun, maybe a little melancholy in its slow-rolling machinery, but still a long way from any evocation of the more violent technologies that rend the world on a daily basis. But consider the elements of The Prepaid Piano: cell phones, wires, screws, electricity.

The trope of regarding technological advances—particularly those that enable or shape connections among people—as inherently insidious is so ingrained that it’s almost reflexive at this point. But all technology is both useful and dangerous, with human behavior tipping that balance to one side or another. Usefulness usually wins out: the more convenient a technology is, the more risk we’re liable to accept in adopting it. Cell phones embody that—they’re so useful that it’s hard to remember (or imagine) what life was like before they were prevalent; but, then again, they’re damaging enough that people inevitably wonder whether that previous life wasn’t, in fact, better.

Cell phone technology is particularly notable because the most crucial part of it—the cellular network itself—is completely unseen. You’d be hard-pressed to design a better allegory for the good/bad potential of technological advance than the cellular network. It is ubiquitous and invisible. It holds the potential for a connection to the world and a harsh, bloody severance from it. And it is everywhere, all around us, all the time.

* * *

On January 17, pianist Vicky Chow gave a recital at Northeastern University’s Fenway Center. Chow is best known as the pianist for the Bang on a Can All-Stars and is a standing member of other new music groups as well. For this concert, though, most of the collaborators were virtual. Christopher Cerrone’s Hoyt-Schermerhorn, for instance, a 2010 meditation sparked by a long wait for a subway train, layered in a third rail of digital processing, the upper end of the keyboard triggering glitchy, distorted echoes over a gentle, subterranean meandering of parallel tenths. Hoyt-Schermerhorn was a Boston premiere, as was Steve Reich’s Piano Counterpoint, the 1973 ensemble piece Six Pianos re-arranged (by Vincent Corver) for a solo pianist playing along to four pre-recorded tracks; Chow’s snap-tight rhythm and technique, along with the timbre—brighter than the original—re-emphasized the music’s mechanical churn (as well as its sense of a very 1970s-NYC prescribed commotion, echoing Stephen Sondheim’s “Another Hundred People” from Company, a testament to the strength of that zeitgeist).

Ronald Bruce Smith’s Piano Book was a world premiere. Smith (a Northeastern professor) pulled out most of the traditional recipes for disguising the piano’s decay—trills, scales, Debussy-like flourishes, an entire section riffing on Baroque-style ornaments—and étude-like tricks for keeping more than two registers in play with only two hands. (Chow juggled it all with flair.) But amplification and processing were present here, too, electronically stretching the piano’s resonance and pedaled sustain into thick, soft clouds of sound. It struck me that all the technology was serving a purpose similar to that of the cellular network: it was making the piano more musically convenient, expanding its palette, increasing its capability, not just disguising its quirks but electronically eliminating them.

The finale, John Zorn’s 2014 Trilogy (another Boston premiere), seemed, at first, to cast all that aside. The collaborators here were human—bassist Trevor Roy Dunn and drummer Ian Ding—and the electronic mediation was limited to the sort of basic amplification one would use for the ensemble being evoked, a standard jazz trio. But Trilogy is trickier than it seems: Chow was playing from a fully through-composed part while Dunn and Ding improvised around her, an illusion of jazz, punctiliousness and freedom blurred together, almost imperceptibly. Zorn, it turns out, was playing with technologies, too, just much older ones: musical notation and improvisation, using the one to expand the other just as the other three works on the program used processing and playback to expand the piano’s possibilities.

One effect of it all was to render another, rather sophisticated piece of technology largely invisible—that is, the piano itself. Pianos are complex, ingenious, immensely satisfying pieces of engineering. So are all acoustic instruments, in their own ways—decades or, in some cases, centuries of incremental improvements yielding machines of remarkable and efficient expressivity. And yet, for the better part of a century, that development has largely been frozen. The piano Chow was playing was not appreciably different from one Rachmaninoff would have played. The persistent presence of old repertoire in classical music has enshrined acoustic instruments’ virtues and limitations as equally sacred.

I can appreciate the expressive potential of preserving an instrument’s seeming imperfections—the piano’s inability to sustain a tone much past a few seconds, for example, has probably fueled as much compositional creativity over the past two hundred years as any aesthetic revolution. But, then again, that preservation has been going on for the entirety of my musical life and much longer, so of course I would find a way to get used to it. Good and/or bad, it is one of the defining characteristics of classical music now. Part of that is classical music’s great boon and burden, the weight of history: to know that the great virtuosi of the past played essentially the same instruments that we do is a powerful connection. And I would guess that’s why the dominant use of electronics in more-or-less-classical new music in the 21st century is still in tandem with the old acoustic instruments. One technology is layered over with another: strata of innovations.

* * *

The Prepaid Piano & Replayed, front cover

The flip side of The Prepaid Piano & Replayed turns those layers into a palimpsest, effacing its acoustic, site-specific nature by way of Ableton Live’s audio-to-MIDI converter; the original recording, thus transformed, becomes a stream of instructions to a synthesizer. The virtual transfers enable Pekler to treat digital technology in the same, expressively-mine-the-imperfections way that generations of classical composers and performers have treated acoustic technology: the complex, noisy nature of The Prepaid Piano is, as Pekler admits, ideally designed to bring out the limitations of audio-to-MIDI. In a way, it highlights how much of the piece exists at the edge of so many less obvious musical technologies, especially those surrounding communication: composer to performer, performer to audience, audience to performer, and so forth. The Prepaid Piano & Replayed was issued as a limited edition of 300 vinyl copies—music designed around infinitely distributable wireless and digital means packaged into a rare and resolutely physical object.
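As a thought experiment, here is what the barest version of that transformation might look like—emphatically not Ableton Live’s converter, which handles polyphony and is proprietary, but a rough monophonic analogue sketched with the librosa and mido Python libraries (the file names, tempo, and velocity are all hypothetical). The lossiness Pekler leans into is easy to see: every crackle, overtone, and screw-rattle gets flattened into a bare stream of note numbers.

```python
# A rough, monophonic sketch of the audio-to-MIDI idea -- an illustration only,
# not Ableton Live's algorithm. Input and output file names are hypothetical.
import librosa
import mido
import numpy as np

y, sr = librosa.load("prepaid_piano.wav", sr=None, mono=True)

# Frame-by-frame fundamental-frequency estimates; unvoiced frames come back as NaN.
f0, voiced, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("A0"), fmax=librosa.note_to_hz("C8"), sr=sr
)

hop = 512  # pyin's default hop length (frame_length // 4)
ticks_per_frame = int(mido.second2tick(hop / sr, 480, mido.bpm2tempo(120)))

mid = mido.MidiFile(ticks_per_beat=480)
track = mido.MidiTrack()
mid.tracks.append(track)

prev_note, delta = None, 0
for hz in f0:
    note = int(round(librosa.hz_to_midi(hz))) if not np.isnan(hz) else None
    if note != prev_note:
        if prev_note is not None:  # end the note that was sounding
            track.append(mido.Message("note_off", note=prev_note, velocity=0, time=delta))
            delta = 0
        if note is not None:       # start the new one
            track.append(mido.Message("note_on", note=note, velocity=80, time=delta))
            delta = 0
        prev_note = note
    delta += ticks_per_frame
if prev_note is not None:
    track.append(mido.Message("note_off", note=prev_note, velocity=0, time=delta))

mid.save("replayed.mid")  # hand this to any synthesizer to "replay" the recording
```

Even in this toy form, the design choice is clear: whatever the pitch tracker cannot confidently hear simply disappears—precisely the kind of limitation a piece as noisy as The Prepaid Piano is built to expose.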

For me, what Pekler’s project and Chow’s recital had in common was that they both prompted consideration of a particular feature of technology, musical technology in this case, but applicable to all technologies: the technology you notice is almost always, at the same time, pushing another technology into the unnoticed background. In that regard, technology isn’t entirely neutral, at least at first glance: the interface is always compressing the data, some information in sharper focus than other information. And I’ve found that one really fascinating question to ask myself while listening to music that utilizes technology—old technology, new technology, high technology, low technology—is this: what’s being hidden? What’s being effaced? What’s being pushed to the foreground, and what’s being pushed to the background?

In the coming months I want to explore some byways of how technology—cutting-edge or not—is being used in new music. Part of that story is already history; part of it is still, and always, being written. The quote at the top of this article, about poet Richard Monckton Milnes and historian Thomas Carlyle observing the trains, can be a bit of a guiding light. Emerson (who knew both Milnes and Carlyle) recorded it in his journals in 1842, when steam-powered rail travel was less than twenty years old. What was then Carlyle’s (and, in Emerson’s imagination, Milnes’s) friendly reproach to get with the times now reads as an image of how technologies, as they become obsolete, can move entire systems of thought into a kind of limbo, passed by but still there. Humphrey Jennings, documentary filmmaker and general Renaissance man, included Emerson’s story in his extraordinary, unfinished, posthumously published anthology Pandæmonium 1660-1886: The Coming of the Machine as Seen by Contemporary Observers. Jennings commented on the passage:

It was in this year 1842 that J. C. Doppler noticed the differing pitch of train whistles—advancing and retiring—and proposed, by analogy, the Doppler effect in the spectra of certain stars.

Sounds—and music, and technologies—come and go, but even their coming and going is its own kind of testimony.

This Year’s Model (or, That’s What They Don’t See)

In C #11.

1.

Heinrich Heine, from Die romantische Schule (1835): “[E]very epoch is a sphinx which plunges into the abyss as soon as its problem is solved.”

2.

The most telling commemoration of the 50th anniversary, last month, of the San Francisco premiere of Terry Riley’s In C—maybe not the most sublime or the most grand, but the most telling—was the In C iPad app. Developed by Matt Ingalls and Henry Warwick and released by the software firm Sonomatics, the app lets you conjure up a performance of In C all by yourself. It is compulsively fun. Up to twelve different virtual “performers” work their way through the piece’s 53 fragments, moving from one fragment to the next at the tap of a finger. Each channel is completely customizable: volume, instrumental sound, transposition. The level of control is exhaustive.
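Purely as an illustration of how spare that machinery is, here is a toy model—my own sketch, not the Sonomatics app’s code—of the mechanics just described: a dozen independent channels, each with its own volume, instrument, and transposition, each looping one of the 53 fragments until a “tap” advances it, the whole thing ending only when everyone reaches the last fragment.

```python
# A toy model of the In C app's mechanics as described above -- not the app's
# actual code. Each "performer" loops one of 53 fragments and advances on command.
import random
from dataclasses import dataclass

NUM_FRAGMENTS = 53

@dataclass
class Performer:
    name: str
    fragment: int = 1        # which fragment this channel is currently repeating
    volume: float = 1.0      # per-channel settings, as in the app
    transpose: int = 0       # semitones
    instrument: str = "piano"

    def advance(self):
        """Move to the next fragment (the 'tap of a finger')."""
        if self.fragment < NUM_FRAGMENTS:
            self.fragment += 1

class Ensemble:
    def __init__(self, size=12):
        self.players = [Performer(f"player {i + 1}") for i in range(size)]

    def finished(self):
        # Per the score, the piece ends once everyone has reached fragment 53.
        return all(p.fragment == NUM_FRAGMENTS for p in self.players)

    def simulate(self):
        """Tap players at random until the whole ensemble reaches #53."""
        taps = 0
        while not self.finished():
            random.choice(self.players).advance()
            taps += 1
        return taps

if __name__ == "__main__":
    print(Ensemble().simulate(), "taps bring twelve players to fragment 53")
```

In the sketch the taps land at random; in the app, of course, every tap is yours.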

3.

The mere fact that we have been celebrating the 50th anniversary of In C (the app is just the tip of an iceberg of performances, recordings, and tributes) indicates a collective decision to exempt the piece, in a way, from Heine’s rule. In C has been cantilevered over the abyss of its own musical epoch. It’s hardly the first piece of music to have been thrown such a lifeline. There’s a whole group of such works. We call it the canon.

4.

Back in 2009, when the 45th anniversary of In C was marked with a massive Carnegie Hall performance, and when Robert Carl’s essential study Terry Riley’s In C was published by Oxford University Press, Carl—in an interview on this site—both acknowledged and deflected the work’s canonization:

I should mention that it still is in many ways a prophetic work, because it’s one of the greatest examples of human beings getting together to agree to do a communal action where they maintain their individuality yet with a prescribed goal. I think it’s a political statement. More than Cage, I think it’s an example of a structured anarchy which is very positive. And as such I think it remains a useful idealistic model for us in terms of all the political and social issues that we have.

5.

By sidestepping the communal premise of In C—its prophetic core—the In C app provides only the most obvious example of the way canonization has both expanded and contracted the interpretive space around the piece. Maybe that’s one small advantage to such anniversaries: you can see past the incremental nature of the journey to just how far you’ve gone. In C at 50 is a long way from In C in 1964. The one-time experiment is now a favorite ritual. The manifesto is now scripture. The cooperative happening is now a one-player computer game.

6.

None of this changes In C, the piece of music, its ever-shifting cloud of melodic fragments—each player repeating each fragment at will, moving to the next fragment at will, a haze of tonality kept in sync by the bright, insistent high-C pulse (suggested by Steve Reich, one of the original performers). And all of this changes In C, the musical work, the harbinger of the minimalist style, the marginal experiment that is now, in spite of itself, a monument.

7.

Classical music canonization tends to be only gradually bestowed and, while not necessarily a permanent label, it’s still an impressively sticky one, historically speaking. In comparison, pop music has made canonization a disposable consumer good. The prototype of pop-music success is a kind of unabashedly ambitious, near-instantaneous, omnipresent, and utterly temporary canonization. It’s commodified canonization; but, then again, one of the neat things about commodification is how it brings out the most essential and—in the literal, mathematical sense—radical aspects of whatever is being commodified. Which is why I’m going to talk a little bit about Taylor Swift.


Screen shot from Taylor Swift’s “Shake It Off”

8.

Listen to the hi-hat samples. In Pharrell Williams’s “Happy,” the inescapable pop hit of the first half of 2014, the underlying four-beat drum loop features an open hi-hat on the third beat. In Taylor Swift’s “Shake It Off,” the inescapable pop hit of the second half of 2014, the underlying eight-beat (but nevertheless extremely similar) drum loop features an open hi-hat on the first beat. But the pitch of the hi-hat in “Happy” is flat, neutral. The pitch of the hi-hat in “Shake It Off” rises.

9.

Those hi-hats might sum up the difference in mood between “Happy” and “Shake It Off”—ineffably laid back vs. insistently upbeat. But, in “Shake It Off,” that slight upward sizzle takes on added significance because of the comparative stasis of the rest of the song. Like “Happy,” “Shake It Off” never modulates; in fact, it goes “Happy” one better by never even changing its harmonic progression. Verse and chorus are a regular tread—ii-IV-I-I, ii-IV-I-I, in sæcula sæculorum. That fillip of hi-hat is the only upward trend in the song. Maybe it can be heard as the equivalent of a safe, prudent investment: the unchanging status quo dotted with a periodic, predictable appreciation of interest.

10.

Is that too much? That last paragraph could well be a parody of the sort of writing that has sprung up in response to Swift and “Shake It Off.” But it says something about the cultural place and purpose of the song that such writing is so easy. Money has been so bound up with the publicity around the song, the commentary about the song, the mere fact of the song, that it is difficult to not hear financial considerations wending their way through the production. Taylor Swift, after all, is the center of a formidable corporate enterprise. The discourse around “Shake It Off” and 1989, the album featuring it, has never been far from industry matters. The album’s completion of Swift’s turn from country-pop to pop? An occasion to analyze the navigation of genre-based and artist-based fan bases. The impressive rate and quantity of the album’s sales? An invitation to make a state-of-the-industry address. Swift’s much-noted decision to pull her music from the streaming service Spotify? An indictment/day of reckoning for streaming services as a whole. And so forth. This is what it means to be located within the cultural establishment, when the values of the establishment are congruent with those of the market.

11.

The disparity between the countercultural trappings of rock and pop and their indubitable position as the soundtrack of the current American establishment has, by this point, been noted to a probably adequate extent. Interestingly, though, like any such canonization (aspirational or actual), pop canonization tends to push interpretations of the canonized repertoire to extremes. This might be why “Shake It Off” loudly, proudly, and a little bit obsessively asserts a kind of critical unassailability. Haters gonna hate. I’m just going to shake it off. (It cannot possibly be a coincidence that each phrase of the chorus is punctuated with an “ooh-ooh-ooh” more-or-less directly lifted from Cee-Lo Green’s “Fuck You,” the musical version of nonchalantly scratching your nose with your middle finger.)

12.

In an odd way, this places “Shake It Off” in an orbit shared by In C: music defined, in part, by going against the critical wind, whether as a pose or an aesthetic choice. But that creates interesting dissonance—immediate in the one case, a long time coming in the other. “Shake It Off” debuted at #1 on the Billboard Hot 100 chart; In C is now an acknowledged masterpiece. One way or another, the counterculture becomes the culture.

13.

Sometimes, fans or critics or marketers or, occasionally, artists themselves will try to elevate a pop canonization into something closer to a classical canonization. The results, even if on some level reasonable, nevertheless usually come off as clashing. To liken (as both Leonard Bernstein and Ned Rorem did) The Beatles to Franz Schubert is, as much as anything, an example of how canonization has distorted both, how it has added another screen to the way we interface with their music.

14.

A more recent example: this past autumn, Sr. Cristina Scuccia, an Ursuline nun and winner of the Italian version of the singing-competition television show The Voice, released a cover of Madonna’s 1984 single “Like a Virgin.” Scuccia characterized the song in the kind of universalizing, levels-of-meaning terms that often accompany efforts at canonic enhancement. “Reading the text, without being influenced by previous interpretations, you discover that it is a song about the power of love to renew people [and] rescue them from their past,” she said. The Rome-based, bishop-affiliated Servizio Informazione Religiosa was not convinced, calling the recording a “reckless and calculated commercial operation.” Scuccia and her churlish detractors speak to our instincts: we can tell when canonization is happening on terms too contrived, too tangential, or too far removed from artistic quality. Or can we?

15.

Peter von Cornelius, Holy Family with John the Baptist as a Boy

This is a drawing of a Madonna by Peter Cornelius (1784-1867), the most famous of the Nazarene school of painters. The Nazarenes were a group of 19th-century German and Austrian artists. For many years, they lived together in an abandoned monastery in Rome. A Madonna or two was practically a password to get into the Nazarenes: they were inspired by Renaissance artists (Raphael, in particular) and devoted to religious subjects and imagery. (And not just in their art; the name of the group was a mocking reference to the artists’ habit of growing their hair and beards long, in a Jesus-like manner.) There is a good chance you have never heard of the Nazarenes. In the modern narrative of art history, they are mentioned only in passing, if at all. But in their own time, they were quite popular. By the 1850s, according to French poet and critic Théophile Gautier (in his survey Les Beaux-Arts en Europe), Peter Cornelius was an institution. “Rarely has an artist enjoyed in his lifetime a glory equal to that of M. Pierre de Cornelius,” Gautier wrote. “He is admired as if he were already dead.”

16.

Heinrich Heine disdained the Nazarenes. In his younger years, Heine had studied drawing with Cornelius at the Dusseldorf academy and recognized the artist’s technical skill; nevertheless, Cornelius’s works seemed “to have been painted on Good Friday, while the doleful songs of the processions swept through the street, and re-echoed in the atelier and in the heart of the painter,” as Heine wrote in his Reisebilder. Cornelius’s figures are “drawn with dream-like accuracy, powerfully true, only they want blood-throbbing life and colour. Yes, Cornelius is a creator; but if we look at his creations it seems to us as though they could not live long; as though they were all painted a few hours before death; as though they all were prophetic signs of approaching dissolution.”

17.

It is impossible to separate Heine’s characterizations of the Nazarenes from his—and their—political station. Their ascetic, otherworldly trappings notwithstanding, the Nazarene painters were supported by the conservative Prussian powers-that-were of the time, in the form of commissions, patronage, and official positions (for instance, that Dusseldorf post for Peter Cornelius). Heine was an outsider: a Jew, a Saint-Simonian, a critic of aristocratic, reactionary privilege. His books were banned. For the last 25 years of his life, he lived in exile in Paris. That the Nazarenes found favor with the regime that rejected Heine inevitably figured into his criticism.

18.

Heine eventually expanded his view of the Nazarenes into a universal category. In his 1841 book Ludwig Börne. Eine Denkschrift, Heine divided the world into Nazarenes and Hellenes—the former moralistic, devoted to concepts, divorced from feeling, and oppressive; the latter sensual, hedonistic, alive, authentically human, and free. (The book itself is a chronicle of a friendship dissolved. Ludwig Börne was a pioneering journalist, an afflicter of the comfortable, a muckraker before the term was invented; despite their similar political leanings, the two eventually fell out over what Heine saw as Börne’s excessive Nazarene tendencies. The posthumous tribute was, perhaps, testimony to Heine’s recognition that Börne—and, indeed, Heine himself—could not be so easily categorized.)

19.

Heine was not sanguine about the contest between Nazarenes and Hellenes. Near the end of his life, he published a darkly whimsical essay called “The Gods in Exile,” a series of vignettes describing the supposed historical fates of the Roman gods once they were cast aside in favor of Christianity. Apollo, for example, after many centuries, was working as a shepherd in Lower Austria. But the beauty of his singing gave away his true identity, and he was hauled before the Inquisition and convicted. Asked if he had any last requests, the god sought only his lyre and the chance to sing one more song. The music was so beautiful, and the performance so exquisite, that all who heard it—particularly the women—were overcome with emotion. As a result, Heine reports, the authorities made sure to drive a stake through Apollo’s corpse, just in case such power was, in fact, due to vampirism.

20.

But Heine’s criticism of the Nazarenes was at least prophetic: the work of Cornelius and his confederates faded into obscurity with alacrity. And so we remember Heine, because he saw something that history later confirmed. Or do we remember Heinrich Heine because his poetry was set by Mendelssohn and Schumann and Brahms? Do we put more stock in his opinion of the Nazarenes because he got into the canon by dint of his other writing?

21.

Or—do we remember Heine because his art criticism was coming from a specific political place, one that made later political theorists more readily adopt his artistic opinions? Heine’s aesthetics were echoed by Ludwig Feuerbach, whose works, in turn (along with Heine’s own), were a crucial source for Karl Marx. As Margaret Rose put it in her book Marx’s Lost Aesthetic

the Nazarenes… are of central importance to an understanding of Marx’s critique of the cultural politics of his time as well as to the description of other critiques, such as those given by Heine, which helped to form Marx’s ideas on the patronage given to the arts in early nineteenth-century Prussia

—which provided the basis for a Marxist, materialist theory of art and aesthetics that became increasingly influential as Marxism itself did.

22.

In other words: the historical fate and reputation of the Nazarenes, their journey from favored to forgotten status, their place (or lack thereof) in the canon—a lot of it was fueled by forces and considerations a long way from the actual paintings. The Nazarene painters might have welcomed the patronage of the Prussian aristocratic establishment, but it’s not like they sought it out with their aesthetic choices. Heine, for that matter, might have been just as dismayed with that part of his own canonization: the overwhelmingly conceptual nature of classical Marxism is about as Nazarene as it gets.

23.

The idea that success is as much of a trap as failure is an old one. In his Denkschrift, Heine quotes an especially wry observation from one of Ludwig Börne’s letters. Börne notes that, with unrest on the increase across Europe, crowned heads have been particularly concerned over the safety of their porcelain factories. “You have no idea, my dear Heine, how having fine porcelain keeps one in check,” Börne writes. “Look at me, I once was wild, with little luggage and no porcelain. But with possessions, and especially with fragile possessions, comes fear and bondage.” The fear becomes pervasive: “Truly, I feel like the damn porcelain inhibits me in writing, I am so mild, so careful, so anxious….”

24.

Literature’s best-known Nazarene painter is Adolf Naumann, a minor character in George Eliot’s Middlemarch. Dorothea Brooke and her husband, the pedantic clergyman Edward Casaubon, meet Naumann and his friend, Casaubon’s young cousin Will Ladislaw, while honeymooning in Rome. Naumann insists on having Casaubon model the head of St. Thomas Aquinas for a large allegorical painting. Having been thus elevated to the pantheon, Casaubon and his wife return home, where Casaubon alters his will—suspicious of the friendship between Dorothea and Will Ladislaw, Casaubon decrees that, should the two marry, Dorothea will forfeit his property—and then suddenly dies. His legacy, in essence, imprisons the characters for the rest of the novel.

25.

Elvis Costello, from “Living in Paradise” (1978): “Meanwhile up in heaven they are waiting at the gate, saying, ‘We always knew you’d make it, didn’t think you’d come this late.’”

26.

In C was eligible for the 1965 Pulitzer Prize in music, but Terry Riley did not win the award. Nobody did. In what has become the most infamous snub in modern musical history, the jury (Winthrop Sargeant of The New Yorker, Ronald Eyer of Newsday, and Thomas Sherman of the St. Louis Post-Dispatch) recommended that, in lieu of the usual prize, Duke Ellington be given a special citation, a recommendation the Pulitzer board chose to ignore. Sargeant and Eyer resigned from the jury and made the story public.

27.

The nomination letters the 1965 Pulitzer music jury sent to the board, particularly the ones from Sargeant and Eyer (Sherman seems to have been content to defer to the more insistent New Yorkers), are fascinating reading—which was probably part of the problem. The most fascinating is the one by Sargeant. “A few weeks ago” (Sargeant begins) “a cellist-composer named Charlotte Moorman gave a concert in which she played the cello, shot off a pistol and dived, fully clothed, into a tank of water. This all before a small audience.” (This, incidentally, is a conflated reference to two of Moorman’s collaborations with Nam June Paik: the Variations on a Theme by Saint-Saëns and Paik’s arrangement of John Cage’s 26’1.1499” for a String Player.) Sargeant goes on: “There is a great deal of this sort of thing going on, and, in our preliminary meeting, Mr. Eyer, Mr. Sherman and myself decided that such things should be eliminated from competition for the Pulitzer Prize.” In somewhat more perfunctory language, Eyer concurred: “Inconclusive and frankly experimental music could not be considered.” The Ellington award would have been, in part, a riposte to what the jurors saw as the unacceptably avant-garde tendencies of American new music: the canon as a zero-sum game.

28.

What’s more: both Eyer and Sargeant—in their nominating letters, mind you—went out of their way to disparage the one Ellington piece that would have been eligible for the award, the Far East Suite. They couldn’t even be bothered to get the name right. Eyer: “[W]e wish to make it clear that we do not take into account his most recent composition, ‘Far Eastern Suite,’ introduced here recently, which we do not consider comparable to the best of his output.” Sargeant: “The only composition of his that had a first American performance during the past season was something called ‘Far East Suite’ or ‘Impressions of the Far East’ or something like that. We heard tapes of it, and found it to be inferior to the best Ellington.”

29.

Sargeant and Eyer were right about Ellington, and wrong about Ellington. Criticism of any cultural artifact is (often correctly) deflected by accusing the critic of failing to understand, of missing the point, of not getting it. But approbation can come from a very similar place. Admittance to the canon is sometimes on grounds far removed from what the creator intended. And there’s nothing you can do about it. Praisers gonna praise.

30.

(Interestingly, some of Ellington’s (scrupulously private) anger over the Pulitzer incident focused on Sargeant and Eyer’s decision to go public. “Why the fuck,” Mercer Ellington remembered his father saying, “did it have to get in the newspapers?” Maybe he realized that the publicity would, in effect, forestall any future reconsideration on the part of the Pulitzer board, that the jury’s unorthodox canonization would fix that part of Ellington’s stature as permanently as a proper canonization would.)

31.

For the 1965 music Pulitzer jury, the concept of certain works, the idea behind them, automatically disqualified them from consideration, regardless of the musical production. Similarly, the idea of giving Duke Ellington a Pulitzer Prize trumped their opinion of the musical substance of his only eligible work.

32.

“Shake It Off” and In C are both conceptual works: the idea of them is as important as their musical substance. For “Shake It Off,” the concept is the assertion (or, maybe, the revelation) of Taylor Swift’s established persona as completely congruous with 21st-century pop music. The importance of In C is inseparable from its status as a foundational document—“the Sacre of minimalism,” as Robert Carl aptly put it. On the surface, the production of each piece—the strategy by which the substance expresses the concept—is divergent: deliberately loose parameters in In C, intensive control in “Shake It Off.” (Like everything else in the song, nothing about those rising-pitch hi-hats is happenstance.) But the concept still governs the production.

33.

Like the videos for other Taylor Swift songs, the video for “Shake It Off” is designed, first and foremost, to make us like Taylor Swift. Swift appears in a series of established dance contexts: classical ballet, breakdancing, twerking, modern dance, finger tutting, and so on. In each, she is the odd girl out, an unskilled and clumsy interloper in a troupe of disciplined experts. The final scenes feature Swift and a group of likewise untrained, “normal” people dancing with a kind of freeform giddiness, sealing the video’s inverse relationship between expertise and likability, between proficiency and fun.

34.

More than a couple of commentators have caught more than a whiff, however unintentional, of cultural appropriation in Swift’s goofing on the more African-American-associated dance styles. But there’s a divide of class and race to be found in the very idea behind the video: that Taylor Swift, a blond white girl, would be considered likable for her lack of skill stands in contrast to the generations of African Americans who grew up knowing they would have to be “twice as good” in order to succeed (in the more cynical version of the saying, “twice as good to get half as much”). At the very least, it is a reminder that performing some act of skill in an amateur, casual way is, on some level, an act of privilege.

35.

(Duke Ellington, from “Ninety Nine Percent” (1963): “Ninety-nine percent / won’t do / ninety-nine-and-a-half / ain’t enough / If you love yourself, be good to yourself, be one-hundred-percent wise / that’s the ticket to heaven, and there is no compromise.”)

36.

Robert Carl begins his study of In C by emphasizing how “transgressive” the piece was in the context of its time: its pulse, its tonally centered modality, its open instrumentation and form. “And perhaps most threatening to a sense of professionalism in the classical avant-garde,” Carl adds, “it welcomed performers of varying levels; one did not need to be a virtuoso to participate in a successful performance.” Nevertheless, as Riley continued to perform the piece, he did feel the need to set down a baseline of proficiency; what was once a single page of music is now, in the current edition of the score to In C, a single page of music and two pages of instructions: “It is important not to race too far ahead or to lag too far behind.” And: “All performers must play strictly in rhythm and it is essential that everyone play each pattern carefully.” And: “It is advised to rehearse patterns in unison before attempting to play the piece, to determine that everyone is playing correctly.” (S. Alexander Reed, in his 2011 article “In C on Its Own Terms: A Statistical and Historical View,” has ingeniously analyzed how, over the years, Riley’s additional instructions—both in print and in rehearsals for various performances—have gradually massaged the piece into a more specific structure, the arrangement of the modules’ diatonic and chromatic pitches made to reveal a traditional arch form infused with forward-directed, goal-oriented momentum.)

37.

The rehearsal process for that November 1964 premiere of In C was catch-as-catch-can, with a different subset of performers at every practice. Interviewed for Carl’s book, Riley remembered one of the last rehearsals before the premiere. “[W]e had one which was almost everybody, including a couple of hippies who came in off the street, and who tried to blow over it, and Steve [Reich] threw them out, because he was totally intolerant of anything like that,” Riley said. “I would have probably let them play!” Preparation and improvisation, expert and dilettante, belonging and exclusion: Michel Foucault would have loved that story.

38.

(In C, in fact, grew out of an idea Riley had for a fully notated piece; frustrated with that effort, Riley, in a “flash,” realized that he could design a similar piece that would work on the principle of controlled improvisation. An irony: Riley had hoped that the abandoned, through-composed piece would be performed at the Monterey Jazz Festival; its descendant, the partially improvised In C, became a classical music standard.)

39.

One of the hardier internet memes of the past couple years has usually gone under some variation of the moniker “You Only Had One Job”—a photo of some incredibly mundane task that some person has nevertheless still managed to screw up. If you wanted a quick pop example of how deeply corporate, assembly-line assumptions and value systems now pervade every human interaction, you could do a lot worse than the fact that the humor of the “You Only Had One Job” meme almost always centers on the incompetence of the worker, and never on the absurd monotony of the work.

40.

It’s possible to hear In C as a triumph of industrial efficiency. A single page of short, simple, repetitive actions resulting in a complex, value-added final product. The individual workers have room for initiative—repeat each phrase as many times as you want!—but, at the end of the day, that sense of freedom is subsumed into the larger task: the ultimate trajectory of the music remains unchanged, the template producing another In C performance to add to the stockpile. Frederick Winslow Taylor would have loved this piece.

41.

No, of course that’s not what Terry Riley intended. Nor is that how most performers and listeners have chosen to hear In C. But that’s at least partially because, for so many years, In C was perceived as a countercultural artifact. Before the canon caught up with it, it was easy to hear In C as a celebration of community and cooperation, of easygoing anarchy producing a temporarily harmonious society. But, going forward, there may never be another generation that hears In C from outside the canon. Why wouldn’t In C, now situated inside the establishment, start to sound different? What’s to stop those future listeners from hearing the piece as an ingenious aesthetic rationalization of one of the most common human conditions of late-capitalist life—the sense that one is only a cog in the machine?

42.

The In C iPad app can even be interpreted as underlining the factory-like aspects of the piece. The performers, the cogs—the workers, just like so many others—have been replaced by technology: cheaper, more efficient, more pliable.

43.

The chorus of “Shake It Off” sounds a bit like a litany of insistence that the line between one’s identity and one’s work has been thoroughly erased. Players gonna play. Haters gonna hate. Heartbreakers gonna break. Fakers gonna fake. You only [have] one job.

44.

If it wasn’t already obvious, I am incurably intrigued by the way a piece of music, an ephemeral object, can go on to have a rich and unexpected existence beyond the control of its composer or performers. All bits of culture have this potential, but it seems especially strange and potent with music, just because music is so slippery to begin with: temporal, insubstantial, with a built-in disconnect between page and performance. That such fragments, really, can be invested, over and over again, with entire encyclopedias of meaning is incredibly weird. Canonization both acknowledges and fixes that process, in a way that both invites and demands further investment.

45.

The neatest, most appropriate peroration for this essay would be an expression of hope, a wish for In C to shake off the weight of history, to preserve its lightness, its informality, its identity as “the scruffy longhair shuffling its feet at the doors of the exclusive club,” as Robert Carl describes it. But that is incredibly difficult. Remember, canonization pushes interpretations to extremes—performers try to recapture a work’s original power by amplifying it, try to recapture its original novelty by distorting it to the point that it can again seem disorienting. This is already happening with In C. (The 45th anniversary Carnegie Hall performance went big: 70-some performers, 90-some minutes.) And as the interpretations become more extreme, the implications become more extreme. We’re more likely to hear the edges in music that’s been sharpened to a point.

46.

That might be why, on the occasion of its 50th anniversary, I have been more inclined to hear In C through a skeptical lens. That doesn’t lessen the piece (a great work can encompass many vectors) or my affection for it. But, in the atmosphere of late-2014 America—an atmosphere of division and disillusionment—In C’s idealism feels awfully stark. From a political, civil-society standpoint, In C is a fairytale, in a way that can feel especially fictional in our particular here and now: the system is simple, the system is transparent, the system is democratic, and the system works.

47.

In a 2007 essay called “For the Birds/Against the Birds” (reprinted in her collection Elective Affinities), Lydia Goehr collided the idealism of In C with Theodor Adorno’s famous/infamous postulation of art after Auschwitz:

The composer Terry Riley once wrote that “music has to be the expression of spiritual categories like philosophy, knowledge, and truth, the highest human qualities. To realize this, my music necessarily radiates balance and rest.” Why “balance and rest,” Adorno asks again of his contemporaries, in a world that no longer admits of either in truthful form? If art is to mirror life, why think of the life or nature that appears under the spell of society’s untruth? Why not focus on the concealed or lost life, on the life brought historically to death, on the life that no longer appears to the eye?

48.

The framework of Goehr’s essay is a contrasting analysis of Theodor Adorno, the musical conscience of the Frankfurt School of philosophy, and Arthur Danto, the philosophical conscience of the New York School of music and visual art. Between them, Goehr locates an essential, unresolvable tension in modernist and post-modernist aesthetics. “[T]he dialectic that starts out between art and nature becomes over time one between art and the commonplace, where the commonplace increasingly becomes a concept demonstrating the loss of what the concept of nature once implied: namely, beauty and freedom,” Goehr writes. “If this is right, then it also plausibly follows that the artists of the 1960s (and after) who sought a meaning for art in the commonplace were too content to accept the loss of a certain sort of meaning in art. Or, those who still saw beauty in nature or art were too content not to find beauty in the commonplace.”

49.

Canonization, a historical holdover that persists in the contemporary world, runs smack into this conflict. By both confirming a work’s artistic standing and claiming it as a universal commonplace, it situates the work at the exact point of tension Goehr identifies. Once in the canon, a work is no longer just meaningful or beautiful; it is saddled with the expectation of being meaningful and beautiful, in ways that contradict each other, and even the work’s original intention. The danger of the canon is that it becomes a prison of frustration.

50.

Is that too much? In C is a great piece and deserves to have its greatness acknowledged. Its persistence is a sign of optimism; its variability is a virtue; its celebration is surely an instance of society, for once, recognizing something good in its midst. But the mechanism of canonization ought to be checked, calibrated, investigated. Because pieces of music are hardly the only things that get canonized. Society and culture—which are simply terms that deflect our own complicity—are organized around what is in and out of various canons. We don’t always see the mechanism—another screen. We are the mechanism. We are the ones doing the canonizing, enshrining not just music, not just art, but ideas, morals, standards, truths and truisms, be they common sense, religious, political. A lot of things get canonized. A lot.

51.

Heinrich Heine, from Die romantische Schule (1835): “In the world’s history every event is not the direct consequence of another, but all events mutually act and react on one another.”

52.

Elvis Costello, from “The Beat” (1978): “There’s only one thing wrong with you befriending me. Take it easy. I think you’re bending me.”

53.

“…when each performer arrives at figure #53, he or she stays on it until the entire ensemble has arrived there. The group then makes a large crescendo and diminuendo a few times and each player drops out as he or she wishes.”