Author: Marc Weidenbaum

What Is Music’s Comic Book Superpower?

This is the third in a short series of essays about how sound is inscribed.

My eye keeps moving back to the top of the page, try as I may to make headway into the story. The narrative is going in one direction, but my eye keeps looping back. This occurrence is natural, even intended. What I’m reading is a graphic novel, and the reader’s eye is expected to move around. You read some dialog, and then look at a face. Or you look at a face, and then read some dialog. You linger on the backgrounds, sometimes washes of color and texture, at other times highly detailed renderings. You see something of interest, and your eye wanders back to see if there had been a hint of it you’d missed earlier on—sometimes on the page, sometimes further back. The word balloons are just part of the overall story. Sometimes the dialog has nothing to do with what’s on the page, instead setting the stage for what will be underway when you turn to the next page.

And dialog is but one of many potential sonic elements on a given page in a comic book or graphic novel. There may be little lines that suggest a tiny burst of noise, and there are emblazoned effects—words writ large in blocky colorful type—that announce the intrusion of sonic events. The spoken words’ tonality is suggested in various ways: lower case at times, versus the more standard all-caps; occasionally bolded terms, sometimes for emphasis, often for cadence; a bit of color here and there, perhaps to distinguish who is speaking; even a wavy effect as someone is winding down into sleep.

And then there is the presence of music. When I come across actual musical notes in a comic book, they sometimes break the narrative spell for me. I pause and wonder if I should enter them into a simple music app to hear whether they “mean” anything, whether they are actual music or just a graphic signifier of music. How music is displayed in comics varies by author and by artist (those not always being the same individual), by story and by genre, by publisher and by region. Sometimes music appears as notes, and sometimes it’s more abstract.

In the comics of Megan Kelso, for example, sound often takes on a physical manifestation. Now, sound is inherently physical—at a loud concert you can feel the beats in your chest and, even at its quietest, sound is a force being registered by the infinitesimal hairs inside your ear’s cochlea. In Kelso’s comics, the physicality of the sound is often emblematic of an emotional heft; it fills space as a physical object or a heavy mood might. A young woman, seen here in a short story from Kelso’s Queen of the Black Black collection, plays piano. The sheet music is in front of her, tellingly absent any visible notes. We’re not, as readers, to be distracted by literal melodies. Instead what we witness is unfurling ribbons of sound that emanate from the grand piano’s exposed cavity. Here they play a specific role, trying (and failing) to reach the young man seated in the room’s far corner:

ribbons of sound

I spent many years editing comics about music. This was for Pulse!, the music magazine long ago published by Tower Records. I edited comics about music for the magazine for a decade, beginning with the contributors Adrian Tomine (when he was still attending high school) and Justin Green. Kelso was also among the contributors, along with many others, some early-ish on in their careers, like Jessica Abel, Brian Biggs, Ed Brubaker, Leela Corman, Tom Hart, and Frank Santoro, to name a few, and others further along, such as Peter Kuper, P. Craig Russell, and Carol Tyler. The Pulse! comics were about music, but they were so “about” music that how music was visually characterized wasn’t often the focus of a given piece. They were about the lives of musicians, and also about how audiences’ lives were affected by the musicians’ work.

How music appears in comics that are not entirely about music, how music appears as a narrative element of comics in general, is a strong interest for me. I read comics voraciously, and nothing slows my reading like the appearance of a few notes (or their suggestion through some other visual means) on a page. The presence of music in comics is far less codified than how action is portrayed, or facial expressions, or time passing, or distance. In this sequence from The Lagoon, a graphic novel by Lilli Carré, conflicting sounds appear, the snoring of one individual contrasting with another melodic thread. The transition in the woman’s face, as the sounds go from annoying to entertaining, caught my eye in particular. The two strains slowly merge, or at least make peace with each other, as the panels proceed:

the music of snoring

Music plays a unique visual role in comics. Much sound in comics is of an instant. Someone says something, and then it’s over. Someone makes a loud noise, and then it’s over. Many of the non-musical sounds in comics aren’t that different from the sound effects I marveled at in my youth, like this panel collected in Origins of Marvel Comics, a paperback compendium of classic comics. Here the Hulk does battle with Prince Namor, the SKRRAKK!, with its redundant yet effective exclamation point, serving as a massive onomatopoeia.


Comics take place, like music, over time, but comics are read as a sequence of still images, whereas music by its very nature tends to be visualized in a way that suggests continuity and flow. One of the things sound can do in comics, thanks to its necessary visual continuity—its tendency to be represented in streams rather than isolated instances—is to fill spaces and connect moments. This sequence in the first volume of Michel Fiffe’s Copra series is a fine example. Note how the source of the sound remains evident even as the sound moves from one panel to another and proceeds to inflict pain:

sound as pain

The presence of sound in a comic is not unlike its presence in a film. Sometimes its presence is documentary-like, a part of a scene, an act of naturalism or of set decoration, but often it reflects some individual point of view. In this scene from Charles Burns’s The Hive, a band is playing but the reader “hears” nothing, because no notes are visible. The protagonist’s mind is elsewhere, even as the quartet rages:

band in a comic

Music can serve a comic’s narrative, certainly, but it isn’t just there as part of the story. At its best, music is there as part of how the story is told. It isn’t there just to tell us something culturally about the characters or emotionally about the moment. In its best utilization, music appears in a comic as part of the illustrator’s graphic-language toolkit. This sequence from early in Get Over It, a breakup memoir by Corinne Mucha, does much in a tiny space—just two panels. We don’t come across the couple’s boombox until the second of the pair of panels, but then again the paneling is doing more than telling two moments in a sequence: it’s showing two individuals, destined for mutual heartbreak, across a divided table from each other:

boombox in a comic

The music in Mucha’s scene emanates, as a traditional series of notes, from the pair of speakers, but the sequence makes no visual sense in purely literal terms: notes don’t enter a speaker on the left and then proceed to the right. What the music is doing is showing two people in close proximity who are already drifting apart. The elegant divide between the panels is in fact a fissure, the bit of musical confusion a symbol of unacknowledged dissonance. The divide between those two panels foretells where the story is headed. The way the divide asks the eye to briefly look back is instructing the reader to pay attention. The music isn’t just the soundtrack to the characters’ lives; its visual depiction is an intrinsic tool in the storytelling of those lives. Music doesn’t occur like the other sounds in a comic. It challenges the artist to reconcile its presence within the narrative. Its strength is how it resists submission to the micro-episodic nature of panel-by-panel storytelling. And when the creators rise to the occasion, it sings.

Is the Printed Circuit Board a Form of Musical Notation?

This is the second in a short series of essays about how sound is inscribed.

A small box has arrived with the day’s mail. It’s cardboard, generic packaging, the address label clearly affixed atop a previous label. Inside the box is a small synthesizer module that I’ve purchased secondhand.

I open the box, unpeel the bubble wrap, and look at the tiny device, which is designed to connect with an extensive array of like-formatted modules. It’s about an inch wide on the front, with holes where patch cables will go in and knobs to control various aspects of its intended functionality. Directly behind that faceplate is a stack of electronic internal parts, all PCBs (printed circuit boards) and wires. In my office this little item will connect with about two dozen of its cousins, but before I plug in its power cable I look closely at the circuits, straining my eyes in the process.

I’m looking for, of all things, a bit of humor.

When I first started fiddling (for the first time since college) with synthesizer modules, back in mid-2014, I had a fully exposed starter rig into which I’d plug them. (I’ve since gotten a briefcase-like enclosure.) Even after they were hooked up—an oscillator here, a filter there, the naked power supply warm to the touch—I could freely view the interior workings. These consisted largely of PCBs, usually green or gray, their surfaces home to a mix of diodes, solder, and dozens of things I couldn’t begin to identify. Some looked like they came off an antiseptic assembly line, others like they’d been designed and manufactured in someone’s parents’ basement.

For a long time I couldn’t see the synthesizer forest for the module trees. It was all a blur of patch cables and buttons, switches and LEDs. I was also overly cautious, handling the items like one would an adorable newborn kitten with a congenital heart defect. As time passed, though, I gained some sense of fluency with the material in front of me: how to use a sample and hold module to get a random pattern out of a noise source, how to use an offset to set a waveform’s bottom level to something higher than zero volume, how to hook my synthesizer up to my laptop so I didn’t have to spend hundreds of dollars every time I wanted to try out a new synthesizer function.

In time I wasn’t plugging in my newly arrived modules so quickly. I was spending more time looking at them, admiring their structures, noting aspects unique to various individual companies. Some modules have lovely design flourishes, bits of fantastic line art right there on the circuit board, so enticing it threatens to give “cyberpunk” a good name all over again. Others have funny little phrases, puns on functionality, like where the power supply goes, or little axioms that both gently mock and encourage the beholder—Barbara Kruger by way of circuitry. This is what I now first look for when I unpack a new module.

Still, my fluency could only go so far without a tutor, so I reached out to a friend, Matt Holland. I met Matt years back just as he was starting studies at the University of California, Berkeley. He was getting a degree in electrical engineering and computer science—that’s one degree, he’s careful to point out. Matt graduated from Berkeley and went on to work at various companies; his most recent gig has been with Teenage Engineering, the boutique Stockholm-based company best known for its OP-1, a hand-held synthesizer whose exceptional design earned it a place on the shelves at the Museum of Modern Art. Matt, who recently moved back to California from Sweden, invited me to stop by a small studio space he shares south of Market Street in San Francisco, and schooled me in PCB literacy.

Matt understood what I was after. I was already learning how to use synthesizer modules. What I wanted to get better at now was to be able to “read” them. I explained that I was working on a series of investigations into how sound is inscribed—in writing, in code, and so forth—and thus I was especially keen for insights into how synthesizer modules’ sonic intents can be interpreted visually. In other words: Is the printed circuit board a form of musical notation? And even if it isn’t, what can one glean from all those diodes, the cryptic copper lines, the tiny landscapes of circuitry?

Matt’s studio is a small museum of drum machines. He has the full run of the “X0X” Roland items, like the 808 Rhythm Composer and the 303 bass synthesizer. On one desk is the ARP 2600, which dates from the early 1970s. He bought it at a pawnshop in Los Angeles. If you’re into synthesizers, having a degree in electrical engineering can come in handy—you can get the items in disrepair, and then fix them. And if you have a degree in electrical engineering, being into synthesizers can come in handy—Matt has a side gig doing commissioned work to bring archaic equipment back to life. Right now this includes a Roland CR-68, the little brother to the CompuRhythm CR-78, whose innards are splayed on a table when I visit. It’s a bit like the scene at the eyeball workshop in Blade Runner, just less chilly. Matt points out how there are cables in the device’s enclosure that simply shouldn’t be there. Someone did aftermarket work on it once upon a time, and now he’s carefully investigating those efforts. Matt has been asked at times to return items to their original state (the synthesizer equivalent of the Early Music movement), but in this case he’s trying to repair the aftermarket changes in addition to the box’s foundational electronics.

Matt has never been a purist. He got into synthesizers and electronics through circuit bending, which means taking existing circuits in existing devices—a Speak & Spell, an old Casio keyboard—and seeing what happens when you re-wire them. In his office now is a kids’ toy featuring all the U.S. presidents up to George W. Bush, and if you touch their faces they say things. Matt has tweaked the innards and added switches. He made it in 2006 during his first electrical-engineering internship, at the company OQO, an early portable-computing startup. Matt calls this device the Gang of 43.

Matt explains that the circuits of synthesizer modules as viewed on their PCBs can’t necessarily be read for their sonic content, though with experience and effort you can make educated guesses, in particular based on components’ relative proximity. In many ways, music-gadget PCBs are not distinguishable from those inside a cellphone or a microwave oven. There are key things to look for, though, such as what chips are present and what the inputs and outputs are, because audio-related jacks (MIDI, XLR, 1/4″) are rarely used in other fields.

That said, the PCB is not only readable; it is intended to be read.

Looking at the traces (the metal lines that connect aspects of the board) you can roughly gauge the generation of a board’s construction because new boards evidence the taut geometry of CAD design, whereas old boards were literally drawn by hand.

Most boards have writing on them. This is information to help the board assemblers. The code “C6,” for example, can be referenced in a schematic or BOM (bill of materials). There are standards to these reference designators, so even without a schematic you know that a component labeled with a “C” is a capacitor, and that something is an integrated circuit if its label starts with a “U.”

There are other signs of age. Modern modules will tend to have smaller versions of components that have seen iterative improvements over the years, engineering always tending toward miniaturization. Likewise, newer boards usually stack multiple PCBs more tightly, rather than running them perpendicular to the faceplate. (If you’ve watched the TV series Halt and Catch Fire, you’ll recall a moment of PCB-subdivision ingenuity—and of rousing tech feminism—from the first season.)

Design techniques never fully disappear, even if they’re outdated. Some companies, notably Munich-based Doepfer, which introduced a format standard called Eurorack in the mid-1990s, still use perpendicular boards, as well as internal cabling. To read circuit boards is to read the history of technology, music and otherwise. The operational amplifier, or op amp, employs negative feedback, a technique developed by Harold Stephen Black back in the late 1920s and ubiquitous to this day. Chips like the op amp are plainly labeled, so you can look them up online to discern their purpose. They’ll likely be adorned with company information, a part number, the date of production, and a mark indicating pin 1, which helps the assembler with proper positioning.

Matt jumps to the present day and powers up PCB design software on his laptop. The laptop is an Apple, but the software needs to be run within Windows, so he first brings up a Windows emulator. The screen loads slowly, and then it is suddenly flush with a gorgeous psychedelic vision. The myriad colored squiggles are all visualizations of eventual copper connections. These are the internals of the OP-Z synthesizer, a new device that Teenage Engineering has been teasing musician-consumers with for over a year. One ingenious aspect of the OP-Z is it will have no screen—you’ll just plug it into an iPad.

Matt’s show-and-tell helps illustrate just how complex boards have gotten, even if they remain rooted in history. Each act of miniaturization is a hyper-optimized codification of an earlier technological achievement. The compact nature of the OP-Z and other devices is owed to multilayer boards. The OP-Z’s will be eight layers thick, though to the untrained eye it will look essentially like one board. Even to the naked eye there will be distinguishable characteristics. Most of the items on it will be surface-mounted, for example, meaning they won’t be punched through. The boards are covered with tiny circles. These are “vias,” for “vertical interconnect access,” which allow the tightly packed layers to move data, sound, and power back and forth.

And even if a PCB can’t be read in full, it can be signed. If you look very closely at the screen, in the upper corner of the virtual board’s current design, there’s a little “MH 2017” that registers Matt Holland’s contribution to Teenage Engineering’s group effort. This will likely disappear from the OP-Z PCB before it is put into production. Matt says that during his first job out of college, for an ASIC (or application-specific integrated circuit) company, he attempted to hide his initials in some metal wiring. A few weeks later he received an email from his boss: “Nice try.”

Audio or It Didn’t Happen

This is the first in a short series of essays about how sound is inscribed.

It’s a Friday afternoon and the nearby school kids are playing with such ferocity that it seems possible an enhanced interrogation center has sprung up just two floors down from my office window.

Then a loud bell rings. Its waveforms are so thick it’s as if you can see them floating in the misty San Francisco air. The bell is all stately gravitas. Even if you didn’t know that the nearby school is parochial, you’d sense the bell’s churchly vibe.

And then, suddenly, there is silence: Recess has concluded. The bell has marked a juncture. There was a before and now I’m in the after. It’s not utter silence, not the absence of sound, what I’m experiencing now. It’s the sort of silence that traffic and hallway chatter and a neighbor’s music can still, somehow, collectively suggest as silence when the ear is no longer burdened by hundreds of energized children screaming at one another.

I’m struck by the moment. So, I do something that comes naturally to me when something of sonic interest arises. I tweet about it. I tweet about the transition from chaos to peacefulness, summarizing the tonal shift in just under 140 characters.

Shortly thereafter someone replies to my tweet.

Before paraphrasing the reply, it might be easier to first characterize the incident by employing a comparison. Perhaps you’re familiar with the phrase “pics or it didn’t happen.” It’s a not uncommon response when someone posts—text-only—something of interest that seems to beg visual documentation, like if you run into a celebrity (“Jonny Greenwood was at the music equipment shop!”) or buy something snazzy (“I finally sprang for a sequencer module for my synthesizer”) or do something tawdry (“Judging by the mess, last night’s jam session involved too much bourbon, too little music”).

The “pics or it didn’t happen” response isn’t necessarily a literal encouragement to follow up with a document of evidence. It’s a gentle teasing. It’s a friendly social-network admonition in the form of a rhetorical nudge—though, it’s worth noting, virtually no one who ever says “pics or it didn’t happen” would be unhappy if pics did follow.

As for the response I got to my afternoon tweet—the response I have not infrequently received to such a tweet—it was the audio equivalent of “pics or it didn’t happen.” Essentially I was asked: “But did you record it?”

That’s a complicated question to answer, despite the inquiry’s brevity. The simplest answer is: no. No, I didn’t record it, not with an audio recorder. But I did record it, in the sense that my tidy, brief, Twitter-circumscribed description was consciously intended to encapsulate it.

The matter comes down in part to what “it” is. Is “it” the bells, or is “it” the sound of the bells through the window, or the sound of the kids, or the way the kids and the bells worked in congress with each other, or how the bells masked the more detailed nature of the kids walking back into school from the playground, or how when the bells ended the remaining low-key urban cacophony was still, in its own way, peculiarly placid? That’s a lot of “it.” There was a lot packed into the period of time that passed.

And then again, how much time had passed, and how long of a hypothetical audio recording are we talking about? Sure, the transition could be mapped from a few seconds of screaming kids to a few seconds of street noise with the bell sequence in between—maybe a minute total? However, the impact of those kids screaming had built up over the prior hours I’d been seated at my desk working—in fact, as an experience, the impact had built up over the years I’ve sat at that desk in that room listening and not listening.

An audio recording might have given some glimpse into what I’d experienced, but no more and arguably less than what I’d been able to summarize in my description.

bird on a bell

Photo: Luke Barnard

And that’s how and why I record sound: by writing it, inscribing it—not so much notating it as noting it, unpacking it, coming to understand how it works by investigating how it works. I do it a lot, sometimes on Twitter or Facebook, occasionally on Instagram (to caption a photo), sometimes in longer essays or reflections—once as an extended series of mini-essays in which I treated everyday sound as something that one might review, much as one reviews a record or concert. Titled “Listening to Yesterday,” the series ranged from birdsong in a toxic harbor to restaurant kitchen noise to the user interfaces of conference call systems.

As I hoped to express in the slightly longer form of those “Listening to Yesterday” entries, the descriptions of sound in my tweets and my longer-form writing aren’t about description, any more than a record review is about description—well, a good record review. My hope in describing sound is to unpack it. It’s about getting inside the sound, sonically and contextually. The best advice I ever got about reviewing music was to give the reader a suggestion about how to listen to the piece in question, what one might listen for. The same can be said—should be said—about writing about everyday sound. (As I write this I’ve become aware of the fact that I’m writing about the process of describing sound on a social media platform whose name is itself a common instance of onomatopoeia.) Writing differs from sound in key ways. Sound is temporal. Writing, especially pithy writing, can happen in an instant, and yet suggest something of any given length. A tweet of a moment’s sound is both briefer than the occurrence and yet capable of suggesting something altogether longer, even timeless, which is to say something not just about that moment but about similar moments—not just that school day, but many school days at many schools, in and out of the mist, with and without bells, parochial and secular, yesterday, today, and tomorrow.

If anything, to reproduce the moment with a recording rather than a description would fall short of my own goals in several ways.

For one, the recording (I have made them) never sounds like what I heard. The ear doesn’t work that way. The ear hears through things, focuses on things, filters out things. That happens in the world as a mix of brain wiring and personal inclinations. Once reproduced as a recording, those varying degrees of attentiveness are flattened: everything becomes evident relative to its respective volume.

For another, if you listened to the recording, you’d likely not hear what I heard—I heard a pulse, but you hear a beat; I heard something soothing, but you hear something antagonizing; I heard children, but you hear traffic. To share the moment in sonic form is to instantiate a Rashomon, to introduce the opportunity for multiple interpretations. My goal in these moments isn’t, generally speaking, to ask what something is; it’s to say what I think it is—and to ask that you say what you think some other moment of sound is.

Finally, my attentive listening in the day-to-day world is almost always a matter of retrospectively acknowledged reflection. That is to say, it’s a matter of “This is something I just heard, and now I will note it down as a means to capture my sense of it.” I couldn’t have recorded it because I hadn’t become conscious of it until it was happening, perhaps even until after it had happened. The only way I could really accomplish what’s asked when someone says, “Did you record it?” is if I had running the sort of technology that Nagra helped innovate, in which I was always recording everything around me, and then at the end of the day could go back to the tape and locate it.

I don’t do that, because recording everything—putting aside matters of surveillance and ethics—is exactly contrary to the reason I summed up the moment in the first place. I’m not looking to reproduce a generality, to document a public moment; I’m looking to hone a specific experience, a private moment.

The Procedural Hows and Theoretical Whys of SoundCloud

Part A: In Good Company

The website in question is where you, right now, can go to listen to live recordings of New York’s Alarm Will Sound ensemble performing original commissions at the Mizzou New Music Summer Festival, including work by composers Clint Needham and Liza White.

It is where CHROMA, the London-based chamber-music group, posted the world premiere of Rolf Hind’s piece for featured clarinet soloist, “Sit Stand Walk.”

It is where the Brooklyn-based Sō Percussion made available excerpts from its Creation series of collaborations, including pieces by Tristan Perich and Daniel Wohl.

And it’s where London’s Barbican Centre has uploaded numerous Beethoven recordings by the Leipzig Gewandhaus Orchestra, under the baton of Kurt Masur.

It is not the website of a leading classical publication. It is not the online culture section of The New York Times or the London Guardian. It is not a digital offshoot of WQXR or NPR, or of big-eared KCRW for that matter.

No, it is, and it has quickly become, with good reason, the default go-to site for music hosting by all manner of musicians, not just aspiring pop stars and bedroom beatmakers, but also those involved in new and experimental composition and performance. The following overview is intended to provide an introduction to making use of SoundCloud, including some tips for maximizing one’s efforts, as well as some passing contextual and tactical thoughts on why SoundCloud has proved as popular and functional as it has.

Part B: Crash Once, Twice Shy

New music makers have numerous reasons for wariness before taking the time generally necessary to master yet another online music-hosting platform.

Why even try, when so many services have let you down before? All that fine-tuning of a personalized MySpace page, only for the user base to up and leave it like a ghost town? All that effort in uploading a project to, only to discover that the tag processing is unwieldy? All that work getting music into iTunes, only to have track previews limited to 30 measly seconds, and to be left wondering how, other than linking, you might actually promote your music?

The issues with music-hosting platforms are cultural as much as they are technological. Viewed as a whole, the variety of barriers to having a proper online representation suggests something akin to a digital-era conspiracy to keep complex music off the Internet.

Here are some of the hassles:

There is sound quality. At least since iTunes debuted and introduced a particularly low-grade bit rate (128kbps) as an audio standard, the compression of digital music has not suited the dynamic range of most music that doesn’t fall within the broadly defined realm of “pop.” Over time, standard MP3 bit rates have, thankfully, increased (320kbps tends to be the norm), but online streaming is currently supplanting MP3 files, and frequently that means, indeed, a low-fidelity presentation of recorded sound.
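
For a sense of scale, a quick back-of-the-envelope calculation: uncompressed CD-quality audio (a 44.1kHz sample rate, 16 bits per sample, two channels) carries roughly eleven times the data of a 128kbps MP3 stream.

```latex
44{,}100 \,\tfrac{\text{samples}}{\text{s}} \times 16 \,\text{bits} \times 2 \,\text{channels}
  = 1{,}411{,}200 \,\tfrac{\text{bits}}{\text{s}} \approx 1{,}411 \,\text{kbps},
\qquad \frac{1{,}411}{128} \approx 11
```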

There is categorization. Few if any online services handle the taxonomy and typology of adventurous music well. Most music websites have a field for the artist and a field for a song, and little to address the informational void. The sites are already living in a post-album world, and they do little to make nice with recordings in which things like composer and conductor and performers and soloist are important. On a particularly bright and cheerful day, one might consider the Internet a fascinating and massive experiment in New Criticism, every piece of music floating out there virtually free of context.

And there is the basic typographical matter. It’s something that so-called desktop publishing presaged, a situation in which corners are routinely cut in favor of oversimplified—and thus meaning- and pronunciation-altering—decisions regarding háčeks and accent marks and umlauts. SoundCloud doesn’t solve all these problems, but it does offer a solid and adaptable foundation for musicians to use to share their music.

Part C: Setting Up the Account Is Just the First Step

Here are some simple instructions on setting up and making use of a SoundCloud account.

Step 1: Sign up. You can do this by associating your new account with your Facebook account, or you can create an account directly on SoundCloud. The latter is recommended because there’s no significant benefit to the former. You can always associate the accounts later.

Step 1

click image to enlarge

Step 2: Fill out your basic profile. The fields (City, Occupation, etc.) are straightforward. Here is one suggestion, though: strongly consider using a single word, or a phrase with no spaces, as your “profile” name. The profile name serves various purposes, including being your SoundCloud account URL. To join SoundCloud isn’t simply to access virtual real estate. It’s to participate (more on which in Step 5), and having a single memorable identity is key to making your presence on SoundCloud effective.

Step 3: Fill out your “Advanced Profile.” Keep your Description brief, maybe 200 words, tops—and consider using rudimentary HTML tags, such as <b></b> to bold key words and to structure the text. Enter URLs for essential web locations, and don’t overdo it. Your website, Twitter, and Facebook are likely sufficient. Enter too many, and your listeners won’t know where to click. You can use the <a href="URL"></a> tag in your Description if you want to, for example, link to a record review or interview that appears on another website.
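The kind of minimal markup a Description can carry might look like the following sketch. The text and the review URL are placeholders, not real links; the snippet just assembles the string so the tags are easy to see:

```python
# Placeholder copy and URL, for illustration only.
description = (
    "<b>New album out now.</b> "
    "Recorded live in a single evening. Read a review at "
    '<a href="">Example Review</a>.'
)

word_count = len(description.split())  # keep it well under the ~200-word ceiling
```

Two tags, one link: that is usually all the structure a Description needs.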

Step 4: Upload tracks. As with the creation of your account, the uploading of a track will require you to fill in various fields. They’re pretty straightforward (Title, Image, Type). There aren’t multiple fields for participants. However, the Description field allows for simple HTML, so you can use that space not only to list participants (performer, composer, etc.) but to link to their SoundCloud accounts or websites or both. Make note of that “Show more options” button: it pulls up a whole bunch of additional useful fields, including simple ways to add commerce links so you can sell the track or related material. The easiest way to go about all this is to set the track as Private until you’re happy with all the text and other details, and only then make it public.

Step 5: Participate. That bears repeating: Participate. Even if you’re employing SoundCloud primarily as a promotional tool, think of it as a party—you’re not going to meet anyone if you just stand there (unless you’re wearing a funny T-shirt, or blessed with remarkable cheekbones). These things take effort.

To begin with, “follow” people—follow musicians you work with; follow musicians you admire. Some will follow you back, and that will be the start of actually communicating on and through SoundCloud. You’ll find, in time, that you look at the Following/Followed lists of people you like and take a cue as to whose music to check out. Furthermore, the people whose music you follow show up on your SoundCloud home page, so you will be kept abreast of their activity—not just what they post, but what they have commented on.

Also, embed your tracks elsewhere (on your blog, for example), and encourage others to do so. One of the beautiful things about SoundCloud is that it has elegant “players” that you can use to embed a track or a set of tracks into a post on another website. For example, the track below is from a project I recently completed, in which I got over 60 musicians to remix the first movement of the Chamber Symphony, Op. 110a, by Dmitri Shostakovich. The original recording was by the fine Los Angeles ensemble wild Up, who graciously provided the source audio for the project.

And, finally, be sure to comment on other musicians’ uploaded tracks—that is, see SoundCloud for what it is, not just a music-hosting platform, but a platform for communication and collaboration. Comments come in two forms: standard and “timed,” the latter of which appear at a distinct point along the chronology of the track. You’ll see the “timed” comments along the track just above.

Step 6: Dig in. There is far more you can do on SoundCloud. The coverage above is intended simply as an introduction. For example, you can create Sets of tracks that provide additional context. You can join Groups, which in addition to collating tracks by some semblance of shared cultural activity (field recordings, serialism, toy piano) provide for discussion beyond the confines of a single recording. There are SoundCloud apps that allow you to do additional things with and to your tracks. Everything described above is free, albeit with a limit on data storage, but you can elect to pay for a premium account and access additional resources. (The limits to SoundCloud are worth noting. For one thing, this is all “fixed recordings.” If you specialize in algorithmic music, you’ll be posting finished recordings, not live generative sound. Also, SoundCloud is a business, and as such monitors what is posted; it is especially attentive to copyright violation, so if you tend toward the aggressively plunderphonic, be prepared to have your track removed—or your entire account, for that matter.)

Step 7: Make it new. The structure of SoundCloud suggests itself as a neutral space. In many ways, it has defined itself as the anti-MySpace. Where MySpace became overloaded with design elements, SoundCloud keeps it simple. This simplicity suggests SoundCloud less as a place and more as a form of infrastructure—if MySpace was a city that never slept, SoundCloud is the Department of Public Works. Its elegant tool sets provide structure but don’t define or fully constrain activity. For the more adventurous participants, SoundCloud is itself a form to be played with. Some musicians have used the “timed comments,” for example, to annotate their work as it proceeds. Others have fun with the images associated with their tracks, posting sheet music or workspace images. Some create multiple accounts for different personas or projects. Others have used the limited personalization options to colorize the embeddable player and make it look seamless within their own websites and blogs.

It’s arguable that the most productive users of SoundCloud recognize the fluid nature of the service and post not only completed works, but works in progress. They upload sketches and rough drafts and rehearsals: this keeps their timeline freshly updated, helps excuse the relatively low fidelity of streaming sound, and further invites communication with listeners—many of whom are fellow musicians themselves.

Ready to make some noise?

You can use platforms like SoundCloud to participate in NewMusicBox’s “Sound Ideas” challenges and easily share the music you create. Craft responses to prompts from:

John Luther Adams
Ken Ueno
Sarah Kirkland Snider
Sxip Shirey

Selections from submitted tracks will be featured in an upcoming post.


Marc Weidenbaum founded the website in 1996. It focuses on the intersection of sound, art, and technology. He has written for Nature, the website of The Atlantic, Boing Boing, Down Beat, and numerous other publications. He has commissioned and curated sound/music projects that have featured original works by Kate Carr, Marcus Fischer, Marielle Jakobsons, John Kannenberg, Tom Moody, Steve Roden, Scanner, Roddy Shrock, Robert Thomas, Pedro Tudela, and Stephen Vitiello, among many others. He moderates the Disquiet Junto group, in which dozens of musicians respond to weekly Oulipo-style restrictive compositional projects. He’s a founding partner at i/olian, which develops software projects that explore opportunities to play with sound. He lives in San Francisco in a neighborhood whose soundmarks include Tuesday noon civic alarms as well as persistent seasonal fog horns from the nearby bay. He also resides at

Luciano Berio’s Sinfonia, Generational Perspectives, and the Fluid Nature of Copyright in a Classical Context

Luciano Berio in Castiglioncello, 1996.
Photo by Marina Berio

Twenty years ago this year, the late Sussex University professor David Osmond-Smith (1946-2007) published Berio [Oxford], a tidy overview of the work—and to the extent that they relate, the life—of Italian composer Luciano Berio (1925-2003), as part of the series Oxford Studies of Composers.

I read it for the first time recently, for background on Osmond-Smith’s perspective on Berio before a planned dive into Playing on Words, the book on Berio’s Sinfonia (1968-1969) that he had published a half-decade prior.

I’ve been focused on Sinfonia lately because it has come to triangulate two different personal interests that I’d previously thought of as running in parallel. The work is both a successful foray by Leonard Bernstein and the New York Philharmonic into experimental contemporary music during the 1960s, and a precursor to the sample-based music that is so commonplace in our current time.

The former concern is in contrast with Bernstein’s infamous 1964 fail involving John Cage’s Atlas Eclipticalis, for which the recently deceased computer-music trailblazer Max Matthews had developed a 50-channel mixer (a little factoid I learned while reviewing Begin Again: A Biography of John Cage [Knopf] by Kenneth Silverman for Nature earlier this year).

The latter concern, however, is my primary interest, because the ongoing discussion about copyright is front and center in musical consciousness today, from both a legal and compositional point of view.

Osmond-Smith’s Berio is relatively breezy, with occasional full-stop breaks for close musicological line readings. (I’d be lying by omission if I didn’t acknowledge that the note-parsing is largely over my head, but even then I still found some threads I could learn from. It didn’t hurt that in the copy of the book I borrowed from the library someone had penciled in numeric values for some of the note sequences. Now that’s what I call crowd-sourcing.) The volume is a brief survey of Berio’s work, from electronic-iconoclast innovation in Milan, to his extensive output for voice and (“conventional”) instrumentation, to his exploration in the realm of theater. It’s an excellent overview, with summaries of major works, and biographical details to explain the context in which the works were composed and initially performed. The purely biographical aspects can be deadpan to a fault—the second and third times Berio marries, there’s little to no mention of personal turmoil (well, when he takes his second wife, it’s made clear that his working relationship with his first, the vocalist Cathy Berberian, hasn’t suffered); it’s as if he’s simply traded in one car or house for another.

The outlines of a jet-setting composer’s life aside, the two main things I came away with from the book, secondary to simply knowing far more about Berio than I had when I started reading it, are as follows:

First, there isn’t much about “meaning” in the book. There is excellent detail of how the pieces work, about the mechanisms of Berio’s music, how it functions, how Berio accomplishes what he’s after technically. There is, however, little explanation, little interpretation, of what it is he’s after, what he’s expressing, both intentionally and as perceived by the book’s author. There is a brief moment when a psychological reading of a work is considered, and then dispensed with. And in regard to Sinfonia, a parenthetical allows that the way several pre-existing works were handled in tandem suggests that by doing so Berio was commenting on them as a whole—but that interpretive thread is left dangling, tantalizingly to me.

Osmond-Smith was by no means a conservative voice, either in scholarship (he studied in the early 1970s with Umberto Eco and Roland Barthes, both of whom had their impact on Berio’s work) or in life (during roughly the same period of time he was reportedly the founder of the Gay Society at Sussex). Which leaves me to wonder the extent to which this lack of exploration of meaning is generational—but that’s a path of consideration that will remain on hold for me until I’ve more thoroughly read up on the copyright matters.

Because the second thing I took away from the book was that it’s hard to imagine it being written this way today. By “this way” I mean that one of the fundamental aspects of Berio’s compositional approach is his use of pre-existing composition, from Mahler to folk song, to his experiments in tape collage and other forms of electronic manipulation—and the copyright aspect of that is never touched on in these pages. We know Berio collaborated with Eco and with Italo Calvino, among others, but the vast majority of his “collaborators” are unwitting ones, the composers (and authors, including Samuel Beckett and Claude Lévi-Strauss) whose work he interpolated into his own. We learn a lot, thanks to Osmond-Smith’s detailed knowledge of the works, about how these pieces of music and writing become Berio’s own, through force of his compositional ingenuity (Osmond-Smith’s explanation of Berio’s adaptation of James Joyce’s Ulysses is particularly rich, in that he locates parallels between the way the composer and author posited thematic material). What we don’t know is what permissions Berio sought out, or the extent to which doing so was even necessary during his period as a composer. We know from the book more about the development of a real-time sound processor by former CERN physicist Giuseppe Di Giugno (called Peppino here, evidence of Osmond-Smith’s familiarity with the people in Berio’s life) than we do about how Berio gained permission, if he even did, for the prior existing work that he took as his compositional source material. And Di Giugno’s gadget, however forward-thinking, was a much less central aspect of Berio’s career than was the device we call appropriation.

Even though Berio was published more than a decade after the birth of hip-hop, even passing nods to appropriation beyond classical music, let alone the word “sampling,” are absent here. There are numerous terms that serve as precursors to “sampling” (putting aside the pejoratives: pilfered, stolen, lifted, and so forth): to pay homage, invoke, quote, cite, reference, allude to. Few, if any, play a role in Osmond-Smith’s consideration.

And that’s not to fault the author. Quite the contrary, it’s just to view how the concerns of the period during which he wrote, during which Berio worked, are different from those that rule in conversation today. With orchestras in crisis, with the record industry struggling to develop a business model in the wake of the CD’s certain death, with composers setting up shop on Kickstarter to help fund commissions, discussions about classical music today, as with discussions about any type of music, inevitably come back to financial models.

The complex thing is that the financial models inevitably involve matters of proprietorship of works in a way that an imagination and a career such as Berio’s seem to flout. Importantly, Berio is no peculiar outlier in this regard, except to the extent that more than with many composers his appropriations had a conceptual element whose seams he generally desired to let show. Classical music has a longstanding history of composers drawing on compositional material from both within and without its literature—a history that is humorously at odds with the not incorrect perception of a sustained tension between much of classical music’s audience and sample-based music.

Perhaps all this will be cleared up for me when I next read Osmond-Smith’s quarter-century-old book that is wholly dedicated to Sinfonia, so I write this more than anything as a place-marker in my learning process.


Marc Weidenbaum

Marc Weidenbaum is an editor and writer based in San Francisco. He was an editor at Pulse! and a co-founding editor at Classical Pulse!, and he has consulted on website launches. Among the publications for which he has written are Nature, Boing Boing, e/i, Jazziz, Big, Make, and The Ukulele Occasional. Comics he edited have appeared in various books, including Justin Green’s Musical Legends (Last Gasp) and Adrian Tomine’s Scrapbook (Drawn & Quarterly). He has self-published, a website about ambient/electronic music, since 1996; it features interviews with, among others, Aphex Twin, Autechre, Gavin Bryars, Zbigniew Karkowski, Pauline Oliveros, Steve Reich, the creators of the Buddha Machine, netlabel proprietors, and sound-app developers.

Juiced In It: Bob Dylan and the Consequences of Electricity

Bob Dylan is by no means a key participant in the history of electronic music. But as rock figures go, none may be more closely affiliated with the consequences of electricity. His performance at the Newport Folk Festival in 1965, when he traded his acoustic guitar for an electric one, is widely recognized as a milestone in the evolution of rock music. Of course, to die-hard fans at the time, the act seemed traitorous, signifying an end to innocence that resonated with President John F. Kennedy’s assassination two years prior. At least rockists and folkies agreed on one thing: plugging in was transformative.

As for electronicists, the hindsight of 40-plus years provides context. We are talking about 1965: the year Robert Moog’s namesake invention became commercially available, the year Poème Électronique composer Edgard Varèse died at age 81, the year the newly formed Grateful Dead began experimenting during Acid Test shows, the year Steve Reich (graduate-school friend of the Dead’s Phil Lesh) composed his tape-loop landmark It’s Gonna Rain. Dylan’s decision was, no doubt, a watershed moment, but an anomaly of its times it was not. Electricity was in the air.

The Dylan-Newport hoopla, as with most theological debates, has long been lost on me. I have one good friend 20 years my senior, Robert Levine, a classical music critic and editor, who attended that fateful show in 1965. He was 20 years old at the time and watched everything from backstage. A Joan Baez fan, he’d also been at Newport the previous year. I once asked him about the audience’s reaction to Dylan in 1965, and he said it was “as if someone had been invited to a fancy Thanksgiving dinner and taken a dump on the table.”

Timelines of Interest

A few generations later, the goal posts have been moved. The major cultural divide is no longer between acoustic instruments and electric ones, but between conventionally recognized instruments and computerized ones. Critiques today of digitally processed music, from academic composition to film scores to electronica to hip-hop, can trace their language and their cultural assumptions—the suspicion of technology, the invocation of a halcyon era, the emphasis on authenticity—back to the hysteria that greeted Dylan’s decision to trade in folk music for rock’n’roll.

Last year in the pages of L.A. Weekly, the novelists Jonathan Lethem and Rick Moody, along with John Darnielle (the musician who records as the Mountain Goats), discussed the influence of rock on novels and vice versa, and the talk, moderated by Alec Hanley Bemis, hinged on a proposed duality: text versus texture, rock’s great wordsmith (Bob Dylan) versus its preeminent sonic inventor (Brian Eno), rock-as-literature versus music-as-atmosphere. Dylanist versus Enoid. Lethem didn’t subscribe fully to the dualism in the first place, and he distanced himself from it even further earlier this year. In an essay in the February 2007 issue of Harper’s, “The Ecstasy of Influence,” on the subject of copyright and creativity, he refers to Eno and Dylan in tandem, drawing parallels between their creative modes: “To live outside the law, you must be honest: perhaps it was this, in part, that spurred David Byrne and Brian Eno to recently launch a ‘remix’ website, where anyone can download easily disassembled versions of two songs from My Life in the Bush of Ghosts, an album reliant on vernacular speech sampled from a host of sources. Perhaps it also explains why Bob Dylan has never refused a request for a sample.” Lethem portrays both musicians, for all their differences, at ease with technological progress.

That initial proposed polarity between Dylan and Eno lingered with me in the year that followed, and the subsequent qualification by Lethem (in whose 2003 novel The Fortress of Solitude the two musicians serve as touchstones) kept me pondering it even longer than I might have. Certainly, Dylan has interacted with electronic music far less than have some of his peers. And unlike fellow ’60s-era folk-rock songsmiths Leonard Cohen and Paul Simon, he never allowed his sound to be radically rewired by a major minimalist (Philip Glass for Cohen) or an ambient maestro (Eno himself for Simon).

My fixation on the supposed Dylan/Eno distinction came down to two issues with which I couldn’t make peace: first, that Dylan, whom I consider inherently (heck, concertedly and willfully) ambiguous, can be used as a synonym for any artistic trait aside, perhaps, from a gravelly drawl; second, that Dylan is so distant from what has been depicted as “Enoid” that he has nothing to say about electronically mediated music. The two concerns were brought to a head as news accumulated, in recent months, about director Todd Haynes’s feature-length film, I’m Not There, in which seven different actors, including Cate Blanchett and Richard Gere, depict Bob Dylan over the course of his career. On the one hand, I was relieved that a film had been founded on a shared belief that Dylan is anything but singular (these are the “versions of his persona” as Lethem put it in his Dylan interview in Rolling Stone August 2006). On the other, I sensed that I had better get a real grip on “my” Dylan before being forced to reconcile seven additional ones.

I listened for this electronically mediated Dylan in his commercial recordings, tracked down various bootlegs, and parsed the loopy intros and outros that he provides to pop songs on the XM Satellite Radio show that he hosts. I slowly came to realize that the Dylan on whom I was fixated, “my” Dylan, wasn’t the Dylan I’d heard—it was the Dylan I’d read. I’d eagerly consumed Dylan’s 2004 autobiography, Chronicles: Volume One, when it was first released, and it had been this Dylan who was stuck in my imagination. I read Chronicles again, in search of “my” little Dylan. Now, if only due to his legendary act of having plugged in, Dylan’s autobiographical musings would be worth parsing for their electric content. And as it turns out, the exercise rewards Brian Eno completists, copyleft advocates, and, especially, students of the recording studio as musical instrument.

Early on in the book, Dylan likens himself to jazz trumpeter Miles Davis, another musician who traded one audience for a second when he went electric. That parallel is precisely Dylan’s point: “Miles Davis would be accused of something similar when he made the album Bitches Brew, a piece of music that didn’t follow the rules of modern jazz, which had been on the verge of breaking into the popular marketplace, until Miles’s record came along and killed its chances. Miles was put down by the jazz community. I couldn’t imagine Miles being too upset.” Dylan reiterates the personal affinity 100 or so pages later when describing a first meeting with a potential producer: “He told me that hit records don’t matter to him, ‘Miles Davis never made any.’ That was fine by me.”

The comparison doesn’t quite do Dylan justice, since in 1965 Davis’s quintet was still playing “Stella by Starlight” and “My Funny Valentine” at the Plugged Nickel in Chicago, while Dylan was infuriating purists with his newly electric guitar. Three more years would pass before Davis and producer Teo Macero got Herbie Hancock to try out an electric piano in a Manhattan recording studio, yielding the first album of Davis’s own so-called “electric period,” Filles de Kilimanjaro. (Both Dylan and Davis recorded for Columbia Records, a shared experience that, unfortunately, goes largely unexplored in the book.)

The producer who breaks the ice with Dylan in New Orleans by name-checking Miles Davis is Daniel Lanois. In Dylan’s telling, the two were introduced by U2 singer Bono. It is not a case of love at first sight. “He’s got ideas about overdubbing and tape manipulation theories that he’s developed with the English producer Brian Eno on how to make a record,” writes Dylan, “and he’s got strong convictions.” Bono made the recommendation having worked with Lanois and Eno on such albums as The Joshua Tree and The Unforgettable Fire, albums that helped drive U2’s pub-rock anthems into arenas.

The partnership begins ominously. When Dylan and Lanois first meet in the courtyard of a New Orleans hotel, Dylan’s longtime guitarist, G.E. Smith, leaves them alone, saying, “See ya in a moment.” He’s not mentioned again for the remainder of the book (though he and Dylan will work together occasionally in the future). Dylan and Lanois go on to record Oh Mercy, released in 1989, and that album’s production receives the most sustained narration of any single event recollected in this book—and in Chronicles, a tale told anything but straight (it jumps between time periods like a William Faulkner novel), that is indeed saying something. For a Lanois enthusiast, the chapter is a valuable window into the working habits of the man whose collaborations with Brian Eno include the ambient milestones On Land and Apollo. Dylan describes at length Lanois’s “makeshift” studios, which he sets up in rented homes; his use of layering to achieve moods; and, for all the bliss of the end product, his strong personality.

I went back to Oh Mercy repeatedly during this reading, and to my other favorite Dylan albums, the usual combination of his eponymous debut, John Wesley Harding, Blonde on Blonde, “Love and Theft”, and so on. Nothing in those albums spoke to me about their electronic mediation, about the facts of their recording processes, as did Dylan’s own writing—though I do deeply desire vocal-free dubs of Oh Mercy and “Love and Theft” for the opportunity to relish the atmospheric grooves perpetrated on those two albums. Sadly for me, they probably don’t exist.

Dylan was by no means unprepared to work with a studio maven like Lanois—nor is it a surprise that it’s a Lanois album, Oh Mercy, that Dylan’s own production work (under the pseudonym Jack Frost) on “Love and Theft” most closely resembles. (“I didn’t feel like I wanted to be overproduced any more,” he told Lethem for the article in Rolling Stone, where he talked quite a bit about the recording process and his decision to produce himself.) By the late 1980s, he’d already developed his own head full of ideas, through experience, about the role of recording in the production of pop music. In the years leading up to his collaboration with Lanois, he played a tour with the Grateful Dead, and the band prodded him to perform esoteric items from the back pages of his catalog. Dylan, who was less than enthusiastic about this C-list set list, recalls in his book, “A lot of them might only have been sung once anyway, the time that they’d been recorded.”

This notion of songwriting as a spontaneous experience mediated in the studio is echoed later still in Chronicles (or earlier, depending on whether you gauge time by page numbers or years), as Dylan thinks back to some of his first recordings, singing into the tape recorder of Lou Levy, his original publisher before moving to Columbia. Levy came from an earlier generation, and Dylan’s improvisatory style was new to him. “Once in a while [Levy] would stop the machine and have me start over on something,” recalls Dylan. “When that happened, I usually did something different because I hadn’t paid attention to whatever I had just sung, so I couldn’t repeat it like he just heard.” Levy, attuned to the craftsmanship of Brill Building songwriters, like Gerry Goffin and Carole King, doesn’t know what to make of Dylan’s fluidity with words and meaning. The difference in opinions between Dylan and Levy about the role of the tape recorder sets the stage for the generation-gap-inducing events of Newport.

Levy makes two appearances in Chronicles—one at the opening and one at the close, book-ending three decades of Dylan’s music-making. The click of his tape recorder is something like Proust’s madeleine to Dylan. It’s the very act of recording that triggers his book-length reminiscence. (If anything, it brings to mind another New Orleans concoction, the narrative structure of Anne Rice’s Interview with the Vampire.)

At the end of that extended section on the making of Oh Mercy, Dylan writes, “Danny and I would see each other again in ten years,” when they’d record Time Out of Mind, which would have made a good title for his book. When you turn the page to the next (and final) chapter, it isn’t the next day, let alone the next decade. You’re back in Levy’s old-school office at Leeds Music Publishing, essentially the moment when the book began, some 220 pages earlier—before Lanois, before the Dead, before Newport.

Though the closing chapter is rich with business details—in virtually the same breath, Dylan blindly signs a contract with Hammond while negotiating himself out of his standing agreement with Levy—the real focus is Dylan’s appreciation for how the production of folk music was unlike that of the Brill Building songwriters who preceded him: “I didn’t have many songs, but I was making up some compositions on the spot, rearranging verses to old blues ballads, adding an original line here or there, anything that came into my mind—slapping a title on it. I was doing my best, had to thoroughly feel like I was earning my fee. … Into Lou’s tape recorder I could make things up on the spot all based on folk music structure, and it came out natural. … I changed words around and added something of my own here or there. … You could write twenty or more songs off that one melody by slightly altering it. I could slip in verses or lines from old spirituals or blues.” This is the Dylan who figures in novelist Lethem’s Harper’s exercise in Dylan-ology, in which he distinguishes plagiarism from plumbing, theft from love.

What Dylan writes isn’t self-incrimination, nor is he selling any alibis. It’s a description of craft that cuts across musical genres. The cut’n’paste method of adoption that served as Dylan’s initial songwriting strategy sounds all the more familiar some four decades later, when rhymin’ and stealin’, to borrow a phrase from the Beastie Boys, is the norm. You wouldn’t have to change many of those words for them to resonate with a remixer, DJ, or laptop musician.

Understand, Chronicles is by no means a book focused on music and technology. When Dylan speaks of field recordings, he’s talking about Alan Lomax tracking down blues singers in the Delta. When he says of Lanois’s work on the song “Most of the Time” that “Danny put as much ambiance in this song as he could,” he is at best secondarily referencing the ambient music of Lanois and Lanois’s influential colleague, Brian Eno. Still, of all the songs waxed by blues legend Robert Johnson, the one that Dylan singles out for exegesis is “Phonograph Blues,” which he calls “an homage to a record player with a rusty needle.” It’s also a song about sexual impotency, but more than anything, it’s among the most modern of classic blues songs. It’s one thing for a bluesman to sing of trains, but a lyric about a phonograph punctures the veil of rural antiquity that many folk fans foist on the blues. “Phonograph Blues” is an apt choice for a musician, like Dylan, who, ever conscious of technology, intends on dispelling romantic illusions.

One of the last things Dylan does in the book is to state the following: “First thing I did was go trade in my electric guitar, which would have been useless to me.” It’s page 237, but he’s still living in Minneapolis—his life has proceeded in the book, all the way up to his stint in the supergroup the Traveling Wilburys, but now it’s looped back to Dinkytown, a neighborhood near the local university, where he was first getting into folk music and developing his performing persona. Dates are hard to come by in Chronicles, but it’s likely between 1959, when he entered the University of Minnesota, and 1961, when he relocated to Manhattan. It’s a Rosebud kinda moment. The man famous for going electric recounts when, years earlier, a child of rock’n’roll newly smitten with folk, he’d first unplugged.


Marc Weidenbaum

Marc Weidenbaum is an editor and writer based in San Francisco. He was an editor at Pulse! and a co-founding editor at Classical Pulse!, and he has consulted on website launches. Among the publications for which he has written are Down Beat, e/i, Jazziz, Stereophile,,,, Big, Make, and The Ukulele Occasional. Comics he edited have appeared in various books, including Justin Green’s Musical Legends (Last Gasp) and Adrian Tomine’s Scrapbook (Drawn & Quarterly). He has self-published, a website about ambient/electronic music, since 1996; it features interviews with, among others, Aphex Twin, Autechre, Gavin Bryars, Zbigniew Karkowski, Pauline Oliveros, Steve Reich, and the creators of the Buddha Machine.

Upwardly Mobile: What we talk about when we talk about laptop music

When I was invited by NewMusicBox to write an overview of “laptop music,” my initial instinct was that this would be less an introduction than a requiem. Isn’t the phrase “laptop music” sorta “over”? Well, as it turns out, no. Quite the contrary: More people are making more music with more software than ever on laptops.

In the few days since the publication of “Serial Port: A Brief History of Laptop Music,” I’ve already begun to hear from people, some referenced in the story itself, others simply involved in the culture at large.

The core of the article uses a recent concert by the Kronos Quartet as a kind of emblematic experience of laptop music. Three laptops were involved in the performance: one played by a member of Matmos in that duo’s piece with Kronos at the end of the second set; one employed by Kronos’s sound engineer/designer, Scott Fraser; and one by Walter Kitundu, the composer and instrument maker who performed with Kronos at the end of the first set.

In the article, I noted, “If Matmos’s use of the laptop best epitomized the aural fact of laptop music, Kitundu’s came closer to an audience’s experience of laptop music: you had no idea what he was doing.”

Well, to help clarify things significantly, Kitundu sent me an informative email, and he gave me permission to post it:

Just thought I’d demystify my laptop’s role during the Kronos piece at YBCA. I was using software that allowed me to play MP3s with a record on my Phonokora. The digital files were often versions of Kronos’ interpretations of Mingus collages that I’d assembled with turntables. They learned these elements and recorded them using their traditional instruments, and I reused them via the turntable to create some of the atmospheric sound of the piece, and to respond to what they played live during certain sections of the composition. (Mingus 3 or 4 times removed.) The Phonokora now has a USB crossfader… I was interested in mixing the old with the new, strings and bytes, natural/digital—seeing how refiltering ideas repeatedly via the process would affect the outcome. The composition was about memory (of a loved one passed on) and this was a concrete way to examine how memory transforms over time and through experience.

Another person asked me, subsequent to the story’s publication: “I do a lot of my music work on my iMac. My turntable is plugged into it even. Does this still count as ‘laptop music’? I mean, it’s Reason and Live and hopefully soon Reaktor.” (Those last three capitalized words are the names of different music-making software packages.)

That distinction was very much on my mind as I wrote the article. To me it comes down to what I describe in the article as a “continuity of technological experience.” The laptop has allowed people who both make music at home and perform in front of audiences to use the same equipment, and thus it has allowed them to develop a heightened sense of intimacy with their equipment. That’s what uniquely makes the laptop, among various computer-music tools, akin to an instrument.

So, no, an iMac doesn’t count as a laptop just because one makes music on it. But if that single computer becomes one’s primary apparatus, both as a studio unto itself and as a performance tool that one plays in various environments, then it certainly might as well be a laptop.

Serial Port: A Brief History of Laptop Music

Marc Weidenbaum

Inside the Box: The computer comes out to play

There’s often a vertical plane between musician and audience. The sheet-music stand paved the way for the upturned plastic shell of the turntable, and today, chances are that rectangle obscuring the face of the performer on stage is the screen of a laptop computer, which has emerged as a ubiquitous music-making tool.

The laptop, however, obscures more than just the musician’s face. Its uses vary too widely for it to be easily characterized. For some, the laptop is essentially a more portable equivalent of the DJ’s turntables, mixer, and crate of records. But for many, it is a means to bring the power of computer processing into live performance, creating music of the moment that’s composed of all manner of sonic detritus: field recordings, sine waves, sound bites of pre-existing music, pure feedback.

Computer music is nothing new, though it has certainly blossomed in the past decade thanks to the rapid spread of personal computing. The question is: What’s “laptop music”? How does the fact that the technology now is portable alter computer-enabled music? More than anything, the laptop has brought computer music not only out of the closet, but out of the house. And thanks to the laptop’s compact size and ease of use, it’s triggered several successive waves of adopters. “Laptop music,” as a result, isn’t really a genre, and since the laptop can run such a variety of music software, it may be inappropriate to simply call it an instrument. What is it? A phenomenon.

The laptop is a proverbial black box—well, generally speaking, a silver one, usually in this context affixed with a glowing Apple logo—and it has many inputs and outputs. The same could be said of its history and its future. This overview of so-called “laptop music” is an attempt to see what led up to this moment, to highlight some leading figures, and to look ahead to what “mobile music” might constitute down the pike. The laptop’s a bit like an SUV. It’s expensive and powerful and nice to look at, but how many people actually take it over really rigorous terrain? Well, plenty, in fact, from the microsonics of Tetsu Inoue, to the augmented field recordings of Christian Fennesz, to the spatial immersions of Carl Stone, to the fractured dance music of Autechre, all of whom have made the laptop computer one of their primary tools.


I initially studied computer science in college, and before I opted for an English degree, my favorite professor was an esteemed figure in the field by the name of Alan J. Perlis, a man who won the very first Turing Award (often described as the Nobel Prize of computing) the year I was born. He would often digress from a sequence of code that he was reciting from memory in order to tell us stories about the dawn of the study of computing. From today’s standpoint, in a time of iPods and Tablet PCs, my own college education feels like it occurred during the Stone Age, with those monochrome monitors and rudimentary programming languages. But for Perlis, our cathode-ray computer-lab terminals and the Macintoshes popping up in dormitories were generations removed from his Cretaceous-era schooling.

Prof. Perlis appreciated our difficulty with the problems he assigned each week, those all-nighters we spent eradicating bugs. He told us that when he was a graduate student there was a commonplace way that programmers went about wrestling with a faulty bit of programming: You’d open up the computer you were working on, enter it, sit on a cozy chair and contemplate the machine from the inside. That image has never faded from my memory. If anything, it’s become more vivid as computers have gotten smaller. This primer covering the laptop and its role in music today is a peek under the hood, now that the machines have gotten too compact to be entered directly.


Fast Backward: A brief prehistory of laptop music

In our everyday lives, phones double as cameras, high-tech supertitles accompany the opera, and TiVo automates the recording of PBS’s latest adaptation of Charles Dickens’ Bleak House. Alvin Toffler’s pessimistic “future shock” is experienced primarily in hindsight, when we consider how quickly our lives have been altered; by and large, it’s more along the lines of “past shock,” when we recoil at the idea of life without high-speed Internet access, ATMs, and Netflix. And after momentary consideration, we shrug it off, flip open the latest Best Buy circular and further consume our way into a technologically mediated future.

We could credit ourselves as a species with high points for adaptability, but to do so would be to underestimate how well we’ve been prepared for these seemingly bold technological leaps—often by cultural trends and scientific discoveries that date back not just decades but hundreds of years. Modern robotics had its precursors in the Renaissance doodles of Leonardo Da Vinci, and the modern computer in the decidedly pre-industrial writings of Ada Byron, Countess of Lovelace.

Likewise, the laptop music that even today might strike us as utterly new has certain precedents. These precedents prepared our imaginations, even if the technology that allowed the imaginings to be realized was a long time coming. There are numerous cultural currents that brought us to where we are now, but the following are six key 20th-century phenomena that prepared us for the 21st century: musique concrète, serialism, analog synthesizers, and hip-hop, along with the broader matters of the “studio as instrument” and rapid advances in computing.

Pierre Schaeffer
Pierre Schaeffer in 1952 playing the phonogène à clavier, a tape recorder with its speed altered by playing any of twelve keys on a keyboard. Photo courtesy of GRM.

Before there was sampling, there was musique concrète, a.k.a. tape and razor blades. With its origination attributed to Pierre Schaeffer (1910-1995) in the late 1940s, and its development to Pierre Henry (1927-) and John Cage (1912-1992), among others, musique concrète took recorded sound as the start, rather than the end, of the compositional process. Composers cut and spliced tape, working with pure sound (a bird call, an overheard conversation, an orchestral performance) as an element of construction. Today, Xacto blades are used mostly for opening eBay packages, and audiotape has gone the way of photo-sensitive film, but the spirit of musique concrète lives on in the computer’s agility with sampling and with molding whole segments of sound.

It would be quite understandable that one might glance at this year’s classical concert offerings and imagine that serialism, and by extension twelve-tone music, had come and gone. But even more than musique concrète, serialism foresaw the computer’s ability to perform functions on set blocks of prerecorded sound. Musique concrète may have introduced the idea that sound can itself be subjected to compositional play. However, it was serialism that posited the “row,” or a fixed set of notes, as a sonic object, and thus composition as a paper algorithm that is enacted on that subset. Today’s computer music, especially the live improvisation performed on laptops, enacts those transformations on sheer sound in much the same way. In place of “rows” we have “samples,” and in place of the codified twelve-tone transpositions we have the endless variety of computer mediation, like granular synthesis, in which processing is applied to extremely narrow slivers of sound; surround-sound effects, in which sound can be moved around in three-dimensional space; and backward-masking, that hallmark of tape-editing, just to name a few.
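The row-as-object idea lends itself to a few lines of code. The sketch below is purely illustrative (the sample row is arbitrary, not drawn from any particular piece): it treats a twelve-tone row as a list of pitch classes and the classic serial operations as functions applied to that fixed object—the same pattern of transformation that laptop software now applies to samples rather than notes.

```python
# Pitch classes as integers 0-11 (C=0, C#=1, ... B=11).
# A "row" is a fixed ordering of pitch classes; serial composition
# applies mechanical transformations to that fixed object.

def transpose(row, interval):
    """Shift every pitch class up by a fixed interval (mod 12)."""
    return [(p + interval) % 12 for p in row]

def invert(row):
    """Mirror each pitch class around the row's first note."""
    axis = row[0]
    return [(2 * axis - p) % 12 for p in row]

def retrograde(row):
    """Play the row backward."""
    return list(reversed(row))

# An arbitrary example fragment, transformed three ways:
fragment = [0, 4, 7]
print(transpose(fragment, 2))   # -> [2, 6, 9]
print(invert(fragment))         # -> [0, 8, 5]
print(retrograde(fragment))     # -> [7, 4, 0]
```

Swap the list of pitch classes for a buffer of audio samples, and the transpositions for time-stretching or granular processing, and this is—in caricature—what much live laptop performance does.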

Leon Theremin
Leon Theremin

We can now perform high-grade digital synthesis on the same machines we use to balance our checking accounts, but the origins of these so-called “soft synths” were the hulking analog synthesizers pioneered by the likes of Leon Theremin (1896-1993) and, later, Robert Moog (1934-2005) and tweaked to the point of ridicule by prog groups such as Yes. The histrionics of those rock performances may have earned some of the resulting backlash, but the globe-trotting bands served the purpose of road-testing the hardware. Even at this stage in the laptop’s ascendance, several well-respected electronic-based musicians, such as Thomas Dimuzio, eschew the laptop due to its lack of dependability. It’s also worth noting that the album Analord, the most recent release by Richard D. James, the electronic musician best known by the pseudonyms Aphex Twin and AFX, was performed entirely on vintage analog equipment—and as if to emphasize the old-school implication, the material was released initially as a series of vinyl 12″s.

The classical music world has, of late, wrestled with bringing hip-hop into the symphony hall. Witness Daniel Bernard Roumain’s A Civil Rights Reader, which adds a turntablist to a string quartet, and the Asian Dub Foundation’s commission by the English National Opera for an opera about Libyan leader Muammar Gaddafi.

Grandmaster Flash
Grandmaster Flash

But hip-hop’s influence on today’s art music is far more elemental than those events suggest. Hip-hop was birthed on the cheap. Its legacy isn’t merely a matter of having brought beats to the forefront of Western music; its legacy, at a more basic level, has to do with making the most of available technology, and with manipulating pre-existing sound in real time. Every time musicians pump a laptop’s sound card into an amplifier, they celebrate the Bronx DJs who made history with two turntables and a microphone.

Those four, fairly self-contained phenomena occurred in the shadow of a broader sequence of events. Like many a revolution, the concept of the “recording studio as instrument” started with a counter-cultural bang and ended up on retail shelves. The Platonic ideal of using the recording studio to capture otherwise un-performable music dates back, in the public consciousness, to the separate but related decisions by the Beatles and pianist Glenn Gould to cease touring in favor of the studio’s seemingly endless possibilities. Certainly there were precedents for “impossible” music—in, among other distinct realms, the tape constructions of Schaeffer and the player-piano machinations of Conlon Nancarrow. But there’s no overstating the impact of a good spokesperson, and in the Beatles and Gould, not to mention the Beach Boys and other aural seekers of the 1960s, the recording studio had many prominent individuals in its corner. The creative use of the recording studio to construct unreal, or “hyperreal,” music continued to evolve through producer Teo Macero’s work with Miles Davis, Steely Dan’s meticulously constructed pop, Brian Eno’s development of ambient and, inevitably and somewhat pathetically, the rise of celebrities who must lip sync their “live” concerts, so unequipped are they to approximate their fetishistically manicured hit singles.

Dovetailing with the steady rise of the studio-as-instrument was a second revolution: the asymptotic advance of the personal computer, and its eventual impact on audio recording. During the past half decade, many professional studios have been shuttered as the computer became the de facto recording studio, trading tape for hard drives and replacing, or augmenting, mixing boards with software like Pro Tools. Portable computers continue to lag behind the desktops for processing power, due to the twin issues of miniaturization and cost, but not long after the Apple PowerBook appeared, along with the equivalent Microsoft- and Linux-based machines, a laptop was capable of producing sounds that could not just entertain a live audience but could capture a musician’s imagination, even if it could not yet serve as a pro-grade console.

All of which has brought us to the present, to a moment when in music, as in life, computer literacy is increasingly essential to daily activity. What’s important to recognize is that for all our easy adoption of technology, there’s a strong tendency to doubt its efficacy, even its appropriateness, in the realm of art. Today, the computer plays a core role for most composers, whether they actively push the envelope with digital synthesis or simply use a software package such as Sibelius or Finale for notation purposes. Whether a composer treats the computer as an appliance or as an instrument is the lingering question.

Joshua Kit Clayton
Joshua Kit Clayton

I attended a lecture recently by musician Joshua Kit Clayton, who is also one of the programmers of the popular Max/MSP software. Max/MSP is a key application on many musicians’ laptops; it’s a language of sorts that allows users to code subroutines that can be applied to sound (and visuals, and more) or to other subroutines. Clayton was addressing an audience of undergrads at a prominent art school in San Francisco. The computer screen projected behind him was a Matrix-like spew, a flow chart of only somewhat comprehensible commands, boxes, arrows, and squiggly lines. It made the new math seem old hat by comparison. Perhaps sensing trepidation among his paint-stained listeners, he paused his geekily charismatic presentation to decry math illiteracy among musicians in particular and artists in general. Clayton sternly lectured the young audience: “Go home, smoke some weed, and get over your obstacles!”


Tool or Toolbox: The laptop’s ever-changing role

Matmos (L-R) M. C. Schmidt and Drew Daniel
Photo by Steven Halin

When the Kronos Quartet performed in April 2006 at the Yerba Buena Center for the Arts in San Francisco, three laptops were in full view—four if you count the guy in front of me who kept pulling out his ultra-portable OQO device, a fully functioning Windows machine smaller than the average paperback. But let’s stick to the concert. One of the laptops belonged to Walter Kitundu, who performed with the quartet at the end of the first set. The second laptop belonged to Drew Daniel, half of the electronic duo Matmos, which joined Kronos for the close of the second set. The third laptop belonged to Kronos sound designer Scott Fraser (an experimental musician in his own right), who sat behind a vast soundboard toward the middle of the hall and routinely checked the computer as the evening progressed.

All three laptops in the Kronos show were Apples. The Apple logo is as ubiquitous in electronic music as the Adidas logo once was in hip-hop, so this is no coincidence, despite Apple’s slim market share. That glowing logo, however, is probably all that the three machines had in common. Even if they were running the same software, which is unlikely, they were being used for entirely different purposes, yet all in the service of Kronos’ performance. Daniel’s was most certainly the source of the pumping rhythms that grounded Matmos’s collaboration with Kronos, For Terry Riley. In that sense, his laptop came closest to what is most commonly conceived of as laptop music: synthesized sound far removed from its raw materials, produced on the fly, generally with rhythmic intent (even when that rhythm slows to the nearly subaural realm referred to as “drone”).

Kitundu’s use of the laptop was less self-explanatory. The presence of the Apple during his piece, Cerulean Sweet III, was, in fact, a little surprising, given that his compositions center on instruments of his own invention, ingenious hybrids that often involve unfinished wood and old turntable parts. Cerulean is an artfully maudlin piece based on snippets of music by jazz great Charles Mingus, and the audience might have surmised that some of those pre-recorded sounds were emanating from his little computer—and not from the strange contraptions he and Kronos were performing on. In this sense, even if Matmos’s use of the laptop best epitomized the aural fact of laptop music, Kitundu’s came closer to an audience’s experience of laptop music: you had no idea what he was doing. Perhaps he was augmenting the sound of his instrument, which he calls a phonoharp. Perhaps he was merely displaying the score. During abstract sound-art shows by laptop musicians, it’s not uncommon for someone to ponder whether the performer is just checking his email while the music plays by itself. Such skepticism fades with familiarity, as the rough contours of laptop music become understood and the listener can judge a performance on the basis of the music rather than the player’s theatricality. More on that in a moment.

Walter Kitundu
Walter Kitundu
Photo by Donald Swearingen

First, don’t dismiss Fraser simply because of his job designation. It’s unlikely Fraser was participating in Kronos’s sound as a performer, though since virtually all of the pieces Kronos played that evening involved a pre-recorded portion—like the drum machine beats of Clint Mansell’s score for Requiem for a Dream and the old 78-rpm recordings heard in Dan Visconti’s Love Bleeds Radiant—it’s likely that he was triggering those. Fraser was primarily, whether with laptop, soundboard, or some combination thereof, shaping the way the music filled the hall. Much contemporary electronica, taking advantage of the tabula rasa of digital sound and the computer’s ability to produce immersive multi-speaker environments, is a kind of sound-design as art; Brian Eno’s ambient continues to serve as both metaphor and urtext here. These days, musicians including Richard Chartier, Ryoji Ikeda, and Nosei Sakata have made much of little more than a sine wave.

The variously ambiguous roles of those three laptops at that single Kronos event neither bookend the continuum of laptop music, nor even begin to touch on its breadth, but they do signify how differently the machine can be employed in concert.

It would have been one thing for the laptop to simply have come to serve as a digital version of turntables, allowing the user to cue up prerecorded material or to replicate the multi-tracking duties of a recording engineer. But the key to understanding the laptop’s role in contemporary music is to appreciate how it took the promise of the studio-as-instrument and turned it inside out. The laptop took the home studio and made it portable, portable in a way that even the four-track tape recorder had only hinted at. A musician seated on stage with a laptop has the ability to synthesize and transform sound in a manner that a decade ago would have been unthinkable.

The key word in that last sentence is not “unthinkable” but “manner.” Compositional issues as fundamental as matching tempos, or modulating a sequence of notes, or otherwise manipulating self-contained aural elements can be handled, thanks to off-the-rack software, at the level of gesture. Some software, such as Ableton Live, is usable right out of the box; others, such as Max/MSP, provide musicians with a language to learn, but one with which they might craft their own software tools. (Tellingly, many of Max/MSP’s engineers are accomplished electronic musicians themselves, including Joshua Kit Clayton, Les Stuck, and Luke DuBois, the last best known as a member of the Freight Elevator Quartet.) All of which is to say, not only are decisions that once determined how a composer set pen to paper now in the hands of an improviser, so too are the means by which that composer might have honed the sounds in a studio. Many newcomers to electronic music experience an epiphany when they come to understand how inherent improvisation is to composition on a computer.

The laptop is easy to learn and difficult to master. It doesn’t take much to figure out how to emit sound, especially if all you want to do is let a series of prerecorded songs overlap ever so slightly to produce a continuous mix; iTunes and other MP3 players have automated that task. Beyond that, though, decisions get murky fast: which software to use, whether to add additional physical inputs, like theremins, external CD players, keyboards, and other such triggers and sound sources. Most of the popular software employed by laptop musicians is available on both Macintosh and Windows platforms, and often Linux as well; musicians, like graphic designers, are attracted to the Apple for its elegant user interface.
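That automated overlap is simple enough to sketch in code. The following is a hypothetical, bare-bones version of the idea—not any particular player’s implementation—assuming audio as plain Python lists of sample values between -1.0 and 1.0; it fades the tail of one track into the head of the next with a linear crossfade (real players use subtler fade curves, but the principle is the same).

```python
def crossfade(a, b, overlap):
    """Join tracks a and b, fading a out and b in over `overlap` samples.

    Assumes 0 < overlap <= len(a) and overlap <= len(b).
    """
    head = a[:-overlap]    # part of track a before the fade begins
    tail = b[overlap:]     # part of track b after the fade ends
    mixed = []
    for i in range(overlap):
        t = i / overlap    # fade position, 0.0 (all a) -> 1.0 (all b)
        mixed.append(a[len(a) - overlap + i] * (1 - t) + b[i] * t)
    return head + mixed + tail

# Fading a "loud" track into a silent one over 2 samples:
print(crossfade([1.0, 1.0, 1.0, 1.0], [0.0, 0.0, 0.0, 0.0], 2))
# -> [1.0, 1.0, 1.0, 0.5, 0.0, 0.0]
```

The result is shorter than the two tracks laid end to end—by exactly the overlap—which is all “ever so slightly” amounts to in practice.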

Some musicians use the laptop as the equivalent of an effects pedal, just one more tool in their toolbox; for others it is the toolbox itself. At least for beginning laptop musicians, the old koan “less is more” is worth keeping in mind. While the software one employs doesn’t directly result in a specific sound, it’s safe to say that someone who employs a wide range of equipment and programs is probably going to make more complicated and chaotic music than someone who cuddles up with a single piece of software and tries to make the most of it.

The word “portability” doesn’t quite do justice to the laptop’s greatest strength. Anyone who has played a double bass has little sympathy for computer-enabled musicians who felt that only with the laptop could they perform regularly in front of live audiences—as if the desktop had been such a hefty or fragile item to lug to a gig. The laptop’s true triumph is about a continuity of technological experience. Jimi Hendrix slept with his guitar. Likewise, one laptop is not immediately interchangeable with another. The laptop gets personalized over time—from big things like which programs are loaded, to small things like the user’s adjustment to the location of the keys or the idiosyncratic hardware issues that computer manufacturers deny but that anyone who’s bonded with their laptop comes to think of as being akin to the machine’s personality.

So, what distinguishes a successful laptop performance from an unsuccessful one? Since I’d gladly attend a concert of nothing but old refrigerators, my opinion should probably be taken with a grain of salt, but here goes: Laptop music, whether quiet or noisy, beat-driven or hazy, is generally about process. Most laptop performances consist not of individual pieces but of extended improvisations that run from the beginning to the end of the individual’s set. Listening to a laptop musician perform can be less like listening to a composition and more like paying a visit to an artist’s studio: The real question is less what went into an individual piece, and more along the lines of what the person has been up to recently, what software they have been mastering, what sonic realms they are exploring.


Plastic Devices: Critical laptop innovators and recommended CDs

Any introduction to a specific realm of music necessitates a list of recommended listening. The following are eight individuals, in alphabetical order, who are central to the development of the laptop as a performance and compositional tool, and to its status as a bona fide cultural phenomenon, along with some of their essential recordings. Choosing individuals to best represent such a widespread convergence of technology and culture is a challenge. Come 2006, the laptop is a standard tool, and though innovation continues, the following musicians were actively involved before portable-computing hegemony set in. Also, they all record and tour frequently, which has helped expand their influence and introduce more people to the laptop’s potential. So pop one of these CDs into your own laptop’s disc drawer and think of it as the 21st-century equivalent of a player-piano roll…

Taylor Deupree
Taylor Deupree

Taylor Deupree (b. 1971), based out of New York City, is the proprietor of several record labels, chief among them 12k, whose lo-fi name belies its curatorial focus on high-grade sonic processing, and Term, a trailblazing “netlabel” that offers up for free the occasional recording (often a live performance or a future 12k album-in-progress) to anyone with a good Internet connection and a sizable hard drive. As a musician, Deupree is an unrepentant minimalist, one who mixes fragility and a mechanistic impulse on albums like stil. (12k, 2002), which revels in its interiority. He frequently collaborates with the musicians who are signed to his labels, notably on Live in Japan, 2004 (12k, 2005) with laptop-enabled guitarist Christopher Willits and Post_Piano (Sub Rosa, 2002) with Kenneth Kirschner, who uses the computer software Flash to compose chance-based works that involve ever-shifting layers of sound files.


Fennesz is Vienna-based Christian Fennesz (b. 1962) with laptop and guitar in hand. It’s difficult to recall, let alone imagine, that there was a moment not so long ago when plugging a guitar into a portable computer would have been a headline-making occasion. But it’s true. When Fennesz’s Endless Summer (Mego, 2001) was released, the laptop was still considered a self-contained unit, not to mention one still in its infancy. (This is long before Apple’s GarageBand software made the connection between laptop and guitar part of its marketing.) Endless Summer caused some minor consternation at the time—the laptop had only in the preceding few years gained enough firepower to serve musicians, and already it was being yanked into the past? And hadn’t hip-hop, not to mention rock’s own navel-gazing, killed the guitar? Apparently not. Fennesz defied digital purism in favor of music that tasted retro and futuristic; the title isn’t the only nod to the Beach Boys, not on an album so expressly languorous, the guitar sometimes plucked for rustic-folk flavor, and at others processed into a hazy background soundscape. Subsequent Fennesz recordings of note include the fractured Live in Japan (Headz, 2003), which requires several close listens for proper orientation, and Venice (Touch, 2004), on which he folds in field recordings, feedback, and David Sylvian’s singing.

Kid 606
Kid 606

The jaw-droppingly antic collage music of Venezuelan native, and longtime California resident, Kid 606 (given name: Miguel De Pedro; b. 1979), brought him quick and widespread attention following the release in 2000 of Down with the Scene (Ipecac), P.S. I Love You (Mille Plateaux), and the uncharacteristically relaxed The Soccergirl EP (Carpark). His nom du laptop, adopted from an old Roland drum machine (the 808 was already spoken for), is perfect for what has come to be known as “laptop punk”: a riotous assemblage as alarmingly retrograde as it is almost blissfully cacophonic. That the phrase “laptop punk” went from delightful oxymoron to outright redundancy in half a decade is a testament to the movement’s speedy evolution. In addition to the albums mentioned prior, don’t miss Kid 606’s aptly titled Kill the Sound before the Sound Kills You (Ipecac, 2003).


Monolake (b. 1969) personifies the convergence of composition, programming, and performance. It is the pseudonym of Robert Henke, a German craftsman of electronica that at its best burbles with a controlled sublimity—as if the music has been pressed, like autumn leaves, between thick plates of Lucite and then illuminated from below. Monolake was a duo until the other founding member, Gerhard Behles, left the name in Henke’s capable hands in order to form a company whose chief product, Ableton Live, is today one of the most popular laptop-performance software titles. Today, one of Ableton’s marquee practitioners is none other than Henke/Monolake, each of whose new full-length albums tends to appear in record stores right around the time a new version of Ableton Live is released (it’s up to its fifth iteration). Henke isn’t merely Ableton’s equivalent of a clotheshorse; he’s also an engineer on the software, bicycling once a week from his home to the company’s office to discuss interface design and to report on his latest plug-ins. Each new Monolake album is developed in tandem with the software. Key among them are Cinemascope (ml/i, 2001), road music disguised as sound design, and Hongkong (Chain Reaction, 1997), collecting tracks that laid the groundwork for the digital sedative known as minimal techno. Henke also records under his own name. Robert Henke albums tend to be more spacious, less rhythmic, than the work attributed to Monolake. Check out “Studies for Thunder,” the closing track on Henke’s Signal to Noise (ml/i, 2004), for its application of minimalist aesthetics to (entirely artificial) environmental atmospherics.

Ikue Mori
Ikue Mori

Ikue Mori (b. 1953), originally from Japan, came into prominence during the 1970s as a drummer in the New York art-punk band DNA. Since then she has been a fixture on the Manhattan music scene. (She records for John Zorn’s label, Tzadik, and the two frequently collaborate.) Mori somewhat famously ditched the drums for a drum machine, eventually developing an intense fixation on the laptop, which she employs for both solo performances and collaborations. Her mild-mannered stage presence masks the fluidity and speed of her music, which is marked by simultaneous layers and samples of traditional instrumentation. She is an equal partner in the trio Mephista with pianist Sylvie Courvoisier and drummer Susie Ibarra. Recommended are Mori’s recent solo album, Myrninerest (Tzadik, 2005), which has a certain mystic whimsy, and Mephista’s Black Narcissus (Tzadik, 2002); by positioning her laptop between piano and drums, Narcissus highlights the controlled serendipity she brings to the mix. Also worth tracking down: The Turntable Sessions: 2001-2002 Volume 1, a project initiated by Billy Martin, drummer with the jazz act Medeski Martin & Wood; the album teams her live with echo-laden DJ Olive on three tracks, one a trio with Martin.


The work of British musician Scanner (a.k.a. Robin Rimbaud, b. 1964) posits the laptop as a passport that allows him to move easily between cultural worlds. One evening he may perform a live set at a club, using the instrument from which he took his name to rip unprotected speech from the airwaves, his laptop emanating an improvised emotional soundtrack that lends context and drama. The next, he may be at a museum, performing one of his pieces that serve as both art and commentary, like 52 Spaces (Bette, 2002), an audio-visual reduction of Michelangelo Antonioni’s film The Eclipse, or the knowingly titled Warhol’s Surface (Intermedium, 2003), which applies his Scanner M.O. to recordings of Andy Warhol. Rimbaud is increasingly involved in collaborative, event- and site-specific work, like summoning voices of the past in an installation at a 13th-century hospital, or composing for ballet, or doing sound design for a horror movie. It’s the laptop that allows someone so mobile (not just geographically but artistically) to keep familiar tools at hand.

Carl Stone
Photo by Takamasa Aoki

Los Angeles native Carl Stone (b. 1953), who studied with composers Morton Subotnick and James Tenney, splits his time between Japan and America. Stone’s music often involves field recordings from his travels, though those sounds might be so contorted, into percussive noise or an ethereal silence, that they’re nearly unrecognizable. There’s something about his reputation as a peripatetic individual that makes his laptop seem to double as a suitcase, and his performances feel like aural slide-show montages of his latest activities. His frequent residence in Japan has brought him into the circle of indigenous microsonic noise labeled “onkyo”—represented by musicians such as Tetsu Inoue, Sachiko M, and Nobukazu Takemura—which is often, though not exclusively, produced on laptops. Stone has a particular interest in immersive installations, and I’ve witnessed him wow both an unschooled audience in New Orleans and a room of his peers in San Francisco with his surround-sound control of space and time. His Nak Won (Sonore, 2003) shows his mastery of choppy momentum, dozens of samples falling down a staircase in resplendent chaos. And even better is pict.soul (Cycling ’74, 2002), a collaboration with the ultra-rarefied Inoue, during which they riff with sonic particulates that make most minimalism sound like Wagner by comparison. (And lest this music be mistaken for a youth movement, note that a laptop-wielding Subotnick, born two decades prior to Stone, was a highlight of the 2005 San Francisco Electronic Music Festival, at which he collaborated with Miguel Frasconi.)

Keith Fullerton Whitman

Keith Fullerton Whitman (b. 1973), who grew up in New Jersey and today resides in Boston, maintains a second existence under the name Hrvatski, but even dual identities aren’t enough to encompass the breadth of his musical activities. As Hrvatski he has perpetrated some of the most blindingly ecstatic anti-dance music imaginable, beats that shred themselves as they go. Witness the often brazen wildness of Swarm & Dither (Planet Mu, 2002). But in time he’s emerged from that moniker’s shadow as a thoughtful, knowledgeable musician for whom the laptop is both a room in which he can seclude himself and just another piece of equipment in his shed. His best work involves guitar and laptop, notably the lush soundscapes of Playthroughs (Kranky, 2002), which update the live looping pioneered by guitarist Robert Fripp in the 1970s. Whitman has an ongoing partnership with Greg Davis, and the two released a maddeningly expansive compilation of live recordings last year, Yearlong (Carpark). By the way, though Whitman jokes that his home studio does double duty as his bedroom, he isn’t married to his laptop. The album Multiples (Kranky, 2005) put him to work on vintage equipment housed at Harvard and MIT, including the Serge Modular Synthesizer and the Yamaha Disklavier.


The Incredible Shrinking Computer: Music in the palm of your hand

The riddle goes: What part of your body disappears when you stand up? The answer: Your lap. To rephrase the question: What happens when your laptop disappears? That is: What happens as you get more accustomed to mobile computing and your threshold for acceptable portability becomes even more demanding—and those demands are met? The foreseeable answer is music in the palm of your hand.

Richard Devine

Despite all the activity in laptop music today, it seems unlikely that the laptop will, as a self-contained unit, have the sort of lifespan enjoyed by the piano, the string quartet, or the ukulele. There is no reason to suspect that the laptop is anything more than a transitional object as the computer slowly finagles its way into everyday life. The rise of the laptop isn’t simply that technology reached a point where portability made pre-existing computer music feasible for live performance; it’s that the technology reached a point where portability led to more rapid adoption, which at a certain critical mass led to unexpected consequences: laptop battles, as organized at, in which individuals vie for the reactions of judges and audiences; gestural interfaces, like the optical sensors added to CD players; homebrew electro-acoustic experimentation; and so forth.

The watchword for this sort of down-the-road activity is “pervasive” computing. Prognostication aside, it might be helpful, in closing, to just run through a variety of currently existing small electronic devices, some of them laptop-enabled, many if not all of them the spawn (or kissing cousins) of laptop music:

For the most part, this essay has been concerned with the realm of art, but it’s worth noting that the biggest-selling category in computer-based music is ring tones for cellular phones. These began as single-line melodic reductions of songs, from Vivaldi to 50 Cent, but as cell phones’ specs have improved, so too has the accompanying music, much to the annoyance of moviegoers and teachers. Today’s phones feature multiple voices, richer sound, and the ability for phone owners to shape sounds themselves; some even include tools for composing. Much as you can take a photo with your in-phone camera and email it to a friend, so too can you make a little tune and send it out into the world.

In this context, the first truly portable computing device of any practical consequence (attention, aficionados of Apple’s Newton: this means factoring in market penetration) was, arguably, the Palm Pilot, later re-christened simply the Palm. Microsoft developed a competing operating system, the Pocket PC, which features a slimmed-down edition of its flagship operating system, Windows. Both Palms and Pocket PCs have music tools available, from simple metronomes and guitar tuners to mini-keyboards, samplers, and multi-track recorders. For some examples of this sub-laptop music, check out the community that’s formed around Bhajis Loops, a music program for the Palm whose users include electronic minimalist Richard Devine.

It’s been noted that the laptop is significantly more than a digital turntable, but that isn’t to say that the laptop doesn’t frequently serve as little more than an MP3 player that can crossfade between tracks. A recent system called Final Scratch has carved out a common ground between computer and vinyl LP by providing DJs with a hybrid that allows them to use the LP as the interface for manipulating music, even though the music itself is stored not on the surface of the album but in a computer hooked up to the turntable. Despite digital reticence on the part of vinyl DJs, the equipment has caught on, in part due to the creative involvement of Richie Hawtin (a.k.a. Plastikman) and John Acquaviva. (There’s a similar system named Serato, manufactured by Rane.)

Many of today’s electronic musicians developed a fondness for lo-fi, synthetic sound while playing video games in their youth. While the soundtrack to the average video game has matured considerably, musicians’ tastes haven’t necessarily followed. Not only do communities exist for producing music at the 8- and 16-bit fidelities of the games of yore; many musicians also make music on hardware ranging from old Atari units to today’s Game Boys. Perhaps in response to this trend, a recent Nintendo DS cartridge, Electroplankton, is a sort of audio game or sound toy. It has no endgame, no specific goal, except making noise with an interactive psychedelic interface. Electroplankton takes advantage of the DS’s dual screens, one of which is touch-sensitive.

Speaking of alternate interfaces, the Tablet PC, on which the entire screen is touch-sensitive, shows a lot of promise for innovation. Tablet functionality is a standard feature on many Windows laptops, and Microsoft is pushing a new “ultra mobile” format that may dispense with the keyboard entirely. Then again, as the iPod’s success has shown, it’s likely that such hands-on computing won’t reach mass popularity until Apple joins the party.

Though circuit-bending predates the laptop, its popularity has surged of late, as electronic musicians have sought out new sources of sound. Circuit-benders take pre-existing hardware—most famously gadgets like Speak & Spells and other children’s toys—and mess with the innards until they squeal to the owner’s satisfaction and, more to the point, surprise. The founder of circuit bending is Reed Ghazala, who compiled his decades of experimentation last year in the book Circuit-Bending: Build Your Own Alien Instruments. I recently moderated a panel discussion at the first annual Maker Faire, held by Make magazine, in which I brought together three musicians who make their own instruments: a pair of these circuit-benders, Chachi Jones (a.k.a. Donald Bell) and univac (a.k.a. Tom Koch), plus Krys Bobrowski, whose inventions include a giant glass ‘armonica and horns made of dried kelp. To hear those three tell it, the definition of an instrument is as much in play at the dawn of the 21st century as the concept of the composition was toward the end of the 20th. They, along with Kitundu and folks like Pierre Bastien (who has built his own automaton orchestra), Matt Heckert (of Survival Research Labs), Ken Butler (whose guitars are known to sprout from tennis rackets), and Monolake (who has developed the “Monodeck,” a personalized bit of hardware that serves as a central hub for his software and equipment) are at the forefront of where instrumentation is headed.

Also at the Maker Faire this year was the crew that developed the Monome, a USB-enabled grid that serves as a sample trigger for laptops, though that description doesn’t do it justice. The promise of that particular device, which retails for $500, isn’t just its simplicity or the open-source community of musician-developers who will share the code they program to make the most of the new instrument. The promise is the idea that the mass production of such physical devices is no longer only in the hands of big companies like Yamaha and Korg.
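The essay doesn’t describe how the Monome actually communicates with a laptop, but the core idea it embodies (a grid of buttons, each toggling a sample on or off, with the mapping left entirely to the musician’s software) can be sketched in a few lines. Everything below is hypothetical: the class name, the sample names, and the toggle behavior are illustrative, not the device’s real protocol.

```python
# A hypothetical sketch of a grid-as-sample-trigger, in the spirit of
# devices like the Monome. The mapping of cells to sounds lives entirely
# in software, which is what makes such a device open-ended.

class SampleGrid:
    def __init__(self, rows=8, cols=8):
        self.rows, self.cols = rows, cols
        self.bindings = {}   # (row, col) -> sample name
        self.playing = set() # cells currently toggled on

    def bind(self, row, col, sample):
        """Assign a sample to a grid cell."""
        self.bindings[(row, col)] = sample

    def press(self, row, col):
        """Handle a button press: toggle the bound sample on or off."""
        sample = self.bindings.get((row, col))
        if sample is None:
            return None  # unbound cell: nothing happens
        if (row, col) in self.playing:
            self.playing.discard((row, col))
            return ("stop", sample)
        self.playing.add((row, col))
        return ("start", sample)

grid = SampleGrid()
grid.bind(0, 0, "kick.wav")
grid.bind(0, 1, "snare.wav")
print(grid.press(0, 0))  # → ('start', 'kick.wav')
print(grid.press(0, 0))  # → ('stop', 'kick.wav')
```

Because the grid itself carries no fixed meaning, the same hardware could just as easily drive a step sequencer or a light show; that indifference to any one use is the open-endedness the paragraph above describes.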

But for all the various small-scale music-making devices being produced today, none has the open-ended potential of the laptop. Laptop musicians aren’t just collectively working toward a shared understanding of how the device functions as an instrument; they are also, individually, as they piece together the perfect balance of software and hardware, making singular instruments all their own.

Marc Weidenbaum was an editor at Pulse! and a co-founding editor at Classical Pulse!, and he consulted on the launch of. Among the publications for which he has written are Down Beat, e/i, Jazziz, Stereophile, Big, Make, and The Ukulele Occasional. Comics he edited have appeared in various books, including Justin Green’s Musical Legends (Last Gasp) and Adrian Tomine’s Scrapbook (Drawn & Quarterly). He has self-published a website about ambient/electronic music since 1996; it features interviews with, among others, Aphex Twin, Autechre, Gavin Bryars, Pauline Oliveros, and Steve Reich.