Finding Myself in an Alternate Reality, or 12 months on Sand Hill Road

[Image: two elevators]

If you drive north from San Jose on I-280 towards San Francisco, you eventually pass the unassuming Exit 24, which takes you towards Sand Hill Road. Just past the SLAC National Accelerator Laboratory, Sand Hill Road is home to some of the most expensive corporate real estate in the world. (I was told a single 20-by-20-foot office in the same business park would rent for over $15,000 a month.) Here is the casino-laboratory where Silicon Valley’s unicorns are created: Apple, Uber, Airbnb, Lime. Some of the most ubiquitous names in our modern lexicon got their start, and their funding, on this road.

During my divorce, the assault trials, and the ensuing litigation, which together lasted approximately 20 months, I decided for safety and financial reasons to move in with family in the Bay Area, and I found a day job as a systems administrator for a local IT company. The job paid well enough that I could cover my bills, clear up some debt, keep my head above water, and start to save—something I had never been able to do during my five-year-long partnership.

I was assigned to provide technical support three days a week to the largest and most successful venture firm in the business park. I was responsible for end-user support of computer and tablet devices used by some of the most elite of Silicon Valley’s elite.

In the beginning, I hated this world. It was everything I had grown to despise about Silicon Valley and the Bay Area: wealth in excess of anything one could possibly spend in a lifetime, a complete lack of creativity in my tasks, a boring routine, a lousy commute, and people who, on good days, were simply unpleasant, and on bad days were downright rude. The job also had no connection to the arts, and for the first time in my life I felt completely disconnected from my field and craft.

I hated this world until someone in my family reminded me of several things:

  1. Nothing is permanent, including this job.
  2. You are taking care of what you need to do so you can live the life you want to.
  3. Try to learn something from this job. You never know what might help you in the future.

So I opened up my mind to try to learn.

I knew within about thirty seconds of working there that I would never want to be a financial analyst or investor, and that feeling never changed. I did, however, take away three things from this place that, as I learned in the coming months, would become incredibly important to moving my music career forward.

Something that had always eluded me in the pursuit of music as a career was how to sell myself and my work. Now here I was, standing in an office whose entire purpose was to watch people sell themselves and then decide whether or not to invest in them.

“Sales is sales,” an old boss used to say to me when I worked for an audio firm, “and art, or audio, is nothing but sales,” and I took that to heart.

Because of the nature of my job, I sometimes had to sit in pitch meetings and provide whatever technical assistance was needed, and I came to love watching these investors at work. It gave me the unique opportunity to see what technology critics used to call the “Steve Jobs Reality Distortion Field” and allowed me to learn three valuable lessons:

Time (and Money) is Limited

Even in the world of Silicon Valley business, where money seems endless, the reality is that time and money are in short supply. I noticed that these fund managers only invested in products or projects that spoke to them on some level. I decided to do the same, accepting only commissions, and pursuing only personal projects, that I felt a true connection to in some way.

How to Construct an Elevator Pitch

At one point I chatted for a few minutes with a major investor who had taken a liking to me. He asked what my life was like outside of my day job, and I mentioned that I had gone to conservatory. Knowing this person had an interest in the Bay Area arts scene, I was hoping to chat about this for a while. Instead, he looked bored and changed the topic. It was another reminder of how I had lost passion for my own work, and it showed. I decided to learn all I could about pitching and marketing my own work. If I didn’t believe in it myself, or show passion about what I had created, no one else would.

Passion is More Important

Time after time, I saw products come in that (in my opinion) I could not imagine anyone in their right mind paying for, but the passion that these engineers, developers, and CEOs brought to the table was what eventually caused the firm, if not to invest outright, then to advance them to the next round of decision making. It was the passion that got them continued meetings with higher- and higher-level employees.

My parents had hoped that by living surrounded by family I would be able to get more work done. What they believed I had come to Silicon Valley to do, make art, was not to be, but what I learned from what Silicon Valley does best—innovate—affected my work Sonetos del amor oscuro beyond what I had thought was possible.

This project, originally started after the mass shooting at Pulse, became an obsession for me. Creating something that I was passionate about was the breathing room I needed outside of my day job. By day I fixed tablet computers and by night I buried myself in this work. Building on what I had learned in my previous work Remember the Things They Told Us, I again wrote from the heart. I relied exclusively on craft and intuition without attempting to devise contrapuntal contraptions or other gimmicks to create some heady work of art as I used to do.

I lived the text that García Lorca had set down on those pages. I soaked them up, and it was in those words that I could come to terms with myself as queer. Though I had come out at the age of 22, I had not truly admitted it to myself until I began to devour this work. I had always held the belief that I was more than my queerness, and in order to live up to that, I had always tried to avoid coming off as “too queer” (whatever that meant) in my writing. The effect, however, was more like cutting my writing off at the knees. To quote the great Bill Watterson, it was almost as though I was saying to myself, “you need a lobotomy, I’ll get the saw.”

Hearing this work performed live became extremely important to me, because it meant that for the first time I would publicly acknowledge an aspect of myself that I had never previously felt was important or relevant, but that I had come to understand, in rediscovering myself, was more integral to who I am as a composer than I had realized. A recent trip to South Asia had also reminded me that, in much of the world, a queer artistic voice cannot count on going unpunished, let alone being validated, and conversations with other queer friends in Mexico City reminded me that most Latinos, especially queer Latinos, do not even have a platform to bear witness in this way.

When I approached the Great Noise Ensemble with a concept recording and a partial score, Armando Bayolo graciously agreed to do the work on their “Four Freedoms” series, a series of four concerts each of which recalled one of Franklin Delano Roosevelt’s “Four Essential Freedoms”: freedom of expression, freedom of worship, freedom from want, and freedom from fear.

Freedom of Expression was truly the epitome of what this work meant to me, and it would begin to drive a need for me to become more of an activist citizen-artist than I had ever been before.

The Internet is a Strange Place for Music

[Image: a computer keyboard with an iPhone on top of it, streaming a music video]

I: Time is Different on the Internet

Time is different on the internet. We spend time differently in that realm, often more frenetically. While our time in the “real world” is spent in hourly chunks—an hour at lunch, eight hours at work, an evening out with friends—we enter and exit the internet in many short bursts. Our sessions may span from minutes to mere seconds, but they pile up to hours per day. Our capacity to focus while online both widens and narrows, whether we are spending an entire evening on Netflix or skirting across dozens of different webpages in a single hour. These differences, in how time is spent and how its passing is felt, derive from our control of it. It is the strangest of relationships we have to time and space. Online information is easier and quicker to access. It is also easier to produce. Therefore, we don’t invest much time in any single piece of content. It becomes disposable. Ultimately, online content has little control over how much time we spend on it.

In music, this control over time is significant. Consider scrubber bars, the progress bars on digital media players that allow the user to jump to any given moment in a clip. These tools give a listener a kind of time-travel ability. It’s not a completely new ability; one can drop a needle anywhere on the side of an LP, and fast-forward and rewind functions exist on CDs and cassettes. But scrubbing on those media carries a level of randomness to it. On the internet, a media clip can be scrubbed through with maximum specificity and efficiency. The YouTube and Vimeo scrubber bars not only indicate how much time has elapsed in the clip, they also flash a thumbnail of whatever moment you place your cursor over. Scrubber bars on SoundCloud achieve a similar task for audio by displaying an image of the clip’s waveform. These tools not only enable easy movement through musical time, they also quickly summarize information about the media clip, revealing its contents to a user before they are even experienced aurally.
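To make the SoundCloud example concrete: the waveform in a scrubber bar of that kind is typically just the audio reduced to one peak level per horizontal pixel. Here is a minimal Python sketch of that reduction, assuming the numpy and soundfile libraries and a placeholder file name; SoundCloud’s actual implementation is surely more involved.

```python
# Reduce an audio file to a waveform overview: one peak value per pixel.
# "clip.wav" and the 800-pixel width are illustrative assumptions.
import numpy as np
import soundfile as sf

y, sr = sf.read("clip.wav")
if y.ndim > 1:
    y = y.mean(axis=1)  # mix multichannel audio down to mono

width = 800  # horizontal resolution of the scrubber bar, in pixels
bins = np.array_split(np.abs(y), width)
peaks = np.array([b.max() for b in bins])  # one bar height per pixel

# Drawing 'peaks' as vertical bars yields the familiar waveform, and any
# pixel x maps back to the time x / width * (len(y) / sr) seconds, which
# is what lets the image double as a navigation tool.
```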

Now, most music is meant to be listened to straight through. A listener isn’t required to utilize the scrubber bar. In fact, to do so can be a deadly temptation, especially in classical and contemporary concert music. Pausing, skipping, or taking a peek at the timecode, these can spoil hard-earned accumulations of musical tension and long-form development. But staying on a single webpage for more than just a few minutes, this is not natural behavior on the internet. The scrubber bar, like all the other tools built into a digital interface, is designed to eliminate wait time and get a user to a particular moment as fast as possible. Such goals are not often pertinent to a musical experience, yet they carry a significant effect on the aesthetics of listening to music online. The scrubber bar alters the agency of a listener. In turn, visuals, developmental structure, and interactivity relate differently on the internet than they do in live spaces. Before diving further into these particularities though, we must understand first how time in music is related to space, both physical and digital.

Spending time and experiencing the passing of time are both about personal expectations. They are also about choice. Liza Lim’s opera The Navigator is 90 minutes long without intermission. Is this too long? Not at all, certainly not for the concert hall. Audience members are expecting this length. They know when they buy their tickets and when they settle into their seats that they are going to be there for about an hour and a half. Performance spaces govern the length of musical time. For example, classical music concerts are often one to three hours long. They usually begin with one, two, or three shorter works (five to twenty minutes each), followed by one long work (between an hour and ninety minutes). Artistically, there is no reason concert music cannot be made for much shorter or much longer lengths. However, such instances are often statements about length, a purposeful deviation from the norm. While a piece of concert music may fall within or outside of the standard length of a concert, there is no denying that a standard length for music in the concert hall exists.

This principle is true for all performance spaces. Think about a dance club. The social function of that space, just like the concert hall, begets a standard length of time for the music it houses. A DJ set is usually one or two hours, but each song will never be more than a few minutes. In a dance club, the energy must be high and constant. Songs are best kept short and impactful, allowing for the flow of energy to be tightly controlled by the DJ. The way people enter and exit the dance club, this also begets a standard for how the music develops over time. Back at the concert hall, audience members enter before the beginning and are expected to remain in their seats until the end of the performance. Therefore, this space, with its captive audience, is well suited to have music that makes long-form motivic connections (such as the kind in a Beethoven symphony). In the dance club, development becomes less about motives and more about the flow of energy and mood. With people entering and exiting at different moments, motivic connections will not necessarily be perceived by a listener. However, many people come to the club for a dance experience with a dynamic flow of energy. Therefore, the DJ focuses less on musical motives in their set, and more on a visceral, physical continuity. This way in which performance spaces influence development illustrates how these standards around time are not arbitrary. The social context, that communal ritual that takes place in the hall, club, temple, mall, and coffee shop, carves acoustic peculiarities into the walls and ceiling of the space, reinforcing and encouraging music inside it to behave a certain way in time.

II: Time is Hard Won on the Internet

Compared to performance spaces in the “real world,” the internet is not a normal place for music. The scrubber bar in digital media players gives listeners a particular control over their listening experience, making it markedly different from any live circumstance. On one hand, some music made for live performance becomes more difficult to listen to on the internet. It can feel unnatural to listen to a piece without pause, to not click away before the end. On the other hand, this new relationship between listener and music opens the door to aesthetic avenues rarely exploited in the corporeal realm. The visuals, development, and interactivity of music are three components drastically redefined online.

The visuals of a musical performance are straightforward in most circumstances. In a concert hall, we see the musicians performing when we listen to the music. In a temple, we often are faced with religious iconography during a liturgy. Digital standards for the visual elements in a piece of music are much broader. On YouTube or Vimeo, it’s plausible that you would see either of those two things. However, it’s just as plausible that the music would be accompanied by a produced music video, some album artwork, GoPro footage, or any number of other things. Online, where depth perception, peripheral vision, and audio playback are completely different from a “real world” viewing experience, performing musicians are not necessarily the most logical visual material to pair with a piece of music.

Sheet music is a popular visual for contemporary music online. Conduct a YouTube search of the composer “Brian Ferneyhough,” and you’ll see that a majority of recordings are paired with images of the score, rather than the live musicians. This makes sense. Through a camera lens, often much is lost from a visual of the live musicians. To look from one musician to another, or to notice different aspects of the stage and lighting, this ability is given away to the videographer. On the other hand, with a still image of sheet music, where the visual plane is two-dimensional, the agency to focus on different regions of the picture is returned to the listener. Today, there is a whole network of synchronized score-to-video creators on YouTube, such as the Score Follower channels, George N. Gianopoulos, Mexican Scores, gerubach, and many others.

Of course, it is still possible for live musicians to be engaging on video. After all, the ability to shift perspective and attention around a visual is not removed; it’s merely transferred from the listener to the videographer. Four/Ten Media invests great attention in the visual design of their videos. Consider their production of Argus Quartet performing Andrew Norman’s Peculiar Strokes. The cutting of the camera angles aligns with the momentum and focus of the music. The lighting and set design are sleek and playful, much like the aesthetic of the work. And, rather than having the traditional silent pause between movements, the camera cuts to headshots of the musicians verbally signaling each movement. These visuals amplify and elevate the music. In his film of Vicky Chow performing Andy Akiho’s Vick(i/y), Gabriel Gomez skirts the line between performance and music video. Over a performance by Chow on an upright piano in a Brooklyn apartment building, Gomez inserts footage of other locations and people. This material is not functional to performing the music; rather, it adds metaphorical energy to Chow’s playing and Akiho’s composition. Like Four/Ten Media, Gomez is outfitting a live performance for a digital medium, only with an added layer of visual poetry. Videography can also take a less straightforward relationship with the music. In Angela Guyton’s video of Kate Ledger performing Ray Evanoff’s A Series of Postures (Piano), the close, hand-held, continuous shot from the camera provides a fluid visual counterpoint to the piece’s pointillistic, angular articulations.

In all of these examples, the visual component is outfitted to make each moment of the music more stimulating, engaging, and full of information. With an increased level of interest in each moment, the listener might forgo any desire to operate the scrubber bar on the YouTube or Vimeo player. That surrender of control, which a listener voluntarily gives at the beginning of any performance in a concert hall, is now surrendered even in the digital space. Even an hour-long piece without break can retain viewership over the entire performance if the video is produced just right. However, this is only part of the picture. Just as there is music that is designed to erase the scrubber bar, there are internet-born aesthetics that acknowledge, even exploit, this tool’s function.

III: Control Varies on the Internet

The hard-won item that the scrubber bar gives to the listener is control of time, the ability to move to any moment of a piece at will. However, such tight control is not always a necessary asset to a piece of music. For the likes of Radigue, Czernowin, or Beethoven, control of time is important. These composers take large amounts of it in order to express unique, long-form ideas. They paint narrative, trigger tension and release, and accumulate powerful physical sensation. Development is the concept that requires control over time. But long-form development, at least in this conventional sense of the idea, is not always a primary component in a piece of music. Conceptual music, as well as music from meme culture, has become highly disseminated online. These types of music are not without development. Rather they structure musical time in a way that does not rely on the listener’s full experience of it.

Conceptual music from both a pre- and post-internet age has an immediacy to its temporal structure that sits easily in the space of a digital media player. Consider Patrick Liddell’s I Am Sitting In A Video Room. Following a similar structure to Alvin Lucier’s I Am Sitting In A Room, this piece is a sequence of the same 41-second video of Liddell, uploaded, downloaded, and re-uploaded to YouTube many, many times. Specifically, he does this 1,000 times, causing a slow incremental degradation of picture quality over time. This degradation is a type of long-form development. However, that development is present to serve the concept of the piece, not the listener’s experience of the concept. The piece is centered around the conceptual idea of quality and degradation on the internet. Such a concept is immediately clear from the start of the piece. It doesn’t matter whether the viewer watches every single moment of the 1,000 re-uploaded videos or not. The concept is expressed regardless of whether the viewer watches the whole video, skips over the middle, or never gets to the end. Conceptual music like this takes up time, but it does not need to control much time. The viewer may move around in the scrubber bar as they wish, or they may even sit and listen to every single moment of the work. Either way, the piece still effectively conveys itself, and the listener is able to receive it adequately.
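The mechanics of that degradation are easy to approximate at home. The sketch below, assuming Python plus the ffmpeg command-line tool and placeholder file names, lossily re-encodes a clip 1,000 times so that compression artifacts compound from one generation to the next; Liddell’s actual process ran through YouTube’s own upload pipeline, so this is an analogy rather than a reconstruction.

```python
# Simulate generation loss: re-encode a clip N times with lossy codecs.
# Requires the ffmpeg binary on PATH; file names are placeholders.
import subprocess

src = "clip_0000.mp4"
for i in range(1, 1001):  # Liddell's piece used 1,000 generations
    dst = f"clip_{i:04d}.mp4"
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-c:v", "libx264", "-crf", "30",  # lossy video re-encode
         "-c:a", "aac", "-b:a", "96k",     # lossy audio re-encode
         dst],
        check=True,
    )
    src = dst  # each generation becomes the input for the next
```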

This release of control is present even in pre-internet-age conceptual music. In György Ligeti’s Poème Symphonique (1962), 100 mechanical metronomes are set going all at once, and the performance ends when the last metronome ceases motion. Erik Satie’s piano piece Vexations (1893) is a single page of piano music played 840 times very slowly; performances range from as short as eight hours to as long as 35. Like I Am Sitting In A Video Room, these pieces don’t require the audience to listen in their entirety. They don’t even require listening at the same pace as the piece’s form. One aspect of these pieces, when performed live, is that an audience may enter and exit as they wish, as with a sound installation. In online settings of these pieces, the scrubber bar provides an augmented version of this enter/exit freedom an audience had in live performances. Online, the ability to skip and rewind is added to this set of listener freedoms, providing a contemporary analogue for an agency that already existed in the live performance space of sound installations.

IV: Development Through Time vs. Through Network

Music from meme culture also carries an immediacy that sits well in the online space. Like conceptual music, it does not require a tight control of time in the listener’s experience. Unlike conceptual music, though, which still needs time to actualize a concept, meme music relies on social networks, rather than time, to express itself. Consider the meme “All Star” by Smash Mouth, which is slightly different from the song that is “All Star.” The song “All Star” is a standard three-and-a-half-minute radio hit from 1999, and that is all it is. The meme that is “All Star,” however, is an open-ended collection of different homemade treatments of the song by the same title. Here is a treatment of “All Star” where the song’s lyrics are replaced entirely by the single phrase from the pre-chorus, “and they don’t stop coming.” Here is another where the vocal line has been pitch-corrected into a four-part chorale in the style of J. S. Bach. Here is yet another, from a man named Jon Sudano, who uploads dozens of vocal covers of pop songs in which he sings only the melody of “All Star” over the given song. When it comes to time and development, the duration of each of these meme-pieces is ultimately inconsequential. The expression of the meme-piece comes not from time, but from the cultural baggage accumulated via the meme’s dissemination and connection to other memes. Therefore, as long as the cultural reference is communicated, the role of time in the meme is irrelevant. What is significant about a version of “All Star” performed on an old cell phone has nothing to do with compositional technique, harmonic content, or performance practice. Rather, such a piece prompts a listener to recall an earlier time (early-2000s pop-rock, Shrek, dial-up tones) in an absurd and emotional way.

As more iterations of the “All Star” format are created, the internet-native humor and disjointed coherence of the viral process take over the original aesthetics of the band’s song. To invoke the song “All Star” today is to reference a meme, not a mere song. This is a form of development, of evolution, that occurs outside of the individual meme-piece. It’s a form of development defined by its networked connection to other meme-pieces of similar format. It doesn’t happen over the course of any single iteration of the meme. The development is the change between iterations, between meme creators, over the course of its viral lifespan.

Meme culture is self-aware in a way that has created a sort of musical catalog of its own trends and moments. Adam Emond created 225 YouTube videos of pop songs with every other beat removed. Whereas reordering the beats of a song is usually one of many treatments applied to a single meme-song, Emond took the inverse approach and applied the same treatment to many different songs. ZimoNitrome created a catalogue-work, april.meme, in which 24 memes trending in April 2018 were used as material for a single two-minute video piece.
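Emond’s treatment is itself a neat little algorithm: find the beats, slice the song at each one, and throw away every other slice. Here is a hedged sketch of the idea in Python, using librosa’s beat tracker; the tool and the file names are my choices, not necessarily Emond’s.

```python
# Remove every other beat from a song, in the spirit of Emond's videos.
# librosa and soundfile are assumed installed; file names are placeholders.
import librosa
import numpy as np
import soundfile as sf

y, sr = librosa.load("song.wav", sr=None)

# Estimate beat positions, then convert frame indices to sample indices.
_, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beats = librosa.frames_to_samples(beat_frames)

# Slice the audio beat-to-beat (including the edges), keep alternate slices.
edges = np.concatenate(([0], beats, [len(y)]))
segments = [y[edges[i]:edges[i + 1]] for i in range(len(edges) - 1)]
kept = segments[::2]  # every other beat-long chunk is dropped

sf.write("song_every_other_beat.wav", np.concatenate(kept), sr)
```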

V: Surrendering Control and Opening the Door

There is one last posture towards time that the internet encourages music to take: interactivity. Through intentionally massive lengths of time, listeners are prompted to actively use the scrubber bar as a means of exploring at their own pace. Johannes Kreidler’s Audioguide, a seven-hour-long, non-stop theater work, exhibits this. It is made up of many smaller conceptual pieces, sequenced together one after another without break. While certain moments of the seven-hour work are uploaded as excerpts, the piece also exists in its entirety as a single video. This length, which is nearly indigestible in a single sitting on the internet, inevitably prompts the listener to “search around” the piece using the scrubber bar. A recording, as it exists on YouTube, is less of a performance to sit through, and more of a landscape to explore. By incorporating massive durations, music on the internet can take on an interactive component, where the timing of the listening experience is reliant on the viewer.

Now, achieving a sense of “landscape” via extreme length is not an internet-native aesthetic. Rather, these lengthy online pieces can also be seen as a sub-category of the sound installation. In 2001, St. Burchardi church in Halberstadt, Germany, began a performance of John Cage’s ORGAN2/ASLSP for organ (a piece composed in 1987). The piece is built from extremely long durations, and this particular performance, live-streamed 24/7, will last until the year 2640 (639 years). In the early 2000s, Leif Inge began time-stretching recordings of Beethoven’s Ninth Symphony so that each lasted 24 hours. Inge has been producing live installations of these time-stretches around the world ever since, as well as maintaining a constant live stream on his website. Like Audioguide, these pieces exist on a magnitude of duration beyond the average person’s attention span. However, in physical spaces, as well as in live streams (where scrubber bars are not present), the listener is only ever intended to experience a single portion of the piece; the composer still controls time as it runs through the music. The key difference between these live performances and streams and video pieces like Audioguide (as well as the next examples) is that the scrubber bar allows the piece to be digested in a way that is more cursory, exploratory, and non-linear. Time and form are then determined by the listener rather than the composer.

Stretch videos like Inge’s have become an entire category of this interactivity in themselves. Hundreds of these videos exist online, time-stretching the music of Brian Eno, Radiohead, Beethoven, and John Williams, and even computer sound effects such as the Windows startup sound. Unlike a live stream, these take the form of multi-hour videos in which a listener may move from moment to moment at their preferred pace. Though music will always be moving transiently through time, these stretch videos are the closest thing there is to exploring a piece as a static object, something to touch, observe, and walk through.
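At its core, the production of such a stretch video is a single operation: an extreme time stretch that slows the audio without lowering its pitch. Below is a rough Python sketch using librosa’s phase-vocoder stretch; the file names are placeholders, the approach is generic rather than Inge’s actual toolchain, and a true 24-hour render would need to be processed in chunks rather than in one call.

```python
# Stretch a recording so that it lasts 24 hours, preserving pitch.
# File names are placeholders; librosa and soundfile are assumed.
import librosa
import soundfile as sf

y, sr = librosa.load("beethoven9.wav", sr=None)

target_seconds = 24 * 60 * 60          # one full day
rate = (len(y) / sr) / target_seconds  # rate < 1 slows the audio down

# NB: the result is enormous (roughly 15 GB of float32 samples at 44.1 kHz),
# so a real render would stretch and write the audio chunk by chunk.
stretched = librosa.effects.time_stretch(y, rate=rate)
sf.write("beethoven9_24h.wav", stretched, sr)
```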

These super-long pieces of music have a second posture towards control of time: if a listener is not scrubbing through the piece, they are most likely playing it as background music while they study, read, or sleep. This more passive form of interactivity imports easily into the internet space, where performing music (i.e., vibrating speaker cones) requires a near-to-nothing expense of energy. Max Richter’s Sleep, currently on Spotify, is a piece designed around this very idea: the eight-hour-long ensemble piece is meant to play while a listener sleeps. Jack Stratton of the band Vulfpeck, meanwhile, released Sleepify in 2014, a ten-track album of silence. A pun on the streaming platform Spotify, the album was meant to be played on repeat during sleep so that streaming royalties could be farmed while people’s devices were not in use. “Sleep music” like this actually has a rich history, one full of live spaces, not just online ones. R.I.P. Hayman was presenting sleep concerts as early as 1977, and many more artists have come since. So while the concept of sleep music is not native to the internet, the low amount of mechanical work needed for sound to be digitally produced illustrates how easily sleep music fits into the internet space.
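The economics of Sleepify reward a quick back-of-envelope check. Spotify counts a stream once a track has played for 30 seconds, and each of Sleepify’s silent tracks ran just past that mark; the per-stream payout below is an assumed ballpark figure, not an official rate.

```python
# Rough royalty-farming arithmetic for an album of ~31-second silent tracks.
TRACK_SECONDS = 31         # just past Spotify's 30-second stream threshold
PAYOUT_PER_STREAM = 0.005  # assumed USD per stream; real rates vary

sleep_seconds = 8 * 60 * 60  # one night of sleep on repeat
streams_per_night = sleep_seconds // TRACK_SECONDS

print(streams_per_night)                      # ~929 streams per listener
print(streams_per_night * PAYOUT_PER_STREAM)  # ~$4.65 per listener-night
```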

All in all, we’ve looked at three different postures towards the control of time on the internet. Through examining visuals, we have seen how control of time can be aggressively won over from the listener. When development becomes centered around concept and cultural reference more than around time, control of time becomes less relevant to the piece. And finally, in creating massive, interactive terrains of sound through extremely long pieces, control is given over to the listener. In surveying these aesthetics, it is also clear that music on the internet carries an extremely broad spectrum in how much effort and resources are needed to create it. The Four/Ten Media video of Argus Quartet was likely the fruit of a team of artists, editors, and technicians, as well as several thousand dollars. That piece rests on the same viewing platform as the time-stretched video of Beethoven’s Ninth Symphony, a piece that requires only a laptop, free software, and an internet connection to create. The internet is a strange place for time. It is in this strangeness that a door is opened up to the parameter of resources.

At the beginning of this piece, the scrubber bar was presented as an anomaly to the musical experience. Like nearly all online tools, it functions to increase efficiency and deliver information faster, two imperatives that seem unrelated to the priorities of experiencing music. But beneath the goal of maximizing efficiency is a deeper one: to democratize resources and equalize different voices in a conversation. It is an ethic and virtue of the internet, the same one behind open source and the public domain. If this is true, then listening to music on the internet is not an anomaly at all. The concert hall, the dance club, and the religious temple all have social and physical peculiarities that carve and mold music to fit easily into the space’s original design. The internet is no exception. Its virtues of democratization, and its digital peculiarities such as the scrubber bar, shape and mold music. It touches music’s visuals, developmental structures, and interactivity in a way that ultimately makes composing possible for more people. More and more, the internet is being considered a primary space for music performance and dissemination. While the initial effects of this trend are aesthetic, shaping the way time is controlled and utilized by the artist, music on the internet inevitably influences every aspect of creating music. For many, this makes the internet a strange place for music. But given just how pervasive the digital space is becoming each day, such a place may not remain strange much longer at all.

How to Exist: 20 Years of NewMusicBox

[Image: an interview in a study-type room; a man sits on a couch, another man with his back to us sits in a chair, and a woman in a blue dress films from behind the camera]

Forgive me if I begin this look back at twenty years of NewMusicBox and its times by opening a different, older, but resolutely print magazine. In October 2000, about 18 months after NMBx’s founding, The Wire, the UK-based magazine for new and exploratory music, reached a milestone of its own: issue number 200. It marked the occasion with a directory of 200 “essential websites”: sites for record labels, venues, artists, discussion groups, and more. Nearly two decades later, the idea of trying to write down any sort of meaningful index to the web seems extraordinarily quaint; but at the start of the century, before Google transformed how we think about information, such things were not uncommon. Back then—and I’m just about old enough to remember this—it still felt as though if you put in a few days’ work, you could pretty much get a complete grasp of the web (or at least of that slice of it that met your interests).

Within The Wire’s directory, among a collection of links to 18 “zines,” sits NewMusicBox. Here’s Christoph Cox’s blurb:

Run by the American Music Center, an institution founded in 1942 [sic] “to foster and encourage the composition of contemporary music and to promote its production, publication, distribution and performance in every way possible,” NewMusicBox’s monthly bulletins do this admirably, and, with recent issues exploring topics as various as the relationship between alternative rock and contemporary classical, the funding of new composition, and the world of microtonality, regular visits are worthwhile.

NMBx’s presence on this list isn’t surprising. (Although I hadn’t looked at this issue of The Wire for many years myself, I was confident the site would be in there.) The online magazine of the AMC (and later New Music USA) has always been close to the forefront in online publishing. What is surprising—and just as telling—is that aside from a few websites devoted to individual composers (Chris Villars’ outstanding Morton Feldman resource; Eddie Kohler’s hyperlinked collection of John Cage stories, Indeterminacy; Karlheinz Stockhausen’s homepage-slash-CD store-slash-narrative control center stockhausen.org), almost no other sites in The Wire’s catalogue are devoted to contemporary classical music or modern composition. The sole major exception is IRCAM, whose pioneering, well-funded, and monumental presence (especially through its ever-expanding BRAHMS resource for new music documentation) gives an indication of the level NMBx was working at to have achieved so much so early on.

Although NMBx was at the forefront of online resources in 1999, the idea of an online publication for contemporary American music had been circulating at the AMC for some time. A long time, in fact. In 1984—just two years after the standardization of the TCP/IP protocol on which the internet is built, and when that internet was still called ARPANET—the AMC’s long-range planning committee wrote, “The American Music Center will make every effort to become fully computerized and to develop a computer network among organizations concerned with contemporary music nationwide.”[i] This seems like an almost supernatural level of foresight for an organization that was still at that time based around its library of paper scores. That is, until one recalls the number of composers, especially of electronic music, who were themselves at the forefront of computer technology. One of these was Morton Subotnick, a member of the AMC board and one of new music’s earliest of early adopters. Deborah Steinglass, currently New Music USA’s interim CEO, but back then AMC’s Director of American Music Week (and soon to become its Development Director), recalls a meeting in 1989—the same year that Tim Berners-Lee published his proposal for a world wide web—in which Subotnick introduced the board to the potential of computer networks for documenting and sharing information, to the astonishment and incredulity of its members.[ii]

Yet they were moved to take it seriously. Carl Stone, another composer-board member who was involved from an early stage, reports that early models were an ASCII-based Usenet or bulletin board-type system that would allow users to exchange and distribute information nationwide.[iii] This idea evolved quickly, and ambitiously. A strategic plan drawn up in 1992 and submitted in January 1993 states that during 1994, the Center would “create an online magazine with new music essays, articles, editorials, reviews, and discussion areas for professionals and the general public.” Alongside Stone and Subotnick, the early drivers of this interest in technological innovation included fellow board members John Luther Adams, Randall Davidson, Ray Gallon, Eleanor Hovda, Larry Larson, and Pauline Oliveros.

This is not to say that everyone at the AMC was an early adopter; Stone says that one of his main tasks was “to keep driving the idea of an online service forward. While it might seem obvious today, there was significant resistance to an online service in some quarters. Some people felt it would be dehumanizing, expensive. They couldn’t see the coming ubiquity of computers in our daily life.” A key role in maintaining this drive, Steinglass tells me, was played by the AMC’s Executive Director Nancy Clarke. Clarke, a music graduate from Brown University, had worked as a music program specialist at the National Endowment for the Arts before coming to the AMC in 1983. According to Steinglass, Clarke was very interested in technology and was sympathetic to the predictions of Subotnick and others. It was she as much as anyone who pushed for and implemented an online presence for the AMC.

The fruit of these discussions (and several successful funding bids written by Steinglass) was the launch of amc.net in the first half of 1995: the same year as online game-changers such as eBay and Amazon, but months before either. In fact, the AMC’s website (designed by Jeff Harrington) proved to be one of the world’s first for a non-profit service organization, a testament to the vision and ambition of Clarke, Stone, Subotnick, and the rest of the AMC board. By June 12, according to a letter from Clarke to the Mary Flagler Cary Charitable Trust (one of the site’s funders), it was already receiving a respectable 20,000 hits a month.

Yet the goal of a web magazine devoted to contemporary American music—meaning all sorts of non-commercial music, from jazz to experimental, as well as concert music—remained incomplete. In that same June letter, Clarke lists the services amc.net was providing: they include a catalogue of scores held in the AMC’s library; a compendium of creative opportunities (updated daily); listings of jazz managers and record companies; a forthcoming database of composers, scores, performers, and organizations; and that mid-’90s online ubiquity, the guestbook. But no mention of a magazine.

The idea was reinvigorated in 1997. Richard Kessler arrived as the AMC’s new executive director and amplified the need for the AMC—and indeed other music information centers like it—to do more than offer library catalogs and opportunity listings. “We’re supposed to be about advocacy,” is how he describes his thoughts at that time. “And not just [for] composers, but also performers and publishers and the affiliated industry.”[iv] To achieve this, Kessler reasoned, the AMC needed to switch its attention away from its score library and towards ways to give a voice to composers across the spectrum, particularly those working at the margins of the established scene. “There are composers out there who, if they’re not published, people don’t know who they are or what they’re doing,” he says.

Planning documents and funding applications produced shortly after Kessler’s arrival in July 1997 discuss the development of “a twice-monthly web column” that would provide “first person” perspectives on American music by experts and practitioners within the field.[v] At this stage an online magazine does not seem to have been in anyone’s mind, although it was suggested that these columns would be supported by chat forums, links, and other materials. Kessler was clear about what he wanted this publication to do, whatever form it might finally take: it should give “a palpable, well-known voice to the American concert composer, broadly writ. I also wanted it to affirm the existence of those artists. Can you play a part in ensuring that those artists will exist in that [online] space? Not only for people to discover them, but also for the artists themselves to feel like they do exist.”[vi]

By late spring 1998, the “American Music: In the First Person” proposal had evolved into an idea for a multi-part online newsletter. Planning documents from May of that year introduce the idea of a monthly internet-based publication “serving as a communications and media vehicle for new American music.”[vii] These documents are aimed more generally at creating an “information and support center for the 21st century,” but the presence of the magazine is regarded as the “linchpin” in that new program.

After this, things moved quickly. On July 1, a conversation between Kessler and Steve Reich was published on the AMC’s website. This was the first of a series of interviews entitled “Music in the First Person” (and which still continue under the title of “Cover”): it is interesting to note how the “first person” of the title shifted from the author of a critical essay or column, as proposed in May, to the (almost always a composer) subject of an interview. In the same month, Frank J. Oteri was approached—and interviewed—for the job of editor and publisher of the planned magazine, a position he took up in November. NewMusicBox published for the first time the following year, on May 1, 1999, featuring an extended interview with Bang on a Can, an extensive history of composer-led ensembles in America written by Ken Smith, “interactive forums,” news round-ups, and information on recent CD releases.

NMBx has grown up alongside the internet itself, and often been close to its newest developments. The original “Music in the First Person” interviews that began in 1998 were published with audio excerpts as well as text—a heavy load for dial-up era online access. A year later, the April 1, 2000, interview with Meredith Monk introduced video for the first time. And on November 22, 2000, NMBx released its first concert webcast(!). This was a recording, made by then-Associate Editor Jenny Undercofler a week before, but the first live webcast came only a little later, on January 26, 2001—almost eight years before the Berlin Philharmonic’s pioneering Digital Concert Hall. The innovations continued: with its regularly updated content, comments boxes, and obsessive (and often self-referential) hyperlinking, NMBx was a blog almost before such things existed, and certainly long before anyone else was blogging about contemporary concert music. Composer and journalist Kyle Gann and I started our respective blogs in August 2003, although it was a little while before I wrote my first post about new music; Robert Gable beat us both by a month with his aworks blog. In fact, Gable introduced our particular blogospheric niche to the wider world in a post he wrote for NMBx in October, 2004; within weeks, Alex Ross had joined the fun, and the rest is …

Many early innovations were brought to the table by Kessler, who saw potential in webcasts, discussion groups, and more, but this is not to say that the early plans for NMBx didn’t also feature some cute throwbacks. Among them were plans for link exchanges (links to your work having a great deal of currency back then) and for elaborate content-sharing schemes with external providers, before YouTube, Spotify, and SoundCloud embedding made such things meaningless.

From its beginnings, NMBx (and the wider organization of the AMC) was about making composers heard. In the late 1990s, what this meant and how it might be achieved were still seen through a relatively traditional lens. One funding application mentions that in spite of recent advances in technology and society, “many of the challenges that faced the field decades ago remain more or less unchanged.” It goes on to list them:

  • the need for composers to identify and secure steady employment
  • the need to educate audiences and counter narrow or negative perceptions of new music
  • the need to instill institutional confidence about the importance of new music—whether from orchestras, opera companies, publishers, media, or record companies
  • the need to encourage repeat performances of new music
  • the need to secure media coverage of new music[viii]

At this stage, the internet was still regarded by many as a tool for amplifying or augmenting existing models of publication and information sharing. In the same year as NMBx was launched, I joined the New Grove Dictionary of Music as a junior editor and ended up part of the team that oversaw Grove’s transition from 30-volume book to what was then one of the world’s largest online reference works. For several years after 1999, we were focused on making a website that was as much like the book as possible. (This was harder than you would imagine: Grove’s exhaustive use of diacriticals, for example, made even a basic search engine a far from simple task.) As far as maximizing the opportunities of the web went, this extended largely to adding sound files (that were directly analogous to the existing, printed music examples) and hyperlinks (analogous to the existing, printed bibliographies), along with editing and adding to the existing content on a quarterly basis.[ix] My experiences at Grove were echoed in NMBx’s office. The editors had to field questions about whether the magazine would ever be “successful” enough to launch a paper version; one planning document (perhaps trying to assuage the fears of the screen-wary) reassures that “anyone who wishes to download a copy of the magazine for printing and reading at a later date will be able to do so free of charge.”[x]
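To give a flavor of the diacriticals problem: a search engine that wants a query for “Dvorak” to find “Dvořák” has to fold accented characters down to their base letters before comparing. The Python sketch below shows the standard Unicode-normalization trick; it is my illustration of the difficulty, not Grove’s actual engine.

```python
# Diacritic-insensitive matching via Unicode decomposition (NFD).
import unicodedata

def fold(s: str) -> str:
    # Split each character into base letter + combining accents, then
    # drop the accents and lowercase what remains.
    decomposed = unicodedata.normalize("NFD", s)
    return "".join(c for c in decomposed if not unicodedata.combining(c)).lower()

entries = ["Antonín Dvořák", "Béla Bartók", "György Ligeti"]  # toy index

def search(query: str) -> list[str]:
    q = fold(query)
    return [e for e in entries if q in fold(e)]

print(search("dvorak"))  # -> ['Antonín Dvořák']
```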

[Image: clip from Billboard, 2001]

Just a few years into the new century, however, things began to change in ways that hadn’t been anticipated, even by those at the forefront of technological application. Blogging in particular had revealed two powerful and unexpected abilities of the web: to complicate our understanding of truth and to amplify the functions of style, personality, and connections within the new media economy. In the second half of the decade, these were supercharged by the arrival of social media.

This changed what it meant to be heard. Continuing to exist as a composer was no longer about accessing authorial gatekeepers—becoming audible through major performances, broadcasts, and publishing contracts—but about telling personal stories of identity and representation, and about shining a light outside of the mainstream. These changes were anticipated early on at NMBx—the forum discussions from that very first “Bang on a Can” issue centered on the subject of audience engagement—and continue to be reflected in its features.

Oteri and Molly Sheridan, who replaced Undercofler as associate editor in 2001, have guided NMBx to its 20th birthday—a remarkable continuity of leadership for any publication, online or off! Along the way, they have directed many stages in its evolution—including several site redesigns—and launched many innovations. The major facelift came in 2006, and with it a move from monthly “issues” to a rolling schedule of articles and blog posts more in line with the stream-based style of the growing web. By now, NMBx was essential online reading for anyone interested in contemporary American music, and hot on the heels of this redesign came another enduring innovation: the launch of Counterstream Radio in March 2007. Advertised in its press release as “Broadcasting the Music Commercial Radio Tried to Hide from You,” Counterstream caught a mid-noughties trend for online radio stations, but has endured better than some others.

[Image: Sheridan at work on Counterstream Radio]

Yet although Frank (currently composer advocate for New Music USA, in addition to his NMBx work) and Molly (now director of content for the organization more broadly) have always had a strong idea of the best direction for NMBx, the debates in its pages are often sparked by practitioners themselves. (From the beginning, readers were invited to participate in forum discussions around a wide range of field issues or tied directly to individual posts; some of my strongest early memories of NMBx are of the lively conversations that would take place below the line.) To that extent, the site remains focused on what composers want to read; and judging by some of the recurring themes in NMBx’s 20-year archive of articles and blog posts, what composers want to read seems to be: how to get your work heard; how to create (even write for!) an audience; and how to engage with modernity and/or technology.

Even more importantly, there have also been, from the start, debates about representation. Concert music has been slow to confront its problem with race, for example, but race has been part of the conversation at NMBx for years: perhaps appropriately, since as changes in representation come, one must hope that new music will lead them. Musicologist Douglas Shadle’s recent article on “Florence B. Price in the #Blacklivesmatter Era” is a valuable contribution, but even more pertinent has been the voice NMBx has given to living composers of color—from the early interview with Tania León in August 1999 through to the most recent of all, featuring Hannibal Lokumbe, with many opinion pieces, like Anthony Greene’s “What the Optics of New Music Say to Black Composers,” along the way.

In areas like these, NMBx has been led by the compositional community, but it has also been able to reflect that community’s concerns as they have played out in the wider world. For someone like me, involved in the world of new music not as a creator but as a critic, observer, and occasional programmer, features like these are immensely valuable in keeping an eye on my own privilege and in pushing me to open up the margins of my own understanding. Greene’s observation that “new music has done very little to change the expected optics of classical music, which is why new music’s identity problem is what it is today” is a powerful caution against complacency.

To take another example of those optics, the subject of gender representation and the problems faced by women in the contemporary music world were first addressed pre-NMBx, beginning with Richard Kessler’s February 1999 interview with Libby Larsen. They have remained in the foreground ever since, suggesting that the question remains current, but very much unresolved. A search for “gender” in the NMBx archive brings up almost 200 items, yet even this isn’t everything—it leaves out Rob Deemer’s widely read 2012 list of women composers, for example. (Forty-one items have also been tagged with the word “diversity,” though this list is not a free-text search and only goes back to 2012.) The debates at NMBx wove in and out of conversations in the wider world. In 2002, guest editor Lara Pellegrinelli—who had recently written for the Village Voice about the lack of women musicians involved in Jazz at Lincoln Center—published a series of posts by women musicians, each headed “How does gender affect your music?” (Jamie Baum’s response: “When asked if gender has had an influence on my compositions, my reaction was of surprise—surprise that I hadn’t been asked that question before, not in 20 years of performing.”) Blogger Lisa Hirsch’s extended article of 2008, “Lend Me a Pick Ax: The Slow Dismantling of the Compositional Gender Divide,” added essential concert and interview data to the debate, highlighting the difference between post-feminist fantasy and harsh reality; and composer Emily Doolittle, with Neil Banas, offered an interactive model to highlight “The Long-term Effects of Gender Discriminatory Programming.” A widely derided 2015 column in the conservative British magazine The Spectator (“There’s a Good Reason Why There Are No Great Female Composers”) prompted a suitably damning response from blogger Emily E. Hogstad (“Five Takeaways from the Conversation on Female Composers”) that deftly drew together several moments across both new and historical music; and in the wake of 2012’s International Women’s Day, composer Amy Beth Kirsten enriched the discussion with a call for the death of the “woman composer.” This last article attracted more than 100 comments and extensive debate, but the one that attracted so much interest it briefly crashed NMBx was Ellen McSweeney’s “The Power List: Why Women Aren’t Equals in New Music Leadership and Innovation,” a nuanced response to Sheryl Sandberg’s Lean In and its applicability to the world of new music. Tying questions of both race and gender together was Elizabeth A. Baker’s remarkable intersectional cry, “Ain’t I a Woman Too,” from August last year.

Perhaps most indicative of all was Alex Temple’s 2013 piece, “I’m a Trans Composer. What the Hell Does That Mean?” Temple’s article (originally published on her own website) is explicitly a follow-up to other NMBx contributions on gender, two of which are mentioned in its opening paragraph. It adds layers of nuance to the debate, both around the question of male/female binarism, as well as the question of whether compositional style can be gendered. No, says Temple to this latter, but:

I have noticed that certain specific attitudes toward music seem to correlate with gender … While I don’t think of my work as specifically female, I do think of it as specifically genderqueer. Just as I often feel like I’m standing outside the world of gendered meanings, aware of them but never seeing them as inevitable natural facts like so many humans seem to do, I also tend to feel like I’m standing outside the world of artistic meanings.

In its combination of raw experience and careful self-reflection, Temple’s article is exemplary but not unique to NMBx; an equally honest and unmissable piece, this time on musico-racial identity, is Eugene Holley, Jr.’s “My Bill Evans Problem.” For those of us—including me, I confess—who have found ourselves under-informed about trans issues, Temple’s article provided a welcome introduction: not only to the terms of that discussion, but also to its possible ramifications for artistic creativity and self-expression (articles published since, including Cas Martin’s “An Ode to Pride Month,” have added layers of their own).

The continuing presence of articles like these brings us back to the core purpose of NMBx as the AMC envisioned it back in 1997: to allow composers to feel like they exist. In 2019 that is not only a question of allowing composers to feel like they exist as composers, within the framework of institutional support and recognition, but as people, within the framework of a more humane, more complete understanding of what we are as a society. In recent years, one or two online publications have found ways to discuss difficult social questions within the context of contemporary music; it’s rarer still to see it done with the same level of peer-to-peer sharing of knowledge and experience. NMBx, built in the best days of the web, was there before them all.


In the twenty or so years since we started to pay attention to it, the internet has concatenated every part of our private and public lives. Art, culture, sport, business, and gossip no longer appear separately, like supplements in our weekend newspapers, but together, on the same screen as dinner plans, memes, and conversations with our friends. Since the advent of Twitter, different things have become even more closely braided within the same scroll-stream, units differentiated only by the volume at which they declare themselves from our screens: #ClimateCatastrophe, #FiveJobsIHaveHad, #WorldPenguinDay read three hashtags in close proximity on my TweetDeck right now.

This is not altogether a bad thing. In the 1980s and ’90s, before this whole online thing really took off, musicologists and critics would fret about the disassociation of classical “art” music from life, and of musicology from society. Popular music was better at inserting itself into and complementing people’s lives. Film, literature, and theater were also good at it. Yet music, it was argued, was somehow still regarded in the abstract. It was partly in response to this that the scholarly movement that came to be known as New Musicology was born, having as its aim the study of music within its social context, music as a social creation. Today, music inhabits very much the same space as everything else in our lives (just as music is increasingly made out of the components of those lives). NMBx’s blogs and features, which place the day-to-day stories of actual new music composers at the center of the discussion, are a perfect reflection of this. The internet, with its indifferent reframing of everything as #content, has played no small role in this change in how we see the world. Few people talk of New Musicology now. Not because its premises were wrong, but because they have become standard practice. In this, as in so much else, NewMusicBox has long been ahead of the curve. Here’s to existing, always.


Thanks to Jeff Harrington, Richard Kessler, Debbie Steinglass, and Carl Stone for sharing with me their recollections and documentation of the early days of NMBx and amc.net.

[i] Quoted in American Music Center, 1992: “The Arts Forward Fund: Request for Proposal,” n.p. (“Proposal Summary”).

[ii] Deborah Steinglass, email to the author, April 5, 2019. According to Steinglass, Subotnick “also talked about the future of transportation, and how the US would have highways filled with electric vehicles none of us would actually have to drive.”

[iii] Carl Stone, email to the author, April 10, 2019.

[iv] Richard Kessler, Skype interview with the author, April 5, 2019.

[v] I am grateful to Richard Kessler for sharing these and other documents with me, and for permission to quote from them.

[vi] Kessler, Skype interview.

[vii] American Music Center, 1998: “An Information & Support Center for the 21st Century: An Action Plan.”

[viii] American Music Center, 2000: “A Proposal to the William and Flora Hewlett Foundation to Support an Online Information and Communications Infrastructure for New American Music,” page 10.

[ix] I am happy to report that since my time at Grove – or Oxford Music Online as it is now known – these ambitions have expanded greatly.

[x] American Music Center, “An Information & Support Center for the 21st Century,” page 5.

Live Streaming 104: Post Stream, Graphics, Licensing, and Live Streaming Through Collaboration

Live streaming is trending, feeding the algorithms, and connecting the world in new ways. If you are already putting forth the effort to create a musical production of any kind, adding another technical layer is very much worth it to share your music, create a community, and market your product. Plus, you will end up with excellent content for blogging, your portfolio, submitting to competitions, and consistent posting to your social media channels.

In my previous three posts, we covered the why, where, and how of successful live streaming. This final article is a sort of postlude, to discuss post-stream content benefits, to clarify some concerns about licensing, copyright, ownership, and agreements, and to encourage you to think beyond the scope of what you are able to do by yourself.

Post-Stream Benefits

There is a segment in Live Streaming 101 about post-stream benefits, but I think it is worth repeating. Once your stream is over, you will have an HD video (saved to your mobile device, camera, or computer) and synced audio. If you have an engineer helping you out, you can remix and master the live audio and re-sync it to the video pretty easily at this point as well.
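If you go the DIY route for that re-sync, the free command-line tool ffmpeg can swap the mastered audio into the video without re-encoding the picture. Here is a minimal sketch driven from Python, assuming ffmpeg is installed and on your PATH; the filenames are placeholders:

```python
# Minimal sketch: mux remastered audio over the live-stream video with ffmpeg.
# Assumes ffmpeg is installed and on your PATH; filenames are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "concert_stream.mp4",    # video saved from the live stream
        "-i", "concert_mastered.wav",  # remixed and mastered audio
        "-map", "0:v",                 # keep the video from the first input
        "-map", "1:a",                 # take the audio from the second input
        "-c:v", "copy",                # copy the picture untouched (no quality loss)
        "-c:a", "aac",                 # encode audio for broad compatibility
        "-shortest",                   # end at the shorter of the two inputs
        "concert_final.mp4",
    ],
    check=True,
)
```

The same tool can later cut the polished video into per-piece segments (via its -ss and -to options), which is exactly the workflow described below.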

Once the video is polished, if possible, I recommend segmenting the concert by piece and creating a separate video for each piece. I recently did this with three of my short piano pieces from a February 2018 concert at Kalamazoo College, presented with Aepex Contemporary Performance. Instead of lumping them into one video, I cut them into three shorter videos. Here’s what they look like:

Glass Study One
Glass Study Two
Glass Study Three

Having shorter content gives me three opportunities to repost to Facebook and Twitter, three opportunities to tag and mention my many collaborators (Kalamazoo College, Arts Council of Greater Kalamazoo, Aepex Contemporary Performance, Justin Snyder), and listenable examples of my music. I could even make a YouTube playlist of all three, and add to it if I make more videos in the future.

If we quickly dissect the social media impact of three videos, with four partners we can tag, we get 24 sharing points (three videos, each tagging four pages, on two social media platforms), which will only be multiplied by the algorithms of social media and the shares made by your friends. These videos can also be featured on your website, and—as mentioned before—emailed to your subscribers. Segmenting videos and delaying the release also allows you to be consistent with your social media presence—taking a singular event and spreading the content out over many months.
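If it helps to see that math spelled out, here is the same count as a tiny script (the numbers are just the ones from my example):

```python
# The sharing-point arithmetic from the example above.
videos = 3     # segmented concert videos
partners = 4   # collaborator pages you can tag
platforms = 2  # e.g., Facebook and Twitter

print(videos * partners * platforms)  # 24 sharing points
```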

There are many ways to spice up your live stream in post-production and they usually include graphics. You can do anything, but the standard for concerts seems to be 1.) a title slide or sequence of title slides, 2.) a bar or graphic in the lower third of the video image that you can use to denote the name/movement of a piece and the performers playing, and 3.) closing slides for crediting performers, funding organizations, and your website. For all of these images, make graphic files the same size as your video resolution.

This brings us to creating graphics for your stream.

Graphics for streaming and post-stream production

Inserting graphic overlays and title slides into a live stream is really only possible using an external encoding program like OBS, Switcher Go, or some other non-mobile tech. It’s a great way to add a new level of professionalism; you can have the concert poster start the stream, followed by composer/performer/piece title bars that overlay the video image, like in this live stream I did for The Gilmore.

To create these graphics—specifically the overlay bar—you need a design program that can create a transparent PNG. I use Canva, a simple online graphic design program. (I do believe that the transparent PNG option is a paid feature.) Once you get past the title slides, designing a piece/composer/performer bar for the lower third of the screen is really easy. My recommendation is that you design it in a 1920 x 1080 pixel format, which is standard full HD resolution, so when you load the graphics into your streaming software, they automatically fit the HD video image. To create the lower third bar effect, use the same resolution, create your lower third image, then download it with a transparent background in PNG format. As always, do your research and make sure you know what your video image resolution is.
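If you prefer scripting to a design tool, the same kind of lower-third bar can be generated with Python’s Pillow imaging library. A minimal sketch, with placeholder text, colors, and coordinates:

```python
# Sketch: generate a 1920x1080 transparent PNG lower-third bar with Pillow.
# pip install Pillow; all text, colors, and coordinates are placeholders.
from PIL import Image, ImageDraw

WIDTH, HEIGHT = 1920, 1080  # match your stream's video resolution

img = Image.new("RGBA", (WIDTH, HEIGHT), (0, 0, 0, 0))  # fully transparent canvas
draw = ImageDraw.Draw(img)

# Semi-opaque bar across the lower third of the frame.
draw.rectangle([(0, 820), (WIDTH, 1000)], fill=(15, 15, 15, 200))

# Piece / composer / performer text (Pillow's default font, for simplicity).
draw.text((60, 890), "Glass Study One | composer and performer names here",
          fill=(255, 255, 255, 255))

img.save("lower_third.png")  # load as an overlay in OBS or your video editor
```

Because the canvas matches the 1920 x 1080 video frame, the PNG drops into your streaming software or editor without any scaling.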

If you don’t have encoder software that allows you to import graphic overlays during the stream, take the time to edit your video post-stream, adding these graphics (like I did above) in your video editing software to make your videos look awesome.

Licensing, ownership, and approval

As with all non-public domain music, there are some licensing and copyright issues that can arise with live streaming new music. Questions about this were posed to me at my presentation at the New Music Gathering in Boston this past spring, and thankfully, after an interview with Chris McCormick at BMI, I now understand the concerns that can arise and how to address them properly and legally.

In short, you need to get approval from all composers represented on your concert live stream, and all performers who will be part of your live stream. I recommend drafting up a simple letter of agreement for composers and performers detailing 1.) how much they will be paid, 2.) how many services are expected (rehearsals and performances), and 3.) that the performance will be recorded and streamed live, with all planned future uses outlined. It’s important to note that the rights to produce a piece can be controlled by 1.) the composer and publisher or 2.) just the composer. If you are unsure, the composers involved should know whether or not their publisher needs to be included.

When your video is uploaded to YouTube, it becomes YouTube’s responsibility to pay the PROs (Performing Rights Organizations, such as BMI and ASCAP) based on streaming data that it sends quarterly. If you are streaming the music of other composers (which you should already have approval for anyway), YouTube will typically direct the streaming fees to the right places. Of course, this works best for pop acts that accrue more streams and have larger representation. After speaking with Chris at BMI, I learned that Twitter and Facebook are currently working on developing their licenses with the PROs, whereas YouTube has a pretty robust system already, so we may see some future changes in how we credit and control intellectual property in live streams.

Thinking beyond your limitations

After reading these four articles, I hope you have gained a deeper understanding of where to begin your live streaming journey, how to do the necessary research, and how to ask the right questions to start your own streaming. If you get hooked like I did, consider expanding your talents and going a little more pro.

When I started streaming with The Gilmore, I was fortunate to get video work from our upstairs neighbors in Kalamazoo, the Public Media Network. They had the equipment and know-how—all we had to provide was clean audio and some direction. After years of cultivation, we have a really great partnership and, through practice, have learned how to get our tech working in the best possible ways to make some great streams. After visiting the streaming room in the basement of the Detroit Symphony Orchestra’s hall, I saw that a high-quality stream needs an entire team of people; early on, the DSO partnered with Detroit Public TV to make it happen. It made me wonder how many other public media groups are out there with camera equipment and know-how, and how many would be interested in collaborating with local arts groups.

The point of my short story is to encourage you to think of ways to leverage your network to build partnerships and share resources for mutual benefit. When I started working with the Public Media Network in Kalamazoo, we benefited from their robotically controlled multi-camera setup and staff expertise, and they received artistic content for their cable channels and community exposure. It never hurts to seek out local groups and ask. You may be surprised what can come together.

Another option might be to build a sort of streaming consortium that would allow you to pool resources to buy a rig that would work for multiple groups, and you could come together to produce each other’s work.

So don’t limit yourself just because you only have a mobile phone set-up. If you are interested in expanding, seek out collaborators in your community!

End Credits

Thank you for reading this far. Special thanks to my employers, The Gilmore and Kalamazoo College; my video partners Public Media Network; and the New Music Gathering and NewMusicBox for helping me hone my thoughts. Also, props to Garrett Hope of the Portfolio Composer for hosting my first public appearance (on his podcast) speaking about live streaming.

As you can tell, I love talking about this stuff, so please reach out:

Twitter: @schumakera
Facebook
Or through my website: www.adamschumaker.com

Live Streaming 103: DIY Live Stream Tech

During the month of June, I have been writing about live streaming your new music concerts. Live Streaming 101 dealt with the “why” of live streaming. Live Streaming 102 discussed where to host your stream. This week’s installment will discuss some technical requirements for live streaming, but without diving in so deep that you get lost in the ones and zeros of the codec. By the end of this post, however, you should be armed with the basic skills and knowledge required to get a live stream up and running.

Site Preparation

The first thing any stream needs is a reliable, speedy internet connection. To simplify things, here is an internet checklist for your live streaming venue:

1. Get the WiFi/internet login information
2. While you’re at it, if applicable, get the contact info for the IT person or team
3. Get online and do a speed test (search “speed test” in a browser and use Google’s built-in test)
4. Do a stream test

What matters most when it comes to internet speed for this application is the upload speed. This article has a great, in-depth description of live streaming and internet speeds. The gist is that higher-quality streams carry more information (video resolution, audio bit depth) and thus need higher upload speeds.

Internet speed test
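If you want to script that check, the third-party speedtest-cli Python package wraps the same measurement. A minimal sketch:

```python
# Sketch: measure upload speed with the third-party speedtest-cli package.
# pip install speedtest-cli
import speedtest

st = speedtest.Speedtest()
st.get_best_server()                   # pick the nearest test server
upload_mbps = st.upload() / 1_000_000  # upload() returns bits per second

print(f"Upload speed: {upload_mbps:.1f} Mbps")
```

Whatever tool you use, leave plenty of headroom: your stream’s combined audio and video bitrate should sit comfortably below the measured upload speed.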

If you decide not to use mobile devices or WiFi (which inherently run more risks than a hard-wired connection), you should find an ethernet port and work with IT to make sure you have access. Some schools, companies, and public venues have firewalls built into their internet connections, so it’s important to learn about your venue and to make sure you can get to your streaming destination as described in the previous article.

Apart from the internet, it’s also important to test the lighting, sound, and proposed camera locations for your live stream. If you are working primarily with mobile devices, finding camera points close to the stage—but not blocking audience view—will likely be ideal. If you are working with external cameras and a separate encoder, you’ll want a room outside of the hall to run cables to, where your video team can talk freely, and where any computer keys or cooling fan noises (yes, this happened to me during this stream) will not distract from the performance.

iOS and Mobile Tech Camera Set Ups

For the beginner, starting with mobile technologies is the easiest way to go live. Facebook, Twitter, Instagram, and YouTube all have this option in their mobile apps. The resolution and FPS of recent smartphone cameras are high enough to make a nice looking video. You don’t even need the latest iPhone to stream in HD!

But there are two downsides to streaming with a single mobile device: 1.) the variety of shots is nil, so when you set your shot, make sure it is up close and tightly framed, with your subjects in clear view; and 2.) research and listening suggest smartphone microphones are optimized for the human voice, not your music, so I recommend adding a smartphone microphone or a compatible audio interface to connect your mics to.

Next, I will take you through some specific tech I have either used or researched with the help of some tech experts from Sweetwater. I have also included Amazon affiliate links where applicable, which will support New Music USA. (If you are shopping on Amazon, you might also consider using Amazon Smile to support their work.)

Mobile Tech Highlights: smartphone audio

Disclaimer: I have received no compensation in exchange for recommending any of the following products. I simply either have used the product itself or it seems well suited to the DIY Live Streaming specs I researched while planning these articles.

For mobile phone audio, I always recommend a microphone or an interface that can handle a stereo signal. Nothing sounds more natural than a stereo signal from a good microphone. (We have two ears, right?) There are piles of mono options, but I wouldn’t recommend any of them for live performance streaming.

Shure MV88: stereo mic with multiple patterns, gain control, etc.

Shure MV88

In my own work, I have been using this microphone and, overall, I am pleased. It plugs right into the lightning port of an iPhone, and it has piles of control options via its free app, Motiv Audio.

For zero hassle, this is a great option. It does require the phone to be set to “do not disturb” and “airplane mode” so that cell signals don’t interfere with the electronics. This is not hypothetical. Texts and calls do weird things to the recordings. WiFi can still work in this scenario.

Tascam iXR

Tascam iXR

During my research, I was looking for a two-channel interface that could work with mobile technology. Thanks to my friend and sales rep Vern at Sweetwater, we came up with two solid options. The Tascam was first on the list. The interface boasts connectivity to your iPhone or iPad directly via USB and the lightning port on your iOS device. With this option, you can use your favorite stereo mic pair and send your mobile device an excellent audio signal.

With interfaces, it is important to remember that they cannot charge your mobile devices. Make sure your devices are fully charged for live streaming!

Presonus AudioBox iTwo

Presonus AudioBox

Presonus also makes an iOS-compatible, two-channel interface. I have yet to try out this unit, but I do know that Presonus is an excellent company with great, affordable products. I once used an interface of theirs for ten years before I finally upgraded, and I was still able to resell the device! The iTwo also has somewhat better reviews overall than the Tascam.

Whichever way you go, make sure you talk to a sales rep about compatibility with your video device.

Switcher Go

I came across this app and subscription service during my research, and it is extremely appealing. For a relatively affordable monthly cost, you can use multiple iOS devices to create a multi-camera shoot. This is a pretty attractive option when you’re ready to take the next step and make your live stream productions look more professional by using multiple camera angles, but are not yet ready to invest piles of money into dedicated cameras, switchers, and computers or encoders.

The Switcher Go blog explains the basic functions of the product. For $29 a month (just do a month at a time if you are not streaming every month), you have access to their software, which allows you to connect as many as nine iPhones or iPads as external cameras, wirelessly. With a few friends (who have iPhones or iPads) and some mic stands and mounts, you can create a really professional looking multicam production with an external audio source and other cool abilities.

Stands and Mounts for Mobile Devices

With all of these mobile-based solutions, there are two things you cannot forget: a stand and a mount for your device. I recommend using a good tripod microphone stand with a boom arm, and a phone mount of your liking. (There are so many to choose from.)

My favorite microphone stand is the K&M Tripod microphone stand. I have personally used these for all sorts of applications, and they have never let me down. One of them is almost 15 years old.

There are so many accessories for mobile devices it’s almost obscene. My personal favorite device mount is the Accessory Basics, but I’ll trust that you can do your own homework. When choosing, consider compatibility with your device, and make sure the rear camera and the lightning port are accessible while mounted so you can plug in your external mic and still get a good shot. If you are using an iPad, the same considerations apply.

External Camera and Encoder Set Ups

If you are not inclined to use mobile technology, there are other ways to connect external cameras to encoder hardware or software, and then send that signal to a streaming platform. For the beginner, I find this more problematic as it typically requires a computer, more computing power, and—if you want a multi-camera shoot—more hardware.

This is not to say that you shouldn’t do it! As you do your research, just be aware of the cost concerns to get a signal similar to what you could get with a mobile device. External cameras sending video to a computer will also typically need external audio.

External Cameras and Encoder Highlights

Zoom Q2n (audio & video solution)

Zoom has been a long-time player in the mobile A/V world. The Zoom Q2n is a microphone stand-mountable camera and X/Y stereo microphone all-in-one. For a relatively low cost, you can have video and audio going to a computer for streaming via the HDMI out. As always, be wary of adapters if your computer is not already designed to accept an HDMI connection (which carries both video and audio).

Open Broadcaster Software (encoding software)

Some external cameras are able to connect to Facebook Live via the “create” link (as discussed in Live Streaming 102), but if they can’t, there is a simple and free solution. Open Broadcaster Software (OBS) is a great program for Mac and PC that allows you to take incoming video signals and broadcast them to a streaming platform such as YouTube or Facebook. Although it has a simple interface, OBS has many options for intake and output that make it a versatile and useful program. With OBS you can have multiple video sources, separate audio sources (if needed), graphics, and other media inserted into your stream. Please note that the higher the quality of the video you are working with, the more processing power you will need from your computer.

Side note: as mentioned above when discussing the Zoom Q2n, some cameras will not simply send an HDMI signal directly into your computer. I encountered this when trying to send a GoPro HDMI signal to my 2012 MacBook. Without something like a game-capture HDMI-to-USB 3.0 converter, there is no way my MacBook would accept an HDMI signal. Not all camera/computer setups are like this, so it’s important to do your research.

Look for Future Tech

Since preparing my presentation on live streaming for the New Music Gathering, my Facebook has been bombarded with ads from companies trying to sell me live streaming hardware and software. We are definitely in the middle of a boom of new live streaming technologies, which is exciting. So before you commit to a specific system, see what is out there that might best fit your needs, budget, and existing equipment.

Test Everything, Then Test Again

I cannot stress enough the need to test all components of your stream before the day of the event. Make sure audio, video, internet connection, and the output to your specified platforms all work, because usually something will go wrong and you will need the reassurance that you had it working before! Here’s a simple checklist:

1. Test your internet connection and speed
2. Test audio and video sync, shots, and levels
3. Test the connection to your streaming host/platform
4. Test with an actual stream; make sure the audio that hits the internet sounds like the audio in the room, and that your video is clear and not choppy!
5. Check all connections and settings again before the event

In my final article next week, I will discuss live streaming with collaborators (and how to think about building those relationships), best practices for use of your video post-stream, easy ways to achieve graphic overlays and title slides, licensing and copyright issues, and ways to build your live streaming audience.

Live Streaming 102: Hosting, Preparing, and Advertising Your Live Stream

For those who are ready to add live streaming to their concert presentations, there are a pile of technical preparations and considerations to think through. Before we delve into the technology behind live streaming, let’s look at where it will be hosted.

Hosting your live video

The goal of live streaming is to reach people. To reach people, go where the people are. More specifically, go where your people are. Make it easy for your existing audience and your potential fans to find your live video by hosting it where they gather, and linking the video to as many other locations as possible. Personally, I have streamed to Facebook Live, YouTube, UStream, and Livestream.com. The DIY composer in me suggests you go with the free services like Facebook and YouTube. The tech-geek administrator in me likes how Livestream.com works. So let’s start with the free services. But before we do that, it’s important to know a bit of technical lingo.

Streaming Connections

In the next installment, we will look at how to stream with iOS and other mobile devices and beyond. Many simple stream connections can be created using just the phone in your pocket, but it’s helpful to be familiar with a different type of connection: RTMP.

Real Time Messaging Protocol (RTMP) is simply the standard way encoders send live audio and video to streaming platforms over the internet. All streaming to Facebook, YouTube, and Twitter can be done via RTMP. Regardless of what you are using as an encoder, all you need for RTMP streaming is the “server URL” and the “Stream name/key.”
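To make that concrete, here is a minimal sketch of an RTMP push using the free ffmpeg tool, driven from Python. The server URL shown is YouTube’s public ingest address as an example; the stream key is a placeholder you would copy from your platform’s live dashboard:

```python
# Sketch: push video to an RTMP ingest point via ffmpeg.
# The server URL is an example; the stream key is a placeholder.
import subprocess

SERVER_URL = "rtmp://a.rtmp.youtube.com/live2"  # example: YouTube ingest
STREAM_KEY = "xxxx-xxxx-xxxx-xxxx"              # your private stream key

subprocess.run(
    [
        "ffmpeg",
        "-re",                # read input at its native frame rate (live pacing)
        "-i", "concert.mp4",  # source file (a capture device also works)
        "-c:v", "libx264",    # H.264 video, the common denominator for RTMP
        "-preset", "veryfast",
        "-b:v", "3000k",      # video bitrate; keep well below your upload speed
        "-c:a", "aac",
        "-b:a", "160k",
        "-f", "flv",          # RTMP expects an FLV container
        f"{SERVER_URL}/{STREAM_KEY}",
    ],
    check=True,
)
```

Dedicated encoders like OBS do essentially this under the hood; the platform menus below are simply where you find those two values.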

The YouTube menu [YouTube → Creator Studio → Live Streaming] looks like this:

YouTube stream screen

The Facebook menu looks like this:

Facebook create screen

Facebook, YouTube, and paid providers all have RTMP connections. This means that you can stream to the platform without a mobile device. Instead, you can use external cameras that send video to external encoders for fancier multi-camera systems. For the novice, try using a mobile device for your first live streaming projects. For those seeking multi-camera and alternative connections, understand which platforms are able to connect to your encoder via RTMP.

Facebook Live

Facebook Live is a go-to streaming platform because just about everyone is on it. With close to 2 billion users, chances are most of your friends are on it. Despite recent changes in the algorithm, the delivery system is effective. Plus, people know how to find you and your page, and you, or a social media assistant, can feverishly share the link with other pages once it goes live.

In my experience, although Facebook Live is great at reaching people, the watch times are usually less than those captured via other platforms. Maybe it’s because we are all trained to scroll through our Facebook feeds for the next thing, or maybe the compression Facebook applies to the live videos is less appealing. Average Facebook view times have been clocking in at around one minute per view. I like to think of Facebook Live as pure marketing rather than a true audience connection.

For best practices, including tech specs (which will be covered next week), read this Facebook article.

YouTube

We typically don’t think of YouTube as a social network, but it is. If you have taken time to recruit subscribers to your channel, they will typically receive notifications when you are live, depending on their personal settings.

YouTube has many perks over Facebook:

1.) When you drive people to your YouTube live link via your other social media accounts and email, they statistically stay longer.

2.) Unlike Facebook, you can embed your YouTube live video into your website, using the link found in the “advanced settings” of the live page.

3.) YouTube can stream at higher resolutions than Facebook’s 720p. Live streams are also saved to your YouTube channel for future views, embedding, and sharing.

Twitter & Instagram

These social media platforms are less known for streaming, and honestly less known to me. Perhaps these platforms are ripe with opportunity! For Twitter, read How to create live videos on Twitter. For RTMP streaming to Twitter, read this article. Twitter can also connect directly with Periscope, a streaming network. For Instagram, the live video feature is part of the “stories” section of the app.

All of these networks are less known for music, but as expressed previously, if your audience is there, then by all means stream there!

Paid Hosting Services

When I started streaming with The Gilmore Keyboard Festival, we chose Livestream.com as our video streaming host. This was partially because the Detroit Symphony Orchestra and several other classical music organizations used Livestream.com as their streaming host. At the time, Facebook Live and other young streaming platforms were difficult to access when using a more traditional camera setup, instead of mobile technology.

There are a few good reasons businesses use Livestream.com as a hosting service.

1.) These platforms offer excellent analytics, including time viewed; regions down to the country, state, and city; how the stream was accessed; and what kind of device the stream was viewed on.

2.) They can store videos, published or unpublished, and allow the embedding of these videos onto websites.

3.) Simulcasting, which deserves its own header.

Simulcasting

Simulcasting is simply the simultaneous broadcasting of the video signal to multiple destinations. Social media platforms do not simulcast. Paid hosting services can. Using Livestream.com as an example, it is possible to send multiple video streams to multiple destinations at the same time. This multiplies the reach of a live stream by however many simulcasts you have access to.
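For the curious, here is roughly what “hard-wiring” a simulcast looks like: ffmpeg’s tee muxer encodes the stream once and duplicates it to several RTMP destinations. A sketch with placeholder stream keys; note that your upload bandwidth has to cover every destination at once:

```python
# Sketch: a DIY simulcast using ffmpeg's "tee" muxer, which encodes once
# and duplicates the stream to several RTMP destinations (placeholder keys).
import subprocess

destinations = [
    "rtmp://a.rtmp.youtube.com/live2/YOUR-YOUTUBE-KEY",
    "rtmps://live-api-s.facebook.com:443/rtmp/YOUR-FACEBOOK-KEY",
]
tee_output = "|".join(f"[f=flv]{url}" for url in destinations)

subprocess.run(
    [
        "ffmpeg",
        "-re", "-i", "concert.mp4",
        "-c:v", "libx264", "-preset", "veryfast", "-b:v", "3000k",
        "-c:a", "aac", "-b:a", "160k",
        "-flags", "+global_header",   # often required when teeing to FLV/RTMP
        "-f", "tee", "-map", "0:v", "-map", "0:a",
        tee_output,
    ],
    check=True,
)
```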

For my work with The Gilmore, we try to gain distribution through the Facebook pages of the artist, the venue, and presenting partners, as well as our own, resulting in a minimum of four simultaneous live streams. The increase in viewership is impressive.

Interestingly, Livestream.com does not allow simulcasting to YouTube and Facebook at the same time, but it does allow YouTube and Twitter together, or multiple Facebook pages. It’s also important to note that Livestream.com doesn’t allow RTMP connections without subscribing at one of the highest levels.

In the next article, we will briefly touch upon hard-wiring a simulcast, if you want to step up your live stream reach without purchasing a broadcaster subscription service to do the simulcasting for you.

Reaching Your Audience

With marketing, usually the more you can do, the better. Assuming you are a one-person show or a small team, I recommend marketing your live stream alongside your live performance, in as many digital places as you can.

Facebook Events are a great way to connect with your friends on Facebook and to remind them that they can join the event from afar, since the event will notify them of the live stream. The live stream link and info can also be posted inside the event. Facebook also keeps great stats. More importantly, if you can get a few people to co-host your event on Facebook, you will greatly expand your reach and your ability to invite audiences in your hosts’ networks, and you will also have multiple destinations for your stream (on the co-hosts’ pages).

Twitter is more immediate, so a little pre-tweeting and then live tweeting during your event, with links to the stream, can help move online traffic to your live stream.

Email is still a powerful way to reach people. If you haven’t already started a virtual mailing list, now is a great time to do so! Emailing your audience about the concert, including the live stream link prior to and near the time of the event, will help bolster your stream audience.

Blog about your concert and live stream. If you cannot get an interview on a friend’s blog, a local media interview (radio/tv/podcast), or an article in a reputable publication, then you must do it yourself! Create an interesting discussion about your upcoming concert, and make sure to include streaming links and details on how and where to watch the live stream.

Recap

There are many places to host your live stream. It can be overwhelming. I recommend you find the platform where your audience is, and host your streams there so they are accessible to the most people who support you. Then make sure you review the ins and outs of creating a live stream on your desired platform. Next week, we will cover the technology behind DIY live streaming, including some tech suggestions that I have personally used or researched.

Live Streaming 101: Why Live Stream?

When I jumped into live streaming in 2013, I had no idea what I was doing—and my first stream featured a world-renowned pianist performing in a packed hall. The Gilmore Keyboard Festival, where I am on staff, was presenting a concert to the community featuring Kirill Gerstein. Because the concert was being offered free to the public, someone at a staff meeting asked, “Can we live stream this concert?” And from the silence, I blurted out, “Yes!”

You can watch segments of the 4:3 / 480p video here:

At the time, the Detroit Symphony Orchestra had been live streaming concerts for two years. Today, they are a leader in classical music live streaming, presenting around 30 concerts a year online. At The Gilmore, however, live streaming repeatedly brought up one major concern. This concern resonated throughout the office, though I didn’t believe it to be true:

If we offer the concert for free online, won’t it negatively impact ticket sales?

Despite this resistance, streaming a concert live to the internet became a small obsession of mine. With some help from the local Public Media Network, great audio engineers, and the world-class performances at The Gilmore, I managed to get our concerts online, with high-quality audio and multi-camera shoots.

As I gained experience managing small teams of videographers and audio engineers, I learned the ins and outs of the technology, the philosophy, and the social media impact. I even found ways to live stream my own new music concerts—without breaking the bank.

Building off my presentation at the New Music Gathering in Boston this year, during the month of June, I will explore why to live stream, preparing and advertising a live stream, the technology behind various live streaming set-ups, and how to begin collaborating with individuals or organizations to maximize reach and impact.

Why live stream?

If you somehow missed the memo, video consumption is, and has been, on the rise. In 2017, Facebook Live broadcasts quadrupled, and 3.25 billion hours of video were watched on YouTube each month. From a marketing perspective, having video content is a no-brainer. But live streaming is a little different.

Live streaming—the act of broadcasting an event in real-time—gives us the unique opportunity to capitalize on the energy of a live performance, while enabling others outside of our community to participate. With advances in technology, it has also become increasingly easy to broadcast live video to the internet.

By live streaming our music, we gain the following:

  1. Expanded reach and visibility (marketing, social media, locations, networks)
  2. Accessibility for both our current audience and potential future audiences
  3. Increased trust and loyalty from our fans
  4. Excellent content for later use (YouTube channel, website, grant proposals, sharing)

But what about the impact on ticket sales? This is where you need to trust your audience. I would argue that most people are cognizant of the uniqueness of a live concert experience. Given a choice and with no outside barriers, most people would choose a live event over a video version of it. By offering live streams of your events at no charge, you are trusting that the audience members you have will continue to buy tickets if they can. The benefit of the stream then becomes the ability to engage the dedicated fans who just couldn’t be there (thus allowing them to continue to participate in the experience), while also potentially reaching future audience members who are not fans—yet.

FOMO and concert attendance

Although research is limited, current case studies and surveys point to the same conclusion: after watching a live-streamed concert, viewers are more likely to purchase tickets to future concerts. It’s like giving away a sample of something delicious at Costco.

It’s important to note that many reports come from service providers like Livestream.com, who are trying to sell their services. Still, according to their 2017 survey, “67% of viewers are more likely to buy a ticket to a similar event after watching a live video.” The idea is simple: viewing a great live stream allows current fans to engage with a concert they would probably not have been able to attend otherwise, and allows potential fans to get a sample of a live event they may want to attend in the future. You are building community.

You’re also working off two sides of FOMO. If you’ve managed to avoid current slang and abbreviations, FOMO is the “fear of missing out.” Regardless of what one thinks about FOMO’s powers of motivation, it is a factor at work that everyone on social media experiences at some level. By live streaming your concerts, you can increase FOMO for those who are on the fence about attending your upcoming programming. On the other hand, you may also be able to dissipate some of those FOMO feelings via the live stream by giving your dedicated fans a way to participate, despite not being there.

Post-Stream Benefits

After the live-stream event (and the real-life concert), the video lives on, and some algorithms, like those on Facebook, perpetuate the views for a short while, reminding people of what they missed the night before. If you captured audience emails at your concert, you could send attendees a thank you email with a link to the video. You can also send the video to friends and colleagues who couldn’t be there.

The most important post-stream benefit is the content you’ve created. If you get the chance to clean and mix the audio and re-sync to the video, you have an entire concert to segment into individual pieces for your YouTube channel, your website, portfolio submissions, etc.

Recommendations:

  1. Make sure, well in advance, that all content stakeholders are aware of and agree on how the captured media will be used and distributed.
  2. Don’t repost the entire concert in full; the only full-length version online should be the archived live stream itself.
  3. Segment out individual pieces and create a lead-in and a closer for each video, with proper credits to performers, composers, and technicians as text overlays.
  4. Develop a channel/page where all of your media lives.
  5. Use the reposting of video content to strategically activate your social media or blog/newsletter presence.

Upcoming articles

Next week, we will discuss technical preparation, advertising, basic artist agreements, and a complete guide to hosting your stream on different platforms such as Facebook, YouTube, Twitter/Periscope, and other streaming hosting services.

But What I Really Want To Do Is Direct!

Music videos are everywhere: pop artists create videos designed to go viral and to sell albums. Budding directors often cut their teeth making music videos, and big names like M. Night Shyamalan, Gus Van Sant, Diane Keaton, and even Martin Scorsese have directed music videos, seemingly for fun. (It is way fun.) Formidable artistry sometimes emerges from the genre, like Beyoncé’s ingenious all-video “visual album” Lemonade, with seven directors, including Beyoncé herself, working on the project.

Technology is no longer a barrier (even a mobile phone will do) and musicians with far smaller budgets than mega-stars are making music videos. New music folks have found their way to the medium—from cinematic works like The Lotus Eaters by Sarah Kirkland Snider featuring Shara Nova and directed by Murat Eyuboglu; to James Moore’s stunning virtuosity in his rendition of John Zorn’s The Book of Heads: Etude 33, intimately filmed by Stephen Taylor. I’d love to see even more “new music” music videos out there. Our media-saturated culture is a perfect landscape for indie musicians’ videos, and websites and social media outlets are great ways to share and promote music and artistry.

My own music video obsession began with making sure my performance work was documented, and then I moved into creating my own stand-alone music videos. (Actually, it began even before then with wanting to be a rock star and growing up with MTV, but that’s another story.) My neighbor and friend, Raul Casares, is a pro director of photography and I inadvertently apprenticed myself to him a number of years ago as we began to film my performances and music videos together. He patiently stood by as I drove the creative direction of the projects. I was hooked: the creative possibilities meshed with my aesthetic sensibilities and my lifelong adoration of film. I also love the creative control of the medium.

Misha Penton and cast inside the Silos at Sawyer Yards, Houston, Texas.
L-R: Misha Penton, soprano & director. Neil Ellis Orts, Michael Walsh, Sherry Cheng, voices.
Photo by D. Nickerson.

Since the release of my first music video in 2013, my work in this area has grown significantly. I’ve directed and produced four others, advanced to doing some shooting, and am now finally editing the work myself, with the last two videos being experimental new music pieces for which I also created the sound scores.

Threshold is my latest music video excursion—a work which began as a live, site-specific postopera (as musicologist Jelena Novak might say) created for The Silos at Sawyer Yards in Houston: an enormous mid-20th century rice factory, now a space offered for artistic use. It’s a labyrinthine complex of silos with a many-second sonic delay.

During the rehearsal period for the Threshold live performance, I filmed just about everything we did, either with my iPhone, my heftier Canon DSLR, or both. The process videos, dress rehearsal, and live performance documentation created an archive of material to support the work while also serving as material for stand-alone pieces. Part of what drives me to video is the unrealistic, resource-gobbling nature of contemporary music’s (too often) one-off live performance model. Creating multi-form, many-versioned projects gives the work a longer shelf life.

In the early stages of planning Threshold, I knew I wanted to create a music video as the final version of the project. I’d worked similarly on several other pieces, creating music video versions of live performance works, and I like the longevity and archival nature of media. The music video was filmed during the rehearsal period for Threshold, while we were in the Silos space. After the live performance, the recording process began, and those audio files became the raw material for the edit and mix I created for the film’s sound score (polished and mastered by Todd Hulslander). I approached the video similarly: after filming with Raul (and Dave Nickerson), I chose all my favorite clips and created the video, adding the sound score last.

After about a year and a half of work—from conception to live performance, and finally the music video version—I now consider the Threshold project complete:

Tips & Toolkits

I work very intuitively, and I like to think I’m pretty resourceful. I often ask myself, “What do I have at my disposal, right now?” rather than “I need seven countertenors and a goat, or I cannot realize my creative genius!” Budget is always a looming consideration, but doing a lot of the work oneself will cut that down quickly. To diminish the financial demands of making media projects, increase your technical independence overall (more on that later) and be as inventive as possible: take advantage of natural light, use interesting outdoor locations, incorporate abstract elements, and think outside the box when it comes to production.

And never underestimate the power of your mobile phone.

In addition to making creative media projects, it’s also possible to get good live performance documentation (in an intimate venue) with a smartphone mounted to a tripod—and although the resolution isn’t quite as high as that of still photos, capturing video screenshots or exporting still images from the footage is possible. A number of major releases have been shot mostly on mobile phones—like Sean Baker’s Tangerine (2015) and Steven Soderbergh’s Unsane (2018)—and many film festivals have categories for mobile phone (and music video) submissions. Enchant(ed) was made on my iPhone and filmed impromptu (and handheld) on a crazy-beautiful winter day in Colorado. (The voice-scape was created later in Logic Pro X.)
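On that stills point: rather than screenshotting playback, you can export a clean frame from the footage with the free ffmpeg tool. A minimal sketch from Python, with placeholder filenames and timestamp:

```python
# Sketch: export one still frame from video with ffmpeg.
# Filenames and the timestamp are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-ss", "00:02:15",        # seek to the moment you want
        "-i", "performance.mp4",  # source footage
        "-frames:v", "1",         # grab exactly one frame
        "still_frame.png",
    ],
    check=True,
)
```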

Although the arts are highly collaborative by nature, you should consider seeking grants or using resources to buy gear and software to become more self-sufficient—at least some of the time or as a choice—instead of using resources to pay for technical support to document projects or to realize creative media ideas. To put it plainly, instead of paying someone else to do it for you, invest in equipment over time and learn to use it. I’m one of those hardheaded, odd creatures who likes the experience of learning things on my own, so my tech skill set is largely self-taught. However, there are many options for upping technical expertise: local filmmaking and photography organizations usually offer classes, as do community colleges and continuing education departments at universities. Perhaps you have a friend or colleague who is into cameras and making films—as rock guitar icon Robert Fripp aphorized, “If we wish to know, breathe the air around someone who knows.”

There are many options when it comes to gear and software, and these tools can both document live performance and realize creative media works.

Newbie Kit:

A smartphone and a tripod with a phone mount, and maybe one of those cool new gimbals from EVO (hand-held camera stabilizers). Many companies make clip-on lenses for mobile phones, like olloclip and AMIR. For live sound, something like a Zoom H4n is excellent.

Entry-Level DSLR Kit:

Canon EOS Rebel series or Sony Alpha a68. Both can be purchased bundled with an 18-55mm lens, plus you’ll need a tripod. I still use a Zoom H4n for live sound, so keep on keepin’ on with that little device.

Although a dedicated digital camera will increase quality and offer more creative flexibility, push your smartphone to its limit. I love my Canon dearly, but I recently upgraded to the iPhone X and it shoots gorgeous video with enhanced image stabilization.

Oh!—and for the love of all things sacred, always shoot in landscape and not portrait orientation: meaning, hold your phone horizontally so the image is wider than it is tall (like the wide rectangle of a computer, TV, movie screen, or proscenium stage). Also, keep your music videos under five minutes (don’t worry, I’ve broken that rule)—pop songs are usually around four minutes, so I say stick with broad audience appeal, and with the idea that a music video is simultaneously an art form and a promotional tool.

Video and Audio Editing Software:

Entry-level apps like iMovie (Mac) or Story Remix (PC) are pretty powerful. I’m Mac-based, but here’s one scoop on free PC video editing software. More powerful editing suites include Final Cut Pro and Adobe Premiere, and for audio editing I like Logic Pro X, but there are many PC equivalents, some free. Home studio and pro audio recording options are beyond the scope of this article, but research recording resources in your area, like university studios or your local PBS affiliate.

External hard drives are essential because you will never have enough room for media on a laptop or on a standard computer set-up. I edited Threshold entirely on my late-2015 MacBook Air with an external hard drive (not ideal, but bless that little machine). Be forewarned: computers and external hard drives will fail at some point. Always back up full versions of your projects on two separate external drives.

I prefer Vimeo over YouTube as a distribution platform for my work because it’s ad-free, beautiful, and customizable. However, YouTube is free to use, while Vimeo charges a monthly fee for most of its plans (it does offer a free ‘Basic’ plan). Vimeo also has a number of technical advantages over YouTube, but if you’re just starting out, you may want to go with YouTube. Once your work develops in such a way that it benefits from a slick showcase, move to Vimeo.

And always credit collaborators. It’s surprising how many directors, filmmakers, and videographers are uncredited. Put all the credits and video info in the text below the video and not just at the beginning or end of the film itself. This text is search engine friendly.

My video work started when I got my first iPhone many years ago and my gear acquisition and skills built up over time. I am, by no means, a tech expert, but if you have a terrible aversion to gadgets and software, proceed with self-compassion and patience! Be resourceful, take baby steps, and make do: creativity best emerges within constraints.

Inspiration

The number of artists working in media is staggering, and the technical options range from guerrilla filmmaking to extremely high-tech operations. Here are a few very cool artists whose compelling work demonstrates this wide array of possibilities.

Jil Guyon is a performer and filmmaker whose surreal work, Widow_remix (trailer), is a collaborative project with composer Chris Becker and the voice of Helga Davis. Jil conceptualizes, directs, and edits, and Valerie Barnes is the cinematographer:

Zena Carlota’s ensemble piece, Lolow Kacha, features the kora, a traditional 21-string harp from West Africa, and was filmed in an intimate documentary style by JJ Harris:

Nterini is a big budget music video by one of my favorite artists, Fatoumata Diawara, directed by Aida Muluneh with director of photography Daniel Girma:

And finally, animator, director, designer, and performer Miwa Matreyek composes music and collaborates with a number of musicians for her stunning multimedia live performances. Her website is a deep dive, so get comfy. Here’s a clip of her work, This World Made Itself:

Getting the Word Out

Beyond standard PR practices like social media posts, newsletters, press releases, and developing good relationships with arts writers in your community and beyond, submit your music videos to film festivals and find outlets to showcase and write about your work yourself. No one knows your work better than you. Blog about your video creation projects, trade guest posts with other writers in your area of interest, and always embed your video projects in posts.

Your website is another great way to showcase and organize work: performance history, videos, audio, and creative process writings. I love composer Caitlin Rowley’s vlogs. She is deeply honest and comprehensive about her approach. Her work with sound and performance, and her experiments with palimpsest-like hybrid journal/visual art, are meticulous and fascinating. Soprano and artist-scholar Elisabeth Belgrano creates hypnotic and maze-like pages, and her iPhone and iPad voice recordings in Swedish churches and cathedrals are quite stunning. Interdisciplinary sound and performing artist Leona Jones, whose work centers “around a celebration of the hidden,” has organized her site beautifully with lots of headphone-friendly audio. My own work is organized in the Project section and Production Archives of my site.

Lastly, share the work with daring confidence: as the inimitable Dolly Parton is credited with saying, “Sometimes you just have to toot your own horn. Otherwise, nobody’ll know you’re a-comin’!”

It Ain’t Over Yet. Don’t give up on Net Neutrality.

Today the Federal Communications Commission voted to reclassify internet providers from utilities to information companies. This apparently simple act undoes years of bipartisan agreement on the concept of net neutrality as the guiding principle behind internet rules. Chairman Ajit Pai, a former Verizon attorney appointed to his position by President Trump, has been relentless and single-minded over the past months in pursuing his goal, which is at best misguided and at worst deeply craven.

You’ve probably already heard a lot about why this reclassification is a truly terrible idea. I’ll just underline the perspective from New Music USA. Our constituency includes thousands and thousands of independent artists. We believe that the internet provides an absolutely indispensable tool for creating, distributing, and promoting the amazing array of musics that make this a potentially golden age for our sector. In a culture that so inattentively leaves the playing field unlevel for artists, at least a neutral internet gives us a fighting chance to advance our work on the same terms as anyone else.

So who is actually in favor of this reclassification, this repeal of net neutrality? Very few, and (surprise!) they’re big corporations who stand to make billions of dollars off a newly unequal internet. Who’s against? Pretty much everyone else. Surveys show that more than 80% of Americans support net neutrality, and more than one million people called Congress in the last month alone, asking their representatives to save it. In a climate of deep and troubling divisions in our country, 80% (that’s eight-zero) agreement stands out as virtual unanimity. I’ve been truly moved to see the images of protests from all over the country, with ordinary people exercising their right to speak out and speak up for themselves. This is the country I want to live in.

If there’s good news here, it’s this: The FCC currently has the authority to do what it has just done. But Congress can step in and pass legislation that repairs the damage. There’s broad support for doing so. Lawmakers from all sides weighed in with letters to Chairman Pai asking him to delay the Commission’s vote: 39 Democrats and Independents signed onto one letter; Republican Senator Susan Collins joined another; Republican Representative Mike Coffman sent one of his own; not to mention the mountains of letters like this one from 32 House Democrats going all the way back to April.

There’s truly broad concern about the FCC action. And in that concern lies real hope to save the precious quality of an internet that’s equal for all.

What to Ware? A Guide to Today’s Technological Wardrobe

Circuitry for Salvage (Guiyu Blues), 2007. First version of design, housed in VHS tape box. 12 probes for linking to dead circuit board to be re-animated. Rotary switches select frequency range of each of six oscillator voices. Photo by Simon Lonergan.

At some point in the late 1980s the composer Ron Kuivila told me, “we have to make computer music that sounds like electronic music.” This might appear a mere semantic distinction. At that time the average listener would dismiss any music produced with electronic technology—be it a Moog or Macintosh—as “boops and beeps.” But Kuivila presciently drew attention to a looming fork in the musical road: boops and beeps were splitting into boops and bits. Over the coming decades, as the computer evolved into an unimaginably powerful and versatile musical tool, this distinction would exert a subtle but significant influence on music.

Kuivila and I had met in 1973 at Wesleyan University, where we both were undergraduates studying with Alvin Lucier. Under the guidance of mentors such as David Tudor and David Behrman, we began building circuits in the early 1970s, and finished out the decade programming pre-Apple microcomputers like the KIM-1. The music that emerged from our shambolic arrays of unreliable homemade circuits fit well into the experimental aesthetic that pervaded the times. (The fact that we were bad engineers probably made our music better by the standards of our community.) Nonetheless we saw great potential in those crude early personal computers, and many of us welcomed the chance to hang up the soldering iron and start programming.[1]

The Ataris, Amigas, and Apples that we adopted in the course of the 1980s were vastly easier to program than our first machines, but they still lacked the speed and processor power needed to generate complex sound directly. Most “computer music” composers of the day hitched their machines to MIDI synthesizers, but even the vaunted Yamaha DX7 was no match for the irrational weirdness of a table strewn with Tudor’s idiosyncratic circuits arrayed in unstable feedback matrices. One bottleneck lay in MIDI’s crudely quantized data format, which had been optimized for triggering equal-tempered notes and was ill suited for complex, continuous changes in sound textures. On a more profound level, MIDI “exploded” the musical instrument, separating sound (synthesizer) from gesture (keyboard, drum pads, or other controller)—we gained a Lego-like flexibility to build novel instruments, but we severed the tight feedback between body and sound that existed in most traditional, pre-MIDI instruments and we lost a certain degree of touch and nuance[2].

Today, MIDI no longer stands between code and sound: any laptop has the power to directly generate a reasonable simulation of almost any electronic sound—or at least to play back a sample of it. By rights, computer music should now sound like electronic music. But I’m not sure that Kuivila’s goal has yet been met. I still find myself moving back and forth between different technologies for different musical projects. And I can still hear a difference between hardware and software. Why?

Most music today that employs any kind of electronic technology depends on a combination of hardware and software resources. Although crafted and/or recorded in code, digital music reaches our ears through a chain of transistors, mechanical devices, speakers, and earphones. “Circuit Benders” who open and modify electronic toys in pursuit of new sounds often espouse a distinctly anti-computer aesthetic, but the vast majority of the toys they hack in fact consist of embedded microcontrollers playing back audio samples—one gizmo is distinguished from another not by its visible hardware but by the program hidden inside a memory chip on the circuit board. So although a strict hardware/software dialectic can’t hold water for very long, arrays of semiconductors and lines of code are imbued with various distinctive traits that combine to determine the essential “hardware-ness” or “software-ness” of any particular chunk of modern technology.

Some of these traits are reflected directly in sound—with sufficient attention or guidance, one can often hear the difference between sounds produced by a hardware-dominated system versus those crafted largely in software. Others influence working habits—how we compose with a certain technology, or how we interact with it in performance; sometimes this influence is obvious, but at other times it can be so subtle as to verge on unconscious suggestion. Many of these domain-specific characteristics can be ignored or repressed to some degree—just as a short person can devote himself to basketball—but they nonetheless affect the likelihood of one choosing a particular device for a specific application, and they inevitably exert an influence on the resulting music.

I want to draw attention to some distinctive differences between hardware and software tools as applied to music composition and performance. I am not particularly interested in any absolute qualities inherent in the technology, but in the ways certain technological characteristics influence how we think and work, and the ways in which the historic persistence of those influences can predispose an artist to favor specific tools for specific tasks or even specific styles of music. My observations are based on several decades of personal experience: in my own activity as a composer and performer, and in my familiarity with the music of my mentors and peers, as observed and discussed with them since my student days. I acknowledge that my perspective comes from a fringe of musical culture and I contribute these remarks in the interest of fostering discussion, rather than to prove a specific thesis.

I should qualify some of the terms I will be using. When I speak of “hardware” I mean not only electronic circuitry, but also mechanical and electromechanical devices from traditional acoustic instruments to electric guitars. By “software” I’m designating computer code as we know it today, whether running on a personal computer or embedded in a dedicated microcontroller or Digital Signal Processor (DSP). I use the words “infinite” and “random” not in their scientific sense, but rather as one might in casual conversation, to mean “a hell of a lot” (the former) and “really unpredictable” (the latter).

Vim

The Traits

Here are what I see as the most significant features distinguishing software from hardware in terms of their apparent (or at least perceived) suitability for specific musical tasks, and their often-unremarked influence on musical processes:

    • Traditional acoustic instruments are three-dimensional objects, radiating sound in every direction, filling the volume of architectural space like syrup spreading over a waffle. Electronic circuits are much flatter, essentially two-dimensional. Software is inherently linear, every program a one-dimensional string of code. In an outtake from his 1976 interview with Robert Ashley for Ashley’s Music With Roots in the Aether, Alvin Lucier justified his lack of interest in the hardware of electronic music with the statement, “sound is three-dimensional, but circuits are flat.” At the time Lucier was deeply engaged with sound’s behavior in acoustic space, and he regarded the “flatness” of circuitry as a fundamental weakness in the work of composers then in thrall to homemade circuitry, a fixation quite prevalent at the time. As a playing field for sounds a circuit may never be able to embody the topographic richness of standing waves in a room, but at least a two-dimensional array of electronic components on a fiberglass board allows for the simultaneous, parallel activity of multiple strands of electron flow, and the resulting sounds often approach the polyphonic density of traditional music in three-dimensional space. In software most action is sequential, and all sounds queue up through a linear pipeline for digital-to-analog conversion. With sufficient processor speed and the right programming environment one can create the impression of simultaneity, but this is usually an illusion—much like a Bach flute sonata weaving a monophonic line of melody into contrapuntal chords. Given the ludicrous speed of modern computers this distinction might seem academic—modern software does an excellent job of simulating simultaneity. Moreover, “processor farms” and certain DSP systems do allow true simultaneous execution of multiple software routines. But these latter technologies are far from commonplace in music circles and, like writing prose, the act of writing code (even for parallel processors) invariably nudges the programmer in the direction of sequential thinking. This linear methodology can affect the essential character of work produced in software.
    • Hardware occupies the physical world and is appropriately constrained in its behavior by various natural and universal mechanical and electrical laws and limits. Software is ethereal—its constraints are artificial, different for every programming language, the result of intentional design rather than pre-existing physical laws. When selecting a potentiometer for inclusion in a circuit, a designer has a finite number of options in terms of maximum resistance, curve of resistive change (i.e., linear or logarithmic), number of degrees of rotation, length of its slider, etc.—and these characteristics are fixed at the point of manufacture. When implementing a potentiometer in software, all these parameters are infinitely variable, and can be replaced with the click of a mouse. Hardware has real edges; software presents an ever-receding horizon.
    • As a result of its physicality, hardware—especially mechanical devices—often displays non-linear adjacencies similar to state-changes in the natural world (think of the transition of water to ice or vapor). Pick a note on a guitar and then slowly raise your fretting finger until the smooth decay is abruptly choked off by a burst of enharmonic buzzing as the string clatters against the fret. In the physical domain of the guitar these two sounds—the familiar plucked string and its noisy dying skronk—are immediately adjacent to one another, separated by the slightest movement of a finger. Either sound can be simulated in software, but each requires a wholly different block of code: no single variable in the venerable Karplus-Strong “plucked string algorithm”[3] can be nudged by a single bit to produce a similar death rattle; this kind of adjacency must be programmed at a higher level and does not typically exist in the natural order of a programming language (see the sketch at the end of this list). Generally speaking, adjacency in software remains very linear, while the world of hardware abounds with abrupt transitions. A break point in a hardware instrument—fret buzz on a guitar, the unpredictable squeal of the STEIM Cracklebox—can be painstakingly avoided or joyously exploited, but is always lurking in the background, a risk, an essential property of the instrument.
    • Most software is inherently binary: it either works correctly or fails catastrophically, and when corrupted code crashes the result is usually silence. Hardware performs along a continuum that stretches from the “correct” behavior intended by its designers to irreversible, smoky failure; circuitry—especially analog circuitry—usually produces sound even as it veers toward breakdown. Think of overdriving an amplifier to distort a guitar (or even setting the guitar on fire), feeding back between a microphone and a speaker to play a room’s resonant frequencies, or “starving” the power supply voltage in an electronic toy to produce erratic behavior. These “misuses” of circuitry generate sonic artifacts that can be analyzed and modeled in software, but the risky processes themselves (saturation, burning, feedback, under-voltage) are very difficult to transfer intact from the domain of hardware to that of software while preserving functionality in the code. Writing software favors Boolean thinking—self-destructive code remains the purview of hackers who craft worms and Trojan Horses for the specific purpose of crashing or corrupting computers.
    • Software is deterministic, while all hardware is indeterminate to some degree. Once debugged, code runs the same almost all the time. Hardware is notoriously unrepeatable: consider recreating a patch on an analog synthesizer, restoring mixdown settings on a pre-automation mixer, or even tuning a guitar. The British computer scientist John Bowers once observed that he had never managed to write a “random” computer program that would run, but was delighted when he discovered that he could make “random” component substitutions and connections in a circuit with a high certainty of a sonic outcome (a classic technique of circuit bending).
    • Hardware is unique, software is a multiple. Hardware is constrained in its “thinginess” by number: whether handcrafted or mass-produced, each iteration of a hardware device requires a measurable investment of time and materials. Software’s lack of physical constraint gives it tremendous powers of duplication and dissemination. Lines of code can be cloned with a simple cmd-C/cmd-V: building 76 oscillators into a software instrument takes barely more time than one, and no more resources beyond the computer platform and development software needed for the first (unlike trombones, say). In software there is no distinction between an original and a copy: MP3 audio files, PDFs of scores, and runtime versions of music programs can be downloaded and shared thousands of times without any deterioration or loss of the matrix—any copy is as good as the master. If a piano is a typical example of traditional musical hardware, the pre-digital equivalent of the software multiple would lie somewhere between a printed score (easily and accurately reproduced and distributed, but at a quantifiable—if modest—unit cost) and the folk song (freely shared by oral tradition, but more likely to be transformed in its transmission). Way too many words have already been written on the significance of this trait of software—of its impact on the character and profitability of publishing as it was understood before the advent of the World Wide Web; I will simply point out that if all information wants to be free, that freedom has been attained by software, but is still beyond the reach of hardware. (I should add that software’s multiplicity is accompanied by virtual weightlessness, while hardware is still heavy, as every touring musician knows too well.)
    • Software accepts infinite undo’s, is eminently tweakable. But once the solder cools, hardware resists change. I have long maintained that the young circuit-building composers of the 1970s switched to programming by the end of that decade because, for all the headaches induced by writing lines of machine language on calculator-sized keypads, it was still easier to debug code than to de-solder chips. Software invites endless updates, whereas hardware begs you to close the box and never open it again. Software is good for composing and editing, for keeping things in a state of flux. Hardware is good for making stable, playable instruments that you can return to with a sense of familiarity (even if they have to be tuned)—think of bongos or Minimoogs. The natural outcome of software’s malleability has been the extension of the programming process from the private and invisible pre-concert preparation of a composition to an active element of the actual performance—as witnessed in the rise of the “live coding” culture practiced by devotees of the SuperCollider and ChucK programming languages, for example. Live circuit building has been a fringe activity at best: David Tudor finishing circuits in the pit while Merce Cunningham danced overhead; the group Loud Objects soldering PICs on top of an overhead projector; live coding vs. live circuit building in the ongoing competition between the younger Nick Collins (UK) and myself for the Nic(k) Collins Cup.
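Since the Karplus-Strong algorithm comes up above, here is a minimal sketch of it in Python with NumPy. It is a toy illustration, not a definitive implementation; the frequency, duration, and 0.5 averaging constant are arbitrary choices.

```python
import numpy as np

def karplus_strong(frequency=110.0, duration=2.0, sample_rate=44100):
    """Minimal Karplus-Strong plucked string: a burst of noise circulates
    through a delay line with a simple averaging (lowpass) filter."""
    n = int(sample_rate / frequency)        # delay-line length sets the pitch
    buf = np.random.uniform(-1.0, 1.0, n)   # the "pluck": one buffer of noise
    out = np.empty(int(sample_rate * duration))
    for i in range(len(out)):
        out[i] = buf[i % n]
        # Replace the sample just read with the average of itself and its
        # neighbor: each pass around the loop darkens and decays the tone.
        buf[i % n] = 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out
```

Notice that nothing in this loop corresponds to a fret: the smooth decay is baked into the structure of the code, and the buzz described above would require a wholly different block of code, exactly as argued.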

David Tudor performance setup

    • On the other hand, once a program is burned into ROM and its source code is no longer accessible, software flips into an inviolable state. At this point re-soldering, for all its unpleasantness, remains the only option for effecting change. Circuit Benders hack digital toys not by rewriting the code (typically sealed under a malevolent beauty-mark of black epoxy) but by messing about with traces and components on the circuit board. A hardware hack is always lurking as a last resort, like a shim bar when you lock your keys in the car.
    • Thanks to computer memory, software can work with time. The transition from analog circuitry to programmable microcomputers gave composers a new tool that combined characteristics of instrument, score, and performer: memory allows software to play back prerecorded sounds (an instrument), script a sequence of events in time (a score), and make decisions built on past experience (a performer). Before computers, electronic circuitry was used primarily in an instrumental capacity—to produce sounds immediately.[4] It took software-driven microcomputers to fuse this trio of traits into a powerful new resource for music creation (a toy sketch of that fusion follows this list).
    • Given the sheer speed of modern personal computers and software’s quasi-infinite power of duplication (as mentioned earlier), software has a distinct edge over hardware in the density of musical texture it can produce: a circuit is to code as a solo violin is to the full orchestra. But at extremes of its behavior hardware can exhibit a degree of complexity that remains one tiny but audible step beyond the power of software to simulate effectively: the initial tug of rosined bow hair on a violin string; the unstable squeal of wet fingers on a radio’s circuit board; the supply voltage collapsing in a cheap electronic keyboard. Hardware still does a better job of giving voice to the irrational, the chaotic, the unstable (and this may be the single most significant factor in the “Kuivila Dilemma” that prompted this whole rant).
    • Software is imbued with an ineffable sense of now—it is the technology of the present, and we are forever downloading and updating to keep it current. Hardware is yesterday, the tools that were supplanted by software. Turntables, patchcord synthesizers, and tape recorders have been “replaced” by MP3 files, software samplers, and ProTools. In the ears, minds, and hands of most users, this is an improvement—software often does the job “better” than its hardware antecedents (think of editing tape, especially videotape, before the advent of digital alternatives). Before any given tool is replaced by a superior device, qualities that don’t serve its main purpose can be seen as weaknesses, defects, or failures: the ticks and pops of vinyl records, oscillators drifting out of tune, tape hiss and distortion. But when a technology is no longer relied upon for its original purpose, these same qualities can become interesting in and of themselves. The return to “outmoded” hardware is not always a question of nostalgia, but often an indication that the scales have dropped from our ears.
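As a toy illustration of the instrument/score/performer fusion mentioned above, here is a hypothetical Python sketch; the sample names, durations, and decision rule are all invented for the example, and printing stands in for actual audio playback.

```python
import random
import time

# Hypothetical "samples": name -> duration in seconds, standing in for stored audio.
SAMPLES = {"thump": 0.25, "hiss": 1.0, "click": 0.05}

# A fixed "score": (sample name, seconds to wait before the next event).
SCORE = [("thump", 0.5), ("hiss", 0.5), ("click", 0.25)]

def perform(repeats=4):
    history = []                                       # past experience
    for _ in range(repeats):
        for name, wait in SCORE:
            # Performer-like decision: if one sound has dominated so far,
            # swap in a different one.
            if history and history.count(name) > len(history) / 2:
                name = random.choice([s for s in SAMPLES if s != name])
            print(f"play {name} ({SAMPLES[name]}s)")   # instrument: sample playback
            history.append(name)
            time.sleep(wait)                           # score: events scripted in time

perform()
```

The point is structural: stored sound, scripted time, and history-dependent choice all live comfortably in a few lines of code, where before computers they required three different kinds of machinery (or people).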

Hybrids

Lest you think me a slave to the dialectic, I admit that there are at least three areas of software/hardware crossover that deserve mention here: interfaces for connecting computers (and, more pointedly, their resident software) to external hardware devices; software applications designed to emulate hardware devices; and the emergence of affordable rapid prototyping technology.

The most ubiquitous of the hardware interfaces today is the Arduino, the small, inexpensive microcontroller designed by Massimo Banzi and David Cuartielles in 2005. The Arduino and its brethren and ancestors facilitate the connection of a computer to input and output devices, such as tactile sensors and motors. Such an interface indeed imbues a computer program with some of the characteristics we associate with hardware, but there always remains a MIDI-tinged sense of mediation (a result of the conversion between the analog and digital domains) that makes performing with these hybrid instruments slightly hazier than manipulating an object directly—think of controlling a robotic arm with a joystick, or hugging an infant in an incubator while wearing rubber gloves. That said, I believe that improvements in haptic feedback technology will bring us much closer to the nuance of real touch.
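To make that mediation concrete, here is a minimal sketch of the computer side of a typical Arduino link, assuming the pySerial library and a sketch on the board that prints one analog reading per line; the port name and the final mapping are illustrative assumptions.

```python
import serial  # pySerial: pip install pyserial

# The port name is an assumption; on Windows it might be "COM3".
port = serial.Serial("/dev/ttyACM0", baudrate=9600, timeout=1.0)

while True:
    line = port.readline().strip()   # one sensor reading per line, as text
    if not line:
        continue                     # timed out: no data this interval
    value = int(line) / 1023.0       # Arduino's 10-bit ADC scaled to 0.0-1.0
    print(f"sensor: {value:.3f}")    # stand-in for mapping onto a synthesis parameter
```

Every gesture passes through the board’s analog-to-digital converter and a serial buffer before the program sees it, which is exactly the layer of mediation described above.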

The past decade has also seen a proliferation of software emulations of hardware devices, from smart phone apps that simulate vintage analog synthesizers, to iMovie filters that make your HD video recording look like scratchy Super 8 film. The market forces behind this development (nostalgia, digital fatigue, etc.) lie outside of the scope of this discussion, but it is important to note here that these emulations succeed by focusing on those aspects of a hardware device most easily modeled in the software domain: the virtual Moog synthesizer models the sound of analog oscillators and filters, but doesn’t try to approximate the glitch of a dirty pot or the pop of inserting a patchcord; the video effect alters the color balance and superimposes algorithmically generated scratches, but does not let you misapply the splicing tape or spill acid on the emulsion.
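As a concrete (and hypothetical) example of that selective modeling, a software “analog” oscillator typically adds only the well-behaved imperfections, such as slow pitch drift. In this NumPy sketch the drift rate is an arbitrary choice.

```python
import numpy as np

def drifting_sine(freq=440.0, duration=2.0, sr=44100, drift=0.5):
    """Sine oscillator whose pitch wanders slowly, emulating analog drift."""
    n = int(sr * duration)
    # Random-walk detuning in Hz; deviation grows roughly as drift * sqrt(seconds).
    wander = np.cumsum(np.random.normal(0.0, drift / np.sqrt(sr), n))
    # Integrate instantaneous frequency to get phase, then take the sine.
    phase = 2.0 * np.pi * np.cumsum((freq + wander) / sr)
    return np.sin(phase)
```

A dirty pot’s crackle or the pop of an inserted patchcord, by contrast, would mean modeling contact mechanics and dust, which is precisely what such emulations leave out.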

Although affordable 3D printers and rapid prototyping devices still remain the purview of the serious DIY practitioner, there is no question that these technologies will enter the larger marketplace in the near future. When they do, the barrier between freely distributable software and tactile hardware objects will become quite permeable. A look through the Etsy website reveals how independent entrepreneurs have already employed this technology to extend the publishing notion of “print on demand” to something close to “wish on demand,” with Kickstarter as the economic engine behind the transformation of wishes into businesses. (That said, I’ve detected the start of a backlash against the proliferation of web-accessed “things”—see Allison Arieff, “Yes We Can. But Should We?”).

Some Closing Observations

Trombone-Propelled Electronics rev. 3.0, 2005. Photo by Simon Lonergan.

I came of age as a musician during the era of the “composer-performer”: the Sonic Arts Union, David Tudor, Terry Riley, La Monte Young, Pauline Oliveros, Steve Reich, Philip Glass. Sometimes this dual role was a matter of simple expediency (established orchestras and ensembles wouldn’t touch the music of these young mavericks at that time), but more often it was a desire to retain direct, personal control that led to a flowering of composer-led ensembles that resembled rock bands more than orchestras. Fifty years on, the computer—with its above-mentioned power to fuse the three principal components of music production—has emerged as the natural tool for this style of working.

But another factor driving composers to become performers was the spirit of improvisation. The generation of artists listed above may have been trained in a rigorous classical tradition, but by the late 1960s it was no longer possible to ignore the musical world outside the gates of academe or beyond the doors of the European concert hall. What was then known as “world music” was reaching American and European ears through a trickle of records and concerts. Progressive jazz was in full flower. Pop was inescapable. And composers of my age—the following generation—had no need to reject any older tradition to strike out in a new direction: Ravi Shankar, Miles Davis, the Beatles, John Cage, Charles Ives, and Monteverdi were all laid out in front of us like a buffet, and we could heap our plates with whatever pleased us, regardless of how odd the juxtapositions might seem. Improvisation was an essential ingredient, and we sought technology that expanded the horizons of improvisation and performance, just as we experimented with new techniques and tools for composition.

It is in the area of performance that I feel hardware—with its tactile, sometimes unruly properties—still holds the upper hand. This testifies not to any failure of software to make good on its perceived promise of making everything better in our lives, but to a pragmatic affirmation of the sometimes messy but inarguably fascinating irrationality of human beings: sometimes we need the imperfection of things.

 

This essay began as a lecture for the “Technology and Aesthetics” symposium at NOTAM (Norwegian Center for Technology in Music and the Arts), Oslo, May 26–27, 2011, revised for publication in Musical Listening in the Age of Technological Reproduction (Ashgate, 2015). It has been further revised for NewMusicBox.

*

Nicolas Collins

New York born and raised, Nicolas Collins spent most of the 1990s in Europe, where he was visiting artistic director of Stichting STEIM (Amsterdam) and a DAAD composer-in-residence in Berlin. An early adopter of microcomputers for live performance, Collins also makes use of homemade electronic circuitry and conventional acoustic instruments. He is editor-in-chief of the Leonardo Music Journal and a professor in the Department of Sound at the School of the Art Institute of Chicago. His book, Handmade Electronic Music: The Art of Hardware Hacking (Routledge), has influenced emerging electronic music worldwide. Collins’s indecisive career trajectory is reflected in his having played at both CBGB and the Concertgebouw.

 

*

1. Although this potential was clear to our small band of binary pioneers, the notion was so inconceivable to the early developers of personal computers that Apple trademarked its name with the specific limitation that its machines would never be used for musical applications, lest it infringe on the Beatles’ semi-dormant company of the same name—a decision that would lead to extended litigation after the introduction of the iPod and iTunes. This despite the fact that the very first non-diagnostic software written and demonstrated at the Homebrew Computer Club in Menlo Park, California, in 1975 was a music program by Steve Dompier, an event attended by a young Steve Jobs (see http://www.convivialtools.net/index.php?title=Homebrew_Computer_Club, accessed February 21, 2013).


2. For more on the implications of MIDI’s separation of sound from gesture see Nicolas Collins, “Ubiquitous Electronics—Technology and Live Performance 1966–1996,” Leonardo Music Journal 8 (1998): 27–32. One magnificent exception to the gesture/sound disconnect that MIDI inflicted on most computer music composers was Tim Perkis’s “Mouseguitar” project of 1987, which displayed much of the tactile nuance of Tudor-esque circuitry. In Perkis’s words:

When I switched to the FM synth (Yamaha TX81Z), there weren’t any keydowns involved; it was all one “note”…  The beauty of that synth—and why I still use it! — is that its failure modes are quite beautiful, and that live patch editing [can] go on while a voice is sounding without predictable and annoying glitches. The barrage of sysex data—including simulated front panel button-presses, for some sound modifications that were only accessible that way—went on without cease throughout the performance. The minute I started playing the display said “midi buffer full” and it stayed that way until I stopped.

(Email from Tim Perkis, July 18, 2006.)


3. Kevin Karplus and Alex Strong, “Digital Synthesis of Plucked String and Drum Timbres,” Computer Music Journal 7, no. 2 (1983): 43–55.


4. Beginning in the late 1960s a handful of artist-engineers designed and built pre-computer circuits that embodied some degree of performer-like decision-making: Gordon Mumma’s “Cybersonic Consoles” (1960s-70s), which as far as I can figure out were some kind of analog computers; my own multi-player instruments built from CMOS logic chips in emulation of Christian Wolff’s “co-ordination” notation (1978). The final stages of development of David Behrman’s “Homemade Synthesizer” included a primitive sequencer that varied pre-scored chord sequences in response to pitches played by a cellist (Cello With Melody Driven Electronics, 1975) presaging Behrman’s subsequent interactive work with computers. And digital delays begat a whole school of post-Terry Riley canonical performance based on looping and sustaining sounds from a performance’s immediate past into its ongoing present.