
Just Intonation as Orchestrator

I place “in tune” and “out of tune” in quotes because these are highly subjective terms.

Just Intonation has become an essential part of my practice as a violinist. I initially came to it from the perspective of contemporary music and from working with composers on particular projects, but I’ve found that Just Intonation has crept into almost everything that I do and has proven to be an immensely useful tool. Just Intonation (JI) is a broad term for tuning intervals according to the natural relationships of the harmonic series. In practical use as a string player, it provides a grid (really, many distinct grids) that can be mentally/aurally overlaid onto the framework of the open strings, providing precise measurements of whether something is “in tune” or “out of tune” using only the instrument and your ears, without the aid of piano, digital tuner, etc. (I place “in tune” and “out of tune” in quotes because these are highly subjective terms – what is “in tune” using the grid of JI would be “out of tune” using the grid of Equal Temperament. It depends on which grid you are measuring with and that depends on the goals of the music you are playing, the preferences of your fellow musicians, and myriad other factors that must be negotiated fluidly in any musical situation.) JI tunings can be verified with great precision by ear by listening for difference tones and aiming for a pure sound without beats (that is, the acoustic interference between sound waves). The great potential of JI as a teacher and tool in the practice room is not really part of Western classical conservatory training, and I think it should be.
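The beats mentioned above are easy to quantify: two sustained tones pulse at the absolute difference of their frequencies, and the pulsing vanishes as the interval locks into a pure ratio. A minimal sketch of the arithmetic (the frequencies are illustrative, not drawn from any piece discussed here):

```python
def beat_rate(f1: float, f2: float) -> float:
    """Beat frequency (Hz) heard between two nearly coinciding pure tones."""
    return abs(f1 - f2)

# An open A string at 440 Hz against a slightly sharp unison at 443 Hz
# beats three times per second -- easily heard and counted.
print(beat_rate(440.0, 443.0))  # 3.0

# For a fifth, audible beats arise between coinciding overtones:
# the 3rd partial of the lower note vs. the 2nd partial of the upper.
just_fifth = 440.0 * 3 / 2        # pure 3:2 fifth above A 440: exactly 660 Hz
et_fifth = 440.0 * 2 ** (7 / 12)  # equal-tempered fifth: ~659.26 Hz
print(round(beat_rate(3 * 440.0, 2 * just_fifth), 3))  # 0.0 -- beatless
print(round(beat_rate(3 * 440.0, 2 * et_fifth), 3))    # 1.49 beats per second
```

That sub-two-beats-per-second shimmer on an equal-tempered fifth is exactly what a player listens for, and tunes away, when aiming for a pure interval by ear.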

I don’t intend to go deeply into the practicalities of JI on the violin here. I’ll just say that, given all of the complexities and emotional insecurities that come with tuning on a fretless instrument, the realization that I had under my own fingers the means to precisely measure tuning was a hugely empowering revelation. Beyond that, there is the multifaceted beauty of JI, which is what has brought me back to the well again and again. JI can function harmonically, melodically, timbrally, and as a means of heightening the character and emotional qualities of the music.

I’ve always been more interested in harmony than melody. I can’t easily recall the words of songs – I suppose my ear is elsewhere. I love a good melody, but the power of the motion from one harmonic field to another, that undercurrent that supports the tune, is what really makes me excited about a piece of music. As a violinist, I have devoted myself to an instrument made for melodic playing and celebrated for its vocal qualities. So I’ve found myself looking for any opportunity to engage with the instrument in a more harmonically (and timbrally) driven way. Performing contemporary music offers many ways to do this, as does playing string quartets and solo Bach.

I can’t easily recall the words of songs – I suppose my ear is elsewhere.

J.S. Bach’s solo violin music strongly implies fully voiced harmonies, and though the modern violin can only sustain two pitches simultaneously, Bach’s music, when played expertly, creates a sleight-of-hand illusion of sustained polyphonic counterpoint. When I first started getting into performing music in JI, the realization that changed everything for me was that JI gives the violinist the ability to virtually access the bass register and to sustain three pitches simultaneously. This is made possible by the psychoacoustic phenomenon of difference tones. Sustain any two pitches and our brains fill in the fundamental of those pitches. Sustain two pitches that are tuned in a ratio that corresponds exactly to the harmonic series (say the 5:4 major third, which is the ratio between the fourth and fifth partials of the harmonic series), and the fundamental may be perceived clearly and strongly. If you know which fundamental you’re listening for and aim for a pure sound without beating, you can find almost any ratio on the violin involving up to the 17th partial or so. In the mid to high register of the violin, either standing up close or in a small room, the difference tone comes across as being extremely loud and resonant. It’s hard to believe that it’s just in our heads and not a real sound in the room.
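The arithmetic behind this virtual bass is simple: the first-order difference tone sounds at the frequency gap between the two notes, and for a dyad tuned to adjacent partials that gap is exactly the fundamental. A sketch under an illustrative 220 Hz fundamental (not drawn from any specific passage):

```python
def difference_tone(f_low: float, f_high: float) -> float:
    """First-order difference tone (Hz) produced by two sustained pure tones."""
    return f_high - f_low

# The 5:4 just major third described above, built on a 220 Hz
# fundamental: the dyad consists of partials 4 and 5 (880 and 1100 Hz).
fundamental = 220.0
print(difference_tone(4 * fundamental, 5 * fundamental))  # 220.0 -- the fundamental,
                                                          # two octaves below the dyad

# Any ratio of adjacent partials (n+1):n behaves the same way:
# a 3:2 fifth (partials 2 and 3) also yields the fundamental.
print(difference_tone(2 * fundamental, 3 * fundamental))  # 220.0
```

So a violinist sustaining a pure 5:4 double-stop in the middle register effectively adds a third, phantom voice far below the instrument's written range.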

As a treble instrumentalist, gaining access to the bass register was huge. It changed the way that I listen to the violin, expanding my focus downward to include a much broader range of frequencies. It also changed the way that I play, especially in solo and ensemble improvisation – empowered by the knowledge that inflections of pitch in double stops can really sway the direction of the music. (Plus, playing JI intervals makes the instrument sound louder, an added benefit in an ensemble setting.)

Playing JI intervals makes the instrument sound louder.

JI intervals are used as a harmonizer to great effect in my Wet Ink Ensemble colleague Eric Wubbels’s “the children of fire come looking for fire” for violin and prepared piano, a duo that Eric wrote for us in 2012 that I count among my greatest influences as a musician. In a strikingly meditative moment in the middle of the piece, the violin plays a series of three dyads, each derived from a different prime of the harmonic series (three, five, and seven). The intervals expose a ghostly psychoacoustic bassline while tugging the listener from a complex, “minor” feeling to a simple “major” one. Wubbels then reinforces the function of the harmony by doubling the bassline on the piano. This motive is recontextualized in many colorful ways throughout this extraordinarily inventive piece, but it is in this particular moment of poignant austerity that the psychoacoustic underpinnings of the harmony are laid bare.

Eric Wubbels: “the children of fire come looking for fire” (excerpt – “tuning”), performed by Josh Modney and Eric Wubbels on Engage (New Focus Recordings)

From a technical standpoint, it is a short journey from harmonic JI violin playing to melodic. Double-stops, with the aid of the fixed points of the open strings, provide a clear way to measure intervals precisely. Decouple the notes of the double-stop and you have a melodic interval. Memorize the sound and physical feel of the interval and you can begin to construct scales accurately.

Taylor Brook is a composer who has a great affinity for melodic JI string writing. I first got to know Taylor’s music through his wonderful solo violin piece Vocalise, which has been a favorite of my repertoire for nearly a decade, and Taylor and I have since worked together on many different projects (another of my favorites is El Jardín de Senderos que se Bifurcan for string quartet, which we recorded with the Mivos Quartet in 2016). Vocalise, for violin and drone, presents a familiar vision of the violin, but one in which its traditional “beauty” is delicately teased and pulled into strange and surprising realms. In what Taylor describes as an “honorific recontextualization” of structural and theoretical elements of Hindustani music, the characteristically lush sound of the violin is heard over a drone, weaving through an intricate JI pitch space. As the pacing of the music evolves from reflective meditation to ecstatic action, the pitch relationships become increasingly adventurous and the tone of the violin shifts from sweetness to a big, bright, extremely colorful palette.

Taylor Brook: Vocalise (excerpt), performed by Josh Modney on Engage (New Focus Recordings)

Timbre is an important component of Vocalise, especially as it relates to the overall form, but the melodic pitch relationships are the prime movers of the piece, vivifying individual gestures, while timbre functions as a distinct layer on top. A different and equally rich use of JI is to employ the timbres inherent to the intervals themselves as an orchestrational device.

Intervals derived from different primes of the harmonic series have remarkably distinctive characters.

When sounded simultaneously (e.g., as double-stops), intervals derived from different primes of the harmonic series have remarkably distinctive characters. For example, ratios of three might sound “earthy,” ratios of five “sweet,” ratios of seven “restless,” and ratios of eleven “screamy.” Regardless of which subjective terms one might choose to describe them, the ear can readily identify these fields of harmony, and the fields are defined by prime number families. The idea of using JI dyads as color is employed beautifully in the recent works of Alex Mincek, another of my Wet Ink Ensemble colleagues whose artistry has inspired and shaped my musical practice. Alex is a superb saxophonist, and his relationship to musical material as a composer is rooted in the tactile immediacy of his experience as a performer. Alex often talks of instrumental “malfunction” as a gateway to finding novel forms of expression. For example, playing a simple scale on the saxophone while depressing one extra key yields, instead of single notes, a string of shimmering multiphonics.
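One way to see why these prime families register as distinct grids is to measure each ratio in cents (hundredths of an equal-tempered semitone) against its nearest ET neighbor. A small sketch; the interval labels are conventional names, not terms from this essay:

```python
import math

def cents(ratio: float) -> float:
    """Size of a frequency ratio in cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

# One representative dyad per prime family, paired with the size
# in cents of its nearest equal-tempered (ET) neighbor:
families = [
    ("3:2 perfect fifth (prime 3)",     3 / 2,  700),
    ("5:4 major third (prime 5)",       5 / 4,  400),
    ("7:4 harmonic seventh (prime 7)",  7 / 4, 1000),
    ("11:8 raised fourth (prime 11)",  11 / 8,  600),
]
for name, ratio, et in families:
    c = cents(ratio)
    print(f"{name}: {c:6.1f} cents, {c - et:+.1f} from ET")
```

The fifth misses its ET neighbor by only about 2 cents, the 5:4 third by roughly 14, the 7:4 seventh by about 31, and the 11:8 interval by nearly a quartertone (about 49 cents), which is part of why each prime family is so immediately identifiable by ear.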

Performing JI dyads on the violin is not exactly a “malfunction,” but it is a filtering of something rather simple (that is, diatonic intervals) that yields a widely varied and extremely colorful result. In the first movement of Mincek’s Harmonielehre for violin and piano, the harmonic progression that drives the form of the piece is filtered by JI inflections of violin dyads up to the 11th partial. This relationship is further enlivened by the equal-tempered tuning of the piano – ET intervals that are inherently complex and beating interact with the violin’s JI intervals in fascinating and unpredictable ways. The range of timbral possibilities opened up by these myriad combinations allows Mincek to richly orchestrate using spare musical means.

Alex Mincek: Harmonielehre I (excerpt), performed by Josh Modney and Eric Wubbels on Mincek’s Torrent album (Sound American Publications)

Parsing JI into these components – harmonic, melodic, timbral – is helpful for me as a performer and enriches my appreciation for the music, though I acknowledge that attempting to parameterize something as unified as JI is somewhat arbitrary, and in another sense each of the pieces I’ve used as examples employs JI in an immersive, all-inclusive way. But what I really appreciate is how these composers have used JI with such intention. Far from a catalog of arcane tunings or a compendium of “crazy sounds,” each of these pieces by Wubbels, Brook, and Mincek uses JI as a means of human expression and as a tool for musicality rather than as an academic exercise.

The expressive potential of JI is perhaps what draws me to it most.

The expressive potential of JI is perhaps what draws me to it most. The timbre-characters of the primes are so strong, so affecting. I imagine that this is part of our nature, some fundament of the universe that is embedded in us on an evolutionary scale. But there is also almost certainly a cultural component – as modern listeners and musicians we are steeped in Equal Temperament from an early age, and the differentness of JI intervals combined with their unimpeachable internal logic may be what makes them so moving.

This expressivity is a big part of what makes the pieces by Wubbels, Brook, Mincek, and many other favorites of mine by composers such as Kate Soper and Sam Pluta so compelling. I think practices like JI that add such life and interest to pioneering contemporary works can be a part of a holistic musical approach, an overarching performance practice that has just as much to offer when projected back on established repertoire—as trumpeter/composer Nate Wooley writes, a “model of how the new cannot only move forward, but to be active in 360 degrees and three dimensions.” For the past six years, I’ve been working on applying my study and practice of JI to a reinterpretation of Bach’s famous Sonatas and Partitas for solo violin. My Just Intonation Bach project began as an analytical challenge, a search for tuning ratios that would reinforce harmonic motion already implied by the notation. This intellectual exploration quickly developed into an intuitive, creative process that considers how juxtapositions of tunings and timbres might heighten the emotional qualities of the music, and how a contemporary approach to sound production on the violin might illuminate hidden details. The fissures that form between prime interval families open up endless expressive possibilities. When applied to Bach on a micro level, dissonances are heightened, resolutions sweetened. On a phrase level, zones of light and dark are revealed. Globally, the shift between key areas takes on a wrenching immediacy.

J.S. Bach: Ciaccona (1720) with Just Intonation, performed by Josh Modney on Engage (New Focus Recordings)

Just Intonation holds great potential for string players, as both a creative and a pedagogical tool. It has certainly changed the way I think about my instrument and music in general. The techniques described in this writing, and the artists who are so creatively employing them, have wonderfully enriched my own musical practice and are contributing beautifully to the kind of broad and inclusive approach to musicmaking that I’d like to see define the role of the 21st-century classical musician.

Jazz and Classical—Musical, Cultural, Listening Differences

Early next year a CD will be released featuring my compositions on Nonesuch Records. I’m very excited about the recording, which features Joshua Redman, one of today’s greatest working jazz musicians, as well as Brooklyn Rider, one of today’s most brilliant classical string quartets. (The equally brilliant jazz bassist Scott Colley and percussionist Satoshi Takeishi round out the ensemble.) This project represents a high-water mark in my genre-blending work, and offers an occasion to reflect on the differences and similarities between these two ways of making music. I’ve had sustained and rich experiences in both musical styles over the years, so I’ve had a chance to observe some general attributes of musicians who have been trained in each genre, and compare and contrast the two. For me the differences can be boiled down to a difference in musical culture.

The more of the rules you know, the deeper your understanding of them, the more you have the impression of belonging to the tribe.

Musical culture is something that is acquired gradually over a long period of study and practice within a given genre. It comes along with a set of dos and don’ts that become quite deep-seated. The more of the rules you know, the deeper your understanding of them, the more you have the impression of belonging to the tribe. Fractures and variations on these rules can occur at the level of the sub-genre. If jazz musicians think fundamentally differently than classical musicians, it must be said that “fusion” jazz musicians think quite differently than “straight-ahead” or “avant-garde” jazz musicians. The same goes for classical—world-class Mozart interpreters can stumble when tackling, say, Ravel. And the gulf between new music interpreters and more mainstream interpreters of the classical repertoire can seem vast.

It’s an obvious metaphor for political division—and I do think that stylistic preferences in music are a kind of politics played out in the abstract. People align themselves with one or another musical culture, and, though they may spend hours rationalizing their preferences, the basis for such adherence involves something much more primal. For someone who is into swing, something that doesn’t swing according to their definition can offend their sensibilities in a way that totally and completely bypasses the intellect.


So the problem of merging musicians from two genres that seem far apart is in fact a diplomatic challenge, not that different from the problem of merging sensibilities within any group.  It starts with a really clear, non-judgmental understanding of the differences, both musical and psychological. Here are six areas in which classical and jazz musicians vividly differ:

1. Rhythm. There is no more marked area of difference between classically trained players and players trained in jazz than the domain of rhythm. Jazz musicians prioritize above all else a kind of steadiness of pulse, a consistency of rhythmic placement. They worship at the shrine of the eighth note, the sixteenth note. You can call this an orientation toward groove, or a metronomic approach—though, even if it begins from a principle of total evenness, it ultimately transcends the metronomic and goes to the realm of feel, that is to say each person’s own individualized approach to this evenness, to subdivision.

Very few classical musicians I’ve worked with have even heard of this idea of feel, and even the ones with good rhythm don’t obsess over it to the point that jazz musicians need to in order to obtain an expected level of competence. So to a jazz musician, the classical musician’s sense of rhythm can seem bafflingly substandard.

But in fact this needs to be understood in a completely different way. Classical musicians simply look at rhythm differently. They see it as an expressive element. By stretching the pulse one way or the other, they can support the longer musical line, which to them is of highest importance. The irony here is that jazz musicians’ use of rhythm is in a way LESS expressive than that of classical musicians. That expression is re-injected on the subtle level of feel—and indeed the best jazz soloists do make expressive use of time, by laying back against the beat or floating over it, but these effects work precisely because they create tension against an underlying pulse that is unchanging. Actual tempo fluctuation is strictly to be avoided. This is why, while it may be very difficult to get classical players to groove, it’s equally challenging to get jazz players to effect a convincing rubato.

2. Dynamics. When shading a phrase, when injecting drama into their performances, classical musicians obviously make frequent recourse to dynamics. Jazz musicians, uh, not so much! I remember in one of our rehearsals that Colin Jacobsen asked Josh Redman what dynamic he was playing at a certain passage. Josh grinned sheepishly and said, “Jazz musicians don’t really use dynamics.”  He wasn’t far from the truth—many jazz players, especially horn players, play at a fairly static volume. There certainly isn’t any established tradition of crescendo and diminuendo, outside the world of big band.

The overall dynamic of jazz is much louder than that of classical music, at least at the chamber music level. This is probably because of the prominence of the drum set in jazz, which is extremely loud compared to any chamber instrument (and has gotten considerably louder with the advent of rock music) and tends to play at a fairly consistent volume. To compete with this, other jazz musicians have gotten accustomed to playing at louder volumes, as well as becoming habituated to electronic amplification. Jazz saxophonists play at or above the volume of a classical trumpet, so when they suddenly have to play with a string quartet, they have to play around 1/8 their normal volume to blend!

3. Tone and Intonation. Jazz musicians can be obsessive about their sound and their tone quality, but overall I would say it’s less a priority than it is in the classical world. Sometimes jazz musicians also go for bigger rather than better in this regard, for the above-stated reasons.

In this category perhaps should be included things like vibrato. For a string player, vibrato is at the core of their playing, and vibrato practice is an important part of their musical development. Jazz musicians practice vibrato much less, and consequently have much less control, far less variety of speed and amplitude. It’s simply not used as much as an expressive element.

Intonation is much less of a concern in the jazz world than in the classical world. There’s the tradition of classical musicians tuning before the concert begins; many jazz musicians just hope to be in tune by the end.

In fact, I see intonation as a kind of inverse of rhythm. For classical musicians it’s a subject of years of true obsession, and like rhythm in jazz, classical musicians view intonation as a grid. You could think of jazz musicians, conversely, as having a more expressive approach to intonation. It’s not necessarily even conscious, but with saxophone players in particular a kind of idiosyncratic intonation can become an identifiable feature. I’ve seen classical musicians listen to Coltrane from his quartet period, for example, and actually burst out laughing at the intonation. But as any Coltrane aficionado with some technical understanding would agree, that sharp, almost pinched quality in the high register is an integral part of the surging angst of the Coltrane sound.

4. The Page. No discussion of the differences between jazz and classical musicians would be complete without touching on their respective approaches to the written page. Nothing tells you more about the brain structure of a musician than watching them try to negotiate written music.

Classical musicians tend to automatically inject expression into music they read. They understand well that written music is meant to be interpreted, and tend to be comfortable doing just that. I’m often amazed at how a classically trained musician can bring a page of written music so vividly to life, often without even understanding it! Their instincts in this regard tend to be highly developed.

Jazz musicians, by contrast, who are not as accustomed to reading, treat the enterprise with trepidation, and they can be really uptight about just getting the right notes. With fear and anxiety as their jumping off points, their interpretations of written music can be astonishingly leaden, played with all the joy and verve of a high school student who’s just been sent to detention.

This has to do with the relationship between theory and practice. For the jazz musician, theory and practice are inseparable—to be a successful improviser means to have integrated the two, there can be no other way. As such it’s very difficult to play anything without understanding its theoretical meaning.

On the other hand, you can be an entirely competent classical musician—I’ve seen this on many occasions—without having the slightest idea what is motivating the music you’re playing from a theoretical perspective.

This divorce of the theoretical from the practical does have the benefit of encouraging a more literary, imagistic, extra-musical approach, which can be a good thing—since after all, music really does have emotive, personal, narrative, and ultimately cultural meaning, beyond notes and rhythms, and that meaning is arguably even the most important of music’s qualities. But it also raises issues of legitimacy—anyone can give any interpretation to a piece of music, and since this is a very subjective quality, it’s harder to assess.

5. Improvisation. If classical musicians excel at rendering a written passage in musical fashion, their stumbling block tends to be improvisation. In the inverse situation to jazz musicians reading, classical musicians tend to be uncomfortable when asked to improvise. And they should be, because to improvise really well takes a lot more work than is generally understood.

In the inverse situation to jazz musicians reading, classical musicians tend to be uncomfortable when asked to improvise.

Improvisation is not merely a set of rules or precepts, or even a feeling of freedom—it is, again, a specific culture. It’s like a language. If I asked you to speak Chinese, you might try to do so with passion and vigor, but that wouldn’t really get you anywhere unless you studied it seriously for quite a while. In fact, it would take years to learn to speak it, and depending at what age you did so, you might never sound credibly like a native.

In jazz, performance and composition are organically intertwined. It’s the soloist’s voice that makes the music unique. In classical music, by contrast, a good piece played by a less-than-stellar musician can yield an at least intellectually interesting, if not aesthetically satisfying, result far more often than a less-than-stellar piece played by a great musician can. Technical flaws recede because, after all, the performer is simply the medium through which the composer imparts the musical message. It’s like listening to music on a great home stereo vs. cheap computer speakers—the difference may be glaring to the sensitized few, but for the most part the music comes through.

6. Shared References. The other thing that’s palpably different between jazz and classical musicians has to do with specific musical references. What did you play 1000 times in high school to the point that you now roll your eyes every time you hear it—Beethoven’s 1st Symphony or “Blue Bossa”? Those shared references, even as we may mock them, form a cultural substrate that actually plays a surprisingly big role in how we interact on a day-to-day basis.


Differences in Listening

If practicing these two genres entails basic differences, there is also a fundamentally different way of listening to them.

Since my early training was in jazz, for me listening to jazz is easier—and takes less mental strength—than listening to classical music. Listening to classical music, as so many introductory courses tell us, requires a basic understanding of form and sub-genre. Form—sonata and rondo, minuet and scherzo, and so forth—needs to be understood before the music can be properly ingested. Key relations also play an important role, so knowing exactly which pitches are being played is helpful in following the compositional narrative.

In jazz, by contrast, forms are based on the chaconne-like repetition of a series of chords, over which improvisations are played. The improvisations create the variation, and so in some sense the music is not travelling; it always comes back, again and again, to the same place.

I’ve noticed that the underlying repetitive structure of jazz can be really difficult to hear for people who are not initiated into its language. Traditional jazz, which is based on 12- or 32-bar forms and archetypal harmonic sequences, is something that the seasoned jazz musician, by dint of working in these forms over and over again, comes to hear intuitively. I can be at a jazz club listening to a group play standards, and I can be conversing with someone while simultaneously knowing exactly where I am in the form of whatever tune is being played. This process of listening becomes very natural, and then it becomes the basis of the assessment of how the soloist is playing. How is the soloist’s sound? How are the ideas—are they original, are they spontaneous? What is the level of interaction between soloist and rhythm section?

Even with new jazz composition, this formal repetition most often remains. The forms may be exotic, but they’re almost guaranteed to repeat at some point, to form a basis for improvisation.

Even the idea of repetition is different in classical music and jazz. Whereas in classical music a repetition tends to be strict, in jazz even a repeated melody is constantly varied both in the melody and the accompaniment. Thus jazz is both more repetitive and more flexible in its means (although this strictness of repetition in classical music has been challenged of late by early music specialists).

This compendium of differences between the cultures of jazz and classical musicians is a source of ever-increasing fascination to me. I used to feel frustrated when a violinist couldn’t play a groove, or when a jazz pianist froze up in front of a written passage. But really these are just manifestations of differences in brain structure, differences in training, and ultimately differences in culture. When you incorporate people with such differences into your music in an adroit way, you can—instead of losing something—augment your resources to create an art that’s tremendously multifaceted and rich, that celebrates and even thrives on difference.

On Readers, Fakers, Bakers, Writers, & Ruptures

Howie Leess (1920-2003) was one of the most upright, good-hearted musicians I’ve ever had the pleasure of working with, and his playing inspired me to form my first full band. I’d been told about his superb Yiddish clarinet stylings even before I heard him myself, but even so Howie’s powerfully nuanced playing was a revelation to my ears, especially his signature doyna (rubato solo) which evoked a soulful era that had nearly disappeared a generation before. He was a hard worker who’d started professionally in his early teens during the Depression on bandstands in “the mountains” [Catskills venues], and he was thrilled to adopt email (all-caps always, to save typing time) as a septuagenarian because it went instantly around the world for free – yet this man would turn down a gig if it seemed unethical to him because he’d “rather sleep well at night” than be a back-stabber. So it confused me at first to hear Howie described as both “a reader and a faker,” even if I could sense this was meant as a compliment.

Slightly younger contemporaries of Howie’s gave me other names for him, too: He was “the Mountain Goat” among guys who were regulars for decades in Lester Lanin’s cocktail and debutante-geared orchestra (because on tenor sax, he’d find his own inside parts to climb around Big Band standard reed arrangements). He was also “the fifth Epstein Brother” (several non-blood relatives who played often with that esteemed klezmer family claimed such a title, and it was a lucrative mantle since Hasidim immigrating after WWII chose this local band as a favorite for their Brooklyn weddings, which could take place any of six nights a week). I found out later that Howie was himself a serious left-winger who had little use for most of the rabbis he met, but I’d already witnessed that out of respect he would finish his coffee outside in the rain rather than risk violating kosher rules by bringing a cup with milk into a synagogue. He was a “Jewish specialist” for the Society bands, and an “American specialist” for a klezmer kapelye. His own craft at Yiddish music had been learned from several klezmer greats who came from Eastern Europe in the early 20th century, though he was born in the USA; as one contemporary music historian told me, Howie was featured on klezmer revival recordings on tenor since he was equally skilled there, even though he could also play rings around most other clarinetists of his generation. All of this qualified him as an ace in both reading and faking, and he was mainly below the radar of fame but in demand into his 80s.


Howie Leess (center, holding clarinet) with the rest of the original personnel of the Greater Metropolitan Klezmer Band (our name for the first six months or so), left to right: Michael Hess (violin), Dave Hofstra (bass), Ismail Butera (accordion), and Eve Sicular (drums). Photo by Donna Binder.

People who speak multiple languages often feel different sides of their personalities emerge in each idiom. Similarly, musicians who perform various genres of music can express each style with their personal feel once they are at home in it, while their vocabulary and accent may reflect certain places of origin even as they move from consciously translating to more fully inhabiting another sonic culture. This process continues to shape experience and expression as each person learns their repertoire, its character, and how it interplays with the surrounding habitat of humans: dances, lyrics, jokes, ceremonies, customs, histories, venues, and the shared heritage of other musicians involved. Howie’s experience seemed to make him completely bilingual in a wide swath of the American songbook as well as Yiddish and Hasidic repertoire. I was awed at Howie’s sound and his command of a room of dancers. And I’ll never forget that when I invited him to make a demo recording for our prospective new group, he said: “Sure! I love when a woman runs the business.”

There’s a balance between what can be understood by eyes and by ears.

In the working musical world described in Subversive Sounds: Race and the Birth of Jazz in New Orleans, Charles B. Hersch finds that art and commerce would both be served by a band that “transcended the usual lines, uptown and downtown, black and Creole, honky-tonk and society, readers and fakers – by being able to read music and improvise in whatever style was needed, and thus flourished professionally.” Quoting Scott DeVeaux, he then lists qualities of a successful musical enterprise in this context: “Dependability, versatility and unobtrusive competence.” Keeping an ensemble together on an ongoing and harmonious basis depends on many things, but these fundamentals still hold true in our NYC-based, wide-ranging experience from the mid-1990s ’til today.

Besides Howie, in my own ensembles, members have approached klezmer from both “reader” and “faker” backgrounds. Among versatile musicians, neither term connotes merely literacy itself. Certainly everyone involved in Metropolitan Klezmer or Isle of Klezbos can read printed music, transcribe tunes, and write a chart if necessary. I’m actually the least fluent in these skills, and am grateful to work collaboratively in many instances. Yet there’s a balance between what can be understood by eyes and by ears. Some of my other original bandmates, particularly our accordionist and violin/ney flute/qanun zither player, had an ear already attuned to the inner workings of Yiddish music. They had been playing related styles for years, so for Eastern European Ashkenazic musics they were often adapting this knowledge (and, as needed, their instrument tunings) and calling out melody and modal cognates between klezmer and Turkish, Greek, or Arabic songs with which they were long familiar. For instance, in discussing the tonal nature of a piece, they would refer to its being in “hijaz” (a classical Arabic scale) rather than the Yiddish term “freygish.” Later on, once Howie went into semi-retirement upstate, our horn section expanded to include other wonderful players who were from more of a conservatory background and who, since graduating from prestigious music schools, had been playing grooves more based in jazz, blues, Latin, and other diasporic traditions, often relying on charts as an initial way into a tune or arrangement. Of course their improvisational skills were constantly honed as they became ace fakers in those genres, too. Coming from a Jewish background personally did not mean that somebody was necessarily familiar with any intrinsic qualities of klezmer, although—unless they’d developed an aversion through early negative exposure to this sometimes-stigmatized heritage—it usually didn’t hurt.
And sometimes, as with my experience, hearing Yiddish and klezmer led to awakening multifarious dormant understandings.

Coming from a Jewish background personally did not mean that somebody was necessarily familiar with any intrinsic qualities of klezmer, although—unless they’d developed an aversion through early negative exposure to this sometimes-stigmatized heritage—it usually didn’t hurt.

Yiddish language, in a parallel with Yiddish music, is a fusion language—as is English, but for different reasons. While the British Isles assimilated various spoken tongues, both official and vernacular, through waves of invasion arriving over the centuries, Yiddish evolved as Ashkenazic Jewry themselves moved around Europe, generally Eastward, over a millennium, both in waves of migration and along trade routes. In naming our first Metropolitan Klezmer album “Yiddish for Travelers,” I was both alluding to the geographically variegated roots of this musical culture (with certain dance types denoted as sirbas, bulgars, terkishers, and volokhs indicating—whether musicologically accurate or not—provenance among co-territorial or neighboring peoples) and to an imaginary travelers’ handbook. The latter was in fact based on the real post-WWII Say It In Yiddish which, though seemingly ironic, gives a lovingly ordinary set of phrases for such things as checking into one’s hotel room in mameloshn. This affirmative pocket-sized volume had been published in 1958 by Uriel Weinreich and his wife Beatrice “Bina” Weinreich, who were from a renowned family of Yiddish linguists. I knew Bina in the early 1990s from working as an archivist at the YIVO Institute. (Her husband had died tragically young in 1968.) As it turns out, Say It In Yiddish also inspired novelist Michael Chabon to write first a controversially condescending essay by the same name (in which he characterized the book as “poignant and funny”) and later, in response to protests and unexpected perspectives he received in reply to his initial somewhat glib short piece, his counterfactual novel The Yiddish Policemen’s Union (2007). My own use of the transmogrified title came from an attitude not so different from Chabon’s, but with a different impulse: honoring the tradition as something to be cherished rather than writing it off with regret, dismissal, and derision.
(Coincidentally, Chabon had first spotted this glossary during research at YIVO a decade earlier—in 1997, the year we issued Metropolitan Klezmer’s debut CD.)

I feel lucky to have first heard an early live performance of the klezmer revival in the early ’80s by the Boston-based Klezmer Conservatory Band, mostly 20-somethings then onstage at Ryle’s in Somerville MA. A few years later, I flew back to Boston to work as an apprentice editor on A Jumpin’ Night in the Garden of Eden, the first feature klezmer documentary. This movie-in-progress featured the KCB among others, and I spent after-hours time studying film rushes of their drummer (pre-internet, pre-YouTube, on a 16mm Steenbeck flatbed). By the time I finally attended my first KlezKamp in December 1989, I had already picked up some Yiddish language and played a few gigs as a sub with Seattle’s beloved Mazeltones. That group’s accordionist/vocalist/co-leader, Wendy Marcus, had generously lent me many source tapes to learn style and repertoire, both from archival recordings and ’70s/’80s commercial albums. Wendy also revealed to me the delicious, completely unexpected news that the New York-based, internationally-attended KlezKamp included an informal, convivial alliance of freylekhe felker, openly LGBT Yiddishists. The idea of attending a gathering that would nourish my folkloric musical tastes as well as my progressive Jewish secular sensibility, all in a supportive environment that even extended to my sexuality, was more than I would have imagined possible. Even the queer-friendly group’s name reflected another marvelous quality of Yiddish culture that met my cravings: in a language that seeks to amuse itself, freylekh is a double entendre alluding to a famous beginner textbook line about happy/gay people. (When describing an upbeat dance style, it’s also worth noting that freylekhs is etymologically related to the English word “frolic.”)

A group photo of the KlezKamp participants, December 1993.

So while my first KlezKamp had its hitches, I experienced a certain sense of finding home even though I’d never been consciously aware of longing for this. I had no active nostalgia. My upbringing had reflected a decidedly assimilationist cultural understanding, and even covert antipathy towards Yiddish on my Mom’s side with their Viennese-Jewish upper-class family values. Nonetheless, I embody a cliché, in that it satisfied a longing for this place I’d never been. And while there is never enough space to express the many near-obliterations that befell Yiddish-speaking Jewry worldwide in the 20th century, the sense of loss and grief also deepens the sense of attachment and the significance of carrying forward this vibrant culture, and not just as a mission of preservation. It’s incredibly heartening to be aware of the level of talent, imagination, commitment, intelligence, and diversity among communities of people who feel especially inspired by this culture—past, present and future. People I met at that first visit to KlezKamp introduced me to musicians, including amazing players with whom I have now performed, toured, and recorded for over two decades. After 30 years, KlezKamp completed its run in 2014, although KlezKanada and Yidish Vokh are still flourishing each August and, in December 2015, “Yiddish New York” carried on much of its spirit, too. We always carry the despair of wondering whether our connections to Yiddish sources are still adequate after so much has been destroyed, but even a morbid point of view can be affirmingly ironic. To quote Isaac Bashevis Singer (whose Nobel Prize acceptance speech was spoken in his mother tongue), “Yiddish has been dying for 100 years. My prediction is that it will keep on dying for the next 100 years.”

The process of understanding Yiddish music and literature shows a repeating pattern of rescue and re-creation.

The process of understanding Yiddish music and literature shows a repeating pattern of rescue and re-creation. Many of the most famous authors, even from the late 19th and early 20th centuries, were either not initially fluent in the language and/or discouraged from taking it seriously as a worthy medium of published thought. In musical spheres, “klezmer” was often a derogatory term for an inferior, unschooled musician (despite many amply gifted and some formally-educated players among the ranks of traditional and sometimes dynastic klezmorim). Yet by the time Russian Jewish musicians were finally admitted to Tsarist-era conservatories (in disproportionately-high numbers, especially on violin), their arrival coincided with the Romantic-era search for national identity, which led to active movements of collecting Yiddish folk melodies and creating art music. So prodigies in composition and performance completely steeped in Western classical traditions were coming back to their roots, some with more active connections than others. (Joel Engel, a co-founder of the St. Petersburg Society for Jewish Folk Music, came from a wholly-assimilated family, but other prominent members included sons of a cantor, a rabbi, and a klezmer bandleader.) Joseph Achron, Mikhail Gnessin, and Alexander Krein were among those creating “elevated” settings for Yiddish traditional melodies, and original Hebraic-inspired pieces. While revolution, pogroms, assimilation, Stalin’s anti-Semitic purges, and the Holocaust were among the forces that dispersed this movement, many of its most active protagonists went on for decades with varying degrees of compositional output. Among the places they ended up were Hollywood, New York’s Temple Emanuel, the Soviet Union, and, in passing, Palestine.

An historic photo of Joseph Achron

Joseph Achron (1886-1943) arrived in Ellis Island on December 31, 1924 and remained in the USA until his death in Los Angeles on April 29, 1943. For more information, visit the Joseph Achron Society.

These art music adaptations of Ashkenazic Jewish traditions—paralleling the creative efforts of Dvořák, Bartók, Kodály, Janáček, etc.—were primarily an approach by “readers” (albeit often well-informed, versatile performers themselves quite familiar with traditional musical spirit and context) drawing heavily on heritage generated mainly by “fakers” (who were also actively sought out and recorded in collecting expeditions utilizing literally cutting-edge technologies of the time, such as Edison wax cylinders). Into the late 20th and 21st centuries, I am glad to also see and hear Yiddish music finding new life created by and for faker-readers with wide frames of cultural, tuneful reference. And in the case of my own bands, the people who seem to be writing the most original compositions are those who began more on the reading edge, while those who began more on the faking side are invaluable references and style guides. Those bandmates who began closer to the roots of the music have meanwhile made fascinating musical translations among related genres, while we are all also involved in myriad multi-faceted projects.

In the 21st century, we are certainly ready to shift even beyond the best impulses of revivalism. First, permit me to relate a tantalizing tale of innovation, tradition, delight, and dangling unfulfilled promise. I moved back to New York in 1990 to more fully pursue my klezmer aspirations, but stayed in touch with many dear people in the Pacific Northwest. My friend Trudi stayed in Seattle and, to my surprise, opened a flourishing business called Sweet Lorraine’s, drawing on her childhood memories of Mr. Moskowitz’s Jewish bakery in Detroit.

Trudi had worked in many fields before, but to make this dream come true, she was fortunate to be able to go back first and apprentice with Moskowitz himself. As she told me, three things made her enterprise successful. The first, of course, was the recipes and secret techniques her mentor was willing to share with her; opening her place over a thousand miles away, she wasn’t exactly his direct competitor. Second, as fondly as she recalled the tastes of his treats, she realized that they could even be improved upon simply by upgrading the ingredients: The original versions, while delectable already, were based on using the cheap stuff (and also may have been constrained by keeping to a kosher-neutral pareve formula). Trudi’s innovation would be to use the freshest grains, the lushest dried fruits, the finest eggs, and butter… not margarine. Thirdly, while not trying to market these products to a strictly observant Jewish clientele, she was actively celebrating her ethnic cultural heritage in the fairly white-bread but burgeoning foodie environs of Seattle circa 2002. Her authentic yet expanded approach immediately caught on, her rugelach, dark loaves, and challahs sold to a devoted following far beyond the bakery’s Magnolia neighborhood storefront, and Sweet Lorraine’s—named, if I recall correctly, for Trudi’s own mother—was a hit for all the sixteen months it lasted. Who knows what flavor and menu alchemies Trudi might have been inspired to create if her grand revivalist culinary dream had continued? Already she had studded her macaroons with pine nuts. Sadly, constraints of capital and a lease non-renewal brought this experiment to a premature close, perhaps years before the word artisanal came into hipster parlance. (I am keenly aware that any Yiddish-culture essay bringing food into focus runs the risk of inviting kitsch or shtick. And look, they’re even anagrams! But I’ll shake off the fear of being interpreted as the former; after all, proper deployment of the latter is really an art.) But Trudi’s legacy, even if less enduring than the NYC-based Levy’s Rye Bread ads, had an even higher caraway quality quotient, and a deliberate but classy register of camp. Her awareness of fantastic, earthy delicacies, and her clear ideas of production and merchandising, brought wonders for fortunate customers and employees while the place lasted.

The cover for the Isle of Klezbos' debut CD.

The cover for the Isle of Klezbos’ debut CD. You can hear it here.

In the same year Trudi’s bakery opened, Metropolitan Klezmer recorded its third album, Surprising Finds, and the band’s “sister sextet,” Isle of Klezbos, went into the studio for our debut, Greetings from the Isle of Klezbos, at which point I felt I had found my own musical identity within this idiom. But, even by the release of Metropolitan Klezmer’s second disc (which was recorded in 2000 and released in 2001), I feel the band’s creative tendencies beyond high-quality revivalism are already evident, in particular a distinct minimum of shtick—one track out of 16—and zero kitsch to my ear anyway. That disc’s title, Mosaic Persuasion (a striking phrase I had also come across while working at YIVO), is a double entendre. On the one hand, the term is an archaic euphemism for Jewish, referring to people of the “Mosaic” faith, as in the Five Books of Moses. (As I subsequently learned, in German usage a similar adjective can indicate Jews when the word jüdisch seems too heavily loaded.) But the added wordplay for me alludes to a beautiful optimism voiced by NYC’s then mayor, David Dinkins, when as a candidate he spoke of the city’s society as a “gorgeous mosaic.” Like the ad campaign of decades earlier (all those adorable and distinctly assorted goyim wryly posed with their slice of rye and the caption “You don’t have to be Jewish to love Levy’s”), Dinkins strove to highlight a vision of unity among distinct and proud cultures living together, benefiting from each other’s harmonious proximity and interchange.

What to Ware? A Guide to Today’s Technological Wardrobe

Circuitry for Salvage

Circuitry for Salvage (Guiyu Blues), 2007. First version of design, housed in VHS tape box. 12 probes for linking to dead circuit board to be re-animated. Rotary switches select frequency range of each of six oscillator voices. Photo by Simon Lonergan.

At some point in the late 1980s the composer Ron Kuivila told me, “we have to make computer music that sounds like electronic music.” This might appear a mere semantic distinction. At that time the average listener would dismiss any music produced with electronic technology—be it a Moog or Macintosh—as “boops and beeps.” But Kuivila presciently drew attention to a looming fork in the musical road: boops and beeps were splitting into boops and bits. Over the coming decades, as the computer evolved into an unimaginably powerful and versatile musical tool, this distinction would exert a subtle but significant influence on music.

Kuivila and I had met in 1973 at Wesleyan University, where we both were undergraduates studying with Alvin Lucier. Under the guidance of mentors such as David Tudor and David Behrman, we began building circuits in the early 1970s, and finished out the decade programming pre-Apple microcomputers like the KIM-1. The music that emerged from our shambolic arrays of unreliable homemade circuits fit well into the experimental aesthetic that pervaded the times. (The fact that we were bad engineers probably made our music better by the standards of our community.) Nonetheless we saw great potential in those crude early personal computers, and many of us welcomed the chance to hang up the soldering iron and start programming.[1]

The Ataris, Amigas, and Apples that we adopted in the course of the 1980s were vastly easier to program than our first machines, but they still lacked the speed and processor power needed to generate complex sound directly. Most “computer music” composers of the day hitched their machines to MIDI synthesizers, but even the vaunted Yamaha DX7 was no match for the irrational weirdness of a table strewn with Tudor’s idiosyncratic circuits arrayed in unstable feedback matrices. One bottleneck lay in MIDI’s crudely quantized data format, which had been optimized for triggering equal-tempered notes and was ill suited for complex, continuous changes in sound textures. On a more profound level, MIDI “exploded” the musical instrument, separating sound (synthesizer) from gesture (keyboard, drum pads, or other controller)—we gained a Lego-like flexibility to build novel instruments, but we severed the tight feedback between body and sound that existed in most traditional, pre-MIDI instruments and we lost a certain degree of touch and nuance[2].

MIDI no longer stands between code and sound: any laptop now has the power to generate directly a reasonable simulation of almost any electronic sound—or at least to play back a sample of it. Computer music should sound like electronic music. But I’m not sure that Kuivila’s goal has yet been met. I still find myself moving back and forth between different technologies for different musical projects. And I can still hear a difference between hardware and software. Why?

Most music today that employs any kind of electronic technology depends on a combination of hardware and software resources. Although crafted and/or recorded in code, digital music reaches our ears through a chain of transistors, mechanical devices, speakers, and earphones. “Circuit Benders” who open and modify electronic toys in pursuit of new sounds often espouse a distinctly anti-computer aesthetic, but the vast majority of the toys they hack in fact consist of embedded microcontrollers playing back audio samples—one gizmo is distinguished from another not by its visible hardware but by the program hidden inside a memory chip on the circuit board. Still, while a strict hardware/software dialectic can’t hold water for very long, arrays of semiconductors and lines of code are imbued with various distinctive traits that combine to determine the essential “hardware-ness” or “software-ness” of any particular chunk of modern technology.

Some of these traits are reflected directly in sound—with sufficient attention or guidance, one can often hear the difference between sounds produced by a hardware-dominated system versus those crafted largely in software. Others influence working habits—how we compose with a certain technology, or how we interact with it in performance; sometimes this influence is obvious, but at other times it can be so subtle as to verge on unconscious suggestion. Many of these domain-specific characteristics can be ignored or repressed to some degree—just as a short person can devote himself to basketball—but they nonetheless affect the likelihood of one choosing a particular device for a specific application, and they inevitably exert an influence on the resulting music.

I want to draw attention to some distinctive differences between hardware and software tools as applied to music composition and performance. I am not particularly interested in any absolute qualities inherent in the technology, but in the ways certain technological characteristics influence how we think and work, and the ways in which the historic persistence of those influences can predispose an artist to favor specific tools for specific tasks or even specific styles of music. My observations are based on several decades of personal experience: in my own activity as a composer and performer, and in my familiarity with the music of my mentors and peers, as observed and discussed with them since my student days. I acknowledge that my perspective comes from a fringe of musical culture and I contribute these remarks in the interest of fostering discussion, rather than to prove a specific thesis.

I should qualify some of the terms I will be using. When I speak of “hardware” I mean not only electronic circuitry, but also mechanical and electromechanical devices from traditional acoustic instruments to electric guitars. By “software” I’m designating computer code as we know it today, whether running on a personal computer or embedded in a dedicated microcontroller or Digital Signal Processor (DSP). I use the words “infinite” and “random” not in their scientific sense, but rather as one might in casual conversation, to mean “a hell of a lot” (the former) and “really unpredictable” (the latter).


The Traits

Here are what I see as the most significant features distinguishing software from hardware in terms of their apparent (or at least perceived) suitability for specific musical tasks, and their often-unremarked influence on musical processes:

    • Traditional acoustic instruments are three-dimensional objects, radiating sound in every direction, filling the volume of architectural space like syrup spreading over a waffle. Electronic circuits are much flatter, essentially two-dimensional. Software is inherently linear, every program a one-dimensional string of code. In an outtake from his 1976 interview with Robert Ashley for Ashley’s Music With Roots in the Aether, Alvin Lucier justified his lack of interest in the hardware of electronic music with the statement, “sound is three-dimensional, but circuits are flat.” At the time Lucier was deeply engaged with sound’s behavior in acoustic space, and he regarded the “flatness” of circuitry as a fundamental weakness in the work of composers in thrall to homemade circuitry, as was quite prevalent at the time. As a playing field for sounds a circuit may never be able to embody the topographic richness of standing waves in a room, but at least a two-dimensional array of electronic components on a fiberglass board allows for the simultaneous, parallel activity of multiple strands of electron flow, and the resulting sounds often approach the polyphonic density of traditional music in three-dimensional space. In software most action is sequential, and all sounds queue up through a linear pipeline for digital to analog conversion. With sufficient processor speed and the right programming environment one can create the impression of simultaneity, but this is usually an illusion—much like a Bach flute sonata weaving a monophonic line of melody into contrapuntal chords. Given the ludicrous speed of modern computers this distinction might seem academic—modern software does an excellent job of simulating simultaneity. Moreover, “processor farms” and certain DSP systems do allow true simultaneous execution of multiple software routines. 
But these latter technologies are far from commonplace in music circles and, like writing prose, the act of writing code (even for parallel processors) invariably nudges the programmer in the direction of sequential thinking. This linear methodology can affect the essential character of work produced in software.
    • Hardware occupies the physical world and is appropriately constrained in its behavior by various natural and universal mechanical and electrical laws and limits. Software is ethereal—its constraints are artificial, different for every programming language, the result of intentional design rather than pre-existing physical laws. When selecting a potentiometer for inclusion in a circuit, a designer has a finite number of options in terms of maximum resistance, curve of resistive change (i.e., linear or logarithmic), number of degrees of rotation, length of its slider, etc.—and these characteristics are fixed at the point of manufacture. When implementing a potentiometer in software, all these parameters are infinitely variable, and can be replaced with the click of a mouse. Hardware has real edges; software presents an ever-receding horizon.
    • As a result of its physicality, hardware—especially mechanical devices—
      often displays non-linear adjacencies similar to state-changes in the natural world (think of the transition of water to ice or vapor). Pick a note on a guitar and then slowly raise your fretting finger until the smooth decay is abruptly choked off by a burst of inharmonic buzzing as the string clatters against the fret. In the physical domain of the guitar these two sounds—the familiar plucked string and its noisy dying skronk—are immediately adjacent to one another, separated by the slightest movement of a finger. Either sound can be simulated in software, but each requires a wholly different block of code: no single variable in the venerable Karplus-Strong “plucked string algorithm”[3] can be nudged by a single bit to produce a similar death rattle; this kind of adjacency must be programmed at a higher level and does not typically exist in the natural order of a programming language. Generally speaking, adjacency in software remains very linear, while the world of hardware abounds with abrupt transitions. A break point in a hardware instrument—fret buzz on a guitar, the unpredictable squeal of the STEIM Cracklebox—can be painstakingly avoided or joyously exploited, but is always lurking in the background, a risk, an essential property of the instrument.
    • Most software is inherently binary: it either works correctly or fails catastrophically, and when corrupted code crashes the result is usually silence. Hardware performs along a continuum that stretches from the “correct” behavior intended by its designers to irreversible, smoky failure; circuitry—especially analog circuitry—usually produces sound even as it veers toward breakdown. Overdriving an amplifier to distort a guitar (or even setting the guitar on fire), feeding back between a microphone and a speaker to play a room’s resonant frequencies, “starving” the power supply voltage in an electronic toy to produce erratic behavior. These “misuses” of circuitry generate sonic artifacts that can be analyzed and modeled in software, but the risky processes themselves (saturation, burning, feedback, under-voltage) are very difficult to transfer intact from the domain of hardware to that of software while preserving functionality in the code. Writing software favors Boolean thinking—self-destructive code remains the purview of hackers who craft worms and Trojan Horses for the specific purpose of crashing or corrupting computers.
    • Software is deterministic, while all hardware is indeterminate to some degree. Once debugged, code runs the same almost all the time. Hardware is notoriously unrepeatable: consider recreating a patch on an analog synthesizer, restoring mixdown settings on a pre-automation mixer, or even tuning a guitar. The British computer scientist John Bowers once observed that he had never managed to write a “random” computer program that would run, but was delighted when he discovered that he could make “random” component substitutions and connections in a circuit with a high certainty of a sonic outcome (a classic technique of circuit bending).
    • Hardware is unique, software is a multiple. Hardware is constrained in its “thinginess” by number: whether handcrafted or mass-produced, each iteration of a hardware device requires a measurable investment of time and materials. Software’s lack of physical constraint gives it tremendous powers of duplication and dissemination. Lines of code can be cloned with a simple cmd-C/cmd-V: building 76 oscillators into a software instrument takes barely more time than one, and no more resources beyond the computer platform and development software needed for the first (unlike trombones, say). In software there is no distinction between an original and a copy: MP3 audio files, PDFs of scores, and runtime versions of music programs can be downloaded and shared thousands of times without any deterioration or loss of the matrix—any copy is as good as the master. If a piano is a typical example of traditional musical hardware, the pre-digital equivalent of the software multiple would lie somewhere between a printed score (easily and accurately reproduced and distributed, but at a quantifiable—if modest—unit cost) and the folk song (freely shared by oral tradition, but more likely to be transformed in its transmission). Way too many words have already been written on the significance of this trait of software—of its impact on the character and profitability of publishing as it was understood before the advent of the World Wide Web; I will simply point out that if all information wants to be free, that freedom has been attained by software, but is still beyond the reach of hardware. (I should add that software’s multiplicity is accompanied by virtual weightlessness, while hardware is still heavy, as every touring musician knows too well.)
    • Software accepts infinite undo’s, is eminently tweakable. But once the solder cools, hardware resists change. I have long maintained that the young circuit-building composers of the 1970s switched to programming by the end of that decade because, for all the headaches induced by writing lines of machine language on calculator-sized keypads, it was still easier to debug code than to de-solder chips. Software invites endless updates, where hardware begs you to close the box and never open it again. Software is good for composing and editing, for keeping things in a state of flux. Hardware is good for making stable, playable instruments that you can return to with a sense of familiarity (even if they have to be tuned)—think of bongos or Minimoogs. The natural outcome of software’s malleability has been the extension of the programming process from the private and invisible pre-concert preparation of a composition, to an active element of the actual performance—as witnessed in the rise of “live coding” culture practiced by devotees of SuperCollider and Chuck programming languages, for example. Live circuit building has been a fringe activity at best: David Tudor finishing circuits in the pit while Merce Cunningham danced overhead; the group Loud Objects soldering PICs on top of an overhead projector; live coding vs. live circuit building in ongoing competition between the younger Nick Collins (UK) and myself for the Nic(k) Collins Cup.
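Since the Karplus-Strong “plucked string algorithm” is invoked above, a minimal sketch may help readers who don’t know it (Python standard library only; all names and parameter values here are my own illustration, not anyone’s production code). A burst of noise recirculates through a delay line and is gently low-pass filtered on every pass, which is exactly why its decay is so smooth:

```python
import random

def karplus_strong(freq_hz, sample_rate=44100, duration=1.0):
    """Karplus-Strong plucked string: a noise burst (the "pluck")
    recirculates through a delay line whose length sets the pitch;
    averaging adjacent samples low-pass filters each pass, so the
    tone decays smoothly toward silence."""
    n = int(sample_rate / freq_hz)                       # delay-line length
    buf = [random.uniform(-1.0, 1.0) for _ in range(n)]  # initial noise burst
    out = []
    for _ in range(int(sample_rate * duration)):
        out.append(buf[0])
        buf.append(0.5 * (buf[0] + buf[1]))  # feedback through the averager
        buf.pop(0)
    return out

random.seed(0)                                 # deterministic "pluck"
samples = karplus_strong(220.0, duration=0.5)  # half a second near A3
```

Changing `freq_hz` or the 0.5 averaging coefficient alters pitch and damping continuously; producing the guitar’s abrupt fret-buzz would require a structurally different block of code, which is precisely the adjacency point made above.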

David Tudor performance setup

  • On the other hand, once a program is burned into ROM and its source code is no longer accessible, software flips into an inviolable state. At this point re-soldering, for all its unpleasantness, remains the only option for effecting change. Circuit Benders hack digital toys not by rewriting the code (typically sealed under a malevolent beauty-mark of black epoxy) but by messing about with traces and components on the circuit board. A hardware hack is always lurking as a last resort, like a shim bar when you lock your keys in the car.
  • Thanks to computer memory, software can work with time. The transition from analog circuitry to programmable microcomputers gave composers a new tool that combined characteristics of instrument, score, and performer: memory allows software to play back prerecorded sounds (an instrument), script a sequence of events in time (a score), and make decisions built on past experience (a performer). Before computers, electronic circuitry was used primarily in an instrumental capacity—to produce sounds immediately[4]. It took software-driven microcomputers to fuse this trio of traits into a powerful new resource for music creation.
  • Given the sheer speed of modern personal computers and software’s quasi-infinite power of duplication (as mentioned earlier), software has a distinct edge over hardware in the density of musical texture it can produce: a circuit is to code as a solo violin is to the full orchestra. But at extremes of its behavior hardware can exhibit a degree of complexity that remains one tiny but audible step beyond the power of software to simulate effectively: initial tug of rosined bow hair on the string of the violin; the unstable squeal of wet fingers on a radio’s circuit board; the supply voltage collapsing in a cheap electronic keyboard. Hardware still does a better job of giving voice to the irrational, the chaotic, the unstable (and this may be the single most significant factor in the “Kuivila Dilemma” that prompted this whole rant).
  • Software is imbued with an ineffable sense of now—it is the technology of the present, and we are forever downloading and updating to keep it current. Hardware is yesterday, the tools that were supplanted by software. Turntables, patchcord synthesizers, and tape recorders have been “replaced” by MP3 files, software samplers, and ProTools. In the ears, minds, and hands of most users, this is an improvement—software often does the job “better” than its hardware antecedents (think of editing tape, especially videotape, before the advent of digital alternatives). Before any given tool is replaced by a superior device, qualities that don’t serve its main purpose can be seen as weaknesses, defects, or failures: the ticks and pops of vinyl records, oscillators drifting out of tune, tape hiss and distortion. But when a technology is no longer relied upon for its original purpose, these same qualities can become interesting in and of themselves. The return to “outmoded” hardware is not always a question of nostalgia, but often an indication that the scales have dropped from our ears.
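The memory point above—software fusing instrument, score, and performer—is made concrete by the Karplus-Strong plucked-string algorithm cited in note 3: a delay line (memory) seeded with noise, recirculated through a simple average, sounds like a plucked string. Here is a minimal sketch in Python; the function name and parameters are my own, for illustration only, not drawn from the original paper.

```python
import random

def karplus_strong(frequency, duration, sample_rate=44100):
    """Pluck a virtual string: a delay line (memory) seeded with
    noise, repeatedly averaged so its vibration decays naturally."""
    period = int(sample_rate / frequency)  # delay-line length sets the pitch
    delay = [random.uniform(-1.0, 1.0) for _ in range(period)]  # the "pluck"
    out = []
    for _ in range(int(duration * sample_rate)):
        sample = delay.pop(0)
        out.append(sample)
        # Two-point average: a low-pass filter that damps high partials
        # first, mimicking how a real string's brightness fades.
        delay.append(0.5 * (sample + delay[0]))
    return out

tone = karplus_strong(440.0, 0.5)  # half a second of a plucked A440
```

The same buffer of memory plays all three roles at once: it sounds (instrument), its length fixes the pitch in advance (score), and each output sample depends on everything the loop has already played (a crude performer remembering its past).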


Lest you think me a slave to the dialectic, I admit that there are at least three areas of software/hardware crossover that deserve mention here: interfaces for connecting computers (and, more pointedly, their resident software) to external hardware devices; software applications designed to emulate hardware devices; and the emergence of affordable rapid prototyping technology.

The most ubiquitous of the hardware interfaces today is the Arduino, the small, inexpensive microcontroller designed by Massimo Banzi and David Cuartielles in 2005. The Arduino and its brethren and ancestors facilitate the connection of a computer to input and output devices, such as tactile sensors and motors. Such an interface indeed imbues a computer program with some of the characteristics we associate with hardware, but there always remains a MIDI-tinged sense of mediation (a result of the conversion between the analog and digital domains) that makes performing with these hybrid instruments slightly hazier than manipulating an object directly—think of controlling a robotic arm with a joystick, or hugging an infant in an incubator while wearing rubber gloves. That said, I believe that improvements in haptic feedback technology will bring us much closer to the nuance of real touch.

The past decade has also seen a proliferation of software emulations of hardware devices, from smart phone apps that simulate vintage analog synthesizers, to iMovie filters that make your HD video recording look like scratchy Super 8 film. The market forces behind this development (nostalgia, digital fatigue, etc.) lie outside of the scope of this discussion, but it is important to note here that these emulations succeed by focusing on those aspects of a hardware device most easily modeled in the software domain: the virtual Moog synthesizer models the sound of analog oscillators and filters, but doesn’t try to approximate the glitch of a dirty pot or the pop of inserting a patchcord; the video effect alters the color balance and superimposes algorithmically generated scratches, but does not let you misapply the splicing tape or spill acid on the emulsion.

Although affordable 3D printers and rapid prototyping devices still remain the purview of the serious DIY practitioner, there is no question that these technologies will enter the larger marketplace in the near future. When they do, the barrier between freely distributable software and tactile hardware objects will become quite permeable. A look through the Etsy website reveals how independent entrepreneurs have already employed this technology to extend the publishing notion of “print on demand” to something close to “wish on demand,” with Kickstarter as the economic engine behind the transformation of wishes into businesses. (That said, I’ve detected the start of a backlash against the proliferation of web-accessed “things”—see Allison Arieff, “Yes We Can. But Should We?”).

Some Closing Observations

Trombone-Propelled Electronics rev. 3.0, 2005. Photo by Simon Lonergan.


I came of age as a musician during the era of the “composer-performer”: the Sonic Arts Union, David Tudor, Terry Riley, La Monte Young, Pauline Oliveros, Steve Reich, Philip Glass. Sometimes this dual role was a matter of simple expediency (established orchestras and ensembles wouldn’t touch the music of these young mavericks at that time), but more often it was a desire to retain direct, personal control that led to a flowering of composer-led ensembles that resembled rock bands more than orchestras. Fifty years on, the computer—with its above-mentioned power to fuse three principal components of music production—has emerged as the natural tool for this style of working.

But another factor driving composers to become performers was the spirit of improvisation. The generation of artists listed above may have been trained in a rigorous classical tradition, but by the late 1960s it was no longer possible to ignore the musical world outside the gates of academe or beyond the doors of the European concert hall. What was then known as “world music” was reaching American and European ears through a trickle of records and concerts. Progressive jazz was in full flower. Pop was inescapable. And composers of my age—the following generation—had no need to reject any older tradition to strike out in a new direction: Ravi Shankar, Miles Davis, the Beatles, John Cage, Charles Ives, and Monteverdi were all laid out in front of us like a buffet, and we could heap our plates with whatever pleased us, regardless of how odd the juxtapositions might seem. Improvisation was an essential ingredient, and we sought technology that expanded the horizons of improvisation and performance, just as we experimented with new techniques and tools for composition.

It is in the area of performance that I feel hardware—with its tactile, sometimes unruly properties—still holds the upper hand. This testifies not to any failure of software to make good on its perceived promise of making everything better in our lives, but to a pragmatic affirmation of the sometimes messy but inarguably fascinating irrationality of human beings: sometimes we need the imperfection of things.


This essay began as a lecture for the “Technology and Aesthetics” symposium at NOTAM (Norwegian Center for Technology in Music and the Arts), Oslo, May 26-27 2011, revised for publication in Musical Listening in the Age of Technological Reproduction (Ashgate, 2015). It has been further revised for NewMusicBox.


Nicolas Collins


New York born and raised, Nicolas Collins spent most of the 1990s in Europe, where he was visiting artistic director of Stichting STEIM (Amsterdam) and a DAAD composer-in-residence in Berlin. An early adopter of microcomputers for live performance, Collins also makes use of homemade electronic circuitry and conventional acoustic instruments. He is editor-in-chief of the Leonardo Music Journal and a professor in the Department of Sound at the School of the Art Institute of Chicago. His book, Handmade Electronic Music–The Art of Hardware Hacking (Routledge), has influenced emerging electronic music worldwide. Collins’s indecisive career trajectory is reflected in his having played at both CBGB and the Concertgebouw.



1. Although this potential was clear to our small band of binary pioneers, the notion was so inconceivable to the early developers of personal computers that Apple trademarked its name with the specific limitation that its machines would never be used for musical applications, lest it infringe on the Beatles’ semi-dormant company of the same name—a decision that would lead to extended litigation after the introduction of the iPod and iTunes. This despite the fact that the very first non-diagnostic software written and demonstrated at the Homebrew Computer Club in Menlo Park, California, in 1975 was a music program by Steve Dompier, an event attended by a young Steve Jobs (see http://www.convivialtools.net/index.php?title=Homebrew_Computer_Club) (accessed on February 21, 2013).

2. For more on the implications of MIDI’s separation of sound from gesture see Collins, Nicolas, 1998. “Ubiquitous Electronics—Technology and Live Performance 1966-1996.” Leonardo Music Journal Vol. 8. San Francisco/Cambridge 27-32. One magnificent exception to the gesture/sound disconnect that MIDI inflicted on most computer music composers was Tim Perkis’s “Mouseguitar” project of 1987, which displayed much of the tactile nuance of Tudor-esque circuitry. In Perkis’s words:

When I switched to the FM synth (Yamaha TX81Z), there weren’t any keydowns involved; it was all one “note”…  The beauty of that synth—and why I still use it! — is that its failure modes are quite beautiful, and that live patch editing [can] go on while a voice is sounding without predictable and annoying glitches. The barrage of sysex data—including simulated front panel button-presses, for some sound modifications that were only accessible that way—went on without cease throughout the performance. The minute I started playing the display said “midi buffer full” and it stayed that way until I stopped.

(Email from Tim Perkis, July 18, 2006.)

3. Karplus, Kevin and Strong, Alex. 1983. “Digital Synthesis of Plucked String and Drum Timbres.” Computer Music Journal 7 (2). Cambridge. 43–55.

4. Beginning in the late 1960s a handful of artist-engineers designed and built pre-computer circuits that embodied some degree of performer-like decision-making: Gordon Mumma’s “Cybersonic Consoles” (1960s-70s), which as far as I can figure out were some kind of analog computers; my own multi-player instruments built from CMOS logic chips in emulation of Christian Wolff’s “co-ordination” notation (1978). The final stages of development of David Behrman’s “Homemade Synthesizer” included a primitive sequencer that varied pre-scored chord sequences in response to pitches played by a cellist (Cello With Melody Driven Electronics, 1975) presaging Behrman’s subsequent interactive work with computers. And digital delays begat a whole school of post-Terry Riley canonical performance based on looping and sustaining sounds from a performance’s immediate past into its ongoing present.

Audience Cultivation In American New Music

“Isn’t it amazing, that we can all sit in the same room together…and not understand each other?  It could only happen in America!” —Richard Pryor


Historically, new music has sought to confront general audiences with unexpected sounds and forms.  The present, however, sees the milieu of new music splintered into factions, each with its own loyal but marginal audience. One is more likely to find these groups at odds with one another than in dialogue, and many groups congratulate themselves for being the most marginal or esoteric. These divisions within the new music community foreclose on its original mission of confronting traditional audiences, as the factionalized groups that most new music now attracts already support and expect the work in question. All of these groups believe that they have meaningful formulas for creating provocative work, but what good is that work if no one outside the communities where it is generated has access to it?  In order for new music to remain a meaningful category of cultural production, it requires successful strategies for cultivating newer and bigger audiences.

While often used to refer to the experimental within the world of classical music, the term “new music” can be more broadly applied to any music that employs innovative, unexpected sounds or forms with the intention of challenging audiences to examine their assumptions about music, performance, and the consumption of musical experiences. When approached so broadly, new music is vast and hugely varied, but the central division within the array of new music practices is that between new music that is practiced in institutional settings and new music that does not receive institutional support, corporate sponsorship, or financial backing from investors.  The latter creative communities of practice often operate off the grid and, at times, outside of the law.  I will refer to these two areas of practice as “institutional” and “DIY,” respectively.  This classification is necessarily reductive, as there are many groups and individuals that embody hybrid forms of new music practice. Nevertheless, integrating these modalities remains difficult, and dominant traditions within new music lean toward one style of practice or the other. Similarly, each constituency represents a strong but discrete audience base—the concert-going, classical-music-based community, and the DIY community, which is aligned with band culture. This bifurcation suggests that the divide is of special relevance when considering the project of audience cultivation for new music in America today.

Current engagement with audience cultivation often finds expression in terms of collaboration and dialogue, not only within respective communities, but also between them. Audience cultivation strategies centered on cross-communal new music programming are often developed around a set of axioms, which may best be expressed as follows:

  1. The music of multiple new music communities, though touted as different from one another, actually has a lot in common.
  2. Each new music community has its own audience.
  3. The composite of all new music audiences, though never manifested as a single audience, would be bigger than any one new music audience.
  4. Collaboration between multiple artists from diverse new music communities will lead to a bigger audience for all new music communities.
  5. This will happen because collaborative programming will lead to a combining of multiple new music audiences.
  6. Bigger audiences for new music mean greater impact of progressive ideologies as mediated by the music in question.

These axioms carry theoretical weight, but despite the prevalence of this thinking—visible in the programming of many new music presenting organizations—the super-audience promised by such a collaborative spirit is not materializing. Even in New York, a veritable hotbed of collaboration and dialogue within new music communities, audiences—communities of fans!—remain, for the most part, segregated.

For the past decade I have played in ZS, a band which has had the lucky misfortune of being resident outsiders in multiple new music communities—most notably at the fringe of the underground noise or DIY scene, while also at the periphery of the institution and the academy. Our deliberate compositional method and disposition have garnered us an air of otherness in the underground community, and the abrasive dynamic and timbre of our performances has set us apart from our classical new music counterparts. ZS has charted this course deliberately, and it has afforded us a unique vantage point which may prove to be useful in the ongoing dialogue around the cultivation of larger audiences for American new music. My aim in the following discussion is to use this vantage point to explore the various factors at play when attempting to mobilize collaborative strategies for audience cultivation. These factors cover both the practical and social features of musical performance, as I believe it is the attachment to these specific means of creating that is at the heart of understanding why the multiple communities have a hard time becoming one audience. To form an understanding of such attachments, one must consider the community structures at play wherein specific means of creative production become useful. These considerations must be addressed if we hope to bring about lasting and increased meaning, for more people, vis-à-vis the challenging and refreshing musical content created by multiple new music communities!

Videos featuring Eli Keszler, Tristan Perich, and Patrick Higgins accompany this article. I chose to feature them because their practices as musicians, composers, and artists embody something that works with the strategies I’m presenting here. All three of these artists operate with ease in many contexts, including the institutional new music setting and the DIY.


Concert and Show

When considering the sound artifact—record, CD, mp3, etc.—the differences between classical or institutional new music and the new music produced by the DIY scene are present but not so pronounced. It is not hard to imagine someone who likes Tortoise recordings enjoying Steve Reich’s music, or a person listening to both Xenakis pieces and Wolf Eyes or Black Dice records. It is harder, however, to imagine a person who frequents Miller Theater for Steve Reich concert programs catching a Tortoise show at the Empty Bottle in Chicago. The similarities between these kinds of music are easy to notice when stripped of the social context that the performances take place in, but what can be said about listeners and the sound artifacts they consume cannot necessarily be said about concertgoers and the performance rituals they engage in.

Ideally, performances are constructed frames in which, for a brief period, every detail serves to articulate something about a particular version of, and vision for, the world. Considering such details, then—how the audience is positioned, what the performers are wearing, the audience’s attitude, the atmosphere within the performance space—expresses the values of the environment in which a given performance takes place. A good place to begin is with a simple examination of terminology. The colloquial difference between classical “concerts” and noise “shows” is indicative of deeper disparities, and while similar in construction, the two questions, “How was the concert?” and “How was the show?” ask about different things. The former question is used to request an evaluation of the musical quality and content of a performance. The person who asks the latter, however, inquires about a broad range of elements: Who was there? Did it start on time or run late? How was the venue? How was the sound? Did anything crazy happen? The quality of the band’s performance and an individual’s response to the music are factors within a much larger set of elements that are at issue when describing a given show. When asked to describe a concert, details about whether Prosecco was served at intermission and whether or not you met anyone you knew are often not at issue. Rather, there is a privileging of the musical performance as the meter stick for determining the quality of a given concert.
In a classical setting, audiences must sit and be quiet. Those visibly doing something other than watching the concert are discouraged in the space where the concert is happening. The ensemble and the conductor wear uniform clothing so as not to distract from the event of the music making, and so on. Compare this setting to that of a show: members of the audience may be doing any number of things besides watching the bands—catching up with friends, consulting their phones, or participating loudly in the performance are all acceptable varieties of audience participation. Regardless of preference where these settings are concerned, it is clear that there are different paradigms for audience-ship afoot in the different musical milieus in question, and these differences present clear obstacles for those attempting to combine new music audiences.


Band and Ensemble

The different expectations placed upon audiences in the social frames of “show” and “concert” are matched or even preceded by structural differences within the generative processes of different branches of new music. Where institutional new music is concerned, it is ordinarily the work of a single person, the composer, who has designed a specific musical experience for the audience to have. The content is chiefly communicated to the ensemble via sheet music, a set of instructions that expresses the musical design of the composer. While the balance of creative contribution may vary from ensemble to ensemble, it is clear that the role of the composer is the most generative, while the ensemble and conductor chiefly perform interpretive and executive labor, serving as vehicles for the composer’s content. Importantly, this division of labor is performed before the audience. The conductor stands with her back to the audience and presides over the ensemble; the ensemble is most often seated, wearing all black and performing for the audience, whom they face. Composers are often not visible but are generally identified in a printed program; if they are in attendance, they are acknowledged by the conductor and/or ensemble at the end of the performance of their work.  In this setting, there is a commitment to a lack of mystery surrounding individual contributions to a given performance; the place and role of the practitioners involved within the ensemble is clearly demarcated, and the composer’s authorship is clearly acknowledged.

Conversely, in a DIY setting, all generative mechanics of the band often lie under the hood of the band name. Whereas the ensemble divides the labor of music-making into discrete specialties and hierarchical execution, the band model is largely typified by a pervasive lateral quality. Band members act as composers, interpreters of material, and performers; every member is able to contribute to the musical content of the work being generated, and these contributions are happening constantly throughout the process of writing, rehearsing, and performing. Additionally, leadership in the band setting is often nebulous, and though a band may have a front woman or man, this is not understood to mean that they are responsible for authoring the musical material they render.
We have already noted the many differences between the concert and the show setting. The divergent generative practices engaged by bands and ensembles serve to deepen the divide between these performance rituals—audiences at shows and audiences at concerts, though both viewing live performances of music, are consuming radically different culture products.


Audiences in Social Space

Where institutional new music practitioners disguise their bodies and individuality so that the audience may receive the design of the composer with extreme clarity, bands disguise notions of agency and authorship by not crediting work to specific individuals. Instead, they foreground individuality in performance and group dynamics. By not calling attention to authorship and highlighting agency in performance instead, the band and the show create a different kind of space for the meaning-making practices of the audience. Audiences at shows understand that bands create the music that they play; however, the audience’s focus is often not on the intentions of the band and its members, but on the audience’s own style of participation at a given show. Audience members may stand silently while the band plays, yell out phrases or sounds during or between performances, respond bodily, even throw a cup or bottle onto the stage (depending on the show you happen to be at!). The finished product of the show is a composite of the musical performance and the audience’s response.

Audiences at concerts and audiences at shows are thus not only consuming different culture products, but they are playing different roles where the construction of meaning is concerned. The decision that audiences make when they choose between attending a concert and going to a show can be couched in terms of consuming and creating. Concerts and shows are more than specific means of presenting music, they are cultural spaces where rituals of social identification are practiced and expedited. Concerts provide conditions for repose to be struck by connoisseurs, while shows create platforms for cultural actors who will shape their experience via participatory style. The idea that show-goers and concert-goers are seeking out different experiences impacts heavily upon audience cultivation where multiple new music communities are at issue. The real concern is no longer what music people are encountering, but how and where they are encountering it, and what their role is in the production of meaning while doing so. Traversing this invisible boundary is the real work of audience cultivation and expansion.


Obstacles and Best Practices

It follows, then, that those who wish to expand audiences in new music must consider what might make such different communities of listeners wish to widen their experiences and practice different methods of cultural engagement.  A good frame for an inquiry about best practices for audience cultivation is the interrogation of assumptions.  Both of the communities in question are able to manifest flexibility in prescribed areas of practice, but remain rigid where core values are at play.  In proper dialogue it is important for participants to enter their most deeply held core values as possible assumptions, and subsequently interrogate those assumptions in order to determine whether or not they are meaningful in the context of our current project of audience cultivation.  A widely held assumption about music in general is that audiences separate along lines of aesthetics.  In this essay I suggest that audiences of new music listeners separate not because of aesthetic barriers, but due to the specific mechanics through which music is created, presented, and consumed.  In order to address this point, every aspect of a musical production must be considered, not just the specific musical content being presented at a concert or a show.

Over the years I have encountered and enacted a variety of strategies for growing audiences for new music. Some of them work, some of them don’t; a few of them are discussed below.

For the Institution

Institutions are responsible for many of the programs that pair diverse communities within new music.  This programming happens under relatively ideal circumstances, with significant financial backing, proper facilities, and cogent marketing teams representing the programming to the public.  Often this work manages to create pairings of artists that would otherwise not be “possible.”  That said, we have noted that this programming is not substantially expanding the size of the existing audience for new music.

One of the greatest assets that institutional support brings to the endeavor of audience cultivation is the ability to provide respectable fees to musicians. However, rather than being used to incentivize artists to engage in otherwise unlikely collaborations, funding may serve better if used to reward artists who engage in self-elected collaboration. By shifting the allocation of funds from the financing of unlikely collaborative projects to the support of existing collaborative projects, festivals and institutions will foster an overall valuation of cross-communal collaboration within new music.

There are many examples of large institutions opening their doors to a broad array of practitioners from the DIY underground. There are far fewer examples, however, of members of the institutional new music community coming to DIY venues and concertizing there. Institution-based new music groups who wish to expand their audience base would be well served by performing in such settings; however, familiarity with institutional support leaves many practitioners expecting to be compensated at rates which are unfeasible in many DIY situations. Of course it is possible to write grants, appeal to patrons, and lead Kickstarter campaigns in order to secure what is regarded as the required funding to make individual concerts happen; however, I recommend against this. Musicians in a given ensemble being compensated at a rate that differs from that of other musicians on the bill at a DIY show creates social distance which defeats the purpose of this exercise. Where the institutional new music practice is chiefly premised on aesthetics, methodology, and philosophical bent, the DIY scene is, fundamentally, an expression of something social, a fellowship among people, a community. In order for institution-based new music practitioners to cross over and gain awareness in this world, they must find a way to participate as members of that community. Accepting the terms of the DIY community—financial and otherwise—is one way for classical and institutional new music practitioners to expand social depths, form relationships with musicians in bands, and expand their audience base.
Unfortunately, there are often other obstacles for institution-based new music groups that would like to concertize in the DIY setting.

I have spoken with multiple new music groups for whom audience expansion is a priority, but whose hands are tied due to extreme exclusivity clauses imposed by large-scale classical presenters, or who face management and publicists resistant to the notion of concertizing for so little money and even less prestige.  This resistance is of note: it highlights that while there are many people within institutional new music for whom younger, broader audiences are of central concern, in many cases, the question of whether or not the general public likes new music is simply not of value to many involved.  Institution-based new music practitioners who are concerned about expanding audience size must do some campaigning within their own milieu if they wish to experience success in this endeavor.

Most importantly, institutional new music must bring an end, at least partially, to its most beloved practice—enforced silence. There are many people who do not like being forced to sit still and be quiet. This practice, and some of the other rituals endemic to the concert setting, need to be reconsidered and applied only selectively. The performance of musical hierarchy described above is also inscrutable if not off-putting to listeners not familiar with the customs of concert music. A more casual setting and presentation will benefit institutional new music practitioners seeking to expand their audience base. Acceptance, even valuation, of these attributes when concertizing in the DIY setting is a good way to begin thinking about bringing some of that spirit to the concert setting.

For DIY Communities and Organizations

The DIY community has its own responsibility in the matter of audience cultivation, and equally as much to gain from an expanded audience for new music.  As far as communities of practice—artistic, professional, and social—go, the DIY community tends to value “openness,” that is, awareness of and curiosity about the values and practices of others. This said, there are sacrosanct core values within the DIY community around which little or no variation is tolerated. For the DIYers to participate in the cross-communal project of audience cultivation, it will first be necessary to reframe the rigid nature of these ideologies as more fluid aspirations that are shaped by the particular projects that they engage with.

Within the DIY community, there is skepticism, bordering on dislike, of hierarchical power dynamics, especially when the allocation of resources is at issue in the context of a supposed meritocracy.  It should not be hard to see why this vantage point presents difficulty for collaborations that pair DIY organizations with official cultural institutions. The DIY community places value on conducting business in a way that is transparent, lateral, and democratic, while the institutional milieu places emphasis on clearly articulated standards for excellence and cogent processes of becoming involved. These two strands have much to learn from each other—DIY communities could stand to become more cogent and efficient, while official culture organizations could imagine new ways of preserving and presenting standards that do not locate the project of determining quality and allocating funds at the top of a hierarchy. Instead, official culture organizations might look more readily to the ground level of their operations where the people most aligned with the audiences they seek to reach dwell. This mutual learning will require a softening of the DIY’s ideological position in order to facilitate dialogue.

Operating within their dislike of the edifice of power, DIY communities at times engage in subtle social maneuvering aimed at devaluing dominant practices, often resulting in a communal habit of “becoming minor,” that is, seeking to frame any given practice as “most other” or “least dominant.” Within a seemingly homogeneous community, subdivisions occur over any number of major or minor social differences—those who come from money, those who have been to prestigious schools, those who have good jobs but go to punk shows at night, those who did not attend college, those who are unemployed, those who come from familial backgrounds of little means—each splinter group seeking to become minor. Rather than fostering a broadening of social depths to include more and more cultural actors—a task we have seen is necessary for the project of audience cultivation—this practice forecloses collaboration with entities outside of the DIY and causes discord within. The attitude of “becoming minor” is complexly associated with questions of power and privilege, but it is important to note that anyone in a position to entertain the concerns expressed in this essay is already in the category of the extremely privileged. Whether university professors or crusty punks who have renounced the shower and covered their hands with tattoos, we all exist within and embody the ostentatious wealth of our nation, replete with its power and influence. These assets can advance the agendas of communities of practice, and for this reason divergent communities such as DIY and institutional organizations are well served by identifying points of similarity and overlap rather than engaging in the factionalizing attitudes that value becoming minor.


The discussion of obstacles and best practices in this section is far from comprehensive.  It is easy to imagine further discourse on matters including the architectural space, geographic location, gender dynamics, ethnographies, or more nuanced discussion of the socio-economic dimensions of cross-communal collaboration.  My hope is that this essay will help to begin dialogue around all of these subjects and many more not named here.  Cross-communal dialogue is our first best hope for addressing the matter of audience expansion in new music.

Recently, I was on tour with ZS in Europe, where the distinction between grassroots communities of practice and institutional communities of practice is much less pronounced, and of lesser import to practitioners. Our booking agent, a master at negotiating between these communities, informed us that the new prevalent slang was not DIY (do it yourself), but DIT (do it together). Yes! What an excellent and obvious evolution for our thinking about cultural musical action in the world. As Americans, we do not have cultural homogeneity among the participants in our communities of practice. The word “American” itself connotes a lot. However, one interesting take on being American is that the designation implies that somewhere in your relatively recent past or ancestry there is “someone originally from somewhere else besides America.” This makes the project of American DIT more compelling, and more important! Through dialogue, the interrogation of assumptions, and by not turning the ideals at our core into rigid expectations, new music can arrive at hybrid forms of practice, both aesthetically and in terms of mechanical production and generative practice. These hybrid forms can lead to bigger audiences for American new music, if we are willing to do the often uncomfortable, often exhilarating work of getting to them.
See you at the performance!


Special thanks to Hannah DeFeyter for her assistance with this article.

Stayin’ Alive: Preserving Electroacoustic Music

In preparation for a performance at an electroacoustic music festival this weekend, I’ve been revisiting a slightly older piece that hasn’t been performed in its live electronics version for a few years. Several updated releases of Max/MSP software have come and gone since I last fired up the interface for this piece and, as one might expect, when I tried it out for the first time, it didn’t work correctly. Fortunately the changes needed to get it properly up and running were small, but there was still a significant time suck involved. And indeed, unless I keep on top of software upgrades and changes, pieces that incorporate live electronics could end up hopelessly broken for no reason other than years pass and technology changes. The same is true for much older electroacoustic compositions that involve very specific pieces of gear—stomp boxes, synthesizers, a particular delay unit, or a drum machine.

Because the way I am using the software in the case above is relatively straightforward, I could easily transfer my performance system to more accessible commercial software like Ableton Live or Logic; certainly more people own those programs than Max/MSP, and it makes sense to do so, to ensure that the music continues to be performed. But eventually there will be different software that replaces those programs, and what will happen then? Chances are I’m the only one who is ever going to undertake such a conversion task, and when I’m not around anymore, the performance materials for these pieces may well turn to dust. It’s a question for anyone creating electroacoustic music. What on earth is going to happen to compositions that are painstakingly crafted for effective live performance at the time of their creation, but which become increasingly difficult to mount live, simply due to the march of time?

A good clarinetist friend has taken it upon himself to recreate a number of older electroacoustic works that were originally made using equipment that is no longer available. He has resuscitated Thea Musgrave’s Narcissus, for example, translating the electronics from a very specific gear list into a more modern software situation. But I seriously doubt many people out there are really willing to take on projects of that nature, and I worry for the life of a lot of truly wonderful music. David Tudor’s Rainforest can’t really be recreated (darn good thing there are recordings, though there’s no replacing the live experience), nor can so many other works that rely on custom-designed electronic gear or software. It’s not just an issue for works that are considered classics now; there are many, many artists making great work incorporating live electronic performance today that should be able to be experienced by people decades from now. How is that going to happen?

This is an issue for individual composers and publishers alike, since it’s not any easier for the big publishers to deal with the sale, rental, and distribution of live electroacoustic music. It’s difficult and inconvenient to wrangle performance materials of this type, no matter how you slice it. Although I know that publishers do not encourage their composers to create electroacoustic music (and may actively discourage it), what I would really love to see is some sort of technology manager/archivist position becoming standard at publishing houses to deal with these issues going forward. Because like it or not, a lot of contemporary music involves technology.

Another argument addresses the sheer volume of creative work being produced at this time. Should this music even be preserved for continuing performance? Is documentation in the form of audio and/or video recording enough? There is so much of everything. The sheer volume of creation and recording in our era makes it easy to dismiss work, and to feel no discomfort at its erasure. Should these performance experiences be viewed more as ephemeral events from a very specific place and time, an expanded view of site-specific work? Is it possible that there could ever be electroacoustic “war horse” pieces that continue to be performed centuries later?

Artful Deception

Sometimes I like to think of musicians as stage magicians. There is a kind of artful deception that’s a part of performance, but it’s rarely acknowledged and often downplayed, especially in the concert music world. I mean this in a literal sense—e.g. the rock star moves or Liberacean flourishes that are decried as flashy or ridiculous—but also in a more abstract sense. There is never perfect communication between artist and audience, so there must always be a disconnection between how the trick looks (or sounds) and how the trick is actually done.

This is most painfully obvious to me when I am working with live electronics. The laptop is a black box. Sounds mysteriously emanate from it, but the means of their production are obscured. If performance didn’t require magic, this would actually be a huge boon to musicians and listeners alike. Finally, we can dispense with superfluous showmanship and focus on the substance of the music! But in real life, the effect is curiously the opposite. If anything, electronic performers are more exposed than their acoustic counterparts because they cannot easily demonstrate their methods. Attempts to make the black box a little more transparent, like live coding or gesture following, to name a couple, aren’t terribly convincing on their own. Instead performers must rely on affective tactics (i.e. schticks) like stoic immobility, or the enthusiastic headbob, or various fader manipulation shenanigans.

Maybe my anxiety about this kind of performance is partly what’s kept me focused more on electroacoustic music (and lately I’m especially aware of how quaint “electroacoustic” sounds as a genre descriptor). With live acoustic instruments as the focus, I am free to be invisible, a presence felt but not known. I don’t think of myself as a performer when I run electronics for these pieces, though I am undeniably performing. Instead I feel more like a technician or a midwife, guiding the music into being.

But maybe this feeling is obsolete, a throwback to a time when people were more naturally suspicious or scornful of electronics. Something substantial has happened in the last 5-10 years, and all of a sudden everyone knows that any kind of live music can be faked. In theory this is kind of scary, but in practice it gives us an enormous amount of freedom: all people expect is a good show.

Matters of Convention

January is a great time for music conferences (or conventions). A few organizations holding them this month that I can name off-hand are: the Association of Performing Arts Presenters (APAP), the Jazz Educators Network (JEN), Chamber Music America (CMA), and the National Association of Music Merchants (NAMM). Sadly, my relationship with the economy is tipped slightly out of my favor at the moment, and having to replace the car this year means that the only convention I’ll be attending is (hopefully) the International Society of Bassists in June. (I had resisted joining the ISB until two years ago, when their convention was held in San Francisco, where my mother lives, which meant I could write off a visit with her as a business expense. At the convention I was wowed by the virtuosity of the likes of Nicholas Walker, Putter Smith, John Clayton, Bertram Turetzky, and Jiri Slavik. Doing lunch with Michael Formanek and Mark Dresser was also a high point, but taking a private lesson with the legendary Barre Phillips changed my life and sold me on the ISB.)

I had planned to go to the Jazz Connect events at APAP, which are free and end today, but my work schedule has put the kibosh on my plans. I’m supplying the musical accompaniment for a one-person play written and performed by Stacie Chaiken, Looking for Louie, which is being staged at the Rockland Center for the Arts this Sunday (January 13) afternoon, and I will be in rehearsal during the second day of the convention. On Jazz Connect’s first day, I was glued to the computer writing this post and putting together a part for the rehearsal.

While it might seem paradoxical to some that an improvising musician would write out a part for a performance, it’s actually not at all at odds with how improvisation works. Chaiken incorporates interactivity with her audience, as well as with her accompanist, in Looking for Louie, and the bass part is largely improvised. However, Looking for Louie was staged previously, in Los Angeles and Israel, and a structural element exists in the music that specifically relates to the work’s plot. To provide improvisations in line with the work’s style and surface contours, I transcribed the bass part from the Israeli performance into Finale® and inserted it into an MS Word® document along with the play’s script. It’s essentially the same way I prepare for playing jazz, except that the research I do for jazz is ongoing and I’ve been at it for much longer, so I don’t have to do it for every situation that comes along. Fortunately, Chaiken is very comfortable working with jazz musicians (her husband, Martin Berg, used to play trumpet in the Thad Jones-Mel Lewis Orchestra and is currently on the board of directors of the California Jazz Foundation), and our first rehearsal went smashingly well. I think I can now bring my own sensibilities, which lean more towards the avant-garde than those of her previous accompanists, to our final rehearsal without disrupting the drama.

Rockland Center for the Arts (ROCA) is a terrific place to hear music. The acoustics of its main gallery are superb, and neither Chaiken nor I will need any sound support. Because ROCA receives support from the New York State Council on the Arts, as well as from local businesses and subscription memberships, performers who play there are paid a living wage. Organizations like ROCA are in the business of presenting art, not the commodification of it. Sadly, there’s a stigma attached to the presentation of art as art, one that leads many to believe it unfathomable to most. While it’s true that the masses are now, more than ever, potential prey for those who would limit their exposure to good quality art, especially music, that doesn’t mean it’s beyond the public’s ability to comprehend.

This was made obvious to me last Monday at a memorial for the late saxophonist-composer David S. Ware held at St. Peter’s Church in Manhattan’s Citicorp Center. Ware’s memorial was well-attended—in fact, standing room only—and the program well-paced. The music played was not what one would expect to find in the year-end memorial coverage of NPR or The New York Times (neither included Ware), but it was of the highest caliber and performed with the deepest conviction. Saxophonists Rob Brown, Daniel Carter, and Darius Jones gave stellar performances to honor their fallen comrade, as did drummers Andrew Cyrille, Guillermo E. Brown, Warren Smith, and Muhammad Ali.

I first heard Ware when he played at the Iridium Jazz Club in July of 2003. His quartet, with pianist Matthew Shipp, bassist William Parker, and Guillermo Brown, was part of a double bill that included a quintet led by bassist Henry Grimes (whom I came to see) featuring trumpeter Roy Campbell, saxophonist Brown, pianist Andrew Bemke, and drummer Michael T.A. Thompson. I enjoyed listening to Grimes’s group, but when Ware started his half of the show, I was surprised to find myself witnessing what I can only describe as a singularity at Iridium. Ware and his group not only blew the roof off the place, but did it with a kind of playing that is all too rarely presented there. Ware’s sound, like Gato Barbieri’s or Clarence Clemons’s, was huge, but devoid of commercial pretense. His compositions supplied him with long, slow-moving progressions, highlighting his improvisations’ multiple-tiered voice leading that infused rapid-fire filigree over a subtle linearity.

At Ware’s memorial I was also surprised to hear, for the first time, multi-instrument builder-performer Cooper-Moore, who, along with Ware and drummer Marc Edwards, was a member of the collective Apogee. Cooper-Moore performed a solo piece on an instrument he calls a harp, and it is a harp, but one played horizontally, like a piano. The piece he played was beautifully lyrical and sat nicely between the opening piece, “Prayer,” by William Parker (who conducted and played percussion) with his group (pianist Eri Yamamoto, vocalist Fay Victor, Rob Brown, and a string ensemble led by Jason Kao Hwang) and a trio featuring Andrew Cyrille, Daniel Carter, and bassist Joe Morris. Another first for me was to hear Morris play guitar, which he did in a duet with drummer Warren Smith, performing an exquisite Morris original, “Violet.” Muhammad Ali and Darius Jones played a duo that was truly a great moment in music, expanding on the legacy of saxophone-drum playing started by John Coltrane and Rashied Ali. Poetry was read by the passionately verbal powerhouse Steve Dalachinsky, and memories of Ware were shared by his widow, Setsuko S. Ware; his business partners, Jimmy Katz and Steven Joerg; and finally his long-time pianist Matthew Shipp, who also wrote a heartfelt obituary about Ware for NewMusicBox last year. The final live performance (followed by a clip of Ware playing solo soprano saxophone) was by Ware’s rhythm section, who performed a medley of two Ware compositions, “Godspelized” and “Sentient Compassion.”

One thing acknowledged in the words spoken about Ware at his memorial was that his sound was highly personal and entirely idiomatic to the saxophone. That’s what struck me about Ware the first time I heard him—his sound. Raw. Big. Relentless. But accepting without being acquiescent. Like his elders Albert Ayler and Archie Shepp, his sound was unlike what the world of “mainstream” music accepts as the “pure” sound of the saxophone, yet it was pure saxophone. His was a sound that, like his compositions, leaned away from the Western art music paradigm he had mastered. While researching today’s post, I ran across a schedule of music educators association conferences that will be held this month in Arizona, Florida, Georgia, Illinois, Indiana, Michigan, Missouri, and Oklahoma. Jazz is now included in the curriculum of many schools in these states, although until the 1970s and ’80s it mostly wasn’t. Still, it wasn’t until after 2000 that I saw a music professor teach the music of Ornette Coleman to a class. I wonder how long it will be before the music academy will tackle the music of David S. Ware.

Intersections & Dissections

In March 2010, I was asked by Mode Records to be involved in the making of a new release of previously unrecorded orchestral works by Morton Feldman, with Brad Lubman conducting the Deutsches Symphonie-Orchester Berlin. Since the studio sessions with the orchestra had proceeded ahead of schedule, we decided to also record the early graphic piece Intersection I, which had previously only been recorded by a small ensemble. It was the proper length and required no extra forces. However, there was insufficient time to familiarize the orchestra with the notation and to rehearse the piece, so I was asked to make a realization of the graphic score using more traditional notation.

Morton Feldman wrote Intersection I in early 1951, dedicating it to John Cage, whom he had met one year prior. The score is divided into four staves: woodwinds, brass, high strings, and low strings. As would become customary for Feldman, strings and brass play muted throughout, and all instruments avoid vibrato. Notes are represented by boxes on a grid. Pitches are not specified; instead, the vertical placement of the box represents the low, middle, or high range of the instrument, from which each player individually selects a note. Players on the same part begin notes on their own, but release together, with the longest possible note indicated by the full width of the box. Widths always correspond to a whole number of beats, and the beats are grouped into 4/4 measures. For the strings, Feldman also specifies different modes of playing, such as pizzicato and harmonics.
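To make the notation system concrete, the grid described above can be thought of as simple data. The sketch below is my own hypothetical encoding for illustration only; the class name, field names, and example values are invented, and this is not Feldman's notation or any published format.

```python
# Hypothetical sketch: modeling one box from the Intersection I grid as data.
# All names and fields here are my own invention, not Feldman's terminology.
from dataclasses import dataclass

@dataclass
class Box:
    part: str        # "woodwinds", "brass", "high strings", or "low strings"
    start_beat: int  # beat at which the box begins (beats group into 4/4 bars)
    width: int       # a whole number of beats; the longest possible duration
    register: str    # "low", "middle", or "high" -- each player picks a pitch
    mode: str = ""   # strings only: e.g. "pizz." or "harmonic"

    def latest_release(self) -> int:
        # Players may enter anywhere within the box on their own,
        # but all release together at its right edge.
        return self.start_beat + self.width

# Example: a 3-beat box in the high register of the brass part.
b = Box(part="brass", start_beat=4, width=3, register="high")
print(b.latest_release())  # -> 7
```

The point of the encoding is what it leaves out: no pitch field exists, only a register, mirroring how Feldman delegates note choice to each player while fixing the envelope of the sound.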

Feldman used his early graphic works to build his own musical language from scratch. In each piece, he relinquished control of certain aspects of sound in order to concentrate on just one or two—distilled form and gesture, in the case of the Intersection series—building on his studies with Stefan Wolpe. But his use of indeterminacy was not bound up in a utopian philosophy, as it was with John Cage. Feldman allowed musicians to realize their graphic parts in advance, caring about “freeing the sounds and not the performer.”

Initially, I considered and rejected two strategies for creating the realization. I would not pick all of the notes myself; I wanted to work with Feldman’s instructions, not to be his co-composer. Nor would I randomize the different sound elements of the music—making the determinations necessary to execute the chance procedures seemed just as composerly as picking notes. I felt that the crowdsourced personality of the piece as implied by the score needed to be left in place.

With all of this in mind, I hit upon the idea of recording local musicians playing individually from the graphic score and transcribing the results into proportional notation. Each take would then become one of the parts from which the musicians of the DSO Berlin would perform, like actors. The recorded musicians would be playing outside of an orchestral context, which suited the piece: Feldman believed that performers reacting to each other in his graphic music inevitably led to cliché. This approach would enable the sounds of the different instruments to be “free” from one another.

I had no interest in artificially cultivating a very quiet, carefully finessed “Feldman sound,” since Feldman was still finding his sound when he wrote Intersection I. By working with musicians as intermediaries, the sonic reality of the piece would depend on the instruments themselves, the personalities of the players, their relationships with their instruments, and their musical history and training. It was the reverberation of an existing system, like wind blowing through an aeolian harp.


Because Feldman never made instrumental parts from the graphic score, I drew them using a pen and ruler. (By this point, I doubted he ever expected an actual performance.) With parts in hand, I assembled 25 contemporary classical musicians willing to contribute their talent and time to the project—they are each acknowledged below.

Early in the summer of 2010, I began traveling around New York City, meeting and recording the players. As expected, the open-ended nature of the notation let me hear each player’s personality and relationship with their instrument virtually unfiltered. Occasionally, I would hear a player slipping into a key signature for a few measures, or outlining familiar chords. In contrast, other players would change fingerings just before playing a note in order to avoid convention. Each had their distinctive sound. In the end, I had assembled a sonic snapshot of contemporary classical performance practice in early 21st-century New York.

It took many weeks of transcribing to compile the score, a task so protracted that I found myself working in Sibelius wherever I could, from back seats of moving cars to a pool house in upstate New York. Every aspect of each note had to be finessed manually—over 50,000 items in all. I dreamt of moving noteheads for weeks and compulsively organized small round objects. I also received an exceptional orchestration lesson, internalizing the sound of each instrument as I listened to the recordings.

The opening of Morton Feldman’s original graphic score for Intersection I. Copyright © 1962 by C. F. Peters Corporation. Used by kind permission. All rights reserved.

The opening of Morton Feldman’s Intersection I in Samuel Clay Birmaher’s realization.

I flew to Berlin in late November 2010, spending my first few days exploring the snowy city on foot. By the day of the recording session, I was glad to be in the warm control room of the Radio Berlin-Brandenburg concert hall, watching the musicians on stage read the same parts I had sent overseas a month earlier. In the control room, we followed along with both the graphic and realization scores, hearing the massive sounds coming in through the monitors shift in time with the blocks of instruments on the page. Synchronicities flashed through the gray passages of cluster chords: instruments coalescing onto the same pitch, a minor chord, a perfect cadence—the collective orchestral unconscious. Those personal resonances that Feldman considered the major flaw of his indeterminate works seemed to me, as I listened, to be the vital energizing force pushing the music forward. Soon, an hour of music had passed, and six months of energy put into the realization had been distilled into the 13-minute duration of the piece.

Following the recording session, I immersed myself in Feldman’s writings to prepare to write the liner notes to the release. In July 2011, I also met with Feldman’s close friend, composer Bunita Marcus, who graciously allowed me to interview her about the music on the disc. During our talk, she lent her support to my approach to Intersection I, and indicated that it was in line with Feldman’s own attitude towards his graphic works.

After Mode released the disc that winter, there remained the question of what to do with the materials I had used to put together the realization. I knew that to perform my score a second time would be counter to the ethos the original score was written in. During the 1950s, Feldman emphasized the “sounds themselves,” so for conceptual consistency I decided to leave behind only sound. In doing so, I hoped to funnel meaning into the sensory experience of listening. I destroyed all scores in my possession and asked the few others who had copies to destroy theirs. The librarian at the DSO-Berlin has destroyed the parts at my request. All Sibelius and sound files have been permanently erased.

Aside from the short score excerpt above, this article is now all that remains of the realization process.


I am deeply indebted to the musicians who granted their time and efforts to this project: Alejandro T. Acierto, Michael P. Atkinson, Brad Balliett, Erik Carlson, Greg Chudzik, Rachel Drehmann, Emily Dufour, Gareth Flowers, Alex Greenbaum, Stephanie Griffin, Michael Harley, James Hirschfeld, Bill Kalinkos, Nathan Koci, Andy Kozar, Allison Lowell, Victor Lowrie, Amelia Lukas, Kevin McFarland, Joshua Modney, Chris Otto, John Pickford Richards, Alex Waterman, Karisa Werdon, and Jeffrey Young. Without them the realization would not have been possible.


Samuel Clay Birmaher

Samuel Clay Birmaher is a composer living in New York City. He also performs with visual artist Matt Megyes as Gemini Society.

The Role of Analysis: A Different Angle

A couple of weeks ago, David Smooke picked up a topic that had also been on my mind: how important is analysis to the performance of a new composition? My thoughts have been spinning on this topic since then, so I wanted to approach it from a slightly different angle.

First of all, I think the word “analysis” sets up all kinds of assumptions that do not necessarily apply to many forms of music. For example, in college I spent an entire semester preparing a performance of Annea Lockwood’s piece for snare drum and voice, Amazonia Dreaming. There is basically nothing in that music that can be analyzed in any traditional sense of the word: the snare drum is played in numerous unusual ways using chopsticks, toy marbles, and bare hands; the voice part consists of spoken(ish) vocalizations; there are large swathes of improvisation; tempos and durations are partially determined by the performer. It’s a fantastic piece, and it can take innumerable directions depending on who is performing it. It took a solid three months for me to fully understand the piece, and I decided to perform it from memory. Learning the piece well enough to play it without the score made me realize that internalizing a composition physically in some substantial way is the key to truly successful performances. Similarly, analyzing Steve Reich’s Piano Phase is not really going to improve a performance of the work. What will help is having a solid handle on the visceral experience of the slow phasing process. No form of intellectualization can take you there—all that’s left is practice. Performing any music well requires a deep physical connection to the music in addition to an intellectual understanding of what is happening within the work. Plenty of jazz performers have both of these elements going on in spades.
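
For readers unfamiliar with Piano Phase, the phasing process is easy to state mechanically, even though, as argued above, stating it is no substitute for the physical experience of playing it. The sketch below is an illustrative calculation only; the tempo values are invented stand-ins, and the pattern is given simply as twelve slots rather than Reich's actual notes.

```python
# A minimal sketch of the phasing idea behind Reich's Piano Phase:
# two identical 12-note loops, one played at a slightly faster tempo,
# drift apart until the faster player is a whole note ahead.
# Tempi here are illustrative, not taken from the score.

PATTERN_LENGTH = 12  # both players loop the same 12-note pattern

def offset_in_notes(elapsed_sec, tempo1=4.0, tempo2=4.02):
    """How many notes player 2 is ahead of player 1 after `elapsed_sec`
    seconds, given tempi in notes per second."""
    return (tempo2 - tempo1) * elapsed_sec

def aligned_shift(elapsed_sec, tempo1=4.0, tempo2=4.02):
    """Whole-note shift between the two patterns, modulo the loop length."""
    return round(offset_in_notes(elapsed_sec, tempo1, tempo2)) % PATTERN_LENGTH

# At these tempi, after 50 seconds player 2 is one note ahead;
# after 600 seconds the patterns have cycled back into unison.
print(aligned_shift(50))   # -> 1
print(aligned_shift(600))  # -> 0
```

The arithmetic takes seconds to grasp; the point of the passage above is that internalizing what that slow drift feels like under the fingers takes months, and no calculation gets you there.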

Obviously it is not always possible for classical musicians to memorize a new work. But the process of memorization brings with it the need to “analyze” the work in such a way that the inner connections of the music are clear to the performer. In many cases it’s not the clear-cut analysis that we learn in school, but rather a more intuitive waking up to the inner life of the music through the physical act of playing it. Although as a composer I am quite focused on the analytical elements of a new composition during the first stages of development, after that I let go, spinning material derived from that content. In my experience, performers find a lot of useful information in that more intuitive music—information that I didn’t realize was even there. It relates to the structural underpinnings, yet it’s different. Does the performer need to know every chord progression or tone row or rhythmic formula that makes up the basis of a piece? It certainly can’t hurt, but I don’t believe that knowledge will result in a great performance without a physical connection to the music itself. What serves as “understanding” for the performer is not always (I would venture to say rarely) the same as for the composer.

This interesting article cites research finding that musicians who feel their instrument is an extension of their physical body experience less performance anxiety. This makes a lot of sense, and I think the idea pertains also to the performer’s relationship to the music they are playing. Cellist Joshua Roman is one musician who I know for a fact is inextricably bound to his instrument—it might as well be another limb. Or rather, in a performance setting, it serves as his voice, and he prefers to perform as much as he can from memory, stating that music stands are just barriers between himself and the audience. His mission is communication. It never ceases to amaze me how much he can tease from even the simplest piece of music. When we work together, we don’t talk about what type of scale a passage employs, or to what chords those arpeggios are referring. We work on the most effective methods to communicate the musical ideas in the piece to an audience. Does he understand the music? Oh yeah, he gets it. Do I care how he arrived at that knowledge, or whether he is fully conscious of the underlying foundation that I so carefully built? Not so much. It’s like constructing a piece of furniture—the point is the full experience of the work, not the brand of nuts and bolts that hold it together. If the small pieces are sufficiently sturdy, they do their job automatically, leaving room for other considerations.