Category: NewMusicBox

Stepping Forward at the Midwest Clinic

Every year during the week before Christmas, thousands of music educators, student musicians, and industry professionals gather in Chicago to discuss the latest trends and techniques in music education, listen to top-level ensembles from around the world, hear newly available repertoire, and peruse the expansive exhibit hall at the Midwest Clinic. The largest international band and orchestra conference in the world, this event is truly a spectacle. The four-day conference is packed with panels, presentations, concerts, reading sessions, and promotional goodies that attract close to 20,000 attendees, including many band directors. For composers, it is an exceptional networking opportunity simply due to the sheer number of conductors present. And if you are lucky enough to have your music performed, it will be heard by thousands of band directors from throughout the country who are seeking out new pieces to program.

If you are lucky enough to have your music performed at the Midwest Clinic, it will be heard by thousands of band directors.

I had been considering attending the Midwest Clinic for a while, but I wasn’t quite sure where I would fit into the mix. I’m not a band director, and I only have two band pieces in my catalog, so I’m not sure I can even call myself a “band composer.” Was it worth going?

While still on the fence, I was pointed to a Facebook post from composer John Mackey, a veritable superstar of the band world. The Midwest Clinic requires publishers to buy advertising space if two pieces they publish are performed in showcases there, and a booth in the exhibit hall if three or more are performed. Mackey had accordingly purchased a booth in the exhibit hall and was offering it up, free of charge, to self-published composers who are people of color and/or identify as women. I was shocked and delighted, and I immediately jumped at the opportunity! Not only was this offer incredibly generous (booth space is not cheap!), it recognized what I already suspected before even venturing to the festival: the Midwest Clinic has a diversity problem.

A screenshot of John Mackey's Facebook post

As you enter Chicago’s massive McCormick Place convention center and ascend the escalator to register for the conference, you are greeted by larger-than-life banners honoring current and former festival award winners, and a giant, cylindrical “wall of fame” covered in photos of even more award winners and board members from throughout the conference’s 71-year history–all but a small handful of whom are white men. Such a display can feel a bit unwelcoming for those who do not look like the men in the photos, and it is disappointing to consider that the movers and shakers of the Midwest Clinic, with their impact on music education nationwide, do not reflect the diversity of the students in our schools.

But the Midwest Clinic has a diversity problem.

Beyond the leadership, the Midwest Clinic’s programming is equally in need of modernization. After my second day at the conference, I realized that not a single one of the concerts I had attended included a female composer. Now, it would be impossible to see every concert at Midwest, and I had experienced just a handful of the performances. Was it a fluke that I had missed the pieces by women? To be certain, I pored over the festival program and found that of the 500 pieces performed at the Midwest Clinic by 51 different ensembles (including bands, orchestras, jazz bands, and chamber groups), only 23 (4.6%) were composed by women, and just 71 (14.2%) were written by composers of color.

But what about the band concerts on their own? With such enthusiasm for new music, surely the wind ensemble programming would be more diverse than that of the orchestras, right? Alas, of the 212 pieces performed by bands during the Midwest Clinic, only seven (a measly 3.3%) were written by women, and 26 (12.3%) by people of color.

Of the 500 pieces performed at the Midwest Clinic, only 23 pieces (4.6%) were composed by women, and just 71 (14.2%) were written by composers of color.

Sadly, none of this came as much of a surprise to me. I’ve been performing in wind bands since the fifth grade, and I continue to do so today. Through the years I’ve become keenly aware of the white male dominance both on the podium and in the repertoire. In fact, the performance of one of my works by a wind ensemble last year marked the first time in more than five years that this particular group had played a piece by a woman. This dilemma is not unique to the Midwest Clinic, but a festival of its magnitude and influence has the potential to create meaningful change in the diversity represented in music education settings throughout the nation.

Luckily, members of the music education community are stepping forward to do just that, and it seems that things are slowly changing. For the first time, this year the conference included three clinics pertaining to diversity and inclusion. Tremon Kizer, associate director of bands at the University of Central Florida, offered an overview of various wind band repertoire by minority composers; Minneapolis-based music educator Adrian Davis gave a presentation on the underrepresentation of African-American males in music education, examining recruitment and retention from K-12 to professional levels; and a diverse and illustrious panel of music educators led a discussion on equity and inclusion in the music classroom. And of course there was John Mackey’s booth in the exhibit hall, taken over by underrepresented composers and generating quite a buzz throughout the convention center.

Luckily it seems that things are slowly changing.

Thanks to Mackey’s offer, nine of us showed up to exhibit our music together at a shared booth sponsored by Mackey’s publishing company, Osti Music. The participants were Erin Paton Pierce, Kevin Day, Evan Williams, Nicole Piunno, Haley Woodrow, Denzel Washington, Jennifer Rose, Omar Thomas, and myself. We took shifts working the booth, during which time we could display our scores, share recordings, and chat with conference attendees. The reaction from visitors was overwhelmingly positive; band directors are eager for new, high-quality repertoire to perform with their bands. While some people were initially confused that there was no music by John Mackey at the booth, they were almost always content to discover something unexpected from the composers who were present. Other visitors claimed they simply had to stop by to see what all the fuss was about. Apparently, this kind of booth-sharing has never been done at Midwest before, and there was even some confusion over whether it was within Midwest Clinic’s exhibit hall regulations. In the end, all the rules had been followed and we forged ahead.

Six of the composers who were promoting their music at the Osti Music Booth at the 2017 Midwest Clinic.

Away from the exhibit hall, our participation at the Osti Music booth was an easy conversation starter when networking with band directors, and it provided an extra layer of legitimacy for those of us mired in impostor syndrome. Between my shifts at the booth and networking throughout the conference, I made numerous new contacts with potential collaborators and fellow composers, sold a few scores, and even recruited some new members to a consortium commission I’m organizing. On top of that, I heard impressive performances by ensembles ranging in age from middle school to professional, learned about current trends and needs in music education, and discovered a thrilling assortment of music from my new composer friends.

All in all, attending the Midwest Clinic is an outstanding experience for composers of music for wind band. And while an air of exclusivity remains intact throughout the conference culture and programming, the tides are slowly turning. The messages of clinicians addressing issues of diversity and inclusion are now being heard, and John Mackey’s generosity in sharing the Osti Music booth set an incredible example of what it means to be an ally to underrepresented composers. I hope that this shift will continue, and I’m eager to see what kind of impact these efforts will have over the next few years at the Midwest Clinic and in instrumental music programs throughout the country.


Our group of “Osti Music” composers had a celebratory dinner with John Mackey.


Requited Music: Anatomy of a Scoring Gig

I’m writing this in mid-December, on Opening Day of the Mississippi Civil Rights Museum.  The museum has been in the news a lot recently.  Years in the planning, developing, and building, it takes visitors on a comprehensive voyage through the devastating, sobering and yet at times uplifting stories of those who dedicated their lives to the fight for equality for all Americans, regardless of race.  Some, such as Medgar Evers, sacrificed their lives. The Mississippi Civil Rights Museum is a place to learn the history of those who preceded us in the ongoing struggle against racial tyranny and to pay homage to their courage as we continue the battle to this day.

Monadnock’s style was about the closest marriage between music and picture I’d encountered in over two decades of film composing.

In July 2016, I received a call from Monadnock Media asking if I’d be interested in scoring one of their short films intended for the soon-to-be-opened museum.  I had, in May 2016, completed my first assignment for Monadnock: the score to a film to be perpetually screened in the media room at the Franklin Delano Roosevelt Presidential Library and Museum in Hyde Park, New York.  That film, titled 24 Hours That Changed History, is intense, with multiple simultaneous images flashing by for a few seconds at most; it is a concentrated, informative history of American military conflict from WWI to the present, with the attack on Pearl Harbor as the centerpiece. The entire story flies by within seven minutes.  My work was highly detailed and quite precise, and it was subjected to multiple layers of revision before a total meeting of the minds between composer and producer was established.  But the collaboration was ultimately a success.  It was a process for me to learn Monadnock’s style, both in terms of scoring and in terms of filmmaking.  Scoring-wise, it was about the closest marriage between music and picture I’d encountered in over two decades of film composing.  Notes and phrases had to fall within pauses in the voice-over, and the music had to dance in lockstep with the constantly transforming, evolving narrative.  Even the tightly scored National Geographic specials I composed in the 1990s and 2000s—with the rustling of the African trees accompanied by similarly sibilant-sounding cymbal crashes—could not compare with the molecular detail this assignment required.  As a film, the piece felt like a cross between documentary and branding: moving at speed, but telling a true story with a strong affective undercurrent, every note a signifier for the shifting emotions of the story.

Here’s an example of the FDR project. (There’s a moment of my music after the Space Race song.)

So after the FDR film, by the time Monadnock offered the Mississippi Civil Rights Museum gig, I figured I pretty much knew the lay of the land. But this assignment was a new world entirely.

Monadnock Media occupies a big barn in Western Massachusetts.  There, they develop and mock up dozens of projects, designing media rooms and creating films for locations around the country—from the Boston Science Museum to the Choctaw Cultural Center in Durant, Oklahoma.  Their approach is unique, with images projected on multi-faceted screens, often consisting of geometrical forms of varied shapes, sizes, and depths.  This allows them the flexibility to project as many as five simultaneous images on different planes, or several repeated images—or just one.  The voice-over and the music are often the elements that lend consistency and continuity to what is at times an almost non-linear visual narrative structure.

The new project that Monadnock asked me to score was the story of Emmett Till, a 14-year-old African-American boy who, while visiting rural Mississippi from Chicago in 1955, was brutally tortured and murdered for allegedly whistling at a white woman.  The story follows Till’s kidnapping and lynching, through the funeral, worldwide publicity, the trial and acquittal of the accused, and the ensuing national outrage, with boycotts and protests that led to what many consider the origins of the civil rights movement.  The producers and I shared the unnerving sensation that this piece of history is, sadly, very relevant today.

To prepare my compositional work for the Emmett Till project, I spent a few days in the summer of 2016 immersing myself in Mississippi Delta Blues.  I re-familiarized myself with some of the great singers, songwriters, guitarists, and harmonica players I’d heard all my life: Robert Johnson, Muddy Waters, James Cotton, John Lee Hooker, Mississippi John Hurt, and many more.  I reviewed how the blues of the Mississippi juke joints evolved, through migration, into the harder-driving electric blues of Chicago, and how some of the greatest artists of the rock ’n’ roll era—from the Rolling Stones to Bonnie Raitt—are living legends of that legacy.  I was reminded of how Mississippi gave birth to so much of the soundtrack of our lives.

The soundtrack of Monadnock’s Emmett Till story was not, however, a blues score per se.  Monadnock asked for a sparse soundscape, painted with drones, punctuated with rhythm—almost sound design, with a few precise hits on key moments.  The role of the blues was—again—a signifier, a recurring resonance of place and time, and, in a deeper way, of the emotion we as a culture associate with that sound: the bitterness of loss and suffering; the sweetness of moments of redemption. In this score, the blues surfaces as a kind of bloodletting, dropping into the texture as a subliminal reminder that you have to let yourself hurt in order to heal.

As it turned out, the score was not sparse, in the sense of spare; it was virtually wall-to-wall.  But it was truly underscore, lying beneath the voice-over and other sound effects, and emerging in moments of breath and cadence.  To avoid somnolence and animate the long tones that underpinned much of the story, I populated my drones with an active, shifting overtone structure; it was in the froth of harmonics that the real interest lay. In consort with the growing intensity of a scene, I allowed the overtones to emerge like the entrance of a violin section, subtly detuning some, emphasizing others, weakening the fundamentals.  This averted stasis and kept the score dynamic.
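The overtone treatment described above can be sketched as a small additive-synthesis example. Everything specific in this sketch (the drift rates, detuning amounts, and harmonic weights) is a hypothetical illustration of the general technique, not the author’s actual patch:

```python
import math

SAMPLE_RATE = 8000  # low rate keeps this sketch fast

def drone(fundamental=98.0, n_harmonics=8, seconds=2.0):
    """Render an additive drone whose overtones drift in level and detune.

    The fundamental is deliberately understated, and each overtone
    'breathes' at its own rate, so the interest lies in the shifting
    froth of harmonics rather than in a static tone.
    """
    n = int(SAMPLE_RATE * seconds)
    out = [0.0] * n
    for h in range(1, n_harmonics + 1):
        detune = 1.0 + 0.002 * math.sin(h)   # slight per-partial detuning (hypothetical amount)
        freq = fundamental * h * detune
        base = 0.15 if h == 1 else 0.5 / h   # weakened fundamental
        lfo_rate = 0.1 * h                   # each overtone drifts at its own rate
        for i in range(n):
            t = i / SAMPLE_RATE
            # slow amplitude drift: overtones emerge and recede over time
            amp = base * (0.6 + 0.4 * math.sin(2 * math.pi * lfo_rate * t))
            out[i] += amp * math.sin(2 * math.pi * freq * t)
    peak = max(abs(s) for s in out)
    return [s / peak for s in out]  # normalize to +/-1.0
```

Because each overtone’s amplitude cycles at a different rate, the harmonic balance of the drone is never the same from moment to moment, which is what keeps a sustained tone from turning static.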

I kept a running compendium of re-usable themes: the “Emmett Till” theme; a “danger” theme; the “kidnapping” theme; a “mother’s grief” theme; etc.  I also kept my eye on the overarching harmonic development of the score.  Its basic key is G minor, but it moves up through A minor, Bb minor, and C minor and, at times of setback or resignation tinged with hope, down to Eb major.  (I’d toyed, for a moment, with the idea of structuring the show’s long-term harmonic progression to mirror, at a deeper architectural level, the blues, but the picture didn’t call for it, so I’ll have to leave that conceit for another project.  However, the thought did increase my awareness of how the evolving keys related not only to one another, but to blues form as well.  With the key centers most prominently moving around a Gm-Bbm-Cm axis, there’s a tinge of that sub-structure in the score.)

Attach your ego to the collaborative process, not to how cool your music is.

Process-wise, this was also a close collaboration with the producers, which meant constant tweaking, many re-writes, and revisions sometimes requiring shifts of a single frame.  Fortunately, mutual trust had been established through the FDR project, and through a strong opening cue I’d composed for the film.  So I didn’t worry too much about revisions.  The producers, director, and editor have a vision for the project; it’s the composer’s job to learn their perspective, and adjust when the music’s energy isn’t quite right.  And for the aspiring media composer, I would not say, as some do, “Abandon ego all ye who enter here”; rather, attach your ego to the collaborative process, not to how cool your music is.  If you’re successful, the marriage of picture and story to sound is a greater high than the sum of its parts, and, like requited love, it returns to you, the composer, a sublime satisfaction in having given of yourself in the creation of a multi-dimensional, multi-sensory entity.

A screenshot of the final revisions of Rick Baitz's score for Monadnock's multi-media Emmett Till presentation at the Mississippi Civil Rights Museum


Composing (and revising) the Emmett Till music took me into the fall of 2016.  Once we had a score that the client (the museum) was satisfied with, the project was set to the side until the final voice-over could be recorded. Meanwhile, I began work on a second film for the Mississippi museum: Freedom Summer, about the summer of 1964, when activists in Mississippi battled voter suppression by bringing in civil rights advocates from all over the country to help register local citizens to vote. Again, the sacrifice was unfathomable, with several people losing their lives. Yet the national attention that was brought to bear on Mississippi’s Freedom Summer helped lead to the passage of the landmark Voting Rights Act of 1965.

Although there were similarities to Emmett Till, the Freedom Summer project had many new components.  The screen was set up in the shape of a church, with images projected on different parts of the altar.  The film itself began and ended with licensed recordings of gospel songs, with 11 songs interspersed throughout the film.  Because there was so much African-American music already in the film, the use of blues elements in my score was minimal; my role was to move the story forward and, moment-to-moment, embody the sentiment of the scene.  Again, I was asked for a sparse, drone-based score, punctuated with percussive pulsations, discreet melody, and in one case, my own gospel-inflected tune.  For continuity, I created a harmonic map, listing the keys of all the pre-recorded songs in the film, and carefully composed in relation to them.  My first cue follows a hopeful song in A major called “Welcome Table” featuring the refrain: “I’m gonna be a registered voter one of these days.” I enter in the relative minor, to images of intimidation and violence.  Although the entire film doesn’t cohere harmonically quite as much as Emmett Till, there is a lot of continuity, with the key centers of Ab and F#, and their close relations, holding the most sway.  Here, I roll with the story like a musical narrator, again with an active overtone presence, emerging into sonic prominence in moments of emotional intensity.  And again, there was strong hands-on involvement from the producers, with many detailed revisions.

By the way—on the business side—by this time a clause was added to my contract that increased my pay after a certain number of revisions; the producers recognized that their process required a lot of extra work on the composer’s part. This clause was invoked after the voice-over was finally recorded on both Emmett Till and Freedom Summer—more than a year after Till had been put to one side.  Monadnock managed to contract Oprah Winfrey for the voice-over in the summer of 2017.  After editing Oprah’s contribution, both shows were sent back to me for final musical tweaks, in which almost every single cue needed some adjustment.  Sometimes one note got moved seven frames (about a quarter of a second), so everything had to be carefully shifted from that point on.  This took a couple of weeks to get right.

Meanwhile, I was sent one more film to score: Why We March.  This was meant to be a short assignment featuring two uplifting songs celebrating the power of peaceful protest.  This project, as unique as the others, is shown on S-shaped tabletops, actually video screens, that curve through the room, where kids can sit and watch the images roll by.  I will confess that this was the hardest of the four projects I’d done for Monadnock.  Perhaps I was burnt out. I had just moved my studio from New York’s bustling Midtown to a huge space facing Riverside Park in upper Manhattan, and had dozens of boxes to unpack.  I had also just finished recording three chamber and electronic pieces for inclusion on a CD to be released by Innova Recordings, and I was working nonstop on the mixes.  I had family responsibilities.

Plus, there was a temporary score in place that had to be overcome.  A temporary score, also known as a “temp track,” is often inserted by the director to give the composer an idea of what musical approach the film’s creators want.  At its best, it provides the composer with an accurate guide, but it can also be deceiving.  Maybe the director intended it to convey the desired rhythmic drive, but the composer mistakenly interprets the temp track as an instrumentation guide.  Sometimes the director drops it in as a placeholder, just to say, “we want music here,” so the composer has to be aware that they are not meant to emulate the temp cue musically.  Film composers speak of the challenges of “temp love,” when a filmmaker is so attached to the temporary score that nothing the composer does is right.  In the case of Why We March, the producers had differing ideas about what music would work best.  This is part of the collaborative creative process, and it eventually led to a mutual understanding.  Until then, approval for my songs was slow to materialize, and each took several tries before I got it right.  I stuck with it and Monadnock stuck with me, for which I’m grateful.  And in the end, the client was happy.

Driving back from a 2017 summer residency at the Vermont College of Fine Arts, where I am a faculty member, I stopped in at Monadnock’s headquarters in Hatfield, Massachusetts.  At that time, I had not yet performed the final revisions on any of the Mississippi projects.  To my amazement, Monadnock had prepared all three projects for screening via virtual reality (with binaural sound).  They plopped the goggles on my head and the headphones on my ears, and I was transported, virtually, into the media rooms at the Civil Rights Museum, with my music prominent in the mix. When they showed the mock-up of Why We March in VR, I was literally inside the museum space, looking down at the tables that snaked across the room.  This was one of the trippiest, most immersive artistic experiences of my life—and it really let me hear how my music was going to sound in the museum.  In fact, I instantly recognized some weaknesses in one of the Why We March songs; it needed more oomph at the beginning and end. So I revised it immediately upon returning to New York.

Successfully completing any collaborative composing project requires substantial craft, persistence, and access to the great well of inspiration that resides in intuition.  In the interest of honesty, I’ll reveal here that at the beginning of the whole Mississippi process, with Emmett Till, I was relying more on craft and persistence than inspiration.  I’ve felt, ever since I began composing film music seriously, that my axe, so to speak, is my ability to emotionally empathize with whatever story I’m composing to.  I tell my students to imagine themselves in the scene, whether it’s under a tree in the African forest for a National Geographic film on chimps, or in the quiet kitchen of a pair of disabled women in Illinois, getting ready for their day, for a POV documentary.  And even though I’m experienced enough to know that craft will bring emotion along with it, I’d never personally had the experience of violent discrimination to the extent that I was seeing in the Emmett Till film, and I wasn’t sure I was feeling it enough.  But after uncounted viewings of that and the other films, the utter tragedy seeped into my psyche, and I felt tremendous sadness for the victims whose stories I was living with, and embodying musically, day after day.  And although I can’t say I could ever really know the extent of their pain, I did, finally, become much more emotionally connected to the story I was telling, and a circuit was completed.

The entire process of the Mississippi Civil Rights Museum project reminds me of the title of a novel by the Argentinian author Manuel Puig, Blood of Requited Love.  Somehow, it took a lot of hard work—musically and psychologically—to get there (isn’t it often so), but the reward is what the work gave back to me: a deep experience, albeit painful and eye-opening, of the reality of our country and world.  A challenging journey, but a tremendously fulfilling one.

[Ed. note: Additional video excerpts of the Monadnock Media presentations for the Mississippi Civil Rights Museum scored by Rick Baitz are available at the Monadnock Media website.]

Polychromatic Music

Music seems to be at the forefront of an increasingly pervasive process in which technological simulation is cheaply and efficiently substituted for authentic human creation and expression. Further, an aesthetics of technological ‘perfection’ has arisen that values flawless quantization, pitch correction, and production over the unique, imperfect vitality of human creative expression. Polychromatic music embodies a new paradigm and aesthetic: a humanistic counterbalance to rapidly emerging technological trends which, when they don’t replace human involvement, seem to minimize and/or trivialize it.

Even as a child, I was aware that the chromatic/modal tonal languages were nearing exhaustion in terms of new areas of exploration and creation, and this stoked a curiosity and an intuitive search for new dimensions of musical language. As an undergraduate music major, I found that many of the developments of the late 19th century (chromaticism) and the 20th century (stochastic, aleatoric, spectral, microtonal, and algorithmic music) made sense from this perspective. Yet they seemed difficult to assimilate and understand without a conceptual framework to anchor these new perceptual experiences in a larger foundational context and aesthetic.

With the emergence of AI (artificial intelligence) ‘creativity’ now being used to ‘compose’ music, many new questions and concerns have arisen. Any process that can be formalized in rules or clear, quantifiable instructions can be efficiently executed by a computer. To me, it seemed that the innovations of stochastic (random operations), aleatoric (chance operations, i.e. dice rolling), serialist (predefined patterns), and algorithmic (step-by-step procedures) composition were likely candidates for being subsumed within AI generative computation systems.

The human process of creativity lies on a continuum between compositing and composing.

A further distinction is necessary here between creative ‘composing’ and ‘compositing’. Artificial Intelligence generativity (so-called “creativity”) is based on a compositing process; it’s basically all just recombinations of pre-existing data. While it is clear that the human process of creativity lies on a continuum between compositing and composing, a salient aspect of human creativity involves the creation of new ‘data’ rather than the novel recombination of prior ideas.

This leaves us with the contemporary methods of new spectral/timbral and pitch languages as wide open frontiers for exploration and creation. With respect to new timbral languages, I think of spectral music broadly as a paradigm and aesthetic where an emphasis is on the exploration of the timbral aspects and implications of complex sounds. This would encompass harmonics, harmonic (overtone) interactions, and new frontiers in harmony (polyphony). This is an immense world of its own where technology has provided endless possibilities for exploring sound design and works of sonic creation (sound arts).

Another compelling area of exploration lies within the realm of new pitch languages—the xenharmonic philosophy and microtonal/macrotonal pitch definition methods. For the past century, the creation and use of many microtonal methods have been an exciting development in music, presenting new possibilities for differing, extended explorations of ‘tonality’. It seems that the main hindrance to the wider understanding and use of these methods is the lack of any underlying conceptual framework.

At present, we have a growing number of mutually exclusive microtonal pitch definition methods, each with its own notation. As a musician coming from an empirical perspective (practice vs. theory), the impractical situation of learning a new notation system for each microtonal pitch method is a persistent impediment to a larger, unified progress beyond merely creating new microtonal scales. This is where polychromatic music, as a system, comes in.

One way of understanding and distinguishing our contemporary musical terminology of xenharmonic, polychromatic, and microtonal is by a rudimentary differentiation of philosophy, system, and method:

Xenharmonic refers to a philosophy that regards the infinitely many pitch scale division methods applicable to the pitch continuum as equally valuable. It also expresses an aesthetic of freedom and openness toward any and all methods of pitch scale division and the exploration of their melodic, harmonic, rhythmic, timbral, and other implications in new musical compositions.

We have no words for many perceptual aspects of hearing.

The polychromatic system is an intuitive, unifying conceptual framework for exploring any conceivable pitch division method. Our language is grounded in visual concepts; we have no words for many perceptual aspects of hearing corresponding to imagery, visualization, dimension, space, etc. As a result, we are faced with communicating auditory concepts in analogy or metaphor. My perspective is to link visual and auditory perceptual concepts into an idea of ‘pitch-color’. The visual basis here is the color spectrum: red, orange, yellow, green, blue, indigo, violet. From this intuitive basis, we can move from a vague flat/sharp conception of pitch to more refined and distinct conceptual ‘pitch-color’ anchors. So, with yellow as a basis of reference, orange and red would be progressively flatter, and green, blue, and violet would be progressively sharper. The result is a color spectrum with integrated visual/audible associations, running from (infra/flat)red to (ultra/sharp)violet. The distinctions of flat and sharp become an increasingly refined spectrum relative to the chromatic (macro)pitch division method, i.e. C, Db, C#, etc.
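To make the pitch-color idea concrete, here is a minimal sketch in Python. The specific cent boundaries for each color band, and the function name itself, are my own assumptions chosen only to illustrate progressively flatter and sharper anchors around a yellow reference; they are not part of any formal polychromatic notation:

```python
# Hypothetical color bands for a pitch's deviation, in cents, from its
# nominal chromatic pitch. Yellow is the in-tune reference; red/orange
# lie flat of it, and green/blue/violet lie progressively sharp.
BANDS = [
    (-50, -30, "red"),     # very flat
    (-30, -10, "orange"),  # somewhat flat
    (-10,  10, "yellow"),  # reference: "in tune"
    ( 10,  25, "green"),   # somewhat sharp
    ( 25,  40, "blue"),    # sharper
    ( 40,  50, "violet"),  # very sharp
]

def pitch_color(cents: float) -> str:
    """Return a color-name anchor for a deviation in cents (-50..+50)."""
    for lo, hi, color in BANDS:
        if lo <= cents < hi:
            return color
    if cents == 50:
        return "violet"
    raise ValueError("deviation must lie within +/-50 cents")

# For example, a pitch 20 cents flat of C would be an 'orange C'.
```

In a full polychromatic context the bands would presumably be defined relative to whatever pitch division method is in use, since, as noted below, the pitch-colors of the system are relatively defined.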

The polychromatic system uses the chromatic language as a common point of departure. In this context, the chromatic language is characterized by the use of letters as pitch names and by the representation of musical intervals numerically (and modally: C-B as a major 7th rather than a 12th). Also, since the pitch-colors of the system are relatively defined (by the method of pitch division), the system creates an intuitive bridge between differing microtonal scale derivation methods.

Microtonality consists of the various, divergent methods of pitch division, notation, and theory. Without a unifying conceptual framework, these methods remain mutually exclusive and excessively difficult to assimilate in a complementary manner.

A point of clarification: with respect to an integrated philosophy-system-method perspective of music, the chromatic musical language is a system, while the various temperament derivations (meantone, well, just, equal, etc.) are methods (of pitch definition).

The above categories are generalized for preliminary understanding. I see polychromatic music primarily as a system, and secondarily as an aesthetic. For me, this aesthetic involves evolving reflections on humanism in an era of increasing technology. And this is why I devote the effort to physically learn and perform my compositions: to create not only demonstrations of new musical possibilities within the polychromatic framework, but also examples of the human musician utilizing technology in a creatively assistive fashion, rather than the human musician creatively assisting (editing, compositing) increasingly sophisticated technological processes.

In the next article, I will focus on describing my approach toward learning and composing within a polychromatic system.

Michael J. Schumacher: Composing is Listening

Michael J. Schumacher’s 2002 artist statement on the website of the Foundation for Contemporary Arts is a very succinct summary of his aesthetics:

I am interested in context, in defining boundaries and not crossing them. A piano piece is one thing, a sound installation another. The forms are different, the audience is different. The time, the place, it all has to be taken into account. Ultimately, we’re all collaborating with whomever’s participating.

Nevertheless, Schumacher engages in an extremely wide range of music-making—from immersive Room Pieces and other sound installations to collaborations with choreographer Liz Gerring to composing and performing all the “songs” for his indie “dance pop” (for lack of a better term) band diNMachine. In Schumacher’s home in Sunset Park, where we visited him to have an extensive conversation about his musical activities, there are tons of speakers everywhere and a great collection of vintage synthesizers, but also a grand piano in the middle of his living room as well as a small bust of the composer Franz Schubert that’s just hanging out near a window in his dining room.

“I love lots of kinds of music; I’m just aware of the differences,” Schumacher explained when I asked him about the wide variety of his musical endeavors. “I don’t think that leads to only liking one particular kind of approach.  I happen to have really fallen in love with computer algorithms.  I have to say that.  It opened up a way of listening for me that was really fantastic, and it stays fantastic now.  But I was in rock bands as a kid.  I played in some bands up until I was in my 30s. And I improvised a lot. I like having that outlet for that part of my musical being.”

Although Schumacher is deeply interested and involved with a wide range of musical styles, he firmly believes that certain kinds of music-making work better in certain kinds of spaces and that doing the wrong music in the wrong space is unfair to audiences and musicians alike, since it sets up unfulfillable expectations.

“A concert hall is a place for storytelling, but it’s a place where you know the story,” he asserted.  “You know it’s going to be an arc form.  You know there’s going to be a climax and a resolution.  And you’re enjoying that in a place of comfort, in a place of audition, a place watching a storyteller—whether it’s a conductor or an actual person telling the story—and this unfolds in a very predictable way.  Your body’s relationship to that is key: being in a seat and looking forward orients you towards a certain way of perceiving time.”

Schumacher’s deep concern for how sound installations and other primarily electronic music creations—both his own and those created by others—were perceived led him to establish several performance spaces designed specifically for such work, most notably Diapason in New York City.

“The first one was Studio 5 Beekman,” Schumacher remembered. “It was a little office space.  You entered, and there was a small foyer.  This kind of gave you a little bit of a buffer between the world and then the gallery, which was behind a door, beyond the foyer.  That little buffer was very nice, because it let people kind of take a breath.  For me it was also for limiting vision.  Turn the lights down.  It doesn’t have to be dark, but just make the visual less explicit.  I used to use a red light bulb, which got misinterpreted as a kind of gesture of some sort, but I just felt it was a dark color that allowed you to see without making the visual too much of a thing.  I think personally in those situations, it’s not good to have a lot of sound coming from outside.  If people want that, I suppose you can have a space like that, but for the most part I feel people want to be able to control the environment and not have to deal with sirens and things like that.”

Unfortunately, after more than a decade, Diapason proved unsustainable, and now Schumacher is contemplating hosting sound installations for a small invited audience in his own home. It’s a far more intimate environment than the theaters which present the Liz Gerring dances he scores or the clubs where his band diNMachine might typically perform. And he is well aware that those spaces result in different ways of perceiving which are best served by different approaches to making music. But he’s also aware that not everyone listens to music the same way, regardless of the space, and is eager to create things that have an impact for anyone who hears them.

“If we’re talking about the ideal listener-viewer, I think that’s one thing.  If we’re talking about a typical audience, that’s another. Both are obviously important.”

December 6, 2017 at 2:00 p.m.
Michael J. Schumacher in conversation with Frank J. Oteri
Video and photography by Molly Sheridan
Transcription by Julia Lu

Frank J. Oteri: The homepage of your website has a sequence of photos of all these objects: a teapot with an audio speaker in it; two circuit boards interconnected; and, perhaps the most striking one, a Philadelphia Cream Cheese container. It’s tantalizing, but none of those things have audio links on them, so I suppose that’s just to whet people’s appetites.

Michael J. Schumacher:  I think at some point I did have audio where each object would be a separate channel, and as you clicked on more [of them], you’d get more of the piece.  I don’t know what happened to that.  I don’t really manage my own website; I don’t know how.  My girlfriend does that.  Sometimes things get disconnected or something changes, and it takes us a while to figure out that it happened and to fix it.

FJO:  Wow, that’s a pity.  I would have loved to have clicked on all of those images to hear all those sounds together. So they’re all one piece?

MJS:  Well, the way I work in general is I’m basically writing one piece all the time.  I’m just adding things to it, and then whenever I present it, it’s a part of that piece.  That’s how I look at that realm of the multi-channel stuff; [those sounds] would have been a particular group of things that would belong to this larger concept.

“The way I work in general is I’m basically writing one piece all the time.”

FJO:  So all those things make different sounds, but what were those sounds?

MJS:  Well, they’re all just speakers; they’re not instruments.  I use these objects—the cream cheese container or the teapot—to create a resonant body. I travel with these little drivers and can improvise a resonator on the spot.  I can go to Lisbon and put the speakers in beer glasses or garbage cans or things like that.  But I sometimes get attached to certain resonators, like the Philadelphia Cream Cheese container.

FJO:  I can imagine that a teapot could have some effective acoustic properties based on its shape, but what’s so special about a Philadelphia Cream Cheese container?

MJS:  Well, I think when they designed the container, they were clearly thinking acoustically: something works.  It came from Costco.

Schumacher's Philadelphia Cream Cheese speaker

FJO:  Even though the audio links on those images are currently not working, you’ve made so much of your music available through your amazingly generous and seemingly limitless SoundCloud page. However, about a week ago I started embarking on a plan to listen to every one of the files you uploaded to that page in order—I failed because there’s so much material there. But I also failed because as I was scrolling through the files, I came across one called Middl which had a waveform that immediately made me want to hear it. Most waveforms are somewhat random looking and nondescript, at least to me, but this one had a striking regularity to it. It was unlike any kind of SoundCloud waveform I’d ever seen.  So I jumped ahead.  I cheated on my own listening plan, because I had to hear what that thing sounded like.  And it was a really transformative hour of my life.  It sounds to me like it starts with a telephone dial tone.  Is that what it is?

MJS:  No.

FJO:  What is it?

MJS:  It’s a synthesizer oscillator, and it’s being played by a computer.  The oscillator is a kind of additive synthesizer with eight partials, and these partials are being manipulated by the computer.  So it’s pretty simple.
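
The kind of sound Schumacher describes can be sketched in a few lines of code. The harmonic partial ratios and the slow control cycles below are illustrative guesses, not his actual patch — the point is simply eight partials whose balance is continuously reshaped by a control process.

```python
# A minimal sketch of an additive oscillator with eight partials whose
# amplitudes drift under computer control. All specific numbers here
# (f0, drift rates, weighting) are invented for illustration.
import math

SAMPLE_RATE = 44_100

def additive_frame(t: float, f0: float, amps: list[float]) -> float:
    """Sum eight harmonic partials of f0, each weighted by amps[k]."""
    return sum(a * math.sin(2 * math.pi * f0 * (k + 1) * t)
               for k, a in enumerate(amps))

def render(duration: float, f0: float = 220.0) -> list[float]:
    """Render samples, slowly rebalancing the partials as time passes."""
    n = int(duration * SAMPLE_RATE)
    out = []
    for i in range(n):
        t = i / SAMPLE_RATE
        # "Computer control": each partial's amplitude drifts on its
        # own slow cycle, so the timbre shifts continuously while the
        # fundamental stays fixed.
        amps = [0.5 * (1 + math.sin(2 * math.pi * 0.1 * (k + 1) * t)) / 8
                for k in range(8)]
        out.append(additive_frame(t, f0, amps))
    return out
```

A steady fundamental with slowly shifting partial balance is exactly the sort of signal that can read, at first hearing, as a dial tone.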

FJO:  It sounded so much like a telephone dial tone to me—so much so that since hearing it, I can’t interface with an actual telephone in the same way.  I’m now giving it all these musical associations.

MJS:  That’s really good.  I’ve actually tapped into something.

FJO: But apparently not intentionally.

MJS:  No.  Or maybe it was.  Maybe that was my La Monte Young moment of listening to the wires and having it inspire me.

Some of the hardware Schumacher uses to create his music.

FJO:  Another sound file I listened to had a similar effect on me. It was a sound file for the Riga 2014 Room Piece, which also lasted a bit more than an hour. After the file ended, I took off my headphones and walked down the hall.  I was washing my hands in the bathroom sink, and when I turned off the water I was suddenly transfixed by another sound I couldn’t immediately place. The room was completely empty, but there was this steady sound. Maybe it was a heat pipe. But it didn’t matter. I just wanted to listen to it, and it was because I had just listened to your Room Piece. Of course, there’s a whole history of pieces that make us more attuned to the sounds around us that most of us take for granted, starting with John Cage’s 4’33”. All of Pauline Oliveros’s Deep Listening projects were also part of this tradition. But whereas Cage and Oliveros’s reasons for pursuing a more expansive way to listen seem almost political and even spiritual, the relationship to the larger sonic environment that your music opened me up to has been purely aesthetic; it just made me focus on some interesting sounds.

MJS:  I think a lot about listening; for me, composing is listening and so it’s how you listen and how you respond to the potential meaning in a sound.  I think that what’s become really interesting since Cage is how much you can do in that regard: how and where you can find meaning; how you can juxtapose meanings; how you can suggest resonances beyond sonic resonances to real life associations. This explosion of meaning also includes pre-Cage sound—the relationship between a D and an A, or a D and B-flat. That also has meaning. What’s so exciting about making music now is you’re really—I don’t want to say manipulating, because I don’t try to manipulate meaning. I try to suggest ways that listeners can explore meaning.

“For me, composing is listening.”

FJO:  When you talk about certain sounds being pre-Cage, people listening to that D and that A in the era before Cage and Oliveros were generally listening unquestioningly to a disembodied, abstract, and perhaps idealized relationship between certain sounds typically through the filter of somebody playing an instrument or somebody singing, either themselves or someone else in their home or in a concert hall. But obviously when any musician makes these intentional sounds there’s all this other sound that’s happening, too, much of which is unintentional but just as present.  And I guess in the world we live in now, what we could call the post-deep listening moment, we are at least aware that every sound that’s around us is something that obviously we can hear, even if we’re not consciously listening to it. So how does that change the relationship of what a composer does—for you?

MJS:  These sounds that are outside the specific performance that are accompanying it in some way can be invited in or in some way interact with the performance.  I think it can work both ways.  You can be in a situation and somebody making a sound or some sound coming from the environment can affect your reception of the “musical” sounds.  Let’s just call them musical sounds.  On the other hand, the musical sounds can affect your reception of these other sounds.  So what I like to work with are short, suggestive sounds that can expose meanings in sounds outside the performance. A simple example would be if I play a short note on the piano and by coincidence—and it’s amazing how often this happens—you might hear a car horn in the distance, and that car horn might be the same pitch.  And so the mind relates the two.  In an abstract way, not in a way that it’s saying that’s a car horn, but in a way that it’s saying that’s a B.  That’s the same pitch.  That’s a musical thing, an abstract or musical way of perceiving that car horn.  I really like that, and so I like to put out there enough information that the whole sonic environment can participate in that reception of sound.  Not just a level of concrete associations, but also in these abstract associations like rhythm and pitch.  The other reason to have short events is because they articulate a time-space kind of situation, as do most of the sounds we hear in the environment.  Very few sounds in the environment are steady and non-stop.  If they are, after a while we tend to ignore them.  Most of the sounds we perceive are perceived momentarily, and we’re jumping from one to another.

FJO:  That’s sort of the opposite of the drone aesthetic.

MJS:  Not really, because drones—like La Monte Young’s drones—are incredibly active.  When you’re listening deeply to them, you’re listening to lots of things inside the drone.  For me, a drone is actually exactly that.  It’s just maybe a different way of approaching it.  I think, in both cases, we’re talking about really heightening the moment and trying for a kind of perceptual present.

FJO:  I grew up in New York City amidst 24/7 loud Midtown Manhattan traffic; I had to train myself to be able to fall asleep with the noises of sirens and everything else.  I remember the first time I ever took a trip to the countryside and there were crickets.  I could not fall asleep because it was a constant sound, and it was too close to a musical experience for me.  But of course, a musical experience could also be a completely random assortment of sounds, but I was able to disassociate that. I guess that speaks to what you were saying about how, if you’re not really paying attention, a drone might seem like this constant thing, but there’s lots of other stuff in it.

MJS:  I think that at the beginning, the point of the drone is that superficially it seems like nothing is going on.  But what it’s doing is it’s giving you this time dimension of really saying, “Okay, now I’ve been listening to this for five minutes, and suddenly I’m hearing things that I didn’t hear before.  And then the more I do that, the more I hear in this apparently monolithic sound.”  There are all these details that can only be accessed through it—first of all—being so-called unchanging, but also giving the listener the time to contemplate.

FJO:  There’s a piece of yours that sort of does that in a weird way.  But maybe, once again, what I thought I was hearing is not quite what you were doing, like how Middl isn’t a dial tone. The piece is called Chiu.

MJS:  That’s a piece Tom Chiu and I performed together.

FJO:  Aha!  Okay.  That’s why it’s called that.  And I hear his violin, but what it sounds like you’ve done to it is created some kind of artificial simulacrum of a Doppler effect.  At least that’s what it sounded like to me.

MJS:  Well, that was a jam. I played my synthesizer.  It’s the same synthesizer that I use in Middl.  It’s made by Mark Verbos, who used to be here in Brooklyn and now has moved to Berlin.  It’s a fantastic Buchla-inspired approach to synthesizer making.  It was an improv, but we rehearsed a bit.  It was really kind of Tom’s composition.  And he has this way of playing—I guess that’s what you hear as the Doppler, this kind of slow pitch bend, kind of this constant, constantly shifting, almost glissing pitch world.

One of Schumacher's synthesizers connected to a speaker made from a Bush Beans can.

FJO: So that was all him and was just a product of the improvisation?

MJS:  Yeah.

FJO:  So once again, I made all these incorrect inferences about what I was hearing.  This is the weird thing about disembodied sound, whether you’re hearing something on a recording and there aren’t a lot of program notes for it or you’re listening on your headphones on a website with no additional information. These experiences are very different from being in a space and watching a performance or a sound installation and seeing how it works as you’re listening to it.  There’s only so much your ears can tell you about what’s going on; the eyes give away the secrets.

MJS:  They can, but sometimes with synthesizers you don’t; sometimes the player doesn’t know what’s going on.  At least I don’t. I mean, I have no idea.

FJO:  In the very beginning of our talk you were saying the images on the homepage of your website originally linked to sound files, and someone presumably could turn one of them on, but when they turned a second one on, the first one would still be on and then you’d have this cumulative effect of all that sound.  In essence, the Room Pieces also work this way because you have these different sonic modules that all exist separately but the piece is about the cumulative effect of hearing them all spatially in a particular space.  It isn’t necessarily duration-based, which makes it something you wouldn’t listen to for causality in the same way as other musical compositions.

MJS:  What do you mean by “in the same way”?

FJO:  Well, like the D and the A you mentioned before. Let’s say there’s a car that’s suddenly on B-flat, and that’s totally random, but you might—because of how your mind perceives time-based musical relationships—think you’re hearing a flat six if you hear it after the D and the A.  There’s a perception of a developmental relationship, a relationship between the third sound and those other two sounds because of the order they are in.

“I didn’t want a sense of utter randomness…  That’s where I really don’t agree with Cage.”

MJS:  That is what I’ve been trying to do with the Room Pieces.  These are algorithmic, generative compositions, and they’re modular.  But the approach was to create coincidental occurrences of that sort.  The range of sounds and the exploration of pitch and rhythm was intended to raise the question: are these intentional events?  Was this composed?  I didn’t want a sense of utter randomness, just the sense that none of it has any relationship.  That’s where I really don’t agree with Cage; I guess you could say it in that way. I think he was pretty adamant about wanting to completely cancel out this idea of relationships between sounds.  What it’s all about for me is creating these relationships, but it’s not about necessarily creating a progression or any structure that is really only interpretable in one way.  It’s really about creating the possibilities for these relationships, like a kind of drawing where you connect the dots—each listener would come up with his or her own drawing.
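
The generative idea Schumacher describes can be illustrated with a toy model (an assumption of mine, not his actual software): independent modules emit pitched events on their own schedules, and same-pitch coincidences between modules then arise by chance rather than by explicit composition.

```python
# Toy sketch of a modular generative process. Each module produces
# pitched events at random intervals; coincidences() then finds the
# chance alignments — two modules landing on the same pitch close
# together in time — that a listener might hear as "composed."
import random

def module_events(rng, n_events, pitches, max_gap=5.0):
    """One module: n_events (time, pitch) pairs at random intervals."""
    t, events = 0.0, []
    for _ in range(n_events):
        t += rng.uniform(0.5, max_gap)
        events.append((t, rng.choice(pitches)))
    return events

def coincidences(streams, window=1.0):
    """Find same-pitch events from different modules within `window` seconds."""
    tagged = sorted((t, p, i) for i, ev in enumerate(streams) for t, p in ev)
    hits = []
    for a in range(len(tagged)):
        for b in range(a + 1, len(tagged)):
            t1, p1, m1 = tagged[a]
            t2, p2, m2 = tagged[b]
            if t2 - t1 > window:
                break  # events are time-sorted, so no later pair can match
            if m1 != m2 and p1 == p2:
                hits.append((t1, t2, p1))
    return hits

# Example: four modules, each emitting 40 events; the number of chance
# alignments varies with the seed.
rng = random.Random(0)
streams = [module_events(rng, 40, ["C", "D", "Eb", "F#", "A", "Bb"])
           for _ in range(4)]
chance_hits = coincidences(streams)
```

Each module is internally coherent, yet nothing coordinates them; the relationships a listener draws between streams are exactly the connect-the-dots possibilities described above.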

Schumacher's laptop displaying a software program that works out his algorithms.

FJO:  So you want those relationships to be there, but you don’t necessarily want to determine what they are.  It’s for the listener to determine.

MJS:  Right.

FJO:  There’s a wonderful comment that Julian Cawley made in one of the program essays published in the CD booklet for the XI collection of the Room Pieces: “His music changes, but it doesn’t progress.”  Is that a fair assessment of all the work that you’re doing?

MJS:  Well, definitely the Room Pieces, but lately I’ve been getting away from that approach. For 20 years I was just determined never to edit what happened, just to let it happen and not to get involved in that post-production level of saying, “Okay, I like this, but it’s not really working here, so I’m going to change it up and I’m going to add this.”  I really tried to keep it very strictly algorithmic and generative.  But lately, in the last I would say ten years, I’ve been interested in getting into the details, especially spatialization and really exploring outside of that algorithmic process how I can really look at those details of how the sounds exist in space and how they relate to each other, moving on to more deterministic pieces.

FJO:  So would it be toward things that have literally more of a beginning, middle, and end, that actually develop, that actually have a start point and end point?

MJS:  Yeah.  I think some of the new pieces on Contour Editions are definitely that way.

FJO:  Disagree with me if you think I’m off the mark on this, but it seems there’s a distinction that’s key to all of this: the difference between musical composition or musical performance on the one hand and sound installation, sound art, or even instrument design on the other.  A performance or a composition is fixed in its duration.  The order of the sounds that an audience hears and how long they are listening to these sounds is determined for them.  There’s a clear beginning, middle, and end. Whereas with a sound installation, people can theoretically walk into it whenever they want and can stay however long they want.  So the message it’s trying to convey has to be different than a progression of events over time.

MJS:  Right.  It’s a big issue in terms of making sure that listeners perceive what you think they should be perceiving, and in the right timeframe.  So 30 minutes, 45 minutes, or 5 minutes, what’s a minimum amount of time to be able to understand the piece? Is it a problem then if you leave out things? Is it really necessary for the person to wait ten minutes for something important to happen?  Is that something you want to avoid?  Those are obviously key questions.  I was lucky to have my own space, but if you’re presenting these works in museums or other settings where people are constantly moving through, and they’re really encouraged to move through and not to sit down necessarily for an hour—although obviously with video art people do that—that’s an added layer of things you need to account for.

FJO:  It’s interesting that you’re concerned about whether people will get it if they’re only there for five or ten minutes. Of course you can never assume someone is going to get your piece, even if it’s in a concert hall, on a program, and it’s a functionally tonal string quartet.  You can’t really control how people perceive anything.

“Schoenberg and his 12-tone technique is the beginning of process composition.”

MJS:  My feeling is that this whole trend toward sound art and sound installation is coming out of the concert hall’s dominance as a listening space. For me, it starts with Schoenberg.  Schoenberg and his 12-tone technique is the beginning of process composition.  Even in his case, it’s taking the ego out of the process in a sense, obviously.

FJO:  Wow.

MJS:  It’s a stretch with Schoenberg, but there’s still that hint of: “Okay, here’s this process, 12 notes.  I’ve used 11, doesn’t matter.  I have to use the twelfth.  It doesn’t matter what I think.  I’ve decided that the twelfth note is going to be one I didn’t use.” So that’s process that overcomes his taste—in a sense—and his ego.  For me, it starts there. And Cage is essentially the same thing.  It’s chance, but this was proven in the ‘50s, basically you get the same thing.  It’s a process and the result is going to be a surprise, both to the composer and, in a sense, to the audience.  Unlike classical composition where as soon as you hear a bar or two of Mozart, your brain knows what the next six bars are going to be in a sense.  That’s the beauty; that’s why it’s so relaxing to listen to because you sit there and you hear eight bars in advance. It’s kind of like knowing the future.  You’ve been given a little bit of a peek into the future and that relaxes you.  That makes you feel kind of secure.

This idea of every next step being a little bit of a mystery is a fundamentally different way of perceiving the world and perceiving music, and I think it’s completely wrong to do that in a concert hall.  And I think that people sense this.  A concert hall is a place for storytelling, but it’s a place where you know the story.  You know it’s going to be an arc form.  You know there’s going to be a climax and a resolution.  And you’re enjoying that in a place of comfort, in a place of audition, a place watching a storyteller—whether it’s a conductor or an actual person telling the story—and this unfolds in a very predictable way.  Your body’s relationship to that is key: being in a seat and looking forward orients you towards a certain way of perceiving time.

So my feeling is that composers started to sense this disconnect between where they were required to show their work and the kind of work they were interested in making, which was based on these processes that everybody was inventing from about 1945 on.  Basically the job of every composer was to invent the process, invent their own methodology, so I feel like the intuitive response to that was to invent new spaces. Part of that is radio space. In Germany and [other parts of] Europe, you get more of the experimentation in radio-based listening spaces, either the radio itself or maybe these sort of black box spaces that they would perform in.  In more extreme cases, like Stockhausen, he would go into caves and what not.  They were searching for these places where the space placed the body in an orientation towards the sound that allowed it to really be perceived in a way that connected to the process of composing it. My focus has been to try to understand the very many ways of listening, of apprehending sound, and how they relate to architecture and to the body and to try to create situations where we can help listeners understand what it is that they’re perceiving.

FJO:  I was thinking along similar lines over the weekend.  I went to the sound installation exhibit at the Museum of Art and Design in Manhattan in Columbus Circle.

MJS:  Did Charlie Morrow have something to do with it?

FJO:  Not as far as I know. I didn’t know most of the people who were involved with this except for Benton Bainbridge, but there was some very interesting work there. What I found even more interesting than the work, however, were how many people there were interacting with what were, in several cases, some really whacked out sounds, perhaps sounds that they might not have heard before or might not have had a context for.  And they were really enjoying it.  People of all ages—young children, teenagers, even some elderly people. There was an interactive piece called Polyphonic Playground that was created by the London-based collective Studio PSK where people made sounds by climbing bars or sitting on swings.  That was cool.  There was also this incredible contraption on a wall with all these disembodied guitar strings attached to pickups.  It was done by an Israeli-born artist who now lives here named Naama Tsabar; she used to play in punk bands but now more of her work is installation-based.

Anyway, some of these guitar strings were tuned to really resonant low tones, but you hear them all together from various people plucking them all at once and it creates some incredibly dissonant chords.  Yet everybody was enjoying it.  If people were to hear the same thing in a concert hall, would they appreciate it as much?  If it was a New York Philharmonic subscription series concert, there probably would have been loads of people walking out.

MJS:  Exactly.

FJO:  Why is that?

MJS:  I think these are really very old habits—and not in a bad way, just human habits. I once made a list of listening paradigms.  I’m not a scientist; I’m not a researcher in this way.  This is just kind of stuff that comes off the top of my head.  So I’m not claiming this to be true or anything.  I’m just thinking about it. Think of sitting by the camp fire and listening to somebody telling a story—somebody with a gift for telling a story, but understanding that that camp fire offers both security but also danger because just beyond the darkness there could be anything, an animal or a gangster, something.  So there is that sense of “What’s behind me?  What could potentially encroach on our sense of comfort?”  A storyteller is going to take advantage of that, a person who’s got a sensitivity to that is going to maybe then tell a scary story or something, that will bring in the darkness, bring in the rear, so to speak.  You oppose that with the concert hall where that does not exist.  In the concert hall we’re enclosed.  We’re completely safe.  We are perhaps a little bit impinged on by our neighbors, so that we feel a little bit self-conscious.  So that’s something.  All of these things contribute. Think of a political meeting in a town square where there’s a speaker, but there’s also a lot of participation from the audience.  People acknowledge their neighbors and encourage each other to talk back to the speaker, so it becomes a back and forth kind of thing.  Or a rock concert—that’s a different kind of thing.  All of these are paradigms, and they become models for listening that you can carry over into other situations.  You can listen to your stereo, but you can pretend it’s the concert hall.  Do you remember the way people used to listen to records? They would bring the record home, put it on, and sit in a comfortable chair with their speakers there, as if they were in the concert hall—my dad used to do that—and they would listen to the whole record.  It was 20 minutes of sitting there listening.  They wouldn’t put it on and go do the dishes like people do now because they know they can just keep playing the stack of CDs that is never ending, or the MP3s or whatever.

“In the concert hall we’re enclosed.  We’re completely safe.”

Another aspect of this is musical structure.  Take a look at Philip Glass’s music.  At the beginning, he was very much in the art world.  He was doing a lot of his performances in art galleries.  The overall form of one of the early pieces, if you put it in a sound editor or something, is like a bar.  It’s a flat sound.  As soon as he got commissioned to do the Violin Concerto—and that’s in a concert hall—then suddenly you’ve got that arc form.  Suddenly, it’s a standard concerto. It’s in his language, but you’ve got that climax. He was clearly intuitively responding, “Okay, now I’m in a concert hall.  I can’t do this thing that I do in the gallery where people are walking around.”  Physically, they were in a completely different orientation; they didn’t feel hemmed in like at a concert hall. So they didn’t have those same expectations of structure.  But as soon as he was doing the concert hall piece, then it was like, “Now I have to rethink this.”

FJO:  In terms of your own background and how you got into all of this stuff—you studied the piano growing up. Later you went the typical composer-training route.  You went to Indiana University, then on to Juilliard to study with Vincent Persichetti, one of the great—albeit largely unsung—masters of the sonata form: 12 great piano sonatas, 9 symphonies; it’s all very much about the concert hall.

MJS:  Yeah.

FJO:  You even wrote a symphony and a string quartet.  I was desperately trying to find places I could hear them.

MJS:  You won’t. They were student pieces.

FJO:  At what point did you have this aha moment of wanting to do something else?

MJS:  At Indiana, they had a great studio.  I was always into electronic music.  I even had synthesizers when I was in high school.  I would go to sleep with my headphones playing drones essentially. I had no idea of any of that, but that’s just what came out of the synthesizer.  I just turned it on and held a note down, then played with the filters and the LFOs and stuff.  So I was always into that. Then at IU, Xenakis had left the year before I got there, but I think he might have developed the studio a little bit and so I was working and I made a piece called Nature and Static.  This was a piece that had two parts.  One was what I called “Nature,” which had basically five or seven parts that were just playing the same minimal melody, but with different timbres.  And they just kind of intersected in a very minimal way, not unlike that Middl piece you know, but the idea was completely intuitive.  It was that you’d listen into it and you would hear this multiplicity of sounds in this very simple texture—and that I associated with nature, because nature to me was simple, but complex.  Then the other half was “Static,” which was a loop—much more of an electronic music or man-made kind of a loop.  And I processed it, so it got kind of big and loud.  For my recital, I performed a piano piece with string quartet, and this piece Nature and Static, and I turned the lights off for the electronic piece.

At that point, it occurred to me that I had to do something because it isn’t right to have people sitting in these chairs listening to this.  They should be closing their eyes and listening as if immersed in the sound.  It felt wrong to be doing it in the concert hall, but you know, you do what you can.  You turn the lights off, or whatever.  Then at Juilliard, I organized some electronic concerts at Paul Hall. But when I set the speakers up, I was also struck by the inappropriateness of that space for what we were doing.  It was just people looking around. My teacher Bernard Heiden said, “The thing I like about electronic music is when something goes wrong.”  He liked when the tape recorder broke, something dramatic.  I think it struck everybody; it’s nice good music and whatever, but regardless, it just doesn’t work in this space.

FJO:  So even as a student you were envisioning a venue like Diapason.

MJS:  Yeah. Obviously the Dream House was a big inspiration, and it showed that people were already doing this to a large extent. Still, New York didn’t have a dedicated sound space.  Even though Paula Cooper and other places would occasionally show sound works, there was no dedicated gallery or space for experimenting with sound.

FJO:  So ideally what was in your mind, in terms of how you put this thing together? What were the attributes that made that a more ideal space for hearing this kind of music?

MJS:  Well, the first one was Studio 5 Beekman.  That was down near City Hall.  It was a little office space.  You entered, and there was a small foyer.  This kind of gave you a little bit of a buffer between the world and then the gallery, which was behind a door, beyond the foyer.  That little buffer was very nice, because it let people kind of take a breath.  For me it was also for limiting vision.  Turn the lights down.  It doesn’t have to be dark, but just make the visual less explicit.  I used to use a red light bulb, which got misinterpreted as a kind of gesture of some sort, but I just felt it was a dark color that allowed you to see without making the visual too much of a thing.  I think personally in those situations, it’s not good to have a lot of sound coming from outside.  If people want that, I suppose you can have a space like that, but for the most part I feel people want to be able to control the environment and not have to deal with sirens and things like that.

FJO:  So Diapason eventually became a really significant venue for this stuff, but it is no more.

MJS:  Well, it was supported by my friends Kirk and Liz [Gerring] Radke.  Liz is a choreographer who I’ve been working with since the ‘80s and her husband Kirk is a really generous supporter of the arts and funded this space. We continued in that way and were also getting funding from New York State and from the city and from private foundations.  This went on for about 15 years.  But at some point, Kirk pulled out.  So I lost that funding, and that really was paying the rent; everything else was paying the artists.  So that really hurt, and for a while I tried to continue with my own money, but I couldn’t sustain it.

What I tried to do, in the years when it was clear that Kirk was going to pull out, was to find somebody to be a real business director, a kind of executive director who would fundraise and get that part of it together, because I wasn’t really good at it.  I felt if I could find that person, then they could really get the whole thing on its feet financially, be able to pay themselves a salary, be able to pay the artists, and keep the rent paid.  We were over at Industry City the last few years.  We were before our time there because people had a really hard time coming out there.  The subway was not cooperating.  People would complain—especially coming from North Brooklyn: Williamsburg and Greenpoint—that it would take them two hours to get there.  So the audience went way down.  So that was bad, too.  Now it’s a bustling place where people come on the weekends.  If we were there now, we would actually get a walk-in audience.  It would have been fantastic.  But we were basically five years too early for that.

FJO:  There really is still no place that is quite like the space that you had for this kind of work. So do you envision it ever reopening or doing something else like it?

MJS:  The last two or three incarnations were really quite great spaces. We had the two listening rooms and pretty good sound isolation.  I had a really great group of people helping me, like Daniel Neumann and Wolfgang Gil.  But I don’t know.  I could see doing it again.  I’m very interested in this question of bringing sound environments or installations into people’s homes, and that’s kind of the way I would try to do it if I did it again.  I was thinking even of having events here [in my home]—inviting an artist to give a presentation here, with a house multi-channel system, and then inviting a small audience and basically trying to use that to help that artist get the work out, to present the work to people who might help in then getting it out to a bigger audience.

Schumacher's very attentive dog near one of his electric keyboards.

FJO:  Given that that’s been such a focus of your work—the directionality of sounds and such a sensitivity to how and where sounds are experienced—it’s fascinating to me that you also perform in and create all the music for what, for better or worse, I’ll call a rock band.  It’s a somewhat inaccurate shorthand for what diNMachine is, but in terms of its performance situation, it operates like a rock band.  There is a group of musicians performing in real time and there’s an audience.  Or there’s a recording. In all cases, it’s a group doing somewhat fixed things that have a beginning, middle, and end.  The band doesn’t perform in concert spaces like the comfort zones we were talking about earlier; they’re performing in louder, club-type environments in which there’s often no sound insulation either from the world outside or from the audience members themselves, which raises all sorts of other listening issues.

“I love lots of kinds of music. I’m just aware of the differences.”

MJS:  Well, I love pop music.  And I love classical music and going to concerts. I love lots of kinds of music. I’m just aware of the differences.  I don’t think that leads to only liking one particular kind of approach.  I happen to have really fallen in love with computer algorithms.  I have to say that.  It opened up a way of listening for me that was really fantastic, and it stays fantastic now.  But I was in rock bands as a kid.  I played in some bands up until I was in my 30s. And I improvised a lot. I like having that outlet for that part of my musical being.

FJO:  The title for last year’s diNMachine album, The Opposites of Unity, is a very apt one given your openness to all these different styles and listening paradigms.  It isn’t necessarily about just one thing.

MJS:  Right.

FJO:  But there’s one track, “Jabbr Wawky,” that’s basically hip hop and another one, “Brisé,” which could well have been one of your Room Pieces to some extent.

MJS:  Yeah, it probably was derived from one. But even in “Jabbr Wawky,” there are a lot of environmental sounds.

FJO:  So the lines do get blurry even in the context of what you’re doing within the framework of the band.  I noticed that diNMachine has a new album coming out in early 2018. Will it be following a similar path?

MJS:  Well, the band has been reduced.  It’s now a duo, which makes it a lot easier.  It was kind of expensive. I try to pay people if they’re going to play my music for me.  So now, as a duo, I feel like this can go on and I don’t have to stress about it.  We can play when we get gigs.  We can rehearse pretty easily.  We live pretty close to each other and so I’m a weekend rock musician rather than trying to do this professionally.  Although, of course, I’m trying to do this professionally, but it just makes it more manageable.  Anyway, the music took a little bit of a turn towards what I’m calling synth and drums—not bass and drums, or drum and bass.  Drums and synth.  Those are really the two featured things—a lot of these songs are analog synthesizers and drums.  They don’t have guitar or saxophone; the first record had lots of various instrumentation.

FJO:  You say they’re songs, but there are still no vocals.

MJS:  It’s mostly instrumental.

FJO:  Do you conceive of this as dance music to some extent?

MJS:  I think you can dance to it for sure, definitely.  It’s got a very strong beat. You can also listen to it. That’s another interesting issue, because dance music shouldn’t be too complicated.  When the head gets too involved, the hips have a problem.

FJO:  The material for diNMachine consists of concrete pieces, even though elements of your other work come in.  Obviously when someone’s listening in a club, they’re not listening in the same way as they would in a concert hall, but listeners would still assume more causality than they would in, say, a sound installation, because of its mode of presentation.

MJS:  Well, the way that I write them usually is I improvise on my synthesizer and I just keep the tape running, so to speak, and then I’ll find some riff that I like or some section or some sound, and that will become the basis of one of these songs.  Generally, I’ll figure out the tempo and add a drum track, and then I’ll write a bass line.  Sometimes I’ll throw that synthesizer sound into Melodyne, which is a pitch-detection software used mostly to correct singers or instruments that are out of tune.  It’s not monophonic; it can read multiple notes at once.  When it first came out, the way they advertised it was they’d have a guitar, and they’d show how the Melodyne could see each note in the guitar chord and correct individually.  It was a breakthrough software when it came out.  Now, other software does that.

FJO:  It’s like a fancier Autotune, basically.

MJS:  Exactly.  But what I’ll do is I’ll throw the synthesizer in Melodyne, and it will score it.  It’ll figure out what the pitches are, but it will be wrong most of the time because the synthesizer’s very complex. Even if you’re doing a bass line, the overtone structure is very complicated and Melodyne has a lot of trouble with that.  So I’ll take that score that Melodyne has derived from the synthesizer, then I’ll throw it into a string pad, or something like that, or a piano.  And it will come out with this piano version of what the synthesizer did, which can be really cool because it kind of comments on what the synthesizer did and doesn’t quite get it right, but you can tell that it’s trying to get it right.  Then sometimes I’ll play that live.  What I really like to do is have my basis of the song and then I’ll just kind of blindfold it: drag sounds into the session and just see what happens—see what gets layered on top of it and come up with sometimes very bizarre, unpredictable things.  I won’t keep it if it’s too strange, but it’s incredible how many times something will just work in that situation.

FJO:  So in a way, it is designed so that people listen in to it rather than simply listen to it, as you were describing earlier.

MJS:  Yeah, and I’m very interested in structure. I call it free composition rather than free improvisation.  It’s like the idea of transition.  Wagner said that composition is the art of transition, and I take that really seriously.  La Monte Young said transition is for bad composers.  I’m siding with Wagner there.  I think transitions are what it’s all about.  And especially in these diNMachine songs, I’m really interested in—well, I’ve got this section of the song, what’s this next section going to be?  How different can I make it from the first section?  But where it still makes sense.

A Moog oscillator

FJO:  There’s a statement that you have on your website that’s almost like your compositional manifesto, I think.  You aim to draw the listener’s attention to sounds that you’ve created by presenting them “at the rate of every half second or less, which is the same tempo as a typical melody line.”  I thought that was interesting because the way most of us hear a melody is one-dimensional; it’s a single line that’s moving over time. But your idea of manipulating sonic elements, which could be a two-dimensional plane or, more likely, given your interest in directionality, a three-dimensional field, is basically to grab listeners in the same way that they’d be grabbed by a melody by controlling the durations of the various components they are hearing over that time.

MJS:  Right.  Exactly.

FJO:  And the way you do that is by the speed of change of the sound.

MJS:  Right.

FJO:  Your most recent recording, Variations, which came out on Contour Editions earlier this year, definitely sounds much more developmentally oriented to me, so maybe that gets to what you were saying earlier about getting away from a strictly algorithmic approach.

MJS: I’ve definitely been moving on. I still use it in the process, but it’s a step in the process more and more, rather than the end.  I’ve learned a lot from the diNMachine thing in terms of working with sound because in a sense, with multi-speakers, you’re never really mixing like you do in stereo.  It’s actually a lot easier to just throw sounds around and you don’t have to worry about their balance in the stereo field.  Working exclusively in stereo for a number of years now has taught me enormous amounts about this, and I’ve been trying to apply it to the multi-channel stuff.  It’s really opened my ears, too, and opened up a lot of new possibilities.

FJO:  You talked about creating home environments. This is very different from the recordings people listen to in their homes, including all of yours, which are mixed down to stereo. I would think that really misses the spatialization, which is a key element in so much of your work. Maybe your recordings should ideally be issued in 5.1 surround sound.

“I’m not such a fan of 5.1.”

MJS:  That’s why Richard [Garet] released the tracks in an 8-channel version.  It’s not surround.  I opted not to do that because I’m not such a fan of 5.1. And I don’t really believe that people are setting it up correctly.  It’s just like stereo, only not as developed in a way.

FJO:  Interesting.  So, it’s okay to listen to something you’ve created to be experienced spatially on a computer with headphones?

MJS:  Not so much. I put a lot of time into the stereo version.

FJO:  So listening to these tracks on a computer is sort of like looking at photographs of paintings.

MJS:  Yeah, like a reference or something.  I had the eight tracks, and I created eight spaces in the stereo field with different characteristics. Then I put the tracks into those eight spaces.  So it’s not just panning them around; it’s really trying to get depth and a sense that there are these eight separate spaces in it.  That’s another thing I really would like to continue working with.  And actually working with people who understand it a lot more than I do and who have software chops and can maybe design specific things that I can use.

FJO:  Getting back to dance music, albeit of a very different sort, for years you’ve collaborated with the choreographer Liz Gerring. You had mentioned needing to keep things simpler if it’s being danced to, in a pop/club context. Clearly in these pieces, they’re all professional dancers and there’s a kind of Gesamtkunstwerk that happens between the choreography and music. However, once again, this is something that exists in time and in a space where people are sitting in seats observing the work.  Ideally they’re listening to the music and it is a key element, but a dance audience is primarily there to see the dance and so the music has a somewhat subordinate role to it. I imagine that some of these considerations might make you create sound in a different way.

MJS:  I have to say Liz is amazing to work with.  She’s an amazing collaborator.  She regards the music as absolutely equal to the dance. Maybe not absolutely equal, but there’s no point in debating whether it’s 60/40 or whatever; they’re the two elements that are paramount in the work. So the music is an important element, and it’s what we’ve been grappling with all these years in this relationship.

We started from the Cage-Cunningham aesthetic of doing dance installations, where Liz would dance for three to four hours and I would improvise on my laptop at the same time, but not necessarily in any way interacting with what she was doing.  But over the years we’ve talked a lot about what we want to do.  How do we want to work on this relationship?  Do we want the music to reflect what’s going on in the dance?  To what extent?  In what ways?  We’re lucky that we’re very much on the same page aesthetically. We have a similar kind of feeling about art and work, and at this point each piece is approached in a determined way to do it better than we did the last time—to be more collaborative, to think more about that relationship and to do something innovative or interesting in that relationship.  Sometimes there are constraints based on the practicality of doing a theater piece, but it’s a completely different way of working. It’s not a defined or set way of working; it’s changing all the time.  There are elements both of what I do in the band as well as the installation approach.  You can’t really pin it down.  We’re always exploring, so it’s always different but it’s got elements of other things that I do as well.

FJO:  We talked about the concert hall and Schoenberg and Cage and all that stuff, and it sort of being anathema to an audience that is used to hearing pieces by Mozart for which they can reasonably predict what the next eight measures are going to be. Yet, if you’re in a space for a dance performance, I think as a composer writing for dance you can get away with doing a lot more. Audiences for dance performance will listen to a Cage score; Cunningham had huge audiences. Is it the visual element?  Does being able to look at something besides the musicians playing their instruments—or, in the case of more experimental electronic music, twisting knobs or sitting in front of a laptop—help bring audiences into those sounds more?  I don’t know.

MJS:  I don’t know.  If we’re talking about the ideal listener-viewer, I think that’s one thing.  If we’re talking about a typical audience, that’s another. Both are obviously important.  Not everybody can be an ideal, educated listener-viewer, but I think that regardless of what the audience is going to think or perceive, it’s really up to us to be very sensitive to the relationship of the sound and the dance.  And not to use the distraction—so to speak—of the dance, or of the visuals as a way to get away with things that don’t really work with the dance.

If we’re going to be really sensitive to what’s going on, one thing is surround sound.  I like to use surround.  But it’s problematic because the viewers are looking at a stage most of the time and to start throwing things in the back is going to compete with that.  Not that that’s a bad thing, but you have to be careful and you have to acknowledge that that creates a dissonance with the typical attitude of the viewer.  That’s why in movies they’re very careful about how they use surround sound.  It’s actually mandated in the spec for surround sound that only effects like bombs exploding and things like that are going to be used on the rear channels.  Everything else is in the front: dialogue, music, diegetic sound—what’s called Foley.

FJO:  I guess the way around that would be to have a dance performance that is not on a proscenium stage where you have people moving all around in a space.

MJS:  Exactly. We’re actually actively looking to do that.  It’s hard to play with a proscenium stage.  That’s the thing.  That’s what Liz really grapples with because she’s not particularly a theater person that wants that perspective on the movement. She wants to go beyond what is typical in the theater. I haven’t thought about it that much, but I would imagine that it parallels the development of music where you had ballet in the theater and that established a certain way of presenting movement and the relationship with the dancers and what not.

FJO:  Once again with classical ballet, viewers probably would know what the next eight moves are going to be.  This brings us full circle.

MJS:  Yeah.

A bookcase in Schumacher's hallway that is filled with speakers and a hat on top of one of them.

Leveling Up, Part 3: Entering the Marketplace

You’ve written a band piece. Now what?

There are a couple of ways you can enter the world of educational band music. The first is to be commissioned by an ensemble to create something new just for them.

When this happens, a few problems are likely already solved for you: instrumentation, difficulty level, length, and first performance. And you’ll probably get paid, too! It’s a great gig.

On the other hand, if a piece of music is too customized for the commissioning ensemble (e.g., the year the ensemble commissions you, it happens to have 45 clarinets, 2 trumpets, and an all-state didgeridoo player), it can become very difficult to sell. If a publisher were interested in the music, you would likely be asked to re-orchestrate it for a more balanced ensemble. You may also need to write in cues and include some doublings you never intended.

There is a lot of value to be found in filling your catalog with multiple pieces at a variety of grade levels.

The second way to enter the world of educational band music is to compose on spec. There is a lot of value to be found in filling your catalog with multiple pieces at a variety of grade levels. The more content you put out into the world: a) the easier it is for people to find you; b) the better you become at the craft of composition; and c) the closer you will get to writing Good Music every time.

You still need to solve a few problems before you begin:

  • Instrumentation: What size ensemble are you writing for and what forces are available? Ensembles with players new to their instruments will have fewer options. (It is unlikely a contrabassoon, C trumpet, or five-octave marimba will be available in Grade 1–2 ensembles.) The best way to learn what instrumentation is available at a given level is to study the scores of popular pieces. Pay attention to the degree of part independence and doubling as well.
  • Difficulty Level: I strongly encourage you to write the music that is pouring out of you! Let your imagination soar. Just be aware that you will likely end up (based on range, rhythmic/melodic complexity, harmonic language, instrumentation) with a Grade 5–6 piece. If you want your music to be available to the majority of educational ensembles, you will also need to write pieces in the Grade 2–4 range. You do that by referring to the descriptions in the previous article, through score study, and by showing your music to your band director friends. If you’ve completed a piece of music and don’t know what its level should be, give it your best guess and then ask three or four band directors for help in leveling it. You’ll get great feedback, too.
  • Length: Young players who have just picked up their instruments have limited stamina. You might have an excellent idea for a Grade 2 multi-movement work that lasts for 15 minutes, but they will likely struggle to sustain that. Attend a few concerts at the elementary, middle, high school, and college/university levels and pay attention to how long the average piece lasts.

There are no special skills required for composing educational music. If you are open to the challenge of crafting well-written music within a few given parameters, start writing!

No matter what, if you want people to play your music, you need to build relationships. In every interview I have conducted for my podcast (and I’ve done almost 170 of them), one idea comes up consistently as the key to building a vibrant career as a composer: relationships. To build a strong network, you need to build relationships, and you build relationships by showing up at concerts and conferences.

Next Steps

I used to believe that reaching the double bar in my compositions was the ultimate goal, as if finalizing my musical vision through notation meant I had given birth to a new creation and it would go forth into the world.

I was wrong.

The music may be alive, but it’s not living just through notation.

The music may be alive, but it’s not living. After the double bar, you have the daunting task of entering the marketplace, getting the attention of directors, and selling copies of your score.

You have to market and promote your music.

What follows are four questions to ask yourself as you go about marketing and promoting your music.

The principles are true no matter what kind of music you write, but I will focus the discussion on the educational band music world.

1) What level of music is it?

This entire series of blog posts started with trying to clearly define what each grade level of music meant. I found it to be an impossible task. Instead, there are guidelines for each grade level. If you have questions about how the leveling system works and want to see some basic definitions of Grades 1–6, read the previous article.

Knowing the difficulty level of your music will help you market your music, because one of the first filters a band director uses when selecting new music is to sort by grade level.

The band directors I know typically program some easier music in order to work on technique and sound production, music at the heart of the ensemble’s level that is accessible yet keeps the students on their toes, and music that challenges them and helps them mature as musicians. Where does your music fit? The answer, of course, is different for every school, director, and ensemble and will likely even differ from year to year. You should be able to confidently describe to a director the specific ways your music will challenge their players.

Knowing how difficult your music is, and answering the next question below, is the first key to marketing your music. A challenging piece for a middle school ensemble may be an easy or on-level piece for a high school ensemble.

2) Who are you writing for?

This question is less about aiming to please a specific set of people than it is about knowing who might purchase your score and parts and then perform your piece.

If you haven’t answered the first question—What level of music is it?—you will struggle to answer this question.

A common answer I hear from the composers I work with as they build their businesses is that their music is for everybody.

Is it? Really?

The surest way to guarantee no one engages with your music is to make it for everybody.

The surest way to guarantee no one engages with your music is to make it for everybody. Knowing who may be interested in your music will help you market it. It allows you to know who to get your music in front of. Most composers have a limited marketing budget (if any) and limited time. Understanding who you should be reaching out to simplifies the process and makes your efforts more meaningful and cost-efficient.

This reduces the number of people you need to email, increases the effectiveness of any advertising you do, and helps you know who to speak with at conferences.

Now that you know who to get your music in front of and how to describe the difficulty level of your piece, you can begin to generate traffic.

3) Where are you sending people?

In business, traffic is what leads to sales. A brick-and-mortar store that is difficult to get to, has poor parking, and is in a part of town that feels unsafe will struggle to generate traffic. Likewise, a poorly designed website that has an obscure address (URL), is difficult to navigate, and doesn’t provide safe and easy ways for band directors to purchase your music will not prosper.

Ideally you want to control the traffic. Some marketers refer to this as owning the traffic. If conductors are clicking on your links or searching you out, do you know—or have control over—where they end up?

Part of the problem with Facebook is that we own zero of the traffic that comes to our pages. But we do own the traffic that comes to our own websites from, or through, Facebook. The goal should always be to get people to your website.

It’s fantastic if your Facebook composer page has hundreds, or even thousands, of likes, but have those likes translated into sales of scores, performances, or new commissions? Probably not. Don’t confuse social media interest with controlling traffic. Do everything you can to send people out of social media and onto your website where you can build an email list and (hopefully) sell a score.

Clever URL names don’t work well.

Be sure your website looks good, is easy to navigate, gives visitors what they’re looking for, and has a URL that is easy to remember or find. (YourName.com is always the best choice; clever names don’t work well. My first URL was frogmanmusic.com, which no longer exists—why would anyone ever click on or trust that?)

4) Have you made it easy for people to buy your music?

When people are ready to make a purchase online, they want to make the purchase now! If you have your music for sale on your website (recordings, scores, or whatever), make it easy for them to make the purchase.

Here are some tips:

  • Create a large “Buy Now” button for each piece you want to sell. Don’t make the conductor who visits your site and is interested in acquiring a copy of your score search for the purchase link. It should be big and easy to find. Maybe even put it on the page twice, once at the top and again at the bottom.
  • Create a storefront. If you have a WordPress website, there are several plugins that will enable you to create a storefront that allows visitors to make a purchase. These plugins can also track inventory, create item pages, create and accept coupons, calculate shipping and tax, and generate receipts with unique order numbers. The WooCommerce plugin works great and is relatively easy to set up. If you don’t know how to do this, hire a freelancer from fiverr.com to set it up on your site—it’s money well spent. If you are going to run your own storefront, you will need to purchase an SSL (secure sockets layer) certificate from your website host to make your website and the storefront as secure as possible. The last thing you want to do is expose the credit card numbers and personal information of those who purchase your music.
  • No matter which website platform you are on (WordPress, Joomla, Wix, Squarespace, custom built, or something else—and some of these come with built-in storefronts), you will need a way to process payments. Remember, the goal is to make it easy for those who are interested in purchasing your music. Therefore, a cumbersome payment processor with many levels of clicking might cause people to abandon the purchase halfway through. Online marketing and sales experts call this phenomenon shopping cart abandonment—and you don’t want to cause those who have ALREADY made the decision to spend money on your music to get frustrated and walk away. I personally use PayPal, Stripe, and Square across my multiple businesses, but other frequently used processors include Amazon Payments, Braintree, and Samurai. Each processor offers a different set of benefits and has a different cost structure. They earn money by taking a percentage of each transaction and adding on a service fee—these are the same as the credit card processing fees every brick-and-mortar store has to pay whenever you make a purchase. Choose the one with the lowest fee structure that also integrates with your storefront and/or website platform. (Nothing is universal.) If you plan on selling your scores, parts, and recordings at conferences and in-person events, you will need a payment processor for that as well. Square and Clover are almost ubiquitous. If you live in the U.S., I guarantee that you have made a purchase at a restaurant, farmer’s market, or small business using one or both of these systems. They allow you to create invoices and process sales from your tablet or smartphone.
  • If you are traditionally published, you can still create the “Buy Now” button. All you need to do at this point is make that button a hyperlink that sends the customer to the purchase page of your publisher or an online retailer. Remember: make it easy and eliminate as many steps and clicks as you can.
  • A regular problem self-published composers encounter when selling to educators is the processing of purchase orders. Most school districts have very strict policies regarding how a purchase can be made—don’t expect the director to simply use their personal credit card and submit the receipt for reimbursement. It’s often not that simple or easy. A purchase order (often abbreviated PO) helps large organizations, such as a school district, systematize purchases from all vendors. They are documents specifying what is being purchased, the quantity of each item, and the price. When a vendor or business accepts a PO, it becomes a legally binding contract to fulfill the order. Contrast that with an invoice (or receipt), which is written by the vendor and describes what the vendor will do or what the vendor did. POs, on the other hand, are written by the buyer and describe exactly what they want and how they want it. Very small businesses, like you, the composer selling a score, can struggle to process a PO because it increases the paperwork and might require you to set up special processing with your bank. The buyer may also require other things from you, such as a W-9 and your business EIN (tax number). One solution is to get your music into the online storefronts of music distributors and retailers who already have systems in place to deal with POs. Both SheetMusicPlus and J.W. Pepper offer the option to sell your music on their site for a fee or percentage cut of every sale. (By the way, J.W. Pepper is the largest online retailer of educational music.) There are also a number of co-ops and other distribution platforms and storefronts popping up for self-published composers. These include NewMusicShelf, MusicSpoke, ADJ∙ective New Music, Graphite, and the Independent Music Publishers Cooperative.
Some of these are exclusive, but all of them have figured out how to make it easy for all interested parties to purchase music, including schools that have to use purchase orders.
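Because processors charge a percentage plus a fixed fee per transaction, the cheapest one depends on your typical sale price. Here is a minimal sketch of that comparison; the rates below are illustrative placeholders, not quotes from any actual processor:

```python
# Compare hypothetical payment-processor fee structures for a single sale.
# Rates are illustrative only -- check each processor's current pricing page.
PROCESSORS = {
    "Processor A": (0.029, 0.30),   # 2.9% + $0.30 per transaction
    "Processor B": (0.026, 0.10),   # 2.6% + $0.10 per transaction
}

def net_proceeds(price, percent, fixed):
    """Return what you keep after the processor takes its cut."""
    return round(price - (price * percent + fixed), 2)

for name, (pct, fixed) in PROCESSORS.items():
    # What a $20.00 score sale actually puts in your pocket
    print(name, net_proceeds(20.00, pct, fixed))
```

Running the same comparison at your own average order size will tell you which fee structure costs you less in practice.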
Don’t be afraid! The world needs your voice.

Lastly, and most importantly, don’t be afraid! The world needs your voice. Many people struggle with the transactional nature of selling music. However, if you’ve taken the time to build a relationship first, it’s less about selling and more about having a dialogue about your compositions.

Composing for Carillon

The Carillon

The carillon is one of the most public of instruments. Situated in bell towers in the heart of public spaces, carillonneurs perform for entire communities. Though all who wander near the tower will hear the music, most will never know who is playing the instrument. As performers hidden from view, carillonneurs strive to convince audiences that we are not machines playing the same tunes each day; we are real humans capable of expression and dynamic variation, with a large and diverse repertoire.

Of the approximately 600 carillons worldwide, 185 are found in North America, distributed across universities, parks, churches, and cities; there are even mobile carillons on wheels. Though there are many kinds of bell instruments, a carillon consists of at least 23 tuned bells and is played from a keyboard that allows expression through variation of touch. The instrument is traditionally played solo, with the hands and feet, utilizing a keyboard and pedalboard that resemble a giant piano.

The carillon was born in the Low Countries of Europe about 500 years ago. The instrument emerged from medieval bell towers that originally functioned as signaling mechanisms to the local inhabitants. The bells would communicate not just the time of day, but civil and spiritual events: calls to prayer, the arrival of visitors, warnings such as the outbreak of a fire. In the early 20th century, as technical keyboard innovations began to allow for the expression of touch, the carillon began to develop as a concert instrument. Today carillonneurs perform all kinds of music on the bells: original compositions, classical arrangements, jazz standards, pop tunes, folk songs, film music—anything and everything that our public will enjoy.

Each Instrument is Unique

 

Carillons come in all shapes and sizes. From 23 bells to 77 bells, these instruments range from massive tower installations that house the largest tuned bells in the world to instruments that could fit in your living room. Bells cast at different foundries throughout history each have their own unique sound: some with richer overtones, some with more resonance and a longer sound, some brighter, some warmer.

Carillons come in all shapes and sizes.

Most carillons in North America are tuned in equal temperament, but many older instruments in Europe employ the mean-tone tuning system. Though some instruments are at concert pitch, keyboards will often transpose up or down to suit the height of the tower. With transposition ranging from up an octave to down a perfect fourth, the same repertoire played on two different instruments can sound vastly different.

Just as a particular concert hall will have certain characteristics, the bell tower itself and the surrounding listening space will play a key role in the sound of each instrument. While some instruments are found in the heart of bustling cities, others are in parks or suburban neighborhoods protected from traffic noise. When towers are more open and the bells are visible from the ground, the strike of each bell will be heard more clearly. Conversely, sounds will blend more in closed towers where the bells are hidden from view.

Compositions for carillon are sometimes written specifically for one particular carillon, but composers can also write in a way that ensures pieces can be effective on multiple instruments.

Musical Considerations when Composing for Carillon

Overtones

The unique partials, or overtones, of bells are an important consideration. Unlike traditional Western string or wind instruments, bells have a very prominent minor-third overtone. There is additionally a hum tone that sounds one octave below the strike tone. It can be helpful to compare typical bell partials to the natural harmonic series. The following graphic illustrates this comparison for a C3 bell (one octave below middle C).

Musical notation showing the partials for bells in a carillon

Bass bells are much richer in overtones than high bells. The chord C-E-G played in the bass bells will not sound like a major chord at all, but played in the upper register this chord will sound more “in tune.” Thinning out or spacing out chords can be more effective on carillon (C-G-E), especially when writing major chords. Minor chords and diminished chords, on the other hand, will sound more natural in the lower registers of the instrument.
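The contrast between bell partials and the harmonic series can be sketched numerically. The semitone offsets below reflect a typical hum/prime/tierce/quint/nominal layout, rounded to equal-tempered values; real bells vary by foundry, so treat these as approximations:

```python
# Approximate partials of a C3 bell (strike tone 130.81 Hz) vs. the
# natural harmonic series on the same fundamental. Offsets in semitones.
STRIKE_HZ = 130.81  # C3, one octave below middle C

BELL_PARTIALS = {"hum": -12, "prime": 0, "tierce": 3, "quint": 7, "nominal": 12}

def freq(semitones, base=STRIKE_HZ):
    """Equal-tempered frequency a given number of semitones from the base."""
    return round(base * 2 ** (semitones / 12), 2)

for name, semis in BELL_PARTIALS.items():
    print(name, freq(semis))

# The harmonic series instead gives whole-number multiples of the
# fundamental -- note the MAJOR third near the 5th harmonic, where the
# bell has its prominent MINOR third (the tierce).
harmonics = [round(STRIKE_HZ * n, 2) for n in range(1, 6)]
print(harmonics)
```

The minor-third tierce at about 155.56 Hz, against the major-third color of the harmonic series, is what gives chords in the bass bells their clouded quality.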

Decay of Sound

As a bell is struck, the strike tone is heard in the foreground, but this pitch decays quickly, leaving the hum tone and overtones to emerge. Once a bell is rung, there is no way to dampen the sound or silence the bell. Each bell will continue to ring as the vibrations naturally dissipate. (Though there is an adjustment mechanism on each key that will allow the carillonneur to hold the clapper against the bell after striking, thus muting the sound, most players will advise against this as it creates a rather ugly sound and is perhaps not good for the instrument.)

A walking bass line on a fast be-bop jazz standard will not come across as intended.

Larger bells will ring longer, up to about 30 seconds, before fully coming to rest. Smaller bells will not ring as long, sometimes only for a few seconds. Rapid harmonic changes in the bass will create a blurred sound; a walking bass line on a fast be-bop jazz standard, for instance, will not come across as intended.

Depending on the bell foundry, the same note on two different carillons can have a very different decay of sound. For instance, English bells (Taylor, Gillett & Johnston) cast in the early-to-mid 20th century have a rather short decay of sound in the trebles, whereas French bells (Paccard) cast in the latter half of the 20th century are exceptionally long sounding. Some repertoire is better suited to short-sounding bells, and some to long-sounding bells.

Dynamics

The carillon has an incredible dynamic range, arguably more so than a piano. Through variation of touch, carillonneurs are able to strike each bell so softly that nobody can hear it, or loud enough to startle somebody walking by. Bigger bass bells have more dynamic range than small high bells. Higher bells, with less bell mass, can only reach a fraction of the volume of the bass bells. Thus, crescendos moving down the keyboard are often more effective than up the keyboard.

Composers and arrangers for the carillon like to “think upside down”; rather than give the singing melody line to the soprano, placing the melody in the bass bells, with the higher bells playing harmonic and rhythmic accompaniments, can be very effective.

The carillon has an incredible dynamic range, arguably more so than a piano.

Playing loud is easy; playing soft is more difficult. Due to the large keyfall (1.6–2.2 inches), playing a note pp will require the carillonneur to take time to prepare the note by moving the key partway down before striking. It can be very challenging or impossible to play fast and soft at the same time. (Exception: When playing repeated notes, the carillonneur can keep notes prepared and play rapid trills, tremolos, or ostinatos very quietly.)

Balance

Keeping the bass bells in balance with the treble bells is a consideration for both composers and performers. Loud passages in the bass will drown out figures in the upper register, but a passage in the high register marked ff will not sound loud without accompanying bass notes to give the power. On larger carillons especially, the dynamics will come from the bass.

It might sound preposterous that a good balance could ever be achieved, with bass bells weighing tens of thousands of pounds, and high bells as small as 10 lbs. But towers are actually designed to improve balance—by placing the bass bells lower in the tower, the sound of treble bells will carry farther when high up in the tower. In some towers, louvers are positioned in the openings of the belfry to magnify this effect. Louvers are angled slats that deflect sound down to the ground. These louvers will rein in the sound of the bass bells, placed lower in the tower, by deflecting their sound more sharply towards the ground. At the same time, the louvers will keep the sound of the small high bells from drifting up into the sky.

Still, it is important for composers to consider the balance of bass and treble bells. Even the biggest bass bell can be played pp when the performer is given time to prepare each note.

Audiences are also capable of improving their listening experience. If one is standing too close to the tower, the bass bells will often be heard too loud and the instrument will sound out of balance. The best listening areas are usually found further away from the tower. Every tower is different, so a general rule of thumb: Imagine the tower falls over on its side. Standing just beyond the range of the impact will result in a decent listening place, in addition to protecting you in case the tower does fall over!

An image of a “Bronzen Piano,” a keyboard attached to a set of bells arranged in the shape of a grand piano

Of course there’s no worry about standing too close to a falling tower if you’re listening to a “Bronzen Piano,” a mobile carillon in the shape of a grand piano that was developed by Anna Maria Reverté and Koen Van Assche which can easily be transported and played anywhere.

Technical

Range

Most compositions are written, or made playable, for four-octave carillon.

If writing for a particular carillon, it will be important to determine the exact range of the instrument, as well as to hear sound samples to determine the musical properties of the bells. Manuals typically span the full length of the keyboard, and pedals typically duplicate the bottom two octaves of the instrument. Here are several common ranges:

Musical notation showing the ranges for various carillons

Most compositions are written, or made playable, for four-octave carillon, C3 to C7, omitting the lowest C#3. Writing for this range will allow the piece to be played on most concert carillons. When writing for four and a half octaves, composers will often include substitutions for notes outside of the four-octave range, to make the piece playable on four-octave instruments.
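As a quick sanity check on that range, here is a small sketch that enumerates it, assuming the MIDI convention where middle C (MIDI note 60) is C4:

```python
# Enumerate the standard four-octave carillon range: C3 through C7,
# omitting the lowest C#3, which is traditionally left off the instrument.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_name(midi):
    # MIDI 60 = middle C = C4, so MIDI 48 = C3 and MIDI 96 = C7
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

chromatic = [note_name(m) for m in range(48, 97)]   # C3 .. C7 inclusive
bells = [n for n in chromatic if n != "C#3"]
print(bells[0], bells[-1], len(bells))   # C3 C7 48
```

Forty-nine chromatic notes minus the omitted C#3 gives the 48 bells of a typical four-octave concert carillon.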

Technique

Traditional technique asks the carillonneur to play each key with a closed fist, one note for each hand. Rapid passages of broken arpeggios that alternate hands (L-R-L-R…) are very idiomatic.

A four-note chord is easily realized with two hands and two feet. As keyboards have evolved and been made lighter over the 20th century, it has become additionally possible to play with open hands and fingers. Two notes, no more than a fourth apart, are easily playable with one hand. Passages can be difficult, though, when two-note chords are played in quick succession with one hand, especially when changes in hand position are required between the natural and chromatic keys. Clusters of three or four notes in one hand are also possible if the keys are all natural, or all chromatic.

It is possible, though unusual, to play two neighboring pedals simultaneously with one foot, provided they are both natural, or both chromatic.

Fast repeated notes are possible in the upper range with hands, but not as much in the lower range or with the pedals, as the clappers are bigger and heavier.

Spacing

The keys on a carillon are much farther apart than on a piano—14 inches per octave, compared to 6.5 inches per octave. This makes rapid jumps in one hand between registers quite difficult; even jumping an octave quickly requires a lot of concentration.
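Those octave spans make the physical cost of a leap easy to estimate. The straight-line arithmetic below is a simplification (actual reach also depends on the raised chromatic-key row), but it conveys the scale of the problem:

```python
# Rough lateral distance of an interval on carillon vs. piano keyboards,
# using the octave spans quoted above.
CARILLON_OCTAVE_IN = 14.0
PIANO_OCTAVE_IN = 6.5

def reach_inches(semitones, octave_span):
    """Approximate lateral distance covered by an interval, in inches."""
    return round(semitones / 12 * octave_span, 1)

print(reach_inches(12, CARILLON_OCTAVE_IN))  # octave jump: 14.0 inches
print(reach_inches(12, PIANO_OCTAVE_IN))     # same jump on piano: 6.5
print(reach_inches(24, CARILLON_OCTAVE_IN))  # a two-octave gap: 28.0
```

A jump that is a comfortable hand shift on piano becomes more than a foot of lateral travel on the carillon bench.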

Rapid jumps in one hand between registers are quite difficult.

Additionally, maintaining a large gap between the left and right hands can be challenging. Rapid independent movement in the left and right hand is best kept within two octaves between the two hands, so that the performer can better visualize both hands on the keyboard.

On larger carillons with 4.5 or more octaves, it can be difficult or impossible to play both the high register with the hands, and the lowest bass notes with the feet, at the same time. Large diagonal stretches are best kept within 3 or 4 octaves.

Notation

Carillon music is written on two staves, with the top staff for the manuals and the bottom staff for the pedals. Carillonneurs generally prefer to read the top staff in treble clef and the bottom staff in bass clef, and read 8va or 8vb beyond the third ledger line, rather than changing clef.

Rolled chords are very idiomatic to the carillon and can be notated in one of two ways:

  1. Open-handed roll

A roll with an arrow pointing up will indicate to play all the notes open-handed, sequentially from bottom to top (1-2-3…). These open-handed rolls are usually kept to three or four notes, but five or six notes are possible if the notes are all clustered together, as long as both open hands can prepare all notes simultaneously.

  2. Broken roll

A “lightning bolt” will indicate to alternate both hands with closed fists and play a broken roll. For a four-note chord, this means playing the bottom note first, then the third note, then the second, and then the fourth (1-3-2-4). A three-note chord would be played 2-1-3. Broken rolls are very idiomatic to the carillon and more traditional than the open-handed roll.

Musical notation for rolled chords on the carillon.

Tremolando, or tremolo, is another common carillon technique. Tremolos are often notated in early 20th-century Flemish compositions to allow melodies in the upper registers to sing out over the bass. Tremolos are still used, though less frequently, in modern compositions, either to bring out melodies or for other effects. Tremolo is possible between two notes with two hands, or more notes with each hand playing a cluster. Carillonneurs can be very expressive with tremolo, varying both speed and dynamics.

Carillonneurs can be very expressive with tremolo.

Additional Resources

1) The absolute best resource is to find a carillonneur who will demonstrate the keyboard and the instrument. As each carillon is unique, this is essential when writing for a particular keyboard. Most carillonneurs would be very excited to hear from composers who are interested in writing for them!

2) There are two main publishers of carillon music in North America:

The Guild of Carillonneurs in North America

American Carillon Music Editions

3) The TowerBells website has an index of all carillons (and other bell instruments) in North America, and many instruments in Europe and the rest of the world. The site can be used to generate a list of instruments by location, size, pitch, year, bell foundry, etc. A particularly useful tool is the locator that displays all the instruments on a map.

4) John Gouwens has a carillon primer available here, with several musical examples.

5) Luc Rombouts published Singing Bronze in 2014, and the book is widely considered among carillonneurs as the most valuable account of carillon history. It is available on Amazon.

It Ain’t Over Yet. Don’t give up on Net Neutrality.

Today the Federal Communications Commission voted to reclassify internet providers from utilities to information companies. This apparently simple act undoes years of bipartisan agreement on the concept of net neutrality as the guiding principle behind internet rules. Chairman Ajit Pai, a former Verizon attorney appointed to his position by President Trump, has been relentless and single-minded over the past months in pursuing his goal, which is at best misguided and at worst deeply craven.

You’ve probably already heard a lot about why this reclassification is a truly terrible idea. I’ll just underline the perspective from New Music USA. Our constituency includes thousands and thousands of independent artists. We believe that the internet provides an absolutely indispensable tool for creating, distributing, and promoting the amazing array of musics that make this a potentially golden age for our sector. In a culture that so inattentively leaves the playing field so unlevel for artists, at least a neutral internet gives us a fighting chance to advance our work on the same terms as anyone else.

So who is actually in favor of this reclassification, this repeal of net neutrality? Very few, and (surprise!) they’re big corporations who stand to make billions of dollars off a newly unequal internet. Who’s against? Pretty much everyone else. Surveys show that more than 80% of Americans support net neutrality, and more than one million people called Congress in the last month alone, asking their representatives to save it. In a climate of deep and troubling divisions in our country, 80% (that’s eight-zero) agreement stands out as virtual unanimity. I’ve been truly moved to see the images of protests from all over the country, with ordinary people exercising their right to speak out and speak up for themselves. This is the country I want to live in.

If there’s good news here, it’s this: The FCC currently has the authority to do what it has just done. But Congress can step in and pass legislation that repairs the damage. There’s broad support for doing so. Lawmakers from all sides weighed in with letters to Chairman Pai asking him to delay the Commission’s vote: 39 Democrats and Independents signed onto one letter; Republican Senator Susan Collins joined another; Republican Representative Mike Coffman sent one of his own; not to mention the mountains of letters like this one from 32 House Democrats going all the way back to April.

There’s truly broad concern about the FCC action. And in that concern lies real hope to save the precious quality of an internet that’s equal for all.

Leveling Up, Part 2: Making the Grade

My goal when I started writing these posts on leveled band music was to create clearly defined boundaries for each of the grade levels. I was constantly frustrated, and remain so, about the nebulous nature of what each level means.

For instance, I would ask my conductor and composer friends, “What is a Grade 2 piece for band?” and would receive multiple answers. The most aggravating aspect to the answers was that each one started with a variation of, “It’s hard to define.”

Leveling music provides a shortcut for educators looking for new pieces.

The leveling system was created by publishers as a way to sort music by difficulty and complexity. It provides a shortcut for educators who are looking for new pieces.  Some state music education organizations started creating curated lists of pieces for festivals and competitions that also took advantage of the leveling system. This has allowed bands from different districts to compete in juried festivals and competitions on equal footing.

The leveling system has also helped create a set of standards. We can expect students who have been studying their instrument and performing in ensembles for a given number of years to have competency at the corresponding grade level.

There is basic agreement between the various publishers and the state lists about what the grade levels mean, but there is also overlap between the levels. One publisher’s Grade 2 is another’s Grade 3. If a composer is asked to write a Grade 4 piece, how will they know if they’re on the mark? The answer depends on the specific ensemble and on knowledge gained through experience. The best way to learn what music looks like at a particular grade level is, of course, to study the scores of other pieces at the level you are aiming to hit.

In my last post we looked at the business of sheet music and how educational instrumental sheet music has sales in the neighborhood of $100,000,000 annually. It’s a big business. This post looks at the various levels and provides some general characteristics of each.

The Big Picture

Composers interested in writing for bands should start by asking two important questions:

  1. Given the age and experience of the students in the group, what is possible?
  2. Are the challenging portions of the music I am writing providing teaching opportunities or are they barriers to performance?

When asked to write for an educational ensemble, many composers begin with the limitations of the players—instrumentation, ranges, etc. This is important information! However, we also need to think about what the students CAN do. So, if you’re asked to write a Grade 2 piece you can begin by wondering what a middle school band and the students who are in it are capable of.

We need to think about what the students CAN do.

Most middle school band students have been playing for 2–3 years. They can play at least two octaves worth of notes. They are comfortable with a range of key signatures (mostly between 0–4 flats). Sixteenth notes, dotted rhythms, and simple triplet passages are all within reach. They can also do a lot of things your music engraving software won’t play back (and it’s why so many of us forget these are available), such as noise making, singing, speaking into their instruments, playing with a breathy and unfocused sound (actually, this might be lack of skill development, but you can still take advantage of it!), and more.

I will never forget performing my first P. D. Q. Bach piece as a student. It required me to remove the mouthpiece of my clarinet from the instrument and blow into it. The result was one of the worst duck calls I’ve ever heard. I’m pretty sure the piece was P. D. Q. Bach’s Grand Serenade for an Awful Lot of Winds and Percussion (check out this performance, especially from 3:20–3:50). Not only did Peter Schickele (the real composer behind the P. D. Q. Bach persona) use extended techniques, but he also introduced us to non-standard notation. More than that, it was fun and exciting! Do you remember when you first encountered the use of noise and extended techniques as a student?

If you performed in an instrumental ensemble as a child, your director may have used one of the many core method books made available by publishers. These method books walk a beginning ensemble from their very first notes to performances of compositions. Pedagogues have spent decades refining the books and carefully selecting which skills are presented when. The books are coordinated so each member of the ensemble, no matter which instrument they play, is working on the same skills and music simultaneously. You can see some of the most popular methods here.

Studying these method books is a great way to learn what’s possible for the ensemble you’re composing for. If a director says the ensemble has recently finished book two of a particular method series, that means something. That method book is now a resource for what’s possible and what ground has been covered in terms of range, key, rhythm, tempo, and articulation. Coupled with a good conversation with a competent director or the commissioning ensemble, it will also provide you a way forward so you can craft a musically satisfying piece that appropriately challenges the ensemble.

In a recent interview for The Portfolio Composer podcast, I was speaking with band director Aaron Given and he gave this great piece of advice:

As you’re thinking about how hard you’re going to make [your piece] and what you need to do to make it sound the way you want it so you’re not artistically compromising yourself, think about teaching opportunities versus performance barriers.

As composers writing for younger players, we need to ask ourselves if challenging passages require increased effort from the student or if we’re actually asking students to do something that’s developmentally inappropriate. Aaron gives the example of a few measures of fast scale passages versus asking the trumpets to hit a high Bb.

Appropriate challenges are often welcome and necessary, but the long term consequence of performance barriers is that your piece will not be performed.

Appropriate challenges, such as asking the players to woodshed their scales, are often welcome and necessary for the continued development of the players and ensemble. However, asking them to make a jump in skill that does not represent a good next step is a performance barrier. An ensemble director can and will work into the rehearsal the drill and practice necessary to improve the skills called for in a piece. These are the teaching opportunities. However, as Aaron said with regards to the high trumpet Bb, a performance barrier would require him to work every day with the trumpets on overtone series exercises and embouchure control to the detriment and neglect of the rest of the ensemble in order to ready the section for performance. The long term consequence is that your piece will not be performed.

Almost all of my early pieces for concert band and wind ensemble made this mistake. If, on the whole, the piece could fit comfortably as a Grade 3, I would also include problematic passages where one section’s part was suddenly a Grade 4.5–5. It created incredible rehearsal challenges for the director and did not provide appropriate teaching opportunities.

One final word of advice: do not look at the key signatures associated with the grade levels and limit yourself to those major or minor keys. Instead, consider the key signatures as representative of pitch collections. All of the modes, pentatonic scales, and (in moderation) even some non-tonal scales can be used.

Most high school bands playing Grade 3–4 literature are comfortable with up to four, sometimes five, flats and even one sharp. Though the music should still be pitch centered, and for the most part tonal, brief whole tone, octatonic, and other synthetic scale passages can still be worked in. Treat those passages with care and use them briefly, but know that not every piece for band has to be in the key of Bb major. Moments of Debussy-like planing, Ivesian bitonality, Stravinskian stratification and juxtaposition, and Hindemithian counterpoint can have their place in educational music. But remember: Are you including those passages for teaching opportunities? Or will they become stumbling blocks for performance?

Grades

Below are brief descriptions for grade levels 1–6. Some systems stop at Grade 5. In order to accommodate pieces that are too challenging for one level but not quite difficult enough for the next, publishers often use a half-point system, e.g., 1.5, 2.5, 3.5, etc.

The lines between the grade levels are fuzzy.

I compiled these descriptions from personal experience. Depending on which source you are looking at, you may find some disagreement. Keep in mind that these descriptions are not designed to be definitive. The lines between the grade levels are fuzzy, and this serves only as a rough guideline. When composing for a specific ensemble, you need to discuss with the director what that ensemble is capable of, knowing that the group may or may not fit into one of these categories nicely.

Grade 1—Very Easy (1 year of playing experience)

  • First-year bands
  • Basic rhythms, with a uniformity of rhythms throughout the ensemble
  • Simple meters
  • Limited ranges
  • Limited technique
  • No exposed passages or solo work
  • Key signature: 1–2 flats (not C major*)
  • Length: 1–3 minutes

*A brief word about key signatures. Woodwind and brass instruments tend to favor flat keys because several instruments in the ensemble are transposing instruments. For the transposing instruments (the most common being Bb clarinet, all saxophones, trumpet, and French horn), the first scale learned is often the written F or C major scale. However, due to the transposing nature of the instrument, the sounding key is typically Bb, Eb, F, or Ab. As young wind players expand their knowledge of chromatic notes and key signatures, they typically add more flats. This is in stark contrast to young string players who, due to the nature of the open strings, learn sharp keys first and typically increase their knowledge by adding sharps.
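The flat-key bias falls out of simple pitch arithmetic. Here is a small sketch mapping a written key to its concert (sounding) key for a few common band transpositions:

```python
# Written key -> concert key for common band transpositions.
# Offsets are the number of semitones the instrument sounds BELOW written pitch.
KEYS = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]
OFFSETS = {
    "Bb clarinet / trumpet": 2,   # sounds a major 2nd lower
    "Eb alto saxophone": 9,       # sounds a major 6th lower
    "F horn": 7,                  # sounds a perfect 5th lower
}

def concert_key(written, semitones_down):
    """Transpose a written key down by the instrument's offset."""
    return KEYS[(KEYS.index(written) - semitones_down) % 12]

for inst, down in OFFSETS.items():
    # The "easy" written keys of C and F land squarely on flat concert keys
    print(inst, concert_key("C", down), concert_key("F", down))
```

The two easiest written scales for these instruments, C and F, all land on concert Bb, Eb, F, or Ab, which is exactly why band literature lives in flat keys.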

Grade 2—Easy (2 years of playing experience)

  • Middle school bands, small-program high schools
  • Introduction of easy compound meters
  • Intermediate rhythms with some syncopation, dotted notes, and triplets
  • Key signature: up to 2–3 flats
  • Length: 2–5 minutes

Grade 3—Medium (3–4 years of playing experience)

  • Advanced middle school bands, most high schools
  • Challenging rhythms
  • Easy changing and asymmetrical meters
  • Some solo and soli (sectional) writing, beginning of part independence
  • Slight use of extreme ranges
  • Advanced technique
  • Key signature: up to 4 flats
  • Length: 3–7 minutes

Grade 4—Medium Advanced (5–6 years of playing experience)

  • Advanced high schools, colleges, and small-program universities
  • Challenging rhythms with a free use of syncopation
  • Frequent changing and some asymmetrical meters
  • Solo writing with much part independence
  • Key signature: 1 sharp to 5 flats
  • Length: 6+ minutes and multiple movements

Grade 5—Advanced (7–9 years of playing experience)

  • Advanced high schools, universities
  • Very challenging rhythms
  • Changing and asymmetrical meters
  • Full range
  • Virtuosic writing
  • Key signature: All
  • Length: Any

Grade 6—Professional (10+ years playing experience)

  • Most universities
  • Very difficult in all facets

NewMusicBox Mix: 2017 Staff Picks


This isn’t meant to be just another 2017 “best of” list. Rather, since New Music USA is all about the discovery of new sounds, staffers here like to celebrate the end of the year with a shout-out to a track that caught their ears and hung on for any number of good reasons. Don’t see a 2017 favorite of yours? We hope you’ll tell us more about it in the comments below so we can all give it a listen.

Follow the links for further listening and to add the albums to your own collection.

Happy Holidays from New Music USA!!


This Is The Uplifting Part

Natacha Diels: Child of Chimera
Ensemble Pamplemousse

ALBUM: ...This Is The Uplifting Part
Parlour Tapes+

Purchase via Bandcamp / USB

I love that Pamplemousse’s collective music-making is utterly virtuosic and serious, but also light and often playful with humour. It elevates the concept of new music while simultaneously questioning its very underlying fabric. This is also the *only* physical media I’ve bought this year: it comes as a USB stick nestled in a laser-cut bamboo “cassette tape.”

–Eileen Mack, Junior Software Engineer


Passionate Pilgrim

Brad Balliett: My Flocks Feed Not
Oracle Hysterical/New Vintage Baroque

ALBUM: The Passionate Pilgrim
Via Records

Purchase via Amazon / iTunes

I caught the CD release show for this album at National Sawdust and was completely entranced by the mix of materials used to create its unique soundworld. With period instrument and modern timbres, words that feel timeless, and musical language that cuts across eras, it was easy to enter this world and hard to stop exploring it (especially with the voices of Majel Connery and Elliot Cole in my ear). Passionate Pilgrim remained in rotation for me for weeks after the show, and I’m excited to revisit it again as part of this year-end reflection.

–Molly Sheridan, Director of Content, and Co-Editor, NewMusicBox


Memory Bells

Night Foundation: Memory Bells

ALBUM: Memory Bells
Lobster Theremin

Purchase via Bandcamp / Amazon / iTunes

Grab that eggnog (or adult beverage of choice) and chill with some seriously lush downtempo from the Night Foundation—a.k.a. the Miami-based Richard Vergez—crafted with love, hardware, real tape loops, and a trumpet.

–Eddy Ficklin, Director of Platform


Glorious Ravage

Lisa Mezzacappa: Shut Out the Sun

ALBUM: Glorious Ravage
New World

Purchase via Amazon / iTunes

Lisa Mezzacappa’s album Glorious Ravage, featuring the stunning vocals of Fay Victor and an ensemble of incredibly talented musicians and improvisers, took me on a far-off journey through the lens of largely forgotten female explorers. Mezzacappa transforms the explorers’ words into song and also developed visuals for the live performance. Although I wasn’t fortunate enough to see the work performed live, the music itself is completely captivating. I still feel I need at least a few more good listens through the whole album to really get my ears and mind around the music, but that makes the work all the more rewarding. I particularly enjoyed “Shut Out the Sun.” If you’re looking for a taste of this inspiring work, it will be well worth your time.

–Kristen Doering, Grantmaking Associate


Hushers

Kate Soper: Songs for Nobody: “III. Song”
Performed by Quince Contemporary Vocal Ensemble

ALBUM: Hushers
New Focus Recordings

Purchase via Bandcamp / Amazon / iTunes

Choosing just one track from one recording is so difficult—I’m not a “favorites” kind of person. Who’s my best friend? I have many friends and I love them all. So I want to give a special shout-out to Fabian Almazan for his really superb recording Alcanza, and I urge everyone to give it a listen. Meanwhile, I love the Quince ensemble’s pure and compelling vocal sound. I also adore this Kate Soper song, and together, this is a nearly perfect recording—at least as perfect as art could ever be!

–Deborah Steinglass, Director of Development


Okónkolo

Yosvany Terry: Okónkolo (Trio Concertante)
Bohemian Trio

ALBUM: Okónkolo
Innova Recordings

Purchase via Amazon / iTunes

The title track from The Bohemian Trio’s debut recording, Okónkolo (Trio Concertante), springs a joyous escape from the porous walls of the genre prison. How to label this? Who cares! It’s crafted with expertise, performed with seemingly spontaneous precision, and a blast to listen to.

–Ed Harsh, President and CEO


Wake in Fright

Uniform: The Light at the End (Cause)

ALBUM: Wake in Fright
Sacred Bones

Purchase via Bandcamp / Amazon / iTunes

“The Light at the End (Cause)” is a standout track from Uniform’s 2017 Wake in Fright. Making the most of electronic and analog tools to produce ear-splitting, heart-pounding noise, the NYC duo has imbued a recording with the strength of a live show. This track, and dare I say the entire record, is worth a listen.

–Madeline Bohm, Software Engineer and Designer


Soft Aberration

Scott Wollschleger: Soft Aberration
Karl Larson, piano; Anne Lanzilotti, viola

ALBUM: Soft Aberration
New Focus Recordings

Purchase via Bandcamp / Amazon / iTunes

A beautiful, slow meditation delicately and deftly handled that will only further reward with repeated listening.

–Scott Winship, Director of Grantmaking Programs


Knells II

The Knells: Poltergeist

ALBUM: Knells II
Still Sound Music

Purchase via Bandcamp / Amazon / iTunes

“Poltergeist” by The Knells really stood out to me this year amid the sea of new releases. I love the blending of genres to create something totally unique, and the music video is awesome.

–Sam Reising, Community Platform Strategist and Grantmaking Manager


Composer's Collection: John Mackey

John Mackey: Foundry
North Texas Wind Symphony conducted by Eugene Migliaro Corporon

ALBUM: Composer’s Collection: John Mackey
GIA Composer’s Collection

Purchase via Amazon / iTunes

The latest addition to the exceptional GIA Composer’s Collection series is surprisingly the first commercial CD release devoted exclusively to the music of John Mackey, and it features 12 stunning examples of the wonders he works in the wind band idiom. There are many treasures in this two-disc collection, but the piece I’ve pressed the replay button to hear the most is Foundry, a relatively brief (just 4 ½ minutes) 2011 “grade 3” piece (for what that means, read Garrett Hope) that was originally written for a consortium of junior high school and high school orchestras. Here the usual mix of winds, brass, and percussion is augmented with a wide array of found objects; ideally a group of 12 percussionists is asked to strike piles of metal, pipes, wood, and mixing bowls, as well as to whack a whip. Written nearly a century after Iron Foundry, Alexander Mosolov’s famous orchestral paean to Soviet industrial accomplishments, Mackey’s piece is less about work and all about play. Junior high school is one of my worst memories, but I’d re-enroll today if I were given the chance to participate in a performance of this!

–Frank J. Oteri, Composer Advocate, and Co-Editor, NewMusicBox

This Is Why Your Audience Building Fails

How do we increase the audience for new music? This is a never-ending debate, but virtually all of the standard answers assume that we need to be more inclusive, breaking down barriers for newcomers. From “people should be allowed to clap between movements” to “our next concert celebrates the work of composers from Latin America,” the common thread is evangelical: if we make the culture of new music welcoming to a broader range of people, new audiences will be won over by the universal artistic truth of our music.

This attitude is more or less unique to new music. Sure, every struggling indie band wants to play to larger houses, but the default boundaries of the audience are predefined, usually along class or ethnic lines. Country music has never seriously attempted to break into the African-American market (despite some important black roots). Norteño music does not worry about its lack of Asian American artists. Arcade Fire has probably never tried to partner with the AARP. Even Christian rock, which is fundamentally about evangelism, flips the relationship around: music to spread belief, versus belief to spread music.

So why do we put inclusivity at the center of our audience building? I suspect it is largely a reaction to our upper-class heritage: after all, our genre wouldn’t exist without the 19th-century bourgeoisie and 20th-century academia. Through openness, we hope to convince people that we’re really not that stuffy, that our music can have a meaningful place in people’s lives even if they aren’t conservatory-trained musicians or white upper-middle-class professionals.

Greater inclusivity isn’t an audience-building strategy—it’s an audience-building outcome. For most musical genres, it is the exclusivity of the community that is the selling point.

Working toward greater diversity in new music is necessary and right. The problem is that we’re putting the cart before the horse. Greater inclusivity isn’t an audience-building strategy—it’s an audience-building outcome. Making inclusivity the focus of strategy actually hurts our efforts. All we do is muddle classical music exceptionalism with easily disproven assumptions about musical taste, in the process blinkering ourselves to certain truths about how people use music in pretty much any other context.

And what do we get for our efforts? The same small audiences of mostly white, highly educated music connoisseurs. If we truly want to cultivate both meaningful growth and meaningful diversity in new music audiences, we need to take a step back and examine how people choose the music they listen to.

Communities and Outsiders

For the vast majority of people, music is—whether for better or worse—strongly connected to tribalism. It’s sometimes hard for us to see this as musicians because we treat sounds and genres the way a chef explores varietals and cuisines, each with unique properties that can be appreciated on their own merits.

Yet very few non-musicians relate to music in this way. Usually, musical taste is intertwined with how the listener sees him- or herself in the world. People choose their music the same way they choose their favorite sports teams or their political affiliations: as a reflection of who they want to be, the beliefs they hold, where they feel they belong, and the people they associate with.

In other words, musical taste is about community building—an inclusive activity. But whenever you build a community, you also implicitly decide who isn’t welcome. Those boundaries are actually the thing that defines the community. We see this clearly in variations in average tastes along racial or ethnic lines, but it’s just as important elsewhere: comparing grey-haired orchestra donors to bluegrass festival attendees, or teenagers to their parents, for example.

For most musical genres, it is the exclusivity of the community that is the selling point. Early punk musicians weren’t trying to welcome pop music fans—they actively ridiculed them. Similarly, nobody involved in the ‘90s rave scene would have suggested toning down the bold fashion choices, drug culture, and extreme event durations in order to make the genre more accessible.

Or consider the R&B family of genres: soul, funk, Motown, hip-hop, old-school, contemporary, etcetera. These are the most popular genres in the African-American community, at least partially because these genres are theirs. They made this music, for themselves, to address the unique experiences of being black in America. Sure, other people can (and do) enjoy it, make it, and transform it to their purposes. But only because everyone acknowledges that this is fundamentally black music. When Keny Arkana raps about the struggles of the poor in Marseilles, we don’t hear the legacy of Édith Piaf or Georges Brassens or modern French pop stars. We don’t hear the Argentine roots of her parents or other South American musical traditions. What we hear is an African-American genre performed in French translation.

The video for Keny Arkana’s “La Rage,” clearly influenced by African-American music videos.

In contrast, when genres get co-opted, like rock ‘n’ roll was, like EDM was, they lose their original communities. When we hear Skrillex, we think white college kids, bro-y sales reps, or mainstream festivals like Coachella—not the queer and black house DJs from Chicago and Detroit who pioneered EDM. Similarly, when we hear Nirvana or the Grateful Dead, we don’t hear the legacy of Chuck Berry or Little Richard. As exclusivity disappears, the music ceases to be a signifier for the original group, and that group moves on to something else. Community trumps genre every time.

Expanding the Circle

Things aren’t completely that clear cut, of course. There are black opera singers, white rappers, farmers who hate country music, grandmothers who like (and perform) death metal, and suburban American teenagers who would rather listen to Alcione than Taylor Swift. In addition, a lot of people like many kinds of music, or prefer specific music in certain contexts. We thus need a portrait of musical taste that goes beyond the neolithic sense of tribalism.


The first point to note is that communities of taste, like other communities, are not mutually exclusive. There are friends you would go to the gym with, friends you’d invite over for dinner, work friends you only see at the office, and so on. Some of these groups might overlap, but they don’t need to.

Similarly with music, there is music you’d listen to in the car, music you’d make an effort to see live, dinner music, workout music, wedding music, and millions of other combinations. Again, sometimes the music for one context overlaps with another, but it doesn’t necessarily need to. As such, while people make musical taste decisions based on tribe, we all belong to many overlapping tribes, some of which use different music depending on the context.

Film is one of the clearest examples of this contextual taste at work. Why is it, for instance, that most people don’t bat an eyelash when film scores use dissonant, contemporary sounds? Because for many people, their predominant association with orchestral music is film. As I’ve written before, when uninitiated audiences describe new music with comments like “it sounds like a horror movie,” they’re not wrong: for many, that’s the only place they’ve heard these sounds. Film is where this type of music has a place in their lives, and they hear atonality as an “appropriate” musical vocabulary for the context.

In addition, film gives us—by design—a bird’s-eye view into other communities, both real and imaginary. It’s a fundamentally voyeuristic, out-of-tribe medium. We as an audience expect what we hear to be coherent with the characters on the screen or the story being told, not necessarily with our own tribal affiliations. Sure, we definitely have communities of taste when it comes to choosing which films and TV shows we watch. But once we’re watching something, we suspend our musical tastes for the sake of the narrative.

Thus, when the scenario is “generic background music,” film offers something in line with our broad societal expectations of what is appropriate for the moment—usually orchestral tropes or synthy minimalism. However, when the music is part of the story, or part of a character’s development, or otherwise meant to be a foreground element, there’s a bewildering variety of choices. From Bernard Herrmann’s memorable Hitchcock scores, to Seu Jorge’s Brazilian-inspired David Bowie covers in The Life Aquatic, to Raphael Saadiq’s “all West Coast” R&B scoring of HBO’s Insecure—anything is possible as long as it makes sense for the taste-world of the narrative.

Dealing with Outliers

Lots of people have tastes that deviate from societal norms and tribal defaults, including (obviously) most of us in new music.

All that aside, we still need to explain the outliers: the death metal grandma, the young American Brazilophile, the black opera singer… Lots of people have tastes that deviate from societal norms and tribal defaults, including (obviously) most of us in new music.

In a case like the suburban teenager, it might be as simple as curiosity and the thrill of exoticism. But when we turn to examples like the black opera singer, things get more complicated. Making a career in European classical music is incredibly hard, no matter where your ancestors come from. But black people in America also face structural challenges like systemic racism and the high cost of a good classical music education in a country where the average black family has only one-thirteenth the net worth of the average white family. Making a career in music is never easy, and it doesn’t get any easier when you try to do it outside of your tribe’s genre defaults. Yet despite the challenges, there are clearly many black musicians who have persevered and made careers for themselves in classical music. Why did they choose this path through music?


The standard explanation leans on exceptionalism: classical music is a special, universal art form that has transcended racial lines to become a shared heritage of humanity, so of course it will be attractive to black people, too. That doesn’t really stand up to scrutiny, though. Rock ‘n’ roll is at least as universal. If it weren’t, Elvis Presley wouldn’t have been able to appropriate and popularize it among white Americans, and rock-based American pop wouldn’t have inspired localized versions in basically every other country in the world.

Jazz also has a stronger claim at universalism than classical music. Multiracial from its beginnings, incorporating both black and white music and musicians, then gradually broadening its reach to meaningfully include Latin American traditions and the 20th-century avant-garde—if there is any musical tradition that can claim to have transcended tribal barriers, it is jazz, not classical music. No, musical exceptionalism is not the answer.

Maybe this is an affirmative action success story then? I doubt that’s the whole explanation. Black Americans have been involved in classical music at least since the birth of the nation—a time when slavery was legal, diversity was considered detrimental to society, and polite society thought freedmen, poor rural hillbillies, and “clay eaters” were a sub-human caste of waste people not capable of culture. That environment makes for some strong barriers to overcome, and to what benefit? It would be one thing if there were no alternatives, but there have always been deep, rich African-American musical traditions—arguably deeper and richer than those of white Americans, who mostly copied Europeans until recent decades (after which they copied black Americans instead).

I asked a handful of black classical musicians for their perspectives, and their answers shed some light. Their paths through music varied, but everyone had mentors who encouraged their passion for classical music at key stages, whether a family member, a private instructor, a school teacher, or someone else. In addition, they all got deeply involved in classical music at a young age, before they had the maturity and self-awareness to fully comprehend how racism might play a role in their careers. By the time they were cognizant of these challenges, classical music was already a big part of who they were. They felt compelled to find their place within it.

W. Kamau Bell recently shared a similar story about his path into comedy in this Atlantic video.

These anecdotes provide a partial answer, but we still don’t know where the initial inspiration comes from, that generative spark that leads to an interest in a specific instrument or type of music. For example, cellist Seth Parker Woods tells me that he picked the cello because he saw it in a movie when he was five. Something about the cello and the music it made struck him powerfully enough that a couple of years later, when everyone was picking their instrument at school (he attended an arts-focused school in Houston), he thought of the movie and went straight to the cello. To this day, he remembers the film and the specific scene that inspired him. I was similarly drawn to percussion at a young age, begging my parents for a drumset, acquiescing to their bargain that “you have to do three years of piano lessons first,” and then demanding my drums as soon as I got home from the last lesson of the third year.

Nature or Nurture

There is something fundamental within certain people that leads us to specific instruments or types of music. And thanks to science, we now know pretty conclusively that part of the reason for this is genetic, although we don’t yet know a whole lot about the mechanics involved.

Now, before we go further, let’s be very clear about what genetics doesn’t do. It doesn’t preordain us biologically to become musicians, and it doesn’t say anything about differences in musical preference or ability between genders or ethnic groups. Simplistic mischaracterizations of that sort have been responsible for lots of evil in the world, and I don’t want to add to that ignominious tradition. What genetics does do, however, is provide a plausible theory for some of the musical outliers. It’s that extra nudge in what is otherwise a predominantly cultural story.

A major contributor to our understanding of music genetics is the Minnesota Study of Twins Reared Apart. Started in the late 1970s and still going today, it has tracked thousands of sets of twins who were separated at birth and raised without knowledge of each other. The goal of the study and similar ongoing efforts is to identify factors that are likely to have a genetic component. Since identical twins have identical genomes, we can rule out non-genetic factors by looking at twins who have been raised in completely different social and environmental situations.

Most twin-study findings relate to physical traits and susceptibility to disease, but the list of personality traits with a genetic component is truly jaw-dropping: the kinds of music a person finds inspiring, how likely someone is to be religious, whether s/he leans conservative or liberal, even what names a person prefers for their children and pets.

And we’re not talking about, “Oh hey, these two boomers both like classic rock—must be genetics!” No, the degree of specificity goes down to the level of separated twins having the same obscure favorite songs, or the same favorite symphonies and the same favorite movements within them. In the case of naming, there are multiple instances of separated twins giving their kids or pets the exact same names. Moreover, it’s not just one twin pair here and there; these personality overlaps occur frequently enough to be statistically significant. (For more in-depth reading, I recommend Siddhartha Mukherjee’s fascinating history of genetic research, The Gene.)

It would seem that our genome has a fairly powerful influence on our musical tastes. That said, the key word here is influence—scientists talk about penetrance and probability in genetics. It’s unlikely that composers have a specific gene that encodes for enjoying angular, atonal melodies. However, some combination of genes makes us more or less likely to be attracted to certain types of musical experiences, to a greater or lesser degree. That combination can act as a thumb on the scale, either reinforcing or undermining the stimuli we get from the world around us and the pressures of tribal selection.

The genetics of sexual orientation and gender identity are much better understood than those of musical taste, and we can use those to deduce what is likely going on with our musical outliers. Researchers have now definitively located gene combinations that control for sexual orientation and gender, measured their correlation in human populations, and used those insights to create gay and trans mice in the lab, on demand. In other words, science has conclusively put to rest the nonsense that LGBTQ individuals somehow “choose” to be the way they are. Variations in sexual orientation and gender identity are normal, natural, and a fundamental part of the mammalian genome, just like variations in hair color and body shape.

When it comes to homosexuality in men, a region of the X chromosome called Xq28 appears to play the determining role in many (though not all) cases. When it comes to being trans, however, no single gene dominates. Rather, a wide range of genes that control many traits can, in concert, create a spectrum of trans or nonbinary gender identities. This makes for a blurry continuum that might explain everything from otherwise-cis tomboys and girly men to completely non-gender-conforming individuals and everyone in between.

When it comes to the genetics of musical taste, we’re likely to be facing something similar to the trans situation, in that individuals are predisposed both toward a stronger or weaker passion for music and a more or less specific sense of what kind of musical sounds they crave. All professional musicians clearly have a greater than average predisposition for music, since nobody becomes a composer or bassoonist because they think it’s an easy way to earn a living. Likewise, certain people will be drawn strongly enough to specific sounds that they’re willing to look outside of their tribal defaults, both as listeners and performers.

Let’s reiterate, however, that genetics plays second fiddle. One hundred years ago, classical music enjoyed a much broader base of support than it does today, which suggests that tribalism is the bigger motivating factor by far. If things were otherwise, after all, musical tastes would be largely unchanging over the centuries, and I wouldn’t need to write this article.

A theory of musical taste

Mason Bates’s Mercury Soul

Enough with the theorizing. Let’s turn to two specific new music events that make sense when viewed through a tribalist lens. Both are events that I attended here in San Francisco over the past year or so, and both were explicitly designed to draw new crowds to new music.

Mason Bates’s Mercury Soul series is at one end of the spectrum. Taking place at San Francisco nightclubs, the Mercury Soul format is an evening of DJ sets interspersed with live performance by classical and new music ensembles, all curated by Mason. These types of crossover concerts were instrumental to his early career successes and led to a number of commissions, many with a similar genre fusion twist. He is now one of the most performed living American composers.

A promo video for Mercury Soul.

When Mason’s work comes up in conversation, there is often reference to blending genres, breaking down barriers, and building audiences for new music. Yet Mercury Soul is a textbook example of the evangelical trope: bringing classical music into the nightclub with the assumption that clubbers will be won over by the inherent artistic truth of our music. Given the arguments presented above, you can see that I might be skeptical.

Let’s start with just getting into the venue. As I was paying for admission, I witnessed a group of 20-somethings in clubbing apparel peer in with confused looks. Once the bouncer explained what was happening, they left abruptly. People come to nightclubs to dance, so when these clubbers saw that the context of the nightclub was going to be taken over by some kind of classical music thing, their reaction was, “Let’s go somewhere else.” Maybe they thought the concept was weird or off-putting. Or maybe they didn’t really get it. Or maybe they thought it was a cool idea but just wanted to go dancing that night. It doesn’t really matter, because if you can’t get them in the door, you’re not building audiences.

Wandering into the venue, I saw something I’d never seen at a nightclub before: multiple groups of grey-haired seniors milling around. Of the younger crowd, many were people I know from the Bay Area new music scene. Some attendees were obviously club regulars, but more than half of the room—which looked like 200–300 people—were clearly there either for Mason or for one of the ensembles playing.

The evening unfolded as a kind of call and response between Mason’s DJing and performances by the ensembles, often amplified. During the live music segments, people stood and watched. During the electronic music segments, they mostly did the same. People did dance, but the floor remained tame by clubbing standards, and the lengthy transitional sections between DJing and instrumentalists gave the evening a feeling of always waiting for the next thing to happen. The DJ portion lacked the non-stop, trance-inducing relentlessness that I loved back in my youthful clubbing days, yet the live music portion felt small in comparison—and low-fidelity, as it was coming through house speakers designed for recorded music. As is often the case with fusion, both experiences were diluted for the sake of putting them together. The end result didn’t feel like audiences coming together, it felt more like classical music colonizing another genre’s space.

That was my experience, but maybe it was just me? I attempted to interview Mason to get his take on the impact of Mercury Soul, but we weren’t able to coordinate schedules. However, in speaking to people who have been involved as performers, what I experienced was typical. Mercury Soul has gotten some positive buzz from the classical music press, but reactions from the non-classical press have been tepid at best, and interest in the project remains firmly rooted within traditional new music circles.

Communities of musical taste are not particularly concerned with what the actual music is, so why couldn’t a community develop around genre mashups in a nightclub?

To be fair, this doesn’t imply that the concept is doomed to failure. I could certainly see Mercury Soul evolving into a unique musical experience that has appeal beyond the simple act of genre fusion. As I’ve argued above, communities of musical taste are not particularly concerned with what the actual music is, so why couldn’t a community develop around genre mashups in a nightclub?

In other words, the music is not Mercury Soul’s problem. Rather, the problem is that Mercury Soul hasn’t tried to foster a community. Instead, it makes all the standard assumptions about audience building, which means that, best case scenario, members of the taste communities being thrown together might perceive the experience as an odd curiosity worth checking out once or twice. In the end, therefore, Mercury Soul’s true community is neither clubbers nor new music aficionados—it’s arts administrators and philanthropists desperate to attract younger audiences.

SoundBox

In contrast, let’s look at the San Francisco Symphony’s (SFS) SoundBox series. These events take place in one of the rehearsal rooms at Davies Symphony Hall, which is converted into a sort of warehouse party space, with multiple elevated stages, video projection screens, lounge-style seating, and a bar. The entrance is from a small rehearsal door on the back side of the building, and the room is not used for any other public performances, so everyone who is there has to come specifically for SoundBox. Initially, SFS also made a conscious decision to omit its brand entirely from the events, so most attendees were not aware of the SFS connection before they arrived.

Each program is curated by a prominent musician, many composers among them, and the repertoire is almost entirely new music, performed acoustically (or with live electronics) from a stage, as it normally would be, and accompanied by custom video projections. The performers are drawn from the SFS roster, and they present multiple short sets throughout the evening. During the sets, people sit or stand quietly and listen to the music. The rest of the time, they mill about, chat, and get drinks from the bar. When I went, there were about a dozen or two of my colleagues from the new music scene present, but the rest were people I didn’t recognize, most of them in their 20s and 30s.

Two thirds of SoundBox attendees are new each time, the vast majority are under 40, and very few are SFS subscribers.

In terms of reception, SoundBox could not be more successful. There are two performances of each show, with a maximum capacity of 400 people per evening. I spoke with a friend who works for the Symphony, and he told me that SoundBox always sells out—in one case, within 20 minutes of the tickets going on sale. And this with no marketing budget: low-cost online promotions and word of mouth are the only way they promote the events. Two thirds of SoundBox attendees are new each time, the vast majority are under 40, and very few are SFS subscribers.

Contrast the messaging of SoundBox’s promo video to that of Mercury Soul.

Unlike Mercury Soul, SoundBox starts out by defining a community: it’s a place for culturally inclined music lovers to discover new, stimulating experiences. SoundBox then presents its music as a sort of rare gem worth expending a bit of effort to unravel, in the same way a winery might offer guided tastings of rare vintages. As a result, the event ends up feeling exclusive and mysterious, as if you are part of an elite group of in-the-know art connoisseurs. Whereas so many new music events give off the desperate air of trying too hard to be cool—“Look, we perform in jeans! We don’t mind if you clap between movements!”—SoundBox doesn’t have to try. It just is cool, appealing to the same type of confident cosmopolitanism that has allowed modern art museums to draw enthusiastic crowds far in excess of most new music events.

Despite its successes in building new music audiences, however, SoundBox has failed to meet SFS’s objectives—ironically, for the same reasons as Mercury Soul. The Symphony wants SoundBox to be a sort of gateway drug, encouraging a younger crowd to attend its regular programming. Yet despite an aggressive push to market to SoundBox attendees, my contact tells me there has been virtually zero crossover from SoundBox to SFS’s other programs. To further complicate things, SoundBox is a big money loser. An audience of 800 people paying $45/ticket and buying drinks seems like a new music dream, but it doesn’t pencil out against the Symphony’s union labor commitments, which were negotiated with a much bigger orchestral venue in mind.

This is not a failure on a musical level, but it is a failure in SFS’s understanding of audience building. SoundBox met a strong and untapped demand for a sophisticated, unconventional musical experience, and it created a community of musical taste around it, quite by accident. But it’s a different community from that of the orchestral subscriber, focused on different repertoire, different people, and a different experience. The fact that it is presented by SFS is inconsequential.

It’s more than a bit ridiculous to assume that the same people who come to hear Meredith Monk in a warehouse space will be automagically attracted to a Wednesday night concert subscription of Brahms, Beethoven, and Mozart.

To recap, then, Mercury Soul fails to encourage 20-something clubbers to seek out new music because it doesn’t create a community of taste. On the other hand, SoundBox does create a community of taste, but it’s one that is interested in coming to hear Ashley Fure or Meredith Monk in a warehouse space. More importantly, it’s a community that has no preconceptions about how this music is supposed to fit into their lives, which allows them to deal with it on its own terms. With that context in mind, it’s more than a bit ridiculous to assume that those same people will be automagically attracted to a Wednesday night concert subscription of Brahms, Beethoven, and Mozart. That is a music most SoundBox attendees associate with their grandparents’ generation, performed in a venue with strong pre-existing associations that don’t help.

Lessons Learned

We live at a time that is not especially attuned to musical creativity. All the energy spent on audience building is a reaction to that. I have a couple of friends who are professional chefs, working in our era of widespread interest in culinary innovation. When I ask them about the SF restaurant scene, they complain that too many chefs chase fame, recognition, and Michelin stars instead of developing a unique artistic voice.

As a composer, I only wish we had that problem. Yet the situation was reversed in the mid-20th century, when works like Ligeti’s Poème symphonique could get reviews in Time Magazine but culinary culture was being taken over by TV dinners, fast food, artificial flavoring, processed ingredients, and industrialized agriculture.

Whatever the reasons for the subsequent shift, our task is to find ways to bring musical creativity back to the mainstream. Looking at the problem through the lens of communities of taste offers some insights into what we might do better:

Community Before Music

People will always prioritize their taste communities ahead of your artistic innovation. That means you either need to work within an existing community, or you need to fill a need for a new community that people have been craving.

The first solution is how innovation happens in most pop genres: musicians build careers on more mainstream tastes, and some of the more successful among them eventually push the artistic envelope.

With new music, this doesn’t really work. On the one hand, the classical canon is not an ever-changing collection of new hit songs but rather an ossified catalog of standard works. On the other, the more premiere-focused world of new music is a small community—that’s the problem to begin with.

So we are left with finding untapped needs and creating new communities around them. SoundBox proves that this is possible. It’s up to us to be creative enough to uncover the solutions that work in other contexts.

Forget Universalism

Despite my critiques of classical music exceptionalism, there are good reasons why new music should endeavor to become a truly post-tribal, universal genre. Those reasons have little to do with the music itself and everything to do with the people making it.

One of the distinguishing characteristics of new music is that we attract an extremely diverse range of practitioners who are interested in synthesizing the world’s musical creativity and pushing its boundaries. What better context in which to develop a music that can engage people on an intertribal level?

That said, this is not our audience-building strategy; it’s the outcome. The way we get to universalism is to create exclusive taste communities that gradually change people’s relationships with sound. First we get them excited about the community, then we guide the community toward deeper listening.

This is similar to what is known about how to reduce racial bias in individuals. Tactics like shaming racists or extolling the virtues of diversity don’t work and can even further entrench racist attitudes in some cases. However, social science research shows that a racist’s heart can be changed over the long term by having a meaningful, one-on-one conversation with a minority about that person’s individual experiences of racism. By the same token, to get to an inclusive, universal new music, first we need to get people to connect with our music on a personal level through exclusive taste communities that they feel a kinship with.

The MAYA Principle

Problems similar to new music’s lack of audience have been solved in the past. Famed 20th-century industrial designer Raymond Loewy provides a potential way forward through his concept of MAYA: “most advanced yet acceptable”. Loewy became famous for radically transforming the look of American industrial design, yet he was successful not just because he had good ideas, but rather because he knew how to get people warmed up to them.

One of the most famous examples is how he changed the look of trains. The locomotives of the 19th century were not very aerodynamic, and they needed to be updated to keep up with technological advancements elsewhere in train design. In the 1930s, he began pitching ideas similar to the sleek train designs we know today, but these were very poorly received. People thought they looked too weird, and manufacturers weren’t willing to take a chance on them.

Therefore, he started creating hybrid models that resembled what people knew but with a couple of novel features added. These were successful, and he eventually transitioned back to his original concept, bit by bit, over a period of years. By that time, people had gotten used to the intermediary versions and were totally fine with his original. He repeated this process many times in his career and coined MAYA to describe it.

I think the accessibility movement in classical music has been one of the biggest arts marketing disasters of all time.

What I like most about MAYA is that the last letter stands for acceptable, not accessible. I think the accessibility movement in classical music has been one of the biggest arts marketing disasters of all time. It gives nobody what they want, dilutes the value of what we offer, and associates our music with unpleasant experiences.

Loewy got it right with acceptable. He was willing to challenge his audiences, but he realized that they needed some guidance to grapple with the concepts he was presenting. We in new music similarly need to provide guidance. That doesn’t mean we dumb down the art; it means we help people understand it, in manageable doses, while gradually bringing them deeper.

Hard is not Bad

Often in new music we are afraid to ask our audiences to push themselves. That’s a mistake. People like meaningful experiences that they have to work for. The trick is convincing them to expend the effort in the first place.

To get there, we start with the advice above: build communities, then guide people into greater depth using MAYA techniques. Miles Davis’s career illustrates this process beautifully. He didn’t start out playing hour-long, freeform trumpet solos through a wah-wah pedal; he started out identifying the need for a taste community that wasn’t bebop and wasn’t the schlocky commercialism of the big band scene. This led him toward cool jazz, where he developed a musical voice that propelled him to stardom.

After Miles had won over his community, however, he didn’t stop exploring. He expected the audience to grow along with him, and many of them did. Sure, plenty of jazz fans were critical of Miles’s forays into fusion and atonality, but he was still pulling enough of a crowd to book stadium shows. There’s no reason new music can’t do the same, but we have to be unapologetic about the artistic value of our music and demand that audiences rise to meet it.

Define Boundaries

Since new music is trying to build audiences that transcend racial and class boundaries, we need to be super clear about who we’re making music for and who we aren’t. “This music is for everybody” is not a real answer. We must explicitly exclude groups of people in order to be successful community-makers. It is my sincere hope, however, that we can find ways to be effectively exclusive without resorting to toxic historical divisions along racial and class lines.

Here’s one potential example, among many, of how that could work. I’ve argued before that the “eat your vegetables” approach to programming is dumb. There is rarely any good reason to sandwich an orchestral premiere between a Mozart symphony and a Tchaikovsky concerto. Conservative classical audiences don’t gradually come to love these new works; they just get annoyed at being tricked into sitting through a “weird” contemporary piece. New music audiences, for their part, are forced to sit through standard rep that they may not be particularly passionate about. Nor does this schizophrenic setup help build any new audiences—you have to be invested on one side or the other for the experience to make any sense to begin with.

So instead of trying to lump all this music together, a new music presenter might decide that audiences for common practice period music are fundamentally not the same as those drawn to Stockhausen or Glass or premieres by local composers. Armed with that definition, the presenter might then choose to create an event that would be repulsive to most orchestra subscribers but appealing to someone else, using that point of exclusion as a selling point. Thus, an exclusive community of taste is created, but without appealing to racism or other corrosive base impulses.


Big-picture questions like how people develop musical taste tend to get glossed over because they are so nebulous. But that doesn’t mean they’re unimportant.

To close, I want to say a brief word about my motivations for writing this piece. Even though this is a fairly lengthy article, I’ve obviously only scratched the surface. The writing process was also lengthy and convoluted, dealing as we are with such a broad and opaque issue, and at many points I wondered if it was even possible to say something meaningful without a book-length narrative. Yet I feel that this subject is something we collectively need to wrap our heads around.

Big-picture questions like how people develop musical taste tend to get glossed over because they are so nebulous. But that doesn’t mean they’re unimportant. As musicians and presenters, we make decisions based on theories of musical taste every day, whether or not we articulate our beliefs. Taste is, in a sense, the musical equivalent of macroeconomics: hard to pin down, but the foundation of everything else we do.

My hope with this piece is that we can start talking about these issues more openly, drop some of the empty rhetoric, and stop spinning our wheels on the dysfunctional approaches of the last 40 or 50 years. Paying lip service to inclusivity is not enough. If you’ve read this far, then chances are you believe, as I do, that new music offers the world something unique that is worth sharing as broadly as possible. We desperately need to get better at sharing it.