Tag: microtonal

I Can’t Breathe: A Virtual Dialogue

A protester carrying a banner stating "I CAN'T BREATHE." Photo by Josh Hild on Unsplash

In 2016 I first heard I Can’t Breathe, Georg Friedrich Haas’s haunting work for solo trumpet, performed by Marco Blaauw at the Huddersfield Contemporary Music Festival. Haas’s work, written just after the birth of the Black Lives Matter organization, and well before the concept of Black Lives Matter came to international prominence, raises a number of important questions about the response of the international new music community to the increasingly multicultural and multiracial, i.e., creolized, societies in which its performances, curatorial directions, and critical and philosophical inquiries are being presented.

I Can’t Breathe was conceived and written in 2014 as a response to the police execution of an African American citizen, Eric Garner, on a New York City street. Garner’s “crime” was selling “loosies,” single cigarettes from a pack. This was said to be technically a form of tax evasion, which is not a capital crime in the statute books. However, a bystander filmed a police officer restraining Garner bodily with an illegal chokehold. On the video, Garner was heard to repeat the words “I can’t breathe” eleven times, before passing out and lying on the ground for seven minutes. While the authorities waited for an ambulance, Garner passed away; the autopsy cited “[compression] of neck, compression of chest and prone position during physical restraint by police” as cause of death. Despite nationwide protests, charges were never brought against the officers involved, although one of them was eventually terminated in 2019.


Georg Friedrich Haas: I can’t breathe (2014) for trumpet solo
Marco Blaauw, trumpet; Janet Sinica, video
(Lockdown Tape #66 in Ensemble Musikfabrik’s series of live-to-tape recordings of solo pieces, made by ensemble members during the corona lockdown.)

It seemed clear that Haas’s piece took on renewed relevance with the May 2020 police murder of George Floyd, who, before passing away, interspersed urgent pleas to be allowed to breathe with plaintive calls to his deceased mother. In the wake of the much larger, worldwide protests over Floyd’s killing, the widest range of individuals and institutions, including those in the field of new music, are being called to account for their actions regarding race.

I have always been intrigued by the questions raised by I Can’t Breathe, so I decided to talk to both Marco and Georg about the piece. The method I use here to combine our respective dialogues is similar to the one in the penultimate chapter of my 2008 book, A Power Stronger Than Itself: The AACM and American Experimental Music (University of Chicago Press), in which I selected quotes from nearly one hundred interviews with AACM members to fashion an imagined intergenerational dialogue about overarching social, cultural, and aesthetic issues that the organization and its individual members faced over the decades. I blend this new imagined dialogue with critiques of scholarly writing about the piece.

I begin with Georg’s understanding of the motivation for the work.

GFH: Well, it was a spontaneous activity. It happened when we looked out of the window at our house and we saw some demonstrations, Black Lives Matter, below us, and I said, OK, we have to go down there to join this. And suddenly this anecdote popped up about Chopin, when he heard about the revolution in Russia and decided, instead of going to Paris, to fight in this revolution. But he changed again and decided to go to Paris and work for the idea. And it is clear that this actually helped the revolution in Poland much more than he could have done by being part of the military activity. In the same way, I decided it’s not my job to protest in the streets. It’s my job to protest in the arts. And this is maybe one of a few pieces [of mine] which had some nonmusical connotation.

At the time, Georg was already quite late with another, much larger commission, but as he recalled, “Because it was such a spontaneous idea, there was no time for me to make a large, huge internal discussion about what is the right way to discuss this. Just do it. Do it now. And I think this idea is one of the possibilities to go to work as an artist.”

The present essay was prompted by a discussion I had with Marco Blaauw this past summer about the frequent negative responses to a Facebook announcement that his new-music group, Ensemble Musikfabrik, posted about this forthcoming release. Indeed, a number of the comments around the Facebook posting indicated that white new music people really had no business even speaking about the topic. One commenter suggested that Haas had “appropriated the words of a dying black man to become his anodyne aesthetic plaything.”

That such an apparently non-confrontational work could generate such heated debate seems ironic at first hearing. However, I read a number of these responses as exemplifying the growing pains that the field of new music is undergoing as its composers, performers, listeners, curators, scholars, critics, and educational institutions gradually awaken, however fitfully, to the need to develop a far more refined and trenchant discourse around the location of the field in a creolized creative environment.

Marco Blaauw playing his specially designed microtonal trumpet (Photo © Astrid Ackermann, courtesy MusikFabrik)

Despite the shocking nature of its subject matter, I Can’t Breathe is anything but sensationalistic. Rather than a wailing lament, Haas produces a restrained elegy.

GFH: The piece starts like a sentimental twelve-tone Kaddish. What I do technically, the process is, that this Kaddish is taking away the space to breathe. You are singing freely and the space gets closer and closer. And what I did technically is just to transcribe and transform the melodic elements into smaller intervals. As I reduce it, the melody is squeezed into 16th tones. The music is really very difficult, and Marco in this performance really is able to sing emotionally within these small intervals. There exists a cantabile in these 16th tones. And I still have this very traditional translation of a huge range of intervals describing the entity of the free world, and therefore it starts with the spaces between the lowest pitches of the trumpet and the highest, soft.

Marco Blaauw’s perspective on Georg’s technique evokes the blues:

MB: A blues player colors the notes, so to notate that, Haas uses what he has always been using, microtonal intonations. In the beginning, it’s like more and more colors to the melody, and then it becomes more and more strict as the melody goes from the big trumpet range to the tiniest interval, interrupted all the time by these single notes that are held for a very long time and pull you in.

GL:  I feel that the piece as a whole can be usefully contextualized as a form of pranayama, the study of the breath: a meditation on breath and life. We are asked to feel ourselves inside the breath, following its every nuance. The piece has a timeless quality about it, although it’s only thirteen minutes long.

MB: I do think it’s very, very meditative. And in that way, I think the brain starts listening more and more for details so that when you come towards the middle of the piece, you actually hear all the microtonality, the tiniest steps. You can actually listen to them because you’ve been trained during this short duration of the piece to all these little things [sings], this blues melody, like a variation on two notes.

GL: I’d also say that with its emphasis on depiction, I Can’t Breathe is very much in the American tradition expressed in Duke Ellington’s concept of the “tone parallel,” a tradition that includes Charles Ives, Louis Moreau Gottschalk, and Thomas “Blind Tom” Wiggins. The use of the harmon mute is of course related to the African American tradition, deployed most famously by Miles Davis. And then there are those super-high “squeeze” notes, an innovation in technique that is closely associated with the Ellington Orchestra’s altissimo specialist, trumpeter Cat Anderson.

GFH: And it was you who said to me that this is a specific technique of jazz, these very high pitches. It’s very rarely used in new music. For me the association is more to Luigi Nono, for whom high melodic gestures are a symbol of utopia—for example, the beginning of his string quartet, Fragmente-Stille.

GL: The piece is about aspiration in the literal sense, and with regard to the conceptual context, for me the squeeze notes depict severe restrictions on the breath, while the hesitations in the tone production refer to the fragility of life as the breath is strangled. The breath becomes rougher and more fragile as the life force goes out.

MB: The association with suffocation comes in when that becomes softer and softer and longer and longer and you get literally out of breath. But I don’t think it’s really meant that way. And then the piece falls apart after that. It loses structure also by the use of softer and softer mutes. And then, in the end, it’s just the silences and the single notes which are, as in the beginning, very, very long. Don’t you think that when you listen to a piece and you see somebody play a very long phrase, it’s almost like you stop breathing? I think when you have long silences, the same thing can happen. I feel that sometimes in the audience, people do not dare to breathe anymore.

GL: It’s like the audience can’t breathe. And you, the trumpeter, evoke a sense of empathy via a kind of transubstantiation.

In a 2016 essay, musicologist Max Erwin positions I Can’t Breathe as program music, which from the foregoing conversation seems evident enough; indeed, Haas appears to find no substantive moral imperative on either side of classical music’s traditional debate over programmatic versus absolute music. The author, however, provocatively characterizes the nature of the program as “more accurately, western art music snuff” (Erwin 2016, 10). Yet rather than a criminal’s recording of an actual murder for macabre or prurient interest, one can summarize Haas’s origin narrative for I Can’t Breathe as a concerned citizen’s determined response, in music, to an atrocity.

However, when the deformation of race becomes involved, an atrocity is no longer just an atrocity, and music becomes more than just music. Erwin sees Haas’s approach as exemplifying “a pervasive self-satisfied attitude and concomitant mode of production within the New Music apparatus. Under these auspices, the ‘politically engaged’ composer writes ‘protest music’ which laments the fate of this or that marginalised group” (Erwin 2016, 9). Thus portraying Haas’s move to assert humanistic values as simple political posturing, Erwin maintains that the statement in Haas’s program note—“I leave no notes to the perpetrators”—“identifies an object of political critique—’the perpetrators’, whilst simultaneously extricating the subject (composer/artwork/audience) from the object of critique… The object of critique is exactly that; it remains fundamentally over there, safely removed from composer and audience to observe and lament” (Erwin 2016, 10).

Erwin’s critique would have greater currency and credibility if new music as a field could demonstrate an ongoing concern with black lives, including those of its own Afrodiasporic composers and performers. However, this lack of engagement with issues of race is precisely what Haas is pointing to with his program note. Bringing this level of engagement from “over there” to “right here”–to himself as composer, to his audience, to the performer, and to the historians, critics, and institutions of new music–was exactly the goal of the piece.

In an influential essay, theorist Sylvia Wynter pointed out the consequences of the routine use of the acronym N.H.I. (No Humans Involved) by Los Angeles juridical and enforcement institutions “to refer to any case involving a breach of the rights of young Black males who belong to the jobless category of the inner city ghettoes” (Wynter 1994, 42).

By classifying this category as N.H.I. these public officials would have given the police of Los Angeles the green light to deal with its members in any way they pleased. You may remember too that in the earlier case of the numerous deaths of young Black males caused by a specific chokehold used by Los Angeles police officers to arrest young Black males, the police chief Darryl Gates explained away these judicial murders by arguing that Black males had something abnormal with their windpipes.

Indeed, this image of the deformation of the Black windpipe is central to the iconography of I Can’t Breathe. The remainder of Wynter’s “open letter to my colleagues” attempts to answer her own pointed question: Where did this classification come from?

GL: In both the title and the content of the piece, there’s a conceptual aspect which is very important. It’s not just an exercise. It’s designed to make people think. And I was telling Marco that for this sort of white audience for new music, it should make these people think.

GFH: Thank you. That’s very good. And in the end, in fact, this is what I also can prove. In interviews, I’m very often asked about this. And of course, this gives us a chance to speak about this, within surroundings in which, additionally, nobody is speaking about it. This is a way in which, in my opinion, political music does work.

Mostly staying in the softer and more difficult-to-sustain regions of the trumpet, I Can’t Breathe is zurückhaltend (reserved), and not only by the composer’s choice. Rather, the situation forces the composer’s writerly hand. Here, I find that the piece’s intensity depicts both a fragility and a Stoic nobility, where Eric Garner, George Floyd, Sandra Bland, Rayshard Brooks, and thousands of other black citizens are literally trying to draw upon their reserves of breath in a life-or-death struggle with forces who, backed by a culture in which black lives and liveness do not matter, take no significant notice of the humanity of those lives, while negating their own humanity in the process.

While Erwin’s thesis concludes that “Haas’s piece is at least five degrees removed from even the most rudimentary criteria of effective political protest” (Erwin 10, n16), in this thirteen-minute lament, political protest seems to be no concern whatsoever–unless protesting racialized injustice is now to become merely a political matter. Rather than a reductive rerouting of human values to questions of political efficacy, I Can’t Breathe is simply about black subjectivity, and what it means to be human.

Even so, our virtual conversation took on an ominous tone:

GL: The piece doesn’t have a happy ending; one could play it again and again, and a Sisyphean hell would be evoked. That accounts for what I find to be the work’s pessimistic quality– in the sense of Afro-pessimism, or how to function in the face of the possibility that Western society might prove permanently unable to shed its preoccupation with anti-blackness as a central part of its identity.

Indeed, it could be that at this late date, a reserved, conceptualist approach may not be enough. To begin with, Marco Blaauw was concerned about the ethical dimension of this kind of work and these kinds of issues being presented by white institutions, composers, and performers, in the white-majoritarian new music context:

MB: You don’t think that when I go to that festival and I ask my fee and I play that piece, that is somebody profiting from the situation?

GL: I feel that when you play this piece, and other people play it too, it brings those issues to an audience that isn’t often exposed to them, or maybe doesn’t think that those issues are relevant to their lives, or feel that what you are performing is totally antithetical to pure musical expression–what are you doing with this political stuff? Frederic Rzewski went through the same thing, John Coltrane, Bruce Springsteen–anybody doing political stuff is told to just shut up. But as I see it, you’re bringing a needed message to this public. And if you don’t do it, who’s going to do it?

Sylvia Wynter saw the disclosure of the category of N.H.I. as an opening from which “to spearhead the speech of a new frontier of knowledge able to move us toward a new, correlated human species, and eco-systemic, ethic. Such a new horizon, I propose, will also find itself convergent with other horizons being opened up, at all levels of learning… It is only by this mutation of knowledge that we shall be able to secure, as a species, the full dimensions of our human autonomy with respect to the systemic and always narratively instituted purposes that have hitherto governed us–hitherto outside of our conscious awareness and consensual intentionality” (Wynter 1994, 70).

This new awareness bears strong resonances, not only for the understanding of I Can’t Breathe, but for the future of new music itself. In the end, a creolized work like I Can’t Breathe represents a move toward a new identity for new music. No longer framing itself as a globalized, pan-European sonic diaspora, the goal of a creolized new music field is less about pursuing diversity than achieving a new complexity that promises far greater creative depth by recognizing the widest range of historical, geographical, political and cultural cross-connections. As the philosopher Arnold I. Davidson has noted, “Multiplication of perspectives means multiplication of possibilities.”

As Georg Friedrich Haas has declared, “With this piece, I declare my solidarity with the protesters” (Haas 2014). Indeed, each performance of I Can’t Breathe demands from contemporary music a further solidarity: an affirmation that black lives and black liveness do matter, to its history and to its future.

The first page of the score for Georg Friedrich Haas “I can’t breathe”
Copyright © 2015 Universal Edition Vienna. All Rights Reserved. Used by permission of European American Music Distributors Company, U.S. and Canadian agent for Universal Edition Vienna, publisher and copyright owner. This license is valid for distribution and usage in the territory of the world.


References

Ensemble Musikfabrik Facebook page, https://www.facebook.com/Musikfabrik/, accessed 10 June 2020.

Erwin, Max. 2016. “Here Comes Newer Despair: An Aesthetic Primer for the New Conceptualism of Johannes Kreidler.” Tempo 70, No. 278: 5–15.

Haas, Georg Friedrich. 2014. “I Give No Sound To The Perpetrators: Ein Kommentar.” https://www.musikfabrik.eu/de/blog/georg-friedrich-haas-i-give-no-sound-perpetrators-ein-kommentar

Lewis, George E. Unpublished videoconferencing interview with Georg Friedrich Haas, 14 June 2020.

Lewis, George E. Unpublished videoconferencing interview with Marco Blaauw, 14 June 2020.

Wynter, Sylvia. 1994. “‘No Humans Involved’: An Open Letter to My Colleagues.” Forum N.H.I.: Knowledge for the 21st Century 1, no. 1 (Fall): 42–73.

Getting Your Hands Dirty (Performing Microtonal Choral Music, Part 2)

Emotionalist Preamble

As a choir director, the majority of my experience is with youth and amateur ensembles. Thus, I usually deal with a different set of concerns and priorities than many readers of NMBx might.

The first thing to know about choirs below the professional level is that, in my firm belief, we are in it for the community above all. There is pride in the technical execution, too, of course! But much more so, it’s about conveying emotion, and experiencing the same emotion, and thus creating and maintaining the bonds of community with each other and with an audience.

In addition, it is quite common to encounter experienced choral singers who have limited sight-reading ability, who rely instead on a finely developed skill at retaining and repeating melodies that they hear. The notation then becomes, as it was in medieval Europe, more of a memory aid than a set of explicit instructions.

Medieval Neumes

A facsimile of the manuscript for ‘Iubilate deo universa terra’ which shows a series of unheightened cheironomic neumes added to psalm verses. (Image in the public domain.)

The joy is that it brings the experience of communal music-making into the reach of a very large population. The challenge is that the director is very often, of necessity, a teacher. So, for amateur choirs, there is no guarantee that the singers will have the whole-score awareness that is a hallmark of elite ensembles; and for many, there is basically a guarantee that they won’t!

Why on earth would anyone try to bring microtonal music into this ecosystem? Well, for one thing, it will help hone everyone’s intonational awareness—which can be sorely needed. But, on its own terms: there are new worlds of emotion to be explored that are unavailable with 12 equal tones alone!

However, a director in this circumstance needs to sell the piece in question to a perhaps skeptical ensemble. Use your entire boundless enthusiasm to support the methodical techniques below. If the singers like you, they’ll give it a chance.

With all caveats out of the way, then, let’s get to the technical side.

The technical side

For teaching microtonal passages, I advocate a “bimodal, target-based” approach. I chose this name because I needed a title that was both accurate and impressive sounding for a paper proposal. (It worked.) But here’s what I mean:

Bimodal – Requiring an integrated awareness of both the horizontal and vertical aspects of every pitch change. That is, one must keep in mind a new pitch’s relationship to the pitch it just left, and also its context within the sonority in which it arrives. These are often independent.

Target-based – Relying on anticipating the familiar, whether melodic or harmonic, or indeed both. When this is done, intervening things can more easily fall into place, even half-unconsciously.

These two tactics are already necessary for being a good choral musician within standard repertoire, but it’s important to make them explicit when we’re working with microtonality. A useful step toward using them explicitly in microtonal pieces is using them explicitly for challenging tonal passages. So, a director might work on these tactics during the semester immediately before a microtonal piece is even on the program.

A tonal example of the bimodal strategy

One illustrative passage is in Poulenc’s O Magnum Mysterium. Among many intonational trouble spots in this piece, consider the tritone in the opening tenor, at 0:08 and a few times afterward:

Most singers can pull out a tritone, but it’s not a reliable interval. It’s not uncommon to need to be reminded what it sounds like, using Maria or the Simpsons theme as a mnemonic. (Here’s a heartwarming comment thread from the Simpsons video:)

A thread of Facebook comments in response to the tritone in The Simpsons theme, including “lmfaoooo same” and “Me too, but I still don’t know what it is.”

Even when they have it securely, each person will execute it slightly differently, especially when neither of the tones involved acts as a leading tone. The resultant group pitch can be fuzzy. And, because it’s a “dissonant” horizontal interval, there is often the expectation of a dissonance where it lands.

So, you sing it slowly, tune that chord on a long tone—and it becomes apparent that the “Cb” is in fact a B natural, the third of a G major chord!

The tenors are now, ideally, experiencing that trouble spot on two levels. In one sense, they’re singing a tritone up from the previous note. But in another sense, they are occupying a very clear “home” in the resultant harmony, a home which has nothing to do with tritone-ish-ness.

In microtonal music, it’s even more important to maintain these two separate levels. This is because such music inevitably calls for singing some unfamiliar horizontal intervals—and the singers’ natural instinct will be to land on a verticality that’s equally “unfamiliar,” i.e. dissonant, and this instinct is likely to be wrong. We will see the unfortunate fruits of this approach in a 1962 performance of the Kyrie from Julián Carrillo’s Misa a San Juan XXIII later in this article.

A note on targets

We now move on to the “target-based” part of the approach. In microtonal music, for example, if your choir needs to sing an unfamiliar chain of small intervals—then give them a rock-solid idea of the interval they are encompassing, and the intervening tones can almost unconsciously fall into place. They can be refined later, in a second step.

To reinforce how easy this can sound when modeled, here’s Jacob Collier blithely doing that sort of thing to a minor third:

(His full discussion of this really starts at 10:12, but come on, go watch the whole thing. The guy is so hip, it’s surreal.)

The target-based approach is not limited to melodically filling in familiar intervals. On a broader scale, it’s about providing a series of conceptual anchors throughout a piece—where singers can regain their footing, if they happen to lose it on the way. This can be target melodic intervals as above; but also target harmonic intervals to tune to (e.g. for entrances), or target chords.

The novelty here is that the targets need not be musically prominent within the piece—they can occur on weak beats, or at de-emphasized places within a phrase, etc. They only need to be already familiar to the singers, who can then use them to recalibrate. For example, an exotic cadential sonority might be the musical goal, but does not need to be the conceptual target—that role could be an adjacent, less important, more familiar sonority. Here’s an instance of that in a piece I wrote (which will be hosted on NMBx after its premiere on January 24):

An excerpt from the score of Robert Lopez-Hanshaw’s microtonal choral composition vokas animo.

This goes from a Just A major chord to a 7:9:11 chord in the harmonic series of B twelfth-flat (in my preferred 72edo notation). The latter is surprisingly easy to nail, because you’re leaving a very familiar place, each part moving basically by quarter step—a distance which can easily be practiced. The common tone also helps.
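As a quick arithmetic check (a sketch of my own, not part of the score), the intervals inside a 7:9:11 sonority can be measured in cents and compared against the 72edo grid, whose single step is 1200/72 ≈ 16.7 cents:

```python
import math

def cents(ratio: float) -> float:
    """Size of a frequency ratio in cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

STEP_72EDO = 1200 / 72  # one 72edo step, about 16.67 cents

# Intervals inside a 7:9:11 chord, measured between chord members.
for num, den in [(9, 7), (11, 7), (11, 9)]:
    c = cents(num / den)
    steps = round(c / STEP_72EDO)
    print(f"{num}/{den} = {c:6.1f} cents ≈ {steps} steps of 72edo "
          f"(error {c - steps * STEP_72EDO:+.1f} cents)")
```

The 9/7 third comes out near 435 cents and the 11/9 neutral third near 347 cents, each within a few cents of a 72edo position, which is part of why the notation serves the chord so well.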

Building the scaffold

The other thing that should guide your microtonal teaching is the educational idea of scaffolding, or the “zone of proximal development.” All this means is that every new concept should build on concepts the singers already hold, so that those adjacent concepts can give rise to insight at the individual level.

For example: You don’t learn to read by someone telling you how to read. There is no way to do it except making the connection on your own between individual letter-sounds and the way they combine into words. Your grade-school teacher just provided the conditions for you to make that leap, by making you memorize the letter-sounds, then confronting you with easily decoded combinations (and then, not-so-easy ones).

The principle here is important. Despite the appeal of a “brute force” method, such as learning a piece by rote from a synthesized recording (newly easy to produce, due to technology!), that tactic will not succeed for most people—because they haven’t internalized the building blocks to make the new intervals “stick.” And many might be unwilling to make that huge technical leap in the first place; it’s not why they’re in choir.

So, we need to look at how we can provide the scaffolding.

We’ve already covered two important things, which happen in normal choral singing, and can be applied to microtonal singing. What now follows is a list of additional concepts, each building on the previous, and some resources to master them. There are two pathways, a Just Intonation path and an equal-division path.

Just Intonation Path: Expressive Intonation

Ironically, this path begins with the opposite of Just Intonation: “expressive intonation.”

None other than Ezra Sims, the great exponent of 72edo, was set upon the microtonal path by his undergraduate choral conductor, Hugh Thomas. Thomas insisted on his ensembles singing very high leading tones when resolving to tonics, and very low 4ths when resolving to 3rds, among other things. Under such influence, says Sims, “you are liable to find it hard ever again to believe (no matter how much the keyboard instruments may try to convince you it is so) that there is, for example, one thing which is G-sharp, one frequency that defines it for ever and ever, Amen.”

Expressive intonation, at its crudest, is very intuitive. (Exaggerate the tendency of the tendency tones!) So, if it can achieve the goal of knocking singers out of a fixed-pitch way of thinking, then it smooths the way forward considerably.

Actual Just Intonation

Fahad Siadat has an ongoing series of articles on the website of his publishing company, which introduce the subject of Just Intonation for choirs. Some fuller resources currently available include Harmonic Experience by W. A. Mathieu, which I mentioned in the last article; and The Just Intonation Primer by David B. Doty, which is rather more direct.

A practical choir director might choose only a few intervals to work on. Major thirds and harmonic 7ths are useful to start with, because they are easy to demonstrate. Bring in a cellist to play natural harmonics and compare them with the piano! Bring in a high-level barbershop quartet to “ring” some chords! At first, you’re just developing the idea that there are several available “flavors” for a given interval, each with a different function.
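To put rough numbers on those “flavors” (my own illustrative sketch; the interval choices are mine, not from any particular method book), here is how a few just intervals compare in cents with their 12-tone equal-tempered neighbors:

```python
import math

def cents(ratio: float) -> float:
    """Size of a frequency ratio in cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

# Some just "flavors" compared with 12edo (second value is the 12edo size).
flavors = {
    "just major third 5/4": (5 / 4, 400),   # noticeably lower than 12edo
    "harmonic 7th 7/4":     (7 / 4, 1000),  # the very low "barbershop" seventh
    "minor 7th 9/5":        (9 / 5, 1000),  # a larger just flavor of the same 7th
}
for name, (ratio, et) in flavors.items():
    c = cents(ratio)
    print(f"{name}: {c:7.1f} cents ({c - et:+.1f} vs 12edo)")
```

The harmonic seventh in particular sits about 31 cents below the piano’s minor seventh, which is why a keyboard demonstration alone can mislead, and a cellist’s natural harmonic or a barbershop quartet makes the point more vividly.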

Use what’s relevant to the piece at hand. If your choir adds only the harmonic 7th to their vocabulary, then that’s enough to start working on Ben Johnston’s I’m Goin’ Away.

Quantifying Comma Shifts

Ross Duffin is well-known for his book on meantone and well temperaments, How Equal Temperament Ruined Harmony (And Why You Should Care). But he also wrote a wonderful defense of, and method for, Just Intonation practice, which hinges on locating and using the syntonic comma. This is a very helpful way of thinking systematically about tuning 3rds, 6ths, and 7ths compared to 4ths and 5ths, and it is freely available online. He even includes exercises for practicing typically problematic intonation situations.
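For the curious, the comma itself is easy to derive with exact arithmetic (a sketch of mine, assuming the standard textbook definition): it is the gap between four pure fifths and two octaves plus a pure major third, and it also shows up as the pitch drift in the classic “comma pump” progression sung entirely in pure intervals.

```python
import math
from fractions import Fraction

def cents(r) -> float:
    return 1200 * math.log2(float(r))

# Syntonic comma: four pure fifths vs. two octaves plus a pure major third.
comma = Fraction(3, 2) ** 4 / (Fraction(2, 1) ** 2 * Fraction(5, 4))
print(comma, f"= {cents(comma):.2f} cents")  # 81/80 = 21.51 cents

# Classic "comma pump": tune C -> A -> D -> G -> C by pure intervals
# and the final C lands flat by exactly one comma.
pitch = Fraction(1)
for step in [Fraction(5, 6),   # down a just minor third to A
             Fraction(4, 3),   # up a pure fourth to D
             Fraction(2, 3),   # down a pure fifth to G
             Fraction(4, 3)]:  # up a pure fourth back to "C"
    pitch *= step
print(pitch, f"-> drift of {cents(pitch):.2f} cents")  # 80/81 -> -21.51 cents
```

Twenty-one cents is an entirely audible amount of flatting over one short phrase, which is why a systematic way of tracking and absorbing the comma, as in Duffin’s method, is so useful.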

The Hilliard Ensemble and Nordic Voices regularly incorporate this basic system (different in the particulars) into their practice. If your choir sings Renaissance counterpoint one semester, looking at intonation through this lens, then the following semester could extend the microtonality further:

Extended Just Intonation

Now we get into the weird stuff. It is possible, with much repetition and a rock-solid reference, to memorize and reproduce intervals of the higher overtones of the harmonic series.

One possible reference is overtone singing (Fahad Siadat, personal communication), which—on a low fundamental—can reliably produce harmonics at least up to the 14th, and perhaps further. A retuned digital keyboard is another potential resource.

But there is a remarkable set of exercises available, too: Andrew Heathwaite devised a system for singing through every possible interval that occurs between members of a given group of overtone-based pitches, charmingly called Singtervals. Others have elaborated on this. It is surprisingly logical and intuitive, using a slight alteration of solfège syllables.
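To get a feel for the variety inside one such matrix, here is a quick tabulation (my own illustration, not taken from Heathwaite's materials) of every interval between the harmonics in a single octave of the series:

```python
import math
from itertools import combinations

def cents(ratio):
    return 1200 * math.log2(ratio)

# Harmonics 4 through 8 span one octave of the harmonic series.
harmonics = [4, 5, 6, 7, 8]

# Every pairwise interval within that octave, in cents.
for lower, upper in combinations(harmonics, 2):
    print(f"{upper}:{lower}  {cents(upper / lower):6.1f} cents")
```

Five pitches already yield ten intervals, including the 7:6 "subminor third" (about 267 cents) and the 8:7 "supermajor second" (about 231 cents), neither of which has a 12-tone counterpart.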

If a singer were to make listening to, understanding, and singing this type of matrix a part of their daily practice, they would soon be able to approach a strictly overtonal (or undertonal) piece like Henk Badings’s Contrasten without much suffering.

Quarter-Tone Path: In-Between Tones

We can use people’s ability to sing in between the pitches of a small and familiar interval to develop a true quarter-tone framework.

Starting again at the beginning of a different path, we can use people’s ability to sing equal-ish tones in between the pitches of a small and familiar interval, to begin to develop a true quarter-tone framework. At first, you could simply add an exercise to normal warmups: Sing F – Gb, then F – F quarter-sharp – Gb, and then the same in the opposite direction. The outer tones are, of course, easily checked on the piano.

The Tucson Symphony Chorus warming up prior to rehearsing my piece vokas animo.

Full 24-Tone Scale

Where it gets interesting is extrapolating this simple technique to all intervening positions in the chromatic scale. Robert Reinhart, who teaches music theory and aural skills at Northwestern University, assigned intermediate vowels to the quarter-tonal pitches between solfège notes, such as ra-reh-re-rih-ri for all varieties of the second scale degree re. He then designed—and used in the classroom—progressive exercises to train the ear on the new intervals. In many cases, these involve first singing known intervals; then filling in the gaps with quarter tones; and then ultimately singing only the altered pitches, while audiating the more familiar surrounding pitches.

This is just an extension of sight-singing pedagogy in movable-do systems! For example, to teach the pattern do-fa-la (difficult for beginners), one can repeatedly sing a major scale, and gradually remove the intervening tones re, mi and sol; first audiating them, and then making the cognitive leap to simply singing do-fa-la without any crutch.

Reinhart has presented on this subject and is currently working on a systematic collection of quarter-tone solfège exercises, graded by difficulty.

You, too, could use this basic framework to divide, say, semitones into groups of three sixth-tones—or whole tones into fifth-tones, if you’re singing Renaissance enharmonic music. The specific vowels in your extended solfège don’t matter that much, as long as they’re consistent.
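For reference, the step sizes involved are easy to tabulate. A sketch, where the "fifth-tone" figure assumes an equal division of the 200-cent whole tone:

```python
# Step sizes in cents for the divisions mentioned above.
SEMITONE = 100.0
WHOLE_TONE = 200.0

quarter_tone = SEMITONE / 2   # 50.0 cents -> 24 steps per octave
sixth_tone = SEMITONE / 3     # ~33.3 cents -> 36 steps per octave
fifth_tone = WHOLE_TONE / 5   # 40.0 cents -> 30 steps per octave

for name, size in [("quarter-tone", quarter_tone),
                   ("sixth-tone", sixth_tone),
                   ("fifth-tone", fifth_tone)]:
    print(f"{name}: {size:.1f} cents, {1200 / size:.0f} per octave")
```

(Vicentino's enharmonic system is usually modeled today as 31edo, whose step is about 38.7 cents, so the even 40-cent fifth-tone is an approximation.)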

Going Deeper: 72-Tone Scale

Julia Werntz is the current bearer of the 72edo aural skills tradition at the New England Conservatory, succeeding Joe Maneri. She teaches students to hear, perform, and compose with twelfth tones—that is, quarter-tones each further divided into thirds. Her class begins by developing a quarter-tone framework, and elaborates from there. The course textbook, Steps to the Sea, is both highly accessible (with plenty of audio examples) and readily available.

By the time we’re getting into twelfth tones, the Just Intonation and equal-division paths begin to merge.

By the time we’re getting into twelfth tones, the Just Intonation and equal-division paths begin to merge. For singers specifically, the simpler Just Intonation intervals correspond so precisely with pitches in the gamut of 72 tones per octave, that the difference—a maximum of about 5 cents, and usually under 3—is literally impossible to produce with the voice.

In fact, a recent study by Matthias Mauch et al. shows that, even for experienced singers, the Just Noticeable Difference and the median pitch production error on a given note both hover around 18 to 19 cents—a bit over an entire 12th-tone! The study dealt with solo melodic singing, and intonation accuracy can be somewhat higher in harmonic singing (especially in barbershop); but not by as much as you might think.

(Different sources give different amounts for the Just Noticeable Difference in various contexts, and 5-8 cents is the usual value cited. But in the case of sung pitches, a little more chaos seems to reign.)
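The claim about 72edo's closeness to Just Intonation is easy to verify numerically. Here is a sketch (my own, not from Werntz's book) comparing some simple JI intervals with their nearest 72edo degree:

```python
import math

STEP = 1200 / 72   # one twelfth-tone, about 16.67 cents

def cents(ratio):
    return 1200 * math.log2(ratio)

for name, ratio in [("major 3rd 5/4", 5 / 4),
                    ("perfect 5th 3/2", 3 / 2),
                    ("subminor 3rd 7/6", 7 / 6),
                    ("harmonic 7th 7/4", 7 / 4),
                    ("11th harmonic 11/8", 11 / 8)]:
    exact = cents(ratio)
    # Distance from the nearest multiple of a twelfth-tone.
    error = abs(exact - round(exact / STEP) * STEP)
    print(f"{name}: {exact:6.1f} cents, 72edo error {error:.1f}")
```

For these intervals the worst case is the 5/4 major third, at about 3 cents of error; well inside the production accuracy of any human voice.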

Thankfully, in case you were wondering, microtones really can be learned, and ear-training in 72edo really does have the effect of increasing pitch discrimination and production ability. It tames some of the latent chaos of music-making.

The End Result

If you’ve gone through all of this with your choir, then you’re obsessive, and they’re all saints.

If you’ve gone through all of this with your choir, then you’re obsessive, and they’re all saints. What you should really do is pick and choose among these possibilities, based on what’s going on in the piece itself. This is what I have done. But where I might not yet have used a particular technique myself, it has been field-tested by others. They all really do what they claim.

Potential Bad Results

As I promised earlier, here’s the first movement of Carrillo’s Misa:

This is not, shall we say, a touchstone performance. The singers may have assumed that the goal was an “other-worldly” sound, and presumably judged the result successful enough to release the recording. But unfortunately, they performed the whole score inaccurately, including what should have been the “this-worldly” parts.

They are pitchy from the very beginning. By 0:58, the tenor is an entire semitone flat compared to the others, leading to a sounding Ab major in first inversion, instead of the written C augmented chord. (When it happens again at 1:07, you can hear him drift upward to try to correct it.) In the music that’s in frame starting at 1:32, the poor Bass 1 sings written perfect 5ths as tritones, because he can’t get the lower note to go down far enough. And so on. If one heard this as an exemplar of microtonal choral singing, one might be forgiven for souring on the idea.

But despite the deficiencies of this particular performance, the piece was actually written in a way that could be easy to grasp—using the tactics I’ve outlined above. It could even work as a first venture into microtonality for a choir!

Here’s how I would approach it.

  • First, I’d add quarter tones to the warmups at the beginning of rehearsals—simply splitting a semitone. This happens constantly in Misa, so the choir would get a lot of mileage from just that exercise.
  • Then, also in warmups, we would build augmented chords and other whole-tone sonorities, like [046] and [024] in different inversions. These are Carrillo’s building blocks for the piece. They are somewhat uncommon as structural elements in conventional choral repertoire, so they would require reinforcement in order to be useful as targets.
  • Then, going to the piece itself, we would sing segments without the quarter tones that intervene, and make sure the choir has memorized the target whole-tone sonorities in their larger harmonic framework.
  • Finally, it’s time to insert the quarter tones. We would do this one part at a time, at first, to cement melodic awareness.
  • Now, the surprise! Carrillo helpfully puts all quarter-tonal changes in vertical alignment, other than some suspensions. The piece alternates between one 12-tone “world” and the complementary “world” in 24edo. So, the sonorities built by the altered pitches are generally familiar. The sonority at 1:11, on the second beat of the last measure in frame, is in fact an A quarter-flat major chord in first inversion—so it should sound like a major chord in first inversion! Everybody knows that sound.

All of this is entirely lost in the wailing, loosey-goosey intonation of this performance. I believe an accurate interpretation, on the other hand, would reveal a piece of a very different character from what’s presented in the recording: perhaps startling in just how accessible it really is.

Conclusion: Practicalities

Here are a few miscellaneous suggestions I can give about teaching microtones to choirs.

Use warmups to reinforce new musical concepts, if that wasn’t clear already. Why waste time singing major scales or arpeggios the whole warmup, when you could be practicing quarter tones by repetition, or building harmonic-series chords? This reduces the teaching time on the microtonal piece itself.

Absolutely do not play a tone cluster in place of an intervening tone, if you are modeling a microtonal melody on a standard piano. This does nothing for imagining the pitch (do we “hear” a D, when C-E is played? Hell no! So why would we hear a D quarter-sharp when D-Eb is played?), and it models a dissonance, which the choir will obligingly give you. Better to skip over the altered pitch—or better yet:

Model with the voice whenever possible. This is not only easier to follow than a keyboard, but it also demonstrates that the passage is, in fact, performable.

Retune the keyboard, if it’s digital. The task is now basically trivial, with available technology; but it may not be so for you personally. If that’s the case, and you’re a person who would read this, then you assuredly have friends who are big nerds like you, except with computers. You can ask them a favor or hire them to do it for you. BitKlavier is free software with an easy learning curve; if they can program in Max/MSP, then they should be able to use Pure Data without much fuss, which is also free; or you could shell out for PianoTeq Standard, which has professional-quality sound and very good microtonal tuning controls. There are many other options, but these are a start.
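Whatever tool you choose, the arithmetic behind a quarter-tone retuning is modest. One common scheme is two ordinary keyboards (or two MIDI channels) tuned a quarter tone apart; here is a sketch of the frequency math only, not tied to any particular program's tuning-file format:

```python
A4 = 440.0

def freq_24edo(steps_from_a4):
    """Frequency a given number of quarter-tone (24edo) steps from A4."""
    return A4 * 2 ** (steps_from_a4 / 24)

# Channel 1 keeps the normal 12edo pitches (even steps); channel 2
# is raised by one quarter tone (odd steps).
for semitones in range(3):                 # A4, A#4, B4
    normal = freq_24edo(2 * semitones)
    raised = freq_24edo(2 * semitones + 1)
    print(f"{normal:7.2f} Hz   {raised:7.2f} Hz")
```

Each quarter-tone step multiplies the frequency by the 24th root of 2, so A4 quarter-sharp lands near 452.9 Hz.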

Working closely with your accompanist is critical, especially if any keys are remapped drastically! But again, if you’re a person who’s reading this, your accompanist is probably game for it.

Do all the normal choral stuff first – speak the piece in rhythm, aim for precise cutoffs, use expressive phrasing, interpret the lyrics – so that they realize how much they already know how to do.

Proper breath support is absolutely indispensable. Unfamiliarity causes uncertainty, and uncertainty causes improper support, and improper support creates sagging pitch and bad timbre, which makes the project infinitely harder. So, never lose sight of that bedrock of a well-supported sound and come back to it often.

You have to convey joy in the music.

Most importantly, you have to convey joy in the music. And isn’t that what it’s always about?

The Journey In (Performing Microtonal Choral Music, Part 1)

A 19-tone keyboard with 7 white and 12 black keys per octave. (Photo by Dan Pelleg)

Why would anyone expect a choir to be able to sing microtones? All the literature seems to be on their limitations. Everyone knows that choirs are devastatingly conservative, anyway. They, and their audiences, would surely revolt at the slightest hint of strangeness. There are some who celebrate this paradigm, saying that the limitations on the massed human voice have constrained choral music to a more traditional style in the face of modernity, and that it’s a good thing they have!

This obviously rules out microtonal music of any sort. That stuff is pretty weird.

Why would anyone expect a choir to be able to sing microtones?

But—of course—there are cracks in this theory. Looking beyond the Western choral paradigm, the world overflows with examples of formidable vocal control. There are Indian, Turkish, and Arabic singers for whom very small intervals are a fundamental part of the music—intervals without which the very identity of a given melody would be compromised. The Egyptian singer Umm Kulthum, in particular, was not just an apt practitioner of these microtonal gradations in interval quality: she was the authority on proper intonation (see Farraj and Shumays, Inside Arabic Music, 2019).

And even within the Western music scene, there is the idea of Just Intonation: the pure arithmetical tuning of chords, as distinct from our modern 12-tone, logarithmic tuning. This has slowly worked its way into general choral consciousness over the last century or two, having been abandoned only relatively recently, post-Renaissance.

A Renaissance-era drawing of a monochord showing the placement of frets to correspond to various Just Intonation intervals.

But, as many who have professionally recorded themselves singing can attest, the limitations of human pitch control are something to contend with. These limits are humiliatingly displayed by taking a look at what you thought was a pretty decent take, through pitch analysis software like Melodyne. Was I really that many cents off?

Composers may sigh and shake their head, thinking, “Sure, microtonal singing is possible. But unless I’m commissioned by Exaudi or Neue Vocalsolisten Stuttgart or Roomful of Teeth, it’s not going to happen if I call for it!”

Spoiler alert: it can happen. I am a choral composer and conductor, and I am also a microtonalist. I’ve recently had some successes with microtonal pedagogy for choirs, which will be the specific topic of the sequel to this article. After that, my piece vokas animo, for choir and orchestra in 72 tones per octave, will have a performance video posted to this site.

How could such a thing occur? My case might be especially unlikely. Until relatively recently, I had no exposure to ensembles like those listed above—small, professional vocal ensembles who routinely play around with extremely tiny intervals. I grew up in Tucson, Arizona, and never left. It’s a choir town, fed by the excellent and internationally recognized choral conducting graduate program at the University of Arizona; but it’s not exactly a hotbed of new music.

So this first article is about how I found microtonality, or how it found me, through collisions with writers and aspects of culture that are not, by and large, much associated with new music. It’s about how microtonal thinking influenced the music I made, and how that process came to inform the way that I now teach it to choirs. To normal singers.

Because, if I could learn it, why can’t they?

Beginnings: Rejecting Tonality

I came to music later than many of my colleagues. Before age 11, I didn’t even listen to it much. But by 14, I had picked up the guitar and learned some rock and flamenco songs from guitar tabs. At 16, I learned how to actually read music, and then worked through a secondhand harmony textbook while my friend was taking a music theory class. So I remember my struggles with the basics very clearly, and the triumphs as well. I can still taste the deliciousness of finding out about augmented 6th chords—like forbidden fruit! More fundamentally, I remember the visceral feeling when, in about 7th grade, my choir teacher first demonstrated a major chord in contrast with a minor chord on the piano. The difference was so powerful, yet so subtle! I couldn’t figure out what was changing.

Learning how these things were put together was electrifying. So on that background—with music still in its honeymoon phase, still bright and new—came my first introduction to microtones.

When I was about 16 or 17, a hyperlink on some forgotten website took me to www.anti-theory.com, a manifesto written by Q. Reed Ghazala, on something called “circuit-bending.” He described how he would painstakingly, semi-randomly alter the guts of electronic toys so they produced new and powerful noises. I ended up on a page describing an object called the “Deep Photon Bassoon,” over which a player would wave their hand and produce theremin-like glissandos. But also, using the other hand, a player could do something which sounded insane to me: cause the pitches to resolve into steps—but in “arbitrary scale divisions (how many pitches might occur between octaves).”

This was the new coolest thing I had ever heard.

A 1989 Kawasaki toy guitar used in a circuit bending project – Image by Greg Francke – used under Creative Commons Attribution license

There was no YouTube quite yet. So, aside from the occasionally terrifying sound clips on Ghazala’s website, I only came across one other example of this during that period of my life. This was the guitar solo to The Doors’ “When The Music’s Over.” (The part in question starts a bit after 2’50”)

It sounded absolutely unmoored from everything around it, as if the guitarist were using a slide without regard for fret positions. I thought, “That’s it, that’s what arbitrary scale divisions sound like!”

Fifteen years later, it turns out that he didn’t use a slide. And it’s not really in “arbitrary scale divisions”—it just has plenty of microtonal bends. Still, the heavily chromatic and semi-aleatoric non-melodies, the oft-warped pitch, the ametrical rhythms, and the bizarre, alien timbre had combined to create a passage which emphatically divorces itself from the music that surrounds it.

Coming back to the specific case of microtonal choral music, we can directly compare the use of microtones in this solo—and the resulting polyphonic texture—to the first movement of Giacinto Scelsi’s 1958 composition Tre Canti Sacri, especially during the second minute of the piece.

This is a piece in which gestures are king, superseding melody; and in which intervals are arguably used for their timbre, ranging in dissonance from pure unisons and 5ths all the way to fast-beating quarter tones. The atmosphere is tense and alien—a favorite atmosphere in 20th-century art music. And, though Scelsi’s piece is tightly focused, in contrast to Robby Krieger’s freewheeling solo(s), microtonality was used as a tool to achieve the same effect in each: to exit the tonal hierarchy, to momentarily free the listener from those associations.

This sort of thing might make a choir director nervous.

Tonal hierarchies are half of what singers use to produce pitches at all … But it needn’t be a non-starter.

After all, tonal hierarchies are half of what singers use to produce pitches at all, not having any keys on our throats. But it needn’t be a non-starter. First: there are rational ways to approach a piece like this, using what is familiar in the music as a support structure. And second: music which uses microtonality specifically to reject familiar structures rarely requires precise intonation to succeed.

In the Scelsi, much of the time, I would say that an error of even 30 cents or so in either direction—unacceptable in tonal contexts—would still convey the necessary information. (For a fascinating case study of this sort of thing in instrumental microtonal music, see Knipper and Kreutz, Exploring Microtonal Performance of ‘…Plainte…’ by Klaus Huber, 2013.)

Even in this excellent recording, we do hear such variation. For example: the quarter-tonal dyad between Tenor 1’s B ¼-sharp and Contralto 2’s C natural in measure 17 (about 0:38) is virtually a unison, whereas the one in measure 45 (1:39) between Tenor 1’s E natural and Tenor 2’s D ¾-sharp is much wider, even approaching a semitone. Nevertheless, I am confident that few would accuse Neue Vocalsolisten of doing injury to Scelsi’s piece!

The Other Side of The Coin: Expanding Tonality

We return to a time before I had heard of Scelsi. I was just beginning to really study composition, and fortuitously stumbled across a book while housesitting for a family friend. This was Ernst Toch’s The Shaping Forces in Music. Published in 1948, the book is an engrossing (and largely ignored) attempt to find commonality of practice between tonality and atonality. But one section, only a few pages long, stood out. In this, Toch advocated microtonality as potentially compatible with all musical approaches. He even discussed how it might have provided a neat solution to a “problem” that Beethoven, of all people, ran into (at 6’03”):

In Toch’s words: “Around the advent of [bar 8]… [the] smooth rhythmical flow of the bass is balked for three beats, there being no more moving space left for the descending voice. …[T]he problem could be solved by the use of quarter-tones as shown.” Here is one of his potential solutions (he also changed the inner voices for clarity’s sake):

A score excerpt showing Ernst Toch's quartertonal reworking of a Beethoven bass line.

Ernst Toch’s quartertonal reworking of a Beethoven bass line.

And he suggested singing it as a practical means to experiencing it. “It is recommended that the quarter-tone passage of the bass be sung, while playing the rest of the voices on the piano. One will be surprised at the facility of the task, its novelty being sufficiently eased by its tangible logic.” For me, this was one of those quotes that stuck. I took Toch’s advice, later, writing unobtrusive microtonal basslines and vocal harmonies, whenever a particular tonal problem spot seemed to require a microtonal solution.

Shortly after encountering Toch, I went to a choral conference, and someone mentioned Just Intonation for choirs—as a tuning strategy for conventional music. It seemed arcane and I forgot all the cents offset values, though it was interesting.

I also took a strange class on building instruments out of scrap metal, for which the textbook was Musical Instrument Design by Bart Hopkin. (It’s an excellent book on “outsider” approaches.) That book discusses Just Intonation a bit, and most importantly for me, it has a tuning chart in the back. This compares 12-tone equal temperament with several other systems, both JI and different equal divisions of the octave. Harry Partch’s collection of 43 tones was included. I was amazed at the sheer variety of intervals that apparently made some kind of harmonic sense. (I didn’t actually hear his music until years later.)

With all this kicking around in my head, I started playing with a rock band, and we recorded an album. One particular take on guitar had an incredible timbre, but it also had an error, so we had to redo it. But I couldn’t duplicate the timbre! After extended frustration and tinkering, we discovered that the guitar on the first take had been slightly knocked out of tune; so the major third had been flatter than usual. When I had routinely re-tuned it for the overdub, that property was erased. So, remembering the Hopkin book, I tried tuning the offending string to the 5th harmonic; and lo, there was that timbre again! A weirdly resonant and supported sound for a major triad on overdriven guitar. I thought, “So that’s what Just Intonation does.”

Later, dense vocal harmony became part of the aforementioned rock band’s schtick, but we struggled to stay in tune in live situations. So, with the guitar experience in mind, I looked for some kind of reference to use to help us out. That turned out to be W. A. Mathieu’s Harmonic Experience, a manual for understanding Just Intonation in practice (and applying it to jazz harmony). The band didn’t end up using any of the exercises—more’s the pity!—but the book showed me how it could be necessary to shift sustained tones by tiny intervals, “commas,” in order to maintain pure tuning as the underlying harmony changed. But, more importantly for Mathieu, it discussed the bodily feeling of pure tuning. That’s the way to learn these new/old intervals.

And old they are. Nicola Vicentino wrote pieces which encapsulate parts of both Mathieu’s and Toch’s thinking—in the year 1555. Here’s one of them performed by Exaudi:

This has both Just Intonation-esque aspects—very narrow major 3rds and very wide minor 3rds—and quartertonal-esque aspects, which resemble Toch’s insertion of intervening microtones in an otherwise chromatic line. In the middle of what we would now call a V-I progression in G, Vicentino places an “extra” leading tone between the F# and the G. Unlike Toch, he tunes a whole chord to this intervening tone. This happens at 0:18—see the score excerpt below (lyrics simplified).

Score excerpt of Nicola Vicentino's "Dolce mio ben" showing microtonal intervals.

Score excerpt of Nicola Vicentino’s “Dolce mio ben”

It’s not truly quartertonal, nor truly Just Intonation. It’s really in 31-tone equal temperament, whose modern standard notation slightly reinterprets all the chromatic and quartertonal accidentals; but it should still be clear what’s going on. Vicentino loves this type of figure, by the way, and it pops up all the time in his surviving microtonal music.

I had been exposed to two completely different philosophies of microtonality: Either escape The System, or help it to become somehow more itself.

So, well before I took the plunge and resolved to compose in microtones—and, in so doing, got up to speed with the voluminous literature and repertoire that’s actually out there—I had been exposed to two completely different philosophies of microtonality. Either escape The System, or help it to become somehow more itself. And on the surface, those categories seem to have held up pretty well, in terms of guidance for interpreting a given passage.

One thing, though, that I wish I had been able to read as a teenager is some sort of comprehensive overview of all the ways people have used microtonality in Western music. Until very recently, nothing of the kind seemed to be around—everything had its relatively narrow agenda, and was too technical for my teenage self anyway. As we’ve seen, I was left to gradually pick up an incomplete picture from here and there. But last year, Kyle Gann published The Arithmetic of Listening, and now none of us need suffer that fate.

That book is possibly the most important microtonal resource that exists today. This is because it is, indeed, a survey of many of the ways microtonality has been used; but it’s also an entire paradigm for how microtonality can be taught. Tuning concepts like Pythagorean, meantone, 12-equal, and Barbershop intonation are explored through the lens of gradually adding prime limits to the harmonic vocabulary. After the 13-limit is passed (with discussions of Ben Johnston, Toby Twining, and Gann’s own Hyperchromatica), the conversation branches off into equal divisions of the octave, covering not just what they are, but what they do. This includes the single most helpful introduction to Regular Temperament Theory that I’ve ever encountered, which will be a life raft for anyone who has attempted to swim in the turbid internet waters that cover this subject.

Kyle Gann’s The Arithmetic of Listening is possibly the most important microtonal resource that exists today.

There are things with which I disagree: his thumbnail analysis of Ezra Sims’ String Quartet No. 5, for example, is done exclusively in terms of edo-steps—despite also quoting Sims multiple times to the effect that his use of 72tet is meant to be harmonic (i.e. ratio-based). And the brief section on non-Western tuning systems is so thoroughly salted with disclaimers (such as “not to be taken as fairly representative of how those cultures understand their own music”) that it comes across as, well, a bit salty. And anyone looking for strictly atonal resources in this book will leave disappointed—the book does not much discuss organized ways of using microtonal structures without reference to a global or local tonic (i.e. 1/1). Still, despite these and other quibbles, The Arithmetic of Listening is the first book I would recommend to anyone who wants a serious introduction to microtonality. I wish the world had had it sooner.

The Facility of The Task

But back to the narrative at hand: the ways in which I first experienced microtonal techniques are very approachable for beginners, and I consider this a fantastic stroke of luck.

Why can you sing in-between pitches? Because they’re in between.

Why can you sing in-between pitches? Because they’re in between. You are leaving somewhere and arriving somewhere, and both of those places are fixed and familiar. When I recorded a Toch-ian “double leading tone” in a background vocal for a country song, it was totally natural in context, just a slight extension of a normal voice-leading thing that happens in pop styles. It hardly took any practice at all to get that take. Anyone can do it, and I’ve taught people to do it.

Why can you sing JI intervals? Because, as Mathieu said, they are felt as much as heard. For all that Just Intonation is a theoretical construct just like everything else, it remains true that it provides easy perceptual landmarks to hit. When you’re singing in tune, it locks—just ask a barbershop ensemble. They know how to sing a perfect 7/4: not because of the ratio, or because it’s 31 cents flatter than an equal-tempered minor 7th, or whatever. Plenty of those guys can’t even read music. They know because the chord rings, in a way that stands out from the results of other nearby tunings. So, why can’t the rest of us learn such new consonances? Some are a bit more challenging than 7/4, but many are not that much more challenging. And again, I’ve had some success teaching people who aren’t by any stretch avant-garde.
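For the record, that 31-cent figure checks out. A quick sketch:

```python
import math

harmonic_7th = 1200 * math.log2(7 / 4)   # the "ringing" barbershop 7th
et_minor_7th = 1000                      # ten 12edo semitones

print(round(harmonic_7th, 1))                # 968.8 cents
print(round(et_minor_7th - harmonic_7th))    # 31 cents flatter
```

The point stands, though: barbershop singers find the ring by ear, and the number merely describes what their ears already do.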

Outside of these applications (which, by the way, are already infiltrating pop music via such acts as Jacob Collier and They Might Be Giants), it is well for us to recall that there is a huge range of learnable intervals in the world—far more than the simpler Just Intonation ratios. Many cultures use intervals which correspond to no particular harmonic “landmark” at all. So, a precise tuning standard clearly needn’t be dependent on acoustic phenomena per se.

It’s harder to learn such inharmonic intervals—whether as part of a traditional but new-to-you system, or a novel one—but it is possible with support. I’ve helped people do this, too. It helps morale to remember that all of our familiar 12-tone intervals, with the exception of the octave, are in fact also inharmonic. So, the ones you grew up with are just as “unnatural” as the ones you’re trying to learn!

Combining these things into a unified approach is surprisingly intuitive. They can mesh well with standard choral techniques, if one is a little creative with the use of technology, the role of the piano, the role of the director’s voice. Much of it can be achieved with the same basic tricks that people use to teach diatonic and chromatic intervals to children. Using all this, and aided by strategies from Fahad Siadat, Ross Duffin, Robert Reinhart, and others, I’ve come up with a toolbox to teach a choir just about any microtonal piece—eventually.

I’ve come up with a toolbox to teach a choir just about any microtonal piece—eventually.

Teaching any challenging piece takes time. And there are some microtonal pieces which are a lot more formidable than others; but that’s true for any genre, microtonal or not. The point is, you can use these tools as an entry point to any piece, rather than looking at something like Ben Johnston’s Sonnets of Desolation and sinking into, well, desolation.

I hope more choir directors might see this and be inspired to invest the time in learning some microtonal repertoire with their ensemble. The rewards can be great: not just from an artistic standpoint, but also for the way microtonal awareness hones intonation skills for standard repertoire.

Tune in next time for a discussion of the actual rehearsal techniques!

From Folk Song to the Outer Limits of Harmony—Remembering Ben Johnston (1926-2019)

A Caucasian man with his head tilted, glasses and a white beard

I first saw Ben Johnston when I was a student at Oberlin, maybe 1976. The composers at the big Midwest music schools were in continual rotation as each other’s guest composers, which in itself was an amazing education. Ben lectured and played a recording of his Fourth String Quartet, based on the song “Amazing Grace.” He was a Quaker-bearded, good-humored, gruff, not very talkative fellow, and there was a peculiar contradiction, I think we all sensed, in this composer who had invented his own pitch notation and 22-pitch scale and written a score nearly black with ink using all these crazy polyrhythms of 35 against 36 and 7 against 8, 9, and 10 – all at the service of an old folk song anyone’s grandmother could sing. Conservative versus avant-garde was how we divided the music world up at that time. Where the hell did this fit?

Ben Johnston sitting and writing on a piece of music paper.

Ben Johnston in 1976.

Forty-odd years later, several of them spent working with him, I still think there’s an essence to Ben that in the current musical climate can only be seen as a paradox: he was a down-to-earth, populist visionary. I truly think that he thought there were no limits to what pitch and rhythm relationships musicians could learn to play, as long as the approach to the difficulties was gradual and intelligible. Famously, the third movement of his Seventh String Quartet contains more than 1200 pitches to the octave. It is structured around a 176-note microtonal scale that glacially traverses one octave over 177 measures, and, written in 1984, it remained on the page until the Kepler Quartet recorded it a couple of years ago. But it is carefully written so that if the players can get their perfect fourths and seventh harmonics in tune, they can creep securely, interval by interval, through this free, gridless, infinite pitch space – astronauts of harmony, floating beyond the gravity of A 440. The conceptual achievement leaves Boulez and Stockhausen in the dust. Moment by moment, the music can sound as mild as Ned Rorem.

The conceptual achievement leaves Boulez and Stockhausen in the dust.

Ben had a strange mind, and I say that up front only because he often frankly said so. He thought he had some kind of mental disorder, possibly caused by being taught to meditate wrong by the Gurdjieff cult in the early ‘60s – this is what he repeatedly told me, even in interviews. He was always trying various remedies. When I studied with him privately in 1983-86 (post-doctorate), he was on medication that made him very quiet. He would look at my score for fifteen minutes without speaking, and then say something incisive and profound. A few years later he was controlling his problems via diet. I went to a conference with him, where I was going to interview him onstage: the night before, he kept me up until two in the morning, talking nonstop. His Catholic priest in Champaign-Urbana recommended he go to a Zen temple in Chicago, and so for a couple of years that’s where he and I met, and I started going through the Zen services with him. Those were wonderful, and the lessons afterward took place in a blissful haze.

Ben Johnston in 1962

Ben Johnston in 1962

I do think that, whatever was strange about Ben’s mind, it was what made his music possible. At age twelve, attending a lecture on Debussy, he was introduced to Helmholtz’s On the Sensations of Tone, the foundational treatise on acoustics that first appeared in English in 1875. He spoke about it as though it confirmed for him what he already sensed: that the music we play has something wrong with its tuning. At age 17, after a concert of his music, he was interviewed by the Richmond Times Dispatch (where his father was managing editor, admittedly), and predicted, “with the clarification of the scale which physics has given to music there will be new instruments with new tones and overtones.” This was 1944. Harry Partch’s Genesis of a Music wasn’t even published yet. By 1950 Ben was in grad school at Cincinnati Conservatory, and someone gave him a copy of Partch’s then-new book, with its outline of his 43-tone microtonal scale and perceptive history of the vicissitudes of tuning over the centuries.

Thrilled to find another musician who shared his misgivings about tuning, Ben wrote to Partch asking to study with him. Partch, who once wrote that he would “happily strangle” anyone who claimed to have been his student, took him on as an apprentice and repairman instead, and so Ben went to live for six months on Partch’s ranch in Gualala, California. Partch liked to have only young men in his orbit, and was affronted when Ben’s wife Betty arrived in tandem, but Betty Johnston was a powerhouse, and eased her way into Partch’s reluctant affections. Ben later wrote that Partch

could have wished for a carpenter or for a percussionist… But he had one thing he had not counted on: someone who understood his theory without explanation, and who could hear and reproduce the pitch relations accurately.

Ben Johnston, wearing a jacket and tie, sitting outside with Harry Partch in 1974

Ben Johnston with Harry Partch at Partch’s home in 1974.

Ben’s preternatural ability to hear and reproduce exotic intervals was the one intimidating thing about studying with him. My brain not being strange in the same way, I spent years training myself to hear eleventh harmonics and syntonic commas using primitive digital technology, and to this day I would never attempt to coach an ensemble to play one of his string quartets. When I came to his house he liked to play me whatever he was working on. Once, in the early weeks, it was a piece for trumpet and piano called The Demon-Lover’s Doubles, of which he played me the piano part. His piano was tuned for maximum consonance in G major with some peculiar pitches outside that diatonic scale, and as he started, it seemed like an oddly homespun, tuneful little piece. Then, magically, his piano started going sourly out of tune and got weirder and weirder, and I was thinking, “Man, you’d think Ben would tune his piano.” Finally, of course, he returned from his modulations into distant keys, and in G major the piano sounded fine again. I just remember sitting there thinking, “Huh.”

In that experience is the alpha and omega of Ben’s vision. What fascinated him, I think, was how vastly just intonation and the higher harmonics expand the range of consonance and dissonance, in both directions. You can have so many flavors of harmony: triads purely in tune, edgy Pythagorean triads, chords with exotic upper harmonics, dark chords from a subharmonic series, excruciating chords specifically out of tune by a comma here or there, bell-like chords related by higher harmonics, grating seventh chords with deliberately mismatched ratios, tight clusters – the route from purity to noise is no longer a line but a large three- or four-dimensional space.
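For readers who want to see these distinctions in numbers, here is a small Python sketch (the interval ratios are standard tuning lore, not drawn from Johnston's scores) converting frequency ratios to cents:

```python
import math

def cents(ratio):
    """Size of a frequency ratio in cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

# A few of the harmonic "flavors" mentioned above:
just_third = cents(5/4)           # pure major third, ~386.3 cents
pythagorean_third = cents(81/64)  # edgy Pythagorean third, ~407.8 cents
syntonic_comma = cents(81/80)     # the ~21.5-cent gap between them
```

The 12-tone equal-tempered third, at exactly 400 cents, sits between the two pure flavors; the syntonic comma is precisely their difference.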

One of Ben Johnston's pitch charts.

One of Ben Johnston’s pitch charts.

Many, many microtonal composers, I think, are looking for a total alternative to our tuning system, total exoticism, experimenting with how far we can adapt to new intervals, adding new complexities beyond what twelve-tone music provided. Ben wasn’t. Ben was never disappointed in the major triad. For Ben, the tonal music system that we’d developed over the last few centuries was a template, a first draft, a worthwhile approximation, but only a fragment of the universe he could hear. Seventeenth-century theorists like Marin Mersenne and Christiaan Huygens had argued for including the seventh harmonic as a consonance; Giambattista Doni (c. 1594-1647) wrote music using the eleventh harmonic. Theoretically, Ben goes back to that era and accepts those arguments. Keep the system, but add back in what was prohibited. Thus, unlike the general run of modernists, he could envision a brave new world without ever having to reject or exclude anything.

Cage and Xenakis may have wanted to reinvent music, but Ben saw a way to keep the foundation and keep building.

And so we have “Amazing Grace,” which so anchors one of the most avant-garde works of 1973 that the audience can hum along with it the first time they hear it. Also the sentimental old tune “Danny Boy,” which gradually emerges from the last-movement variations of Ben’s Tenth Quartet, and the folk song “Lonesome Valley” which is the subject of his Fifth Quartet, and the folk tune in The Demon-Lover’s Double. Cage and Xenakis, whom he knew well, may have wanted to reinvent music from the ground up, but Ben saw a way to keep the foundation and keep building.

Ben Johnston with The Kepler Quartet in 2015

Ben Johnston with The Kepler Quartet in 2015 (Photo by Jon Roy).

What’s amazing about his use of old folk tunes is how devoid of nostalgia it is. He’s not like Charles Ives, with “Beulah Land” faintly heard above the dissonant chords below; there is no modernity with which the songs’ innocence is contrasted. His “Amazing Grace” grows step-by-step from five pitches to twenty-three as though all those pitches were implicitly in there to begin with – which I imagine to his ears they were! It is difficult, probably, for most of us new-music types to take “Danny Boy” as seriously as he did, but for him it was simply a familiar item of our culture from which new implications could still be drawn. He didn’t have to renounce the naïve perspective on music to see through to the other side of the musical universe. And this is why some of Ben’s works will always appeal even to people who don’t like abrasive modernism.

That’s certainly not to deny that Ben’s music could be thorny. He kept writing twelve-tone music, in just intonation, and I once asked him why. He replied, “Well, I had learned all that theory, and I didn’t want it to go to waste.” Since he said almost everything with a slight smile, I’m not sure I ever knew when he was kidding. His Sixth Quartet draws the principle of endless melody from a twelve-tone row that consists of the first six non-repeating harmonics of D and the first six subharmonics of D#. The row matrix for the piece contains 61 different pitches. Even though it uses a twelve-tone row, though, each hexachord is actually a tonality in itself, so you do hear the harmony shift back and forth between major and minor – or between otonalities and utonalities, as we microtonalists say. At the time I wrote a rave review of the Sixth Quartet for the Chicago Reader and Ben said, “I think you like that piece better than I do.”

One piece I analyzed had some repeated pizzicatos in the cello that didn’t fit into the structure, and I asked him where they came from. He looked, and said, “Oh, that was to give the audience something to listen to while I worked out this contrapuntal problem.” That was a lesson: that the composer and the audience could want different things from a piece, and that both could be satisfied.

The composer and the audience could want different things from a piece, and both could be satisfied.

As with Partch, I also insist that Ben should get credit for his rhythmic innovations as much as for his microtonality. In the Fifth Quartet “Lonesome Valley” floats above a bobbling sea of polytempos, and in the Fourth Quartet there’s a long rhythm of 35 against 36 (analogous to what we call the septimal comma), involving different meters in the various instruments. Back when I was younger and smarter, I once successfully parsed it, but I’ve never figured it out again since. He was a great proponent of Henry Cowell’s theories that pitch and rhythm, both being number based, could be developed analogously and in the same directions – that was the principle, of course, of his first hit tune, Knocking Piece, which became a percussionist’s standard. That he was focused on extending musical language in terms of both pitch and rhythm has limited his influence among the mass of composers who think there’s nothing new to be done in those directions, but when we’re ready he’s left us a foundation for a radically new music.

Ben never proselytized for microtones or just intonation. He imposed no stylistic dogma. Like so many American experimentalists, he himself was stylistically multilingual: he wrote chance music, twelve-tone music, conceptualist works, a musical, and a surprising amount of his output is in a neoclassic vein, with standard forms like sonata-allegro and variations. Neo-romanticism, I think, is the only idiom he avoided, which is not to say his music couldn’t be deeply moving; he just wasn’t sentimental. In 1983 I asked to study privately with him because I loved his music (I never attended the University of Illinois where he taught for 35 years), but I didn’t want to get into microtonality, which seemed like too much work. That was fine with him, but at my first lesson he looked at a chord I’d written and remarked how beautiful it would be if tuned properly, and he reeled off the ratios. With a shock I realized I understood just what he was saying. It was as if a huge iron door had slammed shut behind me. I was in his world and couldn’t go back.

I didn’t need to. The microtonal notation he invented opened the universe to me, and I learned to think in it fluently. My own microtonal music, more single-minded and homogeneous than his (not to mention more cautious – god, that Seventh Quartet!), inherited his worldview of microtones as an extension of tonality rather than an alternative. I would be remiss here if I failed to mention another of his microtonal students, Toby Twining, who, in his Chrysalid Requiem (2002), developed Ben’s ideas into one of the most impressive feats of musical architecture ever perpetrated, incredibly complicated yet unearthly beautiful. That’s a legacy.

Ben Johnston as a child driving a toy car.

A 10-year-old Ben Johnston in 1936. He was already eager to explore.

I remember once in Ben’s medicated days we had him over for dinner, and he played solitaire obsessively while we were preparing dinner. After he retired we visited him in Rocky Mount, where Ben and Betty, equally strong characters, practically barked at each other, but clearly with no lack of affection. He was a crucial link between me and several other people I didn’t meet until later, all of whom were devoted to him: Bill Duckworth, Neely Bruce, Bob Gilmore. I last saw Ben in 2010 at a microtonal conference. He could barely get around. After I delivered a paper about his music he tottered up to say “thank you,” and I replied, “No, thank YOU!” He looked up from his walker with a big grin and gruffly growled, “YOU’RRRE WELCOME!” That meant the world to me: I needed him to acknowledge how much he had done for me. A few years later I called to tell him that he appeared as a character in Richard Powers’s novel Orfeo, about the University of Illinois’s music department in the 1960s. His mind had deteriorated from Parkinson’s, and the next day his caretaker called me saying Ben was under the impression that some kind of copyright infringement had taken place and he needed a lawyer. I set his mind at rest and assured him it was a compliment.

And once when I was a young, new home-owner with a lawn to keep up, I was driving Ben somewhere and we passed a vacant lot covered with blooming dandelions. I made a slighting reference to the plant, and Ben just said, “But they’re awfully beautiful, aren’t they?” That was a lesson too. He was a lovely soul, and a caliber of musical mind we will not see again.

Ben Johnston and Kyle Gann c. 1994 (Photo by Bill Duckworth)

Ben Johnston and Kyle Gann c. 1994 (Photo by Bill Duckworth, courtesy Kyle Gann)

Understanding the Polychromatic System

We seem to be at an accelerated phase in our musical evolution, where isolated methods of music practice are rapidly multiplying without a framework of integration and orientation for musicians and listeners to grasp. The polychromatic system is one framework of integration for the various scale configurations of micro-pitch music.

Isolated methods of music practice are rapidly multiplying.

The polychromatic system is oriented toward exploring the outer limits of micro-pitch awareness and its expression in music, from a perceptual rather than conceptual (i.e. mathematical, theoretical, analytical) perspective.

The polychromatic system is based on principles of associative synesthesia: learned associations and conceptual/perceptual integration of audible pitch with visual color. Recent research has shown that associative synesthesia can be developed with practice. Music is an optimal area in which to extend the possibilities and potential of synesthetic awareness, both aesthetically and scientifically.

Intentions:

  1. To simplify and encourage musical curiosity—exploration and discovery of infinite possible pitch scales and their sonic combinations (intervallic or other).
  2. To open new worlds of musical expression, experience, and composition.
  3. To aid in the development of new ways of ‘hearing’ sound/music and the world.

The standard 7-white, 5-black key layout of keyboard instruments.

Imagine a chromatic keyboard where each key is split. The front half of the key plays the conventional chromatic semitone pitch while the back half of the key plays the quartertone in between each front-key pitch (24 pitches per octave). We could distinguish these pitches clearly by assigning the quarter tones at the back of the key to a pitch-color (let’s say violet).
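As a rough sketch of the arithmetic behind this split-key layout (assuming A4 = 440 Hz as the reference; the function name is mine, not part of the system):

```python
A4 = 440.0  # reference pitch, an assumption for this sketch

def freq_24edo(steps_from_a4):
    """Frequency of a 24-edo pitch a given number of quartertone steps from A4."""
    return A4 * 2 ** (steps_from_a4 / 24)

# Even steps land on the familiar chromatic semitones (front of each key);
# odd steps are the "violet" quartertones in between (back of each key).
for step in range(4):
    kind = "chromatic (front)" if step % 2 == 0 else "violet quartertone (back)"
    print(f"step {step}: {freq_24edo(step):.2f} Hz  {kind}")
```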

A keyboard layout for a polychromatic system of 24 equal divisions of the octave.

Moving up to 36 edo (equal divisions of the octave) brings new complexity. Now we can imagine a pitch-color above (say, blue) and a pitch-color below (say, red) each chromatic pitch. One way to describe them would be to say that, in the key of ‘C’, C-red is flat-ish and C-blue is sharp-ish relative to the chromatic pitch. The problem here is that the terminology of flat and sharp is embedded in the chromatic language both as a pitch definition (Db, C#) and as a pitch modifier (bb, x). By applying the concept of pitch-color, we can avoid both the confusion in terms and the extreme notational complexity (countless, incompatible pitch-modifier symbols) ‘bolted on’ to chromatic black and white notation.
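A hypothetical naming sketch for this 36-edo case (the note spellings and the red/blue assignments follow this article's convention; the code itself is only illustrative):

```python
CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def name_36edo(step):
    """Name a 36-edo step as a chromatic pitch, optionally colored red
    (a third-tone below it) or blue (a third-tone above it)."""
    semitone, offset = divmod(step + 1, 3)  # offset 0 = red, 1 = plain, 2 = blue
    color = ["red", "", "blue"][offset]
    note = CHROMATIC[semitone % 12]
    return f"{note}-{color}" if color else note
```

So step 0 is plain C, step 1 is C-blue, and step 2 is C#-red: each chromatic pitch keeps its name, and the color does the modifying, with no extra sharp/flat symbols.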

A keyboard layout for a polychromatic system of 36 equal divisions of the octave.

This describes how I proceeded in trying to work with very high pitch resolutions. The polychromatic system evolved to allow the creation of a simple notation and theoretical language for writing, learning, and memorizing micro-pitch music within pitch division schemes of 106 and 72 edo.

A keyboard layout for a polychromatic system of 72 equal divisions of the octave.

The pitch-color concept is not absolutely defined, so the values (in Hertz) of C-red differ depending on the scale division method used. In this way, the language remains generally consistent while being adaptable to any conceivable pitch scale. Unique pitch definitions are given at the beginning of each composition. This enables efficient learning and the possibility of using multiple pitch scales (simultaneously or via ‘modulations’), all under a unifying and intuitive pitch-color concept.

The pitch-color concept is intuitive because micro-pitch maps easily onto the visual color spectrum, from infrared to ultraviolet. A simple association can be made between a continuum of flatness to sharpness around each reference pitch (chromatic or other) and the color sequence we already know from seeing rainbows and other light diffraction phenomena.

I use equal divisions of the octave as a method of pitch division because it is a rudimentary, self-explanatory starting point for my early explorations of the auditory limits of pitched-sound differentiation. My approach is to use the highest possible edo scales within each new keyboard design.

While there is one option for a major 3rd in a chromatic system, the polychromatic system offers several pitch-color variations of that interval.

With regard to intervals, the polychromatic system uses relatively defined pitch-color definitions and is based on an idea of intervallic relativity. So, while there is one option for a major 3rd in a chromatic system, the polychromatic system offers several pitch-color variations of that interval. This intervallic flexibility has audibly compelling implications and effects when exploring variations of each component within increasingly complex harmony.

When listeners hear microtonal music as ‘out of tune’, this impression arises from a chromatic frame of reference. A new foundational frame of reference or perspective needs to be established in order to appreciate microtonal music on its own terms and not in comparison to the macro-pitches of the chromatic system. This foundation is what I have been seeking in developing a polychromatic system. In general, microtonal music can seem extremely abstract for unprepared minds and ears, especially without a new framework of understanding (for reference and comparison).

The polychromatic system builds upon the fundamental concepts of the chromatic system (note definitions, harmonic principles, and music theory) as a common point of reference – and departure. From this common framework, increasingly refined levels of micro-pitch discrimination can be explored within a known system of musical understanding. As greater refinements of pitch and harmony recognition are developed, increased awareness can enable the recognition of further pitch-colors, as entities in themselves, without trying to force them into the conventional frame of reference: a coarse pitch-resolution, chromatic system.

By analogy, imagine the chromatic system as an old monochrome (black & white) dot matrix printer, with its chunky, quantized images. If you input a high-resolution image into that framework, it is ‘processed’ into a chunky, quantized image. The coarse resolution of the underlying dot matrix design must be addressed first.

A low-resolution dot-matrix image showing heavily pixellated letters alongside a pixellated photo close-up of an animal’s eye

Paradoxically, ‘tradition’ has vital importance in the creation of new musical systems. Here, we are talking about tradition as our accumulated experience of the past, a shared frame of reference, an implicit basis and context of listening to and composing music—and not in the sense of calcified conventions of the past and present.

Paradoxically, ‘tradition’ has vital importance in the creation of new musical systems.

Polychromaticism is an approach and practice which uses chromatic music theory as an initial basis for conceptual anchors. In this way, extensions of established tonal and harmonic principles assist in the understanding of the new possibilities and implications of the polychromatic system. This creates a conceptual and perceptual bridge between our current and many future systems of music.

The use of chromatic conceptual anchors comes into play when I am trying to memorize a piece for performance. Using the conceptual anchor of a dominant 11th chord, I remember the constituent pitch-colors of the chord as minor modifiers within this larger conceptual structure, instead of having to remember each pitch-color individually.

My composing process is much more intuitive than analytical. As a result, I have neglected doing harmonic analyses, after the fact, to determine possible new theoretical models. My hope is that others with a passion for analysis will create new music theory models to aid in teaching the polychromatic system more efficiently. Philosophically, I see this as part of a larger 21st-century process which is based on collaborative innovation in music, science, technology, and the sound arts.

As a musician, I found it seemingly impossible to gain a fundamental grasp of microtonal music, because it consisted of so many discrete pitch-scale methods, all existing separately without a fundamental, underlying system.

With a polychromatic basis in notation and a generalized pitch-color concept, any micro-pitch language could be quickly assimilated without relearning scale-specific notations. The current proliferation of esoteric, fragmented micro-pitch scale methods, each reinventing the wheel in notation, can make wide exploration within microtonal music intimidating and difficult.

The use of non-chromatic color schemes:

A diagram of various hex-color schemes.

While the color schemes here are different, my impression is that these are examples of a modally optimized layout – in the sense that adjacent keys are 3rds, 4ths, 5ths, and 6ths rather than chromatic minor 2nds (or smaller). In the polychromatic system there are multiple pitch-color variations of each interval, rather than the single fixed interval value in these layouts. Why be limited to just one major 3rd, when multiple intervallic pitch-color choices—singularly or simultaneously—could be available?

Ultimately, the polychromatic system exists to make the process of micro-pitch exploration and creation easier. And just as the polychromatic system has evolved from the chromatic system, it too will eventually become a (legacy) reference and conceptual point of departure for many increasingly sophisticated (non-chromatic) music systems of the future. The polychromatic system is one way of making the musical evolution toward triple-digit, pitch-scale resolutions easier to understand and create with.

As a musician and a listener, I experience music as a dynamic and evolving process, a creative interaction that we choose to engage in. Ultimately, the meaning and value of music comes from the quality and depth of this creative interaction.

Art doesn’t come to us, we must come to art.

The possibilities of growth and awareness gained through our engagement with art remind me of the idea that art doesn’t come to us, we must come to art. This idea expresses both a necessary receptivity to new perspectives as well as an active personal involvement which engages the listener’s creativity and imagination, in a similar way to that of the composer. In this process, the listener becomes a receptive-artist interacting with the compositions of the expressive-artist (composer). If we become what we practice, then exposure to, and interaction with, challenging art can help us expand our integrated perceptual/conceptual awareness, and expose us to new dimensions of emotion and insight beyond the limits of spoken language.

In the next article, I will describe technical (physical technique and technology) aspects of my approach to polychromatic music, as well as some discoveries and implications that I have become aware of in my early explorations.

Polychromatic Music

Music seems to be at the forefront of an increasingly pervasive process in which technological simulation is cheaply and efficiently substituted for authentic human creation and expression. Further, a technological aesthetics of ‘perfection’ has arisen which values flawless quantization, pitch correction, and production over the unique, imperfect vitality of human creative expression. Polychromatic music embodies a new paradigm and aesthetic: a humanistic counterbalance to rapidly emerging technological trends which, when they don’t replace human involvement, seem to minimize and/or trivialize it.

Even as a child I was aware that the chromatic/modal tonal languages were nearing exhaustion as far as new areas of exploration and creation were concerned, and this stoked a curiosity about, and an intuitive seeking of, new dimensions of musical language. As an undergraduate music major, I found that many of the developments of the late 19th (chromaticism) and 20th (stochastic, aleatoric, spectral, microtonal, algorithmic) centuries made sense from this perspective. Yet they seemed difficult to assimilate and understand without a conceptual framework to anchor these new perceptual experiences in a larger foundational context and aesthetic.

With the emergence of AI (artificial intelligence) ‘creativity’ now being used to ‘compose’ music, many new questions and concerns have arisen. Any process that can be formalized in rules or clear, quantifiable instructions can be efficiently executed by a computer. To me, it seemed that the innovations of stochastic (random operations), aleatoric (chance operations, i.e. dice rolling), serialistic (predefined patterns), and algorithmic (step-by-step procedures) composition were likely candidates for being subsumed within AI generative computation systems.

The human process of creativity lies on a continuum between compositing and composing.

A further distinction is necessary here between creative ‘composing’ and ‘compositing’. Artificial Intelligence generativity (so-called “creativity”) is based on a compositing process; it’s basically all just recombinations of pre-existing data. While it is clear that the human process of creativity lies on a continuum between compositing and composing, a salient aspect of human creativity involves the creation of new ‘data’ rather than the novel recombination of prior ideas.

This leaves us with the contemporary methods of new spectral/timbral and pitch languages as wide open frontiers for exploration and creation. With respect to new timbral languages, I think of spectral music broadly as a paradigm and aesthetic where an emphasis is on the exploration of the timbral aspects and implications of complex sounds. This would encompass harmonics, harmonic (overtone) interactions, and new frontiers in harmony (polyphony). This is an immense world of its own where technology has provided endless possibilities for exploring sound design and works of sonic creation (sound arts).

Another compelling area of exploration lies within the realm of new pitch languages—the xenharmonic philosophy and microtonal/macrotonal pitch definition methods. For the past century, the creation and use of many microtonal methods has been an exciting development in music. This presents new possibilities for differing, extended explorations of ‘tonality’. It seems that the main hindrance to the wider understanding and use of these methods is the result of a lack of any underlying conceptual framework.

At present, we have a growing number of mutually exclusive microtonal pitch definition methods, each with its own notation. As a musician coming from an empirical perspective (practice vs. theory), the impractical situation of learning a new notation system for each microtonal pitch method is a persistent impediment to a larger, unified progress beyond merely creating new microtonal scales. This is where polychromatic music, as a system, comes in.

One way of understanding and distinguishing our contemporary musical terminology of xenharmonic, polychromatic, and microtonal is by a rudimentary differentiation of philosophy, system, and method:

Xenharmonic refers to a philosophy which regards the infinite pitch scale division methods applied to the pitch continuum as equally valuable. Also, it expresses an aesthetic of freedom and openness toward any and all methods of pitch scale division and the exploration of their melodic, harmonic, rhythmic, timbral, etc. implications in new musical compositions.

We have no words for many perceptual aspects of hearing.

The polychromatic system is an intuitive, unifying conceptual framework for exploring any conceivable pitch division method. Our language is grounded in visual concepts; we have no words for many perceptual aspects of hearing (imagery, visualization, dimension, space, etc.), so we are forced to communicate auditory concepts through analogy or metaphor. My approach is to link visual and auditory perception in an idea of ‘pitch-color’. The visual basis is the color spectrum: red, orange, yellow, green, blue, indigo, violet. From this intuitive basis, we can move from a vague flat/sharp conception of pitch to more refined and distinct conceptual ‘pitch-color’ anchors. So, with yellow as the reference, orange and red would be progressively flatter, while green, blue, and violet would be progressively sharper, on an integrated visual/audible scale running from (infra/flat) red to (ultra/sharp) violet. The distinctions of flat and sharp thus become an increasingly refined spectrum relative to the chromatic (macro)pitch division method, i.e. C, Db, C#, etc.
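One way to encode that red-to-violet continuum in code (the specific step offsets below are illustrative guesses, not the system's actual values; 1200/72 ≈ 16.67 cents corresponds to one step of 72-edo):

```python
# Illustrative pitch-color offsets around a reference pitch: warmer colors
# flatter, cooler colors sharper, yellow as the reference. The step values
# here are hypothetical, chosen only to mirror the ordering in the text.
COLOR_OFFSET_STEPS = {"red": -2, "orange": -1, "yellow": 0,
                      "green": 1, "blue": 2, "violet": 3}

def cents_offset(color, step_cents=1200/72):
    """Offset in cents of a pitch-color from its reference pitch."""
    return COLOR_OFFSET_STEPS[color] * step_cents
```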

The polychromatic system uses the chromatic language as a common point of departure. In this context, the chromatic language is characterized by the use of letters as pitch names, and by the representation of musical intervals numerically (and modally: C-B as a major 7th rather than a 12th). Also, since the pitch-colors of the system are relatively defined (by the method of pitch division), it creates an intuitive bridge between differing microtonal scale derivation methods.

Microtonality consists of various exclusive and divergent methods of pitch division, notation, and theory. Without a unifying conceptual framework, these methods remain mutually exclusive and excessively difficult to assimilate in a complementary manner.

A point of clarification: with respect to an integrated philosophy-system-method perspective of music, the chromatic musical language is a system, while the various temperament derivations (meantone, well, just, equal, etc.) are methods (of pitch definition).

The above categories are generalized for preliminary understanding. I see polychromatic music primarily as a system, and secondarily as an aesthetic. For me, this aesthetic involves evolving reflections on humanism in an era of increasing technology. And this is why I devote the effort to physically learn and perform my compositions: to create not only demonstrations of new musical possibilities within the polychromatic framework, but also examples of the human musician utilizing technology in a creatively assistive fashion vs. the human musician creatively assisting (editing, compositing) increasingly sophisticated technological processes.

In the next article, I will focus on describing my approach toward learning and composing within a polychromatic system.

Essential Tools for Xenharmonic Music

Are you itching to dive in and compose some xenharmonic music, that is, to use scales that have more, or fewer, notes per octave than our standard twelve-tone tuning? I sure hope so, because it is a largely unexplored musical universe with a lot of room for composers to find their niche.

There are many directions to explore, and it really depends on the mindset of each musician as to where to begin. One area is that of “just tunings” where most intervals are perfect ratio combinations of the harmonic series, so that chords are as in tune as possible with very little beating. This is where the mathematically inclined musical purists gravitate.

Another direction to explore is that of equal temperaments where tunings consist of an octave divided equally by any number of scale degrees. We say “19edo” to mean 19 equal divisions of the octave. (Back in the ’90s and early 2000s, it was more popular to say “19-tone equal temperament” or “19tet”.) My favorites are 10edo, 16edo, and 17edo. This is my world, where there is no need to sweat over ratios or technicalities, but to just play by ear and enjoy the distinct flavor of each tuning.
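The arithmetic behind these names is simple: an octave spans 1200 cents, so one step of an n-edo scale is 1200/n cents. A quick sketch in Python (just the arithmetic, not tied to any particular tool):

```python
# Step size in cents for an equal division of the octave (edo).
# An octave is 1200 cents, so one step of n-edo is 1200/n cents.
def edo_step_cents(n: int) -> float:
    return 1200.0 / n

for n in (10, 12, 16, 17, 19):
    print(f"{n}edo step = {edo_step_cents(n):.2f} cents")
```

So 12edo has 100-cent half steps, while 19edo steps are about 63 cents and 10edo steps are 120 cents each.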

In the past it was easier to find synthesizers that could be tuned to equal temperament than ones that allowed for the full tuning of individual notes across the keyboard. But nowadays many synthesizers will do either one just as easily. Technology is certainly further along than when I began composing xenharmonic music. In 1990, I used hardware synthesizers and standard keyboards. It was so exciting that it didn’t seem like too much trouble at the time, but nowadays it is a far more streamlined experience.

For example, it used to seem normal to put stickers on my keyboard keys to keep track of where the octaves were for any given tuning. It’s easy to see where octaves are on a piano because of the black/white note pattern of the keys–a well-thought-out decision of long ago. Xenharmonic tunings would not only need a greater or lesser number of keys per octave, but the musician has a choice as to which keys are the “diatonic keys” (white keys) and which should be “accidentals” (black keys), and therefore what the black/white note pattern should be. I will talk more about this in my final post next week, but just imagine staring at a regular piano keyboard and trying to write 17-note-per-octave music. It was strange, but we used to do it that way–with stickers!

And I used to patiently navigate through the maze of editing pages on my hardware synthesizers’ tiny screens to access the “slope” or “keytracking” or “keyfollow” of each oscillator, in order to adjust how much the pitch rises as one plays up the keyboard. Depending on the synthesizer, the slope might anchor on a different arbitrary “root note” (such as middle C or A440). So different synths wouldn’t be in tune with each other until their individual “master tuning” was adjusted by ear, tuning the whole keyboard up or down uniformly until it audibly matched every other synthesizer. “It’s close enough for rock’n’roll,” I would say. And it was! But the whole experience was not for the weak-willed. Only we musicians who were determined and slightly eccentric thought it was fun.
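What that “slope” adjustment amounts to can be sketched as a formula: anchor an arbitrary root key to a reference frequency, then stretch or compress the per-key increment so that n keys span an octave. (The root key of 60 and reference of 261.63 Hz below are assumptions for illustration, not values any particular synth uses.)

```python
# Sketch of keyboard "slope"/keytracking retuning: anchor an arbitrary
# root key to a reference frequency, then make each key step 1/edo of
# an octave. Root key 60 at 261.63 Hz is an assumed anchor.
def key_to_freq(key: int, edo: int, root_key: int = 60, root_hz: float = 261.63) -> float:
    return root_hz * 2 ** ((key - root_key) / edo)
```

With edo=19, the key nineteen steps above the root lands exactly one octave up, which is why the octaves wander away from the familiar keyboard positions (and why the stickers were needed).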

A rack of 1980s hardware synths

The old days of hardware synths. Or maybe you’re a geek like me. I sometimes still use them!

But things are way easier now! These days many of us rearrange the black and white keys on our keyboards to match each specific tuning. Inexpensive lightweight keyboard controllers didn’t exist back in the day, nor did eBay, but now it’s not hard for someone with a tight budget to eventually collect a few $50 keyboards and arrange them for their favorite tunings (or ask one of us in The Xenharmonic Alliance community to rearrange a keyboard for you). As for sounds, now we simply call up a “tuning file” on any number of software synthesizers that can be had for a fraction of what we used to pay for hardware synthesizers–and the tunings are accurate!

Nowadays anyone can dive right in and give xenharmonic music a shot.

Indeed, nowadays anyone can dive right in and give xenharmonic music a shot. But before you get started, you’ll need a starting place. Are you a just intonation-minded person, or an equal temperament type? Are you an acoustic musician, an electronic musician, or interested in both? Many xenharmonic composers intermix with different types of musicians, partly because there are so few xenharmonic musicians around, but also because the mix of electronic production and the organic expression of the acoustic world can mesh very nicely.

Let’s start with your own acoustic instrument if you have one that doesn’t have fixed frets or keys, such as a violin, cello, harp, trombone, slide guitar, or your voice. For advice on what tunings would be best to try for your specific instrument, and advice for how to tune your instrument if need be, I would highly suggest posting a note to The Xenharmonic Alliance on Facebook. You will get answers fast. Don’t be shy! It is a very friendly and helpful community. Perhaps you want to try building a custom instrument? The community can help you with that, too.

For acoustic instruments, you will need a list of frequencies for whichever tuning you’d like to try.  If you don’t manage to get this info directly by asking The Xenharmonic Alliance community, software such as Custom Scale Editor for Macintosh, or Scala for Windows or Linux, will allow you to generate tuning charts. Once you figure out what tuning you want to try, and how best to tune your instrument, I suggest using a hardware or software Hertz-reading tuner and a microphone to tune your instrument. A laptop with a built-in microphone would be most handy for that. Another approach is to use a VST synthesizer with your tuning loaded, and with a clean sound, tune your acoustic instrument by ear to match the synthesizer notes.
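If you’d rather not install anything yet, a simple tuning chart of this kind is also easy to generate yourself. Here is a minimal sketch; the 17edo tuning and the 220 Hz reference pitch are assumptions for the example, not defaults of Scala or Custom Scale Editor:

```python
# Minimal tuning chart: frequencies for one octave of 17edo,
# starting from an assumed reference pitch of 220 Hz.
base_hz = 220.0
edo = 17
for degree in range(edo + 1):
    freq = base_hz * 2 ** (degree / edo)
    print(f"degree {degree:2d}: {freq:8.3f} Hz")
```

Reading the printed frequencies off a Hertz-reading tuner while you adjust each string (or slide position) gets you the same result as a chart exported from the dedicated software.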

What is a VST synthesizer? It’s an opportunity to get free xenharmonic-friendly synthesizers, that’s what! It stands for Virtual Studio Technology and is an audio plugin standard created by Steinberg in 1996 that allows third parties to create their own software plugins for free or commercial use, so there are a gazillion of them out there. There are other virtual synthesizer formats, but none have as many free instruments available or work with as many different music programs, so I will just list VST-capable synths here.

I suggest expanding your xenharmonic universe into the digital world because there will be more choices of tunings and sounds.

Even if you are an acoustic musician, I suggest expanding your xenharmonic universe into the digital world because there will be more choices of tunings and sounds. And vice versa. Synthesizers are my world, but I very much enjoy the organic nature of performing alongside acoustic musicians.

You’ll need a VST host software application in order to load and play VST synths. Normally this would be a sequencing program such as a Digital Audio Workstation (DAW). Ableton Live, Logic Pro (Macintosh-only), and FL Studio (PC-only) are highly recommended. These programs will not only allow you to record MIDI tracks (also known as Instrument tracks) but will also allow you to record audio tracks of your acoustic instruments. Here is a bigger list of VST host music software. I personally use Ableton now, and it’s nice because it has a built-in tutorial that steps you through the learning curve using its own demo song. I used Pro Tools for a number of years, and Ableton is a bit different, but the tutorial made it easy to make the transition.

A screenshot of Ableton Live

Ableton Live (the VST host software, or “DAW” that I use) and the options page of LinPlug’s Morphox synthesizer, showing where the tuning file is selected.

And below I will list a few VST synthesizers that I either own myself or that have gotten great reviews from trusted musician friends. As I mentioned earlier, you can simply call up a tuning file (.tun) from each synthesizer, which will automatically tune it. No more fiddling around with the individual controls to get each synthesizer microtuned in addition to getting each synth in tune with the rest. With tuning files, all of your synths will have the same “root note” frequency from which the entire tuning is based, and therefore your synths will match each other.

Many synthesizers can navigate to any directory to grab tuning files (such as 19edo.tun, bohlen-pierce-scale.tun, etc.), so I recommend having one master “Tuning Files” folder for all of your synths to share, except for the ones that need the tuning files to be in a specific directory. While we’re on the topic, go ahead and download this folder of Tuning Files and put it somewhere that you’ll remember, such as your music directory. For the more picky synthesizers, I will include the directory path where you’ll need to place your tuning files, because that piece of the puzzle can be hard to find in any documentation or online search.

LinPlug’s Morphox is my current favorite VST. When you open Morphox you will see the main synthesizer controls, which you can ignore for now. Simply click on the “Options” button on the bottom right to open the back panel. Here you will find the “Microtonal Scale” section. Click the “Open” button next to it and navigate to your tuning files folder, then select the tuning you want. On the main page, click on “Demo Presets” at the bottom and explore the different factory sounds! Some of the arpeggiated sounds may sound a bit off because they were programmed for twelve tones, but this can all be adjusted once you get to know the synth a bit better. For now, just stick to the ones that sound good.

U-He’s Zebra2 is another favorite of mine. Zebra needs a tuning folder to be located here for PC: /program files/u-he/Tunefiles, or for Macintosh: /Library/Application Support/u-he/Tunefiles. When you open Zebra, you will see the “Voice MicroTuning” menu in the “Global” section on the bottom left. Click to open that window, and if it looks blank or you’re having trouble seeing the tuning file that you want in particular, control-click on “User” and select “Reveal in Finder”. The directory where the tuning files need to be will pop up. Copy your .tun files from your master tuning files folder here, and you’ll be good to go.

Alchemy is the most powerful sample-manipulation instrument in Logic Pro. Load your micro tuning files into the tuning folder located here: Program Files/Camel Audio/Alchemy/Libraries/Tuning.

Spectrasonics Omnisphere2 is highly recommended by my brother who is a top notch Las Vegas musician, composer, and producer. Omnisphere is one of his favorites. Load your micro tuning files for Windows here: Program Files/Spectrasonics/STEAM/Omnisphere/Settings Library/Presets/Tuning File. Or, for Macintosh: Library/Application Support/Spectrasonics/STEAM/Omnisphere/Settings Library/Presets/Tuning File/. Make a new folder in this tuning file folder, and place your .tun files there.

So there you have it. Get hold of a music-sequencing program that works for you and a synthesizer or two that reads .tun files. And if you are an acoustic musician, get ready to experience your instrument in a whole new way. Next week I will talk more about specific tunings that are great for the beginner and xenharmonic composition techniques. I will also show you how to rearrange the keys on an affordable MIDI keyboard controller and more.

The Science of Sound and Tunings

As a composer, what drew me to use scales that have more, or fewer, notes per octave than our standard twelve-tone tuning–or xenharmonic music–was the boredom that crept up on me over the years of using the same twelve notes over and over, plus a curiosity about other possible tunings and what emotional chords they might strike. Many xenharmonic composers are driven by the artistic urge to break down arbitrary barriers of creative expression. And many who are mathematically inclined explore the vast possibilities of xenharmonic tunings because the mathematics is beautiful.

In order to appreciate anyone’s desire to explore the world outside of the common twelve-tone tuning, it helps to understand where this tuning standard came from and how it is somewhat arbitrary and not even mathematically pure. This calls for a short discussion of the science of sound.

Twelve-tone tuning is somewhat arbitrary and not even mathematically pure.

To start with, there is nothing special about any particular note or “frequency” (unless a person has absolute pitch). It’s all relative. That is, the relationships between frequencies are what matters. A musical interval is the difference in frequency–the ratio–between two notes, and the way two frequencies interact has special mathematical and psychoacoustic qualities.
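The standard unit for quantifying that ratio between two notes is the cent: 1200 cents per octave, so an interval’s size is 1200 times the base-2 logarithm of its frequency ratio. A tiny helper makes this concrete:

```python
import math

# Interval size in cents from a frequency ratio. Only the ratio
# matters; the absolute frequencies drop out entirely.
def cents(ratio: float) -> float:
    return 1200.0 * math.log2(ratio)
```

For example, cents(2) is exactly 1200 (an octave), and cents(3/2), the pure fifth, comes out near 702, slightly wider than the 700-cent fifth of twelve-tone equal temperament.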

Our twelve-tone tuning was derived from interval ratios among the first sixteen harmonics. Harmonics were not an invention but a discovery about the natural resonant vibrations of musical instruments. Because our twelve-tone tuning is related to the spectra of musical instruments, its intervals and chords sound “in tune.”

The simplest and purest vibration is a single sine wave frequency. Sine waves are common in electronic music. Very low-pitched sine waves are often used as “sub bass” and high sine waves add “sparkle.” However, most sounds contain multiple sine-wave frequencies combined into a complex waveform. Think of tossing differently sized rocks into a pond and observing how the waves combine into an intricate pattern. Some waves reinforce to create larger ones, while others cancel out. Sound combines in this way, whether through air pressure waves interacting, or fluctuating voltage adding and subtracting in a digital mix.

Any sound that we hear, whether “musical” or not, can theoretically be broken down into individual sine waves. But most musical instrument sounds are pitched, and this is because they naturally vibrate at whole number multiples of the main frequency, creating the harmonic series. Any sine waves that fall in-between the neat and tidy harmonics are perceived as noise elements, which is not necessarily a bad thing. The “noise” might be the scrape of a violin bow, or the hammer sound of a piano key, or the pluck of a guitar, or the grit in a synthesizer sound.

As a typical illustration of the harmonic series, think about plucking a guitar string. The entire string vibrates back and forth at a certain speed (the fundamental frequency), and we perceive that vibration as the pitch of the note. At the same time, the string also vibrates in halves, thirds, fourths, fifths, and so on. This series of higher and higher frequencies, at shorter and shorter wavelengths, is the harmonic series.

Harmonic String Vibrations

Figure 1: Harmonic String Vibrations
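In code, the series pictured above is nothing more than whole-number multiples of the fundamental; using a low C of 32.703 Hz (the fundamental used later in this article) as an example:

```python
# The harmonic series: whole-number multiples of the fundamental.
# Here the fundamental is a low C at 32.703 Hz.
fundamental = 32.703
harmonics = [n * fundamental for n in range(1, 13)]
# harmonics[0] is the fundamental itself; harmonics[1] (twice the
# frequency) is the octave, harmonics[2] the fifth above that, etc.
```

Each successive entry corresponds to the string vibrating in halves, thirds, fourths, and so on, at shorter and shorter wavelengths.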

We don’t have to probe very deep into the harmonic series to see a fundamental relationship to our historical musical preferences. If we approximate the first dozen harmonics on a staff (Figure 2) or piano keyboard (Figure 3), we can quickly see some standard musical relationships. I say “approximate” because the harmonic series doesn’t perfectly align with the intervals in our twelve-tone tuning. If you have a piano handy, try playing these harmonics. In fact, you can do this on any instrument as long as it has a range of a few octaves.

The intervals between the first several harmonics are a very solid basis for our tuning, but the series continues with smaller and smaller intervals ad infinitum. Once we get past the 18th harmonic, the intervals become microtonal, meaning smaller than half steps.

An approximation of the harmonic series on a musical staff

Figure 2. An approximation of the harmonic series on a musical staff, starting with C up to the 12th harmonic.

An approximation of the harmonic series on a piano.


Figure 3. An approximation of the harmonic series on a piano, starting with C up to the 12th harmonic.

If we begin with a low C (32.703 Hz) as the fundamental frequency, then an approximation of the first seven harmonics would be the notes C1, C2, G2, C3, E3, G3, Bb3. Just the first five harmonics alone, when transposed into the same octave range, are enough to build a Major triad–C/E/G–the most used chord in all of Western music. Using the first seven harmonics allows us to build a dominant 7th chord–C/E/G/Bb–the chord most often used for an ending cadence leading into a Major triad.
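The word “approximation” can be made precise: measure each harmonic in cents above the fundamental and see how far it lands from the nearest 12edo pitch. A small sketch, assuming the fundamental itself sits exactly on a 12edo pitch:

```python
import math

# Deviation of each harmonic from the nearest 12edo pitch, in cents,
# assuming the fundamental is itself a 12edo pitch.
def deviation_from_12edo(harmonic: int) -> float:
    c = 1200 * math.log2(harmonic)      # cents above the fundamental
    return c - round(c / 100) * 100     # offset from nearest 100-cent step
```

Harmonic 3 (the G) is only about 2 cents sharp of its 12edo pitch, harmonic 5 (the E) about 14 cents flat, and harmonic 7 (the Bb) about 31 cents flat, which is why the seventh harmonic is the roughest approximation of the bunch.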

It’s nontrivial that the first and second harmonics are an octave apart. An octave has the strongest psychoacoustic relationship of any musical interval. It is an extremely interesting musical phenomenon that octaves sound like higher and lower versions of the same note. Our twelve-tone tuning is “framed” by this very special interval, as are most (but not all) xenharmonic tunings. This is why our standard tuning includes twelve notes “per octave.” Notice that every time the harmonics double in frequency (harmonics 1, 2, 4, 8, etc.) we have another octave.

The second and third harmonics form an interval of a fifth, which is the next strongest interval to our ears after the octave. Fifths frame our triad chords, and the cadence that I mentioned earlier “resolves down a fifth,” meaning that the root notes move down a fifth interval. The third and fourth harmonics form a fourth, the next strongest interval. The peaceful “Amen” cadence resolves down a fourth, which doesn’t sound as final as resolving down a fifth, but is still a strong cadence. Fourths and fifths are a staple for rhythm guitarists who often strum those intervals as a musical pedal.

The fourth and fifth harmonics form a Major 3rd, and the fifth and sixth harmonics form a minor 3rd. The sixth and seventh harmonics form a slightly smaller minor 3rd. Thirds are another very important interval in Western harmony. Stacking a Major 3rd and a minor 3rd creates a Major triad, and stacking them the other way around creates a minor triad–the second most popular chord in all of Western harmony. So there we have it–at least most of it–as the first seven harmonics provide most of our well-established intervals.

The seventh and eighth harmonics vaguely approximate a Major 2nd (a “whole step”), and the same goes for the next few harmonics, although by slightly smaller intervals each time. The eleventh and twelfth harmonics vaguely approximate a minor 2nd (a “half step”), although it is quite a bit larger than the half steps we use in our twelve-tone tuning. Harmonics 17 and 18 come closest to approximating our standard half step. After that, the harmonics form smaller and smaller microtonal intervals that were simply not chosen to be part of our musical scale.
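These shrinking steps between neighboring harmonics follow directly from the ratios (n+1)/n; computing them in cents confirms the landmarks just mentioned:

```python
import math

# Interval between successive harmonics n and n+1, in cents.
def harmonic_step(n: int) -> float:
    return 1200 * math.log2((n + 1) / n)
```

harmonic_step(8), the 9:8 ratio, is about 204 cents–a whole step. harmonic_step(11), the 12:11 ratio, is about 151 cents, noticeably wider than a half step. And harmonic_step(17), the 18:17 ratio, is about 99 cents, the closest of these to our 100-cent half step.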

So, with the first several harmonics transposed into the same octave range, we get these scale degrees: C, __, __, __, E, __, __, G, __, __, Bb, __, C. There are other interval relationships that played a part, such as harmonics 3 and 5, which form a Major 6th (G up to E). If transposed down to the root note, we get C to A. Adding the A to our scale then reveals another whole step between G and A, reinforcing the idea of a whole step, and so on. It was found that chopping a whole step into a “half step” came close in pitch ratio to the interval between the 17th and 18th harmonics. This interval could somewhat neatly fill in the remaining blanks to form our twelve-tone scale of half steps–C, C#, D, D#, E, F, F#, G, G#, A, Bb, B, C.

Well, almost. We don’t end up with equally sized half steps if we keep the pure harmonic ratios that originally inspired the scale. By around 200 years ago, the scale intervals had been adjusted and “evened out” so that every half step had the same frequency ratio of about 1.05946, the twelfth root of 2–not exactly a simple or ideal ratio. This is called “equal temperament,” and it’s how our modern pianos are tuned. It is quite useful in enabling a person to play a song in any key, and to transpose chords during the course of a song without the worry of clashing intervals.
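That ratio is the twelfth root of two, chosen precisely so that twelve identical half steps stack to an exact octave:

```python
# Equal temperament's half-step ratio: the twelfth root of two.
semitone = 2 ** (1 / 12)     # approximately 1.0594631
octave = semitone ** 12      # twelve equal half steps make one exact octave
```

No sequence of pure whole-number harmonic ratios has this property, which is the trade-off at the heart of equal temperament: every interval except the octave is slightly impure, but every key is equally usable.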

If our ancestors had chosen to base our scale on the first 36 harmonics, we might have ended up with 24-note-per-octave instruments involving “quarter tone” intervals instead of half steps. Pianos and other instruments would have been much more complicated to build. If we had used 25 or so harmonics, we could have ended up with 19-note-per-octave instruments. In fact, we could have easily ended up with any number of tunings, with plenty of ways to justify them as being the “best” decision.

Twelve notes per octave was probably the best decision for the time, especially considering that it simplified instrument building, yet had plenty of notes for creating a wide variety of expressive musical styles. But just as the ears of average people have adjusted to more and more complexity and variety in musical chords, styles, timbre, and rhythm over the millennia, we can now add new tonalities to the list.


What is Xenharmonic Music?

One might think that by this day and age, musicians on the whole have explored just about every aspect of music and pushed every thinkable musical boundary. Over the centuries, and especially in modern times, composers have experimented with every crazy rhythm, every unconventional time signature, every possible chord and chord combination, and every exotic new sound using every possible synthesis technique and sample manipulation. But one aspect that most musicians overlook is the prospect of using a completely new tonality–a different number of notes per octave.

Imagine squeezing extra keys in between other piano keys, or scooting the frets on a guitar closer together or farther apart and attempting to make music with it. It would be impossible to play any standard songs because the normal notes and note relationships have been shifted. “Xenharmonic” is the generic term that we use to refer to scales that have more, or fewer, notes per octave than our standard twelve-tone tuning. The pitches in xenharmonic scales are either too close together or too far apart to fit any familiar melody we’ve ever known. However, it is possible to write new music with new harmonic relationships that humankind has never heard before.

It is possible to write new music with new harmonic relationships that humankind has never heard before.

In the early ’80s, I was a curious teenager getting into writing pop songs. I had that typical feeling of wanting to push the boundaries. The tools of the time, for the most part, offered limitless potential. The sounds that came out of my synthesizers seemed unlimited, and clearly there was nothing stopping anyone from composing any possible rhythm. Nothing was stopping any musician from inventing new musical genres. I had no qualms about writing weird chord progressions and unnaturally leaping melodies to push the boundaries of pop music tonality. Being weird fit with the times after all.

But during the course of composing two music albums, I began to feel the monotony of using the same twelve notes over and over, no matter what quirky combinations I came up with. And that felt like an actual boundary. It was such a hard boundary brainwashed into my musically trained mind that I’m not sure I realized it was an artificial one. It felt more like an inevitable law of physics. After all, there were only twelve notes that I knew of in the world. There were twelve notes per octave on pianos, electric keyboards, guitars, and most instruments in the Western hemisphere. There were twelve notes per octave on any sheet music that I had ever seen. Surely there was a fundamental scientific reason for it.

It’s true that I could slide to any note with my voice or cello, and my mom could slide notes on her viola and even do some cool slides on her clarinet. But anyone with any sense would surely land on a regular note after inserting a bluesy slide here and there. I recall asking my mother, “What about the notes in between piano notes?” She explained that of course there are an infinite number of pitches, but that for historical and acoustical reasons a twelve-tone scale was derived long ago and we have used it ever since. So pianos and most other instruments are fixed in that tuning. Mom’s answer was satisfying enough–until some years later when I was hit over the head with xenharmonic music!

At Berklee College of Music in 1990, my professor, Dr. Richard Boulanger, played and sang a piece he had written in a xenharmonic tuning. My professor’s music washed over me like an exotic alien atmosphere, and my hair stood on end. I decided almost instantly that I would never write in a twelve-tone tuning ever again. I didn’t entirely keep that promise to myself, as I inevitably experienced the difficulties involved in re-tuning my synthesizer sounds, the fact that any new tuning was unrelated to the black and white keyboard pattern, and the bewilderment of writing music with no theoretical knowledge of the tuning. I had spent my life learning “music theory,” and there was no theory that I knew of for any other tunings. For all these reasons, it was much faster to write with only twelve tones. But eventually I would come to write exclusively in xenharmonic tunings.

I started with 19 notes per octave since it contains 12 notes that closely resemble our regular tuning. That makes it easier! I tried the Bohlen-Pierce scale. I tried 10 notes per octave, 16, 17, and—to my horror—13. I have been composing xenharmonic music for more than 25 years and haven’t even tried all of the tunings I’d like to try. It’s an endless universe.
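The sense in which 19edo “contains” something close to our twelve familiar notes is easy to check: for each 12edo pitch, find the nearest 19edo degree and its error in cents. A sketch:

```python
# For each 12edo pitch, the nearest 19edo degree and its error in cents.
def nearest_19edo(twelve_step: int) -> tuple[int, float]:
    target = twelve_step * 100            # 12edo pitch in cents
    degree = round(target * 19 / 1200)    # closest 19edo degree
    error = degree * (1200 / 19) - target
    return degree, error
```

The unison and octave map exactly, and the fifth (700 cents) lands on degree 11 at about 694.7 cents, only 5 cents flat; other degrees deviate by up to roughly 30 cents, which is part of 19edo’s distinct flavor.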

Elaine and friend holding xenharmonic Vertical Keyboards.

Elaine (right) and friend with xenharmonic Vertical Keyboards.

Take some time now to listen to some xenharmonic music to let the very idea of it sink in. In my long experience of subjecting new listeners to xenharmonic music, I find that it usually takes 30 seconds to a minute for someone’s ears to relax enough to accept the new notes and chords. Some begin to enjoy the music at this point, and some reject it. Some hear it as amazing notes from outer space, some just hear it as “normal,” some hear it as being out of tune, and some are utterly repulsed by it. I assure you, this music is indeed fully in tune, according to the scale it is written in. But it can be a shock to uninitiated ears.

My music:

http://ziaspace.com/_music/discography/zia_drumnspace/
http://ziaspace.com/elaine/chaos/1minusrxsquared/movement1.mp3

My Berklee professor’s music:

http://www.ziaspace.com/elaine/BP/BPmusic/DrB/BP2010_DrB_SolemnSong.mp3
http://boulangerlabs.com

Sean Archibald and others:

http://sevish.com
http://split-notes.com

Like everything else, we are all different in the way we perceive music. When I was getting my master’s degree from New York University, I conducted a number of listening tests for a particular xenharmonic tuning. The subjects were asked to listen to a series of chord progressions and rate how well each one resolved, whether the first and last chords sounded like a tonic, how well they could decipher a key change, etc. They were also asked to rate similar aspects of composed xenharmonic music. The opinions on almost every question were all across the board. It depends on what we’re used to, what culture we are from, what musical styles we’re into, whether we are musically trained, whether we have perfect pitch, and generally how musically open-minded or adventurous we are. Psychoacoustics is a very complicated field.

There are enough musicians in the world composing music with twelve tones that the world does not need me to write more of it.

The way I see it is that there are enough musicians in the world composing music with twelve tones that the world does not need me personally to write more of it. I seem to have a rare knack for composing in xenharmonic tunings, and I am personally drawn to it, so I feel that it’s my calling.

In general, it is the most ignored possibility in musical composition. Western composers over the ages have made many attempts to break out of our tonal system, but usually opted to do so by using unconventional compositional techniques with the same old twelve notes. Twelve-tone serialism, atonal composition, and randomly generated music all manage to completely erase our sense of tonality or “key” that we are accustomed to. But the point of xenharmonic music is not to do away with tonality, but to try new and different tonalities.

Join 2300 other musicians and enthusiasts on the Xenharmonic Alliance Facebook group.


Elaine Walker fingering a chord on a Jupitar retuned electric keyboard

Elaine Walker

Elaine Walker is an electronic musician and microtonal composer who builds new types of music keyboards. She is also the author of a physics/philosophy book, Matter Over Mind: Cosmos, Chaos, and Curiosity.

Chris Brown: Models are Never Complete

Despite his fascination with extremely dense structures, California-based composer Chris Brown is surprisingly tolerant about loosely interpreting them. Chalk it up to being realistic about expectations, or a musical career that has been equally devoted to composing and improvising, but to Brown “the loose comes with the tight.” That seemingly contradictory dichotomy informs everything he creates, whether it’s designing elaborate electronic feedback systems that respond to live performances and transform them in real time or—for his solo piano tour-de-force Six Primes—calculating a complete integration of pitch and meter involving 13-limit just intonation and a corresponding polyrhythm of, say, 13 against 7.

“I’ve always felt that being a new music composer, part of the idea is to be an explorer,” Brown admitted when we chatted with him in a Lower East Side hotel room at a break before a rehearsal during his week-long residency at The Stone.  “It’s so exciting and fresh to be at that point where you have this experience that is new.  It’s not easy to get there.  It takes a lot of discipline, but actually to have the discipline is the virtue itself, to basically be following something, testing yourself, looking for something that’s new, until eventually you find it.”

Yet despite his dedication and deep commitment to uncharted musical relationships that are often extraordinarily difficult to perform, Brown is hardly a stickler for precision.

“If you played it perfectly, like a computer, it wouldn’t sound that good,” he explained. “I always say when I’m working with musicians, think of these as targets. … It’s not about getting more purity.  There’s always this element that’s a little out of control. … If we’re playing a waltz, it’s not a strict one-two-three; there’s a little push-me pull-you in there.”

Brown firmly believes that the human element is central and that computers should never replace people.  As he put it, “It’s really important that we don’t lose the distinction of what the model is rather than the thing it’s modeled on. I think it’s pretty dangerous to do that, actually.”

So for Brown, musical complexity is ultimately just a means to an end: giving listeners greater control of their own experience of what they are hearing. In the program notes for a CD recording of his electro-acoustic sound installation Talking Drum, Brown claimed that the reason he is attracted to complex music is “because it allows each listener the freedom to take their own path in exploring a sound field.”

Brown’s aesthetics grew out of his decades of experience as an improviser—over the years he’s collaborated with an extremely wide range of musicians including Wayne Horvitz, Wadada Leo Smith, and Butch Morris—and from being one of the six composers who collectively create live networked computer music as The Hub. Long before he got involved in any of these projects, Brown was an aspiring concert pianist obsessed with Robert Schumann’s Piano Concerto, which he performed with the Santa Cruz Symphony as an undergrad. Now he has come to realize that even standard classical works are not monoliths.

“Everybody in that Schumann Piano Concerto is hearing something slightly different, too, but there’s this idea somehow that this is an object that’s self-contained,” he pointed out.  “It’s actually an instruction for a ritual that sounds different every time it’s done.  Compositions are more or less instructions for what they should do, but I’m not going to presume that they’re going to do it exactly the same way every time.”

Chris Brown’s first album was released in 1989, ironically the same year as the birth of another musical artist who shares his name, a Grammy Award-winning and Billboard chart-topping R & B singer-songwriter and rapper.  This situation has led to some funny anecdotes involving mistaken identity—calls to his Mills College office requesting he perform at Sweet Sixteen parties—as well as glitches on search engines, including the one on Amazon.

“These are basically search algorithm anomalies,” he conceded wryly. To me it’s yet another reason to heed his advice about machines and not to overly rely on them to solve all the world’s problems.


Chris Brown in conversation with Frank J. Oteri
Recorded at Off Soho Suites Hotel, New York, NY
June 22, 2017—3:00 p.m.
Video presentations and photography by Molly Sheridan
Transcribed by Julia Lu.

Frank J. Oteri:  Once I knew you were coming to New York City for a week-long residency at The Stone and that we’d have a chance to have a conversation, I started looking around to see if there were any recordings of your music that I hadn’t yet heard. When I did a search on Amazon, I kept getting an R & B singer-songwriter and rapper named Chris Brown, who was actually born the year that the first CD under your name was released.

Chris Brown:  Say no more.

FJO:  I brought it up because I think it raises some interesting issues about celebrity. There is now somebody so famous who has your name, and you’ve had a significant career as a composer for years before he was born.  But maybe there’s a silver lining in it. Perhaps it’s brought other people to your music who might not otherwise have known about it—people who were looking for the other Chris Brown, especially on Amazon since both your recordings and his show up together.

CB:  These are basically search algorithm anomalies, but the story behind that is that when the famous Chris Brown started to become famous, I started getting recorded messages on my office phone machine at Mills, because people would search for Chris Brown’s music and it would take them to the music department at Mills.  They would basically be fan gushes for the most part.  Sometimes they would involve vocalizing, because they were trying to get a chance to record.  Sometimes they would ask if he could play their Sweet Sixteen party.  There were tons of them.  At the beginning, every day, there were long messages of crying and doing anything so that they could get close to Chris Brown in spite of the fact that my message was always a professorial greeting.  It didn’t matter.  So it was a hassle.  Occasionally I would engage with the people by saying this is not the right Chris Brown and trying to send them somewhere else.

It’s a common name. When I was growing up, there weren’t that many Chrises, but somehow it got really popular in the ‘80s and ‘90s.  Anyway, these days not much happens, except that what it’s really meant is kind of a blackout for me on internet searches.  It’s hard to find me if somebody’s looking.  When I started working at Mills, the first thing that David Rosenboom said to me when I came in was, “There’s this thing called the internet and you should get an email account.”  Everybody was making funny little handles for themselves as names.  From that day, mine was cbmuse, for Chris Brown Music.  I still have that same email address at Mills.edu.  So I go by cbmuse.  That’s the best I can do.  Sometimes some websites say Christopher Owen Brown, using the John Luther Adams approach to too many John Adamses.  It’s kind of a drag, but on the other hand, it’s a little bit like living on the West Coast anyway: you’re out of the main commercial aspect of your field, which is really in New York. On the West Coast, there’s not as much traffic, so you have more time and space.  To some extent, you’re not so much about your handle; you still get to be an individual and be yourself. I could have made a new identity for myself, but I sort of felt like I don’t want to do that.  I’ve always gone by Chris Brown.  I’ve never really attached to Christopher Brown.  Maybe this is a longer answer than you were looking for.

FJO:  It’s more than I thought I’d get. I thought it could have led to talking about your piece Rogue Wave, which features a DJ. Perhaps Rogue Wave could be a gateway piece for the fans of the other Chris Brown to discover your music.

CB:  I don’t think that happens though.  That was not an attempt to do something commercial.  I could talk about that if you like, since we’re on it.  Basically, the DJ on it, Eddie Def, was somebody I met through a gig where I was playing John Zorn’s music at a rock club in San Francisco and through Mike Patton, who knew about him. He invited Eddie to play in the session and he just blew me away.  I was playing samples and he was playing samples.  I was playing mine off my Mac IIci, with a little keyboard, and he was playing off records.  He was cutting faster than I was some of the time.  Usually you think, “Okay, I’ve got a sample in every key. I can go from one to the other very quickly.”  He just matched me with every change.  So we got to be friends and really liked each other.  We did a number of projects together.  That was just one of them. He’s a total virtuoso, so that’s why I did a piece with him.

FJO:  You’ve worked with so many different kinds of musicians over the years.  From a stylistic perspective, it’s been very open-ended.  The very first recording I ever heard you on, which was around the time it came out, was Wayne Horvitz’s This New Generation, which is a fascinating record because it mixes these really out there sounds with really accessible grooves and tunes.

CB:  I knew Wayne from college at UC Santa Cruz. He was kind of the ringmaster of the improv scene in the early ‘70s in Santa Cruz.  I wasn’t quite in that group, but I would join it and I picked up a lot about what was going on in improvised music through participating with them in some of their jam sessions.  Wayne and I were friends, so when he moved to New York, I’d sometimes come to visit him.  Eventually, he moved out of New York to San Francisco.  I had an apartment available in my building, so he lived in it.  He was basically living above us. He was continuing to do studio projects, and this was one of them.  He had his little studio setup upstairs and one day he said, “Would you come upstairs and record a couple of tracks for me?” He played his stuff and he asked me to play one of the electro-acoustic instruments that I built, so I did.  I didn’t think too much more of it than that, but then it appeared on this Elektra Nonesuch record and there was a little check for it. It was my little taste of that part of the new music scene that was going on in New York.  Eventually Wayne moved out and now he lives in Seattle. We still see each other occasionally.  It’s an old friendship.

FJO:  You’ve actually done quite a bit of work with people who have been associated with the jazz community, even though I know that word is a limiting word, just like classical is a limiting word. You’ve worked with many pioneers of improvisational music, including Wadada Leo Smith and Butch Morris, and you were also a member of the Glenn Spearman Double Trio, which was a very interesting group.  It’s very sad.  He died very young.

CB:  Very.

FJO:  So how did you become involved with improvised music?

CB:  Well, I was a classically trained pianist and I eventually wound up winning a scholarship and played the [Robert] Schumann Piano Concerto with the Santa Cruz Symphony. But I was starting to realize that that was not going to be my future because I was interested in humanities and the new wave of philosophy—Norman O. Brown.  I got to study with him when I was there, and he told me I should really check out John Cage because he was a friend of Cage’s: “If you’re doing music, you should know what this is.”  So I went out and got the books, and I was completely beguiled and entranced by them.  It was a whole new way of listening to sound as well as music, or music as sound, erasing the boundary.  So I was very influenced by that, but almost at the same time I was getting to know these other friends in the department who were coming more out of rock backgrounds.  They were influenced by people like Cecil Taylor and the Art Ensemble of Chicago and the free jazz improvisers.  These jam sessions that Wayne would run were in some way related.  There were a lot of influences on that musical strain, but that’s where I started improvising.

To me, improvisation seems like the most natural thing in the world.

I was also studying with Gordon Mumma and with a composer named William Brooks, who was a Cage scholar as well as a great vocalist and somebody who’d studied with Kenneth Gaburo. With Brooks, I took a course that was an improvisation workshop where the starting point was no instruments, just movement and words—that part was from the Gaburo influence.  That was a semester of every night getting together and improvising with an ensemble.  I think it was eight people.  I’d love if that had been documented.  I have never seen or heard it since then, but it influenced me quite a bit.  To me, improvisation seems like the most natural thing in the world. Why wouldn’t a musician want to do it?  Then, on the other side of this, people from the New York school were coming by and were really trying to distinguish what they did from improvisation.  I think there was a bit of an uptown/downtown split there.  They were trying to say this is more like classical music and not like improvisation.  It’s a discipline of a different nature.  Ultimately I think it’s a class difference that was being asserted.  And I think Cage had something to do with that, trying to distinguish what he did from jazz.  He was trying to get away from jazz.

I didn’t have much of a jazz background, but I had an appreciation for it growing up in Chicago. I had some records.  At the beginning I’d say my taste in jazz was a little more Herbie Hancock influenced than Cecil Taylor.  But once I discovered Cecil Taylor, when I put that next to Karlheinz Stockhausen, I started to see that this is really kind of the same. This is music of the same time.  It may have been made in totally different ways, and it results from a different energy and feeling from those things, but it’s not that different.  And it seems to me that there’s more in common than there is not.  So I really never felt there was that boundary.  So I participated in sessions with musicians who were improvising with or without pre-designed structures. It was just something I did.

Once I discovered Cecil Taylor, when I put that next to Karlheinz Stockhausen, I started to see that this is really kind of the same.

The first serious professional group I got involved with was a group called Confluence.  This came about in the late 1970s with some of my older friends from Santa Cruz, who’d gone down and gotten master’s degrees at UC San Diego. It was another interesting convergence of these two sides of the world.  They worked with David Tudor on Rainforest, the piece where you attach transducers to an object, pick up the sound after it’s gone through the object, and then amplify it again.  Sometimes there’s enough sound out of the object itself that it has an acoustic manifestation.  Anyway, it’s a fantastic piece and they were basically bringing that practice into an improvisation setting.  The rule of the group was no pre-set compositional design and no non-homemade instruments.  You must start with an instrument you made yourself and usually those instruments were electro-acoustic, so they had pickups on them, somewhat more or less like Rainforest instruments.  The other people in that group were Tom Nunn and David Poyourow.  When David got out of school he wanted to move up to the Bay Area and continue this group.  One of the members of it then had been another designer, a very interesting instrument maker named Prent Rodgers.  And he bailed.  He didn’t want to be a part of it.  So they needed a new member.  So David asked me if I’d be interested, and I was.  I always had wanted to get more involved with electronic music, but being pretty much a classical nerd, I didn’t really have the chops for the technology.  David, on the other hand, came from that background.  His father was a master auto mechanic, from the electrical side all the way to the mechanical side. David really put that skill into his instrument building practice and then he taught it to me, basically.  He showed me how to solder, and I learned from Tom how to weld, because some of these instruments were made out of sheet metal with bronze brazing rods.  
I started building those instruments in a sort of tradition they’d begun, searching for my own path with it, which eventually came about when I started taking pianos apart and making electric percussion instruments from them.

So, long story short, I was an improviser before I was a notes-on-paper composer.  That’s how I got into composing.  I started making music directly with instruments and with sound.  It was only as that developed further that I started wanting to structure them more.

FJO:  So you composed no original music before you started improvising?

CB:  There were a few attempts, but they were always fairly close to either Cageian influence or a minimalist influence.  I was trying out these different styles.  Early on, I was a follower and appreciator of Steve Reich’s music. Another thing I did while I was at Santa Cruz was play the hell out of Piano Phase.  We’d go into a practice room and play for hours, trying to perfect those phase transitions with two upright pianos.  I was also aware of Steve’s interest in music from Bali and from Africa. These were things that I appreciated also.

FJO:  I know that you spent some time in your childhood in the Philippines.

CB:  I lived in the Philippines between the ages of five and nine.  It wasn’t a long time, as life goes, but it was also where I started playing the piano.  I was five years old and taking piano lessons there.  I was quite taken with the culture, or let’s say with the cultural experience I had while I was there.  I went to school with Filipino kids, and it was not isolated in some kind of American compound.  I grew up on the campus of the University of the Philippines, which is a beautiful area outside of the main city, Manila.

FJO:  Did you get to hear any traditional music?

Being an improviser is a great way to get into a cultural interaction.

CB:  Very little because the Philippines had their music colonized.  It exists though, and later I reconnected with musicians at that school, particularly José Maceda, which is another long story in my history.  I’ve made music with Filipino instruments and Filipino composers.  One of the nice things about being an improviser is that collaboration comes much easier than if you’re trying to control everything about the design of the piece of music, so I’ve collaborated with a lot of people all over the place, including performances before we really knew what we were doing.  It’s an exploratory thing you do with people, and it’s a great way to get into a cultural interaction.

Chris Brown in performance with Vietnamese-American multi-instrumentalist Vanessa Vân-Ánh Võ at San Francisco Exploratorium’s Kanbar Forum on April 13, 2017

FJO:  I want to get back to your comment about your first pieces being either Cageian or influenced by minimalism.  I found an early piano piece of yours called Sparks on your website, which is definitely a minimalist piece, but it’s a hell of a lot more dissonant than anything Reich would have written at that time. It’s based on creating gradual variance through repetition, but you’re fleshing out pitch relations in ways that those composers wouldn’t necessarily have done.

CB:  I’m very glad you brought that up.  I think that was probably the first piece that I still like and that has a quality to it that was original to me.  From Reich I was used to the idea of a piece of music as a continuous flow of repetitive action.  But it really came out of tuning pianos, basically banging on those top notes of the piano as you’re trying to get them into tune. I started to hear the timbre up there as being something that splits into different levels.  You can actually hear the pitch if you care to attend to it.  A lot of times the pitch is hard to get into tune there, especially with pianos that have three strings [per note]. They’re never perfectly in tune.  They’re also basically really tight, so their harmonic overtones are stretched.  They’re wider than they should be.  They’re inharmonic, rather than harmonic, so it’s a kind of a timbral event.  So what I was doing was kind of droning on a particular timbre that exists at the top of the piano, trying to move into a kind of trance state while I was moving as fast as I could repeating these notes. The piece starts at the very top two notes, and then it starts widening its scope, until it goes down an octave, and then it moves back up.  It was a process-oriented piece.  There wasn’t a defined harmonic spectrum to it except that which is created when you make that shape over a chromatically tuned top octave of the piano.  It didn’t have a score.  It was something that was in my brain.  It would be a little different every time, but basically it was a process, like a Steve Reich process piece, one of the earliest ones.

FJO:  So when did you create the notated score for it?

CB:  Well, I tried a couple of times, but it wasn’t very satisfactory. I made the first version for a pianist named Jennifer Hymer, who lives in Germany. She played it first, probably around 2000. Then 15 years later, another pianist at Mills—Julie Moon—played it, and she played the heck out of it. So now there is a score, but I still feel like I need to fix that score.

FJO:  I think it’s really cool, and I was thrilled that there was a score for it online that I could see. You also included a recording of it.

CB:  I just don’t think the score reflects as well as it could what the piece is about.  I always intended for there to be a little bit of freedom in it that isn’t apparent when you just write one set of notes going to the next set of notes.  There has to be a certain sensibility that needs to be described better.

FJO:  Bouncing off of this, though it might seem like a strange connection to make, when I heard that piece and thought about how it’s taking this idea of really hardcore early minimalist process music, but adding more layers of dissonance to it, it seemed in keeping with a quote that you have in your notes for the published recording of Talking Drum, which I thought was very interesting:  “I favor densely complex music, because it allows each listener the freedom to take their own path in exploring a sound field.”  I found that quote very inspiring because it focuses on the listener and giving the listener more choices about what to focus on.

CB:  I think I still agree with that. I’m not always quite going for the most complex thing I can find, but I do have an attraction to it. Most of the pieces that I do wind up being pretty complicated in terms of how I get to the result I’m after, even though those results may require more or less active listening. I was kind of struck last night by the performance I did of Six Primes with Zeena Parkins and Nate Wooley. The harmonic aspect of the music is much more prominent and much more beauty-oriented than the piano version is. When I play the piano version, it’s more about the intensity of the rhythms and of the dissonance of the piano, as opposed to the more harmonious timbre of the harp or the continuous and purer sound of the trumpet; the timbre makes the way that you play the notes different.

An excerpt from Chris Brown, Zeena Parkins and Nate Wooley’s trio performance of Structures from Six Primes at The Stone on June 21, 2017.

FJO: But I think also that this strikes to the heart of the difference between composition and improvisation.  I find it very interesting that you’ve gravitated toward these really completely free and open structures as an improviser, but your notated compositions are so highly structured.  There’s so much going on, and in a piece like Six Primes, you’re reflecting these ratios not just in the pitch relations, but also in the rhythmic relationships. Such complicated polyrhythms are much harder to do in the moment.

CB:  Of course.  But that’s why I’m doing it. I’m interested in doing things that haven’t been done before.  I’ve always felt that being a new music composer, part of the idea is to be an explorer.  Sometimes that motivation is going to get warped by the marketing of the music or by the necessity to make a career, but that was always what I was attracted to about it. From the first moment that I heard Cage’s music, I said, “This is an inventor.  This is somebody who’s inventing something new.”  It’s so exciting and fresh to be at that point where you have this experience that is new.  It’s not easy to get there.  It takes a lot of discipline, but actually to have the discipline is the virtue itself, to basically be following something, testing yourself, looking for something that’s new, until eventually you find it.

I’ve always felt that being a new music composer, part of the idea is to be an explorer.

This is the third cycle of me learning to play these pieces. At first, I just wanted to know it was possible. And next, I wanted to record it. This time, I’m looking to do a tour where I can perform it more than once. Each time I do it, it gets easier. At this point, I’m finally getting to what I want. For example, with 13 against 7, I know perfectly how it sounds, but I don’t have to play it mechanically. It can breathe like any other rhythm does, but it has an identity that I can recognize because I’ve been doing it long enough. It seems strange to me that music is almost entirely dominated by divisions of two and three. We have five every once in a while, but most people can’t really do a five against four, except for percussionists. There are a lot of complex groupings of notes in Chopin, but those are gestures, almost improvisational gestures I think, rather than actual overlays of divisions of a beat. Some of this is influenced by my love of and interest in African-based musics that have a complexity of rhythm that is simply beyond the capability of a standard European-trained musician: actually getting into the divisions of the time and executing them perfectly, and doing them so much that they become second nature, so that they can be alive in performance rather than just reproduced. It’s a big challenge, but I’m looking for a challenge and I’m looking for a new experience that way.

An excerpt from Chris Brown’s premiere solo piano performance of Six Primes in San Francisco in 2014.

FJO:  So do you think you will eventually be able to improvise those polyrhythms?

CB:  Maybe, eventually, but I think you have to learn it first. The improvising part is after you’ve learned to do the thing already.  Yesterday I was improvising some of the time. What you do is you start playing one of the layers of the music. In Six Primes part of the idea is you have this 13 against 7, but 13 kind of exists as a faster tempo of the music, and 7 is a slower one.  They’re just geared and connected at certain places, but at any one time in your brain, while you’re playing that rhythm, it might be a little bit more involved in inflecting the 13 than the 7. Sometimes, when things are really pure, you get a feeling for both of them and they’re kind of talking to each other.  As a performer, I would say that that’s the goal.  It’s probably rarer than I wish at this point.  But the only way you can get there is by lots of practice and eventually it starts happening by itself.  I think it’s the same as if you’re playing the Schumann Piano Concerto.  You’re not aware of every gesture you’re making to make that music.  You’ve put it into your body, and it kind of comes out by rote.  You know you’re experiencing the flow of the music, and your body knows how to do it because you trained it.  So it’s the same with Six Primes, but it’s just the materials are different and the focus is different.
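Brown’s image of 13 and 7 as two tempos that are “geared and connected at certain places” can be checked numerically. The short Python sketch below is an editorial illustration, not anything from Brown’s score: it lays both pulse streams on a shared grid of lcm(13, 7) = 91 ticks per cycle and shows that the two layers coincide only once per cycle, at the downbeat.

```python
from math import lcm  # math.lcm requires Python 3.9+

def polyrhythm(a, b):
    """Onset times of two even pulse streams (a and b notes per cycle)
    on a shared grid of lcm(a, b) ticks per cycle."""
    grid = lcm(a, b)
    layer_a = [i * grid // a for i in range(a)]  # one onset every grid//a ticks
    layer_b = [i * grid // b for i in range(b)]  # one onset every grid//b ticks
    return grid, layer_a, layer_b

grid, thirteens, sevens = polyrhythm(13, 7)
print(grid)                           # 91 ticks per cycle
print(set(thirteens) & set(sevens))   # {0}: the layers meet only at the downbeat
```

Because 13 and 7 are coprime, no onset of one layer ever lands on an onset of the other inside the cycle, which is why the two streams read as independent tempos that lock together once per bar.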

An excerpt from Chris Brown's piano score for Six Primes

An excerpt from the piano score for Six Primes © 2014 by Chris Brown (BMI). Published by Frog Peak Music. All rights reserved. International copyright secured. Reprinted with permission.

FJO:  And similarly, listening to it, you might not necessarily hear that that’s what’s going on.  But maybe that’s okay.

CB:  Yes, that goes to the quote: there’s a multi-focal way of listening that I’m promoting; the music isn’t designed to have one focal point. It’s designed to have many layers, and that basically means that listeners are encouraged to explore it for themselves. It’s an active listening, rather than being told you should be listening primarily to this part and not be aware of that part.

The music isn’t designed to have one focal point.

FJO:  In a way, this idea of having such an integral relationship between pitches and rhythms is almost a kind of serialism, but the results are completely different. I also think your aesthetics, and what you’re saying about how one listens to it, is totally different.

CB:  I wouldn’t say it’s modeled on that, but I do like the heavy use of structure. It’s a sculptural aspect of making music. I do a lot of pre-composition. This stuff isn’t just springing out of nowhere. Six Primes actually has a very methodical formal design that’s explained in the notes to the CD. The basic idea is that you have these six prime numbers: 2, 3, 5, 7, 11, and 13. Those are the first six prime numbers. They’re related to intervals that are tuned by relationships that include that number as their highest prime factor. I know that sounds mathematical, but I’m trying to say it as efficiently as possible. For example, the interval of a perfect fifth is made of a relationship of a frequency that’s in the ratio of 3 to 2. So the highest prime of that ratio is a 3. Similarly, a major third is defined by the ratio of 5 to 4. So 5 is the highest prime. There’s also the 2 in there, but the 5 is the higher prime and that defines the major third. There are other intervals that are related to it, such as a 6 to 5, which is a minor third, where the 5 is also the highest prime. And 5 to 3, the major sixth, etc. Basically Western music is based around using 2, 3, and 5 and intervals that are related to that. Intervals that use 7 as the highest prime are recognizable to most western music listeners, but they’re also out of tune by as much as a third of a semi-tone. Usually people start saying, “Oh, I like the sound of that. I can hear it. It’s a harmony, but it sounds a little weird.” Particularly the 7 to 6 interval, which is a minor third that’s smaller than any of the standard ones that Western people are used to, is very attractive to most people but also kind of curious and possibly scary. When you take it to 11, you get into things that are halfway between the semitones of the equal tempered chromatic scale. And 13 is somewhere even beyond that. Okay, so there are all these intervals. 
The tuning for Six Primes is a twelve-note scale that contains at least two pitches from each of these first six prime factors, which results in a total of 75 unique intervals between each note and every other one in the set.
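The interval arithmetic Brown walks through is easy to verify. The Python snippet below is an editorial gloss, not drawn from the Six Primes materials: it converts each ratio he mentions to cents (1200 cents per octave) and measures its distance from the nearest 12-tone equal-tempered step, confirming that the 7-limit minor third sits about a third of a semitone flat and the 11-limit interval nearly halfway between semitones.

```python
import math

def cents(ratio):
    """Size of a frequency ratio in cents (1200 cents = one octave)."""
    return 1200 * math.log2(ratio)

# Intervals mentioned above, labeled by their highest prime factor.
intervals = {
    "3/2 perfect fifth (3-limit)":      3 / 2,
    "5/4 major third (5-limit)":        5 / 4,
    "7/6 small minor third (7-limit)":  7 / 6,
    "11/8 (11-limit)":                 11 / 8,
    "13/8 (13-limit)":                 13 / 8,
}

for name, ratio in intervals.items():
    c = cents(ratio)
    # Deviation from the nearest equal-tempered step (multiples of 100 cents).
    deviation = c - 100 * round(c / 100)
    print(f"{name}: {c:7.2f} cents, {deviation:+6.2f} from 12-TET")
```

Running this shows 7/6 at about 266.87 cents, roughly 33 cents (a third of a semitone) below the tempered minor third, and 11/8 at about 551.32 cents, nearly a quarter tone away from either neighboring step, matching Brown’s description.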

The cover for the CD of Six Primes

Last year, New World Records released a CD of Chris Brown performing Six Primes.

FJO:  Cellists and violinists tune their instruments all the time, and since their fingerboards are unfretted, any pitch is equally possible. The same is true for singers. But pianists play keyboards that are restricted to 12 pitches per octave and tuned to 12-tone equal temperament. And since pianists rarely tune their own instruments, 12-tone equal temperament is basically a pre-condition for making music, and it’s really hard to think beyond it. As a classically trained pianist, how were you able to open your ears to other possibilities?

CB: It was hard. It was very frustrating. It took me a long time, and it started by learning to tune my instrument myself. The first thing was what are these pitches? Why do I not understand what everybody’s talking about when they’re talking about in tune and out of tune? I’m just not listening to it, because I’m playing on an instrument that’s usually somewhat out of tune. Basically pianists don’t develop the same kind of ear that violinists have to because they don’t have to tune the pitch with every note. So I was frustrated by my being walled off from that. But I guess not frustrated enough to pick up the violin and change instruments.

While I was an undergraduate and started getting interested through Cage in 20th-century American music, I discovered Henry Cowell’s piano music, the tone cluster pieces, and I loved them.  I just took to them like a duck to water, and I got to be good at it.  I had a beautiful experience playing some of his toughest tone cluster pieces at a celebration of him in Menlo Park during the 1976 bicentennial. I really bonded with that music and played it like I owned it.  I could play it on the spot. I had it memorized.  The roar of a tone cluster coming out of the piano was like liberation to me.

FJO:  And you recorded some of those for New Albion at some point.

CB:  That came out of a concert Sarah Cahill put together of different pianists playing; it was nice that that came out.

FJO:  It’s interesting that you mention Cowell because he was another one of these people, like Wayne Horvitz, who could take totally whacked out ideas and find a way to make them sound very immediate and very accessible. It’s never off-putting; it’s more like “Oh, that’s pretty cool.” It might consist of banging all over the piano, but it’s also got a tune that you can walk away humming.

CB:  I like that a lot about Cowell.  He’s kind of unaffected in the way that something attracted him. He wrote these tunes when he was a teenager, for one thing.  But he wrote tunes for the rest of his life, too.  Sometimes he wrote pieces that have no tune at all.  The piece Antinomy, for example, is amazingly harsh. There’s definitely some proto-Stockhausen there, but it’s not serial.  I think that the ability to not feel like you need to restrict yourself to any particular part of the language that you happen to be employing at the moment is really an admirable achievement.  There’s something so tight about the Western tradition: once you start developing this personal language, you must not waver; this is the thing that you have to offer, the projection of your personality; how will you be recognized otherwise? I think that’s ultimately a straitjacket, so I’ve always admired people like Cowell and Anthony Braxton. Yesterday I was talking to Nate Wooley about the latest pieces that Braxton is putting out, where he’s entirely abandoned the pulse; it’s all become just pure melody. He’s changing.  Why do we think that’s a bad idea?  Eclecticism—if you can do it well and can do it without feeling like you’re just making a collage with stuff you don’t understand—is the highest form: to be able to integrate more than one kind of musical experience into your work.

FJO:  It’s interesting that you veered into a discussion about discovering Cowell’s piano music after I asked you how you got away from 12-tone equal temperament. Most of Cowell’s music was firmly rooted in 12-tone equal, but he did understand the world beyond it and even tried to explore synchronizing pitch and rhythmic ratios in his experiments with the rhythmicon that Leon Theremin had developed right before he was kidnapped and brought back to the Soviet Union.

CB:  I was definitely influenced by [Cowell’s book] New Musical Resources. As I read about the higher harmonics and integrating them into chords, I would reflect back on what it sounds like when you play it on the piano.  It is very dissonant because of the tuning.  And I realized that.  So I thought, “Well, okay, he just never got there.  He didn’t learn to tune his own piano, maybe I should do that, you know.” I get that some in Six Primes, I think, because there’s an integral relationship between all the notes. Even though the strings are inharmonic, there’s more fusion in the upper harmonics that can happen.  So these very dissonant chords also sound connected to me.  They’re not dissonant in the same way that an equal tempered version of it is.  They have a different quality.

I’m also noticing from the other piece we played the night you attended that was using the Partch scale, if you build tone cluster chords within the Partch scale, you get things that sound practically like triads, only they buzz with a kind of fusion that you can only have when the integral version of major seconds is applied carefully.  You get all kinds of different chords out of that.  It’s wonderful.

FJO:  Now when you say Partch scale, we’re basically talking about 11-limit just intonation, in terms of the highest primes, since the highest prime in his scale is 11.

CB:  Right, but it’s more than that. He did restrict himself to the 11-limit, but he didn’t include everything that’s available within that.  He made careful, judicious selections so that he could have symmetrical possibilities inside of the scale.  It’s actually more carefully and interestingly foundationally selected than I knew before I really studied it closely.
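The 11-limit idea, that every ratio in Partch's scale uses prime factors no larger than 11, can be checked mechanically. A minimal sketch in Python (the ratios shown are a few degrees of Partch's 43-tone scale, used as examples; the function itself is generic):

```python
from fractions import Fraction

def prime_limit(ratio: Fraction) -> int:
    """Largest prime factor appearing in the ratio's numerator or denominator."""
    def largest_prime(n: int) -> int:
        largest, p = 1, 2
        while p * p <= n:
            while n % p == 0:
                largest, n = p, n // p
            p += 1
        return n if n > 1 else largest  # any leftover n is itself prime
    return max(largest_prime(ratio.numerator),
               largest_prime(ratio.denominator))

# A few degrees of Partch's scale: all within the 11-limit.
for r in [Fraction(16, 15), Fraction(11, 10), Fraction(7, 4), Fraction(81, 80)]:
    assert prime_limit(r) <= 11
```

A scale is 11-limit exactly when `prime_limit` returns at most 11 for every degree; as Brown notes, Partch's 43 notes are a curated subset of such ratios, not everything the limit allows.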

FJO:  But he worked with his own instruments which were designed specifically to play his 43-note scale whereas you are playing this score on a standard 7-white, 5-black keyed keyboard.

CB:  I took an 88-key MIDI controller and I was using it to trigger two octaves of 43 notes.  So I’ve mapped two octaves to the 88 keys. It winds up being 86, but it is possible to do that. I’m thinking in the future of figuring out a way to be able to shift those octaves so I’m not stuck in the same two-octave range, which I haven’t done yet, but that’s kind of trivial programming-wise.
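The remapping Brown describes, 86 of the controller's 88 keys covering two octaves of a 43-note scale, is simple index arithmetic. A hypothetical sketch (the 392 Hz base reflects Partch's 1/1 on G; the starting key and the `octave_shift` parameter are assumptions for illustration, not Brown's actual patch):

```python
def key_to_freq(midi_key, ratios, base_freq=392.0, lowest_key=21, octave_shift=0):
    """Map a MIDI key number onto a repeating just-intonation scale.

    `ratios` holds one octave of scale degrees (43 entries for Partch),
    each a number in [1, 2).  Only len(ratios) * 2 keys starting at
    `lowest_key` are mapped, i.e. 86 keys for the Partch scale.
    """
    index = midi_key - lowest_key
    if not 0 <= index < 2 * len(ratios):
        return None  # key falls outside the mapped two-octave window
    octave, degree = divmod(index, len(ratios))
    return base_freq * ratios[degree] * 2.0 ** (octave + octave_shift)
```

The `octave_shift` argument sketches the range-shifting Brown mentions wanting to add: incrementing it transposes the whole mapped window up an octave without remapping any keys.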

FJO:  Of course, the other problem with that is the associations the standard keyboard has with specific intervals.

CB:  You have to forget that part, and that’s why I didn’t do it in Six Primes.  And also, if I’d done it on an acoustic piano, it really messes up the string tension on the piano.

FJO:  Julian Carrillo re-tuned a piano to 96 equal and that piano still exists somewhere.

CB:  Yeah, but you can’t re-tune it easily, let’s put it that way. And it loses its character throughout the range because the character of the piano is set up by the variable tension of the different ranges of its strings.

FJO:  But aside even from that, it changes the basic dexterity of what it means to play an octave and what it means to play a fifth.  Once you throw all those relationships out the window, your fingers are not that big, even if you have the hands of Rachmaninoff.

CB:  It becomes a different technique for sure. I’m not trying to extend the technique. What I’m doing with this is essentially I’m making another chromelodeon, which was Partch’s instrument that he used to accompany his ensemble and to also give them the pitch references that they needed, especially the singers, to be able to execute the intervals that he was writing.

FJO:  Well that’s one of the things I’m curious about.  When you’re working with other musicians obviously you can re-tune the keyboard.  You can re-tune a piano, you can work with an electronic keyboard where all these things are pre-set. But the other night, you were working with a cellist who sang as well and an oboist.  To get these intervals on an oboe requires special fingerings, but most players don’t know them.  With a cello there are no frets, so anything’s possible, but you really have to hear the intervals in order to reproduce them.  That’s even truer for a singer.  So how do those things translate when you work with other musicians, and how accurate do those intervals need to be for you?

CB:  Those are two questions really.  But I think the key is that you’ve got to have musicians who are interested in being able to hear and to play them.  You can’t expect to write them and then just get exactly what you want from any musician.  Until we wake up 150 years from now and maybe everybody will be playing in the Partch scale so you could write it and everybody can do it!  That’s a fantasy, but I think we’re moving more in that direction.  There are more and more musicians who are interested in learning to play these intervals and all I’m doing is exploiting what’s there.  I’m interested in it.  I talk to my friends who are, and they want to learn how to play like that and that’s what’s happening.  It’s a great thing to be able to have that experience, but it’s not something you can create by yourself.  You have to work with the people who can play the instruments.  For example, you mentioned the oboe. I asked Kyle [Bruckmann] what fingerings he’s using.  “Shouldn’t I put this in the score?”  And he said, “Most of the time what I’m doing is really more about embouchure.  And it’s maybe something that’s not so easily described.”  So it comes down to he’s getting used to what he needs to do with his mouth to make this pitch come out; he’s basically looking at a cents deviation.  So I’ll write the note, and I’ll put how many cents from the pitch that he’s fingering, or the pitch that he knows needs to be sounded.  He’s playing it out of tune with what the horn is actually designed to create and he’s limited in the way that notes sound.  He can’t do fortissimo on each of these notes.  He’s working with an instrument that’s designed for a tuning that he’s trying to play outside of.  It’s crazy. But so far, I would say it’s challenging, but not frustrating so much if I’m translating his experience correctly.  He seems to be very eager to be able to do it, and he’s nailing the pitches.  
Sometimes I test him against my electronic chromelodeon and he’s almost always right on the pitch. He’s looking at a meter while he’s playing.  It’s something that a musician couldn’t have done 10 or 15 years ago before those pitch meters became so cheap and readily available.
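The cents notation Brown describes for the oboe parts rests on one conversion: a deviation of c cents multiplies a frequency by 2^(c/1200), and the interval between two frequencies is 1200·log2(f1/f2) cents. A small sketch (the 7/4 example is illustrative, not taken from Brown's score):

```python
import math

def apply_cents(freq, cents):
    """Frequency after shifting by a signed number of cents."""
    return freq * 2.0 ** (cents / 1200.0)

def cents_between(f1, f2):
    """Signed size of the interval from f2 up to f1, in cents."""
    return 1200.0 * math.log2(f1 / f2)

# Example: a just 7/4 above A4 (440 * 7/4 = 770 Hz) sits about 31 cents
# below the equal-tempered minor seventh (G5), so a part like this would
# show the fingered G with a deviation of roughly -31 cents.
deviation = cents_between(440 * 7 / 4, 440 * 2 ** (10 / 12))
```

This is exactly what the player's meter is doing: finger the written note, then pull the needle to the printed cents offset.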

FJO:  James Tenney had this theory that people hear within certain bands of deviation. If you study historical tunings like Werckmeister III, the key of C has a major third that’s 390 cents. In equal temperament, it’s 400 cents, which is way too sharp, since a pure major third is 386 cents. You can clearly hear the difference, but a third of 390 is close enough to 386 for most people.
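Those figures are easy to verify: an interval's size in cents is 1200·log2 of its frequency ratio. A quick check (the 390-cent Werckmeister third is quoted from the discussion, not recomputed from the temperament's full definition):

```python
import math

def ratio_to_cents(ratio):
    """Size of a frequency ratio in cents (1200 per octave)."""
    return 1200.0 * math.log2(ratio)

pure_third = ratio_to_cents(5 / 4)        # ~386.31 cents
et_third = ratio_to_cents(2 ** (4 / 12))  # 400 cents by construction
# Equal temperament is ~13.7 cents sharp of pure; the quoted
# Werckmeister third at 390 cents is only ~3.7 cents sharp.
```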

CB:  I always say when I’m working with musicians, think of these as targets. If you played it perfectly, like a computer, it wouldn’t sound that good. For example, last night, we had to re-tune the harp to play in the Six Primes tuning. Anybody who knows about harp tuning realizes there’s seven strings in the octave and you get all the other notes by altering one semitone sharp or flat on one of those strings. So it was a very awkward translation. Basically we had a total of 10 of the 12 Six Primes pitches represented. Two of them we couldn’t get. And the ones that we had were sometimes as much as 10 cents out, which is definitely more than it should be to be an accurate representation. But again, this is where the loose comes in with the tight.

In certain cases that wouldn’t work, but in a lot of cases it does. A slight out-of-tuneness can result in a chorus effect as part of the music, and I like that; it gives a shimmer. It’s like Balinese tuning. If that’s what we have to accept on this note, well then so be it you know. It actually richens the music in a way. It’s not about getting more purity. That’s what I feel like. There’s a thing I never quite agreed with Lou Harrison about, because he was always saying these are the real pure sounds. These are the only right ones. But they can get kind of sterile by themselves. He didn’t like the way the Balinese mistuned things. But from all those years of tuning pianos, I love the sound of a string coming into tune, the changes that happen, it makes the music alive on a micro-level. It’s important to be able to hear where the in-tune place is, but to play around that place is part of what I like. I don’t expect it to be perfectly in tune. Maybe it’s because I play a piano and on the extreme ranges of the piano, you can’t help that the harmonics are out of tune. They just are. There’s always this element that’s a little out of control, as well as the part that we can master and make truly evoke harmonic relationships.

FJO:  Now in terms of those relationships, is that sense of flexibility and looseness true for these rhythms as well?  Could there be rubatos in 17?

CB:  Yeah, I think that’s what I was saying about being able to play the rhythm in a lively way.  They can shift.  They can talk to each other.  Little micro-adjustments to inflect the rhythm.  If we’re playing a waltz, it’s not a strict one-two-three; there’s a little push-me pull-you in there. That’s how you give energy to the piece.  I think that it’s hard to get there with these complex relationships, but it’s definitely possible.

FJO:  So is your microtonal music always based on just intonation?  Have you ever explored other equal temperaments?

CB:  I’ve looked at them, but they don’t interest me as much because I’m more attracted to the uneven divisions than to the even ones.  Within symmetrical divisions, you can represent all kinds of things and you can even make unevenness out of the evenness if you like.  But it seems like composers get drawn to the kind of symmetrical kinds of structures, rather than asymmetrical ones.  Symmetry is fine, but somehow it reminds me of the Leonardo figure inside the triangle and the circle.  It’s ultimately confining.  I like the roughness and the unevenness of harmonic relationships.

FJO:  We only briefly touched on electronics when you said that you had a rough start with it as a classical music nerd. But I was very intrigued the other night by how Kyle Bruckmann’s oboe performance was enhanced and transformed by real-time electronic manipulations in Snakecharmer, and was very curious after you mentioned that you had figured out how to make this old piece work again. I know the recording that Willie Winant made of that piece that was released in 1989, but to my ears it sounds like a completely different piece.  I think I like the new version even more because it sounds more like a snake charmer to me this time; I didn’t quite understand the title before.

CB:  There are three recorded versions of that old piece.

FJO:  That was the only one I’ve heard.

CB:  They’re on the Room record.

FJO:  I don’t know that record.

CB:  Okay, that was rare.  It was a Swiss release.  But that’s kind of an important one for me in my development with electro-acoustic and interactive music. I should get it to you.  Anyway, the basic idea is any soloist can be the snake charmer, the person who’s instigating the feedback network to go through its paces and sort of guiding it.  Probably the strangest was when Willie did it because he can’t sustain.  He’s basically playing percussion, and he’s just basically playing whatever he hears and interacting with it intuitively.  But another version of it was with Larry Ochs playing sopranino saxophone so that’s probably closer; you might hear the relationship there.  It’s more the traditional image of the snake charmer.  It sounds an awful lot like a high oboe; that was a good version.  There’s also the version that I performed, singing and whistling as the input.  Those were three different tracks, but they all start out in a similar way.  Basically the programming aspect is that it goes through a sequence of voices.  And each of those voices transposes the input that it’s receiving from the player in different intervals as the piece goes on.  So there’s a shape of starting with a high transposition going down to where it’s no transposition and below and up again.  It’s a simple sinusoid-type shape.  The next voice comes in and does the same thing with a slightly different rhythmic inflection, then two voices come in together and fill out the field.  That’s the beginning of Snakecharmer in every version so far.  There are about six different voicing changes which are in addition to transposing in slightly different ways to provide rhythmic inflections.  They only respond on the beat. Whatever sound is coming in when it’s time for them to play, that’s the sound that gets transposed.  There are four of these processes going on at once.  Once again, it’s that complexity going on in the chaos created by these different orderings, transpositions of the source.  
The other thing is the reason it’s a feedback network is that there comes a point where the player is playing, the sound responds to it, and then the sound that it responds with is louder than what the player’s doing, and that follows itself.  So you start getting a kind of data encoded feedback network that I think of as the snake, an ouroboros snake that’s eating its own tail.
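Brown's description of each Snakecharmer voice, a transposition that starts high, dips below unison, and comes back up, responding only on the beat, can be caricatured in a few lines. Everything concrete here (the octave peak, the cycle length, the cosine itself) is an assumed illustration of the behavior, not his actual program:

```python
import math

def voice_transposition(beat, cycle_beats, peak_cents=1200.0, phase=0.0):
    """Transposition in cents at a given beat of the voice's cycle.

    A cosine starts at +peak, dips through zero to -peak mid-cycle,
    and returns: the 'simple sinusoid-type shape' of each voice.
    """
    return peak_cents * math.cos(2.0 * math.pi * beat / cycle_beats + phase)

def respond(input_freq, beat, cycle_beats):
    """Voices respond only on the beat: whatever sound is arriving
    at that moment gets transposed by the current cycle position."""
    return input_freq * 2.0 ** (voice_transposition(beat, cycle_beats) / 1200.0)
```

Giving each of the four simultaneous voices its own `phase` and cycle length would sketch the staggered entries and rhythmic inflections he describes; the ouroboros feedback arises once a voice's output re-enters the microphone louder than the player.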

FJO:  How much improvisation is involved?

CB:  Quite a bit.  I’ve never provided a score. I just tell the person what’s going on and ask them to explore the responsiveness of the network. Usually I’m tweaking different values in response to what they’re doing, so it’s a bit of a duet.

FJO:  Taking it back to Talking Drum, you have these notes explaining how people are walking around in this environment. There are these field recordings, and then there are musicians who are responding to them.  I can partially hear that, but I’m not exactly sure what I’m hearing.  Maybe that’s the point of it to some extent.

CB:  That’s not quite right.  We have the recording called Talking Drum.  That is a post-performance production piece that uses things that were recorded at different Talking Drum performances.  That uses field recordings.  In a performance of Talking Drum, there are no field recordings. Basically, the idea is that there are four stations that are connected with one MIDI cable. That cable allows them to share the same tempo. At each of the stations is a laptop computer, and a pitch follower, and somebody who’s playing into the microphone. So, the software that’s running is a rhythmic program I designed that I can give a basic tempo and beat structure to that can change automatically at different points in time, but that also responds to input from the performer, the basic idea being that if the player plays on a beat that’s a downbeat, that beat will be strengthened in the next iteration of the cycle. It basically adjusts to what it hears in relationship to its own beat cycle. The idea of the multiplicity of those stations where that’s happening, is that they are integrated by staying on the same pulse through the cable. The idea is that the audience is moving around the space that this installation is in and the mix they hear is different in each location. As they move, it shifts. It’s as if they were in a big mixing console, turning up one station and then turning down the other. What I was trying to do was to create a big environment that an audience can actively explore in the same way that I’ve talked about creating this dense listening environment and asking people to listen to different parts on their own. That actually came about from the experience of going to Cuba in the early ’90s, and being at some rumba parties where there were a lot of musicians spread out in different places. I wandered around with a binaural recorder and I recorded the sound as I was moving. 
Then when I listened to the recording, I was getting this shifting, tumbling sound field and I thought: “There’s no way you could ever reproduce this in a studio. It’s a much richer immersive way of listening. Why can’t I use this as a way to model some experience for live performance or for live audiences?”
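The adaptive pulse Brown describes, where a beat the player lands on is strengthened in the next cycle, can be sketched as a per-beat weight table nudged toward the player's onsets. The learning rate and normalization here are assumptions standing in for whatever his software actually does:

```python
class BeatCycle:
    """One Talking Drum station's beat weights (a sketch)."""

    def __init__(self, n_beats, learn_rate=0.25):
        self.weights = [1.0 / n_beats] * n_beats  # start with a flat cycle
        self.learn_rate = learn_rate

    def hear(self, beat_index):
        """The pitch follower heard an onset on this beat:
        strengthen it in the next iteration of the cycle."""
        self.weights[beat_index] += self.learn_rate
        total = sum(self.weights)
        self.weights = [w / total for w in self.weights]  # keep weights normalized

    def accent(self, beat_index):
        """Relative emphasis the station gives this beat when it plays."""
        return self.weights[beat_index]
```

The single MIDI cable in the installation would correspond to all four stations advancing `beat_index` from one shared clock: each station's weight table drifts toward its own player while staying locked to the common pulse.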

The cover for Chris Brown's CD Talking Drum.

In 2005, Pogus Productions issued a CD realization of Chris Brown’s Talking Drum.

FJO:  It actually reminds me of when I first heard Inuksuit, the John Luther Adams piece for all the percussionists.  It was impossible to hear everything that was going on at any one moment as a listener. That’s part of the point of it which, in a way, frustrates the whole Western notion of a composition being a totality that a composer conceives, interpreters perform, and listeners are intended to experience in full like, say, the Robert Schumann Piano Concerto. Interpretations of the Schumann might differ and listeners might focus on different things at different times, but it is intended to be experienced as a graspable totality, and a closed system. Whereas creating a musical paradigm where you can never experience it all is more open-ended, it’s more like life itself since we can never fully experience everything that’s going on around us.  But I have to confess that as a listener I’m very omnivorous and voracious so it’s kind of frustrating, because I do want to hear it all!

CB:  Sorry! I think that’s part of the Cage legacy, too. You don’t expect to have it all and what you have is a lot.  Everybody in that Schumann Piano Concerto is hearing something slightly different, too, but there’s this idea somehow that this is an object that’s self-contained.  It’s actually an instruction for a ritual that sounds different every time it’s done.  But I think the ritual aspect of making music is something that really interests me and I would hate to be without it.  Compositions are more or less instructions for what they should do, but I’m not going to presume that they’re going to do it exactly the same way every time.  Maybe some of them think they do, but I don’t think performing artists do that really. It’s mostly about making something that’s appropriate to the moment even if it’s coming from something that’s entirely determined in its tonal and rhythmic structure. That to me is what makes live music always more interesting than fixed media music.  It’s actually not an object.  It’s not something that doesn’t change as a result of being performed.   Of course, fixed media depends on how it’s projected.

FJO:  Perhaps an extreme example of that would be the kind of work you do as part of the Hub—electronic music created in real time by a group of people who are physically separated from each other yet all networked together, with no centralized control, and that’s kind of the point of it.

CB:  That’s right.  The idea is to set up the composition process, if you can call it that. It’s not really the same as composing, but it’s a kind of designing.  You’re designing a system that you believe will be an interesting one for these automated instruments to interact inside of.  What we do is usually a specification; each piece has verbal instructions about how to design a system to interact with the other systems.  Then we get it together and get them working and they start making the sound of that piece, which is never exactly the same, but it’s always recognizable to us as the piece that it is, because it’s a behavior. I would say within our group we get used to the kinds of sounds that everybody chooses to use to play their role in the piece, so it starts to take on a kind of ad hoc personality from those personal choices that each person makes.

An excerpt of a networked computer performance by John Bischoff, Chris Brown and Tim Perkis (co-founders of the legendary computer network band The Hub) from the Active Music Series in Oakland’s Duende, February 2014.

FJO:  In terms of focusing listening, and perhaps you’ll debate this with me, it seems that, as listeners, we’re trained to focus on a text when a piece has a text. If someone’s singing words, those words become the focal point.  I hadn’t heard much music of yours featuring a text, but I did hear your new Jackson Mac Low song cycle the other night.

CB:  I don’t write a lot of songs, but when I do I find it’s usually a pleasure to work with a pre-set structure that you admire; it’s like you’re dressing up what’s already there rather than having to decide where it goes next.  Of course, you’re making decisions—like what is this going to be, is it going to be different, how is it going to be different, how is it going to be the same?—but it’s nice to have that kind of foundation to build on.  It’s like collaboration.

FJO:  I thought it was beautiful, and I thought Theresa Wong’s voice was gorgeous. It was exquisite to hear those intervals sung in a pure tone and her diction was perfect, which was even more amazing since she was simultaneously playing the cello. But, at the same time, the Stone has weird acoustics.  It’s a great place, but it’s a hole in the wall that isn’t really thought out in terms of sound design so it was obviously beyond your control. I was sitting in the second row and I know Jackson Mac Low’s poems. So when I focused in, I could hear every word she was pronouncing. But I still couldn’t quite hear the words clearly, as opposed to the vocals on Music of the Lost Cities where I heard every word, since obviously, in post-production, you can change the levels. But it made me wonder, especially since you have this idea of a listener getting lost in the maze of what’s going on, how important is it for you that the words are comprehensible?

Music of the Lost Cities from Johanna Poethig on Vimeo.

CB:  Maybe it’s just me, but even in the best of circumstances, I have trouble getting all the words in songs that are staged.  Maybe it’s because I’m listening as a composer, so I’m always more drawn to the totality than I am just to the words.  Most regular people who are into music mostly through song are very wrapped up in the words.  But I’m not sure Mac Low’s words work that way anyway.  I think they are musical and they are kind of ephemeral in the way that they glow at different points.  And if you don’t get every one of them, in terms of what its meaning is, it’s not surprising.  It’s kind of a musical and sonorous object of its own.  So I guess I’m not exceptionally worried about that, although in the recording, I probably do want a better projection of that part of the music than what happened at the Stone.  I was sitting behind her and I was not hearing exactly what the balance was.  In the Stone, there are two speakers that are not ideally set up for the audience, so it’s not always there exactly the way you want it to be.

FJO:  So is this song cycle going to be on the next recording you do?

CB:  I hope we’re going to record it this summer, actually.  It’ll be a chance to get everything exactly right.  I’m very pleased that people are recognizing the purity of these chords that are being generated through the group, but there hasn’t been a perfect performance yet.  Maybe there never will be.  But the recording will get closer than any other one will, and that’ll be nice to hear, too.

FJO:  It’s like the recording project of all the Ben Johnston string quartets that finally got done. For the 7th quartet, which has over a thousand different intervals, they were tuning to intervals they heard on headphones and using click tracks in order to be able to do it. And they recorded sections at a time and then patched it all together. Who knows if any group will ever be able to perform this piece live, but at least there’s finally an audio document of what Ben Johnston was hearing in his head.

CB:  I think that’s really a monumental release.  Ben Johnston’s the one who has forged the path for those of us trying to make Western instruments play Harry Partch and other kinds of just intonation relationships.  It’s fantastic.  But I think the other thing that seems to be true is that if you make a record of it, people will learn to play it.  For example, Zeena and Nate the other night, in preparation for that performance, I was sending them music-minus-one practice MP3 files so that they could basically hear the relationships that they should be playing.  It helps a lot.  Recordings also definitely help to get these rhythmic relationships. I often listen to Finale play them back, just to check myself to see if I’m doing them correctly.  A lot of times, I’m not.  It drifts a little bit.

FJO:  But you said before that that’s okay.

CB:  But I want to know where it’s drifting.  I want to know where the center is as part of my learning process.  I use a metronome a lot, and I use the score a lot to check myself, and get better at it.

FJO:  You’ve put several scores of yours on your website. Sparks is on there.  Six Primes is on there.  And there’s another piece that you have on there that’s a trio in 7-limit just intonation—Chiaroscuro. Theoretically anybody could download these scores, work out the tunings for their instruments and play them.

CB:  Sure. Go for it. But they’re published by Frog Peak, so they can get the official copy there. I would like to support my publisher. Because of the way that my compositional practice has developed, a lot of my scores are kind of a mess. I have a lot of scores that I haven’t released because they’re kind of incomplete. They often involve electronic components that are difficult to notate, and I haven’t really figured out the proper way to do that. Where there are interactive components, how do you notate that? I’m not that interested in making pieces for electronics where the electronics is fixed and the performer just synchs to it. There’s only one piece I’ve played where I really like doing that, and that’s the Luc Ferrari piece Cellule 75 that I recorded, where the tape is so much like a landscape that you can just vary your synchronization with it.

FJO:  It’s interesting to hear you say that because back in 1989, you said…

CB:  Okay.  Here it comes.

FJO:  “I want electronics to enhance our experience of acoustics and of playing instruments.  Extending what we already do, instead of trying to imitate, improve upon, or replace it.”

CB:  Yeah, that was important.  That came out at a time when the industry was definitely moving towards more and more electronic versions of all the instruments, usually cheap imitations.  Eventually those become personalities of their own, but it seems to me they always start out as much lesser versions of the thing they’re modeled on.  Maybe it has something to do with this idea of models.  We’re moving more and more into a virtual reality kind of world and I think it’s really important that we don’t lose the distinction between the model and the thing it’s modeled on. I think it’s pretty dangerous to do that, actually.  The more people live in exclusively modeled environments, the more out of touch they’re going to get and probably the sicker they’re going to get, because a model is never a complete reading of the world.  It’s a way to try to understand something about that world. If you’re a programmer, you’re always creating models.  In a sense, a synthesizer is modeled on an acoustic reality. But once it comes out of the box into the world, it’s its own thing.  It’s that distinction I’m trying to get at.  I think we’re often seduced by the idea that the synthesized thing will replace the real thing rather than the synthesized thing just becoming another reality.  That’s why I’m interested in mixing these things:  singing with the synthesis. Becoming part of a feedback system with a synthetic instrument embraces that in a space and in a physical interaction. That seems to be a more holistic way of expanding our ability to play music with ourselves, with our models of ourselves, with each other through models, or just watching the models execute music of their own.  The danger comes when you try to make them somehow perfect an idea of what reality is and it becomes the new reality instead of just a new part of the real world.