
Michael J. Schumacher: Composing is Listening

Michael J. Schumacher’s 2002 artist statement on the website of the Foundation for Contemporary Arts is a very succinct summary of his aesthetics:

I am interested in context, in defining boundaries and not crossing them. A piano piece is one thing, a sound installation another. The forms are different, the audience is different. The time, the place, it all has to be taken into account. Ultimately, we’re all collaborating with whomever’s participating.

Nevertheless, Schumacher engages in an extremely wide range of music-making—from immersive Room Pieces and other sound installations to collaborations with choreographer Liz Gerring to composing and performing all the “songs” for his indie “dance pop” (for lack of a better term) band diNMachine. In Schumacher’s home in Sunset Park, where we visited him to have an extensive conversation about his musical activities, there are tons of speakers everywhere and a great collection of vintage synthesizers, but also a grand piano in the middle of his living room as well as a small bust of the composer Franz Schubert that’s just hanging out near a window in his dining room.

“I love lots of kinds of music; I’m just aware of the differences,” Schumacher explained when I asked him about the wide variety of his musical endeavors. “I don’t think that leads to only liking one particular kind of approach.  I happen to have really fallen in love with computer algorithms.  I have to say that.  It opened up a way of listening for me that was really fantastic, and it stays fantastic now.  But I was in rock bands as a kid.  I played in some bands up until I was in my 30s. And I improvised a lot. I like having that outlet for that part of my musical being.”

Although Schumacher is deeply interested in and involved with a wide range of musical styles, he firmly believes that certain kinds of music-making work better in certain kinds of spaces and that doing the wrong music in the wrong space is unfair to audiences and musicians alike, since it sets up unfulfillable expectations.

“A concert hall is a place for storytelling, but it’s a place where you know the story,” he asserted.  “You know it’s going to be an arc form.  You know there’s going to be a climax and a resolution.  And you’re enjoying that in a place of comfort, in a place of audition, a place watching a storyteller—whether it’s a conductor or an actual person telling the story—and this unfolds in a very predictable way.  Your body’s relationship to that is key: being in a seat and looking forward orients you towards a certain way of perceiving time.”

Schumacher’s deep concern for how sound installations and other primarily electronic music creations—both his own and those created by others—were perceived led him to establish several performance spaces designed specifically for such work, most notably Diapason in New York City.

“The first one was Studio 5 Beekman,” Schumacher remembered. “It was a little office space.  You entered, and there was a small foyer.  This kind of gave you a little bit of a buffer between the world and then the gallery, which was behind a door, beyond the foyer.  That little buffer was very nice, because it let people kind of take a breath.  For me it was also for limiting vision.  Turn the lights down.  It doesn’t have to be dark, but just make the visual less explicit.  I used to use a red light bulb, which got misinterpreted as a kind of gesture of some sort, but I just felt it was a dark color that allowed you to see without making the visual too much of a thing.  I think personally in those situations, it’s not good to have a lot of sound coming from outside.  If people want that, I suppose you can have a space like that, but for the most part I feel people want to be able to control the environment and not have to deal with sirens and things like that.”

Unfortunately, after more than a decade, Diapason proved unsustainable, and now Schumacher is contemplating hosting sound installations for a small invited audience in his own home. It’s a far more intimate environment than the theaters that present the Liz Gerring dances he scores or the clubs where his band diNMachine might typically perform. And he is well aware that those spaces result in different ways of perceiving which are best served by different approaches to making music. But he’s also aware that not everyone listens to music the same way, regardless of the space, and is eager to create things that have an impact for anyone who hears them.

“If we’re talking about the ideal listener-viewer, I think that’s one thing.  If we’re talking about a typical audience, that’s another. Both are obviously important.”

December 6, 2017 at 2:00 p.m.
Michael J. Schumacher in conversation with Frank J. Oteri
Video and photography by Molly Sheridan
Transcription by Julia Lu

Frank J. Oteri: The homepage of your website has a sequence of photos of all these objects: a teapot with an audio speaker in it; two circuit boards interconnected; and, perhaps the most striking one, a Philadelphia Cream Cheese container. It’s tantalizing, but none of those things have audio links on them, so I suppose that’s just to whet people’s appetites.

Michael J. Schumacher:  I think at some point I did have audio where each object would be a separate channel, and as you clicked on more [of them], you’d get more of the piece.  I don’t know what happened to that.  I don’t really manage my own website; I don’t know how.  My girlfriend does that.  Sometimes things get disconnected or something changes, and it takes us a while to figure out that it happened and to fix it.

FJO:  Wow, that’s a pity.  I would have loved to have clicked on all of those images to hear all those sounds together. So they’re all one piece?

MJS:  Well, the way I work in general is I’m basically writing one piece all the time.  I’m just adding things to it, and then whenever I present it, it’s a part of that piece.  That’s how I look at that realm of the multi-channel stuff; [those sounds] would have been a particular group of things that would belong to this larger concept.

“The way I work in general is I’m basically writing one piece all the time.”

FJO:  So all those things make different sounds, but what were those sounds?

MJS:  Well, they’re all just speakers; they’re not instruments.  I use these objects—the cream cheese container or the teapot—to create a resonant body. I travel with these little drivers and can improvise a resonator on the spot.  I can go to Lisbon and put the speakers in beer glasses or garbage cans or things like that.  But I sometimes get attached to certain resonators, like the Philadelphia Cream Cheese container.

FJO:  I can imagine that a teapot could have some effective acoustic properties based on its shape, but what’s so special about a Philadelphia Cream Cheese container?

MJS:  Well, I think when they designed the container, they were clearly thinking acoustically: something works.  It came from Costco.

Schumacher's Philadelphia Cream Cheese speaker

FJO:  Even though the audio links on those images are currently not working, you’ve made so much of your music available through your amazingly generous and seemingly limitless SoundCloud page. However, about a week ago I started embarking on a plan to listen to every one of the files you uploaded to that page in order—I failed because there’s so much material there. But I also failed because as I was scrolling through the files, I came across one called Middl which had a waveform that immediately made me want to hear it. Most waveforms are somewhat random looking and nondescript, at least to me, but this one had a striking regularity to it. It was unlike any kind of SoundCloud waveform I’d ever seen.  So I jumped ahead.  I cheated on my own listening plan, because I had to hear what that thing sounded like.  And it was a really transformative hour of my life.  It sounds to me like it starts with a telephone dial tone.  Is that what it is?

MJS:  No.

FJO:  What is it?

MJS:  It’s a synthesizer oscillator, and it’s being played by a computer.  The oscillator is a kind of additive synthesizer with eight partials, and these partials are being manipulated by the computer.  So it’s pretty simple.
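
Schumacher doesn’t spell out his patch, but the general idea of an additive oscillator whose eight partials are continuously reshaped by a program can be sketched in a few lines of Python. Everything specific below (the fundamental, the drift rates, the sample rate) is an assumption for illustration, not his actual algorithm.

# A minimal sketch (not Schumacher's patch) of an additive oscillator with
# eight partials whose levels are slowly reshaped over time by the program.
import numpy as np

SR = 44100          # sample rate in Hz (assumed)
DUR = 10.0          # length of the excerpt in seconds
FUND = 220.0        # assumed fundamental frequency

t = np.arange(int(SR * DUR)) / SR
signal = np.zeros_like(t)

for k in range(1, 9):                                   # partials 1 through 8
    # each partial's level drifts on its own slow cycle, standing in for
    # the computer "manipulating" the partials
    env = 0.5 * (1 + np.sin(2 * np.pi * 0.05 * k * t + k))
    signal += (env / k) * np.sin(2 * np.pi * FUND * k * t)

signal /= np.max(np.abs(signal))                        # normalize to avoid clipping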

FJO:  It sounded so much like a telephone dial tone to me—so much so that since hearing it, I can’t interface with an actual telephone in the same way.  I’m now giving it all these musical associations.

MJS:  That’s really good.  I’ve actually tapped into something.

FJO: But apparently not intentionally.

MJS:  No.  Or maybe it was.  Maybe that was my La Monte Young moment of listening to the wires and having it inspire me.

Some of the hardware Schumacher uses to create his music.

FJO:  Another sound file I listened to had a similar effect on me. It was a sound file for the Riga 2014 Room Piece, which also lasted a bit more than an hour. After the file ended, I took off my headphones and walked down the hall.  I was washing my hands in the bathroom sink, and when I turned off the water I was suddenly transfixed by another sound I couldn’t immediately place. The room was completely empty, but there was this steady sound. Maybe it was a heat pipe. But it didn’t matter. I just wanted to listen to it, and it was because I had just listened to your Room Piece. Of course, there’s a whole history of pieces that make us more attuned to the sounds around us that most of us take for granted, starting with John Cage’s 4’33”. All of Pauline Oliveros’s Deep Listening projects were also part of this tradition. But whereas Cage and Oliveros’s reasons for pursuing a more expansive way to listen seem almost political and even spiritual, the relationship to the larger sonic environment that your music opened me up to has been purely aesthetic; it just made me focus on some interesting sounds.

MJS:  I think a lot about listening; for me, composing is listening and so it’s how you listen and how you respond to the potential meaning in a sound.  I think that what’s become really interesting since Cage is how much you can do in that regard: how and where you can find meaning; how you can juxtapose meanings; how you can suggest resonances beyond sonic resonances to real life associations. This explosion of meaning also includes pre-Cage sound—the relationship between a D and an A, or a D and B-flat. That also has meaning. What’s so exciting about making music now is you’re really—I don’t want to say manipulating, because I don’t try to manipulate meaning. I try to suggest ways that listeners can explore meaning.

“For me, composing is listening.”

FJO:  When you talk about certain sounds being pre-Cage, people listening to that D and that A in the era before Cage and Oliveros were generally listening unquestioningly to a disembodied, abstract, and perhaps idealized relationship between certain sounds typically through the filter of somebody playing an instrument or somebody singing, either themselves or someone else in their home or in a concert hall. But obviously when any musician makes these intentional sounds there’s all this other sound that’s happening, too, much of which is unintentional but just as present.  And I guess in the world we live in now, what we could call the post-deep listening moment, we are at least aware that every sound that’s around us is something that obviously we can hear, even if we’re not consciously listening to it. So how does that change the relationship of what a composer does—for you?

MJS:  These sounds that are outside the specific performance that are accompanying it in some way can be invited in or in some way interact with the performance.  I think it can work both ways.  You can be in a situation and somebody making a sound or some sound coming from the environment can affect your reception of the “musical” sounds.  Let’s just call them musical sounds.  On the other hand, the musical sounds can affect your reception of these other sounds.  So what I like to work with are short, suggestive sounds that can expose meanings in sounds outside the performance. A simple example would be if I play a short note on the piano and by coincidence—and it’s amazing how often this happens—you might hear a car horn in the distance, and that car horn might be the same pitch.  And so the mind relates the two.  In an abstract way, not in a way that it’s saying that’s a car horn, but in a way that it’s saying that’s a B.  That’s the same pitch.  That’s a musical thing, an abstract or musical way of perceiving that car horn.  I really like that, and so I like to put out there enough information that the whole sonic environment can participate in that reception of sound.  Not just a level of concrete associations, but also in these abstract associations like rhythm and pitch.  The other reason to have short events is because they articulate a time-space kind of situation, as do most of the sounds we hear in the environment.  Very few sounds in the environment are steady and non-stop.  If they are, after a while we tend to ignore them.  Most of the sounds we perceive are perceived momentarily, and we’re jumping from one to another.

FJO:  That’s sort of the opposite of the drone aesthetic.

MJS:  Not really, because drones—like La Monte Young’s drones—are incredibly active.  When you’re listening deeply to them, you’re listening to lots of things inside the drone.  For me, a drone is actually exactly that.  It’s just maybe a different way of approaching it.  I think, in both cases, we’re talking about really heightening the moment and trying for a kind of perceptual present.

FJO:  I grew up in New York City amidst 24/7 loud Midtown Manhattan traffic; I had to train myself to be able to fall asleep with the noises of sirens and everything else.  I remember the first time I ever took a trip to the countryside and there were crickets.  I could not fall asleep because it was a constant sound, and it was too close to a musical experience for me.  But of course, a musical experience could also be a completely random assortment of sounds, but I was able to disassociate that. I guess that speaks to what you were saying about how, if you’re not really paying attention, a drone might seem like this constant thing, but there’s lots of other stuff in it.

MJS:  I think that at the beginning, the point of the drone is that superficially it seems like nothing is going on.  But what it’s doing is it’s giving you this time dimension of really saying, “Okay, now I’ve been listening to this for five minutes, and suddenly I’m hearing things that I didn’t hear before.  And then the more I do that, the more I hear in this apparently monolithic sound.”  There are all these details that can only be accessed through it—first of all—being so-called unchanging, but also giving the listener the time to contemplate.

FJO:  There’s a piece of yours that sort of does that in a weird way.  But maybe, once again, what I thought I was hearing is not quite what you were doing, like how Middl isn’t a dial tone. The piece is called Chiu.

MJS:  That’s a piece Tom Chiu and I performed together.

FJO:  Aha!  Okay.  That’s why it’s called that.  And I hear his violin, but what it sounds like you’ve done to it is created some kind of artificial simulacrum of a Doppler effect.  At least that’s what it sounded like to me.

MJS:  Well, that was a jam. I played my synthesizer.  It’s the same synthesizer that I use in Middl.  It’s made by Mark Verbos, who used to be here in Brooklyn and now has moved to Berlin.  It’s a fantastic Buchla-inspired approach to synthesizer making.  It was an improv, but we rehearsed a bit.  It was really kind of Tom’s composition.  And he has this way of playing—I guess that’s what you hear as the Doppler, this kind of slow pitch bend, kind of this constant, constantly shifting, almost glissing pitch world.

One of Schumacher's synthesizers connected to a speaker made from a Bush Beans can.

FJO: So that was all him and was just a product of the improvisation?

MJS:  Yeah.

FJO:  So once again, I made all these incorrect inferences about what I was hearing.  This is the weird thing about disembodied sound, whether you’re hearing something on a recording and there aren’t a lot of program notes for it or you’re listening on your headphones on a website with no additional information. These experiences are very different from being in a space and watching a performance or a sound installation and seeing how it works as you’re listening to it.  There’s only so much your ears can tell you about what’s going on; the eyes give away the secrets.

MJS:  They can, but sometimes with synthesizers you don’t; sometimes the player doesn’t know what’s going on.  At least I don’t. I mean, I have no idea.

FJO:  In the very beginning of our talk you were saying the images on the homepage of your website originally linked to sound files, and someone presumably could turn one of them on, but when you turned a second one on, the first one would still be on and then you’d have this cumulative effect of all that sound.  In essence, the Room Pieces also work this way because you have these different sonic modules that all exist separately but the piece is about the cumulative effect of hearing them all spatially in a particular space.  It isn’t necessarily duration-based, which makes it something you wouldn’t listen to for causality in the same way as other musical compositions.

MJS:  What do you mean by “in the same way”?

FJO:  Well, like the D and the A you mentioned before. Let’s say there’s a car that’s suddenly on B-flat, and that’s totally random, but you might—because of how your mind perceives time-based musical relationships—think you’re hearing a flat six if you hear it after the D and the A.  There’s a perception of a developmental relationship, a relationship between the third sound and those other two sounds because of the order they are in.

“I didn’t want a sense of utter randomness…  That’s where I really don’t agree with Cage.”

MJS:  That is what I’ve been trying to do with the Room Pieces.  These are algorithmic, generative compositions, and they’re modular.  But the approach was to create coincidental occurrences of that sort.  The range of sounds and the exploration of pitch and rhythm was intended to raise the question: are these intentional events?  Was this composed?  I didn’t want a sense of utter randomness, just the sense that none of it has any relationship.  That’s where I really don’t agree with Cage; I guess you could say it in that way. I think he was pretty adamant about wanting to completely cancel out this idea of relationships between sounds.  What it’s all about for me is creating these relationships, but it’s not about necessarily creating a progression or any structure that is really only interpretable in one way.  It’s really about creating the possibilities for these relationships, like a kind of drawing where you connect the dots—each listener would come up with his or her own drawing.
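
The Room Pieces algorithm itself isn’t described here, but the idea of independent generative modules whose short events happen to coincide in pitch can be suggested with a toy sketch. The pitch pool, timing ranges, and module names below are invented placeholders, not material from the actual pieces.

# A toy sketch (not the actual Room Pieces code): independent modules emit
# short, widely spaced events drawn from a shared pitch pool, so chance
# "relationships" arise without any module knowing about the others.
import random

PITCH_POOL = [50, 55, 57, 60, 62, 67]        # assumed shared pitch set (MIDI notes)

def module_events(name, total_seconds, seed):
    rng = random.Random(seed)
    t, events = 0.0, []
    while t < total_seconds:
        t += rng.uniform(2.0, 20.0)          # long, irregular silences
        pitch = rng.choice(PITCH_POOL)       # a pitch another module may also hit
        dur = rng.uniform(0.1, 0.5)          # short, suggestive sounds
        events.append((round(t, 2), name, pitch, round(dur, 2)))
    return events

# two modules running independently; coincidences emerge from the shared pool
timeline = sorted(module_events("A", 120, seed=1) + module_events("B", 120, seed=2))
for event in timeline[:10]:
    print(event)                             # (time, module, MIDI pitch, duration)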

Schumacher's laptop displaying a software program that works out his algorithms.

FJO:  So you want those relationships to be there, but you don’t necessarily want to determine what they are.  It’s for the listener to determine.

MJS:  Right.

FJO:  There’s a wonderful comment that Julian Cawley made in one of the program essays published in the CD booklet for the XI collection of the Room Pieces: “His music changes, but it doesn’t progress.”  Is that a fair assessment of all the work that you’re doing?

MJS:  Well, definitely the Room Pieces, but lately I’ve been getting away from that approach. For 20 years I was just determined never to edit what happened, just to let it happen and not to get involved in that post-production level of saying, “Okay, I like this, but it’s not really working here, so I’m going to change it up and I’m going to add this.”  I really tried to keep it very strictly algorithmic and generative.  But lately, in the last I would say ten years, I’ve been interested in getting into the details, especially spatialization and really exploring outside of that algorithmic process how I can really look at those details of how the sounds exist in space and how they relate to each other, moving on to more deterministic pieces.

FJO:  So would it be toward things that have literally more of a beginning, middle, and end, that actually develop, that actually have a start point and end point?

MJS:  Yeah.  I think some of the new pieces on Contour Editions are definitely that way.

FJO:  Disagree with me if you think I’m off the mark on this, but it seems there’s a distinction that’s key to all of this: the difference between musical composition or musical performance on the one hand and sound installation, sound art, or even instrument design on the other.  A performance or a composition is fixed in its duration.  The order of the sounds that an audience hears and how long they are listening to these sounds is determined for them.  There’s a clear beginning, middle, and end. Whereas with a sound installation, people can theoretically walk into it whenever they want and can stay however long they want.  So the message it’s trying to convey has to be different than a progression of events over time.

MJS:  Right.  It’s a big issue in terms of making sure that listeners perceive what you think they should be perceiving, and in the right timeframe.  So 30 minutes, 45 minutes, or 5 minutes, what’s a minimum amount of time to be able to understand the piece? Is it a problem then if you leave out things? Is it really necessary for the person to wait ten minutes for something important to happen?  Is that something you want to avoid?  Those are obviously key questions.  I was lucky to have my own space, but if you’re presenting these works in museums or other settings where people are constantly moving through, and they’re really encouraged to move through and not to sit down necessarily for an hour—although obviously with video art people do that—that’s an added layer of things you need to account for.

FJO:  It’s interesting that you’re concerned about whether people will get it if they’re only there for five or ten minutes. Of course you can never assume someone is going to get your piece, even if it’s in a concert hall, on a program, and it’s a functionally tonal string quartet.  You can’t really control how people perceive anything.

“Schoenberg and his 12-tone technique is the beginning of process composition.”

MJS:  My feeling is that this whole trend toward sound art and sound installation is coming out of the concert hall’s dominance as a listening space. For me, it starts with Schoenberg.  Schoenberg and his 12-tone technique is the beginning of process composition.  Even in his case, it’s taking the ego out of the process in a sense, obviously.

FJO:  Wow.

MJS:  It’s a stretch with Schoenberg, but there’s still that hint of: “Okay, here’s this process, 12 notes.  I’ve used 11, doesn’t matter.  I have to use the twelfth.  It doesn’t matter what I think.  I’ve decided that the twelfth note is going to be one I didn’t use.” So that’s process that overcomes his taste—in a sense—and his ego.  For me, it starts there. And Cage is essentially the same thing.  It’s chance, but this was proven in the ‘50s, basically you get the same thing.  It’s a process and the result is going to be a surprise, both to the composer and, in a sense, to the audience.  Unlike classical composition where as soon as you hear a bar or two of Mozart, your brain knows what the next six bars are going to be in a sense.  That’s the beauty; that’s why it’s so relaxing to listen to because you sit there and you hear eight bars in advance. It’s kind of like knowing the future.  You’ve been given a little bit of a peek into the future and that relaxes you.  That makes you feel kind of secure.

This idea of every next step being a little bit of a mystery is a fundamentally different way of perceiving the world and perceiving music, and I think it’s completely wrong to do that in a concert hall.  And I think that people sense this.  A concert hall is a place for storytelling, but it’s a place where you know the story.  You know it’s going to be an arc form.  You know there’s going to be a climax and a resolution.  And you’re enjoying that in a place of comfort, in a place of audition, a place watching a storyteller—whether it’s a conductor or an actual person telling the story—and this unfolds in a very predictable way.  Your body’s relationship to that is key: being in a seat and looking forward orients you towards a certain way of perceiving time.

So my feeling is that composers started to sense this disconnect between where they were required to show their work and the kind of work they were interested in making, which was based on these processes that everybody was inventing from about 1945 on.  Basically the job of every composer was to invent the process, invent their own methodology, so I feel like the intuitive response to that was to invent new spaces. Part of that is radio space. In Germany and [other parts of] Europe, you get more of the experimentation in radio-based listening spaces, either the radio itself or maybe these sort of black box spaces that they would perform in.  In more extreme cases, like Stockhausen, he would go into caves and what not.  They were searching for these places where the space placed the body in an orientation towards the sound that allowed it to really be perceived in a way that connected to the process of composing it. My focus has been to try to understand the very many ways of listening, of apprehending sound, and how they relate to architecture and to the body and to try to create situations where we can help listeners understand what it is that they’re perceiving.

FJO:  I was thinking along similar lines over the weekend.  I went to the sound installation exhibit at the Museum of Arts and Design at Columbus Circle in Manhattan.

MJS:  Did Charlie Morrow have something to do with it?

FJO:  Not as far as I know. I didn’t know most of the people who were involved with this except for Benton Bainbridge, but there was some very interesting work there. What I found even more interesting than the work, however, was how many people were interacting with what were, in several cases, some really whacked out sounds, perhaps sounds that they might not have heard before or might not have had a context for.  And they were really enjoying it.  People of all ages—young children, teenagers, even some elderly people. There was an interactive piece called Polyphonic Playground that was created by the London-based collective Studio PSK where people made sounds by climbing bars or sitting on swings.  That was cool.  There was also this incredible contraption on a wall with all these disembodied guitar strings attached to pickups.  It was done by an Israeli-born artist named Naama Tsabar, who now lives here; she used to play in punk bands but now more of her work is installation-based.

Anyway, some of these guitar strings were tuned to really resonant low tones, but you hear them all together from various people plucking them all at once and it creates some incredibly dissonant chords.  Yet everybody was enjoying it.  If people were to hear the same thing in a concert hall, would they appreciate it as much?  If it was a New York Philharmonic subscription series concert, there probably would have been loads of people walking out.

MJS:  Exactly.

FJO:  Why is that?

MJS:  I think these are really very old habits—and not in a bad way, just human habits. I once made a list of listening paradigms.  I’m not a scientist; I’m not a researcher in this way.  This is just kind of stuff that comes off the top of my head.  So I’m not claiming this to be true or anything.  I’m just thinking about it.

Think of sitting by the camp fire and listening to somebody telling a story—somebody with a gift for telling a story, but understanding that that camp fire offers both security but also danger because just beyond the darkness there could be anything, an animal or a gangster, something.  So there is that sense of “What’s behind me?  What could potentially encroach on our sense of comfort?”  A storyteller is going to take advantage of that, a person who’s got a sensitivity to that is going to maybe then tell a scary story or something, that will bring in the darkness, bring in the rear, so to speak.  You oppose that with the concert hall where that does not exist.  In the concert hall we’re enclosed.  We’re completely safe.  We are perhaps a little bit impinged on by our neighbors, so that we feel a little bit self-conscious.  So that’s something.  All of these things contribute.

Think of a political meeting in a town square where there’s a speaker, but there’s also a lot of participation from the audience.  People acknowledge their neighbors and encourage each other to talk back to the speaker, so it becomes a back and forth kind of thing.  Or a rock concert—that’s a different kind of thing.  All of these are paradigms, and they become models for listening that you can carry over into other situations.

You can listen to your stereo, but you can pretend it’s the concert hall.  Do you remember the way people used to listen to records? They would bring the record home, put it on, and sit in a comfortable chair with their speakers there, as if they were in the concert hall—my dad used to do that—and they would listen to the whole record.  It was 20 minutes of sitting there listening.  They wouldn’t put it on and go do the dishes like people do now because they know they can just keep playing the stack of CDs that is never ending, or the MP3s or whatever.

“In the concert hall we’re enclosed.  We’re completely safe.”

Another aspect of this is musical structure.  Take a look at Philip Glass’s music.  At the beginning, he was very much in the art world.  He was doing a lot of his performances in art galleries.  The take away form of one of the early pieces, if you put it in a sound editor or something, is like a bar.  It’s a flat sound.  As soon as he got commissioned to do the Violin Concerto—and that’s in a concert hall—then suddenly you’ve got that arc form.  Suddenly, it’s a standard concerto. It’s in his language, but you’ve got that climax. He was clearly intuitively responding, “Okay, now I’m in a concert hall.  I can’t do this thing that I do in the gallery where people are walking around.”  Physically, they were in a completely different orientation; they didn’t feel hemmed in like at a concert hall. So they didn’t have those same expectations of structure.  But as soon as he was doing the concert hall piece, then it was like, “Now I have to rethink this.”

FJO:  In terms of your own background and how you got into all of this stuff—you studied the piano growing up. Later you went the typical composer-training route.  You went to Indiana University, then on to Juilliard to study with Vincent Persichetti, one of the great—albeit largely unsung—masters of the sonata form: 12 great piano sonatas, 9 symphonies; it’s all very much about the concert hall.

MJS:  Yeah.

FJO:  You even wrote a symphony and a string quartet.  I was desperately trying to find places I could hear them.

MJS:  You won’t. They were student pieces.

FJO:  At what point did you have this aha moment of wanting to do something else?

MJS:  At Indiana, they had a great studio.  I was always into electronic music.  I even had synthesizers when I was in high school.  I would go to sleep with my headphones playing drones essentially. I had no idea of any of that, but that’s just what came out of the synthesizer.  I just turned it on and held a note down, then played with the filters and the LFOs and stuff.  So I was always into that. Then at IU, Xenakis had left the year before I got there, but I think he might have developed the studio a little bit and so I was working and I made a piece called Nature and Static.  This was a piece that had two parts.  One was what I called “Nature,” which had basically five or seven parts that were just playing the same minimal melody, but with different timbres.  And they just kind of intersected in a very minimal way, not unlike that Middl piece you know, but the idea was completely intuitive.  It was that you’d listen into it and you would hear this multiplicity of sounds in this very simple texture—and that I associated with nature, because nature to me was simple, but complex.  Then the other half was “Static,” which was a loop—much more of an electronic music or man-made kind of a loop.  And I processed it, so it got kind of big and loud.  For my recital, I performed a piano piece with string quartet, and this piece Nature and Static, and I turned the lights off for the electronic piece.

At that point, it occurred to me that I had to do something because it isn’t right to have people sitting in these chairs listening to this.  They should be closing their eyes and listening as if immersed in the sound.  It felt wrong to be doing it in the concert hall, but you know, you do what you can.  You turn the lights off, or whatever.  Then at Juilliard, I organized some electronic concerts at Paul Hall. But when I set the speakers up, I was also struck by the inappropriateness of that space for what we were doing.  It was just people looking around. My teacher Bernard Heiden said, “The thing I like about electronic music is when something goes wrong.”  He liked when the tape recorder broke, something dramatic.  I think it struck everybody; it’s nice good music and whatever, but regardless, it just doesn’t work in this space.

FJO:  So even as a student you were envisioning a venue like Diapason.

MJS:  Yeah. Obviously the Dream House was a big inspiration and to see that people were already doing this to a large extent. Still, New York didn’t have a dedicated sound space.  Even though Paula Cooper and other places would occasionally do sound works, we didn’t have a dedicated gallery or space for experimenting with sound.

FJO:  So ideally what was in your mind, in terms of how you put this thing together? What were the attributes that made that a more ideal space for hearing this kind of music?

MJS:  Well, the first one was Studio 5 Beekman.  That was down near City Hall.  It was a little office space.  You entered, and there was a small foyer.  This kind of gave you a little bit of a buffer between the world and then the gallery, which was behind a door, beyond the foyer.  That little buffer was very nice, because it let people kind of take a breath.  For me it was also for limiting vision.  Turn the lights down.  It doesn’t have to be dark, but just make the visual less explicit.  I used to use a red light bulb, which got misinterpreted as a kind of gesture of some sort, but I just felt it was a dark color that allowed you to see without making the visual too much of a thing.  I think personally in those situations, it’s not good to have a lot of sound coming from outside.  If people want that, I suppose you can have a space like that, but for the most part I feel people want to be able to control the environment and not have to deal with sirens and things like that.

FJO:  So Diapason eventually became a really significant venue for this stuff, but it is no more.

MJS:  Well, it was supported by my friends Kirk and Liz [Gerring] Radke.  Liz is a choreographer who I’ve been working with since the ‘80s and her husband Kirk is a really generous supporter of the arts and funded this space. We continued in that way and were also getting funding from New York State and from the city and from private foundations.  This went on for about 15 years.  But at some point, Kirk pulled out.  So I lost that funding, and that really was paying the rent; everything else was paying the artists.  So that really hurt, and for a while I tried to continue with my own money, but I couldn’t sustain it.

What I had tried to do, in the years when it was clear that Kirk was going to pull out, was I wanted to get somebody to be a real business director, a kind of executive director who would fundraise and get that part of it together because I wasn’t really good at it.  I felt if I could find that person, then they could really get the whole thing on its feet financially, be able to pay themselves a salary, be able to pay the artists, and keep the rent paid.  We were over at Industry City the last few years.  We were before our time there because people had a really hard time coming out there.  The subway was not cooperating.  People would complain—especially coming from North Brooklyn: Williamsburg and Greenpoint—that it would take them two hours to get there.  So the audience went way down.  So that was bad, too.  Now it’s a bustling place where people come on the weekends.  If we were there now, we would actually get a walk-in audience.  It would have been fantastic.  But we were basically five years too early for that.

FJO:  There really is still no place that is quite like the space that you had for this kind of work. So do you envision it ever reopening or doing something else like it?

MJS:  The last two or three incarnations were really quite great spaces. We had the two listening rooms and pretty good sound isolation.  I had a really great group of people helping me, like Daniel Neumann and Wolfgang Gil.  But I don’t know.  I could see doing it again.  I’m very interested in this question of bringing sound environments or installations into people’s homes, and that’s kind of the way I would try to do it if I did it again.  I was thinking even of having events here [in my home]—inviting an artist to give a presentation here, with a house multi-channel system, and then inviting a small audience and basically trying to use that to help that artist get the work out, to present the work to people who might help in then getting it out to a bigger audience.

Schumacher's very attentive dog near one of his electric keyboards.

FJO:  Given that that’s been such a focus of your work—the directionality of sounds and such a sensitivity to how and where sounds are experienced—it’s fascinating to me that you also perform in and create all the music for what, for better or worse, I’ll call a rock band.  It’s a somewhat inaccurate shorthand for what diNMachine is, but in terms of its performance situation, it operates like a rock band.  There is a group of musicians performing in real time and there’s an audience.  Or there’s a recording. In all cases, it’s a group doing somewhat fixed things that have a beginning, middle, and end.  The band doesn’t perform in concert spaces like the comfort zones we were talking about earlier; they’re performing in louder, club-type environments in which there’s often no sound isolation either from the world outside or from the audience members themselves, which raises all sorts of other listening issues.

“I love lots of kinds of music. I’m just aware of the differences.”

MJS:  Well, I love pop music.  And I love classical music and going to concerts. I love lots of kinds of music. I’m just aware of the differences.  I don’t think that leads to only liking one particular kind of approach.  I happen to have really fallen in love with computer algorithms.  I have to say that.  It opened up a way of listening for me that was really fantastic, and it stays fantastic now.  But I was in rock bands as a kid.  I played in some bands up until I was in my 30s. And I improvised a lot. I like having that outlet for that part of my musical being.

FJO:  The title for last year’s diNMachine album, The Opposites of Unity, is a very apt one given your openness to all these different styles and listening paradigms.  It isn’t necessarily about just one thing.

MJS:  Right.

FJO:  But there’s one track, “Jabbr Wawky,” that’s basically hip hop and another one, “Brisé,” which could well have been one of your Room Pieces to some extent.

MJS:  Yeah, it probably was derived from one. But even in “Jabbr Wawky,” there are a lot of environmental sounds.

FJO:  So the lines do get blurry even in the context of what you’re doing within the framework of the band.  I noticed that diNMachine has a new album coming out in early 2018. Will it be following a similar path?

MJS:  Well, the band has been reduced.  It’s now a duo, which makes it a lot easier.  It was kind of expensive. I try to pay people if they’re going to play my music for me.  So now, as a duo, I feel like this can go on and I don’t have to stress about it.  We can play when we get gigs.  We can rehearse pretty easily.  We live pretty close to each other and so I’m a weekend rock musician rather than trying to do this professionally.  Although, of course, I’m trying to do this professionally, but it just makes it more manageable.  Anyway, the music took a little bit of a turn towards what I’m calling synth and drums—not bass and drums, or drum and bass.  Drums and synth.  Those are really the two featured things—a lot of these songs are analog synthesizers and drums.  They don’t have guitar or saxophone; the first record had lots of various instrumentation.

FJO:  You say they’re songs, but there are still no vocals.

MJS:  It’s mostly instrumental.

FJO:  Do you perceive of this as dance music to some extent?

MJS:  I think you can dance to it for sure, definitely.  It’s got a very strong beat. You can also listen to it. That’s another interesting issue, because dance music shouldn’t be too complicated.  When the head gets too involved, the hips have a problem.

FJO:  The material for diNMachine consists of concrete pieces, even though elements of your other work come in.  Obviously when someone’s listening in a club, they’re not listening in the same way as they would in a concert hall, but listeners would still assume more causality than they would in, say, a sound installation, because of its mode of presentation.

MJS:  Well, the way that I write them usually is I improvise on my synthesizer and I just keep the tape running, so to speak, and then I’ll find some riff that I like or some section or some sound, and that will become the basis of one of these songs.  Generally, I’ll figure out the tempo and add a drum track, and then I’ll write a bass line.  Sometimes I’ll throw that synthesizer sound into Melodyne, which is pitch-detection software used mostly to correct singers or instruments that are out of tune.  It’s not monophonic; it can read multiple notes at once.  When it first came out, the way they advertised it was they’d have a guitar, and they’d show how Melodyne could see each note in the guitar chord and correct each one individually.  It was breakthrough software when it came out.  Now, other software does that.

FJO:  It’s like a fancier Autotune, basically.

MJS:  Exactly.  But what I’ll do is I’ll throw the synthesizer in Melodyne, and it will score it.  It’ll figure out what the pitches are, but it will be wrong most of the time because the synthesizer’s very complex. Even if you’re doing a bass line, the overtone structure is very complicated and Melodyne has a lot of trouble with that.  So I’ll take that score that Melodyne has derived from the synthesizer, then I’ll throw it into a string pad, or something like that, or a piano.  And it will come out with this piano version of what the synthesizer did, which can be really cool because it kind of comments on what the synthesizer did and doesn’t quite get it right, but you can tell that it’s trying to get it right.  Then sometimes I’ll play that live.  What I really like to do is have my basis of the song and then I’ll just kind of blindfold it: drag sounds into the session and just see what happens—see what gets layered on top of it and come up with sometimes very bizarre, unpredictable things.  I won’t keep it if it’s too strange, but it’s incredible how many times something will just work in that situation.
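
Melodyne is proprietary and polyphonic, so it can’t be reproduced here, but the move he describes—derive an approximate “score” from a synth recording, then replay it on another instrument—can be roughed out monophonically with open-source tools. A hedged sketch using librosa’s pYIN pitch tracker and pretty_midi; the file names are placeholders, and the output will be “wrong” in much the way he describes.

# A rough monophonic stand-in for the Melodyne step: track the synth's pitch,
# turn it into MIDI notes, and render those notes as a piano "version."
import numpy as np
import librosa
import pretty_midi

y, sr = librosa.load("synth_riff.wav", sr=None)          # placeholder file name
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C1"),
                             fmax=librosa.note_to_hz("C6"), sr=sr)
times = librosa.times_like(f0, sr=sr)

pm = pretty_midi.PrettyMIDI()
piano = pretty_midi.Instrument(program=0)                 # re-render as piano

note, start = None, 0.0
for t, hz, v in zip(times, f0, voiced):
    midi = int(round(librosa.hz_to_midi(hz))) if v and not np.isnan(hz) else None
    if midi != note:                                      # pitch changed: close the old note
        if note is not None:
            piano.notes.append(pretty_midi.Note(80, note, start, float(t)))
        note, start = midi, float(t)
if note is not None:                                      # close a note left hanging
    piano.notes.append(pretty_midi.Note(80, note, start, float(times[-1])))

pm.instruments.append(piano)
pm.write("piano_version.mid")                             # the imperfect "score" of the synth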

FJO:  So in a way, it is designed so that people listen in to it rather than simply listen to it, as you were describing earlier.

MJS:  Yeah, and I’m very interested in structure. I call it free composition rather than free improvisation.  It’s like the idea of transition.  Wagner said that composition is the art of transition, and I take that really seriously.  La Monte Young said transition is for bad composers.  I’m siding with Wagner there.  I think transitions are what it’s all about.  And especially in these diNMachine songs, I’m really interested in—well, I’ve got this section of the song, what’s this next section going to be?  How different can I make it from the first section?  But where it still makes sense.

A Moog oscillator

FJO:  There’s a statement that you have on your website that’s almost like your compositional manifesto, I think.  You aim to draw the listener’s attention to sounds that you’ve created by presenting them “at the rate of every half second or less, which is the same tempo as a typical melody line.”  I thought that was interesting because the way most of us hear a melody is one-dimensional; it’s a single line that’s moving over time. But your idea of manipulating sonic elements, which could be a two-dimensional plane or, more likely, given your interest in directionality, a three-dimensional field, is basically to grab listeners in the same way that they’d be grabbed by a melody by controlling the durations of the various components they are hearing over that time.

MJS:  Right.  Exactly.

FJO:  And the way you do that is by the speed of change of the sound.

MJS:  Right.

FJO:  Your most recent recording, Variations, which came out on Contour Editions earlier this year, definitely sounds much more developmentally oriented to me, so maybe that gets to what you were saying earlier about getting away from a strictly algorithmic approach.

MJS: I’ve definitely been moving on. I still use it in the process, but it’s a step in the process more and more, rather than the end.  I’ve learned a lot from the diNMachine thing in terms of working with sound because in a sense, with multi-speakers, you’re never really mixing like you do in stereo.  It’s actually a lot easier to just throw sounds around and you don’t have to worry about their balance in the stereo field.  Working exclusively in stereo for a number of years now has taught me enormous amounts about this, and I’ve been trying to apply it to the multi-channel stuff.  It’s really opened my ears, too, and opened up a lot of new possibilities.

FJO:  You talked about creating home environments. This is very different from recordings people listen to in their homes, including all of yours, which are mixed down to stereo. I would think that really misses the spatialization which is a key element in so much of your work. Maybe your recordings should ideally be issued in 5.1 surround sound.

“I’m not such a fan of 5.1.”

MJS:  That’s why Richard [Garet] released the tracks in an 8-channel version.  It’s not surround.  I opted not to do that because I’m not such a fan of 5.1. And I don’t really believe that people are setting it up correctly.  It’s just like stereo, only not as developed in a way.

FJO:  Interesting.  So, it’s okay to listen to something you’ve created to be experienced spatially on a computer with headphones?

MJS:  Not so much. I put a lot of time into the stereo version.

FJO:  So listening to these tracks on a computer is sort of like looking at photographs of paintings.

MJS:  Yeah, like a reference or something.  I had the eight tracks, and I created eight spaces in the stereo field with different characteristics. Then I put the tracks into those eight spaces.  So it’s not just panning them around; it’s really trying to get depth and a sense that there are these eight separate spaces in it.  That’s another thing I really would like to continue working with.  And actually working with people who understand it a lot more than I do and who have software chops and can maybe design specific things that I can use.
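
The processing behind those eight “spaces” isn’t detailed, so the following is only a guess at its flavor: each space is a fixed pan position plus a crude depth cue (attenuation and a lowpass), and each track is rendered into its space before summing. The pan/depth values and filter settings are invented for illustration, not taken from the Variations mixes.

# A simplified sketch of placing tracks into distinct stereo "spaces":
# farther-back spaces are quieter and duller; each space has its own pan.
import numpy as np
from scipy.signal import butter, lfilter

SR = 44100

def place(mono, pan, depth):
    """pan in [-1, 1]; depth in [0, 1], where 0 is close and 1 is far."""
    gain = 1.0 - 0.7 * depth                        # farther = quieter
    cutoff = 12000 * (1.0 - 0.8 * depth)            # farther = darker
    b, a = butter(2, cutoff / (SR / 2), btype="low")
    dull = lfilter(b, a, mono) * gain
    theta = (pan + 1) * np.pi / 4                   # constant-power pan law
    return np.stack([np.cos(theta) * dull, np.sin(theta) * dull], axis=1)

# eight assumed (pan, depth) pairs spread across the field
spaces = [(-0.9, 0.1), (-0.6, 0.7), (-0.35, 0.3), (-0.1, 0.9),
          (0.1, 0.5), (0.4, 0.2), (0.7, 0.8), (0.95, 0.4)]

tracks = [np.random.randn(SR * 2) * 0.1 for _ in spaces]   # placeholder audio
mix = sum(place(track, pan, depth) for track, (pan, depth) in zip(tracks, spaces))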

FJO:  Getting back to dance music, albeit of a very different sort, for years you’ve collaborated with the choreographer Liz Gerring. You had mentioned needing to keep things simpler if it’s being danced to, in a pop/club context. Clearly in these pieces, they’re all professional dancers and there’s a kind of Gesamtkunstwerk that happens between the choreography and music. However, once again, this is something that exists in time and in a space where people are sitting in seats observing the work.  Ideally they’re listening to the music and it is a key element, but a dance audience is primarily there to see the dance and so the music has a somewhat subordinate role to it. I imagine that some of these considerations might make you create sound in a different way.

MJS:  I have to say Liz is amazing to work with.  She’s an amazing collaborator.  She regards the music as absolutely equal to the dance. Maybe not absolutely equal, but there’s no point in debating whether it’s 60/40 or whatever; they’re the two elements that are paramount in the work. So the music is an important element, and it’s what we’ve been grappling with all these years in this relationship.

We started from the Cage-Cunningham aesthetic of doing dance installations, where Liz would dance for three to four hours and I would improvise on my laptop at the same time, but not necessarily in any way interacting with what she was doing.  But over the years we’ve talked a lot about what we want to do.  How do we want to work on this relationship?  Do we want the music to reflect what’s going on in the dance?  To what extent?  In what ways?  We’re lucky that we’re very much on the same page aesthetically. We have a similar kind of feeling about art and work, and at this point each piece is approached in a determined way to do it better than we did the last time—to be more collaborative, to think more about that relationship and to do something innovative or interesting in that relationship.  Sometimes there are constraints based on the practicality of doing a theater piece, but it’s a completely different way of working. It’s not a defined or set way of working; it’s changing all the time.  There are elements both of what I do in the band as well as the installation approach.  You can’t really pin it down.  We’re always exploring, so it’s always different but it’s got elements of other things that I do as well.

FJO:  We talked about the concert hall and Schoenberg and Cage and all that stuff, and it sort of being anathema to an audience that is used to hearing pieces by Mozart for which they can reasonably predict what the next eight measures are going to be. Yet, if you’re in a space for a dance performance, I think as a composer writing for dance you can get away with doing a lot more. Audiences for dance performance will listen to a Cage score; Cunningham had huge audiences. Is it the visual element?  Does being able to look at something besides the musicians playing their instruments—or, in the case of more experimental electronic music, twisting knobs or sitting in front of a laptop—help bring audiences into those sounds more?  I don’t know.

MJS:  I don’t know.  If we’re talking about the ideal listener-viewer, I think that’s one thing.  If we’re talking about a typical audience, that’s another. Both are obviously important.  Not everybody can be an ideal, educated listener-viewer, but I think that regardless of what the audience is going to think or perceive, it’s really up to us to be very sensitive to the relationship of the sound and the dance.  And not to use the distraction—so to speak—of the dance, or of the visuals as a way to get away with things that don’t really work with the dance.

If we’re going to be really sensitive to what’s going on, one thing is surround sound.  I like to use surround.  But it’s problematic because the viewers are looking at a stage most of the time and to start throwing things in the back is going to compete with that.  Not that that’s a bad thing, but you have to be careful and you have to acknowledge that that creates a dissonance with the typical attitude of the viewer.  That’s why in movies they’re very careful about how they use surround sound.  It’s actually mandated in the spec for surround sound that only effects like bombs exploding and things like that are going to be used on the rear channels.  Everything else is in the front: dialogue, music, diegetic sound—what’s called Foley.

FJO:  I guess the way around that would be to have a dance performance that is not on a proscenium stage where you have people moving all around in a space.

MJS:  Exactly. We’re actually actively looking to do that.  It’s hard to play with a proscenium stage.  That’s the thing.  That’s what Liz really grapples with because she’s not particularly a theater person that wants that perspective on the movement. She wants to go beyond what is typical in the theater. I haven’t thought about it that much, but I would imagine that it parallels the development of music where you had ballet in the theater and that established a certain way of presenting movement and the relationship with the dancers and what not.

FJO:  Once again with classical ballet, viewers probably would know what the next eight moves are going to be.  This brings us full circle.

MJS:  Yeah.

A bookcase in Schumacher's hallway that is filled with speakers and a hat on top of one of them.

Live Sound Processing and Improvisation

Intro to the Intro

I have been mulling over writing about live sound processing and improvisation for some time, and finally I have my soapbox!  For two decades, as an electronic musician working in this area, I’ve been trying to convince musicians, sound engineers, and audiences that working with electronics to process and augment the sound of other musicians is a fun and viable way to make music.

Also a vocalist, I often use my voice to augment and control the sound processes I create in my music, which encompasses both improvised and composed projects. I have also been teaching (Max/MSP, Electronic Music Performance) for many years. My opinions are influenced by my experiences as both an electronic musician who is a performer/composer and a teacher (who is forever a student).

A short clip of my duo project with trombonist Jen Baker, “Clip Mouth Unit,” where I process both her sound and my voice.

Over the past 5-7 years there has been an enormous surge in interest among musicians, outside of computer music academia, in discovering how to enhance their work with electronics and, in particular, how to use electronics and live sound processing as a performable “real” instrument.

So many gestural controllers have become part of the fabric of our everyday lives.

The interest has increased because (of course) so many more musicians have laptops and smartphones, and so many interesting game and gestural controllers have become part of the fabric of our everyday lives. With so many media tools at our disposal, we have all become amateur designers/photographers/videographers, and also musicians, both democratizing creativity (at least for those with the funds for laptops/smartphones) and exponentially increasing, and therefore diluting, the resulting output pool of new work.

Image of a hatted and bespectacled old man waving his index finger with the caption, "Back in my day... no real-time audio on our laptops (horrors!)"

Back when I was starting out (in the early ’90s), we did not have real-time audio manipulations at our fingertips—nothing easy to download or purchase or create ourselves (unlike the plethora of tools available today).  Although Sensorlab and iCube were available (but not widely), we did not have powerful sensors on our personal devices, conveniently with us at all times, that could be used to control our electronic music with the wave of a hand or the swipe of a finger. (Note: this is quite shocking to my younger students.) There is also a wave of audio analysis tools using Music Information Retrieval (MIR) and alternative controllers, previously only seen at research institutions and academic conferences, all going mainstream. Tools such as the Sunhouse sensory percussion/drum controller, which turns audio into a control source, are becoming readily available and popular in many genres.

In the early ’90s, I was a performing rock-pop-jazz musician, experimenting with free improv/post-jazz. In grad school, I became exposed for the first time to “academic” computer music: real-time, live electroacoustics, usually created by contemporary classical composers with assistance from audio engineers-turned-computer programmers (many of whom were also composers).

My professor at NYU, Robert Rowe, and his colleagues George Lewis, Roger Dannenberg, and others were composer-programmers dedicated to developing systems to get their computers to improvise, or to building other kinds of interactive music systems.  Others, like Cort Lippe, were developing pieces for an early version of Max running on a NeXT computer, using complex real-time audio manipulations of a performer’s sound as the sole—and live—electroacoustic sound source and as the source of all control (a concept that I personally became extremely interested and invested in).

As an experiment, I decided to see if I could create simplified versions of the live sound processing ideas I was learning about. I started to bring them to my free avant-jazz improv sessions and to my gigs, using a complicated Max patch I made to control an Eventide H3000 effects processor (which was much more affordable than the NeXT machine, plus we had one at NYU). I did many performances with a core group of people willing to let me put microphones on everyone and process them during our performances.

Collision at Baktun 1999. Paul Geluso (bass), Daniel Carter (trumpet), Tom Beyer (drums), Dafna Naphtali (voice, live sound processing), Kristin Lucas (video projection / live processing), and Leopanar Witlarge (horns).

Around that time I also met composer/programmer/performer Richard Zvonar, who had made a similarly complex Max patch as “editor/librarian” software for the H3000, to enable him to create all the mind-blowing live processing he used in his work with Diamanda Galás, Robert Black (State of the Bass), and others. Zvonar was very encouraging about my quest to control the H3000 in real-time via a computer. (He was “playing” his unit from the front panel.)  I created what became my first version of a live processing “instrument” (which I dubbed “kaleid-o-phone” at some point). My subsequent work with Kitty Brazelton and Danny Tunick, in What is it Like to be a Bat?, really stretched me to find ways to control live processing in extreme and repeatable ways that became central and signature elements of our work together, all executed while playing guitar and singing—no easy feat.

Six old laptops all open and lined up in two rows of three on a couch.

Since then, over 23 years and 7 laptops, many gigs and ensembles, and a few CD releases, I’ve kept working on that same “instrument,” updating my Max patch, trying out many different controllers and ideas, and adding real-time computer-based audio once that became possible on a laptop in the late ’90s. I’m just that kinda gal; I like to tinker!

In the long run, what matters more to me than the Max programming I did for this project is that I was able to develop an aesthetic practice and a set of rules for my live sound processing, rules about respecting the sound and independence of the other musicians, that help me make good music when processing other people’s sound.

The omnipresent “[instrument name] plus electronics”, like a “plus one” on a guest list, fills many concert programs.

Many people, of course, use live processing on their own sound, so what’s the big deal? Musicians are excited to extend their instruments electronically and there is much more equipment on stage in just about every genre to prove it. The omnipresent “[instrument name] plus electronics”, like a “plus one” on a guest list, fills many concert programs.

However, I am primarily interested in learning how a performer can use live processing on someone else’s sound, in a way that it can become a truly independent voice in an ensemble.

What is Live Sound Processing, really?

To perform with live sound processing is to alter and affect the sounds of acoustic instruments, live, in performance (usually without the aid of pre-recorded audio), and in this way create new sounds, which in turn become independent and unique voices in a musical performance.
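For readers who think in code rather than patch cords, here is a minimal sketch of the simplest possible version of this idea. It is written in Python (my own instrument is a Max/MSP patch, not Python), it assumes the python-sounddevice library, and every name and number in it is an illustrative placeholder rather than a recommendation: a single feedback delay applied to a live microphone signal, which becomes a second, delayed “voice” alongside the dry sound.

    # A minimal, illustrative feedback delay on a live input (not my Max instrument).
    # Assumes the python-sounddevice library; all names and values are placeholders.
    import numpy as np
    import sounddevice as sd

    SR = 48000                  # sample rate
    DELAY_SEC = 0.375           # delay time, "played" as if it were a musical value
    FEEDBACK = 0.5              # how much of the delayed signal is fed back into the line
    MIX = 0.8                   # how much of the delayed signal joins the dry signal

    buf = np.zeros(int(SR * DELAY_SEC), dtype=np.float32)   # circular delay line
    write_pos = 0

    def callback(indata, outdata, frames, time, status):
        global write_pos
        dry = indata[:, 0]
        idx = (write_pos + np.arange(frames)) % len(buf)   # positions in the delay line
        delayed = buf[idx].copy()                          # audio written DELAY_SEC ago
        buf[idx] = dry + FEEDBACK * delayed                # write input plus feedback
        write_pos = (write_pos + frames) % len(buf)
        outdata[:, 0] = dry + MIX * delayed                # dry sound plus the delayed "voice"

    # Works as long as the audio block size stays smaller than the delay line.
    with sd.Stream(samplerate=SR, channels=1, dtype="float32", callback=callback):
        input("Processing live input; press Enter to stop.")

Everything musically interesting happens in how those few numbers are changed over time, in response to what the other player is doing; the patch itself is the easy part.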

Factoring in the acoustic environment of the performance space, it’s possible to view each performance as site-specific, as the live sound processor reacts not only to the musicians and how they are playing but also to the responsiveness and spectral qualities of the room.

Although, in the past, the difference between live sound processing and other electronic music practices has not been readily understood by audiences (or even many musicians), in recent years the complex role of the “live sound processor” musician has evolved to often be that of a contributing, performing musician, sitting on stage within the ensemble and not relegated, by default, to the sound engineer position in the middle or back of the venue.

Performers as well as audiences can now recognize electroacoustic techniques when they hear them.

With faster laptops and more widespread use and availability of classic live sound processing as software plugins, these techniques have gradually become more widely accepted over the past 20 years—and in many music genres practically expected (not to mention the huge impact these technologies have had on more commercial manifestations of electronic dance music, or EDM). Both performers and audiences have become savvier about many electroacoustic techniques and sounds and can now recognize them when they hear them.

We really need to talk…

I’d like to encourage a discourse about this electronic musicianship practice: to empower live sound processors to use real-time (human, old-school) listening and analysis of the sounds being played by others, and to develop skills for making real-time (improvised) decisions about how to respond to and manipulate those sounds in a way that allows their electronic sounds to be heard—and understood—as a separate performing (and musicianly) voice.

In this way, the live sound processor is not always dependent on and following the other musicians (who are their sound source), with contributions that are not simply “effects” relegated to the background. Nor will the live sound processor browbeat the other musicians into integrating themselves with, or simply following, the inflexible sounds and rhythms of their electronics, expressed as an immutable/immobile/unresponsive block of sound that the other musicians must adapt to.

My Rules

My self-imposed guidelines, developed over several years of performances and sessions, are:

  1. Never interfere with a musician’s own musical sound, rhythm or timbre. (Unless they want you to!)
  2. Be musically identifiable to both co-players and audience (if possible).
  3. Incorporate my body to use some kind of physical interaction between the technology and myself, either through controllers or the acoustics of the sound itself, or my own voice.

I wrote about these rules in “What if Your Instrument is Invisible?,” my chapter contribution to the excellent book Musical Instruments in the 21st Century: Identities, Configurations, Practices (Springer, 2016).

The first two rules, in particular, are the most important ones and will inform virtually everything I will write in coming weeks about live sound processing and improvisation.

My specific area of interest is live processing techniques used in improvised music, and in other settings in which the music is not all pre-composed. Under such conditions, many decisions must be made by the electronic musician in real time. My desire is to codify the use of various live sound processing techniques into a pedagogical approach that blends listening techniques, a knowledge of acoustics/psychoacoustics, and tight control over the details of live sound processing of acoustic instruments and voice. The goal is to improve communication between musicians, to allow for optional scoring of such work, to make this practice easier for new electronic musicians, and to provide a foundation for them to develop their own work.

You are not alone…

There are many electronic musicians who work as I do with live sound processing of acoustic instruments in improvised music. Though we share a bundle of techniques as our central mode of expression, there is a very wide range of possible musical approaches and aesthetics, even within my narrow definition of “Live Sound Processing” as the real-time manipulation of the sound of an acoustic instrument to create an identifiable and separate musical voice in a piece of music.

In 1995, I read a preview of what Pauline Oliveros and the Deep Listening Band (with Stuart Dempster and David Gamper) would be doing at their concert at the Kitchen in New York City. Still unfamiliar with DLB’s work, I was intrigued to hear about E.I.S., their “Expanded Instrument System,” described as an “interactive performer controlled acoustic sound processing environment” giving “improvising musicians control over various parameters of sound transformation” such as “delay time, pitch transformation” and more. (They were working with the Reson8 for real-time processing of audio on a Mac, which I had only seen done on NeXT machines.) The concert was beautiful and mesmerizing. But lying on the cushions at the Kitchen, bathing in the music’s deep tones and sonically subtle changes, I realized that though we were both interested in the same technologies and methods, my aesthetics were radically different from those of DLB. I was, from the outset, more interested in noise/extremes and highly energetic rhythms.

It was an important turning point for me as I realized that to assume what I was aiming to do was musically equivalent to DLB simply because the technological ideas were similar was a little like lumping together two very different guitarists just because they both use Telecasters. Later, I was fortunate enough to get to know both David Gamper and Bob Bielecki through the Max User Group meetings I ran at Harvestworks, and to have my many questions answered about the E.I.S. system and their approach.

There is now more improvisation than I recall witnessing 20 years ago.

Other musicians I should mention who have been working with live sound processing of other instruments and improvisation for some time include Lawrence Casserley, Joel Ryan (both in their own projects and in long associations with saxophonist Evan Parker’s “ElectroAcoustic” ensemble), Bob Ostertag (influential in all his modes of working), and Satoshi Takeishi and Shoko Nagai’s duo Vortex. More recently: Sam Pluta (who creates “reactive computerized sound worlds” with Evan Parker, Peter Evans, Wet Ink, and others), and Hans Tammen. (Full disclosure: we are married to each other!)

Joel Ryan and Evan Parker at STEIM.

In academic circles, computer musicians, always interested in live processing, have more often taken to the stage as performers operating their software (moving from the central/engineer position). It seems there is also more improvisation than I recall witnessing 20 years ago.

But as for me…

In my own work, I gravitate toward duets and trios, so that it is very clear what I am doing musically, and there is room for my vocal work. My duos are with pianist Gordon Beeferman (our new CD, Pulsing Dot, was just released), percussionist Luis Tabuenca (Index of Refraction), and Clip Mouth Unit—a project with trombonist Jen Baker. I also work occasionally doing live processing with larger ensembles (with saxophonist Ras Moshe’s Music Now groups and Hans Tammen’s Third Eye Orchestra).

Playing with live sound processing is like building a fire on stage.

I have often described playing with live sound processing as like “building a fire on stage,” so I will close by taking the metaphor a bit further. There are two ways to start a fire: with a lot of planning, or with improvisation. Which method we choose depends on environmental conditions (wind, humidity, location), the tools we have at hand, and also what kind of person we are (a planner/architect, or someone more comfortable thinking on our feet).

In the same way, every performance environment affects the responsiveness and acoustics of the musical instruments used there. This is even more pertinent when “live sound processing” is the instrument. The literal weather, humidity, room acoustics, even how many people are watching the concert all affect the de facto responsiveness of a given room, and can greatly affect the outcome, especially when working with feedback or short delays and resonances. Personally, I am a bit of both personality types: I start with a plan, but I’m also ready to adapt. With that in mind, I believe the improvising mindset is needed to work most effectively with live sound processing as an instrument.

A preview of upcoming posts

What follows in my posts this month will be ideas about how to play better as an electronic musician using live acoustic instruments as sound sources. These ideas are (I hope) useful whether you are:

  • an instrumentalist learning to add electronics to your sound, or
  • an electronic musician learning to play more sensitively and effectively with acoustic musicians.

In these upcoming posts, you can read some of my discussions/explanations and musings about delay as a musical instrument, acoustics/psychoacoustics, feedback fun, filtering/resonance, pitch-shift and speed changes, and the role of rhythm in musical interaction and being heard. These are all ideas I have tried out on many of my students at New York University and The New School, where I teach Electronic Music Performance, as well as in a Harvestworks presentation and in my one-week course on the subject at the UniArts Summer Academy in Helsinki (August 2014).


Dafna Naphtali creating music from her laptop which is connected to a bunch of cables hanging down from a table. (photo by Skolska/Prague)

Dafna Naphtali is a sound artist, vocalist, electronic musician, and guitarist. As a performer and composer of experimental, contemporary classical, and improvised music since the mid-1990s, she creates custom Max/MSP programming that incorporates polyrhythmic metronomes, Morse code, and incoming audio signals to control her sound processing of voice and other instruments. Her other projects include music for robots, audio augmented-reality sound walks, and “Audio Chandelier” multi-channel sound pieces. Her new CD Pulsing Dot, with pianist Gordon Beeferman, is on Clang Label.

Remembering Halim El-Dabh (1921-2017): A Citizen of the “Fourth World”

I still remember the first time I heard recordings from the Columbia-Princeton Electronic Music Center that were issued in the early 1960s on Columbia Masterworks, as major a label as it got back in those days. (In fact, the head A&R guy there, Goddard Lieberson, was so powerful that he had the nickname “God.”) But by the time I got my hands on these LPs, bought for a pittance in a second-hand shop in the early ’80s, their liner notes’ claim of this being the music of the future seemed somewhat quaint. There was, however, a track on one of those records that didn’t sound at all like either wishful thinking from the past or a never-arrived-at future; it was just plain weird, but in a wonderful way. It was Leiyla and the Poet by an Egyptian-born composer named Halim El-Dabh.

El-Dabh came to the United States on a Fulbright in 1950, studied with Ernst Krenek and Aaron Copland, and wrote scores for Martha Graham. He was subsequently invited to work in the electronic music studio at Columbia University by Otto Luening and Vladimir Ussachevsky, after having already pursued electronic music independently. (Over a decade earlier in Cairo, he had already experimented with manipulating sounds using wire recorders, at least four years before Pierre Schaeffer “invented” musique concrète.) To my 1980s ears, the 1959 piece he created at the Columbia-Princeton Electronic Music Center, Leiyla and the Poet, sounded like a bizarre amalgam of psychedelic rock and the emerging “world music” that was being created by traditional musicians from across the globe. But of course, Leiyla and the Poet predates all of those developments, too.

For decades that was the only piece of his I had ever heard, even though I treasured it. Then, at some point a little over a dozen years ago, Halim El-Dabh showed up briefly at the offices of the American Music Center to give us a copy of Denise A. Seachrist’s 2003 biography of him, The Musical World of Halim El-Dabh. After reading the book, I learned that in the late 1960s, El-Dabh accepted a tenured position at Kent State University in northeastern Ohio and, though he continued to travel around the world to teach and perform, it remained his home for the rest of his life. I had hoped to listen to more of his music, which is woefully underrepresented on commercial recordings (though there are some intriguing samples of it in a CD that accompanied the biography and on his website), and to eventually do a talk with him for NewMusicBox. But it never happened. On September 2, 2017, El-Dabh died at the age of 96, just a few months after attending the premiere of one of his recent compositions.

Back in June 2017, Tommy McCutchon, founder of the vital Unseen Worlds record label, conducted an extensive interview with Halim El-Dabh, which might contain El-Dabh’s final in-print reflections on his three-quarter-century involvement with musical traditions from around the world and finding ways to connect them together. In his preface to the interview, McCutchon stated that although the term “Fourth World” is now acknowledged as “the conceptual invention of American composer Jon Hassell, used to describe a particular style of ambient music he first popularized in the late seventies in collaboration with Brian Eno,” another example (which also predates it) is the “fully integrated cultural representation” in “the work of Egyptian-born composer, educator, electronic music pioneer, and ethnomusicologist Halim El-Dabh.”

‘Fourth World Music’ has since become a dominant sub-genre designation for any music that combines avant-garde electronic processing with a mélange of world music aesthetics. In it, familiar reference points intersect at an unlocatable place in the listener’s imagination, where the intellect is allowed to thrive. We can easily locate the Third World in popular culture, news, and travel, but the Fourth is the lesser-known beyond. It is not unlike four-dimensionality: we all know what 3D is, but the concept starts to get fuzzy when we talk about a fourth dimension.

For El-Dabh, however, this lesser-known beyond was where he and his music lived his whole life, and it was how he taught music to all people:

“I don’t like the idea of separation, and looking at it as something different. I don’t like that about Western music education. The way you start at school, the children have a natural rhythm. Teaching everything in 4/4 or 2/2 [meter]—I think there’s more to teach [than that]. I’ve met with a lot of elementary schools, and the kids have natural rhythms, a variety of natural rhythms. So, why should I hammer in them certain rhythms they’re really not used to? When you talk about Western music, that’s a huge tradition you’re talking about. The influence of Western music is huge. We just have to look at it in a variety of ways, and enhance in certain ways.”

Composing Xenharmonic Music

A very important aspect of music composition is, of course, that of consonance and dissonance. Consonant chords sound clean and smooth, whereas dissonant chords sound harsher and generally have an audible “beating,” like a fast tremolo. Dissonance lends the feeling of an unanswered question (such as a dominant 7th chord), and consonance gives us the feeling that it has been answered (such as a major triad). This basic concept of various musical passages leading us through question-and-answer or tension-and-release feelings should be as valid in xenharmonic music as it is in music written in standard twelve-tone equal temperament. But it’s a challenge!

Let’s begin with the fact that you can throw most of the harmony lessons you’ve ever had right out the window when composing xenharmonic music. We don’t necessarily hear standard concepts like “major” or “minor” or “dominant” in other tunings. Instead, each tuning is its own alien world ripe with unexplored territory, each with its own set of melodies, chords, and progressions waiting to be discovered and theorized. When we do stumble upon note combinations that remind us of standard chords, they may sound a bit “off,” or else the transition from one chord to another may feel slightly different than what we’re used to. That kind of push and pull on our traditionally trained music brains is what I personally enjoy.

You can throw most of the harmony lessons you’ve ever had right out the window when composing xenharmonic music.

Xenharmonic purists tend to focus on the mathematics of tunings, expressing tonal relationships as interval ratios. They generate beautiful mathematical charts, entropy maps, and lattices, which deeply inspire xenharmonic composers, including those of us who aren’t purists! However, many xenharmonic musicians take the desire for pure ratios to an extreme, wanting everything to be perfectly in tune. This leads to an interest in “just tunings” (unequal temperaments based on pure ratios), in zillion-notes-per-octave systems, or even in “dynamic tunings” that offer a constant stream of perfect chords, as free as possible from any beating. The purer the ratios (low-integer ratios are purest), the cleaner and smoother the sound.

Harmonic entropy plotted for triads.

Harmonic entropy is plotted for triads. See the original post on The Xenharmonic Alliance.
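If you want to see where the “purity” numbers come from, the arithmetic is simple: a frequency ratio converts to cents as 1200 × log2(ratio). Here is a small sketch of that conversion in Python (purely illustrative), comparing a few low-integer just intervals with the nearest notes of 12edo:

    # Converting frequency ratios to cents and comparing a few just intervals
    # with their nearest 12edo approximations. Illustrative arithmetic only.
    import math

    def cents(ratio):
        return 1200 * math.log2(ratio)

    just_intervals = {
        "perfect fifth (3/2)":    3 / 2,
        "major third (5/4)":      5 / 4,
        "minor third (6/5)":      6 / 5,
        "harmonic seventh (7/4)": 7 / 4,
    }

    for name, ratio in just_intervals.items():
        just = cents(ratio)
        nearest_12edo = round(just / 100) * 100   # nearest 12edo step, in cents
        print(f"{name:24s} {just:7.1f} cents  "
              f"(12edo misses it by {nearest_12edo - just:+5.1f} cents)")

The fifth lands within a couple of cents, but the thirds and the harmonic seventh miss by quite a bit more, and that mistuning is exactly the beating the purists are trying to eliminate.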

In my mind, however, the more important angle to consider is how we perceive one note or chord leading into the next. We hear music over time as a series of notes and chords, after all. Harmonic movement is where emotion and meaning come alive in a composition. That is far more important to me than whether each individual snapshot in time is in tune or not. It is all a matter of taste and aesthetic, but I don’t usually enjoy music that is based on pure ratios throughout, because it sounds one-dimensional to my ears. It misses the boat on dissonance, which is just as important as consonance. Yin and yang, light and dark, tension and release!

Dissonance is just as important as consonance.

Beating or not, part of what contributes to our sense of consonance and dissonance is simply what we’re used to. In the Western world we’ve heard our imperfect twelve-tone equal temperament all our lives, and therefore may perceive perfectly in-tune 3rds and 6ths as sounding worse than their tempered counterparts, which have more beating. That simple fact has sparked much curiosity and debate about how our brains actually perceive consonance and dissonance.
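To put a rough number on that beating: for a major third, the 5th harmonic of the lower note and the 4th harmonic of the upper note nearly coincide, and the difference between those two frequencies is the beat rate you hear. A quick sketch, with illustrative values:

    # Beat rate of a just vs. a 12edo major third above A440. Illustrative only.
    root = 440.0                             # lower note (A4)
    just_third = root * 5 / 4                # 550.00 Hz
    tempered_third = root * 2 ** (4 / 12)    # about 554.37 Hz

    root_partial = 5 * root                  # 2200 Hz, the partial the thirds rub against
    print("just third beats at",
          abs(4 * just_third - root_partial), "Hz")                 # 0.0 Hz
    print("12edo third beats at",
          round(abs(4 * tempered_third - root_partial), 1), "Hz")   # about 17.5 Hz

Seventeen-some beats per second is a fast shimmer, and yet to most Western ears the tempered third is the one that sounds “right.”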

It may be a surprise to learn that modern research shows strong evidence that beating is not the best measure of whether chords and intervals sound pleasant or in tune (Edward Large et al.). Our brains don’t directly decipher in tune-ness from beating. What actually happens when we hear musical sound is that our neurons begin oscillating, and this “neural resonance” dynamically “pulls” intervals into tune, as long as the frequencies are within proximity to ratios of the harmonic series. In chaos theory speak, our neural oscillations become an “attractor.” In musician speak, if it’s close enough for rock’n’roll, it will sound in tune!

In musician speak, if it’s close enough for rock’n’roll, it will sound in tune!

I think this is good news all around. For one, the research shows that our sense of consonance is indeed driven by our preference for the harmonic series, and therefore all of our traditional musical ideas still stand. But more profoundly, it shows that our traditional harmony is a mere branch of something larger. With every new research paper in this field, we can begin to see the outlines of a universal harmonic theory, implying that we can develop unique but related “harmonic rules” for any tuning.

Now enter my world as an equal-temperament composer. I believe that music composition in equal temperaments is easier and simpler than using “just” tunings or other options, and that it’s an entirely legitimate means of music composition. For me, personally, equal temperaments have offered decades of fascinating exploration—messy ratios and all. I prefer to fully explore equal-tempered tunings that have a very limited number of notes, such as 10edo, 16edo, 17edo, or 19edo, and to discover their particular “flavor,” as opposed to working with something like 53edo, which has so many choices of frequencies that it doesn’t, in itself, offer a distinct flavor.
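One way to see a tuning’s flavor in numbers is to check how closely its steps can land on a couple of just intervals. Here is a small Python sketch of that comparison (illustrative only); the 19edo row also previews the tradeoff I describe below:

    # How closely a few equal temperaments approximate a just fifth (3/2)
    # and a just major third (5/4). Illustrative arithmetic only.
    import math

    def best_approx(edo, ratio):
        """Return (steps, error in cents) for the EDO step nearest the just ratio."""
        target = 1200 * math.log2(ratio)
        step = 1200 / edo
        steps = round(target / step)
        return steps, steps * step - target

    for edo in (10, 12, 13, 17, 19, 53):
        _, fifth_err = best_approx(edo, 3 / 2)
        _, third_err = best_approx(edo, 5 / 4)
        print(f"{edo:2d}edo: fifth off by {fifth_err:+6.1f} cents, "
              f"major third off by {third_err:+6.1f} cents")

10edo’s third comes out roughly 26 cents flat of 5/4, landing right between major and minor; 19edo’s thirds come within about 7 cents while its fifths drift a little further than 12edo’s; and 53edo nails nearly everything, which is exactly why it offers no single flavor of its own.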

What are these flavors I speak of? In general, microtonal scales (with steps smaller than half steps) offer a tenser vibe, and macrotonal scales (with steps larger than half steps) have a more open and alien feel. Any tuning can just as easily sound ugly or exotic or beautiful. It truly depends on how it is used. When I’m trying out a new tuning, it always starts off on the ugly end of the spectrum until I mess around for quite a while, eventually discovering chord combinations, nifty melodic lines, and which intervals to avoid.

Any tuning can just as easily sound ugly or exotic or beautiful.

It really helps to have a proper instrument to discover your new tuning on. Even if you aren’t a piano player, keyboard “controllers” (meaning no internal sounds–just keys) are a very flexible and relatively inexpensive way for anyone to get into xenharmonic composition or to expand the setup you may already have. And this goes hand in hand with the “virtual instrument” synthesizers I reviewed last week. I have collected several keyboards over the years and have rearranged the black/white keys for each of my favorite tunings.

The current trend is to use M-Audio Keystations (49, 61, or 88 keys), which can be had for anywhere from $50 to $200 on eBay and other online stores. If you can afford it, buy more than one so that you will have extra keys. You’ll need them if you’re going to dive in and rearrange the keys! Some tunings will need extra black notes, and some will need extra white notes. It’s cheaper in the long run to buy extra keyboards rather than extra individual keys, which are usually marked way up in price.

Here is a video that shows how I remove and rearrange keys on an 88-note Keystation, in this case for a 15-note tuning that requires lots of black keys! As you’ll see, you only need a screwdriver, needle-nose pliers, and possibly a sander of some sort.

One deciding factor in choosing a tuning is the level of difficulty in building a keyboard for it. Scales that require a smaller proportion of white notes than normal are easier to put together. White keys are wider, so squeezing more of them onto the keyboard means they have to be made thinner. I have sanded many white keys thinner in my day. It works, but it is not ideal, as pianists are used to uniformly sized white keys. On the other hand, using more black keys than normal results in gaps between the keys, and thus a wider spacing than normal.

I will show a few keyboard examples here along with music links for each, and perhaps you’ll see/hear something that attracts you. Then you can either build one yourself, or ask one of us to build one for you!

10edo is one of my all-time favorites, and yet it gets a bad rap for its impure ratios. Here is something I wrote in 10edo. The diatonic scale in 10edo has larger half and whole steps than in 12edo, and its thirds sit right in between major and minor, lending to its alien feel; the scale also has a harmonic minor vibe to it. In a perfect world, a ten-tone keyboard would look like this:

An idealized 10-tone keyboard

However, putting three white keys in a row would involve skipping some of the keyboard contacts where there would normally be a black note. One solution is to make the C a black note painted white, though that would be a bit strange, since C is the first note of the diatonic scale.

Otherwise, here is my “cheater way,” as I call it: Sawing off the wide part of the white keys allows any desired black/white key arrangement. For this style of keyboard, I remove the entire keyboard cover and build my own handle onto the back. It looks prettier than having a big gap where the white keys are chopped off, but fashioning the handle itself is work! I would not judge anyone who leaves it in the original casing.

10 tone "Jupitar" Vertical Keyboard


10 tone “Jupitar” Vertical Keyboard by Elaine Walker

19edo is highly recommended for anyone who feels a bit intimidated by xenharmonic music composition and would like to ease into it. It is a good “transition tuning,” as it offers something close to our 12-tone diatonic scale but with more pure 3rds and 6ths. Mind you, it has worse 4ths and 5ths–there is always a tradeoff. The experience of 19edo is like an exotic version of 12edo, with some extra black notes for ornamentation. Here is one of my 19edo songs from the ‘90s.

The most typical 19edo keyboard, however, requires doubled up black keys, which leaves unsightly gaps. But again, who’s going to judge? Not I! Simply play a whole step instead of a half step, and a minor third instead of a whole step, and you’ll see the relationship to 12edo!

19-tone keyboard

19-tone keyboard by Darren McDougall. Having more black notes than usual creates gaps between the white keys.

The 17edo keyboard is easy to make, as it has the same ratio of black:white keys as 12edo. It looks like a surreal piano. I thought 17edo sounded terrible until I got used to it, and now it is one of my favorites. 17edo is also a clear favorite in the xenharmonic community. Here is something I composed in 17edo.

17-tone "Insanetar" Vertical Keyboard

17-tone “Insanetar” Vertical Keyboard by Elaine Walker. The keys fit perfectly since it has the same ratio of white and black keys as a standard piano.

Here is a 13edo keyboard which has some white keys shaved thinner. 13edo music is neuron-bending since it is just slightly off from 12edo. Enjoy this 13edo music by Aaron Andrew Hunt. I would not suggest trying this at home. I mainly wanted to show this crazy keyboard with the squished white keys around the single black notes.

13-tone keyboard

13-tone keyboard by Elaine Walker. Having more white notes than normal causes them to not fit. Some white keys are sanded thinner.

When you start looking around The Xenharmonic Alliance and other websites, you will notice many other keyboard options, such as isomorphic keyboards, although they tend to be quite pricey and largely unavailable. My favorite of these is a specific type of hexagon keyboard, known as a sonome (which is like saying “hexagon piano”). If you can get your hands on something cool like this, do it! It will open up a world of xenharmonic improvisation that isn’t as easy on a regular keyboard. Having a five-octave reach, seeing chords as “shapes,” and being able to transpose while maintaining the same fingering can make xenharmonic composition a much smoother experience.

I suggest that you not worry about theory.

Whichever tuning or instrument you choose to compose with, I suggest that you not worry about theory and just improvise by ear for as much time as you can spare. Don’t fret over the specifics of your timbres beyond whether they sound good to you–that is, if you want to compose truly beautiful xenharmonic music. At some point you will want to see what theory is “out there” for your tuning, if any, and it will be interesting to compare it to what you come up with on your own. Share your music and findings with the xenharmonic community. It is always exciting when someone posts new music. Don’t be shy about asking questions. We’re all happy to help and we even build keyboards for each other.

I will leave you with some more informative links:

My page on microtonality
My FAU lecture on microtonality
The Xenharmonic Alliance
My personal website

Xenharmonic performance at Berklee

Xenharmonic performance at Berklee College of Music, 2010

Chris Brown: Models are Never Complete

Despite his fascination with extremely dense structures, California-based composer Chris Brown is surprisingly tolerant about loosely interpreting them. Chalk it up to being realistic about expectations, or a musical career that has been equally devoted to composing and improvising, but to Brown “the loose comes with the tight.” That seemingly contradictory dichotomy informs everything he creates, whether it’s designing elaborate electronic feedback systems that respond to live performances and transform them in real time or—for his solo piano tour-de-force Six Primes—calculating a complete integration of pitch and meter involving 13-limit just intonation and a corresponding polyrhythm of, say, 13 against 7.

“I’ve always felt that being a new music composer, part of the idea is to be an explorer,” Brown admitted when we chatted with him in a Lower East Side hotel room at a break before a rehearsal during his week-long residency at The Stone.  “It’s so exciting and fresh to be at that point where you have this experience that is new.  It’s not easy to get there.  It takes a lot of discipline, but actually to have the discipline is the virtue itself, to basically be following something, testing yourself, looking for something that’s new, until eventually you find it.”

Yet despite Brown’s dedication and deep commitment to uncharted musical relationships that are often extraordinarily difficult to perform, he is hardly a stickler for precision.

“If you played it perfectly, like a computer, it wouldn’t sound that good,” he explained. “I always say when I’m working with musicians, think of these as targets. … It’s not about getting more purity.  There’s always this element that’s a little out of control. … If we’re playing a waltz, it’s not a strict one-two-three; there’s a little push-me pull-you in there.”

Brown firmly believes that the human element is central and that computers should never replace people.  As he put it, “It’s really important that we don’t lose the distinction of what the model is rather than the thing it’s modeled on. I think it’s pretty dangerous to do that, actually.”

So for Brown, musical complexity is ultimately just a means to an end, one that is about giving listeners greater control of their own experiences with what they are hearing. In the program notes for a CD recording of his electro-acoustic sound installation Talking Drum, Brown claimed that the reason he is attracted to complex music is “because it allows each listener the freedom to take their own path in exploring a sound field.”

Brown’s aesthetics grew out of his decades of experience as an improviser—over the years he’s collaborated with an extremely wide range of musicians including Wayne Horvitz, Wadada Leo Smith, and Butch Morris—and from being one of the six composers who collectively create live networked computer music as The Hub. Long before he got involved in any of these projects, Brown was an aspiring concert pianist who was obsessed with Robert Schumann’s Piano Concerto, which he performed with the Santa Cruz Symphony as an undergrad. Now he has come to realize that even standard classical works are not monoliths.

“Everybody in that Schumann Piano Concerto is hearing something slightly different, too, but there’s this idea somehow that this is an object that’s self-contained,” he pointed out.  “It’s actually an instruction for a ritual that sounds different every time it’s done.  Compositions are more or less instructions for what they should do, but I’m not going to presume that they’re going to do it exactly the same way every time.”

Chris Brown’s first album was released in 1989, ironically the same year as the birth of another musical artist who shares his name, a Grammy Award-winning and Billboard chart-topping R & B singer-songwriter and rapper.  This situation has led to some funny anecdotes involving mistaken identity—calls to his Mills College office requesting that he perform at Sweet Sixteen parties—as well as glitches on search engines, including the one on Amazon.

“These are basically search algorithm anomalies,” he conceded wryly. To me it’s yet another reason to heed his advice about machines and not to overly rely on them to solve all the world’s problems.


Chris Brown in conversation with Frank J. Oteri
Recorded at Off Soho Suites Hotel, New York, NY
June 22, 2017—3:00 p.m.
Video presentations and photography by Molly Sheridan
Transcribed by Julia Lu.

Frank J. Oteri:  Once I knew you were coming to New York City for a week-long residency at The Stone and that we’d have a chance to have a conversation, I started looking around to see if there were any recordings of your music that I hadn’t yet heard. When I did a search on Amazon, I kept getting an R & B singer-songwriter and rapper named Chris Brown, who was actually born the year that the first CD under your name was released.

Chris Brown:  Say no more.

FJO:  I brought it up because I think it raises some interesting issues about celebrity. There is now somebody so famous who has your name, and you’ve had a significant career as a composer for years before he was born.  But maybe there’s a silver lining in it. Perhaps it’s brought other people to your music who might not otherwise have known about it—people who were looking for the other Chris Brown, especially on Amazon since both your recordings and his show up together.

CB:  These are basically search algorithm anomalies, but the story behind that is that when the famous Chris Brown started to become famous, I started getting recorded messages on my office phone machine at Mills, because people would search for Chris Brown’s music and it would take them to the music department at Mills.  They would basically be fan gushes for the most part.  Sometimes they would involve vocalizing, because they were trying to get a chance to record.  Sometimes they would ask if he could play their Sweet Sixteen party.  There were tons of them.  At the beginning, every day, there were long messages of crying and doing anything so that they could get close to Chris Brown in spite of the fact that my message was always a professorial greeting.  It didn’t matter.  So it was a hassle.  Occasionally I would engage with the people by saying this is not the right Chris Brown and trying to send them somewhere else.

It’s a common name. When I was growing up, there weren’t that many Chrises, but somehow it got really popular in the ’80s and ’90s.  Anyway, these days not much happens, except that what it’s really meant is kind of a blackout for me on internet searches.  It’s hard to find me if somebody’s looking.  When I started working at Mills, the first thing that David Rosenboom said to me when I came in is there’s this thing called the internet and you should get an email account.  Everybody was making funny little handles for themselves as names.  From that day, mine was cbmuse for Chris Brown Music.  I still have that same email address at Mills.edu.  So I go by cbmuse.  That’s the best I can do.  Sometimes some websites say Christopher Owen Brown, using the John Luther Adams approach to too many John Adamses.  It’s kind of a drag, but on the other hand, it’s a little bit like living on the West Coast anyway, which is that you’re out of the main commercial aspect of your field, which is really in New York. On the West Coast, there’s not as much traffic so you have more time and space.  To some extent, you’re not so much about your handle; you still get to be an individual and be yourself. I could have made a new identity for myself, but I sort of felt like I don’t want to do that.  I’ve always gone by Chris Brown.  I’ve never really attached to Christopher Brown.  Maybe this is a longer answer than you were looking for.

FJO:  It’s more than I thought I’d get. I thought it could have led to talking about your piece Rogue Wave, which features a DJ. Perhaps Rogue Wave could be a gateway piece for the fans of the other Chris Brown to discover your music.

CB:  I don’t think that happens though.  That was not an attempt to do something commercial.  I could talk about that if you like, since we’re on it.  Basically, the DJ on it, Eddie Def, was somebody I met through a gig where I was playing John Zorn’s music at a rock club in San Francisco and through Mike Patton, who knew about him. He invited Eddie to play in the session and he just blew me away.  I was playing samples and he was playing samples.  I was playing mine off my Mac IIci, with a little keyboard, and he was playing off records.  He was cutting faster than I was some of the time.  Usually you think, “Okay, I’ve got a sample in every key. I can go from one to the other very quickly.”  He just matched me with every change.  So we got to be friends and really liked each other.  We did a number of projects together.  That was just one of them. He’s a total virtuoso, so that’s why I did a piece with him.

FJO:  You’ve worked with so many different kinds of musicians over the years.  From a stylistic perspective, it’s been very open-ended.  The very first recording I ever heard you on, which was around the time it came out, was Wayne Horvitz’s This New Generation, which is a fascinating record because it mixes these really out there sounds with really accessible grooves and tunes.

CB:  I knew Wayne from college at UC Santa Cruz. He was kind of the ringmaster of the improv scene in the early ’70s in Santa Cruz.  I wasn’t quite in that group, but I would join it and I picked up a lot about what was going on in improvised music through participating with them in some of their jam sessions.  Wayne and I were friends, so when he moved to New York, I’d sometimes come to visit him.  Eventually, he moved out of New York to San Francisco.  I had an apartment available in my building, so he lived in it.  He was basically living above us. He was continuing to do studio projects, and this was one of them.  He had his little studio setup upstairs and one day he said, “Would you come upstairs and record a couple of tracks for me?” He played his stuff and he asked me to play one of the electro-acoustic instruments that I built, so I did.  I didn’t think too much more of it than that, but then it appeared on this Elektra Nonesuch record and there was a little check for it. It was my little taste of that part of the new music scene that was going on in New York.  Eventually Wayne moved out and now he lives in Seattle. We still see each other occasionally.  It’s an old friendship.

FJO:  You’ve actually done quite a bit of work with people who have been associated with the jazz community, even though I know that word is a limiting word, just like classical is a limiting word. You’ve worked with many pioneers of improvisational music, including Wadada Leo Smith and Butch Morris, and you were also a member of the Glenn Spearman Double Trio, which was a very interesting group.  It’s very sad.  He died very young.

CB:  Very.

FJO:  So how did you become involved with improvised music?

CB:  Well, I was a classically trained pianist and I eventually wound up winning a scholarship and played the [Robert] Schumann Piano Concerto with the Santa Cruz Symphony. But I was starting to realize that that was not going to be my future because I was interested in humanities and the new wave of philosophy—Norman O. Brown.  I got to study with him when I was there, and he told me I should really check out John Cage because he was a friend of Cage’s: “If you’re doing music, you should know what this is.”  So I went out and got the books, and I was completely beguiled and entranced by them.  It was a whole new way of listening to sound as well as music, or music as sound, erasing the boundary.  So I was very influenced by that, but almost at the same time I was getting to know these other friends in the department who were coming more out of rock backgrounds.  They were influenced by people like Cecil Taylor and the Art Ensemble of Chicago and the free jazz improvisers.  These jam sessions that Wayne would run were in some way related.  There were a lot of influences on that musical strain, but that’s where I started improvising.

To me, improvisation seems like the most natural thing in the world.

I was also studying with Gordon Mumma and with a composer named William Brooks, who was a Cage scholar as well as a great vocalist and somebody who’d studied with Kenneth Gaburo. With Brooks, I took a course that was an improvisation workshop where the starting point was no instruments, just movement and words—that part was from the Gaburo influence.  That was a semester of every night getting together and improvising with an ensemble.  I think it was eight people.  I’d love if that had been documented.  I have never seen or heard it since then, but it influenced me quite a bit.  To me, improvisation seems like the most natural thing in the world. Why wouldn’t a musician want to do it?  Then, on the other side of this, people from the New York school were coming by and were really trying to distinguish what they did from improvisation.  I think there was a bit of an uptown/downtown split there.  They were trying to say this is more like classical music and not like improvisation.  It’s a discipline of a different nature.  Ultimately I think it’s a class difference that was being asserted.  And I think Cage had something to do with that, trying to distinguish what he did from jazz.  He was trying to get away from jazz.

I didn’t have much of a jazz background, but I had an appreciation for it growing up in Chicago. I had some records.  At the beginning I’d say my taste in jazz was a little more Herbie Hancock influenced than Cecil Taylor.  But once I discovered Cecil Taylor, when I put that next to Karlheinz Stockhausen, I started to see that this is really kind of the same. This is music of the same time.  It may have been made in totally different ways, and it results from a different energy and feeling from those things, but it’s not that different.  And it seems to me that there’s more in common than there is not.  So I really never felt there was that boundary.  So I participated in sessions with musicians who were improvising with or without pre-designed structures. It was just something I did.

Once I discovered Cecil Taylor, when I put that next to Karlheinz Stockhausen, I started to see that this is really kind of the same.

The first serious professional group I got involved with was a group called Confluence.  This came about in the late 1970s with some of my older friends from Santa Cruz, who’d gone down and gotten master’s degrees at UC San Diego. It was another interesting convergence of these two sides of the world.  They worked with David Tudor on Rainforest, the piece where you attach transducers to an object, pick up the sound after it’s gone through the object, and then amplify it again.  Sometimes there’s enough sound out of the object itself that it has an acoustic manifestation.  Anyway, it’s a fantastic piece and they were basically bringing that practice into an improvisation setting.  The rule of the group was no pre-set compositional design and no non-homemade instruments.  You must start with an instrument you made yourself and usually those instruments were electro-acoustic, so they had pickups on them, somewhat more or less like Rainforest instruments.  The other people in that group were Tom Nunn and David Poyourow.  When David got out of school he wanted to move up to the Bay Area and continue this group.  One of the members of it then had been another designer, a very interesting instrument maker named Prent Rodgers.  And he bailed.  He didn’t want to be a part of it.  So they needed a new member.  So David asked me if I’d be interested, and I was.  I always had wanted to get more involved with electronic music, but being pretty much a classical nerd, I didn’t really have the chops for the technology.  David, on the other hand, came from that background.  His father was a master auto mechanic, from the electrical side all the way to the mechanical side. David really put that skill into his instrument building practice and then he taught it to me, basically.  He showed me how to solder, and I learned from Tom how to weld, because some of these instruments were made out of sheet metal with bronze brazing rods.  I started building those instruments in a sort of tradition they’d begun, searching for my own path with it, which eventually came about when I started taking pianos apart and making electric percussion instruments from it.

So, long story short, I was an improviser before I was a notes-on-paper composer.  That’s how I got into composing.  I started making music directly with instruments and with sound.  It was only as that developed further that I started wanting to structure them more.

FJO:  So you composed no original music before you started improvising?

CB:  There were a few attempts, but they were always fairly close to either Cageian influence or a minimalist influence.  I was trying out these different styles.  Early on, I was a follower and appreciator of Steve Reich’s music. Another thing I did while I was at Santa Cruz was play the hell out of Piano Phase.  We’d go into a practice room and play for hours, trying to perfect those phase transitions with two upright pianos.  I was also aware of Steve’s interest in music from Bali and from Africa. These were things that I appreciated also.

FJO:  I know that you spent some time in your childhood in the Philippines.

CB:  I grew up in the Philippines between the ages of five and nine.  It wasn’t a long time, as life goes, but it was also where I started playing the piano.  I was five years old in the Philippines and taking piano lessons there.  I was quite taken with the culture, or with the cultural experience I had, let’s say, while I was there.  I went to school with Filipino kids, and it was not isolated in some kind of American compound.  I grew up on the campus of the University of the Philippines, which is a beautiful area outside of the main city, Manila.

FJO:  Did you get to hear any traditional music?

Being an improviser is a great way to get into a cultural interaction.

CB:  Very little because the Philippines had their music colonized.  It exists though, and later I reconnected with musicians at that school, particularly José Maceda, which is another long story in my history.  I’ve made music with Filipino instruments and Filipino composers.  One of the nice things about being an improviser is that collaboration comes much easier than if you’re trying to control everything about the design of the piece of music, so I’ve collaborated with a lot of people all over the place, including performances before we really knew what we were doing.  It’s an exploratory thing you do with people, and it’s a great way to get into a cultural interaction.

Chris Brown in performance with Vietnamese-American multi-instrumentalist Vanessa Vân-Ánh Võ at San Francisco Exploratorium’s Kanbar Forum on April 13, 2017

FJO:  I want to get back to your comment about your first pieces being either Cageian or influenced by minimalism.  I found an early piano piece of yours called Sparks on your website, which is definitely a minimalist piece, but it’s a hell of a lot more dissonant than anything Reich would have written at that time. It’s based on creating gradual variance through repetition, but you’re fleshing out pitch relations in ways that those composers wouldn’t necessarily have done.

CB:  I’m very glad you brought that up.  I think that was probably the first piece that I still like and that has a quality to it that was original to me.  From Reich I was used to the idea of a piece of music as a continuous flow of repetitive action.  But it really came out of tuning pianos, basically banging on those top notes of the piano as you’re trying to get them into tune. I started to hear the timbre up there as being something that splits into different levels.  You can actually hear the pitch if you care to attend to it.  A lot of times the pitch is hard to get into tune there, especially with pianos that have three strings [per note]. They’re never perfectly in tune.  They’re also basically really tight, so their harmonic overtones are stretched.  They’re wider than they should be.  They’re inharmonic, rather than harmonic, so it’s a kind of a timbral event.  So what I was doing was kind of droning on a particular timbre that exists at the top of the piano, trying to move into a kind of trance state while I was moving as fast as I can repeating these notes. The piece starts at the very top two notes, and then it starts widening its scope, until it goes down an octave, and then it moves back up.  It was a process-oriented piece.  There wasn’t a defined harmonic spectrum to it except that which is created when you make that shape over a chromatically tuned top octave of the piano.  It didn’t have the score.  It was something that was in my brain.  It would be a little different every time, but basically it was a process, like a Steve Reich process piece, one of the earliest ones.
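The “stretched” overtones Brown describes come from string stiffness: in the standard stiff-string model, the nth partial sits at n · f1 · sqrt(1 + B · n²) rather than at n · f1. A rough sketch of that arithmetic follows; the coefficient B used here is purely illustrative (real pianos vary, and B is largest in the short strings at the top of the range).

    # Stretched partials of a stiff string: f_n = n * f1 * sqrt(1 + B * n^2).
    # B is an assumed, illustrative inharmonicity coefficient, not a measured one.
    import math

    f1 = 3520.0   # a note near the top of the piano (roughly A7), for illustration
    B = 0.01      # illustrative coefficient; largest in the short treble strings

    for n in range(1, 5):
        pure = n * f1                                  # where a harmonic partial would sit
        stretched = n * f1 * math.sqrt(1 + B * n * n)  # where the stiff string puts it
        sharp_by = 1200 * math.log2(stretched / pure)  # deviation in cents
        print(f"partial {n}: {stretched:8.1f} Hz, about {sharp_by:5.1f} cents sharp")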

FJO:  So when did you create the notated score for it?

CB:  Well, I tried a couple of times, but it wasn’t very satisfactory. I made the first version for a pianist who lives in Germany named Jennifer Hymer. She played it first probably around 2000. Then 15 years later, another pianist at Mills—Julie Moon—played it, and she played the heck out of it. So now there is a score, but I still feel like I need to fix that score.

FJO:  I think it’s really cool, and I was thrilled that there was a score for it online that I could see. You also included a recording of it.

CB:  I just don’t think the score reflects as well as it could what the piece is about.  I always intended for there to be a little bit of freedom in it that isn’t apparent when you just write one set of notes going to the next set of notes.  There has to be a certain sensibility that needs to be described better.

FJO:  Bouncing off of this, though it might seem like a strange connection to make, when I heard that piece and thought about how it’s taking this idea of really hardcore early minimalist process music, but adding more layers of dissonance to it, it seemed in keeping with a quote that you have in your notes for the published recording of Talking Drum, which I thought was very interesting:  “I favor densely complex music, because it allows each listener the freedom to take their own path in exploring a sound field.”  I found that quote very inspiring because it focuses on the listener and giving the listener more choices about what to focus on.

CB:  I think I still agree with that. I’m not always quite going for the most complex thing I can find, but I do have an attraction to it. Most of the pieces that I do wind up being pretty complicated in terms of how I get to the result I’m after, even though those results may require more or less active listening. I was kind of struck last night by the performance I did of Six Primes with Zeena Parkins and Nate Wooley. The harmonic aspect of the music is much more prominent and much more beauty-oriented than the piano version is. When I play the piano version, it’s more about the intensity of the rhythms and of the dissonance of the piano, as opposed to the more harmonious timbre of the harp or the continuous and purer sound of the trumpet; the timbre makes the way that you play the notes different.

An excerpt from Chris Brown, Zeena Parkins and Nate Wooley’s trio performance of Structures from Six Primes at The Stone on June 21, 2017.

FJO: But I think also that this strikes to the heart of the difference between composition and improvisation.  I find it very interesting that you’ve gravitated toward these really completely free and open structures as an improviser, but your notated compositions are so highly structured.  There’s so much going on, and in a piece like Six Primes, you’re reflecting these ratios not just in the pitch relations, but also in the rhythmic relationships. Such complicated polyrhythms are much harder to do in the moment.

CB:  Of course.  But that’s why I’m doing it. I’m interested in doing things that haven’t been done before.  I’ve always felt that being a new music composer, part of the idea is to be an explorer.  Sometimes that motivation is going to get warped by the marketing of the music or by the necessity to make a career, but that was always what I was attracted to about it. From the first moment that I heard Cage’s music, I said, “This is an inventor.  This is somebody who’s inventing something new.”  It’s so exciting and fresh to be at that point where you have this experience that is new.  It’s not easy to get there.  It takes a lot of discipline, but actually to have the discipline is the virtue itself, to basically be following something, testing yourself, looking for something that’s new, until eventually you find it.

I’ve always felt that being a new music composer, part of the idea is to be an explorer.

This is the third cycle of me learning to play these pieces. At first, I just wanted to know it was possible. And next, I wanted to record it. This time, I’m looking to do a tour where I can perform it more than once. Each time I do it, it gets easier. At this point, I’m finally getting to what I want. With 13 against 7, for example, I know perfectly how it sounds, but I don’t have to play it mechanically. It can breathe like any other rhythm does, but it has an identity that I can recognize because I’ve been doing it long enough. It seems strange to me that music is almost entirely dominated by divisions of two and three. We have five every once in a while, but most people can’t really do a five against four, except for percussionists. There are a lot of complex groupings of notes in Chopin, but those are gestures, almost improvisational gestures I think, rather than actual overlays of divisions of a beat. Some of this is influenced by my love of and interest in African-based musics that have this complexity of rhythm that is simply beyond the capability of a standard European-trained musician: actually getting into the divisions of the time and executing them perfectly and doing them so much that they become second nature so that they can be alive in performance, rather than just reproduced. It’s a big challenge, but I’m looking for a challenge and I’m looking for a new experience that way.

An excerpt from Chris Brown’s premiere solo piano performance of Six Primes in San Francisco in 2014.

FJO:  So do you think you will eventually be able to improvise those polyrhythms?

CB:  Maybe, eventually, but I think you have to learn it first. The improvising part is after you’ve learned to do the thing already.  Yesterday I was improvising some of the time. What you do is you start playing one of the layers of the music. In Six Primes part of the idea is you have this 13 against 7, but 13 kind of exists as a faster tempo of the music, and 7 is a slower one.  They’re just geared and connected at certain places, but at any one time in your brain, while you’re playing that rhythm, it might be a little bit more involved in inflecting the 13 than the 7. Sometimes, when things are really pure, you get a feeling for both of them and they’re kind of talking to each other.  As a performer, I would say that that’s the goal.  It’s probably rarer than I wish at this point.  But the only way you can get there is by lots of practice and eventually it starts happening by itself.  I think it’s the same as if you’re playing the Schumann Piano Concerto.  You’re not aware of every gesture you’re making to make that music.  You’ve put it into your body, and it kind of comes out by rote.  You know you’re experiencing the flow of the music, and your body knows how to do it because you trained it.  So it’s the same with Six Primes, but it’s just the materials are different and the focus is different.
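A minimal sketch of the arithmetic behind that gearing (an illustration for this article, not anything from Brown’s own materials) shows why 13 against 7 is so hard to internalize: because 13 and 7 are coprime, the two layers share only the downbeat of each cycle.

```python
from fractions import Fraction

def polyrhythm_onsets(a: int, b: int):
    """Onset times, as fractions of one cycle, for an a-against-b polyrhythm."""
    layer_a = [Fraction(i, a) for i in range(a)]   # a evenly spaced attacks
    layer_b = [Fraction(i, b) for i in range(b)]   # b evenly spaced attacks
    shared = sorted(set(layer_a) & set(layer_b))   # where the two layers coincide
    return layer_a, layer_b, shared

if __name__ == "__main__":
    thirteens, sevens, shared = polyrhythm_onsets(13, 7)
    print("13-layer:", [str(t) for t in thirteens])
    print("7-layer: ", [str(t) for t in sevens])
    print("shared attacks:", [str(t) for t in shared])  # only 0, the cycle downbeat
```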

An excerpt from Chris Brown's piano score for Six Primes

An excerpt from the piano score for Six Primes © 2014 by Chris Brown (BMI). Published by Frog Peak Music. All rights reserved. International copyright secured. Reprinted with permission.

FJO:  And similarly to listen to it, you might not necessarily hear that’s what’s going on.  But maybe that’s okay.

CB:  Yes, that goes to the quote that there’s a multi-focal way of listening that I’m promoting; the music isn’t designed to have one focal point. It’s designed to have many layers, and that basically means that listeners are encouraged to explore it for themselves. It’s an active listening, rather than a prescribed one where you should be listening primarily to this part and not be aware of that part.

The music isn’t designed to have one focal point.

FJO:  In a way, this idea of having such an integral relationship between pitches and rhythms is almost a kind of serialism, but the results are completely different. I also think your aesthetics, and what you’re saying about how one listens to it, are totally different.

CB:  I wouldn’t say it’s modeled on that, but I do like the heavy use of structure. It’s a sculptural aspect of making music. I do a lot of pre-composition. This stuff isn’t just springing out of nowhere. Six Primes actually has a very methodical formal design that’s explained in the notes to the CD. The basic idea is that you have these six prime numbers: 2, 3, 5, 7, 11, and 13. Those are the first six prime numbers. They’re related to intervals that are tuned by relationships that include that number as their highest prime factor. I know that sounds mathematical, but I’m trying to say it as efficiently as possible. For example, the interval of a perfect fifth is made of a relationship of a frequency that’s in the ratio of 3 to 2. So the highest prime of that ratio is a 3. Similarly, a major third is defined by the ratio of 5 to 4. So 5 is the highest prime. There’s also the 2 in there, but the 5 is the higher prime and that defines the major third. There are other intervals that are related to it, such as a 6 to 5, which is a minor third, where the 5 is also the highest prime. And 5 to 3, the major sixth, etc. Basically Western music is based around using 2, 3, and 5 and intervals that are related to that. Intervals that use 7 as the highest prime are recognizable to most western music listeners, but they’re also out of tune by as much as a third of a semi-tone. Usually people start saying, “Oh, I like the sound of that. I can hear it. It’s a harmony, but it sounds a little weird.” Particularly the 7 to 6 interval, which is a minor third that’s smaller than any of the standard ones that Western people are used to, is very attractive to most people but also kind of curious and possibly scary. When you take it to 11, you get into things that are halfway between the semitones of the equal tempered chromatic scale. And 13 is somewhere even beyond that. Okay, so there are all these intervals. The tuning for Six Primes is a twelve-note scale that contains at least two pitches from each of these first six prime factors, which results in a total of 75 unique intervals between each note and every other one in the set.
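The ratio arithmetic is easy to verify. The following sketch (illustrative only, not Brown’s own code) converts a just ratio to cents and reports its prime limit. Running it shows, for example, that 5/4 comes out to roughly 386 cents and 7/6 to roughly 267 cents, about a third of a semitone below the equal-tempered minor third of 300 cents.

```python
from math import log2

def cents(num: int, den: int) -> float:
    """Size of a just interval in cents (1200 cents = one octave)."""
    return 1200 * log2(num / den)

def highest_prime(n: int) -> int:
    """Largest prime factor of n (n >= 2)."""
    factor, largest = 2, 1
    while factor * factor <= n:
        while n % factor == 0:
            largest, n = factor, n // factor
        factor += 1
    return max(largest, n) if n > 1 else largest

def prime_limit(num: int, den: int) -> int:
    """Highest prime appearing on either side of the ratio."""
    return max(highest_prime(num), highest_prime(den))

if __name__ == "__main__":
    for num, den in [(3, 2), (5, 4), (6, 5), (7, 6), (11, 8), (13, 8)]:
        print(f"{num}/{den}: {cents(num, den):7.2f} cents, {prime_limit(num, den)}-limit")
```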

The cover for the CD of Six Primes

Last year, New World Records released a CD of Chris Brown performing Six Primes.

FJO:  Cellists and violinists tune their instruments all the time and since their fingerboards have no frets, any pitch is equally possible. The same is true for singers. But pianists play keyboards that are restricted to 12 pitches per octave and that are tuned to 12-tone equal temperament. And since pianists rarely tune their own instruments, 12-tone equal temperament is basically a pre-condition for making music and it’s really hard to think beyond it. As a classically-trained pianist, how were you able to open your ears to other possibilities?

CB: It was hard. It was very frustrating. It took me a long time, and it started by learning to tune my instrument myself. The first thing was: what are these pitches? Why do I not understand what everybody’s talking about when they’re talking about in tune and out of tune? I’m just not listening to it, because I’m playing on an instrument that’s usually somewhat out of tune. Basically pianists don’t develop the same kind of ear that violinists have to develop, because they don’t have to tune the pitch with every note. So I was frustrated by being walled off from that. But I guess not frustrated enough to pick up the violin and change instruments.

While I was an undergraduate and started getting interested through Cage in 20th-century American music, I discovered Henry Cowell’s piano music, the tone cluster pieces, and I loved them.  I just took to them like a duck to water, and I got to be good at it.  I had a beautiful experience playing some of his toughest tone cluster pieces at the bicentennial celebration of him in Menlo Park in 1976. I really bonded with that music and played it like I owned it.  I could play it on the spot. I had it memorized.   The roar of a tone cluster coming out of the piano was like liberation to me.

FJO:  And you recorded some of those for New Albion at some point.

CB:  That came out of a concert Sarah Cahill put together of different pianists playing; it was nice that that recording came out.

FJO:  It’s interesting that you mention Cowell because he was another one of these people like Wayne Horvitz who could take really totally whacked out ideas and find a way to make them sound very immediate and very accessible. It’s never off-putting, it’s more like “Oh, that’s pretty cool.” It might consist of banging all over the piano, but it’s also got a tune that you can walk away humming.

CB:  I like that a lot about Cowell.  He’s kind of unaffected in the way that something attracted him. He wrote these tunes when he was a teenager, for one thing.  But he wrote tunes for the rest of his life, too.  Sometimes he wrote pieces that have no tune at all.  The piece Antinomy, for example, is amazingly harsh. There’s definitely some proto-Stockhausen there, but it’s not serial.  I think that the ability to not feel like you need to restrict yourself to any particular part of the language that you happen to be employing at the moment is something that is really an admirable achievement.  There’s something so tight about the Western tradition that once you start developing this personal language, you must not waver; this is the thing that you have to offer and it’s the projection of your personality, so how will you be recognized otherwise? I think that’s ultimately a straitjacket, so I’ve always admired people like Cowell and Anthony Braxton. Yesterday I was talking to Nate Wooley about the latest pieces that Braxton is putting out where he’s entirely abandoned the pulse; it’s all become just pure melody. He’s changing.  Why do we think that’s a bad idea?  Eclecticism—if you can do it well and can do it without feeling like you’re just making a collage with stuff you don’t understand—is the highest form, to be able to integrate more than one kind of musical experience into your work.

FJO:  It’s interesting that you veered into a discussion about discovering Cowell’s piano music after I asked you about how you got away from 12-tone equal temperament. Most of Cowell’s music was firmly rooted in 12-tone equal temperament, but he did understand the world beyond it and even tried to explore synchronizing pitch and rhythmic ratios in his experiments with the rhythmicon that Leon Theremin had developed right before he was kidnapped and brought back to the Soviet Union.

CB:  I was definitely influenced by [Cowell’s book] New Musical Resources. As I read about the higher harmonics and integrating them into chords, I would reflect back on what it sounds like when you play it on the piano.  It is very dissonant because of the tuning.  And I realized that.  So I thought, “Well, okay, he just never got there.  He didn’t learn to tune his own piano; maybe I should do that, you know.” I get some of that in Six Primes, I think, because there’s an integral relationship between all the notes. Even though the strings are inharmonic, there’s more fusion in the upper harmonics that can happen.  So these very dissonant chords also sound connected to me.  They’re not dissonant in the same way that an equal-tempered version of it is.  They have a different quality.

I’m also noticing from the other piece we played the night you attended that was using the Partch scale, if you build tone cluster chords within the Partch scale, you get things that sound practically like triads, only they buzz with a kind of fusion that you can only have when the integral version of major seconds is applied carefully.  You get all kinds of different chords out of that.  It’s wonderful.

FJO:  Now when you say Partch scale, we’re basically talking about 11-limit just intonation, in terms of the highest primes, since the highest prime in his scale is 11.

CB:  Right, but it’s more than that. He did restrict himself to the 11-limit, but he didn’t include everything that’s available within that.  He made careful, judicious selections so that he could have symmetrical possibilities inside of the scale.  Its foundation is actually more carefully and interestingly selected than I knew before I really studied it closely.

FJO:  But he worked with his own instruments, which were designed specifically to play his 43-note scale, whereas you are playing this score on a standard keyboard with seven white and five black keys per octave.

CB:  I took an 88-key MIDI controller and I was using it to trigger two octaves of 43 notes.  So I’ve mapped two octaves to the 88 keys. It winds up being 86, but it is possible to do that. I’m thinking in the future of figuring out a way to be able to shift those octaves so I’m not stuck in the same two-octave range, which I haven’t done yet, but that’s kind of trivial programming-wise.
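The index arithmetic behind that mapping is simple. Here is a rough sketch in which the key range, base frequency, and function names are assumptions made for illustration rather than details of Brown’s actual setup: an 88-key controller running from MIDI note 21 to 108 covers two 43-note octaves, with the top two keys unused.

```python
def key_to_scale_position(midi_key: int, lowest_key: int = 21, notes_per_octave: int = 43):
    """
    Map an 88-key MIDI controller (keys 21-108) onto two octaves of a
    43-note-per-octave scale.  Two full octaves use 86 keys, so the top
    two keys are left out.  Returns (octave, degree) or None if unmapped.
    """
    index = midi_key - lowest_key
    if 0 <= index < 2 * notes_per_octave:
        return divmod(index, notes_per_octave)
    return None

def degree_frequency(octave: int, degree: int, ratios, base_freq: float = 220.0) -> float:
    """Frequency of one mapped key, given a list of the scale's just ratios."""
    return base_freq * (2 ** octave) * float(ratios[degree])

if __name__ == "__main__":
    # Keys 21-106 cover the two 43-note octaves; 107 and 108 fall off the end.
    for key in (21, 63, 64, 106, 107, 108):
        print(key, key_to_scale_position(key))
```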

FJO:  Of course, the other problem with that is the associations the standard keyboard has with specific intervals.

CB:  You have to forget that part, and that’s why I didn’t do it in Six Primes.  And also, if I’d done it on an acoustic piano, it would really mess up the string tension on the piano.

FJO:  Julian Carrillo re-tuned a piano to 96 equal and that piano still exists somewhere.

CB:  Yeah, but you can’t re-tune it easily, let’s put it that way. And it loses its character throughout the range because the character of the piano is set up by the variable tension of the different ranges of its strings.

FJO:  But aside even from that, it changes the basic dexterity of what it means to play an octave and what it means to play a fifth.  Once you throw all those relationships out the window, your fingers are not that big, even if you have the hands of Rachmaninoff.

CB:  It becomes a different technique for sure. I’m not trying to extend the technique. What I’m doing with this is essentially I’m making another chromelodeon, which was Partch’s instrument that he used to accompany his ensemble and to also give them the pitch references that they needed, especially the singers, to be able to execute the intervals that he was writing.

FJO:  Well that’s one of the things I’m curious about.  When you’re working with other musicians, obviously you can re-tune the keyboard.  You can re-tune a piano, or you can work with an electronic keyboard where all these things are pre-set. But the other night, you were working with a cellist who sang as well and an oboist.  To get these intervals on an oboe requires special fingerings, but most players don’t know them.  With a cello there are no frets, so anything’s possible, but you really have to hear the intervals in order to reproduce them.  That’s even truer for a singer.  So how do those things translate when you work with other musicians, and how accurate do those intervals need to be for you?

CB:  Those are two questions really.  But I think the key is that you’ve got to have musicians who are interested in being able to hear and to play them.  You can’t expect to write them and then just get exactly what you want from any musician.  Until we wake up 150 years from now and maybe everybody will be playing in the Partch scale so you could write it and everybody can do it!  That’s a fantasy, but I think we’re moving more in that direction.  There are more and more musicians who are interested in learning to play these intervals and all I’m doing is exploiting what’s there.  I’m interested in it.  I talk to my friends who are, and they want to learn how to play like that and that’s what’s happening.  It’s a great thing to be able to have that experience, but it’s not something you can create by yourself.  You have to work with the people who can play the instruments.  For example, you mentioned the oboe. I asked Kyle [Bruckmann] what fingerings he’s using.  “Shouldn’t I put this in the score?”  And he said, “Most of the time what I’m doing is really more about embouchure.  And it’s maybe something that’s not so easily described.”  So it comes down to he’s getting used to what he needs to do with his mouth to make this pitch come out; he’s basically looking at a cents deviation.  So I’ll write the note, and I’ll put how many cents from the pitch that he’s fingering, or the pitch that he knows needs to be sounded.  He’s playing it out of tune with what the horn is actually designed to create and he’s limited in the way that notes sound.  He can’t do fortissimo on each of these notes.  He’s working with an instrument that’s designed for a tuning that he’s trying to play outside of.  It’s crazy. But so far, I would say it’s challenging, but not frustrating so much if I’m translating his experience correctly.  He seems to be very eager to be able to do it, and he’s nailing the pitches.  Sometimes I test him against my electronic chromelodeon and he’s almost always right on the pitch. He’s looking at a meter while he’s playing.  It’s something that a musician couldn’t have done 10 or 15 years ago before those pitch meters became so cheap and readily available.
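The annotation Brown describes amounts to finding the nearest equal-tempered note to a just pitch and writing down the signed difference in cents. A minimal sketch of that calculation (illustrative only, not the tool Brown actually uses) follows; for instance, the 7/4 above A440 lands about 31 cents below G5.

```python
from math import log2

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_tempered(freq: float, a4: float = 440.0):
    """
    Return (note name, octave, cents deviation) for the equal-tempered pitch
    closest to freq: the note you would write, plus the cents annotation
    telling the player how far to bend away from it.
    """
    midi_float = 69 + 12 * log2(freq / a4)   # fractional MIDI number
    midi = round(midi_float)                 # nearest tempered key
    deviation = 100 * (midi_float - midi)    # signed cents from that key
    return NOTE_NAMES[midi % 12], midi // 12 - 1, deviation

if __name__ == "__main__":
    # Example: the harmonic seventh (7/4) above A440.
    freq = 440.0 * 7 / 4
    name, octave, cents_dev = nearest_tempered(freq)
    print(f"{freq:.2f} Hz -> {name}{octave} {cents_dev:+.1f} cents")
```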

More and more musicians are interested in learning to play these intervals.

FJO:  James Tenney had this theory that people heard within certain bands of deviations. If you study historical tunings like Werckmeister III, the key of C has a major third that’s 390 cents. In equal temperament, it’s 400 cents, which is way too sharp since a pure major third is 386 cents. You can clearly hear the difference, but a third of 390 is close enough to 386 for most people.

CB:  I always say when I’m working with musicians, think of these as targets. If you played it perfectly, like a computer, it wouldn’t sound that good. For example, last night, we had to re-tune the harp to play in the Six Primes tuning. Anybody who knows about harp tuning realizes there are seven strings in the octave and you get all the other notes by altering one of those strings a semitone sharp or flat. So it was a very awkward translation. Basically we had a total of 10 of the 12 Six Primes pitches represented. Two of them we couldn’t get. And the ones that we had were sometimes as much as 10 cents out, which is definitely more than it should be to be an accurate representation. But again, this is where the loose comes in with the tight.

In certain cases that wouldn’t work, but in a lot of cases it does. A slight out-of-tuneness can result in a chorus effect as part of the music, and I like that; it gives a shimmer. It’s like Balinese tuning. If that’s what we have to accept on this note, well then so be it, you know. It actually enriches the music in a way. It’s not about getting more purity. That’s what I feel like. There’s a thing I never quite agreed with Lou Harrison about, because he was always saying these are the real pure sounds. These are the only right ones. But they can get kind of sterile by themselves. He didn’t like the way the Balinese mistuned things. But from all those years of tuning pianos, I love the sound of a string coming into tune, the changes that happen; it makes the music alive on a micro-level. It’s important to be able to hear where the in-tune place is, but to play around that place is part of what I like. I don’t expect it to be perfectly in tune. Maybe it’s because I play a piano, and in the extreme ranges of the piano you can’t help that the harmonics are out of tune. They just are. There’s always this element that’s a little out of control, as well as the part that we can master and make truly evoke harmonic relationships.

FJO:  Now in terms of those relationships, is that sense of flexibility and looseness true for these rhythms as well?  Could there be rubatos in 17?

I don’t expect it to be perfectly in tune.

CB:  Yeah, I think that’s what I was saying about being able to play the rhythm in a lively way.  They can shift.  They can talk to each other.  Little micro-adjustments to inflect the rhythm.  If we’re playing a waltz, it’s not a strict one-two-three; there’s a little push-me pull-you in there. That’s how you give energy to the piece.  I think that it’s hard to get there with these complex relationships, but it’s definitely possible.

FJO:  So is your microtonal music always based on just intonation?  Have you ever explored other equal temperaments?

CB:  I’ve looked at them, but they don’t interest me as much because I’m more attracted to the uneven divisions than to the even ones.  Within symmetrical divisions, you can represent all kinds of things and you can even make unevenness out of the evenness if you like.  But it seems like composers get drawn to symmetrical kinds of structures, rather than asymmetrical ones.  Symmetry is fine, but somehow it reminds me of the Leonardo figure inside the circle and the square.  It’s ultimately confining.  I like the roughness and the unevenness of harmonic relationships.

FJO:  We only briefly touched on electronics when you said that you had a rough start with it as a classical music nerd. But I was very intrigued the other night by how Kyle Bruckmann’s oboe performance was enhanced and transformed by real-time electronic manipulations in Snakecharmer, and I was very curious after you mentioned that you had figured out how to make this old piece work again. I know the recording that Willie Winant made of that piece that was released in 1989, but to my ears it sounds like a completely different piece.  I think I like the new piece even more because it sounds more like a snake charmer to me this time; I didn’t quite understand the title before.

CB:  There are three recorded versions of that old piece.

FJO:  That was the only one I’ve heard.

CB:  They’re on the Room record.

FJO:  I don’t know that record.

CB:  Okay, that was rare.  It was a Swiss release.  But that’s kind of an important one for me in my development with electro-acoustic and interactive music. I should get it to you.  Anyway, the basic idea is any soloist can be the snake charmer, the person who’s instigating the feedback network to go through its paces and sort of guiding it.  Probably the strangest was when Willie did it because he can’t sustain.  He’s basically playing percussion, and he’s just basically playing whatever he hears and interacting with it intuitively.  But another version of it was with Larry Ochs playing sopranino saxophone so that’s probably closer; you might hear the relationship there.  It’s more the traditional image of the snake charmer.  It sounds an awful lot like a high oboe; that was a good version.  There’s also the version that I performed, singing and whistling as the input.  Those were three different tracks, but they all start out in a similar way.  Basically the programming aspect is that it goes through a sequence of voices.  And each of those voices transposes the input that it’s receiving from the player in different intervals as the piece goes on.  So there’s a shape of starting with a high transposition going down to where it’s no transposition and below and up again.  It’s a simple sinusoid-type shape.  The next voice comes in and does the same thing with a slightly different rhythmic inflection, then two voices come in together and fill out the field.  That’s the beginning of Snakecharmer in every version so far.  There are about six different voicing changes which are in addition to transposing in slightly different ways to provide rhythmic inflections.  They only respond on the beat. Whatever sound is coming in when it’s time for them to play, that’s the sound that gets transposed.  There are four of these processes going on at once.  Once again, it’s that complexity going on in the chaos created by these different orderings, transpositions of the source.  The other thing is the reason it’s a feedback network is that there comes a point where the player is playing, the sound responds to it, and then the sound that it responds with is louder than what the player’s doing, and that follows itself.  So you start getting a kind of data encoded feedback network that I think of as the snake, an ouroboros snake that’s eating its own tail.
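As a rough illustration of the structure Brown describes, where each voice transposes the live input along a shape that starts high, passes through no transposition and below, and comes back up, sampled only on the beat, here is a toy sketch. Every value in it (the semitone ranges, the number of beats, the offsets) is invented rather than taken from Snakecharmer’s actual patch.

```python
import math

def transposition_curve(step: int, total_steps: int,
                        max_up: float = 12.0, max_down: float = -12.0) -> float:
    """
    A sinusoid-like envelope in semitones: starts high, passes through zero
    (no transposition) and below, then comes back up over one pass.
    """
    phase = 2 * math.pi * step / total_steps
    center = (max_up + max_down) / 2
    span = (max_up - max_down) / 2
    return center + span * math.cos(phase)

def voice_schedule(total_beats: int, phase_offset: int = 0):
    """One voice's transposition (in semitones), sampled once per beat."""
    return [transposition_curve(b + phase_offset, total_beats) for b in range(total_beats)]

if __name__ == "__main__":
    beats = 16
    # Four voices entering with slightly different phase offsets, so their
    # beat-synchronized transpositions of the input overlap rather than align.
    for voice, offset in enumerate([0, 2, 4, 6]):
        curve = voice_schedule(beats, offset)
        print(f"voice {voice}:", [f"{s:+.1f}" for s in curve])
```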

FJO:  How much improvisation is involved?

CB:  Quite a bit.  I’ve never provided a score. I just tell the person what’s going on and ask them to explore the responsiveness of the network. Usually I’m tweaking different values in response to what they’re doing, so it’s a bit of a duet.

FJO:  Taking it back to Talking Drum, you have these notes explaining how people are walking around in this environment. There are these field recordings, and then there are musicians who are responding to them.  I can partially hear that, but I’m not exactly sure what I’m hearing.  Maybe that’s the point of it to some extent.

CB:  That’s not quite right.  We have the recording called Talking Drum.  That is a post-performance production piece that uses things that were recorded at different Talking Drum performances.  That uses field recordings.  In a performance of Talking Drum, there are no field recordings. Basically, the idea is that there are four stations that are connected with one MIDI cable. That cable allows them to share the same tempo. At each of the stations is a laptop computer, a pitch follower, and somebody who’s playing into the microphone. So the software that’s running is a rhythmic program I designed, to which I can give a basic tempo and beat structure that can change automatically at different points in time, but which also responds to input from the performer; the basic idea is that if the player plays on a beat that’s a downbeat, that beat will be strengthened in the next iteration of the cycle. It basically adjusts to what it hears in relationship to its own beat cycle. The idea of the multiplicity of those stations where that’s happening is that they are integrated by staying on the same pulse through the cable. The idea is that the audience is moving around the space that this installation is in and the mix they hear is different in each location. As they move, it shifts. It’s as if they were at a big mixing console, turning up one station and then turning down the other. What I was trying to do was to create a big environment that an audience can actively explore in the same way that I’ve talked about creating this dense listening environment and asking people to listen to different parts on their own. That actually came about from the experience of going to Cuba in the early ’90s, and being at some rumba parties where there were a lot of musicians spread out in different places. I wandered around with a binaural recorder and I recorded the sound as I was moving. Then when I listened to the recording, I was getting this shifting, tumbling sound field and I thought: “There’s no way you could ever reproduce this in a studio. It’s a much richer immersive way of listening. Why can’t I use this as a way to model some experience for live performance or for live audiences?”
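A toy model of the beat-reinforcement behavior Brown describes, with all values invented rather than drawn from the Talking Drum software, could look like this: each station keeps a weight for every beat position, lets the weights decay slightly each cycle, and strengthens any position on which the pitch follower heard the performer attack.

```python
class AdaptiveBeatCycle:
    """
    Sketch of a beat cycle whose accent weights are reinforced whenever the
    performer attacks on them.  Weights, decay, and cycle length are invented
    for illustration.
    """
    def __init__(self, beats_per_cycle: int = 8, reinforcement: float = 0.2,
                 decay: float = 0.95):
        self.weights = [1.0] * beats_per_cycle
        self.reinforcement = reinforcement
        self.decay = decay

    def hear(self, performer_attacks):
        """performer_attacks: set of beat indices the pitch follower detected."""
        for i in range(len(self.weights)):
            self.weights[i] *= self.decay              # old accents slowly fade
            if i in performer_attacks:
                self.weights[i] += self.reinforcement  # played beats get stronger

    def play_cycle(self):
        """Accent pattern the station will use for its next cycle."""
        return [round(w, 2) for w in self.weights]

if __name__ == "__main__":
    station = AdaptiveBeatCycle()
    for cycle in range(4):
        attacks = {0, 3, 6} if cycle < 2 else {0, 4}  # the performer shifts emphasis
        station.hear(attacks)
        print(f"cycle {cycle}:", station.play_cycle())
```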

The cover for Chris Brown's CD Talking Drum.

In 2005, Pogus Productions issued a CD realization of Chris Brown’s Talking Drum.

FJO:  It actually reminds me of when I first heard Inuksuit, the John Luther Adams piece for all the percussionists.  It was impossible to hear everything that was going on at any one moment as a listener. That’s part of the point of it which, in a way, frustrates the whole Western notion of a composition being a totality that a composer conceives, interpreters perform, and listeners are intended to experience in full like, say, the Robert Schumann Piano Concerto. Interpretations of the Schumann might differ and listeners might focus on different things at different times, but it is intended to be experienced as a graspable totality, and a closed system. Whereas creating a musical paradigm where you can never experience it all is more open-ended, it’s more like life itself since we can never fully experience everything that’s going on around us.  But I have to confess that as a listener I’m very omnivorous and voracious so it’s kind of frustrating, because I do want to hear it all!

Compositions are more or less instructions, but I’m not going to presume that they’re going to do it exactly the same way every time.

CB:  Sorry! I think that’s part of the Cage legacy, too. You don’t expect to have it all and what you have is a lot.  Everybody in that Schumann Piano Concerto is hearing something slightly different, too, but there’s this idea somehow that this is an object that’s self-contained.  It’s actually an instruction for a ritual that sounds different every time it’s done.  But I think the ritual aspect of making music is something that really interests me and I would hate to be without it.  Compositions are more or less instructions for what they should do, but I’m not going to presume that they’re going to do it exactly the same way every time.  Maybe some of them think they do, but I don’t think performing artists do that really. It’s mostly about making something that’s appropriate to the moment even if it’s coming from something that’s entirely determined in its tonal and rhythmic structure. That to me is what makes live music always more interesting than fixed media music.  It’s actually not an object.  It’s not something that doesn’t change as a result of being performed.   Of course, fixed media depends on how it’s projected.

FJO:  Perhaps an extreme example of that would be the kinds of work that you do as part of the Hub—electronic music created in real time by a group of people who are physically separated from each other yet all networked together, but there’s really no centralized control, and that’s kind of part of the point of it.

CB:  That’s right.  The idea is to set up the composition process, if you can call it that. It’s not really the same as composing, but it’s a kind of designing.  You’re designing a system that you believe will be an interesting one for these automated instruments to interact inside of.  What we do is usually a specification; each piece has verbal instructions about how to design a system to interact with the other systems.  Then we get it together and get them working and they start making the sound of that piece, which is never exactly the same, but it’s always recognizable to us as the piece that it is, because it’s a behavior. I would say within our group we get used to the kinds of sounds that everybody chooses to use to play their role in the piece, so it starts to get a kind of ad hoc personality from those personal choices that each person makes.

An excerpt of a networked computer performance by John Bischoff, Chris Brown and Tim Perkis (co-founders of the legendary computer network band The Hub) from the Active Music Series in Oakland’s Duende, February 2014.

FJO:  In terms of focusing listening, and perhaps you’ll debate this with me, it seems that, as listeners, we’re trained to focus on a text when a piece has a text. If someone’s singing words, those words become the focal point.  I hadn’t heard much music of yours featuring a text, but I did hear your new Jackson Mac Low song cycle the other night.

CB:  I don’t write a lot of songs, but when I do I find it’s usually a pleasure to work with a pre-set structure that you admire; it’s like you’re dressing up what’s already there rather than having to decide where it goes next.  Of course, you’re making decisions—like what is this going to be, is it going to be different, how is it going to be different, how is it going to be the same?—but it’s nice to have that kind of foundation to build on.  It’s like collaboration.

FJO:  I thought it was beautiful, and I thought Theresa Wong’s voice was gorgeous. It was exquisite to hear those intervals sung in a pure tone and her diction was perfect, which was even more amazing since she was simultaneously playing the cello. But, at the same time, the Stone has weird acoustics.  It’s a great place, but it’s a hole in the wall that isn’t really thought out in terms of sound design so it was obviously beyond your control. I was sitting in the second row and I know Jackson Mac Low’s poems. So when I focused in, I could hear every word she was pronouncing. But I still couldn’t quite hear the words clearly, as opposed to the vocals on Music of the Lost Cities where I heard every word, since obviously, in post-production, you can change the levels. But it made me wonder, especially since you have this idea of a listener getting lost in the maze of what’s going on, how important is it for you that the words are comprehensible?

Music of the Lost Cities from Johanna Poethig on Vimeo.

CB:  Maybe it’s just me, but even in the best of circumstances, I have trouble getting all the words in songs that are staged.  Maybe it’s because I’m listening as a composer, so I’m always more drawn to the totality than I am just to the words.  Most regular people who are into music mostly through song are very wrapped up in the words.  But I’m not sure Mac Low’s words work that way anyway.  I think they are musical and they are kind of ephemeral in the way that they glow at different points.  And if you don’t get every one of them, in terms of what its meaning is, it’s not surprising.  It’s kind of a musical and sonorous object of its own.  So I guess I’m not exceptionally worried about that, although in the recording, I probably do want a better projection of that part of the music than what happened at the Stone.  I was sitting behind her, and I was not hearing exactly what the balance was.  In the Stone, there are two speakers that are not ideally set up for the audience, so it’s not always there exactly the way you want it to be.

FJO:  So is this song cycle going to be on the next recording you do?

Most regular people who are into music mostly through song are very wrapped up in the words.

CB:  I hope we’re going to record it this summer, actually.  It’ll be a chance to get everything exactly right.  I’m very pleased that people are recognizing the purity of these chords that are being generated through the group, but there hasn’t been a perfect performance yet.  Maybe there never will be.  But the recording will get closer than any other one will, and that’ll be nice to hear, too.

FJO:  It’s like the recording project of all the Ben Johnston string quartets that finally got done. For the 7th quartet, which has over a thousand different intervals, they were tuning to intervals they heard on headphones and using click tracks in order to be able to do it. And they recorded it in sections and then patched it all together. Who knows if any group will ever be able to perform this piece live, but at least there’s finally an audio document of what Ben Johnston was hearing in his head.

CB:  I think that’s really a monumental release.  Ben Johnston’s the one who has forged the path for those of us trying to make Western instruments play Harry Partch and other kinds of just intonation relationships.  It’s fantastic.  But I think the other thing that seems to be true is that if you make a record of it, people will learn to play it.  For example, Zeena and Nate the other night, in preparation for that performance, I was sending them music-minus-one practice MP3 files so that they could basically hear the relationships that they should be playing.  It helps a lot.  Recordings also definitely help to get these rhythmic relationships. I often listen to Finale play them back, just to check myself to see if I’m doing them correctly.  A lot of times, I’m not.  It drifts a little bit.

FJO:  But you said before that that’s okay.

CB:  But I want to know where it’s drifting.  I want to know where the center is as part of my learning process.  I use a metronome a lot, and I use the score a lot to check myself, and get better at it.

FJO:  You’ve put several scores of yours on your website. Sparks is on there.  Six Primes is on there.  And there’s another piece that you have on there that’s a trio in 7-limit just intonation—Chiaroscuro. Theoretically anybody could download these scores, work out the tunings for their instruments and play them.

CB:  Sure. Go for it. But they’re published by Frog Peak, so they can get the official copy there. I would like to support my publisher. Because of the way that my compositional practice has developed, a lot of my scores are kind of a mess. I have a lot of scores, but I haven’t released them because they’re kind of incomplete. They often involve electronic components that are difficult to notate, and I haven’t really figured out the proper way to do that. Where there are interactive components, how do you notate that? I’m not that interested in making pieces for electronics where the electronics is fixed and the performer just synchs to it. There’s only one piece I’ve played where I really like doing that, and that’s the Luc Ferrari piece Cellule 75 that I recorded, where the tape is so much like a landscape that you can just vary your synchronization with it.

FJO:  It’s interesting to hear you say that because back in 1989, you said…

CB:  Okay.  Here it comes.

FJO:  “I want electronics to enhance our experience of acoustics and of playing instruments.  Extending what we already do, instead of trying to imitate, improve upon, or replace it.”

A model is never a complete reading of the world.

CB:  Yeah, that was important.  That came out at a time when the industry was definitely moving towards more and more electronic versions of all the instruments, usually cheap imitations.  Eventually those become personalities of their own, but it seems to me they always start as much lesser versions of the thing they’re modeled on.  Maybe it has something to do with this idea of models.  We’re moving more and more into a virtual reality kind of world, and I think it’s really important that we don’t lose the distinction between the model and the thing it’s modeled on. I think it’s pretty dangerous to do that, actually.  The more people live in exclusively modeled environments, the more out of touch they’re going to get and probably the sicker they’re going to get, because a model is never a complete reading of the world.  It’s a way to try to understand something about that world. If you’re a programmer, you’re always creating models.  In a sense, a synthesizer is modeled on an acoustic reality. But once it comes out of the box into the world, it’s its own thing.  It’s that distinction I’m trying to get at.  I think we’re often seduced by the idea that the synthesized thing will replace the real thing rather than the synthesized thing just becoming another reality.  That’s why I’m interested in mixing these things:  singing with the synthesis. Becoming part of a feedback system with a synthetic instrument brings that into a space and into a physical interaction. That seems to be more of a holistic way of expanding our ability to play music with ourselves, with our models of ourselves, with each other through models, or just seeing the models execute music of their own.  The danger comes when you try to make them somehow perfect an idea of what reality is and it becomes the new reality instead of becoming just a new part of the real world.

Kristin Norderval: Permanent and Impermanent Sonic Moments

There is a long tradition of artists creating socially conscious work. Some would say it should be an obligation, especially now in these uncertain and divisive times. But addressing societal wrongs is perhaps the one common focus that unites decades of work created by composer/vocalist Kristin Norderval.

Norderval’s output has been extraordinarily diverse. Her activities include improvisations in which she sings and transforms sounds on her laptop alongside other musicians, a song cycle featuring her own voice accompanied by the viol consort Parthenia, electronic scores for dance, sound installations involving upturned pianos or repurposed trash, and an evening-length opera, The Trials of Patricia Isasa, which premiered last year during the 2016 OPERA America conference in Montreal.

When we met with her across the street from her northern Manhattan apartment, surrounded by nature in Inwood Hill Park (which she described as her back yard), she described the central role that various progressive causes have played in inspiring her music: “As the eldest child of two political scientists, I have always been interested in politics and events in the world. Politics was in our house all the time. So I’ve always been aware, and my music has been very centered in wherever we are as a society.”

One of her earliest realizations, soon after she began writing songs, inspired by Joni Mitchell, Buffy Sainte-Marie, Joan Baez, Odetta, and Yoko Ono, was the lack of visible female role models for women who were interested in composing large-scale works. “I could see myself as a singer-songwriter,” she remembered. “I can see there’s an identity. Whereas a composer seemed like what you do if you write for orchestra or at least for string quartet, or these other things that I didn’t know. I didn’t know other female composers growing up, so it was hard to think I could be that.”

But she persevered, studying both composition and voice in Seattle at the Cornish School and the University of Washington, despite one of her teachers claiming there were no historically significant female composers: “I just knew that was wrong. … It can’t start with me; I’m not that brilliant. So I went looking as an undergraduate. When I did my recitals for voice, I always included female composers, like Hildegard, Strozzi, Clara Schumann, Fanny Mendelssohn, up through the contemporary composers that I was discovering.”

One of the contemporary composers she discovered was Pauline Oliveros, who, during a campus talk for teachers, got the participants to perform one of her deep listening text scores. Norderval was astounded. Shortly afterwards in the school library, she read Oliveros’s introduction to her Sonic Meditations in which she outed herself. “‘I am a human being, a lesbian, a two-legged person, living with cats.’ All these different categories. It really blew my mind because this was the ‘70s. I was out, but it was a very different time.” She went on to apprentice with Oliveros and worked with her for many years, ultimately organizing the last deep listening retreat that Oliveros was part of, in 2015, just a year before her death. Norderval’s immersion in Oliveros’s music and philosophy gave her an aesthetic framework that allowed her to embrace all sound, as well as to pay equal attention to sonic events that are permanent and impermanent.

“If I’m doing an improvisation, and it’s just for here and now, that’s my chosen impermanence,” Norderval explained. “Or if I write a score that has instructions and motives, so it can be done in different ways, the one version is impermanent, but the next version is just as valid. … The voice is always flexible, but … once you’ve got a sound file that’s a fixed sound file, it’s totally inflexible. It’s interesting to me to make a permeation, but also, when I’m working with pre-recorded sound files, I’m processing in the moment, often with several files, and choosing in that moment: Okay, I’m only going to use this little tiny bit of this file. Now I’m going to expand it to the whole thing. Now I’m going to pitch shift. Now I’m going to delay feedback—pretty basic processing tools, but everything is in the moment, so it’s like drawing from a palette that I know. I know the sound files, and I’ll do it in a different mix the next time. It’s like cooking up a different stew.”

Another inspiration for Norderval’s approach, especially for her fascinating installations—many of which she has created in collaboration with her partner, choreographer Jill Sigman—was working in Norway with a Sami sculptor named Iver Jåks, who assembled Arctic stones, wood, reindeer horns, and leather and gave curators free rein in putting them together. “I thought that was so wonderful,” Norderval recalled. “I remember a phone conversation we had—I ended up recording part of it and using it in a piece that I made about him—where he says, ‘I’m not going to last forever; why should my artwork last forever?’”

But nowadays, she acknowledged, she has “come to a combination of notated and improvised.” One of the most precisely notated of her works is the opera The Trials of Patricia Isasa, which is based on the real-life story of a woman who was abducted during the Dirty Wars of the Argentinian dictatorship and who finally brought her torturers to justice 33 years later. It is a poignant and deeply moving work that, while being very much an important story for our own time, has deep resonances that will hopefully earn it a permanent place in the operatic repertoire.

“I think it’s very much our story because the U.S. was behind the whole Operation Condor that supported all those dictatorships,” she explained. “The Ford factory was encouraging the military dictatorship to impose certain economic policies, and they used the Ford factory as a place for torture. My feeling and [librettist] Naomi Wallace’s feeling was that when we look back at a story about this historical period—and it’s not that far back because 2009, when she got a conviction, is very close—it’s a way of saying this is what happened there and this is what could happen here. For me, the important part was the accountability part, because my concern for us as a nation is that we have had no accountability for an invasion of another country that was based on lies and no accountability for torture. The torture and war programs that are committed in all of our names are also related to our prison industrial complex, our mass incarceration, the fact that it’s only been 50 years since we dismantled the legal apartheid system of Jim Crow in this country. I was ten years old when interracial marriage was finally legal. That’s crazy. People have been working to try to dismantle that ever since, and that’s the period we’re in now. We’re in the backlash period. We have to look with a big historical overview to see how to deal with those effects and issues with some kind of accountability. That, for me, was the story. And that is our story. That, plus Patricia was involved in the whole thing. So it’s our collective story.”


Kristin Norderval in conversation with Frank J. Oteri
Recorded in Inwood Hill Park, New York, NY
June 8, 2017—10:30 a.m.
Video presentations and photography by Molly Sheridan
Transcribed by Julia Lu

Frank J. Oteri:  Here we are in the middle of all this nature, yet we’re still in New York City, a non-stop, high-tech, 21st-century urban metropolis. It seems like an apt place to talk with you about your music, since your music seems to have two different things going on within it which often seem to be in contradiction.  One is that it’s all about sheer physicality—mostly the sound of your voice, but it’s not just the sound of your voice. There’s this wonderful passage in your score for a dance piece called Rupture where a dancer is walking, literally, on eggshells.  And it creates a remarkable sound. I probably wouldn’t have realized how that sound was produced if I hadn’t seen it in an online video, but I think it’s an excellent example of paying a great deal of attention to the properties of sound created corporeally. However, you also employ a great deal of electronic manipulation of sound as well as electronically generated sounds in your work.  Those two things seem like opposites to me, but maybe they’re not for you.

Kristin Norderval: To me they don’t feel like opposites.  The technology of electronic sound recording allows us to bring all of nature’s sounds into our art music.  Also, working out that interest in physicality is one of the reasons that I worked for Jill Sigman, my partner, on Rupture.  Those sounds of the eggshells—that was her exploration of that.  So that becomes a sonic element, but it’s starting from her choreography.  That wasn’t in my score, actually; that was her physical exploration in the piece. But there’s a place where we overlap. Both of us are very interested in exploring physical presence: the quality of sound and how you do it, or the quality of movement and how you do it.

FJO:  Of course, in terms of being focused on physicality, your instrument is you, since you’re a singer.

KN:  That’s right.

FJO:  You also still actively sing other people’s music in addition to your own, so you really have a double life as a singer and as a composer. What came first, and how did you realize you had this instrument within you that was capable of such a wide range of sound?

KN:  That’s a big question.  The first memories that I have as a young, young kid—before I was two—are sounds.  And I was singing, people tell me, around two or maybe before.  So I was always singing. I started writing songs when I got my first guitar.  I used my babysitting money and bought a guitar at age ten and started writing songs.  So they’ve always been intertwined, but it’s gone through big changes in focus at different times in my life.  When I was writing songs for guitar and voice, or piano and voice, I was performing in coffee houses, doing that whole kind of thing.  I remember as a teenager saying, “I know how to write chord symbols and write out the words of my songs, but how do you actually write music?”  I could read music, because I’d been taking piano lessons, but I didn’t have the concept of how to actually notate music.  So my goal as a teenager was to try to figure out where I could go to learn to write down what I heard in my head and to be able to hear in my head what I saw on the page.  That was my goal when I went to the University of Washington.

FJO:  I don’t know your earliest music, but on your website you list a solo piano piece from 1980 with a very intriguing title—Aggressions. I’d love to hear that one day.

KN:  I’d have to dig that out of my archives. It’s a hand-written manuscript.

FJO:  Clearly you did figure out how to write down music that was in your head then, or at least some of it. But a great deal of the music that you do nowadays, which uses extended vocal techniques and electronic manipulations, is much more elusive in terms of music notation since a lot of it defies what that notation was developed to notate. When did those kinds of sounds come into your head?

KN:  As a kid, I loved the sound of my dad’s diesel car. I could tell the difference between motors and I loved being on a bus or in any kind of car when the windshield wipers were out of synch; it was fascinating to me.  All kinds of mechanical sounds were very interesting to me, which is another way that I think about electronics. If you go back to steam motors, maybe that’s not electricity, but for all those mechanical sounds, we need power to make them sound and so that’s always been a fascination of mine.  But how do you notate that?  It’s a good question.  It’s still a question to me.  I have things where there are instructions or, if it’s working with a sound itself, then the sound file is the thing.  How do you notate within a metrical or semi-metrical language something that has to be flexible enough to listen to the variances that happen in the sound, like the airplanes going over us?

FJO:  Right.  And obviously, notation’s the enemy of improvisation to some extent since musicians who are trained to be really good at seeing what’s on a page and replicating it precisely—which, mind you, is a really incredible skill to have—often find it somehow counterintuitive to be told they should come up with something on their own. It requires a different headspace.

KN:  I really like both.  There are places in my scores where it’s very specific, and it has to be metric and precise.  And there are other places where one thing is precise and another thing can be fluid over it and change with elbow room or breathing room or room for a different gesture.  Then there are some text scores. I was just working with a group of five actors in Oslo on a theater piece, and I worked with them on deep listening exercises.  My wonderful mentor Pauline Oliveros was a big influence on me.  That kind of listening work and work with improvisation is really central to getting people to the skill sets that they need to interpret a text instruction.  What are the tools you have to interpret that text instruction?  You can interpret it simply or you can interpret it in a more complex way.  That’s where training in improvisation or in listening to sound in a different way comes in.

FJO:  So to get back to high school.  You were a singer-songwriter, playing guitar, taking piano lessons, so obviously understanding how to read music, but not quite understanding how to make that work for your own work.  But you were also intrigued by windshield wiper sounds.  At that point, were you aware of people like Pauline Oliveros, Meredith Monk, or Joan La Barbara?

KN:  I was not.  But I was aware of Yoko Ono.  She was inspiring.  Of course the Beatles were also inspiring, but Yoko Ono was really inspiring!  I had her book Grapefruit in high school.  We were living at that time in Canada in a steel town—Hamilton, Ontario.  I worked with a Grotowski-based theater group for a summer and then continued with them past that.  That exploration of physical theater was really interesting.  But I was also interested in musicology.  I was interested in singing.  I was interested in ethnomusicology and composition. But I didn’t really know any professional musicians until I had checked around the States looking at music schools to try to figure out where I would go.  I ended up going to the University of Washington. They had a program where you could enter as a general music major and then decide over the course of your studies what you were going to major in.  I ended up auditioning for voice, piano, and composition, and I ended up getting a double degree in voice and in composition at the end of that.

That was the start of my opening up to singers like Jan DeGaetani and Leontyne Price.  I had a workshop with Kenneth Gaburo at Cornish which was just like opening the whole world.  I was in the improv group with Stuart Dempster at the University of Washington; he and William O. Smith, Bill Smith, were running that.  Bill Smith was my composition teacher, one of my important composition teachers, along with Diane Thome, who is a wonderful composer for instruments and electronics.  That was also where I was introduced to Pauline Oliveros.  She was giving a talk for teachers. I was there, I guess, on the recommendation of Stuart Dempster.

Pauline gave the audience the score of either the Tuning Meditation or one of the simpler deep listening text scores.  And I was astounded.  I thought, “They’re not going to do this!”  But they did.  She was so trusting in that it was going to be cool.  And it was.  It turned out really cool.  I remember going into the library at the University of Washington and finding a very early edition of her Sonic Meditations with her handwriting in that early score and a picture of her and her introduction where she outed herself: “I am a human being, a lesbian, a two-legged person, living with cats.” All these different categories. It really blew my mind because this was the ‘70s. I was out, but it was a very different time.  Then I had the pleasure of hearing her in San Francisco in concert. When I moved here to New York, I actually was able to work with her and do the whole deep listening apprentice work.  I ended up organizing the last deep listening retreat that she, Ione, and Heloise did together in the Arctic—in Norway in 2015.

FJO:  It’s hard to believe she’s gone.

KN:  Yeah.  Working with her changed the way I make music and the way I listen, the way I relate to all these sounds around us all the time.  She’s amazing.  And she’s still listening, as they say, and so are we.

FJO:  Even though you didn’t know any musicians growing up, your parents seemed to have been supportive of your going in this direction.

KN:  My mother was a very good amateur pianist.  I think as a young person she might have had dreams to follow music, but it wasn’t at all in the cards.  She’s Norwegian and she grew up in Nazi-occupied Norway during the Second World War, so there really wasn’t much opportunity for professional artists.  She went into political science and journalism. My dad was also a political scientist.  He was American.  He was also an amateur violinist, so I knew music.

FJO:  And he was also an instrument collector. You showed me some of his instruments in your apartment.

KN:  That’s right. There were instruments from Southeast Asia in my childhood home, plus recordings from Indonesia from various villages he’d gone to visit.  And home movies.  We lived in Malaysia for a while, and we were in Norway many summers and then lived there for a while when I was a teenager.  Then we lived in Canada and various places in the States.  So I had all kinds of musical influences.

A shelf in Norderval’s apartment containing various art objects and musical instruments.

FJO:  We talk to a lot of composers about their role models. You mentioned Yoko Ono and Pauline Oliveros.

KN:  And Joni Mitchell.

FJO:  What’s interesting in terms of your role models is that all of the people you mentioned are women. We’ve talked to a lot of composers over the years, and many of them, especially those from earlier generations like Pauline, talked about the difficulty of finding female role models. It’s not that there weren’t role models.  There have been all of these significant female composers throughout history, but they’ve been relegated to footnotes.  I don’t need to tell you this; you’ve edited a collection of Clara Schumann’s songs. So I know that you’re aware of this history, and there’s a certain empowerment through knowing that history, I would think.

KN:  Yes, there is.  I have to say, when I was writing songs for voice and piano and for voice and guitar, I had inspiration from Odetta, Buffy Sainte-Marie, Joni Mitchell, and Joan Baez, as well as Bob Dylan, the old blues singers, and Pete Seeger.  I was very influenced by all of that.  So I could see myself as a singer-songwriter.  I can see there’s an identity.  Whereas a composer seemed like what you do if you write for orchestra or at least for string quartet, or these other things that I didn’t know. I didn’t know other female composers growing up, so it was hard to think I could be that.  But it wasn’t so hard to think I can learn how to notate so that I can put what I have in my head onto paper.  It wasn’t a definition that way.  When I was preparing to try to get into the University of Washington, I took composition study at Cornish.  And I remember, I was asking this of my first composition teacher—I was notating some simple things, for solo piano and maybe something for a small instrumental trio combination—and I asked, “Who are the other women composers?”  And it was like, “There aren’t any of importance.”  I just knew that was wrong.  I totally knew that was wrong.  It can’t start with me; I’m not that brilliant.  So I went looking as an undergraduate.  When I did my recitals for voice, I always included female composers, like Hildegard, Strozzi, Clara Schumann, Fanny Mendelssohn, up through the contemporary composers that I was discovering.

FJO:  No doubt the person who said this to you was a male composition teacher.

KN:  It was a male composition teacher.

A piano is still the centerpiece of the living room in Norderval’s apartment.

FJO: Now was this around the time you composed a choral piece based on poetry by Emily Dickinson called Passenger of Infinity?

KN:  That piece actually came after I was finished with my undergraduate degree and I’d moved to San Francisco to do a master’s. Actually, I just worked first, then I got into the conservatory and did a master’s in voice. That was during the AIDS crisis. The San Francisco Lesbian and Gay Chorus was doing some commemoration concerts and fundraising concerts and dealing with the deaths of a lot of colleagues and friends and singing at a lot of funerals.  So they commissioned a work. That was written for them.  I recently redid the last movement of that piece for a cappella [chorus]. The original version was SATB with piano accompaniment, but the last movement had a pretty simple piano accompaniment, so I figured it could work as an a cappella piece.  A little chorus in Montreal, the chorus that sang in my opera there, just did the new a cappella version on a concert in December.

FJO:  Oooh, I want to hear that.  So you still keep that piece in circulation?

KN:  Well, I have the score, but it hasn’t been performed in the version that I did for San Francisco since the original performances.  I’m not the greatest about trying to promote and get re-performances or get my scores out there for multiple things.  I tend to write for specific occasions and specific ensembles or soloists, people that ask me for music. It’s a weakness of mine in terms of promotion, I guess.  But on the other hand, it’s a very personal thing.  The music becomes very much a part of that performer or that ensemble’s identity and experience.

Kristin Norderval standing beside a prehistoric glacial drill hole in Inwood Hill Park.

FJO:  That, of course, is the other contradiction in the world of notated music.  You create a lot of work that is intended to exist in the moment, but once you write something down, you fix it for all eternity theoretically. Suddenly there is the possibility for a piece to have an afterlife after the initial performance. It’s interesting that you don’t really think about that.

KN:  Maybe that’s because there’s that inherent contradiction.  I worked with a Sami sculptor in Norway who is no longer alive—Iver Jåks.  He would work with Arctic stones, wood, reindeer horn, leather woven in a traditional Sami way, and various other things.  He’d assemble these pieces and then say to a curator, “Okay, you put it together.  Here’s the sculpture.”  And I thought that was so wonderful.  We did a whole school tour with an ensemble up in the Arctic, taking his pieces and getting school children to act as curators and put together his sculptures.

Then I did the same with sound. I said, “Okay, let’s go record some sounds.  Now you put the sounds together.”  And it would be different for each group.  That was very liberating. I remember a phone conversation we had—I ended up recording part of it and using it in a piece that I made about him—where he says, “I’m not going to last forever; why should my artwork last forever?”

FJO:  A lot of your work is impermanent. I’m thinking, in particular, of all of your sound installations, many of which defy replicability. You even have one you did outdoors using objects that had been thrown away that you called Our Lady of Detritus.

KN:  That was a collaboration with Jill.  We both wanted to work with this theme of repurposing and recycling and looking at waste and the issues around it.  I had been working with hemispherical speakers and a small digital amp. I knew Holland Hopson had designed hemis for the Princeton Laptop Orchestra, so I used their recipe for small hemis with digital amps built into them.  I wanted to see if it would be possible to run that on solar power.  So we contacted an engineer and figured out how many panels we needed, how many hours of daylight it would take to charge up a big battery, and how long we could perform on that battery.  It was a really interesting project to do.

FJO:  But a really hard piece to document.

KN: Yes, it was.

FJO:  I was particularly intrigued by piano, piano, pianissimo… because you’re not only changing people’s perceptions of what a piano sounds like, but also changing how we respond to them as visual objects by mounting all these broken pianos at different angles.

KN:  That piece came about as a study piece for my opera. I was already involved in the libretto development. I’d been to Buenos Aires, and I’d done a lot of interviews with Patricia Isasa and other survivors [of Argentina’s military dictatorship]—children of the disappeared, and the surviving mothers and grandmothers.  I wanted to do a study piece to start working with the sounds.  My very first image from my impulse to work with Patricia Isasa’s story was of a piece for voice and a kind of trashed piano, where the piano would have the sounds of things that were done to it coming through its own body.  I used the sound installation as a study for those piano sounds, then channeled those sounds through each of the pianos.  It’s an eight-channel installation, and each piano has a transducer affixed to the soundboard, so the piano itself was the loudspeaker.  The sounds that were coming through the pianos were sounds that I had recorded of me doing things to the pianos: scraping on the strings, detuning, hitting strings with metal objects, clipping strings, knocking on the boards.  Some of them are very intense physical sounds.  The idea was that the piano as a body was recounting its own sonic history.  It’s a very bourgeois instrument.  It’s an instrument that’s associated with a certain level of stability in society.  When things up-end that stability, it has a hard time existing in the same way.

FJO:  So the upturned pianos are a metaphor—

KN: —for all the upturned political upheaval.  There was also a sculpture in Buenos Aires where there are two units that are strangely balanced on each other, a sort of box/house unit.  That gave me an idea for these balancing things.  Then I asked Jill to come in and work with me.  So she helped in making the final configuration of the pianos in the space.  Then we had her painting, inscribing the names of the victims of the disappearances on the wall, over the whole week that we were there in the gallery.  She was painting every day that the installation was up and in the course of a week she only got through about 1,600 names.  If it takes a week to just write 1,600 names on a wall, it gives you more of a sense of the vastness of 30,000 people being disappeared.

FJO:  I’m very eager to talk with you about the opera in greater detail, but before we get there I find it fascinating that prior to you ever having had a work done on the stage of an opera house, you created an installation for the lobby of the Oslo Opera. Were there other performances going on in the house when that was done?

KN:  Yes.  Again, that was also very much Jill’s project. She was the main instigator in that particular project, Hut No. 6, as part of the CODA Dance Festival.  They had dance performances on the main stage, on the small stage, and all around the city.  Our piece was a performance installation in the lobby of the opera house that went on for over a week. We were there every day for five to six hours, interacting with everybody who came through the lobby.  My part was the sound installation, which used a hand-powered generator—I used an old bicycle wheel to help people generate their own power.  There was also an ongoing installation inside Jill’s hut with interviews about how people felt about home in Oslo.  Then I was singing in the performances every day.

FJO:  What unites all of these projects, I think, is that they all go against the whole hagiography of the canon and this idea that the goal of making art is to create timeless masterpieces. These are very much things that were created for a specific time and place that are not necessarily capable of ever being done again, which is very different from pieces you’ve done which have notated scores.

KN:  Music actually functions on a lot of different levels for different pieces.  I want some of my pieces to exist past me.  So I would like to have a score that can be done by other people.  Other pieces are done specifically for a particular theater piece or a particular dance; it’s not going to be a repertory piece.  Other things are done as an improvisation in the moment.  They can exist as a recording and have a life on a CD, but they’ll never exist in real time again because that was that moment and it’s not re-creatable. 

One thing I want to comment on here brings us back to talking about role models and female composers.  I’ve told this story, so some people who know me will probably recognize it.  When I was doing my doctorate at the Manhattan School of Music, I was also working at the Library for the Performing Arts. My boss knew I was interested in women composers and how women composers have been represented in the music industry.  So she asked me to make an exhibition, in one of their big cases at the library, about women and recorded sound.  It was a real learning experience for me, because what I was seeing, when I’d go back and do the research, was that every single stage of recorded sound had female composers from that era, but when the technology changed, those female composers didn’t get re-recorded on the new technology.  We had a big push of recordings on LPs in the ‘80s and the ‘90s; women musicologists were bringing historic scores to light, and more female composers were getting trained and were able to record their own contemporary works.  But lots of stuff on LPs never made it to CD.  And a lot of stuff on CDs now hasn’t gone over to streaming; either it doesn’t go over or it doesn’t get credited.  Streaming information is not good.  You could have a collection of pieces on an album, and maybe you just have last names.  How do you find out who’s who?  I have recordings with Monique Buzzarté in our ensemble ZANANA, but you can’t search for Norderval on Spotify or other streaming searches.  It has to be only ZANANA.  Then maybe they credit me as a performer in the duo.  So there are a lot of problems with actually knowing what existed at various times and what made it over to the next stage.  Who decides what is worth keeping and archiving?

FJO:  I’m going to tell you something that’s probably going to make you very mad; it made me very mad.  Just about a week ago, I chanced upon a blog post which was a few years old, but it was linked from a much more recent post, which is how I found it.  It was posted by a woman in England who is a musicologist, but she was just starting out when she wrote it.  It was a 2014 post.  Anyway, the post described how when she was in a library looking at older editions of the Grove Dictionary of Music and Musicians, she saw that each of the earlier editions had a number of female composers in them, but then when you went to the next edition—

KN:  —they disappeared. Same thing.  Exactly.  Yeah.

FJO:  So I found her email and wrote to her.  I wanted her to know that when I was asked to update the articles about chamber music and orchestra music for the new edition of the Grove Dictionary of American Music, I also added in female composers who were not mentioned in earlier editions.  I also asked her for a list of the female composers whose biographies had appeared in one of the editions of Grove but were omitted in later editions, but sadly she didn’t keep a list, since she wrote that post before she embraced the musicological discipline of strict note-taking. At some point, we’re going to have to have a group of researchers reconstruct this list to find out who all these composers are. But it does tie into this notion of impermanence that we were talking about earlier.

KN:  But there’s a difference. There are different kinds of impermanence.  If I say I’m doing an improvisation, and it’s just for here and now, that’s my chosen impermanence.  Or if I write a score that has instructions and motives, so it can be done in different ways, the one version is impermanent, but the next version is just as valid.  But the impermanence of just being not taken care of is a different thing.  I think of composers like Eleanor Hovda.  What an amazing composer!  Her work hasn’t been highlighted and preserved in the way that it should be for that amazing level of work.  She’s just one person right off the top of my head.

A shelf of scores in Kristin Norderval’s living room.

FJO:  There are tons of stories like that. So what can be done to safeguard your work so that it isn’t lost?  Is that an issue for you?

KN:  Maybe it’s not an issue about my work, but it’s an issue of education in general for composers, especially for female composers, for composers of color, and for composers who are working in non-mainstream ways.  I think we have a crisis of education right now at all kinds of levels.  When I was growing up and moving around, at every single school I went to in all these different towns that we lived in, I would choose a new instrument in the school band.  So I learned a lot, not very well, but enough.  But there are no school bands anymore.  That’s not a part of public education.

FJO:  Well, there actually are still quite a few really amazing school bands.

KN:  Yeah, but it’s not automatic.  It’s not seen as part of what we really need to be full human beings.

FJO:  That’s definitely true, and it is unfortunate.  But for the past two years I’ve attended the Midwest Clinic, which is a major event for wind bands and other community, school, and military ensembles.  There were some amazing groups from high schools.  Last year there was a string orchestra from a high school in Nevada that played Penderecki’s Threnody and it was incredible. But, sure, this isn’t happening everywhere.  Music isn’t valued as much as it ought to be, and I think it’s a larger societal problem because one of the things that music teaches you is the lesson of listening, to get outside yourself and to actually pay attention to someone else’s thoughts.  If you can’t get outside yourself, you’re just in an echo chamber, which is the zeitgeist now in part because we don’t learn how to listen to music in the same way.

KN:  Or even doing it and making it together.

FJO:  There’s a special kind of listening I think that comes when you’re making music with somebody else.  You have to listen, especially in an improvisatory context.  I want to talk about that in terms of the improvisatory projects you’ve done—both the duo with Monique Buzzarté and the more recent trio recording that came out last year with two musicians I hadn’t heard before.  With projects like that, I imagine there’s a whole lot of listening to each other that has to go on in the moment.  But before you go into the studio to create work like that, how much pre-planning is there?  How much rehearsal?

KN:  For the recording Parrhésie with flutist Ida Heidel and pianist Nusch Werchowska, we spent time listening together outside the recording studio and doing slow walks, opening up our ears to the environment and to each other.  Then there were certain texts that we might say were an inspiration: let’s use this for how we focus in, even if the improvisation didn’t contain a specific text. There were some where I’m singing text, but there were others where we had just taken a line and, okay, that’s what we’re all going to focus on.  We’d spend a moment, and then we’d go.  The pre-planning is different in different situations.  In my work with Monique, some of the pieces were completely free in the moment and others had a structure that we had worked out, with some things fixed in terms of motives or direction or that kind of thing.

FJO:  So if you’re doing a tour to promote these albums, what do you do when people ask you to play what’s on the album? You can’t.

KN:  Right, not really.  But both on my solo album and on the ZANANA album, there are some pieces that we could do.  It would be a little different, but they have a structure that is repeatable and you would recognize it as the same piece.  But others were just created then and there.

FJO:  Now what I found so interesting about the solo album is that in your program notes you described some tracks as being pre-existing electronic pieces that you just sang over when you mixed them in the studio. So they became something else in the moment.  It’s a way of taking something that was fixed and permanent and making it more organic and alive.

KN:  That’s interesting.  I wasn’t thinking about it specifically like that.  The voice is always flexible, but the tape—once you’ve got a sound file that’s a fixed sound file—it’s totally inflexible. It’s interesting to me to make a permeation, but also, when I’m working with pre-recorded sound files, I’m processing in the moment often with several files and choosing in that moment: Okay, I’m only going to use this little tiny bit of this file.  Now I’m going to expand it to the whole thing.  Now I’m going to pitch shift.  Now I’m going to delay feedback—pretty basic processing tools, but everything is in the moment so it’s like drawing from a palette that I know.   I know the sound files, and I’ll do it in a different mix the next time.  It’s like cooking up a different stew.

FJO:  At the heart of it all is a spirit of collaboration—even those solo pieces.  What I found so interesting about the solo pieces is that you’re collaborating with yourself.

KN:  Yeah, on the computer which sometimes gives me things that I’m surprised by, then I get to respond to what it’s given me.

FJO:  This is me then; this is me now.  You’re in a dialogue with yourself.

KN:  Right.

FJO:  But it also erases this idea: Oh, I’m a composer and I create these masterpieces in my room; I’m not influenced by anybody, and these pieces are completely mine and now you must do what I wrote, for all eternity.

KN:  But I have come to a combination of notated and improvised, and I’ve realized I actually have some specific ideas about the improvisation.  So in the process of working with another performer, I either give instructions verbally, or I realize I need to add something to the instructions in the score: you’re improvising, but I didn’t really mean that.

FJO:  So it’s possible that people can perform things wrong or incorrectly?

KN:  It’s possible that they would perform things that aren’t in the range that I would prefer, and then I have to figure out how to re-articulate my preferences.

FJO:  Now, to get back to this idea of collaboration.  A lot of works—even many of your solo pieces—grew out of works that were collaborative to some extent, since they were created to be presented with film and dance.

KN:  And theater.

A laptop and plants peacefully coexist on Norderval’s work desk.

FJO:  You’ve done tons of work with Jill Sigman, who is somebody with a very similar aesthetic to yours. Her choreography really comes out of this idea of a raw physicality that is also somehow being altered.  I’m thinking of Papoose, which I find wonderful and disturbing at the same time, because it’s doing things with a body that are obviously natural but also somewhat unexpected. It’s almost like what you do when you take your voice and then manipulate it electronically. It’s taking it to another space.

KN:  Yeah.  Totally.  I learn from those collaborations a lot.  It opens up ways of thinking about development and processing and contrasts.

FJO:  A lot of your pieces don’t involve text, but when you do have a text, you’re also collaborating with the text.

KN:  Right.

FJO:  Going all the way back to that early Emily Dickinson piece of yours again—those words already existed. But you’re adding something to them which theoretically brings certain things out and it also becomes something else in the process.  It’s sort of an involuntary collaboration, since she’s not around to collaborate with.  Similarly in Nothing Proved, the piece you wrote for Parthenia that you’re in the studio recording this week, you worked with texts by Elizabeth I before she was a queen. Once again, she had no say in the music you composed, since she’s been dead for hundreds of years.  But because she wrote that text, it ties your hands in terms of what you can do.

KN:  Yes, it does, certainly rhythmically. Well, you could consciously go against the rhythm of speech in terms of accents on syllables, but then it’s going to have a particular effect: that you’re ac-cen-ting other syl-la-bles. I like to keep the intelligibility of the text.  Usually there’s something in a particular text that I choose that is already giving me melodic content.  I’m hearing some beginning of a motive right away.  I have no idea where it comes from. My piece Elegy: for Gaza is from a poem by Timothy O’Donnell; I read that poem in The Nation on the 1 train and it was already singing to me.  So I had to do this.  I had to find out who this guy is, write to him, and get his permission.  There are pieces like that that just jump out.  Then there are other times, like in the opera, where there are certain places I had that relationship, but there are also other places where I had to find ways to include a text because it’s important to further the story, or it’s part of the relationship of the characters, and so I had to deal with a lot more text than I usually deal with in a song cycle or a single work with text.

An excerpt from the score for Norderval’s song cycle Nothing Proved which she performed with the viol consort Parthenia.

FJO:  With your opera The Trials of Patricia Isasa, you’re dealing with something else as well.

KN:  A living person.

FJO:  And a true story.

KN:  Yes.

FJO:  A really horrible story that has, I don’t know if I’d call it a happy ending, but at least some resolution.

KN:  A victorious ending.

FJO:  You said earlier that at first you conceived of a piece for voice and a trashed piano and then it evolved into an opera. I’m curious about that transformation, but also how you first became acquainted with Patricia Isasa’s story and what made you want to create music inspired by it.

KN:  I’m going to tell you the long story.

FJO: I love long stories.

KN:  As the eldest child of two political scientists, I have always been interested in politics and events in the world. Politics was in our house all the time.  So I’ve always been aware and my music has been very centered in wherever we are as a society.  But after 9/11 it became even more so.  I did a lot of pieces where I felt like I had to give expression to where we are going and why we invaded a country based on lies, why this stuff is happening and there’s no accountability.  One of my big pieces that I did in Oslo was a multimedia piece that came out of my disgust over Abu Ghraib and the whole situation with renditions, the kidnapping of people from all kinds of places and sending them off to black sites.  And my question was: How is it that Western Europe and North America and all these other nations are going along with this? How does it happen that populations are sucked into agreeing to these policies that are obviously abhorrent and against international law?  I researched the torture memos.  I started looking at all of the work that the Center for Constitutional Rights was doing.  I got a lot of information about what was going on.  That piece was a collaboration with Jill, another dancer in Norway, actors, a sculptor, and some other musicians in Europe. At the end of that piece, I felt I knew a lot about torture and wasn’t done with it.  It wasn’t enough to explore what it is about us that makes us drawn into groupthink.  Was there somewhere I could explore how we see accountability?  I was keeping my eyes and ears open for a potential subject.

At some point, I was thinking I wanted to do a piece on Chelsea Manning, but it was before the trial, so it wasn’t a finished story.  It was in process.  Then in 2010, I heard an interview with Patricia Isasa on Democracy Now where she was recounting her recent victory, in December 2009—successfully prosecuting and convicting six powerful people in Argentina who had been part of renditions, torture, and murder during the Dirty War of the military dictatorship.  Her case had come 33 years after her abduction.  I thought, “Wow, that’s amazing!”  Her spirit, her strength of character—she had so much energy, so much conviction, and she got a conviction!  So I was inspired by that.  I found her website and I wrote to her.  And I got a response.  The next thing I knew, we were writing back and forth. At that point, I didn’t know it was going to be an opera, but I knew I wanted to work with her story somehow.  She was coming to New York, and I said, “Come and let’s talk.”  And she ended up staying with me.  For several days, we just talked and talked and I recorded interviews.  Then I started trying to work with those interviews.

At a certain point, I realized that this really is such a big story and such a difficult topic, so it really needs an excellent writer to put this together.  So then I was going through who I knew—what plays do I know? I liked the work of Naomi Wallace very much, but I’d never met her.  But again, I made cold contact with her through her publisher.  She ended up agreeing to a workshop period that I was able to organize in Oslo with a retired dramaturg from the Norwegian [National] Opera.  No strings attached.  I said, “We’ll work for a week; if we can get something together and we hit it off, we’ll take it further.  If we don’t, we all go off and do our own thing.” And she was very generous.  It was an amazing process.  We came away from the first week with a rough idea of the course that we wanted to look at.  We fleshed out what we wanted to center on in the storytelling.  I came away with several aria-type texts, and I wrote three character studies.  Those three character studies were done by Ensemble Pi; they ended up in the opera pretty much as is.  Then the process of working with Naomi over the next year on the full text was great, and it went from there.

FJO:  It was very fortuitous that it was staged in Montreal last year during OPERA America’s annual conference, which will hopefully lead to more productions of it in the future. I wish I could have been there for that, but luckily the company put a video recording of the whole thing online which also hopefully will get more people excited about it.

KN:  Thank you.

FJO:  One of the things I find so fascinating about the opera, and I say this in a positive way, is that in some ways it’s your most conventional piece.

KN:  Yes, it is.

FJO:  But there’s also something that’s very unconventional about it—the main character is actually three different roles. There are three Patricias.  There’s Patricia, the 16-year-old who’s abducted.  There’s the Patricia of the near present, who manages to get a conviction of these people.  And those are two different singing roles on stage, and they even sing duets with each other.

KN:  The inner self that is propelling you forward to do something.  When Naomi had the first draft of the libretto finished, I went to Argentina and read it through for Patricia. We sat on a roof in Buenos Aires. Where I had little bits of motives, I sang; otherwise, I just read. That was really, really interesting.  There were some things that she made comments about, but she could totally relate to this thing of having the two characters.  That was good to have her blessing.  She had come to one workshop in Oslo, too, before that first draft was finished.

FJO: And then there’s a third Patricia. She’s in the opera as herself as well. In addition to the two singers on stage, documentary footage of the real Patricia’s image and voice is projected to the audience. That definitely makes the story seem more real and more impactful.  But there’s also the impact of the actual music, which sounds very different from a lot of your music.  There are Argentinian elements in it—tango-ish sounds at times, a bandoneón.

KN:  That was partly in response to the text and the subject and partly that I knew I wanted to work with these instruments that would locate it geographically and timewise. I felt like I needed to use the instruments, but I needed to meet them on my own terms.  I didn’t want to imitate tangos, but I do have a tango-inspired section in the courtroom because it felt like a dance.  It’s a court theater.  I listened to a lot of tango and a lot of nuevo tango.  I also listened to a lot of other Argentine composers, especially composers that were working with the sounds of Buenos Aires.

FJO:  There’s a big debate these days in the visual art community about who has the right to tell someone else’s story. There’s been a huge brouhaha over the painting by Dana Schutz in the Whitney Biennial that was inspired by the famous photo of Emmett Till’s open casket.  Then there was an installation sculpture being set up at the Walker in Minneapolis that recreated a scaffold used in the mass execution of Dakota men, but it was later removed and dismantled with the consent of the artist after protests from members of the Dakota community. In our current climate, it’s possible that someone might question a North American’s desire to create a work based on this Latin American story.

KN:  I think it’s very much our story because the U.S. was behind the whole Operation Condor that supported all those dictatorships.  That comes out in certain places in the opera.  The Ford factory was encouraging the military dictatorship to impose certain economic policies, and they used the Ford factory as a place for torture.  So it’s very much an American story.  My feeling and Naomi Wallace’s feeling was that when we look back at a story about this historical period—and it’s not that far back because 2009, when she got a conviction, is very close—it’s a way of saying this is what happened there and this is what could happen here.

For me, the important part was the accountability part, because my concern for us as a nation is that we have had no accountability for an invasion of another country that was based on lies and no accountability for torture.  The torture and war programs carried out in all of our names are also related to our prison industrial complex, our mass incarceration, the fact that it’s only been 50 years since we dismantled the legal apartheid system of Jim Crow in this country.  I was ten years old when interracial marriage finally became legal.  That’s crazy.  People have been working to try to dismantle that ever since, and that’s the period we’re in now.  We’re in the backlash period.  We have to look with a big historical overview to see how to deal with those effects and issues with some kind of accountability.  That, for me, was the story.  And that is our story.  That, plus Patricia was involved in the whole thing.  So it’s our collective story.

Kristin Norderval in Inwood Hill Park.

Forty Years in New Music

Having produced new music recordings for 40 years, I’ve seen some tectonic shifts: a welcome expansion of the stylistic landscape of the music itself, as well as huge transformations in how new music is delivered to listeners.

Scene #1:

Mid 1970s, a composition lesson at the University of Colorado. At that point, Terry Riley’s In C and Steve Reich’s It’s Gonna Rain had been recorded. Philip Glass had composed Music in Similar Motion, Music with Changing Parts, and Music in Twelve Parts, performing such works with his ensemble at New York’s Whitney and Guggenheim museums. My professor opines (paraphrased): “Minimalism is just a fad. It’s been done before. Think of the second movement of Beethoven’s 7th.” Poof! Minimalism dismissed. Of course, my oblivious professor was not alone. Minimalism was also severely castigated by Boulez, Carter (who compared it to fascism and Hitler’s speeches), and leading critics at The New York Times (kids nowadays just want to get stoned).

Scene #2:

Mid 1970s, a composition lesson at the University of Colorado. Another professor, the inventive Cecil Effinger, mentions his idea that a record label could be a tax-exempt organization, just like symphonies, art museums, etc. This may seem obvious today, but back then no such general-purpose label existed. There were just a few nonprofit labels, with built-in restrictions, such as the Louisville Orchestra’s First Edition Recordings (20th-century music by living composers), Composers Recordings, Inc. (contemporary classical music by American composers), and New World Records (American music). Effinger and a few others battled with the IRS for two years, and in 1976 Owl Recording, Inc. became the first broadly purposed tax-exempt label in the United States. Given its exceptionally expansive mission of releasing recordings of “high artistic, educational or historical worth not otherwise available,” I sensed great potential. Owl’s board of directors, seeing my enthusiasm, essentially let me take over running the label.


Owl Recording, Inc. originated as an attempt to save Owl Records, a small local label that was about to dissolve. As I familiarized myself with the existing catalog, I became captivated by the powerful, original musique concrète works from the relatively unknown Tod Dockstader, and I’ve been involved with his music ever since.

Over the next 15 years, I learned about producing, releasing, and promoting new music recordings, as well as how to successfully apply for grants. I worked with such composers as Vincent Persichetti, Morton Subotnick, and Iannis Xenakis. At one point, a talented, environmentally concerned composer from Alaska contacted me, with the result that I released one of the first recordings of John Luther Adams.

While I was proud of the LPs (yes, LPs) I was releasing, I also experienced some growing frustrations. Financial support was generally only available to composers with appropriate “credentials,” which meant being connected to the academic world, and such restrictions clearly limited the stylistic range heard on Owl’s releases.

In the 1980s, the new music world was changing. A vibrant “alternative downtown” scene was emerging in sharp contrast to the “official uptown” scene. Uptown meant The Juilliard School, Lincoln Center, Columbia University, and Pulitzer winners, while Downtown included La Monte Young, Steve Reich, Philip Glass, Glenn Branca, Terry Riley, John Zorn, and many more, with performances in alternative, casual settings. In 1987 the highly influential Bang on a Can festival was founded.

Around 1990, several factors converged for me. First, the funding bias toward academic music virtually excluded the promising Downtown scene from Owl’s releases. Secondly, CDs were becoming the dominant medium, which did not bode well for Owl’s mostly vinyl back catalog. Finally, Tod Dockstader’s LPs had sold out, and repressing them on vinyl didn’t make sense.

Suddenly, my next step seemed obvious: I’d start my own label, freeing me from Owl’s inherent restrictions in order to cover a wider range of new music that included the invigorating Downtown world, to have full control over design, liner notes, promotion, etc., and to make a fresh start by releasing CDs, not LPs. I’d begin by reissuing all of Tod Dockstader’s classic music on CD.

Tod Dockstader at Gotham in the 1960s

I contacted Tod, who was skeptical there would be any interest. In part, I was able to convince him because CDs present audio a lot more accurately than LPs, reproducing, for example, the deep bass that helps convey the elemental power of his music. For the first time, listeners could hear what Tod had heard in the studio. He agreed to move ahead and provide updated notes.

After forming Starkland in 1991, I released the first Dockstader CD, Quatermass, in 1992, and the second, Apocalypse, in 1993.

The covers for the first two Starkland releases, both devoted to reissues of music by Tod Dockstader.

We didn’t know what the reaction would be. After all, we were re-releasing music that was about 25 years old, and technology had greatly advanced over those years. Neither Tod nor I anticipated the more than two dozen rave reviews and robust sales that resulted. One publication ranked Dockstader as an electronic music pioneer on par with Varèse, Stockhausen, and Subotnick. Another, The Wire, claimed that thanks to these recordings “Dockstader will be remembered as the innovative, visionary figure he undoubtedly was.” (Notice the past tense.)

Encouraged, I started to release a variety of new music CDs, typically devoted to a single composer, such as Paul Dresher, Phillip Bimstein, Charles Amirkhanian, and Guy Klucevsek. I somewhat obsessively took over all stages of each release: project development, graphic design, mastering, promotion, and sales. These initial releases were successes, receiving fine reviews in major publications and respectable sales.

A few of these recordings reveal what a small record label can accomplish.

Consider the story of Phillip Bimstein. While he had studied classical music at the Chicago Conservatory of Music, he initially emerged into the music world in the 1980s with his new wave band Phil ‘n’ the Blanks. After moving to Springdale, Utah, one day Phillip chatted with his neighbor, farmer Garland Hirschi, and asked why his cows mooed. Charmed by Garland’s answer and general storytelling, Phillip decided to create an aural portrait of this lifelong rancher by recording their conversations, using snippets of both Garland’s comments and those mooing cows, along with instrumental writing based on Garland’s speech patterns. Shortly thereafter, I met Phillip at a new music festival in Telluride, Colorado, and was delighted by his Garland Hirschi’s Cows piece. It turned out he had more music, and I released his first CD in 1996.

The CD was something of a hit. Airing the title piece often prompted dozens of calls to radio stations. Phillip went on to receive grants from the National Endowment for the Arts, Meet The Composer, and the American Composers Forum, and his music was performed at Carnegie Hall, Lincoln Center, the Kennedy Center, and London’s Royal Opera House. Later we did a follow-up CD, Larkin Gifford’s Harmonica, which also did well.

The cover for Phillip Bimstein’s Starkland CD Garland Hirschi’s Cows, which features a photo of a farmer with a pair of cows.

In 1998, I contemplated doing something special for the upcoming 2000 millennium. Around that time, I became aware of the behind-the-scenes development of a new DVD-Audio format, which, for the first time, would allow high-resolution surround sound to be played in the home. (Standard DVDs, then as now, offered surround sound, but only at less-than-CD quality.) Releasing a DVD-A seemed irresistible.

My interest in surround sound began in the mid-1970s, when the industry attempted to put quadraphonic sound onto vinyl LPs and I was connected to a small local company that developed the first digitally-controlled quadraphonic panning device. While both quad sound and the panning device disappeared, a seed had been planted.

But what would be the content of the Starkland DVD-A? The project grew more ambitious: I decided to commission short works from about a dozen composers whose music seemed likely to be enhanced by surround sound. My goal was for it to be the first recording of its kind, though I had no way of knowing whether another label was secretly planning something similar.

I tried to select composers who would use surround in diverse ways. An obvious starting point was composers who regularly worked with technology: Paul Dresher, Pauline Oliveros, Maggi Payne (who had previously composed quadraphonic works), Carl Stone (who had been using quadraphonic techniques in live performance), and Pamela Z.

Other composers were those who used space as part of their music: Ellen Fullman (whose Long String Instrument uses strings stretched over nearly 100 feet), Phil Kline (who used space as part of his massed boombox works), and Bruce Odland (who had created large-scale multimedia installations in public spaces). There were also composers whose music inherently feels spacious: Ingram Marshall (think of his works like Alcatraz and Fog Tropes) and Meredith Monk (with her obvious affinity for the use of space in many works).

Composers whose music was exceptionally dense or polymetric would benefit from the expanded surround soundfield: Paul Dolden (whose astonishingly layered works feature perhaps 400 individual parts and explore complex polyrhythmic and microtonal tuning relationships) and Lukas Ligeti (who has long worked with polymeters).

Finally, the outlier: Masami Akita (aka Merzbow). If his noise music assaults listeners with dense walls of sound, how much more effective might this be if we can be sonically pummeled from all directions? (Of course, I knew that many would not enjoy this piece, but I feel part of my job is to sometimes shake things up.)

The project consumed over two years of my life, and at times I thought I’d never make it. I was working on a format that did not yet exist, and no one had ever seen a DVD-A.

Released in 2000, Immersion was a major success, and remains Starkland’s biggest seller, for several reasons. First, there was extensive media coverage; Billboard devoted a full page to it. Secondly, Amazon prominently featured it as an outstanding exploration of this new format. Finally, people were hungry for material specifically created to take advantage of the new DVD-A format.

A delightful surprise happened via Amazon: Immersion was often the #1 bestselling DVD-A during its first year there. The other initial DVD-A releases were decidedly unimaginative. The major classical labels issued standard repertoire, and the pop labels tended to reissue rock classics that were not originally conceived for surround. The bizarre result was that avant-garde music was outselling Fleetwood Mac, Metallica, Deep Purple, Steely Dan, the Doors, Neil Young, and the Beethoven symphonies.

One of the biggest honors Immersion received came when New York’s Whitney Museum selected Meredith Monk’s work, Eclipse Variations, for its 2002 Biennial. Along with others, Meredith’s piece was presented in a specially designed “surround sound” installation room.

And to my great shock, while attending the gala opening night, I discovered that the score I had commissioned from Meredith had been used as the cover art for the Biennial’s catalog.

The cover for the 2002 Whitney Biennial catalog which features an excerpt of a musical score by Meredith Monk

After the success of Immersion, I developed a follow-up project: commissioning a major 60-minute work from Phil Kline, whose piece The Housatonic at Henry Street I had loved on the Immersion DVD-A. The result, Around the World in a Daze (released in 2009), is likely the largest work ever commissioned for a hi-res surround sound recording.

Phil Kline and Tom Steenland at the corner of Henry and Rutgers streets in lower Manhattan, where “The Housatonic at Henry Street” was recorded. (Photo by Aleba Gartner)

Phil’s use of surround is dazzling. We hear hypersampled Wagner, a mournfully multi-tracked “wailing wall,” a buildup to a massive climax of hundreds of thousands of “falling pennies” that dramatically explores the psychoacoustic possibilities of surround sound, a Bach prelude eerily processed into a Zurich train station, and a concluding work that places listeners inside multiple layers of a field recording of 15,000 chattering African gray parrots.

My enthusiasm for this double-DVD led me to design a uniquely shaped package, unappreciated by some who value a precisely aligned DVD collection.

The oversized cover for Phil Kline's Around the World in a Daze which looks like a boombox.

How the packaging for Phil Kline's Around the World in a Daze looks when it is opened up: two DVDs nested next to each other.

People sometimes ask how I select the music for Starkland. There’s no simple answer. I suppose I look for music that is distinctive, that has something to say, that conveys something special is going on, even if I can’t quite define it. While I don’t shy away from music that seems simple and readily accessible, like Phillip Bimstein’s cow piece, I also have embraced challenging, not-background-for-your-next-dinner-party music which sounds and feels imaginatively different.

An example is Elliott Sharp’s The Boreal CD (2015). For the title piece, commissioned by the JACK Quartet, he developed unique bows, substituting ballchain and metal springs for traditional horsehair. The results are otherworldly textures unlike anything I’ve ever heard from a string quartet. Other works reveal a sophisticated intelligence that produces music which is captivating in ways I initially couldn’t define but felt oddly special. Later, I saw that the scores reveal repeating musical cells that constantly shift their patterns. I learned his organizing principles can be based on fractal geometry, chaos theory, Fibonacci numbers, and bio-genetic concepts. Yet the key point is that all this underlying complexity can be audibly sensed; something elusively distinctive is going on.

Several years ago, I noticed most of the composers on Starkland were approximately my age and established. However, I think part of Starkland’s role is to release music from younger, emerging composers who are not so well known. To remedy this, I thought of the outstanding International Contemporary Ensemble, which at that time had already premiered over 500 works, generally by younger composers. I contacted founder Claire Chase about having Starkland issue a CD of ICE performing emerging composers, and she thought this was a terrific idea.

Released in February 2016, the resulting On the Nature of Thingness CD presents seven works by Phyllis Chen and Nathan Davis, both members of ICE.

The title piece is the cornerstone of the CD, and Nathan’s settings of the text are richly evocative. At one point, we hear the soprano Tony Arnold accompanied by a chorus of jaw harps, and in the “Vowels” movement, Tony mesmerizingly intones the text on just a single pitch. Nathan’s other two works, one for solo piano and the other for bassoon and live digital processing, are also convincingly fresh and captivating.

Phyllis Chen imaginatively creates colorful timbres by unconventional methods, employing toy pianos, tuning forks, music boxes, metallic bowls, and tuning rods extracted from toy pianos, all of which results in a magically conjured world of exotic textures.

Remaining flexible has led me down unexpected paths. One example is the exceptionally gifted accordionist Guy Klucevsek. Many years ago, attending his concert in Boulder left me deeply moved and impressed. I introduced myself afterwards and we’ve stayed friends since. (I must admit, when I founded Starkland, I did not expect to release accordion music.)

It’s not hard to be seduced by Guy’s world, which encompasses a cornucopia of styles and approaches. Aside from his technical chops, he’s one of those performers where everything sounds innately musical. Guy’s arrangements are charmingly eccentric. Witness what he does with Burt Bacharach tunes, from his soft, high, ethereal rendition of One Less Bell To Answer, to his wittily worded version of Bacharach’s first hit, The Blob (penned for the fun horror film of the same name).

Then there are the impressively diverse compositions Guy has commissioned. For example, Aaron Jay Kernis wrote a big, powerful piece, Hymn, for Guy, inspired by Aaron’s concerns with the world’s wars and sufferings, coupled with his visits to the concentration camps of Auschwitz and Birkenau. Aaron considers the work to have “a central position in my oeuvre.” On the other hand, we have Fred Frith’s humorous, theatrical The Disinformation Polka.

Guy also performs music that is straightforwardly beautiful, without being cloying or clichéd, from Carl Finch’s Prairie Dogs to Guy’s own The Asphalt Orchid (in memory of Astor Piazzolla).

Given this wealth of material, it’s not surprising I’ve now issued four Klucevsek recordings. In September 2016, I released Teetering on the Verge of Normalcy, which presents one gorgeous piece after another. The magic that happens when Guy plays with the wonderful violinist Todd Reynolds is one of those rarities that keeps me going; they performed Moose Mouth Mirror together at the CD’s release concert at New York’s Spectrum.

Starkland’s most recent release brings me full circle, back to Tod Dockstader and our initial two CDs. The enthusiastic reception of those CDs greatly encouraged him to continue composing. One result was his 3-CD Aerial project (released by Sub Rosa in the mid-2000s). Another result is that, when he died in 2015, he left behind a vast archive of around 4,200 sound files on his computer. With the diligent help of archivist Justin H. Brierley, I reduced these to the 15 tracks that appear on Tod Dockstader: From the Archives.

The cover for the latest Starkland CD release, Tod Dockstader: From the Archives.

Tod’s music has long seemed original and powerful to me. While determining why any music works is ultimately unanswerable, two factors can help explain the appeal of his music. First, most of the sounds are real-world (i.e., concrète), and therefore have an inherent distinctiveness that is missing in pure electronic sounds. From what I know, he never used a synthesizer. I recall the story from his early composing days, when Bob Moog invited Tod to Bob’s home to demonstrate his new synthesizers. Tod went, listened, pondered, and left – synth-less. Sterile synthesizer sounds lacked the richness of Tod’s concrète palette, and of course the keyboard itself was anathema to him. The second reason is that Tod worked by instinct, rather than filling out a preconceived formal structure. In his case, being an autodidact instead of having formally studied composition clearly worked to his advantage. I recall Tod describing his process of generating lots of material on tape, and then taking a razor blade to excise the material that didn’t work. He reported that, sometimes, everything simply disappeared under The Blade. Ruthless self-editing was clearly a strength.

Tod also had a spot-on sense of shaping materials: when to move away from a rhythm he’d set up, when to introduce new material, when to return to earlier material in a section, how densely layered a section should be, and how to satisfyingly end a piece.

Released November 18, 2016, the music on this new CD ranges from the powerfully pulsating Super Choral, to the lulling rhythms of First Target, to Anat Loop’s spasmodic juxtapositions, shifting from electric arcing to a xylophone trapped in a hurricane. We also hear driving unnatural machines, organ clusters, meandering buzzes, a slowed-down animal roar, violent whooshes, some ominous German, and garbled, underwater murkiness. The CD ends with a shocking coda, music unlike anything else in Tod’s repertoire.

What is the future for record labels? The simple answer is: I don’t know. The first step is to note what value labels can offer. Some of the recordings I’ve described above suggest the benefits labels can provide.

A typical Starkland CD documents and preserves the compositions with high-quality recordings approved by the composers, accompanied by their notes on the music, along with widespread distribution of the release and permanent availability from the label. (Starkland has never had a recording go out of print.) And in a case like the Dockstader CDs, the updated notes written for the CD became the definitive commentary on the music; he never prepared notes for concerts (because there weren’t any), and he never set up a website.

CDs can significantly advance the careers of composers and performing ensembles. Starkland’s two Bimstein CDs helped him attain widespread exposure, receive numerous grants and commissions, and end up with performances at venues like Carnegie Hall and wonderful reviews in publications like The New York Times. Because of our track record (pun intended), the media is likely to pay attention to the 100-150 promo CDs we send out. We also help draw attention to new releases by having liner notes written by established figures such as John Adams, Laurie Anderson, Claire Chase, Kyle Gann, Allan Kozinn, David Lang, Meredith Monk, Bill Morrison, Pauline Oliveros, and John Schaefer.

Labels can generate new works by commissioning composers (possibly with visual artists) to create content exclusively for a new release. I’m proud to have commissioned over two hours of surround-sound music that premiered on two first-of-their-kind releases.

Finally, labels can help the listening public discover new music they might otherwise miss. How do listeners decide what to buy (and hopefully not steal)? The astute critic George Grella recently answered this question, writing: “That is precisely where record labels matter, have always mattered, and matter now more than ever… The process of gathering critical opinion from friends, critics, and one’s own ears begins with the label, the most important gatekeeper.”

For all these reasons, labels have value, and that makes me think they will continue to exist in some way. The two key questions for the future then become:

  • How will labels deliver music?
  • How will new music recordings be financed?

Today’s au courant prediction for future delivery is that streaming will rule and CDs will disappear. Not everyone agrees. Many people still like physical objects, held in their treasured collections. My guess is that in the foreseeable future, we will continue to see CDs released by major artists, by those who value a CD’s high quality sound and documentation, by those who want to sell something at their concerts, and by those who want to be taken seriously by the major media. Today, there are likely “more labels than ever,” as Grella recently wrote. Starkland currently receives more project submissions than at any point over the last 25 years. Of course, streaming will continue to play a valuable role in discovery. But while we have extensive digital distribution by Naxos, the starting point for all projects is still a physical CD, and we haven’t yet done a digital-only release.

It’s hard to predict what we will end up with farther down the road. The public accepts the crappy sound of mp3 and earbuds because of the convenience. This may change. Mp3 thrives because of the limitations of data transmission and storage. With rapid advances in technology, along with clever encoding (such as the nascent MQA codec and the Mastered for iTunes format), we may well end up with high quality digital delivery and storage. If it’s well organized, that could change everything.

Future financing will be a challenge. Unless you’re Philip Glass, sales of new music recordings won’t cover the production and promotional expenses. There will still be some grants available, and the emergence of crowdfunding is a healthy approach that I think will grow.

We have to admit the major labels missed the boat on the digital revolution, and tech companies like Apple, YouTube, and Amazon have taken over music distribution. But in my opinion they don’t care about the music like labels do. It’s just another way to get people to buy their products and visit their websites. Standalones like Spotify are different since they only sell music, but they currently lose millions and have no viable business model. What these giants all have in common is the power to pay smaller labels virtually nothing.

Reality check: when someone streams a Starkland track, we typically receive about $0.0043.
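
To put that rate in perspective with a little arithmetic: at roughly $0.0043 per stream, it takes on the order of 230,000 plays ($1,000 ÷ $0.0043 ≈ 233,000) just to gross $1,000, before any production or promotional costs are recovered.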

Despite this dismal situation, I think composers and musicians will continue to see value in professionally produced recordings, and will find a way to make that happen.

Finally, let’s return to the most important part of this “business” – the music itself. The new music world is healthier than it has been at any point over the last 40 years, a truly unexpected and exciting state of affairs. Today’s audiences meaningfully connect to a lot of new music, a sharp contrast to yesteryear’s dry, academic music that alienated so many listeners. Universities cover a broad range of styles and are far less insular than decades ago. Old school dichotomies such as Uptown vs. Downtown have mostly disappeared. There’s no “official” style for composing. There are outstanding ensembles devoted to new music, and there’s a substantial audience. Music can be heard, more readily than ever, around the world.

Over the years, the once irrelevant and exclusionary Pulitzer Prize changed direction (e.g., David Lang’s award), and the historically conservative Grawemeyer Award presented its 2017 composition award to the 37-year-old Andrew Norman for his nontraditional, rambunctious Play. Props to the Boston Modern Orchestra Project, which not only commissioned and performed the work, but also recorded it on its own label, greatly increasing the work’s stature and acclaim. (Alex Ross remarks he has “listened to Play at least a dozen times.”)

I feel lucky to have participated in this evolving world over the last 40 years, and look forward to more in the future.


Tom Steenland

Thomas Steenland is the founder and Executive Director of Starkland. “A new music force for 40 years” (Sequenza21), he has released dozens of albums, presenting world premiere recordings of over 160 works by more than 80 composers. Tom studied physics at Johns Hopkins, music theory at Goucher College, composition at the University of Colorado at Boulder, and recording engineering at the University of Colorado at Denver. He lives in Boulder.

The Electric Heat of Creativity—Remembering Donald Buchla (1937-2016)

In 1962-63, in a vacated Elizabethan house on Russian Hill, Ramon Sender and I joined our equipment to make a shared studio; this became The San Francisco Tape Music Center. After the house burned down in 1963-64, we moved to Divisadero Street, where we spent days and nights wiring a patch bay console we had gotten from the AT&T graveyard; we needed to tie all our equipment together. A bit like Dr. Frankenstein, we were putting all kinds of discarded equipment together to create an instrument that would allow for the composer to be a “studio artist.” The device or devices could not have an interface that was associated with any traditional music making, especially not a black and white keyboard.  It would have to have the capacity to control all the musical dimensions as equal partners. We thought, we talked, and we read. Our first imagined system came from what we knew about graphic synthesis.

A bit like Dr. Frankenstein, we were putting all kinds of discarded equipment together to create an instrument that would allow for the composer to be a “studio artist.”

We knew the work of Norman McLaren and were aware of many of the other experiments taking place. Drawing seemed like an intriguing approach to a personalized music maker.

We outlined the following process:

• Create a pattern of holes on a flat round disc.
• Spin the disc with a variable speed motor.
• Pass light through the rotating disc.
• Convert the resulting light pattern to sound by placing a photo cell to receive the light pattern passing through the disc.

A pattern could be made for each sound; the size of the pattern would represent amplitude, the shape would determine timbre, and the speed of rotation would produce some kind of frequency change.
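
For readers who want to see the idea in present-day terms, here is a minimal sketch, in Python, of how such a graphic-synthesis disc could be simulated: a drawn pattern is read around the circumference of a spinning disc, and the light reaching the photocell becomes the audio signal. The function names, the particular pattern, and all parameter values are illustrative assumptions, not a reconstruction of the actual device.

```python
# A minimal simulation sketch of the optical-disc idea described above
# (illustrative only; not the Tape Music Center device itself).
import numpy as np

def disc_pattern(angle):
    """Transmitted-light amount (0..1) at a given angle around the disc.
    The shape of this function stands in for the drawn hole pattern (timbre)."""
    return 0.5 + 0.3 * np.sin(angle) + 0.15 * np.sin(3 * angle) + 0.05 * np.sin(5 * angle)

def render(rotation_hz=220.0, seconds=1.0, sample_rate=44100, amplitude=0.8):
    """Sample the light falling on the photocell as the disc rotates."""
    t = np.arange(int(seconds * sample_rate)) / sample_rate
    angle = 2 * np.pi * rotation_hz * t           # disc position over time
    light = disc_pattern(angle)                   # light passing through the pattern
    return amplitude * (light - light.mean())     # photocell output with DC removed

if __name__ == "__main__":
    audio = render(rotation_hz=220.0)             # a "drawn" tone at about 220 Hz
    print(audio[:5])
```

Changing disc_pattern changes the timbre, scaling the output changes the amplitude, and rotation_hz sets the perceived pitch, which mirrors the mapping described above.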

Our soldering skills, starting from zero, quickly grew to modest, but, alas, never to excellent nor even good.  Where and how were we to start?

Instead of continuing our Frankensteinian kludge approach to hunting and gathering in electronic graveyards, we decided to put an ad in the San Francisco Chronicle to find someone who might be interested in building our device. The first person to answer the ad seemed to have some sort of eye dysfunction; his eyes were focusing on two different and constantly changing places at the same time. Unaware that the ’60s drug scene had begun, we described what we were after.  The fellow seemed interested and, after waiting several days without another answer to the ad, he was the only one we had. So we gave him a key to the studio and told him to go ahead and see what he could do.  On arriving at the studio the next morning, we were horrified to discover that he had cut a bunch of wires in the back of our newly wired patch-bay. We took back our key and began the tedious task of putting the patch bay back together.

A short time later, another engineer appeared. He seemed quite normal; that is, he appeared to see and hear in appropriate ways.  We presented our idea and, quietly, he said, “Yeah, I can do it.”

The next day he arrived with a machine: a paper disc attached to a little rotary motor mounted on a board, a couple of batteries, a flashlight, a small loudspeaker, and a small amount of circuitry. He turned it on, and it made a nasty sound!!! Amazed and thrilled, we declared, “It Works!” And he dryly responded, “Yeah, but this isn’t the way to do it.”  That was the arrival of Don Buchla!

A handwritten drawing on a piece of graph paper showing the foot pedal, motor, lamp, lens, and audio output for the light synth.

Don Buchla’s original drawing of the Light Synth

After this, Ramon went on to work on upgrading the studio and I immersed myself in the task of understanding what Don was talking about. He introduced me to the world of voltage control. An entirely new vocabulary was suddenly entering my ears.  The only vocabulary I had for musical sounds was a handful of Italian words—piano, forte, crescendo.  This new vocabulary consisted of words from outer space—transistors, resistors, capacitors, diodes, and integrated circuits.  Don was “the man who fell to earth.”

I bought the Navy manual on electronics but, after starting it, realized that I had to take a step back and get some basics, so I bought the Navy manual on electricity!  The bedtime reading was intense. After a few weeks of the basics of electricity, I plunged into the manual on electronics.  After a bit of scanning and surface exploration, I found myself struggling with that new vocabulary of transistors and diodes. It took a lot of aspirin [for the nightly headaches] and searching to be able to follow what Don was explaining.  The long nights morphed from struggling with the steepest learning curve I have ever experienced to a dialog between Don and me in an attempt to conceptualize a new composer’s creative tool. With Don’s help, even with only a rudimentary understanding of electronics, it was possible to see the power of control voltage as shaping the energy of musical gestures.  The traditional sources of musical energy (the fingers on the keyboard, the arm energizing the bow that energizes the strings of a violin, the air blown into a flute) could be understood as metaphors for gesturally shaped control voltages. It was elegant; it appeared to satisfy the characteristics of all musical dimensions: pitch, amplitude, timbre, timing, and—a brand new dimension—spatial positioning.

With Don’s help, even with only a rudimentary understanding of electronics, it was possible to see the power of control voltage as shaping the energy of musical gestures.
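
To make the voltage-control idea concrete in today’s terms, here is a minimal sketch, in Python, of one gesturally shaped control signal routed simultaneously to pitch, amplitude, and spatial position. It is an illustration under stated assumptions, not a model of any Buchla module; all names and values are invented for the example.

```python
# One control contour routed, as an equal partner, to several musical dimensions.
import numpy as np

SAMPLE_RATE = 44100

def gesture(seconds=2.0, attack=0.2, peak=1.0):
    """A single rising-then-falling control contour between 0 and peak."""
    t = np.arange(int(seconds * SAMPLE_RATE)) / SAMPLE_RATE
    rise = np.clip(t / attack, 0, 1)
    fall = np.exp(-np.clip(t - attack, 0, None) * 2.0)
    return peak * rise * fall

def render(cv):
    """Route the control signal to pitch, amplitude, and stereo position."""
    freq = 110.0 * 2 ** (cv * 2)                      # pitch: up to two octaves above 110 Hz
    phase = 2 * np.pi * np.cumsum(freq) / SAMPLE_RATE
    tone = np.sin(phase) * cv                         # amplitude follows the gesture
    pan = cv                                          # spatial position follows it too
    return np.stack([tone * (1 - pan), tone * pan], axis=1)

if __name__ == "__main__":
    stereo = render(gesture())
    print(stereo.shape)                               # (samples, 2)
```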

The idea, suddenly, and without aspirin, was coming into focus. We worked regularly for almost a year; I would describe the functionality I thought was necessary to do something musically, and Don would look up as if looking at the ceiling or somewhere within himself, return his gaze to me, and say, “I made a module that does that.” Was he saying he had made it some time ago and had just remembered it, or had he designed it at that moment? I never knew, and when I would ask him, he would always just smile that coy half smile of his. But, somehow, within a few days he would bring me a drawing of the new module.

With every meeting a new module would arrive, and eventually he designed an entire analogue computer-like music making machine. It was all on paper. We would need $500 for him to make it.  With the help of the Rockefeller Foundation we finally were able to pay for the parts.  Don never built a prototype; he just arrived one day with the entire machine. At the bottom of every module it read, “San Francisco Tape Music Center, Inc.”  I was upset that we were suddenly in “business”; “OK,” he said, and the next ones he made were labeled “Buchla and Associates,” and the now historic Buchla 100 was born.

Within a few weeks of the delivery and public unveiling of the 100, I moved to New York and installed a large 100 system.  There I worked (played, really) with it continuously, creating Silver Apples of the Moon and The Wild Bull.  My relationship with Don remained constant, but now over the phone. I kept finding things that we had not considered or just plain got wrong.

He would say, “I just made a new envelope generator with a pulse out at the end of the envelope.”

“Great, how soon can I get it?”

Don: “I have already mailed it to you.”

I would call him and say, “Could you make a module that would allow me to convert my voice into a control voltage?”

Long pause. No doubt he was looking at the ceiling.

“I have made that.”

A week later one of the first envelope followers arrived and, in addition to knobs, I could use my voice and finger pressure to control all the dimensions of music.
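
In present-day terms, such an envelope follower tracks the loudness contour of an incoming signal and turns it into a slowly varying control signal. The minimal sketch below, in Python, rectifies the input and smooths it with separate attack and release times; the parameter names and values are illustrative assumptions, not a description of Buchla’s circuit.

```python
# A simple envelope follower: rectify, then smooth with a one-pole filter.
import numpy as np

def envelope_follower(signal, sample_rate=44100, attack_ms=10.0, release_ms=100.0):
    attack = np.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release = np.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env = np.zeros_like(signal)
    level = 0.0
    for i, x in enumerate(np.abs(signal)):            # rectify the input
        coeff = attack if x > level else release
        level = coeff * level + (1.0 - coeff) * x     # smooth toward the current level
        env[i] = level
    return env                                        # the resulting "control voltage"

if __name__ == "__main__":
    sr = 44100
    t = np.arange(sr) / sr
    voice_like = np.sin(2 * np.pi * 200 * t) * np.clip(np.sin(2 * np.pi * 2 * t), 0, 1)
    cv = envelope_follower(voice_like, sr)
    print(round(float(cv.max()), 3))
```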

The Buchla 200

Within a few years of back and forth additions to the 100, he went on to make what many of us consider to be the Stradivarius of analog machines: the Buchla 200.

Many of us consider the Buchla 200 to be the Stradivarius of analog machines.

Don had an unusual genius for the creation of interfaces. In adapting our hands to a rectangular piano keyboard, it takes the first several years just to master the art of using the thumb. That made sense as the evolution of music and musical instruments morphed together through time.  But with the explosion of electronic technology in the second half of the 20th century, we no longer needed to be bound to the music or the instruments of those traditions; yet the piano keyboard was brought forward and became the instrument for the new technology.  As McLuhan said, “We look at the present through a rear-view mirror. We march backwards into the future.”  Don’s answer was Thunder, an ergonomic interface for a new music.

A photo of Buchla's Thunder interface

Buchla’s Thunder

It went on and on, but for me, the three most revolutionary interfaces were: Thunder; Lightning, a baton that could be waved in the air producing a “joy stick” X, Y array of voltages; and his “Kinesthetic Multi-Dimensional Input Port Module with Motion Sensing Rings” that produced X, Y, Z control voltages.

For me, the three most revolutionary Buchla interfaces were Thunder, Lightning, and the Kinesthetic Multi-Dimensional Input Port Module with Motion Sensing Rings.

After a lifetime of designing and building, Don went back to his masterpiece of the 1970s to create the 200e. He had been eager for me to have the 200e, but I resisted.  In 2010, my opera Jacob’s Room was going to get its premiere in Austria.  The sponsors wanted me to make a short European tour of solo performances with the video artist Lillevan, who had done the live video for the opera, but I had transitioned to computers in the late ’80s and had stopped performing in public. So I decided to give the 200e a spin and flew out to spend a few days with Don as he showed me how it worked and I picked the modules I thought I could use.

I took it with me to Europe to tour with Lillevan and made a patch in the hotel the first night I arrived.  With no real time to work with the 200e, I had decided to work mostly with sound files on my Mac using Ableton, but maybe still use the 200e in some way.  Our first concert was at the Modern Art Museum in Liechtenstein. I did the whole performance with only the Mac, but at the end of the concert the audience kept cheering. “How about an encore?” Lillevan said. An encore?! I had never done an encore; what could it be?  I looked at the 200e, made a few adjustments to the patch, and said, “Let’s do it.”  It was as if it was 1966 in my studio on Bleecker Street.  I turned knobs and even repatched as I played.  I was ecstatic; the audience was ecstatic.

Don and I had remained close for 53 years, although for about 30 of those years the friendship was without the virtual electric connection we had in the early days.  But from that performance in Liechtenstein until his recent death, we shared again that wonderful electric heat of creativity.

A group of six musicians playing very bizarre looking instruments.

Don’s “popcorn” performance at the 1980 Festival of the Bourges International Institute of Electroacoustic Music.

Ramon and I had brought Don home from the hospital after his cancer treatment and I began to fly out regularly to be with him and his wife Nannick Bonnel. He was determined to live as fully as he could for as long as he could.  Early in his recovery when he was home from the hospital but still not able to walk, I remember calling him from the airport to tell him that I was on my way up to see him in their hilltop house in Berkeley.  My greeting was, as always, the idiotic “How are you doing?”

“Great!” he said, “I just got back from a walk.”

“A walk?” I said in true amazement, “My God! I would have trouble walking on that incredibly steep hill. Where did you walk?”

“Oh,” he answered proudly, “I walked from the bedroom to the kitchen; it didn’t take so long!”

We both laughed.

Over the next several years, with a lot of help from Nannick, he got himself around. Every time I performed in the Bay Area, I stayed with them and he came to performances.  He also did his own performances from time to time and traveled up until the last few years.

Donald Buchla (left) and Morton Subotnick at NAMM.

The picture above is from the NAMM show, where he was signing on with a company that would sell his equipment.  He continued to create complex, imaginative modules, the last one being the “Polyphonic Rhythm Generator”: a set of interconnected rings of sequenced pulses that was his homage to the great North Indian Tala tradition! He just kept going.

Toward the end he began using a walker. When I came out to visit, he wanted to go to the Berkeley Museum. It was a very rainy few days, but he walked, one tiny slow step at a time, to their car. Nannick got his wheelchair into the rear while I tried to help him into the car, but he waved me off fiercely as he pulled himself slowly from the walker into the front seat.  We went to the museum and I wheeled him from painting to painting, bringing him as close as possible to every painting so that he could see it.

After that, we decided to go to a movie! Michael Moore’s Where to Invade Next. It was at a theater in Berkeley, on the second floor, without an elevator.  While Nannick found a place to park the car, Don and I walked up those stairs, one painful step after another.  In the theater we had to go up again to get a seat. He sat forward, staring at the screen, trying to comprehend and see. Afterward, we began the painful steps down.

I saw him again a few months later; Joan was able to make the trip with me. Don was clearly deteriorating rapidly. He wanted to go out to a restaurant where we could see the sunset over San Francisco.  We went, even more painfully, wheelchair to walker, step by step.

He finally gave in to the big sleep.  Rest well, my dear friend!

Morton Subotnick and Donald Buchla

One of the last photos of Morton Subotnick and Donald Buchla together.

Memories of Milton

Our community, no, our world, has always been defined in one way or another by the presence of Milton Byron Babbitt. We have all, at one time or another, had to come to terms with his music, with his ideas about music, and with the place he helped define for composers within the American university. Anyone who can make us question the way forward so profoundly is helping us become ourselves, whether we like it or not. And with Milton Babbitt’s passing a few years ago, our world is diminished in ways we have yet to realize.  From my tone, my great affection may be obvious, but even people who—to put it kindly—were not fond of Milton owe him a debt of gratitude.

For those of us making our way in the studio, Milton represented something very special.  His leadership was instrumental in establishing the Columbia-Princeton Electronic Music Center. While Luening and Ussachevsky were responsible for the series of Rockefeller grants that established the Center, as well as arranging the loan of the RCA Mark II synthesizer and its move up to the Prentice building on 125th Street in New York, Milton helped establish the C-P EMC as a major center of musical inquiry. (This in spite of the fact that ultimately only Milton, Charles Wuorinen, David Lewin, and perhaps a few others of whom I am not aware ever used the RCA.)  In his role as co-director of the EMC, as in every role he took on, Milton served as an example of someone with the very highest standards for the art of composition.

The fledgling profession needed a public advocate and Milton Babbitt was electronic music’s most articulate spokesperson. He appeared on radio and television and also sat on government and foundation panels.  His lecture topic was often ostensibly technical.  To quote a line from a lecture recorded at the New England Conservatory in the 1960s, “[T]he joy of the electronic medium is that once you can capture sound… it’s not susceptible to change” (by which he meant no longer susceptible to inaccuracy).   In other words, Milton used the newness of electronic music as a way to speak about cherished ideas about music that, while given new power by the evolving technology, were already part of his musical thinking.  When Milton spoke, as he often did, about the importance of “time and order in music” he could just as easily have been speaking about Schoenberg as electronic music.  This too was the power of his example—musical thinking, while potentially expanded by changing technology, is always at the core of what we do.  Milton, and others at Columbia, always asked, “How can I use the tools that I now find in front of me to explore musical ideas?”

In spite of what many might think, Milton’s position on rigor in electronic music practice was as much practical as ideological.  The world of electroacoustic music is still tainted by the notion that the studio is the playground of dilettantes, while real composers write for “real” instruments.  In the late 1950s, when the EMC was established, it was critical that the new venture have the highest aspirations if it were to survive and flourish.  For all that was being invested in grant dollars, facilities, personnel, and all the public notice, there would also need to be a tangible payoff.

Arguably that payoff came quickly.  Composers flocked to the Columbia-Princeton studio—Stravinsky and Shostakovich among many others visited.  At Babbitt’s suggestion at Tanglewood in the summer of 1958, the young Argentine composer Mario Davidovsky came to work there, a move that helped define his career; Davidovsky, along with another émigré composer, Bülent Arel (who came to the C-P EMC from Turkey in 1959), went on to shape a major electronic music tradition. The studio also drew countless students who were inspired to make their own musical and technical contributions, among them Wendy Carlos, Charles Dodge, Halim El-Dabh, Alice Shields, and Robert Moog.

One aspect of the legacy of the early EMC is the notion of a studio as a musical instrument rather than simply a facility for research.  Hundreds of pieces were composed at Columbia through its first few decades, and a number of these have become acknowledged as masterpieces.  I would include on this list many of Babbitt’s electronic works, especially the pieces for live and pre-recorded resources, such as Philomel (soprano and tape) and Reflections (piano and tape).

Babbitt’s electronic works, all realized on the mammoth RCA Mark II synthesizer, are idiosyncratic and highly personal.  The RCA, as Milton put it in the 1997 interview he contributed to our Video Archive of the Electroacoustic Music, “was the perfect instrument for me.” It was the RCA’s degree of rhythmic precision that most interested Milton.  Ironically, while most composers were excited about the new sounds that the studio made possible, Milton claimed no interest in the sounds themselves.  I suspect that, as in many such things, he was overstating his case.  I cannot hear pieces like Ensembles for Synthesizer without feeling great affection for the sound of the piece.  And one aspect that makes Philomel such a coherent world is the way in which the electronic sounds interact with the vocal sounds (recorded by Bethany Beardslee) that Milton took pains to process through the RCA.

I have avoided the temptation to personalize this essay to this point, but while I never studied with Milton, he had a profound effect on my life and work both through the example of his music, which I heard performed live frequently through the 1970s and ’80s, and through his consistent personal encouragement over our friendship of thirty years—this in spite of a vast difference in our personal aesthetics. (I suspect that many readers could report the same.) Rarely a day goes by when I do not think, usually fondly, of some thought-provoking phrase of Milton’s.  In writing my tribute piece for his 80th birthday, Left to His Own Devices, I spent an exhaustingly intensive few months with samples from archival interviews Milton had given over many years.  In hearing those same phrases over and over again, intoned in that beautiful, resonant voice, a layered and nuanced music emerged and became one with the composer himself.  It is this unified yet ineffable complexity, together with a core of generosity and kindness, that will always be the way I remember the man.

Electroacoustic Music is Not About Sound

A table with a variety of electroacoustic music gear. (Image courtesy Blake Zidell & Associates for NYCEMF and the New York Philharmonic Biennial)

Yes, I do mean this title to be provocative, but my intention is to question some of our priorities and assumptions about composing, not to be polemical or to suggest some correct way of composing. Rather, I am sharing some thinking that I have found serves my students and me well. The main thing I want to explore is my own attitude about musical time. Admittedly, this is a huge topic, with whole books and dissertations rightly devoted to it. I can only scratch the surface in a blog post, so I will just try to (re-)start a conversation about something that seems, strangely, to have become accepted as settled business. While I am at it, I am wondering too about our seeming complacence at having given up control of pitch.

There are basic aspects of compositional thinking that seem to have become almost extinct—particularly, but not exclusively, in the realm of electroacoustic music.   To put it plainly, the ideas of narrative structure and of pitch specificity are now rarely considered. To claim that pitch specificity is important is to risk being labeled a reactionary or, worse yet, conventional. An even more profound change has taken place in our discourse regarding time—the most salient feature of music.  There is the strong suggestion that it is quaint to think of music as a narrative form, unfolding in time. The notion seems so old fashioned that the use of time-denying alternative terminology, adapted from the non-time-based arts, has become accepted practice. (The term sound-object comes to mind.) But, there is still a lot to be gained by an awareness of and the ability to control pitch, no matter how abstract and seemingly “unpitched” musical materials may be. And the unfolding of structure moment by moment is still what music is about—that is, it is about time. I love inventing sounds as much as anyone, but without attention to time we just have sounds. Sound unfolding in time, on the other hand, produces musical thought.  I write this while fully realizing that some readers will find this statement obvious, while others will find it either shortsighted or just plain wrong.

Scrutiny of the nature of sound itself has intensified over the years, especially in the context of electroacoustic music, where the possibilities for the creation and manipulation of sound are truly endless. In fact, the very experience of composing in the studio encourages this focus.  It is an incredibly gratifying experience to work directly with sound, listening to and changing material in real time.  The immediacy of this experience is one of the things that sets work in the studio apart from instrumental writing. This change in our way of making music convinced many composers that a fundamental change was also at hand in the very way in which a piece could embody meaning. The de-emphasis of pitch as the main carrier of an idea, in favor of a more foreground function for timbre, was already well underway in the early 20th century. (The Farben movement in Schoenberg’s Opus 16 is one of the usual examples, while Scelsi demonstrates a further development.) From the 1950s on, the development of technology to capture and manipulate sound accelerated this conceptual transformation.

A mixing console, processor, speaker, video screen and other equipment in Eric Chasalow's music studio.

New materials do demand new approaches, but this does not erase the necessity of paying attention to shaping the narrative. On the contrary, distinctive sounds, each potentially in its own perceived space, allow for a new narrative clarity. Just as in film, our more famous time-based cousin, music can have multiple narratives intertwining and adding complexity to the flow of ideas. With crosscutting, flashback, and the like, one can create powerful illusions of nonlinearity, but in no case are we able to escape the reality that time only moves forward.  When we acknowledge this fact, we face the necessity of structuring musical time with great care.  If we do, it is more likely that the music will require and reward an intensified engagement by the listener. This allows us to invoke memory in subtle and powerful ways.

I am very well aware of philosophies that propose to disrupt older notions about musical time, deriving from work that goes back at least to the mid-20th century.  There are tropes on the static as “the eternal” (Messiaen), “moment form” (Stockhausen), and “discontinuity” (my old friend, Jonathan Kramer). It’s just that no matter how many alternative philosophies I encounter, I am always led back to the fact that there is still power in the flow of one moment of experience to the next. It is true that our brains can hold multiple impressions at once, and reorder and reconsider them fluidly. Still, we experience a piece as a succession of elements, and the ordering of these drives the overall experience. If I can get you to care about how time increments in my piece, you will become an engaged listener. Conversely, if I cannot convince you to follow the narrative journey, you will not hear what I have to say. If I only convince you to listen some of the time—to drop in and out of awareness—I have provided, at best, an assemblage of moments rather than a cohesive argument. Another way of thinking about this is in relation to aleatoric relationships we encounter every day. We may be surrounded by objects, and it is possible that by being awake to our surroundings we will become aware of inherent, even beautiful structures, but it is more likely that the chance experience will not rise above the mundane. (Apologies to John Cage, whom I heard express otherwise many times.) The artist is able to create and reveal meaningful connections where we may not otherwise find them, and for composers, time is the most powerful domain with which to achieve this.

All of the preceding, however, cannot exist unless listeners allow for the time necessary to experience a piece of music. This has certainly become more and more rare in lives mediated by devices and experienced in five-second chunks. My most naïve idea may be the assumption that anyone is still willing to concentrate and truly listen through a piece of music at all. If we cannot make this assumption, however, we lose musical experience, so to abandon this hope is to abandon music. There is a larger topic here about where we are when we hear music—a concert hall (or alternative formal space) or online, on the subway, in a variety of other informal contexts.

Let’s turn then to the matter of pitch. Why does an increased interest in sound, or the foregrounding of one of its elements, timbre, mean that now pitch is an unimportant element? Am I the only one who finds it ironic that, as we pay such close attention to sound, so little attention is given to pitch specificity? Pitch is such an important part of the complex we call “sound.” Yes, timbre and pitch are not independent in the physical embodiment of a sound, but we can and do think of controlling them independently, and there are many computer tools for doing so. Isn’t ignoring pitch structure a kind of dumbing down? Aren’t we asking listeners to stop paying attention to important details when we fail to make choices regarding pitch? Are we perhaps giving up the precise control of pitch because new technologies make other things easier? Do the newer contexts and new technologies distract us? Perhaps some of us have emerged from the highly politicized prominence of serialism with such distaste for pitch that we feel relief in its seeming erasure.  Perhaps it is just the pendulum swinging from one extreme to the polar opposite. Whatever the reason, I find the lack of attention to pitch impoverishing.  We need every detail, every nuance at our disposal as musicians. Performers know the importance of nuance very well, while composers sometimes are too willing to let some things slide. What is especially great about electroacoustic music though is that it adds to what can make up the layers of meaning in music. Sound of any source and quality can be brought into dialog with any other, creating layers of meaning.  Spoken texts can collide with environmental sounds, familiar instruments, or synthesized sounds that seem completely nonreferential.  Even with these diverse and complex sources, pitch is still very much present and need not be ignored.
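
As a small illustration of that independence, here is a minimal Python sketch in which pitch (the fundamental frequency) and a crude stand-in for timbre (the rolloff of the upper partials) are supplied as separate parameters to an additive tone. The function and its values are assumptions made for this example, not a reference to any particular tool.

```python
# Pitch and a simple spectral-envelope control treated as independent parameters.
import numpy as np

def additive_tone(fundamental_hz, brightness, seconds=1.0, sample_rate=44100, n_partials=16):
    """brightness in (0, 1]: higher values give stronger upper partials (a brighter timbre)."""
    t = np.arange(int(seconds * sample_rate)) / sample_rate
    tone = np.zeros_like(t)
    for k in range(1, n_partials + 1):
        amp = brightness ** (k - 1)                   # spectral envelope: the timbre control
        tone += amp * np.sin(2 * np.pi * k * fundamental_hz * t)
    return tone / np.max(np.abs(tone))

if __name__ == "__main__":
    dark = additive_tone(155.56, brightness=0.3)      # an Eb, relatively dark
    bright = additive_tone(155.56, brightness=0.8)    # the same Eb, much brighter
    print(dark[:3], bright[:3])
```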

While many of my electroacoustic pieces provide good examples of what I am discussing, the beginning of one older piece, Crossing Boundaries (2000), is particularly clear.  The piece layers sounds from many sources, including recordings of spoken text from archives and answering machines, and bits extracted from historical recordings.  It starts with a quick succession of pitched sources that combine into a complex that we can hear as mostly an Eb chord, oscillating between minor and major. The sounds are more fluid and ever-changing than one would get in an instrumental piece, yet the Eb moving to D, then elsewhere (one can follow very specifically) creates a harmonic framework that provides a feeling of upbeat, focusing attention on the entrance of speaking voices.  The whole middle of the piece lingers around G, but as this starts to move, it changes the sense of time passing dramatically. For much of the piece, our attention is on the voices as they speak various short phrases, many of which refer to the concept of time. The piece is, then, an expansion of word-painting technique, and the underpinning for this “metamusical narrative” is a framework of sonorities that is always kaleidoscopic and never imitative of traditional instruments, but where the pitch choices matter a great deal. It is an example of pitch structure shaping the larger musical trajectory of an electroacoustic piece.  I must add too that, in spite of this example, I do not mean to suggest that tempered pitches are necessary. The entire universe of microtonal tunings is wide open, especially with tools that allow our precise control of frequency.
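
As a brief sketch of that last point, the frequencies of an equal-tempered microtonal scale with n divisions of the octave follow the standard formula f_k = base × 2^(k/n). The short Python example below generates one such scale; the choice of 19 divisions and the base frequency are arbitrary examples, not a recommendation.

```python
# Frequencies for n equal divisions of the octave (n-EDO).
def edo_scale(base_hz=261.63, divisions=19):
    """Return one octave of an n-EDO scale, including the octave itself."""
    return [base_hz * 2 ** (k / divisions) for k in range(divisions + 1)]

if __name__ == "__main__":
    for k, f in enumerate(edo_scale()):
        print(f"step {k:2d}: {f:8.2f} Hz")
```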

What is true of composing is also true in analysis. One may discover meaningful relationships within a piece by considering the dimension of pitch where one might not expect to find them. My former student, John Mallia, did his dissertation on Varèse’s Poème électronique, a piece most often discussed in terms of the wide array of sound sources it employs. John discussed these too, but much of his work looked at aspects of the structure where harmonic relationships were clearly very important, particularly in shaping phrases.  The analysis even finds precedents for these structures in Varèse’s instrumental pieces. It should not be so surprising that composers carry what they know about music from working with instruments into their studio work. The trick is to use the new context to spawn new musical possibilities, but figuring these out does not require throwing out old concerns as much as we might imagine. There have been numerous examples throughout history of new forms developing through a tension between evolutionary and revolutionary thinking, and there is no reason to think we have somehow recently escaped the value of historical precedents.