Delays, Feedback, and Filters: A Trifecta

My last post, “Delays as Music,” was about making music using delays as an instrument, specifically in the case of the live sound processor. I discussed a bit about how delays work and are constructed technically, how they have been used in the past, how we perceive sound, and how we perceive different delay times when used with sounds of various lengths. This post is a continuation of that discussion. (So please do read last week’s post first!)

We are sensitive to delay times as short as a millisecond or less.

I wrote about our responsiveness to minuscule differences in time, volume, and timbre between the sounds arriving at our ears, the skill set we use as humans for localizing sounds and navigating our environment by ear. Sound travels at approximately 1,125 feet per second. Although all of the waves in a sound travel at that same speed, the low-frequency waves (which are longer) tend to bend and wrap around objects, while high frequencies are absorbed by or bounce off of the objects in our environment. We are sensitive to delay times as short as a millisecond or less, related to the size of our heads and the physical distance between our ears. We can detect tiny differences in volume between the ear that is closer to a sound source and the other. We can discern small differences in timbre, too, as some high-frequency sounds are literally blocked by our heads. (To notice this phenomenon in action, cover your left ear with your hand and, with your free hand, rustle your fingers first near the uncovered ear and then near the covered one. Notice what is missing.)
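To get a feel for the scale involved, here is a back-of-envelope sketch of the largest interaural time difference a head of roughly that size could produce. The ear spacing is an assumed round number, not a measurement, and the speed of sound is the metric equivalent of the figure quoted above:

```python
# Back-of-envelope interaural time difference (ITD).
# Assumed, illustrative numbers: ~0.22 m between the ears,
# sound at ~343 m/s (roughly 1,125 ft/s).
ear_spacing_m = 0.22
speed_of_sound = 343.0                              # meters per second

max_itd_ms = ear_spacing_m / speed_of_sound * 1000
print(f"maximum ITD is about {max_itd_ms:.2f} ms")  # ~0.64 ms
```

That fraction of a millisecond is the whole range our localization system works with, which is why such tiny delay times matter at all.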

These psychoacoustic phenomena (interaural time difference, interaural level difference, and head shadow) are useful not only to audio engineers; they are also important to us when considering the effects and uses of delay in electroacoustic musical contexts.

My “aesthetics of delay” are similar to the rules of thumb audio engineers use when applying delay as an audio effect or to add spatialization. The difference in my approach is that I want to recognize sounds I can put into a delay so that I can predict what will happen to them in real time as I play with various parameter settings. I use changes in delay times as a tool to create and control rhythm, texture, and timbral change. I have tried to develop a kind of electronic musicianship that incorporates acousmatic listening and quick responses, and I hope to share some of it here.

It’s all about the overlap of sound.

As I wrote, it’s all about the overlap of sound. If a copy of a sound, delayed by 1 to 10 ms, is played with the original, we simply hear it as a unified sound, changed in timbre. Sounds put through such short delays nearly always overlap. Longer delays might create rhythms or patterns; medium-length delays might create textures or resonance. It all depends on the length of the sound going into the delay and on how that length compares to the length of the delay.
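As a minimal illustration of that first case, here is a sketch (in Python rather than the Max patches I normally use, with noise standing in for a live input, and modeling no particular pedal or plugin) of a signal mixed with a copy of itself delayed by a few milliseconds:

```python
import numpy as np

# Short-delay overlap: the original and its delayed copy are heard as
# one comb-filtered sound, a change in timbre rather than an echo.
sr = 44100                        # sample rate in Hz
delay_ms = 5.0                    # short enough to always overlap
d = int(sr * delay_ms / 1000)

rng = np.random.default_rng(0)
dry = rng.standard_normal(sr)     # one second of test noise

delayed = np.zeros_like(dry)
delayed[d:] = dry[:-d]            # the delayed copy
out = 0.5 * (dry + delayed)       # original plus copy, heard as one sound
```

With a much longer `delay_ms` (longer than the sound going in), the same mix gives a discrete repeat instead of a timbral change.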

This post will cover more ground about delays and how they can be used to play dynamic, gestural, improvised electroacoustic music. We will also look at the relationship between delays and filtering; in the next and last post I’ll go more deeply into filtering as a means of musical expression and into how to listen and be heard in that context.

Mostly, I’ll focus on the case of the live processor who is using someone else’s sound, or a sound that cannot be completely foreseen (and not always an acoustic instrument as a source: Joshua Fried does this beautifully by sampling and processing live radio in his Radio Wonderland project). Despite this focus, I am optimistic that this information will also be useful to solo instrumentalists using electronics on their own sound, as well as to composers wanting to build improvisational systems into their work.

No real tips and tricks here (well, maybe a few), but I do hope to communicate some ideas I have about how to think about effects and live audio manipulation in a way that outlasts current technologies. Some of the examples below use the Max programming language, both because it is my main programming environment and because it is well suited to diagramming and explaining my points.

We want more than one, we want more than one, we want…

As I wrote last week, musicians often want to be able to play more than one delayed sound, or to repeat that delayed sound several times. To do this, we either use more delays, or we use feedback to route a portion of our output back into the input.

When using feedback to create many delays, we route a portion of our output back into the input of the delay. Because only some of the sound (not 100%) is routed back, each repetition is a little quieter than the last, and the sound eventually dies out in decaying echoes. If our feedback level is high, the sound may recirculate for a while in an almost endless repeat, and it might even overload and clip if we add new sounds (like an overfilled fountain).
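Here is a minimal sketch of that routing (again in Python, not Max, and not a description of any specific device): one delay line whose output is partly fed back into its own input. The parameter names and default values are illustrative choices.

```python
import numpy as np

# A minimal feedback-delay sketch: part of the output is routed back
# into the delay line, so each repeat is quieter than the last.
def feedback_delay(x, sr, delay_ms=350.0, feedback=0.5, mix=0.5):
    d = max(1, int(sr * delay_ms / 1000))   # delay length in samples
    buf = np.zeros(d)                       # circular delay buffer
    out = np.zeros(len(x))
    w = 0                                   # current buffer position
    for n, dry in enumerate(x):
        echo = buf[w]                       # what went in d samples ago
        buf[w] = dry + feedback * echo      # route part of the output back in
        out[n] = (1 - mix) * dry + mix * echo
        w = (w + 1) % d
    return out
```

With `feedback` well below 1.0 the echoes decay; pushed close to (or past) 1.0, the loop recirculates almost endlessly and will eventually overload if new sound keeps being added, which is the overfilled-fountain situation described above.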

Using multi-tap delays, or a few delays in parallel, we can make many copies of the sound from the same input, and play them simultaneously.  We could set up different delay lengths with odd spacings, and if the delays are longer than the sound we put in, we might get some fun rhythmic complexity (and polyrhythmic echoes).  With very short delays, we’ll get a filtered sound from the multiple copies being played nearly simultaneously.
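A comparable sketch of the parallel approach: several taps of different, oddly spaced lengths read from the same input and summed. The tap times and gains here are arbitrary examples, not presets from any product.

```python
import numpy as np

# A minimal multi-tap sketch: one input, several delayed copies played
# together. Long, oddly spaced taps give polyrhythmic echoes; very
# short taps give comb filtering instead.
def multitap(x, sr, tap_ms=(210.0, 333.0, 541.0), gains=(0.7, 0.6, 0.5)):
    x = np.asarray(x, dtype=float)
    out = x.copy()
    for ms, g in zip(tap_ms, gains):
        d = int(sr * ms / 1000)
        if d < len(x):
            out[d:] += g * x[:-d]   # add this tap's delayed copy
    return out
```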

Any of these delayed signals (taps) could in turn be sent back into the multi-tap delay’s input in a feedback network. It is possible to put any number and combination of additional delays and filters in the feedback loop as well, and these more complex designs are what account for the differences between all the flavors of delay that are commonly used.

It doesn’t matter how we choose to create our multiple delays.  If the delays are longer than the sounds going into them, then we don’t get overlap, and we’ll hear a rhythm or pattern.  If the delays are medium length (compared to our input sound), we’ll hear some texture or internal rhythms or something undulating.  If the delays are very short, we get filtering and resonance.

Overlap is what determines the musical potential for what we will get out of our delay.

The overlap is what determines the musical potential for what we will get out of our delay. For live sound processing in improvised music, it is critical to listen analytically (acousmatically) to the live sound source we are processing.  Based on what we hear, it is possible to make real-time decisions about what comes next and know exactly what we will get out.

Time-varying delay – interpolating delay lines

Most cheaper delay pedals and many plugins make unwanted noise when the delay time is changed while a sound is playing. Usually described as clicks, pops, crackling, or “zipper noise,” these artifacts occur because the delays are “non-interpolating”: the changes in delay time are not smooth, so the audio is played back with abrupt jumps in level. If you never change delay times during a performance, a simple, fixed, non-interpolating delay is fine.

Changing delay times is very useful for improvisation and for turning delay into an instrument. To avoid the noise and clicks we need to use “interpolating” delays, which might mean a slightly more expensive pedal or plugin, or a little more programming. As performers or users of commercial gear we may not be privy to all the different techniques being used in every piece of technology we encounter. (Linear or higher-order interpolation, windowing/overlap, and selection of delayed sounds from several parallel delay lines are a few such techniques.) For the live sound processor/improviser, what matters is: Can I change my delay times live? What artifacts are introduced when I change them? Are they musically useful to me? (Sometimes we like glitches, too.)
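As one example of the first technique named above, here is a sketch of a delay line read with linear interpolation, whose delay time glides smoothly toward a target instead of jumping. Everything about it, from the smoothing coefficient to the buffer size, is an illustrative choice rather than a description of any particular product.

```python
import numpy as np

# A minimal interpolating-delay sketch: the delay time is smoothed per
# sample and the buffer is read between neighbouring samples (linear
# interpolation), so the time can change while sound plays without
# clicks or zipper noise.
def interpolating_delay(x, sr, delay_ms, max_ms=2000.0, smooth=0.0005):
    size = int(sr * max_ms / 1000) + 2
    buf = np.zeros(size)                       # circular buffer
    out = np.zeros(len(x))
    target = np.broadcast_to(np.asarray(delay_ms, dtype=float), (len(x),))
    cur = float(target[0])                     # smoothed delay time in ms
    w = 0                                      # write position
    for n, dry in enumerate(x):
        cur += smooth * (target[n] - cur)      # glide toward the target time
        d = max(1.0, cur * sr / 1000.0)        # delay in (fractional) samples
        r = (w - d) % size                     # fractional read position
        i = int(r)
        frac = r - i
        out[n] = (1 - frac) * buf[i] + frac * buf[(i + 1) % size]
        buf[w] = dry
        w = (w + 1) % size
    return out
```

Sweeping `delay_ms` while audio runs through this is also exactly what produces the Doppler-style pitch bend described in the next section; jumping the time instantly is what brings the clicks back.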

Doppler shift!  Making delays fun.

A graphic representation of the Doppler Shift

An interesting feature/artifact of interpolating delays is the characteristic pitch shift that many of them make.  This pitch shift is similar to how the Doppler shift phenomenon works.

The characteristic pitch shift that many interpolating delays make is similar to how the Doppler Effect works.

A stationary sound source normally sends out sound waves in all directions around itself, at the speed of sound. If that sound source starts to move toward a stationary listener (or if the listener moves toward the sound), the successive wave fronts start getting compressed in time and hit the listener’s ears with greater frequency.  Due to the relative motion of the sound source to the listener, the sound’s frequency has in effect been raised.  If the sound source instead moves away from the listener, the opposite holds true: the wave fronts are encountered at a slower rate than previously, and the pitch seems to have been lowered. [Moore, 1990]

OK, but in plainer English: when a car drives past you on the street or highway, you hear its sound go up in pitch as it approaches and back down as it passes. This is the Doppler Effect. The sound waves always travel at the same speed, but because they are coming from a moving object they reach you compressed (higher in frequency) as the car approaches and stretched out (lower in frequency) as it moves away.

A sound we put into a delay line (software, pedal, or tape loop) is like a recording. If you play it back faster, the pitch goes up as the sound waves hit your ears in faster succession; if you slow it down, it plays back lower. Just as with the passing train siren or car horn that gets higher as it approaches and lower as it passes, varying the delay time of a sound creates the same auditory illusion: the pitch goes down as the delay time is increased and up as the delay time is decreased, the same Doppler Effect as in the case of the stationary listener and the moving sound source.
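A rough rule of thumb for how much bend you get (my own framing, not something most pedals expose): if the delay time changes at a steady rate, the delayed copy plays back at a ratio of roughly one minus that rate.

```python
import math

# Illustrative numbers only: lengthen the delay by 100 ms over one
# second of sweeping and the delayed sound plays back at ~0.9x speed,
# about 1.8 semitones flat; shortening the delay raises the pitch by
# the corresponding amount.
delay_change_s = 0.100            # the delay time grows by 100 ms...
sweep_time_s = 1.0                # ...spread over one second
ratio = 1.0 - delay_change_s / sweep_time_s
semitones = 12 * math.log2(ratio)
print(f"playback ratio {ratio:.2f}, pitch shift {semitones:+.1f} semitones")
```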

Using a Doppler Effect makes the delay more of an “instrument.”

Using a Doppler Effect makes the delay more of an “instrument” because it’s possible to repeat the sound and also alter it. In my last post I discussed many types of reflections and repetitions in the visual arts, some exact and natural, others more abstract and transformed as reflections. Being able to alter the repetition of a sound in this way is of key importance to me. Adding additional effects alongside the delays is important for building a sound that is musically identifiable as separate from that of the musician whose sound I use as my source.

Using classic electroacoustic methods for transforming sounds, we can create new structures and gestures out of a live sound source. Methods such as pitch-shifting, speeding sounds up or slowing them down, or a number of filtering techniques, work better if we also use delays and time displacement as a way to distinguish these elements from the source sounds.

Many types of delay and effects plugins and pedals on the market are based on simple combinations of the principal parameters I have been outlining (e.g. how much feedback, how short a delay, how it is routed). For example, Ping Pong Delay delays a signal 50–100 ms or more and alternates sending it back and forth between the left and right channels, sometimes with high feedback so it goes on for a while. Flutter Echo is very similar to Ping Pong Delay, but with shorter delay times that cause more filtering to occur, an acoustic effect sometimes found in very live-sounding public spaces. Slapback Echo has a longer delay time (75 ms or more) with no feedback.
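As an example of how little separates these flavors, here is a ping-pong sketch in the same style as the earlier ones (the delay time and feedback values are assumptions, not a transcription of any product): each repeat is fed into the other channel’s delay line so the echoes alternate left and right while decaying.

```python
import numpy as np

# A minimal ping-pong delay sketch: the input enters the left delay
# line, the left echo is fed into the right line, and the right echo
# is fed back into the left, so repeats bounce between channels.
def ping_pong(x, sr, delay_ms=80.0, feedback=0.6):
    d = max(1, int(sr * delay_ms / 1000))
    buf_l, buf_r = np.zeros(d), np.zeros(d)
    left, right = np.zeros(len(x)), np.zeros(len(x))
    w = 0
    for n, dry in enumerate(x):
        echo_l, echo_r = buf_l[w], buf_r[w]
        buf_l[w] = dry + feedback * echo_r   # right channel's echo crosses back
        buf_r[w] = feedback * echo_l         # left channel's echo crosses over
        left[n], right[n] = dry + echo_l, dry + echo_r
        w = (w + 1) % d
    return left, right
```

Shrink `delay_ms` toward a few milliseconds and the same structure starts behaving like the flutter echo described above; with `feedback` at zero and a longer delay time, the left channel alone gives the single-repeat slapback.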

FREEZE!  Infinite Delay and Looping

Some delay devices will let us hold a sample indefinitely in the delay. We can loop a sound and “freeze” it, adding additional sounds sometime later if we choose. The layer cake of loops that builds up lends itself to an easy kind of improvisation, which can be very beautiful.
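In terms of the feedback sketch above, “freezing” is just the limiting case: feedback of exactly 1.0 holds whatever is in the buffer forever, and any new input simply layers on top of what is already circulating. A minimal sketch of one step of that loop (the names are mine, for illustration):

```python
# One sample of a "freeze"/overdub loop: with feedback at exactly 1.0
# the recorded material never decays, and new input (when overdubbing)
# stacks up as additional layers.
def freeze_step(buf, w, dry, overdub=False):
    held = buf[w]                            # what was recorded one loop ago
    buf[w] = held + (dry if overdub else 0.0)
    return held, (w + 1) % len(buf)
```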

“Infinite” delay is used by an entire catalog of genres and musical scenes.

Looping with infinite delay is used across an entire catalog of genres and musical scenes, from noise to folk music to contemporary classical. The past few years especially, it has been all over YouTube and elsewhere online, thanks to software like Ableton Live and hardware loopers like those made by Line 6. Engaging in a form of live composing/production, musicians generate textures and motifs and construct them into entire arrangements, often based upon the sound of one instrument, in many tracks, all played live and in the moment. In terms of popular electronic music practice, looping and grid interfaces seem to have been the most salient and popularly used paradigms for performance and interface since the late 2000s.

Looping music is often about building up an entire arrangement from scratch, with no sound heard that was not first played live by the instrumentalist before its repetition (a repetition that is possibly slightly different, mediated by being heard over speakers).

With live sound processing, we use loops, too, of course. The moment I start to loop a sound “infinitely,” I am, theoretically, no longer working with live sound processing; I am processing something that happened in the past (this is sometimes called “live sampling,” and we could quibble about the differences). To make dynamic live looping for improvised music, whether by sampling and looping other musicians or by processing one’s own sound, it is essential to be flexible: to be able and willing to change the loops in some way, perhaps quickly, and to alter the recorded audio in real time. These alterations can be a significant part of the expressiveness of the sound.

For me, the most important part of working with long delays (or infinite loops) is being able to create and control rhythms with those delays. I need to lock in (synchronize) my delay times while I play. Usually I do this manually, by listening, and then using a Tap Tempo patch I wrote (which is what I’ll do when I perform this weekend as part of Nick Didkovsky’s Deviant Voices Festival on October 21 at Spectrum, and the following day with Ras Moshe as part of the Quarry Improvised Music Series at Triskelion Arts).
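The general idea behind a tap-tempo control, sketched very roughly here (this is not my Max patch, just the usual averaging approach): time the gaps between the last few taps and use their average as the delay time.

```python
import time

# A rough tap-tempo sketch: keep the last few tap times, average the
# gaps between them, and return that as the delay time in seconds.
# The history length and timeout are arbitrary choices.
class TapTempo:
    def __init__(self, history=4, timeout_s=2.0):
        self.taps = []
        self.history = history
        self.timeout_s = timeout_s

    def tap(self, now=None):
        now = time.monotonic() if now is None else now
        if self.taps and now - self.taps[-1] > self.timeout_s:
            self.taps = []                   # long pause: start a new count
        self.taps = (self.taps + [now])[-self.history:]
        if len(self.taps) < 2:
            return None                      # need at least two taps
        gaps = [b - a for a, b in zip(self.taps, self.taps[1:])]
        return sum(gaps) / len(gaps)         # average gap = delay time (s)
```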

Short delays are mostly about resonance. In my next and final post, I will talk more about filters and resonance, why using them together with delays is important, and strategies for being heard when live-processing acoustic sound in an improvisation.

In closing, here is an example from What is it Like to be a Bat?, my digital chamber punk trio with Kitty Brazelton (active 1996–2009, and continuing in spirit). In one piece, I turned the feedback up on my delay as high as I could get away with (nearly causing the microphones and sound system to feed back too), then yelled “Ha!” into my microphone and set off a sequence of extreme delay changes with an interpolating delay, in a timing we liked. Joined by drummer Danny Tunick, who wrote a part to go with it, we’d repeat this sequence four times, each time louder, noisier, different, but somehow repeatable at each performance. It became a central theme in that piece and was recorded as the track “Batch 4,” part of our She Said – She Said, “Can You Sing Sermonette With Me?” on the Bat CD for the Tzadik label.

Some recommended further reading and listening

Thom Holmes, Electronic and Experimental Music (Routledge, 2016)

Jennie Gottschalk, Experimental Music Since 1970 (Bloomsbury Academic, 2016)

Geoff Smith, “Creating and Using Custom Delay Effects” (for the website Sound on Sound, May 2012) Smith writes: “If I had to pick a single desert island effect, it would be delay. Why? Well, delay isn’t only an effect in itself; it’s also one of the basic building blocks for many other effects, including reverb, chorus and flanging — and that makes it massively versatile.”

He also includes many good recipes and examples of different delay configurations.

Phil Taylor, “History of Delay” (written for the website for Effectrode pedals)

Daniel Steinhardt and Mick Taylor, “Delay Basics: Uses, Misuses & Why Quick Delay Times Are Awesome” (from their YouTube channel, That Pedal Show)

Diligence is to Magic as Progress is to Flight

Austin Wulliman
Photo by Doyle Armbrust

Two weeks ago, I visited a pair of dynamic, hardworking Chicago musicians in their studio. I was intrigued to see violinist Austin Wulliman (known for his work with Spektral Quartet and Ensemble Dal Niente) and composer/bassoonist Katherine Young (known as a great improviser and increasingly in-demand composer) working together in an entirely new context. The pair was preparing to reveal Diligence is to Magic as Progress is to Flight, the result of more than a year and a half’s worth of improvisation, sound creation, and collaboration. The immersive, fifty-minute piece would find its first home in the Defibrillator Gallery for a weeklong residency and culminating performance.

The instruments Wulliman would use for the performance—a prepared viola, two prepared violins, and a “normal” violin—were scattered throughout the small, windowless electronic music studio where the pair had holed up for the day. Young was stationed at the computer with an enormous array of sound samples arranged on the screen in front of her. Although they were at the end of a long day in the studio, the pair spoke with great energy about their upcoming performance. It was evident that their long-term, close collaboration had led to great mutual admiration and a wide array of new experiences for both of them.

When I asked Wulliman what was new for him in the collaborative process with Young, he answered: “Basically everything.”
“That was the intention going into it,” he explained. “When I approached Katie about this over a year and a half ago, I wanted to do something where I had the freedom to explore sounds with somebody who’s great at that.”

Wulliman saw the collaboration with Young as a chance to embark on sonic explorations that performers aren’t often afforded in the context of a fully notated score. Young began their collaboration by sending Wulliman videos and photographs for him to respond to with improvisations, and from this jumping-off place the pair began to develop a common language of sounds that would comprise Diligence. “This has been by far the most I’ve been in the workshop with somebody,” Wulliman said enthusiastically. “I feel like we made the materials together. I’ve always been in the room helping to make the sounds; Katie has always led the way in terms of shaping things, and guiding it becoming a piece.”

Prepared string instrument

Photo by Doyle Armbrust

The collaborative process revealed exciting new territory for Young as well. “It’s been really exciting to be able to spend this much time with sounds that I am not responsible for producing in the moment,” she said, referring to her work as a performing bassoonist. “I’ve been able to get outside of the closeness of having this instrument that [I’m] so connected to. I can say, ‘What if you do this thing, Austin?’ It’s hard to ask yourself those questions in terms of your own instrument. You feel it’s not possible. You think you know what’s possible, with your own instrument. It’s been exciting and has freed me up to think more about structure.”

The result of this collaboration, revealed September 27 at Defibrillator Gallery, was a subtle, sensual performance that enveloped the audience in an ever-changing ecosystem of sound and color. Many moments of Diligence were surprising, even revelatory: Wulliman tearing into a growling prepared G string with cadenza-like fervor, blending hushed bridge sounds with the surrounding tape part, or turning wild pizzicato textures into a virtuosic anti-caprice.

What made Diligence so satisfying was that it brought the greatest strengths of both composer and performer into bold relief. Young’s compositional hallmarks—her visceral approach to sound; her organic use of repetition, structure, and pacing; her attentiveness to the smallest details of timbre; her adventurousness in using instruments in unexpected ways—made the work feel like a living thing, breathing and unfolding as the evening progressed. And Wulliman’s strongest characteristics as a performer—his intensity of focus, his absolute commitment to each musical gesture—made listeners feel that the pair’s collaborative vision was being fully embodied in each moment.

As the performance ended and the packed gallery gave a series of enthusiastic ovations, an unexpected quote came to mind: Mother Teresa’s adage that “we can do no great things; only small things with great love.” In our contemporary music landscape, long-term collaborations can be logistically and financially difficult to achieve. We live in a culture where bigger is better, where more is more. Composers and performers are often required to write, learn, and perform music on tight timetables, and without a great deal of time for inquiry and reflection. Diligence, then, was a particularly rare treat: the chance to enter a sonic world created by two gifted musicians over a long period of time; the chance to hear sounds that were crafted intentionally, gradually, and with great love.

I Vote For Change(s)

Once a week I put on the hat of journalist and begin what, for me, is the painstaking process of focusing my thoughts on something long enough to be considered a topic and then writing a few paragraphs on it. Fortunately, I submit the manuscripts of my labor to the virtual hands of a team of veritable authorities on the subjects I write about. I say fortunately because, left to my own devices, punctuation, spelling, and rhetoric become the inventory of a china shop that I run through with bovine grace. I’m actually twice-blessed because when errors of historical record enter my monographic treatises, Team NewMusicBox gives them their sorely needed reality check and delivers me from the jaws of debunkery. With that said, I now confess that I feel sorry for the composer who was hung out to dry by The Huffington Post when he trashed the legacy of John Cage earlier this month. While I will not deny Daniel Asia’s, or anyone else’s, right to dislike Cage’s, or anyone else’s, music (provided, of course, that the opinion is based on a piece-by-piece assessment and not a blanket one; if one hasn’t listened to a piece of music, one shouldn’t pass judgment on it), I admit that I agree with Isaac Schankler’s and Dan Joseph’s condemnation of Asia’s article, but for a different reason.

When I read Asia’s description of “harmony, and thus counterpoint,” as “central to Western music for over a thousand years,” I found myself almost as angry at Huffington’s editorial staff as I was at The New York Times when they let an Andrew Solomon interview of Keith Jarrett include comments by Wynton Marsalis, whom Jarrett had disparaged in the interview. Although I cannot say with any authority exactly when harmony became the distinctive property of Western art music, I’m pretty sure that harmony as we know it wasn’t practiced in 1013. Zarlino and Palestrina notwithstanding, harmony as an independent field of study didn’t emerge until the 1600s, with Rameau writing the first treatise dedicated exclusively to the subject in 1722. Pushing the practice of harmony (and “thus” counterpoint) back to the time of free organum is like pushing the practice of jazz back into the 19th century, when the word probably didn’t exist. Besides, if the idea of harmony as a way of establishing a tonal hierarchy is being invoked, then counterpoint, the practice of combining simultaneous independent voices, should probably not be.
This is all a preamble for a discussion of a topic that was suggested by a few readers of last week’s post: that writing a part for a performance that is largely improvised is not out of line with the idea or the practice of improvisation. I’ve pointed out several times that Louis Armstrong copyrighted his entire part to “Cornet Chop Suey” years before he recorded the piece. Similarly, the practice of transcribing and even memorizing solos from recordings of the masters is a necessary part of learning to play jazz. So when I was confronted with putting together an hour-and-a-half-long accompaniment to a one-person theater piece last week, I knew that I would have to not only learn to play the sketches that the previous bassist had supplied to the play’s author, but also be able to reference the entire recorded performance in order to make artistically viable choices. I admit that while I’m not a great memorizer (although I can do so if the situation demands it), I’m pretty good at taking musical dictation. (I haven’t developed perfect pitch, so it goes slowly.) I took the script that the author sent to me and inserted a transcription of the accompaniment into it. I then took out any long stretches of text that weren’t accompanied and used the result as a score to reference for my improvisations. The excerpt below covers about fifteen minutes of the performance. The music notated on traditional staves shows a starting-off point (six measures followed by “(etc.)”) and five motivic elements that are introduced as the improvisation leads to the final major-tenth dyad. The number “30” in brackets is the page number from the original script that didn’t get deleted. (The entire script is 36 pages without the musical cues inserted.)

Excerpt from p. 5 of my score for Stacie Chaiken’s Looking for Louie.

Because I was expected to improvise my part, I was able to take liberties with the original score. I could develop what I considered motives and themes, as long as it didn’t interfere with the flow of the action, something that was discussed in rehearsals. In a way, the music for Looking for Louie is a continual work in progress. Each time it’s performed, new material is introduced and some old material is discarded. When (or if) I perform it again, I know I’ll change much of what was done. Some of this change is programmed into the score; I have no expectation of repeating verbatim what was improvised during the “Dirge.” Some I might re-notate (and, thus, “recompose”) with more, or less, specific instructions. Indeed, some of what was played while I was reading from the part shown above had little to do with what was on the page. My sole guidance for future performances will be the reaction from my collaborator.

This type of collaboration is pretty common in jazz performance. Musicians will get together to rehearse for a concert, gig, whatever, and work things out. Sometimes “things” can be pretty specific and the pencils, and sometimes music paper, come out to write down what the performers need to know to do what they need to do. What is interesting is that the lines between what would be considered “jazz” and what would be considered “aleatoric” improvisation are becoming increasingly blurred. When I performed earlier this month at ABC No Rio with Ingrid Laubrock, David Taylor, and Jay Rosen, I knew that everyone was familiar with free-improvisation, improvising over chord changes, and improvising inside generic frameworks. As a result, I used a combination of notated music that was to be played as correctly as possible and written directions. Another approach that crossover jazz/avant-gardists deploy is using graphic notation with symbols that might or might not be legended for interpretation in performance. This might, or might not, be accepted as real jazz playing, but it’s important to remember that the musicians who played the music that was originally called “jazz” rejected the term, sometimes vehemently. Max Roach, Duke Ellington, and Louis Armstrong all went on record as wishing the term wasn’t used for their music and artists like Nicholas Payton are calling for their music to be re-labeled as Black American Music. I find myself leaning more towards using “jazz” to describe the huge amount of music created by improvising musicians—especially, but not limited to, American improvising musicians—who negotiate chord changes, even when there are none to negotiate, and trans-genre groove-oriented musical styles. The artists who started creating music this way were bringing a new way of listening to the Great American Culture Machine’s consumer class, a class that was largely bored to tears with what the GACM had been offering. The trap that needs to be avoided is the one where the same performance is repeated over and over again.

Matters of Convention

January is a great time for music conferences (or conventions). A few organizations holding them this month that I can name offhand are the Association of Performing Arts Presenters (APAP), the Jazz Education Network (JEN), Chamber Music America (CMA), and the National Association of Music Merchants (NAMM). Sadly, my relationship with the economy is tipped slightly out of my favor at the moment, and having to replace the car this year means that the only convention I’ll be attending is (hopefully) the International Society of Bassists in June. (I had resisted joining the ISB until two years ago, when I finally did so mainly because their convention was being held that year in San Francisco, where my mother lives, so I could write off a visit with her as a business expense. At the convention I was wowed by the virtuosity of the likes of Nicholas Walker, Putter Smith, John Clayton, Bertram Turetzky, and Jiri Slavik. Doing lunch with Michael Formanek and Mark Dresser was also a high point, but taking a private lesson with the legendary Barre Phillips changed my life and sold me on the ISB.)

I had planned to go to the Jazz Connect events at APAP, which are free and end today, but my work schedule has put the kibosh on my plans. I’m supplying the musical accompaniment for a one-person play written and performed by Stacie Chaiken, Looking for Louie, which is being staged at the Rockland Center for the Arts this Sunday (January 13) afternoon and will be in rehearsal during the second day of the convention. On Jazz Connect’s first day, I was glued to the computer writing this post and putting together a part for the rehearsal.

While it might seem paradoxical to some that an improvising musician would be writing a part for a performance, it’s actually not at all at odds with how improvisation works. Chaiken incorporates interactivity with her audience, as well as with her accompanist, in Looking for Louie, and the bass part is largely improvised. However, Looking for Louie was staged previously, in Los Angeles and Israel, and a structural element exists in the music that specifically relates to the work’s plot. To provide improvisations that are in line with the work’s style and surface contours, I transcribed the bass part from the Israeli performance into Finale® and inserted it into an MSWord® document along with the play’s script. It’s essentially the same way I prepare for playing jazz, except that the research I do for that is ongoing and I’ve been at it for a longer time, so I don’t have to do it for every situation that comes along. Fortunately, Chaiken is very comfortable working with jazz musicians (her husband, Martin Berg, used to play trumpet in the Thad Jones-Mel Lewis Orchestra and is currently on the board of directors of the California Jazz Foundation), and our first rehearsal went smashingly well. I think it’s now possible to bring my own sensibilities, which lean more towards the avant-garde than those of her previous accompanists, to our final rehearsal without disrupting the drama.

Rockland Center for the Arts (ROCA) is a terrific place to hear music. The acoustics of its main gallery are superb, and neither Chaiken nor I will need any sound support. Because ROCA receives support from the New York State Council on the Arts, as well as from local businesses and subscription memberships, performers who play there are paid a living wage. Organizations like ROCA are in the business of presenting art, not commodifying it. Sadly, there’s a stigma attached to the presentation of art as art that leads many to believe it to be unfathomable to most. While it’s true that the masses, now more than ever, are potential prey for those who would limit their exposure to good-quality art, especially music, that doesn’t mean such art is beyond the public’s ability to comprehend.

This was made obvious to me last Monday at a memorial for the late saxophonist-composer David S. Ware held at St. Peter’s Church in Manhattan’s Citicorp Center. Ware’s memorial was well attended (standing room only, in fact) and the program well paced. The music played was not what one would expect to be included in the year-end in memoriam segments of NPR or the NYT (neither included Ware), but it was of the highest caliber and performed with the deepest conviction. Saxophonists Rob Brown, Daniel Carter, and Darius Jones gave stellar performances to honor their fallen comrade, as did drummers Andrew Cyrille, Guillermo E. Brown, Warren Smith, and Muhammad Ali.

I first heard Ware when he played at the Iridium Jazz Club in July of 2003. His quartet, with pianist Matthew Shipp, bassist William Parker, and Guillermo Brown, was part of a double bill that also included a quintet led by bassist Henry Grimes (whom I came to see) featuring trumpeter Roy Campbell, saxophonist Brown, pianist Andrew Bemke, and drummer Michael T.A. Thompson. I enjoyed listening to Grimes’s group, but when Ware started his half of the show, I was surprised to find myself witnessing what I can only describe as a singularity at Iridium. Ware and his group not only blew the roof off of the place, but did it with a kind of playing that is all too rarely presented there. Ware’s sound, like Gato Barbieri’s or Clarence Clemons’s, was huge, but devoid of commercial pretense. His compositions supplied him with long, slow-moving progressions, highlighting his improvisations’ multiple-tiered voice leading that infused rapid-fire filigree over a subtle linearity.

At Ware’s memorial I was also surprised to hear, for the first time, multi-instrument builder-performer Cooper-Moore (who, along with Ware and drummer Marc Edwards, was a member of the collective Apogee). Cooper-Moore performed a solo piece on an instrument he calls a harp, and it is a harp, but one played horizontally, like a piano. The piece he played was beautifully lyrical and sat nicely between the opening piece, “Prayer,” by William Parker (who conducted and played percussion) with his group (pianist Eri Yamamoto, vocalist Fay Victor, Rob Brown, and a string ensemble led by Jason Kao Hwang), and a trio featuring Andrew Cyrille, Daniel Carter, and bassist Joe Morris. Another first for me was to hear Morris play guitar, which he did in a duet with drummer Warren Smith, performing an exquisite Morris original, “Violet.” Muhammad Ali and Darius Jones played a duo that was truly a great moment in music, expanding on the legacy of saxophone-and-drums playing started by John Coltrane and Rashied Ali. Poetry was read by the passionately verbal powerhouse Steve Dalachinsky, and memories of Ware were shared by his widow, Setsuko S. Ware; his business partners, Jimmy Katz and Steven Joerg; and finally his long-time pianist Matthew Shipp, who also wrote a heartfelt obituary of Ware for NewMusicBox last year. The final live performance (followed by a clip of Ware playing solo soprano saxophone) was by Ware’s rhythm section, who performed a medley of two Ware compositions, “Godspelized” and “Sentient Compassion.”

One thing acknowledged in the words spoken about Ware at his memorial was that his sound was highly personal and entirely idiomatic to the saxophone. That’s what struck me about Ware the first time I heard him: his sound. Raw. Big. Relentless. But accepting without being acquiescent. Like his elders, Albert Ayler and Archie Shepp, his sound was unlike what the world of “mainstream” music accepts as the “pure” sound of the saxophone, yet it was pure saxophone. His was a sound that, like his compositions, leaned away from the Western art music paradigm he had mastered. While researching today’s post, I ran across a schedule of music educators association conferences that will be held this month in Arizona, Florida, Georgia, Illinois, Indiana, Michigan, Missouri, and Oklahoma. Jazz is now included in the curriculum of many schools in these states, although until the 1970s and ’80s it mostly wasn’t. Still, it wasn’t until after 2000 that I saw a music professor teach the music of Ornette Coleman to a class. I wonder how long it will be before the music academy will tackle the music of David S. Ware.