Three Composers Selected to Receive Bush Artist Fellowships


Bush Foundation

On May 1, 2001, the Bush Foundation announced the winners of the 2001 Bush Artist Fellowships (BAF). Fellowships were awarded to 15 artists in the areas of music composition, film/video, literature, and scriptworks. Each fellow will receive $40,000 for a 12- to 18-month period.

The winners in music composition were Brent Michael Davids (Minneapolis); Anthony Gatto (Minneapolis); and Peter Ostroushko (Minneapolis). For a complete list of the winners, click here.

The artists selected to receive fellowships were chosen by national panels from a total of 396 applications. The members of the preliminary and final selection panels were all artists and arts professionals from outside Minnesota, North Dakota, South Dakota, and Wisconsin. For a complete list of music panelists, click here.

Begun in 1976, the Bush Artist Fellows Program provides artists with significant financial support that enables them to advance their work and further their contribution to their communities. Up to 15 fellowships are awarded annually to artists living in Minnesota, North Dakota, South Dakota, and the portion of western Wisconsin that lies within the Ninth Federal Reserve District. Awards are made in seven categories, which rotate on a two-year cycle, and include visual arts, two-dimensional; visual arts, three-dimensional; choreography/multimedia/performance art; literature; music composition; scriptworks; and film/video.

The BAF supports artists whose work reflects any of the region’s diverse geographic, racial, and aesthetic communities. Artists may be at any stage of their life’s work from early to mature. Fellows may use the award in a variety of ways – to explore new directions, continue work already in progress, or accomplish work not financially feasible otherwise. They may decide to take time for solitary work or reflection, engage in collaborative or community projects, embark on travel or research, or pursue any other activity that contributes to their lives as artists.

The Bush Foundation is a private, grant-making foundation with charitable purposes. Archibald Granville Bush, sales and general manager of the 3M Company, and his wife, Edyth, created the Bush Foundation in 1953. The Foundation makes grants to institutions in education, humanities and the arts, human services, and health, primarily in Minnesota, North Dakota, and South Dakota, and supports historically black private colleges and fully accredited tribally controlled colleges throughout the United States. In addition to the Bush Artist Fellowships, the Foundation also offers fellowship programs for individuals through its Leadership Fellows Program and Medical Fellows Program.

Curtis Curtis-Smith


C. Curtis-Smith

C. Curtis-Smith is a Washington-state native now teaching at Western Michigan University. He has received over a hundred grants, awards, and commissions in the course of his career, including a Guggenheim Fellowship, an award from the American Academy and Institute of Arts and Letters, the Koussevitsky Prize at Tanglewood, and 23 consecutive Standard Awards from the American Society of Composers, Authors, and Publishers.

Mr. Curtis-Smith submitted a set of Four Etudes, ranging, he says, from “dark (and) erratically capricious” to “bright (and) lilting.” The four etudes are extracted from a complete set of twelve, seven of which were premiered by pianist Lori Sims at Alice Tully Hall in September 2000.

The first etude (of both the Van Cliburn and the complete set) is called “Chords in Canon.” As the title suggests, the etude includes some strict canon, with chords used in place of a monophonic line. The canon begins at a fairly close time interval – three sixteenth notes – that is later compressed further, in a stretto-like section, into a single sixteenth note.

The second etude (in the Van Cliburn set) is marked Grave, and it is a study in “violent contrasts of dynamics and touch, and rapid shifts of register.” In certain passages, according to the composer, “three widely separated layers of sound are juggled simultaneously, resulting in orchestral sonorities.” To achieve these sonorities, precisely timed use of the damper pedal is crucial. Curtis-Smith even goes so far as to notate the pedaling rhythmically, sometimes down to the exact eighth or sixteenth note. “It’s a big piece, not so much in length as in orchestral breadth. It should be cataclysmic in its net effect,” Curtis-Smith noted.

The third etude is marked “Ironic.” It is a study in the independence of the hands. “When I was a little kid practicing scales, I got tired of practicing in rhythmic unison,” Curtis-Smith explained. “I tried practicing in [polyrhythms such as] two-against-three, and finally, in my experimentation, I tried keeping the left hand even, and speeding the right hand up and slowing it down in an irrational way. Sometimes I did the opposite. It took me all these years to put that concept into a piece.” Though the scales have disappeared, what remains from his childhood games is the practice of keeping the left hand absolutely steady while the right hand accelerates and decelerates.

In the fourth etude in the Van Cliburn set, Curtis-Smith transforms the rhythmic play of the previous etude into something “very bright and flowing.” “It is beautiful in its surface sound, and though all of the etudes can be thought of as tonal, this is the most overtly tonal of the set.” The etude is in B-flat major, with a middle section in D-flat. The etude focuses on the “independent rubato” of the right hand. While the speeding up and slowing down of the right hand in the third etude necessitated the use of “feathered beams,” the rubato in the fourth etude is subtle enough to be notated “normally” as two against three, nine against five, and so forth. “The fourth etude represents a very pleasant resolution of the harmonic and rhythmic tensions of the third,” Curtis-Smith stated.

Curtis-Smith has an extensive background as a pianist. He has appeared in recital at the National Gallery and the Phillips Collection in Washington, D.C. and has performed with the Seattle and Indianapolis Symphonies. Mr. Curtis-Smith has also premiered three of William Bolcom’s Pulitzer Prize-winning Twelve New Etudes. “My own pianistic abilities have affected my writing to a great extent,” he admits. The composer himself has recorded all twelve etudes, and the CD, on the Albany label, is due out in September.

The complete set of etudes, which lasts forty minutes, was written on a Faculty Research and Creative Activities and Support Fund grant from Western Michigan University. It is the first solo piano music that he has written since his left-hand piano concerto for Leon Fleisher ten years ago. “Because of commissions that always seemed to take the shape of chamber music or a symphonic piece, until the etudes I hadn’t had the opportunity to write solo piano music,” Curtis-Smith explained.

The composer thinks of the twelve etudes as a “suite of twelve movements, rather than a loose anthology.” This does not preclude the pianist from playing the etudes in smaller groups, however. The composer is well aware of the significance of the number twelve when it comes to piano etudes. “At first I was going to write six, I thought that would be enough of an accomplishment,” he laughed. “But eventually there seemed to be enough ideas for a seventh, and then an eighth. Finally there were twelve, and I liked that hooking back into tradition.”

Lowell Liebermann


Lowell Liebermann
Photo by Linda Harris

The Dallas Symphony Orchestra’s composer-in-residence, Lowell Liebermann, submitted a set of Three Impromptus, Op. 68. The Three Impromptus were written to commemorate the centennial of Yaddo. Stephen Hough gave the premiere at Alice Tully Hall on May 4, 2000. The title refers back to Schubert, according to Liebermann, because the pieces contain “a little bit of the spirit” of the older composer. “I use a kind of repetitive rhythmic idea as a taking-off point,” Liebermann explained, something that Schubert also did in his two sets of impromptus.

Though the Three Impromptus were written with Hough in mind, Liebermann claims that he never tailors his music “too specifically” to the abilities of specific performers. “I generally think of the ideal performer who can do anything,” he explained. The Impromptus are currently being recorded by David Korevaar as part of a two-CD set of all of Liebermann’s piano music due out on Bridge in the near future.

Other new works by Mr. Liebermann include Rhapsody on a Theme by Paganini, commissioned by the Indianapolis Symphony for conductor Raymond Leppard’s farewell concert in May 2001, and a score commissioned by Tokyo’s NHK Symphony to celebrate its 75th anniversary later this year.

Lowell Liebermann, who received his D.M.A. from The Juilliard School, made his Carnegie Recital Hall debut as a pianist at the age of sixteen, performing his Piano Sonata, Op. 1. In 1996, his opera based on Oscar Wilde’s The Picture of Dorian Gray met with critical and popular acclaim.

Mr. Liebermann’s Piano Concerto No. 2, commissioned by the Steinway Foundation as part of its 21st Century Piano Project to coincide with the American Symphony Orchestra League national conference, was given its 1992 premiere by Stephen Hough, conductor Mstislav Rostropovich and the National Symphony Orchestra.

James Mobberley


James Mobberley
Photo by Larry Levenson

James Mobberley composed Give ’em Hell! last year for the Harry S. Truman Library and Museum in Independence, Missouri. Pianist Robert Weirich gave the premiere performance on March 19, 2000 as part of a series celebrating music at the White House. Truman was well known as an amateur pianist and an enthusiastic music-lover; Mobberley explained that before he started to work, the personnel at the Museum “let him know what [the former President] liked and disliked.”

Though Mobberley felt that ultimately “it didn’t make any sense” to base the piece on the President’s musical tastes, he nonetheless made use of two elements that he thinks would have met with Truman’s approval: Impressionist-sounding music, and the song “I’m Just Wild About Harry.” While the piece starts from those two departure points, however, “it diverges pretty fast,” Mobberley emphasized. “I just wanted to write a piece that made good sense to me, that had the emotional shape and development that I like to put into pieces.” The moments of Impressionistic sound are “very fleeting,” he stated, and all he takes from the song are two notes, though these two notes are used throughout the piece.

Give ’em Hell! is in three sections. The first section, which starts with an “attention-getting” arpeggio figure, builds up to a climactic point that returns in the third section, when “all hell breaks loose.” In between: a “slow, chewy center,” according to Mobberley. Would Truman have liked the piece? “I’m assuming that he would have listened to it for a few minutes,” Mobberley laughed. “But the clusters at the end would have rotted his socks!”

A native of Pennsylvania, Mr. Mobberley received his master’s degree in composition at the University of North Carolina at Chapel Hill and went on to earn a doctorate at the Cleveland Institute before joining the faculty of the Conservatory of Music at the University of Missouri-Kansas City. He has also served as the Kansas City Symphony’s first artist-in-residence since 1991.

A composer of music for orchestra, chamber ensemble, theater, dance, film, and video, Mr. Mobberley at times combines electronic and computer elements with live performance. Mr. Mobberley’s many fellowships, grants, and awards include a Guggenheim Fellowship, the Rome Prize Fellowship, a Composer’s Fellowship from the National Endowment for the Arts, and the Lee Ettelson Composers Award.

Judith Lang Zaimont


Judith Lang Zaimont
Photo courtesy of the composer

Tennessee native Judith Lang Zaimont submitted a work entitled Impronta Digitale, an eight-and-a-half-minute perpetuum mobile in shifting compound meters. The tempo of the piece is marked 192 to the dotted eighth: “extremely fast,” according to the composer. A running pulse is interrupted only twice by “slow, dreamy music.”

Zaimont was inspired, in writing this movement, by three toccatas that she herself plays: Schumann’s Op. 7 Toccata, Prokofiev’s Toccata, and Ravel’s Toccata from Le Tombeau de Couperin. “I’ve embedded the features [of all three pieces] into the layout and technical requirements,” she explained.

The title – translated into English as “fingerprint” – refers both to technical aspects of the music and to the fact that it spotlights certain of Ms. Zaimont’s characteristic “fingerprint” sound-structures. “‘Fingerprints’ are multi-tone sonorities that are laid out fast across a large register,” Zaimont clarified. In Impronta Digitale, each fingerprint is laid out as a series of superposed chords, which, when the damper pedal is depressed as indicated, coalesce to form “one conglomerate sound.”

The digitale portion of the title refers to the “toccata aspect” of the piece. Most of the time, the composer commented, there are two different “strands” of music happening simultaneously. Adding to the technical difficulty of the piece are three sections of “hammered” chords divided between the hands.

In addition to standing alone as an independent composition, Impronta Digitale serves as the third movement of her 1999 Sonata for Piano Solo, which was cited as the most important piano piece of 1999 on Piano & Keyboard magazine’s 20th-century timeline. The Sonata for Piano Solo was first performed by Bradford Gown at the Phillips Collection in Washington, D.C. in May 2000.

Impronta Digitale has been selected for performance by one-third of the accepted competition entrants, including competitors from Russia, China, Japan, Korea, Italy and the United States. “I think it’s amazing,” Zaimont commented. “A fascinating aspect of this entire process has been imagining why it is that a piece appeals to a particular pianist. People with programs that are very technically oriented may choose something poetic in order to show another side of their artistry. If their program is high on poetry, they may be looking for something to balance that. And there’s the ‘main line’ pianist, fiercely accomplished technically, who will look for something else to demonstrate that.” The old way of approaching the new work at the Cliburn Competition – the single commission – implied that “many different kinds of pianists had to push themselves through one aperture.” The new approach, she mused, provides the pianists with “a series of parallel apertures.”

Ms. Zaimont, while born in Tennessee, grew up in New York and currently teaches at the University of Minnesota. She has served as a professor at Queens College (New York) and the Peabody Institute, and as Chair of the Department of Music at Adelphi University. Ms. Zaimont is also the creator and editor-in-chief of the critically acclaimed, award-winning book series The Musical Woman: An International Perspective.

Ms. Zaimont has composed over a hundred works for virtually every medium: opera, orchestra, chamber, vocal, choral, dance, film, and solo instrumental. Winner of a Guggenheim Fellowship, a Maryland State Arts Council Creative Fellowship, and grants from the N.E.A. and the American Composers Forum, Judith Lang Zaimont holds degrees from Queens College and Columbia University. She also studied orchestration in Paris with André Jolivet, on a Debussy Fellowship.

First Music 18 Commission Winners


Emily Doolittle

Princeton doctoral student Emily Doolittle recently finished a set of three commissioned pieces for Tafelmusik. One piece, for the full ensemble, is called green notes; another, for just the string section, is called falling still; a third, for viola d’amore and soprano, is called Virelais. After the three pieces are performed at the Scotia Music Festival (May 27-June 10, 2001), where Doolittle serves as Composer-in-Residence, she will turn her attention to the First Music Commission for the New York Youth Symphony.

Emily has already used her ASCAP Foundation Morton Gould Young Composer’s Award money to buy a new iBook laptop computer.


Anthony Cheung
Photo by Mei-Chi Cheung

Harvard freshman Anthony Cheung’s chamber opera AS IS: Arnold Schoenberg and Igor Stravinsky in L.A. was given its premiere at the Agassiz Theater on May 10, 2001 as part of “VI-2,” a project involving six undergraduate composers. The short chamber opera, for which Cheung wrote the libretto, centers around a fictitious meeting between the two composers. “Stravinsky and Vera visit Schoenberg and Gertrude at Schoenberg’s home – it’s pretty ridiculous,” Cheung explained. “They end up singing about painting and ping-pong.”

Another piece of Cheung’s, No Unnecessary Noise, was given its first performance by the Auros Group for New Music on April 29, 2001. The piece, which Cheung describes as “a polymetric fugue,” was inspired by a visit to New York where the composer saw signs warning of fines for making “unnecessary noise.”


John Kaefer
Photo by Lou Ouzer

“I am continually fascinated by the relationship between music and art,” writes composer John Kaefer. He feels that a form drawn from the visual arts – the mosaic – is reflected in his compositional style, in particular in his First Music 18 piece, entitled Mosaic. “The principles behind the mosaic can be translated into my compositional language. For example, the use of the pipe organ, along with an orchestra tutti, will project a sense of grandness and strength. On the other hand, small pointillistic details in the winds and strings will provide subtle changes in color.”

Mosaic will be a “concert opener,” but Kaefer asserts that he is trying to avoid the tried and true approach to the genre. In addition to making use of the Carnegie Hall pipe organ, Kaefer wants to use groups of players antiphonally around the hall. “I haven’t gotten permission to do that yet,” he laughed.

Kaefer is currently finishing a substantial four-movement piano concerto. The composer, who has played the piano since he was eight, claims that it is not a “typical Romantic concerto in the sense of the soloist versus the orchestra. There are soloistic parts, certainly, but the pianist also plays as part of the orchestra.” The piece will be approximately 25 minutes long. Kaefer is currently finishing his Master’s degree at Yale; this coming fall, he will begin studies at Juilliard with Robert Beaser as one of the C.V. Starr Doctoral Fellows.


Michael Klingbeil
Photo courtesy of the composer

Michael Klingbeil plans to write a chamber work for oboe, clarinet, piano, violin, and cello to fulfill the terms of his First Music 18 commission. “It’s a Pierrot ensemble with an oboe instead of a flute, and no percussion,” he explained. Klingbeil is keeping high school students in mind as he writes, but he doesn’t think that will have a profound effect on the final result. “I know that [the students] are going to be talented people,” he explained. “I want it to be something that they can get something out of, but I also want to be able to deal with the musical ideas I’m thinking of.”

The working title of Klingbeil’s piece is Defractions. At the moment, he is working out what he calls the “harmonic structure” of the piece, mapping out a trajectory, keeping in mind pitch areas and register. “I have rough ideas of how long I will stay in certain harmonic areas, but there is flexibility, depending on how the trajectory works itself out in the final composition.”

Klingbeil plans on using the ASCAP Foundation Morton Gould Young Composers Award money to attend various festivals and conferences to present his music.

Athena


Christopher Ariza
Photo by A. Ariza

In the fall of 1999, Christopher Ariza enrolled in a computer music course at NYU taught by Dr. Elizabeth Hoffman. The primary objective of the course was for the students to become familiar with two common computer music programs: Csound and Max. Ariza focused his time on developing algorithmic compositional systems in Max, eventually realizing that he needed a “more robust” programming language to take his explorations further.

[Two examples of Ariza’s work in Max can be downloaded from his web page. Guido’s Windchime requires MIDI to play interactively (although mp3s of two of Ariza’s own renditions are available); Ignota’s BeatBox uses MSP for real-time synthesis.]

Ariza decided that the answer to his problem was to write a new algorithmic “front end” for Csound. A front end is a program that makes life easier for the user. One function of most front ends for Csound, for instance, is to provide a graphic interface that allows you (the user) to click on buttons to generate code that otherwise you would have to type in by hand.

Writing a Csound file from scratch is both difficult and time-consuming, and so over the years composers and programmers have developed a number of front ends. Csound has been around for a while – it was created by Barry Vercoe in 1985 – and, as its name implies, it was written in the C language. Csound’s advantages lie in the complexity it can handle. Composers can program incredibly elaborate chunks of music, and they can also design their own incredibly elaborate instruments. The disadvantage to Csound is that it does not function particularly well in “real-time.” This is because the program sucks up a large amount of processor power in proportion to the complexity of the code it is handling.

Csound is generally split into two parts: a “score” file and an “orchestra” file. The score file supplies information necessary to schedule sound “events,” while the orchestra file contains the “instrument” data necessary to coax actual sound out of an output device. Front ends exist for both the score and the orchestra files. Athena functions as an algorithmic front end for the score file. By “algorithmic,” Ariza means that his program provides “shortcuts” specifically for composers who think of their music in terms of algorithms. Athena handles surface details such as rhythms, note choices, and dynamics, while still allowing the composer to make large-scale compositional decisions.
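As a toy illustration of what a score front end automates, here is a short Python sketch that generates Csound score “i” statements (instrument number, start time, duration, then p-fields – here an amplitude and a frequency) from a list of pitches. The instrument number and p-field layout are illustrative, not Athena’s:

```python
# A toy algorithmic score generator: each "i" statement is one Csound note
# event. The p-field layout (amplitude, frequency) is an assumption for
# demonstration; a real orchestra file defines what the fields mean.

def make_score(pitches_hz, start=0.0, dur=0.5, amp=10000, instr=1):
    """Return Csound score text: one note event per pitch, back to back."""
    lines = []
    t = start
    for hz in pitches_hz:
        lines.append(f"i{instr} {t:.2f} {dur:.2f} {amp} {hz:.2f}")
        t += dur
    return "\n".join(lines)

print(make_score([261.63, 329.63, 392.00]))   # C4, E4, G4, half a second each
```

Typing such statements by hand for thousands of events is exactly the drudgery a front end removes.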

Ariza started out writing his front end in C, but quickly abandoned it for the newer Python programming language. Python’s standard interpreter is itself written in C, but Python is a “higher-level” language. A “higher-level” language is essentially a powerful shorthand (and, of course, any computer language that uses letters rather than ‘0’s and ‘1’s is already a shorthand of sorts). Ariza likes Python because it is easy to learn, has a compact and intuitive syntax (Python code is generally one-tenth the length of the equivalent C code), and includes built-in features that take care of such important tasks as automatic memory management and garbage collection. Python is an open-source language that is free and highly portable, meaning that a program written in it will be accessible to a large number of people.

Perhaps the most important difference between C and Python, however, is that in Python, programmers can easily write programs that are “object oriented.” Object orientation is an approach to programming that combines “data” and “functions” into “objects” that are deployed in hierarchies. From a single object (or “class”) definition, you can create numerous independent, modifiable, fully functional copies (or “class instances”). Ariza compares an object to a “mini-machine inside the machine.” The advantage to such an approach is that it provides an efficient way to organize larger, more complex programs, and such programs are also easily modified and extended.
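The idea can be sketched in a few lines of Python; the Oscillator class and its detune method are invented here purely for illustration:

```python
# One class definition, many independent "mini-machines": each instance
# carries its own data and can be modified without touching the others.

class Oscillator:
    def __init__(self, freq, amp=1.0):
        self.freq = freq              # per-instance data
        self.amp = amp

    def detune(self, cents):          # per-instance behavior
        self.freq *= 2 ** (cents / 1200)

a = Oscillator(440.0)
b = Oscillator(440.0)                 # a second, fully independent copy
b.detune(100)                         # raise b by a semitone; a is untouched
print(a.freq, round(b.freq, 2))       # 440.0 466.16
```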

Athena is the first object-oriented algorithmic front-end to Csound written in Python. Cmask is an older algorithmic front end, and it has been ported to Python in a version called Pmask. Neither Cmask nor Pmask is object-oriented, however.

Athena also stands out from its colleagues in its dual function as a front end and a powerful set class utility. Athena users can easily take advantage of a huge set class (SC) database, a Python “module” (roughly a self-contained bundle of code and data – closer to a package than to a class in C++ or Java) containing 342 set classes (all dyads through nonachords) that can be toggled between their non-inverted (Tn) and inverted (Tn/I) forms. Each set class can reference its respective normal form, Forte index, Forte vector, Morris invariance vector, and all n-class subset vectors. (This module can actually be used in any Python program, and can be downloaded from Ariza’s website.)
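The database ships with these values precomputed, but the core computation behind it – finding the normal and prime form of a pitch-class set – fits in a few lines of Python. The sketch below follows Rahn’s left-packing convention; a handful of set classes come out differently under Forte’s:

```python
# Normal form: the rotation of the set packed most tightly to the left.
# Prime form (Tn/I): the better of the set and its inversion, transposed
# to begin on 0.

def normal_form(pcs):
    pcs = sorted(set(p % 12 for p in pcs))
    best = None
    for i in range(len(pcs)):
        rot = pcs[i:] + [p + 12 for p in pcs[:i]]
        # compare by total span first, then by intervals packed from the left
        key = tuple(p - rot[0] for p in reversed(rot))
        if best is None or key < best[0]:
            best = (key, [p % 12 for p in rot])
    return best[1]

def prime_form(pcs):
    nf = normal_form(pcs)
    a = [(p - nf[0]) % 12 for p in nf]
    ni = normal_form([(-p) % 12 for p in pcs])   # inverted form
    b = [(p - ni[0]) % 12 for p in ni]
    return min(a, b)

print(prime_form([0, 4, 7]))   # [0, 3, 7] -- the major/minor triad, Forte 3-11
```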

Ariza has written functions to accomplish standard set class transformations such as retrograding, rotating, and slicing, as well as twenty additional functions to calculate the degree of similarity between two set classes. Seven of these similarity-measuring functions can be used to analyze user-entered strings of set classes, called SinglePaths. These “Path Engines” can be used to analyze SinglePaths either in terms of adjacent SCs or in terms of a “reference” SC to every SC in a Path. Analysis results can be viewed as a SC similarity contour graph.

Below is an example of an SC similarity contour graph showing the results of an analysis of a SinglePath (4-23, 7-12, 3-6, 3-11A, 6-33A, 5-8) using David Lewin’s REL function to compare each of the set classes to a reference set class, 4-23.

Path path: Lewin REL analysis
Tn classification
reference SC: 4-23
sim range  |0(min)        (max)1|
1.00 4-23  |...................*|
0.47 7-12  |.........*..........|
0.32 3-6   |......*.............|
0.43 3-11A |........*...........|
0.64 6-33A |............*.......|
0.23 5-8   |....*...............|
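Lewin’s REL measures shared subset content and takes real machinery to implement; as a stand-in, cosine similarity between interval-class vectors is enough to show how a contour like the one above is produced. The sets and values below are illustrative and will not match REL’s numbers:

```python
from itertools import combinations
from math import sqrt

def icv(pcs):
    """Interval-class vector: counts of interval classes 1-6."""
    v = [0] * 6
    for a, b in combinations(sorted(set(p % 12 for p in pcs)), 2):
        v[min((b - a) % 12, (a - b) % 12) - 1] += 1
    return v

def similarity(x, y):
    """Cosine similarity of two interval-class vectors, in [0, 1]."""
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (sqrt(sum(a * a for a in x)) * sqrt(sum(b * b for b in y)))

ref = icv([0, 2, 5, 7])                      # 4-23, the reference set class
path = [[0, 2, 5, 7], [0, 1, 2, 3, 4, 5, 6], [0, 2, 7]]
contour = [round(similarity(icv(s), ref), 2) for s in path]
print(contour)                               # [1.0, 0.56, 0.96]
```

A plot of these values against the path positions is precisely an SC similarity contour graph.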

Ariza is working on a way to incorporate information on voice leading into Paths, as well, in collaboration with City University of New York Professor Joseph Straus. This capability would allow you to create a Path that would “pull towards” a final set class not only in terms of pitch-class motion (i.e. a C is a C no matter what octave it is in) but also in terms of actual pitch space, for example. Conversely, of course, you would also be able to analyze a user-entered Path in terms of its voice leading.

The seven Path Engines can also be used to generate MultiPaths. A MultiPath represents a more complicated nest of relationships than a mere Path. To create a MultiPath, you select a Path Engine, enter beginning and ending set classes (either as Forte numbers or as pitch-class sets), and specify a “similarity range,” expressed as a number between 0 and 1. The resulting collection of Paths will all have in common the same beginning and ending set class. Within each Path, adjacent set classes will be related, and every set class will be related to the common final set class. Such a collection of interrelated set classes could form a “network” of pitches for a complex composition, or provide you with a choice of Paths that meet certain criteria.
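A brute-force sketch of the MultiPath idea, with invented function names and a toy Jaccard measure standing in for a real Path Engine’s similarity function:

```python
from itertools import product

# Candidate middle set classes are tried exhaustively; a path is kept when
# every adjacent pair, and every set class against the final one, falls
# inside the requested similarity range. All names here are illustrative.

def multipath(start_sc, end_sc, candidates, length, sim, lo, hi):
    ok = lambda s: lo <= s <= hi
    paths = []
    for middle in product(candidates, repeat=length - 2):
        path = [start_sc, *middle, end_sc]
        if all(ok(sim(a, b)) for a, b in zip(path, path[1:])) and \
           all(ok(sim(sc, end_sc)) for sc in path[:-1]):
            paths.append(path)
    return paths

# A toy similarity (Jaccard on pitch-class content), just for demonstration:
jaccard = lambda a, b: len(set(a) & set(b)) / len(set(a) | set(b))

found = multipath((0, 2, 5, 7), (0, 2, 7),
                  [(0, 2, 4), (0, 2, 5), (0, 5, 7)], 3, jaccard, 0.45, 1.0)
print(len(found))   # 2 of the 3 candidate middles survive the range test
```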

An Athena Texture requires three pieces of information: a Path, a start time, and an end time. Each Texture represents a chunk of music, be it short or long, which is algorithmically constructed. Generic information on the Texture class is contained in a Python module called “texture_class.py.” Each new Texture module, whether it is “pre-fab” or programmed by the user, is considered a “subclass” because it “inherits” its basic ability to function from “texture_class.py.”

You don’t have to write new Texture modules every day, of course (unless you want to!). Thanks to the wonders of object-oriented programming, you can modify each new “instance” of a Texture module without altering the code of the source module. Each class instance can be specialized, allowing for the greatest possible mileage from both the subclass (for example, LineGroove.py) and the class definition (texture_class.py).
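A minimal sketch of that class/subclass/instance arrangement; the names, parameters, and event format below are invented for illustration and are not Athena’s actual API:

```python
import random

class Texture:
    """Base class: holds a Path and a time span; subclasses fill the span."""
    def __init__(self, path, start, end):
        self.path, self.start, self.end = path, start, end

    def events(self):
        raise NotImplementedError   # each subclass decides how to compose

class LineGroove(Texture):
    """Monophonic line: one randomly chosen pitch class per beat."""
    def __init__(self, path, start, end, beat=0.5, seed=None):
        super().__init__(path, start, end)
        self.beat = beat
        self.rng = random.Random(seed)

    def events(self):
        t, out = self.start, []
        span = self.end - self.start
        while t < self.end:
            # pick the set class active at this point along the Path
            sc = self.path[int((t - self.start) / span * len(self.path))]
            out.append((round(t, 2), self.rng.choice(sc)))
            t += self.beat
        return out

# Two instances of the same subclass, different parameters, no new code:
a = LineGroove(path=[[0, 2, 5, 7]], start=0, end=2, beat=0.5, seed=1)
b = LineGroove(path=[[0, 2, 5, 7]], start=0, end=2, beat=0.25, seed=2)
print(len(a.events()), len(b.events()))   # 4 8
```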

With each new Texture class instance, you can enter new values not only for the Path, but also for such attributes as register, rhythm, transposition, and such acoustic properties as stereo panning position and amplitude. You also have control over parameter fields for particular Csound instruments, allowing you to adjust timbre and other instrument-specific features.

Running two instances of the same texture with the parameters set to the same exact values can result in different musical realizations, depending on the degree of variability incorporated into those parameters. Running two different instances of the same texture, in which the values of certain parameters have been changed, can result in wildly different music – all without having to write any new Python code. You can even execute multiple texture instances at the same time, by themselves or in conjunction with instances of different textures.

Ariza’s mercuryB is an excellent example of a composition constructed entirely from different instances (31 to be exact) of a single texture, called LineGroove.py. LineGroove is a simple, monophonic texture that works with two user-supplied rhythms. Ariza emphasizes the fact that mercuryB, like any composition in Athena, is not a fixed entity, but rather an “open-work,” a set of potentialities that changes every time the program creates a new score. Click here to be linked to the RealAudio version of one such realization of mercuryB; click here to be linked to a higher fidelity mp3 download.

Why would you ever want to write a new Texture module? With a brand-spanking-new module, you could change the number of parameters available to yourself and other users for modification. You could also add special functions unique to the texture, or incorporate a special database – a translation table for mean-tone tuning, for example, or an Indian tala that would act as a rhythmic constant. These unique features would be present each time you created a new “instance” of that texture.

Each instance (or layered instances) of a Texture can be executed in Athena using a variety of Csound instruments. (As the project develops, users familiar with Csound code will be able to add their own instruments to the Athena Orchestra.) In creating a Texture instance, Athena generates a standard Csound score file, then renders an aiff file using the Csound “Perf” engine and the necessary orchestra file. The resulting sound file can stand alone as a complete piece, or it can be exported into a program like Max/MSP where you can manipulate it further (in real time, even) within the context of a larger piece.

Ariza’s software is remarkably open and flexible on several levels. To begin with, Athena is free and open-source. “I certainly understand the need for commercial software,” he stressed, “but the interchange of ideas and software [in academia] has allowed for a lot of growth.” Ariza is calling on the computer-savvy public to alert him to bugs in the program, and to contribute Texture modules and Csound instruments.

Ariza states that “Athena allows every user to customize the software to their needs.” Making such “customizability” possible is no mean feat, but it has been made considerably easier by an object-oriented programming language like Python. “What is most important [in Athena] is the breadth of possible musical realizations of the fundamental organizing parameter: the Path, or the succession of set classes.”

An extreme example of this breadth would be the interpretation of a Path solely as rhythm. “You could fix a Texture to produce only one frequency, and you could use the Path to partition the duration,” Ariza explained. A 4-element Path would partition a 40-second duration into four 10-second partitions, for instance. Aspects of the set class could even be used to create the rhythm itself; for instance, the Texture could be designed to interpret the set class integers as portions of an established “beat.” If “0” were set to equal one-sixteenth of a “beat,” then the set class (0,2,4) would sound as one sixteenth followed by an eighth followed by a quarter.
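Both rhythmic readings can be sketched directly. The function names are invented, and the mapping of set-class integers to durations is one way to read the example in the text, not Athena’s actual behavior:

```python
def partition(path, total):
    """An n-element Path splits a total duration into n equal spans."""
    return [total / len(path)] * len(path)

def integers_as_beats(sc, unit=1/16):
    """Map set-class integer n to n sixteenths, with 0 standing in for one."""
    return [(n or 1) * unit for n in sc]

print(partition([None] * 4, 40))      # [10.0, 10.0, 10.0, 10.0]
print(integers_as_beats([0, 2, 4]))   # sixteenth, eighth, quarter of a beat
```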

Ariza didn’t particularly set out with such flexibility in mind, however. Like Didkovsky with JMSL, Ariza has written Athena primarily with his own compositional work in mind. In the late 1990s, he was “spending a lot of time trying to get interesting ‘vertical’ harmonies from 4-part 12-tone combinatorial groups.” He was preoccupied with finding set class successions that would complete the aggregate through inversion and transposition, but was unhappy with the results. “The [resulting harmonies] were often redundant or Z-related,” Ariza stated.

Problems of twelve-tone harmony led Ariza to the idea of the Path, but in implementing it he “abandoned the twelve-tone mandate.” “Rather than serial or motive-based constructions, I’ve become interested in content groups (set classes).” Ariza describes a Path as a “set of changes.” This set of changes was “the broadest frame” that he could think of that would apply to a wide variety of music. “A Path could be used within the context of functional harmony, but since it is based on the whole gamut of set classes, there is no a priori dependency on [any harmonic system],” he explained.

Python, as an object-oriented language, encourages thoughts of generality first, specialty second. It is a far cry from an older computer language like LISP. LISP is also an object-oriented language (CLOS is the Common Lisp Object System), but there is a noticeable difference in the opacity of the programming syntax. “The syntax of LISP is cumbersome, hard to learn, and not too user friendly,” Ariza laughed. Of course, as difficult as it may be to learn, LISP is correspondingly powerful – it is often the language of choice for artificial intelligence research.

The advantage to newer languages like Java and Python is not that they are necessarily all-powerful, but rather that they are easy to learn, thus making it possible for a wider spectrum of composers to write “computer music” (or write acoustic music with the aid of a computer) without devoting their lives to it. Python, after all, allowed a composer with limited programming skills to write a complex and detailed program in less than a year’s time. The “ease” factor could also have the side effect of inspiring the non-composer – someone like me – to try his or her hand at creating fantastic sound worlds.

The current version of Athena runs on the Macintosh and includes a user interface and editor created with the Tk GUI toolkit. It can be downloaded from Ariza’s site. A helpful “Read Me” file that accompanies the program files provides instructions on how to download Csound, which is necessary to run Athena. Currently, it comes with two Texture modules, “LineGroove.py” and “ChordHits.py”.

Over the summer, Ariza will be hard at work on a command-line version that will be portable to any platform that runs Python. He is also planning three new Texture modules: one that creates four-part choral voice-leading; one that creates regular aggregates; and one that uses specific stochastic or chaotic equations to generate musical properties.

Mushroom


Eric Lyon
Photo by Paul Herman Reller

“Mushroom is a more extreme form of an approach I’ve always used in digital music – to make the computer do as much dull mechanical work as possible so I have more time to do flashy compositional work,” explained Dartmouth Assistant Professor Eric Lyon. “One of the nice things about working with lots of complex, random tools is that after you make a ton of sounds, you can totally forget how a specific sound was actually created.”

With Mushroom, a “sound-composting” program, Dr. Lyon has roped the computer into accomplishing the kind of complicated chance operations that would make John Cage proud. “My process work prior to Mushroom had been to do one thing after another to a sound until I got something that cracked me up. I [designed] Mushroom to string my processes together because I expected that at least part of the time, it would come up with interesting juxtapositions that I might not have thought of myself. The results are often better than you might expect.”

Lyon has written about the current dilemma of digital music composers. “There are a bewildering variety of sound-processing tools and techniques…that allow [you] to hone a sound to perfection.” The danger of the “perfection” approach, Lyon elaborates, is that it bypasses the accidents and experiments that may ultimately produce more interesting results. A sound processor that incorporates random elements is described as “oracular.” With an oracular processor like Mushroom, you have no parametric control over the transformations worked upon the sound. This means that you leave all choices about duration, volume, transposition, etc. up to the computer.

When you run Mushroom on your input sound, it “pumps it through a string of randomly selected processors,” Lyon stated. You specify the input sound, a desired number of output sounds, and the processing “level” – how many times you want your original sound to be cumulatively re-processed. Each of the derived sounds will be created from a different random sequence of processors. You may not be able to recognize your original sound by the time you have processed it a few times, but you may end up with a sound far more interesting.

Just as you have no control over the sequence of processors, you also have no say in what the processors actually do to your sound. The processor in question decides that by itself. Lyon gives as an example a process that ring-modulates a sound. You do not select the ring-modulation frequency; rather, the ring-modulator processor contains an algorithm that chooses the frequency for you.
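An “oracular” processor of the kind described above might look like the following sketch (hypothetical code, not Lyon's): the caller supplies only the input samples, and the processor itself picks the ring-modulation frequency from a range of its own choosing.

```python
import math
import random

# Sketch of an oracular ring modulator: the processor, not the user, chooses
# the modulation frequency. The frequency range here is an arbitrary assumption.
def oracular_ringmod(samples, sample_rate=44100):
    freq = random.uniform(50.0, 2000.0)  # chosen internally, never by the caller
    return [s * math.sin(2 * math.pi * freq * i / sample_rate)
            for i, s in enumerate(samples)]

out = oracular_ringmod([1.0] * 8)  # eight samples in, eight modulated samples out
```

Because the only interface is “sound in, sound out,” such a processor can be dropped anywhere in a random processing chain.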

The activity at each level of processing in Mushroom is archived in a database. All sounds derived directly from the input sound are assigned level 1. Lyon gives as an example a sound called “magic.snd.” If you request three output sounds for magic.snd, this is what will go into the database after the first layer of processing:

magic_G1_1.snd
magic_G1_2.snd
magic_G1_3.snd

If you request a second Mushroom run at level 1, Mushroom will randomly select a sound from all level 1 sounds, and will then make new sounds at level 2 (these will be archived as magic_G2_1.snd, magic_G2_2.snd, and so forth). This process-and-archive routine will repeat as many times as you specify (default is three).
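The naming scheme above follows directly from the source name, the generation level, and the output count. A small sketch (hypothetical code, not Mushroom's Perl) reproduces it:

```python
# Sketch of Mushroom's archive naming: derived sounds from "magic.snd" at
# generation g are named magic_G{g}_{n}.snd for n = 1..count.
def derived_names(source, level, count):
    stem, ext = source.rsplit(".", 1)
    return [f"{stem}_G{level}_{n}.{ext}" for n in range(1, count + 1)]

print(derived_names("magic.snd", 1, 3))
# ['magic_G1_1.snd', 'magic_G1_2.snd', 'magic_G1_3.snd']
```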

Once you have listened to the derived output sounds and found one that you like, you can “reconstitute” the sequence of processors that produced it. By calling a utility called Mushmimic, you can specify a derived sound, and request that Mushroom apply to a new input sound the sequence of processors that generated the derived sound. Sounds can be recreated at any level in the processing routine. The recreated sound can be saved as a Sun/NeXT sound file (“.au”).
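The reconstitution idea behind Mushmimic can be sketched as follows (hypothetical code, not Lyon's): if the chain of processors behind each derived sound is logged in the database, the same chain can be replayed, in order, on a fresh input. The archive contents and the toy processors here are invented for illustration.

```python
# Assumed log of which processors produced each derived sound.
archive = {"magic_G2_1.snd": ["rev1", "ringmod", "timestretch"]}

def mimic(derived_name, new_input, processors):
    """Replay the logged processor chain for derived_name on new_input."""
    sound = new_input
    for proc_name in archive[derived_name]:
        sound = processors[proc_name](sound)  # apply the chain in logged order
    return sound

# Toy processors standing in for real DSP: each just tags the sound name.
procs = {"rev1": lambda s: s + "+rev1",
         "ringmod": lambda s: s + "+ringmod",
         "timestretch": lambda s: s + "+timestretch"}
print(mimic("magic_G2_1.snd", "new.snd", procs))
# new.snd+rev1+ringmod+timestretch
```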

If you wanted to use your sound file as part of a larger piece of music, you would need to import it into some other mixing program like Max or Cubase. (Lyon has also written a stripped-down “real-time” version of Mushroom for Max/MSP.)

A good example of Lyon’s compositional work using Mushroom is his 1998 piece 1981. “With 1981 I just tossed all kinds of samples and synthetic sounds into Mushroom and then organized them into what struck me as a compelling narrative drive,” he commented. Some of the processors that he used included a voice synthesizer called Festival, a virtual drum machine called BashFest, and automatically controlled Csound instruments. Festival is the creation of Alan Black and Paul Taylor at the University of Edinburgh, while Lyon wrote Bashfest himself.

To mix 1981, Lyon used an experimental mixing program by Chris Penrose called MisterMixUp. “It’s basically a poor person’s ProTools,” Lyon laughed. Mixes in MMU are represented in plain text, allowing Lyon to easily write Perl scripts that algorithmically generate MMU mix files. To hear a sample of 1981, click here.

The processing of a single sound on progressively higher levels – what Lyon calls “hierarchical” processing – can generate profound musical relationships. The same input source can give birth to apparently disparate sounds. Though the listener may not be aware of it, these deep, hidden relationships between sounds could form part of the structure of a composition.

Like JMSL and Athena, Mushroom can be personalized depending on the preferences and technical capabilities of the composer/programmer. Sound processes can be added or deleted without affecting Mushroom’s basic ability to function. For more information on how this is possible, click here.

Lyon has implemented Mushroom both as a standalone Perl framework, and as a web installation that operates on purely synthetic sounds. The standalone version can use any sound as input. This version, which runs on Linux or any other Unix system, is impractical to download, however, because of the sheer number of supporting files involved: Mushroom coordinates approximately one hundred other programs.

The standalone version of Mushroom has no graphic user interface (GUI), and a basic knowledge of Perl is necessary to make any modifications to the program. You don’t need to know any Perl, however, to try out the <a href="http://arcana.dartmouth.edu/cgi-bin/eric/MUSHROOM/do_rand_shroom.cgi" target="_blank">web installation of Mushroom</a>. In the fully implemented version of the site, users will be able to choose which sound file to process and then process it simply by clicking buttons. Unlike the standalone version, which relies on a command-line interface, the web version requires no text entry.

Eric Lyon’s 1981

The Guts of Mushroom

Professor Lyon wrote Mushroom in Perl, a language developed in the late 1980s by Larry Wall, and a common choice for programmers who want to coordinate many different systems at the same time.

When Lyon first became “seriously involved” with computer music in 1985, he wrote in C, but he started looking at Perl in 1993. “It was immediately evident that Perl could really help me do some things I was already doing with much less restraint.” Perl is an excellent language for the creation of “shell scripts,” which Lyon uses to automate his work. A shell script is a file containing a list of commands to be executed.

Back in 1985, Lyon explained, “computers were so slow that you really had to send batch jobs if you had some moderately complex idea that you wanted to hear.” A batch job is a collection of scripts sent to the CPU of the computer “to do whenever it can get around to them.” “In the past, I would often have to wait a day or two for a batch job to finish, but with today’s faster computers, the most computation-intensive jobs I run rarely require more than twenty minutes.” A key advantage to shell scripts is that they execute in the background, allowing you, the user, to do other work at the same time.

In conceptualizing how Mushroom works as a computer program, it is helpful to think of it as an overseer of many smaller programs of different “nationalities.” Mushroom itself, on the highest level, is not that hard to understand. Central to the program is the idea that all sound processes are “created equal.” In other words, on the highest level, sound processes are reduced to their most basic components: input and output sounds. This kind of “generic” implementation ensures that all sound processes are interchangeable, regardless of the input device or its encoding language.

Mushroom scripts, written in Perl, are designed to operate on any input sound in a directory. The program starts by “taking stock” of how many and what kind of processors to run on the sound. The program first looks in the current directory for preference and processor files, and creates them if they are not found. If the user has specified the number of processors to run, that number will be contained in the preference file; the default is three. The processor file contains a list of available processes.

Once the preference and processor information has been loaded, Mushroom begins to create derived sounds. First an input sound is selected. If the run is at level 0 (i.e. the sound has not yet been processed) this will always be the user-specified original sound. At any higher level, an input sound is randomly selected from available sounds at the previous level. Next, a series of processes is selected from the processor list. Finally, the processors are called using Perl functions named after the processes.
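The per-level selection logic described above can be sketched in a few lines (hypothetical Python, not Mushroom's Perl): at level 0 the input is always the user-specified original sound; at any higher level, an input is drawn at random from the previous level's outputs, and a series of processes is drawn at random from the processor list.

```python
import random

# Sketch of Mushroom's input selection: level 0 always uses the original sound;
# higher levels draw randomly from the previous level's derived sounds.
def pick_input(level, original, sounds_by_level):
    if level == 0:
        return original
    return random.choice(sounds_by_level[level - 1])

# Sketch of process selection: draw a series of processors from the list.
# A default of three matches the preference-file default mentioned above.
def pick_processes(processor_list, count=3):
    return [random.choice(processor_list) for _ in range(count)]
```

Each selected name would then be dispatched to the correspondingly named Perl function, as in the “rev1” example that follows.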

As an example, Lyon gives a process called “rev1” that adds reverberation to an input sound. The Mushroom function (in Perl) that calls “rev1” is surprisingly short:

sub rev1_mproc {
local ($insnd,$outsnd) = @_;
`rev1.pl $insnd $outsnd`;
}

Mushroom knows nothing about the Perl script “rev1.pl” except that its two parameters are “name of input sound” ($insnd) and “name of output sound” ($outsnd). The script “rev1.pl” must exist in some directory where Perl can find it. The script for “rev1,” also in Perl, is somewhat more complicated:

#!/usr/local/bin/perl
$HOME = $ENV{"HOME"};
require "$HOME/PERL/libperl.pl";
($insnd,$outsnd,$dry,$tail,$gain) = @ARGV;
defined $outsnd || die "insnd outsnd [dry tail gain]\n";
#SET DEFAULTS IF UNDEFINED
$dry = $dry || .3;
$tail = $tail || 1.5;
$gain = $gain || 1.0;
# PATH OF THE Csound ORC ($CSDIR is assumed to be set in libperl.pl)
$proc = "$CSDIR/REVERB/rev1";
$nchans = &getchans( $insnd );
if($nchans == 2 ){
$instr = "i1";
} else {
$instr = "i2";
}
$snddur = &getdur( $insnd );
$dur = $snddur + $tail ;
$score = $proc . ".sco";
# CREATE Csound SCO
open( SCORE, ">$score");
printf SCORE "$instr 0 %.5f 1 0 %.5f %.5f %.5f .01\n",$dur, $gain, $dry, $snddur;
close( SCORE );
#RUN Csound TO GENERATE PROCESSED SOUND
`csio.pl $proc $insnd $outsnd`;

Comments in Perl (notes by the programmer that explain the program) are preceded by the “#” sign. If you look at the comments, you will notice that the script relies on other scripts and processes implemented in Csound, which is written in C, not Perl. This is nothing unusual, however. Sound processes in Mushroom can be implemented in any acoustic compiling program, driven by any language, provided that it does not require interactivity or communication through a graphic user interface. Multiple programs may even be used within a single process. “Perl brings them together,” Lyon has written, “and Mushroom harvests the results.”

The addition of sound processors to Mushroom is a relatively easy task. Most difficult is the creation of the new processing script, which requires knowledge of the acoustic compiler you are calling. (An example of this kind of script is the longer “rev1” script above). The processing script can only have two parameters: input and output sound file names. Once you have written the processing script, you add the name of that process to the processor list. Last, you add a “calling” function to Mushroom (an example of this would be the shorter “rev1” script above). The new processor would then be available to Mushroom for random insertion into a sound processing sequence.
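The two-parameter contract that makes this extensibility possible can be illustrated with a hypothetical Python analogue of the Perl scheme (not Lyon's code): every processor takes only an input and an output sound name, so registering a new one amounts to adding a calling function to a table.

```python
import subprocess

# Registry of available processors, analogous to Mushroom's processor list.
PROCESSORS = {}

def register(name):
    """Add a calling function to the processor table under the given name."""
    def wrap(fn):
        PROCESSORS[name] = fn
        return fn
    return wrap

@register("rev1")
def rev1(insnd, outsnd):
    # Shells out to an external processing script, as Mushroom's Perl
    # calling functions do; "rev1.pl" must be on the search path.
    subprocess.run(["rev1.pl", insnd, outsnd], check=True)
```

Because every entry in the table has the same two-argument shape, any registered processor can be inserted at random into a processing sequence.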

Winners

Film/Video
Shelli Ainsworth – Minneapolis, MN
Steven Matheson – St. Paul, MN
Garret Williams – Minneapolis, MN

Literature
Jonathan Brannen – St. Paul, MN
Sarah Fox – Minneapolis, MN
Maureen Gibbon – Plymouth, MN
Adrian Louis – Marshall, MN
Kevin McColley – Pinewood, MN
Dan O’Brien – Whitewood, SD
Sheila O’Connor – Minneapolis, MN
Bertrand Vogelweide – Richardton, ND

Music Composition
Brent Michael Davids – Minneapolis, MN
Anthony Gatto – Minneapolis, MN
Peter Ostroushko – Minneapolis, MN

Scriptworks
W. David Hancock – St. Peter, MN