The following is a transcription of a phone interview, subsequently edited by both Scott Johnson and Steven Ricks. Ricks’s questions are in bold, with Johnson’s responses in regular font.
So, how did you come to know about SEAMUS?
Mainly through my friend Joe Waters. Joe and I actually went to high school together and were in a rock band. Your typical teenage hippie band of the era. Yeah, it’s funny—we started bumping into each other again in recent years, and that’s where I heard of the organization. So what you’re mostly interested in are my encounters with electronics?
Yes, SEAMUS is definitely interested in the angle of electronics and how technology and electronic tools relate to composition. For example, how have these tools influenced your music? How are they used? We could pursue a discussion of gear and the actual nuts and bolts of how you create your music, but also aesthetic issues—how the tools influence the choices you make, etc.
As a “compositional” organization, or I should say an organization of composers, I think the readers and the members are definitely interested in the compositional angle, the aesthetic angle. So, what sorts of ideas drive you and your work, and how are they affected by the specific tools you use?
Well, one of my favorite sayings is, “The hand shapes itself to fit the tool.” I’m always interested in the Darwinian life of ideas and practices, and how they spread and move through the culture. It’s made me very aware of something that I think most musicians and composers understand to begin with: that you inherit most of your tools, you inherit most of what you get, and the great innovations that we talk about tend to actually be very small, single-digit-percentage tweakings or recombinations of things that already existed in the world or in human culture, seen from a new angle. In my case, I wound up being—oh, there’s no good word for it—a classical composer, a notation-oriented composer, a Western-tradition composer, whatever you want to call it. But the toolkit I first approached it with was that of a rock kid, playing guitar.
I actually moved to New York to be a visual artist, and was determined to give up music because I felt like there wouldn’t be any place for me in the post-classical world — an electric guitarist who didn’t know much about (or particularly like) Brahms. Well, I failed at giving up music, and one of the things that led me back into it was working with multitrack tape recorders and tape loops. I started doing it in installation pieces: visual pieces that had tape-based sound sources. In one case I had a hundred-foot tape loop running through two different tape decks at opposite ends of a room, constantly recording at one end and playing back at the other, so people would be hearing whatever they did sixty seconds ago. And as soon as I started using tape recorders my musical training reasserted itself. In addition to being a rock kid I’d taken undergraduate theory courses before I ran away to New York, so pretty soon there I was, doing music again. My first fully formed piece when I got back to it was John Somebody—a piece with layered electric guitars that used speech in a way that came partly from those installation art pieces.
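The timing of an installation like this is simple arithmetic: the delay equals the length of tape traveling from one deck's record head to the other deck's playback head, divided by the tape speed (the loop's extra footage is just slack in the path). The figures below are assumptions for illustration, not numbers from the interview.

```python
# Back-of-envelope timing for a two-deck tape-delay installation. The delay is
# the head-to-head tape path length divided by the tape speed. All figures
# here are assumed, not taken from the interview.

def tape_delay_seconds(path_inches: float, speed_ips: float) -> float:
    """Seconds between recording a sound and hearing it played back."""
    return path_inches / speed_ips

# A sixty-second delay at 15 inches per second needs 900 inches (75 feet)
# of tape between the heads; at 7.5 ips, only 450 inches (37.5 feet).
print(tape_delay_seconds(900, 15))    # 60.0
print(tape_delay_seconds(450, 7.5))   # 60.0
```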
I think it’s interesting that you played electric guitar, you played in rock bands, and you went to New York to give up music, but then you ended up devoting your life to a sort of classical art music that relies heavily on notation. It seems like an interesting twist and I wonder why that is? It seems like it could have gone a lot of other directions . . .
Yeah, it could have. And actually, if you listen to John Somebody you can hear the procedural transition to more traditional notation happen between the first half and the second half. I should describe this, because it was an interesting instance of what I was talking about—adapting yourself to the technology, and having your efforts structured by technology and whatever your cultural inheritance is. In the first half of the piece it’s all just me playing electric guitar, and banging on a hand drum as well, and I was able to assemble this directly on tape from nothing but sketches, mixing scores, and memory. Later on I added some winds, and of course then I had to deal with making a complete score and getting parts to players.
This was not alien to me. I did study music at the University of Wisconsin, even though I always felt a bit like an outsider. I was interested in music—but in a kind of music that I wasn’t completely sure existed. In high school I was listening to the 60’s predecessors of prog-rock bands—what Joe Waters and I played—like Procol Harum, Jimi Hendrix, and so forth. And on the other hand I had fallen in love with Stravinsky and studied jazz guitar, which is where I had my first exposure to more complex harmonies—not from studying common practice classical music. I went from rock, to learning a little harmony from jazz, and THEN I got interested in classical music and studied it more seriously. At the university I was in art school, and I lied to the music department, signing up as a music major, and got into the theory courses. By the time they’d caught up to me a couple months later I was probably the second or third hardest working person in the class and they didn’t have the heart to throw me out. This was the early 70’s, which was of course the extended 60’s, and things were a bit looser at that point.
Anyhow, back to “the way things went . . .”—this is where the question of the available materials influencing working methods comes into play. I was obviously influenced by rock and its various sounds and technologies, in which I include multitrack recording technology as a creative tool, not just a method of documentation. My first steps were completely indebted to Les Paul: one, for inventing the electric guitar; and two, for inventing multitrack tape decks. I liked the toolkit of popular music, but intellectually speaking, a lot of that music was unsatisfying. I have favorite bands still, but songs are songs, and rock bands tend to hit that four-minute mark and then they run out of something to say—not all of them, but most of them. There are now, as there were then, bands like Radiohead that are experimental, very clearly extending the boundaries of popular music, in the same way that Duke Ellington and the bebop guys were extending the early roots of jazz into the realm of art music. So I was there at the cusp . . . a rock and roll kid wanting more, but not really wanting to go back to the 19th century.
I had my moments of infatuation with high modernist composers like Stockhausen, but the fact is I did not completely take to that sound world. I have a critique of it that I probably don’t want to go into here, which has to do with the nature of the human brain. If you look at the world musics of almost every culture, usually what you have is a drone, a drum, and a melody, or two of those elements. In general the pitches used are not given equal importance. I think the 12-tone method was a sort of well-intentioned, utopian attempt to see to what extent you can reprogram how we hear. But it generally works best for those who have been trained to have it work. I think that our preference for the lower intervals of the harmonic series is deeply ingrained, and an artificial system that battles it rubs against the grain. Which is actually an effective tool when you want to induce tension or a sense of dislocation from the ordinary, but not so great for the sort of relations that music has always had to many other aspects of human experience.
So, back to the story here. In the late 70’s I came drifting back into music with a background that mixed rock with some self-induced classical training. The world was still not very welcoming to someone with my sensibilities, because most of the serious composition was what I call “High Modernism”: not just strict serialism, but a larger grouping of related styles, which at that time was a triumphant and self-confident musical ideology.
I moved to downtown New York and all of a sudden I was in a very experimental, do-it-yourself scene, and I did feel welcome, and I started to act on those rock-inflected, classical-plus-American-populist impulses. This was right at the point when multitrack tape technology had begun producing relatively cheap and available home-sized tape recorders, and so the first thing I did when returning to music was to sit in a room alone and work with that new toolkit. Viewed from the world of contemporary technology, my process seems slightly deranged. I would layer up eight-channel stacks of synchronized concrète material by copying in from mono loops on a smaller deck, using a variable speed controller to manually achieve sync with the tracks already on the big deck. Then I would choose a region of the 8-channel tape where I’d gotten the most perfect sync, and cut a big multitrack loop. Next I would make a stereo mix from that loop, creating the actual timeline of a piece, using a manual mixing board with a graphic mixing score and numerous practice runs.
It was like carving a sculpture from a solid block, and the results resemble the layering in minimalism or techno dance music. I literally had to do finger exercises, turning buttons on and off, moving faders up and down, and practicing little mix segments — which of course nowadays you can program in ProTools and the like, and BANG, it’s the easiest thing in the world. But it was quite laborious at the time. And then finally I would run that stereo mix back onto 2 channels of the eight-track, and fill in the rest of the tracks with an instrumental score.
I once spent a month making a 26-foot tape loop which was just the melody of a woman laughing, edited together from fragments of a source tape; this was “Involuntary Song no. 3” in John Somebody. If I remember correctly, it was 2 5/8 inches to a quarter note, and it took me a month to make this thing, trimming little sixteenth-inch slivers to make the beats all line up. The reason I had to be so precise is that the long melodies had to synchronize with another six channels of individual eighth-note laughs on six pitches, each copied in from its own mono loop, from which I made an accompaniment of “ha’s” with moving harmonies, by doing another real-time, 6-channel manual mixdown. This accompaniment was then synchronized with that 26-foot melodic tape loop to create a long, tonally harmonized structure. It was so labor-intensive! Despite being machine-dependent, and clearly artificial, this concrète work was very different from electronic music. There weren’t any synthesized sounds at all, and I never seriously explored synthesizers. After doing what I describe above, I would think about learning a new technology, and I thought “When exactly during any of these learning processes was I going to have time to write any music?”
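These splicing tolerances can be checked with back-of-envelope arithmetic. The tape speed isn't stated in the interview, so both common reel-to-reel speeds (7.5 and 15 inches per second) are tried here as assumptions:

```python
# Back-of-envelope arithmetic for the 26-foot laugh-melody loop, at the two
# common reel-to-reel speeds (the actual speed used is not stated).

LOOP_FEET = 26
QUARTER_INCHES = 2 + 5 / 8    # 2 5/8 inches of tape per quarter note
SPLICE_INCHES = 1 / 16        # the splicing tolerance mentioned above

for ips in (7.5, 15.0):
    quarter_sec = QUARTER_INCHES / ips        # duration of one quarter note
    tempo_bpm = 60 / quarter_sec
    loop_sec = LOOP_FEET * 12 / ips           # duration of the whole loop
    splice_ms = 1000 * SPLICE_INCHES / ips    # timing error of a 1/16" splice
    print(f"{ips} ips: quarter {quarter_sec:.3f} s, tempo {tempo_bpm:.0f} bpm, "
          f"loop {loop_sec:.1f} s, splice error {splice_ms:.1f} ms")
```

At 7.5 ips this works out to roughly 171 bpm, about 42 seconds of loop, and about 8 milliseconds per sixteenth-inch splice; at 15 ips the loop halves and the splice error drops to about 4 milliseconds, which suggests why the trimming had to be so precise.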
Right—well, I think concrète is a very established tradition and a very strong thread in the history and evolution of electronic music, and it seems like composers often lean one way or the other. That’s oversimplifying it, obviously, since people use recorded sounds and also use synthesis, but people can be seduced by one or the other. The natural characteristics of the acoustic world and the approaches/techniques of concrète are still a vital thing, and a lot of people are really interested in them. The tools have evolved like you say—rather than magnetic tape, the computer and the digital realm have really opened up new opportunities and made things simpler in some ways . . .
. . . and in the course of getting much simpler to use, it has allowed me to get more complex musically. Right now I’m working on a piece for large ensemble with the voice of philosopher Daniel Dennett. There is one movement that has two full keyboards of speech samples. That first section of John Somebody (which is what most people think of when they think of John Somebody) is where I first invented that speech transcription technique. And that movement is built entirely out of four little recorded phrases. Now, I can take a phrase, I can take a hundred phrases, and analyze all the pitches and rhythms as I did at the very beginning with that initial insight — but I can easily make polyphony with them, synchronize them rhythmically, tune individual syllables if I want (I almost never do that, by the way—I kind of like keeping the in-between-ness of natural speech). But it’s so much easier to work with the media now, and I’m actually able to get involved with some rather complex text and ideas in my current pieces, rather than being restricted to a low syllable count because everything was so labor-intensive.
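The pitch-analysis step described here can be sketched generically: estimate a voiced snippet's fundamental and snap it to the nearest equal-tempered note. This is a minimal illustration of the general idea, not Johnson's actual tool or workflow.

```python
import numpy as np

# Generic sketch of speech-pitch transcription: autocorrelation pitch
# estimate, then quantization to the nearest equal-tempered note.
# Illustration only, not the composer's actual software.

def estimate_f0(snippet: np.ndarray, sr: int, fmin=60.0, fmax=400.0) -> float:
    """Crude autocorrelation pitch estimate, limited to a speech-like range."""
    x = snippet - snippet.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

def nearest_note(f0: float) -> tuple:
    """Quantize a frequency to the nearest note name and MIDI number."""
    names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    midi = int(round(69 + 12 * np.log2(f0 / 440.0)))
    return names[midi % 12], midi

sr = 8000
t = np.arange(sr // 2) / sr                  # half a second of test signal
snippet = np.sin(2 * np.pi * 196.0 * t)      # 196 Hz, i.e. roughly G3
print(nearest_note(estimate_f0(snippet, sr)))   # ('G', 55)
```

Real speech, of course, wobbles within each syllable, which is exactly the "in-between-ness" he prefers not to tune away.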
In pieces like Americans, or How It Happens, the big piece I did for the Kronos Quartet in the 90’s, I developed the sampler technique that I still use. I wanted to put some basic electronic functions on the same platform as the instrumental notation, so each speech sample has a key assignment, triggered from a dedicated staff in Finale. I write the stereo mix into the score, with left, center, and right staves, and I often use little micro-anticipation/delay arrows that allow me to place an individual sample in more than one stereo location with slight time offsets — this creates new locations in the stereo spectrum, just as our ears use time delays to pinpoint sound sources. The performers have to work with click tracks, because if the tempo or sample trigger is even slightly off the whole thing can fall apart — and since the recorded speech I’m working with is “natural” and irregular, even a machine-perfect sync sounds loose. It’s sort of a pain to perform with a click track, but it’s the only way to get this effect. Polyphony is extremely difficult because these speech phrases are full of little irregularities. I make them sound as if they were rational, in time, and in tune. But they’re not. It’s all fakery! But that’s always been the idea of concrete — it suggests something natural, but it’s manipulated. My biggest interest was in the human element of speech, and a lot of that came from an earlier musical experience—the call and response of blues. When I was in college there was a bar that had blues players coming up from Chicago, and of course there was a lot of call and response going on in that American electric blues style.
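The micro-anticipation/delay idea rests on the precedence (Haas) effect: when the same signal reaches the two channels a few milliseconds apart, listeners localize toward the earlier arrival. A hypothetical sketch of placing one mono sample across two channels with a small offset (the function and parameters are illustrative, not part of his Finale setup):

```python
import numpy as np

# Hypothetical sketch of micro-anticipation/delay: the same mono sample in
# both channels, one delayed a few milliseconds. The precedence (Haas) effect
# pulls the perceived image toward the earlier channel, giving stereo
# positions that plain level panning alone does not.

def place(sample: np.ndarray, sr: int, lead: str = "L",
          offset_ms: float = 3.0) -> np.ndarray:
    """Return an (N + delay, 2) stereo buffer; the lagging channel starts late."""
    delay = int(sr * offset_ms / 1000)
    out = np.zeros((len(sample) + delay, 2))
    early_ch, late_ch = (0, 1) if lead == "L" else (1, 0)
    out[:len(sample), early_ch] = sample
    out[delay:delay + len(sample), late_ch] = sample
    return out

sr = 44100
click = np.ones(100)                      # a dummy 100-sample "sample"
stereo = place(click, sr, lead="L")       # image pulled toward the left
```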
What was the name of the bar?
. . . the “Nitty Gritty”. . .
. . . and that was in Madison?
It was in Madison, yeah . . . I played there with Joe [Waters] and our band a number of times, but I also played there with other groups, including a weekly gig with a little Sunday night house band. The call and response in that music is something that really struck me. I always think of an old Jimi Hendrix song called “Gypsy Eyes” where he sings unison with his guitar, like players of earlier generations.
When I came up with this idea of transcribing speech pitches and melodies for instruments there were really three influences. The idea to have the instruments imitate the voice came from the call and response of blues; for the sound of looped and layered voices, the predecessor was the vocal pieces of Steve Reich in the 60’s. And finally, not so much in terms of the sounds but in terms of my specific instrumental technique, Messiaen’s idea of transcribing bird song was extremely influential.
I was living in New York in the mid 70’s, in a loft on the Bowery two blocks away from CBGB’s, and I clearly remember the day I came up with this transcription technique. I had been recording friends to get the speech material for some of these installation pieces, these conceptual art pieces I was doing, and I came home with a brief recording of a painter friend, Judy Rifka. I had her call up someone she knew and just kind of have an everyday conversation, and I recorded only her side of it. At one point she asked, “You know who’s in New York? Remember that guy? ‘J-John Somebody’? The, he was a, he was sort of a…” To this day I have no idea who “John Somebody” was, or what he was sort of like (years later, Judy couldn’t remember).
I went home, had a listen, and the pitches were so clear. I picked up a guitar and played it, there was this implied 5th of a triad, then the major 3rd, and then a chromatic crawl down to the root. And the pitches of the succeeding phrases suggested some big rock and roll power chords. From an
E down to a C# minor, and then a C and a D and back to E . . . a little like the progression in “Stairway to Heaven”. The humor of pompous power chords under a phrase about the perfect nobody was irresistible. And finally there was an eminently loopable phrase like a syncopated guitar groove. So that’s what started John Somebody, just transcribing those four fragments. I remember on that day I thought of Messiaen, and the blues bar, and I literally picked up the guitar and did the blues imitation thing. I’d also been in the habit of playing around with tape recorders and loops as well, which as I said goes back to both Steve’s pieces and the conceptual art of the time, so the whole idea just clicked into place.
So I was talking before about how what we have inherited defines what we do, and we make small alterations. Ideas get passed around, and this particular idea is an interesting case because you can see how all these various sources came together when I came up with this transcription idea. Steve Reich never put instruments to vocal loops until after he heard John Somebody—a couple of years later he did Different Trains. So in a way he was, in my mind, in the same position regarding me as Manet was regarding the Impressionists. Manet influenced the Impressionists, but then after he saw the Impressionists, it changed his style.
Steve has continued to use this technique, as has Jacob ter Veldhuis (Jacob TV), a friend of mine from Holland who does a lot of speech sampling pieces. Jacob is in love with American pop and popular culture so his pieces are full of American pop culture jokes. And my transcription technique has been useful for a couple of other people, but it’s just a small minority interest, because it is a bit slow and labor-intensive, and in a way you’re trapped by these found pitches. That’s actually one of the things that appealed to me at the beginning and still does, because it forces me to make harmonic decisions that I might not have otherwise made.
With all of this transcribing of speech and the human voice, do you find that the material falls into any consistent harmonies, scales, or patterns?
Can you articulate some of them?
Well, one thing in particular that’s kind of universal is that people tend to have a pedal point . . .
So they’ll keep going back to a sort of tonic or reference tone . . . ?
They’ll come to rest on a pedal point that is usually within a major second—if you really want to hear it, listen to a newscaster or a preacher. Perfect examples. They get very sing-songy, and that’s part of the hypnotism, part of the creation of this sort of aura that being in this room at this moment is special. It becomes ritualistic—a musical return to a pedal point. I think this is one of the similarities between speech and music that stems from similarities in their evolutionary tracks. It’s as if they are two branches that stem from the same area of the brain but then went off in different directions. Of course speech and music have different purposes: one is to give us information, the other is basically emotional, at least that’s what I think music is, or started out as. But they touch upon each other, and so if you listen to people speaking, that pedal point serves the function of the drone. It’s the reference point, the point against which other pitches or other phrases are measured, and that’s why when somebody gets excited, or when somebody’s really not feeling ok, the distance they travel from their pedal point indicates how they’re feeling. Think of our connotations with the words “high” and “low,” feeling up, feeling down, etc. When you’re feeling up, your pitch actually goes up, when you’re feeling down your pitch as well as your dynamics actually stay down, usually within a narrowed range. I think that these metaphorical functions lie somewhere way back at the root of our brains—as we became human, as we evolved into what we are. And I think that they’re expressed both in speech and in music.
One of the things we say about a particularly eloquent instrumentalist is that they’re making their instrument “speak.” And then we criticize performers when they don’t make their instrument warm enough or “human” enough. I think we just make these kind of comparisons and associations and similes without thinking—it’s built in. Now of course people don’t speak in scales, with some exceptions . . . this guy I’m working with right now, the philosopher Daniel Dennett, probably has the most interesting melodically-generative voice I’ve ever worked on. He does a lot of public speaking, so he has a very wide range of pitches and means of expression. For example, when he uses words that involve tension, or suspense, or uncertainty, he will literally use a tritone! It’s really startling the way that our unconscious, or even conscious but traditional associations with pitches show up naturally. Tritones are one startling thing, large leaps are another startling thing. People tend to make them when they are trying to express excitement or evoke excitement. I think there are real similarities between the melodic shapes of speech and music, and what those melodic shapes mean and connote.
And you might be surprised that in some cases it actually gets down to intervals. For example, a resolution going down a fifth. Like when people say something like “I think this, therefore that,” and they’ll actually descend a fifth from the “this” to the “that.” People will often descend, roughly, fifth-ish, to express certainty.
So that descending fifth is a cadential formula or pattern, even in speech?
Yes, a cadential final point. I run into this all the time. This is why I think some of our musical predilections are not really up to us, not really a matter of choice. Now this is not to say that dissonance is bad or unnatural. What it says is that dissonance connotes uncertainty, or fear, or agitation, and it’s easier to see this when you look at the use of musical dissonance in popular culture, for example in horror movies. That’s exactly where it gets used. It does not get used for lullabies. It gets used for the guy with the machete.
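The descending-fifth observation is measurable from a pitch track: the interval between two frequencies, in equal-tempered semitones, is 12 times the base-2 logarithm of their ratio. The frequencies below are hypothetical, purely to illustrate the arithmetic.

```python
import math

# Measuring a spoken "cadence" from a pitch track. The interval from one
# frequency to another, in semitones, is 12 * log2(f_end / f_start).
# The example frequencies are hypothetical.

def interval_semitones(f_start: float, f_end: float) -> float:
    """Signed interval from f_start to f_end in equal-tempered semitones."""
    return 12 * math.log2(f_end / f_start)

# A voice dropping from about 180 Hz on "this" to about 120 Hz on "that"
# falls roughly seven semitones, i.e. close to a perfect fifth down.
print(round(interval_semitones(180.0, 120.0)))   # -7
```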
Right. So you mentioned working with philosopher Daniel Dennett and his recorded speech—is this a new piece? Perhaps you can describe in some kind of detail your compositional process. How do you get from coming across a philosopher that you’re interested in, to the point of recording him or identifying recordings you want to use, and then how does that eventually become the composition?
The techniques that I use on these speech sampling pieces have been basically stable since I worked out the way I do this in the early ’90s. I was working on the piece called How It Happens, an hour-long piece written for the Kronos Quartet, which unfortunately no one’s ever heard in its entirety because only about half of it is recorded, and movements are on three different CDs. Anyway, this piece with Daniel Dennett, Mind Out Of Matter, is really a direct descendant of that piece. The technique I use is the same regardless of the source material.
I read a book of Daniel Dennett’s called Darwin’s Dangerous Idea: Evolution and the Meanings of Life in the ‘90s, and it actually helped me put in place a lot of the ideas I’ve had about the evolution of music. I probably shouldn’t even begin, this is a very long story. In my lecture at BYU I talked about this—essentially these are ideas about memes, as in Richard Dawkins’ ideas of memes, and how ideas transmit themselves through cultures — AND, how they do this, as do genes, for their own good, and not necessarily for our good. And then we, as a culture, choose the ones that seem to work for us, or are infected by ones that harm us. When I read Darwin’s Dangerous Idea, all of a sudden a lot of the ideas I’ve had about the evolution of musical style were put into context — culturally and biologically. This helped inspire me to write the only really big piece of prose writing I’ve done. It’s a 15,000-word essay called “The Counterpoint of Species,” sort of a pun on species counterpoint, and is an exploration of the idea that musical styles are like biological species, that they evolve in the ecosystem of individual human minds and cultures, and ideas are passed along like viruses, and are in fact evolving Darwinian entities, like genes. Anyway, I wrote a fairly large essay about this that John Zorn put in a book called Arcana, which is a collection of musicians writing about music, and I signed and sent a copy of it to this Daniel Dennett. And much to my surprise he wrote back! And he said “Send me five copies to send to my colleagues,” and I felt rather like the Scarecrow in the Wizard of Oz, when the wizard says “You already have a brain.”
So I got in touch with him, and listened to a lecture of his, and found out he has this wonderful, animated speaking voice. He has sung choral music, and it’s really amazing, he often kind of arpeggiates triads over the course of speaking — totally unconsciously.
. . . Wow! . . .
. . . and there are lots of thirds and fourths and fifths and sixths, and it’s not just the kind of monotone that people often will do with their voice.
So the materials for this piece are a lecture that he was giving, actually a talk based on one of his books called Breaking the Spell: Religion as a Natural Phenomenon, which is an analysis of religion’s origins and evolution, and how he feels that it’s something that grows naturally out of the structure of the human brain and human culture. Anyhow, a lot of the material is from that. Then I went up to Boston and did hours of interviews myself.
And how do you record it? What do you take with you?
I just took a microphone and an interface and a laptop, nothing to it.
Well, what microphone?
I don’t even remember . . .
. . . so it’s not that important to you?
I borrowed a decent microphone from a friend of mine with a recording studio . . .
. . . so ultimately you would use these samples in the electronics part of the piece, right?
The Dennett lecture, I have no idea what they recorded with—it was a guy who actually has a recording studio and had a nice mic. I don’t even own a good mic — I mean, I might record somebody once every few years. I don’t do this very often.
This Daniel Dennett piece is a large, evening-length piece, and I’ve put several years of work into gathering and editing these recordings. They’re reasonable quality, but I’ve worked with trashy quality too—in the Kronos piece, there are some phone interviews that were broadcast on NPR, so I have phone voices. Sometimes I will take the voices I have and purposely degrade them, or equalize them to extremes of highs or extremes of mids, or subject them to other kinds of processing, chorusing or something like that. Anybody who’s worked with musique concrète or with electric guitars will know that sometimes less-than-optimal equipment will create an interesting sound. I mean, think of all the guitar junkies and their tube amps—that’s all about distortion.
I then take it into ProTools and basically, mostly, just play these samples back unchanged. And when I mix them, I mix them like it’s a normal recording session, and I use a little reverb, or don’t use a little reverb if that’s the sound I want, etc. I use pretty simple, basic audio recording techniques and tools.
The only time I’ve ever used a synthesizer was in the late ’80s–early ’90s, in Patty Hearst, and in Rock Paper Scissors, which was pretty much the only big piece I have with synth sounds. I tended to write for very standard types of sounds that I could describe verbally in the score, with the assumption that the platforms that I was using at the time—you know, the Rolands and Yamahas and stuff that were around in 1992—were not going to be around forever and somebody would eventually have to recreate the sounds on a different system.
I actually want to recreate them at some point—I just have not had time to do it. But you know, frankly, since then I have not done any synthesis, because of what a problem it is for reproduction. Nobody performs these pieces because it’s a chore to get the gear together, and there aren’t ensembles that have this gear on hand—all the standing ensembles tend to be acoustic instruments, still. It’s unfortunate that in a very practical way the world has pushed me away from spending more time with electronics. When I was doing these pieces I would maintain my own ensemble, I would give my own performances, and I would tour with these works and my gear. But I rarely perform anymore.
I did a soundtrack to a movie called Patty Hearst (1988), which is my one and only film score, which has some synthesis in it. I was working at Philip Glass’s studio at the time, and Philip had this whole machine, a studio support system—very skilled, and very efficient, with synthesizer specialists, a whole workshop, like a Renaissance painter. I made use of that, and even came up with one interesting synth use. But when it comes down to just me in a room, I find that I have a little bit of a resistance to learning new software and machines. I prefer a sort of immediacy; I like to sit down and get rolling compositionally.
When I was playing a lot of guitar live I used an Eventide harmonizer and created a piece called Five Movements—a whole half hour (in five movements) of solo guitar run through an Eventide harmonizer. The early Eventide was a pitch-shifting device, but also if you shifted down you would get pulsed behavior with a timed delay. This piece is similar to the delay pieces many people do, but in this case it’s a delay plus pitch shift, and I worked with that and built these pieces around a very particular piece of machinery. A couple of people have performed these pieces, but I badly need to sit down and get a laptop application and recreate a virtual version of these old harmonizer patches, or have somebody else do it, so that the pieces can get performed. Because in addition to that electronic aspect there was a whole bunch of foot pedals and a complicated signal path, and once I stopped personally flying the gear to performances, the pieces stopped getting performed.
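The pulsed, descending behavior he describes, a downward pitch shift regenerating through a timed delay, can be modeled crudely: each delayed repeat is shifted again, so one note becomes a falling pulse train. The sketch below uses naive resampling for the shift (which also stretches duration, unlike the real hardware) and is purely an illustration, not an emulation of the Eventide patches.

```python
import numpy as np

# Crude model of a pitch shifter regenerating through a delay: each repeat
# is shifted down again and attenuated, producing a descending pulse train.
# Naive resampling stands in for the real pitch shifter; illustration only.

def shift_down(x: np.ndarray, semitones: float) -> np.ndarray:
    """Pitch-shift down by resampling with linear interpolation."""
    ratio = 2 ** (-semitones / 12)              # slower read rate = lower pitch
    idx = np.arange(0, len(x) - 1, ratio)
    return np.interp(idx, np.arange(len(x)), x)

def regenerate(x: np.ndarray, sr: int, delay_s: float = 0.5,
               semitones: float = 2.0, repeats: int = 4,
               feedback: float = 0.6) -> np.ndarray:
    """Sum x with successively delayed, re-shifted, attenuated copies."""
    voices = [x]
    for _ in range(repeats):
        voices.append(feedback * shift_down(voices[-1], semitones))
    step = int(sr * delay_s)
    out = np.zeros(max(i * step + len(v) for i, v in enumerate(voices)))
    for i, v in enumerate(voices):
        out[i * step : i * step + len(v)] += v
    return out

sr = 8000
note = np.sin(2 * np.pi * 220.0 * np.arange(sr) / sr)   # one second of A3
cascade = regenerate(note, sr)    # A3, then two semitones lower, and so on
```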
And since that time, even with my electric guitar stuff, I’ve tended to do very little elaborate electronic processing. My rule became: I want to write for whatever you can go down to the guitar shop and get tomorrow afternoon, without really knowing very much. I began to treat the electric guitar less as an electronic instrument and more as an ordinary instrument that happens to be plugged in. Now this is very unsophisticated, and I know that writing works for a live performer interacting with electronics is a whole world that has opened up, and I actually would like to touch on it at some point. But I don’t know exactly when that’s going to happen. There’s so much to do and so little time.
Well, absolutely. I often share the following quote by Luciano Berio with my students:
“Composers who work with new means in electronic music (computers included) tend to place their pasts in parentheses . . . Sometimes, one has the impression that they let themselves be chosen by the new technologies without being able to establish, dialectically, a real rapport and a true need for them. We can in fact pass indifferently from one system to another, from one computer to another—they are ever faster, more sophisticated, more powerful, and ever smaller—without really using musically that which was there.”
. . . and that’s always on my mind in regards to technology. There is a tension between keeping current with equipment or tools and the amount of energy and time that takes versus simply using what you know and trying to get the most artistic results out of it, because it’s immediate, and because you can dig right in and do what you know how to do and do it well . . .
Right. Electronics alter that traditional focus of the Western compositional tradition. I’ve wound up making a commitment to scored compositions, so that’s affected my relationship to electronic or electric elements. I ended up gradually simplifying them to avoid difficulties with performance and evolving platforms—a practical tendency, not an ideological position. Whatever choices people make, electronics change the game, and create a different environment from the inheritances of the various acoustic traditions.
It’s always a conflict. I know that by opting for this Dionysian or Romantic kind of thing—that I want to sit down and do something NOW—I know that that has probably kept me from things. One of the really interesting things about the electronic world is that you can make up sounds that you’ve never heard before, and I think that the way those sounds usually come about is by knowing your gear and sitting down and playing with it. I mean I know that those guitar pieces I’ve been mentioning—Five Movements—they happened because I had this new toy. Once upon a time I literally built a loft for the guy who invented the Eventide harmonizer, who was a friend of a friend, and part of the payment for my carpentry work was getting an Eventide harmonizer, and that became my new instrument. And it also was pivotal in creating the other more specifically electronic-music-sounding pieces of mine, early pieces, called U79 and No Memory. Those were little bits of voice, mutated by this device, and it came out of “hands on” experience. And for the synth piece I did for an electric quartet that we mentioned, Rock Paper Scissors, I kind of cobbled these patches together, not writing stuff from scratch, but basically starting with what was on the machine and mutating it a little bit. It takes a lot of time to fit yourself to a new tool.
Before we conclude, I wanted to say a few things about the question of reproducibility. The other thing that’s been significant in the Western tradition is the idea that the music is readily reproducible from the written score. If you write a string quartet, all you need is string players, and they’re everywhere. Your piece will be performed, and can be reproduced in live performance fairly easily. The constant evolution and replacement of electronic platforms and devices means that if you make a piece that’s dependent on the characteristics of a certain device, you may or may not be able to reproduce it, or, particularly if it’s a performance situation, you may not be able to find that machine again.
What I’m going to have to do with many of my early pieces is go back in with a contemporary platform and retrofit the sound. This conflicts with the whole idea of “an orchestra’s always going to sound like an orchestra,” and that’s what makes it easy to write for orchestra. Everybody knows what’s there and everybody knows what they can do. So that’s the challenge, I think, of electronic instruments and electronic music. There are all these possibilities, but it’s difficult to know how much you can count on those possibilities being there. Unless of course you’re doing a fully recorded piece, in which case, fine, it’s fully recorded and you don’t need to reproduce it.
There’s a lot in what you just said, in particular the idea of how a desire to create something that will be easily reproducible, or reproducible at all, affects the music you’re writing.
Yeah, it’s a consideration, if you’re a composer and you want people to hear your stuff, or if you want to hear your stuff.
Well, this actually leads to my last question, which was one of the points that you made in your lecture here at BYU, and which I thought was really interesting—your notion, even goal, of expanding the audience of art music and reaching out to people across stylistic lines. Not that you’re interested in watering down your product, or that you expect our audience to be as big as the audience for Miley Cyrus, or whomever, but . . .
. . . but actually, there’s something in the way you framed that which is the way that most people think of this issue. And I think it’s the wrong way to think of this issue, that is, the question of watering down or pandering to the masses or doing something in order to please somebody else—I find that 180 degrees away from my experience. My experience is that I have an internalized self who likes a lot of the same sounds that the larger culture likes. I grew up with it. I’m a rock and roll guitarist for goodness sake—that was how I started. For me to edit that out is to pander to a high-end audience and to pretend that I’m something I’m not. I like that stuff. I may not like all the music that’s written for it. But I like the sounds, I remember what it was like to be a teenager, out at night and excited by this and that, the same way Beethoven remembered what it was like to be a teenager and go to see a bunch of people dancing folk dances or whatever they did then, and he did not edit that out of his music. And Stravinsky did not edit Russia out of his music, although he pretended to. He lied about it. Because he was trying to be upwardly mobile and pretend that he had invented everything, cut it all from whole cloth.
This fear of watering it down, of pandering to others, is just a false way of framing the question. It ignores the fact that you might be pandering to yourself—if you’re a person like me, you include elements from popular culture because it’s part of you. I am writing that which I want to hear, and what I want to hear includes that stuff. I do not think it involves any compromise. On the contrary, I think that if I were to very self-consciously go in a sort of Milton Babbitt direction—and by the way he loved the Broadway musicals, and wrote one (you don’t hear that in his music!)—that would be pandering. Babbitt was very careful to make sure that most of his high modernist work appears in the guise of what is the highest cultural coinage, which in the music world has come to be that which annoys dummies. In other words, the Rite of Spring phenomenon: the idea that they hated me this year, but next year I’m the toast of Paris. I think this mentality is self-defeating. I don’t think that the question of attending to the culture you grew up in is a question of compromise, I think that pretending that you didn’t grow up there is compromise.
Well, I love that twist on it. I think that’s a really great, compelling thought. To conclude, then, what role has technology played, or do you think it can play, in this sort of communicating with an audience? How does it relate to this idea of really being true to yourself, and to reaching out or speaking to potential audience members?
I think it can relate in two really distinct and opposing ways, and they’re both good. One is the way of high modernism, which is the jolt of the shock of the new, which can be great fun. I’ve said a few things contrary to high modernism, but that doesn’t mean I wasn’t struck by it when I first heard it and said “What is that?!” and “I want to hear more of that odd sound . . . “ You know, it’s the value of novelty, and the pleasure of novelty, and it’s one of the ways in which you can reach out.
And the exact opposite way is the pleasure of the familiar, and that is for me hitting the distortion on my electric guitar and playing a power chord like some testosterone-poisoned 17-year-old. It’s fun, and it’s familiar, and it is definitely going to pull in a certain number of people who connect with that sound. That doesn’t mean it can’t pull me in too. One of the things I like about some of my more populist sounding pieces is that if you hear just a couple bars you think “Oh, it’s that—I hear a drum set, I hear an electric bass, I hear that,” and if you hear a couple more bars you go “Something’s wrong here; this isn’t behaving the way a song is supposed to behave,” and then soon you realize that it actually isn’t a song at all. It’s an instrumental composition and it’s more complex than the materials you’re hearing might suggest at the outset.
So again, I go back to what I just said about the fallacy of the “pandering to the masses” thing. If you go about pandering to yourself, and trust your instincts, you can escape that way of thinking. On the one hand, you might have fun making some sort of post-modern ironic joke about familiarity; on the other hand, you might create something that’s startling, that makes you go “what just happened?” along the lines of the strategy of high modernism. Well, if they both work for you, you should just do it, and trust that you’re not the only one in the world who can appreciate both. You know, we think that there’s us and then there’s the audience. Well, we went to the same high school as that audience, we played on the schoolyard when we were six with that audience. We aren’t that different. We’ve convinced ourselves that we’re that different, yet we’ve gradually mutated our tradition in such a way that it’s become what I call the Ivory Ghetto. We’ve created a gulf between us and the audience that I don’t think was there in 1850. I think it’s a gradual kind of cultural mutation that jumped the tracks, and I don’t see any reason why it’s necessary.
Well, Scott, thanks so much for taking all this time with me.