Computer Music at Stanford
Simon Millward visits the world-renowned Centre for Computer Research in Music and Acoustics at Stanford University, California, and talks to its famous director, Professor John Chowning, inventor of FM synthesis.
Few people realise the debt we owe the research establishments working at the frontiers of new musical technologies. One of the most famous is the Centre for Computer Research in Music and Acoustics (CCRMA) at Stanford University, California. CCRMA is a major force in computer music and exerts a worldwide influence over past, current and future developments in the industry. Simon Millward visited the Centre in January-February of this year and had the opportunity of using the mainframe computer system to compose an original work of his own. Here he brings us his report of the visit plus an interview with the Director of CCRMA and inventor of FM synthesis, Professor John Chowning.
CCRMA is situated on the southern extreme of Stanford University's campus in a unique and beautiful building known as The Knoll. It is principally a research establishment but attracts composers from all over the world into an interactive environment, especially suited to the composition of computer music.
The system at CCRMA is based around a Foonly F4 mainframe computer controlling a Systems Concepts digital synthesizer designed by Peter Samson of Systems Concepts in San Francisco. This synthesizer, otherwise known as 'The Samson Box', is a six-foot high monolithic black box capable of almost any possible musical function and something like 2000 times more powerful than your average portable synthesizer. The computer is used on a time-sharing basis by as many as 25 users at any one time.
The Centre's main strength - and its greatest resource - is the continuous presence of a varied selection of interested musical parties, whose differing approaches make for an ideal exchange of ideas. Guest composers are invited at the discretion of the Centre's Director, John Chowning, and the senior staff. The generosity and good nature of the staff and administration are central to the congenial working atmosphere of CCRMA. Each invited guest is encouraged to pursue his/her own ideas with no undue bias.
I produced a substantial piece of music during my six-week visit. Like most works produced at the Centre it relies entirely on the powerful control and synthesis capabilities of the CCRMA system. This control includes the precise placement and distancing of sounds in either a stereo or quadraphonic sound image. Synthesis of a variety of timbres is possible via available system instruments which incorporate FM, Additive, and other methods of synthesis. Music is controlled in every domain with the use of a number of computer languages designed specifically for compositional purposes. Any composer visiting the Centre for the first time must, therefore, learn a new computer language but, since this is usually an Algol-like language, many computer literate users will not find this very difficult.
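To give a flavour of what 'placement of sounds in a stereo image' means in practice, here is a minimal sketch of constant-power panning - a standard technique, though not necessarily the exact method used by CCRMA's system instruments. The function name and parameters are my own for illustration.

```python
import math

def pan_stereo(sample, position):
    """Constant-power pan of a mono sample into a stereo pair.

    position 0.0 = hard left, 1.0 = hard right. Mapping the position
    onto a quarter circle keeps left^2 + right^2 constant, so the
    perceived loudness does not dip as the sound moves across the image.
    """
    theta = position * math.pi / 2
    return sample * math.cos(theta), sample * math.sin(theta)

# At the centre position both channels carry ~0.707 of the signal,
# preserving total power.
left, right = pan_stereo(1.0, 0.5)
```

A computer-controlled system can recompute the position for every note (or every sample), which is why no mixing desk is needed at the listening stations.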
The Centre owes much to the pioneering work of John Chowning (who invented the now famous concept of FM synthesis) and to the subsequent support and encouragement from musical instrument manufacturers, among them, of course, the Yamaha Corporation. The legendary Yamaha DX7 owes its existence to Chowning's original research and to the later refinement of FM into a portable digital synthesizer by the technicians at Yamaha's research laboratories. A close liaison is still maintained between Chowning and Yamaha and proof of this is evident throughout the CCRMA building with the conspicuous presence of a large amount of Yamaha musical equipment. Indeed, as well as the mainframe system, John Chowning has encouraged the establishment of a well stocked MIDI studio at the Centre.
Interest in the research undertaken at the Centre comes from a wide spectrum of the music industry. Julius Smith and Guy Garnett of CCRMA are busy arousing interest in a new synthesis method known as Waveguide synthesis. Guy Garnett is presently using this method to synthesize piano tones. This involves using the computer to create delay lines to approximate a model of the working parts of a real piano. Other activities include microtonal research, being undertaken by Doug Keislar in a PhD research programme; speech synthesis, being undertaken by Bob Shannon (who, during my visit, was attempting to synthesize the sentence 'These take the shape of a long round arch with its path high above and its two ends apparently beyond the horizon'!); and a number of other research projects including the use of MIDI for more extensive musical applications. Such research programmes often form the seeds for new musical technologies which may one day end up incorporated into a popular portable synthesizer. So, next time you play your DX7 or whatever, spare a thought for all the hard-working researchers who made it possible and try to appreciate where the power behind the button is really coming from.
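The delay-line idea behind waveguide synthesis can be sketched very simply. The following is a bare-bones plucked-string model in the Karplus-Strong style - a close relative of, but much simpler than, the waveguide piano research described above; the function name and defaults are illustrative only.

```python
import random

def pluck(frequency, sample_rate=44100, duration=0.5):
    """Very simplified digital-waveguide string.

    A delay line one period long models the travelling wave on the
    string; a two-point average acts as the loss filter that makes
    the tone decay naturally, as a struck or plucked string does.
    """
    n = int(sample_rate / frequency)  # delay-line length = one period
    # Fill the line with noise to model the initial excitation.
    line = [random.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for _ in range(int(sample_rate * duration)):
        s = 0.5 * (line[0] + line[1])  # averaging filter = damping
        out.append(s)
        line = line[1:] + [s]          # recirculate through the delay
    return out

samples = pluck(440.0)  # half a second of a decaying A440 'string'
```

A real waveguide piano model is far more elaborate - multiple coupled strings, hammer and soundboard filters - but the recirculating delay line is the common core.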
The CCRMA building is an ideal environment for the composition of computer music. It is very spacious with three or more acoustically treated sound studios and a large number of what are referred to as 'listening stations'. Listening stations are where most of the Centre's visiting and resident composers spend their time. This involves working with a computer terminal and a set of two to four loudspeakers. No mixing desks are necessarily required since the computer takes care of the placement, levels and mixing of sounds throughout a piece.
In addition to the frequent lectures and concerts held at the Centre itself, CCRMA also organises larger computer music events including a yearly summer open-air concert. Also, the Centre stocks a comprehensive library of research papers and relevant literature around the subjects of computer music composition and synthesis and has on release an LP record and two cassettes featuring many of the major works produced by resident and visiting composers.
I considered there could be no better way of fully understanding the activities and philosophy of CCRMA than to conduct an in-depth interview with the Centre's Director, John Chowning. This took place at his office in the CCRMA building, which has an attractive view overlooking the University campus towards the southern reaches of San Francisco Bay.
The atmosphere is peaceful and relaxed - an atmosphere which tends to pervade the entire building. Surrounded by the evidence of Dr. Chowning's not inconsiderable administrative tasks as Director and an array of texts on computer music related subjects, I couldn't help noticing a Japanese translation of Chowning's recently published book, FM Theory and Applications, but didn't have the courage to ask him for a copy for its novelty value. After taking some photographs, it was time to fire the first question.
What do you consider to be the main reasons for the establishment and growth of a Centre such as CCRMA?
JC: I think, at first, it was an interest in electronic music in relation to musical composition in the 20th Century and a University of this kind had the computational capabilities and technical help to sustain an initial effort. But over the years the activities have expanded. As you know, there's an amount of research that goes on here which is in direct support of musical composition itself - in terms of signal processing and digital synthesis. I guess the bigger question is 'Why should a Centre such as CCRMA be a research centre as well as a production centre?' Production is what composers who come here are interested in but I think that there are few centres in the world which can maintain large enough computer systems to allow substantial research to go on without the constraints that exist in small systems or dedicated hardware. For example, the generation of computers and processors that are now appearing, while very powerful, are pretty much task-specific and do not allow one to venture off into some domain of research in a chosen area. In direct contrast, a large general purpose computer is always an open-ended system. While very slow, perhaps, in performing various computations, the fact is that any sort of computation can be done. I think this 'open-endedness' is an important attribute of this Centre, which we've been able to maintain over the years.
With the rapid advancement of computer related musical instruments and technology in the commercial sector, do you think that CCRMA will be made a thing of the past in, let's say, 20 years from now?
JC: Research in the auditory domain is not something that's going to run out of interesting problems in the foreseeable future and the idea that commercial devices are becoming ever more powerful and less expensive is good for production, certainly. There are things that can be done now - with the new model DX7 for example - in the domain of microtuning, which is not currently available on any other commercial device. This brings it some way closer to the tuning capabilities of our own 'Samson Box' which is, of course, desirable. But I think we have to remember that a large part of the reason why it does have that microtuning capability is not because the commercial world was demanding it but because the academic and art music worlds were demanding it. This is not the money-making sector at all. Now, let's look at the impetus for research in, say, this area of microtuning. Microtuning is a musical issue which, through different kinds of research, can be explored in many different ways. Though one aspect may be thoroughly explored and a new synthesizer may appear on the market, it does not mean that research has been exhausted in the area of microtuning.
I think in 20 years time there will be a whole new batch of research interests to be addressed because there is a tradition in a research lab that one doesn't find in, let's say, a recording studio in Hollywood, where they don't have the time to be able to sit back and reflect. What we treasure in an environment like this is the time to be reflective about the experiments being done, without the pressure of having to produce. It's much more like archaeologists searching for 10 years before they find any fossil at all. It's the same tradition here, where an interest may be maintained without any absolute proof that there is anything to be found but, if there is, the fact that a lot of thought has been given to the problem makes the whole thing pregnant with meaning. We have a whole history of compositional language which wasn't thought up just by one person: the development of compositional languages here spans some 25 years of cumulative experience.
On the question of language, how relevant or irrelevant is traditional music notation to computer music?
JC: Well, it gets pretty irrelevant round here. You won't see many people with musical manuscript making translations into traditional notation. But it is done, no doubt about that. However, the aspects of control so uniquely appropriate to computer processing are exactly those that are not easily traditionally notated, so one has to express it in a different way.
So, computer music requires a language of its own, which you have gone towards developing at CCRMA?
JC: Pierre Schaeffer went some way towards it in his 'solfège' of tone colour but it turned out to be not particularly useful to any composer of computer music as far as I know. There are other ways we can deal with it. We can now certainly describe with some degree of accuracy, using precise engineering and physical representations, the nature of a complex timbre.
Would you say then that the Centre is involved in the development of a new universal language involving the expression of some of the new parameters of, shall we say, loudspeaker music?
JC: Well, the language is universal at this Centre but I don't really know how extensive a universal language is elsewhere. Certainly, the spatial aspects of music, because they are now more easily controllable, have become a common aspect of every musical source here. We use such things as distance and angle of sound as well as reverberation.
Since the study or composition of computer music is essentially interdisciplinary, does the prospective composer need a thorough grounding in both musical theory and computer science?
JC: Well, I don't think so. Certainly one has to learn a little about acoustics, processing, and something about the way in which computers work in order to have substantial contact with the medium. I don't think one has to necessarily be a good programmer. Some composers I know have been very successful at the level of instrument design, joining oscillators and processors together and making some real contributions in synthesis without having very much skill in general purpose programming at all. There are other composers who have been able to construct general purpose compositional algorithms who haven't been involved in any low-level synthesis programming and I think these composers are uniquely successful also. So there are these two poles, and each one is a perfectly valid musical position. The music that has consequently been completed is, more often than not, of good quality.
The term 'computer music' could be misleading to many people since it conjures up many images which are not necessarily to do with anything musical. Is there a better term?
JC: I've thought about that. It's true, we do make this association between concepts of de-humanisation and computers. So, having been asked this question a number of times over the years, I guess my response is that computer music, rather than the dehumanisation of music, is an aspect of the humanisation of computers. It is using them to do something which is essentially expressive. Whether or not we should change the name, I'm not so sure. We think of 'organ music', we think of 'orchestra music', and 'computer music' is certainly music which is utterly dependent on computers. We could use the term 'digital' or 'digitally related music' but I don't know whether that does any better. I think that people who actually hear the music are going to become rather more comfortable with the term 'computer music' because they'll find that it's not necessarily all the little squeaks, pops and bleeps that the cinema industry and others lead them to believe.
Nevertheless, would you say that computer music doesn't really make any substantial contact with the public? Is there some kind of barrier between the public and computer music?
JC: In fact, the public does hear a lot of computer music. Many film and TV scores are now performed with Yamaha DX7s under the control of an Apple Macintosh or some other sequencer, but we don't think of this as computer music, we just think of it as a soundtrack. Really, the question here is not about computer music and a large audience but about 'art music' and a large audience. I go out there and I don't hear much 'computer art music', that's true, but I don't go out there and hear much of Beethoven's chamber music either. There is no real issue about computer music and mass audiences. Computer music today has a much larger audience. If we define it as any music which involves the generation and composition of sound with digital synthesizers and with computer control, then it's everywhere and it has a huge audience! But who cares? What I care about is 'art music'. I don't know which other term to use. That's the only one that is inclusive enough: 'art music' may include jazz, the very best of rock, Western classical music, but it's music the intent of which is not money oriented or to be placed in a subservient position to some other medium or trivia. As far as an audience for computer music which is produced here is concerned, we have had over 2000 people attend our larger concerts on more than one occasion. Now that's a pretty big audience for contemporary music.
Do you think it is unhealthy for computer music to exist in isolation from conventional instrumentation?
JC: In its early years, it was always music produced for taped performance without accompaniment but, now, the taped piece can be viewed as one part of an ensemble. The problem is that this taped music is utterly inflexible as far as dynamism is concerned. Up to now, the taped part has been used not always in a satisfactory way from the performer's point of view, except in special cases where it seems to have worked quite well. But I think now, with real-time digital synthesizers that are more powerful and subject to dynamic control in the context of performance, we'll see more and more traditional use of performers and orchestrated computer sound with traditional instrumentation.
Having begun to use the system at CCRMA a little, it appears to me that the sounds I can make and the sounds I have heard in other people's pieces are all recognisable as having come from 'The Samson Box'. Shouldn't the system have the power to create totally different soundscapes which cannot be recognised as having come from the same synthesizer, since the CCRMA system is a general purpose one?
JC: I think that's partly just a question of our having system instruments which most users, in their first approach to the computer, use as their point of departure. You can change the system instruments, to some degree, as a function of the parameters, but most people who use the system for the first time don't get down into the computer at the level of 'instrument' design itself, and it's there that you can make the big changes. If you were here for a longer period you would find yourself getting into the construction of your own instruments, and then you would perhaps notice a substantial difference in timbral content.
Are there any commercially-based artists who are especially significant in your opinion?
JC: I think there are lots of people who use the available technology in ways that I think are imaginative and creative. But they are not driving the development of the technology, they are users of it.
Who is driving the development of the technology?
JC: I think it's partly places like here. Certainly, the artists make demands and want something better but their world is not generally one where there is any very good means to articulate to an engineering community who finally have to fulfill technological demands.
But there are some very good people who live somewhere between the engineering and academic worlds and the world of the artists. These people are often very good communicators. I mean, David Bristow, with whom I worked at IRCAM in Paris and wrote the new book on FM, he is one of these people who communicates very effectively between these worlds. Now, his background is as a performing artist - a jazz and rock keyboard player - but he is very insightful as to what the problems of technology are and what one needs to do in order to solve them.
You mentioned the new book you have co-written with David Bristow, 'FM Theory and Applications'. What aspects of FM does it cover?
JC: We haven't tried to explain how to use a DX7, nor have we tried to explain the complete theory of Frequency Modulation synthesis. What we've done is to try to extract from the theory of Frequency Modulation synthesis those aspects which have a direct relationship with the synthesis of sound with a few key rules: how to think about it in simple FM terms, which is extendible to complex FM, and we depend upon the understanding that the reader, in all probability, has some Yamaha 'X' series instrument which will allow some experiments which help the ear connect to the theory. And that is the important thing.
I notice the book stresses a grounding in the disciplines of maths and acoustics. Will many musicians interested in programming FM synths have the patience for this?
JC: How much math and acoustics? We've tried to keep it more or less self-contained. We thought that it was helpful to have some understanding of a sine wave. That's a pure tone, and you could listen to it with the DX7. Now, in explaining FM theory, we first took the harmonic series - a collection of sine waves - and we show how you can just add these together and develop a complex sound, using algorithm 32, for example. This sound has six components which are quite enough to create an effect, but in order to understand the theory of FM one has to understand something about 'phase'. Now that's not something that many musicians know much about in any precise way, except in terms of flanged effects, etc. In the book we've gone to some trouble to explain one special case of phase, namely: a sinusoid that is 180 degrees out-of-phase with itself. This is important to FM. But you don't have to go to some other physics or acoustics textbook in order to understand that. If you want to understand more about the theory of phase then you are free to do so, but we've tried to make it pretty much self-contained. Step-by-step we try to lead you with your ear through FM theory.
So I don't think you open that book and have to have three other books open at the same time. If you can add, subtract, multiply and divide, that's about all you need.
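The 'simple FM' that Chowning refers to can be expressed in a few lines. This is a sketch of the classic formula y(t) = sin(2πfct + I·sin(2πfmt)), where fc is the carrier, fm the modulator and I the modulation index; the function name and defaults are my own, not from the book.

```python
import math

def fm_tone(fc, fm, index, duration=0.1, sample_rate=44100):
    """Simple FM: a sine carrier whose phase is modulated by a sine.

    The modulation index controls how much energy spreads into
    sidebands at fc +/- k*fm - in effect, the brightness of the tone.
    With index = 0 this reduces to a plain sine wave.
    """
    out = []
    for i in range(int(sample_rate * duration)):
        t = i / sample_rate
        out.append(math.sin(2 * math.pi * fc * t
                            + index * math.sin(2 * math.pi * fm * t)))
    return out

# fc = fm gives a harmonic spectrum; raising the index adds sidebands.
tone = fm_tone(440.0, 440.0, 3.0)
```

The appeal of the technique is exactly this economy: two oscillators and one index parameter yield spectra that additive synthesis would need many oscillators to match.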
Many musicians and programmers are frightened off from programming new sounds with FM because it doesn't accommodate an intuitive approach like additive synthesis. Does the new book remedy this?
JC: Yes, I think it does. If you can find your way through the book it will help in modifying existing timbres which, as you know, is the method by which most programmers begin creating their own sounds. I think understanding something about the theory will help that process but good programmers don't sit there thinking about the theory, they use their ears. Knowing the theory helps one know how to use the ear and to that extent I think the book is very helpful.
Would you say that without yourself having developed the original FM principle, things may have been very different for CCRMA? Have Yamaha been a big support to the Centre?
JC: Oh yeah, sure. Yamaha have licensed the technology and their involvement has meant support for the lab. There's no doubt that they have been an enormous help.
Research obviously plays an important part in CCRMA's activities and could surely result in another major development providing an equivalent boost in support for the Centre?
JC: Yes, that's true, but the search for a major development for commercial use is not the driving force behind the research that goes on here. Julius Smith's ideas about filtering closed waveguide networks have potential applications in the music industry and could also be income-producing. At Stanford, we have a very effective office called 'The Office Of Technology Licensing', which acts as an interface between the academic world and industry. They are very effective indeed and are also very sensitive to the constraints that are applied to research that goes on at the University. We can't have, nor do we want, research that is secret. For example, there's no military support of research that can't be known by everyone. The Office has provided a very efficient means for researchers to get their ideas exploited in an effective way by the relevant industries and FM technology is just one example.
I notice you are putting together a new MIDI studio here and that almost all of the gear is manufactured by Yamaha. Do you propose to expand the equipment range to include items made by other manufacturers?
JC: We probably will. It's been proposed by some of the composers here that we should get a sampler but I don't know a whole lot about them. I don't have a whole lot of interest in sampling as a general base to my thoughts about music. In sampling, you get what you get. But sampling with some very powerful processing becomes more interesting. Synthesis is somewhat different. From my own personal view of the composition of electro-acoustic music, synthesis allows me to think about sound in a way that I can't with natural instruments. Sometimes I get less than I asked for but sometimes I get more.
Are you conducting any research into MIDI-based applications here at CCRMA?
JC: There are some LISP machines here which output and receive MIDI and there is certainly some interest in that with Chris Chafe and others, but there is another project at IRCAM Paris, which is called MIDI LISP and is based upon the Apple Macintosh. We don't have the latest version of that, which is still being perfected in Paris. David Wessel of IRCAM is in charge of it. It involves a form of LISP which runs on the Macintosh and is optimised for MIDI input and output.
Do you think MIDI is an important development?
JC: Oh yeah, sure. It certainly is important. It's not what everybody would like to have as a standard but what's really important is the fact that it exists and that we're learning a lot about the interconnection of devices and finding out how we could improve things for a second MIDI standard. Without a prolonged period of experimentation with the current standard I think we would lose a lot of potential positive definition for the second standard. Undoubtedly, there will be a MIDI 2.0, and it will have to be a standard which is compatible with MIDI 1.0. It will have to be faster and make up for all of the deficiencies of MIDI 1.0.
Finally, what aspects of your work give you the most pleasure?
JC: Well, I do three things: administration - I have to do a lot of that - teaching and composition. I do mostly administration, followed by teaching and a little composition. I'd like more time to compose and teach and I'd like to spend more time with individual students. The administrative tasks are currently somewhat of a burden but part of the administrative load is due to the fact that we have only recently moved into this facility.
Are you happy with the environment of the CCRMA building?
JC: I think the environment is very comfortable. There is lots of listening space and the studios are pretty fine. They were designed by George Augspurger, a Los Angeles acoustical consultant who has fitted a number of studios around the country, along with a Palo Alto architect who supervised the refurbishment of the entire building. They worked well together and did an excellent job.
How do you see the future of CCRMA?
JC: It should be perpetuated much as I described it earlier in our discussion. CCRMA has to be an 'open' research facility and we should encourage staff, researchers and students to pursue their interests to the extent that this institution must remain of general usefulness to the greatest number of people. There is no exclusivity here since tens of users utilise the system at the same time. As long as research comes within the bounds of the current capabilities of the system then that research should be pursuable. And as long as there are a large number of musical interests to be researched then I'm happy.
For more information about CCRMA write to: (Contact Details).
About the author: Simon Millward works as a freelance producer and engineer and is currently studying for a degree by Independent Study in Computer Related Music for which the visit to Stanford University formed part of his unique programme of study. The visit resulted in the composition of a piece called 'Gestures' which is available, along with other works by the composer, from (Contact Details). Price £5 inc postage (C-30). Cheques/postal orders should be made payable to 'C.Kam'.
Feature by Simon Millward