Music at City University
Computer music and electro-acoustic education in London.
David Ellis delves into computer music and electro-acoustic education in London
At the same time as one's greedily fingering the latest digital music wizardry, it's salutary to remember that such objects don't appear out of thin air: they reflect intensive R&D, as well as years of research into the nature of sound and synthesis techniques. It's also arguable that any musician using such products of high technology should be aware of the ingredients that went into their design, as well as the ways in which the sounds are actually produced.
The Music Department at City University, London adopts the technologically-intelligent musician as its basic educational standard in a way that's wholly refreshing in comparison to the archaic musicological approach of many other Music Faculties. The two sides to the activities at City could come under the headings of 'analogue' and 'digital', but this would do a disservice to the very careful and non-pigeonholed course structure that Simon Emmerson has built up over the past five years. I'll be talking to him later on about the studio and the course itself, but, firstly, I started off my day at City by talking to Dr Kevin Jones, a research student working on computer music.
The pioneering work in the field of digital synthesis was done by Max V Mathews at the Bell Telephone Laboratories in the States. He wrote a series of programs, MUSIC 1 through 5, that configured the computer as an infinitely complex synthesiser. The original MUSIC 5 was written in FORTRAN, which is notoriously slow, but a former member of the department, Stanley Haynes, now working at IRCAM in Paris, re-wrote part of the program in assembler to speed up the rate of sound generation.
Kevin Jones explained where MUSIC 5 came into his own work. 'The version of MUSIC 5 that Stanley was using was quite old and just capable of setting up basic oscillators and using them to control each other. I developed a special version of the program with pre-programmed instruments so that beginners could sit down and write simple instrument files and define instruments just in terms of frequency, waveform and envelope. This enabled them to build up waveforms and experiment with simple frequency modulation, but they didn't actually have to program the oscillators together.'
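The article doesn't reproduce any actual MUSIC 5 syntax, but the idea Kevin describes (an instrument defined purely in terms of frequency, waveform and envelope, with simple frequency modulation on top) can be sketched in modern Python. Every name and parameter below is an illustrative assumption, not MUSIC 5 code:

```python
import math

SAMPLE_RATE = 8000  # a low rate, in the spirit of early digital synthesis

def fm_instrument(freq, mod_ratio, mod_index, duration):
    """Render one note of a simple FM 'instrument' defined, as in the
    text, just by frequency, waveform and envelope. All names and
    parameters are illustrative, not MUSIC 5 syntax."""
    n = int(duration * SAMPLE_RATE)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        env = 1.0 - t / duration          # simple linear decay envelope
        mod = mod_index * math.sin(2 * math.pi * freq * mod_ratio * t)
        samples.append(env * math.sin(2 * math.pi * freq * t + mod))
    return samples

note = fm_instrument(440.0, 2.0, 3.0, 0.5)  # A440, half a second
```

The point of the pre-programmed version was exactly this: the beginner supplies only those few parameters, without ever patching oscillators together.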
Obviously, MUSIC 5 was pretty tough on the composer warily treading digital synthesis ground for the first time, and, in the first version, this wasn't helped by the necessity of entering logarithmic pitch data related in some way to the sampling rate. 'With my version, you could at least enter pitches as Hz, but Stanley actually wrote a conversion routine which meant that you could enter more conventional musical parameters. The main way I used the program was to generate note lists via recursive procedures which call themselves in order to generate comparatively dense structures.' Whatever neatness lay in these programs was subject to the frustrations of inadequate processing facilities to generate tapes that could be played back via the department's D/A converter.
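Kevin's recursive note-list idea can be illustrated with a short sketch: a procedure emits one note, then calls itself twice on shorter, transposed material, so the texture thickens rapidly with depth. The procedure and its parameters are invented for illustration; his actual code isn't shown in the article:

```python
def note_list(freq, start, dur, depth):
    """Recursively generate a note list as (start, duration, frequency)
    tuples: each call spawns two shorter, transposed copies of itself,
    quickly building a comparatively dense structure."""
    notes = [(start, dur, freq)]
    if depth > 0:
        half = dur / 2
        notes += note_list(freq * 1.5, start, half, depth - 1)        # up a fifth
        notes += note_list(freq / 1.5, start + half, half, depth - 1)  # down a fifth
    return notes

score = note_list(220.0, 0.0, 8.0, 3)  # a full binary tree: 15 notes
```

Three levels of recursion already give fifteen overlapping notes from a single call, which is the attraction of the technique.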
'It took us about 9 hours on the University mainframe, an ICL 1905E, to generate about 60 seconds, which gave a processor time/music time ratio of something like 100:1! The 7-track playback unit here in the studio was used to play the tapes from the ICL, but, since the University has changed over to a Honeywell which operates on a 9-track format, it's now incompatible. The only way we can get the thing to work is by having our tapes converted from one format to another at the UCCL computing centre. So, we decided it just wasn't worth carrying on with mainframe work.'
The cynic would argue that high-level music languages are fairly detached from reality, and the cost of mainframe processing time must be incredibly uneconomic, so I wondered how Kevin felt this work was justified.
'Well, the development of digital synthesisers as commercial products must owe a lot to the work of Max Mathews and others in the States, and I think that MUSIC 4 and 5 must have been a necessary link in the chain. In general, the sort of research done in Universities or other technological institutions often filters its way down, but, with the MUSIC series, Max Mathews's original research started off in connection with speech synthesis and acoustic research, and then musical applications developed out of it.'
It's curious that whereas in the States it's de rigueur for large synthesis set-ups to be connected with big companies like Bell Telephone Labs, over here there's just nothing of the sort; I mean, can you imagine British Telecom supporting digital music synthesis? Though this seminal research could be said to have reached a full-stop in this country, it's very alive and kicking elsewhere.
'MUSIC 5 has now spawned MUSIC 10 which is being used at IRCAM and also at Stanford. MUSIC 10 is modelled more on ALGOL, in that you assemble your instruments using procedural calls, and then you put logical expressions, tests, and so on, into your instrument definitions, which makes it much more sophisticated and gives you "intelligent" instruments. Note calls are also built into procedural definitions and the whole thing is much easier to work with.'
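MUSIC 10 itself isn't quoted, but the notion of an 'intelligent' instrument, one whose definition embeds a logical test, might be sketched like this in Python. The threshold and waveforms are arbitrary assumptions, not MUSIC 10 code:

```python
import math

def smart_instrument(freq, t):
    """An 'intelligent' instrument in the MUSIC 10 spirit: the
    definition contains a logical test, here switching to a purer
    timbre above an (arbitrary) frequency threshold."""
    if freq > 1000.0:
        # pure sine up high, where extra harmonics would alias
        return math.sin(2 * math.pi * freq * t)
    # richer waveform lower down: add a touch of third harmonic
    return (0.8 * math.sin(2 * math.pi * freq * t)
            + 0.2 * math.sin(2 * math.pi * 3 * freq * t))

sample = smart_instrument(440.0, 0.001)
```

Because the test lives inside the instrument definition, one instrument can behave differently note by note, which is what makes the approach feel 'intelligent' compared to a fixed oscillator patch.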
Even so, these high-level languages take a lot of assimilation, and it's only too easy for a composer to be distracted from the business of actually getting down to producing music. Really, composers need an interpreter interface which will allow them to enter music in the most painless way possible.
'Well, I think that's what people in big computer music installations are just starting to realise, but there was a time when people like John Chowning, at Stanford's Artificial Intelligence Laboratory, said that if a composer wants to use a digital synthesis system then he's got to learn how to use it. Now, at IRCAM, they've changed a lot of that and a composer often works with an assistant. But, to do anything that's more than fairly basic, I think a composer does need to get to grips with programming languages.'
I wondered what Kevin thought of computer music in this country as compared with, say, France.
'Well, I'm afraid that there isn't any. It's a great shame, because the foundation to do amazing things was laid with Peter Zinovieff's EMS studio, but everything just fell apart. I know of no one else who's really seriously involved in computer music, apart from myself. Jonathan Harvey has worked at IRCAM and Tim Souster at Stanford, but apart from them there's nothing happening here. That's why I feel it's so important that we extend the music course here at City. Really, I feel the amount of computing should be much greater — at the moment computer studies is a core subject for every department apart from music.'
Knowing only too well the conservatism of University administrators, I foresee some difficulty in getting the marriage between computers and music accepted.
'Yes, I suppose so, but having the Apple here means that people can work with computer techniques and get a good idea of applications in music.'
Kevin has been working extensively with the Alf music synthesiser cards in conjunction with the Apple, both in his personal work and for teaching purposes. The latter include some aural training programs that enable a student to test himself alone, without the embarrassment of appearing tone deaf in front of a class! Kevin's PhD was basically in algorithms and compositional techniques applied to computer music. A section of this is on generative grammars, which are used to create stochastic webs: in other words, structures produced with elements of chance.
'I'm trying to surprise myself by generating structures rather than sounds, and, because the structures are interesting, I'm pretty convinced that I'll get an interesting sound. It's all down to the idea of cybernetic serendipity! Even though you're stuck in a square wave cage with the Alf modules, calling the subroutine at different places enables you to develop nice sophisticated canonic structures.'
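As a rough illustration of the generative-grammar idea (the rules, weights and note names below are invented; Kevin's actual grammars aren't given in the article), a symbol can be rewritten by weighted chance until only note names remain:

```python
import random

# A toy stochastic grammar: each symbol has weighted alternative
# rewritings, loosely illustrating the 'stochastic web' idea.
RULES = {
    "phrase": [(["motif", "motif"], 0.7), (["motif"], 0.3)],
    "motif":  [(["C", "E", "G"], 0.5), (["D", "F", "A"], 0.5)],
}

def expand(symbol, rng):
    """Recursively rewrite a symbol until only terminals (note names)
    remain, choosing each production by weighted chance."""
    if symbol not in RULES:
        return [symbol]  # terminal: a note name
    expansions = [e for e, _ in RULES[symbol]]
    weights = [w for _, w in RULES[symbol]]
    choice = rng.choices(expansions, weights=weights)[0]
    notes = []
    for s in choice:
        notes += expand(s, rng)
    return notes

melody = expand("phrase", random.Random(4))  # seeded for repeatability
```

The 'surprise' Kevin mentions comes from the chance element: the composer fixes the grammar, but each expansion yields a different, structurally related result.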
Two examples of this technique can be heard on the E&MM demonstration cassette No. 4. This also includes an example realised on the ICL 1900 computer using the MUSIC 5 program to create a digital tape, which was replayed through the department's D/A converter.
Practically the first thing learnt in the Music Department is that the term used to describe a technology-based form of music-making is 'electro-acoustics' rather than 'electronic music'. Simon Emmerson explained: 'We're unashamedly French-orientated, in that we deal with the 'sound object'. So, how you record and treat the individual sound is, for us, the ultimate sound synthesis. In the first year of the course, the studio activities of the students are geared towards recording individual sounds and splicing them together. The thing about the French approach is that emphasis is put on the sounds themselves speaking to a greater or lesser extent.'
This seems perfectly reasonable to me, as it's incredibly easy for a student to use the sounds of a commercial synthesiser in a way that's dictated more by the way in which commercial music uses such prepatched instruments than by the intuition of the composer's mind. I asked Simon about the origin of the course at City: 'The course was planned in the early seventies and some money was given by the Worshipful Company of Musicians to establish two studentships in electronic music. The first was mine, from '74 to '76, and the second was Kevin's, from '76 to '78. Meanwhile, the degree course started in '75, and all along it was intended to bridge science, technology, art, music and ethnomusicology. So, the course consists of three pillars: scientific aspects (sound in nature), cultural aspects (sound in culture), and performance.
'The obvious influence on the course has been that of York University because statistically the number of members of staff that have been to York is rather large. If anything, we're more technologically-based than them, but we also have a very strong performance link with the Guildhall School of Music and Drama. The scientific aspects of music have always been very important, and, for certain modules of the course, students are sent across the road to the Physics department.'
Simon then outlined the ways in which people can enter courses in music at City University: 'The BSc in Music is a normal degree course entered via the UCCA scheme. We interview people before they've taken their 'A' levels and also give them an audition and a short written paper. A Grade 8 pass on an instrument is formally required, but we don't trust it — that's why we include an audition as well. Usually, 5 points at 'A' level — 2 Ds and an E, for instance — are sufficient to enter the course, and these need not include music. We also have some postgraduates and adult education classes. The latter take the form of two hour blocks on a Monday evening, and there's a basic and advanced course on electroacoustic techniques. Also, the studio is available for hire by composers and others at rates dependent on the project.'
It's encouraging to hear that a University studio is prepared to open its doors to outsiders, especially so when one of the facilities likely to be included in this offer is the Fairlight CMI. Before the arrival of this piece of equipment, the studio's synthesisers had been purposely limited to the ever-popular VCS3s. The Fairlight will obviously tie in well with the electro-acoustic philosophy of the Music Department, as its voice cards specifically hold 'sound objects' in the form of waveform memory. The Fairlight has been given a special studio all to itself, not surprisingly, but mainly so there's some degree of control over the number of grubby paws running over its keys.

The main studio at City University includes two 4-track machines and one 8-track machine, plus the fairly obligatory Dolby A noise reduction. There's also an EMT echo-plate (several thousand pounds' worth!) waiting to be installed, which City were very fortunate to pick up at a knock-down price at the recent Abbey Road auction.

The studio is obviously only as good as the people using it, and City University seem to be very fortunate in having Alejandro Vinao, a composition research student, as one of their main musical ambassadors. Two of his pieces, 'Other Fictions: GO' and 'Una Orquestra Imaginaria', won first and second prizes respectively at the '79 and '81 Bourges International Competitions for Electro-Acoustic Music. Alejandro has very kindly prepared some excerpts of these exciting pieces, and they can also be heard on the demonstration cassette.
While studio-based electroacoustic work appears to be in a fairly healthy state in this country, judging by the success of composers like Alejandro Vinao, Dennis Smalley, and others, and the fact that the Electroacoustic Music Association includes something like twelve University installations, the same can't be said for the live performance applications of electronics in 'serious' music.
'I think that's correct, in the sense that the use of electronics in music isn't very pluralised in this country, but EMAS has an equipment pool which is being hired out on an average of once a week, and there are a number of small groups like Lontano, Singcircle and Electric Phoenix that are very involved with the use of electro-acoustics.'
However, in many cases, the 'electronics' are used in a very passive way — switch on the tape and play along with it — and it would be really refreshing to see another group like Intermodulation where the members actually interacted with the electronics.
'Yes, that's true. In the 60s there were a lot of composer-performer groups and there doesn't seem to be that sort-of experimentation now. I mean, there are very few composers actually writing live electronic pieces. I think this must just be part and parcel of the much more conservative attitude to experimentation in music as compared with ten or so years ago.
'I don't actually think that Britain is really in a worse position with regard to contemporary music than anywhere else. After all, the neo-romanticism (not to be confused with the "new romantics"!) of the 70s is prevalent all over the world. The one area that is forging ahead is technology, but this seems pretty likely to outstrip any creative development, considering the innate conservatism of the British public, and this makes it very important to introduce the technology at an early stage into Music Academies and Universities, which, so far, seem to be making very little effort to come to terms with the new technology.'
Finally, I asked Simon about how he saw the Music Department developing in the future. 'We'd really like to move into yet more interdisciplinary areas, like the psychology of music. Ideally, I'd like a totally free creative ethos wedged between various disciplines of science, so that the whole lot could interact in a really creative way. My ideal would be a team consisting of a psychologist, a computer programmer and a technologist, and then we'd have the start of a feedback loop for the people involved in producing music so that machines could be developed to suit their needs.'
Anybody interested in knowing more about the courses offered by The City University's Music Department, the possibility of using the studio, or the activities of EMAS, is invited to contact Simon Emmerson at the Music Department, The City University, (Contact Details), and I'd like to thank Simon Emmerson and Kevin Jones for being such good hosts.
Feature by David Ellis