University of Surrey Electro-Acoustic Music Studio
A look at the Surrey Sound electronic and computer music studio, scene of the Tonmeister degree course.
The University of Surrey has achieved some degree of notoriety on account of its extremely adventurous 'Tonmeister' course, a 3-year degree course that turns fledgling musical and technical talent into fully-rounded, musically-literate recording engineers (or, as the German has it, 'Tonmeisters'). So, it's not surprising to find that the Music Department has two well-equipped studios in which students are instructed in the gentle art of persuading magnetic particles to align themselves in more or less musical ways. However, there's also an expanding Electro-Acoustic Music Studio run by Robin Maconie, a lecturer in the Department of Music, and it's this that's our port of call in Studio Scene this month.
Lists of equipment are always of some interest - if only for ogle value - but in the Electro-Acoustic Music Studio's case, it clearly reflects the trends afoot in the outside world:
Digital: 2 Apple IIs, Alf music boards, Mountain Computer MusicSystem, and Soundchaser keyboard with Turbo-Traks 16-track software.
Analogue: VCS3, Synthi AKS, random voltage generator, pitch-to-voltage converter, 8-octave filter bank, Coloursound 8-way filter bank, Powertran vocoder.
Ambisonics: Calrec Soundfield microphone and control unit, Audio & Design Ambisonic Transcoder/UHJ encoder, Minim AD-2 Ambisonic decoder.
Recording: Sony SL-F1 VCR/PCM-F1 digital audio processor, Ferrograph Studio 8, Teac A3340, NEAL cassette-recorder.
Armed with this list, plus some knowledge of Robin Maconie's interests (courtesy of his book on Stockhausen and a coauthored article in New Scientist on computer composition), I spent an enjoyable afternoon on the University's campus in Guildford talking about micros and their musical applications in an educational context...
If a student is studying here, when would he or she get introduced to the Apple-based systems you've got here?
Well, there are formal lessons in the second year for both the music and Tonmeister students, the options being either straight programming using the Alf cards or the Soundchaser, or using the systems as an aid to composition. After their second year, the Tonmeister students go away for their industrial year, but I have them again for a special course in the third year called 'electro-acoustic music', during which they can do projects of an imaginative kind in digital synthesis or whatever. Some of the new lot of students coming back this year will probably gravitate towards electro-acoustics, and especially into the building of some sort of interface for the Apple or a more powerful system.
By 'interface', do you mean an alternative means of entering music?
Yes. The problem at the moment is that there's quite a lot of hack work involved in coming to terms with the equipment. I mean, it seems a pity that you have to fuss around with the Apple keyboard or the game paddles in order to enter notes. Also, there are disadvantages to a piano-type keyboard, in that you start thinking in terms of the piano, and that can be the kiss of death. I'm sure there must be some sort of modified interface between the composer and computer that would enable the positive virtues of the microprocessor to be utilised by the composer without the disadvantage of having to approach it through an alien sort of technology.
Do you find that students take easily to these Apple systems?
I've found that those who aren't mathematical, but are keen to do composition, can come to terms with something like the Alf's Entry program very quickly. Because of that, it's a good entry point for them into the microprocessor world and, thanks to the subroutine facility in the Alf program, they can experiment with reprises and ostinato patterns. In fact, some of them have produced quite spirited Steve Reich-type pieces as a result. The third way we use the Apple is for analysis. I get all the students to take some pieces of classical music - a Bach Invention or Stravinsky's 3 Pieces for String Quartet, for instance - which can be written out as a series of subroutines which reprise at different time intervals. The mere act of writing out these pieces, using the Entry software, is a straightforward and practical way of analysing music. Once they've done that, we move on to envelopes, timbre, and so on.
What about programming itself - how many students come onto the course with prior computing knowledge?
Well, more and more are, because we're now acquiring students at first-year level who've been introduced to micros at secondary school. We're talking about a development that's only taken place during the last two or three years, really. One or two students show a great deal of ability and, to be honest, I depend on the students' fresh ideas and interests to take what I'm doing further.
Could you show me what you've been doing with the Alf card on the Apple?
Well, I'll tell you about the Canon program first of all. This was produced by a student at the request of Colin Gough, our external examiner at Birmingham University, who wanted a program to produce a canon that'd be very easy to use. In fact, it's got a lot of relatively sophisticated bits and pieces, including the ability to program a melody in major and then convert it to minor or pentatonic. A couple of other good things about it are that you hear the melody as it's entered and you get a visual display of the notes when they're playing. Of course, the real limitation of the Alf cards is that they only produce square waves, but I've done some work with them using a technique that Stockhausen used in Kontakte - that of speeding-up melodies to create unusual sounds. For instance, here's Footsteps, which creates the effect of footsteps in gravel...
Very realistic! How did you get that effect?
By trial and error, really. If I slow it down, you'll be able to see that it's actually just a tune, played with a shortish envelope, speeded-up so that the pitches lose their individuality and turn into a different sort of sound. Another effect is Windsurge, which produces a surging sort of noise that gradually forces its way upwards. In fact, it was based on harmonic regions, so that it starts with a wobbling around a fundamental, then there's a shift up to the first harmonic, a wobble around the second, and so on. Then, when you speed it up, it turns into a random noise which sounds as if it's being blown through a tube.
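The speed-up idea is easy to demonstrate in outline. Here's a sketch in modern Python rather than the studio's actual Alf software - the note values, envelope figures, and function names are mine, purely illustrative: the same tune is rendered twice, once slowly enough to hear as a melody, and once so fast that the individual pitches fuse into a gravelly texture.

```python
import math
import random

SR = 22050  # sample rate in Hz

def square(phase):
    """Naive square wave for a phase given in cycles."""
    return 1.0 if (phase % 1.0) < 0.5 else -1.0

def render(melody_hz, note_len_s, decay_s=0.005):
    """Render a list of pitches as square-wave notes, each with a short
    percussive (exponentially decaying) envelope."""
    out = []
    samples_per_note = int(SR * note_len_s)
    for f in melody_hz:
        for i in range(samples_per_note):
            env = math.exp(-i / (SR * decay_s))
            out.append(env * square(f * i / SR))
    return out

random.seed(1)
tune = [random.choice([220.0, 247.0, 262.0, 294.0, 330.0]) for _ in range(32)]

slow = render(tune, 0.25)  # clearly audible as a melody
fast = render(tune, 0.01)  # 100 notes a second: pitches blur into texture
```

At 100 notes a second the note rate itself approaches the audio range, which is why the ear stops tracking pitches and hears a single composite sound instead.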
I guess it's really like using a shift register to produce various flavours of noise. It's useful to know that square waves can be pushed a bit further!
Yes, and it shows that lively musical sounds can be produced by applied programming techniques despite hardware limitations. Obviously there's a hell of a long way to go, but it gives me a few crumbs of comfort! Of course, we're always interested to swap ideas with other people using this sort of equipment.
When did you get the Soundchaser?
We got the basic system in April, and then the Turbo-Traks software update a few months later. The big advantage of it over the Alf system is that you've got so much more control of timbral quality. For instance, it's difficult to define precise envelopes with the Alf but very easy with the Soundchaser. On top of that, you've got the ability to define the harmonic make-up of a sound very accurately.
What sort of things have you been doing with it?
Apart from its obvious use as a self-contained synthesis/recording set-up, we've also been using it for experimenting with composing programs. Of course, normally you're restricted to entering notes from the keyboard, but a student of mine, Alan Reekie, has developed a program called 'TMaker' that generates note files from whatever composing rules you feel like putting in the program. So, after the program has generated the file, you can give it to the Soundchaser to play it. For instance, there's one version of the program called 'TBaroque' that produces a three-part composition, with all three parts running at different speeds, and the two upper parts harmonised to the cantus firmus.
What sort of rules have you used to generate the two top parts?
I think what it does is to calculate a random interval and then harmonise the other parts. Subsequent intervals then get chosen out of a limited range of options - a semitone or tone up or down, as I recall.
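As a rough sketch of the rules described here - in Python, and with a function and parameters of my own invention rather than TMaker's actual code - the melody line amounts to an opening leap followed by a constrained random walk:

```python
import random

def melody(length, start=60, seed=None):
    """Generate a melody as MIDI note numbers: a random opening interval,
    then each subsequent step a semitone or tone, up or down."""
    rng = random.Random(seed)
    notes = [start, start + rng.randint(-12, 12)]  # arbitrary opening leap
    while len(notes) < length:
        notes.append(notes[-1] + rng.choice([-2, -1, 1, 2]))
    return notes

line = melody(16, seed=3)  # one illustrative sixteen-note line
```

Harmonising the other parts against this line would then be a separate pass over the generated note file.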
So you're not actually going beyond a sort of first-order transition scheme, where the choice of each new note takes no account of anything earlier than the note before it?
Yes, I think this one is more entropic - you start from certainty and proceed towards uncertainty. This is only one example and the musical results tend to be a little turgid, but I've worked on other programs. For instance, there's one called 'Maverick', which has a much freer sort of melody composition. One of the very provocative things about working in this field is that you learn that you can produce music which is more primitive or more avant-garde simply by stretching or compressing the same rules.
This way, you have a basis for evaluating melody which has nothing to do with culture - it's entirely to do with numbers. The upshot of this is that you start hearing a Bach Invention in terms of parameters that you probably weren't aware of before. Similarly, when you hear Xenakis, you realise that this is in fact very primitive music, though spreading it over five octaves rather than one turns it into something that appears avant-garde. Of course, to be able to demonstrate the continuity between one form of melody and another is a new or a renewed way of approaching music. Renewed because I think that some of the medieval and renaissance composers were on the ball with this - composers one tends to think of as less sophisticated than our equal-temperament tradition. In fact, their music has a kind of elegance and precision which we can associate with the architecture of the time.
And, of course, elegance and precision is what you need when working with micros.
Yes, but one of the difficulties that has been facing computer music up to now is that its energy seems to be split between developing synthesiser techniques - experimenting with timbres and so on - and the mathematics of music; getting a computer to analyse a piece of music and reproduce the style. So there's a polarisation of effort - some people see this equipment as a means of arriving at orthodox effects in a different sort of way and possibly want to make a million out of it, whilst others are fascinated by the philosophical or mathematical implications of it. It seems to me that trying to do digital synthesis simply on the basis of waveform construction is a grave defect, and the only argument for it is that it's cheap and easy, not that it tells you anything. It seems an illiterate way of going about things. In fact, I'm for going back to the way normal acoustic instruments are built.
But doesn't the modelling of acoustic instruments imply preconceived notions about sound synthesis?
I don't think so. It's been shown that the violin can be built by a person with no more than a piece of knotted string. He can derive all the dimensions of a Cremona violin. These instruments were built by people with ears but with very little literacy. They realised that production of beautiful sounds was the outcome of a series of procedures - a matter of getting everything right, including the grain of the wood, the age of the wood, the strings, the tension, the varnish, and so on - but the separate bits cooperated to produce a total effect that was very satisfying and beautiful. I think it should be like that with a machine - the better it's put together, the better the whole.
Yes, I see that, but by starting with the digital equivalent of a vibrating string or whatever, aren't you still really pre-defining the sound in the same way as using a VCO or a noise source in an analogue synthesiser?
Perhaps, but what I would like to see is the computational facility of a computer applied to modifying a data string rather than originating a data string. I simply assume this is more efficient because it doesn't require an exact model to be followed at any one point. So, what I'd like to see is a system that's designed to deal with sound synthesis as an organic procedure, rather than as a perfect sound reproduction system derived from racing through waveform number tables.
I suppose the nearest we're getting to producing your sort of 'organic' instrument with present technology is by means of FM synthesis and non-linear distortion techniques, where very realistic sounds are produced with comparatively little effort. In fact, judging by the new Yamaha breed of machines, it would appear that the ear does seem to be fooled pretty convincingly.
Well, to be honest, I've never been convinced by the examples I've heard of Chowning-type FM synthesis, and it seems to me that the sorts of people who argue that the ear is easily fooled are themselves easily fooled. There's a difference between achieving a manageable compromise with equipment that's obviously a compromise and understanding just how much precision you need to compute a convincing instrumental sound. No computer-synthesised tubular bell sounds like a real tubular bell because the real thing swings this way and that, which adds a whole different dimension to it. And the instant you hear that, you know what a real bell sounds like. The funny thing is that the better your illusion of a real sound is, the more boring it is to people because they don't hear any difference. It seems to me that we should be aiming for a radically different approach to the business of synthesising sounds, because the old-fashioned approach is dictated by economic or business concerns, rather than by an idea of coming to grips with the real nature of sound.
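For readers unfamiliar with the Chowning technique under discussion: a carrier sine wave is phase-modulated by a second sine, and letting the modulation index decay narrows the spectrum over time, which is what produces the characteristic bell-like quality. A minimal sketch follows - the frequency ratio, decay rates, and function name are illustrative choices of mine, not Yamaha's implementation:

```python
import math

SR = 22050  # sample rate in Hz

def fm_tone(fc, fm, index, dur, sr=SR):
    """Chowning-style FM: y(t) = e(t) * sin(2*pi*fc*t + I(t) * sin(2*pi*fm*t)).
    An inharmonic fc:fm ratio plus a decaying index gives a bell-like timbre."""
    out = []
    for i in range(int(sr * dur)):
        t = i / sr
        env = math.exp(-3.0 * t)  # overall amplitude decay
        idx = index * env         # modulation index decays too: spectrum narrows
        out.append(env * math.sin(2 * math.pi * fc * t +
                                  idx * math.sin(2 * math.pi * fm * t)))
    return out

bell = fm_tone(200.0, 280.0, index=5.0, dur=1.0)  # 1:1.4 ratio -> inharmonic partials
```

Maconie's objection above is precisely that such a tone is static in space and behaviour, however plausible its spectrum.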
Aside from the pros and cons of different methods of digital synthesis, I gather that you're also keen to put sounds in their own spatial perspective.
Yes, a very clear long-term objective of mine is to include ambient projection of a sound in the organic process I've been describing. So, one should be able to synthesise a sound in a series of stages, starting as a normal instrument does, with the vibration of a reed, string, or air column, modify that by giving it the resonant characteristics of a tube or box, and then split it four ways, assigning each of the four split signals to a separate EQ unit. In that way, you should be able to impose a kind of 3-dimensional radiation characteristic which is at least approximate to the real instrument.
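One crude way to sketch that last stage - the four-way split with independent EQ per direction - is four differently-tuned filters fed from the same mono source. The cutoff values here are arbitrary illustrations of mine, not a measured radiation pattern:

```python
import math

SR = 22050  # sample rate in Hz

def one_pole_lowpass(x, cutoff, sr=SR):
    """First-order lowpass: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff / sr)
    y, out = 0.0, []
    for s in x:
        y += a * (s - y)
        out.append(y)
    return out

def radiate(signal, cutoffs=(8000.0, 4000.0, 2000.0, 1000.0)):
    """Split a mono signal four ways, one per loudspeaker, each with its own
    tone colouring -- a crude stand-in for direction-dependent radiation."""
    return [one_pole_lowpass(signal, fc) for fc in cutoffs]
```

A real instrument's radiation pattern is of course frequency- and direction-dependent in a far more complicated way, which is Maconie's point about needing the whole organic chain.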
Surely there's more that can be done with just two channels? The Zuccarelli approach, for instance.
Well, a student of ours went over to Italy to interview Zuccarelli before he came over here, and she was bowled over by it. I must say that the demonstration I heard at the AES exposition in March, from the Italians he left behind - the imitation Zuccarelli, if you like - gave the impression of exaggerated height and depth, but very little front, back, and sideways. Actually, there was a paper in a very recent AES Journal which suggested that you can get a very good wrap-around effect with just two channels, but only as far as 270 degrees - you're still left with a quadrant that needs a third speaker. However, until you can produce laser holographic images which are complementary to ambisonic music, I think you're going to have difficulty getting your average consumer to bother with 3-D sound, as it were.
You've been doing some of your own ambisonic experiments, haven't you?
Yes, using those Kontakte-like effects produced with the Alf cards, we did a montage of effects, re-recorded them in a hall using the nearest equivalent we could make to a Soundfield mic, encoded it into a UHJ 2-channel format, and then played it back over 4 channels in a big hall down at the end of the campus on the occasion of a sort of flower display. There were people walking around this, and the sounds were basically sorts of bird chirpings, water and wind noises, and so on. If you like, it was muzak, but the sounds created an atmosphere that was noticed and appreciated by the audience. This seems to me a fair enough way of using the spatialisation of sounds and creating new kinds of audiences. You've got to get them to accept it without complaining, without feeling that they're being asked to do something incredibly difficult. You won't find a more difficult audience than the women in the Flower Association of Surrey!
I want to build on the tremendous interest in this technology that now exists within the University and the world at large. I'm working towards producing music which is electro-acoustic, computer-synthesised, or multi-media, but in a way that actively attracts an audience. Apart from the pure music side, there's the development of digital equipment for music production and recording. For instance, we're hoping to obtain a student project - a digital filter - that was produced for a higher degree in the engineering department. The chap who designed it seems to have made some important discoveries as regards the processing of audio signals in general. This sort of thing is an area we'd like to encourage. We don't have a research division as yet, but there's tremendous scope for it - especially if we can attract support for research scholarships in this field.
You must be in an excellent position to do that, given the on-campus presence of the Tonmeister course.
The Tonmeister course is a marvellous guarantee of sound recording knowledge, good ears, and good musical brains. It seems to me that we've got to look into the education of musicians more on the technical side. If we're going to make any progress, the last thing we want is a studio set-up which is really a kind of hobby room to which eccentric composer types are banished. It's my conviction that music should be central - as it was in medieval times - to the study of the Sciences. I think that music is a means of encoding number information in a form that allows relationships to be perceived with quite different rapidity and effectiveness. I believe that the challenge of electronic music, timbre synthesis, and so on, is of such importance that it'll lead to spinoffs just like the space programme.
And perhaps this will extend to primary and secondary school level...
Yes, indeed. But the great advantage of the microprocessor revolution is that music seems to be a part of that attraction, although it's at a very rudimentary level. Children are now able to start playing with electronic music in their own homes. What we need to do is to develop a teaching programme that will relate to what these children discover in a sensible and coherent way. I think we're at a very interesting time, because, after a period of very rapid expansion, there's a levelling-off of technology. I mean, we've got digital recording, which represents a plateau of quality, and similarly ambisonics. So, rather than having to keep up with constantly changing technology, which has been the fate of computer music up until the present time, and electronic music, too, we now have a stage where we can look forward to a period of relative stability. It's like the typewriter reaching a certain level of reliability, and then, over the next twenty years, developing its full importance and range of use.
Feature by David Ellis