'Wee Also Have Sound-Houses'
Richard Attree looks at the computer as musical instrument, and the implications thereof.
MUSIC 5 at the City University: a composer's view by Richard Attree
Technology has become the dominant force in our culture. It shapes our lives as religion once did. The dominant technology has become digital and we have moved from the chemical into the electronic era (as once we moved into the mechanical, or industrial age). This leap has precipitated a climate of renaissance in which a creative, dynamic dialogue may once again arise between 'artist' and 'scientist'. All feel the need to forge a new language with concepts and techniques that span, rather than isolate, disciplines. This push toward generality and objectivity has resulted in the ideas of information theory, communication science, theories of games, strategies, heuristics, artificial intelligence and cybernetic thought in general (see the introduction to Norbert Wiener's seminal book,1 or read Schrödinger's ideas2). These are the modern ideas — to be understood because they are shaping art and science alike. They form the conceptual framework — the 'software'; and the hardware...?
Electronic computers have become the essential product: the sign-of-the-times as the internal combustion and steam engines have been in the past. Their increasing availability has been seized on by a great many interested parties. Among the motley crew of computer users found in, say, a university, we may now discover the odd composer or two (humming to themselves and perusing their line-print-outs). In fact, since Lejaren Hiller's pioneering Illiac Suite of 1956/57 the available technology has influenced much musical thought of the last two decades. This situation is not completely without precedent in the history of music, as, for example, the origins of harmony in Western music have sometimes been attributed to the invention of keyboard instruments. So let us see just how computers are being used by musicians; perhaps the arguments for their necessity will then fall into place.
I must first distinguish three approaches, all of which are popularly described as producing 'computer music'. The process that will occupy this article is 'software synthesis' — the job of synthesising composed musical ideas by calculating successive samples of the sound-wave. Here the computer is being used to generate a digital representation of the sounds themselves, whereas in the second approach it functions as a glorified sequencer to control specially designed sound-producing equipment. The computer provides voltages (if the synthesiser is analogue) or bits of information (in the very recent case of digital oscillators) that govern musical parameters previously controlled by manually produced voltages from keyboards.
The main advantage of this approach over the purist digital synthesis is that of being able to work in 'real-time', a piece of jargon that will no doubt amuse the philosophical reader, basically guaranteeing the possibility of improvisation where keys are punched and sounds occur immediately. A real-time digital synthesiser has been on the horizon for some time now and when it arrives, it should combine the accuracy of digital control with the immediacy of 'raw sound' hot from actual oscillators. Of course the question of the desirability of improvising systems is not completely settled when real-time is bought at the expense of limitations resulting from the designer's particular configuration of synthesiser modules.
Another type of 'computer music' that should not be confused with the digital synthesis of composed ideas, is music that has been composed by the machine. Often at this point, aesthetic prejudices and downright ignorance combine to produce a Hollywood nightmare of white-coated operators pushing buttons and piping the resultant muzak to the masses.
Computer synthesis is not really anything mysterious. Converting data from the composer into acoustic information is a translation process, usually one-to-one. Composition, on the other hand, is admitted by even the most hardened non-mystics to be more problematic and less susceptible to the sort of step-by-step analysis that computers require. Mind you, who would have thought that a commercially-available chess-playing machine would be a reality? Or that computers might be 'taught' how to prove theorems, recognise objects, or even speak a language? The trend is towards more intelligent systems and 'the possibility of a computer creating music, art or literature is perhaps obscure only because our pride forces us to believe these areas are man's exclusive provinces.' (Dr William H. Davidow's foreword to Music By Computers3). In other words, if we try to forget our prejudices, we may just find that this approach offers a great potential for real progress, even though it implies a radical re-thinking of the composer's role.
The relevant aesthetic question moves from 'is this an interesting theme?' to 'is that algorithm useful in producing interesting music-shapes?'.
The argument for such work is beyond my present scope; however the argument for digital synthesis is part of the movement towards greater accuracy in controlling the process of realising composed ideas that led composers to the classic electronic studios of the 50s and to the voltage-controlled synthesisers of the 60s. Software synthesis is in theory unlimited in its scope for producing complex waveforms accurately.
The pioneering work in this field was done by Max V Mathews at the Bell Telephone Labs. He wrote a series of programs, MUSIC 1 through 5, that allowed the composer to treat the computer as an infinitely complex synthesiser. Sound waves are digitally represented as a series of numbers. Each of these 'samples' defines an instantaneous value of the waveform's amplitude (see fig 1). The computer calculates this series of samples by reading in and manipulating data supplied to the synthesis program by the composer. To hear the sounds, we reverse this quantification process by converting the set of discrete samples into a smoothly varying AC voltage which may then drive an amplifier and speaker. This reverse process is known, naturally enough, as 'Digital-To-Analogue Conversion' (or D-A for short).
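The idea of 'calculating successive samples' can be sketched in a few lines of modern code. This is not Mathews's program, just a minimal illustration (the function name and parameters are my own) of what a series of amplitude samples for a simple sine tone looks like:

```python
import math

def synthesise(freq_hz, duration_s, rate_hz=40000, amplitude=1.0):
    """Compute successive amplitude samples of a sine wave -- the kind
    of number series a MUSIC 5-style program calculates for D-A conversion."""
    n_samples = int(duration_s * rate_hz)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / rate_hz)
            for i in range(n_samples)]

# 10 milliseconds of an A at 440 Hz, sampled 40,000 times per second
samples = synthesise(440.0, 0.01)
print(len(samples))  # 400 samples
```

Each number in `samples` is one instantaneous amplitude value; played back through a D-A converter at the same rate, the series reconstructs the smooth waveform.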
Of course any quantification of an analogue signal will introduce errors. Obviously the more often we sample the wave the better the approximation, and the faster the wave is changing the more often we need to sample it. In fact it may be shown (Mathews4, ch. 1) that to reproduce accurately a frequency of N Hz, we require a 'sampling rate' of at least 2N Hz. Thus to cope with the audio band of 20 Hz-20 kHz, our synthesis system must be capable of calculating 40,000 numbers for each second of sound. If this is to occur in real time, the computer must produce a new sample every 25 microseconds. This is just too fast for available systems to cope with the number of operations involved in computing each sample. Most music systems compute the samples outside real time and store them on disc or mag tape for D-A conversion later. This in itself creates storage-space problems, since at a sampling rate of 40 kHz, a three-minute piece requires storage room for over seven million numbers and may take five hours to synthesise (a processor time/music time ratio of 100:1 is typical for complex sounds).
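The arithmetic behind those figures is worth making explicit. A short sketch (the variable names are mine):

```python
rate = 2 * 20000          # Nyquist: at least twice the 20 kHz top of the audio band
piece_seconds = 3 * 60    # a three-minute piece

# Storage: one number per sample
samples_needed = rate * piece_seconds
print(samples_needed)     # 7,200,000 numbers -- 'over seven million'

# Synthesis time at a 100:1 processor-time : music-time ratio
ratio = 100
hours = piece_seconds * ratio / 3600
print(hours)              # 5.0 hours of computing
```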
The efficiency of the synthesis program is obviously of crucial importance. An important principle is that the operations that occur most often and need to be performed fastest are also the most repetitive and most easily generalised. For instance, the composer may define one cycle of a waveform and store the data. The OSC routine (the software simulation of an oscillator) may then repeat this cycle at any specified frequency. Often-used waveforms such as sine-waves or exponential segments are stored, and the library programs for 'looking up' values are written in machine code. The main bulk of MUSIC 5 is written in Fortran but the program runs much faster if key routines are rewritten in what is known as 'assembler' language. Stanley Haynes reports a 500% improvement in running MUSIC 5 at the City University as a result of re-writing the much-used GEN routines. The more complex operations such as envelope shaping do not have to be performed at audio frequencies. Incidentally, this crucially useful inverse relationship between the complexity and rate of occurrence of a given operation is also at work in the ear. Obviously there is not much room for subtlety at 20K.
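The table look-up idea behind an OSC routine can be sketched as follows. This is my own simplified illustration, not the actual MUSIC 5 code: one stored cycle is replayed at any frequency by stepping through the table with a phase increment proportional to the desired pitch.

```python
import math

TABLE_SIZE = 512
# One stored cycle of a sine wave -- the composer-defined waveform
sine_table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def osc(table, freq_hz, n_samples, rate_hz=40000):
    """Wavetable oscillator: repeat one stored cycle at any frequency.
    The phase increment determines how fast we step through the table."""
    increment = len(table) * freq_hz / rate_hz
    phase = 0.0
    out = []
    for _ in range(n_samples):
        out.append(table[int(phase) % len(table)])  # cheap look-up, no sin() call
        phase += increment
    return out

tone = osc(sine_table, 440.0, 400)  # 10 ms of a 440 Hz tone
```

The point is the inverse relationship the article describes: the per-sample work reduces to an index calculation and a table read, while the expensive trigonometry is done once, outside the audio-rate loop.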
Input to MUSIC 5 consists of two blocks of data: the 'instrument definition' and the 'score'. The former information specifies a network of interconnected 'unit generators'. These are modelled on analogue signal-processing modules — oscillators, envelope shapers, filters et al. The latter block is a list of 'notes', each of which specifies action time, instrument number, duration, frequency and any other parameters the composer sees fit to define. The program then arranges for the defined 'instruments' (see fig 2) to 'play' the 'score'.
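The two-block scheme can be sketched in miniature. This is a toy model in modern code, not MUSIC 5's actual input format: I stand in a single sine oscillator for the unit-generator network, and represent the score as a list of note tuples in the order the article gives.

```python
import math

RATE = 40000  # samples per second

def instrument_1(freq_hz, duration_s, amp):
    """A trivial 'instrument': one sine oscillator standing in for
    a network of unit generators (oscillators, envelopes, filters)."""
    n = int(duration_s * RATE)
    return [amp * math.sin(2 * math.pi * freq_hz * i / RATE) for i in range(n)]

instruments = {1: instrument_1}  # instrument number -> definition

# The 'score': one entry per note --
# (action time s, instrument number, duration s, frequency Hz, amplitude)
score = [
    (0.0, 1, 0.5, 440.0, 0.8),
    (0.5, 1, 0.5, 330.0, 0.6),
]

def play(score, total_s=1.0):
    """Have the defined instruments 'play' the score: mix each note's
    samples into the output buffer at its action time."""
    out = [0.0] * int(total_s * RATE)
    for start, instr_no, dur, freq, amp in score:
        note = instruments[instr_no](freq, dur, amp)
        offset = int(start * RATE)
        for i, s in enumerate(note):
            out[offset + i] += s
    return out

buffer = play(score)  # one second of sound: 40,000 samples
```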
This approach may seem limited. While it is clear that radically superior synthesis programs will be written in the future, it must be conceded that MUSIC 5 works, in that it allows the composer to bootstrap his way into this potentially infinite network of possibilities. There is something intuitively natural about analysing and synthesising sounds in terms of envelope structure and harmonic spectra. This naturalness means that a composer may program real or imagined sounds in terms of one-function blocks, each of which takes care of one perceived aspect of the timbre he is building. This model has provided a tool for researchers in acoustics and psycho-acoustics to build a back-up library of analysed timbres. The composer may then move from the known to the unknown by varying parameters that have been found important and using instrumental tones as points of reference (see Risset's catalogue5).
At the City University, MUSIC 5 exists already as a package on the University's twin ICL 1905E. It is planned to run a version on an often-vacant hybrid computer — an EAI 640, whose core has recently been expanded from 16K to 32K words (MUSIC 5 requires 25K of storage). If this is successful, it should mean interactive (not quite real-time) music synthesis for one user at a time. Long-term objectives are obviously geared towards building a real-time music system by interfacing a digital synthesiser — or, to look at it another way, by gradually replacing key MUSIC 5 software with high-speed microprocessor-based hardware.
The synthesis package was set up by Stanley Haynes who pioneered software synthesis in the UK at Southampton University. Input may be punched-card form or from an on-line terminal. Since the program uses vast amounts of disc-space and 'mill-time', it is usual to limit the on-line mode (via the time-sharing system 'maximop') to test runs and to leave production runs in the batch queue to be run overnight via 'George 2'. Output samples are stored on disc, rewritten to mag tape, and taken over to the D-A converter for audio playback. The D-A facilities were the result of a collaboration with the Computer Unit and as such are a good example of the potential for inter-disciplinary work that exists within a university environment. The sounds are finally heard through Quad 405s and Celestion speakers and recorded on a Revox. The hi-fi end will hopefully be improved when the computer music lab is hooked up to the electronic music studio with its 8-track, Tannoys and noise reduction!
My main criticisms of the system as it stands are: the 'classic' groan at 24-hour turnaround; the lack of digital editing facilities; the time-limit on sections (typically 40 seconds); and the lack of disc-space for stereo — it's 'back to mono' at TCU. These limitations all combine to make large-scale composition impractical and really restrict the composer to a series of sonic experiments. This is, however, fitting, as the system is barely a few months old and MUSIC 5 itself only a few years, so we all have the sense of exploring new territory. Indeed each new run arouses great interest, as there is a real possibility of totally original sounds. I must add also that Stanley Haynes' plans as outlined above would eventually resolve all four of my reservations and result in a music system of great power and flexibility.
A musical instrument is a tool for the sonic realisation of musical ideas. As such the computer is no more unnatural than a piano which is, of course, a fiendishly complex machine. The parameters by which the musical results are judged I leave to aesthetic wrangling; those which govern the success of the program define the potential for subtlety built into the system. As in the past, the best instruments become an extension of the musician. The success of the system that physically embodies the program (the 'package') is to be judged by its speed and fidelity. Historical necessity has led composers to digital systems. At the moment, we are just approaching the foothills but at least we can see a way out of the present musical trough. New symphonies, however prestigious, can only further erode the valley. We await a new breed of composer with Bacon's Atlantis in his sights 'who will at last musicalise, and make fruitful, the often barren experimental harvest of his predecessors'. (Donald Mitchell)
1 Norbert Wiener — Cybernetics
2 Erwin Schrödinger — What is Life?, Cambridge University Press, New York, 1946
3 Various researchers — Music by Computers, John Wiley, New York/London, 1969
4 Max V Mathews — The Technology of Computer Music, MIT Press, 1969
5 J C Risset — An Introductory Catalogue of Computer-generated Sounds, Bell Telephone Labs, Murray Hill, New Jersey