Macromusic (Part 4)
Beyond the Groove
Having commenced his attack on computer music that simply played back like some glorified tape recorder, Max Mathews was under strong pressure to justify his standpoint with workable alternatives. GROOVE went a certain way in this direction, but such hybrid techniques of synthesis — computers controlling analogue synthesisers — negated the tremendous versatility inherent in the 'pure' digital synthesis of Music V and its descendants. However, the GROOVE system, for all its analogue synthesis limitations, was a very productive system — particularly because of the apparent efficiency of the 'successive-approximation improvisation' (gulp...) achieved with it by a continual process of editing/feedback/editing/feedback/etc.
One of the originators of GROOVE, F.R. Moore, has also claimed that "the presence of a listening, thinking, feeling human being in the production feedback loop, rather than as a mere observer of the production, is readily audible in the results." That realisation doesn't come as any great surprise, but, of course, life is made more complicated in the design stage if real-time or almost real-time feedback is desired. The goal to develop a computer music system that meets every eventuality as regards demands from composers and performers must be a powerful driving force, and this is well typified by the Computer Audio Research Laboratory (CARL) that's under construction at the University of California, San Diego. Figure 1 gives the schematic for such a 'functionally complete' computer music system. However, there was one essential difficulty in going from the GROOVE system to interactive digital synthesis, and that was the problem of making the latter operate fast enough to function in real time.
Fortunately, the 70s saw the emergence of suitably fast digital hardware that permitted real-time synthesis. Two digital engineers working at Bell Labs, Peppino di Giugno and Hal Alles, produced a sound generator building block consisting of a digital oscillator time-multiplexed 64 ways (giving 64 individual digital sound generators) on a single 8" x 10½" board (a total of about 160 cheap ICs). Development of this hardware was given a considerable incentive by the fact that Boulez's great white hope, IRCAM, was then on the lookout for a suitable real-time synthesis system to operate alongside the trusty Music V on a PDP-10 computer. In fact, the original idea of Luciano Berio, the head of the computer music side of IRCAM, was to construct an analogue synthesiser with 1,000 oscillators! It was tactfully explained to him that this wasn't the best way of going about synthesis in the digitally-enlightened 70s, no matter how generous the French government's grant might be, and the di Giugno/Alles board became IRCAM's so-called '4B machine' for implementing real-time synthesis. However, there were a fair number of problems associated with this hardware, and it's worth noting them just to appreciate the difficulties that 'classical' electromusicians had in pursuing their various goals at this time (1977) and the remarkable advances that have been made since then.
The 4B machine ran at a 32 kHz sampling rate, giving a practical upper limit to the audio bandwidth of 13 kHz, but generated envelopes by a piecewise approximation method using a series of ramp waves. The big problem was that these envelopes were generated at 4 kHz, which produced an annoying 'chirp' on some very fast envelopes, and required the controlling computer to initiate a new ramp each time an old one ended. This was a lot of work for the computer (which tended to slow down under the strain) and a lot of work for the composer (who tended to do something fairly similar). Furthermore, the design of the board was based around unit generator-type 'module definitions', and these were hard-wired into PROMs in such a way that repatching wasn't very practical. The unit generator concept also extended to using (or 'burning') some of the 64 oscillators for other purposes, such as envelope generation. Thus, the initial promise of 64 tone generators tended to evaporate as soon as more complex synthesis was mentioned.
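The 4B's envelope scheme, and the 'chirp' problem it caused, can be sketched in a few lines of code. The 32 kHz audio rate and 4 kHz envelope rate come from the description above; the breakpoint format and the particular envelope shape are invented purely for illustration:

```python
# Sketch of 4B-style envelope generation: a piecewise-linear envelope is
# only updated at a low control rate, so it 'staircases' between audio
# samples. Rates are from the article; breakpoints are illustrative.

AUDIO_RATE = 32_000    # audio samples per second
CONTROL_RATE = 4_000   # envelope updates per second
STEP = AUDIO_RATE // CONTROL_RATE  # audio samples per envelope update (8)

def ramp_envelope(breakpoints, n_samples):
    """breakpoints: list of (time_in_seconds, level) pairs."""
    env = []
    level = breakpoints[0][1]
    for i in range(n_samples):
        # The level changes only every STEP audio samples; on a very
        # fast envelope those coarse steps are the audible 'chirp'.
        if i % STEP == 0:
            t = i / AUDIO_RATE
            # find the active ramp segment and interpolate linearly
            for (t0, l0), (t1, l1) in zip(breakpoints, breakpoints[1:]):
                if t0 <= t <= t1:
                    level = l0 + (l1 - l0) * (t - t0) / (t1 - t0)
                    break
            else:
                level = breakpoints[-1][1]
        env.append(level)
    return env

# A 2 ms attack gets only 8 control updates before it peaks
env = ramp_envelope([(0.0, 0.0), (0.002, 1.0), (0.01, 0.0)], 320)
```

Note, too, that each time a ramp segment ends the controlling computer had to be interrupted to set up the next one, which is where the processing strain described above came from.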
The next development in the 4n series of IRCAM 'sound processors' was the 4C machine, another single board digital synthesiser designed by di Giugno. This included 64 oscillators with up to 16 different waveforms at any given time, 32 envelope generators, and various other timing and control functions. The standard 4C system used a PDP-11 minicomputer, a single 4C synthesiser board, and up to four DACs. With everything running full ahead, the 16 kHz sampling rate brought the effective bandwidth down to about 6.4 kHz, but, by judicious mixing of sampling rates internally, and by sacrificing some of the oscillators, high quality, flexible synthesis was perfectly feasible. The basic 4C system has also been modified considerably to include digital recording/processing of live sound via the connection of an ADC and a direct memory access (DMA) controller (Figure 2). As we'll see in a later article, DMA is really a great boon for real-time digital synthesis because it enables DACs and ADCs to gain access to memory without the time-consuming intervention of a microprocessor, and, with current processor technology, you really need that extra speed!
The most recent piece of di Giugno hardware at IRCAM is the 4X machine. This impressive system is designed primarily for the synthesis and processing of natural sounds, including as it does 16 ADCs and 16 DACs, and can perform real time Fast Fourier Transforms (a complex mathematical operation that breaks sounds down into their harmonic constituents); carry out linear predictive coding (the technique used by the Texas Instruments 'Speak & Spell' type of chips) of speech and music; and record, edit, and mix sounds input from microphones.
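To make the Fourier transform idea concrete, here's what 'breaking a sound down into its harmonic constituents' actually computes. The 4X did this with dedicated hardware at audio rates; the toy version below is a plain discrete Fourier transform in Python, with an invented test waveform, just to show the principle:

```python
# Toy illustration of harmonic analysis: a DFT picks out which
# harmonics are present in a sampled waveform, and how strongly.
import cmath, math

def dft_magnitudes(samples):
    """Magnitude of each frequency bin up to half the sampling rate."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

# A waveform containing a fundamental plus a quieter 3rd harmonic
N = 64
wave = [math.sin(2 * math.pi * 4 * t / N)
        + 0.5 * math.sin(2 * math.pi * 12 * t / N)
        for t in range(N)]

mags = dft_magnitudes(wave)
# Energy shows up only in bins 4 and 12, the two harmonics we put in
```

A real-time FFT does exactly this, but with a fast factorised algorithm rather than the brute-force sum shown here, which is what made it feasible in 1970s hardware at all.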
One of the more recent uses of the 4X machine was in the performance of Boulez's 'Répons' at this year's Proms, where it was used to transform in real time the sounds of six solo instrumentalists. This is definitely an intelligent way of using intelligent synthesisers, offering as it does the spontaneity of a live performance with the extraordinary possibilities opened up by real-time digital sound processing. However, the 4X machine would appear to have nothing on the utterly awesome machine that's being developed at Lucasfilm Ltd. in San Rafael, California.
The digital audio project at Lucasfilm Ltd. (established by George Lucas of 'Star Wars' fame), under the direction of James A. Moorer, a stalwart of the computer music field, was set up to produce a successor to the Moviola, a machine invented in 1900 for the purpose of transferring sound tracks onto film and still going strong 80 years on. The Lucasfilm machine is called the Audio Signal Processor (ASP) and is constructed from one to eight Digital Sound Processor (DSP) units. Such unglamorous titles hide the mind-boggling technology of 3,600 emitter-coupled logic chips in each DSP. The point about emitter-coupled logic is that it's very fast (more than twice the speed of Schottky TTL logic) and, when put together in the form of the ASP, will perform some 140 million operations per second! Remember that a Cray-1 gets up to about 33 million operations per second on a good day and a 1 MHz 6502 microprocessor can just about drag its feet along with 330,000 simple arithmetic operations in the same time period.
Each DSP unit handles 16 ADCs and 16 DACs (16-bit, of course) configured as sound transformation channels, with overall control coming from a 68000 processor embedded in each DSP. Wavetables, interpolation functions, reverb algorithms, or whatever are stored in 3 Mbytes of memory in each DSP unit. These DSPs also make waveform sequencing, and most other software tricks aimed at producing timbral variety, totally redundant, because with all that processing power, they're able to perform the real McCoy of digital filtering — in fact, each DSP is capable of generating sixty 2nd order filter sections without blinking (ie. without slowing down the sampling rate). As if all that wasn't enough to make even a Venusian green with envy, a real-time console — also based around a 68000 processor — provides just about every interactive control one could possibly wish for. The design of the console is such that every control input (knobs, joysticks, digital faders, touch-sensitive strips, keyboard, or 'Lucasmouse') can be assigned to any more-or-less musical parameter, such as modulation index, spatial mixing, reverberation contours, and so on.
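A '2nd order filter section' (nowadays usually called a biquad) is a surprisingly small amount of arithmetic: five multiplies and four adds per sample. The sketch below shows the standard Direct Form I recurrence in Python; the coefficients in the example are chosen only to make the behaviour easy to check, and are nothing to do with Lucasfilm's actual filter designs:

```python
# One 2nd-order filter section (biquad), Direct Form I:
#   y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
# Each Lucasfilm DSP could run sixty of these per sample period.

def biquad(samples, b0, b1, b2, a1, a2):
    x1 = x2 = y1 = y2 = 0.0  # filter memory (previous inputs/outputs)
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x   # shift the input history
        y2, y1 = y1, y   # shift the output history
        out.append(y)
    return out

# With b0=1 and all other coefficients zero, the filter is a
# pass-through, so the signal emerges untouched:
ident = biquad([1.0, 0.5, -0.25], 1.0, 0.0, 0.0, 0.0, 0.0)
```

Chaining such sections in series or parallel builds up arbitrarily elaborate equalisation, resonance, and reverb-style responses, which is why sixty of them per DSP makes most 'timbral variety' software tricks redundant.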
Quite honestly, a machine like the ASP makes the average computerised mixing desk look old hat! Mind you, given the tired state of the film industry, it's hard to see how such a revolutionary machine could make any sort of inroad into Moviola territory. There's also the old adage of "garbage in equals garbage out", and the same is likely to be true of the humble film music composer struggling to come to terms with such a system. Still, it should ensure some amazing sound tracks for sequels to 'Star Wars' and 'The Empire Strikes Back' (assuming that there are still cinemas around to show them).
Whilst one half of the 4B design team was beavering away at IRCAM, the other half, Hal Alles, returned to Bell Labs to work on his own digital synthesiser hardware. What actually emerged was a single board design, comprising a mere 110 ICs, offering 32 independent digital oscillators, envelope generators to control amplitude and frequency, as well as time functions and FM inputs for each oscillator. In fact, only one high speed oscillator circuit is used, but it is time-multiplexed 32 ways to generate the multiple oscillators (each with a 32 kHz sampling rate). Each oscillator is controlled with eight 16-bit registers in RAM (addressed as sixteen 8-bit words) which is interfaced to the bus of the controlling microprocessor so that it appears in the processor address space. All this makes for an easily-controlled, high quality system (especially if 16-bit DACs are used) for real time digital synthesis, and the long and the short of this happy hardware story is that the Alles design ended up in the memory map of a Z80 as the driving force of a commercial digital synthesiser, the Synergy (from Digital Keyboards Inc.).
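The time-multiplexing trick deserves a sketch of its own. One fast circuit pretends to be 32 oscillators by visiting a bank of per-voice registers once per sample period, and the controlling processor plays no part in sample generation at all: it just writes registers, exactly as if they were ordinary memory locations. The register layout below (phase, increment, amplitude) is a deliberate simplification of the eight 16-bit registers the Alles board actually used:

```python
# Sketch of an Alles-style time-multiplexed oscillator bank: a single
# 'circuit' (the tick function) serves 32 voices per sample period by
# cycling through a memory-mapped register bank.
import math

N_OSC = 32
TABLE_SIZE = 256
# one shared wavetable (a sine) for simple wavetable synthesis
WAVETABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

# the 'memory-mapped' register bank: one register set per oscillator
registers = [{"phase": 0, "increment": 0, "amplitude": 0.0}
             for _ in range(N_OSC)]

def tick():
    """One sample period: the single circuit visits all 32 register sets."""
    mix = 0.0
    for r in registers:
        mix += r["amplitude"] * WAVETABLE[r["phase"] % TABLE_SIZE]
        r["phase"] = (r["phase"] + r["increment"]) % TABLE_SIZE
    return mix

# The controlling processor starts a voice simply by writing registers,
# e.g. voice 0 at a pitch set by the phase increment:
registers[0].update(increment=4, amplitude=1.0)
sample = tick()   # first sample of the new voice
```

Since the registers sit in the processor's own address space, a humble Z80 can 'play' all 32 voices just by doing memory writes, which is precisely how the Synergy drives this hardware.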
In fact, the original testbed system built around the Alles board was a fairly lavish affair, including dual disk drives, 64K of RAM, and many performance and software features (notably those for constructing sounds and very extensive multitracking facilities) left out of the Synergy. This system was (is?) actually marketed under Crumar's name (one of Digital Keyboards' backers) as the General Development System (GDS) and includes amongst its users Wendy Carlos, Klaus Schulze, and Billy Cobham, to name but a few.
Max Mathews was also involved in some of the design considerations of the GDS and the Synergy, but his main preoccupation from Music V days onwards has been to improve the composer's relationship with the computer. After all, the whole idea was to do better than some pianist playing a piece of graceless contemporary music, and that meant devising ways and means of getting away from the 'sewing machine music' label of a computer that played like a switched-on Bach.
The first device that Mathews developed, with a view to real time control of the playback process, was the 'electric baton'. The idea behind this was that it should allow the composer to rehearse and conduct the computer's performance by setting tempos and balancing voices in real time. The rehearsed performance could then be stored, and subsequently played back as a closer approximation to the real intentions of the composer, or further conducted as a 'live' performance. The GROOVE system provided the interactive speed needed for such real-time control, with Pierre Boulez being one of its main users (in conjunction with the Conductor program).
However, a more potentially useful development for promoting musician-machine interaction is Mathews' 'sequential drum'. Instead of producing a sound, the 'drum' produces three electrical signals. The first is proportional to how hard one hits the drum; it triggers the sounding of a note and determines its loudness. The other signals from the drum indicate to a computer where the drum was hit in terms of X and Y co-ordinates. The X signal might typically be used to control the decay time of a note across the stereo field. Thus, hitting the left side of the drum would slow down the decay of a note on the same side, leaving the right to decay at a faster rate. The Y co-ordinate can be used very effectively to vary the timbre of notes: hit at the top, one gets a rich harmonic sound; hit at the bottom, there's just the fundamental. The 'sequential' side of the drum comes into view when it's interfaced with the playback of a score already programmed and stored in memory. Each time the drum is hit, the player automatically gets the next note or chord in sequence (actually, this is rather similar to the 'one key' operation of the VL-Tone and other Casio sequencing keyboards!), thereby giving him plenty of time to concentrate on the more juicy details of timing, timbre, dynamics, and whatever else he cares to allocate to the tapping of fingers.
Figure 3 illustrates some ideas about performance and where the sequential drum fits in.
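The sequential drum's behaviour is simple enough to model in a dozen lines. The mappings (each hit advances through a stored score; force sets loudness; X sets decay; Y sets harmonic richness) follow the description above, but the note names and numeric scalings here are invented for the sake of the example:

```python
# A minimal model of Mathews' sequential drum: the score supplies the
# pitches, while each hit's force and (x, y) position shape the note.

SCORE = ["C4", "E4", "G4", "C5"]  # illustrative pre-stored note sequence

class SequentialDrum:
    def __init__(self, score):
        self.score = score
        self.index = 0

    def hit(self, force, x, y):
        """force: 0..1; x: 0 (left) to 1 (right); y: 0 (bottom) to 1 (top)."""
        note = self.score[self.index % len(self.score)]
        self.index += 1  # each hit fetches the next note, 'one key' style
        return {
            "note": note,
            "loudness": force,               # how hard the drum is struck
            "decay_time": 2.0 - 1.8 * x,     # left side = slower decay
            "harmonics": 1 + round(7 * y),   # top = rich, bottom = fundamental
        }

drum = SequentialDrum(SCORE)
# A hard hit at the top-left corner: first note, long decay, rich timbre
event = drum.hit(force=0.8, x=0.0, y=1.0)
```

The pitches themselves need no skill at all; everything the player's hands actually control is expression, which is exactly the division of labour Mathews was after.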
The interesting (and, for us musicians, important) thing that's happened over the past five or so years is the development of digital synthesis hardware, like that from the wire-wrapping pens of di Giugno and Alles, that shoulders a good deal of the responsibility for waveform generation, scaling, and so on. This means that the move from macros to micros as controllers of such hardware is possible without making whacking great sacrifices as regards quality, efficiency, or flexibility — factors that are amply demonstrated by Crumar's GDS and, more recently, the Buchla 400 system. Moreover, even the Lucasfilm Audio Signal Processor, a machine that must truly deserve the tag of 'state-of-the-art', exists and works by virtue of microprocessors (albeit 16-bit ones). But a measure of caution is also warranted at this stage, as it's all too easy for the musician or composer to lose track of his aim — creating music — in pursuit of whatever pot of gold lies at the end of the computer music spectrum. Devices like Mathews' sequential drum and Boulez's use of the 4X machine redress the balance by returning to the down-to-earth business of man-machine interaction with intelligent ways of playing intelligent music-making machines.
All the work in macrocomputer music that Mathews sparked off in the '50s has provided a vast amount of food for thought, and if this mini-series about music on macrocomputers has provided even a mere aperitif, I'll be delighted: the problem now is to guide the development of music on microcomputers in a similarly fruitful way. Max Mathews has one vision of the future: "Computers will add a new dimension to music, especially the home computer. It will be sufficiently easier to play so that many people who otherwise could only listen to music will become active musicians. This may be the biggest accomplishment of the home computer market." There are other points of view, of course, but, in the next article, we'll put controversy aside and go back to the basics of digital synthesis on microcomputers.
Feature by David Ellis