
Macro Music (Part 3)

More music with mainframe computers


Macro Music is all about making music on large mainframe computers, and provides a starting point that will lead to articles on software techniques and hardware solutions for high-quality synthesis, as well as micro-controlled one-chip synthesisers and the latest commercial 'add-ons' for microcomputers.

University of Illinois Studio D (digital studio) in 1979.


FM Synthesis



Knowing what to put into an instrument definition and what to leave out isn't by any means a problem that's exclusive to digital synthesis. Faced with a battery of modules in a big Moog or Roland synthesiser, it's incredibly tempting to turn waveforms every which way but loose in an attempt to imprint your own distinctive label on the sound - designer synthesis, one might say. However, for the proto-synthesist of the '60s, the power of programs like Music V versus the inevitable naivety of their users created a good deal of frustration. It was this that prompted Max Mathews's comments about 'the psychoacoustic problem' and led to various explorations of this difficult and fascinating territory.

Work in this area started in earnest at Bell Labs when Jean-Claude Risset, a young French composer and physicist, came to do some research for his doctoral thesis. In the past, composers had tended to make educated guesses as to what made a brass sound brass-like, but Risset used the IBM 7094 at Bell Labs to test out his hunches about what was needed to simulate the timbre of an instrument like the trumpet. The result of this work was a technique called 'analysis by synthesis', which led ultimately to Risset's 'Introductory Catalogue of Computer Synthesised Sound', a fairly monumental treatise explaining ways of emulating conventional instruments from the unit generator building blocks that were part and parcel of any Music V set-up. Whilst most computer musicians would probably agree that reproducing conventional instruments is hardly making full use of their resources (mental or digital), the emulation/simulation hobby-horse seems to be something of a yardstick in the digital synthesis business. Like other facets of human behaviour, maybe one can't help but follow the lemming-like path to the familiar.

An intriguing aspect of the early days at Bell Labs was that users of these programs were scientists rather than composers (more-or-less) pure and simple. Admittedly, people like Mathews were (are) so tempered with the humanities that the distinction is largely meaningless, but the tradition of the physicist adopting the role of computer music composer remains true to the present day. One of the first 'real' composers to be involved with Bell Labs was David Lewin at Harvard, though even his contribution was from a distance, being quite literally an example of envelope composition - via the post! Nowadays, composers have somewhat ousted physicists from the creative driving seat in most computer music installations, and the work at Stanford's Centre for Computer Research in Music and Acoustics, Bell Labs, and the Institut de Recherche et de Coordination Acoustique/Musique (IRCAM) in Paris has given composers a whole new vocabulary of sound that is only now being exploited to anything like its full potential. It's also true that a lot of computer music has been a parade of complexity for complexity's sake. Lars-Gunnar Bodin has said: "In spite of great efforts in time and money, relatively little of artistic significance has been produced in computer music." Ezra Pound really hit the nail on the head with his comment that "170 pages of mathematics are of less value than a little curiosity and a willingness to listen to the sound that actually proceeds from an instrument."

Seeing that Pound's 'Cantos' frequently appear to share an equal footing with mathematics as well as literature, this comment might have a tinge of the pot calling the kettle black, but there is a valid point here. Assuming that musical instruments have remained in force because Mankind (or that part of it that uses them) finds them pleasing to the ear, it makes sense to model the implementation of digital instruments on their natural counterparts, but in shape rather than sound. John Chowning's approach to 'the psychoacoustic problem' operated on the premise that digital mirroring of the behaviour of acoustic instruments was a more realistic way to Man's soul than simply copying and reconstituting them with 'analysis by synthesis'. However, the 'analysis' work of Risset did yield an insight into the quality of natural sound that was to prove of general relevance for any attempt at digital synthesis: the character of the temporal evolution of the spectral components (harmonic or inharmonic) is of critical importance in the determination of timbre. The majority of natural sounds and acoustic instruments produce a characteristic spectral 'signature' (see Figure 1) - especially over the initial attack phase of the envelope (the attack transients) - and it's this that's largely responsible for giving the ear an identifying 'cue signal' and sustaining interest in the sound. In contrast, the largely static harmonic spectra of most synthesiser sounds readily impart to the listener the cue that labels the sound as being of electronic origin.

Figure 1. Spectral signatures for trumpet and flute tones.

The original sound in each case was recorded, digitised, subjected to Fourier analysis to split the sound into individual harmonics, and then plotted using a line-segment approximation.
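For readers who want to try Risset's 'analysis by synthesis' procedure for themselves, here is a minimal sketch in Python - a modern stand-in, nothing like the original IBM 7094 programs, and both the file name and the assumed fundamental are purely illustrative. It slices a recorded tone into short frames, Fourier-analyses each frame, and tracks the amplitude of each harmonic over time:

import numpy as np
from scipy.io import wavfile

rate, signal = wavfile.read("trumpet.wav")    # hypothetical mono recording
signal = signal.astype(float) / np.max(np.abs(signal))

f0 = 440.0         # assumed fundamental of the recorded note (Hz)
frame = 1024       # analysis window length in samples
hop = 512          # frame advance (50% overlap)
n_harmonics = 8

envelopes = []     # one amplitude-versus-time curve per harmonic
for start in range(0, len(signal) - frame, hop):
    windowed = signal[start:start + frame] * np.hanning(frame)
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(frame, d=1.0 / rate)
    # pick the spectral bin nearest each harmonic of f0
    envelopes.append([spectrum[np.argmin(np.abs(freqs - k * f0))]
                      for k in range(1, n_harmonics + 1)])

envelopes = np.array(envelopes)   # shape: (frames, harmonics)

A line-segment approximation, as in Figure 1, would then reduce each column of this array to a handful of breakpoints per harmonic.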


A valid starting point for 'realistic' digital synthesis is therefore to impose some sort of spectral signature on the sounds being produced. As we'll see later on in this series, there are many ways of doing this - and with varying degrees of success - but the solution that Chowning alighted upon was in fact borrowed from radio transmission technology. The technique was frequency modulation (FM), the process by which the instantaneous frequency of a carrier waveform is varied according to a modulation waveform. Although FM had not previously been applied to the generation of waveforms way down at the audio end of the frequency spectrum, the intriguing attribute of FM synthesis that attracted Chowning was that it allows a complex spectral signature to be produced with the minimum of fuss and bother. More to the point, the FM simulations of brass, woodwind, and percussion are remarkably accurate. Indeed, Yamaha have seen fit to buy the exclusive rights to Chowning's FM synthesis techniques for use in their 'GS' range of synthesisers - a move that makes it jolly difficult for anyone else to apply the techniques of FM synthesis commercially.

The mathematics of FM synthesis can get quite involved, but the principles are fairly easy to grasp. Basically, as one applies modulation to a carrier wave, new frequency components - the sideband frequencies - start to appear around the original carrier, and the separation of the sideband components is determined by the frequency of the modulation waveform. In other words, the modulation frequency determines the spacing of the components in the spectral signature, whilst the number of components is determined by the modulation amplitude. However, it's the manner in which these sideband frequencies change as the modulation is altered that makes the technique such a powerful synthetic tool. One of the important parameters of FM synthesis is the modulation index, I, which is equal to the ratio of the peak deviation of the carrier (proportional to the modulation amplitude) to the modulation frequency. When I=0, the frequency deviation is also zero and, surprise, surprise, there's no modulation. However, when I is greater than zero, frequencies occur above and below the carrier at intervals of the modulation frequency (Figure 2). What's actually happening is that, as I increases from zero, energy is effectively stolen from the carrier and distributed among an increasing number of sideband frequencies.
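To put rough numbers on this, here is a short Python sketch (the values of carrier, modulator, and index are illustrative assumptions). The sideband amplitudes come from the Bessel functions that crop up later in this article; components that land at negative frequencies simply fold back, phase-inverted, onto the positive axis:

from scipy.special import jv   # Bessel function of the first kind, J_n

c, m = 440.0, 440.0   # carrier and modulation frequencies (Hz)
I = 4.0               # modulation index = peak deviation / modulation frequency

for n in range(9):
    amp = jv(n, I)    # relative amplitude of the n-th sideband pair
    if abs(amp) < 0.01:
        continue      # negligible energy this far out
    if n == 0:
        print(f"carrier at {c:6.1f} Hz: amplitude {amp:+.3f}")
    else:
        print(f"sidebands at {c + n*m:6.1f} and {c - n*m:6.1f} Hz: amplitude {amp:+.3f}")

Setting I = 0 makes every term but the carrier vanish, exactly as described above.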

Figure 2. The effect of varying the modulation index on FM spectra.

The sideband frequencies appear at intervals of the modulation frequency, m, and are arranged symmetrically around the carrier frequency, c.


The really interesting feature of this is the manner in which the redistribution takes place and the audible consequences of it. Indeed, the sort of dynamic FM spectrum produced by decaying the modulation index from 4 to 0 (going from the bottom to the top of Figure 2, for instance) is to FM synthesis what the low-pass filter sweep is to analogue synthesis. However, whereas a filter sweep merely sequentially removes the higher harmonics, a decaying modulation index invokes a complex and essentially non-linear devolution of the individual harmonic components of an FM spectrum. And whilst the evolution and devolution of these components is purely a function of applied mathematics - the Bessel functions - the subjective impression of sounds created with dynamic FM synthesis is of lively and realistic musical instruments. In addition, FM synthesis gives one the unique opportunity to experiment with spectra containing inharmonic components - also called non-integer harmonics - common to brass and percussion instruments.
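As a quick taste of what this sounds like, the following Python sketch (again a modern illustration with assumed parameter values, not anything from Chowning's own code) renders a one-second simple-FM tone whose modulation index decays from 4 to 0 - the FM counterpart of the analogue filter sweep just described:

import numpy as np
from scipy.io import wavfile

rate, dur = 44100, 1.0
t = np.linspace(0.0, dur, int(rate * dur), endpoint=False)

c, m = 440.0, 440.0              # carrier and modulation frequencies (Hz)
index = 4.0 * (1.0 - t / dur)    # modulation index decaying from 4 to 0

# simple FM: y(t) = sin(2*pi*c*t + I(t) * sin(2*pi*m*t))
tone = np.sin(2 * np.pi * c * t + index * np.sin(2 * np.pi * m * t))
wavfile.write("fm_sweep.wav", rate, (0.5 * 32767 * tone).astype(np.int16))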

Figure 3. Dynamic FM patch using Music V unit generators.


Chowning implemented FM synthesis using Music V's unit generators. To achieve dynamic FM spectra, the basic software patch shown last month needs to be expanded to something like that in Figure 3. This dynamic FM patch includes another three unit generators so that the modulation index can be specified as a function of time and an envelope can be applied to the modulated carrier. UG4 and UG5 serve as time-domain function (envelope) generators, with UG4 imposing an envelope on the modulated carrier and UG5/6 together providing dynamic control of the modulation index. The output of the patch is then described by eight parameters, P1 to P8:

P1 = starting time of note
P2 = instrument code
P3 = duration of note
P4 = output amplitude
P5 = carrier frequency
P6 = modulation frequency
P7 = modulation index 1, I1
P8 = modulation index 2, I2

P7 sets the low point for the modulation index, P8 the high point, and P8-P7 the extent of the decaying index (equivalent to the length of a low-pass filter sweep in spirit, if not in body). Risset demonstrated for the trumpet that an increase in amplitude led to the energy being spread proportionately over an increased bandwidth of harmonics.
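By way of illustration, here is how that patch might be expressed in Python rather than Music V - an assumption-laden sketch, with a single stored envelope function standing in for both UG4's amplitude envelope and UG5/6's index sweep, and with P1 and P2 omitted as they only handle scheduling:

import numpy as np

RATE = 44100

def fm_note(p3_dur, p4_amp, p5_carrier, p6_mod, p7_i1, p8_i2, env):
    """Render one note of the Figure 3 dynamic FM patch."""
    t = np.linspace(0.0, p3_dur, int(RATE * p3_dur), endpoint=False)
    e = env(t / p3_dur)                    # envelope over normalised note time
    index = p7_i1 + (p8_i2 - p7_i1) * e    # UG5/6: dynamic modulation index
    phase = 2 * np.pi * p5_carrier * t + index * np.sin(2 * np.pi * p6_mod * t)
    return p4_amp * e * np.sin(phase)      # UG4: envelope on the modulated carrier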

Chowning applied this interesting observation to his pursuit of dynamic FM spectra by using the following parameters for his Music V patch:

P3 = 600 msec
P4 = 1000 (maximum amplitude)
P5 = 440 Hz
P6 = 440 Hz
P7 = 0
P8 = 5

The modulation index, and therefore the distribution of sideband frequencies, changes in line with the envelope function that's programmed into the patch (Figure 4 shows an envelope that's characteristic of brass instruments - complete with an attack 'overshoot'). Integer ratios of P5/P6 (as in the above case) produce integer sideband frequencies (i.e., ones that lie within the harmonic series). However, the synthesis of some brass instruments may be more aptly described by spectra with non-integer components, such as those produced by using fractional ratios between the carrier and modulation waveforms.

Figure 4. Brass envelopes for dynamic FM synthesis.

This envelope incorporates the attack overshoot characteristic of many brass instruments. The short duration of the envelope is particularly suitable for producing the sort of brassy 'blips' necessary for simulating rapid tonguing (the horn section of Earth, Wind and Fire, for instance!).
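Plugged into the earlier fm_note sketch, Chowning's values - together with an assumed piecewise-linear stab at the Figure 4 envelope (the breakpoints are guesses, not taken from the article) - would look like this:

import numpy as np

def brass_env(x):
    # attack overshoot to 1.2, settle to 1.0, then release (cf. Figure 4)
    return np.interp(x, [0.0, 0.1, 0.3, 0.8, 1.0],
                        [0.0, 1.2, 1.0, 1.0, 0.0])

note = fm_note(0.6, 1000, 440.0, 440.0, 0.0, 5.0, brass_env)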


All this may seem tough going and a bit removed from the mainstream of digital synthesis programs, but the crucial point of Chowning's work is the ease with which these techniques allow temporal control over instrumental timbre. Though it's true that both additive and subtractive synthesis can achieve similar results (though usually with some puffing and panting), the boon of FM synthesis is the fact that nature takes over in deciding which harmonic comes where, and, somewhat surprisingly, the ear seems quite happy to go along with this mathematical sleight of hand. In fact, it would appear that 'cuing' and so on is more dependent on the overall pattern of harmonic evolution/devolution than on the precise variation in amplitude for each harmonic component occurring during the course of a note's envelope.

GROOVE



The beauty of Music V was that almost anyone involved in computer music could read a Music V score or understand a Music V instrument definition and translate it into the terms of his or her favourite music synthesis program. However, wider acceptance of the principle of producing music with the assistance of a computer demanded certain changes in the notion of what the computer should really be doing to earn its keep. The most obvious bugbear of Music V et al. was the 100:1 less than realtime nature of the synthesis. That fact, and the necessity to code music in Fortran strings, meant that a particular sort of composer (the musical physicist on a research grant) was more likely to be involved than his conventional piano-playing counterpart. There's nothing wrong with that, of course, but many composers might have been more interested in finding out about the magic and mysteries of computer music if the music input techniques had been allied to something they more-or-less intuitively understood (i.e., the piano keyboard). However, it's also true that many people (meaning composers) can't wiggle their fingers fast enough, and control phrasing, dynamics and timbre sufficiently well, for this mode of entry to be satisfactory without some additional input technique for making edits, correcting fluffs, adding sonic spice, and so on. The big advantage of non-real-time synthesis is that it allows the composer to sculpt each sound individually and then store it on tape for subsequent playback. Whether or not the results really warranted this painstaking attention to the inner detail of notes is a moot point, and, for Max Mathews, the niceties of sophisticated musical utterances were tempered by a concern to reduce the gap between thinking music and hearing music. GROOVE provided this and also improved on the man-machine interface to boot.

The GROOVE system was developed at Bell Labs in 1968 by Mathews in conjunction with F. R. Moore and was really a hybrid set-up consisting of a minicomputer connected to analogue synthesis modules. The system also included a number of input devices such as joysticks and knobs for realtime interaction with a score that had previously been entered by the user. The point about GROOVE was that it looked upon the score as a recording of the basic control functions needed to run an analogue synthesiser. The program then played back these functions in conjunction with sampled human gestures generated in real time by the user playing on input sensors connected via ADCs to the computer. GROOVE also allowed the musician to back-track through stored functions - using a display or printout for reference - and make precise edits with real-time feedback as he was doing this. The most extensive use made of GROOVE was by Emmanuel Ghent, but, over the years that it remained extant (up until 1979, when time ran out for the minicomputer on which it was running), various notables (including Pierre Boulez) made use of its innovative features. Viewed from afar, so to speak, the most important feature of GROOVE must be the 'edited improvisations' that it allowed between machine and performer, thereby freeing computer music from the barren rigidity of unchanging performance. In fact, Max Mathews comes down very heavily on what he calls "dumb ways of playing intelligent machines", and the 'tape recorder' mode, where one puts the entire score of a composition into the computer, presses the start button, and lets the machine churn out notes, is far from Mathews's ideal for the future of computer music. Considerably more satisfactory are situations where the composer has at least partial control of the playback process. The processing-playback time gap of the Music series of programs made this impossible, so GROOVE represented the first step in the direction of making computer music bendable to realtime human whims and fancies. Part 4 of Macro Music will examine the further steps taken by Max Mathews and others to make the computer a better and more immediate musical slave. You'll also find out the connection between 'Star Wars' and computer music!





Electronics & Music Maker - Copyright: Music Maker Publications (UK), Future Publishing.

 

Electronics & Music Maker - Jun 1983


Topic:

Computing

Synthesis & Sound Design


Series:

Macro Music

Part 1 | Part 2 | Part 3


Feature by David Ellis

Previous article in this issue: APRS Preview

Next article in this issue: Clarion Recording System

