Macro Music (Part 2)
Article from Electronics & Music Maker, April 1983
Music with Mainframe Computers
Macro-music is all about making music on large mainframe computers, and provides a starting point for articles to come on software techniques and hardware solutions for high-quality synthesis, as well as micro-controlled one-chip synthesisers and the latest commercial 'add-ons' for microcomputers.

Stemming from Mathews's own experience of learning and playing the violin was a strong feeling that computer music programs should help a person's creative juices flow without constraining him. To this end, the next program in the series, Music III, introduced in 1960, brought with it the unit generator concept. These unit generators were sonic building blocks existing solely in software and, much like the modular approach being adopted for analogue synthesisers at about the same time by Donald Buchla and Robert Moog, a simple sound was easy to patch while more elaborate ones took longer (see Figure 3 for an example). By allowing the complexity of the synthesis program to follow the complexity of the composer's intentions, and by making the building blocks correspond to many of the functions of analogue synthesisers, Mathews made a major conceptual advance for computer music - that of contracting the technological language barrier and giving the user something he could instantly relate to, while at the same time providing the means for expansion to meet his own creative development. Actually, it's a bit reminiscent of that superb oxymoron that appeared in a local newspaper: 'Wanted: new investors for an expanding contracting business' - very bitter-sweet!

A good example of a typical unit generator is the oscillator, which has two inputs, an output, and a stored waveform programmed into it. Generally, the first input specifies the amplitude of the output and the second the frequency. Patching up unit generators produces, naturally enough, an instrument whose characteristics are determined by the parameters, P1 to Pn, programmed by the composer. Some of the most dramatic synthesis using unit generators was done by John Chowning at the Stanford Artificial Intelligence Laboratory in the late '60s, using more complex versions of the patch in Figure 3. Chowning (one of the many pupils of the remarkable Nadia Boulanger in Paris), with the aid of Max Mathews, started setting up a computer music program at Stanford in 1964 around a descendant of Music III. In fact, Music IV, which appeared in 1963, was no more powerful than Music III and only marginally more convenient to use, but the program that Stanford started to use, Music V, marked the emergence of a music synthesis program that was extremely flexible and machine-independent. This meant that computer music was at last able to move away from Bell Labs and reach a wider range of musicians.
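To make the oscillator unit generator idea concrete, here's a minimal sketch in Python (obviously not the assembler or Fortran of the Music programs): a table-lookup oscillator with amplitude and frequency inputs and a stored waveform, patched with a simple envelope into an 'instrument' whose note parameters P1 to P4 are taken here to mean start time, duration, amplitude and frequency. The parameter names and assignments are illustrative only, not those of any of the Music programs.

```python
import math

SAMPLE_RATE = 44100          # samples per second (assumed for this sketch)
TABLE_SIZE = 512             # length of the stored waveform table

# The stored waveform: one cycle of a sine wave, read at varying rates.
sine_table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def oscillator(amplitude, frequency, duration, table=sine_table):
    """A unit-generator-style oscillator: two inputs (amplitude, frequency),
    one output (a list of samples), and a stored waveform."""
    samples = []
    phase = 0.0
    increment = frequency * TABLE_SIZE / SAMPLE_RATE   # table steps per sample
    for _ in range(int(duration * SAMPLE_RATE)):
        samples.append(amplitude * table[int(phase)])
        phase = (phase + increment) % TABLE_SIZE
    return samples

def envelope(duration, attack=0.05, release=0.1):
    """A second unit generator: a simple attack/release amplitude envelope."""
    n = int(duration * SAMPLE_RATE)
    a, r = int(attack * SAMPLE_RATE), int(release * SAMPLE_RATE)
    return [min(1.0, i / a, (n - i) / r) for i in range(n)]

def instrument(p1, p2, p3, p4):
    """A minimal 'instrument' patch built from the two unit generators.
    P1 = start time (used when the note is placed in a score),
    P2 = duration, P3 = amplitude, P4 = frequency."""
    env = envelope(p2)
    osc = oscillator(p3, p4, p2)
    # The envelope scales the oscillator's output, which for this simple
    # oscillator is equivalent to feeding its amplitude input.
    return [e * s for e, s in zip(env, osc)]
```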
Music V in fact adopted a practice used in many microcomputer music systems - that of using machine code for the time-critical synthesis routines and a high-level language (Fortran, in this case) for user interaction, i.e. for setting up unit generators and entering 'scores'. This meant that Music V was comparatively easy to transfer to different computers and efficient in terms of processing speed. However, Music V remained loyal to the tradition of delayed-playback synthesis, and this made it somewhat inconvenient in comparison with the totally real-time synthesis possible on the analogue synthesisers emerging from the Moog stable. Rewriting Music IV and Music V entirely in machine code improved on the original ratio of 100:1 computing time to playing time, though not to the extent that composers would ideally have liked.
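By way of illustration of the two-block, delayed-playback approach - and emphatically not Music V's actual score syntax - the following sketch (re-using the oscillator and instrument definitions above) renders a short note-list off-line into a sample buffer, and only then writes it out for playback.

```python
import wave, struct

# A 'score' in the spirit of Music V's note lists: each entry gives
# (start time, duration, amplitude, frequency). Layout is illustrative.
score = [
    (0.0, 0.5, 0.4, 440.0),
    (0.5, 0.5, 0.4, 554.37),
    (1.0, 1.0, 0.4, 659.25),
]

def render(score, total_seconds):
    """Delayed-playback synthesis: compute every sample first, play later."""
    buffer = [0.0] * int(total_seconds * SAMPLE_RATE)
    for start, dur, amp, freq in score:
        note = instrument(start, dur, amp, freq)   # instrument() from the sketch above
        offset = int(start * SAMPLE_RATE)
        for i, s in enumerate(note):
            buffer[offset + i] += s
    return buffer

def write_wav(buffer, path="render.wav"):
    """Write the finished buffer as a 16-bit mono WAV file for playback."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SAMPLE_RATE)
        f.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767)) for s in buffer))

write_wav(render(score, 2.0))
```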
Music V underwent many developments as regards the interface between computer and composer. The two blocks of composer-derived data, the 'instrument definitions' and the 'score', remained the cornerstone of Music V's operation, but one major obstacle to generating music from a digital synthesis program was what Mathews called 'the psychoacoustic problem'. The basic difficulty with all music synthesis, whether subtractive or additive, is that it's extremely difficult to predict why a particular mass of sound waves produces the impression of a particular timbre. With traditional acoustic instruments, it's enough that the performer understands the effect of different methods of fingering, bowing, embouchure or articulation on the quality of the sound; he doesn't need to know the actual physical changes in the make-up of the sound waves that result. With a digital synthesis program, on the other hand, the composer can only realise its full potential if everything about the sound wave is rigorously spelt out for the computer. To this end, potential help for the composer came from two sources: firstly, from analysing the sounds of instruments and reconstructing them; and secondly, from synthesis techniques that happen to mirror the behaviour of natural instruments. We'll move on to considering these in Part 3 of Macro-music.
Feature by David Ellis