Music Composition Languages (Part 2)
Can computers improve or even replace current styles of notation?
The idea behind this series of articles is to follow the path taken by musical notation, from the traditional principles to be found on any sheet of printed music to the development of special high-level languages for the synthesis of music on computer-based equipment. But first, we start with a few Deep Thoughts...
Mention the term 'Music Composition Language' to the average musician and you're likely to be met with a blank stare. After all, conventional methods of music production don't actually differentiate between a language for composition and a language for performance, so a musician is hardly likely to consider the language of musical notation as something that's particularly exclusive to composition. But does conventional notation actually fulfil all the requirements of a true language, ie., an efficient and flexible means of passing on information from one person to another?
Well, herein lies the problem: conventional notation is designed for conventional music, where key relationships, rhythmic patterns, dynamic ebb and flow, and instrumental range all tend to fit into schemes of composition set by (fairly) rigorous rules of counterpoint, harmony and orchestration. Not surprisingly, then, it tends to get into dire trouble when coming up against contemporary music full of unusual subdivisions of the octave, polyrhythms, tessitura that range over many octaves, abruptly changing dynamics, different tempi played simultaneously, and all the other tricks of the modern expressive composer.
If a composer's style has become sufficiently advanced as to include these extensions to the basic musical vocabulary, then he's forced either to make the best of a bad job, when it comes to notational practice, and rely upon the highly-developed reading skills of his performers (the example in Figure 1, for instance), or to succumb to a Faustian deal, whereby compositional precision is exchanged for the less taxing (to both composer and performer!) but incredibly dramatic graphical representation of musical events illustrated by the Schidlowsky example (for solo harp!) in Figure 2. With the former approach, one's practically obliged to use a stop-watch and calculator to work out which note comes where, whereas, in the latter case, composition becomes more-or-less redundant. Clearly a sorry state of affairs!
For this reason, various individuals have tried to persuade the musical establishment to alter, if not change altogether, their notational tune. The trouble is that it's totally impractical to switch from one form of notation to another, when the former has been the basis of many centuries of musical tradition. In fact, the two notational languages that seem logical and sensible within the context of contemporary music — Klavarskribo and Equitone — have met with a stalemate rather reminiscent of the reception that Esperanto's proponents got in trying to turn people from the illogicality and quaintness of English to the precision and coolness of their constructed language.
Outside the context of contemporary music, musical notation works fine 99% of the time because it's used for the purpose of providing playing instructions to instruments (and their players) that themselves have certain inherent limitations as far as subdivisions of the octave, tessitura, and range are concerned. In fact, the general consensus of opinion amongst composers who've emerged from the experimentation of the '60s appears to be that 'you don't write what can't be played or notated'. So, one explanation for the 'neo-romantic' swing-back that's been observed over the last ten years might be that the limitations of the current means of musical communication have actually achieved a sort of negative (?) feedback effect on the creative process.
If that's the situation for the 'concert music' side of things, where does electronic music find itself? Well, as far as attempts at notation are concerned, it's all a bit confused: the obvious problem to be surmounted is that the diversity of means and effect in electronic music is so great as to be impossible to put into conventional notation. However, ever since Stockhausen first persuaded ex-WWII oscillators to send their sine waves into the ether (Studie II, for instance), electronic music composers have been investigating ways of systematising their creative procedures with some sort of shorthand.
This can take the form of elaborate graphics, as in the case of Ligeti's Poeme Electronique (Figure 3), a sheet of numbers, as with the Riedl example (Figure 4), or it can involve elaborate verbiage, possibly designed more for self-justification than for analysis by the next generation of electromusicologists!
On the other hand, it might just be a continuous print-out from a spectrum analyser, which would probably indicate more about what's actually happening in the music. At least, this would offer a plausible update to the pitch and envelope graphs of Stockhausen's Studie II (Figure 5).
However, there's a crucial difference between the use of such 'scores' in electronic and instrumental music: in the latter, the score exists to communicate intentions and instructions from composer to a group of performers, but, in the electronic situation, performance is seldom dependent on anyone (or anything) being able to accurately interpret such scores. In fact, creativity — the jottings in a note book, the knob twiddling, and the tape splicings — pre-exists the score, and so, in effect, the latter is superfluous apart from its role in assisting the composer to gather and order his thoughts. Of course, if there was a tradition whereby scored electronic pieces were reinterpreted and resynthesised by other musicians and composers, then there would be plenty of sense in having such a score. This is one place where a universal music composition language designed specifically for ease of transport from one creative situation to another would be a real step forward.
"This effluvial extreme of the attack-effluvium continuum will vary greatly amongst listeners. The greater the distance travelled from the closed attack archetype towards an effluvial state, the more important inner motion becomes as a continuant oriented phase is born. While the internal motion of this continuant phase may be described in terms of particles or grains, outer onset and termination phases will need redefining."
(from a paper presented by Dennis Smalley at the Stockholm Electronic Music Conference in 1981).
Some people might argue that playing back the tape of a piece with a 'sound projectionist', for real-time control of sound diffusion through an auditorium, is equivalent to a reworking of the piece, but it seems a poor substitute for the potential excitement of reinterpretation, and it must surely be the fundamentally rigid nature of a prerecorded electronic piece that acts as a barrier to the wider acceptance of electronic music outside of the rock arena.
I suppose the single most engaging factor about producing music of an electronic or electro-acoustic nature is that it's probably the nearest one can get to creating a sculpture out of sound. Waveforms really are very like clay, bendable into different shapes and different textures, and, if the end result doesn't quite gel, you just re-work the material. So, in many ways, electronic music is a much more direct pathway from the mind to the final product. The inevitable problem, of course, with a medium that stretches through time as well as space is that you're never able to take in the whole effect of a piece in one instant. It's easy to envy the visual artist who's able to stun an audience simply by presenting one image in one place at one time. However, they lose out on what the sonic arts have to offer — the ability to manipulate the perceptual apparatus by a process of ebb and flow over an extended period of time — and a good example of a piece that does that to near perfection is 'Pentes' by Dennis Smalley (on UEA 81063).
The problem with this sculptural approach is that you're obliged to tune your ears to a very fine degree. In fact, you really have to develop the ability of a human polyphonic sequencer when it comes to remembering all the sound-events streaming through your consciousness. It's at this point that the notion of using a micro to interface between your creative ideals and musical reality makes an awful lot of sense. More than that, the fact that you have to communicate with a computer very precisely if you want it to serve you well means that you're handed, more or less on a plate, the option of putting right (for yourself, anyway) whatever inadequacies existed in conventional notation.
However, before launching into the development of an MCL to end all MCLs, it has to be recognised that the past record for schemes aimed at establishing communication between composers and computers is somewhat less than encouraging. In fact, composers of the Music V school were faced with a downright obstinate instrument that may have required weeks of effort from the user to produce a few lack-lustre sounds — a situation that's far removed from the present proposal of using the micro as an intelligent amanuensis. Whilst it's certainly true that languages like Max Mathews' Music V went out of their way to be rigorous in terms of defining sounds with accuracy and precision, it's equally true that reading a Music V note or instrument list is rather more complicated than the average shopping list (see Figure 6, for instance).
Input to Music V consists of two blocks of data: the 'instrument definition' and the 'score'. The former specifies a network of the unit generators that we came across in E&MM's 'Macro Music' series, and, in this case, is set up as 5 sine wave oscillators. The latter block is a list of 'notes', each of which specifies action time, instrument number, duration, frequency, and sundry other composer-defined parameters. Even with only a cursory look at the Music V MCL, it's pretty obvious that this sort of score is likely to be totally bewildering to the uninitiated. One of the main stumbling blocks is the requirement for pitches to be entered as log to base 5 frequencies, and amplitudes as decibel levels. It's hard not to get the niggling feeling that a little too much emphasis has been placed on the machine side of this particular man-machine interface!
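To make the two-block idea concrete, here's a rough modern sketch in Python. The field layout and names are assumptions for illustration, not actual Music V syntax: a score is modelled as a list of note statements carrying action time, instrument number, duration, amplitude (in decibels, as in Music V) and frequency, with a helper to convert the decibel levels into the linear amplitudes a synthesis engine would actually need.

```python
# Illustrative model of a Music V-style score block. Each note carries
# an action time (s), instrument number, duration (s), amplitude (dB)
# and frequency (Hz). The field layout here is an assumption for clarity,
# not the real Music V card format.
score = [
    # (action_time, instrument, duration, amplitude_dB, frequency_Hz)
    (0.0, 1, 0.5,  -6.0, 440.0),
    (0.5, 1, 0.5, -12.0, 523.25),
    (1.0, 1, 1.0,  -3.0, 329.63),
]

def db_to_linear(db):
    """Convert a decibel level to a linear amplitude (0 dB -> 1.0)."""
    return 10.0 ** (db / 20.0)

# Expand each note into the form the synthesis engine would consume.
events = [
    {"t": t, "ins": ins, "dur": dur, "amp": db_to_linear(db), "freq": f}
    for (t, ins, dur, db, f) in score
]
```

Even this toy version makes the point: the composer is asked to think in engineering units (decibels, Hertz) rather than musical ones.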
What, then, should one use as a starting point for a more workable sort of MCL for use in micro-based synthesis? Well, one possibility adopted by a number of systems is to develop a shorthand that's derived from the macros of conventional musical notation, ie., pitch values, rhythmic values, transpositions, tempo indications, and so on. The example shown in Figure 7 is purely hypothetical, but it serves to illustrate a possible extension to the programming techniques used in many hybrid programmable synthesisers. The one factor that gives this potential MCL some connection with Music V is the fact that the instrument definition ("Dyna-mute", in this case) immediately precedes the actual notes (the part called "Bass 1"). However, as an MCL, this example is limited by the closed-shop nature of the grammar, ie., notes that are only variable in two dimensions (pitch and duration) which are then plugged into fixed-patch analogue voices.
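A minimal sketch of what such a shorthand might look like, in Python. The grammar here (note name plus octave, a colon, then a duration code) is entirely my own invention, not the Figure 7 syntax, but it shows the basic idea: a handful of familiar musical symbols parsed down into the pitch and duration numbers the synthesiser needs.

```python
# Parser for a hypothetical pitch/duration shorthand, e.g. "C4:q"
# meaning middle C as a quarter note. The grammar is an assumption
# for illustration only.
NOTE_OFFSETS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}
DURATIONS = {"w": 4.0, "h": 2.0, "q": 1.0, "e": 0.5, "s": 0.25}  # beats

def parse_note(token):
    """Turn a token like 'G#3:e' into (MIDI-style pitch number, beats)."""
    pitch, dur = token.split(":")
    name = pitch[0]
    accidental = 1 if "#" in pitch else (-1 if "b" in pitch[1:] else 0)
    octave = int(pitch[-1])
    midi = 12 * (octave + 1) + NOTE_OFFSETS[name] + accidental
    return midi, DURATIONS[dur]

# A whole phrase becomes a list of (pitch, duration) pairs.
phrase = [parse_note(t) for t in "C4:q E4:q G4:h".split()]
```

Note how little the composer can actually say here: exactly the two dimensions (pitch and duration) criticised above, and nothing else.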
The problem with this approach is that we've designed a musical data entry system that's too simplistic for its own good. It's as if the programmer has made up his own mind about what a composer wants to get out of the system and, therefore, has written the MCL so that the number of variables in the system is down to the barest minimum. Perhaps this reflects a poor understanding on the part of the programmer of what a composer actually does in his craft.
So far, the main justification for an MCL would seem to be that a) it puts instrument definitions and notes in the same place (a good thing) and b) it keeps keystrokes for data entry to a minimum (also a good thing).
But is there any reason why the composer should be restricted to the laborious specification of one note after another? This surely is missing the point of what an MCL should accomplish, ie., the communication of compositional instructions to the computer. Given that a variety of intuitively-used compositional devices (thematic inversions/transformations, and so on) are actually achieved by the mental equivalent of algorithms (what could be called 'silent compositional processing'), a true MCL should be flexible enough to incorporate all the hidden musical processing that a composer at present is obliged to do in his head. So, why not make the MCL sufficiently flexible as to allow the composer to write these algorithms into his score?
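Those 'silent' algorithms are remarkably easy to state explicitly once you try. As a sketch (the function names and pitch numbering are my own, not part of any existing MCL), here are three classic thematic devices operating on a motif expressed as semitone pitch numbers:

```python
# Three 'silent compositional processing' devices made explicit,
# operating on a motif of MIDI-style semitone pitch numbers.
def invert(motif, axis=None):
    """Mirror each interval about an axis pitch (default: the first note)."""
    if axis is None:
        axis = motif[0]
    return [2 * axis - p for p in motif]

def transpose(motif, semitones):
    """Shift every pitch by a fixed number of semitones."""
    return [p + semitones for p in motif]

def retrograde(motif):
    """Play the motif backwards."""
    return list(reversed(motif))

motif = [60, 62, 64, 67]                # C D E G
print(invert(motif))                    # [60, 58, 56, 53]
print(transpose(retrograde(motif), 5))  # [72, 69, 67, 65]
```

An MCL that let the composer write the score in these terms — "Bass 1 is the inversion of the theme, up a fifth" — would be doing real compositional work, not just note entry.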
An amanuensis should remove the hard graft of composition, and that's just as true for working out the magic squares or hierarchical structures that might be used to generate imagined sounds as for the final act of turning the imagined sounds into the splendours of sonic reality. Next month, we'll come down from the mountain tops to look at how MCLs in use today meet these requirements — in particular the Fairlight Composer, the Roland MC4, and a couple of Apple-based systems.
Feature by David Ellis