
Article Group:
Computer Musician

Music Composition Languages (Part 2)

Can computers improve or even replace current styles of notation?


The idea behind this series of articles is to follow the path taken by musical notation, from the traditional principles to be found on any sheet of printed music to the development of special high-level languages for the synthesis of music on computer-based equipment. But first, we start with a few Deep Thoughts...

Mention the term 'Music Composition Language' to the average musician and you're likely to be met with a blank stare. After all, conventional methods of music production don't actually differentiate between a language for composition and a language for performance, so a musician is hardly likely to consider the language of musical notation as something that's particularly exclusive to composition. But does conventional notation actually fulfil all the requirements of a true language, ie., an efficient and flexible means of passing on information from one person to another?

Conventional Notation



Well, herein lies the problem: conventional notation is designed for conventional music, where key relationships, rhythmic patterns, dynamic ebb and flow, and instrumental range all tend to fit into schemes of composition set by (fairly) rigorous rules of counterpoint, harmony and orchestration. Not surprisingly, then, it tends to get into dire trouble when coming up against contemporary music full of unusual subdivisions of the octave, polyrhythms, tessitura that range over many octaves, abruptly changing dynamics, different tempi played simultaneously, and all the other tricks of the modern expressive composer.

Figure 1. 'Reseaux' by Miroglio for harp (extract).


If a composer's style has become sufficiently advanced as to include these extensions to the basic musical vocabulary, then he's forced either to make the best of a bad job, when it comes to notational practice, and rely upon the highly-developed reading skills of his performers (the example in Figure 1, for instance), or to succumb to a Faustian deal, whereby compositional precision is exchanged for the less taxing (to both composer and performer!) but incredibly dramatic graphical representation of musical events illustrated by the Schidlowsky example (for solo harp!) in Figure 2. With the former approach, one's practically obliged to use a stop-watch and calculator to work out which note comes where, whereas, in the latter case, composition becomes more-or-less redundant. Clearly a sorry state of affairs!

Figure 2. Music for solo harp by Schidlowsky.


For this reason, various individuals have tried to persuade the musical establishment to alter, if not change altogether, their notational tune. The trouble is that it's totally impractical to switch from one form of notation to another, when the former has been the basis of many centuries of musical tradition. In fact, the two notational languages that seem logical and sensible within the context of contemporary music — Klavarscribo and Equitone — have met with a stalemate situation that's rather reminiscent of the reaction that the proponents of a European language met in trying to turn people from the illogicality and quaintness of English to the precision and coolness of Esperanto.

Outside the context of contemporary music, musical notation works fine 99% of the time because it's used for the purpose of providing playing instructions to instruments (and their players) that themselves have certain inherent limitations as far as subdivisions of the octave, tessitura, and range are concerned. In fact, the general consensus amongst composers who've emerged from the experimentation of the '60s appears to be that 'you don't write what can't be played or notated'. So, one explanation for the 'neo-romantic' swing-back that's been observed over the last ten years might be that the limitations of the current means of musical communication have actually achieved a sort of negative (?) feedback effect on the creative process.

Contemporary Needs



If that's the situation for the 'concert music' side of things, where does electronic music find itself? Well, as far as attempts at notation are concerned, it's all a bit confused: the obvious problem to be surmounted is that the diversity of means and effect in electronic music is so great as to be impossible to put into conventional notation. However, ever since Stockhausen first persuaded ex-WWII oscillators to send their sine waves into the ether (Studie II, for instance), electronic music composers have been investigating ways of systematising their creative procedures with some sort of shorthand.

This can take the form of elaborate graphics, as in the case of Ligeti's Poeme Electronique (Figure 3), a sheet of numbers, as with the Riedl example (Figure 4), or it can involve elaborate verbiage, possibly designed more for self-justification than for analysis by the next generation of electromusicologists!

Figure 3. Ligeti's Poeme Electronique.


Figure 4. The example shows four simultaneous variously articulated noise processes. First the noise is heard only through the 12th filter notated in the top system; its amplitude is at first extremely low. It increases irregularly, the durations of the dynamically uniform sections, expressed as units of tape transport (one transport unit = 15 ms), also being irregular. After 16 units, the noise sent through the 3rd filter in the 2nd two-line 'system' enters at its greatest intensity, to become rapidly softer and reach the low dynamic value of 29 after 10 units. Two units previously the noise sent through the 14th filter in the 3rd system begins; its amplitude undergoes exactly the same treatment, unit for unit in sequence, as the 2nd noise.


On the other hand it might just be a continuous print-out from a spectrum analyser, which would probably indicate more about what's actually happening in the music. At least, this would offer a plausible update to the pitch and envelope graphs of Stockhausen's Studie II (Figure 5).

However, there's a crucial difference between the use of such 'scores' in electronic and instrumental music: in the latter, the score exists to communicate intentions and instructions from composer to a group of performers, but, in the electronic situation, performance is seldom dependent on anyone (or anything) being able to accurately interpret such scores. In fact, creativity — the jottings in a note book, the knob twiddling, and the tape splicings — pre-exists the score, and so, in effect, the latter is superfluous apart from its role in assisting the composer to gather and order his thoughts. Of course, if there were a tradition whereby scored electronic pieces were reinterpreted and resynthesised by other musicians and composers, then there would be plenty of sense in having such a score. This is one place where a universal music composition language designed specifically for ease of transport from one creative situation to another would be a real step forward.



"This effluvial extreme of the attack-effluvium continuum will vary greatly amongst listeners. The greater the distance travelled from the closed attack archetype towards an effluvial state, the more important inner motion becomes as a continuant oriented phase is born. While the internal motion of this continuant phase may be described in terms of particles or grains, outer onset and termination phases will need redefining."
(from a paper presented by Dennis Smalley at the Stockholm Electronic Music Conference in 1981).


Figure 5. Page 10 of Stockhausen's Elektronische Studie II.
Published by Universal Edition A. G., Vienna

The vertical axis contains the pitches in frequency numbers, the horizontal axis shows the durations in centimetres of tape for each second. The envelopes at the bottom represent the dynamics and can easily be referred to their own frequency rectangles which are shown above them.

Since the material of this composition is all derived from a complex of five reverberated sine waves as symbolised in each rectangle, this score not only provides a precise representation but also a clear and correct image of the translation of sound into vision.


Some people might argue that playing back the tape of a piece with a 'sound projectionist', for real-time control of sound diffusion through an auditorium, is equivalent to a reworking of the piece, but it seems a poor substitute for the potential excitement of reinterpretation, and it must surely be the fundamentally rigid nature of a prerecorded electronic piece that acts as a barrier to the wider acceptance of electronic music outside of the rock arena.

Music as Sculpture



I suppose the single most engaging factor about producing music of an electronic or electro-acoustic nature is that it's probably the nearest one can get to creating a sculpture out of sound. Waveforms really are very like clay, bendable into different shapes and different textures, and, if the end result doesn't quite gel, you just re-work the material. So, in many ways, electronic music is a much more direct pathway from the mind to the final product. The inevitable problem, of course, with a medium that stretches through time as well as space is that you're never able to take in the whole effect of a piece in one instant. It's easy to envy the visual artist who's able to stun an audience simply by presenting one image in one place at one time. However, they lose out on what the sonic arts have to offer — the ability to manipulate the perceptual apparatus by a process of ebb and flow over an extended period of time — and a good example of a piece that does that to near perfection is 'Pentes' by Dennis Smalley (on UEA 81063).

The problem with this sculptural approach is that you're obliged to tune your ears to a very fine degree. In fact, you really have to develop the ability of a human polyphonic sequencer when it comes to remembering all the sound-events streaming through your consciousness. It's at this point that the notion of using a micro to interface between your creative ideals and musical reality makes an awful lot of sense. More than that, the fact that you have to communicate with a computer very precisely if you want it to serve you well means that one's handed the option, more or less on a plate, of putting right (for oneself, anyway) whatever inadequacies existed in conventional notation.

Music V



However, before launching into the development of an MCL to end all MCLs, it has to be recognised that the past record for schemes aimed at establishing communication between composers and computers is somewhat less than encouraging. In fact, composers of the Music V school were faced with a downright obstinate instrument that may have required weeks of effort from the user to produce a few lack-lustre sounds — a situation that's far removed from the present proposal of using the micro as an intelligent amanuensis. Whilst it's certainly true that languages like Max Mathews' Music V went out of their way to be rigorous in terms of defining sounds with accuracy and precision, it's equally true that reading a Music V note or instrument list is rather more complicated than the average shopping list (see Figure 6, for instance).

Input to Music V consists of two blocks of data: the 'instrument definition' and the 'score'. The former specifies a network of the unit generators that we came across in E&MM's 'Macro Music' series, and, in this case, is set up as 5 sine wave oscillators. The latter block is a list of 'notes', each of which specifies action time, instrument number, duration, frequency, and sundry other composer-defined parameters. Even with only a cursory look at the Music V MCL, it's pretty obvious that this sort of score is likely to be totally bewildering to the uninitiated. One of the main stumbling blocks is the requirement for pitches to be entered as log to base 5 frequencies, and amplitudes as decibel levels. It's hard not to get the niggling feeling that a little too much emphasis has been placed on the machine side of this particular man-machine interface!
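To see why this puts the machine first, here's a minimal sketch (in Python, for illustration only) of the arithmetic a composer would otherwise do by hand: turning a friendly 'frequency in Hz, amplitude as a fraction of full scale' note into the log-frequency and decibel figures described above. The field layout and function name are hypothetical, not Music V's actual card format.

```python
import math

def to_music_v_note(action_time, instrument, duration, freq_hz, amplitude):
    """Return a Music V-style parameter list for one note.

    Pitch is entered as a log-to-base-5 frequency and amplitude as a
    decibel level (0 dB = full scale), per the conventions described
    in the text. Layout: [action time, instrument, duration, pitch, level].
    """
    log5_freq = math.log(freq_hz, 5)        # pitch as log-base-5 frequency
    level_db = 20 * math.log10(amplitude)   # amplitude as decibels
    return [action_time, instrument, duration,
            round(log5_freq, 4), round(level_db, 2)]

# A one-second 440 Hz note at half amplitude on instrument 1, starting at t=0:
print(to_music_v_note(0.0, 1, 1.0, 440.0, 0.5))
```

The point, of course, is that no composer thinks in log-base-5 pitches or decibel arithmetic; the conversion belongs inside the language, not on the composer's note pad.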

Figure 6. Example of a Music V instrument definition.


Choosing MCL



What, then, should one use as a starting point for a more workable sort of MCL for use in micro-based synthesis? Well, one possibility adopted by a number of systems is to develop a shorthand that's derived from the macros of conventional musical notation, ie., pitch values, rhythmic values, transpositions, tempo indications, and so on. The example shown in Figure 7 is purely hypothetical, but it serves to illustrate a possible extension to the programming techniques used in many hybrid programmable synthesisers. The one factor that gives this potential MCL some connection with Music V is the fact that the instrument definition ("Dyna-mute", in this case) immediately precedes the actual notes (the part called "Bass 1"). However, as an MCL, this example is limited by the closed-shop nature of the grammar, ie., notes that are only variable in two dimensions (pitch and duration) which are then plugged into fixed-patch analogue voices.
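To make the closed-shop point concrete, here's a sketch (the syntax, the "Dyna-mute" patch parameters and the 'pitch:duration' token format are all invented for illustration) of what parsing such a parameter-list score might amount to — an instrument line, a part line, then notes carrying nothing but pitch and duration:

```python
# A hypothetical Figure 7-style score: instrument definition first,
# then a note list in which each note is just 'pitch:duration'.
score = """
INSTRUMENT Dyna-mute: vcf=lowpass cutoff=800 env=fast-decay
PART "Bass 1" USES Dyna-mute
C2:8 C2:8 G2:4 Eb2:8 F2:8 C2:2
"""

def parse_notes(line):
    """Split 'pitch:duration' tokens into (pitch, duration) pairs.

    Duration is read as the denominator of a note value,
    so 8 means a quaver, 4 a crotchet, and so on.
    """
    notes = []
    for token in line.split():
        pitch, dur = token.split(":")
        notes.append((pitch, int(dur)))
    return notes

lines = score.strip().splitlines()
notes = parse_notes(lines[2])
print(notes)  # [('C2', 8), ('C2', 8), ('G2', 4), ('Eb2', 8), ('F2', 8), ('C2', 2)]
```

Notice that there's nowhere in the grammar to say anything else about a note — no dynamics, no articulation, no per-note timbre — which is precisely the two-dimensional straitjacket the text describes.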

The problem with this approach is that we've designed a musical data entry system that's too simplistic for its own good. It's as if the programmer has made up his own mind about what a composer wants to get out of the system and, therefore, has written the MCL so that the number of variables in the system is down to the barest minimum. Perhaps this reflects a poor understanding on the part of the programmer of what a composer actually does in his craft.

So far, the main justification for an MCL would seem to be that a) it puts instrument definitions and notes in the same place (a good thing) and b) it keeps keystrokes for data entry to a minimum (also a good thing).

Figure 7. Example of a parameter list approach to MCL.


But is there any reason why the composer should be restricted to the laborious specification of one note after another? This surely is missing the point of what an MCL should accomplish, ie., the communication of compositional instructions to the computer. Given that a variety of intuitively-used compositional devices (thematic inversions/transformations, and so on) are actually achieved by the mental equivalent of algorithms (what could be called 'silent compositional processing'), a true MCL should be flexible enough to incorporate all the hidden musical processing that a composer at present is obliged to do in his head. So, why not make the MCL sufficiently flexible as to allow the composer to write these algorithms into his score?
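Such 'silent compositional processing' really is algorithmic. As a sketch (the function names are illustrative — a real MCL might offer such operations as built-ins), here are two of the classic transformations applied to a theme expressed as semitone numbers, middle C = 60:

```python
def invert(theme, axis=None):
    """Mirror each pitch about an axis (default: the theme's first note)."""
    if axis is None:
        axis = theme[0]
    return [2 * axis - p for p in theme]

def transpose(theme, interval):
    """Shift every pitch by a fixed number of semitones."""
    return [p + interval for p in theme]

theme = [60, 62, 64, 67, 65]                 # C D E G F
print(invert(theme))                         # the theme turned upside-down
print(transpose(invert(theme), 12))          # the inversion, an octave higher
```

A score line like 'PLAY transpose(invert(theme), 12)' would then replace a hand-calculated string of notes — the computer doing the silent processing instead of the composer.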

An amanuensis should remove the hard graft of composition, and that's just as true for working out the magic squares or hierarchical structures that might be used to generate imagined sounds as for the final act of turning the imagined sounds into the splendours of sonic reality. Next month, we'll come down from the mountain tops to look at how MCLs in use today meet these requirements — in particular the Fairlight Composer, the Roland MC4, and a couple of Apple-based systems.


Series

Read the next part in this series:
Music Composition Languages (Part 3)



Previous Article in this issue

Rumblings

Next article in this issue

Muzix 81


Electronics & Music Maker - Copyright: Music Maker Publications (UK), Future Publishing.

 

Electronics & Music Maker - Nov 1983

Donated & scanned by: Stewart Lawler


Feature by David Ellis
