How East Met West
New technology has already brought the music industries of East and West closer together, but how has that affected the music itself, and what will happen as the two cultures get closer still? Gary Larson gives his viewpoint.
Everyone knows technology makes things more accessible, but how have electronic instruments benefitted from East-West dialogue, and what advances are likely to be made as that dialogue becomes more intensive?
"OH EAST IS East, and West is West, and never the twain shall meet..." The immortal words of Rudyard Kipling, written at a time when any cross-fertilisation between the two sides looked unlikely, to say the least.
Well don't look now, but in electronic music, at least, the twain may finally be meeting. The first formal link was established in late 1982, when representatives of American and Japanese synthesiser companies created what everyone who is anyone now knows to be the Musical Instrument Digital Interface (MIDI), a hardware and software protocol.
If nothing else, MIDI has cured many of the compatibility problems that still afflict microcomputers, and it has also provided us with all manner of new gadgets and programs. In time, it may even realise its full potential as the basis of a computer music network, with bulletin board systems serving as portable IRCAMs, winging new music and new ideas about music into our homes.
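For the curious, the protocol itself is disarmingly simple. As a rough illustration (my own sketch, not anything from Yamaha's documentation), a MIDI Note On message is just three bytes sent down a serial line: a status byte carrying the channel number, then the note number and the velocity.

```python
def note_on(channel, note, velocity):
    """Build a three-byte MIDI Note On message.

    Status byte: 0x90 ORed with the channel (0-15);
    data bytes: note number and velocity (0-127 each).
    """
    assert 0 <= channel <= 15
    assert 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

# Middle C (note 60) at moderate velocity on channel 1 (numbered 0):
msg = note_on(0, 60, 100)
```

Any MIDI instrument, American or Japanese, agrees on those three bytes; that shared vocabulary is the whole point.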
In the meantime, while we await that golden age, there is another aspect of the converging twain that should not be overlooked. Thanks largely to some technology from the West - semiconductor chips, digital synthesis algorithms - as exploited by some revolutionary devices from the East - most notably Yamaha's FM synthesisers - the complex world of electronic music has another, more fundamental common denominator: a shared means of producing sounds.
The music itself is still as varied as ever (some would say as restricted as ever), but growing numbers of players - from unknown garage bands to the galaxy of rock 'n' roll stars, and even academic composers tucked away on college campuses - are using essentially the same instrument, one of the several Yamaha DX/TX configurations.
Thus, while electronic music's Tower of Babel still stands, at last there now exists the possibility for communication between some of the diverse tongues. In short, MIDI has given us a way to talk, and digital FM synthesis has given us something to talk about.
And it's about time, too. From the very start, the world of electronic music was a divided one, with the divisions not so much linguistic or geographic as economic (the "haves" versus the "have nots") and professional (the "insiders" versus the "outsiders"). Some composers - that is, generally those with certain institutional affiliations - enjoyed access to high-priced equipment, while others had to make do with bargain-basement items.
Among the pioneers, a fortunate few (Schaeffer and Henry in Paris, Eimert and Stockhausen in Cologne, Berio in Milan) were able to ply their trade in state-owned radio studios. (Notably absent from this list is the BBC, whose Radiophonic Workshop neither encouraged experimentation nor welcomed outsiders; Roberto Gerhard was the lone exception among composers, and others, including Tristram Cary, Peter Zinovieff, and even the BBC's own Desmond Briscoe and Daphne Oram, were left to do their experimenting elsewhere.)
In contrast to the state-radio "insiders", John Cage resorted to simple audio-test recordings and variable-speed phonographs in his revolutionary Imaginary Landscape No. 1, of 1939 (and 12 radios in Imaginary Landscape No. 4 twelve years later).
Similarly, Vladimir Ussachevsky and Otto Luening, in their historic 1952 tape music concert at the Museum of Modern Art in New York, used borrowed equipment to create and perform their works.
Countless others, many with only razor cuts and frayed nerves to show for their efforts, studied Frederick Judd's 1961 classic Electronic Music and Musique Concrete, and attempted home-brew versions of the new sounds.
And once this new music became established, the "insiders" tended to be those with university affiliations. Milton Babbitt, for example, parlayed Rockefeller Foundation money and the generosity of the Radio Corporation of America into a near-monopoly of the RCA Mark II synthesiser, a $250,000, room-size, one-of-a-kind item at the Columbia-Princeton Electronic Music Centre.
Similar enclaves were formed at the University of Illinois (where Lejaren Hiller programmed the university's giant Illiac computer to compose works) and the University of Toronto, and eventually major centres were established in Paris (IRCAM), Utrecht, Stanford University, and the Massachusetts Institute of Technology.
And while England has no single university studio of the stature of these centres, the work of composers such as Denis Smalley at the University of East Anglia (whose Tides was one of the artistic triumphs at the International Computer Music Conference in Vancouver last summer) should not go unnoticed. And digital studios can be found at universities in Durham, Nottingham, and York.
Historically, the most important exception to this general dominance of academic institutions was equally remote from the composer in the street - the Bell Telephone Laboratories in Murray Hill, New Jersey. It was in this unlikely location that computer music got its start in the late fifties when Max Matthews, hired by the telephone conglomerate to research speech synthesis, hit upon the idea of using the high-speed computer to produce musical sounds. Out of Matthews' efforts grew MUSIC IV, a music programming language that mathematically replicated analogue synthesis modules, and which became the basis of similar languages used at computer-music installations all over the world. Still, this was a restrictive medium - yet another weapon in the arsenal of the "haves" that further separated them from the "have-nots".
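The core idea of Matthews' languages was the "unit generator": a piece of software standing in for an analogue module, with generators patched together in code much as cables patch modules together. A loose sketch of the concept (mine, in modern Python; MUSIC IV itself looked nothing like this) might be:

```python
import math

def oscillator(freq, amp, rate=44100):
    """A unit generator: a software stand-in for an analogue
    oscillator module, yielding successive samples of a sine wave."""
    n = 0
    while True:
        yield amp * math.sin(2 * math.pi * freq * n / rate)
        n += 1

# "Patch" the generator into a score: take one second of A440 at half level.
osc = oscillator(440.0, 0.5)
second = [next(osc) for _ in range(44100)]
```

Chaining such generators (oscillator into envelope into filter) mathematically replicates a modular analogue patch, which is exactly what made the approach portable to computer-music installations everywhere.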
Even with the advent and subsequent popularisation of small, portable analogue synthesisers in the sixties and seventies, when serious electronic composition and performance became viable outside the university, the distinction between, say, Walter Carlos' or Keith Emerson's mighty Rolls-Royce modular Moogs and the Ford Fiesta Minimoogs that just about everybody else drove, amounted almost to a caste system.
In time there were lots of compact synths to choose from, and some pretty sleek ones at that, especially after Sequential Circuits introduced the microprocessor to analogue synthesis with the Prophet 5. And with the Japanese invasion of Korg, Roland, Yamaha, et al, a whole fleet of inexpensive synths - the Hondas and Toyotas of the trade - became available.
Not surprisingly though, the question soon became one of numbers, with one's musical prowess seemingly related to how high a stack of keyboards one could command. And not even all that musical horsepower carried much weight among academic composers, who scorned those who played more keyboards than chord changes.
Enter Dr John Chowning, one of those composers in white coats, labouring away in Stanford University's Artificial Intelligence Laboratory and seeking the same elusive trumpet tones that Jean-Claude Risset had tried to capture several years earlier (1964) at Bell Labs. But whereas Risset had painstakingly used 16 or so digital oscillators to create separate envelopes for each harmonic, Chowning employed a mere two oscillators and elegantly simple frequency modulation algorithms to create a similarly rich and dynamic spectrum. Before long, bell-like FM sounds were ringing through computer music installations throughout the world.
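Chowning's insight can be sketched in a few lines. The following is a simplified illustration of my own (not Chowning's code, and far short of Yamaha's multi-operator implementation): one sine wave modulates the phase of another, and a single number, the modulation index, controls how rich the resulting spectrum becomes.

```python
import math

def fm_sample(t, fc=440.0, fm=440.0, index=2.0, amp=1.0):
    """One sample of simple two-oscillator FM:
        y(t) = A * sin(2*pi*fc*t + I * sin(2*pi*fm*t))
    The output spectrum contains sidebands at fc +/- n*fm, whose
    strengths follow Bessel functions of the modulation index I."""
    return amp * math.sin(2 * math.pi * fc * t
                          + index * math.sin(2 * math.pi * fm * t))

# Render a tenth of a second at 44.1kHz; an index of 0 reduces
# the whole thing to a plain, unmodulated sine wave.
rate = 44100
tone = [fm_sample(n / rate) for n in range(rate // 10)]
```

Where Risset's additive method needed an oscillator and envelope per harmonic, here two oscillators and one time-varying index do comparable work; that economy is what made FM practical in an affordable instrument.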
But it was not until Yamaha acquired the exclusive rights to market the process that the digital revolution first reached beyond the academy to touch musicians everywhere. "The Performance is About to Begin", proclaimed American media ads when the DX7 and DX9 were introduced in 1983. That was perhaps the first understatement in the history of the business.
It was not the extent to which popular musicians embraced the new instrument that was surprising, though. Rather, it was the emergence of the DX7 (or its rack-mounted, eight-headed big brother, the TX816) as the instrument of choice among so-called "serious" composers, the erstwhile "insiders" who became intrigued by the power of FM and its potential to create a diverse community of sonic explorers.
In Vancouver last year, for example, several works - highlighted by Canadian composer David Keane's Elektronikus Mozaik - were based on sounds generated by Yamaha DX instruments. Long-time Buchla synthesist Morton Subotnick (see interview elsewhere this issue) is now working exclusively with Yamaha equipment, and even Max Matthews has brought a DX7 and TX216 into his studios at Bell Labs.
A cottage industry has grown up around the development of DX software, and users' groups have been formed to exploit the new technology to its fullest.
Other flavours of digital synthesis - Casio's Phase Distortion or Sequential's Vector Synthesis - may prove equally fruitful. But regardless, it is clear that the "insiders" no longer have the electronic music market cornered. The new sounds are there for all of us to explore, and thanks to MIDI, we can now share our explorations with others.
Feature by Gary Larson