
TechTalk (Part 1)

John Chowning

John Chowning, inventor of FM synthesis and the man indirectly responsible for Yamaha's DX synths, receives an interrogation from Simon Trask. In the event, our man scarcely gets a word in edgeways.


As part of their British Music Fair activities, Yamaha brought FM synthesis inventor John Chowning over to Olympia to give a series of public talks. He also gave E&MM an interview so informative, we're having to run it over two issues.

Almost anyone who's worked with Yamaha's ground-breaking series of Frequency Modulation synths will have come across the name of John Chowning, the founding father of FM's musical applications. For some years now, he's been director of the Centre for Computer Research in Music and Acoustics at Stanford University in California, but currently he's on sabbatical at IRCAM in Paris - just a short hop across the water from August's British Music Fair. The BMF seemed an ideal opportunity to get the lowdown on a chunk of music technology history, to discover what he's working on now - and to find out what the man's really like 'in the flesh'. In fact, far from being a lofty academic, Dr Chowning proved to be a modest, approachable and above all open-minded man, with plenty of enthusiasm for the latest trends in music technology. But first things first...

I can't imagine many people know that much about you, so let's start with some background.

My background is all musical, and my training rather traditional. However, I played and had a lot of interest in jazz for a number of years when I was younger, in which context I played the drums. I became interested in composition while I was at Wittenberg College in Ohio, and gained a degree in composition there. Then I went to Paris and studied composition with Nadia Boulanger for three years, after which I went to Stanford as a graduate student in Composition.

After a year of graduate work I read an article by Max Mathews - this was, I think, in December 1963 - which was the first published work on the use of computers to synthesise sound. It was rather different to the work of Hiller and Xenakis, which revolved around the use of computers in composition. While I was in Paris, Boulez had the Domaine Musicale series, so while I was studying with Boulanger I'd heard a lot of new music involving electronics - some of the early works of Stockhausen such as Kontakte, and Berio's Omaggio a Joyce - so I was intensely interested in electronic music and in loudspeakers as a medium. At Stanford they didn't have any electronic music equipment of the sort that studios in those days had, but they did have a fledgling computer system that was rather good for the time, at the beginning of the Artificial Intelligence project. So having read this article by Max Mathews I investigated the possibility of our implementing his programs at Stanford - Bell Labs' MUSIC IV was the program they had at the time. With the help of a young mathematics undergraduate student, David Poole, I was able to implement the MUSIC IV program at Stanford using a PDP-1 and an IBM 7090 joined together through a shared common disk, and that was, I think, the first on-line computer music system in the world.

That was my beginning with computer music in 1964, and I quickly saw that I could learn to program in less time than it takes to learn good counterpoint or 18th century harmony, and that learning to program extended a kind of freedom of rather extraordinary dimensions - that is, I realised that if I could program the computer I could kind of be an engineer without ever having to learn to solder.

I quickly became interested, in '64 and '65, in the problem of space, and in trying to create sounds with the computer which did not seem to come from point sources - that is, loudspeakers. In fact, the first bit of musical acoustic or psychoacoustic research was in creating spatial illusions, and with just a modicum of programming capability I was able to design these programs which moved sounds in space. I learnt a lot about perception, and I also learnt that to have effective control of these machines, you needed to have some skills in acoustics and psychoacoustics. And of course, my musical background was the most valuable thing of all, because my ears were what led me everywhere, including to FM.

Had you learnt anything about acoustics or psychoacoustics as part of your musical training?

No, not in my musical training. I learnt about them 'in the lab', so to speak - kind of an incidental education. Very few music schools even today give any training in acoustics - and psychoacoustics would be even more strange.

So learning to program was a very exciting thing for me, and I realised a lot about the value of programming using a high-level language - which at that time was Fortran. In fact, MUSIC IV was written in Fortran and assembly language.

We got a PDP-6 in 1965, and David Poole and I wrote a special program based on MUSIC IV but optimised to make use of the timesharing environment that was provided by the PDP-6. It was an improvement on MUSIC IV in some ways, and like that program, it was based on the idea of the unit generator - a very important conceptual notion that Max Mathews first presented in the early MUSIC programs, and which allowed those of us who had interest but no technical training to get hold of these ideas of signal processing without having to encounter the mathematics, which was at that time (and still largely is) completely foreign to me.
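The unit generator idea survives in every modern synthesis language. As an illustration, here is a minimal sketch of the concept in Python - a language chosen purely for clarity, since MUSIC IV itself was written in Fortran and assembly - with all names and the sample rate being our own choices.

import math

SR = 44100  # sample rate in Hz - a modern value, chosen for illustration

def oscillator(freq, amp, n_samples):
    # A sine-wave unit generator: a small, self-contained signal block.
    for i in range(n_samples):
        yield amp * math.sin(2 * math.pi * freq * i / SR)

def mixer(*inputs):
    # A summing unit generator: adds corresponding samples from its inputs.
    for samples in zip(*inputs):
        yield sum(samples)

# Unit generators are patched together much as in a MUSIC IV 'instrument':
out = list(mixer(oscillator(440, 0.5, SR), oscillator(660, 0.3, SR)))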



"The Yamaha GS1 had 50 chips in it just for FM processing. The DX7, by contrast, has just two."


We were very naive technically - there are 16-year-old kids today who have much more knowledge about sampling, the digital representation of sound and so forth. But we were working side by side with computer scientists and engineers, and it was an almost ideal environment because there were enough people in the lab that I could ask questions of before I'd cycle back to the first person and embarrass myself by repeating my questions.

By 1966, I'd worked out the basic ideas for moving sounds through illusory space - that is, for controlling the angle, velocity and apparent distance of sound. So I could move sounds through this illusory space with only three or four loudspeakers and have a 360-degree sound-space. That was very exciting. How to pan sound was pretty obvious - the only robust parameter which you can control is relative amplitude between speaker pairs. However, the question of distance was not well understood, and there I think we made a bit of a contribution through realising that distance is mostly a function of the ratio of direct to reverberant signal. If you have independent control over reverberation and direct signal, you can create an illusion which I think is the analogue of perspective, so I call it 'auditory perspective'. In the auditory domain, something that is soft in pressure may have the subjective impression of great intensity, because we understand that it's not soft because it's soft, but it's soft because it's far away. But at the source it's loud because we hear a relatively great amount of reverberation with the direct signal. It's a very important subjective cue, and one which I feel ought to be better used in the synth or loudspeaker music that we hear. Much of it I find so loud that it's on the edge of pain, and I think often it's really not what composers or performers wish. They want a big sound, sure, but not pain.
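To make the two cues concrete, here is a hedged sketch in Python - our own simplified formulas, not Chowning's actual programs. Equal-power panning between a speaker pair sets the apparent angle; the balance of direct to reverberant signal sets the apparent distance. The 1/d and 1/sqrt(d) laws below are a common simplification, assumed for illustration.

import math

def pan_gains(position):
    # Equal-power panning between a speaker pair; position runs 0..1
    # from one speaker to the other.
    theta = position * math.pi / 2
    return math.cos(theta), math.sin(theta)

def distance_gains(distance):
    # The direct signal falls off faster with distance than the
    # reverberant signal, so their ratio carries the distance cue.
    direct = 1.0 / distance                  # assumed 1/d law
    reverberant = 1.0 / math.sqrt(distance)  # assumed 1/sqrt(d) law
    return direct, reverberant

# As a source recedes, the direct component fades faster than the
# reverberant one - which the ear reads as increasing distance:
for d in (1, 2, 4, 8):
    print(d, distance_gains(d))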

What's needed is a better understanding of the perceptual attributes - the distance cue, for example. I've thought a lot more about this distance perspective in recent years, and in particular the relationship of spectrum to perceived amplitude. With the DX7 you have control over spectrum as a function of velocity, which is very important. One of the important attributes of FM is that such a relationship is an easily made coupling, because velocity is not just loudness or intensity. In fact, sometimes the better representation of loudness is not intensity at all, but spectral richness or bandwidth. That was implemented easily because the algorithm allows it.
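A minimal sketch of that coupling, in Python; the linear velocity-to-index mapping is our own illustrative assumption, not the DX7's actual response curve.

def index_from_velocity(velocity, max_index=8.0):
    # Map a MIDI-style key velocity (1..127) to an FM modulation index:
    # harder playing widens the spectrum rather than merely raising the
    # output amplitude.
    return max_index * (velocity / 127.0)

print(index_from_velocity(20))   # low index: few sidebands, a dull tone
print(index_from_velocity(120))  # high index: many sidebands, a bright tone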


So at what point did the initial development of FM synthesis take place?

That began in 1966. I was quite happy about what I was able to do with the projection of sounds in space, but the sounds themselves were still rather dull. What could we do with a computer, without using vast amounts of computer time? We could make a square wave, triangle or sawtooth or a sinusoid - perhaps sum a few sinusoids - but nothing terribly interesting or dynamic, and certainly nothing that approached the richness of sounds that we experience in the natural world. And so I was experimenting with modulation (vibrato, in fact), realising that with the computer I could extend the depth and rate of the vibrato to arbitrary limits. There was no boundary because it was done in software. Mind you, this was not in real time. A few seconds of music would maybe take a couple of minutes to come through, and for any number of minutes it could turn into hours. So we didn't make many pieces, and we made them with very great care.

In experimenting with these sounds I just kept pushing the vibrato rate and depth, with one sinusoid modulating another. I noticed that once the vibrato rate moved up into the audio band, I didn't experience the instantaneous change in pitch any longer. Instead, what I was hearing was a change in timbre. So then I just picked some values: say a carrier frequency (at that time I called it a centre frequency) of 100Hz and a vibrato rate of 100Hz, and maybe a vibrato depth of 100Hz. That seemed strange, but the computer didn't care. I noticed that it was no longer a sinusoid I heard, nor was it a sinusoid with vibrato, but a slightly richer tone than would be formed by a sinusoid on its own, still at 100Hz. Then I discovered that if I put everything up by a factor of two the pitch was an octave higher, as you would expect, and the spectrum seemed to be about the same.
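The experiment is easy to reproduce today. Below is a minimal sketch in Python of exactly this configuration - one sinusoid modulating the phase of another, with carrier, vibrato rate and vibrato depth all at 100Hz. Variable names are ours.

import math

SR = 44100           # sample rate in Hz
fc = 100.0           # carrier ('centre') frequency
fm = 100.0           # modulating ('vibrato') frequency
dev = 100.0          # peak frequency deviation ('vibrato depth')
I = dev / fm         # modulation index

def fm_sample(i):
    t = i / SR
    return math.sin(2 * math.pi * fc * t + I * math.sin(2 * math.pi * fm * t))

signal = [fm_sample(i) for i in range(SR)]  # one second of the 100Hz tone

# Doubling fc, fm and dev together raises the pitch an octave while
# leaving the fc:fm ratio and the index I unchanged - which is why the
# spectrum 'seemed to be about the same'.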

So I did a few experiments, first with basic integers and then with some more complex ratios, and realised that with very simple control over two oscillators, I had all these different kinds of tones, and I thought that was wonderful. Then I realised that if I made the modulation depth of one of these oscillators change in time, I had what was a standard effect on synthesisers of the time - the bandpass filter. But I also knew that you couldn't do those things in digital at the time, because it was just too expensive. So then I asked an engineer to help me understand what was going on according to the theory. We looked at a standard engineering text and he explained to me the equation and its trigonometric expansion, which really points at the spectral domain - and that was what was interesting. We discovered that what I was doing was exactly predictable by the equation defined in the 1920s for FM broadcasting, where the modulating signal is the music or speech and the carrier is a sinusoid far above the audio band. To explain that, they start with a simple case and use a sinusoid as a modulator, and that's exactly what I was doing, except that I was doing it so that the carrier was in the audio band.
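The equation he refers to is simple FM with a sinusoidal modulator; its trigonometric expansion (standard FM theory, set here in LaTeX notation) is:

e(t) = A \sin(\omega_c t + I \sin \omega_m t)
     = A \sum_{n=-\infty}^{\infty} J_n(I) \sin((\omega_c + n \omega_m) t)

where \omega_c and \omega_m are the carrier and modulating frequencies, J_n is the Bessel function of the first kind of order n, and the modulation index I is the ratio of peak frequency deviation to modulating frequency. The expansion is what 'points at the spectral domain': it predicts sidebands at the carrier plus and minus whole-number multiples of the modulating frequency, each weighted by J_n(I).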



"After I'd synthesised trumpet tones using FM, I realised there was probably some commercial interest in the technique."


Having found this theoretic explanation, we were able to predict pretty much what was going to happen. Then we extended it: what happens if you have two carriers or two modulators in parallel? Or cascade modulation as in Algorithm 1 on the DX7? It's the explanation that's complicated, not the actual process, but that's the trade-off. Intuitively it's not so clear, but computationally it's efficient, whereas additive synthesis is intuitively very clear but computationally a bit unwieldy, so the database is pretty large.
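Both extensions are easy to express. A hedged sketch in Python follows, with all frequencies and indices being arbitrary illustrative values of our own.

import math

def cascade_fm(t, fc=200.0, fm1=100.0, fm2=50.0, I1=2.0, I2=1.5):
    # Three operators in a stack: modulator 2 modulates modulator 1,
    # which in turn modulates the carrier - cascade modulation.
    m2 = I2 * math.sin(2 * math.pi * fm2 * t)
    m1 = I1 * math.sin(2 * math.pi * fm1 * t + m2)
    return math.sin(2 * math.pi * fc * t + m1)

def parallel_carriers(t, fm=100.0, I=2.0, fc1=200.0, fc2=300.0):
    # One modulator driving two carriers, their outputs summed.
    m = I * math.sin(2 * math.pi * fm * t)
    return 0.5 * (math.sin(2 * math.pi * fc1 * t + m)
                  + math.sin(2 * math.pi * fc2 * t + m))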

I experimented with a number of different forms of the algorithm and realised that it was extensible. If you had three oscillators then you got what you'd expect, maybe twice as much power or even more. I did a number of simulations of various tones, but there were some that I couldn't do, like brass tones, and I didn't quite understand why. However, I'd made the acquaintance of Jean-Claude Risset, a French composer who at the time (1967) was working at Bell Telephone Laboratories with Max Mathews on computer music, and who'd done some analysis of brass tones. Through the analysis and Fourier synthesis of those tones, he'd realised that one of the signatures of brass tones was the relationship between increase in intensity and increase in bandwidth - the harmonics came in sequentially during the rise time of the amplitude envelope.

In 1970 I remembered this, and while I was looking at the basic FM equation I thought that if I used the same envelope for the modulation index as I'd used for the amplitude envelope, then the right thing should happen. Within a half-hour or so I had some unbelievably good brass tones, rather elegantly done with only two oscillators. At that moment I realised that there are a lot of correlations between energy and bandwidth - with the bell tones, for example, which were probably the first truly realistic tones that I'd managed with just two operators. I was using the same envelope for the modulation index as I was for the amplitude, except that the ratio of carrier to modulator was one which gave inharmonic components.
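Here is a sketch of that half-hour's insight in Python. The 1:1 carrier-to-modulator ratio and the peak index of around five follow the spirit of Chowning's published brass example; the envelope shape is our own simple stand-in.

import math

SR = 44100
fc, fm = 440.0, 440.0   # 1:1 ratio gives a harmonic, brass-like spectrum
I_max = 5.0             # peak modulation index

def envelope(t, attack=0.06, dur=0.6):
    # A simple linear attack and decay, normalised to 0..1.
    if t < attack:
        return t / attack
    return max(0.0, 1.0 - (t - attack) / (dur - attack))

def brass_sample(i):
    t = i / SR
    env = envelope(t)
    # The modulation index follows the amplitude envelope, so the
    # spectrum widens as the tone gets louder - the brass signature.
    mod = (I_max * env) * math.sin(2 * math.pi * fm * t)
    return env * math.sin(2 * math.pi * fc * t + mod)

tone = [brass_sample(i) for i in range(int(0.6 * SR))]

Swap the 1:1 ratio for an inharmonic one - Chowning's published bell examples used ratios such as 1:1.4 - and the same two-operator patch with coupled envelopes yields the bell tones he mentions.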

The brass tone was a big step, because I realised then that Jean-Claude had used maybe 16 oscillators in parallel to get a sound which was different, but not necessarily better-sounding, than the one that I was doing with just two oscillators.

How did Yamaha come to adopt FM synthesis?

Well, just after I'd synthesised these trumpet tones using FM, which was in '71, I realised that there was probably some commercial interest in such a technique. So I explained it to an office at Stanford called Technology Licensing, which was just beginning at that time. I played them some examples, and explained how it was done in a rather simple way. They contacted some American organ companies to see if any of them would be interested. Hammond sent some engineers a couple of times, but they just didn't understand this idea of 'digital'. They thought the tones I was producing were very interesting, but I don't think they really understood when I explained how it all worked - it was just not a part of their world.

Anyway, Stanford University heard that Yamaha were making organs, though they were not a well-known company in the States at the time, at least not in that area. So Yamaha sent one of their engineers who happened to be in Los Angeles at the time, and after ten minutes of explanation he understood exactly what I was talking about. Yamaha were already, I guess, thinking digital and doing theoretic work in the area.

So Yamaha quickly became very interested. I signed the rights of the patent over to Stanford, who paid all the patent costs, and I think that rightly they get most of the income from the royalty, because the work was done in their labs, and because they use most of the money to support the Centre for Computer Research in Music and Acoustics.

Over the years Yamaha worked at the development of FM, but it couldn't be implemented in a commercially viable form until the development of VLSI chips. Remember that the GS1, Yamaha's first FM keyboard, had 50 chips or so in it that were related just to the FM processing. The DX7, in contrast, has just two. There were eight operators in the GS1 whereas the DX7 has six, but then there are aspects of the DX7 which are more complicated than the GS1, so it may be about the same order of complexity. But the reduction is enormous, so when people say that I developed the DX7, I have to say that's not true. It's an idea that I had, and I worked with Yamaha, but it required a very high degree of mathematical expertise to realise, and I don't have that expertise. I was very naive, and it was maybe because I was not very well educated in technology that I was able to discover what I did. Radio engineers are so programmed to think of the carrier frequency as being in megahertz, outside of the audio range. I was a naive musician doing something that I shouldn't have been doing, in a way. It didn't make any sense musically to have vibrato that deep and that fast, and from the point of view of engineering, there was no particular interest in the results of having such low carrier frequencies.

Next month: Using FM to mimic grand pianos until musicians can't tell the difference, working with Dave Bristow, and the future of MIDI...




Electronics & Music Maker - Copyright: Music Maker Publications (UK), Future Publishing.

 

Electronics & Music Maker - Sep 1985

Interview by Simon Trask
