
TechTalk (Part 2)

John Chowning

Part 2 of an in-depth interview with John Chowning, inventor of FM digital synthesis. Simon Trask gets in a few more questions than he did last month.

The second part of our interview with John Chowning, inventor of FM synthesis and father of the Yamaha DX instruments that use those principles.

After a first half that dealt mainly with John Chowning's background and the story behind the development of FM synthesis and Yamaha's involvement with it, it seemed worthwhile to ask him what his present tasks included, and what he thought the future had in store for both the academic and the commercial music fields.

How did you get together with Dave Bristow?

I first met David just over a year ago through Gary Leuenberger, who I'd known for a couple of years in the context of the GS1. David and Gary had already worked together on the sounds for the DX products. Anyway, David was visiting California and Gary brought him to see me at Stanford, so I gave him a tour of the lab and showed him some of the work we were doing.

Then at the last ICMC in Paris (see report, E&MM February '85), David was demonstrating some of the Yamaha gear. I told him that I was coming to IRCAM on sabbatical for six or seven months from February of this year, and that I wanted to work on a piece which would use Yamaha FM equipment. So we talked about the idea of collaborating, and of him helping me learn about the Yamaha gear, which I had never really had much hands-on experience with. Happily it's worked out well, and Yamaha have kindly provided us with the gear and provided him with the time to work with me.

We also discussed the possibility of him and me working on an FM tutorial that would cover both the theory and the practice of FM. This is now well under way, and should be available before the end of the year. Unlike the original paper that I wrote, we're doing this for people who are musicians - in a sense, for myself, remembering how I learned and understood with very minimal knowledge of mathematics. We're also bearing in mind that you can use instruments such as the DX7 as a very effective means of pedagogy - that is, teaching people not only about FM but also about acoustics and some aspects of psychoacoustics, which are very important to understand. We hope this will be a great aid for those who want to program FM sounds.

What led you to the idea of writing a piece which would make use of the new Yamaha equipment?

I realised that the GS1 was an instrument with a keyboard which felt comfortable to pianists, and that the sounds could respond to gesture in a way that was natural to pianists. Consequently I felt that some pieces from the 'art music' world could be done with these instruments, and that maybe some really skilled, virtuoso pianists could think about playing these keyboards now. The sounds will obviously not be the same, but what they do with gesture could be captured and effectively used. So I thought about a piece using GS1-like keyboards with maybe a computer connected so that I could have some kind of control over the timbres under the pianists' fingers. So I proposed this and asked Yamaha if they'd let me have some of this equipment so that I could write the piece for three or four keyboard players. And then about this time the DX7 came out, which was really based on the DX1 and had one of these good keyboards but a better synthesis algorithm than the GS1. It was more sophisticated, and the timbres were more varied and more extensible.

You were used to working with very powerful computer systems. Didn't you find the DX7 a bit of a come-down?

In raw synthesis power that's true, but remember there are some things you can't get at except in the sort of real time that you have on the DX7. At Stanford we have a very powerful digital synthesiser controlled by a PDP10 computer, with absolutely arbitrary control over, for instance, the number of operators, the number of envelope segments, independent segments for frequency and amplitude, and independent frequency for each operator. It's real-time computation, and it could be performed in real time, though we don't have it set up for that because it's used in a time-sharing lab context.

"People who feared technology felt that it could only create universal sounds — but they didn't understand the power of digital technology."

When I first had experience of the KX88 controlling the DX7 or TX816, I called that real-time squared, because it's another dimension which is not exclusive but complementary to the kind of experience that I've had. It's a bounded system, simpler than what we can do, but it's very well defined, so you tend to explore it in a more extensive way. And there's a kind of control over the synthesis and programming of the synthesis that you don't have unless you have a dedicated, unique system. So I've learnt a lot, and I think it's another qualitative experience - as I said, real time squared.

So at IRCAM I'm working on a piece for three KX88/TX816 combinations that'll be played by three pianists. David and I have been working hard at getting the best piano sounds we can out of the TX816, and we've done rather well, I think. Some of the tones we've produced are appropriate for classical music, and I feel this has an interesting future. In the past, those who feared technology felt that although some interesting sounds might be produced, there would only be universal sounds. But they didn't understand the power of digital technology, and the fact that sounds can be particularised. We can imagine a future where each 'piano' can be particularised to the requirements of the individual player. The technology humanises itself, in a way, when it's understood and used in thoughtful ways. And the more powerful the technology becomes, the more flexible it becomes.

Anyway, we're using the DX7 to develop these piano sounds with a couple of algorithms. I'm asking good pianists to play this piece, and part of the concept of the piece is that as performers they make contact with a world that's very familiar to them, and yet they also make contact with a world that is not familiar to them - but the force of their learning and control in virtuoso gesture is effective. It's not a piano piece, but at a certain point in the piece I wanted to have the best possible piano sounds I could get. We've used a spectrum analyser a bit to get a feel for the spectral aspects of real piano tones, but mainly it's a matter of using your ears. David is a very skilled player, and as I'm not a pianist, his ability has been of very great value. He's worked on some of the high tones, though of course there's been a great deal of sharing of ideas.

The critical thing was finding an algorithm for producing the very rich bass tones of the piano. So I've used an algorithm which has three operators in a cascade (algorithm 16 or 17, I think), and ratios which gave me a large bandwidth but with a lot of energy around the fundamental. The key scaling ability of the DX7 is also of great importance - the fact that you can fade out one operator's effect on the carrier wave and fade in another with perhaps a different ratio to the carrier.
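For readers who want to experiment, the cascade Chowning describes can be sketched in a few lines of Python. This is a minimal illustration, not his actual patch: operator 3 modulates operator 2, which in turn modulates the carrier, and the ratios, modulation indices and decay envelope are assumed values chosen only to show the structure.

```python
import math

SAMPLE_RATE = 44100

def fm_cascade(freq, duration, ratios=(1.0, 1.0, 14.0), indices=(2.5, 1.5)):
    """Three-operator FM cascade: op3 modulates op2, which modulates op1
    (the carrier). Ratios and indices here are illustrative assumptions,
    not values from the interview."""
    n = int(SAMPLE_RATE * duration)
    out = []
    for i in range(n):
        t = i / SAMPLE_RATE
        env = math.exp(-3.0 * t)  # simple exponential decay envelope
        # Top operator, at a high ratio to spread sidebands wide
        op3 = math.sin(2 * math.pi * freq * ratios[2] * t)
        # Middle operator, phase-modulated by op3
        op2 = math.sin(2 * math.pi * freq * ratios[1] * t + indices[1] * env * op3)
        # Carrier at the fundamental, phase-modulated by op2
        op1 = math.sin(2 * math.pi * freq * ratios[0] * t + indices[0] * env * op2)
        out.append(env * op1)
    return out

samples = fm_cascade(55.0, 0.1)  # a short fragment of a low A
```

The high ratio on the top operator gives the large bandwidth, while keeping the carrier at the fundamental concentrates energy there - the combination Chowning mentions for the bass register.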

What about the fact that performance of these instruments is rooted in the chromatic scale?

I would like to have control over that. There's a lot of interest within the 'art music' world in microtonality, and in inharmonic spectra coupled to alternative tuning systems (which I used in a piece of mine called 'Stria', composed in 1977). I hope that we can develop systems where the user has control over the division of the frequency space, and I think it would be interesting in the commercial music world, as well, to have the option of not using tempered tuning. A lot of musicians working with old music would love to be able to play with these quite remarkable harpsichord tones in, say, mean-tone temperament. And other musics have other kinds of tuning - Balinese music, for instance. There are lots of reasons to make the frequency divisions finer and available to the user.
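The frequency divisions Chowning talks about are straightforward to express in code. The sketch below is an illustration, not any manufacturer's implementation: it computes pitches in an arbitrary equal division of the octave, with a quarter-comma mean-tone fifth for comparison.

```python
def edo_frequency(base_hz, steps, divisions=12):
    """Frequency `steps` scale-steps above `base_hz` in an equal division
    of the octave. divisions=12 gives standard tempered tuning; other
    values give the finer frequency divisions discussed above."""
    return base_hz * 2.0 ** (steps / divisions)

# Quarter-comma mean-tone narrows the fifth so major thirds are pure (5:4).
MEANTONE_FIFTH = 5.0 ** 0.25  # about 1.49535, versus 1.5 for a pure fifth

a4 = 440.0
tempered_fifth = edo_frequency(a4, 7)      # fifth above A4 in 12-note temperament
meantone_fifth = a4 * MEANTONE_FIFTH       # the same interval in mean-tone
quarter_tone = edo_frequency(a4, 1, 24)    # one step in a 24-division octave
```

Changing `divisions` is all it takes to move from tempered tuning to quarter-tones or any other equal division - the kind of user control over the frequency space Chowning hopes for.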

"Dave Bristow and I have been working hard at getting the best grand piano sounds we can out of the TX816, and we've done rather well, I think."

Companies like Roland and Yamaha know that, but it's a question of how much extra computation they can allow in an affordable instrument. As soon as multipliers become less expensive, for instance, there's a whole world of signal processing that can be realised digitally, and that will be very exciting.

During the years of developing FM, Yamaha were working in a whole different area, namely that of large-scale integrated circuits. This is extraordinarily important, because it's there that we find the power of the implementation. The DX7 has two FM chips, one of which does all the envelope generation and the other all the operator computation. It's through this implementation in VLSI that these instruments can be relatively inexpensive, and yet still be very powerful. Gradually we will have greater control over more and more musically-important dimensions, and frequency control is one of them.

There are other aspects of FM implementation that I think and hope will be realised: the simulation of resonance structures, for example. These are very appropriate for many categories of timbre, like vocal tones (with sung vowels), bassoon tones and some string tones. These have very strong resonant characteristics which can be nicely done with FM - but obviously it'll take more computation to achieve that.
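The resonance idea can also be sketched: if the carrier sits on a harmonic of the fundamental and the modulator runs at the fundamental itself, the FM sidebands land on neighbouring harmonics, clustering energy around a formant-like peak. The parameters below are assumptions for illustration only and are not taken from the interview.

```python
import math

SAMPLE_RATE = 44100

def fm_formant(f0, formant_hz, index, duration):
    """Formant-like resonance with simple FM: snap the carrier to the
    harmonic of f0 nearest the formant centre, modulate it at f0, and the
    sidebands fall on harmonics clustered around the formant. Illustrative
    sketch only."""
    carrier = round(formant_hz / f0) * f0  # nearest harmonic of f0
    n = int(SAMPLE_RATE * duration)
    out = []
    for i in range(n):
        t = i / SAMPLE_RATE
        mod = index * math.sin(2 * math.pi * f0 * t)
        out.append(math.sin(2 * math.pi * carrier * t + mod))
    return out

voice = fm_formant(110.0, 600.0, 1.8, 0.05)  # rough vowel-like band near 600Hz
```

The modulation index controls how widely energy spreads around the formant, which is roughly analogous to the bandwidth of an acoustic resonance.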

You work primarily in the 'serious/academic' music world. What, in your view, is the state of that world at the moment?

Well, I think that MIDI is going to change it enormously, and that's much to the good. Ten years ago we had no choice, if we wanted powerful real-time sound generation, but to design and build our own devices. But had there been something on the market we certainly would have purchased it. Why have a unique device if it's always posing problems of maintenance, and which you can't take anywhere because it's big and heavy? So I think the development of MIDI-based instruments, which are modular and have a standard format for control, will be of great importance to 'serious' music.

At the last ICMC in Paris, there was a whole session devoted to MIDI control for the first time. Why? Because this idea of control is of enormous importance to all of us. It means we can construct a system out of arbitrary components, in the same way that when I first began my work with computers, I used a big commercial machine which I designed by controlling modules of programs. The great virtue of these MIDI-based devices is that they are powerful yet affordable. So it's better to use a device like the 4X at IRCAM to do that which is unique to its capability, and use MIDI-compatibility and commercial devices to do what it can't do better. So if you want to do massive FM, get a TX816.

"Everyone at IRCAM, Stanford and so on realises that if you're going to work on something now, it's got to have MIDI."

So what do you see as the limitations of MIDI?

Well, in the initial stages people were saying that MIDI wasn't fast enough, because if you tried to put, say, five DXs in series there would be problems concerning simultaneity. But already the industry is solving that through the introduction of MIDI branching units, and there are MIDI processors appearing to tackle all sorts of requirements and problems without the actual standard itself having to be changed. And I think that's very healthy, because it allows people to really understand what's of value in MIDI before the whole standard changes, so consequently any changes will be made with a good deal of forethought. Certainly a new standard will try to maintain compatibility with the present standard, because nobody will want to make obsolete all this gear which is appearing now, and that's one of MIDI's greatest strengths.

I think that the value of the research centres, such as the one we have at Stanford, is that they can provide those methods and tools which can only be realised, explored and developed on large computer systems, but which can finally be implemented in more practical ways. This is, after all, what happened in the case of FM. And at Stanford, for instance, one of my colleagues who's a wizard at signal processing, Julius Smith, has a new idea for digital reverberation which maybe in the future will have a very great effect on what we can do in room simulation with affordable devices.

On the other hand, everyone in the world of IRCAM, Cornell, Stanford, UCSD and MIT and so forth realises that if you're going to work with anything today, it's got to have MIDI. Otherwise you're cutting yourself off from a whole development which can be done for you. So although I don't think we'll move just to MIDI-controlled devices, everything we want to do that concerns music production had better include MIDI.

As soon as I get back to Stanford we'll set up a MIDI studio. It's been a wonderful experience for me seeing the first MIDI studio at IRCAM - the first one in an academic context that I know of. The 4X at IRCAM, which is DiGiugno's very powerful signal processing device, has MIDI In and Out as well, and that's important.

One area where we no longer compete is digital reverberation: we can't do better in the lab than they can do now with commercial products. So the 4X shouldn't spend a single tick of its power trying to do reverberation any more, because it can be done more effectively, at less cost and with more reliability by commercially available devices. MIDI is the answer, and if it's control you're after, MIDI should saturate your thinking.

That just about concludes what John Chowning had to say during the all-too-brief time I spoke with him. Considering we had little more than an hour, he managed to pack an awful lot in - and almost all of it was well worth hearing. It was certainly refreshing, meeting someone from an academic background who was so obviously capable of putting his own work into a realistic perspective, and seeing beyond the confines of the non-commercial arena. There should be more people like Dr John Chowning...


Previous Article in this issue

The Beat Goes On...

Next article in this issue

Sequential Potential

Electronics & Music Maker - Copyright: Music Maker Publications (UK), Future Publishing.


Electronics & Music Maker - Oct 1985

Scanned by: Stewart Lawler



Interview by Simon Trask

