Fairlight's Father (Part 1)
TechTalk: Kim Ryrie
In the first half of an exclusive two-part interview, Fairlight co-founder and father of the CMI Kim Ryrie outlines the background behind the machine's invention, and gives a sneak preview of what his company may be doing in the future. Simon Trask pops the questions.
YOU MAY HAVE heard the story of the two Australians sitting by Sydney harbour, eating their lunch and arguing about what to call their new company. The harbour hydrofoil sped past while the two were in mid-argument, and they both noticed the name. Problem solved.
The two Australians were Kim Ryrie and Peter Vogel, and the hydrofoil was called "Fairlight". A star was born, but the origins of the Computer Musical Instrument have more to do with one man's enthusiasm for music and electronics than with water transport.
"In 1970 I started a magazine called Electronics Today International. We tried to come up with four DIY construction projects each month - a new garage door opener, things like that.
"After the Moog and Switched On Bach came out, I decided to run a do-it-yourself synthesiser construction project in the magazine. The result was an analogue synthesiser called the ETI4600. It was a bit of a monstrous beast, but it was fun."
However, that fun soon turned to dissatisfaction with what could be achieved using analogue techniques. Ryrie called in Peter Vogel, an old schoolfriend who also happened to be a wizard electronics designer. It was the start of a partnership which is still going strong, with Ryrie now managing director and Vogel head of R&D at Fairlight.
"I said: 'How about we start a company and build a new synthesiser? We could use these new microprocessor things...' My original plan was for a digitally-controlled analogue synthesiser - something like the Prophet 5 turned out to be. But around 1975 we met up with a computer consultant called Tony Furse, who was Motorola's consultant in Australia on the 6800 range of microprocessors, and who'd previously worked on the design of integrated circuits for Fairchild in the States.
"Tony had been working for some years on an all-digital waveform manipulation system. This would generate by means of additive synthesis a whole series of complex waveform cycles and in effect 'animate' them. So we picked up on this and worked on it for several years."
The prototype system consisted of some 20 circuit boards and 4K of static RAM (a large amount in those days), could handle eight voices, and was about twice the size and price of the current CMI - not a very viable device, commercially. But around 1977, the team turned their attention to the new 16K dynamic RAM chips that were starting to become available, and that, according to Ryrie, was the turning point.
"We got hold of some chips and designed one channel onto one card - memory access time wasn't fast enough to allow us to put all eight channels on a single card.
"Having designed this new system we then thought: 'Gosh! We've got so much memory - 16K for each channel - maybe we could actually sample a real sound and play it back'. So it wasn't until we'd designed the hardware that the thought came to us to do that. It wasn't hard to implement - you could buy eight-bit ADCs which weren't outrageously expensive because they were used in many other applications. Sampling didn't require anything particularly profound, technologically. We were just using two 6800s talking out of phase to the same waveform memory — a single 6800 simply didn't have the power to do what had to be done in real time.
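The sampling Ryrie describes really was that simple in principle: an eight-bit ADC fills a block of waveform memory, and pitch comes from reading that memory back at different rates. The sketch below illustrates the idea in Python; the function names and the 256-byte buffer size are illustrative, not Fairlight's.

```python
import math

def sample_8bit(signal, n):
    """Quantise a signal (values in -1..1) to n eight-bit samples,
    as an inexpensive 8-bit ADC of the period would have done."""
    return [max(0, min(255, int(round((signal(i / n) + 1.0) * 127.5))))
            for i in range(n)]

def play_back(samples, rate):
    """Read the sample memory at a variable rate to shift pitch:
    rate 2.0 plays the stored waveform back an octave up."""
    out, pos = [], 0.0
    while pos < len(samples):
        out.append(samples[int(pos)])
        pos += rate
    return out

# One cycle of a sine wave, stored in 256 bytes of "waveform memory".
wave = sample_8bit(lambda t: math.sin(2 * math.pi * t), 256)

# Reading every second sample halves the cycle length, doubling the pitch.
octave_up = play_back(wave, 2.0)
```

The same stored waveform serves every pitch; only the read rate changes, which is why one memory bank per channel was enough.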
"The first sampled sound ever came from a dog which belonged to one of our programmers. You could tell it to speak and it would bark. We sampled that onto the first prototype board, which had wires trailing all over the place. That sample ended up in the original Series I sound library."
Years of R&D doesn't come cheap, of course. Fairlight initially financed their operations by producing an Electronic Paintbox which converted a monochrome signal into six colours, and which sold well to local TV stations. Subsequently, the company produced the hardware for a business computer marketed by Remington Office Machines.
"The first sampled sound came from a dog that belonged to one of the programmers... We sampled it onto the first prototype board, and it ended up in the original sound library."
"We didn't get rich doing that, but it certainly made us enough money to keep going. That lasted for about two years, which was long enough for us to start being able to sell the CMI I."
Ironically, it was Japan's entry into the business computer market which prompted Fairlight to phase themselves out of that area, feeling it would no longer be profitable for them.
The Series I CMI made its debut in 1979, with sampling as the star of the show and the original additive synthesis system relegated to a supporting role on Page 4. The Series II followed a few years later with essentially the same architecture, but offering improved fidelity, and replacing the 6800s with dual 6809 processors.
Today, the Fairlight has achieved such fame that even non-musicians are aware of what it does. But what did musicians and producers make of the CMI when it was first launched?
"We had the CMI at an AES show in New York in 1979, and the reaction was universally 'Oh my God, that's amazing — what would you use it for?', which was a bit depressing. The software was rather basic, but we were able to play sounds on the keyboard, and some people immediately thought it was wonderful - but as a whole it took a while to get moving. In terms of sales the response was enough to keep us in business, but we weren't about to turn into a multinational overnight."
But if the response from many people was initially one of non-comprehension, the underlying ramifications of the system's arrival were bound to set in before too long. Here, after all, was a technique which couldn't create any sound of its own, but which could imitate the sound of any other instrument. Was the initial idea behind the CMI that it should be a "transparent" instrument?
"I don't think we did necessarily feel that a flute should sound like a flute. At the back of everyone's mind, of course, that's the ideal situation, but it was more a matter of going for what you could - what could we do? Many productions were done with flutes not sounding like flutes, but it seemed right for the piece of music. So we really didn't get involved in those discussions; it wasn't really our department.
"Obviously one of the big questions was: Aren't we going to put all these acoustic players out of business? But that was never the intention; rather, the intention was just to be able to play any sound - and traditional instruments just happen to be a subset of that.
"We made the machine because we wanted things to sound complex, not necessarily like a classical instrument. I personally love classical instruments, and I love real orchestras. We're starting now to be able to do that on the Series III, and even then it's only quite reasonable. That personally gets me terribly excited, because I love the power of a real orchestra and I'd never heard that reproduced on an electronic instrument. So in a way I'm thrilled that we can do it, but we weren't disappointed when we couldn't."
"We had the CMI at an AES show in New York in 1979, and the reaction was universally: 'Oh my God, that's amazing - what would you use it for?', which was a bit depressing."
While the CMI's sampling ability stole the limelight, there was another aspect of the Computer Musical Instrument which had a profound impact on the musicians who actually used it: the famed Page R. How did the rhythm page come about?
"While working on the sampling, we were also working on a keyboard sequencer which we called Page 9. It was an overdubbing keyboard sequencer that recorded key velocity, but which didn't really have good editing facilities at all.
"I think it was at the 1980 AES show that Roger Linn came along with his drum machine. We let him share our booth, and we gave demonstrations on the half-hour, alternating demos with Roger. I honestly thought that Roger was going to slash his wrists by the end of the show. Everyone was coming in and playing with his drum machine, and saying: 'That's amazing. What would you use it for?'. He was so depressed...
"I was quite impressed by the organisation of his drum sequencer, with patterns and so on. I thought that approach could work on the Fairlight with all our sampled sounds. So I drew up a very simple display page with blobs for the notes, and we thought we could call it the Rhythm Sequencer. Our programmer, Michael Carlos, wrote Page R from that basic specification, and although we intended it to be rhythmically oriented, we found that people were using it increasingly for more general music composition.
"So we added more and more features that we felt composers would want, and Page R developed into a very interactive sequencer because of the way the memory is structured. It has a block of memory with X number of bytes in it all the time, whether they're used for notes or not. So it's incredibly inefficient as a memory storage system, but it's terribly interactive because of that structure. That's why it's only 16 monophonic channels - eight on the Series II - and of necessity is quantised to some degree.
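The trade-off Ryrie describes — a fixed block of memory for every slot, note or no note — is what makes a grid sequencer feel instant: placing or removing a "blob" is a constant-time write, with no data to restructure. A minimal sketch of that layout, with invented names (this is not Fairlight's actual data format):

```python
# Illustrative fixed-grid pattern in the spirit of the Page R description:
# every (channel, step) slot exists in memory whether it holds a note or not.
CHANNELS, STEPS = 16, 16        # 16 monophonic channels, quantised steps

# One pattern = CHANNELS x STEPS slots; 0 means "empty".
pattern = [[0] * STEPS for _ in range(CHANNELS)]

def toggle(channel, step, note=60):
    """Toggling a 'blob' is a single array write -- the interactivity
    comes from never having to grow or shrink the structure."""
    pattern[channel][step] = 0 if pattern[channel][step] else note

def slots_allocated():
    # The grid always occupies the same number of slots, used or not:
    # wasteful as storage, but predictable and instantly editable.
    return CHANNELS * STEPS

toggle(0, 0)    # place a note on channel 0, step 0
toggle(0, 0)    # toggle it off again -- memory use is unchanged
```

An event-list sequencer like the later CAPS stores only the notes that exist, which is far more compact but makes in-place editing less direct.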
"The new CAPS sequencer for the Series III is the more traditional form of sequencer, in that it records note-ons and note-offs and so forth. The idea is that you can use Page R to get the structure of your song together, and once you've done that you can transfer across to CAPS and move on to the rest of the song."
As you may already know, CAPS is an 80-track sequencer which integrates MIDI into the Fairlight scheme of things. The software is only just being made available on the Series III, but Fairlight are already planning dramatic new developments.
"We've just purchased the exclusive rights to something called 'Clynes' microstructure', which we're intending to make available in the second stage of CAPS.
"Dr Clynes is the head of the electronic music department at the New South Wales Conservatorium. He spent about five or six years researching what he calls the 'pulse' of composers. His claim is that all of the classical composers, for example, have an inherent 'pulse' - the way that they play things. For instance, the third microbeat in some part of music will always be played slightly ahead of time by a few milliseconds, and perhaps at a slightly lower amplitude, and perhaps the note will attack and decay in a slightly different way. He spent an awful lot of time analysing all this on his department's computers, and he's come up with all these pulses for all the classical composers.
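The mechanics of a "pulse" as Ryrie describes it — each microbeat position carrying a small timing offset and amplitude change — can be sketched as a simple transformation over note events. The offsets below are invented for illustration; Clynes' measured composer pulses are not given in the interview.

```python
# A hypothetical composer 'pulse': for each microbeat position in the
# repeating cycle, a timing offset in milliseconds and an amplitude scale.
# These particular numbers are made up -- only the mechanism is from the text.
PULSE = [(0.0, 1.00), (4.0, 0.95), (-6.0, 0.90), (2.0, 0.97)]

def apply_pulse(notes, pulse):
    """Shift each note's onset and scale its amplitude according to its
    position within the microbeat cycle."""
    shaped = []
    for beat_index, (onset_ms, amplitude) in enumerate(notes):
        dt, gain = pulse[beat_index % len(pulse)]
        shaped.append((onset_ms + dt, amplitude * gain))
    return shaped

# Four mechanically even quarter-note onsets at full amplitude...
mechanical = [(0.0, 1.0), (500.0, 1.0), (1000.0, 1.0), (1500.0, 1.0)]
# ...come back subtly 'performed': here the third beat lands 6ms early
# and slightly softer, as in Ryrie's example.
performed = apply_pulse(mechanical, PULSE)
```

Swapping in a different pulse table changes the "performance" without touching the notes, which is the effect Ryrie goes on to describe with the Bach and Beethoven algorithms.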
"We'd been sitting on the sidelines watching all this develop, because we were a little bit sceptical. But the results are absolutely startling. You could play in a piece of Bach and it might not sound particularly authentic. But you would then run his software through the composition - in this case the Bach algorithm - and the music would actually come back sounding like Bach.
"We added features we felt composers would want, and Page R developed into an interactive sequencer because of the way it's structured. It's inefficient as a memory storage system, but it's very interactive."
"In fact the way that it's played seems to make it sound more like Bach than the notes themselves, because you could then run the Beethoven algorithm on the same piece of music and it would come back sounding more like Beethoven than Bach, even though it was Bach who had composed the notes.
"So it's a fascinating concept, and one of the reasons why it's quite involved is because it does involve the way in which notes attack and decay. We apply what are called Beta functions onto the attack and decay slopes, and of course you can't get that information through MIDI - there's no way for it to handle that. So all this has to happen within the CMI system, between CAPS and the internal voice-generating section of the machine. However, we can send out key velocity and differences in playing time over MIDI - that does give you some of the effect, but the whole effect requires control over the envelopes. Where notes are played will define how they're shaped.
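The Beta functions Ryrie mentions for shaping attack and decay can be pictured as the standard beta curve t^(a-1)(1-t)^(b-1) used as an amplitude envelope. The interview doesn't give Fairlight's exact formulation, so this sketch simply assumes that standard shape, normalised to a peak of 1:

```python
def beta_envelope(n, a, b):
    """Amplitude envelope shaped like a beta curve t^(a-1) * (1-t)^(b-1),
    normalised so the peak is 1.0.  Larger 'a' slows the attack; larger
    'b' steepens the decay.  (An assumed form -- the CMI's actual
    implementation isn't described in the interview.)"""
    ts = [i / (n - 1) for i in range(n)]
    raw = [(t ** (a - 1)) * ((1 - t) ** (b - 1)) for t in ts]
    peak = max(raw)
    return [v / peak for v in raw]

# Two parameter pairs give quite different note shapes:
percussive = beta_envelope(64, 1.2, 4.0)   # fast attack, long decay
swelling   = beta_envelope(64, 4.0, 1.2)   # slow swell, quick release
```

Because the whole slope is a function of two parameters, a pulse algorithm can reshape every note's envelope just by varying a and b — exactly the kind of per-note control that, as Ryrie notes, MIDI has no way to carry.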
"Now, Clynes' work relates to classical music, and one of our big concerns was whether or not this would be of interest to popular musicians. So then he started doing experiments in that area, and it really does seem to make a difference. You can also come up with your own algorithms - put in any old thing and it comes up sounding interesting.
"We don't know where it's heading, but we feel confident that it's heading somewhere. There's still a fair amount of research to be done in putting Clynes' system on the Series III, but there may be a release before the end of the year."
AS WE ALL know, technology doesn't always keep pace with human imagination. Did the Fairlight team want to achieve more with the early CMI than the technology of the time allowed them to do?
"Oh, I think so. But we tended not to think of that at the time; we'd just go for whatever we could do with the available technology rather than get depressed about it. I don't think you get anything done if you concern yourself with what you can't do.
"If you're producing a piece of music and you're always concerned about what the hardware won't allow you to do, then that's the end of your composition, because suddenly your whole mind is working in limitation mode rather than getting-something-out-the-door mode.
"So it really wasn't much of an issue. When something new came along we'd start playing with it and seeing what we could do with it. That's why we kept the Series III in a very modular arrangement. The first version of Series III had all of the analogue outputs and inputs on a couple of large circuit boards, but in the end we felt that wasn't a very good idea because if someone were to bring out a new startling and amazing anti-aliasing filter, or a much better A-to-D, or a better de-glitching arrangement, we'd have to redesign the whole thing and sell that.
"What we've done is come up with a very modular arrangement of circuit boards, with each channel's A-to-D on a separate board. We do have an ongoing hardware development program which will make new versions of some of the CMI modules depending on how the technology is moving. It's one of the advantages of a modular system versus the approach of everything on one card in one box."
"We're working on the next generation, which may allow a more powerful system to be produced more cheaply, but our mind is always on what can be produced rather than how much it costs."
Another reason for the CMI's success has been its emphasis on user-friendliness. As Kim Ryrie explains, there's a good reason why that has always been a priority.
"Well, I hate computers. I've never been able to sit down at a computer and program it. I have an Apple II at home which I use only when I absolutely have to.
"I know the way musicians feel about computers. The hard thing is to get programmers, who love typewriter keyboards, enthusiastic about the idea that some people like knobs and buttons and seeing things on screens and not having to type much - being able to poke at things with pens rather than having to type at 300 words a minute.
"So user-friendliness was part of a very early philosophy, and Peter was always in agreement about that. And because we had quite a number of musicians working on the project, they also felt strongly about that aspect of it. It became a bit of a thing to see who could make the most user-friendly display page.
"That wasn't too difficult with Series I and II, but with Series III it became a big problem. Whereas you could teach anyone how to use the Series II in 10 minutes, the III was a whole different ball-game. Instead of having sounds that were always 16K in length, you suddenly had 14Mbytes of RAM, variable sample lengths, 64 subvoices per voice, as many voices as you liked in an instrument...
"It's been a real challenge getting the Series III software as user-friendly as the Series II was, because there's so much more involved. Just to give you an example, control parameters such as vibrato rate and attack and decay rates can be set for subvoices as well as voices - local and global parameters - and that's quite hard to orchestrate and make accessible to the user."
WHILE THE JAPANESE music industry concentrates on producing ever more sophisticated instruments at the budget end of the market, Fairlight has remained resolutely at the top end. But although we're unlikely to see a £2000 instrument from the company just yet, times are changing.
"We work towards a system that will allow what we consider to be state-of-the-art production. If we could do it for £2000, we would do it.
"In fact, we've had a lot of interest from people who can't afford a Fairlight who have asked us if we could do a smaller, cheaper one. But the amount of R&D that goes into the Fairlight is so enormous that we feel we really just want to concentrate on one design at a time. That's not to say that we aren't working on the next generation, which may allow a more powerful system to be produced a little more cheaply, but our mind is always on what can be produced, rather than how much it costs - though we do try to get the cost as low as we can.
"What we are doing is bringing out a new configuration of the Series III which will use 20Mbyte floppy disks rather than hard disk. It'll use the Series III's hardware and software — so, for instance, it'll have CAPS - but a typical configuration will probably be eight voices with 4Mbyte of RAM. The advantage of that may be that some people would be able to afford to make it part of their production system, and then add to it as money allows - it'll be upgradable to the complete Series III system.
"The eight-voice/four-meg configuration allows you to play virtually any Series III sound that is now around; most multi-sampled sounds on the III take about 4Mbyte. So what it means is that people who can't go for the full system will at least be able to get those sounds that they can't get using the cheaper sampling instruments. That's something that we're hoping to bring out quite soon..."
Interview by Simon Trask