Imitating acoustic instruments has become an increasingly important role for synths and samplers, but are the sounds as important as the way they're played? Tim Goodyer questions the progress of synth design.
WHEN THE FIRST synthesisers appeared, they offered tantalising glimpses into a world of new sounds. These were the sounds of oscillators and filters; sounds never before available to the musician. In the years that followed, a new element was to have a tremendous impact on the design and use of these revolutionary instruments - somewhere (not too far) down the line, we started using them to imitate existing instruments. Why we - as musicians - felt this necessary is another argument, but today it's an important aspect of almost every synth. Samplers too, although capable of furnishing us with unimaginable variations on sounds we record into them, are often judged on their libraries of instrument samples.
Accepting that we want synthesisers to sound like instruments that have been around for hundreds of years, how well are we doing?
With early analogue systems, few convincing sounds were available to the synthesist - passing impressions of flutes and oboes were about the medium's limit. The first quantum leap came in 1983 with Yamaha's DX7, and FM synthesis. For the first time the synthesis system followed acoustic rules - layers of sine waves are, after all, the way natural sounds are composed.
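To make the FM principle concrete: a single "operator pair" uses one sine wave to modulate the phase of another, and the modulation index controls how rich the resulting harmonic spectrum is. The sketch below is a minimal illustration of that idea in Python, not a description of Yamaha's actual DX7 implementation; the frequencies and index values are arbitrary examples.

```python
import math

SAMPLE_RATE = 44100  # samples per second, a typical audio rate

def fm_tone(carrier_hz, mod_hz, index, seconds):
    """One simple FM operator pair: a sine carrier whose phase is
    modulated by a sine modulator. 'index' sets modulation depth,
    and with it the brightness of the spectrum."""
    n = int(SAMPLE_RATE * seconds)
    out = []
    for i in range(n):
        t = i / SAMPLE_RATE
        phase = (2 * math.pi * carrier_hz * t
                 + index * math.sin(2 * math.pi * mod_hz * t))
        out.append(math.sin(phase))
    return out

# A 2:1 modulator-to-carrier ratio produces a harmonic spectrum;
# raising 'index' adds higher partials, much as blowing harder
# brightens a real wind instrument.
samples = fm_tone(220.0, 440.0, 2.0, 0.01)
```

With index set to zero the output is a plain sine wave; sweeping it upward over the course of a note is what lets FM mimic the way acoustic timbres evolve.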
The second leap forward came with Roland's D50, and LA synthesis. The D50 used short samples combined with pure synthesis to achieve its results. The theory is that the ear (brain) relies more heavily on information from the early part of a sound to interpret it. By using a sample of a real flute "chiff", say, the ear then accepts a synthesised tone as being the "real thing". LA inspired a generation of imitators (Kawai's K1, for example) all of which used a combination of synthesised and sampled tones.
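The LA trick of splicing a sampled attack onto a synthesised sustain can be sketched very simply: play the recorded transient, then crossfade into the synthetic tone. This is an illustrative outline only, and certainly not Roland's actual algorithm; the function and parameter names are invented for the example.

```python
def splice(attack, sustain, fade_len):
    """Join a short recorded attack transient ('chiff') to a
    synthesised sustain, crossfading linearly over the last
    fade_len samples of the attack."""
    body = attack[:-fade_len]          # attack played untouched
    fade = []
    for i in range(fade_len):
        w = i / fade_len               # 0.0 -> attack, 1.0 -> sustain
        a = attack[len(attack) - fade_len + i]
        s = sustain[i]
        fade.append(a * (1 - w) + s * w)
    return body + fade + sustain[fade_len:]

# Toy data: a 10-sample 'attack' and a 10-sample 'sustain'.
voice = splice([1.0] * 10, [0.0] * 10, 4)
```

Because the ear has already been sold on the sound by the sampled transient, the sustain that follows can be comparatively cheap to synthesise.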
The other major player in the field of imitation is, of course, the sampler. Here recordings of real instruments are replayed under the control of a keyboard or other MIDI controller. What could be simpler?
In a limited sense, all post-analogue imitators can convince the average listener that they're listening to acoustic instruments. Or can they?
Objectively there's no difference between a piano playing middle C ff (very loud) on a record, and a sample of a piano playing middle C ff on a record. But where an acoustic piano can readily play the same note pp (very soft) with the appropriate harmonic adjustment, the sampler plays the same note with the same harmonics at lower volume. Equally, the piano will readily give you the A below middle C pp, while the sampler will give you an ff middle C slowed down and played quietly. Similar compromises are to be found in all imitative instruments - the shortcomings being directly related to the harmonic variation of the original instrument.
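The reason the sampler's transposition sounds wrong can be seen in how the trick works: the recording is simply read out at a different rate, so every harmonic shifts by the same ratio together. A minimal Python sketch of that naive resampling, using linear interpolation (real samplers use better interpolators, but the principle is the same):

```python
def repitch(samples, ratio):
    """Naive sampler transposition: read the recording at 'ratio'
    times normal speed, interpolating between neighbouring samples.
    ratio > 1 raises the pitch (and shortens the sound);
    ratio < 1 lowers it (and stretches the sound).
    Every partial moves by the same ratio, which is why a middle C
    sample slowed down is not the same as a real A below middle C."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out
```

Taking a middle C (about 261.6Hz) down to the A below (220Hz) means a ratio of roughly 0.84, so the note also comes out about 19% longer, and its harmonic balance stays frozen at the louder note's values.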
As the '90s get underway, technology gives us electronic instruments that imitate acoustic ones under certain conditions. As long as you play the sounds in a certain way the imitations are good. But try to elicit the kind of performance associated with the genuine instrument and the differences quickly become apparent. It's as if we've taken some sort of cross-section of acoustic instruments - we've mastered the sounds but not the dynamics. Now that we know where to go, perhaps we can expect the next breakthrough in synth design to be the Dynamical Synthesiser.
Editorial by Tim Goodyer