Fairlight Explained (Part 8)
Article from Electronics & Music Maker, May 1985
Another world first for E&MM: Jim Grant takes us through the creation of a sound without using the CMI's sampling facility.
In which we take a look at the difference between linear and logarithmic conversion, and incidentally end up creating a sound using the Fairlight's synthesis facilities.
Most people involved in the modern music industry are already aware of the Fairlight's incredible potential as a music production tool. These days, you switch on the television, radio or record player in the almost certain knowledge that a CMI will make its presence felt somewhere along the line, and when you consider just how useful its specification is to studio engineers and producers, that's hardly surprising. Even in 1985, there aren't many machines capable of spreading six octaves of sampled sound across the keyboard, and manipulating that sound within user sequences to the nth degree of precision.
But if you're fortunate enough to sit in front of a CMI for any length of time without any production deadlines to meet, you'll soon discover that its creative power lies as much with sound synthesis as it does with music production per se. Pages 4 and 5 are good examples of this in that they offer the fairly standard synthesis tools of harmonic sliders and profiles, but Page 6, which we introduced last month, is something of a software oddity, since it allows control over the whole waveform - from a single byte to macro type commands such as GAIN, MIX and MERGE.
Now, for any command or process to be really useful in the field of sound synthesis, it must be responsible for some radical change in the sound structure that's both intuitive and easily understandable. For example, the VCF of an analogue synth changes the sound a great deal, and can be simply explained and understood in terms of the attenuation of harmonics. FM synthesis, on the other hand, also results in vast timbral differences, but comprehension of the processes involved (and their possible results) is a lot more difficult. That's why actually arriving at a pre-specified sound on something like a Yamaha DX7 requires so much in the way of practice and patience - and why so many musicians prefer programming analogue synths, even if the ultimate sonic potential isn't as great.
Fortunately, the Fairlight's internal configuration side-steps most of these operational problems, and a good example of how this is done is the ADD command. This takes a choice of segments from one loaded voice and adds them directly into the same segments of the currently selected voice, scaling the amplitude to avoid clipping if necessary. If you were to ADD all the segments of one voice into another, playing the keyboard would result in both sounds being heard together, but using only one voice. Figure 1 shows a square wave and Figure 2 a sine wave, both resident in different channels of the CMI: the result of ADDing them together is shown in Figure 3. The proportions of the addition can be varied by using the GAIN command prior to the action, or by repeatedly ADDing one voice to another to increase its amplitude relative to the composite sound.
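To make the arithmetic concrete, here's a rough sketch of what ADD amounts to, written in Python with NumPy. It is purely illustrative - the 128-segments-of-128-samples layout and the helper names are my own assumptions, not the CMI's actual software - but it captures the square-plus-sine example of Figures 1 to 3, including the rescaling that guards against clipping.

```python
import numpy as np

SEG_LEN = 128   # assumption: 16K of waveform RAM divided into 128 segments of 128 samples

def add_voice(dest, source, segments):
    """Toy model of ADD: sum the chosen segments of 'source' into 'dest',
    scaling the result down if the composite would clip the 8-bit range."""
    dest = dest.astype(float)
    for n in segments:
        s = slice(n * SEG_LEN, (n + 1) * SEG_LEN)
        dest[s] += source[s]
    peak = np.abs(dest).max()
    if peak > 127:                      # only rescale if the sum would clip
        dest *= 127.0 / peak
    return np.round(dest).astype(np.int8)

# A sine-wave voice and a square-wave voice, one cycle per segment (cf. Figures 1 and 2)
t = np.linspace(0, 2 * np.pi, SEG_LEN, endpoint=False)
sine   = np.tile(np.round(100 * np.sin(t)), 128).astype(np.int8)
square = np.tile(np.where(np.sin(t) >= 0, 100, -100), 128).astype(np.int8)

composite = add_voice(sine, square, segments=range(128))   # both sounds in one voice (Figure 3)
```

ADDing the square wave in a second time would raise its level relative to the sine, which is the 'repeated ADD' trick mentioned above.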
Now seems as good a time as any to explain something we've mentioned many times in the past but haven't really discussed in any detail, namely the difference between linear and non-linear voice data. As you may remember, a waveform is stored in 16K of RAM, in which each byte has a binary value that corresponds directly to the amplitude of the waveform at that point. A byte consists of eight bits, and considering all the possible combinations of these results in the amplitude of the sound at any point being limited to one of 256 levels.
The term 'linear' refers to the relationship between the actual amplitude and the value of the binary number used to represent it. If all this sounds a bit on the technical side (and it ought to), have a quick glance at Figure 4. This shows that zero amplitude corresponds to binary 0, while maximum negative excursion is represented by 1111 1111, or 255, and the maximum positive value is held as 0111 1111, or 127. Still confused? Well, the value of the Most Significant Bit holds the key. If that value is 0, the waveform is positive, while a 1 gives negative excursions. Anything else in between is in simple proportion. This form of representation results in a ratio between the smallest and largest signal that can be handled (or in other words, dynamic range) of about 48dB. You might consider that to be not a particularly impressive figure, since it means that at low signal amplitudes, the sound is more or less surrounded by hiss.
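To check that description against the numbers, here's a tiny sketch (again my own illustration, not anything lifted from the CMI) that decodes a byte according to Figure 4 and arrives at the 48dB figure from the 256 available levels:

```python
import math

def decode_linear(byte):
    """Decode one waveform byte as in Figure 4: the Most Significant Bit gives the
    sign (0 = positive, 1 = negative), the lower seven bits give the magnitude."""
    sign = -1 if byte & 0x80 else 1
    return sign * (byte & 0x7F)

assert decode_linear(0b00000000) == 0      # zero amplitude
assert decode_linear(0b01111111) == 127    # maximum positive excursion
assert decode_linear(0b11111111) == -127   # maximum negative excursion

# Dynamic range: full scale against a single quantisation step, with 2^8 = 256 levels
print(f"{20 * math.log10(2 ** 8):.0f}dB")  # about 48dB, as quoted above
```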
The fact is, storage (in one form or another) of low-amplitude sounds is a perennial engineering problem, to which the most common solution is some sort of noise reduction system such as Dolby. In the digital world, and as a direct result of research into digital telephony, a different solution is to use more of the binary bits in the byte to represent low signal levels than you use for the high ones. This is shown in Figure 5, in which the lower values of the triangle wave use up more of the binary bits than a corresponding increase at large triangle amplitudes. The binary data is now no longer linear - it's logarithmic. For the waveform to be recovered, the data must be passed through a DAC that has a curve bent the opposite way to 'straighten out' the sound. I know all this sounds more than a little involved, but it does bring the magic dynamic range ratio up to about 72dB, which is at least respectable. The only catch is that the process achieves this at the cost of increased quantisation noise at larger signal levels, though this is masked by the volume of the signal itself.
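The article names no particular law, so by way of illustration only, the sketch below uses the mu-law curve familiar from digital telephony to show the two halves of the trick: bend the data one way for storage, then bend it the opposite way on playback.

```python
import numpy as np

MU = 255.0   # assumption: the mu-law constant used in North American telephony

def compress(x):
    """Bend the curve for storage: low amplitudes get more of the available codes."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def expand(y):
    """The playback-side curve bent the opposite way, 'straightening out' the sound."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

x = np.linspace(-1, 1, 9)                  # amplitudes normalised to +/-1
codes = np.round(compress(x) * 127)        # stored as 8-bit values
recovered = expand(codes / 127)            # recovered through the opposite curve
print(np.abs(recovered - x).max())         # small error, and biggest at large amplitudes
```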
You might be familiar with this conversion process under its commonly used name of companding, and it's a system used by many hardware manufacturers including Linn and E-mu.
So if it's so good, why doesn't the CMI use it? Simple. Remember your school days when you added log numbers to multiply? Well, this is what would happen if the ADD command was used with sounds held in the form generated by Figure 5: instead of adding the sounds together to produce a mix, we'd get the product, and end up with VCA-type effects at low frequencies and strange sidebands at higher ones. Which isn't, all things considered, a particularly desirable state of affairs.
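The point is easy to demonstrate. In the toy encoding below (sign kept to one side, log of the magnitude stored - a deliberately crude scheme of my own, not any manufacturer's format), 'ADDing' the stored data of two sine waves and decoding gives their product - sum and difference frequencies, exactly as a ring modulator would - rather than the intended mix.

```python
import numpy as np

def log_encode(x, floor=1e-4):
    """Toy logarithmic storage: keep the sign, store the log of the magnitude."""
    return np.sign(x), np.log(np.maximum(np.abs(x), floor))

def log_decode(sign, logs):
    return sign * np.exp(logs)

t = np.linspace(0, 1, 1000, endpoint=False)
a = np.sin(2 * np.pi * 100 * t)            # a 100Hz sine
b = np.sin(2 * np.pi * 150 * t)            # a 150Hz sine

sa, la = log_encode(a)
sb, lb = log_encode(b)

# Adding the stored (logarithmic) data and decoding...
result = log_decode(sa * sb, la + lb)

# ...yields the product of the two signals: 50Hz and 250Hz components
# instead of the 100Hz-plus-150Hz mix we actually wanted.
print(np.allclose(result, a * b, atol=1e-3))   # True
```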
Anyway, enough of the lecture and back to the Fairlight. The MIX command can also drastically alter the waveform RAM. Essentially, it generates a crossfade between two specified segments which must not be adjacent, ie. there must be at least one segment in between. The waveform memory of each segment between the start and end points contains a proportion of the existing waveform in that segment and a proportion of the end segment: this is best illustrated by examining Figures 6 and 7 for before and after views. Remember that the new contents of each segment are a mix between what was already in that segment and the contents of the destination segment. Thus from Segment 2 onwards, the waveform simply fades up to a square wave. MIX is most commonly used to add a clean fadeout to a sound that decays to noise or doesn't decay properly within 128 segments.
Have a look at the percussion sound sampled using Page 8 and shown in Figure 8. It's pretty clear that the sample ends in a dither of noise. Now, suppose we needed nothing more than a short percussive strike, and that only the beginning of the sound was of any interest to us. A quick solution would be to turn to Page 6 and ZERO, say, Segments 64 to 128 - half the sound - and then MIX from Segment 45 to 64. Looking at Figure 9 shows the result - a sound that dies away evenly to a noise-free end, much to the relief of all concerned.
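Modelled in the same illustrative terms as the earlier sketches (128 hypothetical segments of 128 samples - not the CMI's real innards), MIX and the clean-up just described might look like this:

```python
import numpy as np

SEGMENTS, SEG_LEN = 128, 128               # assumed layout, as before

def seg(voice, n):
    """Segment n of a voice, numbered 1 to 128 as on Page 6."""
    return voice[(n - 1) * SEG_LEN : n * SEG_LEN]

def mix(voice, start, end):
    """Toy model of MIX: every segment between 'start' and 'end' becomes a crossfade
    between its existing contents and the contents of the end segment."""
    target = seg(voice, end).copy()
    for n in range(start, end + 1):
        frac = (n - start) / (end - start)         # 0 at the start segment, 1 at the end
        seg(voice, n)[:] = (1 - frac) * seg(voice, n) + frac * target

# A percussive strike that tails off into low-level noise (cf. Figure 8)
rng = np.random.default_rng(0)
voice = rng.normal(0, 40, SEGMENTS * SEG_LEN) * np.exp(-np.linspace(0, 4, SEGMENTS * SEG_LEN))

# ZERO Segments 64 to 128 - half the sound - then MIX from Segment 45 to 64
voice[(64 - 1) * SEG_LEN:] = 0
mix(voice, 45, 64)                          # an even fade to a noise-free end (Figure 9)
```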
MERGE is fairly similar to MIX - with one fundamental difference. Again, a form of crossfade is generated between start and end segments, but this time, the previous contents of intermediate segments don't figure in the result. Quite simply, the segments in between contain a decreasing proportion of the start segment and an increasing proportion of the end segment. Figures 6 and 10 (oh yes, very logical - Ed) reveal all. The MIX and MERGE commands are tremendously powerful for splicing together sounds of differing origins and producing an even fade from, say, a violin bow attack to a sung 'ahh'.
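In the same toy terms, MERGE differs from the mix sketch above by only a couple of lines: the start and end segments become the two ends of the crossfade, and everything previously held in between is thrown away.

```python
import numpy as np

SEGMENTS, SEG_LEN = 128, 128               # assumed layout, as before

def seg(voice, n):
    return voice[(n - 1) * SEG_LEN : n * SEG_LEN]

def merge(voice, start, end):
    """Toy model of MERGE: intermediate segments get a decreasing proportion of the
    start segment and an increasing proportion of the end segment - their previous
    contents don't figure in the result (cf. Figures 6 and 10)."""
    a, b = seg(voice, start).copy(), seg(voice, end).copy()
    for n in range(start, end + 1):
        frac = (n - start) / (end - start)
        seg(voice, n)[:] = (1 - frac) * a + frac * b

# Usage: with a bowed attack in Segments 1-20 and a sung 'ahh' from Segment 60 onwards,
# merge(voice, 20, 60) splices them with an even fade from one to the other.
```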
So, now that we've discussed most of the commands available, let's try to create a sound using everything except the Fairlight's Page 8 sampling facility. The question is: am I allowed to use Page 2 and pull a sound off disk to work with? Well, I've decided I'll have to cheat a bit, because I already have a thoroughly marvellous sound called PAN.VC, which attacks with the characteristic breath chiff of pan-pipes.

First off, we configure Page 3 to generate two voices, one with an NPHONY of 7 in Register A to be played on the keyboard, the other monophonic in Register B as a scratchpad voice. Using Page 2, we load Register B with PAN.VC, which can be seen in Figure 11. The breath chiff is clearly visible at the beginning of the sound, but unfortunately, the sampling started a fraction too soon, and the waveform has a few initial segments of low-level rubbish - nothing to do with Electronic Soundmaker, you understand. The cure is to rotate the voice left so that the start of the sound proper coincides with the start of the RAM.
The next step is to flick to Voice 1 in Register A and ZERO it. Using a new command, TRANSFER, the first few segments from PAN.VC can be copied to the blank voice currently selected. Stabbing the keyboard at this juncture reveals that all is well, so the next thing to do is to work on the body of the sound itself.
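Both of those steps are simple block moves on the waveform RAM. Sketched with the same assumed layout (and a random stand-in for PAN.VC, since the real file isn't to hand):

```python
import numpy as np

SEGMENTS, SEG_LEN = 128, 128               # assumed layout, as before

def rotate_left(voice, samples):
    """Rotate the whole waveform left so the start of the sound proper
    coincides with the start of the RAM."""
    return np.roll(voice, -samples)

def transfer(dest, source, first_seg, last_seg):
    """Toy model of TRANSFER: copy a run of segments (numbered from 1) out of one
    voice into the same segments of another."""
    s = slice((first_seg - 1) * SEG_LEN, last_seg * SEG_LEN)
    dest[s] = source[s]

pan = np.random.default_rng(1).normal(0, 50, SEGMENTS * SEG_LEN)   # stand-in for PAN.VC
pan = rotate_left(pan, 3 * SEG_LEN)        # discard a few segments of low-level rubbish

voice1 = np.zeros(SEGMENTS * SEG_LEN)      # the ZEROed keyboard voice in Register A
transfer(voice1, pan, 1, 8)                # copy the first few segments - the breath chiff
```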
Before any sound can really make the grade as far as aesthetics are concerned, it must have plenty of timbral and amplitude movement within it. A good way of producing harmonically-rich waveforms is to use Page 5 and create a few segments spread across those unused by the chiff. Figure 12 gives the general idea - note that the created waveforms are all different. So what about the segments in between? Well, this is where MERGE comes in handy, filling in the ZEROed segments, and using the segments created on Page 5 as the start and end points.
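Using the same toy merge helper again (with hand-drawn shapes and segment numbers of my own choosing), the 'draw a few segments, MERGE between them' step comes out something like this:

```python
import numpy as np

SEGMENTS, SEG_LEN = 128, 128               # assumed layout, as before

def seg(voice, n):
    return voice[(n - 1) * SEG_LEN : n * SEG_LEN]

def merge(voice, start, end):
    """Crossfade from the start segment to the end segment, ignoring whatever the
    intermediate segments held before."""
    a, b = seg(voice, start).copy(), seg(voice, end).copy()
    for n in range(start, end + 1):
        frac = (n - start) / (end - start)
        seg(voice, n)[:] = (1 - frac) * a + frac * b

voice = np.zeros(SEGMENTS * SEG_LEN)       # everything beyond the chiff ZEROed
t = np.linspace(0, 2 * np.pi, SEG_LEN, endpoint=False)

# 'Draw' a few different harmonically-rich cycles at scattered segments (cf. Figure 12)...
drawn = {16: 80 * np.sign(np.sin(t)),              # square
         48: 80 * (1 - t / np.pi),                 # ramp
         80: 60 * np.sin(t) + 30 * np.sin(3 * t),  # sine plus third harmonic
         112: 80 * np.sin(t) ** 3}                 # cubed sine
for n, shape in drawn.items():
    seg(voice, n)[:] = shape

# ...then MERGE between neighbouring drawn segments to fill in the ZEROed gaps
for start, end in ((16, 48), (48, 80), (80, 112)):
    merge(voice, start, end)
```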
OK, so far it sounds quite interesting timbrally (and looks it too, as Figure 13 shows) but it's still in need of some amplitude variation. An easy way to achieve this is to invert a couple of segments (numbers 32 and 96, say) and MIX from segments 1 to 32, 64 to 32, 64 to 96 and 128 to 96, using Page 6. If you look closely at the differences between Figures 13 and 14, you shouldn't have much difficulty identifying the variation in amplitude, especially in the sound's first quarter.
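And the amplitude-shaping step, once more in the same illustrative terms - note that two of the MIXes quoted above run 'backwards' (64 to 32, 128 to 96), so the toy helper here copes with either direction:

```python
import numpy as np

SEGMENTS, SEG_LEN = 128, 128               # assumed layout, as before

def seg(voice, n):
    return voice[(n - 1) * SEG_LEN : n * SEG_LEN]

def invert(voice, n):
    """Turn segment n upside down."""
    seg(voice, n)[:] *= -1

def mix(voice, start, end):
    """Crossfade each segment between 'start' and 'end' towards the end segment,
    working in either direction."""
    target = seg(voice, end).copy()
    step = 1 if end >= start else -1
    for n in range(start, end + step, step):
        frac = abs(n - start) / abs(end - start)
        seg(voice, n)[:] = (1 - frac) * seg(voice, n) + frac * target

# A stand-in for the waveform of Figure 13
voice = 100 * np.sin(np.linspace(0, 200 * np.pi, SEGMENTS * SEG_LEN))

# Invert a couple of segments, then MIX towards them from either side, so the
# level dips and swells across the sound (cf. Figure 14)
for n in (32, 96):
    invert(voice, n)
for start, end in ((1, 32), (64, 32), (64, 96), (128, 96)):
    mix(voice, start, end)
```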
All that remains is to insert loop points on Page 7 or Page 4, and adjust the attack and damping on Page 7.
Feature by Jim Grant