When Is A Computer? (Part 3)
To you they may be just a bunch of knobs, sliders and switches. To a volt, they're the first step in a digital dialogue. In part three of our 'When Is...' series, Andy Honeybone investigates how polysynths memorise programs and control instructions, and explains why you won't be seeing the £70 digital cassette deck in your Christmas stocking.
Armed with an unnerving comprehension of both analogue and digital quantities from last month's bumper instalment, the time has come to cast our gaze over analogue to digital conversion.
First, we may care to ponder on the advantages of this process. Attention at the back! These are they.
1) For control: Voltages held as digital words in a computer's memory neither drift nor become corrupted, and may be retained even when the mains is switched off (courtesy of a small battery back-up). This last point is very important if you have slaved to program 64 ultra-fab voice settings, or spent days editing a ten minute sequence.
2) In signal processing: All the standard sound treatments involving delay (echo, reverb, automatic double-tracking, chorus, flanging, pitch transposing) or filtering (equalisation, phasing) can be performed digitally. The advantage is in the much greater signal to noise ratio offered (less hiss) and the preservation of signal quality.
3) As a recording medium: Digitally encoded audio may be bounced from tape to tape almost indefinitely with no loss of sound quality (eat your hearts out, Portastudio owners). In addition, vast dynamic ranges can be handled. With home digital disc players now available, a completely digital path is possible from sound source to listener.
Going digital has a number of problems, not least of which is its method of communication with a largely analogue world. The mystery tour begins here. Beneath the Starship Enterprise style control panel of a programmable polyphonic synthesiser are the mechanics of many rotary controls. These are called potentiometers or 'pots' for short. If a voltage is fed through a pot, an output voltage is given which is proportional to the degree of rotation of the shaft. In a conventional analogue synthesiser with no programming facilities such as the Mini-Moog, this output voltage from any particular pot would go straight to the voltage control input of one of the sound sources or treatments — say the filter cut-off frequency. Giving that knob a tweak would directly result in the filter frequency changing.
In a programmable machine things are not as straightforward. Instead of the pots connecting to the voltage controlled modules the voltages disappear into a computer. From the far side of this device emerge wires which carry voltages out to the various modules. This scheme allows either the front panel controls or control positions held in memory to dictate the sound of the synthesiser. The voltages on the pots are electrical analogues of the quantities and qualities they represent.
Surprisingly, that stalwart of the brass band, the baritone horn, has little to do with analogue to digital conversion. Seated among the mellifluous ranks of cornets, tenor horns and trombones, the baritone horn is a source of the acoustic disturbance we call sound. Should we dangle a microphone into the highly polished bell of this oft-neglected instrument we would be rewarded at the far end of the cable with a voltage giving us pitch and sound information. This voltage could be said to be an electrical analogue of the sound of the baritone horn. From the above you should get the idea that this analogue 'thing' which we must convert to digital is nothing other than a voltage proportional to our item of interest. In the case of the programmable synthesiser we were fortunate in that the control signals were voltages to start with. For the baritone horn we had to use a transducer — a microphone — to obtain the voltage information.
The problem of converting an analogue signal to digital words is the time it takes for the conversion — it's not an instantaneous process so we can only 'look' at the input signal in between conversions. In effect, we are sampling it. Obviously this results in information being lost. If the analogue input is a voltage from a synthesiser control pot then it is unlikely that it will change very rapidly, so there is little chance of the digital words not reflecting the position of the pot. If the input is the signal from a microphone, the story is very different.
Sampling theory (the Nyquist criterion) tells us that the sampling rate must be at least twice the maximum frequency we wish to acquire. Being in no position to argue, we have to accept this. Therefore to encode high quality audio with an upper frequency limit of 24kHz, a sampling rate of 48kHz is required. This is known as 'very fast'.
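The penalty for breaking this rule, aliasing, can be demonstrated in a few lines. Here is a minimal sketch in modern Python (nothing like this lives inside a real converter): a 50kHz tone sampled at 48kHz yields exactly the same samples as a 2kHz tone.

```python
import math

def sample_sine(freq_hz, rate_hz, n):
    """Sample a sine wave of freq_hz at rate_hz, returning n samples."""
    return [math.sin(2 * math.pi * freq_hz * i / rate_hz) for i in range(n)]

rate = 48_000  # the 48kHz rate from the text

# A 2kHz tone is safely below the 24kHz Nyquist limit.
ok = sample_sine(2_000, rate, 8)

# A 50kHz tone is above it: its samples are indistinguishable from
# those of a 2kHz tone, because 50kHz - 48kHz = 2kHz (aliasing).
aliased = sample_sine(50_000, rate, 8)

for a, b in zip(ok, aliased):
    assert abs(a - b) < 1e-6
```

Once the samples are taken, no amount of cleverness can tell the two tones apart, which is why converters are preceded by a filter that removes everything above half the sampling rate.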
But this is only one dimension of the problem because we also need to know how small the digital step size must be (remember the least-significant bit of last month?). The ear is very sensitive and so the step size has to be minute, but there is also the need to cover a wide dynamic range (very soft to very loud). You can think of the audio signal as superimposed on a chess board with the vertical columns representing sampling time slots and the horizontal rows corresponding to the digital steps of the converted value. Each step is one sixty-five-thousandth of the total (strictly, 1/65,536 — the resolution of a 16-bit word) for good quality audio. For these reasons, the seventy quid digital tape deck won't be in the shops for many a Christmas yet.
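For the numerically inclined, the chess board arithmetic works out as follows. A quick Python sketch, using the rule of thumb that each bit of resolution is worth about 6dB of dynamic range (from 20 times the base-ten logarithm of the step count):

```python
import math

def dynamic_range_db(bits):
    """Ratio of full scale to one step, in decibels: 20*log10(2**bits)."""
    return 20 * math.log10(2 ** bits)

steps_16 = 2 ** 16                   # 65,536 rows on the chess board
print(steps_16)                      # 65536
print(round(dynamic_range_db(16)))   # 96 (dB) for quality audio
print(round(dynamic_range_db(6)))    # 36 (dB) for a 64-step control pot
```

Sixteen bits buys roughly 96dB between the quietest and loudest representable signals, which is why cassette-era hiss figures look so feeble beside digital ones.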
As you can see there is a bit of a gulf between converting a control signal and the job of digitally encoding audio, even though much the same process is used for each. Conversion is achieved in a roundabout way by the use of a digital to analogue converter (DAC) and a bit more hardware besides.
It works thus: the digital to analogue converter is driven by an ever increasing number to generate a rising voltage staircase. This voltage becomes one input to a device called a comparator. The other comparator input receives our input signal for conversion. All the time the voltage from the digital to analogue converter is less than the input signal, the comparator does nothing.
When the input level is exceeded, the comparator stops the count to the digital to analogue converter and, lo, the value of the count driving the converter is the digital equivalent of the analogue value. This counting scheme is simple but slow, since the staircase may have to climb through every step before a match is found. The faster successive approximation technique instead homes in on the answer by binary search, testing one bit at a time from the most significant downwards, so an n-bit conversion needs only n comparisons. Conversion times of twenty-five millionths of a second are common, and even faster techniques exist for the demands of video.
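Both flavours of converter are easy to mimic in software. The Python sketch below assumes an 8-bit converter with a 1V reference (both figures invented for illustration) and shows the counting staircase alongside the bit-at-a-time binary search; the two always agree on the answer, but the staircase may need up to 256 comparisons where the binary search needs only eight.

```python
def staircase_convert(analogue_in, bits=8, vref=1.0):
    """Counting converter: step the DAC up one code at a time
    until the comparator trips (slow: up to 2**bits comparisons)."""
    for code in range(2 ** bits):
        if code * vref / (2 ** bits) > analogue_in:
            return code - 1
    return 2 ** bits - 1

def sar_convert(analogue_in, bits=8, vref=1.0):
    """Successive approximation: test each bit from the MSB down,
    keeping it only if the trial DAC voltage does not exceed the input."""
    code = 0
    for bit in reversed(range(bits)):
        trial = code | (1 << bit)
        if trial * vref / (2 ** bits) <= analogue_in:
            code = trial
    return code

# 0.5V on a 1V scale lands exactly halfway up an 8-bit converter.
print(staircase_convert(0.5))   # 128
print(sar_convert(0.5))         # 128
```

The binary search gives successive approximation converters their fixed, predictable conversion time: always one comparison per bit, regardless of the input voltage.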
Although 'dial-in' programming systems are found on synthesisers such as the Moog Source, it seems that most keyboard players prefer to twiddle a whole array of knobs. This is why Roland have a 'proper' programmer for their JX-3P model which fits over the existing touch panel controls. On average there are about twenty rotary controls to a synthesiser and you might think that a separate analogue to digital converter would be required for each. Fortunately this is not the case and a technique called multiplexing comes to the rescue of our wallets.
Because none of the controls are changing at any great speed, a single analogue to digital converter can be used with an electronic switch at its front which connects each control in turn. This scanning arrangement rapidly acquires all the control positions and feeds them to the computer to store.
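In software terms the scanning loop is trivial. In the rough Python sketch below, read_pot and the panel settings are invented for illustration, standing in for the electronic switch and a single 6-bit converter (an assumed resolution):

```python
pot_values = [0.2, 0.5, 0.9, 0.1]   # imaginary front-panel settings, 0 to 1

def read_pot(channel):
    """Select one pot via the 'multiplexer' and convert it to 6 bits."""
    return int(pot_values[channel] * 63)

# The computer scans every control in turn and stores the results.
panel_memory = [read_pot(ch) for ch in range(len(pot_values))]
print(panel_memory)   # [12, 31, 56, 6]
```

One converter shared between twenty-odd pots, rather than twenty converters: the saving is considerable, and nothing is lost because the controls change far more slowly than the scan.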
A similar technique occurs at the output of the computer. Here a digital to analogue converter is used to demultiplex the voltages held as digital words in the computer's memory. Each control setting is sequentially converted to a voltage and is routed to its respective control input by more switching circuitry. Every voltage controlled input of the sound sources and treatments has to be provided with a short-term analogue memory to hold the control voltage while the de-multiplexing circuitry is 'refreshing' another line. This memory takes the form of a sample and hold unit that many of you will be familiar with. Essentially the voltage is held on a capacitor and high impedance buffering allows its value to be read without leaking it away.
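The droop on the holding capacitor between refreshes can be modelled as a simple exponential decay. Below is a rough Python sketch with an invented discharge time constant; the point is that frequent refreshing keeps the error negligible:

```python
import math

def held_voltage(v_set, t_since_refresh, tau=5.0):
    """Exponential droop of a sample and hold output; tau is an
    assumed discharge time constant in seconds."""
    return v_set * math.exp(-t_since_refresh / tau)

v = held_voltage(2.0, 0.001)   # one millisecond after a refresh
print(round(v, 2))             # 2.0 (the droop is negligible)
```

With high impedance buffering giving a time constant of seconds and the de-multiplexer coming round every few milliseconds, the held voltage barely moves between visits.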
If the front panel controls were being used to edit a voice, the de-multiplexer would look for its data in the memory area allocated for the control panel settings. If a voice was recalled from memory, the de-multiplexer would simply look at that particular memory area. As you can see, the computer serves to organise data in and data out. It does not actually contribute to the sound.
A figure of one in 65,000 was quoted for the digital resolution when encoding quality audio. The demands for the conversion of synthesiser controls are much less — one in 64 has been used quite successfully. You can think of this as a rotary switch with 64 click stop positions. For the pitch controls this would give a range of five octaves in semitone increments. It is very unlikely that anyone could tell the difference between attack settings 37 and 38. Once again it all comes back to the size of the least significant bit.
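To see how the 64 click stops cover five octaves, here is a small Python sketch assuming the common one-volt-per-octave control scaling (assumed here, since the scaling isn't specified above):

```python
def code_to_pitch_cv(code):
    """Map a 6-bit control value (0-63) to a control voltage,
    one semitone per step at an assumed 1V per octave: 12 steps = 1V."""
    semitones = min(code, 59)    # 60 semitones span five octaves
    return semitones / 12.0

print(code_to_pitch_cv(0))            # 0.0 volts, bottom of the range
print(code_to_pitch_cv(12))           # 1.0 volt, one octave up
print(round(code_to_pitch_cv(59), 2)) # 4.92 volts, just under five octaves
```

Six bits is ample: 60 of the 64 codes are used, and the step size (one semitone) is exactly what the player expects.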
The techniques discussed here are used in many of the new generation of intelligent musical instruments. Stay on top of digital drummers and sampling machines by following 'When Is A Computer?'
Feature by Andy Honeybone