When Is A Computer? (Part 2)
Or, where is the difference twixt analogue and digital?
In his courageous efforts to make computer music seem more than a ten times table with adenoid trouble, Andy Honeybone this month tackles the 'digital' concept.
Computer music has been around almost as long as computers themselves. The sudden upsurge of interest results from the micro boom which has made computing power widely available and cheap enough to match consumer applications.
Some 70 years separate the invention of the microphone and the first computer. The ENIAC (Electronic Numerical Integrator And Computer) was one of the first large-scale digital computers and was built by the University of Pennsylvania using about 18,000 valves.
Some 25 years later, around the start of the Seventies, the first microprocessor family appeared. This was the 4004, developed by the Intel Corporation, and was prompted by the rapid sales of desk-top calculators.
At that time the Japanese calculator manufacturers had to go to the Americans for 'chip' fabrication and always demanded custom designs. Realising that this would tie each company to one buyer, the Americans decided to split the calculator design into separate functional blocks which would individually have a larger market. This philosophy proved its worth and by 1972 the first single chip microprocessor, the 8008, became available.
Those of you who can remember when black and white television tennis first appeared in the pubs will not care to be reminded that the year was 1974. One year later home computing came into being.
Meanwhile, back at Intel, Masatoshi Shima had invented the 8080 microprocessor which became the most sought-after component at that time. Shima left Intel to join Zilog where he designed the powerful Z80 microprocessor which did everything the 8080 did — and more. Needless to say, Intel were a bit cut up about being ripped off and a big legal battle ensued. The result has faded into history with Zilog now being the eight-bit processor market leader.
The kangaroo leaps of technology reflect equally in the development of electronic musical instruments. Around 1930 Friedrich Trautwein was performing on his valve-filled "Trautonium" which looked like a block of flats with a keyboard stuck on the side. The Hammond organs followed, updating tone-generation principles patented in 1897 by Thaddeus Cahill.
By 1965 Robert A Moog had developed his concept of voltage-controlled music modules, which was to set instrument design standards up to the present day. The various sections of a synthesiser were built so their inputs, outputs and other connections were matched. That way they could be linked in any desired order and 'controlled' by voltages from other modules rather than solely by external knobs or switches. Only in the last three years or so have digital techniques begun to provide affordable alternatives for tone production and control.
The acceptance of voltage control gave the confidence necessary for manufacturers to consider making integrated circuits to perform musical functions such as oscillators, filters and amplifiers. These 'chips' reduced the amount of circuitry within a synthesiser by enough to allow polyphonic versions to become a reality. The Prophet-5 was the first of this new wave and, almost unannounced, a microprocessor lurked under its covers.
So with the history lesson over it's time to grab a spanner and examine the nuts and bolts of computer aided music. The objective is to gain an understanding of the principles involved in producing sounds which will eventually trickle from loudspeakers or rattle the cranium of the personal HiFi addict. Having to understand technology is the price to be paid for the credibility with which it invests you. For example, if your band line up at the front of the stage and soak the audience by blowing raspberries for ten minutes, you would gain little other than a punk following. Should you create the same sounds by FM synthesis while showering the audience with champagne, you may well find yourself with an Arts Council grant.
As we slacken off the first nut, the question arises 'how can a piece of electronics make a sound?' to which you could counter 'what's a sound anyway?'. Sound is the brain's perception of changes in air pressure as registered at the ear.
You may remember (or still practice) the trick of holding a folded cigarette packet against the spokes of a revolving bicycle wheel to generate a buzz which gets higher in frequency the faster the wheel spins. The vibrating cigarette packet is functioning in the same way as, say, the reed of a saxophone. The backward and forward motion of the 'reed' is expanding and compressing the air which couples the frequency to the ear drum.
To produce a sound electronically via a loudspeaker, a signal is needed which will move the speaker cone backwards and forwards at a given rate. Such a signal is generated by a circuit called an oscillator which is really a switch. The oscillator turns on for a short time and then turns off — and then on again, etc, etc, etc. When the oscillator is on it supplies a voltage to the speaker which moves the cone forward. When the oscillator is off, the voltage drops and the cone moves back.
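The on/off switching described above can be sketched in a few lines of code. This is a minimal illustration, not anything from the article itself: the function name, the 8kHz sample rate and the five-volt 'on' level are all assumptions made for the example.

```python
# A minimal sketch of the on/off oscillator: a 'switch' that alternates
# between two voltages to produce a square wave, expressed as a list of
# voltage samples. All names and figures here are illustrative.

def square_wave(freq_hz, sample_rate=8000, seconds=1.0, high=5.0, low=0.0):
    """Return voltage samples that flip between 'high' and 'low' freq_hz times a second."""
    samples = []
    period = sample_rate / freq_hz            # samples in one on/off cycle
    for n in range(int(sample_rate * seconds)):
        on = (n % period) < (period / 2)      # first half on, second half off
        samples.append(high if on else low)
    return samples

wave = square_wave(440)                       # A above middle C
print(wave[:4])                               # the switch starts 'on'
```

Fed to a loudspeaker, the 5V stretches push the cone forward and the 0V stretches let it fall back — exactly the behaviour described in the text.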
Of course this is a very simple illustration and far more complex waveforms can be reproduced by a loudspeaker. The speaker cone does not just move to either of two positions as in the example, but it can be anywhere within its 'throw' (the full mechanically possible range). This infinite resolution is typical of an analogue device. As the rest of this article is about analogue and its opposite — digital, it seems the right time to give a digital example.
We all know computers are digital beasts, but let's start with something a little easier — a two piece extending ladder. If we want to make it longer, we have to raise the inner ladder so that its hooks will rest on the next rung up of the lower ladder. We can't adjust it to the exact height required; we have to take the nearest 'step' as dictated by the spacing of the rungs. This limited resolution is typical of digital devices.
Very often this limited resolution is not a problem. For example, even though a newspaper picture is composed of fairly large dots, it provides enough information to convey the message of the original photograph. Similarly, semitone increments in pitch may be all that's required of an oscillator for most musical applications.
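The ladder analogy can be put into code. This is a hypothetical illustration of limited resolution applied to pitch: snap any frequency to the nearest equal-tempered semitone, just as the ladder must rest on the nearest rung. The function name and the A = 440Hz reference are choices made for the example.

```python
import math

# Snap a frequency to the nearest semitone 'rung' of the
# equal-tempered scale, using A4 = 440 Hz as the reference.

A4 = 440.0

def nearest_semitone(freq_hz):
    """Round a frequency to the nearest whole semitone step."""
    steps = round(12 * math.log2(freq_hz / A4))   # whole semitones from A4
    return A4 * 2 ** (steps / 12)

print(nearest_semitone(430.0))   # 440.0 -- A4 is the closest step
```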
Returning to the oscillator described earlier, we should now give a squirt of penetrating oil to the subject of its control. The on and off times of an-oscillator determine both its frequency and tone. These times are determined by the rate at which a capacitor charges and discharges. Varying the size of the capacitor and its accompanying resistor will give a form of control, but because these are manual methods they are not suitable for eventual computer control. A better way is to control the charging current itself, and this can be achieved via some crafty circuitry by an applied voltage. We therefore end up with a Voltage Controlled Oscillator (VCO) which outputs a frequency related to the input voltage it receives.
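The relationship between input voltage and output frequency can be modelled numerically. This is a rough sketch, an assumption rather than the circuit itself, using the one-volt-per-octave scaling common on analogue synths; the 65.4Hz base (roughly a low C) is an arbitrary choice for illustration.

```python
# A rough model of a VCO on one-volt-per-octave scaling: every extra
# volt of control voltage doubles the output frequency.

def vco(control_volts, base_freq=65.4):
    """Output frequency in Hz for a given control voltage."""
    return base_freq * 2 ** control_volts

print(vco(0.0))   # 65.4
print(vco(1.0))   # 130.8 -- one volt up, one octave up
```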
Robert Moog's concept was to define what has now become the 'Classic Synthesiser' comprising oscillators, filters, amplifiers, envelope generators and a keyboard and to make each item voltage controlled. By making the control signals and output waveforms the same magnitude, Moog effectively removed any differences between them and this gave rise to the large racks of patch-cord modules where any input could be controlled by any output.
It's been implied that voltage control is a method suitable for computers. The sharp-witted among you should have worked out that a voltage is an analogue quantity being infinitely variable, whereas a computer is digital and resolves in steps — so what gives? Well, the computer has to call in the help of a digital-to-analogue converter ('boo - cheat' you say).
The computer output is generally in the form of a 'port' consisting of eight wires, each of which can be either high or low (at five or zero volts). Together they make an electronic 'word' referred to as a byte. Individually the highs or lows are known as bits, which is short for bi-nary digi-t. They are graded in order of 'importance' — at one end is the least significant bit (LSB) worth 1 unit, and at the other is the most significant bit (MSB) worth 128 units.
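The weighting of the eight wires can be demonstrated with a short sketch (the function name is invented for the example):

```python
# Each wire of the port is one bit of the byte, and each carries a
# weight: the LSB is worth 1 unit, the MSB 128 units.

def high_bit_values(byte):
    """List the weights of the wires that are high, LSB first."""
    return [2 ** i for i in range(8) if (byte >> i) & 1]

print(high_bit_values(0b10000001))   # [1, 128] -- LSB and MSB high
print(sum(high_bit_values(255)))     # 255 -- all eight wires high
```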
The semitone is the natural step size because the vast majority of analogue synths work on a grading system of one volt per octave: if zero volts equals, say, a C, then exactly one volt would equal the C an octave higher. With 12 semitone steps in each octave, you can see the logic involved.
Binary is the numbering system in which a computer thinks, and is best left to text books. For now, just remember that whereas normal (decimal) numbers go ones, 10s, 100s, 1,000s, etc from the numbers 0-9, binary numbers go ones, twos, fours, eights, 16s, 32s, 64s, 128s, etc, and only use the numbers 0 and 1. For example 101 in binary is five in decimal (four plus one).
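For those with a computer handy, the article's example can be checked directly — Python's built-in `int()` reads a string of binary digits, and `bin()` goes the other way:

```python
# Binary 101 is decimal five: four plus one.
print(int("101", 2))        # 5
print(int("11111111", 2))   # 255 -- all eight bits set
print(bin(5))               # 0b101
```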
The maximum number that can be expressed with eight bits is 255 which, although not a huge figure, would correspond to a range of just over 21 octaves if we were talking in semitones. It is the least significant bit (LSB) which controls the step size or resolution in a digital system. If we want the LSB to generate the voltage equal to one semitone we have to scale the converter such that when that bit goes high, one twelfth of a volt is given on the output.
If the computer were now to send a slowly rising count to the converter, then the oscillator frequency would rise in semitone steps giving a chromatic scale.
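The whole chain can be put together as a sketch under the article's figures: an 8-bit count, a converter scaled so the LSB gives 1/12 volt, and a one-volt-per-octave oscillator. The function names and the base frequency are assumptions made for the example.

```python
def dac(count):
    """8-bit count in, control voltage out: 1/12 V per step."""
    return count / 12.0

def osc(volts, base_freq=65.4):
    """One-volt-per-octave oscillator model."""
    return base_freq * 2 ** volts

# A slowly rising count gives a chromatic scale; 13 steps span an octave.
scale = [osc(dac(n)) for n in range(13)]
print(round(scale[0], 1), round(scale[12], 1))   # 65.4 130.8
```

Twelve counts up, the frequency has exactly doubled — the count-to-voltage-to-pitch path that a digital keyboard scanner exploits.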
These principles are used in many digital keyboard scanning systems to present the oscillators with a drift-free pitch voltage. Of course analogue to digital conversion is also possible, but that's another toolbox.
Feature by Andy Honeybone