Musical Theory vs. Musical Reality
This month, Andy Robinson judges the case of Musical Theory versus Musical Reality.
Ever since the first white-coated scientist invented technological progress, there has been a tendency amongst many of the users of its fruits to accept all of the claims made for these products, without questioning the basis on which the claims are made. This problem has reached epidemic proportions in the world of hi-tech music, where manufacturers are constantly making more and more extravagant claims about what their gadgets can do. I want to question one or two of these claims which so many people swallow wholesale.
My starting point for this little analysis is the simple fact that music is a human language which has no meaning, and cannot be said to be 'right' or 'wrong', until heard and interpreted by a human listener - and if it sounds right, then it is right. Or to put it another way: if musical theory disagrees with musical reality (the stuff we hear with our ears) then the theory is wrong.
1. Can a drum machine play rhythmically?
The idea that drum machines play more accurately than human drummers is one which keeps cropping up in various guises. It first appeared when people discovered how mechanical and boring drum machines sound. Instead of jumping to the obvious (and correct) conclusion that the machine was playing the beats at the wrong time, they concluded instead that the machine sounded unnatural because it was too accurate - and so, many later drum machines and sequencers included the facility to introduce random timing errors which were supposed to make them sound more human. This particular silliness has died a death (I hope) to be replaced by more sensible features such as 'groove quantise', but the attitude persists: it is still common to hear people say that quantisation "corrects human errors". In fact, the best it can do is make a bad drummer sound boring.
In musical reality, the four beats of a 4/4 bar should not necessarily be played the same length, nor should a quaver offbeat necessarily appear exactly midway between two beats, nor should each bar necessarily be exactly the same length as the one before it. My evidence for these statements comes from the simple fact that that's how the best (i.e. the most musical-sounding) musicians actually play. If you have a theory which disagrees with this piece of reality, then your theory is wrong.
2. Can a synthesizer play in tune with itself?
In the adverts for the new synthetic Rhodes piano (SOS August 1989), we read that it uses a new 'stretched scale' tuning which "reflects the imperfections that give traditional instruments their harmonic interest and tonal variations." Someone on the design team must know better than that - it's a pity they didn't tell the advertising copywriters. Here's the real story.
When a traditional piano tuner tunes a steam piano, his/her objective is to make the instrument sound as good - as musical - as possible. A simplistic theoretical view of tuning says that two notes an octave apart should have their frequencies in the ratio 1:2 exactly, but the tuner in fact tunes the octaves slightly wider than this. Even if we don't understand (from a theoretical point of view) why this is a valid way to tune a piano, we can nevertheless recognise that it is, because many generations of musicians have decided that their pianos sound better this way.
So what is wrong with the 1:2 theory of tuning octaves? The point is that if you use this equal-tempered tuning, as it is called, then the ratio of the frequencies of notes to their fifths will be too small (it should be 2:3, but it will in fact be about 2:2.9966). By tuning the octaves a little wide, we can achieve a better compromise between wide octaves and narrow fifths. There are other equally valid ways of tuning an instrument: for instance, read Terry Riley's discussion of just intonation in SOS August 1989.
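Where does that figure of 2:2.9966 come from? In 12-note equal temperament a fifth spans seven of the twelve equal semitones in a 1:2 octave, so its frequency ratio is 2 to the power 7/12, a little under the pure 3/2. A couple of lines of Python (purely illustrative; the variable names are my own) confirm the arithmetic:

```python
# Compare the equal-tempered fifth with the pure ('just') fifth.
et_fifth = 2 ** (7 / 12)   # seven of twelve equal semitones per 1:2 octave
just_fifth = 3 / 2         # the pure fifth of the harmonic series

print(f"2 : {2 * et_fifth:.4f}")    # prints 2 : 2.9966 - the narrow fifth
print(f"2 : {2 * just_fifth:.4f}")  # prints 2 : 3.0000 - what it 'should' be
```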
3. Does a sampler really sound like the acoustic instrument it sampled?
A friend tells you that he has multi-sampled a fine saxophonist into his wonderful new 16-bit 50kHz stereo sampler. He then plays it back at you from the keyboard and tells you that it "must" sound like a real saxophone, since each note was recorded from one. For some kinds of material (single separated notes or chords in a backing track) a listener might believe that a saxophonist was playing the part, but as soon as you attempt anything a bit more expressive (a melody, a solo or a riff), the illusion quickly fades. Once again we are dealing with musical reality: a sampled saxophone playing a melody does not sound like a real saxophone playing a melody, no matter how good the samples. It's no good for the samplist to protest that it "must" sound like a saxophone. As before, we can find an explanation for this discrepancy ...
A musician plays each note of a phrase in a way that is appropriate to its particular position in that phrase: even a beginner does this to the extent of his or her ability - it is as natural as breathing. There are so many ways in which a musician can play a single note, and the sampler only records one of them. To use a non-musical analogy: suppose you get hold of a high quality sampler with lots of memory and you sample a few hundred individual words of English, spoken by yourself. You then play them back in different orders to produce a variety of English sentences. Will this sound anything like a human being speaking English naturally? No.
It may be possible to create a closer approximation to a sax (or even a human voice) with breath controllers, modulation wheels and a lot of practice and experimentation, but if you really want to sound like a sax then I think it might be cheaper and easier to go out and buy one.
A very similar mistake was made by the early organ/synth theoreticians in the Sixties. They knew that any continuous waveform can be exactly synthesized from a mixture of sinusoidal waveforms, so they imagined that all they had to do to synthesize a trumpet sound was to perform a Fourier transform of a real trumpet sound (which calculates the necessary sinusoids), and set up the synth to produce the exact same mixture of sinusoidal waveforms (a process known as 'additive synthesis'). What they forgot is that a single note played on a trumpet is not a continuous waveform, it is a miniature drama containing all sorts of varied and rapidly changing sounds. The early synths could not generate anything like this variation of sound within a note. Samplers face exactly this problem at a higher level, being incapable of the expressive control which can turn a sequence of notes into music.
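For the curious, additive synthesis as those theoreticians conceived it can be sketched in a few lines of Python (the function and variable names here are my own, for illustration only). Because the partial amplitudes are fixed, the spectrum - and hence the timbre - is identical at every instant of the note, which is precisely the limitation described above:

```python
import math

def additive_frame(harmonics, freq, t):
    """Sum sinusoidal partials with fixed amplitudes - a static spectrum.
    harmonics[n-1] is the amplitude of the nth harmonic of 'freq'."""
    return sum(a * math.sin(2 * math.pi * freq * n * t)
               for n, a in enumerate(harmonics, start=1))

# First eight harmonics of a sawtooth-like tone (amplitudes 1/n).
amps = [1 / n for n in range(1, 9)]
sample = additive_frame(amps, 440.0, 0.0)  # the recipe never varies with t
```

A real trumpet note, by contrast, would need those amplitudes (and more besides) to change continuously throughout the note.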
So where is it all going? Despite the tone of this article, I am very optimistic and believe that there is more good music (and more bad music) being made today than ever before. In time, the infatuation with electronic gadgetry will fade and these devices will be used as the simple tools that they are, instead of being sold as fashion accessories and worshipped as deities.
In a world which still finds musical uses for such instruments as the accordion and the banjo, there must surely be musical uses for the sequencer and the sampler too. It is up to you to find them.
Opinion by Andy Robinson