Multitimbrality Made Simple
One persistent area of hi-tech confusion surrounds multitimbrality. Vic Lennard explains what it is, what it's for and what's wrong with it.
THE TERM "MULTITIMBRAL" is used extensively when describing modern musical instruments, and yet its precise definition is far from clear. If you ever want to appear on Mastermind - specialist subject, MIDI - all you'd need to do would be to digest the facts contained in the Detailed MIDI Specification. This is the MIDI "bible", which may contain certain anomalies and ambiguities but at least provides the necessary information for anyone interested in MIDI (apart from the Standard MIDI File specification) in a single 64-page book. All necessary information, that is, with the exception of a definition for the expression "multitimbral".
In fact, multitimbrality appears to have preceded MIDI by some five years or so. It was patented by an obscure gentleman, who defined it along the lines of independent digital control for multiple sound sources. I attempted to trace this person about a year ago, but without success. It seems that he crops up every now and then, when he threatens to sue various manufacturers for infringing his patent! Still, anyone who could envisage such technology over 15 years ago must have been rather forward thinking.
IF A SYNTH is termed "polyphonic", it has the ability to play many different notes simultaneously. So the polyphony of an instrument is the total number of notes which it allows you to play at the same time. The number of voices within a synth is usually the same as the polyphony for that synth, as one voice is normally responsible for the sound being created by one key. A little care has to be taken, however - some manufacturers use the word "voice" in place of "note" while others use it in place of "sound". Personally, I prefer the former and will use that convention throughout this article. Many synths allow you to double up voices, or to work in a "dual" mode where the pressing of a key effectively plays two voices. In this situation, the polyphony would be halved, and such synths are often referred to as being "bitimbral". Again, there is no specific definition for this - a bitimbral synth may simply allow you to overlay two sounds, or may let you split two sounds so that the keys above the split-point play one sound while those below the split-point play a different sound. No mention of MIDI - the sounds either side of the split may or may not be on different MIDI channels.
The polyphony of an instrument used to be indicated within the model name - the Roland Jupiter 8 offers eight voices while the Sequential Circuits Prophet 10 has ten voices. That convention has changed - otherwise what are we supposed to make of the Oberheim Matrix 1000? Roland's D50 gives you access to 32 partials (or part-sounds), but as up to four partials are combined together to make a single voice, it's impossible to calculate the precise polyphony of this instrument unless you specify its setup at any given moment.
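If you fancy putting that arithmetic into code, a minimal sketch (illustrative only - this is not Roland's voice-allocation logic, and the names are invented) might read:

```python
# Illustrative arithmetic: with a fixed pool of partials, polyphony
# depends on how many partials each patch spends per voice.
TOTAL_PARTIALS = 32  # the D50's partial pool

def polyphony(partials_per_voice: int) -> int:
    """Notes available when every voice costs the same number of partials."""
    return TOTAL_PARTIALS // partials_per_voice

# A simple one-partial patch allows 32 simultaneous notes;
# a lush four-partial patch allows only eight.
```

So the "polyphony" of such an instrument is only meaningful once you know the patches in use at that moment.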
What's the point of having a polyphony value greater than the number of fingers on your hand? You can't play more than ten notes solely from your fingers (unless you're into jazz) but don't forget the sustain pedal. It's quite common for 16 notes to be taken up by a pianist, which is why eight-note polyphonic piano modules are sometimes of limited use. If you exceed the polyphony of a synth, a system called note-stealing comes into effect, whereby either the first or last note played will be reassigned - some synths allow you to choose which of these you would prefer. Alternatively, most modern synths have an "overflow" mode where notes exceeding the polyphony are re-transmitted from the MIDI Out socket to another synth. Also, some synths let you work in pairs with even-numbered notes on one synth and odd-numbered notes on the other - Yamaha's EMT10 is one such beast.
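The note-stealing idea can be modelled in a few lines. This is a hedged sketch (no particular synth's firmware works exactly this way) showing the two common choices - reassigning the first or the last note played:

```python
# Minimal note-stealing model: when polyphony is exceeded, either the
# oldest or the newest sounding note is dropped to make room.

def note_on(active, note, polyphony, steal="oldest"):
    """Return the new list of sounding notes after a Note On."""
    active = list(active)
    if len(active) >= polyphony:
        if steal == "oldest":
            active.pop(0)   # reassign the first note played
        else:
            active.pop()    # reassign the most recent note
    active.append(note)
    return active

notes = []
for n in [60, 64, 67, 72]:  # play four notes on a three-note synth
    notes = note_on(notes, n, polyphony=3)
# with "oldest" stealing, the first note (60) has been reassigned
```

Synths that let you choose are simply exposing that `steal` parameter on the front panel.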
Now, what happens if you're using fewer notes than the polyphony of a synth but want to use different sounds at the same time?
A MULTITIMBRAL SYNTH can be thought of as several independent synths within one unit, constrained by the total polyphony of the unit and various other factors which we'll discuss later. The first multitimbral synth to be marketed as such was Roland's MT32, launched in January 1987. The MT32 has the same kind of voice structure as the D50 - 32 partials with between one and four partials being used per voice. However, it is also nine-part multitimbral, with the first eight parts choosing from the 64 internal sounds, and the ninth part being a dedicated rhythm section containing a selection of 30 percussion instruments. But this certainly wasn't the first multitimbral synth. That honour goes to Sequential Circuits' SixTrak, released in December 1983. This is a six-voice synth with a built-in digital recorder onto which you can record six different instruments, each using one voice. In fact, the word "multitimbral" was actually used in the publicity brochure.
Irrespective of the manufacturer, each part on a multitimbral synth will generally have its own MIDI channel, sound (or tone/timbre), MIDI key range and Output level. This idea of a multitimbral synth seems to be the answer to a musician's dreams; only one device needed to provide for all his/her synth needs, meaning no bird's nest of audio, mains and MIDI cables. However, it's not that simple - there are several restrictions. The first of these is in the hardware department; synths often provide only a stereo pair of output sockets through which all sounds emanate. While you'll usually have the option of selecting the position of each instrument in the stereo field, or pan, you are not able to individually EQ or effect each instrument. It's also quite a bind to have to balance the relative levels by using push buttons on the front panel of such a synth.
The second problem is that there is often only a global effect offered - you can't use one kind of reverb on, say, the drums and another type on the brass. There may be more than one type of effect offered, and you may have the option of using the different effects on different instruments within the synth, but again these are usually output from the same pair of stereo outs. All in all, it's a compromise which results in a lack of flexibility.
No matter how great your synth's polyphony, you're going to run out of voices sooner or later. You then have the awkward situation of losing notes without being able to control where from. To avoid this, most multitimbral synths let you set a reserve for each of the sounds so that, should the polyphony be exceeded, you will be able to control from which sounds notes are stolen.
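A rough model of voice reserve (the part names and the tie-break rule here are invented for illustration): notes are stolen only from parts holding more voices than their reserve guarantees them.

```python
# Sketch of per-part voice reserve: steal only from a part that is
# using more voices than it has reserved.

def find_steal_part(usage, reserve):
    """Pick the part to steal a voice from: the one furthest over its reserve."""
    over = [p for p in usage if usage[p] > reserve.get(p, 0)]
    if not over:
        return None
    return max(over, key=lambda p: usage[p] - reserve.get(p, 0))

usage   = {"piano": 10, "bass": 2, "drums": 4}  # voices in use per part
reserve = {"piano": 6,  "bass": 2, "drums": 4}  # guaranteed minimums
# the piano is four voices over its reserve, so it loses a note first
```

The point is that the bass line and drums keep their voices even when the pianist leans on the sustain pedal.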
There is a further problem which many of you will have encountered, possibly without being aware of the cause. Take a multitimbral synth and feed multiple MIDI channels of information through it. Now start moving around the screens on the liquid crystal display. You should notice two things: the screens are sluggish to respond, and the MIDI information passing through the synth starts to glitch. This is because the same microprocessor generally handles the MIDI input/output, screen refreshes and data processing - the more strain it is under, the less it can achieve within a clock cycle. So, for the best possible timing of the MIDI information, never manipulate the screens while a synth is in use. It's also true that some multitimbral synths are under-powered in the microprocessor department, resulting in a timing inaccuracy proportional to the amount of data being processed at any moment - the delays in handling MIDI information are therefore variable.
When many MIDI channels of data need to be processed, all synths will have an order of priority which might be expected to be in ascending order of part number. However, it ain't necessarily so. For example, Korg's M1 gives part eight priority, followed by seven, six, and so on down to part one. Similarly, Roland's D110 in its earlier versions gave the Rhythm part lowest priority. Typically, this information is never given in the manual.
The "definition" of independence of MIDI channel per part means that many samplers are not, strictly speaking, multitimbral. For instance, Korg's DSS1 can have various keyboard splits, but they all share the same MIDI channel. Korg call this "multisound" but perhaps MIDI devices of this nature should be referred to as being "multi-zoned". Of course, there's also the (still) industry-standard 12-bit sampler, the Akai S900/950, which can have a different MIDI channel set per zone, or keygroup, as Akai call them. What, then, do we call this? A multitimbral, multi-zoned, multisound? Isn't life complicated?
WHEN YOU WORK with a multitimbral synth, you have the option of changing either the whole group of sounds you're using, or just one of them. Most MIDI units have a Global or Control MIDI channel, which is often one less than the lowest MIDI channel being used. Receiving a MIDI Program Change on this channel will select the patch of that number, something you might want to do at the start of a song. However, doing this in the middle of a song would cause all sounds that were playing at that moment to glitch. To get around this, it is usual to send a MIDI Program Change on the MIDI channel of a specific part to change the sound for that part.
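For the curious, the actual bytes involved are tiny. A Program Change is a two-byte message: a status byte of &Hex;C0 plus the channel (0-15 on the wire, 1-16 on the front panel), followed by the program number (0-127). A sketch of building one:

```python
# Build a MIDI Program Change message: status byte 0xC0 | channel,
# then the program number. Channels are 1-16 on the panel but
# 0-15 inside the message itself.

def program_change(channel: int, program: int) -> bytes:
    """Two-byte Program Change for a 1-16 channel and 0-127 program."""
    assert 1 <= channel <= 16 and 0 <= program <= 127
    return bytes([0xC0 | (channel - 1), program])

# Changing just the part listening on channel 3 to program 41:
msg = program_change(3, 41)
```

Sending that on a single part's channel leaves every other part playing undisturbed, which is exactly why it's the preferred mid-song method.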
NO ARTICLE ON multitimbrality would be complete without mention of MIDI mode 4, commonly called Mono Mode. Each voice of a synth operates on consecutively-numbered MIDI channels, starting from a base channel, but not all of the available voices have to be used. What use is MIDI mode 4? Take the example of a guitar synth using pitch-to-voltage technology. Strings are bent to alter the pitch but the pitch change varies with the gauge of the string. So multiple-string bends are impossible when working with one MIDI channel - the pitchbend data transmitted is different for each string and summing pitchbend is not to be recommended. However, if each string is transmitting on a different MIDI channel, this problem doesn't occur, although an incredible amount of MIDI data is generated. If the synth being used has eight voices, operating in Mono mode, then two of the voices will be redundant in this situation.
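The Mode 4 channel allocation is simple enough to sketch (the helper below is hypothetical, but the consecutive-channels-from-a-base rule is straight from the MIDI spec):

```python
# Mode 4 (Mono Mode) channel allocation: each voice sits on its own
# MIDI channel, numbered consecutively from the base channel.

def mono_mode_channels(base_channel: int, strings: int = 6):
    """Map string number (1 = highest) to its own MIDI channel."""
    return {s: base_channel + s - 1 for s in range(1, strings + 1)}

channels = mono_mode_channels(base_channel=1)
# string 1 -> channel 1 ... string 6 -> channel 6; voices seven and
# eight of an eight-voice synth sit idle
```

Because each string's pitchbend travels on its own channel, a two-string bend no longer forces the synth to sum two contradictory pitchbend streams.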
There's little doubt that multitimbral synths have changed the way most of us work with MIDI. There are many synths whose polyphony exceeds 20 voices and which have four or six outputs, giving them the ability to play back an entire song from just one module. That said, don't forget that MIDI is a serial protocol - only one note can be transmitted at a time. The more data you send to a multitimbral synth, the more you clog up its input buffer, and the delays will increase up to a point at which they become audible - though you can improve the situation by filtering out unwanted MIDI data (like Aftertouch) and thinning out pitchbend and continuous MIDI controllers (like MIDI Volume - Controller 7) if your sequencer allows you to.
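The back-of-envelope timing figures come straight from the MIDI spec's 31,250 baud rate - each byte occupies ten bit-times on the wire - and a thinning filter is nothing more exotic than keeping every Nth controller event. A sketch (the `thin` helper is an invented illustration, not any sequencer's actual algorithm):

```python
# MIDI wire timing: 31,250 baud, 10 bits per byte (8 data + start + stop),
# so a three-byte Note On occupies nearly a millisecond of bus time.
BAUD = 31250
BITS_PER_BYTE = 10
byte_time_ms = 1000 * BITS_PER_BYTE / BAUD   # 0.32 ms per byte
note_on_ms = 3 * byte_time_ms                # ~0.96 ms per Note On

# A simple thinning filter: keep only every Nth continuous-controller
# event (pitchbend, Volume etc.) to lighten the load.
def thin(events, keep_every=4):
    return [e for i, e in enumerate(events) if i % keep_every == 0]
```

Thirty-odd dense pitchbend events, then, can tie up the wire for ten milliseconds or more - which is why thinning them makes an audible difference.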
Feature by Vic Lennard