MIDI - What's Wrong With It?
Hybrid Technology's Chris Jordan, designer of the Music 500 Synthesizer for the BBC Micro, speculates on the future of MIDI and what developments it requires to satisfy the needs of more adventurous users and the promise of more advanced technology.
As just about any MIDI user will appreciate, the most obvious problem with MIDI is speed, or rather the lack of it. The notorious MIDI timing delay must be known to everyone who has ever tried daisy-chaining a MIDI link, and no-one takes seriously the suggestion that you can drive 16 instruments on the same bus. The delay problem has been so accepted, in fact, that we are now seeing sequencers with 'time shift' functions to compensate!
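To put some rough numbers on the delay (the per-stage figure here is an assumption for illustration, not a measured one): MIDI runs at a fixed 31,250 baud with 10 bits on the wire per byte, so a three-byte Note On takes nearly a millisecond, and every In-to-Thru stage in a daisy-chain adds its own retransmission delay on top.

```python
# Back-of-envelope MIDI timing (per-stage delay is an assumed figure).
BAUD = 31250            # MIDI's fixed transfer rate, bits per second
BITS_PER_BYTE = 10      # 8 data bits plus start and stop bits

byte_ms = BITS_PER_BYTE / BAUD * 1000      # 0.32 ms to send one byte
note_on_ms = 3 * byte_ms                   # Note On = status + key + velocity

# Suppose each instrument's In-to-Thru path adds roughly one byte-time
# (plausible for a software Thru; real figures vary per instrument).
chain_delay_ms = 15 * byte_ms              # 15 stages ahead of device 16

print(f"{byte_ms:.2f} ms/byte, Note On {note_on_ms:.2f} ms, "
      f"16th device hears it {chain_delay_ms:.2f} ms late")
```

Even under these generous assumptions, the last instrument in a full 16-device chain is hearing everything several milliseconds late - before any receive-processing delays are counted.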
There are two main lines of attack open for solving the speed problem: firstly, speeding up the transfer rate, and secondly, shortening the paths of transfer. MIDI speed-up is needed for other reasons which we'll look at later on, so let's now explore the second idea.
The basis of this is simple - each instrument takes a certain time to transfer information from MIDI In to MIDI Thru, and the accumulated delay felt by the last device in the chain can be enough to cause problems. The solution? - a simple box with one MIDI In and a number of MIDI Thrus, one for each receiving device. Each receiver then has a more-or-less direct route from the transmitter, so the delay problem is largely eliminated.
Such 'splitter boxes' were the first MIDI accessories to appear, and they now abound in various forms, some incorporating routing functions and other refinements.
The logical next step is to equip 'master' transmitters like computers, sequencers etc with multiple independent MIDI Outs, each destined for one receiver, and this has already begun. Contrasting examples are the Moog Song Producer for the Commodore 64, and the soon-to-be re-released MIDI Card for the Fairlight CMI.
This alternative connection scheme is called a 'star' network in computer terms, due to its shape. Its advantages over the daisy-chain shape have long been recognised by computer system designers. Figure 1 shows an example.
The irony of this situation, in which MIDI's multi-instrument 'network' approach has been dumped in favour of a group of simple one-to-one links, is demonstrated admirably by Yamaha's QX1 Sequencer/TX816 Synthesizer combination - a glance around the back shows a nest of eight separate MIDI cables all going from the sequencer to the synthesizer, looking remarkably like a multitude of analogue synth patch leads!
The MIDI star network gives other advantages as well as relief from delays. Because each device has its own MIDI bus, there is no need to send data via different channels, so anti-social instruments with fixed receive channels, like the early DX7, can be accommodated. This releases the channels for another very important use, as we shall see later.
Another refinement to the MIDI star network which we will be seeing more of is the two-way link - devices will have return links going back to the controller. This allows the proper two-way communication which is particularly important in computer-based set-ups. Take a MIDI patch dump as an example. It's probably struck you how dumb it is having to reach over and start a patch send on the instrument, after you've just selected the patch receive function on the computer. With the two-way link, the computer can itself request the instrument to send and, in the reverse direction, prepare it to receive patch data in a particular memory location. With this sort of intelligent co-operation, the computer can have much better control over the instrument, hopefully as good as any intimately-interfaced peripheral - a disk drive for example.
Sequential have already implemented this type of patch request protocol as system exclusive functions on the Six-Trak and other instruments - they are definitely the ones to watch for progress in this area.
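A patch request of this kind is just a short System Exclusive message. The sketch below shows the general shape of one; the opcode byte is invented for illustration - real formats are manufacturer-specific and documented in each instrument's MIDI implementation chart (Sequential's manufacturer ID really is 01 hex).

```python
# Sketch of a "send me patch N" request (opcode assumed, not from any
# real instrument's spec; System Exclusive formats vary per maker).
SYSEX_START, SYSEX_END = 0xF0, 0xF7
MAKER_ID = 0x01          # Sequential's MIDI manufacturer ID
REQUEST_PATCH = 0x05     # opcode invented for this illustration

def patch_request(patch_number: int) -> bytes:
    """Build a message asking the instrument to dump one patch."""
    if not 0 <= patch_number <= 127:
        raise ValueError("patch number must fit in 7 bits")
    return bytes([SYSEX_START, MAKER_ID, REQUEST_PATCH,
                  patch_number, SYSEX_END])

msg = patch_request(12)
print(msg.hex(' '))      # f0 01 05 0c f7
```

The instrument answers with the corresponding patch dump on its return link - no reaching over to press buttons required.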
By far the biggest pressure to boost the speed of MIDI comes from the needs of sampling instruments. A sound sample is in essence just a set of patch data, albeit much larger, and we expect to be able to send it to and from other instruments, and particularly computers and storage devices, in the same way.
It seems that the original MIDI designers never even considered the need to send really large chunks of data between instruments. Needless to say, MIDI is woefully inadequate for sending sample data. Some rough calculations show that a 64K 12-bit sample would, at best, take about 38 seconds to transfer. If you think you could live with that, keep in mind the swift trend to larger sample memories - transferring the full 48-second store of the new MDB Window Recorder would take around 12 minutes!
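The arithmetic behind that figure is easy to check. MIDI data bytes carry only seven usable bits, so even with best-case packing a 12-bit sample word costs nearly two bytes on the wire at 320 microseconds each:

```python
# Back-of-envelope check of the 64K 12-bit sample transfer time.
BAUD = 31250
BYTE_S = 10 / BAUD                 # 320 microseconds per byte on the wire

samples = 64 * 1024                # 64K sample words
bits = samples * 12                # 12 bits each
data_bytes = -(-bits // 7)         # ceiling division: 7 usable bits/byte
seconds = data_bytes * BYTE_S

print(f"{seconds:.0f} seconds")    # ~36 s before any message overhead
```

Add headers, checksums and handshaking and you arrive at the 38-second ballpark quoted above - and that's the best case.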
However, this doesn't seem to have dampened the enthusiasm of the manufacturers - the MIDI Manufacturers Association have already agreed a standard for transferring sample data. So far this has only been implemented on the Sequential Prophet 2000, but the most interesting thing about this instrument (on this subject anyway) is a double-speed MIDI option - straight MIDI but going at twice the speed. Big deal, I hear you say - time for one cup of tea instead of two while your data trickles across - but the point is, this is the first step in the right direction.
The speed of the current MIDI hardware design is limited by just one thing: the opto-isolator, the critical isolating device in the standard receiver that transfers the data without any actual electrical connection to the MIDI cable and transmitter, thereby avoiding dreaded earth loop problems. The opto-isolator model used in most instruments is cheap and slow. Though obviously more expensive, there are alternatives that run at up to one hundred times the speed - just the ticket for a Super-MIDI which can still be switched back to standard 'slow' MIDI for compatibility. Once again Sequential are the company to watch for developments in this field.
Two past moves on the MIDI speed-up problem are worth a quick mention. DigiDesign have side-stepped the problem by using 'RS422', a MIDI-like computer interface standard, for their Emulator II Sound Designer software running on the Apple Macintosh computer. This has no opto-isolator and so can run much faster, but has to give careful attention to the resulting earthing problems. The Powertran MCS1 Sampler deserves a mention as the earliest attempt (known to this writer) to speed up MIDI in a production product - the unit had a special 16-times normal speed MIDI jack with 500kHz clock line, for connection to its complementary BBC Micro MIDI Interface. Unfortunately, the software to store samples on disk never used this option - it had a hard enough time transferring to disk at the standard MIDI rate - but it was worth a try.
Though from the user's point of view MIDI is too slow, from the point of view of the receiving instrument it is often too fast! Basically, the receiver may not have enough time to accept a message and act on it before another one arrives. If the receiver can't deal with each byte that arrives at the port before the next one arrives, the result is a nasty pile-up.
Most designs use a 'buffer' for incoming data. This is a store into which bytes are put, in correct order, as soon as they arrive. The bytes are taken out and dealt with in good time, in the hope that the buffer will be big enough to store the messages that arrive while the receiver is doing some more important, time-consuming, task.
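The scheme just described can be sketched in a few lines - one routine fills the buffer the instant each byte arrives, another drains it when the main program has time. The buffer size here is an assumed figure; real instruments vary.

```python
from collections import deque

# Minimal sketch of a MIDI receive buffer (size assumed for illustration).
BUFFER_SIZE = 128
buffer = deque()

def on_byte_received(b: int) -> bool:
    """Called per incoming byte; False means overflow - 'MIDI Buffer Full'."""
    if len(buffer) >= BUFFER_SIZE:
        return False               # the nasty pile-up
    buffer.append(b)
    return True

def process_one():
    """Main loop takes bytes out, oldest first, in good time."""
    return buffer.popleft() if buffer else None

for b in (0x90, 0x3C, 0x40):       # a Note On: middle C, mid velocity
    on_byte_received(b)
print(process_one())               # 144 (0x90, the status byte, comes out first)
```

Everything depends on the drain side keeping pace: if a slow task (a patch change, say) stalls it for too long, the buffer fills and data is lost.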
A typical such task is a patch change. Though this is very fast on some instruments, Casio products in particular, on others it takes an inexplicably long time. The DX7 is very bad, taking so long that sequencer and MCL programs need to send patch changes well in advance. The DX21 is generally so slow that a simple note sequence can cause it to cough up the 'MIDI Buffer Full' error message if it is played fast enough.
On the whole, there is little excuse for this kind of poor receive performance. We aren't looking towards super-powerful microprocessors to solve this - just waiting for certain manufacturers to pull their fingers out and start doing the job right.
The average home computer has a genuinely hard job responding to incoming data in time, even as far as getting the bytes safely into the buffer. The BBC Micro is a pretty fast machine, but it has other devices such as screen, keyboard etc to attend to at the same time as MIDI, and it can easily be tied up long enough to miss a couple of MIDI bytes. The UMI-2B Sequencer for the Beeb attempts to solve this with some very drastic disabling of the Beeb's other functions, but it still occasionally has to report a 'data error' during recording.
This problem is destined to evaporate entirely on more advanced home computers. This is not just because of the extra power of processor chips like the 68000, but also because machines like the Apple Macintosh and Commodore Amiga have very advanced operating systems to which MIDI-style data ports and buffers are second nature. These will take an enormous burden off system designers, and hopefully allow them to apply more effort to the higher-level functional aspects of their systems.
Some of these advantages are offered to less powerful computers by so-called 'intelligent' MIDI interfaces - dedicated 'slave' processors that handle the nitty-gritty of MIDI data transfer, including the all-important buffering. Two examples are the Hinton Instruments MIDIC and the Roland MPU-401.
The problems we've looked at so far are those that most users and manufacturers are aware of, but there are others which are nothing like as visible, and potentially more serious. These concern the more conceptual side of MIDI and its software protocols.
First, let's define what must be one of the most misused terms in our industry - the word 'voice'. Ask a DX21 owner how many voices his keyboard has got, and the answer may range from eight (simultaneously playable notes) to 128 (different sounds that can be selected). The real answer, of course, is eight, because a 'voice' is a single note-playing hardware unit of the synthesizer. A 'sound' that can be used on a voice is called a 'patch', or in software on a computer, an 'instrument'. Attempting to clear this up by asking our hypothetical (but plausible) DX21 owner how many simultaneous voices are allowed, you may get the answer "two"- one for each half of the split keyboard! Oh dear...
It must be said that Yamaha are largely responsible for this ambiguity (another example: CX5 FM Voicing Program), seemingly as the result of their policy of appropriating existing words that are free for re-use on the basis of being outside their own current vocabulary - 'operator', 'algorithm' and 'job' spring readily to mind.
That finished with, back to the point, which is: MIDI is a keyboard-based system - the music data is represented not as pitches directed to particular voices, but as keys being pressed and released, it being the job of the receiver to assign these presses to voices. The problems start when we send non-keyboard music down MIDI, for example when you want to play the same pitch on two of your, say, six synthesizer voices. This could easily happen if you were playing a three-part canon (a reasonable application for a six-voice synthesizer), and two parts cross over, thus playing the same note. Obviously, you can't have the corresponding key down twice, so you can't communicate this event over a single MIDI channel. What actually sounds varies from instrument to instrument - most ignore the second press and unfortunately act on the first release, but some (DX21 and CZ101 included) actually do play two notes by some quirk of the software. Before you all say 'there's your solution then - let all keyboards respond to multiple key depressions', remember that we still can't actually talk to these voices separately. For example, who can say to which voice a polyphonic pressure message should go?
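The crossed-canon failure is easy to demonstrate. A receiver that tracks key state on one channel (as most do) treats the second press of an already-down key as redundant, and the first release as ending the note - so the crossing part simply vanishes:

```python
# Illustrative model of typical single-channel key-state tracking.
held = set()
events = []                        # what actually sounds

def key(msg, pitch):
    if msg == "on":
        if pitch in held:          # second press on an already-down key
            return                 # ignored by most instruments
        held.add(pitch)
        events.append(("start", pitch))
    elif pitch in held:
        held.discard(pitch)
        events.append(("stop", pitch))

# Two canon parts cross onto middle C (key 60); part one releases first.
key("on", 60); key("on", 60); key("off", 60)
print(events)   # [('start', 60), ('stop', 60)] - the second part is silenced
```

One note sounds where two were written, and it stops when the first part lets go - exactly the behaviour described above.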
Things start to get worse when we think about how events like percussion hits have been fitted in to MIDI's conceptual keyboard scheme. These are sent as key depressions, and each key (that is, pitch) is directed to a particular voice in the percussion unit. Now that we have used up the pitch option to select the voice, how do we communicate pitch for tuned percussion instruments? Quite. I could go on to talk about patch changes on percussion instruments, but by now you should have realised that if MIDI was supposed to be a general Musical Instrument interface, rather than purely a keyboard interface, someone, somewhere made an almighty boo-boo! [Perhaps they were put off calling it a Keyboard Instrument Digital Interface because of the resulting acronym (KIDI)?!- Ed]
However, all is not lost. Salvation lies in that rarely-used MIDI mode, Mode 4 (Mono On/Omni Off). This lets each voice of a synthesizer be controlled individually, in fact, via its own MIDI channel, so that the controller effectively has a number of monophonic synthesizers which it can use to play polyphonic music, if it wishes, on its own terms. In the case of a percussion unit, each voice could have its pitch controlled exactly like any normal synthesizer voice - in fact, if it responded to pitch, it would be a normal synthesizer voice!
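Under Mode 4 the crossed-canon problem simply disappears: each part gets its own channel, so the same pitch can sound on two voices at once. In raw bytes (channel numbers here are just an example assignment):

```python
# Mode 4 sketch: one mono channel per voice, channels assumed 1 and 2.
def note_on(channel: int, pitch: int, velocity: int = 64) -> bytes:
    """Raw Note On: status byte 0x90 + (channel - 1), then key and velocity."""
    return bytes([0x90 | (channel - 1), pitch, velocity])

# Two crossed canon parts both playing middle C, one per voice/channel:
part1 = note_on(1, 60)
part2 = note_on(2, 60)
print(part1.hex(' '), '|', part2.hex(' '))   # 90 3c 40 | 91 3c 40
```

Two distinct messages, two distinct voices - and each part can release its note independently, which a single channel cannot express.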
Now we come to the crunch - how many MIDI instruments support Mode 4? The answer is: not many. Those that do are mainly ones that let each voice have its own sound (referred to as being 'poly-timbric' or 'multi-timbral'). Examples at the lower end of the price range are most of the Casio synths, the Sequential Multi-Trak and the Rhodes Chroma Polaris. Unfortunately, the present MIDI specification allows manufacturers to implement just the 'easy' modes for basic control, and the lazy manufacturers, including most of the Japanese, have stopped there. Yamaha, for example, don't have a single truly poly-timbric instrument in their range. Even the CX5, where separate voices might be taken for granted, has only a single LFO shared among its eight voices.
The importance of individual control of voices, both for note information via Mode 4 and patch information for poly-timbrality, will in the near future become a major influencing factor in the design of MIDI interfaces for instruments. This is particularly due to the vastly-increased potential for control offered by computer systems and music composition software. A good example of the level of control we shall expect in the future is the Oberheim Matrix-12. This incredible machine is fully poly-timbric and has an excellent MIDI implementation - in conjunction with a computer and extensive music composition software, it becomes a staggeringly powerful mini-studio.
Concerning the more obvious MIDI software problems, there is considerable room for improvement in simple details of implementation. Many users will have come across important messages which are sometimes provided, sometimes not. There are a few which were included in the MIDI spec for very good reasons, but which manufacturers seem to have ignored in the absence of any obvious application. The most important of these is the 'All Notes Off' message, MIDI's emergency 'shut-up' function. This is essential for computer-based control, being the only reliable way of silencing all units when the controlling program is aborted, as a result of pressing ESCAPE for example. For some inexplicable reason, there are more instruments that don't respond to 'All Notes Off', including the otherwise very good Casios, than do.
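For the record, 'All Notes Off' costs almost nothing to implement or to send - it is Controller number 123 with value 0, one three-byte message per channel:

```python
# 'All Notes Off': Control Change (status 0xB0 + channel), controller 123.
def all_notes_off(channel: int) -> bytes:
    """Build the emergency 'shut-up' message for one channel (1-16)."""
    return bytes([0xB0 | (channel - 1), 123, 0])

# An aborting control program would fire this at every channel in use:
panic = b''.join(all_notes_off(ch) for ch in range(1, 17))
print(all_notes_off(1).hex(' '))   # b0 7b 00
```

Forty-eight bytes silences everything on the bus in about 15 milliseconds - provided, of course, that the instruments on the receiving end bother to respond.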
In summary, then, some things we can expect to see more of in the future are:
- Star networks to replace the conventional MIDI chain.
- 'Super-MIDI' running at high speed.
- Faster MIDI functions in instruments.
- Advanced two-way protocols for computer control of instruments.
- Mode 4 implementations for direct voice control.
- Poly-timbrality for greater musical possibilities.
- More complete, accurate and sensible MIDI implementations.