Quick, Quick, Slow
Article from Music Technology, June 1987
Much has been said on the subject of MIDI delays, but so far the truth has been difficult to uncover beneath a flurry of rumour and debate. Chris Meyer gives us the lowdown.
We all know MIDI is none too quick at communicating data, but how bad do MIDI delays need to be before they get in the way of music, and what causes them in the first place?
"AVOID USING MIDI Thru sockets, since each one adds three milliseconds of delay." How many times have you heard that? I've seen it isolated and set in bold print in a couple of major music publications in the US. Not quite as often, but often enough, I've also seen "avoid long runs of MIDI cable - this also contributes to MIDI delay."
But a real, honest-to-God MIDI Thru socket actually has just a couple of microseconds of delay. This is induced by the optoisolator that the signal passes through. There is a very small amount of time that the LED inside an optoisolator needs to turn on and off, and an equally brief time for the photodiode to detect what the LED is doing and convert it into an electrical signal. By definition, this time has to be less than a MIDI bit - 32 microseconds. If it was any longer, the bits that make up a MIDI byte would start slurring into each other, and the integrity of the signal would be lost (resulting in notes being left on, and so forth). For an optoisolator to have a three-millisecond delay would mean that it had somewhere to store up 90 to 100 MIDI bits (the equivalent of three full MIDI note-on messages) while it was busy delaying them - and that takes a RAM buffer. In other words, it's physically impossible.
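To check that arithmetic for yourself, here is a minimal Python sketch (mine, not part of the original article) that works through nothing but the spec figures: 31,250 baud and ten bits per transmitted byte.

```python
# Sanity check on the "3 ms per Thru" myth, using only figures
# from the MIDI 1.0 spec: 31,250 baud, 10 bits per byte.
MIDI_BAUD = 31_250                       # bits per second
BIT_TIME_US = 1_000_000 / MIDI_BAUD      # 32 microseconds per bit

claimed_delay_us = 3_000                 # the mythical 3 ms Thru delay
bits_buffered = claimed_delay_us / BIT_TIME_US

print(f"One MIDI bit lasts {BIT_TIME_US:.0f} us")                # 32 us
print(f"A 3 ms delay means buffering {bits_buffered:.0f} bits")  # ~94
# ~94 bits is roughly three full note-on messages - storage that
# a bare optoisolator simply doesn't have.
```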
The same holds true for MIDI cables. Electricity travels through a cable, at worst, at half the speed of light - covering a hundred feet in about a fifth of a microsecond, nowhere near an audible delay. True, the integrity of a MIDI signal can be degraded by such things as cable capacitance, but in reality, you're not going to perceive any delay.
If you remain paranoid about these supposed delays, look for cables with low capacitance and replace all the optoisolators in your MIDI equipment with ones that have fast "rise times" - you certainly won't be hurting anything, and you will be saving yourself potential problems if you do happen to run a MIDI signal over a couple of hundred feet through several MIDI Thrus.
Now, there always tends to be some germ of truth or reality behind any myth, rumour, story, or old wives' tale. The MIDI Thru delay is one such case. Some supposed MIDI Thrus are not really just optoisolators - the signal passes through the microprocessor inside an instrument before going out the back panel. It is the time needed for the processor to recognise the existence of the message and send it back out that causes the delay. This setup is known as a "software thru", and is actually very useful on things such as MIDI sequencers, because it lets the data from a master keyboard be recorded and passed along to the actual sound-generating modules, merged with whatever else the sequencer happens to be playing back. The infamous "three millisecond" number actually originated with the venerable Roland MSQ700 MIDI sequencer, which did precisely this - and happened to take that long to do it.
THE ABOVE DISCUSSION should have touched off something in your head like, "hey, didn't he mention 'merging' as being the real 'Thru' delay?" Yes, I did. And even though one rarely sees MIDI mergers, mappers, and processing boxes mentioned as sources of MIDI delay, they are indeed guaranteed sources of it.
Every time a MIDI byte passes through a microprocessor, it incurs at least 320 microseconds of delay. Why? Well, MIDI data is transmitted serially - this means one bit at a time. All ten bits of a MIDI byte (eight for the actual message; two to mark the beginning and end) must be received by a UART (Universal Asynchronous Receiver/Transmitter) before it can be assembled. Then it is sent back out, one bit at a time, where it is received by another UART and acted upon there. In the very best case of a MIDI merger with no competing activity (ie. no two messages coming in at the same time), a byte must be read in by a UART, recognised by the processor, and sent back out the UART, giving a minimum delay of that one MIDI byte and some slight software overhead.
If there is any competing data, one message must wait until the other is fully transmitted before it can be sent. A typical MIDI message is two to three bytes, meaning a delay of up to two-thirds to a full millisecond waiting for the other message.
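Those figures reduce to a few lines of arithmetic. Here is a Python sketch (my illustration, using the byte timings established above) of the best and worst cases for a simple merger:

```python
# Merger delay, best and worst case, from the byte timings above.
BIT_TIME_US = 32
BYTE_TIME_US = 10 * BIT_TIME_US      # start + 8 data + stop bits = 320 us

# Best case: a byte must be fully received before it can go back out.
min_merge_delay_us = BYTE_TIME_US    # plus slight software overhead

# With competing traffic, one message waits for the other to finish:
wait_2_byte_msg_us = 2 * BYTE_TIME_US   # 640 us (~two-thirds of a ms)
wait_3_byte_msg_us = 3 * BYTE_TIME_US   # 960 us (~a full millisecond)
```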
What if the creator of a MIDI merger is a poor software engineer, or is using a weak microprocessor? Then there is an additional perceivable delay involved in the processor actually recognising the existence of a byte, and managing to get it back out again. In reality, I would not expect this delay to be over the three milliseconds of an MSQ700 (and quite often it's less), but I sometimes have to wonder when I see the manufacturer of one such device bother to write an editorial in another magazine claiming people cannot perceive a 20-30 millisecond delay - the real figure is more like half that.
Next question. What if the merging box is actually processing MIDI data? If part of the job of the box is deciding whether to pass the message or translate it into another one, it has to wait until it gets enough of that message before it can send it back out again. For example, if a MIDI mapper is splitting a keyboard by sending different key ranges out over different MIDI channels, it would have to wait at least until it received the second byte of a note-on message (the one which carries the key number) before retransmitting it - a 640-microsecond delay, not counting processing overhead. Some combination boxes care so much about this that if they detect that they are not going to be processing a signal that goes from one MIDI In to another MIDI Out, they don't even pass it through their processor, but wire it straight through from the optoisolator (JL Cooper's MSB+ is one such box).
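To make that forced wait concrete, here is a minimal Python sketch of such a keyboard-split mapper - my own illustration, not any particular product's firmware. The read_byte/send_byte helpers are hypothetical stand-ins for the UART, and running status and real-time bytes are ignored for brevity:

```python
SPLIT_POINT = 60   # middle C: lower keys go to channel 1, the rest to channel 2

def map_note_messages(read_byte, send_byte):
    """Retransmit note-ons on a channel chosen by key range."""
    while True:
        status = read_byte()
        if (status & 0xF0) == 0x90:      # note-on, any incoming channel
            key = read_byte()            # <- the unavoidable 320 us wait
            velocity = read_byte()
            channel = 0 if key < SPLIT_POINT else 1   # wire values for ch 1 and 2
            send_byte(0x90 | channel)    # status byte rewritten with new channel
            send_byte(key)
            send_byte(velocity)
        else:
            send_byte(status)            # pass everything else straight through
```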
Time to put this all into perspective. Several psychoacoustic studies pick the 10-15 millisecond range to be the one at which a musician can begin to perceive a delay. In the signal processing realm, this corresponds to chorusing. Shorter times - 2-10 milliseconds - enter the flanging range, and some musicians and manufacturers claim (through actual listening tests) this is enough to shake the "feel" of one instrument against a tight groove or another instrument. The transmission of a single MIDI message does not take long enough to induce this much delay, so any fault rests on just how good a MIDI processor is at passing on MIDI data. (Don't you wish minimum and maximum delay was a required part of the specification? Like frequency response and distortion are for audio devices? Me too.)
Unless, of course, there's...
TAKE YOUR RIGHT hand, and start bashing a five-note staccato chord of 64th-note triplets at 250 beats per minute. Have you outrun MIDI yet? No. But you're pushing it.
Now reach over with your left hand and start moving the modulation and pitch wheels/levers/grope controllers back and forth while doing that. Now have you outrun MIDI? Choke, gasp, clog... Maybe.
Now say you want five-note chords banging out 64th-note triplets on all 16 MIDI channels with a channel pitch-bending and modulating for good measure. Suddenly, you find your composition chugging along at somewhere around 15 beats per minute (no, that is not a typesetting error).
What's going on here? MIDI can only deal with information so fast - namely, 3125 bytes per second. Taking tricks like running status into account, you need on average 2-2½ bytes to define a MIDI message. This translates to about 1200 to 1600 messages a second before MIDI cries "no more", and simply won't transmit any more. In just the same way, in fact, that after a certain point, a sponge can't hold any more water.
That still seems like a lot, until we throw it against the 10-15 milliseconds it takes us to perceive a delay. That means we can fit roughly one to two dozen messages (note-ons, note-offs, individual pitch-bend, modulation, and channel pressure commands) into an "instant of time" (ie. a downbeat) before we notice something's going wrong - like a smearing, or something. This, along with our "too many hands at once" example above, is when and where we start to notice MIDI clogging. If too many messages try to fit down one MIDI cable and into an instant of time, somebody's got to wait.
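Worked through in a Python sketch (the figures are the article's; the code is my own), the clogging arithmetic looks like this:

```python
BYTES_PER_SEC = 31_250 // 10        # 3125 bytes/s down one MIDI cable
AVG_MSG_BYTES = 2.5                 # with running status
msgs_per_sec = BYTES_PER_SEC / AVG_MSG_BYTES        # 1250 messages/s

PERCEPTION_WINDOW_S = 0.010         # low end of the 10-15 ms threshold
print(msgs_per_sec * PERCEPTION_WINDOW_S)           # ~12 messages per "instant"

# The 16-channel example: 5-note chords of 64th-note triplets at 250 bpm.
notes_per_beat = 24 * 5 * 16        # 24 triplet 64ths/beat x 5 notes x 16 channels
msgs_per_beat = notes_per_beat * 2  # each note needs a note-on and a note-off
wanted_msgs_per_sec = msgs_per_beat * (250 / 60)    # ~16,000 messages/s
print(250 * msgs_per_sec / wanted_msgs_per_sec)     # ~20 bpm sustainable -
# and the added pitch-bend and modulation drag it down toward 15.
```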
Continuous controllers and the like tend to be the biggest culprits. A note is just one message, and one or two dozen notes are a lot to fall regularly on every beat. However, a pitch-bend or a pressure dig sends out dozens and dozens of messages to describe its actions. How fast it sends out these messages decides how bad the clogging gets. A synthesiser with a weak processor that's busy doing other things will tend to look at the wheels/benders/gropers only once every several milliseconds, not causing very many problems. But master controllers that have nothing else to do but sit around and send out MIDI have been known to clog MIDI off a pitch-bend alone.
How do we get around this? Hmmm. The "too many hands" issue can only be saved by multiple MIDI outputs on a sequencer. This splits up the load across several MIDI cables, thereby giving several times the bandwidth. Over-eager controllers can be curbed by preventing unwanted data from even being sent, and there are features on some sequencers (such as the one from Voyetra Technologies) that allow the user to "thin out" the MIDI controller stream. I would not be surprised to see ultra-fancy master controllers of the future that allow the user to pick the density of the controller data sent out, allowing a balance of smoothness of performance versus MIDI bandwidth to be struck.
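A controller-thinning feature of the kind just described might look something like this Python sketch - my own illustration, not any particular sequencer's algorithm. It assumes a stream of (time in milliseconds, bend value) pairs and rate-limits them, skipping changes too small to hear:

```python
def thin_bends(events, min_interval_ms=10, min_change=8):
    """Keep at most one bend per interval, and skip changes too
    small to hear even when the interval has elapsed."""
    kept = []
    for time_ms, value in events:
        if not kept or (time_ms - kept[-1][0] >= min_interval_ms
                        and abs(value - kept[-1][1]) >= min_change):
            kept.append((time_ms, value))
    return kept

# A dense 1 ms-resolution sweep collapses to a handful of events:
sweep = [(t, t * 100) for t in range(50)]
print(len(thin_bends(sweep)))   # 5 events instead of 50
```

Raising min_interval_ms trades smoothness of the bend for bandwidth - exactly the balance a future master controller might let the user strike.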
A musician and "friend of mine" testing a MIDI sequencer I wrote in a different life once complained of ragged timing of notes entered in step time. We determined that the keyboard was transmitting pressure every time he struck a note, and all of these pressure commands were piling up on the same "instant" of time - particularly if he held his finger on the key for a while after striking the note...
And I'm aware of one guitar-to-MIDI designer who is doing his master's degree thesis on 'The Psychoacoustics of MIDI', studying which pieces of information (pitch-bend, modulation, note attack, and so on) need to be heard in what order, so that the less critical information can be sent last and not interfere with the more sensitive stuff.
BROADLY SPEAKING, THE worst offenders are the synths and samplers themselves. That's right. Yer actual receivers of MIDI data (which also quite often happen to be its source) are the ones that introduce the largest so-called "MIDI delays".
Take this normal, and quite real, sequence of events involved in transmitting a MIDI note from one synth to another. You strike a key on a keyboard. Either the main microprocessor scans the keyboard every couple of milliseconds or so to see if you've done just that, or a slave processor figures that out and tells the main processor what happened. A flag gets set, or the note gets loaded into a buffer. Another subroutine comes around and asks, "do we have any new notes to process?" It unloads it, figures out what voice gets that new note, and loads it into a buffer to transmit over MIDI. Another subroutine actually starts the transmission.
So far we're talking about one to five milliseconds to get the note to the MIDI Out jack. It takes another millisecond to transmit it. On the other end, the receiver is loading this new message into a buffer. About as often as it checks its own keyboard, it checks to see if it has any new MIDI messages to deal with. From here, it sets a flag or loads another buffer, until the routine mentioned above comes around and says, "do we have any new notes to process?" Then it unloads the data, figures out which voice gets it, and sets some flags for another routine to actually start playing that voice (start the envelopes and so on). This normally takes between two and seven milliseconds, although I've heard one manufacturer say that the entire process can be as long as 15 milliseconds.
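Tallying the budget those two paragraphs describe (a Python sketch; the millisecond ranges are the article's own figures):

```python
# Note-to-sound latency budget, synth to synth.
stages_ms = {
    "sender: key scan to MIDI Out":        (1, 5),
    "wire: 3-byte note-on at 320 us/byte": (1, 1),
    "receiver: MIDI In to voice start":    (2, 7),
}
best  = sum(lo for lo, hi in stages_ms.values())
worst = sum(hi for lo, hi in stages_ms.values())
print(f"{best}-{worst} ms per note")   # 4-13 ms - already brushing the
# 10-15 ms perception threshold before any clogging enters the picture.
```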
The case tends to be worse if you're a sampler - the voices are harder to get started than on a normal synthesiser. You have to shut down the filter and amplifier envelopes gracefully (one to eight milliseconds), stop playback of the one sample, redirect the playback electronics to the start of another sample, and start its playback and envelopes up again.
IF YOU WANT to get a synthesiser or sampler manufacturer angry, go up to him or her and mention "MIDI 2.0". Aside from how hard-won compatibility within MIDI 1.0 has been, and the desire to avoid making obsolete several hundred thousand existing pieces of equipment (not to mention panicking their owners), it just wouldn't solve anything with today's (or tomorrow's, or next year's) level of affordable synthesiser computing power.
The thing most often mentioned in connection with MIDI 2.0 is a higher baud rate - in other words, increased bandwidth. The bandwidth issue is the only valid source of actual MIDI-induced "delay" uncovered above. All of the other delays come from insufficient processing power. A MIDI processor delaying a signal by three milliseconds does so because of insufficient processing power. A DX7 which exclaims "there has been a MIDI error" when hit with a bandwidth-full of pitch-bend does so because of insufficient processing power.
A good portion of the problem stems from the fact that a MIDI'd synthesiser or sampler has a lot to do other than just send or receive MIDI data. It has to look out for the user pressing buttons. It has to keep the voices playing back and envelopes moving along - that procedure alone usually takes up fully half of the processor's available resources.
It is true that raw data, with the proper hardware, can be transferred at rates up to 16 times the normal MIDI rate (such as the Emulator II's RS422 computer port). However, the instrument has no time to do anything with that data while receiving it at that speed. An implementation like that could result in a rather unexciting musical instrument.
Let me give a very real-world example. I was one of the central players in the faux pas of giving Sequential's Prophet 2000 the ability to transmit and receive MIDI data at twice the normal MIDI hardware specification (62.5Kbaud). Aside from being bluffed into believing that another major manufacturer was about to introduce two- and four-time rates on their machines, we wanted to see how fast we could transfer sample data over MIDI. The first barrier was the UARTs we were using - they made no promises above 100Kbaud. The second barrier was how long it took us to receive a byte from the UART, buffer it, unload it, and process it - this fell very roughly around 70-80 microseconds with the 2MHz 6809 processor we were using. Under normal conditions, a MIDI byte takes 320 microseconds to arrive, which left us 240 microseconds to process it. However, the processor was spending half of its available time keeping up voice playback, leaving 160 microseconds genuinely free - and after the 70-80 microseconds needed to handle the byte itself, only about 80 microseconds to do everything else other than MIDI.
Once you double the baud rate, you are caught in double jeopardy - for a given period of time, you have twice as much data to process, and are spending twice as much time receiving it. At double baud, a byte comes in every 160 microseconds. The processor wants 80 of that to keep up the voices, and you need 70-80 of that just to receive the byte. If data is coming in at full bandwidth, you have less than 10 microseconds (20 clock ticks of a 2MHz processor, with the most primitive actions taking at least a couple apiece) for every byte to do something with it - along with keeping up everything else. Who has time for more data?
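That squeeze reduces to a short calculation. Here is a Python sketch using the article's own figures for the Prophet 2000's 2MHz 6809 (the 75-microsecond receive cost is the midpoint of the 70-80 range quoted above):

```python
def free_time_per_byte_us(baud):
    """Microseconds left per received byte after voice upkeep and
    the raw work of fetching the byte from the UART."""
    byte_period_us = 10 * 1_000_000 / baud  # full-bandwidth byte spacing
    voice_upkeep_us = byte_period_us / 2    # half of all CPU time runs the voices
    receive_cost_us = 75                    # ~70-80 us to receive/buffer/unload
    return byte_period_us - voice_upkeep_us - receive_cost_us

print(free_time_per_byte_us(31_250))   # ~85 us left for everything else
print(free_time_per_byte_us(62_500))   # ~5 us - barely 10 ticks of a 2MHz 6809
```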
More powerful processors are becoming available - an 8MHz 68000 (main processor in the Macintosh, Amiga and Atari ST), for example, is becoming popular. However, they still cost several times more than a lowly Z80 or 6809, and can only process up to twice as many primitive instructions as their weaker counterparts in a given period of time.
Software slyness can buy a bit more performance, but there is a brick wall lurking nearby - and quite often this extra processing power gets thrown at new and more features as opposed to MIDI bandwidth. Throw in some people's demands that a "MIDI 2.0" include abilities such as telling the sender that it indeed got or is willing to receive a message (who has time for that noise?) along with all of our other reasons, and I can assure you that a MIDI 2.0 is not around the corner.
But why is any of this a problem in the first place? A very good question. Do you think three or four guitarists can land six-string power chords within 15 milliseconds of each other? I think not. So, why is it a problem that MIDI has trouble doing it?
Along with drum machines and MIDI'd sequencers came today's fashion for multi-layered, tightly played, tightly sequenced material - they're what made it possible. This form of music would not exist in (over)abundance without MIDI, and it is this form of music that highlights MIDI's timing weaknesses. Technology begets its own worst enemy - a twist on the old chicken-and-egg dilemma.
However, given that you either cannot or will not change the style of music you're playing, there are a number of things you can do to ensure MIDI delays don't make themselves apparent in your work.
Use multiple MIDI outputs. Don't use continuous controllers excessively. Compliment manufacturers whose machines react faster than others. Don't wait on MIDI 2.0. Take mellowness lessons. And castrate the next person who tries to tell you not to use too many MIDI Thrus because each one has a three-millisecond delay.