Introducing the MIDI
A look at the new, exciting Musical Instrument Digital Interface
In less than two decades the music synthesiser has developed from its monophonic origins into a fully integrated, microprocessor-controlled instrument. During this time manufacturers have all had their own ideas of how control voltage and/or trigger signals should be implemented, which has made it almost impossible to interface different machines directly without some form of conversion circuitry to cope with incompatible signal levels or polarities.
The fabrication of integrated circuits dedicated to electronic music production by Solid State Music (SSM) and Curtis Electro-Music (CEM) has helped this situation. Control voltages of 1V/octave and positive-going triggers between 5 and 15V have become the standard for any manufacturer using these devices. However, this does not help in the case of the current microprocessor-controlled polyphonic machines. The complex algorithms used prohibit direct connection, and each manufacturer again tends to produce dedicated interfaces which connect only to its own specific controllers.
The problems of compatibility, along with the advent of the home computer with its musically creative possibilities, have finally forced instrument manufacturers to make their products comply with an industry-standard specification and thereby protect their equipment from obsolescence.
The Musical Instrument Digital Interface (MIDI) is such a specification, developed by leading manufacturers over the last few years. It does not dictate instrument design but merely specifies a language which carries meaningful information between instruments.
What does this all mean to the musician? The purpose of the specification is to allow synthesisers, other electronic keyboards, sequencers, drum machines and home computers to be linked in one programmable system. The useful lifetime of equipment is therefore also extended. Some of the exciting possibilities are as follows:
Synthesisers can be configured 'in parallel' with instruments played simultaneously or remotely.
Entire compositions, consisting of monophonic and polyphonic sequences and rhythm can be played at a touch.
Computer terminals can be used for composing, sequence creation and editing.
Graphic quality printers can produce the 'hardcopy' manuscript of an improvisation or composition.
Video synthesis can be integrated with music synthesis.
Musical education such as reading music, scale recognition, and ear training can be automated.
Sequential Circuits Inc first became interested in microcomputer interfacing in conjunction with the design of the Prophet-10 polyphonic and its internal polyphonic sequencer. The Prophet and its sequencer were each based on Z-80 microcomputers. To record, as notes were played, every few milliseconds (at a rate set by the sequencer clock), the Prophet would send its complete keyboard 'status' to the sequencer. The sequencer had to figure out which notes were going on and off, and record these events in reference to the clock count. On playback, the sequencer computer also sent the complete keyboard status every clock pulse, with events as counted out by the clock. The Prophet would play these notes just as if they came from its own keyboard. Later, this sequencer was made available as an accessory for the Prophet-5. The Prophet-5 Remote Keyboard was also developed, which used this interface. SCI published the data protocol upon which this interface was based, in the hope that the programming public would be encouraged to develop their own interfaces for the Prophet-5.
This did not occur, apparently because in being conceived for a specific application, the interface was very fast but too clumsy for general-purpose use. It was criticised as requiring too much programming 'overhead,' in the constant transmission of meaningless keyboard information. As a result of this experience, SCI resolved to pursue a more streamlined interface that would be easier for programmers to work with.
In the meantime, occasional discussions between the presidents of Sequential Circuits, Oberheim Electronics and Roland (Dave Smith, Tom Oberheim and Ikutaro Kakehashi) also revealed a shared interest in the interface problem and development of an interface widely acceptable to the industry.
Smith then outlined a specification for a 'Universal Synthesiser Interface' (USI). It was developed with the assistance of SCI's Chet Wood and presented at the Autumn 1981 convention of the Audio Engineering Society (AES).
The USI differed markedly from the earlier SCI Digital interface in that rather than being polled at the sequencer clock rate, information was only sent when an event actually occurred - for example, a note going on or off. The USI was proposed to be serial, operating at 19.2 kBaud, with TTL levels, and connected through phone jacks.
After incorporating changes in response to comments from AES, Smith sent a questionnaire to all manufacturers and industry consultants he could find, asking for their suggestions and any special requirements. There was a strong response to this initiative; some saying, for example, that it would not be possible to do it serially, that a parallel interface was necessary. Others thought the proposed serial speed too fast for operation with home computers. Many other issues were raised.
All respondents were invited to a conference to coincide with the January 1982 Western National Association of Music Merchants (NAMM) convention in Anaheim. This meeting was attended by representatives from SCI, Roland, Oberheim, CBS/Rhodes, Yamaha, E-mu, Unicord (Korg), Music Technology Inc., Kawai, Octave Plateau, Passport Designs and Syntauri. Other manufacturers seemed to be maintaining a 'wait-and-see' policy.
At this meeting the chief changes which occurred to the USI were to add opto-isolation to prevent audio ground loops, and to increase the speed to 31.25 kBaud.
Following the USI discussion at Anaheim, an alternative specification was presented by some of the Japanese companies which had grown out of their own research. Whereas the USI was basically content to specify note on/off codes, this new proposal went on to define many more complex operations. It also offered a different data structure, with status and data bytes being flagged by bit 7 (1=status, 0=data). This greatly simplified the protocol by eliminating all the checks which were otherwise needed to distinguish the data category. With the most significant bit now defined as a 'flag', data is thereby limited to 7 bits, but this is sufficient for most synth data, and when not, can simply be sent as multiple 4-bit nibbles.
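The nibble scheme mentioned above can be sketched in a few lines. This is a minimal illustration, not code from the specification: the function names and the least-significant-nibble-first ordering are assumptions made here for clarity.

```python
def pack_nibbles(value: int, nibble_count: int = 4) -> list[int]:
    """Split a wide value into 4-bit nibbles (assumed least significant
    first). Each nibble travels in its own data byte, so bit 7 - the
    status flag - always stays clear."""
    return [(value >> (4 * i)) & 0x0F for i in range(nibble_count)]

def unpack_nibbles(nibbles: list[int]) -> int:
    """Reassemble the original value from least-significant-first nibbles."""
    value = 0
    for i, n in enumerate(nibbles):
        value |= (n & 0x0F) << (4 * i)
    return value
```

A 16-bit value such as 0xABCD thus becomes four data bytes, each comfortably inside the 7-bit limit.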
After the Anaheim meeting, Smith and Wood integrated the USI and Japanese proposals, forming the first MIDI specification. This was sent to all of the meeting participants but, curiously, provoked no further comment from this continent. The final document was therefore arrived at after several exchanges between SCI and Roland, which is serving as liaison with Yamaha, Korg and Kawai.
To simplify cabling between instruments, the interface is serial. It operates at 31.25 kBaud (thousands of bits per second), asynchronous. This is considered a high speed for serial operation - in comparison to the typical RS-232 maximum of 19.2 kBaud - but it was chosen to prevent objectionable delays between equipment. The 31.25 kHz clock can also be easily obtained from hardware, for example, by dividing 1 MHz by 32. One serial data byte consists of a start bit, 8 data bits (D0 to D7), and a stop bit - for a total of 10 bits transferred in 320 microseconds (us).
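The timing arithmetic above is easy to verify. The three-byte figure for a Note On message anticipates the channel message format described later in the article:

```python
BAUD = 31_250            # bits per second on the MIDI line
BITS_PER_FRAME = 10      # start bit + 8 data bits + stop bit

# Time to transfer one serial byte, in microseconds.
frame_time_us = BITS_PER_FRAME * 1_000_000 / BAUD   # 320.0 us, as stated

# A complete Note On message (status + key number + velocity)
# therefore occupies just under a millisecond on the wire.
note_on_us = 3 * frame_time_us                      # 960.0 us
```

At 19.2 kBaud the same frame would take about 521 us, which is part of why the higher speed was chosen.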
Physically, MIDI appears as two or three jacks on the instrument. See Figure 1, the hardware schematic. The connectors are DIN 5-pin (180 degree) female panel-mount receptacles. DIN connectors were agreed to by US manufacturers because it was felt that they are now widely available there. However, the specification does provide that a manufacturer can use XLR connectors, if the firm makes available all necessary conversion cables.
The two required jacks are MIDI OUT and MIDI IN. The transmitter data typically originates in the instrument's UART. The interface circuit is a 5-mA current loop, designed especially to prevent the formation of audio ground loops which often develop in complex systems. The output is normally meant to drive only one input. If transmit data is low (0), current flows from Vcc (+5V) through Ra, over pin 4 of both connectors, through the opto-isolator, returns over pin 5, then through Rc. The opto-isolator output is normally pulled high by Rd. However, when current flows through the internal LED, the isolator output switch turns on, grounding Vo, thus sending a low to the receiver UART. When data is high, the LED does not light. The receiver UART therefore sees a high. D1 protects the opto-isolator from reverse-polarity currents which may result from transmitter anomalies.
Interconnecting cables should not exceed fifty feet (15 meters), and must have a corresponding 5-pin DIN male plug. The cable should be shielded twisted pair, with the shield connected to pin 2 at both ends. Notice that while the MIDI OUT jack is grounded to the instrument chassis, MIDI IN is not. This allows the cables to provide their shielding services without creating ground loops.
The optional third jack, MIDI THRU, provides a direct copy of data coming in at MIDI IN. It is included when the manufacturer intends the instrument to operate in a 'chain' or 'loop' network, as opposed to a 'star' network.
The first thing to realise about MIDI is that the total control features available still depend on the design of each specific piece of equipment. MIDI does not magically transcend equipment limitations or differences. Rather it merely enables them to 'communicate' at their 'least common' level. For example, specific programmed sounds can't be transferred directly between different models of synthesisers because of inherent differences, but keyboard information and program selections can be communicated.
One of MIDI's design goals was to be simple enough so that you could connect any polyphonic synthesiser to any other, or to a sequencer, and at the very least the notes would be correctly played or stored. This would be possible with virtually no other action on the part of the user. Above this minimum, each instrument may or may not include further facilities for complex control options.
Each type of equipment has different minimum requirements. For synthesisers, minimal usefulness seems to include remote control and program switching. While polyphonic sequencers send and receive keyboard data, they may or may not be interested in program changes. Monophonic sequencers can only deal with individual lines, so keyboard data must somehow be different for them. Drum units don't usually care about specific keyboard notes, but may need to synchronise to their timing, or to the sequencer, and perhaps react to program changes as well.
While most of these requirements and useful control options can be foreseen, the number of possible interconnections cannot. Therefore, while the specification says that each transmitter will drive one and only one receiver, provision has been made so that any specific instrument or synthesiser voice on the MIDI bus can be addressed, regardless of the interconnection scheme. This is accomplished by assigning up to 16 channels under increasingly powerful (and complex) modes.
Each unit connected to the MIDI bus has separate transmit and receive ports. There are three modes of operation for transmitters and receivers: Omni, Poly and Mono. Omni mode is the most general level of operation, interfacing all units. Poly mode allows each unit (synth, sequencer, or drum box) to be addressed separately. Mono mode is the most specialised, allowing individual addressing of (for example) each synthesiser voice.
Normally, transmitters will periodically send out a Mode Select command for the most powerful mode to which they can be configured. However, the actual data transmitted will be in the mode to which a second transmitter may have switched the receiver. For example, Synth A by default transmits in Omni mode to Synth B. Synth B, being capable of Poly mode operation, periodically transmits Poly Mode Select codes to Synth C. But the data sent from Synth B to C will be in Omni format (because Synth B's receiver is constantly getting Omni Mode Select commands from Synth A). Synth C may or may not respond to the Poly Mode Select commands from Synth B: if a receiver is capable of operating in the requested mode, it switches to that mode; otherwise, it ignores the command. (Note that Mode Select commands double as 'All Notes Off' commands, and therefore can only be sent while all notes are off, or when it is desired to turn all notes off.)
At power up or reset, all instruments default to Omni mode. See Figures 2 and 3. Regardless of the system configuration, Omni transmitters always send polyphonic data on Channel 1. Omni receivers respond to Note On/Off Events sent over any channel (1-16). These notes are handled according to the internal assignment scheme of the synthesiser. So this configuration allows any number of polyphonic synthesisers to play in parallel, as soon as they are interconnected.
A receiver's mode can only be changed by a Mode Select command transmitted in the channel(s) to which it is currently assigned. If the receiver is not capable of operating in the requested mode, it ignores the Mode Select command. No unit may switch its own modes. Even though a receiver in Omni mode receives in all channels, it will respond to Mode Select commands in only one channel: the one to which it is assigned.
Receivers and transmitters without channel selection capability are always assigned by default to Channel 1.
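The receiver's side of these mode rules can be sketched as a small state machine. This is purely illustrative: the class and attribute names are inventions of this sketch, and the choice to clear sounding notes even when a mode request is refused is an assumption drawn from the 'All Notes Off' behaviour described above.

```python
OMNI, POLY, MONO = "omni", "poly", "mono"

class Receiver:
    """Sketch of a MIDI receiver's mode-select behaviour (names assumed)."""

    def __init__(self, basic_channel=1, capabilities=(OMNI,)):
        self.channel = basic_channel   # units with no selector default to 1
        self.capabilities = set(capabilities)
        self.mode = OMNI               # all instruments power up in Omni
        self.notes_on = set()          # currently sounding note numbers

    def mode_select(self, channel, requested_mode):
        # Even in Omni mode, Mode Select is honoured only on the one
        # channel to which the receiver is assigned.
        if channel != self.channel:
            return
        # Mode Select doubles as All Notes Off (assumed to apply even
        # when the requested mode is not supported).
        self.notes_on.clear()
        # Switch only if capable of the requested mode; otherwise ignore.
        if requested_mode in self.capabilities:
            self.mode = requested_mode
```

A receiver incapable of Mono, for instance, simply stays in its current mode when a Mono Mode Select arrives, which is why Synth C in the earlier example "may or may not respond".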
Poly mode allows individual addressing of each unit. In other words, the master controller can send separate parts to each synth, whereas in Omni mode they all played the same part.
As shown in Figure 4, the master controller in the chained network sends all commands, which are encoded with their destination channel number, over one line. This requires that each unit include an address selector switch to define its channel of operation.
The channel definitions having been made, the master controller must issue the command to the receiver on that channel to switch to Poly mode. Thereafter, the receiver listens for keyboard data encoded with its channel number. Any number of notes can be sent, to which, again, the polyphonic synth will respond according to its own priorities.
Poly mode will be useful for sequencing multi-part arrangements of standard synths, for example, which can't be done in Omni mode.
When a synthesiser has Mono capability, and it receives a Mono Mode Select command, it configures itself to receive on the channel it is assigned to and above, up to the number of voices it has. For example, the Prophet-T8 in Mono mode will transmit and receive on Channels 1-8. (Future synthesisers could contain more elaborate channel selection capability).
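The channel range a Mono-mode synth occupies follows directly from its basic channel and voice count. A minimal sketch (the function name and the clamp at Channel 16 are assumptions of this illustration):

```python
def mono_channels(basic_channel: int, voice_count: int) -> range:
    """Channels a Mono-mode synth uses: its assigned (basic) channel
    and upward, one channel per voice. MIDI channels run 1-16, so the
    range is assumed to stop at 16."""
    top = min(basic_channel + voice_count - 1, 16)
    return range(basic_channel, top + 1)
```

An eight-voice instrument assigned to Channel 1, like the Prophet-T8 example, thus occupies Channels 1 to 8; the same instrument assigned to Channel 12 would run out of channels at 16.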
Channeling each voice provides fast transfer of individual pressure (also called 'after touch') data for each key. It also makes true legato possible, because the note value (=voice pitch) can be changed without having to first turn the note off (as in Poly mode).
There are five categories of MIDI data: Channel, System Common, System Real Time, System Exclusive and System Reset.
Each data category encompasses a number of 'status bytes' which define specific commands under that category, and which precede data bytes specifying the exact operation. Status bytes are distinguished from data bytes according to whether the most significant (MS) bit is set (1=status) or reset (0=data). The status bytes under each category are defined below. Note that any data sets (e.g. Note On event data) which are sent successively under the same status can be sent without a status byte, until a different status byte is needed.
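The bit-7 flag and the 'running status' shortcut just described make stream parsing straightforward. A sketch follows; the function name is invented here, and only three message types are handled (Note Off 0x8n, Note On 0x9n and Program Change 0xCn, with their byte counts taken from the published MIDI specification):

```python
def parse_stream(data):
    """Group raw MIDI bytes into (status, data_bytes) messages, honouring
    running status: data bytes arriving without a fresh status byte are
    interpreted under the previous one."""
    # Data-byte counts for the handled status types (high nibble).
    lengths = {0x80: 2, 0x90: 2, 0xC0: 1}
    messages, status, pending = [], None, []
    for byte in data:
        if byte & 0x80:                 # bit 7 set: a new status byte
            status, pending = byte, []
        else:                           # bit 7 clear: a data byte
            pending.append(byte)
            if status is not None and len(pending) == lengths[status & 0xF0]:
                messages.append((status, tuple(pending)))
                pending = []            # keep `status` for running status
    return messages
```

Note how the middle Note On in the test stream below carries no status byte of its own, yet is decoded correctly.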
Channel information performs most of the routine work. Commands are addressed to specific channels by a 4-bit number which is encoded into the status byte. The associated data bytes can identify keys going down (on) and up (off), their on or off velocities, and pressure or 'after-touch' (on keyboards so equipped).
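Packing the 4-bit channel number into the status byte looks like this for a Note On event. The function is a sketch, not spec code; the 0x9n status value for Note On comes from the published MIDI specification:

```python
def note_on(channel: int, key: int, velocity: int) -> bytes:
    """Build a Note On message. Channels 1-16 map to 0-15 in the low
    nibble of the status byte; key and velocity are 7-bit data bytes."""
    assert 1 <= channel <= 16, "MIDI channels are numbered 1-16"
    assert 0 <= key < 128 and 0 <= velocity < 128, "data bytes are 7-bit"
    return bytes([0x90 | (channel - 1), key, velocity])
```

Middle C at moderate velocity on Channel 1 is thus the three bytes 0x90, 0x3C, 0x40.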
System Common, Real Time, and Reset information is intended for all channels in a system. System Common information identifies song selections and measure numbers for all units. Real Time information is used for synchronizing everything (perhaps to a master sequencer). Therefore, Channel and System Common information is interruptible by System Real Time information.
System Exclusive information allows the exchange of data which can be formatted as the manufacturer wishes. Only devices which recognise the manufacturer's format will attend the exchange.
Reset simply initialises all equipment to power-on condition.
The five categories are ordered in Table 1 according to their utility.
The MIDI is one of the most important and powerful developments in electro-music technology. Not only does it allow machines to communicate with each other, but it also allows the instrument to become a peripheral of a computer system, unleashing the tremendous power that such a set-up can offer.
Our thanks to Sequential Circuits Inc., for allowing us to reproduce text from their MIDI specification.