Sounding Off
Karl Steinberg & Mark Badger | Article from Sound On Sound, January 1989
Charlie Steinberg and Mark Badger define their concept of what should and what shouldn't be called a 'MIDI workstation'.
What is a 'Workstation'? And what is a 'MIDI Workstation'?? According to some synthesizer manufacturers, their machines are 'workstations'. So is a workstation a keyboard? Could it not be something a whole lot more?
If we look at the computer world, a workstation means some sort of terminal with inputs like a QWERTY keyboard and a mouse, and outputs like a screen and printer. There are probably also some connections to the 'outside world' in the form of a modem or a network link to other terminals. The purpose of these communication links is to allow a number of terminals to access the same information and to distribute the workload of the tasks being accomplished. Every element of the system is designed to provide an easy and efficient workplace for the individuals who use it. Thus, a workstation is not only the means of communication between a mechanical assistant and its human operator, it is also the means of communication between the different operators at the various points on the network. This distribution of information, and thus of the work at hand, enables the job to proceed at a more efficient pace.
This situation could be the same when considering a 'MIDI workstation': all the requirements of a professional MIDI user handled with the greatest possible overview and ease of access at all times. It should be noted that most MIDI tasks are real-time applications, and the user may need access to several functions or programs on the same terminal. The terminals would thus be multitasking, distributing information and workload to other terminals on the network as required.
Let's take a look at a typical modern studio. A MIDI sequencer will normally provide the main MIDI system control, from a single computer terminal. The sequencer is probably synchronised to a SMPTE synchroniser, which is in turn locked to a multitrack tape machine. The tape machine is probably under the control of a mixer automation system, running on a second computer terminal at the mixing desk, also synchronised to SMPTE timecode. If you are synching to 'picture', then there will be a third machine system for the video, also synchronised to SMPTE and controlled via a third computer terminal.
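Everything in this chain hinges on a shared notion of absolute position. As a small illustration of the common ground (a sketch of the underlying arithmetic, assuming 25 frames per second EBU code rather than any particular synchroniser's firmware), converting a SMPTE address to a running frame count and back is what lets the synchroniser, sequencer, and automation system agree on where the tape is:

```python
# A sketch of the arithmetic behind SMPTE synchronisation: turning a
# timecode address into an absolute frame count. 25 fps (EBU) is
# assumed; drop-frame formats need extra handling, omitted here.

FPS = 25

def smpte_to_frames(hours, minutes, seconds, frames, fps=FPS):
    """Absolute frame count for an HH:MM:SS:FF timecode address."""
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

def frames_to_smpte(frame_count, fps=FPS):
    """The inverse: an absolute frame count back to HH:MM:SS:FF."""
    seconds, frames = divmod(frame_count, fps)
    minutes, seconds = divmod(seconds, 60)
    hours, minutes = divmod(minutes, 60)
    return hours, minutes, seconds, frames

position = smpte_to_frames(0, 1, 23, 5)      # address 00:01:23:05
print(position, frames_to_smpte(position))   # 2080 (0, 1, 23, 5)
```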
Most of the work proceeds on a sort of 'shuttle' basis, going over particular sections repeatedly until things sound good. Herein lies the basis for the 'cycle' mode found on most MIDI sequencers and the autolocators on tape machine controllers.
In the studio, as long as the master tape is running and all the machinery is properly synchronised, everything can proceed under automatic control. However, tape machines have to shuttle through quite long strips of tape in order to play sections of audio, and waiting for them to get to the right point can be a bore. On every drop-in, there is a chance that some element in the system has not yet caught up and is about to 'throw a wobbly'.
If patch changes or MIDI controlled mixing events are not 'in sync' with the tape locator positions, then the synths and effects units are left on the wrong patch when dropping in. Often the necessary MIDI messages must be specially inserted into the piece, slowing the creative flow. These controls could be operated from the mixer automation system. But, due to the physical limitations of tape transports, users will probably prefer to work in 'internal sync' mode on the MIDI sequencer when dealing with the MIDI parts, so it's usually easier to exercise control from the sequencer.
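To make the drop-in problem concrete, here is a sketch of the 'chase' logic a sequencer needs when locating into the middle of a piece (the event layout and function name are our own illustrative assumptions, not any particular sequencer's internals): scan everything before the locate point and re-send the most recent Program Change on each channel, so the synths land on the right patch.

```python
# A minimal sketch of 'event chasing' on locate. The event layout
# (tick, channel, kind, value) is an illustrative assumption.

def chase_events(events, locate_tick):
    """Return the latest Program Change per MIDI channel falling
    before the locate point, so they can be re-sent on drop-in."""
    latest = {}  # channel -> program number
    for tick, channel, kind, value in events:
        if tick >= locate_tick:
            break                  # events are assumed sorted by tick
        if kind == "program_change":
            latest[channel] = value
    return latest

# Locating to tick 1920 (bar 2, at 960 ticks per quarter note, 2/4):
events = [
    (0,    0, "program_change", 12),   # piano on channel 1
    (0,    1, "program_change", 40),   # strings on channel 2
    (960,  0, "program_change", 25),   # channel 1 switches patch
    (3840, 1, "program_change", 48),   # beyond the locate point
]
for channel, program in chase_events(events, 1920).items():
    print(f"re-send Program Change {program} on channel {channel + 1}")
```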
Unfortunately, while the MIDI sequencer acts as the 'master', the mix automation system must be left inert as there is nothing for it to sync to - not many desk automation systems synchronise to MIDI Song Pointers! This means that there will be no picture either. You could always sync to the video by changing the SMPTE sync patching! And anyway, what happens to the SMPTE-based automation if we want to change the tempo of the music?
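That last question is easy to put in numbers. A sketch of the collision, under the deliberately simple assumption of a single fixed tempo (the function name is ours): an automation event stored at an absolute SMPTE time stays put on the clock, while the bar it was meant to hit moves the moment the tempo changes.

```python
# A sketch of why SMPTE-anchored automation and tempo changes
# collide. One constant tempo is assumed for simplicity; a real
# sequencer would consult a full tempo map.

def bar_start_seconds(bar, bpm, beats_per_bar=4):
    """Absolute time at which a given bar (counted from 1) begins."""
    beats = (bar - 1) * beats_per_bar
    return beats * 60.0 / bpm

# A mix event placed at the start of bar 9 while working at 120 bpm,
# stored by the automation system as an absolute time:
event_time = bar_start_seconds(9, 120)    # 16.00 seconds

# The writer then decides the song should run at 126 bpm:
new_bar_9 = bar_start_seconds(9, 126)     # about 15.24 seconds

print(f"event still fires at {event_time:.2f}s, "
      f"but bar 9 now starts at {new_bar_9:.2f}s")
```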
If you think about that for a moment, you may begin to appreciate how using such systems can be likened to constructing a house of cards. As long as you keep a steady hand you can build layer after layer of sound decisions yet retain as much flexibility as possible. However, certain processes make decisions for you. There can be no turning back once you have married the advantages inherent in different ways of working (audio on tape along with real-time MIDI control, for instance, where SMPTE timecode and Clock-based tempo collide).
So, a MIDI workstation must be able to overcome these limitations by exchanging information between a number of different machines, allowing them to respond in 'harmony' to the requirements of the people using them. In the simplest case, the control functions would all occur on one computer terminal running several programs simultaneously (sequencer, mix automation, synth and sample editors, tape controllers). The programs would exchange data in order to behave sympathetically, and blast out the appropriate MIDI control signals to the system.
"...most MIDI tasks are realtime applications, and the user might need use of and access to several functions or programs on the same terminal. The terminals would thus be multitasking..."
To extend our MIDI workstation concept to something closer to its computer-world counterpart, control could be shared amongst several operators working at their own terminals by way of a Local Area Network (LAN). For economy's sake, we would be sorely tempted to use the MIDI connection itself to link the terminals.
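How might terminal-to-terminal traffic actually travel down a MIDI cable? One plausible scheme - our own assumption for illustration, not an existing protocol - is to wrap each network packet in a System Exclusive message, re-packing the 8-bit payload into 7-bit bytes, since SysEx data bytes may not have the high bit set.

```python
# A sketch of carrying arbitrary network data over MIDI by wrapping
# it in a System Exclusive message. The framing is our assumption,
# not an existing standard; 0x7D is the non-commercial SysEx ID.

SYSEX_START, SYSEX_END = 0xF0, 0xF7
NON_COMMERCIAL_ID = 0x7D

def pack_7bit(data):
    """Re-pack 8-bit bytes as 7-bit values: each group of up to 7
    bytes is preceded by a byte collecting their stripped top bits."""
    out = []
    for i in range(0, len(data), 7):
        group = data[i:i + 7]
        top_bits = 0
        for j, b in enumerate(group):
            top_bits |= ((b >> 7) & 1) << j
        out.append(top_bits)
        out.extend(b & 0x7F for b in group)
    return out

def make_packet(payload):
    return [SYSEX_START, NON_COMMERCIAL_ID, *pack_7bit(payload), SYSEX_END]

packet = make_packet(b"locate 00:01:23:05")
print(" ".join(f"{b:02X}" for b in packet))
```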
Each of the terminals could run just those programs specific to the requirements of that user, yet the programs would share important common information via the network. In a MIDI environment, this would allow one person to adjust a patch while another does the mix, a third writes a sequence, and a fourth sets up the sampler. Each could know where the others are, and their tool - the computer terminal they are working at - could respond accordingly. If required, the same level of control could alternatively be exercised from one terminal alone.
It becomes obvious that a system capable of supervising the actions of all these controls is necessary - a Multitasking Real-time Operating System. Such systems are well known in the computer world, Unix being a popular example. In the audio world we are forced to rely on expensive monolithic machines which seek to provide a full range of audio facilities. The MIDI standard itself is severely limited by the simplicity of its Start, Stop, Song Pointer and Continue messages, and the coarse resolution of MIDI Clock data! Yet the economy and potential of MIDI control are characterised by the oft-seen Atari ST control of a Fairlight's sampling facilities.
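That coarseness is worth quantifying. A worked sketch using figures from the MIDI specification (24 clocks per quarter note, Song Position Pointer counted in 16th notes), at an assumed tempo of 120 bpm:

```python
# A sketch of MIDI sync resolution versus SMPTE, using figures from
# the MIDI specification: 24 clocks per quarter note, Song Position
# Pointer in 16th-note steps. The 120 bpm tempo is an assumption.

bpm = 120
quarter = 60.0 / bpm                 # 0.5 s per quarter note

clock_tick = quarter / 24            # one MIDI Clock pulse
song_pointer_step = quarter / 4      # one 16th note
smpte_frame = 1.0 / 25               # one frame of 25 fps EBU code

print(f"MIDI Clock pulse:     {clock_tick * 1000:6.2f} ms")
print(f"Song Pointer step:    {song_pointer_step * 1000:6.2f} ms")
print(f"SMPTE frame (25 fps): {smpte_frame * 1000:6.2f} ms")
```

At that tempo a Song Pointer step spans roughly three frames of picture, which is why locating via Song Pointers feels so much blunter than locating by timecode.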
What will the future bring to MIDI users? Tone synthesis by multi-mode expanders seems to be the order of the day, with even more voices on offer tomorrow! The potential of mix automation via MIDI is only just beginning to be explored and the range of powerful effects on tap seems to grow by the hour. Algorithmic exploration of tonal sequences will no doubt provide fertile ground for research, and when the same intelligence is applied to mix events we will witness yet more stimulating developments. Digital audio techniques hold out the prospect of manipulating recorded material directly by time-flexing, volume control, and pitch adjustments, further blurring the division between reality and synthesis through the use of tonal analysis.
Theoretically, the management of complex audio systems will become easier, more versatile, and further centralised through the use of several specialised controllers which can share data. For the studio, larger networks can be envisaged, with several workers allowed access to important functions of the system yet retaining control over their own specific operations. This would mean that parameters such as SMPTE-based sync, high clock rate tempo timing, generic machine control, channelised cue listings and various offsets, not to mention the routing of system inputs and outputs, would all have to be accessed in an orderly and compatible manner.
Can the transmission of this information between various terminals be accommodated on a communication network such as MIDI? If it cannot, and we can identify parameters for communication which are not provided for in the MIDI system, does this mean that the time has come to seek new pastures? If what we need are multitasking network terminals, is it time to throw our STs, Macs, Amigas and PCs into the bin and get a mortgage on a Sun workstation?
Karl 'Charlie' Steinberg is the head of Steinberg Research in Germany, and creator of the famous Pro24 sequencer for the Atari ST.
Mark Badger used to be a regular contributor to SOS until he realised he could earn more money working for Steinberg! He now runs the Steinberg Hotline and acts as a product specialist in the UK.