The Future of Keyboard Technology

A Round-Table Discussion With Major Japanese Instrument Manufacturers

The idea of putting a group of rival Japanese manufacturers together in a room to openly discuss their future directions may seem more akin to an explosive science experiment than a realistic idea. Gregory D. Moore reports on the outcome.


The idea of putting a group of rival Japanese manufacturers together in a room to openly discuss their future directions seemed more akin to an explosive science experiment than a realistic idea. However, with the coordinated efforts of Keys magazine, MMI, and Talk Studio, the MIDI round-table was organised. As with many science experiments, this one turned out to be a great deal of fun and much was learned. Some of the manufacturers went back to their companies with new ideas for the next generation of products, and I think we all felt a sense of accomplishment at having taken the first step toward bridging the gap between musicians and manufacturers.

Unfortunately, at the last minute, Yamaha informed us that their representative would be unable to attend. Still, the important thing to realise is that beyond the language barrier and the 'faceless companies' there are people who are every bit as eager, concerned, and willing to listen and discuss the future directions of musical instrument technology.


What new directions and developments in MIDI instrument design will we see in the future?

ROLAND
Mr. Haruo Noriyasu
Chairman of Japan MIDI Standards Committee and Managing Director, Roland Corporation.

Mr. Noriyasu (Roland): Basically, manufacturers should develop instruments that can be used without detailed knowledge of MIDI. When using computers to make music today, musicians must have a good understanding of MIDI to achieve the most musical results.

When designing new instruments, we must first decide whether the instrument is to be controlled by MIDI sequencers or to be played in real time. These are two completely different animals. You just can't think of everything as being an 'instrument'. Instruments used to be played only in real time. Now, with MIDI sequencers, the situation is different. Time itself is a component of music and if you remove the time factor, which is what MIDI allows us to do, you must stick your hands into MIDI.

Mariko Oki (Talk Studio): So is it essential from now on for musicians to become thoroughly knowledgeable about MIDI?

Mr. Noriyasu (Roland): Well, with a real-time performance instrument, the musician has a keyboard or some other controller hooked up to a sound generating box and there is absolutely no need to be knowledgeable about MIDI in this situation. It could even be a 'black box', it doesn't matter whether it utilises MIDI or not. It is only when you wish to break away from real-time playing, to alter the performance a little here and there, that you must get involved with MIDI. This is a completely different world of music-making. It is not essential that musicians get involved at this level to make music.

Well, that is not necessarily true. With many of today's MIDI keyboard instruments it is often impossible to control dynamic expression from the front panel! One of the most common methods of controlling dynamic expression is how hard you hit the key; after the key is depressed, there is little control. Foot controllers and key pressure (aftertouch) offer only coarse and limited additional control. Getting continuous dynamic control usually requires routing a modulation wheel (if available) through a MIDI event processor to convert, scale, and offset the MIDI control data. Shouldn't such a basic musical control as 'dynamic expression' be more easily accessible? Surely this problem could be easily resolved by including a modulation wheel, and allowing musicians to route it to control a percentage of filter modulation and MIDI volume?

Mr. Noriyasu (Roland): We have to think about the concept of the keyboard as a controller. So often musicians are very greedy and want to control the sound of a violin from a keyboard. Well, as long as we make keyboard instruments where hitting a key produces the sound of a violin, naturally musicians are going to desire more expressive control over that sound!

Mr. Noriyasu (Roland): Remember Moog's [ribbon] controller, which was quite expressive? Yet not many people used it.

There are so many possible ways to convey expression using MIDI alone. If manufacturers make the controls and parameters available and easily accessible, musicians will find them with time and utilise them. Remember that MIDI volume controller 7 has been around since the first day MIDI was demonstrated, yet only recently has it become commonly recognised and used for applications such as MIDI automated mixing!
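
To make the routing idea above concrete, here is a minimal sketch, in Python, of the kind of rule a MIDI event processor applies when converting, scaling, and offsetting control data - in this case turning modulation wheel messages (controller 1) into MIDI volume (controller 7). The function name, scale, and offset values are illustrative assumptions, not settings from any particular product.

# Sketch of one MIDI event processor rule: convert controller 1
# (modulation wheel) into controller 7 (MIDI volume), with an
# illustrative scale and offset applied to the 0-127 data value.
def mod_wheel_to_volume(status, controller, value, scale=0.75, offset=32):
    is_control_change = (status & 0xF0) == 0xB0
    if is_control_change and controller == 1:              # modulation wheel in
        new_value = min(127, max(0, int(value * scale) + offset))
        return (status, 7, new_value)                      # MIDI volume out
    return None                                            # pass everything else through

# Example: mod wheel fully up on channel 1 becomes a full-volume CC7 message.
print(mod_wheel_to_volume(0xB0, 1, 127))                   # -> (176, 7, 127)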

KORG
Mr. Aki Tachibana
Manager, Product Planning Division, Korg.

Mr. Tachibana (Korg): Additional expressive controllers can easily be handled by MIDI, but as more are added, the question is whether musicians can really use them effectively. Manufacturers must consider the instrument's target users as well as the instrument's functions. These factors determine which control interfaces will be included. Any of the manufacturers here could make a dream machine - but who could use it?

Well, I think that is kind of like the question of which came first, the chicken or the egg. In this case, though, I think the answer is that without first offering the instrument, musicians wouldn't have a chance to try it.

Mr. Tachibana (Korg): Yes, of course, I understand that too but...

Now that lower priced, so-called 'consumer' products offer almost the same quality as high-end instruments, and we don't have as much visible high-end technology leading the way, where are the roads that will lead toward new developments?

Mr. Noriyasu (Roland): MIDI has been designed so that there is a tremendous amount of flexibility for future growth. There are numerous unused areas of the MIDI Specification that could be utilised in the future. These have been left open because nobody knows what type of instruments might be developed; we didn't want to lock ourselves in. The structure of MIDI is wide open. MIDI will not keep any type of future instruments from being designed. I think each manufacturer today is trying to utilise the many capabilities of MIDI, but the number of possibilities is staggering! And most musicians are only using a very small part of MIDI.

I have had very good experiences working with MIDI but I do hear other musicians complain that MIDI is sometimes too slow or not sufficient for their needs.

Mr. Noriyasu (Roland): Many of the problems that people blame MIDI for are actually the problem of hardware and other factors. Humans are limited as to how much they can do in real time. When the process includes a machine, and as the machine itself becomes a more important part of the musical process, the music often tends to lack human expression. Thus the human-to-machine interface is bad. When we think of a violin or flute, these instruments seem much closer and we feel a close warmth and association with the sounds and expressions of such instruments. It is most difficult trying to achieve this same level of expression by using a keyboard instrument.

Is the current MIDI Specification sufficient to achieve the type of subtle expressive nuances you are talking about?

Mr. Noriyasu (Roland): Yes, it can be done by MIDI but the method is not established yet.

AKAI
Mr. Toshi Tamaki
Manager, Musical Instrument Engineering, Akai Professional.

Mr. Tamaki (Akai): When you ask whether MIDI is sufficient or not you must take the level of the end user into consideration, as the needs of different musicians will vary. When you want to perform an advanced level composition or do something really complicated, you might run into problems such as a slowdown in speed; or if you are using many simultaneous pitch bends, the MIDI data might clog up. Some of these problems can be attributed to MIDI. However, you must ask yourself, is all of this simultaneous information really necessary? Different applications must be considered separately and will require different solutions.

Mr. Noriyasu (Roland): Complaints about the speed of MIDI are most often, but not always, a result of misinformation or misunderstanding about how to effectively use MIDI. MIDI signals must be routed to hardware devices and the processing time of these hardware devices can be very slow. This is one of the limiting factors of today's hardware. With higher priced MIDI instruments, we can speed up the hardware processing time by adding several CPUs [central processing units]. With lower priced MIDI instruments, though, it is not MIDI but the hardware processing time that is the limiting factor. This issue is widely misunderstood, so I would like to emphasise this point: even if MIDI were faster - not that it will be - things wouldn't change very much from the current situation.

It often depends on how you connect up your equipment, and whether you are sending MIDI data down several different MIDI cables in a complex setup instead of clogging it all up on one cable. If you are sequencing, shifting around clock beats for precisely timed onsets often helps too.

Mr. Noriyasu (Roland): Yes, that is why expensive MIDI sequencers have multiple MIDI outputs. If you're performing a complex piece and try to send everything down one cable, well... that's a bit absurd.
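
To put rough numbers on the 'one cable' point: MIDI runs at 31,250 bits per second, and with its start and stop bits each byte takes 320 microseconds, so a three-byte message such as a note-on occupies about a millisecond of cable time. The lines below are a back-of-envelope calculation in Python, not a measurement of any particular system; the channel and message counts are illustrative assumptions.

# Back-of-envelope MIDI bandwidth figures (illustrative, not measured).
BAUD = 31250                      # MIDI serial rate, bits per second
BITS_PER_BYTE = 10                # 8 data bits plus start and stop bits
byte_time = BITS_PER_BYTE / BAUD              # seconds per byte
message_time = 3 * byte_time                  # a 3-byte note-on or pitch bend

print(f"one byte: {byte_time * 1e6:.0f} microseconds")
print(f"one 3-byte message: {message_time * 1e3:.2f} milliseconds")

# A dense pitch bend passage: say 8 channels each sending 100 bends per second.
messages_per_second = 8 * 100
print(f"cable load: about {messages_per_second * message_time * 100:.0f}% of capacity")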

So I guess what you're saying is that musicians can push MIDI beyond the limits or learn how to live within them. Just as they can also choose to drive their car head-on into a brick wall or learn to have their fun on the motorway!

Mr. Tamaki (Akai): Yes, if there is a problem, either MIDI can be changed or the way it is used can be changed.

Mr. Noriyasu (Roland): Believe me, changing MIDI would be extremely difficult and would have tremendous negative consequences. What would then become of all the MIDI gear musicians have purchased? And imagine the difficulty of trying to come up with another worldwide compatible standard! It might be impossible to achieve again. You don't know how much work we have done to get this far with MIDI! I strongly feel that we should focus on how to use the existing MIDI Specification effectively to achieve the desired musical results. This is the issue.



"...even if we had faster processors, we couldn't export them due to a regulation set by COCOM... It's unfortunate that regulations such as these, which have nothing to do with music, force us to place limitations on the level of musical instruments we can develop." Mr. Noriyasu (Roland)


There will not be a new MIDI standard; MIDI will not change. MIDI will continue to exist in the market. There may be some additions, such as new control changes defined and things like that. And of course, the horizons are wide open as to what can be done with System Exclusive messages by manufacturers.

It seems like a strange place to raise this issue, but even if we had faster processors, we couldn't export them due to a regulation set by COCOM (Coordinating Committee for Export Control). We are already facing the COCOM limitation with our fastest processor speeds and largest memory capacities. We must receive government approval before being allowed to export, due to the COCOM rules. These obstacles are quite a burden to deal with, so we must sell large quantities to justify the trouble. It's unfortunate that regulations such as these, which have nothing to do with music, force us to place limitations on the level of musical instruments we can develop. The concern of COCOM is that the CPUs and memory devices of high-end musical instruments could be easily removed and used in Soviet submarines. Although I think this would be unlikely, we must comply with the law.

Mariko Oki (Talk Studio): I wonder how many Synclaviers and Fairlights end up as Soviet submarines?

Seriously, though. As MIDI itself grows in complexity and features, will it become more and more difficult to use?

Mr. Tamaki (Akai): I don't think we will be sticking everything into MIDI. MIDI is extremely useful for certain applications and not quite so efficient for others. For example, sample dumps which involve a tremendous amount of data are best done with something other than MIDI.

It's a problem trying to have only one connector for all purposes. There's nothing wrong with having different connectors for different purposes if they serve the user better. So use MIDI for transmitting music and use a SCSI [Small Computer Systems Interface; pronounced 'scuzzy'] port for large data transmissions. Trying to use MIDI for all tasks is the wrong approach, I feel.
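
A rough calculation shows why. The figures below are illustrative assumptions (a one-second, 16-bit, 44.1kHz mono sample, MIDI's 31.25kbaud rate, and the three 7-bit data bytes per sample word used by the MIDI Sample Dump Standard, ignoring packet headers and handshaking, which slow real dumps down further), compared against a SCSI-class link assumed to sustain about 1MB per second.

# Rough comparison: one second of 16-bit, 44.1kHz mono sample data
# moved over MIDI versus a faster port such as SCSI. Figures approximate.
SAMPLE_WORDS = 44100                          # one second of mono samples
MIDI_BYTES_PER_SEC = 31250 / 10               # 31.25 kbaud, 10 bits per byte
BYTES_PER_WORD_OVER_MIDI = 3                  # 16-bit word packed into 7-bit bytes

midi_seconds = SAMPLE_WORDS * BYTES_PER_WORD_OVER_MIDI / MIDI_BYTES_PER_SEC
scsi_seconds = SAMPLE_WORDS * 2 / 1_000_000   # assuming ~1 MB/s sustained

print(f"over MIDI: roughly {midi_seconds:.0f} seconds (before handshaking)")
print(f"over a SCSI-class link: roughly {scsi_seconds * 1000:.0f} milliseconds")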

What changes will the development of MIDI Local Area Networks (LANs) bring about?

Mr. Noriyasu (Roland): MIDI LANs use MIDI exactly as it is. They won't affect the development of MIDI itself. They can be useful in a large MIDI setup and can coexist with MIDI.

Mr. Tamaki (Akai): The identity of MIDI is not always clear. Its first task was to send musical performance data, then we added control data, then we started controlling other machines with it. Then we started transmitting gigantic amounts of sample data. So now MIDI is used for a multitude of purposes and it has the ability to expand in these ways. What allows us to perform all these tasks, we simply call 'MIDI'.

Mr. Noriyasu (Roland): Since many capabilities of a machine are determined by its software, we can often make inexpensive instruments that on the surface are close in capabilities to professional machines.

Mr. Tamaki (Akai): Ah ha! That's the problem! The capabilities of the cheaper machines are so close to the professional ones!

Mr. Noriyasu (Roland): So we should have made inexpensive machines that professionals couldn't use!

Mariko Oki (Talk Studio): Well, it's true that the inexpensive machines are often good enough for professional use, but one problem is that there just isn't enough good sound software available for these machines.

Mr. Noriyasu (Roland): Well, then third parties should create a good software base.

As you know, many excellent suppliers of third party software have had a difficult time surviving.

Mr. Noriyasu (Roland): Yes, I know.



"In some respects, it is regrettable that the synthesizer has evolved into an instrument on which some musicians spend more time programming sounds than they do making music." Mr. Noriyasu (Roland)


We are often not even close to exhausting the sound capabilities of existing instruments before the next new models come out at the trade shows. The rate of development of instrument technology has far outstripped the base of sound software available for these machines.

Mr. Noriyasu (Roland): I'm sure that each manufacturer would love to bring in a strong base of third party support, but in doing so this might cause a problem in the development of newer products.

An important suggestion raised by Tomita was that instead of constantly making new hardware, why not simply issue new versions of the existing hardware? Allow the instrument to evolve over a period of time, so that the software base can evolve and grow too. Don't throw away the box each time so that the musician must start all over from the ground up. There is never a chance for musicians to learn their instruments well. With computers, we can upgrade the software and maintain files and familiarity with the operating system. Why not a similar situation with instruments?

Mr. Noriyasu (Roland): That would be good. The electronic instruments that we make are mass produced but if a musician finds a machine that allows him to express his music, then he could stick with that machine for 10 years if he wished. Or with the piano for 100 years!

Mr. Noriyasu (Roland): There is still much work to be done on electronic instruments - that's why we continue to build them.

Mr. Tamaki (Akai): Each manufacturer has a basic product design structure that can be used to develop an older machine into a newer one. The new machine can be built on top of this basic structure. However, computer developments occur so rapidly that, inevitably, some changes are no longer compatible with older design structures, so new ones must be built. This may be the cause of some of the current market confusion.

Mariko Oki (Talk Studio): Users of very expensive instruments, such as the Synclavier and Fairlight, have access to excellent sound libraries. Why don't manufacturers create similar libraries for lower priced machines which now have excellent specifications and memory capacities?

Mr. Tamaki (Akai): It takes time to build up a good library, so we have also tried to make machines that can use existing sound libraries as much as possible - although musicians should not be completely dependent on manufacturers to supply both sounds and machines.

Many musicians have found that they must make a choice between being a programmer or a musician. Creating good programs on today's instruments can take a long time. So whether good sound libraries are available, either from manufacturers or third parties, will be of increasing importance to musicians who don't have the time to make their own.

Mr. Noriyasu (Roland): At Roland, we went to many studios to work with professional musicians to record good sounds. But that, by itself, does not make a good sound! We also use a special computer to extract certain portions of the sampled sounds, and to add others, in order to create unique and interesting sounds. It is very time consuming. This type of work would be almost impossible for musicians to do by themselves. They can make samples of things around the house or even of real instruments, but they wouldn't be able to come up with the type of source material that we are able to provide. It is a problem when musicians spend such a large amount of time making sounds rather than making music.

Mr. Tachibana (Korg): We must meet the needs of professional musicians individually as well, since some of them do not wish to use sounds that are available to everyone else.

Mr. Tamaki (Akai): Can you place a value on sounds?

Well, you can put a price on the reputation of an instrument that has an excellent sound library. This is one of the reasons why musicians pay to use very expensive instruments even though lower priced machines with similar specs are available.

CASIO
Mr. Makoto Fukuda
Manager, Electronic Musical Instrument Development Department, Casio.

Mr. Fukuda (Casio): It is very expensive to develop a good sample library. And it is very difficult to make a profit from developing sound software, yet a good sound library is needed and should be available.

Another important point Tomita mentioned is that we need more than just good instrument samples. We need to develop good sets of sounds with many subtle variations - dynamics, bowing, special effects, etc. Not piecemeal sounds, but complete sounding instrument 'sets'.

Mariko Oki (Talk Studio): Individual musicians, on their own, just don't have the resources to develop a project such as this. And this is more important than all of the new products. After all, let's not lose sight of what the end goal is - music, not hardware.

What technical developments can we expect to see in the next couple of years?

Mr. Fukuda (Casio): We are still trying to improve the quality of PCM sounds. We are searching for new methods of creating sounds. We are also constantly working on improving user interfaces, to allow better expression. This is an area that Casio is working very hard on.

TEISCO
Mr. Yoichi Kondo
Chief Engineer, Research & Development, Teisco/Kawai.

Mr. Tamaki (Akai): We, too, are still working on improving sound quality. 16-bit sound is excellent but does not offer enough dynamic range. We are exploring new types of D/A convertors and aiming towards mixing systems with a dynamic range of about 110dB. These are some of our technical goals.

Another area is the development of Artificial Intelligence, which is still far away but would be wonderful if we could harness it. In the future, we will hopefully be able to develop human interfaces that will allow musicians to more easily convey greater expression than is currently possible. These are some of our dream goals.
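
On the dynamic-range point: each bit of converter resolution is worth roughly 6dB, so ideal 16-bit conversion tops out near 98dB and a 110dB target implies the equivalent of about 18 clean bits. The short Python lines below simply write out that standard textbook arithmetic; they describe no particular converter design.

# Theoretical dynamic range of an ideal n-bit converter (quantisation
# noise only): roughly 6.02*n + 1.76 dB.
def dynamic_range_db(bits):
    return 6.02 * bits + 1.76

for bits in (16, 18, 20):
    print(f"{bits}-bit: about {dynamic_range_db(bits):.0f} dB")
# 16-bit gives about 98 dB; reaching 110 dB needs the equivalent of ~18 bits.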

Mr. Kondo (Teisco): We are still working on improving the sound quality and on developing better user interfaces. Developing better user interfaces can help overcome several of today's problems - such as the difficulty of programming a machine or the difficulty of dealing with constant system changes.

KAWAI
Mr. Jiro Murakami
Assistant Manager, Sales Promotion, Foreign Trade Division, Kawai.

Mr. Murakami (Kawai): Many manufacturers today are using PCM-ROM systems because of their convenience and high quality sound. However, many of the subtle nuances of acoustic instruments are missing and emulating these types of qualities, along with more expressive capabilities, is an important design goal.

Mr. Tachibana (Korg): We must consider what sounds we want instruments to generate. Just recording and playing back DAT [digital audio tape] quality sounds does not make them musical instruments. PCM [pulse code modulation] is only one method of generating the sound from a musical instrument - it could be FM [frequency modulation] or analogue, the method used is not important. What sounds come out of the instrument is the issue to focus on. We also need to strive to develop instruments that can be easily used without having to read a manual; instruments that are more intuitive and closer to humans.

Mr. Noriyasu (Roland): I think now is the time to step back and focus more on the human interface aspect of instruments. In some respects, it is regrettable that the synthesizer has evolved into an instrument on which some musicians spend more time programming sounds than they do making music. It wasn't necessary for musicians to create sounds, but we have already stepped in this direction and we can't step backwards now. So the answer is to step forward and rectify the situation by developing better user interfaces.



"We also are trying to meet the needs of professional musicians, although it is very risky for us." Mr. Tachibana (Korg)


We must reconsider our concepts of musical instruments. When you look at the process of creating music today, it is much like the process of laying tiles onto a flat surface; we don't have much depth in the third dimension. We are still working in two dimensions. This is not only a problem with synthesizers - recording engineers as well are having difficulty recording acoustic instruments. The fact is that no matter how hard they try to accurately record a performance, when it is played back it still is not an accurate replica of the real event as heard in the original environment. This is a fundamental problem with electronic instruments as well.

We must develop a better understanding of acoustics and incorporate that understanding into musical instruments. When we learn new applications, instead of passing them on immediately to musicians, we should develop associated human interfaces to control these new parameters. These may sound just like dreams, but these are the directions we must look towards in the future.

I went to a soprano recital the other day and while listening to the voice and piano, I realised that it would be impossible to put a synthesizer up there that would in any way closely achieve the subtle nuances of those acoustic sounds. Not even a Kurzweil or Synclavier, thank you. In comparison to natural acoustic sounds, there's something missing in electronic sound that goes beyond the problem of not having enough expressive control. It's the difference between listening to a piano and listening to a loudspeaker playing back a recording of a piano. There are still some important fundamental characteristics of the sound that we are not capturing with 'stereo', although it's hard to place a finger on precisely what they are. But I have a hunch that it's something beyond the problem of bits and sampling rates that we are forever quibbling over.

Mr. Noriyasu (Roland): Yes, speakers on stage don't match well with the sound of acoustic instruments. One of the reasons is that the sound emanating from an acoustic instrument is not dispersed in only one direction - it is more omnidirectional. I have actually tried this many times but it has never worked out well. Each time I feel as if we have failed. The sound does not blend in well, it stands out distinctly from the acoustic sounds. This problem comes before the problem of expressive control. This is a problem of acoustics.

We must also consider the instrument's internal resonances that interact - things such as the sympathetic vibration of strings not actually hit, and so forth.

Mr. Noriyasu (Roland): Yes, and there is also an interaction between the reflections in the room and the instrument, particularly with piano. Our ears are directional and we perceive the reflections from many different locations as they bounce off the walls. The reverberation that emanates from a pair of stereo speakers does not replicate this process accurately enough, since all the reverberation is coming only from the speakers.

Where do you draw the line between producing only for the mass market and designing for professionals? Should both these groups use the same products or should there be different markets?

Mr. Noriyasu (Roland): At Roland we have clearly defined lines. The reality, though, is that creating products for the professional market is a real burden and a strain on our management. This is very clear to us, although we do feel a responsibility to continue to create products for the professional market.

Mariko Oki (Talk Studio): Why do you feel a responsibility to manufacture for the professional market?

Mr. Noriyasu (Roland): Well, if the professionals are not using our products then amateurs can't follow and won't use our products either.

Mr. Tachibana (Korg): We also are trying to meet the needs of professional musicians, although it is very risky for us. Yet we must. We have just released the T series and since many professional musicians prefer modulation wheels to joysticks, we have developed an optional wheel retrofit for the T1. There are also other functions we incorporated on the T1 that are not available on the consumer versions.

Mr. Murakami (Kawai): Is a machine 'professional' because it is expensive? And therefore inexpensive machines are for consumers? Why do only professionals need better interfaces? Don't other users need them too? We are trying to incorporate good user interfaces into all of our machines.

Mr. Tamaki (Akai): Akai is targeting middle and upper level users by incorporating advanced interfaces such as the large graphic displays and large rotary dials. I don't agree that expensive is necessarily better. Although these features add to the cost, we are hoping users will recognise the value of instruments that offer premium performance and better human interfaces.

Mariko Oki (Talk Studio): The Akai S1000 seems to be establishing a very good reputation among musicians, as well as establishing itself as a paragon of human interface design for other manufacturers to emulate in the future.

Mr. Tamaki (Akai): Yes, it has. Initially though, we experienced friction within our sales management as they had reservations about the possible sales volume of such an expensive machine. So it was a delicate business step, but musicians have begun to recognise its value.



"Casio's policy is to sell large volume at inexpensive prices. It is difficult for us to step outside of this policy." Mr. Fukuda (Casio)


High-end products have the benefit of introducing new technology that will eventually trickle down and appear in lower priced products. But in the past three years we have seen more and more focus on consumer products, with little emphasis on innovation.

Mr. Fukuda (Casio): Casio's policy is to sell large volume at inexpensive prices. It is difficult for us to step outside of this policy. It is also very difficult to make innovative products that satisfy the demands of the professional market and that are accessible to amateurs as well.

Mariko Oki (Talk Studio): Many musicians have mentioned that they have difficulty getting information from Japanese manufacturers. What can they do?

Mr. Noriyasu (Roland): We have product specialists at each branch office throughout the world [as do Akai, Casio, Kawai, and Korg -Ed.], but it is difficult for any single person to be able to answer all questions.

Do you feel that musicians are satisfied with the limitations of the current generation of PCM-ROM keyboards?

Mr. Tachibana (Korg): We are supplying new sound cards, and with the T series you can load in samples from our DSM1 sample disks, so there is quite a bit of flexibility.

Are you working on resynthesis systems?

Mr. Tachibana (Korg): Of course we are considering this possibility, but there are many different approaches to consider.

Is low-cost resynthesis something that we will see soon, or is it still far away?

Mr. Noriyasu (Roland): The Roland 'U' series uses a type of resynthesis that we call 'RS-PCM'. It is not complete resynthesis but a partial use of a resynthesis method. We are considering expanding this area. The current direction is towards a convergence of our current concept of a 'synthesizer' and 'sampler'. The differences are becoming more blurred each day. We already have PCM-ROM synthesizers which are a hybrid version of a synthesizer and a sampler. When resynthesis is developed further we will see even more overlapping.

What is more important than the technology used is whether musicians can get their hands on the sounds they like and whether they have the necessary expressive controls. These are the goals we should be focusing on - not the technology.

We are going through a period of rapid technological change, and many musicians tend to focus on technology because it has such a dramatic impact on the sounds available to them.

Mr. Tamaki (Akai): I think one issue we need to consider is whether musicians will need to be able to edit a sound or not. Based on some of our discussions, it would seem that future instruments are going to be more like 'canned' instruments, all with the same sound! If everyone were using the same 'canned' sounds, would such instruments be appealing?

That's one of the beautiful things about acoustic instruments, they are never the same. Each piano has a unique sound, each hall has its own characteristics, and each performer and each performance will elicit a different set of sounds and emotions.

Mr. Tamaki (Akai): We must look toward developing some new type of human interface that will allow musicians to easily change and control the sound, so that musicians can evoke their own sound personality from electronic instruments as well. This sounds like a dream.

It's really not too far-fetched, because that's what can be done with a box of firewood, catgut, and a little rosin!

Mr. Tamaki (Akai): Of course musicians would like manufacturers to supply sounds, but they don't want to all use the same sounds. So either manufacturers can supply thousands of 'canned' sounds to choose from, or musicians can have a hand in shaping their own sounds from source material. Which direction we choose to follow will play an important role in the type of instruments we develop in the future.

Mr. Noriyasu (Roland): We must reconsider exactly what the concept of 'resynthesis' is. Do musicians simply want to imitate acoustic instruments or do they want to use resynthesis to create brand new sounds? The latter area is where I feel resynthesis can become a powerful tool.

In what way would the sound of resynthesis differ from what is possible to achieve with today's samplers, which utilise velocity switching, crossfade loops, butt splicing, etc?

Mr. Noriyasu (Roland): Maybe we can make many of the same sounds with today's instruments, but resynthesis would allow you to achieve this with much less programming.

Mr. Tamaki (Akai): Technically, any manufacturer could make this instrument today. Devising the hardware for a resynthesis machine is no longer a problem. The key is to devise good software and an effective human interface.
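
As an aside for readers unfamiliar with the sampler terms used a moment ago: a crossfade loop blends the material just before the loop end with the material just before the loop start, so the jump back is inaudible. A minimal sketch in Python, assuming a mono sample held in a NumPy array and illustrative loop points:

import numpy as np

def crossfade_loop(sample, loop_start, loop_end, fade_len):
    # Blend the last fade_len samples of the loop with the fade_len samples
    # leading into loop_start, so looping back from loop_end is smooth.
    # Requires loop_start >= fade_len; loop points here are illustrative.
    out = sample.astype(float)
    fade_out = np.linspace(1.0, 0.0, fade_len)
    fade_in = 1.0 - fade_out
    end_seg = out[loop_end - fade_len:loop_end]
    start_seg = out[loop_start - fade_len:loop_start]
    out[loop_end - fade_len:loop_end] = end_seg * fade_out + start_seg * fade_in
    return out

# Example: a one-second 440Hz test tone with a crossfaded loop region.
sr = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
looped = crossfade_loop(tone, loop_start=10000, loop_end=40000, fade_len=2000)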



"I think one issue we need to consider is whether musicians will need to be able to edit a sound or not." Mr. Tamaki (Akai)


So you're saying that we really have arrived at a stage where the hardware is no longer an important limiting factor and that the doors are wide open to what we are capable of designing with software?

Mr. Tamaki (Akai): Well, the aspect of designing a good control interface is still quite a big problem. How do you go from the idea of what you want to do in your head to realising it on the machine? For example, if you want to have the sound of a guitar string being pulled back and slapped against the fingerboard, what would you do to get this sound? Push a button, pull a lever? This is a very important problem.

Suppose you could move the modulation wheel and crossfade through many different sounds; this would open up some wonderful new expressive capabilities.

Mr. Tamaki (Akai): Hey, that's quite interesting! That would be very possible to achieve.
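
One simple reading of that suggestion: map the 0-127 modulation wheel value onto a chain of sound layers, with only two adjacent layers sounding at any wheel position and an equal-power blend between them. The sketch below is a Python illustration under those assumptions, not a description of any shipping instrument; the layer count and names are made up.

import math

def layer_gains(cc_value, num_layers):
    # Map a 0-127 controller value onto crossfade gains for num_layers
    # sound layers: at any wheel position only two adjacent layers sound,
    # blended with an equal-power (sine/cosine) curve.
    position = (cc_value / 127) * (num_layers - 1)
    lower = min(int(position), num_layers - 2)
    frac = position - lower
    gains = [0.0] * num_layers
    gains[lower] = math.cos(frac * math.pi / 2)      # lower layer fades out
    gains[lower + 1] = math.sin(frac * math.pi / 2)  # upper layer fades in
    return gains

# Mod wheel halfway up across four hypothetical layers (bowed, plucked, breathy, bell):
print([round(g, 2) for g in layer_gains(64, 4)])     # -> [0.0, 0.69, 0.72, 0.0]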

Mr. Noriyasu (Roland): But if you want buttons and knobs for every possible sound parameter, the machine would be as big as this room and nobody would be able to control it. With this problem in mind, you develop a real appreciation for a controller such as the violin. But it is important to remember that, as with many of the most expressive types of controllers, the violin is basically a monophonic instrument. With today's MIDI instruments being capable of making an entire orchestra of sounds, we have to realise the limitations we are facing. If all the instruments move with the same phrasing, it just won't sound very interesting. Each instrument must be played individually. So, to make an effective MIDI ensemble, you must really approach each sound one at a time. It's not that MIDI can't do it, it's just that one person really can't control all the hardware at once in real time. For one person to attempt this is a lot of work. It's tremendously tedious.

Yes, an orchestra is not just 100 different instruments, it is 100 different people expressing themselves all together in a very magnificent kind of way.

Mr. Noriyasu (Roland): Even with a sampler and a very good sound library, you still can't achieve the same effect that an orchestra can. This is the reason I think we must place more emphasis on the human interface design and reconsider our goals and our approach.

Mr. Tamaki (Akai): If you want to control the different expressive sounds that a violin can make by using resynthesis, this would be very difficult to achieve.

Up until now we have had many different digital machines that all use different algorithms. Is it possible to make a general purpose DSP [digital signal processing] machine that can use many different algorithms?

Mr. Tamaki (Akai): Well yes, that is possible.

Mr. Noriyasu (Roland): Anything is possible! Whether a function is useful and what it costs are the limiting factors in the real world. Also, doing this would eliminate the individuality of each instrument. Then all instruments would be the same.

Mr. Fukuda (Casio): The hardware cost of such a project would be very expensive, too.

Mr. Noriyasu (Roland): I don't think any of the instrument manufacturers have the intention of doing that! If that were the case, we'd all become computer companies instead of musical instrument manufacturers!

I have noticed that on the Akai S1000 sampler there are some very interesting DSP algorithms quietly embedded in the Version 2.0 software.

Mr. Tamaki (Akai): Yes, that is true. We are trying to find out whether these types of functions are useful and of interest to musicians.

Well, what about using more 'macros' on synthesizers to give musicians more performance control? For example, to recall certain prerecorded movements of front panel controls?

Mr. Noriyasu (Roland): I'm sure that many manufacturers are considering concepts such as these. We are still trying to determine what is the best way to approach this and how much interest the majority of our users would have in such capabilities.

Mr. Tamaki (Akai): Macros are only one method of creating a simpler user interface, and I think that all manufacturers are carefully considering this overall issue before proceeding.
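
One simple reading of the 'macro' idea is a recorder for controller or front-panel movements that can be replayed on demand. The sketch below is a minimal Python illustration of that reading - timestamped control changes stored and played back in order - and not any manufacturer's implementation; the controller numbers and the send() callback are placeholders.

import time

class ControlMacro:
    # Record timestamped control movements (e.g. MIDI control changes)
    # and replay them later through a send(controller, value) callback.
    def __init__(self):
        self.events = []          # (seconds_from_start, controller, value)
        self._start = None

    def record(self, controller, value):
        now = time.monotonic()
        if self._start is None:
            self._start = now
        self.events.append((now - self._start, controller, value))

    def play(self, send):
        begin = time.monotonic()
        for t, controller, value in self.events:
            time.sleep(max(0.0, t - (time.monotonic() - begin)))
            send(controller, value)

# Example: capture two movements of an arbitrary controller, then replay them.
macro = ControlMacro()
macro.record(74, 40)
macro.record(74, 90)
macro.play(lambda c, v: print(f"CC{c} = {v}"))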

Mariko Oki (Talk Studio): What computer and electronic developments will have the largest impact on musical instruments in the future?

Mr. Noriyasu (Roland): Faster, cheaper computers for one, and a solution to the COCOM problem!

When can we expect to see digital mixing, optical technology, and/or video-based multitrack recording capabilities utilised?

Mr. Noriyasu (Roland): It is simply a matter of time. It won't be too long.



© PPV Presse Project Verlags GmbH, Munich. Reprinted from Keys magazine with the kind permission of the publishers.





Sound On Sound - Copyright: SOS Publications Ltd.
The contents of this magazine are re-published here with the kind permission of SOS Publications Ltd.

 

Sound On Sound - Apr 1990


Feature by Gregory D. Moore
