Illusions of Space
The Man Behind The Lexicon Sound | David Griesinger
David Griesinger pioneered the research behind the Lexicon digital reverb sound. He discusses his ideas with Paul White.
Despite the advances made in digital signal processing, especially at the low cost end of the scale, Lexicon still leads the field in digital reverberation. Furthermore, they have adapted their techniques for acoustic enhancement in concert halls in the form of LARES, a multi-channel reverberation system capable of simulating a natural acoustic for live music performance. The man responsible for Lexicon's world-beating reverb algorithms is Principal Staff Scientist David Griesinger, who I met at the recent AES show in Vienna; David had attended to present a paper entitled Binaural Measures of Spatial Impression and Running Reverberance. I was curious to know how his obsession with reverberation and sound perception came about.
"We started by building mechanical reverberators based on springs and plates, all of which sounded more or less bad! Then, like everyone else, I had the idea that it should be possible to do this with digital electronics."
Was your early research based on the pioneering work by Schroeder?
"No, I didn't know anything about it. First we made an 'echoplex' with about 100 taps instead of 16, to see what it sounded like — and it didn't sound very good to me. So I started playing about with different kinds of comb filters, doing the kind of research that I didn't find out until later that everyone else had already done."
As I understand it, when Schroeder first published his paper on the subject, it was not possible to do the necessary processing in real time.
"Well, Schroeder's work was done in non-real time, but there was a lot of work being done at the BBC which was all in real time. There was work by Axon in Guildford, back in the 50s, where they used magnetic media to produce the delay, with varying numbers of magnetic heads to provide the delay taps. And they did some very clever things with them; for example, they switched randomly from one head to another, either with the feedback tap or the output tap. They claimed that it made a big improvement to the sound — and, of course, they were right."
When it comes to fooling the brain into hearing a convincing acoustic environment, how did you decide which parameters were important to duplicate and which were less important?
"I did it entirely by trial and error. But I had been a recording engineer for many years — I'd made a lot of records and I'd spent a lot of time in concert halls. The reason I wanted to make the unit was that I was dissatisfied with the reverberation devices then available and I was acutely aware of the kind of thing we needed. So I started working on this digital process to try to find an algorithm I thought I could live with for classical music.
"We were getting somewhere with the comb filtering, but I wasn't very happy with it, when I ran into Barry Blesser. Barry was an MIT professor and he'd always been a consultant for EMT (the company that specialised in echo plates), and he had been working with them on a reverberation project. I mentioned to him that I was also working on reverberation and he said, 'Of course, you must know about Schroeder's work then?' I said 'What Schroeder's work!' And he told me about the 1962 paper. I'd looked in the AES journals back to about 1970 trying to find something on artificial reverberation and, of course, I didn't find anything — there wasn't anything in that period. So I got hold of Schroeder's work and thought 'This is really clever stuff'. I combined what Schroeder was doing with something I was doing and within a couple of weeks we had something that was much more effective. I immediately used the process on a record — the processor barely worked but I managed to haul it down to the studio and work with it, and it was really quite nice."
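The Schroeder architecture mentioned here combines parallel feedback comb filters with series all-pass filters. A minimal sketch in Python of that general idea; the delay lengths and gains are invented for illustration and are not Schroeder's published values, still less Lexicon's:

```python
import numpy as np

def comb(x, delay, g):
    """Feedback comb filter: y[n] = x[n] + g * y[n - delay]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n] + (g * y[n - delay] if n >= delay else 0.0)
    return y

def allpass(x, delay, g):
    """Schroeder all-pass: y[n] = -g*x[n] + x[n-delay] + g*y[n-delay]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        xd = x[n - delay] if n >= delay else 0.0
        yd = y[n - delay] if n >= delay else 0.0
        y[n] = -g * x[n] + xd + g * yd
    return y

fs = 8000
x = np.zeros(fs)   # one second of silence...
x[0] = 1.0         # ...excited by a single click

# Four parallel combs with mutually prime delays, summed, then two
# all-passes in series to increase the echo density of the tail.
wet = sum(comb(x, d, 0.8) for d in (1087, 1283, 1433, 1601))
for d, g in ((223, 0.7), (79, 0.7)):
    wet = allpass(wet, d, g)
```

The mutually prime delay lengths keep the comb echoes from piling up at common multiples, which is part of what Schroeder's 1962 paper was addressing.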
Your reverb algorithms have stood the test of time — the Lexicon reverb is still widely regarded as the most natural available. Did you discover something over and above the principles everyone else used?
"Yes, I'm always a little surprised at that. In fact, the original algorithm was available in the 224 and it's also used in the PCM70."
Did your algorithm use the same FIR (Finite Impulse Response) filter and banks of comb and all-pass filters employed in Schroeder's model or did you take a different approach?
"It wasn't a similar architecture at all — it's quite different from that. I can't say too much about what we actually do without giving away things we'd rather keep secret, but I can say that all our algorithms are the result of listening. There are some tools that you can use — and if you read the paper I gave in Toronto, it talks about those tools — Sonograms, for example. One tool I like a lot is to use a compressor to compress the output of the reverberator so that it gives a virtually constant amplitude. Then you can excite the input of the reverberator with a click and listen to the decay, which now occurs at a constant level. You hear all kinds of things that happen. A lot depends on the applications, and with drums you have to be very careful about the rate of build-up of reverb complexity — that's probably more important than anything else. Probably what bothers me the most with classical music is the fact that with many of the algorithms that sound most natural for other reasons, you have a problem that some frequencies have different decay times to others. In fact, typically, there is a rather broad dispersion of reverberation times with frequency."
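The compressor trick described above can be approximated offline: divide a decaying tail by its own running RMS envelope so the decay plays back at a virtually constant level. A sketch, using synthetic noise as a stand-in for a real reverberator's output (the decay rate and window size are arbitrary choices for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 8000
t = np.arange(fs) / fs

# A stand-in 'reverb tail': white noise with a one-second exponential decay.
tail = rng.standard_normal(fs) * np.exp(-3.0 * t)

# Heavy 'compression': divide by a running RMS envelope so the decay
# holds a near-constant level, exposing flutter, ringing modes and gaps
# that the fade-out would normally hide from the ear.
win = 256
power = np.convolve(tail**2, np.ones(win) / win, mode="same")
flat = tail / np.sqrt(power + 1e-12)
```

Played back, `flat` keeps the fine texture of the decay audible all the way down, which is the point of the listening tool.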
Isn't that also true of natural spaces?
"No. There is, in general, more coupling between the different modes, and it's very unusual in a natural space to have particular modes which would have a reverberation time, say, two or three times as long as other modes."
It would seem to me that in any rigidly defined system, or static model designed to simulate reverberation, some degree of correlation between delay times is inevitable, and this must colour the end result. I understand that one of the ways in which your algorithms differ from those of your competitors is the way in which you introduce a random element into the formation of the reverberation.
"Years ago we introduced this in the 224. It seemed to be a reasonable thing to do to introduce random changes in the delays within the algorithm. And it was for precisely the reason of trying to limit the decay time of those modes that otherwise took too long to decay. As I was saying, you get a dispersion of decay times, and typically in a reverberation unit you have a modal density in time as well as in frequency. In other words, you have a certain number of reflections per second. If you look into the frequency domain, you'll find that only certain frequencies will reverberate at all. And the closeness of those frequencies to each other is determined by the total delay in the system. Typically, they may be one or two cycles per second [Hertz] apart in conventional artificial reverberators.
"In a room, of course, they're much closer than that, which is one of the reasons why rooms sound different. If you add the capability of increasing the time density of the reflections as a function of time — so that as time builds up, the number of reflections also builds up — a consequence of that in the frequency domain is that the different teeth in the reverberation time comb become of different length. In fact, some of them might become three times longer than others. The reason this might become a problem is that if you excite this reverberator with a noise pulse, you excite all the frequency modes and it sounds wonderful, but then all the ones with shorter reverberation times decay away, leaving only those that have long reverberation times."
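The relationship described here between total delay and mode spacing is simply reciprocal. A trivial illustration; the delay figures are hypothetical, chosen only to match the "one or two cycles per second" spacing mentioned above:

```python
# Mode spacing of a delay-network reverberator: the resonant frequencies
# sit roughly 1 / (total delay) apart, so a short total delay gives
# sparse, widely spaced modes (the figures below are hypothetical).
def mode_spacing_hz(total_delay_s: float) -> float:
    return 1.0 / total_delay_s

print(mode_spacing_hz(0.5))  # 0.5s total delay -> modes about 2Hz apart
print(mode_spacing_hz(1.0))  # 1s total delay -> about 1Hz apart
```

A real room, with its vastly longer effective path lengths, packs its modes far more densely, which is one reason rooms and delay networks sound different.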
Is this what leads to 'ringing'?
"It leads to ringing, or a metallic kind of sound, because you've taken the original broad frequency range and reduced it to just a few frequencies, all of which are ringing at the same time."
And yet it would appear that most of your competitors still produce a static model of a reverberator and then excite it.
"Yes, but by careful design you can limit the problems inherent in a static model. The results can be quite acceptable and that's a perfectly valid way of doing it. But if you want to go further than that, you really have to add some sort of random element — because, if you add the right kind, you can reduce the length of time of these extra modes directly — and that's what Lexicon do in the more recent algorithms. Certainly we do this in the LARES system."
In your later systems you have user-accessible parameters called 'Spin' and 'Wander'; what do these actually vary?
"Spin varies the rate of change of the delays, and when they're changing slowly, it's hard to notice that they're changing at all. If they change more rapidly, it introduces pitch shifting and other problems which are quite noticeable. A lot depends on the type of material you are processing; with speech, you can afford to set the Spin very high and it does improve the timbre of the speech. But with piano, you need a much lower setting than you can get away with on strings, for example. The Wander parameter sets the range of time over which the delays can go. So if you set it for 10ms, for example, you create a +/-10ms window and the delay will move around inside that window."
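A time-varying delay line of the kind Spin and Wander control can be sketched as follows. The parameter names, the smoothed-noise modulator and the interpolation scheme here are illustrative guesses, not Lexicon's actual implementation:

```python
import numpy as np

def wandering_delay(x, fs, base_ms=40.0, wander_ms=10.0, spin_hz=0.3, seed=0):
    """Delay line whose length drifts inside a +/-wander_ms window.

    'spin_hz' (a stand-in for Spin) sets how fast the delay moves;
    'wander_ms' (a stand-in for Wander) sets the window width.
    Linear interpolation handles the fractional-sample delays.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    # Slow random modulation: a few noise control points, interpolated
    # across the whole signal and normalised to +/-1.
    ctrl = rng.standard_normal(int(spin_hz * n / fs) + 2)
    mod = np.interp(np.linspace(0, len(ctrl) - 1, n),
                    np.arange(len(ctrl)), ctrl)
    mod = mod / (np.abs(mod).max() + 1e-12)
    delay = (base_ms + wander_ms * mod) * fs / 1000.0

    y = np.zeros(n)
    for i in range(n):
        d = i - delay[i]
        if d >= 1:
            k = int(d)
            frac = d - k
            y[i] = (1 - frac) * x[k] + frac * x[k + 1]
    return y
```

Slow modulation is barely audible; raising `spin_hz` quickly introduces the pitch-shifting artefacts mentioned above, which is why the usable setting depends on the programme material.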
It has been said that Lexicon reverberators also produce the most convincing wraparound or three-dimensional sound. To what do you attribute that?
"There is another set of parameters that we worked on, called Shape and Spread. The spatial effect has to do with the way the energy builds up, reaches a maximum, holds that maximum and then decays. The shape of the reverberant decay really determines the apparent size of the reverberance. On different kinds of music, this creates different effects. So in the 480, we made this a variable you can adjust. If you set the Shape in the middle, you get a sort of double-humped response; it builds up slowly, reaches a maximum, holds it for a little bit, decays, then holds that for a little bit, then decays exponentially. That's a characteristic sound of certain concert halls."
Do you attach any importance to the synthesizing of early reflections or is that something of a red herring?
"In a real concert hall, it is the early reflections that determine its Shape parameter. But if you make the reflections too discrete, they're not particularly pleasant to me in a recording. If you measure the early reflections in a real hall, you find that they're not discrete — they're only sharp in the frequency range from 2-8kHz, which is not that important musically. You can see them very easily in an echogram, and they look like nice clean reflections, but if you look at the echogram in detail at lower frequencies, you'll see there's considerable smear in the echo. And this has to do with surface features that are larger in size than one or two feet — there are almost always some of these in a real concert hall. This means the lower frequencies have quite different reflection times to the higher frequencies."
In a real hall I would envisage that diffusion and diffraction occur much more strongly than can be simulated with any accuracy.
"That's correct. If you try to simulate a reflection with a single delay tap, you have no smear at all and the sound of such a reflection is quite artificial. You have to do something to smooth it out, and there's a number of things you can try — but there's another reason which indicates you probably shouldn't try to recreate the early reflections. In a recording situation, you're normally playing back over loudspeakers and your early reflections are already being supplied by the playback room. If you go about putting the early reflections in twice, both with the playback room and the artificial reverberator, it's not going to sound right.
"Another good reason is that in a real concert hall the early reflections are different for every instrument, because every instrument is located in a different position. You only have to move the instrument by one foot or so and the pattern becomes completely different. Consequently, the ensemble you obtain has very little tone colour. For example, in a string section each instrument has its own early reflection pattern, so they all average out. However, if you put the same sound into an electronic reverberator, the early reflection pattern is going to be the same for every instrument and it will sound very different — very coloured. It will sound as though everyone is playing from the same point in the room, which is obviously very unrealistic."
Does LARES use specially developed reverb algorithms or are they adaptations of those used in your studio reverberators?
"We developed the algorithm used in the LARES system many, many years ago, but that was around the time we were developing the 480, so that work got put on the shelf. However, I made some ROMs for myself which we used in the 224. Whenever I used the 224, I used this algorithm which we never got around to selling. It was only when we got the opportunity to do the LARES system that I thought we should resurrect this old algorithm, because it's very appropriate to that application — and we also developed it into a 'random hall' program for the 480. The algorithm was developed for its sound and not for feedback immunity, which is also a characteristic of LARES."
I've seen LARES demonstrated and the immunity to feedback is very impressive. How much of this is due to spreading the reverberant sound between several speakers and how much is due to the random nature of the reverberation?
"The phase characteristics of the reverb give about 6dB of feedback immunity and the rest is due to spreading the sound over a number of channels. If you use 16 channels, as we use in most of the LARES installations — 16 reverberators — then you get a factor of the square root of that — or 12dB. So you get twice as much improvement from the number of speakers as you do from the time variance — except that you couldn't get the advantage of the number without the time variance. If you didn't have the time variance, then you'd get no advantage from increasing the number of channels if you simply had the same microphone going through identical reverberators.
"The algorithms we use in LARES have a different type of time variation to those you'd normally use in a recording studio. If you use a delay system where the delays do not move in time, then feedback will build up at frequencies related to those delays."
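The arithmetic behind these figures: spreading over n independent channels buys 20*log10(sqrt(n)) dB, which for 16 channels comes to about 12dB. Combining that additively with the 6dB from the time-variant reverb is an assumption made for this sketch; the function name is invented:

```python
import math

# Gain-before-feedback improvement, per the figures in the interview:
# ~6dB from the time-variant reverb itself, plus a square-root-of-
# channel-count factor from spreading over independent channels.
# Summing the two in dB is an assumption for illustration.
def immunity_db(n_channels: int, time_variance_db: float = 6.0) -> float:
    channel_gain_db = 20 * math.log10(math.sqrt(n_channels))
    return time_variance_db + channel_gain_db

print(round(immunity_db(16)))  # 16 channels: 6dB + ~12dB
```

Note the dependency he points out: with identical, time-invariant reverberators the channels are correlated, so the square-root factor never materialises.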
So how are the pairs of channels fed? Is it a simple matter of time variance or are you running different reverb algorithms on each pair of channels?
"I don't have to use a different algorithm for each, because the algorithm changes itself completely in under a second — you just have to turn them on at different times or start them with different random number tables. Actually, in each 480 there are eight reverberators all loaded from a different random number table, so they're all independent. If you took another 480 and started it at exactly the same time, you'd be in trouble, because its output would be identical to the first one. They tend not to start at exactly the same time, but it's better to set the Spin value slightly different on the second unit, because that will guarantee that they won't run together."
How is the multi-channel configuration used in LARES?
"If you only use one 480, then you have a 1-channel in, 4-channel out system using eight reverberators. With two 480s, you have a 2-channel in, 8-channel out system — although you could use it as a 4-channel in, 4-channel out system if you wished. And you might want to do that if you had a stage you could cover easily with four microphones but not with two."
From all this work, what has emerged as the most important aspect in building a convincing simulation of a natural acoustic environment — and which parameters, if any, can you afford to ignore?
"You can't get away with anything, because you can always find a piece of music that will show that you're cheating. It becomes a matter of economics and certain algorithms work best on certain types of music — so long as it's still musical, that's all that matters, but a trained ear would be able to hear the difference."
How much more progress needs to be made before you can fool all of the people all of the time?
"I think you can do that now, particularly if you don't tell them what you're doing, but if you trained them to listen for certain things, they'd probably be able to pick them out. If you employ a time variant system, it is inherently different from a real hall — which is not time variant. In a real hall, if you play a sustained note on an instrument with absolutely no vibrato, you'll find that the reverberation disappears after about one second — you simply can't hear it any more. That would not be true in a time variant system, but it's quite a subtle point. It was only when I realised that this was true, and started to train myself to listen for it, that I could actually hear it."
Reverberation is all about creating the illusion of space from a small number of loudspeakers — normally a stereo pair when it comes to reproducing recorded music. Are any of the techniques currently being used to create three-dimensional sound applicable to digital reverberation?
"We've thought quite a bit about using this kind of technique for improving space, and Lexicon sell products that do this — the CP1 and CP3. The problem with techniques which widen space, using processing, is that you're limited in your listening position, and the limitations are quite severe if you're concerned with the entire frequency range. If you're only interested in low frequencies, however, you can widen it over quite a wide range. That's exactly what happens when you record using 'spaced' microphones — low frequencies are recorded with essentially random phase. It's the same as using a crosstalk elimination scheme in a room — it takes the bass energy and adds a very strong anti-phase component, which causes the different room modes to be excited.
"This causes you to hear a much more spacious sound; unless of course you've gone and bought a satellite system with only one woofer. That's why I'm not too fond of systems that rely on a single sub-bass woofer; they're fine if you use two and put them in different places in the room, but if you only use one, you lose the ability to hear random phase information on the record, which is put there for a reason. But if you want to spread the sound out at mid and high frequencies, then you really are limited to a narrow range of listening positions. If you can put up with these restrictions, then it's a fine technique and I think our CP1 product does it very well. We offer it as a program in the 480 processor also.
"I'm not a big believer in this kind of processing for studio use, because it is so dependent on loudspeaker arrangement; I feel it should be left up to the listener and not encoded as part of the recording. If you put it on the recording, you're making a lot of assumptions about the speaker setup of the end user — and that's more assumptions than I'm comfortable with. And if you're not in the right place, the sound is degraded by the process — not by very much, but it is degraded nevertheless."
Now that LARES is complete and on the market, what direction is your work likely to take next?
"I don't think that the work on LARES is by any means complete and I would like to see it working as a much less expensive package; someday I'd like to see it available in the home. Right now my work is primarily in acoustic measurement, because I'd like to know how people hear acoustic reverberation, and I think I've made a lot of progress on that. If you look at the response of any binaural listening apparatus, including your own ears, to reflected energy from different directions, you find that it's a strong function of frequency. Below 700Hz, the side reflections matter the most, but above this, that is not true at all.
"At 1kHz, the side reflections are maybe 6dB down compared to the ones that are 45 degrees from the front. And as you go up in frequency, the peak of reverberant sensitivity moves more and more towards the front. Actually, 'front' is a bit of a misnomer; I really should have said the 'medial plane' — which means it moves towards the front, towards the top, and towards the rear. It is symmetrical about a line drawn through the ears. For this reason, you can tell the difference between reverberation created by strong reflections from the side as against reverberation which is diffuse — where the reflections are coming from all directions.
"Diffuse reverberation sounds better because all frequency bands have equal reverberant energy, whereas if it only comes from the sides, some bands will have reverberant energy and others will not. It is a complex and interesting issue, but it shows there is a reason that concert hall design should aim towards diffuse fields — at least at mid and upper frequencies."
I understand that you've been recording impulse responses in various concert halls and then analysing them. What have you learned from this?
"The theories I've been working on are a result of measuring and then trying to understand what you hear. It is possible now to make an impulse response measurement and then use that impulse to convolve with music and speech. That means you have a true representation of what that concert hall sounds like from that one point in space with music and speech. You can listen to that and band filter it, to listen to it as a function of frequency, and ask yourself, 'Does this hall sound spacious in this frequency band or does it sound spacious in that frequency band?' Then you can try to relate that to what you measure."
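Auralisation by convolution, as described here, amounts to a single convolve call. In this sketch both the impulse response and the programme material are synthetic stand-ins for a real hall measurement and a real recording:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 8000
t = np.arange(fs) / fs

# Stand-ins: a 'measured' hall impulse response (synthetic decaying
# noise) and a short burst of 'music' (a 440Hz tone). Convolving them
# auralises what that hall would do to the signal, as heard from the
# one point in space where the response was measured.
impulse_response = rng.standard_normal(fs) * np.exp(-4.0 * t)
music = np.sin(2 * np.pi * 440 * np.arange(fs // 4) / fs)

auralised = np.convolve(music, impulse_response)
```

Band-filtering `auralised` and listening to each band separately is then the per-frequency spaciousness comparison the interview describes.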
Is the logical conclusion of this work a reverberation processor that is able to sample and recreate the characteristics of real rooms and halls?
"You could do that, but I'm more interested in the other end of the problem; I'm interested in looking at real acoustic spaces to find out which of their attributes are most desirable. Hopefully, we can take the best parts of several halls and combine them into one that we can implement with LARES."
Interview by Paul White