Katz's Corner Episode 23: A Big Step Forward

I was planning on presenting the next episode in my "How Insensitive" series, but that will have to wait till next time, because a wonderful new toy has just arrived straight from the technical geniuses in Hong Kong. Today we're going to introduce a powerful and affordable new investigative tool: the MiniDSP EARS headphone jig, costing $179 USD. MiniDSP has been manufacturing audio-related DSP products since 2009. Led by charismatic Frenchman Tony Rouget, MiniDSP has made innovative and affordable audio amplifiers, digital equalizers, DACs, calibrated measurement microphones, and measurement tools.

I've got two of their equalizers working, one in Studio A, my Mastering Room, as a backup for my Acourate EQ/crossover, and one in Studio B as a monitoring EQ for the mixing room. Professionals have shied away from monitor EQ for a long time, and for good reason. Over the decades EQ has been more abused than properly used, but measurement techniques have come out of the dark ages, so with proper knowledge accompanied by proper acoustics, an EQ can be the final tweak on a precision monitoring room.


Fig. 1 and Fig. 1A: MiniDSP EARS calibrated test jig for headphone measurement

Likewise, it should be possible to equalize headphones to a target response, and to that end MiniDSP has produced the EARS headphone measuring jig shown above, along with a new companion DSP-based headphone amplifier, the HA-DSP ($325 USD). In upcoming Katz's Corners I will describe EARS in detail: its strengths, its weaknesses, and its intended use. Today I've put it straight to work, helping to generate even better and more precise corrections for my Stax 007 Mk2 and my Audeze LCD-4 headphones. To paraphrase the folks over in Hong Kong, we're living in exciting times when a headphone measurement jig can be this affordable.

Bob's First EARS-aided Headphone EQs
We're not in Headphone Nerd Nirvana yet—we're still far from being able to produce an accurate target response curve to use with EARS measurements, for reasons that I'll go over here and in future episodes. But in this episode we will produce some EQs that sound very, very good. So first it's time for some number-crunching, followed by (hopefully) some pleasurable listening! I used the EARS jig to measure my two great reference headphones, the Audeze LCD-4 and the Stax 007 Mk2. Since both of these headphones are already very linear, a simple EQ shape should hopefully succeed.

Before getting the EARS, the EQs that I produced by ear and reported on these past months already sounded very good. Let's see if measurements will allow me to make even better ones. These measurements are corrected only with my raw capsule responses, provided specially for me by the folks at MiniDSP; they do not include any other kind of compensation curve. So the measurements should look "funny": roughly what we would expect from concha, pinna, and to some degree ear-canal response. And they do.


Fig. 2: MiniDSP EARS measurements, microphone response correction only: Audeze LCD-4 (red), Stax 007 Mk2 (turquoise). Average of 5 positions x 2 ears. Smoothed 1/12 octave.
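For the curious, the averaging and fractional-octave smoothing described in the caption can be sketched in a few lines. This is a hedged illustration with made-up noisy traces, not the EARS or REW internals; the function names are mine:

```python
import numpy as np

def average_measurements(mags_db):
    """Average several dB magnitude traces, e.g. 5 positions x 2 ears."""
    return np.mean(np.asarray(mags_db), axis=0)

def fractional_octave_smooth(freqs, mag_db, fraction=12):
    """Smooth a magnitude response over a 1/fraction-octave sliding window."""
    smoothed = np.empty_like(mag_db)
    for i, f in enumerate(freqs):
        lo, hi = f * 2 ** (-0.5 / fraction), f * 2 ** (0.5 / fraction)
        window = (freqs >= lo) & (freqs <= hi)
        smoothed[i] = mag_db[window].mean()
    return smoothed

# Ten noisy traces of a nominally flat 0dB response collapse toward 0dB.
np.random.seed(0)
freqs = np.logspace(np.log10(20), np.log10(20000), 512)
traces = [np.random.normal(0.0, 1.0, freqs.size) for _ in range(10)]
avg = average_measurements(traces)
smooth = fractional_octave_smooth(freqs, avg, fraction=12)
```

Averaging suppresses placement-to-placement variation, while the smoothing step discards fine ripple the ear probably doesn't judge tonally.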

Having two excellent open-backed planar headphones is a real asset. This allows me to compare what I already hear against what I can measure, and to examine common characteristics of the two phones. There is a lot of correlation between what I see and what I have heard. For example in Fig. 2 above, below 60Hz we observe that the Stax response is 2 to 2-1/2dB below the Audeze—which confirms my thoughts that the Stax is a bit deficient in the bass. Since the Audeze has excellent bottom end, it makes a good reference to shoot for. The Stax EQ I made by ear already has a bass boost, but now I can refine its shape since the EARS indicates the frequencies I need to work at and how much to boost to bring it in line with the Audeze.

The two headphones have very similar, nearly flat response from 70-500Hz, which speaks to the quality of both planar drivers. Above 500Hz, the Stax response is more irregular, with +/-3dB swings compared to only +/-1dB swings from the Audeze until about 4kHz. The Stax dip centered around 2.5kHz, where the Audeze is quite flat, clues me in to listen for presence issues, and I decided to implement a bit of compensation for that dip. But I left the other Stax irregularities alone for now.

We should expect a concha-based resonance in the upper midrange, but the narrow-band goose in the Stax measurement circa 4.2kHz is troubling. I had not noticed it in previous listening, but recently, after listening to a wide variety of "less pure" recordings on Tidal Masters, I suspect this 4.2kHz goose is real and needs to be tamed. There is an analogous rise in the Audeze, but with much wider Q and not as strong. I've decided for the moment to ignore this feature, but to watch out for it, as it may be excited by certain pop material with strong information in the 4.5kHz range, especially female vocals, especially Linda Ronstadt!

I think the dip in the Audeze circa 6.5kHz is a product of the EARS concha and will eventually make it into the EARS target response, as it resembles a dip measured by Tyll on the HATS, but at a slightly different frequency, as we will see in Figure 5 in a moment. I hesitate to EQ this range up, as it could ruin the warmth of the Audeze. From 10 to about 15kHz the Stax "air" exceeds the level of the Audeze, and since I already know the Stax is a bit too bright in this region, I decided on a little dip here. From 15kHz up the two cans are quite similar.

I have another pair of Stax pro headphones, made from pro elements placed in an SR-5 black and gold headshell. So I will measure them at another time to see whether the narrow 4kHz anomaly appears on that can.

Here are the "semi-automatic" EARS-aided EQs that I came up with, implemented in Equilibrium (Figures 3 and 4). Equilibrium is a very powerful plugin equalizer with many finely adjustable shapes that allow me to create complements to the measurements.


Fig. 3: Stax "semi-automatic" EQ produced by examination of MiniDSP EARS measurement and integrating previous listening experience

Take a look at the Stax bass EQ, for example: with just one band in Equilibrium I was able to counteract the headphone's weaknesses, complementing both the slightly elevated response circa 60Hz and the little dip circa 40Hz. Ordinary equalizers would have required three bands to implement that complex shape. To refine the shape, I used a technique: start with a dip, try to match the shape of the measurement, and then convert that dip to a boost. Next, the Stax did not sound quite warm enough to me (I'm a little hooked on the Audeze), so I added a bit of sugar with a 0.2dB boost at 250Hz.

Equilibrium provides a Butterworth shape, which looks like a flat top and proved the perfect complement to the 2.5kHz issue. On listening to my reference recording of Lindsey Webster, the new 2.5kHz boost restored the missing presence and matched the Stax's midrange to the more open quality of the Audeze, which I had deemed more correct. After some more careful listening, I reduced that 2.5kHz boost from +1dB to +0.5dB, as it better matched the Audeze and the loudspeakers. Lastly, a little rolloff from 12 to 20kHz brought the slightly "Hi-Fi" tizzy cymbals back in line. While listening, I reduced the strength of the 2.5kHz and 20kHz corrections compared to the EARS predictions, and I feel that my new EARS-aided Stax EQ sounds considerably better than what I was previously able to engineer solely by ear.


Fig. 4: LCD-4 "semi-automatic" EQ produced by examination of MiniDSP EARS measurement and previous listening experience

Long ago I concluded that the LCD-4 is an excellent headphone which really only needs some correction at extreme high frequencies to sound even better, so I decided to use only one EQ band. Figure 4, the LCD-4 semi-automatic EQ that I came up with after examining its EARS measurements, uses just a single high shelving boost in Equilibrium. Notice that the EQ I ended up with has only 1.2dB of boost by the time it reaches about 12kHz, although the EARS measurement tells us that we might need 6dB to match the Stax. I started listening with 6dB, which sounded very wrong. Even a 2dB HF boost sounded just wrong to me with this headphone: it thinned out the critical lower midrange by the yin-and-yang effect and made vocal fundamentals sound too thin. This reinforces that EQ'ing headphones is both an art and a science, and that the magnitudes predicted by our measurements are probably far more than we actually need.

Furthermore, I decided to make this band linear phase because when listening, I could hear the phase shift of a minimum phase EQ which was bringing the cymbals artificially closer to the ear, destroying the depth. An FIR equalizer like Equilibrium can make any or all bands be linear phase if we wish. Changing the high end to linear phase preserved the depth and in this case sounded more natural to me.

These two new EQs bring the best characteristics of the Stax over to the Audeze and vice versa. The transient impact of the Audeze, which was already great, has been improved: snare drums pop, and the sound is more lively and real. In the Stax, the improved 2.5kHz presence sounds quite right; I'm very glad that the EARS measurement clued me into that issue. The combination of improved bass and treble makes the Stax an even sweeter headphone, yet it retains its electrostatic clarity and speed. It's now impossible to casually identify which is the electrostatic and which the planar-magnetic headphone, except by the physical weight of the Audeze.

Stax Polarity
I also discovered, with the aid of EARS, that the Stax's polarity was inverted, at least through the Mjolnir KGSS amplifier. In this comparison of the LCD-4 and 007 step responses (Fig. 2A), you can see that the LCD-4 pulse is upward-going while the 007's is downward. My ears are not particularly sensitive to absolute polarity, but I certainly want to reproduce it correctly. I could perform the polarity inversion in JRiver or Acourate Convolver, or better, by rewiring the Mjolnir amplifier, so I took it to the bench and reversed the polarity coming into the attenuator, a simple operation that does not even require soldering. Perhaps this will improve the impact of bass drum, bass, and the depth of instruments—but it will be subtle. We cannot say whether the cause of the problem is the Mjolnir amplifier or whether Stax headphones themselves have been inverted for all time! I'll investigate the amplifier itself further. Let's see what Spritzer has to say about this as well.


Fig. 2A: MiniDSP EARS measurements, step response. Audeze LCD-4 through Audeze Deckard amplifier (dark color), Stax 007 Mk2 through Mjolnir KGSS Carbon amplifier (pink). Notice that the 007's polarity is incorrect.
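For readers following along digitally: the software polarity flip (the JRiver/Acourate Convolver option mentioned above) is nothing more than negating every sample, which turns an upward-going step response into a downward-going one. A minimal sketch:

```python
import numpy as np

def invert_polarity(samples):
    """Flip absolute polarity: negate every sample, so an upward-going
    step response becomes downward-going (and vice versa)."""
    return -np.asarray(samples)

step = np.array([0.0, 1.0, 1.0, 1.0])   # idealized upward-going step
flipped = invert_polarity(step)          # now downward-going
```

Applying the inversion twice returns the original signal, which is why the fix can live harmlessly anywhere in the digital chain.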

Now On to Tyll
Here's a comparison of Tyll's measurements of my own LCD-4 (the same serial number) with his very expensive HATS jig versus my $179 EARS (Fig. 5). The bass ranges are very similar, within about 1dB. Tyll's jig definitely shows greater excursions and resonances than the EARS from 400Hz on up. Another interesting difference is that many of the HATS ear-shape-related peaks and dips appear shifted a kHz or more upward in frequency on the EARS. Still, if the EARS manufacturing QC proves consistent (we have yet to check on this), then this may become a valuable measurement jig on its own merits.


Fig. 5: LCD-4 measured by Tyll with HATS (turquoise) and by Bob Katz with MiniDSP EARS (red). Average of 5 positions x 2 ears, 1/12 octave smoothing. Each measurement is corrected using microphone capsule information furnished by the manufacturer.

Tyll has generated his own first stab at a headphone target response, derived mathematically from measurements produced by his HATS in Harman's calibrated listening room: he measured Harman's flat-response loudspeakers with the HATS. Read Tyll's blog about it here. He hasn't been able to listen to this proposed target, so my job today is to create a filter that makes my headphones conform to his target. I'd also like to compare it with my own semi-automatic EARS EQ. Since that blog post, Tyll has done a bit of manipulation and set his target to flat from 200Hz on down. I think this is a good idea, or for bass freaks, a tetch of bass boost that could easily be added with a user-controlled Baxandall EQ on top of the headphone EQ.

Here are the steps I went through to create filters for the LCD-4 that we can easily A/B compare:

For Tyll's HATS EQ:
1) Smooth and average Tyll's LCD-4 HATS measurements, averaging the L&R channels to a single measurement. Performed in Room EQ Wizard (REW), Fig. 6. I think the ears react to tonality on a wider basis than the fine features we can measure, but the degree of smoothing needed is a matter of opinion. I'm currently using 1/12th-octave smoothing "to be safe," but my experience suggests that 1/6 octave is probably not overdoing it.

2) Calculate an EQ correction. Start with Tyll's target, which is the response he would expect a perfect headphone to measure on the HATS (orange trace in Fig. 6). Attenuate Tyll's LCD-4 measurement to yield 0dB center in the bass range (fat green trace in Fig. 6). Divide Tyll's target by Tyll's LCD-4 HATS measurement (trace arithmetic). This yields Tyll's computed LCD-4 EQ (red curve in Fig. 6). Performed in REW.
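The "trace arithmetic" in step 2 is simpler than it sounds: dividing two linear magnitude responses is the same as subtracting their dB values. A minimal sketch with made-up values (the real traces have thousands of bins, and REW does this internally):

```python
import numpy as np

# Hypothetical dB magnitudes at matching frequency bins.
target_db = np.array([0.0, 0.0, 2.0, 4.0])      # desired response
measured_db = np.array([0.0, -1.0, 5.0, 4.0])   # headphone on the jig

# Division of linear magnitudes == subtraction of dB values.
eq_db = target_db - measured_db                  # correction to apply
# Boost where the phone falls short of target, cut where it overshoots.
```

Adding `eq_db` back to `measured_db` reproduces the target exactly, which is the whole point of the exercise.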


Fig. 6: Tyll LCD-4 measurement (fat green trace). Tyll target curve (orange trace). Tyll's calculated EQ (red trace). Measurements are average of both ears x 5 positions.

The "error" inherent in the LCD-4 would be the inversion of this EQ (red trace), that is, how far off the measurement is from the ideal response. The goal of the EQ should be an accurate-sounding headphone, we hope! Notice that this EQ is quite a complex shape that is impossible to replicate exactly in any standard equalizer. The only way to accurately produce such a complex EQ is by convolution, which effectively has an infinite number of bands and can replicate any shape. In fact, Tyll's complex EQ would not have been possible to implement without the convolution technology that I am using. Convolution is an available option in JRiver Media Center and in certain specialized software processors like Acourate Convolver.
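Why can convolution replicate any shape? Because the EQ is stored as an FIR impulse response and the audio is simply convolved with it: every tap is, in effect, a free parameter. A hedged sketch using scipy (the identity filter here is a stand-in, not any real correction file):

```python
import numpy as np
from scipy.signal import fftconvolve

def apply_fir_eq(audio, impulse_response):
    """Convolve audio with an FIR EQ impulse response. An FIR of N taps
    can realize any magnitude/phase shape that fits in N samples, which
    is why convolvers have no fixed 'band count'."""
    return fftconvolve(audio, impulse_response, mode="full")[: len(audio)]

# An identity filter (a single Dirac pulse) leaves the audio untouched.
np.random.seed(0)
dirac = np.zeros(64)
dirac[0] = 1.0
audio = np.random.randn(1024)
out = apply_fir_eq(audio, dirac)
```

A real correction filter would simply replace `dirac` with the impulse response exported from Acourate or captured from a plugin.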

Acourate is a powerful analysis tool that can do just about everything REW can do, plus much more. Acourate can import Tyll's calculated EQ as a text file and convert it to an impulse response that can be used in a convolver, so we can play this EQ in Acourate Convolver or the convolver built into JRiver Media Center. Dave Gamble, developer of Equilibrium, is considering adding the ability to import an impulse response and turn it into an EQ setting, which would be a great challenge. Perhaps Equilibrium's 32 bands would be enough to approximate the EQ.

Convolution technology has only come of age with today's faster, multi-core computers. Some dedicated DSP chips (Analog Devices SHARC, for example) are fast enough to perform a small degree of FIR convolution, but you really need a multicore Intel chip to perform multichannel convolution with an FFT frame large enough to yield accurate low-frequency response without ripple. A six-core Intel i7 is fast enough to let me run 12 audio channels in Acourate Convolver at 192kHz sampling with a ridiculously large 131,072-sample impulse length. This stresses the computer to the point where it's only practical to run a couple of applications at once. I need 12 channels to simultaneously compute 5.1 surround with a two-way crossover to the loudspeakers plus two different headphone EQs, and I use a 16-channel Lynx AES/EBU interface to feed all of these destinations.

3) Export the predicted EQ as a text file from REW. Import this EQ into Acourate, convert it to a windowed impulse response, and then into a filter that can be played in JRiver and/or Acourate Convolver. Actually, six different filters, one for each of the popular sample rates, since a good convolver should operate without any sample rate conversion.
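The steps above can be sketched in code. Acourate's internals are proprietary, so this is only an illustrative stand-in: scipy's frequency-sampling designer turning a (frequency, gain) text export into a windowed linear-phase FIR, one filter per playback rate. The shelf values are hypothetical:

```python
import numpy as np
from scipy.signal import firwin2

def eq_text_to_fir(freqs_hz, gains_db, fs, numtaps=4095):
    """Turn an exported (frequency, gain-in-dB) EQ curve into a windowed
    linear-phase FIR at sample rate fs. One filter per playback rate means
    the convolver never needs to resample the audio."""
    gains_lin = 10.0 ** (np.asarray(gains_db) / 20.0)
    # firwin2 wants the grid normalized to Nyquist, anchored at 0 and 1.
    f = np.concatenate(([0.0], np.asarray(freqs_hz) / (fs / 2), [1.0]))
    g = np.concatenate(([gains_lin[0]], gains_lin, [gains_lin[-1]]))
    return firwin2(numtaps, f, g)  # Hamming-windowed by default

# Hypothetical mild treble rolloff, rendered at each common sample rate.
freqs, gains = [100, 1000, 8000, 16000], [0.0, 0.0, -1.0, -2.0]
filters = {fs: eq_text_to_fir(freqs, gains, fs)
           for fs in (44100, 48000, 88200, 96000, 176400, 192000)}
```

Note the odd tap count: a linear-phase FIR with an even number of taps is forced to zero gain at Nyquist, which a treble curve like this would violate.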

For Bob's EARS EQ:
Convert Bob's semi-automatic Equilibrium "EARS EQ" into an impulse response that can also be used in Acourate Convolver. This is performed by playing a Dirac pulse through Equilibrium, capturing that pulse to a wav file, then windowing it into a filter in Acourate. In the future, Dave Gamble, author of Equilibrium, will allow direct export of Equilibrium's impulse response to a WAV file which could be used in a Convolver.
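The Dirac-pulse trick works because, for any linear time-invariant processor, the output produced by a unit impulse is by definition the processor's impulse response. A sketch with a toy filter standing in for Equilibrium (the real capture happens through a wav file, as described above):

```python
import numpy as np
from scipy.signal import fftconvolve, lfilter

def capture_impulse_response(process, length=8192):
    """Feed a unit impulse (Dirac pulse) through a linear processor and
    record the output: for an LTI system, that output IS its impulse
    response, ready to be windowed into a convolver filter."""
    dirac = np.zeros(length)
    dirac[0] = 1.0
    return process(dirac)

# Stand-in for the plugin: a simple one-pole low-pass (hypothetical EQ).
def toy_eq(x):
    return lfilter([0.5], [1.0, -0.5], x)

np.random.seed(0)
ir = capture_impulse_response(toy_eq)

# Convolving audio with the captured IR reproduces the plugin's effect.
audio = np.random.randn(1000)
direct = toy_eq(audio)
via_ir = fftconvolve(audio, ir)[: len(audio)]
```

The capture length just needs to be long enough that the filter's tail has decayed to silence before truncation.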

The Proof is in the Listening
Now, finally, we can listen and compare Bob's EARS-based semi-automatic EQ to Tyll's HATS-based correction! The difficulty in comparing two such disparate EQs is loudness and headroom. My EARS EQ needs only 1.2dB of digital attenuation to prevent any potential overload, and probably much less, since no acoustic music carries full-scale level above 10kHz; but to be safe, let's say 1.2dB of attenuation. However, Tyll's EQ has up to 15dB of boost, so to be safe I'll attenuate it digitally by 15dB. It's a floating-point correction file and will be dithered to 24 bits on the way out of either JRiver or Acourate Convolver. And I'll need to attenuate the EARS EQ until its perceived loudness matches that of Tyll's EQ.

OK, finally, I got to listen, and to be honest, I didn't need more than one piece of music to realize that there is something very wrong with Tyll's EQ. It sounds harsh and very thin, with far too much 3.5kHz, and it's far too bright. I tried my own variation on Sean Olive's Harman curve a while back, and it did have 12 or 13dB of boost circa 4kHz. It sounded a bit wrong, but not nearly as strident as Tyll's version. I'm not sure of the reasons, but the inflection points are very critical, and any math or measurement error can send this type of curve very wrong. For example, even one dB of error along the desired 3.5kHz rise can make the difference between accurate and harsh. So, it's back to the drawing board, guys.

Back to the Drawing Board?
Yes, we need to derive Nirvana, a headphone Target Curve, but I think it will have to be done with art, science, sweat, toil, and tears. As we saw from my experience with the EARS, the magnitudes predicted by measurement, especially above 200Hz, seem to be much stronger than our ear/brain demands. It's wrong to expect simple frequency response measurements to translate the response of distant loudspeakers in a room into that of transducers located inches from the eardrums.

Instead we need to interpret frequency response based on the ear's perception that transients in an earphone are far louder than they are in a room with loudspeakers. Increased transients make a sound louder and brighter. We'll have to weight the response measurements differently: neither free field nor far field response adequately predict how we react to headphones. Researchers should not be surprised at this conclusion, as we already know that nearfield loudspeakers need a different EQ curve than mid- and far-field.

We should praise Tyll for his amazing effort—there are a lot of things to conquer and it ain't easy, folks. I suggest that we go back to the drawing board and use a different approach to measure calibrated loudspeakers with the HATS. I invite Tyll to take HATS down to Florida. Tyll, we've got a spare bed for you, no problem! Here we will remeasure using a time-based FFT approach, find the frequencies and the amplitudes, and even so, expect to need far less correction for headphones than what the in-room measurement predicts, whether or not we perform anechoic windowing. To repeat: the psychoacoustics of a set of transducers located inches from the eardrum exaggerate transients and thus perceived high frequency response. The science has not progressed to the level of the art, so there will be a lot of human judgment involved.

In fact, Tyll has attempted to conquer a very thorny problem: measuring the response of a pair of calibrated loudspeakers using a dummy head. Tyll's method simply measured the level of a set of sine wave frequencies, which would exacerbate modal effects in the room, especially at low frequencies. Furthermore, his method is unable to separate the loudspeaker response from the room response, although the ear/brain is able to separate these at frequencies above the bass region. The ear/brain integrates the room with the loudspeakers at low frequencies, but listens almost anechoically at high frequencies, almost exclusively to the direct sound from the transducers. This means there is a discrepancy between ordinary measurements and the actual psychoacoustics of loudspeaker listening. I believe this quandary has only been solved by psychoacousticians such as Jim Johnston and designers such as Uli Brueggemann, inventor of Acourate. This is why I chose Acourate software to perform my room correction.

Traditional power response measurements with pink noise are out, because they do not separate the room from the loudspeaker. I think we do need to judge the frequency response above about 200Hz in a near-anechoic manner, especially since the headphone experience will be near-anechoic. Furthermore, we should only play the left or right speaker or we'd get comb filtering. The next issue is that a dummy head has two ears separated by the body of the head, so sound from the left speaker arrives at the right ear attenuated, delayed and colored. How do we integrate the perceived binaural frequency response of the two ears?

With Acourate, I can sum the impulse response of the left ear with the delayed response from the right ear in the time domain, but we really don't know how the brain judges the frequency response of these combined signals. Does the brain judge frequency response in mono? Is it legitimate to perform a 100% time-domain sum of left and right eardrum signals for our purposes? And then how do we reconcile this with the 100% channel separation of a pair of headphones? I suspect that the perceived sum of left and right ears will be less bright than Tyll's single-ear measurement produced. I asked psychoacoustician Jim Johnston about all these conundrums; he replied that this is a very hard problem for which he does not have an immediate answer! Back to the drawing board? How about "back to the psychoacoustic laboratory!"
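The time-domain sum I describe can be sketched as follows. The interaural delay and any weighting are exactly the open questions, so the numbers here are placeholders, not a proposed answer:

```python
import numpy as np

def binaural_sum(left_ir, right_ir, fs, interaural_delay_ms=0.25):
    """Sum the left-ear impulse response with the right-ear response
    delayed by a (hypothetical) interaural delay, in the time domain.
    Whether the brain judges timbre from such a mono sum is precisely
    the unresolved psychoacoustic question."""
    delay_samples = int(round(interaural_delay_ms * 1e-3 * fs))
    n = max(len(left_ir), len(right_ir) + delay_samples)
    out = np.zeros(n)
    out[: len(left_ir)] += left_ir
    out[delay_samples : delay_samples + len(right_ir)] += right_ir
    return out

fs = 48000
left = np.zeros(256)
left[0] = 1.0    # toy left-ear IR: direct arrival
right = np.zeros(256)
right[0] = 0.7   # toy right-ear IR: head-shadowed, quieter
combined = binaural_sum(left, right, fs)
```

Even this toy version shows the comb-filtering hazard: summing a delayed copy carves periodic notches into the combined frequency response.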

Simplifying The Process
Life would be a lot simpler if we could create a headphone EQ in a standard equalizer; then we could export biquad coefficients directly to an outboard digital equalizer inserted into our system via SPDIF. Many outboard equalizer brands are supported directly within REW, some models having up to 20 filter bands. It is possible to automatically create an EQ in REW that can be exported to a digital equalizer, but only with a simple target designed for loudspeakers, not the complex shape that a headphone EQ would require. I can manually dial my simple Equilibrium settings into REW, but a more complex EQ such as Tyll's could not be replicated with only 10 bands, and I'm not sure even 20 bands are sufficient. So for the moment a convolver is our best solution, and JRiver is quite an affordable and powerful media player at $49.98. Fans of Tidal streaming can play Tidal's master stream with these EQs with a little work: I play Tidal's master MQA stream using Tidal's desktop app, at 44.1, 48, 88.2 or 96kHz from a Motu interface on a Mac, and feed this via SPDIF into another interface on a PC where I can apply a headphone or loudspeaker EQ in Acourate Convolver. There is reportedly a way to play Tidal's app through WDM into JRiver, but I have not explored it.
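For anyone curious about the biquad route, the coefficients such outboard equalizers run are the well-known peaking-filter formulas from Robert Bristow-Johnson's Audio EQ Cookbook. A sketch, using my Stax +0.5dB/2.5kHz presence band purely as an example (the Q value is a placeholder):

```python
import cmath
import math

def peaking_biquad(fs, f0, gain_db, q):
    """RBJ 'Audio EQ Cookbook' peaking filter. Returns normalized
    coefficients (b0, b1, b2, a1, a2) for one biquad slot of a digital EQ."""
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)

# A +0.5dB presence band at 2.5kHz, rendered for a 48kHz equalizer:
b0, b1, b2, a1, a2 = peaking_biquad(48000, 2500.0, 0.5, 1.0)
coeffs = (b0, b1, b2, a1, a2)

# Sanity check: evaluate the filter's gain right at f0; it should be +0.5dB.
z = cmath.exp(1j * 2.0 * math.pi * 2500.0 / 48000.0)
h = (b0 + b1 / z + b2 / z ** 2) / (1.0 + a1 / z + a2 / z ** 2)
gain_at_f0 = 20.0 * math.log10(abs(h))
```

The catch the text describes is clear from the math: each biquad contributes one smooth bump, so a 10- or 20-band unit can never trace a jagged measurement-derived curve the way a convolver can.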

Acourate Convolver costs 126 euro and can connect to JRiver or external sources. The catch is that AC only accepts its proprietary file format, which has to be exported from Acourate software, which costs 286 euro. That's an excellent price for a full-featured analysis program, but hard for hobbyists to justify. I'm happy to provide my readers wav filter files suitable for JRiver's convolver and AC files suitable for Acourate Convolver at no cost, but only for the headphones that I have already equalized.

Audeze has created a VST plugin called Reveal, which simplifies listening with a correction filter, but only for Audeze headphones. I'll be reviewing Reveal in the near future. Sonarworks purports to have commercialized and simplified the correction process, and they provide a headphone measuring service. I've been resisting Sonarworks for a long time on general principles, but no doubt at some time in the future I'll evaluate their approach, after I succeed in learning how to do it well myself, and showing you all how to do it, too!

(Editor's Note: Ha! Heck of a can of worms you've opened up, Bob. Love that you've decided to dig into it.

I'll be receiving a MiniDSP headphone test jig in the near future and intend to start my investigation by simply measuring a number of headphones and comparing the measurements with my HATS. I'm thinking the first step for me is to see if the differences between those measurements are constant or if they change with various headphones. That should be interesting in and of itself.

Thanks for the invite, Bob, sure would like to take you up on it, we'll have to see if budget and/or time allows. In truth I think my time is best spent reviewing headphones in the way I always have. But I sure would like to find a target response curve that will be useful with the upcoming on-line headphone measurement and comparison tool. It will have the capability to use a variety of compensation curves so even if we don't get it right at the start we can always add new curves later.

To some extent, I think feedback from InnerFidelity readers will be the most important input in developing a target response. I intend to use my preliminary curve and then allow readers to comment on where they think the curve needs attention. It seems to me that it really is the attentive subjective listening experience of many people that will allow useful adjustments as opposed to some objective method for deriving a target response. We'll see.

Regardless of its absolute accuracy, I think the appearance of this affordable gadget will be a boon to the community as it will allow enthusiasts to compare headphone measurements in an apples-to-apples manner. I encourage readers who have an interest to follow along here as Bob and I play with this tool, and I'd also like to point out this terrific thread at superbestaudiofriends.org where some very experienced folks are beginning to compare the device with their personal measurement rigs. Great stuff!)

detlev24's picture

I am glad to read you are happy. :)

The following is not in relation to your input directly. However, the blog posts are recent and might as well be of interest to others who use digital filters:

1) https://archimago.blogspot.com/2017/12/howto-musings-playing-with-digita...
2) https://archimago.blogspot.com/2018/01/musings-more-fun-with-digital-fil...
3) https://archimago.blogspot.com/2018/01/audiophile-myth-260-detestable-di...

Enjoy the music!

castleofargh's picture

just to say that one doesn't have to get Jriver just to be able to use a convolver. at least I can talk for Windows users, we can add one in foobar(even a stereo convolver if needed for more fun). equalizer APO also has something integrated, I don't use it but I remember reading an update talking about the various convolution settings available.
or people can "simply" use a virtual cable to run through a VST host and include whatever VST doing that.
same for the capture of an impulse, I really can't talk about the quality or manipulations/editions available, but there are a bunch of solutions out there too(even a free one I believe, but no idea what it allows to do, as I never tried).
also REW can import/export impulses and generate one from a measurement so there is that. but of course as it's REW the highest sample rate for the measurement itself is 48khz. I don't care, but I imagine some might.

so yeah plenty of ways to have fun for the price of the EARS, or at least less than your solution, which rapidly adds up to a lot for a guy like myself having a hobby within a hobby. :)
the dark spot remains to determine a reference signature of our choice. that takes us back to pretty much zero(at least in the beginning)with us playing around using an EQ. I'm not sure there really is a way around that aside from using in ear microphones. because as good as a standard curve might turn out to be, my ears aren't the ones stuck on the EARS ^_^.

DonGateley's picture

I think you guys are chasing a chimera by trying to find and realize an "ieal" curve. For one thing it has no phase information. My own work with tiny Knowles microphones mounted via an acoustically transparent insert such that they are within approximately a millimeter of my eardrum indicates that not only is phase important but that every little thing that can change (such as a slightly misplaced microphone) pretty radically changes the measurements of the same source. I don't think that what arrives at the eardrum of any two people from the same source is similar enough that an absolute standard can be defined.

What can be done with a good IR measurement system such as you guys have and such as I had is to create ratios. One can only manipulate one impulse response measurement to agree with another one if they are both measured with the same rig. It is simple DSP to compute a convolution kernel that transforms the IR measurement of one 'phone on a rig to the IR of another phone on the same rig.

My own experiments with this involving a handful of volunteers is that such a transformation determined by any "close to realistic" canal/pinnae rig allows for emulation of one 'phone by another to more than adequate accuracy. By "close to realistic" I mean that the acoustic impedance at the measurement point of the rig is more than likely to reasonably match at least one person with normal hearing on the planet. If the rig does not have such an impedance nailed it is not terribly useful. If it does, however, the brain seems to adapt via its own DSP such that listeners will readily accept that phone A and phone B sound good 'nuff alike that only comfort separates them.

I think you should be attempting to determine among you the absolutely best sounding, most realistic, etc., 'phone that you can find and use that as your ideal. Do a tournament.

Measuring that phone, call it A, on any of our three rigs we can then derive a convolution kernel that makes 'phone B sound like that 'phone A persuasively for most people if our rigs present a realistic characteristic acoustic impedance at the microphone. I know mine does because it's a real human ear measured at the point where the sound most matters.

Or one can use a rig like ours to measure some reference speaker in some reference mastering environment (Bob?) as the ideal because that's where the source material would be tweaked to "perfection" by a real human. Doing this has other advantages such as being able to measure both the direct and the cross IR's for each speaker so as to present the listener (with 4 convolutions) with the same mastering acoustic space. He would then hear it as the mastering engineer intended which I consider the holy grail.

I know this all works because I've done it.

Bob Cain (posting as Don Gateley)

DonGateley's picture

'ideal', not 'ieal'

Pokemonn's picture

Eye opening article for me....
Thank you very much Bob!
