Bits, isn’t 20-bit good enough?

Robert Katz

From: “Arksun”

Hiya Bob, how's things?
I was wondering whether you could settle a long-running argument between myself and a producer friend of mine. My friend Andrew Longhurst believes that for A/D conversion there is absolutely no need to go beyond a 20-bit converter, because the noise level in an analog cable connection is high enough that a 24-bit converter would offer no sound improvement whatsoever.

Dear Laurence:

Things are great, very exciting, and very controversial. The future of the mastering industry keeps on shifting.

Anyway, to prove your point, you’d have to have an A/D converter with true 24-bit dynamic range, whose noise level really is that low. Most so-called “24-bit” A/Ds have only about 20-bit dynamic range in the first place! (Though arguably they may have resolution below that number.) And then you must conduct a controlled experiment. It’s possible that your friend is right, but I leave no stone unturned, and only a controlled listening experiment would settle the issue. In theory your friend may be right, but it has to be settled on the basis of masking theory, and masking theory demonstrates that you can hear signal quite far below the noise in certain frequency ranges. Maybe 21 bits, maybe 22… it’s hard to say, but I believe the answer is *marginally* greater than 20 in the case cited above. In the same vein, theory said that a 16-bit converter should be adequate to record an analog 1/2″ 2-track tape, since the tape’s noise floor is far higher than that of 16-bit digital; in practice, it takes a high-quality converter of at least 20 bits to do justice to an analog tape. So as far as I’m concerned, all bets are off until they have been proved by listening tests. That goes for all assertions of theory versus practice, by the way!
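For reference, the theoretical dynamic range of an ideal N-bit converter follows directly from the ratio of full scale to one quantization step, roughly 6.02 dB per bit. A quick sketch of the arithmetic (idealized figures; real converters fall short of them):

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an ideal N-bit quantizer:
    20 * log10(2^N), i.e. roughly 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

for bits in (16, 20, 24):
    print(f"{bits}-bit: {dynamic_range_db(bits):.1f} dB")
# 16-bit comes to about 96 dB, 20-bit about 120 dB, 24-bit about 144 dB,
# which is why a "24-bit" A/D with only ~20-bit dynamic range has a
# noise floor near -120 dBFS rather than -144 dBFS.
```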

I, on the other hand, am looking at it from the ‘real-wave’ representation point of view, and am arguing that it is precisely because of this further waveform complication induced by noise that the highest possible bit depth and sample rate are needed to ‘capture’ the maximum amount of detail in the analog signal being fed in, especially with a full-bandwidth complex mix.

Your statement may also be true. And the resolution between the two statements boils down directly to masking theory! I subscribe to the theory that there is inner detail in music or test signals which is audible below the noise floor of any system for a considerable distance. Of course, it is not an infinite distance; at some point it becomes academic, as your friend makes clear. For example, in digital calculations it has been shown that you can hear a 24-bit truncation below an 18-bit noise floor. Why? Because the distortion has not been masked by the 18-bit noise. So it boils down to how far below the noise floor theory #1 is correct versus how far below the noise floor theory #2 is correct! Quite simple, eh? The question is at what point the masking of the system noise covers up the distortion caused by the reduced resolution. This can only be settled by psychoacoustic means, and ultimately by listening. In the meantime, I suggest caution and conservatism; that is, “the more bits, the better, probably.”
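The truncation point can be illustrated numerically. The sketch below (my own illustration, not from the letter) quantizes a test tone to an 18-bit grid with and without TPDF dither. Plain truncation leaves an error that is biased and tracks the program material, which is why it is heard as distortion rather than being hidden by the noise floor; dithering before requantization turns the same error into benign, signal-independent noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1 << 15
t = np.arange(n)
signal = 0.5 * np.sin(2 * np.pi * 997 * t / 48000)  # 997 Hz tone at 48 kHz

q = 2.0 ** -17  # one LSB of an 18-bit quantizer spanning +/-1 full scale

# Plain truncation to the 18-bit grid, as in the letter's example.
truncated = np.floor(signal / q) * q
# TPDF (+/-1 LSB triangular) dither added before rounding to the same grid.
dither = (rng.random(n) + rng.random(n) - 1.0) * q
dithered = np.round((signal + dither) / q) * q

err_trunc = truncated - signal
err_dith = dithered - signal

# Truncation error is biased and deterministic in the signal (distortion);
# dithered error averages to zero and behaves like noise.
print(err_trunc.mean() / q)  # about -0.5 LSB
print(err_dith.mean() / q)   # about 0.0 LSB
```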

So far, each time I increase the wordlength precision of my own work, I find an audible improvement. There are currently a few 24-bit A/Ds which sound a pinch better to my ears, but I am not certain whether that is because they are 24-bit or simply because their circuitry is more accurate, more euphonic, more detailed, or lower in jitter; at that low level it becomes impossible to separate out the reasons for “better.”

And to repeat points I make elsewhere: this 20-bit question only comes up at the beginning and end of the chain, because when you start processing (DSP calculations), more bits are definitely better.
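A toy illustration of why intermediate DSP calculations want extra bits (a sketch of my own, with arbitrary numbers, not a description of any particular processor): truncating back to a short word after every operation lets rounding error compound across the chain, while carrying the intermediates at higher precision keeps the result essentially exact:

```python
q = 2.0 ** -16  # one step of a hypothetical 16-bit intermediate wordlength

x = 0.123456789
short_word = x   # re-truncated to the 16-bit grid after every operation
full_prec = x    # carried at double precision throughout

# 100 successive small gain changes, a stand-in for a chain of DSP steps.
for _ in range(100):
    short_word = (short_word * 0.98 // q) * q  # truncate after each step
    full_prec = full_prec * 0.98

# The truncation error compounds across the chain instead of staying
# within half a step of the ideal result.
print(abs(short_word - full_prec) / q)  # many steps' worth of error
```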

Hope this helps,

