Reply To: Debunking the Digital Audio Myth: The Truth About the ‘Stair-Step’ Effect

June 5, 2023 at 4:08 pm #5842

Here’s another issue with quantization.

Only at 9:11 do they first use the term “dithered”. “Properly” dithered quantization error will sound like noise. But if it isn’t dithered, the quantization error is correlated with the signal and can sound nasty indeed.
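To make that concrete, here’s a minimal numpy sketch of the two cases (the 1 kHz tone, its level, and the RNG seed are just placeholders for illustration):

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)   # a -6 dBFS 1 kHz tone

q = 2.0 ** -15                            # one 16-bit LSB (full scale = +/-1.0)

# Undithered: the rounding error is correlated with the signal, so it shows
# up as harmonic distortion, not noise.
y_hard = np.round(x / q) * q

# TPDF dither: the sum of two independent uniform variables, spanning +/-1 LSB
# total. The total error becomes signal-independent and just sounds like a
# quiet noise floor.
rng = np.random.default_rng(0)
d = (rng.uniform(-0.5, 0.5, x.size) + rng.uniform(-0.5, 0.5, x.size)) * q
y_dith = np.round((x + d) / q) * q
```

FFT the two error signals (y_hard - x and y_dith - x) and you’ll see it: the undithered error piles up on harmonics of the tone, while the dithered error is a flat noise floor.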

Again, decent technology, which includes a decent number of bits in the word, decent dithering (and noise shaping), and a decently high sample rate, makes all this nastiness go away.

I think that 44.1 kHz (and 16-bit “depth”) was always just a little low for audio, but I understand how it happened historically. And, with decent dithering in the mastering stage and decent oversampled DACs (them “1-bit” ΣΔ buggers), CD audio came out sounding really quite good.

And if I were archiving fully produced music for my (or anyone else’s) listening enjoyment, 16-bit, 44.1 kHz stereo is good enough (at least for me). Even FLAC the sonuvabitch if I’m running outa space on my hard drive.

But if I were Synthogy or somebody whose intellectual property consisted of really nicely recorded instrument samples, I wouldn’t be archiving them at 16-bit, 44.1 kHz.  I would record and archive them at a minimum of 96 kHz and 32-bit floats.
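For what it’s worth, writing that kind of archive is a one-liner in Python with the soundfile package (the filename and the silent placeholder data here are made up for illustration):

```python
import numpy as np
import soundfile as sf

fs = 96000
data = np.zeros((fs, 2), dtype=np.float32)   # stand-in for a real stereo recording

# subtype='FLOAT' writes an IEEE 32-bit float WAV: the archived words are
# never quantized to a fixed grid, and headroom above 0 dBFS survives.
sf.write('piano_C4_ff.wav', data, fs, subtype='FLOAT')
```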

And if I were one of you guyz, with tracks of audio that you’re processing and mixing and processing some more and mastering and doing all sorts of shit to, I would want that audio to be better than 16-bit, 44.1 kHz. For a high-quality musical product that will eventually be delivered at 16-bit, 44.1 kHz, I personally think you should be doing all of your recording and storage at 96 kHz with the best damn ADC you can get (which might be something like 24 bits). I would save those words as 32-bit floats or 64-bit doubles and do all of your manipulation of this audio at that higher sample rate with double-precision arithmetic. I presume that’s what DAWs are doing nowadaze, but I dunno specifically.
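Here’s a toy example of why the intermediate word length matters (the gains and the number of passes are arbitrary; the point is the accumulation):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-0.5, 0.5, 48000)        # stand-in for a recorded track

def to16(v):
    # re-quantize to the 16-bit grid, i.e. store this intermediate at 16 bits
    q = 2.0 ** -15
    return np.round(v / q) * q

y16 = to16(x)                            # bounced to 16 bits after every step
y64 = x.astype(np.float64)               # kept as 64-bit doubles throughout
for g in [1.01, 0.99] * 50:              # a hundred small gain changes
    y16 = to16(y16 * g)
    y64 = y64 * g

err = y16 - to16(y64)                    # same final word length, different path
print("accumulated re-quantization error (RMS):", np.sqrt(np.mean(err ** 2)))
```

The doubles path quantizes once at the end; the 16-bit path quantizes a hundred times, and those errors pile up.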

Then, only in the last mastering step, when you’re gonna press a CD, that’s when you first downsample to 44.1 kHz (while you still have 64-bit doubles), apply white triangular-p.d.f. (TPDF) dither from a really good random number generator, quantize to 16 bits, and noise-shape around that quantizer.
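That whole last step fits in a few lines. Here’s a minimal sketch (no clipping guard, and real mastering-grade noise shapers use higher-order, psychoacoustically weighted filters rather than the first-order error feedback shown here):

```python
import numpy as np
from scipy.signal import resample_poly

def master_to_cd(x96, rng=None):
    """96 kHz doubles in, 16-bit values on a 44.1 kHz grid out."""
    rng = rng or np.random.default_rng()
    x = resample_poly(x96, 147, 320)      # 96000 * 147/320 = 44100
    q = 2.0 ** -15                        # one 16-bit LSB, full scale = +/-1.0
    y = np.empty_like(x)
    e = 0.0                               # previous sample's total error
    for n in range(x.size):
        w = x[n] - e                      # error feedback: NTF = 1 - z^-1
        d = (rng.uniform(-0.5, 0.5) + rng.uniform(-0.5, 0.5)) * q  # TPDF dither
        y[n] = np.round((w + d) / q) * q  # the 16-bit quantizer
        e = y[n] - w                      # total error, pushed to the next sample
    return y
```

The error feedback is what “noise-shape around that quantizer” means: the total error (dither plus rounding) gets high-passed, pushed up toward Nyquist where our ears are less sensitive.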

The compression and limiting and final EQ in the mastering are what can separate good production from the not-so-good. And the noise shaping around the quantizer can have different characteristics that separate the good from the not-so-good. (That radical 60th-order noise shaping coming from Alexey Lukin is intriguing.)

But I wouldn’t be saying “8-bit is just as good as 16-bit” or “16-bit is just as good as 24-bit”.

And I wouldn’t say “44.1 kHz is just as good as 96 kHz.” When things are linear, maybe it is. But not all of the processing you guys do is linear. If you apply tube emulation, or even compression/limiting, or some kinda effects, you might like the elbow room you get doing all this in double-precision arithmetic at a 96 kHz sample rate. Then you just don’t friggin’ care about arithmetic rounding or aliasing.
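A quick way to see the nonlinear case (the 9 kHz tone and the tanh “tube” stand-in are arbitrary choices for illustration):

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs) / fs
x = 0.9 * np.sin(2 * np.pi * 9000 * t)   # a hot 9 kHz tone

# At 44.1 kHz: tanh generates odd harmonics at 27 kHz, 45 kHz, ... which sit
# above Nyquist and fold back into the audible band (27 kHz lands at 17.1 kHz).
y_naive = np.tanh(2.0 * x)

# At 4x the rate: the same harmonics now fall below the raised Nyquist, and
# the decimation filter removes them cleanly on the way back down.
x_os = resample_poly(x, 4, 1)
y_os = resample_poly(np.tanh(2.0 * x_os), 1, 4)
```

FFT both outputs and the aliased component shows up plainly in y_naive but not in y_os.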

Then, only in that last stage, when these numbers are flying out at you and getting turned into a voltage and a variation in sound pressure level, do you worry about downsampling, dithering, and noise shaping.