December 17, 2017 Last edited: Jul 17, 2023
This article is part of my audiophiles series of articles
Audiophiles? More like audiofools. This article will debunk audiophile myths, beliefs, and claims. You can read about audio fidelity to familiarize yourself with audio parameters before you continue.
Boutique and large audiophile equipment manufacturers alike often claim to possess some magical feature with no scientifically proven effect on audio performance, and most of the time they don’t even measure the 4 audio parameters correctly, resulting in underperforming audio gear.
Examples include Sony’s audiophile uSD card and most products from boutique audiophile companies.
Sometimes, they do actually provide these numbers, but usually in a misleading fashion or measured with flawed methodology.
These companies usually boast about rare metal materials used to construct the cases, how the soldering is overengineered, and how they platinum-plate every metal contact in their products.
If they don’t brag about the materials, then they’ll brag about their exceptionally well-engineered products, as with PS Audio and Audio GD.
They rarely mention actually useful figures, however. For example, some media players and even amplifiers advertise their power as a bare number of watts, which is useless unless we also know the load impedance used to obtain that value and the device’s output impedance.
And even when those specs are provided, they only describe the power domain, saying nothing about fidelity.
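A quick sketch of why a bare wattage figure is meaningless (the voltage and impedance values below are made up for illustration): the same amplifier output swing delivers wildly different “watts” depending on the load it is measured into, and a non-trivial output impedance eats into that further.

```python
# Hypothetical figures: same amplifier, different loads -> different "watts".
def power_into_load(v_rms: float, load_ohms: float, output_impedance: float = 0.0) -> float:
    """Power (W) delivered to a load, after the voltage divider formed by
    the amp's output impedance and the load."""
    v_load = v_rms * load_ohms / (load_ohms + output_impedance)
    return v_load ** 2 / load_ohms

v = 4.0  # assume a 4 Vrms output swing (made-up figure)
for load in (8, 32, 300):
    watts = power_into_load(v, load, output_impedance=1.0)
    print(f"{load:>4} ohm load: {watts:.3f} W")
```

So “50 W” into 4 ohms, into 8 ohms, or into a headphone load are three entirely different claims, which is exactly why the load must be stated.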
Most consumer audio companies provide frequency response specifications for their speakers, but instead of providing an FR graph, they instead will just use 4 values to represent frequency response:
- Highest frequency (Hz)
- Lowest frequency (Hz)
- + value in dB at the highest or lowest frequency
- − value, likewise
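Here is a toy illustration (with made-up response data) of why those 4 values are nearly useless: two speakers with completely different frequency responses can collapse to the exact same spec line.

```python
# Deviation from flat (dB) at a few frequencies (Hz) -- invented data.
flat_ish      = {20: -3, 100: 0, 1000: 0, 5000: 0, 20000: 3}
rollercoaster = {20: -3, 100: 3, 1000: -3, 5000: 3, 20000: -3}

def four_value_spec(fr: dict) -> tuple:
    """Reduce a response curve to the marketing-style 4-value summary."""
    freqs = sorted(fr)
    devs = fr.values()
    return (min(freqs), max(freqs), max(devs), min(devs))

print(four_value_spec(flat_ish))       # both print (20, 20000, 3, -3)
print(four_value_spec(rollercoaster))
```

Identical “20 Hz – 20 kHz, +3/−3 dB” on paper; only a full FR graph would tell them apart.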
Sometimes, if I am lucky, they also provide SNR, but again, without THD values.
If you look at it the way I do, you’ll see that they are all jokes: one audio manufacturer brags about the ultra-precision clock in its DAC, only for buyers to later discover that the super-clock was improperly integrated into the design and produces glitches every 25 ms.
The same company put HUGE effort into power delivery so that every component gets super-clean power, yet in the end their products only manage some 55–66 dB of signal-to-noise ratio.
Keep in mind that CD audio is 16-bit, or roughly 96 dB of dynamic range, if the recording engineers actually use all the bits.
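That “roughly 96 dB” comes from the standard rule of thumb of about 6 dB per bit; the textbook formula for an ideal N-bit PCM converter adds a small constant on top:

```python
# Ideal quantization dynamic range of an N-bit PCM converter:
#   DR (dB) ~= 6.02 * N + 1.76
def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

print(f"16-bit CD: {dynamic_range_db(16):.1f} dB")  # ~98 dB ideal; ~96 dB by the 6 dB/bit shorthand
print(f"24-bit:    {dynamic_range_db(24):.1f} dB")
```

Either way, a 55–66 dB product falls far short of what the 16-bit format itself can carry.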
This means that these overpriced, overengineered (or underengineered), heavy, giant metal boxes couldn’t even reproduce CD audio faithfully.
Most audiophiles also don’t know that everyday consumer audio equipment measures very well nowadays, to the point that you will not be able, in a double-blind test, to distinguish a cheapo DVD player from a pseudoscience-based $$$$ CD transport plus a standalone snake-oil DAC.
Their idea that CD, and by extension digital audio, sounds terrible stems from the time when CD was first introduced. Back then, CD player implementations were genuinely bad: think 14-bit PCM playback, or steep brick-wall low-pass filters that generated artifacts of their own.
But all those issues were fixed long ago, at both the production and reproduction stages, for example with oversampling and better low-pass filter designs. Modern DACs now perform so well that their jitter (which translates into timing errors and distortion) and their noise no longer really matter.
Yet audiophiles still hate the new tech and long for vintage NOS (non-oversampling) DACs or something of the sort. They can’t decide whether to hate the bad early digital implementations used in the first CD players, or the modern solutions.
Apart from CDs, modern audio devices are so competent that even self-proclaimed golden ears FAIL to reliably prefer a high-end discrete power amplifier over a garden-variety AVR in double-blind tests.
In fact, less than 0.01 percent of world’s population (or none at all) would be able to reliably hear digital jitter.
Jitter is millions of times smaller than analog audio’s time-based errors, such as the unstable rotation speed (wow and flutter) of an analog turntable.
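To put a number on that: a common rule of thumb gives the SNR floor imposed by clock jitter on a full-scale sine as −20·log₁₀(2π·f·t_jitter). Even a sloppy-by-modern-standards 100 ps of jitter (an assumed figure for illustration) barely registers:

```python
import math

# Worst-case SNR limit from sinusoidal clock jitter on a full-scale tone:
#   SNR (dB) = -20 * log10(2 * pi * f_signal * t_jitter)
def jitter_snr_db(signal_hz: float, jitter_s: float) -> float:
    return -20 * math.log10(2 * math.pi * signal_hz * jitter_s)

# 100 ps of jitter on a 10 kHz tone (near the worst case for audio):
print(f"{jitter_snr_db(10_000, 100e-12):.1f} dB")  # ~104 dB -- below 16-bit's own floor
```

Jitter-induced noise around −104 dB sits beneath what the CD format itself can encode, let alone what anyone can hear.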
DACs in modern smartphones also perform so well that their distortion sits far below human audibility, eliminating the need for any external music-player device (provided the phone has the outputs and output power you need).
The “test tones are not music” argument has been brought up every time I discuss audio with audiofools.
Well, as far as the equipment is concerned, tones and music are exactly the same kind of signal.
So if your equipment can’t even reproduce an easy 1 kHz tone without audible distortion and noise, how on earth is it going to reproduce a much more complex and challenging music signal?
I find the tone-versus-music argument idiotic anyway: in any other kind of engineering, you’d at least want assurance from some form of testing, like software unit tests, that the thing works well in every department, audible artifacts or not.
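For the curious, here is a toy, stdlib-only sketch of what a 1 kHz test actually measures: synthesize a tone with 1% second-harmonic distortion (an invented impairment), strip out the fundamental, and report everything left over as THD+N. Real analyzers use windowed FFTs, but the principle is the same: distortion is whatever isn’t the test tone.

```python
import math

FS = 48_000   # sample rate (Hz)
F0 = 1_000    # test-tone frequency; divides FS evenly, so we get whole cycles
N = FS        # one second of samples

t = [n / FS for n in range(N)]
# 1 kHz tone plus 1% second harmonic (simulated distortion)
sig = [math.sin(2 * math.pi * F0 * x) + 0.01 * math.sin(2 * math.pi * 2 * F0 * x)
       for x in t]

# Project out the fundamental's sine and cosine components...
a = 2 / N * sum(s * math.sin(2 * math.pi * F0 * x) for s, x in zip(sig, t))
b = 2 / N * sum(s * math.cos(2 * math.pi * F0 * x) for s, x in zip(sig, t))
# ...and subtract it; the residual is distortion + noise.
residual = [s - a * math.sin(2 * math.pi * F0 * x) - b * math.cos(2 * math.pi * F0 * x)
            for s, x in zip(sig, t)]

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

thd_n = rms(residual) / rms(sig)
print(f"THD+N = {thd_n * 100:.2f}% ({20 * math.log10(thd_n):.1f} dB)")  # ~1.00%, ~-40 dB
```

The measurement correctly recovers the 1% distortion we injected; music would simply be a denser pile of such components.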
People need time to adjust their ears and brains, which means the longer you have been listening to a system, the more comfortable (i.e. enjoyable) it will sound to your brain.
This may explain the ‘burn-in’ phenomenon described by audiophiles when they talk how they need to let their new DAC run for >100 hours for it to get to peak performance.
And when some people notice their gear sounding better, they credit burn-in rather than their own steered preference. Beyerdynamic officially recognizes this problem and formally calls the phenomenon ‘habitual effects’.
The supposed superiority of discrete components is another audio-fools myth. I don’t see how discrete components can increase audio fidelity compared to ICs.
Compared to an IC, where the components sit right next to each other, signals have to travel a hell of a lot farther between discrete components, making them more susceptible to interference and degradation along the way.
ICs are the way to go.
Now that you see how good most gear is these days, you may wonder why there’s no one-size-fits-all solution to audio.
Or, if smartphones are all very good, what was the point of you (or me) buying a desktop amplifier?
I personally think it all comes down to compatibility and convenience, i.e. connection interfaces and power rating:
- You’ll need a DAC if your computer doesn’t have a line-level output to feed your amplifier.
- You’ll need a headphone amplifier when your source doesn’t play loud enough.
- You’ll need a bigger power cord if the stock wire is under-specced.
- You’ll need a new USB cable if your current one is too short, and a new headphone cable only when the original length is troublesome, i.e. the wire is too thin for the length you need.
Also, I’m a lil bit into audio myself, and my OCD thoughts keep urging me to get the best-performing equipment for my budget. But I don’t consider silly requirements like gold cages or massive transformers; instead, I compare actual fidelity figures from third-party websites like AudioScienceReview.
And now that you know there are only 4 parameters required to measure audio fidelity, we can safely conclude that:
**Consumer (including audiophile) analog audio solutions are in every possible way inferior to their digital counterparts of the same class.
Analog systems have seriously worse time-based errors and a much higher noise floor, and their very mode of operation naturally makes them even more susceptible to further errors.**
Now, the question is: if analog audio performs so poorly, why do some audiophiles prefer it?
The answer is simple: fidelity is objective, while preference is subjective, and that is why many people prefer objectively worse audio equipment.
I suspect the reason people perceive digital to be less enjoyable (i.e. how audiophiles describe digital audio as ‘sharp’, or ‘harsh’) is because of (1) habitual effect, and (2) expectation bias.
These 2 topics are very interesting especially when it comes to evaluating audio performance, so I recommend that you read about them first if you have not.
People coming from analog systems will perceive higher-fidelity solid-state and digital systems as sterile, dry, and tinny, because they’re used to their good old trusted analog devices (which produce more harmonic distortion, something we humans naturally tend to prefer).
Even if we ignore the bad implementation of early home digital audio, it’s still understandable that audiophiles might find digital audio bad despite the digital implementation being superb.
This is because when digital audio first went mainstream, in the late 1970s and 1980s, more than 90% of commercial recordings were on analog media, so the tape hiss and all the other crap was carried over into the digital master.
People had never before experienced such high-definition audio from analog gear, so when digital audio revealed the flaws of analog recordings, they called digital audio ‘tinny’.
Another contributing factor was that speakers and other audio components from the era when CD was introduced were also NOT designed to handle higher frequencies very well, resulting in even worse fidelity due to transducer non-linearity.