
Objective Reviews & Commentary - An Engineer's Perspective

May 9, 2012

ODAC May Update

GOOD NEWS! I just completed testing the first two ODAC boards from YoYoDyne. These are known as “first articles” and are built using the same PC boards and parts as the production boards. I’m pleased to report the performance essentially matches the previous prototype I measured in every area, and the jitter is significantly lower. I still need to publish my final large ODAC article with all the measurements and more technical details, but my involvement is otherwise coming to an end. That’s good news, as I can spend more time on the open source DIY Objective Desktop Amplifier (ODA).

AVAILABILITY: I’m not involved with the manufacturing or sales, nor, for that matter, do I receive any profit. So you need to contact one or more of the following if you want an ODAC. Last I heard, they’re still hoping to ship some before the end of the month:

ODAC CANNOT DRIVE HEADPHONES DIRECTLY: There still seems to be some confusion that the ODAC is a headphone DAC by itself. It’s not. You need some sort of headphone amp such as the O2 (the ODAC fits inside), the upcoming ODA, a FiiO E5/E6/E9/E11, etc.

O2 + ODAC: As previously discussed, when the ODAC is combined with the O2 and wired to the input jack as shown in the Tech Section below, you can still use the line input jack of the O2. When a plug (say from a portable player) is inserted into the jack, the O2 will disconnect the ODAC and play the external source. When you unplug it, the O2 will switch back to the ODAC. If neither source is powered up you may hear some hum and/or noise due to the floating input.

SCHEMATIC: YoYoDyne has received approval from Tenor to release the schematic once the production boards are verified and released. More on that in the next ODAC article.

200 COMMENT LIMIT: With some help from friendly blog readers I’ve found a workaround for the 200 comment limit. It’s not perfect, and sometimes doesn’t work with Internet Explorer or mobile browsers, but clicking the “Load More…” or “Loading…” link at the end of the comments should load at least some of the newer posts. As long as you see either of those links, there are newer comments not being shown. Hopefully, if I can publish articles more frequently to spread out the comments, you won’t have to worry about it.

WHAT’S NEXT: I’m still working on my Timex vs Rolex article. It’s been my most daunting article yet and I’ve only had limited free time. But it’s hopefully coming Real Soon Now… Really! Honest! There’s also the final large ODAC article with the rest of the measurements and more tech background on the design. And, beyond that, look for the next ODA article. I’m still hoping to have at least a preliminary ODA design published before the end of summer.

 


TECH SECTION


ODAC TO O2 WIRING: Use twisted pair wiring, such as you can find inside any CAT5 ethernet cable or similar. You can also use small shielded wire, but it might be harder to route within the O2 enclosure. The photo below shows the twisted pairs connected to the ODAC output header. Note the grounds are joined:

[Photo: ODAC to O2 wiring]

O2 INPUT JACK CONNECTIONS: There are two options for connecting the ODAC to the O2. If you no longer want to use the input jack, you can just connect to the P1 header alongside the input jack. Left connects to pin 1 (the square pad), the middle pin (2) is ground, and pin 3 is the right input. You must cut the ground traces as shown in the photo with a razor blade or similar. Be careful not to accidentally cut other traces (or yourself!). If you have a DMM you may want to use the Ohms range to verify there’s no longer a short (near 0 ohms) from pin 3 and pin 4 to ground; sometimes traces that look cut are not. If you want to keep using the input jack, the twisted pairs from the ODAC must be routed to the bottom of the O2 board and connected to the input jack as shown. This can be done through the unused pin pads for the volume control, through an unused mounting hole, or there might be clearance between the front panel and the O2 board—it will depend on the type of wire you use. Keep the wiring as short as possible. The ground wires (and the chassis ground wire) can all attach to the center pin 1 of the jack closest to the front edge of the board or to pin 2 of the P2 header nearby:

[Photo: O2/ODAC input jack wiring]

OVERALL CONSISTENCY: In general the production boards measure very similarly to the prototype. In this case that means YoYoDyne used the right parts and their assembly contractor didn’t mess anything up. It also provides added confidence the ODAC is a solid design. When three different samples all measure very similarly, that’s a good sign nothing is on the ragged edge, excessively tolerance dependent, etc.

LOWER JITTER: There were a few minor PCB and other tweaks between my prototype ODAC and the production boards from YoYoDyne. I’m pleased to report the production version has even lower jitter. In the last ODAC article I reported the jitter components adding up to –103.3 dB. In the production version, as you can see below, it’s over 3 dB better at –106.5 dB with far fewer components. In reality both of these are comfortably below the –100 dB threshold I’ve established for transparency, so I’m not suggesting this is an audible benefit, but it’s still nice to see:

[Graph: ODAC 11025 Hz –3 dBFS J-Test jitter & phase error, 24/44]

April 18, 2012

ODAC Released

[Photo: ODAC closeup]

CAREFREE ENJOYMENT: I’ve been listening almost exclusively to the ODAC for several weeks now and I’ve been smiling a lot—not so much at the ODAC but at the music. When you know your audio gear is genuinely transparent it opens a worry-free window into the music. And, while I suffer from expectation bias just like everyone else, I’ve run a second blind listening test and can report the O2+ODAC held its own against the $1600 Benchmark DAC1. Both are audibly transparent.

I’M NOT SELLING ANYTHING: Some have claimed that because others may profit from the ODAC, I must be profiting from it. But that’s not the case. My involvement was simply to help design and measure it—just like the DIY open source O2 amplifier. YoYoDyne and various other vendors are responsible for manufacturing and selling the ODAC. They’re the ones taking the financial risks, so they’re also the only ones to profit. This is an entirely not-for-profit blog and will remain so. You can’t buy anything from me.

LIKE O2 LIKE ODAC: The O2 amplifier was created as a simple, low cost, minimalist design delivering 100% transparent performance into nearly any headphone. The ODAC takes the same minimalist approach to transparency. But it’s not quite as simple as it looks in the photo above. It has around 60 components and many of those were carefully chosen through a lot of detailed measurements and trial and error.

NO SNAKE OIL REQUIRED: Many audiophiles want to believe more elaborate or exotic DACs offer higher fidelity. The ODAC demonstrates you do NOT need any of these for 100% transparent performance:

  • Asynchronous USB
  • UAC2 (USB Audio Class 2) Support
  • Asynchronous Sample Rate Conversion (ASRC)
  • Minimum Phase Filtering (no pre-ringing)
  • Non-oversampling (NOS) DAC chips
  • Dual DAC chips
  • Balanced Outputs
  • Vacuum Tube Stages
  • Elaborate and/or High Current Power Supplies

THE PROOF: I can confidently say none of the above are required for DAC audio nirvana—i.e. having your DAC disappear from the signal chain. Some of the above, like NOS designs and tubes, degrade fidelity. I’m confident because detailed measurements and blind listening tests verify the ODAC’s real world performance. I’ve even tried several different PCs and the ODAC’s performance is relatively consistent between them. So while some of the above might improve a few measurements, if they don’t improve the sound, it’s like taking 4 pills to get rid of your headache when 2 do the job nicely. Once the headache is no longer perceptible, more pain reliever doesn’t help anything. I encourage others to compare the ODAC blind against other DACs, at any price, that measure reasonably well.

ASSEMBLED BOARD: As explained in the ODA/ODAC and ODAC Update articles, the ODAC is not DIY friendly. This is true for most 24 bit USB DACs. Neither the USB interface nor the DAC chips are available in small quantities. Both require signed OEM agreements and minimum purchase quantities, and the USB chip also requires custom programming. The ODAC has many extremely tiny 0603 surface mount components and uses a fine pitch 48 pin IC, making it very tedious to build by hand. Given all that, the ODAC requires commercial automated assembly in fairly large batches. DIYers can add the ODAC board to the O2, build it into a stand-alone DAC, or add it to the upcoming Objective Desktop Amp (ODA).

NO S/PDIF: To hopefully avoid most of the “how come you didn’t add S/PDIF” or “when will you add S/PDIF” questions, please see the previous ODA/ODAC and ODAC Update articles. Sorry, but it’s not possible.

PARTNERS REVEALED: Because of the above requirements, the ODAC development has been a joint effort with YoYoDyne Consulting. George has supplied other open source USB DAC boards at reasonable prices and seems to be well respected on diyAudio. He’s taking considerable financial risk having a large batch of ODAC boards assembled and hence is coordinating all the manufacturing and distribution. I conducted all the measurements, refined the design, and helped optimize the PCB layout. Just to be clear, I’m not getting any money from the ODAC. Here are the ODAC resources so far:

AVAILABILITY: YoYoDyne is estimating boards will be available before the end of May assuming nothing goes wrong. It might take a few days longer to get boards to the UK and Europe. For other information please see the links above.

SCHEMATIC: YoYoDyne has received approval from Tenor to release the schematic once the production boards are verified and released to the various vendors above. YoYoDyne is also interested in a possible future version that’s more DIY friendly.

GUILT-FREE VOLUME ADJUSTMENT: The majority of USB DACs only support 16 bits over USB. That means when you turn down the volume in software you’re getting less than 16 bits of resolution. At background music levels you might only be listening to 11 or 12 bit audio. But the ODAC has a 24 bit USB interface and enough dynamic range to allow guilt-free use of software volume controls.
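To put rough numbers on that claim, here’s a minimal Python sketch (the function name and attenuation values are mine, purely illustrative) of the common rule of thumb that each ~6 dB of digital attenuation discards about one bit of resolution:

    def effective_bits(transport_bits, attenuation_db):
        # Each 6.02 dB of digital attenuation throws away roughly one bit.
        return transport_bits - attenuation_db / 6.02

    for atten in (0, 12, 24, 30):
        print("-%2d dB volume: 16-bit ~%4.1f bits, 24-bit ~%4.1f bits"
              % (atten, effective_bits(16, atten), effective_bits(24, atten)))

At a –30 dB background listening level that leaves a 16 bit path with only about 11 bits of resolution, while a 24 bit path still has roughly 19.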

OTHER DETAILS: There’s a lot more in the previous ODAC articles, but in summary, the ODAC is designed to fit inside the standard O2 enclosure in place of the batteries. A few internal wires need to be soldered to connect the ODAC’s output to the O2’s input jack. Other line level sources can still be used with the O2. It will also fit inside the upcoming ODA. And it can be used standalone with either the on-board 3.5mm output jack or panel mounted RCA output jacks. The USB connector is a standard USB Mini-B as used on the FiiO products, the Sansa Clip, cameras, etc.

[Photo: ODAC rev 1 and rev 2]

FOUR REVISIONS: The ODAC has been through four lengthy revisions—two of the earlier boards are shown above. Despite the fact we started with essentially the reference design from the datasheets, the devil was in the details. The first version played music and sounded OK. Many companies and DIYers that “design by ear” would have stopped there. But that first version didn’t come close to delivering what the DAC chip is capable of. Each revision cycle took at least several weeks, cost hundreds of dollars, and involved countless hours of work. But, in the end, it resulted in much better performance compared to where we started.

REAL WORLD PERFORMANCE: Most DACs and PC sound interfaces priced under $200 fail to come close to the published specs for the chips they use. The FiiO E10’s Wolfson DAC chip is rated at 117 dB of dynamic range but the E10 only delivers a modest 98 dB. The power supply, PCB layout, grounding scheme, I2S waveform fidelity, clock quality, and more, often degrade the performance to well below the manufacturer’s carefully optimized reference design used for the datasheet specs. This is especially true for USB sourced signals and DACs running from USB power. But the ODAC, despite being USB powered, managed to come very close to the “Holy Grail” datasheet specs. See the Tech Section for more.

THE REAL NUMBERS: Here are the real numbers (versus the April Fool’s Day numbers I published two weeks ago) and they all meet the requirements for audible transparency. The letter following many of the results is the same letter grade (A–F) I’ve used in previous reviews, with A being the best and F being a “Fail”:

Measurement                           | ODAC           | FiiO E10      | DAC1 Pre
--------------------------------------|----------------|---------------|-------------
Freq. Response 10 Hz – 19 kHz 24/44   | +/- 0.1 dB (A) | +/- 0.1 dB (A)| +/- 0.1 dB
THD+N 100 Hz 0 dBFS                   | 0.0029% (A)    | 0.005% (A)    | 0.0009%
THD+N 20 Hz –1 dBFS                   | 0.003% (A)     | 0.004% (A)    | 0.0009%
THD+N 10 kHz –1 dBFS                  | 0.003% (A)     | 0.004% (A)    | 0.0007%
IMD CCIF 19/20 kHz –3 dBFS            | 0.0011% (A)    | 0.013% (B)    | 0.0005%
IMD SMPTE –1 dBFS                     | 0.0004% (A)    | 0.004% (A)    | 0.0004%
Noise A-Weighted dBu 24/44            | –102.8 dBu (A) | –98.3 dBu (C) | –105.4 dBu
Dynamic Range –60 dBFS A-Wtd          | –111.1 dBr (A) | –97.6 dBr (C) | –110.9 dBr
Linearity Error –90 dBFS 24/44        | 0.0 dB (A)     | 0.0 dB (A)    | 0.2 dB
Crosstalk 0 dBFS Line Out 100K        | –93.5 dB (A)   | N/A           | –106 dB
USB Jitter 11025 Hz J-Test 24/44      | Excellent      | Very Good     | Excellent
Maximum Output Line Out 100K          | 2.0 Vrms       | 1.65 Vrms     | 2.5 Vrms (1)

NOTE 1: The DAC1’s maximum output is set with internal jumpers

BOTTOM LINE: The ODAC has been released to production and will hopefully be available by the end of May. I’ll soon be publishing more detailed measurements, results of listening tests, etc. I’m confident the ODAC is audibly transparent. And, especially when it’s installed inside the O2 or future ODA, it offers a level of performance that’s difficult to find without spending substantially more. It also offers detailed measurements and blind listening tests to back up its performance, which is something very few other DACs offer at any price. And, paraphrasing from credit card commercials, worry-free enjoyment of music can be priceless.

 


TECH SECTION


[Drawing: ODAC dimensions & P6 connector coordinates]

SPECIFICATIONS: Here are some ODAC basics:

  • Audio Formats: 16/44, 16/48, 16/96, 24/44, 24/48, 24/96
  • Interface: USB Audio Class 1
  • Native Driver OS Support: Windows XP & Later, OS X x86, Linux
  • Operating Systems Tested: XP, Vista, Win7 (32 & 64), OS X Snow Leopard, Ubuntu 9.1 32 bit
  • Line Output: Approximately 2 Vrms into 5K ohms or higher
  • Dynamic Range: > 110 dB A-Weighted
  • Distortion: < 0.005%
  • Dimensions: 49 x 58mm (see drawing above)

BLIND EVIDENCE: So far I’ve run two relatively informal blind tests with the ODAC. The latest one used special software on the PC to play the same track on both my Benchmark DAC1 Pre and simultaneously on the ODAC plugged into the same PC (both connected via USB and running at 24/44). The ODAC was connected to an O2 headphone amp, and a switchbox allowed the headphones to be rapidly switched between the DAC1 and the O2+ODAC. The two sources were carefully level matched (using their respective volume controls) using a test signal and wideband DMM. I tried both my Sennheiser HD650 and Denon AH-D2000 headphones with a variety of well recorded favorite tracks. One other listener and I could not reliably tell which was playing.
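For anyone wanting to replicate the level matching, the underlying arithmetic is simple. Here’s a minimal sketch (the voltage readings are made up for illustration):

    import math

    def level_difference_db(v1_rms, v2_rms):
        # Difference in dB between two sources playing the same test tone,
        # each measured at the output with a wideband DMM.
        return 20 * math.log10(v1_rms / v2_rms)

    dac1, odac_o2 = 1.998, 2.011   # hypothetical Vrms readings
    print("mismatch: %+.3f dB" % level_difference_db(odac_o2, dac1))

Anything within roughly +/- 0.1 dB is generally considered close enough for blind comparisons.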

FUTURE BLIND TESTS: I’ll hopefully be running a more comprehensive and rigorous blind test in the future. But, ultimately, it’s best to have listeners who expect to hear a difference, and someone else who understands the technical issues to supervise the test, perform level matching, sync the sources, etc. It would be ideal, for example, to have Mike and/or Lieven from Headfonia be a listener and George from YoYoDyne oversee the test. But sometimes geography and other factors restrict what’s realistic. If anyone is willing to help coordinate such a blind test, please contact me privately with the link in the right hand column.

USB INTERFACE: There are only a few USB interface chips capable of 24 bit operation. The best option we could find is the Tenor TE7022. It’s used in the Violectric USB 24/96 and several other commercial 24 bit USB DACs. Notably, it does not require proprietary drivers to work with any popular operating system including Windows because, like the Benchmark DAC1, it’s a USB Audio Class 1 interface. It also has respectably low jitter. The XMOS solution requires an expensive license and proprietary Windows drivers and offers no audible benefit. The TAS1020B is being discontinued, requires extensive firmware, and also offers no audible benefit. A custom microcontroller creates even more hurdles with no audible benefit.

THE DAC CHIP: As I’ve explained elsewhere you can get transparent performance from most of the better DAC chips on the market from a half dozen chip vendors. What’s most important is choosing one that’s best suited to the particular application. In this case, that means running from a single 5 volt USB power supply, having a buffered voltage output (to avoid needing a single-supply op amp), and operating properly without using a microprocessor. Just those three requirements narrow the choices considerably. We chose the ESS Sabre ES9023 which is used in a lot of commercial designs (it’s an improved version of the popular ES9022). Unlike TI, Burr Brown, Analog Devices, etc, ESS specializes in audio chips and they did a nice job with the ES9023’s feature set and specs.

DIRTY LIES: Many popular “boutique” DACs (especially those being sold on eBay) mislead their fans by quoting only the chip specs for their entire DAC. Basically that amounts to cheating and lying. It further implies either the company is incapable of making the proper measurements or the real measurements were bad enough they didn’t want to share them. The implication is a DAC’s performance is solely determined by the chip used. But the opposite is usually closer to the truth: the implementation matters far more than the chip. The FiiO E10 is just one of many examples. The HA-Info I’ll soon be comparing to the ODAC is another.

CHIP ENVY: Any serious audio engineer designing DACs and making proper measurements knows the implementation matters more than the chip used. The first version of the ODAC, which closely followed the ESS datasheet, only managed about 98 dB of dynamic range. The distortion was also much higher than listed on the datasheet and the jitter was somewhat disappointing. Unless you exactly duplicate the chip manufacturer’s reference design, right down to the PC board layout (which I’ve yet to see any manufacturer do), you really don’t know what you’ll get. To measure all the important parameters you need more than just RMAA. You need a real audio analyzer with performance substantially better than the DAC being measured.

DESIGN BY EAR: Detailed and credible published measurements are missing for most “boutique” DACs being sold, including those from Schiit Audio, Audio-GD, AMB, Twisted Pear, Burson, and NuForce. Where’s the credible evidence they got it right? A lot of these companies try to claim specs don’t matter, and they instead design by ear, but that method is seriously flawed (see: What We Hear). It would be like designing a car engine without a dynamometer and having no idea how much horsepower and torque it produced, how fuel efficient it was, etc. Given all the proven problems with sighted listening, and how our ears and brains work, those who claim to design by ear are very likely getting it wrong. Put another way, they’re often designing products with far lower fidelity than they’re otherwise capable of.

[Photo: assorted SMT parts on the bench]

LESSONS LEARNED: If this project has taught me anything, it’s that getting much better than 16 bit (96 dB) performance can be challenging. The first version of the ODAC, despite following the reference design, only had about 98 dB DNR. That’s about the same as the FiiO E10. The photo above shows a few dozen assorted surface mount parts that were laboriously swapped out one at a time with measurements repeated dozens of times on the dScope. Some improvements were far from intuitive. Audiophile preferred polyphenylene capacitors performed worse than less expensive types. Additional filtering on the digital power supply dramatically increased jitter. Chasing down the last few dB of dynamic range the chip is capable of proved especially challenging. When all was said and done, the DNR went from 98 dB to over 111 dB. That’s a huge difference and something the design-by-ear crowd would never have achieved.

THE AUDIOPHILE WAY: All too many small or “boutique” audiophile manufacturers and DIYers seem to just slap trendy chips on a board, listen to their creation expecting it to sound good (so that’s what they hear), and call it good. Many don’t even follow the reference design. Instead they include a bunch of “audiophile upgrades” expecting better performance—and they hear what they expect to hear even when it’s not true. But the ODAC demonstrated those upgrades often make things worse. So instead of getting even 98 dB DNR like the first ODAC revision, those following audiophile myths and designing by ear probably would have ended up with something even worse. Unless you’re making the right measurements, you really have no idea what you’re getting.

THE POWER SUPPLY: For reasons explained in the earlier ODAC articles, the ODAC is USB powered. This allows it to work standalone, as an internal add-on to the O2, and in the upcoming ODA. There are many obvious advantages to USB power but it often degrades performance due to noise. To get around this, the ODAC uses split digital and analog power supplies each with their own filtering and regulator. The analog supply has additional filtering and the critical reference voltages, and negative supply for the DAC chip, are further optimized. I literally tested more than 100 variations of components, including different brands of capacitors, to get the most out of the ES9023. This level of refinement would be impossible without a serious audio analyzer.

PUMP YOU UP: The ESS chip has the huge advantage of a built-in low noise charge pump. It generates its own regulated negative power supply allowing a Redbook standard 2 Vrms output from a single 5 volt USB power supply and a direct coupled output. This is an important distinction compared to a lot of USB powered DACs. Without the charge pump, or some other negative power supply, USB DACs can’t produce the Redbook standard 2 Vrms which reduces their effective dynamic range, lowers their ENOB, and creates level matching problems. It also requires an undesirable output coupling capacitor and usually results in loud transients on power up. The AMB Gamma, and most USB powered DACs I’ve tested, don’t meet the Redbook standard. The ODAC does. And it produces only a soft click on power up.
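The dynamic range cost of a lower maximum output is easy to estimate. Here’s a minimal sketch (the 1 Vrms figure is a hypothetical DAC, not any specific product):

    import math

    def headroom_shortfall_db(max_vrms, redbook_vrms=2.0):
        # How far below the Redbook 2 Vrms reference the output falls; every
        # dB of shortfall is a dB of effective dynamic range given away.
        return 20 * math.log10(redbook_vrms / max_vrms)

    # A USB powered DAC without a negative supply might top out near 1 Vrms:
    print("%.1f dB short of Redbook" % headroom_shortfall_db(1.0))   # ~6.0 dB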

ADAPTIVE USB INTERFACE & LOCAL CLOCK: I’ve talked about this before, but just to be clear, the ODAC is NOT clocked by the USB port. So the quality of the audio clock, and any resulting jitter, is largely independent of the PC’s USB timing. It has its own low phase noise 12 MHz crystal controlled oscillator that’s used to generate the MCLK and SCLK audio clocks.

A NOTE FOR 24/88 FANS: Some have asked about 24/88 high resolution audio support (popular for SACD rips). While the ODAC doesn’t support 24/88, it does support the audibly identical 24/44. It’s trivial to re-sample 24/88 audio to 24/44 with no artifacts as it’s a simple divide-by-two operation (and one the operating system will perform for you automatically). I know many audiophiles probably think they’re losing something, but nobody has proven they are. Meyer & Moran demonstrated in a very in-depth study that even 16/44 audio sounded identical to SACD. Another good read is 24/192 Music Downloads. And if you refuse to believe all that, try resampling some 24/88 audio to 24/44 and compare them yourself with Foobar and the ABX add-on. It’s been done at HydrogenAudio and elsewhere always with the same result: Unless you mess up the resampling somehow, or change the levels, you can’t tell them apart.
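If you’d rather try it in code than in Foobar, here’s a minimal sketch using SciPy’s polyphase resampler (the 1 kHz test tone is just a stand-in for a real 24/88 track):

    import numpy as np
    from scipy.signal import resample_poly

    fs_in = 88200
    t = np.arange(fs_in) / fs_in
    x = 0.5 * np.sin(2 * np.pi * 1000 * t)   # one second of a 1 kHz tone

    # Polyphase decimation by two: low-pass at the new Nyquist frequency,
    # then keep every other sample. The OS resampler does the same job.
    y = resample_poly(x, up=1, down=2)
    print(len(x), len(y))                    # 88200 -> 44100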

TRANSPARENCY GUIDELINES: The What We Hear article offers information and references outlining what’s required for a piece of audio gear to genuinely disappear from the signal path and not alter the sound in any audible way. Here are what I believe to be relatively conservative criteria for audible transparency, and the ODAC passes all of them (a short sketch after the list shows how individual components can be summed against these limits):

  • Frequency Response 20 Hz – 19 kHz within +/- 0.1 dB (Most DACs, due to the Nyquist limit of 22.05 kHz, start to roll off past 19 kHz when operating at a 44.1 kHz sampling rate—the ODAC is down about 0.4 dB at 20 kHz). The widely accepted, but less conservative, standard is +/- 0.5 dB (1 dB total variation) from 20 Hz to 20 kHz.
  • All Harmonic, IMD, Alias, Modulation, & Crosstalk Components Below –90 dBFS and total sum below –80 dBFS (0.01%)
  • All Noise Components below –110 dB and total sum below –100 dBFS
  • All Jitter Components below –110 dB and total sum below -100 dBFS
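Here’s the sketch promised above showing how individual components are checked against these criteria. The component levels are hypothetical values read off an FFT, and the thresholds are the distortion limits from the list:

    import math

    def sum_components_dbfs(levels_dbfs):
        # Individual spectral components sum as power, not as dB.
        total_power = sum(10 ** (level / 10.0) for level in levels_dbfs)
        return 10 * math.log10(total_power)

    components = [-104.0, -108.5, -112.0, -115.0]   # hypothetical FFT readings
    total = sum_components_dbfs(components)          # about -102 dBFS
    ok = max(components) < -90 and total < -80
    print("total %.1f dBFS -> %s" % (total, "pass" if ok else "fail"))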

GREEN GUIDE LINES: A few months ago I introduced green guide lines on several of my measurement graphs to help show the worst case ideal performance. Some of these are slightly more lenient than the above criteria or take into account more detailed thresholds (i.e. that power line hum can be slightly higher in level than midrange noise). For now I’m keeping the green guide lines consistent with earlier reviews. But please note the ODAC meets even the tougher criteria above.

PARTIAL MEASUREMENTS: I’ve made LOTS of ODAC measurements including some things I’ve never measured before—such as true latency. For this article I’ve only shown some of the more common measurements. In a future article I’ll cover additional measurements, 16 bit operation, 24/96, as well as several comparisons to the DAC1, FiiO E10, and an HA-Info eBay headphone DAC with a well respected DAC chip. So, in the interest of getting this article done sooner rather than later, and keeping it to a manageable size, only a sampling of 24/44 measurements are shown below.

DYNAMIC RANGE: A DAC’s noise floor impacts dynamic range (DNR), audible noise, THD+N, and can even exceed jitter-induced distortion. If you have to pick a single number to evaluate real world DAC performance, –60 dBFS dynamic range is one of the most revealing. The guys in the white lab coats have determined DNR greater than 100 dB results in transparency under realistic conditions. And, if you want to adjust the volume in software, it’s best to have at least 110 dB DNR to keep the noise floor inaudible even if the downstream gain is left cranked way up. Anything beyond 110 dB is past the point of diminishing returns—it looks nice on paper but doesn’t help the sound quality. The ODAC is a very substantial 14 dB better than the FiiO E10. Here are both channels of the ODAC referenced to 2.03 Vrms at 0 dBFS. Note the channels are very symmetrical, indicating a careful PCB layout:

[Graph: ODAC –60 dBFS dynamic range & noise, both channels, line out 100K, ref 2 Vrms, 24/44]
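For reference, here’s roughly how the –60 dBFS DNR number is derived. A minimal sketch with a hypothetical reading (the dScope automates all of this):

    def dynamic_range_db(thdn_re_stimulus_db, stimulus_dbfs=-60.0):
        # AES17-style dynamic range: measure A-weighted THD+N relative to
        # a -60 dBFS tone, then refer the result back to full scale.
        return -(thdn_re_stimulus_db + stimulus_dbfs)

    # Hypothetical: residual measured 51.1 dB below the -60 dBFS tone
    print(dynamic_range_db(-51.1))   # -> 111.1 dB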

ODAC VS ESS: As explained earlier, the DNR quoted on datasheets is often something of a Holy Grail. The chip specs are typically from a very high quality AES/EBU or I2S laboratory quality signal (as output by high-end audio analyzers like a Prism dScope or Audio Precision). And they’re typically running from expensive ultra low noise bench power supplies costing thousands of dollars. It’s safe to assume the datasheet numbers were not made with USB data while running on USB power. ESS rates the ES9023 DNR at –112 dB A-Weighted. The ODAC delivers –111.1 dB A-Weighted under the same conditions. In other words, even using USB data and power, the ODAC comes within a fraction of a dB of achieving the datasheet spec! I’m fairly proud of this aspect of the ODAC. It wasn’t easy.

CCIF IMD: This 19+20 kHz twin tone is a difficult test for many DACs running at 44.1 kHz. Old style non-oversampling (NOS) DACs especially struggle due to aliasing problems. In addition, the output buffer (or I-V stage) in many DACs contributes high frequency distortion because the RC filter can be a challenging reactive load at these frequencies. If you look back through my reviews, you’ll find lots of products struggle on this test. Even the E10 turned in a marginal result. The ODAC, however, due to careful optimization of the output filter, and the superior digital filtering of the ESS DAC, does very well here with everything in the audio band more than 100 dB down (both channels shown). Note also the 19 and 20 kHz tones are visibly equal in level, which is not the case for many DACs:

[Graph: ODAC –3 dBFS CCIF IMD, both channels, 100K, ref 2 Vrms, 24/44]
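For the curious, here’s a minimal sketch of how such a twin-tone stimulus can be generated. I’m assuming the –3 dBFS spec refers to the combined peak; since two equal tones can momentarily add in phase, each tone gets half the combined peak amplitude:

    import numpy as np

    fs = 44100
    t = np.arange(fs) / fs                    # one second
    peak = 10 ** (-3 / 20.0)                  # -3 dBFS as a linear amplitude
    twin = (peak / 2) * (np.sin(2 * np.pi * 19000 * t)
                         + np.sin(2 * np.pi * 20000 * t))
    # A transparent DAC reproduces this with the 1 kHz difference product
    # (and other IMD spurs) far below the -90 dBFS criterion.
    print("peak: %.2f dBFS" % (20 * np.log10(np.abs(twin).max())))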

 

SMPTE IMD: This twin tone test is more revealing of low frequency problems, including power supply interaction. Again, the ODAC does very well with everything more than 100 dB down (both channels shown):

[Graph: ODAC –2 dBFS SMPTE IMD, both channels, 100K, ref 2 Vrms, 24/44]

 

100 Hz THD+N @ 0 dBFS: This test checks for clipping of the DAC at 0 dBFS and also shows the maximum output and channel balance error. You can see the ODAC produces 2.03 Vrms, which is within an insignificant 0.03 volts of the Redbook standard for digital audio. And even at 0 dBFS, the distortion is still 3 times less than what’s required for 100% transparency. The channels are perfectly balanced to within 0.001 dB. This spectrum is shown all the way out to 96 kHz and you can see the ESS DAC is well behaved even above 20 kHz with all noise still below about –110 dBFS. This is excellent performance:

[Graph: ODAC 100 Hz 0 dBFS max output, both channels, 100K, wideband & channel balance, ref 2.03 Vrms, 24/44]

 

THD+N VS FREQUENCY: Here’s the distortion performance at –1 dBFS from 20 Hz to 20 kHz into a more challenging 10K load with a measurement bandwidth of 22 kHz. At 1 kHz the distortion is only 0.0027% and it remains around 0.003% over most of the audio band, with only a slight rise to 0.0048% at 9 kHz before the harmonics fall above the audible range. This is excellent performance and both channels are very closely matched (yellow vs blue):

[Graph: ODAC –1 dBFS THD+N vs frequency, both channels, 10K load, 22 kHz BW, 24/44]

 

NEW JITTER FINDINGS: I did quite a bit more research on jitter during the ODAC’s development. I’m also using a new dScope method that shows the same spectrum as before, but now the symmetrical jitter components are marked (with a white “X”) and summed to obtain a total numerical value (previously the dScope was just showing the total residual noise floor). Having the single number (–103.3 dB below) made it easier to optimize the ODAC for the lowest jitter. The objective evidence conservatively indicates if you keep all related components below –110 dB, and the total below –100 dB, the jitter will be entirely inaudible. Jitter creates dissonant distortion products in the audible band. It’s reasonable to assume if the audible effects of jitter are kept at or below the inaudible noise floor, they too will be inaudible. So the same levels of –100 dB and –110 dB that apply to noise also apply to jitter contributions. This is also consistent with various professional reviewers and their anecdotal opinions on jitter performance, as well as my blind testing against the Benchmark DAC1, which has even lower jitter.
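For background, the J-Test stimulus itself is simple to construct. Here’s a minimal sketch of my understanding of Dunn’s classic J-Test (a sine at Fs/4 plus a low-level square wave toggling the least significant bit at Fs/192); the exact test signal details may differ:

    import numpy as np

    fs, bits = 44100, 24
    t = np.arange(fs) / fs
    lsb = 1.0 / 2 ** (bits - 1)

    tone = 10 ** (-3 / 20.0) * np.sin(2 * np.pi * (fs / 4) * t)   # 11025 Hz
    square = lsb * np.sign(np.sin(2 * np.pi * (fs / 192) * t))    # ~229.7 Hz
    jtest = tone + square   # worst-case data-dependent jitter stimulus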

ODAC JITTER: The ODAC passes the conservative criteria with several dB to spare even on the worst-case J-Test signal. And the spread at the base of the signal (very low frequency jitter) is extremely minimal, being entirely below –130 dB. It’s also worth noting the jitter here looks subjectively worse because the noise floor is much lower than in most of my jitter measurements, which are done with a 16 bit test signal. A 16 bit noise floor masks most of the “spikes” seen below. The ODAC also has negligible inter-channel phase error and essentially perfect pitch accuracy:

[Graph: ODAC –3 dBFS 11025 Hz J-Test, sum of marked jitter components, ref 2 Vrms, 24/44]

 

CONCLUSION: Hopefully the above provides some good evidence the ODAC delivers transparent performance. The next ODAC article will compare the 16 and 24 bit performance and I’ll be comparing it to several other DACs. I’ll be publishing many more measurements such as modulation noise, channel separation, square wave/impulse response, latency, frequency response, absolute noise, and more.

April 7, 2012

What We Hear

FIRST SOME UPDATES: This is a promising sign: I’ve managed to publish another article in less than a week, and it’s a big one! Since last weekend’s April Fool’s article we’ve managed to finalize the ODAC design and it’s off to production. The best case estimate for availability is mid-May assuming all goes well. I plan to publish an article with the full measurements in the next few weeks. For now, I need a temporary break from the ODAC. (photo: BBC Two)

WHAT WE HEAR: The recent review of the O2 Amplifier at Headfonia has generated questions about why the reviewer heard what they heard. For example, a few have asked if the O2 is genuinely a poor match for the Sennheiser HD650 as the reviewer claims. How do we determine if that’s true and, if the review is wrong, what happened? Others have asked if “colored” audio gear can sound better. This article tries to explore what’s often wrong with typical listening comparisons, subjective reviews, and “colored” amplifiers.

THE SPECIFIC COMMENTS IN QUESTION: I’m not trying to place unique blame on Headfonia or the authors; it’s just useful to have a concrete, timely example. The authors (Lieven and Mike) wrote:

“I really don’t find the Senns [HD650] to pair well with the O2.” He also said the O2 (presumably with the HD650) was: “…missing some punch and impact. But the bass isn’t the only thing bothering me actually, there also is something bothering me with the mids. They sound a bit thin to me to be honest, I just feel like I am missing out on something using the O2. It isn’t bad sounding, and I’m sure lot’s of people love it, but I’m not one of them, I’m missing something to enjoy what I’m hearing.”

HUMAN PERCEPTION & SCIENCE COLLIDE: Just because the earth seems flat from where you’re standing doesn’t make it true. Our senses don’t always provide us with the real truth. This article is mainly about how all of us, even audio reviewers, perceive things and why.

TWO BUCK CHUCK (part 2): Robin Goldstein conducted over 6000 wine tastings with over 500 people, including many wine experts who work in the industry. He discovered when people know the prices of the wines they clearly prefer the more expensive wines. But when the same wines are served “blind” (anonymously), the votes reverse and they express a strong preference for the least expensive wines.

FAKE RESTAURANTS & BUG SPRAY: Goldstein exposed one of the most prestigious sources of wine ratings in the world, Wine Spectator, as arguably having their awards up for sale. He created a fictitious restaurant with a wine list featuring poorly reviewed wines with descriptions like “bug spray” and “barnyard”. Then, after paying the appropriate fee, his fake restaurant managed to score a Wine Spectator Award of Excellence. Goldstein argues organizations like Wine Spectator do not provide the sort of unbiased advice many anticipate. He also concluded their ratings have a heavy commercial and financial component the public is largely unaware of. It’s alarmingly analogous to high-end audio, including the questionable awards. For those into podcasts, check out the NPR story on Goldstein and wine myths. He’s also written a book on the subject called The Wine Trials. (photo: heatherontravels)

TASTE WRONG: How can the same people taste the same wines in two different tastings and rate them completely differently? The answer is our senses are not nearly as reliable as we think. In this case what’s known as expectation bias caused the wine tasters to perceive the wines differently. They expected the more expensive wines to taste better, so their brains filtered their perception of taste to help deliver the expectation. But when they didn’t know anything about the wines, they couldn’t have any expectations, hence the bias was removed. I’ll explain more shortly, but this is also critical to what we hear.

SEE WRONG: Studies show our vision is easily fooled. A video linked in the Tech Section contains a classroom experiment where students witness what they believe is a real crime. They’re shown pictures and asked to identify the suspect, and many are so sure they would even testify in court. But, alarmingly, there’s no consensus in the photos they choose. In the end, none of them are correct! Their brains filtered out too many details of the thief. When shown pictures, each of their brains helpfully invented different missing details to match one of the photos. We do the same thing with audio (see the Tech Section).

HEAR WRONG: Just like with taste and vision, our hearing is heavily and involuntarily filtered. Only around 0.01% of what we hear makes it to our conscious awareness. This is hardwired into us for survival. We would go crazy from sensory overload if this filtering didn’t take place 24/7. Studies show you can play the identical track, on identical equipment, two different times, and the odds are good people will hear significant differences if they’re expecting a difference. This is called expectation bias and it’s the brain “helpfully” filtering your hearing to match your expectations—just like wine tasting. I reference the auditory work of James Johnston and more in the Tech Section. (photo: JustinStolle CC)

THINK YOU’RE EXEMPT FROM HEARING BIAS? People who have made a career of studying the human auditory system have yet to encounter anyone who can overcome their sensory bias. If you have any doubt I challenge you to watch this video and try to overcome your listening bias. If you can’t reliably do it here with a single syllable, even when you know the exact problem, it’s safe to say you don’t have a chance with the much greater complexity of music and a wide range of expectations:

  • McGurk Effect Video – A 3 minute non-technical BBC video that allows anyone to experience how our brains involuntarily filter what we hear based on other information. (courtesy BBC TWO)

SKILLED LISTENING: You can learn what to listen for and hear more subtle flaws. Studies have shown recording engineers are especially good at it—even better than hardcore audiophiles and musicians. Recording engineers make their living listening for that one clipped note. But they’re still not exempt from expectation bias.

RECORDING ENGINEERS FAIL: Ethan Winer, something of a recording studio expert, points out even recording engineers get caught making adjustments to a mix, and hearing the desired result, when the controls were not even active. Nothing changed with the sound, but their brains deceived them into hearing the expected changes. And in blind listening tests, just like everyone else, they hear differences that don’t exist if they’re expecting differences. (photo by: RachelleDraven CC)

THE TIME FACTOR: Our brains are like leaky sieves when it comes to auditory details. We can’t possibly remember the massive amount of data contained in even one song (roughly equivalent to one million words!). So, here again, our brains out of necessity heavily filter what we remember. The students in the sight example above thought they had accurate memories of the thief but they turned out to be very wrong. The science shows our audio memory starts to degrade after just 0.2 seconds. So if switching from Gear A to Gear B requires more than a fraction of a second you’re losing more information. The longer the gap, the more you forget and the more bias distorts what you do remember.

WHAT WE HEAR: Neurologists, brain experts, hearing experts, and audio experts all agree the human hearing system, by necessity, discards around 99.99% of what arrives at our ears. Our “reptile brain” is actively involved in determining what gets discarded. For example, when you’re listening to someone at a noisy restaurant, the brain does its best to deliver just their voice. And, multiple studies demonstrate, when you’re listening to audio gear the brain also does its best to filter your hearing in the way it thinks you most want. If you’re expecting Gear A to sound different from Gear B, the brain filters each differently so you indeed hear a difference even when there isn’t one. This auditory issue has many names. I generally call it “Subjective Bias” but it’s what’s behind “Expectation Bias”, “The Placebo Effect”, and “Confirmation Bias”. I highly recommend the book Brain Rules by John Medina and there are more resources in the Tech Section.

BACK TO HEADFONIA’S REVIEW: Knowing the O2 measures well creates an unavoidable bias that it might be too “sterile”, which implies, as Lieven said, it could be “missing something” or sound “thin”. When he knows he’s listening to the O2, his brain would involuntarily filter his hearing to deliver the expected “sterile” sound. Someone could set up a blind test between the O2 and a measurably transparent amp Lieven really likes, and he would no longer be able to tell the amps apart because his brain wouldn’t know what to expect. Conversely, you could set up a test where Lieven compares the O2 in a sighted test to a more favored amp but in reality he’s always listening to the O2. Because he’s expecting to hear differences he would hear them—even though he’s really listening to the same amp. He would hear what he expects from each amp. This isn’t unique to Lieven or Headfonia. We all suffer from this bias.

MEASUREMENTS PROVIDE REAL ANSWERS: The O2 performs, as demonstrated by measurements, very similarly with just about any headphone load and is audibly transparent with the HD650. There’s nothing about the HD650 that alters the signal delivered by the O2 in an audible way. It would be useful to measure the other amplifiers used for comparison in the review. It’s possible one or more are less accurate than the O2 which might create an audible difference. See the Tech Section for more.

A BETTER METHOD: Instead of pretending we’re somehow exempt from inevitable filtering and expectation bias, a much better approach is to simply remove the expectations. Odds are extremely high if the authors had used level matched blind testing they would have reached very different conclusions. This can even be done solo with an ABX switching setup allowing instant switching to stay within the critical 0.2 second window. The switching can be done frequently, hours apart, or over weeks. You can listen to 10 seconds of a revealing excerpt or 10 entire CDs. In the end, the ABX setup reliably indicates whether there’s an audible difference. If a difference is detected, an assistant can be used to help blind audition both and determine which is preferred.
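The statistics behind “reliably indicates” are simple binomial math. Here’s a minimal sketch (the 14-of-16 score is just an example):

    from math import comb

    def abx_p_value(correct, trials):
        # Probability of scoring at least this well by pure guessing (p = 0.5).
        return sum(comb(trials, k)
                   for k in range(correct, trials + 1)) / 2.0 ** trials

    print("p = %.4f" % abx_p_value(14, 16))   # ~0.002, very unlikely to be luck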

SUBJECTIVE REALITY: I’ve done the above blind testing using my own HD650 headphones and Benchmark DAC1 Pre. The O2 sounded so similar to the $1600 Benchmark that I, and another listener, could not detect any difference. Simple logic tells us if what Lieven wrote about the O2/HD650 is true, it would also likely hold true for the Benchmark DAC1 Pre/HD650, or indeed any sufficiently transparent headphone amp. Considering the DAC1 is a Stereophile Class A (their highest rating) headphone component, rave reviewed by numerous other high-end audiophile reviewers, there seems to be a serious disconnect somewhere. At least one of the reviewers must be wrong, and the Headfonia authors are far outnumbered.

A REVIEWER’S DIFFICULT CHOICE: Subjective reviewers, like Lieven, have three basic choices in how they want to view a piece of gear like the O2 that measures very well: (photo by: striatic CC)

  1. Try to wrongly claim all transparent gear sounds different from other transparent gear and make up weak excuses to avoid the blind testing that would demonstrate what differences they can really hear.
  2. Try to claim non-transparent (i.e. “colored” lower fidelity) gear with added distortion is somehow better than transparent gear. This would condemn all transparent gear, even some really expensive gear, to being just as “inferior” as the O2.
  3. Concede all audibly transparent gear is accurate and generally will sound the same. They don’t have to like the sound of transparent gear, but they would accept giving the O2 a negative review means also giving a Violectric, DAC1, and lots more high quality headphone amps an equally negative review.

WHEN THERE REALLY ARE DIFFERENCES: Sometimes gear really does sound different for some good reasons. I provide examples in the Tech Section. The Headfonia review, and related comments, have argued lower fidelity “colored” amps are somehow better than accurate high fidelity amps like the O2. This is often called “synergy” or “euphonic distortion” and it’s a matter of personal taste, but it does have some very significant downsides. (photo by: tico cc)

TWO WRONGS DON’T… The synergy theory goes something like: “Because headphones are imperfect it’s OK to pair them with imperfect amps that help compensate for the headphone’s flaws.” Someone might pair bright headphones with an amp having poor high frequency response to help tame the nasty highs. But what happens when you upgrade to say the HD650 with relatively smooth natural highs? Now your headphone amp is going to sound dull and lifeless and there’s no way to fix it. The generally accepted goal, for many solid reasons, is the entire audio reproduction signal chain should be as accurate and transparent as possible right up to the headphones or speakers. That’s what recording engineers who mix our music, and manufacturers of our headphones and speakers, are counting on. If you want different sound see the next paragraph.

AMPS & DACS MAKE LOUSY EQUALIZERS: If you want to alter the sound, a much better method than using a colored amp or DAC is using EQ. That’s what it’s for. There are even decent EQ apps for iOS and Android. For those purists who somehow think DSP or EQ is wrong, odds are your favorite music has already had significant DSP and/or EQ applied. There’s an example in the Tech Section showing even two passes of digital EQ are entirely transparent. Using a headphone amp to compensate for a flaw in your headphones is like buying a music player that can only play one song.
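As a concrete example, the standard “Audio EQ Cookbook” peaking filter takes only a few lines. This is a minimal sketch (the 8 kHz / –4 dB correction is a made-up example, not a recommendation for any particular headphone):

    import math
    import numpy as np
    from scipy.signal import lfilter

    def peaking_eq(fs, f0, gain_db, q):
        # Robert Bristow-Johnson's peaking biquad (b, a) coefficients.
        a_gain = 10 ** (gain_db / 40.0)
        w0 = 2 * math.pi * f0 / fs
        alpha = math.sin(w0) / (2 * q)
        cosw = math.cos(w0)
        b = [1 + alpha * a_gain, -2 * cosw, 1 - alpha * a_gain]
        a = [1 + alpha / a_gain, -2 * cosw, 1 - alpha / a_gain]
        return [x / a[0] for x in b], [x / a[0] for x in a]

    # Tame overly bright headphones with a -4 dB dip centered at 8 kHz
    # instead of buying a "dark" sounding amplifier:
    b, a = peaking_eq(44100, 8000, gain_db=-4.0, q=1.4)
    audio = np.random.randn(44100)   # stand-in for real samples
    eq_audio = lfilter(b, a, audio)

Unlike a colored amp, the correction can be adjusted, or switched off entirely, the moment you change headphones.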

NOT SO EUPHONIC: Amps or DACs that contribute what’s often called “euphonic distortion” amount to using them as crude equalizers, as above. An amp with “euphonic distortion” might help mask other problems by adding audible crud to your music, but that’s putting a band aid on the problem instead of fixing what’s really wrong. You’re stuck with the added crud from euphonic gear no matter what. Even with the most pristine audiophile recording and low distortion headphones, your gear is still going to dumb down your music with its ever present contributions. If you’re really craving, say, the sound of a vintage tube amp, it can be done rather convincingly in software via DSP. That way you can at least turn the crud on and off as desired.

IT’S A PACKAGE DEAL: Sadly most “colored” and “euphonic” audiophile gear also has other problems. To get that soft high end you’re after you might end up with a tube amp and also get a very audible 1% THD. Or, like some Schiit Audio products have done, your “esoteric” amp may destroy your expensive headphones. Gear with lower fidelity in one area is much more likely to have problems in other areas.

FIDELITY DEFINED: The “Fi” in “Head-Fi” is supposed to be fidelity, as in “high fidelity”. If you look up the words, you’ll find it means “being faithful”, “accuracy in details” and “the degree to which an electronic device accurately reproduces sound”. Note the words “accuracy” and “electronic”. Accuracy is something that can be measured in most electronic audio gear (DACs, amps, etc.). So if “high fidelity” is really the goal why would anyone be opposed to measurements that support that goal? Ask yourself that question next time you encounter someone dismissing or downplaying audio measurements.

MAGIC: Some compare the supposedly better sound heard from high-end gear to “magic”. Lieven quotes Nelson Pass claiming “It is nevertheless possible to have a product that measures well but doesn’t sound so good. It is still a mystery as to how this could be.” In effect Pass appears to be saying amplifier sound is mystical. But it’s not. Lots of credible published research demonstrates measurements predict sound quality. If Pass is so confident measurements don’t tell the whole story, where’s his published blind listening test demonstrating his view? Such a result would certainly help sell more of his five figure products, so he has a strong incentive to publish such a test. Still, magic is an interesting analogy. Magicians know how to exploit weaknesses in our senses to deceive us. Convincing someone a $2000 power cord really sounds better is a similar act of sensory deception. (photo by: Cayusa CC)

BUT IT SOUNDED BETTER EVEN IN THE NEXT ROOM! Anyone who’s read many of the blind vs sighted debates has probably come across someone claiming their wife commented how it sounded better even though she was three rooms away in the kitchen. That phenomenon was explained by Stuart Yaniger in his recent article Testing One Two Three. It turns out many audiophiles have a few favorite test tracks they cue up after making changes to their system. The test tracks are an obvious clue to their spouse that they probably changed something in their system. And, spouses being spouses, they sometimes respond with the audio equivalent of “that’s nice dear”. Sit down and do a blind comparison if you really want to know the truth—preferably in the same room as the speakers.

IF THIS IS ALL TRUE WHY DON’T MORE PEOPLE KNOW OR TALK ABOUT IT? The biggest single reason is there’s an enormous amount of money behind high-end audio. It’s largely built on the belief that spending significantly more money on high-end gear yields better sound. Pick up nearly any magazine, or visit nearly any website, and you’ll read how a $3000 DAC sounds better than a $300 DAC. But is that really true, or is it just a self-serving perception that exploits the weaknesses in our auditory abilities? The industry has carefully cultivated and crafted a lot of perceptions over the last few decades, though some of it has been organic—especially given how audio forums work.

SOME IMPORTANT HISTORY: High-end audio was nearly non-existent in the 1970s aside from a few companies like McIntosh, and even those marketed their gear around objective criteria. If anyone had tried to sell a $2000 power cord in 1980 they would have been laughed out of business, but now there are dozens of companies selling them. The wine industry has followed a very similar path. In 1980 people tended to buy wine they thought tasted good. But, courtesy of decades of clever marketing, wine has been made out to be much more complex and important. An entire industry in the USA has grown up around wineries, wine tourism, wine bars, tastings, pairings, flights, expert reviews, etc. Consumers have been taught to doubt their own ability to judge wine and instead trust expert wine ratings with their point scores and elaborate tasting notes. And, not surprisingly, the experts tend to rate more expensive wines much higher. It’s scary how similar it is to high-end audio. Compare “the Warped Vine Cabernet was leathery with hints of green pollen” to “the UberDAC had a congested sense of pace and tarnished rhythms”. Check out the books by Robin Goldstein, Mike Veseth, and Jonathan Nossiter, and the documentary Mondovino.

CONDESCENDING LINGO: Like the wine industry, the audio industry has invented its own complex secret language. This is exposed as a clever marketing technique in the books referenced above. The vague, complex language makes the average person feel they’re not qualified to know what to buy and should defer to the “experts” (who in turn promote more expensive products). So next time you read about slam, pace, congestion, rhythm, blackness, sheen, timing, or other words that could have many meanings, consider it marketing mumbo jumbo and think twice about trusting anything else that person has to say. Misappropriating words to mean something that’s ill-defined and vague isn’t doing anyone any favors except those hawking overpriced gear.

CREATING DOUBT: Someone might have a favorite inexpensive wine but the wine industry often succeeds in making them wonder if it’s “good enough” to serve to guests. They’re tempted to ignore their own perceptions and instead buy something that’s well rated and more expensive. This is good for the wine industry but bad for consumers. (photo by: Mister Kha CC)

AUDIO DOUBT: The high-end audio industry mirrors the wine industry. Websites like Head-Fi create massive amounts of doubt among those who spend much time there. I’ve received hundreds of messages like:

“I have the $200 UltraCan Five headphones and I thought they sounded really good with my iPod. But then I found Head-Fi and read the new $500 UberCan Golds are a lot better but they apparently need at least the $500 UberAmp Titanium to really perform. Should I spend $1000 to upgrade?”

SAD STORIES: I read messages like the one above and it makes me sad. Here’s this college student, probably eating macaroni and cheese for dinner, contemplating spending $1000 more than they’ve already spent to listen to music on their $200 iPod? And this is from a person who, until they found Head-Fi, thought they already had really good sound! In my opinion it’s just wrong—especially when the doubt is largely based on flawed listening perceptions and often hidden commercial influences.

BEHIND THE SCENES AT HEAD-FI: The above is the marketing genius of sites like Head-Fi and Wine Spectator. This all plays into the Head-Fi recipe:

  • Censorship – Head-Fi bans discussion of blind testing on 98% of the site and they have censored or banned members who speak out against Head-Fi or their sponsors. They take many actions to heavily tilt perceptions in favor of visitors spending money they probably don’t need to spend.
  • Questionable Gear - Some of the most hyped gear on Head-Fi, in my opinion, is snake oil—Audio-GD, NuForce and Schiit Audio are three examples arguably based on hype rather than solid engineering. Some of it is sold through very limited sales channels and Head-Fi is probably responsible for the majority of the sales. 
  • Massive Advertising - Head-Fi is a banner farm of advertising that includes $1000 iPod cables. The advertisers want to capitalize on the doubt created among Head-Fi visitors and have the moderators and admins acting on their behalf.
  • Infomercials – Jude, and others at Head-Fi, produce what many consider “infomercials” promoting products for Head-Fi sponsors. When Wine Spectator granted the Award of Excellence to Robin Goldstein’s fake restaurant they suggested he also buy an ad on their website. Those sorts of cozy behind the scenes relationships are present in the audio industry as well. See Stereophile Exposed below.
  • Biased Evangelists – A disproportionate number of Head-Fi posts are made by what some consider shills who “evangelize” and fiercely defend products sold by Head-Fi sponsors. Some might be just rabid fanboys, but if you look at the products that receive the most positive, and greatest number of, comments they’re usually from Head-Fi sponsors and are often somewhat questionable products that probably wouldn’t do nearly as well left to their own (lack of) merits.
  • Deception – To the casual visitor, Head-Fi just seems to be an open forum where average people can talk about headphone stuff. But it’s really owned by a commercial entity that’s using it as a marketing machine. And it’s not “open”—see Censorship above. It’s a private forum run for profit. From what I understand the moderators and admins receive a steady stream of free gear while Jude receives all that plus serious income. It’s no longer just a hobby to these guys.

SAFE MYTH BUSTING: There’s a great show on the Discovery Network called MythBusters. They pick a few myths for each episode and set out to see if they’re true. To my knowledge they nearly always avoid busting myths that might cause serious financial harm to anyone and instead focus on things like toilets exploding when used by cigarette smokers. This same “courtesy” is almost universally extended by the media covering high-end audio. The past has shown crossing the line is dangerous.

THE AUDIO CRITIC: Peter Aczel set out many years ago to expose myths in audio with his magazine and website The Audio Critic. He had a relatively small but significant and loyal following. But, progressively and sadly, he was marginalized in a variety of ways. Today nearly all audio journalists are inextricably in bed with the manufacturers in one way or another. And if you’re openly pro-Obama you don’t get to work on Mitt Romney’s political campaign. (photo: The Audio Critic)

STEREOPHILE QUESTIONED: For those who think I have an axe to grind, it’s nothing compared to Arthur Salvatore’s axe. He strongly questions the credibility of John Atkinson and Michael Fremer at Stereophile. Whatever his personal bias, he presents what appears to be factual data implying Stereophile has heavily favored advertisers and selling magazines over objective audio journalism. Sadly, as with Wine Spectator’s Excellence Awards, I believe this is likely to happen whenever you can follow the money from the reviewers’ paychecks up the chain to the manufacturers of the products they’re supposed to be reviewing.

DECEPTION RUNS DEEP: I used to work in high-end audio. I can attest that many believe the “party line” as they’re more or less immersed in it. It’s much easier to dismiss the rare blind listening test than to consider that their own industry is largely based on erroneous myths. Believing the earth is flat allows them to sleep better at night. So keep in mind even those selling $2000 power cords might genuinely believe their product sounds better. They likely listened with the same expectation bias described above and hence are believers. But that doesn’t mean lots of expensive gear really does sound better.

THE GOLD STANDARD: Across many branches of science blind testing is the gold standard for validating whether things work in the real world. It’s well accepted in medicine and forensics, where human lives are at stake, and throughout the physical sciences. So it’s stunning when audiophiles claim it’s an invalid technique despite mountains of evidence to the contrary dating back to the 1800s. The Tech Section addresses all the common complaints about blind testing. But the short version is there are no practical objections when you do it right.

BLIND AVOIDANCE: Perhaps the most interesting disconnect in the industry is the almost universal avoidance of blind testing. If someone really believes their expensive gear sounds better, why not try a blind comparison? This is where things get fuzzy about what many in the industry really believe. I would think many would be genuinely confident and/or curious enough to at least try some blind testing. But, almost universally, they refuse to participate in blind tests and often actively attack blind testing in general. Why? Deep down, is it because they’re worried the expensive gear, cables, etc. really don’t sound better? Did they secretly try a blind test and fail to hear a difference? Or have others convinced them blind testing is a total sham and the earth is really flat?

BLIND DONE WRONG: When blind testing is used by someone in the industry they arguably misuse it and retain the very expectation bias blind testing is supposed to remove. Some examples of “blind abuse” are in the Tech Section.

FANTASIES: Another way to explain the avoidance of factual information is that some simply enjoy the fantasy. High-end audio is a serious hobby for many. For some, constantly swapping out gear is a big part of the fun. Blind testing could, obviously, jeopardize some of their enjoyment. One person’s hobby is their own. But if I imagine wine tastes better when consumed from vintage Waterford crystal, should I really be trying to convince others to get rid of their pedestrian wine glasses and spend a lot more on vintage crystal because the wine tastes better? This is exactly what most audiophiles are doing at sites like Head-Fi, Headfonia, and many others. They’re convincing others to spend more money with the promise of better sound when they’re often not getting better sound.

FANTASIES ARE NOT ALWAYS HARMLESS: Some have told me I’m pulling the beard off Santa Claus and I should leave deluded audiophiles to their harmless fantasies. But the fantasies are not harmless when, collectively, the often repeated myths are costing people tens and probably hundreds of millions of dollars. Not everyone buying high-end gear has money to burn and reads the Robb Report. Many just want better sound. They don’t need $1000 DACs, power conditioners, expensive cables, etc. to get the sound they’re after yet they’re told over and over they do need those things.

WATERFORD CRYSTAL: I think most would agree the difference between regular glass and crystal doesn’t change the actual taste of wine but it’s fair to say it might change the overall experience. There’s nothing wrong with buying and using expensive crystal wine glasses. But I believe it’s crossing an important line when you try to convince others the crystal itself makes the wine actually taste better. (photo by: foolfillment CC)

MONEY WELL SPENT: I’m not suggesting there’s no reason to buy expensive audio gear. I own a $1600 DAC. There can be satisfying reasons to spend more including higher build quality, more peace of mind, the heritage and reputation behind certain manufacturers, better aesthetics, and other less tangible benefits. But, for the sake of those interested in getting the best sound for their money, it’s best not to confuse sound quality with those other benefits. Just as drinking from crystal doesn’t improve the actual taste of the wine itself, buying a preamp with a massive front panel and a $2000 power cord doesn’t improve the sound even if it might improve the experience for some. I’ll be covering this in much more detail soon in my upcoming Timex vs Rolex article.

ONE MAN’S BOOM IS ANOTHER MAN’S BASS: Perhaps the most accurate headphones I own are my Etymotic ER4s but they’re also among my least used. Why? They’re just not very exciting to my ears and are ruthlessly revealing of poor recordings. I want to be clear nobody can dictate what’s best for someone else when it comes to headphones, speakers, phono cartridges, and signal processing (including EQ). All of these things alter the sound to varying degrees so it comes down to individual preferences as to which is best. But, to be clear, I’ve also presented good reasons why it’s worthwhile for everything else in the signal chain to be as accurate and transparent as possible—that is, to have the highest fidelity. Just because I like, say, a bit of bass boost doesn’t mean I should go buy a headphone amp with a high output impedance that makes my HD650s sound more boomy. That same amp would be a disaster with my Denon D2000s, Etymotics or Ultimate Ears. It’s much better to choose a headphone with more bass, like my DT770, or use some EQ to punch up the bass to my liking. Then I can use my headphone amp with any headphone and I can have my cake and eat it too.

WHERE’S THE LINE? I can understand wanting to own high quality equipment. But where do you draw the line between subjective enjoyment and blatant snake oil and fraud? Is a $3000 “Tranquility Base” with “Enigma Tuning Bullets” from Synergistic Research more like Waterford crystal, or like a magnet clamped on the fuel line of your car claiming to double your gas mileage? What about Brilliant Pebbles, a $129 (large size) jar of rocks that’s supposed to improve the sound of your audio system? Both companies are featured in the myths video (see the Tech Section). Should such products be able to stand up to blind testing? Does it matter if the claimed benefits are real? To me it’s a slippery slope once you start agreeing with the “blind tests don’t work” crowd as the possibilities for unverifiable claims are endless.

TRUTH IN ADVERTISING: Many countries have laws against false claims and sometimes they’re enforced via legal action. For example, companies claiming unproven health benefits for products answer to the FDA. And Apple is currently being sued for misrepresenting Siri’s capabilities in their TV ads. Should audio companies be held to similar standards, or at least be required to use a disclaimer when claims are unproven? When do claims enticing people to spend lots of money become fraudulent?

THINGS TO CONSIDER: If this article has you wondering just what’s real and what isn’t I would suggest considering some of the following:

  • Focus On What Matters To You – If you only want the best sound for your money don’t let others make you doubt your own enjoyment or convince you to spend more than necessary for things you can’t hear or don’t need. Likewise if you want a headphone amp that you can hand down to your kids in 20 years don’t let someone like me keep you from buying a Violectric V200 instead of an O2 even if they sound the same. The Violectric is better made. Of course your kids may have audio brain implants in 20 years.
  • Your Tastes Are Your Own – Subjective reviews are just that: Subjective. Don’t assume you will hear what a reviewer claims to have heard. We all hear differently, have different preferences, etc.
  • Music is Art but Audio is Science – Don’t confuse the art of being a musician with the science of reproducing recorded music. What an amplifier looks like is art, but what it does with an audio signal is entirely explained by science.
  • Use Common Sense – Be aware the controversy over blind testing is almost like a religious war. But try to step back and use common sense, consider the source, references, credible studies, etc. Follow the links in this article to additional resources. Consider the huge wealth of scientific evidence in favor of blind testing, measurements, what matters, etc. vs the subjective claims with little or no objective evidence of any kind to back their claims. If differences between cables, power cords, preamps, etc. were really so obvious don’t you think someone would have set up a credible blind test demonstrating the benefits by now?
  • Consider The Source – When you’re reading about gear, testing, reviews, etc., always consider the source. What gear does the author own? Who do they work for? Where does their (and/or their employer’s) income mostly come from? Are they selling audio gear? Do they receive free gear? Do they otherwise have a vested interest in promoting the status quo? I’m not saying everyone is hopelessly biased by those things but they often contribute bias to varying degrees (see Stereophile Questioned above and the wine references).
  • Try Free ABX Software – This will allow you to conduct your own solo blind tests comparing, say, a CD quality 16/44 track with a 24/96 track. See the Tech Section for more details.
  • Enlist A Friend (with a good poker face) – If you’re convinced that new gear sounds better, find someone else to randomly connect either the new gear or the old gear so you don’t know which you’re listening to. Keep everything else as consistent as possible including the volume, music used, etc. See if you can identify the new gear. If you do, repeat the experiment several times keeping score of your preferences while your friend keeps score of which one was really connected. Compare notes when you’re done.
  • Eliminate Other Variables & Clues – If you’re trying to compare two pieces of gear do everything you can to eliminate anything that might cause expectation bias or provide even subtle clues as to which is which. Let’s say someone is plugging your headphones, behind your back, randomly into source A or source B. You need to make sure the sound the plug makes going into the jack isn’t different enough to be recognizable. Also make sure the music is the same and not time shifted, and that one player doesn’t click when starting or have audible hiss the other lacks, etc. Make sure the person helping either has a poker face or stays out of sight and doesn’t say anything as they may provide clues otherwise. And if you suspect someone else is hearing differences that likely don’t exist, look for these kinds of clues that might be tipping them off—even subconsciously.

OTHER RESOURCES AND REFERENCES: In addition to all the links sprinkled around this article, check these out for more information:

  • Subjective vs Objective Debate – My first attempt at tackling some of this last spring
  • Music vs Sine Waves – An article I wrote debunking some myths about audio measurements.
  • Headphone Amp Measurements – An article I wrote for InnerFidelity on amp measurements
  • 24/192 Music Downloads – High resolution 24 bit music has to sound better, right? Or does it?
  • Matrix Blind Audio Test – A relatively simple and revealing blind test that’s easy to understand
  • Audio Myths Workshop Video – This is a somewhat technical presentation but it has lots of simple common sense examples. See the Tech Section for more details and how to obtain the high quality example files used in the second half to listen for yourself.
  • Why We Hear What We Hear – A technical article on the human auditory system by James Johnston that goes into more detail about what we hear.
  • Dishonesty of Sighted Listening – An excellent technical paper by Sean Olive on evaluating speakers with blind vs sighted listening.
  • Hydrogenaudio – A geekfest of mostly objective audio folks. Lots of blind tests, analysis, and more.
  • Audio Differencing – Consider trying out the free software from Liberty Instruments and their test files. Do capacitors really change the sound?
  • Science & Subjectivism in Audio – An information packed page by Doug Self covering this topic.
  • Meyer & Moran Study – In over 600 trials even audiophiles couldn’t hear a difference between CD and SACD high resolution audio.

THE FUTURE: Look for my “Timex vs Rolex” article where I’ll cover what you get as you spend more, what to look for based on your priorities, etc. I’ll also be comparing some typical gear at widely different price points and exposing some high-end gear with low-end performance.

BOTTOM LINE: We can’t “trust our ears” the way we’re usually led to believe we can. And we can’t trust the ears of reviewers for the same unavoidable reasons. I’m not pretending I can write a few articles and change the minds of thousands of audiophiles—others before me have tried. But even if I only help a few save some serious money by focusing on what’s real, I’ve succeeded. Next time you encounter someone you think might be wrongly arguing for spending more money in the interest of better sound, consider providing a link to this article. If they read it and decide the earth is really still flat, or they want to spend more anyway, it’s their choice. But at least the information helps anyone who’s interested make a more informed decision.

 


TECH SECTION


INTRO: Below are further details on some of the references I used for this article and some added information for the more technically inclined.

BLIND OBJECTIONS: There are several typical objections to blind testing. But nearly every one has been well addressed. They include:

  • Switchgear Masking – Some try to argue the relays or switch contacts, extra cabling/connectors, etc. associated with some blind testing somehow mask the differences between gear. But there have been many blind tests without any extra cables or gear. See the Matrix Audio Test as one example where humans simply swapped cables. What’s more, the music we listen to has already been “masked” by all the unseen cabling and switchgear used in producing it. A lot of high end and well respected gear is full of switches and relays. Objection removed.
  • Listening Stress – Some blind tests are conducted by others administering the test and listeners may feel some pressure to “perform”. But an ABX box can be used solo, in the listener’s own home, under exactly the same conditions they normally listen in. Objection removed.
  • Listening Duration – It’s often said you can’t appreciate the differences between gear by listening to just one track or for just a short period of time before switching. An ABX box can be used at home for whatever duration desired. You can listen to Gear A for a week, then Gear B for a week, and finally Gear X for a week before casting your vote. That could go on for as long as you want. Objection removed.
  • Unfamiliar Environment – Blind tests are often criticized for being conducted in an unfamiliar environment. Never mind audiophiles can pop into a booth or room at a noisy trade show and claim to write valid reviews of how the gear sounded. It’s entirely possible to do blind testing in your own home, with your own gear, your own music, at your own leisure, etc. See above. The Wired Wisdom blind test, for example, was done in the audiophiles’ own homes. Objection removed.
  • The Test System Wasn’t High Enough Resolution – See above. Blind tests have been done with audiophiles’ personal $50,000 systems and the results are consistently the same. Objection removed.
  • Statistically Invalid – There have been some tests with marginal results that leave room for the glass half empty or glass half full approach as to their statistical significance. But there have been many more that have had a valid statistical outcome over many trials. The math is clear and well proven (see the sketch after this list). Objection removed (for well run tests with clear outcomes).
  • Listening Skill – Some claim blind tests fail because the listeners involved were not sufficiently skilled. But it’s well established the filtering and expectation bias of our hearing is involuntary—no amount of skill can overcome it. It even trips up recording engineers. Ultimately, if say a dozen people cannot detect a difference, what are the odds you can? Objection largely removed.
  • Flawed Method – It’s always possible for someone to find flaws with the method. It’s like a candidate losing a political election, demanding a ballot recount, and finding a dead person cast one vote—never mind they need another 7032 votes to win. They want to invalidate the election on an insignificant technicality for entirely biased reasons. The same audiophile claiming to listen to a piece of gear for 5 minutes at a trade show and have a valid opinion about how it sounds somehow claims noise on the AC power line masked all differences in a blind test. Really?
  • It’s Not Real Proof – This is the ultimate weapon for the blind critics. Anyone can always claim something like “it’s possible when rolling dice to get double sixes many times in a row so your test doesn’t prove anything.” And, technically, they’re correct. But, in practice, the world doesn’t work that way. If they really believe in beating impossible statistical odds, they should be spending their time in a Las Vegas casino, not writing about blind testing. Drug manufacturers are not allowed to tell the FDA it’s possible their drug is really better than the placebo even if the statistical outcome of their carefully run trials indicates otherwise. There are well established standards that are accurate enough to use where human life is at stake but some audiophiles still, amazingly, try to claim these methods are somehow invalid.
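
For the statistically inclined, the math behind these outcomes is just the one-tailed binomial distribution. Here’s a minimal Python sketch (the scores are made-up examples) that computes the odds of getting at least a given number of trials correct by pure coin-flip guessing:

    from math import comb

    def p_value(correct, trials):
        # Chance of scoring at least `correct` out of `trials` by pure
        # guessing (one-tailed binomial with p = 0.5 per trial).
        return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

    print(p_value(10, 12))  # ~0.019: beats chance at the 95% level
    print(p_value(7, 12))   # ~0.387: indistinguishable from guessing

Anything at or below 0.05 is the usual 95% confidence threshold. It’s the same math the drug trials mentioned above answer to, just with fewer trials.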

WHEN BLIND IS FLAWED: Some in the industry, and the odd individual, have arguably misappropriated blind listening. For example, the UK magazine Hi-Fi Choice conducts what they call “blind listening group tests”. But instead of following proven and established guidelines for blind testing to determine if the listeners can even tell two pieces of gear apart, they seem to let their judges listen to gear as they choose over multiple listening sessions and take notes on their impressions, presumably without knowing which gear is playing. This method suffers from much of the same filtering and expectation bias as sighted listening: the judges are expecting differences so they hear differences. As far as I can tell (the published details are vague), they’re not directly comparing two pieces of gear looking for statistically significant differences over several controlled trials. There doesn’t seem to be any voting or scoring to determine which differences are real versus simply a result of expectation bias. One could likely blow their entire method out of the water by simply running their usual test with, say, 5 different amplifiers but, instead of playing all 5, repeating one of them twice and leaving one out (a sketch of such a session plan is below). I’ll bet you serious money they would hear all sorts of differences between the two sessions with the same amplifier playing. This has been done in blind wine tasting and some of the greatest perceived differences were between identical wines poured from the same bottle (check out the wine tasting podcast interview).
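
Here’s a hypothetical Python sketch of that sting (the amp names are placeholders). Only the test coordinator sees the mapping; the judges believe they’re hearing five different amps:

    import random

    amps = ["Amp A", "Amp B", "Amp C", "Amp D", "Amp E"]
    dropped = random.choice(amps)              # secretly leave one amp out
    pool = [a for a in amps if a != dropped]
    sessions = pool + [random.choice(pool)]    # ...and play another one twice
    random.shuffle(sessions)

    for label, amp in enumerate(sessions, start=1):
        print(f"Session {label}: {amp}")       # the coordinator's secret key

If the judges’ published notes describe clear differences between the two sessions that secretly used the same amplifier, their method has busted itself.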

UNLISTENABLE: This guy claims to have done a blind test with 5 people and 10 trials each between two pieces of gear. He further claims “Every listener described the Halcro as being 'unlistenable'”. I’m sorry but if something is so bad as to be “unlistenable” do you really think you need a blind test to figure that out? That’s like comparing a good Cabernet to drinking vinegar. Also, for what it’s worth, he doesn’t seem to understand the difference between THD and THD+N where the “N” stands for noise. What he’s really measuring isn’t distortion, as he implies, but the noise floor. I encourage everyone to apply common sense to what they read and consider the source.

OUTPUT IMPEDANCE: I’ve talked a lot about the importance of output impedance. Since I wrote my Output Impedance article in February 2011, Benchmark Media published a paper of their own that explores it from some other angles. Both are well worth a read. The graph to the right shows the impedance of the HD650 both in free air (green) and clamped on a simulated head (gold). There’s a big broad impedance peak centered at about 90 Hz. If someone uses an amp with a sufficiently high output impedance the frequency response will also have a broad audible peak centered around 90 Hz. The higher impedance can also reduce the damping and increase distortion. Combined, this could cause what many might describe as more “boomy” or “bloated” bass. Someone else, however, might think it sounds more “warm” or “full”. Regardless, the change would generally be considered the opposite of “thin” as used by Lieven. It’s possible his typical amp for the HD650 has relatively high output impedance so that’s what he’s used to and considers “normal” for the HD650.
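
The audible damage is easy to estimate because the amp’s output impedance and the headphone form a simple voltage divider. Here’s a rough Python sketch using illustrative HD650-like numbers (roughly 300 ohms nominal with a peak around 500 ohms near 90 Hz; read the real values off the graph if you want more precision):

    import math

    def bass_peak_db(z_out, z_nominal, z_peak):
        # Frequency response error (dB) created by the voltage divider of
        # the amp's output impedance against the headphone's impedance swing.
        return 20 * math.log10((z_peak / (z_peak + z_out)) /
                               (z_nominal / (z_nominal + z_out)))

    for z_out in (0.1, 10, 120):
        print(f"Zout = {z_out:>5} ohms -> +{bass_peak_db(z_out, 300, 500):.2f} dB at 90 Hz")

With these numbers a 120 ohm output (the old IEC recommendation) adds about a 1 dB bass hump, while a fraction-of-an-ohm output contributes essentially nothing.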

LEVELS MATTER: If the level were just a few dB lower when Lieven listened to the O2, that alone would make it sound relatively more “thin” because of the Fletcher-Munson curves and psychoacoustics. Studies show even a 1 dB difference creates a preference for the louder source. But most publishing gear reviewers don’t even have an accurate way to match levels. That’s a serious problem.
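
Fortunately, matching levels isn’t exotic. Measure the RMS output of each device playing the same test tone and compare them in dB; here’s a quick Python sketch with hypothetical meter readings:

    import math

    v_gear_a = 0.412   # volts RMS at the jack with a 1 kHz tone (made-up)
    v_gear_b = 0.446   # the second device, same tone, same meter (made-up)

    diff_db = 20 * math.log10(v_gear_b / v_gear_a)
    print(f"Level difference: {diff_db:+.2f} dB")
    if abs(diff_db) > 0.1:
        print("Outside the 0.1 dB matching target; adjust and re-measure.")

That 0.1 dB target shows up again in the blind listening guidelines further down.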

TESTING ONE TWO THREE: As mentioned in the previous section, Stuart Yaniger, an engineer who designs tube audio gear, recently wrote an excellent print article titled Testing One Two Three (it’s not on the web that I know of). The article has excellent advice for how to properly conduct blind testing and he outlines some of his blind listening experiments:

  • The Bastard Box – The author made an audiophile grade box comparing a $300 audiophile capacitor with a $1 capacitor from Radio Shack with the flip of a switch. He correctly identified the audiophile capacitor in blind listening only 7 of 12 times. Pure random guessing would average 6 of 12. By the statistical guidelines for blind testing, he needed to be correct 10 of 12 times for a meaningful (95% confidence) result (see the sketch after this list). He failed, so Radio Shack wins.
  • Audibility Of Op Amp Buffers – He strung together 10 cheap op amp buffers in series to effectively multiply their flaws and did a blind ABX comparison of no buffers vs ten buffers. He found he could reliably hear the difference, so a buffer was removed (down to 9) and the test repeated. In the end he found with 6 or fewer buffers strung together he could not reliably detect any difference. This is an excellent example of blind testing both demonstrating a clear difference and then demonstrating the opposite: no audible difference. Blind testing let the author easily find the threshold of audible transparency (6 buffers). It’s also further proof that op amps are not the evil devices many audiophiles think they are when even six cheap ones strung in series can be transparent enough to sound just like a piece of wire. See: Op Amp Myths.
  • Listener Fatigue – The author compared two preamps for detectable “listening fatigue” by rigging a way to blindly and randomly listen to one or the other, with two hour meters recording the total listening time of each. If one was less fatiguing, in theory, it should accumulate more total listening hours. After two months of listening to both there wasn’t what he considered a significant difference in the total times, implying both had similar levels of fatigue or no fatigue.
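
As promised above, here’s how the 10-of-12 criterion falls out of the same binomial math used earlier (my sketch, not Yaniger’s own calculation):

    from math import comb

    def min_correct(trials, alpha=0.05):
        # Smallest score that beats coin flipping at the given significance
        # level (one-tailed binomial with p = 0.5 per trial).
        for k in range(trials + 1):
            if sum(comb(trials, i) for i in range(k, trials + 1)) / 2 ** trials <= alpha:
                return k

    print(min_correct(12))   # -> 10, the 10-of-12 criterion cited above
    print(min_correct(10))   # -> 9, if you only run ten trials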

WHY WE HEAR WHAT WE HEAR: James “JJ” Johnston has lots of experience in audio ranging from Bell Labs to DTS. He’s made many presentations on audio and has spoken at Audio Engineering Society events. He’s recently re-formatted one of his presentations into an online article entitled Why We Hear What We Hear. It’s an interesting read with some valuable technical information on how our auditory system works. His article is a primary reference for my earlier comments about how we filter sound. His information correlates well with information from neurologists and brain experts as well as multiple AES papers on listening bias. Here are some highlights:

  • Expecting Differences - Perhaps the most important thing to take away from Johnston’s work is on the last (fourth) page.  He explains how when we listen for different things we will always hear different things depending on what we expect to hear. In other words I can play the same track for a person twice, with everything exactly the same, but if they’re expecting a difference they will in fact hear a difference. It’s not an illusion, or anything they can control, it’s just how our auditory system and brain work. This was also demonstrated in the AES paper Can You Trust Your Ears where the listeners consistently heard differences when presented with identical audio.
  • Auditory Filtering – Johnston talks about our hearing system involuntarily filtering what we hear. There are two layers of “reduction” resulting in somewhere around 1/100,000 of the data picked up by our ears ultimately reaching our conscious awareness. This was discussed in the first section with examples provided.
  • Auditory Memory – Johnston points out our ability to compare two sound sources starts to degrade after only 0.2 seconds. The longer we go between hearing Gear A and Gear B the less accurately we can draw conclusions about any differences. And the longer the gap the more expectation bias creeps into what we remember. Our memory, like our real-time hearing, is by necessity highly selective. Our brains are constantly discarding large amounts of “data”. Other studies back this up including the AES paper Ten Years of ABX Testing which found near immediate comparisons were far more likely to detect differences than extended listening with at least a few minutes between changes. This is the opposite of the common erroneous belief that long term listening is more sensitive.

AUDIO MYTHS WORKSHOP: The Audio Engineering Society (AES) is made up of mostly audio professionals especially those involved with production work including CDs, movie scores, etc. Their conventions are heavily attended by recording engineers, professionals who make their living hearing subtle audio details, and the people behind the gear they use. At the 2009 Audio Myths Workshop a panel of four experts presented a lot of good information on what we think we hear and why. If you don’t have an hour to watch the video, or just want to know what to keep an eye out for, here are some highlights I found especially worthwhile:

  • The Need For Blind Testing – JJ points out it doesn’t matter how skilled a listener you are: if you know something about the two things you’re trying to compare, your brain is unavoidably going to incorporate that information into what you hear and what you remember later.
  • A Dirty Trick – JJ recounts a story about a fake switch being set up to supposedly switch between an expensive tube McIntosh amplifier and, in his words, a “crappy little solid state amplifier”. In reality the switch did nothing and, in either position, everyone was listening to the solid state amp. But the audiophiles listening unanimously preferred the supposed “sound” of the tube amp. They heard what they were expecting to hear. His engineer friends tended to prefer the solid state amp with only one claiming they sounded the same. This is further confirmation of expectation bias.
  • Stairway To Heaven – Poppy Crum demonstrated how it’s easy to hear most anything when you’re told what you should be hearing. She played an excerpt from Stairway to Heaven backwards and there’s not much in the way of discernible lyrics. Then she put a slide up with suggested lyrics and most can easily hear exactly what’s on the slide with a fairly high degree of certainty. This is somewhat analogous to reading someone’s gear review, buying the same gear yourself, and hearing what you’ve been told you should hear.
  • State Legilature – No that’s not a typo. Poppy Crum plays a news announcer clip multiple times with everyone listening carefully. The “S” in the middle of the word legiSlature was edited out and it goes largely unnoticed by the entire audience. The brain helpfully fills in the missing letter for you automatically. Despite listening carefully, the audience of audio professionals mostly fails to hear the obvious problem because their brains automatically “invent” the missing audio information.
  • Crime Suspect – They play a TV clip of a theft occurring in a classroom. The students are asked to identify the thief. Many are so certain they can pick the right guy from a photo line up they would testify in court. But, in the end, they not only pick different photos, but they’re all wrong despite their certainty. It’s further proof of how the brain filters and how unreliable our senses can be even when we think we’re certain.
  • Argue To The Death – They talk about experiences on various audio forums where people argue subjective claims to the extreme using the argument “you can’t prove it doesn’t sound better”. No, but we can prove if typical people under realistic conditions can detect any difference with blind listening. Here’s a humorous parody of a typically “passionate” forum discussion (about hammers) that’s, sadly, analogous to some audio threads I’ve seen.
  • Identical Results – Measurements can produce consistent, reliable, and repeatable results. I can test Gear A on Monday and re-test it a month later and get very close to the same result. That’s far from true with sighted listening tests where you can get very different results 10 minutes later if someone is expecting a difference. Further, John Atkinson and I can measure a DAC and we should both get very similar measurements as long as we use accepted techniques and publish enough details about the test conditions. He can verify my measurements, and I can verify his, with a high degree of certainty. Without carefully controlled blind testing, that’s impossible to do by ear.
  • Transparency – Ethan Winer, an acoustics expert, discusses how audio electronics can be defined as audibly transparent by four broad categories of measurements and he provides criteria for complete transparency. He states that gear passing all these criteria will not contribute any audible sound of its own and will in fact sound the same as any other gear passing the same criteria:
    • Frequency Response: 20 Hz to 20 kHz ±0.1 dB
    • Distortion: At least 100 dB (0.001%) below the music while others consider 80 dB (0.01%) to be sufficient and Ethan’s own tests confirm that (see below).
    • Noise: At least 100 dB below the music
    • Time Based Errors – In the digital world this is jitter and the 100 dB rule applies for jitter components.  (photo: Ethan Winer)
  • Comb Filtering – This is a problem with speakers and it’s likely the source of some audiophile myths. Ethan demonstrates how even changing your listening position (where your ears are in the 3D space of the room) by a few inches can create very audible changes in the high frequency response. Let’s say you sit down to listen, then get up and swap out a cable and sit down again, but this time you’re leaning further back. You may well hear differences but it’s not the cable. It’s the nature of the room acoustics and comb filtering. This is in addition to all the other reasons you might hear differences unrelated to the cable.
  • Noise & Distortion Thresholds – Ethan sets up a noise track and mixes it with music at various levels to simulate similar levels of noise and distortion. I can still plainly hear the noise (on the WAV files) at –60 dB (0.1%) and barely hear it at –70 dB (0.03%) on the cello excerpt. It’s gone, to my ears, at –75 dB (0.018%). So my usual rule of thumb of –80 dB (0.01%) is further validated with some conservative margin for error. When, for example, I point out dissonant inter-channel IMD in the AMB Mini3 amplifier reached –55 dB, DAC alias components in the range of –50 dB to –25 dB, or 1% THD (–40 dB) in a single-ended or tube amp, Ethan has helped confirm it’s entirely plausible such distortion could be audible. Here’s a link to the audio files Ethan uses in the video. The cello example (Example2 in the WAV files) starts at about 33:30 into the video. The noise starts out for two cycles at –30 dBFS, then two cycles at –40 dBFS, and so on. Listen for yourself! When you see a spec like 1% THD you can think about the plainly audible noise at –40 dB in Ethan’s examples. This is somewhat oversimplifying the entire audibility threshold issue but it’s a reasonable worst case approximation. In this case it’s best to err on the side of being too conservative (likely the reason for Ethan’s own –100 dB transparency criteria).
  • Phase Shift – Ethan demonstrates how even large amounts of phase shift applied at various frequencies is inaudible as long as it’s applied equally to both channels. What can matter is interchannel phase shift—any significant shift between the right and left channels. Relative phase is also critical in multi-driver or multi-speaker set ups (i.e. between a woofer and tweeter or front and rear speakers in a surround set up).
  • Null (Differencing) Test – Towards the end Ethan talks about how null testing (which includes audio differencing) not only reveals every kind of known distortion (THD, IMD, phase, noise, frequency response errors, etc.) but also even unknown kinds of distortion. You’re literally subtracting two signals from each other and analyzing/listening to whatever is left over (see the sketch after this list). With test signals this is something the dScope audio analyzer can do in real time with analog, digital, or mixed domains—analog and digital. The dScope, in effect, subtracts the test signal from the output of the device and adds the required amount of gain to boost what’s left over up to a usable level. It’s a test that can bust a lot of myths about various things that are claimed to degrade an audio signal. It can be done, using music, driving real headphones or speakers, in the analog domain (i.e. the Hafler and Baxandall method) or in the digital domain (i.e. Bill Waslo’s software from Liberty Instruments). Ethan presents a few examples including the next item.
  • Reverse EQ – Many people seem to think even high quality digital (DSP) EQ does irreparable harm to music. Ethan has an example where he applies EQ, then applies the exact opposite of the EQ, and then nulls (see above) the result with the original version (no EQ). He gets a near perfect null indicating even with two passes through the EQ DSP no audible damage was done.
  • Other Examples – Ethan has further demonstrations aimed more at recording work rather than playback. He does an interesting “bit truncation” demo but it’s done in such a way as to make comparing the bit depths difficult.
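
To make the null test concrete, here’s a minimal numpy sketch. The simulated “device under test” does nothing but add a 0.1 dB gain error, which also illustrates why level matching matters so much before differencing:

    import numpy as np

    fs = 48000
    t = np.arange(fs) / fs                           # one second of audio
    original = 0.5 * np.sin(2 * np.pi * 1000 * t)    # 1 kHz test tone
    processed = original * 10 ** (0.1 / 20)          # pretend DUT: +0.1 dB gain

    residual = processed - original                  # the null (difference)

    def rms(x):
        return np.sqrt(np.mean(x ** 2))

    depth_db = 20 * np.log10(rms(residual) / rms(original))
    print(f"Null depth: {depth_db:.1f} dB ({10 ** (depth_db / 20) * 100:.2f}%)")

Even a 0.1 dB level error leaves a residual at only about –39 dB (roughly 1.2%), so meaningful nulls demand exact gain and time alignment. The same dB-to-percent conversion, 10^(dB/20) × 100, is how the figures above (–60 dB = 0.1%, –80 dB = 0.01%) are derived.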

BLIND LISTENING GUIDELINES: There’s a lot of info on how to conduct reasonably valid blind testing and some of it gets very elaborate. But don’t let that scare you away from trying it. If you’ve never tried blind listening before you might be surprised how quickly it opens your eyes—even a very simple test. As many wine critics have found out, comparisons immediately become much more difficult when you don’t know which is which. You can get meaningful results just by following some simple guidelines:

  • Compare only two things at a time until you reach a conclusion 
  • Always use identical source material with no detectable time shift between A and B 
  • Match levels to ideally 0.1 dB using a suitable meter and test signal
  • Watch out for unintentional clues that might hint at which source is which
  • Ideally perform enough trials to get a statistically valid result (typically 10 or more but even 5 can be informative)

TRYING ABX YOURSELF: Foobar 2000 has an ABX component and there are other free ABX options. This is an excellent way to, say, compare a CD quality 16/44 FLAC file with a high resolution FLAC file. There are files available at Hydrogenaudio or you can make your own if you have any good 24/96 or 24/88 demo tracks. But always do a high quality sample rate conversion of the high resolution track to produce the 16/44 track. Or compare a 256K AAC file with a 256K MP3 file ripped from the same CD track. Etc. What’s important is that you eliminate all other differences; the levels have to be the same, etc. Audacity is free and can be used for editing the files. If the tracks are not in time sync you can configure the ABX options to restart the track each time you switch.
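
If you’d rather roll your own 16/44 version of a high resolution track in Python than use a dedicated converter, polyphase resampling plus dither is one reasonable approach. Here’s a sketch using scipy (the random stand-in signal takes the place of an actual file load, which you might do with the soundfile package):

    import numpy as np
    from scipy.signal import resample_poly

    fs_in, fs_out = 96000, 44100
    hi_res = np.random.uniform(-0.5, 0.5, fs_in * 5)   # 5 s of stand-in audio

    # 44100/96000 reduces to 147/320, so the conversion is a clean ratio:
    cd_rate = resample_poly(hi_res, up=147, down=320)

    # TPDF dither while truncating to 16 bits keeps quantization benign:
    lsb = 1.0 / 32768
    dither = (np.random.random(cd_rate.size) - np.random.random(cd_rate.size)) * lsb
    cd_16bit = np.round((cd_rate + dither) * 32767).astype(np.int16)

Write cd_16bit out as a 16/44 WAV or FLAC, verify the levels still match the original, and you have a fair ABX pair.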

FOOBAR ABX: There’s a YouTube Video on ABX testing with Foobar. The software and ABX component can be downloaded here. The listener can, at any time, choose to listen to amp A, amp B, or “amp X” which will randomly be either A or B and they won’t know which. The software keeps track of “X” and lets the listener vote when they feel they know if “X” is really A or B. A normally hidden running score is kept of the votes. After ten or more votes, the final results can be revealed and checked for a statistically valid result.  If a difference is detected the system can be set up by someone else so the listener doesn’t know which is A and which is B. They can then pick a favorite, again taking as much time as they like, without the unavoidable expectation bias that occurs when they know which is which.
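
The bookkeeping is simple enough to sketch in a few lines of Python. This is just a console mock-up of the protocol (not Foobar’s actual implementation) to show what the software is doing behind the scenes:

    import random

    def abx_session(trials=10):
        score = 0
        for n in range(1, trials + 1):
            x = random.choice("AB")        # X is secretly A or B this trial
            # ...listen to A, B, and X here, for as long as you like...
            vote = ""
            while vote not in ("A", "B"):
                vote = input(f"Trial {n}: is X really A or B? ").strip().upper()
            score += (vote == x)           # the normally hidden running score
        print(f"Final score: {score} of {trials} correct")

    abx_session()

Per the binomial math earlier, you’d want at least 9 of 10 correct before claiming you really heard a difference.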

A BLIND ANECDOTE: I received some cranky emails from someone who had read my Op Amp Myths article and basically said I must be crazy or deaf for not hearing the obvious differences between op amps. He was a serious DIYer who claimed to have designed and built a lot of his own gear. I suggested he rig up two identical pieces of gear so he could level match them and switch between them with both having socketed op amps allowing any two op amps to be directly compared. I figured he’d never bother. Much to my surprise the next email a few days later was an apology and he explained it was really eye opening and thanked me. I’m fairly sure he’s going to be approaching his future designs somewhat differently.

REAL ENGINEERING OR SNAKE OIL? I’m curious what others think of the $3995.00 BSG Technologies qøl™ Signal Completion Stage? It’s an all analog device (line-in/line-out) that, according to BSGT, “Reveals information that, until now, has remained hidden and buried in all audio signals”. How, I wonder, does it do that? The Nyquist Sampling Theorem dictates how much information can be reconstructed from a 16 bit CD. And a decent DAC, as far as I know, reconstructs all of it. In fact, a good 24 bit DAC can reconstruct data well below the 16 bit noise floor of a CD—i.e. the DAC is better than the CD itself. So, from what I know, every sample on the CD can be reliably and completely converted into an analog signal. Hence there should not be any “hidden information” on the CD. One can argue information got lost further upstream in the recording chain—i.e. at the microphones, due to the 22 kHz Nyquist bandwidth limit, etc. But that information isn’t even on the CD hence it would be quite a miracle if it could be reconstructed out of nothing. Anyone else have any theories on this nearly $4K device and their main marketing premise?
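
The “nothing hidden” point is easy to demonstrate with a sketch. Below, a tone at –100 dBFS (well below the roughly –96 dB quantization floor of 16 bits) is quantized with ordinary TPDF dither and still shows up plainly in the spectrum, i.e. the information survives on the CD with no $4K box required:

    import numpy as np

    fs, n = 44100, 1 << 18
    t = np.arange(n) / fs
    tone = 10 ** (-100 / 20) * np.sin(2 * np.pi * 1000 * t)  # -100 dBFS tone

    lsb = 1.0 / 32768                    # one 16-bit step with full scale = 1.0
    dither = (np.random.random(n) - np.random.random(n)) * lsb   # TPDF dither
    quantized = np.round((tone + dither) / lsb) * lsb

    spectrum = np.abs(np.fft.rfft(quantized * np.hanning(n)))
    tone_bin = round(1000 * n / fs)
    margin = 20 * np.log10(spectrum[tone_bin] / np.median(spectrum))
    print(f"1 kHz tone sits {margin:.0f} dB above the dithered noise floor")

On a typical run the tone stands tens of dB clear of the per-bin noise floor, which is exactly why a good 24 bit DAC can resolve material below the 16 bit noise floor.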

HAVE FUN! The above is just one example. I know several audio fans who have had fun discovering just what they can really hear and what they can’t using blind testing. It opens up an entirely new perspective and provides a new way to experiment with audio. It’s especially useful, and accessible, for DIYers.