Last month, I promised to write a bit about Vivid Audio’s Oval B1 Decade and KEF’s Blade Two loudspeakers, reviews of both of which I’ve just finished writing. (The review of the Vivid was published last month; the review of the KEF will appear April 15.) In the meantime, something more pressing has come up: many questions about the Master Quality Authenticated (MQA) system, some of which I feel I should address now.
MQA was created by Meridian Audio, and announced in December 2014. Recently, Meridian has spun off MQA into its own legal entity, MQA Ltd., of which Meridian Audio is now a licensee. Bob Stuart is a cofounder of both companies.
When Meridian first announced the MQA system for digitally encoding recordings of music, I was intrigued at first -- but lost interest after reading the press release. As far as I could tell from the little information then provided, Meridian was claiming that MQA would deliver high-resolution PCM files via a far more efficient compression scheme than that used to create FLAC or ALAC files, which are losslessly compressed and already about half the size of WAV files, which are uncompressed. In short, MQA could make big files much smaller, which would make them quicker and easier to stream or download -- and without sacrificing quality, as MP3s do.
The press release also claimed that the MQA system was developed using the “latest neuroscience and psychoacoustic research that shows how we identify and locate sounds,” and that their method included “instructions for the decoders and D/A converters.” Because of this, the sound produced would be even better than we’re currently getting from typical hi-rez files. However, I didn’t take these latter claims too seriously; they weren’t supported by any details, and read like ad copy.
That Meridian might be able to compress hi-rez files so efficiently was no surprise -- after all, they’d developed Meridian Lossless Packing, which was used for DVD-Audio and HD DVD when they were still around, and is still used in Blu-ray Discs and Dolby’s TrueHD. Meridian knows what they’re doing in that regard. But making a big file smaller? Big whoop. Bandwidth constraints and data limits in the home were a concern a few years ago, but since then technology has steadily improved, particularly with the advent of movie-streaming services such as Netflix, which use enormous amounts of bandwidth and churn out huge amounts of data. Internet providers have accordingly stepped up their services, and now it’s pretty inexpensive to get extreme bandwidth and high data limits (even unlimited, as I have), at least in North America. So whether I’m transferring a 32-bit/384kHz stereo file in its full glory, or the same file compressed to the size of a 24/48 or even 24/44.1 file -- as is claimed for the MQA system -- makes little difference to me.
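For perspective on those file sizes, here’s a quick back-of-the-envelope calculation of uncompressed PCM data rates (bit depth × sample rate × channels; the function name is my own, for illustration only):

```python
# Rough uncompressed PCM data rate in bits per second:
# bit depth x sample rate x channel count.
def pcm_bitrate(bit_depth, sample_rate_hz, channels=2):
    return bit_depth * sample_rate_hz * channels

hi_rez = pcm_bitrate(32, 384_000)  # 24,576,000 bit/s, about 24.6Mbps
lower = pcm_bitrate(24, 48_000)    #  2,304,000 bit/s, about 2.3Mbps
print(hi_rez / lower)              # the 32/384 stream is roughly 10.7x larger
```

With a fast, unmetered connection, even the tenfold difference between those two streams is no practical obstacle -- which is the point being made above.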
That said, bandwidth and data limits are still issues for cellphone service, so compression has definite value for streaming over that kind of service. Probably most relevant to audiophiles is that streaming services would rather deliver smaller files than larger ones, because it’s cheaper and less hassle -- so if they bite on this, as Tidal has indicated it will, that’s another good reason for compression. Still, for me . . . MQA? Meh.
Then came the 2016 Consumer Electronics Show . . .
I don’t know if I missed the invitation or simply wasn’t invited, but during CES 2016 Bob Stuart himself was demonstrating the system, at the Venetian. Bloggers covering it weren’t writing that MQA files sounded as good as uncompressed files; instead, they were saying that the MQA files sounded better -- much better. This sounded ludicrous to me: When you play a compressed music file, the best you can hope for is that it sounds the same as it did before compression.
At that time, I also began receiving questions from some manufacturers, who wondered if they now had to include MQA decoding in their D/A converters -- they, too, had heard the stories about better sound, had been approached regarding licensing the technology, and didn’t know what to make of it all. They wanted to know if I thought MQA would be the Next Big Thing.
But I’d been out of the loop on MQA -- some research was in order, and there was more to look at in March 2016 than there had been in December 2014. So I read what was then available on the MQA website, as well as two Meridian patents that seem to have to do with MQA: “Doubly compatible lossless audio bandwidth extension” and “Digital encapsulation of audio signals.” I also watched the MQA YouTube videos that had been produced since the launch, and scoured the Internet for as much technical information as I could find on the subject. That brought me to “Beyond High-Resolution,” a 3155-word article about MQA by Robert Harley, editor of The Abso!ute Sound, published shortly after MQA was announced. Then I read “Beyond High Resolution: MQA,” Mark Waldrep’s somewhat critical look at Harley’s article, published on Waldrep’s own site, Real HD-Audio, in May 2015. And while I didn’t hear the demos at CES 2016, I did hear Bob Stuart talk a bit about MQA during a panel discussion at the 2016 ALMA International Symposium, a technical conference for loudspeaker designers and other technical types. (ALMA was founded in 1964 as the American Loudspeaker Manufacturers Association, but in 2001 changed its name to Association of Loudspeaker Manufacturing and Acoustics. The ALMA International Symposium is held annually in Las Vegas, Nevada, usually a few days before CES.)
I wish I could say that how MQA works is clear to me, but it’s not, and probably intentionally so -- I’m sure the developers want to protect their intellectual property. The patents, of course, were the most detailed documents I looked at, but I’m no digital-audio engineer -- I found it difficult to figure out precisely what’s happening at each stage of the process. Instead, I had to more or less piece together everything currently presented by MQA Ltd., and by Harley in his article, which basically summarizes Meridian’s original literature. MQA seems to comprise at least two technologies, of which compression is only one.
From what I understood, the compressing of hi-rez music files, which is called Music Origami, has nothing to do with producing better sound than can be heard from the original, uncompressed file. Instead, it’s what it seemed to me at the outset: big files made smaller for faster, easier delivery. Stuart describes the process in a YouTube video about MQA, “Music Origami,” and while his description isn’t nearly as detailed as I’d have liked, he hints at what’s happening. It is called origami because, as in that Japanese art of folding paper into beautiful objects, during encoding higher frequencies are “folded” into lower frequencies, which on decoding are “unfolded” into the original file. A cruder way to think of it: Take a big sheet of paper that won’t fit through a little hole, scrunch it up into a little ball, push it through the hole, then unscrunch it on the other side -- but with no wrinkles in the paper.
Of course, nothing is actually folded. Instead, the MQA system seems to use a clever data-encoding scheme in which the bandwidth of 0Hz to 768kHz is divided into three regions -- A, B, and C -- with a different bit priority given to each. From what I understand from my research, region A is 0Hz-24kHz (20Hz-20kHz is widely accepted to be the range of human hearing), region B is 24-48kHz, and region C is 48-768kHz. The encoding remains PCM-based: the topmost 16 to 18 bits of each sample (I couldn’t find a reference to the exact bit depth, so I’m guessing based on the graphs shown in the “Music Origami” video) are used to encode region A, while the remaining bits are used for regions B and C.
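Purely as a toy illustration of the folding idea -- this is my own sketch of the concept, not MQA’s actual algorithm, and the 17/7 bit split is an assumption -- here’s how data for the upper regions could be packed into the low-order bits of a 24-bit sample and later unpacked:

```python
# Toy illustration (NOT MQA's actual algorithm): pack "ultrasonic" data
# into the bottom bits of a 24-bit PCM sample, then recover both parts.
AUDIO_BITS = 17               # bits kept for the audioband (region A) -- assumed
FOLD_BITS = 24 - AUDIO_BITS   # low-order bits reused for folded data
LOW_MASK = (1 << FOLD_BITS) - 1

def fold(sample_24bit, ultrasonic_data):
    """Zero the sample's low bits, then pack ultrasonic bits into them."""
    return (sample_24bit & ~LOW_MASK) | (ultrasonic_data & LOW_MASK)

def unfold(folded_sample):
    """Split a folded sample back into audioband and ultrasonic parts."""
    return folded_sample & ~LOW_MASK, folded_sample & LOW_MASK

folded = fold(0b101010101010101010101010, 0b0110011)
audio, ultra = unfold(folded)
```

A decoder unaware of the scheme simply plays the folded sample as ordinary PCM, with the packed bits sitting far below the audible level. Note, too, that the sample’s own bottom bits are discarded to make room -- which is exactly where questions about losslessness arise.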
Using more bits to encode the frequencies within the audioband than those above seems a good way to save space -- a full 16 bits could be used to encode, say, a sound at a frequency of 1kHz, which would potentially offer 96dB of dynamic range, because music might be able to make use of that range. But you wouldn’t need 96dB or more (24-bit encoding offers 144dB) for, say, 50kHz, because, in typical musical signals, frequencies that high are captured at extremely low levels, which means you can get away with using far fewer bits. This also illustrates why traditional PCM encoding is so inefficient -- regardless of the frequency, the bit depth remains the same, which is overkill for frequencies above the audioband.
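Those dynamic-range figures follow from the standard rule of thumb that each bit of PCM resolution adds about 6.02dB (20·log10 of 2):

```python
import math

# Theoretical dynamic range of n-bit PCM: 20*log10(2^n), about 6.02dB per bit.
def dynamic_range_db(bits):
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16)))  # 96 dB
print(round(dynamic_range_db(24)))  # 144 dB
```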
What’s more, the MQA developers have found room to write it all within conventional file formats such as FLAC, ALAC, and WAV -- or any lossless format. Not only is the final file size just 50% bigger than a 16/44.1 file ripped from a CD, but decoders that don’t support MQA will still recognize those file formats and play the MQA-encoded files -- they’ll simply ignore the higher frequencies (regions B and C) folded into the bottommost bits. This is why the patent is titled “Doubly compatible lossless audio bandwidth extension”: It’s compatible with components not equipped with MQA decoders and those that are. Pretty smart.
But if that’s true, I don’t see how MQA can be truly lossless, as the title of one of Meridian’s patents indicates. If a recording was originally made with a bit depth of 24, whether all of those bits were needed or not, but MQA has a total of only 24 bits to work with to get all three regions folded in, then a number of the original bits must be discarded, right? That’s my understanding, anyway. One way to test this would be to create an MQA file from an uncompressed hi-rez file, then convert it back to the uncompressed state. You can do that all day long with, say, WAV and FLAC files -- convert WAV to FLAC and back again (and again and again) -- and you’ll find that the WAV file will remain unchanged. But until we have MQA files in hand and a conversion utility is available, we won’t know for sure. That’s not to say MQA can’t sound very good, or even be the best thing you’ve ever heard, particularly if what I describe below is true; what it does say is that I can’t see MQA compression of an uncompressed, hi-rez music file being bit-for-bit perfect. If I’m wrong about this, I’d like to learn more.
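That round-trip test is easy to express: encode, decode, and check that the result is bit-identical to the original. In this sketch, zlib merely stands in for any lossless codec such as FLAC or ALAC; an MQA encoder/decoder pair, once available, could be dropped into the same harness:

```python
import hashlib
import zlib

# Round-trip losslessness check: a codec is lossless only if
# decode(encode(data)) is bit-for-bit identical to data.
def is_lossless_roundtrip(data, encode, decode):
    digest = lambda b: hashlib.sha256(b).hexdigest()
    return digest(decode(encode(data))) == digest(data)

pcm_bytes = bytes(range(256)) * 1000  # stand-in for raw PCM audio data
print(is_lossless_roundtrip(pcm_bytes, zlib.compress, zlib.decompress))  # True
```

Any codec that discards bits -- as MQA’s folding appears to -- would return False here, no matter how good it sounds.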
As for sound quality, this claim is from the MQA website: “With MQA, we go all the way back to the original master recording and capture the missing timing detail.” It’s this supposed recapture of temporal information that, the MQA literature claims, allows MQA-encoded recordings to better convey spatial information, and be more pleasurable to listen to than standard CD- or even hi-rez files.
MQA Ltd. hints at their secret sauce for these sonic improvements in their video “Ringing and Filters,” in which Stuart describes the destructive nature of brickwall filters. As he puts it, they “smear time.” This is not to be disputed -- the pre- and post-ringing effects of filters that he describes in the video have been talked about for far longer than MQA has been around. I remember digital designer Ed Meitner talking about pre- and post-ringing in the late 1980s, and patenting a digital filter to combat those problems in the ’90s. Meridian itself has been using apodizing filters for well over a decade in their CD players and DACs to battle time smearing and other problems.
Later in this video, Stuart says, “In MQA, we use the right filters, or we fix these filters at both ends.” It’s this aspect of MQA that inspires all sorts of questions. Let’s separately address “we use the right filters” and “we fix these filters at both ends.”
If the so-called “right filters” are already used for recordings, and you don’t care about the data compression MQA provides, does MQA result in any benefit? There should be nothing for MQA to correct for and improve on, right? What’s more, with recordings made at a sampling frequency of 96kHz, 192kHz, or even higher, filter effects are already very far above the audioband, even without MQA. So when Stuart is talking about time smearing affecting recordings, is he talking mostly about older recordings made with much lower sampling frequencies, in which the filter is closer to the audioband? Or is this still a problem with newer recordings made at much higher sampling frequencies? I’d like to know, but the information supplied by MQA Ltd. does not clearly explain this.
As for fixing the filters “at both ends,” this does seem to be a nifty aspect of the MQA encoding scheme, in which information about the time smearing gets coded with the original signal during what’s called “the MQA encapsulation process.” At the end of the chain, the MQA-equipped decoder uses the embedded instructions to correct for the time smearing. This is reminiscent of the HDCD-encoding process for CDs in the 1990s, which didn’t correct timing issues, but was able to re-create 20 bits of resolution from 16 by embedding digital-filter instructions in the least-significant bit of a PCM datastream.
But if such correction of time smearing is indeed what’s happening, once again, I need to ask: Is there any benefit to the compression MQA offers? Couldn’t similar corrections be applied to uncompressed PCM signals? Moreover, if this time smearing is happening at the beginning of the recording process, and afterward can be corrected for in the digital domain, couldn’t the correction be made directly to the original digital music file? If so, then you wouldn’t have to worry about encoding the details into a compressed version of it and have the digital filter perform its magic.
Additionally, MQA Ltd. has stated that the MQA system can correct time smearing if the analog-to-digital converter used in the recording process is known. But what if that information is not available? Surely that will happen often, particularly with older recordings. And what if more than one ADC was used in the recording process -- can MQA correct for two or more ADCs? Again, MQA Ltd. needs to explain all this so that we can better understand their new system.
Finally, there’s the question of what, exactly, was demonstrated in those A/B comparisons at CES 2016. From the descriptions I read, various file types, including uncompressed hi-rez PCM files, even MP3s (but why?), were compared to MQA-compressed versions of the same recordings. Much was written about the improvements in sound, but not, in my opinion, enough about the recordings themselves. When I read the blogs, my first question was: Do we know that the master files used for each were identical?
Few bloggers offer this information or ask this question. However, on Stereophile’s website I found “MQA’s Sound Convinces Hardened Showgoers,” by Jason Victor Serinus, for which some due diligence had been done. Serinus said that recording engineer Peter McGrath, who now works for Wilson Audio Specialties and who participated in the demos, supplied some of the tracks that were compared, and that he “had previously informed Stuart and the MQA team that he had used a Meitner ADC and Grado mikes,” so that they could better make their corrections. This goes a long way toward creating a true comparison. According to Serinus, McGrath’s recordings sounded better than before the MQA process was applied, which is promising. But it still raises the question: What happens if no one knows which ADC was used for the original recording?
Then things take a mysterious turn. Later in his article, Serinus says, “In a comparison that had Michael Fremer’s ears perking up, we next listened to some music from Keith Jarrett’s famed live 1975 [The Köln Concert] (recorded in the Opera House in Cologne, Germany). Lucky for us, Stuart had been able to get his hands on a 96kHz transfer of ECM’s analog master tape.” I have to wonder what that “comparison” comprised, because that statement is immediately followed by this:
In a follow-up call, days after CES ended, Michael Fremer had this to say about what he heard:
The CD of the recording has an unfocused, diffuse image of a piano hanging in space, with the room reverb mixed in and confusing the picture. There was no image, there was no there there. My mind couldn’t get engaged with it, which was disturbing, because it didn’t make sense. That’s why many people don’t sit down and listen to a CD with the lights out and stay engaged, as you do with a record. When the MQA version was played, there was a coherent attack, sustain, and decay. Finally, I could visualize Jarrett playing a piano in three-dimensional space, and the space behind it. It was like what a record sounds like. I think that since Jarrett is also an audiophile and likes vinyl that he, too, would hear it the same way.
So are we to assume that an original CD was used to demonstrate non-MQA, while the new 96kHz transfer was used for the MQA version? If so, that doesn’t sound like much of a comparison at all. If that wasn’t the case, someone needs to clear up the confusion by describing exactly which recording versions were used for evaluation. Only then can definitive judgments be made. If the source material in the demos was not always identical, then the results are suspect -- we know that a remastered version of a recording can sound drastically different from an earlier mastering. Suffice it to say that, based on what’s been written so far about MQA’s sound quality, I’m not convinced of the worth of these comparisons.
At this point, I’m neither for nor against MQA. I haven’t heard it, and I don’t know enough about what it does yet, despite the research I’ve done.
In terms of MQA’s ability to compress a signal, and based on Meridian’s past work, I have no reason to doubt their claims that MQA can do it well. I see the benefits of reduced bandwidth and smaller datastreams for some people, particularly if streaming services such as Tidal are encouraged to adopt it. But is MQA truly lossless? I’d like to know more. Sound quality is a different matter altogether. Had I attended the demos at CES 2016, heard differences, and recognized them as improvements, the questions I’ve asked throughout this article are exactly the questions I’d have asked in Las Vegas.
Let’s see what the future brings. MQA Ltd. . . . ?
. . . Doug Schneider
das@soundstagenetwork.com