Any good media streamers coming out, or just released lately?

I guess I need to ask for a refund on my Electrical Engineering degree. They didn't teach me anything that can be applied in the real world. What I needed to learn was how to deprogram people who have been brainwashed by technobabble that has zero credibility.

Technobabble rules the world of audio (snake oil). Even engineers who know better start believing the nonsense. You can't deprogram someone who is ignorant of the basics of modern science. Doesn't matter what field it is. I blame the internet for dumbing down the average American. Better to realize you do not know something than to get your incorrect info from the internet and then spread it to others as fact. But that is the time we live in.
 
It's okay - I think your degree is useful - electrical engineers helped design computers that got us to the moon. That keep planes in the sky every day. That design tall skyscrapers which can withstand earthquakes and high winds. That manage lifesaving surgical procedures. That run our global financial system.

It's just that none of it applies to audio, for some reason.

(But that's okay too - it can all be fixed with a cheap "USB-Fixer", re-clocker, a non-compliant out-of-spec USB cable, and a thick, chunky ethernet cable with a really pretty jacket.)

It's like the USB-IF didn't know what they were doing - but the cheap $30 "USB-Fixer" on Amazon (or the $4899 version which is the same thing with huge metal heatsinks attached) will make all the bad stuff okay.


You know what's really bizarre? My streamer / computer can send data perfectly out of its USB socket to a "USB-Fixer", but it can't send it another 30cm along, all the way to the DAC. It just needs that USB fixer in the middle. Weird hey?
These “USB fixers” serve multiple purposes, but the overall objective is to clean the USB signal as much as possible PRIOR to it entering the DAC, so that the DAC has to process and “clean” as little as possible, limiting the system resources and power draw needed and providing the best and cleanest sound quality.
There definitely are DACs on the market that have the ability to “clean” the noise, jitter, ground, AC and DC Line leakage, clocking, etc - they just cost more than a new car!
Thus, you can achieve similar results by using, say, a $2K to $4K DAC and a $1K to $5K purpose-built audio music server (such as an Aurender or Auralic) with one or more USB fixers in between, and separate outboard linear power supplies powering each component in the entire digital chain (including the DAC, server, and USB fixer).
The aforementioned will net you similar results for a fraction of the total cost.
You need to look at and address more than just the bit-perfect data signal when it comes to USB and digital / computer audio. There is noise and jitter created by the digital circuitry, the power supplies, power-line leakage, and all of the digital-to-digital converters and regulators along the way, and that contamination is carried and passed along through every connection and cable (especially USB cables, since they carry both data and power together within one cable).
You need to consider the following:

1): Ground connections and AC and DC leakage currents pass between parts of the system through the cables.
2): A device will "pollute" its own output, which negatively affects the input of the other devices.
Some devices present a very "constant" load which won't cause noise on the AC and/or DC lines, such as most external and internal clocks. However, other components within the digital chain, such as computers, music servers, NAS boxes, routers, modems, and "streamers" (these are all essentially computers when you look at them), all have constantly changing current loads which cause voltage fluctuations on the power supply and all cables. This creates and passes along (up and down the chain, and in and out of the AC mains as well) a lot of extra noise and jitter on the DC and AC power lines, carried via the cables into the clocks, chips, and boards of all the devices.
Different devices have different current load profiles and different devices have different sensitivities to noise on the DC line.
Utilizing these “USB fixers” and galvanic isolation devices in between eliminates most of these issues by creating a “moat” that doesn't allow the noise and jitter to enter the DAC.
I need to do some more research into how, where, and what the high-grade Crystek clocks used within these USB “fixers” do with regard to reclocking the data prior to it entering the DAC - I don't actually know enough about that. But I do know they all regenerate the signal with USB chips that contain no switching regulators. They have their own dedicated power supplies that give each side of the USB cable (power, data, and ground) its own independent voltages, each originating from its own independent linear power supply, with further regulation provided by three sets of LT3045 linear voltage regulator chips.
They use a 3 ppb OCXO crystal clock running directly at 24 MHz, connected via a board trace just a couple of inches from the USB chip. Therefore, no precision is lost in cables and connectors, as is the case when using an external 10 MHz master clock with an additional 24 MHz clock generator.
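For a sense of scale, here is a rough back-of-envelope calculation (assuming the 3 ppb figure refers to the frequency stability of the 24 MHz oscillator; the numbers are illustrative only):

# Back-of-envelope: what a 3 ppb frequency error means at 24 MHz.
f_nominal = 24e6                  # Hz, nominal clock frequency
stability = 3e-9                  # 3 parts per billion (assumed frequency stability)
period = 1 / f_nominal            # ~41.7 ns per clock cycle
period_error = period * stability
print(f"Nominal period: {period * 1e9:.2f} ns")
print(f"Per-cycle period error: {period_error * 1e15:.3f} fs")   # ~0.125 fs
# Accumulated drift is stability x elapsed time: about 3 ns per second.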
They use independent power supplies: one dedicated to the OCXO clock, another powering the USB data chip, and a third for the 5V USB power line.
If you want to do some of your own research, the most popular of these devices are: the SOtM tX-USB, UpTone ISO Regen, Innuos Phoenix, and the iFi USB 3.0. In conjunction with these, most people are also using a similar device that does the same for the Ethernet / network side, such as the UpTone EtherRegen or the SOtM network switch - Innuos just released a new one as well.
 
The aforementioned will net you similar results for a fraction of the total cost.



Well, in that case, you need to be a DAC designer - because you clearly know something that no other DAC designer knows!

You could have the best-selling DAC in the world if, as you claim, $300 on Amazon gets you the same performance as DACs that cost "as much as a new car", as you put it. Do you understand how crazy successful you would be? You could play with the "big gun" DACs for the price of a USB fixer on Amazon!

Doesn't the basic economics of this scenario raise questions in your mind?


1): Ground connections and AC and DC leakage currents pass between parts of the system through the cables.
2): A device will "pollute" its own output, which negatively affects the input of the other devices.

You're touching on the truth here - agreed. But does it impact sound? Nup - because..................

I don’t actually know enough about that,

Ahh - that makes more sense.

Let me help:

Let's start with some basic definitions from the Oxford English Dictionary.

asynchronous adjective
(of two or more objects or events) not existing or happening at the same time


In a USB audio sense, "two or more objects" means successive bits in a data stream (data which just so happens to represent audio, but could represent anything at all).

That means that, on an asynchronous connection, the data is not timed. Not timed means not clocked. The timing is controlled by the DAC, from its buffer through to the conversion chip. So what happens on the line is inconsequential to it.
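To make that concrete, here's a minimal conceptual sketch in Python (not a real USB stack - all names and numbers are invented for illustration) of how asynchronous flow control works: the DAC's local clock is the master, and the host's only job is to keep the buffer topped up.

# Conceptual sketch of asynchronous USB audio flow control (illustrative only;
# not a real USB stack - all names and numbers are invented).
from collections import deque

buffer = deque()          # the DAC's input FIFO
TARGET_FILL = 480         # samples the DAC likes to keep in reserve

def host_send_packet(samples):
    """Host delivers a packet whenever it gets around to it - timing irrelevant."""
    buffer.extend(samples)

def dac_feedback():
    """The asynchronous feedback value: the DAC asks the host for more or fewer
    samples per frame, based purely on its own buffer fill. The DAC is the master."""
    if len(buffer) < TARGET_FILL:
        return 49         # send one extra sample next frame
    if len(buffer) > TARGET_FILL:
        return 47         # send one fewer
    return 48             # nominal rate for 48 kHz / 1 ms frames

def dac_clock_tick():
    """Each tick of the DAC's own local clock moves one sample to the converter."""
    return buffer.popleft() if buffer else 0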

Now……….

When you compare the definition of asynchronous with the definition of "jitter"

Jitter noun
slight irregular movement, variation, or unsteadiness, especially in an electrical signal or electronic device.


You'll see that jitter doesn't exist on an asynchronous connection.

It can't, by definition, because the bits aren't “happening in time”. Refer back to the definition of asynchronous above ^^^^^^

Therefore, you can't re-clock it. Re-clocking it is pointless at best, and detrimental at worst, because you are forcing external, uncorrelated timing demands onto a data stream which the DAC is trying to control! NOTHING good can come of that.

What's more, the DAC places that data into a buffer (a sequence of 1s and 0s in static memory).

buffer noun
a region of a physical memory storage used to temporarily store data


Therefore, all your best re-clocking (or chain thereof) is thrown away anyway! The DAC *STORES* the data before it processes it.
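If you want to see why buffering makes arrival timing irrelevant, here's a toy simulation (illustrative only - nothing here is real DAC code): samples arrive in irregular, "jittered" bursts, yet once buffered and read out on the DAC's own clock, the output is identical no matter how messy the delivery was.

# Toy simulation (illustrative only): samples arrive in irregular bursts, but
# because the DAC buffers them and reads them out on its own clock, the output
# is bit-identical regardless of arrival jitter.
import random

samples = list(range(100))            # the bit-perfect data stream
buffer = []

i = 0
while i < len(samples):               # "transmission": bursty, irregular arrival
    burst = random.randint(1, 8)
    buffer.extend(samples[i:i + burst])
    i += burst

output = [buffer[t] for t in range(len(buffer))]   # one sample per DAC clock tick
assert output == samples              # ...and the output is always identical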


Seriously - some really smart people design this stuff - and you can't outsmart them with cheap tweaks.

You clearly understand some of the challenges which face digital data transmission (it is most certainly not all plain sailing) - but what you fail to acknowledge is that the designers of this stuff have already thought of, and dealt with, all of it (and more).

Digital data just doesn't work that way.

(Analogue does, yes. And so much of what we deal with in audio relates to analogue, so it is easy to just transpose what you know about analogue onto digital. But to do so shows a fundamentally flawed understanding of digital concepts.)
 
This will be related to streamers, I promise.

With the (hopefully) prolific roll-out of immersive audio formats (e.g. Atmos), the audio data is inside a structured audio container format and is then further data-reduced with either lossy (DD+) or lossless (TrueHD core audio, AKA MLP) compression.

This means that it is up to the processor to unpack, decompress, and then render the included audio streams. So these containerized formats (DD+, TrueHD) are immune from transport-related issues: as long as all the data arrives, the processor is fully in charge of clocking the final LPCM and feeding it to the onboard DAC.
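Here's a toy illustration of the "as long as all the data arrives" point (an invented frame format, NOT a real Dolby container parser): the transport's only job is to deliver the container bit-exact; unpacking, decoding, and clocking all happen downstream in the processor.

# Toy illustration (invented frame format - not a real Dolby container parser).
import struct, zlib

def pack_frame(payload: bytes) -> bytes:
    return struct.pack("<II", len(payload), zlib.crc32(payload)) + payload

def unpack_frames(stream: bytes):
    off = 0
    while off < len(stream):
        length, crc = struct.unpack_from("<II", stream, off)
        payload = stream[off + 8 : off + 8 + length]
        assert zlib.crc32(payload) == crc    # "all the data arrived" - intact
        yield payload                        # hand off to the decoder/renderer
        off += 8 + length

stream = b"".join(pack_frame(p) for p in [b"frame-1", b"frame-2", b"frame-3"])
print([f.decode() for f in unpack_frames(stream)])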

These set-top streamers are increasingly using HDMI as the transport link, as that's a well-established physical and logical (HDMI Spec covers many transport protocols) interconnect in today's A/V world.

So my preferred streamer these days is the Apple TV 4K (I have an old 2017 and a new 2021), as it can play my Amazon Music HD, Tidal HiFi w/Atmos and Apple Music with Spatial Audio (Atmos).

Now, I've run across people saying that HDMI is no good for audio, as it is susceptible to jitter, and they are correct if all we are talking about is 2ch 'raw' LPCM audio streams.

But that's not how audio flows between an AppleTV and a processor. They use Dolby MAT 2.0 to transfer everything from simple 2ch PCM to full Atmos payloads.
Dolby MAT is a very cool means of converting the LPCM 'audio' pathways on HDMI into a 24Mbps data highway that at its root is agnostic to the audio format. MAT does specify how each end negotiates what formats will ultimately be transported, but that's secondary. This 'data highway' is just that, an asynchronous transport.

So an ATV can relay high-quality audio over a carrier immune to jitter or other temporal effects.

And it works very well on all three (4 if I count the Airpods Max connected to the ATV) Atmos setups I have.
It will not replace my high-res discs and rips of BluRay audio music for ultimate audio quality, but it's a great way to access the vast collections in my subscriptions with very reasonable audio quality.
 
I would think of it this way... do you need a library management system, or just a streamer? The next question is whether you need a built-in DAC.

Sounds like you want it all, so the next question is whether you want to use Roon (which costs money and needs a separate server) or want everything built in, with its own app.

If you want an all-in-one system that does everything, my recommendation is Aurender - it can do everything you want at once and is well regarded. Super simple to set up and use, and it has a great app everyone seems to like.

I promised long ago (around post #35) that I would report back when I finally made a decision. And the Winner is...Zigman and the Aurender A30. For my needs, for my purposes, this will easily do what I want to accomplish.

Again, I appreciate all of the comments and suggestions made here, and (as all can tell) I took the time to look at a LOT of different solutions and recommendations.

I will report back soon regarding how this is working out.

Again, many thanks for everyone's patience and consideration.
 
the Aurender A30. For my needs, for my purposes, this will easily do what I want to accomplish
Congratulations!
That is one slick unit. The engineering on this one looks to be top-notch.

I'll be interested in hearing your views on MQA recordings if you have a Tidal subscription.
It was pretty nice on a Meridian system.
 
I'll be interested in hearing your views on MQA recordings if you have a Tidal subscription.
It was pretty nice on a Meridian system.

I listen to some MQA. It sounds good - it really does. Of note, there are "green" and "blue" MQA files - green ones are not authenticated, so who knows their provenance. But I can't help but think it is a solution to a problem that doesn't exist. How is it better to listen to an MQA file than the original 24/192 encoding (or whatever the original was)?

That is, unless it is a solution to the problem of extracting $$ out of every part of the audio chain, from studio to playback equipment. Ahh, now I understand.
 
But I can't help but think it is a solution to a problem that doesn't exist.
I don't want to turn this thread into a pro/con on MQA, so I'll briefly state that to me, the primary 'problem' MQA addresses (it does many things) is temporal accuracy in both the encode and decode steps of the digital pipeline. An MQA-certified DAC, being fed an MQA file, will produce a much more accurate impulse response.

Being very familiar with temporal-related issues on the networking side, I get what they are doing, and how these types of issues are not resonating (pun intended) with many.

But lack of MQA support is not a critical factor for me, I'm still going to buy a Trinnov as my processor. The overall flexibility is more important than this one detail.
 
I don't want to turn this thread into a pro/con on MQA, so I'll briefly state that to me, the primary 'problem' MQA addresses (it does many things) is temporal accuracy in both the encode and decode steps of the digital pipeline. An MQA-certified DAC, being fed an MQA file, will produce a much more accurate impulse response.

I've heard this and would like to know more. How are these issues generated in the first place, how does MQA resolve them, and why is this not resolved by a standard high-res recording?

As I said - MQA files sound good - I was surprised at how good they sound - but I put that down to likely better mastering and production. (Which is probably part of it regardless.)
 
I think the companies using MQA are using it mostly for its DRM control. Most manufacturers I've seen statements from are negative about MQA. The most frequent comment is "unimpressed", which is pretty vague, and I suspect that is mostly because of the licensing costs. My "guess" is that most people couldn't hear a difference between an MQA and a FLAC file of identical source material. It seems most of its critics are using theoretical cases or frequencies outside the hearing range to show issues that are probably of no consequence.

The biggest issue I see is that they made some pretty heady claims initially that were realistically very hard to fulfill, especially all the stuff about compensating for artifacts created during the mastering process. I think that aspect has been dropped from the conversation. In addition, their original claims were that it was lossless, which has been disproved by multiple sources, and they have pulled those claims as a result. However, I see this mostly as a bunch of hand-waving by people who just want to make noise.

Personally I just see it as another standard to deal with.
 
I think that aspect has been dropped from the conversation. In addition their "original" claims were that it was lossless which has been disproved by multiple sources

I've heard that too! As I understand it, it's not lossless in terms of "100% faithful to the original" (i.e., you won't get the same checksum back) - but at the same time it supposedly doesn't lose data of value, the way (for instance) MP3 does.

So calling it lossless or otherwise is really just a technicality.

But there is so much I don't understand about MQA........and really just a whole lot of obfuscation and BS when you try to find out. For instance, why aren't there software decoders? The old DRM again I assume.

And how can you "improve" on a master recording from a temporal sense? How does that work?

I've also heard that any MQA capable DAC is unable to take the MQA circuit out of the processing. (I mean, it just passes non-MQA data straight through it, but still....... oh dear, I've got an MQA DAC! ) .


Personally I just see it as another standard to deal with.
Yeah, and that's the last thing we need.

To me, it seems like one company's strategy to control the entire music process, from recording and mastering, through to distribution, streaming and consumer devices. There is a cut at each step for them. They're also looking at "pay per play" models I believe. What a great little rort (if they can pull it off)!

As I said before - it seems to be a solution looking for a problem. If I can have the actual 24/192 or DSD (or whatever the master is) exact bits, why do I want the MQA version? No one can answer that! (Acknowledged, Jon - but explain why, if you can: what is going on, how does MQA fix it, and if MQA can fix it, why can't an MQA DAC fix a regular FLAC on-the-fly?)

Just saying "it's audio origami" doesn't satisfy my technical desires. In fact, I find it insulting.

The really challenging part (for audiophiles) is that anything MQA is re-mastered (by definition), so that makes comparisons impossible! And makes "how does it sound" impossible to answer. Which is why you get the nebulous answers like "it sounds good" or "not impressed".

I think we need another MQA thread to discuss in more detail. (if anyone even has more detail).
 
The issue with fixing "issues" created in mastering is a very nebulous thing. I used to be on a team that produced digital mastering consoles used by Ted Jensen at Sterling Sound, by Denny Purcell at Georgetown Masters, at Capitol Records, etc.

The point is that I remember hearing some of these people, like Denny, talk about how he liked one set of analog DRCs (dynamic range compressors) because they "breathed" a certain way. He liked our digital finite impulse response equalizers, but bypassed our DRC boards. These guys picked equipment that added artifacts they liked to the music.

When the Fleetwood Mac Rumours album was mastered, there was a "special box" that someone wanted the mastering engineer to try. It actually added noise and distortion, but he liked the sound he got with it, and when that album went up the charts that device became very popular.

If you try to undo their choices, you are making changes to the sound that they wanted to create.

Lots of music is being remastered from "original" source tapes, so people are using their current equipment of choice and imparting their own preferences to the end sound.

I had a friend over who was listening to some remastered Jethro Tull that I think sounds fantastic and much better than the original. My friend said it sounded very good, but he didn't like it because it was different from the original. He has become a vinyl enthusiast.
 
FWIW, and I may have posted this previously, this article is what I personally consider the definitive review of MQA. For those unfamiliar, Archimago is an objectivist when it comes to reviewing tech. I have been reading his blog for years.

https://audiophilestyle.com/ca/reviews/mqa-a-review-of-controversies-concerns-and-cautions-r701/
Really interesting article, although it doesn't go into too much detail as to "why". Of course, it would be up to MQA to explain what they are doing, rather than someone simply measuring and/or observing.

A really weird format, and this article kind of confirms my suspicions that there is not really a place for MQA in the audio world.

More so now - I think it is simply an attempt by one company to control and license all aspects of the audio production and distribution chains. It confirms my current strategy, which is to take DSD or high-res PCM if I can - MQA only if there is no other option available.
 
I promised long ago (around post #35) that I would report back when I finally made a decision. And the Winner is...Zigman and the Aurender A30. For my needs, for my purposes, this will easily do what I want to accomplish.

Again, I appreciate all of the comments and suggestions made here, and (as all can tell) I took the time to look at a LOT of different solutions and recommendations.

I will report back soon regarding how this is working out.

Again, many thanks for everyone's patience and consideration.

Congrats!! When you have time, let us know how you like it.
 
Another interesting view of MQA:
And it has been debunked in this reply from Bob Stuart: All that glitters is not gold(en) | Bob Talks

To me, the following was glaring:
...
  1. The blogger’s test failed because he submitted signals that do not resemble music to an encoder that was configured only for music works. Nonsense comes out. This is like being disappointed when a F1 car struggles on an off-road race.
  2. He submitted high-rate composite files containing unsafe levels of ultrasonic signals –in places 100 times higher than in music recordings – resulting in 10x encoder overload. (See Appendix 2)
  3. System error messages generated by the MQA encoder were ignored. [2]
...

There might be some legitimate criticisms of MQA, but this guy did not achieve that.

For instance, one 'weakness' of MQA is the stringent requirement that the DAC be involved in the process. I get exactly why that is so, as it's the only way to ensure the output is temporally aligned to the source timing. This is why there is no software-only solution.
It also means it's difficult to pre-process the signal with DRC; and if DRC is applied post-D/A, the timing accuracy is reduced by A/D/A cycles that introduce their own artifacts.

So outside a Meridian Digital speaker setup, one must limit the chain to an MQA DAC -> Amp -> Passive speaker

or to MQA DAC -> Pure Analog Active speaker (e.g. Elac Navis )

The other downside for me is that so far, no multichannel nor immersive (Atmos) is supported, and those are a bigger deal to me.
 
And it has been debunked in this reply from Bob Stuart: All that glitters is not gold(en) | Bob Talks


Very interesting reply and a lot to consider.

I remain with a few questions on the reply:



2.MQA has never made false claims about ‘losslessness’. MQA has been clear from the outset that our process operates in a wider frame of reference that includes the whole chain including A/D and D/A converters. [1]


3.Provenance: MQA files are delivered losslessly and reconstruct exactly the sound that an artist, studio or label approves.

How can it be not lossless, and "delivered losslessly" at the same time?

Furthermore, what the studio approves is inconsequential - I'm sure they "approve" the 16/44.1 release too. It doesn't make it perfect, or even lossless.

5. He submitted high-rate composite files containing unsafe levels of ultrasonic signals –in places 100 times higher than in music recordings – resulting in 10x encoder overload. (See Appendix 2)

How do you "overload" an encoder? It processes a file within its envelope of specification (of which ultrasonics is part). If load goes up, performance goes down. But it shouldn't generate garbage.

Maybe what Bob means is that the blogger operated the encoder outside of its performance envelope. But I'd like to know how ultrasonics within the Nyquist limit of whatever sample rate was in use could be outside its performance envelope? Maybe I'm missing something?
 
How can it be not lossless, and "delivered losslessly" at the same time?

Furthermore, what the studio approves is inconsequential - I'm sure they "approve" the 16/44.1 release too. It doesn't make it perfect, or even lossless.



How do you "overload" an encoder? It processes a file within its envelope of specification (of which ultrasonics is part). If load goes up, performance goes down. But it shouldn't generate garbage.

Maybe what Bob means is that the blogger operated the encoder outside of its performance envelope. But I'd like to know how ultrasonics within the Nyquist limit of whatever sample rate was in use could be outside its performance envelope? Maybe I'm missing something?

It's true that Meridian did not claim it was lossless. I attended a Meridian demo in 2016 and this was admitted at the time, but Tidal later put some misleading wording on their site. My understanding is they selectively throw some bits away for the greater good, much like HDCD did, which audiophiles generally accepted in the past.

How else could they cram 24/192 into the space of 16/44? Whatever compression scheme they developed is supposedly tuned for a realistic scenario (music), not test tones, obviously. Bob's life's research in psychoacoustics - how the human ear and brain hear sound - was likely part of the secret sauce. Certain assumptions and compromises were made to achieve the goal, as in all engineering.
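A quick bit of arithmetic shows the scale of the problem (plain numbers, stereo uncompressed PCM):

# Raw PCM bit rates, stereo, uncompressed - plain arithmetic.
hires = 24 * 192_000 * 2     # 24-bit / 192 kHz / 2 channels
cd    = 16 * 44_100 * 2      # 16-bit / 44.1 kHz / 2 channels
print(f"24/192:  {hires / 1e6:.3f} Mbit/s")   # 9.216 Mbit/s
print(f"16/44.1: {cd / 1e6:.3f} Mbit/s")      # 1.411 Mbit/s
print(f"Ratio:   {hires / cd:.1f}x")          # ~6.5x has to be shed or folded away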

At the end of the day we don't know what the black box (the encoder) is doing, unlike a zip program, which has to maintain data integrity. What happens when you try to zip a JPEG, or some other file that is already highly compressed? The file size suffers, but you don't lose data (integrity is prioritized over file size). With MQA the file size is capped by the 16/44 limits, so something else has to give.
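You can demonstrate the zip analogy yourself in a few lines of Python (zlib standing in for Zip, and random bytes standing in for an already-compressed JPEG):

# The zip analogy in miniature: a lossless compressor always preserves
# integrity and lets the output size float. Redundant data shrinks a lot;
# random bytes barely shrink at all - they may even grow slightly.
import os, zlib

text = b"the quick brown fox " * 500      # highly redundant data
jpeg_like = os.urandom(10_000)            # stand-in for already-compressed data

for name, data in [("redundant text", text), ("random bytes", jpeg_like)]:
    comp = zlib.compress(data, 9)
    assert zlib.decompress(comp) == data  # integrity is always preserved
    print(f"{name}: {len(data)} -> {len(comp)} bytes")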

Once the MQA version is encoded and approved, it is put in a lossless 16/44 FLAC container and not allowed further manipulation (it's authenticated) - unlike MP3 and other formats that can be compressed many times further and converted back to FLAC (or even upsampled and sold as hi-res) without the buyer knowing!

That was the intention six years ago, anyway, and it assumed Tidal or whatever third party wouldn't do something stupid on their platform. I don't think Bob should be taking all the blame.
 