Isochronous vs Asynchronous data
This is not 100% a direct answer to your questions, but it does explain why transports might have an impact on sound quality (this was written in the context of some debates about why DenonLink was a good thing vs. just HDMI LPCM):
It seems there is a bit of confusion caused by the very different natures of the two types of audio data streams sent from BluRay players to AVPs.
I’ll give my attempt at explaining the differences and their impact in the context of our Denon gear.
First, some terminology:
Isochronous – something that occurs on equal time intervals. As applied to audio, it means data arrives at a fixed rate: the PCM stream from a CD, for example, delivers 44,100 16-bit samples per second, per channel.
Asynchronous – something that occurs at varying intervals but with a defined sequence. An example is the FTP transfer of a ZIP file between two computers. The TCP-based transfer is an asynchronous operation. The timing of the arrival of the packets does not affect the integrity of the file.
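To make the two definitions above concrete, here is a minimal sketch (the variable names and toy packet format are purely illustrative, not from any real API). An isochronous stream is defined by its clock; an asynchronous one is defined by its sequence, which the receiver can reassemble regardless of arrival timing:

```python
# Isochronous: each sample's meaning depends on *when* it is played.
SAMPLE_RATE = 44_100                      # CD audio: 44,100 16-bit samples/sec
sample_period_us = 1_000_000 / SAMPLE_RATE
print(f"one sample every {sample_period_us:.2f} microseconds")

# Asynchronous: each packet's meaning depends on *where* it sits in the
# sequence, so arrival order and timing can vary without corrupting the data.
packets = [(2, b"world"), (0, b"hello "), (1, b"there ")]   # arrived out of order
payload = b"".join(data for _, data in sorted(packets))
print(payload)   # reassembled correctly: b'hello there world'
```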
In audio, most data was historically isochronous and timing was critical, which is why everyone is jittery about jitter (sorry, couldn’t help myself).
When a CD player output its PCM datastream over SPDIF, it had to rely on its own clock to send out the samples. That clock may or may not have been aligned with the receiving unit’s D/A clock; if not, samples arrived early or late (or gaps were introduced), and the resulting slight timing variations produced a slightly different waveform than the original PCM encoding intended.
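A rough back-of-envelope sketch of that clock mismatch (the 50 ppm figure is just an assumed example, not a spec value): if the player’s crystal runs 50 ppm fast relative to the DAC’s clock, the two drift apart by 50 microseconds every second, which adds up quickly at CD sample rates:

```python
# Illustrative arithmetic only: assumed 50 ppm mismatch between two free-
# running clocks (player vs. DAC), neither synced to the other.
ppm_error = 50
drift_per_second_us = ppm_error          # 50 ppm of 1 second = 50 microseconds
seconds = 60
total_drift_us = drift_per_second_us * seconds
samples_drifted = total_drift_us / (1_000_000 / 44_100)   # CD sample period
print(f"after {seconds}s the clocks disagree by {total_drift_us} us "
      f"(~{samples_drifted:.0f} CD samples)")
```

This is why real receivers must either slave to the incoming clock (and inherit its jitter) or resample, rather than simply playing at their own rate.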
With the introduction of Dolby Digital, we began to see the arrival of ‘packetized’, compressed formats (whether lossy or lossless doesn’t matter for now, so don’t get too hung up on that aspect) that combined multiple audio channels with structured data about the packet contents (is it 2.0, 3.0, 5.1? Is the dialog normalized? etc.). These structured packets can be transmitted asynchronously, since timing is not critical (although clearly there is a maximum allowable time between packets, or the buffers run dry).
So while PCM is sensitive to millisecond- or even picosecond-level errors, a packetized stream can survive hundreds of milliseconds of delay between packets.
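How big must a buffer be to absorb hundreds of milliseconds of delay? A quick sketch with illustrative figures (a 5.1 track at 24-bit/48 kHz; the numbers are examples, not taken from any particular player):

```python
# Back-of-envelope: memory needed to buffer 300 ms of decoded 5.1 PCM.
channels, bits, rate = 6, 24, 48_000
bytes_per_second = channels * (bits // 8) * rate   # 6 ch * 3 B * 48,000/s
buffer_ms = 300
buffer_bytes = bytes_per_second * buffer_ms // 1000
print(f"{buffer_ms} ms buffer = {buffer_bytes / 1024:.0f} KiB")
```

A few hundred kilobytes is trivial for a modern processor, which is why asynchronous delivery with gaps between packets poses no problem.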
The processor that decodes the packetized formats first buffers multiple packets in a FIFO buffer; it then unwraps the packets, processes the metadata, and decompresses the PCM data into another set of FIFO buffers. Finally, it pulls data from the output FIFO buffers, syncs it to the video (if part of a video dataset), clocks it to the processor’s master PCM digital clock, and sends it on its way to the D/A converters.
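The buffering scheme above can be sketched in a few lines. This is a minimal toy model (all the names and the `|`-delimited packet format are hypothetical): packets go into an input FIFO whenever they happen to arrive, a stand-in “decoder” unwraps them into PCM frames in an output FIFO, and playback drains the output FIFO on the processor’s own master clock rather than on the arrival timing:

```python
from collections import deque

input_fifo: deque = deque()    # packets as they arrive (timing irrelevant)
output_fifo: deque = deque()   # decoded PCM frames awaiting the DAC clock

def receive(packet: bytes) -> None:
    """Arrival side: just enqueue; jitter here never reaches the DAC."""
    input_fifo.append(packet)

def decode_pending() -> None:
    """Unwrap each packet into PCM frames (real decompression stubbed out)."""
    while input_fifo:
        packet = input_fifo.popleft()
        output_fifo.extend(packet.split(b"|"))   # stand-in for real decoding

def clock_tick():
    """Output side: the master PCM clock pulls one frame per tick."""
    return output_fifo.popleft() if output_fifo else None

# Packets may arrive in bursts with gaps; output stays steady, one per tick.
receive(b"f1|f2")
receive(b"f3")
decode_pending()
print([clock_tick() for _ in range(4)])   # [b'f1', b'f2', b'f3', None]
```

The key design point the sketch shows: the output timing is governed entirely by `clock_tick` (the processor’s master clock), so any irregularity on the `receive` side is absorbed by the FIFOs.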
So, for PCM-sourced data, such as a multichannel PCM track on a BluRay disc, the isochronous nature of the stream means timing of delivery is critical to accurate reproduction. And while most PCM survives HDMI transmission just fine, there is opportunity for error and drift if the clocks are not synched. Ergo, the introduction of DL4.
Prior to this, all the other audio formats (SACD, DVD-A, CD) were susceptible to jitter errors unless transported over synched transports such as DL3 or Meridian Digital Link.
All the packetized formats (Dolby TrueHD, DTS-HD MA, etc.), by contrast, can be transmitted asynchronously with no loss in resulting accuracy. This is called ‘bitstreaming’ the codec from player to AVP.
We’ve been ‘bitstreaming’ Dolby Digital from our DVD players to our processors for a decade, and now, with the new HD codecs on BluRay and new AVPs like the AVP-A1, we can bitstream these as well and obtain the same benefits.
Note that since the player sends the bitstreams to the AVP over HDMI, the transmission is actually clocked fairly tightly, but it is not clock-synched (except in a DL4 player/AVP pairing). However, since it’s asynchronous data, that doesn’t matter to the accuracy of the resulting audio waveform once decoded.
Hope this helps clear up the matter. Although maybe it got a bit too technical; I’ve been (rightly) accused of that before.