So the KEF LSX II says it "can accept and process sources up to 384kHz/24bit when streaming over the network."

It is my understanding that this means:

Internal processing can handle high-resolution music files up to 384kHz/24bit.

However, the interspeaker connection bottlenecks the stream to either 48kHz or 96kHz:

  • In a wireless interspeaker setup: The audio is downsampled to 48kHz/24bit PCM.
  • In a wired interspeaker setup: The audio is downsampled to 96kHz/24bit PCM.

So, if you stream a 384kHz/24bit track to the system, the primary speaker processes it at its native resolution, but the audio sent to the secondary speaker is downsampled.
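For illustration, here is a rough sketch of what that interspeaker downsampling step amounts to. KEF doesn't publish the details of its DSP chain, so the link rates and the polyphase resampler used here are just assumptions:

```python
import numpy as np
from scipy.signal import resample_poly

def downsample_for_link(audio, src_rate, wired=True):
    """Resample audio to the assumed interspeaker link rate.

    audio    : 1-D numpy array of samples (one channel)
    src_rate : source sample rate in Hz, e.g. 384000
    wired    : True -> 96kHz link, False -> 48kHz link (assumed rates)
    """
    link_rate = 96_000 if wired else 48_000
    if src_rate <= link_rate:
        return audio, src_rate  # already at or below the link rate
    g = np.gcd(src_rate, link_rate)
    up, down = link_rate // g, src_rate // g
    return resample_poly(audio, up, down), link_rate

# Example: a 384kHz stream sent over the wired link
src_rate = 384_000
t = np.arange(src_rate) / src_rate       # 1 second of audio
tone = np.sin(2 * np.pi * 1000 * t)      # 1kHz test tone
resampled, out_rate = downsample_for_link(tone, src_rate, wired=True)
print(out_rate, len(resampled))          # 96000 samples at 96kHz
```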

So what is that actually saying?

If I am playing a vinyl rip that is 192kHz/24bit from my computer (let’s say directly connected to my DAC, which also supports up to 384/24), is the output going to be downsampled to a maximum of 96kHz/24bit?

If I understand that correctly, what’s even the point? Or is it just so they can claim “384/24” and trick people by marketing it that way?

  • audioen@alien.top · 11 months ago

    I think everything gets downsampled to the DSP clock and DSP resolution for playback purposes. My guess is that is 96/24. 48/16 or thereabouts is perfectly fine for digitization. There’s no point in storing audio bits that are essentially random noise to begin with and way below any conceivable noise floor either way. The same goes for keeping ultrasonic frequencies you can’t hear, which are unlikely to be in the recording to begin with, or to be correct even if they are, because the recording engineer can’t hear them either. 96/24 is about 3 times more data than is needed for transparency.
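    The “about 3 times” figure is just bit-rate arithmetic, taking CD (44.1kHz/16bit stereo) as the transparent baseline:

    ```python
    def pcm_bitrate(sample_rate_hz, bit_depth, channels=2):
        """Raw PCM bit rate in bits per second (no compression)."""
        return sample_rate_hz * bit_depth * channels

    hires = pcm_bitrate(96_000, 24)   # 96kHz/24bit stereo
    cd = pcm_bitrate(44_100, 16)      # CD: 44.1kHz/16bit stereo

    print(f"96/24: {hires / 1e6:.2f} Mbit/s")   # 4.61 Mbit/s
    print(f"CD:    {cd / 1e6:.2f} Mbit/s")      # 1.41 Mbit/s
    print(f"ratio: {hires / cd:.1f}x")          # ~3.3x
    ```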

    As to the marketing aspect, yes, I think so. People just buy whatever has the biggest number. There have been no real sonic improvements to digital audio since the CD, and the CD format is already somewhat overspecified.