> "regular" network switches are designed with tolerances to keep the abstracted digital layer of data intact ...

I definitely don't understand what the "abstracted digital layer of data" might be...
> I definitely don't understand what the abstracted digital layer of data might be.

The 1s and 0s... which are actually physically carried by (analog) squared-off electrical waves in the case of LAN cables.
You can, of course, find a few cheap DIY solutions as well to try to isolate your streamer from the rest of the LAN's noise; it's not necessary to explore these ideas with expensive branded equipment. Yes, those 1:1 isolation transformers in network switches are designed to block common-mode interference while letting the differential signals carried by the LAN cables through (iirc over two pairs of wires).
But the way I understand it (I could be wrong), "regular" network switches are designed with tolerances to keep the abstracted digital layer of data intact, so they don't eliminate that much common-mode interference, since at the end of the day they are networking equipment first and were not built from the ground up for audio.
This is where things get a bit hairy, because people disagree about what level of common-mode interference is audible, etc. But if we put that aside: basically only specially shielded switches (like those for medical or industrial use) are any good at eliminating this common-mode interference. Those, and obviously the $9000 audiophile ones.
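To make the transformer point concrete, here's a toy sketch in Python (illustrative numbers only, nothing vendor-specific): the receiver only looks at the voltage difference between the two wires of a pair, so noise injected equally onto both wires ideally cancels while the differential data survives.

```python
# Toy model of differential signaling on one Ethernet pair (illustrative only).
# A differential receiver senses V+ - V-, so noise common to both wires cancels.
import random

bits = [random.randint(0, 1) for _ in range(16)]

# Drive the pair differentially: +/-1 V of opposite polarity on each wire.
v_plus  = [( 1.0 if b else -1.0) for b in bits]
v_minus = [(-1.0 if b else  1.0) for b in bits]

# Inject the SAME common-mode noise onto both wires (coupled EMI),
# five times larger than the signal itself.
cm_noise = [random.uniform(-5.0, 5.0) for _ in bits]
v_plus   = [v + n for v, n in zip(v_plus, cm_noise)]
v_minus  = [v + n for v, n in zip(v_minus, cm_noise)]

# The receiver decides on the difference only; the common-mode term drops out.
received = [1 if (p - m) > 0 else 0 for p, m in zip(v_plus, v_minus)]
print("data intact despite large common-mode noise:", received == bits)
```

In reality the rejection is imperfect (finite transformer balance, pair asymmetry), which is exactly why some residual common-mode current still gets through.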
> The 1s and 0s... which are actually physically carried by (analog) squared-off electrical waves in the case of LAN cables.

So, even if we accepted that for a moment, wouldn't the reverse of it be that tighter tolerances would not keep the "1s and 0s" intact?
> So, even if we accepted that for a moment, wouldn't the reverse of it be that tighter tolerances would not keep the "1s and 0s" intact?

Well, no... certain protocols relevant here do integrate temporal elements, "a clock". Whether those matter or not, given that they are reprocessed at the input of the next element, is another subject... see the main topic.
The digital signal (and yes, there are digital signals, of course; why they are called that is very well defined and not up for debate amongst scientists) is the only source of what's going to be reproduced as music. The only task in the digital domain is to not lose or reorder any of those high and low values, and that's it.
Even if Hans Beekhuyzen might imply otherwise (he never really says it loud and clear), there is no time information within the digital signal (so there's nothing that could be "smeared"). The digital signal just contains high and low values, and the receiving side has to know how to interpret them (including the timing), either through negotiation or inherently, by definition of a fixed protocol.
Cleverly (maybe), he always comes back to (simulated) examples of S/PDIF signals when he tries to explain how a digital signal is always "analog", giving the impression that the digital content is in great danger wherever digital signals are transmitted. Quite the contrary is true. The very reason digital beat analog in every way is that the exact form of the signal doesn't matter, as long as it isn't too heavily misshapen.
With a 5 GHz WiFi signal carrying all sorts of data simultaneously (including the actual audio data), there is no direct equivalence between the waveform and the digital audio data at all. Likewise, when the same digital payload is transmitted through a LAN cable instead, there is no "electric wave" resembling the audio propagating within that cable at all.
Incorrectly equating the "digital audio signal" with the signal being physically transmitted, making you think of it as "the waveform that gets distorted", might be the key fault in Beekhuyzen's contribution to the discussion.
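To illustrate that last point, here's a small self-contained sketch (made-up waveform and thresholds, not a real Ethernet line code): the waveform is visibly degraded by finite rise time, ripple, and noise, yet the receiver recovers exactly the same bits, because all it has to do is decide high vs. low at each sampling instant.

```python
# Illustrative only: a "squared-off" waveform is clearly distorted
# (finite rise time, ripple, noise), yet sampling and thresholding
# recover the identical bit sequence.
import math
import random

bits = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
samples_per_bit = 32

waveform, level = [], 0.0
for b in bits:
    target = 1.0 if b else 0.0
    for _ in range(samples_per_bit):
        level += 0.35 * (target - level)                   # finite rise time
        waveform.append(level
                        + 0.08 * random.uniform(-1, 1)     # additive noise
                        + 0.05 * math.sin(len(waveform)))  # ripple

# Receiver: sample mid-cell, compare against a 0.5 threshold.
recovered = [1 if waveform[i * samples_per_bit + samples_per_bit // 2] > 0.5 else 0
             for i in range(len(bits))]
print("misshaped waveform, identical data:", recovered == bits)
```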
> Well, no... certain protocols relevant here do integrate temporal elements, "a clock". Whether those matter or not, given that they are reprocessed at the input of the next element, is another subject...

The digital signal might carry a clock signal used by the receiver. The digital data itself contains no information about time. You can play it back at any speed without any variation in pitch.
And this aspect can be more or less degraded... (and it can be observed in the analog domain of the signal... as with the S/PDIF protocol, etc.)
(link: RME Support, archiv.rme-audio.de)
My point is just a reaction to the classic shortcut underlying all these discussions... that it's "just ones and zeros".
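For what it's worth, the "clock in the signal" point is concrete in the case of S/PDIF: the link is self-clocking because biphase-mark coding forces a level transition at every bit-cell boundary (plus a mid-cell transition for a 1), and the receiver recovers its timing from those edges. A bare-bones sketch of just the encoding rule (no preambles, subframes, or real framing):

```python
# Biphase-mark coding (the line code used by S/PDIF), encoder rule only.
# Every bit cell starts with a transition; a '1' adds a second transition
# mid-cell. The guaranteed edge density is what carries the clock.
def biphase_mark_encode(bits, start_level=0):
    level, out = start_level, []   # output holds two half-cells per bit
    for b in bits:
        level ^= 1         # mandatory transition at the cell boundary
        out.append(level)  # first half of the cell
        if b:
            level ^= 1     # extra mid-cell transition encodes a '1'
        out.append(level)  # second half of the cell
    return out

print(biphase_mark_encode([1, 0, 1, 1, 0]))
# -> [1, 0, 1, 1, 0, 1, 0, 1, 0, 0]: never more than two equal half-cells
#    in a row, so the receiver always has edges to lock onto.
```

So yes, the timing of that physical signal can be degraded (jitter); whether that survives to the analog output depends on how the next stage re-clocks it, which is the actual debate here.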
> So, even if we accepted that for a moment, wouldn't the reverse of it be that tighter tolerances would not keep the "1s and 0s" intact?

Yes, theoretically that is correct: if you try to block ALL the common-mode noise, with a lot of chokes etc., you could end up affecting the differential-mode currents that carry the digital signal enough to introduce errors into the bitstream.
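To put rough numbers on that trade-off, here's a crude Monte Carlo sketch (all values made up): as filtering eats into the differential amplitude, the noise margin shrinks and bit errors start to appear.

```python
# Crude illustration (made-up numbers): shrinking differential amplitude
# against a fixed receiver noise floor -> rising bit error rate.
import random

def bit_error_rate(amplitude, noise_rms=0.25, n=20000):
    errors = 0
    for _ in range(n):
        bit = random.randint(0, 1)
        tx = amplitude if bit else -amplitude    # differential level
        rx = tx + random.gauss(0.0, noise_rms)   # receiver noise
        errors += (1 if rx > 0 else 0) != bit
    return errors / n

for amp in (1.0, 0.5, 0.25, 0.1):  # heavier filtering -> lower amplitude
    print(f"amplitude {amp:>4}: BER ~ {bit_error_rate(amp):.4f}")
# At full amplitude errors are negligible; over-attenuate the pair
# and the bitstream starts taking real errors.
```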
> I see in that video that he mentions that feature was introduced by them in 2002, so not new. How prevalent do you think similar technology is in DACs, say, built after then, or 2010, or...?

I would expect every modern, well-designed DAC to perform on a similar level at least. BTW, 2002 was for SteadyClock; it has since evolved into SteadyClock FS.
> I would expect every modern, well-designed DAC to perform on a similar level at least.

Somebody should tell Hans.
> For him the Grimm MU2 achieves that independence from the source. If you think the RME is expensive, that one is 18x more expensive… In any case, a manufacturer should not be encouraged to think that the user's DAC will fix all the source's shortcomings…

Well, could I turn the question on its head? Which "modern, well-designed DACs" (or, say, DACs in wide use) can't handle the jitter produced by WiiM streamers? Or does the market have more than its fair share of poorly designed DACs?
> I see in that video that he mentions that feature was introduced by them in 2002, so not new. How prevalent do you think similar technology is in DACs, say, built after then, or 2010, or...?

Yeah, the guys at PS Audio have been talking about this stuff for donkey's years. They really are one of the best examples of a legacy company adopting new technologies at the right time without losing focus on achieving excellence in performance. They don't design products to fit specific price points like most overly corporate companies in this space. Unfortunately, that means their new stuff is usually too pricey (even if it performs spectacularly). I'm honestly thinking used, top-quality separates are going to be really good value in the coming few years...
> All I am saying is that the effort for a good implementation should go into both designs, streamer and DAC.

Yes, but you can also compensate one device's weaknesses with another one's capabilities. Having a DAC with effective jitter suppression might be more beneficial than using a hyper-ultra-expensive streamer connected to a not-so-well-designed DAC. You can see in the GoldenSound video how effective PLL loops can be.
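On the PLL point, a toy first-order model (arbitrary loop bandwidth, not modeled on any specific product) shows why a receiver clock that only slowly tracks the incoming clock strongly attenuates fast incoming jitter:

```python
# Toy first-order PLL: the recovered clock low-pass filters the incoming
# phase, so jitter above the loop bandwidth is strongly suppressed.
import math
import random

fs = 1_000_000           # phase samples per second (model resolution)
loop_bw_hz = 100         # loop bandwidth: only slow wander gets through
alpha = 2 * math.pi * loop_bw_hz / fs

incoming = [random.gauss(0.0, 1.0) for _ in range(200_000)]  # 1 ns RMS jitter

recovered, phase = [], 0.0
for p in incoming:
    phase += alpha * (p - phase)   # first-order tracking loop
    recovered.append(phase)

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

print(f"incoming jitter: {rms(incoming):.3f} ns RMS")
print(f"after the PLL  : {rms(recovered):.3f} ns RMS")  # far lower
```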
> Ok.. Unless someone believes that a DAC has a sound signature and wants to extract as much of it as possible.

You mean, unless someone believes in miracles just because the term "sound signature" is so sexy, not because there could be a reason?