Amp Ultra review

Power and gain are two separate things anyway. Also, different speakers have different sensitivities.
And I'm pretty sure El Cheapo doesn't even care to know the difference.

The big trouble is that deliberately non-technical "reviewers" like Randy or Robinson, or pseudo-technical reviewers with no clue at all like the (not so) Scientific Audiophile, manage to plant totally unjustified doubts in audio beginners' heads. Bullshitty stuff like "100 W class D is not the same as 100 W class AB". As a result it becomes impossible to talk about the grain of truth behind such statements, which can be really frustrating.
 
Andrew Robinson just did a review on small amps and conveniently left out any type of WiiM Amp. When asked why by a commentator he said his followers would give him 💩 if he reviewed a product called “Vibelink” That guy is an audio snob of the worst kind.
 
Robinson has a definite bias against WiiM's products. Don't expect this to change.
 
I feel the problem is more how watts are measured than the differences between class A, A/B, B, and D.
Some of my vintage amps are rated at 50 W from 20 Hz to 20 kHz, whereas newer amps are rated 100 W @ 1 kHz.
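Just to put those headline numbers in perspective, here's a quick sketch (the wattages are the illustrative ones from above, not measurements of any specific amp): doubling the rated power only buys you 3 dB of output level.

```python
import math

# Hypothetical spec-sheet ratings, as in the example above:
# a vintage amp rated 50 W from 20 Hz to 20 kHz,
# and a modern amp rated 100 W at 1 kHz only.
vintage_w = 50.0
modern_w = 100.0

# Doubling the power is only a 3 dB difference in output level --
# far less dramatic than the "twice the watts" headline suggests.
db_difference = 10 * math.log10(modern_w / vintage_w)
print(f"{db_difference:.1f} dB")  # → 3.0 dB
```

So even if both ratings were honest, the level difference would be barely more than audible; the bandwidth over which the rating holds matters far more.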
 
There's never been one commonly agreed-upon standard for measuring output power, though.

Different amplifier topologies have different limitations depending on the measuring signal and procedure used, so much is clear. The real question is, which method is most relevant for reproducing music? And even this isn't all that easy to answer.

Are we asking for a standard where the amp must be able to output any single frequency from 20 Hz to 20 kHz at its rated power? Nobody ever needs 100 W at 20 kHz for home Hi-Fi. Are we asking for a signal containing "all" (or many?) frequencies from 20 Hz to 20 kHz? If so, at what spectral distribution? Pretty much the hardest test for a class-D amplifier to pass is a multi-tone test (equally distributed power) at low distortion, and the potential problems mainly stem from the treble region. However, such a test doesn't tell much about whether and how the amp can cope with high power output below, say, 50 Hz (which is pretty relevant with real music and real customer expectations).

There's no doubt that the power figures provided for many of the cheap class-D desktop amps are ridiculous and simply technically impossible. At the same time it's also impossible to provide one single number that would describe an amplifier's capabilities.
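As a rough illustration of why an equal-power multi-tone is such a torture signal, here's a sketch (the tone count, frequencies, and phases are arbitrary choices of mine, not any standardized test signal): summing many equal-amplitude tones produces a high crest factor, so short peaks sit far above the average power the amp delivers.

```python
import numpy as np

fs = 48_000                      # sample rate, Hz
t = np.arange(fs) / fs           # one second of signal

# Equal-amplitude tones spread log-evenly from 20 Hz to 20 kHz,
# with randomized phases (illustrative, not a standardized signal).
rng = np.random.default_rng(0)
freqs = np.geomspace(20, 20_000, 32)
phases = rng.uniform(0, 2 * np.pi, freqs.size)
signal = sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))
signal /= np.abs(signal).max()   # normalize peak to full scale

# Crest factor: ratio of peak to RMS level, in dB.
# A single sine is ~3 dB; a random-phase multi-tone is far higher,
# meaning the amp must leave a lot of headroom above its average output.
crest_db = 20 * np.log10(1.0 / np.sqrt(np.mean(signal ** 2)))
print(f"crest factor ≈ {crest_db:.1f} dB")
```

With the peaks pinned at full scale, the average power ends up roughly an order of magnitude below the peak power, which is exactly what makes rated-power claims on such signals hard to compare.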
 
This is so true.

A flat-spectrum multi-tone test, featured in many amp reviews, is certainly interesting as it can show the difference between good engineering and truly exceptional engineering (or is that over-engineering?), but no 'real' content will ever put such a strain on the amplifier.

Therefore I'm a proponent of additionally using falling ("pink") spectrum multi-tone tests to show how an amplifier performs under more realistic demands. It's a less demanding test, so amps generally perform better at it, which is great, because it shows that they also perform better with real music.

Here's a recent example (from my Amp Pro vs Amp Ultra measurements thread):
[Attached plot: flat vs. falling spectrum multi-tone measurements, Amp Pro vs. Amp Ultra]
We see how, with the 'torture test' flat-spectrum multi-tone signal, both amps show a rise in noise and distortion at high frequencies (to about -90 dBr at 20 kHz), while with the more realistic falling-spectrum multi-tone signal both amps' noise and distortion stay below -120 dBr all the way up to 20 kHz.
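For anyone curious what the "falling" weighting actually means, here's a minimal sketch (the 32 log-spaced tones are just an illustrative choice): pink weighting drops the power by 3 dB per octave, so each tone's amplitude scales as 1/sqrt(f), and the 20 kHz tone ends up about 30 dB down from the 20 Hz tone instead of at full strength.

```python
import numpy as np

# Tone frequencies as in a typical log-spaced multi-tone (illustrative).
freqs = np.geomspace(20, 20_000, 32)

# Flat spectrum: every tone at the same amplitude.
flat_amps = np.ones_like(freqs)

# Falling ("pink") spectrum: power drops 3 dB per octave,
# i.e. amplitude scales as 1/sqrt(f). Normalized to the lowest tone.
pink_amps = np.sqrt(freqs[0] / freqs)

# Relative level of the topmost (20 kHz) tone under each weighting:
flat_db = 20 * np.log10(flat_amps[-1])   # 0 dB -- full strain in the treble
pink_db = 20 * np.log10(pink_amps[-1])   # -30 dB -- far gentler on class D
print(f"flat: {flat_db:.0f} dB, pink: {pink_db:.0f} dB")
```

That 30 dB of relief at the top of the band is why the falling-spectrum result tracks real-music behavior so much better than the flat torture test.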
 
My point is that an amp that produces 50 W from 20 Hz to 20 kHz potentially has more usable power than one that offers 100 W at 1 kHz only, since the power required varies across the spectrum.

As such, my vintage amplifier's rating is a more realistic measure of its wattage, because it can produce that power consistently at any point in the audible spectrum.
 
Honestly, I doubt this qualifies as a solid general rule. You're probably going to find different amps behaving differently.

Just because an amplifier has a high output at 1 kHz doesn't mean it's low at 20 Hz or 20 kHz (ignoring the relevance for now).
 