Where SFDR shines
Spurious-Free Dynamic Range (SFDR) is a real, standard metric from RF and converter test work, and it has a clean, intuitive meaning:
SFDR is the difference between the desired signal (carrier) and the single strongest spurious spectral component (spur) within a defined observation window, typically expressed in dBc (relative to the carrier) or sometimes dBFS (relative to full scale in ADC/DAC contexts).
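The arithmetic is simple enough to show in a few lines. This is a minimal numerical sketch (synthetic data, coherent sampling so no window is needed), not a calibrated measurement procedure; `sfdr_dbc` is an illustrative helper, not a standard API:

```python
import numpy as np

def sfdr_dbc(signal, exclude_bins=2):
    """Estimate SFDR in dBc: carrier spectral level minus the level of
    the strongest remaining line (carrier bins masked out)."""
    power_db = 20 * np.log10(np.abs(np.fft.rfft(signal)) + 1e-20)
    k = int(np.argmax(power_db))                      # carrier bin
    carrier_db = power_db[k]
    power_db[max(k - exclude_bins, 0):k + exclude_bins + 1] = -np.inf
    return carrier_db - float(np.max(power_db))

# Synthetic record: full-scale carrier at bin 100 plus a spur 60 dB down at bin 700.
n = 8192
t = np.arange(n) / n
x = np.sin(2*np.pi*100*t) + 10**(-60/20) * np.sin(2*np.pi*700*t)
print(round(sfdr_dbc(x)))  # -> 60
```

Real instruments do the same thing with windowing, averaging, and careful exclusion of the carrier skirt, but the definition is exactly this subtraction.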
In amateur radio, that “largest spur below the carrier” idea maps naturally to TX spectral purity. On the RX side, the things that ruin CW/SSB reception in real operating conditions are usually not summarized well by “worst single spur,” which is why SFDR often ends up as a side note rather than a decision metric.
- TX: SFDR helps you spot discrete spurs/harmonics (compliance + good-neighbor behavior), but it won’t fully describe “splatter.”
- SSB TX: two-tone IMD (and adjacent-channel regrowth) is usually the real “clean vs dirty” separator.
- RX: DR3, blocking, and RMDR (reciprocal mixing / phase noise) predict contest-band survivability far better than SFDR.
Where SFDR shines: TX spurs, harmonics, and “spectral purity”
On transmit, “spectral purity” mostly means how far down the worst spur/harmonic sits relative to the fundamental. That’s SFDR in practice: “carrier to worst spur.”
(Regulators care about spurious emissions because they can land out of band and cause interference. The exact limits and measurement conditions depend on band, power, and jurisdiction.)
Even when you’re comfortably above the legal minimum, strong discrete spurs (“birdies,” reference leakage, synthesizer products, clock products) can still:
- annoy people in-band,
- trigger “what is that tone?” reports,
- create multi-station / Field Day self-interference,
- and often indicate design/layout/filtering problems.
Where SFDR falls short on TX: “dirty SSB” is usually IMD splatter
A rig can have excellent single-tone SFDR and still sound wide on SSB, because the most common “dirty SSB” mechanism is intermodulation distortion (IMD) from nonlinear PA behavior under a complex envelope.
That’s why practical transmitter evaluation often splits into two buckets:
- Spurs/harmonics (SFDR-ish): “Are there any nasty discrete products?”
- Two-tone IMD / regrowth: “Do I splatter when I talk (or when my audio chain hits peaks)?”
If you only look at SFDR, you can miss the failure mode that actually causes most “your signal is wide” complaints on voice.
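The mechanism is easy to see in a toy model. Assuming a memoryless cubic nonlinearity as a stand-in for mild PA compression (purely illustrative, not a real PA model), two tones at f1 and f2 generate third-order products at 2f1−f2 and 2f2−f1, right next to the wanted signal, which is exactly where “splatter” lives:

```python
import numpy as np

# Toy two-tone test through a memoryless cubic nonlinearity.
n = 16384
t = np.arange(n) / n
f1, f2 = 700, 740                       # two closely spaced tones (exact FFT bins)
x = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)
y = x - 0.05 * x**3                     # mild compression (illustrative coefficient)

spec_db = 20 * np.log10(np.abs(np.fft.rfft(y)) + 1e-20)
ref = spec_db[f1]                       # level of one wanted tone
for f in (2*f1 - f2, 2*f2 - f1):        # third-order products at 660 and 780
    print(f, round(spec_db[f] - ref, 1), "dBc")
```

With these numbers the products land roughly 27 dB below the wanted tones, only 40 Hz away from them; no single-tone “worst spur” sweep would have flagged that.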
Why SFDR is usually the wrong headline metric for RX (CW/SSB)
In real CW/SSB operating (contests, pileups, “big gun 2 kHz away”), receivers typically fail because of:
- Third-order IMD dynamic range (DR3): two strong signals mix in the front end/mixer/ADC path and create a false signal on top of what you want.
- Blocking / gain compression: a strong off-frequency signal pushes stages toward compression and your weak signal “goes away.”
- Reciprocal mixing / phase noise (RMDR): a strong nearby signal plus LO phase noise raises the noise floor right where you’re trying to listen.
Those are multi-signal and/or noise-like mechanisms. They are not well summarized by “carrier vs worst single spur.”
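For orientation, the usual back-of-envelope relations for the first and third mechanisms can be written down directly. The numbers below are hypothetical, and `dr3_db` / `rmdr_db` are just illustrative helper names:

```python
from math import log10

def dr3_db(iip3_dbm, mds_dbm):
    """Two-tone third-order IMD dynamic range: DR3 = (2/3) * (IIP3 - MDS)."""
    return (2.0 / 3.0) * (iip3_dbm - mds_dbm)

def rmdr_db(phase_noise_dbc_hz, bw_hz):
    """Reciprocal-mixing dynamic range at a given offset: how strong a clean
    carrier can be before LO phase noise, integrated over the detection
    bandwidth, equals the receiver noise floor."""
    return -(phase_noise_dbc_hz + 10 * log10(bw_hz))

# Hypothetical receiver: IIP3 = +30 dBm, MDS = -130 dBm in a 500 Hz CW bandwidth
print(round(dr3_db(30, -130), 1))    # -> 106.7 dB
# LO phase noise of -140 dBc/Hz at the offset of interest, same 500 Hz bandwidth
print(round(rmdr_db(-140, 500), 1))  # -> 113.0 dB
```

Note that both results depend on the measurement conditions (tone spacing for DR3, offset for RMDR, bandwidth for both), which is why published numbers always need those qualifiers attached.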
So what is RX SFDR good for?
On the receive side, SFDR is mainly useful for:
- finding internal birdies or “spurs in the panadapter,”
- spotting ADC clock/interleaver artifacts in some SDR implementations,
- describing “spur cleanliness” of a specific stage or architecture.
But as a primary “this radio wins contests” metric? DR3 + blocking + RMDR is where the story is.
Adaptive predistortion: why a simple SFDR test can be misleading
The honest answer is “it depends on how it’s implemented.” Modern rigs can include adaptive predistortion that targets the real-world distortion mechanism (nonlinear IMD) rather than the single worst spur that sets an SFDR number.
FlexRadio’s SmartSignal is an example of this approach: it’s an automatic adaptive pre-distortion system designed to reduce unwanted splatter / intermod products by pre-correcting the waveform before it hits the PA.
Why SFDR might not show the “win”
- Predistortion primarily attacks IMD/regrowth under varying envelopes (voice and similar), not PLL/reference spurs or clock leakage.
- If your “worst spur” is a synthesizer product, SFDR may barely move even if SSB IMD improves a lot.
- Some systems are less effective on signals that don’t provide the same training behavior. Flex explicitly notes SmartSignal works best with varying-amplitude signals, and its effectiveness with “fixed-amplitude” signals like FT8, RTTY, or CW is limited.
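A toy numerical sketch makes the point (this is not any vendor’s algorithm, and certainly not SmartSignal’s internals): pre-correcting a cubic PA model collapses the third-order IMD products, while an additive synthesizer-style spur passes through unchanged, so a “worst spur” SFDR number can barely move:

```python
import numpy as np

n = 16384
t = np.arange(n) / n
f1, f2, fspur = 700, 740, 1500           # two-tone pair plus a fixed "synth" spur
x = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)
a3 = 0.01                                # mild cubic coefficient (toy model)

def pa(u):
    # Toy PA: mild cubic compression, plus an additive discrete spur
    # that does not depend on the drive signal.
    return u - a3 * u**3 + 1e-3 * np.sin(2*np.pi*fspur*t)

def level_db(y, f):
    return 20 * np.log10(np.abs(np.fft.rfft(y))[f] + 1e-20)

plain = pa(x)
pred  = pa(x + a3 * x**3)                # first-order inverse of the cubic term

imd = 2*f2 - f1                          # third-order product at bin 780
print("IMD3 change:", round(level_db(pred, imd) - level_db(plain, imd), 1), "dB")
print("Spur change:", round(level_db(pred, fspur) - level_db(plain, fspur), 1), "dB")
```

In this sketch the IMD products drop by well over 10 dB while the spur is untouched: a two-tone test sees the improvement, a single-tone worst-spur test does not.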
Bottom line: if you want to see what predistortion is buying you, two-tone IMD (or a realistic voice/PEP test) is usually the measurement that reveals it, not a single-tone “worst spur” number.
The amplifier reality: your station’s “cleanliness” is often set by the amp
Once you drive an external linear, the transmitted spectrum is the system result:
- exciter output and interface levels,
- amplifier linearity,
- LPF effectiveness and installation details,
- and any control loops (ALC and/or predistortion feedback sampling).
That means a transceiver’s standalone SFDR can become less predictive of what you actually radiate. A mediocre amp (or an overdriven good amp) will dominate the real on-air result.
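The standard two-stage IP3 cascade formula shows why, assuming hypothetical but plausible numbers (an exciter with OIP3 of +45 dBm driving a 13 dB-gain amplifier with OIP3 of +52 dBm; `cascade_oip3_dbm` is an illustrative helper):

```python
from math import log10

def cascade_oip3_dbm(oip3_1_dbm, gain2_db, oip3_2_dbm):
    """Two-stage cascaded output IP3, combined as linear powers:
    1/OIP3_total = 1/OIP3_2 + 1/(OIP3_1 * G2)."""
    a = 10 ** ((oip3_1_dbm + gain2_db) / 10)   # stage-1 OIP3 referred to the output
    b = 10 ** (oip3_2_dbm / 10)
    return 10 * log10(1.0 / (1.0 / a + 1.0 / b))

print(round(cascade_oip3_dbm(45, 13, 52), 1))  # -> 51.0
```

The system result sits about 1 dB below the amplifier’s own +52 dBm: the amp sets the linearity almost regardless of how clean the exciter is, and overdriving it moves the result further in the wrong direction.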
ALC: useful tool, not a guarantee (and not standardized)
ALC can help prevent overdrive (which is one of the fastest ways to create ugly IMD), but it is easy to overestimate what ALC “saves.” Two practical issues show up in the wild:
- ALC behavior isn’t standardized: voltage ranges and response timing vary widely between manufacturers (and sometimes between models in the same brand).
- Time constants matter: a slow or poorly behaved ALC loop can allow overshoot on keying or peaks, and can even create artifacts if it “rides” modulation.
So if you run an amp, treat ALC as one tool in the box, not a universal safety net. The only way to know the truth is to measure post-amp, post-LPF under the modes and power levels you actually use.
Practical takeaways: when to use SFDR, and what to use instead
Judging TX cleanliness (no external amp)
- Use SFDR / spectral purity to quickly detect discrete spurs and harmonics.
- Pair it with two-tone IMD if you care about SSB “clean vs splattery.”
Judging TX cleanliness (with an external amp)
- Measure at the station output (after the amplifier and filtering).
- Check both spurs/harmonics and IMD/regrowth under realistic drive and duty cycle.
- If you have predistortion capability, use tests that actually exercise it (voice/two-tone), and understand mode-dependence.
Judging RX performance for CW/SSB
- Prioritize DR3 (with close-in spacing), blocking, and RMDR / phase noise.
- Use SFDR mainly as a spur/birdie detector, not as the main “receiver winner” yardstick.
Mini-FAQ
- Is SFDR “the” spectral purity spec? — It’s a useful shorthand for discrete spurs/harmonics relative to the carrier, but it doesn’t fully describe IMD splatter on SSB.
- Why can a radio look clean on SFDR but splatter on SSB? — Because SSB “dirt” is usually nonlinear IMD/regrowth, not one dominant spur.
- What receiver specs matter most in contests? — Close-in DR3, blocking behavior, and RMDR (reciprocal mixing / phase noise) predict survival far better than RX SFDR.
- Does predistortion improve SFDR? — Not necessarily. Predistortion mainly reduces IMD/regrowth; the “worst spur” that limits SFDR may be unrelated and unchanged.
- If I run an amp, what should I measure? — Measure post-amp/post-LPF for both spurs/harmonics and IMD under the modes and duty cycle you actually use.
Interested in more technical content? Subscribe to our updates for deep-dive RF articles and lab notes.
Questions or experiences to share? Feel free to contact RF.Guru.