What does the noise figure specification on an LNA datasheet include and exclude?
Noise Figure Specification Details
Noise figure is the most critical specification for receiver LNAs, as the first-stage noise figure dominates overall system sensitivity through the Friis equation. Understanding exactly what the datasheet NF specification means requires knowing the measurement setup, conditions, and what is and isn't included.
Noise Parameters
A complete noise characterization requires four noise parameters, not just NF: NF_min (minimum achievable noise figure), Gamma_opt (optimum source reflection coefficient for minimum NF; it is complex, so its magnitude and phase count as two parameters), and R_n (noise resistance, which describes sensitivity to source impedance mismatch). The relationship, written in linear (noise factor) terms, is: F = F_min + (4R_n/Z_0) × |Gamma_s - Gamma_opt|^2 / ((1-|Gamma_s|^2) × |1+Gamma_opt|^2). When Gamma_s (the source reflection coefficient) equals Gamma_opt, NF = NF_min. When Gamma_s = 0 (perfect 50-ohm match), NF is higher by an amount determined by R_n and how far Gamma_opt lies from the center of the Smith chart. Typical GaAs pHEMT values at 10 GHz: NF_min = 0.6 dB, |Gamma_opt| = 0.5, R_n = 8 ohms, 50-ohm NF = 0.9 dB. The 0.3 dB difference between NF_min and the 50-ohm NF is significant in noise-critical applications.
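The four-parameter relationship can be checked numerically. A minimal sketch: the Gamma_opt phase of 60 degrees is an assumption (the text gives only the magnitude 0.5; datasheets list both magnitude and angle), chosen here because it reproduces the quoted 50-ohm NF of roughly 0.9 dB:

```python
import math

def nf_from_noise_params(nf_min_db, gamma_opt, r_n, gamma_s, z0=50.0):
    """NF (dB) at source reflection coefficient gamma_s, from the
    four noise parameters: NF_min, complex Gamma_opt, and R_n."""
    f_min = 10 ** (nf_min_db / 10)  # NF_min in linear (noise factor) terms
    num = 4 * (r_n / z0) * abs(gamma_s - gamma_opt) ** 2
    den = (1 - abs(gamma_s) ** 2) * abs(1 + gamma_opt) ** 2
    return 10 * math.log10(f_min + num / den)

# GaAs pHEMT at 10 GHz: NF_min = 0.6 dB, |Gamma_opt| = 0.5, R_n = 8 ohms.
# Phase of Gamma_opt (60 deg) is hypothetical -- not given in the text.
phase = math.radians(60)
gamma_opt = 0.5 * complex(math.cos(phase), math.sin(phase))

print(round(nf_from_noise_params(0.6, gamma_opt, 8.0, 0), 2))          # ~0.93 dB, 50-ohm source
print(round(nf_from_noise_params(0.6, gamma_opt, 8.0, gamma_opt), 2))  # 0.6 dB, Gamma_s = Gamma_opt
```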
Measurement Conditions
NF is measured using the Y-factor method: a calibrated noise source (ENR known to ±0.1 dB) is toggled between hot (noise on) and cold (noise off, at the 290K reference temperature) states. The noise figure meter measures the ratio Y = P_hot/P_cold and computes the linear noise factor F = ENR/(Y - 1) (with ENR in linear terms), or equivalently NF_dB = ENR_dB - 10*log10(Y - 1). Measurement accuracy is typically ±0.15 dB at room temperature and ±0.25 dB over temperature, with primary error sources being noise source ENR uncertainty, connector repeatability, and instrument calibration. Datasheets typically show NF measured on an evaluation board with optimized bias; your PCB layout may add 0.2-0.5 dB due to input trace loss, via inductance, and ground return path differences.
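The Y-factor arithmetic the instrument performs internally can be sketched directly. The power readings below are hypothetical, chosen only to illustrate the computation:

```python
import math

def nf_y_factor(enr_db, p_hot_dbm, p_cold_dbm):
    """NF (dB) from a Y-factor measurement with the cold state at 290 K."""
    y = 10 ** ((p_hot_dbm - p_cold_dbm) / 10)  # Y = P_hot / P_cold, linear
    return enr_db - 10 * math.log10(y - 1)     # NF_dB = ENR_dB - 10*log10(Y-1)

# Hypothetical readings: ENR = 15 dB, 13.8 dB hot/cold power difference
print(round(nf_y_factor(15.0, -60.0, -73.8), 2))  # ~1.38 dB
```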
NF in Cascade
The Friis equation shows why the first-stage NF is so critical: NF_system = NF_1 + (NF_2 - 1)/G_1 + (NF_3 - 1)/(G_1 × G_2) + ... (all quantities in linear terms). For a system with LNA NF = 1 dB (linear F_1 = 1.26, gain = 20 dB = 100), followed by a mixer NF = 8 dB (linear 6.31): F_system = 1.26 + (6.31 - 1)/100 = 1.26 + 0.053 = 1.31, i.e. NF_system = 1.18 dB. The mixer contribution is nearly negligible. But if the LNA gain drops to 10 dB: F_system = 1.26 + (6.31 - 1)/10 = 1.26 + 0.53 = 1.79, i.e. NF_system = 2.53 dB. LNA gain directly controls how much subsequent stages degrade system NF. For space applications, cryogenic LNAs at 20K achieve NF below 0.1 dB (noise temperature 7K), where even 0.05 dB of improvement in NF translates to measurable sensitivity improvement.
Friis: NF_sys = NF₁ + (NF₂-1)/G₁ + (NF₃-1)/(G₁G₂)
Y-Factor: F = ENR / (Y-1) (linear); NF_dB = ENR_dB - 10*log10(Y-1)
Noise Temp: T_e = T₀(F - 1), F linear, T₀ = 290K
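The Friis and noise-temperature formulas above can be sketched as a small cascade calculator; the stage values match the LNA-plus-mixer example:

```python
import math

def db_to_lin(db):
    """Convert dB to a linear power ratio."""
    return 10 ** (db / 10)

def cascade_nf_db(stages):
    """Friis cascade NF in dB; stages is a list of (NF_dB, gain_dB)."""
    f_total, g_running = 0.0, 1.0
    for i, (nf_db, gain_db) in enumerate(stages):
        f = db_to_lin(nf_db)
        # First stage contributes F_1 directly; later stages are
        # divided by the gain of everything before them.
        f_total += f if i == 0 else (f - 1) / g_running
        g_running *= db_to_lin(gain_db)
    return 10 * math.log10(f_total)

print(round(cascade_nf_db([(1.0, 20.0), (8.0, 0.0)]), 2))  # ~1.18 dB
print(round(cascade_nf_db([(1.0, 10.0), (8.0, 0.0)]), 2))  # ~2.53 dB

# Noise temperature: T_e = T0 * (F - 1), T0 = 290 K
print(round(290 * (db_to_lin(0.1) - 1), 1))  # NF = 0.1 dB -> ~6.8 K
```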
Frequently Asked Questions
Should I design for NF_min or 50-ohm NF?
It depends on system priorities. Designing for NF_min requires presenting Gamma_opt as the source reflection coefficient seen by the transistor, which usually means a poor input impedance match (S11 of -3 to -8 dB). This causes gain ripple and potential instability when connected to varying source impedances. For most applications (communications receivers, radar front-ends), a compromise design achieves NF within 0.3 dB of NF_min while maintaining S11 < -10 dB. For radio astronomy, deep-space receivers, and cryogenic systems where every 0.01 dB of NF matters, noise-optimized matching is used with an isolator (a circulator with one port terminated) before the LNA, which absorbs the reflection from the mismatched LNA input and presents a clean 50-ohm impedance to the preceding filter.
Why does my measured NF differ from the datasheet?
Common reasons: (1) Input matching network loss (0.3-1.0 dB for microstrip matching at 10+ GHz). (2) Bias point difference from datasheet conditions. (3) Board grounding and via inductance affecting source impedance. (4) Test equipment calibration and connector loss. (5) Temperature: NF rises with physical temperature, so measuring above the datasheet's 25°C ambient gives higher readings. (6) Different source impedance than the measurement setup used by the manufacturer. Re-measure at the exact bias conditions and frequency stated on the datasheet to isolate the cause.
What NF does a receiver need?
Required NF depends on the sensitivity (MDS) target: NF = MDS - (-174 + 10*log10(BW)) - SNR_required. For a receiver needing -100 dBm MDS in 1 MHz bandwidth with 10 dB SNR: NF = -100 - (-174 + 60) - 10 = 4 dB. For GPS (BW = 2 MHz, required C/N0 = 30 dB-Hz): NF ≈ 2 dB. For radio astronomy: NF < 0.3 dB (often cryogenic). For cellular base stations (5G NR, BW = 100 MHz): NF < 2.5 dB. For radar warning receivers: NF < 10 dB is often acceptable due to the wide bandwidth and high expected signal levels.
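The MDS-to-NF rearrangement can be sketched directly, reproducing the -100 dBm / 1 MHz / 10 dB SNR example:

```python
import math

def required_nf_db(mds_dbm, bw_hz, snr_db):
    """Required receiver NF: NF = MDS - (-174 + 10*log10(BW)) - SNR."""
    noise_floor_dbm = -174 + 10 * math.log10(bw_hz)  # kTB at 290 K
    return mds_dbm - noise_floor_dbm - snr_db

print(required_nf_db(-100.0, 1e6, 10.0))  # 4.0 dB
```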