# The nature and significance of common RF indicators

• Rx Sensitivity (receive sensitivity)
• Receive sensitivity, perhaps the most basic concept of all, characterizes the minimum signal strength that the receiver can recognize without exceeding a given error rate. The error rate here is a general term carried over from the CS (Circuit Switched) era; in most cases BER (bit error rate) or PER (packet error rate) is used to assess sensitivity. In the LTE era, sensitivity is defined directly in terms of throughput, because LTE has no circuit-switched voice channel at all. This is a real evolution: for the first time, sensitivity is no longer measured against a standardized stand-in such as the 12.2 kbps RMC (reference measurement channel, which represents the 12.2 kbps speech-codec rate), but is defined by the throughput that users can actually experience.
• SNR (signal-to-noise ratio)
• When we talk about sensitivity, we often refer to SNR (signal-to-noise ratio; for a receiver we usually mean the demodulation SNR), defined as the minimum SNR at which the demodulator can still demodulate the signal. (Interviewers love this one: they give you a string of NF and gain values, tell you the demodulation threshold, and ask you to derive the sensitivity.) So where do S and N come from?
• S is the Signal, the useful signal; N is the Noise, which generally refers to everything that carries no useful information. The useful signal is generally emitted by the transmitter of the communication system, while noise has many sources. The most typical is the famous -174 dBm/Hz, the natural thermal noise floor. Remember that it is a quantity independent of the type of communication system; in a sense it is derived from thermodynamics (so it depends on temperature). Also note that it is a noise power density (hence the dimension dBm/Hz): however much bandwidth we receive the signal in, that much bandwidth of noise is received along with it, so the final noise power is the noise power density integrated over the bandwidth.
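The noise-floor arithmetic above, and the classic interview exercise of deriving sensitivity from NF and the demodulation threshold, can be sketched in a few lines. This is a minimal sketch: the 290 K reference temperature is the conventional assumption, and the example bandwidth/NF/SNR numbers are made up for illustration.

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K
T_REF = 290.0               # conventional reference temperature, K

def noise_floor_dbm_per_hz(temp_k: float = T_REF) -> float:
    """Thermal noise power density kT, expressed in dBm/Hz."""
    return 10 * math.log10(K_BOLTZMANN * temp_k * 1000)  # *1000: W -> mW

def sensitivity_dbm(bw_hz: float, nf_db: float, snr_min_db: float) -> float:
    """Sensitivity = kT (dBm/Hz) + 10*log10(BW) + NF + required demod SNR."""
    return noise_floor_dbm_per_hz() + 10 * math.log10(bw_hz) + nf_db + snr_min_db

# kT at 290 K is the famous -174 dBm/Hz
print(round(noise_floor_dbm_per_hz(), 1))  # -174.0
# Hypothetical example: 4.5 MHz occupied bandwidth, NF = 7 dB, demod SNR = -1 dB
print(round(sensitivity_dbm(4.5e6, 7.0, -1.0), 1))  # -101.4
```

Integrating the density over the bandwidth is what turns dBm/Hz into dBm; everything after that is simple addition in the log domain.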
• TxPower (transmission power)
The significance of transmit power is that the more power arrives at the receiver, the longer the distance a link can cover; this is true in most communication systems.
• ACLR/ACPR
• We put these items together because they represent part of the "transmitter noise"; this noise is not in the transmit channel itself but is the part of the transmitter's output that leaks into the adjacent channels, collectively referred to as "Adjacent Channel Leakage".
• ACLR and ACPR are actually the same thing: one name is used in terminal testing, the other in base-station testing. Both are named after the "Adjacent Channel", and both compute the power of the interfering signal over one channel bandwidth. This measurement method reveals the design intent of the indicator: to assess how the signal leaked by a transmitter interferes with a receiver of the same or a similar standard. The leakage falls into the victim receiver's band with the same frequency and bandwidth pattern, forming co-channel interference to the signal that receiver is trying to receive.

In LTE there are two settings for the ACLR test, E-UTRA and UTRA. The former describes interference from an LTE system to another LTE system; the latter considers interference from an LTE system to a UMTS system. Accordingly, the measurement bandwidth of E-UTRA ACLR is the occupied bandwidth of the LTE RBs, while the measurement bandwidth of UTRA ACLR is the occupied bandwidth of a UMTS signal (3.84 MHz for the FDD system, 1.28 MHz for the TDD system). In other words, ACLR/ACPR describes a kind of "peer" interference: leakage of the transmitted signal into the same or a similar communication system.

This definition has very real practical significance. In a live network, signals constantly leak between a cell and its adjacent and nearby cells, so network planning and optimization is largely a process of maximizing capacity while minimizing interference. Adjacent-channel leakage between neighboring cells of the same system is the typical case; seen from the other direction, the handsets of users packed into a dense crowd can also become sources of mutual interference.
Similarly, communication systems have always aimed at a "smooth transition" during evolution, i.e. upgrading the existing network in place into the next-generation network. With two or even three generations of systems coexisting, interference between different systems must be considered. LTE's introduction of the UTRA ACLR is precisely to account for LTE's RF interference to the previous-generation system when it coexists with UMTS.
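As a toy illustration of the measurement principle (not a 3GPP-compliant procedure: the RRC filter weighting used for UTRA ACLR is omitted, and the PSD, bandwidths and 35 dB leakage floor are made up), ACLR is just the ratio of power integrated over the main channel to power integrated over one victim-system bandwidth at the adjacent-channel offset:

```python
import numpy as np

def channel_power_dbm(freqs_hz, psd_mw_per_hz, f_center, meas_bw):
    """Integrate a PSD over one measurement bandwidth -> power in dBm."""
    mask = np.abs(freqs_hz - f_center) <= meas_bw / 2
    df = freqs_hz[1] - freqs_hz[0]
    return 10 * np.log10(np.sum(psd_mw_per_hz[mask]) * df)

def aclr_db(freqs_hz, psd_mw_per_hz, ch_bw, meas_bw, offset):
    """ACLR = main-channel power minus adjacent-channel power (dB).
    meas_bw is the victim system's occupied bandwidth (e.g. 3.84 MHz for
    UTRA ACLR, the LTE RB occupied bandwidth for E-UTRA ACLR)."""
    p_main = channel_power_dbm(freqs_hz, psd_mw_per_hz, 0.0, ch_bw)
    p_adj = channel_power_dbm(freqs_hz, psd_mw_per_hz, offset, meas_bw)
    return p_main - p_adj

# Toy PSD: ~0 dBm total in a 4.5 MHz main channel, leakage floor 35 dB down
f = np.arange(-10e6, 10e6, 1e3)
psd = np.where(np.abs(f) <= 2.25e6, 1.0 / 4.5e6, (1.0 / 4.5e6) * 10**-3.5)
print(round(aclr_db(f, psd, 4.5e6, 4.5e6, 5e6), 1))  # 35.0
```

The key point the sketch captures is that the adjacent-channel power is taken over a full channel bandwidth, which is exactly why ACLR reflects the transmitter's average "noise floor" in the neighbor's channel rather than isolated spurs.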
• Modulation Spectrum / Switching Spectrum
• Back in the GSM system, Modulation Spectrum (modulation spectrum) and Switching Spectrum (switching-transient spectrum; translations vary across imported products) play a similar adjacent-channel-leakage role. The difference is that their measurement bandwidth is not the occupied bandwidth of the GSM signal. By definition, the modulation spectrum can be thought of as measuring interference between synchronized systems, and the switching spectrum as measuring interference between unsynchronized systems (in fact, if the signal is not gated, the switching spectrum will completely drown out the modulation spectrum).

This involves another concept: in the GSM system, cells are not synchronized with each other, even though GSM uses TDMA; by contrast, in TD-SCDMA and the later TD-LTE, cells are synchronized (the flying-saucer or ball-shaped GPS antenna is a shackle the TDD system has never been able to shake off).

Because cells are not synchronized, the power leakage of cell A's rising/falling edges may fall into the payload portion of cell B, so the switching spectrum is used to measure the transmitter's adjacent-channel interference in that state. Within the whole 577 us GSM timeslot, however, the rising/falling edges occupy only a small fraction; most of the time the payload portions of two neighboring cells overlap in time, and in that case the transmitter's adjacent-channel interference can be evaluated by the modulation spectrum.
• SEM (Spectrum Emission Mask)
• When talking about SEM, first note that it is an in-band indicator, as distinguished from spurious emission. The latter includes the SEM region in a broad sense, but its focus is spectrum leakage outside the transmitter's operating band, and it is introduced more from the EMC (electromagnetic compatibility) perspective.

SEM provides a "spectrum template": when measuring the transmitter's in-band spectrum leakage, you check whether any point exceeds the template's limit line. It is related to ACLR but not the same thing. ACLR considers the average power leaked into the adjacent channel, using the channel bandwidth as the measurement bandwidth, so it reflects the transmitter's "noise floor" in the adjacent channel. SEM instead uses a small measurement bandwidth (often 100 kHz to 1 MHz) to catch out-of-limit points in the adjacent spectrum, so it reflects "spurious emissions riding on top of that noise floor".

If you sweep the SEM with a spectrum analyzer, you will see that the spurious points on the adjacent channels are generally higher than the ACLR average, so if the ACLR itself has no margin, the SEM will easily fail. Conversely, an SEM failure does not necessarily mean the ACLR is bad. A common case is an LO spur, or a mixing product of some clock with the LO (often narrow-band, close to a point frequency), coupling into the transmitter chain; then even with a fine ACLR, the SEM can still fail.
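The template-checking idea can be sketched as a point-by-point comparison against a piecewise mask. The mask segments and measured points below are entirely hypothetical, just to show the mechanism (real masks are defined per band and per power class in the relevant 3GPP specifications):

```python
def sem_violations(points, mask):
    """Compare measured points (offset_hz, level_dbm) against a spectrum
    emission mask given as (start_offset, stop_offset, limit_dbm) segments.
    Returns the points that exceed the applicable limit."""
    bad = []
    for off, level in points:
        for start, stop, limit in mask:
            if start <= off < stop and level > limit:
                bad.append((off, level, limit))
    return bad

# Hypothetical mask: limits tighten with offset from the channel edge
mask = [(0.0e6, 1.0e6, -10.0), (1.0e6, 5.0e6, -25.0)]
meas = [(0.5e6, -12.0), (2.0e6, -20.0), (4.0e6, -30.0)]
print(sem_violations(meas, mask))  # [(2000000.0, -20.0, -25.0)]
```

A single narrow spur (like the 2 MHz point here) is enough to fail SEM even when the average leakage, and hence ACLR, is perfectly healthy; that is the asymmetry described above.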
• EVM (Error Vector Magnitude)
• 7.1 The relationship between EVM and ACPR/ACLR
• It is difficult to give a quantitative relationship between EVM and ACPR/ACLR. From the viewpoint of amplifier nonlinearity, they should be positively correlated: the amplifier's AM-AM and AM-PM distortion enlarges EVM and is also the main source of ACPR/ACLR.

However, EVM and ACPR/ACLR are not always positively correlated. A typical counterexample is clipping, commonly used in the digital IF stage, i.e. cutting down signal peaks. Clipping reduces the peak-to-average ratio (PAR) of the transmitted signal, and the lower peak power helps reduce ACPR/ACLR after the PA; but clipping also damages EVM, because whether it is done by windowing or by filtering, it distorts the signal waveform and therefore increases EVM.
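A deliberately simplified, symbol-level sketch of the second point: hard-clipping the complex envelope leaves most symbols untouched but pulls in the outer constellation points, so EVM rises even though the peaks (and hence PA stress and ACLR) go down. The 16QAM-like constellation and clip threshold are arbitrary choices for illustration.

```python
import numpy as np

def evm_percent(ref, meas):
    """RMS EVM: error-vector power relative to reference-vector power, in %."""
    err = meas - ref
    return 100 * np.sqrt(np.mean(np.abs(err)**2) / np.mean(np.abs(ref)**2))

def clip(x, a_max):
    """Hard-clip the complex envelope at a fixed magnitude threshold."""
    mag = np.abs(x)
    return np.where(mag > a_max, x * a_max / mag, x)

rng = np.random.default_rng(0)
# Random 16QAM-like symbols, normalized to unit average power
levels = np.array([-3, -1, 1, 3]) / np.sqrt(10)
sym = rng.choice(levels, 4096) + 1j * rng.choice(levels, 4096)

print(round(evm_percent(sym, sym), 2))             # 0.0 (no distortion)
print(evm_percent(sym, clip(sym, 1.0)) > 5.0)      # True: clipping raises EVM
```

Only the corner points (magnitude about 1.34 here) get clipped, yet the averaged error vector is already well above typical EVM budgets; in a real OFDM chain the effect appears after the receiver's matched filtering, but the direction is the same.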
• 7.2 The origin of PAR
• PAR (peak-to-average ratio) is usually expressed through a statistical function such as the CCDF, whose curve gives the signal's power (amplitude) values and the probability of each occurring. For example, if a signal's average power is 10 dBm and the statistical probability of its power exceeding 15 dBm is 0.01%, we can take its PAR to be 5 dB.

PAR is an important driver of transmitter spectral regrowth (ACLR/ACPR/modulation spectrum) in modern communication systems. Peak power pushes the amplifier into its nonlinear region and causes distortion, and the higher the peak power, the stronger the nonlinearity.

In the GSM era, thanks to the constant-envelope property of GMSK modulation, PAR = 0 dB, and we would push the GSM PA all the way to P1dB to obtain maximum efficiency. With the introduction of EDGE, 8PSK modulation is no longer constant-envelope, so we typically back the PA's average output power off to about 3 dB below P1dB, because the PAR of an 8PSK signal is 3.21 dB.

In the UMTS era, for WCDMA and CDMA alike, the peak-to-average ratio is much larger than EDGE's. The reason is signal correlation in a code-division system: when the signals of multiple code channels superimpose in the time domain, their phases may align, producing power peaks.

The peak-to-average ratio of LTE stems from the burstiness of RBs. OFDM divides multi-user/multi-service data into blocks in both time and frequency, so high power can appear within a particular "time block". The LTE uplink instead uses SC-FDMA, which first spreads the time-domain signal to the frequency domain with a DFT, effectively "smoothing" the time-domain burstiness and thereby reducing PAR.
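The CCDF-based definition above can be sketched directly: estimate the instantaneous-power level exceeded with some small probability and compare it to the average. The two test signals are idealized stand-ins (a constant-envelope phasor for GMSK, a complex-Gaussian envelope for a many-subcarrier OFDM signal); the 0.01% probability point matches the example in the text.

```python
import numpy as np

def par_db(samples, prob=1e-4):
    """PAR at a CCDF probability: the instantaneous power level exceeded
    with probability `prob`, relative to the average power, in dB."""
    p_inst = np.abs(samples)**2
    return 10 * np.log10(np.quantile(p_inst, 1 - prob) / p_inst.mean())

rng = np.random.default_rng(1)
n = 200_000
# Constant-envelope (GMSK-like) signal: magnitude fixed at 1
gmsk = np.exp(1j * 2 * np.pi * rng.random(n))
# OFDM-like signal: many subcarriers sum to a complex Gaussian envelope
ofdm = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

print(round(par_db(gmsk), 2))  # ~0: constant envelope, PAR = 0 dB
print(round(par_db(ofdm), 1))  # roughly 9 to 10 dB at the 0.01% point
```

For the Gaussian envelope the instantaneous power is exponentially distributed, so the 1e-4 CCDF point sits near 10*log10(ln 1e4), about 9.6 dB, which is why high-order OFDM downlinks need so much PA back-off compared with GMSK.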
• Summary of interference indicators
• The "interference indicators" here refer to sensitivity tests under various kinds of interference, beyond the receiver's static sensitivity. Tracing the origin of these test items is actually quite interesting. Our common interference indicators include Blocking, Desense, Channel Selectivity, and so on.
• 8.1 Blocking
• Blocking is actually a very old RF indicator that dates back to the early days of radar. The principle is to inject a large signal into the receiver (the first-stage LNA usually suffers most), driving the amplifier into its nonlinear region or even into saturation. At that point the amplifier's gain suddenly drops, and at the same time it becomes strongly nonlinear, so it can no longer properly amplify the useful signal.

Another possible blocking mechanism acts through the receiver's AGC: a large signal enters the receive chain, and the AGC reduces the gain to preserve dynamic range; but the useful signal entering the receiver is very weak, so with the gain now insufficient, the amplitude of the useful signal reaching the demodulator is too small.

Blocking tests are divided into in-band and out-of-band, mainly because the RF front end generally has a band filter that suppresses out-of-band blockers. In either case the blocking signal is generally an unmodulated single tone. Unmodulated single-tone signals are actually rare in the real world; in engineering, the single tone is simply a simplification that (approximately) stands in for various narrow-band interferers.

Solving Blocking is mainly an RF matter: bluntly, improve the receiver's IIP3 and extend its dynamic range. For out-of-band Blocking, the filter's rejection is also critical.
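The gain-compression mechanism can be illustrated with the textbook memoryless third-order model y = a1*x + a3*x^3 (coefficients here are arbitrary). For a weak desired tone accompanied by a strong blocker of amplitude A at another frequency, the standard two-tone expansion gives an effective small-signal gain of a1 + (3/2)*a3*A^2, so a compressive a3 (negative) desensitizes the wanted signal as the blocker grows:

```python
import math

def small_signal_gain_db(a1, a3, blocker_amp):
    """Effective gain seen by a weak desired tone through y = a1*x + a3*x^3
    in the presence of a strong blocker of amplitude A at another frequency:
    a1 + (3/2)*a3*A^2 (the classic cross-modulation/desense result)."""
    g = a1 + 1.5 * a3 * blocker_amp**2
    return 20 * math.log10(abs(g))

a1, a3 = 10.0, -1.0  # 20 dB linear gain, compressive third-order term
for amp in (0.0, 0.5, 1.0):
    print(round(small_signal_gain_db(a1, a3, amp), 2))
# prints 20.0, 19.67, 18.59: gain falls as the blocker gets larger
```

Note the factor 3/2 rather than the 3/4 of self-compression: a blocker compresses a weak wanted signal twice as fast as a signal compresses itself, which is part of why blocking tests are so punishing.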
• 8.2 AM Suppression
• AM Suppression is an indicator unique to the GSM system. As described, the interfering signal is a TDMA signal similar to a GSM signal, synchronized with the useful signal and at a fixed delay. This scenario simulates neighboring-cell signals in a GSM network; given that the interferer's frequency offset is required to be greater than 6 MHz (GSM bandwidth is 200 kHz), this is a typical neighboring-cell signal configuration. We can therefore regard AM Suppression as reflecting the receiver's tolerance of neighboring-cell interference during real GSM operation.
• 8.3 Adjacent (Alternate) Channel Suppression (Selectivity)
• 8.4 Co-Channel Suppression (Selectivity)
• This refers to interference at exactly the same frequency, generally an interference pattern between two co-channel cells. By the network-planning principles described earlier, two co-channel cells should be placed as far apart as possible, yet however far apart they are, their signals still leak into each other; only the strength differs. To the terminal, the signals of the two cells both look like "legitimate useful signals" (of course, the protocol layers have a set of access rules to prevent wrong attachment); whether the terminal's receiver can keep the wanted cell from being overwhelmed by the other depends on its co-channel selectivity.
• 8.5 Summary
• Blocking is "a large signal interfering with a small signal", so RF still has room to maneuver; the indicators above, such as AM Suppression and Adjacent (Co/Alternate) Channel Suppression (Selectivity), are "a small signal interfering with a large signal", where pure RF work means little and the physical-layer algorithms matter more.

Single-tone Desense is an indicator unique to the CDMA system. Its characteristic is that the interfering single tone is an in-band signal very close to the useful signal. Two kinds of products can then fall into the receive band. The first comes from the LO's close-in phase noise: mixing the LO with the useful signal produces the baseband signal, while mixing the LO's phase noise with the interferer produces a second baseband component; both fall within the receiver's baseband filter, the former as the useful signal and the latter as interference. The second comes from nonlinearity in the receiver: the useful signal (with a certain bandwidth, e.g. a 1.2288 MHz CDMA signal) can intermodulate with the interferer in a nonlinear device, and the intermodulation products may also fall within the receive band and become interference.

Single-tone desense originated when North America launched the CDMA system in the same band as the legacy analog system AMPS, and the two networks coexisted for a long time. As the latecomer, the CDMA system had to consider the AMPS system's interference to itself.

This reminds me of PHS, jokingly known as the handset that "drops the call if you move". Because it long occupied 1900~1920 MHz, TD-SCDMA/TD-LTE B39 in China could only be deployed in the lower part of B39, 1880~1900 MHz, until PHS was finally retired from the network.
The textbook explanation of Blocking is fairly simple: a large signal entering the receiver's amplifier pushes it into the nonlinear region, and the actual gain (for the useful signal) drops. But that makes two scenarios hard to explain.

Scenario 1: the first-stage LNA has 18 dB of linear gain. A large injected signal drives it to P1dB, so the gain becomes 17 dB. If nothing else changes (the LNA's NF and so on stay the same), the impact on the whole system's noise figure is actually very limited: the denominator under the later stages' contribution to the total NF merely shrinks a little, which barely affects the overall sensitivity.

Scenario 2: the first-stage LNA has very high IIP3 and is unaffected, but the second-stage gain block is affected (the interferer drives it to P1dB). In this case the impact on the system NF is even smaller.

Let me throw out a tentative view: the impact of Blocking may consist of two parts. One is the gain compression described in textbooks; the other is that once the amplifier enters the nonlinear region, the useful signal itself is distorted there. That distortion may in turn have two components: spectral regrowth of the useful signal (harmonic components) caused by the pure amplifier nonlinearity, and cross-modulation of the small signal by the large signal.

This suggests another idea: if we want to simplify the Blocking test (3GPP requires a frequency sweep, which is very time-consuming), we might select just those frequency points at which a blocking signal distorts the useful signal the most.
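Scenario 1 above can be checked numerically with the Friis cascade formula. The line-up below is hypothetical (an LNA with 1 dB NF followed by a second stage with 8 dB NF); dropping the LNA gain from 18 dB to 17 dB moves the system NF by well under 0.1 dB, supporting the claim that gain compression alone cannot explain the sensitivity loss seen in blocking tests:

```python
import math

def cascade_nf_db(stages):
    """Friis cascade: stages = [(gain_db, nf_db), ...] -> system NF in dB.
    F_total = F1 + (F2-1)/G1 + (F3-1)/(G1*G2) + ..."""
    f_total, g_acc = 1.0, 1.0
    for i, (g_db, nf_db) in enumerate(stages):
        f = 10 ** (nf_db / 10)
        f_total = f if i == 0 else f_total + (f - 1) / g_acc
        g_acc *= 10 ** (g_db / 10)
    return 10 * math.log10(f_total)

# Hypothetical line-up: LNA (NF 1 dB), then an 8 dB NF second stage
nominal    = cascade_nf_db([(18.0, 1.0), (15.0, 8.0)])  # LNA at linear gain
compressed = cascade_nf_db([(17.0, 1.0), (15.0, 8.0)])  # LNA driven to P1dB
print(round(nominal, 2), round(compressed, 2))  # 1.28 1.35
```

A 1 dB gain compression costs only about 0.07 dB of system NF here, which is exactly the puzzle the text raises: the observed desense under blocking is usually far larger than this.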
Intuitively, those frequency points may include f0/N and f0*N (f0 being the useful-signal frequency and N a natural number). The former matters because the Nth harmonic that the large signal itself generates in the nonlinear region lands exactly on the useful-signal frequency f0, forming direct interference; the latter superimposes on the Nth harmonic of the useful signal f0 and thereby alters the time-domain waveform of the output at f0. To explain: by Fourier decomposition, a time-domain waveform is the sum of its fundamental and its harmonics in the frequency domain; when the power of the Nth harmonic changes, the corresponding change in the time domain is a change of the signal's envelope, i.e. distortion.
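The envelope argument can be seen in a two-line numerical check: superimposing an extra Nth-harmonic component (here a 3rd harmonic at an arbitrary 20% amplitude) on a clean fundamental changes the peak of the time-domain waveform, i.e. the waveform is distorted even though the fundamental itself is untouched.

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)  # one period of the fundamental
fund = np.cos(2 * np.pi * 1 * t)             # desired signal at f0
third = 0.2 * np.cos(2 * np.pi * 3 * t)      # injected 3rd-harmonic component

print(round(np.max(np.abs(fund)), 3))          # 1.0 : clean waveform peak
print(round(np.max(np.abs(fund + third)), 3))  # 1.2 : peak (envelope) changed
```

This is only an illustration of the Fourier-sum argument, not a model of a real blocking measurement; the point is that harmonic-power changes show up directly as envelope distortion in the time domain.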
• Dynamic range, temperature compensation and power control
• Dynamic range, temperature compensation, and power control are mostly "invisible" metrics that show their impact only in certain extreme tests, yet in themselves they represent the most delicate parts of RF design.
• 9.1 Transmitter dynamic range
• The transmitter's dynamic range is characterized by its maximum and minimum transmit power "without degrading other transmit metrics".

"Without degrading other transmit metrics" sounds broad, but looking at the main effects it can be understood as: linearity is preserved at maximum transmit power, and the output signal-to-noise ratio is preserved at minimum transmit power.

At maximum transmit power, the transmitter's output approaches the nonlinear region of every active stage (especially the final PA). The nonlinear symptoms that commonly appear include spectral leakage and regrowth (ACLR/ACPR/SEM) and modulation errors (Phase Error/EVM). What suffers most here is the transmitter's linearity, and this part is fairly easy to understand.

At minimum transmit power, the useful signal at the transmitter's output comes close to the transmitter's own noise floor, and even risks being "submerged" in it. What must be preserved then is the output signal-to-noise ratio (SNR); in other words, the lower the transmitter's noise floor at minimum transmit power, the better.

There was an incident in our laboratory: while testing ACLR, an engineer found that the ACLR got worse as the power was reduced (the usual expectation being that ACLR improves as output power drops), and his first reaction was that the instrument was faulty. But a second instrument gave the same result.
The guidance we gave was to test EVM at low output power; the EVM turned out to be very poor. We judged that the noise floor at the entrance of the RF chain was probably very high (so the SNR there was clearly poor), and that the main component of the ACLR was no longer the amplifier's spectral regrowth but the baseband noise amplified through the amplifier chain.
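That failure mode can be reproduced with simple log-domain arithmetic (all numbers made up): model the adjacent-channel power as the sum of PA regrowth, which tracks output power at a fixed ACLR, and a fixed noise floor, which does not. Once the output power drops far enough, the noise floor dominates and the measured ACLR collapses.

```python
import math

def aclr_vs_power(p_out_dbm, pa_aclr_db, noise_floor_dbm):
    """ACLR when the adjacent channel contains both PA spectral regrowth
    (at a fixed pa_aclr_db below the carrier) and a fixed noise floor."""
    regrowth_dbm = p_out_dbm - pa_aclr_db        # scales with output power
    adj_dbm = 10 * math.log10(10**(regrowth_dbm / 10)
                              + 10**(noise_floor_dbm / 10))
    return p_out_dbm - adj_dbm

# Hypothetical: PA-limited ACLR of 35 dB, -55 dBm noise in the adjacent channel
for p in (24, 10, -10, -30):
    print(p, round(aclr_vs_power(p, 35.0, -55.0), 1))
# prints: 24 35.0 / 10 35.0 / -10 34.6 / -30 24.6
```

At high power the ACLR sits at the PA-limited 35 dB; at -30 dBm it has degraded by more than 10 dB purely because the fixed noise floor no longer scales down with the carrier, which is exactly what the engineer observed.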