QA on DSP

0. What is a monitoring receiver?

A monitoring receiver is a system used to continuously gather information about specific signal activity present in a narrow band.

Characteristics of monitoring receiver:

1. Staring Bandwidth (typically 10 MHz)

2. High Sensitivity

3. High Dynamic Range (i.e., low NF, low spurious)

4. High Frequency Resolution

5. Demodulation Schemes

1. Why only 3 channels in the MRx and not more?

The digital design and software are scalable, provided the chosen FPGA has surplus logic cells and the processor is powerful enough; the bottleneck and most time-consuming part of the MRx is the front end, i.e. the RF tuner.

The present tuner is designed to handle only 3 channels, so only three IFs can be used simultaneously. If the front end were expanded, the system would become bulky, and the thermal issues would need to be handled with a cutting-edge design. RF coupling also has to be managed so that LO leakage and spurious signals are eliminated.

Since only one operator controls the 3 channels, it becomes difficult to manage a greater number of channels at the same time with limited operators.

2. What happens if a higher power level signal is applied to the input of the MRx?

If a higher power level is applied to the RF front end, the limiter placed before the amplifier brings the power down to a processable value; only the power range that the front end can process may be applied, otherwise the device gets damaged.

The RF tuner provides a gain of 40 dB referred to the ADC sensitivity level.

Normally it handles signals as strong as -35 dBm and as weak (sensitivity level) as -105 dBm, i.e. 70 dB of dynamic range.

3. What is sensitivity?

The sensitivity of a receiver is its ability to amplify weak signals.

The factors determining the sensitivity of the receiver are its IF and RF amplifiers, along with the noise figure.

Example: for the standard test signal (AM, 30% modulated with a 400 Hz sine tone), good receivers achieve a sensitivity better than 1 µV in the HF band.

4. What is selectivity?

The selectivity of the receiver is its ability to reject unwanted signals.

It follows a curve describing the attenuation the receiver offers to signals at frequencies near the tuned frequency.

5. What is resolution?

The resolution is defined as the smallest value that can be resolved by the system.

In the MRx it is 6.25 kHz: two signals 6.25 kHz apart can be differentiated.

6. What is image frequency?

We know that fo = fs + fi,

where fo is the oscillator frequency,

fs is the signal frequency, and

fi is the intermediate frequency.

In the mixer, when fo and fs are mixed, we get

fo + fs and fo - fs; only fo - fs = fi is used and sent to the IF amplifiers, while the other component is rejected.

Again, fo - fs = fi,

so fo = fs + fi.

Now suppose a signal fsi enters the RF chain such that fsi = fo + fi,

which is equivalent to fsi = fs + 2fi.

In the mixer, when fo and fsi are mixed, we get

fo + fsi and fo - fsi.

Now fo - fsi = fo - (fo + fi) = -fi, i.e. a component at the intermediate frequency, which is amplified by the IF amplifiers and causes interference.

This signal fsi is the image signal.
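
As a quick numerical illustration (the MRx frequency plan is not given here, so the IF and signal frequency below are assumed example values), the image frequency follows directly from fo = fs + fi and fsi = fo + fi:

```python
# Illustrative values only; not the actual MRx frequency plan.
fi = 10.7e6           # intermediate frequency, Hz (assumed)
fs_sig = 100.0e6      # wanted signal frequency, Hz (assumed)

fo = fs_sig + fi      # high-side local oscillator
fsi = fo + fi         # image frequency = fs + 2*fi

print(f"LO    : {fo / 1e6:.1f} MHz")    # 110.7 MHz
print(f"Image : {fsi / 1e6:.1f} MHz")   # 121.4 MHz
# Both fs_sig and fsi mix down to |fo - f| = 10.7 MHz, so the image
# must be rejected in the RF stage before the mixer.
```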

Note:

Image rejection has to be achieved in the RF stage; once the image frequency enters the IF stage, its effects cannot be removed. Image rejection depends mainly on the RF front-end selectivity.

7. What is noise figure?


Noise figure is defined as the ratio of the input SNR to the output SNR.

Purpose:

1. To compare the performance of two kinds of equipment.

2. To compare noise and signal at the same point, to ensure the noise is not excessive.

Noise figure (NF) is a popular specification among RF system designers. It is used to characterize RF amplifiers, mixers, etc., and is widely used as a tool in radio receiver design. The incident SNR at the antenna can by no means be improved, but its degradation can be reduced (in the RF domain).

Whereas in digital processing, the SNR can be improved thanks to FFT processing gain (this is exploited in channelized receivers).

8. Are CIC filters linear? Why are they used in monitoring receivers?

CIC filters are non-linear filters because of the feedback used.

Advantages of CIC filters:

1. Whenever there are large sample-rate changes in a digital system, CICs are the best option.

2. CIC filters (multi-rate filters) are realized using adders, subtractors, and delays; no multipliers are involved (see the sketch after this list).

3. They provide both down-sampling and up-sampling.

4. A compensating FIR filter is used in conjunction with the CICs for better frequency-response characteristics.

Disadvantages:

1. Linearity is lost in the CIC filtering process, whereas an FIR is a linear filter. CICs are chosen where phase distortion is not critical.
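
As a rough sketch of advantage 2 (adders, subtractors and delays only; no multipliers), here is a minimal CIC decimator in Python/NumPy. The stage count N and decimation factor R are arbitrary example values, not the MRx design:

```python
import numpy as np

def cic_decimate(x, R, N=3):
    """Decimate x by R with an N-stage CIC (differential delay M = 1).
    Only additions, subtractions and delays are used; no multipliers."""
    acc = np.asarray(x, dtype=np.int64)
    for _ in range(N):                     # N cascaded integrators at the input rate
        acc = np.cumsum(acc)
    v = acc[::R]                           # rate change: keep every R-th sample
    for _ in range(N):                     # N cascaded combs at the output rate
        v = np.concatenate(([v[0]], np.diff(v)))
    return v                               # DC gain is R**N (compensated downstream)

# Example: decimate a 500 Hz tone sampled at 1 MHz down to 10 kHz (R = 100)
fs = 1_000_000
t = np.arange(10_000) / fs
x = np.round(1000 * np.sin(2 * np.pi * 500 * t)).astype(np.int64)
y = cic_decimate(x, R=100, N=3)
print(len(x), "->", len(y))                # 10000 -> 100 samples
```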

9. What is distortion?

A pure sinusoidal signal has a single frequency at which the voltage varies positive and negative by equal amounts. Any signal varying over less than the full 360° cycle is considered to have distortion.

When distortion occurs the output will not be an exact duplicate (except for magnitude) of the input signal. Distortion can occur because the device characteristic is not linear, in which case nonlinear or amplitude distortion occurs. This can occur with all classes of amplifier operation. Distortion can also occur because the circuit elements and devices respond to the input signal differently at various frequencies, this being frequency distortion.

Note1:

One technique for describing distorted but periodic waveforms uses Fourier analysis, a method that describes any periodic waveform in terms of its fundamental frequency component and frequency components at integer multiples; these components are called harmonic components, or harmonics. For example, if the original frequency is 1 kHz, it is called the fundamental frequency; the components at integer multiples are the harmonics. The 2 kHz component is therefore called the second harmonic, that at 3 kHz the third harmonic, and so on. The fundamental frequency is not considered a harmonic.

Note2:

The active devices in the system cause distortion. We need the signal to be amplified for signal conditioning; suppose a gain of k is required over a sine wave, i.e. k*x(t), where x(t) = A cos(2πft).

A real amplifier instead delivers k*x(t) + k2*x(t)^2 + c.

The component k2*x(t)^2 = k2*A^2 cos^2(2πft) = k2*A^2 [cos(4πft)/2 + 1/2], i.e. a second harmonic at twice the fundamental frequency plus a DC offset.

Similarly, the third and higher harmonics can be produced along with the fundamental amplification.
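
A small numerical sketch of the note above; the gain terms k and k2 and the 400 Hz tone are arbitrary assumed values. Squaring a single tone produces a DC term plus a component at twice the input frequency, visible directly in the spectrum:

```python
import numpy as np

fs = 8000                          # sample rate, Hz (assumed)
f0 = 400                           # input tone, Hz (assumed)
t = np.arange(fs) / fs             # one second of samples (1 Hz bin spacing)
A, k, k2 = 1.0, 10.0, 0.5          # amplitude and illustrative gain terms

x = A * np.cos(2 * np.pi * f0 * t)
y = k * x + k2 * x**2              # weakly nonlinear "amplifier"

spectrum = np.abs(np.fft.rfft(y)) / len(t)
for f in (0, f0, 2 * f0):
    print(f"{f:4d} Hz : {spectrum[f]:.3f}")
# 0 Hz   : k2*A^2/2 = 0.250  (DC offset)
# 400 Hz : k*A/2    = 5.000  (fundamental; rfft scaling halves the amplitude)
# 800 Hz : k2*A^2/4 = 0.125  (second harmonic)
```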

10. Harmonic distortion?

Harmonic distortion is generally specified with an input signal near full-scale (generally 0.5 to 1dB below full-scale to prevent clipping), but it can be specified at any level. For signals much lower than full-scale, other distortion products due to the DNL of the converter (not direct harmonics) may limit performance.

Here DNL stands for differential non-linearity.

Differential nonlinearity is due exclusively to the encoding process and may vary considerably depending on the ADC encoding architecture.

Note:

Overall integral nonlinearity produces distortion products whose amplitude varies as a function of the input signal amplitude. For instance, second-order intermodulation products increase 2 dB for every 1 dB increase in signal level, and third-order products increase 3 dB for every 1 dB increase in signal level.

11. Total harmonic distortion?

Total harmonic distortion (THD) is the ratio of the rms value of the fundamental signal to the mean value of the root-sum-square of its harmonics (generally, only the first five are significant)
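
A short sketch of this definition expressed in dB; the fundamental and harmonic rms values below are made-up example numbers, not measured data:

```python
import numpy as np

def thd_db(fund_rms, harmonic_rms):
    """THD in dB: rms of the fundamental over the root-sum-square of its harmonics."""
    rss = np.sqrt(np.sum(np.square(harmonic_rms)))
    return 20 * np.log10(fund_rms / rss)

# Fundamental of 1.0 V rms and the first five harmonics (illustrative values)
print(f"THD = {thd_db(1.0, [3e-3, 1e-3, 5e-4, 2e-4, 1e-4]):.1f} dB")   # about 49.9 dB
```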

12. What is total harmonic distortion + noise?

Total harmonic distortion plus noise (THD +N) is the ratio of the rms value of the fundamental signal to the mean value of the root-sum-square of its harmonics plus all noise components (excluding DC). The bandwidth over which the noise is measured must be specified.

Note:


In the case of an FFT, the bandwidth is DC to fs/2. (If the bandwidth of the measurement is DC to fs/2, THD + N is equal to SINAD.)

13. What is Signal-to-Noise-and-Distortion Ratio (SINAD)?

Signal-to-Noise-and-Distortion (SINAD, or S/(N + D)) is the ratio of the rms signal amplitude to the mean value of the root-sum-square (rss) of all other spectral components, including harmonics but excluding DC.

SINAD is a good indication of the overall dynamic performance of an ADC as a function of input frequency because it includes all components which make up noise (including thermal noise) and distortion. It is often plotted for various input amplitudes. SINAD is equal to THD + N if the bandwidth for the noise measurement is the same.

14. Effective number of bits (ENOB)?

SINAD is often converted to effective number of bits (ENOB) using the relationship for the theoretical SNR of an ideal N-bit ADC: SNR = 6.02N + 1.76 dB. The equation is solved for N, and the value of SINAD is substituted for SNR, giving ENOB = (SINAD - 1.76 dB)/6.02.

Signal-to-noise ratio (SNR, or SNR-without-harmonics) is calculated the same as SINAD, except that the signal harmonics are excluded from the calculation, leaving only the noise terms.
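
The ENOB relationship above reduces to a one-line calculation; the 70 dB SINAD below is just an example figure:

```python
def enob(sinad_db):
    """Effective number of bits from a measured SINAD, via SNR = 6.02*N + 1.76 dB."""
    return (sinad_db - 1.76) / 6.02

print(f"SINAD = 70.0 dB -> ENOB = {enob(70.0):.2f} bits")   # about 11.34 bits
```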

15. What is the analog bandwidth of an ADC?

The analog bandwidth of an ADC is that frequency at which the spectral output of the fundamental swept frequency (as determined by the FFT analysis) is reduced by 3 dB.

16. What is spurious-free dynamic range (SFDR)?

Probably the most significant specification for an ADC used in a communications application is its spurious-free dynamic range (SFDR).

SFDR of an ADC is defined as the ratio of the rms signal amplitude to the rms value of the peak spurious spectral content, measured over the bandwidth of interest. Unless otherwise stated, the bandwidth is assumed to be the Nyquist bandwidth, DC to fs/2.
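
Given an FFT magnitude spectrum, SFDR can be estimated as below. The tone placement and spur amplitude are synthetic values chosen purely to illustrate the definition (coherent sampling keeps each tone on an exact bin, so no window is needed):

```python
import numpy as np

fs, n = 102_400, 8192                      # bin spacing = 12.5 Hz
t = np.arange(n) / fs
# Synthetic "ADC output": a near-full-scale tone plus one -80 dBc spur (assumed)
x = 1.0 * np.sin(2 * np.pi * 10_000 * t) + 1e-4 * np.sin(2 * np.pi * 23_000 * t)

spec = np.abs(np.fft.rfft(x))
fund_bin = np.argmax(spec)
spurs = spec.copy()
spurs[fund_bin - 1:fund_bin + 2] = 0       # exclude the fundamental itself
spurs[0] = 0                               # ignore DC
sfdr_db = 20 * np.log10(spec[fund_bin] / spurs.max())
print(f"SFDR ~ {sfdr_db:.0f} dB")           # about 80 dB for these amplitudes
```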


17. What is quantization error?

Since the analog input to an ADC can take any value, but the digital output is quantized, there may be a difference of up to ½ LSB between the actual analog input and the exact value of the digital output. This is known as the quantization error or quantization uncertainty.

This quantization error gives rise to quantization noise.

Note:

The maximum error an ideal converter makes when digitizing a signal is ½ LSB.
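
A brief numerical confirmation of the half-LSB bound for an ideal rounding quantizer (the 8-bit width and the +/-1 V full-scale range are arbitrary assumptions):

```python
import numpy as np

nbits = 8
lsb = 2.0 / (2 ** nbits)                      # full-scale range of 2.0 (-1 V .. +1 V)
x = np.random.uniform(-1, 1 - lsb, 100_000)   # random in-range analog inputs
codes = np.round(x / lsb)                     # ideal rounding (mid-tread) quantizer
err = x - codes * lsb
print(f"max |error| = {np.abs(err).max():.6f},  LSB/2 = {lsb / 2:.6f}")
```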

18. What is two-tone intermodulation distortion?

Two-tone IMD is measured by applying two spectrally pure sine waves to the ADC at frequencies f1 and f2, usually relatively close together. The amplitude of each tone is set slightly more than 6 dB below full scale so that the ADC does not clip when the two tones add in phase.


Note: IP2 & IP3 (second- and third-order intercept) points:

Second-order IMD amplitudes increase 2 dB for every 1 dB of signal increase, while third-order IMD amplitudes increase 3 dB for every 1 dB of signal increase.

Once the input reaches a certain level, however, the output signal begins to soft-limit, or compress. A parameter of interest here is the 1 dB compression point.

It should be noted that IP2, IP3, and the 1 dB compression point are all a function of frequency and, as one would expect, the distortion is worse at higher frequencies.


For a given frequency, knowing the third-order intercept point allows calculation of the approximate level of the third-order IMD products as a function of output signal level.

ADC & IMD:

The concept of second- and third-order intercept points is not valid for an ADC, because the distortion products do not vary in a predictable manner (as a function of signal amplitude). The ADC does not gradually begin to compress signals approaching full scale (there is no 1 dB compression point); it acts as a hard limiter as soon as the signal exceeds the ADC input range, thereby suddenly producing extreme amounts of distortion because of clipping. On the other hand, for signals much below full scale, the distortion floor remains relatively constant and is independent of signal level.

19. Need for modulation?

1. Signal propagation range is directly proportional to frequency, so low-frequency signals (20 Hz to 20 kHz) can travel only a few meters.

2. For efficient radiation and reception, the transmitting and receiving antennas should have lengths comparable to a quarter wavelength of the signal frequency (see the sketch after this list).

Example: a 1 MHz signal has a wavelength of 300 m, so the antenna required is only 75 m;

a 15 kHz signal has a wavelength of 20,000 m, so the antenna required would be 5000 m, which is impossible to erect.

3. All AF signals lie between 20 Hz and 20 kHz; without modulation they would get mixed up and it would become impossible to separate them.
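
The quarter-wavelength figures quoted in point 2 follow directly from wavelength = c / f, as this small calculation shows:

```python
c = 3e8                                   # speed of light, m/s

for f in (1e6, 15e3):                     # 1 MHz and 15 kHz, as in the examples above
    wavelength = c / f
    print(f"f = {f / 1e3:7.0f} kHz -> wavelength = {wavelength:7.0f} m, "
          f"quarter-wave antenna = {wavelength / 4:6.0f} m")
# 1 MHz  -> 300 m wavelength,    75 m antenna
# 15 kHz -> 20000 m wavelength, 5000 m antenna
```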

20. Why are both 1 MHz streaming and 10 MHz streaming present in the MRx?

1. 1 MHz streaming becomes vital when 2 or 3 systems perform data logging and analysis; 1 MHz reduces the load compared to 10 MHz yet still serves the objective of spectrum monitoring.

2. In today's scenario, wide-band signal activity is present with bandwidths from more than 1 MHz up to 10 MHz, which needs to be recorded at a 10 MHz sampling rate to analyse and extract the useful information.

21. What is the Nyquist criterion?

The Nyquist criterion requires that the sampling frequency be at least twice the highest frequency contained in the signal, or information about the signal will be lost. If the sampling frequency is less than twice the maximum analog signal frequency, a phenomenon known as aliasing will occur.


22. What is the Nyquist bandwidth?

The Nyquist bandwidth is defined to be the frequency spectrum from DC to fs/2. The frequency spectrum is divided into an infinite number of Nyquist zones, each having a width equal to 0.5 fs.

Note:

The FFT processor only provides an output from DC to fs/2, i.e., the signals or aliases that appear in the first Nyquist zone.


Example:

fa = 10 MHz and fs = 102.4 MHz.

1st Nyquist zone: the signal appears at 10 MHz.

2nd Nyquist zone: its image appears at 102.4 - 10 = 92.4 MHz.

3rd Nyquist zone: its image appears at 102.4 + 10 = 112.4 MHz.

4th Nyquist zone: its image appears at 204.8 - 10 = 194.8 MHz.
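
The zone arithmetic above generalises to a small helper that reproduces the example numbers (fa = 10 MHz, fs = 102.4 MHz):

```python
def image_in_zone(fa, fs, zone):
    """Image of a first-Nyquist-zone tone fa in the given Nyquist zone (zone >= 1):
    odd zones contain a shifted copy, even zones a mirrored copy."""
    if zone % 2:
        return (zone - 1) // 2 * fs + fa
    return zone // 2 * fs - fa

fa, fs = 10.0, 102.4                       # MHz, as in the example above
for z in range(1, 5):
    print(f"zone {z}: {image_in_zone(fa, fs, z):6.1f} MHz")
# zone 1: 10.0, zone 2: 92.4, zone 3: 112.4, zone 4: 194.8 MHz
```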

23. What is baseband sampling?

Baseband sampling implies that the signal to be sampled lies in the first Nyquist zone. It is important to note that, with no input filtering at the input of the ideal sampler, any frequency component (either signal or noise) that falls outside the Nyquist bandwidth in any Nyquist zone will be aliased back into the first Nyquist zone. For this reason, an anti-aliasing filter is used in almost all sampling ADC applications to remove these unwanted signals.

Note:

Filters become more complex as the transition band becomes sharper; for instance, a Butterworth filter gives 6 dB of attenuation per octave for each filter pole.

The sharpness of the anti-aliasing transition band can be traded off against the ADC sampling frequency: choosing a higher sampling rate (oversampling) relaxes the requirement on transition-band sharpness (and hence the filter complexity) at the expense of using a faster ADC and processing data at a faster rate.
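
A quick check of the 6 dB-per-pole-per-octave figure using SciPy's analog Butterworth design; the 10 MHz cutoff is an assumed example, not a value from the MRx:

```python
import numpy as np
from scipy.signal import butter, freqs

fc = 10e6                                            # assumed cutoff frequency, Hz
for order in (2, 4, 6):
    b, a = butter(order, 2 * np.pi * fc, btype="low", analog=True)
    _, h = freqs(b, a, worN=[2 * np.pi * 2 * fc])    # response one octave past cutoff
    print(f"order {order}: {20 * np.log10(abs(h[0])):6.1f} dB at 2*fc")
# Roughly 6 dB per pole per octave: about -12, -24 and -36 dB.
```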

24. In which Nyquist zone should the processing be performed?

There is no constraint on the absolute location of the band of sampled signals within the frequency spectrum relative to the sampling frequency. The only constraint is that the band of sampled signals be restricted to a single Nyquist zone, i.e., the signals must not overlap any multiple of fs/2 (this, in fact, is the primary function of the anti-aliasing filter).

Note:

Sampling signals above the first Nyquist zone has become popular in communications because the process is equivalent to analog demodulation. It is becoming common practice to sample IF signals directly and then use digital techniques to process the signal, thereby eliminating the need for an IF demodulator and filters.

Trade-off:

As the IF frequencies become higher, the dynamic performance requirements on the ADC become more critical. The ADC input bandwidth and distortion performance must be adequate at the IF frequency, rather than only at baseband. This presents a problem for most ADCs designed to process signals in the first Nyquist zone; therefore, an ADC suitable for under-sampling applications must maintain dynamic performance into the higher-order Nyquist zones.

25. What is FFT processing gain?

The theoretical FFT noise floor lies 10log10(M/2) dB below the ADC's broadband quantization noise floor, where M is the number of FFT points; this quantity is the FFT processing gain.

In the case of an ideal 12-bit ADC with an SNR of 74 dB, a 4096-point FFT gives a processing gain of 10log10(4096/2) = 33 dB. In fact, the FFT noise floor can be reduced even further by going to larger and larger FFTs, just as an analog spectrum analyzer's noise floor can be reduced by narrowing the bandwidth.
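
A minimal sketch of the numbers quoted above (ideal 12-bit ADC, 4096-point FFT):

```python
import math

def fft_processing_gain_db(m):
    """FFT processing gain for an M-point FFT of a real signal."""
    return 10 * math.log10(m / 2)

nbits, m = 12, 4096
snr_db = 6.02 * nbits + 1.76                 # ideal ADC SNR, about 74 dB
gain_db = fft_processing_gain_db(m)          # about 33 dB
print(f"SNR = {snr_db:.1f} dB, processing gain = {gain_db:.1f} dB, "
      f"FFT noise floor ~ {snr_db + gain_db:.0f} dB below full scale")
```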

26. What is a notch filter? Why is it required?

In ECG recording, there is often unwanted 60 Hz interference in the recorded data. Analysis shows that the interference comes from the power line and includes magnetic induction, displacement currents in the leads or in the body of the patient, effects from equipment interconnections, and other imperfections. Although using proper grounding or twisted pairs minimizes such 60 Hz effects, another effective choice is a digital notch filter, which eliminates the 60 Hz interference while keeping all the other useful information.

Another name for a notch filter is a band-reject (band-stop) filter.

Notch filter equation:
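
One common second-order IIR notch form (a standard textbook form, offered here as an assumption rather than the exact equation originally intended) is H(z) = (1 - 2*cos(w0)*z^-1 + z^-2) / (1 - 2*r*cos(w0)*z^-1 + r^2*z^-2), where w0 = 2*pi*f0/fs and r (slightly below 1) sets the notch width. A minimal sketch using SciPy's built-in notch design, with an assumed 500 Hz ECG sample rate:

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

fs = 500.0                          # ECG sample rate, Hz (assumed)
f0, Q = 60.0, 30.0                  # notch frequency and quality factor (assumed)
b, a = iirnotch(f0, Q, fs=fs)       # second-order notch centred at 60 Hz

t = np.arange(0, 2, 1 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t)              # stand-in for a slow ECG component
contaminated = ecg_like + 0.5 * np.sin(2 * np.pi * 60 * t)
cleaned = lfilter(b, a, contaminated)

# Residual error before and after notching: the 60 Hz component is strongly
# attenuated while the low-frequency content passes through.
print(np.round(np.std(contaminated - ecg_like), 3),
      np.round(np.std(cleaned - ecg_like), 3))
```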

27. What is an FIR filter?

FIR stands for Finite Impulse Response filter, whose transfer function is given as

H(z) = b0 + b1*z^-1 + b2*z^-2 + ... + b(N-1)*z^-(N-1), i.e. the sum of bk*z^-k for k = 0 to N-1.

Expansion gives the difference equation

y(n) = b0*x(n) + b1*x(n-1) + ... + b(N-1)*x(n-(N-1)).

Structure realization: a tapped delay line in which each delayed input sample is multiplied by its coefficient and the products are summed (direct form).
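
A direct-form (tapped-delay-line) realization of the difference equation above, checked against NumPy's convolution. The 5-tap moving-average coefficients are an arbitrary example, not a designed filter:

```python
import numpy as np

def fir_filter(x, b):
    """Direct-form FIR: y[n] = b0*x[n] + b1*x[n-1] + ... + b(N-1)*x[n-(N-1)]."""
    delay_line = np.zeros(len(b))
    y = np.empty(len(x))
    for n, sample in enumerate(x):
        delay_line = np.roll(delay_line, 1)   # shift the tapped delay line
        delay_line[0] = sample                # newest sample enters at the front
        y[n] = b @ delay_line                 # multiply-and-accumulate over the taps
    return y

b = np.ones(5) / 5                            # example coefficients: 5-tap moving average
x = np.random.randn(1000)
assert np.allclose(fir_filter(x, b), np.convolve(x, b)[:len(x)])
```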

28. FIR notes:

1. The oscillations (ripples) exhibited in the pass band (main lobe) and stop band (side lobes) of the magnitude frequency response constitute the Gibbs effect. Gibbs oscillatory behaviour originates from the abrupt truncation of the infinite impulse response. To remedy this problem, window functions are used.

2. Using a larger number of filter coefficients produces a sharper roll-off in the transition band, but may cause increased time delay and increased computational complexity for implementing the designed FIR filter.

3. The phase response is linear in the pass-band, which means that all frequency components of the filter input within the pass-band are subjected to the same amount of time delay at the filter output. This is a requirement for applications such as audio and speech filtering, where phase distortion needs to be avoided.

Note that we impose the following linear-phase requirement: the FIR coefficients are symmetrical about the middle coefficient, and the FIR filter order is an odd number.

29. What is Gibbs oscillation, and what is its remedy?

Gibbs oscillations originate from the abrupt truncation of the infinite-length coefficient sequence.

The window method was developed to remedy the undesirable Gibbs oscillations in the pass-band and stop-band of the designed FIR filter.


30. What is a window function?

A window function is a finite-length weighting sequence (rectangular, Hamming, Hanning, Blackman, etc.) that tapers smoothly toward zero at its edges. Multiplying the ideal (infinite) impulse response by the window truncates it gradually rather than abruptly, which suppresses the Gibbs ripples at the cost of a wider transition band.
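
A sketch of the window method in practice (the tap count, sample rate and cutoff are arbitrary assumed values): the same truncated ideal low-pass response is used once with a rectangular window (abrupt truncation) and once with a Hamming window, and the windowed version shows far lower stop-band ripple at the cost of a wider transition band.

```python
import numpy as np

numtaps, fs, fc = 61, 8000, 1000                 # odd tap count; assumed rate and cutoff
n = np.arange(numtaps) - (numtaps - 1) / 2
ideal = 2 * fc / fs * np.sinc(2 * fc / fs * n)   # truncated ideal low-pass response

h_rect = ideal                                   # rectangular window = abrupt truncation
h_hamm = ideal * np.hamming(numtaps)             # Hamming window tapers the truncation

def stopband_peak_db(h, f_stop=1500):            # assumed stop-band edge of 1.5 kHz
    H = np.abs(np.fft.rfft(h, 4096))
    return 20 * np.log10(H[int(f_stop / fs * 4096):].max())

print(f"rectangular window: {stopband_peak_db(h_rect):6.1f} dB")   # higher (Gibbs ripple)
print(f"hamming window    : {stopband_peak_db(h_hamm):6.1f} dB")   # much lower
```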

31. Truncation of bits at the sensitivity level and at the full-scale level?

Consider a 4-bit signal as an example.

0 0 0 0 - 0

0 0 0 1 - 1

0 1 1 0 - 6

1 1 0 1 - D

Now, by removing the LSB of the four samples, we obtain 0, 0, 3, 6. So:

1. The actual signal present has been limited to a smaller value.

2. There is a loss of sensitivity, as the two weakest samples are no longer differentiated; both are represented as 0.

Multiplying back by 2 produces 0, 0, 6, C, which is not equal to the original information, so the lost information cannot be recovered.

Now, by removing the MSB of the four samples, we obtain 0, 1, 6, 5; here the whole shape of the signal is lost, as the highest-amplitude sample is detected as a lower-amplitude one.


If the bit allocation for the processing is not made properly, saturation or wrapping of the signal occurs, which corrupts the signal information; once the information is lost, it cannot be recovered by any means.
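
A small sketch of both failure modes discussed above, reusing the 4-bit samples from question 31 (0, 1, 6, D). Dropping the LSB loses the weakest samples, dropping the MSB destroys the waveform shape, and results that outgrow the word length must either wrap or be saturated:

```python
import numpy as np

samples = np.array([0x0, 0x1, 0x6, 0xD])     # the 4-bit values from question 31

lsb_dropped = samples >> 1                   # remove LSB : 0, 0, 3, 6
rescaled = lsb_dropped << 1                  # scale back : 0, 0, 6, C  (information lost)
msb_dropped = samples & 0b0111               # remove MSB : 0, 1, 6, 5  (shape lost)
print(lsb_dropped, rescaled, msb_dropped)

# Growth beyond the word length: wrapping vs. saturating a 4-bit signed result.
value = 9                                    # needs 5 bits as a signed number
wrapped = ((value + 8) % 16) - 8             # two's-complement wrap-around -> -7
saturated = max(-8, min(7, value))           # clip (saturate) to the 4-bit range -> 7
print(wrapped, saturated)
```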