QA Digital


1. Why is a processor not preferred over an FPGA for computation?

2. Why are these devices called glue logic?

3. What is fuse and anti-fuse technology?

4. What is metastability?

5. What are codes? Describe the classification of codes.

Coding, or encoding, is the process of assigning a group of binary digits, commonly referred to as ‘bits’, to represent, identify, or relate to multi-valued items of information.

Classification of codes:

1. Weighted Binary Codes

2. Non-Weighted Binary Codes

3. Error Detection Codes

4. Error Correction Codes

5. Alphanumeric Codes

1. Weighted Binary Codes:

1. Straight Binary codes

2. BCD codes

1. 8421 2. 4221 3. 2421 4. 5421 etc.

2. Non-Weighted Binary Codes

1. Excess-3 codes

2. Gray codes

3. Error Detection Codes

1. Redundancy 2. Checksum

Others: 1. Parity codes 2. Repetition codes 3. CRC codes

4. Error Correction Codes

1. Hamming codes

2. Cyclic codes.


5. Alphanumeric Codes

1. ASCII codes

2. EBCDIC codes

3. Unicode
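As a quick illustration of the weighted and non-weighted codes listed above, here is a small Python sketch (illustrative only; the helper name to_gray is our own) that tabulates straight binary, 8421 BCD, Excess-3 and Gray code for the decimal digits 0-9:

```python
# Illustrative sketch: a few of the codes classified above, for digits 0-9.

def to_gray(n):
    """Binary-reflected Gray code: adjacent values differ in exactly one bit."""
    return n ^ (n >> 1)

for d in range(10):
    bcd_8421 = format(d, "04b")            # weighted 8-4-2-1 code
    excess_3 = format(d + 3, "04b")        # non-weighted: binary of (digit + 3)
    gray     = format(to_gray(d), "04b")   # non-weighted, unit-distance code
    print(f"{d}: binary={d:04b}  8421={bcd_8421}  excess-3={excess_3}  gray={gray}")
```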

6. Why do we need Boolean theorems?

There is no guarantee that an SOP expression will be a minimal expression. In other words, SOP expressions are likely to have redundancies that lead to systems which require more hardware than is necessary. This is where theorems and other reduction techniques come into play.

7. Why are NAND and NOR called universal gates?

A major reason for the widespread use of these gates is that they are both UNIVERSAL gates, universal in the sense that both can be used to implement AND, OR, and inverter (NOT) operations. Thus, we see that a complex digital system can be completely synthesized using only NAND gates or only NOR gates.
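A minimal sketch of this universality, expressing NOT, AND and OR purely in terms of a NAND primitive (Python is used here only as a truth-table checker; the function names are our own):

```python
# NAND as a universal gate: NOT, AND and OR built from NAND alone.

def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)                       # NOT x = x NAND x

def and_(a, b):
    return nand(nand(a, b), nand(a, b))     # AND = NOT(NAND)

def or_(a, b):
    return nand(nand(a, a), nand(b, b))     # OR = NAND of the inverted inputs

for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
    assert not_(a) == 1 - a
print("AND, OR and NOT synthesized from NAND only")
```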

8. What is a K-map?

The map method provides a simple, straightforward procedure for minimizing Boolean functions and may be regarded as a pictorial form of the truth table. The K-map orders and displays the minterms in a geometrical pattern such that the logic adjacency theorem can be applied easily.

—The K-map is a diagram made up of squares. Each square represents one minterm.

—Since any function can be expressed as a sum of minterms, it follows that a Boolean function can be recognized from a map by the area enclosed by those squares whose minterms are included in the function.

—By grouping various patterns, we can derive alternative algebraic expressions for the same function, from which we can select the simplest one.
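The grouping of adjacent squares rests on the logic adjacency theorem, XY + XY' = X. A brief brute-force check of the theorem and of one minterm merge (an illustrative sketch; the variable names are our own):

```python
# Brute-force check of the logic adjacency theorem behind K-map grouping.

from itertools import product

# XY + XY' = X for every X, Y.
assert all((x & y) | (x & (1 - y)) == x for x, y in product((0, 1), repeat=2))

# Example: adjacent minterms m6 (A B C') and m7 (A B C) of a 3-variable map
# merge into the single product term A B.
for a, b, c in product((0, 1), repeat=3):
    m6 = a & b & (1 - c)
    m7 = a & b & c
    assert (m6 | m7) == (a & b)

print("adjacency theorem and minterm merge verified")
```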

9. Need for the Quine-McCluskey method?

The K-map method is suitable for simplification of Boolean functions of up to 5 or 6 variables. As the number of variables increases beyond this, the visualization of adjacent squares becomes difficult because the geometry is more involved. The Quine-McCluskey method is a systematic, step-by-step procedure for minimizing a Boolean expression in standard form. This is the method used by computers for minimization of Boolean functions.

10. How is multiplication performed in computers?

There are two methods:

1. Repeated LEFT SHIFT and ADD algorithm.


2. Repeated ADD and RIGHT SHIFT algorithm.

Microprocessors and microcomputers, however, use what is known as the ‘repeated add and right-shift’ algorithm to do binary multiplication, as it is comparatively much more convenient to implement than the ‘repeated left-shift and add’ algorithm.
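A rough Python sketch of the ‘repeated add and right-shift’ idea (illustrative only; the 4-bit word width and the function name are assumptions of this example, not part of the original text):

```python
# 'Repeated add and right-shift' multiplication of two unsigned n-bit numbers.

def multiply_add_shift_right(multiplicand, multiplier, n=4):
    acc = 0                                       # upper half of the product register
    for _ in range(n):
        if multiplier & 1:                        # examine the multiplier LSB
            acc += multiplicand                   # add multiplicand into the upper half
        # shift the combined (acc : multiplier) register pair right by one bit
        multiplier = (multiplier >> 1) | ((acc & 1) << (n - 1))
        acc >>= 1
    return (acc << n) | multiplier                # reassemble the 2n-bit product

print(multiply_add_shift_right(13, 11))           # 143, the same as 13 * 11
```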

11. Why is the FFT preferred over the DFT?

For computation of an N-point DFT, we need to perform N² multiplications and N(N-1) additions to obtain the Fourier transform. As the value of N grows, the computational complexity increases rapidly and makes the method inefficient and practically unrealizable.

In 1965, Cooley and Tukey developed an efficient method to implement the DFT. This method is known as the FFT. By using the FFT, the computational complexity can be reduced drastically.

        Multiplications        Additions
DFT     N²                     N(N-1)
FFT     (N/2)·log₂(N)          N·log₂(N)

There are two methods for the implementation of FFT

1. Radix 2 DIT (Decimation in Time)

2. Radix 2 DIF (Decimation in Frequency)

Note:

Twiddle factor = e^(-j2π/N)

Memory requirement per stage = 2N + N/2

where:

2N is to store the input and output, and

N/2 is to store the twiddle factors.

Actual computation of an N-point FFT is done stage-wise, i.e. the computation of one stage is done at a time. This reduces complexity further.
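A small Python sketch that simply evaluates the operation counts from the table above for a few values of N (no FFT is actually computed; the numbers are the formulas quoted above):

```python
# Operation counts for an N-point DFT versus a radix-2 FFT (N a power of two).

import math

print(f"{'N':>6} {'DFT mults':>10} {'FFT mults':>10} {'DFT adds':>10} {'FFT adds':>10}")
for n in (8, 64, 1024):
    dft_mults = n * n                          # N^2
    dft_adds  = n * (n - 1)                    # N(N-1)
    stages    = int(math.log2(n))
    fft_mults = (n // 2) * stages              # (N/2) log2 N
    fft_adds  = n * stages                     # N log2 N
    print(f"{n:>6} {dft_mults:>10} {fft_mults:>10} {dft_adds:>10} {fft_adds:>10}")
```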

12. What is clock jitter? What are the reasons for clock jitter?

The term clock jitter refers to any deviation of a clock's output transitions from their ideal positions.

Reasons for jitter:

Every clock oscillator contains a very fine high-frequency amplifier. The amplifier doesn't distinguish clock signals from noise; it simply amplifies whatever voltage appears at its input terminals.

Jitter results from four superimposed noise sources, which are as follows:


1. Crystal noise: resistive elements create thermal noise due to the random movement of electrons.

2. Mechanical vibration or perturbation of the crystal creates noise.

3. Amplifier self-noise (its contribution is larger than that of the above-mentioned causes).

4. Coupling of the power terminal into the amplifier's sensitive inputs creates power supply noise. This is the most troublesome noise.

Clock jitter induced by power supply noise that fluctuates intermittently in a data-dependent fashion is known as intermittent jitter, which is the worst of all.

There are two categories of jitter: 1. random jitter and 2. intermittent jitter.

Random jitter happens all the time; we can measure and characterise it and protect our system from random noise.

The greater the ratio of the clock to the reference clock, the worse the effects of jitter.

Methods of jitter measurement:

1. Spectral analysis 2. Direct phase measurement 3. Differential phase measurement

1. The spectrum is easy to measure, but the maximum phase deviation from the ideal is difficult to determine.

2. Direct phase measurement compares an ideal clock with the jittery clock; in this case, obtaining an ideal clock is difficult.

3. In differential phase measurement, the jittery clock is compared against a delayed version of the same clock.

Note:

The FPGA's clock manager can be used to detect and correct for this jitter and to provide “clean” daughter clock signals for use inside the device.

13. What is a hazard?

A hazard is an unwanted switching transient appearing at the output while the input to a combinational circuit (network) changes.

Reason:

The reason for a hazard is that different paths from input to output have different propagation delays, since there is a finite propagation delay through every gate.

Hazards/Glitches (spikes) are dangerous if

•Output sampled before signal stabilization

•Output feeds asynchronous input (immediate response)

The usual solutions are:


•Use synchronous circuits with clock periods of sufficient length

•Minimize the use of circuits with asynchronous inputs

•Design hazard-free circuits.
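A toy simulation of the static-1 hazard described above, for F = A·B + A'·C with B = C = 1 (a sketch only: the unit gate delays and the chosen input sequence are assumptions of this example):

```python
# Toy gate-delay model: every gate output lags its inputs by one time step,
# so the inverted A arrives late and the output glitches when A falls 1 -> 0.

def simulate(a_values):
    not_a = and1 = and2 = f = 0       # signal values from the previous time step
    trace = []
    for a in a_values:
        new_not_a = 1 - a             # inverter
        new_and1  = a & 1             # A AND B, with B = 1
        new_and2  = not_a & 1         # A' AND C, with C = 1 (uses the delayed A')
        new_f     = and1 | and2       # OR of the delayed AND outputs
        not_a, and1, and2, f = new_not_a, new_and1, new_and2, new_f
        trace.append(f)
    return trace

# A held at 1, then dropped to 0: after start-up the output should stay at 1,
# but it dips to 0 for one step -- the glitch caused by the unequal path delays.
print(simulate([1, 1, 1, 0, 0, 0]))   # [0, 1, 1, 1, 0, 1]
```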

14. What is combinational circuit?

Combinational logic circuits are circuits in which the output at any time depends upon the combination of input signals present at that instant only, and does not depend on any past conditions.

Ex: adders, parity checkers, code converters, decoders, mux, demux, etc.

15. What is a sequential circuit?

Sequential circuits are those in which the output at any given time depends not only on the input present at that time but also on previous outputs.

Ex: shift registers, counters, state machines, etc.

16. What is a synchronous sequential circuit?

A synchronous sequential circuit may be defined as a sequential circuit whose state can be affected only at discrete instants of time. The synchronization is achieved by using a timing device, termed the system clock generator, which generates a periodic train of clock pulses.

The clock pulses are fed to the entire system in such a way that the internal states (i.e. memory contents) are affected only when the clock pulses hit the circuit. A synchronous sequential circuit that uses a clock at the input of its memory elements is referred to as a clocked sequential circuit.

17. Setup time

The amount of time for which a data input must be stable prior to the triggering edge of the clock in order to reliably activate the device.

18. Hold time

The amount of time for which a data input must be stable after the triggering edge of the clock in order to reliably activate the device.
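A tiny illustration of how these two windows are checked (the 0.5 ns / 0.3 ns figures and the function name are invented for the example, not device data):

```python
# Does a data transition fall inside the setup/hold window around a clock edge?

T_SETUP = 0.5   # ns of stability required before the clock edge (assumed figure)
T_HOLD  = 0.3   # ns of stability required after the clock edge (assumed figure)

def violates_window(data_change_ns, clock_edge_ns):
    return (clock_edge_ns - T_SETUP) < data_change_ns < (clock_edge_ns + T_HOLD)

print(violates_window(9.7, 10.0))    # True:  changes 0.3 ns before the edge -> setup violation
print(violates_window(10.2, 10.0))   # True:  changes 0.2 ns after the edge  -> hold violation
print(violates_window(9.0, 10.0))    # False: stable well before the window
```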


FPGA BASE:

1. Fastest FPGA available in the market?

Virtex-7. Built on a state-of-the-art, high-performance, low-power (HPL), 28 nm, high-k metal gate (HKMG) process technology, 7 series FPGAs enable an unparalleled increase in system performance with 2.9 Tb/s of I/O bandwidth, 2 million logic cell capacity, and 5.3 TMAC/s of DSP, while consuming 50% less power than previous-generation devices, to offer a fully programmable alternative to ASSPs and ASICs.

Synthesizable speeds of up to 800 MHz for the highest speed grade.

The XC7V2000T is the largest FPGA, with 2 million logic cells.

2. Zynq-7000 SoC details?

The Zynq-7000 family is based on the Xilinx All Programmable SoC architecture. These products integrate a feature-rich dual-core ARM Cortex-A9 based processing system (PS) and 28 nm Xilinx programmable logic (PL) in a single device. The ARM Cortex-A9 CPUs are the heart of the PS, which also includes on-chip memory, external memory interfaces, and a rich set of peripheral connectivity interfaces.

Features:

1. ARM Cortex-A9 based Application Processor Unit (APU, 1 GHz)

2. Caches

3. On-chip memory

4. External memory interfaces, i.e. DDR3 (with ECC), SRAM, NOR flash, etc.

5. 8-channel DMA support

6. I/O peripherals, i.e. USB, CAN, SPI, UART, I2C, etc.

7. High-speed interconnect (ARM AMBA AXI based)

8. Programmable Logic (PL) -- the FPGA fabric:

1. Serial transceivers

2. 12-bit ADCs

3. PCI Express blocks

4. CLBs, BRAMs, DSP48 slices, I/O lines, etc.


3. DDR1, DDR2 & DDR3 comparisons?

DDR Basics:

Double Data Rate SDRAM, or simply DDR1, was designed to replace SDRAM. DDR1 was originally referred to as DDR-SDRAM or simply DDR. The principle applied in DDR is exactly as the name implies: “double data rate”. DDR actually doubles the rate at which data is transferred by using both the rising and falling edges of a typical digital pulse.

DDR2

DDR2 is the next generation of memory developed after DDR. DDR2 increased the data transfer rate (referred to as bandwidth) by increasing the operational frequency to match the high FSB frequencies and by doubling the prefetch buffer data rate. DDR2 uses a different motherboard socket than DDR and is not compatible with motherboards designed for DDR.

DDR3

DDR3 increased the prefetch buffer size to 8 bits and increased the operating frequency once again, resulting in higher data transfer rates than its predecessor, DDR2.

In addition to the increased data transfer rate, the memory chip voltage level was lowered to 1.5 V to counter the heating effects of the higher frequency. By now you can see the trend in memory: increase the prefetch buffer size and chip operating frequency, and lower the operational voltage level to counter heat.

The physical DDR3 module is also designed with 240 pins, but the notch key is in a different position to prevent insertion into a motherboard RAM socket designed for DDR2. DDR3 is both electrically and physically incompatible with previous versions of RAM.

In addition to the higher frequency and lower applied voltage level, DDR3 has a memory reset option which DDR2 and DDR1 do not.


4. Comparison between a DSP and an FPGA?

DSP:

A DSP is a specialised microprocessor, typically programmed in C, perhaps with assembly code for performance. It is well suited to extremely complex maths-intensive tasks with conditional processing. It is limited in performance by the clock rate and the number of useful operations it can do per clock.

FPGA:

In contrast, an FPGA is an uncommitted "sea of gates". The device is programmed by connecting the gates together to form multipliers, registers, adders and so forth. Using the Xilinx Core Generator, this can be done at a block-diagram level. Many blocks can be very high level, ranging from a single gate to an FIR filter or an FFT. Performance is limited by the number of gates available and the clock rate. Recent FPGAs have included multipliers especially for performing DSP tasks more efficiently.

Points for consideration:

1. What is the sampling rate of this part of the system? If it is more than a few MHz, FPGA is the natural choice.

2. Is your system already coded in C? If so, a DSP may implement it directly. It may not be the highest performance solution, but it will be quick to develop.

3. What is the data rate of the system? If it is more than perhaps 20-30Mbyte/second, then FPGA will handle it better.

4. How many conditional operations are there? If there are none, FPGA is perfect. If there are many, a software implementation may be better.

5. Does your system use floating point? If so, this is a factor in favour of the programmable DSP. None of the Xilinx cores support floating point today, although you can construct your own.

6. Are libraries available for what you want to do? Both DSP & FPGA offer libraries for basic building blocks like FIRs or FFTs. However, more complex components may not be available, and this could sway your decision to one approach or the other.

5. Comparison of FPGA vs ASIC?

Field Programmable Gate Arrays (FPGAs) and Application Specific Integrated Circuits (ASICs) provide different values to designers, and they must be carefully evaluated before choosing one over the other. Information abounds that compares the two technologies. While FPGAs used to be selected for lower speed/complexity/volume designs in the past, today's FPGAs easily push the 500 MHz performance barrier. With unprecedented logic density increases and a host of other features, such as embedded processors, DSP blocks, clocking, and high-speed serial at ever lower price points, FPGAs are a compelling proposition for almost any type of design.

FPGA:

Pros:

Faster time-to-market: no layout, masks or other manufacturing steps are needed.

No upfront NRE: the non-recurring engineering costs typically associated with an ASIC design are avoided.

Simpler design cycle: due to software that handles much of the routing, placement, and timing.

Field re-programmability: a new bit stream can be uploaded remotely.

Cons:

It requires more silicon: a custom circuit only needs to consist of the gates that are required for the application.

It consumes more power than an ASIC: programmable hardware requires extra configuration logic, which also consumes a significant proportion of the chip.

Note: an FPGA requires 20–40 times the silicon area of an equivalent ASIC.

It is slower compared to an ASIC: the flexibility of programmable logic means that it will always be slower than a custom circuit.

Note:

Because programmable circuits are larger, they will have more capacitance, reducing the maximum clock speed. An ASIC will typically be 3 or 4 times faster.

Structured ASIC: Structured ASIC approaches, which effectively take an FPGA design and replace it with the same logic (hardwired) and fixed routing, can also overcome some of these problems.

ASIC design advantages:

1. Full custom capability 2. Lower unit costs 3. Smaller form factor

Note: Although ASICs offer the ultimate in size (number of transistors), complexity, and performance, designing and building one is an extremely time-consuming and expensive process, with the added disadvantage that the final design is “frozen in silicon” and cannot be modified without creating a new version of the device.


6. Selection criteria for an FPGA?

The selection of an FPGA is based on the following criteria:

1. Capacity (gate count, BRAMs & DSP slices)
2. Power consumption
3. I/Os (user I/Os) and operating voltage standards for the I/Os (LVCMOS, LVTTL, LVPECL, etc.)
4. IDE: free or paid software
5. Cost of the device
6. Flash or anti-fuse devices (ACTEL only)
7. No PROM / configuration memory, or SRAM-based devices (all other FPGA vendors)
8. Package of the device (depends on the I/O requirement)
9. Form factor
10. Speed grade of the device
11. Temperature grade of the device (military / industrial / commercial qualification)
12. Maximum operating frequency
13. Free IP cores from the silicon vendor (this allows a faster development cycle)

7. Difference between a PLD and an FPGA?

We need only be aware that PLDs are devices whose internal architecture is predetermined by the manufacturer, but which are created in such a way that they can be configured by engineers in the field to perform a variety of different functions. In comparison to an FPGA, however, these devices contain a relatively limited number of logic gates, and the functions they can be used to implement are much smaller and simpler.

8. Logic on the FPGA fabric or on the processor: deciding factors?

Picosecond and nanosecond logic: this has to run insanely fast, which mandates that it be implemented in hardware (in the FPGA fabric).

Microsecond logic: this is reasonably fast and can be implemented either in hardware or in software (this type of logic is where you spend the bulk of your time deciding which way to go).

Millisecond logic: this is the logic used to implement interfaces such as reading switch positions and flashing light-emitting diodes (LEDs). It's a pain slowing the hardware down to implement this sort of function (using huge counters to generate delays, for example). Thus, it's often better to implement these tasks as microprocessor code (because processors give you lousy speed compared to dedicated hardware, but fantastic complexity).

9. Soft-core vs hard-core microprocessors?

Hard-core µP: a hard microprocessor core is implemented as a dedicated, predefined block.

Soft-core µP: as opposed to embedding a microprocessor physically into the fabric of the chip, it is possible to configure a group of programmable logic blocks to act as a microprocessor.

Technology trade-offs:

● A soft core typically runs at 30 to 50 percent of the speed of a hard core.

● However, soft cores have the advantage that you only need to implement a core if you need it, and that you can instantiate as many cores as you require until you run out of resources in the form of programmable logic blocks.

10. Special embedded multipliers?

Multipliers are inherently slow if they are implemented by connecting a large number of programmable logic blocks together. Since many applications require these functions, many FPGAs incorporate special hardwired multiplier blocks. These are typically located in close proximity to the embedded RAM blocks because these functions are often used in conjunction with each other.

Note on MACs: If the FPGA you are working with supplies only embedded multipliers, you will have to implement the MAC function by combining the multiplier with an adder formed from a number of programmable logic blocks, while the result is stored in some associated flip-flops, in a block RAM, or in a number of distributed RAMs. Life becomes a little easier if the FPGA also provides embedded adders, and some FPGAs provide entire MACs as embedded functions.

11. Explain the Xilinx logic cell (LC).

An LC comprises a 4-input LUT (which can also act as a 16×1 RAM or a 16-bit shift register), a multiplexer, and a register. The register can be configured to act as a flip-flop or as a latch. The polarity of the clock (rising-edge triggered or falling-edge triggered) can be configured, as can the polarity of the clock enable and set/reset signals (active-high or active-low). In addition to the LUT, MUX, and register, the LC also contains a smattering of other elements, including some special fast carry logic for use in arithmetic operations.

12. Functions of a clock manager in a Xilinx FPGA?

1. The FPGA's clock manager can be used to detect and correct for jitter and to provide “clean” daughter clock signals for use inside the device.

2. To obtain daughter clocks with frequencies that are derived by multiplying or dividing the original signal.

3. Phase shifting of the input clock by a desired angle (some clock managers provide a fixed set of phase shifts, while others allow you to configure the exact amount of phase shift you require for each daughter clock).

4. Auto skew correction.

Technology Trade-offs


● Some FPGA clock managers are based on phase-locked loops (PLLs), while others are based on digital delay-locked loops (DLLs). PLLs have been used since the 1940s in analog implementations, but the recent emphasis on digital methods has made it desirable to match signal phases digitally. PLLs can be implemented using either analog or digital techniques, while DLLs are by definition digital in nature.

● The proponents of DLLs say that they offer advantages in terms of precision, stability, power management, noise insensitivity, and jitter performance.

13. What are the configuration files for FPGAs and CPLDs?

Configuration files contain the information that will be uploaded into the FPGA in order to program it to perform a specific function. In the case of SRAM-based FPGAs, the configuration file contains a mixture of configuration data (bits that are used to define the state of programmable logic elements directly) and configuration commands. There are several types of configuration files:

A Bit stream file (*.bit) is used to configure an FPGA.

A JEDEC file (*.jed) is used to configure a CPLD.

A PROM file (*.mcs) is used to configure a PROM.

A Raw Bit File (*.rbt) is an ASCII version of the Bit file. The only difference is that the header information in a Bit file is removed from the Raw Bit File.

An IEEE1532 standard file (*.isc) can be used to configure selected FPGAs, CPLDs, or PROMs.

A BINARY file (*.bin) is a PROM data file in binary format. This is appropriate for users who are interested in developing their own software applications for configuring Xilinx FPGAs.

14. What is a DLL?

In electronics, a delay-locked loop (DLL) is a digital circuit similar to a phase-locked loop (PLL), with the main difference being the absence of an internal voltage-controlled oscillator. A DLL can be seen as a negative-delay gate placed in the clock path of a digital circuit. Another way to view the difference between a DLL and a PLL is that a DLL is a first-order loop whereas a PLL is a second-order loop.

A DLL compares the phase of one of its outputs to the input clock to generate an error signal, which is then integrated and fed back as the control to all of the delay elements. The integration allows the error to go to zero while keeping the control signal, and thus the delays, where they need to be for phase lock. Since the control signal directly impacts the phase, this is all that is required. A first-order feedback system is significantly easier to stabilize than a second-order feedback system, which is a major advantage of DLLs.

The main component of a DLL is a delay chain composed of many delay gates connected front-to-back. The input of the chain (and thus of the DLL) is connected to the clock that is to be negatively delayed. A multiplexer is connected to each stage of the delay chain; the selector of this multiplexer is automatically updated by a control circuit to produce the negative delay effect. The output of the DLL is the resulting, negatively delayed clock signal.
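A very rough first-order model of that behaviour: the phase error is integrated into the delay control until it reaches zero (all numbers and names here are illustrative assumptions, not taken from any device):

```python
# Idealized first-order DLL: integrate the phase error into the delay control.

target_phase = 0.25   # desired alignment, in fractions of a clock period (assumed)
delay = 0.0           # current delay-line setting, same units
gain = 0.3            # integrator (loop) gain, assumed

for step in range(12):
    phase_error = target_phase - delay   # phase detector output
    delay += gain * phase_error          # integration drives the error toward zero
    print(f"step {step:2d}: delay = {delay:.4f}, error = {phase_error:+.4f}")
# The delay converges to the lock point and the error goes to zero, which is
# the first-order behaviour described above.
```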

15. Need for gigabit transceivers?

1. As the need to push more data grew, bus widths grew from 8 to 16, 32, and 64 bits.

a. The problem is that this requires a lot of pins on the device and a lot of tracks connecting the devices together.

b. Routing these tracks so that they all have the same length and impedance becomes increasingly painful as boards grow in complexity, and it becomes increasingly difficult to manage signal integrity issues.

Solution: High-end FPGAs include special hardwired gigabit transceiver blocks. These transceivers operate at incredibly high speeds, allowing them to transmit and receive billions of bits of data per second. These blocks use one pair of differential signals to transmit (TX) data and another pair to receive (RX) data.

Examples of gigabit transceiver standards:

● Fibre Channel

● InfiniBand

● PCI Express

● RapidIO

● SkyRail (from MindSpeed Technologies)

● 10-gigabit Ethernet

16. What are IP cores?

FPGA designs are so big and complex that it would be impractical to create every portion of the design from scratch. Any existing functional blocks are typically referred to as intellectual property (IP).

Three sources of IP:

1. Internally created blocks reused from previous designs,

2. FPGA vendors, and

3. Third-party IP providers.

FPGA vendors provide three forms of IP:

1. Hard IP

2. Firm IP

3. Soft IP

Hard IP: Hard IP comes in the form of pre-implemented blocks such as microprocessor cores, gigabit interfaces, multipliers, adders, MAC functions, and the like. These blocks are designed to be as efficient as possible in terms of power consumption, silicon real estate, and performance. Each FPGA family will feature different combinations of such blocks, together with various quantities of programmable logic blocks.

Soft IP: Soft IP refers to a source-level library of high-level functions that can be included in the user's designs. These functions are typically represented using a hardware description language (HDL), such as Verilog or VHDL, at the register transfer level (RTL) of abstraction. Any soft IP functions the design engineers decide to use are incorporated into the main body of the design, which is also specified in RTL, and are subsequently synthesized down into a group of programmable logic blocks.

Firm IP: Firm IP also comes in the form of a library of high-level functions. Unlike their soft IP equivalents, however, these functions have already been optimally mapped, placed, and routed into a group of programmable logic blocks.

17. What is RTL encryption?

The IP provider will already have simulated, synthesized, and verified the IP blocks before handing over the RTL source code. FPGA vendors are usually reluctant to provide unencrypted RTL because they don't want anyone to retarget it toward a competitor's device offering. RTL encrypted by a particular FPGA vendor's tools can only be processed by that vendor's own synthesis tools, so the IP block is tied to a particular FPGA vendor and device family.

18. Advantage of FLASH-based FPGA programming?

FLASH-based devices are non-volatile. They retain their configuration when power is removed from the system and don't need to be reprogrammed when power is reapplied to the system. FLASH-based devices can be programmed in-system (on the circuit board) or outside the system by means of a device programmer.

19. Power consumption of an FPGA?

The power dissipated by an FPGA can be split into two components: static power and dynamic power. Dynamic power is the power required for switching signals from 0 to 1 and vice versa. Conversely, static power is that consumed when no switching occurs, simply by virtue of the device being switched on.

The dynamic power can be estimated as P_dynamic = N × C × Vdd² × f,

where

N is the average number of outputs that are changing in each clock cycle,

C is the average capacitance on each output,

Vdd is the power supply driving voltage (usually 3.3 V), and

f is the clock frequency.

Reduction mechanisms:

1. Limiting the number of outputs that change with each clock cycle;

2. Minimising the fan-out from each gate and minimising the use of long wires (to keep the capacitance lower);

3. Reducing the power supply voltage

4. Reducing the clock frequency.
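A back-of-the-envelope sketch of the P_dynamic = N × C × Vdd² × f estimate and of mechanisms 3 and 4 above (all numbers are invented for illustration):

```python
# Dynamic power estimate: P_dyn = N * C * Vdd^2 * f (illustrative numbers only).

def dynamic_power(n_switching, c_avg_farads, vdd_volts, f_hz):
    return n_switching * c_avg_farads * vdd_volts ** 2 * f_hz

baseline = dynamic_power(1000, 10e-15, 3.3, 100e6)   # 1000 nets, 10 fF each, 3.3 V, 100 MHz
half_clk = dynamic_power(1000, 10e-15, 3.3, 50e6)    # mechanism 4: halve the clock
low_vdd  = dynamic_power(1000, 10e-15, 1.5, 100e6)   # mechanism 3: lower the supply

print(f"baseline       : {baseline * 1e3:.2f} mW")
print(f"half clock rate: {half_clk * 1e3:.2f} mW")
print(f"1.5 V supply   : {low_vdd * 1e3:.2f} mW")
```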

Note: Reducing the clock frequency probably has the most significant effect on reducing the power consumption, as it is often the clock signal itself that dissipates the most power. Minimising the number of clock domains and keeping the logic associated with each clock domain together can also help to limit the power used by the clock circuitry.

Flow-through current is becoming more important with reducing Vdd, because the threshold voltages of the P- and N-channel transistors are brought closer together and both transistors are partially on for a wider proportion of the input voltage swing.

The static power consumption is due to the leakage current of the device.

20. Frequency translation in the WBSPU?

A hopper is placed at 81 to 84 MHz. What is its frequency at baseband to be given to the MRx for monitoring 128 kHz audio?

F1 = 81 MHz, F2 = 84 MHz

RF tuner band: 20-100 MHz (80 MHz)

With the RF tuner band (20-100 MHz):

Without the RF tuner band:

Further questions:

1. High-speed design constraints?

2. High-speed ADC design constraints? Current highest speed in the market?

3. High-speed DAC design constraints? Current highest speed in the market?

4. There are several LUT structures used in FPGAs; what are their trade-offs?

5. FPGA vs CPLD?

21. What is meant by "timing does not meet"?

When we say a design does not “meet timing,” we mean that the delay of the critical path, that is, the largest delay between flip-flops (composed of combinatorial delay, clk-to-out delay, routing delay, setup time, clock skew, and so on), is greater than the target clock period. The standard metrics for timing are clock period and frequency.

Remedy:

Timing optimizations to reduce the combinatorial delay of the critical path are as follows.

1. Adding register layers to divide combinatorial logic structures. Adding register layers improves timing by dividing the critical path into two paths of smaller delay.

2. Parallel structures for separating sequentially executed operations into parallel operations.

Separating a logic function into a number of smaller functions that can be evaluated in parallel reduces the path delay to the longest of the substructures.


3. Flattening logic structures specific to priority encoded signals. By removing priority encodings where they are not needed, the logic structure is flattened and the path delay is reduced.

4. Register balancing to redistribute combinatorial logic around pipelined registers.

Conceptually, the idea is to redistribute logic evenly between registers to minimize the worst-case delay between any two registers. Register balancing improves timing by moving combinatorial logic from the critical path to an adjacent path.

5. Reordering paths to divert operations in a critical path to a non-critical path.

Timing can be improved by reordering paths that are combined with the critical path in such a way that some of the critical-path logic is placed closer to the destination register.

22. What is the maximum frequency of operation?

Timing refers to the clock speed of a design. The maximum delay between any two sequential elements in a design determines the maximum clock speed.
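A small worked example of this (the delay figures are made up for illustration):

```python
# Minimum clock period = sum of the register-to-register delays on the critical path.

t_clk_to_out = 0.6   # ns, launching flip-flop clock-to-out (assumed)
t_logic      = 2.4   # ns, combinatorial delay (assumed)
t_routing    = 1.1   # ns, routing delay (assumed)
t_setup      = 0.5   # ns, capturing flip-flop setup time (assumed)
t_skew       = 0.2   # ns, clock skew margin (assumed)

t_min_period = t_clk_to_out + t_logic + t_routing + t_setup + t_skew
print(f"minimum clock period: {t_min_period:.2f} ns")        # 4.80 ns
print(f"maximum frequency   : {1e3 / t_min_period:.1f} MHz") # ~208.3 MHz
```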

23. What is meant by architecting for area?

A topology that targets area is one that reuses the logic resources to the greatest extent possible, often at the expense of throughput (speed). Very often this requires a recursive data flow, where the output of one stage is fed back to the input for similar processing.