EEWeb Pulse - Volume 47

PULSE EEWeb.com Issue 47 May 22, 2012 Michael McNamara Cadence Electrical Engineering Community EEWeb


Interview with Michael McNamara – Vice President at Cadence; TLM-Driven Design and Verification Using Cadence; React Quickly or Feel The Pain; Advanced Complementary Bipolar Processes on Bonded-SOI Substrates; RTZ – Return to Zero Comic

Transcript of EEWeb Pulse - Volume 47

Page 1: EEWeb Pulse - Volume 47

PULSE EEWeb.com Issue 47

May 22, 2012

Michael McNamara, Cadence

Electrical Engineering Community

EEWeb


Contact Us For Advertising Opportunities

[email protected]

www.eeweb.com/advertising



EEWeb | Electrical Engineering Community Visit www.eeweb.com 3

TABLE OF CONTENTS

Michael McNamara, Cadence 4

TLM-Driven Design and Verification Using Cadence® 10

BY MICHAEL MCNAMARA

Featured Products 14

React Quickly or Feel The Pain 16
BY DAVE LACEY WITH XMOS

Advanced Complementary Bipolar Processes on Bonded-SOI Substrates 19
BY STEPHEN PARKS, RICK JEROME, JOSHUA BAYLOR, AND MICHAEL SUN WITH INTERSIL

RTZ - Return to Zero Comic 24

How Cadence is pioneering the transition to transaction-level modeling for faster design and verification times, easier IP reuse, and fewer bugs.

Interview with Michael McNamara - Vice President and General Manager of System Level Design

BSOI substrates offer a number of important advantages to ensure high-level performance in semiconductors.

Response time plays a crucial role in real-time efficacy in system programming.




FEATURED INTERVIEW

Can you tell us about your work experience/history before becoming the Vice President of System Level Design at Cadence?

After graduating from Cornell University in 1985 with a Master's Degree in computer architecture from the Electrical Engineering school, I worked in the defense industry, building computer systems designed to find submarines (think Hunt for Red October). As the Cold War ended, I moved into the commercial sector, first building computer systems designed to help find oil at Cydrome in 1987, and then at Ardent Computer in 1989 building systems optimized for MRI machines – finding torn ligaments. My work was in specifying and modeling the systems, verifying the implementation, and bringing up the operating systems on these diverse computerized systems.

Will you tell us about starting Chronologic in the 90’s, bringing VCS to the world, and then later co-founding SureFire Verification, improving the state of the verification software?


Michael McNamara - Vice President and General Manager of System Level Design

We used Verilog simulation to design and verify the computer systems we were developing, and the commercial tools available at the time ran far too slowly to be useful for our needs. So a number of us from Ardent got together in 1991 to form Chronologic to write a compiler for the Verilog language – VCS. I served as VP of Engineering, and we delivered a 10x speedup in verification speed over the Verilog-XL tool from Cadence. Regression tests that took a week could be completed overnight, greatly increasing the productivity of the integrated chip design process.

After selling Chronologic to Viewlogic in 1994 (and then to Synopsys), a number of us recognized that the next barrier to productivity was the lack of automation in verification. So we founded SureFire Verification in 1996 to develop tools for measuring test coverage, performing static verification as well as automatic test generation. We merged SureFire into Verisity in 1999, and together went public in 2001. Cadence acquired Verisity in 2005.

What have been some of your influences that have helped you get to where you are today?

The inexorable pace of complexity increase driven by Moore's Law provides the commercial opportunity to successively optimize each step of the design and verification process as each becomes a bottleneck. In the 70's, it became too difficult to lay out transistors by hand, so the industry developed place-and-route tools. In the 80's, it became too slow to capture logic designs as networks of gates, so the industry developed RTL simulation and synthesis. In the 90's, it became too slow to write tests by hand to verify these RTL designs, so we developed automatic testbench systems.

My colleagues and I have been very successful by recognizing the next bottleneck; developing tools and methodologies to address the bottleneck, and bringing these tools to market properly timed to serve the expanding needs.

Do you have any tricks up your sleeve?

Occasionally back away from your day-to-day activities and take a look around at your team. Ask what tasks seem to be consuming increasing percentages of your time. What is the emerging bottleneck in your job? What would it take to reduce the bottleneck? Is there a company you could start to address reducing this bottleneck?

What has been your favorite project?

My most enjoyable project was building the Chronologic Verilog Compiled Simulator (VCS). In less than 12 months we built a tool that delivered a 10x improvement in simulation performance, and won business at Sun Microsystems, Silicon Graphics and AMD.

What are you currently working on?

I manage the teams at Cadence which are developing the methodologies and tools to improve the efficiency of System Level Design. System Level Design is what people do before they start into today's RTL design process, where they use the familiar Verilog and VHDL languages for design, and SystemVerilog and Specman for verification.

The first technology we developed is the Cadence C-to-Silicon Compiler (CtoS), which is a high level synthesis tool that enables engineers to guide the automatic translation of programs written in the C and C++ programming languages into very efficient RTL, controlling the area, power and performance of the resulting circuitry. This process uses the IEEE standard SystemC class library as the input format and generates files in the IEEE Verilog format as the output.
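As a rough illustration of the kind of input a high-level synthesis flow consumes, here is a minimal sketch of a fixed-point filter kernel in plain C++. This is a hypothetical example, not from Cadence documentation; actual C-to-Silicon input uses the IEEE SystemC class library, and the function and names below are invented.

```cpp
#include <array>
#include <cstdint>

// Illustrative only: a fixed-point, loop-based kernel of the style an HLS
// tool can map to RTL. Real C-to-Silicon inputs use SystemC; this plain
// C++ sketch just shows the level of abstraction.
constexpr int TAPS = 4;

// One step of a 4-tap FIR filter over 16-bit samples, 32-bit accumulator.
int32_t fir_step(std::array<int16_t, TAPS>& delay,
                 const std::array<int16_t, TAPS>& coeff,
                 int16_t sample) {
    // Shift the delay line (in hardware this becomes a register chain).
    for (int i = TAPS - 1; i > 0; --i)
        delay[i] = delay[i - 1];
    delay[0] = sample;

    // Multiply-accumulate (an HLS tool can unroll or pipeline this loop).
    int32_t acc = 0;
    for (int i = 0; i < TAPS; ++i)
        acc += static_cast<int32_t>(delay[i]) * coeff[i];
    return acc;
}
```

From a description like this, the tool's scheduling and resource choices (loop unrolling, pipelining, multiplier sharing) are what let the user control the area, power, and performance of the generated RTL.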

The second technology we developed is the Cadence Virtual System Platform (VSP), which is a set of tools enabling rapid creation of programmer view models of hardware, in the SystemC language; the execution and debug of embedded software running on these models in a pure virtual platform environment, or on any mixture of execution where some blocks are represented as RTL. The RTL can run simulated on a workstation or accelerated in the Cadence Virtual Computing Platform known as Palladium PXP.

Where is this type of technology used?

Quite a number of companies are using CtoS and VSP; recent press releases document the success that companies like Casio and Renesas have had with C-to-Silicon, and that Xilinx has had with the Cadence Virtual System Platform. The VSP was even awarded the 2012 ACE award as the most significant software product of the year.

Will you tell us about linking virtual prototypes to high-level synthesis?

The astute reader (and many of my customers) quickly notices that C-to-Silicon and VSP both accept as input models of the hardware written in the SystemC language, and naturally asks if they can feed, say, the virtual model of the image processing component to C-to-Silicon, and hence get an RTL implementation of the device.

The answer is a guarded 'yes,' if one plans for this dual-use up front. In the general sense, the concerns of the person building a model of the hardware for the embedded software programmer are quite different from the concerns of the person building a synthesizable model for high-level synthesis. The embedded software designer wants speedy execution from the virtual platform above all else, and needs only sufficient accuracy so that correct programs will run on the virtual platform, and incorrect programs will fail. Such users don't need any models of internal busses or communication protocols. The hardware designer starting from SystemC needs a description that completely represents the required functionality, including a clear specification of the bus protocols and clocking strategies required in the resulting hardware. She needs to be able to guide the HLS tool to produce an implementation that fits the particular conflicting requirements of small area, low power, low latency and high throughput of the target device.

Both teams need an accurate representation of the algorithm and register definitions; but only the HLS engineer needs a clear description of the internal protocols.

What is the connection between virtual prototypes and high-level synthesis?

A powerful part of the IEEE SystemC class library is the support for Transaction Level Modeling. Using TLM, the creators of system-level models of devices can abstract away the details of how the various blocks will communicate in the real hardware, by representing this communication as TLM put() and get() calls to an idealized channel, which connects a block to a bus, or as a peer to another block. Using TLM, one can write a model for use in a virtual platform, which includes a representation of the various blocks of the design, and which will execute very quickly for the embedded software validation process. When synthesizing the same model, people using the Cadence C-to-Silicon Compiler use libraries provided by Cadence to map this idealized communication to a synthesizable AXI bus, as an example.
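The put()/get() idea can be sketched in a few lines of plain C++. This is only an analogue of the concept, not the IEEE SystemC TLM API; the type and method names below are invented for illustration.

```cpp
#include <cstdint>
#include <queue>

// A minimal analogue of TLM-style communication: producer and consumer
// blocks exchange whole transactions through an idealized channel, with
// no bus signals, handshakes, or clock cycles modeled.
struct Transaction {
    uint32_t addr;
    uint32_t data;
};

class TlmChannel {
    std::queue<Transaction> fifo_;
public:
    // A blocking put in real TLM; here we simply enqueue.
    void put(const Transaction& t) { fifo_.push(t); }

    // A blocking get in real TLM; here we assume data is available.
    Transaction get() {
        Transaction t = fifo_.front();
        fifo_.pop();
        return t;
    }

    bool empty() const { return fifo_.empty(); }
};
```

In a real flow, the same put()/get() calls in the model would be mapped by synthesis-time libraries onto a concrete bus protocol such as AXI, as described above.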

What are the main reasons that your customers are using high-level synthesis?

The ability to quickly build different high-quality implementations of a given algorithm targeted to different use cases is often the first perceived benefit of high-level synthesis. Teams of RTL coders have built great hardware over the years, including the computer on which I am writing these words! However, this is a manual process that has to be repeated from scratch to code new RTL representing the same functionality but implemented at, say, 40nm instead of 60nm, and again to work at 20nm. Different RTL needs to be written to implement, say, 3D graphics for a smart phone than the RTL that would be ideal for doing the same function on a laptop.

C-to-Silicon lets you generate all of these different RTL files from the same input, by just providing the 40nm technology library instead of the 60nm library; and by setting the options for low power versus high performance in the configuration file.

That being said, the experienced users have recognized that the even more compelling value of C-to-Silicon is its integration into the UVM verification methodology. We’ve designed our HLS tool so that all of the aspects of the design (the calculation and the communication protocols) are modeled in the input; and so one can verify the correct behavior of the system as coded in the SystemC language, which simulates orders of magnitude faster than RTL code.

As I noted before, in my answer to your question about "tricks up my sleeve" – my colleagues and I noticed that the current bottleneck in the design of electronic devices is the incredible amount of time it takes to verify that the code does what we want the device to do. Hence we developed a methodology and a tool set we called TLM-Driven Design and Verification (and released a book last year on this topic, which you can get from Amazon), which raises the level of abstraction for design capture to the SystemC level, where verification runs much more quickly. So at the end of the day, the pace of creation of the new and incredibly more powerful devices that surround us will increase even more, and the devices themselves will be much more reliable.

Do you work directly with chip manufacturers or will you work in conjunction with logic synthesis compilers?

C-to-Silicon uses another Cadence tool known as RTL Compiler as a linked-in subroutine to read the chip manufacturer's technology library into our database – for example, the TSMC 65-nanometer low-power library. With the data from the technology library, which gives the tool information about things like the timing, area and voltage of the various transistors that can be used in the particular manufacturing process, C-to-Silicon then reads in your C program and proceeds to implement a device that performs the function of your C program using those transistors. Our tool works with any process you like; from 180 nm (which will be a fairly inexpensive process generating a fairly slow integrated circuit), to the bleeding-edge 20nm and 14nm libraries which are used to build the fastest and most complex devices we will see in products in the next few years. For the FPGA flow, we use the Xilinx XST tooling as the timing engine for Xilinx FPGAs, and we use the Altera Quartus tools for Altera FPGAs.

Xilinx has acquired companies providing high-level synthesis. Cadence and Xilinx are partnering around your Virtual Platform tool, but maybe competing with your HLS tool. What's going on here?

Xilinx does recognize the need to provide such technology to their FPGA customers. These folks require easy-to-use tools that map designs into the FPGA very quickly. If you look at it from Cadence's perspective, most of our customers are implementing on ASIC, or an analog-digital mix. Cadence's primary customers are the top-tier semiconductor manufacturers and integrated systems companies whose primary use of FPGAs is as a fast prototype of an ASIC chip that they're going to ship in volume. C-to-Silicon supports that prototyping flow, which allows these customers to quickly generate RTL for an algorithm that can run at a speed of hundreds of MIPS on the FPGA, gain confidence that their overall system performance will meet their needs, and then return to building a highly optimized ASIC that will run at GHz speeds, use much less power, and cost less than the FPGA. These are the customers who plan to ship very high numbers of parts, so they need each to be as small as possible (which also often means they will use less energy).

Do you see the web influencing EDA platforms and technology and the way these tools work?

I was on the board of Verisity Design back in 2000. Our board asked us if we could take advantage of the emerging software-as-a-service (SaaS) or web-based apps for EDA design. Our investigation then showed that quite a lot of customers were more worried about protecting their IP than they were about reducing their IT costs. They had a legitimate fear of having, say, the next iPhone design out on the web somewhere where people could hack in, figure out what it's doing, and potentially make a competing device. Or more critically, that access to the SaaS servers (now called cloud servers) would go down during a critical design phase, delaying product development at minimum, or worse, losing critical data altogether. On the other hand, the power of virtualized servers and the resulting scalable IT infrastructure is a great boon to EDA users who have significant peaks and valleys in their need for compute power over the life cycle of a design project. Different tools are needed throughout the lifecycle of the design, and in some phases only a few computers are needed; in other phases the limit to progress is the number of computers that can be applied in parallel.

Over the past decade, the cost per node of IT has continued to drop, and companies have productized the cloud, bringing the ability to own a private SaaS system well within the means of even small to medium businesses. So while in EDA the use of public SaaS servers is limited to special cases, the use of private clouds and peer-to-peer web-based computing models is expanding significantly.

At Cadence we have provided our customers peer-to-peer computing support for more than a decade. We call it "Virtual CAD," and it is how we help customers get unstuck very quickly from whatever is preventing them from moving forward. These are secure chambers that are not just out on the web—there's a secure T1 link between our customer and our team, and a secure machine at one of our sites dedicated to this customer. The customer can put up their design and have the complete tool environment from Cadence available, and our experts can virtually look over their shoulders from wherever they are to help the customer with the issue. Often it is simple education – the customer needs help learning how to use a cutting-edge tool. Sometimes it is a bug in our tool, or in the customer's design, and our experts can apply special debugging versions of a particular Cadence tool so that we can get to the bottom of the issue as soon as possible. This allows timely customer support without having to fly our people from our India, Boston or San Jose offices to the customer's site in Korea or Texas or wherever.

We do offer some Cadence tools in a web-based client-server environment – initially some of our analog design tools. This is very effective for some of the early-stage start-ups, or even pre-funded start-up companies that haven't otherwise invested in IT infrastructure. A lot of the time, the VCs want to see some early indications of success before they start investing the big bucks into a start-up these days—everyone's cautious—and having the ability to let people access the hosted services and start some of their chip design before they build their own infrastructure is a powerful way to enable these guys to get going without a huge cost on our part.

What’s the best approach to becoming acquainted with some of Cadence’s tools? Do you offer demos for people to learn how to use them?Since we are a big company in the industry, we are probably already in-use somewhere else in your company and there probably is already somebody on our sales force that works with you, certainly if you are medium to large. Cadence does hold a conference at 8 or 10 locations around the world where we show what we offer, and we have other users giving papers on our technology, and everything else we have going on. It’s really just a matter of contacting your local salesperson or an e-mail to me. All you have to do is give me a return address. We like to keep it up-close-and-personal.

What challenges do you foresee in our industry?

Design is moving from a serial process – where (a) the hardware team designs and verifies RTL models, builds chips from them, and assembles them into a device, and then (b) hands it to the software team, who ports an operating system to the device and brings up applications, finally shipping this to the market a few years after design start – to a new, more efficient parallel process.

In this new flow, the hardware teams and software teams work together, building their components at the same time. They use the virtual platform as the common model that both teams aim for – the software team builds software that works on the virtual platform, and the hardware team builds a device that behaves just like the virtual platform, so the software will run on the actual product the first time. Those companies that master this technique the soonest will be first to market with the newest devices. Those companies that provide the tools that enable this new design flow will win with the new winners.

What are some of your hobbies outside of work and design?

Well, I own a small orchard in the Santa Cruz Mountains, nestled in the redwoods. Our water comes from a spring, electricity from solar, and the chores around the farm keep me gloriously busy on the weekends as a gentleman farmer. My staff gets the benefit of organic fruit including pears, apricots, apples, kiwis and pomegranates throughout the year. I have gained a deep appreciation for how difficult it would be to survive financially as a farmer – bad weather will cut crop production in half and raise prices by only 20%, while good weather boosts production but cuts prices. Disease, insects, and equipment failure all throw more uncertainty into one's financial security. John Steinbeck owned this farm in the 1930s, and he kept his day job as a writer. I follow his lead by keeping my day job managing the development of chip design software. ■


Avago Technologies' new AEAT-6600 Hall Effect Magnetic Encoder delivers optimal solutions for robotic, industrial and medical systems designers.

• World’s highest resolution

• 16-bit absolute position through SSI

• Programmable Magnetic Rotary Encoder IC

• 16-pin TSSOP package

• Power down mode

New Encoder for the Worst Case Environments

Avago Technologies Motion Control Products

For more information and to request a free sample go to: www.avagotech.com/motioncontrol



By Michael McNamara, Vice President and General Manager of System Level Design

TLM-Driven Design and Verification Using Cadence®

Abstract

Transaction level modeling (TLM) is gaining favor over register-transfer level (RTL) for design components because of its many advantages—including faster design and verification times, easier intellectual property (IP) reuse, and fewer bugs. The benefits are clear, but the transition will take time. Designers must be able to combine new TLM IP with legacy RTL IP in design and verification environments. Cadence offers methodology guidelines, high-level synthesis, TLM-aware verification and debugging, and services to ease the transition from RTL to TLM.

Key Challenges That Demand a Change from RTL

At the register-transfer level, the structure of finite state machines is fully described. This means that one needs to commit to micro-architectural details when writing RTL, such as the memory structures, pipelines, control states, or ALUs used in the resulting implementation. This requirement results in a longer and less reusable design and verification flow.

While TLM is sometimes used in today's flows, current RTL-based flows require the manual entry of design intent twice—once at the system level, and again at RTL (Figure 1). This is a cumbersome and error-prone process. Architecture is not verified until RTL is generated, and retargeting IP comes with a high cost. A true TLM-driven design and verification flow (shown on the right side of Figure 1) will offer an automated path from a single expression of design intent.

One of the big problems with an RTL-driven design methodology is that design teams don't know if an architecture can be implemented until they create RTL and run verification to determine if goals were met. Because RTL is inherently a direct representation of the architecture, most RTL designers are forced to explore functional correctness, architecture, and design goals simultaneously. The result is a lengthy cycle that begins with making an architectural decision and ends with verifying the functionality. Often, design and verification teams discover functional bugs that require fixes to the architecture, restarting the entire cycle with each bug.

Functional verification has become the primary bottleneck in many SoC projects. Much of the verification effort invested at the system level is lost when RTL functional verification begins. While techniques such as verification planning and metric-driven verification allow design teams to handle most of today's verification challenges, time constraints and increasing gate counts are making verification much more difficult. The time required for RTL functional verification can grow exponentially with the size of designs because of the need to verify corner cases resulting from interacting modes and the many hardware and software configurations this IP needs to be tested with.

RTL logic is cycle-accurate and involves far more lines of code than TLM logic. When RTL models are simulated, the simulator examines every event or clock cycle, even if nothing significant is happening at the protocol level. The simulator thus burns a lot of computer cycles on micro-architectural detail that could be postponed until the architecture is committed. TLM simulation is done at a higher level of abstraction, is performed earlier, and provides higher performance.
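The difference in simulation effort can be caricatured with two toy transfer models. These are hypothetical sketches, not code from any Cadence simulator: a cycle-accurate style that evaluates every clock edge of a burst, versus a transaction-level style that moves the whole burst in one call.

```cpp
#include <cstdint>
#include <vector>

// Cycle-accurate style: one evaluation per clock phase, including
// handshake cycles where nothing useful happens at the protocol level.
uint64_t rtl_style_burst(std::vector<uint32_t>& mem,
                         const std::vector<uint32_t>& burst) {
    uint64_t evaluations = 0;
    for (uint32_t word : burst) {
        ++evaluations;            // address phase
        ++evaluations;            // wait state
        mem.push_back(word);
        ++evaluations;            // data phase
    }
    return evaluations;
}

// Transaction-level style: the entire burst is a single event.
uint64_t tlm_style_burst(std::vector<uint32_t>& mem,
                         const std::vector<uint32_t>& burst) {
    mem.insert(mem.end(), burst.begin(), burst.end());
    return 1;                     // one transaction, one evaluation
}
```

Both functions leave the memory in the same state; the transaction-level one simply does it in one evaluation instead of three per word, which is the intuition behind the order-of-magnitude simulation speedups cited below.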

The Solution That’s Needed

Used in conjunction with high-level synthesis (HLS), a TLM-based flow moves the level of abstraction up, representing the first major shift in abstraction since designers moved to RTL some 15 years ago. Experience with previous shifts in the abstraction level suggests that an order-of-magnitude improvement in designer productivity is possible (Figure 2).

Developing and maintaining TLM as golden source for an IP block requires synthesis and verification solutions that produce the necessary quality of results and verify the correctness, with no need to edit the RTL or the gate-level design. This will enable the design team to make all decisions within the TLM environment, and to reuse the TLM source for other designs with entirely new system constraints.

Figure 1: An RTL-based flow requires manual design intent entry twice. A TLM-based flow, in contrast, provides an automated path to implementation. [Diagram: the NOW flow goes Intent → RTL → Gates → Silicon, with intent entered twice, manually, and separate implementation and verification paths; its drawbacks are too many bugs, verifying architecture at RTL is too late, it is not fit for SW development, it has a high cost to retarget, and low-power design is cumbersome. The FUTURE flow goes Intent → TLM → RTL → Gates → Silicon at a higher level of abstraction, with verification throughout.]

Verification of TLM IP has many advantages over RTL verification. First, simulation runs faster—perhaps an order of magnitude over RTL simulation. This allows many more functional use cases to be verified. Also, debugging at the TLM level of abstraction is easier and faster than RTL debugging. By coding at a higher level, TLM IP requires fewer lines of code, and thus has fewer bugs. The total verification effort can thus be greatly reduced.

TLM models are at a sufficiently high level of abstraction, and execute quickly enough, to make hardware/software co-simulation practical. Designers can co-simulate embedded software with TLM hardware models to check for hardware/software dependencies, and to begin early debugging of hardware-dependent software.

At the SoC level, mixed TLM and RTL functional verification is required because of the large amount of legacy RTL IP that will be reused, and because it will still be necessary to perform detailed RTL functional verification for portions of the design. Some verification tasks will still be done only at RTL. This includes micro-architectural structural verification for such attributes as memory access sequences or state transition coverage.

RTL verification is dominated by advanced testbenches that use constrained-random stimulus generation. TLM-enabled VIP should be operable in testbenches for TLM, mixed TLM/RTL, and RTL functional verification. That same VIP needs to allow the application of metric-driven verification as customers apply coverage metrics at all levels of verification abstraction. Finally, supporting embedded software and directed tests is a necessity for verification teams who work closely with the architects and software engineering teams.
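Constrained-random stimulus in production flows is written in SystemVerilog or e (Specman); purely as an illustration of the idea, here is a toy C++ generator that randomizes a transaction while honoring two constraints, a legal address range and 4-byte alignment. The names and constraint values are invented for this sketch.

```cpp
#include <cstdint>
#include <random>

// A randomly generated bus access for a hypothetical device under test.
struct Stimulus {
    uint32_t addr;
    bool write;
};

// Draw a random transaction, constrained to the legal register window
// [0x1000, 0x1FFF] and aligned to a 4-byte boundary.
Stimulus random_stimulus(std::mt19937& rng) {
    std::uniform_int_distribution<uint32_t> addr_dist(0x1000u, 0x1FFFu);
    std::bernoulli_distribution write_dist(0.5);
    Stimulus s;
    s.addr = addr_dist(rng) & ~0x3u;  // constraint: 4-byte aligned
    s.write = write_dist(rng);
    return s;
}
```

The point of the technique is the same at every abstraction level: random variation explores corner cases a directed test would miss, while the constraints keep every generated transaction legal, and coverage metrics then measure which of those cases were actually exercised.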

Incremental Design Refinement From Algorithm to Micro-Architecture

The TLM IP design and verification flow has several distinct phases, including algorithm verification, architecture verification, and micro-architecture verification (Figure 3). The first step, algorithm design and verification, may involve C++ or a product such as Matlab or Simulink. Users define a vPlan for key algorithm features, verify I/O functionality, and apply stimulus sequences for key use cases.

In step two (architecture verification), designers use a TLM-driven IP modeling (TDIP) methodology to define the architecture and the interface protocols. They reuse the algorithm vPlan and apply additional stimuli, checks, assertions, and coverage. They also define a vPlan for the key architecture and interface protocol features. In step three (micro-architecture verification), following synthesis with the C-to-Silicon Compiler, designers reuse the algorithm and architecture vPlans and expand to micro-architectural detail in the stimuli, checks, assertions, and coverage.

Figure 2: TLM verification flow and requirements. [Diagram: verification stages — algorithm functional verification, TLM functional verification, functional verification with SW, SoC and SW verification with timing, RTL micro-architecture verification, RTL equivalence verification, and gates equivalence verification — paired with requirements: during algorithm design, apply stimulus sequences and define checks and coverage to validate the I/O functionality; expose the registers and memory maps for the SoC, validate the algorithm in the system context, and use it as a platform for early SW development; validate the performance in the system context and use it as a platform for timing-dependent SW development; once the architecture is chosen, thoroughly verify that the TLM functional implementation matches spec requirements; thoroughly verify that the delta RTL micro-architecture (signal and protocol level) matches spec requirements; verify that RTL functionality matches TLM functionality; and verify that gates functionality matches RTL functionality.]

TLM Verification Solutions Using Cadence

The C-to-Silicon Compiler is a high-level synthesis product that takes TLM SystemC IP descriptions and constraints and creates RTL that can be used with the standard RTL implementation flow. To ensure the quality of results, it uses the Cadence® Encounter® RTL Compiler technology to create logic, and uses the timing and power information of this logic to extract architecture details for the resulting RTL.

With Cadence® Incisive® Software Extensions, designers can co-simulate processor models running embedded software with TLM hardware models. Incisive Software Extensions gives the verification testbench access to software executing on processor models, and brings capabilities such as metric-driven verification, pseudo-random test generation, and verification coverage to combined hardware and software simulation.

These software extensions provide technology to mix TLM, TLM/RTL, and RTL functional verification to achieve successful closure. For SoCs that have large legacy RTL content, TLM simulation can be complemented with fast RTL validation using Cadence hardware platforms. These hardware platforms allow cycle-accurate execution at speeds that permit low-level software validation. ■

Figure 3: A TLM-based IP design and verification flow encompasses algorithmic, architectural, and micro-architectural verification.




FEATURED PRODUCTS

Stellaris® Robotic Evaluation Board

The Stellaris® Robotic Evaluation Board (EVALBOT) is a robotic evaluation platform for the Stellaris LM3S9B92 microcontroller. The board also uses a range of Texas Instruments' analog components for motor drive, power supply, and communications functions. After a few minutes of assembly, the EVALBOT's electronics are ready-to-run. When roaming, three AA batteries supply power to the EVALBOT. The EVALBOT automatically selects USB power when tethered to a PC as a USB device or when debugging. Test points are provided to all key EVALBOT signals. Two 20-pin headers enable future wireless communications using standardized Texas Instruments' low-power embedded radio modules (EM boards). Additional microcontroller signals are available on break-out pads arranged in rows adjacent to the microcontroller. The EVALBOT has factory-installed quickstart software resident in on-chip Flash memory. For software debugging and Flash programming, an integrated In-Circuit Debug Interface (ICDI) requires only a single USB cable for debug and serial port functions. For more information, please click here.

Capacitors for High Pulse Current Applications

Vishay Intertechnology, Inc. introduced a new multilayer ceramic chip capacitor (MLCC) featuring an integrated resistor and low electrostrictive ceramic formulation. For high-pulse-current applications, the VJ controlled discharge capacitor (CDC) offers excellent reliability, high voltage ratings from 1,000 VDC to 1,500 VDC, and a capacitance range from 33 nF to 560 nF. The integration of a high-capacitance MLCC with a bleed resistor on its surface allows the VJ CDC to discharge more rapidly, while also reducing board space requirements and assembly costs. Typical applications for the device include detonation devices (munitions, pyrotechnics, blasting) and electronic fuzing. The capacitor released today is manufactured in Noble Metal Electrode (NME) technology with a wet build process. The VJ CDC features a low electrostrictive ceramic formulation for repeated charge and discharge cycles, allowing the device to achieve very high field reliability. For more information, please click here.

RF Multi-Measurement Signal Analyzer

Agilent Technologies Inc. announced that an innovative, new multi-measurement capability is being added to its 89600 VSA software, enabling simultaneous signal analysis of multiple carriers and signal formats for more efficient testing and deeper signal insight in wireless test. With wireless R&D and manufacturing engineers increasingly working with more than one signal at a time, whether for multi-format/multi-carrier devices or for viewing both uplink and downlink signals at the same time, analyzing one signal at a time is no longer efficient. Moreover, it fails to provide intelligence about the subtle interactions among dissimilar signals within a device. The 89600 VSA software's new multi-measurement capability has been engineered to deliver the power of multiple signal analyzers with the convenience of a single, optimized user interface. The software's advanced architecture will enable engineers to configure multiple measurements simultaneously. For more information, please click here.

Page 15: EEWeb Pulse - Volume 47

BeStar®

ACOUSTICS & SENSORS

Teamwork • Technology • Invention • Listen • Hear

PRODUCTS

Speakers

Buzzers

Piezo Elements

Back-up Alarms

Horns

Sirens/Bells

Beacons

Microphones

Sensors

INDUSTRIES

Automotive

Durables

Medical

Industrial

Mobile

Fire / Safety

Security

Consumer

Leisure

QS9000 • TS/ ISO16949 • ISO14001 • ISO13485 • ISO9001

bestartech.com | [email protected] | 520.439.9204

Preferred acoustic component supplier to OEMs worldwide

Page 16: EEWeb Pulse - Volume 47


React Quickly or Feel The Pain

Dave Lacey
Technical Director of Software Tools

A lot of the work done in system programming revolves around deadlines, which makes "real-time programming" a key factor in system design. Generally, real-time programming is coding a system where some of the actions of the system must meet specific deadlines. These deadlines may be internal or external to the system. Often, such systems are split into "soft" or "hard" real-time. Hard real-time systems fail catastrophically if a deadline is missed, whereas soft real-time systems suffer a quality degradation if a deadline is missed but can at least carry on working.

The critical property when dealing with real-time systems is response time. The response time of a system is simply how long it takes to respond to an external stimulus.

There is a popular children's game that typifies response time. It is sometimes known as "red hands" or "slaps," but goes by other names in different parts of the world. It involves two people placing their hands in front of them, palms together as if in prayer. The players point their hands forward so that the tips of their fingers touch.

The game has a "slapper" and an "avoider." The aim for the slapper is to break contact and slap the avoider's hands as hard as possible. The avoider needs to move their hands out of the way before they are slapped – if they do this successfully, then the avoider becomes the slapper and gets their chance at revenge.

This game is a great example of a real-time system and a need for a good response time. The slapper provides the external stimulus and the avoider is the real-time system that must respond. Similarly in electronic hard real-time systems, you sometimes need to be able to respond

Page 17: EEWeb Pulse - Volume 47


TECHNICAL ARTICLE

quickly or really feel that pain.

As mentioned before, response time is an essential factor in programming a system, and it is important to keep track of it and measure it repeatedly. The question then arises: how do you accurately test this property? How do you know which systems and architectures help you avoid the pain?

To gauge this property, you can run specialist response time benchmarks. They work by having a test device provide a number of input signals to the device under test. The device under test is programmed to respond with an output signal change as fast as possible. The best system for doing this is pure hardware, either an ASIC or FPGA, but software systems are also a possibility.

The graph in Figure 1 shows the worst case response times for three systems for different numbers of input signals. Figure 2 shows details of the different architectures tested. As you can see, response times vary from a couple of hundred nanoseconds to about ten microseconds. The XMOS system is consistently faster than the other systems and also scales better as the number of inputs increases.

Why do XMOS devices respond so much faster than other microprocessors? The simple answer is that XMOS chips are designed with this property specifically in mind and the whole architecture is geared towards it.

What can a good response time be used for? Well, anything where taking too long to respond would either cause the system to miss some information or cause the external world to give up on the system's response. This can happen in many areas including motor control, human-computer interaction and networking. XMOS uses the good response time of its chips to implement high-speed digital communication protocols. These often have very tight response time specifications that need to be adhered to or the protocol breaks down and information is missed – a hard real-time system with some serious "pain" associated with it. The deadlines on these protocols are often so tight that traditionally they are not implemented in software at all. A good response time means that you can implement applications in software that were previously hardware only.

A full report on the benchmarking method described here along with the results can be found in the white paper "Benchmark Methods to Analyze Embedded Processors and Systems," which you can get from the XMOS website.

Figure 1: Worst case response times

Figure 2: Systems Tested

References:

1. http://www.eeweb.com/blog/dave_lacey/when-software-stinks-and-what-to-do-about-it

2. http://www.xmos.com

About the Author

Dr. David Lacey works as Technical Director of Software Tools at XMOS. With over ten years of research and development in programming tools and compilation technology, he now works on the development tools for XMOS devices. As well as tools development, he has worked on application development for parallel and embedded microprocessors, including work in areas such as math libraries, networking, financial simulation and audio processing. ■

[Figure 1 chart: worst case response time (ns), 0 to 12,000, versus number of inputs (1 to 4) for the ARM, PIC, and XMOS systems]

Architecture | System Details
ARM | BlueBoard LPC1768-H, LPC1768FBD100, ARM Cortex-M3; Keil uVision 4.21, FreeRTOS 7.0.1
PIC | dsPICDEM Starter Board V2, dsPIC33FJ256; MPLAB v8.80, FreeRTOS 7.0.1
XMOS | XMOS XC-1A, XS1-G4; XMOS Development Tools 11.2.2

Page 18: EEWeb Pulse - Volume 47

Low-Noise 24-bit Delta-Sigma ADC: ISL26132, ISL26134

The ISL26132 and ISL26134 are complete analog front ends for high-resolution measurement applications. These 24-bit Delta-Sigma Analog-to-Digital Converters include a very low-noise amplifier and are available with either two or four differential multiplexer inputs. The devices offer the same pinout as the ADS1232 and ADS1234 devices and are functionally compatible with them. The ISL26132 and ISL26134 offer improved noise performance at 10Sps and 80Sps conversion rates.

The on-chip low-noise programmable-gain amplifier provides gains of 1x/2x/64x/128x. The 128x gain setting provides an input range of ±9.766mVFS when using a 2.5V reference. The high input impedance allows direct connection of sensors such as load cell bridges to ensure the specified measurement accuracy without additional circuitry. The inputs accept signals 100mV outside the supply rails when the device is set for unity gain.

The Delta-Sigma ADC features a third order modulator providing up to 21.6-bit noise-free performance.

The device can be operated from an external clock source, crystal (4.9152MHz typical), or the on-chip oscillator.

The two channel ISL26132 is available in a 24 Ld TSSOP package and the four channel ISL26134 is available in a 28 Ld TSSOP package. Both are specified for operation over the automotive temperature range (-40°C to +105°C).

Features

• Up to 21.6 noise-free bits

• Low Noise Amplifier with Gains of 1x/2x/64x/128x

• RMS noise: 10.2nV @ 10Sps (PGA = 128x)

• Linearity Error: 0.0002% FS

• Simultaneous rejection of 50Hz and 60Hz (@ 10Sps)

• Two (ISL26132) or four (ISL26134) channel differential input multiplexer

• On-chip temperature sensor (ISL26132)

• Automatic clock source detection

• Simple interface to read conversions

• +5V Analog, +2.7V to +5V Digital Supplies

• Pb-Free (RoHS Compliant)

• TSSOP packages: ISL26132, 24 pin; ISL26134, 28 pin

Applications

• Weigh Scales

• Temperature Monitors and Controls

• Industrial Process Control

• Pressure Sensors

[Block diagram: input multiplexer (AIN1-AIN4; AIN3/AIN4 on ISL26134 only), PGA with 1x/2x/64x/128x gains, 24-bit delta-sigma ADC, internal clock with XTALIN/CLOCK and XTALOUT pins for an external oscillator, VREF+/VREF- reference inputs, and SDO/RDY and SCLK serial interface pins. Note for A1/TEMP pin: functions as A1 on ISL26134; functions as TEMP on ISL26132.]

FIGURE 1. BLOCK DIAGRAM

September 9, 2011 | FN6954.1

Get the Datasheet and Order Samples

http://www.intersil.com

Intersil (and design) is a registered trademark of Intersil Americas Inc. Copyright Intersil Americas Inc. 2011. All Rights Reserved. All other trademarks mentioned are the property of their respective owners.

Page 19: EEWeb Pulse - Volume 47



For semiconductors, medical and industrial applications often present difficult operating environments with extreme temperature ranges, high levels of ESD, and high sensitivity to electronic noise. These applications demand robust, high-voltage precision circuits for data acquisition and signal processing. The combination of high accuracy, high voltage, and low-noise requirements, plus the need to deploy in harsh operating environments, makes it extremely difficult to create precision analog front-end components and circuits that can handle all of the challenges.

These challenges range across a wide spectrum of high-performance applications such as:

• Medical imaging & instrumentation

• Process control (I/O modules)

• Precision instrumentation & test systems

• Spectral analysis equipment

• Thermocouples

• Bio-analyzers

• ATE & Data acquisition

The design of precision analog front-end circuitry can be especially challenging for applications that use high-impedance sensors. These devices are inherently more prone to capacitive and inductive noise pick-up, and typically have higher noise sensitivity, which can make it difficult to achieve repeatability of readings and overall stability of the design. For example, many industrial systems using optical imaging, photodiodes, piezoelectric sensors, piezoresistive pressure transducers, pH sensors or gas meters must be able to provide fast and accurate readings under often very harsh operating conditions for real-time process monitoring and control applications.

As discussed in this article, solving these challenges requires a fundamental bottom-up approach that starts with the use of advanced bipolar processes to design and manufacture products, such as precision op-amps, instrumentation amps and bandgap voltage references. By leveraging fabrication techniques such as deep trench isolation and lateral device spacing to optimize noise performance and minimize parasitic leakage at the transistor level, this approach enables the designer

Page 20: EEWeb Pulse - Volume 47



to create very robust products that have very efficient power-to-bandwidth characteristics and can handle high-impedance inputs with speed, accuracy and low power usage.

Process Methodology Overview

Intersil has developed a new 40V complementary bipolar plus JFET (CBiFET) process known as PR40 expressly for the development of robust products optimized for precision analog circuit applications. The process is fabricated on bonded silicon-on-insulator (BSOI) substrates and uses deep trench isolation (DTI) to build devices with complete dielectric isolation. The process features a core-bipolar foundation and optional device modules that are added as needed to facilitate product-specific, low-cost manufacturing. The PR40 process is currently being used to fabricate a range of low-noise precision op-amps, instrumentation amps, current sense amps and bandgap voltage references for use in industrial, medical and other high-performance applications.

BSOI substrates offer a number of important advantages. These include robust devices with low parasitics that exhibit very predictable device performance characteristics and are free of latch-up in harsh environment applications. BSOI is a convenient isolation method for making complementary bipolar devices with significant area reduction compared to the junction-isolated devices that are typically used in high voltage analog circuits.

SOI uses layers of silicon-insulator-silicon in place of conventional silicon substrates. This allows for lower parasitic capacitance due to isolation from the bulk silicon. Deep trench dielectric isolation provides separation of devices to further minimize capacitance and reduce leakage current associated with diodes, transistors, etc. The reduced level of parasitics allows the designer to optimize power-to-bandwidth efficiency and reduce overall power consumption.

Avoiding Latch-up

Latch-up is one of the most problematic conditions that can result from an inadequate front-end analog design. In essence, latch-up is a particular type of short circuit that results from the inadvertent creation of a low impedance path between the power supply rails of an IC, which triggers a parasitic structure that disrupts the proper functioning of the part. A power cycle is typically required to correct the situation and latch-up can even lead to destruction of the part due to an over-current condition.

The parasitic structure that causes latch-up is equivalent to an unintentional thyristor that is acting across a PNP and an NPN transistor stacked next to each other. Latch-up occurs when one of the transistors is conducting and the other starts to conduct at the same time. Then, both transistors keep each other in saturation for as long as

Figure 1: Cross-section drawing of PR40 npn and pnp devices.


Page 21: EEWeb Pulse - Volume 47



the structure is forward biased and current is flowing through it.

Latch-up can happen at any location where the underlying parasitic condition exists. For example, a spike of positive or negative voltage on an output or input pin of a digital chip that exceeds the rail voltage by more than a diode drop is a common cause of latch-up. Another cause is the supply voltage exceeding the absolute maximum rating, for instance from a transient spike in the power supply.

By using advanced fabrication processes such as BSOI, deep trench isolation and optimal lateral device separation, the PR40 process enables latch-up free circuit operation.

Maximizing Design Flexibility

To maximize design flexibility, the PR40 process features a full suite of precision analog devices including 40V low-noise NPN and vertical PNP bipolar devices, 40V P- and N-channel JFETs, 10V super-beta NPNs, and MOSFETs. In addition, the process offers a high BVebo lateral PNP device with high beta, a buried Zener, and a well controlled Schottky diode. The process supports multiple levels of metal, as well as Active-Area-Bonding.

The process is optimized to allow for consistent matching of devices, with both the NPN and PNP structures constructed in a similar fashion on a common subsurface base. Thin film resistors provide precision matching in relatively little area. The low temperature coefficient of the thin film makes it possible to achieve tighter specifications over wider operating temperatures. The process also offers an optimized high-density multilayer capacitor that provides additional design flexibility and minimizes die area. In terms of ESD protection, it offers superior ESD structures that easily achieve the minimum 2kV HBM targets and are typically in the 4-6kV range.

These unique capabilities of the PR40 process provide the designer great flexibility for designing high-performance precision analog circuits. The exceptional device matching enables the design of very high precision amplifiers with minimal offset trim requirements. The absence of parasitic junction leakage over temperature, combined with predictable device behavior, enables high impedance input amplifiers with very low and stable input bias current behavior across their specified common mode voltage and temperature ranges. Deep trench isolation results in very small device footprints, enabling very efficient noise and power tradeoffs that result in low power designs with exceptional noise performance. Small devices combined with active area bonding and high density capacitors result in very dense circuits that enable 40V products in small footprint packages such as SOT23, uTDFN and MSOP.

Figure 2: Performance characteristics for Intersil PR40 devices

Robust ESD Performance

Products designed on PR40 are also inherently radiation-tolerant and capable of delivering very robust ESD performance. In most analog front-end devices using conventional fabrication processes, it is very difficult to achieve good ESD characteristics while delivering high performance. Because products created with the PR40 process are fully dielectrically isolated using a mechanical process, any sensitivity to electromagnetic pulses has been completely eliminated. Also the use of subsurface transistors with deep junctions means less sensitivity to ionization radiation.

Figure 3 shows a summary of ESD performance characteristics for Intersil devices that have been introduced using the PR40 process. In all cases, the ESD performance comes out at the top of the scale on industry standards for each ESD category.

Part # | Primary Performance Feature
ISL21090 | 40V, Low Noise, Low Power Precision Voltage Reference; best-in-class noise and power performance
ISL28110/ISL28210 | 40V, Low Noise, High Slew Rate JFET Op Amp; best-in-class Ibias over temperature
ISL28107/ISL28207/ISL28407 | 40V, Precision, Low Power, Low TcVos Op Amps; flat Ibias over temperature range
ISL28117/ISL28217/ISL28417 | 40V Low Power Precision Op Amps; outstanding combination of precision, low power and low noise
ISL28127/ISL28227 | 40V Precision Very Low Noise 10MHz Op Amp; outstanding combination of low noise, speed and power
ISL28177 | 40V Low Cost Precision Op Amp; significantly improved OP01 performance in SOT23
ISL28325/ISL28425 | Low Cost 40V Op Amps; excellent combination of power and noise performance
ISL28108/ISL28208/ISL28408 | 40V Ground Sensing, Rail-to-Rail Output Op Amp; excellent combination of low power and low noise
ISL28118/ISL28218 | 40V Ground Sensing, Rail-to-Rail Output Op Amp; excellent combination of low noise and high bandwidth
ISL28005/ISL28006 | 28V Precision High Side and Low Side Current Sense Amplifier in SOT23

[The original table also flags each part against Low Noise, Low Power, Low Ibias, Single Supply, Dual Supply, and Ultra Low columns; the per-part checkmarks did not survive extraction.]

Page 22: EEWeb Pulse - Volume 47



The Foundation for Robust, High-Voltage, Low-Noise, Low-Power Devices

The use of advanced feature-rich Complementary Bipolar and BSOI fabrication processes, along with deep trench dielectric isolation, provides a very predictable and flexible foundation for designing robust, high-voltage, low-noise, low-power devices for use in precision analog front-end designs. Elimination of internal parasitic structures within the devices and inherent resistance to external ESD or radiation make these devices ideal for use throughout a wide range of industrial, medical and other applications that require high accuracy within difficult operating environments.

In addition, because it is a modular fabrication process with a wide range of features that can be incorporated for specific designs, the PR40 approach also improves design flexibility, product cost, and package size. Critical features needed for specific designs can be predictably modeled and efficiently integrated so as to minimize the size of the design while optimizing its performance for the target application.

Precision analog design will always present difficult challenges, especially for applications that combine high-impedance sensors, high voltages, the need for high accuracy, and exposure to harsh operating environments. But it is possible to reduce the difficulty by developing robust products using a device fabrication process that addresses all of these issues from the bottom up.

Figure 3: ESD Performance for Intersil PR40 devices

About the Authors

Steve Parks is Senior Principal Marketing Engineer, Signal Path Products, Intersil. He formerly served as Marketing Director, Precision Products Division, Intersil and has spent 30 years in analog product development with Intersil and Analog Devices, Inc. He is a graduate of California State Univ. at Chico.

Rick Jerome joined Intersil five years ago and is a lead designer. He worked in technology development at Motorola, Fairchild Semi, National Semiconductor, United Technologies, and Linear Technology before joining Intersil. He received his Bachelor of Science degree in 1978 and an MBA in Technology Management in 1998, and has authored or co-authored over 20 papers. Rick holds 12 patents in the field of bipolar, BCDMOS, and RF-SiGe semiconductor process and device development.

Josh Baylor joined Intersil in 2005 as an intern while completing his bachelor's degree in electrical engineering at Stanford University. Originally a digital designer by schooling, Josh plunged into the world of analog circuits at Intersil. He currently designs amplifiers and other circuits for precision, high-voltage applications.

Dr. Michael I-Shan Sun is a process integration manager in the Technology Development group at Intersil. Prior to joining Intersil, he worked at International Rectifier on GaN power devices and at Peregrine Semiconductor, where he developed advanced Silicon-on-Sapphire CMOS technology for RF applications. He received his Bachelor's, Master's and Ph.D. degrees in electrical engineering from the University of Toronto. ■

Part # | Description | CDM | MM | HBM
ISL21090 | Low Noise Precision Voltage Reference | 2000V | 200V | 3000V
ISL28110/210 | Low Noise, Ultra Low Ibias Precision 10MHz JFET OPA | 2000V | 400V | 4000V
ISL28107/207/407 | Low Power, Low Ibias Precision OPA | 1500V | 500V | 4000V
ISL28117/217/417 | Low Noise Precision OPA | 1500V | 500V | 4500V
ISL28127 | Low Noise 10MHz Precision OPA | 1500V | 500V | 6000V
ISL28227 | Low Noise 10MHz Precision OPA | 1500V | 500V | 4000V
ISL28177 | Low Cost Precision OPA in SOT23 | 2200V | 300V | 5000V
ISL28325/345 | Low Cost Dual/Quad Precision OPA | 1500V | 500V | 4500V
ISL28108/208/408 | Low Power Precision Single Supply OPA | 2000V | 400V | 6000V
ISL28118/218 | Precision Low Noise Single Supply OPA | 2000V | 300V | 3000V
ISL28005/6 | Low Power Precision High/Low Side Current Sense Amp | 1500V | 200V | 4000V

(CDM = Charged Device Model, MM = Machine Model, HBM = Human Body Model)

Page 23: EEWeb Pulse - Volume 47

Transform Your iPhone, iPad or iPod into an Oscilloscope

with the iMSO-104

Experience the iMSO-104 as Joe Wolin, co-founder of EEWeb,

gives you an in-depth look into the future of oscilloscopes.

2012 UBM Electronics Winner

Begin Your Experience Now

Page 24: EEWeb Pulse - Volume 47


RETURN TO ZERO

Page 25: EEWeb Pulse - Volume 47


RETURN TO ZERO

Contact Us For Advertising Opportunities

[email protected]

www.eeweb.com/advertising

EEWeb | Electrical Engineering Community

Join Today

www.eeweb.com/register

EEWeb | Electrical Engineering Community