Engineering Essentials
3D IC Technology Delivers The Total Package


Roger Allan | Contributing Editor
rsallan@optonline.net

Burgeoning market demand for cost-effective, higher-density, smaller packages is banking on a flurry of recent advances in materials, processing procedures, and interconnects, as well as a greater variety of packaging approaches.
A never-ending parade of refinements to IC packaging gives engineers more choices than ever to meet their design requirements. With more radical approaches lurking on the horizon, that mix will become even richer. Today, though, squeezing more functions into smaller spaces at a lower cost dominates, leading designers to stack more chips atop each other. Thus, we're seeing the rapid ascent of 3D IC packaging.

The impetus behind 3D IC technology's rise comes from the consumer market's use of more sophisticated interconnects to connect silicon chips and wafers. These wafers contain chips with continually shrinking line dimensions. To scale down semiconductor ICs, finer lines are drawn on 300-mm wafers. Although most mass-produced ICs today are based on 55-nm design nodes or less, these design rules will shrink to 38 nm or smaller, and then down to 27 nm by 2013, according to forecasts by market researcher VLSI Research Inc. (Fig. 1).
[Fig. 1 chart: 300-mm wafer fabrication forecast on advanced technology nodes, in thousands of wafers per week, 2009 through 2013, broken out by design rule (below 27 nm, 27 to 38 nm, and 38 to 55 nm).]

These downscaled IC designs accelerate the need for high-density, cost-effective manufacturing and packaging techniques, which will invariably challenge IC manufacturers to minimize the higher cost of capital-equipment investments.

Many 3D applications still use traditional ball-grid-array (BGA), quad flat no-lead (QFN), land-grid-array (LGA), and small-outline transistor (SOT) packages. However, more are migrating to two main approaches: fan-out wafer-level chip-scale packaging (WLCSP) and embedded-die packaging. Presently, fan-out WLCSP is finding homes in high-pin-count (more than 120 pins) applications that use BGAs. Embedded-die technology favors lower-pin-count applications that embed chips and discrete components into printed-circuit-board (PCB) laminates and use microelectromechanical-system (MEMS) ICs (Fig. 2).

Researchers at Texas Instruments believe that WLCSP is heading toward a standardized package configuration. It could include a combination of WLCSP ICs, MEMS ICs, and passive components interconnected using through-silicon vias (TSVs). The TSV's bottom layer can be an active WLCSP device, an interposer only, or an integrated passive interposer. The top layer may be an IC, a MEMS device, or a discrete component (Fig. 3).

No matter the package type, though, as pin counts and signal frequencies increase, the need to pre-plan the package option becomes more critical. For example, a wire-bonded package with many connections may require more power-supply buffers on the chip due to high levels of inductance. The type of bump, pad, and solder-ball placement also can significantly impact signal integrity.
TSVs: Hype Or Reality?

1. Future packaging trends include greater use of 300-mm wafers as IC design nodes shrink further. This will drive the adoption of 3D ICs for greater chip densities. (courtesy of VLSI Research)

TSV technology is not a packaging technology solution, per se. It's simply an important tool that allows semiconductor die and wafers to interconnect to each other at higher levels of density. In that respect, it's an important step within the larger IC packaging world. But TSVs aren't the only answer to 3D packaging advances. They represent just one part of an unfolding array of materials, processing, and packaging developments.

In fact, 3D chips that employ TSV interconnects aren't yet ready for large-volume production. Despite making some progress, they're limited mainly to CMOS image sensors, some MEMS devices, and, to some degree, power amplifiers. More than 90% of IC chips are packaged using tried-and-true wire-bonding methods.

Speaking at this year's ConFab Conference, Mario A. Bolanos, manager of strategic packaging research and external collaboration at Texas Instruments, outlined a number of challenges facing the use of TSVs in 3D chips.



These include a lack of electronic design automation (EDA) tools, the need for cost-effective manufacturing equipment and processes, insufficient yield and reliability data involving thermal issues, electromigration and thermo-mechanical reliability, and compound yield losses and known-good die (KGD) data.

Unlike conventional ICs, which are built on silicon wafers some 750 µm thick, 3D ICs require very thin wafers, typically about 100 µm thick or less. Given the fragility of such very thin wafers, the need arises for highly specialized temporary wafer bonding and de-bonding equipment to ensure the integrity of the wafer structure, particularly at high processing temperatures and stresses during the etching and metallization processes. After bonding, the wafer undergoes a TSV back-side process, followed by a de-bonding step. These typical steps result in higher yield levels for more cost-effective mass production.

Currently, there's a lack of TSV standards on bonding and process temperatures and related reliability levels. The same is true regarding standardization of the TSV assignment of wafer locations. If enough IC manufacturers work on these issues, more progress can be made on expanding the roles of TSVs for interconnects. High process temperatures greater than 200°C to 300°C aren't feasible for the economic implementation of TSVs.

Ziptronix Inc., which provides intellectual property (IP) for 3D integration technology, licensed its direct-bond-interconnect (DBI) technology to Raytheon Vision Systems. The company says that its low-temperature oxide-bonding DBI technology is a cost-effective solution for 3D ICs (Fig. 4).

Nevertheless, many semiconductor IC experts view the industry at a crossroads of having to choose between 2D (planar) and 3D with-TSV designs. They see a threefold to fourfold increase in costs when going from 45-nm design nodes to 32- and 28-nm designs, considering the fabrication, design, process, and mask costs. Much-needed improvements in lithography and chemical vapor polishing, as well as dealing with stress-effect issues, make the 3D packaging challenge even more difficult. This is where TSV technology steps in.

France's Alchimer S.A., a provider of nanometric deposition films used in semiconductor IC interconnects, has demonstrated that TSVs with aspect ratios (height to width) of 20:1 can save IC chipmakers more than $700 per 300-mm wafer compared with aspect ratios of 5:1 (see the table). This was accomplished by reducing the die area needed for interconnection.

Alchimer modeled TSV costs and space consumption using an existing 3D stack for mobile applications. The stack included a low-power microprocessor, a NAND memory chip, and a DRAM chip made on a 65-nm process node. The chips are interconnected by about 1000 TSVs, and the processor die was calculated for aspect ratios of 5:1, 10:1, and 20:1.

IBM, along with Switzerland's École Polytechnique Fédérale de Lausanne (EPFL) and the Swiss Federal Institute of Technology (ETH), is developing micro-cooling techniques for 3D ICs, using TSVs, by means of microfluidic MEMS technology (Fig. 5). The collaborative effort, known as CMOSAIC, is considering a 3D stack architecture of multiple cores with interconnect densities ranging from 100 to 10,000 connections/mm2.

The IBM/Swiss team plans to design microchannels with single-phase liquid and two-phase cooling systems. Nanosurfaces will pipe coolants, including water and environmentally friendly refrigerants, within a few millimeters of the chip to absorb the heat and draw it away. Once the liquid leaves the circuit in the form of steam, a condenser returns it to a liquid state, where it's pumped back to the chip for cooling.

2. Future 3D IC packaging approaches will embody techniques such as wafer-level packaging (WLP) using through-silicon vias (TSVs) together with embedding chips into various substrates (fan-out WLP, 3D WLP and 3D ICs with TSVs, flip-chip MEMS, integrated passive devices, and die embedded in the PCB). (courtesy of Yole Développement)

3. ICs, MEMS devices, and other components will be joined by passive components using wafer-level chip-scale packaging (WLCSP) and through-silicon vias. (courtesy of Texas Instruments)

Wire Bonding And Flip Chip

Wire-bonding and flip-chip interconnect technologies certainly aren't sitting idle. Progress marches on for a number of flip-chip wafer-bumping technologies, including the use of eutectic flip-chip bumping, copper pillars, and lead-free soldering. Recent packaging developments include the use of greater package-on-package (PoP) methods, system-in-package (SiP), no-lead (QFN) packages, and variations thereof.



4. This set of memory die uses Ziptronix's direct-bond interconnect (DBI) low-temperature oxide-bonding process for 3D ICs. The die are bonded face down to the face-up logic wafer and thinned to about 10 µm. Electrical contact between memory and logic elements is made by etching memory and logic bond pads, followed by an interconnect metallization over the memory die edge. (courtesy of Ziptronix)

At the packaging level, 3D configurations have been well known for many years. Using BGA packages in stacked-die configurations with wire bonds is nearly a decades-old practice. For example, in 2003, STMicroelectronics demonstrated a stack of 10 dice using BGAs, a record at the time.

Certain 3D approaches like the PoP concept warrant special attention when it comes to high-density and high-functionality handheld products. Designers must carefully consider two issues: thermal cycling and drop-test reliability performance. Both are functions of the packaging material's quality and reliability. This becomes more critical as we move from interconnect pitches of 0.5 mm to 0.4 mm for the bottom of the PoP structure and 0.4 mm to 0.5 mm for the top.

Samsung Electronics Ltd. has unveiled a 0.6-mm-high, multi-die, eight-chip package for use in high-density memory applications. Designed initially for 32-Gbyte memory sizes, it features half the thickness of conventional eight-chip memory stacks and delivers a 40% thinner and lighter memory solution for high-density multimedia handsets and other mobile devices, according to the company.

Key to the package's creation is the use of 30-nm NAND flash-memory chips, each measuring just 15 µm thick. Samsung devised an ultra-thinning technology to overcome the conventional limits on an IC chip's resistance to external pressure at thicknesses under 30 µm. In addition, the new packaging technology can be adapted to other multichip packages (MCPs) configured as SiPs and PoPs.

"This packaging development provides the best solution for combining higher density with multifunctionality in current mobile product designs, giving designers much greater freedom in creating attractive designs that satisfy the diverse styles and thin-focused tastes of today's consumers," says Tae Gyeong Chung, vice president for Samsung's package development team.

Market developments are also shaking up the QFN package arena. Germany's Fraunhofer IZM has developed a chip-in-polymer process that imparts shock and vibration protection to the chip and lends itself to shorter interconnect distances to enhance the chip's performance. The process starts by thinning the chip, then adhesively bonding it to a thin substrate. This is all overlaid with resin-coated copper (about 80 µm for the resin layer and 5 µm for the copper surface). The resin is cured, and interconnect vias are laser-drilled down to the contact pads and plated with metal. Then the redistribution layer on top is etched from the copper.
SILICON CONSUMPTION AS A FUNCTION OF TSV ASPECT RATIO

TSV aspect ratio                     | 5:1      | 10:1     | 20:1
TSV size (diameter x depth, µm)      | 40 x 200 | 20 x 200 | 10 x 200
Keep-out area (2.5 x diameter, µm)   | 100      | 50       | 25
Total TSV footprint (mm2)            | 7.9      | 2        | 0.5
Footprint relative to IC area        | 12.3%    | 3.10%    | 0.80%

Average TSV density = 16 TSVs/mm2; die size = 8 x 8 mm
(Courtesy of Alchimer S.A.)
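As a quick sanity check, the footprint figures in the table can be approximated with a few lines of arithmetic. The sketch below assumes roughly 1,000 TSVs on the 8- x 8-mm die and a circular keep-out zone whose diameter is 2.5 times the via diameter; those assumptions are our reading of the table's footnotes, not Alchimer's published model.

```python
# Quick check of the table above. Assumptions (ours, not Alchimer's):
# ~1,000 TSVs on the 8- x 8-mm die and a circular keep-out zone whose
# diameter is 2.5x the via diameter, per the table's keep-out rule.
import math

DIE_AREA_MM2 = 8 * 8        # 64 mm2
NUM_TSVS = 1000             # "about 1000 TSVs" per the article

for aspect, dia_um in ((5, 40), (10, 20), (20, 10)):
    keepout_dia_mm = 2.5 * dia_um / 1000          # keep-out diameter in mm
    per_via_mm2 = math.pi * (keepout_dia_mm / 2) ** 2
    footprint_mm2 = NUM_TSVS * per_via_mm2
    print(f"{aspect}:1  footprint = {footprint_mm2:4.1f} mm2 "
          f"({100 * footprint_mm2 / DIE_AREA_MM2:4.1f}% of die)")
```

The results land within rounding of the published 7.9, 2, and 0.5 mm2 figures.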

5. Future 3D IC stacks may contain processor, memory, logic, and analog and RF circuitry, all of which are interconnected with through-silicon vias (TSVs). Liquids flowing in MEMS microchannels will perform the cooling. This is part of the CMOSAIC project, which involves IBM and two Swiss partners. (courtesy of the École Polytechnique Fédérale de Lausanne)



This process has been optimized in commercial production of standard packages like QFNs, without the need for specialized equipment or other delays. The use of polymer-embedded QFNs, essentially quad packs with no leads, the leads being replaced by pads on the chip's bottom surface, is part of the HERMES project.

The goal of HERMES, which includes Fraunhofer and 10 other European industrial and academic organizations, is to advance the embedding of chips and components, both active and passive, to allow for more functional integration and higher density. The technology is based on the use of PCB manufacturing and assembly practice, as well as on standard available silicon dies, highlighting fine-pitch interconnection, high-power capability, and high-frequency compatibility.

The QFN package was selected because it's more common in small, thin appliances housing microcontroller ICs. Fraunhofer researchers believe that QFNs will take over many application niches held by other types of packages. The embedded QFN contains a 5- by 5-mm chip that's thinned to about 50 µm. The package itself measures 100 by 100 mm. The 84 I/Os on the chip are at a 100-µm pitch (400 µm on the package).

Malaysia's Unisem Berhad has unveiled a high-density leadframe technology, the leadframe grid array (LFGA), that offers BGA-comparable densities. The company says that it offers a cost-effective replacement for a two-layer FPGA package. Compared to a QFN package, it has shorter wire-bond lengths. In addition, it can house a 10- by 10-mm, 72-lead QFN package in a body size of 5.5 mm2.

"This package offers a better footprint with higher I/O density and better thermal and electrical performance. It is also thinner and, most importantly, offers a much better yield at front-end assembly," says T.L. Li, the package's developer.

Dai Nippon Printing has successfully embedded high-performance IC chips that are wire-bonded to a printed wiring board (PWB) inside a multi-layer PWB, citing unique buried bumped interconnections for its success. The PWBs interconnect between arbitrary layers (via hole connections) with bumps made of high-electrical-conductivity paste, which are formed by screen printing.

Half-etching the base metal of the leadframe and making its inner leads longer closes the distance between the chip and the leadframe it's attached to, as well as drastically reduces the amount of gold wire used for connections, resulting in lower manufacturing costs (Fig. 6). Mass production of ICs with more than 700 pins inside PWBs is scheduled for this year. Both active and passive components can be handled.

6. Dai Nippon Printing embedded high-performance IC chips that can be wire-bonded to a printed wiring board (PWB) inside a multi-layer PWB using unique buried bumped interconnections. Half-etching the base metal of the leadframe and lengthening its inner leads shrinks the distance between the chip and the leadframe it's attached to. It also reduces the amount of gold wire needed for connections. (courtesy of Dai Nippon Printing)

Work is underway to develop epoxy flux materials that improve the thermal-cycling and drop-test reliability shortcomings of conventional tin-silver-copper (SnAgCu) solder. Such materials will help to advance 3D ICs using PoPs. Although PoP manufacturing employs commonly used tin-lead (SnPb) solder alloys, which offer advantages over SnAgCu materials, there's a need for a lead-free compound that can handle large, high-density 3D PoP structures for consumer electronics products.

The Henkel Corp. Multicore LF620 lead-free solder paste suits a broad range of packaging applications. The no-clean, halide-free, and lead-free material is formulated with a new activator chemistry, so it exhibits extremely low voiding in CSP via-in-pad joints, good coalescence, and excellent solderability over a range of surface finishes.

Don't Be Intimidated By Low-Power RF System Design


Louis E. Frenzel | Communications Editor
lou.frenzel@penton.com

Adding wireless connectivity to any product has never been easy. However, even when a wireless solution doesn't seem to make sense, the potential exists. The cost is reasonable, and you add unexpected value and flexibility to the product. But what if you aren't a wireless engineer? Don't worry, because in many cases, the wireless chip and module companies have made such connectivity a snap.
Selecting A Technology

The table lists a marvelous collection of wireless options. These technologies are all proven and readily available in chip or module form. No license is required since most operate in the unlicensed spectrum.



They also operate under the rules and regulations in Part 15 of U.S. CFR 47. When considering wireless for your design, you should have a copy of Part 15 handy. You can find it at www.fcc.gov.

The table only provides the main options and enough information to get you started. For a more in-depth look, check out the organizations and trade associations associated with each standard. Some of the wireless standards have relatively complex protocols to fit special applications. For example, Wi-Fi 802.11 is designed for local-area-network (LAN) connections and is relatively easy to interface to Ethernet. It also is the fastest, except for Ultra-Wideband (UWB) and the 60-GHz standard. It's widely available in chip or module form, but it's complex and may consume too much power.

ZigBee is great for industrial and commercial monitoring and control, and its mesh-networking option makes it a good choice if a large network of nodes must be monitored or controlled. It's a complex protocol that can handle some sophisticated operations. Its underlying base is the IEEE 802.15.4 standard, which doesn't include the mesh or other features, making it a good option for less complex projects.

If you're looking for something simple, try industrial, scientific, and medical (ISM) band products using 433- or 915-MHz chips or modules. Many products require you to invent your own protocol. Some vendors supply the software tools for that task. It's a good way to go, because you can optimize the design to your needs rather than adapt to some existing, overly complex protocol.

For very long-haul applications that require reliability, consider a machine-to-machine (M2M) option. These cellphone modules use available cellular network data services like GPRS or EDGE in GSM networks (AT&T and T-Mobile) or 1xRTT and EV-DO in cdma2000 networks (Sprint and Verizon). You will need to do the interfacing yourself and sign up with a carrier or an intermediary company that lines up and administers cellular connections. Though more expensive, this option offers greater reliability and longer range.

Cypress Semiconductor's proprietary WirelessUSB option operates in the 2.4-GHz band and targets human interface devices (HIDs) like keyboards and mice. It offers a data rate of 62.5 kbits/s and has a range of 10 to 50 m. The Z-Wave proprietary standard from Sigma Designs' Zensys, used in home automation, operates on 908.42 MHz in the U.S. and 868.42 MHz in Europe. It offers a range of up to about 30 m with data-rate options of 9600 bits/s or 40 kbits/s. Mesh capability is in the mix, too (see "Wireless In The Works" at www.electronicdesign.com, ED Online 21847).

1. The FreeWave MM2-HS-T 900-MHz radio targets embedded military and industrial applications.

Build Vs. Buy

Deciding whether to build or buy is a crucial step when it comes to adding wireless. It's generally a matter of experience. With less experience, it's probably better to buy existing modules or boards. With solid high-frequency or RF experience, consider doing the design on your own. Almost always, you'll start with an available chip. The tricky part is the layout. When self-designing, grab any reference designs available from your chip supplier to save time, money, and aggravation.

Primary design issues will include antenna selection, impedance matching with the antenna, the transmit/receive switch, the battery or other power, and packaging. Most modules will take care of these elements.

Factoring in the testing time and cost is another essential design step. Any product you design will have to be tested to conform to the FCC Part 15 standards. Arm yourself with the right equipment, especially a spectrum analyzer, RF power meters, a field strength meter, and electromagnetic interference/electromagnetic compatibility (EMI/EMC) test gear with antennas and probes. An outside firm also could perform the testing, but that's expensive and takes time. Factor in some rework time if you fail the tests. Most modules are pretested, so it pretty much comes down to the packaging and interfacing with the rest of the product.

2. Analog Devices ISM band radio chips suit home automation and control as well as smart-metering applications.

Considerations And Recommendations

If longer range and reliability are top priorities, stay with the lower frequencies: 915 MHz is far better than 2.4 GHz, and 433 MHz is even better. This is strictly physics. The only downside is antenna size, which will be considerably greater at lower frequencies. Still, you won't be sorry when you need to transmit a few kilometers or miles. Though not impossible at 2.4 GHz, it will require higher power and the highest possible directional-gain antennas.

As for data rates, think slow. Lower data rates will typically result in a more reliable link. You can gain distance by dropping the data rate. Lower data rates also survive better in high-noise environments.

Your analysis of the radio-wave path is essential for a solid and reliable link. So, the first step should be to estimate your path loss. Some basic rules of thumb will give you a good approximate figure to use. Once you know your path loss, you can play around with things like transmitter power output, antenna gains, receiver sensitivity, and cable losses to zero in on hardware needs. To estimate the path loss between the transmitter and receiver, try:



dB loss = 37 dB + 20log(f) + 20log(d)

The frequency of operation (f) is in megahertz, and the range or distance (d) is in miles. Another formula is:

dB loss = 20log(4π/λ) + 20log(d)

Wavelength (λ) and range or distance (d) are both in meters. Both formulas deliver approximately the same figures. Remember, this is free-space loss without obstructions. The loss increases about 6 dB for each doubling of the distance. If obstructions are involved, some corrective figures must be added in. Average loss figures are 3 dB for walls, 2 dB for windows, and 10 dB for exterior structure walls.

When finalizing a path loss, add the fade margin. This fudge factor helps ensure good link reliability under severe weather, solar events, or unusual noise and interference. As a result, transmitter power and receiver sensitivity will be sufficient to overcome these temporary conditions. A fade margin figure is just a guess. Some conservative designers say it should be 15 dB, while others say 10 dB is acceptable. If unusual weather or other conditions aren't expected, you may get away with less, perhaps 5 dB. Add that to your path loss and adjust everything else accordingly. Another handy formula to help estimate your needs is the Friis formula:

PR = PTGTGRλ²/(16π²d²)

PR is the received power in watts, PT is the transmit power in watts, GR is the receive antenna gain, GT is the transmit antenna gain, λ is the wavelength in meters, and d is the distance in meters. The transmit and receive gains are power ratios. This is 1.64 for a dipole or ground-plane antenna. Any directional antenna like a Yagi or patch will have directional gain. It is usually given in dB, but it must be converted to a power ratio. The formula also indicates why a lower frequency (longer wavelength) provides greater range (λ = 300/fMHz).

Transmitter output power, another key figure, is usually given in dBm. Some common figures are 0 dBm (1 mW), 10 dBm (10 mW), 20 dBm (100 mW), and 30 dBm (1 W). Receiver sensitivity also is usually quoted in dBm. This is the smallest signal that the receiver can resolve and demodulate. Typical figures are in the -70- to -120-dBm range.

One last thing to factor in is cable loss. In most installations, you will use coax cable to connect the transmitter and receiver to the antennas. The cable loss at UHF and microwave frequencies is surprisingly high. It can be several dB per foot at 2.4 GHz or more. So, be sure to minimize the cable length. Also, seek out special lower-loss cable. It costs a bit more, but coax cable with a loss of less than 1 dB per foot is available if you shop around. This is especially critical when using antennas on towers where the cable run could be long.
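Pulling these rules of thumb together, here is a minimal link-budget sketch in Python. The specific numbers (915 MHz, a 2-mile hop, a 10-mW transmitter, dipole antennas, 2 dB of cable loss, a 10-dB fade margin, and a -110-dBm receiver) are illustrative assumptions, not recommendations.

```python
# A minimal link-budget sketch combining the first path-loss approximation
# with the "final calculation" described later in the article. All numeric
# inputs below are example assumptions.
import math

def path_loss_db(f_mhz, d_miles):
    """Approximate free-space loss: 37 dB + 20log(f, MHz) + 20log(d, miles)."""
    return 37 + 20 * math.log10(f_mhz) + 20 * math.log10(d_miles)

f_mhz = 915               # carrier frequency
d_miles = 2               # link distance
tx_dbm = 10               # transmit power (10 mW)
tx_gain_db = 2.15         # dipole, about a 1.64 power ratio
rx_gain_db = 2.15
cable_loss_db = 2
fade_margin_db = 10
rx_sensitivity_dbm = -110

loss = path_loss_db(f_mhz, d_miles)
received = (tx_dbm + tx_gain_db + rx_gain_db
            - loss - cable_loss_db - fade_margin_db)
print(f"Path loss: {loss:.1f} dB")
print(f"Signal at receiver (after margins): {received:.1f} dBm")
print("Link closes" if received > rx_sensitivity_dbm else "Link fails")
```

For the numbers above, the path loss is roughly 102 dB and about -100 dBm reaches the receiver, so the link closes with room to spare against a -110-dBm sensitivity.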

LOW-POWER, SHORT-RANGE WIRELESS TECHNOLOGIES FOR DATA TRANSMISSION

Technology | Frequency | Maximum range | Maximum rate | Modulation | Main applications
Bluetooth | 2.4 GHz | 10 m | 3 Mbits/s | FHSS/GFSK | Cell headsets, audio, sensor data
IR | 875 nm | <1 m | 16 Mbits/s | Baseband | Short data transfer
ISM | 315, 418, 433, 902 to 928 MHz; 2.4 GHz | 10 km | 1 to 115 kbits/s, 250 kbits/s | OOK/ASK, FSK, DSSS w/BPSK/QPSK | Industrial monitoring and control, telemetry
M2M | Cellular bands | 10 km | <300 kbits/s | GSM/EDGE, CDMA 1xRTT | Remote facilities monitoring
NFC | 13.56 MHz | <1 ft | 106 to 848 kbits/s | ASK | Credit card, cell-phone transactions
Proprietary | 900 to 928 MHz, 2.4 GHz | Up to several miles | 1 kbit/s to 2 Mbits/s | DSSS, BPSK/QPSK | Industrial and factory automation
RFID | 125 kHz, 13.56 MHz, 915 MHz | <2 m | <100 kbits/s | ASK | Tracking, shipping
60 GHz | 60 GHz | 10 m | 3 Gbits/s | OFDM | Video, backhaul
UWB | 3.1 to 6 GHz | 10 m | 480 Mbits/s | OFDM | Wireless USB, video
Wi-Fi, 802.11a/b/g/n | 2.4 and 5 GHz | 100 m | 11, 54, 100+ Mbits/s | DSSS/CCK, OFDM | WLAN
ZigBee/802.15.4 | 2.4 GHz | 100 m | 250 kbits/s | OQPSK | Monitoring and control, sensor networks



3. The CC2530 SoC from Texas Instruments fits 802.15.4, ZigBee, RF4CE, and smart-energy applications. It has an 8051 microcontroller on board, making it a true single-chip wireless solution.

You can offset the loss with a gain antenna, but it's still optimal to minimize the length and use the best cable.

With all of this information, compute the final calculation:

Transmit power (dBm) + transmit antenna gain (dB) + receive antenna gain (dB) - path loss (dB) - cable loss (dB) - fade margin (dB)

This figure should be greater than the receiver sensitivity. Now play with all of the factors to zero in on the final specifications for everything. Two design issues remain: the antenna and its impedance matching.

The antenna requires a separate discussion beyond this article. There are many sources for antennas. A wireless module most likely will come with an antenna and/or antenna suggestions. The most common is a quarter-wave or half-wave vertical. When building an antenna into the product, the ceramic type is popular, as is a simple copper loop on the printed circuit board (PCB). Follow the manufacturer's recommendations for the best results.

If it's a single-chip design, you may need to design the impedance-matching network between the transceiver and the antenna. Most chip companies will offer some recommendations that deliver proven results. Otherwise, design your own standard L, T, or LC network to do the job.

One final hint about testing: Part 15 uses field strength to indicate radiated power, measured in microvolts per meter (µV/m). A field strength meter makes the measurement at specified distances.



The result can be converted to watts to ensure the transmitter is within the rules. The following formula, which is a close approximation, lets you convert between power and field strength:

V²/(120π) = PG/(4πd²)

where P is transmitter power in watts, G is the antenna gain, V is the field strength in V/m, and d is the distance in meters from the transmit antenna to the field strength meter antenna. A simplified approximation at a common FCC testing distance of 3 m with a transmit antenna gain of one is P ≈ 0.3V².
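As a rough sketch of that conversion, the snippet below turns a field-strength reading back into radiated power and compares it with the 0.3V² shortcut. The 100-µV/m reading is a made-up example value, and the antenna gain is taken as one, as in the simplified approximation above.

```python
# Convert a measured field strength into radiated power using the Part 15
# relationship above. The 100-uV/m reading and 3-m distance are example
# assumptions; gain G = 1 matches the article's simplified approximation.
import math

def radiated_power_watts(field_v_per_m, distance_m, gain=1.0):
    """Solve V^2/(120*pi) = P*G/(4*pi*d^2) for transmitter power P."""
    return (field_v_per_m ** 2) * (4 * math.pi * distance_m ** 2) / (120 * math.pi * gain)

v = 100e-6       # measured field strength, 100 uV/m expressed in V/m
d = 3.0          # common FCC measurement distance, meters

print(f"Exact conversion:        {radiated_power_watts(v, d):.3e} W")
print(f"Shortcut P = 0.3 * V^2:  {0.3 * v * v:.3e} W")
```

At the 3-m distance the exact conversion and the shortcut agree, both giving about 3 nW for a 100-µV/m field.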
Some Example Products

FreeWave Technologies has a line of reliable, high-performance spread-spectrum and licensed radios for critical data transmissions. The high-speed MM2-HS-T (TTL interface) and MM2-HS-P (Ethernet interface) come ready to embed in OEM products like sensors, remote terminal units (RTUs), programmable logic controllers (PLCs), and robots and unmanned vehicles. They operate in the 900-MHz band and use direct-sequence spread spectrum (DSSS).

Thanks to the radios' over-the-air speed of 1.23 Mbits/s, users can send significantly more data in a shorter period of time. The MM2-HS-T is ideal for embedded applications that require high data rates, such as video, and long distances (up to 60 miles). Both radios fit many industry, government, and military applications where it's necessary to transmit large amounts of data, including multiple high-resolution images and video along with data.

The MM2-HS-T measures 50.8 by 36 by 9.6 mm and weighs 14 g (Fig. 1). The MM2-HS-P shares a similarly small footprint. Both radios offer RISC-based signal demodulation with a matched filter and a gallium-arsenide (GaAs) FET RF front end incorporating multi-stage surface-acoustic-wave (SAW) filters. The combination delivers unmatched overload immunity and sensitivity.

The MM2-HS-P includes industrial-grade high-speed Ethernet that supports TCP, industrial-grade wireless security, and serial communications. Each unit can be used in a security network as a master, slave, repeater, or master/slave unit, depending on its programming. FreeWave's proprietary spread-spectrum technology prevents detection and unauthorized access, and 256-bit AES encryption is available.

The ADF7022 and ADF7023 low-power transceivers from Analog Devices fit well in smart-grid and other applications operating on the short-range ISM band for remote data measurement. Smart-grid technology not only measures how much power is consumed, it also determines what time and price are best to save energy, reduce costs, and increase reliability for the delivery of electricity from utility companies to consumers. RF transceivers are needed for the secure and robust transmission of this information over short distances, for storing measurement data, and for communicating with utility computers over wireless networks.

Applications for the ADF7022 and ADF7023 include industrial monitoring and control, wireless networks and telemetry systems, security systems, medical devices, and remote controls. Analog Devices' free, downloadable ADIsimSRD Design Studio supports both devices.

One particularly hot area for RF transceivers involves utilities that are building advanced metering infrastructures, including automatic meter reading, to monitor and control energy usage. Analysts expect more than 150 million smart meters to be installed worldwide. The ADF7022 and ADF7023 target these smart-grid and home/building automation applications.

The ADF7022 is a highly integrated frequency-shift-keying/Gaussian frequency-shift-keying (FSK/GFSK) transceiver designed for operation at the three io-homecontrol channels of 868.25, 868.95, and 869.85 MHz in the license-free ISM band. It fully complies with ETSI-300-200 and has enhanced digital baseband features specifically designed for the io-homecontrol wireless communications protocol.

As a result, the device can assume complex tasks typically performed by a microprocessor, such as media access, packet management/validation, and packet retrieval to and from data buffer memory. This allows the host microprocessor to remain in power-down mode. Also, it significantly lowers power consumption and eases both the computational and memory requirements of the host microprocessor.

The ADF7023 low-IF transceiver operates in the license-free ISM bands at 433, 868, and 915 MHz. It offers a low transmit-and-receive current, as well as data rates in 2FSK/GFSK up to 250 kbits/s. Its power-supply range is 1.8 to 3.6 V, and it consumes less power in both transmit and receive modes, enabling longer battery life.

Other on-chip features include an extremely low-power, 8-bit RISC communications processor; a patent-pending, fully integrated image-rejection scheme; a voltage-controlled oscillator (VCO); a fractional-N phase-locked loop (PLL); a 10-bit analog-to-digital converter (ADC); digital received-signal-strength indication (RSSI); temperature sensors; an automatic frequency control (AFC) loop; and a battery-voltage monitor.

The CC2530 from Texas Instruments is a true system-on-a-chip (SoC) solution tailored for IEEE 802.15.4, ZigBee, ZigBee RF4CE, and Smart Energy applications. (RF4CE is the forthcoming wireless remote-control standard for consumer electronics equipment.) Its 64-kbyte and larger versions support the new RemoTI stack for ZigBee RF4CE, which is the industry's first ZigBee RF4CE-compliant protocol stack.

Larger memory sizes will allow for on-chip, over-the-air download to support in-system reprogramming. In addition, the CC2530 combines a fully integrated, high-performance RF transceiver with an 8051 MCU, 8 kbytes of RAM, 32/64/128/256 kbytes of flash memory, and other powerful supporting features and peripherals (Fig. 3).

The TI CC430 wireless platform combines TI radio chips with the company's MSP430 16-bit embedded controller. It can implement the IETF standard 6LoWPAN, which is the software that enables 802.15.4 radios to carry IPv6 packets. Thus, low-power wireless devices and networks can access the Internet. Furthermore, the platform can implement Europe's Wireless MBus technology for the remote reading of gas and electric meters.



Digital Communications: The ABCs Of Ones And Zeroes


Louis E. Frenzel | Communications Editor
lou.frenzel@penton.com

Don't be left in the analog dust. Avoid noise and other transmission errors using these digital modulation schemes and error-correction techniques.
Electronic communications began as digital technology with Samuel Morse's invention of the telegraph in 1845. The brief dots and dashes of his famous code were the binary ones and zeroes of the current through the long telegraph wires. Radio communications also started out digitally, with Morse code producing the off-and-on transmission of continuous-wave spark-gap pulses. Then analog communications emerged with the telephone and amplitude-modulation (AM) radio, which dominated for decades.

Today, analog is slowly fading away, found only in the legacy telephone system; AM and FM radio broadcasting; amateur, CB/family, and shortwave radios; and some lingering two-way mobile radios. Nearly everything else, including TV, has gone digital. Cell phones and Internet communications are digital. Wireless networks are digital.

Though the principles are generally well known, veteran members of the industry may have missed out on digital communications schooling. Becoming familiar with the basics broadens one's perspective on the steady stream of new communications technologies, products, trends, and issues.

1. Encoding may be optional in this simplified model of a communications system, while some systems require modulation. Noise is the main restriction on range and reliability.

The Fundamentals

All communications systems consist of a transmitter (TX), a receiver (RX), and a transmission medium (Fig. 1). The TX and RX simply make the information signals to be transmitted compatible with the medium, which may involve modulation. Some systems use a form of coding to improve reliability. In this article, consider the information to be non-return-to-zero (NRZ) binary data. The medium could be copper cable like unshielded twisted pair (UTP) or coax, fiber-optic cable, or free space for wireless. In all cases, the signal is greatly attenuated by the medium and noise is superimposed. Noise rather than attenuation usually determines if the communications medium is reliable.

Communications falls into one of two categories: baseband or broadband. Baseband is the transmission of data directly over the medium itself, such as sending serial digital data over an RS-485 or I2C link. The original 10-Mbit/s Ethernet was baseband. Broadband implies the use of modulation (and in some cases, multiplexing) techniques. Cable TV and DSL are probably the best examples, but cellular data is also broadband.

Communications may also be synchronous or asynchronous. Synchronous data is clocked, as in SONET fiber-optical communications, while asynchronous methods use start and stop bits, as in RS-232 and a few others.

Furthermore, communications links are simplex, half duplex, or full duplex. Simplex links involve one-way communications, or, simply, broadcasting. Duplex is two-way communications. Half duplex uses alternating TX and RX on the same channel. Full duplex means simultaneous (or at least concurrent) TX and RX, as in any telephone.

2. The bit time in this NRZ binary data signal determines the data rate as 1/t.

3. Here, an 8-bit serial data word in NRZ format is to be transmitted (a). That same bit stream, when transmitted in a four-level PAM format, doubles the data rate (b).




4. Shown are phase-shift keying (PSK) constellation diagrams for binary PSK (a), quaternary PSK (b), and 8PSK (c).

Topology is also fundamental. Point-to-point, point-to-multipoint, and multipoint-to-point are common. Networking features buses, rings, and mesh. They don't all necessarily work for all media.

Data Rate Versus Bandwidth

Digital communications sends bits serially, one bit after another. However, you'll often find multiple serial paths being used, such as four-pair UTP CAT 5e/6 or parallel fiber-optic cables. Multiple-input multiple-output (MIMO) wireless also implements two or more parallel bit streams. In any case, the basic data speed or capacity C is the reciprocal of the bit time t (Fig. 2):

C = 1/t

C is the channel capacity or data rate in bits per second and t is the time for one bit interval. The symbol R for rate is also used to indicate data speed. A signal with a bit time of 100 ns has a data rate of:

C = 1/(100 x 10^-9) = 10 Mbits/s

The big question is how much bandwidth (B) is needed to pass a binary signal of data rate C. As it turns out, it's the rise time (tR) of the bit pulse that determines the bandwidth:

B = 0.35/tR

B is the 3-dB bandwidth in megahertz and tR is in microseconds (µs). This formula factors in the effect of Fourier theory. For example, a rise time of 10 ns, or 0.01 µs, needs a bandwidth of:

B = 0.35/0.01 = 35 MHz

A more precise measure is to use the Shannon-Hartley theorem. Hartley said that the least bandwidth needed for a given data rate in a noise-free channel is just half the data rate, or:

B = C/2

Or the maximum possible data rate for a given bandwidth is:

C = 2B

As an example, a 6-MHz bandwidth will allow a data rate up to 12 Mbits/s. Hartley also said that this figure holds for two-level or binary signals. If multiple levels are transmitted, then the data rate can be expressed as:

C = 2B log2M

M indicates the number of multiple voltage levels or symbols transmitted. Calculating the base 2 logarithm is a real pain, so use the conversion where:

log2N = 3.32 log10N

Here, log10N is just the common log of a number N. Therefore:

C = 2B(3.32)log10M

For binary or two-level transmission, the data rate for a bandwidth of 6 MHz is as given above:

C = 2(6)(3.32)log102 = 12 Mbits/s

With four voltage levels, the theoretical maximum data rate in a 6-MHz channel is:

C = 2(6)(3.32)log104 = 24 Mbits/s

To explain this, let's consider multilevel transmission schemes. Multiple voltage levels can be transmitted over a baseband path in which each level represents two or more bits. Assume we want to transmit a serial 8-bit byte (Fig. 3a). Also assume a clock of 1 Mbit/s for a bit period of 1 µs. This will require a minimum bandwidth of:

B = C/2 = 1 Mbit/s / 2 = 500 kHz

With four levels, two bits per level can be transmitted (Fig. 3b). Each level is called a symbol. In this example, the four levels (0, 1, 2, and 3 V) transmit the same byte, 11001001. This technique is called pulse amplitude modulation (PAM). The time for each level or symbol is 1 µs, giving a symbol rate, also called the baud rate, of 1 Msymbol/s. Therefore, the baud rate is 1 Mbaud, but the actual bit rate is twice that, or 2 Mbits/s. Note that it takes just half the time to transmit the same amount of data.
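The arithmetic above is easy to spot-check. This short sketch reproduces the article's own numbers for a 6-MHz channel and the 1-Mbit/s example.

```python
# Spot-check of the data-rate/bandwidth arithmetic, using the article's
# own numbers (a 6-MHz channel and a 1-Mbit/s binary stream).
import math

def max_rate_bps(bandwidth_hz, levels):
    """Hartley's noise-free limit: C = 2B log2(M)."""
    return 2 * bandwidth_hz * math.log2(levels)

def min_bandwidth_hz(rate_bps, levels=2):
    """Least bandwidth for a given rate: B = C / (2 log2(M))."""
    return rate_bps / (2 * math.log2(levels))

print(max_rate_bps(6e6, 2) / 1e6)    # 12 Mbits/s for binary signaling
print(max_rate_bps(6e6, 4) / 1e6)    # 24 Mbits/s with four-level PAM
print(min_bandwidth_hz(1e6) / 1e3)   # 500 kHz for a 1-Mbit/s binary stream
print(0.35 / 0.01)                   # 35-MHz bandwidth for a 10-ns rise time
```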




What this means is that for a given clock rate, eight bits of data can be transmitted in 8 µs using binary data. With four-level PAM, twice the data, or 16 bits, can be transmitted in the same 8 µs. For a given bandwidth, that translates to the higher data rate equivalent of 4 Mbits/s.

Shannon later modified this basic relationship to factor in the signal-to-noise ratio (S/N or SNR):

C = B log2(1 + S/N)

or:

C = B(3.32)log10(1 + S/N)

The S/N is a power ratio and is not measured in dB. S/N also is referred to as the carrier-to-noise ratio, or C/N. C/N is usually defined as the S/N of a modulated or broadband signal. S/N is used at baseband or after demodulation. With an S/N of 20 dB, or 100 to 1, the maximum data rate in a 6-MHz channel will be:

C = 6(3.32)log10(1 + 100) = 40 Mbits/s

With an S/N = 1, or 0 dB, the data rate drops to:

C = 6(3.32)log10(1 + 1) = 6 Mbits/s

This last example is why many engineers use the conservative rule of thumb that the data rate in a channel with noise is roughly equal to the bandwidth: C = B.

If the speed through a channel with a good S/N seems to defy physics, that's because the Shannon-Hartley formulas don't specifically say that multiple levels or symbols can be used. Consider that:

C = B(3.32)log10(1 + S/N) = 2B(3.32)log10M

Here, M is the number of levels or symbols. Solving for M:

M = √(1 + S/N)

Take a 40-Mbit/s data rate in a 6-MHz channel, if the S/N is 100. This will require multiple levels or symbols:

M = √(1 + 100) ≈ 10

Theoretically, the 40-Mbit/s rate can be achieved with 10 levels. The levels or symbols could be represented by something other than different voltage levels. They can be different phase shifts or frequencies, or some combination of levels, phase shifts, and frequencies. Recall that quadrature amplitude modulation (QAM) is a combination of different voltage levels and phase shifts. QAM, the modulation of choice to achieve high data rates in narrow channels, is used in digital TV as well as wireless standards like HSPA, WiMAX, and Long-Term Evolution (LTE).

5. In this constellation diagram for 16QAM, 16 unique amplitude-phase combinations transmit data in 4-bit groups per symbol.

Channel Impairments

Data experiences many impairments during transmission, especially noise. The calculations of data rate versus bandwidth assume the presence of additive white Gaussian noise (AWGN).

Noise comes from many different sources. For instance, it emanates from thermal agitation, which is most harmful in the front end of a receiver. The sources are resistors and transistors, while other forms of noise come from semiconductors. Intermodulation distortion creates noise. Also, signals produced by mixing in nonlinear circuits create interfering signals that we treat as noise.

Other sources of noise include signals picked up on a cable by capacitive or inductive coupling. Impulse noise from auto ignitions, inductive kicks from motor or relay turn-on or turn-off, and power-line spikes are particularly harmful to digital signals. The 60-Hz hum induced by power lines is another example. Signals coupled from one pair of conductors to another within the same cable create crosstalk noise. In a wireless link, noise can come from the atmosphere (e.g., lightning) or even the stars.

Because noise is usually random, its frequency spectrum is broad. Noise can be reduced by simply filtering to limit the bandwidth. Bandwidth narrowing obviously will affect data rate.

It's also important to point out that noise in a digital system is treated differently from that in an analog system. The S/N or C/N is used for analog systems, but Eb/N0 usually evaluates digital systems. Eb/N0 is the ratio of the energy per bit to the spectral noise density. It's typically pronounced as "E sub b divided by N sub zero."

Energy Eb is signal power (P) multiplied by bit time t, expressed in joules. Since data capacity or rate C (sometimes designated R) is the reciprocal of t, then Eb is P divided by R. N0 is noise power N divided by bandwidth B. Using these definitions, you can see how Eb/N0 is related to S/N:

Eb/N0 = (S/N)(B/R)
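Here is the same Shannon and Eb/N0 arithmetic in a few lines of Python, again using the 6-MHz channel examples from the text.

```python
# Shannon capacity, the number of levels needed, and the Eb/N0 relationship,
# worked out with the article's 6-MHz channel examples.
import math

def shannon_capacity_bps(bandwidth_hz, snr_ratio):
    """C = B log2(1 + S/N), with S/N as a power ratio (not dB)."""
    return bandwidth_hz * math.log2(1 + snr_ratio)

def levels_needed(snr_ratio):
    """M = sqrt(1 + S/N): symbols needed to reach the Shannon rate."""
    return math.sqrt(1 + snr_ratio)

def ebn0_from_snr(snr_ratio, bandwidth_hz, rate_bps):
    """Eb/N0 = (S/N) * (B/R), all as plain ratios."""
    return snr_ratio * bandwidth_hz / rate_bps

B = 6e6
print(shannon_capacity_bps(B, 100) / 1e6)   # ~40 Mbits/s at S/N = 20 dB
print(shannon_capacity_bps(B, 1) / 1e6)     # ~6 Mbits/s at S/N = 0 dB
print(levels_needed(100))                   # ~10 levels for the 40-Mbit/s case
print(10 * math.log10(ebn0_from_snr(100, B, 40e6)))  # Eb/N0 in dB for that link
```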

BANDWIDTH EFFICIENCY

Modulation type | Bandwidth efficiency ((bits/s)/Hz)
FSK   | 1
BPSK  | 1
QPSK  | 2
8PSK  | 3
8QAM  | 3
16PSK | 4
16QAM | 4
Remember, you can also express Eb/N0 and S/N in dB. The energy per bit is a more appropriate measure of noise in a digital system. That's because the signal is usually transmitted during a short period, and the energy is averaged over that time. Typically, analog signals are continuous. Eb/N0 is often determined at the receiver input of a system using modulation. It's a measure of the noise level and will affect the received bit error rate (BER).





6. The widely used I/Q method of modulation in a transmitter is derived from the digital signal processor.


7. An I/Q receiver recovers data and demodulates in the digital signal processor.

Different modulation methods have varying Eb/N0 values and related BERs.

Another common impairment is attenuation. Cable attenuation is a given thanks to resistive losses, filtering effects, and transmission-line mismatches. In wireless systems, signal strength typically follows an attenuation formula proportional to the square of the distance between transmitter and receiver.

Finally, delay distortion is another source of impairment. Signals of different frequencies are delayed by different amounts over the transmission channel, resulting in a distorted signal.

Channel impairments ultimately cause loss of signal and bit transmission errors. Noise is the most common culprit in bit errors. Dropped or changed bits introduce serious transmission errors that may make communications unreliable. As such, the BER is used to indicate the quality of a transmission channel. BER, which is a direct function of S/N, is just the percentage of error bits relative to the total bits transmitted over a given time period. It's usually considered to be the probability of an error occurring in so many bits transmitted. One bit error per 100,000 transmitted is a BER of 10^-5. The definition of a good BER depends on the application and technology, but the 10^-5 to 10^-12 range is a common target.
Error Coding

Error detection and correction techniques can help mitigate bit errors and improve BER. The simplest form of error detection is to use a parity bit, a checksum, or a cyclical redundancy check (CRC). These are added to the transmitted data. The receiver recreates these codes, compares them, and then identifies errors. If errors occur, an automatic repeat request (ARQ) message is sent back to the transmitter and the corrupted data is retransmitted. Not all systems use ARQ, but even ARQ-less systems typically employ some form of error detection.

Nonetheless, most modern communications systems go much further by using sophisticated forward error correction (FEC) techniques. Taking advantage of special mathematical encoding, the data to be transmitted is translated into a set of extra bits, which are then added to the transmission. If bit errors occur, the receiver can detect the failed bits and actually correct all or most of them. The result is a significantly improved BER. Of course, the downsides are the added complexity of the encoding and the extra transmission time needed for the extra bits. This overhead is easily accommodated in more contemporary IC-based communications systems.

The many different types of FEC techniques available today fall into two groups: block codes and convolutional codes. Block codes operate on fixed groups of data bits to be transmitted, with extra coding bits added along the way. The original data may or may not be transmitted, depending on the code type. Common block codes include the Hamming, BCH, and Reed-Solomon codes. Reed-Solomon is widely used, as is a newer form of block code called the low-density parity check (LDPC). Convolutional codes use sophisticated algorithms, like the Viterbi, Golay, and turbo codes.

FEC is widely used in wireless and wired networking, cell phones, and storage media such as CDs and DVDs, hard-disk drives, and flash drives. FEC effectively enhances the S/N: the BER improves with the use of FEC for a given value of S/N, an effect known as coding gain. Coding gain is defined as the difference between the S/N values needed for the coded and uncoded data streams to reach a given BER target. For instance, if a system needs 20 dB of S/N to achieve a BER of 10^-6 without coding, but only 8 dB of S/N when FEC is used, the coding gain is 20 - 8 = 12 dB.
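To make the block-code idea concrete, here is a minimal Hamming(7,4) encoder and decoder, one of the FEC families named above. Four data bits pick up three parity bits, and any single bit error in the 7-bit codeword can be located and corrected. This is a textbook illustration, not production FEC.

```python
# Minimal Hamming(7,4) block code: encode 4 data bits into 7, then locate
# and correct any single bit error at the receiver.
def hamming74_encode(d):                       # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7

def hamming74_decode(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]             # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]             # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]             # checks positions 4, 5, 6, 7
    error_pos = s1 + 2 * s2 + 4 * s3           # 0 means no error detected
    if error_pos:
        c = c[:]
        c[error_pos - 1] ^= 1                  # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]            # recover d1..d4

data = [1, 0, 1, 1]
code = hamming74_encode(data)
code[4] ^= 1                                   # simulate a single bit error
assert hamming74_decode(code) == data          # the error is corrected
print("corrected:", hamming74_decode(code))
```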
Modulation

Almost any modulation scheme may be used to transmit digital data. But in today's more complex critical applications, the most widely used methods are some form of phase-shift keying (PSK) and QAM. Special modes like spread spectrum and orthogonal frequency division multiplexing (OFDM) are especially well adopted in the wireless space.
8. Direct sequence spread spectrum (DSSS) is produced using this basic arrangement: the serial data signal and a chipping signal from a code generator feed an XOR gate, whose output goes to the modulator.



Amplitude-shift keying (ASK) and on-off keying (OOK) are generated by turning the carrier off and on or by shifting it between two carrier levels. Both are used for simple and less critical applications. Since they're susceptible to noise, the range must be short and the signal strength high to obtain a decent BER.

Frequency-shift keying (FSK), which is very good in noisy applications, has several widely used variations. For instance, minimum-shift keying (MSK) and Gaussian-filtered FSK are the basis for the GSM cell-phone system. These methods filter the binary pulses to limit their bandwidth and thereby reduce the sideband range. They also use coherent carriers that have no zero-crossing glitches; the carrier is continuous. In addition, a multi-frequency FSK system provides multiple symbols to boost data rate in a given bandwidth.

In most applications, PSK is the most widely used. Plain-old binary phase-shift keying (BPSK) is a favorite scheme in which the 0 and 1 bits shift the carrier phase 180°. BPSK is best illustrated in a constellation diagram (Fig. 4a), which shows an axis where each phasor represents the amplitude of the carrier and the direction represents the phase position of the carrier.

Quaternary, 4-ary, or quadrature PSK (QPSK) uses sine and cosine waves in four combinations to produce four different symbols shifted 90° apart (Fig. 4b). It doubles the data rate in a given bandwidth but is very tolerant of noise.

Beyond QPSK is what's called M-ary PSK, or M-PSK. It uses many phases like 8PSK and 16PSK to produce eight or 16 unique phase shifts of the carrier, allowing for very high data rates in a narrow bandwidth (Fig. 4c). For instance, 8PSK allows transmission of three bits per phase symbol, theoretically tripling the data rate in a given bandwidth.

The ultimate multilevel scheme, QAM, uses a mix of different amplitudes and phase shifts to define as many as 64 to 1024 or more different symbols. Thus, it reigns as the champion of getting high data rates in small bandwidths. When using 16QAM, each 4-bit group is represented by a phasor of a specific amplitude and phase angle (Fig. 5). With 16 possible symbols, four bits can be transmitted per baud or symbol period. That effectively multiplies the data rate by four for a given bandwidth.
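A quick sketch shows how grouping bits into symbols multiplies the data rate at a fixed baud rate. The QPSK phase table below is an illustrative Gray-coded mapping, not one lifted from any particular standard.

```python
# Illustrative Gray-coded QPSK mapping: two bits select one of four phases
# spaced 90 degrees apart. The table is an example, not from any standard.
QPSK_PHASES = {(0, 0): 45, (0, 1): 135, (1, 1): 225, (1, 0): 315}

bits = [1, 1, 0, 0, 1, 0, 0, 1]                  # the article's example byte
symbols = [QPSK_PHASES[(bits[i], bits[i + 1])]
           for i in range(0, len(bits), 2)]
print("QPSK phases (degrees):", symbols)         # 4 symbols carry 8 bits

baud = 1e6                                       # fixed 1-Msymbol/s symbol rate
for name, bits_per_symbol in (("BPSK", 1), ("QPSK", 2), ("16QAM", 4)):
    print(f"{name}: {baud * bits_per_symbol / 1e6:.0f} Mbits/s at 1 Mbaud")
```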


9. The OFDM configuration used in IEEE 802.11a/g permits data rates of 6, 9, 12, 18, 24, 36, 48, or 54 Mbits/s across 52 subcarriers spaced 312.5 kHz apart in a 20-MHz channel. Each subcarrier is modulated by BPSK, QPSK, 16QAM, or 64QAM, depending on the data rate.


10. Higher-level modulation methods like 16QAM require a better signal-to-noise ratio, or higher Eb/N0, for a given bit error rate.


Today, most digital modulation and demodulation employs digital signal processing (DSP). The data is first encoded and then sent to the digital signal processor, whose software produces the correct bit streams. The bit streams are encoded in an I/Q, or in-phase and quadrature, format using a mixer arrangement (Fig. 6). Subsequently, the I/Q data is translated into analog signals by the digital-to-analog converters (DACs) and sent to the mixers, where it's mixed with the carrier or some IF sine and cosine waves. The resulting signals are summed to create the analog RF output. Further frequency translation may be needed. The bottom line is that virtually any form of modulation may be produced this way, as long as you have the right DSP code. (Forms of PSK and QAM are the most common.)

At the receiver, the antenna signal is amplified, downconverted, and sent to an I/Q demodulator (Fig. 7). The signal is mixed with the sine and cosine waves, then filtered to create the I and Q signals. These signals are digitized in analog-to-digital converters (ADCs) and sent to a digital signal processor for the final demodulation. Most radio architectures use this I/Q scheme and DSP. It's generally referred to as software-defined radio (SDR). The DSP software manages the modulation, demodulation, and other processing of the signal, including some filtering.

The spread-spectrum and OFDM broadband wide-bandwidth schemes are also forms of multiplexing or multiple access. Spread spectrum, which is employed in many cell phones, allows multiple users to share a common bandwidth. It's referred to as code division multiple access (CDMA). OFDM also uses a wide bandwidth to enable multiple users to access the same wide channel.

Figure 8 shows how the digitized serial voice, video, or other data is modified to produce spread spectrum. In this scheme, direct-sequence spread spectrum (DSSS), the serial data is sent to an exclusive-OR gate along with a much higher-rate chipping signal. The chipping signal is coded so it's recognized at the receiver. The narrowband digital data (several kilohertz) is then converted to a wider-bandwidth signal that occupies a wide channel. In cell-phone cdma2000 systems, the channel bandwidth is 1.25 MHz and the chipping signal is 1.2288 Mbits/s. Therefore, the data signal is spread over the entire band.

Spread spectrum can also be achieved with frequency-hopping spread spectrum (FHSS). In this configuration, the data is transmitted in hopping periods over different randomly selected frequencies, spreading the information over a wide spectrum. The receiver, knowing the hop pattern and rate, can reconstruct the data and demodulate it.


The receiver, knowing the hop pattern and rate, can reconstruct the data and demodulate it. The most common example of FHSS is Bluetooth wireless. Other data signals are processed the same way and transmitted in the same channel. Because each data signal is uniquely encoded by a special chipping-signal code, all of the signals are scrambled and pseudorandom in nature. They overlay one another in the channel. A receiver hears only a low noise level. Special correlators and decoders in the receiver can pick out the desired signal and demodulate it. In OFDM, the high-speed serial data stream gets divided into multiple slower parallel data streams. Each stream modulates a very narrow sub-channel in the main channel. BPSK, QPSK, or different levels of QAM are used, depending on the desired data rate and the application's reliability requirements. Multiple adjacent sub-channels are designed to be orthogonal to one another. Therefore, the data on one sub-channel doesn't produce inter-symbol interference with an adjacent channel. The result is a high-speed data signal that's spread over a wider bandwidth as multiple, parallel slower streams. The number of sub-channels varies with each OFDM system, from 52 in Wi-Fi radios to 1024 in cell-phone systems like LTE and wireless broadband systems such as WiMAX. With so many channels, it's possible to divide the sub-channels into groups. Each group would transmit one voice or other data signal, allowing multiple users to share the assigned bandwidth. Typical channel widths are 5, 10, and 20 MHz. To illustrate, the popular 802.11a/g Wi-Fi system uses an OFDM scheme to transmit data at rates up to 54 Mbits/s in a 20-MHz channel (Fig. 9). All new cell-phone and wireless broadband systems use OFDM because of its high-speed capabilities and reliable communications qualities. Broadband DSL is OFDM, as are most power-line technologies. Implementing OFDM can be difficult, which is where DSP steps in.
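To make the direct-sequence idea concrete, here is a small sketch that spreads a few data bits with a short chipping code and recovers them by correlating against the same code. The eight-chip code and the bit pattern are made up for illustration; real systems such as cdma2000 use far longer codes at much higher chip rates.

# Direct-sequence spread spectrum, illustrated with a toy 8-chip code.
data_bits = [1, 0, 1, 1]                 # narrowband data (illustrative)
chip_code = [1, 0, 1, 1, 0, 1, 0, 0]     # chipping sequence known to both ends

def spread(bits, code):
    # Each data bit is XORed with every chip of the code (the exclusive-OR gate).
    return [b ^ c for b in bits for c in code]

def despread(chips, code):
    # Correlate: XOR with the same code, then majority-vote over each bit period.
    bits = []
    n = len(code)
    for i in range(0, len(chips), n):
        recovered = [chips[i + j] ^ code[j] for j in range(n)]
        bits.append(1 if sum(recovered) > n // 2 else 0)
    return bits

tx = spread(data_bits, chip_code)        # 4 bits become 32 chips (wider bandwidth)
print(despread(tx, chip_code))           # -> [1, 0, 1, 1]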

Modulation methods vary in the amount of data they can transmit in a given bandwidth and how much noise they can withstand. One measure of this is the BER per given Eb/N0 ratio (Fig. 10). Simpler modulation schemes like BPSK and QPSK produce a lower BER for a low Eb/N0, making them more reliable in critical applications. However, different levels of QAM produce higher data rates in the same bandwidth, although a higher Eb/N0 is needed for a given BER. Again, the tradeoff is data rate against BER in a given bandwidth.
SPECTRAL EFFICIENCY

Spectral efficiency is a measure of how many bits can be transmitted at a given rate over a fixed bandwidth. It's one way to compare the effectiveness of modulation methods. Spectral efficiency is stated in terms of bits per second per hertz of bandwidth, or (bits/s)/Hz. Though the measure usually excludes any FEC coding, it's sometimes useful to include FEC in a comparison. Remember 56k dial-up modems? They achieved an amazing 56 kbits/s in a 4-kHz telephone channel, so their spectral efficiency was 14 (bits/s)/Hz. Maximum throughput for an 802.11g Wi-Fi radio is 54 Mbits/s in a 20-MHz channel, for a spectral efficiency of 2.7 (bits/s)/Hz. A standard digital GSM cell phone does 104 kbits/s in a 200-kHz channel, making the spectral efficiency 0.52 (bits/s)/Hz. Add EDGE modulation and that jumps to 1.93 (bits/s)/Hz. And taking it to new levels, the forthcoming LTE cell phones will have a spectral efficiency of 16.32 (bits/s)/Hz in a 20-MHz channel. Spectral efficiency shows just how much data can be crammed into a narrow bandwidth with different modulation methods. The table compares the relative efficiencies of different modulation methods, where bandwidth efficiency is just the data rate divided by the bandwidth, or C/B.
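Because spectral efficiency is simply the data rate divided by the channel bandwidth, the comparisons above are easy to reproduce. This snippet just recomputes the examples quoted in the text:

# Spectral efficiency = data rate / bandwidth, in (bits/s)/Hz.
examples = {
    "56k dial-up modem": (56e3, 4e3),     # 56 kbits/s in a 4-kHz channel
    "802.11g Wi-Fi":     (54e6, 20e6),    # 54 Mbits/s in a 20-MHz channel
    "GSM":               (104e3, 200e3),  # 104 kbits/s in a 200-kHz channel
}
for name, (rate, bw) in examples.items():
    print(f"{name}: {rate / bw:.2f} (bits/s)/Hz")
# 56k modem -> 14.00, 802.11g -> 2.70, GSM -> 0.52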

Reconciling Power-Factor Correction Standards Leads To Solutions


Don Tuite | Analog/Power Editor
don.tuite@penton.com

Most of the world mandates control of current harmonics. North America specs 0.9 power factor. Does it make a difference?
There's a tendency to think of energy on the power lines in terms of its fundamental 60- or 50-Hz frequency, the way the voltage is supposed to be created by the turbines and generators at the power house. Sure, the current lags the voltage if there's a reactive load. That's power factor, right? But isn't it still a matter of real and reactive components at 50 or 60 Hz? Yes and no. Unfortunately, that conceptualization is a bit oversimplified.

In power distribution, power-factor correction (PFC) has traditionally been understood in terms of adding (in general) capacitive reactance at points in the power distribution system to offset the effect of an inductive load. One could say reactive load, but historically, power engineers have been most concerned with motors as loads when dealing with power factor. Correction could take the form of a bank of capacitors or a synchronous condenser (an unloaded synchronous motor).


More broadly, PFC can also be needed in any line-powered apparatus that uses ac-dc power conversion. These applications can range in scale from battery chargers for portable devices to big-screen TVs. Cumulatively, their input rectifiers are the largest contributor to mains-current harmonic distortion.

Where does that harmonic distortion come from? One common misconception is that switching regulators cause harmonic power-factor components. Actually, they're produced in the typical full-bridge rectifier and its filter capacitor, aided and abetted by the impedance of the power line itself. In the steady state, the supply draws current from the line only when the input voltage exceeds the voltage on the filter capacitor. This creates a current waveform that includes all the odd harmonics of the power-line frequency (Fig. 1). Once the voltage crosses that point, the current is limited only by the source impedance of the utility line as well as by the resistance of the forward-biased diode and the reactance of the capacitor that smoothes out the dc. Because the power lines exhibit non-zero source impedance, the high current peaks cause some clipping distortion on the peaks of the voltage sinusoid.

1. In ac-dc switching power supplies, current harmonics of the power-line frequency are produced when the load draws current from the power line only during the intervals when the line voltage is higher than the voltage on the filter capacitor. The net effect is a load current that is out of phase with the line voltage and contains frequency components that exhibit skin effect on power lines, causing conduction losses, and that excite eddy currents in power-company transformers, resulting in further losses.

Harmonics get to be considered elements of power factor because of their relationship to the power-line frequency. As Fourier components, they cumulatively represent an out-of-phase current at the fundamental frequency. In fact, one broad definition of power factor folds total harmonic distortion (THD) directly into the calculation.

THE PROBLEM WITH POWER FACTOR

Whatever the cause, what's actually so wrong with power factors less than unity? Part of the problem is economic. Another part has to do with safety. Whatever their phase relationships, all those superposed harmonic currents create measurable I²R losses as they're drawn from the generator, through miles of transmission and distribution lines, to the home or workplace. Historically, the utility ate the expense of the losses. At least for domestic consumers, the utility delivered volt-amperes, the consumer paid for watts, and the volt-amperes reactive (VARs) were a net loss. In fact, old mechanical power meters didn't even record those currents, and in any event, tariffs for domestic consumers don't permit charging for anything but real power. That situation is likely to continue, since fixing the tariffs is unlikely to appeal to state legislators. In any event, resolving the situation on an engineering level is more practical than socking it to Joe Homeowner.

That's the economic side of the story. In terms of safety, if Joe's home is an apartment, he has another reason to care. Harmonics, notably the third, can and do result in three-phase imbalance, with current flowing in the ground conductor in a wye (Y) configuration. The wye ground conductor typically isn't sized to carry significant current. PFC harmonics also cause losses and dielectric stresses in capacitors and cables, in addition to overcurrents in machine and transformer windings. For a more detailed analysis, see "PFC Strategies in light of EN 61000-3-2," by Basu, et al (http://boseresearch.com/RIGA_paper_27_JULY_04.pdf).
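For a load whose fundamental current stays in phase with the line voltage, that broad definition works out to PF = 1/√(1 + THD²), with THD expressed as a ratio rather than a percentage. The short sketch below shows how quickly harmonic content drags the power factor down; the THD values are illustrative, not measurements of any particular supply:

import math

def distortion_power_factor(thd):
    """Power factor of a load whose fundamental current is in phase with
    the line voltage; thd is total harmonic distortion as a ratio."""
    return 1.0 / math.sqrt(1.0 + thd ** 2)

# Illustrative THD levels (1.0 = 100%), typical of uncorrected rectifier inputs.
for thd in (0.1, 0.5, 1.0, 1.5):
    print(f"THD = {thd:4.0%}  ->  PF = {distortion_power_factor(thd):.3f}")
# 10% -> 0.995, 50% -> 0.894, 100% -> 0.707, 150% -> 0.555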


2. Power-factor correction (PFC) in ac-dc supplies consists of using a control circuit to switch a MOSFET so it draws current through an inductor in a way that fills in the gaps that would otherwise represent harmonics. When the PFC is operated in critical conduction or transition mode (a), the average inductor current is relatively low, because the peak current is allowed to fall essentially to zero amps. When it is operated in continuous conduction mode (b), the average current is higher. Transition mode is easier to achieve, while continuous mode results in power factors closer to unity.


REGULATING PF

Interestingly, mains power has been subject to interference from the beginning. The first regulatory effort to control disturbances to the electrical grid, the British Lighting Clauses Act of 1899, was intended to keep uncontrolled arc-lamps from making incandescent lamps flicker. More recently (1978 and 1982), international standards IEC 555-2, "Harmonic injection into the AC Mains," and IEC 555-3, "Disturbances in supply systems caused by household appliances and similar electrical equipment - Part 3: Voltage fluctuations," were published. (Later they were updated to IEC 1000 standards.) Like those standards, the current standards come out of Europe, but they're nearly universal. There are related government regulations for power-line harmonics in Japan, Australia, and China. In the European Union, standard IEC/EN 61000-3-2, "Electromagnetic compatibility (EMC) - Part 3-2 - Limits - Limits for harmonic current emissions (equipment input current ≤ 16 A per phase)," sets current limits up to the 39th harmonic for equipment with maximum power-supply specs from 75 to 600 W. Its Class D requirements (the strictest) apply to personal computers, computer monitors, and TV receivers. (Classes A, B, and C cover appliances, power tools, and lighting.) What does the standard actually say? Under IEC 61000-3-2, the limits for Class D harmonic currents are laid down in terms of milliamps per watt consumed (Table 1).
GLOBAL DISHARMONY

Awkwardly, IEC 61000-3-2, being a European-oriented standard, is based on 230-V single-phase and 230/400-V three-phase power at the wall plug. In consequence, the current limits have to be adjusted for 120/240-V mains voltages in North America. While IEC 61000-3-2 sets mandatory standards for supplies sold in the EU, there are voluntary standards for North America. The U.S. Department of Energy's Energy Star Computer Specification includes 80 Plus power-supply requirements for desktop computers (later including servers) and laptops. 80 Plus is a U.S./Canadian electric-utility-funded rebate program that subsidizes the extra cost of computer power supplies that achieve 80% or higher efficiency at low, mid-range, and peak outputs, relative to the power rating on the nameplate, and that exhibit a power factor of at least 0.9. Within territories served by participating utilities, the utilities pay $5 or $10 for every desktop computer or server sold.

In 2008, the 80 Plus program was expanded to recognize higher-efficiency power supplies, initially using the Olympic medal colors of bronze, silver, and gold, and then adding platinum (Table 2). The new subcategories were meant to help expand program branding and to make it possible to offer larger consumer rebates for participating manufacturers that had moved ahead of the curve. In Table 2, redundant refers to the practice of server-system makers of operating from a 230-V ac source and using multiple supplies to deliver power to the load. Some systems may have up to six power supplies, so if one fails, the others can absorb the failed unit's share of the load.

3. The amount of ripple reduction in multi-phase PFC schemes is a function of duty cycle. The solid line represents a two-phase design. In a three-phase design, the phase difference of each phase is 360° divided by three, which is 120°. In a four-phase design, the phase difference is 90°.

BELOW 20% LOAD

One complaint about 80 Plus is that it does not set efficiency targets for very low load levels. This may seem like a trivial objection, but it isn't when there are large numbers of computers in operations such as server farms, many of which may be in a standby or sleep mode at any given time. Ironically, the processors' power-saving modes tend to conflict with efforts to save power in the ac supply.

Of some further significance may be the conflict between specifying requirements for the individual components of harmonic distortion, as IEC 61000-3-2 does, and specifying a single value, such as 0.9 for power factor, as the higher levels of 80 Plus do. Texas Instruments provides an interesting analysis of the issues in a white paper, "High Power Factor and High Efficiency - You Can Have Both," by Isaac Cohen and Bing Lu (http://focus.ti.com/download/trng/docs/seminar/Topic_1_Cohen_Lu.pdf). Early in the paper, the authors calculate the power factor represented by the Class D harmonic levels specified by IEC 61000-3-2. Making a few simplifications, the expression for power factor reduces to approximately 0.726. Since 0.726 is significantly less than 0.9, a supply that just meets the minimum requirement for the EU standard will fail Energy Star.

Just to make things interesting, the TI authors note that, based on the basic definition of power factor as the ratio of the average power in watts absorbed by a load from a voltage or current source to the product of the RMS voltage appearing across the load and the RMS current flowing in it, it is theoretically possible to design a simple full-wave bridge, drive it with a square wave, and have it meet the 0.9 power-factor Energy Star requirement by emulating an inductive-input filter with a large inductance value. (See the white paper for details.) Nonetheless, a Fourier analysis of the square wave shows that all harmonics above the 11th exceed the IEC 61000-3-2 limits. Ultimately, as the title of the paper suggests, the problem is a chimera. Fortuitously, all the commonly used active-PFC circuits draw input-current waveforms that "can easily comply [with] both standards," the authors say.
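The square-wave observation is easy to check numerically. A square wave of line current whose fundamental is in phase with a sinusoidal line voltage has a power factor of 2√2/π, or about 0.90, yet its odd harmonics fall off only as 1/n. The sketch below, which assumes the 230-V line the standard is written around, compares that harmonic content against the Class D relative limits of Table 1; it illustrates the argument rather than reproducing the white paper's analysis:

import math

V_LINE = 230.0                        # rms line voltage assumed by IEC 61000-3-2

def classd_relative_limit(n):
    """Class D relative limit in mA per watt for odd harmonic n (Table 1)."""
    table = {3: 3.4, 5: 1.9, 7: 1.0, 9: 0.5, 11: 0.35}
    return table.get(n, 3.85 / n)     # 13th through 39th: 3.85/n mA/W

# Square-wave input current: only odd harmonics, each 1/n of the fundamental.
# Power is carried by the fundamental alone, so I1(rms) = P / V_LINE.
pf = 2 * math.sqrt(2) / math.pi
print(f"square-wave power factor = {pf:.3f}")            # ~0.900

for n in range(3, 41, 2):
    drawn_ma_per_watt = 1000.0 / (V_LINE * n)            # harmonic current per watt
    limit = classd_relative_limit(n)
    if drawn_ma_per_watt > limit:
        print(f"harmonic {n}: {drawn_ma_per_watt:.2f} mA/W exceeds limit {limit:.2f} mA/W")
# Harmonics from roughly the 11th upward fail, even though the power factor meets 0.9.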


Like Texas Instruments, ON Semiconductor has addressed the issues of reconciliation. In a communication available online, "Comments on Draft 1 Version 2.0 Energy Star Requirements for External Power Supplies (EPS)" (www.energystar.gov/ia/partners/prod_development/revisions/downloads/ON_Semiconductor.pdf), the company advised the Department of Energy that external power supplies that meet IEC 61000-3-2 typically have a power factor of 0.85 or greater when measured at 100% of rated output power.

"More specifically, at 100% of rated output power and 230-V ac line, two-stage external power supplies with an active-PFC front end exhibit a power factor greater than 0.9," the paper explains. "The opposite is however not true, i.e., it is entirely possible that an external power supply can exhibit a power factor of 0.9 and yet will fail a given odd harmonic current and therefore will not meet the IEC61000-3-2."

Another issue with stating a PFC requirement directly, rather than in terms of individual harmonics, has to do with design efficiency. For a single-stage PFC topology to meet the proposed power-factor specification at 230-V ac line, ON Semiconductor says, the necessary circuit modifications would result in a few-percent efficiency loss and in a substantially increased cost. "For single-stage external power supplies the power factor is typically greater than 0.80. The proposed power-factor requirement would eliminate the single-stage topology that is one of the most cost-effective ways of building highly efficient external power supplies such as notebook adapters with a nameplate output power below 150 W," ON Semiconductor says.

Note the emphasis on single-stage. It opens the door to an interesting design question represented by TI and ON Semi. To understand it, let's first look at actual PFC design approaches.

APPROACHES TO UNITY POWER FACTOR

Since the discontinuous input-filter charging current creates the low power factor in switch-mode power supplies, the cure is to increase the rectifier's conduction angle. Solutions include passive and active PFC and passive or active filtering. Passive PFC involves an inductor on the power-supply input. Passive PFC looks simple, but it isn't really practical for reasons that include the necessary inductance, conduction losses, and the possibility of resonance with the output filter capacitor.

As noted above, the power-factor problem in ac-input switch-mode power supplies arises because current is drawn from the line only during parts of the ac supply voltage waveform that rise above the dc voltage on the bulk storage (filter) capacitor(s). This non-symmetrical current draw introduces harmonics of the ac line voltage on the line.

The basic PFC concept is fairly simple (Fig. 2). A control circuit switches a MOSFET to draw current through an inductor in a way that fills in the gaps that would otherwise represent harmonics. The PFC controller can be designed to operate in several modes: critical conduction mode (also called transition mode) and continuous conduction mode (CCM). The differences lie in how fast the MOSFET switches, which in turn determines whether the inductor current (and the energy in the inductor) approaches zero or remains relatively high.

The terms critical and transition reflect the fact that each time the current approaches 0 A, the inductor is at a point where its energy approaches zero. Transition-mode operation can achieve power factors of 0.9. However, it is limited to lower power levels, typically 600 W and below. It is economical, because it uses relatively few components. Applications include lighting ballasts and LED lighting, as well as consumer electronics.

The circuit topology for CCM is like critical conduction mode. But unlike the simpler mode, its ripple current has a much lower peak-to-peak amplitude and does not go to 0 A. The inductor always has current flowing through it and does not dump all of its energy at each pulse-width modulation (PWM) cycle, hence the term continuous. In this case, the average current produces a higher-quality composite of the ac line current, making it possible to achieve power factors near unity. This is important at higher power levels, as the higher currents magnify radiated and conducted electromagnetic interference (EMI) levels that critical conduction mode would have difficulty meeting.

PFC CONTROLLER DESIGNS

TI has an interesting solution for this, embodied in its UCC28070 two-phase interleaved continuous-current-mode PFC controller (Fig. 4). The UCC28070 targets 300-W to multi-kilowatt power supplies, such as what might be used in telecom rectifiers or server front ends.

4. Texas Instruments' UCC28070 power-factor correction chip integrates two pulse-width modulators operating 180° out of phase. This interleaved PWM operation reduces input and output ripple currents, and the conducted-EMI filtering is easier and less expensive.

The idea behind the design of the TI chip is that for higher power levels, it is possible to parallel two PFC stages to deliver higher power. This also has thermal-management advantages, since heat losses from the two stages are spread over a wider area of the circuit board.



5. ON Semiconductor's NCL30001 employs a variable reference generator, a low-frequency voltage-regulation error amplifier, ramp compensation, and a current-shaping network. Inputs to the reference generator include a scaled ac input signal (AC_IN) and feedforward input (VFF).

The disadvantage of simple parallel operation is higher input and output ripple currents. TI says that a better alternative is to interleave the two stages so their currents are 180° out of phase. That way, the ripple currents cancel. In fact, designs with more than two phases are common (Fig. 3). In those cases, the phase angles are distributed evenly. In multiphase PFC, due to the lower output ripple currents, the number or physical size of the passive components can be smaller than in single-phase PFC, providing cost, space, and EMI-filter complexity tradeoffs.

The application often drives PFC controller design. For example, ON Semiconductor's NCL30001 LED lighting controller, which is intended for 12-V and higher LED lighting applications between 40 and 150 W, combines CCM PFC and a flyback step-down converter (Fig. 5). While a typical LED lighting power supply might consist of a boost PFC stage that powers a 400-V bus followed by an isolated dc-dc converter, the NCL30001 datasheet describes a simpler approach that shrinks the front-end converter (ON Semi calls it the PFC preregulator) and the dc-dc converter into a single power-processing stage with fewer components. It essentially needs only one MOSFET, one magnetic element, one low-voltage output rectifier, and one low-voltage output capacitor.

ON Semiconductor's datasheet provides an instructive description of the portion of the circuit shown in Figure 5. The output of a reference generator is a rectified version of the input sine wave, proportional to the feedback (FB) and inversely proportional to the feedforward (VFF) values. An ac error amp forces the average current output of the current-sense amplifier to match the reference-generator output. This output (VERROR) drives the PWM comparator through a reference buffer, and the PWM comparator sums VERROR and the instantaneous current and compares the result to a 4.0-V threshold. Suitably compensated, this provides the duty-cycle control.

TABLE 1: EN61000-3-2 CLASS D HARMONIC CURRENT LIMITS (POWER = 75 TO 600 W)
Harmonic order (n) | Relative limit (mA/W) | Absolute limit (A)
3 | 3.4 | 2.30
5 | 1.9 | 1.14
7 | 1.0 | 0.77
9 | 0.5 | 0.40
11 | 0.35 | 0.33
13 to 39 | 3.85/n | 2.25/n

TABLE 2: 80 PLUS EFFICIENCY LEVELS
Level | 115-V internal non-redundant (20% / 50% / 100% of rated load) | 230-V internal redundant (20% / 50% / 100% of rated load)
80 Plus basic | 80% / 80% / 80% | Undefined
Bronze | 82% / 85% / 82% | 81% / 85% / 81%
Silver | 85% / 88% / 85% | 85% / 89% / 85%
Gold | 87% / 90% / 87% | 88% / 92% / 88%
Platinum | 90% / 92% / 89% | 90% / 94% / 91%
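Applying Table 1 to a particular design is mechanical: each odd harmonic is allowed the relative limit times the rated power, capped by the absolute limit. A short sketch, using a hypothetical 150-W adapter as the load:

# Class D limits from Table 1: relative mA/W and absolute A per odd harmonic.
RELATIVE_MA_PER_W = {3: 3.4, 5: 1.9, 7: 1.0, 9: 0.5, 11: 0.35}
ABSOLUTE_A = {3: 2.30, 5: 1.14, 7: 0.77, 9: 0.40, 11: 0.33}

def classd_limit_amps(n, rated_watts):
    """Allowed rms current for odd harmonic n, in amps."""
    rel = RELATIVE_MA_PER_W.get(n, 3.85 / n) / 1000.0    # A per watt
    ab = ABSOLUTE_A.get(n, 2.25 / n)
    return min(rel * rated_watts, ab)

P = 150.0   # hypothetical adapter rating, W
for n in (3, 5, 7, 9, 11, 13):
    print(f"harmonic {n}: limit = {classd_limit_amps(n, P) * 1000:.0f} mA")
# e.g., the 3rd harmonic may not exceed 510 mA for this 150-W supply.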


Thermal Modeling Takes The Heat Off Of Automotive Silicon Design


Sandra Horton | Texas Instruments
ti_sandrahorton@list.ti.com

Optimizing die layout and power early in the design cycle, as well as thermal improvements at the package and system level, helps to deliver the best possible automotive designs.

Why is it important to dissipate heat? For most semiconductor applications, quickly moving the heat away from the die and out toward the larger system prevents highly concentrated areas of heat on the silicon. Typical operating temperatures for silicon die range from 105°C to 150°C, depending on the application. At higher temperatures, metal diffusion is more prevalent and eventually the device can fail from shorting. The die's reliability depends greatly upon the amount of time that's spent at the high temperatures. For very short durations, a silicon die can tolerate temperatures well above the published acceptable values. However, the device's reliability is compromised over time. Due to this delicate balance between power needs and thermal limits, thermal modeling has become an essential tool for the automotive industry. In the automotive safety industry, the current drive is for smaller assemblies with lower part counts, which forces semiconductor providers to include more functions with higher power consumption. The higher temperatures generated ultimately will affect reliability and, in turn, automotive safety. But by optimizing the die layout and power pulse timing early in the design cycle, designers can provide an optimized design with fewer silicon test builds, leading to a quicker development cycle time.


1. As the percentages show, thermal dissipation is distinctly different between a standard leaded package and a PowerPAD-style package.


SEMICONDUCTOR THERMAL PACKAGING

The automotive electronics industry uses various semiconductor package types, from small, single-function transistors to complex power packages with more than 100 leads and specially designed heatsinking capabilities. Semiconductor packaging serves to protect the die, provide electrical connection between the device and external passive components in the system, and manage thermal dissipation. For this discussion, we'll focus on the semiconductor package's ability to conduct heat away from the die. In leaded packages, the die is mounted to a metal plate called the die pad. This pad supports the die during fabrication, and it provides a good thermal conductor surface. A common semiconductor package type in the auto industry is the exposed pad, or PowerPAD-style, package (Fig. 1). The bottom side of the die pad is exposed and soldered directly to the printed-circuit board (PCB), providing a direct thermal connection from the die to the PCB. The primary heat path runs down through the exposed pad, which is then soldered to the circuit board.

The heat is then dissipated through the PCB into the surroundings. Exposed-pad-style packages conduct approximately 80% of the heat through the bottom of the package and into the PCB. The other 20% of the heat dissipates from the device leads and sides of the package. Less than 1% of the heat dissipates from the top of the package. A similar leaded package is the non-exposed-pad package (Fig. 1, again). Here, plastic fully surrounds the die pad, providing no direct thermal connection to the PCB. Approximately 58% of the heat dissipates from the leads and sides of the package, 40% from the bottom of the package, and approximately 2% from the top. Heat transfer occurs via conduction, convection, or radiation. For automotive semiconductor packaging, the primary means of heat transfer are through conduction to the PCB and by convection to the surrounding air. Radiation, when present, represents a minor portion of the heat transfer.
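The net effect of these heat paths is usually summarized as a junction-to-ambient thermal resistance, θJA, and the die temperature follows directly from it: TJ = TA + P × θJA. The thermal resistances below are assumed values for illustration, not figures from any particular package datasheet:

def junction_temp(t_ambient_c, power_w, theta_ja_c_per_w):
    """Steady-state die junction temperature from the package thermal resistance."""
    return t_ambient_c + power_w * theta_ja_c_per_w

# Illustrative comparison: an exposed-pad package soldered to a PCB typically
# shows a much lower theta-JA than the same die in a standard leaded package.
print(junction_temp(85.0, 2.0, 30.0))   # exposed pad (assumed 30 C/W)     -> 145.0 C
print(junction_temp(85.0, 2.0, 60.0))   # standard leaded (assumed 60 C/W) -> 205.0 C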
THERMAL CHALLENGES

Operation, safety, and comfort automotive systems rely heavily on semiconductors. They're now common in body electronics, airbags, climate control, radio, steering, passive entry, anti-theft systems, tire monitoring, and more. Despite many new applications for semiconductors in the automotive industry, three traditional areas still maintain individual environmental requirements: inside the vehicle cab or panel, at the firewall, and under the hood. In conjunction, three factors continue to challenge the automotive environment: high ambient temperatures, high power, and limited material thermal-dissipation properties.

Temperatures for automotive environments versus other environments are typically worlds apart. Generally, consumer-electronics temperature commonly resides at 25°C, with upper limits around 70°C. On the other hand, electronics inside the passenger compartment of the car or panel applications will run at temperatures up to 85°C (see the table). In firewall applications, where electronics are located between the engine compartment and the vehicle's cab, devices can be exposed to ambient temperatures up to 105°C. Underhood applications require operation in an environment with ambient temperatures up to 125°C. Thermal considerations are especially important in safety-related systems, such as power steering, airbags, and antilock brakes.


In braking and airbag applications, power levels of up to 100 W can be expected for short durations (~1 ms). Increased functionality demands plus concentrated multiple sources drive the die's high power. Die temperatures for some automotive-application semiconductors can reach up to 175°C to 185°C for short periods of time. Typically, this is the thermal shutdown limit for automotive devices.

Thermal demands increase with the addition of more safety features. Though airbags have been common in vehicles for longer than a decade, some cars now come with as many as 12 airbags. During deployment, multiple airbags require a sequenced operation and create a much greater thermal design challenge compared to a single traditional airbag.

Regarding thermal challenges in terms of material properties, it's no secret that there's a concerted effort to reduce cost in automotive assemblies. Plastic materials are replacing metal modules and PCB enclosures. Plastic enclosures have the benefit of being cheaper to produce. They also weigh less. The tradeoff for lower cost and reduced weight, however, is a reduction in thermal performance. Plastic materials have very low thermal conductivity, in the range of 0.3 to 1 W/mK, so they function as thermal insulators. No doubt, then, that the changeover to plastic enclosures will limit a system's heat dissipation, increasing the thermal load on the semiconductor device.

2. For eight-lead small-outline IC (SOIC) packages, die junction temperature can be lowered 25°C by fusing several leads to the die pad.

WHY MODELING?

Within the automotive semiconductor industry, modeling activity typically focuses on the thermal performance and design of a single device. Careful simplifications can be assumed to obtain modeling data. System-level simplifications, such as eliminating extraneous low-power devices from the model, using simplified rather than detailed PCB copper routing, or assuming the chassis is at a fixed temperature for heatsinking, can all streamline the thermal model for fast solver times. In addition, they will still deliver an accurate representation of the thermal impedance network.

Package-level thermal modeling makes it possible to review potential packaging design changes in advance without costly development and testing activity, eliminating material builds. Semiconductor packaging design can be varied to allow for the optimal thermal dissipation depending on the application's needs. With exposed-pad packages like PowerPAD, heat can quickly dissipate from the die to the PCB. Variations such as larger die pads, better connection to the PCB, or improved die pad design offer ways to improve a device's thermal performance. Thermal modeling is also used to review the impact of potential material changes in a device.

3. Thermal modeling software uses comma-separated variable (.csv) inputs to generate detailed die layout and show any potential hotspot locations on the die surface.



Thermal conductivity of packaging materials can vary widely, from as low as 0.4 W/mK (a thermal insulator) to more than 300 W/mK (a good thermal conductor). Using thermal modeling techniques helps balance product cost versus performance.

MAXIMUM AMBIENT TEMPERATURES FOR VEHICLE APPLICATIONS
Automotive environment | Ambient temperature (°C)
Inside car or panels | 85
Firewall (between engine and cab) | 105
Under hood | 125

VERIFICATION OF MODELING

For critical systems, careful lab-based analysis can determine thermal performance and operating temperatures. However, lab-based measurement of these systems may be time-consuming and costly. Here, thermal modeling is instrumental in addressing the system's thermal needs and satisfying operational requirements.

In the semiconductor industry, thermal modeling has become an early part of the concept testing and silicon die design process. The ideal thermal modeling flow begins months before fabricating any die. The IC designer and thermal engineer review die layout and power losses for the device. Then, the thermal modeling engineer creates a thermal model based on this review. Once thermal-model results are complete, the designer and modeling engineer review the data and tune the model to accurately reflect possible application scenarios.

Verification of all finite element analysis (FEA) modeling is highly recommended. Texas Instruments' policy is to run correlation studies comparing thermal-modeling results with a system's physical measurements. These correlation studies highlighted several areas of potential error, including material properties, power definition, and geometry simplification. While no model will be a perfect duplication of a true system, careful attention must be paid to the assumptions made during modeling to ensure the most accurate system representation.

For material properties, published values often show the bulk conductivity of a particular material. Yet in semiconductor applications, thin layers of material are commonly used, and the increased surface area of the material can cause a decrease in thermal conductivity compared to the bulk value. Carefully note the power represented in the model, because the applied power on a device during operation can vary with time. Power losses in the board or other areas in the system may also impact the die surface's true power.

TYPES OF MODELING

To aid in semiconductor package design, there are four main types of thermal models: system level, package level, die level, and die-level transient analysis. In the automotive semiconductor arena, system-level thermal modeling is important because it shows how a particular device will perform in a specific system.

At a basic level, automotive semiconductor thermal modeling takes the PCB into account because it's the primary heatsink for most semiconductor packages. The composition of the PCB, including copper layers and thermal vias, should be added to the thermal model to accurately determine thermal behavior. Furthermore, include any unique geometry, such as embedded heatsinks, or metal connections like screws or rivets. Forced airflow around the system and PCB can also play a role in the convective heat transfer from the system.

Typically, the semiconductor industry targets thermal modeling on a single, high-power device. Other power components on the PCB, though, may play a large role in the system's overall performance too. Simplifying the inputs from these packages, yet still maintaining a level of accuracy, often requires compact models. Compact models are less complex networks of thermal resistance that provide a reasonable approximation of the thermal performance of these non-critical devices.

In smaller and lower-pin-count devices, other methods can be used to improve thermal performance (Fig. 2). For instance, fusing several of the package leads to the device's die pad can significantly improve the overall junction temperature without impacting the device's operation.

MODELING ASSUMPTIONS

Die-level thermal analysis begins with an accurate representation of the silicon die layout, including any powered regions on the die. In simple cases, assume that the power is evenly distributed across the silicon. However, most die layouts have power in uneven patterns across the die, depending on functionality. This uneven power distribution can be critical to the device's overall thermal performance. For thermally critical devices, pay close attention to the power structures' locations on the die.

In some thermal software programs, the die layout can be entered using comma-separated variable (.csv) inputs (Fig. 3). This allows for an easy transfer of information between die layout and thermal-modeling software. Depending on the device's complexity and the power level, these powered regions on the die can vary from two to three locations to several hundred. The thermal-modeling engineer should work closely with the IC designer to identify the powered regions for inclusion in the thermal model. Often, small, very low-powered regions can be grouped into larger regions, simplifying the thermal model while still accounting for the device's overall power. Similarly, in a thermal model, background or quiescent power can be used across the die's surface to account for a large percentage of the non-critical low-power die structures.

Device functionality frequently requires high power over small areas on the die. These high-powered regions can lead to localized heating regions, which may be significantly hotter than the surrounding silicon. Thermal modeling helps highlight thermal problems in which clusters of medium-power silicon products located in close proximity may cause residual heating and possible thermal stress to the die under assessment.
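The .csv hand-off between layout and thermal tools can be as simple as one row per powered region. The column layout below is invented for illustration (each tool defines its own format); the point is the grouping step the text describes, where regions below a power threshold are folded into a single background value:

import csv, io

# Hypothetical power map: x, y, width, height in mm; power in W.
power_map_csv = """x,y,w,h,power
0.2,0.3,0.5,0.5,1.20
1.0,0.3,0.4,0.8,0.85
2.1,1.5,0.1,0.1,0.02
0.4,2.0,0.1,0.2,0.01
"""

regions = list(csv.DictReader(io.StringIO(power_map_csv)))
threshold_w = 0.05

total_w = sum(float(r["power"]) for r in regions)
critical = [r for r in regions if float(r["power"]) >= threshold_w]
critical_w = sum(float(r["power"]) for r in critical)
background_w = total_w - critical_w      # lumped as quiescent power across the die

print(f"{len(critical)} critical regions modeled individually")
print(f"{background_w:.2f} W spread across the die as background power")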


Models can also aid in the placement or calibration of embedded thermal sensors on the die. Ideally, a temperature sensor is placed at the center of the highest-powered region on the die. Yet due to layout constraints, this is often not possible. When located away from the center of the powered region, the temperature sensor is unable to read the device's full maximum temperature. Thermal models can be used to determine the thermal gradient across the die, including at the sensor location. Then the sensor is calibrated to account for the temperature difference between the hottest region and the sensor location.

The aforementioned model types all assume a constant dc power input. In actual operation, though, device power varies with time and configurability. By designing the thermal system to account for only the worst-case power, the thermal load may become prohibitive. Transient thermal response can be reviewed using one of several different methods. The simplest method is to assume a dc power source on the die, then track the thermal response of the device as a function of time. A second method inputs a varying power source and then uses thermal software to determine the final steady-state temperature.

The third and most useful transient modeling style is to view the response with time of varying power in multiple die locations (Fig. 4). With this method, you can catch interactions between devices that may not be apparent under normal conditions. Transient modeling is also helpful in viewing the full duration of certain die operations that occur separate from normal device function, e.g., a device's power-up or shutdown mode. In many automotive systems, such as braking actuation or airbag deployment, the device power remains at a low level for the bulk of the device's lifetime. In the case of an airbag system during deployment, the power pulse can reach very high power for short durations.

4. Thermal response can vary with time across the surface of a semiconductor device. In this case, regions across the die have been powered in a staggered fashion.
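The first transient method, stepping a dc power level onto the die and tracking its temperature over time, can be approximated outside a full FEA tool with a simple Foster-style RC thermal network. The two resistance/time-constant pairs below are invented for illustration; a real network would be fitted to measured or simulated data for the actual package and board:

import math

# Illustrative Foster network: junction-to-ambient split into two RC pairs.
FOSTER = [(12.0, 0.002), (25.0, 0.150)]   # (thermal R in C/W, time constant in s)

def junction_rise(power_w, t_seconds):
    """Temperature rise above ambient after a power step applied at t = 0."""
    return power_w * sum(r * (1.0 - math.exp(-t_seconds / tau)) for r, tau in FOSTER)

ambient_c = 105.0          # firewall ambient from the table above
power_w = 1.5              # assumed step load
for t in (0.001, 0.01, 0.1, 1.0):
    print(f"t = {t:6.3f} s  ->  Tj = {ambient_c + junction_rise(power_w, t):.1f} C")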
IMPROVEMENTS

Design optimization and lower overall temperatures are the goals of thermal modeling for the automotive semiconductor industry. Lowering the operating die junction temperatures improves a device's reliability. Small enhancements to the system, board, package, or die potentially lead to dramatic improvements in final temperatures. Device and system limitations can eliminate some of these suggestions, though.

Methods for boosting thermal performance include airflow, conductive heat paths, or external heatsinking. Another is to provide more metal area for heat dissipation, such as by adding external heatsinks, metal connection to the chassis, more layers or denser copper layers on a PCB, thermally connected copper planes, and thermal vias. Thermal vias located below the exposed pad of a device help to quickly carry heat away from the device, as well as speed dissipation to the rest of the circuit board. Semiconductor device packages are designed to quickly move the heat away from the die and to the larger system.

A semiconductor package can be improved thermally with higher-conductivity materials, direct attachment to a PCB such as PowerPAD, leads fused to the die pad, or mounting locations for external heatsinks. The semiconductor die itself allows for many possible ways to minimize overall temperature. Of course, the best way to lower temperature is to lower power. For silicon circuit design and layout, good thermal practices include larger heat-dissipation areas, locating powered regions away from the edges of the die, using long and narrow powered regions instead of square regions, and providing adequate space between high-powered regions. Silicon is a good thermal conductor with a conductivity of approximately 117 W/mK. Allowing the maximum amount of silicon around a powered region improves the device's thermal dissipation.

For transient power on a die, staggering power pulses to decrease instantaneous power will lower overall temperatures. This results in either a long lag time between power pulses so that the heat can dissipate, or high-power events being shared over several areas on the die. These transient variations allow the thermal system to recover before more heat is applied. By carefully designing the die, package, PCB, and system, a device's thermal performance can be dramatically improved.

REFERENCES
1. To learn more about thermal modeling and other automotive semiconductor devices, visit www.ti.com/automotive-ca.
2. Download an application note for PowerPAD at www.ti.com/powerpad_slma002d-ca.

Sandra Horton, Analog Packaging Group at Texas Instruments, holds a BSME from Texas A&M University, College Station, Texas.
