Intel Data Center Design


Transcript of Intel Data Center Design

Page 1: Intel Data Center Design

The NYS Forum's May Executive Committee Meeting Building an Energy Smart IT Environment

Energy Efficient Data Centers

Daniel Costello, IT@Intel, Global Facility Services DC Engineering

Page 2: Intel Data Center Design

Legal Notices

This presentation is for informational purposes only. INTEL MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS SUMMARY.

BunnyPeople, Celeron, Celeron Inside, Centrino, Centrino logo, Core Inside, Dialogic, FlashFile, i960, InstantIP, Intel, Intel logo, Intel386, Intel486, Intel740, IntelDX2, IntelDX4, IntelSX2, Intel Core, Intel Inside, Intel Inside logo, Intel. Leap ahead., Intel. Leap ahead. logo, Intel NetBurst, Intel NetMerge, Intel NetStructure, Intel SingleDriver, Intel SpeedStep, Intel StrataFlash, Intel Viiv, Intel vPro, Intel XScale, IPLink, Itanium, Itanium Inside, MCS, MMX, Oplus, OverDrive, PDCharm, Pentium, Pentium Inside, skoool, Sound Mark, The Journey Inside, VTune, Xeon, and Xeon Inside are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

*Other names and brands may be claimed as the property of others.

Copyright © 2007, Intel Corporation. All rights reserved. Last Updated: Aug 28, 2006

Page 3: Intel Data Center Design

Objective

High-density computing lowers TCO and increases energy efficiency

– TCO is the driving force

– Adoption will not happen unless there is a financial incentive

Demonstrate how Intel is improving energy efficiency

Page 4: Intel Data Center Design

New Demand Drivers Continue to Challenge Our IT Group

Demand drivers, both linear and exponential:
– Integration of digital and analog circuits
– Increasing platform features
– Increase due to platform and multi-core validation
– Corporate acquisitions: 70 since 1996, each with a data center
– Process technology

Growth1 from 1996 to 2006:

                      1996     2006      Increase
Design Engineers      8,300    20,000    140%
Compute Servers       1,062    65,000    6,000%
Design Data Centers   8        75        937%
Global Design Sites   8        64        700%

1. This growth is specific to the Intel silicon design engineering environment and does not include overall corporate IT demand (e.g., ~25,000 servers in 2005 to support silicon design).

Page 5: Intel Data Center Design

Data Center Assets

– 62% of DCs are more than 10 years old
– More than 1/3 forecast new DC construction
– 72% non-mission-critical applications1

Chart panels: Age of Data Centers; Plans to Build a New Data Center; % of Applications in each Tier

Applications drive the need for DC capacity (not hardware)

1. Source: Data Center Operations Council research. Tier 4 applications have more demanding service levels.

Page 6: Intel Data Center Design

Data Center Consolidation: What is Intel’s Strategy?

“Right sizing” model1 to rebalance the number, locations, and use of the data centers

Our cost analysis indicates large global and regional hubs
– (Eastern Europe and Asia bandwidth constraint)

Choice of location driven by TCO (see the breakdown sketch below):
– Construction/expansion costs (10 percent of total cost of ownership [TCO])
– Local utility (power/cooling) costs – economizer usage (25 percent of TCO)
– WAN maturity and bandwidth costs / operations headcount costs (10 percent of TCO)
– IT hardware (55 percent of TCO; sales tax adds to this)

Dependent on virtualization strategy

Consolidation is opportunistic to maximize ROI

1. This model is for illustration purposes only and does not represent actual Intel data center locations.
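As a rough illustration of how these TCO shares guide site selection, the sketch below splits a hypothetical total TCO into the category shares quoted above; the total dollar figure is an assumption for illustration, not an Intel number.

```python
# Rough sketch: break a data center TCO estimate into the category shares
# quoted on this slide. The total TCO figure used below is hypothetical.

TCO_SHARES = {
    "construction/expansion": 0.10,
    "local utility (power/cooling)": 0.25,
    "WAN bandwidth + operations headcount": 0.10,
    "IT hardware (plus sales tax)": 0.55,
}

def breakdown(total_tco_usd: float) -> dict:
    """Split a total TCO figure into the slide's category shares."""
    return {category: total_tco_usd * share for category, share in TCO_SHARES.items()}

if __name__ == "__main__":
    for category, cost in breakdown(100_000_000).items():  # hypothetical $100M TCO
        print(f"{category:40s} ${cost:,.0f}")
```

Because utilities and IT hardware dominate the shares, locations with cheap power (and economizer-friendly climates) and favorable tax treatment move the total far more than construction cost does.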

Page 7: Intel Data Center Design

Why Density versus Space

Savings
– Would require the same megawatts of power to run the same number of servers in a dense or spread-out configuration
  • Would require longer conductor runs if spread out
– Would require the same or more tons of cooling for "X" number of servers
  • Spreading out increases the space to be cooled
  • Drives up chiller plant size
  • Additional fan capacity would be required
– Spreading out cabinets adds additional cost in building square feet and raised metal floor (RMF) space
– Decreases the heating, ventilation, and air conditioning (HVAC) to uninterruptible power supply (UPS) output ratio by increasing the efficiency of the cooling systems
– A high-density DC is up to 25% more energy efficient than a low-density DC

Additional Scope
– 42°F thermal storage for the uninterruptible cooling system (UCS)
– Fully automated data center and integrated control systems
– Sound attenuation – NC60

Page 8: Intel Data Center Design

Large Data Center Construction Economies of Scale (Modular Approach1)

• Design hub data centers to expand modularly; add as demand warrants
• New Intel data centers targeting 500 watts per square foot and 15 kilowatts per rack

• It is critical to optimize data center design for thermal management2

• Airflow, cooling, cabling, rack configurations, and so on

1. Intel® IT currently defines a given module as 6,000 square feet of data center floor space.
2. Source: Intel white paper, June 2006, "Increasing Data Center Density While Driving Down Power and Cooling Costs":

www.intel.com/business/bss/infrastructure/enterprise/power_thermal.pdf

Cost chart: Intel goal cost per module versus the industry average for a Tier II/III data center (USD 11,000 – 20,000 per kilowatt); after the fifth module, diminishing returns (see the back-of-envelope sizing below).

NOTE: The first two modules are based on actual costs to build a high performance data center at Intel. Modules 3 through 7 are projections. All timeframes, dates, and projections are subject to change.
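A quick back-of-envelope check of what one module represents, using the figures on this slide (6,000 square feet per module at 500 watts per square foot) and the quoted industry cost range; applying the per-kilowatt range to the module's full IT capacity is an assumption made for this sketch.

```python
# Back-of-envelope module sizing from the figures on this slide.
MODULE_SQFT = 6_000        # Intel IT's definition of one module (footnote 1)
WATTS_PER_SQFT = 500       # target density for new Intel data centers

module_kw = MODULE_SQFT * WATTS_PER_SQFT / 1_000   # 3,000 kW (3 MW) of IT load per module

# Industry-average build cost for Tier II/III data centers (USD per kW, from the chart).
industry_low, industry_high = 11_000, 20_000

print(f"IT capacity per module: {module_kw:,.0f} kW")
print(f"Industry-average build cost for that capacity: "
      f"${module_kw * industry_low / 1e6:,.0f}M - ${module_kw * industry_high / 1e6:,.0f}M")
```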

Page 9: Intel Data Center Design

Data Center Processor Efficiency Increases

2002: 1,000 sq. ft., 128 kW, 512 servers in 25 server racks, 3.7 teraflops
Today: 30 sq. ft., 21 kW, 53 blades in 1 server rack, 3.7 teraflops
2012: ?

A greater than 6x energy efficiency increase (see the arithmetic below)

Power will continue to be the limiting factor (increased performance per watt and per square foot)

1. The above testing results are based on the throughput performance of Intel design engineering applications relative to each new processor and platform technology generation.
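The ">6x" figure can be sanity-checked from the numbers above: the same 3.7-teraflop throughput delivered from 21 kW instead of 128 kW, and from 30 square feet instead of 1,000.

```python
# Sanity check of the ">6x" energy efficiency claim using the figures on this slide.
TFLOPS = 3.7

kw_2002, sqft_2002 = 128, 1_000   # 512 servers in 25 racks
kw_today, sqft_today = 21, 30     # 53 blades in 1 rack

energy_efficiency_gain = (TFLOPS / kw_today) / (TFLOPS / kw_2002)   # ~6.1x
density_gain = (TFLOPS / sqft_today) / (TFLOPS / sqft_2002)         # ~33x

print(f"Performance per kW improvement:          {energy_efficiency_gain:.1f}x")
print(f"Performance per square foot improvement: {density_gain:.1f}x")
```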

Page 10: Intel Data Center Design

Accelerated Server Refresh

New features we will deploy in the data centers
• Demand-based switching (DBS) and Data Center Power Thermal (DPT) advanced management dynamically tailor power to workloads when peak performance is not needed and allow management of power at the rack level
• Virtualization technology (virtualization at the hardware level) combined with our distributed job scheduler will provide on-demand OS provisioning

Data center power and heat reduction
• Intel® Xeon® processor 5300 series and forthcoming multi-core designs utilize the power-efficient mobile architecture
• Reduces power consumption by 40 percent (65 watts versus 110 watts)1
• Blade servers offer an additional 25 percent power efficiency (direct current backplane)2 (see the rough sketch below)
• Offset data center costs in cooling and floor space per given workload
• The above are key factors in equipment selection

1. Source: Intel based on SPECint_rate_base2000* and thermal design power. Relative to 2H'05 single-core Intel® Xeon® processor ("Irwindale").
2. Based on internal Intel testing Q2 2006 using equivalent systems in a rack configuration versus a blade configuration.
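A rough sketch of how the two quoted savings could stack per processor. Treating the 25 percent blade-level efficiency as compounding on top of the 40 percent processor-level reduction is an assumption of this sketch, not something the slide states.

```python
# Rough sketch: combine the two quoted savings per processor socket.
# ASSUMPTION (not stated on the slide): the 25% blade-level efficiency
# applies multiplicatively on top of the 40% processor-level reduction.

baseline_w = 110     # 2H'05 single-core Xeon ("Irwindale"), per footnote 1
multicore_w = 65     # Xeon 5300-series figure from the slide
blade_factor = 0.75  # blades quoted as an additional 25% power efficiency

processor_reduction = 1 - multicore_w / baseline_w   # ~41%
combined_w = multicore_w * blade_factor              # ~49 W if the savings compound
combined_reduction = 1 - combined_w / baseline_w     # ~56%

print(f"Processor-level reduction:                    {processor_reduction:.0%}")
print(f"Hypothetical combined reduction with blades:  {combined_reduction:.0%}")
```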

Page 11: Intel Data Center Design

Key Metrics - ACAE

In the past, air conditioning (A/C) systems have been poorly utilized in general purpose and manufacturing computing data centers.
– Packaged computer room air conditioning (CRAC) units are capable of 20°F to 28°F Delta-T coil conditions (Temp In minus Temp Out)
– A/C system Delta-Ts have been measured as low as 4°F

Air Conditioning Airflow Efficiency (ACAE) is defined as "the amount of heat that can be removed per standard cubic foot per minute (SCFM) of cooling air" (watts of heat per SCFM). In our studies, we have evaluated the advantages of increasing ACAE (see the sketch below):
– Reduce initial cost and noise levels (less A/C equipment)
– Reduce operating costs (less fan horsepower)
– Better overall cooling efficiency (less kW per ton of refrigeration)
– Reduce facility support area per kilowatt of IT equipment
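ACAE is tied directly to the system Delta-T through the standard sensible-heat relation for air, roughly q[W] ≈ 0.316 × CFM × ΔT[°F] at sea-level air density. That constant is a textbook approximation assumed by this sketch, not a figure from the slide; note that the 43°F case lands near the ~13.7 W/CFM quoted on the CFD model slide later in the deck.

```python
# Sketch: relate ACAE (watts of heat removed per CFM) to the air-side Delta-T.
# Uses the standard sea-level sensible-heat approximation for air:
#   q[BTU/hr] ~= 1.08 * CFM * dT[F]   ->   q[W] ~= 0.316 * CFM * dT[F]
# The 0.316 constant is an assumption of this example, not an Intel figure.

def acae_w_per_cfm(delta_t_f: float) -> float:
    """Approximate ACAE (W/CFM) for a given air-side Delta-T in degrees F."""
    return 0.316 * delta_t_f

for dt in (4, 20, 28, 43):   # Delta-T values quoted on this slide and the CFD slide
    print(f"Delta-T {dt:>2} F -> ~{acae_w_per_cfm(dt):.1f} W/CFM")
```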

Page 12: Intel Data Center Design

Key Metrics – PUE/DCE

The IT Equipment Power is defined as the effective power used by the equipment that is used to manage, process, store, or route data within the raised floor space.

The Facility Power is defined as all other power to the data center required to light, cool, manage, secure, and power (losses in the electrical distribution system) the data center.

The industry average PUE is estimated at 2, or a DCE of 50% (see the sketch below).
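From the two definitions above, PUE and DCE follow as simple ratios, with DCE the reciprocal of PUE; the load numbers in this example are hypothetical.

```python
# PUE and DCE computed from the definitions above. Example loads are hypothetical.

def pue(it_kw: float, facility_kw: float) -> float:
    """Power Usage Effectiveness: total data center power per unit of IT power."""
    return (it_kw + facility_kw) / it_kw

def dce(it_kw: float, facility_kw: float) -> float:
    """Data Center Efficiency: fraction of total power reaching IT equipment (1/PUE)."""
    return it_kw / (it_kw + facility_kw)

# At the quoted industry average, facility overhead roughly equals the IT load.
it, facility = 1_000, 1_000   # kW, hypothetical
print(f"PUE = {pue(it, facility):.1f}, DCE = {dce(it, facility):.0%}")   # PUE = 2.0, DCE = 50%
```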

Page 13: Intel Data Center Design

Intel Data Center Cooling Development

Servers and storage across the board are growing in kW power consumption and corresponding heat output, while still increasing performance per watt.

Example of an older data center with unmanaged airflow:
– 67 WPSF; 2 kW-4 kW cabinets; ACAE at 4.7 W/CFM; bypass air at ≥35 percent

By installing blanking panels in cabinets, removing cable arms, blocking all cable openings in the RMF, and placing perforated floor tiles only in the cold aisles, we improved airflow management to get:
– 135 WPSF; 4 kW-8 kW cabinets; ACAE at 5.6 W/CFM; bypass air at 25 percent

Next, we worked with multiple vendors to isolate (eliminate vena contracta) supply and return air using chimney cabinets or hot aisle enclosures, complete a CFD model, upgrade floor tiles to grates, and clear all utilities below the RMF out of the air stream:
– 247 WPSF; 8 kW-14 kW cabinets; ACAE at 7.5 W/CFM; bypass air at 10 percent

The latest data centers were purposely built to host high-density systems in a two-story building, a one-story building with a 36" RMF and no utilities below the RMF, and even without an RMF:
– ~500 WPSF; 13 kW-17 kW average per cabinet; ACAE at up to 10.8 W/CFM; bypass air at <10 percent; CRAC units removed from the raised floor (see the airflow sketch below)

WPSF=watts per square foot; kW=kilowatt; W=watt; ACAE=air conditioning airflow efficiency; W/CFM=watts per cubic feet per minute; CFD=computational fluid dynamics; RMF=raised metal floor
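One practical consequence of the ACAE improvements listed above is that each kilowatt of cabinet load needs less cooling air (and less fan power). The sketch below simply divides cabinet load by ACAE to get the required CFM; the specific cabinet/ACAE pairings are illustrative combinations of figures from this slide.

```python
# Sketch: airflow a cabinet needs at different ACAE levels (CFM = watts / ACAE).
# The cabinet-size / ACAE pairings below are illustrative, taken from this slide.

def required_cfm(cabinet_kw: float, acae_w_per_cfm: float) -> float:
    """Cooling airflow (CFM) needed to remove a cabinet's heat at a given ACAE."""
    return cabinet_kw * 1_000 / acae_w_per_cfm

for cabinet_kw, acae in [(4, 4.7), (8, 5.6), (14, 7.5), (15, 10.8)]:
    print(f"{cabinet_kw:>2} kW cabinet at ACAE {acae:>4} W/CFM -> "
          f"{required_cfm(cabinet_kw, acae):,.0f} CFM")
```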

Page 14: Intel Data Center Design

Chimney Cabinet

Page 15: Intel Data Center Design

Two-Story Vertical Flowthrough High-Performance Data Center

Diagram labels: Plenum Area; Cooling Coils; Electrical Area; 240 Server Cabinets; Non-ducted Hot Air Return Space Above Ceiling; 600 Ampere Busway

Page 16: Intel Data Center Design

Hot Aisle Panel Closure System

– Slider and infill panel system accommodates various heights
– Storefront doors at end
– Back-to-back cabinets
– Plascore filler panel enclosures

Page 17: Intel Data Center Design

CFD Model Output: kW per Rack

Cooling air leaking through the floor is used for cooling and as bypass air to temper the system delta-T to 43°F.

1,250 watts per square foot (WPSF); 30 kilowatts (kW) per cabinet; air conditioning airflow efficiency (ACAE) of 13.7 watts per cubic foot per minute (W/CFM)

Page 18: Intel Data Center Design

Decoupled Wet Side Economizer System

Page 19: Intel Data Center Design

Intel Data Center Energy Efficiency

HVAC performance index (%) = kW UPS output / kW HVAC (a small worked example follows below)

kW=kilowatt; UPS=uninterruptible power supply

1 “Data Centers and Energy Use - Let’s Look at the Data.” ACEEE 2003 Paper #162. William Tschudi and Tengfang Xu, Lawrence Berkeley National Laboratory; Priya Steedharan, Rumsey Engineers, Inc.; David Coup, NYSERDA; Paul Roggensack, California Energy Commission.
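As a small illustration of the index above, with hypothetical load figures: higher values mean less HVAC power per unit of IT load delivered at the UPS output.

```python
# HVAC performance index from the formula above. Load figures are hypothetical.
ups_output_kw = 1_000   # IT load measured at the UPS output
hvac_kw = 400           # power drawn by the cooling (HVAC) systems

index_pct = ups_output_kw / hvac_kw * 100
print(f"HVAC performance index: {index_pct:.0f}%")   # 250%; higher is better
```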

Page 20: Intel Data Center Design

Industry Moving to 45 nm: Benefits

Compared to 65 nm technology, 45 nm technology will provide:
– ~2x improvement in transistor density, for either smaller chip size or increased transistor count
– >20% improvement in transistor switching speed, or >5x reduction in source-drain leakage power
– >10x reduction in gate oxide leakage power
– ~30% reduction in transistor switching power

Providing the Foundation for Improved Performance/Watt

Page 21: Intel Data Center Design

Tie it all Together

– Applications move to be virtual (remote and not dedicated)
– Enable consolidation of data centers to fewer instances
– Select data center hubs in cost-optimized, energy-efficient locations
– Server refresh along with data center construction
– High-density data centers are more energy efficient and cost less per kW than lower density
  • Greater than 6x compute-to-energy efficiency since 2002
  • Intel is running airflow data centers at ~500 watts per square foot (WPSF); CFD modeling shows we can increase airflow data centers to 1,250 WPSF (30 kW per rack)
  • Use economizers to maximize free cooling and lower energy costs
  • Increase supply temperature to increase free cooling and lower energy costs (55°F-95°F); data centers are at 72°F supply air
– Performance per watt for the platform and the DC is required
– $ is the driving force. Adoption will not happen unless there is a financial incentive (<TCO)
– An extended invitation to visit Intel's Data Center Summit