Emerging Infrastructure and Data Center Architecture – Principles and Practice
Richard Fichera
Director, BladeSystems Strategy
BladeSystem & Infrastructure Software
Today’s Agenda
• The problem – complexity and physics catch up with the data center
• The building blocks – servers, storage and fabrics
• Evolution in data center architectures
• Infrastructure in motion – VMs, automation and orchestration
• Infrastructure and data center transformation
Background – Overwhelming Complexity and Increasing Scale
Shifting Costs Define Future Investments
Many Servers, Much Capacity, Low Utilization = $140B in unutilized server assets
[Chart: worldwide server installed base (millions of units) and spending (US$B), 1996–2010 – new server spending stays roughly flat while server management and admin costs grow ~4x and power and cooling costs grow ~8x.]
Source: IDC, Virtualization and Multicore Innovations Disrupt the Worldwide Server Market, March 2007
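The "$140B unutilized assets" headline is, at bottom, simple arithmetic: the value of the installed server base multiplied by the share of capacity that sits idle. A minimal sketch of that arithmetic, using assumed round numbers (installed base, average cost and utilization are illustrative, not IDC's actual model inputs):

```python
# Hypothetical illustration of the arithmetic behind an "unutilized
# assets" figure. All inputs are assumed round numbers chosen so the
# result lands near the slide's $140B headline.

def unutilized_asset_value(installed_base_units, avg_server_cost, avg_utilization):
    """Dollar value of server capacity that is paid for but not used."""
    installed_value = installed_base_units * avg_server_cost
    return installed_value * (1 - avg_utilization)

# e.g. ~35M installed servers, ~$5,000 average cost, ~20% average utilization
idle_value = unutilized_asset_value(35_000_000, 5_000, 0.20)
print(f"${idle_value / 1e9:.0f}B")  # → $140B
```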
Infrastructure Building Blocks – Fundamental Physics and Trends
Chef’s Special – Sautéed Data Center
[Chart: power density in watts per square inch, comparing a Wolfe cooktop to an x86 CPU (scale 0–140 W/sq in)]
Legacy Thermal Management Was an Afterthought
• Overall PUE was often in the neighborhood of 2.0
• More energy was used to remove the heat than to do productive work
• For decades the only real decisions were water or air, and how many CRACs
Cooling Loads Dominate the Data Center
[Chart: percentage of data center power used by load category; preliminary studies suggest cooling dominates. Source: C.G. Malone & Uptime Institute]
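PUE (Power Usage Effectiveness) is total facility power divided by the power that actually reaches IT equipment, so a PUE of 2.0 means a watt of overhead for every watt of compute. A quick sketch of that ratio (the kW figures are assumptions for illustration):

```python
# Minimal sketch of the PUE arithmetic: total facility power divided
# by power delivered to IT equipment. Input figures are hypothetical.

def pue(it_power_kw, cooling_kw, distribution_loss_kw):
    total = it_power_kw + cooling_kw + distribution_loss_kw
    return total / it_power_kw

# Legacy facility: roughly as much power removing heat as doing work
legacy = pue(it_power_kw=500, cooling_kw=450, distribution_loss_kw=50)     # → 2.0
# Optimized facility, per the chip-to-facilities goal discussed below
optimized = pue(it_power_kw=500, cooling_kw=100, distribution_loss_kw=25)  # → 1.25
```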
Power & Cooling Will Continue to Dominate Data Center Architecture
[Chart: relative data center spending per server unit, 2007–2009, split across power & cooling, compute, network, admin and storage; spending normalized so CY2008 = 100%. Callout: collapse complexity and take cost out.]
Source: IDC forecast and report, Datacenter of the Future II, January 2009
The Power & Cooling Chain is Complex – Optimizing from Chip to Facilities
Net-net: change the PUE from 2.0+ to 1.25 or less
• Podular DC design: up to 45% cooling cost savings
• Low-power processors: up to half the power consumption
• Disk drives: 2.5" at 9 watts vs. 18 watts for 3.5"
• Power supplies: 90%+ efficient supplies
• Power-optimized servers: 18% less power
• Advanced power management: 10–20% (with group power management)
• Basic blade enclosure: 25% cost savings to power & cool
• Power distribution: 3%
• Virtualization/consolidation: up to 40% reduction in power cost for data centers
• Storage thin provisioning/dynamic capacity management: saves up to 45%
• Net effect: up to 60% power savings
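Per-stage savings in a chain like this do not simply add up: each stage removes a fraction of whatever power the previous stage left. A sketch of that compounding, treating a subset of the slide's server-side stages as independent multiplicative factors (a simplifying assumption, not how any specific facility model works):

```python
# Why chained savings compound rather than add: each stage leaves a
# fraction of the previous stage's power. Percentages come from the
# slide; independence of the stages is an assumption.

stage_savings = {
    "virtualization/consolidation": 0.40,
    "blade enclosure power & cooling": 0.25,
    "power-optimized servers": 0.18,
    "group power management": 0.15,   # midpoint of the 10-20% range
    "power distribution": 0.03,
}

remaining = 1.0
for stage, saving in stage_savings.items():
    remaining *= (1.0 - saving)

print(f"overall reduction: {1 - remaining:.0%}")  # → overall reduction: 70%
```

Note the compounded result exceeds the sum-free "up to 60%" headline only modestly, even though the raw percentages sum to over 100% – which is the point of multiplying rather than adding.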
Servers – Market and Drivers
• Market
− The x86 server market represents approximately 8,000,000 servers per year, and will remain the center of innovation and investment
− The market splits roughly 35/50/15 across tower/rack/blade form factors, with blades and extreme scale-out as the fastest-growing segments
• Key drivers
− Acquisition cost will always be important
− Energy consumption has become a priority, but focus will shift to larger aggregates as marginal gains on individual servers get smaller
− Total infrastructure cost, including management, becomes a focus at the system/DC level. This is the jumping-off point for debates about unified fabrics, shared and virtualized I/O, new virtualization management models, etc.
Server Performance
• Server performance will continue to increase
• By 2010, a 2-socket server will have approximately 4–6 times the performance of the same server in 2008
• Continued improvements in architecture along with density
− Niche architectures will have freedom to embed other system elements on chip – comms, crypto, etc.
[Chart: GOPs per socket, 2007–2011, rising from the single digits toward 20+]
8 June, 2009
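The "4–6x by 2010" claim above implies a per-year growth factor equal to the square root of the two-year multiple. A quick check of that implication:

```python
# Back out the annual growth factor implied by an N-x improvement
# over a given number of years (here, the slide's 4-6x over 2 years).

def annual_factor(multiple, years):
    return multiple ** (1 / years)

low, high = annual_factor(4, 2), annual_factor(6, 2)
print(f"implied annual growth: {low:.2f}x - {high:.2f}x")  # → 2.00x - 2.45x
```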
Processor Trends
• Silicon compaction continues (65nm, 45nm, 32nm)
• Higher levels of functional block integration → large gate counts
− Caches, memory controller(s), I/O, TPM
• All server processors are going to NUMA using processor links (no more FSB)
− More efficient coherency protocols (Intel: Home Snooping; AMD: HT Assist)
• More, and faster, interfaces → large pin-count packages
− One or more processor links → more flexible designs
• Intel QPI
• AMD HT
− Multiple memory links → flexible memory configurations
− Integrated I/O links (PCIe3, USB3) → I/O closer to processor & memory
• Core count increases continue (4, 6, 8, 10, 12, 16)
• Core clock frequency increases slow down (topping out around 3GHz)
• More physical memory address bits (Intel: 46; AMD: 48)
• Wide range of power (TDP) bins (Intel: 37–150W; AMD: 45–140W)
− Depends on core count, cache size, coherent link count
Memory Trends
• Increasing DDR3 speeds, with tradeoffs on the number of DIMMs per channel (DPC)
• DRAM chip capacity increases
• DIMM capacity increases
• 8GB DIMMs will be linearly priced in 2010
• Reduced DIMM power rails and consumption
• DIMM interfaces (DDR, SMI/VMSE) changing to address DDR bus limitations
• Non-volatile components will add levels to the memory/storage hierarchy
Server Futures
• Continued escalation of core count and memory
− Expect differentiation in choice of on-board peripherals and accelerators at both chip and board level
− Continual pressure toward denser, higher-layer-count boards
− “Communications radius” effects, SI and connector limits
• Changing options for design
− Link-based connections for more flexible design
− More options for local and near storage
• Design differentiation as requirements bi/trifurcate
− General-purpose, scale-out and virtualization designs
• Value increasingly in packaging, rack-scale and larger integration
Changing Focus for Server Design
• Server design is increasingly merging with DC design for rack-level and larger aggregates
• As designs become more aggregate, the optimizations become more complex
• Server design has historically focused on the chip-to-chassis domain
• Increased demand for scale-out is shifting the focus to rack, module and entire-DC-scale designs
Storage Density
• Storage density will follow a pattern similar to server performance
• By 2010–11, usable densities will exceed 1 PB/rack
• Expect significant changes and differentiation in
− Storage services
− Packaging
− Choices of connection fabric
[Chart: TB per rack, 2007–2011, rising past 1,000 TB]
Block Storage Device Trends
• Cost competitiveness drove HDD industry consolidation
• HDD interfaces are going to fast serial links: SAS/SATA
− SAS is growing to be the interface of choice in the enterprise
− FC HDD growth is flat or shrinking
• Switched SAS also enables a storage fabric for shared block storage
− But lots of things need to be developed for complete solutions
• HDD capacity continues to increase, while rpm tops out at 15K
− HDD areal density ~30–40% AGR [SFF 0.5TB in ’10, 1TB in ’11]
• SFF dominates in the enterprise
− Enterprise SFF 10K adoption growing (largest segment) while LFF 15K volume shrinks
• Flash storage is disruptive
− SSD $/GB cross-over with SFF SAS 15K rpm in ’11–’12
• 256G/512G in ’10, 1TB in ’11
− PCIe-based flash storage significantly improves storage I/O
− New storage hierarchies and models, including memory cache, disk cache, I/O accelerators
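The SSD/HDD cross-over prediction above is a race between two price curves declining at different rates. A sketch of that logic, with starting prices and decline rates that are purely hypothetical (chosen only so the answer lands near the slide's '11–'12 window):

```python
# Find the year flash $/GB undercuts 15K SAS $/GB, given assumed
# starting prices and annual decline rates. All numbers hypothetical.

def years_to_crossover(ssd_price, hdd_price, ssd_decline, hdd_decline):
    years = 0
    while ssd_price > hdd_price:
        ssd_price *= (1 - ssd_decline)
        hdd_price *= (1 - hdd_decline)
        years += 1
    return years

# e.g. from 2009: SSD ~$8/GB falling 50%/yr vs 15K SAS ~$2/GB falling 15%/yr
print(years_to_crossover(8.0, 2.0, 0.50, 0.15))  # → 3 (i.e. ~2012)
```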
Storage – Virtualized Data Path & Services
[Diagram: storage virtualization manager servers sit on the control path; data path modules from IBM, Sun, EMC and HP sit between hosts and physical media in a reference storage architecture, presenting LUNs and delivering services – snapshot, clones, migration, thin provisioning/dedup, mirroring.]
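Of the services listed, thin provisioning is the one with the simplest core mechanism: a LUN advertises its full size to the host, but physical capacity is drawn from a shared pool only as blocks are actually written. A minimal sketch of that idea (class and method names are illustrative, not any vendor's API):

```python
# Toy model of thin provisioning: many LUNs over-subscribe one
# physical pool; capacity is consumed only on write.

class ThinPool:
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.allocated_gb = 0

    def allocate(self, gb):
        if self.allocated_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted - add physical media")
        self.allocated_gb += gb

class ThinLUN:
    def __init__(self, pool, advertised_gb):
        self.pool = pool
        self.advertised_gb = advertised_gb  # what the host sees
        self.written_gb = 0

    def write(self, gb):
        self.pool.allocate(gb)  # physical space drawn only on demand
        self.written_gb += gb

pool = ThinPool(physical_gb=1000)
luns = [ThinLUN(pool, advertised_gb=500) for _ in range(10)]
luns[0].write(200)
# 5,000 GB advertised against 1,000 GB physical; 200 GB actually consumed
```

The operational catch, implied by the slide's pairing with "dynamic capacity management," is that the pool must be monitored and grown before writes outrun physical media.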
Data Center Logical Architecture – Changing Resource Distribution Strategies
[Diagram: WAN & campus core feeding the data center core, distribution/aggregation and access (server edge) layers, with SLB and firewall services; rack-mount server farms, blade server chassis and virtual machines at the server edge, plus a SAN.]
• Changes in density and fabric are changing the approach to modularity of storage and servers
• Converged fabrics allow more flexibility in location and reduce interconnect costs
• Local “mini-SANs” such as switched SAS allow refactoring storage to bring it near consumers and producers – and away from the SAN team
• Increasingly flexible storage services models
[Diagram: SAN storage vs. fabric storage]
Physical Architecture – Is There a Podular DC in Your Future?
• Lower TCO
− Better PUE and power/cooling efficiency vs. a traditional DC
• Geographic flexibility
− Can deploy closer to customers, and in locales not suitable for brick & mortar
− Controlled/hybrid co-lo environments
• Faster time to revenue for customers
− Brick & mortar takes 18+ months to design and build, vs. a container in under 6 months
• Improved return on capital
− “Pay as you go” vs. millions of dollars of up-front investment for brick & mortar
• More efficient procurement chunk size
− A rack is too small; a datacenter takes too long
• Scalable with enterprise architecture
− Core / regional gateway / point-of-purchase
Virtualization, Orchestration, Automation and Infrastructure Agility
Virtualization – A Blessing & a Curse
• Virtualization – of servers, storage, networks and I/O hardware – brings major benefits…
− Capital resource efficiency (the initial sell)
− Standardization and ease of migration
− A gateway to adaptive architectures
• …as well as significant burdens – management, management, management
− Are you substituting one vendor lock-in for another?
− How many more tools do you want to add to your environment?
− How do you integrate the physical and virtual management layers?
• Be prepared for major innovation and vendor conflict in this arena for the next five years
− You need a strategy, metrics and a roadmap
Enterprise Customers Continue to Be Challenged Managing Infrastructure
• Server admin and management costs grow with the installed base of servers
− Basic operations such as installing a server typically take weeks, requiring manual coordination across multiple customer organizations
• Power, cooling and facilities limitations continue to loom – the “$10 million server”
− This will drive multiple deployment options, such as cloud, in an attempt to tap economies of scale
• Virtualization helps some things, but potentially complicates the management environment
− Expect continued experimentation in virtualization management models, and expanded virtualization options
Typical Infrastructure Deployment – Built One Unit at a Time
1. Line of business selects application
2. Get purchase approvals; order server
3. Project planning meetings – and more meetings
4. Server delivery: unpack, inventory, move to test center
5. Build process: server, network, storage, facilities
6. Change control approvals
7. Re-cable and move into the production environment
• Many people
• Many manual steps
• Many weeks
• Human error
The Goal – Automated Provisioning: Provisioned When Needed
1. Line of business selects application
2. Choose infrastructure application template (right size? right app?)
3. Tool determines available resources – and when
4. Verify resource allocation
5. Push “go” – workflow starts automatically
6. A full application infrastructure, up and running!
• Fewer people and steps
• Guaranteed compliance
• Integrated information
• Same interface for virtual and physical resources
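The automated flow above is, structurally, an orchestrator running ordered steps against a shared request state while recording an audit trail (which is where the "guaranteed compliance" and "integrated information" claims come from). A toy sketch, with step names mirroring the slide and everything else hypothetical:

```python
# Toy orchestration workflow: each step transforms the request state;
# the orchestrator runs them in order and keeps an audit trail.

def run_workflow(request, steps):
    audit = []
    state = dict(request)
    for name, step in steps:
        state = step(state)
        audit.append(name)  # audit trail -> compliance reporting
    return state, audit

steps = [
    ("choose template",   lambda s: {**s, "template": "web-3tier"}),
    ("verify resources",  lambda s: {**s, "resources_ok": True}),
    ("provision server",  lambda s: {**s, "server": "blade-07"}),
    ("provision network", lambda s: {**s, "vlan": 120}),
    ("provision storage", lambda s: {**s, "lun": "thin-500g"}),
]

state, audit = run_workflow({"app": "order-entry"}, steps)
```

A real tool would add rollback on failure and approval gates, but the shape – template in, ordered provisioning steps, audited state out – is the same.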
What You Need to Add
• Comprehensive VM management CONVERGED with physical management
− Power-aware load placement and movement
− Physical/logical discovery & visualization
− Multi-tier provisioning of VMs, networks and applications
− Lifecycle management of VMs
− Resilience, changing how we do HA
• And the good news is that you have at least 100 niche/startup vendors to choose from
• As well as the feuding major vendors
− We ALL want to be your management console of record
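"Power-aware load placement" is at heart a bin-packing problem: pack VM loads onto the fewest hosts so the rest can be powered down or parked. A greedy first-fit-decreasing sketch of just that packing idea (a real converged manager solves a much richer problem, with affinity rules, headroom and migration cost):

```python
# Greedy first-fit-decreasing placement of VM loads (in watts) onto
# hosts of fixed capacity, minimizing the number of powered-on hosts.

def place_vms(vm_loads, host_capacity):
    hosts = []  # each entry is the total load placed on one host
    for load in sorted(vm_loads, reverse=True):
        for i, used in enumerate(hosts):
            if used + load <= host_capacity:
                hosts[i] += load
                break
        else:
            hosts.append(load)  # no room anywhere: power on another host
    return hosts

hosts = place_vms([500, 300, 400, 200, 600], host_capacity=1000)
print(len(hosts))  # → 2 hosts stay powered on; the rest can idle
```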
Infrastructure Transformation – How to Get There From Here
The Path to Infrastructure Transformation
[Diagram: Current State → Standardize → Virtualize (VMs, storage, networks) → Automate, with open questions along the way – physical refresh? outsource? The future is cloudy.]
Some Essential Principles
• Draconian standardization
− It’s really amazing how simple you can make an enterprise environment if you just don’t let anyone complain (or at least stop listening to them)
• Vendor simplification
− Software is particularly important
− You may want to maintain very coarse-grained hardware heterogeneity for vendor management
• Almost always, fewer is better
− Locations, software titles, options
− Once standardization has been in place for a full dev cycle, requests for variations become few and far between
Data Center Transformation – Workstream Approach
Workstreams:
• Enterprise applications
• Business applications
• Databases and middleware
• Back-office infrastructure: directory services; messaging and collaboration; file and print; IP and networking services; thin-client infrastructure / client services
• Infrastructure services: management tools; back-up software
• Network
• Facilities
• IT organization and processes
Approach:
• Define the optimal to-be architecture, migration approach, sourcing strategy and business case by workstream
• Define dependencies and ordering between workstream items
• Prioritize high-ROI opportunities
• A holistic, total implementation provides the highest ROI
Data Center & IT Transformation – What Can You Achieve?
Grow business
• Timely response to new business initiatives (that old alignment thing)
• Spend more time focusing on business value instead of fighting fires and managing MAC addresses
Mitigate risk
• Centralize & standardize IT and data center processes
• Establish compliance with industry best practices
• Protect company revenue, brand & reputation from outage or disaster
Reduce cost
• Overall lower total IT costs – your mileage will vary
• Up to 50% savings from IT consolidation and apps rationalization
• Up to 60% energy savings from modern facilities
• Up to 25% real estate and location savings
Best Practices to Achieve the Vision
• Simplify through standardization: standard & consistent data center architecture and design; standard hardware, tools and infrastructure
• Establish a PMO for governance: provides a framework for how the effort will be structured and who will make decisions
• Go modular: allows for fast build, flexibility, scalability and efficiency; isolates and separates risk
• Break the plan into bite-size chunks: divide into workstreams, engage the proper expertise, identify clear goals & deliverables by quarter
• Synchronize – timing is everything: facilities must be ready to receive servers; servers must be ready to receive applications
• Define one set of processes: a properly documented single set of processes aligned to the ITIL V3 model ensures desired outcomes and allows for automation
• Actively manage and communicate change: change management and a well-executed communication strategy are critical for success
What Lies Beyond: Cloud Computing
A pool of abstracted, highly scalable and managed compute infrastructure capable of hosting end-customer applications and billed by consumption.
“If managing a massive data center isn’t a core competency of your business, maybe you should get out of this business and pass the responsibility to someone who has…” – Amazon CTO Werner Vogels, 2007 Next Generation Data Center Conference
Cloud computing’s future ecosystem will include Google-like public clouds as a platform for applications, and virtual private clouds – third-party clouds, or segments of the public cloud with additional features for security, compliance, etc.
The data center of the future will also include private (internal) clouds, an extension of virtualization used primarily for their capital or operational efficiencies. For some applications, data just won’t leave the enterprise.
Clouds – A Long Haul
• Good concept, great marketing buzz
• Hey, where are the applications?
• Welcome to the world of almost-consistent data
• Where did you say my data is?
• Did someone say standards?
• Hi, I’m Coke. Am I sharing my cloud with Pepsi?
• What’s the difference between a well-designed shared services platform and an internal cloud?
• But it does have a future…
Thank You
Richard Fichera
Director, BladeSystems Strategy
BladeSystem & Infrastructure Software
[email protected]
Expanding on the Themes at NGDC
• Beyond Power and Cooling: Improving Data Center Productivity – John Pflueger, Technology Strategist, Dell
• How the Sustainable Data Center Will Reduce Costs and Improve IT – Doug Washburn, Forrester Research
• Creating the Most Efficient, Resilient and Sustainable Data Centers – Patrick Leonard, Senior Manager, Strategic Initiatives, Equinix, Inc.
• Working With Our Utilities: Getting What You Need When You Want It – Mark Bramfitt, Principal Program Manager, PG&E Corporation
• From Monitoring to Management: Gaining Comprehensive Visibility into Data Center Operations – Traci Yarbrough, Product Marketing Manager, Aperture Technologies