
Tera-scale Computing Research

Addressing the challenges of mainstream parallel computing

Jim Held, Intel Fellow, Director, Tera-scale Computing Research

UCB PARLab February 25, 2010


Agenda

• Tera-scale Computing
• Platform Vision
• Research Agenda
  – Applications
  – Programming
  – System Software
  – Memory Hierarchy
  – Interconnects
  – Cores
• Summary


What is Tera-scale?
TIPS of compute power operating on terabytes of data

[Chart: performance (KIPS, MIPS, GIPS, TIPS) vs. dataset size (kilobytes, megabytes, gigabytes, terabytes). Single-core platforms handle text; multi-core handles multimedia and 3D & video; tera-scale targets RMS workloads such as personal media creation and management, learning & travel, entertainment, and health. Source: Electronic Visualization Laboratory, University of Illinois.]


Tera-scale Computing Applications

R = Recognition, M = Mining, S = Synthesis

• Based on modeling or simulating the real world
• Make technology more immersive and human-like
• Algorithms are highly parallel in nature
• Real-time results essential for user interaction

[Diagram: model-based applications spanning Recognition, Mining, and Synthesis, including web search, facial animation, ray tracing, cancer detection, body tracking, data warehousing, media indexing, financial predictions, interactive virtual worlds, and security/biometrics.]

Applications

In 2004, these observations led us to explore tera-scale.


A Tera-scale Platform Vision

[Block diagram: many cores, each with its own cache, sharing distributed last-level cache banks over a scalable on-die interconnect fabric; the fabric also connects special-purpose engines, integrated I/O devices and integrated memory controllers, with high-bandwidth memory, off-die interconnect and I/O socket interconnect attached.]


Tera-scale Research

Cores – power efficient general & special function

Interconnects – High bandwidth, low latency

Memory Hierarchy – Feed the compute engine

System Software – Scalable services

Programming – Empower the mainstream

Applications – Identify, characterize & optimize


Future 3D Internet: Immersive Connected Experiences

Bringing the richness of Visual Computing to connected usage models such as social networking, collaboration, online gaming, & online retail.

[Diagram: the evolution from the static Web to Web 2.0 to Immersive Connected Experiences (ICE), with interfaces growing from limited to rich visual, and better content quality and social interaction. Virtual worlds (3D digital entertainment, multiplayer games) create new digital worlds; earth mapping and augmented reality visualize real-world data and enhance the actual world; all of it connects Internet data, people everywhere, and the actual world.]

Applications


Application Kernel Scaling

[Chart: parallel speedup vs. number of cores (up to 64) for application kernels spanning graphics rendering, physical simulation, vision, data mining and analytics: Production Fluid, Production Face, Production Cloth, Game Fluid, Game Rigid Body, Game Cloth, Marching Cubes, Sports Video Analysis, Video Cast Indexing, Home Video Editing, Text Indexing, Ray Tracing, Foreground Estimation, Human Body Tracker, Portfolio Management, and their geometric mean.]

Applications
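As a reading aid, the plotted quantity can be stated with the standard definitions of parallel speedup and efficiency (not from the slide):

```latex
S(p) = \frac{T_1}{T_p}, \qquad E(p) = \frac{S(p)}{p},
```

where \(T_p\) is a kernel's execution time on \(p\) cores; ideal linear scaling gives \(S(p) = p\), i.e. a speedup of 64 on 64 cores.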


Platform Performance Demands: Emerging ICE applications

SERVERS: 10x more work; 75%+ of time is compute-intensive work

  TYPE      SOFTWARE       MAX CLIENTS PER SERVER
  MMORPGs   WoW            2500
  VWs       Second Life    160

CLIENTS: 3x CPU, 20x GPU; 65%+ of time is compute-intensive work

  APPLICATION    % CPU UTILIZATION   % GPU UTILIZATION
  2D Websites    20                  0-1
  Google Earth   50                  10-15
  Second Life    70                  35-75

NETWORK: 100x bandwidth; maximum bandwidth limited by the server-to-client link

[Chart: bandwidth (KB/s) vs. time (seconds) for cached and uncached content.]

Sources: WoW data (www.warcraftrealms.com), Second Life data (CTO-CTO meeting and www.secondlife.com), and Intel measurements.

Applications


Application Acceleration: HW Task Queues

[Charts: 88% benefit over optimized S/W for loop-level parallelism; 98% benefit over optimized S/W for task-level parallelism.]

[Diagram: cores C1..Cn, each with an L1 cache and a Local Task Unit (LTU), connected to a Global Task Unit (GTU).]

Global Task Unit (GTU): caches the task pool; uses distributed task stealing.
Local Task Unit (LTU): prefetches and buffers tasks.

Task Queues
• scale effectively to many cores
• deal with asymmetry
• support task & loop parallelism

Kumar, S., Hughes, C. J., Nguyen, A., "Carbon: Architectural Support for Fine-Grained Parallelism on Chip Multiprocessors," ISCA'07, June 9-13, 2007, San Diego, California, USA.

Applications
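To make the task-queue model concrete, here is a minimal software sketch of per-core queues with work stealing, the scheduling pattern that Carbon's GTU/LTU accelerate in hardware; all names and structure are illustrative, not Carbon's interface.

```cpp
// Hedged software sketch (not Carbon's hardware): per-core task queues with
// work stealing. The owner pushes/pops at the back; idle cores steal from the
// front of a victim's queue.
#include <deque>
#include <functional>
#include <mutex>
#include <optional>
#include <vector>

using Task = std::function<void()>;

class TaskQueue {
public:
    void push(Task t) {                        // owner enqueues at the back
        std::lock_guard<std::mutex> lk(m_);
        q_.push_back(std::move(t));
    }
    std::optional<Task> pop() {                // owner dequeues from the back
        std::lock_guard<std::mutex> lk(m_);
        if (q_.empty()) return std::nullopt;
        Task t = std::move(q_.back());
        q_.pop_back();
        return t;
    }
    std::optional<Task> steal() {              // thief dequeues from the front
        std::lock_guard<std::mutex> lk(m_);
        if (q_.empty()) return std::nullopt;
        Task t = std::move(q_.front());
        q_.pop_front();
        return t;
    }
private:
    std::mutex m_;
    std::deque<Task> q_;
};

// A worker first drains its own queue, then tries to steal from peers.
void run_worker(std::vector<TaskQueue>& queues, std::size_t self) {
    for (;;) {
        if (auto t = queues[self].pop()) { (*t)(); continue; }
        bool stole = false;
        for (std::size_t v = 0; v < queues.size(); ++v) {
            if (v == self) continue;
            if (auto t = queues[v].steal()) { (*t)(); stole = true; break; }
        }
        if (!stole) return;                    // no work anywhere: stop
    }
}
```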


Design Pattern Language

[Diagram: a layered design pattern language for engineering parallel software, with the upper layers forming the productivity layer and the lower layers the efficiency layer:

• Choose your high-level structure – what is the structure of my application? Guided expansion. Structural patterns: pipe-and-filter, agent and repository, process control, event-based implicit invocation, model-view-controller, iterator, map-reduce, layered systems, arbitrary static task graph.

• Identify the key computational patterns – what are my key computations? Guided instantiation. Computational patterns: graph algorithms, dynamic programming, dense linear algebra, sparse linear algebra, unstructured grids, structured grids, graphical models, finite state machines, backtrack/branch-and-bound, N-body methods, circuits, digital circuits, spectral methods.

• Choose your high-level architecture – guided decomposition: task decomposition ↔ data decomposition; group tasks, order groups, data sharing, data access.

• Refine the structure – what concurrent approach do I use? Guided re-organization. Algorithm strategy patterns: task parallelism, data parallelism, pipeline, discrete event, event-based, divide and conquer, geometric decomposition, graph algorithms.

• Utilize supporting structures – how do I implement my concurrency? Guided mapping. Implementation patterns: fork/join, CSP, master/worker, loop parallelism, BSP, distributed array, shared data, shared queue, shared hash table.

• Implementation methods – what are the building blocks of parallel programming? Guided implementation: thread creation/destruction, process creation/destruction, message passing, collective communication, barriers, mutex, semaphores, speculation, transactional memory.]

Programming

Keutzer, K., Mattson, T., "A Design Pattern Language for Engineering (Parallel) Software," to appear in Intel Technology Journal.
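As one concrete instance from the implementation layer above (illustrative only, not taken from the paper), a minimal fork/join divide-and-conquer reduction in standard C++:

```cpp
// Illustrative fork/join pattern: split the range, run one half on another
// thread, the other half on this thread, then join and combine.
#include <future>
#include <numeric>
#include <vector>

long long parallel_sum(const std::vector<int>& v, std::size_t lo, std::size_t hi) {
    if (hi - lo < 4096)                        // small ranges: sum sequentially
        return std::accumulate(v.begin() + lo, v.begin() + hi, 0LL);
    std::size_t mid = lo + (hi - lo) / 2;
    auto left = std::async(std::launch::async, // fork: left half on another thread
                           parallel_sum, std::cref(v), lo, mid);
    long long right = parallel_sum(v, mid, hi); // right half on this thread
    return left.get() + right;                  // join: combine partial sums
}
```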


Transactional Memory

• TM definition: a sequence of memory operations that either execute completely (commit) or have no effect (abort)
• Goal: an atomic block language construct
  – As easy to use as coarse-grain locks, but with the scalability of fine-grain locks
  – Safe and scalable composition of SW modules
• Intel C/C++ STM compiler
  – Use for experimentation & workload development
  – Downloadable from http://whatif.intel.com
• Draft specification adding TM to C++
  – Jointly authored with IBM & Sun
  – Discussion on [email protected]

Programming

Shpeisman, T., et al., "Towards Transactional Memory Semantics for C++," SPAA'09, August 11-13, 2009, Calgary, Alberta, Canada.
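The Intel STM compiler's own syntax is not reproduced here; as a hedged illustration of the atomic-block construct the slide describes, the following sketch uses GCC's -fgnu-tm extension, which provides the same idea.

```cpp
// Not the Intel STM prototype's syntax: a minimal atomic-block sketch using
// GCC's transactional-memory extension. Compile with: g++ -fgnu-tm tm.cpp
struct Account { double balance; };

// The two updates commit together or not at all; the TM runtime detects
// conflicts, so no lock-ordering convention is needed to compose transfers.
void transfer(Account& from, Account& to, double amount) {
    __transaction_atomic {
        from.balance -= amount;
        to.balance   += amount;
    }
}
```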


Ct Technology: an example of Intel turning research into reality

A new programming model, abstract machine and API
• Expresses data parallelism with sequential semantics
• Deterministic (race-free) parallel programming
• Degree of thread/vector parallelism targeted dynamically according to the user's multi-core and SIMD hardware
• Extends C++: uses templates for new types, operator overloading and library calls for new operators

[Diagram: new parallel ops & data structures, including irregular/sparse parallel data (e.g., face recognition); a dynamic compilation runtime delivers on-the-fly parallelization; scalable with increasing cores.]

Programming

http://software.intel.com/en-us/data-parallel/
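The Ct API itself is not reproduced here; as a loose illustration of "templates for new types plus operator overloading" (names are invented, not Ct's), the sketch below shows how an element-wise vector expression can keep sequential, race-free semantics while leaving a runtime free to vectorize and multithread it.

```cpp
// Not the Ct API: a tiny illustrative vector type whose overloaded operator+
// names a whole-array operation with deterministic semantics.
#include <cstddef>
#include <vector>

template <typename T>
class ParVec {
public:
    explicit ParVec(std::vector<T> data) : data_(std::move(data)) {}

    // Element-wise add: written sequentially here, but pure and element-wise,
    // so a runtime could split it across cores and SIMD lanes without races.
    friend ParVec operator+(const ParVec& a, const ParVec& b) {
        std::vector<T> out(a.data_.size());
        for (std::size_t i = 0; i < out.size(); ++i)
            out[i] = a.data_[i] + b.data_[i];
        return ParVec(std::move(out));
    }

    const std::vector<T>& data() const { return data_; }

private:
    std::vector<T> data_;
};

// Usage: ParVec<float> c = a + b;  // reads like scalar code, acts on vectors
```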


Heterogeneous Platform Support
• Shared virtual memory in a mixed-ISA, multiple-OS environment
• Simplified programming model with data structure and pointer sharing

Saha, B., et al., "Programming Model for a Heterogeneous x86 Platform," PLDI'09, June 15-20, 2009, Dublin, Ireland.

Programming
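A hypothetical sketch of why pointer sharing matters (offload_run is a made-up placeholder, not the paper's API): with shared virtual memory the same addresses are valid on both the CPU and the accelerator, so a linked data structure can be handed over without marshaling.

```cpp
// Hypothetical sketch only. With shared virtual memory, 'head' and every
// 'next' pointer remain valid on the accelerator side.
struct Node { int value; Node* next; };

// Made-up stand-in for "run this function on the accelerator"; here it just
// runs locally so the sketch stays self-contained.
template <typename F> void offload_run(F f) { f(); }

int sum_on_accelerator(Node* head) {
    int total = 0;
    offload_run([&] {
        for (Node* n = head; n; n = n->next)   // no copy-in/copy-out of the list
            total += n->value;
    });
    return total;
}
```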


OS Scheduling: Fairness on Multi-core

• Distributed Weighted Round Robin scheduling

– Accurate fairness

– Efficient and scalable operation

– Flexible user control

– High performance

• Implementation

– Additional queue per core

– Monitor runtime per round

– Balance across cores

• Evaluated against Linux

– O(1) scheduler 2.6.22.15

– CFS scheduler 2.6.24

System SW

[Charts: maximum lag and relative error for 16 threads on 8 cores, 5 of them with nice value 1; benchmark performance relative to the Linux CFS scheduler.]

Li, T., et al., "Efficient and Scalable Multiprocessor Fair Scheduling Using Distributed Weighted Round-Robin," PPoPP'09, February 14-18, 2009, Raleigh, North Carolina, USA.
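A minimal sketch of the DWRR idea described above, assuming a simplified model (one extra "round-expired" queue per core, a weight-proportional quantum per round, and cross-core balancing of round numbers); it is not the Linux implementation evaluated in the paper.

```cpp
// Simplified Distributed Weighted Round-Robin sketch (assumptions noted above).
#include <deque>
#include <utility>

struct Thread { int id; int weight; };     // weight ~ share owed per round

struct Core {
    std::deque<Thread> round_active;       // threads still owed time this round
    std::deque<Thread> round_expired;      // extra queue: done for this round
    long round = 0;                        // full rounds completed on this core
};

// One scheduling step: pick the next thread, charge it a weighted quantum,
// then park it on the expired queue until the round ends.
void schedule_step(Core& c, int base_quantum_ms) {
    if (c.round_active.empty()) {          // round over: swap queues, new round
        std::swap(c.round_active, c.round_expired);
        ++c.round;
        if (c.round_active.empty()) return;    // nothing runnable here
    }
    Thread t = c.round_active.front();
    c.round_active.pop_front();
    int quantum = base_quantum_ms * t.weight;  // weighted share within a round
    (void)quantum;                             // ...run t for 'quantum' ms here
    c.round_expired.push_back(t);
}

// Fairness across cores: keep round numbers close so a thread on a lightly
// loaded core cannot race far ahead of threads elsewhere.
void balance(Core& a, Core& b) {
    if (a.round > b.round + 1 && !a.round_active.empty()) {
        b.round_active.push_back(a.round_active.back());
        a.round_active.pop_back();
    }
}
```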


On-Die Fabric Research: Adaptive Routing

Adversarial traffic can severely affect network throughput.

Example: matrix transpose – XY routing does not use all available paths.

[Diagrams: a 6x6 mesh of routers (nodes 0-35) under a transpose traffic pattern, comparing XY routing, which leaves many links unused, with load-balanced routing, which spreads traffic across the mesh.]

Flexible routing allows all paths to be used.

A fully adaptive scheme provides the best throughput under different traffic patterns, but has to satisfy a large set of constraints, e.g.:
• No resource bifurcation (required by virtual networks)
• Minimal storage/power overhead
• Zero latency impact

Interconnect
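A small illustrative sketch (not Intel's router logic) of the deterministic XY baseline the slide contrasts with adaptive routing: every packet takes the single X-then-Y path, which is why a transpose pattern saturates some links while leaving others idle.

```cpp
// Dimension-ordered (XY) routing on a width x height mesh, row-major node IDs.
#include <vector>

enum class Hop { East, West, North, South };

std::vector<Hop> xy_route(int src, int dst, int width = 6) {
    int sx = src % width, sy = src / width;
    int dx = dst % width, dy = dst / width;
    std::vector<Hop> path;
    while (sx != dx) {                     // X dimension first, always
        if (sx < dx) { ++sx; path.push_back(Hop::East); }
        else         { --sx; path.push_back(Hop::West); }
    }
    while (sy != dy) {                     // then the Y dimension
        if (sy < dy) { ++sy; path.push_back(Hop::South); }
        else         { --sy; path.push_back(Hop::North); }
    }
    return path;                           // deterministic: one path per pair
}
```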


Teraflops Research Processor

Goals:
• Deliver tera-scale performance
  – Single-precision TFLOP at desktop power
  – Frequency target 5GHz
  – Bisection B/W on the order of terabits/s
  – Link bandwidth in hundreds of GB/s
• Prototype two key technologies
  – On-die interconnect fabric
  – 3D stacked memory
• Develop a scalable design methodology
  – Tiled design approach
  – Mesochronous clocking
  – Power-aware capability

[Die photo: 12.64mm x 21.72mm die with I/O areas, PLL and TAP; a single tile measures 1.5mm x 2.0mm.]

  Technology    65nm, 1 poly, 8 metal (Cu)
  Transistors   100 million (full-chip); 1.2 million (tile)
  Die area      275mm2 (full-chip); 3mm2 (tile)
  C4 bumps      8390

Interconnect


Power Performance Results

[Charts (80°C, N=80 tiles, stencil workload, all tiles awake/asleep):
• Frequency vs. Vcc: 1GHz (0.32 TFLOP), 3.16GHz (1 TFLOP), 5.1GHz (1.63 TFLOP), 5.67GHz (1.81 TFLOP).
• Measured power vs. Vcc, split into active and leakage power: 1 TFLOP @ 97W (1.07V), up to 1.33 TFLOP @ 230W.
• Power efficiency vs. performance: 19.4 GFLOPS/W at 394 GFLOPS, falling to 10.5 and 5.8 GFLOPS/W at higher throughput.
• Leakage as a percentage of total power vs. Vcc: roughly 2x lower with sleep enabled than disabled.]

Vangal, S., et al., "An 80-Tile 1.28TFLOPS Network-on-Chip in 65nm CMOS," in Proceedings of ISSCC 2007 (IEEE International Solid-State Circuits Conference), Feb. 2007.

Interconnect


CPU + DRAM

[Diagram: a CPU stacked with tiled DRAM memory, connected through I/O circuits and the platform interconnect, with the CPU memory controller and DRAM architecture called out.]

I/O circuits: optimized for power efficiency & silicon cost
• Very low I/O power
• Aggressive power management
• Small silicon area and low complexity
• Scalability in bandwidth and width

Platform interconnect: develop (prove out) new platform physicals
• High density, small form factor
• Low loss & reflections
• Headroom for future scaling
• Low cost & modularity

CPU memory controller: reduce memory controller power & complexity
• Increased number of banks per channel
• Increased concurrency for accessing the memory array
• Scalable & flexible across a wide range of market segments
• Lower latency from on-die cache miss to data returned from DRAM

DRAM architecture: optimize for low power, high concurrency
• Very low energy per bit read and written
• Investigate the role of three-dimensional stacked products
• Work with DRAM vendors

Memory

O'Mahoney, F., et al., "A 27Gb/s Forwarded-Clock I/O Receiver Using an Injection-Locked LC-DCO in 45nm CMOS," in Proceedings of ISSCC 2008 (IEEE International Solid-State Circuits Conference), Feb. 2008.



Increasing I/O Efficiency

[Chart: I/O power efficiency (mW/Gb/s, log scale) vs. data rate (Gb/s, 0-20). Commodity interfaces sit highest (DDR3 at ~15, GDDR5 at ~20); Intel's ISSCC'06 transceiver reaches 11.7 and the VLSI'07 transceiver 2.7; the research target is 1.0 mW/Gb/s at higher data rates.]

Memory

O'Mahoney, F., et al., "A 27Gb/s Forwarded-Clock I/O Receiver Using an Injection-Locked LC-DCO in 45nm CMOS," in Proceedings of ISSCC 2008 (IEEE International Solid-State Circuits Conference), Feb. 2008.


Increasing Power Efficiency: Low-power scalable SIMD vector processing

• 45nm CMOS, occupies 0.081mm2
• Signed 32b multiply using reconfigurable 16b multipliers and adder circuits
• Operation from 1.3V down to ultra-low 230mV
• 2.3GHz, 161mW operation at 1.1V
• Peak SIMD energy efficiency of 494 GOPS/W measured at 300mV, 50°C

Energy efficiency is up to 10x better at normal voltages, and up to 80x better at ultra-low voltages.

Kaul, H., et al., “A 300mV 494GOPS/W Reconfigurable Dual-Supply 4-way SIMD Vector Processing Accelerator in 45nm CMOS,” in Proceedings of ISSCC 2009 (IEEE International Solid-State Circuits Conference), Feb. 2009.

Cores


Within-Die Variation-Aware DVFS and Scheduling

• Max frequency variation per core: 28% at 1.2V, 62% at 0.8V
• No correlation die to die – individual characterization required
• Improved performance or energy efficiency with:
  – Multiple frequency islands
  – Dynamic scheduling of processing to core

Dighe, S, et al., “Within-Die Variation-Aware Dynamic Voltage-Frequency Scaling, Core Mapping and Thread Hopping for an 80-Core Processor”, in Proceedings of ISSCC 2010 (IEEE International Solid-State Circuits Conference), Feb. 2010

Cores
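One simple way to realize "dynamic scheduling of processing to core", sketched under assumed data structures (not the paper's mechanism): characterize each core's maximum frequency individually and greedily map the most demanding threads to the fastest cores.

```cpp
// Illustrative greedy thread-to-core mapping under within-die variation.
// Assumes thread ids are 0..N-1; CoreInfo comes from per-die characterization.
#include <algorithm>
#include <cstddef>
#include <vector>

struct CoreInfo   { int id; double fmax_ghz; };  // per-core measured max frequency
struct ThreadInfo { int id; double demand;   };  // e.g., recent CPU demand

std::vector<int> map_threads(std::vector<CoreInfo> cores,
                             std::vector<ThreadInfo> threads) {
    std::sort(cores.begin(), cores.end(),        // fastest cores first
              [](const CoreInfo& a, const CoreInfo& b) { return a.fmax_ghz > b.fmax_ghz; });
    std::sort(threads.begin(), threads.end(),    // most demanding threads first
              [](const ThreadInfo& a, const ThreadInfo& b) { return a.demand > b.demand; });
    std::vector<int> core_of_thread(threads.size(), -1);  // -1 = not mapped
    for (std::size_t i = 0; i < threads.size() && i < cores.size(); ++i)
        core_of_thread[threads[i].id] = cores[i].id;      // heavy thread -> fast core
    return core_of_thread;
}
```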


Experimental Single-chip Cloud Computer

• Experimental many-core CPU on 45nm Hi-K metal-gate silicon
• 48 IA-compatible cores – the most ever built on a single chip
• Message-passing architecture – no HW cache coherence
• Research vehicle for
  – Fine-grained software-controlled power management
  – Scale-out programming models on-die
  – Collaboration

[Diagram: 24 dual-core tiles connected by a 2D mesh of routers; each tile contains two cores, two L2 caches, a message-passing buffer (MPB) and a router. Four memory controllers and a system I/F sit at the edges of the mesh.]

Howard, J., et al., "A 48-Core IA-32 Message-Passing Processor with DVFS in 45nm CMOS," in Proceedings of ISSCC 2010 (IEEE International Solid-State Circuits Conference), Feb. 2010.
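A rough sketch of the message-passing style this architecture encourages, assuming a simplified mailbox abstraction; it is not the SCC's software library, and std::atomic flags stand in here for the explicit buffer management real SCC code performs in the absence of hardware coherence.

```cpp
// Illustrative core-to-core message passing through a small per-tile buffer.
#include <array>
#include <atomic>
#include <cstring>

struct Mailbox {
    std::array<char, 256> buffer{};        // stand-in for a tile's MPB slot
    std::atomic<bool> full{false};         // set by sender, cleared by receiver
};

void send(Mailbox& mb, const void* msg, std::size_t len) {
    while (mb.full.load(std::memory_order_acquire)) { /* wait for a free slot */ }
    std::memcpy(mb.buffer.data(), msg, len);
    mb.full.store(true, std::memory_order_release);   // publish the message
}

std::size_t receive(Mailbox& mb, void* out, std::size_t len) {
    while (!mb.full.load(std::memory_order_acquire)) { /* wait for a message */ }
    std::memcpy(out, mb.buffer.data(), len);
    mb.full.store(false, std::memory_order_release);  // free the slot
    return len;
}
```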


Summary

• Intel research is addressing the challenges of parallel computing with Intel platforms

– Teraflop hardware performance within mainstream power and cost constraints

– ISA enhancements to address emerging workload requirements

– Language and runtimes to better support parallel programming models

– Partnering with academic research

• Intel is developing hardware and software technologies to enable Tera-scale computing


Q&A
