Parallelism
Lecture notes from MKP and S. Yalamanchili
(2)
Introduction
• Goal: Higher performance through parallelism
• Job-level (process-level) parallelism
  o High throughput for independent jobs
• Application-level parallelism
  o Single program run on multiple processors
• Multicore microprocessors
  o Chips with multiple processors (cores)
  o Support for both job-level and application-level parallelism
(3)
Core Count Roadmap: AMD
From anandtech.com
(4)
Core Count: NVIDIA
1536 cores at 1GHz
• All cores are not created equal
• Need to understand the programming model
(5)
Hardware and Software
• Hardware
  o Serial: e.g., Pentium 4
  o Parallel: e.g., quad-core Xeon e5345
• Software
  o Sequential: e.g., matrix multiplication
  o Concurrent: e.g., operating system
• Sequential/concurrent software can run on serial/parallel hardware
  o Challenge: making effective use of parallel hardware
(6)
Parallel Programming
• Parallel software is the problem
• Need to get significant performance improvement
  o Otherwise, just use a faster uniprocessor, since it’s easier!
• Difficulties
  o Partitioning
  o Coordination
  o Communications overhead
(7)
Amdahl’s Law
• Sequential part can limit speedup
• Example: 100 processors, 90× speedup?
  o Tnew = Tparallelizable/100 + Tsequential
  o Speedup = 1 / ((1 − Fparallelizable) + Fparallelizable/100) = 90
  o Solving: Fparallelizable = 0.999
• Need sequential part to be 0.1% of original time (a worked sketch follows)
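The algebra is easy to check in code; this is an added C sketch (the function name is mine, not from the slides), solving for the fraction needed to hit a target speedup:

    #include <stdio.h>

    /* Amdahl's Law: Speedup = 1 / ((1 - f) + f/p).
     * Solving for f given target speedup s on p processors:
     *     f = (1 - 1/s) / (1 - 1/p)                          */
    static double required_parallel_fraction(double s, double p) {
        return (1.0 - 1.0 / s) / (1.0 - 1.0 / p);
    }

    int main(void) {
        /* The slide's example: 90x speedup on 100 processors */
        printf("f = %.3f\n", required_parallel_fraction(90.0, 100.0));
        return 0;                                /* prints f = 0.999 */
    }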
(8)
Scaling Example
• Workload: sum of 10 scalars, and 10 × 10 matrix sum
  o Speedup from 10 to 100 processors
• Single processor: Time = (10 + 100) × tadd
• 10 processors
  o Time = 10 × tadd + 100/10 × tadd = 20 × tadd
  o Speedup = 110/20 = 5.5 (55% of potential)
• 100 processors
  o Time = 10 × tadd + 100/100 × tadd = 11 × tadd
  o Speedup = 110/11 = 10 (10% of potential)
• Idealized model
  o Assumes load can be balanced across processors (see the threaded sketch below)
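A rough (added) pthreads sketch of this workload: the 100 matrix adds split across threads, while the 10 scalar adds and the final reduction stay sequential. Thread count and data values are illustrative.

    #include <pthread.h>
    #include <stdio.h>

    #define N 10            /* 10 x 10 matrix, as in the slide */
    #define NTHREADS 10

    static double matrix[N][N];
    static double partial[NTHREADS];

    /* Each thread sums one row: the parallelizable 100 adds */
    static void *sum_rows(void *arg) {
        long t = (long)arg;
        double s = 0.0;
        for (int j = 0; j < N; j++)
            s += matrix[t][j];
        partial[t] = s;
        return NULL;
    }

    int main(void) {
        double scalars[10] = {1,2,3,4,5,6,7,8,9,10};
        pthread_t tid[NTHREADS];

        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                matrix[i][j] = 1.0;

        for (long t = 0; t < NTHREADS; t++)
            pthread_create(&tid[t], NULL, sum_rows, (void *)t);

        /* Sequential part: the 10 scalar additions */
        double total = 0.0;
        for (int i = 0; i < 10; i++)
            total += scalars[i];

        /* Reducing the per-thread partial sums is also sequential */
        for (long t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += partial[t];
        }
        printf("total = %f\n", total);   /* 55 + 100 = 155 */
        return 0;
    }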
(9)
Scaling Example (cont)
• What if matrix size is 100 × 100?
• Single processor: Time = (10 + 10000) × tadd
• 10 processors
  o Time = 10 × tadd + 10000/10 × tadd = 1010 × tadd
  o Speedup = 10010/1010 = 9.9 (99% of potential)
• 100 processors
  o Time = 10 × tadd + 10000/100 × tadd = 110 × tadd
  o Speedup = 10010/110 = 91 (91% of potential)
• Idealized model
  o Assuming load balanced
(10)
Strong vs Weak Scaling
• Strong scaling: problem size fixed
  o As in the example
• Weak scaling: problem size proportional to number of processors
  o 10 processors, 10 × 10 matrix: Time = 20 × tadd
  o 100 processors, 32 × 32 matrix: Time = 10 × tadd + 1000/100 × tadd = 20 × tadd
  o Constant performance in this example
• For a fixed problem size, grow the number of processors to improve performance
(11)
What We Have Seen
• §3.6: Parallelism and Computer Arithmetic
  o Associativity and bit-level parallelism
• §4.10: Parallelism and Advanced Instruction-Level Parallelism
  o Recall multiple-instruction issue
• §6.9: Parallelism and I/O: Redundant Arrays of Inexpensive Disks
• Now we will look at how computations are classified into categories
(12)
Concurrency and Parallelism
• Concurrent access to shared data must be controlled for correctness
• Programming models?
• Each core can operate concurrently and in parallel
• Multiple threads may operate in a time-sliced fashion on a single core
Image from futurelooks.com
(13)
Instruction Level Parallelism (ILP)
Figure: classic pipeline (IF, ID, EX, MEM, WB) with multiple instructions in EX at the same time
• Single (program) thread of execution
• Issue multiple instructions from the same instruction stream
• Average CPI < 1
• Often called out-of-order (OOO) cores
(14)
The P4 Microarchitecture
From “The Microarchitecture of the Pentium 4 Processor,” G. Hinton et al., Intel Technology Journal, Q1 2001
(15)
ILP Wall - Past the Knee of the Curve?
Figure (source: G. Loh): performance vs. “effort” curve. Going from scalar in-order to moderate-pipe superscalar/OOO made sense: good ROI. Pushing on to very-deep-pipe, aggressive superscalar/OOO yields very little gain for substantial effort.
(16)
Thread Level Parallelism (TLP)
• Multiple threads of execution
• Exploit ILP in each thread
• Exploit concurrent execution across threads
(17)
Instruction and Data Streams
• Taxonomy due to M. Flynn
                            Data Streams
                            Single               Multiple
  Instruction    Single     SISD:                SIMD:
  Streams                   Intel Pentium 4      SSE instructions of x86
                 Multiple   MISD:                MIMD:
                            No examples today    Intel Xeon e5345

• SPMD: Single Program Multiple Data
  o A parallel program on a MIMD computer where each instruction stream is identical
  o Conditional code for different processors (a small sketch follows)
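A minimal (added) C sketch of the SPMD idea: every processor executes the same program, and conditional code keyed on the processor id divides the work. The function and parameter names are illustrative.

    /* SPMD: the same program runs on every processor; behavior
     * differs only through the processor id (pid). */
    void spmd_work(int pid, int nprocs, double *a, int n) {
        int chunk = n / nprocs;
        int lo = pid * chunk;
        /* the last processor picks up any remainder */
        int hi = (pid == nprocs - 1) ? n : lo + chunk;
        for (int i = lo; i < hi; i++)
            a[i] *= 2.0;            /* each pid touches only its slice */
    }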
(18)
Programming Model: Multithreading
• Performing multiple threads of execution in parallel
  o Replicate registers, PC, etc.
  o Fast switching between threads
• Fine-grain multithreading
  o Switch threads after each cycle
  o Interleave instruction execution
  o If one thread stalls, others are executed
• Coarse-grain multithreading
  o Only switch on long stall (e.g., L2-cache miss)
  o Simplifies hardware, but doesn’t hide short stalls (e.g., data hazards)
(19)
Conventional Multithreading
• Zero-overhead context switch
• Duplicated contexts for threads
Figure: a register file holding duplicated per-thread contexts (registers 0:r0–0:r7, 1:r0–1:r7, 2:r0–2:r7, 3:r0–3:r7), selected by a context pointer (CtxtPtr); memory is shared by the threads
(20)
Simultaneous Multithreading
• In a multiple-issue, dynamically scheduled processor
  o Schedule instructions from multiple threads
  o Instructions from independent threads execute when function units are available
  o Within threads, dependencies handled by scheduling and register renaming
• Example: Intel Pentium-4 HT
  o Two threads: duplicated registers, shared function units and caches
  o Known as Hyper-Threading in Intel terminology
(21)
Hyper-threading
• Implementing Hyper-Threading adds less than 5% to the chip area
• Principle: share major logic components by adding or partitioning buffering logic

Figure: 2 CPUs without Hyper-Threading (one architectural state per set of processor execution resources) vs. 2 CPUs with Hyper-Threading (two architectural states sharing each set of processor execution resources)
(22)
Multithreading Example
(23)
Shared Memory
• SMP: shared memory multiprocessor
  o Hardware provides a single physical address space for all processors
  o Synchronize shared variables using locks
  o Memory access time: UMA (uniform) vs. NUMA (nonuniform)
(24)
Example: Communicating Threads
The Producer calls
while (1) {
    while (count == BUFFER_SIZE)
        ;                          // spin while the buffer is full
    // add an item to the buffer, then publish it by bumping count
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
    ++count;
}
(25)
Example: Communicating Threads
The Consumer calls
while (1) {
    while (count == 0)
        ;                          // spin while the buffer is empty
    // remove an item from the buffer, then bump count down
    item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    --count;
}
(26)
Uniprocessor Implementation
• count++ could be implemented as
    register1 = count;
    register1 = register1 + 1;
    count = register1;
• count-- could be implemented as
    register2 = count;
    register2 = register2 - 1;
    count = register2;
• Consider this execution interleaving:
    S0: producer execute register1 = count           {register1 = 5}
    S1: producer execute register1 = register1 + 1   {register1 = 6}
    S2: consumer execute register2 = count           {register2 = 5}
    S3: consumer execute register2 = register2 - 1   {register2 = 4}
    S4: producer execute count = register1           {count = 6}
    S5: consumer execute count = register2           {count = 4}
(27)
Synchronization
• We need to prevent certain instruction interleavings Or at least be able to detect violations!
• Some sequence of operations (instructions) must happen atomically
  o E.g., register1 = count;
         register1 = register1 + 1;
         count = register1;
  o Atomic operations/instructions (a C11 sketch follows)
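One way to obtain that atomicity for the shared counter is C11's <stdatomic.h>; this is an added sketch, not part of the original slides.

    #include <stdatomic.h>

    atomic_int count = 0;

    /* The entire read-modify-write is one indivisible operation,
     * so the S0..S5 interleaving above can no longer lose an update. */
    void producer_increment(void) {
        atomic_fetch_add(&count, 1);    /* count++ atomically */
    }

    void consumer_decrement(void) {
        atomic_fetch_sub(&count, 1);    /* count-- atomically */
    }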
(28)
Synchronization
• Two processors sharing an area of memory
  o P1 writes, then P2 reads
  o Data race if P1 and P2 don’t synchronize: result depends on order of accesses
• Hardware support required
  o Atomic read/write memory operation
  o No other access to the location allowed between the read and write
• Could be a single instruction
  o E.g., atomic swap of register ↔ memory
  o Or an atomic pair of instructions
(29)
Synchronization in MIPS
• Load linked: ll rt, offset(rs)
• Store conditional: sc rt, offset(rs)
  o Succeeds if location not changed since the ll: returns 1 in rt
  o Fails if location is changed: returns 0 in rt
• Example: atomic swap (to test/set lock variable); a C analogue follows

    try: add $t0, $zero, $s4    ;copy exchange value
         ll  $t1, 0($s1)        ;load linked
         sc  $t0, 0($s1)        ;store conditional
         beq $t0, $zero, try    ;branch if store fails
         add $s4, $zero, $t1    ;put loaded value in $s4
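The same test-and-set idea in portable C, using atomic_exchange as the atomic swap; this spinlock sketch is my addition for illustration, not from the slides.

    #include <stdatomic.h>

    atomic_int lock = 0;                 /* 0 = free, 1 = held */

    /* Spin until the atomic swap returns 0, meaning we changed the
     * lock from free to held; analogous to retrying ll/sc until
     * the sc succeeds. */
    void acquire(atomic_int *l) {
        while (atomic_exchange(l, 1) != 0)
            ;                            /* spin */
    }

    void release(atomic_int *l) {
        atomic_store(l, 0);
    }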
(30)
Cache Coherence
• A shared variable may exist in multiple caches
• Multiple copies to improve latency
• This is really a synchronization problem
(31)
Cache Coherence Problem
• Suppose two CPU cores share a physical address space
  o Write-through caches

  Time step   Event                  CPU A’s cache   CPU B’s cache   Memory
  0                                                                  0
  1           CPU A reads X          0                               0
  2           CPU B reads X          0               0               0
  3           CPU A writes 1 to X    1               0               1
(32)
Example (Writeback Cache)
Figure (courtesy H. H. Lee): three processors, each with a writeback cache, share a memory where X = -100. Two caches read X (Rd?) and hold X = -100; one processor then overwrites its cached copy with X = 505. A later read of X by another processor must not see the stale -100.
(33)
Coherence Defined
• Informally: Reads return most recently written value
• Formally:
  o P writes X; P reads X (no intervening writes) → read returns written value
  o P1 writes X; P2 reads X (sufficiently later) → read returns written value
    o c.f. CPU B reading X after step 3 in example
  o P1 writes X, P2 writes X → all processors see the writes in the same order
    o End up with the same final value for X
(34)
Cache Coherence Protocols
• Operations performed by caches in multiprocessors to ensure coherence
  o Migration of data to local caches: reduces bandwidth for shared memory
  o Replication of read-shared data: reduces contention for access
• Snooping protocols
  o Each cache monitors bus reads/writes
• Directory-based protocols
  o Caches and memory record sharing status of blocks in a directory
(35)
Invalidating Snooping Protocols
• Cache gets exclusive access to a block when it is to be written
  o Broadcasts an invalidate message on the bus
  o Subsequent read in another cache misses
    o Owning cache supplies updated value

  CPU activity          Bus activity        CPU A’s cache   CPU B’s cache   Memory
                                                                            0
  CPU A reads X         Cache miss for X    0                               0
  CPU B reads X         Cache miss for X    0               0               0
  CPU A writes 1 to X   Invalidate for X    1                               0
  CPU B reads X         Cache miss for X    1               1               1
(36)
Programming Model: Message Passing
• Each processor has private physical address space
• Hardware sends/receives messages between processors
(37)
Parallelism
• Write message passing programs
• Explicit send and receive of data
  o Rather than accessing data in shared memory

Figure: Process 1 and Process 2 exchanging data through paired send() and receive() calls (a minimal MPI sketch follows)
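Message passing is usually programmed through a library; MPI is a common choice (my pick for illustration; the slides do not name an API). A minimal two-process exchange:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, value;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            /* Explicit send: no shared memory is involved */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Process 1 received %d\n", value);
        }
        MPI_Finalize();
        return 0;
    }

Note that this is also SPMD in action: both processes run the same program, and the rank test selects the sender and receiver roles.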
(38)
Loosely Coupled Clusters
• Network of independent computers
  o Each has private memory and OS
  o Connected using I/O system, e.g., Ethernet/switch, Internet
• Suitable for applications with independent tasks
  o Web servers, databases, simulations, …
• High availability, scalable, affordable
• Problems
  o Administration cost (prefer virtual machines)
  o Low interconnect bandwidth, c.f. processor/memory bandwidth on an SMP
(39)
High Performance Computing
zdnet.com
• The dominant programming model is message passing
• Scales well but requires programmer effort
• Science problems have fit this model well to date
theregister.co.uk
(40)
Grid Computing
• Separate computers interconnected by long-haul networks
  o E.g., Internet connections
  o Work units farmed out, results sent back
• Can make use of idle time on PCs
  o E.g., SETI@home, World Community Grid
(41)
Programming Model: SIMD
• Operate elementwise on vectors of data
  o E.g., MMX and SSE instructions in x86: multiple data elements in 128-bit wide registers (example below)
• All processors execute the same instruction at the same time
  o Each with different data address, etc.
• Simplifies synchronization
• Reduced instruction control hardware
• Works best for highly data-parallel applications
• Data Level Parallelism
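A small (added) C sketch using the x86 SSE intrinsics mentioned above: a single addps instruction performs four single-precision adds at once.

    #include <xmmintrin.h>   /* SSE intrinsics */
    #include <stdio.h>

    int main(void) {
        float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
        float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
        float c[4];

        __m128 va = _mm_loadu_ps(a);     /* load 4 floats into one 128-bit register */
        __m128 vb = _mm_loadu_ps(b);
        __m128 vc = _mm_add_ps(va, vb);  /* one instruction, four element-wise adds */
        _mm_storeu_ps(c, vc);

        printf("%f %f %f %f\n", c[0], c[1], c[2], c[3]);
        return 0;
    }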
(42)
SIMD Co-Processor
• Graphics and media processing operates on vectors of 8-bit and 16-bit data
  o Use 64-bit adder, with partitioned carry chain
  o Operate on 8×8-bit, 4×16-bit, or 2×32-bit vectors
  o SIMD (single-instruction, multiple-data)
(43)
History of GPUs
• Early video cards
  o Frame buffer memory with address generation for video output
• 3D graphics processing
  o Originally high-end computers (e.g., SGI)
  o Moore’s Law → lower cost, higher density
  o 3D graphics cards for PCs and game consoles
• Graphics Processing Units
  o Processors oriented to 3D graphics tasks
  o Vertex/pixel processing, shading, texture mapping, rasterization
(44)
Graphics in the System
(45)
GPU Architectures
• Processing is highly data-parallel
  o GPUs are highly multithreaded
  o Use thread switching to hide memory latency: less reliance on multi-level caches
  o Graphics memory is wide and high-bandwidth
• Trend toward general-purpose GPUs
  o Heterogeneous CPU/GPU systems
  o CPU for sequential code, GPU for parallel code
• Programming languages/APIs
  o DirectX, OpenGL
  o C for Graphics (Cg), High Level Shader Language (HLSL)
  o Compute Unified Device Architecture (CUDA)
(46)
Example: NVIDIA Tesla

Figure: streaming multiprocessor containing 8 × streaming processors
(47)
Compute Unified Device Architecture
http://developer.nvidia.com/cuda-education-training
For access to CUDA tutorials
Bulk synchronous parallel (BSP) execution model
(48)
Example: NVIDIA Tesla
• Streaming Processors
  o Single-precision FP and integer units
  o Each SP is fine-grained multithreaded
• Warp: group of 32 threads
  o Executed in parallel, SIMD style: 8 SPs × 4 clock cycles
  o Hardware contexts for 24 warps: registers, PCs, …
(49)
Classifying GPUs
• Does not fit nicely into the SIMD/MIMD model
  o Conditional execution in a thread allows an illusion of MIMD
    o But with performance degradation
    o Need to write general-purpose code with care

                                 Static: Discovered      Dynamic: Discovered
                                 at Compile Time         at Runtime
  Instruction-Level Parallelism  VLIW                    Superscalar
  Data-Level Parallelism         SIMD or Vector          Tesla Multiprocessor

• Really Single Instruction Multiple Thread (SIMT)
(50)
Vector Processors
• Highly pipelined function units
• Stream data from/to vector registers to units
  o Data collected from memory into registers
  o Results stored from registers to memory
• Example: Vector extension to MIPS
  o 32 × 64-element registers (64-bit elements)
  o Vector instructions (scalar C equivalent shown below)
    o lv, sv: load/store vector
    o addv.d: add vectors of double
    o addvs.d: add scalar to each element of vector of double
• Significantly reduces instruction-fetch bandwidth
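For intuition, here is an added sketch: the C loop below does, 64 iterations at a time, what a single lv / addvs.d / sv sequence would do in one pass over a vector register (the assembly in the comment is approximate).

    /* Roughly the work of one vector sequence:
     *     lv      v1, 0(x)     ; load 64 elements of x
     *     addvs.d v2, v1, s    ; add scalar s to every element
     *     sv      v2, 0(y)     ; store 64 results to y
     * Three vector instructions replace ~64 iterations of scalar
     * loads, adds, stores, and branches: far less instruction fetch. */
    void add_scalar_to_vector(double *y, const double *x, double s) {
        for (int i = 0; i < 64; i++)
            y[i] = x[i] + s;
    }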
(51)
Cray-1: Vector Machine
• Mid 70’s – principles have not changed
• Aggregate operations defined on vectors
• Vector ISAs operating on vector instruction sets
• Load/store ISA ala MIPS
• Often confused with SIMD
From pg-server.csc.ncsu.edu
(52)
Vector vs. Scalar
• Vector architectures and compilers
  o Simplify data-parallel programming
  o Explicit statement of absence of loop-carried dependences: reduced checking in hardware
  o Regular access patterns benefit from interleaved and burst memory
  o Avoid control hazards by avoiding loops
• More general than ad-hoc media extensions (such as MMX, SSE)
  o Better match with compiler technology
(53)
Interconnection Networks
• Network topologies
  o Arrangements of processors, switches, and links

Figure: example topologies include bus, ring, 2D mesh, N-cube (N = 3), and fully connected networks
(54)
Network Characteristics
• Performance
  o Latency per message (unloaded network)
  o Throughput: link bandwidth, total network bandwidth, bisection bandwidth
  o Congestion delays (depending on traffic)
• Cost
• Power
• Routability in silicon
(55)
Modeling Performance
• Assume the performance metric of interest is achievable GFLOPs/sec
  o Measured using computational kernels from the Berkeley design patterns
• Arithmetic intensity of a kernel
  o FLOPs per byte of memory accessed
• For a given computer, determine
  o Peak GFLOPS (from data sheet)
  o Peak memory bytes/sec (using the Stream benchmark)
(56)
Roofline Diagram
Attainable GFLOPs/sec = Min(Peak Memory BW × Arithmetic Intensity, Peak FP Performance)
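The formula is easy to evaluate directly; below is a small added C sketch, with an illustrative machine (the peak numbers are made up, not from the slides).

    #include <stdio.h>

    /* Roofline: performance is capped by either the memory system
     * (bandwidth x intensity) or the compute units (peak FLOPs). */
    static double attainable_gflops(double peak_gflops,
                                    double peak_bw_gb_per_s,
                                    double arithmetic_intensity) {
        double memory_bound = peak_bw_gb_per_s * arithmetic_intensity;
        return memory_bound < peak_gflops ? memory_bound : peak_gflops;
    }

    int main(void) {
        /* Illustrative machine: 100 GFLOP/s peak, 25 GB/s memory BW */
        for (double ai = 0.25; ai <= 8.0; ai *= 2)
            printf("AI = %4.2f -> %6.2f GFLOP/s\n",
                   ai, attainable_gflops(100.0, 25.0, ai));
        return 0;
    }

Low-intensity kernels land on the sloped, memory-bound part of the roof; once AI reaches 4 in this example, the flat compute bound takes over.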
(57)
Comparing Systems
• Example: Opteron X2 vs. Opteron X4
  o 2-core vs. 4-core, 2× FP performance/core, 2.2 GHz vs. 2.3 GHz
  o Same memory system
• To get higher performance on X4 than X2
  o Need high arithmetic intensity
  o Or working set must fit in X4’s 2MB L3 cache
(58)
Optimizing Performance
• Optimize FP performance
  o Balance adds & multiplies
  o Improve superscalar ILP and use of SIMD instructions
• Optimize memory usage
  o Software prefetch: avoid load stalls
  o Memory affinity: avoid non-local data accesses
(59)
Optimizing Performance
• Choice of optimization depends on arithmetic intensity of code
• Arithmetic intensity is not always fixed
  o May scale with problem size
  o Caching reduces memory accesses, which increases arithmetic intensity
(60)
Four Example Systems
2 × quad-core Intel Xeon e5345 (Clovertown)
2 × quad-core AMD Opteron X4 2356 (Barcelona)
(61)
Four Example Systems
2 × oct-core IBM Cell QS20
2 × oct-core Sun UltraSPARC T2 5140 (Niagara 2)
(62)
And Their Rooflines
• Kernels: SpMV (left), LBMHD (right)
• Some optimizations change arithmetic intensity
• x86 systems have higher peak GFLOPs
  o But harder to achieve, given memory bandwidth
(63)
Pitfalls
• Not developing the software to take account of a multiprocessor architecture
  o Example: using a single lock for a shared composite resource
    o Serializes accesses, even if they could be done in parallel
    o Use finer-granularity locking (sketched below)
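An added pthreads sketch of finer-granularity locking: one lock per hash-table bucket instead of one lock for the whole table, so threads working on different buckets proceed in parallel. All names and sizes here are illustrative.

    #include <pthread.h>

    #define NBUCKETS 64

    struct bucket {
        pthread_mutex_t lock;   /* one lock per bucket, not per table */
        int count;              /* stand-in for the bucket's data */
    };

    static struct bucket table[NBUCKETS];

    void table_init(void) {
        for (int i = 0; i < NBUCKETS; i++)
            pthread_mutex_init(&table[i].lock, NULL);
    }

    void increment(int key) {
        struct bucket *b = &table[key % NBUCKETS];
        pthread_mutex_lock(&b->lock);   /* contends only with same-bucket users */
        b->count++;
        pthread_mutex_unlock(&b->lock);
    }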
(64)
Concluding Remarks
• Goal: higher performance by using multiple processors
• Difficulties
  o Developing parallel software
  o Devising appropriate architectures
• Many reasons for optimism
  o Changing software and application environment
  o Chip-level multiprocessors with lower latency, higher bandwidth interconnect
• An ongoing challenge for computer architects!
(65)
Study Guide
• Be able to explain the following concepts: ILP, MT, SMT, TLP, DLP, MIMD, SIMD, SISD
• Explain the roofline model of performance
• Use of Amdahl’s Law in demonstrating the limits of scaling
• What is the impact of a sequence of read/write operations on shared data? Cache coherence
• How does ILP differ from SMT?
• How does SIMD differ from vector?
• What is the difference between weak vs. strong scaling?