Cache Optimizations - MIT OpenCourseWare

Page 1

Cache Optimizations

Joel Emer
Computer Science and Artificial Intelligence Laboratory

Massachusetts Institute of Technology

Based on the material prepared by Krste Asanovic and Arvind

Page 2

CPU-Cache Interaction

(5-stage pipeline)

[Figure: five-stage pipeline datapath; the PC feeds a primary instruction cache in the fetch stage, and the memory stage accesses a primary data cache (addr, rdata, wdata, hit?); cache refill data arrives from lower levels of the memory hierarchy via the memory controller.]

Stall entire CPU on data cache miss

What about an instruction miss, or writes to the i-stream?

Page 3

Write Performance

[Figure: direct-mapped cache organization; the address is split into Tag (t bits), Index (k bits), and Block Offset (b bits); the index selects one of 2^k lines, each holding a valid bit (V), tag, and data; the stored tag is compared (=) with the address tag to produce HIT, and a write enable (WE) controls the write of the selected data word or byte.]

Page 4

Reducing Write Hit Time

Problem: writes take two cycles in the memory stage, one cycle for the tag check plus one cycle for the data write if the access hits

Solutions:
• Design a data RAM that can perform read and write in one cycle; restore the old value after a tag miss
• CAM-tag caches: the word line is only enabled if there is a hit
• Pipelined writes: hold the write data for a store in a single buffer ahead of the cache, and write the cache data during the next store's tag check

Page 5

Pipelining Cache Writes

[Figure: pipelined-write datapath; the address (tag + index) and store data arrive from the CPU; a one-entry delayed-write address/data buffer sits ahead of the data array; the tag compare (=?) produces Hit?, selects between the load and the buffered store, and load data is returned to the CPU.]

Data from a store hit is written into the data portion of the cache during the tag access of the subsequent store.
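As an illustration only, here is a minimal C sketch of that delayed-write idea for a direct-mapped cache; the structure and field names (pending, pend_index, and so on) are hypothetical and not taken from the lecture.

#include <stdint.h>
#include <stdbool.h>

#define LINES 256            /* 2^k lines, hypothetical size */

typedef struct {
    uint32_t tag[LINES];
    bool     valid[LINES];
    uint32_t data[LINES];

    /* one-entry delayed-write buffer */
    bool     pending;
    uint32_t pend_index, pend_data;
} Cache;

/* Handle a store: do the tag check this cycle, but defer the data-array
 * write; the previously buffered store is written now, "for free", while
 * this store's tag is being checked. */
bool store(Cache *c, uint32_t tag, uint32_t index, uint32_t value)
{
    if (c->pending) {                      /* drain the delayed write */
        c->data[c->pend_index] = c->pend_data;
        c->pending = false;
    }

    bool hit = c->valid[index] && c->tag[index] == tag;
    if (hit) {                             /* buffer this store's data write */
        c->pending    = true;
        c->pend_index = index;
        c->pend_data  = value;
    }
    return hit;
}

/* Note: a load would also have to check (or drain) this buffer, which is
 * the hazard raised on the next slide. */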

Page 6

Write Pipeline

[Figure: write pipeline with the stages I-Fetch, Decode / Reg Read, Address Calc (ALU), Tag Mem Read, and Data Read / Write, using the instruction memory, register file, ALU, and two data-memory accesses; the data write occurs a stage after the tag read.]

What hazard has been introduced in this pipeline?

Page 7

Write Policy

• Cache hit:
  – write through: write both cache and memory
    • generally higher traffic, but simplifies cache coherence
  – write back: write the cache only (memory is written only when the entry is evicted)
    • a dirty bit per block can further reduce the traffic

• Cache miss:
  – no write allocate: only write to main memory
  – write allocate (aka fetch on write): fetch the block into the cache, then write it

• Common combinations:
  – write through and no write allocate
  – write back with write allocate
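For concreteness, a rough C sketch contrasting the two common combinations; the one-line "cache" and its field names are hypothetical simplifications chosen for illustration, not an API from the lecture.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical one-line cache, indexed by the full address (assumed < 1024),
 * just to contrast the two policies. */
typedef struct { bool valid, dirty; uint32_t addr, data; } Line;

static Line     line;
static uint32_t memory[1024];

/* Write-through + no-write-allocate: always update memory; update the
 * cache only if the line is already present; never allocate on a miss. */
void store_wt(uint32_t addr, uint32_t value)
{
    memory[addr] = value;
    if (line.valid && line.addr == addr)
        line.data = value;
}

/* Write-back + write-allocate: on a miss, write back the victim if it is
 * dirty and fetch the missing line; then write only the cache and set the
 * dirty bit, so memory is updated only on eviction. */
void store_wb(uint32_t addr, uint32_t value)
{
    if (!(line.valid && line.addr == addr)) {
        if (line.valid && line.dirty)
            memory[line.addr] = line.data;                   /* write back the victim */
        line = (Line){ .valid = true, .dirty = false,
                       .addr = addr, .data = memory[addr] }; /* allocate              */
    }
    line.data  = value;
    line.dirty = true;
}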

Page 8

Average Cache Read Latency

α is the HIT RATIO: the fraction of references that hit in the cache
1 - α is the MISS RATIO: the remaining references

With t_c the cache access time and t_m the main-memory access time:

Average access time for serial search (access the cache first, go to memory only on a miss):

  t_c + (1 - α) t_m

Average access time for parallel search (cache and memory accessed concurrently):

  α t_c + (1 - α) t_m

[Figure: processor connected to main memory with the cache either in series with or in parallel to the memory; address and data paths shown for each organization.]

t_c is smallest for which type of cache?
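As a purely illustrative example with hypothetical numbers: take t_c = 1 cycle, t_m = 50 cycles, and α = 0.95. Serial search then averages 1 + 0.05 x 50 = 3.5 cycles, while parallel search averages 0.95 x 1 + 0.05 x 50 = 3.45 cycles; the parallel organization saves the cache access time on each miss.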

Page 9

Improving Cache Performance

Average memory access time = Hit time + Miss rate x Miss penalty

To improve performance:
• reduce the miss rate (e.g., larger cache)
• reduce the miss penalty (e.g., L2 cache)
• reduce the hit time
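For a sense of scale, with hypothetical numbers (a 2-cycle hit time, a 4% miss rate, and a 100-cycle miss penalty): average memory access time = 2 + 0.04 x 100 = 6 cycles, so even a small miss rate can dominate the average when the miss penalty is large.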

What is the simplest design strategy?

Page 10

Improving Cache Performance

Average memory access time = Hit time + Miss rate x Miss penalty

To improve performance:
• reduce the miss rate (e.g., larger cache)
• reduce the miss penalty (e.g., L2 cache)
• reduce the hit time

The simplest design strategy is to build the largest primary cache that does not slow down the clock or add pipeline stages

(but design decisions are more complex with out-of-order or highly pipelined CPUs)

Page 11

Causes for Cache Misses

• Compulsory: first reference to a block, a.k.a. cold-start misses
  – misses that would occur even with an infinite cache

• Capacity: the cache is too small to hold all the data needed by the program
  – misses that would occur even under a perfect placement and replacement policy

• Conflict: misses that occur because of collisions due to the block-placement strategy
  – misses that would not occur with full associativity

Page 12

Effect of Cache Parameters on Performance

• Larger cache size
  + reduces capacity and conflict misses
  - hit time will increase

• Higher associativity
  + reduces conflict misses (up to around 4-8 way)
  - may increase access time

• Larger block size

Page 13

Block Size and Spatial Locality

A block is the unit of transfer between the cache and memory.

[Figure: a CPU address is split into a block address (the upper 32 - b bits) and a block offset (the low b bits), where b = block size, a.k.a. line size; the example shows a 4-word block (Word0 through Word3) with b = 2.]
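To make the tag / index / offset split concrete, here is a small C sketch; the particular field widths (64-byte blocks, 256 sets) are hypothetical choices for the example, not values from the slide.

#include <stdint.h>
#include <stdio.h>

#define BLOCK_BITS 6    /* 64-byte blocks -> b = 6 (hypothetical) */
#define INDEX_BITS 8    /* 256 sets       -> k = 8 (hypothetical) */

int main(void)
{
    uint32_t addr   = 0x12345678;
    uint32_t offset = addr & ((1u << BLOCK_BITS) - 1);
    uint32_t index  = (addr >> BLOCK_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t tag    = addr >> (BLOCK_BITS + INDEX_BITS);

    /* address = tag (t bits) | index (k bits) | block offset (b bits) */
    printf("tag=0x%x index=0x%x offset=0x%x\n",
           (unsigned)tag, (unsigned)index, (unsigned)offset);
    return 0;
}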

Larger block size has distinct hardware advantages:
• less tag overhead
• exploit fast burst transfers from DRAM
• exploit fast burst transfers over wide busses

What are the disadvantages of increasing block size?

Page 14

Block-level Optimizations

• Tags are too large, i.e., too much overhead
  – Simple solution: larger blocks, but the miss penalty could be large

• Sub-block placement (aka sector cache)
  – A valid bit is added to units smaller than the full block, called sub-blocks
  – Only read a sub-block on a miss
  – If a tag matches, is the word in the cache?

[Figure: example sector cache with three lines; the line with tag 100 has sub-block valid bits 1 1 1 1, tag 300 has 1 1 0 0, and tag 204 has 0 1 0 1.]
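A minimal C sketch of the sub-block (sector) hit check, with hypothetical sizes and names: a word is present only when the tag matches and the valid bit of its sub-block is set.

#include <stdint.h>
#include <stdbool.h>

#define SUBBLOCKS 4                     /* sub-blocks per block (hypothetical) */

typedef struct {
    uint32_t tag;
    bool     sub_valid[SUBBLOCKS];      /* one valid bit per sub-block */
} SectorLine;

/* A matching tag is not enough: the addressed sub-block must also be valid. */
bool sector_hit(const SectorLine *line, uint32_t tag, unsigned sub)
{
    return line->tag == tag && line->sub_valid[sub];
}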

Page 15

Set-Associative RAM-Tag Cache

[Figure: two-way set-associative RAM-tag cache; the address is split into tag, index, and offset; the index selects a set, each way stores tag, status, and data, and two comparators (=?) check the address tag against both ways in parallel.]

• Not energy-efficient
  – A tag and data word is read from every way

• Two-phase approach
  – First read the tags, then read data only from the selected way
  – More energy-efficient
  – Doubles latency in L1
  – OK for L2 and above, why?

Page 16

Highly-Associative CAM-Tag Caches

• For high associativity (e.g., 32-way), use content-addressable memory (CAM) for the tags (Intel XScale)
• Overhead: a tag + comparator bit is 2-4x the area of a plain RAM-tag bit

[Figure: CAM-tag cache organization; the address splits into tag (t bits), set index (i bits), and offset (b bits); each set holds several (Tag =?, Data Block) entries; only the addressed set is enabled and only the hitting entry's data is accessed, which saves energy.]

Page 17

Way Predicting Caches

(MIPS R10000 L2 cache)

• Use the processor address to index into a way-prediction table
• Look in the predicted way at the given index, then:
  – HIT: return the copy of the data from the cache
  – MISS: look in the other way
    • SLOW HIT: the data is found in the other way; change the entry in the prediction table
    • MISS: read the block of data from the next level of cache
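A rough C sketch of that lookup flow, assuming a hypothetical 2-way cache and a per-set prediction table; the names are illustrative and do not come from the R10000.

#include <stdint.h>
#include <stdbool.h>

#define SETS 256
#define WAYS 2

typedef struct {
    uint32_t tag[SETS][WAYS];
    bool     valid[SETS][WAYS];
    uint8_t  predicted_way[SETS];       /* way-prediction table */
} Cache;

typedef enum { HIT, SLOW_HIT, MISS } Outcome;

Outcome lookup(Cache *c, uint32_t tag, uint32_t set)
{
    unsigned p = c->predicted_way[set];

    /* Fast path: probe only the predicted way. */
    if (c->valid[set][p] && c->tag[set][p] == tag)
        return HIT;

    /* Slow path: probe the other way; on success, retrain the predictor. */
    unsigned other = 1 - p;
    if (c->valid[set][other] && c->tag[set][other] == tag) {
        c->predicted_way[set] = other;
        return SLOW_HIT;
    }
    return MISS;                        /* go to the next level of the hierarchy */
}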

Page 18

Way Predicting Instruction Cache (Alpha 21264-like)

[Figure: way-predicting instruction cache; the PC addresses the primary instruction cache (addr in, instruction out); each fetch supplies a Sequential Way prediction used with PC + 4, and a Branch Target Way prediction used when the jump control selects the jump target.]

Page 19

Five-minute break to stretch your legs

Page 20

Victim Caches (HP 7200)

[Figure: CPU and register file with an L1 data cache and a unified L2 cache; a small fully-associative victim cache (4 blocks) sits next to the L1; evicted data from the L1 goes into the victim cache, data that hits in the victim cache (after missing in L1) is returned, and data evicted from the victim cache goes... where?]

A victim cache is a small associative backup cache, added to a direct-mapped cache, which holds recently evicted lines.
• First look up in the direct-mapped cache
• If miss, look in the victim cache
• If hit in the victim cache, swap the hit line with the line just evicted from L1
• If miss in the victim cache, L1 victim -> VC, VC victim -> ?

Fast hit time of a direct-mapped cache, but with reduced conflict misses.
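A simplified C sketch of that lookup-and-swap protocol; the sizes, the FIFO replacement in the victim cache, and the tag-only bookkeeping (no data, no writebacks) are hypothetical simplifications.

#include <stdint.h>
#include <stdbool.h>

#define L1_SETS    256
#define VC_ENTRIES 4

typedef struct { bool valid; uint32_t tag; } L1Line;
typedef struct { bool valid; uint32_t tag, set; } VCLine;   /* VC is fully associative,
                                                               so it keeps the full block
                                                               address (tag + set)       */
static L1Line  l1[L1_SETS];
static VCLine  vc[VC_ENTRIES];
static unsigned vc_next;                 /* FIFO replacement pointer */

/* Returns true on a hit in either the direct-mapped L1 or the victim cache. */
bool access_block(uint32_t tag, uint32_t set)
{
    if (l1[set].valid && l1[set].tag == tag)
        return true;                                  /* fast L1 hit */

    for (unsigned i = 0; i < VC_ENTRIES; i++) {
        if (vc[i].valid && vc[i].tag == tag && vc[i].set == set) {
            /* Swap: the hit line moves into L1, the displaced L1 line moves into the VC. */
            VCLine displaced = { l1[set].valid, l1[set].tag, set };
            l1[set].valid = true;
            l1[set].tag   = tag;
            vc[i] = displaced;
            return true;                              /* slower hit via the VC */
        }
    }

    /* Miss in both: L1 victim goes to the VC; the VC victim is dropped here
       (or written back if dirty, which this sketch omits). */
    if (l1[set].valid) {
        vc[vc_next] = (VCLine){ true, l1[set].tag, set };
        vc_next = (vc_next + 1) % VC_ENTRIES;
    }
    l1[set].valid = true;
    l1[set].tag   = tag;                              /* refilled from the next level */
    return false;
}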

Page 21

Multilevel Caches

• A memory cannot be both large and fast
• Increasing sizes of cache at each level

[Figure: CPU -> L1 -> L2 -> DRAM.]

Local miss rate = misses in cache / accesses to cache

Global miss rate = misses in cache / CPU memory accesses

Misses per instruction = misses in cache / number of instructions
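As an illustrative example with hypothetical numbers: if the L1 misses on 5% of the CPU's memory accesses and the L2 misses on 20% of the accesses it receives, then the L2's local miss rate is 20%, while its global miss rate is 0.05 x 0.20 = 1% of all CPU memory accesses.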

Page 22

Inclusion Policy

• Inclusive multilevel cache:
  – The inner cache holds copies of data that is also in the outer cache
  – Accesses from outside the CPU need only check the outer cache
  – Most common case

• Exclusive multilevel caches:
  – The inner cache may hold data not in the outer cache
  – Swap lines between the inner and outer caches on a miss
  – Used in the Athlon, with a 64KB primary and 256KB secondary cache

Why choose one type or the other?

Page 23

Itanium-2 On-Chip Caches (Intel/HP, 2002)

Image removed due to copyright restrictions. To view image, visit http://www-vlsi.stanford.edu/group/chips_micropro_body.html

• Level 1: 16KB, 4-way set-associative, 64B line, quad-port (2 load + 2 store), single-cycle latency

• Level 2: 256KB, 4-way set-associative, 128B line, quad-port (4 load or 4 store), five-cycle latency

• Level 3: 3MB, 12-way set-associative, 128B line, single 32B port, twelve-cycle latency

Page 24

Reducing Read Miss Penalty

[Figure: CPU and register file with a data cache and a unified L2 cache; a write buffer sits between them, holding evicted dirty lines for a write-back cache, or all writes for a write-through cache.]

• The write buffer may hold the updated value of a location needed by a read miss

• Simple scheme: on a read miss, wait for the write buffer to go empty

• Faster scheme: check the write-buffer addresses against the read-miss address; if there is no match, allow the read miss to go ahead of the writes, else return the value in the write buffer
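A small C sketch of the "faster scheme"; the buffer size and names are hypothetical, and real hardware performs this check with an associative comparison rather than a loop.

#include <stdint.h>
#include <stdbool.h>

#define WB_ENTRIES 8

typedef struct { bool valid; uint32_t addr, data; } WBEntry;
static WBEntry write_buffer[WB_ENTRIES];

/* On a read miss, scan the write buffer. If the missing address matches a
 * buffered write, return its value directly; otherwise the read miss may be
 * issued to the next level ahead of the buffered writes. */
bool read_miss_check(uint32_t addr, uint32_t *value_out)
{
    for (unsigned i = 0; i < WB_ENTRIES; i++) {
        if (write_buffer[i].valid && write_buffer[i].addr == addr) {
            *value_out = write_buffer[i].data;   /* forward from the write buffer */
            return true;
        }
    }
    return false;   /* no conflict: let the read miss bypass the writes */
}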

Page 25

Prefetching

• Speculate on future instruction and data accesses and fetch them into the cache(s)
  – Instruction accesses are easier to predict than data accesses

• Varieties of prefetching
  – Hardware prefetching
  – Software prefetching
  – Mixed schemes

• What types of misses does prefetching affect?

Page 26

Issues in Prefetching

• Usefulness – should produce hits
• Timeliness – not late and not too early
• Cache and bandwidth pollution

[Figure: CPU and register file with separate L1 instruction and L1 data caches and a unified L2 cache; prefetched data is brought into the caches.]

Page 27

Hardware Instruction Prefetching

• Instruction prefetch in the Alpha AXP 21064
  – Fetch two blocks on a miss: the requested block and the next consecutive block
  – The requested block is placed in the cache, and the next block in the instruction stream buffer

[Figure: CPU and register file with an L1 instruction cache and a unified L2 cache; a 4-block stream buffer sits alongside the L1; on a miss the requested block goes into the cache while the prefetched instruction block goes into the stream buffer.]

Page 28

Hardware Data Prefetching

• Prefetch-on-miss:
  – Prefetch b + 1 upon a miss on b

• One Block Lookahead (OBL) scheme
  – Initiate a prefetch for block b + 1 when block b is accessed
  – Why is this different from doubling the block size?
  – Can extend to N-block lookahead

• Strided prefetch
  – If there is a sequence of accesses to blocks b, b+N, b+2N, then prefetch b+3N, etc.
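A toy C sketch of a strided prefetcher in the spirit of the last bullet; the single-stream detector and the prefetch() stub are hypothetical simplifications (real hardware typically tracks many streams, often per load PC).

#include <stdint.h>
#include <stdio.h>

static void prefetch(uint32_t block)            /* stub: issue a prefetch request */
{
    printf("prefetch block %u\n", (unsigned)block);
}

/* Detects a constant stride between consecutive block accesses and, once the
 * same stride has been seen twice in a row, prefetches the next block. */
void on_block_access(uint32_t block)
{
    static uint32_t last_block;
    static int32_t  last_stride;
    static int      confident;

    int32_t stride = (int32_t)(block - last_block);
    confident = (stride != 0 && stride == last_stride);

    if (confident)
        prefetch(block + (uint32_t)stride);     /* e.g., b, b+N, b+2N -> prefetch b+3N */

    last_stride = stride;
    last_block  = block;
}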

Page 29

Software Prefetching

for(i=0; i < N; i++) {
    prefetch( &a[i + 1] );
    prefetch( &b[i + 1] );
    SUM = SUM + a[i] * b[i];
}

• What property do we require of the cache for prefetching to work?
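The prefetch() call on the slide is a generic placeholder; as one concrete, hedged possibility, GCC and Clang expose the __builtin_prefetch intrinsic, so a self-contained version of the loop could look like the sketch below (the function name and bounds check are ours).

#include <stddef.h>

double dot_with_prefetch(const double *a, const double *b, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + 1 < n) {                       /* avoid prefetching past the arrays */
            __builtin_prefetch(&a[i + 1]);     /* hint only: read, default locality */
            __builtin_prefetch(&b[i + 1]);
        }
        sum += a[i] * b[i];
    }
    return sum;
}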

Page 30

Software Prefetching Issues

• Timing is the biggest issue, not predictability
  – If you prefetch very close to when the data is required, you might be too late
  – Prefetch too early, and you cause pollution
  – Estimate how long it will take for the data to come into L1, so we can set P appropriately
  – Why is this hard to do?

for(i=0; i < N; i++) {
    prefetch( &a[i + P] );
    prefetch( &b[i + P] );
    SUM = SUM + a[i] * b[i];
}

Must consider the cost of the prefetch instructions

Page 31

Compiler Optimizations

• Restructuring code affects the data block access sequence
  – Group data accesses together to improve spatial locality
  – Re-order data accesses to improve temporal locality

• Prevent data from entering the cache
  – Useful for variables that will only be accessed once before being replaced
  – Needs a mechanism for software to tell the hardware not to cache the data (instruction hints or page-table bits)

• Kill data that will never be used again
  – Streaming data exploits spatial locality but not temporal locality
  – Replace into dead cache locations

Page 32

Loop Interchange

for(j=0; j < N; j++) {
    for(i=0; i < M; i++) {
        x[i][j] = 2 * x[i][j];
    }
}

becomes, after interchanging the loops:

for(i=0; i < M; i++) {
    for(j=0; j < N; j++) {
        x[i][j] = 2 * x[i][j];
    }
}

What type of locality does this improve?

Page 33

Loop Fusion

for(i=0; i < N; i++)
    for(j=0; j < M; j++)
        a[i][j] = b[i][j] * c[i][j];

for(i=0; i < N; i++)
    for(j=0; j < M; j++)
        d[i][j] = a[i][j] * c[i][j];

become, after fusing the two loop nests:

for(i=0; i < N; i++)
    for(j=0; j < M; j++) {
        a[i][j] = b[i][j] * c[i][j];
        d[i][j] = a[i][j] * c[i][j];
    }

What type of locality does this improve?

Page 34

Blocking

for(i=0; i < N; i++)
    for(j=0; j < N; j++) {
        r = 0;
        for(k=0; k < N; k++)
            r = r + y[i][k] * z[k][j];
        x[i][j] = r;
    }

[Figure: snapshot of the access patterns for matrices x, y, and z, with elements marked Not touched, Old access, or New access; computing one element x[i][j] reads an entire row of y and an entire column of z.]

Page 35

Blocking

for(jj=0; jj < N; jj=jj+B)
    for(kk=0; kk < N; kk=kk+B)
        for(i=0; i < N; i++)
            for(j=jj; j < min(jj+B,N); j++) {
                r = 0;
                for(k=kk; k < min(kk+B,N); k++)
                    r = r + y[i][k] * z[k][j];
                x[i][j] = x[i][j] + r;
            }

[Figure: blocked access patterns for matrices x, y, and z; in each (jj, kk) iteration only an N x B strip of x, an N x B strip of y, and a B x B block of z are touched.]


What type of locality does this improve?

Page 36

Thank you!

Page 37

Extras

Page 38

Memory Hierarchy Example

• AlphaStation 600/5 desktop workstation
  – Alpha 21164 @ 333 MHz
  – On-chip L1 and L2 caches
  – L1 instruction cache: 8KB direct-mapped, 32B lines, fetches four instructions/cycle (16B)
  – Instruction stream prefetches up to 4 cache lines ahead
  – L1 data cache: 8KB direct-mapped, 32B lines, write-through, loads two 8B words or stores one 8B word per cycle (2-cycle latency)
  – Up to 21 outstanding loads, 6 x 32B lines of outstanding writes
  – L2 unified cache: 96KB, 3-way set-associative, 64B blocks / 32B sub-blocks, write-back, 16B/cycle bandwidth (7-cycle latency)
  – Off-chip L3 unified cache: 8MB direct-mapped, 64B blocks, peak bandwidth 16B every 7 cycles (15-cycle latency)
  – DRAM: peak bandwidth 16B every 10 cycles (60-cycle latency)

Page 39

Further Issues

There are several other factors that are intimately connected with cache design:

• Virtual memory and associated address translation

• Multiprocessor and associated memory-model issues – cache coherence

stay tuned
