
Page 1: CS 61C: Great Ideas in Computer Architecture
Multiple Instruction Issue, Virtual Memory Introduction

Instructor: Justin Hsia
Summer 2012 -- Lecture #23 (7/26/2012)

Page 2: Great Idea #4: Parallelism

• Parallel Requests: assigned to computer, e.g. search “Katz”
• Parallel Threads: assigned to core, e.g. lookup, ads
• Parallel Instructions: > 1 instruction @ one time, e.g. 5 pipelined instructions
• Parallel Data: > 1 data item @ one time, e.g. add of 4 pairs of words
• Hardware descriptions: all gates functioning in parallel at same time

[Figure: software/hardware levels of parallelism, from warehouse-scale computer and smartphone down through computer, core, instruction unit(s), functional unit(s) (A0+B0 … A3+B3), cache/memory, input/output, and logic gates — “Leverage Parallelism & Achieve High Performance”]

Page 3: Extended Review: Pipelining

• Why pipeline?
  – Increase clock speed by reducing critical path
  – Increase performance due to ILP
• How do we pipeline?
  – Add registers to delimit and pass information between pipeline stages
• Things that hurt pipeline performance
  – Filling and draining of pipeline
  – Unbalanced stages
  – Hazards

Page 4: Pipelining and the Datapath

• Registers hold up information between stages
  – Each stage “starts” on same clock trigger
  – Each stage is working on a different instruction
  – Must pass ALL signals needed in any following stage
• Stages can still interact by going around registers
  – e.g. forwarding, hazard detection hardware

[Figure: 5-stage pipelined datapath — PC, instruction memory, +4 adder, register file (rs, rt, rd), immediate/sign extend, ALU, data memory, and MUXes, with pipeline registers between stages]

Page 5: Pipelining Hazards (1/3)

• Structural, Data, and Control hazards
  – Structural hazards taken care of in MIPS
  – Data hazards only when register is written to and then accessed soon after (not necessarily immediately)
• Forwarding
  – Can forward from two different places (ALU, Mem)
  – Most further optimizations assume forwarding
• Delay slots and code reordering
  – Applies to both branches and jumps
• Branch prediction (can be dynamic)
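As a concrete companion to the forwarding bullets above, here is a minimal C sketch of an EX-stage forwarding check in the style of the textbook’s forwarding unit; the struct and field names are assumptions for illustration, not the lecture’s exact signal names.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical slice of the EX/MEM pipeline register (names illustrative). */
    struct ex_mem { bool reg_write; uint8_t rd; };   /* the older instruction */

    /* Forward the ALU result from EX/MEM when the older instruction writes a
     * register (not $zero) that the current instruction is about to read. */
    bool forward_from_ex_mem(struct ex_mem em, uint8_t src_reg) {
        return em.reg_write && em.rd != 0 && em.rd == src_reg;
    }

For an add followed by an instruction reading the add’s destination, the check fires because the older instruction’s rd matches the younger one’s source register; the opcodes enter only through reg_write (does the older instruction write a register at all?).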

Page 6: Pipelining Hazards (2/3)

• Identifying hazards:
  1) Check system specs (forwarding? delayed branch/jump?)
  2) Check code for potential hazards
     • Find all loads, branches, and jumps
     • Anywhere a value is written to and then read in the next two instructions
  3) Draw out graphical pipeline and check if actually a hazard
     • When is result available? When is it needed?

Page 7: Pipelining Hazards (3/3)

• Solving hazards (assuming the following were not present already):
  – Forwarding?
    • Can remove 1 stall for loads, 2 stalls for everything else
  – Delayed branch?
    • Exposes delay slot for branches and jumps
  – Code reordering?
    • Find unrelated instruction from earlier to move into delay slot (branch, jump, or load)
  – Branch prediction?
    • Depends on exact code execution

Page 8: Question

Question: Which of the following signals is NOT needed to determine whether or not to forward for the following two instructions?

    add $s1, $t0, $t1
    ori $s2, $s1, 0xF

☐ rd.EX
☐ opcode.EX
☐ rt.ID
☐ opcode.ID

[Figure: the two instructions staggered through the IF, ID, EX, MEM, WB stages (I$, Reg, ALU, D$, Reg)]

WHAT IF:
1) add was lw $s1, 0($t0)?
2) random other instruction in-between?
3) ori was sw $s1, 0($s0)?

Page 9: Agenda

• Higher Level ILP
• Administrivia
• Dynamic Scheduling
• Virtual Memory Introduction

Page 10: Higher Level ILP

1) Deeper pipeline (5 to 10 to 15 stages)
   – Less work per stage → shorter clock cycle
2) Multiple instruction issue
   – Replicate pipeline stages → multiple pipelines
   – Can start multiple instructions per clock cycle
   – CPI < 1 (superscalar), so can use Instructions Per Cycle (IPC) instead
   – e.g. 4 GHz 4-way multiple-issue can execute 16 BIPS with peak CPI = 0.25 and peak IPC = 4
     • But dependencies reduce this in practice
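Spelling out the arithmetic in that example: 4 GHz is 4×10⁹ cycles/sec, so 4×10⁹ cycles/sec × 4 instructions/cycle = 16×10⁹ instructions/sec = 16 BIPS, and peak CPI = 1 / peak IPC = 1/4 = 0.25.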


Page 11: Multiple Issue

• Static multiple issue
  – Compiler groups instructions to be issued together
  – Packages them into “issue slots”
  – Compiler detects and avoids hazards
• Dynamic multiple issue
  – CPU examines instruction stream and chooses instructions to issue each cycle
  – Compiler can help by reordering instructions
  – CPU resolves hazards using advanced techniques at runtime

Page 12: Superscalar Laundry: Parallel per stage

• More resources, HW to match mix of parallel tasks?

[Figure: laundry analogy — tasks A–F (light, dark, and very dirty clothing) flow through duplicated 30-minute stages in parallel; timeline 6 PM to 2 AM]

Page 13: Pipeline Depth and Issue Width

• Intel Processors over Time

Microprocessor     Year  Clock Rate  Pipeline Stages  Issue Width  Cores  Power
i486               1989    25 MHz     5                1            1       5 W
Pentium            1993    66 MHz     5                2            1      10 W
Pentium Pro        1997   200 MHz    10                3            1      29 W
P4 Willamette      2001  2000 MHz    22                3            1      75 W
P4 Prescott        2004  3600 MHz    31                3            1     103 W
Core 2 Conroe      2006  2930 MHz    14                4            2      75 W
Core 2 Yorkfield   2008  2930 MHz    16                4            4      95 W
Core i7 Gulftown   2010  3460 MHz    16                4            6     130 W

Page 14: Pipeline Depth and Issue Width

[Figure: log-scale plot (1 to 10000), 1989-2010, of clock rate, power, pipeline stages, issue width, and cores for the processors in the table above]

Page 15: Static Multiple Issue

• Compiler groups instructions into issue packets
  – Group of instructions that can be issued on a single cycle
  – Determined by pipeline resources required
• Think of an issue packet as a very long instruction word (VLIW)
  – Specifies multiple concurrent operations

Page 16: Scheduling Static Multiple Issue

• Compiler must remove some/all hazards
  – Reorder instructions into issue packets
  – No dependencies within a packet
  – Possibly some dependencies between packets
    • Varies between ISAs; compiler must know!
  – Pad with nops if necessary

Page 17: MIPS with Static Dual Issue

• Dual-issue packets
  – One ALU/branch instruction
  – One load/store instruction
  – 64-bit aligned
    • ALU/branch first, then load/store
    • Pad unused slots with nops

Address  Instruction type  Pipeline stages
n        ALU/branch        IF ID EX MEM WB
n + 4    Load/store        IF ID EX MEM WB
n + 8    ALU/branch           IF ID EX MEM WB
n + 12   Load/store           IF ID EX MEM WB
n + 16   ALU/branch              IF ID EX MEM WB
n + 20   Load/store              IF ID EX MEM WB

This prevents what kind of hazard?

Page 18: Datapath with Static Dual Issue

[Figure: static dual-issue datapath; versus single issue it adds 1 extra 32-bit instruction fetched, 2 extra register read ports, 1 extra write port, 1 extra sign extender, and 1 extra ALU]

Page 19: Hazards in the Dual-Issue MIPS

• More instructions executing in parallel
• EX data hazard
  – Forwarding avoided stalls with single-issue
  – Now can’t use ALU result in load/store in same packet
    • add $t0, $s0, $s1
      lw  $s2, 0($t0)
    • Splitting these into two packets is effectively a stall
• Load-use hazard
  – Still one cycle use latency, but now two instructions/cycle
• More aggressive scheduling required!

Page 20: Scheduling Example

• Schedule this for dual-issue MIPS:

Loop: lw   $t0, 0($s1)        # $t0 = array element
      addu $t0, $t0, $s2      # add scalar in $s2
      sw   $t0, 0($s1)        # store result
      addi $s1, $s1, -4       # decrement pointer
      bne  $s1, $zero, Loop   # branch if $s1 != 0

      ALU/branch               Load/store          Cycle
Loop: nop                      lw $t0, 0($s1)      1
      addi $s1, $s1, -4        nop                 2
      addu $t0, $t0, $s2       nop                 3
      bne  $s1, $zero, Loop    sw $t0, 4($s1)      4

– IPC = 5/4 = 1.25 (c.f. peak IPC = 2)
– Note: because addi now runs before the sw, the store offset becomes 4($s1); this change affects scheduling but not the program’s effect.
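For orientation, a C sketch of what this MIPS loop computes (function and variable names are assumptions; the pointer walks backward through the array, 4 bytes at a time):

    /* Add the scalar in $s2 to every 4-byte element, back to front. */
    void add_scalar(int *a, long n, int s) {
        for (long i = n - 1; i >= 0; i--)
            a[i] += s;      /* the lw / addu / sw of the loop body */
    }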

Page 21: Loop Unrolling

• Replicate loop body to expose more instruction level parallelism
• Use different registers per replication
  – Called register renaming
  – Avoid loop-carried anti-dependencies
    • Store followed by a load of the same register
    • a.k.a. “name dependence” (reuse of a register name)

Page 22: Loop Unrolling Example

      ALU/branch               Load/store          Cycle
Loop: addi $s1, $s1, -16       lw $t0, 0($s1)      1
      nop                      lw $t1, 12($s1)     2
      addu $t0, $t0, $s2       lw $t2, 8($s1)      3
      addu $t1, $t1, $s2       lw $t3, 4($s1)      4
      addu $t2, $t2, $s2       sw $t0, 16($s1)     5
      addu $t3, $t3, $s2       sw $t1, 12($s1)     6
      nop                      sw $t2, 8($s1)      7
      bne  $s1, $zero, Loop    sw $t3, 4($s1)      8

• IPC = 14/8 = 1.75
  – Closer to 2, but at cost of registers and code size
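In the same C terms as the sketch on page 20, the unrolled-by-4 version looks like this (a sketch assuming the element count is a multiple of 4):

    /* Four independent element updates per iteration give the dual-issue
     * scheduler more instructions to pack into each cycle. */
    void add_scalar_unrolled(int *a, long n, int s) {
        for (long i = n - 1; i >= 3; i -= 4) {
            a[i]     += s;
            a[i - 1] += s;
            a[i - 2] += s;
            a[i - 3] += s;
        }
    }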

Page 23: Agenda

• Higher Level ILP
• Administrivia
• Dynamic Scheduling
• Virtual Memory Introduction

Page 24: Administrivia

• Project 2 Part 2 due Sunday
  – Remember, extra credit available!
• “Free” lab time today
  – The TAs will still be there; Lab 10 still due
• Project 3 will be posted by Friday night
  – Two-stage pipelined CPU in Logisim
• Guest lectures next week by TAs

Page 25: Agenda

• Higher Level ILP
• Administrivia
• Dynamic Scheduling
• Big Picture: Types of Parallelism
• Virtual Memory Introduction

Page 26: Dynamic Multiple Issue

• Used in “superscalar” processors
• CPU decides whether to issue 0, 1, 2, … instructions each cycle
  – Goal is to avoid structural and data hazards
• Avoids need for compiler scheduling
  – Though it may still help
  – Code semantics ensured by the CPU

Page 27: Dynamic Pipeline Scheduling

• Allow the CPU to execute instructions out of order to avoid stalls
  – But commit results to registers in order
• Example:
    lw   $t0, 20($s2)
    addu $t1, $t0, $t2
    subu $s4, $s4, $t3
    slti $t5, $s4, 20
  – Can start subu while addu is waiting for lw
• Especially useful on cache misses; can execute many instructions while waiting!
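In C terms (a sketch with assumed names), the independence the CPU exploits looks like this; the subu line reads nothing produced by the load:

    #include <stdint.h>

    int32_t demo(const int32_t *s2, int32_t t2, int32_t t3, int32_t s4) {
        int32_t t0  = s2[5];       /* lw $t0, 20($s2): 20 bytes = index 5 */
        int32_t t1  = t0 + t2;     /* addu: must wait for t0              */
        int32_t s4b = s4 - t3;     /* subu: independent, can start early  */
        int32_t t5  = (s4b < 20);  /* slti: depends only on the subu      */
        return t1 + s4b + t5;      /* keep all results live               */
    }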

Page 28: Why Do Dynamic Scheduling?

• Why not just let the compiler schedule code?
• Not all stalls are predictable
  – e.g. cache misses
• Can’t always schedule around branches
  – Branch outcome is dynamically determined
• Different implementations of an ISA have different latencies and hazards

Page 29: Speculation

• “Guess” what to do with an instruction
  – Start operation as soon as possible
  – Check whether guess was right and roll back if necessary
• Examples:
  – Speculate on branch outcome (branch prediction)
    • Roll back if path taken is different
  – Speculate on load
    • Roll back if location is updated
• Can be done in hardware or by compiler
• Common to static and dynamic multiple issue

Page 30: Pipeline Hazard Analogy: Matching socks in later load

• A depends on D; stall since folder is tied up

[Figure: laundry timeline 6 PM to 2 AM, tasks A–F in 30-minute stages; a bubble (stall) is inserted while the folder is busy]

Page 31: Out-of-Order Laundry: Don’t Wait

• A depends on D; let the rest continue
  – Need more resources to allow out-of-order (2 folders)

[Figure: same laundry timeline; with a second folder, the other tasks proceed around the bubble while A waits for D]

Page 32: Not a Simple Linear Pipeline

3 major units operating in parallel:
• Instruction fetch and issue unit
  – Issues instructions in program order
• Many parallel functional (execution) units
  – Each unit has an input buffer called a Reservation Station
  – Holds operands and records the operation
  – Can execute instructions out-of-order (OOO)
• Commit unit
  – Saves results from functional units in Reorder Buffers
  – Performs stores only once branches are resolved, so it is OK to execute them
  – Commits results in program order

Page 33: Out-of-Order Execution (1/2)

Basically, unroll loops in hardware:
1) Fetch instructions in program order (≤ 4/clock)
2) Predict branches as taken/untaken
3) To avoid hazards on registers, rename registers using a set of internal registers (≈ 80 registers)
4) Collection of renamed instructions might execute in a window (≈ 60 instructions)

Page 34: Out-of-Order Execution (2/2)

Basically, unroll loops in hardware:
5) Execute instructions with ready operands in 1 of multiple functional units (ALUs, FPUs, Ld/St)
6) Buffer results of executed instructions in the reorder buffer until predicted branches are resolved
7) If a branch was predicted correctly, commit results in program order
8) If a branch was predicted incorrectly, discard all dependent results and restart from the correct PC

Page 35: Dynamically Scheduled CPU

[Figure: dynamically scheduled CPU — branch prediction and register renaming happen at issue; instructions wait in reservation stations until all operands are available (“Execute… and Hold”); results go to the reorder buffer for register and memory writes, which preserves dependencies and can supply operands for issued instructions; results are also sent to any waiting reservation stations (think: forwarding!)]

Page 36: Out-of-Order Intel

• All use O-O-O since 2001

Microprocessor     Year  Clock Rate  Pipeline Stages  Issue Width  OOO/Speculation  Cores  Power
i486               1989    25 MHz     5                1            No               1       5 W
Pentium            1993    66 MHz     5                2            No               1      10 W
Pentium Pro        1997   200 MHz    10                3            Yes              1      29 W
P4 Willamette      2001  2000 MHz    22                3            Yes              1      75 W
P4 Prescott        2004  3600 MHz    31                3            Yes              1     103 W
Core 2 Conroe      2006  2930 MHz    14                4            Yes              2      75 W
Core 2 Yorkfield   2008  2930 MHz    16                4            Yes              4      95 W
Core i7 Gulftown   2010  3460 MHz    16                4            Yes              6     130 W

Page 37: AMD Opteron X4 Microarchitecture

[Figure: Opteron X4 pipeline — x86 instructions are decoded into RISC operations; register renaming maps 16 architectural registers onto 72 physical registers; queues: 106 RISC ops, 24 integer ops, 36 FP/SSE ops, 44 ld/st]

Page 38: AMD Opteron X4 Pipeline Flow

• For integer operations:
  – 12 stages (floating point is 17 stages)
  – Up to 106 RISC-ops in progress
• Intel Nehalem is 16 stages for integer operations; details not revealed, but likely similar to the above
  – Intel calls RISC operations “micro-operations” or “μops”

Page 39: Does Multiple Issue Work?

• Yes, but not as much as we’d like
• Programs have real dependencies that limit ILP
• Some dependencies are hard to eliminate
  – e.g. pointer aliasing
• Some parallelism is hard to expose
  – Limited window size during instruction issue
• Memory delays and limited bandwidth
  – Hard to keep pipelines full
• Speculation can help if done well

Page 40: Question

Question: Are the following techniques primarily associated with a software- or hardware-based approach to exploiting ILP?

     Out-of-Order Execution   Speculation   Register Renaming
☐    HW                       HW            SW
☐    HW                       HW            Both
☐    HW                       Both          Both
☐    Both                     Both          Both

Page 41: Get To Know Your Instructor

Page 42: Agenda

• Higher Level ILP
• Administrivia
• Dynamic Scheduling
• Virtual Memory Introduction

Page 43: Memory Hierarchy

[Figure: memory hierarchy — Regs (instr. operands), L1 Cache (blocks), L2 Cache (blocks), Memory (pages), Disk (files), Tape; upper levels are faster, lower levels are larger. Earlier: caches. Next up: virtual memory]

Page 44: Memory Hierarchy Requirements

• Principle of Locality
  – Allows caches to offer (close to) the speed of cache memory with the size of DRAM memory
  – Can we use this at the next level to give the speed of DRAM memory with the size of disk memory?
• What other things do we need from our memory system?

Page 45: Memory Hierarchy Requirements

• Allow multiple processes to simultaneously occupy memory and provide protection
  – Don’t let programs read from or write to each other’s memories
• Give each program the illusion that it has its own private address space
  – Suppose code starts at address 0x40000000; then different processes each think their code resides at that same address
  – Each program must have a different view of memory

Page 46: Virtual Memory

• Next level in the memory hierarchy
  – Provides illusion of very large main memory
  – Working set of “pages” resides in main memory (a subset of all pages residing on disk)
• Also allows OS to share memory and protect programs from each other
• Each process thinks it has all the memory to itself

Page 47: Virtual to Physical Address Translation

• Each program operates in its own virtual address space; thinks it’s the only program running
• Each is protected from the others
• OS can decide where each goes in memory
• Hardware gives virtual → physical mapping

[Figure: a program operates in its virtual address space; a HW mapping translates each virtual address (inst. fetch, load, store) into a physical address (inst. fetch, load, store) in physical memory (incl. caches)]

Page 48: VM Analogy

• Book title is like a virtual address
• Library of Congress call number is like a physical address
• Card catalogue is like a page table, mapping from book title to call number
• On the card for a book, “in local library” vs. “in another branch” is like a valid bit indicating in main memory vs. on disk
• On the card, “available for 2-hour in-library use” (vs. 2-week checkout) is like access rights

Page 49: Simple Example: Base and Bound Reg

• Want:
  – Discontinuous mapping
  – Process size >> mem
• Addition not enough!
• Use indirection!

[Figure: physical memory from 0 to ∞ holding OS, User A, User B, User C; each process sits between its $base and $base+$bound; enough space remains for User D, but it is discontinuous (“fragmentation problem”)]
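To make the “addition not enough” point concrete, a minimal C sketch of base-and-bound translation (register and function names are assumptions for illustration):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical per-process relocation registers. */
    typedef struct { uint32_t base, bound; } bb_regs_t;

    /* Every process sees addresses 0..bound-1; protection is the check. */
    bool bb_translate(const bb_regs_t *r, uint32_t vaddr, uint32_t *paddr) {
        if (vaddr >= r->bound)
            return false;           /* protection fault: outside segment  */
        *paddr = r->base + vaddr;   /* pure addition: contiguous mapping  */
        return true;
    }

A single adder can only relocate one contiguous block, which is exactly why the slide calls for a level of indirection (page tables) next.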

Page 50: Mapping VM to PM

• Divide into equal sized chunks (about 4 KB - 8 KB)
• Any chunk of virtual memory can be assigned to any chunk of physical memory (“page”)

[Figure: a process’s virtual memory (code, static, heap, stack; addresses 0 to ∞) mapped page-by-page onto a 64 MB physical memory]

Page 51: Paging Organization

• Here assume page size is 1 KiB
• Page is the unit of mapping
• Page is also the unit of transfer from disk to physical memory

[Figure: “Addr Trans MAP” from virtual memory (page 0 @ 0, page 1 @ 1024, page 2 @ 2048, …, page 31 @ 31744; 1 KiB each) to physical memory (page 0 @ 0, page 1 @ 1024, …, page 7 @ 7168; 1 KiB each); a virtual address maps to a physical address]
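A worked translation with the figure’s numbers (assuming, as the diagram suggests, that virtual page 1 maps to physical page 7): VA = 1024 → VPN = 1024 / 1024 = 1, offset = 0; PageTable[1] = 7, so PA = 7 × 1024 + 0 = 7168.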

Page 52: Virtual Memory Mapping Function

• Cannot have a simple function to predict an arbitrary mapping
• Use a table lookup (“Page Table”) for the mappings: the page number is the index
• Virtual Memory Mapping Function:
  – Physical Offset = Virtual Offset
  – Physical Page Number = PageTable[Virtual Page Number]
    (P.P.N. also called “page frame”)
• An address splits into | Page Number | Offset |
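A sketch of this function in C for the 1 KiB pages assumed two slides back (10-bit offset; names are illustrative):

    #include <stdint.h>

    #define OFFSET_BITS 10u                         /* 1 KiB pages */
    #define OFFSET_MASK ((1u << OFFSET_BITS) - 1u)

    /* Physical offset = virtual offset; the PPN comes from the page
     * table, indexed by the virtual page number. */
    uint32_t vm_translate(const uint32_t *page_table, uint32_t vaddr) {
        uint32_t vpn    = vaddr >> OFFSET_BITS;  /* page number = index */
        uint32_t offset = vaddr & OFFSET_MASK;
        uint32_t ppn    = page_table[vpn];       /* “page frame” number */
        return (ppn << OFFSET_BITS) | offset;
    }

With the previous figure’s table, vm_translate(pt, 1024) returns 7168 when pt[1] == 7.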

Page 53: Address Mapping: Page Table

[Figure: a virtual address splits into page number and offset; the Page Table Base Reg plus the page number indexes into the page table (located in physical memory); each entry holds Valid (V), Access Rights (A.R.), and Physical Page Address (P.P.A.) fields; the selected physical page address combined with the offset forms the physical memory address]

Page 54: Page Table

• A page table is an operating system structure which contains the mapping of virtual addresses to physical locations
  – There are several different ways, all up to the operating system, to keep this data around
• Each process running in the operating system has its own page table
  – “State” of a process is its PC, all registers, plus its page table
  – OS changes page tables by changing the contents of the Page Table Base Register
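That “state of a process” bullet can be pictured as a toy C struct (field names are assumptions, not any real OS’s layout):

    #include <stdint.h>

    /* On a context switch the OS saves/restores exactly this state;
     * pointing ptbr at a different page table swaps the address space. */
    typedef struct {
        uint32_t pc;         /* program counter             */
        uint32_t regs[32];   /* all architectural registers */
        uint32_t ptbr;       /* Page Table Base Register    */
    } process_state_t;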

Page 55: Requirements Revisited

• Remember the motivation for VM:
• Sharing memory with protection
  – Different physical pages can be allocated to different processes (sharing)
  – A process can only touch pages in its own page table (protection)
• Separate address spaces
  – Since programs work only with virtual addresses, different programs can have different data/code at the same address!
• What about the memory hierarchy?

Page 56: Page Table Entry (PTE) Format

• Contains either the Physical Page Number or an indication that the page is not in main memory
• OS maps to disk if Not Valid (V = 0)
• If valid, also check if we have permission to use the page: Access Rights (A.R.) may be Read Only, Read/Write, or Executable

[Figure: page table as a column of PTEs, each holding Valid (V), Access Rights (A.R.), and Physical Page Number (P.P.N.) fields]
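One way to picture a PTE and these checks in C (a sketch; field widths and encodings are illustrative, real formats are ISA- and OS-specific):

    #include <stdint.h>

    typedef struct {
        uint32_t valid : 1;   /* V: page is resident in main memory        */
        uint32_t ar    : 2;   /* A.R.: read-only / read-write / executable */
        uint32_t ppn   : 20;  /* physical page number (“page frame”)       */
    } pte_t;

    enum { AR_READ_ONLY, AR_READ_WRITE, AR_EXEC };

    /* -1: page fault (V = 0, OS must fetch the page from disk);
     * -2: protection violation; otherwise the physical page number. */
    int check_pte(pte_t pte, int is_write) {
        if (!pte.valid)                          return -1;
        if (is_write && pte.ar == AR_READ_ONLY)  return -2;
        return (int)pte.ppn;
    }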

Page 57: Paging/Virtual Memory for Multiple Processes

[Figure: User A’s virtual memory (code, static, heap, stack; 0 to ∞) and User B’s virtual memory each map through their own page table (A Page Table, B Page Table) into the same 64 MB physical memory]

Page 58: Comparing the 2 Levels of Hierarchy

Cache version                      Virtual Memory version
Block or Line                      Page
Miss                               Page Fault
Block size: 32-64 B                Page size: 4 KB-8 KB
Placement: Direct Mapped or        Placement: Fully Associative
  N-way Set Associative
Replacement: LRU or Random         Replacement: LRU
Write Thru or Write Back           Write Back

Page 59: Summary

• More aggressive performance options:
  – Longer pipelines
  – Superscalar (multiple issue)
  – Out-of-order execution
  – Speculation
• Virtual memory bridges memory and disk
  – Provides illusion of independent address spaces and protects them from each other
  – VA to PA mapping using a Page Table