
Virtual Memory


CSE 2431: Introduction to Operating Systems Reading: §§8.5, 8.6, 9.1, [OSC]

Review

•  Memory manager
  –  Monitor used and free memory
  –  Allocate memory to processes
  –  Reclaim (deallocate) memory
  –  Swap between main memory and disk
•  Mono-programming memory management
  –  Overlays
•  Multi-programming memory management
  –  Fixed partitions
  –  Variable partitions
  –  Relocation and protection
•  Swapping

Outline

•  Virtual Memory
•  Paging
•  Page Table
•  Page Fault
•  TLB
•  Inverted Page Table
•  Sharing and Protection

Problems

•  Programs are too big to fit in the available memory
•  Solutions
  –  Overlays: programmers manually split programs into overlays
  –  Swapping: the OS swaps whole processes in and out
  –  Virtual memory

Virtual Memory

•  Provide the user with a virtual memory that is as big as the user needs
•  Store the virtual memory on disk
•  Cache the parts of virtual memory currently in use in real memory
•  Load and store cached virtual memory without user program intervention

Benefits of Virtual Memory

•  Use secondary storage ($)
  –  Extend RAM ($$$) with reasonable performance
•  Protection
  –  Programs do not step over each other
•  Convenience
  –  Flat address space
  –  Programs have the same view of the world
  –  Load and store cached virtual memory without user program intervention
•  Reduce fragmentation
  –  Make cacheable units all the same size (a page)
•  Remove memory deadlock possibilities
  –  Permit preemption of real memory

Outline

•  Virtual Memory
•  Paging
•  Page Table
•  Page Fault
•  TLB
•  Inverted Page Table
•  Sharing and Protection

Paging (1)-(6)

[Figure sequence: an 8-page virtual memory stored on disk, a 4-frame real memory, and a page table mapping virtual pages to frames. The slides replay a sequence of page requests:]

•  Request page 3: loaded into a free frame; the page table maps page 3 to that frame
•  Request page 1: loaded into a free frame
•  Request page 6: loaded into a free frame
•  Request page 2: loaded into the last free frame; real memory is now full
•  Request page 8: no frame is free, so page 1 is swapped out to disk first
•  Page 8 is then loaded into the freed frame and the page table is updated
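The sequence above can be replayed with a minimal demand-paging simulation. This is a sketch: the 4-frame memory matches the slides, but FIFO is assumed here for victim selection (the slides evict page 1 without naming a policy, so the victim chosen below differs); the function name `run` is illustrative.

```python
# Minimal demand-paging simulation: 4 physical frames, virtual pages
# referenced in the order 3, 1, 6, 2, 8. FIFO eviction is assumed.
from collections import OrderedDict

def run(references, num_frames=4):
    page_table = OrderedDict()   # virtual page -> frame, in load order
    free_frames = list(range(num_frames))
    events = []
    for page in references:
        if page in page_table:
            events.append(f"hit {page}")
            continue
        if free_frames:
            frame = free_frames.pop(0)
        else:
            victim, frame = page_table.popitem(last=False)  # evict oldest
            events.append(f"evict {victim}")
        page_table[page] = frame
        events.append(f"load {page} -> frame {frame}")
    return page_table, events

table, log = run([3, 1, 6, 2, 8])
print(log)
```

The fifth request (page 8) finds no free frame, so one resident page must be written back to disk before the load can proceed, exactly the swap-out step the slides show.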

Page Mapping Hardware (1)

[Figure: a virtual address (P,D), page number P and displacement D, is translated through the page table entry P→F into the physical address (F,D). The displacement D passes through unchanged, so Contents(P,D) in virtual memory equals Contents(F,D) in physical memory.]

Page Mapping Hardware (2)

[Figure: a worked example with page size 1000, 1000 possible virtual pages, and 8 page frames. Virtual address 004006 (page 004, displacement 006) is translated through the page table entry 4→5 into physical address 005006, so Contents(4006) = Contents(5006).]
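The arithmetic behind the example can be sketched directly, assuming the slide's decimal page size of 1000 (so page number = address // 1000 and displacement = address % 1000); `translate` is an illustrative name:

```python
# Translate a virtual address using the slide's page size of 1000.
def translate(vaddr, page_table, page_size=1000):
    page, offset = divmod(vaddr, page_size)
    frame = page_table[page]          # KeyError here would be a page fault
    return frame * page_size + offset

page_table = {4: 5}                   # page 4 -> frame 5, as on the slide
print(translate(4006, page_table))    # 5006
```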

Paging Issues

•  Page size is 2^n
  –  Usually 512 bytes, 1 KB, 2 KB, 4 KB, or 8 KB
  –  E.g. a 32-bit VM address may have 2^20 (1M) pages with 4 KB (2^12 bytes) per page
•  Page table:
  –  2^20 page entries take 2^22 bytes (4 MB) at 4 bytes per entry
  –  Page frames must map into real memory
  –  The page table base register must be changed on a context switch
•  No external fragmentation; internal fragmentation on the last page only
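The page-table sizing above is simple arithmetic; a quick check of the slide's 32-bit example with 4 KB pages and 4-byte PTEs:

```python
# Back-of-the-envelope page-table sizing for a 32-bit address space.
address_bits = 32
page_size = 4 * 1024          # 2**12 bytes per page
pte_size = 4                  # bytes per page-table entry

num_pages = 2**address_bits // page_size
table_bytes = num_pages * pte_size
print(num_pages)              # 1048576 pages (2**20)
print(table_bytes)            # 4194304 bytes = 4 MB (2**22)
```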

Outline

•  Virtual Memory
•  Paging
•  Page Table
•  Page Fault
•  TLB
•  Inverted Page Table
•  Sharing and Protection

Multilevel Page Tables

•  Since the page table can be very large, one solution is to page the page table itself
•  Divide the page number into
  –  An index into a top-level table of second-level page tables
  –  A page number within a second-level page table
•  Advantage
  –  No need to keep all the page tables in memory all the time
  –  Only the mappings for recently accessed memory need to be kept in memory; the rest can be fetched on demand

Multilevel Page Tables

[Figure: the virtual address is split into dir, table, and offset fields; the dir field indexes a directory whose entries point to second-level page tables containing the PTEs.]

What does this buy us? Sparse address spaces and easier paging.

Example of 2-Level Page Table (1)

•  A logical address (on 32-bit x86 with 4 KB page size) is divided into
  –  A page number consisting of 20 bits (too many PTEs to fit in one page, so the table itself is paged)
  –  A page offset consisting of 12 bits
•  Divide the page number into
  –  A 10-bit page-table page number (p1) (with 4-byte PTEs, one page holds 2^10 entries)
  –  A 10-bit page-table offset (p2)

[Address layout: p1 (10 bits) | p2 (10 bits) | d (12 bits)]
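Extracting the 10/10/12 fields is just masking and shifting; a short sketch of the split described above (`split` is an illustrative name):

```python
# Split a 32-bit virtual address into (p1, p2, d) for 10/10/12 fields.
def split(vaddr):
    d = vaddr & 0xFFF            # low 12 bits: page offset
    p2 = (vaddr >> 12) & 0x3FF   # next 10 bits: second-level index
    p1 = (vaddr >> 22) & 0x3FF   # top 10 bits: directory index
    return p1, p2, d

print(split(0x00403004))  # (1, 3, 4)
```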

Example of 2-Level Page Table (2)

[Figure: p1 indexes the directory, whose entry points to one of the second-level page tables; p2 indexes that table to find the PTE.]

Multilevel Paging and Performance

•  Since each level is stored as a separate table in memory, a memory reference with a three-level page table takes at least four memory accesses. Why? One access for each of the three page-table levels, plus one for the data itself.

A Typical Page Table Entry

[Figure: a PTE holds the page frame number plus control bits: present/absent, protection, modified, referenced, and caching disabled.]

Outline

•  Virtual Memory
•  Paging
•  Page Table
•  Page Fault
•  TLB
•  Inverted Page Table
•  Sharing and Protection

Page Fault

•  Access a virtual page that is not mapped into any physical page
  –  The fault is triggered by hardware
•  Page fault handler (in the OS's VM subsystem)
  –  Check whether any free physical page is available
    •  If not, evict some resident page to disk (swap space)
  –  Allocate a free physical page
  –  Load the faulted virtual page into the prepared physical page
  –  Modify the page table
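The handler steps above can be sketched in miniature. This is a toy model, not a real OS path: the "disk" is a dict, the victim choice is arbitrary, and all names are illustrative.

```python
# Toy page-fault handler following the slide's steps:
# 1) find a free frame (evicting a victim to "disk" if none),
# 2) load the faulted page into the frame, 3) update the page table.
def handle_page_fault(page, page_table, free_frames, memory, disk):
    if not free_frames:
        victim, frame = page_table.popitem()  # evict some resident page
        disk[victim] = memory[frame]          # write it to swap space
    else:
        frame = free_frames.pop()             # allocate a free frame
    memory[frame] = disk.pop(page)            # load the faulted page
    page_table[page] = frame                  # update the page table
    return frame

disk = {7: "page7", 2: "page2"}
memory, free_frames, page_table = {}, [0], {}
handle_page_fault(7, page_table, free_frames, memory, disk)
handle_page_fault(2, page_table, free_frames, memory, disk)  # evicts 7
print(page_table, disk)
```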

Virtual-to-Physical Lookups

•  Programs only know virtual addresses
  –  The page table can be extremely large
•  Each virtual address must be translated
  –  May involve walking a hierarchical page table
  –  The page table is stored in memory
  –  So each program memory reference requires several actual memory accesses
•  Page table access has temporal locality
•  Solution: cache the "active" part of the page table
  –  The TLB is an "associative memory"

Outline

•  Virtual Memory
•  Paging
•  Page Table
•  Page Fault
•  TLB
•  Inverted Page Table
•  Sharing and Protection

Translation Lookaside Buffer (TLB)

[Figure: the virtual page number of the virtual address is compared against all TLB entries at once. A hit yields the physical page number, which is concatenated with the offset to form the physical address; a miss falls back to the real page table.]

TLB Function

•  When a virtual address is presented to the MMU, the hardware checks the TLB by comparing all entries simultaneously (in parallel)
•  If there is a valid match, the frame is taken from the TLB without going through the page table
•  If there is no match
  –  The MMU detects the miss and does an ordinary page table lookup
  –  It then evicts one entry from the TLB and replaces it with the new one, so that next time the page is found in the TLB
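The hit/miss behavior just described can be modeled with a small dict standing in for the TLB; the 4-entry size and the `lookup` name are illustrative, and the evicted entry is chosen arbitrarily rather than by a real replacement policy.

```python
# Minimal TLB hit/miss model: on a miss, walk the page table and
# install the mapping, evicting some entry if the TLB is full.
TLB_SIZE = 4

def lookup(vpage, tlb, page_table, stats):
    if vpage in tlb:
        stats["hits"] += 1
        return tlb[vpage]
    stats["misses"] += 1
    frame = page_table[vpage]        # ordinary page-table lookup
    if len(tlb) >= TLB_SIZE:
        tlb.pop(next(iter(tlb)))     # evict some entry to make room
    tlb[vpage] = frame               # next time this page hits
    return frame

page_table = {p: p + 100 for p in range(8)}
tlb, stats = {}, {"hits": 0, "misses": 0}
for p in [1, 2, 1, 1, 3]:
    lookup(p, tlb, page_table, stats)
print(stats)  # first references to 1, 2, 3 miss; repeats of 1 hit
```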

Page Mapping Hardware (Modified)

[Figure: the associative lookup is checked first with page number P; only on a miss is the full page table used for the P→F translation, after which the physical address (F,D) is formed as before.]

Page Mapping Example (1)-(2)

[Figure pair: the associative lookup holds recently used (page, frame) pairs, and the table is organized by LRU. In example (1), virtual address 004006 misses in the associative lookup, so the page table supplies the mapping 004→009, giving physical address 009006. In example (2), the pair 4→9 has been installed in the associative lookup, so the same reference now hits.]

Bits in a TLB Entry

•  Common (necessary) bits
  –  Virtual page number: matched against the virtual address
  –  Physical page number: the translated address
  –  Valid
  –  Protection bits: read, write, execute
•  Optional (useful) bits
  –  Process tag
  –  Reference
  –  Modify
  –  Cacheable
•  Include part of the PTE
  –  Example: x86

Implementation Issues

•  The TLB can be implemented using
  –  Associative registers
  –  Content-addressable (look-aside) memory
•  TLB hit ratio (page address cache hit ratio)
  –  The percentage of the time a page translation is found in the associative memory

Hardware-Controlled TLB

•  On a TLB miss (different from a page fault)
  –  Hardware walks the page tables and generates a fault if the page containing the PTE is not present
  –  VM software performs the fault handling
  –  Hardware loads the PTE into the TLB
    •  Needs to write back if there is no free entry
  –  Restarts the faulting instruction if needed
•  On a TLB hit, hardware checks the protection bits
  –  If no violation, it yields the pointer to the page frame in memory
  –  If violated, the hardware generates a protection fault
•  Examples: IA-32, IA-64

Software-Controlled TLB

•  On a TLB miss, a "TLB miss" exception is raised and VM software
  –  Walks the page tables
  –  Checks whether the page containing the PTE is in memory
  –  If not, performs page fault handling
  –  Loads the PTE into the TLB
    •  Writes back if there is no free entry
  –  Restarts the faulting instruction if needed
•  On a TLB hit, the hardware checks the protection bits
  –  If no violation, it yields the pointer to the page frame in memory
  –  If violated, the hardware generates a protection fault
•  Examples: SPARC, MIPS, Alpha, HP-PA, PowerPC

Hardware vs. Software Controlled

•  Hardware approach
  –  Efficient
  –  Inflexible
  –  Needs more space for the page table
•  Software approach
  –  Flexible
  –  Simpler MMU, leaving more area on the CPU chip
  –  Software can do the mappings by hashing
    •  PP# → (Pid, VP#)
    •  (Pid, VP#) → PP#
  –  Can deal with a large virtual address space

Issues

•  Which TLB entry should be replaced?
  –  Random
  –  Pseudo least recently used (LRU)
•  What happens on a context switch?
  –  With a process tag: change the TLB registers and process register
  –  Without a process tag: invalidate the entire TLB contents
•  What happens when a page table entry is changed?
  –  Change the entry in memory
  –  Invalidate the TLB entry
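LRU replacement for a small TLB can be sketched with an `OrderedDict`. Note this implements true LRU for clarity; real hardware typically approximates it with pseudo-LRU, and the class name and sizes are illustrative.

```python
# True-LRU TLB replacement: a hit refreshes the entry's recency,
# and the least recently used entry is evicted when the TLB is full.
from collections import OrderedDict

class LruTlb:
    def __init__(self, size):
        self.size = size
        self.entries = OrderedDict()   # vpage -> frame, LRU-first order

    def access(self, vpage, page_table):
        if vpage in self.entries:
            self.entries.move_to_end(vpage)       # refresh LRU order
            return self.entries[vpage]
        if len(self.entries) >= self.size:
            self.entries.popitem(last=False)      # evict LRU entry
        self.entries[vpage] = page_table[vpage]
        return self.entries[vpage]

tlb = LruTlb(2)
pt = {1: 10, 2: 20, 3: 30}
tlb.access(1, pt); tlb.access(2, pt); tlb.access(1, pt); tlb.access(3, pt)
print(list(tlb.entries))  # [1, 3]: page 2 was least recently used
```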

Effective Access Time

•  TLB lookup time = ε
•  Memory cycle time = m (same time unit as ε)
•  TLB hit ratio = ρ
•  Effective access time (2-level page table)
  –  T_ea = (m + ε)ρ + (3m + ε)(1 − ρ) = 3m + ε − 2mρ
  –  A hit costs the TLB lookup plus one memory access; a miss adds two more accesses to walk the two page-table levels
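Plugging sample numbers into the formula makes the payoff of a high hit ratio concrete. The values below (100 ns memory cycle, 10 ns TLB lookup, 98% hit ratio) are illustrative, not from the slides:

```python
# Effective access time for a 2-level page table:
# T_ea = (m + eps)*rho + (3m + eps)*(1 - rho) = 3m + eps - 2*m*rho
def effective_access_time(m, eps, rho):
    return (m + eps) * rho + (3 * m + eps) * (1 - rho)

m, eps, rho = 100.0, 10.0, 0.98
t = effective_access_time(m, eps, rho)
print(t)  # 114.0: close to the 110.0 cost of a pure hit
```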

Outline

•  Virtual Memory
•  Paging
•  Page Table
•  Page Fault
•  TLB
•  Inverted Page Table
•  Sharing and Protection

Inverted Page Table (1)

•  Main idea
  –  One PTE for each physical page frame
  –  Hash (pid, vpage) to a physical page number
  –  Trade time for space
•  Pros
  –  Small page table for a large virtual address space
•  Cons
  –  Lookup is difficult
  –  Overhead of managing hash chains, etc.

[Figure: the (pid, vpage) pair from the virtual address is matched against the inverted page table, whose entries are indexed 0 to n−1 by frame number; the index k of the matching entry becomes the frame number, giving physical address (k, offset).]
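The hash-and-search lookup can be sketched as follows. This is a toy: the 8-frame size, the bucket scheme standing in for the hash chains, and the function names are all illustrative.

```python
# Toy inverted page table: one entry per physical frame, found by
# hashing (pid, vpage) into buckets of candidate frame numbers.
NUM_FRAMES = 8

frames = [None] * NUM_FRAMES          # frame k holds its (pid, vpage)
buckets = {}                          # hash value -> list of frames

def insert(pid, vpage, frame):
    frames[frame] = (pid, vpage)
    buckets.setdefault(hash((pid, vpage)) % NUM_FRAMES, []).append(frame)

def translate(pid, vpage, offset, page_size=4096):
    for k in buckets.get(hash((pid, vpage)) % NUM_FRAMES, []):
        if frames[k] == (pid, vpage):         # walk the hash chain
            return k * page_size + offset     # frame number is the index
    raise KeyError("page fault")              # not resident

insert(pid=7, vpage=4, frame=5)
print(translate(7, 4, 6))  # frame 5, offset 6 -> 20486
```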

Inverted Page Table (2)

[Figure: for virtual address 004006, searching the inverted page table finds page 4 at index 5, so the physical address is 005006.]

Inverted Page Table Implementation

•  The TLB is the same as before
•  A TLB miss is handled by software
•  The in-memory page table is managed using a hash table from virtual page to physical page
  –  Number of entries ≥ number of physical frames
  –  Not found: page fault

Outline

•  Virtual Memory
•  Paging
•  Page Table
•  Page Fault
•  TLB
•  Inverted Page Table
•  Sharing and Protection

Sharing Pages

•  Code and data can be shared among processes
  –  By mapping them into pages with common page frame mappings
•  Code and data must be position independent if the VM mappings for the shared data are different
•  Shared code and data that are not position independent result in certain regions of memory being reserved for shared libraries and the operating system

Shared Pages

[Figure: processes whose page tables map shared pages to the same physical page frames.]

Protection (1)

•  Read, write, and execute protection bits can be added to the page table to protect memory
•  The check is done by hardware during access
•  A shared memory location may need different protections for different processes
  –  Solution: associate a protection lock with each page frame; each process has its own keys. If a key fits the lock, the process may access the page frame

Protection (2)

•  Typically many different keys can fit a lock, using a priority numbering scheme
  –  E.g. key 3 fits all of locks 3, 7, 15
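One plausible encoding of this priority scheme (an assumption; the slides do not give the bit-level rule) is that keys and locks take the all-ones values 2^n − 1, and a key fits every lock whose bits cover the key's bits. This reproduces the "key 3 fits locks 3, 7, 15" example:

```python
# Hypothetical bitmask reading of the keys-and-locks priority scheme:
# a key fits a lock if every bit set in the key is also set in the lock.
def fits(key, lock):
    return lock & key == key

locks = [2**n - 1 for n in range(1, 5)]   # [1, 3, 7, 15]
print([lock for lock in locks if fits(3, lock)])  # [3, 7, 15]
```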

Keys and Locks

[Figure: keys held by process A and process B, matched against the locks on the page frames they may access.]

Summary

•  Virtual Memory
•  Paging
•  Page Table
•  Page Fault
•  TLB
•  Inverted Page Table
•  Sharing and Protection