Operating Systems Part III-Memory Management

Transcript of Operating Systems Part III-Memory Management

Page 1: Operating Systems Part III-Memory Management

Operating Systems

Memory Management

Ajit K Nayak, Ph.D.

SOA University

Page 2: Operating Systems Part III-Memory Management

Memory Management

Program must be brought (from disk) into memory and placed within a process for it to be run.

Main memory and registers are the only storage that the CPU can access directly.

Memory unit only sees a stream of addresses

i.e. it does not understand how these are generated, or what

they are for (instruction or data)

Register access in one CPU clock (or less)

Main memory can take many cycles, causing a stall

Cache sits between main memory and CPU registers

Protection of memory is required to ensure correct operation.

Page 3: Operating Systems Part III-Memory Management

Base and Limit Registers

A pair of base and limit registers define the logical address space of a process.

The CPU must check every memory access generated in user mode to be sure it is between base and limit for that user process.

[Diagram: for each CPU-generated address, the hardware checks address >= BASE and address < BASE + LIMIT; if either check fails, it traps to the OS with an addressing error, otherwise the access proceeds to memory.]
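The hardware check described above can be sketched in a few lines of Python (the base and limit values are illustrative, not from the slides):

```python
def check_access(addr, base, limit):
    """Simulate the hardware base/limit check: the access is legal only if
    base <= addr < base + limit; otherwise the CPU traps to the OS."""
    if base <= addr < base + limit:
        return "memory access OK"
    return "trap to OS: addressing error"

print(check_access(300040, base=300040, limit=120900))  # first legal address
print(check_access(420939, base=300040, limit=120900))  # last legal address
print(check_access(420940, base=300040, limit=120900))  # one past the end: trap
```

Note that the comparison against the limit uses a strict inequality: BASE + LIMIT itself is the first address outside the process.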

Page 4: Operating Systems Part III-Memory Management

Address Binding

Addresses are represented in different ways at different stages of a program's life.

Source code addresses are usually symbolic (a variable name).

The compiler binds these symbolic addresses to relocatable addresses,

i.e. "14 bytes from the beginning of this module".

The linker or loader will bind relocatable addresses to absolute addresses,

i.e. 74014.

Each binding is a mapping from one address space to another.

Page 5: Operating Systems Part III-Memory Management

Binding to Memory Address

Address binding of instructions and data to memory addresses can happen at three different stages.

Compile time: If the memory location is known a priori, absolute code can be generated; otherwise relocatable code must be generated.

The code must be recompiled if the starting location changes.

Load time: If relocatable code is generated at compile time, then final binding is delayed till load time.

If the starting location changes, the program only needs to be reloaded.

Execution time: Binding delayed until run time if the process

can be moved during its execution from one memory segment

to another

Need hardware support for address maps (e.g., base and limit

registers)

Page 6: Operating Systems Part III-Memory Management

Logical vs. Physical Address Space

The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.

Logical address – generated by the CPU; also referred to as

virtual address

Physical address – address seen by the memory unit

Logical and physical addresses are

the same in compile-time and load-time address-binding schemes;

differ in execution-time address-binding scheme

Logical address space is the set of all logical addresses

generated by a program

Physical address space is the set of all physical addresses corresponding to these logical addresses.

Page 7: Operating Systems Part III-Memory Management

Memory-Management Unit (MMU)

A hardware device that maps logical to physical addresses at run time.

The value in the relocation register is added to every

address generated by a user process at the time it is

sent to memory

Base register now called relocation register

The user program deals with logical addresses; it never sees the real physical addresses.

Execution-time binding occurs when a reference is made to a location in memory.

Page 8: Operating Systems Part III-Memory Management

Swapping - I

Total physical memory space of processes can exceed physical memory. A process may be swapped temporarily out of memory to a backing store, and then swapped back into memory for continued execution.

Example: processes that have finished their time quantum in RR scheduling are swapped out;

a lower-priority process is swapped out so a higher-priority process can be loaded and executed (roll out, roll in).

Backing store – fast disk large enough to accommodate copies of all memory images for all users.

System maintains a ready queue of ready-to-run processes which have memory images on disk

Page 9: Operating Systems Part III-Memory Management

Swapping - II

Whenever the CPU scheduler decides to execute a process, it calls the dispatcher.

The dispatcher checks if the next process in the queue is available in memory.

If it is not, and if there is no free memory region, the dispatcher

swaps out a process currently in memory and swaps in the

desired process.

Major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped.

Example: user process: 100MB, transfer rate of the disk: 50MB/s

Transfer time = 2sec

Assuming avg. latency of 8 ms, total time for swap in and swap out would be 4016ms
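The slide's arithmetic can be checked with a small Python sketch (values taken from the example above):

```python
def swap_time_ms(size_mb, rate_mb_per_s, latency_ms):
    """Total time for one swap-out plus one swap-in, in milliseconds."""
    transfer_ms = size_mb / rate_mb_per_s * 1000  # one transfer of the image
    return 2 * (transfer_ms + latency_ms)         # swap out + swap in

print(swap_time_ms(100, 50, 8))  # 4016.0 ms, matching the slide
```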

Page 10: Operating Systems Part III-Memory Management

Contiguous Allocation

Main memory is divided into two partitions:

Resident operating system, usually held in low

memory with interrupt vector.

User processes held in high memory.

Each process is contained in a single contiguous section of memory.

Single-partition allocation

Relocation-register scheme used to protect user

processes from each other, and from changing

operating-system code and data.

Relocation register contains value of smallest

physical address; limit register contains range of

logical addresses – each logical address must be

less than the limit register.

Page 11: Operating Systems Part III-Memory Management


Hardware Support for Relocation and Limit Registers

Page 12: Operating Systems Part III-Memory Management

Multiple-Partition Allocation

Fixed-sized partitions:

Each partition contains one process, so degree of multi-

programming limited by number of partitions.

No longer used today

Variable-sized partitions

Operating system keeps a table indicating which parts of

memory are available and which are occupied.

Available memory is represented as a set of holes of various

sizes, scattered throughout the memory.

When a process arrives and needs memory, the system searches the set for a hole that is large enough for this process.

If the hole is too large, it is split into two parts. One part is

allocated to the arriving process; the other is returned to the set of holes.

When a process terminates, it releases its block of memory,

which is then placed back in the set of holes.

Page 13: Operating Systems Part III-Memory Management

Dynamic Storage-Allocation

How to satisfy a request of size n from a list of free holes?

First-fit: Allocate the first hole that is big enough

Search starts from the beginning or at the location where the previous first-fit search ended.

Best-fit: Allocate the smallest hole that is big enough; must search entire list, unless ordered by size

Produces the smallest leftover hole

Worst-fit: Allocate the largest hole; must also search entire list

Produces the largest leftover hole

First-fit and best-fit are better than worst-fit in terms of speed and storage utilization,

and first-fit is generally faster than best-fit.
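The three strategies can be sketched in Python (the hole sizes are illustrative, not from the slides):

```python
def allocate(holes, n, strategy):
    """Pick a hole index for a request of size n from the list `holes`.
    Returns the index of the chosen hole, or None if no hole fits."""
    fits = [i for i, h in enumerate(holes) if h >= n]
    if not fits:
        return None
    if strategy == "first":
        return fits[0]                            # first hole big enough
    if strategy == "best":
        return min(fits, key=lambda i: holes[i])  # smallest adequate hole
    if strategy == "worst":
        return max(fits, key=lambda i: holes[i])  # largest hole

holes = [100, 500, 200, 300, 600]
print(allocate(holes, 212, "first"))  # index 1 (the 500-byte hole)
print(allocate(holes, 212, "best"))   # index 3 (the 300-byte hole)
print(allocate(holes, 212, "worst"))  # index 4 (the 600-byte hole)
```

A real allocator would then split the chosen hole, keeping the leftover in the free list, as described on the previous slide.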

Page 14: Operating Systems Part III-Memory Management

Fragmentation - I

External fragmentation is a situation where storage is fragmented into a large number of small holes under the first-fit and best-fit allocation schemes,

i.e. the total memory space of all holes combined may satisfy a request, but it is not contiguous.

Statistical analysis of first fit reveals that, given N allocated blocks, another 0.5 N blocks will be lost to fragmentation,

i.e. one-third of memory may be unusable (the 50-percent rule).

Example: a hole is 18,464 bytes and a process requests 18,462 bytes. This leaves a 2-byte hole; the overhead of keeping track of this hole is substantially larger than the hole itself.

To avoid this problem, memory is partitioned into fixed-sized blocks and allocated in units based on the block size.

Page 15: Operating Systems Part III-Memory Management

Fragmentation - II

Allocated memory may be slightly larger than requested memory; this difference is called internal fragmentation,

i.e. unused memory that is internal to a partition.

Compaction: a solution to external fragmentation

Shuffle memory contents to place all free memory together in

one large block

Compaction is possible only if relocation is dynamic, and is

done at execution time.

It is expensive

Other solutions to external fragmentation

permit the logical address space of the processes to be non-contiguous, thus allowing a process to be allocated physical

memory wherever such memory is available.

Two methods: paging and segmentation.

Page 16: Operating Systems Part III-Memory Management

Paging - I

Paging is a technique by which non-contiguous memory may be allocated to processes.

It avoids external fragmentation, so compaction is not required.

It avoids the problem of varying-sized memory chunks.

Physical memory is partitioned into fixed-sized blocks called frames

Frame Size: 512 bytes to 16 Mbytes, power of 2

Logical memory is partitioned into blocks of same size called pages

Size is same as frame size

The backing store is divided into fixed-sized blocks that are of the

same size as the memory frames.

When a process is to be executed, its pages are loaded into any available memory frames from the backing store.

The page-to-frame mapping is tracked in a page table; the page number is used as an index into the page table.

Page 17: Operating Systems Part III-Memory Management

Address Translation

Each logical address contains two parts:

a page number (p) and

an offset (d).

Let the page size be 2^n bytes.

Let the logical address space be 2^m bytes, i.e. one logical address contains m bits.

Number of pages possible = 2^(m-n), i.e. m - n bits are required to represent a page number,

and the offset requires n bits.

Therefore, high order m-n bits of a logical address

designate the page number, and the n low-order bits

designate the page offset.
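The bit-level split described above can be sketched with simple shifts and masks (the example address is illustrative):

```python
def split_address(addr, n):
    """Split a logical address into page number p (the high-order bits)
    and offset d (the low-order n bits), for a page size of 2^n."""
    p = addr >> n               # drop the n offset bits
    d = addr & ((1 << n) - 1)   # keep only the low-order n bits
    return p, d

print(split_address(0b1101_0110, 4))  # page 13, offset 6 for n = 4
```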

Page 18: Operating Systems Part III-Memory Management

Paging Model of Logical and Physical Memory

Paging Hardware

[Figure: a logical address of m bits is split into a page number p (the high-order m - n bits) and a page offset d (the low-order n bits).]

Page 19: Operating Systems Part III-Memory Management

Example

Logical address space: 16 bytes

Page size: 4 bytes

Physical memory: 32 bytes

n = ? bits (page size 4 = 2^2, so n = 2)

m = ? bits (address space 16 = 2^4, so m = 4)

How many pages? Frames? (4 pages, 8 frames)

Logical address 0 maps to which physical address ?

p = 0, d = 0 => 5×4 + 0 =20

Logical address 9 maps to?

p = 2, d = 1 => 1 × 4 + 1 = 5

Logical address 6 maps to?

p = 1, d = 2 => 6 × 4 + 2 = 26
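The arithmetic above implies a page table mapping page 0 to frame 5, page 1 to frame 6, page 2 to frame 1, and page 3 to frame 2 (the figure itself is missing from this transcript). A short Python sketch reproduces all three translations:

```python
PAGE_SIZE = 4               # 2^2 bytes, so n = 2
page_table = [5, 6, 1, 2]   # page -> frame, as implied by the slide's arithmetic

def translate(logical):
    """Translate a logical address to a physical address via the page table."""
    p, d = divmod(logical, PAGE_SIZE)   # page number and offset
    return page_table[p] * PAGE_SIZE + d

print(translate(0))  # 20  (p = 0, d = 0 => 5 x 4 + 0)
print(translate(9))  # 5   (p = 2, d = 1 => 1 x 4 + 1)
print(translate(6))  # 26  (p = 1, d = 2 => 6 x 4 + 2)
```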

Page 20: Operating Systems Part III-Memory Management

Paging (Cont.)

Paging avoids external fragmentation; however, it suffers from internal fragmentation:

if the memory requirement doesn't coincide with page boundaries, the last frame may not be completely full.

Example: page size = 2,048 bytes and process size = 72,766 bytes.

Frames required = 36: 35 complete frames + 1,086 bytes.

Internal fragmentation = 2,048 - 1,086 = 962 bytes.

Worst case: 1 byte in the last frame
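The frame count and internal fragmentation in the example can be computed directly:

```python
import math

def frames_and_frag(process_bytes, page_size):
    """Frames needed for a process, and the internal fragmentation
    (unused space) in its last frame."""
    frames = math.ceil(process_bytes / page_size)
    used_in_last = process_bytes - (frames - 1) * page_size
    return frames, page_size - used_in_last

print(frames_and_frag(72766, 2048))  # (36, 962), matching the slide
```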

OS needs to keep track of frames

A frame table is used for this purpose.

One entry for each physical page frame, indicating whether the frame is free or, if allocated, to which page of which process.

Page 21: Operating Systems Part III-Memory Management

Implementation of Page Table - I

Hardware implementation

Implemented as a set of dedicated registers (paging map)

Every access to memory goes through paging map

Suitable for small page tables (e.g. 256 entries)

But not suitable for modern computers, as it requires large

page tables (e.g. 1 million entries)

Page Table Base Register (PTBR)

Page table is kept in the main memory

PTBR points to page table

But in this method, two memory accesses are required to access a byte:

one for the page-table entry, and then one for the byte itself.

This delay would be intolerable

Page 22: Operating Systems Part III-Memory Management

Implementation of Page Table - II

Translation Look-aside Buffer (TLB): a special, small, fast-lookup associative, high-speed hardware cache.

Each entry in the TLB consists of two parts, a key and a value, used to store a page-to-frame-number mapping.

The search is fast, as an item can be compared with

all keys simultaneously.

The TLB contains only a few of the page-table entries.

(64 – 1024 entries)

TLB Hit: page number is available in TLB.

When asked by CPU, it gets the frame number immediately.

TLB Miss: page number not available in TLB

frame number is obtained from page table and then memory

is accessed

Page 23: Operating Systems Part III-Memory Management

Translation Look-aside Buffers - I

The mapping just resolved (on a miss) is then added to the TLB.

If the TLB is already full of entries, one existing entry is selected

for replacement using a replacement policy.

Certain entries in TLBs are wired down,

i.e. they cannot be removed from the TLB (e.g. entries for kernel code).

Some TLBs store an Address-Space Identifier (ASID) in each TLB entry.

An ASID uniquely identifies each process and its address space.

The ASID is also associated with the virtual page;

while resolving virtual page numbers, if the two ASIDs do not match, it is treated as a TLB miss.

It allows TLBs to store entries for different processes

simultaneously.

Without ASID, TLBs need to be flushed in each context switch,

i.e. when a new page table is selected.

Page 24: Operating Systems Part III-Memory Management


Paging Hardware With TLB

Page 25: Operating Systems Part III-Memory Management

Effective Access Time

Associative lookup time (ε): time required to search the TLB;

normally less than 10% of the memory access time.

TLB hit ratio (α): percentage of times that a page number is found in the TLB.

Example:

Consider α = 80%, ε = 20 ns for the TLB search, and 100 ns for a memory access.

TLB hit: associative lookup 20 ns + memory access 100 ns = 120 ns.

TLB miss: associative lookup 20 ns + two memory accesses 200 ns = 220 ns.

EAT = 0.80 x 120 + 0.20 x 220 = 140 ns.

Slowdown in memory access time = (140 - 100)/100 = 40%.

Exercise: find the EAT and percentage slowdown for α = 98% in the above problem.
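A short Python sketch computes the EAT for both hit ratios, answering the exercise (α = 98% gives 122 ns, a 22% slowdown):

```python
def eat(alpha, eps_ns=20, mem_ns=100):
    """Effective access time with a TLB: weighted average of hit and miss."""
    hit = eps_ns + mem_ns        # one memory access on a TLB hit
    miss = eps_ns + 2 * mem_ns   # page-table access + actual access on a miss
    return alpha * hit + (1 - alpha) * miss

for alpha in (0.80, 0.98):
    t = eat(alpha)
    print(f"alpha={alpha}: EAT={t} ns, slowdown={(t - 100) / 100:.0%}")
```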

Page 26: Operating Systems Part III-Memory Management

Shared Pages - I

Paging makes it possible to share common code.

If the code is reentrant (pure code), then it can be shared

A computer program is called reentrant, if it can be interrupted in the

middle of its execution, and then be safely called again ("re-entered")

before its previous invocations complete execution.

It is also called non-self-modifying code (read only)

Example:

A text editor contains 150 KB of code and 50 KB of data; 40 users use it simultaneously, with a page size of 50 KB.

Non-shared mode:

Memory required? Number of pages/frames?

8000KB needed to support all users, 160 pages

Shared mode

Editor code may be shared, data kept different

2150 KB to support all users, 43 pages
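The two totals above follow from simple arithmetic, which a few lines of Python confirm:

```python
PAGE_KB = 50
code_kb, data_kb, users = 150, 50, 40

# Non-shared: every user carries a private copy of code and data.
non_shared_kb = users * (code_kb + data_kb)
# Shared: one copy of the reentrant code, private data per user.
shared_kb = code_kb + users * data_kb

print(non_shared_kb, non_shared_kb // PAGE_KB)  # 8000 KB, 160 pages
print(shared_kb, shared_kb // PAGE_KB)          # 2150 KB, 43 pages
```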

Page 27: Operating Systems Part III-Memory Management

Shared Pages - II

Shared code

One copy of read-only

(reentrant) code shared among

processes, like text editors,

compilers, window systems, run-

time libraries, database systems,

etc.

Also useful for shared memory

interprocess communication

Private code and data

Each process keeps a separate

copy of the code and data

The pages for the private code

and data can appear

anywhere in the logical address

space

Page 28: Operating Systems Part III-Memory Management

Segmentation

A memory-management scheme that supports the user view of memory.

A program is a collection of segments

A segment is a logical unit such as:

main program

procedure

function

method

object

local variables, global variables

common block

stack

symbol table

arrays

Page 29: Operating Systems Part III-Memory Management

Segmentation Architecture

A logical (user) address consists of a two-tuple:

<segment-number, offset>

Segment table – maps 2-D logical address into 1-D physical addresses; each table entry has:

Segment base – starting physical address

Segment limit – length of the segment

Example: a logical address space with segments such as the main program, a subroutine, the stack, and the symbol table.

Segment table:

Seg  Limit  Base
0    1000   1400
1    1100   6000
2    400    3600
3    400    3200

Physical memory layout: Seg 0 occupies 1400-2400, Seg 3 occupies 3200-3600, Seg 2 occupies 3600-4000, and Seg 1 occupies 6000-7100.
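The base/limit translation the segmentation hardware performs can be sketched using the segment table above (the example offsets are illustrative):

```python
# Segment table from the slide: segment number -> (limit, base)
seg_table = {0: (1000, 1400), 1: (1100, 6000), 2: (400, 3600), 3: (400, 3200)}

def translate(seg, offset):
    """Map <segment-number, offset> to a physical address,
    trapping if the offset is not within the segment limit."""
    limit, base = seg_table[seg]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(translate(2, 53))   # 3653
print(translate(0, 999))  # 2399, the last byte of segment 0
# translate(3, 400) would trap: the offset equals the limit
```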

Page 30: Operating Systems Part III-Memory Management


Segmentation Hardware

Page 31: Operating Systems Part III-Memory Management

Virtual Memory - I

To execute a program, the program needs to be in physical memory.

Unfortunately, the program may be larger than the total physical memory

(e.g. the database of a bank, IRCTC, etc.).

Virtual memory is a technique that allows the execution of processes that are not completely in memory.

It separates user logical memory from physical memory.

Only part of the program needs to be in memory for

execution.

Logical address space can therefore be much larger than

physical address space.

Page 32: Operating Systems Part III-Memory Management

Virtual Memory - II

Since each user program can take less physical memory, more programs can be run at the same time,

increasing CPU utilization and throughput with no increase in response time or turnaround time.

Virtual memory can be implemented via:

Demand paging

Demand segmentation

Page 33: Operating Systems Part III-Memory Management

Demand Paging

A page is loaded into memory only when it is needed.

A pager is used to swap pages into or out of physical memory.

Swap time is decreased (as part of the process gets loaded)

Less physical memory needed, Faster response, More users

Valid-Invalid bit in page table

One more bit is associated with page table to distinguish

between the pages in memory and pages in disk.

Page available in memory => valid;

not in memory => invalid.

Access to a page marked valid => no problem.

A page marked invalid that is never accessed => no problem.

Access to a page marked invalid => page fault.

Page 34: Operating Systems Part III-Memory Management


Page Table

Page 35: Operating Systems Part III-Memory Management

Handling Page Faults

Check an internal table (kept with the PCB) for this process

to determine whether the reference was a valid or an

invalid memory access.

If the reference was invalid, terminate the process.

If it was valid, now page it in.

Find a free frame; read the desired page into the newly allocated frame.

Modify the internal table and the page table to indicate that

the page is now in memory.

Restart the instruction that was interrupted by the trap.

The process can now access the page.

Page 36: Operating Systems Part III-Memory Management


Steps in Handling a Page Fault

Page 37: Operating Systems Part III-Memory Management

Pure Demand Paging

Start executing a process with no pages in memory.

When the OS sets the instruction pointer to the first

instruction of the process, the process immediately

faults for the page.

After this page is brought into memory, the process

continues to execute, faulting as necessary until every

page that it needs is in memory.

Now it can execute with no more faults.

Page 38: Operating Systems Part III-Memory Management

Performance of Demand Paging

The effective access time (EAT) for a demand-paged memory is

EAT = (1 - p) x ma + p x page-fault time,

where p is the probability of a page fault (0 ≤ p ≤ 1)

and ma is the memory access time.

Example:

page-fault service time = 8 ms,

memory access time = 200 ns, and

one access out of 1,000 causes a page fault;

find the effective access time.

EAT = 0.999 x 200 ns + 0.001 x 8,000,000 ns ≈ 8.2 µs
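The EAT formula above is easy to evaluate directly; a Python sketch using the slide's values:

```python
def eat_ns(p, ma_ns=200, fault_ns=8_000_000):
    """Effective access time for demand paging, in nanoseconds."""
    return (1 - p) * ma_ns + p * fault_ns

print(eat_ns(1 / 1000))  # about 8199.8 ns, i.e. roughly 8.2 microseconds
```

Note that even a 1-in-1,000 fault rate slows memory access by a factor of about 40, which is why keeping the fault rate low matters so much.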

Page 39: Operating Systems Part III-Memory Management

Copy on Write

The fork() system call creates a child process that is a duplicate of its parent,

so the pages required are momentarily doubled.

But most children execute the exec() system call immediately after creation,

i.e. they get a completely new address space, and copying the parent's address space is unnecessary.

Copy on write is a technique in which the parent and children are initially allowed to share the same pages.

If either process writes to a shared page, a copy of the shared

page is created.

only the pages that are modified by either process are copied;

all unmodified pages can be shared by the parent and child

processes.

only pages that can be modified need be marked as copy-on-write.

Pages that cannot be modified can be shared by the parent and

child.

Page 40: Operating Systems Part III-Memory Management


Copy on write

Page 41: Operating Systems Part III-Memory Management

Frame Allocation Alg.

1. Find the location of the desired page on the disk.

2. Find a free frame:

a. If there is a free frame, use it.

b. If there is no free frame, use a page-replacement algorithm

to select a victim frame.

c. Write the victim frame to the disk;

3. Read the desired page into the newly freed frame;

change the page and frame tables.

4. Restart the user process.

If no frames are free, two page transfers are required.

This situation doubles the page-fault service time and

increases the effective access time accordingly.

Page 42: Operating Systems Part III-Memory Management

Page Replacement - I

To reduce the EAT, a modify bit (dirty bit) is associated with each page/frame.

Modify bit set: The page has been modified after last read

from disk to the memory.

Modify bit unset: page not modified after last loading.

If the modify bit is unset for a victim frame, then it is not

required to swap-out the victim page.

Since the I/O is then decreased by half, the EAT is substantially improved.

Two algorithms are required to implement demand

paging.

Frame allocation algorithm

Page replacement algorithm.

Page 43: Operating Systems Part III-Memory Management

Page Replacement - II

The goal is to achieve a low page-fault rate.

Algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string.

A reference string is the sequence of page numbers accessed, in order;

it may be generated artificially (a hypothetical sequence)

or collected from a system trace.

Example reference string

7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

Page Replacement Algorithms

First in First out

Optimal

Least recently used

Page 44: Operating Systems Part III-Memory Management

FIFO Page Replacement Algorithm - I

A time stamp is associated with each page when it is brought into memory.

When a page must be replaced, the oldest one is chosen.

Example 1: 3 frames

There are 15 faults in total

Performance is not always good, because

The replaced page could contain a heavily used variable that was initialized early and is in constant use;

frequent faults may then occur to bring this page back again and again.

Page 45: Operating Systems Part III-Memory Management

FIFO Page Replacement Algorithm - II

Example 2: Apply the FIFO page replacement algorithm with 3 frames to the reference string

1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

Number of faults with 3 frames = 9

Find the same for 4 frames

There are 10 faults

i.e. the number of faults for four frames is greater than the

number of faults for three frames

Belady’s anomaly: for some page-replacement

algorithms, the page-fault rate may increase as the

number of allocated frames increases.

[Figure: FIFO frame contents after each reference, showing 9 faults with 3 frames and 10 faults with 4 frames for this string.]
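Both fault counts, and Belady's anomaly itself, can be reproduced with a short FIFO simulation (a sketch, not from the slides):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement on a reference string."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.remove(queue.popleft())  # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -- more frames, more faults: Belady's anomaly
```

The same function gives 15 faults for the reference string of Example 1 with 3 frames.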

Page 46: Operating Systems Part III-Memory Management

Illustration of Belady's Anomaly

[Figure: number of page faults versus number of frames for FIFO, compared with an ideal curve on which faults never increase as frames are added.]

Page 47: Operating Systems Part III-Memory Management

Optimal Algorithm - I

It has the lowest page-fault rate of all algorithms and will never suffer from Belady's anomaly.

Replace page that will not be used for longest period

of time

Example 3: with 3 frames

There are 9 page faults and no other algorithm can

produce less than these many faults. (optimal)

If we ignore the first three, then optimal replacement is twice

as good as FIFO replacement.

Page 48: Operating Systems Part III-Memory Management

Optimal Algorithm - II

Example 4: Apply the optimal page replacement algorithm with 3 frames to the following reference string:

1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

Number of page faults with 3 frames = 7

Find the same for 4 frames

There are 6 faults

i.e. Optimal algorithm does not suffer from Belady’s anomaly

Unfortunately, the optimal page-replacement algorithm is difficult to implement,

because it requires future knowledge of the reference string.

Therefore, it is used only for comparison studies.

[Figure: optimal-algorithm frame contents after each reference, showing 7 faults with 3 frames and 6 faults with 4 frames.]

Page 49: Operating Systems Part III-Memory Management

Least Recently Used (LRU) Algorithm

Uses past knowledge rather than the future,

i.e. replaces the page that has not been used for the longest time.

The time of last use needs to be associated with each page.

Example 5:

12 faults – better than FIFO but worse than Optimal

Generally good algorithm and frequently used

But how to implement?

Page 50: Operating Systems Part III-Memory Management

LRU Algorithm Implementation

Counter implementation

Every page-table entry has a counter; every time the page is referenced through this entry, the clock is copied into the counter.

When a page needs to be replaced, look at the counters to find the smallest value;

this requires a search through the table to find the LRU page,

and a write to memory (to update the counter) on every memory access.

Stack implementation

Keep a stack of page numbers.

When a page referenced and available in memory (no page

fault), move it to the top of stack

Most recently used page is always on the top

Least recently used page is always at the bottom

Page fault: replace the page at the bottom of the list

Page 51: Operating Systems Part III-Memory Management

Use of a Stack to Record Most Recent Page References

[Figure: stack contents after each page reference; each referenced page is moved to the top, so the most recently used page is on top and the least recently used page is at the bottom.]

The stack is implemented using a doubly linked list;

updating it requires changing 6 pointers in the worst case.

On a page fault there is no search for the replacement victim,

but each update is more expensive.

This approach is appropriate for software or microcode

implementation
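The stack scheme can be sketched in software with an ordered dictionary standing in for the doubly linked list (a sketch, not the slides' own implementation):

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count page faults for LRU replacement, using an OrderedDict as
    the 'stack': the most recently used page is always at the end."""
    stack, faults = OrderedDict(), 0
    for page in refs:
        if page in stack:
            stack.move_to_end(page)        # referenced page moves to the top
        else:
            faults += 1
            if len(stack) == nframes:
                stack.popitem(last=False)  # evict the least recently used
            stack[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # 12, matching Example 5
```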

Page 52: Operating Systems Part III-Memory Management

Counting-Based Page Replacement

Keep a counter of the number of references that have been made to each page and use it for page replacement.

Least Frequently Used (LFU) Algorithm:

replaces the page with the smallest count, since an actively used page should have a large count.

Problem: if a page is used heavily during the initial phase of a process but then never used again, its large count keeps it from being replaced.

Solution: shift the counts right by 1 bit at regular intervals, forming an exponentially decaying average.

Most Frequently Used (MFU) Algorithm:

replaces the page with the largest count, on the argument that the page with the smallest count was probably just brought in and has yet to be used.

Both algorithms are uncommon and expensive to implement, and are therefore rarely used.

Page 53: Operating Systems Part III-Memory Management

Page-Buffering Algorithms

These algorithms are used in addition to the standard page-replacement algorithms.

Free-frame pool (keep a pool of free frames):

On a page fault, choose a victim frame, read the new page into a frame from the free pool, and restart execution.

When convenient, swap out the victim and add its frame to the free pool.

Expansion: maintain a list of modified pages.

Whenever the paging device is idle, write pages with the modify bit set to disk and reset the modify bit (swap-out time is decreased).

Modified free-frame pool:

On a page fault, choose a victim frame, attach this frame to the free-frame list (pool), read the new page into a free frame, and restart execution.

On the next page fault, search the pool first; if the needed page is still there, reuse it (I/O saved).

Page 54: Operating Systems Part III-Memory Management

Thrashing

A process is thrashing if it is spending more time paging than executing.

If a process does not have "enough" pages, its page-fault rate is very high:

i.e. a page fault occurs to get a page, an existing frame is replaced, but the replaced frame is quickly needed back again...

This leads to: Low CPU utilization

Cause: a global page-replacement algorithm,

which replaces pages without regard to the process to which they belong.

If a process needs more frames, it starts faulting and taking frames away from other processes;

those processes then fault, taking frames away from still other processes, and so on.

Page 55: Operating Systems Part III-Memory Management

Cause of Thrashing

All processes now queue up for the paging device and the ready queue empties => CPU utilization decreases.

CPU scheduler sees the decreasing CPU utilization and

increases the degree of multiprogramming.

Result: more page faults, longer queue for the paging device ,

CPU utilization drops further, …

Thrashing has occurred and system throughput plunges; the page-fault rate increases tremendously, and as a result the effective memory-access time increases.

No work is getting done, because the processes are spending

all their time in paging.

To prevent thrashing, we must provide a process with as many frames as it needs.

But how many frames does it need?

Page 56: Operating Systems Part III-Memory Management

Locality Model

The model:

As a process executes, it moves from locality to locality.

A locality is a set of pages that are actively used together.

A program is generally composed of several different localities,

which may overlap.

Example

Function call is a locality, where memory references are made

to the instructions of the function call, its local variables etc.

When the function exits, the process leaves this locality.

Thus, localities are defined by the program structure

and its data structures

A process will fault for the pages in its locality until all those pages are in memory; then it will not fault again until it changes localities.

If enough frames are allocated to accommodate the size of the current locality, the process will not thrash; if fewer frames than the current locality requires are allocated, it will.

Page 57: Operating Systems Part III-Memory Management

Working-Set Model - I

Δ: the working-set window,

a fixed number of page references;

it is an approximation of the program's locality.

Example: Δ = 10 memory references.

The working set at time t1 is ws(t1) = {1, 2, 5, 6, 7}.

At time t2, the working set has changed to ws(t2) = {3, 4}.

WSSi (the working-set size of process Pi) is the total number of distinct pages referenced in the most recent Δ references (it varies in time).

Page 58: Operating Systems Part III-Memory Management

Working-Set Model - II

If Δ is too small, it will not encompass an entire locality;

if Δ is too large, it will encompass several localities;

if Δ = ∞, it will encompass the entire program.

D = Σ WSSi: the total demand for frames.

m: the total number of available frames.

Use:

The OS monitors the working set of each process and allocates

to that working set enough frames.

If D < m, there are enough extra frames, another process can be initiated.

If D > m, thrashing occurs; then suspend or swap out one of the processes.

This prevents thrashing while keeping the degree of

multiprogramming as high as possible.

Thus, it optimizes CPU utilization.
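Computing a working set is just looking back over the last Δ references. A Python sketch, using a hypothetical reference string chosen so that the two windows reproduce the slide's ws(t1) and ws(t2) (the original figure with the actual string is missing from this transcript):

```python
def working_set(refs, t, delta):
    """Set of distinct pages referenced in the window of `delta`
    references ending at index t."""
    return set(refs[max(0, t - delta + 1): t + 1])

# Hypothetical reference string; t1 is index 9, t2 is index 19, delta = 10.
refs = [1, 2, 5, 6, 7, 7, 7, 7, 5, 1, 3, 4, 4, 4, 3, 4, 3, 4, 4, 4]
print(working_set(refs, 9, 10))   # {1, 2, 5, 6, 7} = ws(t1)
print(working_set(refs, 19, 10))  # {3, 4} = ws(t2)
```

Summing the working-set sizes over all processes gives the demand D that the OS compares against the available frames m.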

Page 59: Operating Systems Part III-Memory Management

Page-Fault Frequency

A more direct approach than the working-set model:

Define upper and lower bounds on the desired page-

fault rate

If the actual page-fault rate exceeds the upper limit, then

allocate the process another frame;

if the page-fault rate falls below the lower limit, then remove a

frame from the process.

Thus, we can directly measure and control the page-fault rate

to prevent thrashing.

If the page-fault rate increases and no free frames are available, then select some process and suspend it.

Page 60: Operating Systems Part III-Memory Management


Thank You