scheduling

Post on 19-May-2015


1

Unit III

Scheduling

2

Contents
Uniprocessor Scheduling: Types of Scheduling: Preemptive, Non-preemptive, Long-term, Medium-term, Short-term scheduling; Scheduling Algorithms: FCFS, SJF, RR, Priority
Multiprocessor Scheduling: Granularity, Design Issues, Process Scheduling
Deadlock: Principles of deadlock, Deadlock Avoidance, Deadlock Detection, Deadlock Prevention, Deadlock Recovery
OS Services layer in the Mobile OS: Comms Services

3

Uniprocessor (CPU) Scheduling

The problem: Scheduling the usage of a single processor among all the

existing processes in the system

The goal is to achieve:

High processor utilization

High throughput

number of processes completed per unit time

Low response time

time elapsed from the submission of a request to the beginning of the response

4

Scheduling Objectives

The scheduling function should

Share time fairly among processes

Prevent starvation of a process

Use the processor efficiently

Have low overhead

Prioritise processes when necessary (e.g. real time deadlines)

5

6

Scheduling and Process State Transitions

7

Scheduling and Process State Transitions

8

Nesting of Scheduling Functions

9

Queuing Diagram for Scheduling

10

Long-Term Scheduling: (job scheduler)

Selects which processes should be brought into the ready queue

May be first-come-first-served

Or according to criteria such as priority, I/O requirements or expected

execution time

Controls the degree of multiprogramming

If more processes are admitted

less likely that all processes will be blocked

better CPU usage

each process gets a smaller fraction of the CPU

The long term scheduler will attempt to keep a mix of processor-bound

and I/O-bound processes

11

Medium-Term Scheduling (Swapper)

Part of the swapping function

Swapping decisions based on the need to manage multiprogramming

Done by memory management software

12

Short-Term Scheduling: (CPU scheduler)

Selects which process should be executed next and allocates CPU

The short term scheduler is known as the dispatcher

Executes most frequently

Is invoked on an event that may lead to choosing another process for execution:

clock interrupts

I/O interrupts

operating system calls and traps

signals

13

Processes can be described as either:

I/O-bound process – spends more time doing I/O than computations,

many short CPU bursts

CPU-bound process – spends more time doing computations; few very

long CPU bursts

14

Short-Term Scheduling Criteria

User-oriented: relate to the behavior of the system as perceived by the

individual user or process

Response Time: Elapsed time from the submission of a

request to the beginning of response

Turnaround Time: Elapsed time from the submission of a

process to its completion

System-oriented: focused on effective and efficient utilization of the processor.

Processor utilization: keep the CPU as busy as possible

Fairness

Throughput: number of processes completed per unit time

Waiting time – amount of time a process has been waiting in

the ready queue

15

Interdependent Scheduling Criteria

16

Interdependent Scheduling Criteria

17

Scheduling Algorithm Optimization Criteria

Max CPU utilization

Max throughput

Min turnaround time

Min waiting time

Min response time

18

Characterization of Scheduling Policies

The selection function: determines which process in the ready queue is selected next for

execution.

The decision mode: specifies the instants in time at which the selection function is

exercised.

Nonpreemptive

Once a process is in the running state, it will continue until it

terminates or blocks itself for I/O

Preemptive

Currently running process may be interrupted and moved to the

Ready state by the OS

Allows for better service since any one process cannot monopolize the processor for a very long time.

19

Priorities

Scheduler will always choose a process of higher priority over one of lower

priority

Have multiple ready queues to represent each level of priority

Lower-priority may suffer starvation

Allow a process to change its priority based on its age or execution

history

20

FCFS approach

21


22

Running example to discuss various scheduling policies

Process   Arrival Time   Service/Burst Time
1         0              3
2         2              6
3         4              4
4         6              5
5         8              2

• Service time = total processor time needed in one (CPU-I/O) cycle
• Jobs with long service time are CPU-bound jobs and are referred to as “long jobs”

23

First Come First Served (FCFS)

The simplest scheduling policy is first-come-first-served (FCFS), also known as first-in-first-out (FIFO) or a strict queuing scheme.

As each process becomes ready, it joins the ready queue.

When the currently running process ceases to execute, the process that has

been in the ready queue the longest is selected for running.

Selection function: the process that has been waiting the longest in the ready

queue (hence, FCFS)

Decision mode: nonpreemptive; a process runs until it blocks itself
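The FCFS policy can be sketched in a few lines of Python (an illustrative helper, not code from the slides), using the running example's arrival and service times from the table above:

```python
def fcfs(processes):
    """Run (name, arrival, burst) tuples in arrival order and return
    {name: (waiting_time, turnaround_time)} for each process."""
    time, metrics = 0, {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(time, arrival)        # CPU may sit idle until the job arrives
        time = start + burst              # nonpreemptive: run to completion
        metrics[name] = (start - arrival, time - arrival)
    return metrics

# Running example: arrivals 0, 2, 4, 6, 8 with service times 3, 6, 4, 5, 2
result = fcfs([(1, 0, 3), (2, 2, 6), (3, 4, 4), (4, 6, 5), (5, 8, 2)])
```

Note how the waiting time grows for late arrivals: process 5 waits 10 time units even though its service time is only 2, which is exactly the short-process penalty discussed in the drawbacks below.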

24

FCFS Drawbacks

A short process may have to wait a very long time before it can execute.

A process that does not perform any I/O will monopolize the processor.

Favors CPU-bound processes

I/O-bound processes have to wait until CPU-bound process

completes

They may have to wait even when their I/O are completed (poor

device utilization)

we could have kept the I/O devices busy by giving more priority to I/O-bound processes.

FCFS Drawbacks

FCFS is not an attractive alternative on its own for a uniprocessor

system.

However, it is often combined with a priority scheme to provide an effective

scheduler.

Thus, the scheduler may maintain a number of queues, one for each priority

level, and dispatch within each queue on a first-come-first-served basis.

26

Round-Robin

Selection function: same as FCFS

Decision mode: preemptive

a process is allowed to run until the time slice period (quantum, typically from 10 to 100 ms) has expired

then a clock interrupt occurs and the running process is put on the ready queue

Round-Robin

• Clock interrupt is generated at periodic intervals

• When an interrupt occurs, the currently running process is placed in the ready queue

• Next ready job is selected.

• The principal design issue is the length of the time quantum, or slice, to be

used.

• If the quantum is very short, then short processes will move through the

system relatively quickly.

• BUT there is processing overhead involved in handling the clock interrupt and performing the scheduling and dispatching function.

• Thus, very short time quantum should be avoided.
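The clock-interrupt/requeue cycle above can be sketched as a small simulation (an illustrative sketch, not the slides' code; process tuples and the quantum are example values):

```python
from collections import deque

def round_robin(processes, quantum):
    """Simulate round robin on (name, arrival, burst) tuples and
    return {name: finish_time}."""
    procs = sorted(processes, key=lambda p: p[1])
    remaining = {name: burst for name, _, burst in procs}
    ready, finish = deque(), {}
    time, i = 0, 0
    while len(finish) < len(procs):
        while i < len(procs) and procs[i][1] <= time:
            ready.append(procs[i][0])     # admit everything that has arrived
            i += 1
        if not ready:                     # CPU idle until the next arrival
            time = procs[i][1]
            continue
        name = ready.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        # jobs that arrived while this one ran join the queue first
        while i < len(procs) and procs[i][1] <= time:
            ready.append(procs[i][0])
            i += 1
        if remaining[name] > 0:
            ready.append(name)            # preempted: back of the queue
        else:
            finish[name] = time

    return finish

# Two equal jobs with quantum 2: they alternate until both finish
finish = round_robin([("A", 0, 3), ("B", 0, 3)], quantum=2)
```

The re-append at the end mirrors the clock-interrupt behavior described above: a preempted process goes behind any process that became ready during its slice.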

28

Time Quantum for Round Robin

Must be substantially larger than the time required to handle the clock

interrupt and dispatching

Should be larger than the typical interaction (but not much more to avoid

penalizing I/O bound processes)

29

Round Robin: Critique

Still favors CPU-bound processes

An I/O-bound process uses the CPU for a time less than the time quantum, then is blocked waiting for I/O

A CPU-bound process runs for all its time slice and is put back into

the ready queue (thus, getting in front of blocked processes)

A solution: virtual round robin

When an I/O operation has completed, the blocked process is moved to an auxiliary queue, which gets preference over the main ready queue

A process dispatched from the auxiliary queue runs no longer than the basic time quantum minus the time it has already spent running since it was last selected from the ready queue

30

Queuing for Virtual Round Robin

31

Shortest Process Next (SPN) (SJF-Nonpreemptive)

Selection function: the process with the shortest expected CPU burst

time

Decision mode: nonpreemptive

I/O bound processes will be picked first

We need to estimate the required processing time (CPU burst time) for

each process
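A standard way to estimate the next CPU burst, found in common OS textbooks, is exponential averaging over the observed bursts; the sketch below is illustrative (the weight alpha and the initial estimate s0 are arbitrary example choices, not values from the slides):

```python
def estimate_next_burst(history, alpha=0.5, s0=10.0):
    """Exponential average of observed CPU bursts:
    S[n+1] = alpha * T[n] + (1 - alpha) * S[n], seeded with s0."""
    s = s0
    for t in history:                 # the most recent bursts weigh the most
        s = alpha * t + (1 - alpha) * s
    return s
```

With alpha = 0.5 and seed 10, observing a burst of 6 moves the estimate to 8, and a further burst of 4 moves it to 6: the estimate tracks recent behavior while damping noise.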

32

Shortest Process Next: Critique

Possibility of starvation for longer processes as long as there is a steady

supply of shorter processes

Lack of preemption is not suited in a time sharing environment

CPU bound process gets lower priority (as it should) but a process

doing no I/O could still monopolize the CPU if it is the first one to

enter the system

SPN implicitly incorporates priorities: shortest jobs are given

preferences

The next (preemptive) algorithm directly penalizes longer jobs

33

Shortest Remaining Time(SJF-Preemptive)

Preemptive version of shortest process next policy

Must estimate processing time

Priority Scheduling

A priority number (integer) is associated with each process.

The CPU is allocated to the process with the highest

priority

Some systems have a high number represent high

priority.

Other systems have a low number represent high priority.

Text uses a low number to represent high priority.

Priority scheduling may be preemptive or nonpreemptive.

Assigning Priorities

SJF is a priority scheduling where priority is the predicted next CPU burst time.

Other bases for assigning priority:
Memory requirements
Number of open files
Avg I/O burst / Avg CPU burst
External requirements (amount of money paid, political factors, etc.)

Problem: Starvation: low-priority processes may never execute.
Solution: Aging: as time progresses, increase the priority of the process.
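The aging idea can be sketched as a selection function (an illustrative sketch; the 10-time-unit aging step and the process tuples are made-up example values, and lower number = higher priority as in the text):

```python
def pick_next(ready, now):
    """Pick the ready process with the best effective priority.
    Lower number = higher priority; aging improves the priority by
    one point for every 10 time units spent waiting."""
    def effective(proc):
        name, base_priority, arrival = proc
        return base_priority - (now - arrival) // 10
    return min(ready, key=effective)[0]

# (name, base priority, time it entered the ready queue)
ready = [("old", 8, 0), ("new", 3, 55)]
```

At time 60 the long-waiting low-priority process has aged to an effective priority of 2 and is chosen ahead of the fresher high-priority one, which is precisely how aging prevents starvation.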

First-Come, First-Served (FCFS) Scheduling

Process Burst Time

P1 24

P2 3

P3 3

Process that requests the CPU first is allocated the CPU first. Easily managed with a FIFO queue. Often the average waiting time is long.

Suppose that the processes arrive in the order: P1, P2, P3. The Gantt chart for the schedule is:

| P1 (0-24) | P2 (24-27) | P3 (27-30) |

Suppose that the processes arrive in the order: P2, P3, P1. The Gantt chart for the schedule is:

| P2 (0-3) | P3 (3-6) | P1 (6-30) |

Example of Non-Preemptive SJF

Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

SJF (non-preemptive) Gantt chart:

| P1 (0-7) | P3 (7-8) | P2 (8-12) | P4 (12-16) |

Example of Preemptive SJF

Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

SJF (preemptive) Gantt chart:

| P1 (0-2) | P2 (2-4) | P3 (4-5) | P2 (5-7) | P4 (7-11) | P1 (11-16) |
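The preemptive (shortest-remaining-time) schedule above can be checked with a small simulation (an illustrative sketch, not the slides' code), which re-evaluates remaining times at every arrival and completion:

```python
import heapq

def srt(processes):
    """Shortest Remaining Time: always run the job with the least
    remaining burst.  Returns {name: completion_time}."""
    events = sorted(processes, key=lambda p: p[1])  # (name, arrival, burst)
    heap, done = [], {}
    time, i = 0, 0
    while i < len(events) or heap:
        if not heap:                       # idle: jump to the next arrival
            time = max(time, events[i][1])
        while i < len(events) and events[i][1] <= time:
            name, _, burst = events[i]
            heapq.heappush(heap, (burst, name))
            i += 1
        remaining, name = heapq.heappop(heap)
        # run until either completion or the next arrival (possible preemption)
        nxt = events[i][1] if i < len(events) else float("inf")
        run = min(remaining, nxt - time)
        time += run
        if run == remaining:
            done[name] = time
        else:
            heapq.heappush(heap, (remaining - run, name))
    return done

done = srt([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
```

Feeding in the example's four processes reproduces the chart: P3 completes at 5, P2 at 7, P4 at 11, and the twice-preempted P1 at 16.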

41


Multiprocessor and Real-Time

Scheduling

Introduction

• When a computer system contains more than a single

processor, several new issues are introduced into the design of

scheduling functions.

• We will examine these issues and the details of scheduling

algorithms for tightly coupled multi-processor systems.

Classifications of Multiprocessor Systems

1. Loosely coupled, distributed multiprocessors, or clusters

Fairly autonomous systems.

Each processor has its own memory and I/O channels

2. Functionally specialized processors

Typically, specialized processors are controlled by a master general-purpose processor and provide services to it. An example would be an I/O processor.

3. Tightly coupled multiprocessing

Consists of a set of processors that share a common main memory and are under the integrated control of an operating system.

We’ll be most concerned with this group.

Multiprocessor Systems

[Figure: three multiprocessor organizations. Shared Nothing (loosely coupled): each processor has private memory and disk, connected by an interconnection network. Shared Memory (tightly coupled): all processors share a global memory over the interconnection network. Shared Disk: each processor has private memory; the disks are shared over the interconnection network.]

• A good metric for characterizing multiprocessors and placing them in context with other architectures is to consider the synchronization granularity, or frequency of synchronization, between processes in a system.

• Five categories, differing in granularity:

1. Independent Parallelism

2. Coarse Parallelism

3. Very Coarse-Grained Parallelism

4. Medium-Grained Parallelism

5. Fine-Grained Parallelism

Granularity

Independent Parallelism

•No explicit synchronization among processes

•Each represents a separate, independent application or job.

• A typical use of this type of parallelism is in a time-sharing system.
• Each user is performing a particular application, such as word processing or using a spreadsheet.
• The multiprocessor provides the same service as a multiprogrammed uniprocessor.
• Because more than one processor is available, average response time to the users will be less.

Synchronization interval (instructions): N/A

Coarse and Very Coarse-Grained Parallelism

• With coarse and very coarse grained parallelism, there is synchronization

among processes, but at a very gross level.

• This type of situation is easily handled as a set of concurrent processes

running on a multi-programmed uni-processor and can be supported on a

multiprocessor with little or no change to user software.

• In general, any collection of concurrent processes that need to

communicate or synchronize can benefit from the use of a multiprocessor

architecture.

• In the case of very infrequent interaction among the processes, a distributed

system can provide good support. However, if the interaction is somewhat

more frequent, then the overhead of communication across the network

may negate some of the potential speedup. In that case, the multiprocessor

organization provides the most effective support.

Medium-Grained Parallelism

• A single application can be effectively implemented as a

collection of threads within a single process.

• In this case, the programmer must explicitly specify the

potential parallelism of an application.

• Threads usually interact frequently, affecting the performance

of the entire application

Synchronization interval (instructions): 20 to 200

Fine Grained Parallelism

Highly parallel applications: fine-grained parallelism represents a much more complex use of parallelism than is found in the use of threads.

Synchronization interval (instructions): < 20

Synchronization Granularity and Processes

Design Issues

Scheduling on a multiprocessor involves three interrelated

issues:

Assignment of processes to processors.

Use of multiprogramming on individual processors.

Actual dispatching of a process.

Assignment of processes to processors

•If we assume that the architecture of the multiprocessor is

uniform, in the sense that no processor has a particular

physical advantage with respect to access to main memory or

I/O devices, then the simplest scheduling protocol is to treat

processors as a pooled resource and assign processes to

processors on demand.

•The question then arises as to whether the assignment should

be static or dynamic.

•If a process is permanently assigned (static assignment) to a processor

from activation until completion, then a dedicated short-term queue is

maintained for each processor.

•Allows for group or gang scheduling (details later).

•Advantage: less overhead – processor assignment occurs only once.

•Disadvantage: One processor could be idle (has an empty queue) while

another processor has a backlog. To prevent this situation from arising, a

common queue can be utilized. In this case, all processes go into one

global queue and are scheduled to any available processor. Thus, over

the life of a process, it may be executed on several different processors at

different times.

Assignment of processes to processors

• Regardless of whether processes are dedicated to processors, some mechanism is needed to assign processes to processors.

• Two approaches have been used: master/slave and peer.

1. Master/slave architecture
– Key kernel functions always run on a particular processor. The other processors can only execute user programs.
– Master is responsible for scheduling jobs.
– Slave sends service request to the master.
– Advantages
• Simple, requires little enhancement to a uniprocessor multiprogramming OS.
• Conflict resolution is simple since one processor has control of all memory and I/O resources.
– Disadvantages
• Failure of the master brings down the whole system
• Master can become a performance bottleneck

Assignment of processes to processors

2. Peer architecture

– The OS kernel can execute on any processor.

– Each processor does self-scheduling from the pool of available processes.

– Advantages:

• All processors are equivalent.
• No one processor should become a bottleneck in the system.
– Disadvantages:
• Complicates the operating system
• The OS must make sure that two processors do not choose the same process and that processes are not somehow lost from the queue.

• Techniques must be employed to resolve and synchronize competing claims for resources.

Assignment of processes to processors

Multiprogramming at each processor

Completion time and other application-related performance metrics are much more important than processor utilization in a multiprocessor environment.

For example, a multi-threaded application may require all its threads

be assigned to different processors for good performance.

Static or dynamic allocation of processes.

Process dispatching

After assignment, deciding who is selected from among the pool of

waiting processes --- process dispatching.

Single-processor multiprogramming strategies may be counterproductive here.

Priorities and process history may not be sufficient.

Process scheduling

Single queue of processes or if multiple priority is used, multiple

priority queues, all feeding into a common pool of processors.

The specific scheduling policy does not have much effect as the number of processors increases.

Conclusion: Use FCFS with priority levels.

Thread scheduling

An application can be implemented as a set of threads that

cooperate and execute concurrently in the same address space.

Criteria: When related threads run in parallel performance improves.

Load sharing: processes are not assigned to a particular

processor.

A global queue of ready threads is maintained and idle processor

selects thread from the queue.

Gang scheduling: a bunch of related threads scheduled together to run on a set of processors at the same time, on a one-to-one basis.

Thread scheduling

Dedicated processor assignment: Each program gets as many

processors as there are parallel threads.

Dynamic scheduling: Scheduling done at run time.

Load sharing:

Advantages:

• The load is distributed evenly across the processors, assuring that

no processor is idle.

• No centralized scheduler is required

Three versions of Load sharing:

• FCFS

• Smallest number of threads first

• Preemptive smallest number of threads first

Real-time systems

Real-time computing is an important emerging discipline in CS and

CE.

Control of lab experiments, robotics, process control,

telecommunication etc.

It is a type of computing where correctness of the computation

depends not only on the logical results but also on the time at

which the results are produced.

Hard real-time systems: Must meet deadlines. Ex: Space shuttle rendezvous with a space station.

Soft real-time systems: Deadlines exist but are not mandatory; results are discarded if the deadline is not met.

Characteristics of Real-Time (RT) systems

Determinism

Responsiveness

User control

Reliability

Fail-soft operation

Deterministic Response

External events and timing dictate requests for service.

OS’s response depends on the speed at which it can respond to

interrupts and on whether the system has sufficient capacity to

handle requests.

Determinism is concerned with how long the OS delays before

acknowledging an interrupt.

In a non-real-time OS this delay may be on the order of tens or hundreds of milliseconds, whereas in a real-time OS it may be a few microseconds to 1 millisecond.

RT .. Responsiveness

Responsiveness is the time for servicing the interrupt once it has

been acknowledged.

Comprises:

Time to transfer control, (and context switch) and execute the

ISR

Time to handle nested interrupts, the interrupts that should be

serviced when executing this ISR. Higher priority Interrupts.

response time = F(responsiveness, determinism)

RT .. User Control

User control : User has a much broader control in RT-OS than in

regular OS.

Priority

Hard or soft deadlines

Deadlines

RT .. Reliability

Reliability: A processor failure in a non-RT may result in

reduced level of service. But in an RT it may be catastrophic :

life and death, financial loss, equipment damage.

Fail-soft operation:

Fail-soft operation: Ability of the system to fail in such a way as to preserve as much capability and data as possible.

In the event of a failure, immediate detection and correction is

important.

Notify user processes to rollback.

Requirements of RT

Fast context switch

Minimal functionality (small size)

Ability to respond to interrupts quickly (Special interrupts handlers)

Multitasking with signals and alarms

Special storage to accumulate data fast

Preemptive scheduling

Requirements of RT (contd.)

Priority levels

Minimizing interrupt disables

Short-term scheduler (“omnipotent”)

Time monitor

Goal: Complete all hard real-time tasks by dead-line. Complete as many soft

real-time tasks as possible by their deadline.

RT scheduling

Static table-driven approach

Static priority-driven preemptive scheduling

Dynamic planning-based scheduling

Dynamic best-effort scheduling:

RT scheduling

Static table-driven approach

For periodic tasks.

Input for analysis consists of : periodic arrival time, execution time,

ending time, priorities.

Inflexible to dynamic changes.

General policy: earliest deadline first.

RT scheduling

Static priority-driven preemptive scheduling

For use with non-RT systems: Priority based preemptive scheduling.

Priority assignment based on real-time constraints.

Example: Rate monotonic algorithm
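The rate monotonic algorithm named above has a well-known sufficient (not necessary) schedulability test due to Liu and Layland: n periodic tasks are schedulable under RM if total utilization does not exceed n(2^(1/n) - 1). A minimal sketch (the task sets below are made-up examples):

```python
def rm_schedulable(tasks):
    """Liu & Layland sufficient test for rate monotonic scheduling.
    tasks: list of (execution_time, period) pairs.  Returns True when
    total utilization <= n * (2**(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / p for c, p in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)

light = rm_schedulable([(1, 4), (1, 8)])   # utilization 0.375
heavy = rm_schedulable([(3, 4), (3, 4)])   # utilization 1.5
```

For two tasks the bound is about 0.828, so the light set passes while the overloaded set is rejected; task sets above the bound may still be schedulable, which is why the test is only sufficient.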

RT scheduling

Dynamic planning-based scheduling

After a task arrives and before its execution begins, a schedule is prepared that includes the new as well as the existing tasks.

If the new task can be accommodated without affecting the existing schedules, then nothing is revised.

Else schedules are revised to accommodate the new task.

Remember that sometimes new tasks may be rejected if deadlines

cannot be met.

RT scheduling

Dynamic best-effort scheduling:

used in most commercial real-time systems today

tasks are aperiodic, no static scheduling is possible

some short-term scheduling such as shortest deadline first is used.

Until the task completes we do not know whether it has met the deadline.

77

(The deadlock coverage that follows is based on Operating System Concepts by Galvin, Gagne.)

DEADLOCK EXAMPLES:
“It takes money to make money.”
You can't get a job without experience; you can't get experience without a job.

BACKGROUND:
The cause of deadlocks: each process needing what another process has. This results from sharing resources such as memory, devices, links.

Under normal operation, a resource allocation proceeds like this:
– Request a resource (suspend until available if necessary).
– Use the resource.
– Release the resource.

Potential Deadlock

I need quad A and B

I need quad B and C

I need quad C and D

I need quad D and A

Actual Deadlock

HALT until B is free

HALT until C is free

HALT until D is free

HALT until A is free

Dining Philosophers Problem

Example of deadlock

DEADLOCKS

Permanent blocking of a process or set of processes that either compete for system resources or cooperate for communication.

Formal definition :

A set of processes is deadlocked if each process in the set is waiting

for an event that only another process in the set can cause

Usually the event is release of a currently held resource.

Generally it is because of the conflicting needs of different

processes.

There is no general solution to solve it completely.

Resource Categories

Two general categories of resources:

1. Reusable

– can be safely used by only one process at a time and is not depleted

by that use.

Processors, I/O channels, main and secondary memory, devices, and data structures such as files, databases, and semaphores

Deadlock occurs if each process holds one resource and requests the other

2. Consumable

– one that can be created (produced) and destroyed (consumed).

Such as Interrupts, signals, messages, and information in I/O buffers

Deadlock may occur if a Receive message is blocking

May take a rare combination of events to cause deadlock

Resource Categories

Resources can be

• physical : printer, tape drive, cpu cycles

• Logical: file, semaphore, monitors

Preemptable resources

can be taken away from a process with no ill effects

Ex: memory, CPU

Nonpreemptable resources

will cause the process to fail if taken away

Ex: printer

System Model

Resource types R1, R2, . . ., Rm

CPU cycles, memory space, I/O devices

Each resource type Ri has Wi instances.

Each process utilizes a resource as follows:

request

use

release

Deadlock Characterization

1. Mutual exclusion :Only single process is allowed to use the resource.

2. Hold and wait :Existence of a process holding at least one resource and waiting

to acquire additional resources currently held by other processes.

3. No preemption :No resource can be removed forcibly from a process.

4. Circular wait: Processes waiting for resources held by other waiting processes.

Deadlock can arise if four conditions hold simultaneously.

Resource-Allocation Graph

V is partitioned into two types:

P = {P1, P2, …, Pn}, the set consisting of all the processes in

the system.

R = {R1, R2, …, Rm}, the set consisting of all resource types in

the system.

request edge – directed edge Pi → Rj

assignment edge – directed edge Rj → Pi

Directed graph to describe deadlocks

A set of vertices V and a set of edges E.

Resource-Allocation Graph (Symbols)

[Figure: the graph symbols – a process node (Pi), a resource node (Rj), an assignment edge, and a request edge]

Def: Overall view of the processes holding or waiting for the various resources in the system. Used for a deadlock-free resource allocation strategy.

Example of a Resource allocation graph

Resource Allocation Graph With A Deadlock

Graph With A Cycle But No Deadlock

Basic Facts

If graph contains no cycles → no deadlock

If graph contains a cycle →

if only one instance per resource type, then deadlock

if several instances per resource type, possibility of deadlock

• If there is single instance of each resource then cycle in the

resource graph is necessary and sufficient condition for the

existence of a deadlock.

• If each resource type has multiple instances, then a cycle is a necessary but not sufficient condition for the existence of a deadlock.
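The cycle test on a resource-allocation graph can be sketched with a depth-first search (the graph encoding and process/resource names below are illustrative):

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph {node: [successors]} using
    DFS three-coloring; reaching a GREY node again is a back edge,
    hence a cycle."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}

    def visit(n):
        color[n] = GREY
        for m in graph.get(n, []):
            c = color.get(m, WHITE)
            if c == GREY:
                return True               # back edge: cycle found
            if c == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in list(graph))

# P1 holds R1 and requests R2; P2 holds R2 and requests R1: a deadlock cycle
deadlocked = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
no_deadlock = {"P1": ["R1"], "R1": []}
```

With single-instance resources, as stated above, finding such a cycle is both necessary and sufficient evidence of deadlock.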

94


Methods for Handling Deadlocks

Ensure that the system will never enter a deadlock

state.

(Deadlock prevention /avoidance)

Allow the system to enter a deadlock state and

then recover. (Deadlock detection & recovery)

Ignore the problem and pretend that deadlocks

never occur in the system; used by most operating

systems, including UNIX.

Deadlock prevention

Design a system in such a way that the possibility of deadlock

is excluded.

Two main methods

1. Indirect – prevent one of the three necessary conditions (mutual exclusion, hold and wait, no preemption) from holding

2. Direct – prevent circular waits

Deadlock Prevention

Mutual Exclusion – not required for sharable resources; must hold for non-sharable resources.

Must be supported by the OS; it cannot be disallowed in general.

If access to a resource requires mutual exclusion, then mutual exclusion must be supported by the OS.

Some resources, such as files, may allow multiple accesses for reads but only exclusive access for writes.

Even in this case, deadlock can occur if more than one process requires write permission.

Restrain the ways request can be made

Deadlock Prevention

Hold and Wait – must guarantee that whenever a process requests a

resource, it does not hold any other resources

– Requires each process to request and be allocated all its

resources before it begins execution,

– Allow process to request resources only when the process has

none

Disadvantages:

– Low resource utilization

– starvation is possible

– process may not know in advance all of the resources that it will

require.

No Preemption –

1. If a process that is holding some resources and requests another

resource that cannot be immediately allocated to it, then all

resources currently being held are released

Preempted resources are added to the list of resources for which

the process is waiting

Process will be restarted only when it can regain its old resources, as

well as the new ones that it is requesting

2. If a process requests a resource that is currently held by another process, the OS may preempt the second process and require it to release its resources

Ex: CPU registers, memory space but not printer /tape drives.

Deadlock Prevention

Deadlock Prevention

Circular Wait – impose a total ordering of all resource types, and

require that each process requests resources in an increasing order

of enumeration.

If a process holds a resource type whose number is i, it may request a resource of type having number j only if j > i

Issue only one request for several units of same resource

Ordering should follow the usage pattern of resources. Ex: a tape drive should have a lower number than a printer, as it is required earlier than the printer.

Ordering is done in the program, so reordering requires reprogramming
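The total-ordering rule can be sketched with locks (a hypothetical example; the resource names, numbering, and helper functions are illustrative, not from the slides):

```python
import threading

# Hypothetical resource numbering: every process must acquire
# resources in increasing order, so a circular wait can never form.
LOCK_ORDER = {"tape": 1, "disk": 2, "printer": 3}
LOCKS = {name: threading.Lock() for name in LOCK_ORDER}

def acquire_in_order(names):
    """Acquire the named resources in the globally agreed order and
    return the order in which they were taken."""
    ordered = sorted(names, key=LOCK_ORDER.__getitem__)
    for name in ordered:
        LOCKS[name].acquire()
    return ordered

def release_all(names):
    """Release previously acquired resources."""
    for name in names:
        LOCKS[name].release()
```

Even if the caller asks for the printer before the tape drive, the locks are taken tape-first, which is what rules out two processes each holding what the other wants.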

Deadlock Avoidance

Simplest and most useful model requires that each process

declare the maximum number of resources of each type that it

may need.

The deadlock-avoidance algorithm dynamically examines the

resource-allocation state to ensure that there can never be a

circular-wait condition.

Resource-allocation state is defined by the number of available

and allocated resources, and the maximum demands of the

processes

Requires that the system has some additional a priori information available

Basic Facts

If a system is in safe state no deadlocks

If a system is in unsafe state possibility of deadlock

Avoidance ensure that a system will never enter an

unsafe state.

Safe, Unsafe , Deadlock State

Safe State

When a process requests an available resource, the system must decide if immediate allocation leaves the system in a safe state.

The system is in a safe state if there exists a safe sequence of all processes.

Sequence <P1, P2, …, Pn> is safe if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj, with j < i.
If Pi's resource needs are not immediately available, then Pi can wait until all Pj have finished.
When Pj is finished, Pi can obtain needed resources, execute, return allocated resources, and terminate.
When Pi terminates, Pi+1 can obtain its needed resources, and so on.

Avoidance algorithms

Single instance of a resource type

Use a resource-allocation graph

Multiple instances of a resource type

Use the banker’s algorithm

Resource-Allocation Graph Scheme

Claim edge Pi → Rj indicates that process Pi may request

resource Rj at some time in the future

Similar to a request edge in direction, but represented by a

dashed line

When a process requests a resource, the claim edge converts to a

request edge

Request edge converted to an assignment edge when the

resource is allocated to the process.

When a resource is released by a process, the assignment edge is

reconverted to a claim edge

Resources must be claimed a priori in the system

Resource-Allocation Graph

Unsafe State In Resource-Allocation Graph

Resource-Allocation Graph Algorithm

Suppose that process Pi requests a resource Rj

The request can be granted only if converting the request edge

to an assignment edge does not result in the formation of a

cycle in the resource allocation graph.

Time complexity: detecting a cycle in a graph requires on the

order of n² operations, where n is the number of processes in the

system.

Banker’s Algorithm

Multiple instances of each resource type.

Each process must claim its maximum use a priori:

every process declares its maximum need/requirement.

Maximum requirement should not exceed the total number of

resources in the system.

When a process requests a resource, system determines whether

allocation will keep the system in safe state or not.

If it does, the resources are allocated; otherwise the process must

wait until resources become available.

When a process gets all its resources it must return them in a finite

amount of time.

Data Structures for the Banker’s Algorithm

Let n = number of processes, and m = number of resource types

Available: Vector of length m. If available [j] = k, there are k

instances of resource type Rj available

Max: n x m matrix. If Max [i,j] = k, then process Pi may request at

most k instances of resource type Rj

Allocation: n x m matrix. If Allocation[i,j] = k then Pi is currently

allocated k instances of Rj

Need: n x m matrix. If Need[i,j] = k, then Pi may need k more

instances of Rj to complete its task.

Need [i, j] = Max[i, j] – Allocation [i, j]

Safety Algorithm

1. Let Work and Finish be vectors of length m and n, respectively.

Initialize:

Work = Available

Finish[i] = false for i = 0, 1, …, n − 1

2. Find an i such that both:

(a) Finish [i] = false

(b) Needi ≤ Work

If no such i exists, go to step 4

3. Work = Work + Allocationi

Finish[i] = true

go to step 2

4. If Finish [i] == true for all i, then the system is in a safe state

Requires on the order of m × n² operations to decide whether a state is safe.
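The four steps above can be sketched in Python as follows; this is an illustrative version (the function name `is_safe` is my own), not a definitive implementation:

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm (sketch): return a safe sequence of
    process indices, or None if the state is unsafe."""
    n, m = len(allocation), len(available)
    work = list(available)        # Step 1: Work = Available
    finish = [False] * n          # Finish[i] = false for all i
    sequence = []
    while len(sequence) < n:
        found = False
        for i in range(n):
            # Step 2: find i with Finish[i] == false and Need_i <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Step 3: Pi can finish and return its resources
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                found = True
        if not found:
            # Step 4: some Finish[i] remained false -> unsafe
            return None
    return sequence
```

With the Allocation and Need matrices of the Banker's-algorithm example below and Available = (3 3 2), this returns the safe sequence [1, 3, 4, 0, 2], i.e. <P1, P3, P4, P0, P2>.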

Resource-Request Algorithm

Requesti = request vector for process Pi.

If Requesti [j] = k then process Pi wants k instances of resource type Rj

1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim

2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait, since the resources are not available

3. Pretend that the system allocates the requested resources to Pi by modifying the state as follows:

Available = Available – Requesti

Allocationi = Allocationi + Requesti

Needi = Needi – Requesti

If safe → the resources are allocated to Pi. If unsafe → Pi must wait, and the old resource-allocation state

is restored
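The resource-request algorithm can be sketched as follows (a hedged illustration; the helper `_safe` is a compact version of the safety algorithm, and both function names are my own):

```python
def try_request(pid, request, available, allocation, need):
    """Banker's resource-request algorithm (sketch). Mutates the state
    and returns True if the request is granted; restores the old state
    and returns False if the process must wait."""
    m = len(available)
    if any(request[j] > need[pid][j] for j in range(m)):
        raise ValueError("process exceeded its maximum claim")   # step 1
    if any(request[j] > available[j] for j in range(m)):
        return False                      # step 2: Pi must wait
    # Step 3: pretend to allocate.
    for j in range(m):
        available[j] -= request[j]
        allocation[pid][j] += request[j]
        need[pid][j] -= request[j]
    if _safe(available, allocation, need):
        return True                       # safe -> grant
    # Unsafe: roll back to the old state; Pi must wait.
    for j in range(m):
        available[j] += request[j]
        allocation[pid][j] -= request[j]
        need[pid][j] += request[j]
    return False

def _safe(available, allocation, need):
    """Compact safety check: True iff a safe sequence exists."""
    work, finish = list(available), [False] * len(allocation)
    while True:
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                break
        else:
            return all(finish)
```

Run against the snapshot below, P1's request (1,0,2) is granted and leaves Available = (2,3,0); the sketch then refuses a subsequent (0,2,0) request by P0, because the pretended allocation would leave the system in an unsafe state.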

Example of Banker’s Algorithm

5 processes P0 through P4;

3 resource types: A (10 instances), B (5 instances), and C (7

instances)

Snapshot at time T0:

Allocation Max Available

A B C A B C A B C

P0 0 1 0 7 5 3 3 3 2

P1 2 0 0 3 2 2

P2 3 0 2 9 0 2

P3 2 1 1 2 2 2

P4 0 0 2 4 3 3

Example (Cont.)

The content of the matrix Need is defined to be Max – Allocation

Need A B C

P0 7 4 3

P1 1 2 2

P2 6 0 0

P3 0 1 1

P4 4 3 1

The system is in a safe state since the sequence < P1, P3, P4, P0,

P2>or < P1, P3, P4, P2, P0> satisfies safety criteria

1. P1: need (1 2 2) ≤ available (3 3 2)

available = available + allocation = (3 3 2) + (2 0 0) = (5 3 2)

2. P3: (0 1 1) ≤ (5 3 2)

available = (5 3 2) + (2 1 1) = (7 4 3)

3. P4: (4 3 1) ≤ (7 4 3)

available = (7 4 3) + (0 0 2) = (7 4 5)

4. P0: (7 4 3) ≤ (7 4 5)

available = (7 4 5) + (0 1 0) = (7 5 5)

5. P2: (6 0 0) ≤ (7 5 5)

available = (7 5 5) + (3 0 2) = (10 5 7)

Example: P1 Request (1,0,2)

Check that Request ≤ Available, that is, (1,0,2) ≤ (3,3,2) → true

Allocation Need Available

A B C A B C A B C

P0 0 1 0 7 4 3 2 3 0

P1 3 0 2 0 2 0

P2 3 0 2 6 0 0

P3 2 1 1 0 1 1

P4 0 0 2 4 3 1

Executing safety algorithm shows that sequence < P1, P3, P4, P0, P2>

satisfies safety requirement

Can request for (3,3,0) by P4 be granted?

Can request for (0,2,0) by P0 be granted?

Deadlock Detection

An algorithm that examines the state of the system to determine

whether a deadlock has occurred

An algorithm to recover from deadlock.

Single Instance of Each Resource Type

Maintain wait-for graph

Nodes are processes

Pi → Pj if Pi is waiting for Pj

Periodically invoke an algorithm that searches for a cycle in the

graph. If there is a cycle, there exists a deadlock.

An algorithm to detect a cycle in a graph requires on the order of n²

operations, where n is the number of vertices/processes in the graph
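A depth-first search for a cycle in the wait-for graph can be sketched in Python (illustrative only; the graph is given as a dictionary mapping each process to the set of processes it is waiting for):

```python
def has_cycle(wait_for):
    """Detect a cycle in a wait-for graph. A cycle means deadlock
    when there is a single instance of each resource type."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / in progress / done
    color = {p: WHITE for p in wait_for}

    def visit(p):
        color[p] = GREY
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GREY:
                return True               # back edge: cycle found
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in wait_for)
```

For example, the graph {P1 → P2, P2 → P3, P3 → P1} contains a cycle, so those three processes are deadlocked; {P1 → P2, P2 → P3} does not.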

Resource-Allocation Graph and Wait-for Graph

Resource-Allocation Graph Corresponding wait-for graph

Several Instances of a Resource Type

Available: A vector of length m indicates the number of available

resources of each type.

Allocation: An n x m matrix defines the number of resources of

each type currently allocated to each process.

Request: An n x m matrix indicates the current request of each

process. If Request [i,j ] = k, then process Pi is requesting k more

instances of resource type Rj.

Detection Algorithm

1. Let Work and Finish be vectors of length m and n, respectively

Initialize:

(a) Work = Available

(b) For i = 1, 2, …, n, if Allocationi ≠ 0, then

Finish[i] = false; otherwise, Finish[i] = true

2. Find an index i such that both:

(a) Finish[i] == false

(b) Requesti ≤ Work

If no such i exists, go to step 4

Detection Algorithm (Cont.)

3. Work = Work + Allocationi

Finish[i] = true

go to step 2

4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlocked state.

Moreover, if Finish[i] == false, then Pi is deadlocked

The algorithm requires O(m × n²) operations to detect whether the system is in a deadlocked state
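The detection algorithm can be sketched as follows (an illustrative version; the function name is my own). Processes holding no resources are marked finished up front, and any process left unfinished at the end is deadlocked:

```python
def find_deadlocked(available, allocation, request):
    """Deadlock detection for multiple resource instances: return the
    list of deadlocked process indices (empty if no deadlock)."""
    n, m = len(allocation), len(available)
    work = list(available)
    # A process holding nothing cannot be part of a deadlock.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progress = True
    while progress:
        progress = False
        for i in range(n):
            # Pi's current request can be satisfied: assume it finishes
            # and returns everything it holds.
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                progress = True
    return [i for i in range(n) if not finish[i]]
```

On the T0 snapshot of the example below this returns an empty list (no deadlock); after P2's extra request for one instance of C it returns [1, 2, 3, 4], matching the deadlock identified in the example.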

Example of Detection Algorithm

Five processes P0 through P4; three resource types

A (7 instances), B (2 instances), and C (6 instances)

Snapshot at time T0:

Allocation Request Available

A B C A B C A B C

P0 0 1 0 0 0 0 0 0 0 (0 1 0 )

P1 2 0 0 2 0 2 ( 7 2 6)

P2 3 0 3 0 0 0 ( 3 1 3)

P3 2 1 1 1 0 0 ( 5 2 4 )

P4 0 0 2 0 0 2 ( 5 2 6)

Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i (the vectors in parentheses show Work after each process finishes)

Example (Cont.)

P2 requests an additional instance of type C. Request:

A B C

P0 0 0 0

P1 2 0 1

P2 0 0 1

P3 1 0 0

P4 0 0 2

State of system?

Can reclaim resources held by process P0, but insufficient

resources to fulfill the other processes' requests.

Deadlock exists, consisting of processes P1, P2, P3, and P4

Detection-Algorithm Usage

When, and how often, to invoke the algorithm depends on:

How often a deadlock is likely to occur?

How many processes will be affected by deadlock when it happens?

If deadlock occurs frequently, then the detection algorithm should be

invoked frequently.

We could invoke the deadlock detection algorithm every time a request for

allocation cannot be granted immediately.

This way we can identify the process that caused the deadlock, as well as

the processes involved in it.

But this incurs overhead in computation time.

So the detection algorithm can instead be invoked periodically, e.g. once per hour, or when CPU utilization drops below 40%.

If detection algorithm is invoked arbitrarily, there may be many cycles in the

resource graph and so we would not be able to tell which of the many

deadlocked processes “caused” the deadlock.


Recovery from Deadlock: (A) Process Termination

Abort all deadlocked processes

Abort one process at a time until the deadlock cycle is eliminated

In which order should we choose to abort?

Priority of the process

How long the process has computed, and how much longer until

completion

Resources the process has used

Resources process needs to complete

How many processes will need to be terminated

Is process interactive or batch?

Recovery from Deadlock: (B) Resource Preemption

Preempt some resources from processes and give these

resources to other processes until the deadlock cycle is broken.

Following 3 issues need to be considered

1. Selecting a victim – Which resources/ processes are to be

preempted?

- minimize cost

2. Rollback – what should be done with the process from which

resources are preempted?

- return the process to some safe state and restart it from that state.

3. Starvation – the same process may always be picked as victim;

- include the number of rollbacks in the cost factor
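These three issues can be combined into a single cost function; the sketch below (with purely hypothetical weights and field names) picks the cheapest victim, and by charging for past rollbacks it avoids starving the same process:

```python
def victim_cost(proc, rollback_weight=10):
    """Hypothetical preemption cost: higher priority, more work done,
    and more past rollbacks all make a process a worse victim."""
    return (proc["priority"] * 2
            + proc["cpu_time_used"]
            + proc["rollbacks"] * rollback_weight)

def pick_victim(procs):
    """Select the process whose preemption minimizes cost."""
    return min(procs, key=victim_cost)

procs = [
    {"pid": 1, "priority": 5, "cpu_time_used": 40, "rollbacks": 3},
    {"pid": 2, "priority": 8, "cpu_time_used": 5, "rollbacks": 0},
]
# P2 has done little work and has never been rolled back, so it is
# the cheaper victim; each rollback it suffers raises its future cost.
```

A real OS would weigh further factors from the process-termination list above (resources held, interactive vs. batch), but the starvation fix is the same: the rollback count feeds back into the cost.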


OS Services layer in the Mobile OS: Generic Services

OS Layered Model

Fig: Block decomposition in the system model

Blocks of OS Services components

Purpose

• Symbian OS is a microkernel operating system.

• The kernel is restricted to providing the minimum of essential

services, specifically those required to implement process execution

and memory access models.

• In Symbian OS, the higher-level system services are located in the

OS Services layer.

• These services provide the specialized system-level support

required by other system components and by higher layers of the

system, as well as by applications.

• Ex: graphics support, communications support including networking

and telephony, and the connectivity infrastructure are all provided as

OS services.

Generic OS Services Block

The Generic OS Services block provides:

1. A number of general-purpose utility-style services, including logging and scheduling services and some legacy components.

2. Frameworks and libraries, including an implementation of the C Standard Library.

3. Framework support for secure certificates, keys and tokens.

Generic OS Services Block

Components are organized into two small collections of servers, frameworks

and libraries. The common theme of the collections is general utility.

Generic Services Collection

1. The Task Scheduler component is an application-launching server

that supports creating, querying and editing of time- or condition-triggered tasks.

2. The Event Logger component provides an interface supporting logging of

events, for example call and message lists, and their retrieval, filtering and

viewing by clients.

3. The System Agent component is a legacy component that performs a

number of useful functions for monitoring and reporting system state.

4. The File Logger component is a legacy utility for logging system

or application messages to a log file.

Generic Libraries Collection

• This collection provides system-level libraries for use by applications

and system components.

Generic Libraries Collection

• The Certificate and Key Management, Certificate Store and Key

Store components provide a framework for certificate and key

management that supports public key cryptography for RSA, DSA

and DH key pairs, assignment of trust status and certificate-chain

construction, validation and revocation.

• The Cryptographic Token Framework supports the use of secure

hardware tokens, for example DRM-protected games or films on

SD cards or memory sticks.

• The C Standard Library is a subset of the POSIX C library which

maps C function calls in as simple a way as possible to native

Symbian OS calls.

OS Services layer in the Mobile OS: Comms Services

• ‘Comms’ (or communications) means ‘data communications’ – the art,

science and technology of moving data between different devices over direct

connections or networks.

• Symbian OS supports a wide range of communications technologies

including conventional serial communications, short link technologies

such as USB, Bluetooth and infrared, as well as networking technologies.

Comms Services

Purpose

• Comms Services in Symbian OS provides the support for a wide

variety of communications protocols and services:

• Serial protocols including RS232, IrDA and USB

• Bluetooth radio

• Networking protocols including TCP/IP (both IPv4 and IPv6),

network security (TLS and IPSec) and dial-up protocols (PPP and

SLIP)

• Wi-Fi

• 2G, 2.5G and 3G mobile telephony voice, data (including fax) and

messaging services for GSM/UMTS and CDMA/CDMA2000 network

• The system model divides the Comms Services block into four

distinct sub-blocks:

1. Comms Framework

2. Telephony services

3. Short Link services

4. Networking services

Comms Services

Comms Services

1. Comms Framework: It provides the generic infrastructure that supports

all communications services.

• Most importantly, it includes the Comms Root Server, which is the

‘meta’ process server for all communications services and the ESock

Socket Server which provides the generic, sockets-style interface used

to access all communications services.

2. Telephony Services: These are based on the ETel Telephony Server, which

provides support for 2G, 2.5G and 3G mobile phone networks, including

GSM/GPRS/EDGE/UMTS (2G/2.5G/3G) and CDMA/CDMA2000.

Comms Services

3. Networking Services: Networking Services provides packet-based

network services with Ethernet emulation and includes the TCP/IP

stack implementation, secure networking extensions including

TLS/SSL and IPSec, which support secure browsing and VPN

gateways, together with a variety of application-level Internet

services including FTP and HTTP.

4. Short-link Services: Short-link services provide USB, Bluetooth and

infrared services.

OS Services layer in the Mobile OS:Multimedia and Graphics Services Block

Multimedia and Graphics Services Block

This block provides all graphics

services above the level of

hardware drivers and provides the

frameworks supporting

multimedia services.

Multimedia and Graphics Services Block

1. The Multimedia Framework provides a single extensible

framework for integrating support for audio, video, MIDI, automated

speech recognition, cameras, and integrated broadcast tuners.

• Its purpose is to consolidate and standardize the multimedia APIs,

so that they are common across all devices based on Symbian OS.

2. OpenGL ES is an open standard for 2D and 3D graphics,

specifically

targeted at embedded systems including consoles and phones.

It defines application APIs for rendering, texture mapping, and other

graphical effects, as well as a portable binding to native windowing

systems.

Multimedia and Graphics Services Block

3. Windowing Model :The Window Server is at the heart of the graphics

architecture of Symbian OS and it is central to the event-handling

model that drives applications.

The Window Server owns and manages access to the screen as a

drawable resource, which is made available to applications through

the abstraction of windowed screen areas. It also provides access

to the keyboard and pointer or digitizer for GUI applications.

4. Graphics and Printing Services Collection: These components

support all bitmapped graphics operations on display and printer

devices, including all font and drawing operations. The principal

components are the Font and Bitmap Server, through which all

operations are made within a client-side server session

Multimedia and Graphics Services Block

5. Graphics Device Interface Collection: This is the lowest level of the

graphics services, providing low-level graphics abstractions and

color palette support.

OS Services layer in the Mobile OS:Connectivity Services Block

Connectivity Services Block

• This block provides the device-side

support for connectivity services, for

example backup and restore, file

transfer and browsing and

application installation.

Connectivity Services Block

• The basic supported services are:

1. backup and restore of a drive on the device to a desktop host

2. file management (e.g. copying files to and from the device, renaming

and deleting files and directories on the device, and formatting

device drives)

3. installation of software from the desktop host.

Component Collections

1. Service Providers Collection :These components provide named

services which run on the device side to provide service interfaces

to remote (host-side) clients.

Remote File Server, Software Install Server, Secure Backup Socket

Server, Secure Backup Engine, PLP Variant

2. Service Framework Collection: This service, based on configuration

files and port registration, enables device-side services to register a

port number for use by PC-side clients, which can query for and

start device-side services. The configuration files have an XML-based

format.

Component Collections

• The Bearer Abstraction Layer component is a framework for plug-

ins, which encapsulates actual bearers (for example m-Router),

providing a connection-management API to PC link-type

applications.

• The Server Socket component is a helper library that supports

creating (new, unnamed) port-number-based TCP/IP services for

use by the Service Broker for device–host communications, for

example with a PC. It communicates service port numbers and

manages messages and commands.

• m-Router is a licensed, PPP-like data-communications protocol and

framework, which provides a TCP/IP-based connection between two

devices.

End of Chapter