Practical Session 2: Processes and Scheduling

Operating Systems

Transcript of os192/wiki.files/Practical Session 2 - Scheduling.pdf

Page 1:

Operating Systems

Practical Session 2,

Processes and Scheduling


Page 2:

Five-state Process Model


[Figure: the five-state process model]
States: New, Ready, Running, Blocked, Exit.
Transitions: Admit (New → Ready), Dispatch (Ready → Running), Time-out (Running → Ready), Event Wait (Running → Blocked), Event Occurs (Blocked → Ready), Release (Running → Exit).

Page 3:

Two types of scheduling

❑ Preemptive scheduling: a running task may be suspended and rescheduled to run at a later time (for example, it may be preempted by the scheduler upon the arrival of a "more important" task).

❑ Non-preemptive (cooperative) scheduling: task switching can only be performed through explicitly defined system services (for example: task termination, an explicit call to yield(), or an I/O operation that changes the process state to blocked).

Page 4:

Scheduling algorithms

1. FCFS (First-Come, First-Served)
• Non-preemptive
• Suffers from the convoy effect (short processes stuck behind a long one)

2. SJF (Shortest Job First)
• Provably optimal with respect to the average turnaround time
• There is no way of knowing the length of the next CPU burst
• The next burst can be approximated by the exponential average T_{n+1} = α·t_n + (1 - α)·T_n, where t_n is the measured length of the last burst and T_n is the previous estimate
• Preemptive (Shortest Remaining Time, SRT) or non-preemptive

3. Round Robin
• When using large time slices it imitates FCFS
• When using time slices close to the context-switch time, more CPU time is wasted on switches

4. Guaranteed scheduling
• Constantly calculates the ratio between how much CPU time the process has had since its creation and how much CPU time it is entitled to
• CPU time entitled to = (time since process creation) / (total number of processes)
• Guarantees 1/n of the CPU time per process / user


Page 5:

Priority Scheduling algorithms

5. Priority scheduling (naïve)
• A generalization of SJF (SJF is priority scheduling where the priority is the predicted length of the next CPU burst)

6. Multi-Level Queue scheduling
• Partition the ready queue
• Each partition employs its own scheduling scheme
• A process from a lower-priority group may run only if there is no higher-priority process


Page 6:

Priority Scheduling algorithms (cont.)

7. Dynamic Multi-Level scheduling
i. Promote (starving) low-priority processes
   o Can be implemented by counting the time since the process last executed
ii. Demote processes that have been running for a long time
   o Priorities may eventually become balanced


Page 7:

Two Level scheduling

First level: Choose process to run

Second level: Choose the process to swap out

[Figure: two-level scheduling. A CPU scheduler chooses among the processes currently in memory, while a memory scheduler chooses which processes to swap between memory and disk.]

Page 8:

The Completely Fair Scheduler

● CFS models an "ideal, precise multitasking CPU"
● It can be thought of as a combination of guaranteed scheduling and dynamic priority scheduling
● It measures how much runtime each task has had and tries to ensure that everyone gets their fair share of time
● The process's runtime is held in a vruntime variable
● A lower vruntime indicates that the process has had less time to compute, and therefore has more need of the processor

Page 9:

CFS Priorities

• CFS doesn't use priorities directly; instead, it uses them as a decay factor for the time a process is permitted to execute
• Lower-priority processes have higher decay factors, while higher-priority processes have lower decay factors
● The decay factor is inverse to the priority
  o A higher-priority process accumulates vruntime more slowly
  o Likewise, a lower-priority process has its vruntime increase more quickly

Page 10:

Quality criteria measures

1. Throughput – The number of completed processes per time unit.

2. Turnaround time – The time interval between the process submission and its completion.

3. Waiting time – The sum of all time intervals in which the process was in the ready queue.

4. Response time – The time taken between submitting a command and the generation of first output.

5. CPU utilization – Percentage of time in which the CPU is not idle.


Page 11:

XV6


Page 12:

Five-state Process Model

[Five-state process model diagram, repeated]

proc.h

enum procstate { UNUSED, EMBRYO, SLEEPING, RUNNABLE, RUNNING, ZOMBIE };

// Per-process state
struct proc {
  uint sz;                     // Size of process memory (bytes)
  pde_t* pgdir;                // Page table
  char *kstack;                // Bottom of kernel stack for this process
  enum procstate state;        // Process state
  int pid;                     // Process ID
  struct proc *parent;         // Parent process
  struct trapframe *tf;        // Trap frame for current syscall
  struct context *context;     // swtch() here to run process
  void *chan;                  // If non-zero, sleeping on chan
  int killed;                  // If non-zero, have been killed
  struct file *ofile[NOFILE];  // Open files
  struct inode *cwd;           // Current directory
  char name[16];               // Process name (debugging)
};

Page 13:

Creation of a Process (fork)

[Five-state process model diagram, repeated]

proc.c

int
fork(void)
{
  int i, pid;
  struct proc *np;
  struct proc *curproc = myproc();

  // Allocate process.
  if((np = allocproc()) == 0){
    return -1;
  }

  // Copy process state from p.
  if((np->pgdir = copyuvm(curproc->pgdir, curproc->sz)) == 0){
    kfree(np->kstack);
    np->kstack = 0;
    np->state = UNUSED;
    return -1;
  }
  np->sz = curproc->sz;
  np->parent = curproc;
  *np->tf = *curproc->tf;

  // Clear %eax so that fork returns 0 in the child.
  np->tf->eax = 0;

  for(i = 0; i < NOFILE; i++)
    if(curproc->ofile[i])
      np->ofile[i] = filedup(curproc->ofile[i]);
  np->cwd = idup(curproc->cwd);

  safestrcpy(np->name, curproc->name, sizeof(curproc->name));

  pid = np->pid;

  acquire(&ptable.lock);
  np->state = RUNNABLE;
  release(&ptable.lock);

  return pid;
}

Page 14:

Transition to a Running state

[Five-state process model diagram, repeated]

proc.c

void
scheduler(void)
{
  struct proc *p;
  struct cpu *c = mycpu();
  c->proc = 0;

  for(;;){
    // Enable interrupts on this processor.
    sti();

    // Loop over process table looking for process to run.
    acquire(&ptable.lock);
    for(p = ptable.proc; p < &ptable.proc[NPROC]; p++){
      if(p->state != RUNNABLE)
        continue;

      // Switch to chosen process.  It is the process's job
      // to release ptable.lock and then reacquire it
      // before jumping back to us.
      c->proc = p;
      switchuvm(p);
      p->state = RUNNING;

      swtch(&(c->scheduler), p->context);
      switchkvm();

      // Process is done running for now.
      // It should have changed its p->state before coming back.
      c->proc = 0;
    }
    release(&ptable.lock);
  }
}

Page 15:

Transition to a Running state

[Five-state process model diagram, repeated]

swtch.S

# Context switch
#
#   void swtch(struct context **old, struct context *new);
#
# Save current register context in old
# and then load register context from new.

.globl swtch
swtch:
  movl 4(%esp), %eax     # eax gets old
  movl 8(%esp), %edx     # edx gets new

  # Save old callee-save registers
  pushl %ebp
  pushl %ebx
  pushl %esi
  pushl %edi

  # Switch stacks
  movl %esp, (%eax)      # esp points to context-like data; store it into *old
  movl %edx, %esp        # switch to the new kernel stack

  # Load new callee-save registers
  popl %edi
  popl %esi
  popl %ebx
  popl %ebp
  ret

The saved registers form a struct context (proc.h):

struct context {
  uint edi;
  uint esi;
  uint ebx;
  uint ebp;
  uint eip;
};

Page 16:

Transition to a Ready state

[Five-state process model diagram, repeated]

proc.c

void
sched(void)
{
  int intena;
  struct proc *p = myproc();

  if(!holding(&ptable.lock))
    panic("sched ptable.lock");
  if(mycpu()->ncli != 1)
    panic("sched locks");
  if(p->state == RUNNING)
    panic("sched running");
  if(readeflags()&FL_IF)
    panic("sched interruptible");
  intena = mycpu()->intena;
  swtch(&p->context, mycpu()->scheduler);
  mycpu()->intena = intena;
}

void
yield(void)
{
  acquire(&ptable.lock);  //DOC: yieldlock
  myproc()->state = RUNNABLE;
  sched();
  release(&ptable.lock);
}

Page 17:

Transition to a Blocked state

[Five-state process model diagram, repeated]

proc.c

void
sleep(void *chan, struct spinlock *lk)
{
  struct proc *p = myproc();

  if(p == 0)
    panic("sleep");
  if(lk == 0)
    panic("sleep without lk");

  if(lk != &ptable.lock){   //DOC: sleeplock0
    acquire(&ptable.lock);  //DOC: sleeplock1
    release(lk);
  }
  // Go to sleep.
  p->chan = chan;
  p->state = SLEEPING;

  sched();

  // Tidy up.
  p->chan = 0;

  // Reacquire original lock.
  if(lk != &ptable.lock){   //DOC: sleeplock2
    release(&ptable.lock);
    acquire(lk);
  }
}

Page 18:

Transition to a Ready state

[Five-state process model diagram, repeated]

proc.c

// Wake up all processes sleeping on chan.
// The ptable lock must be held.
static void
wakeup1(void *chan)
{
  struct proc *p;

  for(p = ptable.proc; p < &ptable.proc[NPROC]; p++)
    if(p->state == SLEEPING && p->chan == chan)
      p->state = RUNNABLE;
}

// Wake up all processes sleeping on chan.
void
wakeup(void *chan)
{
  acquire(&ptable.lock);
  wakeup1(chan);
  release(&ptable.lock);
}

Page 19:

Warm up

• Why bother with multiprogramming?

• Assume processes in a given system wait for I/O 60% of the time.

1. What is the approximate CPU utilization with one process running?

2. What is the approximate CPU utilization with three processes running?


Page 20:

Warm up

1. If a process blocks on I/O 60% of the time, then the CPU utilization is only about 40%.

2. At a given moment, the probability that all three processes are blocked on I/O is (0.6)^3. That means the CPU utilization is 1 - (0.6)^3 = 0.784, or roughly 78%.


Page 21:

Preemptive dynamic priority scheduling (taken from Silberschatz, 5-9)

Consider the following preemptive priority scheduling algorithm with dynamically changing priorities: when a process is waiting for the CPU (in the ready queue, but not running), its priority changes at rate α; when it is running, its priority changes at rate β. All processes are given priority 0 when they enter the ready queue. The parameters α and β can be set; higher-priority processes take higher priority values.

1. What algorithm results from β > α > 0?
2. What algorithm results from α < β < 0?
3. Is there a starvation problem in case 1? In case 2? Explain.

Page 22:

Preemptive dynamic priority scheduling

1. β > α > 0. To get a better feeling for the problem, consider an example: P1, P2, P3 arrive one after the other, each lasting 3 time units, with α = 1 and β = 2 (each process runs during the slot indicated below):

Time | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
P1   | 0 | 2 | 4 |   |   |   |   |   |
P2   |   | 0 | 1 | 2 | 4 | 6 |   |   |
P3   |   |   | 0 | 1 | 2 | 3 | 4 | 6 | 8

(P1 runs during times 1-3, P2 during 4-6, P3 during 7-9.)

The resulting schedule is FCFS. Slightly more formally: if a process is running, it must have the highest priority value. While it runs, its priority increases at a rate greater than that of any waiting process, so it continues to run until it completes (or waits on I/O, for example). All processes in the ready queue increase their priority at the same rate, hence the one that arrived earliest has the highest priority once the CPU becomes available.

Page 23:

Preemptive dynamic priority scheduling

2. α < β < 0. We use (almost) the same example as before, but this time α = -2 and β = -1:

Time | 1 |  2 |  3 |  4 |  5 |  6 |  7  |  8  |  9
P1   | 0 | -1 | -3 | -5 | -7 | -9 | -11 | -13 | -14
P2   |   |  0 | -1 | -3 | -5 | -7 |  -8 |     |
P3   |   |    |  0 | -1 | -2 |    |     |     |

(P1 runs at time 1, P2 at time 2, P3 during times 3-5, then P2 resumes, and P1 runs last.)

The resulting schedule is LIFO. More formally: if a process is running, it must have the highest priority value. While it runs, its priority decreases at a lower rate than that of any waiting process, so it continues to run until it completes (or waits on I/O, for example), unless a new process with priority 0 arrives. As before, all processes in the ready queue decrease their priority at the same rate, hence the one that arrived latest has the highest priority once the CPU becomes available.

Page 24:

Preemptive dynamic priority scheduling

3. In the first case it is easy to see that there is no starvation problem. When the k-th process is introduced, it waits at most (k-1)·max{time_i} time units. This number may be large, but it is still finite.

This is not true for the second case. Consider the following scenario: P1 is introduced and receives CPU time. While it is still running, a second process, P2, is initiated. According to the scheduling algorithm in the second case, P2 receives the CPU and P1 has to wait. As long as new processes keep arriving before P1 gets a chance to complete its run, P1 will never complete its task.