Chapter 6, Process Synchronization


Page 1: Chapter 6, Process Synchronization

1

Chapter 6, Process Synchronization

Page 2: Chapter 6, Process Synchronization

2

6.1 Background

• Cooperating processes can affect each other
• This may result from message passing
• It may result from shared memory space
• The general case involves concurrent access to shared resources

Page 3: Chapter 6, Process Synchronization

3

• This section illustrates how uncontrolled access to a shared resource can result in inconsistent state

• In other words, it shows what the concurrency control or synchronization problem is

Page 4: Chapter 6, Process Synchronization

4

• The following overheads show:
– How producer and consumer threads may both change the value of a variable, count
– How the single increment or decrement of count is not atomic in machine code
– How the interleaving of machine instructions can give an incorrect result

Page 5: Chapter 6, Process Synchronization

5

High level Producer Code

while(count == BUFFER_SIZE)
    ; // no-op: wait while the buffer is full
buffer[in] = item;
in = (in + 1) % BUFFER_SIZE;
++count;

Page 6: Chapter 6, Process Synchronization

6

High Level Consumer Code

while(count == 0)
    ; // no-op: wait while the buffer is empty
item = buffer[out];
out = (out + 1) % BUFFER_SIZE;
--count;

Page 7: Chapter 6, Process Synchronization

7

Machine Code for Incrementing count

register1 = count;
register1 = register1 + 1;
count = register1;

Page 8: Chapter 6, Process Synchronization

8

Machine Code for Decrementing count

register2 = count;
register2 = register2 - 1;
count = register2;

Page 9: Chapter 6, Process Synchronization

9

• The following overhead shows an interleaving of machine instructions which leads to a lost increment

Page 10: Chapter 6, Process Synchronization

10

• Let the initial value of count be 5
• S0: Producer executes register1 = count (register1 = 5)
• S1: Producer executes register1 = register1 + 1 (register1 = 6)
• Context switch
• S2: Consumer executes register2 = count (register2 = 5)
• S3: Consumer executes register2 = register2 - 1 (register2 = 4)
• Context switch
• S4: Producer executes count = register1 (count = 6)
• Context switch
• S5: Consumer executes count = register2 (final value of count = 4)

Page 11: Chapter 6, Process Synchronization

11

• The point is that you started with a value of 5
• Then two processes ran concurrently
• One attempted to increment the count
• The other attempted to decrement the count
• 5 + 1 - 1 should equal 5
• However, due to synchronization problems, the final value of count was 4, not 5
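To make the lost update concrete, here is a minimal, runnable Java sketch (my own, not from the book or the slides; the class name and iteration count are arbitrary). Two threads repeatedly increment and decrement an unsynchronized counter; because ++ and -- compile to the multi-step register sequences shown above, the final value is usually not 0.

public class RaceDemo
{
    static int count = 0;

    public static void main(String[] args) throws InterruptedException
    {
        Thread producer = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++)
                ++count; // not atomic: load, add, store
        });
        Thread consumer = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++)
                --count; // not atomic: load, subtract, store
        });
        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        // Equal numbers of increments and decrements should net to 0,
        // but interleaved load/modify/store sequences lose updates.
        System.out.println("final count = " + count);
    }
}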

Page 12: Chapter 6, Process Synchronization

12

• Term: race condition
• Definition: the general O/S term for any situation where the order of execution of various actions affects the outcome

• For our purposes it refers specifically to the case where the outcome is incorrect, i.e., where inconsistent state results

Page 13: Chapter 6, Process Synchronization

13

• The derivation of the term “race condition”: execution is a “race”
• In interleaved actions, whichever sequence finishes first determines the final outcome
• Note in the concrete example that the one that finished first “lost”

Page 14: Chapter 6, Process Synchronization

14

• Process synchronization refers to the tools that can be used with cooperating processes to make sure that during concurrent execution they access shared resources in such a way that a consistent state results

• In other words, it’s a way of enforcing a desired interleaving of actions, or preventing an undesired interleaving of actions.

Page 15: Chapter 6, Process Synchronization

15

• Yet another way to think about this is that process synchronization reduces concurrency somewhat, because certain sequences that might otherwise happen are not allowed, and at least a partial sequential ordering of actions may be required

Page 16: Chapter 6, Process Synchronization

16

6.2 The Critical Section Problem

• Term: critical section
• Definition: a segment of code where resources common to a set of threads are being manipulated
• Note that the definition is given in terms of threads because it will be possible to concretely illustrate it using threaded Java code

Page 17: Chapter 6, Process Synchronization

17

• Alternative definition of a critical section: a segment of code where access is regulated
• Only one thread at a time is allowed to execute in the critical section
• This makes it possible to avoid conflicting actions which result in inconsistent state

Page 18: Chapter 6, Process Synchronization

18

• Critical section definition using processes:
• Let there be n processes, P0, …, Pn-1, that share access to common variables, data structures, or resources

• Any segments of code where they access shared resources are critical sections

• No two processes can be executing in their critical section at the same time

Page 19: Chapter 6, Process Synchronization

19

• The critical section problem is to design a protocol that allows processes to cooperate

• In other words, it allows them to run concurrently, but it prevents breaking their critical sections into parts and interleaving the execution of those parts

Page 20: Chapter 6, Process Synchronization

20

• Once again recall that this will ultimately be illustrated using threads

• In that situation, no two threads may be executing in the same critical section of code at the same time

• For the purposes of thinking about the problem, the general structure and terminology of a thread with a critical section can be diagrammed in this way:

Page 21: Chapter 6, Process Synchronization

21

while(true)
{
    entry section     // The synchronization entry protocol is implemented here.
    critical section  // This section is protected.
    exit section      // The synchronization exit protocol is implemented here.
    remainder section // This section is not protected.
}

Page 22: Chapter 6, Process Synchronization

22

• Note the terminology:
– Entry section
– Critical section
– Exit section
– Remainder section

• These terms for referring to the parts of a concurrent process will be used in the discussions which follow

Page 23: Chapter 6, Process Synchronization

23

• A correct solution to the critical section problem has to meet these three conditions:
– Mutual exclusion
– Progress
– Bounded waiting

• In other words, an implementation of a synchronization protocol has to have these three characteristics in order to be correct.

Page 24: Chapter 6, Process Synchronization

24

Mutual exclusion

• Definition of mutual exclusion: if process Pi is executing in its critical section, no other process can be executing in its critical section
• Mutual exclusion is the heart of concurrency control
• However, concurrency control is not correct if the protocol “locks up” and the program can’t produce results

• That’s what the additional requirements, progress and bounded waiting, are about.

Page 25: Chapter 6, Process Synchronization

25

Progress

• Definition of progress: if no process is in its critical section and some processes wish to enter, only those not executing in their remainder sections can participate in the decision

• It should not be surprising if you find this statement of progress somewhat mystifying.

• The idea requires a little explanation and may not really be clear until an example is shown.

Page 26: Chapter 6, Process Synchronization

26

Progress explained

• For the sake of discussion, let all processes be structured as infinite loops

• If mutual exclusion has been implemented, a process may be at the top of the loop, waiting for the entry section to allow it into the critical section

• The process may be involved in the entry or exit protocols

• Barring these possibilities, the process can either be in its critical section or in its remainder section

Page 27: Chapter 6, Process Synchronization

27

• The first three possibilities, waiting to enter, entering, or exiting, are borderline cases.

• Progress is most easily understood by focusing on the question of processes either in the critical section or in the remainder section

• The premise of the progress condition is that no process is in its critical section

Page 28: Chapter 6, Process Synchronization

28

• Some processes may be in their remainder sections

• Others may be waiting to enter the critical section

• But the important point is that the critical section is available

• The question is how to decide which process to allow in, assuming some processes do want to enter the critical section

Page 29: Chapter 6, Process Synchronization

29

• Progress states that a process that is happily running in its remainder section has no part in the decision of which process to allow into the critical section.

• A process in the remainder section can’t stop another process from entering the critical section.

• A process in the remainder section also cannot delay the decision.

Page 30: Chapter 6, Process Synchronization

30

• The decision on which enters can only take into account those processes that are currently waiting to get in.

• This sounds simple enough, but it’s not really clear what practical effect it has on the entry protocol.

• An example will be given later that violates the progress condition.

• Hopefully this will make it clearer what the progress condition really means.

Page 31: Chapter 6, Process Synchronization

31

Bounded waiting

• Definition of bounded waiting: there exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a given process has made a request to enter its critical section and before that request is granted

Page 32: Chapter 6, Process Synchronization

32

Bounded waiting explained

• Whatever algorithm or protocol is implemented for allowing processes into the critical section, it cannot allow starvation

• Granting access to a critical section is reminiscent of scheduling.

• Eventually, everybody has to get a chance

Page 33: Chapter 6, Process Synchronization

33

• For the purposes of this discussion, it is assumed that each process is executing at non-zero speed, although they may differ in their speeds

• Bounded waiting is expressed in terms of “a number of times”.

• No concrete time limit can be given, but the result is that allowing a thread into its critical section can’t be postponed indefinitely

Page 34: Chapter 6, Process Synchronization

34

More about the critical section problem

• The critical section problem is unavoidable in operating systems

• The underlying idea already came up in chapter 5 in the discussion of preemption and interrupt handling

• There are pieces of operating system code that manipulate shared structures like waiting and ready queues.

• No more than one process at a time can be executing such code because inconsistent O/S state could result

Page 35: Chapter 6, Process Synchronization

35

• You might try to avoid the critical section problem by disallowing cooperation among user processes (although this diminishes the usefulness of multi-programming)

• Such a solution would be very limiting for application code.

• It doesn’t work for system code
• System processes have to be able to cooperate in their access to shared structures

Page 36: Chapter 6, Process Synchronization

36

• A more complete list of shared resources that multiple system processes contend for would include scheduling queues, I/O queues, lists of memory allocation, lists of processes, etc.

• The bottom line is that code that manipulates these resources has to be in a critical section

• Stated briefly: There has to be mutual exclusion between different processes that access the resources (with the additional assurances of progress and bounded waiting)

Page 37: Chapter 6, Process Synchronization

37

The critical section problem in the O/S, elaborated

• You might try to get rid of the critical section problem by making the kernel monolithic (although this goes against the grain of layering/modular design)

• The motivation behind this would be the idea that there would only be one O/S process or thread, not many processes or threads.

Page 38: Chapter 6, Process Synchronization

38

• Even so, if the architecture is based on interrupts, whether the O/S is modular or not, one activation of the O/S can be interrupted and set aside, while another activation is started as a result of the interrupt.

• The idea behind “activations” can be illustrated with interrupt handling code specifically.

Page 39: Chapter 6, Process Synchronization

39

• One activation of the O/S may be doing one thing.

• When the interrupt arrives a second activation occurs, which will run a different part of the O/S—namely an interrupt handler

Page 40: Chapter 6, Process Synchronization

40

• The point is that any activation of the O/S, including an interrupt handler, has the potential to access and modify a shared resource like the ready queue

• It doesn’t matter whether the O/S is modular or monolithic

• Each activation would have access to shared resources

Page 41: Chapter 6, Process Synchronization

41

Do you own your possessions, or do your possessions own you?

• The situation can be framed in this way: you think of the O/S as “owning” and “managing” the processes, whether system or user processes

• It turns out that in a sense, the processes own the O/S.

Page 42: Chapter 6, Process Synchronization

42

• Even if the processes in question are user processes and don’t access O/S resources directly, user requests for service and the granting of those requests by the O/S cause changes to the O/S’s data structures

• The O/S may create the processes, but the processes can then be viewed as causing shared access to common O/S resources

Page 43: Chapter 6, Process Synchronization

43

• This is the micro-level view of the idea expressed at the beginning of the course, that the O/S is the quintessential service program

• Everything that it does can ultimately be traced to some application request

• Applications don’t own the system resources, but they are responsible for O/S behavior which requires critical section protection

Page 44: Chapter 6, Process Synchronization

44

A multiple-process O/S

• The previous discussion was meant to emphasize that even a monolithic O/S has concurrency issues

• As soon as multiple processes are allowed, whether an O/S is a microkernel or not, it is reasonable to implement some of the system functionality in different processes

Page 45: Chapter 6, Process Synchronization

45

• At that point, whether the O/S supports multi-programming or not, the O/S itself has concurrency issues with multiple processes of its own

• Once you take the step of allowing more than one concurrent process, whether user or system processes, the concurrency problem, i.e., the critical section problem, arises

Page 46: Chapter 6, Process Synchronization

46

What all this means to O/S code

• O/S code can’t allow race conditions to arise
• The O/S can’t be allowed to enter an inconsistent state
• O/S code has to be written so that access to shared resources is done in critical sections
• No two O/S processes can be in a critical section at the same time

Page 47: Chapter 6, Process Synchronization

47

• The critical section problem has to be solved in order to implement a correct O/S.

• If the critical section problem can be solved in O/S code, then the solution tools can also be used when considering user processes or application code which exhibit the characteristics of concurrency and access to shared resources

Page 48: Chapter 6, Process Synchronization

48

Dealing with critical sections in the O/S

• Keep in mind:
– A critical section is a sequence of instructions that has to be run atomically, without interleaving the executions of more than one process
– With pre-emptive scheduling, a new process can be scheduled before the currently running one finishes

Page 49: Chapter 6, Process Synchronization

49

• The question of critical sections in the O/S intersects with the topic of scheduling

• The question of scheduling leads to two possible approaches to dealing with concurrency in the O/S:
– A non-preemptive kernel
– A preemptive kernel

Page 50: Chapter 6, Process Synchronization

50

• In a non-preemptive kernel, any kernel mode process will run until:
– It exits
– It blocks due to making an I/O request
– It voluntarily yields the CPU

• Under this scenario, the process will run through any of its critical sections without interruption by another process

Page 51: Chapter 6, Process Synchronization

51

• A preemptive kernel is likely to be more desirable than a non-preemptive kernel
• A preemptive kernel:
– Will be more responsive for interactive time-sharing
– Will support (soft) real-time processing

Page 52: Chapter 6, Process Synchronization

52

• Writing a pre-emptive kernel means dealing with concurrency in the kernel code
– In general, this is not easy
– In some cases (like SMP) it is virtually impossible
– How do you control two threads running concurrently on two different CPUs?

Page 53: Chapter 6, Process Synchronization

53

• Windows XP and before were non-preemptive
• More recent versions of Windows are preemptive
• Traditional Unix was non-preemptive
• Newer versions of Unix and Linux are preemptive

Page 54: Chapter 6, Process Synchronization

54

6.3 Peterson’s Solution

• This section will be divided into these 4 categories:

– Background
– Scenario 1
– Scenario 2
– Peterson’s Solution

Page 55: Chapter 6, Process Synchronization

55

Background

• Peterson’s solution is an illustration of how to manage critical sections using high-level language pseudo-code

• My coverage of Peterson’s algorithm starts with two scenarios that are not given in the current edition of the textbook

Page 56: Chapter 6, Process Synchronization

56

• The two scenarios were given in previous editions of the book.

• I have kept these scenarios because I think they’re helpful.

• They give a concrete idea of what it means to violate progress and bounded waiting

• And Peterson’s solution can be viewed as a successful combination of the two incomplete techniques given in the two scenarios

Page 57: Chapter 6, Process Synchronization

57

• Before going into the scenarios, it will be helpful to recall what the critical section problem is, and how it can be solved, in general

• A successful solution to the critical section problem includes these three requirements:
– Mutual exclusion
– Progress
– Bounded waiting

Page 58: Chapter 6, Process Synchronization

58

• At this point, let it be assumed that the mutual exclusion requirement is clear

• Progress and bounded waiting can be made clearer by considering possible solutions to the critical section problem that may not satisfy these requirements

Page 59: Chapter 6, Process Synchronization

59

Scenario 1

• Suppose your synchronization protocol is based on the idea that processes “take turns”

• Taking turns is the basis for this approach to concurrency control
• The question is, how do you decide whose turn it is?
• Consider two processes, P0 and P1

• Let “whose turn” be represented by an integer variable turn which takes on the values 0 or 1

Page 60: Chapter 6, Process Synchronization

60

• Note that regardless of whose turn it currently is, you can always change turns by setting turn = 1 - turn
• If turn == 0, then 1 - turn = 1 gives the opposite
• If turn == 1, then 1 - turn = 0 gives the opposite
• On the other hand, in code protected by an if statement, if(turn == x), you can simply hard-code the change to turn = y, the opposite of x

Page 61: Chapter 6, Process Synchronization

61

• Let the sections of the problem be structured in this way:

while(true)
{
    entry: if it’s my turn
        critical section
        exit: change from my turn to the other
    remainder
}

Page 62: Chapter 6, Process Synchronization

62

• For the sake of discussion assume:
– P0 and P1 are user processes
– The system correctly (atomically) grants access to the shared variable turn
• The code blocks for the two processes could be spelled out in detail as given below

Page 63: Chapter 6, Process Synchronization

63

P0

while(true)
{
    entry: if(turn == 0)
    {
        critical section
        exit: turn = 1;
    }
    remainder
}

Page 64: Chapter 6, Process Synchronization

64

P1

while(true)
{
    entry: if(turn == 1)
    {
        critical section
        exit: turn = 0;
    }
    remainder
}
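For concreteness, here is a hedged Java rendering of scenario 1 (my own sketch; the class and method names are illustrative). It differs from the slide code in one respect: the entry section busy waits until it is the caller’s turn instead of falling through to the remainder. The progress violation is the same: after exit(), only the other thread can enter next.

public class TakeTurns
{
    // volatile supplies the atomic access to turn that the
    // slides assume the system provides
    private volatile int turn = 0;

    // entry section for thread id (0 or 1): wait for my turn
    public void enter(int id)
    {
        while (turn != id)
            ; // busy wait
    }

    // exit section: rigidly hand the turn to the other thread
    public void exit(int id)
    {
        turn = 1 - id;
    }
}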

Page 65: Chapter 6, Process Synchronization

65

• Suppose that turn is initialized arbitrarily to either 1 or 0

• Assume that execution of the two processes goes from there

• This would-be solution violates progress

Page 66: Chapter 6, Process Synchronization

66

• Consider this scenario:
• turn == 0
• P0 runs its critical section
• It then goes to its exit section and sets turn to 1
• P1 is in its remainder section and doesn’t want to enter its critical section (yet)

Page 67: Chapter 6, Process Synchronization

67

• P0 completes its remainder section and wants to enter its critical section again

• It can’t, because it locked itself out when it changed turn to 1

• Due to the structure of the code, only P1 can enter the critical section now

• P0 can only get in after P1 has entered and exited the critical section

Page 68: Chapter 6, Process Synchronization

68

• This scenario meets the requirements for a violation of the progress condition for a correct implementation of synchronization

• The scheduling of P0, a process that is ready to enter its critical section, is dependent on a process, P1, which is running in its remainder section—It is not running in its critical section

Page 69: Chapter 6, Process Synchronization

69

• Here is a physical analogy
• There is a single train track between two endpoints
• Access to the track is managed with a token
• If the token is at point A, a train can enter the track from that direction, taking the token with it
• If the token is at point B, a train can enter the track from that direction, taking the token with it

Page 70: Chapter 6, Process Synchronization

70

[Diagram: a single train track between endpoints A and B, with a token that a train must hold in order to enter the track]

Page 71: Chapter 6, Process Synchronization

71

• This protocol works to prevent train wrecks
• The need to hold the token forces strict alternation between trains on the track
• However, no two trains in a row can originate from the same point, even if no train wants to go in the other direction
• This decreases the potential use of the track

Page 72: Chapter 6, Process Synchronization

72

Scenario 2

• Suppose that the same basic assumptions apply as for scenario 1

• There are two user processes with entry, critical, exit, and remainder sections

• You want to correctly synchronize them
• You want to avoid the progress violation that occurred under scenario 1

Page 73: Chapter 6, Process Synchronization

73

• The scenario 1 solution involved a turn variable that was rigidly alternated between processes P0 and P1

• Under scenario 2, there will be two variables, one for each process

• The variables are used for a process to assert its desire to enter its critical section

• Then each process can defer to the other if the other wants to go, implementing a “polite” rather than “rigid” determination of whose turn it is

Page 74: Chapter 6, Process Synchronization

74

• Specifically, for the sake of discussion, assume:
– P0 and P1 are user processes
– There are two shared flag variables, flag0 and flag1, which are boolean
– The system correctly (atomically) grants access to the two shared variables

Page 75: Chapter 6, Process Synchronization

75

• Let the variables represent the fact that a process wants to take a turn in this way:
– flag0 == true means P0 wants to enter its critical section
– flag1 == true means P1 wants to enter its critical section

Page 76: Chapter 6, Process Synchronization

76

• Let the sections of the problem be structured in this way:

while(true)
{
    entry: set my flag to true
        while the other flag is true, wait
    critical section
    exit: change my flag to false
    remainder
}

Page 77: Chapter 6, Process Synchronization

77

• The given solution overcomes the problem of the previous scenario

• The processes don’t have to rigidly take turns to enter their critical sections

• If one process is in its remainder section, its flag will be set to false

• That means the other process will be free to enter its critical section

• A process not in its critical section can’t prevent another process from entering its critical section

Page 78: Chapter 6, Process Synchronization

78

• However, this new solution is still not fully correct.

• Recall that outside of the critical section, scheduling is determined independent of the processes

• That is to say that under concurrent execution, outside of a protected critical section, any execution order may occur

Page 79: Chapter 6, Process Synchronization

79

• So, consider the possible sequence of execution shown in the diagram on the following overhead

• The horizontal arrow represents the fact that P0 is context switched out and P1 is context switched in due to some random scheduling decision

• Note that this is possible because at the time of the arrow, neither process is in its critical section

Page 80: Chapter 6, Process Synchronization

80

[Diagram: P0 sets flag0 = true and is then context switched out; P1 sets flag1 = true, so each process subsequently finds the other’s flag true]

Page 81: Chapter 6, Process Synchronization

81

• Under this scheduling order, both processes have set their flags to true
• The result will be deadlock
• Both processes will wait eternally for the other to change its flag
• This protocol is coded in such a way that the decision to schedule is postponed indefinitely
• This seems to violate the 2nd part of the definition of progress

Page 82: Chapter 6, Process Synchronization

82

• The definition of the bounded waiting requirement in a correct implementation of synchronization states that there exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a given process has made a request to enter its critical section and before that request is granted

Page 83: Chapter 6, Process Synchronization

83

• Strictly speaking, scenario 2 doesn’t violate bounded waiting, because neither process can enter its critical section under deadlock.

• In effect, scenario 2 is another example of lack of progress.

• However, it is still interesting because it hints at the problems of unbounded waiting

Page 84: Chapter 6, Process Synchronization

84

• The picture on the following overhead illustrates what is wrong with scenario 2

• Each of the processes is too polite

Page 85: Chapter 6, Process Synchronization

85

[Cartoon: Alphonse and Gaston, each insisting that the other go first, so that neither ever goes]

Page 86: Chapter 6, Process Synchronization

86

Peterson’s Solution

• Keep in mind that in the previous two scenarios the assumption was made that the system was providing correct, atomic access to the turn and flag variables

• In essence, this assumption presumes a prior solution to the concurrency control problem

• In order to synchronize access to critical sections in code, we’re assuming that there is synchronized access to variables in the code

Page 87: Chapter 6, Process Synchronization

87

• For the sake of explaining the ideas involved, continue to assume that a system supports correct, atomic access to the needed variables

• Peterson’s solution is an illustration of a “correct” solution to synchronization in high level language software that would be dependent on atomicity or synchronization in the machine language implementation to work

Page 88: Chapter 6, Process Synchronization

88

• Peterson’s solution is a combination of the previous two scenarios

• It uses both a turn variable and flag variables to arrive at a solution to the critical section problem that makes progress

• Let the solution have these variables:
– int turn
– boolean flag[2]

Page 89: Chapter 6, Process Synchronization

89

• The meaning of the variables is as before
• turn indicates which of the two processes is allowed to enter the critical section
• Instead of having two separate flag variables, flag[] is given as a boolean array, dimensioned to size 2
• flag[] records whether either of the two processes wants to enter its critical section

Page 90: Chapter 6, Process Synchronization

90

• Let the sections of the problem be structured in the way shown on the following overhead

• The code is given from the point of view of Pi

• Pj would contain analogous but complementary code

Page 91: Chapter 6, Process Synchronization

91

// Pi

while(true)
{
    flag[i] = true; // Assert that this process, i, wants to enter

    turn = j;       // Politely defer the turn to the other process, j

    /* Note that the loop below does not enclose the critical section.
       It is a busy waiting loop for process i. As long as it’s j’s turn
       and j wants in the critical section, then i has to wait. The critical
       point is that if it’s j’s turn, but j doesn’t want in, i doesn’t have
       to wait. */

    while(flag[j] && turn == j)
        ;

    critical section;

    flag[i] = false;

    remainder section;
}

Page 92: Chapter 6, Process Synchronization

92

• Analyzing from the point of view of process Pi alone

• Let Pi reach the point where it wants to enter its critical section

• The following command signals its desire: flag[i] = true;
• It then politely does this in case Pj also wants to get in: turn = j;

Page 93: Chapter 6, Process Synchronization

93

• Pi only waits if turn == j and flag[j] is true

• If Pj doesn’t want in, then Pi is free to enter even though it set turn to j

• The argument is the converse if Pj wants in and Pi doesn’t want in

• In either of these two cases, there’s no conflict and the process gets to enter

Page 94: Chapter 6, Process Synchronization

94

• Consider the conflict case
• Both Pi and Pj want in, so they both set their flags
• They both try to defer to each other by setting turn to the other
• The system enforces atomic, synchronized access to the turn variable
• Whichever one sets turn last defers to the other and wins the politeness contest

Page 95: Chapter 6, Process Synchronization

95

• That means the other one goes first
• As soon as it leaves the critical section, it resets its flag
• That releases the other process from the busy waiting loop
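The pseudo-code can be rendered in Java as follows (a teaching sketch of my own, not the book’s code). AtomicIntegerArray and volatile are assumed to supply the atomic, ordered access to flag[] and turn that the slides take for granted; this is for illustration, not a recommended locking tool.

import java.util.concurrent.atomic.AtomicIntegerArray;

public class Peterson
{
    // flag.get(i) == 1 means process i wants to enter
    private final AtomicIntegerArray flag = new AtomicIntegerArray(2);
    private volatile int turn = 0;

    // entry section for thread i (0 or 1)
    public void acquire(int i)
    {
        int j = 1 - i;
        flag.set(i, 1);  // assert that this process, i, wants to enter
        turn = j;        // politely defer the turn to the other process, j
        while (flag.get(j) == 1 && turn == j)
            ;            // busy wait only while j wants in and it’s j’s turn
    }

    // exit section: releases the other thread from its busy waiting loop
    public void release(int i)
    {
        flag.set(i, 0);
    }
}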

Page 96: Chapter 6, Process Synchronization

96

• Recall that outside of the critical section, scheduling is determined independent of the processes

• That is to say that under concurrent execution, outside of a protected critical section, any execution order may occur

• A possible sequence of execution is shown in the diagram on the overhead following the next one

Page 97: Chapter 6, Process Synchronization

97

• In the diagram the horizontal arrow represents the fact that P0 is context switched out and P1 is context switched in

• In this diagram, P1 will win the politeness contest, meaning P0 will be allowed into the critical section when it is scheduled next

• It should be possible to figure out the order of execution of the blocks of code of the two processes following this context switch

Page 98: Chapter 6, Process Synchronization

98

[Diagram: both processes set their flags across a context switch; P1 sets turn = 0 last, so P1 busy waits and P0 enters the critical section first when it is next scheduled]

Page 99: Chapter 6, Process Synchronization

99

• The book gives the explanation of Peterson’s solution in the form of a proof
• The demonstration given here is not a proof
• A non-proof understanding is sufficient
• You should be able to answer questions about and sketch out Peterson’s solution

Page 100: Chapter 6, Process Synchronization

100

• You should also be able to answer questions about and sketch out the partial solutions of scenarios 1 and 2, which Peterson’s solution is based on

• Recall that the most significant points of those examples, about which questions might be asked, are how they fail to meet the progress requirement

Page 101: Chapter 6, Process Synchronization

101

6.4 Synchronization Hardware

• The fundamental idea underlying the critical section problem is locking

• The entry section can be described as the location where a lock on the critical section is obtained.

• The exit section can be described as the location where the lock on the critical section is released.

• The pseudo-code on the following overhead shows a critical section with this terminology.

Page 102: Chapter 6, Process Synchronization

102

while(true)
{
    acquire lock
    critical section
    release lock
    remainder section
}

Page 103: Chapter 6, Process Synchronization

103

• Locking means exclusive access—in other words, the ability to lock supports mutual exclusion

• In the previous explanations of synchronization, it was assumed that at the very least, variables could in essence be locked, and that made it possible to lock critical sections

• The question is, where does the ability to lock variables come from?

Page 104: Chapter 6, Process Synchronization

104

• Ultimately, locking, whether locking of a variable, a block of code, or a physical resource, can only be accomplished through hardware support

• There is no such thing as a pure software solution to locking

Page 105: Chapter 6, Process Synchronization

105

• At the bottom-most hardware level, one solution to locking/synchronization/critical section protection would be the following:

• Disallow interrupts and disable preemption for the duration of a critical section

• This was mentioned earlier and will be covered again briefly

• Keep in mind, though, that eventually a more flexible construct, accessible to system code writers, needs to be made available

Page 106: Chapter 6, Process Synchronization

106

• Disallowing interrupts is a very deep topic
• In reality, this means queuing interrupts
• Interrupts may not be handled immediately, but they can’t be discarded
• Interrupt queuing may be supported in hardware
• Interrupt queuing may impinge on the very “lowest” level of O/S software

Page 107: Chapter 6, Process Synchronization

107

• Disabling preemption
• Disallowing interrupts alone wouldn’t be enough to protect critical sections
• Preemption would also have to be disabled
• The real question is how broadly it should be disabled
• A gross solution is to implement a non-preemptive scheduling algorithm

Page 108: Chapter 6, Process Synchronization

108

• Whether or not they supported synchronization in user processes, some older, simpler operating systems solved the synchronization problem in system code by disallowing preemption of system processes

• The next step up is identifying just the critical sections of user and system processes and only disabling preemption during the critical sections themselves.

• This is the real goal of a general, flexible implementation of synchronization.

Page 109: Chapter 6, Process Synchronization

109

• The overall problem with drastic solutions like uninterruptibility and non-preemption is loss of concurrency.

• Loss of concurrency reduces the effectiveness of time-sharing systems

• Loss of concurrency would have an especially large impact on a real time system

Page 110: Chapter 6, Process Synchronization

110

• This is just one of the book’s side notes on multi-processing systems.

• Trying to make a given process uninterruptible in a multi-processor system is a mess.

• Each processor has to be sent a message saying that no code can be run which would conflict with a given kernel process

Page 111: Chapter 6, Process Synchronization

111

• Another potential architectural problem with the gross solutions is that uninterruptibility may affect the system clock
• The system clock may be driven by interrupts
• If interrupts can be disallowed/delayed/queued in order to support critical sections, the system clock may be slowed

Page 112: Chapter 6, Process Synchronization

112

A flexible, accessible locking construct

• Modern computer architectures have machine instructions which support locking

• These instructions have traditionally been known as test and set instructions

• The book refers to them as get and set instructions

• I suppose this terminology was developed to sound more modern and object-oriented

Page 113: Chapter 6, Process Synchronization

113

• The book discusses this as a hardware concept and also illustrates it with a Java-like class

• The class has a private instance variable and public methods

• I find it somewhat misleading since what we’re dealing with are not simple accessor and mutator methods

• I prefer the traditional, straightforward, hardware/machine language explanation with the test and set terminology, and that’s all that I’ll cover

Page 114: Chapter 6, Process Synchronization

114

Locking with test and set

• The fundamental problem of synchronization is mutual exclusion

• You need an instruction at the lowest level that either executes completely or doesn’t execute at all

• The execution of that one instruction can’t be interleaved with other actions

• Other threads/processes/code are excluded from running at the same time

• This kind of “uninterruptible” instruction is known as an atomic instruction

Page 115: Chapter 6, Process Synchronization

115

• In order to support locking, you need such an instruction which will both check the current value of a variable (for example), and depending on the value, set it to another value

• This has to be accomplished without the possibility that another process will have its execution interleaved with this one, affecting the outcome on that variable

Page 116: Chapter 6, Process Synchronization

116

• The name “test and set” indicates that this is an instruction with a composite action

• Even though the instruction is logically composite, it is implemented as an atomic instruction in the machine language

• Execution of a block of code containing it may be interrupted before the instruction is executed or after it is executed

• However, the instruction itself cannot be interrupted between the test and the set

Page 117: Chapter 6, Process Synchronization

117

• The test and set instruction can be diagrammed this way in high level pseudo-code

if(current value of variable x is …)
    set the value of x to …

• The variable x itself now serves as the lock

Page 118: Chapter 6, Process Synchronization

118

• In other words, whether or not a thread can enter a section depends on the value of x

• The entry into a critical section, the acquisition of the lock, takes the form of testing and setting x:

if(x is not set) // These two lines together
    set x;       // are the atomic test and set
critical section;
unset x;
remainder section;
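In Java, the same pattern can be sketched with AtomicBoolean, whose compareAndSet() method behaves as a test and set: it atomically checks the current value and, only if it matches, stores the new one. This spin lock is my own illustrative sketch, not the book’s class.

import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock
{
    // x is the lock variable; false means unlocked
    private final AtomicBoolean x = new AtomicBoolean(false);

    public void lock()
    {
        // atomic test and set: loop until we are the one
        // thread that flips x from false to true
        while (!x.compareAndSet(false, true))
            ; // busy wait
    }

    public void unlock()
    {
        x.set(false); // unset x
    }
}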

Page 119: Chapter 6, Process Synchronization

119

• This may be beating a dead horse, but consider what could happen if test and set were not atomic:
– P0 could test x and find it available
– P1 could test x and find it available
– P0 could set x
– P1 could redundantly set x

• Due to this interleaving of sub-instructions, both P0 and P1 would then proceed to enter the critical section, violating mutual exclusion

Page 120: Chapter 6, Process Synchronization

120

• The availability of this actual mutual exclusion at the machine language/hardware level is the basis for all correct/successful software solutions made available in APIs

• The book also mentions that an atomic swap instruction can be used to support mutual exclusion/locking.

• It is sufficient to understand the test and set concept and not worry about a swap instruction.

Page 121: Chapter 6, Process Synchronization

121

6.5 Semaphores

• An API may provide a semaphore construct as a synchronization tool for programmers

• A semaphore contains an integer variable and has two operations

• The first operation is acquire()
• Historically this has been known as P()
• P is short for “proberen”, which means “test” in Dutch

Page 122: Chapter 6, Process Synchronization

122

• The second operation is known as release()
• Historically this has been known as V()
• V is short for “verhogen”, which means “increment” in Dutch
• P() and V() had the advantage of brevity, but they were cryptic

Page 123: Chapter 6, Process Synchronization

123

• In this section I will adopt the modern terminology of acquire() and release() and that’s what you should be familiar with

• It’s kind of sad, but the acquire() and release() methods, in the book’s presentation, have a variable named value.

• It’s clumsy to talk about the value of value, but I will stick with the book’s terminology.

Page 124: Chapter 6, Process Synchronization

124

• On the following overheads high level language pseudo-code definitions are given for acquire() and release()

• The definitions are shown as multiple lines of code

• This is a key point: in order to be functional, they would have to be implemented atomically in the API

Page 125: Chapter 6, Process Synchronization

125

• In other words, what you’re seeing here, again, is a high-level software explanation of locking concepts, but these concepts would ultimately have to be supported by locking at the machine level

Page 126: Chapter 6, Process Synchronization

126

• In the definitions of acquire() and release() there is a variable named value.

• The acquire() and release() methods share access to value

• value is essentially the lock variable
• In other words, at the system level, mutual exclusion and synchronization have to be provided for on the variable value

Page 127: Chapter 6, Process Synchronization

127

• The pseudo-code for the acquire() method is given on the next overhead.

• In considering the code initially, assume that value has somehow been initialized to the value 1

Page 128: Chapter 6, Process Synchronization

128

The semaphore acquire() method

acquire()
{
    while(value <= 0)
        ; // no-op
    value--;
}

Page 129: Chapter 6, Process Synchronization

129

• Note that this implementation is based on the idea of a waiting loop.

• Acquisition can’t occur if value is less than or equal to zero.

• Successful acquisition involves decrementing the variable value, driving its value towards zero.

• The <= condition in the loop is how the method got its name, “test”, in Dutch

• The idea of waiting in a loop due to a variable value may be reminiscent of elements of Peterson’s solution

Page 130: Chapter 6, Process Synchronization

130

• The pseudo-code for the release() method is given on the next overhead.

• In considering the code, it is not necessary to assume that value has taken on any particular value

Page 131: Chapter 6, Process Synchronization

131

The semaphore release() method

release()
{
    value++;
}

Page 132: Chapter 6, Process Synchronization

132

• Note that releasing involves incrementing the variable value.

• This is how it got its name, “increment” in Dutch.

• Releasing, or incrementing value, is very straightforward.

• It is not conditional and it doesn’t involve any kind of waiting.

Page 133: Chapter 6, Process Synchronization

133

What semaphores are for

• Keep in mind what semaphores are for
• They are a construct that might be made available in an API to support synchronization in user-level code

• Internally, they themselves would have to be implemented in such a way that they relied on test and set instructions, for example, so that they really did synchronization

Page 134: Chapter 6, Process Synchronization

134

• In a way, you can understand semaphores as wrappers for test and set instructions

• In code that used them, entry to a critical section would be preceded by a call to acquire()

• The end of the critical section would be followed by a call to release()

• In a sense, the variable value in the semaphore is analogous to the variable x that was used in discussing test and set

Page 135: Chapter 6, Process Synchronization

135

• Locking of the critical section is accomplished by the acquisition of the semaphore/the value contained in it

• The semaphore is similar in concept to the car title analogy or the token in the train example of scenario 1

• Locking the variable becomes a surrogate for locking some other thing, like a critical section of code

• If you own the variable, you own the thing it is a surrogate for

Page 136: Chapter 6, Process Synchronization

136

• We will not cover how you would implement semaphores internally

• Eventually, the book will present its own (simplified) Semaphore class

• It turns out that there is also a Semaphore class in the Java API

Page 137: Chapter 6, Process Synchronization

137

• It turns out that in the long run we will not want to make great use of semaphores

• In many cases the inline Java synchronization syntax can be used without needing semaphores

• When more complex synchronization becomes necessary, a Java construct called a monitor will be used instead of a semaphore

• In the meantime, simply to explain the idea of synchronization further, semaphores will be used in the examples

Page 138: Chapter 6, Process Synchronization

138

Using semaphores to protect critical sections

• Keep in mind that in multi-threaded code, the critical section would be shared because all of the code is shared

• A single semaphore would be created to protect that single critical section

• Each thread would make calls on that common semaphore in order to enter and exit the critical section

Page 139: Chapter 6, Process Synchronization

139

• The book has some relatively complete examples
• They will be abstracted in these overheads
• You might have a driver program (containing main()) where a semaphore is constructed and various threads are created

• The code on the following overhead shows how a reference to the shared semaphore might be passed to each thread when the thread is constructed

Page 140: Chapter 6, Process Synchronization

140

Semaphore S = new Semaphore();
Thread thread1 = new Thread(new Worker(S)); // Worker is a placeholder name for the
Thread thread2 = new Thread(new Worker(S)); // Runnable class that stores the shared reference

Page 141: Chapter 6, Process Synchronization

141

• As long as the code that each thread runs is identical, I can see no reason why you might not declare a static semaphore in the class that implements Runnable (the thread class) and construct the common semaphore inline

• In any case, no matter how many instances of the thread class are created, with a shared reference to a semaphore, only one at a time will be able to get into the critical section

Page 142: Chapter 6, Process Synchronization

142

• Once there’s a common semaphore, then the run() method for the threads would take the form shown on the following overhead.

• The shared critical section would be protected by sandwiching it between calls to acquire and release the semaphore

Page 143: Chapter 6, Process Synchronization

143

run()
{
    …
    while(true)
    {
        semaphore.acquire();
        // critical section
        semaphore.release();
        // remainder section
    }
}
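For comparison, here is a runnable version of the same pattern using the java.util.concurrent.Semaphore class from the API (an illustrative sketch; the class name Worker and the loop bound are my own choices, and the semaphore is declared static and constructed inline, as suggested two overheads back). One permit means at most one thread in the critical section at a time.

import java.util.concurrent.Semaphore;

public class Worker implements Runnable
{
    // one permit: the critical section admits one thread at a time
    private static final Semaphore semaphore = new Semaphore(1);

    public void run()
    {
        for (int i = 0; i < 3; i++) // finite loop so the demo terminates
        {
            semaphore.acquireUninterruptibly(); // entry section
            try
            {
                // critical section
                System.out.println(Thread.currentThread().getName()
                                   + " is in the critical section");
            }
            finally
            {
                semaphore.release();            // exit section
            }
            // remainder section
        }
    }

    public static void main(String[] args)
    {
        new Thread(new Worker()).start();
        new Thread(new Worker()).start();
    }
}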

Page 144: Chapter 6, Process Synchronization

144

Using semaphores to enforce execution order

• Along with the general problem of protecting a critical section, semaphores and locking can be used to enforce a particular order of execution of code when >1 process is running

• Suppose that the code for processes P1 and P2 is not exactly the same

• P1 contains statement S1 and P2 contains statement S2, and it is necessary for S1 to be executed before S2

Page 145: Chapter 6, Process Synchronization

145

• The difference between P1 and P2 is critical to this example.

• The point is that we’ll now be talking about two processes running through two blocks of code that differ from each other

• The two processes do not share exactly the same code the way two simple threads would

Page 146: Chapter 6, Process Synchronization

146

• Recall that in the introduction to semaphores, the semaphore was shown as initialized to the value 1
• This is the “unlocked” value
• You could acquire the lock if value was > 0, and when you acquired it, you decremented value
• If the semaphore is initialized to 0, then it is initialized as locked

Page 147: Chapter 6, Process Synchronization

147

• Remember that even if the semaphore is locked, this doesn’t prevent all code from running

• Code that doesn’t try to acquire the lock can run freely regardless of the semaphore value

• If the code for P1 and P2 differs, then there could be parts of P2 that are protected, while the corresponding parts of P1 (which correspond to, but aren’t the same as, the code in P2) are not protected

Page 148: Chapter 6, Process Synchronization

148

• These considerations are important in the following example

• S1 and S2 are the lines of code that differ in P1 and P2 and that have to be executed in 1-2 order

• By inserting the semaphore acquire() and release() calls as shown in the code on the following overhead, this execution order is enforced.

Page 149: Chapter 6, Process Synchronization

149

/* At the point where the shared semaphore is constructed, initialize it to
   the locked value. S1 is not protected by an acquire() call, so it can run
   freely. After it has run and released the lock, then S2 will be able to
   acquire the lock and run. */

Process P1 code:
{
    S1
    semaphore.release();
}

Process P2 code:
{
    semaphore.acquire();
    S2
}
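Rendered as runnable Java with the API Semaphore class (an illustrative sketch of my own; S1 and S2 are just print statements standing in for the real work), a semaphore constructed with 0 permits starts out locked, so S2 cannot run until the thread containing S1 releases:

import java.util.concurrent.Semaphore;

public class OrderDemo
{
    public static void main(String[] args)
    {
        Semaphore semaphore = new Semaphore(0); // initialized to locked

        Thread p2 = new Thread(() -> {
            semaphore.acquireUninterruptibly(); // waits until P1 releases
            System.out.println("S2");
        });
        Thread p1 = new Thread(() -> {
            System.out.println("S1");           // S1 runs freely
            semaphore.release();                // now S2 may proceed
        });

        p2.start(); // start order doesn't matter: S1 always prints first
        p1.start();
    }
}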

Page 150: Chapter 6, Process Synchronization

150

• Two differing processes or threads are synchronized by means of a shared semaphore

• In this case, since they differ, they should probably be passed a reference to a shared semaphore, rather than expecting to be able to construct a static one inline

• The foregoing example is not especially cosmic, but it does introduce an idea that will lead to more complicated examples later:

• The semaphore can be asymmetrically acquired and released in the different processes

Page 151: Chapter 6, Process Synchronization

151

Binary vs. Counting Semaphores

• A binary semaphore is a simple lock which can take on two values:
– The value 1 = available (not locked)
– The value 0 = not available (locked)

• The semaphore concept can be extended to values greater than 1

• A counting semaphore is initialized to an integer value n, greater than 1

Page 152: Chapter 6, Process Synchronization

152

• The initial value of a counting semaphore tells how many different instances of a given, interchangeable kind of resource there are

• The semaphore is still decremented by 1 for every acquisition and incremented by 1 for every release

• If you think about it, the plan of action is pretty clear

Page 153: Chapter 6, Process Synchronization

153

• Every time a thread acquires one of the copies of the resource, the value variable is decremented, reducing the count of the number of copies available for acquisition

• Every time a thread releases one of the copies, the count goes up by one.

• In theory, such a semaphore could also be used to allow n threads in a critical section at a time, although that’s not a typical use in practice
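A short sketch of the counting case, using the API Semaphore class (the pool size of 3 and the method name useResource() are arbitrary illustrations): each acquire takes one of the interchangeable copies, and each release returns one.

import java.util.concurrent.Semaphore;

public class PoolDemo
{
    // three interchangeable copies of some resource
    private static final Semaphore pool = new Semaphore(3);

    public static void useResource()
    {
        pool.acquireUninterruptibly(); // take a copy; the count goes down by 1
        try
        {
            // use one of the three copies here
        }
        finally
        {
            pool.release();            // return the copy; the count goes up by 1
        }
    }
}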

Page 154: Chapter 6, Process Synchronization

154

Implementing Non-Busy Waiting in a Semaphore

• This was the implementation/definition of a semaphore given above

acquire()
{
    while(value <= 0)
        ; // no-op
    value--;
}

Page 155: Chapter 6, Process Synchronization

155

• This was referred to earlier as a busy waiting loop and can also be called a spin lock

• The process is still alive and it should eventually acquire the shared resource or enter the critical section

• This is true because a correct implementation of synchronization requires progress and bounded waiting

Page 156: Chapter 6, Process Synchronization

156

• As a live process, it will still be scheduled, but during those time slices when it’s scheduled and the resource isn’t available it will simply burn up its share of CPU time/cycles doing nothing but spinning in the loop

• From the standpoint of the CPU as a resource, this is a waste

Page 157: Chapter 6, Process Synchronization

157

• The solution to this problem is to have a process voluntarily block itself or allow itself to be blocked when it can’t get a needed resource

• This is reminiscent of I/O blocking
• The process should be put in a waiting list for the resource to become available
• When it’s in a waiting list, it won’t be scheduled

Page 158: Chapter 6, Process Synchronization

158

• A given system may support a block() and a wakeup() call in the API

• The pseudo-code on the following overhead shows how a semaphore definition might be structured using these calls

• This is hypothetical for the time being, but the idea will recur in a different form later

Page 159: Chapter 6, Process Synchronization

159

acquire()
{
    value--;
    if(value < 0)
    {
        block();
        // If the semaphore shows that the resource
        // is not available, block the process that
        // called acquire() and put it on a waiting
        // list. No implementation details below
        // the level of the call to block() will be
        // considered.
    }
}

Page 160: Chapter 6, Process Synchronization

160

• In the original semaphore definition with busy waiting, decrementing value occurred after the waiting loop, when acquisition occurred

• In this new definition the decrementing occurs right away, before the “if” that tests value

• First of all, this is OK since value is allowed to go negative in a counting semaphore

Page 161: Chapter 6, Process Synchronization

161

• When value is negative, it tells how many processes are waiting in the list for the resource

• Not only is it OK to decrement right away, it’s necessary.

• If the result after decrementing is positive, the process successfully acquires

• If not, the decrementation is immediately included as part of the count of the number of waiting processes

Page 162: Chapter 6, Process Synchronization

162

• More importantly, it’s necessary to do the decrementing right away.

• There is nothing else in the semaphore definition
• Releasing a blocked process from the waiting list happens elsewhere in the code, so decrementing has to be done when and where it can

• It can’t be saved until later

Page 163: Chapter 6, Process Synchronization

163

• The corresponding, new definition of the release() method is given on the following overhead

• It is in release() that waiting processes are taken off the waiting list

• As with the acquire() method, the details of how that is done are not shown

• All that’s shown is the call to wakeup(), which is the converse of the call to block() in the acquire() method

Page 164: Chapter 6, Process Synchronization

164

release()
{
    value++;
    if(value <= 0)
    {
        wakeup(P);
        // wakeup() means that a process
        // has to be removed from the
        // waiting list. Which process, P,
        // it is depends on the internal
        // implementation of the waiting
        // list and the wakeup() method.
    }
}
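The slides defer the internals of block() and wakeup(), but for concreteness here is one way the idea can be rendered with Java’s built-in monitor methods, wait() and notify() (my own hedged sketch; unlike the pseudo-code above, this variant keeps value non-negative rather than letting a negative value count the waiters).

public class BlockingSemaphore
{
    private int value;

    public BlockingSemaphore(int initial)
    {
        value = initial;
    }

    // synchronized makes the whole method atomic with
    // respect to the other methods of this object
    public synchronized void acquire() throws InterruptedException
    {
        while (value <= 0)
            wait();   // block(): join this object's waiting list
        value--;
    }

    public synchronized void release()
    {
        value++;
        notify();     // wakeup(): release one waiting thread, if any
    }
}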

Page 165: Chapter 6, Process Synchronization

165

Why waiting lists?

• In this latest iteration, locking by means of a semaphore no longer involves wasting CPU time on busy waiting.

• If such a semaphore can be implemented, and locking can be done in this way, then processes simply spend their idle time waiting in queues or waiting lists.

Page 166: Chapter 6, Process Synchronization

166

• How can a waiting list be implemented?
• Essentially, just like an I/O waiting list
• If this is a discussion of processes, then PCBs are entered into the list
• There is no specific requirement on the order in which processes are given the resource when it becomes available

• A FIFO queuing discipline ensures fairness and bounded waiting
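As a side note, the Semaphore class in the Java API can be constructed with a fairness flag; true requests FIFO granting of permits, which is one way of ensuring bounded waiting (shown here only as an illustration):

import java.util.concurrent.Semaphore;

public class FairSemaphoreNote
{
    // second constructor argument is the fairness flag:
    // true means permits are granted in FIFO order
    static final Semaphore s = new Semaphore(1, true);
}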

Page 167: Chapter 6, Process Synchronization

167

• Note once again that semaphores are an explanatory tool—but they once again beg the question of mutual exclusion

• In other words, it’s obvious that the multi-value semaphore definitions of acquire() and release() consist of multiple lines of high level language code

• They will only work correctly if the entire methods are atomic

Page 168: Chapter 6, Process Synchronization

168

• This can be accomplished if the underlying system supports a synchronization mechanism that would allow mutual exclusion to be enforced from the beginning to the end of the semaphore methods

• In other words, the semaphore implementations themselves have to be critical sections

Page 169: Chapter 6, Process Synchronization

169

• In a real implementation this might be enforced with the use of test and set type instructions

• At the system level, it again raises the question of inhibiting interrupts

• However, we are still not interested in the particular details of how that might be done

Page 170: Chapter 6, Process Synchronization

170

• What we are interested in is the concept that enclosing blocks of code in correctly implemented semaphore calls to acquire() and release() makes those blocks critical sections

• We are also interested in the implementation of semaphores to the extent that we understand that internally, a semaphore is conceptually based on incrementing and decrementing an integer variable

Page 171: Chapter 6, Process Synchronization

171

Deadlocks and Starvation

• Although how it’s all implemented in practice might not be clear, the previous discussion of locking and semaphores illustrated the idea that you could enforce:
– Mutual exclusion
– Order of execution

• This should provide a basis for writing user level code where the end result is in a consistent state

Page 172: Chapter 6, Process Synchronization

172

• This still leaves two possible problems: Starvation and deadlock

• Starvation can occur for the same reasons it can occur in a scheduling algorithm

• If the waiting list for a given resource is not FIFO and some priority scheme is used for waking up blocked processes, some processes may never acquire the resource

• The simple solution to this problem is to implement a queuing discipline that can’t lead to starvation

Page 173: Chapter 6, Process Synchronization

173

• Deadlock is a more difficult problem

• An initial example of this came up with scenario 2, presented previously

• Essentially, Alphonse and Gaston deadlocked

• This was a simple case of a poor implementation where you had two processes and one resource

Page 174: Chapter 6, Process Synchronization

174

• Deadlock typically arises when there is more than one process and more than one resource being contended for

• If the acquire() and release() calls are interleaved in a certain way, the result can be that neither process can proceed

• An example of this follows

Page 175: Chapter 6, Process Synchronization

175

Deadlock Example

• Suppose that Q and S are resources and that P0 and P1 are structured in this way:

    P0              P1
    acquire(S)      acquire(Q)
    acquire(Q)      acquire(S)
    ...             ...
    release(S)      release(Q)
    release(Q)      release(S)

• The arbitrary, concurrent scheduling of the processes may proceed as follows:

• P0 acquires S

• Then P1 acquires Q

• P0 then requests Q and blocks, while P1 requests S and blocks

Page 176: Chapter 6, Process Synchronization

176

• At this point, neither process can go any further

• Each is waiting for a resource that the other one holds

• This is a classic case of deadlock

• This is a sufficiently broad topic that it will not be pursued in depth here

• A whole chapter is devoted to it later on

Page 177: Chapter 6, Process Synchronization

177

6.6 Classic Problems of Synchronization

• These problems exist in operating systems and other systems which have concurrency

• Because they are well-understood, they are often used to test implementations of concurrency control

• Some of these problems should sound familiar because the book has already brought them up as examples of aspects of operating systems (without yet discussing all of the details of a correct, concurrent implementation)

Page 178: Chapter 6, Process Synchronization

178

• The book discusses the following three problems:
– The bounded-buffer problem
– The readers-writers problem
– The dining philosophers problem

Page 179: Chapter 6, Process Synchronization

179

• The book gives Java code to solve these problems

• For the purposes of the immediate discussion, these examples are working code

• There is one slight, possible source of confusion

• The examples use a home-made Semaphore class

• In the current version of the Java API, there is also a Semaphore class

Page 180: Chapter 6, Process Synchronization

180

• Presumably the home-made class agrees with how the book describes semaphores

• It is not clear whether the API Semaphore class agrees or not

• The home-made class will be noted at the end of the presentation of code—but its contents will not be explained

• Only after covering the coming section on synchronization syntax in Java would it be possible to understand how the authors have implemented concurrency control in their own semaphore class

Page 181: Chapter 6, Process Synchronization

181

The Bounded Buffer Problem

• Operating systems implement general I/O using buffers and message passing between buffers

• Buffer management is a real element of O/S construction

• This is a shared resource problem

• The buffer and any variables keeping track of buffer state (such as the count of contents) have to be managed so that contending processes (threads) keep them consistent

Page 182: Chapter 6, Process Synchronization

182

• Various pieces of code were given in previous chapters for the bounded buffer problem

• Now the book gives code which is multi-threaded and also does concurrency control using a semaphore

• That code follows

Page 183: Chapter 6, Process Synchronization

183

/**
 * BoundedBuffer.java
 *
 * This program implements the bounded buffer with semaphores.
 * Note that the use of count only serves to output whether
 * the buffer is empty or full.
 */

import java.util.*;

public class BoundedBuffer implements Buffer
{
    private static final int BUFFER_SIZE = 2;

    private Semaphore mutex;
    private Semaphore empty;
    private Semaphore full;

    private int count;
    private int in, out;
    private Object[] buffer;

Page 184: Chapter 6, Process Synchronization

184

    public BoundedBuffer()
    {
        // buffer is initially empty
        count = 0;
        in = 0;
        out = 0;

        buffer = new Object[BUFFER_SIZE];

        mutex = new Semaphore(1);
        empty = new Semaphore(BUFFER_SIZE);
        full = new Semaphore(0);
    }

Page 185: Chapter 6, Process Synchronization

185

    // producer calls this method
    public void insert(Object item) {
        empty.acquire();
        mutex.acquire();

        // add an item to the buffer
        ++count;
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;

        if (count == BUFFER_SIZE)
            System.out.println("Producer Entered " + item + " Buffer FULL");
        else
            System.out.println("Producer Entered " + item + " Buffer Size = " + count);

        mutex.release();
        full.release();
    }

Page 186: Chapter 6, Process Synchronization

186

    // consumer calls this method
    public Object remove() {
        full.acquire();
        mutex.acquire();

        // remove an item from the buffer
        --count;
        Object item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;

        if (count == 0)
            System.out.println("Consumer Consumed " + item + " Buffer EMPTY");
        else
            System.out.println("Consumer Consumed " + item + " Buffer Size = " + count);

        mutex.release();
        empty.release();

        return item;
    }
}

Page 187: Chapter 6, Process Synchronization

187

• There is more code to the full solution

• It will be given later, but the first thing to notice is that there are three semaphores

• The book has introduced a new level of complexity “out of the blue” by using this classic problem as an illustration

Page 188: Chapter 6, Process Synchronization

188

• Not only is there a semaphore, mutex, for mutual exclusion on buffer operations

• There are two more semaphores, empty and full

• These semaphores are associated with the idea that the buffer has to be protected from attempts to insert into a full buffer or remove from an empty one

• In other words, they deal with the concepts, given in an earlier chapter, of blocking sends/receives and writes/reads

Page 189: Chapter 6, Process Synchronization

189

• No cosmic theory is offered to explain the ordering of the calls to acquire and release the semaphores

• The example is simply given, and it’s up to us to try and sort out how the calls interact in a way that accomplishes the desired result

Page 190: Chapter 6, Process Synchronization

190

• The mutex semaphore is initialized to 1

• The values 1 and 0 are sufficient to enforce mutual exclusion

Page 191: Chapter 6, Process Synchronization

191

• The empty semaphore is initialized to BUFFER_SIZE.

• The buffer is “empty”, i.e., has space to insert new items, until empty.acquire() has been called BUFFER_SIZE times and its space is filled

• In effect, the empty semaphore counts how many elements of the buffer array are empty and available for insertion

Page 192: Chapter 6, Process Synchronization

192

• The full semaphore is initialized to 0

• Initially, the buffer is not “full”

• There are no elements in the buffer array

• This means that remove() won’t find anything until a call to insert() has made a call to full.release()

• In effect, the full semaphore counts how many elements of the buffer array have been filled and are available for removal
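• As a hypothetical trace with BUFFER_SIZE = 2: initially empty = 2 and full = 0; after one insert(), empty = 1 and full = 1; after a second insert(), empty = 0 and full = 2; a third insert() then blocks on empty.acquire() until a remove() calls empty.release()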

Page 193: Chapter 6, Process Synchronization

193

Page 194: Chapter 6, Process Synchronization

194

• In the code, the calls to mutex.acquire() and mutex.release() are paired within the insert() and remove() methods

• The calls to acquire() and release() on empty and full are crossed between the insert() and remove() methods

• Notice that the pattern of criss-crossing of the calls is reminiscent of the trickery used in order to have a single semaphore control the order of execution of two blocks of code

Page 195: Chapter 6, Process Synchronization

195

• A semaphore is released after one block of code and acquired before the other

• This happens with both the empty and full semaphores in this example

• See the diagram on the following overhead

Page 196: Chapter 6, Process Synchronization

196

Page 197: Chapter 6, Process Synchronization

197

• The rest of the book code to make this a working example follows

Page 198: Chapter 6, Process Synchronization

198

/**
 * An interface for buffers
 *
 */

public interface Buffer
{
    /**
     * insert an item into the Buffer.
     * Note this may be either a blocking
     * or non-blocking operation.
     */
    public abstract void insert(Object item);

    /**
     * remove an item from the Buffer.
     * Note this may be either a blocking
     * or non-blocking operation.
     */
    public abstract Object remove();
}

Page 199: Chapter 6, Process Synchronization

199

/**
 * This is the producer thread for the bounded buffer problem.
 */

import java.util.*;

public class Producer implements Runnable
{
    public Producer(Buffer b) {
        buffer = b;
    }

    public void run()
    {
        Date message;

        while (true) {
            System.out.println("Producer napping");
            SleepUtilities.nap();

            // produce an item & enter it into the buffer
            message = new Date();
            System.out.println("Producer produced " + message);

            buffer.insert(message);
        }
    }

    private Buffer buffer;
}

Page 200: Chapter 6, Process Synchronization

200

/**
 * This is the consumer thread for the bounded buffer problem.
 */
import java.util.*;

public class Consumer implements Runnable
{
    public Consumer(Buffer b) {
        buffer = b;
    }

    public void run()
    {
        Date message;

        while (true)
        {
            System.out.println("Consumer napping");
            SleepUtilities.nap();

            // consume an item from the buffer
            System.out.println("Consumer wants to consume.");

            message = (Date)buffer.remove();
        }
    }

    private Buffer buffer;
}

Page 201: Chapter 6, Process Synchronization

201

/**
 * This creates the buffer and the producer and consumer threads.
 *
 */
public class Factory
{
    public static void main(String args[]) {
        Buffer server = new BoundedBuffer();

        // now create the producer and consumer threads
        Thread producerThread = new Thread(new Producer(server));
        Thread consumerThread = new Thread(new Consumer(server));

        producerThread.start();
        consumerThread.start();
    }
}

Page 202: Chapter 6, Process Synchronization

202

/**
 * Utilities for causing a thread to sleep.
 * Note, we should be handling interrupted exceptions
 * but choose not to do so for code clarity.
 */

public class SleepUtilities
{
    /**
     * Nap between zero and NAP_TIME seconds.
     */
    public static void nap() {
        nap(NAP_TIME);
    }

    /**
     * Nap between zero and duration seconds.
     */
    public static void nap(int duration) {
        int sleeptime = (int) (duration * Math.random());
        try { Thread.sleep(sleeptime * 1000); }
        catch (InterruptedException e) {}
    }

    private static final int NAP_TIME = 5;
}

Page 203: Chapter 6, Process Synchronization

203

• The book’s Semaphore class follows

• Strictly speaking, the example was written to use this home-made class

• Presumably the example would also work with objects of the Java API Semaphore class

• The keyword “synchronized” in the given class is what makes it work

• This keyword will be specifically covered in the section of the notes covering Java synchronization
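• As a side note, here is a minimal sketch of using the API class java.util.concurrent.Semaphore in the same binary-mutex role; unlike the home-made class, its acquire() method throws the checked InterruptedException:

import java.util.concurrent.Semaphore;

public class ApiSemaphoreDemo
{
    public static void main(String[] args) throws InterruptedException
    {
        // one permit, so this behaves like the mutex semaphore above
        Semaphore mutex = new Semaphore(1);

        mutex.acquire();          // may throw InterruptedException
        try {
            // critical section
            System.out.println("inside the critical section");
        } finally {
            mutex.release();      // always give the permit back
        }
    }
}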

Page 204: Chapter 6, Process Synchronization

204

/**
 * Semaphore.java
 *
 * A basic counting semaphore using Java synchronization.
 */

public class Semaphore
{
    private int value;

    public Semaphore(int value) {
        this.value = value;
    }

    public synchronized void acquire() {
        while (value <= 0) {
            try {
                wait();
            }
            catch (InterruptedException e) { }
        }

        value--;
    }

    public synchronized void release() {
        ++value;

        notify();
    }
}

Page 205: Chapter 6, Process Synchronization

205

The Readers-Writers Problem

• The author explains this in general terms of a database

• The database is the resource shared by >1 thread

Page 206: Chapter 6, Process Synchronization

206

• At any given time the threads accessing a database may fall into two different categories, with different concurrency requirements:
– Readers: Reading is an innocuous activity
– Writers: Writing (updating) is an activity which changes the state of the database

Page 207: Chapter 6, Process Synchronization

207

• In database terminology, you control access to a data item by means of a lock

• If you own the lock, you have (potentially sole) access to the data item

• In order to implement user-level database locking, you need mutual exclusion on the code that accesses the lock

Page 208: Chapter 6, Process Synchronization

208

• This is just another one of my obscure side notes, like PCB’s as canopic jars…

• Maybe this will give you a better idea of what a lock is

• An analogy can be made with the title to a car and the car itself.

• If you possess the title, you own the car, allowing you to legally take possession of the car

Page 209: Chapter 6, Process Synchronization

209

• Database access adds a new twist to locking

• There are two kinds of locks

• An exclusive lock: This is the kind of lock discussed so far

• A writer needs an exclusive lock, which means that all other writers and readers are excluded while the writer has the lock

Page 210: Chapter 6, Process Synchronization

210

• A shared lock: This is actually a new kind of lock

• This is the kind of lock that readers need

• The idea is that >1 reader can access the data at the same time, as long as writers are excluded

• The point is that readers don’t change the data, so by themselves, they can’t cause concurrency control problems which are based on inconsistent state

• They can get in trouble if they are intermixed with operations that do change database state

Page 211: Chapter 6, Process Synchronization

211

• The book gives two different possible approaches to the readers-writers problem

• It should be noted that neither of the book’s approaches prevents starvation

• In other words, you might say that these solutions are application level implementations of synchronization which are not entirely correct, because they violate the bounded waiting condition

Page 212: Chapter 6, Process Synchronization

212

First Readers-Writers Approach

• No reader will be kept waiting unless a writer has already acquired a lock

• Readers don’t wait on other readers

• Readers don’t wait on waiting writers

• Readers have priority

• Writers may starve

Page 213: Chapter 6, Process Synchronization

213

Second Readers-Writers Approach

• Once a writer is ready, it gets the lock as soon as possible

• Writers have to wait for the currently active readers to finish, and no longer

• Writers have to wait on each other, presumably in FIFO order

• Writers have priority

• Readers may starve

Page 214: Chapter 6, Process Synchronization

214

• Other observations about the readers-writers problem

• You have to be able to distinguish reader and writer threads (processes) from each other

• For this scheme to give much processing advantage, you probably need more readers than writers in order to justify implementing the different kinds of locks

• Garden variety databases would tend to have more readers than writers

Page 215: Chapter 6, Process Synchronization

215

• The solution approaches could be extended to prevent starvation and to give them other desirable characteristics

• Book code follows, along with some explanations

• The first code solution takes approach 1: the readers have priority

• As a consequence, the provided solution would allow starvation of writers to happen

Page 216: Chapter 6, Process Synchronization

216

/**
 * Database.java
 *
 * This class contains the methods the readers and writers will use
 * to coordinate access to the database. Access is coordinated using
 * semaphores.
 */

public class Database implements RWLock
{
    // the number of active readers
    private int readerCount;

    Semaphore mutex;  // controls access to readerCount
    Semaphore db;     // controls access to the database

    public Database() {
        readerCount = 0;

        mutex = new Semaphore(1);
        db = new Semaphore(1);
    }

Page 217: Chapter 6, Process Synchronization

217

    public void acquireReadLock(int readerNum) {
        mutex.acquire();

        ++readerCount;

        // if I am the first reader tell all others
        // that the database is being read
        if (readerCount == 1)
            db.acquire();

        System.out.println("Reader " + readerNum + " is reading. Reader count = " + readerCount);

        mutex.release();
    }

Page 218: Chapter 6, Process Synchronization

218

    public void releaseReadLock(int readerNum) {
        mutex.acquire();

        --readerCount;

        // if I am the last reader tell all others
        // that the database is no longer being read
        if (readerCount == 0)
            db.release();

        System.out.println("Reader " + readerNum + " is done reading. Reader count = " + readerCount);

        mutex.release();
    }

Page 219: Chapter 6, Process Synchronization

219

    public void acquireWriteLock(int writerNum) {
        db.acquire();
        System.out.println("writer " + writerNum + " is writing.");
    }

    public void releaseWriteLock(int writerNum) {
        System.out.println("writer " + writerNum + " is done writing.");
        db.release();
    }
}

Page 220: Chapter 6, Process Synchronization

220

• The starting point for understanding this first database example is comparing it with the bounded-buffer example

• Like the bounded-buffer example, this example has more than one semaphore

• However, unlike the bounded-buffer example, it has two semaphores, not three, and the two are used to control different things

Page 221: Chapter 6, Process Synchronization

221

• An equally important difference from the bounded-buffer example is that this example does not actually do any reading or writing of data to a database

• The application code simply implements the protocol for assigning different kinds of locks to requesting processes

• It’s complicated enough as it is without trying to inject any reality into it.

• It’s sufficient to worry about the locking protocol

Page 222: Chapter 6, Process Synchronization

222

• Overall then, the major difference can be seen in the methods that are implemented

• In the bounded-buffer example, there were an insert() and a remove() method

• In this example there are four methods:
– acquireReadLock()
– acquireWriteLock()
– releaseReadLock()
– releaseWriteLock()

Page 223: Chapter 6, Process Synchronization

223

• As with the bounded-buffer problem, understanding the acquisition and release of the read and write db locks depends on understanding the effects of the placement of the various semaphore acquire() and release() calls in the code

• Once again, you have to figure out what it means for the acquire() and release() of a semaphore lock to criss-cross (appear at the top/bottom or bottom/top) in the code for the db lock methods

Page 224: Chapter 6, Process Synchronization

224

• Observe that the write locks are pure and simple

• They enforce mutual exclusion on the database

• This is done with the db semaphore

• See the code on the following overhead

Page 225: Chapter 6, Process Synchronization

225

    public void acquireWriteLock(int writerNum)
    {
        db.acquire();
        System.out.println("writer " + writerNum + " is writing.");
    }

    public void releaseWriteLock(int writerNum)
    {
        System.out.println("writer " + writerNum + " is done writing.");
        db.release();
    }

Page 226: Chapter 6, Process Synchronization

226

• The read locks make use of the mutex semaphore

• Mutex is a garden variety semaphore which enforces mutual exclusion on both the acquisition and release of read locks

• There is no fancy criss-crossing.

Page 227: Chapter 6, Process Synchronization

227

• Both acquireReadLock() and releaseReadLock() begin with mutex.acquire() and end with mutex.release()

• All of the db acquire and release code is protected, but in particular, the shared variable readerCount is protected

Page 228: Chapter 6, Process Synchronization

228

• The read locks also make use of the db semaphore

• This is the semaphore that protects the db

• If a writer has already executed db.acquire(), then a reader cannot get past the db.acquire() call in the acquireReadLock() method

Page 229: Chapter 6, Process Synchronization

229

• However, more than one reader can access the db at the same time

• Again, there is no criss-crossing

• The call to db.acquire() occurs in acquireReadLock()

Page 230: Chapter 6, Process Synchronization

230

• However, this call is conditional

• Only the first reader has to make it

• As long as no writer is holding the db, a reader can enter the acquireReadLock() code, even if another reader has already acquired the db

Page 231: Chapter 6, Process Synchronization

231

• The call for a reader to release the db, db.release(), only occurs in releaseReadLock()

• However, this is also conditional

• A read lock on the database is only released if there are no more readers

• The code is repeated below for reference

Page 232: Chapter 6, Process Synchronization

232

    public void acquireReadLock(int readerNum) {
        mutex.acquire();

        ++readerCount;

        // if I am the first reader tell all others
        // that the database is being read
        if (readerCount == 1)
            db.acquire();

        System.out.println("Reader " + readerNum + " is reading. Reader count = " + readerCount);

        mutex.release();
    }

    public void releaseReadLock(int readerNum) {
        mutex.acquire();

        --readerCount;

        // if I am the last reader tell all others
        // that the database is no longer being read
        if (readerCount == 0)
            db.release();

        System.out.println("Reader " + readerNum + " is done reading. Reader count = " + readerCount);

        mutex.release();
    }
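• As a hypothetical trace: reader 1 calls acquireReadLock(), readerCount becomes 1, and db.acquire() is made on the readers’ behalf; reader 2 then enters and readerCount becomes 2, with no further db.acquire(); a writer arriving now blocks inside db.acquire(); as the readers finish, the last one drives readerCount back to 0 and calls db.release(), letting the writer proceed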

Page 233: Chapter 6, Process Synchronization

233

• The rest of the book code to make this a working example follows

Page 234: Chapter 6, Process Synchronization

234

/**
 * An interface for reader-writer locks.
 *
 * In the text we do not have readers and writers
 * pass their number into each method. However we do so
 * here to aid in output messages.
 */

public interface RWLock
{
    public abstract void acquireReadLock(int readerNum);
    public abstract void acquireWriteLock(int writerNum);
    public abstract void releaseReadLock(int readerNum);
    public abstract void releaseWriteLock(int writerNum);
}

Page 235: Chapter 6, Process Synchronization

235

/**
 * Reader.java
 * A reader to the database.
 */

public class Reader implements Runnable
{
    private RWLock db;
    private int readerNum;

    public Reader(int readerNum, RWLock db) {
        this.readerNum = readerNum;
        this.db = db;
    }

    public void run() {
        while (true) {
            SleepUtilities.nap();

            System.out.println("reader " + readerNum + " wants to read.");
            db.acquireReadLock(readerNum);

            // you have access to read from the database
            // let's read for awhile .....
            SleepUtilities.nap();

            db.releaseReadLock(readerNum);
        }
    }
}

Page 236: Chapter 6, Process Synchronization

236

/**
 * Writer.java
 * A writer to the database.
 */

public class Writer implements Runnable
{
    private RWLock server;
    private int writerNum;

    public Writer(int w, RWLock db) {
        writerNum = w;
        server = db;
    }

    public void run() {
        while (true)
        {
            SleepUtilities.nap();

            System.out.println("writer " + writerNum + " wants to write.");
            server.acquireWriteLock(writerNum);

            // you have access to write to the database
            // write for awhile ...
            SleepUtilities.nap();

            server.releaseWriteLock(writerNum);
        }
    }
}

Page 237: Chapter 6, Process Synchronization

237

/**
 * Factory.java
 * This class creates the reader and writer threads and
 * the database they will be using to coordinate access.
 */

public class Factory
{
    public static final int NUM_OF_READERS = 3;
    public static final int NUM_OF_WRITERS = 2;

    public static void main(String args[])
    {
        RWLock server = new Database();

        Thread[] readerArray = new Thread[NUM_OF_READERS];
        Thread[] writerArray = new Thread[NUM_OF_WRITERS];

        for (int i = 0; i < NUM_OF_READERS; i++) {
            readerArray[i] = new Thread(new Reader(i, server));
            readerArray[i].start();
        }

        for (int i = 0; i < NUM_OF_WRITERS; i++) {
            writerArray[i] = new Thread(new Writer(i, server));
            writerArray[i].start();
        }
    }
}

Page 238: Chapter 6, Process Synchronization

238

/**
 * Utilities for causing a thread to sleep.
 * Note, we should be handling interrupted exceptions
 * but choose not to do so for code clarity.
 */

public class SleepUtilities
{
    /**
     * Nap between zero and NAP_TIME seconds.
     */
    public static void nap() {
        nap(NAP_TIME);
    }

    /**
     * Nap between zero and duration seconds.
     */
    public static void nap(int duration) {
        int sleeptime = (int) (duration * Math.random());
        try { Thread.sleep(sleeptime * 1000); }
        catch (InterruptedException e) {}
    }

    private static final int NAP_TIME = 5;
}

Page 239: Chapter 6, Process Synchronization

239

• These are the same observations that were made with the producer-consumer example:
– The book’s Semaphore class follows
– Strictly speaking, the example was written to use this home-made class
– Presumably the example would also work with objects of the Java API Semaphore class
– The keyword “synchronized” in the given class is what makes it work
– This keyword will be specifically covered in the section of the notes covering Java synchronization

Page 240: Chapter 6, Process Synchronization

240

/**
 * Semaphore.java
 * A basic counting semaphore using Java synchronization.
 */

public class Semaphore
{
    private int value;

    public Semaphore(int value) {
        this.value = value;
    }

    public synchronized void acquire() {
        while (value <= 0) {
            try {
                wait();
            }
            catch (InterruptedException e) { }
        }
        value--;
    }

    public synchronized void release() {
        ++value;
        notify();
    }
}

Page 241: Chapter 6, Process Synchronization

241

The Dining Philosophers Problem

Page 242: Chapter 6, Process Synchronization

242

• Let there be one rice bowl in the center

• Let there be five philosophers

• Let there be only five chopsticks, one between each pair of neighboring philosophers

Page 243: Chapter 6, Process Synchronization

243

• Let concurrent eating have these conditions

• 1. A philosopher tries to pick up the two chopsticks immediately on each side
– Picking up one chopstick is an independent act
– It isn’t possible to pick up both simultaneously

Page 244: Chapter 6, Process Synchronization

244

• 2. If a philosopher succeeds in acquiring the two chopsticks, then the philosopher can eat
– Eating cannot be interrupted

• 3. When the philosopher is done eating, the chopsticks are put down one after the other
– Putting down one chopstick is an independent act
– It isn’t possible to put down both simultaneously

• Note that under these conditions it would not be possible for two neighboring philosophers to be eating at the same time

Page 245: Chapter 6, Process Synchronization

245

• This concurrency control problem has two challenges in it:

• 1. Starvation

• Due to the sequence of events, one philosopher may never be able to pick up two chopsticks and eat

Page 246: Chapter 6, Process Synchronization

246

• 2. Deadlock

• Due to the sequence of events, each philosopher may succeed in picking up either the chopstick on the left or the chopstick on the right
– None will eat because they are waiting/attempting to pick up the other chopstick
– Since they won’t be eating, they’ll never finish and put down the chopstick they do hold

Page 247: Chapter 6, Process Synchronization

247

• A full discussion of deadlock will be given in chapter 7

• In the meantime, possible solutions to starvation and deadlock under this scenario include:
– Allow at most four philosophers at the table
– Allow a philosopher to pick up chopsticks only if both are available (sketched below)
– An asymmetric solution: odd philosophers reach first with their left hands, even philosophers with their right
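• A minimal sketch (not the book’s code) of the second option, using the Java synchronized/wait()/notifyAll() syntax covered later in these notes; the class name and the inUse array are hypothetical:

public class ChopstickTable
{
    private final boolean[] inUse = new boolean[5];

    // Philosopher i waits until both neighboring chopsticks are
    // free, then takes them in one atomic step, so the
    // everyone-holds-one-chopstick deadlock cannot arise.
    public synchronized void pickUp(int i) throws InterruptedException
    {
        int left = i;
        int right = (i + 1) % 5;
        while (inUse[left] || inUse[right])
            wait();
        inUse[left] = true;
        inUse[right] = true;
    }

    public synchronized void putDown(int i)
    {
        inUse[i] = false;
        inUse[(i + 1) % 5] = false;
        notifyAll();   // let waiting philosophers re-check their chopsticks
    }
}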

Page 248: Chapter 6, Process Synchronization

248

• Note that all proposed solutions either reduce concurrency or introduce artificial constraints

• The book gives partial code for this problem but having looked at all of the code for the previous two examples, it is not necessary to pursue more code for this one

Page 249: Chapter 6, Process Synchronization

249

6.7 Monitors

• Monitors are an important topic for two reasons:
– They are worth understanding because Java synchronization is ultimately built on the monitor concept
– Also, the use of semaphores is fraught with difficulty, so overall, monitors might be a better concept to learn

Page 250: Chapter 6, Process Synchronization

250

• Recall that depending on the circumstance, correct implementation of synchronization using semaphores might require cross-over of calls

• The author points out that these are common mistakes when working with semaphores:

• 1. Reversal of calls: mutex.release() before mutex.acquire()
– This can lead to violation of mutual exclusion

Page 251: Chapter 6, Process Synchronization

251

• 2. Double acquisition: mutex.acquire() followed by mutex.acquire()
– This will lead to deadlock

• 3. Forgetting one or the other or both calls
– This will lead to a violation of mutual exclusion or deadlock
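• For instance, a minimal sketch of mistake 2, using the book-style Semaphore class:

Semaphore mutex = new Semaphore(1);

mutex.acquire();
mutex.acquire();   // deadlock: the thread blocks here forever,
                   // waiting for a release() that only it could make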

Page 252: Chapter 6, Process Synchronization

252

• A high level, O-O description of what a monitor is:

• It’s a class with (private) instance variables and (public) methods

• Mutual exclusion is enforced over all of the methods at the same time

• No two threads can be in any of the methods at the same time

Page 253: Chapter 6, Process Synchronization

253

• Notice that this blanket mutual exclusion obviates explicit calls to acquire() and release(), and therefore eliminates the problem of placing them correctly in calling code

• Notice also that under this scheme, the private instance variables are protected by definition

• There is no access to them except through the methods, and the methods have mutual exclusion enforced on them
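• A minimal Java-flavored sketch of the idea (a hypothetical class, with the synchronized keyword covered later standing in for the monitor’s blanket mutual exclusion):

public class CounterMonitor
{
    private int count = 0;    // private state, reachable only via the methods

    public synchronized void increment() { count++; }
    public synchronized int get() { return count; }
}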

Page 254: Chapter 6, Process Synchronization

254

The relationship of monitors to Java

• The discussion of monitors here is tangentially related to Java

• In Java there is a Monitor class, but that is something different from the monitor concept under discussion here

• However, it turns out that the Object class is the source of certain monitor methods

• This means that using the Java syntax of synchronization, a programmer could write a class that embodied all of the characteristics of a monitor

Page 255: Chapter 6, Process Synchronization

255

• Note also that the discussion will shortly mention the idea of a Condition variable.

• Java has a Condition interface which corresponds to this idea

• Once again, a programmer could write code that agreed with the discussion of monitors here
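• For reference, a minimal sketch (hypothetical class and field names) using the API’s java.util.concurrent.locks.Condition, whose await()/signal() pair plays the role of the conceptual wait()/signal():

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionDemo
{
    private final Lock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private int items = 0;

    public void put()
    {
        lock.lock();
        try {
            items++;
            notEmpty.signal();    // like the monitor's x.signal()
        } finally {
            lock.unlock();
        }
    }

    public void take() throws InterruptedException
    {
        lock.lock();
        try {
            while (items == 0)
                notEmpty.await(); // like the monitor's x.wait()
            items--;
        } finally {
            lock.unlock();
        }
    }
}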

Page 256: Chapter 6, Process Synchronization

256

• Finally, note that Java synchronization will be the next topic and some of the syntax will be covered.

• When it is covered, the intent is not to show in particular how to implement a monitor—but how to use it directly to synchronize programmer written code.

• Java synchronization may be built on the monitor concept, but that doesn’t mean that the programmer has to do synchronization using monitors directly

Page 257: Chapter 6, Process Synchronization

257

Condition Variables (or Objects)

• A monitor class can have Condition variables declared in it:

private Condition x, y;

• The condition variables support two special operations: wait() and signal()

Page 258: Chapter 6, Process Synchronization

258

• In the Object class of Java there is a wait() method which is like the conceptual wait() method in a monitor

• In the Object class of Java there are also methods notify() and notifyAll().

• These methods correspond to the conceptual signal() method in a monitor

Page 259: Chapter 6, Process Synchronization

259

• Different threads may share a reference to a monitor object

• The threads can call methods on the monitor

• A monitor method may contain a call of this form:

x.wait();

• If this happens, the thread that was “running in the monitor” is suspended

Page 260: Chapter 6, Process Synchronization

260

• A thread will remain suspended until another thread makes a call such as this:

x.signal()

• In Java this would be x.notify() or x.notifyAll()

Page 261: Chapter 6, Process Synchronization

261

• In a primitive semaphore, if a resource is not available, when a process calls acquire() and fails, the process goes into a spinlock

• The logic of wait() is sort of the reverse:

• A process can voluntarily step aside by calling x.wait(), allowing another thread into the protected code

Page 262: Chapter 6, Process Synchronization

262

• It becomes second nature to think of concurrency control as a technique for enforcing mutual exclusion on a resource

• Recall that synchronization also includes the ability to enforce a particular execution order

Page 263: Chapter 6, Process Synchronization

263

• It may be easier to remember the idea underlying wait() by thinking of it as a tool that makes it possible for a process to take actions which affect its execution order

• Although not directly related, it may be helpful to remember the concept of “politeness” that came up in the Alphonse and Gaston phase of trying to explain concurrency control

• Making a wait() call allows other threads to go first

Page 264: Chapter 6, Process Synchronization

264

• More points to consider:

• The authors raise the question of what it would mean to call “signal()” (I gather they mean release()) on a semaphore when there is nothing to release

• If you went back to look at the original pseudo-code for a semaphore, you would find that this would increment the counter, even though that would put the count above the number of actual resources

Page 265: Chapter 6, Process Synchronization

265

• It may or may not be possible to fix the pseudo-code to deal with this, but it’s not important

• That is because the original pseudo-code refers to a list of processes waiting for a resource, and the implementation of this wasn’t explained

Page 266: Chapter 6, Process Synchronization

266

• The previous overhead was a jumping off point for this important, additional information about monitors:

• The monitor concept explicitly includes waiting lists

• If a thread running in the monitor causes a call such as x.wait() to be made, the thread is put in the waiting list

Page 267: Chapter 6, Process Synchronization

267

• When another thread makes a call x.notify(), that thread will step aside and one in the waiting list will be resumed

• If another thread were to make a call x.notifyAll(), all waiting threads would potentially be resumed

Page 268: Chapter 6, Process Synchronization

268

• The management of waiting lists and the ability to call wait(), notify(), and notifyAll() leads to another consideration which the implementation of a monitor has to take into account

• Let this scenario be given:

• Thread Q is waiting because it earlier called x.wait()

• Thread P is running and it calls x.signal()

• By definition, only one of P and Q can be running in the monitor at the same time

Page 269: Chapter 6, Process Synchronization

269

• The question becomes, what protocol should be used to allow Q to begin running in the monitor instead of P?

• This question is not one that has to be answered by the application programmer

• It is a question that confronts the designer of a particular monitor implementation

Page 270: Chapter 6, Process Synchronization

270

• In general, there are two alternatives:

• Signal and wait:

• P signals, then waits, allowing Q its turn

• After Q finishes, P resumes

Page 271: Chapter 6, Process Synchronization

271

• Signal and continue:

• P signals and continues until it leaves the monitor

• At that point Q can enter the monitor (or potentially may not, if prevented by some other condition)

Page 272: Chapter 6, Process Synchronization

272

• The book next tries to illustrate the use of monitors in order to solve the dining philosophers problem

• I am not going to cover this

Page 273: Chapter 6, Process Synchronization

273

6.8 Java Synchronization

• Thread safe. Definition: Concurrent threads have been implemented so that they leave shared data in a consistent state

• Note: Much of the example code shown previously would not be thread safe. Depending on the development tools used, threaded code without synchronization syntax may generate an error or warning indicating that it is not thread safe

Page 274: Chapter 6, Process Synchronization

274

• The most recent examples, which used a Semaphore class that did use the Java synchronization syntax internally, should not generate this error/warning

• If code does produce this in warning form and the code can be run, it should be made emphatically clear that even though it runs, it IS NOT THREAD SAFE

Page 275: Chapter 6, Process Synchronization

275

• In other words, unsafe code may appear to run

• More accurately it may run and even give correct results some or most of the time

• But depending on the vagaries of thread scheduling, at completely unpredictable times, it will give incorrect results

Page 276: Chapter 6, Process Synchronization

276

• If you compiled the code for Peterson’s solution given earlier, this defect would hold.

• That example did not actually include any functioning synchronization mechanism on the shared variables that modeled turn and desire

Page 277: Chapter 6, Process Synchronization

277

• More repetitive preliminaries:
– The idea of inconsistent state can be illustrated with the producer-consumer problem:
– If not properly synchronized, calls to insert() and remove() can result in an incorrect count of how many messages are in a shared buffer

Page 278: Chapter 6, Process Synchronization

278

– Keep in mind also that the Java API supports synchronization syntax at the programmer level

– However, all synchronization ultimately is provided by something like a test and set instruction at the hardware level

Page 279: Chapter 6, Process Synchronization

279

• Because this is such a long and twisted path, the book reviews more of the preliminaries

• To begin with, the initial examples are not even truly synchronized

• This means that they are incorrect

• They will lead to race conditions on shared variables/shared objects

Page 280: Chapter 6, Process Synchronization

280

• Although not literally correct, the initial examples attempt to illustrate what is behind synchronization by introducing the concept of busy waiting or a spin lock

• The basic idea is that if one thread holds a resource, another thread wanting that resource will have to wait in some fashion

• In the illustrative, programmer-written code, this waiting takes the form of sitting in a loop

Page 281: Chapter 6, Process Synchronization

281

• The first problem with busy waiting is that it’s wasteful

• A thread that doesn’t have a resource it needs can be scheduled and burn up CPU cycles spinning in a loop until its time slice expires

• The second problem with busy waiting is that it can lead to livelock

Page 282: Chapter 6, Process Synchronization

282

• Livelock is not quite the same as deadlock

• In deadlock, two threads “can’t move” because each is waiting for an action that only the other can take

• In livelock, both threads are alive and scheduled, but they still don’t make any progress

Page 283: Chapter 6, Process Synchronization

283

• The book suggests this illustrative scenario:

• A producer has higher priority than a consumer

• The producer fills the shared buffer and remains alive, keeping on trying to add more messages

• The consumer, having lower priority, is alive but never scheduled, so it can never remove a message from the buffer

• Thus, the producer can never enter a new message into it

Page 284: Chapter 6, Process Synchronization

284

• Using real syntax that correctly enforces mutual exclusion can lead to deadlock

• Deadlock is a real problem in the development of synchronized code, but it is not literally a problem of synchronization syntax

• In other words, you can write an example that synchronizes correctly but still has this problem

Page 285: Chapter 6, Process Synchronization

285

• A simplistic example would be an implementation of the dining philosophers where each one could pick up the left chopstick

• The problem is not that there is uncontrolled access to a shared resource. The problem is that once that state has been entered, it will never be left

• Java synchronization syntax can be introduced and illustrated and the question of how to prevent or resolve deadlocks can be put off until chapter 7, which is devoted to that question

Page 286: Chapter 6, Process Synchronization

286

• The book takes the introduction of synchronization syntax through two stages:

• Stage 1: You use Java synchronization and the Thread class yield() method to write code that does enforce mutual exclusion and which is essentially a “correct” implementation of busy waiting.

• This is wasteful and livelock prone, but it is synchronized

Page 287: Chapter 6, Process Synchronization

287

• Stage 2: You use Java synchronization with the wait(), notify(), and notifyAll() methods of the Object class.

• Instead of busy waiting, this relies on the underlying monitor-like capabilities of Java to have threads wait in queues or lists.

• This is deadlock prone, but it deals with all of the other foregoing problems

Page 288: Chapter 6, Process Synchronization

288

The synchronized Keyword in Java

• Java synchronization is based on the monitor concept, and this descends all the way from the Object class

• Every object in Java has a lock associated with it

• This lock is essentially like a simple monitor

• Locking for the object is based on a single condition variable

Page 289: Chapter 6, Process Synchronization

289

• If you are not writing synchronized code—if you are not using the keyword synchronized, the object’s lock is completely immaterial

• It is a system supplied feature of the object which lurks in the background unused by you and having no effect on what you are doing

Page 290: Chapter 6, Process Synchronization

290

• Inside the code of a class, methods can be declared synchronized

• In the monitor concept, mutual exclusion is enforced on all of the methods of a class at the same time

• Java is finer-grained

• There can be unsynchronized methods

• However, if >1 method is declared synchronized in a class, then mutual exclusion is enforced across all of them at the same time for any threads trying to access the object

Page 291: Chapter 6, Process Synchronization

291

• If a method is synchronized and no thread holds the lock, the first thread that calls the method acquires the lock

• Again, Java synchronization is monitor-like

• There is an entry set for the lock, in other words, a waiting list

• If another thread calls a synchronized method and cannot acquire the lock, it is put into the entry set for that lock

Page 292: Chapter 6, Process Synchronization

292

• When the thread holding the lock finishes running whatever synchronized method it was in, it releases the lock

• At that point, if the entry set has threads in it, the JVM will schedule one

• FIFO scheduling may be done on the entry set, but the Java specifications don’t require it

Page 293: Chapter 6, Process Synchronization

293

• Here are the first correctly synchronized snippets of sample code which the book offers

• They do mutual exclusion on a shared buffer

• However, they mimic busy waiting using the Thread class yield() method

• The Java API simply says this about the method:

• “Causes the currently executing thread object to temporarily pause and allow other threads to execute.”

Page 294: Chapter 6, Process Synchronization

294

• The book doesn’t bother to give a complete set of classes for this solution because it is not a very good one

• Because it implements a kind of busy waiting, it’s wasteful and livelock prone.

• The book also claims that it is prone to deadlock

• It seems that even the book may be getting hazy about the exact defects of given solutions

• Let it be said that this solution, at the very least, is X-lock prone

Page 295: Chapter 6, Process Synchronization

295

Synchronized insert() and remove() Methods for Producers and Consumers of a Bounded Buffer

public synchronized void insert(Object item)
{
    while (count == BUFFER_SIZE)
        Thread.yield();
    ++count;
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
}

Page 296: Chapter 6, Process Synchronization

296

public synchronized Object remove()
{
    Object item;
    while (count == 0)
        Thread.yield();
    --count;
    item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return item;
}

Page 297: Chapter 6, Process Synchronization

297

• Note that in the fully semaphore oriented (pseudo?) solution, there were three semaphores

• One handled the mutual exclusion which the keyword synchronized handles here

• The other two handled the cases where the buffer was empty or full

Page 298: Chapter 6, Process Synchronization

298

• There is no such thing as a synchronized “empty” or “full” variable, so there are not two additional uses of synchronized in this code

• The handling of the empty and full cases goes all the way back to the original example, where the code does modular arithmetic and keeps a count variable

• Note that the call to yield() then depends on the value of the count variable

Page 299: Chapter 6, Process Synchronization

299

Code with synchronized and wait(), notify(), and notifyAll()

• Java threads can call methods wait(), notify(), and notifyAll()

• These methods are similar in function to the methods of these names described when discussing the monitor concept

• Each Java object has exactly one lock, but it has two sets, the entry set and the wait set

Page 300: Chapter 6, Process Synchronization

300

The Entry Set

• The entry set is a waiting list

• You can think of it as being implemented as a linked data structure containing the “PCB’s” of threads

• Threads in the entry set are those which have reached the point in execution where they have called a synchronized method but can’t get in because another thread holds the lock

Page 301: Chapter 6, Process Synchronization

301

• A thread leaves the entry set and enters the synchronized method it wishes to run when the current lock holder releases the lock and the scheduling algorithm picks that thread from the entry set

Page 302: Chapter 6, Process Synchronization

302

• The wait set is also a waiting list

• You can also think of this as a linked data structure containing the “PCB’s” of threads

• The wait set is not the same as the entry set

• Suppose a thread holds a lock on an object

• A thread enters the wait set by calling the wait() method

Page 303: Chapter 6, Process Synchronization

303

• Entering the wait set means that the thread voluntarily releases the lock that it holds

• In the application code this would be triggered in an if statement where some (non-lock related) condition has been checked and it has been determined that due to that condition the thread can’t continue executing anyway

• When a thread is in the wait set, it is blocked. It can’t be scheduled but it’s not burning up resources because it’s not busy waiting

Page 304: Chapter 6, Process Synchronization

304

The Entry and Wait Sets Can Be Visualized in this Way

Page 305: Chapter 6, Process Synchronization

305

• By definition, threads in the wait set are not finished with the synchronized code

• Threads acquire the synchronized code through the entry set

• There has to be a mechanism for a thread in the wait set to get into the entry set

Page 306: Chapter 6, Process Synchronization

306

The Way to Move a Thread from the Wait Set to the Entry Set

• If one or more calls to wait() have been made in the synchronized code,

• Then, at the end of the code for a synchronized method, put a call to notify()

• When the system handles the notify() call, it picks an arbitrary thread from the wait set and puts it into the entry set

• When the thread is moved from one set to the other, its state is changed from blocked to runnable

Page 307: Chapter 6, Process Synchronization

307

• The foregoing description should be sufficient for code that manages two threads

• As a consequence, it should provide enough tools for an implementation of the producer-consumer problem using Java synchronization

Page 308: Chapter 6, Process Synchronization

308

Preview of the Complete Producer-Consumer Code

• The BoundedBuffer class has two methods, insert() and remove()

• These two methods are synchronized

• Synchronization of the methods protects both the count variable and the buffer itself, since each of these things is only accessed and manipulated through these two methods

Page 309: Chapter 6, Process Synchronization

309

• Unlike with semaphores, the implementation is nicely parallel:

• You start both methods with a loop containing a call to wait() and end both with a call to notify()

• Note that the call to wait() is in a loop rather than an if statement; the reason is that a woken thread must re-check the condition, which may no longer hold by the time it runs again

• Note also, syntactically, that the call to wait() has to occur in a try block

Page 310: Chapter 6, Process Synchronization

310

• Finally, note these important points:

• The use of the keyword synchronized enforces mutual exclusion

• The use of wait() and notify() has taken over the job of controlling whether a thread can insert or remove a message from the buffer, depending on whether the buffer is full or not

• The code follows

• This will be followed by further commentary

Page 311: Chapter 6, Process Synchronization

311

/**
 * BoundedBuffer.java
 *
 * This program implements the bounded buffer using Java synchronization.
 *
 */

public class BoundedBuffer implements Buffer {
    private static final int BUFFER_SIZE = 5;

    private int count;    // number of items in the buffer

    private int in;       // points to the next free position in the buffer

    private int out;      // points to the next full position in the buffer

    private Object[] buffer;

    public BoundedBuffer() {
        // buffer is initially empty
        count = 0;
        in = 0;
        out = 0;

        buffer = new Object[BUFFER_SIZE];
    }

Page 312: Chapter 6, Process Synchronization

312

    public synchronized void insert(Object item) {
        while (count == BUFFER_SIZE) {
            try {
                wait();
            } catch (InterruptedException e) {
            }
        }

        // add an item to the buffer
        ++count;
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;

        if (count == BUFFER_SIZE)
            System.out.println("Producer Entered " + item + " Buffer FULL");
        else
            System.out.println("Producer Entered " + item + " Buffer Size = " + count);

        notify();
    }

Page 313: Chapter 6, Process Synchronization

313

    // consumer calls this method
    public synchronized Object remove() {
        Object item;

        while (count == 0) {
            try {
                wait();
            } catch (InterruptedException e) {
            }
        }

        // remove an item from the buffer
        --count;
        item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;

        if (count == 0)
            System.out.println("Consumer Consumed " + item + " Buffer EMPTY");
        else
            System.out.println("Consumer Consumed " + item + " Buffer Size = " + count);

        notify();

        return item;
    }
}

Page 314: Chapter 6, Process Synchronization

314

An example scenario showing how the calls to wait() and notify() work

• Assume that the lock is available but the buffer is full

• The producer calls insert()

• The lock is available so it gets in

• The buffer is full so it calls wait()

• The producer releases the lock, gets blocked, and is put in the wait set

Page 315: Chapter 6, Process Synchronization

315

• The consumer eventually calls remove()

• There is no problem because the lock is available

• At the end of removing, the consumer calls notify()

• The call to notify() removes the producer from the wait set, puts it into the entry set, and makes it runnable

Page 316: Chapter 6, Process Synchronization

316

• When the consumer exits the remove() method, it gives up the lock

• The producer can now be scheduled

• The producer thread begins execution at the line of code following the wait() call which caused it to be put into the wait set

• After inserting, the producer calls notify()

• This would allow any other waiting thread to run

• If nothing was waiting, it has no effect

Page 317: Chapter 6, Process Synchronization

317

• The rest of the code is given here so it’s close by for reference

• It is the same as the rest of the code for the previous examples, so it may not be necessary to look at it again

Page 318: Chapter 6, Process Synchronization

318

/**
 * An interface for buffers
 *
 */

public interface Buffer
{
    /**
     * insert an item into the Buffer.
     * Note this may be either a blocking
     * or non-blocking operation.
     */
    public abstract void insert(Object item);

    /**
     * remove an item from the Buffer.
     * Note this may be either a blocking
     * or non-blocking operation.
     */
    public abstract Object remove();
}

Page 319: Chapter 6, Process Synchronization

319

/**
 * This is the producer thread for the bounded buffer problem.
 */

import java.util.*;

public class Producer implements Runnable {
    private Buffer buffer;

    public Producer(Buffer b) {
        buffer = b;
    }

    public void run() {
        Date message;

        while (true) {
            System.out.println("Producer napping");
            SleepUtilities.nap();

            // produce an item & enter it into the buffer
            message = new Date();
            System.out.println("Producer produced " + message);

            buffer.insert(message);
        }
    }
}

Page 320: Chapter 6, Process Synchronization

320

/**
 * This is the consumer thread for the bounded buffer problem.
 */
import java.util.*;

public class Consumer implements Runnable {
    private Buffer buffer;

    public Consumer(Buffer b) {
        buffer = b;
    }

    public void run() {
        Date message;

        while (true) {
            System.out.println("Consumer napping");
            SleepUtilities.nap();

            // consume an item from the buffer
            System.out.println("Consumer wants to consume.");

            message = (Date) buffer.remove();
        }
    }
}

Page 321: Chapter 6, Process Synchronization

321

/**
 * This creates the buffer and the producer and consumer threads.
 *
 */
public class Factory
{
    public static void main(String args[]) {
        Buffer server = new BoundedBuffer();

        // now create the producer and consumer threads
        Thread producerThread = new Thread(new Producer(server));
        Thread consumerThread = new Thread(new Consumer(server));

        producerThread.start();
        consumerThread.start();
    }
}

Page 322: Chapter 6, Process Synchronization

322

/**
 * Utilities for causing a thread to sleep.
 * Note, we should be handling interrupted exceptions
 * but choose not to do so for code clarity.
 */

public class SleepUtilities
{
    /**
     * Nap between zero and NAP_TIME seconds.
     */
    public static void nap() {
        nap(NAP_TIME);
    }

    /**
     * Nap between zero and duration seconds.
     */
    public static void nap(int duration) {
        int sleeptime = (int) (duration * Math.random());
        try { Thread.sleep(sleeptime * 1000); }
        catch (InterruptedException e) {}
    }

    private static final int NAP_TIME = 5;
}

Page 323: Chapter 6, Process Synchronization

323

Multiple Notifications

• A call to notify() picks one thread out of the wait set and puts it into the entry set

• What if there are >1 waiting threads?

• The book points out that using notify() alone can lead to deadlock

• This is an important problem, which motivates a discussion of notifyAll(), but the topic will not be covered in detail until the next chapter

Page 324: Chapter 6, Process Synchronization

324

• The general solution to any problems latent in calling notify() is to call notifyAll()

• This moves all of the waiting threads to the entry set

• At that point, which one runs next depends on the scheduler

• The selected one may immediately block

Page 325: Chapter 6, Process Synchronization

325

• However, if notifyAll() is always called, statistically, if there is at least one thread that can run, it will eventually be scheduled

• Any threads which depend on it could then run when they are scheduled, and progress will be made
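
• As a sketch of why this works, the standard idiom is to re-check the waited-for condition in a while loop, so a thread woken by notifyAll() that still cannot proceed simply waits again (the class and names below are my own illustration, not the book's code):

• public class Gate
• {
•     private boolean open = false;

•     public synchronized void pass() throws InterruptedException
•     {
•         while (!open)   // re-check the condition after every wake-up
•             wait();     // a woken thread that still can't proceed waits again
•     }

•     public synchronized void openGate()
•     {
•         open = true;
•         notifyAll();    // move all waiters to the entry set; each re-checks
•     }
• }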

Page 326: Chapter 6, Process Synchronization

326

notifyAll() and the Readers-Writers Problem

• The book gives full code for this.
• I will try to abstract their illustration without referring to the complete code.
• Remember that a read lock is not exclusive:
– Multiple reading threads are OK at the same time
– Only writers have to be blocked
• Write locks are exclusive:
– Any one writer blocks all other readers and writers

Page 327: Chapter 6, Process Synchronization

327

Synopsis of Read Lock Code
• acquireReadLock()
• {
•     while(…)
•         wait();
•     …
• }
• releaseReadLock()
• {
•     …
•     notify();
• }

Page 328: Chapter 6, Process Synchronization

328

• One writer will be notified when the readers finish.

• It does seem possible to call notifyAll(), in which case possibly >1 writer would contend to be scheduled, but it is sufficient to just ask the system to notify one waiting thread.

Page 329: Chapter 6, Process Synchronization

329

Synopsis of Write Lock Code
• acquireWriteLock()
• {
•     while(…)
•         wait();
•     …
• }
• releaseWriteLock()
• {
•     …
•     notifyAll();
• }

Page 330: Chapter 6, Process Synchronization

330

• All readers will be notified when the writer finishes (as well as any waiting writers).

• The point is to get all of the readers active since they are all allowed to read concurrently
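
• Filling in the blanks of the two synopses, a minimal sketch of such a lock might look like this (my abstraction of the idea, not the book's full code; the field names are made up):

• public class ReadWriteLock
• {
•     private int readers = 0;         // number of active readers
•     private boolean writing = false; // true while a writer holds the lock

•     public synchronized void acquireReadLock() throws InterruptedException
•     {
•         while (writing)              // readers only wait for a writer
•             wait();
•         ++readers;
•     }

•     public synchronized void releaseReadLock()
•     {
•         --readers;
•         if (readers == 0)
•             notify();                // only writers can be waiting; one is enough
•     }

•     public synchronized void acquireWriteLock() throws InterruptedException
•     {
•         while (writing || readers > 0) // writers wait for everyone
•             wait();
•         writing = true;
•     }

•     public synchronized void releaseWriteLock()
•     {
•         writing = false;
•         notifyAll();                 // wake all readers (and any waiting writers)
•     }
• }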

Page 331: Chapter 6, Process Synchronization

331

Block Synchronization

• Lock scope definition: the time between when a lock is acquired and when it is released (the term might also refer to the region of code where the lock is in effect)

• Declaring a method synchronized may lead to an unnecessarily long scope if large parts of the method don’t access the shared resource

• Java supports block synchronization syntax where just part of a method is made into a critical section

Page 332: Chapter 6, Process Synchronization

332

• Block synchronization is based on the idea that every object has a lock

• You can construct an instance of the Object class and use it as the lock for a block of code

• In other words, you use the lock of that object as the lock for the block

• Example code follows

Page 333: Chapter 6, Process Synchronization

333

• Object mutexLock = new Object();
• …
• public void someMethod()
• {
•     nonCriticalSection();
•     synchronized(mutexLock)
•     {
•         criticalSection();
•     }
•     remainderSection();
• }

Page 334: Chapter 6, Process Synchronization

334

• Block synchronization also allows the use of wait() and notify() calls:
• Object mutexLock = new Object();
• …
• synchronized(mutexLock)
• {
•     …
•     try
•     {
•         mutexLock.wait();
•     }
•     catch(InterruptedException ie)
•     {
•         …
•     }
•     …
• }
• …
• synchronized(mutexLock)
• {
•     mutexLock.notify();
• }

Page 335: Chapter 6, Process Synchronization

335

Synchronization Rules: I.e., Rules Affecting the Use of the Keyword synchronized

• 1. A thread that owns the lock for an object can enter another synchronized method (or block) for the same object.

• This is known as a reentrant or recursive lock.
• 2. A thread can nest synchronized calls for different objects.
• One thread can hold the lock for >1 object at the same time.
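
• A small sketch of rules 1 and 2 (my example, not the book's):

• public class Account
• {
•     private int balance = 0;

•     // rule 1: deposit() can call audit() even though both are
•     // synchronized on the same object; the lock is reentrant
•     public synchronized void deposit(int amount)
•     {
•         balance += amount;
•         audit();
•     }

•     public synchronized void audit()
•     {
•         System.out.println("balance = " + balance);
•     }

•     // rule 2: a thread can hold the locks of two objects at once
•     // by nesting synchronized blocks on different objects
•     // (the ordering of nested locks matters; see the deadlock chapter)
•     public void transferTo(Account other, int amount)
•     {
•         synchronized(this)
•         {
•             synchronized(other)
•             {
•                 balance -= amount;
•                 other.balance += amount;
•             }
•         }
•     }
• }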

Page 336: Chapter 6, Process Synchronization

336

• 3. Not all methods of a class have to be declared synchronized.
• A method that is not declared synchronized can be called regardless of lock ownership, i.e., even while another thread is running in a synchronized method of the same object

• 4. If the wait set for an object is empty, a call to notify() or notifyAll() has no effect.

Page 337: Chapter 6, Process Synchronization

337

• 5. wait(), notify(), and notifyAll() can only be called from within synchronized methods or blocks.

• Otherwise, an IllegalMonitorStateException is thrown.

• An additional note: For every class, in addition to the lock that every object of that class gets, there is also a class lock.

• That makes it possible to declare static methods or blocks in static methods synchronized
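
• For example, both forms in one sketch (my example, not the book's):

• public class Counter
• {
•     private static int count = 0;

•     // a static synchronized method acquires the class lock
•     // (the lock of Counter.class), not any instance's lock
•     public static synchronized void increment()
•     {
•         ++count;
•     }

•     // a block in a static method can name the class lock explicitly
•     public static void decrement()
•     {
•         synchronized(Counter.class)
•         {
•             --count;
•         }
•     }
• }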

Page 338: Chapter 6, Process Synchronization

338

Handling the InterruptedException

• If you go back to chapter 4, you’ll recall that the topic of asynchronous (immediate) and deferred thread cancellation (termination) came up

• Deferred cancellation was preferred.
• This meant that threads were cancelled by calling interrupt() rather than stop().

Page 339: Chapter 6, Process Synchronization

339

• Under this scenario, thread1 might have a reference to thread2

• Within the code for thread1, thread2 would be interrupted in this way:

• thread2.interrupt();

Page 340: Chapter 6, Process Synchronization

340

• Then thread2 can check its status with one of these two calls:

• Thread.interrupted(); // a static method: checks the current thread and clears the flag
• me.isInterrupted(); // an instance method: checks the flag without clearing it
• thread2 can then do any needed housekeeping (preventing inconsistent state) before terminating itself
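
• As a sketch, thread2's run() method might poll the flag like this (doUnitOfWork() and housekeeping() are hypothetical placeholders):

• public class Worker implements Runnable
• {
•     public void run()
•     {
•         // isInterrupted() tests the flag without clearing it
•         while (!Thread.currentThread().isInterrupted())
•         {
•             doUnitOfWork();
•         }
•         housekeeping();   // leave shared state consistent, then return
•     }

•     private void doUnitOfWork() { /* … */ }
•     private void housekeeping() { /* … */ }
• }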

Page 341: Chapter 6, Process Synchronization

341

• The question becomes, is it possible to interrupt (cancel) a thread that is in a wait set (i.e., is blocked)?

• A call to wait() has to occur in a block like this:
• try
• {
•     wait();
• }
• catch(InterruptedException ie)
• {
•     …
• }

Page 342: Chapter 6, Process Synchronization

342

• If a thread calls wait(), it goes into the wait set and stops executing

• As explained up to this point, the thread can’t resume until notify() or notifyAll() is called and it is picked for scheduling

• This isn’t entirely true

Page 343: Chapter 6, Process Synchronization

343

• The wait() call is the last live call of the thread.
• The system is set up so that thread1 might make a call like this while thread2 is in the wait set:

• thread2.interrupt();

Page 344: Chapter 6, Process Synchronization

344

• If such a call is made on thread2 while it’s in the wait set, the system will throw an InterruptedException back out to the point where thread2 made the call to wait()

• At that point, thread2 is no longer blocked because it’s kicked out of the wait set

• This means that thread2 can be scheduled without a call to notify(), but its state is now interrupted

• If you choose to handle the exception, then what you should do is provide the housekeeping code which thread2 needs to run so that it can terminate itself and leave a consistent state
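
• Putting the pieces together, the handler might look like this sketch (workAvailable(), consumeWork(), and cleanUp() are hypothetical placeholders for the application's own logic):

• public synchronized void awaitWork()
• {
•     try
•     {
•         while (!workAvailable())
•             wait();
•         consumeWork();
•     }
•     catch(InterruptedException ie)
•     {
•         // control lands here if another thread calls interrupt()
•         // while this thread sits in the wait set: do the housekeeping
•         // that leaves shared state consistent, then let the thread end
•         cleanUp();
•     }
• }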

Page 345: Chapter 6, Process Synchronization

345

• The foregoing can be summarized as follows:
• Java has this mechanism so that threads can be terminated even after they’ve disappeared into a wait set.
• This can be useful because there should be no need for a thread to either waste time in the wait set or run any further if it is slated for termination anyway.

• This is especially useful because it allows a thread which is slated for termination to release any locks or resources it might be holding.

• Why this is good will become even clearer in the following chapter, on deadlocks

Page 346: Chapter 6, Process Synchronization

346

Concurrency Features in Java

• What follows is just a listing of the features with minimal explanation

• If you want to write synchronized code in Java, check the API documentation

• 1. There is a class named ReentrantLock.
• This supports functionality similar to the synchronized keyword (or a semaphore), with added features like enforcing fairness in scheduling threads waiting for locks.
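
• A minimal sketch of ReentrantLock in use (ReentrantLock and its lock()/unlock() methods are the real java.util.concurrent.locks API; the counter class around them is my own):

• import java.util.concurrent.locks.ReentrantLock;

• public class LockedCounter
• {
•     // passing true requests the fairness policy: the longest-waiting
•     // thread acquires the lock first
•     private final ReentrantLock lock = new ReentrantLock(true);
•     private int count = 0;

•     public void increment()
•     {
•         lock.lock();            // like entering a synchronized block
•         try
•         {
•             ++count;            // critical section
•         }
•         finally
•         {
•             lock.unlock();      // always released, even on an exception
•         }
•     }
• }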

Page 347: Chapter 6, Process Synchronization

347

• 2. There is a class named Semaphore.
• Technically, the examples earlier were based on the authors’ hand-coded semaphore.
• If you want to use the Java Semaphore class, double check its behavior in the API.
• 3. There is an interface named Condition, and this type can be used to declare condition variables associated with reentrant locks.
• They play the role of the condition variables in a monitor, and their await(), signal(), and signalAll() methods are the reentrant-lock analogues of wait(), notify(), and notifyAll()
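
• A sketch of both in use (Semaphore, ReentrantLock, and Condition are the real java.util.concurrent types; the surrounding class and names are my own):

• import java.util.concurrent.Semaphore;
• import java.util.concurrent.locks.Condition;
• import java.util.concurrent.locks.ReentrantLock;

• public class Demo
• {
•     // a Semaphore initialized to 1 acts as a mutex
•     private final Semaphore mutex = new Semaphore(1);

•     public void criticalWork() throws InterruptedException
•     {
•         mutex.acquire();             // the semaphore's wait/P operation
•         try { /* critical section */ }
•         finally { mutex.release(); } // the signal/V operation
•     }

•     // a Condition is a condition variable tied to a reentrant lock
•     private final ReentrantLock lock = new ReentrantLock();
•     private final Condition ready = lock.newCondition();
•     private boolean isReady = false;

•     public void awaitReady() throws InterruptedException
•     {
•         lock.lock();
•         try
•         {
•             while (!isReady)
•                 ready.await();       // the analogue of wait()
•         }
•         finally { lock.unlock(); }
•     }

•     public void makeReady()
•     {
•         lock.lock();
•         try
•         {
•             isReady = true;
•             ready.signalAll();       // the analogue of notifyAll()
•         }
•         finally { lock.unlock(); }
•     }
• }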

Page 348: Chapter 6, Process Synchronization

348

• 6.9 Synchronization Examples: Solaris, XP, Linux, Pthreads. SKIP

• 6.10 Atomic Transactions: This is a fascinating topic that has as much to do with databases as operating systems… SKIP

• 6.11 Summary. SKIP

Page 349: Chapter 6, Process Synchronization

349

The End