Operating Systems Lecture 1 : Introduction

1.0 Main Points

• Why study OS? (1.1)

• What is OS? (1.2)

• Principle of OS design (1.3)

• History of OS (1.4)

1.1 Why Study OS?

• Abstraction : illusion of infinite H.W.

• System Design : tradeoffs between H.W. and S.W.

• Look under the hood : to examine the engine

• Capstone : combine things

1.2 What is OS?

• Definition : OS implements a virtual machine that is easier to program than the raw H.W.

1.2.1 OS has two Functions

• Coordinator : Concurrency , Memory Protection, etc…

• Standard Service: Standard Libraries, Windowing Systems.

WIMP : window, icon, menu, pointer

1.2.2 If you don’t have an OS

• source code (.c) -> compiler -> obj-code (.asm, .bin) -> H.W.

1.2.3 Simple or Complex

• early PCs, early smartphones, embedded systems : only one app at a time, so the OS is just a std lib.

• modern OS : manages interactions, shares H.W.

1.2.4 OS coordination

• example : Protection - keep user programs from crashing the OS, and keep user programs from crashing each other (apps)

1.2.4.1 H.W. support

• address translation

• dual mode operation

1.2.4.2 Address translation

CPU --(virtual address)--> *MMU --(real address)--> Physical Mem. (DATA); some accesses pass through untranslated

* The memory management unit (MMU) remembers the actual address behind every virtual address; when the CPU issues a virtual address, the MMU forwards it to the real location in physical memory.

1.2.4.3 Dual mode Oper.

• User-mode : restricted to only program’s memory.

• Kernel-mode : can do anything in OS.

Application

Standard library

*Portable OS layer

Machine-dependent OS layer

* Not Another Completely Heuristic Operating System (NACHOS) is instructional software for teaching undergraduate, and potentially graduate-level, operating systems courses. Its portable OS layer simulates the H.W. and the machine-dependent layer (interrupts, etc.) and the execution of user programs running on top.

1.3 OS Principles

• Illusionist : make illusion of dedicated infinite H.W.

• Government : protect users & allocate resources

• Complex system : keep things simple so they work

• History teacher : learn from past & predict future

1.4 History of OS

• H.W. expensive, Humans cheap

• H.W. cheap, Humans expensive

• H.W. very cheap, Humans very expensive

• Distributed systems

* Techniques have to vary over time, adapt to changing tradeoffs.

1.4.1 H.W. expensive, Humans cheap

• User in console

• batch monitor

• Data channels, Interrupts

• Memory Protection + relocation

1.4.2 H.W. cheap, Humans expensive

• Interactive timesharing

1.4.3 H.W. very cheap, Humans very expensive

• Personal computing

1.4.4 Distributed Systems

• allow different machines to share resources by Networking

Operating Systems Lecture 2 : Threads & Processes

Contents

I. Concurrency

II. Threads & Dispatching

III. Independent vs. Cooperating threads

1.1 Concurrency

• Decompose hard problems into simpler ones

1.2 Process

• Operating system abstraction : a sequential stream in its own address space

1.2.1 Two parts of Process

• Sequential Execution : No concurrency in process

• Process state: registers, main memory, files in UNIX

1.2.2 Process ~ Program

• program is just part of process state

• program can invoke more than one process

1.2.3 Definition

• Uni-programming : one process at a time -> easier for the programmer, harder for the user

• Multi-programming : more than one process at a time

1.3 Threads

• Thread : sequential stream within a process

• Address Space : provide *illusion

*Protection : make illusion that program is running on its own machine

1.3.1 Why Separate?

• A thread is part of a process; many situations want multiple threads per address space

• *Multi-threading : a single program made up of a number of concurrent tasks

* same as Multi-tasking in Ada

1.3.2 Examples

• Embedded systems

• Modern OS

• Network servers : get multiple requests

• Parallel computing : makes it run faster

1.3.3 Thread State

• Program Counter

• Registers

• Execution Stack : parameters, variables, and the return PC are kept while another thread runs.

1.3.4 Address space state

• Threads encapsulate concurrency

• Address spaces encapsulate protection -> keep a buggy program from trashing the whole system

• Address space state : contents of main memory, UNIX files

1.4 Classification

                   addr. space: one    addr. space: many

threads: one       DOS/Mac             old UNIX

threads: many      Embedded            modern OS

1.5 Summary

• Process has two parts : Thread & Address space

• Thread concerns concurrency

• Address space concerns protection

2.0 Threads & Dispatching

• All threads share the same uni-processor. Then, how does it work?

- Thread control block (TCB)

- Dispatching loop

2.1 Per-thread state

• Registers

• Program Counter

• Stack Pointer

• Scheduling information

• etc.

2.2 Dispatching Loop

• Run thread

• Save state into TCB

• Choose thread to run in Pool

• Load chosen thread’s state & Loop

2.2.1 Running a thread

• How does dispatcher get control back?

1. Internal events : i) thread blocks on I/O, ii) thread blocks waiting, iii) yield

2. External events : i) interrupts, ii) timer

2.2.2 Choosing Thread

• LIFO : results in starvation

• FIFO : what Nachos does

• Priority queue : gives some threads a better shot

* Round Robin Scheduling : a time-sharing algorithm that slices time and gives a quota to each thread; after the quota expires (timer interrupt), the thread is dispatched out and the next one runs.

2.2.3 Thread States

Ready --(Scheduled)--> Running

Running --(Yield, Timer)--> Ready

Running --(I/O Request)--> Blocked

Blocked --(I/O Complete)--> Ready

2.2.4 Context Switching

2.2.5 Interrupts

1. The CPU stops when I/O finishes or the timer expires, and starts running the interrupt handler.

2. Handler saves interrupted Thread’s state

3. Handler runs

4. Handler restores *thread’s state

5. Return to normal execution

* if time-slicing, restore the next thread’s state instead

2.3 Thread Creation

2.4 Multiprocessing & Multiprogramming

• Multiprocessing : each *processor runs a thread

• Multiprogramming : the dispatcher chooses which thread to run

* It may be a real processor or a simulated one.

3.0 Independent vs. Cooperating Threads

• handle cooperating threads

• Atomic operation

3.1 Definition

• Independent threads : no shared state, deterministic, reproducible

• Cooperating threads : shared state, non-deterministic, non-reproducible

* Cooperation means that bugs can be intermittent, so threads must be synchronized.

3.2 Why Cooperating?

• Share resource(information)

• Speedup

• Modularity

3.3 Simple concurrent

3.4 Atomic Operation

• Can’t stop in the middle

• Instructions can be atomic or not : it differs by architecture

LD / ST of a ‘double’ value :

• x86 : non-atomic -> two separate 32-bit operations

• x64 : atomic -> only one 64-bit operation

Operating Systems Lecture 3 : Concurrency - part 1

Synchronization

1.0 Definition

• Synchronization : ensure cooperation

• Mutual exclusion : One thread excludes the other, and vice versa.

• Critical section : a section only one thread can run at once

• Lock : prevents someone from doing something

1.1 too much milk #1

• If a thread just goes out to buy milk, the other thread has no way to know it left -> leave a note.

1.2 too much milk #2

• Check whether a note is up, then leave one -> if a context switch happens right after the check, both threads leave notes and go out to buy milk.

1.3 too much milk #3

• Leave a note first, then check for the other thread’s note -> A leaves its note, context switch -> B leaves its note, context switch -> both notes are up, so neither buys milk (starvation).

1.4 too much milk #4

• Make A the ‘doubter’ (while loop) -> the problem is solved, but thread A alone carries the extra busy-waiting load -> and that load grows heavier as threads are added.

1.5 too much milk #5

• Lock::Acquire / Lock::Release -> every thread except the one that first finds emptyMilk stops immediately. -> During Acquire/Release, interrupts are disabled so that only the locking work proceeds; this is also described as making I/O pending.