I/O Management


Transcript of I/O Management

Page 1: I/O Management

Interrupts Revisited

How interrupts happen: connections between devices and the interrupt controller actually use interrupt lines on the bus rather than dedicated wires.

Page 2: I/O Management

Principles of I/O Software - Goals of I/O Software (1)

• Device independence
  • programs can access any I/O device without specifying the device in advance
  • (floppy, hard drive, or CD-ROM)

• Uniform naming
  • the name of a file or device is simply a string or an integer, not depending on which machine it is on

• Error handling
  • handle errors as close to the hardware as possible

Page 3: I/O Management

Goals of I/O Software (2)

• Synchronous vs. asynchronous transfers
  • blocking transfers vs. interrupt-driven transfers

• Buffering
  • data coming off a device often cannot be stored directly in its final destination

• Sharable vs. dedicated devices
  • disks are sharable; tape drives would not be

Page 4: I/O Management

Programmed I/O

• Steps in printing a string.

Page 5: I/O Management

Programmed I/O (Polling)

• Data is copied from user space to kernel space
• The OS then enters a tight loop, polling the device (usually by checking a status register) to see if it is ready for more data
• Once the device is ready to accept more data, the OS copies the next data item to the device register
• Once all data has been copied, control is returned to the user program
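A minimal C sketch of the polling loop described above (not from the slides): the register addresses, the READY bit, and the function name are assumptions made only for illustration.

/* Programmed I/O (polling): hand characters to the printer one at a time,
 * busy-waiting on a status register. Register addresses and the READY bit
 * are hypothetical. */
#include <stddef.h>
#include <stdint.h>

#define PRINTER_STATUS ((volatile uint8_t *)0x40001000)   /* assumed address */
#define PRINTER_DATA   ((volatile uint8_t *)0x40001004)   /* assumed address */
#define PRINTER_READY  0x01                                /* assumed ready bit */

void print_string_programmed_io(const char *kbuf, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        while ((*PRINTER_STATUS & PRINTER_READY) == 0)
            ;                               /* busy wait: no useful work gets done */
        *PRINTER_DATA = (uint8_t)kbuf[i];   /* copy the next character to the device register */
    }
    /* all data copied: control returns to the user program */
}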

Page 6: I/O Management

Programmed I/O

• Send data one character at a time

Writing a string to the printer using programmed I/O

Page 7: I/O Management

Good vs Bad

• What is good about programmed I/O?
  • Usually simple to implement

• What is bad about programmed I/O?
  • Since we tie up the CPU checking whether we can access the device, we are in a state of “busy waiting”. No “real” work is being done.

• Programmed I/O is OK when the CPU has nothing else to do, or when the action being performed is very short (the CPU has little idle time)

Page 8: I/O Management

Interrupt Driven I/O

• Idea: block the process which requests I/O and schedule another process
• Return to the calling process when the I/O is done
• The printer generates an interrupt when a character is printed
• Printing continues until the end of the string
• Then re-instantiate the calling process

Page 9: I/O Management

Interrupt-Driven I/O

• Writing a string to the printer using interrupt-driven I/O
  • Code executed when the print system call is made
  • Interrupt service procedure
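A hedged C sketch of the two pieces listed above, loosely following the structure of the textbook figure; the register macros, buffer size, and the blocking/scheduling steps are assumptions, not a real kernel API.

/* Interrupt-driven printing: the system call starts the first character and
 * blocks the caller; the interrupt service procedure feeds the rest. */
#include <stddef.h>
#include <stdint.h>

#define PRINTER_STATUS ((volatile uint8_t *)0x40001000)   /* assumed address */
#define PRINTER_DATA   ((volatile uint8_t *)0x40001004)   /* assumed address */
#define PRINTER_READY  0x01                                /* assumed ready bit */

static char   kbuf[256];   /* kernel copy of the string */
static size_t remaining;   /* characters still to print */
static size_t next;        /* index of the next character */

/* Code executed when the print system call is made. */
void sys_print(const char *s, size_t n)
{
    for (size_t i = 0; i < n && i < sizeof kbuf; i++)
        kbuf[i] = s[i];                      /* copy the string into the kernel buffer */
    remaining = n;
    next = 0;

    while ((*PRINTER_STATUS & PRINTER_READY) == 0)
        ;                                    /* wait only for the very first character */
    *PRINTER_DATA = (uint8_t)kbuf[next++];
    remaining--;
    /* block the calling process and schedule another one (details omitted) */
}

/* Interrupt service procedure: runs each time a character has been printed. */
void printer_interrupt(void)
{
    if (remaining == 0) {
        /* whole string printed: unblock (re-instantiate) the calling process */
    } else {
        *PRINTER_DATA = (uint8_t)kbuf[next++];   /* output the next character */
        remaining--;
    }
    /* acknowledge the interrupt controller and return from the interrupt */
}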

Page 10: I/O Management

DMA

• Use a DMA controller to send characters to the printer instead of using the CPU
• The CPU is interrupted only when the whole buffer has been printed, instead of when each character is printed
• DMA is worth it if (1) the DMA controller can drive the device as fast as the CPU could drive it, and (2) there is enough data to make it worthwhile

Page 11: I/O Management

I/O Using DMA

• Printing a string using DMA
  • code executed when the print system call is made
  • interrupt service procedure
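A hedged C sketch of DMA-based printing; the DMA controller's register layout, names, and the 32-bit address assumption are made up for illustration and do not describe a real controller.

/* DMA printing: program the DMA controller once, then block the caller;
 * the CPU is interrupted only after the whole buffer has been printed. */
#include <stddef.h>
#include <stdint.h>

#define DMA_SRC_ADDR  ((volatile uint32_t *)0x40002000)   /* assumed: buffer address   */
#define DMA_COUNT     ((volatile uint32_t *)0x40002004)   /* assumed: byte count       */
#define DMA_CMD       ((volatile uint32_t *)0x40002008)   /* assumed: command register */
#define DMA_CMD_START 0x01

static char kbuf[256];   /* kernel copy of the string */

/* Code executed when the print system call is made. */
void sys_print_dma(const char *s, size_t n)
{
    for (size_t i = 0; i < n && i < sizeof kbuf; i++)
        kbuf[i] = s[i];                         /* copy the string into the kernel buffer */

    *DMA_SRC_ADDR = (uint32_t)(uintptr_t)kbuf;  /* where the controller reads characters from */
    *DMA_COUNT    = (uint32_t)n;                /* how many characters to transfer */
    *DMA_CMD      = DMA_CMD_START;              /* start the transfer */
    /* block the calling process and schedule another one (details omitted) */
}

/* Interrupt service procedure: runs once, after the entire buffer is printed. */
void dma_print_interrupt(void)
{
    /* acknowledge the interrupt and unblock the calling process */
}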

Page 12: I/O Management

I/O Software Layers

Layers of the I/O software system.

Page 13: I/O Management

Interrupt Handlers

• The idea: the driver starting the I/O blocks until the interrupt happens when the I/O finishes
• The handler processes the interrupt
• It wakes up the driver when processing is finished
• Drivers are kernel processes with their very own
  • stacks
  • program counters (PCs)
  • states

Page 14: I/O Management

Interrupt processing details

1. Save registers not already saved by the interrupt hardware.

2. Set up a context for the interrupt service procedure.

3. Set up a stack for the interrupt service procedure.

4. Acknowledge the interrupt controller. If there is no centralized interrupt controller, re-enable interrupts.

5. Copy the registers from where they were saved to the process table.

Page 15: I/O Management

Interrupt processing details

6. Run the interrupt service procedure.

7. Choose which process to run next.

8. Set up the MMU context for the process to run next.

9. Load the new process’ registers, including its PSW.

10. Start running the new process.

Page 16: I/O Management

Device Drivers

Page 17: I/O Management

Device Drivers - Act 1

• The driver contains code specific to the device
  • Supplied by the manufacturer
  • Installed in the kernel (user space might be a better place)
• Needs an interface to the OS
  • block and character interfaces
  • procedures which the OS can call to invoke the driver (e.g. read a block)

Page 18: I/O Management

Device Drivers - Act 2

• Checks input parameters for validity
• Abstract-to-concrete translation (block number to cylinder, head, track, sector)
• Checks device status; might have to start it
• Puts commands in the device controller’s registers
• The driver blocks itself until the interrupt arrives
• Might return data to the caller
• Does return status information

Page 19: I/O Management

Device-Independent I/O Software (1)

Functions of the device-independent I/O software

Uniform interfacing for device drivers

Buffering

Error reporting

Allocating and releasing dedicated devices

Providing a device-independent block size

Page 20: I/O Management

Device-Independent I/O Software (2)

(a) Without a standard driver interface
(b) With a standard driver interface

Page 21: I/O Management

Interface: Driver functions

• Driver functions differ for different drivers
• The kernel functions each driver needs also differ from driver to driver
• It would be too much work to have a new interface for each new device type
• Instead, the OS defines the functions each class of devices MUST supply, e.g. read, write, turn on, turn off, ...
• The driver has a table of pointers to these functions (see the sketch below)
• The OS just needs the table's address to call the functions
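A hedged C sketch of such a table of pointers to functions; the struct name, its fields, and the ramdisk example are illustrative assumptions, not any particular OS's driver interface.

#include <stddef.h>

/* Functions every block-device driver in this sketch must supply. */
struct block_dev_ops {
    int (*open)(int minor);
    int (*close)(int minor);
    int (*read_block)(int minor, long block, void *buf);
    int (*write_block)(int minor, long block, const void *buf);
};

/* A particular driver fills the table with its own routines (stubs here). */
static int ramdisk_open(int minor)  { (void)minor; return 0; }
static int ramdisk_close(int minor) { (void)minor; return 0; }
static int ramdisk_read(int minor, long blk, void *buf)        { (void)minor; (void)blk; (void)buf; return 0; }
static int ramdisk_write(int minor, long blk, const void *buf) { (void)minor; (void)blk; (void)buf; return 0; }

static const struct block_dev_ops ramdisk_ops = {
    .open        = ramdisk_open,
    .close       = ramdisk_close,
    .read_block  = ramdisk_read,
    .write_block = ramdisk_write,
};

/* The OS only needs the table's address to call any driver the same way. */
int os_read_block(const struct block_dev_ops *ops, int minor, long blk, void *buf)
{
    return ops->read_block(minor, blk, buf);
}

/* Example: read block 0 of minor device 0 through the table. */
int example(void *buf) { return os_read_block(&ramdisk_ops, 0, 0, buf); }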

Page 22: I/O Management

Device-Independent I/O Software (3)

(a) Unbuffered input
(b) Buffering in user space
(c) Buffering in the kernel followed by copying to user space
(d) Double buffering in the kernel

Page 23: I/O Management

Device-Independent I/O Software (4)

Networking may involve many copies

Page 24: I/O Management

User-Space I/O Software

Layers of the I/O system and the main functions of each layer

Page 25: I/O Management

Disks

• Magnetic (hard) disks
  • Reads and writes are equally fast => good for storing file systems
  • Disk arrays are used for reliable storage (RAID)

• Optical disks (CD-ROM, CD-Recordable, DVD) are used for program distribution

Page 26: I/O Management

What does the disk look like?

Page 27: I/O Management

• Seek time is 7x better, transfer rate is 1300x better, capacity is 50,000x better.

Floppy vs hard disk (20 years apart)

Page 28: I/O Management

Disks - more stuff

• Some disks have microcontrollers which do bad-block re-mapping and track caching
• Some are capable of doing more than one seek at a time, i.e. they can read on one disk while writing on another
• Real disk geometry is different from the geometry used by the driver => the controller has to re-map a request for (cylinder, head, sector) onto the actual disk
• Disks are divided into zones, with fewer sectors per track on the inside, gradually progressing to more on the outside

Page 29: I/O Management

Disk Zones

(a) Physical geometry of a disk with two zones. (b) A possible virtual geometry for this disk.

Page 30: I/O Management

RAID Motivation

• Disks are improving, but not as fast as CPUs
  • 1970s seek time: 50-100 ms; 2000s seek time: <5 ms
  • A factor of 20 improvement in 3 decades
• We can use multiple disks to improve performance
  • By striping files across multiple disks (placing parts of each file on a different disk), parallel I/O can improve access time
  • However, striping reduces reliability
• So we need striping for performance, plus something to help with reliability/availability
  • To improve reliability, we can add redundant data to the disks, in addition to striping

Page 31: I/O Management

Redundant Array of Inexpensive Disks (RAID)

• Parallel I/O to improve performance and reliability
  • vs. SLED: Single Large Expensive Disk
• A bunch of disks which appear as a single disk to the OS
• SCSI disks are often used: cheap, 7 disks per controller
  • SCSI is a set of standards for connecting the CPU to peripherals
• Different architectures: level 0 through level 6

Page 32: I/O Management

Raid Level 0

• Level 0 is a non-redundant disk array
• Files are striped across disks, no redundant info
• High read throughput
• Best write throughput (no redundant info to write)
• Any disk failure results in data loss
• Reliability is worse than a SLED

data disks

Stripe 0  Stripe 1  Stripe 2   Stripe 3
Stripe 4  Stripe 5  Stripe 6   Stripe 7
Stripe 8  Stripe 9  Stripe 10  Stripe 11
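A hedged C sketch of the block mapping implied by the layout above; the struct and function names are illustrative.

#include <stdio.h>

struct raid0_location {
    int  disk;     /* which member disk holds the stripe */
    long row;      /* stripe row within that disk */
};

/* RAID 0: stripes are assigned round-robin across the disks. */
static struct raid0_location raid0_map(long stripe_number, int num_disks)
{
    struct raid0_location loc;
    loc.disk = (int)(stripe_number % num_disks);
    loc.row  = stripe_number / num_disks;
    return loc;
}

int main(void)
{
    /* With 4 disks, stripe 10 lands on disk 2, row 2, as in the layout above. */
    struct raid0_location loc = raid0_map(10, 4);
    printf("stripe 10 -> disk %d, row %ld\n", loc.disk, loc.row);
    return 0;
}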

Page 33: I/O Management

Raid Level 1

• Mirrored disks
• Data is written to two places

• On failure, just use the surviving disk
• On a read, choose the faster disk to read from

• Write performance is the same as a single drive; read performance is 2x better

• Expensive

data disks                                  mirror copies

Stripe 0  Stripe 1  Stripe 2   Stripe 3     Stripe 0  Stripe 1  Stripe 2   Stripe 3
Stripe 4  Stripe 5  Stripe 6   Stripe 7     Stripe 4  Stripe 5  Stripe 6   Stripe 7
Stripe 8  Stripe 9  Stripe 10  Stripe 11    Stripe 8  Stripe 9  Stripe 10  Stripe 11

Page 34: I/O Management

Parity Codes

• What do you need to do in order to detect and correct a one-bit error?
• Suppose you have a binary number, represented as a collection of bits: <b3, b2, b1, b0>, e.g. 0110
• Detection is easy
• Parity:
  • Count the number of bits that are on and see whether it is odd or even
  • EVEN parity is 0 if the number of 1 bits is even
  • Parity(<b3, b2, b1, b0>) = p0 = b0 ⊕ b1 ⊕ b2 ⊕ b3
  • Parity(<b3, b2, b1, b0, p0>) = 0 if all bits are intact
  • Parity(0110) = 0, Parity(01100) = 0
  • Parity(11100) = 1 => ERROR!
• Parity can detect a single error, but can’t tell you which of the bits got flipped
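A hedged C sketch of even-parity generation and checking for the 4-bit example above; the function names are illustrative.

#include <stdio.h>

/* Even parity over the low 4 bits <b3 b2 b1 b0>: p0 = b0 ^ b1 ^ b2 ^ b3. */
static unsigned parity4(unsigned bits)
{
    return ((bits >> 0) ^ (bits >> 1) ^ (bits >> 2) ^ (bits >> 3)) & 1u;
}

/* Check a 5-bit codeword <b3 b2 b1 b0 p0>: XOR of all five bits is 0 if intact. */
static unsigned parity_check(unsigned codeword)
{
    unsigned p = 0;
    for (int i = 0; i < 5; i++)
        p ^= (codeword >> i) & 1u;
    return p;                       /* 0 = looks intact, 1 = single-bit error detected */
}

int main(void)
{
    unsigned data = 0x6;                           /* 0110 */
    unsigned code = (data << 1) | parity4(data);   /* 01100 */
    printf("check(01100) = %u\n", parity_check(code));           /* 0: intact */
    printf("check(11100) = %u\n", parity_check(code ^ 0x10u));   /* 1: error, but not where */
    return 0;
}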

Page 35: I/O Management

Parity and Hamming Code

• Detection and correction require more work
• Hamming codes can detect double-bit errors and detect & correct single-bit errors
• (7,4) Hamming code:
  • h0 = b0 ⊕ b1 ⊕ b3
  • h1 = b0 ⊕ b2 ⊕ b3
  • h2 = b1 ⊕ b2 ⊕ b3
  • For data <1101>: h0(<1101>) = 0, h1(<1101>) = 1, h2(<1101>) = 0
  • Hamming(<1101>) = <b3, b2, b1, h2, b0, h1, h0> = <1100110>
  • Suppose a bit is flipped, e.g. the received word is <1110110>
  • Recomputing the check bits from the received data <1111> gives <h2, h1, h0> = <111>; compared with the received check bits <010>, the syndrome is <101> = 5, so the error occurred in bit 5
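A hedged C sketch of this (7,4) Hamming code using the slide's bit layout <b3 b2 b1 h2 b0 h1 h0>; the position numbering (1 to 7, right to left) and the function names are assumptions of the sketch.

#include <stdio.h>

static unsigned bit(unsigned v, int i) { return (v >> i) & 1u; }

/* Encode 4 data bits <b3 b2 b1 b0> into the 7-bit codeword <b3 b2 b1 h2 b0 h1 h0>. */
static unsigned hamming_encode(unsigned data)
{
    unsigned b0 = bit(data, 0), b1 = bit(data, 1), b2 = bit(data, 2), b3 = bit(data, 3);
    unsigned h0 = b0 ^ b1 ^ b3;
    unsigned h1 = b0 ^ b2 ^ b3;
    unsigned h2 = b1 ^ b2 ^ b3;
    /* positions:   7           6           5           4           3           2          1 */
    return (b3 << 6) | (b2 << 5) | (b1 << 4) | (h2 << 3) | (b0 << 2) | (h1 << 1) | h0;
}

/* Return the position (1..7) of a single flipped bit, or 0 if the word checks out. */
static unsigned hamming_syndrome(unsigned code)
{
    /* each check bit covers the positions whose binary position number has that bit set */
    unsigned s0 = bit(code, 0) ^ bit(code, 2) ^ bit(code, 4) ^ bit(code, 6);  /* positions 1,3,5,7 */
    unsigned s1 = bit(code, 1) ^ bit(code, 2) ^ bit(code, 5) ^ bit(code, 6);  /* positions 2,3,6,7 */
    unsigned s2 = bit(code, 3) ^ bit(code, 4) ^ bit(code, 5) ^ bit(code, 6);  /* positions 4,5,6,7 */
    return (s2 << 2) | (s1 << 1) | s0;
}

int main(void)
{
    unsigned code = hamming_encode(0xD);            /* data 1101 */
    printf("codeword = ");
    for (int i = 6; i >= 0; i--)
        printf("%u", bit(code, i));                 /* prints 1100110 */
    printf("\nsyndrome with bit 5 flipped = %u\n",
           hamming_syndrome(code ^ (1u << 4)));     /* prints 5, matching the slide */
    return 0;
}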

Page 36: I/O Management

Raid Level 2

• Bit-level striping with Hamming (ECC) codes for error correction
• All 7 disk arms are synchronized and move in unison
• Complicated controller
• Single access at a time
• Tolerates only one error, but with no performance degradation

data disks: Bit 0  Bit 1  Bit 2  Bit 3        ECC disks: Bit 4  Bit 5  Bit 6

Page 37: I/O Management

Raid Level 3

• Use a parity disk
• Each bit on the parity disk is a parity function of the corresponding bits on all the other disks
• A read accesses all the data disks
• A write accesses all data disks plus the parity disk
• On disk failure, read the remaining disks plus the parity disk to compute the missing data

data disks: Bit 0  Bit 1  Bit 2  Bit 3        Parity disk: Parity
A single parity disk can be used to detect and correct errors
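A hedged C sketch of how parity supports reconstruction: the parity block is the XOR of the corresponding data blocks, so a failed disk's block can be rebuilt by XORing the surviving blocks with the parity block. The disk count and block size below are made-up illustration values.

#include <stdio.h>
#include <string.h>

#define NDISKS 4
#define BLOCK  8    /* tiny block size, for illustration only */

/* parity[i] = data[0][i] ^ data[1][i] ^ ... for every byte of the block */
static void compute_parity(unsigned char data[NDISKS][BLOCK], unsigned char parity[BLOCK])
{
    memset(parity, 0, BLOCK);
    for (int d = 0; d < NDISKS; d++)
        for (int i = 0; i < BLOCK; i++)
            parity[i] ^= data[d][i];
}

/* Rebuild the block of a failed disk by XORing the surviving disks with parity. */
static void rebuild(unsigned char data[NDISKS][BLOCK], unsigned char parity[BLOCK],
                    int failed, unsigned char out[BLOCK])
{
    memcpy(out, parity, BLOCK);
    for (int d = 0; d < NDISKS; d++)
        if (d != failed)
            for (int i = 0; i < BLOCK; i++)
                out[i] ^= data[d][i];
}

int main(void)
{
    unsigned char data[NDISKS][BLOCK] = { "disk0..", "disk1..", "disk2..", "disk3.." };
    unsigned char parity[BLOCK], recovered[BLOCK];

    compute_parity(data, parity);
    rebuild(data, parity, 2, recovered);            /* pretend disk 2 failed */
    printf("recovered: %.8s\n", (char *)recovered); /* prints disk2.. */
    return 0;
}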

Page 38: I/O Management

Raid Level 4

• Combines Level 0 and 3: block-level parity with stripes
• A read accesses all the data disks
• A write accesses all data disks plus the parity disk
• Heavy load on the parity disk

data disks                                  Parity disk

Stripe 0  Stripe 1  Stripe 2   Stripe 3     P0-3
Stripe 4  Stripe 5  Stripe 6   Stripe 7     P4-7
Stripe 8  Stripe 9  Stripe 10  Stripe 11    P8-11

Page 39: I/O Management

Raid Level 5

• Block-Interleaved Distributed Parity
• Like the parity scheme, but the parity info is distributed over all disks (as well as the data over all disks)
• Better read performance, but a large write penalty
• Reads can outperform SLEDs and RAID-0

data and parity disks

Stripe 0  Stripe 1  Stripe 2   Stripe 3     P0-3
Stripe 4  Stripe 5  Stripe 6   P4-7         Stripe 7
Stripe 8  Stripe 9  P8-11      Stripe 10    Stripe 11

Page 40: I/O Management

RAID Levels 0,1,2

Backup and parity drives are shown shaded.

Page 41: I/O Management

RAID

Backup and parity drives are shown shaded.

Page 42: I/O Management

Innovative Work & Knowledge

Page 43: I/O Management

Disk Arm Scheduling – Motivation

• Disk drives are addressed as large 1-dimensional arrays of logical blocks, where the logical block is the smallest unit of transfer.
• The 1-dimensional array of logical blocks is mapped onto the sectors of the disk sequentially.
  • Sector 0 is the first sector of the first track on the outermost cylinder.
  • Mapping proceeds in order through that track, then through the rest of the tracks in that cylinder, and then through the rest of the cylinders from outermost to innermost.

• The operating system is responsible for using the hardware efficiently: for the disk drives, this means achieving fast access time and high disk bandwidth.
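A hedged C sketch of this sequential mapping, assuming an idealized fixed geometry; the sectors-per-track and heads-per-cylinder values are made up, and real zoned disks are remapped by the controller as noted earlier.

#include <stdio.h>

#define SECTORS_PER_TRACK 63    /* assumed */
#define HEADS_PER_CYL     16    /* assumed: tracks per cylinder */

struct chs { long cylinder; int head; int sector; };

/* Map a logical block number to (cylinder, head, sector), cylinder 0 outermost. */
static struct chs block_to_chs(long block)
{
    struct chs a;
    a.sector   = (int)(block % SECTORS_PER_TRACK);                   /* within the track      */
    a.head     = (int)((block / SECTORS_PER_TRACK) % HEADS_PER_CYL); /* track in the cylinder */
    a.cylinder = block / (SECTORS_PER_TRACK * HEADS_PER_CYL);        /* which cylinder        */
    return a;
}

int main(void)
{
    struct chs a = block_to_chs(100000);
    printf("block 100000 -> cylinder %ld, head %d, sector %d\n", a.cylinder, a.head, a.sector);
    return 0;
}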

Page 44: I/O Management

Factors For Learning

• The time required to read or write a disk block is determined by 3 factors:

1. Seek time
2. Rotational delay
3. Actual transfer time

Page 45: I/O Management

Disk Access Time

• The average time to access some target sector is approximated by:

  Taccess = Tavgseek + Tavgrotation + Tavgtransfer

• Seek time (Tavgseek): time to position the heads over the cylinder containing the target sector
• Rotational latency (Tavgrotation): time waiting for the first bit of the target sector to pass under the read/write head
• Transfer time (Tavgtransfer): time to read the bits in the target sector
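As a hedged worked example with assumed parameters (9 ms average seek, 7200 RPM spindle, 400 sectors per track): Tavgrotation = 0.5 × (60/7200) s ≈ 4.17 ms, Tavgtransfer = (60/7200)/400 s ≈ 0.02 ms, so Taccess ≈ 9 + 4.17 + 0.02 ≈ 13.2 ms. Seek and rotation dominate, which is why the disk arm scheduling algorithms below focus on reducing seek distance.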

Page 46: I/O Management

Disk Scheduling

• Several algorithms exist to schedule the servicing of disk I/O requests.
• We illustrate them with a request queue (cylinders 0-199):

98, 183, 37, 122, 14, 124, 65, 67

Head pointer 53

Page 47: I/O Management

FCFS

• Illustration shows total head movement of 640 cylinders:

  |98-53| + |183-98| + |37-183| + |122-37| + |14-122| + |124-14| + |65-124| + |67-65|
  = 45 + 85 + 146 + 85 + 108 + 110 + 59 + 2
  = 640
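A hedged C helper (not part of the slides) that totals head movement for requests served in arrival order; it reproduces the 640-cylinder figure above and can be used to check the exercises on the next page.

#include <stdio.h>
#include <stdlib.h>

/* Sum of absolute head movements when requests are served first-come, first-served. */
static long fcfs_head_movement(int start, const int *req, int n)
{
    long total = 0;
    int pos = start;
    for (int i = 0; i < n; i++) {
        total += labs((long)req[i] - pos);   /* distance to the next request */
        pos = req[i];
    }
    return total;
}

int main(void)
{
    int queue[] = { 98, 183, 37, 122, 14, 124, 65, 67 };
    printf("FCFS total = %ld cylinders\n", fcfs_head_movement(53, queue, 8));  /* 640 */
    return 0;
}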

Page 48: I/O Management

Lets Do IT - FCFS

• While the head is on cylinder 11, requests for 1, 36, 16, 34, 9, 12 come in.
  FCFS would result in ________________ head movement

• Disk requests come in to the disk driver for cylinders 10, 22, 20, 2, 40, 6, and 38, in that order. The arm is initially at cylinder 20.
  FCFS would result in ________________ head movement

Page 49: I/O Management

SSTF – Shortest Seek Time First

• Selects the request with the minimum seek time from the current head position. SSTF scheduling may cause starvation of some requests.

Illustration shows total head movement of 236 cylinders.
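A hedged C sketch of SSTF selection (not from the slides): it greedily serves the closest pending request each time and reproduces the 236-cylinder total for the example queue.

#include <stdio.h>
#include <stdlib.h>

/* Total head movement when the closest pending request is always served next.
 * The request array is modified: -1 marks slots that have been served. */
static long sstf_head_movement(int start, int *req, int n)
{
    long total = 0;
    int pos = start;
    for (int served = 0; served < n; served++) {
        int best = -1;
        long best_dist = 0;
        for (int i = 0; i < n; i++) {
            if (req[i] < 0)
                continue;                          /* already served */
            long d = labs((long)req[i] - pos);
            if (best < 0 || d < best_dist) { best = i; best_dist = d; }
        }
        total += best_dist;
        pos = req[best];
        req[best] = -1;
    }
    return total;
}

int main(void)
{
    int queue[] = { 98, 183, 37, 122, 14, 124, 65, 67 };
    printf("SSTF total = %ld cylinders\n", sstf_head_movement(53, queue, 8));  /* 236 */
    return 0;
}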

Page 50: I/O Management

Lets Do IT - SSTF

• While the head is on cylinder 11, requests for 1, 36, 16, 34, 9, 12 come in.
  SSTF would result in ________________ head movement

• Disk requests come in to the disk driver for cylinders 10, 22, 20, 2, 40, 6, and 38. The head is initially at cylinder 20.
  SSTF would result in ________________ head movement

Page 51: I/O Management

SCAN

• The disk arm starts at one end of the disk, and moves toward the other end, servicing requests until it gets to the other end of the disk, where the head movement is reversed and servicing continues.

• Sometimes called the elevator algorithm.

• Illustration shows total head movement of 208 cylinders.

Page 52: I/O Management

Lets Do IT - SCAN

• While the head is on cylinder 11, requests for 1, 36, 16, 34, 9, 12 come in.
  SCAN would result in ________________ head movement

• Disk requests come in to the disk driver for cylinders 10, 22, 20, 2, 40, 6, and 38. The head is initially at cylinder 20.
  SCAN would result in ________________ head movement

Page 53: I/O Management

C-SCAN

• Provides a more uniform wait time than SCAN.

• The head moves from one end of the disk to the other, servicing requests as it goes. When it reaches the other end, however, it immediately returns to the beginning of the disk, without servicing any requests on the return trip.

• Treats the cylinders as a circular list that wraps around from the last cylinder to the first one.

Page 54: I/O Management

Lets Do IT – C - SCAN

• While the head is on cylinder 11, requests for 1, 36, 16, 34, 9, 12 come in.
  C-SCAN would result in ________________ head movement

• Disk requests come in to the disk driver for cylinders 10, 22, 20, 2, 40, 6, and 38. The head is initially at cylinder 20.
  C-SCAN would result in ________________ head movement

Page 55: I/O Management

C-LOOK

• Version of C-SCAN
• The arm only goes as far as the last request in each direction, then reverses direction immediately, without first going all the way to the end of the disk.

Page 56: I/O Management

Selecting a Disk Arm Scheduling Algorithm

• SSTF is common and has a natural appeal
• SCAN and C-SCAN perform better for systems that place a heavy load on the disk
• Performance depends on the number and types of requests
• Requests for disk service can be influenced by the file-allocation method
• The disk-scheduling algorithm should be written as a separate module of the operating system, allowing it to be replaced with a different algorithm if necessary
• Either SSTF or LOOK is a reasonable choice for the default algorithm

Page 57: I/O Management

Assignment 1 (Deadline: 6th March)

• Explain the need for memory-mapped I/O.
• Write a short note on interrupts.
• Explain the goals of I/O software.
• Write short notes on: Device Controllers, Direct Memory Access.
• Explain RAID levels.
• Explain the various disk arm scheduling algorithms with illustration.

Quality is important, not quantity.