I/O Management and Disk Scheduling
Chapter 11
Categories of I/O Devices
• Human readable: used to communicate with the user (printers, display terminals, keyboard, mouse)
• Machine readable: used to communicate with electronic equipment (disk and tape drives, sensors, controllers)
• Communication: used to communicate with remote devices (modems, network interface cards)
Differences in I/O Devices
• Data rate: there may be differences of several orders of magnitude between the data transfer rates of devices
• Application (the use to which the device is put): a disk used to store files requires file-management software; a disk used to store virtual memory pages depends on dedicated hardware and software support; a terminal used by the system administrator may have a higher priority
Differences in I/O Devices
• Complexity of control: control of a disk is more complex than control of a printer
• Unit of transfer: data may be transferred as a stream of bytes for a terminal or in larger blocks for a disk
• Data representation: different encoding schemes (character code, parity)
• Error conditions: the nature of errors, how they are reported, and how they are handled differ widely across devices
Techniques for Performing I/O
• Programmed I/O: the process issues an I/O command and busy-waits for the operation to complete
• Interrupt-driven I/O: an I/O command is issued, the processor continues executing instructions, and the I/O module sends an interrupt when done
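The contrast is easiest to see in the programmed-I/O case, where the processor burns cycles polling a status flag. A toy sketch (the `ToyDevice` model is invented for illustration, not a real driver interface):

```python
class ToyDevice:
    """Pretend device that reports ready after a few status polls."""
    def __init__(self, polls_until_ready=3):
        self._polls_left = polls_until_ready
        self._data = None

    def issue_command(self, command):
        self._data = f"result of {command}"

    def ready(self):
        self._polls_left -= 1
        return self._polls_left <= 0

    def data(self):
        return self._data

def programmed_io(device, command):
    """Programmed I/O: issue the command, then busy-wait on the status."""
    device.issue_command(command)
    while not device.ready():   # the processor does no useful work here
        pass
    return device.data()
```

Interrupt-driven I/O removes exactly this polling loop: the processor runs other instructions, and the I/O module raises an interrupt when the transfer completes.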
Techniques for Performing I/O
• Direct Memory Access (DMA): the processor requests the DMA module to transfer a block of data; the DMA module then controls the exchange of data between main memory and the I/O device
• The processor is interrupted only after the entire block has been transferred
Evolution of the I/O Function
• Processor directly controls a peripheral device
• Controller or I/O module is added: the processor uses programmed I/O without interrupts and no longer needs to handle the details of external devices
• Controller or I/O module with interrupts: the processor does not spend time waiting for an I/O operation to be performed
Evolution of the I/O Function
• Direct Memory Access: the I/O module is given direct access to memory; the processor is involved only at the beginning and end of the transfer
• The I/O module becomes a separate processor: the CPU directs the I/O processor to execute an I/O program in main memory; a sequence of I/O activities is performed without interruption
• The I/O processor gains its own local memory: it is a computer in its own right, with minimal CPU involvement
Direct Memory Access
• The DMA module transfers a block of data to/from memory one word at a time over the system bus
• DMA and CPU share the same system bus: the DMA unit steals bus cycles from the CPU to transfer data (cycle stealing)
• The CPU instruction cycle is suspended to allow the current data word to transfer
• This is not an interrupt (no context is saved); the CPU just pauses for one bus cycle
Operating System Design Issues
• Efficiency: important because I/O is often the bottleneck for computation; most I/O devices are extremely slow compared to the processor and main memory
• Multiprogramming allows some processes to wait on I/O while another process executes
• Swapping is used to bring in additional Ready processes, but swapping is itself an I/O operation
Operating System Design Issues
• Generality: it is desirable to handle all I/O devices in a uniform manner, but the diversity of I/O devices makes this difficult
• Use a hierarchical, modular approach: hide most of the details of device I/O in lower-level routines so that processes and upper levels see devices in general terms such as read, write, open, close, lock, and unlock
I/O Buffering
• Reasons for buffering: processes must wait for I/O to complete before proceeding, and certain pages must remain locked in main memory during I/O
• This interferes with the swapping decisions of the OS (the process cannot be swapped out)
• Solution: use system buffers for I/O
Two Types of I/O Devices
• Block-oriented: information is stored in fixed-size blocks, and transfers are made one block at a time; used for disks and tapes
• Stream-oriented: information is transferred as a stream of bytes; used for terminals, printers, communication ports, mice, and most other devices that are not secondary storage
Single Buffer
• The operating system assigns a buffer in main memory for an I/O request
• Block-oriented: input transfers are made to the system buffer, and the process moves the block to user space when needed
• Another block is then requested into the buffer before it is needed, called read-ahead (or anticipated input): assume the block will eventually be needed
I/O Buffering
Single Buffer
• Block-oriented: the user process can process one block of data while the next block is read in
• Swapping can occur, since input is taking place in system memory, not user memory
• The operating system keeps track of the assignment of system buffers to user processes
• Similar considerations apply to block-oriented output
Single Buffer
• Stream-oriented: can be line-at-a-time or byte-at-a-time
• Line-at-a-time: input and output are one line at a time; the buffer holds a single line
• Byte-at-a-time: the buffer behaves as in the producer/consumer model
I/O Buffering
• Double buffer: use two system buffers instead of one; a process can transfer data to or from one buffer while the operating system empties or fills the other buffer
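A minimal single-threaded sketch of this alternation, with the "device" modeled as an iterator and `double_buffered_read` a hypothetical name; in a real OS the fill and the processing run truly concurrently:

```python
def double_buffered_read(blocks, process):
    """Consume blocks through two buffers: fill one while the other is processed."""
    buffers = [None, None]
    filling = 0
    device = iter(blocks)                  # stands in for the I/O device
    buffers[filling] = next(device, None)  # initial fill
    while buffers[filling] is not None:
        ready, filling = filling, 1 - filling
        buffers[filling] = next(device, None)  # next I/O would overlap with...
        process(buffers[ready])                # ...processing of this block
```

The point of the sketch is the alternation of roles: on each iteration the buffer that was being filled becomes the one being consumed, and vice versa.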
• Circular buffer: more than two buffers are used, each individual buffer being one unit in the circle; used when the I/O operation must keep up with the process
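A fixed set of N buffers reused in round-robin order behaves like a bounded producer/consumer queue. A minimal sketch (class name illustrative; full/empty conditions are signalled with exceptions here rather than by blocking):

```python
class CircularBuffer:
    """Fixed number of buffer slots reused in round-robin order."""
    def __init__(self, n_slots):
        self.slots = [None] * n_slots
        self.head = 0      # next slot to consume
        self.tail = 0      # next slot to fill
        self.count = 0     # slots currently holding data

    def put(self, item):
        """Producer side (the I/O operation filling a buffer)."""
        if self.count == len(self.slots):
            raise BufferError("all buffers full; producer must wait")
        self.slots[self.tail] = item
        self.tail = (self.tail + 1) % len(self.slots)
        self.count += 1

    def get(self):
        """Consumer side (the process draining a buffer)."""
        if self.count == 0:
            raise BufferError("all buffers empty; consumer must wait")
        item = self.slots[self.head]
        self.head = (self.head + 1) % len(self.slots)
        self.count -= 1
        return item
```

The modulo arithmetic is what makes the buffer "circular": after the last slot, head and tail wrap back to slot 0.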
I/O Buffering
Utility of Buffering
• Smooths out peaks in I/O demand, increases OS efficiency, and improves the performance of processes
• However, no amount of buffering will allow I/O to keep pace with a process that has a sustained demand: eventually the buffers will all fill up and the process will have to wait
Disk Performance Parameters
• To read or write, the disk head must be positioned at the desired track and at the beginning of the desired sector
• Seek time: the time it takes to position the head at the desired track
• Rotational delay (rotational latency): the time it takes for the beginning of the sector to reach the head
Timing of a Disk I/O Transfer
Disk Performance Parameters
• Access time: the sum of seek time and rotational delay; the time it takes to get into position to read or write
• Data transfer occurs as the sector moves under the head
• Typical average seek time: 5 to 10 ms; average rotational delay at 10,000 rpm: 3 ms
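The 3 ms figure is half of one revolution (on average, the desired sector is half a rotation away). A quick sketch of the arithmetic (function names are illustrative):

```python
def avg_rotational_delay_ms(rpm):
    """Average rotational delay: half of one full revolution."""
    revolution_ms = 60_000 / rpm   # one revolution, in milliseconds
    return revolution_ms / 2

def avg_access_time_ms(avg_seek_ms, rpm):
    """Access time = seek time + rotational delay (transfer time excluded)."""
    return avg_seek_ms + avg_rotational_delay_ms(rpm)
```

At 10,000 rpm one revolution takes 6 ms, so the average rotational delay is 3 ms; combined with a 5 ms average seek, the average access time is 8 ms.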
Disk Scheduling Policies
• Reading a file from disk: best performance when the file is stored in adjacent tracks, occupying all sectors (sequential access)
• Worst performance when the file is stored in sectors distributed randomly over the disk (random access)
• When I/O requests to a disk from many processes are queued up, how should these requests be scheduled for optimal performance? Random scheduling gives the worst performance
Disk Scheduling Policies
• First-in, first-out (FIFO): process requests from the queue in the order in which they arrived
• Fair to all processes; good if there are only a few processes with clustered accesses
• Approaches random scheduling in performance if many processes are competing
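As a sketch, total arm movement under FIFO is just the sum of the distances between consecutive requests (the function name and the track numbers below are illustrative):

```python
def fifo_seek_length(start_track, requests):
    """Total head movement when requests are served strictly in arrival order."""
    total, position = 0, start_track
    for track in requests:
        total += abs(track - position)  # distance of this seek
        position = track
    return total
```

For example, starting at track 100 with pending requests [55, 58, 39, 18, 90, 160, 150, 38, 184], FIFO moves the arm 498 tracks in total, an average of about 55.3 tracks per seek.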
Disk Scheduling Policies
• Priority: the goal is not to optimize disk use but to meet other objectives
• Short batch jobs and interactive jobs may be given higher priority, providing good interactive response time
• Long jobs may wait excessively long
Disk Scheduling Policies
• Last-in, first-out (LIFO): the most recent request is processed first; good for transaction-processing systems
• The device is given to the most recent user; if that user is reading a sequential file (good locality), there should be little arm movement
• Possibility of starvation, since a job may never regain the head of the line
Disk Scheduling Policies
• Shortest Service Time First (SSTF): select the disk I/O request that requires the least movement of the disk arm from its current position
• Always incurs the minimum seek time for the next request; this greedy choice does not guarantee the minimum total seek time
• Better performance than FIFO
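A sketch of the greedy selection (function name and track numbers illustrative):

```python
def sstf_seek_length(start_track, requests):
    """Serve whichever pending request is closest to the current head position."""
    pending, position, total = list(requests), start_track, 0
    while pending:
        nearest = min(pending, key=lambda t: abs(t - position))
        total += abs(nearest - position)   # always the shortest available seek
        position = nearest
        pending.remove(nearest)
    return total
```

Starting at track 100 with pending requests [55, 58, 39, 18, 90, 160, 150, 38, 184], SSTF moves the arm 248 tracks in total, less than half of what FIFO needs on the same workload.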
Disk Scheduling Policies
• SCAN: the arm moves in one direction only, satisfying all outstanding requests, until it reaches the last track or there are no more requests in that direction; the direction is then reversed
• No starvation
• Biased against the area most recently traversed: poor exploitation of locality
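A minimal sketch that sweeps upward first and reverses once there are no more requests in the current direction (names and track numbers illustrative):

```python
def scan_seek_length(start_track, requests):
    """Sweep toward higher tracks first, then reverse and sweep downward."""
    upward = sorted(t for t in requests if t >= start_track)
    downward = sorted((t for t in requests if t < start_track), reverse=True)
    total, position = 0, start_track
    for track in upward + downward:   # one pass up, then one pass down
        total += abs(track - position)
        position = track
    return total
```

Starting at track 100 with pending requests [55, 58, 39, 18, 90, 160, 150, 38, 184], SCAN moves the arm 250 tracks in total.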
Disk Scheduling Policies
• C-SCAN (circular SCAN): restricts scanning to one direction only
• When the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again
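A sketch that reverses at the outermost pending request and counts the long return seek in the total (names and track numbers illustrative):

```python
def cscan_seek_length(start_track, requests):
    """Sweep upward only; after the highest request, return and sweep up again."""
    upward = sorted(t for t in requests if t >= start_track)
    wrapped = sorted(t for t in requests if t < start_track)
    total, position = 0, start_track
    for track in upward + wrapped:
        total += abs(track - position)  # includes the long return seek
        position = track
    return total
```

Starting at track 100 with pending requests [55, 58, 39, 18, 90, 160, 150, 38, 184], C-SCAN moves the arm 322 tracks in total; the extra cost over SCAN buys more uniform waiting times.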
Disk Scheduling Policies
• N-step-SCAN: segments the disk request queue into subqueues of length N; subqueues are processed one at a time using SCAN, and new requests are added to another queue while a subqueue is being processed
• FSCAN: uses two subqueues; all requests are in one queue when a scan begins, and new requests are put in the other queue during the scan
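A sketch of FSCAN's two-queue discipline (the function names and SCAN-ordering helper are illustrative; requests arriving during a scan are simply deferred to the next pass):

```python
def scan_order(position, queue):
    """Order a frozen queue as SCAN would serve it from the given position."""
    upward = sorted(t for t in queue if t >= position)
    downward = sorted((t for t in queue if t < position), reverse=True)
    return upward + downward

def fscan(start_track, frozen_queue, arrivals_during_scan):
    """Serve the frozen queue with SCAN; late arrivals wait for the next pass."""
    first_pass = scan_order(start_track, frozen_queue)
    position = first_pass[-1] if first_pass else start_track
    second_pass = scan_order(position, arrivals_during_scan)
    return first_pass, second_pass
```

Freezing the queue is what prevents starvation: a request that has already arrived can never be pushed back indefinitely by newer arrivals.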
RAID
• Redundant Array of Independent Disks (RAID): a set (array) of physical disk drives viewed by the OS as a single logical device
• Data are distributed across the disks, which operate independently and in parallel, improving performance; multiple or even single I/O requests can be handled in parallel
• Improves reliability (guards against disk failure) by storing redundant parity information
RAID (cont’d)
• Use RAID instead of a single large disk: large disks are expensive and less reliable
• RAID consists of seven levels (0 to 6), each designating a different design
RAID 0 (non-redundant)
RAID 0
• Data are striped across the disks: each disk is divided into strips, and the strips are mapped to consecutive logical strips in round-robin fashion
• The corresponding strips from each disk constitute a stripe
• If there are n disks, up to n strips can be transferred in parallel
• Provides high data transfer capacity (a single request for a large amount of data) and a high I/O request rate (multiple requests, each for a small amount of data)
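The round-robin mapping can be sketched with simple modular arithmetic (the function name is illustrative):

```python
def locate_strip(logical_strip, n_disks):
    """RAID 0 mapping: which disk holds a logical strip, and where on that disk."""
    disk = logical_strip % n_disks      # strips rotate across the disks
    offset = logical_strip // n_disks   # position of the strip on that disk
    return disk, offset
```

With 4 disks, logical strips 0 through 3 form the first stripe (one strip per disk), strips 4 through 7 the second stripe, and so on; logical strip 5, for instance, lands on disk 1 at offset 1.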
RAID 1 (redundancy through mirroring)
• Data is duplicated (each logical strip is mapped to two separate disks)
• Recovery from failure is simple
RAID 2 (redundancy through Hamming code)
• Every I/O request requires access to all disks (parallel access)
• Strips are very small (usually a byte or a word)
• A Hamming code is calculated across corresponding bits on each data disk and stored on multiple parity disks
RAID 3 (bit-interleaved parity)
• Similar to RAID 2, but requires only a single parity disk
• Simple parity calculation (XOR)
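The XOR parity used here (and again in RAID 4 and 5) can be sketched directly; function names are illustrative:

```python
def parity(strips):
    """Bytewise XOR across corresponding data strips."""
    result = bytearray(len(strips[0]))
    for strip in strips:
        for i, byte in enumerate(strip):
            result[i] ^= byte
    return bytes(result)

def reconstruct(surviving_strips, parity_strip):
    """A single lost strip is the XOR of the parity strip and the survivors."""
    return parity(surviving_strips + [parity_strip])
```

Because x ^ x = 0, XOR-ing the parity strip with all surviving data strips cancels them out and leaves exactly the missing strip, which is why a single parity disk suffices to survive one disk failure.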
RAID 4 (block-level parity)
• Separate I/O requests can be satisfied in parallel (independent access)
• Strips are relatively large
RAID 5 (block-level distributed parity)
• Similar to RAID 4, but distributes the parity strips across all disks
RAID 6 (dual redundancy)
• Uses two different parity calculations, so data can be regenerated even if two disks fail