Chapter 13-4 I/O Systems. 13.2 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts...
Chapter 13-4 I/O Systems
Chapter 13: I/O Systems
Chapter 13-1 and 13-2.
I/O Hardware
Chapter 13-3
Application I/O Interface
Kernel I/O Subsystem
Transforming I/O Requests to Hardware Operations
Performance
Kernel I/O Subsystem
I/O Services
Scheduling
Buffering
Caching
Spooling
Device Reservation and Error Handling
The I/O system is also responsible for protecting itself from malicious or accidental misuse.
Much more on these topics in the next couple of chapters.
Remember: the kernel is that part of the operating system that is always resident. Thus these ‘services’ are always available (and quite necessary).
Disk Scheduling
Scheduling blocking I/Os:
A good ordering of disk I/O request handling is critical.
Want to reduce average wait times and yet not starve others.
Disk Scheduling was discussed in the previous chapter. FCFS, SSTF, SCAN, ….
Requests for service are entered into a wait queue in some kind of an optimal order depending upon the scheduling algorithm.
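As a small sketch of the idea (not the book's code), here is how an SSTF-style ordering of a wait queue might be computed; the cylinder numbers and starting head position are illustrative:

```python
def sstf_order(requests, head):
    """Order pending disk requests shortest-seek-time-first.

    A sketch only: a real scheduler works on a live queue and must
    also guard against starvation (e.g., by aging old requests).
    """
    pending = list(requests)
    order = []
    pos = head
    while pending:
        # Pick the request closest to the current head position.
        nearest = min(pending, key=lambda cyl: abs(cyl - pos))
        pending.remove(nearest)
        order.append(nearest)
        pos = nearest
    return order

# Example queue with the head at cylinder 53.
print(sstf_order([98, 183, 37, 122, 14, 124, 65, 67], 53))
```

Note how the nearby requests are serviced first; the far-off request at cylinder 183 waits until the end, which is exactly the starvation risk mentioned above.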
Asynchronous I/O: a different animal – we must keep track of potentially many I/O requests at the same time.
Here, a wait queue is often attached to some kind of device status table.
The table is associated with specific devices and indicates each device's status: busy, idle, or not working…
Thus judicious scheduling of I/O operations is an effective way to speed them up.
Without question, I/O is the slowest operation on a computer.
Device-status Table
We can see the queues.
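A minimal sketch of such a device-status table, with a wait queue attached to each entry; the device names, statuses, and request strings are illustrative assumptions:

```python
from collections import deque

# One entry per device: a status plus a wait queue of pending requests.
device_table = {
    "disk0":   {"status": "busy", "queue": deque(["read blk 43", "write blk 7"])},
    "printer": {"status": "idle", "queue": deque()},
    "tape1":   {"status": "not working", "queue": deque()},
}

def submit(dev, request):
    entry = device_table[dev]
    if entry["status"] == "idle":
        entry["status"] = "busy"        # start the request immediately
        return f"{dev}: started {request}"
    # Device busy (or down): link the request into the wait queue.
    entry["queue"].append(request)
    return f"{dev}: queued {request}"

print(submit("printer", "print file.txt"))
print(submit("disk0", "read blk 99"))
```

The scheduler's job is then to pick which queued request to start next whenever a device goes idle.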
Buffering
Buffering techniques are used throughout computer systems in many ways.
Main idea is that buffers serve to address the inherent mismatch in speeds/ timing/ etc., between two or more devices or between a device and an application.
These mismatches arise due to incredible differences in speed between devices such as, say, modems and a disk.
As an example, buffers may be created (in this case) in main memory to accumulate inputs; when the entire input is received, a very quick single transfer to disk can readily occur. Done all the time.
Often, two buffers are used (double buffering). Here a modem can continue to fill one buffer while the other buffer’s data is written to disk.
Double buffering is used in many applications too to speed up I/O handling – but there’s a cost: space in the application’s address space, for one thing.
Idea behind all this is to decouple producers from consumers.
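The swap-and-drain idea can be sketched in a few lines; the buffer size and the data are illustrative assumptions, and the "disk" is just a list standing in for the slower device:

```python
# A minimal double-buffering sketch: a producer (say, a modem) fills
# one buffer while the consumer (the disk writer) drains the other.
BUF_SIZE = 4

class DoubleBuffer:
    def __init__(self):
        self.fill = []      # buffer currently being filled by the producer
        self.drain = []     # buffer currently being written to disk
        self.written = []   # stand-in for "data now on disk"

    def produce(self, byte):
        self.fill.append(byte)
        if len(self.fill) == BUF_SIZE:
            # Swap roles: the full buffer goes to the consumer, and the
            # producer keeps going in the (now empty) other buffer.
            self.fill, self.drain = [], self.fill
            self.written.extend(self.drain)   # one quick bulk transfer

db = DoubleBuffer()
for b in b"decouple producers from consumers":
    db.produce(b)
print(bytes(db.written))
```

Notice that the producer never waits for the disk: data still sitting in the fill buffer simply goes out with the next swap.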
Sun Enterprise 6000 Device-Transfer Rates
Buffering - more
Another example of the use of buffering takes place in message passing, where typically messages are broken down into packets sent over a network.
These packets can be reassembled back together for processing…
Copy Semantics: We often have buffers of data to be written to, say, disk. When a write() system call is issued, a pointer to the buffer and an integer giving the number of bytes to transfer are passed to the write() system call.
We’ve seen this in the device status tables a couple of slides back.
But what if the application needs to write more to this buffer or modify it before the physical write takes place? Remember: the I/O is scheduled!
Using copy semantics, we guarantee the version sent to disk is the correct one.
The system call accomplishes this by copying the data into a kernel buffer before returning control to the application.
Disk writes then take place from the kernel buffer.
This approach – copying data between kernel buffers and application data space is common in operating systems, even though there is some copying overhead.
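Copy semantics can be demonstrated with a toy write() that copies the caller's buffer immediately; everything here (the kernel buffer list, the "disk" list, the function names) is an illustrative stand-in, not a real system-call interface:

```python
kernel_buffers = []   # stand-in for kernel buffer space
disk = []             # stand-in for the disk

def write(app_buf, nbytes):
    # Copy the data NOW, before returning control to the application,
    # so later modifications cannot affect the scheduled write.
    kernel_buffers.append(bytes(app_buf[:nbytes]))

def flush_to_disk():
    # The scheduled physical write happens later, from the kernel copy.
    while kernel_buffers:
        disk.append(kernel_buffers.pop(0))

buf = bytearray(b"version 1")
write(buf, len(buf))
buf[8:9] = b"2"        # application modifies its buffer afterwards...
flush_to_disk()
print(disk[0])          # ...but the kernel's copy is what reaches disk
```

Without the up-front copy, the scheduled write would pick up "version 2" – exactly the hazard copy semantics is meant to prevent.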
Caching
Caching – as opposed to buffering (although they may be used together and may be complementary at times) – really deals with copies of data.
First of all, we have our main-memory caches, and then there are the CPU’s primary and secondary caches.
Buffers typically contain the copy of a data item – whatever it is, maybe a page or perhaps something smaller – whereas a cache holds an additional copy in faster-access storage. (This means the switching technology is much faster!!)
Caches are used extensively in improving I/O efficiency in accessing directories, FATs, recently read / written data, and a host of other things…etc.
In these cases, cache stores may be accessed much more quickly than a corresponding area of primary memory – or disk.
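A small read cache with least-recently-used (LRU) eviction illustrates the point that a cache holds copies of data that also live on slower storage; the "disk" dict, cache size, and block contents are illustrative assumptions:

```python
from collections import OrderedDict

disk = {blk: f"data-{blk}" for blk in range(100)}   # slow storage
CACHE_SIZE = 3
cache = OrderedDict()                               # fast storage (a copy)
hits = misses = 0

def read_block(blk):
    global hits, misses
    if blk in cache:
        hits += 1
        cache.move_to_end(blk)        # mark as most recently used
        return cache[blk]             # fast path: no disk access
    misses += 1
    data = disk[blk]                  # slow path: go to disk
    cache[blk] = data
    if len(cache) > CACHE_SIZE:
        cache.popitem(last=False)     # evict the least recently used copy
    return data

for blk in [1, 2, 3, 1, 4, 1]:
    read_block(blk)
print(hits, misses)
```

Repeatedly-read blocks (like block 1 here) are served from the cache, while cold blocks pay the full cost of going to disk.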
Spooling
Spooling – a technique usually used as a holding place for data for a device, such as a printer.
Spooling is very useful for devices that need non-interleaved access, as in the case of printers or tapes that cannot multiplex I/O requests for concurrent applications.
A printer can only print one file at a time.
Upon writing output files, application data is ‘spooled’ to a disk file until the entire file is developed.
Then, this spool file can be queued up and sent to the printer for printing.
Usually there is a daemon (a background process) that might handle the actual spooling, or perhaps some in-kernel thread.
(Most operating systems do, however, additionally allow for exclusive device access where a process may obtain total control over a device – which can be allocated and later de-allocated to this process.
Advantages to this, but overall system-wide performance is reduced.)
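The spool-then-print sequence above can be sketched as follows; the application names, the daemon function, and the data structures are all illustrative assumptions:

```python
from collections import deque

spool_area = {}          # per-application spool files being built on disk
print_queue = deque()    # completed spool files awaiting the printer
printed = []             # stand-in for actual printer output

def spool_write(app, text):
    # Application output is 'spooled' to its own file, not the printer.
    spool_area.setdefault(app, []).append(text)

def spool_close(app):
    # The whole file is now developed: queue it for printing.
    print_queue.append("".join(spool_area.pop(app)))

def printer_daemon():
    # The daemon prints one complete file at a time - never interleaved.
    while print_queue:
        printed.append(print_queue.popleft())

# Two applications write "concurrently", yet their output never mixes.
spool_write("app1", "Hello ")
spool_write("app2", "Report: ")
spool_write("app1", "world\n")
spool_write("app2", "42\n")
spool_close("app1")
spool_close("app2")
printer_daemon()
print(printed)
```

Even though the writes from app1 and app2 arrive interleaved, each printed job is a complete, uninterrupted file.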
Error Handling
The OS can recover from many errors – such as bad disk reads, an unavailable device, and other transient write failures – if the way we do business so specifies.
We have recovery mechanisms such as retrying a read() or resending a request, …
Most of these system calls return an error number or code when an I/O request fails; sometimes just a single bit indicating success or failure.
Some operating systems can return a host of return codes that give a pretty comprehensive description of the failure.
Indexed sequential operations using VSAM, for example, may return codes that indicate a successful operation, end of file, record not found, and a host of other messages pertinent to indexed operations.
Unix has many of these, including bad pointer, argument out of range, etc. I’ve had a number of these…
SCSI protocols, which are pretty complicated, report failures at three levels of detail:
A sense key – identifies the general nature of the failure (hardware, illegal request, etc.)
An additional sense code – the category of failure, such as a bad command parameter.
An additional sense-code qualifier – gives still more detail…
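As a sketch of how an application sees a Unix error code, Python's errno module exposes the standard error numbers returned by failed system calls; the path used here is an illustrative assumption (it is chosen so it does not exist):

```python
import errno
import os

caught = None
try:
    # This open() fails, and the OS reports why via an error number.
    os.open("/no/such/path/anywhere", os.O_RDONLY)
except OSError as e:
    caught = e

# ENOENT is the classic "No such file or directory" Unix error code;
# strerror() turns the number back into a human-readable description.
print(caught.errno == errno.ENOENT, os.strerror(caught.errno))
```

This is the single-code style of reporting; a richer protocol like SCSI layers the sense key, sense code, and qualifier on top of the bare success/failure indication.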
I/O Protection
Sometimes, a user process may accidentally or purposefully attempt to disrupt normal operation via illegal I/O instructions.
We need to prevent this from happening, and thus we make all I/O instructions privileged.
Applications do not do I/O themselves, but rather use a number of system calls to accommodate their needs.
These system calls are then executed in kernel mode.
Of course, the OS checks to see if the request is valid and accommodates the request and returns to the user.
Kernel Data Structures
Just as processor management uses a variety of unique data structures, such as process control blocks (PCBs) and many queues, to manage and control processes,
so too the I/O subsystem needs a number of data structures (tables, etc.) to keep track of I/O components.
One of the in-kernel data structures is the open-file table. Let’s take a look.
Unix provides file access to a wide variety of entities: user files, raw devices, and the address spaces of processes. While these are handled differently, Unix uses a uniform data structure, as we can see.
The system-wide open-file table contains other tables (dispatch tables) that point to still other supporting data structures. The per-process table lists all the files a process has open.
UNIX I/O Kernel Structure (same figure)
We will go through this more ahead too, but for now we can see that our per-process open-file table points into the system-wide open-file table.
We’ve discussed this in an earlier chapter. In some detail:
Each entry in the per-process open-file table points to a specific data structure within the system-wide open-file table that is in kernel memory space.
Note in Unix, each file-system record points to an entry in the active inode table.
These active-inode tables and network information tables are all located in kernel memory tables…
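The two-level relationship – per-process file descriptors indexing into a system-wide table of open-file records – can be sketched like this; the field names and inode number are illustrative assumptions, not the real Unix layout:

```python
system_open_files = []           # system-wide open-file table (kernel memory)

def sys_open(inode):
    # Each record carries the file's inode reference and current offset.
    system_open_files.append({"inode": inode, "offset": 0, "count": 1})
    return len(system_open_files) - 1

class Process:
    def __init__(self):
        self.fd_table = []       # per-process open-file table

    def open(self, inode):
        idx = sys_open(inode)    # entry points into the system-wide table
        self.fd_table.append(idx)
        return len(self.fd_table) - 1     # the file descriptor

p = Process()
fd = p.open(inode=1234)
record = system_open_files[p.fd_table[fd]]
print(fd, record["inode"])
```

A file descriptor is thus just a small integer indexing the per-process table; the shared per-file state (offset, reference count, inode pointer) lives once, system-wide, in kernel memory.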
Kernel Data Structures – NT I/O
OS kernel routines always manage and control state information on I/O components, including open-file tables, network connections, character-device state, and other data structures.
Many, many complex data structures are used and needed to track buffers, memory allocation, “dirty” blocks and more. There is a lot of variability here.
As your book points out, Windows NT uses a message-passing implementation for I/O. An I/O request is converted to a message that is sent through the kernel to what NT calls the I/O manager.
Then, the message is sent to the device driver software (within the kernel), which may change the message contents.
For output, the message contains the data to be written; for input, the message contains a buffer to receive the data.
But this is NT’s approach, and it is unique to NT.
Kernel I/O Subsystem Summary
I am including this just for completeness. The items below underscore the immense complexity, detail, and demands that I/O places on a computing environment. Below we have a list of services provided by the I/O subsystem to applications and to other parts of the kernel:
Management of the name space for files and devices
Access control to files and devices
Operation control (for example, a modem cannot seek())
File-system space allocation
Device allocation
Buffering, caching, spooling
I/O scheduling
Device-status monitoring, error handling, and failure recovery
Device-driver configuration and initialization
You should be familiar with these basic services by now and be somewhat conversant with them.
I like the next figure (again). It speaks volumes.
A Kernel I/O Structure
View this figure as providing services from ‘low-level’ to ‘high-level,’ although the kernel I/O subsystem is only marked here to show separation.
Going downwards: the ‘upper levels’ access devices via a uniform interface provided by the kernel I/O subsystem.
As we go down into the kernel I/O subsystem part of the kernel, we reach the device drivers.
These device drivers in turn talk to the device controllers (hardware) below them.
More ahead…
Transforming I/O Requests to Hardware Operations
It is easy to assume that we have a good grasp of the details of I/O processing.
But I think it is worthwhile to go through these details once again to see all the components (hardware and software) of the I/O subsystem in action: their dependencies and sequences….
So, let’s go through this and put it all together…I will follow the book almost verbatim. If you are totally comfortable with this, you may skip it…
Let’s go through a detailed example of reading a file from disk.
First of all, an application reads a file typically by referring to its file name. The process issues a blocking read() system call to a file descriptor.
I/O Requests to Hardware Operations: 1
We can see the file descriptor in this figure.
I/O Requests to Hardware Operations: 2
The system call, deep in the kernel I/O subsystem, checks the parameters; if the data is already available in a buffer, this activity is done and the I/O is complete. If not, physical I/O must be performed.
Within the I/O subsystem, the system call causes the invoking process to be moved from the run queue to a wait queue for the device.
The I/O subsystem sends the request to the proper device driver via a subroutine call or an in-kernel message.
That is, the I/O subsystem causes kernel code to move the requesting process’s PCB to a wait queue, linking it in and thus blocking the process.
Device driver allocates kernel buffer space for the data and actually causes the I/O to be scheduled.
The device driver also sends commands to the device controller by writing into the device-control registers. (We are now in the hardware…) The device driver is now blocked.
I/O Requests to Hardware Operations: 3
The device controller operates the device hardware to perform the actual data transfer, and monitors the device.
The device driver may poll for status and data, or it may have set up a DMA transfer into kernel memory. If we assume the transfer is managed by a DMA controller, the device controller generates an interrupt when the transfer is done.
The interrupt sent to the CPU causes the correct interrupt handler, deep down in the kernel, to be invoked (located via the interrupt vector table). The handler stores the necessary data, signals to unblock the device driver, and returns from the interrupt.
The device driver receives the signal, determines which I/O request has been completed, determines the request’s status, and signals the kernel I/O subsystem that the request has been completed
Kernel code then transfers data or return codes to the address space of the requesting process and calls the appropriate scheduling routines to move the process (PCB) from the wait queue to the ready queue.
Moving the process to the ready queue unblocks it.
Scheduler can then assign the process to the CPU, and the process may resume…
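The steps in these slides can be condensed into a sequence of (actor, action) pairs; the sketch below only records the order of events, and all the names in it are illustrative:

```python
def blocking_read_lifecycle(buffer_has_data):
    """Return the sequence of steps for a blocking read() request."""
    steps = [("I/O subsystem", "check parameters")]
    if buffer_has_data:
        # Fast path: the data was already buffered; no physical I/O.
        steps.append(("I/O subsystem", "return data; I/O complete"))
        return steps
    steps += [
        ("I/O subsystem",     "move process PCB to device wait queue"),
        ("I/O subsystem",     "send request to device driver"),
        ("device driver",     "allocate kernel buffer; program controller"),
        ("controller",        "perform DMA transfer; raise interrupt"),
        ("interrupt handler", "store data; unblock device driver"),
        ("device driver",     "signal I/O subsystem: request complete"),
        ("kernel",            "copy data to process; move PCB to ready queue"),
        ("scheduler",         "dispatch process; read() returns"),
    ]
    return steps

print(len(blocking_read_lifecycle(False)), "steps on the slow path")
```

Comparing the two paths makes the value of kernel buffering concrete: a buffered hit finishes in two steps, while a physical read involves the whole chain of subsystem, driver, controller, and interrupt handler.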
Life Cycle of An I/O Request
The text in the previous two slides came essentially from this figure.
But I felt that the previous figures show where these various functions are located in the kernel I/O subsystem.
The sequence proceeds down the left, over, and then up on the right.
Performance
It is important to note that I/O imposes an incredible strain on the CPU.
Scheduling I/O activities for user and kernel processes fairly and efficiently is tough as they block and unblock.
Interrupts may be handled thousands of times per second, and the burden of state changes and associated activities is something to be reckoned with.
Memory buses are loaded down during I/O by the copying between controllers and physical memory, and between kernel buffers and application data space, during many typical I/O operations.
In truth, interrupt handling is expensive, as the system must change state, execute an interrupt handler, and restore state thousands and thousands of times...
The details of network processing are even more incredible when one looks at character processing and attendant interrupts (see book).
The high context-switch rate is untenable. This has given rise to front-end processors and terminal concentrators that can absorb a very heavy interrupt burden that would otherwise fall on the CPU.
Performance
Some high end computer systems have I/O channels – in reality, processors themselves – dedicated to offloading I/O from, say, a mainframe.
Channels can be very specialized and dedicated to certain activities, such as running plotters or specialized hardware…
Idea here is to keep channel busy undertaking I/O while the CPU does what it does best – compute!
In smaller machines, in lieu of I/O channels, we have device controllers and DMA controllers to do our work.
Improve I/O Performance?
Here’s what we try to do – in general (let me quote from the book)
“reduce the number of context switches
“reduce the number of times that data must be copied in memory while passing between devices and applications
“reduce the frequency of interrupts by using large transfers, smart controllers, and polling (if busy waiting can be minimized).
“increase concurrency by using DMA-knowledgeable controllers or channels to offload simple data copying from the CPU
“balance CPU, memory subsystem, bus, and I/O performance, because an overload in any one area will cause idleness in others.”
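A back-of-the-envelope sketch of the "large transfers" point: with one completion interrupt per transfer, the transfer size directly determines the interrupt load. The numbers below are illustrative assumptions:

```python
def interrupts_needed(total_bytes, transfer_size):
    # One interrupt signals the completion of each transfer
    # (ceiling division, since a partial last transfer still interrupts).
    return -(-total_bytes // transfer_size)

total = 10 * 1024 * 1024                      # move 10 MiB in all
print(interrupts_needed(total, 512))          # small 512-byte transfers
print(interrupts_needed(total, 64 * 1024))    # large 64 KiB DMA transfers
```

Going from 512-byte transfers to 64 KiB transfers cuts the interrupt count by a factor of 128 for the same amount of data, which is exactly why smart controllers and DMA favor large transfers.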
Last Issue
Where should the I/O functionality be implemented? In device hardware? Device drivers? Application software?
In general, anything implemented in application software is great for prototyping and experimenting.
These software solutions provide flexibility and modifiability – but not speed!
Solutions in applications may be inefficient due to the need for context switches and because applications cannot take advantage of internal kernel data structures and kernel functionality.
Proofs of concept in software may be re-implemented in kernel code. Performance will be increased, but the nature of OS software requires so very much more care in implementing…(system crashes, data corruption, etc.)
Ultimately, the best performance will be realized when solutions are burned into hardware – resulting in much greater speeds but clear lack of flexibility. Pretty difficult to fix bugs and to allow flexibility in many activities. But speed: the best.
Device-Functionality Progression
Good figure: it shows the trade-offs between application-level solutions all the way up to hard-coded hardware solutions.
End of Chapter 13-4