Lecture: Operating System Principles – Chapter 11: I/O Management and Disk Scheduling


DOCUMENT INFORMATION

Pages: 45
Size: 205.16 KB

CONTENTS

After studying this chapter, you should be able to: Summarize key categories of I/O devices on computers, discuss the organization of the I/O function, explain some of the key issues in the design of OS support for I/O, analyze the performance implications of various I/O buffering alternatives,...

Chapter 11: I/O Management and Disk Scheduling

Roadmap
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– Disk Cache

Goal: Generality
• For simplicity and freedom from error, it is better to handle all I/O devices in a uniform manner
• Due to the diversity of device characteristics, true generality is difficult to achieve in practice
• Solution: use a hierarchical, modular design of the I/O function
– Hide the details of device I/O in lower-level routines
– User processes and the upper levels of the OS see devices in terms of general functions such as read, write, open, close, lock, unlock

A Model of I/O Organization
• Logical I/O:
– Deals with the device as a logical resource and is not concerned with the details of actually controlling the device
– Allows user processes to deal with the device in terms of a device identifier and simple commands such as open, close, read, write
• Device I/O:
– Converts requested operations into sequences of I/O instructions
– Uses buffering techniques to improve utilization
• Scheduling and Control:
– Performs the actual queuing, scheduling, and control operations
– Handles interrupts; collects and reports I/O status
– Interacts with the I/O module and hence the device hardware

Goal: Efficiency
• Most I/O devices are extremely slow compared to main memory → I/O operations often form a bottleneck in a computing system
• Multiprogramming allows some processes to wait on I/O while another process executes
• Swapping brings ready processes into memory, but swapping is itself an I/O operation
• A major effort in I/O design has gone into schemes for improving the efficiency of I/O:
– I/O buffering
– Disk scheduling
– Disk cache

No Buffering
• Without a buffer, the OS accesses the device directly as and when it needs to
• A data area within the address space of the user process is used for I/O
• The process must wait for the I/O to complete before proceeding, either by
– busy waiting (as with programmed I/O), or
– process suspension on an interrupt (as with interrupt-driven I/O or DMA)
• Problems:
– The program is hung up waiting for the relatively slow I/O to complete
– It interferes with the OS's swapping decisions: the process cannot be swapped out completely, because the data area must be locked in main memory during the I/O
– Otherwise, data may be lost, or a single-process deadlock may occur: the suspended process is blocked waiting on the I/O event, while the I/O operation is blocked waiting for the process to be swapped in

Shortest Service Time First (SSTF)
• Select the disk I/O request that requires the least movement of the disk arm from its current position
• Always choose the request with the minimum seek time

SCAN
• The arm moves in one direction only, satisfying all outstanding requests until it reaches the last track in that direction; then the direction is reversed
• LOOK policy: reverse direction as soon as there are no more requests in the current direction
• SCAN is biased against the area most recently traversed → it does not exploit locality
• SCAN favors
– jobs whose requests are for tracks nearest both the innermost and outermost tracks, and
– the latest-arriving jobs

C-SCAN (Circular SCAN)
• Restricts scanning to one direction only
• When the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again
• Reduces the maximum delay experienced by new requests

N-step-SCAN
• With SSTF, SCAN, and C-SCAN, the arm may not move for a long time if processes monopolize the device with repeated requests to one track: arm stickiness
• Segments the disk request queue into subqueues of length N
• Subqueues are processed one at a time, using SCAN
• New requests are added to another queue while a queue is being processed

FSCAN
• Uses two subqueues
• When a scan begins, all of the requests are in one of the queues, with the other empty
• During the scan, all new requests are put into the other queue
• Service of new requests is deferred until all of the old requests have been processed

Performance Compared
• Comparison of disk scheduling algorithms (table not included in this preview)

Roadmap
– Operating System Design Issues
– I/O Buffering
– Disk Scheduling
– Disk Cache

Disk Cache
• A buffer in main memory for disk sectors, containing a copy of some of the sectors on the disk
• When an I/O request is made for a particular sector, a check is made to determine whether the sector is in the disk cache
– If so, the request is satisfied via the cache
– If not, the requested sector is read into the disk cache from the disk
• Locality of reference: when a block of data is fetched into the cache to satisfy a single I/O request, it is likely that there will be future references to that same block
• One design issue is the replacement strategy: when a new sector is brought into the disk cache, one of the existing blocks must be replaced

Least Recently Used (LRU)
• The block that has been in the cache longest with no reference to it is replaced
• A stack of pointers references the cache
– The most recently referenced block is on the top of the stack
– When a block is referenced or brought into the cache, it is placed on the top of the stack
– The block on the bottom of the stack is the one to be replaced

LRU Disk Cache Performance
• The miss ratio is principally a function of the size of the disk cache

Least Frequently Used (LFU)
• The block that has experienced the fewest references is replaced
• A counter is associated with each block, incremented each time the block is accessed
• When replacement is required, the block with the smallest count is selected
• Problem: due to locality, certain blocks may be referenced repeatedly in short intervals yet be referenced relatively infrequently overall

Frequency-Based Replacement
• Blocks are logically organized in a stack, similar to LRU
• On a cache hit, the block is moved to the top of the stack
• On a miss, the block with the smallest count that is not in the new section is chosen for replacement
Refined Frequency-Based Replacement
• Only blocks in the old section are eligible for replacement
• Allows relatively frequently referenced blocks a chance to build up their reference counts before becoming eligible for replacement
• Simulation studies indicate that this refined policy outperforms LRU and LFU
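The arm-scheduling policies described above can be sketched in a few lines of Python. The function names, the starting track (100), and the request queue are illustrative choices, not from the slides; note that `scan` as written reverses as soon as no requests remain ahead, which is strictly the LOOK variant mentioned above.

```python
def sstf(start, requests):
    """Shortest Service Time First: repeatedly serve the pending
    request closest to the current arm position."""
    pending, order, pos = list(requests), [], start
    while pending:
        nxt = min(pending, key=lambda t: abs(t - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

def scan(start, requests):
    """SCAN (elevator): sweep toward higher-numbered tracks,
    then reverse and sweep back down."""
    up = sorted(t for t in requests if t >= start)
    down = sorted((t for t in requests if t < start), reverse=True)
    return up + down

def cscan(start, requests):
    """C-SCAN: service requests in one direction only; on reaching
    the end, return to the opposite edge and sweep again."""
    up = sorted(t for t in requests if t >= start)
    return up + sorted(t for t in requests if t < start)

reqs = [55, 58, 39, 18, 90, 160, 150, 38, 184]  # illustrative queue
print("SSTF  :", sstf(100, reqs))   # serves 90 first (closest to 100)
print("SCAN  :", scan(100, reqs))   # 150..184 upward, then downward
print("C-SCAN:", cscan(100, reqs))  # 150..184, then wraps to 18..90
```

Running the three policies on the same queue makes the trade-offs visible: SSTF minimizes each individual seek, SCAN bounds starvation, and C-SCAN evens out the delay seen by new requests at the cost of the long return sweep.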
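The LRU disk cache described above can be sketched with Python's `OrderedDict` standing in for the slides' stack of pointers; the class name, capacity, sector numbers, and the `read_from_disk` stand-in are all illustrative assumptions.

```python
from collections import OrderedDict

def read_from_disk(sector):
    """Hypothetical stand-in for a real disk read."""
    return f"data@{sector}"

class LRUDiskCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.sectors = OrderedDict()  # sector number -> data, LRU order
        self.misses = 0

    def read(self, sector):
        if sector in self.sectors:
            # Hit: move the block to the "top of the stack"
            self.sectors.move_to_end(sector)
        else:
            # Miss: fetch from disk, evicting the LRU block if full
            self.misses += 1
            if len(self.sectors) >= self.capacity:
                self.sectors.popitem(last=False)  # "bottom of the stack"
            self.sectors[sector] = read_from_disk(sector)
        return self.sectors[sector]

cache = LRUDiskCache(capacity=3)
for s in [1, 2, 3, 1, 4]:  # sector 2 is least recently used when 4 arrives
    cache.read(s)
print(sorted(cache.sectors))  # [1, 3, 4]
print(cache.misses)           # 4
```

Re-referencing sector 1 moves it to the top, so the later miss on sector 4 evicts sector 2 instead; the miss counter is what an LRU performance study like the one above would plot against cache size.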
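The locality weakness of LFU noted above can also be demonstrated with a small sketch: a block referenced in one short burst builds a high count and then occupies the cache indefinitely, while steadily used blocks evict each other. The class, capacity, and reference string are contrived for illustration.

```python
from collections import Counter

class LFUDiskCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = set()
        self.counts = Counter()  # one reference counter per block

    def read(self, block):
        if block not in self.blocks:
            if len(self.blocks) >= self.capacity:
                # Evict the resident block with the smallest count
                victim = min(self.blocks, key=lambda b: self.counts[b])
                self.blocks.remove(victim)
            self.blocks.add(block)
        self.counts[block] += 1  # count every access

cache = LFUDiskCache(capacity=2)
for b in ["burst"] * 5:          # one block referenced 5 times, then never again
    cache.read(b)
for b in ["a", "b", "a", "b"]:   # steady traffic afterwards
    cache.read(b)
print(cache.blocks)              # "burst" is still resident; a and b thrash
```

The burst block's count of 5 protects it forever, so "a" and "b" keep evicting each other; this is exactly the behavior that frequency-based replacement's new/old sections are designed to avoid.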

Posted: 30/01/2020, 05:13
