Lecture Operating systems: A concept-based approach (2/e): Chapter 12 - Dhananjay M. Dhamdhere


Document information

Chapter 12 - Implementation of file operations. This chapter discusses the physical organization used in file systems. It starts with an overview of I/O devices and their characteristics, and discusses different RAID organizations that provide high reliability, fast access, and high data transfer rates.

PROPRIETARY MATERIAL. © 2007 The McGraw-Hill Companies, Inc. All rights reserved. No part of this PowerPoint slide may be displayed, reproduced or distributed in any form or by any means, without the prior written permission of the publisher, or used beyond the limited distribution to teachers and educators permitted by McGraw-Hill for their individual course preparation. If you are a student using this PowerPoint slide, you are using it without permission.

Chapter 12: Implementation of File Operations
Dhamdhere: Operating Systems — A Concept-Based Approach, 2/e. Copyright © 2008.

Input Output Control System (IOCS)
• The IOCS consists of two layers that provide efficient file processing and efficient device performance
– Access methods layer
* Each access method provides efficient processing of files with a specific file organization, e.g., sequential file organization and direct file organization
– Physical IOCS layer
* Performs I/O operations on devices
* Ensures efficient device performance

Physical organizations in Access methods and Physical IOCS
• The physical IOCS reads data from disk into buffers or a disk cache/file cache maintained in memory (or writes data from them), ensuring high device throughput
• The access method moves the data between buffers or caches and the address space of the process, ensuring efficient file processing

Policies and Mechanisms
• Policy
– A guiding principle for implementing a functionality (e.g., priority-based scheduling)
* It invokes mechanisms to perform the various actions required to implement the functionality
• Mechanism
– A specific action in implementing a functionality

Layers of File system and IOCS
• M: mechanism module, P: policy module
• A policy module invokes mechanism modules of the same layer, which may invoke policy and mechanism modules of the lower layer

Policies and mechanisms in file system and IOCS layers

Model of a computer system
• The I/O subsystem has an independent data path to memory
• Devices are connected to device controllers, which are connected to the DMA; a device is identified by the pair (controller id, device id)
• The DMA, a device controller, and a device together implement an I/O operation

Access and data transfer time in an I/O operation
• The total time required to perform an I/O operation is tio = ta + tx, where ta is the access time (positioning the read/write heads over the data) and tx is the data transfer time

Error detection approaches
• Parity bits
• Cyclic redundancy checksum (CRC)
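As a concrete illustration of parity-based error detection, here is a minimal sketch of even parity over a single byte. It is illustrative only, not any particular controller's scheme, and all names and values are made up.

```python
# Minimal sketch of even-parity error detection for one byte (illustrative only).

def even_parity_bit(byte):
    """Return the parity bit that makes the total number of 1-bits even."""
    ones = bin(byte & 0xFF).count("1")
    return ones % 2                       # 1 if the data byte has an odd number of 1-bits

data = 0b1011_0010                        # four 1-bits, so the parity bit is 0
stored = (data << 1) | even_parity_bit(data)

# On a read, the parity is recomputed over data + parity bit; a single flipped
# bit makes the total count odd, so the error is detected (but not corrected).
received = stored ^ (1 << 3)              # simulate a single-bit error
assert bin(received).count("1") % 2 == 1  # odd count -> parity check fails -> error detected
```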
Disk data organization
• Data should be organized such that it can be accessed efficiently
– Notion of a cylinder
* A cylinder consists of identically positioned tracks on all platters of a disk
* All of its tracks can be accessed for the same position of the disk heads
* Its use reduces disk head movement
* Put adjoining data of a file on tracks in the same cylinder
– Data staggering techniques
* A disk rotates a bit while the disk heads are being readied or moved to access a new track
* Make sure that the data to be accessed passes under the read/write heads after their movement is completed

Blocking of records
• A block is called a physical record (pr) and a record in it is called a logical record (lr)
– When a block contains m logical records, the I/O time per physical record is (tio)pr = ta + m × tx
– The effective I/O time per logical record is (tio)lr = (tio)pr / m = ta/m + tx
• Example: transfer rate of the I/O device = 800 Kbytes/sec, record size = 200 bytes, ta = 10 msec (a numeric sketch of this example appears after the access-methods slides below)
• Q: Can tw be made 0?

Variation of (tio)lr with blocking factor
• Transfer rate of the I/O device = 800 Kbytes/sec
• (tio)lr decreases as the blocking factor is increased
• This fact can be used to reduce or eliminate tw through buffering

Combination of buffering and blocking
• A combination of buffering and blocking can be used to minimize the effective elapsed time of a process
– Blocking reduces the effective I/O time per record (see the previous slide)
– Buffering provides overlap between I/O and processing of records
* See the slides on the operation of Single_buf_P and Multi_buf_P
* Overlap is maximum when the effective I/O time per record < the processing time per record
– Use an appropriate blocking factor such that (tio)lr < the processing time per record

Buffered processing of blocked records using blocking factor = 4 and two buffers
• The process waits until I/O on Buf1 is complete; this occurs at 11 msec
• The four records in Buf1 are processed during 11-23 msec
• During this time, I/O on Buf2 is in progress; it completes at 22 msec
• The records in Buf2 can be processed straightaway
• This pattern repeats

Access methods
• The IOCS provides a library of access method modules, each supporting efficient processing of a specific class of files
– The library may contain the following access methods:
* Unbuffered processing of sequential files
* Buffered processing of sequential files
* Processing of direct-access files
* Unbuffered processing of index-sequential files
* Buffered processing of index-sequential files
– The access method for buffered processing of sequential files performs all the actions described in the previous slides (see the next slide for details)

Actions of an access method during file processing
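The blocking formulas above can be checked numerically. Below is a small sketch using the deck's example figures (800 Kbytes/sec transfer rate, 200-byte records, ta = 10 msec); the variable names and the choice of blocking factors are illustrative.

```python
# Illustrative sketch of how the blocking factor m affects (tio)lr,
# using the example figures from the blocking slides.

t_a = 10.0           # access time, in msec
transfer_rate = 800  # Kbytes per second
record_size = 200    # bytes per logical record

t_x = record_size / (transfer_rate * 1000) * 1000  # transfer time per record, in msec (0.25 msec)

for m in (1, 2, 4, 8, 16):
    t_io_pr = t_a + m * t_x   # I/O time for one physical record (a block of m logical records)
    t_io_lr = t_io_pr / m     # effective I/O time per logical record = ta/m + tx
    print(f"m = {m:2d}: (tio)pr = {t_io_pr:5.2f} msec, (tio)lr = {t_io_lr:5.2f} msec")

# (tio)lr falls from 10.25 msec at m = 1 towards tx = 0.25 msec as m grows,
# so a large enough blocking factor can bring it below the processing time per record.
```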
Disk cache
• A disk cache holds some of the data from a disk in memory
– It speeds up repeated accesses to file data and to control data such as file map tables
* It contains copies of recently read disk blocks and of disk blocks that were modified but are yet to be written to the disk
* Hit ratios ≥ 0.9 arise due to spatial and temporal locality
* If the cache becomes full, disk blocks are replaced on an LRU basis
– However, it has some costs
* A record is first read into the cache and then copied to the process address space
* Some files may dominate the cache, degrading access to other files
* It causes poor reliability because some modified disk blocks may not have been written back to the disk before a crash occurred

Disk cache
• Disk caches may slow down virtual memory operations
– Two copy operations are needed during page-in or page-out
* When a page is to be loaded, it would first be read into the disk cache and then into a page frame
* When a modified page is to be replaced, it would first be copied into the disk cache and then written to the disk
• A unified disk cache is used for both paging and file I/O
– Benefits
* The file system considers files to be paged objects, hence any part of a file can be accessed equally efficiently
* Unification avoids one copy operation during page-in and page-out (see the next slide)

Disk caching
• Separate disk and page caches require two copy operations during page-in and page-out
• A unified disk cache requires a single copy operation

File processing in Unix
• A device driver is structured into two halves
– The top half contains routines for I/O initiation
– The bottom half contains interrupt handling and error recovery
– A device driver has a standard interface
* Routines for initializing its own operation, performing reads/writes, or performing interrupt processing have standard entry names
* The strategy routine performs scheduling of I/O requests
• The buffer cache is a disk cache
– Searching in the cache is speeded up through hashing
– Disk blocks allocated to sequential files may be pre-fetched
– It provides the advantages of both buffering and blocking

Unix buffer cache
• When a disk block is to be accessed, its block number is hashed to obtain a bucket number, and the block is then searched for in the list starting at that bucket
• The free list contains the buffers in LRU order; when a disk block is accessed, its buffer is moved to the end of the free list
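Here is a minimal sketch of the hashed-bucket-plus-LRU-free-list organization just described. It illustrates the idea only; it is not the actual Unix buffer cache code, and all names are made up.

```python
from collections import OrderedDict

# Minimal sketch of a Unix-style buffer cache: blocks are hashed into buckets
# for fast lookup, and a separate LRU "free list" decides which buffer to reuse.

NUM_BUCKETS = 64

class BufferCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.buckets = [dict() for _ in range(NUM_BUCKETS)]  # bucket -> {block_no: data}
        self.lru = OrderedDict()                              # free list kept in LRU order

    def _bucket(self, block_no):
        return block_no % NUM_BUCKETS                         # hash a block number to a bucket

    def get(self, block_no, read_from_disk):
        bucket = self.buckets[self._bucket(block_no)]
        if block_no in bucket:                    # hit: found in the bucket's list
            self.lru.move_to_end(block_no)        # most recently used -> end of free list
            return bucket[block_no]
        if len(self.lru) >= self.capacity:        # miss with a full cache: reuse the LRU buffer
            victim, _ = self.lru.popitem(last=False)
            del self.buckets[self._bucket(victim)][victim]
        data = read_from_disk(block_no)           # read the block from disk into the cache
        bucket[block_no] = data
        self.lru[block_no] = None
        return data

# Example: a cache of 4 buffers over a fake disk; block 2 becomes least recently
# used and is the one evicted when block 5 is read.
cache = BufferCache(4)
fake_disk = lambda n: f"block-{n}"
for n in (1, 2, 3, 1, 4, 5):
    cache.get(n, fake_disk)
```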
File processing in Linux
• Some salient features
– A device driver is dynamically loadable
* It has to be registered with the kernel when loaded, and de-registered when removed
– It provides some innovations regarding I/O operations
* A read operation may block a process, but a write operation does not (because the data is simply copied into the disk cache); hence reads are performed at a higher priority
* To reduce disk head movement, I/O operations involving adjoining disk data are combined whenever possible

File processing in Linux
• Provides four I/O schedulers
– No-op scheduler: performs FCFS scheduling
– Deadline scheduler: Look scheduling with additional features
* Write operations may delay reads, thus blocking their processes
* Hence read and write operations have deadlines of 0.5 and 5 seconds, respectively
– Completely fair queuing
* Maintains a separate queue of requests for each process and performs round-robin scheduling between the queues
– Anticipatory scheduler
* A process reading a sequential file typically issues a new read/write only after the previous one is complete and it becomes ready again; hence
* The scheduler waits a few msec before scheduling a new operation
* Thus, the next I/O operation may be performed before the disk heads are moved away

Cache management in Windows
• The cache manager is used by both the VM manager and the I/O manager
• For sequential files, data may be pre-fetched into the cache

File processing in Windows
• The cache is organized as follows
– Each cache block is 256 Kbytes; the part of a file held in a cache block is called a view, and it may be shared by processes
* A file is considered to be a sequence of views
– If the data required by a file operation is not in the cache
* The cache manager allocates a cache block, loads the data, and activates the VM handler to copy it into the process address space

... disk stripe can be read in parallel * This arrangement provides high data transfer rates ...
... Data is skewed analogous to head skew ...
... Redundant Array ... storage?) * Read/write redundant data records in parallel – Fast data transfer rates * Store file data on several disks in the RAID * Read/write file data in parallel – Fast access * Store ...
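The disk-striping fragments above can be illustrated with a small sketch of how consecutive logical blocks map round-robin onto the disks of an array. The mapping and names are illustrative only, not a specific RAID level.

```python
# Minimal sketch of disk striping: consecutive blocks of a file are spread
# round-robin across the disks of an array, so one stripe can be read in parallel.

def block_location(block_no, num_disks):
    """Map a logical block number to (disk index, stripe number)."""
    return block_no % num_disks, block_no // num_disks

num_disks = 4
for block_no in range(8):
    disk, stripe = block_location(block_no, num_disks)
    print(f"logical block {block_no} -> disk {disk}, stripe {stripe}")

# Blocks 0..3 form stripe 0 and live on different disks, so they can be
# transferred in parallel, giving roughly num_disks times the transfer rate.
```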


Table of contents

  • Slide 1

  • Input Output Control System (IOCS)

  • Physical organizations in Access methods and Physical IOCS

  • Policies and Mechanisms

  • Layers of File system and IOCS

  • Policies and mechanisms in file system and IOCS layers

  • Model of a computer system

  • Access and data transfer time in an I/O operation

  • Error detection approaches

  • Disk data organization

  • A cylinder in a disk with several platters

  • Data staggering techniques

  • Sector interleaving

  • Variation of throughput with sector interleaving factor (inf)

  • Variation of throughput with interleaving factor

  • Slide 16

  • Redundant Array of Inexpensive Disks (RAID)

  • Disk striping

  • RAID levels

  • RAID levels
