Chapter 9 – Operating Systems


Chapter 9: Virtual Memory
Operating System Concepts Essentials, Silberschatz, Galvin and Gagne ©2013

- Background
- Demand Paging
- Copy-on-Write
- Page Replacement
- Allocation of Frames
- Thrashing
- Memory-Mapped Files
- Allocating Kernel Memory
- Other Considerations
- Operating-System Examples

Objectives
- To describe the benefits of a virtual memory system
- To explain the concepts of demand paging, page-replacement algorithms, and allocation of page frames
- To discuss the principle of the working-set model
- To examine the relationship between shared memory and memory-mapped files
- To explore how kernel memory is managed

Background
- Code needs to be in memory to execute, but the entire program is rarely used
  - Error code, unusual routines, large data structures
- The entire program code is not needed at the same time
- Consider the ability to execute a partially loaded program
  - The program is no longer constrained by the limits of physical memory
  - Each program takes less memory while running, so more programs can run at the same time
    - Increased CPU utilization and throughput with no increase in response time or turnaround time
  - Less I/O is needed to load or swap programs into memory, so each user program runs faster

Background (cont.)
- Virtual memory – separation of user logical memory from physical memory
  - Only part of the program needs to be in memory for execution
  - The logical address space can therefore be much larger than the physical address space
  - Allows address spaces to be shared by several processes
  - Allows for more efficient process creation
  - More programs can run concurrently
  - Less I/O is needed to load or swap processes
- Virtual address space – the logical view of how a process is stored in
memory
  - Usually starts at address 0, with contiguous addresses until the end of the space
  - Physical memory, meanwhile, is organized in page frames
  - The MMU must map logical to physical addresses
- Virtual memory can be implemented via:
  - Demand paging
  - Demand segmentation

Virtual Memory That Is Larger Than Physical Memory
(figure slide)

Virtual-address Space
- The logical address space is usually designed so that the stack starts at the maximum logical address and grows "down" while the heap grows "up"
  - Maximizes address-space use
  - The unused address space between the two is a hole
    - No physical memory is needed until the heap or stack grows into a given new page
- Enables sparse address spaces with holes left for growth, dynamically linked libraries, etc.
- System libraries are shared via mapping into the virtual address space
- Shared memory is implemented by mapping pages read-write into the virtual address space
- Pages can be shared during fork(), speeding process creation

Shared Library Using Virtual Memory
(figure slide)

Demand Paging
- Could bring the entire process into memory at load time
- Or bring a page into memory only when it is needed
  - Less I/O needed, no unnecessary I/O
  - Less memory needed
  - Faster response
  - More users
- Similar to a paging system with swapping
- A page is needed ⇒ reference to it
  - invalid reference ⇒ abort
  - not in memory ⇒ bring to memory
- Lazy swapper – never swaps a page into memory unless the page will be needed
  - A swapper that deals with pages is a pager

Basic Concepts
- With swapping, the pager guesses which pages will be used before a process is swapped out again
- Instead, the pager brings in only those pages
into memory
- How to determine that set of pages?
  - New MMU functionality is needed to implement demand paging
- If the pages needed are already memory resident
  - No difference from non-demand paging
- If a page is needed and not memory resident
  - The OS must detect and load the page into memory from storage
    - Without changing program behavior
    - Without the programmer needing to change code

Buddy System
- Allocates memory from a fixed-size segment consisting of physically contiguous pages
- Memory is allocated using a power-of-2 allocator
  - Satisfies requests in units sized as a power of 2
  - A request is rounded up to the next-highest power of 2
  - When a smaller allocation is needed than is available, the current chunk is split into two buddies of the next-lower power of 2
    - Continue until an appropriately sized chunk is available
- For example, assume a 256 KB chunk is available and the kernel requests 21 KB
  - Split into AL and AR of 128 KB each
    - One is further divided into BL and BR of 64 KB
      - One is further divided into CL and CR of 32 KB each – one of these satisfies the request
- Advantage – unused chunks are quickly coalesced into larger chunks
- Disadvantage – fragmentation

Buddy System Allocator
(figure slide)

Slab Allocator
- Alternate strategy
- A slab is one or more physically contiguous pages
- A cache consists of one or more slabs
- There is a single cache for each unique kernel data structure
  - Each cache is filled with objects – instantiations of the data structure
- When a cache is created, it is filled with objects marked as free
- When structures are stored, objects are marked as used
- If a slab is full of used objects, the next object is allocated from an empty slab
  - If there are no empty slabs, a new slab is allocated
- Benefits include no fragmentation and fast satisfaction of memory requests
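The splitting walk in the 256 KB / 21 KB example above can be sketched in a few lines of Python. This is an illustrative sketch, not part of the slides; the helper names `next_pow2` and `buddy_splits` are invented for the example, and sizes are in KB.

```python
def next_pow2(n):
    """Smallest power of two >= n: the unit a power-of-2 allocator hands out."""
    p = 1
    while p < n:
        p *= 2
    return p

def buddy_splits(chunk_kb, request_kb):
    """Buddy sizes produced while splitting a free chunk down to a request.

    Returns (list of intermediate buddy sizes, size actually allocated).
    """
    target = next_pow2(request_kb)   # request rounded up to next power of 2
    sizes = []
    size = chunk_kb
    while size > target:
        size //= 2                   # split the chunk into two half-size buddies
        sizes.append(size)
    return sizes, target

# The slide's example: a 256 KB chunk serving a 21 KB kernel request.
splits, allocated = buddy_splits(256, 21)
print(splits, allocated)   # [128, 64, 32] 32
```

Note that the request gets a 32 KB chunk for 21 KB of data, leaving 11 KB of internal fragmentation inside the chunk: exactly the disadvantage the slide lists.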
Slab Allocation
(figure slide)

Slab Allocator in Linux
- For example, the process descriptor is of type struct task_struct
  - Approximately 1.7 KB of memory
- New task -> allocate a new struct from the cache
  - Will use an existing free struct task_struct
- A slab can be in three possible states:
  - Empty – all objects free
  - Full – all objects used
  - Partial – a mix of free and used objects
- Upon a request, the slab allocator:
  - Uses a free struct in a partial slab
  - If none, takes one from an empty slab
  - If there is no empty slab, creates a new empty slab
- Slab allocation started in Solaris and is now widespread for both kernel-mode and user memory in various OSes
- Linux 2.2 had SLAB; Linux now has both the SLOB and SLUB allocators
  - SLOB, for systems with limited memory
    - Simple List of Blocks – maintains lists of objects for small, medium, and large objects
  - SLUB, a performance-optimized SLAB that removes per-CPU queues and stores metadata in the page structure

Other Considerations – Prepaging
- Prepaging
  - Aims to reduce the large number of page faults that occur at process startup
  - Prepage all or some of the pages a process will need, before they are referenced
  - But if prepaged pages are unused, the I/O and memory were wasted
  - Assume s pages are prepaged and a fraction α of those pages is used: is the cost of the s·α saved page faults greater or less than the cost of prepaging the s·(1−α) unnecessary pages?
  - If α is near zero, prepaging loses

Other Issues – Page Size
- Sometimes OS designers have a choice
  - Especially if running on a custom-built CPU
- Page-size selection must take into consideration:
  - Fragmentation
  - Page table size
  - Resolution
  - I/O overhead
  - Number of page faults
  - Locality
  - TLB size and effectiveness
- Always a power of 2, usually in the range 2^12 (4,096 bytes) to 2^22 (4,194,304 bytes)
- On average, growing over time

Other Issues – TLB Reach
- TLB reach – the amount of memory accessible from the TLB
- TLB Reach = (TLB Size) × (Page Size)
- Ideally, the working set of each process is stored in the TLB
  - Otherwise there is a high degree of page faults
- Increase the page size
  - This may lead to an increase in fragmentation, as not all applications require a large page size
- Provide multiple page sizes
  - This allows applications that require larger page sizes to use them without an increase in fragmentation

Other Issues – Program Structure
- Program structure
  - int[128,128] data;
  - Each row is stored in one page
  - Program 1:
      for (j = 0; j < 128; j++)
          for (i = 0; i < 128; i++)
              data[i][j] = 0;
    128 × 128 = 16,384 page faults
  - Program 2:
      for (i = 0; i < 128; i++)
          for (j = 0; j < 128; j++)
              data[i][j] = 0;
    128 page faults
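The locality effect described above (each 128-int row occupying exactly one page) can be checked with a short simulation. This is an illustrative sketch, not slide material; it assumes the worst case of a single page frame allocated to the array, so every touch of a non-resident page counts as a fault.

```python
ROWS = COLS = 128
PAGE_INTS = 128   # assumption from the slide: one 128-int row fills one page

def count_faults(row_major_loop):
    """Count page faults while zeroing data[i][j] with the given loop nesting."""
    resident = None   # the single page frame available to the array (worst case)
    faults = 0
    if row_major_loop:
        order = ((i, j) for i in range(ROWS) for j in range(COLS))
    else:   # column-walking inner loop, as in Program 1
        order = ((i, j) for j in range(COLS) for i in range(ROWS))
    for i, j in order:
        page = (i * COLS + j) // PAGE_INTS   # row-major storage: page == row index
        if page != resident:                 # touching a non-resident page faults
            faults += 1
            resident = page
    return faults

print(count_faults(row_major_loop=False))  # 16384: inner loop walks down a column
print(count_faults(row_major_loop=True))   # 128: inner loop stays within one page
```

The column-walking loop touches a different page on every single access, while the row-walking loop faults only once per row, matching the 16,384 vs. 128 counts on the slide.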

Posted: 10/07/2016, 09:51
