Chapter 5 - Memory management. This chapter is devoted to the fundamentals of memory management. It begins by discussing how memory protection is implemented in hardware through special registers in the CPU. It then discusses how efficient use of memory is achieved by reusing memory released by processes to satisfy subsequent memory requests, and how techniques for fast memory allocation and deallocation may cause memory fragmentation.
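The hardware protection mentioned above typically uses a pair of special registers: a base (relocation) register and a size register. Every memory reference made by a process is relocated by the base register and checked against the size register. A minimal sketch, with hypothetical register values:

```python
# Each memory reference is relocated by the base register and checked
# against the size register; a reference outside the memory allocated
# to the process raises a protection violation.

BASE_REGISTER = 30000     # start address of the memory allocated to the process
SIZE_REGISTER = 4096      # size of that allocation

def reference(logical_address):
    """Translate a logical address to a physical address, with protection."""
    if not 0 <= logical_address < SIZE_REGISTER:
        raise MemoryError("memory protection violation")
    return BASE_REGISTER + logical_address
```

The kernel reloads these registers at every process switch, so each process can reference only its own allocated memory.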
PROPRIETARY MATERIAL. © 2007 The McGraw-Hill Companies, Inc. All rights reserved. No part of this PowerPoint slide may be displayed, reproduced or distributed in any form or by any means, without the prior written permission of the publisher, or used beyond the limited distribution to teachers and educators permitted by McGraw-Hill for their individual course preparation. If you are a student using this PowerPoint slide, you are using it without permission.

Chapter 5: Memory Management. Dhamdhere: Operating Systems: A Concept-Based Approach, 2ed. Copyright © 2008

Managing the memory hierarchy
• The memory hierarchy comprises several memory units with different speeds and costs
– Efficient operation of a process and of the system depends on effective use of the memory hierarchy
* Efficient operation of a process depends on hit ratios in the faster units of the hierarchy, i.e., the cache and memory
* Efficient operation of the system requires many processes to be present in memory
• How different levels in the hierarchy are managed
– L1 cache and L2 cache
* Allocation and use are managed by hardware to ensure high hit ratios
– Memory
* Use is managed by the run-time libraries of programming languages
* Allocation is managed by the kernel, which must accommodate many processes in memory and ensure high hit ratios
– Disk
* Allocation and use are managed by the kernel; quick loading and storing of process address spaces is important
• Efficient use of memory
– Involves sharing of memory among processes and reuse of memory previously allocated to other processes
– Requires speedy allocation and de-allocation within the memory allocated to a process
– We discuss these aspects in this chapter and this set of slides
• The memory hierarchy consisting of memory and a disk is called virtual memory
– We discuss it separately in a later chapter

Memory binding
• Each entity has a set of attributes; e.g., a variable has a type, size, and dimensionality
– Binding is the action of specifying values of attributes of an entity, e.g.,
* Declaration of the type of a variable is the binding of its type attribute
* Memory allocation is the binding of its memory address attribute
– Two kinds of binding are used in practice:
* Early binding (static binding – binding before operation begins): restrictive, but leads to efficient operation of a process
* Late binding (dynamic binding – binding during operation): flexible, but may lead to less efficient operation of a process

Features of static and dynamic memory allocation
• Static memory allocation
– Allocation is performed before a process starts operating
– The size of memory required should be known a priori; otherwise,
* Wastage may occur if the size is overestimated
* A process may have to be terminated if it requires more memory than was allocated to it
– No allocation or de-allocation actions are required during operation
• Dynamic memory allocation
– Allocation is performed during the operation of a process
– Allocated memory matches the actual requirement; hence no wastage of memory
– Allocation / de-allocation overhead is incurred during operation

Memory allocation preliminaries
• Stack – LIFO allocation
– A 'contiguous' data structure—data occupy adjoining locations
– Used for data allocated 'automatically' on entering a function, such as parameters of a function call and local variables of a function
• Heap
– Non-contiguous data structure—data
may not occupy adjoining locations
* Pointer-based access to allocated data
– Used for program controlled data (PCD data) that is explicitly allocated and de-allocated in a program, e.g., through malloc / calloc
– 'Holes', i.e., unused areas, may develop in memory during operation

A hole develops in a heap due to de-allocation
(a) Three variables are allocated in the heap
(b) De-allocation of the memory of floatptr2 creates a hole in memory

Memory allocation for a process
• Memory requirements of the components of a process
– Sizes of the code and static data of the program to be executed are known a priori
* Hence code and static data can be allocated statically
– Sizes of the stack and program controlled data (PCD data) vary during operation
* Hence the stack and PCD data require dynamic allocation
– The memory allocation model for a process incorporates both these requirements

Memory allocation model for a process
• The code and static data are given a fixed allocation
• The stack and PCD data can grow in opposite directions
• Their sizes can vary independently; a problem arises only if they overlap

Approaches to noncontiguous memory allocation
• Segmentation
– Each component in a process is a logical unit called a segment, e.g., a function, a module, or an object
* Components can have different sizes
– The kernel allocates memory to all components and builds a segment table
* Avoids internal fragmentation, but external fragmentation exists
– Each logical address is a pair of the form (segment id, byte id)
* The segment id could be a segment name or a segment number
* The byte id could be a byte name or a byte number
– The MMU uses the segment table to perform address translation

A process Q in segmentation
A logical address is assumed to use segment and byte ids

Comparison of contiguous and noncontiguous memory allocation
• Contiguous memory allocation
– No allocation / de-allocation overhead during program execution
– Internal fragmentation exists in partitioned allocation
– External fragmentation exists in first/best/next fit allocation
– A relocation register is necessary if a swapped-out process is to be swapped in at a different memory location
• Noncontiguous memory allocation
– Address translation is performed during program execution
– Paging: no external fragmentation; internal fragmentation exists
– Segmentation: only external fragmentation exists
– A swapped-in process may be loaded in any part of memory

Kernel memory allocation
• The kernel needs memory for its own use
– The kernel creates and destroys the data structures used to store information about processes and resources at a very high rate, e.g., PCBs (process control blocks) and ECBs (event control blocks)
* Hence the efficiency of kernel memory allocation directly influences kernel overhead
* The kernel uses special memory allocation techniques for efficient memory allocation to its data structures; these techniques exploit the fact that the sizes of these data structures are known in advance

Slab allocator
• A slab is a fixed-size area of memory
– A slab contains data structures of the same kind
* It is pre-formatted to contain standard-size slots for these data structures
* It
has a free list indicating which of its slots are free
* Hence allocation and de-allocation of memory are very fast
– If all slabs containing data structures of a specific kind are full, the kernel allocates new slabs

Format of a slab
• Each slab has a different-sized colouring area
• The colouring area is used to ensure that objects in different slabs map into different areas of the cache, leading to a high cache hit ratio

Linking, loading, and execution of programs
A translator produces an object module for a program
The linker relocates object modules and combines several object modules to form a binary program
The loader loads a binary program in memory for execution

Memory binding in programs
• The origin of a program is the address of its first instruction or data byte
– A compiler is given an origin specification, which is called the compiled origin of the program
* It binds the instructions and data of a program in accordance with the origin specification
* This binding has to change if the program is to be executed in some other area of memory; we call this requirement relocation of a program

Assembly program P and its generated code
• Symbols MAX and ALPHA are external references: they are defined in some other program(s)
• TOTAL is an entry point: it is defined in this program, and may be referred to in other programs
• Instructions READ A and BC LT, LOOP are address-sensitive; their operand addresses should be modified during relocation

Linking and relocation of programs
• Linking
– A program may wish to use library functions and other programs
* The program should be combined with these library functions and programs
* Linking is the function that performs this action
• Relocation
– Many programs may have the same origin specification
* They have to be relocated if they are to be executed simultaneously
– A program may have to be executed from a memory area that is different from the area for which it was compiled
• Linking and relocation could be performed
– Statically
– Dynamically
* Dynamic linking: only modules that are used during an execution are linked; this conserves memory
To implement dynamic linking of a module abc, a standard module is linked statically; during execution, it invokes the linker to link module abc
* Dynamic relocation: facilitates compaction

Self-relocating programs
• A program P may have to be loaded for execution in any area of memory
– A relocation register in the CPU facilitates such execution
– Alternatively, program P may be statically relocated to different memory areas
* P may be relocated to many memory areas and the resulting binary programs may be stored in a library; this occupies too much library space and lacks flexibility
* Static relocation may be performed before every execution of P; this causes delays
* A self-relocating program avoids these problems by relocating itself dynamically
• A self-relocating program relocates itself
– It comprises three parts
* A part that determines its load time address in memory: it makes a dummy subroutine call and uses the return address to find its own load time
address
* A relocating logic that knows which instructions need to be modified to perform relocation
* The main part of the program: it consists of instructions and data; this part is executed after the program has relocated itself

Sharing of a program
(a) Static sharing: program C is included in each sharing program
(b) Dynamic sharing: only one copy of program C exists in memory. It is linked with every program that wishes to use it. This arrangement conserves memory

Reentrant program
(a) C allocates its data dynamically; it accesses the data through register 1
(b) C is invoked by A. It allocates data area CA
(c) C is invoked by B. It allocates data area CB. Both invocations of C can execute concurrently without mutual interference

Boundary tags
• Tags are stored at both boundaries of a memory area
– Hence the tags of adjoining memory areas are in adjoining memory locations
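The way a hole develops in the heap on de-allocation, and is later reused to satisfy a request, can be sketched with a toy first-fit allocator (hypothetical names and sizes; real heap managers are far more elaborate):

```python
# Toy heap: a list of [start, size, tag] areas, where tag is "free" or
# "used". Freeing a middle area leaves a 'hole' that a later first-fit
# request can reuse.

def init_heap(total):
    return [[0, total, "free"]]

def alloc(heap, size):
    """First-fit allocation; returns the start address or None."""
    for i, (start, sz, tag) in enumerate(heap):
        if tag == "free" and sz >= size:
            heap[i] = [start, size, "used"]
            if sz > size:                          # leftover stays free
                heap.insert(i + 1, [start + size, sz - size, "free"])
            return start
    return None

def free(heap, addr):
    for area in heap:
        if area[0] == addr and area[2] == "used":
            area[2] = "free"                       # a hole develops here
            return
    raise ValueError("bad address")

heap = init_heap(100)
a = alloc(heap, 30)    # like ptr1 in the figure
b = alloc(heap, 20)    # like floatptr2
c = alloc(heap, 10)    # like ptr3
free(heap, b)          # a hole of size 20 develops between a and c
d = alloc(heap, 15)    # first fit reuses part of the hole
```

Note how the request for 15 bytes is satisfied from the hole left by `b`, splitting it and leaving a smaller free area behind: this is how fast reuse leads to fragmentation.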
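Address translation in segmentation, as described above, can be sketched as a segment-table lookup; the table layout and addresses below are illustrative only, and a real MMU performs this check in hardware:

```python
# Each segment table entry maps a segment id to (base, size).
# A logical address (segment_id, byte_id) translates to base + byte_id
# after a bounds check against the segment size.

segment_table = {             # hypothetical layout for a process Q
    "code":  (40000, 3000),
    "data":  (50000, 2000),
    "stack": (60000, 1000),
}

def translate(seg_id, byte_id):
    base, size = segment_table[seg_id]
    if not 0 <= byte_id < size:
        raise MemoryError("segment bounds violation")   # protection check
    return base + byte_id
```

Because each segment has its own size, bounds checking is exact to the logical unit, which is why segmentation has no internal fragmentation.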
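The slab allocator's fast path amounts to popping a slot off a per-slab free list; when every slab of a kind is full, a new slab is allocated. A minimal sketch, assuming fixed-size kernel objects (all names and sizes are illustrative):

```python
class Slab:
    """A fixed-size memory area pre-formatted into standard-size slots."""
    def __init__(self, base, slot_size, n_slots):
        # Free list of slot addresses; pop/push make alloc/free O(1).
        self.free_slots = [base + i * slot_size for i in range(n_slots)]

    def alloc(self):
        return self.free_slots.pop() if self.free_slots else None

    def free(self, addr):
        self.free_slots.append(addr)

def kernel_alloc(slabs, make_new_slab):
    """Try the existing slabs; allocate a new slab if all are full."""
    for slab in slabs:
        addr = slab.alloc()
        if addr is not None:
            return addr
    slabs.append(make_new_slab())
    return slabs[-1].alloc()

slabs = [Slab(base=0, slot_size=64, n_slots=2)]
new_slab = lambda: Slab(base=1000, slot_size=64, n_slots=2)
addrs = [kernel_alloc(slabs, new_slab) for _ in range(3)]  # third alloc forces a new slab
```

Because every slot has the same known size, there is no searching for a fitting area and no splitting or coalescing, which is what makes kernel allocation through slabs so fast.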
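Relocation of the address-sensitive instructions discussed above amounts to adding the difference between the load-time origin and the compiled origin to every address-sensitive operand. A simplified sketch with a made-up instruction format (the opcodes echo the READ and BC instructions of program P; the addresses are invented):

```python
# An 'instruction' is (opcode, operand, address_sensitive).
# Relocating a program compiled for origin c to run at origin l
# adds (l - c) to every address-sensitive operand.

def relocate(program, compiled_origin, load_origin):
    delta = load_origin - compiled_origin
    return [(op, operand + delta if sensitive else operand, sensitive)
            for (op, operand, sensitive) in program]

# A program compiled for origin 500; READ and BC carry memory addresses.
program = [
    ("READ", 540, True),    # READ A      -> operand is the address of A
    ("BC",   504, True),    # BC LT, LOOP -> operand is the address of LOOP
    ("ADD",  7,   False),   # immediate operand, not address-sensitive
]
relocated = relocate(program, compiled_origin=500, load_origin=900)
```

A self-relocating program runs exactly this kind of loop over itself, using the load-time address it discovered through the dummy subroutine call as `load_origin`.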
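The difference between a shared program with static data and a reentrant one can be simulated by interleaving two invocations step by step (generators stand in for concurrent execution; the data-area mechanism here replaces the register-1 access of the slides, and all names are illustrative):

```python
# Each invocation sums a list, yielding its running total after every
# step so two invocations can be interleaved.

def total_gen(values, area_factory):
    area = area_factory()          # obtain a data area for this invocation
    area["sum"] = 0
    for v in values:
        area["sum"] += v
        yield area["sum"]

shared = {"sum": 0}
nonreentrant = lambda vals: total_gen(vals, lambda: shared)  # one static area
reentrant    = lambda vals: total_gen(vals, dict)            # fresh area, like CA / CB

def interleave(make, xs, ys):
    """Alternate the steps of two invocations (lengths 3 and 2 assumed)."""
    a, b = make(xs), make(ys)
    return [next(g) for g in (a, b, a, b, a)]

bad  = interleave(nonreentrant, [1, 2, 3], [10, 20])
good = interleave(reentrant,    [1, 2, 3], [10, 20])
```

With the shared static area, the second invocation resets and clobbers the first invocation's running sum, so the first invocation's final total is wrong; with per-invocation data areas, both totals are correct, which is exactly the property that lets one memory-resident copy of C serve A and B concurrently.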
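Because tags sit at both ends of an area, an allocator freeing an area can inspect the cell just below and the cell just above it to learn whether the adjoining areas are free, and coalesce them in constant time. A minimal sketch over a list standing in for memory (the tag format is hypothetical):

```python
# Memory is a list of cells. An area of size n occupying cells
# [start, start + n) keeps a (size, status) boundary tag in its first
# and last cells, so a neighbour's tag is always in an adjoining cell.

def write_area(mem, start, size, status):
    mem[start] = mem[start + size - 1] = (size, status)

def free_and_coalesce(mem, start):
    """Free the area at start, merging with free neighbours in O(1)."""
    size, _ = mem[start]
    lo, hi = start, start + size                   # current extent [lo, hi)
    if lo > 0 and mem[lo - 1][1] == "free":        # left neighbour's end tag
        lo -= mem[lo - 1][0]
    if hi < len(mem) and mem[hi][1] == "free":     # right neighbour's start tag
        hi += mem[hi][0]
    write_area(mem, lo, hi - lo, "free")           # one merged free area
    return lo, hi - lo

mem = [None] * 30
write_area(mem, 0, 10, "free")
write_area(mem, 10, 10, "used")
write_area(mem, 20, 10, "free")
merged = free_and_coalesce(mem, 10)    # merges all three into one area
```

Stale tags in the interior of the merged area are simply ignored, since the allocator only ever reads the tags at an area's current boundaries.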