
DOCUMENT INFORMATION

Basic information

Title: Computer Memory Cache, Virtual Memory, Flash Memory
Author: Trần Đặng Mạnh
Supervisor: Nguyen Hoang Dung
University: Vietnam National University, Hanoi
Subject: Computer Memory
Document type: Essay
Year: 2023
City: Hanoi
Format
Pages: 44
Size: 11.25 MB

Structure

  • CHAPTER 1: Compare Cache memory and Virtual memory
  • CHAPTER 2: Virtual Memory
    • I. Virtual Memory
    • II. The general idea of how Virtual memory works
    • III. Benefits of Virtual Memory
    • IV. Hardware and Control Structures
    • V. Execution of a process
    • VI. Implication
    • VII. Real and Virtual Memory
    • VIII. Thrashing
    • IX. Principle of Locality
    • X. Support Needed for Virtual Memory
    • XI. Paging
    • XII. Translation Lookaside Buffer (TLB)
    • XIII. Associative Mapping
    • XIV. Page Table and Virtual Memory
    • XV. Inverted Page Table
    • XVI. The Page Size Issue
    • XVII. Operating System Software
    • XIX. Placement Policy
    • XX. Replacement Policy
    • XXI. Basic algorithms for the replacement policy
      • 1. The LRU Policy
      • 2. Note on counting page faults
      • 3. Implementation of the LRU Policy
      • 4. The FIFO Policy
      • 5. The Clock Policy
      • 6. Resident set
      • 7. Cleaning policy
      • 8. Load control
    • XXII. UNIX
      • 1. Page replacement
      • 2. Kernel Memory Allocator
    • XXIV. Windows memory management
      • 1. Windows virtual address map
      • 2. Windows paging
      • 3. Android memory management
  • CHAPTER 3: Flash Memory
    • I. What is flash memory
    • II. PN Junction
    • III. MOSFET
    • IV. CMOS
    • V. Reading NOR Flash and NAND Flash
      • 1. Moving electrons: Flash Erase and Writes
      • 2. Flash Erase and Write (NAND)
    • VI. Non-volatile flash memory stick and SD card

Content

Virtual Memory

Virtual memory is a memory management technique in which secondary memory can be used as if it were part of main memory. Virtual memory is a technique commonly used in a computer's operating system (OS). It uses both hardware and software to enable a computer to compensate for physical memory shortages, temporarily transferring data from random access memory (RAM) to disk storage.

Virtual memory is important for improving system performance, multitasking, and running large programs. However, users should not rely too heavily on virtual memory, since it is considerably slower than RAM. If the OS has to swap data between virtual memory and RAM too often, the computer will begin to slow down; this is called thrashing.

The general idea of how Virtual memory works

When an application is in use, data from that program is stored at a physical address in RAM. A memory management unit (MMU) maps the address to RAM and automatically translates addresses. The MMU can, for example, map a logical address space to a corresponding physical address. If, at any point, the RAM space is needed for something more urgent, data can be swapped out of RAM into virtual memory.

The computer's memory manager is in charge of keeping track of the shifts between physical and virtual memory. Because disk access is far slower than RAM access, heavy use of virtual memory generally causes a noticeable reduction in performance.
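The translation-and-swap mechanism described above can be sketched as a toy model. All names here are illustrative, not from any real OS: a logical address is split into page number and offset, a page fault loads the page, and an arbitrary victim is swapped out to disk when RAM is full.

```python
PAGE_SIZE = 4096

class SimpleMMU:
    """Toy MMU: maps logical pages to physical frames, evicting to disk when full."""

    def __init__(self, num_frames):
        self.free_frames = list(range(num_frames))
        self.page_to_frame = {}   # logical page number -> physical frame number
        self.on_disk = set()      # pages swapped out to secondary storage

    def translate(self, logical_addr):
        page, offset = divmod(logical_addr, PAGE_SIZE)
        if page not in self.page_to_frame:   # page fault
            self._load(page)
        return self.page_to_frame[page] * PAGE_SIZE + offset

    def _load(self, page):
        if not self.free_frames:                     # RAM full: swap one out
            victim, frame = self.page_to_frame.popitem()
            self.on_disk.add(victim)                 # "write" victim to disk
            self.free_frames.append(frame)
        self.on_disk.discard(page)
        self.page_to_frame[page] = self.free_frames.pop()

mmu = SimpleMMU(num_frames=2)
a = mmu.translate(0)                  # page 0 loaded into a frame
b = mmu.translate(PAGE_SIZE)          # page 1 loaded into the other frame
c = mmu.translate(2 * PAGE_SIZE + 5)  # page 2 forces an eviction to disk
```

A real MMU does this in hardware on every reference; the sketch only shows the bookkeeping the text describes.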

Benefits of Virtual Memory

Some benefits of virtual memory are:

• By using virtual memory, many applications or programs can be executed at a time.

• Main memory has limited space, but you can increase or decrease the size of virtual memory yourself.

• Users can run large programs whose size is greater than main memory. Data that is common in memory can be shared between RAM and virtual memory.

• CPU utilization can be increased because more processes can reside in main memory.

• The cost of buying extra RAM is saved by using virtual memory.

Hardware and Control Structures

Two characteristics are fundamental to memory management: i. All memory references are logical addresses that are dynamically translated into physical addresses at run time. ii. A process may be broken up into a number of pieces that don't need to be contiguously located in main memory during execution.

If these two characteristics are present, it is not necessary that all of the pages or segments of a process be in main memory during execution.

Execution of a process

The operating system brings into main memory a few pieces of the program. The resident set is the portion of the process that is in main memory.

An interrupt is generated when an address is needed that is not in main memory, and the operating system places the process in a blocking state.

The piece of the process that contains the logical address is then brought into main memory:

• The operating system issues a disk I/O read request.

• Another process is dispatched to run while the disk I/O takes place.

• An interrupt is issued when the disk I/O is complete, which causes the operating system to place the affected process in the Ready state.

Implication

More processes may be maintained in main memory.

Only load in some of the pieces of each process.

With so many processes in main memory, it is very likely a process will be in the Ready state at any particular time.

A process may be larger than all of main memory.

At any given time, only a few pages of any process are in main memory, and therefore more processes can be maintained in memory. Furthermore, time is saved because unused pages are not swapped in and out of memory. In the steady state, practically all of main memory will be occupied with process pages, so that the processor and OS have direct access to as many processes as possible.

Real and Virtual Memory

Table 8.2 Characteristics of Paging and Segmentation

| Simple Paging | Virtual Memory Paging | Simple Segmentation | Virtual Memory Segmentation |
| --- | --- | --- | --- |
| Main memory partitioned into small fixed-size chunks called frames | Main memory partitioned into small fixed-size chunks called frames | Main memory not partitioned | Main memory not partitioned |
| Program broken into pages by the compiler or memory management system | Program broken into pages by the compiler or memory management system | Program segments specified by the programmer to the compiler (i.e., the decision is made by the programmer) | Program segments specified by the programmer to the compiler (i.e., the decision is made by the programmer) |
| Internal fragmentation within frames | Internal fragmentation within frames | No internal fragmentation | No internal fragmentation |
| No external fragmentation | No external fragmentation | External fragmentation | External fragmentation |
| Operating system must maintain a page table for each process showing which frame each page occupies | Operating system must maintain a page table for each process showing which frame each page occupies | Operating system must maintain a segment table for each process showing the load address and length of each segment | Operating system must maintain a segment table for each process showing the load address and length of each segment |
| Operating system must maintain a free frame list | Operating system must maintain a free frame list | Operating system must maintain a list of free holes in main memory | Operating system must maintain a list of free holes in main memory |
| Processor uses page number, offset to calculate absolute address | Processor uses page number, offset to calculate absolute address | Processor uses segment number, offset to calculate absolute address | Processor uses segment number, offset to calculate absolute address |
| All the pages of a process must be in main memory for the process to run, unless overlays are used | Not all pages of a process need be in main memory frames for the process to run; pages may be read in as needed | All the segments of a process must be in main memory for the process to run, unless overlays are used | Not all segments of a process need be in main memory frames for the process to run; segments may be read in as needed |
| — | Reading a page into main memory may require writing a page out to disk | — | Reading a segment into main memory may require writing one or more segments out to disk |

Thrashing

In the steady state, practically all of main memory will be occupied with process pieces, so that the processor and operating system have direct access to as many processes as possible. Thus, when the operating system brings one piece in, it must throw another out. If it throws out a piece just before it is used, it will have to bring it back almost immediately; too much of this is called thrashing, where the system spends most of its time swapping pieces rather than executing instructions. To avoid this, the operating system tries to guess, based on recent history, which pieces are least likely to be used in the near future.

Principle of Locality

Program and data references within a process tend to cluster.

Only a few pieces of a process will be needed over a short period of time. Therefore it is possible to make intelligent guesses about which pieces will be needed in the future.

Support Needed for Virtual Memory

For virtual memory to be practical and effective: i. Hardware must support paging and segmentation. ii. The operating system must include software for managing the movement of pages and/or segments between secondary memory and main memory.

Paging

Typically, each process has its own page table

Each page table entry contains a present bit to indicate whether the page is in main memory or not. i. If it is in main memory, the entry contains the frame number of the corresponding page in main memory. ii. If it is not in main memory, the entry may contain the address of that page on disk, or the page number may be used to index another table (often in the PCB) to obtain the address of that page on disk.

A modified bit indicates whether the page has been altered since it was last loaded into main memory. If no change has been made, the page does not have to be written to disk when it needs to be swapped out.

Other control bits may be present if protection is managed at the page level: i. a read-only/read-write bit; ii. protection-level bits: kernel page or user page (more bits are used when the processor supports more than two protection levels).
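As an illustration of these control bits, here is a hypothetical 32-bit page table entry layout. The exact layout varies by processor; the bit positions and names below are assumptions for the sketch.

```python
# Illustrative page table entry packed into a 32-bit word.
PRESENT   = 1 << 0   # page is in main memory
MODIFIED  = 1 << 1   # page altered since it was last loaded
READ_ONLY = 1 << 2   # write protection
KERNEL    = 1 << 3   # protection level: kernel page vs. user page
FRAME_SHIFT = 12     # upper bits hold the frame number (4 KB pages)

def make_pte(frame, *, present=True, modified=False):
    """Pack a frame number and control bits into one entry."""
    pte = frame << FRAME_SHIFT
    if present:
        pte |= PRESENT
    if modified:
        pte |= MODIFIED
    return pte

pte = make_pte(frame=0x1A3, present=True)
frame = pte >> FRAME_SHIFT               # recover the frame number
needs_writeback = bool(pte & MODIFIED)   # clean pages can simply be discarded
```

The modified bit is what lets the swap-out path skip the disk write for unchanged pages, as noted above.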

Translation Lookaside Buffer (TLB)

Each virtual memory reference can cause two physical memory accesses: i. one to fetch the page table entry; ii. one to fetch the data. To overcome the effect of doubling the memory access time, most virtual memory schemes make use of a special high-speed cache called the translation lookaside buffer.
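The TLB's role can be sketched as a small fully associative cache keyed by page number. This is a toy software model; real TLBs are hardware structures and are often set-associative.

```python
from collections import OrderedDict

class TLB:
    """Toy TLB: caches page table entries, evicting the least recently used."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # page number -> frame number
        self.hits = self.misses = 0

    def lookup(self, page, page_table):
        if page in self.entries:
            self.hits += 1
            self.entries.move_to_end(page)   # mark as most recently used
            return self.entries[page]
        self.misses += 1                     # extra page-table access happens here
        frame = page_table[page]
        if len(self.entries) == self.capacity:
            self.entries.popitem(last=False) # evict the least recently used entry
        self.entries[page] = frame
        return frame

page_table = {0: 7, 1: 3, 2: 9}
tlb = TLB(capacity=2)
tlb.lookup(0, page_table)   # miss: fetched from the page table
tlb.lookup(0, page_table)   # hit: no page-table access needed
tlb.lookup(1, page_table)   # miss
```

On a hit, the second memory access is avoided, which is exactly the saving the text attributes to the TLB.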

Associative Mapping

The TLB only contains some of the page table entries so we cannot simply index into the TLB based on page number.

Each TLB entry must include the page number as well as the complete page table entry.

The processor interrogates a number of TLB entries simultaneously to determine whether there is a match on the page number.

The smaller the page size, the less internal fragmentation; however, more pages are then required per process.

More pages per process means larger page tables

For large programs in a heavily multiprogrammed environment, some portion of the page tables of active processes must be kept in virtual memory instead of main memory. The physical characteristics of most secondary-memory devices favor a larger page size for more efficient block transfer of data.

Page Table and Virtual Memory

Most computer systems support a very large virtual address space: i. 32 to 64 bits are used for logical addresses. ii. If (only) 32 bits are used with 4 KB pages, a page table may have 2^20 entries.

The entire page table may take up too much main memory. Hence, page tables are often also stored in virtual memory and subjected to paging. When a process is running, part of its page table must be in main memory (including the page table entry of the currently executing page).
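The arithmetic behind the 2^20 figure can be checked directly (the 4-bytes-per-entry size is an assumption for illustration):

```python
address_bits = 32
page_size = 4 * 1024                      # 4 KB = 2^12 bytes

offset_bits = page_size.bit_length() - 1  # 12 bits select a byte within a page
num_entries = 2 ** (address_bits - offset_bits)   # 2^20 page table entries

# Assuming 4 bytes per entry, a flat table alone needs 4 MB per process,
# which is why page tables are themselves paged.
table_bytes = num_entries * 4
```

With 64-bit addresses the flat-table size becomes astronomically larger, which motivates the multi-level and inverted page tables discussed next.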

Inverted Page Table

Another solution (PowerPC, IBM RS/6000) to the problem of maintaining large page tables is to use an Inverted Page Table (IPT).

We generally have only one IPT for the whole system

There is only one IPT entry per physical frame (rather than one per virtual page); this greatly reduces the amount of memory needed for page tables.

The 1st entry of the IPT is for frame #1, ..., and the nth entry is for frame #n; each of these entries contains the virtual page number it holds. Thus, this table is inverted.

The process ID together with the virtual page number can be used to search the IPT to obtain the frame number.

For better performance, hashing is used to obtain a hash table entry which points to an IPT entry. i. A page fault occurs if no match is found. ii. Chaining is used to manage hashing overflow.
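A minimal sketch of this hashed inverted page table follows; the structure and names are illustrative, and real implementations store more per entry (protection bits, chain links in hardware-defined formats).

```python
NUM_FRAMES = 8

# One entry per physical frame: ipt[frame] = (process id, virtual page number).
ipt = [None] * NUM_FRAMES
buckets = {}   # hash value -> list of frame numbers (chaining handles overflow)

def insert(pid, vpn, frame):
    """Record that physical frame `frame` now holds page (pid, vpn)."""
    ipt[frame] = (pid, vpn)
    buckets.setdefault(hash((pid, vpn)) % NUM_FRAMES, []).append(frame)

def lookup(pid, vpn):
    """Return the frame holding (pid, vpn), or None on a page fault."""
    for frame in buckets.get(hash((pid, vpn)) % NUM_FRAMES, []):
        if ipt[frame] == (pid, vpn):
            return frame
    return None   # no match found: page fault

insert(pid=1, vpn=42, frame=3)
```

The table size is bounded by the number of physical frames, not by the virtual address space, which is the point made above.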

The Page Size Issue

Page size is defined by the hardware and is always a power of 2, for more efficient logical-to-physical address translation. But exactly which size to use is a difficult question: i. A large page size is good, since with a small page size more pages are required per process. ii. More pages per process means larger page tables, and hence a large portion of the page tables ends up in virtual memory. iii. A small page size is good to minimize internal fragmentation. iv. A large page size is good since disks are designed to efficiently transfer large blocks of data. v. A larger page size means fewer pages in main memory, which increases the TLB hit ratio.

With a very small page size, each page matches the code that is actually used, so faults are low.

An increased page size causes each page to contain more code that is not used, and page faults rise.

Page faults decrease again as we approach point P, where the size of a page equals the size of the entire process.

The page fault rate is also determined by the number of frames allocated per process: page faults drop to a reasonable value when W frames are allocated, and drop to 0 when the number N of frames is such that the process is entirely in memory.

Page sizes from 1KB to 4KB are most commonly used

But the issue is nontrivial, so some processors now support multiple page sizes. Examples: i. The Pentium supports 2 sizes: 4 KB or 4 MB. ii. The R4000 supports 7 sizes: 4 KB to 16 MB.

Operating System Software

Memory management software depends on whether the hardware supports paging or segmentation or both

Pure segmentation systems are rare. Segments are usually paged, and the memory management issues are then those of paging.

We shall thus concentrate on issues associated with paging

To achieve good performance, we need a low page fault rate

Fetch Policy

Determines when a page should be brought into main memory. Two common policies:

i. Demand paging brings pages into main memory only when a reference is made to a location on the page (i.e., paging on demand only). There are many page faults when a process first starts, but the rate should decrease as more pages are brought in.

ii. Prepaging brings in more pages than needed. Locality of reference suggests that it is more efficient to bring in pages that reside contiguously on disk, but the efficiency is not definitely established: the extra pages brought in are often not referenced.

Placement Policy

Determines where in real memory a process piece resides

For pure segmentation systems: i. first-fit and next-fit are possible choices (a real issue).

For paging (and paged segmentation): i the hardware decides where to place the page: the chosen frame location is irrelevant since all memory frames are equivalent (not an issue)

Replacement Policy

Deals with the selection of a page in main memory to be replaced when a new page is brought in

This occurs whenever main memory is full (no free frame available)

Occurs often since the OS tries to bring into main memory as many processes as it can to increase the multiprogramming level

The decision for the set of pages to be considered for replacement is related to the resident set management strategy: i how many page frames are to be allocated to each process? We will discuss this later

No matter what is the set of pages considered for replacement, the replacement policy deals with algorithms that will choose the page within that set

Basic algorithms for the replacement policy

The Optimal policy selects for replacement the page for which the time to the next reference is the longest. i. It produces the fewest page faults. ii. It is impossible to implement (we would need to know the future), but it serves as a standard against which to compare the other algorithms we shall study:

Least recently used (LRU) First-in, first-out (FIFO) Clock

1. The LRU Policy:

Replaces the page that has not been referenced for the longest time. i. By the principle of locality, this should be the page least likely to be referenced in the near future. ii. It performs nearly as well as the optimal policy.

Example: A process of 5 pages with an OS that fixes the resident set size to 3

2. Note on counting page faults:

When the main memory is empty, each new page we bring in is a result of a page fault

For the purpose of comparing the different algorithms, we do not count these initial page faults, because their number is the same for all algorithms.

However, in contrast to what is shown in the figures, these initial references really do produce page faults.

3. Implementation of the LRU Policy:

Each page could be tagged (in the page table entry) with the time of its last memory reference; the tag must be updated on every reference.

The LRU page is the one with the smallest time value (needs to be searched at each page fault)

This would require expensive hardware and a great deal of overhead.

Consequently, very few computer systems provide sufficient hardware support for true LRU replacement policy

Other algorithms are used instead
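The timestamp scheme just described can be simulated in a few lines. This is a toy model for counting faults, following the note above about not counting the initial faults while memory is still filling; it is a simulation, not how an OS would implement LRU.

```python
def lru_simulate(references, num_frames):
    """Count LRU page faults for a reference string, ignoring initial faults."""
    last_used = {}   # resident page -> virtual time of last reference
    faults = 0
    for time, page in enumerate(references):
        if page not in last_used:
            if len(last_used) == num_frames:   # memory full: a real fault
                victim = min(last_used, key=last_used.get)  # smallest time tag
                del last_used[victim]
                faults += 1
            # faults while memory is still filling are not counted,
            # per the note on initial page faults above
        last_used[page] = time
    return faults

# Reference string for a 5-page process with a resident set of 3 frames.
faults = lru_simulate([2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2], num_frames=3)
```

The `min` over time tags is exactly the "search at each page fault" that makes true LRU expensive in hardware.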

4. The FIFO Policy:

Treats the page frames allocated to a process as a circular buffer. i. When the buffer is full, the oldest page is replaced; hence first-in, first-out.

This is not necessarily the same as the LRU page

A frequently used page is often the oldest, so it will be repeatedly paged out by FIFO

Simple to implement: it requires only a pointer that circles through the page frames of the process.

5. The Clock Policy:

Requires the association of an additional bit with each frame, referred to as the use bit.

When a page is first loaded into memory or referenced, its use bit is set to 1. The set of frames is considered to be a circular buffer.

Any frame with a use bit of 1 is passed over by the algorithm

Page frames visualized as laid out in a circle
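The circular-buffer behavior above can be sketched as follows (illustrative only; an OS keeps use bits in page-table entries, not Python lists):

```python
class ClockReplacer:
    """Toy clock policy: frames form a circle; the hand skips use-bit-1 frames."""

    def __init__(self, num_frames):
        self.frames = [None] * num_frames   # page held by each frame
        self.use = [0] * num_frames         # use bit per frame
        self.hand = 0                       # the clock pointer

    def access(self, page):
        """Reference a page; return True if it caused a page fault."""
        if page in self.frames:             # resident: just set the use bit
            self.use[self.frames.index(page)] = 1
            return False
        while True:                         # find a frame with use bit 0
            if self.use[self.hand] == 0:
                self.frames[self.hand] = page
                self.use[self.hand] = 1     # newly loaded page gets use bit 1
                self.hand = (self.hand + 1) % len(self.frames)
                return True
            self.use[self.hand] = 0         # passed over: give a second chance
            self.hand = (self.hand + 1) % len(self.frames)
```

Frames whose use bit is 1 are passed over (and cleared), so a recently referenced page survives one full sweep of the hand.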

Replacement policy and cache size

With large caches, replacement of pages can have a performance impact

If the page frame selected for replacement is in the cache, that cache block is lost as well as the page that it holds

In system using page buffering, cache performance can be improved with a policy for page replacement in the page buffer

Most operating systems place pages by selecting an arbitrary page frame from the page buffer

6. Resident set:

The OS must decide how many pages of a process to bring into main memory.

The smaller the amount of memory allocated to each process, the more processes can reside in memory

Small number of pages loaded increases page faults

Beyond a certain size, further allocation of pages will not affect the page fault rate.

+ Fixed-allocation: gives a process a fixed number of frames in main memory within which to execute. When a page fault occurs, one of the pages of that process must be replaced.

+ Variable-allocation: allows the number of page frames allocated to a process to be varied over the lifetime of the process.

The scope of a replacement strategy can be categorized as global or local. Both types are activated by a page fault when there are no free page frames.

+ Local: chooses only among the resident pages of the process that generated the page fault.

+ Global: considers all unlocked pages in main memory.

Necessary to decide ahead of time the amount of allocation to give a process

If allocation is too small, there will be a high page fault rate

When a new process is loaded into main memory, allocate to it a certain number of page frames as its resident set

When a page fault occurs, select the page to replace from among the resident set of the process that suffers the fault

Reevaluate the allocation provided to the process and increase or decrease it to improve overall performance

Decision to increase or decrease a resident set size is based on the assessment of the likely future demands of active processes

Criteria used to determine resident set size

Requires a use bit to be associated with each page in memory

Bit is set to 1 when that page is accessed

When a page fault occurs, the OS notes the virtual time since the last page fault for that process

Does not perform well during the transient periods when there is a shift to a new locality

Variable-Interval Sampled Working Set (VSWS)

Evaluates the working set of a process at sampling instances based on elapsed virtual time. The policy is driven by three parameters:

- The minimum duration of the sampling interval

- The maximum duration of the sampling interval

- The number of page faults that are allowed to occur between sampling instances

7. Cleaning policy:

Concerned with determining when a modified page should be written out to secondary memory.

8. Load control:

Determines the number of processes that will be resident in main memory.

Critical in effective memory management

With too few processes, there will be many occasions when all processes are blocked, and much time will be spent in swapping.

Too many processes will lead to thrashing

If the degree of multiprogramming is to be reduced, one or more of the currently resident processes must be swapped out. Candidates include:

• The process with the smallest resident set.

• The process with the largest remaining execution window.

UNIX

UNIX is intended to be machine independent, so its memory management schemes vary. Early UNIX used variable partitioning with no virtual memory scheme. In current implementations of UNIX and Solaris:

- SVR4 and Solaris use two separate schemes: a paging system and a kernel memory allocator.

- The paging system provides a virtual memory capability that allocates page frames in main memory to processes, and also allocates page frames to disk block buffers.

- The kernel memory allocator allocates memory for the kernel.

1. Page replacement:

The page frame data table is used for page replacement.

Pointers are used to create lists within the table

All available frames are linked together in a list of free frames available for bringing in pages.

When the number of available frames drops below a certain threshold, the kernel will steal a number of frames to compensate

2. Kernel Memory Allocator:

• The kernel generates and destroys small tables and buffers frequently during the course of execution, each of which requires dynamic memory allocation.

• Most of these blocks are significantly smaller than typical pages (therefore paging would be inefficient).

• Allocations and free operations must be made as fast as possible.

UNIX often exhibits steady-state behavior in kernel memory demand, i.e., the amount of demand for blocks of a particular size varies slowly in time. The allocator therefore defers coalescing until it seems likely to be needed, and then coalesces as many blocks as possible.

Linux memory management shares many characteristics with UNIX. Its page replacement is based on the clock algorithm, with the use bit replaced by an 8-bit age variable:

• The age is incremented each time the page is accessed.

• The kernel periodically decrements the age bits.

• A page with an age of 0 is an "old" page that has not been referenced in some time and is the best candidate for replacement.

• This is a form of least frequently used policy.
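The aging scheme can be sketched like this. It is a simplification with illustrative names; the real kernel keeps this bookkeeping inside its page structures.

```python
MAX_AGE = 255   # the age is an 8-bit counter

def on_access(ages, page):
    """Bump a page's age on each access, saturating at 8 bits."""
    ages[page] = min(MAX_AGE, ages.get(page, 0) + 1)

def periodic_decay(ages):
    """Periodically decrement every age; unused pages drift toward 0."""
    for page in ages:
        ages[page] = max(0, ages[page] - 1)

def best_victim(ages):
    """An 'old' page with the lowest age is the preferred replacement victim."""
    return min(ages, key=ages.get)

ages = {}
for _ in range(3):
    on_access(ages, 'hot')    # frequently accessed page
on_access(ages, 'cold')       # touched once
periodic_decay(ages)          # 'cold' decays to 0 first
```

Because the counter accumulates accesses rather than recording only the last one, the policy behaves like least frequently used rather than LRU, as the text notes.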

The kernel memory capability manages physical main memory page frames; its primary function is to allocate and deallocate frames for particular uses. Possible owners of a frame include user-space processes, dynamically allocated kernel data, static kernel code, and the page cache.

A buddy algorithm is used so that memory for the kernel can be allocated and deallocated in units of one or more pages.

A page allocator alone would be inefficient because the kernel requires small short-term memory chunks in odd sizes. Slab allocation is used by Linux to accommodate such small chunks.

Windows memory management

Virtual memory manager controls how memory is allocated and how paging is performed

Designed to operate over a variety of platforms

Uses page sizes ranging from 4 KB to 64 KB

On 32-bit platforms each user process sees a separate 32-bit address space allowing 4 Gbytes of virtual memory per process

By default, half is reserved for the OS

Large memory-intensive applications run more effectively using 64-bit Windows. Most modern PCs use the AMD64 processor architecture, which is capable of running as either a 32-bit or 64-bit system.

On creation, a process can make use of the entire user space of almost 2Gbytes

This space is divided into fixed-size pages managed in contiguous regions allocated on 64-KB boundaries.

Regions may be in one of three states: available, reserved, or committed.

Windows uses variable allocation, local scope

When activated, a process is assigned a data structure to manage its working set Working sets of active processes are adjusted depending on the availability of main memory

Android includes a number of extensions to the normal Linux kernel memory management facility:

+ ASHMem (Anonymous Shared Memory): provides anonymous shared memory, which abstracts memory as file descriptors. A file descriptor can be passed to another process to share memory.

+ PMEM (Physical Memory): allocates virtual memory so that it is physically contiguous. This is useful for hardware that does not support virtual memory.

+ Low Memory Killer: enables the system to notify an app or apps that they need to free up memory. If an app does not cooperate, it is terminated.

Flash Memory

What is flash memory

Flash memory is an evolving technology that is finding its way into our lives on an increasing scale; flash-memory technology is ubiquitous. Like most things associated with computers, flash storage has a specific set of benefits and drawbacks, and having a basic idea of those parameters allows a buyer to make a more informed choice about which kind is best for his or her needs. Flash memory is a kind of electrically erasable read-only memory (EEPROM) that can erase and rewrite data. It is non-volatile, meaning it can hold data even without power. Based on the way read/write data is addressed, flash memory comes in two types: NAND flash and NOR flash.

Flash storage is made using solid-state chips, each of which contains an array of flash memory cells. Rather than a traditional electromechanical method, flash memory uses semiconductors for storing data, making it one of the important kinds of data-storage media. While flash memory has gained immense popularity, there are some drawbacks that limit its universal adoption; these factors must be considered before using this storage medium.

PN Junction

A PN junction is the junction formed in a diode, which has the following characteristics:

• In the P-type semiconductor, the hole concentration pp is much higher than the electron concentration np (pp >> np).

• In the N-type semiconductor, the electron concentration nn is much higher than the hole concentration pn (nn >> pn).

The JFET is a junction-gate field-effect transistor, a simple type of field-effect transistor. It is a three-pin semiconductor element, used in electronic circuits as an electronically controlled switching element, an amplifier, or a voltage-controlled resistor.

• It can be understood as follows: when an electric current is passed through a semiconductor medium, its conductive cross-section changes under the action of an electric field perpendicular to that semiconductor layer. If the electric field strength changes, it changes the resistance of the semiconductor layer and thus the current through it. This semiconductor layer is called the conductive channel.

• The current through the transistor is created by only one type of majority carrier; therefore, the FET is a unipolar device.

• FETs have very high input impedance.

• The noise in FETs is much less than in bipolar transistors.

• There is no offset voltage at drain current ID = 0, so it makes a good switching element.

• Three basic JFET configurations:

JFET with common source (Common Source, CS)

• Input signal between G and S, output signal between D and S.

JFET with common gate (Common Gate, CG)

• Input signal between S and G, output signal between D and G.

JFET with common drain (Common Drain, CD)

• Input signal between G and D, output signal between S and D.

MOSFET

MOSFET is short for Metal-Oxide-Semiconductor Field-Effect Transistor, i.e., a special transistor whose structure and operation differ from those of a conventional transistor. The MOSFET operates on the field effect to control current; it has a large input impedance, making it suitable for amplifying weak signal sources. Working principle:

• The MOSFET works in two modes, open and closed. Because its current is carried by majority carriers, the MOSFET can switch at very high frequencies; but to ensure short switching times, the control circuit is an important issue.

• Looking at the equivalent circuit of the MOSFET, we see that the switching behavior depends on its parasitic capacitances.

• For a P channel: the MOSFET's turn-on control voltage is Ugs < 0, and current flows from S to D.

• For an N channel: the turn-on control voltage is Ugs > 0, and the turn-off control voltage is Ugs ≤ 0.

Date posted: 09/05/2024, 10:57
