
Windows Internals (covering Windows Server 2008 and Windows Vista) – Part 17 (PDF)


DOCUMENT INFORMATION

Basic information

Format: PDF
Number of pages: 50
Size: 0.98 MB

Contents

Section Ref         1        Pfn Ref        46   Mapped Views   4
User Ref            0        WaitForDel      0   Flush Count    0
File Object  86960228        ModWriteCount   0   System Views   0
Flags (8008080) File WasPurged Accessed
File: \Program Files\Debugging Tools for Windows (x86)\debugger.chw

Next, look at the file object referenced by the control area with this command:

lkd> dt nt!_FILE_OBJECT 0x86960228
   +0x000 Type                 : 5
   +0x002 Size                 : 128
   +0x004 DeviceObject         : 0x84a69a18 _DEVICE_OBJECT
   +0x008 Vpb                  : 0x84a63278 _VPB
   +0x00c FsContext            : 0x9ae3e768
   +0x010 FsContext2           : 0xad4a0c78
   +0x014 SectionObjectPointer : 0x86724504 _SECTION_OBJECT_POINTERS
   +0x018 PrivateCacheMap      : 0x86b48460
   +0x01c FinalStatus          : 0
   +0x020 RelatedFileObject    : (null)
   +0x024 LockOperation        : 0 ''

The private cache map is at offset 0x18:

lkd> dt nt!_PRIVATE_CACHE_MAP 0x86b48460
   +0x000 NodeTypeCode      : 766
   +0x000 Flags             : _PRIVATE_CACHE_MAP_FLAGS
   +0x000 UlongFlags        : 0x1402fe
   +0x004 ReadAheadMask     : 0xffff
   +0x008 FileObject        : 0x86960228 _FILE_OBJECT
   +0x010 FileOffset1       : _LARGE_INTEGER 0x146
   +0x018 BeyondLastByte1   : _LARGE_INTEGER 0x14a
   +0x020 FileOffset2       : _LARGE_INTEGER 0x14a
   +0x028 BeyondLastByte2   : _LARGE_INTEGER 0x156
   +0x030 ReadAheadOffset   : [2] _LARGE_INTEGER 0x0
   +0x040 ReadAheadLength   : [2] 0
   +0x048 ReadAheadSpinLock : 0
   +0x04c PrivateLinks      : _LIST_ENTRY [ 0x86b48420 - 0x86b48420 ]
   +0x054 ReadAheadWorkItem : (null)

Finally, you can locate the shared cache map in the SectionObjectPointer field of the file object and then view its contents:

lkd> dt nt!_SECTION_OBJECT_POINTERS 0x86724504
   +0x000 DataSectionObject  : 0x867548f0
   +0x004 SharedCacheMap     : 0x86b48388
   +0x008 ImageSectionObject : (null)

lkd> dt nt!_SHARED_CACHE_MAP 0x86b48388
   +0x000 NodeTypeCode      : 767
   +0x002 NodeByteSize      : 320
   +0x004 OpenCount         : 1
   +0x008 FileSize          : _LARGE_INTEGER 0x125726
   +0x010 BcbList           : _LIST_ENTRY [ 0x86b48398 - 0x86b48398 ]
   +0x018 SectionSize       : _LARGE_INTEGER 0x140000
   +0x020 ValidDataLength   : _LARGE_INTEGER 0x125726
   +0x028 ValidDataGoal     : _LARGE_INTEGER 0x125726
   +0x030 InitialVacbs      : [4] (null)
   +0x040 Vacbs             : 0x867de330  -> 0x84738b30 _VACB
   +0x044 FileObjectFastRef : _EX_FAST_REF
   +0x048 ActiveVacb        : 0x84738b30 _VACB

Alternatively, you can use the !fileobj command to look up and display much of this information automatically. For example, using this command on the same file object referenced earlier results in the following output:

lkd> !fileobj 0x86960228

\Program Files\Debugging Tools for Windows (x86)\debugger.chw

Device Object: 0x84a69a18   \Driver\volmgr
Vpb: 0x84a63278
Event signalled
Access: Read SharedRead SharedWrite

Flags:  0xc0042
        Synchronous IO
        Cache Supported
        Handle Created
        Fast IO Read

FsContext: 0x9ae3e768   FsContext2: 0xad4a0c78
Private Cache Map: 0x86b48460
CurrentByteOffset: 156
Cache Data:
  Section Object Pointers: 86724504
  Shared Cache Map: 86b48388   File Offset: 156 in VACB number 0
  Vacb: 84738b30
  Your data is at: b1e00156

10.5 File System Interfaces

The first time a file's data is accessed for a read or write operation, the file system driver is responsible for determining whether some part of the file is mapped in the system cache. If it's not, the file system driver must call the CcInitializeCacheMap function to set up the per-file data structures described in the preceding section.

Once a file is set up for cached access, the file system driver calls one of several functions to access the data in the file.
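The !fileobj output above reports "File Offset: 156 in VACB number 0" and "Your data is at: b1e00156". That translation from file offset to cache address is simple arithmetic once you know that each VACB maps a 256-KB view of the file; a minimal sketch (the view base 0xb1e00000 is taken from the output above):

```python
# Sketch of how a cached file offset maps to a VACB slot and a virtual
# address in the system cache. Assumes the 256-KB VACB mapping
# granularity; the view base 0xb1e00000 comes from the !fileobj output.

VACB_MAPPING_GRANULARITY = 256 * 1024   # each VACB maps a 256-KB view

def vacb_index(file_offset):
    """Index of the VACB slot that covers this file offset."""
    return file_offset // VACB_MAPPING_GRANULARITY

def cache_address(view_base, file_offset):
    """Virtual address of the byte inside that VACB's mapped view."""
    return view_base + (file_offset % VACB_MAPPING_GRANULARITY)

offset = 0x156                      # the file offset shown by !fileobj
print(vacb_index(offset))           # 0  -> "in VACB number 0"
print(hex(cache_address(0xB1E00000, offset)))  # 0xb1e00156
```

Offsets past 256 KB simply select a higher VACB index: offset 0x40156 would fall in VACB number 1.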
There are three primary methods for accessing cached data, each intended for a specific situation:

■ The copy method copies user data between cache buffers in system space and a process buffer in user space.

■ The mapping and pinning method uses virtual addresses to read and write data directly from and to cache buffers.

■ The physical memory access method uses physical addresses to read and write data directly from and to cache buffers.

File system drivers must provide two versions of the file read operation—cached and noncached—to prevent an infinite loop when the memory manager processes a page fault. When the memory manager resolves a page fault by calling the file system to retrieve data from the file (via the device driver, of course), it must specify this noncached read operation by setting the "no cache" flag in the IRP.

Figure 10-10 illustrates the typical interactions between the cache manager, the memory manager, and file system drivers in response to user read or write file I/O. The cache manager is invoked by a file system through the copy interfaces (the CcCopyRead and CcCopyWrite paths). To process a CcFastCopyRead or CcCopyRead read, for example, the cache manager creates a view in the cache to map a portion of the file being read and reads the file data into the user buffer by copying from the view. The copy operation generates page faults as it accesses each previously invalid page in the view, and in response the memory manager initiates noncached I/O into the file system driver to retrieve the data corresponding to the part of the file mapped to the page that faulted.

The next three sections explain these cache access mechanisms, their purpose, and how they're used.

10.5.1 Copying to and from the Cache

Because the system cache is in system space, it is mapped into the address space of every process.
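The copy path and its noncached escape hatch can be modeled with a toy simulation. Everything here is invented for illustration (the real cached path goes through CcCopyRead, and fault resolution happens inside the memory manager): a cached read faults missing pages into a simulated cache by issuing noncached reads against the backing store, then copies the bytes out to the caller's buffer.

```python
# Hypothetical sketch of the copy method and the "no cache" escape hatch.
# All names are illustrative, not real kernel APIs. A miss simulates a
# page fault that is resolved with a *noncached* read from the backing
# store, avoiding infinite recursion into the cached path.

disk = b"the quick brown fox jumps over the lazy dog"  # backing store
cache = {}  # page number -> bytes (simulated system-cache views)
PAGE = 8    # toy page size

def noncached_read(page):
    """File system path with the IRP 'no cache' flag: go straight to disk."""
    return disk[page * PAGE:(page + 1) * PAGE]

def cached_copy_read(offset, length):
    """Copy-interface path: fault pages into the cache, then copy out."""
    out = b""
    for page in range(offset // PAGE, (offset + length - 1) // PAGE + 1):
        if page not in cache:                   # fault on an invalid page
            cache[page] = noncached_read(page)  # memory manager's noncached I/O
        out += cache[page]
    start = offset % PAGE
    return out[start:start + length]

print(cached_copy_read(4, 11))  # b'quick brown'
```

The structural point is that `noncached_read` never consults the cache; if it did, the fault taken inside `cached_copy_read` would recurse forever, which is exactly why the IRP "no cache" flag exists.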
As with all system space pages, however, cache pages aren't accessible from user mode because that would be a potential security hole. (For example, a process might not have the rights to read a file whose data is currently contained in some part of the system cache.) Thus, user application file reads and writes to cached files must be serviced by kernel-mode routines that copy data between the cache's buffers in system space and the application's buffers residing in the process address space. The functions that file system drivers can use to perform this operation are listed in Table 10-2.

You can examine read activity from the cache via the performance counters or per-processor variables stored in the processor's control block (KPRCB) listed in Table 10-3.

10.5.2 Caching with the Mapping and Pinning Interfaces

Just as user applications read and write data in files on a disk, file system drivers need to read and write the data that describes the files themselves (the metadata, or volume structure data). Because the file system drivers run in kernel mode, however, they could, if the cache manager were properly informed, modify data directly in the system cache. To permit this optimization, the cache manager provides the functions shown in Table 10-4. These functions permit the file system drivers to find where in virtual memory the file system metadata resides, thus allowing direct modification without the use of intermediary buffers.

If a file system driver needs to read file system metadata in the cache, it calls the cache manager's mapping interface to obtain the virtual address of the desired data. The cache manager touches all the requested pages to bring them into memory and then returns control to the file system driver. The file system driver can then access the data directly.
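A toy model of this direct metadata access may help. The class below is invented (the real interfaces are the cache manager functions of Table 10-4): mapping hands the driver a pointer into the cache, and pinning, described next, raises a reference count that keeps the page from being reclaimed until it is unpinned.

```python
# Illustrative model of the mapping/pinning idea (invented names, not the
# real cache manager API): a pinned page cannot be reclaimed, and a dirty
# page becomes flushable only once its pin count drops back to zero.

class CachePage:
    def __init__(self, data):
        self.data = bytearray(data)
        self.pin_count = 0
        self.dirty = False

    def pin(self):
        self.pin_count += 1
        return self.data          # direct access, no intermediate buffer

    def set_dirty(self):
        self.dirty = True         # tell the lazy writer to flush later

    def unpin(self):
        self.pin_count -= 1

    def reclaimable(self):
        return self.pin_count == 0

page = CachePage(b"MFT record")
buf = page.pin()
buf[0:3] = b"mft"                 # update metadata in place in the cache
page.set_dirty()
assert not page.reclaimable()     # pinned: must stay resident
page.unpin()
assert page.reclaimable() and page.dirty  # lazy writer may now flush it
```

Note how no buffer is ever allocated for the update itself; the metadata is modified where it already lives, which is the buffer-management benefit discussed below.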
If the file system driver needs to modify cache pages, it calls the cache manager's pinning services, which keep the pages active in virtual memory so that they cannot be reclaimed. The pages aren't actually locked into memory (such as when a device driver locks pages for direct memory access transfers). Most of the time, a file system driver will mark its metadata stream "no write", which instructs the memory manager's mapped page writer (explained in Chapter 9) to not write the pages to disk until explicitly told to do so. When the file system driver unpins (releases) them, the cache manager releases its resources so that it can lazily flush any changes to disk and release the cache view that the metadata occupied.

The mapping and pinning interfaces solve one thorny problem of implementing a file system: buffer management. Without directly manipulating cached metadata, a file system must predict the maximum number of buffers it will need when updating a volume's structure. By allowing the file system to access and update its metadata directly in the cache, the cache manager eliminates the need for buffers, simply updating the volume structure in the virtual memory the memory manager provides. The only limitation the file system encounters is the amount of available memory.

You can examine pinning and mapping activity in the cache via the performance counters or per-processor variables stored in the processor's control block (KPRCB) listed in Table 10-5.

10.5.3 Caching with the Direct Memory Access Interfaces

In addition to the mapping and pinning interfaces used to access metadata directly in the cache, the cache manager provides a third interface to cached data: direct memory access (DMA). The DMA functions are used to read from or write to cache pages without intervening buffers, such as when a network file system is doing a transfer over the network.
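A toy picture of what such a physical-address handoff involves (the page table here is invented for illustration; the real descriptor is the MDL discussed next): a buffer that is contiguous in virtual memory is described by the list of physical page frames backing it, which need not be contiguous at all.

```python
# Toy model of what a memory descriptor list captures: a virtual range
# plus the physical page frame number (PFN) of each page backing it.
# The page-table contents are made up for the example.

PAGE_SIZE = 4096
page_table = {0x10: 0x8A2, 0x11: 0x117, 0x12: 0x5F0}  # VPN -> PFN (fake)

def build_mdl(virtual_address, length):
    first_vpn = virtual_address // PAGE_SIZE
    last_vpn = (virtual_address + length - 1) // PAGE_SIZE
    return {
        "start_va": virtual_address,
        "byte_count": length,
        "pfns": [page_table[vpn] for vpn in range(first_vpn, last_vpn + 1)],
    }

# An 8-KB transfer starting 256 bytes into virtual page 0x10 spans 3 pages:
mdl = build_mdl(0x10 * PAGE_SIZE + 256, 8192)
print([hex(p) for p in mdl["pfns"]])  # ['0x8a2', '0x117', '0x5f0']
```

A network adapter given those frame numbers can move the data out of physical memory directly, with no intermediate copy.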
The DMA interface returns to the file system the physical addresses of cached user data (rather than the virtual addresses, which the mapping and pinning interfaces return), which can then be used to transfer data directly from physical memory to a network device. Although small amounts of data (1 KB to 2 KB) can use the usual buffer-based copying interfaces, for larger transfers the DMA interface can result in significant performance improvements for a network server processing file requests from remote systems. To describe these references to physical memory, a memory descriptor list (MDL) is used. (MDLs were introduced in Chapter 9.) The four separate functions described in Table 10-6 make up the cache manager's DMA interface.

You can examine MDL activity from the cache via the performance counters or per-processor variables stored in the processor's control block (KPRCB) listed in Table 10-7.

10.6 Fast I/O

Whenever possible, reads and writes to cached files are handled by a high-speed mechanism named fast I/O. Fast I/O is a means of reading or writing a cached file without going through the work of generating an IRP, as described in Chapter 7. With fast I/O, the I/O manager calls the file system driver's fast I/O routine to see whether I/O can be satisfied directly from the cache manager without generating an IRP. Because the cache manager is architected on top of the virtual memory subsystem, file system drivers can use the cache manager to access file data simply by copying to or from pages mapped to the actual file being referenced, without going through the overhead of generating an IRP.

Fast I/O doesn't always occur. For example, the first read or write to a file requires setting up the file for caching (mapping the file into the cache and setting up the cache data structures, as explained earlier in the section "Cache Data Structures").
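The eligibility test that the file system driver's fast I/O routine performs, which the following paragraphs and Figure 10-11 describe, boils down to a few checks; a sketch with invented names (the real routine is supplied per file system driver):

```python
# Sketch of the fast I/O eligibility check (illustrative names only).
# An IRP is built only when the fast path is refused.

def overlaps(a_start, a_len, b_start, b_len):
    return a_start < b_start + b_len and b_start < a_start + a_len

def can_use_fast_io(is_cached, is_synchronous, locked_ranges, offset, length):
    if not is_cached or not is_synchronous:
        return False                 # uncached file or async I/O: build an IRP
    for (lock_start, lock_len) in locked_ranges:
        if overlaps(offset, length, lock_start, lock_len):
            return False             # locked byte range: validate via an IRP
    return True

assert can_use_fast_io(True, True, [], 0, 512)
assert not can_use_fast_io(True, False, [], 0, 512)          # asynchronous
assert not can_use_fast_io(True, True, [(100, 50)], 120, 10) # locked range
```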
Also, if the caller specified an asynchronous read or write, fast I/O isn't used because the caller might be stalled during paging I/O operations required to satisfy the buffer copy to or from the system cache, and the operation wouldn't really be asynchronous. But even on a synchronous I/O, the file system driver might decide that it can't process the I/O operation by using the fast I/O mechanism; for example, the file in question might have a locked range of bytes (as a result of calls to the Windows LockFile and UnlockFile functions). Because the cache manager doesn't know what parts of which files are locked, the file system driver must check the validity of the read or write, which requires generating an IRP. The decision tree for fast I/O is shown in Figure 10-11.

These steps are involved in servicing a read or a write with fast I/O:

1. A thread performs a read or write operation.

2. If the file is cached and the I/O is synchronous, the request passes to the fast I/O entry point of the file system driver stack. If the file isn't cached, the file system driver sets up the file for caching so that the next time, fast I/O can be used to satisfy a read or write request.

3. If the file system driver's fast I/O routine determines that fast I/O is possible, it calls the cache manager's read or write routine to access the file data directly in the cache. (If fast I/O isn't possible, the file system driver returns to the I/O system, which then generates an IRP for the I/O and eventually calls the file system's regular read routine.)

4. The cache manager translates the supplied file offset into a virtual address in the cache.

5. For reads, the cache manager copies the data from the cache into the buffer of the process requesting it; for writes, it copies the data from the buffer to the cache.

6. One of the following actions occurs:

   ❏ For reads where FILE_FLAG_RANDOM_ACCESS wasn't specified when the file was opened, the read-ahead information in the caller's private cache map is updated, and read-ahead may be queued for the file.

   ❏ For writes, the dirty bit of any modified page in the cache is set so that the lazy writer will know to flush it to disk.

   ❏ For write-through files, any modifications are flushed to disk.

The performance counters or per-processor variables stored in the processor's control block (KPRCB) listed in Table 10-8 can be used to determine the fast I/O activity on the system.

10.7 Read-Ahead and Write-Behind

In this section, you'll see how the cache manager implements reading and writing file data on behalf of file system drivers. Keep in mind that the cache manager is involved in file I/O only when a file is opened without the FILE_FLAG_NO_BUFFERING flag and then read from or written to using the Windows I/O functions (for example, the ReadFile and WriteFile functions). Mapped files don't go through the cache manager, nor do files opened with the FILE_FLAG_NO_BUFFERING flag set.

Note: When an application uses the FILE_FLAG_NO_BUFFERING flag to open a file, its file I/O must start at device-aligned offsets and be of sizes that are a multiple of the alignment size; its input and output buffers must also be device-aligned virtual addresses. For file systems, this usually corresponds to the sector size (512 bytes on NTFS, typically, and 2,048 bytes on CDFS). One of the benefits of the cache manager, apart from the actual caching performance, is the fact that it performs intermediate buffering to allow arbitrarily aligned and sized I/O.
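The alignment rules from the Note above reduce to three divisibility checks; a small sketch:

```python
# Sketch of the alignment rules FILE_FLAG_NO_BUFFERING imposes: the file
# offset, the transfer size, and the buffer address must all be
# multiples of the device alignment (typically the sector size).

def noncached_io_valid(offset, length, buffer_address, sector_size=512):
    return (offset % sector_size == 0 and
            length % sector_size == 0 and
            buffer_address % sector_size == 0)

assert noncached_io_valid(0, 4096, 0x20000)
assert not noncached_io_valid(100, 4096, 0x20000)  # misaligned offset
assert not noncached_io_valid(0, 1000, 0x20000)    # size not a sector multiple
assert noncached_io_valid(2048, 2048, 0x20000, sector_size=2048)  # CDFS-style
```

With buffered (cached) I/O, none of these checks apply, because the cache manager's intermediate buffering absorbs the misalignment.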
10.7.1 Intelligent Read-Ahead

The cache manager uses the principle of spatial locality to perform intelligent read-ahead by predicting what data the calling process is likely to read next based on the data that it is reading currently. Because the system cache is based on virtual addresses, which are contiguous for a particular file, it doesn't matter whether they're juxtaposed in physical memory. File read-ahead for logical block caching is more complex and requires tight cooperation between file system drivers and the block cache because that cache system is based on the relative positions of the accessed data on the disk, and, of course, files aren't necessarily stored contiguously on disk. You can examine read-ahead activity by using the Cache: Read Aheads/sec performance counter or the CcReadAheadIos system variable.

Reading the next block of a file that is being accessed sequentially provides an obvious performance improvement, with the disadvantage that it will cause head seeks. To extend read-ahead benefits to cases of strided data accesses (both forward and backward through a file), the cache manager maintains a history of the last two read requests in the private cache map for the file handle being accessed, a method known as asynchronous read-ahead with history. If a pattern can be determined from the caller's apparently random reads, the cache manager extrapolates it. For example, if the caller reads page 4000 and then page 3000, the cache manager assumes that the next page the caller will require is page 2000 and prereads it.

Note: Although a caller must issue a minimum of three read operations to establish a predictable sequence, only two are stored in the private cache map.

To make read-ahead even more efficient, the Win32 CreateFile function provides a flag indicating forward sequential file access: FILE_FLAG_SEQUENTIAL_SCAN.
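The history-based extrapolation can be sketched in a few lines: with the last two read positions from the private cache map, a constant stride, forward or backward, predicts the next read. (This is a simplification; the cache manager's actual heuristics are internal.)

```python
# Sketch of asynchronous read-ahead with history: the private cache map
# records the last two read positions; a consistent stride between them
# is extrapolated to pick the next offset to preread.

def predict_next(prev, last):
    """Given the last two read offsets, extrapolate the next one."""
    stride = last - prev
    return last + stride

# The example from the text: reads at page 4000 and then page 3000
# lead the cache manager to preread page 2000.
assert predict_next(4000, 3000) == 2000
assert predict_next(1000, 1004) == 1008   # forward strides work too
```

The FILE_FLAG_SEQUENTIAL_SCAN flag just mentioned bypasses this history tracking entirely in favor of plain sequential read-ahead.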
If this flag is set, the cache manager doesn't keep a read history for the caller for prediction but instead performs sequential read-ahead. However, as the file is read into the cache's working set, the cache manager unmaps views of the file that are no longer active and, if they are unmodified, directs the memory manager to place the pages belonging to the unmapped views at the front of the standby list so that they will be quickly reused. It also reads ahead two times as much data (2 MB instead of 1 MB, for example). As the caller continues reading, the cache manager prereads additional blocks of data, always staying about one read (of the size of the current read) ahead of the caller.

The cache manager's read-ahead is asynchronous because it is performed in a thread separate from the caller's thread and proceeds concurrently with the caller's execution. When called to retrieve cached data, the cache manager first accesses the requested virtual page to satisfy the request and then queues an additional I/O request to retrieve additional data to a system worker thread. The worker thread then executes in the background, reading additional data in anticipation of the caller's next read request. The preread pages are faulted into memory while the program continues executing so that when the caller requests the data, it's already in memory.

For applications that have no predictable read pattern, the FILE_FLAG_RANDOM_ACCESS flag can be specified when the CreateFile function is called. This flag instructs the cache manager not to attempt to predict where the application is reading next and thus disables read-ahead. The flag also stops the cache manager from aggressively unmapping views of the file as the file is accessed, so as to minimize the mapping/unmapping activity for the file when the application revisits portions of the file.

10.7.2 Write-Back Caching and Lazy Writing

The cache manager implements a write-back cache with lazy write.
This means that data written to files is first stored in memory in cache pages and then written to disk later. Thus, write operations are allowed to accumulate for a short time and are then flushed to disk all at once, reducing the overall number of disk I/O operations. The cache manager must explicitly call the memory manager to flush cache pages because otherwise the memory manager writes memory contents to disk only when demand for physical memory exceeds supply, as is appropriate for volatile data. Cached file data, however, represents nonvolatile disk data. If a process modifies cached data, the user expects the contents to be reflected on disk in a timely manner.

Posted: 15/12/2013, 11:15
