Chapter 14: Advanced Disk Management

Terms You Need to Understand
✓ Virtual file systems
✓ Striping and concatenation
✓ Mirroring
✓ Redundant Arrays of Inexpensive Disks (RAID)
✓ Solaris Volume Manager

Concepts You Need to Master
✓ Identifying RAID levels
✓ Identifying characteristics of RAID levels
✓ Understanding features of the Solaris Volume Manager

Introduction

This chapter describes some of the advantages of virtual disk management systems and briefly describes some of the techniques used by these systems to improve availability. These include duplexing, mirroring, logging, and disk arrays. The features and key concepts of the virtual disk management system included with Solaris 9, the Solaris Volume Manager, are also described.

Virtual File Systems

Virtual disk management systems allow physical disks to be used in ways that are not supported by the standard Solaris file systems. This section summarizes these advantages and describes several techniques used to provide virtual file systems.

Advantages of Virtual Disk Management Systems

A virtual disk management system can overcome disk capacity and architecture limitations and improve performance and availability. In addition, manageability is enhanced by the use of a graphical management tool.

The three main storage factors are performance, availability, and hardware costs. A virtual disk management system allows managing the tradeoffs between these three factors and in some cases reduces their impact.

Overcoming Disk Limitations

Virtual disk management systems allow partitions on multiple disks to be combined and treated as a single partition. Not only does this allow file systems to be larger than the largest available physical disk, it also allows the entire storage area on a physical disk to be used.
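To make the capacity point concrete, here is a minimal sketch (the slice sizes are hypothetical) showing that a device built from several leftover slices can exceed the size of any single slice:

```shell
# Sum of member slice sizes (hypothetical, in megabytes): the
# resulting virtual device is larger than any single member slice.
total=0
for size in 400 250 350; do
  total=$(( total + size ))
done
echo "combined capacity: ${total} MB"
```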
Improved Performance

Typically, using multiple disks in place of a single disk increases performance by spreading the load across several physical disks.

Improved Availability

Virtual disk management systems typically support data-redundancy or high-availability features, such as mirroring and Redundant Arrays of Inexpensive Disks (RAID) configurations. In addition, features such as hot sparing and file system logging reduce recovery time.

Enhanced Manageability

Virtual disk management systems typically provide a simple graphical user interface for disk management and administration. This allows simple, almost error-free administration of complex disk configurations and file systems. The graphical interface usually includes a visual representation of the physical disks and virtual disk file systems. In addition, it typically supports drag-and-drop capabilities that allow quick configuration changes.

Concatenated Virtual Devices

Unlike the standard Solaris file system, which consists of a single partition (slice), a concatenated virtual file system consists of two or more slices. The slices can be on the same physical disk or on several physical disks, and they can be of different sizes.

Concatenation implies that the slices are addressed in a sequential manner. That is, as space is needed, it is allocated from the first slice in the concatenation. Once this space is completely used, space is allocated from the second slice, and so on.

The main advantage of a concatenated virtual device is that it provides a means for using slices that might otherwise be too small to be useful. In addition, concatenation using more than one physical disk provides some load balancing between the disks and can result in head-movement optimization. However, using multiple disks increases the chance that the failure of any one disk will cause the failure of the virtual device.
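The sequential allocation just described can be sketched as a mapping from a logical block number to a member slice and an offset within it. This is an illustrative model only (the slice sizes are hypothetical block counts), not SVM's actual implementation:

```shell
# Illustrative model of concatenated addressing: walk the member
# slices in order until the logical block number falls inside one.
concat_map() {
  n=$1
  i=0
  for size in 100 50 200; do        # three member slices, in order
    if [ "$n" -lt "$size" ]; then
      echo "slice $i offset $n"
      return
    fi
    n=$(( n - size ))
    i=$(( i + 1 ))
  done
  echo "beyond end of volume"
}

concat_map 30     # falls inside the first slice
concat_map 120    # first slice (100 blocks) is full, so slice 1 is used
```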
Striped Virtual Devices

A striped virtual device, like a concatenated virtual device, can consist of two or more slices. The slices can be on the same physical disk or on several physical disks, and they can be of different sizes.

Unlike a concatenated device, the slices are addressed in an interleaved manner. That is, as space is needed, it is allocated as a block from the first slice, then a block from the second slice, and so on.

The main advantage of a striped device is that when the slices are on several physical disks, performance increases because multiple simultaneous reads and writes are possible: each physical disk in the striped virtual device can be accessed at the same time. In addition, like concatenated virtual devices, it provides a means for using slices that might otherwise be too small to be useful. As with concatenated virtual devices, using multiple disks increases the chance that the failure of any one disk will cause the failure of the virtual device.

A concatenated striped virtual device is a striped virtual device that has been expanded by concatenating additional slices to the end of the device.

Mirroring and Duplexing

Mirroring is the technique of copying data being written to an online device to an offline device. This provides a real-time backup of the data that can be brought online to replace the original device in the event that the original device fails. Typically, the two disks used in this way share the same controller.

Duplexing is similar to mirroring, except that each disk has its own controller. This provides a little more redundancy by eliminating the controller as a single point of failure.

RAID Configurations

One approach to improving data availability is to arrange disks in various configurations, known as Redundant Arrays of Inexpensive Disks (RAID). Table 14.1 lists the levels of RAID.
Table 14.1 RAID Levels

Level  Description
0      Striping or concatenation
1      Mirroring and duplexing
0+1    Striping then mirroring (striped disks that are mirrored)
2      Hamming Error Code Correction (ECC), used to detect and correct errors
3      Bit-interleaved striping with parity information (separate disk for parity)
4      Block-interleaved striping with parity information (separate disk for parity)
5      Block-interleaved striping with distributed parity information
6      Block-interleaved striping with two independent distributed parity schemes
7      Block-interleaved striping with asynchronous I/O (input/output) transfers and distributed parity information
1+0    Mirroring then striping (mirrored disks that are striped)

Virtual disk management systems implement one or more of these RAID levels, but typically not all of them. The commonly supported RAID levels are 0, 1, and 5:

➤ RAID 0 (striping and concatenation) does not provide data redundancy, but typically has the best performance. Striping improves read/write performance because data is evenly distributed across multiple disks. Concatenation works best in environments that require small random I/O; striping performs well in large sequential or random I/O environments.

➤ RAID 1 (mirroring) provides data redundancy and typically improves read performance, but writes are typically slower.

➤ RAID 5 is typically slower than RAID 1 for both reads and writes, but its cost is lower. Because multiple I/O operations are required to compute and store the parity, RAID 5 is slower on write operations than striping (RAID 0).

UFS File System Logging

With UFS file system logging, updates to a UFS file system are recorded in a log before they are applied. After a system failure, the system can be restarted and the UFS file system can quickly be made consistent from the log, instead of having to run the fsck command.
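Returning to the RAID levels in Table 14.1: the distributed parity behind RAID 5 is, at its core, an XOR across the data blocks of a stripe, which is what allows a lost block to be regenerated. A minimal sketch with hypothetical byte values:

```shell
# RAID 5 keeps, for each stripe, a parity block equal to the XOR of
# the data blocks. The values below are hypothetical single bytes.
d0=165
d1=60
d2=15
parity=$(( d0 ^ d1 ^ d2 ))

# If the disk holding d1 fails, XOR-ing the surviving data blocks
# with the parity block regenerates the lost value.
rebuilt=$(( d0 ^ d2 ^ parity ))
echo "lost=$d1 rebuilt=$rebuilt"    # the two values match
```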
The fsck command is a time-consuming and not always 100% accurate method of recovering a file system. It reads and verifies the information that defines the file system. If the system crashed during an update, the update might have been only partially completed, and fsck must correct the information by removing these partial updates. With UFS file system logging, only logged updates are actually applied to the file system. If the system crashes, the log records what should be complete and can be used to quickly make the file system consistent.

Solaris Volume Manager (SVM)

The Solaris Volume Manager (previously known as Solstice DiskSuite) is a software product that can be used to increase storage capacity and data availability and, in some cases, to increase performance.

SVM supports four types of related storage components: volumes, state databases (along with their replicas), hot spare pools, and disk sets. Table 14.2 provides a summary of these components, which are described in more detail in the next sections of this chapter.

Table 14.2 Solaris Volume Manager Components

Component       Description
Volumes         A collection of physical disk slices or partitions that are managed as a single logical device.
State Database  A database used to store information on the SVM configuration. Replicas of the database are used to provide redundancy.
Hot Spare Pool  A collection of slices that can be used as hot spares to automatically replace failed slices.
Disk Set        A set of volumes and hot spares that can be shared by several host systems.

Volumes

SVM uses virtual disks, called volumes (previously referred to as metadevices), to manage the physical disks. A volume is a collection of one or more physical disk slices or partitions that appears as, and can be treated as, a single logical device. The basic disk management commands, except the format command, can be used with volumes.
In general, volumes can be thought of as slices or partitions. Like the standard Solaris file systems, which are accessible using raw (/dev/rdsk) and block (/dev/dsk) logical device names, volumes under SVM are accessed using either the raw or the block device name under /dev/md/rdsk or /dev/md/dsk. The volume (that is, the partition) name begins with the letter "d" followed by a number. For example, /dev/md/dsk/d0 is block volume d0.

Because a volume can include slices from more than one physical disk, it can be used to create a file system that is larger than the largest available physical disk. Volumes can include IPI, SCSI, and SPARCStorage Array drives. Disk slices that are too small to be of any use on their own can be combined to create usable storage.

SVM supports five classes of volumes, listed in Table 14.3.

Table 14.3 Classes of SVM Volumes

Volume          Description
RAID 0          Used directly or as a building block for mirror and transactional volumes (the three types of RAID 0 volumes are stripes, concatenations, and concatenated stripes)
RAID 1          Used to mirror data between RAID 0 volumes to provide redundancy
RAID 5          Used to replicate data with parity, allowing regeneration of data
Transactional   Used for UFS file system logging
Soft Partition  Used to divide a disk slice into one or more smaller slices

SVM allows volumes to be dynamically expanded by adding additional slices. A UFS file system on such a volume can then also be expanded.

Soft Partitions

As disks become larger, it sometimes might be necessary to subdivide a physical disk into more than eight slices (the current limit). With SVM, a disk slice can be subdivided into as many soft partitions as needed. A soft partition can be accessed like any disk slice and can be included in an SVM volume.
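As an illustration of the d-number naming and volume creation described above, the following hypothetical session creates a two-slice stripe with metainit and puts a UFS file system on it. The slice names are assumptions, and the commands are only meaningful on a Solaris system running SVM:

```shell
# Hypothetical: create striped volume d0 as 1 stripe of 2 slices
# with a 32-Kbyte interlace (slice names are made up).
metainit d0 1 2 c0t0d0s7 c0t1d0s7 -i 32k

# Put a UFS file system on the raw volume device, then mount the
# block device, just as with an ordinary slice.
newfs /dev/md/rdsk/d0
mount /dev/md/dsk/d0 /export/data

# Display the status of the new volume.
metastat d0
```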
Although a soft partition appears as a contiguous portion of a disk, it actually consists of a set of extents that could be located in various areas of the disk.

State Database and Replicas

The State Database is used to store information about the SVM configuration. Because this information is critical, copies of the State Database, referred to as State Database Replicas, are maintained as backups to ensure that the state information is always accurate and accessible. SVM updates the State Database and its replicas whenever changes are made to the disk configuration.

The State Database and its replicas can be stored either on disk slices dedicated to database use or on slices that are part of volumes. When a slice that contains the database or a replica is added to a volume, SVM recognizes this configuration and skips over the database or replica; the replica remains accessible and usable. The database and one or more replicas can be stored on the same slice; however, it is advisable to distribute the replicas across several slices to safeguard against the failure of a single slice.

Hot Spare Pools

A hot spare pool is a collection of slices that are automatically substituted for slices that fail. When a disk error occurs, SVM locates a hot spare (slice) in the hot spare pool that is at least the size of the failing slice and allocates the hot spare as a replacement. Assuming a RAID 1 (mirrored) or RAID 5 configuration, the data from the failed slice is copied to its replacement from the mirrored data or rebuilt using parity information.

Disk Sets

A disk set, or shared disk set, is the combination of volumes with a State Database, one or more State Database Replicas, and hot spares that can be shared between two or more host systems. However, only one host system can use the disk set at a time.
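The spare-selection rule described under "Hot Spare Pools" above (find a spare at least as large as the failed slice) can be modeled as a first-fit search. The pool contents here are hypothetical sizes in megabytes, not SVM's actual implementation:

```shell
# First-fit model of hot spare selection: scan the pool in order and
# take the first spare at least as large as the failed slice.
pick_spare() {
  failed=$1
  for spare in 100 250 500; do      # hypothetical pool, in order
    if [ "$spare" -ge "$failed" ]; then
      echo "$spare"
      return
    fi
  done
  echo "none"
}

pick_spare 200    # the 100 MB spare is too small, so 250 MB is used
```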
The purpose of the disk set is to provide data availability for a cluster of host systems. That is, if one host system fails, another host in the cluster can take over its operations. The disk set provides a means for all the clustered host systems to share and access the same data. This is referred to as a failover configuration.

Administration of SVM Objects

Two methods are available to manage SVM objects. The first is the Enhanced Storage Tool, which provides a graphical user interface and is accessible via the Solaris Management Console. The second is a set of commands referred to collectively as the SVM command-line interface.

Enhanced Storage Tool

The Enhanced Storage Tool is used to set up and administer the SVM configuration. It provides a graphical view of both the SVM objects and the underlying physical disks. The SVM configuration can be modified quickly by using drag-and-drop manipulation.

SVM Command-Line Interface

The command-line interface provides a set of commands to create and manage SVM objects. Most of these commands start with the prefix "meta", such as the metainit command used to initialize a volume.
The command-line interface includes the following commands:

➤ growfs(1M)—Expands a UFS file system
➤ metaclear(1M)—Deletes volumes and hot spare pools
➤ metadb(1M)—Creates and deletes state database replicas
➤ metadetach(1M)—Detaches a submirror from a RAID 1 (mirror) volume
➤ metadevadm(1M)—Checks device ID configuration
➤ metahs(1M)—Manages hot spares and hot spare pools
➤ metainit(1M)—Configures volumes
➤ metaoffline(1M)—Places submirrors offline
➤ metaonline(1M)—Places submirrors online
➤ metaparam(1M)—Modifies volume parameters
➤ metarecover(1M)—Recovers configuration information for soft partitions
➤ metarename(1M)—Renames volumes
➤ metareplace(1M)—Replaces slices of submirrors and RAID 5 volumes
➤ metaroot(1M)—Sets up files for mirroring the root (/) file system
➤ metaset(1M)—Administers disk sets
➤ metastat(1M)—Displays the status of volumes or hot spare pools
➤ metasync(1M)—Resynchronizes volumes during reboot
➤ metattach(1M)—Attaches a metadevice (submirror) to a mirror

Summary

A virtual disk management system can overcome disk capacity and architecture limitations and improve performance and availability. In addition, manageability is enhanced by the use of a graphical management tool. The three main storage factors are performance, availability, and hardware costs. A virtual disk management system allows managing the tradeoffs between these three factors and in some cases reduces their impact.

The techniques used to improve performance and/or availability include concatenation, striping, mirroring, duplexing, and the different levels of RAID.

The Solaris Volume Manager (SVM) manages volumes (collections of physical disk slices) along with a State Database that stores information on the SVM configuration and one or more replicas of it (to provide redundancy). A Hot Spare Pool is used to automatically replace failed disk slices.
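The SVM commands listed earlier combine into complete workflows. For example, here is a hypothetical session that builds a two-way RAID 1 mirror with metainit and metattach. The slice and volume names are assumptions, and the commands apply only on a Solaris system running SVM:

```shell
# Hypothetical: two single-slice concatenations to act as submirrors
# (slice names are made up).
metainit d11 1 1 c0t0d0s0
metainit d12 1 1 c1t0d0s0

# Create a one-way mirror from d11, then attach d12 as the second
# submirror; SVM resynchronizes the data in the background.
metainit d10 -m d11
metattach d10 d12

# Monitor the resync and overall mirror status.
metastat d10
```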
An SVM Disk Set (volumes, State Database, Replicas, and Hot Spare Pool) can be shared by several host systems.

Exam Prep Practice Questions

Question 1

Which of the following virtual devices or RAID levels does the Volume Manager support? [Select all that apply.]

❑ A. Concatenated virtual device
❑ B. RAID level 0
❑ C. RAID level 1
❑ D. RAID level 5
❑ E. RAID 10

All the answers are correct.

Question 2

Which of the following are types of virtual file systems? [Select all that apply.]

❑ A. Concatenated
❑ B. Aggregated
❑ C. Sliced
❑ D. Monolithic
❑ E. Striped

The correct answers are A and E. The other answers do not relate to virtual file systems.

Question 3

Identify the prefix used with most of the commands associated with the SVM command-line interface.

The correct answer is meta.

Need to Know More?

Sun Microsystems, Solaris Volume Manager: Administration Guide. Available in printed form and on the Web at docs.sun.com.