Module 7

Configuring AutoFS

Objectives

The AutoFS file system provides a mechanism for automatically mounting NFS file systems on demand, and for automatically unmounting these file systems after a predetermined period of inactivity. The mount points are specified using local or distributed automount maps.

Upon completion of this module, you should be able to:

- Describe the fundamentals of the AutoFS file system
- Use automount maps

The following course map shows how this module fits into the current instructional goal.

Figure 7-1  Course Map (Managing Virtual File Systems and Core Dumps: Managing Swap Configuration, Managing Crash Dumps and Core Files, Configuring NFS, Configuring AutoFS)

Copyright 2002 Sun Microsystems, Inc. All Rights Reserved. Enterprise Services, Revision A

Introducing the Fundamentals of AutoFS

AutoFS is a file system mechanism that provides automatic mounting using the NFS protocol. AutoFS is a client-side service. The AutoFS file system is initialized by the /etc/rc2.d/S74autofs script, which runs automatically when a system is booted. This script runs the automount command, which reads the AutoFS configuration files and also starts the automount daemon, automountd.

The automountd daemon runs continuously, mounting and unmounting remote directories on an as-needed basis. Whenever a user on a client computer running the automountd daemon tries to access a remote file or directory, the daemon mounts the remote file system to which that file or directory belongs. This remote file system remains mounted for as long as it is needed. If the remote file system is not accessed for a defined period of time, the automountd daemon automatically unmounts it.

The AutoFS service mounts and unmounts file systems as required, without any user intervention. The user does not need to use the mount and umount commands, and does not need to know the superuser password. The AutoFS file system enables you to do the following:
- Mount file systems on demand
- Unmount file systems automatically
- Create multiple mount resources for read/write or read-only file systems
- Centralize the administration of AutoFS mounts through the use of a name service, which can dramatically reduce administration overhead

Advanced System Administration for the Solaris™ Operating Environment

The automount facility contains three components:

- The AutoFS file system
- The automountd daemon
- The automount command

Figure 7-2  The AutoFS Features (the automount command, run as automount -v, and the automountd daemon operate in RAM, driven by the automount maps: master map, direct map, indirect map, and special map)

AutoFS File System

An AutoFS file system's mount points are defined in the automount maps on the client system. After the AutoFS mount points are set up, activity under those mount points can trigger file systems to be mounted under them. If the automount maps are configured, the AutoFS kernel module monitors mount requests made on the client. If a mount request is made for an AutoFS resource that is not currently mounted, the AutoFS service calls the automountd daemon, which mounts the requested resource.

The automountd Daemon

The /etc/rc2.d/S74autofs script starts the automountd daemon at boot time. The automountd daemon mounts file systems on demand and unmounts idle mount points.

Note – The automountd daemon is completely independent of the automount command. Because of this separation, you can add, delete, or change map information without having to stop and start the automountd daemon process.

The automount Command

The automount command, called at system startup time, reads the master map to create the initial set of AutoFS mounts. These AutoFS mounts are not automatically mounted at startup
time; they are the points under which file systems are mounted on demand.

Using Automount Maps

The file system resources for automatic mounting are defined in automount maps. Figure 7-3 shows maps defined in the /etc directory of an NFS client named venues.

Figure 7-3  Configuring AutoFS Mount Points

The figure shows an /etc/auto_master file with the entries:

    /net     -hosts         [options]
    /home    auto_home      [options]
    /-       auto_direct    [options]

an /etc/auto_direct file with the entry:

    /opt/moreapps    pluto:/export/opt/apps

and an /etc/auto_home file with the entries:

    ernie    mars:/export/home/ernie
    mary     mars:/export/home/mary

The AutoFS map types are:

- Master map – Lists the other maps used for establishing the AutoFS file system. The automount command reads this map at boot time.
- Direct map – Lists the mount points as absolute path names. This map explicitly indicates the mount point on the client.
- Indirect map – Lists the mount points as relative path names. This map uses a relative path to establish the mount point on the client.
- Special map – Provides access to NFS servers by using their host names.

The automount maps can be obtained from ASCII data files, NIS maps, NIS+ tables, or an LDAP database. Together, these maps describe information similar to the information specified in the /etc/vfstab file for remote file resources.

The source for automount maps is determined by the automount entry in the /etc/nsswitch.conf file. For example, the entry:

    automount: files

tells the automount command that it should look in the /etc directory for its configuration information. Using nis instead of files tells automount to check the NIS maps for its configuration information.

Configuring the Master Map

The auto_master map associates a directory, also called a mount point, with a map.
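The automount source configured in /etc/nsswitch.conf can be inspected with standard tools. The following is a minimal, portable sketch; the temporary file and its sample contents are illustrative stand-ins for a live /etc/nsswitch.conf:

```shell
# Sketch: extract the source list for the automount database from an
# nsswitch.conf-style file. The sample file below is illustrative; on a
# live system you would read /etc/nsswitch.conf directly.
tmpconf=$(mktemp)
cat > "$tmpconf" <<'EOF'
# /etc/nsswitch.conf (excerpt)
hosts:      files dns
automount:  files nis
EOF

# Print only the sources configured for automount, stripping the
# database keyword, any trailing comment, and surrounding whitespace.
awk -F: '/^automount:/ {
    sub(/#.*/, "", $2)
    gsub(/^[ \t]+|[ \t]+$/, "", $2)
    print $2
}' "$tmpconf"
# prints: files nis

rm -f "$tmpconf"
```

With this entry, automount consults the local /etc files first and falls back to the NIS maps.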
The auto_master map is a master list specifying all the maps that the AutoFS service should check. Names of direct and indirect maps listed in this map refer to files in the /etc directory or to name service databases.

Associating a Mount Point With a Map

The following example shows an /etc/auto_master file:

    # cat /etc/auto_master
    # Master map for automounter
    #
    +auto_master
    /net     -hosts       -nosuid,nobrowse
    /home    auto_home    -nobrowse
    /xfn     -xfn

The general syntax for each entry in the auto_master map is:

    mount_point  map_name  mount_options

where:

- mount_point – The full path name of a directory. If the directory does not exist, the AutoFS service creates one, if possible.
- map_name – The name of a direct or indirect map. These maps provide mounting information. A relative path name in this field requires AutoFS to consult the /etc/nsswitch.conf file for the location of the map.
- mount_options – The general options for the map. The mount options are similar to those used for standard NFS mounts. However, the nobrowse option is an AutoFS-specific mount option.

Note – The plus (+) symbol at the beginning of the +auto_master line in this file directs the automountd daemon to look at the NIS, NIS+, or LDAP databases before it reads the rest of the map. If this line is commented out, only the local files are searched, unless the /etc/nsswitch.conf file specifies that NIS, NIS+, or LDAP should be searched.

Identifying Mount Points for Special Maps

There are two mount points for special maps listed in the default /etc/auto_master file:

    # cat /etc/auto_master
    # Master map for automounter
    #
    +auto_master
    /net     -hosts       -nosuid,nobrowse
    /home    auto_home    -nobrowse
    /xfn     -xfn
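Entries in an auto_master-style file can be picked apart with awk. The following is a minimal sketch, assuming the three-field entry format described earlier; the sample map mirrors the default listing:

```shell
# Sketch: look up which map serves a given mount point in an
# auto_master-style file. The sample contents mirror the default map.
master=$(mktemp)
cat > "$master" <<'EOF'
# Master map for automounter
#
+auto_master
/net    -hosts      -nosuid,nobrowse
/home   auto_home   -nobrowse
/xfn    -xfn
EOF

# Fields are: mount-point  map-name  [mount-options]
# Comment lines and the +auto_master include line do not match /home.
awk '$1 == "/home" { print $2 }' "$master"
# prints: auto_home

rm -f "$master"
```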
The two mount points for special maps are:

- The -hosts map – Provides access to all resources shared by NFS servers. The resources being shared by a server are mounted below the /net/hostname directory or, if only the server's IP address is known, below the /net/IPaddress directory. The server does not have to be listed in the hosts database for this mechanism to work.
- The -xfn map – Provides access to resources available through the Federated Naming Service (FNS). Resources associated with FNS mount below the /xfn directory.

Note – The -xfn map provides access to legacy FNS resources. Support for FNS is scheduled to cease with this release of the Solaris OE.

Using the /net Directory

Shared resources associated with the -hosts map entry are mounted below the /net/hostname directory. For example, a shared resource named /documentation on host sys42 is mounted by the command:

    # cd /net/sys42/documentation

Using the cd command to trigger the automounting of sys42's resource eliminates the need to log in to the system. Any user can mount the resource by executing the command to change to the directory that contains the shared resource. The resource remains mounted until a predetermined period of inactivity has occurred.

Adding Direct Map Entries

The /- entry in the example master map defines a mount point for direct maps:

    # cat /etc/auto_master
    # Master map for automounter
    #
    +auto_master
    /net     -hosts         -nosuid,nobrowse
    /home    auto_home      -nobrowse
    /xfn     -xfn
    /-       auto_direct    -ro

The /- mount point is a pointer that informs the automount facility that the full path names are defined in the file specified by map_name (the /etc/auto_direct file in this example).

Note – The /- entry is not an entry in the default master map. This entry has been added here as an example. The other entries in this example already exist in the auto_master file. Even though the
map_name entry is specified as auto_direct, the automount facility automatically searches for all map-related files in the /etc directory; therefore, based upon the automount entry in the /etc/nsswitch.conf file, the auto_direct file is the /etc/auto_direct file.

Note – An NIS or NIS+ master map can have only one direct map entry. A master map that is a local file can have any number of entries.

Creating a Direct Map

Direct maps specify the absolute path name of the mount point, the specific options for this mount, and the shared resource to mount. For example:

    # cat /etc/auto_direct
    # Superuser-created direct map for automounter
    #
    /apps/frame       -ro,soft    server1:/export/framemaker,v5.5.6
    /opt/local        -ro,soft    server2:/export/unbundled
    /usr/share/man    -ro,soft    server3,server4,server5:/usr/share/man

The syntax for direct maps is:

    key  [ mount-options ]  location

where:

- key – The full path name of the mount point for the direct map.
- mount-options – The specific options for a given entry.
- location – The location of the file resource, specified in server:pathname notation.

Building a Mirror of the Root (/) File System

Creating a RAID Volume

The first step when building a mirror of the root (/) file system is to create RAID-0 volumes, which you will later combine to form the mirror. Each RAID-0 volume becomes a submirror of the mirror. Use the metainit command to force the creation of the RAID-0 volume. The force (-f) option must be used because this is the root (/) file system, which cannot be unmounted.

The syntax of the metainit command is:

    metainit -f concat/stripe numstripes width component

where:

- -f – Forces the metainit command to continue, even if one of the slices contains a mounted file system or is being used as swap space. This option is useful when configuring mirrors or concatenations on the root (/), swap, and /usr file systems.
- concat/stripe – Specifies the volume name of the concatenation or stripe being defined.
- numstripes – Specifies the number of individual stripes in the metadevice. For a simple stripe, numstripes is always 1. For a concatenation, numstripes is equal to the number of slices.
- width – Specifies the number of slices that make up a stripe. When the width is greater than 1, the slices are striped.
- component – Specifies the logical name for the physical slice (partition) on a disk drive, such as /dev/dsk/c0t0d0s1.

The following example shows how to use the metainit command to create a RAID-0 volume:

    # /usr/sbin/metainit -f d11 1 1 c0t0d0s0
    d11: Concat/Stripe is setup

Caution – If encapsulating an existing file system in a RAID-0 volume, both the numstripes and width arguments must be 1, or the data will be lost.

The command line forces the creation of volume d11. Volume d11 is a concatenation composed of a single stripe, one slice wide, stored on the /dev/dsk/c0t0d0s0 disk slice.

Note – In this example, the root (/) file system is stored on the disk slice /dev/dsk/c0t0d0s0. Because the root (/) file system is stored at that location, you must use the -f option to force the creation of a volume on the mounted partition.

To create an additional RAID-0 volume for the secondary submirror of the root (/) file system, use the Enhanced Storage Tool within the Solaris Management Console. To create additional volumes, complete the following steps:

1. Click the Volumes
icon. Any configured metadevice volumes appear in the View pane, as shown in Figure 9-12. If there are no metadevice volumes currently configured, the View pane remains empty.

   Figure 9-12  Volumes Icon

2. Select Create Volume from the Action menu, as shown in Figure 9-13.

   Figure 9-13  Solaris Management Console: Action Menu

3. Answer the prompts in the Create Volume Wizard window.

Every time you create a new volume, you can create additional state database replicas. When creating RAID-0 volumes, it is usually unnecessary to create additional state database replicas.

4. Select Don't Create State Database Replicas in the Create Volume window, as shown in Figure 9-14.

   Figure 9-14  Create Volume Window

5. Click Next to continue.

Every time you create a new volume, you can relocate it on alternate disk sets, as shown in Figure 9-15.

   Figure 9-15  Create Volume: Select Disk Set Window

6. If only one disk set exists on the system, select the default.

7. Click Next to continue.

Figure 9-16 shows a selection of volume configurations that you can create.

   Figure 9-16  Create Volume: Select Volume Type Window

8. Select Concatenation (RAID 0).

9. Click Next to continue.
You can name the volume, as shown in Figure 9-17. By default, volume names fall within the range of d0 through d127. In this procedure, you build a mirror named d10. The two submirrors that make up the mirror are d11 (the first submirror) and d12 (the second submirror). You have already created volume d11 from the slice that contains the root (/) file system, so this volume is d12, which will contain the mirror copy of the root (/) file system.

   Figure 9-17  Create Volume: Name Volume Window

10. Name the volume d12.

11. Click Next to continue.

You can also select the slice that the new volume will occupy, as shown in Figure 9-18. This volume is the secondary submirror of a mirror; therefore, the size of this slice must be equal to or greater than the size of the primary submirror.

   Figure 9-18  Create Volume: Select Components Window

12. Select a slice equal to or greater than the size of the primary submirror RAID-0 volume.

13. Click Add to move it to the Selected list.

14. Click Next to continue.

If you are mirroring a file system that can span multiple slices, you can select the order of presentation of the slices within the stripe group, as shown in Figure 9-19.

   Figure 9-19  Create Volume: Select Components Window

Note – When mirroring root (/), you cannot span multiple slices.

15. Click Next to continue.
A hot spare pool is a set of slices you can use to improve the fault tolerance of the system. To allow continued data access to a failed volume until you can replace a failed slice, hot spares are automatically swapped in to replace the failed slice. After you replace the failed slice, the hot spare is automatically swapped back out in favor of the replacement slice, as shown in Figure 9-20.

16. Because no hot spare pools have been created, select No Hot Spare Pool.

   Figure 9-20  Create Volume: Use Hot Spare Pool Window

17. Click Next to continue.

The Create Volume: Review window provides a confirmation of your selections. It also provides a summary of the commands necessary to accomplish the identical task from the command line, as shown in Figure 9-21.

   Figure 9-21  Create Volume: Review Window

18. Click Finish.

Figure 9-22 shows the metadevice for the newly created RAID-0 volume.

   Figure 9-22  Solaris Management Console: Volumes Window

In this procedure, you created two RAID-0 volumes, d11 and d12. The d11 volume contains the slice where the root (/) file system is stored, and the d12 volume contains space for a copy of the root (/) file system.

Creating a RAID-1 Volume

You can create the RAID-1 volume using:

- The metainit command
- The Enhanced Storage Tool within the Solaris Management Console

The metainit Command

The syntax for creating a
RAID-1 volume by using the metainit command is:

    metainit mirror -m submirror [read_options] [write_options] [pass_num]

where:

- mirror – Specifies the volume name of the mirror.
- -m – Indicates that the configuration is a mirror.
- submirror – A volume (stripe or concatenation) that makes up the initial one-way mirror.
- read_options – The following read options for mirrors are available:
  • -g – Enables the geometric read option, which results in faster performance on sequential reads.
  • -r – Directs all reads to the first submirror. Use the -r option only when the devices that make up the first submirror are substantially faster than those of the second submirror. You cannot use the -r option with the -g option.
- write_options – The following write option is available:
  • S – Performs serial writes to mirrors. The default setting for this option is parallel writes.
- pass_num – A number (0–9) at the end of an entry defining a mirror that determines the order in which that mirror is resynchronized during a reboot. The default is 1. Smaller pass numbers are resynchronized first. Equal pass numbers are run concurrently. If 0 is used, the resynchronization is skipped. Use 0 only for mirrors mounted as read-only, or used as swap space.

Note – If neither the -g nor the -r option is specified, reads are made in round-robin order from all submirrors in the mirror. This process enables load balancing across the submirrors.

The following command-line example creates a mirrored volume named d10 and attaches a one-way mirror using volume d11. Volume d11 is a submirror of the mirror named d10.

    # /usr/sbin/metainit d10 -m d11
    d10: Mirror is setup

...
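The steps above can be collected into a single command-line sequence. The following dry-run sketch only prints the commands it would issue; d10, d11, and c0t0d0s0 come from the module text, while c0t1d0s0 is a hypothetical slice for the second submirror, and the final metattach step is an assumption (it is the usual way to attach a second submirror, but it is not shown in this excerpt):

```shell
# Dry-run sketch of the command-line equivalent of the mirroring
# procedure. Nothing is executed against real devices; each step is
# printed so the sequence can be reviewed first.
run() { echo "would run: $*"; }

run metainit -f d11 1 1 c0t0d0s0   # RAID-0 volume over the mounted root slice
run metainit d12 1 1 c0t1d0s0      # RAID-0 volume for the second submirror (hypothetical slice)
run metainit d10 -m d11            # one-way mirror d10 with submirror d11
run metattach d10 d12              # assumption: attach d12 as the second submirror
```

Replacing the run prefix with direct invocations (as root, on a system with those slices) would perform the actual configuration.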
    ... rw,intr,largefiles,onerror=panic,suid,dev=2200000 1008255791
    /proc /proc proc dev=4080000 1008255790
    mnttab /etc/mnttab mntfs dev=4140000 1008255790
    fd /dev/fd fd rw,suid,dev=4180000 1008255794
    swap /var/run tmpfs dev=1 1008255797
    ... autofs indirect,ignore,nobrowse,dev=4300002 1008255810
    -xfn /xfn autofs indirect,ignore,dev=4300003 1008255810
    sys44:vold(pid264) /vol nfs ignore,dev=42c0001 1008255827

Stopping and Starting the Automount...

    ... tmpfs dev=2 1008255802
    /dev/dsk/c0t0d0s7 /export/home ufs rw,intr,largefiles,onerror=panic,suid,dev=2200007 1008255802
    -hosts /net autofs indirect,nosuid,ignore,nobrowse,dev=4300001 1008255810
    auto_home
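AutoFS trigger points can be picked out of mnttab-style output by their file system type. The following is a minimal sketch; the sample lines are trimmed copies of output like the listing above:

```shell
# Sketch: list only the autofs trigger points from mnttab-style output.
# The sample lines are abbreviated copies of an /etc/mnttab listing.
mnt=$(mktemp)
cat > "$mnt" <<'EOF'
/proc /proc proc dev=4080000 1008255790
-hosts /net autofs indirect,nosuid,ignore,nobrowse,dev=4300001 1008255810
-xfn /xfn autofs indirect,ignore,dev=4300003 1008255810
swap /tmp tmpfs dev=2 1008255802
EOF

# Fields are: resource  mount-point  fstype  options  time
awk '$3 == "autofs" { print $2 }' "$mnt"
# prints: /net and /xfn, one per line

rm -f "$mnt"
```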