
Red Hat Linux Networking and System Administration (P11)



directly, as well as using the graphical Network Configuration tool. You used subnetting to create two internal subnetworks and configured a router so the subnetworks could communicate with each other. You set up a Dynamic Host Configuration Protocol server to assign IP addresses to the hosts on the network. You also enabled forwarding and masquerading so that every computer on your internal network could have Internet access.

CHAPTER 12

The Network File System

IN THIS CHAPTER
■ NFS Overview
■ Planning an NFS Installation
■ Configuring an NFS Server
■ Configuring an NFS Client
■ Using Automount Services
■ Examining NFS Security

Linux servers are often installed to provide centralized file and print services for networks. This chapter explains how to use the Network File System (NFS) to create a file server. After a short overview of NFS, you learn how to plan an NFS installation, how to configure an NFS server, and how to set up an NFS client. You'll also learn how to mount remote file systems automatically, eliminating the need to mount them manually before you can access them. The final section of the chapter highlights NFS-related security issues.

NFS Overview

NFS is the most common method used to share files across Linux and UNIX networks. It is a distributed file system that enables local access to remote disks and file systems. In a properly designed and carefully implemented NFS installation, NFS's operation is totally transparent to clients using remote file systems. Provided that you have the appropriate network connection, you can access files and directories that are physically located on another system, or even in a different city or country, using standard Linux commands. No special procedures, such as using a password, are necessary. NFS is a common and popular file-sharing protocol, so NFS clients are available for many non-UNIX operating systems, including the various Windows versions, MacOS, OS/2, VAX/VMS, and MVS.

Understanding NFS

NFS follows standard client/server architectural principles. The server component of NFS consists of the physical disks that contain the file systems you want to share and several daemons that make these shared file systems visible to and available for use by client systems on the network. When an NFS server is sharing a file system in this manner, it is said to be exporting a file system. Similarly, the shared file system is referred to as an NFS export. The NFS server daemons provide remote access to the exported file systems, enable file locking over the network, and, optionally, allow the server administrator to set and enforce disk quotas on the NFS exports. On the client side, an NFS client simply mounts the exported file systems locally, just as local disks would be mounted. The mounted file system is known colloquially as an NFS mount.

The possible uses of NFS are quite varied. NFS is often used to provide diskless clients, such as X terminals or the slave nodes in a cluster, with their entire file system, including the kernel image and other boot files. Another common scheme is to export shared data or project-specific directories from an NFS server and to enable clients to mount these remote file systems anywhere they see fit on the local system. Perhaps the most common use of NFS is to provide centralized storage for users' home directories. Many sites store users' home directories on a central server and use NFS to mount the home directory when users log in or boot their systems. Usually, the
exported directories are mounted as /home/username on the local (client) systems, but the export itself can be stored anywhere on the NFS server, for example, /exports/users/username.

Figure 12-1 illustrates both of these NFS uses. The network shown in Figure 12-1 includes a server (suppose that its name is diskbeast) with two sets of NFS exports: user home directories on the file system /exports/homes (/exports/homes/u1, /exports/homes/u2, and so on) and a project directory stored on a separate file system named /proj. Figure 12-1 also illustrates a number of client systems (pear, apple, mango, and so forth). Each client system mounts /home locally from diskbeast. On diskbeast, the exported file systems are stored in the /exports/homes directory. When a user logs in to a given system, that user's home directory is automatically mounted on /home/username on that system. So, for example, because user u1 has logged in on pear, /exports/homes/u1 is mounted on pear's file system as /home/u1. If u1 then logs in on mango, too (not illustrated in Figure 12-1), mango also mounts /home/u1. Logging in on two systems this way is potentially dangerous because changes to files in the exported file system made from one login session might adversely affect the other login session. Despite the potential for such unintended consequences, it is also very convenient for such changes to be immediately visible. Figure 12-1 also shows that three users, u5, u6, and u7, have mounted the project-specific file system, /proj, in various locations on their local file systems. Specifically, user u5 has mounted it as /work/proj on kiwi (that is, kiwi:/work/proj in host:/mount/dir form), u6 as lime:/projects, and u7 as peach:/home/work.

NFS can be used in almost any situation requiring transparent local access to remote file systems. In fact, you can use NFS and NIS (Chapter 13 covers NIS in depth) together to create a highly centralized network environment that makes it easier to administer the network, add and delete user accounts, protect and back up key data and file systems, and give users a uniform, consistent view of the network regardless of where they log in.

Figure 12-1 Exporting home directories and project-specific file systems

As you will see in the sections titled "Configuring an NFS Server" and "Configuring an NFS Client," NFS is easy to set up and maintain and pleasantly flexible. Exports can be mounted read-only or in read-write mode. Permission to mount exported file systems can be limited to a single host or to a group of hosts using hostnames with the wildcards * and ?, IP address ranges, or even NIS netgroups, which are similar to, but not the same as, standard UNIX user groups. Other options enable strengthening or weakening of certain security options as the situation demands.
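
To make that concrete, here is a minimal /etc/exports sketch; the directories and hosts are made up for illustration and are not part of the diskbeast example:

/srv/projects   buildhost.example.com(rw)
/usr/share      *.example.com(ro) 192.168.2.0/24(ro)

The first entry grants read-write access to a single host, while the second grants read-only access both to every host in example.com and to every host on the 192.168.2.0/24 network; the full export syntax and options are covered later in the chapter.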

What's New with NFSv4?

NFS version 4, which is the version available in Fedora Core and Red Hat Enterprise Linux, offers significant security and performance enhancements over older versions of the NFS protocol and adds features, such as replication (the ability to duplicate a server's exported file systems on other servers) and migration (the capability to move file systems from one NFS server to another without affecting NFS clients), that NFS has historically lacked. For example, one of the (justified) knocks against NFS has been that it transmits authentication data as clear text. NFSv4 incorporates RPCSEC-GSS (the Secure RPC protocol using the Generic Security Service API) security, which makes it possible to encrypt the data stream transmitted between NFS clients and servers. Another security feature added to NFSv4 is support for access control lists, or ACLs. ACLs build on the traditional Linux UID- and GID-based file and directory access by giving users and administrators the ability to set more finely grained restrictions on who can read, write, and/or execute a given file.

In terms of backward compatibility, NFSv4 isn't, at least not completely. Specifically, an NFSv4 client might not be able to mount an NFSv2 export. It has been our experience that mounting an NFSv2 export on an NFSv4 client requires the NFS-specific mount option nfsvers=2. Going the other direction, mounting an NFSv4 export on an NFSv2 client does not require special handling, and NFSv4 and NFSv3 interoperability is no problem. See the section titled "Configuring an NFS Client" for more details about interoperability between NFS versions.

In terms of performance enhancements, NFSv4 makes fuller use of client-side caching, which reduces the frequency with which clients must communicate with an NFS server. By decreasing the number of server round trips, overall performance increases. In addition, NFSv4 was specifically designed (or enhanced) to provide reasonable performance over the Internet, even on slow, low-bandwidth connections or in high-latency situations (such as when someone on your LAN is downloading the entire Lord of the Rings trilogy). However, despite the improved client-side caching, NFS is still a stateless protocol. Clients maintain no information about available servers across reboots, and the client-side cache is likewise lost on reboot. In addition, if a server reboots or becomes unavailable while a client has pending (uncommitted) file changes, those changes will be lost if the server does not come back up fairly soon. Complementing the new version's greater Internet-friendliness, NFSv4 also supports Unicode (UTF-8) filenames, making cross-platform and cross-character-set file sharing more seamless and more international. When applicable, this chapter discusses NFSv4 features, includes examples of NFSv4 clients and servers, and warns you of potential problems you might encounter when using NFSv4.

NOTE For more information about NFSv4 in Fedora Core and Red Hat Enterprise Linux, see Van Emery's excellent article, "Learning NFSv4 with Fedora Core 2," on the Web at www.vanemery.com/Linux/NFSv4/NFSv4-no-rpcsec.html. The NFSv4 open-source reference implementation is driven by the Center for Information Technology Integration (CITI) at the University of Michigan, which maintains an information-rich Web site at www.citi.umich.edu/projects/nfsv4.
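
As noted above, forcing the older protocol version is a matter of one mount option. A minimal sketch, in which the server name oldserver, the export path, and the mount point are assumptions made for illustration:

# Mount an export from a server that speaks only NFSv2
# mount -t nfs -o nfsvers=2 oldserver:/export /mnt/oldserver

Clients that support NFSv3 or NFSv4 generally negotiate the highest mutually supported version on their own; nfsvers simply pins the version explicitly when that negotiation fails.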

NFS Advantages and Disadvantages

Clearly, the biggest advantage NFS provides is centralized control, maintenance, and administration. It is much easier, for example, to back up a file system stored on a single server than it is to back up directories scattered across a network, on systems that are geographically dispersed and that might or might not be accessible when the backup is made. Similarly, NFS makes it trivial to provide access to shared disk space, or to limit access to sensitive data. When NFS and NIS are used together, changes to systemwide configuration files, such as authentication files or network configuration information, can be quickly and automatically propagated across the network without requiring system administrators to physically visit each machine or requiring users to take any special action.

NFS can also conserve disk space and prevent duplication of resources. Read-only file systems and file systems that change infrequently, such as /usr, can be exported as read-only NFS mounts. Likewise, upgrading applications employed by users throughout a network simply becomes a matter of installing the new application and changing the exported file system to point at the new application.

End users also benefit from NFS. When NFS is combined with NIS, users can log in from any system, even remotely, and still have access to their home directories and see a uniform view of shared data. Users can protect important or sensitive data, or information that would be impossible or time-consuming to re-create, by storing it on an NFS-mounted file system that is regularly backed up.

NFS has its shortcomings, of course, primarily in terms of performance and security. As a distributed, network-based file system, NFS is sensitive to network congestion. Heavy network traffic slows down NFS performance. Similarly, heavy disk activity on the NFS server adversely affects NFS's performance. In the face of network congestion or extreme disk activity, NFS clients run more slowly because file I/O takes longer. The performance enhancements incorporated in NFSv4 have increased NFS's stability and reliability on high-latency and heavily congested networks, but it should be clear that unless you are on a high-speed network, such as Gigabit Ethernet or Myrinet, NFS will not be as fast as a local disk. If an exported file system is not available when a client attempts to mount it, the client system can hang, although this can be mitigated using a specific mount option that you will read about in the section titled "Configuring an NFS Client." Another shortcoming of NFS is that an exported file system represents a single point of failure. If the disk or system exporting vital data or applications becomes unavailable for any reason, such as a disk crash or server failure, no one can access that resource.

NFS suffers from potential security problems because its design assumes a trusted network, not a hostile environment in which systems are constantly being probed and attacked. The primary weakness of most NFS implementations based on protocol versions 1, 2, and 3 is that they are based on standard (unencrypted) remote procedure calls (RPC). RPC is one of the most common targets of exploit attempts. As a result, sensitive information should never be exported from or mounted on systems directly exposed to the Internet, that is, systems on or outside a firewall. While RPCSEC_GSS makes NFSv4 more secure and perhaps safer to use on Internet-facing systems, evaluate such usage carefully and perform testing before deploying even a version 4-based NFS system across the Internet. Never use NFS versions 3 and earlier on systems that front the Internet; clear-text protocols are trivial for
anyone with a packet sniffer to intercept and interpret.

NOTE An NFS client using NFS servers inside a protected network can safely be exposed to the Internet because traffic between client and server travels across the protected network. What we are discouraging is accessing an NFSv3 (or earlier) export across the Internet.

Quite aside from encryption, and even inside a firewall, providing all users access to all files might pose greater risks than user convenience and administrative simplicity justify. Care must be taken when configuring NFS exports to limit access to the appropriate users and also to limit what those users are permitted to do with the data. Moreover, NFS has quirks that can prove disastrous for unwary or inexperienced administrators. For example, when the root user on a client system mounts an NFS export, you do not want the root user on the client to have root privileges on the exported file system. By default, NFS prevents this, a procedure called root squashing, but a careless administrator might override it.

Planning an NFS Installation

Planning an NFS installation is a grand-sounding phrase that boils down to thoughtful design followed by careful implementation. Of these two steps, design is the more important because it ensures that the implementation is transparent to end users and trivial to the administrator. The implementation itself is remarkably straightforward. This section highlights the server configuration process and discusses the key design issues to consider.

"Thoughtful design" consists of deciding what file systems to export to which users and selecting a naming convention and mounting scheme that maintains network transparency. When you are designing your NFS installation, you need to:

■ Select the file systems to export
■ Establish which users (or hosts) are permitted to mount the exported file systems
■ Identify the automounting or manual mounting scheme that clients will use to access exported file systems
■ Choose a naming convention and mounting scheme that maintains network transparency and ease of use

With the design in place, implementation is a matter of configuring the exports and starting the appropriate daemons. Testing ensures that the naming convention and mounting scheme work as designed and identifies potential performance bottlenecks. Monitoring is an ongoing process to ensure that exported file systems continue to be available, that network security and the network security policy remain uncompromised, and that heavy usage does not adversely affect overall performance.

A few general rules exist to guide the design process. You need to take into account site-specific needs, such as which file systems to export, the amount of data that will be shared, the design of the underlying network, what other network services you need to provide, and the number and type of servers and clients. The following tips and suggestions for designing an NFS server and its exports will simplify administrative tasks and reduce user confusion:

■ Good candidates for NFS exports include any file system that is shared among a large number of users, such as /home, workgroup project directories, shared data directories such as /usr/share, the system mail spool (/var/spool/mail), and file systems that contain shared application binaries and data. File systems that are relatively static, such as /usr, are also good candidates for NFS exports because there is no need to replicate the same static data and binaries across multiple machines.

TIP A single NFS server can export binaries for multiple platforms by exporting system-specific subdirectories. So, for example, you can export a subdirectory of Linux binaries from a Solaris NFS server with no difficulty. The point to emphasize here is that NFS can be used in heterogeneous environments as seamlessly as it can be used in homogeneous network installations.

■ Use /home/username to mount home directories. This is one of the most fundamental directory idioms in the Linux world, so disregarding it not only antagonizes users but also breaks a lot of software that presumes user home directories live in /home. On the server, you have more leeway about where to situate the exports. Recall from Figure 12-1, for example, that diskbeast stored user home directories in /exports/homes.

■ Few networks are static, particularly network file systems, so design NFS servers with growth in mind. For example, avoid the temptation to drop all third-party software onto a single exported file system. Over time, file systems usually grow to the point that they need to be subdivided, leading to administrative headaches when client mounts must be updated to reflect a new set of exports. Spread third-party applications across multiple NFS exports and export each application and its associated data separately.

■ If the previous tip would result in a large number of NFS mounts for clients, it might be wiser to create logical volume sets on the NFS server. By using logical volumes underneath the exported file systems, you can increase disk space on the exported file systems as it is needed without having to take the server down or take needed exports offline.

■ At large sites, distribute multiple NFS exports across multiple disks so that a single disk failure limits the impact to the affected application. Better still, to minimize downtime on singleton servers, use RAID for redundancy and logical volumes for flexibility. If you have the capacity, use NFSv4's replication facilities to ensure that exported file systems remain available even if the primary NFS server goes up in smoke.

■ Similarly, overall disk and network performance improves if you distribute exported file systems across multiple servers rather than concentrate them on a single server. If it is not possible to use multiple servers, at least try to situate NFS exports on separate physical disks and/or on separate disk controllers. Doing so reduces disk I/O contention.

When identifying the file systems to export, keep in mind a key restriction on which file systems can be exported and how they can be exported. You can export only local file systems and their subdirectories. To express this restriction another way, you cannot export a file system that is itself already an NFS mount. For example, if a client system named userbeast mounts /home from a server named homebeast, userbeast cannot reexport /home; clients wishing to mount /home must do so directly from homebeast.
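
On each client, that direct mount is simply an ordinary NFS mount. A minimal sketch, reusing the homebeast example above:

# mount -t nfs homebeast:/home /home

Listing /home in userbeast's own /etc/exports and re-exporting it to other clients will not work; every client must mount homebeast:/home for itself.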

Configuring an NFS Server

This section shows you how to configure an NFS server, identifies the key files and commands you use to implement, maintain, and monitor the NFS server, and illustrates the server configuration process using a typical NFS setup.

On Fedora Core and Red Hat Enterprise Linux systems, the /etc/exports file is the main NFS configuration file. It lists the file systems the server exports, the systems permitted to mount the exported file systems, and the mount options for each export. NFS also maintains status information about existing exports and the client systems that have mounted those exports in /var/lib/nfs/rmtab and /var/lib/nfs/xtab.

In addition to these configuration and status files, all of the daemons, commands, initialization scripts, and configuration files in the following list are part of NFS. Don't panic because the list is so long, though; you have to concern yourself with only a few of them to have a fully functioning and properly configured NFS installation. Notice that approximately half of the supporting files are part of NFSv4 — presumably the price one pays for added features.

■ Daemons
  ■ rpc.gssd (new in NFSv4)
  ■ rpc.idmapd (new in NFSv4)
  ■ rpc.lockd
  ■ rpc.mountd
  ■ rpc.nfsd
  ■ rpc.portmap

The hide and nohide options mimic the behavior of NFS on SGI's IRIX. By default, if an exported directory is a subdirectory of another exported directory, the exported subdirectory will be hidden unless both the parent and child exports are explicitly mounted. The rationale for this feature is that some NFS client implementations cannot deal with what appears to be two different files having the same inode. In addition, directory hiding simplifies client- and server-side caching. You can disable directory hiding by specifying nohide.

The final interesting mount option is mp. If set, the NFS server will not export a file system unless that file system is actually mounted on the server. The reasoning behind this option is that a disk or file system containing an NFS export might not mount successfully at boot time or might crash at runtime. This measure prevents NFS clients from mounting unavailable exports. Here is a modified version of the /etc/exports file presented earlier:

/usr/local     *.example.com(mp,ro,secure)
/usr/devtools  192.168.1.0/24(mp,ro,secure)
/home          192.168.0.0/255.255.255.0(mp,rw,secure,no_subtree_check)
/projects      @dev(mp,rw,secure,anonuid=600,anongid=600,sync,no_wdelay)
/var/mail      192.168.0.1(mp,rw,insecure,no_subtree_check)
/opt/kde       gss/krb5(mp,ro,async)

The hosts have not changed, but additional export options have been added. All file systems use the mp option to make sure that only mounted file systems are available for export. /usr/local, /usr/devtools, /home, and /projects can be accessed only from clients using secure ports (the secure option), but the server accepts requests destined for /var/mail from any port because the insecure option is specified. For /projects, the anonymous user is mapped to the UID and GID 600, as indicated by the anonuid=600 and anongid=600 options. The wrinkle in this case is that only members of the NIS netgroup dev will have their UIDs and GIDs mapped, because they are the only NFS clients permitted to mount /projects. /home and /var/mail are exported using the no_subtree_check option because they see a high volume of file renaming, moving, and deletion. Finally, the sync and no_wdelay options disable write caching and delayed writes to the /projects file system. The rationale for using sync and no_wdelay is that the impact of data loss would be significant in the event the server crashes. However, forcing disk writes in this manner also imposes a performance penalty because the NFS server's normal disk caching and buffering heuristics cannot be applied.

If you intend to use NFSv4-specific features, you need to be familiar with the RPCSEC_GSS configuration files, /etc/gssapi_mech.conf and /etc/idmapd.conf. idmapd.conf is the configuration file for NFSv4's idmapd daemon. idmapd works on behalf of both NFS servers and clients to translate NFSv4 IDs to user and
group IDs and vice versa; idmapd.conf controls idmapd's runtime behavior. The default configuration (with comments and blank lines removed) should resemble Listing 12-1.

[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = localdomain

[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

[Translation]
Method = nsswitch

Listing 12-1 Default idmapd configuration

In the [General] section, the Verbosity option controls the amount of log information that idmapd generates; Pipefs-Directory tells idmapd where to find the RPC pipe file system it should use (idmapd communicates with the kernel using the pipefs virtual file system); Domain identifies the default domain. If Domain isn't specified, it defaults to the server's fully qualified domain name (FQDN) less the hostname. For example, if the FQDN is coondog.example.com, the Domain parameter would be example.com; if the FQDN is mail.admin.example.com, the Domain parameter would be the subdomain admin.example.com. The Domain setting is probably the only change you will need to make to idmapd's configuration. The [Mapping] section identifies the user and group names that correspond to the nobody user and group that the NFS server should use. The option Method = nsswitch, finally, tells idmapd how to perform the name resolution. In this case, names are resolved using the name service switch (NSS) features of glibc.

The /etc/gssapi_mech.conf file controls the GSS daemon (rpc.svcgssd). You won't need to modify this file. As provided in Fedora Core and RHEL, gssapi_mech.conf lists the specific function call to use to initialize a given GSS library. Programs (in this case, NFS) need this information if they intend to use secure RPC.

Two additional files store status information about NFS exports, /var/lib/nfs/rmtab and /var/lib/nfs/etab. /var/lib/nfs/rmtab is the table that lists each NFS export that is mounted by an NFS client. The daemon rpc.mountd (described in the section "NFS Server Daemons") is responsible for servicing requests to mount NFS exports. Each time the rpc.mountd daemon receives a mount request, it adds an entry to /var/lib/nfs/rmtab. Conversely, when mountd receives a request to unmount an exported file system, it removes the corresponding entry from /var/lib/nfs/rmtab. The following short listing shows the contents of /var/lib/nfs/rmtab on an NFS server that exports /home in read-write mode and /usr/local in read-only mode. In this case, the host with IP address 192.168.0.4 has mounted both exports:

$ cat /var/lib/nfs/rmtab
192.168.0.4:/home:0x00000001
192.168.0.4:/usr/local:0x00000001

Fields in rmtab are colon-delimited, so each entry has three fields: the host, the exported file system, and the mount options specified in /etc/exports. Rather than try to decipher the hexadecimal options field, though, you can read the mount options directly from /var/lib/nfs/etab. The exportfs command, discussed in the subsection titled "NFS Server Scripts and Commands," maintains /var/lib/nfs/etab. etab contains the table of currently exported file systems. The following listing shows the contents of /var/lib/nfs/etab for the server exporting the /usr/local and /home file systems shown in the previous listing (the output wraps because of page width constraints):

$ cat /var/lib/nfs/etab
/usr/local 192.168.0.4(ro,sync,wdelay,hide,secure,root_squash,no_all_squash,
subtree_check,secure_locks,mapping=identity,anonuid=-2,anongid=-2)
/home 192.168.0.2(rw,sync,wdelay,hide,secure,root_squash,no_all_squash,
subtree_check,secure_locks,mapping=identity,anonuid=-2,anongid=-2)

As you can see in the listing, the format of the etab file resembles that of /etc/exports. Notice, however, that etab lists the default values for options not specified in /etc/exports in addition to the options specifically listed.

NOTE Most Linux systems use /var/lib/nfs/etab to store the table of currently exported file systems. The manual page for the exportfs command, however, states that /var/lib/nfs/xtab contains the table of current exports. We do not have an explanation for this — it's just a fact of life that the manual page and actual usage differ.

The last two configuration files to discuss, /etc/hosts.allow and /etc/hosts.deny, are not, strictly speaking, part of the NFS server. Rather, /etc/hosts.allow and /etc/hosts.deny are access control files used by the TCP Wrappers system; you can configure an NFS server without them and the server will function perfectly (to the degree, at least, that anything ever functions perfectly). However, using TCP Wrappers' access control features helps enhance both the overall security of the server and the security of the NFS subsystem.

The TCP Wrappers package is covered in detail in Chapter 19. Rather than preempt that discussion here, we suggest how to modify these files, briefly explain the rationale, and suggest you refer to Chapter 19 to understand the modifications in detail. First, add the following entries to /etc/hosts.deny:

portmap:ALL
lockd:ALL
mountd:ALL
rquotad:ALL
statd:ALL

These entries deny access to NFS services to all hosts not explicitly permitted access in /etc/hosts.allow. Accordingly, the next step is to add entries to /etc/hosts.allow to permit access to NFS services for specific hosts. As you will learn in Chapter 19, entries in /etc/hosts.allow take the form:

daemon:host_list [host_list]

TIP The NFS HOWTO (http://nfs.sourceforge.net/nfs-howto/server.html#CONFIG) discourages use of the ALL:ALL syntax in /etc/hosts.deny, using this rationale: "While [denying access to all services] is more secure behavior, it may also get you in trouble when you are installing new services, you forget you put it there, and you can't figure out for the life of you why they won't work." We respectfully disagree. The stronger security enabled by the ALL:ALL construct in /etc/hosts.deny far outweighs any inconvenience it might pose when configuring new services.

daemon is a daemon name such as portmap or lockd, and host_list is a list of one or more hosts specified as hostnames, IP addresses, IP address patterns using wildcards, or address/netmask pairs. For example, the following entry permits all hosts in the example.com domain to access the portmap service:

portmap:.example.com

The next entry permits access to all hosts on the subnetworks 192.168.0.0 and 192.168.1.0:

portmap:192.168.0. 192.168.1.

You need to add entries for each host or host group permitted NFS access for each of the five daemons listed in /etc/hosts.deny. So, for example, to permit access to all hosts in the example.com domain, add the following entries to /etc/hosts.allow:

portmap:.example.com
lockd:.example.com
mountd:.example.com
rquotad:.example.com
statd:.example.com

Note that the leading dot is significant: a name of the form .domain.dom matches all hosts in domain.dom, including hosts in subdomains like subdom.domain.dom.

NFS Server Daemons

Providing NFS services requires the services of six daemons: /sbin/portmap, /usr/sbin/rpc.mountd, /usr/sbin/rpc.nfsd, /sbin/rpc.statd, /sbin/rpc.lockd, and, if necessary, /usr/sbin/rpc.rquotad. They are generally referred to as portmap, mountd, nfsd, statd, lockd, and rquotad, respectively. If you intend to take advantage of NFSv4's enhancements, you'll also need to know about rpc.gssd, rpc.idmapd, and rpc.svcgssd. For convenience's sake, we'll refer to these daemons using the shorthand expressions gssd, idmapd, and svcgssd. Table 12-2 briefly describes each daemon's purpose.

Table 12-2 NFS Server Daemons

DAEMON    FUNCTION
gssd      Creates security contexts on RPC clients for exchanging RPC information using Secure RPC (RPCSEC) using GSS
idmapd    Maps local user and group names to NFSv4 IDs (and vice versa)
lockd     Starts the kernel's NFS lock manager
mountd    Processes NFS client mount requests
nfsd      Provides all NFS services except file locking and quota management
portmap   Enables NFS clients to discover the NFS services available on a given NFS server
rquotad   Provides file system quota information for NFS exports to NFS clients using file system quotas
statd     Implements NFS lock recovery when an NFS server system crashes
svcgssd   Creates security contexts on RPC servers for exchanging RPC information using Secure RPC (RPCSEC) using GSS

The NFS server daemons should be started in the following order to work properly:

1. portmap
2. nfsd
3. mountd
4. statd
5. rquotad (if necessary)
6. idmapd
7. svcgssd

The start order is handled for you automatically at boot time if you have enabled NFS services using the Service Configuration Tool (/usr/bin/system-config-services). Notice that the list omits lockd; nfsd starts it on an as-needed basis, so you should rarely, if ever, need to invoke it manually. Fortunately, the Red Hat Linux initialization script for NFS, /etc/rc.d/init.d/nfs, takes care of starting up the NFS server daemons for you. Should the need arise, however, you can start NFS yourself by executing the handy service utility script directly:

# service nfs start
Starting NFS services:     [ OK ]
Starting NFS quotas:       [ OK ]
Starting NFS daemon:       [ OK ]
Starting NFS mountd:       [ OK ]

You can also use:

# /etc/rc.d/init.d/nfs start
Starting NFS services:     [ OK ]
Starting NFS quotas:       [ OK ]
Starting NFS daemon:       [ OK ]
Starting NFS mountd:       [ OK ]

By default, the startup script starts eight copies of nfsd to enable the server to process multiple requests simultaneously. To change this value, edit /etc/sysconfig/nfs and add an entry resembling the following (you need to be root to edit this file):

RPCNFSDCOUNT=n

Replace n with the number of nfsd processes you want to start. Busy servers with many active connections might benefit from doubling or tripling this number. If file system quotas for exported file systems have not been enabled on the NFS server, it is unnecessary to start the quota manager, rquotad, but be aware that the initialization script starts rquotad whether quotas have been enabled or not.

TIP If /etc/sysconfig/nfs does not exist, you can create it using your favorite text editor. In a pinch, you can use the following command to create it with the RPCNFSDCOUNT setting mentioned in the text:

# cat > /etc/sysconfig/nfs
RPCNFSDCOUNT=16
^d

^d is the end-of-file mark, generated by pressing the Control key and d simultaneously.

NFS Server Scripts and Commands

Three initialization scripts control the required NFS server daemons: /etc/rc.d/init.d/portmap, /etc/rc.d/init.d/nfs, and /etc/rc.d/init.d/nfslock. The exportfs command enables you to manipulate the list of current exports on the fly without needing to edit /etc/exports. The showmount command provides information about
clients and the file systems they have mounted. The nfsstat command displays detailed information about the status of the NFS subsystem.

The portmap script starts the portmap daemon, frequently referred to as the portmapper. All programs that use RPC, such as NIS and NFS, rely on the information the portmapper provides. The portmapper starts automatically at boot time, so you rarely need to worry about it, but it is good to know you can control it manually. Like most startup scripts, it requires a single argument, such as start, stop, restart, or status. As you can probably guess, the start and stop arguments start and stop the portmapper, restart restarts it (by calling the script with the stop and start arguments, as it happens), and status indicates whether the portmapper is running, showing the portmapper's PID if it is running.

The primary NFS startup script is /etc/rc.d/init.d/nfs. Like the portmapper, it requires a single argument: start, stop, status, restart, or reload. start and stop start and stop the NFS server, respectively. The restart argument stops and starts the server processes in a single command and can be used after changing the contents of /etc/exports. However, it is not necessary to reinitialize the NFS subsystem by bouncing the server daemons in this way. Rather, use the script's reload argument, which causes exportfs, discussed shortly, to reread /etc/exports and to reexport the file systems listed there. Both restart and reload also update the timestamp on the NFS lock file (/var/lock/subsys/nfs) used by the initialization script. The status argument displays the PIDs of the mountd, nfsd, and rquotad daemons. For example:

$ service nfs status
rpc.mountd (pid 4358) is running
nfsd (pid 1241 1240 1239 1238 1235 1234 1233 1232) is running
rpc.rquotad (pid 1221) is running

The output of the command confirms that the three daemons are running and shows the PIDs for each instance of each daemon. All users are permitted to invoke the NFS initialization script with the status argument, but all the other arguments (start, stop, restart, and reload) require root privileges.

NFS services also require the file-locking daemons lockd and statd. As explained earlier, nfsd starts lockd itself, but you still must start statd separately. You can use an initialization script for this purpose, /etc/rc.d/init.d/nfslock. It accepts almost the same arguments as /etc/rc.d/init.d/nfs does, with the exception of the reload argument (because statd does not require a configuration file). To tie everything together, if you ever need to start the NFS server manually, the proper invocation sequence is to start the portmapper first, followed by NFS, followed by the NFS lock manager, that is:

# service portmap start
# service nfs start
# service nfslock start

Conversely, to shut down the server, reverse the start procedure:

# service nfslock stop
# service nfs stop
# service portmap stop

Because other programs and servers may require the portmapper's service, we suggest that you let it run unless you drop the system to run level 1 to perform maintenance.
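
For instance, after adding or changing an entry in /etc/exports, a reload is usually all that is needed. A short sketch (the export shown here is an assumption used purely for illustration):

# echo '/srv/iso  192.168.0.0/24(ro)' >> /etc/exports
# service nfs reload

The reload keeps the running daemons in place while exportfs republishes the export list, so existing client mounts are not disturbed.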

You can also find out what NFS daemons are running using the rpcinfo command with the -p option. rpcinfo is a general-purpose program that displays information about programs that use the RPC protocol, of which NFS is one. The -p option queries the portmapper and displays a list of all registered RPC programs. The following listing shows the output of rpcinfo -p on a fairly quiescent NFS server:

$ /usr/sbin/rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100011    1   udp    961  rquotad
    100011    2   udp    961  rquotad
    100011    1   tcp    964  rquotad
    100011    2   tcp    964  rquotad
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100021    1   udp  32770  nlockmgr
    100021    3   udp  32770  nlockmgr
    100021    4   udp  32770  nlockmgr
    100021    1   tcp  35605  nlockmgr
    100021    3   tcp  35605  nlockmgr
    100021    4   tcp  35605  nlockmgr
    100005    1   udp  32772  mountd
    100005    1   tcp  32825  mountd
    100005    2   udp  32772  mountd
    100005    2   tcp  32825  mountd
    100005    3   udp  32772  mountd
    100005    3   tcp  32825  mountd

rpcinfo's output shows the RPC program's ID number, version number, the network protocol it is using, the port number it is using, and an alias name for the program number. The program number and name (first and fifth columns) are taken from the file /etc/rpc, which maps program numbers to program names and also lists aliases for program names. At a bare minimum, to have a functioning NFS server, rpcinfo should list entries for portmapper, nfs, and mountd.

The exportfs command enables you to manipulate the list of available exports, in some cases without editing /etc/exports. It also maintains the list of currently exported file systems in /var/lib/nfs/etab and the kernel's internal table of exported file systems. In fact, the NFS initialization script discussed earlier in this subsection uses exportfs extensively. For example, the exportfs -a command initializes /var/lib/nfs/etab, synchronizing it with the contents of /etc/exports. To add a new export to etab and to the kernel's internal table of NFS exports without editing /etc/exports, use the following syntax:

exportfs -o opts host:dir

opts, host, and dir use the same syntax as that described for /etc/exports earlier in the chapter. Consider the following command:

# exportfs -o async,rw 192.168.0.3:/var/spool/mail

This command exports /var/spool/mail with the async and rw options to the host whose IP address is 192.168.0.3. This invocation is exactly equivalent to the following entry in /etc/exports:

/var/spool/mail 192.168.0.3(async,rw)

A bare exportfs call lists all currently exported file systems; adding the -v option lists currently exported file systems with their mount options:

# exportfs -v
/usr/local      192.168.0.4(ro,wdelay,root_squash)
/home           192.168.0.4(rw,wdelay,root_squash)

To remove an exported file system, use the -u option with exportfs. For example, the following command unexports the /home file system shown in the previous example:

# exportfs -v -u 192.168.0.4:/home
unexporting 192.168.0.4:/home

The showmount command queries the mount daemon, mountd, about the status of the NFS server. Its syntax is:

showmount [-adehv] [host]

Invoked with no options, showmount displays a list of all clients that have mounted file systems from the current host. Specify host to query the mount daemon on that host, where host can be a resolvable DNS hostname or, as in the following example, an IP address:

# showmount 192.168.0.1
Hosts on 192.168.0.1:
192.168.0.0/24
192.168.0.1

Table 12-3 describes the effects of showmount's options.

Table 12-3 Options for the showmount Command

OPTION        DESCRIPTION
-a            Displays client hostnames and mounted directories in host:directory format
-d            Displays only the directories clients have mounted
-e            Displays the NFS server's list of exported file systems
-h            Displays a short usage summary
--no-headers  Disables displaying descriptive headings for showmount's output
-v            Displays showmount's version number

The following examples show the output of showmount executed on an NFS server that has exported /media to the client named bubba.example.com, which has an IP address of 192.168.0.2, using the following entry in /etc/exports:

/media 192.168.0.0/24(rw)

The first command uses the -a option for the most comprehensive output, the second uses the -d option to show only the mounted directories, and the third example uses -e to show the server's export list.

# showmount -a
All mount points on bubba.example.com:
192.168.0.0/24:/media
192.168.0.1:192.168.0.0/24
# showmount -d
Directories on bubba.example.com:
/media
# showmount -e
Export list for bubba.example.com:
/media 192.168.0.0/24

The showmount command is most useful on potential NFS clients because they can identify the directories an NFS server is exporting. By the same token, however, this poses a security risk because, in the absence of entries in /etc/hosts.deny that forbid access to the portmapper, any host can obtain this information from an NFS server.

Using Secure NFS

Although NFSv4 is installed, the default installation does not use NFSv4's security enhancements. You need to set this up manually. To do so, use the following procedure:

1. Enable secure NFS by adding the following line to /etc/sysconfig/nfs (the file does not exist on Fedora Core and RHEL systems by default):

   SECURE_NFS=no

   To use Kerberos or another strong encryption mechanism with NFSv4, you should set this variable to yes.

2. Edit /etc/idmapd.conf, set the Domain option to your domain, and change the Nobody-User and Nobody-Group options to nobody:

   Domain = example.com

   [Mapping]
   Nobody-User = nobody
   Nobody-Group = nobody

   You might not have to make this change because idmapd.conf is usually configured to use the nobody user and group by default.

3. Restart the portmapper and NFS using the service utility:

   # service portmap restart
   # service nfs condrestart

You do not need to start the GSS client and server daemons, rpcgssd and rpcsvcgssd, respectively, unless you wish to use Kerberos or another strong encryption mechanism (in which case there is additional setup to perform that this chapter does not address). Once the daemons are running, you can configure your server as described in the next section. You'll learn how to mount the exports in the section titled "Configuring an NFS Client."

Example NFS Server

This section illustrates a simple but representative NFS server configuration. It exports two file systems, /home and /media. Here are the corresponding entries in /etc/exports:

/home  192.168.0.0/24(rw,async,no_subtree_check)
/media 192.168.0.0/24(ro)

With the exports configured, start (or restart) the daemons (the portmapper is already running) using the initialization scripts:

# service nfs start
Starting NFS services:     [ OK ]
Starting NFS quotas:       [ OK ]
Starting NFS mountd:       [ OK ]
Starting NFS daemon:       [ OK ]
# service nfslock start
Starting NFS file locking services:
Starting NFS statd:        [ OK ]

Next, use rpcinfo -p to make sure the necessary daemons are running, then finish up with showmount -e (or exportfs -v) to list the server's NFS exports:

# rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100011    1   udp    958  rquotad
    100011    2   udp    958  rquotad
    100011    1   tcp    961  rquotad
    100011    2   tcp    961  rquotad
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100021    1   udp  37737  nlockmgr
    100021    3   udp  37737  nlockmgr
    100021    4   udp  37737  nlockmgr
    100021    1   tcp  35981  nlockmgr
    100021    3   tcp  35981  nlockmgr
    100021    4   tcp  35981  nlockmgr
    100005    1   udp    974  mountd
    100005    1   tcp    977  mountd
    100005    2   udp    974  mountd
    100005    2   tcp    977  mountd
    100005    3   udp    974  mountd
    100005    3   tcp    977  mountd
# showmount -e
Export list for bubba.example.com:
/home  192.168.0.0/24
/media 192.168.0.0/24

The final step in preparing an NFS server is to ensure that NFS services are started at boot time. You can use the Services Configuration Tool (Red Hat ➪ System Settings ➪ Server Settings ➪ Services on Fedora Core and Applications ➪ System Settings ➪ Server Settings ➪ Services on RHEL), system-config-services at the command line, or the chkconfig command-line services administration tool. Using chkconfig, execute the following commands:

# chkconfig --level 0123456 nfs off
# chkconfig --level 0123456 nfslock off
# chkconfig --level 345 nfs on
# chkconfig --level 345 nfslock on

The first two commands disable the nfs and nfslock initialization scripts for all run levels. The second two commands reenable them for run levels 3, 4, and 5. After you have confirmed that the NFS daemons are running and that the exports are available, you are ready to configure one or more NFS clients. First, however, for the graphically addicted (or the command-line-challenged), we'll show you how to use Red Hat Linux's graphical tool for administering NFS exports, the NFS Server Configuration tool.

Using the NFS Server Configuration Tool

If you prefer to use graphical tools for system administration, Red Hat Linux includes the NFS Server Configuration tool. It edits the /etc/exports file directly, so you can use the graphical tool and edit the configuration file with a text editor interchangeably. To start the NFS Server Configuration tool, select Red Hat ➪ System Settings ➪ Server Settings ➪ NFS on Fedora Core or Applications ➪ System Settings ➪ Server Settings ➪ NFS on RHEL. You can also start the tool by executing the command system-config-nfs (as root) in a terminal window. Figure 12-2 shows the NFS Server Configuration tool.

Figure 12-2 The NFS Server Configuration dialog box

To add a new export, click the Add button, which opens the Add NFS Share dialog box (see Figure 12-3). On the Basic tab, type the name of the directory you want to export in the Directory text box or use the Browse button to locate the directory to export. Use the Host(s) text box to indicate which hosts are allowed to mount this directory. Click the Read-only radio button (selected by default) or the Read/Write radio button to indicate the basic access permissions for this export. Figure 12-3, for example, shows that /home will be exported read-write to all hosts with an IP address in the range 192.168.0.0/24. Notice that you can use the same syntax for specifying IP addresses in the NFS Server Configuration tool that you can use if you edit /etc/exports directly.

Figure 12-3 The Add NFS Share dialog box

To modify the mount options for your new NFS export, click the General Options tab. On this tab, click the check boxes to enable the corresponding mount options. The possible mount options include:

■ Allow connections from ports 1024 and higher — This option corresponds to the insecure option listed in Table 12-1.

■ Allow insecure file locking — This option corresponds to the insecure_locks option listed in Table 12-1.

■ Disable subtree checking — This option corresponds to the no_subtree_check option listed in Table 12-1.

■ Sync write operations on request — This option (enabled by default) corresponds to the sync option listed in Table 12-1.

■ Force sync of write operations immediately — This option is only available if Sync write operations on request is enabled and corresponds to the no_wdelay option listed in Table 12-1.
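
For reference, here is a sketch of the kind of entry the tool would write to /etc/exports for the settings shown in Figure 12-3 (read-write access to /home for 192.168.0.0/24); the exact option list the tool emits may differ:

/home 192.168.0.0/24(rw,sync)

Because the tool and a text editor both operate on the same file, you can verify its work simply by viewing /etc/exports, or by running exportfs -v after reloading the NFS service.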
