Linux Server Hacks, Volume Two (Part 7)


A good solution is one that allows you to mount NFS shares without using /etc/fstab. Ideally, it could also mount shares dynamically, as they are requested, so that when they're not in use there aren't all of these unused directories hanging around and messing up your ls -l output. In a perfect world, we could centralize the mount configuration file and allow it to be used by all machines that need the service, so that when a user leaves, we just delete the mount from one configuration file and go on our merry way. Happily, you can do just this with the Linux autofs daemon.

The autofs daemon lives in the kernel and reads its configuration from "maps," which can be stored in local files, centralized NFS-mounted files, or directory services such as NIS or LDAP. Of course, there has to be a master configuration file to tell autofs where to find its mounting information. That file is almost always stored in /etc/auto.master. Let's have a look at a simple example configuration file:

    /.autofs    file:/etc/auto.direct    timeout 300
    /mnt        file:/etc/auto.mnt       timeout 60
    /u          yp:homedirs              timeout 300

The main purpose of this file is to let the daemon know where to create its mount points on the local system (detailed in the first column of the file), and then where to find the mounts that should live under each mount point (detailed in the second column). The rest of each line consists of mount options. In this case, the only option is a timeout, in seconds: if a mount is idle for that many seconds, it will be unmounted.
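The auto.direct and auto.mnt maps follow the same key/options/location format we're about to see for home directories. As a sketch, an indirect map such as /etc/auto.mnt might look something like this; the server names nfs1 and nfs2 and the export paths are placeholders, not part of the example environment:

    # /etc/auto.mnt -- each key becomes a directory under /mnt,
    # mounted on demand and unmounted after 60 idle seconds
    iso        -ro,soft        nfs1:/export/iso
    scratch    -rw             nfs2:/export/scratch
    dist       -ro,intr        nfs1:/export/dist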
In our example configuration, starting the autofs service will create three mount points. /u is one of them, and that's where we're going to put our home directories. The data for that mount point comes from the homedirs map on our NIS server. Running ypcat homedirs shows us the following line:

    hdserv:/vol/home:users

The server that houses all of the home directories is called hdserv. When the automounter starts up, it will read the entry in auto.master, contact the NIS server, ask for the homedirs map, get the above information back, and then contact hdserv and ask to mount /vol/home/users. (The colon in the file path above is an NIS-specific requirement: everything under the directory named after the colon will be mounted.) If things complete successfully, everything that lives under /vol/home/users on the server will now appear under /u on the client.

Of course, we don't have to use NIS to store our mount maps; we can store them in an LDAP directory or in a plain-text file on an NFS share. Let's explore this latter option, for those who aren't working with a directory service or don't want to use their directory service for automount maps. The first thing we'll need to alter is our auto.master file, which currently thinks that everything under /u is mounted according to NIS information. Instead, we'll now tell it to look in a file, by replacing the original /u line with this one:

    /u    file:/usr/local/etc/auto.home    timeout 300

This tells the automounter that the file /usr/local/etc/auto.home is the authoritative source for information regarding all things mounted under the local /u directory. In the file on my system are the following lines:

    jonesy    -rw    hdserv:/vol/home/users/&
    matt      -rw    hdserv:/vol/home/users/&

What?! One line for every single user in my environment?! Well, no. I'm doing this to prove a point. In order to hack the automounter, we have to know what these fields mean.

The first field is called a key. The key in the first line is jonesy. Since this is a map for things to be found under /u, this first line's key specifies that this entry defines how to mount /u/jonesy on the local machine.

The second field is a list of mount options, which are pretty self-explanatory. We want all users to be able to mount their directories with read/write access (-rw).

The third field is the location field, which specifies the server from which the automounter should request the mount. In this case, our first entry says that /u/jonesy will be mounted from the server hdserv. The path on the server that will be requested is /vol/home/users/&. The ampersand is a wildcard that will be replaced in the outgoing mount request with the key. Since our key in the first line is jonesy, the location field will be transformed into a request for hdserv:/vol/home/users/jonesy.

Now for the big shortcut. There's an extra wildcard you can use in the key field, which allows you to shorten the configuration for every user's home directory to a single line that looks like this:

    *    -rw    hdserv:/vol/home/users/&

The * means, for all intents and purposes, "anything." Since we already know the ampersand takes the value of the key, we can now see that, in English, this line is really saying, "Whichever directory a user requests under /u, that is the key, so replace the ampersand with the key value and mount that directory from the server." This is wonderful for two reasons. First, my configuration file is a single line. Second, as user home directories are added to and removed from the file server, I don't have to edit this configuration file at all. If a user requests a directory that doesn't exist, he'll get back an error. If a new directory is created on the file server, this configuration line already allows it to be mounted.
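To see the wildcard map do its thing, reload the automounter and then simply reference a directory under /u. This is a minimal sketch assuming the map above and a user named jonesy; the init script name and reload mechanism vary by distribution:

    # Pick up the new map (SysV-style init script assumed)
    /etc/init.d/autofs reload

    # /u appears empty until something under it is requested...
    ls /u

    # ...but referencing a path triggers the NFS mount on demand
    ls /u/jonesy

    # The mount is now live; it will be dropped again once the
    # 300-second timeout from auto.master expires
    mount | grep jonesy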
Hack 58. Keep Filesystems Handy, but Out of Your Way

Use the amd automounter, and some handy defaults, to keep mounted resources available without giving up your own local resources.

The amd automounter isn't the most ubiquitous production service I've ever seen, but it can certainly be a valuable tool for administrators in the setup of their own desktop machines. Why? Because it gives you the power to easily and conveniently access any NFS share in your environment, and the default settings for amd put all of them under their own directory, out of the way, without you having to do much more than simply start the service.

Here's an example of how useful this can be. I work in an environment in which the /usr/local directories on our production machines are mounted from a central NFS server. This is great, because if we need to build software for our servers that isn't supplied by the distribution vendor, we can just build it from source in that tree, and all of the servers can access it as soon as it's built. However, occasionally we receive support tickets saying that something is acting strangely or isn't working. Most times, the issue is environmental: the user is getting at the wrong binary because /usr/local is not in her PATH, or something simple like that. Sometimes, though, the problem is ours, and we need to troubleshoot it. The most convenient way to do that is just to mount the shared /usr/local to our desktops and use it in place of our own. For me, however, this is suboptimal, because I like to use my system's /usr/local to test new software. So I need another way to mount the shared /usr/local without conflicting with my own /usr/local.

This is where amd comes in, as it allows me to get at all of the shares I need, on the fly, without interfering with my local setup. Here's an example of how this works. I know that the server that serves up the /usr/local partition is named fs, and I know that the filesystem mounted as /usr/local on the clients is actually called /linux/local on the server. With a properly configured amd, I just run the following command to mount the shared directory:

    $ cd /net/fs/linux/local

There I am, ready to test whatever needs to be tested, having done next to no configuration whatsoever!
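If you're curious about what amd is doing on your behalf, the am-utils package also includes amq, a small query tool. A quick sketch of the sort of poking around it allows (exact output varies by version):

    # List the mount points amd manages and what is mounted under them
    amq

    # Show more detail about the filesystems amd currently has mounted
    amq -m

    # Ask amd to unmount an entry now instead of waiting for it to time out
    amq -u /net/fs/linux/local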
The funny thing is, I've run into lots of administrators who don't use amd and didn't know that it performed this particular function. This is because the amd mount configuration is a little bit cryptic. To understand it, let's take a look at how amd is configured. Soon you'll be mounting remote shares with ease.

6.4.1. amd Configuration in a Nutshell

The main amd configuration file is almost always /etc/amd.conf. This file sets up default behaviors for the daemon and defines other configuration files that are authoritative for each configured mount point. Here's a quick look at a totally untouched configuration file, as supplied with the Fedora Core 4 am-utils package, which supplies the amd automounter:

    [ global ]
    normalize_hostnames = no
    print_pid = yes
    pid_file = /var/run/amd.pid
    restart_mounts = yes
    auto_dir = /.automount
    #log_file = /var/log/amd
    log_file = syslog
    log_options = all
    #debug_options = all
    plock = no
    selectors_on_default = yes
    print_version = no
    # set map_type to "nis" for NIS maps, or comment it out to search for all
    # types
    map_type = file
    search_path = /etc
    browsable_dirs = yes
    show_statfs_entries = no
    fully_qualified_hosts = no
    cache_duration = 300

    # DEFINE AN AMD MOUNT POINT
    [ /net ]
    map_name = amd.net
    map_type = file

The options in the [global] section specify behaviors of the daemon itself and rarely need changing. You'll notice that search_path is set to /etc, which means amd will look for mount maps under the /etc directory. You'll also see that auto_dir is set to /.automount. This is where amd will mount the directories you request. Since amd cannot perform mounts "in place," directly under the mount point you define, it actually performs all mounts under the auto_dir directory, and then returns a symlink to that directory in response to the incoming mount requests. We'll explore that more after we look at the configuration for the [/net] mount point.

From looking at the above configuration file, we can tell that the file that tells amd how to mount things under /net is amd.net. Since the search_path option in the [global] section is set to /etc, it'll really be looking for /etc/amd.net at startup time. Here are the contents of that file:

    /defaults    fs:=${autodir}/${rhost}/root/${rfs};opts:=nosuid,nodev
    *            rhost:=${key};type:=host;rfs:=/

Eyes glazing over? Well then, let's translate this into English. The first entry is /defaults, which is there to define the symlink that gets returned in response to requests for directories under [/net] in amd.conf. Here's a quick tour of the variables being used here:

• ${autodir} gets its value from the auto_dir setting in amd.conf, which in this case will be /.automount.
• ${rhost} is the name of the remote file server, which in our example is fs. It is followed closely by /root, which is really just a placeholder for / on the remote host.
• ${rfs} is the actual path under the / directory on the remote host that gets mounted.

Also note that fs: on the /defaults line specifies the local location where the remote filesystem is to be mounted. It's not the name of our remote file server.

In reality, there are a couple of other variables in play behind the scenes that help resolve the values of these variables, but this is enough to discern what's going on with our automounter. You should now be able to figure out what was really happening in our simple cd command earlier in this hack. Because of the configuration settings in amd.conf and amd.net, when I ran the cd command earlier, I was actually requesting a mount of fs:/linux/local under the directory /net/fs/linux/local. amd, behind my back, replaced that directory with a symlink to /.automount/fs/root/linux/local, and that's where I really wound up. Running pwd with no options will say you're in /net/fs/linux/local, but there's a quick way to tell where you really are, taking symlinks into account. Look at the output from these two pwd commands:

    $ pwd
    /net/fs/linux/local
    $ pwd -P
    /.automount/fs/root/linux/local

The -P option reveals your true location.

So, now that we have some clue as to how the amd.net /defaults entry works, we need to figure out exactly why our wonderful hack works. After all, we haven't yet told amd to explicitly mount anything! Here's the entry in /etc/amd.net that makes this functionality possible:

    *    rhost:=${key};type:=host;rfs:=/

The * wildcard entry says to attempt to mount any requested directory, rather than specifying one explicitly. When you request a mount, the part of the path after /net defines the host and path to mount. If amd is able to perform the mount, it is served up to the user on the client host. The rfs:=/ bit means that amd should request whatever directory is requested from the server, relative to the root directory of that server. So, if we set rfs:=/mnt and then request /linux/local, the request will be for fs:/mnt/linux/local.
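If you'd rather give a share a fixed, friendlier path than the /net/server/path form, you can define an additional mount point with its own map. The following is only a sketch: it reuses the fs:/linux/local share from this hack, and the [/software] mount point, the amd.software map name, and the options are assumptions you'd adapt to your own site:

    # Added to /etc/amd.conf
    [ /software ]
    map_name = amd.software
    map_type = file

    # Contents of /etc/amd.software
    /defaults    opts:=rw,nosuid,nodev
    local        type:=nfs;rhost:=fs;rfs:=/linux/local

    # After restarting amd, "cd /software/local" mounts fs:/linux/local on demand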
Hack 59. Synchronize root Environments with rsync

When you're managing multiple servers with local root logins, rsync provides an easy way to synchronize the root environments across your systems.

Synchronizing files between multiple computer systems is a classic problem. Say you've made some improvements to a file on one machine, and you would like to propagate it to others. What's the best way? Individual users often encounter this problem when trying to work on files on multiple computer systems, but it's even more common for system administrators, who tend to use many different computer systems in the course of their daily activities.

rsync is a popular and well-known remote file and directory synchronization program that enables you to ensure that specified files and directories are identical on multiple systems. Some files that you may want to include for synchronization are:

• .profile
• .bash_profile
• .bashrc
• .cshrc
• .login
• .logout

Choose one server as your source server (referred to as srchost in the examples in this hack). This is the server where you will maintain the master copies of the files that you want to synchronize across multiple systems' root environments.

After selecting this system, you'll add a stanza to the rsync configuration file (/etc/rsyncd.conf) containing, at a minimum, options for specifying the path to the directory that you want to synchronize (path), preventing remote clients from uploading files to the source server (read only), the user ID that you want synchronization to be performed as (uid), a list of files and directories that you want to exclude from synchronization (exclude), and the list of files that you want to synchronize (include). A sample stanza will look like this:

    [rootenv]
    path = /
    # default uid is nobody
    uid = root
    read only = yes
    exclude = * .*
    include = .bashrc .bash_profile .aliases
    hosts allow = 192.168.1.
    hosts deny = *

Then add the following command to your shell's login command file (.profile, .bash_profile, .login, etc.) on the source host:

    rsync -qa rsync://srchost/rootenv /

Next, you'll need to manually synchronize the files for the first time. After that, they will automatically be synchronized whenever your shell's login command file is executed. On each server you wish to synchronize, run this rsync command on the host as root:

    rsync -qa rsync://srchost/rootenv /

For convenience, add the following alias to your .bashrc file, or add an equivalent statement to the command file for whatever shell you're using (.cshrc, .kshrc, etc.):

    alias envsync='rsync -qa rsync://srchost/rootenv / && source .bashrc'

By running the envsync alias, you can immediately sync up and source your rc files.
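If a target host doesn't seem to pick up the files, two quick checks from that host will usually tell you why. Both commands are standard rsync usage; srchost and the rootenv module name are the ones used above:

    # List the modules the rsync daemon on srchost is exporting
    rsync srchost::

    # Dry-run the sync to see what would be copied, without changing
    # anything on the local system
    rsync -avn rsync://srchost/rootenv /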
To increase security, you can use the /etc/hosts.allow and /etc/hosts.deny files to ensure that only specified hosts can use rsync on your systems [Hack #64].

6.5.1. See Also

• man rsync

Lance Tost

Hack 60. Share Files Across Platforms Using Samba

Linux, Windows, and Mac OS X all speak SMB/CIFS, which makes Samba a one-stop shop for all of their resource-sharing needs.

It used to be that if you wanted to share resources in a mixed-platform environment, you needed NFS for your Unix machines, AppleTalk for your Mac crowd, and Samba or a Windows file and print server to handle the Windows users. Nowadays, all three platforms can mount file shares and use printing and other resources through SMB/CIFS, and Samba can serve them all.

Samba can be configured in a seemingly endless number of ways. It can share just files, or printer and application resources as well. You can authenticate users for some or all of the services using local files, an LDAP directory, or a Windows domain server. This makes Samba an extremely powerful, flexible tool in the fight to standardize on a single daemon to serve all of the hosts in your network.

At this point, you may be wondering why you would ever need to use Samba with a Linux client, since Linux clients can just use NFS. Well, that's true, but whether that's what you really want to do is another question. Some sites have users in engineering or development environments who maintain their own laptops and workstations. These folks have the local root password on their Linux machines. One mistyped NFS export line, or a chink in the armor of your NFS daemon's security, and you could be inadvertently allowing remote, untrusted users free rein on the shares they can access. Samba can be a great solution in cases like this, because it allows you to grant those users access to what they need without sacrificing the security of your environment.

This is possible because Samba can be (and generally is, in my experience) configured to ask for a username and password before allowing a user to mount anything. Whichever user supplies the username and password to perform the mount operation is the user whose permissions are enforced on the server. Thus, if a user becomes root on his local machine it needn't concern you, because local root access is trumped by the credentials of the user who performed the mount.

6.6.1. Setting Up Simple Samba Shares

Technically, the Samba service consists of two daemons, smbd and nmbd. The smbd daemon is the one that handles the SMB file- and print-sharing protocol. When a client requests a shared directory from the server, it's talking to smbd. The nmbd daemon is in charge of answering NetBIOS over IP name service requests. When a Windows client broadcasts to browse Windows shares on the network, nmbd replies to those broadcasts.

The configuration file for the Samba service is /etc/samba/smb.conf on both Debian and Red Hat systems. If you have a tool called swat installed, you can use it to help you generate a working configuration without ever opening vi: just uncomment the swat line in /etc/inetd.conf on Debian systems, or edit /etc/xinetd.d/swat on Red Hat and other systems, changing the disable key's value to no. Once that's done, restart your inetd or xinetd service, and you should be able to get to swat's graphical interface by pointing a browser at http://localhost:901. Many servers are installed without swat, though, and for those systems editing the configuration file works just fine.

Let's go over the config file for a simple setup that gives access to file and printer shares to authenticated users. The file is broken down into sections. The first section, which is always called [global], is the section that tells Samba what its "personality" should be on the network. There are a myriad of possibilities here, since Samba can act as a primary or backup domain controller in a Windows domain, can use various printing subsystem interfaces and various authentication backends, and can provide various different services to clients. Let's take a look at a simple [global] section:

    [global]
    workgroup = PVT
    server string = apollo
    hosts allow = 192.168.42. 127.0.0.
    printcap name = CUPS
    load printers = yes
    printing = CUPS
    log file = /var/log/samba/log.smbd
    max log size = 50
    security = user
    smb passwd file = /etc/samba/smbpasswd
    socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
    interfaces = eth0
    wins support = yes
    dns proxy = no

Much of this is self-explanatory. This excerpt is taken from a working configuration on a private SOHO network, which is evidenced by the hosts allow values. This option can take values in many different formats, and it uses the same syntax as the /etc/hosts.allow and /etc/hosts.deny files (see hosts_access(8) and "Allow or Deny Access by IP Address" [Hack #64]). Here, it allows access from the local host and any host whose IP address matches the pattern 192.168.42.*. Note that a netmask is not given or assumed; it's a simple pattern match on the IP address of the connecting host. Note also that this setting can be removed from the [global] section and placed in each subsection. If it exists in the [global] section, however, it will supersede any settings in other areas of the configuration file.

In this configuration, I've opted to use CUPS as the printing mechanism. There's a CUPS server on the local machine where the Samba server lives, so Samba users will be able to see all the printers that CUPS knows about when they browse the PVT workgroup, and use them (more on this in a minute).

The server string setting determines the server name users will see when the host shows up in a Network Neighborhood listing, or in other SMB network browsing software. I generally set this to the actual hostname of the server if it's practical, so that if users need to manually request something from the Samba server, they don't try to address my Linux Samba server as "Samba Server."

The other important setting here is security. If you're happy with using the /etc/samba/smbpasswd file for authentication, this setting is fine. There are many other ways to configure authentication, however, so you should definitely read the fine (and copious) Samba documentation to see how it can be integrated with just about any authentication backend. Samba includes native support for LDAP and PAM authentication. There are PAM modules available to sync Unix and Samba passwords, as well as to authenticate to remote SMB servers.

We're starting with a simple password file in our configuration. Included with the Samba package is a tool called mksmbpasswd.sh, which will add users to the password file en masse so you don't have to do it by hand. However, it cannot migrate Unix passwords to the file, because Unix passwords are stored as a one-way hash that doesn't match the hash the Windows clients send to Samba. To change the Samba password for a user, run the following command on the server:

    # smbpasswd username

This will prompt you for the new password, and then ask you to confirm it by typing it again. If a user ran the command, she'd be prompted for her current Samba password first. If you want to manually add a user to the password file, you can use the -a flag, like this:

    # smbpasswd -a username

This will also prompt for the password that should be assigned to the user.
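For more than a handful of accounts, the batch route looks something like the sketch below. The mksmbpasswd.sh script ships with many Samba packages, but its name and location vary by distribution, so treat this as a starting point rather than a recipe:

    # Build an smbpasswd file with an entry for every existing Unix user
    cat /etc/passwd | mksmbpasswd.sh > /etc/samba/smbpasswd
    chmod 600 /etc/samba/smbpasswd

    # The generated entries have no passwords yet, so each account still
    # needs one set (by root, or by the user) before it can connect
    smbpasswd jonesy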
Now that we have users, let's see what they have access to by looking at the sections for each share. In our configuration, users can access their home directories, all printers available through the local CUPS server, and a public share for users to dabble in. Let's look at the home directory configuration first:

    [homes]
    comment = Home Directories
    browseable = no
    writable = yes

The [homes] section, like the [global] section, is recognized by the server as a "special" section. Without any more settings than these few minimal ones, Samba will, by default, take the username given during a client connection and look it up in the local password file. If it exists, and the correct password has been provided, Samba clones the [homes] section on the fly, creating a new share named after the user. Since we didn't use a path setting, the actual directory that gets served up is the home directory of the user, as supplied by the local Linux system. However, since we've set browseable = no, users will only be able to see their own home directories in the list of available shares, rather than those of every other user on the system.

Here's the printer share section:

    [printers]
    comment = All Printers
    path = /var/spool/samba
    browseable = yes
    public = yes
    guest ok = yes
    writable = no
    printable = yes
    use client driver = yes

This section is also a "special" section, which works much like the [homes] special section. It clones the section to create a share for the printer being requested by the user, with the settings specified here. We've made printers browseable, so that users know which printers are available. This configuration will let any authenticated user view and print to any printer known to Samba.

Finally, here's our public space, which anyone can read or write to:

    [tmp]
    comment = Temporary file space
    path = /tmp
    read only = no
    public = yes

This space will show up in a browse listing as "tmp on Apollo," and it is accessible in read/write mode by anyone authenticated to the server. This is useful in our situation, since users cannot mount and read from each other's home directories. This space can be mounted by anyone, so it provides a way for users to easily exchange files without, say, gumming up your email server.
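Before starting the daemons, it's worth letting Samba check your work. The testparm utility ships with Samba and reads /etc/samba/smb.conf by default:

    # Parse smb.conf, complain about unknown parameters, and dump the
    # share definitions Samba will actually use
    testparm

    # Check a candidate file in another location before copying it into place
    testparm /tmp/smb.conf.new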
Once your smb.conf file is in place, start up your smb service and give it a quick test. You can do this by logging into a Linux client host and using a command like this one:

    $ smbmount '//apollo/jonesy' ~/foo/ -o username=jonesy,workgroup=PVT

This command will mount my home directory on Apollo to ~/foo/ on the local machine. I've passed along my username and the workgroup name, and the command will prompt for my password and happily perform the mount. If it doesn't, check your logfiles for clues as to what went wrong.

You can also log in to a Windows client and see if your new Samba server shows up in your Network Neighborhood (or My Network Places under Windows XP). If things don't go well, another command you can try is smbclient. Run the following command as a normal user:

    $ smbclient -L apollo

On my test machine, the output looks like this:

    Domain=[APOLLO] OS=[Unix] Server=[Samba 3.0.14a-2]

        Sharename      Type      Comment
        tmp            Disk      Temporary file space
        IPC$           IPC       IPC Service (Samba Server)
        ADMIN$         IPC       IPC Service (Samba Server)
        MP780          Printer   MP780
        hp4m           Printer   HP LaserJet 4m
        jonesy         Disk      Home Directories

    Domain=[APOLLO] OS=[Unix] Server=[Samba 3.0.14a-2]

        Server         Comment

        Workgroup      Master
        PVT            APOLLO

This list shows the services available to me from the Samba server, and I can also use it to confirm that I'm using the correct workgroup name.
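If a share passes these tests and you want it mounted at boot rather than by hand, an /etc/fstab entry on the client is one option. This is a sketch only: it reuses the apollo server and jonesy share from this hack, uses the older smbfs filesystem type (newer kernels use cifs and mount.cifs instead), and keeps the password out of fstab by pointing at a root-only credentials file:

    # /etc/fstab entry on the client (all on one line)
    //apollo/jonesy  /mnt/jonesy  smbfs  credentials=/etc/samba/cred.jonesy,workgroup=PVT  0 0

    # /etc/samba/cred.jonesy -- make it mode 600, owned by root
    username = jonesy
    password = secret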
Hack 61. Quick and Dirty NAS

Combining LVM, NFS, and Samba on new file servers is a quick and easy solution when you need more shared disk resources.

Network Attached Storage (NAS) and Storage Area Networks (SANs) aren't making as many people rich nowadays as they did during the dot-com boom, but they're still important concepts for any system administrator. SANs depend on high-speed disk and network interfaces, and they're responsible for the increasing popularity of other magic acronyms such as iSCSI (Internet Small Computer Systems Interface) and AoE (ATA over Ethernet), which are cool and upcoming technologies for transferring block-oriented disk data over fast Ethernet interfaces. NAS, on the other hand, is quick and easy to set up: it just involves hanging new boxes with shared, exported storage on your network.

"Disk use will always expand to fill all available storage" is one of the immutable laws of computing. It's sad that it's as true today, when you can pick up a 400-GB disk for just over $200, as it was when I got my CS degree and the entire department ran on some DEC-10s that together had a whopping 900 MB of storage (yes, […]

[…] any new drive that I'm swapping in without having to check. By default, I then set up Linux software RAID and LVM so that the two drives on the primary IDE interface are in a logical volume group [Hack #47]. On systems with 300-GB disks, this gives me 600 GB of reliable, mirrored storage to provide to users. If you're less nervous than I am, you can skip the RAID step and just use LVM to deliver […] step discussed in that hack.

6.7.4. Configuring System Services

Fine-tuning the services running on the soon-to-be NAS box is an important step. Turn off any services you don't need [Hack #63]. The core services you will need are an NFS server, a Samba server, a distributed authentication mechanism, and NTP. It's always a good idea to run an NTP server [Hack #22] on networked storage systems to keep […]
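Pulled together, the skeleton of the quick-and-dirty setup looks roughly like the sketch below. The device names (/dev/hda3 and /dev/hdb1), the volume group name, and the export network are placeholders, and the software RAID step is left out for brevity; the hacks listed in the See Also section that follows cover the complete, safer version:

    # Turn the spare partitions into LVM physical volumes and pool them
    pvcreate /dev/hda3 /dev/hdb1
    vgcreate nasvg /dev/hda3 /dev/hdb1

    # Carve out a volume, put a journaling filesystem on it, and mount it
    lvcreate -L 500G -n shared nasvg
    mkfs -t ext3 /dev/nasvg/shared
    mkdir -p /shared
    mount /dev/nasvg/shared /shared

    # Export it over NFS and reload the NFS server
    echo '/shared 192.168.1.0/255.255.255.0(rw,sync)' >> /etc/exports
    exportfs -ra

    # Then share the same tree over Samba by adding a stanza to smb.conf:
    # [shared]
    #     path = /shared
    #     read only = no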
6.7.7. See Also

• "Combine LVM and Software RAID" [Hack #47]
• "Centralize Resources Using NFS" [Hack #56]
• "Share Files Across Platforms Using Samba" [Hack #60]
• "Reduce Restart Times with Journaling Filesystems" [Hack #70]

Hack 62. Share Files and Directories over the Web

WebDAV is a powerful, platform-independent mechanism for sharing files over the Web without resorting to standard networked filesystems. […]

[…] http://dav.sourceforge.net

Jon Fox

Chapter 7. Security

Section 7.1. Hacks 63-68: Introduction
Hack 63. Increase Security by Disabling Unnecessary Services
Hack 64. Allow or Deny Access by IP Address
Hack 65. Detect Network Intruders with snort
Hack 66. Tame Tripwire
Hack 67. Verify Filesystem Integrity with Afick
Hack 68. Check for Rootkits and Other Attacks

7.1. Hacks 63-68: Introduction

We've come a long way […]

[…] information:

    # kill -HUP PID

Many Linux distributions provide tools that simplify managing rc scripts and xinetd configuration. For example, Red Hat Linux provides chkconfig, while SUSE Linux provides this functionality within its YaST administration tool. Of course, the specific services each system requires depends on what you're using it for. However, if you're setting up an out-of-the-box Linux distribution, you will often want to deactivate default services such as a web server, an FTP server, a TFTP server, NFS support, and so on.

7.2.4. Summary

Running […]

[…] directory on almost every Linux system. Like many configuration files found within Linux, they can appear daunting at first glance, but with a little help, setting them up is actually quite easy.

7.3.1. Protecting Your Machine with hosts.allow and hosts.deny

Before we jump into writing complex network access rules, we need to spend a few moments reviewing the way the Linux access control software […]

[…]

    # List of DNS servers on your network
    var DNS_SERVERS $HOME_NET
    # List of SMTP servers on your network
    var SMTP_SERVERS $HOME_NET
    # List of web servers on your network
    var HTTP_SERVERS $HOME_NET
    # List of sql servers on your network
    var SQL_SERVERS $HOME_NET
    # List of telnet servers on your network
    var TELNET_SERVERS $HOME_NET
    # List of snmp servers on your network
    var SNMP_SERVERS $HOME_NET

Next, copy the […]

[…]

    09/15-04:49:32.299135 70.48.80.189:6881 -> 192.168.6.64:52757
    TCP TTL:109 TOS:0x0 ID:53803 IpLen:20 DgmLen:1432 DF
    ***AP*** Seq: 0x1869E9D1  Ack: 0x18F60ED8  Win: 0xFFFF  TcpLen: 32
    TCP Options (3) => NOP NOP TS: 71969459 4700245
    [Xref => http://www.whitehats.com/info/IDS291]

Better to know about attempted attacks than to be blissfully unaware! Of course, whether or not you want to monitor your network for these […]

[…] notification tools on both the Linux and Windows platforms. In today's connected world, you can't really afford not to firewall your hosts and scan for clever folks who can still punch through your defenses. In the open source world, there's no better tool for the latter task than snort.

7.4.6. See Also

• "Monitor Network Traffic with MRTG" [Hack #79]
• Network Security Hacks, by Andrew Lockhart (O'Reilly)

[…]

    /usr/sbin/twadmin --create-polfile -S site.key /etc/tripwire/twpol.txt

7.5.6. Tripwire Tips

You should follow a few simple policies and procedures in order to keep your Tripwire installation secure. First, don't leave the twpol.txt and twcfg.txt files that you used to generate your Tripwire database on your hard drive. Instead, store them somewhere off the server. If your system's security is compromised, as long […]
