Linux Server Hacks Volume Two, Part 9
A more recent development in the world of rstatd data-collection tools is jperfmeter, which is a Java-based, cross-platform monitor with a more polished interface and a graphical configuration tool. It does not yet (at the time of writing) support thresholds, and it's missing a few other finer details, but it's a brand-new tool, so I'm sure it will get there at some point. There are other tools available for remote server statistics monitoring, but you may also want to look into building your own, using either the Rstat::Client Perl module or the RPC or rstat interfaces for other languages, such as Python, Java, or C/C++.

Hack 81. Remotely Monitor and Configure a Variety of Networked Equipment

Using SNMP, you can collect information about almost any device attached to your network.

For everything that has a network interface, chances are there's some form of Simple Network Management Protocol (SNMP) daemon that can run on it. Over the years, SNMP daemons have been added to everything from environmental sensors to UPSs to soda vending machines. The point of all of this is to be able to remotely access as much information about the host as humanly possible. As an added bonus, proper configuration can allow administrators to change values on the host remotely as well.

SNMP daemon packages are available for all of the widely used distributions, along with possibly separate packages containing a suite of SNMP command-line tools. You might have come across the snmpwalk or snmpget commands before in your travels, or you might've seen similarly named functions in scripting languages such as Perl and PHP.
Let's have a look at a small bit of a "walk" on an SNMP-enabled Linux host and use it to explain how this works:

$ snmpwalk -v2c -c public livid interfaces
IF-MIB::ifNumber.0 = INTEGER: 4
IF-MIB::ifIndex.1 = INTEGER: 1
IF-MIB::ifIndex.2 = INTEGER: 2
IF-MIB::ifIndex.3 = INTEGER: 3
IF-MIB::ifIndex.4 = INTEGER: 4
IF-MIB::ifDescr.1 = STRING: lo
IF-MIB::ifDescr.2 = STRING: eth0
IF-MIB::ifDescr.3 = STRING: eth1
IF-MIB::ifDescr.4 = STRING: sit0
IF-MIB::ifType.1 = INTEGER: softwareLoopback(24)
IF-MIB::ifType.2 = INTEGER: ethernetCsmacd(6)
IF-MIB::ifType.3 = INTEGER: ethernetCsmacd(6)
IF-MIB::ifType.4 = INTEGER: tunnel(131)
IF-MIB::ifPhysAddress.1 = STRING:
IF-MIB::ifPhysAddress.2 = STRING: 0:a0:cc:e7:24:a0
IF-MIB::ifPhysAddress.3 = STRING: 0:c:f1:d6:3f:32
IF-MIB::ifPhysAddress.4 = STRING: 0:0:0:0:3f:32
IF-MIB::ifAdminStatus.1 = INTEGER: up(1)
IF-MIB::ifAdminStatus.2 = INTEGER: up(1)
IF-MIB::ifAdminStatus.3 = INTEGER: down(2)
IF-MIB::ifAdminStatus.4 = INTEGER: down(2)
IF-MIB::ifOperStatus.1 = INTEGER: up(1)
IF-MIB::ifOperStatus.2 = INTEGER: up(1)
IF-MIB::ifOperStatus.3 = INTEGER: down(2)
IF-MIB::ifOperStatus.4 = INTEGER: down(2)

As you can see, there's a good bit of information here, and I've cut out the bits that aren't important right now. Furthermore, this is only one part of one SNMP "tree" (the "interfaces" tree). Under that tree lie settings and status information for each interface on the system. If you peruse the list, you'll see separate values for each interface corresponding to things like the interface description (the name the host calls the interface), the physical address, and the interface type.

But what is this "tree" I'm speaking of? SNMP data is actually organized much like LDAP data, or DNS data, or even your Linux system's file hierarchy: they're all trees! Our output above has hidden some of the detail from us, however.
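Output in this name.index = TYPE: value shape is easy to post-process in any scripting language. As a quick illustration (my own sketch, not part of the hack; the parsing rules are simply inferred from the sample output above), the interface table can be folded into a per-interface dictionary:

```python
# Hypothetical post-processing of the short-form walk above: fold
# "IF-MIB::ifDescr.2 = STRING: eth0"-style lines into one dict per
# interface index.

SAMPLE = """\
IF-MIB::ifNumber.0 = INTEGER: 4
IF-MIB::ifDescr.1 = STRING: lo
IF-MIB::ifDescr.2 = STRING: eth0
IF-MIB::ifOperStatus.2 = INTEGER: up(1)
IF-MIB::ifOperStatus.3 = INTEGER: down(2)
"""

def parse_walk(text):
    interfaces = {}
    for line in text.splitlines():
        left, _, right = line.partition(" = ")
        name = left.split("::", 1)[1]           # e.g. "ifDescr.2"
        field, _, index = name.partition(".")   # -> ("ifDescr", "2")
        value = right.split(": ", 1)[1]         # drop the "STRING:" type tag
        interfaces.setdefault(index, {})[field] = value
    return interfaces

ifs = parse_walk(SAMPLE)
print(ifs["2"])   # {'ifDescr': 'eth0', 'ifOperStatus': 'up(1)'}
```

For simple tables like this one the index is the final dotted component; multi-part indexes (as in the TCP connection table later in this hack) need more care.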
To see the actual path in the tree for each value returned, we'll add an option to our earlier command:

$ snmpwalk -Of -v2c -c public livid interfaces
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifNumber.0 = INTEGER: 4
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifIndex.1 = INTEGER: 1
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifIndex.2 = INTEGER: 2
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifIndex.3 = INTEGER: 3
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifIndex.4 = INTEGER: 4
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifDescr.1 = STRING: lo
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifDescr.2 = STRING: eth0
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifDescr.3 = STRING: eth1
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifDescr.4 = STRING: sit0
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifType.1 = INTEGER: softwareLoopback(24)
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifType.2 = INTEGER: ethernetCsmacd(6)
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifType.3 = INTEGER: ethernetCsmacd(6)
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifType.4 = INTEGER: tunnel(131)
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifPhysAddress.1 = STRING:
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifPhysAddress.2 = STRING: 0:a0:cc:e7:24:a0
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifPhysAddress.3 = STRING: 0:c:f1:d6:3f:32
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifPhysAddress.4 = STRING: 0:0:0:0:3f:32
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifAdminStatus.1 = INTEGER: up(1)
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifAdminStatus.2 = INTEGER: up(1)
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifAdminStatus.3 = INTEGER: down(2)
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifAdminStatus.4 = INTEGER: down(2)
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifOperStatus.1 = INTEGER: up(1)
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifOperStatus.2 = INTEGER: up(1)
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifOperStatus.3 = INTEGER: down(2)
.iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifOperStatus.4 = INTEGER: down(2)

Now we can clearly see that the "interfaces" tree sits underneath all of those other trees. If you replaced the dot separators with forward slashes, it would look very much like a directory hierarchy, with the value after the last dot being the filename and everything after the equals sign being the content of the file. Now this should start to look a little more familiar: more like the output of a find command than something completely foreign (I hope).

A great way to get acquainted with an SNMP-enabled (or "managed") device is to simply walk the entire tree for that device. You can do this by pointing the snmpwalk command at the device without specifying a tree, as we've done so far. Be sure to redirect the output to a file, though, because there's far too much data to digest in one sitting! To do this, use a command like the following:

$ snmpwalk -Ov -v2c -c public livid > livid.walk

You can run the same command against switches, routers, firewalls, and even some specialized devices such as door and window contact sensors and environmental sensors that measure the heat and humidity in your machine room.

9.5.1. The Code

Even just sticking to Linux boxes offers a wealth of information. I've written a script in PHP, runnable from a command line, that gathers basic information and reports on listening TCP ports, using only SNMP.
Here's the script:

#!/usr/bin/php
<?php
snmp_set_quick_print(1);
$string = "public";
$host = "livid";
check_snmp($host);
spitinfo($host);

// See if this box is running SNMP before we throw requests at it.
function check_snmp($box)
{
    $string = "public";
    $infocheck = @snmpget("$box", "$string", "system.sysDescr.0");
    if (! $infocheck) {
        die("SNMP doesn't appear to be running on $box");
    } else {
        return $infocheck;
    }
}

// Retrieves and displays SNMP data.
function spitinfo($host)
{
    $string = "public";
    $hostinfo = @snmpget("$host", "$string", "system.sysDescr.0");
    list($k) = array(split(" ", $hostinfo));
    $os = $k[0];
    $hostname = @snmpget("$host", "$string", "system.sysName.0");
    $user = @snmpget("$host", "$string", "system.sysContact.0");
    $location = @snmpget("$host", "$string", "system.sysLocation.0");
    $macaddr = @snmpget("$host", "$string", "interfaces.ifTable.ifEntry.ifPhysAddress.2");
    $ethstatus = @snmpget("$host", "$string", "interfaces.ifTable.ifEntry.ifOperStatus.2");
    $ipfwd = @snmpget("$host", "$string", "ip.ipForwarding.0");
    $ipaddr = @gethostbyname("$host");
    $info = array("Hostname:" => "$hostname", "Contact:" => "$user",
        "Location:" => "$location", "OS:" => "$os",
        "MAC Address:" => "$macaddr", "IP Address:" => "$ipaddr",
        "Network Status" => "$ethstatus", "Forwarding:" => "$ipfwd");
    print "$host\n";
    tabdata($info);
    print "\nTCP Port Summary\n";
    snmp_portscan($hostname);
}

function tabdata($data)
{
    foreach ($data as $label => $value) {
        if ($label) {
            print "$label\t";
        } else {
            print "Not Available";
        }
        if ($value) {
            print "$value\n";
        } else {
            print "Not Available";
        }
    }
}

function snmp_portscan($target)
{
    $listen_ports = snmpwalk("$target", "public", ".1.3.6.1.2.1.6.13.1.3.0.0.0.0");
    foreach ($listen_ports as $key => $value) {
        print "TCP Port $value (" . getservbyport($value, 'tcp') . ") listening\n";
    }
}
?>

9.5.2. Running the Code

Save this script to a file named report.php, and make it executable (chmod 775 report.php). Once that's done, run it by issuing the command ./report.php.
I've hard-coded a value for the target host in this script to shorten things up a bit, but you'd more likely want to feed a host to the script as a command-line argument, or have it read a file containing a list of hosts to prod for data. You'll also probably want to scan for the number of interfaces, and do other cool stuff that I've left out here to save space. Here's the output when run against my Debian test system:

Hostname:       livid
Contact:        jonesy (jonesy@linuxlaboratory.org)
Location:       Upstairs office
OS:             Linux
MAC Address:    0:a0:cc:e7:24:a0
IP Address:     192.168.42.44
Network Status  up
Forwarding:     notForwarding

TCP Port Summary
TCP Port 80 (http) listening
TCP Port 111 (sunrpc) listening
TCP Port 199 (smux) listening
TCP Port 631 (ipp) listening
TCP Port 649 ( ) listening
TCP Port 2049 (nfs) listening
TCP Port 8000 ( ) listening
TCP Port 32768 ( ) listening

You'll notice in the script that I've used numeric values to search for in SNMP. This is because, as in many other technologies, the human-readable text is actually mapped from numbers, which are what the machines use under the covers. Each record returned in an snmpwalk has a numeric object identifier, or OID. The client uses the Management Information Base (MIB) files that come with the Net-SNMP distribution to map the numeric OIDs to names. In a script, however, speed will be of the essence, so you'll want to skip that mapping operation and just get at the data.

You'll also notice that I've used SNMP to do what is normally done with a port scanner, or with a bunch of calls to some function like (in PHP) fsockopen. I could've used function calls here, but it would have been quite slow, because we'd be knocking on every port in a range and awaiting a response to see which ones are open. Using SNMP, we're just requesting the host's list of which ports are open. No guessing, no knocking, and much, much faster.
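The port-summary trick works because each row of the TCP connection table is indexed by localAddr.localPort.remoteAddr.remotePort, so the listening port can be read straight out of the OID itself. Here's a hedged Python sketch of that decoding (the column OID and index layout follow the standard TCP-MIB; treat this as an illustration, not part of the original hack):

```python
# Sketch: pull the local address and port back out of a numeric OID for
# a row of the TCP connection table. Assumes the standard TCP-MIB layout,
# where the row index is localAddr(4 parts).localPort.remAddr(4).remPort.
import socket

TCP_CONN_STATE = "1.3.6.1.2.1.6.13.1.1"  # tcpConnState column prefix

def parse_conn_oid(oid):
    parts = oid.lstrip(".").split(".")
    index = parts[len(TCP_CONN_STATE.split(".")):]  # strip the column prefix
    local_addr = ".".join(index[0:4])
    local_port = int(index[4])
    return local_addr, local_port

addr, port = parse_conn_oid(".1.3.6.1.2.1.6.13.1.1.0.0.0.0.80.0.0.0.0.0")
print(addr, port)   # 0.0.0.0 80
try:
    print(socket.getservbyport(port, "tcp"))  # "http" on most systems
except OSError:
    pass  # service name not listed in /etc/services
```

A listener bound to 0.0.0.0 with a zeroed remote side is exactly what the script's walk of the ...6.13.1.3.0.0.0.0 subtree picks up.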
Hack 82. Force Standalone Apps to Use syslog

Some applications insist on maintaining their own set of logs. Here's a way to shuffle those entries over to the standard syslog facility.

The dream is this: working in an environment where all infrastructure services are running on Linux machines [Hack #44] using easy-to-find open source software such as BIND, Apache, Sendmail, and the like. There are lots of nice things about all these packages, not the least of which is that they all know about and embrace the standard Linux/Unix syslog facilities. What this means is that you can tell the applications to log using syslog, and then configure which log entries go where in one file (syslog.conf), instead of editing application-specific configuration files. For example, if I want Apache to log to syslog, I can put a line like this one in my httpd.conf file:

ErrorLog syslog

This will, by default, log to the local7 syslog facility. You can think of a syslog facility as a channel into syslog. You configure syslog to tell it where entries coming in on a given channel should be written. So, if I want all messages from Apache coming in on the local7 channel to be written to /var/log/httpd, I can put the following line in /etc/syslog.conf:

local7.* /var/log/httpd

You can do this for the vast majority of service applications that run under Linux. The big win is that if an application misbehaves, you don't have to track down its logfiles: you can always consult syslog.conf to figure out where your applications are logging to.

In reality, though, most environments are not 100% Linux. Furthermore, not all software is as syslog-friendly as we'd like. In fact, some software has no clue what syslog is, and these applications maintain their own logfiles, in their own logging directory, without an option to change that in any way. Some of these applications are otherwise wonderful services, but systems people are notoriously unrelenting in their demand for consistency in things like logging.
So here's the meat of this hack: an example of a service that displays selfish logging behavior, and one way to go about dealing with it. Fedora Directory Server (FDS) can be installed from binary packages on Red Hat-based distributions, as well as on Solaris and HP-UX. On other Linux distributions, it can be built from source. However, on no platform does FDS know anything about the local syslog facility.

Enter a little-known command called logger. The logger command provides a generic shell interface to the syslog facility on your local machine. What this means is that if you want to write a shell or Perl script that logs to syslog without writing syslog-specific functions, you can just call logger from within the script, tell it what to write and which syslog facility to write it to, and you're done! Beyond that, logger can also take its input from stdin, which means that you can pipe information from another application to logger, and it will log whatever it receives as input from the application. This is truly beautiful, because now I can track down the FDS logs I'm interested in and send them to syslog with a command like this:

# exec tail -f /opt/fedora-ds/slapd-ldap/logs/access.log | logger -p local0.debug &

I can then tell my syslog daemon to watch for all of the messages that have been piped to logger and sent to syslog on local0 and to put them in, say, /var/log/ldap/access.log. The debug on the end of the facility name is referred to in syslog parlance as a priority. There are various priority levels available for use by each syslog facility, so a given application can log messages of varying severity as being of different priorities [Hack #86]. FDS is a good example of an application where you'd want to utilize priorities: the access log for FDS can be extremely verbose, so you're likely to want to separate those messages into their own logfile.
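Selectors like local0.debug pair a facility with a priority. As a side note (my own sketch, not part of the hack), the pieces split apart mechanically; this handles only plain one-selector lines, not syslog.conf's comma and semicolon list syntax:

```python
# Minimal sketch: split a syslog.conf-style line such as
# "local0.debug  /var/log/ldap/access.log" into facility, priority,
# and destination. Real selectors can also contain comma/semicolon
# lists, which this deliberately ignores.

def parse_selector(line):
    selector, target = line.split(None, 1)   # split on first whitespace run
    facility, _, priority = selector.partition(".")
    return facility, priority, target.strip()

print(parse_selector("local0.debug   /var/log/ldap/access.log"))
# ('local0', 'debug', '/var/log/ldap/access.log')
```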
Its error log is rarely written to at all, but the messages there can pertain to the availability of the service, so you might want those messages to go to /var/log/messages. Rather than using up another whole syslog facility to get those messages to another file, just run a command like this one:

# tail -f /opt/fedora-ds/slapd-ldap/logs/error.log | logger -p local0.notice

Now let's tell syslog to log the messages to the proper files. Here are the configuration lines for the access and error logs:

local0.debug /var/log/ldap/access.log
local0.notice /var/log/messages

There is one final enhancement you'll probably want to make, and it has to do with logger's output. Here's a line that made it to a logfile from logger as we ran it above, with just a -p flag to indicate the facility to use:

Aug 26 13:30:12 apollo logger: connection refused from 192.168.198.50

Well, this isn't very useful, because it lists logger as the application reporting the log entry! You can tell logger to masquerade as another application of your choosing using the -t flag, like this:

# tail -f access.log | logger -p local0.debug -t FDS

Now, instead of the reporting application showing up as logger:, it will show up as FDS:. Of course, there are probably alternatives to using logger, but they sometimes involve writing Perl or PHP daemons that perform basically the same function as our logger solution. In the long run, you may be able to come up with a better solution for your site, but for the "here and now" fix, logger is a good tool to have on your toolbelt.

Hack 83. Monitor Your Logfiles

Use existing tools or simple homemade scripts to help filter noise out of your logfiles.

If you support a lot of services, a lot of hosts, or both, you're no doubt familiar with the problem of making efficient use of logfiles. Sure, you can have a log reporting tool send you log output hourly, but this information often goes to waste because of the extremely high noise-to-signal ratio.
You can also try filtering down the information and using a tool such as logwatch to report on just those things most important to you on a daily basis. However, these reports won't help alert you to immediate, impending danger. For that, you need more than a reporting tool. What you really need is a log monitor: something to watch the logs continually and let you know about anything odd.

Log monitors in many environments come in human form: administrators often keep several terminal windows open with various logs being tailed into them, or they use something like root-tail to get those logs out of windows and right into their desktop backgrounds. You can even send your output to a Jabber client [Hack #84]. This is wonderful stuff, but again, it doesn't help filter out any of the unwanted noise in logfiles, and it's not very effective if all the humans are out to lunch (so to speak).

There are a number of solutions to this problem. One is simply to make sure that your services are logging at the right levels and to the right syslog facilities, and then make sure your syslog daemon is configured to break things up and log things to the right files. This can help to some degree, but what we want is to essentially have a real-time, always-running "grep" of our logs that will alert us to any matches that are found by sending us email, updating a web page, or sending a page.

9.7.1. Using log-guardian

There are a couple of tools out there that you can use for log monitoring. One is log-guardian, which is a Perl script that allows you to monitor multiple logfiles for various user-supplied patterns. You can also configure the action that log-guardian takes when a match is found. The downside to using log-guardian is that you must have some Perl knowledge to configure it, since actions supplied by the user are in the form of Perl subroutines, and other configuration parameters are supplied in the form of Perl hashes.
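To make the pattern/action idea concrete, here's a rough Python analog of that "always-running grep" (this is not log-guardian itself; the filter, sample lines, and label are invented for illustration):

```python
# Sketch of a pattern/action log monitor: each filter pairs a regular
# expression with a callable to run on matching lines. A real monitor
# would follow a growing file; here we just feed it a list of lines.
import re

FILTERS = [
    ("SSH connections", re.compile(r"sshd"), lambda line: print(line)),
]

def monitor(lines):
    hits = []
    for line in lines:
        for label, pattern, action in FILTERS:
            if pattern.search(line):
                action(line)
                hits.append((label, line))
    return hits

log = ["Aug 26 13:30:12 apollo sshd[841]: Accepted publickey for jonesy",
       "Aug 26 13:30:15 apollo cron[900]: session opened"]
hits = monitor(log)
print(len(hits))  # 1
```

The action could just as easily send mail or update a web page, which is exactly the flexibility log-guardian gets from Perl subroutines.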
All of these are put directly into the script itself or into a separate configuration file. You can grab log-guardian from its web site: http://www.tifaware.com/perl/log-guardian/. Once downloaded, you can put the log-guardian.pl script wherever you store local system tools, such as under /opt or in /var/local. Since it doesn't come with an init script, you'll need to add a line similar to this one to your system's rc.local file:

/var/local/bin/log-guardian &

The real power of log-guardian comes from Perl's File::Tail module, which is a fairly robust bit of code that acts just like tail -f. This module is required for log-guardian. To determine whether you have it installed, you can run something like locate perl | grep Tail, or run a quick Perl one-liner like this at the command line:

$ perl -e "use File::Tail;"

If that returns a big long error beginning with "Can't locate File/Tail.pm" or something similar, you'll need to install it using CPAN, which should be dead simple using the following command:

# perl -MCPAN -e shell

This will give you a CPAN shell prompt, where you can run the following command to get the module installed:

> install File::Tail

The File::Tail module is safe for use on logfiles that get moved, rolled, or replaced on a regular basis, and it doesn't require you to restart or even think about your script when this happens. It's dead-easy to use, and its more advanced features will allow you to monitor multiple logfiles simultaneously. Here's a simple filter I've added to the log-guardian script itself to match on sshd connections coming into the server:

'/var/log/messages' => [
    {
        label   => 'SSH Connections',
        pattern => "sshd",
        action  => sub {
            my $line = $_[1];
            print $line;
        }
    },
],

That's about as simple a filter as you can write for log-guardian. It matches anything that gets written to /var/log/messages that has the string sshd in it and prints any lines it finds to stdout.
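For comparison, the essence of what File::Tail does, picking up whatever has been appended to a file since the last read, can be approximated by remembering a byte offset between polls. This is a simplified sketch for illustration only (no rotation handling or adaptive sleeping, both of which File::Tail does provide):

```python
# Rough tail -f analog: remember where the last read ended and return
# only the lines appended since then.
import os, tempfile

def read_new_lines(path, offset):
    """Return (new_lines, new_offset) for everything appended since offset."""
    with open(path, "r") as f:
        f.seek(offset)
        return f.readlines(), f.tell()

# demonstrate against a throwaway file standing in for a logfile
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:
    f.write("first\n")
lines, pos = read_new_lines(path, 0)
with open(path, "a") as f:
    f.write("second\n")
more, pos = read_new_lines(path, pos)
print(lines, more)   # ['first\n'] ['second\n']
os.unlink(path)
```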
From there, you can send it to another tool for further processing or pipe it to the mail command, in which case you could run log-guardian like this:

# /var/local/bin/log-guardian | mail jonesy@linuxlaboratory.org

Of course, doing this will send every line in a separate email, so you might prefer to simply let it run in a terminal. You'll be able to monitor this output a little more easily than the logfiles themselves, since much of the noise has been filtered out for you.

This sshd filter is just one example: the "pattern" can consist of any Perl code that returns some string that the program can use to match against incoming log entries, and the "action" performed in response to that match can be literally anything you're capable of inventing using Perl. That makes the possibilities just about endless!

9.7.2. Using logcheck

The logcheck utility is not a real-time monitor that will alert you at the first sign of danger. However, it is a really simple way to help weed out the noise in your logs. You can download logcheck from http://sourceforge.net/projects/sentrytools/. Once downloaded, untar the distribution, cd to the resulting directory, and, as root, run make linux. This will install the logcheck files under /usr/local.

There are a few files to edit, but the things that need editing are simple one-liners; the configuration is very intuitive, and the files are very well commented. The main file that absolutely must be checked to ensure proper configuration is /usr/local/etc/logcheck.sh. This file contains sections that are marked with tags such as CONFIGURATION and LOGFILE CONFIGURATION, so you can easily find those variables in the file that might need changing. Probably the most obvious thing to change is the SYSADMIN variable, which tells logcheck where to send output:

SYSADMIN=user@mydomain.com

You should go over the other variables as well, because path variables and paths to binaries are also set in this file.
Once this is ready to go, the next thing you'll want to do is edit root's crontab file, which you can do by becoming root and running the following command:

# crontab -e

You can schedule logcheck to run as often as you want. The following line will schedule logcheck to run once an hour, every day, at 50 minutes after the hour:

50 * * * * /bin/sh /usr/local/etc/logcheck.sh

You can pick any time period you want, but once per hour (or less often in smaller sites or home networks) should suffice. Once you've saved the crontab entry, you'll start getting email with reports from logcheck about what it's found in your logs that you might want to know about. It figures out which log entries go into the reports by using the following methodology:

• It matches a string you've noted as significant by putting it in /usr/local/etc/logcheck.hacking.
• It does not match a string you've noted as being noise by putting it in /usr/local/etc/logcheck.ignore.

These two files are simply lists of strings that logcheck will try to match up against entries in the logs it goes through to create the reports. There is actually a third file as well, /usr/local/etc/logcheck.violations.ignore.
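That two-list methodology is simple enough to sketch. Here's an illustrative Python version (plain substrings instead of the egrep patterns real logcheck files hold; the keyword lists are invented examples):

```python
# Sketch of logcheck's report logic: a line is reported if it contains a
# "hacking" keyword and is not covered by an "ignore" entry.

HACKING = ["refused", "denied", "ILLEGAL"]
IGNORE = ["cups: denied stats collection"]  # known-harmless noise

def report(lines):
    out = []
    for line in lines:
        if any(k in line for k in IGNORE):
            continue
        if any(k in line for k in HACKING):
            out.append(line)
    return out

log = ["sshd: connection refused from 192.168.198.50",
       "cups: denied stats collection",
       "cron: session opened for user root"]
print(report(log))  # ['sshd: connection refused from 192.168.198.50']
```

The ignore check runs first, which is how a noisy line can escape the report even when it contains an otherwise-significant keyword.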
