Essential System Administration, 3rd Edition (part 8)
# lpc class laser check

This will allow only jobs in class check to print; all others will be held. To allow any job to print, use this command:

# lpc class laser off

Using classes can be a bit tricky. For example, if you alternate between printing checks and regular output on a printer, you probably don't want to turn off class check after all the checks are printed. Rather, you want check jobs to be held until the proper paper is again in place in the printer. In this case, the following command will be more effective:

# lpc class laser A

This sets the allowed class to class A (the default), so jobs spooled in class check will be held as desired.

13.6.2 Configuring LPRng

LPRng uses three configuration files (stored in /etc): printcap, lpd.conf, and lpd.perms, which hold queue configuration, global spooler configuration, and printer access rules, respectively. The first of these is a modified version of the standard LPD printcap file. It uses a relaxed syntax: all fields use an equal sign to assign values rather than having datatype-specific assignment characters (although the form name@ is still used to turn off Boolean flags), multiple-line entries do not require final backslash characters, and no terminal colon is needed to designate field boundaries. Here are two simple entries from printcap:

hp:                                      Example local printer
  :lp=/dev/lp0
  :cm=HP Laser Jet printer               Description for lpq command
  :lf=/var/log/lpd.log
  :af=/var/adm/pacct
  :filter=/usr/local/lib/filters/ifhp
  :tc=.common                            Include the common section

laser:                                   Example remote printer
  :lp=painters@matisse
  :tc=.common                            Include the common section

.common:                                 Named group of items
  :sd=/var/spool/lpd/%P
  :mx=0

The first entry is for a local printer named hp on the first parallel port. This printcap entry specifies a description for the printer, the name of its log and accounting files, and a filter with which to process jobs. The final field, tc, provides an "include" feature within printcap entries. It takes a list of names as its argument. In this case, the field says to include the settings in the printcap entry called .common within the current entry. Thus, it has the effect of removing any length limits on print jobs to printer hp and of specifying its spool directory as /var/spool/lpd/hp. The second printcap entry creates a queue for a remote printer, matisse on host painters, which also has no job length limits and uses the spool directory /var/spool/lpd/laser. The last two items are again set using the tc include field.

The LPRng printcap file allows for variable expansion within printcap entries. We saw an example of this in the sd field in the preceding example. The following variables are supported:

%P   Printer name
%Q   Queue name
%h   Simple hostname
%H   Fully-qualified hostname
%R   Remote print queue name
%M   Remote computer hostname
%D   Current date

We will now go on to consider additional LPRng features and the printcap settings that support them.

13.6.2.1 Separate client and server entries

By default, printcap entries apply both to spooling system clients (user programs like lpr) and to servers (the lpd daemon). However, you can specify that an entry apply only to one of these contexts, as in these example entries:

laser:server                             Entry applies to the lpd daemon
  :lp=/dev/lp0

laser:                                   Entry applies to client programs
  :lp=matisse@painters

The first entry defines the printer laser as the device on the first parallel port. The server field indicates that the entry is active only when lpd is using the printcap file (and not when it is accessed by programs like lpr). The second
entry defines the printer laser for client programs as a remote printer (matisse on painters ) Clients will be able to send jobs directly to this remote printer In this next example, clients are required to use the local print daemon in order to print to the printer laser2 : laser2:force_localhost laser2:server :lp=/dev/lp0 :sd=/var/spool/lpd/%P Force clients to use the local server The force_localhost setting (a Boolean, which is off by default) tells clients accessing this printcap entry to funnel jobs through the local lpd server process 13.6.2.2 Using a common printcap file for many hosts One of LPRng's most powerful capabilities is the built-in features for constructing a single central printcap file which can be copied to or shared among many hosts This flexibility comes from the on setting (for "on host") Here is an example: laser: :oh=*.ahania.com,!astarte.ahania.com :lp=/dev/lp0 This entry defines a printer named laser on every host in the domain ahania.com except astarte The printer will always be located on the first parallel port The following entry will define a printer named color on every host in the 10.0.0 subnet For most hosts, the printer points to the color queue on 10.0.0.4, while for 10.0.0.4 itself, it points to the device on the first parallel port color: :oh=10.0.0.0/24,!10.0.0.4 :lp=%P@10.0.0.4 :tc=.common Host specification by IP address color: :oh=10.0.0.4 :lp=/dev/lp0 The %P construct in the first entry's lp setting is not really necessary here, but it would be useful if this setting occurred in a named group of settings, as in this example: color:tc=.common laser:tc=.common draft:tc=.common common: :oh=*.ahania.com,!astarte.ahania.com :lp=%P@astarte.ahania.com These entries define the printers color , laser , and draft on every host in ahania.com except astarte as the corresponding queue on astarte (which are defined elsewhere in the printcap file) 13.6.2.3 Special-purpose queues In this section, we examine how to set up queues for several more complex printing scenarios 13.6.2.3.1 Bounce queues Here is a printcap entry for a simple store-and-forward queue (as we've seen before): laser:server :lp=matisse@painters :sd=/var/spool/lpd/%P The queue laser collects jobs and sends them on to the queue matisse on host painters as is However, it is sometimes useful to process the jobs locally before sending them on to be printed This is accomplished via a bounce queue, as in this example: blots:server :sd=/var/spool/lpd/%P :filter=path and arguments :bq_format=l :bq=picasso@painters Binary jobs will be sent on This queue named blots accepts jobs, runs them through program specified in the filter setting, and then sends them to queue picasso on host painters for printing 13.6.2.3.2 Printer pools LPRng allows you create a printer pool: a queue that feeds several printing devices, as in this example: scribes:server :sd=/var/spool/lpd/%P :sv=lp1,lp2,lp3 Here, the queue scribes sends jobs to queues lp1 , lp2 , and lp3 (which must be defined elsewhere in the printcap file), as each queue becomes free (which, of course, occurs when the associated device is free) This mechanism provides a very simple form of load balancing Here is part of the printcap entry for lp1 : lp1: :sd=/var/spool/lpd/%P :ss=scribes The ss setting specifies the controlling queue for this printer Note that it does not prevent jobs from being sent directly to queue lp1 ; the only effect of this setting seems to be to make queue status listings more readable Print job destinations can also be determined on a 
dynamic basis Here is an example: smart: :sd=/var/spool/lpd/%P :destinations=matisse@printers,degas@france,laser :router=/usr/local/sbin/pick_printer The program specified in the router setting is responsible for determining the destination for each submitted print job The router program is a standard print filter program Its exit status determines what happens to the job (0 means print, 37 means hold, and any other value says to delete the job), and it must write the queue destination and other information to standard output (where lpd obtains it) See the LPRng-HOWTO document for full details on dynamic print job routing 13.6.2.4 Filters As we've noted before, print jobs are processed by filter programs before they are sent to the printer device Filters are responsible for initializing the device to a known initial state, transforming the output into a form that it understood by the printer, and ensuring that all output has been sent to the printer at the end of the job The first and third tasks are typically accomplished by adding internal printer commands to the beginning and end of the print job Filter programs are also responsible for creating printer accounting records As the examples we've looked at have shown, LPRng provides the filter printcap setting for specifying a default filter for all jobs in a particular queue In addition, it supports many of the various output type-specific filter variables used in traditional printcap entries (i.e., the *f settings) The LPRng package often uses the ifhp filter program also written by Patrick Powell It is suitable for use with a wide variety of current printers The characteristics of the various supported printers are stored in its configuration file, ifhp.conf (usually stored in /etc ) The following printcap entry illustrates settings related to its use: lp: :sd=/var/spool/lpd/%P :filter=/usr/local/libexec/filters/ifhp :ifhp=model=default The filter setting specifies the path to ifhp , and the ifhp setting specifies the appropriate printer definition with its configuration file In this case, we are using the default settings, which work well with a wide variety of printers NOTE The LPRng facility includes an excellent Perl script that demonstrates the method for getting page count information from modern printers It is called accounting.pl and is included with the source distribution 13.6.2.5 Other printcap entry options It is also possible to store printcap entries in forms other than a flat text file For example, they could be stored in an LDAP directory LPRng allows for such possibilities by allowing printcap entries to be fetched or created dynamically as needed This is accomplished by setting the printcap_path in the lpd.conf configuration file as a pipe to a program rather than a path to a printcap file: printcap_path=|program Such an entry causes LPRng to execute the specified program whenever it needs a printcap entry (the desired entry is passed to the program as its standard input) For example, such a program could retrieve printcap information from an LDAP directory See Chapter 11 of Network Printing for details and extended examples 13.6.3 Global Print Spooler Settings The lpd.conf configuration file holds a variety of settings relating to the print spooler service Among the most important are ones related to printer connection and timeouts and to print job logging Some of the most commonly used are listed in the example configuration file below: # communication-related settings connect_grace=3 network_connect_grace=3 
connect_timeout=600 send_try=2 max_servers_active=10 # logging settings max_log_file_size=256 max_status_size=256 min_log_file_size=128 min_status_size=64 max_status_line=80 # central logging server logger_destination=scribe logger_pathname=/tmp/lprng.tmp logger_max_size=1024 logger_timeout=600 Wait period between jobs (default=0) Cancel job after this interval (default=0) Maximum number of retries (default is no limit) Max # lpd child processes (default is half the system process limit) Maximum file sizes in KB (default is no limit) Keep this much data when the files are too big (default is 25%) Truncate entries to this length (default=no limit) Destination for log file entries Local temporary file to use Max size of the temporary file (default=no limit) Wait time between connections to the remote server (default is whenever data is generated) 13.6.4 Printer Access Control The third LPRng configuration file, lpd.perms , is used to control access to the print service and its printers Each entry in the file provides a set of characteristics against which potential print jobs are matched and also indicates whether such jobs should be accepted The first entry that applies to a specific print job will be used to determine its access Accordingly, the order of entries within the file is important The syntax of the lpd.perms file is explained best by examining some examples For example, these entries allow users to remove their own print jobs and root to remove any print job: ACCEPT SERVICE=M SAMEUSER ACCEPT SERVICE=M SERVER REMOTEUSER=root REJECT SERVICE=M The first keyword in an entry is always ACCEPT or REJECT , indicating whether matching requests are to be performed These entries all apply to the M service, which corresponds to removing jobs with lprm The various entries allow the command to succeed if the user executing and the user owning the print jobs are the same (SAMEUSER ), or if the user executing it is root (REMOTEUSER=root ) on the local system (SERVER ) All other lprm requests are rejected Available SERVICE codes include C (control jobs with lpc ), R (spool jobs with lpr ), M (remove jobs with lprm ), Q (get status info with lpq ), X (make connection to lpd ), and P (general printing) More than one code letter can be specified to SERVICE There are several keywords that are compared against the characteristics of the print job and the command execution context: USER , GROUP , HOST , PRINTER These items are compared to the ownership and other characteristics of the print job to which the desired command will be applied In addition, the SERVER keyword requires that the command be executed on the local server REMOTEUSER , REMOTEGROUP , REMOTEHOST These items are compared to the user, group, and host where or with whom the desired command originated Note that the "remote" part of the name can be misleading, because it need not refer to a remote user or host at all The preceding keywords all take a string or list of strings as their arguments These items are interpreted as patterns to be compared to the print job or command characteristics SAMEUSER , SAMEHOST These keywords require that USER be the same as REMOTEUSER and HOST be the same as REMOTEHOST , respectively For example, the following entry limits use of the lprm command to users' own jobs and requires that the command be run on the same host from which the print job was submitted: ACCEPT SERVICE=M SAMEUSER SAMEHOST We'll now examine some additional lpd.perms entries The following entry rejects all connections to the lpd server 
that originate outside the ahania.com domain or from the hosts dalton and hamlet:

REJECT SERVICE=X NOT REMOTEHOST=*.ahania.com
REJECT SERVICE=X REMOTEHOST=dalton,hamlet

Note that these entries could not be formulated as ACCEPTs. Hosts may be specified by hostname or by IP address.

The following entries allow only members of the group padmin to use the lpc command on the local host:

ACCEPT SERVICE=C SERVER REMOTEGROUP=padmin
REJECT SERVICE=C

The LPC keyword can be used to limit the lpc subcommands that can be executed. For example, the following entry allows members of group printop to hold and release individual print jobs and move them around within a queue:

ACCEPT SERVICE=C SERVER REMOTEGROUP=printop LPC=topq,hold,release

The following entries prevent anyone from printing to the printer test except user chavez:

ACCEPT SERVICE=R,M,C REMOTEUSER=chavez PRINTER=test
REJECT SERVICE=* PRINTER=test

User chavez can also remove jobs from the queue and use lpc to control it. The following command prevents print job forwarding on the local server:

REJECT SERVICE=R,C,M FORWARD

The DEFAULT keyword is used to specify a default action for all requests not matching any other configuration file entry:

# Allow everything that is not explicitly forbidden
DEFAULT ACCEPT

The default access permission in the absence of an lpd.perms file is to accept all requests.

13.6.4.1 Other LPRng capabilities

LPRng has quite a few additional capabilities which space constraints prevent us from considering, including the ability to perform more sophisticated user authentication using a variety of mechanisms, including PGP and Kerberos. Consult the LPRng documentation for full details.

13.7 CUPS

The Common Unix Printing System (CUPS) is another project aimed at improving, and ultimately superseding, the traditional printing subsystems. CUPS is distinguished by the fact that it was designed to address printing within a networking environment from the beginning, rather than being focused on printing within a single system. Accordingly, it has features designed to support both local and remote printing, as well as printers directly attached to the network. We will take a brief look at CUPS in this section. The homepage for the project is http://www.cups.org.

CUPS is implemented via the Internet Printing Protocol (IPP). This protocol is supported by most current printer manufacturers and operating systems. IPP is implemented as a layer on top of HTTP, and it includes support for security-related features such as access control, user authentication, and encryption. Given this structure, CUPS requires a web server on printer server systems.

Architecturally, CUPS separates the print job handling and device spooling functions into distinct modules. Print jobs are given an identifier number and also have a number of associated attributes: their destination, priority, media type, number of copies, and so on. As with other spooling subsystems, filters may be specified for print queues and/or devices in order to process print jobs. The CUPS system provides many of them. Finally, backend programs are responsible for sending print jobs to the actual printing devices.

CUPS also supports printer classes: groups of equivalent printers fed by a single queue (we've previously also referred to such entities as printer pools). CUPS extends this construct by introducing what it calls "implicit classes."
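Before turning to implicit classes, here is a minimal sketch of building an ordinary (explicit) class with the lpadmin command, which is described in the next section; the printer and class names are hypothetical:

# lpadmin -p lj_room210 -c finance_class      Add printer lj_room210 to the class finance_class
# lpadmin -p lj_room215 -c finance_class      The class is created when its first member is added
# lpstat -c finance_class                     List the members of the class

Jobs submitted to the destination finance_class will then be printed by whichever member printer is available.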
Whenever distinct printers and/or queues on different servers are given the same name, the CUPS system treats the collection as a class, controlling the relevant entities as such In other words, multiple servers can send jobs to the same group of equivalent printers In this way, implicit classes may be used to prevent any individual printing device or server system from becoming a single point of failure Classes may be nested: a class can been a member of another class 13.7.1 Printer Administration CUPS supports the lpr , lpq , and lprm commands and the lp , lpstat , and cancel commands from the BSD and System V printing systems, respectively For queue and printer administration, it offers two options: command-line utilities, including a version of the System V lpadmin command, or a web-based interface The latter is accessed by pointing a browser at port 631: for example, http://localhost:631 for the local system The following commands are available for managing and configuring print queues Note that all of them except lpinfo specify the desired printer as the argument to the -p option: lpstat View queue status accept and reject Allow/prevent jobs from being sent to the associated printing device enable and disable Allow/prevent new print jobs from being submitted to the specified queue lpinfo Display information about available printers ( -v ) or drivers (-m ) lpadmin Configure print queues Here is an example lpadmin command, which adds a new printer: # lpadmin -plj4 -D"Finance LaserJet" -L"Room 2143-A" \ -vsocket://192.168.9.23 -mlaserjet.ppd This command add a printer named lj4 located on the network using the indicated IP address The printer driver to be used is laserjet.ppd (several are provided with the CUPS software) The -D and -L options provide descriptions of the printer and its location, respectively In general, the -v option specifies the printing device as well as the method used to communicate with it Its argument consists of two colon-separated parts: a connection-type keyword (which selects the appropriate backend module), followed by a location address Here are some syntax forms: parallel:/dev/device serial:/dev/device usb:/dev/usb/device ipp://address/port lpd://address/DEVICE socket://address[:port] Local parallel port Local serial port Local USB port IPP-based network printer LPD-based network printer Network printer using another protocol (e.g., JetDirect) The CUPS version of lpadmin has several other useful options: -d to specify a system default printer (as under System V), -c and -r to add/remove a printer from a class, and -x to remove the print queue itself Under CUPS, printers need only be configured on the server(s) where the associated queues are located All clients on the local subnet will be able to see them once CUPS is installed and running on each system 13.7.1.1 CUPS configuration files CUPS maintains several configuration files, stored in the /etc/cups directory Most of them are maintained by lpadmin or the web-based administrative interface The one exception, which you may need to modify manually, is the server's main configuration file, cupsd.conf Here are some sample annotated entries (all non-system-specific values are the defaults): ServerName painters.ahania.com ServerAdmin root@ahania.com ErrorLog /var/log/cups/error_log AccessLog /var/log/cups/access_log PageLog /var/log/cups/page_log LogLevel info MaxLogSize 1048571 PreserveJobFiles No RequestRoot /var/spool/cups User lp Group sys TempDir /var/spool/cups/tmp MaxClients 100 Timeout 300 Browsing 
On ImplicitClasses On Server name CUPS administrator's email address Log file locations Printer accounting data Log detail (other levels: debug, warn, error) Rotate log files when current is bigger than this Don't keep files after print job completes Spool directory Server user and group owners CUPS temporary directory Maximum client connections to this server Printing timeout period in seconds Let clients browse for printers Implicit classes are enabled Readers familiar with the Apache facility will notice many similarities to its main configuration file (httpd.conf ) 13.7.1.2 Access control and authentication Printer access control, user authentication, and encryption are also enabled and configured in the cupsd.conf configuration file.[9] [9] These features are somewhat in flux as of this writing, so there may be additional capabilities in your version of CUPS Consult the CUPS documentation for details on the current state of things Encryption is controlled by the Encryption entry: Encryption IfRequested The entry indicates whether or not to encrypt print requests (in order to use encryption, the OpenSSL library must be linked into the CUPS facility) The default is to encrypt files if the server requests it; other values are Always and Never Additional keywords may be added as other encryption methods become available There are two main entries related to user authentication: AuthType Source of authentication data, one of: None , Basic (use data in the Unix password and group file, transmitted Base64-encoded), and Digest (use the file passwd.md5 in /etc/cupsd for authentication data) The last method offers a medium level of security against network sniffing The CUPS system provides the lppasswd command for maintaining the passwd.md5 file AuthClass Method of authentication The default is Anonymous (perform no authentication) Other options are User (valid username and password are required), System (user must also belong to the system group, which can be defined using the SystemGroup entry), and Group (user must also belong to the group specified in the AuthGroupName entry) The encryption- and user authentication-related entries are used to specify requirements for specific printers or printer classes These are defined via stanzas like the following in the configuration file: [Encryption entry] [Authentication entries] [Access control entries] The ordering here is not significant The pseudo-HTML directives delimit the stanza, and the item specified in the opening tag indicates the entities to which the stanza applies.[10] It can take one of the following forms: [10] Again, note the similarity to the Apache configuration file syntax / /printers /printers/name /classes /classes/name /admin Defaults for the CUPS system Applies to all non-specified printers Applies to a specific printer Applies to all non-specified classes Applies to the specified class Applies to CUPS administrative functions Here a some example stanzas (which also introduce the access control directives): Order Deny,Allow Deny From All Allow From 127.0.0.1 Order Allow,Deny Allow From ahania.com Allow From essadm.com Deny From 192.168.9.0/24 System defaults Interpret Allow list as overrides to Deny list Deny all access except from the local host Interpret Deny list as exceptions to Allow list Allow access from these domains but exclude this subnet Encryption Always AuthType Digest AuthClass Group AuthGroupName finance Order Deny,Allow Deny From All Allow From 10.100.67.0/24 Applies to class named checks Always encrypt 
Require valid user account and password Restrict to members of the finance group AuthType Digest AuthClass System Order Deny,Allow Deny From All Allow From ahania.com Access for administrative functions Require valid user account and password Limit to system group members Deny all access except from this subnet Restrict access to the local domain Consult the CUPS documentation for information about the facility's other features as well as its installation procedure I l@ ve RuBoard I l@ ve RuBoard 13.8 Font Management Under X On most current Unix systems, fonts are made available to applications via the X Window system (although some application packages manage their own font information) In this section, we will consider the main administrative tasks related to managing fonts In an ideal world, fonts would be something that users took care of themselves However, in this world, font handling under X is cumbersome enough that the system administrator often needs to get involved In this section, we consider font management using the standard X11R6 tools, and we refer to directory locations as defined by the normal installation of the package These facilities and locations are often significantly altered (and broken) in some vendors' implementations 13.8.1 Font Basics When you think of a font, you probably think of something like Times or Helvetica These names actually referred to font families containing a number of different typefaces: for example, regular Times, italic Times, bold Times, bold italic Times, and so on At the moment, there are quite a few different formats for font files The most important distinction among them is between bitmap fonts and outline fonts Bitmap fonts store the information about the characters in a font as bitmap images, while outline fonts define the characters in a font as a series of lines and curves, comprising in this way mathematical representations of the component characters From a practical point of view, the main difference between these two font types is scalability Outline fonts can be scaled up or down any arbitrary amount and look equally good at all sizes In contrast, bitmap fonts not scale well, and they look extremely jagged and primitive as they get larger For this reason, outline fonts are generally preferred to bitmap fonts To further complicate matters, there are two competing formats for outline fonts: Adobe Type and TrueType In technical terms, the chief difference between them consists of the kind of curves used to represent the characters: Bezier curves and b-splines, respectively The other major difference between the two formats is price, with Type fonts generally being significantly more expensive than TrueType fonts All of these different types of fonts are generally present under X The most important formats are listed in Table 13-7 , along with their corresponding file extensions Portable Compiled Font bitmap PCF.gz Speedo bitmap spd Ghostscript font outline gsf Type outline pfa, pfb, afm TrueType autonice = autonice_penalty = 10 [16] Each stanza is introduced by the subsystem name In this example, we configure the generic (general), ipc shared memory subsystems [16] and proc (CPU/process) These example settings are useful for running large jobs on multiprocessor systems The proc subsystem is the most relevant to CPU performance The following parameters may be useful in some circumstances: max_per_proc_address_space and max_per_process_data_size may need to be increased from their defaults of GB and GB (respectively) to accommodate 
very large jobs By default, the Tru64 scheduler gives a priority boost to jobs returning from a block I/O wait (in an effort to expedite interactive response) You can disable this by setting give_boost to The scheduler can be configured to automatically nice processes that have used more than 600 seconds of CPU time (this is disabled by default) Setting autonice to enables it, and you can specify the amount to nice by with the autonice_penalty parameter (the default is 4) The round_robin_switch_rate can be used to modify the time slice It does so in an indirect manner Its default value is 0, which is also equivalent to its maximum value of 100 This setting specifies how many time-slice expiration context switches occur in a second, and the time slice is computed by dividing the CPU clock rate by this value Thus, setting it to 50 has the effect of doubling the time slice length (because the divisor changes from 100 to 50) Such a modification should be considered only for systems designed for running long jobs, with little or no interactive activity (or where you've decided to favor computation over interactive activity) 15.3.5 Unix Batch-Processing Facilities Manually monitoring and altering processes' execution priorities is a crude way to handle CPU time allocation, but unfortunately it's the only method that standard Unix offers It is adequate for the conditions under which Unix was developed: systems with lots of small interactive jobs But if a system runs some large jobs as well, it quickly breaks down Another way of dividing the available CPU resources on a busy system among multiple competing processes is to run jobs at different times, including some at times when the system would otherwise be idle Standard Unix has a limited facility for doing so via the at and batch commands Under the default configuration, at allows a command to be executed at a specified time, and batch provides a queue from which jobs may be run sequentially in a batch-like mode For example, if all large jobs are run via batch from its default queue, it can ensure that only one is ever running at a time (provided users cooperate, of course) In most implementations, system administrators may define additional queues in the queuedefs file, found in various locations on different systems: AIX /var/adm/cron FreeBSD not used HP-UX /var/adm/cron Linux not used Solaris /etc/cron.d Tru64 /var/adm/cron This file defines queues whose names consist of a single letter (either case is valid) Conventionally, queue a is used for at , queue b is used for batch , and on many newer systems, queue c is used by cron Tru64 and AIX define queues e and f for at jobs using the Korn shell and C shell, respectively (submitted using the at command's -k and -c options) Queues are defined by lines in this format: q x jy nz w q is a letter, x indicates the number of simultaneous jobs that may run from that queue, y specifies the nice value for processes started from that queue, and z says how long to wait before trying to start a new job when the maximum number for that queue or the facility are already running The default values are 100 jobs, a nice value of (where is the default nice number), and 60 seconds The first two of the following queuedefs entries show typical definitions for the at and batch queues The third entry defines the h queue, which can run one or two simultaneous jobs, niced to level 10, and waits for five minutes between job initiation attempts after starting one has failed: a.4j1n b.2j2n90w h.2j10n300w The desired 
queue is selected with the -q option to the at command. Jobs waiting in the facility's queues may be listed and removed from a queue using the -l and -r options, respectively.[17]

[17] The BSD form of the at facility provided the atq and atrm commands for these functions, but they are obsolete forms. Also, only the implementations found on FreeBSD and Linux systems continue to require that the atrun command be executed periodically from within cron to enable the at facility (every 10 minutes was a typical interval).

If simple batch-processing facilities like these are sufficient for your system's needs, at and batch may be of some use, but if any sort of queue priority features are required, these commands will probably prove insufficient. The manual page for at found on many Linux systems is the most honest about its deficiencies: at and batch as presently implemented are not suitable when users are competing for resources.

A true batch system supports multiple queues; queues that receive jobs from and send jobs to a configurable set of network hosts, including the ability to select hosts based on load-leveling criteria and to allow the administrator to set in-queue priorities (for ordering pending jobs within a queue); queue execution priorities and resource limits (the priority and limits automatically assigned to jobs started from that queue); queue permissions (which users can submit jobs to each queue); and other parameters on a queue-by-queue basis. AIX has adapted its print-spooling subsystem to provide a very simple batch system (see Section 13.3), allowing for different job priorities within a queue and multiple batch queues, but it is still missing most of the important features of a modern batch system. Some vendors offer batch-processing features as an optional facility at additional cost. There are also a variety of open source queueing systems, including:

Distributed Queueing System (DQS): http://www.scri.fsu.edu/~pasko/dqs.html
Portable Batch System: http://pbs.mrj.com

15.4 Managing Memory

Memory resources have at least as much effect on overall system performance as the distribution of CPU resources. To perform well, a system needs to have adequate memory not just for the largest jobs it will run, but also for the overall mix of jobs typical of its everyday use. For example, the amount of memory that is sufficient for the one or two big jobs that run overnight might provide only a mediocre response time under the heavy daytime interactive use. On the other hand, an amount of memory that supports a system's normal interactive use might result in quite poor performance when larger jobs are run. Thus, both sets of needs should be taken into consideration when planning for and evaluating system memory requirements.

Paging and swapping are the means by which Unix distributes available memory among current processes when their total memory needs exceed the amount of physical memory. Technically, swapping refers to writing an entire process to disk, thereby freeing all of the physical memory it had occupied. A swapped-out process must then be reread into memory when execution resumes. Paging involves moving sections of a process's memory (in units called pages) to disk, to free up physical memory needed by some process. A page fault occurs when a process needs a page of memory that is not resident and must be (re)read in from disk. On virtual memory systems, true swapping occurs rarely if at all[18] and usually indicates a serious memory shortage, so the two terms are used
synonymously by most people [18] Some systems swap out idle processes to free memory The swapping I refer to here is the forced swapping of active processes due to a memory shortage Despite the strong negative connotations the term has acquired, paging is not always a bad thing In the most general sense, paging is what makes virtual memory possible, allowing a process' memory requirements to greatly exceed the actual amount of physical memory A process' total memory requirement includes the sum of the size of its executable image [19] (known as its text segment) and the amount of memory it uses for data [19] An exception occurs for executables that can be partially or totally shared by more than one process In this case, only one copy of the image is in memory regardless of how many processes are executing it The total memory used by the shared portions in these cases is divided among all processes using them in the output from commands like ps To run on systems without virtual memory, the process requires an amount of physical memory equal to its current text and data requirements Virtual memory systems take advantage of the fact that most of this memory isn't actually needed all the time Pieces of the process image on disk are read in only as needed The system automatically maps their virtual addresses (relative address with respect to the beginning of the process's image) to real physical memory locations When the process accesses a part of its executable image or its data that is not currently in physical memory, the kernel reads in—pages in—what is needed from disk, sometimes replacing other pages that the process no longer needs For a large program that spends most of its time in two routines, for example, only the part of its executable image containing the routines needs to be in memory while they are running, freeing up for other uses the memory the rest of the program's text segment would occupy on a nonvirtual memory computer This is true whether the two routines are close together or far apart in the process' virtual address space Similarly, if a program uses a very large data area, all of it needn't be resident in memory simultaneously if the program doesn't access it all at once On many modern systems, program execution also always begins with a page fault as the operating system takes advantage of the kernel's virtual memory management facility to read enough of the executable image to get it started The problem with paging comes when there is not enough physical memory on the system for all of the processes currently running In this case, the kernel will apportion the total memory among them dynamically When a process needs a new page read in and there are no free or reusable pages, the operating system must steal a page that is being used by some other process In this case, an existing page in memory is paged out For volatile data, this results in the page being written to a paging area on disk; for executable pages or unmodified pages read in from file, the page is simply freed In either case, however, when that page is again required, it must be paged back in, possibly forcing out another page When available physical memory is low, an appreciable portion of the available CPU time can be spent handling page faulting, and all processes will execute much less efficiently In the worst kind of such thrashing conditions, the system spends all of its time managing virtual memory, and no real work gets done at all (no CPU cycles are actually used to advance the execution of any 
process) Accordingly, total CPU usage can remain low under these conditions You might think that changing the execution priorities for some of the jobs would solve a thrashing problem Unfortunately, this isn't always the case For example, consider two large processes on a system with only a modest amount of physical memory If the jobs have the same execution priority, they will probably cause each other to page continuously if they run at the same time This is a case where swapping is actually preferable to paging If one job is swapped out, the other might run without page faulting, and after some amount of time, the situation can be reversed Both jobs finish much sooner this way than they under continuous paging Logically, lowering the priority of one of the jobs should cause it to wait to execute until the other one pauses (e.g., for an I/O operation) or completes However, except for the special, low-priority levels we considered earlier, low-priority processes occasionally get some execution time even when higher-priority processes are runnable This happens to prevent a low-priority process from monopolizing a critical resource and thereby creating an overall system bottleneck or deadlock (this concern is indicative of a scheduling algorithm designed for lots of small interactive jobs) Thus, running both jobs at once, regardless of their priorities, will result in some execution degradation (even for the higher priority job) due to paging In such cases, you need to either buy more memory or not run both jobs at the same time In fact, the virtual memory managers in modern operating systems work very hard to prevent such situations from occurring by using techniques for using memory efficiently They also try to keep a certain amount of free memory all the time to minimize the risk of thrashing These are some of the most common practices used to maximize the efficiency of the system's memory resources: Demand paging Pages are loaded into memory only when a page fault occurs When a page is read in, a few pages surrounding the faulted page are typically loaded as well in the same I/O operation in an effort to head off future page faults Copy-on-write page protection Whenever possible, only a single copy of identical pages in use by multiple processes is kept in memory Duplicate, process-private copies of a page are created only if one of the processes modifies it Page reclaims When memory is short, the virtual memory manager takes memory pages being used by current processes However, such pages are simply initially marked as free and are not replaced with new data until the last possible moment In this way, the owning process can reclaim them without a disk read operation if their original contents are still in memory when the pages are required again The next section discusses commands you can use to monitor memory use and paging activity on your system and get a picture of how well the system is performing Later sections discuss managing the system paging areas 15.4.1 Monitoring Memory Use and Paging Activity The vmstat command is the best tool for monitoring system memory use; it is available on all of the systems we are considering The most important statistics in this context are the number of running processes and the number of page-outs[20] and swaps You can use this information to determine whether the system is paging excessively As you gather data with these commands, you'll also need to run the ps command so that you know what programs are causing the memory behavior you're seeing 
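Here is a minimal sketch of one way to collect vmstat and ps data side by side for later correlation; the log file location and the ps sorting keywords are illustrative and assume the Linux/procps forms of ps (adjust them for other ps implementations):

#!/bin/sh
# Sample memory statistics every 60 seconds and record the
# largest memory consumers at each interval.
LOG=/var/tmp/memwatch.log
while true; do
    date >> $LOG
    vmstat 1 2 | tail -1 >> $LOG                      # second sample reflects current activity, not the boot-time average
    ps -eo pid,user,rss,comm --sort=-rss | head -6 >> $LOG   # top five processes by resident set size
    sleep 60
done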
[20] Because of the way that AIX keeps its paging statistics, page-ins are better indicators, because a page-in always means that a page was previously paged out The following sections discuss the memory monitoring commands and show how to interpret their output They provide examples of output from systems under heavy loads It's important to keep in mind, though, that all systems from time to time have memory shortages and consequent increases in paging activity Thus, you can expect to see similar output on your system periodically Such activity is significant only if it is persistent Some deviation from what is normal for your system is to be expected, but consistent and sustained paging activity does indicate a memory shortage that you'll need to deal with 15.4.1.1 Determining the amount of physical memory The following commands can be used to quickly determine the amount of physical memory on a system: AIX lsattr -HE -l sys0 -a realmem FreeBSD grep memory /var/run/dmesg.boot HP-UX dmesg | grep Phys Linux free Solaris dmesg | grep mem Tru64 vmstat -P | grep '^Total' Some Unix versions (including FreeBSD, AIX, Solaris, and Tru64) also support the pagesize command, which displays the size of a memory page: $ pagesize 4096 Typical values are KB and KB 15.4.1.2 Monitoring memory use Overall memory usage levels are very useful indicators of the general state of the virtual memory subsystem They can be obtained from many sources, including the top command we considered earlier Here is the relevant part of the output: CPU states: 3.5% user, 9.4% system, 13.0% nice, 87.0% idle Mem: 63212K av, 62440K used, 772K free, 21924K shrd, Swap: 98748K av, 6060K used, 92688K free 316K buff 2612K cached Graphical system state monitors can also provide overall memory use data Figure 15-2 illustrates the KDE System Guard (ksysguard) utility's display It presents both a graphical view of ongoing CPU and memory usage, as well as the current numerical data in the status area at the bottom of the window Figure 15-2 Overall system performance statistics Linux also provides the free command, which lists current memory usage statistics: $ free -m -o total Mem: 249 Swap: 255 used 231 free 18 252 shared buffers 11 cached 75 The command's options specify display units of MB (-m ) and to omit buffer cache data ( -o ) The most detailed memory subsystem data is given by vmstat As we've seen, vmstat provides a number of statistics about current CPU and memory use vmstat output varies somewhat between implementations Here is an example of typical vmstat output:[21] [21] vmstat's output varies somewhat from system to system, as we'll see $ vmstat procs memory page disk r b w swap free re mf pi po fr de sr s0 s6 s7 s8 0 1642648 759600 98 257 212 10 10 0 0 0 1484544 695816 0 0 0 0 0 0 1484544 695816 0 0 0 0 0 0 0 1484544 695816 0 0 0 0 0 faults in sy 199 121 113 35 113 65 111 72 cpu cs us sy 92 46 45 44 id 88 99 99 99 The first line of every vmstat report is an average since boot time; it can be ignored for our purposes, and I'll be omitting it from future displays.[22] [22] You can define an alias to take care of this automatically Here's an example for the C shell: alias vm "/usr/bin/vmstat \!:* | awk 'NR!=4'" The report is organized into sections as follows: procs or kthr Statistics about active processes Together, the first three columns tell you how many processes are currently active memory Memory use and availability data page or swap Paging activity io or disk Per-device I/O operations faults or system or intr Overall 
system interrupt and context switching rates cpu Percentage of CPU devoted to system time, user time, and time the CPU remained idle AIX adds an additional column showing CPU time spent in idle mode while jobs are waiting for pending I/O operations Not all versions of vmstat contain all sections Table 15-5 lists the most important columns in vmstat's report Table 15-5 vmstat report contents Label(s) Meaning r Number of runnable processes b Number of blocked processes (idle because they are waiting for I/O) w Number of swapped-out runnable processes (should be 0) avm, act, swpd Number of active virtual memory pages (a snapshot at the current instant) For vmstat, a page is usually KB, regardless of the system's actual page size However, under AIX and HPUX, a vmstat page is KB fre, free Number of memory pages on the free list re Number of page reclaims: pages placed on the free list but reclaimed by their owner before the page was actually reused pi, si, pin Number of pages paged in (usually includes process startup) po, so, pout Number of pages paged out (if greater than zero, the system is paging) fr Memory pages freed by the virtual memory management facility during this interval dn Disk operations per second on disk n Sometimes, the columns are named for the various disk devices rather than in this generic way (e.g., adn under FreeBSD) Not all versions of vmstat include disk data cs Number of context switches us Percentage of total CPU time spent on user processes sy Percentage of total CPU time spent as system overhead id Idle time percentage (percentage of CPU time not used) Here are examples of the output format for each of our systems: AIX kthr memory page faults cpu - - - r b avm fre re pi po fr sr cy in sy cs us sy id wa 0 149367 847219 0 0 0 109 258 11 18 72 HP-UX procs r b w avm 0 228488 Linux procs r b w swpd 0 FreeBSD procs r b w 0 Solaris memory free re 120499 free 4280 memory avm fre 5392 32500 page po at buff 5960 memory cache 48296 page flt re pi pi swap so fr 10 si po fr de sr io bo bi disks sr ad0 ad1 0 in 1021 sy 44 system in cs 101 123 faults in sy 229 faults cs 29 us sy cpu us sy id 14 86 cpu id 99 cpu cs us sy id 99 kthr memory r b w swap free re 0 695496 187920 Tru64 page disk mf pi po fr de sr dd f0 s0 -1 0 0 0 Virtual Memory Statistics: (pagesize = 8192) procs memory pages r w u act free wire fault cow zero react 135 31 15K 10K 5439 110M 8M 52M 637K faults sy 34 cpu cs us sy id 45 0 100 intr in sy 953 cpu cs us sy id 1K 98 in 402 pin pout 42M 63K Note that some versions have additional columns We'll look at interpreting vmstat output in the next subsection 15.4.1.3 Recognizing memory problems You can expect memory usage to vary quite a lot in the course of normal system operations Short-term memory usage spikes are normal and to be expected In general, one or more of the following symptoms may suggest a significant shortage of memory resources when they appear regularly and/or persist for a significant period of time: Available memory drops below some acceptable threshold On an interactive system this may be 5%15% However, on a system designed for computation, a steady free memory amount of 5% may be fine Significant paging activity The most significant metrics in this case are writes to the page file (pageouts) and reads from the page file (although most systems don't provide the latter statistic) The system regularly thrashes, even if only for short periods of time The page file gradually increases in size or remains at a high usage level under normal operations 
This can indicate that additional paging space is needed or that memory itself is in low supply In practical terms, let's consider specific parts of the vmstat output: In general, the number in the w column should be 0, indicating no runnable swapped-out processes; if it isn't, the system has a serious memory shortage The po column is the most important in terms ofpaging: it indicates the number of page-outs and should ideally be very close to zero If it isn't, processes are contending for the available memory and the system is paging Paging activity is also reflected in significant decreases in the amount of free memory (fre) and in the number of page reclaims (re)—memory pages taken away from one process because another one needs them even though the first process needs them too High numbers in the page-ins column (pi) are not always significant because starting up a process involves paging in its executable image and data [23] When a new process starts, this column will jump up but then quickly level off again [23] The AIX version of vmstat limits pi to page-ins from paging space The following is an example of the effect mentioned in the final bullet: $ vmstat procs memory r b w avm fre 81152 17864 1 98496 15624 0 84160 11648 0 74784 9600 0 74464 5984 0 78688 5472 1 60480 16032 ^C Output is edited re 0 0 0 page pi po 0 192 320 320 64 0 0 At the second data line, a compile job starts executing There is a jump in the number of page-ins, and the available memory (fre) drops sharply Once the job gets going, the page-ins drop back to zero, although the free list size stays small When the job ends, its memory returns to the free list (final line) Check your system's documentation to determine whether process startup paging is included in vmstat's paging data Here is some output from a system briefly under distress: $ vmstat procs memory r b w avm fre 1 43232 31296 46560 32512 0 82496 2848 81568 2304 72480 2144 72640 2112 73280 3328 54176 19552 ^C re 0 2 0 0 page pi po 0 0 384 608 384 448 96 96 64 32 0 32 Some columns omitted cpu us sy id 97 95 37 58 63 43 71 23 12 76 12 23 26 51 34 65 At the beginning of this report, this system was running well, with no paging activity at all Then several new processes start up (line 5), both page-in and page-out activity increases, and the free list shrinks This system doesn't have enough memory for all the jobs that want to run at this point, which is also reflected in the size of the free list By the end of this report, however, things are beginning to calm down again as these processes finish 15.4.1.4 The filesystem cache Most current Unix implementations use any free memory as adata cache for disk I/O operations in an effort to maximize I/O performance Recently accessed data is kept in memory for a time in case it is needed again, as long as there is sufficient memory to so However, this is the first memory to be freed if more memory is needed This tactic improves the performance of local processes and network system access operations However, on systems designed for computation, such memory may be better used for user jobs On many systems, you can configure the amount of memory that is used in this way, as we'll see 15.4.2 Configuring the Virtual Memory Manager Some Unix variations allow you to specify some of the parameters that control the way the virtual memory manager operates We consider each Unix version individually in the sections that follow These operations require care and thought and should be initially tried on nonproduction systems 
Recklessness and carelessness will be punished 15.4.2.1 AIX AIX provides commands for customizing some aspects of the Virtual Memory Manager You need to be cautious when modifying any of the system parameters discussed in this section, because it is quite possible to make the system unusable or even crash if you give invalid values Fortunately, changes made with the commands in the section last only until the system is rebooted AIX's schedtune command (introduced in the previous section of this chapter) can be used to set the values of various Virtual Memory Manager (VMM) parameters that control how the VMM responds to thrashing conditions In general, its goal is to detect such conditions and deal with them before they get completely out of hand (for example, a temporary spike in memory usage can result in thrashing for many minutes if nothing is done about it) The VMM decides that the system is thrashing when the fraction of page steals (pages grabbed while they were still in use) that are actually paged out to disk [24] exceeds some threshold value When this happens, the VMM begins suspending processes until thrashing stops.[25] It tries to select processes to suspend that are both having an effect on memory performance and whose absence will actually cause conditions to improve It chooses processes based on their own repage rates: when the fraction of its page faults are for pages that have been previously paged out rises above a certain value—by default, one fourth—a process becomes a candidate for suspension Suspended processes are resumed once system conditions have improved and remained stable for a certain period of time (by default, second) [24] Computed as po/fr, using the vmstat display fields [25] Suspended processes still consume memory, but they stop paging Without any arguments, schedtune displays the current values of all of the parameters under its control, including those related to memory load management Here is an example of its output: # schedtune THRASH -h -p -m SYS PROC MULTI SUSP -w -e WAIT GRACE FORK -f TICKS 10 CLOCK SCHED_FIFO2 IDLE MIGRATION -c -a -b %usDELTA AFFINITY_LIM BARRIER/16 100 -d SCHED_D 16 SCHED -r SCHED_R 16 FIXED_PRI -F GLOBAL Table 15-6 summarizes the meanings of the thrashing-related parameters -t -s TIMESLICE MAXSPIN 16384 Table 15-6 AIX VMM parameters Option Label Meaning -h SYS Memory is defined as overcommitted when page writes/total page steals > 1/-h Setting this value to disables the thrash recovery mechanisms (which is the default) -p PROC A process may be suspended during thrashing conditions when its repages/page faults > 1/-p This parameter defines when an individual process is thrashing The default is -m MULTI Minimum number of processes to remain running even when the system is thrashing The default is -w WAIT Number of seconds to wait after thrashing ends (as defined by -h ) before any reactivating suspended processes The default is -e GRACE Number of seconds after reactivation before a process may be suspended again The default is Currently, the AIX thrashing recovery mechanisms are disabled by default In general, it is better to prevent memory overuse problems than to recover from them However, this is not always possible, so you may find this feature useful on very busy, heavily loaded systems To enable it, set the value of -h to (the previous AIX default value) For most systems, it is not necessary to change the default values of the other thrashing control parameters However, if you have clear evidence that the VMM is 
For most systems, it is not necessary to change the default values of the other thrashing control parameters. However, if you have clear evidence that the VMM is systematically behaving either too aggressively or not aggressively enough in deciding whether memory has become overcommitted, you might want to experiment with small changes, beginning with -h or -p. In some cases, increasing the value of -w may be beneficial on systems running a large number of processes. I don't recommend changing the value of -m.

The vmtune command allows the system administrator to customize some aspects of the behavior of the VMM's page replacement algorithm. vmtune is located in the same directory as schedtune: /usr/samples/kernel. Without options, the command displays the values of various memory management parameters:

# vmtune
vmtune:  current values:
   -p        -P        -f        -F
 minperm   maxperm   minfree   maxfree
  209507    838028       120       128
 ...                                            Output is edited

number of valid memory pages = 1048561      maxperm=79.9% of real memory
maximum pinable=80.0% of real memory        minperm=20.0% of real memory
number of file memory pages = 42582         numperm=4.1% of real memory
number of client memory pages = 46950       numclient=4.5% of real memory
                                            maxclient=79.9% of real memory

These are vmtune's most useful options for memory management:

-f minfree
Minimum size of the free list, a set of memory pages set aside for new pages required by processes (used to satisfy page faults). When the free list falls below this threshold, the VMM must steal pages from running processes to replenish it. The default is 120 pages.

-F maxfree
Page stealing stops when the free list reaches or exceeds this size. The default is 128 pages.

-p minperm
Threshold below which both computational and file pages are candidates for stealing (expressed as a percentage of the system's total physical memory). The default is 18%-20% (depending on memory size).

-P maxperm
Threshold above which only file pages are stolen (expressed as a percentage of the system's total physical memory). The default is 75%-80%.

The second pair of parameters determines, to a certain extent, which sorts of memory pages are stolen when the free list needs to be replenished. AIX distinguishes between computational memory pages, which consist of program working storage (non-file-based data) and program text segments (the executable's in-memory image), and file pages, which are all other kinds of memory pages (all of them backed by disk files). By default, the VMM attempts to slightly favor computational pages over file pages when selecting pages to steal, according to the following scheme:

Steal both types when:       %file < minperm, or
                             minperm < %file < maxperm and file repaging > computational repaging
Steal file pages only when:  %file > maxperm, or
                             minperm < %file < maxperm and file repaging < computational repaging

%file is the percentage of memory pages that are file pages. Repage rates are the fraction of page faults that reference stolen or replaced memory pages rather than new pages (determined from the VMM's limited history of pages that have recently been present in memory).

It may make sense to reduce maxperm on computationally oriented systems.
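For example, you might cap file pages at half of memory on such a system. This is only a sketch: the value 50 is purely illustrative, and it assumes the AIX 5.x vmtune, whose -P option takes a percentage of real memory:

# /usr/samples/kernel/vmtune -P 50         Steal only file pages once they occupy 50% of memory

Like schedtune changes, this setting lasts only until the next reboot, so test it interactively before adding it to a startup script.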
15.4.2.2 FreeBSD

On FreeBSD systems, kernel variables may be displayed and modified with the sysctl command (and set at boot time via its configuration file, /etc/sysctl.conf). For example, the following commands display and then reduce the value for the maximum number of simultaneous processes allowed per user:

# sysctl kern.maxprocperuid
kern.maxprocperuid: 531
# sysctl kern.maxprocperuid=64
kern.maxprocperuid: 531 -> 64

Such a step might make sense on systems where users need to be prevented from overusing or abusing system resources (although, in itself, this step would not solve such a problem). The following line in /etc/sysctl.conf performs the same function:

kern.maxprocperuid=64

Figure 15-3 lists the kernel variables related to paging activity and the interrelationships among them.

Figure 15-3. FreeBSD memory management levels

Normally, the memory manager tries to maintain at least vm.v_free_target free pages. The pageout daemon, which suspends processes when memory is short, wakes up when free memory drops below the level specified by vm.v_free_reserved (it sleeps otherwise). When it runs, it tries to achieve the total number of free pages specified by vm.v_inactive_target. The default values of these parameters depend on the amount of physical memory in the system. On a 98 MB system, they have the following settings:

vm.v_inactive_target: 1524         Units are pages
vm.v_free_target: 1016
vm.v_free_min: 226
vm.v_free_reserved: 112
vm.v_pageout_free_min: 34

Finally, the variables vm.v_cache_min and vm.v_cache_max specify the minimum and maximum sizes of the filesystem buffer cache (the defaults are 1016 and 2032 pages, respectively, on a 98 MB system). The cache can grow dynamically between these limits if free memory permits. If the cache size falls significantly below the minimum, the pageout daemon is awakened. You may decide to increase one or both of these values if you want to favor the cache over user processes in memory allocation. Increase the maximum first; changing the minimum level requires great care and an understanding of the memory manager internals.

15.4.2.3 HP-UX

On HP-UX systems, kernel parameters are set with the kmtune command. Paging is controlled by three variables, in the following way:

lotsfree    free memory >= lotsfree    Page stealing stops
desfree     free memory < lotsfree     Page stealing occurs
minfree     free memory < desfree      Anti-thrashing measures taken, including process
                                       deactivation (in addition to page stealing)

The default values for these variables are set by HP-UX, depend on the amount of physical memory in the system, and are specified in pages. The documentation strongly discourages modifying them.

HP-UX can use either a statically or a dynamically sized buffer cache (the latter is the default and is recommended). A dynamic cache is used when the variables nbuf and bufpages are both set to 0. In this case, you can specify the minimum and maximum percentages of memory used for the cache via the variables dbc_min_pct and dbc_max_pct, which default to 5% and 50%, respectively. Depending on the extent to which you want to favor the cache or user processes in allocating memory, modifying the maximum value may make sense.
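For example, here is a hedged sketch of lowering the cache ceiling; it assumes the kmtune -q (query) and -s (set) options of HP-UX 11, and the value 25 is purely illustrative:

# kmtune -q dbc_max_pct                    Display the current ceiling
# kmtune -s dbc_max_pct=25                 Request a smaller maximum buffer cache

Depending on the HP-UX release, a change to this parameter may not take effect until the kernel is rebuilt and the system rebooted, so plan it for a maintenance window.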
15.4.2.4 Linux

On Linux systems, modifying kernel parameters is done by changing the values within files in /proc/sys and its subdirectories (as we've seen previously). For memory management, the relevant files are located in the vm subdirectory. These are the most important of them:

freepages
This file contains three values specifying a minimum free page level, a low free page level, and a desired free page level. When there are fewer free pages than the minimum, user processes are denied additional memory. Between the minimum and low levels, aggressive paging (page stealing) takes place, while between the low and desired levels, "gentle" paging occurs. Above the desired (highest) level, page stealing stops. The default values (in pages) depend on the amount of physical memory in the system, but they scale roughly as x, 2x, and 3x. Successfully modifying these values requires a thorough knowledge of both the Linux memory subsystem and the system workload, and doing so is not recommended for the faint of heart.

buffermem
Specifies the amount of memory to be used for the filesystem buffer cache. The three values specify the minimum amount, the borrow percentage, and the maximum amount; they default to 2%, 10%, and 60%, respectively. When memory is short and the size of the buffer cache exceeds the borrow percentage, pages will be stolen from the buffer cache until its size drops below that level. If you want to favor the buffer cache over processes in allocating memory, increasing the borrow and/or maximum levels may make sense. On the other hand, if you want to favor processes, reducing the maximum and setting the borrow level close to it makes more sense.
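As an illustration of the second approach, here is a hedged sketch for a 2.2/2.4-era kernel that still provides this file; the replacement values are purely illustrative:

# cat /proc/sys/vm/buffermem
2       10      60
# echo "2 18 20" > /proc/sys/vm/buffermem      Lower the maximum; set borrow just below it

To make such a setting persistent, the equivalent assignment (vm.buffermem = 2 18 20) can be placed in /etc/sysctl.conf on distributions that run sysctl at boot.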
overcommit_memory
Setting the value in this file to 1 allows processes to allocate amounts of memory larger than can actually be accommodated (the default is 0). Some application programs allocate huge amounts of ...