
Principles of Network and System Administration, 2nd edition (part 6)

8.9 Game-theoretical strategy selection

Figure 8.14: The absolute values of payoff contributions as a function of time (in hours), for daily tidying T_p = 24. User numbers are set in the ratio (n_g, n_b) = (99, 1), based on rough ratios from the author's College environment, i.e. one percent of users are considered mischievous. The filling rates are in the same ratio: r_b/R_tot = 0.99, r_g/R_tot = 0.01, r_a/R_tot = 0.1. The flat dot-slashed line is |π_q|, the quota payoff. The lower wavy line is the cumulative payoff resulting from good users, while the upper line represents the payoff from bad users. The upper line doubles as the magnitude of the payoff |π_a| ≥ |π_u|, if we apply the restriction that an automatic system can never win back more than users have already taken. Without this restriction, |π_a| would be steeper. As drawn, the daily ripples of the automatic system are in phase with the users' activity. This is not realistic, since tidying would normally be done at night when user activity is low; however, such details need not concern us in this illustrative example.

The policy created in setting up the rules of play for the game penalizes the system administrator for employing strict quotas which restrict users' activities. Even so, users do not gain much from this, because quotas are constant for all time. A quota is a severe handicap to users in the game, except for very short times before users reach their quota limits. Quotas could be considered cheating by the system administrator, since they determine the final outcome even before play commences. There is no longer an adaptive allocation of resources. Users cannot create temporary files which exceed these hard and fast quotas. An immunity-type model which allows fluctuations is a more resource-efficient strategy in this respect, since it allows users to span all the available resources for short periods of time, without consuming them for ever.

According to the minimax theorem, proved by John von Neumann, any two-person zero-sum game has a solution, either in terms of a pair of optimal pure strategies or as a pair of optimal mixed strategies [225, 96]. The solution is found as the balance between one player's attempt to maximize his payoff and the other player's attempt to minimize the opponent's result. In general, one can say of the payoff matrix that

    \max_{\downarrow} \min_{\rightarrow} \pi_{rc} \le \min_{\rightarrow} \max_{\downarrow} \pi_{rc},    (8.22)

where the arrows refer to the directions of increasing rows (↓) and columns (→). The left-hand side is the least users can hope to win (or conversely the most that the system administrator can hope to keep) and the right is the most users can hope to win (or conversely the least the system administrator can hope to keep). If we have

    \max_{\downarrow} \min_{\rightarrow} \pi_{rc} = \min_{\rightarrow} \max_{\downarrow} \pi_{rc},    (8.23)

it implies the existence of a pair of single, pure strategies (r*, c*) which are optimal for both players, regardless of what the other does. If the equality is not satisfied, then the minimax theorem tells us that there exist optimal mixtures of strategies, where each player selects at random from a number of pure strategies with a certain probability weight.
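As an illustration of the inequality in eqns. (8.22)-(8.23), the following minimal Python sketch computes both sides for a payoff matrix and reports whether a saddle point, i.e. a pair of optimal pure strategies, exists. The matrix entries here are invented purely for illustration; they are not the payoffs discussed in the text.

    # Sketch: test a payoff matrix for a saddle point (pure-strategy solution).
    # Rows are the users' strategies, columns the system administrator's.
    payoff = [
        [0.5, 0.2, 0.1],
        [0.4, 0.3, 0.3],
        [0.6, 0.1, 0.2],
    ]

    # The least the row player can guarantee: max over rows of the row minima.
    maxmin = max(min(row) for row in payoff)

    # The most the row player can be held to: min over columns of the column maxima.
    ncols = len(payoff[0])
    minmax = min(max(row[c] for row in payoff) for c in range(ncols))

    print("max min =", maxmin, " min max =", minmax)
    if maxmin == minmax:
        print("Saddle point: a pair of optimal pure strategies exists (eqn. 8.23).")
    else:
        print("No saddle point: optimal mixed strategies exist instead.")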
The situation for our time-dependent example matrix is different for small t and for large t. The distinction depends on whether users have had time to exceed fixed quotas or not; thus 'small t' refers to times when users are not impeded by the imposition of quotas. For small t, one has:

    \max_{\downarrow} \min_{\rightarrow} \pi_{rc}
      = \max_{\downarrow}
        \begin{pmatrix}
          \pi_g - \frac{1}{2} \\
          \frac{1}{2} + \pi_u + \pi_a \\
          \frac{1}{2} + \pi_u \\
          \frac{1}{2} + \pi_u + \pi_a \theta(t_0 - t)
        \end{pmatrix}
      = \frac{1}{2} + \pi_u.    (8.24)

The ordering of sizes in the above minimum vector is:

    \frac{1}{2} + \pi_u \ge \frac{1}{2} + \pi_u + \pi_a \theta(t_0 - t) \ge \pi_u + \pi_a \theta(t_0 - t) \ge \pi_g - \frac{1}{2}.    (8.25)

For the opponent's endeavors one has

    \min_{\rightarrow} \max_{\downarrow} \pi_{rc} = \min_{\rightarrow} \left( \frac{1}{2} + \pi_u,\ \frac{1}{2} + \pi_u,\ \frac{1}{2} + \pi_u,\ \pi_q \right) = \frac{1}{2} + \pi_u.    (8.26)

This indicates that the equality in eqn. (8.23) is satisfied and there exists at least one pair of pure strategies which is optimal for both players. In this case, the pair is for users to conceal files, regardless of how the system administrator tidies files (the system administrator's strategies all contribute the same weight in eqn. (8.26)). Thus for small times, the users are always winning the game if one assumes that they are allowed to bluff by concealment. If the possibility of concealment or bluffing is removed (perhaps through an improved technology), then the next best strategy is for users to bluff by changing the date, assuming that the tidying looks at the date. In that case, the best system administrator strategy is to tidy indiscriminately at threshold.

For large times (when system resources are becoming or have become scarce), the situation looks different. In this case one finds that

    \max_{\downarrow} \min_{\rightarrow} \pi_{rc} = \min_{\rightarrow} \max_{\downarrow} \pi_{rc} = \pi_q.    (8.27)

In other words, the quota solution determines the outcome of the game for any user strategy. As already commented, this might be considered cheating or poor use of resources, at the very least. If one eliminates quotas from the game, then the results for small times hold also at large times.

8.10 Monitoring

Having set policy and implemented it to some degree, it is important to verify the success of this programme by measuring the state of the system. Various monitoring tools exist for this purpose, depending upon the level at which we wish to evaluate the system:

• Machine performance level
• Abstract policy level.

While these two levels are never unrelated, they pose somewhat different questions. A very interesting idea which might be used both in fault diagnosis and security intrusion detection is the idea of anomaly detection. In anomaly detection we are looking for anything abnormal. That could come from abnormal traffic, patterns of kernel activity, or changes in the statistical profiles of usage. An anomaly can be responded to as a punishable offence, or as a correctable transgression that leads to regulation of behavior, depending on its nature and the policy of the system administrator (see figure 8.15). Automated self-regulation in host management has been discussed in refs. [41, 42, 44, 48], as well as adaptive behavior [274] and network intrusion detection [102, 156]. In their insightful paper [159], Hoogenboom and Lepreau anticipated the need for monitoring time series data with feedback regulation in order to adjust policy automatically. Today much effort is aimed at detecting anomalies for security related intrusion detection rather than for general maintenance, or capacity planning. This has focused attention on mainly short-term changes; however, long-term changes can also be of interest in connection with maintenance of host state and its adaptability to changing demand. SNMP tools such as MRTG, RRDtool and Cricket specialize in collecting data from SNMP devices like routers and switches.
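The statistical idea behind this kind of anomaly detection can be sketched very simply: keep a record of recent behavior and flag values that deviate strongly from it. The following Python fragment is only an illustration of the principle (it is not the algorithm of any particular tool), with an arbitrary window size and threshold:

    # Sketch: flag samples that deviate strongly from recent history.
    from collections import deque
    from statistics import mean, stdev

    WINDOW = 100      # number of recent samples forming the baseline
    THRESHOLD = 3.0   # deviations (in standard deviations) considered abnormal

    history = deque(maxlen=WINDOW)

    def observe(value):
        """Return True if 'value' looks anomalous relative to recent history."""
        anomalous = False
        if len(history) >= 10:                  # need some history before judging
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > THRESHOLD * sigma:
                anomalous = True
        history.append(value)
        return anomalous

    # Example: a stream of, say, connection counts with one obvious spike.
    for sample in [12, 15, 11, 14, 13, 12, 16, 14, 13, 15, 240, 14]:
        if observe(sample):
            print("anomaly:", sample)

Whether such an anomaly is then treated as a punishable offence or a correctable transgression is, as noted above, a question of policy rather than of measurement.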
Cfengine's environment daemon adopts a less deterministic approach to anomaly detection over longer time scales, which can be used to trigger automated policy countermeasures [50]. For many, monitoring means feeding a graphical representation of the system to a human in order to provide an executive summary of its state.

Figure 8.15: An average summary of system activity over the course of a week (average value plotted against time in hours, from 0 to 168), as generated by cfengine's environment daemon.

8.11 System performance tuning

When is a fault not a fault? When it is an inefficiency. Sooner or later, user perception of system performance passes a threshold. Beyond that threshold we deem the performance of a computer to be unacceptably slow and we become irritated. Long before that happens, the system itself recognizes the symptoms of a lack of resources and takes action to try to counter the problem, but not always in the way we would like.

Efficiency and users' perception of efficiency are usually two separate things. The host operating system itself can be timesharing perfectly and performing real work at a break-neck pace, while one user sits and waits for minutes for something as simple as a window to refresh. For anyone who has been in this situation, it is painfully obvious that system performance is a highly subjective issue. If we aim to please one type of user, another will be disappointed. To extract maximal performance from a host, we must focus on specific issues and make particular compromises.

Note that the system itself is already well adjusted to share resources: that is what a kernel is designed to do. The point of performance tuning is that what is good for one task is not necessarily good for another. Generic kernel configurations try to walk the line of being adequate for everyone, and in doing so they are not great at any one thing in particular. The only way we can truly achieve maximal performance is to specialize. Ideally, we would have one host per task and optimize each host for that one task. Of course this is a huge waste of resources, which is why multitasking operating systems exist. Sharing resources between many tasks inevitably means striking a compromise. This is the paradox of multitasking.

Whole books have been written on the subject of performance tuning, so we shall hardly be able to explore all of the avenues of the topic in a brief account. See for instance refs. [159, 97, 200, 307, 16, 318, 293, 266]. Our modest aim in this book is, as usual, to extract the essence of the topic, pointing fingers at the key performance bottlenecks. If we are to tune a system, we need to identify what it is we wish to optimize, i.e. what is most important to us. We cannot make everything optimal, so we must pick out a few things which are most important to us, and work on those. System performance tuning is a complex subject, in which no part of the system is sacrosanct. Although it is quite easy to pin-point general performance problems, it is harder to make general recommendations to fix these. Most details are unique to each operating system. A few generic pointers can nonetheless offer the greatest and most obvious gains, while the tweaking of system-dependent parameters will put the icing on the cake. In order to identify a problem, we must first measure the performance.
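The weekly average in figure 8.15 is one example of such a measurement: periodic samples of some activity metric, averaged by the time of week at which they were taken. A minimal Python sketch of the bookkeeping (an illustration only, not the environment daemon's actual method, and assuming samples arrive as Unix timestamp/value pairs):

    # Sketch: accumulate an average weekly profile, binned by hour of the week.
    import time

    HOURS_PER_WEEK = 168
    totals = [0.0] * HOURS_PER_WEEK   # running sum of samples per hour-of-week bin
    counts = [0] * HOURS_PER_WEEK     # number of samples per bin

    def record(timestamp, value):
        """Add one measurement, e.g. record(time.time(), current_load)."""
        t = time.gmtime(timestamp)
        bin_index = t.tm_wday * 24 + t.tm_hour   # 0 = Monday 00:00 UTC
        totals[bin_index] += value
        counts[bin_index] += 1

    def weekly_profile():
        """Average value for each of the 168 hours of the week."""
        return [totals[i] / counts[i] if counts[i] else 0.0
                for i in range(HOURS_PER_WEEK)]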
Again there are the two issues: user perception of performance (interactive response time) and system throughput, and we have to choose the criterion we wish to meet. When the system is running slowly, it is natural to look at what resources are being tested, i.e.

• What processes are running
• How much available memory the system has
• Whether disks are being used excessively
• Whether the network is being used heavily
• What software dependencies the system has (e.g. DNS, NFS).

The last point is easy to overlook. If we make one host dependent on another, then the dependent host will always be limited by the host on which it depends. This is particularly true of file-servers (e.g. NFS, DFS, Netware distributed filesystems) and of the DNS service.

Principle 48 (Symptoms and cause). Always try to fix problems at the root, rather than patching symptoms.

8.11.1 Resources and dependencies

Since all resources are scheduled by processes, it is natural to check the process table first and then look at resource usage. On Windows, one has the process manager and performance monitor for this. On Unix-like systems, we check the process listing with ps aux, if a BSD-compatible ps command exists, or ps -efl if the system is derived from System V. If the system has both, or a BSD-compatible output mode, as in Solaris and Digital Unix (OSF1), for instance, then the BSD-style output is recommended. This provides more useful information and orders the processes so that the heaviest process comes at the top. This saves time. Another useful Unix tool is top. A BSD process listing looks like this:

    host% ps aux | more
    USER     PID %CPU %MEM   SZ  RSS TT    S  START     TIME COMMAND
    root       3  0.2  0.0    0    0 ?     S  Jun 15   55:38 fsflush
    root   22112  0.1  0.5 1464 1112 pts/2 O  15:39:54   0:00 ps aux
    mark   22113  0.1  0.3 1144  720 pts/2 O  15:39:54   0:00 more
    root     340  0.1  0.4 1792  968 ?     S  Jun 15     3:13 /bin/fingerd

This one was taken on a quiet system, with no load. The columns show the user ID of the process, the process ID, an indication of the amount of CPU time used in executing the program (the percentage scale can be taken with a pinch of salt, since it means different things for different kernels), and an indication of the amount of memory allocated. The SZ field is the size of the process in total (code plus data plus stack), while RSS is the resident size, or how much of the program code is actually resident in RAM, as opposed to being paged out, or never even loaded. TIME shows the amount of CPU time accumulated by the process, while START indicates the clock time at which the process started. Problem processes are usually identified by:

• %CPU is large: a CPU-intensive process, or a process which has gone into an endless loop.
• TIME is large: a program which has been CPU intensive, or which has been stuck in a loop for a long period.
• %MEM is large.
• SZ is large: a large and steadily growing value can indicate a memory leak.

One thing we notice is that the ps command itself uses quite a lot of resources. If the system is low on resources, running constant process monitoring is an expensive intrusion.
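Checks like those in the list above are easily scripted. The following Python sketch flags processes with high CPU or memory percentages; the thresholds are arbitrary illustrative values, and the field positions assume a BSD-style ps aux output like the listing shown above:

    # Sketch: flag potential problem processes from BSD-style 'ps aux' output.
    import subprocess

    CPU_LIMIT = 50.0   # percent of CPU considered worth a closer look
    MEM_LIMIT = 20.0   # percent of memory considered worth a closer look

    output = subprocess.run(["ps", "aux"], capture_output=True, text=True).stdout

    for line in output.splitlines()[1:]:          # skip the header line
        fields = line.split(None, 10)             # COMMAND (last field) may contain spaces
        if len(fields) < 11:
            continue
        user, pid = fields[0], fields[1]
        cpu, mem = float(fields[2]), float(fields[3])
        command = fields[10]
        if cpu > CPU_LIMIT or mem > MEM_LIMIT:
            print("check %s pid %s: %%CPU=%.1f %%MEM=%.1f %s" % (user, pid, cpu, mem, command))

Such a script is of course itself a monitoring process, so on a heavily loaded system it should be run sparingly, for the reason just mentioned.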
Unix-like systems also tell us about memory performance through the virtual memory statistics, e.g. the vmstat command. This command gives a different output on each operating system, but summarizes the amount of free memory as well as paging performance etc. It can be used to get an idea of whether or not the system is paging a lot (a sign that memory is low). Another way of seeing this is to examine the amount of swap space which is in use:

    OS                        List virtual memory usage
    AIX                       lsps -a
    HPUX                      swapinfo -t -a -m
    Digital Unix/OSF1         swapon -s
    Solaris 1 or SunOS 3/4    pstat -s
    Solaris 2 or SunOS 5      swap -l
    GNU/Linux                 free
    Windows                   Performance manager

Excessive network traffic is also a cause of impaired performance. We should try to eliminate unnecessary network traffic whenever possible. Before any complex analysis of network resources is undertaken, we can make sure that we have covered the basics:

• Make sure that there is a DNS server on each large subnet to avoid sending unnecessary queries through a router. (On small subnets this would be overkill.)
• Make sure that the nameservers themselves use the loopback address 127.0.0.1 as the primary nameserver on Unix-like hosts, so that we do not cause collisions by having the nameserver talk to itself on the public network.
• Try to avoid distributed file accesses on a different subnet. This loads the router. If possible, file-servers and clients should be on the same subnet.
• If we are running X-windows, make sure that each workstation has its DISPLAY variable set to :0.0 rather than hostname:0.0, to avoid sending data out onto the network, only to come back to the same host.

Some operating systems have nice graphical tools for viewing network statistics, while others have only netstat, with its varying options. Collision statistics can be seen with netstat -i for Unix-like OSs or netstat /S on Windows. DNS efficiency is an important consideration, since all hosts are more or less completely reliant on this service.

Measuring performance reliably, in a scientifically stringent fashion, is a difficult problem (see chapter 13), but adequate measurements can be made, for the purpose of improving efficiency, using the process tables and virtual memory statistics. If we see frantic activity in the virtual memory system, it means that we are suffering from a lack of resources, or that some process has run amok. Once a problem is identified, we need a strategy for solving it. Performance tuning can involve everything from changing hardware to tweaking software.

• Optimizing choice of hardware
• Optimizing chosen hardware
• Optimizing kernel behavior
• Optimizing software configurations
• (Optimizing service availability).

Hardware has physical limitations. For instance, the heads of a hard-disk can only be in one place at a time. If we want to share a hard-disk between two processes, the heads have to be moved around between two regions of the disk, back and forth. Moving the read heads over the disk platter is the slowest operation in disk access and perhaps the computer as a whole, and unfortunately something we can do nothing about. It is a fundamental limitation. Moreover, to get the data from disk into RAM, it is necessary to interrupt processes and involve the kernel. Time spent executing kernel code is time not spent on executing user code, and so it is a performance burden. Resource sharing is about balancing overheads. We must look for the sources of overheads and try to minimize them, or mitigate their effects by cunning.
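One rough way to see how much time goes to kernel overhead rather than real work is to sample the CPU accounting counters. The sketch below is Linux-specific (it reads /proc/stat; other systems expose the same numbers through different interfaces such as vmstat) and simply reports the fractions of CPU time spent in user code, kernel code and idle between two samples:

    # Sketch (Linux only): fractions of CPU time in user, system and idle state.
    import time

    def cpu_times():
        """Return (user, system, idle, total) jiffy counters from /proc/stat."""
        with open("/proc/stat") as f:
            fields = f.readline().split()       # first line: cpu user nice system idle ...
        values = list(map(int, fields[1:]))
        user, nice, system, idle = values[0], values[1], values[2], values[3]
        return user + nice, system, idle, sum(values)

    u1, s1, i1, t1 = cpu_times()
    time.sleep(5)
    u2, s2, i2, t2 = cpu_times()

    total = (t2 - t1) or 1
    print("user   %5.1f%%" % (100.0 * (u2 - u1) / total))
    print("system %5.1f%%" % (100.0 * (s2 - s1) / total))
    print("idle   %5.1f%%" % (100.0 * (i2 - i1) / total))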
8.11.2 Hardware

The fundamental principle of any performance analysis is:

Principle 49 (Weakest link). The performance of any system is limited by the weakest link amongst its components. System optimization should begin with the source.

If performance is weak at the source, nothing which follows can make it better. Obviously, any effect which is introduced after the source will only reduce the performance in a chain of data handling. A later component cannot 'suck' the data out of the source faster than the source wants to deliver it. This tells us that the logical place to begin is with the system hardware. A corollary to this principle follows from a straightforward observation about hardware. As Scotty said, we cannot change the laws of physics:

Corollary to principle (Performance). A system is limited by its slowest moving parts.

Resources with slowly moving parts, like disks, CD-ROMs and tapes, transfer data slowly and delay the system. Resources which work purely with electronics, like RAM memory and CPU calculation, are quick. However, electronic motion/communication over long distances takes much longer than communication over short distances (internally within a host) because of impedances and switching. Already, these principles tell us that RAM is one of the best investments we can make. Why? In order to avoid mechanical devices like disks as much as possible, we store things in RAM; in order to avoid sending unnecessary traffic over networks, we cache data in RAM. Hence RAM is the primary workhorse of any computer system. After we have exhausted the possibilities of RAM usage, we can go on to look at disk and network infrastructure.

• Disks: When assigning partitions to new disks, it pays to use the fastest disks for the data which are accessed most often, e.g. for user home directories. To improve disk performance, we can do two things. One is to buy faster disks and the other is to use parallelism to overcome the time it takes for physical motions to be executed. The mechanical problem which is inherent in disk drives is that the heads which read and write data have to move as a unit. If we need to collect two files concurrently which lie spread all over the disk, this has to be done serially. Disk striping is a technique whereby filesystems are spread over several disks. By spreading files over several disks, we have several sets of disk heads which can seek independently of one another, and work in parallel. This does not necessarily increase the transfer rate, but it does lower seek times, and thus performance improvement can approach as much as N times with N disks (a toy model of this is sketched just after this list). RAID technologies employ striping techniques and are widely available commercially. GNU/Linux also has RAID support. Spreading disks and files across multiple disk controllers will also increase parallelism.

• Network: To improve network performance, we need fast interfaces. All interfaces, whether they be Ethernet or some other technology, vary in quality and speed. This is particularly true in the PC world, where the number of competing products is huge. Network interfaces should not be trusted to give the performance they advertise. Some interfaces which are sold as 100Mbits/sec Fast Ethernet manage little more than 40Mbits/sec. Some network interfaces have intelligent behavior and try to detect the best available transmission rate. For instance, newer Sun machines use the hme fast Ethernet interface. This has the ability to detect the best transmission protocol for the line a host is connected to. The best transmission type is 100Mbits/sec, full duplex (simultaneous send and receive), but the interface will switch down to 10Mbits/sec, half duplex (send or receive, one direction at a time) if it detects a problem. This can have a huge performance effect. One problem with auto-detection is that, if both ends of the connection have auto-detection, it can become an unpredictable matter which speed we end up with. Sometimes it helps to try setting the rate explicitly, assuming that the network hardware supports that rate. There are other optimizations also, for TCP/IP tuning, which we shall return to below. Refs. [295, 312] are excellent references on this topic.
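The striping claim above can be made concrete with a toy model. The Python sketch below (all numbers invented for illustration) services a burst of seek-bound requests first on a single disk, then spread evenly over N independently seeking disks; the burst finishes roughly N times sooner in the striped case:

    # Toy model: a burst of seek-bound requests on 1 disk versus N striped disks.
    import random

    random.seed(1)
    N_DISKS = 4
    requests = [random.uniform(5.0, 15.0) for _ in range(1000)]   # ms per request

    single_disk = sum(requests)                    # one head assembly does everything

    per_disk = [0.0] * N_DISKS
    for i, cost in enumerate(requests):            # spread requests across the stripes
        per_disk[i % N_DISKS] += cost
    striped = max(per_disk)                        # done when the busiest disk finishes

    print("single disk: %.0f ms" % single_disk)
    print("striped over %d disks: %.0f ms (speedup %.1fx)"
          % (N_DISKS, striped, single_disk / striped))

In practice the gain is smaller, since requests are not spread perfectly evenly and transfer time is not reduced, but the parallel-seek effect is the essential point.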
The sharing of resources between many users and processes is what networking is about. The competition for resources between several tasks leads to another performance issue.

Principle 50 (Contention/competition). When two processes compete for a resource, performance can be dramatically reduced as the processes fight over the right to use the resource. This is called contention.

The benefits of sharing have to be weighed against the pitfalls. Contention could almost be called a strategy, in some situations, since there exist technologies for avoiding contention altogether. For example, Ethernet technology allows contention to take place, whereas Token Ring technology avoids it. We shall not go into the arguments for and against contention. Suffice it to say that many widely used technologies experience this problem.

• Ethernet collisions: Ethernet communication is like a television panel of politicians: many parties shouting at random, without waiting for others to finish. The Ethernet cable is a shared bus. When a host wishes to communicate with another host, it simply tries. If another host happens to be using the bus at that time, there is a collision and the host must try again at random until it is heard. This method naturally leads to contention for bandwidth. The system works quite well when traffic is low, but as the number of hosts competing for bandwidth increases, the probability of a collision increases in step (a toy calculation of this follows the list). Contention can only be reduced by reducing the amount of traffic on the network segment. The illusion of many collisions can also be caused by incorrect wiring, or incorrectly terminated cable, which leads to reflections. If collision rates are high, a wiring check might also be in order.

• Disk thrashing: Thrashing² is a problem which occurs because of the slowness of disk head movements, compared with the speed of kernel time-sharing algorithms. If two processes attempt to take control of a resource simultaneously, the kernel and its device drivers attempt to minimize the motion of the heads by queuing requested blocks in a special order. The algorithms really try to make the heads traverse the disk platter uniformly, but the requests do not always come in a predictable or congenial order. The result is that the disk heads can be forced back and forth across the disk, driven by different processes and slowing the system to a virtual standstill. The time for disk heads to move is an eternity to the kernel, some hundreds of times slower than context switching times. An even worse situation can arise with the virtual memory system. If a host begins paging to disk because it is low on memory, then there can be simultaneous contention both for memory and for disk. Imagine, for instance, that there are many processes, each loading files into memory, when there is no free RAM. In order to use RAM, some has to be freed by paging to disk; but the disk is already busy seeking files. In order to load a file, memory has to be freed, but memory cannot be freed until the disk is free to page; this drags the heads to another partition, then back again, and so on. This nightmare brings the system to a virtual standstill as it fights both over free RAM and disk head placement. The system spends more time juggling its resources than it does performing real work, i.e. the overhead-to-work ratio blows up. The only cure for thrashing is to increase memory, or reduce the number of processes contending for resources.
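The statement that collision probability grows in step with the number of competing hosts can be illustrated with an idealized slotted model (this is a back-of-the-envelope calculation, not a simulation of real CSMA/CD behavior): if each of N hosts transmits in a given time slot with probability p, the chance that a given transmission collides with someone else's is 1 - (1 - p)^(N-1).

    # Toy model: collision probability versus number of competing hosts.
    P_TRANSMIT = 0.05    # probability that any one host transmits in a slot (invented)

    for n_hosts in (2, 5, 10, 20, 50, 100):
        p_collision = 1.0 - (1.0 - P_TRANSMIT) ** (n_hosts - 1)
        print("%3d hosts: collision probability %.2f" % (n_hosts, p_collision))

Even with a modest per-host load, the collision probability climbs quickly as hosts are added to the segment, which is why reducing the traffic on the segment is the only real cure.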
A final point to mention in connection with disks is to do with standards. Disk transfer rates are limited by the protocols and hardware of the disk interfaces. This applies to the interfaces in the computer and to the interfaces in the disks. Most serious performance systems will use SCSI disks, for their speed (see section 2.2). However, there are many versions of the SCSI disk design. If we mix version numbers, the faster disks will be delayed by the slower disks while the bus is busy, i.e. the average transfer rate is limited by the weakest link, or the slowest disk. If one needs to support legacy disks together with new disks, then it pays to collect like disks with a special host for each type, or alternatively to buy a second disk controller, rather than to mix disks on the same controller.

8.11.3 Software tuning and kernel configuration

It is true that software is constrained by the hardware on which it runs, but it is equally true that hardware can only follow the instructions it has received from software. If software asks hardware to be inefficient, hardware will be inefficient. Software introduces many inefficiencies of its own. Hardware and software tuning are inextricably intertwined.

² For non-native English speakers, note the difference between thrash and trash. Thrashing refers to a beating, or the futile fight for survival, e.g. when drowning.