We can also use the exclamation mark (!). When we apply ! inside the square brackets, it indicates that we are looking for the complement of the bracketed expression (that is, all results except those that match the expression).

Try it Out: File Globbing

Perhaps the easiest way to understand the kind of things that file globbing allows us to do is to look at an example. In this example we'll create a number of files and then use different globbing expressions to select different subsets of those files for listing.

1. Create a temporary folder called numberfiles, and then set it to be the current working directory:

$ mkdir /home/<username>/numberfiles
$ cd /home/<username>/numberfiles

2. Now create ten files, named after the Italian words for the numbers 1 to 10. Use the touch command to do this:

$ touch uno due tre quattro cinque sei sette otto nove dieci

Use the ls command to list them all (by default, it lists them by name in alphabetical order):

$ ls
cinque dieci due nove otto quattro sei sette tre uno

3. Now let's group them in a number of different ways, using the metacharacters we just mentioned. First, list all the files that start with the letter s:

$ ls s*
sei sette

4. Next, use the ? metacharacter to select all files whose name consists of exactly three characters:

$ ls ???
due sei tre uno

5. Next, select all the files whose name starts with a vowel:

$ ls [aeiou]*
otto uno

6. Next, select all the files whose name starts with any character in the range a to f:

$ ls [a-f]*
cinque dieci due

7. Finally, select all the files whose name does not start with a vowel. The exclamation operator must appear within the square brackets:

$ ls [!aeiou]*
cinque dieci due nove quattro sei sette tre

How it Works

We've used the ls command here to demonstrate file globbing, because the output from ls shows the effects of the globbing very clearly. However, note that we can use file globbing with any command that expects filename or directory-name arguments. Let's look at each of the globbing expressions in turn.

We used the expression s* to match all files that begin with the letter s:

$ ls s*

This expression matches the file names sei and sette, and would even match a file called s if there were one, because the * matches any string of any length (including the zero-length string).

To match filenames with exactly three characters, we use a ? to represent each character:

$ ls ???

We used the expression [aeiou]* to pick up all filenames starting with a vowel. The * works in the same way as in the s* example, matching any string of any length, so files matching this expression begin with a, e, i, o, or u, followed by any other sequence of characters:

$ ls [aeiou]*

A similar approach applies for the expression [a-f]*, except that we use a hyphen (-) within the brackets to express any one of the characters in a range:

$ ls [a-f]*

Using a range implies that the characters have an assumed order. In fact, this order encompasses all alphanumeric characters, with numbers (0-9) preceding letters (a-z). (Hence the expression [0-z]* would match all filenames that start with either a number or a letter.)

Finally, we use the exclamation mark (!) within the square brackets to negate the vowel-matching expression, thereby arriving at all filenames that start with a consonant:

$ ls [!aeiou]*
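To underline the point that globbing is performed by the shell and therefore works with any command, here is a minimal sketch using the same ten files; the archive directory is just a name invented for this illustration:

$ mkdir archive
$ cp [!aeiou]* archive/
$ ls archive
cinque dieci due nove quattro sei sette tre

The shell expands [!aeiou]* into the list of matching filenames before cp ever runs, so cp simply receives the eight matching names followed by the target directory. (The archive directory itself starts with a vowel, so it is not caught by the pattern.)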
Aliases

Aliases are our first step toward customizing Bash. In its simplest form, an alias is an abbreviation for a commonly used command; in more complex cases, aliases can define completely new functionality. An alias is easily defined using the notation <alias_name>=<alias_value>. When we need it, we invoke it using <alias_name>, and the shell substitutes <alias_name> with <alias_value>.

In fact, the standard Red Hat Linux 9 shell already has several aliases defined. We can list the existing aliases using the alias command:

$ alias
alias l.='ls -d .* --color=tty'
alias ll='ls -l --color=tty'
alias ls='ls --color=tty'
alias vi='vim'
alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'

Some of the common aliases are aliases for the ls command that build in our favorite options. If you use the ls command without any options, it simply prints the list of files and subdirectories under the current working directory. Here, however, the ls command is aliased to itself with the --color option, which allows ls to indicate different file types with different colors.

Aliases may be defined for the lifetime of a shell by specifying the alias mapping at the command line, or in a startup file (discussed in a later section) so that the aliases are available every time the shell starts up.

Environment Variables

Like aliases, environment variables are name-value pairs that are defined either at the shell prompt or in startup files. A process may also set its own environment variables programmatically (that is, from within the program, rather than declared in a file or passed as arguments). Environment variables are most often used either by the shell or by other programs to communicate settings; some programs pass information through environment variables to the programs that they spawn.

There are several environment variables set for us in advance. To list all of them that are currently set, you can use the env command, which should display output similar to that below:

$ env
HOSTNAME=localhost.localdomain
SHELL=/bin/bash
TERM=xterm
HISTSIZE=1000
USER=deepakt
MAIL=/var/spool/mail/deepakt
PATH=/usr/local/bin:/usr/bin:/bin:/usr/X11R6/bin:/home/deepakt/bin

As you can see, the PATH variable is one of the environment variables listed here. As we described earlier in this chapter, Bash uses the value of the PATH variable to search for commands. The MAIL variable, also listed here, is used by mail-reading software to determine the location of a user's mailbox.

System-defined Variables and User-defined Variables

We may set our own environment variables or modify existing ones:

$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/X11R6/bin:/home/deepakt/bin
$ export MYHOME=/home/deepakt
$ export PATH=$PATH:$MYHOME/mybin
$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/X11R6/bin:/home/deepakt/bin:/home/deepakt/mybin

While user-defined variables (also known as local variables) can be set simply as MYHOME=/home/deepakt, variables set that way are not available to any of the commands spawned by the shell. For local variables to be available to child processes spawned by a process (the shell in this case), we need to use the export command. To make these settings persist even after we log out and log back in, we need to save them in startup files.
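A quick way to see the difference between a plain local variable and an exported one is to check what a child shell can see. This is only a sketch, and the variable names are invented for the illustration:

$ LOCALVAR=hello
$ export EXPORTEDVAR=world
$ bash -c 'echo "local: [$LOCALVAR]  exported: [$EXPORTEDVAR]"'
local: []  exported: [world]

The bash -c command starts a child process, which inherits only the exported variable; the local one comes up empty in the child even though it is still set in the parent shell.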
Environment variables are defined either interactively or in a startup file such as .bashrc, and they are automatically made available to a new shell. Examples of environment variables are PATH, PRINTER, and DISPLAY. Local variables, by contrast, are not automatically propagated to a new shell when it is created; the MYHOME variable above is an example of a local variable. The echo command, followed by the name of a variable prefixed with a dollar ($) symbol, prints the value of that variable.

I/O Redirection

Earlier in the chapter, we looked at the file system and how we can use file system commands to manipulate files and directories, and at process management commands such as ps to manage processes. The shell provides a powerful set of operators that allow us to manage input, output, and errors while working with files and processes.

I/O Streams

If a process needs to perform any I/O operation, it happens through an abstraction known as an I/O stream. Each process has three streams associated with it: standard input, standard output, and standard error. The process may read input from its standard input, write its output to standard output, and write error messages to its standard error stream. By default, the standard input is associated with the keyboard, while output and error are associated with the terminal, in our case usually an xterm.

Sometimes we may not want processes to write to or read from a terminal; we may want the process to write to another location, such as a file. In this case we need to associate the process's standard output (and possibly its standard error) with the file in question. The process is oblivious to this, and continues to read from its standard input and write to its standard output, which in this case happen to be the files we specify. The I/O redirection operators of the shell make this redirection of the streams from the terminal to files extremely simple.

The < Operator

The < operator allows programs that read from the standard input to read their input from a file instead. For instance, consider the wc (word count) program, which reads input from the keyboard (until a Ctrl-D is encountered) and then prints the number of lines, words, and characters that were input:

$ wc -l
12345
67890
12345
^D
3

Note: We've used the -l option here, which has wc print the number of lines only.

Now consider a case in which we have the input to wc available in a file, called 3linefile.txt. In this case the following command produces the same result:

$ wc -l < 3linefile.txt
3

Here the standard input is redirected from the keyboard to the file.

The > Operator

The > operator is similar to the < operator. Its purpose is to redirect the standard output from the terminal to a file. Consider the following example:

$ date > date.txt

The date command writes its output to the standard output, which is usually the terminal. Here, the > operator tells the shell that the output should instead be redirected to a file. When we write the file out to the terminal (using the cat command), we can see the output of the date command displayed:

$ cat date.txt
Tue Jan 14 23:03:43 2003
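The two operators can also be combined on a single command line, so that a command reads from one file and writes its result to another. A minimal sketch, reusing the 3linefile.txt file from above (linecount.txt is just a name invented for this example):

$ wc -l < 3linefile.txt > linecount.txt
$ cat linecount.txt
3

wc still believes it is reading from standard input and writing to standard output; the shell has quietly wired both streams to files before the command starts.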
Try it Out: Redirecting Output

Based on what we have learned so far, let us create a file with some content in it:

$ cat > test.txt
The quick brown fox jumped over the rubber chicken
^D
$ cat test.txt
The quick brown fox jumped over the rubber chicken

This way of using cat to create a file is similar to using the Microsoft DOS command COPY CON TEST.TXT.

How it Works

The cat command, used without any options, simply echoes back to the standard output anything that it reads from the standard input. In this case, the > operator redirects the standard output of the cat command to the file test.txt. Thus whatever was typed in on the keyboard (standard input) ended up in the file test.txt (standard output, redirected by the shell).

The >> Operator

The >> operator is essentially the same as the > operator, the only difference being that it does not overwrite an existing file; instead it appends to it:

$ cat >> test.txt
Since rubber chicken makes bad nuggets
^D
$ cat test.txt
The quick brown fox jumped over the rubber chicken
Since rubber chicken makes bad nuggets

The | Operator

The | operator is used to feed the output of one command to the input of another command:

$ cat test.txt | wc -l
2
$ wc -l test.txt
2 test.txt

The output of the cat command (that is, the contents of the file test.txt) is fed by the shell to the wc command. It is the equivalent of running the wc -l command against the test.txt file. It is also possible to chain multiple commands this way, for example command1 | command2 | command3.

Configuring the Shell

As we saw in the section about aliases, most of us are likely to have our own preferences about how the shell should function. Bash is a highly customizable shell that allows us to set the values of environment variables that change its default behavior. Among other things, users like to change their prompt and their list of aliases, and perhaps even add a welcome message when they log in:

$ echo $PS1
$
$ export PS1="Grandpoobah > "
Grandpoobah >

Bash uses the value of the PS1 environment variable to display its prompt, so we can simply change this environment variable to whatever pleases us. However, to ensure that our brand new prompt is still available the next time we log in to the system, we need to add the PS1 setting to the .bashrc file.

Try it Out

Let us add some entries to our .bashrc file (save a backup copy first, so you can put it back to normal when you're done):

export PS1="Grandpoobah> "
alias ls='ls -al'
banner "Good day"

When we log in, we see a banner that displays the silly Good day message. If we list our aliases and environment variables, we see that our new settings have taken effect.

How it Works

When a user logs in, Bash reads the /etc/bashrc file (a common startup file for all users of the system). It then reads the .bashrc file in the user's home directory and executes all the commands in it, including creating aliases, setting up environment variables, and running programs (the banner program in this case). Since the user's .bashrc is read after the system-wide configuration file, it is a good place to override any default settings that are not to the user's liking.

A user can also create a .bash_logout script in their home directory and add programs to it. When the user logs out, Bash reads and executes the commands in the .bash_logout file, so this is a good location for a parting message or reminder, or for simple housekeeping tasks.
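For completeness, a .bash_logout can be as small as a couple of lines. The following is only an illustrative sketch, not a required layout:

# ~/.bash_logout - executed by Bash when a login shell exits
echo "Goodbye $USER, logged out at $(date)"
clear

Anything placed here runs only at logout, so it is the natural home for reminders or for cleaning up temporary files created during the session.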
A Sample .bashrc

Let us take a look at a sample .bashrc file:

export PS1='foobar$ '
export PATH=$PATH:/home/deepakt/games
alias rm='rm -i '
alias psc='ps -auxww'
alias d='date'
alias cls='clear'
alias jump='cd /home/deepakt/dungeon/maze/labyrinth/deep/down'

Setting the PS1 environment variable changes the command prompt. We may have a separate directory in which we store a number of games, so we add this directory to the PATH environment variable.

In the aliases section, we alias the rm command to itself with the -i option. The -i option forces the rm command to confirm with the user that it is all right to delete a file or directory; this is often a useful setting for novice users, to prevent accidental deletion of files or directories. We also abbreviate the ps command and its arguments (which display the entire command line of each process) with the psc alias, and the date command is abbreviated as d. Finally, to save typing the complete path to a deeply nested directory, we create jump, an alias for the cd command that changes our current working directory to that deeply nested directory.

As we saw in an earlier section, the su command switches the identity of a user to that of another user. By default, when the switch happens, the new user's .bashrc file is not executed. However, if we use the - option to su, the new user's .bashrc is executed and the current directory is changed to that of the new user:

$ su - jmillionaire

Managing Tasks

The Linux operating system was designed to be a multitasking operating system, that is, to allow multiple tasks to be executed together. Until a few years ago, end users of the system were not directly exposed to this aspect of the operating system. In Linux, the job-control features of Bash allow users to take advantage of the multitasking features of the operating system. In this section, we shall look at managing multiple tasks, both attended and unattended, starting with an overview of how processes work in Linux.

Processes

Processes, as we saw earlier, are programs executing in memory. A process may be associated with a terminal (for example, the date command is associated with the terminal, since it prints its standard output to the terminal). The association of a process with a terminal also means that all the signals delivered to the terminal's group of processes will be delivered to the process in question. Some processes, such as servers (or daemons), are seldom associated with a terminal. These processes are typically started as part of the system boot process; they run during the entire time that the system is up and write their output to log files.

When a user starts a process (that is, when the user runs a command), the command is associated with the terminal and is therefore also described as running in the foreground. While a process is running in the foreground, the shell does not return a prompt until the process has completed execution. However, a process may also be started such that the prompt is returned immediately; in this case, the process is called a background process. To run a process as a background process, we use the ampersand (&) character after the command:

$ ls -R / &

This indicates to the shell that the process should be executed as a background process; its output continues to be written to the terminal.
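As a small illustration of the difference, the recursive listing from the example above can be run in the background with its output parked in a file; the filename is only an assumption for this sketch:

$ ls -R / > listing.txt &

The shell prints a job number and process ID and returns the prompt straight away while ls keeps running, and because the standard output is redirected, the bulk of the listing no longer scrolls past on the screen.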
Job Control

Job control is a feature of Bash that allows the user to start and manage multiple programs at the same time, rather than having to run them one after another. We can suspend a program using the Ctrl-Z key, and we can send it to the background or bring it to the foreground (using the bg and fg commands), or even leave it suspended. It is also possible to list all of the jobs (processes) that have been started and to terminate some of them.

Try it Out

Let us try using job control to manage a long-running process, say the ls -R / command, which recursively lists all the files and directories on the system:

$ ls -R /
^Z
[1]+ Stopped        ls -R /
$ jobs
[1]+ Stopped        ls -R /
$ bg %1
[1]+ ls -R / &
$ fg %1
ls -R /
^Z
[1]+ Stopped        ls -R /
$ kill -s SIGKILL %1
$
[1]+ Killed         ls -R /

How it Works

We start the program ls with the -R option. After a while, we decide to suspend the program using the Ctrl-Z key. The jobs command displays the current jobs and their status. We use the bg command to send the process to the background; after a while, we decide to bring the process back to the foreground, for which we use the fg command. Both bg and fg take an argument that indicates the job number; the %1 argument indicates that we are referring to job number 1. Finally, having had enough of the process, we suspend it once again and kill it (using the kill command).

Note: The job control commands are shell built-in commands, not external commands.

Scheduling Tasks

Often, it is not necessary (or not possible) for the user to be present when a task needs to execute. For example, if a user wants a certain script executed at midnight to take advantage of spare CPU cycles, they need a mechanism by which the task can be scheduled and executed unattended. Alternatively, if a certain task takes hours to complete and does not require any user input, it is not necessary for the user to remain logged on until the task is complete.

Scheduling Processes

We can use the cron utility to execute tasks automatically at arbitrary times, and even repeatedly if required. The cron daemon is a system process that runs at all times in the background, checking to see if any processes need to be started on behalf of users. We can schedule tasks for cron by editing the crontab file.

Try it Out: Scheduling a Task

Let's schedule a cron job that needs to be started every Monday and Thursday at 11:55 PM to back up our system:

$ crontab -e
No crontab for deepakt - using an empty one

This brings up an editor (vi by default), in which we add our crontab entry:

55 23 * * 1,4 /home/deepakt/mybackup >/home/deepakt/mybackup.out 2>&1

We then save the file and exit the editor:

crontab: installing new crontab
$ crontab -l
# DO NOT EDIT THIS FILE - edit the master and reinstall.
# (/tmp/crontab.6642 installed on Fri Jan 17 05:09:37 2003)
# (Cron version -- $Id: crontab.c,v 2.13 1994/01/17 03:20:37 vixie Exp $)
55 23 * * 1,4 /home/deepakt/mybackup >/home/deepakt/mybackup.out 2>&1

How it Works

We use the crontab command to create new cron jobs. The -e option brings up the vi editor, which allows us to add the new cron job. The entry for a cron job consists of six columns:

• The first five columns indicate the time at which the job should execute and its frequency: the minute (0-59), the hour (0-23), the day of the month (1-31), the month of the year (1-12), and the day of the week (0-6, with 0 indicating Sunday). An asterisk stands for all possible values; hence we have asterisks for the day of the month and the month of the year (the job needs to run during all months of the year).

• The last column indicates the actual command to be invoked at these times. We need to specify the full command, with the complete path leading to our backup program, and also redirect the output to a log file. The 2>&1 indicates that the standard error is also redirected to the same log file.
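To make the six-column format a little more concrete, here is one more hypothetical entry; the script path and log file are assumptions invented for the illustration:

30 2 * * * /home/deepakt/cleanup.sh >>/home/deepakt/cleanup.log 2>&1

This would run cleanup.sh at 2:30 AM every day of every month, appending both its output and its errors to cleanup.log.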
Allowing a Process to Continue after Logout

The nohup command can be used to execute tasks that need to continue running even after the user has logged out:

$ nohup ls -R / &

The nohup command is quite straightforward, in that it takes the program to be executed as its argument. We need to send the whole process to the background by using the & operator. The standard output and standard error of the command are written to a file called nohup.out (in the current directory, or in the user's home directory if the current directory is not writable).

Shell Scripting

As we've seen in this chapter, the shell has extensive capabilities when it comes to providing tools for finding our way around the operating system and getting our job done. But the true power of the shell is in its capacity as a scripting language, and to capture this we use shell scripts. Essentially, a shell script is a sequence of commands and operators listed one after another, stored in a file, and executed as a single entity. Shell scripting in Bash is a topic that deserves a book by itself; our objective in this short section is simply to touch upon the salient features of scripting with Bash. Bash shell script files start with a line naming the command interpreter, in this case bash itself:

#!/bin/bash

or:

#!/bin/sh

[...] for the string nobody in the files /etc/passwd and /etc/group:

$ grep nobody /etc/passwd /etc/group
/etc/passwd:nobody:x:99:99:Nobody:/:/sbin/nologin
/etc/passwd:nfsnobody:x:65534:65534:Anonymous NFS User:/var/lib/nfs:/sbin/nologin
/etc/group:nobody:x:99:
/etc/group:nfsnobody:x:65534:

The grep command prints the names of the files in which the string was found, followed by a colon character (:) and [...]

[...] root root 14 Oct 30 14:45 /lib/libc.so.6 -> libc-2.2.93.so

Here, we can see that libc.so.6 is actually a symbolic link to the actual library, libc-2.2.93.so. This means that if the library is upgraded from version 2.2.93 to (say) 2.2.94, the upgrade process removes the link between libc.so.6 and libc-2.2.93.so and creates a new link between libc.so.6 and libc-2.2.94.so. This ensures that the programs referring [...]

[...] the case of Red Hat Linux 9, we may use the Nautilus file manager. However, one of the most interesting aspects of the file system (and one that is not immediately obvious) is the fact that Linux treats almost all devices as files. Hard disks, terminals, printers, and floppy disk drives are all devices, devices that can be read from and written to (mostly). In fact, with the proc file system, Linux goes [...]

[...] see the inode number of a file, we could use the ls command again, this time with the -i option:

$ ls -li /etc
total 2012
226972 -rw-r--r--  1 root root 15228 Aug  5 03:14 a2ps.cfg
226602 -rw-r--r--  1 root root  2562 Aug  5 03:14 a2ps-site.cfg
226336 -rw-r--r--  1 root root    47 Jan 19 04:00 adjtime

The inode number is listed in the first column. Each file has a unique inode number [...]

[...] use this program to modify the password they use for logging in. By default on Red Hat Linux 9, passwords are encrypted and stored in the file /etc/shadow, which can be modified only by the root user. From what we have seen so far, when a user executes the passwd program, the program would assume just the privileges assigned to that user. So how does the passwd program modify this file to update it with [...]
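The fragment above stops short of the answer, but the mechanism it is leading up to is the set-user-ID (SUID) permission bit on the passwd binary, which is owned by root. One way to look at it yourself:

$ ls -l /usr/bin/passwd

The permission string shown will look something like -r-s--x--x (the exact size and date vary from system to system); the lowercase 's' where the owner's execute bit would normally appear marks the file as SUID, so the program runs with root's privileges no matter who invokes it, and can therefore update /etc/shadow.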
[...] FIFO called myfifo, using the mknod command, and then we list it. The letter 'p' indicates that this is a FIFO:

$ mknod myfifo p
$ ls -l myfifo
prw-r--r--    1 deepakt  users  0 Jan 19 19:09 myfifo
$ ls -l /tmp/ssh*
/tmp/ssh-XXiVoKic:
total 0
srwxr-xr-x    1 deepakt  users  0 Jan 19 18:40 agent.996

Note: A FIFO is a mechanism used by processes to talk to each other, therefore known as [...]

[...] symbolic link to the same file, called soft.txt:

$ ls -li dissectme.txt hard.txt soft.txt
131524 -rw-r--r--  2 deepakt users 21 Jan 19 18:40 dissectme.txt
131524 -rw-r--r--  2 deepakt users 21 Jan 19 18:40 hard.txt
131528 lrwxrwxrwx  1 deepakt users 13 Jan 19 20:23 soft.txt -> dissectme.txt

When we list all three files with the -i option of ls, we see that the inode numbers of the hard link and the inode number [...]

[...] listing for the /etc directory, we see that the first letter, indicating the type of the file, is the letter 'd', confirming that /etc is indeed a directory:

$ ls -ld /etc
drwxr-xr-x   59 root root 8192 Jan 19 18:32 /etc

In the next two listings, we initially list one of the first hard disks on the system, /dev/hda in this case. We see the letter 'b', which indicates that this is a block device. While listing [...]

[...] --time=ctime time.txt
-rw-r--r--    1 deepakt  users  13 Jan 20 03:49 time.txt
-rw-r--r--    1 deepakt  users  13 Jan 20 03:49 time.txt
-rw-r--r--    1 deepakt  users  13 Jan 20 03:49 time.txt

We can see that they are all the same. Here, the file was created at 03:49. The file was accessed and modified at that time, and its attributes were all set at that time, so the mtime, atime, and ctime are all the same. Pause [...]

[...] note is that treating devices as files allows Linux to deal with them in a consistent manner. Linux supports a wide variety of file system types, including Microsoft Windows file system types. Some first-time Linux users find it interesting that it is possible to copy a file from a Microsoft Windows file system onto a floppy, edit it on a Linux machine, and take it back to Windows. In fact, Linux even [...]
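As that closing remark suggests, a DOS/Windows-formatted floppy can be used directly from Linux. A minimal sketch of how this is typically done on Red Hat Linux 9 (this assumes the standard /mnt/floppy mount point, an invented file name report.txt, and root privileges or a suitable /etc/fstab entry):

$ mount -t vfat /dev/fd0 /mnt/floppy
$ cp /mnt/floppy/report.txt ~/report.txt
$ umount /mnt/floppy

Here /dev/fd0 is the first floppy drive, itself just another file under /dev; once mounted, the Windows file system is browsed and edited with ordinary Linux commands.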