Seeing messages like "/dev/FOO: device not found" is never a good thing. However, this message can be caused by a number of different problems. There isn't much you can do about a complete hardware failure, but if you're "lucky," your disk's partition table may just have been damaged and your data may only be temporarily inaccessible. If you haven't rebooted, execute the cat /proc/partitions command to see if the kernel still lists your device's partitions.

Unless you have a photographic memory, your disk contains only a single partition, or you were sufficiently disciplined to keep a listing of its partition table, guessing the sizes and locations of all of the partitions on an ailing disk is almost impossible without some help. Thankfully, Michail Brzitwa has written a program that can provide exactly the help you need. His gpart (guess partitions) program scans a specified disk drive and identifies entries that look like partition signatures. By default, gpart displays only a listing of entries that appear to be partitions, but it can also automatically create a new partition table for you by writing these entries to your disk. That's a scary thing to do, but it beats the alternative of losing all your existing data.

If you're just reading this for information and aren't actually in the midst of a lost-data catastrophe, you may be wondering how to back up a disk's partition table so that you don't have to depend on a recovery utility like gpart.
You can easily back up a disk's master boot record (MBR) and partition table to a file using the following dd command, where FOO is the disk and FILENAME is the name of the file to which you want to write your backup:

# dd if=/dev/FOO of=FILENAME bs=512 count=1

If you subsequently need to restore the partition table to your disk, you can do so with the following dd command, using the same variables as before:

# dd if=FILENAME of=/dev/FOO bs=1 count=64 skip=446 seek=446

The gpart program works by reading the entire disk and comparing sector sequences against a set of filesystem identification modules. By default, gpart includes filesystem identification modules that can recognize the following types of partitions: beos (BeOS), bsddl (FreeBSD/NetBSD/386BSD), ext2 and ext3 (standard Linux filesystems), fat (MS-DOS FAT12/16/32), hpfs (remember OS/2?), hmlvm (Linux LVM physical volumes), lswap (Linux swap), minix (Minix OS), ntfs (Microsoft Windows NT/2000/XP/etc.), qnx4 (QNX Version 4.x), rfs (ReiserFS Versions 3.5.11 and greater), s86dl (Sun Solaris), and xfs (XFS journaling filesystem). You can write additional partition identification modules for use by gpart (JFS fans, take note!), but that's outside the scope of this hack. For more information about expanding gpart, see its home page at http://www.stud.uni-hannover.de/user/76201/gpart and the README file that is part of the gpart archive.

10.6.1. Looking for Partitions

As an example of gpart's partition scanning capabilities, let's first look at the listing of an existing disk's partition table as produced by the fdisk program. (BTW, if you're questioning the sanity of the partition layout, this is a scratch disk that I use for testing purposes, not a day-to-day disk.)
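Incidentally, the 512-byte backup created by the dd command above has a simple, fixed layout: 446 bytes of boot code, four 16-byte primary-partition entries, and the two-byte 0x55AA signature. As a sketch of how little structure there is to lose, here is a minimal decoder for such a backup file (a hypothetical helper for illustration, not part of gpart or fdisk):

```python
import struct

def parse_mbr(mbr: bytes):
    """Decode the four primary partition entries from a 512-byte MBR dump."""
    if len(mbr) < 512 or mbr[510:512] != b'\x55\xaa':
        raise ValueError('missing 0x55AA boot signature')
    entries = []
    for i in range(4):
        entry = mbr[446 + 16 * i: 446 + 16 * (i + 1)]
        ptype = entry[4]                      # partition type byte (0x83 = Linux)
        if ptype == 0:                        # empty slot
            continue
        start_lba, sectors = struct.unpack_from('<II', entry, 8)
        entries.append({'slot': i + 1, 'type': ptype,
                        'start': start_lba, 'sectors': sectors})
    return entries

# Build a synthetic MBR with one bootable Linux (0x83) partition to demonstrate.
mbr = bytearray(512)
mbr[510:512] = b'\x55\xaa'
struct.pack_into('<B3xB3xII', mbr, 446, 0x80, 0x83, 63, 401562)
print(parse_mbr(bytes(mbr)))
```

Running this against a real backup made with the dd command above would list the same partitions fdisk reports, which is a handy sanity check that your backup is usable.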
Here's fdisk's view:

# fdisk -l /dev/hdb

Disk /dev/hdb: 60.0 GB, 60022480896 bytes
255 heads, 63 sectors/track, 7297 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End      Blocks   Id  System
/dev/hdb1             1        25      200781   83  Linux
/dev/hdb2            26        57      257040   82  Linux swap / Solaris
/dev/hdb3            58      3157    24900750   83  Linux
/dev/hdb4          3158      7297    33254550    5  Extended
/dev/hdb5          3158      3337     1445818+  83  Linux
/dev/hdb6          3338      3697     2891668+  83  Linux
/dev/hdb7          3698      4057     2891668+  83  Linux
/dev/hdb8          4058      4417     2891668+  83  Linux
/dev/hdb9          4418      4777     2891668+  83  Linux
/dev/hdb10         4778      5137     2891668+  83  Linux
/dev/hdb11         5138      5497     2891668+  83  Linux
/dev/hdb12         5498      5857     2891668+  83  Linux
/dev/hdb13         5858      6217     2891668+  83  Linux
/dev/hdb14         6218      6577     2891668+  83  Linux
/dev/hdb15         6578      6937     2891668+  83  Linux
/dev/hdb16         6938      7297     2891668+  83  Linux

Let's compare this with gpart's view of the partitions that live on the same disk:

# gpart /dev/hdb

Begin scan...
Possible partition(Linux ext2), size(196mb), offset(0mb)
Possible partition(Linux swap), size(251mb), offset(196mb)
Possible partition(Linux ext2), size(24317mb), offset(447mb)
Possible partition(Linux ext2), size(1411mb), offset(24764mb)
Possible partition(Linux ext2), size(2823mb), offset(26176mb)
Possible partition(Linux ext2), size(2823mb), offset(29000mb)
Possible partition(Linux ext2), size(2823mb), offset(31824mb)
Possible partition(Linux ext2), size(2823mb), offset(34648mb)
Possible partition(Linux ext2), size(2823mb), offset(37471mb)
Possible partition(Linux ext2), size(2823mb), offset(40295mb)
Possible partition(Linux ext2), size(2823mb), offset(43119mb)
Possible partition(Linux ext2), size(2823mb), offset(45943mb)
Possible partition(Linux ext2), size(2823mb), offset(48767mb)
Possible partition(Linux ext2), size(2823mb), offset(51591mb)
Possible partition(Linux ext2), size(2823mb), offset(54415mb)
End scan.

Checking partitions...

* Warning: more than 4 primary partitions: 15.
Partition(Linux ext2 filesystem): primary
Partition(Linux swap or Solaris/x86): primary
Partition(Linux ext2 filesystem): primary
Partition(Linux ext2 filesystem): primary
Partition(Linux ext2 filesystem): invalid primary
Partition(Linux ext2 filesystem): invalid primary
Partition(Linux ext2 filesystem): invalid primary
Partition(Linux ext2 filesystem): invalid primary
Partition(Linux ext2 filesystem): invalid primary
Partition(Linux ext2 filesystem): invalid primary
Partition(Linux ext2 filesystem): invalid primary
Partition(Linux ext2 filesystem): invalid primary
Partition(Linux ext2 filesystem): invalid primary
Partition(Linux ext2 filesystem): invalid primary
Partition(Linux ext2 filesystem): invalid primary
Ok.

Guessed primary partition table:

Primary partition(1)
   type: 131(0x83)(Linux ext2 filesystem)
   size: 196mb #s(401562) s(63-401624)
   chs:  (0/1/1)-(398/6/63)d (0/1/1)-(398/6/63)r

Primary partition(2)
   type: 130(0x82)(Linux swap or Solaris/x86)
   size: 251mb #s(514080) s(401625-915704)
   chs:  (398/7/1)-(908/6/63)d (398/7/1)-(908/6/63)r

Primary partition(3)
   type: 131(0x83)(Linux ext2 filesystem)
   size: 24317mb #s(49801496) s(915705-50717200)
   chs:  (908/7/1)-(1023/15/63)d (908/7/1)-(50314/10/59)r

Primary partition(4)
   type: 131(0x83)(Linux ext2 filesystem)
   size: 1411mb #s(2891632) s(50717268-53608899)
   chs:  (1023/15/63)-(1023/15/63)d (50314/12/1)-(53183/6/58)r

Doing the math can be a bit tedious, but calculating the partition sizes and offsets shows that they are actually the same. gpart found all of the partitions, including all of the logical partitions inside the disk's extended partition, which can be tricky. If you don't want to do the math yourself, gpart provides a special -c option for comparing its idea of a disk's partition table against the partitions that are listed in an existing partition table. Using gpart with the -c option returns 0 if the two are identical, or the number of differences if the two differ.

10.6.2.
Writing the Partition Table

Using fdisk to recreate a partition table can be a pain, especially if you have multiple partitions of different sizes. As mentioned previously, gpart provides an option that automatically writes a new partition table to the scanned disk. To do this, you need to specify both the disk to scan and the disk to write to on the command line, as in the following example:

# gpart -W /dev/FOO /dev/FOO

If you're paranoid (and you should be, even though your disk is already hosed), you can back up the existing MBR before overwriting it by adding the -b option to your command line and specifying the name of the file to which you want to back up the existing MBR, as in the following example:

# gpart -b FILENAME -W /dev/FOO /dev/FOO

As mentioned at the beginning of this hack, a disk failure may simply be the result of a bad block that happens to coincide with your disk's primary partition table. If this happens to you and you don't have a backup of the partition table, gpart does an excellent job of guessing and rewriting your disk's primary partition table. If the disk can't be mounted because it is severely corrupted or otherwise damaged, see "Recover Data from Crashed Disks" [Hack #94] and "Piece Together Data from the lost+found" [Hack #96] for some suggestions regarding more complex and desperate data recovery hacks.

10.6.3. See Also

• "Rescue Me!" [Hack #90]

Hack 94. Recover Data from Crashed Disks

You can recover most of the data from crashed hard drives with a few simple Linux tricks.

As the philosopher once said, "Into each life, a few disk crashes must fall." Or something like that. Today's relatively huge disks make it more tempting than ever to store large collections of data online, such as your entire music collection or all of the research associated with your thesis.
Backups can be problematic, as today's disks are much larger than most backup media, and backups can't restore any data that was created or modified after the last backup was made. Luckily, the fact that any Linux/Unix device can be accessed as a stream of characters presents some interesting opportunities for restoring some or all of your data even after a hard drive failure. When disaster strikes, consult this hack for recovery tips.

This hack uses error messages and examples produced by e2fsck, the filesystem consistency checking utility associated with the Linux ext2 and ext3 filesystems. You can use the cloning techniques in this hack to copy any Linux disk, but the filesystem repair utilities will differ for other types of Linux filesystems. For example, if you are using ReiserFS filesystems, see "Repair and Recover ReiserFS Filesystems" [Hack #95] for details on using the special commands provided by its filesystem consistency checking utility, reiserfsck.

10.7.1. Popular Disk Failure Modes

Disks generally go bad in one of three basic ways:

• Hardware failure that prevents the disk heads from moving or seeking to various locations on the disk. This is generally accompanied by a ticking noise whenever you attempt to mount or otherwise access the filesystem, which is the sound of the disk heads failing to launch or locate themselves correctly.

• Bad blocks on the disk that prevent the disk's partition table from being read. The data is probably still there, but the operating system doesn't know how to find it.

• Bad blocks on the disk that cause a filesystem on a partition of the disk to become unreadable, unmountable, and uncorrectable.

The first of these problems can generally be solved only by shipping your disk off to a firm that specializes in removing and replacing drive internals, using cool techniques for recovering data from scratched or flaked platters, if necessary. The second of these problems is discussed in "Recover Lost Partitions" [Hack #93].
This hack explains how to recover data that appears to be lost due to the third of these problems: bad blocks that corrupt filesystems to the point where standard filesystem repair utilities cannot correct them.

If your disk contains more than one partition and one of the partitions it contains goes bad, chances are that the rest of the disk will soon develop problems. While you can use the techniques explained in this hack to clone and repair a single partition, this hack focuses on cloning and recovering an entire disk. If you clone and repair a disk containing multiple partitions, you will hopefully find that some of the copied partitions have no damage. That's great, but cloning and repairing the entire disk is still your safest option.

10.7.2. Attempt to Read Block from Filesystem Resulted in Short Read...

The title of this section is one of the more chilling messages you can see when attempting to mount a filesystem that contained data the last time you booted your system. This error always means that one or more blocks cannot be read from the disk that holds the filesystem you are attempting to access. You generally see this message when the fsck utility is attempting to examine the filesystem, or when the mount utility is attempting to mount it so that it is available to the system.

A short read error usually means that an inode in the filesystem points to a block on the filesystem that can no longer be read, or that some of the metadata about your filesystem is located on a block (or blocks) that cannot be read. On journaling filesystems, this error displays if any part of the filesystem's journal is stored on a bad block. When a Linux system attempts to mount a partition containing a journaling filesystem, its first step is to replay any pending transactions from the filesystem's journal. If these cannot be read, voilà: short read.

10.7.3.
Standard Filesystem Diagnostics and Repair

The first thing to try when you encounter any error accessing or mounting a filesystem is to check the consistency of the filesystem. All native Linux filesystems provide consistency-checking applications. Table 10-2 shows the filesystem consistency checking utilities for various popular Linux filesystems.

Table 10-2. Different Linux filesystems and their associated repair utilities

Filesystem type    Diagnostic/repair utilities
ext2, ext3         e2fsck, fsck.ext2, fsck.ext3, tune2fs, debugfs
JFS                jfs_fsck, fsck.jfs
reiserfs           reiserfsck, fsck.reiserfs, debugreiserfs
XFS                fsck.xfs, xfs_check

The consistency-checking utilities associated with each type of Linux filesystem have their own ins and outs. In this section, I'll focus on trying to deal with short read errors from disks that contain partitions in the ext2 or ext3 formats, which are the most popular Linux partition formats. The ext3 filesystem is a journaling version of the ext2 filesystem, and the two types of filesystems therefore share most data structures and all repair/recovery utilities. If you are using another type of filesystem, the general information about cloning and repairing disks in later sections of this hack still applies.

If you're using an ext2 or ext3 filesystem, your first hint of trouble will come from a message like the following, generally encountered when restarting your system. This warning comes from the e2fsck application (or a symbolic link to it, such as fsck.ext2 or fsck.ext3):

# e2fsck /dev/hda1
e2fsck: Attempt to read block from filesystem resulted in short read

If you see this message, the first thing to try is to cross your fingers and hope that only the disk's primary superblock is bad. The superblock contains basic information about the filesystem, including primary pointers to the blocks that contain information about the filesystem (known as inodes).
Luckily, when you create an ext2 or ext3 filesystem, the filesystem-creation utility (mke2fs, or a symbolic link to it named mkfs.ext2 or mkfs.ext3) automatically creates backup copies of your disk's superblock, just in case. You can tell the e2fsck program to check the filesystem using one of these alternate superblocks by using its -b option, followed by the block number of one of these alternate superblocks within the filesystem with which you're having problems. The first of these alternate superblocks is usually created at block 8193, 16384, or 32768, depending on the size of your disk. Assuming that this is a large disk, we'll try the last as an alternative:

# e2fsck -b 32768 /dev/hda1
e2fsck: Attempt to read block from filesystem resulted in short read while checking ext3 journal for /dev/hda1

You can determine the locations of the alternate superblocks on an unmounted ext3 filesystem by running the mkfs.ext3 command with the -n option, which reports on what the mkfs utility would do but doesn't actually create a filesystem or make any modifications. This may not work if your disk is severely corrupted, but it's worth a shot. If it doesn't work, try 8193, 16384, and 32768, in that order.

This gave us a bit more information. The problem doesn't appear to be with the filesystem's superblocks; instead, it's with the journal on this filesystem. Journaling filesystems minimize system restart time by heightening filesystem consistency through the use of a journal [Hack #70]. All pending changes to the filesystem are first stored in the journal, and are then applied to the filesystem by a daemon or internal scheduling algorithm. These transactions are applied atomically, meaning that if they are not completely successful, no intermediate changes that are part of the unsuccessful transactions are made. Because the filesystem is therefore always consistent, checking the filesystem at boot time is much faster than it would be on a standard, non-journaling filesystem.
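The alternate superblock locations quoted above (8193, 16384, and 32768) aren't magic numbers; they follow from ext2's block-group geometry. By default a block group spans 8 × block-size blocks (one block bitmap covers exactly that many), and the first backup superblock sits at the start of group 1. A quick sketch of the arithmetic, as an illustration rather than a substitute for mkfs.ext3 -n:

```python
def first_backup_superblock(block_size: int) -> int:
    """Block number of the first backup superblock for a given block size.

    Each block group holds 8 * block_size blocks. With 1 KiB blocks the
    filesystem's first data block is block 1 (block 0 holds the boot
    record), which is why the backup lands at 8193 rather than 8192.
    """
    blocks_per_group = 8 * block_size
    first_data_block = 1 if block_size == 1024 else 0
    return first_data_block + blocks_per_group

for bs in (1024, 2048, 4096):
    print(bs, first_backup_superblock(bs))
```

The three common block sizes reproduce exactly the three candidate values the text suggests trying with e2fsck -b.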
10.7.4. Removing an ext3 Filesystem's Journal

As mentioned previously, the ext3 and ext2 filesystems primarily differ only in whether the filesystem contains a journal. This makes repairing most journaling-related problems on an ext3 filesystem relatively easy, because the journal can simply be removed. Once the journal is removed, the consistency of the filesystem in question can be checked as if it were a standard ext2 filesystem. If you're very lucky, and the bad blocks on your system were limited to the ext3 journal, removing the journal (and subsequently fsck'ing the filesystem) may be all you need to do to be able to mount the filesystem and access the data it contains.

Removing the journal from an ext3 filesystem is done using the tune2fs application, which is designed to make a number of different types of changes to ext2 and ext3 filesystem data. The tune2fs application provides the -O option to enable you to set or clear various filesystem features. (See the manpage for tune2fs for complete information about available features.) To clear a filesystem feature, you precede the name of that feature with the caret (^) character, which has the classic Computer Science 101 meaning of "not." Therefore, to configure a specified existing filesystem so that it thinks that it does not have a journal, you would use a command line like the following:

# tune2fs -f -O ^has_journal /dev/hda1
tune2fs 1.35 (28-Feb-2004)
tune2fs: Attempt to read block from filesystem resulted in short read while reading journal inode

Darn. In this case, the inode that points to the journal seems to be bad, which means that the journal can't be cleared. The next thing to try is the debugfs command, which is an ext2/ext3 filesystem debugger. This command provides an interactive interface that enables you to examine and modify many of the characteristics of an ext2/ext3 filesystem, and it also provides an internal features command that enables you to clear the journal.
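Under the hood, "has a journal" is just a compat-feature bit in the superblock, which is what tune2fs and debugfs's features command flip for you. As a hedged illustration of the on-disk layout (offsets taken from the ext2 superblock definition; this is a sketch for understanding, not a replacement for the real tools), a few lines of Python can read that bit from a raw filesystem image:

```python
import struct

EXT2_SUPER_MAGIC = 0xEF53
COMPAT_HAS_JOURNAL = 0x0004   # EXT3_FEATURE_COMPAT_HAS_JOURNAL

def has_journal(image: bytes) -> bool:
    """Return True if the primary superblock has the has_journal feature bit.

    The primary superblock starts at byte offset 1024; within it, s_magic
    lives at offset 56 and s_feature_compat at offset 92.
    """
    superblock = image[1024:2048]
    (magic,) = struct.unpack_from('<H', superblock, 56)
    if magic != EXT2_SUPER_MAGIC:
        raise ValueError('not an ext2/ext3 filesystem image')
    (compat,) = struct.unpack_from('<I', superblock, 92)
    return bool(compat & COMPAT_HAS_JOURNAL)
```

This also explains why the short read above is so frustrating: the feature bit may be perfectly readable while the journal inode it points to is not.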
Let's try this command on our ailing filesystem:

# debugfs /dev/hda1
debugfs 1.35 (28-Feb-2004)
/dev/hda1: Can't read an inode bitmap while reading inode bitmap
debugfs: features
features: Filesystem not open
debugfs: open /dev/hda1
/dev/hda1: Can't read an inode bitmap while reading inode bitmap
debugfs: quit

Alas, the debugfs command couldn't access a bitmap in the filesystem that tells it where to find specific inodes (in this case, the journal's inode). If you are able to clear the journal using the tune2fs or debugfs command, you should retry the e2fsck application, using its -c option to have e2fsck check for bad blocks in the filesystem and, if any are found, add them to the disk's bad block list. Since we can't fsck or fix the filesystem on the ailing disk, it's time to bring out the big hammer.

10.7.5. Cloning a Bad Disk Using ddrescue

If bad blocks are preventing you from reading or repairing a disk that contains data you want to recover, the next thing to try is to create a copy of the disk using a raw disk copy utility. Unix/Linux systems have always provided a simple utility for this purpose, known as dd, which copies one file/partition/disk to another and provides options that enable you to proceed even in the face of various types of read errors.

You must put another disk in your system that is at least the same size as or larger than the disk or partition that you are attempting to clone. If you copy a smaller disk to a larger one, you'll obviously be wasting the extra space on the larger disk, but you can always recycle the disk after you extract and save any data that you need from the clone of the bad disk.
To copy one disk to another using dd, telling it not to stop on errors, you would use a command like the following:

# dd if=/dev/hda of=/dev/hdb conv=noerror,sync

This command would copy the bad disk (here, /dev/hda) to a new disk (here, /dev/hdb), ignoring errors encountered when reading (noerror) and padding the output with an appropriate number of nulls when unreadable blocks are encountered (sync).

dd is a fine, classic Unix/Linux utility, but I find that it has a few shortcomings:

• It is incredibly slow.
• It does not display progress information, so it is silent until it is done.
• It does not retry failed reads, which can reduce the amount of data that you can recover from a bad disk.

Therefore, I prefer to use a utility called ddrescue, which is available from http://www.gnu.org/software/ddrescue/ddrescue.html. This utility is not included in any Linux distribution that I'm aware of, so you'll have to download the archive, unpack it, and build it from source code. Version 0.9 was the latest version when this book was written.

The ddrescue command has a large number of options, as the following help message shows:

# ./ddrescue -h
GNU ddrescue - Data recovery tool.
Copies data from one file or block device to another,
trying hard to rescue data in case of read errors.
Usage: ./ddrescue [options] infile outfile [logfile]

Options:
  -h, --help                    display this help and exit
  -V, --version                 output version information and exit
  -B, --binary-prefixes         show binary multipliers in numbers [default SI]
  -b, --block-size=<bytes>      hardware block size of input device [512]
  -c, --cluster-size=<blocks>   hardware blocks to copy at a time [128]
  -e, --max-errors=<n>          maximum number of error areas allowed
  -i, --input-position=<pos>    starting position in input file [0]
  -n, --no-split                do not try to split error areas
  -o, --output-position=<pos>   starting position in output file [ipos]
  -q, --quiet                   quiet operation
  -r, --max-retries=<n>         exit after given retries (-1=infinity) [0]
  -s, --max-size=<bytes>        maximum size of data to be copied
  -t, --truncate                truncate output file
  -v, --verbose                 verbose operation

Numbers may be followed by a multiplier: b = blocks, k = kB = 10^3 = 1000,
Ki = KiB = 2^10 = 1024, M = 10^6, Mi = 2^20, G = 10^9, Gi = 2^30, etc.

If logfile given and exists, try to resume the rescue described in it.
If logfile given and rescue not finished, write to it the status on exit.

Report bugs to bug-ddrescue@gnu.org

As you can see, ddrescue provides many options for controlling where to start reading, where to start writing, the amount of data to be read at a time, and so on. I generally use only the --max-retries option, supplying -1 as an argument to tell ddrescue not to exit regardless of how many retries it needs to make in order to read a problematic disk.
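The core idea behind ddrescue's retry behavior is simple: read the source block by block, retry failing blocks a configurable number of times, and zero-fill the destination wherever a block never becomes readable, so that everything after a damaged region stays correctly aligned. Here is an illustrative sketch of that algorithm (a toy model, not ddrescue's actual implementation):

```python
def rescue_copy(read_block, total_blocks, block_size=512, max_retries=3):
    """Copy total_blocks blocks, retrying bad reads and zero-padding failures.

    read_block(n) returns block n's bytes or raises IOError. Returns the
    rescued image plus the list of blocks that could never be read.
    """
    out = bytearray()
    errors = []
    for n in range(total_blocks):
        data = None
        for _ in range(max_retries + 1):
            try:
                data = read_block(n)
                break
            except IOError:
                continue                  # transient failure: try again
        if data is None:                  # never readable: pad with NULs
            errors.append(n)
            data = bytes(block_size)
        out.extend(data)
    return bytes(out), errors

# Simulated flaky device: block 1 always fails, block 2 succeeds on a retry.
attempts = {2: 0}
def fake_read(n):
    if n == 1:
        raise IOError('bad block')
    if n == 2:
        attempts[2] += 1
        if attempts[2] < 2:
            raise IOError('transient error')
    return bytes([n]) * 512

image, bad = rescue_copy(fake_read, 4)
print(bad)
```

The retry loop is why ddrescue typically recovers more data than dd's single-pass noerror behavior: a marginal block that fails once may still succeed on a later attempt.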
Continuing with the previous example of cloning the bad disk /dev/hda to a new disk, /dev/hdb, that is the same size or larger, I'd execute the following command:

# ddrescue --max-retries=-1 /dev/hda /dev/hdb
Press Ctrl-C to interrupt
rescued:  3729 MB,  errsize:  278 kB,  current rate:  26083 kB/s
   ipos:  3730 MB,   errors:       6,  average rate:  18742 kB/s
   opos:  3730 MB
Copying data...

The display is constantly updated with the amount of data read from the first disk and written to the second, including a count of the number of disk errors encountered when reading the disk specified as the first argument.

Once ddrescue completes the disk copy, you should run e2fsck on the copy of the disk to eliminate any filesystem errors introduced by the bad blocks on the original disk. Since there are guaranteed to be a substantial number of errors and you're working from a copy, you can try running e2fsck with the -y option, which tells e2fsck to answer yes to every question. However, depending on the types of messages displayed by e2fsck, this may not always work; some questions are of the form Abort? (y/n), to which you probably do not want to answer "yes."

Here's some sample e2fsck output from checking the consistency of a bad 250-GB disk containing a single partition that I cloned using ddrescue:

# fsck -y /dev/hdb1
fsck 1.35 (28-Feb-2004)
e2fsck 1.35 (28-Feb-2004)
/dev/hdb1 contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Root inode is not a directory.  Clear? yes
Inode 12243597 is in use, but has dtime set.  Fix? yes
Inode 12243364 has compression flag set on filesystem without compression support.  Clear? yes
Inode 12243364 has illegal block(s).  Clear? yes
Illegal block #0 (1263225675) in inode 12243364.  CLEARED.
Illegal block #1 (1263225675) in inode 12243364.  CLEARED.
Illegal block #2 (1263225675) in inode 12243364.  CLEARED.
Illegal block #3 (1263225675) in inode 12243364.  CLEARED.
Illegal block #4 (1263225675) in inode 12243364.  CLEARED.
Illegal block #5 (1263225675) in inode 12243364.  CLEARED.
Illegal block #6 (1263225675) in inode 12243364.  CLEARED.
Illegal block #7 (1263225675) in inode 12243364.  CLEARED.
Illegal block #8 (1263225675) in inode 12243364.  CLEARED.
Illegal block #9 (1263225675) in inode 12243364.  CLEARED.
Illegal block #10 (1263225675) in inode 12243364.  CLEARED.
Too many illegal blocks in inode 12243364.  Clear inode? yes
Free inodes count wrong for group #1824 (16872, counted=16384).  Fix? yes
Free inodes count wrong for group #1846 (16748, counted=16384).  Fix? yes
Free inodes count wrong (30657608, counted=30635973).  Fix? yes
[much more output deleted]

Once e2fsck completes, you'll see the standard summary message:

/dev/hdb1: ***** FILE SYSTEM WAS MODIFIED *****
/dev/hdb1: 2107/30638080 files (16.9% non-contiguous), 12109308/61273910 blocks

10.7.6. Checking the Restored Disk

At this point, you can mount the filesystem using the standard mount command and see how much data was recovered. If you have any idea how full the original filesystem was, you will hopefully see disk usage similar to that in the recovered filesystem. The differences in disk usage between the clone of your old filesystem and the original filesystem will depend on how badly corrupted the original filesystem was and how many files and directories had to be deleted due to inconsistency during the filesystem consistency check.

Remember to check the lost+found directory at the root of the cloned drive (i.e., in the directory where you mounted it), which is where fsck and its friends place files and directories that could not be correctly linked into the recovered filesystem. For more detailed information about identifying and piecing things together from a lost+found directory, see "Piece Together Data from the lost+found" [Hack #96].
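Entries in lost+found are named after their inode numbers (#11993099 and so on), so content inspection, typically with the file command, is how you figure out what each one was. The sketch below mimics that classification step in miniature; the names and contents are made up for illustration, and the heuristics are a toy version of what file actually does:

```python
# Fake lost+found entries, named the way fsck names them (by inode number).
recovered = {
    '#11993099': b'Dear diary, the disk died today.\n',
    '#11993100': b'one\r\ntwo\r\n',
    '#11993101': b'\x7fELF\x02\x01\x01' + bytes(9),
}

def classify(data: bytes) -> str:
    """Crude content classifier, loosely imitating the file(1) command."""
    if data.startswith(b'\x7fELF'):
        return 'ELF binary'
    if all(32 <= b < 127 or b in (9, 10, 13) for b in data):
        if b'\r\n' in data:
            return 'ASCII text, with CRLF line terminators'
        return 'ASCII text'
    return 'data'

for name, data in sorted(recovered.items()):
    print(f'{name}: {classify(data)}')
```

On a real recovered filesystem you would simply run file lost+found/* in the mount directory and then use head or strings on anything that looks like text.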
You'll be pleasantly surprised at how much data you can successfully recover using this technique, as will your users, who will regard you as even more wizardly after a recovery effort such as this one. Between this hack and your backups (you do backups, right?), even a disk failure may not cause significant data loss.

10.7.7. See Also

• "Recover Lost Partitions" [Hack #93]
• "Repair and Recover ReiserFS Filesystems" [Hack #95]
• "Piece Together Data from the lost+found" [Hack #96]
• "Recover Deleted Files" [Hack #97]

Hack 95. Repair and Recover ReiserFS Filesystems

Different filesystems have different repair utilities and naming conventions for recovered files. Here's how to repair a severely damaged ReiserFS filesystem.
[...]