Firewalls and Internet Security, Second Edition, part 8

Figure 16.1: Connections to the Jail. Two logs were kept per session, one each for input and output. The logs were labeled with starting and ending times.

The Jail was hard to set up. We had to get the access times in /dev right and update utmp for Jail users. Several raw disk files were too dangerous to leave around. We removed ps, who, w, netstat, and other revealing programs. The "login" shell script had to simulate login in several ways (see Figure 16.2). Diana D'Angelo set up a believable file system (this is very good system administration practice) and loaded a variety of silly and tempting files. Paul Glick got the utmp stuff working.

A little later Berferd discovered the Jail and rattled around in it. He looked for a number of programs that we later learned contained his favorite security holes. To us the Jail was not very convincing, but Berferd seemed to shrug it off as part of the strangeness of our gateway.

16.5 Tracing Berferd

Berferd spent a lot of time in our Jail. We spent a lot of time talking to Stephen Hansen, the system administrator at Stanford. Stephen spent a lot of time trying to get a trace. Berferd was attacking us through one of several machines at Stanford. He connected to those machines from a terminal server connected to a terminal server. He connected to the terminal server over a telephone line.

We checked the times he logged in to make a guess about the time zone he might be in. Figure 16.3 shows a simple graph we made of his session start times (PST). It seemed to suggest a sleep period on the East Coast of the United States, but programmers are noted for strange hours. This analysis wasn't very useful, but was worth a try.

    # setupsucker login
    SUCKERROOT=/usr/spool/hacker
    login=`echo $CDEST | cut -f4 -d!`    # extract login from service name
    home=`egrep "^$login:" $SUCKERROOT/etc/passwd | cut -d: -f6`
    PATH=/v:/bsd43:/sv;  export PATH
    HOME=$home;          export HOME
    USER=$login;         export USER
    SHELL=/v/sh;         export SHELL
    unset CSOURCE CDEST  # hide these Datakit strings
    # get the tty and pid to set up the fake utmp
    tty=`/bin/who | /bin/grep $login | /usr/bin/cut -c15-17 | /bin/tail -1`
    /usr/adm/uttools/telnetuseron /usr/spool/hacker/etc/utmp \
        $login $tty $$ 1>/dev/null 2>/dev/null
    chown $login /usr/spool/hacker/dev/tty$tty 1>/dev/null 2>/dev/null
    chmod 622 /usr/spool/hacker/dev/tty$tty 1>/dev/null 2>/dev/null
    /etc/chroot /usr/spool/hacker /v/su -c "$login" /v/sh -c "cd $HOME; exec /v/sh /etc/profile"
    /usr/adm/uttools/telnetuseroff /usr/spool/hacker/etc/utmp $tty \
        >/dev/null 2>/dev/null

Figure 16.2: The setupsucker shell script emulates login, and it is quite tricky. We had to make the environment variables look reasonable and attempted to maintain the Jail's own special utmp entries for the residents. We had to be careful to keep errors in the setup scripts from the hacker's eyes.

Stanford's battle with Berferd is an entire story on its own. Berferd was causing mayhem, subverting a number of machines and probing many more. He attacked numerous other hosts around the world from there. Tsutomu modified tcpdump to provide a time-stamped recording of each packet. This allowed him to replay real-time terminal sessions. They got very good at stopping Berferd's attacks within minutes after he logged into a new machine. In one instance they watched his progress using the ps command. His login name changed to uucp and then bin before the machine "had disk problems."
The tapped connections helped in many cases, although they couldn't monitor all the networks at Stanford. Early in the attack, Wietse Venema of Eindhoven University got in touch with the Stanford folks. He had been tracking hacking activities in the Netherlands for more than a year, and was pretty sure that he knew the identity of the attackers, including Berferd.

Eventually, several calls were traced. They traced back to Washington, Portugal, and finally to the Netherlands. The Dutch phone company refused to continue the trace to the caller because hacking was legal and there was no treaty in place. (A treaty requires action by the Executive branch and approval by the U.S. Senate, which was a bit further than we wanted to take this.)

Figure 16.3: A time graph of Berferd's activity: a crude plot, made at the time, of his session start times by hour of day (PST) from 19 January through 4 February. The tools built during an attack are often hurried and crude. [The plot itself is not reproduced here.]

A year later, this same crowd damaged some Dutch computers. Suddenly, the local authorities discovered a number of relevant applicable laws. Since then, the Dutch have passed new laws outlawing hacking.

Berferd used Stanford as a base for many months. There are tens of megabytes of logs of his activities. He had remarkable persistence at a very boring job of poking computers. Once he got an account on a machine, there was little hope for the system administrator. Berferd had a fine list of security holes. He knew obscure sendmail parameters and used them well. (Yes, some sendmails have security holes for logged-in users, too. Why is such a large and complex program allowed to run as root?) He had a collection of thoroughly invaded machines, complete with setuid-to-root shell scripts usually stored in /usr/lib/term/.s. You do not want to give him an account on your computer.

16.6 Berferd Comes Home

In the Sunday New York Times on 21 April 1991, John Markoff broke some of the Berferd story. He said that authorities were pursuing several Dutch hackers, but were unable to prosecute them because hacking was not illegal under Dutch law.

The hackers heard about the article within a day or so. Wietse collected some mail between several members of the Dutch cracker community. It was clear that they had bought the fiction of our machine's demise. One of Berferd's friends found it strange that the Times didn't include our computer in the list of those damaged.

On May 1, Berferd logged into the Jail. By this time we could recognize him by his typing speed and errors and the commands he used to check around and attack. He probed various computers, while consulting the network whois service for certain brands of hosts and new targets. He did not break into any of the machines he tried from our Jail. Of the hundred-odd sites he attacked, three noticed the attempts, and followed up with calls from very serious security officers. I explained to them that the hacker was legally untouchable as far as we knew, and the best we could do was log his activities and supply logs to the victims. Berferd had many bases for laundering his connections. It was only through persistence and luck that he was logged at all. Would the system administrator of an attacked machine prefer a log of the cracker's attack to vague deductions?
Damage control is much easier when the actual damage is known. If a system administrator doesn't have a log, he or she should reload the compromised system from the release tapes or CD-ROM.

The system administrators of the targeted sites and their management agreed with me, and asked that we keep the Jail open. At the request of our management, I shut the Jail down on May 3. Berferd tried to reach it a few times and went away. He moved his operation to a hacked computer in Sweden. We didn't have a formal way to stop Berferd. In fact, we were lucky to know who he was: most system administrators have no means to determine who attacked them. His friends finally slowed down when Wietse Venema called one of their mothers.

Several other things were apparent with hindsight. First and foremost, we did not know in advance what to do with a hacker. We made our decisions as we went along, and based them partly on expediency. One crucial decision—to let Berferd use part of our machine, via the Jail—did not have the support of management.

We also had few tools available. The scripts we used, and the Jail itself, were created on the fly. There were errors, things that could have tipped off Berferd, had he been more alert. Sites that want to monitor hackers should prepare their toolkits in advance. This includes buying any necessary hardware. In fact, the only good piece of advance preparation we had done was to set up log monitors. In short, we weren't ready. Are you?

17 The Taking of Clark

    And then Something went bump!
    How that bump made us jump!

    The Cat in the Hat
    —DR. SEUSS

Most people don't know when their computers have been hacked. Most systems lack the logging and the attention needed to detect an attempted invasion, much less a successful one. Josh Quittner [Quittner and Slatalla, 1995] tells of a hacker who was caught, convicted, and served his time. When he got out of jail, many of the old back doors he had left in hacked systems were still there.

We had a computer that was hacked, but the intended results weren't subtle. In fact, the attackers' goals were to embarrass our company, and they nearly succeeded. Often, management fears corporate embarrassment more than the actual loss of data. It can tarnish the reputation of a company, which can be more valuable than the company's actual secrets. This is one important reason why most computer break-ins are never reported to the press or police.

The attackers invaded a host we didn't care about or watch much. This is also typical behavior. Attackers like to find abandoned or orphaned computer accounts and hosts—these are unlikely to be watched. An active user is more likely to notice that his or her account is in use by someone else. The finger command is often used to list accounts and find unused accounts. Unused hosts are not maintained: their software isn't fixed and, in particular, they don't receive security patches.

17.1 Prelude

Our target host was CLARK.RESEARCH.ATT.COM. It was installed as part of the XUNET project, which was conducting research into high-speed (DS3: 45 Mb/sec) networking across the U.S. (Back in 1994, that was fast.) The project needed direct network access at speeds much faster than our firewall could support at the time. The XUNET hosts were installed on a network outside our firewall.
Without our firewall's perimeter defense, we had to rely on host-based security on these external hosts, a dubious proposition given that we were using commercial UNIX systems. This difficult task of host-based security and system administration fell to a colleague of ours, Pat Parseghian. She installed one-time passwords for logins, removed all unnecessary network services, turned off the execute bits on /usr/lib/sendmail, and ran COPS [Farmer and Spafford, 1990] on these systems. Not everything was tightened up. The users needed to share file systems for development work, so NFS was left running. Ftp didn't use one-time passwords until late in the project.

Out of general paranoia, we located all the external nonfirewall hosts on a branch of the network beyond a bridge. The normal firewall traffic does not pass these miscellaneous external hosts—we didn't want sniffers on a hacked host to have access to our main Internet flow.

17.2 CLARK

CLARK was one of two spare DECstation 5000s running three-year-old software. They were equipped with video cameras and software for use in high-speed networking demos. We could see people sitting at similar workstations across the country in Berkeley, at least when the demo was running.

The workstations were installed outside with some care: unnecessary network services were removed, as best as we can recall. We had no backups of these scratch computers. The password file was copied from another external XUNET host. No arrangements were made for one-time password use. These were neglected hosts that collected dust in the corner, except when used on occasion by summer students.

Shortly after Thanksgiving in 1994, Pat logged into CLARK and was greeted with a banner quite different from our usual threatening message. It started with

    ULTRIX V4.2A (Rev. 47) System #6: Tue Sep 22 11:41:50 EDT 1992
    UWS V4.2A (Rev. 420)

    %% GREETINGS FROM THE INTERNET LIBERATION FRONT %%

    Once upon a time, there was a wide area network called the Internet.
    A network unscathed by capitalistic Fortune 500 companies and the like.

and continued on: a one-page diatribe against firewalls and large corporations. The message included a PGP public key we could use to reply to them. (Actually, possession of the corresponding private key could be interesting evidence in a trial.)

Pat disconnected both Ultrix hosts from the net and rebooted them. Then we checked them out. Many people have trouble convincing themselves that they have been hacked. They often find out by luck, or when someone from somewhere complains about illicit activity originating from the hacked host. Subtlety wasn't a problem here.

17.3 Crude Forensics

It is natural to wander around a hacked system to find interesting dregs and signs of the attack. It is also natural to reboot the computer to stop whatever bad things might have been happening. Both of these actions are dangerous if you are seriously interested in examining the computer for details of the attack. Hackers often make changes to the shutdown or restart code to hide their tracks, or worse. The best thing to do is the following:

1. Run ps and netstat to see what is running, but it probably won't do you any good. Hackers have kernel mods or modified copies of such programs that hide their activity.

2. Turn the computer off, without shutting it down nicely.

3. Mount the system's disks on a secure host read-only, noexec, and examine them (a sketch of this step appears below). You can no longer trust the programs or even the operating system on a hacked host.
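As a concrete illustration of step 3, the fragment below shows roughly how a suspect disk might be attached to a separate, trusted analysis machine today. This is a minimal sketch, not the procedure used in this chapter: the device name /dev/sdb1 and the mount point /mnt/evidence are hypothetical, and the exact mount options vary between UNIX variants.

    # Minimal sketch (assumed names): attach the suspect disk to a trusted
    # analysis host and mount it so nothing on it can execute or be modified.
    mkdir -p /mnt/evidence
    mount -o ro,noexec,nosuid,nodev /dev/sdb1 /mnt/evidence

    # Examine the contents using only the analysis host's own tools; the
    # suspect filesystem's binaries are never run.
    ls -la /mnt/evidence/etc
    find /mnt/evidence -mtime -30 -print    # files modified in the last 30 days

Working from a read-only mount also keeps timestamps and file contents intact in case the evidence is needed later.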
There are many questions you must answer:

• What other hosts did they get into? Successful attacks are rarely limited to a single host.
• Do you want them to know that they have been discovered?
• Do you want to try to hunt them down?
• How long ago was the machine compromised?
• Are your backups any good?
• What are the motives of the attackers? Are they just collecting hosts, or were they spying?
• What network traffic travels past the interfaces on the host? Could they have sniffed passwords, e-mail, credit card numbers, or important secrets?
• Are you capable of keeping them out of a newly rebuilt host?

17.4 Examining CLARK

We asked a simple, naive question: Did they gain root access? If they changed /etc/motd, the answer is probably "yes":

    # cd /etc
    # ls -l motd
    -rw-r--r--   1 root     2392 Jan  6 12:42 motd

Yes. Either they had root permission or they hacked our ls command to report erroneous information. In either case, the only thing we can say about the software with confidence is that we have absolutely no confidence in it. To rehabilitate this host, Pat had to completely reload its software from the distribution media. It was possible to save remaining non-executable files, but in our case this wasn't necessary.

Of course, we wanted to see what they did. In particular, did they get into the main XUNET hosts through the NFS links? (We never found out, but they certainly could have.) We had a look around:

    # cd /
    # ls -l
    total 6726
    -rw-r--r--   1 root      162 Aug  5  1992 .Xdefaults
    -rw-r--r--   1 root       32 Jul 24  1992 .Xdefaults.old
    -rwxr--r--   1 root      259 Aug 18  1992 .cshrc
    -rwxr--r--   1 root      102 Aug 18  1992 .login
    -rwxr--r--   1 root      172 Nov 15  1991 .profile
    -rwxr--r--   1 root       48 Aug 21 10:41 .rhosts
    -rw-r--r--   1 root       14 Nov 24 14:57 NICE_SECURITY_BOOK_CHES_BUT_ILF_OWNZ_U
    drwxr-xr-x   2 root     2048 Jul 20  1993 bin
    -rw-r--r--   1 root      315 Aug 20  1992 default.DECterm
    drwxr-xr-x   3 root     3072 Jan  6 12:45 dev
    drwxr-xr-x   3 root     3072 Jan  6 12:55 etc
    -rwxr-xr-x   1 root  2761504 Nov 15  1991 genvmunix
    lrwxr-xr-x   1 root        7 Jul 24  1992 lib -> usr/lib
    drwxr-xr-x   2 root     8192 Nov 15  1991 lost+found
    drwxr-xr-x   2 root      512 Nov 15  1991 mnt
    drwxr-xr-x   6 root      512 Mar 26  1993 n
    drwxr-xr-x   2 root      512 Jul 24  1992 opr
    lrwxr-xr-x   1 root        7 Jul 24  1992 sys -> usr/sys
    lrwxr-xr-x   1 root        8 Jul 24  1992 tmp -> /var/tmp
    drwxr-xr-x   2 root     1024 Jul 18 15:39 u
    -rw-r--r--   1 root    11520 Mar 19  1991 ultrixboot
    drwxr-xr-x  23 root      512 Aug 24  1993 usr
    lrwxr-xr-x   1 root        4 Aug  6  1992 usr1 -> /usr
    lrwxr-xr-x   1 root        8 Jul 24  1992 var -> /usr/var
    -rwxr-xr-x   1 root  4052424 Sep 22  1992 vmunix

    # cat NICE_SECURITY_BOOK_CHES_BUT_ILF_OWNZ_U
    we win u lose

A message from the dark side! (Perhaps they chose a long filename to create typesetting difficulties for this chapter—but that might be too paranoid.)

17.4.1 /usr/lib

What did they do on this machine? We learned the next forensic trick from reading old hacking logs. It was gratifying that it worked so quickly:

    # find / -print | grep ' '
    /usr/var/tmp/
    /usr/lib/
    /usr/lib/ /es.c
    /usr/lib/ /
    /usr/lib/ /in.telnetd

Creeps like to hide their files and directories with names that don't show up well on directory listings. They use three tricks on UNIX systems: embed blanks in the names, prefix names with a period, and use control characters. /usr/var/tmp and "/usr/lib/ " had interesting files in them.
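The same hunt can be extended to the other two hiding tricks just mentioned. The commands below are a rough sketch of how one might sweep a mounted copy of a suspect disk for such names today; the mount point /mnt/evidence is a hypothetical name carried over from the earlier sketch, not a path from this chapter.

    # Names containing embedded blanks (the trick used here).
    find /mnt/evidence -name '* *' -print

    # Names containing control characters or other non-printing bytes.
    find /mnt/evidence -print | grep '[^ -~]'

    # Dot-files lurking outside home directories, for example under /usr/lib.
    find /mnt/evidence/usr/lib -name '.*' -print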
We looked in /usr/lib, and determined the exact directory name:

    # cd /usr/lib
    # ls | od -c | sed 10q
    0000000                  \n   D   P   S  \n   M   a   i   l   .   h   e   l
    0000020    p  \n   M   a   i   l   .   h   e   l   p   .   ~  \n   M   a
    0000040    i   l   .   r   c  \n   X   1   1  \n   X   M   e   d   i   a
    0000060   \n   x   i   i   b   i   n   t   v   .   o  \n   a   l   i   a
    0000100    s   e   s  \n   a   l   i   a   s   e   s   .   d   i   r  \n
    0000120    a   l   i   a   s   e   s   .   p   a   g  \n   a   r   i   n
    0000140    g   .   l   o   d  \n   a   t   r   u   n  \n   c   a   l   e
    0000160    n   d   a   r  \n   c   d   a  \n   c   m   p   l   r   s  \n
    0000200    c   p   p  \n   c   r   o   n  \n   c   r   o   n   t   a   b
    0000220   \n   c   r   t   0   .   o  \n   c   t   r   a   c   e  \n   d

(Experienced UNIX system administrators employ the od command when novices create strange, unprintable filenames.) In this case, the directory name was three ASCII blanks. We enter the directory:

    # cd '/usr/lib/   '
    # ls -la
    total 103
    drwxr-xr-x   2 root      512 Oct 22 17:07 .
    drwxr-xr-x   2 root     2560 Nov 24 13:47 ..
    -rw-r--r--   1 root       92 Oct 22 17:08 Log
    -rw-r--r--   1 root     9646 Oct 22 17:06 es.c
    -rwxr-xr-x   1 root    90112 Oct 22 17:07 in.telnetd
    # cat Log
    Log started at Sat Oct 22 17:07:41, pid=2671
    Log started at Sat Oct 22 17:08:36, pid=26721

[...]

... mechanism for the Internet is known as IPsec [Kent and Atkinson, 1998c; Thayer et al., 1998]. IPsec includes an encryption mechanism (Encapsulating Security Protocol (ESP)) [Kent and Atkinson, 1998b]; an authentication mechanism (Authentication Header (AH)) [Kent and Atkinson, 1998a]; and a key management protocol (Internet Key Exchange (IKE)) [Harkins and Carrel, 1998].

18.3.1 ESP and AH

ESP and AH rely ... protected by a variety of patents. It may be wise to seek competent legal advice.

18.1 The Kerberos Authentication System

The Kerberos Authentication System [Bryant, 1988; Kohl and Neuman, 1993; Miller et al., 1987; Steiner et al., 1988] was designed at MIT as part of Project Athena. It serves two purposes: authentication and key distribution. That is, it provides to hosts—or more accurately, to various ... users. Each user and each service shares a secret key with the Kerberos Key Distribution Center (KDC); these keys act as master keys to distribute session keys, and as evidence that the KDC vouches for the information contained in certain messages. The basic protocol is derived from one originally proposed by Needham and Schroeder [Needham and Schroeder, 1978, 1987; Denning and Sacco, 1981]. More precisely, ... made about exactly where and how it should be installed, with trade-offs in terms of economy, granularity of protection, and impact on existing systems. Accordingly, Sections 18.2, 18.3, and 18.4 discuss these trade-offs, and present some security systems in use today. In the discussion that follows, we assume that the cryptosystems involved—that is, the cryptographic algorithm and the protocols that ... ticket for server s and a copy of Kc,s, all encrypted with a private key shared by the TGS and the principal:

    Kc,tgs[Ks[Tc,s], Kc,s]                              (18.5)

The session key Kc,s is a newly chosen random key. The key Kc,tgs and the ticket-granting ticket are obtained at session start time. The client sends a message to Kerberos with a principal name; Kerberos responds with

    Kc[Kc,tgs, Ktgs[Tc,tgs]]                            (18.6)

The client key ... university-wide KDC, and it in turn to a regional one. Only the regional KDCs would need to share keys with each other in a complete mesh.

18.1.1 Limitations

Although Kerberos is extremely useful, and far better than the address-based authentication methods that most earlier protocols used, it does have some weaknesses and limitations [Bellovin and Merritt, 1991]. First and foremost, ...
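For orientation, the ticket exchange described in the excerpts above can be written out as a message flow. The summary below is a simplified sketch of the classic Kerberos design, with authenticators, timestamps, and realm names largely omitted; it restates the bracket notation used above in LaTeX and is a reading aid, not a quotation from the book.

\[
\begin{array}{lll}
1. & c \to \mbox{KDC}: & c,\ tgs \\
2. & \mbox{KDC} \to c: & K_c[\,K_{c,tgs},\ K_{tgs}[T_{c,tgs}]\,] \quad \mbox{(cf. 18.6)} \\
3. & c \to \mbox{TGS}: & K_{tgs}[T_{c,tgs}],\ \mbox{authenticator},\ s \\
4. & \mbox{TGS} \to c: & K_{c,tgs}[\,K_s[T_{c,s}],\ K_{c,s}\,] \quad \mbox{(cf. 18.5)} \\
5. & c \to s: & K_s[T_{c,s}],\ \mbox{authenticator under } K_{c,s}
\end{array}
\]

The client talks to the KDC once per login session to obtain the ticket-granting ticket (message 2), and thereafter asks the TGS for a ticket whenever it wants to reach a new server (messages 3 and 4), which keeps the user's long-term key Kc out of most of the protocol.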
... a detailed look at cryptography and network security. We first discuss the Kerberos Authentication System. Kerberos is an excellent package, and the code is widely available. It's an IETF Proposed Standard, and it's part of Windows 2000. These things make it an excellent case study, as it is a real design, not vaporware. It has been the subject of many papers and talks, and enjoys widespread use. Selecting ...

... using RSA and sends it to the server. The server then uses its private key to decrypt the symmetric key material and derives the encryption and authentication keys. Next, the client and the server exchange messages that contain the MAC of the entire dialogue up to this point. This ensures that the messages were not tampered with and that both parties have the correct key. After the MACs are received and verified, ...

... there is indeed a threat, but that the threat can generally be contained by proper techniques, including the use of firewalls. Firewalls are not the be-all and end-all of security, though. Much more can and should be done. Here's our take on where the future is headed. We've been wrong before, and we'll likely be wrong again. (One of us, Steve, was one of the developers of NetNews. He predicted that the ultimate ...

19.3 Microsoft and Security

Recently, the media has been reporting that Microsoft is now going to focus on security. This seems to be true; it's not just public relations propaganda. They are offering widespread security training and awareness courses and are developing new security auditing tools; their corporate culture is already changing. We salute this effort, and hope that the ...
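The key-exchange excerpt above describes the client side of an SSL/TLS handshake. One low-effort way to watch such a handshake today is with the openssl command-line client. The sketch below is an illustration under stated assumptions, not part of the book: the host name is a placeholder, and most modern TLS deployments negotiate an ephemeral Diffie-Hellman key exchange rather than the RSA key transport described in the excerpt.

    # Open a TLS connection and print the handshake results: the server's
    # certificate chain, the negotiated protocol version, and the cipher
    # suite. "www.example.com" is a placeholder host name.
    openssl s_client -connect www.example.com:443 -showcerts < /dev/null

The "Protocol" and "Cipher" lines near the end of the output summarize what the two sides agreed on during the handshake, and the certificate dump shows the credentials the server presented.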

Contents

  • Part VI Lessons Learned

    • 17 The Taking of Clark

    • 18 Secure Communications over Insecure Networks

    • 19 Where Do We Go from Here?

    • Part VII Appendixes

      • Appendix A An Introduction to Cryptography
