Most of the Web serves entire streams of data without so much as a blink to clients whose only evidence of their identity can be reduced down to a single HTTP call: GET /. (That's a period to end the sentence, not an obligatory Slashdot reference. This is an obligatory Slashdot reference.) The GET call is documented in RFCs (RFC 1945) and is public knowledge. It is possible to have higher levels of authentication supported by the protocol, and the upgrade to those levels is reasonably smoothly handled. But the base public access system depends merely on one's knowledge of the HTTP protocol and the ability to make a successful TCP connection to port 80.
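To make that concrete, here is a minimal sketch (not from the original text) of the entire "identity check" a public Web server performs: the client resolves a name, completes a TCP handshake to port 80, and utters a well-formed GET. The hostname is a placeholder, and HTTP/1.0 framing is used only to keep the example short.

```c
/* get_slash.c -- a sketch of all the "identity" a public Web server demands:
 * a TCP connection to port 80 and a well-formed GET.  The default hostname
 * is an illustrative placeholder.
 * Build: cc -o get_slash get_slash.c
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

int main(int argc, char **argv)
{
    const char *host = (argc > 1) ? argv[1] : "www.example.com";
    struct addrinfo hints, *res;
    char buf[4096];
    ssize_t n;
    int s;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;

    /* Step one of our "authentication": resolve the name, connect to port 80. */
    if (getaddrinfo(host, "80", &hints, &res) != 0) {
        fprintf(stderr, "cannot resolve %s\n", host);
        return 1;
    }
    s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (s < 0 || connect(s, res->ai_addr, res->ai_addrlen) < 0) {
        perror("connect");
        return 1;
    }

    /* Step two: speak the protocol.  That is all the server asks of us. */
    snprintf(buf, sizeof(buf), "GET / HTTP/1.0\r\nHost: %s\r\n\r\n", host);
    write(s, buf, strlen(buf));

    /* Read whatever the server is willing to hand an anonymous caller. */
    while ((n = read(s, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    freeaddrinfo(res);
    close(s);
    return 0;
}
```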
Not all protocols are as open, however. Through either underdocumentation or restriction of sample code, many protocols are entirely closed. The mere ability to speak the protocol authenticates one as worthy of what may very well represent a substantial amount of trust; the presumption is, if you can speak the language, you're skilled enough to use it. That doesn't mean anyone wants you to, unfortunately.

The war between open source and closed source has been waged quite harshly in recent times and will continue to rage. There is much that is uncertain; however, there is one specific argument that can actually be won. In the war between open protocols versus closed protocols, the mere ability to speak to one or the other should never, ever, ever grant you enough trust to order workstations to execute arbitrary commands. Servers must be able to provide something—maybe even just a password—to be able to execute commands on client machines. Unless this constraint is met, a deployment of a master server anywhere conceivably allows for control of hosts everywhere.

Who made this mistake? Both Microsoft and Novell. Neither company's client software (with the possible exception of a Kerberized Windows 2000 network) does any authentication on the domains they are logging in to beyond verifying that, indeed, they know how to say "Welcome to my domain. Here is a script of commands for you to run upon login." The presumption behind the design was that nobody would ever be on a LAN (local area network) with computers they owned themselves; the physical security of an office (the only place where you find LANs, apparently) would prevent spoofed servers from popping up. As I wrote back in May of 1999:

"A common aspect of most client-server network designs is the login script. A set of commands executed upon provision of correct username and password, the login script provides the means for corporate system administrators to centrally manage their flock of clients. Unfortunately, what's seemingly good for the business turns out to be a disastrous security hole in the university environment, where students logging in to the network from their dorm rooms now find the network logging in to them. This hole provides a single, uniform point of access to any number of previously uncompromised clients, and is a severe liability that must be dealt with the highest urgency. Even those in the corporate environment should take note of their uncomfortable exposure and demand a number of security procedures described herein to protect their networks."

—Dan Kaminsky, "Insecurity by Design: The Unforeseen Consequences of Login Scripts," www.doxpara.com/login.html

Ability to Prove a Shared Secret: "Does It Share a Secret with Me?"

This is the first ability check where a cryptographically secure identity begins to form. Shared secrets are essentially tokens that two hosts share with one another. They can be used to establish links that are:

■ Confidential  The communications appear as noise to any other hosts but the ones communicating.
■ Authenticated  Each side of the encrypted channel is assured of the trusted identity of the other.
■ Integrity Checked  Any communications that travel over the encrypted channel cannot be interrupted, hijacked, or inserted into.
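Here is a minimal sketch of what those three properties look like when a shared secret is actually put to work. The sketch uses libsodium's secretbox construction purely as an illustration; it is not one of the protocols discussed in this chapter, and the message contents are invented. Note that the "authentication" it offers means only "someone holding the key sent this," which matters greatly once more than two hosts hold that key.

```c
/* shared_secret_box.c -- sketch of what a shared secret can provide when it
 * is used well: libsodium's secretbox (XSalsa20-Poly1305) gives both
 * confidentiality and integrity under one shared key.  Illustration only.
 * Build: cc -o shared_secret_box shared_secret_box.c -lsodium
 */
#include <stdio.h>
#include <sodium.h>

int main(void)
{
    unsigned char key[crypto_secretbox_KEYBYTES];    /* the shared secret  */
    unsigned char nonce[crypto_secretbox_NONCEBYTES];
    const unsigned char msg[] = "enable port 47; trust me";
    unsigned char boxed[crypto_secretbox_MACBYTES + sizeof(msg)];
    unsigned char opened[sizeof(msg)];

    if (sodium_init() < 0)
        return 1;

    crypto_secretbox_keygen(key);           /* both hosts hold this key    */
    randombytes_buf(nonce, sizeof(nonce));  /* fresh nonce for each message */

    /* Confidentiality + integrity: seal the message under the shared key. */
    crypto_secretbox_easy(boxed, msg, sizeof(msg), nonce, key);

    /* Flip one bit in transit and the box refuses to open at all. */
    boxed[crypto_secretbox_MACBYTES + 3] ^= 0x01;
    if (crypto_secretbox_open_easy(opened, boxed, sizeof(boxed), nonce, key) != 0)
        printf("tampered message rejected\n");

    /* Undo the damage and any legitimate key holder reads it fine. */
    boxed[crypto_secretbox_MACBYTES + 3] ^= 0x01;
    if (crypto_secretbox_open_easy(opened, boxed, sizeof(boxed), nonce, key) == 0)
        printf("recovered: %s\n", opened);

    return 0;
}
```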
Merely sharing a secret—a short word or phrase, generally—does not directly win all three, but it does enable the technologies to be deployed reasonably straightforwardly. This does not mean that such systems have been. The largest deployment of systems that depend upon this ability to authenticate their users is by far the password contingent. Unfortunately, Telnet is about the height of password-exchange technology at most sites, and even most Web sites don't use the Message Digest 5 (MD5) standard to exchange passwords.

It could be worse; passwords to every company could be printed in the classified section of the New York Times. That's a comforting thought. "If our firewall goes, every device around here is owned. But, at least my passwords aren't in the New York Times."

All joking aside, there are actually deployed cryptosystems that do grant cryptographic protections to the systems they protect. Almost always bolted onto decent protocols with good distributed functionality but very bad security (ex: RIPv2 from the original RIP, and TACACS+ from the original TACACS/XTACACS), they suffer from two major problems: First, their cryptography isn't very good. Solar Designer, with an example of what every security advisory would ideally look like, talks about TACACS+ in "An Analysis of the TACACS+ Protocol and its Implementations." The paper is located at www.openwall.com/advisories/OW-001-tac_plus.txt. Spoofing packets such that it would appear that the secret was known would not be too difficult for a dedicated attacker with active sniffing capability. Second, and much more importantly, passwords lose much of their power once they're shared past two hosts! Both TACACS+ and RIPv2 depend on a single, shared password throughout the entire usage infrastructure (TACACS+ actually could be rewritten not to have this dependency, but I don't believe RIPv2 could). When only two machines have a password, look closely at the implications:

■ Confidential?  The communications appear as noise to any other hosts but the ones communicating…but could appear as plaintext to any other host who shares the password.
■ Authenticated?  Each side of the encrypted channel is assured of the trusted identity of the other…assuming none of the other dozens, hundreds, or thousands of hosts with the same password have either had their passwords stolen or are actively spoofing the other end of the link themselves.
■ Integrity Checked?  Any communications that travel over the encrypted channel cannot be interrupted, hijacked, or inserted into, unless somebody leaked the key as above.

Use of a single, shared password between two hosts in a virtual point-to-point connection arrangement works, and works well. Even when this relationship is a client-to-server one (for example, with TACACS+, assume but a single client router authenticating an offered password against CiscoSecure, the backend Cisco password server), you're either the client asking for a password or the server offering one. If you're the server, the only other host with the key is a client. If you're the client, the only other host with the key is the server that you trust. However, if there are multiple clients, every other client could conceivably become your server, and you'd never be the wiser. Shared passwords work great for point-to-point, but fail miserably for multiple clients to servers: "The other end of the link" is no longer necessarily trusted.

NOTE
Despite that, TACACS+ allows so much more flexibility for assigning access privileges and centralizing management that, in spite of its weaknesses, implementation and deployment of a TACACS+ server still remains one of the better things a company can do to increase security.

That's not to say that there aren't any good spoof-resistant systems that depend upon passwords. Cisco routers use SSH's password-exchange systems to allow an engineer to securely present his password to the router. The password is used only for authenticating the user to the router; all confidentiality, link integrity, and (because we don't want an engineer giving the wrong device a password!) router-to-engineer authentication is handled by the next layer up: the private key.

Ability to Prove a Private Keypair: "Can I Recognize Your Voice?"

Challenging the ability to prove a private keypair invokes a cryptographic entity known as an asymmetric cipher. Symmetric ciphers, such as Triple-DES, Blowfish, and Twofish, use a single key to both encrypt a message and decrypt it. See Chapter 6 for more details. If just two hosts share those keys, authentication is guaranteed—if you didn't send a message, the host with the other copy of your key did.

The problem is, even in an ideal world, such systems do not scale. Not only must every two machines that require a shared key have a single key for each host they intend to speak to—an exponential growth problem—but those keys must be transferred from one host to another in some trusted fashion over a network, floppy drive, or some data transference method. Plaintext is hard enough to transfer securely; critical key material is almost impossible. Simply by spoofing oneself as the destination for a key transaction, you get a key and can impersonate two people to each other. Yes, more and more layers of symmetric keys can be (and in the military, are) used to insulate key transfers, but in the end, secret material has to move.
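The growth problem can be made concrete with a little arithmetic: n hosts that each need a pairwise secret with every other host need n(n-1)/2 distinct keys, while an asymmetric scheme needs only one keypair per host. The numbers in the sketch below are purely illustrative.

```c
/* key_counts.c -- sketch of the scaling argument: full pairwise symmetric
 * keying needs n*(n-1)/2 distinct secrets, while asymmetric cryptography
 * needs one keypair per host.  Host counts below are illustrative only.
 * Build: cc -o key_counts key_counts.c
 */
#include <stdio.h>

int main(void)
{
    const unsigned long hosts[] = { 10, 100, 1000, 10000 };
    size_t i;

    printf("%10s %20s %15s\n", "hosts", "pairwise sym. keys", "asym. keypairs");
    for (i = 0; i < sizeof(hosts) / sizeof(hosts[0]); i++) {
        unsigned long n = hosts[i];
        /* every pair of hosts needs its own secret: n choose 2 */
        printf("%10lu %20lu %15lu\n", n, n * (n - 1) / 2, n);
    }
    return 0;
}
```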
Asymmetric ciphers, such as RSA and Diffie-Hellman/ElGamal, offer a better way. Asymmetric ciphers mix into the same key the ability to encrypt data, decrypt data, sign the data with your identity, and prove that you signed it. That's a lot of capabilities embedded into one key—the asymmetric ciphers split the key into two: one of which is kept secret, and can decrypt data or sign your independent identity—this is known as the private key. The other is publicized freely, and can encrypt data for your decrypting purposes or be used to verify your signature without imparting the ability to forge it. This is known as the public key.

More than anything else, the biggest advantage of private key cryptosystems is that key material never needs to move from one host to another. Two hosts can prove their identities to one another without having ever exchanged anything that can decrypt data or forge an identity. Such is the system used by PGP.

Ability to Prove an Identity Keypair: "Is Its Identity Independently Represented in My Keypair?"

The primary problem faced by systems such as PGP is: What happens when people know me by my ability to decrypt certain data? In other words, what happens when I can't change the keys I offer people to send me data with, because those same keys imply that "I" am no longer "me"?

Simple. The British Parliament starts trying to pass a law saying that, now that my keys can't change, I can be made to retroactively unveil every e-mail I have ever been sent, deleted by me (but not by a remote archive) or not, simply because a recent e-mail needs to be decrypted. Worse, once this identity key is released, they are now cryptographically me—in the name of requiring the ability to decrypt data, they now have full control of my signing identity.

The entire flow of these abilities has been to isolate out the abilities most focused on identity; the identity key is essentially an asymmetric keypair that is never used to directly encrypt data, only to authorize a key for the usage of encrypting data. SSH and a PGP variant I'm developing known as Dynamically Rekeyed OpenPGP (DROP) all implement this separation of identity and content, finally boiling down to a single cryptographic pair everything that humanity has developed in its pursuit of trust. The basic idea is simple: A keyserver is updated regularly with short-lifespan encryption/decryption keypairs, and the mail sender knows it is safe to accept the new key from the keyserver because even though the new material is unknown, it is signed by something long term that is known: the long-term key. In this way, we separate our short-term requirements to accept mail from our long-term requirements to retain our identity, and restrict our vulnerability to attack.
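The mechanics of that separation can be sketched in a few lines. The following is an illustration of the general idea using libsodium, not DROP's or SSH's actual formats: a long-term signing key never encrypts anything; it only vouches for a freshly generated, short-lived encryption key, which a correspondent who already trusts the identity key can then safely accept.

```c
/* identity_vs_content.c -- sketch of separating identity from content keys:
 * a long-term signing (Ed25519) key certifies a short-lived encryption
 * (Curve25519) keypair.  This illustrates the idea in the text; it is not
 * the actual DROP or SSH wire format.
 * Build: cc -o identity_vs_content identity_vs_content.c -lsodium
 */
#include <stdio.h>
#include <sodium.h>

int main(void)
{
    /* Long-term identity: used only to sign, never to encrypt. */
    unsigned char id_pk[crypto_sign_PUBLICKEYBYTES];
    unsigned char id_sk[crypto_sign_SECRETKEYBYTES];

    /* Short-lived content key: rotated often, used only for encryption. */
    unsigned char enc_pk[crypto_box_PUBLICKEYBYTES];
    unsigned char enc_sk[crypto_box_SECRETKEYBYTES];

    unsigned char sig[crypto_sign_BYTES];

    if (sodium_init() < 0)
        return 1;

    crypto_sign_keypair(id_pk, id_sk);   /* published once, kept for years  */
    crypto_box_keypair(enc_pk, enc_sk);  /* regenerated on a short schedule */

    /* The identity key vouches for this week's encryption key. */
    crypto_sign_detached(sig, NULL, enc_pk, sizeof(enc_pk), id_sk);

    /* A sender who already trusts id_pk checks the signature before using
     * the previously unseen enc_pk to encrypt mail. */
    if (crypto_sign_verify_detached(sig, enc_pk, sizeof(enc_pk), id_pk) == 0)
        printf("new encryption key accepted: signed by a known identity\n");
    else
        printf("rejected: not vouched for by the identity key\n");

    return 0;
}
```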
In technical terms, the trait that is being sought is that of Perfect Forward Secrecy (PFS). In a nutshell, this refers to the property of a cryptosystem, in the face of a future compromise, to at least compromise no data sent in the past. For purely symmetric cryptography, PFS is nearly automatic—the key used today would have no relation to the key used yesterday, so even if there's a compromise today, an attacker can't use the key recovered to decrypt past data. All future data, of course, might be at risk—but at least the past is secure. Asymmetric ciphers scramble this slightly: Although it is true that every symmetric key is usually different, each individual symmetric key is decrypted using the same asymmetric private key. Therefore, being able to decrypt today's symmetric key also means being able to decrypt yesterday's. As mentioned, keeping the same decryption key is often necessary because we need to use it to validate our identity in the long term, but it has its disadvantages.

Tools & Traps…

Perfect Forward Secrecy: SSL's Dirty Little Secret

The dirty little secret of SSL is that, unlike SSH and unnecessarily like standard PGP, its standard modes are not perfectly forward secure. This means that an attacker can lie in wait, sniffing encrypted traffic at its leisure for as long as it desires, until one day it breaks in and steals the SSL private key used by the SSL engine (which is extractable from all but the most custom hardware). At that point, all the traffic sniffed becomes retroactively decryptable—all credit card numbers, all transactions, all data is exposed no matter the time that had elapsed. This could be prevented within the existing infrastructure if VeriSign or other Certificate Authorities made it convenient and inexpensive to cycle through externally-authenticated keypairs, or it could be addressed if browser makers mandated or even really supported the use of PFS-capable cipher sets. Because neither is the case, SSL is left significantly less secure than it otherwise should be. To say this is a pity is an understatement. It's the dirtiest little secret in standard Internet cryptography.
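For contrast, here is a minimal sketch (an illustration, not something the SSL deployments described above offered) of what a forward-secret exchange looks like: both ends generate throwaway keypairs for this one session, derive shared session keys, and then erase the throwaway secrets. Stealing any long-lived key later tells an attacker nothing about traffic protected by keys that no longer exist.

```c
/* ephemeral_session.c -- sketch of the forward-secrecy idea: per-session,
 * throwaway keypairs derive the session keys, then the secrets are wiped.
 * Illustration only; real protocols (SSH, TLS with ephemeral Diffie-Hellman)
 * add identity authentication on top of this exchange.
 * Build: cc -o ephemeral_session ephemeral_session.c -lsodium
 */
#include <stdio.h>
#include <sodium.h>

int main(void)
{
    unsigned char client_pk[crypto_kx_PUBLICKEYBYTES], client_sk[crypto_kx_SECRETKEYBYTES];
    unsigned char server_pk[crypto_kx_PUBLICKEYBYTES], server_sk[crypto_kx_SECRETKEYBYTES];
    unsigned char client_rx[crypto_kx_SESSIONKEYBYTES], client_tx[crypto_kx_SESSIONKEYBYTES];
    unsigned char server_rx[crypto_kx_SESSIONKEYBYTES], server_tx[crypto_kx_SESSIONKEYBYTES];

    if (sodium_init() < 0)
        return 1;

    /* Fresh keypairs for this session only; nothing here is long-lived. */
    crypto_kx_keypair(client_pk, client_sk);
    crypto_kx_keypair(server_pk, server_sk);

    /* Each side derives matching session keys from the exchanged publics. */
    if (crypto_kx_client_session_keys(client_rx, client_tx,
                                      client_pk, client_sk, server_pk) != 0 ||
        crypto_kx_server_session_keys(server_rx, server_tx,
                                      server_pk, server_sk, client_pk) != 0)
        return 1;

    printf("session keys derived; traffic for this session is protected\n");

    /* Once the session ends, the ephemeral secrets are destroyed.  A later
     * break-in finds nothing that can decrypt what was already sent. */
    sodium_memzero(client_sk, sizeof(client_sk));
    sodium_memzero(server_sk, sizeof(server_sk));
    sodium_memzero(client_rx, sizeof(client_rx));
    sodium_memzero(client_tx, sizeof(client_tx));
    sodium_memzero(server_rx, sizeof(server_rx));
    sodium_memzero(server_tx, sizeof(server_tx));

    return 0;
}
```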
Configuration Methodologies: Building a Trusted Capability Index

All systems have their weak points, as sooner or later, it's unavoidable that we arbitrarily trust somebody to teach us who or what to trust. Babies and 'Bases, Toddlers 'n TACACS+—even the best of security systems will fail if the initial configuration of their Trusted Capability Index fails. As surprising as it may be, it's not unheard of for authentication databases that lock down entire networks to be themselves administered over unencrypted links. The chain of trust that a system undergoes when trusting outside communications is extensive and not altogether thought out; later in this chapter, an example is offered that should surprise you. The question at hand, though, is quite serious: Assuming trust and identity is identified as something to lock down, where should this lockdown be centered, or should it be centered at all?

Local Configurations vs. Central Configurations

One of the primary questions that comes up when designing security infrastructures is whether a single management station, database, or so on should be entrusted with massive amounts of trust and heavily locked down, or whether each device should be responsible for its own security and configuration. The intention is to prevent any system from becoming a single point of failure.

The logic seems sound. The primary assumption to be made is that security considerations for a security management station are to be equivalent to the sum total of all paranoia that should be invested in each individual station. So, obviously, the amount of paranoia invested in each machine, router, and so on, which is obviously bearable if people are still using the machine, must be superior to the seemingly unbearable security nightmare that a centralized management database would be, right?

The problem is, companies don't exist to implement perfect security; rather, they exist to use their infrastructure to get work done. Systems that are being used rarely have as much security paranoia implemented as they need. By "offloading" the security paranoia and isolating it into a backend machine that can actually be made as secure as need be, an infrastructure can be deployed that's usable on the front end and secure in the back end.

The primary advantage of a centralized security database is that it models the genuine security infrastructure of your site—as an organization gets larger, blanket access to all resources should be rare, but access as a whole should be consistently distributed from the top down. This simply isn't possible when there's nobody in charge of the infrastructure as a whole; overly distributed controls mean access clusters to whoever happens to want that access. Access at will never breeds a secure infrastructure.

The disadvantage, of course, is that the network becomes trusted to provide configurations. But with so many users willing to Telnet into a device to change passwords—which end up atrophying because nobody wants to change hundreds of passwords by hand—suddenly you're locked into an infrastructure that's dependent upon its firewall to protect it.

What's scary is, in the age of the hyperactive Net-connected desktop, firewalls are becoming less and less effective, simply because of the large number of opportunities for that desktop to be co-opted by an attacker.

Desktop Spoofs

Many spoofing attacks are aimed at the genuine owners of the resources being spoofed. The problem with that is, people generally notice when their own resources disappear. They rarely notice when someone else's does, unless they're no longer able to access something from somebody else.

The best of spoofs, then, are completely invisible. Vulnerability exploits break things; although it's not impossible to invisibly break things (the "slow corruption" attack), power is always more useful than destruction. The advantage of the spoof is that it absorbs the power of whatever trust is embedded in the identities that become appropriated. That trust is maintained for as long as the identity is trusted, and can often long outlive any form of network-level spoof. The fact that an account is controlled by an attacker rather than by a genuine user does maintain the system's status as being under spoof.

The Plague of Auto-Updating Applications

Question: What do you get when you combine multimedia programmers, consent-free network access to a fixed host, and no concerns for security because "It's just an auto-updater?" Answer: Figure 12.1.

What good firewalls do—and it's no small amount of good, let me tell you—is prevent all network access that users themselves don't explicitly request.
Surprisingly enough, users are generally pretty good about the code they run to access the Net. Web browsers, for all the heat they take, are probably among the most fault-tolerant, bounds-checking, attacked pieces of code in modern network deployment. They may fail to catch everything, but you know there were at least teams trying to make it fail.

See the Winamp auto-update notification box in Figure 12.1. Content comes from the network; authentication is nothing more than the ability to encode a response from www.winamp.com in the HTTP protocol, GETting /update/latest-version.jhtml?v=2.64 (where 2.64 here is the version I had; it will report whatever version it is, so the site can report if there is a newer one). It's not difficult to provide arbitrary content, and the buffer available to store that content overflows reasonably quickly (well, it will overflow when pointed at an 11MB file). See Chapter 11 for information on how you would accomplish an attack like this one.

However many times Internet Explorer is loaded in a day, it generally asks you before accessing any given site save the homepage (which most corporations set). By the time Winamp asks you if you want to upgrade to the latest version, it's already made itself vulnerable to every spoofing attack that could possibly sit between it and its rightful destination.

If not Winamp, then Creative Labs' Sound Blaster Live!Ware. If not Live!Ware, then RealVideo, or Microsoft Media Player, or some other multimedia application straining to develop marketable information at the cost of their customers' network security.

Figure 12.1 What Winamp Might As Well Say
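To see how low that bar sits, consider the sketch below: a trivial listener that will happily play the part of the update host for any client whose traffic can be steered to it by the ARP, DNS, or routing games described in this chapter. This is an illustration of the point above, not Winamp's actual protocol; the reply body is an invented placeholder.

```c
/* not_the_update_host.c -- sketch of how little "identity" an unauthenticated
 * update check requires: anything that can receive the client's TCP
 * connection and emit an HTTP response is, as far as the updater cares, the
 * legitimate update server.  The reply body is an illustrative placeholder.
 * Build: cc -o not_the_update_host not_the_update_host.c
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    const char reply[] =
        "HTTP/1.0 200 OK\r\n"
        "Content-Type: text/plain\r\n\r\n"
        "999.99\r\n";                 /* "a newer version is available" */
    struct sockaddr_in addr;
    int one = 1, s, c;
    char buf[1024];

    s = socket(AF_INET, SOCK_STREAM, 0);
    setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(80);        /* port 80 requires root privileges */

    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0 || listen(s, 5) < 0) {
        perror("bind/listen");
        return 1;
    }

    /* Accept anything, read whatever GET arrives, answer with our "truth". */
    while ((c = accept(s, NULL, NULL)) >= 0) {
        (void)read(c, buf, sizeof(buf));
        (void)write(c, reply, sizeof(reply) - 1);
        close(c);
    }
    return 0;
}
```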
Impacts of Spoofs

Spoofing attacks can be extremely damaging—and not just on computer networks. Doron Gellar writes:

The Israeli breaking of the Egyptian military code enabled them to confuse the Egyptian army and air force with false orders. Israeli officers "ordered an Egyptian MiG pilot to release his bombs over the sea instead of carrying out an attack on Israeli positions." When the pilot questioned the veracity of the order, the Israeli intelligence officer gave the pilot details on his wife and family. The pilot indeed dropped his bombs over the Mediterranean and parachuted to safety.
—Doron Gellar, Israeli Intelligence in the 1967 War

In this case, the pilot had a simple "trusted capabilities index": His legitimate superiors would know him in depth; they'd be aware of "personal entropy" that no outsider should know. He would challenge for this personal entropy—essentially, a shared key—as a prerequisite for behaving in a manner that obviously violated standard security procedure. (In general, the more damaging the request, the higher the authentication level should be—thus we allow anyone to ping us, but we demand higher proof to receive a root shell.) The pilot was tricked—Israeli intelligence earned its pay for that day—but his methods were reasonably sound. What more could he have done? He might have demanded […]

Notes from the Underground…

Auto Update as Savior?

I'll be honest: Although it's quite dangerous that so many applications are taking it upon themselves to update themselves automatically, at least something is leading to making it easier to patch obscenely broken code. Centralization has its advantages: When a major hole was found in AOL Instant Messenger, which potentially exposed over fifty million hosts to complete takeover, the centralized architecture of AOL IM allowed them to completely filter their entire network of such packets, if not completely automatically patch all connecting clients against the vulnerability. So although automatic updates and centralization have significant power—this power can be used to great effect by legitimate providers. Unfortunately, the legitimate are rarely the only ones to partake in any given system. In short: It's messy.

[…] reliable they allowed the service to remain. The results should be taken with a grain of salt, but as with much of the material on Cryptome, is well worth the read.

Bait and Switch: Spoofing the Presence of SSL Itself

If you think about it, really sit down and consider—why does a given user […]

[…] Implementation: DoxRoute, Section by Section

Execution of DoxRoute is pretty trivial:

[root@localhost effugas]# ./doxroute -r 10.0.1.254 -c -v 10.0.1.170
ARP REQUEST: Wrote 42 bytes looking for 10.0.1.254
Router Found: 10.0.1.254 at 0:3:E3:0:4E:6B
DATA: Sent 74 bytes to 171.68.10.70
DATA: Sent 62 bytes to 216.239.35.101
DATA: Sent 60 bytes to 216.239.35.101
DATA: Sent 406 bytes to 216.239.35.101
DATA: Sent 60 bytes […]

[…] interesting packages for using spoofs to execute man-in-the-middle (MITM) attacks against sessions on your network, with extensive support for a wide range of protocols. Good luck building your specific spoof into this. DoxRoute provides the infrastructure for answering the question "What if we could put a machine on the network that did…?" Well, if we can spoof an entire router in a few lines of code, spoofing whatever […]

[…] differentiate the Web's content from your system's; by bug or design there are methods of removing your system's pixels, leaving the Web to do what it will. (In this case, all that was needed was to set two options against each other: First, the fullscreen=1 variable was set in the popup function, increasing the size of the window and removing the borders. But then a second, contradictory set of options […]

[…] built because it wouldn't elegantly fit within some kernel interface is even greater. Particularly when it comes to highly flexible network solutions, the highly tuned network implementations built into modern kernels are inappropriate for our uses. We're looking for systems that break the rules, […]

[…] computer networks is their actual consistency—they're highly deterministic, and problems generally occur either consistently or not at all. Thus, the infuriating nature of testing for a bug that occurs only intermittently—once every two weeks, every 50,000 +/- 3,000 transactions, or so on. Such bugs can form the gamma-ray bursts of computer networks—supremely major events in the universe of the network,
[…] an extensive investigation by Caldera (who eventually bought DR-DOS), the information never would have seen the light of day. It would have been a perfect win.

Subtlety Will Get You Everywhere

The Microsoft case gives us excellent insight on the nature of what economically motivated […]

[…] DoxRoute 0.1, available at www.doxpara.com/tradecraft/doxroute and documented (for the first time) here, is a possible solution to this problem.

Designing the Nonexistent: The Network Card That Didn't Exist but Responded Anyway

As far as a network is concerned, routers inherently do three things:

■ Respond to ARP packets looking for a specific MAC address
■ Respond to Ping requests looking for a specific IP […]

[…] extern int opterr;

By now, you've probably noticed that almost all command-line apps on UNIX share a similar syntax—something like foo -X -y argument. This syntax for accepting options is standardized and handled by the getopt library. Very old platforms require you to add #include <getopt.h> to the beginning of your code to parse your options successfully. More modern standards put getopt as part of unistd.h: […]
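To round out that fragment, here is a minimal, generic example of the option-parsing pattern being described; the flags shown are invented for illustration and are not DoxRoute's actual options.

```c
/* getopt_sketch.c -- minimal example of the standard "foo -X -y argument"
 * option-parsing pattern handled by getopt(3).  Generic illustration; these
 * flags are placeholders, not DoxRoute's real option set.
 * Build: cc -o getopt_sketch getopt_sketch.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int opt, verbose = 0;
    char *router = NULL;

    /* "r:" takes an argument, "v" is a bare flag. */
    while ((opt = getopt(argc, argv, "r:v")) != -1) {
        switch (opt) {
        case 'r':
            router = optarg;   /* optarg points at the flag's argument */
            break;
        case 'v':
            verbose = 1;
            break;
        default:
            fprintf(stderr, "usage: %s [-v] [-r router_ip]\n", argv[0]);
            exit(1);
        }
    }

    printf("verbose=%d router=%s\n", verbose, router ? router : "(none)");
    return 0;
}
```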