Code Hacking 1-2

Chapter 1: Introduction

Overview

Large portions of the security community began their careers as part of the hacker underground. While this has been popularized through Hollywood with a "live fast, die young" image, the truth is far from this perfect picture of coolness. It is fair to say that the term hacker, now misappropriated by the media as a derisory label, meant something vastly different in its heyday. Hackers then, as now, were concerned with how things fit together, what makes things tick. Many early hackers concerned themselves with understanding the nature of the telephone system, which encouraged the development of "blue boxes" and war dialers such as ToneLoc. Public bulletin boards (such as Prestel) had security flaws exposed and various services disrupted. Ten years later, teenagers with the same mindset were "nuking" each other over IRC and discovering the inherent flaws in various implementations of Windows File Sharing. These teenagers of yesterday are now the security professionals of today. Back then, many used to spoof phone calls, hack bulletin boards, and trade illegal software, before the popularity of Internet services and the mass-marketing of the Web. Their experimentation, while sometimes costly to the industry, performed a security-conscious service that helped erode bug-ridden code and complacent attitudes. In fact, this service, while it has a malignant side, helped the growth of the computer industry and championed an era of software engineering excellence and intellect. During the course of learning about hacking, many aficionados soon discover that it actually covers a multitude of topics and technologies. There is a wealth of information about hacking online, and much can be gleaned from experimentation. In fact, experimentation is the only true guide to knowledge in this field.
To truly understand the nature of network security issues, there have to be far-reaching practices and tests, which means that we have to mimic complex networking environments to understand the plausibility of various attacks and defenses. Enterprise-level security is a complex and difficult-to-manage subject. On one hand, systems must be protected from threat; on the other, there is always the measure of risk versus budget that ultimately decides what resources can be afforded to any particular item. Prevention is undoubtedly better than cure, and a good security policy should focus on this. This is not to say that it shouldn't also put post-attack and compromise procedures in place. Keeping systems patched and protected is a never-ending task, and it is very difficult not to let something slip under the radar. We just checked the Witness Security vulnerability database (VDB) and found that 12 vulnerabilities were registered yesterday. These range from an Internet Explorer cross-frame security restrictions bypass to issues with a commercial intrusion detection device. Without a system such as the Witness vulnerability alerting engine, how is a busy sysadmin going to find the vulnerabilities that matter to his systems? If we look a little further back in the database, we can see 192 vulnerabilities in the last 28 days. Companies like Microsoft are now starting to realize that they need to keep their customers informed of security issues; however, for every company that warns its clients, there are 10 that don't. That includes the manufacturer of the IDS system mentioned earlier. The vulnerability was reported by an independent security company and has circulated around the security community, making it onto the Witness Security VDB on the way, but the manufacturer has so far failed to inform its customers.
This makes it very difficult for sysadmins, who already have full-time jobs without having to monitor 20 security sites in case a vulnerability comes up that relates to one of their systems. That's why we've included a free trial subscription to Witness Security's vulnerability alerting engine as part of the purchase price of the book. Your trial subscription can be obtained at http://www.witness-security.com/protection/trial/. It's worth considering for a moment how many public announcements we receive each year regarding virus or worm threats. The public profile of these, and the sheer volume and chaos they cause, should be enough for every system administrator and programmer to get security savvy. Indeed, if the FBI can get security savvy and set up computer crime departments, why can't organizations dedicate at least scant resources to security? In an age in which vulnerabilities breed like rabbits, why is it that many IT and system support staff know very little about security? There is a multitude of sociological reasons for this, including the growth, specialization, and diversity of our field in an inordinately short space of time, coupled with business-driven growth so rapid that it often treats security as an afterthought (until it begins to cost businesses vast amounts of money); we will pay only scant attention to these in this book. This book is about facts, not suppositions; its audience is programmers and system support staff who want a reference guide to aid general security knowledge and to understand how many of the technologies involved in attacking and defending a network actually work. We don't have to be gurus to understand the content in these pages. In essence, it has been written for an intermediate level of skill. Code is presented throughout this book and explained in its entirety.
In some cases, the authors wrote large tracts of code to illustrate examples; in other cases, the code of Open Source projects is referenced. The point here is simple: there is a wealth of fantastic security material out there that needs to be investigated, and we felt that the aim of this book was to introduce you, the reader, to as many resources as possible. The first thing to understand about building a secure environment is the nature of products, exploits, and attacks. This presents itself as a very dynamic model, which means that we also need to be dynamic: update our skills, keep up to date with the latest security bulletins and patches, and understand the consequences and knock-on effects of every successful intrusion into our systems. One assumption made by security analysts is that attack is the best form of defense (certainly Sun Tzu would agree). This doesn't mean that we should constantly be paranoid and personally check every single connection that comes into our network. It does mean that the network should be attacked quite frequently, from outside and inside, to ensure a high level of security. The security industry refers to such procedures as penetration testing (or simply pen testing). Many things will be apparent from pen testing; it will greatly improve the processes surrounding network intrusions and aid in the support and understanding of all aspects of our systems. This should cut both ways: we should test for exploitable code in generic software such as operating systems and Web servers, and we should regularly review our own code and test it for exploitable issues. Frequently, it's useful to bring in an impartial external party who hasn't been involved in either setting up the network infrastructure or writing the various Internet or intranet applications used by the organization. Many assumptions we make in this first chapter are consistent throughout the book.
We need to be aware that sloppy code has potential consequences, not just for the organizations and manufacturers, but for users as well. In fact, the losses incurred through bug-related attacks each year are staggering. Although we drive home the consequences of sloppy coding in enterprise applications, this book doesn't attempt to cover writing secure code; the lesson from here on should be what kinds of vulnerabilities a predator might prey on. This book contains code samples in a variety of programming languages, the majority written for Microsoft® Windows®. We felt that there was a wealth of information on security and hacking tools (and code) for Linux, but less so for Windows. Generally, most of the developers we've worked with are more familiar with Windows anyway (although we provide references to Linux tools and Web sites on the companion CD-ROM). To truly become proficient at exploit-based programming, we have to have a thorough understanding of operating systems, C, and Assembler. However, if we don't want to take it that far, we can rely on the work of an ever-expanding Internet security community that provides many of these exploits for us. We can then begin to write test software and understand the types of bugs to look for. It is for this reason that the majority of code in this book (bar buffer overflows and shell code programming) is written step by step in high-level languages such as C#, Perl, and C++. During the course of reading about the lives and times of various hackers and script kiddies who've had an impact on the scene over the last 10 years, it becomes clear that the hacker operating system of choice is currently Linux. The tools available on Linux are currently a clear cut above those available on Windows, and Linux exposes a lower, more granular level of control that is abstracted away from the Windows user.
Even so, all the tools referred to in this book are for Microsoft Windows (bar Nessus, iptables, and snort-inline). If any readers have tried to write NDIS device drivers for Windows for firewalls or network traffic analysis, then the hackers' choice of operating system becomes self-evident: the complexity of advanced Windows tool development far outweighs that of writing equivalent tools for Linux, where many free tools are already available.

On the CD

Included on the companion CD-ROM is a vulnerability scanner that has been written specifically for testing exploits. The scanner is a set of related networking and security tools that are used to illustrate points in the book, and more importantly to illustrate how various common networking and security utilities work. It can also be extended (which is our hope) and used to test network security. The scanner itself was written for this book, but we enjoyed writing it so much that we've decided to continue working on it, developing exploit code and the repertoire of networking tools (and refining the existing code). We urge you to do the same (if you're partial to writing code), since re-implementing new exploits is the best way to understand their workings. The scanner is illustrative of many of the principles demonstrated in this book, although within these pages we'll also cover the usage of some absolutely fantastic scanners and tools that are pretty much standards within the security community. The scanner was also written for the authors' use, which entailed fast, full connect scans that had to be done against a single IP address within a few seconds. Equivalent scans using NMap and Shadow Security Scanner use stealth modes (which we'll discuss in the next chapter) and take longer to produce a set of results.
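The full connect scan the authors describe is conceptually simple: attempt a complete TCP three-way handshake on each candidate port and note which ones accept. The sketch below is a generic illustration of that idea, not the book's scanner; the function name and parameters are our own, and Python is used here for brevity rather than the C#/C++ used in the book.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def connect_scan(host, ports, timeout=0.5):
    """Full TCP connect scan: attempt a complete three-way handshake per port.

    Unlike stealth (half-open SYN) scans, a connect scan needs no raw
    sockets, so it runs unprivileged; scanning ports concurrently is what
    makes it fast against a single host.
    """
    def probe(port):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the handshake succeeds (port open)
            return port if s.connect_ex((host, port)) == 0 else None

    with ThreadPoolExecutor(max_workers=64) as pool:
        return sorted(p for p in pool.map(probe, ports) if p is not None)
```

The trade-off against a stealth scan is visibility: a full connect completes the handshake, so it appears in the target's connection logs, but it produces results quickly and reliably.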
Note: With the advent of the Internet, security pitfalls have had to be understood by a segment of the IT industry to ensure that up-to-date information on the latest vulnerabilities is brought to the attention of the public at large. Developers for the most part have been forced to write secure code (especially within the Open Source community, where well-established products can be code-checked for security excellence). We are coming into a new age now in which applications will communicate with each other via Web services, meaning that firewalls can continue to block non-HTTP traffic at the border of an organization while maintaining the benefits of a distributed environment across the Internet. Organizations are beginning to be held accountable through various national and state laws that make preventable security breaches the onus of the hacked company. This means that there are now severe consequences to leaving unencrypted credit card information in a database (we won't mention the numerous firms that have stored user information in plain text). Even in these fully networked days, major suppliers make mistakes, and unless our infrastructure has been thoroughly (and continually) tested, we will probably be exposed. Take the SQL Slammer worm as an example. This exploited a known vulnerability in Microsoft's SQL Server 2000 (known for over six months, with a patch available that too many people hadn't applied) to spread across the Internet like wildfire. With a correctly configured firewall, with all but essential ports closed, it's very unlikely that the required SQL Server Resolution Service port (UDP 1434) would be open; therefore, any chance of attack would be negated. The worm, despite not having a destructive payload, brought many sites to a standstill. It consumed so much processing power in its attempts to spread, and so clogged networks as it searched for new hosts to infect, that it soon had a major international impact.
At the time, a client of ours panicked when the network sniffer on their internal network picked up a lot of traffic on port 1433. If they'd taken the time to look through any historic logs, they would have seen this was fairly common and related to general, internal SQL Server administration. Instead, SQL Server 2000 Service Pack 3 was rapidly applied, as it contained a patch for the exploit. It also contained MDAC 2.7 (Microsoft Data Access Components), and this upgrade brought all the other application software to a halt. They had taken a working site and brought it to its knees in 10 minutes. What an attack. It took half a day to find out which part of the service pack had broken the application, and then the rest of the day to decide what to do and do it (roll back to MDAC 2.6, leaving the rest of the service pack in place). This was not a good day for anyone concerned, and there are some important lessons in it. These are the real-world consequences of failing to have a security strategy that is proactive rather than reactive. This chapter is concerned with impressing the need for security: putting the threats in context and understanding what could happen if you're breached. We hear about hack attacks weekly and must ensure that we're not one of the victims; by fully understanding the consequences of being hacked, we can find the motivation to prevent it. This book is built as a course, with the aim of progressing from fundamentals to more complicated issues. Let's quickly summarize the content of this book. Chapter 2, "Networking," begins with a treatise on networks and how LANs and WANs work. It introduces TCP/IP in reasonable depth, which will be used throughout this book.
Readers with good networking knowledge can skim this chapter (although there is a wealth of information on networking exploits that is fairly fundamental to understand, since it illustrates both the underlying security deficiencies in protocols such as ARP and TCP and the need to ensure that any one implementation is bug free). Chapter 3, "Tools of the Trade," introduces networking tools and demonstrates how they can be used to secure a network. It also tackles how these tools work "under the hood," with code illustrations. This chapter at large represents our core toolbox and explains aspects of footprinting and enumerating a target; there are some explanations of how port scanning works, referencing a selection of port scanning tools. This chapter is divided into a very practical approach to building a port scanner and related essential networking tools, and a theoretical appreciation of how these networking tools work. Chapter 4, "Encryption and Password Cracking," takes a serious look at encryption, authentication, and authorization, discussing some of the common forms of encryption and password cracking techniques. This chapter also introduces a toolset that can be used to analyze passwords on a network through network sniffing and password hash theft. The majority of techniques in this chapter can be applied in exactly the same way on any operating system (even though the target operating system is Windows). Chapter 5, "Hacking the Web," delves into Web hacking, breaking apart common Web vulnerabilities and introducing buffer overflows and how they can be compromised by an intruder. It also illustrates various client-side vulnerabilities that occur with cross-site scripting, SQL injection attacks, ActiveX® Controls, and Java. Chapter 6, "Cracks, Hacks and Counterattacks," represents a compendium of hack attacks and hacking-related tools.
This chapter is very comprehensive and provides an introduction to many forms of service hacking, including FTP, Windows Media Player, Web browser, Web server, and ARP poisoning. We'll also introduce vulnerability scanning in this chapter and describe the framework used for the vulnerability scanner in this book, as well as other fantastic tool standards such as Nessus and Nikto. This chapter also analyzes the use of buffer overflows and shell code exploits, which are introduced in the context of shatter attacks involving techniques to exploit Windows through code injection. Firewalls are covered in Chapter 7, "Firewalls," which contains information on how to configure firewalls, good firewall policy, configuration errors, firewall hacks, and more. Much of this chapter is based on examples of iptables usage; iptables is an Open Source firewall that is bundled with Linux. While Chapter 7 discusses "active defense," Chapter 8, "Passive Defense," moves on to "passive defense," introducing the Intrusion Detection System (IDS) and paying particular attention to the Open Source product Snort. We cover how Snort works, how to write code plug-ins for Snort, and the ease with which we can add new rules to cover newly discovered signatures. We also cover honeynets, paying particular attention to how they can be used to "track a hacker" and gather statistics on the latest techniques and the kinds of intruders that we could face. Chapter 9, "Wireless Networking," discusses the uses and problems involved with wireless LANs, introducing a range of tools and techniques that can be used to test the integrity of a wireless network. We also chronicle the problems involved with the WEP protocol and how we can configure wireless LANs to ensure maximum protection from snooping and unauthorized access.
The rest of this chapter briefly summarizes some hacking history and mentions one or two of the key players during the "early years." Since we want to highlight the threat here, some of the authors' personal experiences, along with those of colleagues, are chronicled to provide an understanding of the security consequences.

Consequences of Intrusion

To be prepared for, or even to understand why we should make provisions against, intrusive access to a system, we need to understand the consequences of an intrusion caused by bad password policy and an "open" firewall policy. The following account is a true story of a successful hack attack that the authors witnessed first-hand. It involves the intentional destruction of a Web application that was serving continuously updated information to the wider financial community. It will give you an indication of how fragile applications actually are if security is not applied correctly by developers and support staff. The organization in question is a multinational financial institution operating a Web site that displays daily updates of clients' financial positions. The site was accessible via the Internet, and many of the clients would regularly access it to check daily stock portfolio positions. A prestigious UK Internet service provider hosted the site; they maintained the site (and the Web application), including infrastructure (this included DNS), policies, and access control. One morning, the support staff arrived at work to find the home page changed to "You have been hacked SucKerz!!" Needless to say, this was a problem that none of them were capable of dealing with. There were no policies to deal with this type of situation; in fact, the day-to-day running of the IT infrastructure didn't involve a security practice or audit of any kind. Blind panic set in. To summarize the situation: nobody knew what to do, nobody knew who was responsible for addressing the issue, and nobody even knew who was capable of deciding what to do.
Initially, the internal response was to take all the application and Web server(s) offline (a denial-of-service (DoS) attack that any hacker would be proud of). This panic response was followed up by an external forensic analysis of the attack by an independent security company. With no previous contractual agreement with the security company, the cost of analysis and policy recommendation was fairly high. After analyzing the log files for close to a week, they determined that the hacker had gained access to the site by using brute-force login attempts. It was determined that the hacker gained access by connecting to a share over SMB and correctly guessing the username and password (in fact, the account the hacker used was the built-in Administrator account; on this basis, the hacker had 50% of the information needed to gain access as the Administrator simply because this is a well-known username). Once the brute-force attack had identified the Administrator password, the hacker had complete control of the system; in fact, all he had to do to update the Web site was replace a page on the file system, which he could reach through the c$ admin share. (The terminology will become clear later; this example, however, should illustrate the fragility and exposure of certain systems and the consequences of not understanding security and having either a too-tight or too-loose security policy.) Defacing the Web site was only the start of the hacker's foray into the system. From there, he moved to other servers and applications, exploiting known weaknesses and default shares and passwords. It was after this that the environment was taken offline by the support staff, and within a couple of days a Web redirection was added displaying an "offline for maintenance" message.
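The mechanics of such a brute-force attack are trivially simple, which is exactly why a well-known username like Administrator is so dangerous. The sketch below is a generic illustration against a hypothetical check_login callback; it does not speak any real SMB protocol, and all names in it are our own.

```python
def brute_force(username, wordlist, check_login):
    """Try each candidate password for a known username.

    Knowing the username in advance (e.g., the built-in Administrator
    account) hands the attacker half the credentials; only the password
    remains to be guessed, one dictionary word at a time.
    """
    for candidate in wordlist:
        if check_login(username, candidate):
            return candidate  # password found
    return None  # wordlist exhausted
```

Against a service with no lockout policy and a weak password, a dictionary of a few thousand common words is often enough, which is why the forensic analysis above took so little imagination to reconstruct.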
The organization then decided to rehost the site, pay for a dedicated environment, and maintain firewall policies and other security infrastructure using in-house "expertise." Apart from the phenomenal cost associated with this overreaction (which ran into millions of dollars, although for the most part this was based on a hardware infrastructure upgrade), the staffing, administration, operating policies, and paperwork for the new environment ended up slowing development tenfold. One of the first policies put in place as a result of the successful hacking of the site was that accounts would be locked out after three unsuccessful password attempts. This would seem like a good policy initially, since it would stop a brute-force attack, but it created a multitude of other problems. Essentially, a DoS attack could now take place, since all a hacker would need to do is enumerate usernames and guess incorrect passwords to cripple the system. The application in question was deployed in Microsoft® Transaction Server 2.0, which would run COM libraries impersonating NT users. On a number of occasions, the accounts were locked out due to incorrect password guessing (not necessarily by a hacker; also by support or development staff who had typed the password incorrectly at a terminal services session login prompt). One of the reasons the in-house staff continually locked out the accounts and crippled the system was that the password policy entailed creating extremely long, complex, and unmemorable passwords; for example, d67h$$75^#hd~!8#. The story illustrates one thing: if the worst happens and a hacking attempt succeeds for whatever reason, corporations need a carefully planned response. A panic response simply cripples productivity and introduces large amounts of financial overhead.
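The way a lockout policy turns into a DoS vector can be made concrete with a toy model. The class below is a hypothetical three-strikes policy of our own invention (the names and return values are illustrative, not any Windows API): an attacker who merely knows valid usernames can lock every account with deliberately wrong guesses, denying the legitimate users even when they supply the correct password.

```python
class LockoutPolicy:
    """Toy model of a three-strikes account lockout policy."""
    MAX_ATTEMPTS = 3

    def __init__(self, passwords):
        self.passwords = passwords                    # username -> correct password
        self.failures = {u: 0 for u in passwords}     # consecutive failed attempts
        self.locked = set()

    def login(self, username, password):
        if username in self.locked:
            return "locked"       # even the correct password is refused now
        if self.passwords.get(username) == password:
            self.failures[username] = 0
            return "ok"
        self.failures[username] += 1
        if self.failures[username] >= self.MAX_ATTEMPTS:
            self.locked.add(username)
        return "denied"
```

Three garbage guesses per enumerated username is all it takes; the policy meant to stop brute forcing hands the attacker a far cheaper attack.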
There is no such thing as a foolproof network; as we shall see, we can have all the security in the world to stop external attacks on a corporate LAN/WAN, but no amount of network security can protect against individuals who obtain insider knowledge of networks, applications, passwords, and weak links in the chain. A standard response when asking a security team about audits and policies will be to regurgitate firewall policies. However, in many organizations, it might go unnoticed that an individual somehow managed to get a piece of software onto the LAN that reprogrammed the firewall from the inside and allowed access to any potential hacker. (Although this is a somewhat far-fetched extreme, since firewall access will no doubt be very controlled and secure (hopefully), it does illustrate the potential consequences of unknown intrusions.) A common extension of this conceptual threat is for Trojans to appear on a network, fed by unsuspecting users downloading software from the Internet. When this occurs, one potential consequence of the Trojan would be to punch a hole through the firewall from the inside out, since a common default firewall policy is to allow outgoing connections on any port. This being the case, the message to take away from this story is that we can never be wholly prepared for a security breach, but we should have good policies in place to analyze what has occurred and recover gracefully.

Indirect Threats

Ask 10 different experts in different fields of security and you'll get 10 different perspectives on a good security model. While this book is concerned with technical threats, which are not exclusively found in a networked environment (although this book distinctly focuses on that area), we should be mindful of indirect threats that can allow onsite access to hardware and can result in the same issues as a remote intrusion attempt. We specifically mean physical security.
No amount of network security will be enough to prevent intrusions if we operate an "open door" policy (the front door, that is). Good network security should be coupled with even tighter physical security (since most attacks within organizations occur from the inside, this point is even more important to grasp). It is best illustrated, as always, with an example. Recently, we were asked to build an application which, for various reasons, had to run on an interactive console. Once the application was built, we were accompanied to the server by a support engineer who had sole administrative access to the machine in question. When the installation procedure finished, the support engineer was just about to lock the machine to stop unauthorized interactive access when we told him that this would stop the application. The engineer decided it would be okay to leave the machine unlocked and to remove the idle-timeout lock that would otherwise engage when there was no user peripheral input (the default for any C2-compliant Windows machine, i.e., the Windows NT family). A couple of hours later, we decided it was worth attempting to gain access to the room, which needed a special card key to enter. After waiting outside the door for about 20 minutes, somebody came, and we decided to look busy while cussing the card reader and looking very angry. Another engineer with access to the room let us in after a laugh and a joke about the cardkeys; by this time, the engineer thought that we were from an external consultancy and that they had messed up our card key access. As we were all engineers, he could relate to the problems we were having and so let us into the room with very few questions asked. We, of course, promptly gained access to the interactive login and changed the Administrator password, locking out the support department and removing all other logins from the Administrators and Power Users groups.
Unhappy with the state of the physical security, we made a full report to the support department pointing out the weaknesses, demonstrating what had been accomplished with relative ease, and suggesting ways to improve the procedures to disallow access to the room to anyone who isn't authorized, full stop. The story is a classic case of the need for vigilance to avoid giving unauthorized individuals physical access to resources. This should be a mantra; unfortunately, as many hackers can attest, people are sometimes the weak links in the chain and can be convinced to break protocol in a number of ways. By portraying ourselves as something we are not, or by asserting some kind of authority, it is possible to "trick" authorized individuals into giving out information that can prove helpful in an attack. These techniques of gaining access to resources by sizing up authorized employees and conning information or access from them continue to work, since many individuals don't see the harm in certain actions. For example, the first engineer thought access to the room was completely prohibited to anyone other than engineers and that no engineer would misuse his position. The second engineer was fooled into thinking that we were engineers because of a few terms we used and some name-dropping of department heads, server names, and so forth. These techniques are called "social engineering" and were recently popularized by hacker Kevin Mitnick in an attempt to educate security policymakers about the types of attacks they can expect from individuals determined to gain access to restricted resources.
Corporate security needs to be as stringent (but not debilitating) as national security and other forms of security, such as network provider security. At the lowest level, cables should reside in gas pipes so that if anybody tried to access the cabling in a data network (i.e., fiber-optic cable), the gas pressure would change and alert the provider that there was an attempted breach in the network; this should apply to core networks, not access networks, as the associated cost would render the service uneconomic for the consumer. We can see and apply this philosophy to everyday life. How many of us have used a credit card so many times that the signature has been rubbed off, and yet when we come to pay for something, it's fairly easy to get away without being challenged on the signature we write on the card receipt? Our fast-paced lives have lulled many of us into a false sense of security, so much so that very few sales attendants would question that we were the owner of the card, yet to perpetrate fraud in these circumstances would be relatively easy. These things are taken into account daily by the financial services, insurance, and banking communities, which pay out millions of dollars in unrecoverable fraud costs. Newer security measures now require PINs to be processed interactively with cards in restaurants and shops, which mitigates the risk of stolen credit cards and signature fraud.

A Short Tour of Social Engineering

While not really the purpose of this book, social engineering is briefly covered here because it highlights the threat to security with far less ambiguity than hacking. We generally think of hackers as teenagers (with an exceptional knowledge of computing) who are plotting to bring the Internet to an evil, chaotic end.
In some cases, this is true; there are individuals who thrive on destruction for political (or, more commonly, economic) ends (e.g., the recent case of Mafiaboy and the DoS attacks that occurred Internet-wide). The recent war in Iraq is a testament to this. Some American Web sites were defaced by groups of Arab hackers, and similarly, Arabic news sites were attacked by American hackers. In situations such as these, hacking and perpetrating DoS attacks is a way to assert some type of authority and, by implication, make sure that the victim understands how powerful you are. Although this level of destruction (or attempted destruction) on the Internet is fairly common, it is far less common in organizations that have good network security policies involving firewalls, controlled VPN access, routing restrictions, and IDSs. (This example of the Iraq war is one of many, as Web site defacements occur for a variety of reasons and "causes," including religious, political, and revenge.) How, therefore, would an attacker breach an organization? Well, that really depends on the determination of the hacker. Many attacks are averted on the Internet simply because skilled hackers avoid trouble and decide not to break the law; in most cases, they certainly have the skills necessary to perpetrate the hack but will keep a low profile (and out of mischief). The dedicated hacker might take a political statement or financial gain a step too far and break the law. Sit with a "dedicated" hacker for a day and you'll find it a very boring experience, full of different attempts at gaining root access. Some hackers will scope out a site, footprinting and enumerating it for weeks before attempting an intrusion. (U.S. defense services have attested to the fact that footprinting and enumeration of their networks can take place over several months while a hacker checks a single port a week to see the services behind it.)
These are the hackers to watch, since they almost certainly have an arsenal of unpublished exploits (unpublished, that is, everywhere but in the computer underground). The caliber of hackers who don't struggle to break into systems is best shown in an interview conducted by hacker researcher Paul Taylor (from the book Hackers: Crime in the Digital Sublime). The subject in this case is Chris Goggans, a very skillful hacker involved in the Legion of Doom hack against the AT&T telephone network. Goggans claimed to be able to view and alter consumer credit and monitor funds, as well as snoop into telephone conversations. He also suggested that he was able to monitor data on any computer network and gain root access on any Sun Microsystems UNIX machine. Some individuals might use social engineering tactics to breach the security of a company, knowing that the network security is too tight to attack directly. This might sound like a fantasy from a spy film, but where millions of dollars' worth of financial gain is on the table, criminals will try a variety of tactics to achieve their goals. Consider this for a minute: you receive a phone call on a company mobile phone over the weekend from a network support engineer explaining that there has been a crash and all of your network files have become corrupted. The engineer claims that he needs your username and password; otherwise, when you come in Monday morning, you will find your machine rebuilt and all your files, including those on the network, gone. What do you do? All readers of this book will instantly know that under no circumstances should credentials be given to other people (especially over the phone). However, many users would not stop to consider that they (a) don't know the person on the phone, and (b) should never give their credentials to anyone, especially somebody from support who should have administrative access to all machines on the network and can therefore assume ownership of any files.
The other issue here is that the network files are regularly backed up, and, because it is a weekend, there would have been no changes to the files. Therefore, it would be easy for any support personnel to restore user files from the last good backup. (Unfortunately, this is rarely the case, and backups tend to fail without the support department noticing, causing several days' worth of lost data.) In this instance, users should be trained never to give passwords to anyone; this should be a mantra. Support personnel are not supposed to know plain-text passwords, since plain text is never stored on disk: Windows NT/2000/2003 stores a password hash, and all flavors of Unix store an encrypted (hashed) password. This is why support personnel ask the user for a new password when they reset user passwords, and always ensure that the user must change the new password at the next login, so that the only person who knows the password is the user. Social engineering attacks can be as simple as the one described previously or as complicated as a covert entry into a building, trawling through trash for passwords/names/IP addresses, and so forth. The point is that security, company policy, and training should expand on the threat and consequences of security breaches. Since in the preceding example a username and password are stolen, the endgame is a breach of computer security. Therefore, the link between different security and data departments should be strong; otherwise, all of the firewalls and checked software in the world will be unable to stop a would-be attacker.

The Changing World of Development

The tendency to network at any cost (for the performance benefits of parallel processing and information sharing) has created a great many paradigm shifts in the way in which systems are designed and developed.
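The reason support staff cannot recover a plain-text password can be sketched in a few lines: only a one-way, salted hash is stored, and a reset simply replaces that hash. The following Python sketch illustrates the principle only; it is not the actual NT or Unix scheme (NT stores an NTLM-style hash, Unix a salted crypt value).

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Store only a salted one-way hash, never the plain text."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Verification re-hashes the candidate; the original is never recovered."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == stored

# The support desk can replace (salt, stored) on a reset, but can never
# invert it back to the user's plain-text password.
salt, stored = hash_password("s3cret")
```

This is why a reset forces the user to choose a new password at next login: there is nothing on disk for an administrator to read back.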
The Internet was originally called the ARPANET by its Department of Defense creators, and standards such as TCP/IP that drive the Internet were born out of the ARPANET. Many other standards followed closely to enable other services that are regularly used by systems (e.g., SMTP for e-mail transfer). This level of agreement and standardization has enabled limitless communication between systems, since many of the same pieces of networking code within distributed systems can be ported from one machine to another with minimal hassle and can be used to send messages (via agreed protocols) that other machines on the network intrinsically understand. In fact, many architects and developers believe that we have gone full circle from the days of mainframes and dumb terminals. Developers regularly use the terms thin client and fat client, which refer to the two development paradigms. A thin client generally uses the Web browser to display Web pages from an application; the intelligence and body of the code reside on the server, where any data access, networking, or Remote Procedure Calls (RPC) occur. This is good for a variety of reasons. We can:

- Roll out code changes to a single place (i.e., the server).
- Secure access at a single point (i.e., the server).
- Avoid proprietary protocols and forms-based clients that have a higher associated development cost (i.e., developing intranet client-server systems).

Certainly, the first point alone could be enough to decide to write a thin client application, since the client Web browser is standard and therefore desktop client rollout can be avoided both initially and for subsequent updates. If bugs in the software are found, fixes can be rolled out to a single place: the server hosting the application. (In fact, there is a new tendency toward forms-based applications, so-called smart clients, that are sandboxed but contain fully functional windowed GUIs.
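The thin-client points above can be sketched with a toy example: all of the application logic lives in one server-side function, so a bug fix is deployed once to the server rather than to every desktop. The route and handler names here are invented for illustration.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_page(path):
    """All business logic lives server-side; the browser only displays HTML."""
    if path == "/balance":
        # Data access, networking, or RPC would happen here, on the server.
        return "<html><body>Balance: 100.00</body></html>"
    return "<html><body>Not found</body></html>"

class ThinClientHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render_page(self.path).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

# To deploy (fixing render_page() means redeploying only this one server):
# HTTPServer(("127.0.0.1", 8080), ThinClientHandler).serve_forever()
```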
Recently, the technology has made its way into mainstream development with the advent of Java applets using the Java Security Manager, and more recently in .NET using Code Access Security. ActiveX controls, the forerunner to this, were trusted by the local machine and could thus be used to maliciously control a machine or post back user data that the control shouldn't have had access to in the first place.) Sandboxing is a great implementation of the principle of least privilege. We have to ask ourselves, what does the application need? Does it need full access to the hard drive, Registry, or various services? If it does, then it has probably been badly designed from the top down. The principle of least privilege has now permeated all forms of application development. Does the client application need to persist state information? Over the Internet the information might be small, since such applications tend to have less of a business focus, so the client can be content with using small cookie files in an isolated area on disk. In an intranet environment, clients might be a bit fatter and forms based. Do they need absolute access to the filesystem? Almost certainly not; in fact, most applications would be content with access to isolated storage on the local hard drive (the Zones model used in the smart-client sandbox adopted by Internet Explorer illustrates this usage). Fat client applications tend to persist within business as a hangover from days gone by. They are generally more expensive to conceive and develop and require more intrusive maintenance. The financial community continues to use many complex fat client applications, which are slowly being phased out as traders move to Web-based systems. Web-based systems have been on the rise for many years now, since they can be built rather simply using RAD languages such as C#, VB, and Java and can be componentized to ensure that code reuse is maximized.
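The isolated-storage idea can be sketched as a simple guard: the application may only touch files under one sandbox directory. This is a much-simplified analogue of what the Java Security Manager or .NET Code Access Security enforces, and the directory name is illustrative.

```python
import os

# Hypothetical isolated-storage root for the sandboxed application.
SANDBOX = os.path.abspath("isolated_storage")

def resolve_in_sandbox(filename):
    """Allow file access only inside the sandbox directory (least privilege)."""
    target = os.path.abspath(os.path.join(SANDBOX, filename))
    # Reject path traversal such as "../../etc/passwd".
    if os.path.commonpath([SANDBOX, target]) != SANDBOX:
        raise PermissionError("access outside isolated storage: %s" % filename)
    return target

resolve_in_sandbox("cookies.txt")       # allowed: stays inside the sandbox
# resolve_in_sandbox("../../etc/passwd")  # raises PermissionError
```

The point of the sketch is the question it forces: the application gets a cookie-sized corner of the disk, not the filesystem, unless it can justify more.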
(Many third-party vendors support components that can be used in applications to enable developers to focus on the business problem. Later in the book, we look at trusting third-party code, which can be in the form of a COM component, a browser plug-in, an EJB, and so forth.) Implementing a third-party component can in itself be a risk, especially if a service is running as a higher-privileged user. Imagine a component that scans a directory for plug-in files. If we could arbitrarily write files to a host, then we could soon take advantage of the software by substituting a [...]

... The type member contains a code that defines the type of message we want to send. Echo Request is 8, so this value will always be populated in the type field for a ping application. The code member is related to the type (there can be many codes for a particular type); this is especially relevant for error-message replies as opposed to query (echo request) replies, where the code represents distinct ...

... importance of this book, we feel, is to show the fountain of knowledge that is out there on the Internet and in Open Source to help you come to grips with both hacking techniques and security techniques (two edges of a sword). It's worth considering the hacking timeline here (and the techniques). We should be very focused on the manner and intent of our testing; we should always set up tests in a lab environment ...

... networks (it wasn't the intention to breed a generation of code-literate hackers with this book) and adapt a good security and testing model to the network, we can work out ways to check the performance of an IDS or a honeynet (and a firewall). We can even stage an environment where we try to second-guess colleagues and look for some evidence of them hacking into our network. It allows us to gain skill and ...
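The type and code members discussed above are the first two bytes of the ICMP header. A minimal Python sketch of building an Echo Request (type 8, code 0) with the standard Internet checksum follows; the field layout is per RFC 792, but this is a simplified stand-in for the book's own ping code, not a copy of it.

```python
import struct

ICMP_ECHO_REQUEST = 8  # type member: 8 means Echo Request
ICMP_ECHO_CODE = 0     # code member: always 0 for echo queries

def checksum(data):
    """Internet checksum: ones' complement of the 16-bit word sum."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)  # fold carries back in
    total += total >> 16
    return ~total & 0xFFFF

def build_echo_request(ident, seq, payload=b"ping"):
    """Header layout: type (1), code (1), checksum (2), id (2), seq (2)."""
    header = struct.pack("!BBHHH", ICMP_ECHO_REQUEST, ICMP_ECHO_CODE,
                         0, ident, seq)
    csum = checksum(header + payload)
    return struct.pack("!BBHHH", ICMP_ECHO_REQUEST, ICMP_ECHO_CODE,
                       csum, ident, seq) + payload

pkt = build_echo_request(0x1234, 1)
```

Sending the packet would require a raw socket (and root privileges), which is why real ping tools run setuid or as administrator; a received packet is valid when the checksum recomputed over the whole message is zero.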
... obfuscation and attacks themselves are conceived. IP fragmentation has offered numerous other hacking opportunities over time, and there have been issues with certain IP implementations. The original Teardrop attack, which affected both Linux and Windows implementations of IP, exploited a weakness in the fragment reassembly code of both systems. The premise behind the attack was to send a fragmented datagram ... datagram, which is defined as 64K. A maximum length violation would have caused the buffer to overflow and crash the machine. In the ping code demonstrated in this section, fragmentation is considered a lower-level service provided by a network driver or kernel module, so our code wouldn't control the fragment size directly.

Ping Flooding

Ping flooding, like the "Ping of Death," can be considered a DoS attack ... of a smurf attack is simple and unique; simply filter all Echo Requests at the broadcast address of the network.

A Ping Example

On the CD: This section illustrates the code necessary to create a ping application. Throughout this book are code examples in a variety of different languages explaining the use of tool making (such as ping in this case) or the use of exploits posted by other people. The ping ...

... security as a high priority whenever we develop any production applications; however, it shouldn't be used to cripple or undermine the development process, as we saw in the opening story.

A Word on Hacking

Although the term hacking is used frequently throughout this book and we use the word hacker in the same context, we really refer to intrusion attempts and mean hacker in a very abstract sense. As the authors ...
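The Teardrop weakness described in this section came down to fragment bookkeeping: overlapping or over-long fragments were not rejected during reassembly. The sketch below shows the sanity checks a reassembler needs; it is a simplified model (offsets in bytes rather than the 8-byte units IP actually uses), with 65535 as the maximum IP datagram size.

```python
MAX_DATAGRAM = 65535  # maximum IP datagram length in bytes

def validate_fragments(fragments):
    """fragments: list of (offset, length) pairs, offsets in bytes.

    Returns True only if no fragment overlaps another and the reassembled
    datagram would not exceed the 64K maximum, i.e., the checks that
    Teardrop-era IP stacks got wrong.
    """
    end = 0
    for offset, length in sorted(fragments):
        if offset < end:           # overlapping fragment: Teardrop territory
            return False
        end = offset + length
    return end <= MAX_DATAGRAM     # oversized datagram: Ping of Death territory

validate_fragments([(0, 1480), (1480, 1480)])    # clean reassembly
validate_fragments([(0, 1480), (1000, 1480)])    # overlap, rejected
validate_fragments([(0, 65000), (65000, 1000)])  # exceeds 64K, rejected
```

A stack that performs these two checks before copying fragment data into the reassembly buffer cannot be crashed by either attack.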
... subscription service Altavista (http://www.altavista.net), which provides a wargames server to aid members in understanding software and service vulnerabilities.

A Word on Hackers

In understanding the nature of hacking, it is worth relaying a few stories from the history of hackers and their effect on society. The word hackers obtained a negative connotation from a lack of public understanding, driven primarily ... describe the pioneers in computing, to a virtual one-to-one correlation with a teenage criminal. Many sociologists who have studied this phenomenon distinguish between those teenagers who are criminals and have used hacking to further their criminal nature and those who make a foray into the world of better understanding and inadvertently get caught up in the computer underground. The word hackers, prior to the surge ... hackers used aliases or "handles" to perpetuate anonymity (Phrack and 2600 were electronic documents replicated throughout BBSs worldwide). The purpose of this section is not to regurgitate old stories of hacking triumphs but to confirm the severity of some of the incidents that occurred in the past. There is a great deal of information online about the exploits of various well-known hackers. These in themselves ...

... of skill. Code is presented throughout this book and explained in its entirety. In some cases, the authors wrote large tracts of code to illustrate examples; in other cases, the code of Open ... working on it, developing exploit code and the repertoire of networking tools (and refining the existing code). We urge you to do the same (if you're partial to writing code), since copying new exploits
