Software is in danger throughout its lifespan, and systems can be in danger both when operating and when not operating; for example, a laptop containing sensitive information could be stolen. Security, at both the individual software component and whole system levels, is about dealing with these dangers and preserving properties such as confidentiality, integrity, and availability in the face of attacks, mistakes, and mishaps. Protection needs are based on the risk or consequences resulting from the threatening dangers and the value of the items being protected. A rational approach to protection must anticipate dangers and provide defense, tolerance, and resilience measures commensurate with the potential damage to operations.
No system can be protected perfectly. Much commonly used software is far from vulnerability-free, and attacks based on social deception or physical means can happen at any time, as can accidents. Nonetheless, without confidence that security is adequate, one cannot – as the Soviets could not – rationally rely on software being secure or on dangers not becoming realities. The software must not only be secure; evidence must also exist to justify rational confidence that it is secure.
Much of this document is about producing software that can block or tolerate and then recover from attacks while sounding alarms and keeping worthwhile records – and the case for having confidence in it. Before continuing into the details, however, some fundamental concepts and principles need to be covered, and the legal and organizational context must be set. These topics are covered in Sections 3 and 4, respectively.
Appendix A. Social Engineering Attacks
The main categories of social engineering attacks are:
Spam – unsolicited bulk e-mail. Recipients who click links in spam messages may put themselves at risk of inadvertently downloading spyware, viruses, and other malware.
Phishing – the creation and use of fraudulent but legitimate-looking e-mails and Web sites to obtain Internet users’ identity, authentication, or financial information, or to trick users into doing something they would not normally do. In many cases, the perpetrators embed the illegitimate Web sites’ uniform resource locators (URLs) in spam in the hope that a curious recipient will click on those links and trigger the download of malware or initiate the phishing attack.
Pharming – the redirection of legitimate Web traffic (e.g., browser requests) to an illegitimate site for the purpose of obtaining private information. Pharming often uses Trojans, worms, or other virus technologies to attack the Internet browser's address bar so that the valid URL typed by the user is modified to that of the illegitimate Web site. Pharming may also exploit the Domain Name System (DNS) by causing it to resolve the legitimate host name to the illegitimate site’s IP address; this form of pharming is also known as “DNS cache poisoning”.
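To make the phishing pattern concrete, the following sketch shows one simple heuristic: flagging URLs whose hostname merely *embeds* a trusted name (a common phishing trick) rather than matching it. The trusted host names are hypothetical, and this is an illustrative filter, not production anti-phishing logic.

```python
from urllib.parse import urlparse

# Hypothetical allow-list of hosts the user actually trusts.
TRUSTED_HOSTS = {"bank.example.com"}

def looks_like_phish(url: str) -> bool:
    """Flag URLs whose hostname contains a trusted name without being it,
    e.g. https://bank.example.com.evil.net/ masquerading as the bank."""
    host = urlparse(url).hostname or ""
    if host in TRUSTED_HOSTS:
        return False  # exact match: the genuine site
    return any(trusted in host for trusted in TRUSTED_HOSTS)

print(looks_like_phish("https://bank.example.com/login"))           # False
print(looks_like_phish("https://bank.example.com.evil.net/login"))  # True
```

Real phishing defenses combine many such signals (homograph detection, reputation lists, certificate checks); this shows only the basic idea of name-embedding lures.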
Appendix B. Attacks Against Operational Systems
The typical attack on an operational system consists of the following steps:
Target Identification and Selection: The desirability of a target depends to a great extent on the attacker's intentions and on any evidence of the target's vulnerability that can be discovered through investigation of news reports, incident and vulnerability alerts, etc.
Reconnaissance: Can include technical means, such as scanning and enumeration, as well as social engineering and “dumpster diving” to discover passwords, and investigation of “open source intelligence” using DNS lookups, Web searches, etc. to discover the characteristics of the system being attacked, and particularly to pinpoint any potentially exploitable vulnerabilities.
Gaining access: Exploits the attack vectors and vulnerabilities discovered during reconnaissance.
Maintaining access: Often involves escalation of privilege in order to create accounts and/or assume a trusted role. May also involve planting of rootkits, back doors, or Trojan horses. Depending on the attacker's goal, maintaining access may not be necessary.
Covering tracks: May include hiding, damaging, or deleting log and audit files and other data (temporary files, caches) that would indicate the attacker's presence, altering the system's output to the operator/administrator console, and exploiting covert channels.
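The "covering tracks" step is why audit records need tamper evidence. One standard countermeasure is a hash-chained log, sketched minimally below: each digest covers the previous digest plus the new entry, so deleting or altering any earlier entry invalidates every later digest. The entry format is invented for illustration.

```python
import hashlib

def chain_entries(entries):
    """Return the hash chain over a list of log entries."""
    digest = b"\x00" * 32  # fixed genesis value
    chain = []
    for entry in entries:
        # Each link binds the entry to everything that came before it.
        digest = hashlib.sha256(digest + entry.encode()).digest()
        chain.append(digest.hex())
    return chain

def verify(entries, chain):
    """Recompute the chain and compare against the stored digests."""
    return chain_entries(entries) == chain

log = ["login alice", "sudo alice", "logout alice"]
chain = chain_entries(log)
print(verify(log, chain))                     # True
tampered = ["login alice", "logout alice"]    # attacker deletes an entry
print(verify(tampered, chain))                # False
```

In practice the chain digests would be shipped to a separate, write-once log host so an attacker who owns the compromised machine cannot rewrite both the entries and the chain.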
Attacks and malicious behavior may be undertaken not only by outsiders [Berg 2005, Chapter 5] but also by insiders, including authorized users. The specific techniques used to access systems change on a routine basis. Once a vulnerability has been patched, attackers must use a different technique to attack the system. Nevertheless, these attacks can be generalized into the following types:
Attacks performed by an unauthorized attacker:
Eavesdrops on, or captures, data being transferred across a network.
Gains unauthorized access to information or resources by impersonating an authorized user.
Compromises the integrity of information by its unauthorized modification or destruction.
Performs unauthorized actions resulting in an undetected compromise of assets.
Observes an entity during multiple uses of resources or services and links these uses to deduce undisclosed information.
Observes a user's legitimate use of a resource or service when that user wishes that use to remain private.
Attacks performed by an authorized user:
Accesses information or resources without permission from the person who owns, or is responsible for, them.
Abuses authorized actions, or unintentionally performs them, resulting in an undetected compromise of assets.
Consumes shared resources and compromises the ability of other authorized users to access or use those resources.
Intentionally or accidentally observes stored information that he or she is not authorized to see.
Intentionally or accidentally transmits sensitive information to users not authorized to see it.
Participates in the transfer of information (either as originator or recipient) and then subsequently denies having done so.
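Several of the attack types above (eavesdropping, impersonation, unauthorized modification) are countered in practice by message authentication. The following sketch uses Python's standard hmac module with a hypothetical pre-shared key; a real deployment would use negotiated keys and an authenticated channel protocol such as TLS.

```python
import hashlib
import hmac

KEY = b"shared-secret-key"  # illustrative pre-shared key, not a real secret

def tag(message: bytes) -> str:
    """Sender attaches an HMAC so the receiver can detect forgery
    or unauthorized modification of the message in transit."""
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()

def accept(message: bytes, received_tag: str) -> bool:
    # compare_digest avoids leaking match length via timing.
    return hmac.compare_digest(tag(message), received_tag)

msg = b"transfer 100 to account 42"
t = tag(msg)
print(accept(msg, t))                            # True
print(accept(b"transfer 900 to account 13", t))  # False: modified in transit
```

An attacker who intercepts the message but lacks the key cannot produce a valid tag for an altered message, which addresses the modification and impersonation entries in the list above (though not eavesdropping, which requires encryption).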
Users or operators with powerful privileges can be especially dangerous. Administrators or other privileged users can compromise assets through careless, willfully negligent, or even hostile actions. Finally, in certain situations a physical attack may compromise security (e.g., breaking and entering, theft of media, physically tapping cables).
Each of the preceding technical events may result in adverse security-related outcomes, with consequences ranging in severity from the annoying to the severe.
A number of other common attack techniques are described in Appendix C of Security in the Software Lifecycle, which can be downloaded from the DHS BuildSecurityIn Web portal.
Purely physical attacks can be devastating to an organization. Simply cutting critical cables, stealing computers containing critical information, or stealing or destroying the only copies of critical information could potentially ruin an organization. Physical attacks require very little skill to accomplish and can simply be the result of a poorly planned backhoe operation or other simple mistake. Although it is impossible to predict and defend against all possible physical attacks, proper planning such as off-site backups and redundancy can mitigate many of the consequences or risks posed by physical attacks.
Appendix C. Unintentional Events that Can Threaten Software
Operational software may be made vulnerable by a number of unintentional, non-malicious events. Some of these events threaten the availability of software; however, because they are unintentional, they do not in themselves constitute a threat to the software’s security. This is because availability is correctly categorized as a security property only when:
The compromise of availability is intentional, i.e., the result of a denial of service attack; or
The compromise of availability leaves the software (or system) vulnerable to compromise of any of its other security properties (for example, denial of service in a software component relied on to validate code signatures could leave the software’s integrity vulnerable to compromise through insertion of unsigned malicious code).
[MoD DefStan 00-56 Part 2/3 2004, page 31] lists a number of unintentional events that may threaten the security of operational software. These include:
Systematic and random failures;
Credible failures arising from normal and abnormal use in all operational situations;
Scenarios, including consequential credible failures (accident sequences);
Predictable misuse and erroneous operation;
Faulty interactions between systems, sub-systems, or components;
Failure in the software operation environment (e.g., operating system failure);
Failures arising from procedural, managerial, human factors, and ergonomics-related activities.
In addition, there are physical events that can result in hardware failures (and, by extension, software failures); other physical events may render the hardware unstable, which can cause common mode failures in operational software, or may expose software in development to physical access by unauthorized persons. Such physical events include:
Natural disasters, e.g., hurricanes, earthquakes;
Failure in the physical operating environment (hardware failures, power failures);
Mechanical, electro-magnetic, and thermal energy and emissions;
Explosions and other kinetic events (kinetic events are those that involve movement of the hardware on which software operates);
Chemical, nuclear, or biological substance damage to the hardware or network;
Unsafe storage, maintenance, transportation, distribution, or disposal of physical media containing the software.
On analysis, many of these events may not result in a security requirement, but some may. Examples of the effects that must usually be considered include:
Sudden physical accessibility to the systems or media containing the software may increase attackers' ability to obtain a copy of the software’s source code (in order to study it, or to insert malicious code and produce a subverted version of the software) or of its binary (with the intention of reverse engineering it).
Three additional unintentional event categories may have the same effects on the software as malicious code attacks:
Unintentional software defects: these can have the same effects as malware, and are currently the most common source of vulnerabilities
Intentional extra functionality: can provide additional paths of attack, defects and vulnerabilities, or surprises – particularly unused or unadvertised functionality
Easter eggs: code placed in software for the amusement of its developers or users
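A hypothetical illustration of how leftover "extra functionality" becomes an attack path: a development-time authentication bypass (the flag name and account store are invented for this sketch) that ships to production as unadvertised functionality.

```python
import os

def authenticate(user: str, password: str, accounts: dict) -> bool:
    """Normal path: check credentials against the account store."""
    if accounts.get(user) == password:
        return True
    # Leftover diagnostic path, intended for development only.
    # Shipped to production, it is an unadvertised attack vector:
    # anyone who can set this environment variable logs in as anyone.
    if os.environ.get("APP_DEBUG_BYPASS") == "1":
        return True
    return False

accounts = {"alice": "s3cret"}
print(authenticate("alice", "s3cret", accounts))   # True: normal path
os.environ["APP_DEBUG_BYPASS"] = "1"
print(authenticate("mallory", "guess", accounts))  # True: the extra path
```

The defect is not in the intended functionality at all; it is the extra, unused path that security review and testing must deliberately hunt for, because normal functional testing never exercises it.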
Protecting against, or reacting to, all the events listed above will seldom fall to a single piece of software, but any of them could be relevant to a given product. Nevertheless, the list provides insight into the kinds of compromising events that may occur.