Software and the data it handles face numerous sources and types of damage and risk. This section provides an introduction and the background needed to understand these threats and risks, which underlie the need for software security processes and practices expressly intended to produce secure software. The knowledge identified in this section is relevant to persons developing, sustaining, and acquiring (more) secure software.
The definition and subtleties of security will be explored at length in the following sections. To begin understanding the dangers being addressed, the reader should recall that, as covered in the Introduction, security is often spoken of as a composite of three attributes – confidentiality, integrity, and availability.
Probes as a prelude to attacks are continually increasing. Systems in large organizations such as the Department of Defense (DoD) and Microsoft are probed several hundred thousand times per month. Actual attacks are increasing as well. Many attackers exploit already identified vulnerabilities – often as soon as they can compare the old versions of the software to the fixed (patched) versions and analyze what changed. They can then attack any unpatched copies of the software. The time between the announcement of a vulnerability and attempted exploits of it has shrunk from months to a few days, if even that long.1 Some vulnerabilities are even exploited as “zero-day,” meaning that the exploit appears before the vulnerability is formally disclosed. Attacks have also shifted from primarily targeting widely used software from major vendors to more frequently targeting Web applications.2 Network traffic to and from Web applications bypasses many traditional network security protections—even though the Web application interfaces directly with an organization’s databases or other internal systems.
The amount of malicious software in the wild, spam, phishing, and spyware (all of which are defined in Section 2.4) is increasing, draining resources and creating the potential for identity theft and loss of sensitive information.3
Though the effects of attacks on software security can range from irritating to devastating, no accurate measurements exist of the national or worldwide costs of an attack. A Gartner analyst, Avivah Litan, testified about the costs of identity theft at a Senate hearing related to the Department of Veterans Affairs loss of 26.5 million veteran identities in May 2006. According to Litan, “a company with at least 10,000 accounts can spend, in the first year, as little as $6 per customer account for just data encryption, or as much as $16 per customer account for data encryption, host-based intrusion prevention, and strong security audits combined.” In contrast, Litan said that companies can spend “…at least $90 per customer account when data is compromised or exposed during a breach.”4 The identity theft at the Department of Veterans Affairs was not the result of a software security breach—but many identity thefts are. Regardless, many organizations are starting to realize that the cost of a security breach can far outweigh the cost of security. According to a 2003 CERT/CC report on incident and vulnerability trends, attackers can include teenage intruders, industrial spies, foreign governments, criminals, and insiders.5 Attacks are also becoming more sophisticated, targeting specific organizations. Security intelligence experts believe that many of these sophisticated and targeted attacks are being carried out by organized crime and government espionage operations.6 To truly understand the repercussions of inadequate software security, some example incidents are provided below.
The earliest known exploitation of a buffer overflow, a common software vulnerability, was in 1988. It was one of several exploits used by the Morris worm to propagate itself over the Internet; it took advantage of a vulnerability in the Unix fingerd service. Since that time, several Internet worms have exploited buffer overflows to compromise increasingly large numbers of systems. In 2001, the Code Red worm exploited a buffer overflow in Microsoft’s Internet Information Services (IIS) 5.0, and in 2003 the SQLSlammer worm compromised machines running Microsoft SQL Server 2000. In 2004, the Sasser worm exploited a buffer overflow in the Local Security Authority Subsystem Service (LSASS), the part of the Windows operating system that verifies users logging into the computer.
In 2004, a 16-year-old hacker found a few systems at the San Diego Supercomputer Center (SDSC) that had been patched for a software vulnerability but not yet rebooted. He exploited the unpatched software still running on those machines to gain access to the network and install a sniffer that detected users’ login sessions and captured login data, such as usernames and passwords.7
In May 2006, a large number of spam messages were disseminated from a .de email address. The messages contained a password-stealing Trojan horse called “Trojan-PSW.Win32.Sinowal.u” along with text in German claiming the attachment was an official Microsoft Windows patch. The new Trojan was a member of the Sinowal family of malware first detected in December 2005. The original versions installed themselves onto systems using browser exploits, while this new variant tricked users into installing it. The malware acted as a man-in-the-middle, capturing usernames and passwords when users accessed certain European bank Web sites.8
In May 2006, a zero-day vulnerability in Microsoft Word XP and Microsoft Word 2003 enabled attackers to plant the Backdoor.Ginwui Trojan on the PCs of users who received emails with malicious Word documents attached. The Trojan enables an attacker to connect to and hijack the compromised PC by installing additional software.9
In 2005, security researchers discovered a rootkit distributed by Sony BMG in its compact discs (CDs) that acted as digital rights management (DRM) for the music contained within the CDs. The rootkit installed itself on users’ PCs when the CD was inserted into the optical drive. The DRM rootkit contained spyware that surreptitiously transmitted details about the user back to Sony BMG. In addition, the rootkit contained software security vulnerabilities that made the PCs vulnerable to malicious code and other attacks. This spurred 15 different lawsuits seeking to force Sony BMG to cease selling the audio CDs containing the rootkit.10
In 2005, students discovered a weakness in the third-party software used to manage Harvard Business School applications. Students who used the same third-party software at other schools observed that when an application decision is made, applicants visit a series of pages, with the final decision appearing at a URL with certain parameters. By supplying similar parameters on the Harvard Business School Web site, students could view their application decisions before receiving official notice. Harvard rejected the applications of students who used this technique, citing an ethical mindset inappropriate for future business leaders.11
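The weakness here is a missing server-side authorization check: the site trusted client-supplied URL parameters to decide whose result to display. A minimal sketch of the corrected logic follows – the struct and function names are invented for this illustration, not taken from the actual third-party product:

```c
#include <string.h>

/* Hypothetical sketch. The vulnerable handler effectively did:
 *     show_decision(url_params.applicant_id);  // trusts the URL alone
 * The corrected check below releases a decision only to the account
 * that owns the record, and only after official release. */
struct application {
    const char *applicant;   /* account that owns this record */
    const char *decision;    /* e.g., "admit" or "deny" */
    int released;            /* has the school published it yet? */
};

const char *get_decision(const struct application *app,
                         const char *requester) {
    if (strcmp(app->applicant, requester) != 0)
        return "forbidden";            /* not the requester's record */
    if (!app->released)
        return "not yet released";     /* embargoed until release day */
    return app->decision;
}
```

The essential point is that both checks run on the server against its own records; nothing in the URL or other client-supplied data can substitute for them.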
In late 2003, Nicolas Jacobsen accessed T-Mobile’s Web site through a vulnerability in BEA WebLogic. Although a patch had been released in early 2003, T-Mobile had not applied it to its own servers. Using this vulnerability, the attacker installed an interface to T-Mobile’s customer service database, giving him access to numerous T-Mobile account details—including Social Security numbers.12
Even more damage can be caused when the resources of a nation state are directed towards subversion of software or exploitation of software vulnerabilities. Thomas C. Reed, former Secretary of the Air Force and special assistant to President Reagan, detailed one such example in his book At the Abyss: An Insider’s History of the Cold War. In 1981, it became apparent that the Soviet Union was stealing American technology. Instead of shutting down the Soviet operation, CIA director William Casey and National Security Council staffer Gus Weiss came up with a plan in which the U.S. would intentionally subvert the microchips and software that the Soviets were stealing. According to Reed, “every microchip they stole would run fine for 10 million cycles, and then it would go into some other mode. It wouldn’t break down, it would start delivering false signals and go to a different logic.” Similarly, the software the Soviets stole to run their natural gas supply systems was programmed to include a “time bomb” that changed processing to a different logic after a set period of time. In 1982, the failure of the gas system software caused the explosion of a major gas pipeline, resulting in “the most monumental non-nuclear explosion and fire ever seen from space.” [Reed 2004] The whole U.S. sabotage operation resulted in a huge drain on the Soviet economy. Moreover, the Soviets had based a huge number of systems on stolen software and hardware. The realization that some of this stolen technology had been compromised made it virtually impossible for them to determine which equipment was safe and which was untrustworthy.13
The pipeline explosion occurred over two decades ago. Since then, the emergence of the Internet, coupled with the exponential growth in the size, complexity, and ubiquity of software, has made such sabotage operations much easier and, it is feared, much more likely. Commodity computer hardware and commercial off-the-shelf (COTS) software are manufactured in a number of countries, some of which are openly hostile to the U.S., and in some of which the software industries are subject to direct influence or pressure from their governments. In many cases, the origin of a particular software product is impossible to determine (this is especially true of open source software). And yet, the U.S. and other governments have policies giving preference to COTS software and hardware over custom-built, and mandating their use. Knowing this, a hostile nation state with a booming software industry would be in an ideal position to sabotage software or hardware developed for export to the U.S. and its allies.