Guide to the Common Body of Knowledge to Produce, Acquire, and Sustain Secure Software




2.4 Methods for Attacks

Attacks can occur during any phase of the software life cycle, from development, testing, deployment, operation, and sustainment through decommissioning and disposal. Conducting an attack can be made significantly easier if the software being attacked contains vulnerabilities, malicious code, or back doors that were intentionally placed during its development for exploitation during its operation.

Even software in disposal may be subject to attack. By gaining physical access to software that has been disposed of, the attacker may find it easier to recover residual sensitive data stored with or embedded in the software; such data could not be easily recovered while the operational software was protected by the environment's security controls and mechanisms. Or the attacker may wish to copy disposed software that has been replaced by a later but derivative version of the same program, in order to reverse engineer the older version and use the knowledge gained to craft more effective attacks against the new version.


2.4.1 Malicious Code Attacks

One means by which attackers attempt to achieve their objectives is by inserting malicious code within a software program, or by planting it on an operational system. Malicious code, also referred to as malicious software or malware, is designed to deny, destroy, modify, or impede the software’s logic, configuration, data, or library routines.

Malicious code can be inserted during software’s development, preparation for distribution, deployment, installation, or update. It can be inserted or planted manually or through automated means. Regardless of when in the software lifecycle the malware is embedded, it effectively becomes part of the software and can present substantial dangers.

Viruses, worms, spyware, and adware are all rampant on the Internet, and the names of some malware, such as Code Red and Nimda, have entered the popular vocabulary. Everyone from home computer owners to Fortune 500 information technology (IT) infrastructure system managers is waging a constant battle to protect their systems against these threats.

A software producer clearly needs to be concerned about preventing and handling malicious actions directed against the software he or she develops or sustains, as well as the assets that software protects once it is in operational use. As previously noted, malicious actions can occur at other times as well.

Certain categories of malicious code are more likely to be planted on operational systems, while others are more likely to be inserted into software before it is deployed. The malware categories described below are therefore divided into those likely to be “inserted” and those likely to be “planted”.

Malware is increasingly being combined with deceptive “social engineering” techniques to accomplish more complex attacks on unsuspecting users. In some cases, malware is used to enable a deception, as in pharming. In other cases, deception is used to trick the user into downloading and executing malicious code. Another popular deception technique, phishing, is worth noting, though it does not require malicious code to succeed. “Social engineering” attacks are described in Appendix A (Section 2.8).

2.4.1.1 Categories of Malware Likely to Be Inserted During Development or Sustainment


  • Back door or trap door – a hidden software mechanism used to circumvent the system's security controls, often to enable an attacker to gain unauthorized remote access to the system. One frequently used back door is a malicious program that listens for commands on a particular Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) port (a sketch of one defensive check for such listeners appears after this list).

  • Time bomb – a resident computer program that triggers an unauthorized or damaging action at a predefined time.

  • Logic bomb – a resident computer program that triggers an unauthorized or damaging action when a particular event or state in the system's operation is realized, for example when a particular packet is received.
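
Because a back door of the kind described above typically listens on a network port, one operational check is to compare a host's listening TCP and UDP ports against an approved baseline. The following is a minimal illustrative sketch only, not a procedure prescribed by this guide; it assumes Python with the third-party psutil package, and the allow-list of ports is a hypothetical example.

    import socket
    import psutil  # third-party package, assumed to be available

    # Hypothetical allow-list of ports this host is expected to expose.
    ALLOWED_PORTS = {22, 80, 443}

    def unexpected_listeners():
        """Return (port, pid) pairs for locally bound sockets not on the allow-list."""
        findings = []
        for conn in psutil.net_connections(kind="inet"):
            if not conn.laddr:                      # skip sockets with no local address
                continue
            listening_tcp = conn.status == psutil.CONN_LISTEN
            bound_udp = conn.type == socket.SOCK_DGRAM
            if (listening_tcp or bound_udp) and conn.laddr.port not in ALLOWED_PORTS:
                findings.append((conn.laddr.port, conn.pid))
        return findings

    if __name__ == "__main__":
        for port, pid in unexpected_listeners():
            print("Unexpected listener on port %s (process id %s)" % (port, pid))

A finding from such a check is only a symptom: legitimate services also open ports, so each unexpected listener would still need to be investigated.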

2.4.1.2 Categories of Malware Likely to Be Planted on Operational Systems

  • Virus – a form of malware that is designed to self-replicate (make copies of itself) and distribute the copies to other files, programs, or computers. A virus attaches itself to, and becomes part of, another executable program, serving as a delivery mechanism for malicious code or for a denial of service attack. There are a number of different types of viruses, including (1) boot sector viruses, which infect the master boot record (MBR) of a hard drive or the boot sector of removable media; (2) file infector viruses, which attach themselves to executable programs such as word processing and spreadsheet applications and computer games; (3) macro viruses, which attach themselves to application documents, such as word processing files and spreadsheets, then use the application's macro programming language to execute and propagate; (4) compiled viruses, which have had their source code converted by a compiler into a format that can be directly executed by the operating system; (5) interpreted viruses, which are composed of source code that can be executed only by a particular application or service; and (6) multipartite viruses, which use multiple infection methods, typically infecting both files and boot sectors. More recently, a category of virus called a morphing virus has emerged; as its name suggests, a morphing virus changes as it propagates, making it extremely difficult to eradicate using conventional antivirus software because its signature is constantly changing.


  • Worm – a self-replicating program that, unlike a virus, is completely self-contained and self-propagating: it does not need to be part of another program to propagate itself. Worms frequently exploit the file transmission capabilities found on many computers, acting as a self-propagating delivery mechanism for malicious code or for a denial of service (DoS) attack that effectively shuts down service to users. Types of worms include the network service worm, which spreads by taking advantage of a vulnerability in a network service associated with an operating system or application, and the mass mailing worm, which spreads by identifying e-mail addresses (often located by searching infected systems) and then using either the system's e-mail client or a self-contained mailer built into the worm to send copies of itself to those addresses.

  • Trojan, or Trojan Horse – a non-replicating program that appears to be benign but actually has a hidden malicious purpose.

  • Zombie – a program that is installed on one system with the intent of causing it to attack other systems.

Operational software can be modified by the actions of malware. For example, some viruses and worms insert themselves within installed executable software binary files where they trigger local or remote malicious actions or propagation attempts whenever the software is executed.

One noteworthy “delivery” technique for malicious code within Web applications is the cross-site scripting attack:


  • Cross-site scripting (abbreviated as “XSS” or, less often, “CSS”) – an attack technique in which an attacker subverts a valid Web site, forcing it to send malicious scripting code to an unsuspecting user’s browser. Because the browser believes the script came from a trusted source, it executes the script. The malicious script may be designed to access any cookies, session tokens, or other sensitive information retained by the browser for use when accessing the subverted Web site. XSS differs from pharming in that in XSS the Web site involved is a valid site that has been subverted, whereas in pharming the Web site is invalid but made to appear valid. A sketch of one common mitigation, output encoding, appears below.
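
Applying context-appropriate output encoding to untrusted data before it is written into a Web page is one widely used mitigation for the attack described above. The fragment below is a minimal sketch only, assuming Python and its standard html module; the render_comment function and the surrounding page markup are hypothetical examples, not part of any particular application.

    import html

    def render_comment(untrusted_comment):
        """Return an HTML fragment in which user-supplied text is inert."""
        # html.escape converts <, >, &, and quote characters to entities,
        # so embedded <script> markup is displayed rather than executed.
        safe = html.escape(untrusted_comment, quote=True)
        return "<p class='comment'>" + safe + "</p>"

    # Example: a script injected into a comment field is neutralized.
    print(render_comment("<script>alert('XSS')</script>"))

Encoding alone does not address every XSS variant (data placed inside script blocks or URL attributes needs different handling), but it illustrates the general principle of treating browser-bound data as untrusted.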

2.4.2 Hidden Software Mechanisms


There are categories of “hidden” or “surreptitious” software mechanisms that were originally designed for legitimate purposes but which are increasingly being used by attackers to achieve malicious purposes. When this happens, these mechanisms, for all practical purposes, operate as malware.

Like viruses and worms, these hidden software mechanisms are most likely to be planted on operational systems rather than inserted into software during its development or sustainment. The most common hidden software mechanisms are:



  • Bot (abbreviation of robot) – an automated software program that executes certain commands when it receives a specific input. Bots are often the technology used to implement Trojan horses, logic bombs, back doors, and spyware.

  • Spyware – any technology that aids in gathering information about a person or organization without their knowledge. Spyware is placed on a computer to secretly gather information about the user and relay it to advertisers or other interested parties. The various types of spyware include (1) a web bug, a tiny graphic on a Web site that is referenced within the Hypertext Markup Language (HTML) content of a Web page or e-mail to collect information about the user viewing the HTML content; and (2) a tracking cookie, which is placed on the user's computer to track the user's activity on different Web sites and create a detailed profile of the user's behavior.
  • Adware – any software program that includes additional code to deliver and display advertising banners or pop-ups to the user’s screen while the program is running. Adware is frequently bundled with spyware that tracks the user’s personal information and usage activity and passes it on to third parties without the user’s authorization or knowledge.

2.4.3 Lifecycle Process Vulnerabilities that Enable Attacks


This subsection addresses vulnerabilities in the software’s lifecycle process that leave the software itself open to malicious code insertion and other compromises.

2.4.3.1 Authorized Access

During the development of software, an inside attacker could intentionally implant malicious code. The malicious code could be an intentional back door that allows someone to remotely log in to the system running the software, or it could be a time or logic bomb. Alternatively, the malicious code could be an intentionally implanted vulnerability14 that the attacker could exploit later. This method would provide the software company with plausible deniability of intent should the vulnerable code be found.

Inserting the malicious code during the software development process places the malicious code in all copies of the software. There are advantages and disadvantages to placing the malicious code in all copies of the software rather than in a targeted few. If the malicious code is in all copies, then wherever the software is running, the malicious code will be present. This also averts a danger to the attacker: if both “pure” and “altered” versions of the software exist, a simple comparison of checksums will reveal differences between copies that should be identical.
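
As a minimal illustration of the checksum comparison mentioned above, the sketch below computes SHA-256 digests of two copies of a binary that should be identical and reports any difference. It assumes Python; the file paths are hypothetical placeholders, and a real verification process would compare against a digest published through a trusted channel.

    import hashlib

    def sha256_of(path):
        """Return the hex SHA-256 digest of the file at 'path'."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical paths: a known-good ("pure") reference copy and an installed copy.
    reference = sha256_of("reference/product-1.2.0.bin")
    installed = sha256_of("/usr/local/bin/product")

    if reference == installed:
        print("Copies match; no alteration detected by this comparison.")
    else:
        print("Digest mismatch: the installed copy differs from the reference copy.")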

The easiest and most effective time to insert a malicious mechanism into a software product is during its requirements phase. Because the malicious code is conceived at the beginning of the software development process, it can be designed either as an integrated and visible feature or as an unadvertised feature. Stating malicious intentions in the requirements phase causes many people to become aware of the malicious feature, possibly raising the probability of public disclosure. However, the requirements could conceivably be crafted to obscure the malicious intent of the software (e.g., a back door presented as a remote-observation capability for user support).

Inserting the vulnerability or malicious functionality at a later stage of software development would potentially expose it to fewer people. In situations where organizations are not taking precautions, inserting the malicious mechanism during the design or implementation phase could be relatively easy and in many organizations would require only the actions of a rogue programmer. The likelihood of the attack being exercised and discovered during testing is not high, since normal testing is based on the unmodified specifications.15 Historically, testing does not look for added functionality, which is one of the reasons Easter eggs (recreational insertions)16 and backdoors have been able to become part of final products relatively easily. Code inspection may reveal the malicious code, but in many shops a given part of the code is not likely to be examined late in the process unless a problem is discovered during testing. Even then, the same programmer who put the malicious code in the product may be the person asked to fix the problem.

Once coding is complete and testing is in progress, the level of difficulty facing an insider inserting malicious code depends on the environment in which the testing is conducted. If the testers have access to the source code and development environment, inserting malicious code could be easy. Having access only to the compiled binary could make inserting a malicious mechanism much more difficult.

Other avenues for insider attack arise when the product is placed on a CD or a Web site, or during sustainment, such as when upgrades or patches for the software are developed or released. The original software or its updates could also be modified during delivery and installation, or after installation.

To summarize, an attacker who is part of the software development process might alter the product during any phase of the software development process.

During deployment an attacker might:


  • Change the product or update it after approval and before distribution;

  • Usurp or alter the means of distribution;

  • Change the product at the user's site before or during installation.

2.4.3.2 Unauthorized Access

For both outsiders and insiders, the later in the software development process an attack is inserted, the less likely it is to be found. Usually, the initial release of a software product receives more analysis and testing than an upgrade or patch. Inserting an attack as part of an upgrade or patch would thus be less likely to be discovered. On the other hand, not all systems would apply the upgrade or patch, and the lifespan of the vulnerability or attack would be shorter than if the attack had been present in the software initially.

An unauthorized attacker could alter the product electronically by hacking into the distribution site for the software. Performing an attack in this manner would likely be difficult and require substantial skill. Also, as mentioned previously, danger of detection increases when both pure and altered copies exist.

Others who may not be authorized to change the software, but who have some degree of insider access, include secretaries, janitors, and system administrators. Being insiders to the company, they are more likely to be trusted and less closely watched. In the case of system administrators, they might even be the ones responsible for detecting any attacks. These or other trusted insiders might be self-motivated, might be inserted agents of an outside entity, or might be subverted (e.g., bribed). They may perform the actions themselves or provide access for an unauthorized outsider.

One danger not yet mentioned is the disclosure of a vulnerability to attackers or to the public, whether by an insider or by someone not connected to the development, sustainment, or operation of the secure software system. The vulnerability may be known to the company, but until a patch is available, its details should be closely held. This window of opportunity could be valuable to an attacker.

To summarize, attackers during development might:


  • Change a product from outside by

    • Initiating electronic intrusion

    • Allowing physical intrusion (combined with electronic)

  • Change a product from inside by

    • Inserting an agent

    • Corrupting someone already in place

    • Having an insider who is self-motivated

  • Change or disrupt development process by

    • Failing to run or report a test

    • Categorizing a vulnerability’s defect report as not a vulnerability

In summary, the means to conduct an attack can take several forms. Attacks also do not have to compromise the security of a system to succeed at denying its continued operation. An outsider can simply overwhelm critical resources (e.g., communication paths) that a system depends upon. Such an attack can be as effective as a successful intrusion if the objective is to deny the use of the system.

Many paths of attack exist, including:



  • Intrusion: gaining illegitimate access to a system

  • External or Perimeter Effects: acts that occur outside or at the defense perimeter but nevertheless have a damaging effect; the most common one is denial of service from overload of a resource

  • Insider: a person with existing authorization uses it to compromise security, possibly including illegitimately increasing authorizations

  • Subversion: changing (process or) product so as to provide a means to compromise security17

  • Malware: software placed to aid in compromising security

Attempts to prevent attacks against software during its operation fall into the realm of operational security rather than software security.

