Produced by the Software Assurance Workforce Education and Training Working Group, this compilation is a needed preliminary step toward addressing the issues related to achieving adequate United States (US) education and training on engineering of secure software. These issues include the skill shortages within government and industry and curriculum needs within universities, colleges, and trade schools.
The ultimate goal for this document is to better prepare the software development workforce to produce secure software. However, the intended primary audiences for this guide go beyond software practitioners, and also include:
Educators – influence and support curricula;
Trainers – extend/improve contents of training of current workforce;
Acquisition personnel – aid in acquiring (more) secure software;
Evaluators and testers – assist in adding risk-driven testing and evaluation that consider threats and vulnerabilities to their current requirements-driven functional tests and evaluations;
Standards developers – encourage and facilitate inclusion of security-related items in standards;
Experts – supply feedback: modifications or validation to document authors;
Practitioners – a guide for learning or a high-level introduction to the field;
Program managers – to understand software-specific security and assurance, including alternative approaches, risks, prioritization, and budget.
Educators and trainers wishing to develop specific curricula and curricular material will benefit from this comprehensive summation of the topics, facts, principles, and practices of software-specific security and its assurance. This represents a body of knowledge (BOK) that can provide a benchmark for educators and trainers. This benchmark will enable them to target and validate detailed learning objectives, develop coherent instructional plans, and structure their teaching and evaluation processes to effectively teach specific content across the wide range of relevant audiences and roles.
Evaluators, standards developers, and practitioners including testers will benefit from being able to identify weaknesses or gaps in their personal knowledge that they might address by additional study as well as ones in their organization’s processes or products. They will be able to judge their performance against minimum and maximum standards of practice. Finally, by leveraging the content of the numerous references in the guide, they will be able to tailor specific operational recommendations and processes and to ensure the development, acquisition, deployment, operation, and sustainment or maintenance of secure software within their professional setting.
While the content of this document provides broad coverage, readers interested in gaining an even deeper knowledge in secure software engineering or acquisition of secure software are encouraged to also read the references provided throughout this document.
Thus, this document’s content can provide a high-level introduction, and this content combined with its numerous references can serve to guide readers in performing their work.
After covering the foundation material in the next three major sections of this document, readers should refer to the portions of this guide relevant to their needs. Feedback from readers is important in identifying needed improvements for future versions and related products.
Specific initial targets for influence include universities and training organizations willing to be trial users or early adopters, as well as the content of the Institute of Electrical & Electronics Engineers (IEEE) Guide to the Software Engineering Body of Knowledge [SWEBOK]8.
After discussing the meaning of security, this section describes the scope of the “additional” knowledge included in this report, outlining what is needed but not available elsewhere. It covers:
Security properties for software;
Knowledge needed to engineer secure software;
Boundaries of document scope;
Related subject matter areas.
1.4.1 Security Properties for Software
The goal of software security is to produce software that is able to:
Resist or withstand many anticipated attacks;
Recover rapidly, with minimum damage, from attacks that cannot be resisted or withstood.
Software can be characterized by its fundamental properties. Fundamental properties of software include such things as “functionality”, “performance”, “reliability”, “cost”, “usability”, “manageability”, “adaptability” and, of most interest to us, “security”.
The exhibition of security as a property of software indicates the software’s ability to resist, withstand, and recover from attacks. However, while both software and information usually require the same properties in order to be deemed secure, the specific objectives and relative importance of each property in software are likely to differ from that property’s objective and importance in information. For example, confidentiality as a property is usually considered far more important for sensitive information than it is for software; indeed, confidentiality of software (versus the data it processes) is often not required at all.
Three lower-level properties of software can be said to compose its property of security:
Availability: Timely, reliable access is maintained to the software by its authorized users [CNSSI 4009]. This means the software must continue to operate correctly and predictably, and remain accessible by its intended users and the external processes with which it must interact. For the software to be considered available, its continued operation and accessibility must be preserved under all but the most hostile conditions. If it must cease operation because it can no longer withstand those conditions, its failure must not leave the software vulnerable to the compromise of any of its other security properties. Nor must the failure of the software result in or enable the compromise of any security property in the software's data or environment.
Integrity: The software is protected from improper modification or destruction [NIST FIPS 200], whether that modification/destruction is intentional or accidental. As a result, its authenticity is ensured, and it can be expected to perform its intended functions in an unimpaired manner, free from deliberate or inadvertent unauthorized manipulation [CNSSI 4009]. In practical terms, integrity means that at no time during its development, installation, or execution can the software's content, functionality, configuration, or data be modified in any way or deleted by unauthorized entities (humans, processes). Nor can it be modified in unauthorized ways by authorized entities.
Confidentiality: The nature, location, and existence of the software (including its configuration and volatile data) must not be disclosed to unauthorized individuals, processes, or devices [CNSSI 4009]. Moreover, all required restrictions or constraints on the accessibility and disclosure of the software must be preserved [NIST FIPS 200]. In practical terms, confidentiality as a property of software is less often required than confidentiality of information. However, hiding and obfuscation of software can reduce its exposure to attack or reverse engineering. Note that software may include functionality that contributes to the confidentiality of the information it processes; that confidentiality, however, is correctly seen as a property of the information, not of the software.9
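As a concrete illustration of the integrity property, software distributors commonly publish a known-good cryptographic digest so that recipients can detect unauthorized modification before installation or execution. The artifact bytes below are hypothetical; this is a minimal sketch in Python using only the standard library:

```python
import hashlib

def verify_integrity(artifact: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact's SHA-256 digest matches the known-good value."""
    actual = hashlib.sha256(artifact).hexdigest()
    return actual == expected_sha256

# Hypothetical example: the publisher distributes the expected digest out of
# band; any unauthorized modification of the artifact changes the digest.
artifact = b"example software content"
known_good = hashlib.sha256(artifact).hexdigest()

assert verify_integrity(artifact, known_good)             # unmodified: accepted
assert not verify_integrity(artifact + b"!", known_good)  # tampered: rejected
```

A digest check of this kind detects modification but does not by itself establish who published the artifact; that requires the signature mechanisms discussed below.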
Two additional lower-level security properties are more often associated with information and the human users who access it. However, with the increasing use of agent-based computing, service oriented architectures, and other computing models in which software entities interact without direct human user intervention, these properties are becoming equally important for the software itself:
Accountability: The process of tracing the software's activities to a responsible source [CNSSI 4009], accountability is achieved by recording and tracking, with attribution of responsibility, the actions of the software itself, and of the entities (human users or external software processes) that interact with the software. This tracking must be possible both while the recorded actions are occurring, and afterwards. Accountability has traditionally been achieved through audit mechanisms that record and track user actions at the operating system level. Increasingly, comparable logging mechanisms are being used to record and track events at the application level, including events performed by software entities as well as human users.
Non-repudiation: Pertains to the ability to prevent entities from disproving or denying responsibility for actions they performed while interacting with the software. For example, the software would be provided with proof of the identity of any entity that provided input to it, while that entity would be provided with proof that the input had been received by the software, so neither can later deny their actions involved in processing the input [CNSSI 4009]. The mechanism frequently used to ensure non-repudiability of responsibility for transmission and receipt of messages or creation of files is the digital signature: the creator of a file or sender of a message digitally signs the message, and the recipient digitally signs and transmits to the sender an acknowledgement of having received the original message.
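The digital signature mechanism just described can be sketched with textbook RSA. The tiny primes and the message below are purely illustrative (real systems use vetted cryptographic libraries and key sizes of 2048 bits or more); this is a minimal sketch in Python using only the standard library (Python 3.8+ for the modular-inverse form of pow):

```python
import hashlib

# Toy RSA key pair -- textbook RSA with tiny primes, insecure, illustration only.
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent (shared with verifiers)
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (kept secret by the signer)

def sign(message: bytes) -> int:
    """Signing uses the PRIVATE key: only its holder could have produced this value."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Verification uses the PUBLIC key: any third party can confirm who signed what."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"input submitted to the software"
sig = sign(msg)
assert verify(msg, sig)  # the signer cannot later deny having produced this input
```

Because only the holder of the private exponent d can produce a signature that verifies under the public pair (n, e), a valid signature binds the signer to the message, which is the basis of non-repudiation.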
There are clear dependencies among and relationships between properties. For example, the preservation of availability depends on the prevention of unauthorized destruction, which is one of the objectives of integrity.
Moreover, the ability to preserve security properties often presumes the existence of mechanisms (security controls10) that perform certain functions intended to help preserve those properties. For example, most of the security properties described above are more easily preserved if the actions of entities that interact with the software can be ensured to be consistent with and not in violation of the software’s security properties. Several security controls operating in combination may achieve this: an “identification and authentication (I&A)” mechanism can act as a trusted proxy of the software in order to reliably determine the identities of all entities that wish to access that software. I&A in turn is likely to be a prerequisite to the ability to preserve the software’s confidentiality and integrity, because it enables the “authorization” mechanism to grant permissions to each entity based on its authenticated identity. Each entity’s authorized permissions are then used by an “access control” mechanism that determines what actions a given entity is allowed to perform, so that the entity is able to perform those actions but also prevented from performing any other actions. Ensuring that entities are able to perform their allowed functions is one of the objectives of availability, while preventing entities from performing any actions that are not explicitly permitted can help reduce the software’s exposure to violations of its integrity, availability, and (when relevant) confidentiality. Similarly, the preservation of accountability and non-repudiation will be useful in restoring those properties if they are violated, because the cause of the violation (i.e., the entity that performed an unauthorized function) can be traced and blocked.
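The chain of controls just described (I&A establishing identity, authorization granting permissions, access control enforcing them) can be sketched as follows. The user names, password, and permission table are hypothetical, and real I&A would never store plaintext credentials; this is a minimal sketch in Python:

```python
# Hypothetical credential store and permission table -- illustration only.
CREDENTIALS = {"alice": "s3cret"}              # identification & authentication (I&A)
PERMISSIONS = {"alice": {"read", "execute"}}   # authorization: permissions per identity

def authenticate(user: str, password: str) -> bool:
    """I&A: reliably establish the identity of the requesting entity."""
    return CREDENTIALS.get(user) == password

def access_control(user: str, password: str, action: str) -> bool:
    """Allow an action only if the entity is authenticated AND authorized for it."""
    if not authenticate(user, password):
        return False                           # unknown or unauthenticated entity
    return action in PERMISSIONS.get(user, set())

assert access_control("alice", "s3cret", "read")       # permitted action succeeds
assert not access_control("alice", "s3cret", "write")  # unauthorized action blocked
assert not access_control("mallory", "guess", "read")  # unauthenticated entity blocked
```

Note the dependency order the section describes: authorization decisions are meaningless without authenticated identity, and access control is the point at which both are enforced.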
It is important to recognize, however, that the security mechanisms, controls, and functions, as described in [NIST FIPS 200], [CNSSI 4009], the Department of Defense (DoD) Instruction 8500.2, and [DCID 6/3] all have the objective of preserving security properties of information or whole systems. Such mechanisms may be ineffective, or of only partial effectiveness, in preserving the security properties of software. This is because non-exhibition of those properties in software often results from how the software was specified, designed, implemented, or prepared for deployment, while security mechanisms, controls, and functions are intended to preserve security properties of software only when it is executing (i.e., in operation). Also, because many security controls are implemented at the environment level rather than in the software itself, they are necessarily limited to protecting the software from without, by imposing constraints on how external entities interact with it. However, many software security problems arise from within the software: how it performs its processes, how it handles input it has already received, and how its components or modules interact with each other.
The background material on Information Security and Cyber Security concepts in Section 3, Appendix A should further help the reader distinguish between how emergent security properties are exhibited in information (or data), which is passive, and how they are exhibited in software (which is simultaneously active in that it executes functions, and passive in that it exists as one or more binary files). The computer security, cyber security, and information assurance disciplines use much of the same terminology as software security/software assurance, but the shades of meaning of those terms often differ when applied to software rather than information, networks, or whole systems.
In information systems, software functions are often relied upon to preserve the security properties of the information processed by the system. For example, the access control mechanisms of the operating system, which except in certain embedded systems are implemented by software, may be relied on to protect the information from disclosure when it is at rest, stored in the operating system’s file system. Or a Web application’s Secure Sockets Layer (SSL) software may be relied upon to encrypt that information before transmitting it over a network, again protecting it from disclosure and preserving (and, in fact, accomplishing) its confidentiality.
In this way, software security functions can become critical to the achievement of information security. However, this is coincidental. Just because software performs functions that preserve the security properties of information does not mean the same software will be capable of preserving its own security properties. Yet, it is the very fact that the software is relied upon to perform functions that preserve the security properties of information that makes it so important that the security properties of software itself be consistently exhibited and non-compromisable.
Desired security properties in software must be preserved wherever they are required throughout the software system, and must not be able to be bypassed anywhere they are required in the system. In other words, security properties are emergent systems properties.11
Emergent properties are those that derive from the interactions among the parts—components—of the software, and from the software’s interactions with external entities. As an emergent property, security of the software as a whole cannot be predicted solely by considering the software’s separate components in isolation. The security of software can only be determined by observing its behavior as a collective entity, under a wide variety of circumstances. Such observation should reveal whether the desired property, in fact, emerges from the collective behavior of the software’s components as they prepare to interact and respond to and recover from interaction.
The main objective of software security practices and secure software processes, then, is to ensure that all behaviors associated with software’s internal and external interactions are accomplished only in ways that preserve the software’s security properties, even when the software is subjected to unpredictable inputs and stimuli (such as those associated with attacks on the software).
Neither in the physical world nor in software can anyone absolutely guarantee security. Thus, when this guide speaks of “secure software,” the actual meaning is “software that can be considered secure with a justifiably high degree of confidence, but which does not absolutely guarantee ‘a substantial set of explicit security properties.’” [Redwine 2004, p. 2] This definition can also be stated negatively: justifiably high confidence that no software-based vulnerabilities exist that the system is not designed to either tolerate or recover from with a minimum amount and extent of damage. Notwithstanding these definitions’ emphasis on high security, the material in this document covers issues important not only for producing highly secure software but also for the many organizations under pressure to produce software merely more secure than the current version.
1.4.2 System Security vs. Software Security
In both the corporate world and the Federal Government, systems security is generally defined at the architectural level, and uses a “secure the perimeter” approach to prevent malicious actors and input from crossing the boundary and entering the system from outside. The premise of the “secure the perimeter” approach is that most of the system components within the boundary are themselves incapable of resisting, withstanding, or recovering from attacks.
Traditionally, systems security has been achieved almost exclusively through use of a combination of network and operating system layer mechanisms, controls, and protections. More recently, application security measures have been added. These measures extend to the application layer the same types of mechanisms, controls, and protections found at the network and operating system layers. The resulting combination of security measures at the system architecture, network protocol, and application levels results in layered protection referred to as “defense in depth”.
System security measures are both preventive and reactive. The preventive measures include firewalls and filters, intrusion detection systems, virus scanners, trending and monitoring of network traffic. The objective of all of these mechanisms is to block input that is suspected to contain attack patterns or signatures, and to prevent access or connection to the system by unknown or suspicious actors. The reactive measures include mobile code containment, malicious code containment and eradication, patching (location and correction of known security vulnerabilities usually after they already have been exploited), vulnerability scanning, and penetration testing.
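As a deliberately naive illustration of the preventive measures above, a signature-based filter blocks input matching known attack patterns. The two patterns below are simplified stand-ins for a real signature database; this is a minimal sketch in Python:

```python
import re

# Simplified attack-pattern "signatures" -- illustrative only, not a real ruleset.
ATTACK_SIGNATURES = [
    re.compile(r"<script\b", re.IGNORECASE),           # crude cross-site-scripting marker
    re.compile(r"('|\")\s*or\s+1=1", re.IGNORECASE),   # crude SQL-injection marker
]

def block_suspect_input(data: str) -> bool:
    """Preventive control: reject input matching any known attack signature."""
    return any(sig.search(data) for sig in ATTACK_SIGNATURES)

assert block_suspect_input("<SCRIPT>alert(1)</script>")  # matches XSS signature
assert block_suspect_input("name' OR 1=1 --")            # matches SQL-injection signature
assert not block_suspect_input("ordinary user input")    # clean input passes
```

Such blocklist filtering is easily bypassed by novel or obfuscated attacks, which is precisely why defense in depth layers it with other controls rather than relying on it alone.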
By contrast with systems security, which focuses on protecting the system’s already-codified operational capabilities and assets, the basic premise of software security is that security is not a feature that can be simply added on after all the other features have been codified [McGraw, 2006]. This is because many security exploits against software target defects in the software’s design or code. Moreover, software’s emergent security properties must be identified as security requirements from the software’s conception. Security requirements for software will, of course, include requirements for security functions (e.g., the use of applied cryptography). But they must also include the software’s emergent security characteristics [McGraw, 2006]. Section TBD discusses the requirements process for secure software at length.
1.4.3 Knowledge Needed to Engineer Secure Software
The knowledge needed to engineer secure software falls naturally into the following categories, which can be mapped to information considered during the security risk assessment process:
The nature of threats12 to software,
How those threats manifest as attacks,
The nature of the vulnerabilities that they are likely to target or exploit13,
The measures necessary to prevent the threats from targeting/exploiting those vulnerabilities,
The nature of the computing environment in which the conflict takes place.
For software that is not “written from scratch” but results from the integration or assembly of pre-existing commercial-off-the-shelf (COTS), open source, legacy, or other reusable components, there is a need for security evaluations as part of the acquisition (e.g., through purchase, download, retrieval from a reuse repository) of those components. For this reason, acquisition processes that expressly consider the security of the software being procured are also covered in this guide.
Thus, the coverage includes:
Threats, and the manifestation of threats as attacks: entities, objectives, strategies, techniques, and effects;
Defense: roles, objectives, strategies, techniques, and how to develop and sustain software to defend and survive;
Environment: aspects of the environment for operational software with security implications;
Acquisition: to ensure that procured software is secure.

Major Sections of Document:
Threats and Hazards – introduction to the nature of attacks and other dangers software faces
Fundamental Concepts and Principles – needed knowledge across all areas
Ethics, Law, and Governance
Verification, Validation, and Evaluation
Tools and Methods
Sustainment – unique considerations after initial deployment
Acquisition – acquiring secure software for integration or use
Defense spans activities and practices throughout the entire secure software system lifecycle, with approaches and pitfalls from concept through disposal covering all aspects:
Work environment (including physical aspects) and support.
The bulk of knowledge for secure software engineering and sustainment is the same, regardless of whether the software is “acquired” or “built from scratch”. This is especially true of knowledge pertaining to ethics, and legal and regulatory concerns. This said, the acquisition phase may have what seems to be a disproportionately large impact on the success of the system, and a large proportion of most organizations’ software budgets are often spent on sustainment.
In addition to the Preface, which discusses the special interest paths a reader can take when reading this guide, and the Introduction, this guide contains five main parts:
Threats and Hazards – attacks and non-malicious acts in conflict that may threaten security – Section 2;
Fundamental Concepts and Principles; and Ethics, Law, and Governance – needed knowledge across all areas – Sections 3 and 4;
Development, Sustainment, and Management – engineering, management, and support directly involved in producing and sustaining secure software – Sections 5-12;
Use of this Document – tips, thoughts, and suggestions to help the various audiences use this document in their work, including some tips from the first limited usage of drafts.
1.4.4 Boundaries of Document Scope
Because knowledge for software development is recorded elsewhere in bodies of knowledge, de facto and official standards, curricula standards, and textbooks, this document presumes that the reader already has a baseline of generally accepted knowledge about good software engineering. The guide, therefore, focuses on the “delta” between “good” software engineering practices and secure software engineering practices. This “delta” knowledge includes:
Information on rigorous approaches, including formal methods, and on less rigorous approaches that are considered useful for producing secure software;
An outline of the information often considered relevant for dealing with legacy software that is deficient in security;
Proven (to some degree/in some contexts) techniques, practices, and types of tools;
Information currently useful or expected to become useful or needed in the near future.
Descriptions of knowledge do not include details on particular products or operational activities. This is because such coverage exists elsewhere. Examples of areas not addressed in detail include:
Specific details of Windows, Unix, Linux, and other operating systems;
Network protocols and operations;
Specific application frameworks, such as Java Enterprise Edition (Java EE), Microsoft .NET, and Eclipse;
Any other specific products, commercial or open source (except when mentioned as an illustrative example within a type or category of tool or technology that has been discussed);
Rules of evidence, search and seizure, surveillance laws, and forensic and investigative methods and procedures.
According to Committee on National Security Systems (CNSS) Instruction No. 4009 “National Information Assurance Glossary” (Revised 2006, http://www.cnss.gov/Assets/pdf/cnssi_4009.pdf), Software Assurance is “the level of confidence that software is free from vulnerabilities, either intentionally designed into the software or accidentally inserted at anytime during its lifecycle, and that the software functions in the intended manner.”
The Department of Defense (DoD) in “DoD Software Assurance Initiative” (13 September 2005, https://acc.dau.mil/CommunityBrowser.aspx?id=25749) states further that Software Assurance relates to “the level of confidence that software functions as intended and is free of vulnerabilities, either intentionally or unintentionally designed or inserted as part of the software.”
The National Aeronautics and Space Administration (NASA) takes a more process-oriented view, stating in NASA-STD-2201-93 “Software Assurance Standard” (http://satc.gsfc.nasa.gov/assure/astd.ps) that Software Assurance comprises the “planned and systematic set of activities that ensures that software processes and products conform to requirements, standards, and procedures. It includes the disciplines of Quality Assurance, Quality Engineering, Verification and Validation, Nonconformance Reporting and Corrective Action, Safety Assurance, and Security Assurance and their application during a software life cycle.”
The National Institute of Standards and Technology (NIST, http://samate.nist.gov/index.php/Main_Page) and the Department of Homeland Security (DHS, https://buildsecurityin.us-cert.gov/portal) both agree with NASA's process-oriented view of Software Assurance, but state that the objective of Software Assurance activities is to achieve software that exhibits:
“Trustworthiness, whereby no exploitable vulnerabilities exist, either of malicious or unintentional origin;
“Predictable Execution, whereby there is justifiable confidence that the software, when executed, functions as intended;
“Conformance, whereby a planned and systematic set of multi-disciplinary activities ensure that software processes and products conform to their requirements, standards, and procedures.”
Finally, the Object Management Group (OMG) Special Interest Group on Software Assurance (http://adm.omg.org/SoftwareAssurance.pdf and http://swa.omg.org/docs/softwareassurance.v3.pdf) provides a definition of Software Assurance that combines the “level of confidence” aspect of the CNSS and DoD definitions with the “trustworthiness” aspect of the NIST and DHS definitions, to wit: Software Assurance is the software's demonstration of “justifiable trustworthiness in meeting its established business and security objectives.”
In point of fact, the term “software assurance” can potentially refer to the assurance of any property or functionality of software. However, as the definitions above demonstrate, the current emphasis of Software Assurance encompasses the safety and security disciplines while integrating practices from multiple other disciplines, recognizing that software must, of course, be satisfactory in other respects as well, such as usability and mission support.
As mentioned previously, the knowledge to develop, sustain, and acquire “unsecured” software is not included in this guide. In addition, a number of areas related to, or included in, the knowledge needed to produce secure software are not included in detail. Some underlying fundamentals are simply presumed, such as:
In many situations, relevant concerns in secure software development, sustainment, and acquisitions could easily be labeled “systems” concerns. With this in mind, this document sometimes uses the term “software system” or just “system”. This usage is intended to refer to what is known as a “software-intensive system”, i.e., a system in which software is the predominating, most important component.
Because the purpose of secure information systems implemented by software—systems likely to be connected to a network (often the Internet)—is to process and transmit information, concepts from related disciplines, including computer security, information security, and network and cyber security are also presented. Because security of software does not end with its development, but extends into its deployment, concepts of operational security are included. And because software practitioners are people, personnel security concepts are also presented. To provide full context for the discussion of law, policy, and governance, concepts related to criminology, the legal system, and law and regulatory enforcement are also presented. In all cases, these ancillary concepts are usually mentioned only briefly, at a high level of abstraction. Finally, domain knowledge, such as operating systems or banking, while quite important in practice, is out of the scope of this guide.