Guide to the Common Body of Knowledge to Produce, Acquire, and Sustain Secure Software






Early discussions revolved around the first two of these. In the end, to facilitate future work and acceptance, the WG picked a set of generic categories easily mapped to the categories used in a number of standards, curricula, and body of knowledge efforts. These categories are close to those used in the SWEBOK Guide – see Table 1.

Answering the second question, moving from activities to required knowledge, is sometimes easy and sometimes difficult. While people are often tempted to state that the knowledge is, “The knowledge of activity X,” this statement is usually unsatisfactory except in quite specific cases.

The WG also had to decide to what extent this body of knowledge document should exclude knowledge that answers the second question but is already identified by existing standards for software engineering. In the end, the question of what to leave out or mention only in passing was answered pragmatically by what was believed to be known (or at least known of) by most of the members of the intended audiences for this guide.

In 2005, the WG began efforts to identify the knowledge needed by the previously identified activities. Considerable effort was required before it finally established a satisfactory scope for the knowledge set. Review by experts not intimately involved with the original draft set was crucial to this effort.

The WG has benefited from the experiences of the SWEBOK project. For example, the WG has established a single lead author for each section because the SWEBOK experience showed that arrangements with more co-authors worked poorly.

In line with the guiding principle that the text should straightforwardly supply the background and contextual knowledge that persons already possessing good software engineering knowledge need in order to become adequately familiar with the relevant software-specific security and assurance material, the group decided that its initial output should have a level of exposition between that used by the SWEBOK Guide (prose format) and that used by the DoD information assurance standards, Committee on National Security Systems (CNSS) Training Standards 4011 and 4012 (list format). Members of the WG felt that lists alone would be too sparse for an audience that lacked prior knowledge of the area, but that lists were adequate for the lowest level of detail and more amenable to production within the short timeframe available for its initial product.

In this first version, the Workforce Education and Training Working Group’s aim is to be inclusive in enumerating the knowledge needed to acquire, develop, and sustain secure software including assurance (objective grounds for reduced uncertainty or confidence) of its security properties and functionality. To help industry, government, and academia to target their education and training curricula, as well as to aid self-study efforts, the WG may eventually need to provide indications of what knowledge is needed by different roles and other supplementary materials.

The WG aimed to ensure adequate coverage of requisite knowledge areas in contributing disciplines to enable instructors and professionals in several disciplines, such as software engineering, systems engineering, and project management, to identify and acquire competencies associated with secure software. Because of this wide coverage and applicability, as well as for historical reasons, the guide’s subtitle includes the phrase “Common Body of Knowledge.” Indeed, no individual practitioner would likely ever be expected to know all the knowledge identified in this guide, and the guide is designed so that after reading the first four sections readers can go to the sections of their choice.

How to Read this Document

The first major section is an Introduction that explains the scope and purpose and lists the twelve common references that are cited throughout the document. Immediately following this Introduction, Section 2 covers the dangers that software systems face. Section 3 covers a number of fundamental terms, concepts, and principles. Together Section 3 and Section 4, on Ethics, Law, and Governance, are relevant to all the Sections that follow them. Because it provides a set of inputs to requirements, the short section addressing laws, regulations, policies, and ethics related to software systems precedes the more technical sections on requirements, design, construction and verification, validation, and evaluation activities. Sections 5-12 follow a nominal lifecycle plus management and support, with Section 12 addressing the unique considerations after initial deployment.

Some lifecycle aspects are further detailed in a tools and methods section before a process section covers their arrangement, introduction, and improvement. Section 11 addresses differences in managing secure software development. These sections cover the additional technical, managerial, and supporting activities and interactions related to producing secure software. Section 13 covers acquisition of secure software. The final section gives tips on using this document for the several kinds of intended audiences. Thus, depending on the reader’s interests, he or she can take a number of paths within this guide.

All readers could also benefit from Section 5 on Requirements, and managers could directly benefit from Section 10 on Processes.

In addition to the main sections of the document, the Appendices of Section 3 provide background information on information assurance and computer security concepts, and the implementation of information and computer security functionality in software. These discussions are provided to help clarify the commonalities and differences between security for software on the one hand and security for data and whole systems on the other, and also to clarify the distinction between software that is secure versus software that performs information or computer security functions.

Because of the iterative nature of many software production efforts, the reader cannot always draw a clear boundary between “development” (including assurance) and “sustainment”. In addition, acquisition activities require an understanding of software production and sustainment. Thus, to maintain coherence some overlap is inevitable. This overlap is kept reasonable by not having in-depth treatment of a sub-area in multiple sections, but rather having different sections reference each other for the details. This may result in some switching back and forth by readers but avoids excessive duplication or conflict.

A bibliography follows these content sections, listing the items referenced throughout the document and the entries in the lists of further readings (for those who want to pursue areas even further) that appear at the end of each major section, and the document closes with an Index. The Index should be useful for finding mentions of a particular topic regardless of location.

Finally, I would like to say that I have enjoyed creating this document and working with all the helpful people involved, and I welcome interaction on the topic.


Samuel T. Redwine, Jr.
James Madison University, Harrisonburg, Va.
May 2006

Foreword by Joe Jarzombek

[July 7, 2006 Note: Section to be updated]

Dependency on information technology makes software assurance a key element of national security and homeland security. Software vulnerabilities jeopardize intellectual property, consumer trust, business operations and services, and a broad spectrum of critical applications and infrastructure, including everything from process control systems to commercial application products. In order to ensure the integrity of key assets, the software that enables and controls them must be reliable and secure. However, informed consumers have growing concerns about the scarcity of practitioners with requisite competencies to build secure software. They have concerns with suppliers’ capabilities to build and deliver secure software with requisite levels of integrity and to exercise a minimum level of responsible practice. Because software development offers opportunities to insert malicious code and to unintentionally design and build exploitable software, security-enhanced processes and practices – and the skilled people to perform them – are required to build trust into software.

The Department of Homeland Security (DHS) Software Assurance Program is grounded in the National Strategy to Secure Cyberspace which indicates: “DHS will facilitate a national public-private effort to promulgate best practices and methodologies that promote integrity, security, and reliability in software code development, including processes and procedures that diminish the possibilities of erroneous code, malicious code, or trap doors that could be introduced during development.”

Software Assurance has become critical because dramatic increases in business and mission risks are now known to be attributable to exploitable software: system interdependence and software dependence make software the weakest link; software size and complexity obscure intent and preclude exhaustive testing; outsourcing and use of an un-vetted software supply chain increase risk exposure; attack sophistication eases exploitation; reuse of legacy software interfaced with other applications in new environments introduces unintended consequences that increase the number of vulnerable targets; and the number of threats targeting software continues to grow. All of these contribute to the increasing risk to software-enabled capabilities and to the threat of asymmetric attack. A broad range of stakeholders now needs confidence that the software which enables their core business operations can be trusted to perform as intended (even in the face of attempted exploitation).

DHS began the Software Assurance (SwA) Program as a focal point to partner with the private sector, academia, and other government agencies in order to improve software development and acquisition processes. Through public-private partnerships, the Software Assurance Program framework shapes a comprehensive strategy that addresses people, process, technology, and acquisition throughout the software lifecycle. Collaborative efforts seek to shift the paradigm away from patch management and to achieve a broader ability to routinely develop and deploy software products known to be trustworthy.  These efforts focus on contributing to the production of higher quality, more secure software that contributes to more resilient operations.

In their Report to the President, Cyber Security: A Crisis of Prioritization (February 2005), in the chapter entitled “Software Is a Major Vulnerability”, the President’s Information Technology Advisory Committee (PITAC) summed up the problem of non-secure software concisely and accurately:

Network connectivity provides “door-to-door” transportation for attackers, but vulnerabilities in the software residing in computers substantially compound the cyber security problem.  As the PITAC noted in a 1999 report, the software development methods that have been the norm fail to provide the high-quality, reliable, and secure software that the Information Technology infrastructure requires.

Software development is not yet a science or a rigorous discipline, and the development process by and large is not controlled to minimize the vulnerabilities that attackers exploit.  Today, as with cancer, vulnerable software can be invaded and modified to cause damage to previously healthy software, and infected software can replicate itself and be carried across networks to cause damage in other systems. Like cancer, these damaging processes may be invisible to the lay person even though experts recognize that their threat is growing. And as in cancer, both preventive actions and research are critical, the former to minimize damage today and the latter to establish a foundation of knowledge and capabilities that will assist the cyber security professionals of tomorrow reduce risk and minimize damage for the long term.

Vulnerabilities in software that are introduced by mistake or poor practices are a serious problem today. In the future, the Nation may face an even more challenging problem as adversaries - both foreign and domestic—become increasingly sophisticated in their ability to insert malicious code into critical software.

The DHS Software Assurance (SwA) program goals promote the security of software across the development life cycle and are scoped to address:


  • Trustworthiness – No exploitable vulnerabilities exist, either maliciously or unintentionally inserted;

  • Predictable Execution – Justifiable confidence that software, when executed, functions as intended;

  • Conformance – Planned and systematic set of multi-disciplinary activities that ensure software processes and products conform to requirements and applicable standards and procedures.

Initiatives such as the DHS “Build Security In” web site at https://buildsecurityin.us-cert.gov and the developers’ guide, entitled Security in the Software Lifecycle:  Making Application Development Processes – and Software Produced by Them – More Secure, will continue to evolve and provide practical guidance and reference material to software developers, architects, and educators on how to improve the quality, reliability, and security of software – and the justification to use it with confidence.

This document, “Software Assurance: A Guide to the Common Body of Knowledge to Produce, Acquire, and Sustain Secure Software,” (referenced in short as the SwA CBK) provides a framework intended to identify workforce needs for competencies, leverage sound practices, and guide curriculum development for education and training relevant to software assurance. A series of CBK working group sessions has involved participation from academia, industry, and the federal government to develop this SwA CBK addressing three domains: “acquisition,” “development,” and post-release “sustainment.” Several disciplines contribute to the SwA CBK, such as software engineering, systems engineering, information systems security engineering, safety, security, testing, information assurance, and project management. While SwA is not a separate profession, SwA processes and practices should contribute to enhancing these contributing disciplines. In education and training, Software Assurance could be addressed as: a “knowledge area” extension within each of the contributing disciplines, a stand-alone CBK drawing upon contributing disciplines, or a set of functional roles drawing upon the CBK, allowing more in-depth coverage dependent upon the specific roles. At a minimum, SwA practices should be integrated within applicable knowledge areas of relevant disciplines.

This SwA CBK is a part of the DHS Software Assurance Series, and it is expected to contribute to the growing Software Assurance community of practice.  This document is intended solely as a source of information and guidance, and is not a proposed standard, directive, or policy from DHS. Because this document will continue to evolve with use and changes in practices, comments on its utility and recommendations for improvement are always welcome.

Joe Jarzombek, PMP

Director for Software Assurance
National Cyber Security Division
Department of Homeland Security

Table of Contents




Please Ensure Proper Acknowledgement 2

How to Make Contact, Find out More, and Contribute 2

NO WARRANTY 2

Authorship and Acknowledgements vii

Editor’s Preface xi

Introduction xi

Some History xi

How to Read this Document xv

Foreword by Joe Jarzombek xvii

Table of Contents xix



1. Introduction 1

1.1 Purpose and Scope 1

1.2 Motivation 1

1.3 Audience 2

1.4 Secure Software 3

1.4.1 Security Properties for Software 3

1.4.2 System Security vs. Software Security 6

1.4.3 Knowledge Needed to Engineer Secure Software 7

1.4.4 Boundaries of Document Scope 8

1.4.5 Software Assurance and Related Disciplines 8

1.5 Selection of References 9

1.5.1 Common References 10


2. Threats and Hazards 13

2.1 Introduction 13

2.2 Dangerous Effects 13

2.3 Attackers 15

2.3.1 Types of Attackers 15

2.3.2 Motivations of Attackers 16

2.4 Methods for Attacks 17

2.4.1 Malicious Code Attacks 17

2.4.2 Hidden Software Mechanisms 18

2.4.3 Lifecycle Process Vulnerabilities that Enable Attacks 19

Initiating electronic intrusion 20

Allowing physical intrusion (combined with electronic) 20

Inserting an agent 20

Corrupting someone already in place 20

Having an insider who is self-motivated 20

Failing to run or report a test 20

Categorizing a vulnerability’s defect report as not a vulnerability 20

2.5 Conclusion 21

2.6 Further Reading 21

2.7 Appendices 22

2.7.1 Appendix A. Social Engineering Attacks 22

2.7.2 Appendix B. Attacks Against Operational Systems 23

2.7.3 Appendix C. Unintentional Events that Can Threaten Software 24

3. Fundamental Concepts and Principles 27

3.1 Introduction 27

3.1.1 Software Security versus Information Security 27

3.1.2 Software Engineering versus System Engineering 28

3.2 Variations in Terms and Meaning 28

3.3 Concepts 30

3.3.1 Dependability 30

3.3.2 Security 30

3.3.3 Assurance 31

3.4 Safety 35

3.4.1 Probability versus Possibility 36

3.4.2 Combining Safety and Security 36

Goals/claims, 36

Assurance arguments, 36

Evidence, 36

3.4.3 Threats, Attacks, Exploits, and Vulnerabilities 36

3.4.4 Stakeholders 37

3.4.5 Assets 37

3.4.6 Security Functionality 38

3.4.7 Security-Related System Architecture Concepts 38

3.5 Basic Security Principles for Software 39

3.5.1 Least Privilege 39

3.5.2 Fail-Safe Defaults 39

3.5.3 Economy of Mechanism 39

3.5.4 Complete Mediation 39

3.5.5 Open Design 39

3.5.6 Separation of Privilege 39

3.5.7 Least Common Mechanism 40

3.5.8 Psychological Acceptability 40

3.5.9 Work Factor 40

3.5.10 Recording of Compromises 40

3.5.11 Defense in Depth 40

3.5.12 Analyzability 40

3.5.13 Treat as Conflict 40

3.5.14 Tradeoffs 41

3.6 Secure Software Engineering 42

3.6.1 Security Risk Management for Software 42

3.6.2 Secure Software Development Activities 43

Processes, 45

Personnel-related activities and skills, 45

Artifact quality. 45

3.7 Further Reading 45

3.8 Appendices 46

3.8.1 Appendix A: Information Security Concepts 46

Deter and mislead attackers, 46

Force attackers to overcome multiple layers of defense, 46

Support investigations to identify and convict attackers. 46

Multi-Level Security (MLS): both a generic term for multiple levels of sensitivity being involved within the same system and a name sometimes given to a specific scheme where all accesses are checked by a single software trusted computing base; 50

Multiple Independent Levels of Security (MILS): bottom separation layer providing information flow and data isolation facilities so higher layers can define and enforce policies themselves [Vanfleet 2005]; 50

Multiple Single Levels of Security (MSLS): no intermixture, 50

Virtual machines, 50

Separation via encryption, 50

Physical separation, 50

Separation except at point of use, 50

Filters, guardians, and firewalls. 50

3.8.2 Appendix B: Security Functionality 53


4. Ethics, Law, and Governance 57

4.1 Scope 57

4.2 Ethics 57

4.3 Law 57

4.4 Governance: Regulatory Policy and Guidance 57

4.4.1 Policy 58

4.4.2 Laws Directly Affecting Policy 58

4.4.3 Standards and Guidance 59

4.4.4 Organizational Security Policies 59

4.5 Further Readings 59


5. Secure Software Requirements 61

5.1 Scope 61

5.2 Requirements for a Solution 61

5.2.1 Traceability 62

5.2.2 Identify Stakeholder Security-related Needs 62

5.2.3 Software Asset Protection Needs 62

Primary memory, 62

Transit, 62

Registers, 62

Caching, 62

Virtual memory paging. 62

5.2.4 Threat Analysis 63

5.2.5 Interface and Environment Requirements 65

5.2.6 Usability Needs 65

5.2.7 Reliability Needs 66

5.2.8 Availability, Tolerance, and Sustainability Needs 66

5.2.9 Obfuscation and Hiding Needs 66

5.2.10 Validatability, Verifiability, and Evaluatability Needs 66

5.2.11 Certification Needs 67

5.2.12 System Security Auditing or Certification and Accreditation Needs 68

5.3 Requirements Analysis 68

5.3.1 Risk Analysis 68

5.3.2 Feasibility Analysis 69

5.3.3 Tradeoff Analysis 69

5.3.4 Analysis of Conflicts among Security Needs 69

5.4 Specification 70

5.4.1 Document Assumptions 70

5.4.2 Specify Software Security Policy 70

5.4.3 Security Functionality Requirements 71

5.4.4 High-Level Requirements Specification 71

5.5 Requirements Validation 72

5.6 Assurance Case 72

5.7 Further Reading 72

6. Secure Software Design 75

6.1 Scope 75

Assurance of design’s agreement with the specifications 75

Its construction 75

6.2 Design Goals 76

Administrative controllability, 76

Tamper resistance; 76

Authentication of the identities of entities external to the system (human users, processes), as the basis for authorizing permissions to those entities, 76

Authorization of permissions to entities, and access control to prevent entities from exceeding those permissions, i.e., in order to perform unauthorized actions (such as gaining access to data or resources to which they are not entitled), 76

Comprehensive accountability of entity actions while interacting with the system, including non-repudiability of entity actions; 76

6.3 Principles and Guidelines for Designing Secure Software 76

6.3.1 General Design Principles for Secure Software Systems 76

6.3.2 Damage Confinement and Resilience 78

Rollback, 79

Fail forward, 79

Compensate; 79

Record secure audit logs and facilitate periodic review to ensure system resources are functioning, reconstruction is possible, and unauthorized use or abuse can be identified, 79

Supports forensics and incident investigations, 79

Helps focus response and reconstitution efforts to those areas that are most in need. 79

6.3.3 Vulnerability Reduction 79

Although a system may be powered down, critical information still resides on the system and could be retrieved by an unauthorized user or organization; 79

At the end of a system’s life-cycle, procedures must be implemented to ensure system hard drives, volatile memory, and other media are purged to an acceptable level and do not retain residual information. 79

6.3.4 Viewpoints and Issues 80

Subversion is the attack mode of choice e.g., subvert people, processes, procedures, testing, repair, tools, infrastructure, protocol, or systems; 80

Understand and enforce the chain of trust; 80

Don’t invoke untrusted programs from within trusted ones. 80

Implement security through a combination of measures distributed physically and logically; 80

Associate all system and network elements with the appropriate security services, and weigh the advantages against the disadvantages of implementing a given service at the network, system, or software level; 80

Authenticate users and processes to ensure appropriate access control decisions; 80

Establish trustworthiness of entities involved in any information flow, and control information flows between system components, and between system and users appropriately. 80

Formulate security measures to address the need to distribute system components across multiple security policy domains. 80

6.4 Documentation of Design Assumptions 80

6.4.1 Environmental Assumptions 81

6.4.2 Internal Assumptions 81

6.5 Documentation of Design Decisions and Rationales 81

6.6 Software Reuse 81

6.7 Architectures for Information Systems 82

6.8 Frameworks 82

6.9 Design Patterns for Secure Software 82

6.10 Specify Configurations 83

6.11 Methods for Attack Tolerance and Recovery 83

6.12 Software Rights Protection 83

6.13 Obfuscation and Hiding 83

6.13.1 Purposes of Obfuscation and Hiding 83

6.13.2 Techniques for Deception 84

6.14 User Interface Design 84

6.15 Assurance Case for Design 84

6.15.1 Design for Easier Modification of Assurance Argument after Software Change 85

6.15.2 Design for Testability 85

6.16 Secure Design Processes and Methods 85

6.17 Design Reviews for Security 86

6.18 Further Reading 86

6.19 Appendix A. System Engineering Techniques for Securing Software 87

6.19.1 Input Filtering Technologies 87

6.19.2 Anti-Tamper Technologies 87

7. Secure Software Construction 89

7.1 Scope 89

7.2 Construction of Code 89

7.2.1 Common Vulnerabilities 89

7.2.2 Using Security Principles in Secure Coding 90

7.2.3 Secure Coding Practices 90

7.2.4 Secure Coding Standards 91

7.2.5 Language Selection 91

7.3 Construction of User Aids 93

Normal documentation includes security aspects and use 94

Operational Security Guide [Viega 2005, p. 33, 98-00] 94

Help desk 94

Online support 94

7.4 “Best Practices” for Secure Coding and Integration 94

Minimize code size and complexity, and increase traceability: this will make the code easy to analyze. 94

Code with reuse and sustainability in mind: this will make code easy to understand by others. 94

Use a consistent coding style throughout the system: this is the objective of the coding standards described in subsection 7.2.4. 94

Make security a criterion when selecting programming languages to be used. 94

Use programming languages securely: avoid “dangerous” constructs, and leverage security features such as “taint” mode in Perl and “sandboxing” in Java. 94

Avoid common, well known logic errors: use input validation, compiler checks to verify correct language usage and flag “dangerous” constructs, code review to ensure conformance to specification, absence of “dangerous” constructs and characters, type checking and static checking, and finally comprehensive security testing to catch more complex defects. 94

Use consistent naming and correct encapsulation. 94

Ensure asynchronous consistency: this will avoid timing and sequence errors, race conditions, deadlocks, order dependencies, synchronization errors, etc. 94

Use multitasking and multithreading safely. 94

Implement error and exception handling safely: a failure in any component of the software should never be allowed to leave the software, its volatile data, or its resources vulnerable to attack. 94

Program defensively: Use techniques such as information hiding and anomaly awareness. 94

Always assume that the seemingly impossible is possible: the history of increased sophistication, technical capability, and motivation of attackers shows that events, attacks, and faults that seem extremely unlikely when the software is written can become quite likely after it has been in operation for a while. Error and exception handling should be programmed explicitly to deal with as many “impossible” events as the programmer can imagine. 94

Make security a criterion when selecting components for reuse or acquisition: before a component is selected, it should undergo the same security analyses and testing techniques that will be used on the final software system. For open source and reusable code, these can include code review. For binary components, including COTS, these will necessarily be limited to “black box” tests, as described in Section 8.4, Testing, except in the rare cases where the binary software will be used for such a critical function that reverse engineering to enable code review may be justified. 94

Analyze multiple assembly options: the combination and sequencing of components that are selected should produce a composed system that results in the lowest residual risk, because it presents the smallest attack surface. Ideally, it will require the fewest add-on countermeasures such as wrappers. 95

Verify the secure interaction of the software with its execution environment: this includes never trusting parameters passed by the environment, separating data from program control, always presuming client/user hostility (thus always validating all input from the client/user), and never allowing the program to spawn a system shell. 95

7.5 Further Reading 95

7.5.1 Secure Programming Languages and Tools 96

7.6 Appendix A. Taxonomy of Coding Errors 96

Buffer Overflows. Buffer overflows are the principal method used to exploit software by remotely injecting malicious code into a target. The root cause of buffer overflow problems is that C and C++ are inherently unsafe. There are no bounds checks on array and pointer references, meaning a developer has to check the bounds (an activity that is often ignored) or risk encountering problems. Reading or writing past the end of a buffer can cause a number of diverse (and often unanticipated) behaviors: (1) programs can act in strange ways, (2) programs can fail completely, and (3) programs can proceed without any noticeable difference in execution. The most common form of buffer overflow, called the stack overflow, can be easily prevented. Stack-smashing attacks target a specific programming fault: the careless use of data buffers allocated on the program's runtime stack. An attacker can take advantage of a buffer overflow vulnerability by stack-smashing and running arbitrary code, such as code that invokes a shell in such a way that control gets passed to the attack code. More esoteric forms of memory corruption, including the heap overflow, are harder to avoid. By and large, memory usage vulnerabilities will continue to be a fruitful resource for exploiting software until modern languages that incorporate modern memory management schemes are in wider use. 97
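To make the stack-smashing pattern described in this entry concrete, here is a minimal C sketch (the function names and buffer size are illustrative, not taken from the guide); the unbounded strcpy() is the defect, and the snprintf() variant is one common remediation:

    #include <stdio.h>
    #include <string.h>

    /* Unsafe: copies attacker-controlled input into a fixed-size stack buffer
       with no bounds check, so long input can overwrite the saved return
       address and redirect control to attacker-supplied code. */
    void greet_unsafe(const char *name) {
        char buf[16];
        strcpy(buf, name);                       /* no length check */
        printf("Hello, %s\n", buf);
    }

    /* Safer: the copy is explicitly bounded by the destination size. */
    void greet_bounded(const char *name) {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", name);   /* truncates instead of overflowing */
        printf("Hello, %s\n", buf);
    }

    int main(int argc, char *argv[]) {
        if (argc > 1)
            greet_bounded(argv[1]);   /* calling greet_unsafe(argv[1]) would be exploitable */
        return 0;
    }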

SQL Injection. SQL injection is a technique used by attackers to take advantage of non-validated input vulnerabilities to pass SQL commands through a Web application for execution by a backend database. Attackers take advantage of the fact that programmers often chain together SQL commands with user-provided parameters, and the attackers, therefore, can embed SQL commands inside these parameters. The result is that the attacker can execute arbitrary SQL queries and/or commands on the backend database server through the Web application. Typically, Web applications use string queries, where the string contains both the query itself and its parameters. The string is built using server-side script languages such as ASP or JSP and is then sent to the database server as a single SQL statement. 97
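As a hedged illustration of the difference between string-built queries and parameterized ones, the following C sketch uses SQLite's C API as a stand-in for whatever backend database an application might use; the table and column names are hypothetical:

    #include <stdio.h>
    #include <sqlite3.h>

    /* Unsafe: the user-supplied value is pasted into the SQL text, so input
       such as  anything' OR '1'='1  changes the meaning of the query. */
    int count_unsafe(sqlite3 *db, const char *name) {
        char sql[256];
        snprintf(sql, sizeof sql,
                 "SELECT COUNT(*) FROM users WHERE name = '%s';", name);
        return sqlite3_exec(db, sql, NULL, NULL, NULL);
    }

    /* Safer: the query text is fixed; the value is bound as data, not code. */
    int count_safe(sqlite3 *db, const char *name) {
        sqlite3_stmt *stmt = NULL;
        int rc = sqlite3_prepare_v2(db,
            "SELECT COUNT(*) FROM users WHERE name = ?;", -1, &stmt, NULL);
        if (rc != SQLITE_OK) return rc;
        sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
        if (sqlite3_step(stmt) == SQLITE_ROW)
            printf("matches: %d\n", sqlite3_column_int(stmt, 0));
        return sqlite3_finalize(stmt);
    }

    int main(void) {
        sqlite3 *db = NULL;
        if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;
        sqlite3_exec(db, "CREATE TABLE users(name TEXT);"
                         "INSERT INTO users VALUES('alice');", NULL, NULL, NULL);
        count_safe(db, "alice");
        sqlite3_close(db);
        return 0;
    }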

Cross-Site Scripting. A cross-site scripting (XSS) vulnerability is caused by the failure of a site to validate user input before returning it to the client’s web-browser. The essence of cross-site scripting is that an intruder causes a legitimate web server to send a page to a victim's browser that contains malicious script or HTML of the intruder's choosing. The malicious script runs with the privileges of a legitimate script originating from the legitimate web server. 97

Integer Overflows. Not accounting for integer overflow can result in logic errors or buffer overflow. Integer overflow errors occur when a program fails to account for the fact that an arithmetic operation can result in a quantity either greater than a data type's maximum value or less than its minimum value. These errors often cause problems in memory allocation functions, where user input intersects with an implicit conversion between signed and unsigned values. If an attacker can cause the program to under-allocate memory or interpret a signed value as an unsigned value in a memory operation, the program may be vulnerable to a buffer overflow. 97
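A brief C sketch of the allocation-size pattern described above; the overflow guard shown is one common idiom rather than the guide's prescription, and the element type is arbitrary:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Unsafe: if count is attacker-controlled, count * sizeof(uint32_t) can
       wrap around, so malloc allocates far less memory than callers expect. */
    uint32_t *alloc_unsafe(size_t count) {
        return malloc(count * sizeof(uint32_t));
    }

    /* Safer: refuse any request whose size computation would overflow. */
    uint32_t *alloc_checked(size_t count) {
        if (count > SIZE_MAX / sizeof(uint32_t))
            return NULL;                      /* multiplication would wrap; reject */
        return malloc(count * sizeof(uint32_t));
    }

    int main(void) {
        uint32_t *p = alloc_checked(1024);
        if (p == NULL) {
            fprintf(stderr, "allocation failed or rejected\n");
            return 1;
        }
        free(p);
        return 0;
    }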

Command Injection. Executing commands that include unvalidated user input can cause an application to act on behalf of an attacker. Command injection vulnerabilities take two forms: (1) An attacker can change the command that the program executes: the attacker explicitly controls what the command is, and (2) An attacker can change the environment in which the command executes: the attacker implicitly controls what the command means. 97
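The first form above can be illustrated with a small C sketch; the allow-list check is a hypothetical mitigation (avoiding the shell entirely is better still), and the command and character set are illustrative:

    #include <ctype.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Dangerous when reached with unvalidated input: the argument is
       interpolated into a shell command, so "x; rm -rf ~" runs extra commands. */
    void show_file_via_shell(const char *filename) {
        char cmd[512];
        snprintf(cmd, sizeof cmd, "cat %s", filename);
        system(cmd);
    }

    /* Partial mitigation: constrain the value to a conservative character set
       before it can ever reach a shell. */
    int is_simple_name(const char *s) {
        for (; *s != '\0'; s++) {
            if (!isalnum((unsigned char)*s) && *s != '.' && *s != '_' && *s != '-')
                return 0;
        }
        return 1;
    }

    int main(int argc, char *argv[]) {
        if (argc > 1 && is_simple_name(argv[1]))
            show_file_via_shell(argv[1]);     /* input constrained before use */
        return 0;
    }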

Call to Thread.run(). The program calls a thread's run() method instead of calling start(). In most cases a direct call to a Thread object's run() method is a bug. The programmer intended to begin a new thread of control, but accidentally called run() instead of start(), so the run() method will execute in the caller's thread of control. 97

Call to a Dangerous Function. Certain functions behave in dangerous ways regardless of how they are used. Functions in this category were often implemented without taking security concerns into account. For example, in C the gets() function is unsafe because it does not perform bounds checking on the size of its input. An attacker can easily send arbitrarily-sized input to gets() and overflow the destination buffer. 97
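A short C illustration of this entry, contrasting the unbounded gets() (shown only in a comment) with the bounded fgets() replacement:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char line[64];

        /* Unsafe (never use): gets() cannot know the buffer size, so any input
           longer than 63 characters overflows 'line'; it was removed in C11.
              gets(line);                                                      */

        /* Safer: fgets() is told the buffer size and stops reading accordingly. */
        if (fgets(line, sizeof line, stdin) != NULL) {
            line[strcspn(line, "\n")] = '\0';   /* strip the trailing newline */
            printf("read: %s\n", line);
        }
        return 0;
    }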

Directory Restriction. The chroot() system call allows a process to change its perception of the root directory of the file system. After properly invoking chroot(), a process cannot access any files outside the directory tree defined by the new root directory. Such an environment is called a chroot jail and is commonly used to prevent the possibility that a process could be subverted and used to access unauthorized files. Improper use of chroot() may allow attackers to escape from the chroot jail. 97
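A minimal POSIX C sketch of entering a chroot jail in the conventional order (chroot, then chdir to the new root, then drop privileges); the jail path is a placeholder, and the example assumes the process starts with root privilege:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        const char *jail = "/var/empty";    /* placeholder: prepared jail directory */

        if (chroot(jail) != 0) {            /* requires root privilege */
            perror("chroot");
            return EXIT_FAILURE;
        }
        if (chdir("/") != 0) {              /* without this, the working directory
                                               may still point outside the jail */
            perror("chdir");
            return EXIT_FAILURE;
        }
        /* Drop root so the process cannot simply chroot() its way back out.
           (Assumes it was started setuid-root by an unprivileged user.) */
        if (setgid(getgid()) != 0 || setuid(getuid()) != 0) {
            perror("drop privileges");
            return EXIT_FAILURE;
        }
        puts("running inside the jail with reduced privileges");
        return EXIT_SUCCESS;
    }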

Use of java.io. The Enterprise JavaBeans specification requires that every bean provider follow a set of programming guidelines designed to ensure that the bean will be portable and behave consistently in any EJB container. The program violates the Enterprise JavaBeans specification by using the java.io package to attempt to access files and directories in the file system. 98

Use of Sockets. The Enterprise JavaBeans specification requires that every bean provider follow a set of programming guidelines designed to ensure that the bean will be portable and behave consistently in any EJB container. The program violates the Enterprise JavaBeans specification by using sockets. An enterprise bean must not attempt to listen on a socket, accept connections on a socket, or use a socket for multicast. 98

Authentication. Security should not rely on DNS names, because attackers can spoof DNS entries. If attackers can make a DNS update (DNS cache poisoning), they can route network traffic through their machines or make it appear as if their IP addresses are part of your domain. If an attacker can poison the DNS cache, they can gain trusted status. 98

Exception Handling. The _alloca function allocates dynamic memory on the stack. The allocated space is freed automatically when the calling function exits, not when the allocation merely passes out of scope. The _alloca() function can throw a stack overflow exception, potentially causing the program to crash. If an allocation request is too large for the available stack space, _alloca() throws an exception. If the exception is not caught, the program will crash, potentially enabling a denial of service attack. 98

Privilege Management. Failure to adhere to the principle of least privilege amplifies the risk posed by other vulnerabilities. Programs that run with root privileges have caused innumerable Unix security disasters. 98

Strings. Functions that convert between Multibyte and Unicode strings often result in buffer overflows. Windows provides the MultiByteToWideChar(), WideCharToMultiByte(), UnicodeToBytes, and BytesToUnicode functions to convert between arbitrary multibyte (usually ANSI) character strings and Unicode (wide character) strings. The size arguments to these functions are specified in different units – one in bytes, the other in characters – making their use prone to error. In a multibyte character string, each character occupies a varying number of bytes, and therefore the size of such strings is most easily specified as a total number of bytes. In Unicode, however, characters are always a fixed size, and string lengths are typically given by the number of characters they contain. Mistakenly specifying the wrong units in a size argument can lead to a buffer overflow. 98

Unchecked Return Value. Ignoring a method's return value can cause the program to overlook unexpected states and conditions. Two dubious assumptions that are easy to spot in code are “this function call can never fail” and “it doesn't matter if this function call fails”. When a programmer ignores the return value from a function, they implicitly state that they are operating under one of these assumptions. 98
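A small C example of checking return values that are often ignored; the file name is illustrative:

    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("audit.log", "w");      /* illustrative file name */
        if (f == NULL) {                        /* fopen can and does fail */
            perror("fopen");
            return 1;
        }
        const char msg[] = "event recorded\n";
        if (fwrite(msg, 1, sizeof msg - 1, f) != sizeof msg - 1) {
            perror("fwrite");                   /* a short write matters */
        }
        if (fclose(f) != 0) {                   /* flush errors surface here */
            perror("fclose");
            return 1;
        }
        return 0;
    }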

Least Privilege Violation. The elevated privilege level required to perform operations such as chroot() should be dropped immediately after the operation is performed. When a program calls a privileged function, such as chroot(), it must first acquire root privilege. As soon as the privileged operation has completed, the program should drop root privilege and return to the privilege level of the invoking user. If this does not occur, a successful exploit can be carried out by an attacker against the application, resulting in a privilege escalation attack because any malicious operations will be performed with the privileges of the superuser. If the application drops to the privilege level of a non-root user, the potential for damage is substantially reduced. 98
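One minimal POSIX C sketch of performing a privileged operation and dropping root immediately afterward; the choice of port 80 and the assumption that the program was started setuid-root are illustrative:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        /* Privileged step: binding a port below 1024 requires root. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(80);              /* illustrative privileged port */
        if (bind(fd, (struct sockaddr *)&addr, sizeof addr) != 0) {
            perror("bind");
            return 1;
        }

        /* Drop root immediately; everything after this runs unprivileged, so a
           later compromise does not yield superuser access. */
        if (setgid(getgid()) != 0 || setuid(getuid()) != 0) {
            perror("drop privileges");
            return 1;
        }
        puts("privileged setup done; continuing as an ordinary user");
        close(fd);
        return 0;
    }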

Hardcoded Password. Hardcoded passwords may compromise system security in a way that cannot be easily remedied. Once the code is in production, the password cannot be changed without patching the software. If the account protected by the password is compromised, the owners of the system will be forced to choose between security and availability. 98

Weak Cryptography. Obscuring a password with a trivial encoding, such as base64 encoding, does not adequately protect the password. 98

Insecure Randomness. Standard pseudo-random number generators cannot withstand cryptographic attacks. Insecure randomness errors occur when a function that can produce predictable values is used as a source of randomness in a security-sensitive context. Pseudo-Random Number Generators (PRNGs) approximate randomness algorithmically, starting with a seed from which subsequent values are calculated. There are two types of PRNGs: statistical and cryptographic. Statistical PRNGs provide useful statistical properties, but their output is highly predictable and forms an easy to reproduce numeric stream that is unsuitable for use in cases where security depends on generated values being unpredictable. Cryptographic PRNGs address this problem by generating output that is more difficult to predict. For a value to be cryptographically secure, it must be impossible or highly improbable for an attacker to distinguish between it and a truly random value. 98
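A C illustration of the contrast drawn above, using the Unix kernel entropy device as one common source of cryptographically stronger values; the token sizes are arbitrary:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        /* Statistical PRNG: rand() seeded with the clock is trivially
           predictable, so it must not generate session IDs, tokens, or keys. */
        srand((unsigned)time(NULL));
        unsigned weak_token = (unsigned)rand();

        /* Stronger on Unix-like systems: read unpredictable bytes from the
           kernel's entropy source. */
        unsigned char strong_token[16];
        FILE *urandom = fopen("/dev/urandom", "rb");
        if (urandom == NULL ||
            fread(strong_token, 1, sizeof strong_token, urandom) != sizeof strong_token) {
            fprintf(stderr, "could not obtain secure random bytes\n");
            return 1;
        }
        fclose(urandom);

        printf("weak:   %u\nstrong: ", weak_token);
        for (size_t i = 0; i < sizeof strong_token; i++)
            printf("%02x", strong_token[i]);
        printf("\n");
        return 0;
    }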

Race Conditions. The window of time between when a file property is checked and when the file is used can be exploited to launch a privilege escalation attack. File access race conditions, known as time-of-check, time-of-use (TOCTOU) race conditions, occur when: (1) the program checks a property of a file, referencing the file by name, and (2) the program later performs a filesystem operation using the same filename and assumes that the previously-checked property still holds. The window of vulnerability for such an attack is the period of time between when the property is tested and when the file is used. Even if the use immediately follows the check, modern operating systems offer no guarantee about the amount of code that will be executed before the process yields the CPU. Attackers have a variety of techniques for expanding the length of the window of opportunity in order to make exploits easier, but even with a small window, an exploit attempt can simply be repeated over and over until it is successful. 99
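A condensed POSIX C sketch of the check-then-use race and of narrowing it by examining the descriptor actually opened; the path is illustrative:

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        const char *path = "/tmp/report.txt";   /* illustrative path */

        /* Racy: between access() and open(), an attacker can replace the file,
           for example with a symbolic link to a file they could not otherwise read. */
        if (access(path, R_OK) == 0) {
            int racy_fd = open(path, O_RDONLY);
            if (racy_fd >= 0) close(racy_fd);
        }

        /* Narrower: open first, then check the object actually opened with
           fstat() on the descriptor, so the check and the use refer to the same file. */
        int fd = open(path, O_RDONLY | O_NOFOLLOW);
        if (fd >= 0) {
            struct stat st;
            if (fstat(fd, &st) == 0 && S_ISREG(st.st_mode)) {
                /* safe to read from fd here */
            }
            close(fd);
        }
        return 0;
    }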

Insecure Temporary Files. Creating and using insecure temporary files can leave application and system data vulnerable to attacks. The most egregious security problems related to temporary file creation have occurred on Unix-based operating systems, but Windows applications have parallel risks. The C Library and WinAPI functions designed to aid in the creation of temporary files can be broken into two groups based on whether they simply provide a filename or actually open a new file. In the former case, although the functions guarantee that the filename is unique at the time it is selected, there is no mechanism to prevent another process or an attacker from creating a file with the same name after it is selected but before the application attempts to open the file. Beyond the risk of a legitimate collision caused by another call to the same function, there is a high probability that an attacker will be able to create a malicious collision because the filenames generated by these functions are not sufficiently randomized to make them difficult to guess. In the latter case, if a file with the selected name is created, then depending on how the file is opened the existing contents or access permissions of the file may remain intact. If the existing contents of the file are malicious in nature, an attacker may be able to inject dangerous data into the application when it reads data back from the temporary file. If an attacker pre-creates the file with relaxed access permissions, then data stored in the temporary file by the application may be accessed, modified or corrupted by an attacker. On Unix-based systems an even more insidious attack is possible if the attacker pre-creates the file as a link to another important file. Then, if the application truncates or writes data to the file, it may unwittingly perform damaging operations for the attacker. This is an especially serious threat if the program operates with elevated permissions. 99
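A POSIX C sketch contrasting the predictable-name approach (left as a comment) with mkstemp(), which creates and opens the file in one step; the path prefix is illustrative:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        /* Risky: tmpnam() only proposes a name; an attacker who guesses or
           pre-creates it (perhaps as a symlink) wins the race before fopen().
              char *name = tmpnam(NULL);
              FILE *f = fopen(name, "w");                                      */

        /* Safer: mkstemp() fills in the XXXXXX, creates the file exclusively
           with mode 0600, and returns an already-open descriptor. */
        char template_path[] = "/tmp/swacbk-XXXXXX";   /* illustrative prefix */
        int fd = mkstemp(template_path);
        if (fd < 0) {
            perror("mkstemp");
            return 1;
        }
        const char data[] = "scratch data\n";
        if (write(fd, data, sizeof data - 1) < 0)
            perror("write");
        close(fd);
        unlink(template_path);      /* remove the scratch file when done */
        return 0;
    }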

Session Fixation. Authenticating a user without invalidating any existing session identifier gives an attacker the opportunity to steal authenticated sessions. Session fixation vulnerabilities occur when: (1) a web application authenticates a user without first invalidating the existing session, thereby continuing to use the session already associated with the user, and (2) an attacker is able to force a known session identifier on a user so that, once the user authenticates, the attacker has access to the authenticated session. 99

Empty Catch Block. Ignoring an exception can cause the program to overlook unexpected states and conditions. Two dubious assumptions that are easy to spot in code are “this method call can never fail” and “it doesn't matter if this call fails”. When a programmer ignores an exception, they implicitly state that they are operating under one of these assumptions. 99

Catching NullPointerExceptions. It is generally a bad practice to catch NullPointerException. Programmers typically catch NullPointerException under three circumstances: (1) the program contains a null pointer dereference. Catching the resulting exception was easier than fixing the underlying problem, (2) the program explicitly throws a NullPointerException to signal an error condition, and (3) the code is part of a test harness that supplies unexpected input to the classes under test. Of these three circumstances, only the last is acceptable. 100

Overly Broad Catch. The catch block handles a broad swath of exceptions, potentially trapping dissimilar issues or problems that should not be dealt with at this point in the program. Multiple catch blocks can get ugly and repetitive, but “condensing” catch blocks by catching a high-level class like Exception can obscure exceptions that deserve special treatment or that should not be caught at this point in the program. Catching an overly broad exception essentially defeats the purpose of Java's typed exceptions, and can become particularly dangerous if the program grows and begins to throw new types of exceptions. The new exception types will not receive any attention. 100

Overly Broad Throws. The method throws a generic exception, making it harder for callers to do a good job of error handling and recovery. Declaring a method to throw Exception or Throwable makes it difficult for callers to do good error handling and error recovery. Java's exception mechanism is set up to make it easy for callers to anticipate what can go wrong and write code to handle each specific exceptional circumstance. Declaring that a method throws a generic form of exception defeats this system. 100

Return Inside Finally. Returning from inside a finally block will cause exceptions to be lost. A return statement inside a finally block will cause any exception that might be thrown in the try block to be discarded. 100

Expression is Always False. An expression will always evaluate to false. 100

Expression is Always True. An expression will always evaluate to true. 100

Memory Leak. Memory is allocated but never freed. Memory leaks have two common and sometimes overlapping causes: (1) error conditions and other exceptional circumstances and, (2) confusion over which part of the program is responsible for freeing the memory. Most memory leaks result in general software reliability problems, but if an attacker can intentionally trigger a memory leak, the attacker might be able to launch a denial of service attack (by crashing the program) or take advantage of other unexpected program behavior resulting from a low memory condition. 100

Null Dereference. The program can potentially dereference a null pointer, thereby raising a NullPointerException. Null pointer errors are usually the result of one or more programmer assumptions being violated. Most null pointer issues result in general software reliability problems, but if an attacker can intentionally trigger a null pointer dereference, the attacker might be able to use the resulting exception to bypass security logic or to cause the application to reveal debugging information that will be valuable in planning subsequent attacks. 100

Uninitialized Variable. The program can potentially use a variable before it has been initialized. Stack variables in C and C++ are not initialized by default. Their initial values are determined by whatever happens to be in their location on the stack at the time the function is invoked. Programs should never use the value of an uninitialized variable. 100

Unreleased Resource. The program can potentially fail to release a system resource. Most unreleased resource issues result in general software reliability problems, but if an attacker can intentionally trigger a resource leak, the attacker might be able to launch a denial of service attack by depleting the resource pool. Resource leaks have at least two common causes: (1) error conditions and other exceptional circumstances and (2) confusion over which part of the program is responsible for releasing the resource. 100

Use After Free. Referencing memory after it has been freed can cause a program to crash. Use after free errors occur when a program continues to use a pointer after it has been freed. Like double free errors and memory leaks, use after free errors have two common and sometimes overlapping causes: (1) error conditions and other exceptional circumstances, and (2) confusion over which part of the program is responsible for freeing the memory. Use after free errors sometimes have no effect and other times cause a program to crash. 100
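A minimal C sketch of the defect; the nulling-on-release mitigation shown is one common convention, not the only defense:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *session = malloc(32);
        if (session == NULL) return 1;
        strcpy(session, "user=alice");

        free(session);

        /* Use after free: 'session' still holds the old address, but the memory
           may already be reused; dereferencing it here is undefined behavior.
              printf("%s\n", session);                                          */

        /* Mitigation: null the pointer at the point of release so a later use
           fails fast and visibly instead of silently misbehaving. */
        session = NULL;
        if (session != NULL)
            printf("%s\n", session);
        return 0;
    }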

Double Free. Double free errors occur when free() is called more than once with the same memory address as an argument. Calling free() twice on the same value can lead to a buffer overflow. When a program calls free() twice with the same argument, the program's memory management data structures become corrupted. This corruption can cause the program to crash or, in some circumstances, cause two later calls to malloc() to return the same pointer. If malloc() returns the same value twice and the program later gives the attacker control over the data that is written into this doubly-allocated memory, the program becomes vulnerable to a buffer overflow attack. 101
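A short C sketch of the double-free pattern and of a hypothetical release() helper that makes a second call harmless by nulling the pointer:

    #include <stdlib.h>

    /* Hypothetical helper: free through a pointer-to-pointer and null it,
       so an accidental repeat call becomes a no-op (free(NULL) does nothing). */
    static void release(char **p) {
        free(*p);
        *p = NULL;
    }

    int main(void) {
        char *buf = malloc(64);
        if (buf == NULL) return 1;

        /* Bug pattern: an error path and the normal cleanup path each call
           free(buf), corrupting the allocator's bookkeeping.
              free(buf);
              free(buf);                                                        */

        release(&buf);   /* first release frees the memory */
        release(&buf);   /* second call is now harmless */
        return 0;
    }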

Leftover Debug Code. Debug code can create unintended entry points in a deployed web application. A common development practice is to add “back door” code specifically designed for debugging or testing purposes that is not intended to be shipped or deployed with the application. When this sort of debug code is accidentally left in the application, the application is open to unintended modes of interaction. These back door entry points create security risks because they are not considered during design or testing and fall outside of the expected operating conditions of the application. The most common example of forgotten debug code is a main() method appearing in a web application. Although this is an acceptable practice during product development, classes that are part of a production J2EE application should not define a main(). 101

Trust Boundary Violation. A trust boundary can be thought of as line drawn through a program. On one side of the line, data is untrusted. On the other side of the line, data is assumed to be trustworthy. The purpose of validation logic is to allow data to safely cross the trust boundary--to move from untrusted to trusted. A trust boundary violation occurs when a program blurs the line between what is trusted and what is untrusted. The most common way to make this mistake is to allow trusted and untrusted data to commingle in the same data structure. 101

Unsafe Mobile Code: Access Violation. The program violates secure coding principles for mobile code by returning a private array variable from a public access method. Returning a private array variable from a public access method allows the calling code to modify the contents of the array, effectively giving the array public access and contradicting the intentions of the programmer who made it private. 101

Unsafe Mobile Code: Inner Class. The program violates secure coding principles for mobile code by making use of an inner class. Inner classes quietly introduce several security concerns because of the way they are translated into Java bytecode. In Java source code, it appears that an inner class can be declared to be accessible only by the enclosing class, but Java bytecode has no concept of an inner class, so the compiler must transform an inner class declaration into a peer class with package level access to the original outer class. More insidiously, since an inner class can access private fields in their enclosing class, once an inner class becomes a peer class in bytecode, the compiler converts private fields accessed by the inner class into protected fields. 101

Unsafe Mobile Code: Public finalize() Method. The program violates secure coding principles for mobile code by declaring a finalize() method public. A program should never call finalize explicitly, except to call super.finalize() inside an implementation of finalize(). In mobile code situations, the otherwise error prone practice of manual garbage collection can become a security threat if an attacker can maliciously invoke one of your finalize() methods because it is declared with public access. If you are using finalize() as it was designed, there is no reason to declare finalize() with anything other than protected access. 101

Unsafe Mobile Code: Dangerous Array Declaration. The program violates secure coding principles for mobile code by declaring an array public, final and static. In most cases an array declared public, final and static is a bug. Because arrays are mutable objects, the final constraint requires that the array object itself be assigned only once, but makes no guarantees about the values of the array elements. Since the array is public, a malicious program can change the values stored in the array. In most situations the array should be made private. 101

Unsafe Mobile Code: Dangerous Public Field. The program violates secure coding principles for mobile code by declaring a member variable public but not final. All public member variables in an Applet and in classes used by an Applet should be declared final to prevent an attacker from manipulating or gaining unauthorized access to the internal state of the Applet. 102



8. Secure Software Verification, Validation, and Evaluation 103

8.1 Scope 103

8.2 Assurance Case 103

Agrees with top-level specification and security policy 105

Contains only items called for in top-level specification 105

8.3 Ensure Proper Version 105

8.4 Testing 106

8.4.1 Test Process 106

8.4.2 Test Techniques 106

8.5 Static Analysis 108

8.5.1 Formal Analysis and Verification 109

8.5.2 Static Code Analysis 109

8.6 Dynamic Analysis 110

8.6.1 Simulations 110

8.6.2 Prototypes 110

8.6.3 Mental Executions 110

8.6.4 Dynamic Identification of Assertions and Slices 110

8.7 Informal Analysis, Verification, and Validation 110

8.7.1 Reviews and Audits 110

8.8 Usability Analysis 111

8.9 Verification and Validation of User Aids 111

Normal documentation covers security aspects and use 111

Additional operational security guide 111

Help desk 111

Online support 111

8.10 Secure Software Measurement 111

8.11 Independent Verification and Validation and Evaluation 113

8.11.1 Independent Verification and Validation 113

8.11.2 Independent Product Certifications and Evaluations 113

8.11.3 Commercial Software Security Assessments and Audits 114

8.11.4 Government System Accreditations 114

8.12 Assurance for Tools 114

8.13 Selecting among VV&E Techniques 115

8.14 Further Reading 115


9. Secure Software Tools and Methods 117

9.1 Scope 117

9.2 Secure Software Methods 117

9.2.1 Formal Methods 117

9.2.2 Semi-formal Methods 118

9.3 Secure Software Tools 118

9.3.1 Software Construction Tools 118

9.3.2 Software Testing Tools 119

9.3.3 Miscellaneous Tool Issues 119

9.3.4 Tool Evaluation 120

9.4 Further Reading 121

10. Secure Software Processes 123

10.1 Formal and Semi-Formal Methods 124

10.2 Security-Enhanced Processes 124

10.3 Legacy Software Upgrade Processes 125

10.4 Concern for Security of Developmental Process 125

10.5 Improving Processes for Developing Secure Software 126

10.5.1 Introducing Secure Software Engineering Processes 126

10.5.2 Improving Software Engineering Processes to Add Security 126

10.6 Further Reading 128

11. Secure Software Engineering Management 129

11.1 Introduction 129

11.2 Start Up 130

11.3 Scoping Project 130

11.4 Project Risk Management 130

11.5 Selecting a Secure Software Process 130

11.6 Security Management 131

11.6.1 Personnel Management 131

11.6.2 Development Work Environment 131

11.6.3 Using Software from Outside the Project 132

11.7 Secure Release 132

11.7.1 Assuring Security Level of Software Shipped 132

11.8 Secure Configuration Management 132

11.8.1 Using CM to Prevent Malicious Code Insertion During Development 133

11.9 Software Quality Assurance and Security 134

11.10 Further Reading 134

11.10.1 Reading for Secure Software Engineering Management 134

12. Secure Software Sustainment 137

12.1 Introduction 137

12.2 Background 138

12.2.1 Types of Response 138

12.2.2 Representation 139

12.3 Operational Assurance (Sensing) 139

12.3.1 Initiation 139

12.3.2 Operational Testing 140

12.3.3 Environmental Monitoring 140

12.3.4 Incident Reporting 140

12.3.5 Reporting Vulnerabilities 140

12.3.6 Operational Process Assurance 140

12.3.7 Assurance Case Evidence for Operational Assurance 140

Documenting the precise steps taken to build awareness of correct practice, including a formal employee education and training program 141

Documenting each employee’s specific education, training, and awareness activities 141

Documenting the explicit enforcement requirements and consequences for non-compliance for every job title 141

Specification and evidence of personal agreement to the consequences for non-compliance 141

12.3.8 Reading for Operational Assurance 141

12.3.9 List of Standards 142

12.4 Analysis 142

12.4.1 Understanding 143

12.4.2 Impact Analysis 143

12.4.3 Reporting 144

12.4.4 Pertinent Reading for Sustainment Analysis 144

12.4.5 Related Readings 145

12.4.6 List of Standards 145

12.5 Response Management (Responding) 145

12.5.1 Responding to Known Vulnerabilities 146

12.5.2 Change Control 147

12.5.3 Configuration Management 148

12.5.4 Periodic Security Reviews: Repeat Audits or Certifications and Accreditations 149

12.5.5 Secure Migration, Retirement, Loss, and Disposal 149

Transition of the software or system [ISO04, p.33] is accomplished in a safe and secure fashion. 149

The transition process is confirmed effective and correct. 149

The proper functioning of the software, or system after transition [ISO04, p.33]. 149

The effectiveness of the transition is confirmed and certified by a satisfactory verification and validation procedure. 149

The results of the transition process are documented and retained. 149

The integrity of the software or system is confirmed after transition [ISO04, p.33]. 149

All software operation and data integrity are confirmed by an appropriate set of measures. 149

The results of the program and data integrity checking process are documented and retained. 149

Software or system documentation accurately reflects the changed state. 149

12.5.6 List of Further Readings for Response Management 149

12.5.7 List of Standards 150

12.6 Operational Assurance 150

12.6.1 Security Architecture 150

12.6.2 Policy, Process, and Methodology Assurance 151

12.6.3 Assurance Case Evidence for Operational Assurance 151

Documentation of assumptions about current, known risks and threats 151

Documentation of organization-wide standards, or standard practices 151

Specification of the technologies and products that will be utilized during the planning period, along with the method for installing, maintaining, and operating them on a secure basis 152

Evidence that the information sharing process is revised and updated as the security situation changes. 152

12.6.4 Pertinent Reading for Operational Assurance 152

12.6.5 Related Reading for Operational Assurance 152

12.6.6 List of Standards 153



13. Acquisition of Secure Software 155

13.1 Introduction 155

13.2 Concepts, Terms, and Definitions 155

13.2.1 Acquisition 155

13.2.2 Off the Shelf Software 155

13.2.3 Information Assurance Architecture 155

13.2.4 US National Information Assurance Partnership (NIAP) 156

13.2.5 Security Accreditation 156

13.2.6 Security Certification 156

13.3 Program Initiation and Planning--Acquirer 156

13.3.1 Scope 156

13.3.2 Determining the Need (Requirements) and Solution Approaches 157

13.3.3 Making the Decision to Contract 157

13.3.4 Risk Management 157

13.4 Acquisition and Software Reuse – Acquirer/Supplier 159

13.4.1 Scope 159

13.4.2 Reusable Software in the Acquisition Process 159

13.4.3 Acquirer Only 159

13.4.4 Supplier Software Reuse as Part of Acquirer’s Solution 160

13.4.5 Evaluating Reusable Software 160

13.5 Request for Proposals—Acquirer 160

13.5.1 Scope 160

13.5.2 Software Assurance Terms and Conditions 160

Security violations. These can occur even when the software is performing correctly. Examples include backdoors (which software companies may include to assist customers), malicious code, etc. 161

Allocating contractual risk and responsibility. Acquirers may wish to include clauses to address allocating responsibility for integrity, confidentiality, and availability. Consideration should also be given to using guarantees, warranties, and liquidated damages (i.e., providing for the supplier to compensate the acquirer for losses or damage resulting from security issues). 161

Remediation. As used here, remediation is the supplier’s process of tracking and correcting software security flaws. The terms and conditions should require the supplier to have a procedure acceptable to the acquirer for acting on reports of software security flaws [NIST Special Pub 800-64; Rasmussen 2004]. 161

Access, identification and authentication, auditing, cryptography (data authentication, digital signature, key management, security of cryptographic modules [FIPS PUB 140-2], cryptographic validations), software integrity, software architecture, and media sanitization 161

Non-bypassability and self-protection of security functionality 161

13.5.3 Software Assurance and the Common Criteria in the Acquisition Process 162

13.5.4 Software Assurance Measures and Metrics in the Acquisition Process 162

13.5.5 Software Assurance Language for a Statement of Work to Develop Secure Software, Including Incentives 162

13.5.6 Develop Software Assurance Language for a Statement of Work to Acquire COTS or Commercial Items 163

13.5.7 Software Assurance Language for the Instructions to Suppliers 163

13.6 Preparation of Response--Supplier 163

13.6.1 Scope 163

13.6.2 Initial Software Architecture 164

13.6.3 Initial Software Assurance Plan 164

13.7 Source Selection–Acquirer 164

13.7.1 Scope 164

13.7.2 Develop Software Assurance Evaluation Criteria 165

13.7.3 Software Assurance in the Source Selection Plan 165

13.8 Contract Negotiation and Finalization 165

13.8.1 Scope 165

13.8.2 Contract Negotiations 165

13.9 Project/Contract Management—Acquirer/Supplier 165

13.9.1 Scope 165

13.9.2 Project/Contract Management 166

13.10 Further Reading 166

13.11 Appendices 167

13.11.1 APPENDIX A: NOTIONAL Language for the Statement of Work 167

1.1 Key definitions: 167

1.1.1 “Secure software” means “highly secure software realizing – with justifiably high confidence but not guaranteeing absolutely – a substantial set of explicit security properties and functionality including all those required for its intended usage.” [Redwine 2004, p. 2] One can also state this in a negative way as “justifiably high confidence that no software-based vulnerabilities exist that the system is not designed to tolerate.” That definition incorporates the appropriate software security controls for a software-intensive system’s security category to meet software security objectives. 167

1.1.2 “Software security controls” mean the management, operational, and technical controls (i.e., safeguards and/or countermeasures) prescribed for a software information system to protect the confidentiality, integrity, and availability of the system and its information. 167

1.1.3 “Security category” means the characterization of information or an information system based on an assessment of the potential impact that a loss of confidentiality, integrity, or availability of such information or information system would have on organizational operations, organizational assets, or individuals. 167

1.1.4 “Software security objectives” means confidentiality, integrity, availability, authenticity, accountability, and non-repudiation. 167

1.1.5 “Software assurance case” means a reasoned, auditable argument created to support the contention that the defined software-intensive system will satisfy software security requirements and objectives. 167

1.1.6 Include other appropriate definitions--- 167

1.2 Security Category [NOTE: This is an example; also see FIPS Pub 199 and DoDI 8500.2, Enclosure 4.]: 167

1.2.1 This software system is used for large procurements in a contracting organization and contains both sensitive and proprietary supplier information and routine administrative information. For the sensitive supplier information, the potential impact from a loss of confidentiality is moderate (e.g., the loss may result in a significant financial loss), the potential impact from a loss of integrity is moderate (e.g., the loss may significantly reduce the effectiveness of the contracting mission and significantly damage the information asset), and the potential impact from a loss of availability is low (e.g., the loss may result in downtime, but backups exist). For the routine administrative information, the potential impact from a loss of confidentiality is low, the impact from a loss of integrity is low, and the impact from a loss of availability is low. 167

1.2.2 Based on 2.1, the resulting security category of the software system is {(confidentiality, moderate), (integrity, moderate), (availability, low)}. 167

1.3 Software Security Requirements. Based on the security category for the software system, the minimum security requirements specified in [NOTE: Reference the external document(s)] are required. [NOTE: Minimum security controls may be specified in this paragraph or in an external document similar to FIPS Pub 200; NIST SP 800-53; and DoDI 8500.2, Enclosure 4.] 167

1.4 Software Assurance Case. The contractor shall refine the Software Assurance Case throughout the development process. This assurance case should be based on the software security requirements. The contractor shall submit the case for review [NOTE: Specify when the case should be reviewed, such as when submitting the software design, etc.]. Lastly, the successful execution of the Software Assurance Case shall be a condition for final acceptance of the software. 167

1.5 Auditing the Code. The supplier shall have independent verification and validation (V&V) performed on the code to determine the security posture of the code. This verification and validation shall be performed by a qualified [NOTE: specify what “qualified” means] software assurance V&V entity. [NOTE: Also see the “Secure Software Verification, Validation, and Evaluation” section in this CBK.] 167

1.6 Software Assurance Practices. The supplier shall use software assurance practices in accordance with [NOTE: either explain those practices or provide a reference document]. 168

1.7 Software Assurance Plan. The supplier shall refine, throughout the life cycle of this software development work, the Software Assurance Plan that was submitted with the supplier’s proposal. The Software Assurance Plan shall be submitted to the acquirer [XX] days after each development milestone for review. [NOTE: Include how often this should be delivered. As a suggestion, the revisions to this plan should be submitted at key milestones. Such milestones might be after requirements analysis, after software architectural design, after detailed software design, and after coding and testing. See the “Development” section in this CBK. Also, see ISO/IEC 12207, 5.3.] This plan shall include but not be limited to: [State what is to be included. See Section 13.6, Preparation of Response—Supplier, in this section.] 168

1.8 Software Assurance Risk Management. The supplier shall maintain a formal software assurance risk management program. Within [XX] days of the award of the contract, the supplier shall deliver a Software Assurance Risk Management Plan to the acquirer for review. [NOTE: This could be a section in the Software Assurance Plan.] 168

13.11.2 APPENDIX B: NOTIONAL Language for Instructions to Suppliers 168

14. Tips on Using this Body of Knowledge 170

14.1 Purpose and Scope 170

14.2 General Considerations 170

14.3 Use for Learning 171

14.3.1 Use in Higher Education Instruction 171

14.3.2 Use in Training 172

14.3.3 Some Topics that May Require Special Attention from Instructors 174

14.3.4 Training Educators and Trainers 176

14.3.5 Education and Training Literature 176

14.3.6 Use by Practitioners for Professional Development 177

14.4 Using to Develop Standards and Guidelines 177

14.4.1 Curriculum 178

14.4.2 Bodies of Knowledge 178

14.4.3 Professional Personnel 178

14.4.4 Professional Practice and Processes 178

14.4.5 Product Evaluation 179

14.5 Use in Evaluation and Testing 179

14.6 Tips on Using the Acquisition Section 179

14.6.1 Introduction 179

14.6.2 About the Sample Language 179

14.6.3 Software Acquisition Education and Training 180

14.6.4 Standards Developers 180

14.6.5 Buyers and Suppliers 180

14.7 Final Remark 180

14.8 Further Reading 180



