Saltzer and Schroeder published their list of principles in 1975, and they remain valid [Saltzer and Schroeder 1975]. Everyone involved in any way with secure software systems needs to be aware of them. Most of the list below follows the principles proposed in [Saltzer and Schroeder 1975] and liberally quotes edited selections from that text. [Viega and McGraw 2001] has a particularly accessible discussion of many of them. These principles have relevance throughout secure software development and sustainment, including requirements; design; construction; and verification, validation, and evaluation.
3.5.1 Least Privilege
Least privilege is a principle whereby each entity (user, process, or device) is granted the most restrictive set of privileges needed for the performance of that entity’s authorized tasks. Application of this principle limits the damage that can result from accident, error, or unauthorized use of a system. Least privilege also reduces the number of potential interactions among privileged processes or programs, so that unintentional, unwanted, or improper uses of privilege are less likely to occur.
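A minimal sketch of least privilege in code, using role-based permission sets (the roles, actions, and function names here are invented for illustration, not taken from the text):

```python
# Hypothetical roles, each granted only the privileges its tasks require.
ROLE_PRIVILEGES = {
    "log-reader":  {"read"},
    "log-rotator": {"read", "delete"},
    "admin":       {"read", "write", "delete"},
}

def privileges_for(role):
    # An unknown role gets the empty set: no privilege is the starting point.
    return ROLE_PRIVILEGES.get(role, set())

def perform(role, action):
    # Refuse any action outside the role's minimal privilege set.
    if action not in privileges_for(role):
        raise PermissionError(f"role {role!r} may not {action!r}")
    return f"{action} ok"
```

Because the "log-reader" role never holds the "write" or "delete" privileges, a bug or compromise in log-reading code cannot damage the logs, which is precisely the damage-limiting effect the principle describes.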
3.5.2 Fail-Safe Defaults
This principle calls for basing access decisions on permission rather than exclusion. Thus, the default situation is lack of access, and the protection scheme identifies conditions under which access is permitted. To be conservative, a design must be based on arguments stating why objects should be accessible, rather than why they should not.
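The permission-rather-than-exclusion idea can be shown with a default-deny access check (users, objects, and rules below are illustrative assumptions):

```python
# Explicit allow rules; anything not listed is denied by default.
ALLOW_RULES = {
    ("alice", "report.txt", "read"),
    ("alice", "report.txt", "write"),
    ("bob",   "report.txt", "read"),
}

def is_allowed(user, obj, action):
    # The protection scheme identifies conditions under which access is
    # permitted; the absence of a rule means lack of access.
    return (user, obj, action) in ALLOW_RULES
```

Note that a forgotten rule fails safe: the worst consequence of an omission is a denied legitimate request, not an unnoticed illegitimate one.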
3.5.3 Economy of Mechanism
“Keep the design as simple and small as possible” applies to any aspect of a system, but it deserves emphasis for protection mechanisms, since design and implementation errors that result in unwanted access paths may not be noticed during normal use.
3.5.4 Complete Mediation
Every access to every (security-sensitive) object must be checked for proper authorization, and access denied if it violates authorizations. This principle, when systematically applied, is the primary underpinning of the protection system, and it implies the existence and integrity of methods to (1) identify the source of every request, (2) ensure the request is unchanged since its origination, and (3) check the relevant authorizations. It also requires that design proposals to allow access by remembering the result of a prior authority check be examined skeptically.
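A sketch of complete mediation as a checking wrapper (the authorization table and function names are assumptions for this example). The key property is that the check runs on every invocation and nothing is cached, so revocation takes effect immediately:

```python
import functools

# Illustrative authorization table consulted on every call.
AUTHORIZED = {("alice", "read_record")}

def mediated(fn):
    # Re-check authorization on *every* invocation rather than
    # remembering the result of a prior authority check.
    @functools.wraps(fn)
    def wrapper(user, *args, **kwargs):
        if (user, fn.__name__) not in AUTHORIZED:
            raise PermissionError(f"{user} is not authorized for {fn.__name__}")
        return fn(user, *args, **kwargs)
    return wrapper

@mediated
def read_record(user, record_id):
    return f"record {record_id}"
```

If an entry is removed from the table, the very next call fails, which is exactly the behavior lost when a design "remembers" a prior check.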
3.5.5 Open Design
Security mechanisms should not depend on the ignorance of potential attackers, but rather on assurance of correctness and/or the possession of specific, more easily protected, keys or passwords. This permits the mechanisms to be examined by a number of reviewers without concern that the review may itself compromise the safeguards.
The practice of openly exposing one’s design to scrutiny is not universally accepted. The notion that the mechanism should not depend on ignorance is generally accepted, but some would argue that the design should remain secret, since a secret design may have the additional advantage of significantly raising the price of penetration. The principle can still be applied, however, with scrutiny restricted to an organization or a “trusted” group.
3.5.6 Separation of Privilege
A protection mechanism that requires two keys to unlock it is more robust and flexible than one that allows access to the presenter of a single key. By requiring two keys, no single accident, deception, or breach of trust is sufficient to compromise the protected information.
Redundancy is also used in the traditional “separation of duties” in human financial processes; e.g., the person who fills out a check and the person who signs it are two different people.
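The two-key idea can be reduced to a small check (the payment scenario and threshold are invented for illustration): approval requires distinct parties, so no single credential, accident, or breach of trust suffices.

```python
def approve_release(approvals, required=2):
    # Require at least `required` *distinct* approvers. Duplicate
    # signatures from one party do not count twice, mirroring the
    # two-key lock: one key presented twice still opens nothing.
    return len(set(approvals)) >= required
```
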
3.5.7 Least Common Mechanism
Minimize the security mechanisms common to more than one user or depended on by multiple users or levels of sensitivity. Whenever the same executing process services multiple users or handles data from multiple security levels or compartments, it potentially creates an opportunity for illegitimate information flow. Every shared mechanism (as opposed, for example, to non-communicating multiple instances) represents a potential information path between users and must be designed with great care to ensure it does not unintentionally compromise security. Virtual machines, each with its own copy of the operating system, are an example of not sharing operating-system mechanisms: no single instance is common across the users of the virtual machines. Thus, one desires the least possible sharing of common mechanisms (instances).
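The contrast between a shared mechanism and non-communicating instances can be sketched as follows (the session-store example and its names are assumptions, not from the text):

```python
class SessionStore:
    """Per-user store: each user gets a private instance, so the store
    cannot act as an information path between users."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

_stores = {}

def store_for(user):
    # One non-communicating instance per user, instead of a single
    # store common to all users.
    if user not in _stores:
        _stores[user] = SessionStore()
    return _stores[user]
```

Had all users shared one `SessionStore` instance, any key collision or bug in the store would be a potential flow between users; with per-user instances, that path simply does not exist.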
3.5.8 Psychological Acceptability
It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly.
The cost of a countermeasure to eliminate or mitigate a vulnerability should be commensurate with the cost of the loss if the vulnerability were successfully exploited. Generally, the more valuable the asset targeted by an attacker, the more effort and resources that attacker will expend, and therefore the more effort and resources the defender should expend to prevent or thwart the attack.
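One common way to make this commensurability concrete is the classic annualized loss expectancy (ALE) arithmetic from quantitative risk analysis; the figures below are illustrative assumptions:

```python
def annual_loss_expectancy(asset_value, exposure_factor, annual_rate):
    # ALE = SLE * ARO, where the single-loss expectancy (SLE) is
    # asset_value * exposure_factor and ARO is the expected number
    # of occurrences per year.
    return asset_value * exposure_factor * annual_rate

def countermeasure_justified(ale_without, ale_with, annual_cost):
    # A countermeasure is commensurate with the loss only if it removes
    # more expected loss per year than it costs per year.
    return (ale_without - ale_with) > annual_cost
```

For example, an asset worth 100,000 with a 50% exposure factor and an expected 0.2 incidents per year yields an ALE of 10,000, so a control costing 5,000 per year that nearly eliminates the risk is justified, while one costing 9,500 is marginal at best.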
3.5.10 Recording of Compromises
If a system’s defenses were not fully successful, trails of evidence should exist to aid understanding, recovery, diagnosis and repair, forensics, and accountability. Records of suspicious behavior and “near misses”, as well as records of legitimate behavior, can also have value.
3.5.11 Defense in Depth
Defense in depth is a strategy in which human, technological, and operational capabilities are integrated to establish variable protective barriers across multiple layers and dimensions of a system. This principle ensures that an attacker must compromise more than one protection mechanism to successfully exploit a system. Diversity of mechanisms can make the attacker’s problem even harder. The increased cost of an attack may dissuade an attacker from continuing the attack. Note that multiple less expensive but weak mechanisms do not necessarily make a stronger barrier than fewer more expensive and stronger ones.
3.5.12 Analyzability
Systems whose behavior is analyzable from their engineering descriptions, such as design specifications and code, have a higher chance of performing correctly because relevant aspects of their behavior can be predicted in advance. In any field, analysis techniques are never available for all structures and substances. To ensure analyzability, one must restrict structural arrangements and other aspects to those that are analyzable.
3.5.13 Treat as Conflict
Because many software-security-related issues and activities involve a conflict between the system and those aiding in its defense on one side, and those who are attacking or may attack in the future on the other, one needs to bring to the situation all that is known about conflicts. This includes recognizing when attacks in the computer security arena are part of a wider conflict. Below are some of the key concepts.
As enumerated in Section 2, Threats and Hazards, the threats or hazards faced may come from a number of sources including intelligent, skilled adversaries. When possession, damage, or denial of assets is highly valuable to someone else, then considerable skill and resources could be brought to bear. When poor software security makes these actions relatively easy and risk free, even lower value assets may become attractive.
One cannot simply use a probabilistic approach in one’s analyses because, for example, serious, intelligent opponents tend to attack where and when they are least expected – where the defender’s estimate of the probability of such an attack is relatively low.
Where judging risks is difficult, one possible approach is to make them at least tolerable (not unacceptable) and “as low as reasonably practicable” (ALARP). In employing the ALARP approach, judgments about what to do are based on the cost-benefit of techniques – not on total budget. However, additional techniques or efforts are not employed once risks have reached acceptable (not just tolerable) levels. This approach is attractive from an engineering viewpoint, but the amount of benefit cannot always be adequately established.
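A greedy sketch of the ALARP decision rule described above (the mitigation names, costs, and risk units are invented for illustration; real ALARP judgments also weigh whether cost is grossly disproportionate to benefit):

```python
def alarp_plan(risk, acceptable, mitigations):
    """Apply mitigations in order of risk reduction per unit cost,
    stopping as soon as risk reaches the acceptable level.

    mitigations: iterable of (name, cost, risk_reduction) tuples;
    `risk` and `risk_reduction` are on the same (illustrative) scale.
    """
    chosen = []
    # Most cost-effective measures first: benefit per unit cost.
    for name, cost, reduction in sorted(
            mitigations, key=lambda m: m[2] / m[1], reverse=True):
        if risk <= acceptable:
            break  # no further spending once risk is acceptable
        chosen.append(name)
        risk -= reduction
    return chosen, risk
```

This captures the two features noted in the text: selection is driven by cost-benefit rather than total budget, and effort stops once the acceptable level is reached.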
3.5.13.1 Security is a System, Organizational, and Societal Problem
Security is not merely a software concern, and mistaken decisions can result from confining attention to software alone. Attempts to break security are often part of a larger conflict such as business competition, crime and law enforcement, social protest, political rivalry, or conflicts among nation states, e.g., espionage. Specifically in secure software development, the social norms and ethics of members of the development team and organization, as well as suppliers, deserve serious attention. Insiders constitute one of the most dangerous populations of potential attackers, and communicating successfully with them concerning their responsibilities and obligations is important in avoiding subversion or other damage.
3.5.13.2 Measures Encounter Countermeasures
Despite designers’ positive approach to assuring protection, in practice one also engages in a software measure-countermeasure cycle between offense and defense. Currently, reducing attackers’ relative capabilities and increasing systems’ resilience dominate many approaches. Anonymity of attackers has led to asymmetric situations where defenders must defend everywhere and always, while attackers can choose the time and place of their attacks. Means for reducing anonymity – thereby making deterrence and removal of offenders more effective – could somewhat calm the current riot of illicit behavior.
3.5.13.3 Learn and Adapt
The attack and defense of software-intensive systems is a normal but not a mundane situation; it is a serious conflict with serious adversaries such as criminal organizations, terrorist groups, nation states, and competitors committing industrial espionage. Serious analyses of past and current experience can improve tactics and strategy.
One should not forget how to be successful in conflicts. While it is difficult to state the principles of conflict briefly, some exist: exploiting the arenas in which the conflict occurs; using initiative, speed, movement, timing, and surprise; using and trading off quality and quantity, including technology and preparation; carefully defining success and pursuing it universally and persistently but flexibly; and hindering adversaries.
Tradeoffs exist between security and efficiency, speed, and usability. For some systems, other significant tradeoffs with security may also exist. Design decisions may exacerbate or ease these tradeoffs. For example, innovative user interface design may ease security’s impact on usability. [Cranor 2005]
Attempting defense in depth raises a tradeoff between fewer layers of defense, each constructed with more resources, and more layers, each costing fewer resources. An attacker may overcome several weak lines of defense with less difficulty than fewer but stronger ones.
Business tradeoffs also exist (or are believed to exist) for software producers between effort and time-to-market on the one hand and security on the other. Finally, producers want users and other developers to use the features, especially new features, of a product, but the likelihood of this is reduced if those features are shipped “turned off”.