Trusted Computer System Evaluation Criteria (TCSEC) is a United States Government Department of Defense (DoD) standard that sets basic requirements for assessing the effectiveness of computer security controls built into a computer system. The TCSEC was used to evaluate, classify, and select computer systems being considered for the processing, storage, and retrieval of sensitive or classified information.[1]
The TCSEC, frequently referred to as the Orange Book, is the centerpiece of the DoD Rainbow Series publications. Initially issued in 1983 by the National Computer Security Center (NCSC), an arm of the National Security Agency, and then updated in 1985, the TCSEC was eventually replaced by the international Common Criteria standard, first published as ISO/IEC 15408 in 1999.[citation needed]
History
By the late 1960s, government agencies, like other computer users, had gone far in the transition from batch processing to multiuser and time-sharing systems. The US Department of Defense (DoD) Advanced Research Projects Agency (ARPA), now DARPA, was a primary funder of research into time-sharing.[1] By 1970, DoD was planning a major procurement of mainframe computers, referred to as the Worldwide Military Command and Control System (WWMCCS), to support military command operations. The desire to meet more advanced security challenges emerged early. The Air Force's Military Airlift Command (MAC), for example, provided the military services with a largely unclassified air cargo and passenger service but on rare occasions was required to classify some of its missions using the same aircraft and crews—for example, in cases of military contingencies or special operations. By 1970, MAC had articulated a requirement to process classified information on its soon-to-arrive WWMCCS mainframes while still allowing users without security clearances (uncleared users) access to the mainframes.[2]
The national security community responded to the challenges in two ways: the Office of the Secretary of Defense commissioned a study of the policy and technical issues associated with securing computer systems, while ARPA funded the development of a prototype secure operating system that could process and protect classified information.
The study effort was organized as the Defense Science Board (DSB) Task Force on Computer Security under the chairmanship of the late Willis Ware. Its membership included technologists from the government and defense contractors as well as security officials from the DoD and intelligence community. The task force met between 1967 and 1969 and produced a classified report that was made available to organizations with appropriate security clearance beginning in 1970.[3] The Ware Report, as the DSB task force report came to be called, provided guidance on the development and operation of multiuser computer systems that would be used to process classified information.
In the early 1970s, United States Air Force requirements for the development of new computer system capabilities were addressed to the Air Force Electronic Systems Division (ESD), later known as the Electronic Systems Center, at Hanscom Air Force Base in Massachusetts. ESD received technical advice and support from the MITRE Corporation, one of the country's federally funded research and development centers (FFRDCs). An early MITRE report[2] suggested alternative approaches to meeting the MAC requirement without developing a new multilevel secure operating system, in the hope that these approaches might avoid the problems the Ware Report characterized as intractable.
Grace Hammonds Nibaldi, while working at the MITRE Corporation, published a report that laid out the initial plans for the evaluation of commercial off-the-shelf operating systems.[4] The Nibaldi paper places great emphasis on the importance of mandatory security. Like the Orange Book to follow, it defines seven levels of evaluated products, with the lowest, least-secure level (0) reserved for "unevaluated." In the Nibaldi scheme, all but level 1 (the lowest level that actually undergoes evaluation) must include features for extensive mandatory security.
Work on the Orange Book began in 1979; its creation was a major project spanning the period from Nibaldi's 1979 report[4] to the official release of the Orange Book in August 1983. The first public draft of the evaluation criteria was the Blue Book, released in May 1982.[1] Sheila Brand was the primary author, and several other people were core contributors to its development. These included Grace Hammonds Nibaldi and Peter Tasker of the MITRE Corporation; Dan Edwards, Roger Schell, and Marvin Schaeffer of the National Computer Security Center; and Ted Lee of Univac. A number of people from government, government contractors, and vendors, including Jim Anderson, Steve Walker, Clark Weissman, and Steve Lipner, were cited as reviewers who influenced the content of the final product.[1]
In 1999, the Orange Book was replaced by the international Common Criteria for Information Technology Security Evaluation.[1]
On 24 October 2002, the Orange Book (DoD 5200.28-STD) was canceled by DoDD 8500.1, which was later reissued as DoDI 8500.02 on 14 March 2014.[5]
Fundamental objectives and requirements
Policy
The security policy must be explicit, well-defined, and enforced by the computer system. Three basic security policies are specified:[6]
- Mandatory Security Policy – Enforces access control rules based directly on an individual's clearance, authorization for the information, and the confidentiality level of the information being sought. Indirect factors, such as physical and environmental controls, are also considered. This policy must also accurately reflect the laws, general policies, and other relevant guidance from which the rules are derived.
- Marking – Systems designed to enforce a mandatory security policy must store and preserve the integrity of access control labels and retain the labels if the object is exported.
- Discretionary Security Policy – Enforces a consistent set of rules for controlling and limiting access based on identified individuals who have been determined to have a need-to-know for the information.
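The interplay of the mandatory and discretionary policies can be sketched in code. This is an illustrative sketch, not text from the TCSEC: the level ordering follows the usual US classification hierarchy, while the category sets and the ACL structure are assumptions made for the example. The key point it demonstrates is that both checks must pass, and the discretionary check can never override the mandatory one.

```python
# Hypothetical sketch: combining a mandatory policy (clearance must dominate
# the object's level and include all its categories) with a discretionary
# policy (an owner-maintained access list encoding need-to-know).

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def mac_permits(clearance, categories, obj_level, obj_categories):
    """Mandatory check: clearance dominates the object's level, and the
    subject holds every category (compartment) attached to the object."""
    return (LEVELS[clearance] >= LEVELS[obj_level]
            and obj_categories <= categories)

def dac_permits(user, acl):
    """Discretionary check: the object's access list names the user."""
    return user in acl

def access_permitted(user, clearance, categories, obj):
    # Both policies must allow the access; DAC cannot override MAC.
    return (mac_permits(clearance, categories, obj["level"], obj["categories"])
            and dac_permits(user, obj["acl"]))

report = {"level": "SECRET", "categories": {"NOFORN"}, "acl": {"alice"}}
print(access_permitted("alice", "TOP SECRET", {"NOFORN"}, report))    # True
print(access_permitted("alice", "CONFIDENTIAL", {"NOFORN"}, report))  # False: no dominance
print(access_permitted("bob", "TOP SECRET", {"NOFORN"}, report))      # False: not on ACL
```

The marking requirement corresponds here to the `level` and `categories` fields: the labels travel with the object rather than being recomputed at access time.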
Accountability
Individual accountability must be enforced regardless of policy. A secure means must exist to ensure that an authorized and competent agent can access and evaluate the accountability information within a reasonable amount of time and without undue difficulty. The accountability objective includes three requirements:[6]
- Identification – The process used to recognize an individual user.
- Authentication – The verification of an individual user's authorization to specific categories of information.
- Auditing – Audit information must be selectively kept and protected so that actions affecting security can be traced to the authenticated individual.
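A minimal sketch of how the three requirements fit together follows. The user names, password table, and audit-record format are invented for illustration; the point is that identification (a claimed name), authentication (proof of that claim), and auditing (a protected trail of security-relevant events) combine to trace actions back to an authenticated individual.

```python
# Hypothetical sketch of the accountability chain: identify, authenticate,
# and audit, so actions can be traced to the authenticated individual.
import hashlib
import time

CREDENTIALS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}
AUDIT_LOG = []  # stands in for a selectively kept, protected audit trail

def authenticate(user, password):
    """Identification (the claimed name) plus authentication (proof of
    identity); every attempt, successful or not, is recorded."""
    ok = CREDENTIALS.get(user) == hashlib.sha256(password.encode()).hexdigest()
    AUDIT_LOG.append((time.time(), user, "login", "success" if ok else "failure"))
    return ok

def audited_action(user, action):
    """Security-relevant actions are logged against the authenticated user."""
    AUDIT_LOG.append((time.time(), user, action, "performed"))

authenticate("alice", "s3cret")
audited_action("alice", "read:report-17")
```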
Assurance
The computer system must contain hardware/software mechanisms that can be independently evaluated to provide sufficient assurance that the system enforces the above requirements. By extension, assurance must include a guarantee that the trusted portion of the system works only as intended. To accomplish these objectives, two types of assurance are needed with their respective elements:[6]
- Assurance Mechanisms
- Operational Assurance: System Architecture, System Integrity, Covert Channel Analysis, Trusted Facility Management, and Trusted Recovery
- Life-cycle Assurance: Security Testing, Design Specification and Verification, Configuration Management, and Trusted System Distribution
- Continuous Protection Assurance – The trusted mechanisms that enforce these basic requirements must be continuously protected against tampering or unauthorized changes.
Documentation
Within each class, an additional set of documentation addresses the development, deployment, and management of the system rather than its capabilities. This documentation includes:[citation needed]
- Security Features User's Guide, Trusted Facility Manual, Test Documentation, and Design Documentation
Divisions and classes
The TCSEC defines four divisions: D, C, B, and A, where division A has the highest security. Each division represents a significant difference in the trust an individual or organization can place on the evaluated system. Additionally, divisions C, B, and A are broken into a series of hierarchical subdivisions called classes: C1, C2, B1, B2, B3, and A1.[7]
Each division and class expands or modifies as indicated the requirements of the immediately prior division or class.[7]
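The cumulative structure described above can be sketched as follows. The requirement names are abbreviated summaries chosen for this example, not TCSEC text; the sketch only illustrates that each class inherits every requirement of the class below it and adds its own.

```python
# Illustrative sketch of the hierarchical, cumulative class structure.
# Requirement names are shorthand summaries, not quotations from the TCSEC.
ADDED = {
    "C1": ["identification and authentication", "basic DAC"],
    "C2": ["finer-grained DAC", "audit trails", "object reuse"],
    "B1": ["sensitivity labels", "MAC on selected subjects/objects"],
    "B2": ["MAC on all subjects/objects", "covert storage channel analysis"],
    "B3": ["reference monitor", "security domains", "trusted recovery"],
    "A1": ["formal top-level specification and verification"],
}
ORDER = ["C1", "C2", "B1", "B2", "B3", "A1"]

def requirements(cls):
    """All requirements of a class: its own plus everything inherited
    from every lower class in the hierarchy."""
    reqs = []
    for c in ORDER[: ORDER.index(cls) + 1]:
        reqs.extend(ADDED[c])
    return reqs

print("audit trails" in requirements("A1"))  # True: inherited from C2
print("sensitivity labels" in requirements("C2"))  # False: labels start at B1
```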
D – Minimal protection
- Reserved for those systems that have been evaluated but that fail to meet the requirements for a higher division.[8]
C – Discretionary protection
- C1 – Discretionary Security Protection[9]
- Identification and authentication
- Separation of users and data
- Discretionary Access Control (DAC) capable of enforcing access limitations on an individual basis
- Required System Documentation and user manuals
- C2 – Controlled Access Protection
- More finely grained DAC
- Individual accountability through login procedures
- Audit trails
- Object reuse
- Resource isolation
- An example of such a system is HP-UX
B – Mandatory protection
- B1 – Labeled Security Protection[10]
- Informal statement of the security policy model
- Data sensitivity labels
- Mandatory Access Control (MAC) over selected subjects and objects
- Label exportation capabilities
- Some discovered flaws must be removed or otherwise mitigated
- Design specifications and verification
- B2 – Structured Protection
- Security policy model clearly defined and formally documented
- DAC and MAC enforcement extended to all subjects and objects
- Covert storage channels are analyzed for occurrence and bandwidth
- Carefully structured into protection-critical and non-protection-critical elements
- Design and implementation enable more comprehensive testing and review
- Authentication mechanisms are strengthened
- Trusted facility management is provided with administrator and operator segregation
- Strict configuration management controls are imposed
- An example of such a system was Multics
- B3 – Security Domains
- Satisfies reference monitor requirements
- Structured to exclude code not essential to security policy enforcement
- Significant system engineering directed toward minimizing complexity
- Security administrator role defined
- Audit security-relevant events
- Automated imminent intrusion detection, notification, and response
- Trusted path to the TCB for the user authentication function
- Trusted system recovery procedures
- Covert timing channels are analyzed for occurrence and bandwidth
- An example of such a system is the XTS-300, a precursor to the XTS-400
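The reference monitor requirement that B3 satisfies can be illustrated with a minimal sketch. The policy function and object store here are invented for the example; what the sketch shows is the architectural idea: a single, small mediation point through which every access must pass (complete mediation), which in a real TCB would also have to be tamperproof and small enough to be analyzed and tested.

```python
# Hypothetical sketch of the reference monitor concept: one small
# component mediates every access to every object.

class ReferenceMonitor:
    """Mediates all subject-to-object accesses through one decision point."""

    def __init__(self, policy):
        self._policy = policy   # the single authorization decision function
        self._objects = {}

    def write(self, subject, name, value):
        if not self._policy(subject, name, "write"):
            raise PermissionError(f"{subject} may not write {name}")
        self._objects[name] = value

    def read(self, subject, name):
        if not self._policy(subject, name, "read"):
            raise PermissionError(f"{subject} may not read {name}")
        return self._objects[name]

# Example policy: anyone may read; only "admin" may write.
rm = ReferenceMonitor(lambda s, o, mode: mode == "read" or s == "admin")
rm.write("admin", "config", "hardened")
print(rm.read("user", "config"))  # hardened
```

Because no code path reaches the object store except through `read` and `write`, verifying the policy enforcement reduces to verifying this one small component, which is the engineering rationale behind B3's "minimizing complexity" requirement.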
A – Verified protection
- A1 – Verified Design[11]
- Functionally identical to B3
- Formal design and verification techniques including a formal top-level specification
- Formal management and distribution procedures
- Examples of A1-class systems are Honeywell's SCOMP, Aesec's GEMSOS, and Boeing's SNS Server. Two that were unevaluated were the production LOCK platform and the cancelled DEC VAX Security Kernel.
- Beyond A1
- System Architecture demonstrates that the requirements of self-protection and completeness for reference monitors have been implemented in the Trusted Computing Base (TCB).
- Security Testing automatically generates test cases from the formal top-level specification or formal lower-level specifications.
- Formal Specification and Verification is where the TCB is verified down to the source code level, using formal verification methods where feasible.
- Trusted Design Environment is where the TCB is designed in a trusted facility with only trusted (cleared) personnel.
Matching classes to environmental requirements
Army Regulation 380-19 is an example of a guide to determining which system class should be used in a given situation.[12]
References
- ^ a b c d e Lipner, Steve (2015-06-02). "The Birth and Death of the Orange Book". IEEE Annals of the History of Computing. 37 (2): 19–31. doi:10.1109/MAHC.2015.27. S2CID 16625319. Retrieved 2024-01-28 – via IEEE.
- ^ a b Lipner, S.B. (1971-01-06). "MACIMS Security Configurations" (PDF). stevelipner.org. Retrieved 2024-01-28.
- ^ Ware, Willis H., ed. (1979-10-10). "Security Controls for Computer Systems: Report of Defense Science Board Task Force on Computer Security". Rand. doi:10.7249/R609-1. Retrieved 2024-01-28.
- ^ a b Nibaldi, G.H. (1979-10-25). "Proposed Technical Evaluation Criteria for Trusted Computer Systems" (PDF). UC Davis Computer Security Lab History Project. The Mitre Corporation. Retrieved 2024-01-28.
- ^ "Department of Defense INSTRUCTION - Cybersecurity" (PDF). www.dtic.mil. 2014-03-14. Archived from the original on 2014-04-29. Retrieved 2024-01-28.
- ^ a b c Klein, Melville H. (2014-01-15). Department of Defense Trusted Computer System Evaluation Criteria (PDF) (Report). DOD (published 1983-08-15). pp. 3–4. CSC-STD-001-83. Retrieved 2024-01-28 – via CIA.
- ^ a b Klein, Melville H. (2014-01-15). Department of Defense Trusted Computer System Evaluation Criteria (PDF) (Report). DOD (published 1983-08-15). p. 5. CSC-STD-001-83. Retrieved 2024-01-28 – via CIA.
- ^ Klein, Melville H. (2014-01-15). Department of Defense Trusted Computer System Evaluation Criteria (PDF) (Report). DOD (published 1983-08-15). p. 9. CSC-STD-001-83. Retrieved 2024-01-28 – via CIA.
- ^ Klein, Melville H. (2014-01-15). Department of Defense Trusted Computer System Evaluation Criteria (PDF) (Report). DOD (published 1983-08-15). p. 12. CSC-STD-001-83. Retrieved 2024-01-28 – via CIA.
- ^ Klein, Melville H. (2014-01-15). Department of Defense Trusted Computer System Evaluation Criteria (PDF) (Report). DOD (published 1983-08-15). p. 20. CSC-STD-001-83. Retrieved 2024-01-28 – via CIA.
- ^ Klein, Melville H. (2014-01-15). Department of Defense Trusted Computer System Evaluation Criteria (PDF) (Report). DOD (published 1983-08-15). p. 44. CSC-STD-001-83. Retrieved 2024-01-28 – via CIA.
- ^ Walker, Robert M. (1998-03-27). Army Regulation 380-19: Information Systems Security (PDF) (Report). United States Army. Retrieved 2024-01-28.