Preface

The Open Group

The Open Group is a vendor-neutral and technology-neutral consortium, whose vision of Boundaryless Information Flow™ will enable access to integrated information within and between enterprises based on open standards and global interoperability. The Open Group works with customers, suppliers, consortia, and other standards bodies. Its role is to capture, understand, and address current and emerging requirements, establish policies, and share best practices; to facilitate interoperability, develop consensus, and evolve and integrate specifications and Open Source technologies; to offer a comprehensive set of services to enhance the operational efficiency of consortia; and to operate the industry's premier certification service, including UNIX® certification.

Further information on The Open Group is available at www.opengroup.org.

The Open Group has over 15 years' experience in developing and operating certification programs and has extensive experience developing and facilitating industry adoption of test suites used to validate conformance to an open standard or specification.

More information is available at www.opengroup.org/certification.

The Open Group publishes a wide range of technical documentation, the main part of which is focused on development of Technical and Product Standards and Guides, but which also includes white papers, technical studies, branding and testing documentation, and business titles. Full details and a catalog are available at www.opengroup.org/bookstore.

As with all live documents, Technical Standards and Specifications require revision to align with new developments and associated international standards. To distinguish between revised specifications which are fully backwards-compatible and those which are not:

  • A new Version indicates there is no substantive change to the definitive information contained in the previous publication of that title, but additions/extensions are included. As such, it replaces the previous publication.
  • A new Issue indicates there is substantive change to the definitive information contained in the previous publication of that title, and there may also be additions/extensions. As such, both previous and new documents are maintained as current publications.

Readers should note that updates – in the form of Corrigenda – may apply to any publication. This information is published at www.opengroup.org/corrigenda.

Network Applications Consortium (NAC)

The Network Applications Consortium (NAC) was founded in 1990 as a strategic end-user organization whose vision was to improve the interoperability and manageability of business-critical applications being developed for the heterogeneous, virtual enterprise computing environment. Its mission was to promote member collaboration and influence the strategic direction of vendors developing virtual enterprise application and infrastructure technologies. Its diverse membership equipped it to explain the need for agile IT infrastructure in support of business objectives, aimed at consolidating, clarifying, and communicating infrastructure technology needs to influence the IT industry and drive the evolution of standards and products.

One of its significant achievements was publishing the NAC Enterprise Security Architecture (ESA) Guide in 2004. In late 2007, the NAC transitioned into the Security Forum. At that time, the members of the NAC who joined the Security Forum recognized the significant value of the ESA Guide. It contained much valuable information that remains as relevant today as when it was first published. Members also realized, however, that security practice had moved on since 2004, so parts of the ESA Guide would benefit from updates and additions. Accordingly, a new project was initiated to update this ESA Guide.

This Document

Information security professionals today recognize the high value of having a clear enterprise security architecture for their business, and developing and migrating their security strategies within a sound and well-structured framework, driven by business priorities derived from sound risk management assessments. This Open Enterprise Security Architecture (O‑ESA) Guide provides a valuable reference resource for practicing security architects and designers. It gives a comprehensive overview of the key security issues, principles, components, and concepts underlying architectural decisions that are involved when designing effective enterprise security architectures. It does not define a specific enterprise security architecture, and neither is it a “how to” guide to design one, although in places it does indicate some of the “how”.

This Guide updates the NAC 2004 ESA Guide in those areas which have evolved since its original publication. In particular, it replaces the extract previously licensed from the British Standards Institution Code of Practice for Information Security Management with references to the latest ISO/IEC 27001/2 standard, citing rather than licensing reproduction of quoted extracts.

Intended Audience

The O‑ESA Guide provides a valuable reference resource for practicing security architects and designers – explaining key terms and concepts underlying security-related decisions that security architects and designers have to make. In doing so it enables them to explain their architectures and decision-making processes to their associated architecture and management colleagues in related disciplines.

The description avoids an excessively technical presentation of the issues and concepts, making it an eminently digestible reference for CxO-level managers, enterprise architects, and designers, enabling them to appreciate, validate, and balance the security architecture viewpoints along with all the other viewpoints involved in creating a complete enterprise architecture.

Trademarks

Boundaryless Information Flow™ is a trademark and ArchiMate®, Jericho Forum®, Making Standards Work®, Motif®, OSF/1®, The Open Group®, TOGAF®, UNIX®, and the “X” device are registered trademarks of The Open Group in the United States and other countries.

The Open Group acknowledges that there may be other brand, company, and product names used in this document that may be covered by trademark protection and advises the reader to verify them independently.

Acknowledgements

The Open Group gratefully acknowledges the contribution of the following people in the development of this O-ESA Guide:

  • Project Leader:
    Stefan Wahe, University of Wisconsin – Madison
  • Consulting Author-Editor:
    Gunnar Peterson, Managing Principal, Arctec Group
  • Reviewer Group:
    Security Forum members, in particular:

—  Vicente Aceituno, ISM3 Consortium

—  François Jan, Systems Architect & Security/IAM Specialist, Arismore

—  Mike Jerbic, Trusted Systems Consulting, and Chair of the Security Forum

—  Mary Ann Mezzapelle, Chief Technologist, HP Enterprise Security Services

Referenced Documents

The following documents are referenced in this O-ESA Guide:

  • Architectural Patterns for Enabling Application Security, Joseph Yoder & Jeffrey Barcalow, 1998.
  • Attack Surface Measurement and Attack Surface Reduction, Pratyusa K. Manadhata & Jeannette M. Wing; refer to: www.cs.cmu.edu/~pratyus/as.html.
  • Building Secure Software: How to Avoid Security Problems the Right Way, John Viega & Gary McGraw, Addison-Wesley, 2001.
  • Burton Group (now merged into Gartner) Enterprise Identity Management: It’s about the Business, Version 1, July 2003.
  • Computer Security: Art and Science, Matt Bishop, Addison-Wesley, 2002.
  • Cyber Security and Control System Survivability, Howard Lipson, 2005; refer to www.pserc.org.
  • HIPAA: (US) Health Insurance Portability and Accountability Act, 1996.
  • Introduction to XDAS; refer to: www.opengroup.org/security/das/xdas_int.htm.
  • ISO/IEC 10181-3:1996: Information Technology – Open Systems Interconnection – Security Frameworks for Open Systems: Access Control Framework; refer to www.iso.org.
  • ISO/IEC 27001:2005: Information Technology – Security Techniques – Information Security Management Systems – Requirements; refer to www.iso.org (also BS 7799‑2:2005).
  • ISO/IEC 27002:2005: Information Technology – Security Techniques – Code of Practice for Information Security Management; refer to www.iso.org.
  • Logging in the Age of Web Services, Anton Chuvakin & Gunnar Peterson, IEEE Security & Privacy Journal, May 2009; refer to: http://arctecgroup.net/pdf/82-85.pdf.
  • NIST SP 800-27: Engineering Principles for Information Technology Security (A Baseline for Achieving Security), Revision A, June 2004; refer to: csrc.nist.gov/publications/PubsSPs.html.
  • NIST SP 800-33: Underlying Technical Models for Information Technology Security, Special Publication, December 2001.
  • NIST SP 800-53A: Guide for Assessing the Security Controls in Federal Information Systems and Organizations, June 2010; refer to: csrc.nist.gov/publications/PubsSPs.html.
  • NIST SP 800-55: Performance Measurement Guide for Information Security, July 2008; refer to: csrc.nist.gov/publications/PubsSPs.html.
  • NIST SP 800-56A: Recommendation for Pair-Wise Key Establishment Schemes Using Discrete Logarithm Cryptography, March 2007; refer to: csrc.nist.gov/publications/PubsSPs.html.
  • NIST SP 800-56B: Recommendation for Pair-Wise Key Establishment Schemes Using Integer Factorization Cryptography, August 2009; refer to: csrc.nist.gov/publications/PubsSPs.html.
  • NIST SP 800-63: Electronic Authentication Guideline, April 2006; refer to: csrc.nist.gov/publications/PubsSPs.html.
  • Open Information Security Management Maturity Model (O-ISM3), Technical Standard (C102), published by The Open Group, February 2011; refer to: www.opengroup.org/bookstore/catalog/c102.htm.
  • OWASP Guide Project; refer to www.owasp.org/index.php/OWASP_Guide_Project.
  • Payment Card Industry (PCI) Data Security Standard (DSS): Requirements and Security Assessment Procedures, Version 1.2; October 2008.
  • Peer Reviews in Software: A Practical Guide, Karl E. Wiegers, Addison-Wesley, 2001.
  • Problems with XACML and their Solutions, Travis Spencer, September 2010; refer to: http://travisspencer.com/blog/2010/09/problems-with-xacml-and-their.html#comments.
  • Putting the Tools to Work: How to Succeed with Source Code Analysis, Pravir Chandra, Brian Chess, & John Steven, IEEE Security & Privacy; refer to www.cigital.com/papers/download/j3bsi.pdf.
  • Risk Taxonomy, Technical Standard (C081), published by The Open Group, January 2009; refer to www.opengroup.org/bookstore/catalog/c081.htm.
  • (US) Sarbanes-Oxley Act; refer to: www.sox-online.com.
  • Secure Coding: Principles and Practices, Mark G. Graff & Kenneth R. Van Wyk, O’Reilly, 2003.
  • Securing the Virtual Enterprise Network: Layered Defenses, Coordinated Policies, Version 2, May 2003 (includes a description of the Burton Group (now merged into Gartner) VEN Security Model).
  • Security at Microsoft, Microsoft Technical White Paper, November 2003.
  • Security Design Patterns, Part 1 v1.4, Sasha Romanosky, November 2001.
  • Security Design Patterns, by Bob Blakley, Craig Heath, and members of The Open Group Security Forum (G031), published by The Open Group, 2004; refer to: www.opengroup.org/bookstore/catalog/g031.htm.
  • Security Engineering: A Guide to Building Dependable Distributed Systems, Ross J. Anderson, ISBN: 978-0-470-06852-6, Wiley, 2008.
  • Software Security: Building Security In, Gary McGraw, Addison-Wesley, 2006.
  • The Security Development Lifecycle, Michael Howard & Steve Lipner, Microsoft Press, 2006.
  • Uncover Security Design Flaws Using the STRIDE Approach, Shawn Hernan, Scott Lambert, Tomasz Ostwald, & Adam Shostack; refer to: msdn.microsoft.com/en-us/magazine/cc163519.aspx.
  • Writing Secure Code, Second Edition, Michael Howard & David C. LeBlanc, Microsoft Press, 2002.
  • XACML (Extensible Access Control Markup Language), OASIS; refer to www.oasis-open.org/committees/xacml.

1 Executive Overview

Information systems security has never been more critical around the world. As more data is collected, stored, and propagated, the protection of information systems grows increasingly complex. Demand for new and improved services in both the public and private sectors is intense, and as enterprises reinvent their services infrastructure to meet this demand, traditional boundaries are disappearing. The cyber security threats lurking outside those traditional boundaries are real and well documented. Security by exclusion – attempting to maintain hard perimeters – is no longer a viable approach. The enterprise must allow access to its information resources: to provide the services that citizens, customers, suppliers, and business partners are demanding; to allow employees and independent agents to work effectively from home; or to support some other variation on user access to the services of the enterprise.

Late in 2003 a group of NAC[1] members began meeting the challenge of describing a common framework that would speed the process of developing enterprise security architectures for this complex environment and create the governance foundation for sustaining it into the future. How does one simplify the process of governing security by exclusion (keeping the bad guys out) and security by inclusion (allowing and encouraging legitimate users to come in)? The NAC members’ premise[2] was that policy-driven security architecture is essential in order to simplify management of this increasingly complex environment. As the Corporate Governance Task Force Report[3] states: “The road to information security goes through corporate governance.” At the heart of governance are policy definition, implementation, and enforcement. To simplify security management, there must be a direct linkage between governance and the security architecture itself – in other words, policy-driven security architecture.

What is policy-driven security architecture? It starts with a policy framework for identifying guiding security principles; authorizing their enforcement in specific control domains through a set of policies; and implementing the policies through technical standards, guidelines, and procedures. It continues with a policy-driven technical framework for creating electronic representations of the policy standards, storing them in central policy repositories, and referencing them at runtime to make and enforce policy decisions. Finally, it provides a policy-driven security operations framework for ensuring that the technology as deployed both conforms to policy and enforces policy across the environment.

The approach to designing policy-driven security architecture taken in this O‑ESA Guide starts with defining an enterprise security program framework that places security program management in the larger context.

It continues with in-depth focus on the three major components that make up enterprise security architecture:

  • Governance
  • Technology Architecture
  • Operations

For governance, this approach establishes the overall process, defines the policy framework that is at the heart of governance, and provides templates for security principles and policies. The principles template is derived from the National Institute of Standards and Technology (NIST) Engineering Principles for IT Security, supplemented by principles from Open Group member organizations and others. The policy template is adopted directly from ISO/IEC 27002:2005: Code of Practice for Information Security Management, which is now well established and adopted globally in the enterprise security space.

For technology architecture, the approach defines a generic framework for the management of policy-driven security services, and then utilizes the framework as the basis of an overall conceptual architecture for implementing policy-driven security services. The framework is based in part on the Burton Group (now merged into Gartner) Virtual Enterprise Network (VEN) Security Model,[4] and in part on current and evolving standards in the policy management space. It extends the policy-driven concepts beyond access management to include configuration of other security services such as border protection, cryptography, content management, and auditing. The adoption of effective open standards is critical to the implementation of general-purpose policy-driven security architecture. Without these standards, centralization of policy and interoperability of the many products in the federated environment will not be possible. The complete solution cannot be purchased off-the-shelf today; however, the XACML standard[5] (see also Section 3.6.3 and Chapter 6) is widely seen as the basis for solutions being implemented by vendors, by private and public companies, and by government and educational institutions.
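
To make this concrete, the following Python sketch illustrates the division of labor that XACML standardizes: a Policy Enforcement Point (PEP) intercepts an access attempt and builds an attribute-based decision request, and a Policy Decision Point (PDP) evaluates that request against policy held in a central repository. The rule format and names below are simplified illustrations, not the OASIS XACML encoding; a real deployment would use an XACML engine and the standard XML or JSON representations.

    # Illustrative sketch of the XACML pattern, not the OASIS wire format.
    # A central repository holds policies; the PDP evaluates requests
    # against them; the PEP enforces whatever the PDP decides.
    POLICY_REPOSITORY = [
        {
            "id": "allow-partners-read-catalog",
            "effect": "Permit",
            "target": {"resource": "catalog", "action": "read"},
            "condition": lambda subject: "partner" in subject["roles"],
        },
    ]

    def pdp_evaluate(request):
        """Return Permit or Deny for a {subject, resource, action} request."""
        for policy in POLICY_REPOSITORY:
            target = policy["target"]
            if (target["resource"] == request["resource"]
                    and target["action"] == request["action"]
                    and policy["condition"](request["subject"])):
                return policy["effect"]
        return "Deny"  # default-deny when no policy applies

    def pep_enforce(subject, resource, action, perform):
        """PEP: intercept the access, ask the PDP, enforce its decision."""
        decision = pdp_evaluate(
            {"subject": subject, "resource": resource, "action": action})
        if decision == "Permit":
            return perform()
        raise PermissionError(f"{decision}: {action} on {resource}")

    # A partner user reading the product catalog is permitted; anything
    # not covered by policy falls through to Deny.
    user = {"id": "alice@partner.example", "roles": ["partner"]}
    print(pep_enforce(user, "catalog", "read", lambda: "catalog contents"))

Because policy lives in one repository rather than inside each application, changing a rule changes the behavior of every enforcement point that consults the PDP, which is the central benefit the conceptual architecture aims for.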

In addition to the overall conceptual architecture, two of the identified security services – identity management and border protection – are analyzed further to the level of service-specific conceptual and logical architecture. These two examples illustrate the logical decomposition of high-level services to the level of detail required to implement the architecture. For other identified security services, there is a template of high-level service definitions but no additional detailed perspective.

The approach to security operations in this O‑ESA Guide is to define the operational processes required to support a policy-driven security environment. These processes are of two types. The first comprises the compliance and vulnerability management processes required to ensure that the technology as deployed conforms to policy and provides adequate protection to control the level of risk to the environment. The second comprises the administration, event management, and incident management processes required to enforce policy within the environment. These operational processes are defined at a high level, not at the level of detail provided for governance and technology architecture.

Having developed the major components of open enterprise security architecture (O‑ESA), this Guide goes on to describe the vision, technical model, and roadmap for achieving automated definition, instantiation, and enforcement of security policy. It begins by defining the policy layers and policy automation vision:

  • Starting with the high-level definition of a business policy
  • Mapping that to appropriate standards such as ISO/IEC 27001/2 security policies
  • Translating the security policies to detailed technical standards
  • Instantiating an electronic representation of those standards
  • Then using that representation to drive the automated decision-making and enforcement process
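
A minimal Python sketch of these layers follows; the control mapping and parameter values are invented for illustration. A business policy is mapped to a standard control area, translated into a detailed technical standard, instantiated as an electronic representation, and then used to drive an automated compliance decision.

    # Hypothetical walk down the policy layers for one business policy.
    business_policy = "Protect customer information from unauthorized access."

    # Mapped to a control area (here assumed to be ISO/IEC 27002 access
    # control) and translated into a detailed technical standard:
    technical_standard = {
        "control": "ISO/IEC 27002:2005, Clause 11 (Access Control)",
        "min_password_length": 12,
        "max_password_age_days": 90,
        "lockout_threshold": 5,
    }

    # The electronic representation of the standard drives automated
    # decision-making and enforcement:
    def check_account(account, standard):
        findings = []
        if account["password_length"] < standard["min_password_length"]:
            findings.append("password too short")
        if account["password_age_days"] > standard["max_password_age_days"]:
            findings.append("password expired")
        return findings or ["compliant"]

    print(check_account(
        {"password_length": 8, "password_age_days": 120}, technical_standard))
    # -> ['password too short', 'password expired']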

The Guide then describes a technical model for implementing the vision, and establishes a roadmap of user and industry actions required to enable that technical model. The intent is to use this portion of the Guide as a catalyst to drive awareness of the need for the required industry standards and technologies.

This document concludes with recommendations in the policy-driven security space. The key recommendation is that security architects and designers proceed with implementation of the O‑ESA policy and technology frameworks, recognizing that they must map business policies to the detailed technical standards required for decision-making and enforcement, which today can be increasingly though not yet fully automated. Incremental transition to policy automation products as business drivers and technology warrant can then follow. Vendors and standards organizations are encouraged to adopt O‑ESA as a common vocabulary; support current and emerging standards related to policy-driven security; and consider the opportunities for open, standards-based products that support a common policy automation vision.

2 Introduction

There is general agreement among certified security professionals and others that the overall objective of information security is to preserve the availability, integrity, and confidentiality of an organization’s information. Effective IT security management also calls for providing accountability and assurance. Enterprise security architecture is the component of the overall enterprise architecture designed specifically to fulfil these objectives. Physical security is also critical to a secure environment, but it is treated here as a related corporate program (see Section 2.1) rather than as part of the enterprise security architecture itself.

Enterprise security architecture may also be thought of as the overall framework for fulfilling these objectives while satisfying the security demands placed on the IT service organization by its customers. It includes all aspects of security governance, security technology architecture, and security operations required to protect the IT assets of the enterprise.

The objective of this document is twofold:

  • To provide a framework that serves as a common reference for describing enterprise security architecture and technology both within and between organizations
  • To provide a template that allows user organizations to select the elements of enterprise security architecture they require and to tailor them to their needs

2.1 General Description of an Enterprise Security Program

This Guide’s enterprise security architecture must be understood in the larger corporate context, where it is part of an overall enterprise security program, as shown in Figure 1.

Figure 1: Corporate Enterprise Security Context

It must relate appropriately to the corporate risk management, corporate IT governance, enterprise architecture, and physical security programs of the enterprise. The specifics of how it relates may vary from one organization to another.

The overall enterprise security program is expanded in Figure 2 as four concentric rings of responsibility:

  • Overall program management responsibility lives in the outer ring.
  • Security governance responsibility lives in the second ring.
  • Security technology architecture responsibility lives in the third ring.
  • Security operations responsibility lives in the inner ring.

Figure 2: Enterprise Security Program Model

Each ring identifies key components and processes that fall within that responsibility domain. Viewed in the context of a constraints-based methodology, the components of each ring represent deliverables that further narrow the definition of what must be provided by the inner rings. Thus the requirements, strategy, planning roadmaps, and risk management assessments from the outer ring narrow the definition of what must be provided in the governance and technology architecture rings. For example, a new privacy requirement may dictate the definition of new governing principles, policies, and standards as well as the implementation of new technology architecture. The implementation of new standards and new architecture may in turn dictate the creation of new security processes or other capabilities within operations.

The program management functions identified in the outer ring of the enterprise security program model are considered outside the main scope of this O‑ESA Guide’s security architecture focus. In the next major section, the document focus will shift to O-ESA components identified in the inner rings: security governance, security technology architecture, and security operations. First, however – in recognition of the importance of the program management functions – the following section describes an overall enterprise security program framework. The goal is to provide a more complete overview of the security drivers and the program management functions, and also to provide a preview of the O‑ESA structure and show how it relates to program management.

2.2 Enterprise Security Program Framework

Figure 3 provides a more complete framework view of the enterprise security program. Note that the rectangular boxes represent components or deliverables, while the octagonal boxes represent processes. The framework starts with the four security drivers shown at the top, which identify the primary sources of security requirements that must be addressed. The key sources of internal requirements are the business areas, which have service-level business requirements they must meet to serve their current customers and to take advantage of new business opportunities. External requirements include security threats and legal and regulatory compliance requirements. Privacy and confidentiality are key examples of functional requirements driven by legal requirements. Risk management may also be affected for business areas within the purview of external regulatory commissions.

Requirements drive the development of the security program strategy deliverables as well as the planning process. Risk management is the crucial process of determining the acceptable level of security risk at various points in the enterprise IT system and implementing the optimal level of management and technical control; too little control may result in financial exposure, and too much may result in unnecessary cost. Education and awareness processes are critical to the success of any security program. Ongoing program assessment and gap analysis processes provide continual requirements feedback.

The functions of the O‑ESA components and processes are summarized below and will be described further in the subsequent sections of the document.

Governance

  • Principles: Basic assumptions and beliefs providing overall security guidance.
  • Policies: The security rules that apply in various control domains.
  • Standards, Guidelines, and Procedures: The implementation of the policies through technical requirements, recommended practices, and instructions.
  • Audit: The process of reviewing security activities for policy compliance.
  • Enforcement: The processes for ensuring compliance with the policies.

Figure 3: Enterprise Security Program Framework

Technology Architecture

  • Conceptual Framework: Generic framework for policy-based management of security services.
  • Conceptual Architecture: Conceptual structure for management of decision-making and policy enforcement across a broad set of security services.
  • Logical Architecture: Provides more detail on the logical components necessary to provide each security service.
  • Physical Architecture: Identifies specific products, showing where they are located and how they are connected to deliver the necessary functionality, performance, and reliability.
  • Design/Development: Guides, templates, tools, re-usable libraries, and code samples to aid in the effective utilization and integration of applications into the O‑ESA environment.

Security Operations

  • Deployment: Assumed to be the normal IT deployment process, not a security operations process.
  • Services: The core security functions defined by the security technology architecture that support devices and applications, as well as other security operations processes.
  • Devices and Applications: Devices and applications that use O‑ESA services and are supported by the security operations processes.
  • Administration: The process for securing the organization’s operational digital assets against accidental or unauthorized modification or disclosure.
  • Event Management: The process for day-to-day management of the security-related events generated by a variety of devices across the operational environment, including security, network, storage, and host devices.
  • Incident Management: The process for responding to security-related events that indicate a violation or imminent threat of violation of security policy (i.e., the organization is under attack or has suffered a loss).
  • Vulnerability Management: The process for identifying high-risk infrastructure components, assessing their vulnerabilities, and taking the appropriate actions to control the level of risk to the operational environment.
  • Compliance: The process for ensuring that the deployed technology conforms to the organization’s policies, procedures, and architecture.

2.3 Enterprise Security Architecture

With the enterprise security program framework as background, the focus for the remainder of the document shifts to the O‑ESA components. As shown in Figure 4, the security program management functions now assume a background role and become part of the larger corporate context, as the focus shifts to security governance, security technology architecture, and security operations. Our goal is to describe an O‑ESA framework and templates that user organizations can understand, tailor to their needs, and use as a starting point for an O‑ESA implementation.

Figure 4: Enterprise Security Architecture Components

To effectively design and implement O‑ESA, one needs to understand the purpose and relationships of the O‑ESA components. To aid in that understanding, the following discussion draws an analogy to a more commonly understood architectural model – designing a house. This discussion opens with a brief comparison of the house design model and the enterprise security system design model.

2.3.1 The House Design Model

It is helpful to begin with a brief review of the issues involved in the house design model:

  • Community Standards: The specific external and internal standards required by the housing community.
  • Design Requirements: The specific design criteria that are settled on after considering wants, needs, costs, etc., such as a passive solar design with a star wiring topology (LAN/telephony) and a home entertainment system, plus a fully disability-accessible downstairs with master bedroom and utilities.
  • Building Codes and Engineering Practices: The building standards and practices that support the design requirements and the architecture.
  • Architectural Plan: This is the resulting set of artist’s renderings and blueprints that document what the house will look like from various perspectives. The detail drawings for the major components of the overall construction, such as framing and the plumbing, electrical, and HVAC systems, are also a necessary part of the plans.
  • Bill of Materials: The detailed list of materials needed to build the house.
  • Maintenance: The specific considerations for keeping the house up and its systems operational. Although not typically a significant part of the house design process, these specifications are relevant.

2.3.2 The Enterprise Security System Design Model

  • Corporate Standards: The specific corporate standards that affect the enterprise security system.
  • Design Requirements: The specific design criteria that are settled on after consideration of wants, needs, costs, etc. One of these is already specified: the policy-driven security services. Other examples might include support for service-oriented application designs and role-based access control.
  • Governance: The principles, policies, and implementing standards that support the design requirements and the specific architecture.
  • Architectural Plan: This is the resulting set of conceptual diagrams and blueprints that document what the resulting security system will look like from various perspectives. The plan includes the conceptual and detail drawings for major subsystems such as identity management, access control, and border protection services, as well as the required products, applications, platforms, etc.
  • Security Services: The itemization of the services and ultimately the individual applications and products needed.
  • Operations: The considerations for day-to-day operation of the security services and supporting infrastructure.

Let’s take a look at each of these in detail and compare and contrast the components of the two models.

2.3.3 Community Standards versus Corporate Standards

It is important to keep in mind that both designs take place in a larger context that may impose constraints on the design – the house is part of a larger residential development or community, and the enterprise security system is part of a larger enterprise IT system.

In the house example, the community may impose standards to maintain a certain level of quality and appearance. It may, for example, restrict the use of certain types of siding and certain colors, and it may require a Jacuzzi and ceramic tile floor in the master bath and wood floors in certain rooms.

In the security example, corporate standards may be imposed to ensure that investments leverage existing technology or support infrastructure. They may, for example, require that all user-interfacing products support Lightweight Directory Access Protocol (LDAP) interoperability with their standard Network Operating System (NOS) or corporate directory to avoid proliferation of additional user registries and sign-on requirements.
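
As an illustration of the interoperability such a standard mandates, the following Python sketch shows a security product resolving a user against the shared corporate directory over LDAP instead of maintaining its own registry. It uses the ldap3 library; the server name, service credentials, and directory layout are hypothetical.

    # Resolve a user against the corporate directory rather than creating
    # another user registry. Hostname, DNs, and password are placeholders.
    from ldap3 import Server, Connection, SAFE_SYNC

    server = Server("ldap://directory.example.com")
    conn = Connection(server,
                      user="cn=svc-app,ou=services,dc=example,dc=com",
                      password="change-me",
                      client_strategy=SAFE_SYNC,
                      auto_bind=True)

    # Reuse the directory's existing identity and group data.
    status, result, response, _ = conn.search(
        "ou=people,dc=example,dc=com",
        "(uid=jdoe)",
        attributes=["cn", "memberOf"])
    if status and response:
        print(response[0]["attributes"])   # e.g., cn and group memberships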

2.3.4 Building Codes and Engineering Practices versus Governance

In both models, development of the architectural plan must consider the constraints imposed by this component, based on experience and good judgment.

In the house example, building codes and engineering practices are constraints developed through years of experience to ensure a sound and safe dwelling. Considerations here include such things as structural integrity, a healthy environment, and fire safety. The finished architectural plan for each house may vary widely, but all must comply with these requirements.

In the security example, governance defines the principles, policies, standards, guidelines, and procedures that constrain the design and operation of the security system. As with the house example, the governance elements are based on experience and good sense. Considerations include such things as simplicity, defense in depth, resilience, and common policy enforcement. As with the house example, security infrastructure implementations may vary widely, but all should comply with these requirements.

2.3.5 House Architecture versus Security Technology Architecture

In both cases the architectural plan represents the blueprints for implementation. In the house example, the industry has built enough houses to clearly understand the various levels of detail and perspectives necessary for successful construction. Unfortunately, not many security infrastructures have been built using a comprehensive plan, so we are not nearly so clear on the levels of detail or perspectives needed.

One thing we do seem clear on is that any good plan starts with some high-level pictures and successively expands the detail in some organized fashion until the physical construction blueprints have been completed and construction can begin. In the computing industry these levels of detail are commonly termed the conceptual, logical, and physical architectures.

At the conceptual level, our design has the artist’s renderings of various views of the house. We see what the house looks like, possibly from various perspectives, but without any of the construction details or internal system components. In the security context, this should be a picture or pictures of the infrastructure as a whole, defining the key design concepts – hence, a conceptual architecture.

At the logical level, our house design has floor plans to specify the layout of each floor and show how the rooms are connected. There is still no detail of construction or the systems such as plumbing, heating, or framing. In the security design, this is where we see major services (such as identity management, access control, and border protection) decomposed into a set of related components and supporting services. For identity management, we see provisioning services, external and internal directories, policy administration systems, HR systems, identity mapping services, and more.

At the physical level, our house design has details for assembling the framing, electrical, plumbing, and HVAC components. In the security context, we see deployment of products and applications that make up the various functional components; we see computing platforms and connectivity.

2.3.6 Bill of Materials versus Security Services

The security services are the security infrastructure bill of materials. These are the core functions we need to actually assemble a cohesive security infrastructure. To better understand this area, it’s useful to look at the similarities between a house bill of materials and security services.

In both cases it is easy to start by itemizing a high-level bill of materials. We all know what kinds of material it takes to build a house. We need lumber, concrete, pipes, fixtures, ducting, fasteners, etc. We can easily make this list, but without the detailed plan we are not able to specify the quantities and types of each component. Similarly, we all know what security services are needed, but without the plan we cannot accurately list the specific products and platforms. The bill of materials is not an integral part of the plan, although it is a necessary part of the overall effort. The detailed bill of materials is derived from the plan. The list of security services at the detailed product level allows us to know what we need to build or buy to implement our plan.

Although natural, it is a mistake to think we can start with this bill of materials (list of security services) and somehow derive the plan (this is discussed a little more in Section 2.3.8).

2.3.7 Maintenance versus Operations

Once we have completed our house or security infrastructure, we need some processes and tools to maintain our work in a quality state. Furthermore, we probably need to take maintenance requirements into account in the design phase to facilitate our maintenance activities after completion.

In our house example, design elements related to maintenance might include selection of siding and flooring materials, installation of a built-in vacuum system, or placement of hose bibs to facilitate washing exterior components. Typical maintenance considerations after construction might be a daily cleaning plan, periodic painting and structural repair, regular heating and plumbing maintenance, and an occasional upgrade or addition.

In the security context, operations includes processes and tools for day-to-day vulnerability management, event management, and incident management, as well as other aspects of daily security administration and operation. These elements ensure continued effective and efficient functioning of the security environment.

2.3.8 The Remodeling

Most enterprises do not start with a green field in the security infrastructure space. We all have existing environments developed over the years, typically started with independent proprietary platforms, each with its own security silo. The advent of the Internet has been the primary driver for the deployment of a variety of products and solutions that attempt to integrate these disparate systems. For most of us the current state is a hodge-podge of environments and tools in various states of interoperability. The good news is:

  • These point and reactive solutions have been built by smart people. Even if they did not use a comprehensive plan, these smart people typically made decisions and deployed solutions with a vision of what the plan should ultimately look like.
  • There is increasing focus on the development of security standards to deliver interoperability among these disparate platforms. Many standards are in the early stages of development or adoption, so it is likely that interoperability will improve with time. At the same time, however, the interoperability challenge is increasingly complex.
  • For the solutions we already have deployed, the marketplace is driving the vendors to continually enhance their interoperability, thus making our lives easier.

What this means as we work to articulate our new enterprise security infrastructure design is that we already have much of our bill of materials and we can probably use a substantial portion of our existing deployment.

So in the context of our analogy, we are possibly talking about house remodeling, not new construction. Somewhat contrary to what was stated earlier, the bill of materials will not be completely derived from the plan. There will now be some consideration of the existing construction (and inherent bill of materials) incorporated into our new design. However, in this case it is probably not wise to overemphasize the existing deployment when laying out the conceptual and upper-level aspects of the logical design. Consideration of the existing infrastructure will have more influence on the details of the logical design subcomponents and the physical design.

In our security context this remodeling probably means:

  • Leveraging existing work to identify the security drivers and governance components. Care should be taken to be comprehensive at this point in the effort and not to assume that previous work is up-to-date in our rapidly changing environment.
  • Assessing our existing environment and products as we work through the lower-level logical design and physical design. Much of what we have should be usable in our new comprehensive vision.
  • Identifying gaps and areas for improvement in our existing infrastructure and then making plans for closing the gaps and implementing the improvements.

With the house analogy as background, let’s move on to describe the O‑ESA framework and templates, starting with security governance and then describing security technology architecture and security operations. Hopefully the house analogy has provided a basis for clearer understanding of some of the terms we use, and at appropriate points, we’ll refer to the analogy again to clarify the discussion.

3 Security Governance

3.1 Governance Components and Processes

The focus now shifts to the security governance components and processes of this O‑ESA Guide’s overall framework, shown in the left center of Figure 5.

Figure 5: Security Governance Components and Processes

This section provides an overall security governance framework and template that member organizations can tailor to their needs. The governance components and processes were introduced earlier as follows:

  • Principles: Basic assumptions and beliefs providing overall security guidance.
  • Policies: The security rules that apply in various control domains.
  • Standards, Guidelines, and Procedures: The implementation of the policies through technical requirements, recommended practices, and instructions.
  • Audit: The process of reviewing security activities for policy compliance.
  • Enforcement: The processes for ensuring compliance with the policies.

The following sections provide an overview of the overall governance process and the policy framework, followed by descriptions of the individual components and processes identified above.

3.2 Governance Process Overview

For technicians, it is easy enough to find technical solutions to business problems; for example, there are various solutions for protecting a customer’s identity. But how do technicians know their responsibility is to protect identity? How do they know that management has mandated this requirement? How do they know what the standards and guidelines are for implementing this requirement? How do they know what resources and services are available to implement a solution? Or more simply put, how do you, as a technician in an IT service organization, know what needs to be done to provide and maintain secure technical solutions that support the business mission and objectives of your organization?

This O‑ESA Guide has identified this critical ESA component as governance. In this O‑ESA Guide’s vision of ESA, there is a strong linkage among governance, technology architecture, and operations. That linkage is provided via the policy framework, which is at the heart of the governance model, and the policy-driven security architecture framework, which is at the heart of the technology architecture and operations model. Before we describe the policy framework, it’s useful to look at the overall governance process.

If we take a process-oriented view of defining a governance framework, the first step is to identify the guiding principles that your organization will follow in securing the information technology assets of the enterprise. These principles provide the highest level of guidance for the security governance process as well as for technology architecture and operations.

The second step is to authorize enforcement of the guiding principles through the creation of policies in various domains of management control. The control domains – such as organizational security, asset classification and control, personnel security, and access control – represent the highest-level identification of policy. The specific policies within each of these domains authorize a course of action.

The third step is to implement the authorized courses of action. The results are the technical standards, guidelines, and procedures that govern IT security for the organization.

The two additional governance concepts are enforcement and ongoing assessment. Typically, enforcement controls are built into the technical standards and procedures, but there are also requirements for separate enforcement processes triggered, for example, as a result of security-related events. Ongoing assessment is needed to respond to change as business models evolve, new technologies are developed, and new legislation is passed. An example of such a change occurred in the 1990s when business products and services were suddenly offered directly to the consumer through web-based front ends to the traditional services. This change created the need to extend confidentiality principles to encompass the protection of personal data – the need for privacy protection is now taken for granted and is in many cases mandated by law. The effects of this sea change are still unfolding through the implementation of the Health Insurance Portability and Accountability Act (HIPAA) and other privacy-related legislation. Ongoing assessment is necessary to detect and respond to smaller changes as well and should be a built-in process for continuous improvement.

3.3 Governance Process Roles

Many different people are involved in identifying the guiding principles, authorizing them through policies, implementing and enforcing the policies, and continually assessing the effectiveness of the governance process. These people are not only involved in creating and maintaining the governance framework but may also have roles in technology architecture and operations.

Organizational managers are responsible for defining the organization’s principles by classifying the data used to drive the organization’s business needs based on legal, statutory, regulatory, and contractual agreements. Organizational management is also responsible for managing risk.

Information systems management or the CIO is responsible, through the creation and maintenance of policies, for managing the organization’s technical systems that support the business services identified by organizational management.

The security officer is responsible for the security of a company's communications and other business systems. The security officer may also work with the CIO in planning for and managing disaster recovery. The security officer is likely to be involved in both the business (including people) and technical aspects of security, and is responsible for managing security incidents.

The data security officer assists with identifying and assessing risks associated with an organization’s data structure. This includes how data is accessed, stored, managed, and transferred.

Technical architects are responsible for building policy enforcement into the technical architecture.

Technicians (operations) apply the standards, guidelines, and procedures to their areas of responsibility.

3.4 Governance Model Policy Framework

At the heart of the governance model is the policy framework. As mentioned earlier, this O‑ESA Guide’s vision of ESA includes a strong linkage among governance, technology architecture, and operations. At the governance level, the policy framework provides this linkage.

Figure 6 identifies this O‑ESA Guide’s generic policy framework. The basic framework concept is very simple; however, conceptual simplicity does not guarantee ease of definition and implementation.

Figure 6: Generic Policy Framework

The following discussion explains the details of the framework.

  • Identify the guiding principles for your organization:

—  Start with the fundamental objectives of IT security: availability, integrity, confidentiality, accountability, and assurance.

—  Identify basic assumptions and beliefs, derived from your organization’s mission, values, and experience.

—  Identify organization-specific business, legal, and technical principles.

—  Tailor the O‑ESA Guide principles template to your needs, based on these and any other organization-specific considerations.

  • Authorize enforcement of your organization’s guiding principles through an agreed policy template:

—  Start with the ISO/IEC 27001/2 policy template, keeping in mind that these are high-level guidelines, not detailed technical guidelines.

—  Modify and extend the policy template based on your guiding principles and business needs.

—  Or purchase an ISO/IEC 27001/2-compliant set of policies and modify them as required to align with guiding principles and business needs.

  • Implement the standards, guidelines, and procedures for your organization’s technical environment, based on your policy template:

—  Section 3.6.3 includes examples of policy implementation guidance from the ISO/IEC 27001/2 standard.

—  Section A.1 identifies additional sources of implementation guidance.

  • Enforce compliance with policy. Enforcement is typically built into the technical standards and procedures, and it is supported by this O‑ESA Guide’s policy-driven security architecture (see Chapter 4). In addition, there are requirements for separate enforcement processes triggered as a result of security-related incidents or audits. Audit is highlighted in this O‑ESA Guide’s overall security program framework diagram because of its importance in supporting the requirement for accountability to the individual level.
  • Conduct ongoing assessments for evaluating and responding to changes that may impact security policy; e.g., when business requirements change, new threats arise, new technologies are developed, and new legislation is passed.

ISO/IEC 27002:2005: Code of Practice for Information Security Management is a well-established international standard in the enterprise security space. This O‑ESA Guide embraces it as an integral part of the O‑ESA policy framework. It has broad applicability across the many organizational types represented in this O‑ESA Guide.

3.5 Governance Principles

Technology governance principles are the basic assumptions, beliefs, theories, and values guiding the use and management of technology within an organization. All policies, standards, architectures, designs, operations, and other components of the technology process should align with these principles unless a governance body grants an exception.

Depending on the organization, governing principles may be established at one or more levels; this document focuses on the governing principles for enterprise security. Identifying an organization’s guiding security principles is the first critical step in the governance process. These security principles constrain the definition of the other governance components, such as policies and standards, and they constrain the definition of the technology architecture and operations components. As organizations adapt these principles to their particular needs, they must ensure alignment with their higher-level corporate IT principles, which provide guidance on the use and deployment of all IT resources and assets across the enterprise.

The following O‑ESA definition of security principles is based on input from several member organizations as well as NIST[6] and Microsoft[7]. This input is sorted into ten categories that represent the highest-level principles. Within each category are second-order and in some cases third-order principles.

3.5.1 Security by Design

Security should not be an afterthought or add-on. Security considerations should begin with the requirements phase of development and be treated as an integral part of the overall system design.

  • Establish a sound security policy as the “foundation” for design.
  • Build security into the life cycle:

—  Plan for system maintenance

—  Ensure proper security in the shutdown or disposal of a system

—  Commit to secure operations

  • Clearly delineate the physical and logical security boundaries governed by associated security policies.
  • Protect technology assets through a comprehensive security program that includes appropriate security education, processes, and tools:

—  Invest in secure design

—  Train developers in the techniques, processes, and tools needed to ensure secure software

—  Define the organizational roles and responsibilities required to implement security by design in your culture

Note that Security by Design is the inverse of Design for Malice – see Section 3.5.9.

3.5.2 Managed Risk

Risk and security countermeasures should be balanced according to business objectives. Identify potential trade-offs between reducing risk and increasing cost, including negative impacts on other aspects of operational effectiveness, if any.

  • Reduce risk to an acceptable level.
  • Identify and prevent common errors and vulnerabilities.
  • Assume that external systems are insecure.
  • Ensure that the cost of security controls does not exceed the benefits (i.e., the tangible and intangible costs of the losses that could occur in the absence of the controls).

3.5.3 Usability and Manageability

Two aspects of usability must be considered: the end-user experience and the ease of administration and operation. Security should be user-transparent and not cause users undue extra effort. Administration and configuration of security components should not be overly complex or obscure.

  • Base security on open standards for portability and interoperability.
  • Use common language in developing security requirements.
  • Design security to allow for regular adoption of new technology, including a secure and logical technology upgrade process.
  • Automate identity and access management activities.
  • Strive for operational ease-of-use.

3.5.4 Defense in Depth

Greater security is obtained by layering defenses.

  • Ensure that there is not just a single point of protection.
  • Implement security through a combination of measures distributed physically and logically.
  • Isolate public access systems from mission-critical resources (e.g., data, processes, etc.).
  • Use common boundary mechanisms to separate computing systems and network infrastructures.

3.5.5 Simplicity

Complexity is the enemy of security. Systems should be as simple as possible while retaining functionality.

  • Minimize the number of system elements to be trusted. Reduce the attack surface.
  • Do not implement unnecessary security mechanisms.
  • The number of security modules and services in the corporate systems environment should be minimized based on technical feasibility, cost, and security requirements.

3.5.6 Resilience

Design and operate IT systems so as to limit vulnerability and to be resilient in response. Automated recovery from attack or failure is desirable. The design should include the ability to restore operations in the event of a disaster, within a timeframe appropriate to business needs.

  • Take appropriate measures to secure the information and communications business-critical infrastructure to enable business continuity in the event of disaster or attack:

—  Exercise contingency or disaster recovery procedures to ensure appropriate availability.

—  Ensure that security systems support restoration of data and recovery of function.

3.5.7 Integrity

All components of the computing environment must provide for information integrity and confidentiality.

  • Protect information while it is being processed, in transit, and in storage.
  • Protect personally identifiable information and enforce other privacy requirements.
  • Base decisions on data classification and fair use.
  • Protect resources by using strong authentication.
  • Formulate security measures to address multiple overlapping information domains.
  • Authenticate users and processes to ensure appropriate access control decisions both within and across domains.
  • Design and implement audit mechanisms to detect unauthorized use and to support incident investigations.
  • Monitor and audit system access and use.
  • Practice incident response.
  • Limit access to systems and data to the least privilege required to perform a job function.
  • Permit external access to enterprise technology assets only through methods that ensure enforcement of appropriate security measures.

3.5.8 Enforced Policy

Implement processes, procedures, and systems that promote enforcement of organizational security policies. Design component configuration procedures in accordance with security policy. Automate access control decisions based on corporate user identity information and access control policy statements.

  • Implement policy-driven access control.
  • Monitor identity confirmation.
  • Use unique identities to ensure accountability.
  • Leverage common enterprise identity and access management services.
  • Distribute management of identity information.
  • Utilize role-based and/or policy-based access control for authorization.
  • Enforce secure configuration and hardening.
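
As an illustration of the policy-driven, role-based access control described above, the following minimal Python sketch shows an access decision automated from identity data and policy statements. All names, roles, and rules are hypothetical.

    # Minimal sketch of policy-driven, role-based access control.
    POLICY = {
        # (role, resource): allowed actions
        ("payroll_clerk", "payroll_db"): {"read"},
        ("payroll_admin", "payroll_db"): {"read", "write"},
    }

    IDENTITY_STORE = {
        # unique identity -> roles, maintained by identity management
        "jdoe": {"payroll_clerk"},
        "asmith": {"payroll_admin"},
    }

    def is_authorized(user: str, resource: str, action: str) -> bool:
        """Automated access decision driven by identity data and policy rules."""
        for role in IDENTITY_STORE.get(user, set()):
            if action in POLICY.get((role, resource), set()):
                return True
        return False  # default deny: anything not explicitly permitted is refused

    assert is_authorized("jdoe", "payroll_db", "read")
    assert not is_authorized("jdoe", "payroll_db", "write")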

3.5.9 Design for Malice

Note that Design for Malice is the inverse of Security by Design – see Section 3.5.1.

Information systems, organizations, and users have very fuzzy boundaries between what is deemed internal and what is deemed external. In terms of security architecture, these differences have been defined by the ability to apply additional controls, such as access control, to company systems and users. However, security mechanisms like access control typically function by implementing a security policy covering concerns such as authentication and authorization. These policies describe the rules of the road for participating in the system and are most applicable to customers, employees, and other users who are attempting to play by the rules. While security policies often do a thorough job of carving up the users into organizational units, groups, and roles, and putting authentication and authorization protocols around them, these policies and protocols are not generally designed to operate in a hostile environment.

To deal with malice, additional steps must be taken to protect against, detect, and respond to malicious actors who may elect to act entirely outside of the prescribed policy domains. There are two main changes to consider: first to the security Policy Enforcement and Decision Points, and second to the system’s subjects, objects, and other entities outside of the Policy Enforcement Point (PEP) and Policy Decision Point (PDP).

In the case of PEPs and PDPs:

  • Don’t Trust and Verify: Trust and verify has proven to be a way to take on a lot of risk in a short amount of time. A better approach from a security point of view is to assume maliciousness: policies, requests, responses, tokens, and other exchanges with the PEP/PDP must be protected through encryption and other means. Verification means verifying the subjects, objects, resources, conditions, and actions under consideration by the PEP/PDP.
  • Detection and Monitoring: Ensure that the PEP/PDP reports all events and decisions to a separate, standalone Audit Logging system, as in the sketch below.
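
A minimal sketch of the audit requirement in the second bullet, assuming Python's standard logging module: every PDP decision is written to a dedicated, standalone audit destination, separate from ordinary application logs. The file name and record fields are hypothetical.

    import json, logging, time

    # Route every PDP decision to a dedicated audit log,
    # kept separate from application logging.
    audit = logging.getLogger("pdp.audit")
    audit.addHandler(logging.FileHandler("pdp_audit.log"))  # standalone destination
    audit.setLevel(logging.INFO)

    def audited_decision(subject: str, resource: str, action: str, decision: str) -> str:
        audit.info(json.dumps({
            "ts": time.time(), "subject": subject, "resource": resource,
            "action": action, "decision": decision,
        }))
        return decision

    audited_decision("jdoe", "payroll_db", "read", "Permit")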

Outside of the PEP/PDP there are several additional steps to designing for malice; these problems and solutions are described by Howard Lipson (CERT)[9] as answering this challenge: “Traditional computer security is not adequate to keep highly distributed systems running in the face of cyber attacks. Survivability is an emerging discipline – a risk-management-based security paradigm.”

Survivability is “the ability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents”. The three Rs of survivability that deliver on this goal are:

  • Resistance: Ability of a system to repel attacks.
  • Recognition: Ability to recognize attacks and the extent of the damage.
  • Recovery: Ability to restore essential services during attack, and recover full services after attack.

Looking at security architecture through the lens of malice is quite useful, because the output of Security by Design activities is controls like authentication and authorization, which yield increased Resistance (the first R) but do relatively little to address Recognition and Recovery. Resistance, it should be noted, applies only when the PEP/PDP cannot be bypassed and its enforcement and decision functions cannot be subverted.

The fundamental goal of survivability is:

  • The mission must survive:

—  Not any individual component

—  Not even the system itself

3.5.10 Mobility

The combination of various technologies is referred to as the Internet of Things (IoT).[10] These include mobile devices, RFID, Near Field Communication (NFC), 2D bar codes, wireless sensors/actuators, Internet Protocol Version 6 (IPv6), ultra-wide band, and 3G/4G mobile networks.

The report cited above identifies three trends:

  • Scale: The number of connected devices is increasing, while their size is reduced below the threshold of visibility to the human eye.
  • Mobility: Objects are ever more wirelessly connected, carried permanently by individuals, and geo-localizable.
  • Heterogeneity and Complexity: IoT will be deployed in an environment already crowded with applications that generate a growing number of challenges in terms of interoperability.

The rapid growth in mobile applications – a typical employee now has at least “three computing screens” (work screen, smartphone screen, and home PC screen) – enables users to roam and still connect to enterprise assets from mobile locations, and this creates some subtle security nuances that relate to Usability and Manageability. It begins with the typical situation that mobile devices, while powerful compared to their predecessors, are (1) much more constrained in terms of power, storage, and bandwidth than a PC, and (2) highly proprietary and Byzantine.

The server side of mobile applications is often based on web services, but these are frequently delivered through special-purpose mobile tiers that perform caching, optimization, routing, and other functions that improve the mobile experience.

  • Mobile device security is often unreliable; as with all clients, the server should not trust data from mobile devices and should always verify it (see the sketch after this list).
  • Servers and services that provide access to mobile applications must be designed to deal with special security concerns around caching, backhaul resolution, and other techniques.
  • Security protocols and mechanisms must be reviewed for their fit with mobile use-cases – it cannot be assumed that general-purpose enterprise security protocols and mechanisms will perform the same way in a mobile context.
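
The following minimal sketch illustrates the first bullet: a server re-validates everything a mobile client sends instead of trusting client-side checks, which may have been bypassed. The payload fields and limits are hypothetical.

    # A server re-validates client-supplied data rather than trusting
    # checks performed on the mobile device.
    def validate_transfer_request(payload: dict) -> dict:
        errors = []
        amount = payload.get("amount")
        if not isinstance(amount, (int, float)) or not (0 < amount <= 10_000):
            errors.append("amount out of allowed range")     # server-side limit
        account = payload.get("account", "")
        if not (isinstance(account, str) and account.isdigit() and len(account) == 10):
            errors.append("malformed account number")
        if errors:
            raise ValueError("; ".join(errors))               # reject, never repair
        return {"amount": float(amount), "account": account}  # canonicalized copy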

3.6 Policies

Policies define the authorizations and a program of actions adopted by an organization to govern the use of technology in specific areas of management control. Policies are a security governance tool used to enforce an organization’s guiding principles. They are established and maintained through standards, guidelines, and procedures in accordance with related legal and business principles.

The development, use, and enforcement of policies as well as the level of policy detail may differ among organizations based on their business functions, cultures, and technology models. One organization may have a few policies that authorize the creation of many standards, guidelines, and procedures, while another organization may embed standards, guidelines, and procedures within its policies. In addition, policy development, enforcement, and maintenance strategies may differ.

3.6.1 Policy Development

Policy development history and current practice vary widely among organizations. Even applying a model such as the ISO/IEC 27002:2005 Code of Practice can be difficult unless the organizational goals are first identified. The following are questions worth considering when creating a policy:

Analysis Questions

  • What is being protected?
  • Which principle or principles does the policy enforce?
  • To whom does the policy apply, and are there limitations in the policy?
  • Does the policy fit the organization’s business needs and culture?
  • Does the policy relate to the activities that actually take place within the organization?
  • What deviations from the policy are acceptable?
  • Does the policy state what must be done and what happens if the policy is not carried out?

Implementation Questions

  • Who approved, authorized, and deployed the policy?
  • When does the policy take effect?
  • When, if ever, does the policy expire?
  • Does all appropriate management properly support the policy?

Enforcement Questions

  • Is the policy enforceable?
  • Who is responsible for enforcing the policy?
  • What are the ramifications of noncompliance?
  • Who is responsible for monitoring and reporting policy violations?

Maintenance Questions

  • Who is responsible for updating and maintaining the policy?
  • How often should the policy be reviewed and updated?

Communication Questions

  • How is the policy communicated?
  • How are changes to the policy communicated?

3.6.2 Policy Template – ISO/IEC 27002

The following list identifies the policy standards included in ISO/IEC 27002:2005: Code of Practice for Information Security Management.[11] The policies are described at three levels. The first level is what we have referred to as the policy domain. Note that the first domain is security policy, which defines the requirement to develop and implement an information security policy. Altogether there are ten policy domains:

  • Security Policy
  • Organizational Security
  • Asset Classification and Control
  • Personnel Security
  • Physical and Environmental Security
  • Communications and Operations Management
  • Access Control
  • Systems Development and Maintenance
  • Business Continuity Management
  • Compliance

3.6.3 Security Policy Language – XACML

Security policy languages enable the security architect to express rules about allowable and non-allowable behaviors. Standards such as XACML (eXtensible Access Control Markup Language) have proven useful for resolving authorization requests in an interoperable way. This achieves the goal of consistent security policy: when enterprises have policy settings in infrastructure (like routers and firewalls) and in information systems (like identity management, applications, and databases), access control can be created, managed, and enforced in a consistent manner despite different runtime implementations.

As noted in the Executive Summary at the start of this O‑ESA Guide, XACML is an OASIS standard. It is both a declarative access control policy language implemented in XML and a processing model describing how to interpret and evaluate the policies. XACML Version 2.0 was ratified by the OASIS standards organization on 1 February 2005. The planned Version 3.0 will add generic attribute categories for the evaluation context and a policy delegation profile (administrative policy profile).
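
XACML policies themselves are XML documents evaluated by a conformant engine; the Python sketch below merely illustrates the underlying processing model in greatly simplified form: attribute-based rules resolved by a combining algorithm (here, deny-overrides). The attributes and rules are hypothetical, and this is not the XACML schema or any engine's actual API.

    # Greatly simplified illustration of the XACML processing model.
    RULES = [
        # (effect, predicate over the request attributes)
        ("Deny",   lambda r: r["resource"] == "hr_records" and r["role"] != "hr"),
        ("Permit", lambda r: r["action"] == "read"),
    ]

    def evaluate(request: dict) -> str:
        """Deny-overrides combining: any matching Deny rule wins."""
        decisions = [effect for effect, applies in RULES if applies(request)]
        if "Deny" in decisions:
            return "Deny"
        if "Permit" in decisions:
            return "Permit"
        return "NotApplicable"

    print(evaluate({"role": "engineer", "resource": "hr_records", "action": "read"}))  # Deny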

The blogging world has a number of relevant articles on issues with XACML. One example is “Problems with XACML and their Solutions”.[12] While most practitioners acknowledge these challenges, they also agree that XACML addresses the subject space well, and they appreciate that no standard in this complex space can provide solutions to all the issues or be simple to understand and use. We can expect XACML Version 3.0 to improve on Version 2.0, based on implementation experience from diverse industry development sources which recognize the value of open standards-based solutions as a major value-add.

3.7 Standards, Guidelines, and Procedures

Policies are implemented through technical standards, guidelines, and procedures, which this O‑ESA Guide distinguishes as follows:

  • Standards are mandatory directives.
  • Guidelines are recommended best practices.
  • Procedures describe how to achieve the standard or guideline. Usually they are incorporated within the standards or guidelines. In some cases, separate procedures may be needed; for example, to establish a process whereby independent business units comply with corporate policies or standards.

This section does not attempt to provide a complete template for standards, guidelines, and procedures that implement the ISO/IEC 27001/2 policies. Such a template could include hundreds of specific standards covering a broad range of infrastructure topics and platforms.

One of the many realizations after the September 11, 2001 attacks is the importance of the cyber infrastructure in supporting emergency response systems. Many IT security professionals further emphasized the importance of securing our financial systems against the threats of cyber-terrorists. The past 10 to 15 years have demonstrated that professional criminals have become proficient at exploiting vulnerabilities in our cyber infrastructure to commit many lucrative variations of fraud and extortion. These vulnerabilities range from mis-configuration of equipment to software bugs and the social engineering of employees and general computer users. Because of the rise in criminal activity and the lessons learned from 9/11, there has been an ever-increasing number of information technology standards to consider when developing an enterprise security architecture.

In developing your organizational IT Security Governance model, management should identify the standard(s) that apply to the organization’s environment and set policies requiring systems to be created and maintained in compliance with them. Standards range from broad ones that address the typical security domains to those that address a specific technology, application, or data type. Note that many security professionals draw a distinction between complying with a standard and implementing and maintaining controls that appropriately secure an organization or enterprise system, because a standard may not adequately address a control that is specific or unique to the organization. The organization will need to monitor and address these differences as part of the enterprise security architecture.

Two organizations that address the breadth of information security are the International Organization for Standardization (ISO) and the US-based National Institute of Standards and Technology (NIST). ISO’s most recent model is ISO/IEC 27001:2005: Information Technology – Security Techniques – Information Security Management Systems – Requirements. ISO also published ISO/IEC 27002: Information Technology – Security Techniques – Code of Practice for Information Security Management, which provides additional guidance on implementation and controls. NIST has a series of special publications that address various components of the information technology space. NIST SP 800-53: Recommended Security Controls for Federal Information Systems and Organizations can be considered the starting point for developing an organizational security program. This document covers the spectrum of the IT security domains and references other NIST special publications that offer specific guidance for implementation and maintenance of a specific control.

The payment card industry created a standard to address the high number of incidents involving the loss or compromise of credit card data. Merchants who process or store credit card data are required by the credit card provider to maintain compliance with the Payment Card Industry Data Security Standard (PCI-DSS). This standard was originally a simplified approach compared to the government standards. However, as banks and credit card providers continued to experience losses as the result of merchants’ mismanagement of their systems, the standard has grown more detailed and complex. Merchants are now required to comply with this standard, and if a breach occurs, their bank holds the merchant financially responsible until the exploited vulnerability is mitigated.

Whereas the ISO, NIST, and PCI standards address the full spectrum of a security program, authentication standards focus on controls that ensure the accuracy of identities stored by the credential provider, the strength and security of the credential set, and the security of the infrastructure and its management. To address the growing need to federate organizational credentials (e.g., user names and passwords), organizations such as InCommon have developed identity assurance assessment frameworks. Members of InCommon who provide federated authentication and authorization services are encouraged to implement and maintain this framework. NIST provides a similar model in NIST SP 800-63: Electronic Authentication Guideline.

Comparing these standards against risk modeling tools suggests a similar, collectively shared risk tolerance. Maintaining compliance with a standard does not, however, ensure complete security for the organization or the system.

Most standards follow a similar framework, illustrated in Figure 7. Once the standard is chosen, the organization assesses risk based on the controls that are or are not implemented. Policies are then reviewed, revised, or created to mitigate, transfer, or avoid the risk. Controls to implement and maintain the policy are then documented. Users and/or administrators are trained to manage the controls and comply with the policy. Appropriate staff monitor and report on compliance with the policies, on changes in the threat landscape, and on vulnerabilities created by changes in software or business practice. The organization or system is then re-assessed at the appropriate time or when major changes to the organization or system occur.

 

Figure 7: Typical Framework for Security Policy Standards & Guidelines

3.8 Enforcement

Enforcement is the overall process of ensuring compliance with policy. It is accomplished through a combination of technical controls, process and procedure controls, and management controls. Many of these controls are built into the implementing technical standards and procedures. Management controls provide for discretionary invocation of enforcement processes (such as disciplinary actions) as a result of security events or incidents. Management enforcement depends upon maintaining accountability for user actions, which is performed primarily by the audit and non-repudiation services.

3.9 Ongoing Assessment

Ongoing assessment is the process of evaluating and responding to changes that may impact any aspect of the governance process and policy framework.

Changes in business, legal, and technical principles need to be reviewed periodically in order to determine whether additions or modifications to security policy may be implied or even mandated.

Policies need to be reviewed to ensure that they are effectively protecting IT assets as intended. For example, if a security incident indicates that an unauthorized person was able to access data from an unattended workstation, then the policy that restricts inappropriate access needs to be reviewed for enforcement practices. The applicable standard also needs to be reviewed.

Standards, guidelines, and procedures also need to be reviewed on an ongoing basis as new employees are hired, new systems or services are implemented, or current systems are upgraded.

An effective and efficient ongoing assessment process requires supporting tools and metrics. The key is to collect and measure data that identifies the strengths and weaknesses of the security architecture as implemented. For example, it will be easier to demonstrate to management that a stronger anti-virus package is required if you have historical metrics showing the impact of virus attacks on your organization.

3.10 Governance Example

This section provides an example from the fictitious XYZ Company to illustrate the relationship between security principles and policies and the implementing standards. In summary, the key relationships are:

  • Policies define the rules for a particular domain – in this case, the domain is authentication passwords.
  • Policy rule definitions must be consistent with guiding principles – in this case, there are two guiding principles: integrity and usability.
  • A policy may be implemented by multiple standards covering different aspects of the policy – in this example, only one of the standards is shown.

The names of the principles, policy, and standard for this example are shown in bold.

Integrity Principle

All components of the computing environment must provide for information integrity and confidentiality. Resources must be protected using strong authentication.

Usability and Manageability Principle

Two aspects of usability must be considered: the end-user experience and the ease of administration and operation. Ideally, security should be user-transparent and not cause users undue extra effort. Administration and configuration of security components should not be overly complex or obscure.

3.10.1 Authentication Policy Example

Requirement

Authentication mechanisms must be protected commensurate with the value of the information or business process they support, and they must be resistant to common methods of compromise.

Overview

Authentication mechanisms substantiate a claim of identity through the use of authentication data. All components of authentication systems need to be protected from unauthorized disclosure and misuse to preserve the integrity of the authentication. Additionally, the data components of authentication systems need to be protected commensurate with the sensitivity of the assets they help protect. The following components and processes accomplish authentication and protect authentication mechanisms:

  • Selection of authentication data
  • Collection of authentication and verification data
  • Association of collected authentication and verification data with an identity
  • Protection of user's authenticator
  • Transport of the authenticator
  • Protection of verification data in storage
  • Validation of authenticator
  • Access permission or denial based on results of the authentication

When authentication data is presented to the authentication mechanism, it is called an authenticator. Authenticators are generated in the following ways (called factors):

  • Something you have (e.g., a token card)
  • Something you know (e.g., a password)
  • Something you are (e.g., fingerprints)

The three XYZ Company standard authentication types are designated as normal, supplemented, and strong.

Normal authentication is the weakest type allowed on the XYZ Company internal network. It includes clear-text passwords and digital certificates where the private key protection cannot be guaranteed.

Supplemented authentication is normal authentication that is implemented in combination with additional controls. Supplemented authentication is resistant to common methods of compromise and needs to be used instead of normal authentication when additional risk is present. For example, additional risk is present when an entity connects to an XYZ Company asset from outside the XYZ Company internal network. A standard way to implement supplemented authentication is using normal authentication within an approved encrypted channel such as Secure Sockets Layer (SSL).

Strong authentication is authentication that provides a high degree of accountability and assurance of identity on its own. To be considered strong, authentication needs to incorporate two factors. Standard ways to implement strong authentication include:

  • Smart card with a Personal Identification Number (PIN)
  • Two-factor token card (e.g., an RSA SecurID token that provides an ever-changing password, used in combination with a PIN)
  • X.509 certificate, where the private key is protected by a PIN that complies with the password selection standard

Verification data is the information a system compares with the user's authenticator to validate the user's identity. Usually, verification data is not stored in the same format as the authenticator. Typically, a secure mathematical operation (encryption, one-way hash) is performed to derive verification data from the authenticator.
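
For illustration, the following Python sketch derives verification data from an authenticator using a salted one-way hash, so that the password itself is never stored. The parameter choices are illustrative assumptions, not a vetted recommendation.

    import hashlib, hmac, os

    # Derive verification data from the authenticator with a salted
    # one-way hash; store the derived data, discard the password.
    def make_verification_data(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest

    def verify(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)   # constant-time comparison

    salt, digest = make_verification_data("correct horse battery staple")
    assert verify("correct horse battery staple", salt, digest)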

3.10.2 Password Quality Enforcement Standard Example

Systems must be configured so that static passwords cannot be re-used for an account for eight reset cycles. When systems do not support eight cycles, the maximum number of cycles permitted by the system must be used.

Periodic checking for weak passwords should be performed. Password checking by any organization or individual must be authorized by Enterprise Computing Security.

  • When a weak password is discovered, the user should be notified to change the password immediately. If the user is unavailable or unable to comply, the account should be disabled.
  • System administrators desiring to discover weak passwords must confine their activities to systems under their direct responsibility.
  • Any password scanning software or resulting data must be protected, and visibility must be limited to persons with a need to know.

Systems should be configured to enforce password complexity, when such capability is provided by the infrastructure.

Systems configured to enforce password complexity should allow passwords that can be used on multiple systems. Such passwords have the following characteristics:

  • They contain eight characters.
  • The first character is alphabetic.
  • At least one numeric character is in the second through seventh character position.
  • The last character is non-numeric.
  • When the system is case-sensitive, the password includes at least one uppercase and one lowercase character.
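
A minimal Python sketch of a checker for these characteristics follows, reading “contain eight characters” as a length of exactly eight; it illustrates the XYZ Company example standard, not a general password policy recommendation.

    # Checker for the XYZ Company password characteristics listed above.
    def meets_standard(pw: str, case_sensitive: bool = True) -> bool:
        return (
            len(pw) == 8                                   # eight characters
            and pw[0].isalpha()                            # first character alphabetic
            and any(c.isdigit() for c in pw[1:7])          # digit in positions 2-7
            and not pw[-1].isdigit()                       # last character non-numeric
            and (not case_sensitive
                 or (any(c.isupper() for c in pw)
                     and any(c.islower() for c in pw)))    # mixed case when supported
        )

    assert meets_standard("Ab3cdefg")
    assert not meets_standard("Abcdefgh")   # no digit in positions 2-7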

3.10.3 Example Comments

This example illustrates an Open Group member’s implementation of authentication policy through a password quality enforcement standard. Not shown are 13 other standards that implement various aspects of the member’s authentication policy.

The password quality enforcement standard stipulates periodic checking for weak passwords and mandates their replacement. The standard also takes into account the principle that security should be user-transparent and not cause users undue extra effort by allowing for passwords that can be used on multiple systems.

Also note that in this example enforcement is built into the standard – if the user is unavailable or is unable to comply, the user account is disabled. Other examples may not be so simple or clear-cut and may involve a separate enforcement process that invokes disciplinary actions.

4 Security Technology Architecture

4.1 Components and Processes

The focus now shifts to the security technology architecture components and processes of this O‑ESA Guide’s overall framework; these components and processes are shown in Figure 8.

Figure 8: Security Technology Architecture Components and Processes

The purpose of this section is to provide an overall security technology architecture framework and template that member organizations can tailor to their needs. The overall framework is described at four levels of abstraction: conceptual framework, conceptual architecture, logical architecture, and physical architecture. These were introduced briefly during the earlier discussion of the security program framework, as follows:

  • Conceptual Framework: Generic technical framework for policy-based security services.
  • Conceptual Architecture: Conceptual structure for management of decision-making and policy enforcement across a broad set of security services.
  • Logical Architecture: Structure and relationships of the key components and services defined within the constraints of the conceptual architecture.
  • Physical Architecture: Identifies the structure of specific products, showing their placement and the connectivity relationships required to deliver the necessary functionality, performance, and reliability within the constraints of the logical architecture.

These definitions are tied closely to this O‑ESA Guide’s vision of policy-driven security, with a strong linkage among governance, technology architecture, and operations. This is the fundamental concept underlying the definition of ESA and forward-looking enterprise security system implementations. The following sections explain the concept, starting with the conceptual framework.

4.2 Conceptual Framework for Policy-Driven Security

Figure 9 shows this O‑ESA Guide’s conceptual framework for policy-driven security services.


Figure 9: Policy-Driven Security Conceptual Framework

In simple terms, it stores electronic policy representations[13] in a policy repository so that they can be referenced at runtime to make and enforce policy decisions. The following are the principal components of the policy model:

  • Policy Management Authority (PMA): The PMA is a person or application entity that composes or creates electronic policy representations through a policy console, policy interpreter, or other tool. These electronic policy representations may be expressed in XML-based policy language, directory entries, configuration file entries, or some other form. Often they are configured directly into a proprietary product’s policy interface or policy repository. There may be multiple PMAs for different policy domains.[14]
  • Policy Decision Point (PDP): The PDP accesses electronic policy information and makes runtime policy decisions at the request of a Policy Enforcement Point (PEP). The PDP is sometimes collocated with the PEP due to product packaging, often justified by performance considerations. In other situations, it is desirable to decouple the PDP from the PEP. There may be multiple PDPs and PEPs, but overall there should be fewer PDPs than PEPs, so as to reduce policy administration and/or allow PDPs to offload complex logic from PEPs.
  • Policy Repository: The repository is where electronic policy representations are stored. Repositories may be general-purpose directory services, or they may be service-specific policy repositories associated, for example, with a specific access management product.
  • Policy Enforcement Point (PEP): The PEP enforces policy at runtime, based on the policy decisions made by the PDP. PEP functionality may be tightly integrated with the security service, as in the case of typical file system, database, or firewall product implementations. Alternatively, it may be a separate agent or plug-in that extends a service implementation to provide policy enforcement, as in the case of web access management agents or web server plug-ins that control access to web pages.
  • Security Services: These are the core functions of the O‑ESA that cooperate to provide a complete enterprise security services system.
  • Resources: The IT assets that security services protect.

4.3 Conceptual Architecture for Policy-Driven Security

With the conceptual framework as a starting point, this section describes O‑ESA’s overall conceptual architecture for policy-driven security services. It starts by further decomposing the policy management and security services components of the framework into the specific conceptual services shown in Figure 10. It then describes the policy decision and enforcement point concepts in further detail.


Figure 10: Policy-Driven Security Conceptual Architecture

As shown on the left of the figure, policy management has been split into identity management, access management, and configuration management services, which represent three roles of the PMA shown in the conceptual framework. Management services are responsible for maintaining their electronic representation of runtime policy information in the policy repository. A provisioning function can be used to automate the process of updating the policy repository in a timely fashion, so that runtime policy decisions are accurate within an acceptable timeframe. Section 4.4 addresses provisioning in more detail relative to the creation and maintenance of user accounts for digital identities and their attributes.

On the right are the specific runtime security services and their associated resources and PEPs. The PEP and PDP interact to make runtime policy decisions and then to enforce those decisions via the PEP and the associated service. Again, the level of integration between PEPs and services and between PEPs and PDPs may vary widely.

Following is a brief overview of policy management and runtime security services:

  • Identity management services are responsible for assigning and maintaining digital identities and associated attributes across the electronic computing environment and for deleting identities when they no longer represent valid users of the environment.
  • Access management services are responsible for assigning and maintaining resource access privileges across the electronic computing environment and for terminating those privileges when they are no longer required. Access management services may encompass a variety of components such as access policy definition, account creation, and Access Control List (ACL) maintenance. The key differentiator in O‑ESA between access management and identity management is that access management is target-centric or resource-centric, while identity management is initiator-centric or user-centric.
  • Configuration management services are responsible for consistently setting and maintaining the security configuration across the electronic computing environment. Configuration management is where this O‑ESA Guide extends the policy-driven conceptual framework beyond the access control framework to include the distributed components of all the security services. Configuration of the various security services – border protection, threat detection, content control, auditing, cryptography, and even configuration management itself – is constrained by the policy model. A classic example is the centralized management and deployment of anti-virus definition files – policies are defined, and updates are automatically pushed to all appropriate corporate end-points in accordance with that policy.
  • Access control services are responsible for controlling access to the enterprise computing environment based on the user’s identity (authentication services) and controlling access to specific resources within the environment based on the user’s entitlements or privileges (authorization services). This is the classic PDP-PEP implementation where information provided by identity management and access management is used to determine access authorizations.
  • Border protection services are responsible for controlling information traffic across external or internal boundaries between security zones, based on the location of the traffic source and destination or on the content of the traffic. In O‑ESA’s policy model, configuration of the many devices (including end-user clients) providing border protection services is controlled through centralized policy with configuration definition pushed to the end-points. Border protection vendors currently provide some tools for centralized management of their proprietary platforms; however, open standards-based, comprehensive management across vendor platforms is generally lacking.
  • Detection services are responsible for identifying and protecting against real and potential threats to the computing environment. Policy-based management of detection services generally involves vendor-proprietary solutions for centralized detection engines, with various means for collecting logs from many sources. The consolidated logs are then analyzed by the detection engine based on pattern and heuristic analysis to identify intrusion attempts.
  • Content control services ensure that the enterprise information base is not corrupted and that the external information base being accessed is legitimate and appropriate for business use. Today’s anti-virus and anti-spam services are already within the purview of policy-based management controls. One can readily visualize more sophisticated policy-based controls over virus scanning, spam filtering, and content inspection services as well as the emerging enterprise rights management services. Just as organizational roles and job function may be used to determine access privileges, they might also be used to determine the appropriate level of content control.
  • Auditing services are responsible for analyzing security logs in support of security investigations, risk assessments, and related activities. The vision for policy-based management is to be able to define auditing requirements in a centralized policy base that is then enforced at the auditing end-points. This does not seem to be an area of focus by vendors today.
  • Cryptographic services are responsible for enabling the confidentiality and integrity of sensitive data and for higher-level digital signature services. The policy-based vision is to be able to define encryption policy for data both in transit and at rest in a central repository, and then apply the policy based on content tags connected directly to the targets. Digital Rights Management (DRM) technology is beginning to address this requirement, but it is in its infancy and will need several years to mature.
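
As a small illustration of the content-tag concept in the last bullet, the following hypothetical Python sketch maps classification tags to required cryptographic protections, with unknown tags failing closed to the strongest protection.

    # Central policy mapping content classification tags to required
    # cryptographic protections, applied wherever the data travels.
    ENCRYPTION_POLICY = {
        "public":       {"in_transit": False, "at_rest": False},
        "internal":     {"in_transit": True,  "at_rest": False},
        "confidential": {"in_transit": True,  "at_rest": True},
    }

    def required_protections(content_tag: str) -> dict:
        # Fail closed: unknown tags get the strongest protection.
        return ENCRYPTION_POLICY.get(content_tag, ENCRYPTION_POLICY["confidential"])

    assert required_protections("internal")["in_transit"]
    assert required_protections("unknown-tag")["at_rest"]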

4.3.1 PDP/PEP Detail

Figure 11 provides additional detail on the PDP/PEP portion of the conceptual architecture. Although the model is based on the ISO/IEC 10181-3:1996: Access Control Framework, this O‑ESA Guide applies the model to all of the policy management and security services that make up the conceptual architecture. The key distinction is that this O‑ESA Guide applies the policy decision-making and enforcement model to configuration-time services as well as production runtime security services. For additional detail on the importance of this distinction, refer to the vision, technical model, and roadmap for policy automation as described in Chapter 6.


Figure 11: PDP/PEP Detail Model

As shown above, the PDP/PEP model defines the following key participants:

  • Initiator: The user, application, or service that initiates a request of some target resource. The initiator may be a policy management administrator, application, or service, or an end user or an application or service acting on the end user’s behalf.
  • Target: The application, service, or other resource to which the request is directed. The target may be a policy repository[15] being updated by a policy management service, or it may be a resource being operated upon by any of the runtime security services.
  • PEP: The guard function that enforces policy decisions for target resources.
  • PDP: The engine that evaluates requests against the policy (or rules) data and makes policy decisions.

The basic operation is that initiators submit requests to targets. The request specifies an operation to be performed on the target, and it may contain relevant data or more detailed instructions. Requests are intercepted by PEPs, packaged into a decision request, and forwarded to a PDP to determine whether a particular request should be granted or denied. In order to make the policy decision, PDPs may need the following information:

  • Initiator Data: This is data about the user, application, or service making the request. In the case of a human user, this is known as identity data and could include such things as company affiliation, job function, security clearance, roles, etc. For a service or application, identity data is less clear. In many cases, the service or application is simply a proxy on behalf of some end user, so the identity data will probably be that of the end user. If the application or service is working independently of a specific end user, service identity data might only be a company-defined service ID. In a more sophisticated model, services might be assigned permission attributes. Initiator data is most often stored in an LDAP directory, where the information is created and maintained by identity management services.
  • Target Data: Target data is data about the target resource and is typically related to information sensitivity or content classification.[16] Maintenance and retrieval of target data are among the most difficult functions of policy-driven security architecture. Today, most solutions require embedding some of the PDP logic into the end application or service for dealing with the target data involved in the decision-making process. In the future, a more generalized solution will provide mechanisms for the PDP to query the PEP for target attributes required in the decision-making process.
  • Environment Data: Data about the environment includes details such as time of day, access path, user session context, or transaction context. The access path might indicate the security of the access channel or the current user location (for example, directly connected to the company network or at some Internet café). Environment data such as time of day is easily accessible to the PDP for making decisions. Session context might include strength of user authentication. Transaction context might include the dollar amount of a bank withdrawal. Although the user location may be very relevant to the decision-making process, under many circumstances that information cannot be reliably obtained.
  • Policy or Rules: In order to make a policy decision about a request of some resource, we need a statement of the policy (or rule) that can be interpreted by the PDP. In the conceptual architecture, this is shown as the policy repository, which may be a general-purpose directory service, a combination of directory data sources accessed as a virtual directory or meta-directory, or a product-specific policy repository.

Once the decision has been made by the PDP, the result is packaged into a decision reply and returned to the PEP for enforcement.
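
The following condensed Python sketch illustrates this decision request/reply flow: the PEP intercepts a request, packages initiator, target, and environment data into a decision request, and enforces whatever the PDP decides against its policy repository. The data model and rule format are illustrative assumptions only.

    from dataclasses import dataclass

    @dataclass
    class DecisionRequest:
        initiator: dict          # identity data (e.g., roles)
        target: dict             # resource data (e.g., classification)
        environment: dict        # e.g., time of day, access path
        operation: str

    class PDP:
        def __init__(self, policy_repository):
            self.rules = policy_repository   # list of predicates over the request

        def decide(self, req: DecisionRequest) -> str:
            return "Permit" if all(rule(req) for rule in self.rules) else "Deny"

    class PEP:
        """Guard function: intercepts requests and enforces PDP decisions."""
        def __init__(self, pdp: PDP):
            self.pdp = pdp

        def handle(self, req: DecisionRequest):
            if self.pdp.decide(req) != "Permit":
                raise PermissionError("request denied by policy")
            # ...otherwise forward the original request to the target resource...

    rules = [lambda r: "employee" in r.initiator.get("roles", ()),
             lambda r: r.environment.get("channel") == "vpn"]
    pep = PEP(PDP(rules))
    pep.handle(DecisionRequest({"roles": ("employee",)}, {"classification": "internal"},
                               {"channel": "vpn"}, "read"))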

4.4 Identity Management Architecture

The following sections further analyze two of the identified security services – identity management (IdM) and border protection – to describe service-specific conceptual and logical architectures. These should be viewed as examples that illustrate the logical decomposition of high-level services to the level of detail required to implement the architecture. In terms of the house analogy, they identify the bill of materials required to determine what we need to build or buy. IdM is then discussed at a further level of detail to provide an example of physical architecture, in which the discrete logical services have been mapped to specific products.

For services other than IdM and border protection, only high-level service definitions are provided (see Section 4.6).

This IdM example first shows the high-level conceptual services, then their decomposition into discrete logical services, and finally their mapping to specific products. Consistent with the overall purpose of the document, these are provided as starting points for developing organization-specific IdM architectures.

4.4.1 Identity Management Conceptual Architecture

Figure 12 depicts the conceptual architecture for identity management (IdM). This is based on the Burton Group (now merged into Gartner) identity management architecture,[17] but it is greatly simplified because it focuses solely on the identity administration and provisioning concepts of IdM and does not address access management architecture.


Figure 12: Identity Management (IdM) Conceptual Architecture

Following is a brief overview of the key conceptual services of IdM:

  • Identity administration services create and maintain unique identities and attributes for various types of users (human users, applications, other digital entities), including external users.

—  They include delegated administration, self-service administration, and automated administration feeds.

—  They include identity-mapping services for federated users.

—  Attributes may include roles and groups.

  • Provisioning services automate the creation and maintenance of accounts (typically in proprietary systems) through agents associated with supported applications and platforms.
  • The identity repository houses identities and their attributes, including federated identities.

4.4.2 Identity Management Logical Architecture

Figure 13 shows the discrete services that make up IdM. Referring back to the house analogy, the goal of logical architecture is to identify the services bill of materials. Thus, it should identify the discrete logical services and their relationships at the level required to determine what you need to build or buy in order to construct a set of IdM services for your environment.


Figure 13: IdM Logical Architecture

Note that this is the point in the services decomposition process where architecture becomes much more organization-specific and less generic, so this should be understood as just an example of IdM logical architecture.

The following briefly describes the elements of this IdM logical architecture diagram:

  • In the bottom center of the figure is the Human Resources (HR) system that provides administrative feeds to create or update internal user identities in the internal entities directory.
  • On the bottom left is the external identity administration system that creates or updates external user identities in the external entities directory on behalf of affiliated enterprises (federated or non-federated). In the federated case, Security Assertions Markup Language (SAML) identity mapping services are required as well. In both cases, user administration is delegated to an administrator at the affiliate enterprise site.
  • In the center are the external and internal user directories, housed in the directory services function. Most enterprises have more than one source of authoritative identity information, including relational databases, mainframe directories, and other LDAP directories. Virtual directory services allow all those sources to be accessed as a single virtual LDAP name space. Alternative solutions such as meta-directories can provide equivalent results. As the Burton Group (now merged into Gartner) says: “At the end of the day, product strategies don’t matter as much as results. The degree to which an enterprise works to ‘clean’ its identity house, to ‘scrub’ the data, to identify authoritative sources, and to make that authoritative data available to key IdM components, will have a huge impact on how successful subsequent IdM efforts will be.” Also shown in the directory services component is an extranet directory to provide Internet-accessible directory information to external users. Extranet directories can be used to provide common directory lookup capability or can be a necessary component of scalable inter-company secure email communications and digital signing services.
  • Identity registration and vetting functions provide the means for establishing digital identities for persons that might not go through the HR system, such as contractors or consultants. Additional functions may be included to support special identity attributes, such as security clearances or citizenship, which may be provided by organizations other than HR. In some member organizations, a branch of the security organization verifies and maintains these special attributes.
  • Identity self-service systems provide for user maintenance of certain identity attributes, as determined by organizational policy.
  • Group administration systems allow users to create and maintain groups that provide access control for resources under their control.
  • In the top center and right are the provisioning services and agents (not all end systems require agents) that provide account creation and maintenance for the various resource systems. These elements begin to form the identity infrastructure that will be used by access control services.

4.4.3 Identity Management Security Services Template

For completeness, this section provides additional detail on specific IdM services that may be required. These services are responsible for assigning and maintaining digital identities and associated attributes across the environment. This includes deleting or appropriately flagging identities (for historical accountability purposes) when they no longer represent valid users.

4.4.3.1    User and Identity Administration Services

Identity Administration Services

Identity administration services assign and maintain user and application (“principal”) identities and identity attributes, including “federated” identities. The tools typically support centralized and delegated administration of these identities.

Access Provisioning Services

Access provisioning includes those tools and services that maintain access policies and rights:

  • Access rules and policies
  • Account and privilege management

4.4.3.2    Directory Services

General-Purpose Directory Services

  • Designed to meet the general needs of many (even unknown) applications
  • Characterized by adherence to international standards and established conventions
  • Provide vendor-neutral services and are loosely coupled with other infrastructure services
  • Attempt to minimize the need for special-purpose identity stores
  • Include an enterprise directory, which is a general-purpose directory representing the whole population of interest (people, applications, etc.) for the extended enterprise

Special-Purpose Directory Services

  • Designed to meet the specialized needs of particular applications or environments such as a network operating system
  • Characterized by vendor-proprietary schemas and features
  • Represent population of interest (users, devices, etc.) to a specific application or environment

Extranet Directory Services

  • Use LDAP proxy and/or LDAP border services to facilitate secure communication and collaboration with business partners
  • Provide a controlled subset of identity information to the public Internet
  • Provide mechanism(s) to obtain directory information from business partners

Meta-Directory and Virtual Directory Services

  • Provide federation capabilities for disparate directory services
  • Provide an abstraction layer between directories and the applications that use them

4.4.4 Identity Management Physical Architecture

The physical implementation of the IdM environment described by the logical architecture above – servers, software, network connections, and so on – is complex. Fully describing such an environment requires multiple documents, including:

  • Various diagrams such as software component layering on servers and network topology diagrams
  • Various lists such as all the network addresses of Microsoft Domain Controllers in an environment
  • Documentation of the configuration settings for software components

This document does not attempt to provide a full set of such documentation because document formats are very specific to individual companies, to the technologies being implemented, and even to the individuals involved. Rather, we provide what should be considered a template for the highest-level view of physical architecture and one from which the need for more detailed documentation can be determined.


Figure 14: IdM Physical Architecture

Figure 14 maps the discrete services of the IdM logical architecture to specific products, showing their placement on hardware devices and connectivity relationships. It shows how the IdM logical services bill of materials maps to a set of specific products, taking into account that this is remodeling of an existing infrastructure, not a completely new construction. For example, the existing HR system is a crucial component of the identity administration services.

Some key points need to be understood about the diagram and its use:

  • Its purpose is to illustrate a high-level physical architecture diagram, in this case corresponding to the earlier IdM logical architecture; it is not a recommendation on how to structure an IdM environment.
  • It is not based on an actual corporate environment or set of requirements.
  • No endorsement of specific products is implied. The products listed are intended to be representative of market spaces, and in fact the collection of products shown might be suboptimal for interoperability.
  • The diagram shows device connectivity but not information flow. It would be impractical to show all the flows on a single version of the diagram.
  • Typically, a separate version of the diagram with a subset of the relevant information flows would be developed for a particular service aspect, such as access provisioning, and used for analysis, communication, and education.

With both the IdM logical and physical architectures now drawn, some key aspects of the interplay between these levels of the architecture hierarchy become apparent:

  • The physical architecture, although essential for implementing technology, is much harder to comprehend than the logical architecture and relies heavily on the logical view for context. This fact highlights the need to follow the flow from conceptual, to logical, to physical architecture, not only during design and analysis but also in communication and education of the various audiences touched by the enterprise security program.
  • Services that seem distinct at the logical architecture level might be more closely aligned in the physical architecture. For example, in the IdM logical architecture diagram, access provisioning, group administration, identity self-service, and external identity administration services are all distinct. In the physical architecture these services are available in single product solutions.
  • Services at the logical level might be decomposed into several sub-services and technologies at the physical level. For example, at the logical level it is adequate to consolidate various directory services into the virtual directory services component. However, to understand the physical architecture it is necessary to expose the five separate directory service implementations in the environment.
  • Organizational responsibilities may be far more disjointed than they appear, based on the logical architecture diagram. The administration responsibilities for directory products and identity administration services, for example, may span organizational boundaries.

4.4.5 Federated Identity Management

Federated identity management introduces new challenges for security architecture; for example, naming constructs for security policy are created in one place and enforced in another. In Federated Identity, there is a separation between the Identity Provider and the Relying Party. This has proven to be a pragmatic solution to many real-world integration problems that require communicating security assertions across technical boundaries (such as from Java to .NET), across organizational boundaries (such as different business units), and across company boundaries (such as business-to-business (B2B) exchanges).

Crafting security policies that can mediate access control across these boundaries requires new architectures, and Federated Identity Management is among the most widely adopted. In a classic IT architecture, the Subject’s requests to the Object are mediated by an access control system that can locate all the information on the Object’s side needed to make authentication and authorization decisions.

In Federated Identity the Subject side adds an Identity Provider which makes assertions on the Subject’s behalf. These assertions can be:

  • Authentication assertions; for example, how, when, and where the subject has authenticated
  • Authorization assertions; for example, if any authorization decisions have been made about this request
  • Attribute assertions; for example, name value pairs used to communicate attributes using secure protocols

The assertions are communicated to the Object (e.g., Service Provider) and are evaluated by the Object’s Relying Party. These assertions are secured across boundaries through the addition of a Security Token. The Security Token facilitates authentication, integrity, and confidentiality of the assertion. Tokens can be of various types depending on existing infrastructure, capabilities, and constraints; they may employ X.509 certificates, Kerberos tickets, hash functions, and other means to deliver authentication and integrity. The token may also be used to encrypt the assertion’s message so that sensitive data is not disclosed when passing boundaries, as in the sketch below.
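
As a toy illustration of how a token lets a Relying Party verify an Identity Provider’s assertion across a boundary, the following Python sketch signs an attribute assertion with an HMAC over a shared key. Real federations use standardized token formats such as SAML assertions, X.509, or Kerberos; the key, fields, and encoding here are hypothetical.

    import base64, hashlib, hmac, json

    SHARED_KEY = b"federation-demo-key"   # hypothetical key shared across the federation

    def issue_assertion(attributes: dict) -> str:
        """Identity Provider side: sign an attribute assertion."""
        body = base64.urlsafe_b64encode(json.dumps(attributes).encode())
        sig = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
        return body.decode() + "." + sig

    def verify_assertion(token: str) -> dict:
        """Relying Party side: check integrity before trusting the attributes."""
        body, sig = token.rsplit(".", 1)
        expected = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            raise ValueError("assertion failed integrity check")
        return json.loads(base64.urlsafe_b64decode(body))

    token = issue_assertion({"subject": "jdoe", "authn_method": "password"})
    assert verify_assertion(token)["subject"] == "jdoe"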

In effect, this separation creates three distinct policy domains:

  • Subject Policy Domain: Governs the subjects, users, client machines, and Identity Providers.
  • Object Policy Domain: Governs the objects, resources, servers, service providers, and Relying Parties.
  • Federation Policy Domain: Governs the communication between the Identity Provider and Relying Party, and the security token formats and types.

This basic architecture has been widely implemented to allow secure communication across boundaries in web applications, web services, and mobile applications. Federation does introduce new management challenges. In a basic Identity Management Lifecycle, there are the following elements:

  • Provision: Creating accounts and assigning rights to them.
  • Propagate: Synchronizing and replicating accounts.
  • Manage: Updating or editing accounts.
  • De-provision: Disabling or deleting accounts.
  • Monitor: Logging and reporting on account issues.

With Federated Identity Management, there is no longer one policy domain, but three. This means that any system that manages the identity lifecycle must operate across multiple policy domains.

The strength of this approach is that the Service Provider does not need to manage the users in the Identity Provider’s systems, and likewise the Subject side does not need to provision policy on the Server side. But the processes used to ensure runtime compatibility and adherence to the security policy must be kept in sync.

4.5 Border Protection Architecture

This section explains the border protection conceptual and logical architectures by first describing the high-level conceptual services and then providing an example of their decomposition to discrete logical services. Consistent with the purpose of the overall document, these architectures are provided as example templates for use as starting points in developing organization-specific architectures.

4.5.1 Border Protection Conceptual Architecture

One of the key concepts of border protection is that the services are distributed throughout the enterprise; they are not intended to focus only on the boundary between the intranet and the Internet (Figure 15).


Figure 15: Border Protection Conceptual Architecture

Another key concept is that packet flow is controlled independently of the initiator’s identity. Thus, it might also be referred to as traffic-based access control, as opposed to identity-based access control. This conceptual architecture identifies three primary types of perimeter control. It also addresses controls between the general internal network and isolated “enclaves” and controls on the client platform, as follows:

  • Filtered and unfiltered Virtual Private Network (VPN) access controls the ability of customers, employees, partners, and suppliers to connect to the company intranet. VPN devices can employ filter lists that restrict incoming access to a specified subset of the applications, services, and other resources inside the company. Where appropriate, unfiltered and unrestricted access can be allowed as well.
  • HTTP-based access is the typical means for supporting “e-business”. It allows only HTTP and HTTPS protocols, with access to selected internal company web resources through reverse proxy technology deployed on the company perimeter.
  • The other traffic component provides for the other various types of traffic that must be accommodated, such as email, File Transfer Protocol (FTP), and Voice-over IP (VoIP), which is a rapidly emerging IP telephony technology.
  • Enclave firewalls inside the company are used to isolate special portions of the internal network from the rest of the company intranet. These isolated “enclaves” can be used to either restrict communications from within the enclave out to the general intranet, or to prevent general intranet communications from entering the enclave.
  • A client may be a desktop machine (e.g., at home or the office) or a mobile laptop that sometimes connects via the company intranet and other times connects via the public Internet (at home or on the road). In other cases, the client may simply represent a client service.
  • The server box symbol shown in “Other Domains” represents other external services that may require access to the company intranet, or external services that require access from the intranet.
  • Personal firewalls are deployed to client machines to prevent unauthorized communication to the client, as well as protecting clients from worms and other invasions that evade detection by anti-virus software – both when connected to the company intranet and when connected directly to the public Internet. For example, a personal firewall would prevent unauthorized access to the machine while it is connected via an Internet café. This model shows two client machines with personal firewalls, one inside the company perimeter and one in the public Internet. The external client uses a VPN connection to get back into the company intranet to protect the traffic between the company perimeter firewall and the client personal firewall.

4.5.2 Border Protection Logical Architecture

Figure 16 decomposes the border protection conceptual architecture to identify the discrete logical services and their relationships, at the level required to determine what you need to build or buy in order to construct border protection services for your environment. Again, this is the point in the process where the architecture becomes much more organization-specific and less generic, so this should be understood as an example of a border protection logical architecture.


Figure 16: Border Protection Logical Architecture

The following briefly describes the primary elements of the logical architecture that were not covered in the conceptual architecture description:

  • The gateway router manages Internet routing and provides coarse-grain packet filtering based on IP/TCP/UDP protocols.
  • The external demilitarized zone (DMZ) segment is a limited functionality network segment that provides connectivity between the gateway router and the outer firewall.
  • The outer firewall is a high-performance, low-latency firewall capable of providing:

—  An additional layer of IP/TCP/UDP packet filtering

—  In-depth packet inspection and protocol validity checking

—  Some level of denial of service (DoS) detection and prevention

—  Secured IP routing to mitigate IP address space leakage

  • The hosting DMZ segment is a network segment between the outer and inner firewalls that may contain hosts/servers based on the need to locate them behind a load-balancing content switch.
  • Wireless access points (APs or WAPs) are transceivers in a wireless LAN that act as transfer points between wired and wireless signals, and vice versa. An AP is sufficiently trusted to put it inside the outermost firewall, so it is marginally more trustworthy than the Internet at large. However, its ability to implement security (namely authentication, authorization, and encryption) is deemed insufficient and a VPN[18] is used to implement those functions on top of the wireless infrastructure (the same way VPN is used to provide secure communication paths over the Internet).
  • The content switch is a traditional IP load-balancing device that also has the capability to balance sessions (TCP) across servers. It is a key mechanism for hiding the internal IP address space.
  • The load-balanced DMZ segment is similar to the hosting DMZ segment except that it is behind the content switch and provides a network segment between the content switch and the inner firewall.
  • The inner firewall is a high-performance, low-latency firewall capable of providing:

—  An additional layer of IP/TCP/UDP packet filtering

—  In-depth packet inspection and protocol validity checking

—  Some level of DoS detection and prevention

—  Limited, secured IP routing or, more often, static IP routes that mitigate IP address space leakage or unauthorized IP traffic

4.5.3 Border Protection Security Services Template

Border protection services control information traffic across external or internal boundaries between security zones, based either on the location of the traffic source and destination or on the content of the traffic. In this O‑ESA policy model, configuration of the many devices (including possibly end-user clients) providing border protection services is controlled through centralized policy with configuration definition pushed to the end-points. Border protection vendors currently provide some tools for centralized management of their proprietary platforms; however, open standards-based, comprehensive management across multiple vendor platforms is generally lacking.

4.5.3.1    Packet Filtering Service

This service provides dynamic, stateful, IP-only packet filtering; stateful means the filter tracks the state of each transaction, which allows a reply to be treated differently from a query. The service examines each IP packet’s source and destination addresses and ports, together with the transaction state, to determine whether to allow the packet to pass.

Packet filtering also supports the concept of one service per server and segregation of servers from each other. Packet filters deny all traffic to the server that is not expressly allowed; for example, an HTTP proxy receives only HTTP requests, and a VPN server receives only VPN requests.

Packet filtering provides security controls between different zones of trust.
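The following minimal sketch illustrates the deny-by-default, stateful behavior described above, under simplified assumptions: packets are plain tuples, the allow rule and addresses are invented for illustration, and a real firewall tracks full TCP state machines, timeouts, and much more.

    # Minimal stateful packet filter sketch (illustrative rules and addresses).
    from typing import NamedTuple

    class Packet(NamedTuple):
        src: str     # source IP
        sport: int   # source port
        dst: str     # destination IP
        dport: int   # destination port

    # Expressly allowed inbound services; everything else is denied.
    ALLOW_INBOUND = {("10.0.0.5", 443)}   # e.g., the reverse proxy accepts only HTTPS
    established = set()                   # state: connections initiated from inside

    def outbound(pkt):
        """Record outbound connections so replies can be recognized."""
        established.add((pkt.dst, pkt.dport, pkt.src, pkt.sport))
        return True

    def inbound(pkt):
        """Allow expressly permitted traffic or replies to established
        connections; deny everything else."""
        if (pkt.dst, pkt.dport) in ALLOW_INBOUND:
            return True
        return (pkt.src, pkt.sport, pkt.dst, pkt.dport) in established

    outbound(Packet("192.168.1.10", 51000, "93.184.216.34", 80))
    print(inbound(Packet("93.184.216.34", 80, "192.168.1.10", 51000)))  # True: reply
    print(inbound(Packet("203.0.113.9", 4444, "192.168.1.10", 51000)))  # False: unsolicited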

4.5.3.2    VPN Service

A VPN encrypts data as it traverses untrusted networks. VPNs fall into two categories: LAN-to-LAN and client-to-server. The main difference is that client-to-server VPNs usually require a user to authenticate (e.g., by providing a user name and password), whereas LAN-to-LAN VPNs do not. LAN-to-LAN VPN “tunnels” are usually set up between routers or servers and are transparent to users.

4.5.3.3    Proxy Services

Forward Proxy Services

These services provide access to the external web while protecting corporate workstations from external threats and filtering offensive material coming into the corporation. The services support the HTTP, HTTPS, and FTP protocols and are outbound-only, so that requests must be initiated from inside the corporate network.

Reverse Proxy Services

These services enable secure external access to internal corporate resources by requiring user authentication and authorizing user access only to selected locations on interior web servers. The reverse proxy service transmits authorized requests to the permitted interior server and then returns the response from the interior server to the user.

Application Proxy Services

These services provide inbound and outbound connections between the Internet and the corporate intranet in support of the FTP, Telnet, TN3270, SQLNet, X Window System, and Line-Printer Daemon (LPD) protocols.

4.6 Other Security Services Template

Sections 4.4 and 4.5 provided example architecture diagrams and descriptions for Identity Management and Border Protection services respectively. This section discusses each of the other policy-driven security services mentioned in Section 4.3. These services are broken out into second-level and in some cases third-level services. These definitions are intended to serve as a template that organizations may choose from and tailor to their specific current and future needs.

4.6.1 Access Management Services

Access management is responsible for assigning and maintaining resource access privileges across the electronic computing environment and for terminating access privileges when they are no longer required. Access management services may encompass a variety of components such as access policy definition, account creation, and Access Control List (ACL) maintenance. The key differentiator in O‑ESA between access management and identity management is that access management is target-centric or resource-centric, while identity management is initiator-centric or user-centric.

4.6.2 Configuration Management Services

Configuration management is responsible for consistently setting and maintaining the security policy configuration across the electronic computing environment. As discussed earlier, this is where this O‑ESA Guide extends the PDP/PEP model to support configuration-time instantiation of policy for all management and security services. This includes instantiation of policy decision and enforcement data for the identity, access, and configuration management services themselves, and all the various production runtime security services – access control, border protection, threat detection, content control, auditing, and cryptography. A current example is the centralized management and deployment of anti-virus definition files – policies are centrally defined, and updates are automatically pushed to all affected corporate end-points in accordance with that policy. A more forward-looking example is the centralized management of policy configuration files for border protection and threat detection servers, which eliminates the need to manually configure each end-point based on configuration checklists for each variant of the server architecture.

4.6.3 Access Control Services

Access control services are responsible for controlling user access to the enterprise computing environment based on the user’s identity (authentication services) and controlling access to specific resources within the environment based on the user’s entitlements or privileges (authorization services). This is the classic PDP-PEP implementation where information provided by identity management and access management is used to determine access authorizations.
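As a concrete illustration of the PDP-PEP pattern, the following minimal sketch has a Policy Decision Point evaluate requests against a rule list and a Policy Enforcement Point permit or deny the action. The rule format, role names, and resources are illustrative assumptions; real deployments would typically use a policy language such as XACML and distributed enforcement points.

    # Minimal PDP/PEP sketch (illustrative rules, roles, and resources).
    POLICIES = [
        # rule: (resource, required_role, allowed_action)
        ("payroll-db", "hr-admin", "read"),
        ("payroll-db", "hr-admin", "write"),
        ("wiki", "employee", "read"),
    ]

    def pdp_decide(roles, resource, action):
        """Policy Decision Point: evaluate the request against the policy rules."""
        return any(res == resource and role in roles and act == action
                   for res, role, act in POLICIES)

    def pep_guard(user_roles, resource, action):
        """Policy Enforcement Point: permit or deny based on the PDP's decision."""
        if not pdp_decide(user_roles, resource, action):
            raise PermissionError(f"{action} on {resource} denied")
        print(f"{action} on {resource} permitted")

    pep_guard({"employee"}, "wiki", "read")            # permitted
    try:
        pep_guard({"employee"}, "payroll-db", "read")  # denied: role not entitled
    except PermissionError as e:
        print(e)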

4.6.4 Authentication Services

In simple terms, authentication services verify who the user is. Typically, the user is required to have a unique identity, and that identity is then verified using a particular authentication technique. It may be a human user with a unique name or user ID, or a computer process with a unique ID. Authentication services generally also assign the user to one or more roles or groups with identities that are subsequently used for authorization. Authentication services are the linchpin of user security – everything else depends on knowing the unique identity of the user and the roles or groups to which it belongs.

Direct (First-Person) Authentication Services

Direct authentication services verify the unique identity of the human user or process based on a unique user identity and password or a stronger authentication technique (smartcards, secure ID fobs, biometrics, etc.).

Indirect (Third-Party) Authentication Services

Third-party authentication services are trusted services that pass previously authenticated identities. They may include Single Sign-On (SSO) products, SAML-based services, perimeter proxies, etc.

4.6.5 Authorization Services

Authorization services determine what a properly authenticated user is allowed to do. The authorization process ensures that users are allowed only the access they require to do their jobs. This is referred to as the principle of least privilege and is as important for electronic users (processes or applications) as it is for human users. As an example, adherence to the principle of least privilege in program design reduces the damage that can occur if a user attempts to exploit that program for mischievous or malicious purposes.

Online (Connected) Authorization Services

Online authorization services evaluate identity, environment, and asset attributes (tags) against policies (sets of rules) to arrive at a permission recommendation. This process includes the use of distributed, on-demand authorization services.

Offline Authorization Services

Offline authorization services correctly determine permission in offline situations and in potentially hostile environments. This is a future direction that may not be realized in the next three to five years.

4.6.6 Detection Services

Detection services assist in the enforcement of security policy through the ongoing creation, capture, and monitoring of security-relevant events. The goal is to detect and respond to threats and vulnerabilities in a way that prevents damage or loss. One of the design requirements[19] is to gracefully degrade access to services in the event of an attack or disaster, to recover from the resulting failures, and to efficiently restore access as impeding circumstances wane.

Policy-based management of detection services generally involves vendor-proprietary solutions for centralized detection engines, with various means for collecting logs from many sources. The consolidated logs are then analyzed by the detection engine, based on pattern and heuristic analysis to identify intrusion attempts.

Intrusion Detection Services

Intrusion detection services identify attempts to break into a protected network or system and provide real-time or near real-time alarms.

Anomaly Detection Services

Anomaly detection services identify irregularities or glitches in the infrastructure that could be exploited, and they provide guidance for dealing with the risk.

Vulnerability Assessment Services

Vulnerability assessment services are used to analyze systems to identify potential security weaknesses and exposure to known threats.

Logging Services

Logging services provide the capability to collect and consolidate security logs.

4.6.7 Virtualization

This is not strictly a security service, but as the use of virtual machines becomes more popular, it is important that organizations pay attention to the security issues that arise with deployment of this technology. Key security concerns include memory leak exploits and configuration control. With physical servers, patching and vulnerability management controls are limited to individual physical devices. In a world where IT resources are easily replicated and spun up as virtual systems, there are greater demands on configuration management, inventory and capacity management, auditing, and training/staffing processes. On the other hand, for enterprises that can implement solid default configurations and other risk management processes, virtual machine security may actually be easier to manage in a virtual network than on physical devices and networks, because controls are more centralized.

Misconfiguring virtual hosting platforms and guest operating systems is one of the mistakes commonly made with virtualization. Other common mistakes include poor or absent patch management oversight for virtualized resources, and failure to properly separate duties.

4.6.8 Content Control Services

Content control services ensure that the internal enterprise information base is not corrupted and that the external information base being accessed is legitimate and appropriate for business use. In addition, enterprise digital rights management technology is now being introduced to provide content-based control over what can and cannot be done with information.

Today’s anti-virus and anti-spam services are already within the purview of policy-based management controls. One can readily visualize more sophisticated policy-based controls over virus scanning, spam filtering, and content inspection services, as well as the emerging enterprise rights management services. Just as organizational roles and job functions may be used to determine access privileges, these attributes might also be used to determine the appropriate level of content control.

Anti-Virus Services

Anti-virus services identify, block, and remove viruses embedded in email and files. Major virus targets include boot records, program files, and data files with macro capabilities (e.g., Microsoft Word document and template files). Viruses spread rapidly as infected program and document files are shared via email, and they are also transmitted through direct downloading from Internet sites and through sharing of removable media that are infected.

Anti-Spam Services

Anti-spam services attempt to identify spam email messages and filter them out. An alternative strategy that may be appropriate in some environments is to filter out the good email and assume that everything else is spam.

Data Loss Prevention

Identifying sensitive information and preventing it from being sent outside the organization is known as Data Loss Prevention (DLP). Whether the data leaves deliberately or inadvertently, loss of sensitive data can have serious consequences for an organization, including regulatory fines, brand damage, breach disclosure costs, and loss of competitive advantage.

Technologies are now available that can help prevent this from occurring. These technologies are generally deployed at some combination of end-points, servers, and Internet gateways. The techniques used by DLP products include scanning, and in some cases tagging, sensitive content, and blocking or quarantining data transmissions whose content matches a profile established by the organization for sensitive information. These functions can be performed on a real-time basis or through scans at periodic intervals, and transmissions may be blocked outright or may simply generate alerts and audit logs for further investigation.
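The profile-matching idea can be illustrated with a minimal content-scanning sketch that flags outbound text containing payment-card-like numbers, using the Luhn checksum to reduce false positives. The pattern and dispositions are illustrative assumptions; commercial DLP products use far richer profiles, tagging, and workflows.

    # Minimal DLP content-scanning sketch (illustrative profile and actions).
    import re

    CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def luhn_valid(number):
        """Luhn checksum: true for well-formed payment card numbers."""
        digits = [int(d) for d in re.sub(r"[ -]", "", number)]
        checksum = 0
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:        # double every second digit from the right
                d = d * 2
                if d > 9:
                    d -= 9
            checksum += d
        return checksum % 10 == 0

    def scan_outbound(message):
        """Return a disposition: block if probable card data is found."""
        for match in CARD_PATTERN.finditer(message):
            if luhn_valid(match.group()):
                return "BLOCK: probable payment card number detected"
        return "ALLOW"

    print(scan_outbound("Invoice attached, PO 12345"))           # ALLOW
    print(scan_outbound("Card: 4111 1111 1111 1111 exp 09/27"))  # BLOCK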

Enterprise Rights Management Services

Enterprise Rights Management (ERM) services utilize digital rights management technology to govern access to enterprise information throughout its lifecycle. Traditionally, digital rights management has been used commercially to protect electronic media such as music and movies. Recently, it is being used to protect sensitive information both inside and outside the enterprise. ERM software provides fine-grain control over what can and cannot be done with information. For example, email messages can be marked with usage permissions and identity-specific access controls so that they can be neither modified nor forwarded to parties outside the organization. For information that once was very difficult to protect, such as word processor documents on removable media, rights management provides policy-based cryptographic protection even when the data is physically stolen.

Content Inspection Services

Content inspection services utilize content inspection technologies to detect and then deal with viruses, spam, and pornography or other information content control issues.

4.6.9 Auditing Services

Auditing services are responsible for aggregating, normalizing, and analyzing events from consolidated security logs in support of day-to-day event management and other security-related activities. Audits may be conducted to ensure the integrity of information resources, to investigate incidents, to ensure conformance to security policies, or to monitor user or system activity as appropriate.

The vision for policy-based management is to be able to define auditing requirements in a centralized policy base that is then enforced at the auditing end-points. PCI-DSS and other security standards have recently created a market for audit logging tools; however, audit logging in distributed systems remains problematic.[20] The PCI-DSS standard contains some foundation guidance for audit log storage and protection, and a basic event model for audit logging credit card events, but this may not be applicable in other domains.

XDAS[21] presents a more flexible approach that specifies an audit logging event model:

  • Account Management Event
  • User Session Event
  • Data Item or Resource Element Management Event
  • Service or Application Management Event
  • Service or Application Utilization Event
  • Peer Association Management Event
  • Data Item or Resource Element Content Access Event
  • Exception Event
  • Audit Service Management Event

XDAS also specifies the audit record format:[22]

  • Header
  • Originator
  • Initiator
  • Target
  • Source
  • Event

The XDAS model can be applied to define the audit logging events and record format of the information captured. The security architecture must still specify three additional areas: the protection and storage model, the integration for how the audit log events are published to the audit log, and the reporting interface.
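As an illustration, the following minimal sketch composes an audit record carrying the XDAS-style fields listed above. The JSON encoding and field contents are illustrative assumptions, not the normative XDAS record format.

    # Minimal audit record sketch using the XDAS-style fields (illustrative).
    import json
    import time

    def audit_record(event_class, outcome, initiator, target, originator, source):
        record = {
            "header": {"version": "demo-1", "timestamp": time.time()},
            "originator": originator,   # service observing/reporting the event
            "initiator": initiator,     # who caused the event
            "target": target,           # what the event acted upon
            "source": source,           # where the request came from
            "event": {"class": event_class, "outcome": outcome},
        }
        return json.dumps(record)

    # e.g., a "User Session Event" for a failed login
    print(audit_record("user-session", "failure",
                       initiator="alice@example.com",
                       target="vpn-gateway-01",
                       originator="radius-svc",
                       source="203.0.113.7"))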

4.6.10 Cryptographic Services

Cryptographic services enable the confidentiality and integrity of sensitive data, and provide higher-level digital signature services. Enabling services are designed to handle the details of cryptographic key management and support on behalf of users of these cryptographic services. Higher-level digital signature services can be used to authenticate the identity of the sender of a message or the signer of a document and to ensure that the original content of the message or document is unchanged.

The policy-based vision is to be able to define encryption policy for data both in transit and at rest in a central repository and then apply the policy based on content tags connected directly to the targets. DRM technology is beginning to address this requirement, but it is in its infancy and will need several years to mature.

Cryptography Services

  • Enabling services implement standard cryptographic algorithms on memory objects, documents, files, repositories, data streams, etc.
  • Some of the cryptographic topics supported include encryption, hashing, key generation, digital watermarking, and steganography.
  • These services may be delivered by various servers, web services, and desktop tools, but primarily by developer libraries (see the sketch below).
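The following minimal sketch shows enabling services of this kind using only the Python standard library: hashing for integrity, HMAC for keyed integrity, and key generation. Encryption is deliberately omitted; a production service would delegate it to a vetted cryptographic library.

    # Minimal sketch of cryptographic enabling services (standard library only).
    import hashlib
    import hmac
    import secrets

    def digest(data):
        """Integrity check value for a memory object, document, or file."""
        return hashlib.sha256(data).hexdigest()

    def keyed_digest(key, data):
        """Keyed integrity (HMAC), a simple message authentication code."""
        return hmac.new(key, data, hashlib.sha256).hexdigest()

    def generate_key(n_bytes=32):
        """Key generation from a cryptographically strong source."""
        return secrets.token_bytes(n_bytes)

    key = generate_key()
    doc = b"quarterly results draft"
    print(digest(doc))             # integrity value anyone can recompute
    print(keyed_digest(key, doc))  # integrity value only key holders can recompute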

Public Key Infrastructure Services

  • Public key infrastructure services manage and process X.509 V3 certificates, including the certificate authority, the certificate revocation list, certificate validation services, and trust relationships.
  • This definition specifically excludes identity management and key generation.

Private Key Storage Services

These trusted services store private keys, including key escrow for private and secret encryption keys, personal key wallets, smart cards, and hardware key vaults for private signing keys.

Digital Signature Services

Digital signature services can be used to authenticate the identity of the sender of a message or the signer of a document and to ensure that the original content of the message or document is unchanged. Digital signatures are easily transportable, cannot easily be forged, and can be automatically time-stamped. They provide the capability to verify that the original signed message arrived, which means that the sender cannot easily repudiate it later. Assuming the required legal status exists, they can enable support of non-repudiation requirements.

Signing Services

Signing services provide the capability to sign documents, files, and memory objects electronically, on behalf of an identity. Implementation may include desktop signing tools, a signing server (server-hosted signing keys for browser-only signing), various workflow applications, and developer libraries.

Notary Services

Notary services provide trusted long-lived digital signatures and timestamps on top of existing, valid signatures.

Code Signing Services

These are services used to sign code and other programming deliverables.

Verification Services

These services are used to verify signatures and establish data integrity.

This ends the description of the Security Technology Architecture components and services. The next section is focused on the design and development process.

4.7 Design and Development

This section identifies the types of guidance that organizations may want to provide to those responsible for design, development, and deployment of applications. This guidance applies to enterprise security infrastructure components as well as applications that use the infrastructure. Further, much of this guidance can be applied to components or applications built in-house as well as Commercial Off-The-Shelf (COTS) applications selected for integration. The goal is to select applications that utilize the enterprise security architecture services in the most effective way possible, so as to achieve the organization’s overall security goal[23] and objectives[24].

Design and development guidance may range from overall process guidelines to specific guides, templates, and tools. It may include design patterns, code samples, re-usable libraries, and testing tools. All of these are aimed at effective utilization of O‑ESA and effective integration into the enterprise security architecture environment.

The following discussion is based on Open Group member organization experience and is intended to serve as a starting point for an overall process outline, with a few notes about each element and in some cases references to additional information.

4.7.1 Design Principles

Once the process of identifying the guiding principles has been completed, as described in the policy framework overview starting in Section 3.4, those guiding principles are used as security design principles. They may be tailored or augmented to make them more design and development-specific, and they then become the starting point for designing and developing enterprise security architecture applications. A design principles checklist should be provided to all those responsible for design, development, and testing of these applications.

4.7.2 Design Requirements

Design requirements can be categorized as explicit or implicit. However, to support Requirements-Based Testing, all requirements need to be made explicit as part of the specifications process. A requirements checklist should be provided to all those responsible for design, development, and testing of particular enterprise security architecture applications.

4.7.2.1    Explicit Requirements

Business Requirements

These are requirements arising from new business opportunities.

Compliance Requirements

These include legal, legislative, and regulatory requirements (e.g., HIPAA, Sarbanes-Oxley).

Technology/Deployment Policy Requirements

This category covers infrastructure requirements, as opposed to the business requirements named above. As an example, if there are server-to-server authentication and connectivity requirements, they could affect the application design in some way.

4.7.2.2    Implicit Requirements

Data Class

These requirements are based on the confidentiality or privacy classification of the data (e.g., if a Social Security number is passed over the wire, the risk of passing it over a particular type of data channel must be assessed).

Threats

These requirements are based on known threats in the particular application environment.

4.7.3 Design Best Practices

4.7.3.1    Design Patterns

Design patterns are recurring solutions to software design problems that are ubiquitous in real-world application development. Patterns give information system architects a method for defining re-usable solutions to design problems in a way that is programming language-independent. Published catalogs of security design patterns are a useful starting point.

4.7.3.2    Security Engineering

Security engineering is the field of systems engineering dealing with the security and integrity of real-world systems. It is the engineering discipline focused on how to build dependably secure systems in the face of malice, error, or mischance. It requires expertise in a variety of disciplines – including computer security, cryptography, applied psychology, management, and the law – as well as knowledge of critical applications.

The following books on security engineering describe a framework for security engineering – including policy, mechanism, threat models, assurance, and economic incentives – and provide detailed technical guidance and examples:

  • Computer Security: Art and Science, by Matt Bishop (Addison-Wesley, 2002)
  • Security Engineering: A Guide to Building Dependable Distributed Systems, by Ross J. Anderson (Wiley, 2008)

4.7.3.3    Applying Security Design with Threat Models

Security by design poses several challenges to the normal systems and software development lifecycle. These challenges include:

  • Identifying security requirements – being involved in the development lifecycle means dealing with a certain amount of ambiguity as features, priorities, and trade-offs are decided upon throughout iterations. Further, the requirements that govern the development typically do not include detailed security requirements.
  • Specifying actionable security requirements – the next challenge is to make the security requirements actionable:

—  Making the requirements implementable: Requirements must include what the system and users should do, not only what’s not allowed. This is the gap found in simply using security policies.

—  Making the requirements context-sensitive: Given the wide distribution of software, systems, data, and users, the requirements must be scoped to a usage context, such as a use-case.

—  Backing the requirements with reference implementations and architectures.

  • Assurance that the security mechanisms in the implementation deliver the goals specified by the requirements.

There are three artifacts that can address the above security by design challenges. To aid in identifying and specifying security requirements, Threat Models and Attack Surface provide an approach to arrive at a set of context-specific security requirements. The Countermeasure model consolidates these security requirements into a model that can be tested to verify compliance. These three artifacts are described below.

Threat Model

For security to be involved in the software development process, the information security team must bring carrots and not just sticks. For security to function as a design partner in the Systems Development Life Cycle (SDLC), security needs to bring actionable requirements, design patterns, secure coding practices, and practical testing tools. One reason security and software development have historically not worked well together is that security requirements are fiendishly difficult to discern. The result is that security falls back on an “it’s perfect or it’s broken” mentality, and its involvement is focused on writing draconian, unimplementable information security policy documents that do not translate to code – documents generally useful only for protecting people from auditors, not as security architecture for designing, building, and operating a more secure system.

One of the most effective ways to get at security requirements is threat modeling, which has long been a standard component of security vendors’ product development. It is impossible to predict more than a fraction of the actual threats your system will face; however, it is practical to start with a high-level threat classification to illuminate threat types – typified by the acronym STRIDE:[25] Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. One reason this threat model is particularly useful is that each high-level threat type maps to a specific set of controls, allowing you to design security mechanisms for each threat type.

In brief, the threat model process begins with some mix of software architecture, design, and code artifacts. The team develops the threat model that classifies the system by a set of threats it faces. Then, through security architecture and design, countermeasures are defined to deal with the prioritized threats. The security properties deal with one or more threats.

This O‑ESA Guide makes one alteration to the STRIDE threat model, exchanging the threat Repudiation for Dispute. Each threat yields one or more countermeasures to deal with the threats.

Threat                      Countermeasure
Spoofing                    Authentication
Tampering                   Digital Signature, Integrity
Dispute                     Audit Logging
Information Disclosure      Encryption
Denial of Service           Availability
Elevation of Privilege      Authorization

Table 1: Threat Models: Threats and Countermeasures

The power of the threat model is that each threat class is dealt with independently and yields a different mechanism such that the security architect can compose a cost-effective security solution for the context in which they are executing.
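A minimal sketch of this composition step follows: given the threat classes judged relevant to a component, look up the corresponding countermeasures from Table 1. The component name and its threat list are illustrative assumptions.

    # Minimal sketch: compose countermeasures from a STRIDE-style threat model
    # (with Repudiation exchanged for Dispute, per this Guide). Mirrors Table 1.
    COUNTERMEASURES = {
        "Spoofing": "Authentication",
        "Tampering": "Digital Signature, Integrity",
        "Dispute": "Audit Logging",
        "Information Disclosure": "Encryption",
        "Denial of Service": "Availability",
        "Elevation of Privilege": "Authorization",
    }

    def required_controls(component, threats):
        print(f"Component: {component}")
        for threat in threats:
            print(f"  {threat:<24} -> {COUNTERMEASURES[threat]}")

    # e.g., a payment web service judged exposed to three threat classes
    required_controls("payment-service",
                      ["Spoofing", "Tampering", "Information Disclosure"])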

Example: Mapping to Web Service Security Standards

In web services security, architects are always on the look-out for standards that help address architectural issues in interoperable, re-usable ways – qualities that have long been scarce in information security. Let’s revisit the above table in the context of standards.

Threat                      Countermeasure                 Standard
Spoofing                    Authentication                 XML Signature & WS-Security, SAML Authentication, Information Cards
Tampering                   Digital Signature, Integrity   XML Signature
Dispute                     Audit Logging                  No standard widely implemented for audit logging
Information Disclosure      Encryption                     XML Encryption
Denial of Service           Availability                   No standard
Elevation of Privilege      Authorization                  XACML, SAML Authorization Decision Assertion, OAuth

Table 2: Threat Models: Mapping Countermeasures to Security Standards

The basic threat model approach is useful for generating security requirements derived from known threats. The next step is to specify where in the overall architecture they apply.

Attack Surface

The Attack Surface[26] is the sum of the attack vectors by which an attacker may seek to compromise the system. The Attack Surface comprises:

  • Data: The data in the system and the messages exchanged.
  • Method: The application or service methods that process requests and responses.
  • Communication Channel: The network communication channels, such as HTTP or TCP.

A couple of interesting things emerge from analyzing Table 2. Threat modeling shows that no single security mechanism guarantees resilience to all threats; rather, a mix is required. The Attack Surface model provides an additional layer that specifies the context of threats and countermeasures, as Table 3 illustrates.

Threat                      Attack Surface    Countermeasure Examples
Spoofing                    Data              Signed XML data
                            Method
                            Channel           TLS/SSL with client & server certificates for mutual authentication
Tampering                   Data              Signed XML with hash
                            Method
                            Channel
Dispute                     Data
                            Method
                            Channel
Information Disclosure      Data
                            Method
                            Channel           SSL encrypted communications
Denial of Service           Data
                            Method
                            Channel
Elevation of Privilege      Data
                            Method
                            Channel

Table 3: Threat Models: Example Countermeasures Responding to Threats

The output of the threat model and Attack Surface analysis is the Countermeasure Model, which leads to further design, build, test, and operational activities, including:

  • Static analysis
  • Runtime analysis
  • Fuzzing/fault injection

These activities are described in Section 5.8.

4.7.4 Re-Usable Tools, Libraries, and Templates

To ensure proper utilization of the security infrastructure and to simplify the job of the developers and system administrators, it is important to provide meaningful guidance at the code level. These objectives require detailed guidance that is specific to each platform (i.e., hardware/software combination) endorsed by the enterprise.

This is a critical step in an organization’s security architecture development, and one that is easy to overlook. The security architecture is developed from the top down and is typically delivered by those looking at the big-picture vision for the enterprise. However, that vision, though necessary, is of little interest to most developers and administrators. The architecture must bridge the gap from a vision rich in pictures to the code and configuration level.

The developer guidance should include:

  • Detailed, platform-specific instructions for utilizing the security services – these instructions should include installation and configuration of the required software components for the particular environment, as well as methods to accomplish various security functions such as authentication, authorization, and encryption
  • Sample configurations and working code that is well documented for review by developers
  • Code snippets that can be freely copied or downloaded for incorporation into development projects
  • Libraries that provide necessary security functions that have encapsulated low-level code details into simpler, higher-level functions
  • Templates that describe typical application functionalities with necessary security aspects identified

4.7.5 Coding Best Practices

Design is complete; design patterns have been identified; security engineering principles have been taken into account; and re-usable tools, libraries, and templates have been put in place. Now secure design must be translated into secure code. The starting point is to understand the best practices for developing and delivering secure code and then to utilize a process that supports those practices. The following books offer concrete guidance on secure coding and how to apply these practices in real-world development projects:

  • Secure Coding: Principles and Practices, by Mark G. Graff and Kenneth R. Van Wyk (O’Reilly, 2003)
  • Building Secure Software: How to Avoid Security Problems the Right Way, by John Viega and Gary McGraw (Addison-Wesley, 2001)
  • Writing Secure Code, Second Edition, by Michael Howard and David C. LeBlanc (Microsoft Press, 2002)
  • The Security Development Lifecycle, by Michael Howard and Steve Lipner (Microsoft Press, 2006)
  • Software Security: Building Security In, by Gary McGraw (Addison-Wesley, 2006)

The OWASP Guide project (www.owasp.org/index.php/OWASP_Guide_Project) shows many examples of known bad and known good practices in secure coding through documents and wikis.

4.7.5.1    Code Reviews

Recommended Reference

Your organization may want to consider putting a code review process in place if it hasn’t already. The book Peer Reviews in Software: A Practical Guide, by Karl E. Wiegers (Addison-Wesley, 2001) is an excellent reference on the subject.

Code reviews focus on more than just security issues, but one of their key purposes should be to review for secure coding practices.

Input Validation

One of the first lines of defense in any secure program is to validate input. The article “Best practices for accepting user data” provides information on that topic.

In addition, as discussed in a Computerworld article dated June 4, 2003 (which covered a subset of the same issues and is consistent with this O‑ESA Guide in its recommendations), application-specific vulnerabilities are born in coding primarily because developers fail to validate user input. Failing to detect and eliminate this simple error can allow the following exploits to occur:

  • Buffer Overrun: A buffer overrun occurs when data larger than the entry field is written into memory. Hackers can use this flaw to overload the server with data and crash the site, shutting down business.
  • Cross-site Scripting: Cross-site scripting happens when a hacker injects malicious code into a site to make a user session appear as if it is originating from the targeted site. As a result, the attacker is given full access to any information exchanged in the user session, such as account passwords and Social Security numbers.
  • SQL Injection: An SQL injection embeds SQL script in user input, using a metacharacter (such as an apostrophe) to break out of the expected value so that the SQL server executes the attacker’s query – for example, one delivering a directory of customer credit card numbers.

Preventing these attacks primarily requires a change of mind-set. Simply rewriting a few lines of code to apply the following maxims (illustrated in the sketch after this list) will greatly reduce the risk of attack:

  • Validate all user input to allow nothing other than the function's expected input and output
  • Set a trusted code base, and validate all data before it can enter the trusted environment
  • Test each data type before entry (i.e., web, registry, file system, and configuration files)
  • Define all data format (such as buffer length and integer type)
  • Define valid user requests and reject all others
  • Look for valid, not invalid, data
  • Never mirror web input
  • Encode or encrypt output
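The following minimal sketch applies two of these maxims using Python’s standard sqlite3 module: validate input against an allow-list of expected values (look for valid, not invalid, data), and pass user data to the database as parameters rather than splicing it into the query string. The table contents and validation pattern are illustrative assumptions.

    # Minimal input validation and parameterized query sketch (illustrative data).
    import re
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, ssn TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', '078-05-1120')")

    VALID_NAME = re.compile(r"^[a-z]{1,32}$")  # allow-list: define valid requests

    def lookup_user(raw_name):
        # Validate: accept only the expected format, reject everything else.
        if not VALID_NAME.match(raw_name):
            raise ValueError("invalid user name")
        # Parameterized query: the driver passes the value out-of-band, so an
        # embedded apostrophe cannot change the SQL statement's structure.
        cur = conn.execute("SELECT name FROM users WHERE name = ?", (raw_name,))
        return cur.fetchall()

    print(lookup_user("alice"))             # [('alice',)]
    try:
        lookup_user("alice' OR '1'='1")     # rejected by validation
    except ValueError as e:
        print(e)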

4.7.5.2    Code Analysis Tools

Source code analysis products are aimed at helping companies unearth and fix flaws in software – notably in C/C++ and Java application development. The goal is to give companies a way to discover flaws in code that could lead to threats such as buffer overflows, format string errors, and SQL injection exploits. Such products may also include a runtime analysis component that allows security workers to launch a variety of attacks against new applications before they are deployed. Today, a number of vendors market SAST (Static Application Security Testing) and DAST (Dynamic Application Security Testing) products.

4.7.6 Testing Best Practices

According to the International Institute for Software Testing (IIST): “Experience reports show that up to 80% of the maintenance effort is spent to fix problems resulting from requirements errors … well-understood, well-defined, and managed requirements are the basis for effective testing of the software system.”

4.7.6.1    Requirements-Based Testing

Testing should tie back to the requirements. As mentioned earlier in Section 4.7.2, the specifications should make all requirements explicit, including both the positive (should happen) and negative[27] (should not happen) requirements. Some requirements will be application-specific, while others will be general requirements derived from the design principles. There should be a test checklist for both.

4.7.6.2    Requirements-Based Testing Tools

Requirements-based testing methodology tools are available. However, there is very little related to security testing specifically.

5 Security Operations

The focus now shifts to security operations, highlighted in the bottom center of the overall framework in Figure 17. This is the third and final set of components and processes that make up this O‑ESA Guide.

Figure 17: Security Operations Components and Processes

This section provides a high-level security operations framework and template that member organizations can tailor to their needs. The security operations function defines the processes required for operational support of a policy-driven security environment. This function involves two key types of processes. One includes the administration, compliance, and vulnerability management processes required to ensure that the technology as deployed conforms to policy and provides adequate protection to control the level of risk to the environment. The other consists of the administration, event, and incident management processes required to enforce policy within the environment. The security operations function has a strong dependency on asset management. Figure 18 shows this overall set of components and processes.


Figure 18: Security Operations Overview

The components and processes that make up security operations are introduced briefly below and then described in more detail in the following sections:

  • Asset management is a component and process for maintaining the inventory of hardware and software assets required to support device administration, compliance monitoring, vulnerability scanning, and other elements of security operations. Though not strictly an O‑ESA component, it is a key dependency of security operations.
  • Administration is the process for securing the organization’s operational digital assets against accidental or unauthorized modification or disclosure.
  • Compliance is the process for ensuring that the deployed technology conforms to the organization’s policies, procedures, and architecture.
  • Vulnerability management is the process for identifying high-risk infrastructure components, assessing their vulnerabilities, and taking the appropriate actions to control the level of risk to the operational environment.
  • Event management is the process for day-to-day management of the security-related events generated by a variety of devices across the operational environment, including security, network, storage, and host devices.
  • Incident management is the process for responding to security-related events that indicate a violation or imminent threat of violation of security policy (i.e., the organization is under attack or has suffered a loss).

5.1 Asset Management

Asset management includes the components and processes for maintaining the inventory of hardware and software assets required to support device administration, compliance monitoring, vulnerability scanning, and other aspects of security operations. Common components include a repository of hardware and software assets (including the configuration and usage information), a capability to discover assets as they are added to the network, and reporting capabilities. This information may be used for activities such as contract renewals, software license compliance audits, and cost reduction activities. While asset management is not specific to O‑ESA and may in fact be valued more for its contribution to enterprise architecture, it is a foundational dependency of security operations.

From a security perspective, an organization must be able to respond quickly to threats, and doing so requires knowledge of the assets that may be under attack.

Asset information needs include:

  • Asset location
  • Configuration of software and hardware
  • Support ownership
  • Business context
  • Identity information

Monitoring is also required to ensure that the inventory is complete and up-to-date.

5.2 Security Event Management

Security event management gives information security groups visibility into security incidents. This facilitates better understanding of the use and misuse of the system and gives the information security team a way to track and respond to events. The security event management system is closely aligned with logging, monitoring, audit logging systems, and incident response processes.

5.3 Security Administration

Security administration includes the components and processes for securing the organization’s operational digital assets against accidental or unauthorized modification or disclosure. This is accomplished by planning, coordinating, and implementing the technologies and best practices required to create and maintain secure access to resources and protect the integrity of system and device configurations.

Security administration comprises two primary sub-components:

  • Identity management is responsible for the creation, modification, and termination (inactivation or deletion) of digital identities, including the workflow process for managing both identity and access management information. It is also responsible for management of authentication tokens and certificates.
  • Device configuration is responsible for technical standards instantiation at the device level (see Section 6.2 for background information). It is also responsible for ensuring that updates to the actual devices are reflected in the asset database.

5.4 Security Compliance

Security compliance (Figure 19) provides a process framework for ensuring that the deployed technology conforms to the organization’s technical standards, procedures, and architecture. An organization must have processes and tools that enable:

  • Monitoring of the deployed technology to ensure that it remains in alignment with policy. Monitoring involves gathering data about deployed technology and comparing it to a defined state.
  • Alerting and reacting to identified exceptions, bringing the technology back into alignment. When technology is determined to be out of alignment, processes need to be in place that allow for notification of the appropriate personnel and for bringing the technology back into compliance (a drift-check sketch follows Figure 19).


Figure 19: Security Compliance
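To make the monitoring-and-alerting cycle concrete, here is a minimal drift-check sketch. The setting names, desired values, and reported data are illustrative assumptions; a real implementation would gather configuration data through agents or scanning tools and feed exceptions into the alerting process.

    # Minimal compliance drift-check sketch (illustrative settings and values).
    DEFINED_STATE = {                    # central policy: the desired configuration
        "ssh_root_login": "disabled",
        "password_min_length": 12,
        "av_definitions_max_age_days": 7,
    }

    def check_compliance(device, reported):
        """Return the exceptions where the device deviates from the defined state."""
        exceptions = []
        for setting, desired in DEFINED_STATE.items():
            actual = reported.get(setting)
            if actual != desired:
                exceptions.append(f"{device}: {setting} is {actual!r}, expected {desired!r}")
        return exceptions

    # Gathered data for one deployed server
    drift = check_compliance("web-01", {
        "ssh_root_login": "enabled",
        "password_min_length": 12,
        "av_definitions_max_age_days": 21,
    })
    for alert in drift:                  # notify appropriate personnel, then remediate
        print("ALERT:", alert)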

5.5 Vulnerability Management

Vulnerability management provides a process framework for identifying high-risk infrastructure components, assessing their vulnerabilities, and taking the appropriate actions to control the level of risk to the operational environment.

Asset management is a core dependency for the vulnerability management process. It is assumed that the asset repository:

  • Contains hardware and software configuration information, owner information, and business context and value information.
  • Allows exact understanding of targets for remediation of vulnerability notifications from vendors.
  • Supports evaluation of targets identified as a result of vulnerability assessment scanning.

Vulnerability management encompasses both reactive and proactive processes for dealing with vulnerability issues:

  • Reactive process for dealing with vulnerability reports from vendors
  • Proactive process for identifying vulnerabilities and taking appropriate actions to control the level of risk

5.5.1 Reactive Process for Responding to Vulnerability Notifications

  • Receive notification of potential vulnerabilities
  • Query the asset repository, looking for target systems that are susceptible to the new vulnerability
  • Perform risk analysis
  • Develop a response plan if warranted

—  If patch is available:

  • Deploy the patch
  • Update the asset repository

—  If patch is not yet available:

  • Determine whether interim risk mitigation is required
  • If required, define and apply risk mitigation measures to the target systems; if not, await the patch
  • Update the asset repository if required
  • Document actions taken
  • Assess success/failure of process

5.5.2 Proactive Process for Vulnerability Identification and Response

The only difference in this process is that it is proactively initiated as a result of vulnerability assessment scanning; it is identical to the reactive process after the first step. A sketch of the repository query step follows the list.

  • Perform vulnerability assessment scanning
  • Receive report of potential vulnerabilities
  • Query the asset repository, looking for target systems that are susceptible to the new vulnerability
  • Perform risk analysis
  • Develop a response plan if warranted

—  If patch is available:

  • Deploy the patch
  • Update the asset repository

—  If patch is not yet available:

  • Determine whether interim risk mitigation is required
  • If required, define and apply risk mitigation measures to the target systems; if not, await the patch
  • Update the asset repository if required
  • Document actions taken
  • Assess success/failure of process
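The following minimal sketch illustrates the “query the asset repository” step shared by both processes: matching a vulnerability notification against inventoried software versions. The repository rows and the notification format are illustrative assumptions.

    # Minimal sketch: match a vulnerability notification to inventoried assets.
    ASSET_REPOSITORY = [
        {"host": "web-01", "software": "openssl", "version": "1.0.1f", "owner": "web-team"},
        {"host": "db-02",  "software": "openssl", "version": "1.0.2k", "owner": "dba-team"},
        {"host": "app-03", "software": "tomcat",  "version": "7.0.42", "owner": "app-team"},
    ]

    def susceptible_targets(notification):
        """Return assets running a version named as vulnerable."""
        return [a for a in ASSET_REPOSITORY
                if a["software"] == notification["software"]
                and a["version"] in notification["vulnerable_versions"]]

    notice = {"id": "VENDOR-2014-0160", "software": "openssl",
              "vulnerable_versions": {"1.0.1f", "1.0.1g"}}

    for asset in susceptible_targets(notice):
        # feed risk analysis and response planning with owner and business context
        print(f"{notice['id']}: {asset['host']} ({asset['owner']}) needs review")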

5.6 Event Management

Event management provides a process for day-to-day management of the security-related events generated and logged from a variety of sources within the operational environment, including security, network, storage, and host devices. The following processes are required:

  • Security logs must be consolidated and maintained. A strategy for storage and maintenance of log files must be defined and implemented.
  • Events must be aggregated, normalized, and analyzed regularly to provide a baseline. An event console strategy must be defined and implemented.
  • Alerts must be generated and routed to the appropriate individuals when suspicious activity has been detected. In addition, if the event represents an immediate or imminent security threat, then incident management processes must be invoked (see Section 5.7). A minimal alerting sketch follows this list.
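The following minimal sketch illustrates the aggregate-normalize-alert chain: raw log lines are reduced to normalized events, counted, and compared against a threshold derived from the baseline. The log format, parsing, and threshold are illustrative assumptions.

    # Minimal event aggregation and alerting sketch (illustrative logs/threshold).
    from collections import Counter

    RAW_LOGS = [
        "2011-04-01T10:00:01 sshd[411]: Failed password for root from 203.0.113.9",
        "2011-04-01T10:00:03 sshd[411]: Failed password for root from 203.0.113.9",
        "2011-04-01T10:00:04 sshd[411]: Failed password for root from 203.0.113.9",
        "2011-04-01T10:00:09 sshd[412]: Accepted password for alice from 10.0.0.8",
    ]

    FAILED_THRESHOLD = 3   # baseline-derived; tuned from normal activity

    def normalize(line):
        """Reduce a raw line to a (status, user, source) event tuple."""
        parts = line.split()
        status = "failed" if "Failed" in line else "accepted"
        return (status, parts[-3], parts[-1])   # user, source IP

    events = [normalize(l) for l in RAW_LOGS]
    failures = Counter((user, src) for status, user, src in events if status == "failed")

    for (user, src), count in failures.items():
        if count >= FAILED_THRESHOLD:
            # route to appropriate individuals; invoke incident management if imminent
            print(f"ALERT: {count} failed logins for {user} from {src}")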

5.7 Incident Management

Incident[28] management provides a process framework for responding to security-related threats. Incident management processes are invoked when the analysis of events indicates a violation or imminent threat of violation of computer security policies, acceptable use policies, or standard computer security practices. These include (but are not limited to):

  • Attempts (failed or successful) to gain unauthorized access to a system or its data
  • Unwanted disruption or denial of service
  • The unauthorized use of a system for the processing or storage of data
  • Changes to system hardware, firmware, or software characteristics without the owner's knowledge, instruction, or consent

Incident management processes include the following:

  • A process for analyzing and responding to incidents (see the diagram below)
  • A process for contacting appropriate personnel
  • A process for understanding and fulfilling legal requirements (if applicable) with provisions for:

—  Chain of custody

—  Notification of appropriate authorities, including the ability to provide appropriate documentation

—  Recovery

—  Reporting

  • A recovery process to bring the organization back to its defined state
  • A reporting process to ensure all interested parties are apprised of incident management activities

As these descriptions indicate, event and incident management are closely related. Figure 20 shows their relationship graphically.


Figure 20: Incident Management

This concludes the description of security operations, the third and final component of this O‑ESA Guide. As discussed earlier, security operations encompasses two critical types of processes required to make policy-driven security a reality: the processes required to ensure that technology as deployed conforms to policy and adequately protects the environment, and the processes required to enforce policy within the environment. Although these processes are defined at only a high level, they are just as important as the other components of O‑ESA. It is in fact these processes that bring policy-driven security architecture to life.

5.8 Testing Security Architecture

There are several techniques to consider when testing security architecture. The first point to consider is whether the goal is to test the security design or to test how well the system holds up in the face of malicious use. Checking that the security design functions as expected means testing the security properties of the system, such as authentication and access control. Assuring that the system holds up in the face of malicious use involves intentionally attempting to bypass controls.

The primary focus of testing is to ensure that the goals described in the security policy are realized in the runtime implementation. There are several ways to verify that the policy goals are met:

  • Static analysis: scanning the application code
  • Runtime analysis: attempting to find vulnerabilities in the running systems
  • Fuzzing/fault injection: purposely injecting malicious or unexpected data to cause faults
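As a minimal sketch of the fuzzing technique listed above, the following feeds random byte strings to a parsing routine and records inputs that raise unhandled exceptions. The target function is a hypothetical stand-in for any input-handling code under test; real fuzzing tools are considerably more sophisticated.

    import random

    def parse_record(data: bytes):
        """Hypothetical input handler standing in for real parsing code."""
        return data.split(b",")[1]  # raises IndexError on input without a comma

    def fuzz(target, iterations=1000, max_len=32):
        random.seed(0)  # fixed seed so failing inputs are reproducible
        failures = []
        for _ in range(iterations):
            data = bytes(random.randrange(256)
                         for _ in range(random.randrange(max_len)))
            try:
                target(data)
            except Exception as exc:  # an unhandled fault is a candidate defect
                failures.append((data, repr(exc)))
        return failures

    print(f"{len(fuzz(parse_record))} fault-inducing inputs found")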

Practical work[29] on static analysis identifies five keys to ensuring that security testing creates positive change in the enterprise:

  • Start small – security testing tools and processes are new technologies, and there is wide variance in how best to apply them in a given organization.
  • Go for the throat – focus on the areas of biggest impact rather than casting the widest net initially.
  • Appoint a champion – ensure mentors are in place to smooth adoption.
  • Measure the outcome – use objective measures to demonstrate progress.
  • Make it your own – establish ownership of the governance, standards, guidelines, and results.

These five keys are essential across all the areas of security testing to make sure these efforts generate maximum value. Security architecture by itself guarantees nothing; security testing processes verify that the intent of the architecture and design is delivered in the system runtime.

5.9 Security Metrics

5.9.1 Operational and Business-Aligned Metrics

Security metrics offer objective methods to track and communicate the overall maturity of the security architecture. Because security issues manifest themselves in both technical terms (like vulnerabilities) and business terms (like availability outages), the security metrics field is expanding to fill the age-old gap between IT and the business. This section describes some currently observed practices around information security operational metrics. These are necessary but not sufficient for an overall security metrics program.

A successful security metrics program must include not only operational metrics that report on the health of the system, but also must link operations to the business risks and impact.

The Open Group Risk Taxonomy Technical Standard[30] shows one end-to-end example of this using the following steps:

  • Stage 1: Identify scenario components:

—  Identify the asset at risk

—  Identify the threat community under consideration

  • Stage 2: Evaluate Loss Event Frequency (LEF):

—  Estimate the probable Threat Event Frequency (TEF)

—  Estimate the Threat Capability (TCap)

—  Estimate Control Strength (CS)

—  Derive Vulnerability (Vuln)

—  Derive Loss Event Frequency (LEF)

  • Stage 3: Evaluate Probable Loss Magnitude (PLM):

—  Estimate worst-case loss

—  Estimate Probable Loss Magnitude (PLM)

  • Stage 4: Derive and articulate risk:

—  Derive and articulate risk

The BSIMM[31] survey of security maturity at 30 companies in a variety of industries (financial services, independent software vendors, technology firms, healthcare, insurance, energy, and media) describes a security framework that may be measured to gauge the maturity of the security program.

5.9.2 Objectives

The primary purpose of security and risk metrics is decision support. Trained security experts are often able to make informed decisions about security matters based on their experience and reading the situation, and tend to use security metrics to confirm their assessments. Executives, managers, and technical staff rely more on metrics to inform their decisions and detect and respond to incidents and anomalies. Security and risk metrics may be used to:

  • Measure the effectiveness of security personnel, processes, and technology based on objective targets
  • Make more informed risk management decisions
  • Establish benchmarks and trends over time
  • Measure whether security goals have been achieved

When building an architecture there is a litany of design options and trade-offs to be considered. Security, risk, and integration are inextricably linked. Security metrics use objective measurement to aid this decision process.

5.9.3 What is a Security Metric?

The discussion site www.securitymetrics.org defines the essential characteristics of a security metric. The descriptions below utilize some of the characteristics as described by securitymetrics.org, and naming conventions for metrics from O-ISM3,[32] an Open Group standard for information security management. This data model will help you define metrics and show you how to integrate them into your enterprise:

  • Metric name
  • Metric description
  • Metric purpose/objective
  • Required data sources
  • Required logic, algorithms, or formulae (Measurement Procedure in O-ISM3)
  • Frequency of measurement (Measurement Frequency in O-ISM3)
  • Units of measure (Units in O-ISM3)
  • Benchmark or goal (Target Value in O-ISM3)
  • Visualization
  • Metric report (publication schedule)

The source data and publication schedule may dictate how much processing can be done on the metric. The logic, algorithms, formulae, units of measure, and target value (benchmark) determine the symbolic representation of what the metric captures. In the following sections, we examine some types of metrics and how these characteristics bear upon what is practical in deploying a security metrics program in your enterprise.
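A minimal sketch of this data model follows, using the O-ISM3 names noted in parentheses above. The field names, types, and the example metric are illustrative assumptions rather than a normative schema.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class SecurityMetric:
        name: str
        description: str
        objective: str
        data_sources: List[str]
        measurement_procedure: Callable[[dict], float]  # logic/algorithm/formula
        measurement_frequency: str                      # e.g., "weekly"
        units: str                                      # units of measure
        target_value: float                             # benchmark or goal
        visualization: str = "line chart"
        publication_schedule: str = "monthly report"

    patch_latency = SecurityMetric(
        name="Mean patch latency",
        description="Average days from patch release to deployment",
        objective="Track responsiveness of the patch management process",
        data_sources=["asset repository", "change records"],
        measurement_procedure=lambda d: sum(d["latencies"]) / len(d["latencies"]),
        measurement_frequency="weekly",
        units="days",
        target_value=14.0,
    )
    print(patch_latency.measurement_procedure({"latencies": [10, 12, 20]}))  # 14.0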

5.9.4 Types of Metrics

Metrics may be defined by the time dimension of how they are gathered, reported, and used. These characteristics have a direct bearing on the amount of data that may be gathered, how it is reported, what data cleansing is required and/or feasible, and a host of other issues.

Metric Type | Description | Examples
Reporting Metrics | Measurements of processes and system elements; reports on past events and performance. | Identity provisioning cycle time; access control authorization reports
Predictive Models | Large data sets used to generate models for decision support; for example, for generating likely outcomes based on given inputs. | Learning-mode web application firewalls
Real-time Metrics | Small data sets used to provide critical information for systems management and diagnostic data. | Warnings; SNMP; JMX

Table 4: Metric Types and Examples

There are many variations on the three metrics types listed above. What is important for the metric designer to understand is:

  • What type of metric they are building: is it backward-looking or forward-looking?
  • Who is the consumer of that metric?
  • What reporting format makes sense to display the metric?

The time at which the metrics are collected, and how they are processed and used, define the metric as much as the data itself.


Figure 21: Metric Types and Uses

In real-world situations, all three types of metrics are likely to be used at different points in the lifecycle. It is useful to understand the metric type and end audience when designing the metric, and deriving its source data.

5.9.5 Applying Security Metrics

Metrics may be qualitative where the measures are subjective based on the assessment of the measurer, or quantitative where the measurements are objective. Whereas qualitative metrics are better than none, an objective quantitative metric provides a more consistent measure. Andrew Jaquith’s rules[33] for effective security metrics are:

  • Be consistently measured: The criteria must be objective and repeatable.
  • Be cheap to gather: Using automated tools (such as scanning software or password crackers) helps.
  • Contain units of measure: Time, dollars, or some numerical scale should be included – saying “green”, “yellow”, or “red” is more qualitative than quantitative.
  • Be expressed as a number: Give the results as a percentage, ratio, or some other kind of actual measurement. Don't give subjective opinions such as “low risk” or “high priority”.

Consistently measured metrics that contain actual units of measure expressed as numbers are good goalposts for establishing useful security metrics, because they act as an objective guide for decision-makers. The guidance to find security metrics that are cheap to gather reflects the fact that security metrics are generally gathered on an ongoing basis. Do not build out a security metrics initiative that relies on end-to-end auditing of all infrastructure; rather, identify metrics that can be generated and consumed efficiently.

5.9.6 Design-Time, Deploy-Time, and Run-Time Metrics

Risk metrics are concerned with assets, threats, vulnerabilities, and countermeasures, and each of these areas has different types of measurement associated with its domain. We are not at a point in security and risk metrics where we can achieve one metric to rule them all, but at a granular level we can identify useful metrics for certain domains at different times in the development lifecycle. Below we look at design time, deployment time, and runtime metrics examples. This is not an exhaustive list, but rather an illustrative example. In each architectural view in service-oriented security we examine domain-specific metrics for that view to show further how metrics are applied in specific contexts.

5.9.6.1    Design-Time Metrics

Design-time metrics are used to make risk management decisions about deploying software and security mechanisms. Design-time metrics may be harvested from source code through static analysis, or from sources of risk metrics such as audits, or iteratively from runtime and deploy-time metrics. Design-time metrics are important because they enable the designer to use metrics to improve the product as it is under development. Projects generally have at least some focus on integration; where at least one part of the system is already in production, metrics may be used to confirm or challenge assumptions made about the security of the environment that is to be integrated. Other examples to consider for design-time metrics include:

  • Percentage of service interfaces using input validation: Measure the percentage of services taking external input that use input validation (see the sketch after this list). This may be refined further if the enterprise has stronger and weaker input validation mechanisms, so that it can measure, for example, white-list versus black-list validation.
  • Percentage of interfaces and message exchanges using message encryption and signatures: For each interface and message exchange pattern, measure how many services offer encryption and digital signature protection for their messages.
  • Policy violations: Measure amount and trend of policy violations over time.
  • Authentication strength: In large systems, there may be a variety of authentication strengths available to the designer. Put the authentication mechanism on a continuum – such as user name/password (weakest), federated identity (medium), smart card (strongest) – and measure the prevalence of each. Associate these with the attendant risks for the systems they protect.
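Here is a minimal sketch of the first metric above. The interface inventory and its fields are assumptions standing in for data harvested by static analysis or from a service registry.

    # Hypothetical service interface inventory
    interfaces = [
        {"name": "createOrder", "external_input": True, "validation": "whitelist"},
        {"name": "getStatus", "external_input": True, "validation": None},
        {"name": "auditDump", "external_input": False, "validation": None},
    ]

    external = [i for i in interfaces if i["external_input"]]
    validated = [i for i in external if i["validation"] is not None]
    whitelisted = [i for i in validated if i["validation"] == "whitelist"]

    print(f"input validation coverage: {len(validated)}/{len(external)} "
          f"({100 * len(validated) / len(external):.0f}%); "
          f"white-list validation: {len(whitelisted)}/{len(validated)}")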

Note the difference between metrics and measurements. Useful metrics measure how a process or system behaves, where the metric value will change according to process or system behavior. By comparison, a structural component which is not likely to be affected by process or system behavior may have a measurement but is not a useful metric.

Design-time metrics are used to decide whether additional security mechanisms should be added based on the risk profile of the enterprise. The metrics may be gathered throughout the development process to identify trends in the software development lifecycle. For example:

  • Did the security of the system increase or decrease over time?
  • Did the outsourced development partner produce more or less secure code than the code developed by internal staff?
  • Is the code under development more or less secure than what is already in production?

Design-time metrics are typically gathered and used by the development staff, such as developers, software architects, and software security architects. These metrics are available while the code is under development, which means they can influence the quality and security of the end product if they are gathered and communicated early enough in the software development lifecycle.

5.9.6.2    Deploy-Time Metrics

Deploy-time metrics are concerned with measuring the changes to the system over time. While these metrics may not necessarily signify security concerns in and of themselves, when combined with runtime metrics, deploy-time metrics may give insight into runtime events. Some examples of deploy-time metrics:

  • Configuration Change: Change of settings and configuration by user and by type.
  • Change Trends: Amount of configuration changes over time.
  • System State: Percentage of system in a given state, such as application servers using a certain security configuration in one geographic location versus an alternate security configuration in a different location.

Deploy-time metrics can be combined with runtime metrics, so that events that occur at runtime can be correlated back to deploy-time trends. Deploy-time metrics may be used by operations staff and auditors to understand the security of the system and its administration. IT shops typically gather deployment metrics as part of ITIL and related processes; security-centric metrics should be used to augment these standard metrics.
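As an illustration of the System State example above, the sketch below computes the percentage of servers in each configuration state from hypothetical configuration snapshots; the hosts, regions, and profile names are assumed for the example.

    from collections import Counter

    # Hypothetical configuration snapshots per application server
    servers = [
        {"host": "app-eu-01", "region": "eu", "tls_profile": "strict"},
        {"host": "app-eu-02", "region": "eu", "tls_profile": "strict"},
        {"host": "app-us-01", "region": "us", "tls_profile": "legacy"},
    ]

    # Share of servers in each (region, security configuration) state
    states = Counter((s["region"], s["tls_profile"]) for s in servers)
    for (region, profile), count in states.items():
        print(f"{region}/{profile}: {100 * count / len(servers):.0f}% of servers")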

5.9.6.3    Run-Time Metrics

Run-time metrics are focused on the runtime behavior and diagnostics that services exhibit. Runtime metrics may be reported on historically, assessed in a forward-looking predictive model, and used for debugging production systems with alerts and warnings on detection of incidents and anomalies. Some examples of runtime metrics include:

  • Security Alerts: Real-time alerts are available from host processes and application code, as well as security systems like HIDS, NIDS, SIM, and SEM.
  • Failed Message Validation: Systems such as XML security gateway and input validation code report on failed message validation either through intentional or unintentional misuse of the system.
  • Unauthorized Access: Access control mechanisms report on events such as authentication and authorization faults, account logouts, and so on.
  • HIDS/NIDS/SEM/SIM Events: IT security typically has controls to report on the health and security of operational systems at a network and host level; these metrics can inform the overall security metrics for the system as a whole.
  • Honeypot/Honeytoken: Honeypots and honeytokens are systems and tokens that are designed to be compromised. These systems give visibility into the threat landscape and may enable the security response team to deal with attacks more efficiently.

Runtime metrics may be fed into the overall metrics program to improve the quality of the other system metrics. Runtime metrics are critical because they show both the threats and vulnerabilities.

Runtime metrics (Alerts in O-ISM3) are typically constrained by the amount of data available, meaning a narrow stream of data that is available at runtime and/or the amount of data an analyst can consume at runtime. This has led IT security to further invest in SEM and SIM technologies, which seek to correlate events on behalf of the analyst.

5.9.6.4    Measurers and Modelers

Another interesting discussion topic on www.securitymetrics.org centers on whether you are a measurer or a modeler. In general, the group consensus for key defining characteristics is as follows:

Modelers | Measurers
Risk equations | Empirical data
Loss expectancy | Time-series analysis
Linear algebra | Correlation
Attack surfaces | Essential practices
Information flow | Information sharing
Economic incentives | Economic spending
Vendors | Enterprises
Why | Before and after

Table 5: Metrics Approach: Modeler or Measurer

These are not hard and fast categories, but they give an overall sense of two different high-level approaches to metrics gathering and utilization. A given security metrics program may implement variations on both themes, but it is useful to understand the program’s approach and focus when building a holistic metrics program.

5.9.7 Security Metrics Process

The security metrics process comprises the following main steps:

  • Interpretation: Evaluate the meaning of a measured value by comparing it with a threshold, a comparable measurement, or a target. Depending on the source(s), metrics generally require sorting and filtering, interpretation in the light of previously gathered metrics, and in some cases correlation across systems and processes.
  • Investigation: Investigate abnormal metric values and their causes.
  • Representation: Visualize the metric to enable reliable interpretation. Metrics representation will vary depending on the type of comparison and distribution of a resource; e.g., bar charts, pie charts, line charts, use of colors (green-amber-red), and the time period involved.
  • Diagnosis: Analyze the metrics further to separate signal from noise in terms of security events, and indicate consequences that can be used to inform business decisions.
  • Reporting: Develop a report based on how the metrics are to be used (decision support or real-time incident response). Reports may be communicated to a wider stakeholder community.

The basic process described above can be enhanced based on your environment. O-ISM3 defines five phases: Measurement, Interpretation, Investigation, Representation, and Diagnosis. Security metrics is an emerging field that holds promise to improve security architecture and communication. The discipline that an architecture program can cultivate is to identify metrics that measure the design-time, deploy-time, and runtime effectiveness of your security design.
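A minimal sketch of the Interpretation step follows, comparing a measured value against its target value and the previous period's baseline. The tolerance, the lower-is-better convention, and the example values are all assumptions for illustration.

    def interpret(value, target, baseline, tolerance=0.10):
        """Classify a lower-is-better measurement against target and prior period."""
        status = "on target" if value <= target * (1 + tolerance) else "investigate"
        trend = "improving" if value < baseline else "worsening"
        return status, trend

    # Example: mean patch latency of 16 days vs. a 14-day target, 21 days last period
    print(interpret(16.0, target=14.0, baseline=21.0))  # ('investigate', 'improving')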

6 Toward Policy-Driven Security Architecture

Ideally, a policy-driven security environment is one where policies are articulated at a business level and automatically codified and enforced across the environment. For example, in a healthcare environment the business policy requirement might be expressed simply as “HIPAA[34] compliance”. Our security systems would be able to automatically translate that business policy into the security policies and detailed technical standards required to implement the business policy, and then push them out in electronic form to the various policy decision and enforcement points in the enterprise. In addition, our security systems would have access to the necessary identity and information attributes such that policy decisions could be properly based (e.g., on the characteristics of the requesting user, the requested information, and the environment). Obviously today’s technology does not provide such capabilities, and although there is industry movement in this direction, at current course and speed realization of the full vision may be years away, at best. The policy automation vision, technical model, and roadmap discussed below are viewed as important steps in organizing the user and industry actions required to actualize the vision sooner.

6.1 Policy Layers and Relationships

Figure 22 briefly defines policy layers and their relationships in the context of this O‑ESA Guide’s conceptual framework for policy-driven security.

Business policy is the highest level expression of business intention in the security policy realm. The example used is “HIPAA compliance”, which means compliance with the HIPAA standards designed to assure the security of electronic protected health information. One can imagine other examples that might be less industry-specific, such as “perimeter access policy” or “software configuration policy”. In the O‑ESA policy model, business policy is then translated to high-level security policies in the relevant ISO/IEC 27001/2 policy domains. The specific security policies must then be translated into the management and technical standards that define in detail how each policy will be implemented and enforced in the user organization’s technical environment:

  • ISO/IEC 27001/2 Policy Template (Organizational Security) is an example of a management policy. For organizations that outsource some of their information processing and support multiple third-party access arrangements, these policies may result in the implementation of a number of management standards that define in detail the relevant roles, responsibilities, and processes for dealing with service providers and third-party access requirements. The management standards themselves are outside the scope of what is being addressed here, keeping in mind that those policy domains such as third-party access may spawn technical standards as well.
  • ISO/IEC 27001/2 Policy Template (Access Control) and Systems Development and Maintenance are examples of technical policies. Depending on the scope and diversity of the technical environment, technical policies may translate to a very large number of technical standards,[35] defining in detail how each policy is to be implemented and enforced across the technical infrastructure.


Figure 22: Policy Layers and Relationships

Implementation of the technical standards results in an electronic representation of the business policy, augmented as required by administrative procedures. As business events occur, policy is enforced in real time by the security infrastructure, augmented by manual enforcement procedures as required.

6.2 Policy Automation Vision

With the above definitions as background, Figure 23 describes the current state and future vision for business policy implementation and enforcement.

Figure 23: Policy Automation Vision

As shown on the left of the figure, today’s current state is that essentially all of the policy and standards definition process is manual, as well as much of the standards implementation process. Implementation is often based on configuration checklists that are associated with each technical standard and each type of hardware or software system where it is to be applied. This may involve manual administration tasks at each system or it may be partially automated based on profiles for each specific system configuration. The amount of manual versus automated configuration definition varies widely from organization to organization.

Once configuration is accomplished, compliance is enforced through a combination of automated runtime controls and manual monitoring. However, there are many cases today in which few if any automated runtime controls are available. This is especially true for standards related to appropriate personal use of equipment and information, which are enforced largely through manual monitoring if at all. Enterprise rights management technology may provide automated runtime controls for appropriate use of information (documents, email, and removable media containing sensitive information) in the future.

As shown on the right of Figure 23, the future vision automates all of the policy and standards definition process, as well as the standards implementation process. This is obviously a tall order, and at this point the future arrows on the right of the diagram represent magic.

The next section describes a model for automation of the policy generation and instantiation process, which begins to lay out a technical vision for how the future arrows could potentially be made real. This is followed by a roadmap of user and industry actions that need to occur in order to enable the technical vision.

6.3 Policy Automation Model

Figure 24 portrays a high-level technical model for automation of the business policy implementation process.

Figure 24: Policy Automation Model

At the center is a policy management system with integrated policy mapping, policy translation, and technical standards instantiation modules. On the right is a business policy module that provides a generic definition of the business policy to be implemented. On the left is the enterprise-specific policy schema and configuration data required to map the generic business policy definition to the organization’s particular technical architecture. The following describes the model in a little more detail, before moving on to an example:

  • Business Policy Module: It is assumed that there are three major aspects to the generic definition of a business-level policy:

—  Generic business content definitions for the particular type of target services/resources affected by the business policy. The automation model example will make this a little clearer.

—  Generic role definitions for the users (or initiators) affected by the business policy.

—  Generic specification of the discrete security policy statements that implement the business policy. It is assumed that these are provided in the form of a standard policy language that is compliant with the ISO/IEC 27001/2 policy template.

  • Enterprise-Specific Policy Schema & Configuration Data: It is assumed that the following types of schema and configuration data will be required by the policy management system:

—  Identity schema and role definitions required to map the generic role definitions to the enterprise-specific roles of the particular organization.

—  Content tagging schema required to map the generic content definitions to the enterprise-specific definitions for the particular organization.

—  Computing environment definitions (servers, firewalls, directories, etc.) required for policy translation to technical standards and instantiation in the organization-specific environment.

  • Policy Management System: The three modules shown represent the three conceptual steps required to map high-level business policy statements to the electronic representation required for runtime policy decision-making and enforcement. The security services manager is the management component required to update each of the managed security systems that are affected.

—  Policy Mapping Module: Takes the generic role and content definitions associated with the generic policy specification and maps them to the enterprise-specific schema to produce an enterprise-specific policy specification. In short, this module maps generic schema to enterprise-specific schema.

—  Policy Translation Module: Takes the enterprise-specific policy specification statements and translates them, based on the enterprise computing environment definition, to produce the enterprise-specific technical standards. Note that each policy statement may translate to multiple implementing standards, depending on the number of technical controls, end-point architecture variations, and environment-specific variations required. In short, this module converts high-level security policy to detailed technical standards based on the type of device/service.

—  Technical Standards Instantiation Module: Takes the enterprise-specific technical standards and instantiates them in centralized policy and configuration repositories[36] based on the enterprise-specific configuration definition. In short, this module instantiates technical standards for each instance of a particular device/service type.

—  Security Service Manager: The security service manager is responsible for updating local policy/configuration repositories for the affected security services. This is accomplished through interactions with the security management agents at each of the managed systems.[37] Once the local repositories have been updated, the security services environment is ready for runtime decision-making and enforcement.

  • Runtime Decision & Enforcement[38]: This portion of the diagram shows the relationship to runtime policy decision-making and enforcement components of each O‑ESA security service. Some policy decision points (Central PDPs) may operate directly off the central policy repositories. It is assumed that these are full function PDPs that are not tightly integrated with the Policy Enforcement Point (PEP) and service implementation. Local PDPs may be less capable, may be tightly integrated with proprietary policy/configuration store and PEP implementations, or may be tightly integrated for performance reasons. As a result, local security management agents may have additional responsibilities, such as mapping policy/configuration updates to proprietary interfaces.
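To make the mapping, translation, and instantiation steps more concrete, here is a deliberately simplified sketch in which generic role and content names are mapped to enterprise-specific schema, and each policy statement is then expanded into one technical standard per device type from the computing environment definition. All names and structures are hypothetical; a real implementation would exchange policies in a standard language such as XACML and push results to policy repositories.

    # Enterprise-specific schema and configuration data (assumed examples)
    ROLE_MAP = {"clinician": "EMP_ROLE_RN", "billing": "EMP_ROLE_FIN"}
    CONTENT_MAP = {"patient_record": "DB.PHI.RECORDS"}
    ENVIRONMENT = {"DB.PHI.RECORDS": ["db-server", "access-gateway"]}

    # A generic policy statement as a business policy module might supply it
    generic_policy = {"role": "clinician", "content": "patient_record",
                      "action": "read"}

    def map_policy(policy):
        """Policy mapping: generic schema -> enterprise-specific schema."""
        return {"role": ROLE_MAP[policy["role"]],
                "content": CONTENT_MAP[policy["content"]],
                "action": policy["action"]}

    def translate(policy):
        """Policy translation: one statement -> a technical standard per device."""
        return [{"device": device, **policy}
                for device in ENVIRONMENT[policy["content"]]]

    # Instantiation would push each standard to the central policy repositories
    for standard in translate(map_policy(generic_policy)):
        print(standard)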

6.3.1 Policy Automation Model – HIPAA Example

This section builds on the general automation model with a specific example of business policy implementation. As introduced earlier, the business policy example used in Figure 25 is for HIPAA compliance.

Figure 25: Policy Automation Model – HIPAA Example

On the right is the HIPAA business policy module; on the left is the enterprise-specific policy schema and configuration data required to map the generic HIPAA policy definition to the organization’s particular technical architecture; and in the center is the policy management system. The following describes the components of the HIPAA policy automation example in more detail:

  • Business Policy Module: It is assumed that the HIPAA module would be provided by or in conjunction with an industry organization that has the HIPAA expertise required to define the generic HIPAA content definitions, role definitions, and policy specifications shown. It is also assumed that the generic HIPAA policy specifications are provided in the form of a standard policy language, in this case XACML.[39] Other business policy domains may require policy language standards in addition to those provided by the access control language of XACML.
  • Identity Schema and Role Definitions: This is the enterprise-specific identity schema required to map the generic HIPAA schema to the enterprise-specific roles of the particular organization. It is assumed that both sets of schema definitions are provided in a de jure or de facto standard form understood by the policy mapping module.
  • Content Tagging Schema: This is the enterprise content tagging schema required to map the generic schema for Patient Records, Prescriptions, and so forth to the enterprise-specific definitions for the particular organization. It is assumed that content tagging information is provided in a de jure or de facto standard form understood by the policy mapping module.
  • Computing Environment Definition: This is the enterprise-specific technical computing environment definition (for servers, firewalls, directories, etc.) required for policy translation and technical standards instantiation. It is assumed that this information is provided in a standard form defined by the Common Information Model (CIM).
  • Policy Mapping Module: This takes the generic HIPAA role and content schema definitions associated with the generic policy specification and maps them to the enterprise-specific schema to produce the enterprise-specific HIPAA policy specification. It is assumed that these are in the form of a standard policy language, in this case XACML, and that they are compliant with the ISO/IEC 27001/2 policy template. Again, other business policy domains may require policy language standards in addition to those provided by the access control language of XACML.
  • Policy Translation Module: This takes the enterprise-specific HIPAA policy specification statements and translates them based on the enterprise computing environment definition to produce the enterprise-specific HIPAA technical standards. As discussed earlier, each HIPAA policy statement may translate to multiple implementing technical standards (multiple technical controls for each device/service type).
  • Technical Standards Instantiation Module: This takes enterprise-specific HIPAA technical standards and instantiates them in centralized policy/configuration repositories for each instance of a particular device/service type involved in enforcing HIPAA business policy.

6.4 Policy Automation Roadmap

With the business policy automation technical model as background, Figure 26 lays out a roadmap of user organization and industry actions required to actualize the technical vision. As the legend indicates, boxes identifying user organization actions briefly describe “Conditions inhibiting automation” on the left and “Conditions supporting automation” on the right. The following describes each of the user organization and industry actions in more detail, starting at the top left:

  • Policy: If your organization’s security policy is fragmented, undocumented, and inconsistent without a clear linkage to business-level policy definitions, then there is much that can be done to put yourself in a better position to support policy automation. Start by applying the O‑ESA policy framework – identify the high-level, security-related business principles that will drive your organization’s tailoring of the ISO/IEC 27001/2 policy template. These high-level, security-related business principles are the business-level policy definitions. Some may be driven by industry-specific regulatory requirements, such as the HIPAA example. Others may be more organization-centric, such as business-level definitions of “perimeter access policy” or “software configuration policy”. Some may vary based on the specific business unit. The end goal is documented business policy and security policy consistently translated to technical standards for each element of your enterprise-specific technical environment.
  • Identifier Semantics Standards: Today there is no industry agreement on how to identify “real-world” users, applications, services, and resources that are the initiators and targets of requests in this O‑ESA Guide’s policy-driven security model. Identifiers vary in syntax and semantics – as exhibited by the use of user-friendly names on one hand, and algorithmically assigned and opaque Globally Unique Identifiers (GUIDs) on the other. The following short list describes several problematic aspects of current identifier practices:

—  Representation of an entity’s identifier may take the form of a user ID, UUID, OID, public key, email address, distinguished name, or some other form of identifier (or a combination of the former).

—  Complexities caused by myriad identifier syntaxes are compounded by a lack of consensus around desirable identifier characteristics.

—  Desired characteristics are often mutually exclusive – e.g., visually meaningful versus obscure to protect privacy; verbally conveyable versus lengthy to ensure uniqueness; static to support personalization versus dynamic to avoid profiling; standards-based for interoperability versus innovative with built-in functionalities like check digits.

—  All of this makes it difficult to use identifiers across applications and across organizational boundaries.

It seems clear that identifiers are highly variable, and flexibility must be allowed. But, there is also a need for some level of standardization across application and organizational boundaries.

Figure 26: Policy Automation Roadmap

  • Identity Data: If your organization has multiple, fragmented, or incomplete schemes for identifying and authenticating users, applications, services, and devices, then again there is much that can be done to put yourself in a better position to support policy automation. Digital credential and identity information is the technological foundation for policy-based security. Start by taking every opportunity to move your organization incrementally toward a unified, consistent, enterprise-standard set of identifiers and credentials to be used (possibly with other management data) to authenticate people, devices, and services. These credentials should allow trace-back to the identities to which they are assigned. Keep in mind that this step is necessary but may not be sufficient as you begin supporting federated identity management with business affiliates, depending on the extent to which your identifiers, credentials, and identity semantics match those of your affiliates. But, having taken this step, your organization will be in a position to more easily move to standard and robust mechanisms for authentication and identity management.
  • Architecture: If your organization’s security architecture is fragmented and incomplete, perhaps lacking a clear direction and with unclear linkage to policy, then again there is much that can be done to put yourself in a better position. Take advantage of our O‑ESA framework and template for policy-driven security. Start by tailoring the overall enterprise security framework to the needs of your organization, as well as the policy framework, already discussed above. This process will clarify your enterprise security vision, provide a strong linkage to business policy and regulations, and provide direction to IT staff. They can then take advantage of opportunities to tailor specific architectures such as identity management, border protection, and access control to your organization’s needs, as well.
  • Content Tagging[40] Standards: Currently there is little in the way of standardization activity to point to, with the exception of Extensible Rights Markup Language (XrML), which has its roots in the digital property rights space. To date, XrML has not received broad industry support, and the breadth and scope of future support is unclear. From the viewpoint of this O‑ESA Guide, what is needed is a standard way of communicating content tagging information among all of the policy decision and enforcement engines that must support it in order to enable the full set of O‑ESA services. In reality, target data content will be tagged in a variety of ways, and the content tagging information (or tags) will be stored in a variety of ways. Tags may be stored as a field in a database, as a separate database, as a resource attribute field in a directory, as part of a document, or in some other way. A standard way of communicating content tags among policy decision and enforcement points should make this transparent.
  • Content Tagging: If your organization’s architecture for communicating content tags among policy decision and enforcement points is application-specific, then there is work you can do to put yourself in a better position to support standards-based decision and enforcement engines. This is perhaps best done in the context of a business driver for a standards-based approach, such as a business driver for standards-based access control with business affiliates using SAML. This will allow you to focus on a standard way of communicating content tags among the limited set of policy engines involved in such a project, and will put your organization in a position to more easily move to standardization across a broader set of decision and enforcement engines in the future.
  • Proprietary Management within Vendor Silos: If your organization has manual, inconsistent policy implementations by device, then you can begin taking steps to put yourself in a better position to support policy automation. The prerequisite steps are discussed above under Policy and Architecture, but assuming those steps, you may be able to utilize proprietary policy management tools to achieve consistent, automated, timely policy implementation across devices (at least for a specific device vendor). As always, the key is to pick a reputable and forward-looking management tool vendor who will support common standards as they become available.
  • Common Industry Management Standards: In the context of the policy automation model described earlier, common management standards are of two principal types, as follows:

—  Standards for describing an integrated network and computing environment:

Currently there are several relevant standards in this space:

CIM from the DMTF (for enterprise and service provider environments).

Shared Information/Data (SID) model from the TeleManagement Forum (TMF) (for the telecommunications environment).

Standard IETF MIBs such as the Entity MIB (RFC2737) for asset management.

There are other efforts in this space that may be evaluated. These include the OASIS Data Center Markup Language (DCML), the Enterprise Grid Alliance (EGA), and the GGF Open Grid Services Architecture (OGSA).

—  Standards for management of the computing environment:

Web-Based Enterprise Management (WBEM) from the DMTF: A set of management and Internet standard technologies developed to unify management of enterprise computing environments. The DMTF has developed a core set of standards that make up WBEM, which includes a data model – the Common Information Model (CIM) standard; an encoding specification for the model – xmlCIM Encoding Specification; and a transport mechanism and set of operations against the model – CIM Operations over HTTP.

Web Services Distributed Management (WSDM) from OASIS: The minimum set of management capabilities that should be provided by a resource in a web services environment. Several management interfaces and discovery are also addressed, as well as how the underlying WS-* foundation is assembled and utilized.

Simple Network Management Protocol (SNMP) from the IETF: Currently, there are very few MIBs that provide configuration capabilities, but it is possible to do so. The SNMP protocol is not well suited to this task and SNMP v3 or later is required in order to support adequate security levels. On May 16, 2004, the IETF published an Internet-Draft, Policy Based Management MIB. Interestingly, the document overview is focused on management based on high-level business policies. Also, it should be noted that the DMTF has put emphasis on mapping the standard IETF MIBs to CIM in order to re-use the knowledge that went into the MIB development, and position the MIBs relative to each other and to the other data in CIM, to enable consistent management and policies.

Network Configuration Protocol (NetConf) from the IETF: A set of operations for manipulating configuration data sets via a variety of underlying protocols (such as SOAP, BEEP, and SSH). Currently, the protocol is model-neutral, although discussions are underway regarding the formation of a NetConf Data Model Working Group.

  • Policy Specification and Communication Standards: In the context of the policy automation model described earlier, there are two requirements: a standard way of specifying policies so that they can be understood and applied across vendor products and security domains, and a standard language for communicating policy decision and enforcement data across vendor products and security domains.

—  Policy specification standards include XACML and the CIM Policy Model. XACML has evolved as the industry’s favored standard for a general policy language. The CIM Policy Model is designed to be independent of any policy language, is applicable to managing the configuration and behavior of any resource (policy configuration of routers, packet filters, operating systems, storage, etc.), and also addresses authentication and authorization/access control. Other options for policy specification include languages like PONDER from London’s Imperial College. The coupling of the PONDER syntax with CIM’s semantics has been demonstrated.

—  Policy communication standards include XACML, SAML, and CIM Policy.

—  Lastly, the policy infrastructure itself must be managed. The basic concept of CIM Services could be extended via subclasses to describe and manage the functionality provided by PMAs, PDPs, and PEPs. Work in this area is critical to management of the overall policy infrastructure envisioned by O‑ESA.

  • Common Management across Vendor Silos: As the above standards are defined and gain traction, you can begin moving from consistent, automated, timely policy implementation across devices within vendor silos to common policy implementation across all vendors’ devices.
  • Multi-Vendor Policy Management System with Standard Languages and Protocols: As the above standards and common management products mature, this enables an enterprise-class, multi-vendor policy management system based on standard languages and protocols.

In summary, the O‑ESA Guide’s policy-driven security vision is one in which high-level business policies are automatically translated into the specific security policies and detailed technical standards required to implement the business policy, and then automatically instantiated in a standard form for the various policy decision and enforcement points in the enterprise. An essential corollary is that policy engines must also have access to the necessary identity and management information attributes such that policy decisions can be accurately made (i.e., based on the characteristics of the initiator, the target content, and the environment). Although there are standards and technology gaps that must be filled in order to enable the vision across the full set of O‑ESA services and the multiple product and security domains involved, industry groups are active in addressing these gaps. Hopefully this business policy automation vision, technical model, and roadmap will assist our industry in influencing the user and vendor actions required to deliver the vision sooner rather than later. The extent to which the full vision can be achieved has yet to be determined, but it is clear that the goal of significantly reducing the manual effort and cost of business policy implementation can be achieved.

7 Conclusions and Recommendations

7.1 Conclusions

As discussed in the executive overview, information systems security has never been more critical or more complex. Complexity is a result of escalating demand for new and improved e-business services in the face of escalating cyber security threats, escalating requirements for corporate governance, and the escalating collapse of traditional protection boundaries. The O‑ESA framework is designed to meet this challenge by simplifying management of this increasingly complex environment. This is accomplished by providing a direct linkage between governance, based on clear and effective business policy, and the security architecture itself.

Our O‑ESA Guide’s security technology architecture framework focuses on automated policy-driven security, where policy instantiation, decision-making, and enforcement are built into the architecture. Because the critical standards and implementing products are still evolving and gaining greater acceptance through implementation experience – in particular, XACML[41] – automated policy-driven security continues to be work-in-progress. Even so, as discussed in detail in Section 6.4, there is a great deal that user organizations can do to better position themselves for future policy automation while at the same time proceeding with an O‑ESA framework that supports partial automation.

7.2 Recommendations

This section provides recommendations to user organizations on ways to effectively utilize O‑ESA. It also provides recommendations to security infrastructure product vendors and standards organizations for supporting the O‑ESA framework and the policy-driven security architecture vision.

7.2.1 Recommendations to User Organizations

Start using O‑ESA as your common reference architecture framework for communication on security architecture topics and issues. Use it within your organization and with others in the security space – business partners, vendors, consultants, and industry groups in which you participate.

Start tailoring the O‑ESA framework and template to the needs of your organization:

  • Tailor the policy framework to your requirements:

—  Define your organization’s policy framework, starting with the high-level, security-related business principles that will drive your tailoring of the ISO/IEC 27001/2 policy template. For more detailed guidance, see Section 3.4 and Section 6.4.

—  Define the detailed management and technical standards, guidelines, and procedures required to implement the policy framework.

—  Implement policy instantiation through processes that minimize manual configuration of decision and enforcement end-points to the extent possible. Utilize well-defined configuration checklists from NIST or other sources to facilitate centralized configuration definition for each architectural variant. Implement processes to push proven definitions out to the end-points and ensure that the end-points are kept in synch.

—  Implement policy enforcement through code procedures that are built into the policy decision and enforcement points, or separate processes where appropriate, recognizing that they will have to be re-engineered as you move to automated policy instantiation and enforcement products.

  • Tailor the technology architecture and operations to your requirements, and populate them as required to meet the needs of your organization.
  • Make sure that you have a quality identity management source and unique-identity strategy in place as a starting point.

Assess business drivers for policy automation products that could further automate both the instantiation and enforcement of policy within a particular context. If the business drivers are in place and a reputable standards-based product[42] is available, don’t wait – begin incremental implementation so that you can gain hands-on experience with the technology. But, do it in the context of your tailored version of both the O‑ESA framework and the specific components such as identity management and border protection that are within the scope of your project. Tailor the architecture to your needs and start building it incrementally around the identified business drivers and products you select for your project. See Section 6.4 for further guidance on how to better position your organization for policy automation.

Through your procurement processes encourage O‑ESA vendors to embrace standards-based interoperability and to participate in development and adoption of standards that support the policy automation vision.

7.2.2 Recommendations to Vendors and Standards Organizations

Start utilizing this O‑ESA Guide as a common reference for semantics and terminology around policy-driven security architecture and the enterprise security architecture framework in general. Adopting the terminology used in this document to describe your products and strategies will be valuable to customers and potential customers as they sort through the options offered in the marketplace.

Participate in the development and adoption of standards that support the policy automation vision. For additional detail on the automation vision, models, and roadmap, see Chapter 6.

Key standards in the policy-driven security arena include:

  • CIM: The Common Information Model is a conceptual information model for describing management that is not bound to a particular implementation.
  • DCML: The Data Center Markup Language is an emerging standard for describing the computing environment to be managed.
  • ISO/IEC 27001/2: International standards for information security management that are now established as the global standard for managing information security.
  • LDAP: The standards-based means for accessing identity authentication and authorization data and related policy data that is stored in an X.500 directory.
  • SAML: The Security Assertion Markup Language provides the standards-based means for communication of identity, attributes, and authorization decisions related to initiators and targets.
  • SNMP: The Simple Network Management Protocol is the most pervasive management standard today, broadly supported and used for network configuration management.
  • WBEM: Web-Based Enterprise Management is a set of management and Internet standard technologies developed to unify management of enterprise computing environments.
  • WS-Policy: An emerging standard for specifying initiator/target interaction policies in a web services environment (for example, the initiator must authenticate using X.509 certificate or using SAML).
  • X.509: X.509 is the fundamental public key infrastructure-based technology critical for establishing identities and secure, trusted communications between the components. In many cases, X.509 certificates may be used in place of SAML assertions to provide initiator identity.
  • XACML: The eXtensible Access Control Markup Language provides the standards-based means to specify and communicate access control policy. XACML is used for communication of policy between the policy repository and the PDP.

Product vendors should consider the opportunities afforded by policy-based security architecture in general, and by the automated policy instantiation and enforcement vision in particular. The products that will thrive are those based on open standards and a common vision, with product differentiation based on interoperability features, standards-based functionality, performance, and reliability rather than proprietary management features that attempt to lock users into a particular vendor silo.

A Glossary of Resources

A.1     Security Governance Resources and Tools

Policy Development Tools

ISO/IEC 27002:2005, the Code of Practice for Information Security Management, is now an established international standard that is widely implemented. In addition to the ISO/IEC 27001/2 standards, other resources such as the ISO/IEC 27001/2 toolkit (http://www.iso27001/2-made-easy.com) are also available. The toolkit contains:

  • Both parts of the ISO/IEC 27001/2 standards[43]
  • A full set of ISO/IEC 27001/2-compliant information security policies
  • A management presentation on ISO/IEC 27001/2 and BS 7799-2 in PowerPoint format
  • A disaster recovery planning kit (related to ISO/IEC 27001/2 §11)
  • A roadmap for certification
  • An audit kit (checklists, etc.) for a modern network system (§12)
  • A comprehensive glossary of information security and computer terms
  • A business impact analysis questionnaire

There are other standards and toolkits available outside the ISO model. The most notable is the NIST SP 800-56 model. This is a free toolkit but has been viewed by some as more complex and more cumbersome to apply. For more information, visit http://csrc.nist.gov/.

For other policy examples and templates, visit the SANS Security Policy Project web site at www.sans.org/resources/policies. This is a consensus research project of the SANS community whose ultimate goal is to offer rapid development and implementation of information security policies. However, this is an incomplete policy template set.

Metrics

The NIST SP 800-55: Security Measurement Guide for Information Security: “… provides an approach to help management decide where to invest in additional security protection resources or identify and evaluate non-productive controls. It explains the metric development and implementation process and how it can also be used to adequately justify security control investments. The results of an effective metric program can provide useful data for directing the allocation of information security resources and should simplify the preparation of performance-related reports.”

The more recent Information Security Management Maturity Model (O-ISM3)[44] is The Open Group framework for managing information security and, more widely, for managing information in any other context. It aims to ensure that security processes in any organization are implemented so as to operate at a level consistent with that organization's business requirements. O-ISM3 is technology-neutral. It defines a comprehensive but manageable number of information security processes sufficient for the needs of most organizations, with the relevant security control(s) being identified within each process as an essential subset of that process. In this respect, it is fully compatible with the well-established ISO/IEC 27000:2009, COBIT, and ITIL standards in this field. Key features of O-ISM3 include:

  • It complements the TOGAF model for enterprise architecture.
  • It defines operational metrics and their allowable variances.
  • It provides a framework for building, tailoring, and operating an Information Security Management System (ISMS).
  • It uses metrics to ensure that the management system uses objective quantitative criteria to inform business decisions on allocating IT security resources efficiently and responding to changes. The beneficial outcomes for information security are lower risk and better Return on Investment (RoI).
  • It defines maturity in terms of the operation of key security processes. Capability is defined in terms of the metrics and management practices used.
  • It requires security objectives and targets to be derived from business objectives, and promotes the formal measurement of effectiveness of each security management process.

Organizations in different business sectors and countries have different business requirements and risk tolerances. The O-ISM3 framework helps information security managers to evaluate their own operating environment and to plan their security management processes so they are consistent with and cost-effective for their organization’s business objectives.
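As an illustration of the operational metrics and allowable variances listed above, here is a minimal sketch in Python. The process name, target, and tolerance are hypothetical; O-ISM3 defines the processes and metric practices, not these particular values.

    # Hypothetical O-ISM3-style process metric with an allowable variance.
    from dataclasses import dataclass

    @dataclass
    class ProcessMetric:
        name: str         # security management process being measured
        target: float     # target value derived from business objectives
        tolerance: float  # allowable variance around the target

        def in_control(self, observed: float) -> bool:
            """True when the observed value lies within the allowable variance."""
            return abs(observed - self.target) <= self.tolerance

    patching = ProcessMetric("Patch deployment time (days)", target=7.0, tolerance=2.0)
    for observed in (6.5, 8.9, 12.0):
        state = "in control" if patching.in_control(observed) else "out of variance"
        print(f"{patching.name}: observed {observed} -> {state}")

An out-of-variance result is the trigger for a management response, which is how the framework ties objective quantitative criteria to business decisions.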

A.2     NIST References for O-ESA Implementation

  • Telecommuting Guidance: SP 800-46: Security for Telecommuting and Broadband Communications, September 2002.
  • Patch Management Guidance: SP 800-40: Procedures for Handling Security Patches, September 2002.
  • Incident Handling: SP 800-61: Computer Security Incident Handling Guide, January 2004.
  • Network Security Testing Guidance: SP 800-42: Guideline on Network Security Testing, October 2003.
  • Web Server Security Guidance: SP 800-44: Guidelines on Securing Public Web Servers, September 2002.
  • E-Mail Security Guidance: SP 800-45: Guidelines on Electronic Mail Security, September 2002.
  • W2K Administration Guidance: SP 800-43: Systems Administration Guidance for Windows 2000 Professional, November 2002.
  • Self-Assessment Tool: Automated Security Self-Evaluation Tool (ASSET). The purpose of ASSET is to automate the completion of the questionnaire contained in NIST Special Publication SP 800-26: Security Self-Assessment Guide for Information Technology Systems, November 2001.

Other more recent NIST publications:

  • Draft NIST Special Publication 800-68: Guidance for Securing Microsoft Windows XP Systems for IT Professionals: A NIST Security Configuration Checklist.
  • Draft NIST Special Publication 800-70: The NIST Security Configuration Checklists Program.
  • Draft NIST Special Publication 800-72: Guidelines on PDA Forensics.
  • Draft Special Publication 800-66: An Introductory Resource Guide for Implementing the Health Insurance Portability and Accountability Act (HIPAA) Security Rule.

For up-to-date information, see the NIST Computer Security Division Special Publications pages at csrc.nist.gov.

Appendix B: Security Architecture Checklist

This Security Architecture Checklist facilitates the process of overseeing the many complex decisions, technologies, and processes involved in deploying and managing security services across the enterprise. Since the most important question to ask in security is often: “What are you securing?”, this checklist uses the Attack Surface concept (Data, Method, and Channel) to enumerate which security services provide security to which assets.

The checklist is a matrix with the columns Domain, Security Service, and Attack Surface, where Attack Surface is subdivided into Data, Method, and Channel. For each security service listed below, the Data, Method, and Channel columns are completed to record which elements of the attack surface the service protects for the assets in question. A small data-structure sketch of the matrix follows the list.

Domain: Security Technology Architecture

  • Policy Enforcement Points
  • Policy Decision Points
  • User/Identity Administration Services
  • Access Provisioning Services
  • Directory Services
  • Meta-Directory/Virtual Directory Services
  • Federated Identity Services
  • Border Protection Services
  • VPN Services
  • Proxy Services
  • Application Proxy Services
  • Access Management Services
  • Authentication Services
  • Authorization Services
  • Intrusion Detection Services
  • Anomaly Detection Services
  • Vulnerability Assessment Services
  • Logging Services
  • Virtualization
  • Anti-Virus Services
  • Anti-Spam Services
  • Data Loss Protection
  • Enterprise Rights Management
  • Content Inspection Services
  • Auditing Services
  • Cryptographic Services
  • PKI Services
  • Digital Signature Services

Domain: Security Operations

  • Asset Management
  • Security Event Management
  • Security Administration
  • Security Compliance
  • Vulnerability Management
  • Event Management
  • Incident Management
  • Testing Security Architecture
  • Security Metrics
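The checklist also lends itself to simple automation. Below is a minimal sketch in Python of the matrix as a data structure, with a helper that reports attack-surface elements for which no coverage has been recorded; the sample rows and coverage sets are hypothetical.

    # Hypothetical representation of the checklist matrix: each
    # (domain, security service) row maps to the set of attack-surface
    # elements (Data, Method, Channel) it covers for a given asset.

    SURFACE = ("Data", "Method", "Channel")

    checklist = {
        ("Security Technology Architecture", "Authentication Services"): {"Method", "Channel"},
        ("Security Technology Architecture", "Cryptographic Services"):  {"Data", "Channel"},
        ("Security Operations", "Incident Management"):                  {"Data", "Method", "Channel"},
    }

    def gaps(entries):
        """Yield (domain, service, missing elements) for incomplete rows."""
        for (domain, service), covered in entries.items():
            missing = [s for s in SURFACE if s not in covered]
            if missing:
                yield domain, service, missing

    for domain, service, missing in gaps(checklist):
        print(f"{domain} / {service}: no coverage recorded for {', '.join(missing)}")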


Glossary

Many of the definitions in this glossary are taken from NIST SP 800-33: Underlying Technical Models for Information Technology Security, December 2001.

Access Control
Enable authorized use of a resource while preventing unauthorized use or use in an unauthorized manner.

Accountability
The security objective that generates the requirement for actions of an entity to be traced uniquely to that entity. This supports non-repudiation, deterrence, fault isolation, intrusion detection and prevention, and after-action recovery and legal action.

Assurance
Grounds for confidence that the other four security objectives (integrity, availability, confidentiality, and accountability) have been adequately met by a specific implementation. “Adequately met” includes (1) functionality that performs correctly, (2) sufficient protection against unintentional errors (by users or software), and (3) sufficient resistance to intentional penetration or by-pass.

Authentication
Verifying the identity of a user, process, or device, often as a prerequisite to allowing access to resources in a system.

Authorization
The granting or denying of access rights to a user, program, or process.

Availability
The security objective that generates the requirement for protection against intentional or accidental attempts to (1) perform unauthorized deletion of data or (2) otherwise cause a denial of service or data.

Confidentiality
The security objective that generates the requirement for protection from intentional or accidental attempts to perform unauthorized data reads. Confidentiality covers data in storage, during processing, and while in transit.

Data Integrity
The property that data has not been altered in an unauthorized manner. Data integrity covers data in storage, during processing, and while in transit.

Data Origin Authentication
The verification that the source of data received is as claimed.

Denial of Service
The prevention of authorized access to resources or the delaying of time-critical operations.

Domain
See Security Domain.

Entity
Either a subject (an active element that operates on information or the system state) or an object (a passive element that contains or receives information).

Guideline
An enterprise-wide recommended course of action. While not mandatory, it is highly encouraged that guidelines be reviewed for applicability to particular environments, and implemented as appropriate for the business environment. Guidelines support the policy and the standards.

Identity
Information that is unique within a security domain and is recognized as denoting a particular entity within that domain.

Identity-based Security Policy
A security policy based on the identities and/or attributes of the object (system resource) being accessed and of the subject (user, group of users, process, or device) requesting access.

Incident
A violation or imminent threat of violation of computer security policies, acceptable use policies, or standard computer security practices. These include (but are not limited to):

  • Attempts (failed or successful) to gain unauthorized access to a system or its data
  • Unwanted disruption or denial of service
  • The unauthorized use of a system for the processing or storage of data
  • Changes to system hardware, firmware, or software characteristics without the owner's knowledge, instruction, or consent

Integrity
The security objective that generates the requirement for protection against either intentional or accidental attempts to violate data integrity (the property that data has not been altered in an unauthorized manner) or system integrity (the quality that a system has when it performs its intended function in an unimpaired manner, free from unauthorized manipulation).

IT-related Risk
The net mission/business impact (probability of occurrence combined with impact) from a particular threat source exploiting, or triggering, a particular information technology vulnerability. IT-related risks arise from legal liability or mission/business loss due to:

  • Unauthorized (malicious, non-malicious, or accidental) disclosure, modification, or destruction of information
  • Non-malicious errors and omissions
  • IT disruptions due to natural or man-made disasters
  • Failure to exercise due care and diligence in the implementation and operation of the IT

IT Security Architecture
A description of security principles and an overall approach for complying with the principles that drive the system design; i.e., guidelines on the placement and implementation of specific security services within various distributed computing environments.

Policy
A broad statement authorizing a course of action to enforce the organization’s guiding principles for a particular control domain. Policies are interpreted and supported by standards, guidelines, and procedures. Policies are intended to be long-term and guide the development of rules to address specific situations.

Principles
In this document, the agreed-upon set of security principles that govern the use and management of technology across an organization. They are derived from a combination of (1) basic assumptions and beliefs that reflect the organization’s mission, values, and experience; and (2) business, legal, and technical principles that drive the enterprise.

Procedure
A procedure provides instructions describing how to achieve a policy or standard. A procedure establishes and defines the process whereby a business unit complies with the policies or standards of the enterprise.

Risk
Within this document, the term is synonymous with “IT-related risk”.

Risk Analysis
The process of identifying the risks to system security and determining the probability of occurrence, the resulting impact, and the additional safeguards that mitigate this impact. It is part of risk management and synonymous with risk assessment.

Risk Assessment
See Risk Analysis.

Risk Management
The total process of identifying, controlling, and mitigating IT-related risks. It includes risk analysis; cost-benefit analysis; and the selection, implementation, test, and security evaluation of safeguards. This overall system security review considers both effectiveness and efficiency, including impact on the mission/business and constraints due to policy, regulations, and laws.

Rule-based Security Policy
A security policy based on global rules imposed for all subjects. These rules usually rely on a comparison of the sensitivity of the objects being accessed and the possession of corresponding attributes by the subjects requesting access.
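To make the distinction between identity-based and rule-based policies concrete, the following is a minimal sketch in Python; the subjects, objects, sensitivity labels, and rules are hypothetical.

    # Identity-based: the decision keys off the identities/attributes of the
    # particular subject and object (here, a simple access control list).
    acl = {"payroll-db": {"alice", "payroll-app"}}

    def identity_based(subject: str, obj: str) -> bool:
        return subject in acl.get(obj, set())

    # Rule-based: one global rule compares the sensitivity of the object with
    # the clearance attribute possessed by the subject, whoever that is.
    LEVELS = {"public": 0, "internal": 1, "confidential": 2}

    def rule_based(subject_clearance: str, object_sensitivity: str) -> bool:
        return LEVELS[subject_clearance] >= LEVELS[object_sensitivity]

    print(identity_based("alice", "payroll-db"))   # True: alice is explicitly listed
    print(rule_based("internal", "confidential"))  # False: clearance below sensitivity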

Security
A system property, and much more than a set of functions and mechanisms. IT security is a system characteristic as well as a set of mechanisms that span the system both logically and physically.

Security Domain
A set of subjects, their information objects, and a common security policy.

Security Goal
The IT security goal is to enable an organization to meet all mission/business objectives by implementing systems with due care and consideration of IT-related risks to the organization, its partners, and its customers.

Security Objectives
The five security objectives are availability, integrity, confidentiality, accountability, and assurance.

Security Policy
The statement of required protection of the information objects.

Standard
An enterprise-wide, mandatory directive that specifies a particular course of action. Standards support the policy and outline a minimum baseline for policy compliance.

System Integrity
The quality that a system has when it performs its intended function in an unimpaired manner, free from unauthorized manipulation of the system, whether intentional or accidental.

Threat
The potential for a “threat source” (defined below) to exploit (intentional) or trigger (accidental) a specific vulnerability.

Threat Source
Either (1) intent and method targeted at the intentional exploitation of a vulnerability or (2) the situation and method that may accidentally trigger a vulnerability.

Threat Analysis
The examination of threat sources against system vulnerabilities to determine the threats for a particular system in a particular operational environment.

Vulnerability
A weakness in system security procedures, design, implementation, internal controls, etc. that could be accidentally triggered or intentionally exploited and could result in a violation of the system’s security policy.


Footnotes

[1] Network Applications Consortium – merged in 2007 into membership of The Open Group Security Forum – refer to: www.opengroup.org/projects/sec-arch.

[2] This premise was shared by many others in the industry, including The Open Group’s Security Forum.

[3] The full report is available at www.cyberpartnership.org/init-governance.html.

[4] This VEN Security Model is described in Securing the Virtual Enterprise Network: Layered Defenses, Coordinated Policies.

[5] XACML (Extensible Access Control Markup Language) is an OASIS standard. It is a declarative access control policy language implemented in XML and a processing model, describing how to interpret the policies.

[6] NIST SP 800-27: Engineering Principles for Information Technology Security (A Baseline for Achieving Security).

[7] Security at Microsoft, Technical White Paper, Published: November 2003.

[8] A component failure should result in no access being granted, as opposed to a failure leaving the system open to accidental or intentional access.

[9] Cyber Security and Control System Survivability by Howard Lipson, 2005.

[10] The European Commission published “Internet of Things – An Action Plan for Europe”; refer to http://ec.europa.eu/information_society/policy/rfid/documents/commiot2009.pdf.

[11] ISO/IEC 27002 is available from many sources, but usually at commercial cost to enterprises. For enterprises seeking something more freely downloadable, the US government, via their NIST Computer Security Division, has an extensive online library of publications at csrc.nist.gov/index.html.

[12] Refer to Problems with XACML and their Solutions by Travis Spencer where he expands on three areas in XACML Version 2.0 that are generally accepted as impeding its mass adoption: (1) The wire is not defined. (2) The attributes describing the subject presented to the PDP are not cryptographically bound to a trusted identity provider (IdP). (3) The policy authoring requires high-level technical expertise.

[13] It should not be assumed that all security policy will be represented electronically. Some policy is of a management nature and will be implemented primarily as management standards. Other policy is of a technical nature and will be implemented primarily as technical standards. It is the latter that will be represented electronically, technology permitting. Policies that coordinate and define the interaction of other technical policies (policies about policy prioritization or conflict resolution) may be difficult to represent electronically, but even these can be addressed by detecting conflicts and ensuring that they are surfaced to the appropriate authority.

[14] For a definition of policy domains and their applications, see the Burton Group (now merged into Gartner) VEN Security Model, as described in Securing the Virtual Enterprise Network: Layered Defenses, Coordinated Policies.

[15] Policy repository is being used in the broad sense, to encompass any type of policy store, including configuration files residing, for example, on a general-purpose or special-purpose server or appliance.

[16] Many refer to the assignment of this kind of data as Information or Content Tagging – the act of attributing (tagging) content via metadata to facilitate any or all of the following: information protection (confidentiality, export control classification), information management (identity, version, ownership, valid dates, etc.), or information retrieval (subject/taxonomy, business object type, etc.).

[17] For more complete background information on the topic, see the Burton Group’s Enterprise Identity Management: It’s about the Business, 2003.

[18] An alternative is to use the Extensible Authentication Protocol (EAP), possibly in conjunction with proprietary vendor features, to sufficiently secure the wireless infrastructure and associated end-points. EAP is a general protocol for authentication that also supports multiple authentication methods, such as token cards, Kerberos, one-time passwords, certificates, public key authentication, and smart cards. IEEE 802.1x specifies how EAP should be encapsulated in LAN frames; refer to www.ieee802.org.

[19] The goal is a resilient design that adapts to attacks or disasters in reasonable ways. Resilience is the property often associated with disaster recovery processes, defenses against denial of service attacks, fallback regimes used for restoration of critical services, and similar approaches to assuring availability.

[20] Logging in the Age of Web Services, by Anton Chuvakin and Gunnar Peterson, May 2009.

[21] Introduction to XDAS; see Referenced Documents.

[22] At the time of publication of this ESA Guide, the XDAS event record format is being updated to meet today’s more stringent audit industry demands. It will be published as XDAS Version 2.

[23] The IT security goal is to enable an organization to meet all mission/business objectives by implementing systems with due care and consideration of IT-related risks to the organization, its partners, and its customers.

[24] The five security objectives are availability, integrity, confidentiality, accountability, and assurance.

[25] Uncover Security Design Flaws Using the STRIDE Approach, by Shawn Hernan, Scott Lambert, Tomasz Ostwald, and Adam Shostack; refer to: msdn.microsoft.com/en-us/magazine/cc163519.aspx.

[26] Attack Surface Measurement and Attack Surface Reduction, by Pratyusa K. Manadhata and Jeannette M. Wing.

[27] For example, based on design principles any component that controls access to resources should be tested to ensure that it does not fail open (i.e., it fails in such a way that no access is granted).

[28] “An incident refers to a computer security problem arising from a threat. Computer security incidents can range from a single virus occurrence to an intruder attacking many networked systems, or such things as unauthorized access to sensitive data and loss of mission-critical data.” (NIST)

[29] Refer to Putting the Tools to Work: How to Succeed with Source Code Analysis available at: www.cigital.com/papers/download/j3bsi.pdf.

[30] Technical Standard: Risk Taxonomy (C081), January 2009, published by The Open Group.

[31] The Building Security in Maturity Model (BSIMM, pronounced “bee simm”) is designed to help you understand, measure, and plan a software security initiative; see http://bsimm2.com/.

[32] The Open Group Technical Standard for Information Security Management Maturity Model (O-ISM3).

[33] A Few Good Metrics, CSO Magazine, 2005; refer to: www.csoonline.com/read/070105/metrics.html.

[34] HIPAA is the acronym for the Health Insurance Portability and Accountability Act of 1996, which mandated the establishment of national standards to protect electronic health information.

[35] The term “technical standard” used in this context refers to the standards that implement an organization’s security policies, as identified in an organization-specific policy template. It should not be confused with de jure or de facto industry standards.

[36] Although configuration and policy repositories may be distinct, they interact. For example, policy will often regulate what software gets installed and how configuration parameters are defined for a particular device/service type. Policy may also vary for the same device/service type based solely on the environment in which the particular device/service instance is operating at a particular point in time.

[37] The monitor and control functions represent a standard interface between management systems and managed systems. Monitor is used to determine current state (may be requested or asynchronously reported, depending on implementation). Control is used to update the current state.

[38] For background information, see Section 4.2 and Section 4.3.

[39] The Extensible Access Control Markup Language (XACML) is an XML-based language, or schema, designed specifically for creating policies and automating their use to control access to disparate devices and applications on a network.

[40] Content tagging is the act of attaching attribute values to digital content via a variety of mechanisms including standard data attributes and metadata. In the context of this document, it is the data (about the target resource) required in order to make and enforce policy decisions to grant or deny requests of the target resource.

[41] See Section 3.6.3.

[42] As suggested in Section 6.4, proprietary management products are available in some security service and product domains that facilitate automation across a particular vendor’s product set.

[43] ISO/IEC 27001 is the specification for information security management systems, also known as BS 7799-2:2005.

[44] The ISM3 Technical Standard is available from The Open Group Security publications, listed at: www.opengroup.org/bookstore/catalog/se.htm.
