Internet Engineering Task Force (IETF)                     D. Waltermire
Request for Comments: 7632                                          NIST
Category: Informational                                    D. Harrington
ISSN: 2070-1721                                       Effective Software
                                                          September 2015


        Endpoint Security Posture Assessment: Enterprise Use Cases

Abstract

   This memo documents a sampling of use cases for securely aggregating
   configuration and operational data and evaluating that data to
   determine an organization's security posture.  From these operational
   use cases, we can derive common functional capabilities and
   requirements to guide development of vendor-neutral, interoperable
   standards for aggregating and evaluating data relevant to security
   posture.

Status of This Memo

   This document is not an Internet Standards Track specification; it is
   published for informational purposes.

   This document is a product of the Internet Engineering Task Force
   (IETF).  It represents the consensus of the IETF community.  It has
   received public review and has been approved for publication by the
   Internet Engineering Steering Group (IESG).  Not all documents
   approved by the IESG are a candidate for any level of Internet
   Standard; see Section 2 of RFC 5741.

   Information about the current status of this document, any errata,
   and how to provide feedback on it may be obtained at
   http://www.rfc-editor.org/info/rfc7632.

Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .   2
   2.  Endpoint Posture Assessment . . . . . . . . . . . . . . . . .   3
     2.1.  Use Cases . . . . . . . . . . . . . . . . . . . . . . . .   5
       2.1.1.  Define, Publish, Query, and Retrieve Security
               Automation Data . . . . . . . . . . . . . . . . . . .   5
       2.1.2.  Endpoint Identification and Assessment Planning . . .   9
       2.1.3.  Endpoint Posture Attribute Value Collection . . . . .  10
       2.1.4.  Posture Attribute Evaluation  . . . . . . . . . . . .  11
     2.2.  Usage Scenarios . . . . . . . . . . . . . . . . . . . . .  12
       2.2.1.  Definition and Publication of Automatable
               Configuration Checklists  . . . . . . . . . . . . . .  12
       2.2.2.  Automated Checklist Verification  . . . . . . . . . .  13
       2.2.3.  Detection of Posture Deviations . . . . . . . . . . .  16
       2.2.4.  Endpoint Information Analysis and Reporting . . . . .  17
       2.2.5.  Asynchronous Compliance/Vulnerability Assessment at
               Ice Station Zebra . . . . . . . . . . . . . . . . . .  17
       2.2.6.  Identification and Retrieval of Guidance  . . . . . .  19
       2.2.7.  Guidance Change Detection . . . . . . . . . . . . . .  20
   3.  Security Considerations . . . . . . . . . . . . . . . . . . .  21
   4.  Informative References  . . . . . . . . . . . . . . . . . . .  21
   Acknowledgements  . . . . . . . . . . . . . . . . . . . . . . . .  22
   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . .  22

1.  Introduction

   This document describes the core set of use cases for endpoint
   posture assessment for enterprises.  It provides a discussion of
   these use cases and associated building-block capabilities.  The
   described use cases support:

   o  securely collecting and aggregating configuration and operational
      data, and

   o  evaluating that data to determine the security posture of
      individual endpoints.
   Additionally, this document describes a set of usage scenarios that
   provide examples for using the use cases and associated building
   blocks to address a variety of operational functions.  These
   operational use cases and related usage scenarios cross many IT
   security domains.  The use cases enable the derivation of common:

   o  concepts that are expressed as building blocks in this document,

   o  characteristics to inform development of a requirements document,

   o  information concepts to inform development of an information model
      document, and

   o  functional capabilities to inform development of an architecture
      document.

   Together, these ideas will be used to guide development of vendor-
   neutral, interoperable standards for collecting, aggregating, and
   evaluating data relevant to security posture.

   Using this standard data, tools can analyze the state of endpoints as
   well as user activities and behaviour, and evaluate the security
   posture of an organization.  Common expression of information should
   enable interoperability between tools (whether customized,
   commercial, or freely available), and the ability to automate
   portions of security processes to gain efficiency, react to new
   threats in a timely manner, and free up security personnel to work on
   more advanced problems.

   The goal is to enable organizations to make informed decisions that
   support organizational objectives, to enforce policies for hardening
   systems, to prevent network misuse, to quantify business risk, and to
   collaborate with partners to identify and mitigate threats.

   It is expected that use cases for enterprises and for service
   providers will largely overlap.  When considering this overlap, there
   are additional complications for service providers, especially in
   handling information that crosses administrative domains.
   The output of endpoint posture assessment is expected to feed into
   additional processes, such as policy-based enforcement of acceptable
   state, verification and monitoring of security controls, and
   compliance with regulatory requirements.

2.  Endpoint Posture Assessment

   Endpoint posture assessment involves orchestrating and performing
   data collection and evaluating the posture of a given endpoint.
   Typically, endpoint posture information is gathered and then
   published to appropriate data repositories to make collected
   information available for further analysis supporting organizational
   security processes.

   Endpoint posture assessment typically includes:

   o  collecting the attributes of a given endpoint;

   o  making the attributes available for evaluation and action; and

   o  verifying that the endpoint's posture is in compliance with
      enterprise standards and policy.

   As part of these activities, it is often necessary to identify and
   acquire any supporting security automation data that is needed to
   drive and feed data collection and evaluation processes.

   The following is a typical workflow scenario for assessing endpoint
   posture:

   1.  Some type of trigger initiates the workflow.  For example, an
       operator or an application might trigger the process with a
       request, or the endpoint might trigger the process using an
       event-driven notification.

   2.  An operator/application selects one or more target endpoints to
       be assessed.

   3.  An operator/application selects which policies are applicable to
       the targets.

   4.  For each target:

       A.  The application determines which (sets of) posture
           attributes need to be collected for evaluation.
           Implementations should be able to support (possibly mixed)
           sets of standardized and proprietary attributes.

       B.  The application might retrieve previously collected
           information from a cache or data store, such as a data store
           populated by an asset management system.

       C.  The application might establish communication with the
           target, mutually authenticate identities and authorizations,
           and collect posture attributes from the target.

       D.  The application might establish communication with one or
           more intermediaries or agents, which may be local or
           external.  When establishing connections with an
           intermediary or agent, the application can mutually
           authenticate their identities and determine authorizations,
           and collect posture attributes about the target from the
           intermediaries or agents.

       E.  The application communicates target identity and (sets of)
           collected attributes to an evaluator, which is possibly an
           external process or external system.

       F.  The evaluator compares the collected posture attributes with
           expected values as expressed in policies.

       G.  The evaluator reports the evaluation result for the
           requested assessment, in a standardized or proprietary
           format, such as a report, a log entry, a database entry, or
           a notification.

2.1.  Use Cases

   The following subsections detail specific use cases for assessment
   planning, data collection, analysis, and related operations
   pertaining to the publication and use of supporting data.  Each use
   case is defined by a short summary containing a simple problem
   statement, followed by a discussion of related concepts, and a
   listing of associated building blocks that represent the
   capabilities needed to support the use case.  These use cases and
   building blocks identify separate units of functionality that may be
   supported by different components of an architectural model.

2.1.1.  Define, Publish, Query, and Retrieve Security Automation Data

   This use case describes the need for security automation data to be
   defined and published to one or more data stores, as well as queried
   and retrieved from these data stores for the explicit use of posture
   collection and evaluation.  Security automation data is a general
   concept that refers to any data expression that may be generated
   and/or used as part of the process of collecting and evaluating
   endpoint posture.
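   The per-target workflow in Section 2 (steps 4.A through 4.G) can be
   sketched in code.  The following Python sketch is purely
   illustrative and is not part of any SACM specification; every name
   in it (assess, collect, evaluate, and the policy fields) is invented
   for this example.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationResult:
    """Hypothetical result record for step 4.G; field names are illustrative."""
    target: str
    compliant: bool
    details: dict = field(default_factory=dict)

def assess(targets, policies, collect, evaluate, cached=None):
    """Sketch of workflow steps 4.A-4.G for a set of selected targets.

    `collect(target, names)` stands in for steps 4.C/4.D (gathering posture
    attributes from the endpoint or an intermediary); `evaluate` stands in
    for steps 4.E/4.F (comparison against expected values from policy).
    """
    cached = cached or {}
    results = []
    for target in targets:
        # 4.A: determine which posture attributes are needed for evaluation
        needed = {name for policy in policies for name in policy["attributes"]}
        # 4.B: reuse previously collected values from a cache or data store
        attrs = {k: v for k, v in cached.get(target, {}).items() if k in needed}
        # 4.C/4.D: collect the remaining attributes from the target or agents
        attrs.update(collect(target, needed - attrs.keys()))
        # 4.E/4.F: pass target identity and collected attributes to the evaluator
        compliant, details = evaluate(target, attrs, policies)
        # 4.G: report the evaluation result for the requested assessment
        results.append(EvaluationResult(target, compliant, details))
    return results
```

   A real deployment would substitute an actual collection transport
   (e.g., an agent protocol) for `collect` and a policy engine for
   `evaluate`; the loop structure only mirrors the ordering of the
   steps above.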
   Different types of security automation data will generally fall into
   one of three categories:

   Guidance:  Instructions and related metadata that guide the
      attribute collection and evaluation processes.  The purpose of
      this data is to allow implementations to be data-driven, thus
      enabling their behavior to be customized without requiring
      changes to deployed software.

      This type of data tends to change in units of months and days.
      In cases where assessments are made more dynamic, it may be
      necessary to handle changes in the scope of hours or minutes.

      This data will typically be provided by large organizations,
      product vendors, and some third parties.  Thus, it will tend to
      be shared across large enterprises and customer communities.  In
      some cases, access may be controlled to specific authenticated
      users.  In other cases, the data may be provided broadly with
      little to no access control.

      This includes:

      *  Listings of attribute identifiers for which values may be
         collected and evaluated.

      *  Lists of attributes that are to be collected along with
         metadata that includes: when to collect a set of attributes
         based on a defined interval or event, the duration of
         collection, and how to go about collecting a set of
         attributes.

      *  Guidance that specifies how old collected data can be when
         used for evaluation.

      *  Policies that define how to target and perform the evaluation
         of a set of attributes for different kinds or groups of
         endpoints and the assets they are composed of.  In some cases,
         it may be desirable to maintain hierarchies of policies as
         well.

      *  References to human-oriented data that provide technical,
         organizational, and/or policy context.  This might include
         references to best practices documents, legal guidance and
         legislation, and instructional materials related to the
         automation data in question.
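   To make the Guidance category concrete, the sketch below shows what
   a guidance entry carrying these kinds of metadata might look like.
   The structure and every field name are hypothetical, invented for
   this illustration; they are not drawn from any SACM data model or
   existing guidance format.

```python
# Illustrative only: a guidance entry carrying the kinds of metadata
# described above (attribute lists, collection scheduling, maximum data
# age, targeting, and human-oriented references).  Every field name is
# invented for this sketch.
guidance = {
    "id": "guidance:example:router-baseline:3",
    "attributes": [
        # which posture attributes to collect, and when to collect them
        {"name": "os_version", "collect_interval_seconds": 86400},
        {"name": "ssh_config", "collect_on_event": "config-change"},
    ],
    # how old collected data may be and still be usable for evaluation
    "max_data_age_seconds": 3600,
    # which kinds/groups of endpoints this evaluation policy targets
    "applies_to": {"endpoint_role": "border-router"},
    # expected values the evaluator compares collected attributes against
    "expected": {"os_version": {"minimum": "10.2"}},
    # references to human-oriented context (best practices, policy)
    "references": ["https://www.example.com/hardening-guide"],
}

def attributes_to_collect(entry):
    """List the posture attribute names a guidance entry asks collectors for."""
    return [a["name"] for a in entry["attributes"]]
```

   Because such an entry is data rather than code, the collection and
   evaluation behavior it drives can be changed by publishing a new
   entry, without modifying deployed software.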
   Attribute Data:  Data collected through automated and manual
      mechanisms describing organizational and posture details
      pertaining to specific endpoints and the assets that they are
      composed of (e.g., hardware, software, accounts).  The purpose of
      this type of data is to characterize an endpoint (e.g., endpoint
      type, organizationally expected function/role) and to provide
      actual and expected state data pertaining to one or more
      endpoints.  This data is used to determine what posture
      attributes to collect from which endpoints and to feed one or
      more evaluations.

      This type of data tends to change in units of days, minutes, and
      seconds, with posture attribute values typically changing more
      frequently than endpoint characterizations.

      This data tends to be organizationally and endpoint specific,
      with specific operational groups of endpoints tending to exhibit
      similar attribute profiles.  Generally, this data will not be
      shared outside an organizational boundary and will require
      authentication with specific access controls.

      This includes:

      *  Endpoint characterization data that describes the endpoint
         type, organizationally expected function/role, etc.

      *  Collected endpoint posture attribute values and related
         context, including time of collection, tools used for
         collection, etc.

      *  Organizationally defined expected posture attribute values
         targeted to specific evaluation guidance and endpoint
         characteristics.  This allows a common set of guidance to be
         parameterized for use with different groups of endpoints.

   Processing Artifacts:  Data that is generated by, and is specific
      to, an individual assessment process.  This data may be used as
      part of the interactions between architectural components to
      drive and coordinate collection and evaluation activities.  Its
      lifespan will be bounded by the lifespan of the assessment.
      It may also be exchanged and stored to provide historic context
      around an assessment activity so that individual assessments can
      be grouped, evaluated, and reported in an enterprise context.

      This includes:

      *  The identified set of endpoints for which an assessment should
         be performed.

      *  The identified set of posture attributes that need to be
         collected from specific endpoints to perform an evaluation.

      *  The resulting data generated by an evaluation process,
         including the context of what was assessed, what it was
         assessed against, what collected data was used, when it was
         collected, and when the evaluation was performed.

   The information model for security automation data must support a
   variety of different data types as described above, along with the
   associated metadata that is needed to support publication, query,
   and retrieval operations.  It is expected that multiple data models
   will be used to express specific data types requiring specialized or
   extensible security automation data repositories.  The different
   temporal characteristics, access patterns, and access control
   dimensions of each data type may also require different protocols
   and data models to be supported, furthering the potential
   requirement for specialized data repositories.  See [RFC3444] for a
   description and discussion of distinctions between an information
   model and a data model.

   It is likely that additional kinds of data will be identified
   through the process of defining requirements and an architectural
   model.  Implementations supporting this building block will need to
   be extensible to accommodate the addition of new types of data,
   whether proprietary or (preferably) in a standard format.

   The building blocks of this use case are:

   Data Definition:  Security automation data will guide and inform
      collection and evaluation processes.
      This data may be designed by a variety of roles: application
      implementers may build security automation data into their
      applications; administrators may define guidance based on
      organizational policies; operators may define guidance and
      attribute data as needed for evaluation at runtime; and so on.
      Data producers may choose to reuse data from existing stores of
      security automation data and/or may create new data.  Data
      producers may develop data based on available standardized or
      proprietary data models, such as those used for network
      management and/or host management.

   Data Publication:  The capability to enable data producers to
      publish data to a security automation data store for further use.
      Published data may be made publicly available, or access may be
      based on an authorization decision using authenticated
      credentials.  As a result, the visibility of specific security
      automation data to an operator or application may be public,
      enterprise-scoped, private, or controlled within any other scope.

   Data Query:  An operator or application should be able to query a
      security automation data store using a set of specified criteria.
      The result of the query will be a listing matching the query.
      The query result listing may contain publication metadata (e.g.,
      create date, modified date, publisher, etc.) and/or the full
      data, a summary, snippet, or the location to retrieve the data.

   Data Retrieval:  A user, operator, or application acquires one or
      more specific security automation data entries.  The location of
      the data may be known a priori, or may be determined based on
      decisions made using information from a previous query.

   Data Change Detection:  An operator or application needs to know
      when security automation data they are interested in has been
      published to, updated in, or deleted from a security automation
      data store which they have been authorized to access.

   These building blocks are used to enable acquisition of various
   instances of security automation data based on specific data models
   that are used to drive assessment planning (see section 2.1.2),
   posture attribute value collection (see section 2.1.3), and posture
   evaluation (see section 2.1.4).

2.1.2.  Endpoint Identification and Assessment Planning

   This use case describes the process of discovering endpoints,
   understanding their composition, identifying the desired state to
   assess against, and calculating what posture attributes to collect
   to enable evaluation.  This process may be a set of manual,
   automated, or hybrid steps that are performed for each assessment.

   The building blocks of this use case are:

   Endpoint Discovery:  To determine the current or historic presence
      of endpoints in the environment that are available for posture
      assessment.  Endpoints are identified in support of discovery
      using information previously obtained or by using other
      collection mechanisms to gather identification and
      characterization data.  Previously obtained data may originate
      from sources such as network authentication exchanges.

   Endpoint Characterization:  The act of acquiring, through automated
      collection or manual input, and organizing attributes associated
      with an endpoint (e.g., type, organizationally expected
      function/role, hardware/software versions).

   Identify Endpoint Targets:  Determine the candidate endpoint
      target(s) against which to perform the assessment.  Depending on
      the assessment trigger, a single endpoint or multiple endpoints
      may be targeted based on characterized endpoint attributes.
      Guidance describing the assessment to be performed may contain
      instructions or references used to determine the applicable
      assessment targets.  In this case, the Data Query and/or Data
      Retrieval building blocks (see section 2.1.1) may be used to
      acquire this data.

   Endpoint Component Inventory:  To determine what applicable desired
      states should be assessed, it is first necessary to acquire the
      inventory of software, hardware, and accounts associated with the
      targeted endpoint(s).  If the assessment of the endpoint is not
      dependent on these details, then this capability is not required
      for use in performing the assessment.  This process can be
      treated as a collection use case for specific posture attributes.
      In this case, the building blocks for Endpoint Posture Attribute
      Value Collection (see section 2.1.3) can be used.

   Posture Attribute Identification:  Once the endpoint targets and
      their associated asset inventory are known, it is then necessary
      to calculate what posture attributes are required to be collected
      to perform the desired evaluation.  When available, existing
      posture data is queried for suitability using the Data Query
      building block (see section 2.1.1).  Such posture data is
      suitable if it is complete and current enough for use in the
      evaluation.  Any unsuitable posture data is identified for
      collection.  If this is driven by guidance, then the Data Query
      and/or Data Retrieval building blocks (see section 2.1.1) may be
      used to acquire this data.

   At this point, the set of posture attribute values needed for
   evaluation are known, and they can be collected if necessary (see
   section 2.1.3).

2.1.3.  Endpoint Posture Attribute Value Collection

   This use case describes the process of collecting a set of posture
   attribute values related to one or more endpoints.  This use case
   can be initiated by a variety of triggers, including:

   1.  A posture change or significant event on the endpoint.

   2.  A network event (e.g., endpoint connects to a network/VPN,
       specific netflow is detected).

   3.  A scheduled or ad hoc collection task.

   The building blocks of this use case are:

   Collection Guidance Acquisition:  If guidance is required to drive
      the collection of posture attribute values, this capability is
      used to acquire this data from one or more data stores.
      Depending on the trigger, the specific guidance to acquire might
      be known.  If not, it may be necessary to determine the guidance
      to use based on the component inventory or other assessment
      criteria.  The Data Query and/or Data Retrieval building blocks
      (see section 2.1.1) may be used to acquire this guidance.

   Posture Attribute Value Collection:  The accumulation of posture
      attribute values.
This maybebased on collection guidance that is associated with the posture attributes. Once the posture attribute values are collected, theyshared across large enterprises and customer communities. In some cases, access may bepersisted for later use or theycontrolled to specific authenticated users. In other cases, the data may beimmediately used for posture evaluation. 2.1.4. Posture Attribute Evaluationprovided broadly with little to no access control. Thisuse case represents the actionincludes: * Listings ofanalyzing collected postureattribute identifiers for which valuesas part of an assessment. The primary focusmay be collected and evaluated. * Lists ofthis use case isattributes that are tosupport evaluation of actual endpoint state against the expected state selected for the assessment. This use case canbeinitiated bycollected along with metadata that includes: when to collect avarietyset oftriggers including: 1. A posture change or significant eventattributes based onthe endpoint. 2. A network event (e.g., endpoint connects toanetwork/VPN, specific netflow is detected). 3. A scheduleddefined interval orad hoc evaluation task. The building blocksevent, the duration ofthis use case are: Collected Posture Change Detection: An operator or application hascollection, and how to go about collecting amechanismset of attributes. * Guidance that specifies how old collected data can be when used for evaluation. * Policies that define how todetecttarget and perform theavailabilityevaluation ofnew,a set of attributes for different kinds orchanges to existing, posture attribute values. The timelinessgroups ofdetection may vary from immediate to on-demand. Having the ability to filter what changes are detected will allow the operator to focus onendpoints and thechanges thatassets they arerelevantcomposed of. In some cases, it may be desirable totheir use and will enable evaluationmaintain hierarchies of policies as well. 
* References tooccur dynamically based on detected changes. Posture Attribute Value Query: If previously collected posture attribute values are needed, the appropriatehuman-oriented datastores are queriedthat provide technical, organizational, and/or policy context. This might include references to: best practices documents, legal guidance and legislation, and instructional materials related toretrieve them usingthe automation data in question. Attribute Data: DataQuery building block (see section 2.1.1). If allcollected through automated and manual mechanisms describing organizational and postureattribute values are provided directly for evaluation, then this capability may not be needed. Evaluation Guidance Acquisition: If guidance is requireddetails pertaining todrivespecific endpoints and theevaluationassets that they are composed of (e.g., hardware, software, accounts). The purpose ofposture attributes values,thiscapabilitytype of data isusedtoacquire thischaracterize an endpoint (e.g., endpoint type, organizationally expected function/role) and to provide actual and expected state datafrompertaining to one or moresecurity automationendpoints. This datastores. Depending on the trigger, the specific guidance to acquire might be known. If not, it may be necessaryis used to determinethe guidancewhat posture attributes touse based on the component inventorycollect from which endpoints and to feed one orother assessment criteria. The Data Query and/or Data Retrieval building blocks (see section 2.1.1) may be usedmore evaluations. This type of data tends toacquire this guidance. Posture Attribute Evaluation: The comparisonchange in units of days, minutes, and seconds, with posture attribute valuesagainst their expected values as expressed in the specified guidance. The result of this comparison is output as a set of posture evaluation results. Such results include metadata requiredtypically changing more frequently than endpoint characterizations. 
This data tends to be organizationally and endpoint specific, with specific operational groups of endpoints tending to exhibit similar attribute profiles. Generally, this data will not be shared outside an organizational boundary and will require authentication with specific access controls. This includes:

*  Endpoint characterization data that describes the endpoint type, organizationally expected function/role, etc.

*  Collected endpoint posture attribute values and related context including: time of collection, tools used for collection, etc.

*  Organizationally defined expected posture attribute values targeted to specific evaluation guidance and endpoint characteristics. This allows a common set of guidance to be parameterized for use with different groups of endpoints.

Processing Artifacts: Data that is generated by, and is specific to, an individual assessment process. This data may be used as part of the interactions between architectural components to drive and coordinate collection and evaluation activities. Its lifespan will be bounded by the lifespan of the assessment.
It may also be exchanged and stored to provide historic context around an assessment activity so that individual assessments can be grouped, evaluated, and reported in an enterprise context. This includes:

*  The identified set of endpoints for which an assessment should be performed.

*  The identified set of posture attributes that need to be collected from specific endpoints to perform an evaluation.

*  The resulting data generated by an evaluation process including the context of what was assessed, what it was assessed against, what collected data was used, when it was collected, and when the evaluation was performed.

The information model for security automation data must support a variety of different data types as described above, along with the associated metadata that is needed to support publication, query, and retrieval operations. It is expected that multiple data models will be used to express specific data types, requiring specialized or extensible security automation data repositories. The different temporal characteristics, access patterns, and access control dimensions of each data type may also require different protocols and data models to be supported, furthering the potential requirement for specialized data repositories.
See [RFC3444] for a description and discussion of distinctions between an information and data model. It is likely that additional kinds of data will be identified through the process of defining requirements and an architectural model. Implementations supporting this building block will need to be extensible to accommodate the addition of new types of data, whether proprietary or (preferably) using a standard format.

The building blocks of this use case are:

Data Definition: Security automation data will guide and inform collection and evaluation processes. This data may be designed by a variety of roles -- application implementers may build security automation data into their applications; administrators may define guidance based on organizational policies; operators may define guidance and attribute data as needed for evaluation at runtime; and so on. Data producers may choose to reuse data from existing stores of security automation data and/or may create new data. Data producers may develop data based on available standardized or proprietary data models, such as those used for network management and/or host management.

Data Publication: The capability to enable data producers to publish data to a security automation data store for further use.
Published data may be made publicly available or access may be based on an authorization decision using authenticated credentials. As a result, the visibility of specific security automation data to an operator or application may be public, enterprise-scoped, private, or controlled within any other scope.

Data Query: An operator or application should be able to query a security automation data store using a set of specified criteria. The result of the query will be a listing matching the query. The query result listing may contain publication metadata (e.g., create date, modified date, publisher, etc.) and/or the full data, a summary, snippet, or the location to retrieve the data.

Data Retrieval: A user, operator, or application acquires one or more specific security automation data entries. The location of the data may be known a priori, or may be determined based on decisions made using information from a previous query.

Data Change Detection: An operator or application needs to know when security automation data they are interested in has been published to, updated in, or deleted from a security automation data store that they have been authorized to access.

These building blocks are used to enable acquisition of various instances of security automation data based on specific data models that are used to drive assessment planning (see Section 2.1.2), posture attribute value collection (see Section 2.1.3), and posture evaluation (see Section 2.1.4).

2.1.2. Endpoint Identification and Assessment Planning

This use case describes the process of discovering endpoints, understanding their composition, identifying the desired state to assess against, and calculating what posture attributes to collect to enable evaluation. This process may be a set of manual, automated, or hybrid steps that are performed for each assessment.

The building blocks of this use case are:

Endpoint Discovery: To determine the current or historic presence of endpoints in the environment that are available for posture assessment.
Endpoints are identified in support of discovery by using information previously obtained or by using other collection mechanisms to gather identification and characterization data. Previously obtained data may originate from sources such as network authentication exchanges.

Endpoint Characterization: The act of acquiring, through automated collection or manual input, and organizing attributes associated with an endpoint (e.g., type, organizationally expected function/role, hardware/software versions).

Endpoint Target Identification: Determine the candidate endpoint target(s) against which to perform the assessment. Depending on the assessment trigger, a single endpoint or multiple endpoints may be targeted based on characterized endpoint attributes. Guidance describing the assessment to be performed may contain instructions or references used to determine the applicable assessment targets. In this case, the Data Query and/or Data Retrieval building blocks (see Section 2.1.1) may be used to acquire this data.

Endpoint Component Inventory: To determine what applicable desired states should be assessed, it is first necessary to acquire the inventory of software, hardware, and accounts associated with the targeted endpoint(s). If the assessment of the endpoint is not dependent on these details, then this capability is not required for use in performing the assessment. This process can be treated as a collection use case for specific posture attributes. In this case, the building blocks for Endpoint Posture Attribute Value Collection (see Section 2.1.3) can be used.

Posture Attribute Identification: Once the endpoint targets and their associated asset inventory is known, it is then necessary to calculate what posture attributes are required to be collected to perform the desired evaluation. When available, existing posture data is queried for suitability using the Data Query building block (see Section 2.1.1). Such posture data is suitable if it is complete and current enough for use in the evaluation. Any unsuitable posture data is identified for collection. If this is driven by guidance, then the Data Query and/or Data Retrieval building blocks (see Section 2.1.1) may be used to acquire this data. At this point, the set of posture attribute values to use for evaluation are known, and they can be collected if necessary (see Section 2.1.3).

2.1.3. Endpoint Posture Attribute Value Collection

This use case describes the process of collecting a set of posture attribute values related to one or more endpoints. This use case can be initiated by a variety of triggers including:

1.  a posture change or significant event on the endpoint.

2.  a network event (e.g., endpoint connects to a network/VPN, specific netflow [RFC3954] is detected).

3.  a scheduled or ad hoc collection task.

The building blocks of this use case are:

Collection Guidance Acquisition: If guidance is required to drive the collection of posture attribute values, this capability is used to acquire this data from one or more security automation data stores. Depending on the trigger, the specific guidance to acquire might be known. If not, it may be necessary to determine the guidance to use based on the component inventory or other assessment criteria.
The Data Query and/or Data Retrieval building blocks (see Section 2.1.1) may be used to acquire this guidance.

Posture Attribute Value Collection: The accumulation of posture attribute values. This may be based on collection guidance that is associated with the posture attributes.

Once the posture attribute values are collected, they may be persisted for later use or they may be immediately used for posture evaluation.

2.1.4. Posture Attribute Evaluation

This use case represents the action of analyzing collected posture attribute values as part of an assessment. The primary focus of this use case is to support evaluation of actual endpoint state against the expected state selected for the assessment. This use case can be initiated by a variety of triggers including:

1.  a posture change or significant event on the endpoint.

2.  a network event (e.g., endpoint connects to a network/VPN, specific netflow [RFC3954] is detected).

3.  a scheduled or ad hoc evaluation task.

The building blocks of this use case are:

Collected Posture Change Detection: An operator or application has a mechanism to detect the availability of new posture attribute values or changes to existing ones. The timeliness of detection may vary from immediate to on-demand. Having the ability to filter what changes are detected will allow the operator to focus on the changes that are relevant to their use and will enable evaluation to occur dynamically based on detected changes.

Posture Attribute Value Query: If previously collected posture attribute values are needed, the appropriate data stores are queried to retrieve them using the Data Query building block (see Section 2.1.1). If all posture attribute values are provided directly for evaluation, then this capability may not be needed.

Evaluation Guidance Acquisition: If guidance is required to drive the evaluation of posture attribute values, this capability is used to acquire this data from one or more security automation data stores. Depending on the trigger, the specific guidance to acquire might be known.
If not, it may be necessary to determine the guidance to use based on the component inventory or other assessment criteria. The Data Query and/or Data Retrieval building blocks (see Section 2.1.1) may be used to acquire this guidance.

Posture Attribute Evaluation: The comparison of posture attribute values against their expected values as expressed in the specified guidance. The result of this comparison is output as a set of posture evaluation results. Such results include metadata required to provide a level of assurance with respect to the posture attribute data and, therefore, evaluation results. Examples of such metadata include provenance and/or availability data.

While the primary focus of this use case is around enabling the comparison of expected vs. actual state, the same building blocks can support other analysis techniques that are applied to collected posture attribute data (e.g., trending, historic analysis). Completion of this process represents a complete assessment cycle as defined in Section 2.

2.2. Usage Scenarios

In this section, we describe a number of usage scenarios that utilize aspects of endpoint posture assessment. These are examples of common problems that can be solved with the building blocks defined above.

2.2.1. Definition and Publication of Automatable Configuration Checklists

A vendor manufactures a number of specialized endpoint devices.
They also develop and maintain an operating system for these devices that enables end-user organizations to configure a number of security and operational settings. As part of their customer support activities, they publish a number of secure configuration guides that provide minimum security guidelines for configuring their devices. Each guide they produce applies to a specific model of device and version of the operating system and provides a number of specialized configurations depending on the device's intended function and what add-on hardware modules and software licenses are installed on the device.

To enable their customers to evaluate the security posture of their devices to ensure that all appropriate minimal security settings are enabled, they publish automatable configuration checklists using a popular data format that defines what settings to collect using a network management protocol and appropriate values for each setting. They publish these checklists to a public security automation data store that customers can query to retrieve applicable checklist(s) for their deployed specialized endpoint devices.
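The retrieval step described above can be pictured as a simple match of a published checklist's applicability metadata against an endpoint's model and OS version. The sketch below is purely illustrative -- the record fields (model, os_version, settings) and the example setting names are invented for this example and are not defined by SACM or any vendor format:

```python
# Hypothetical sketch: a customer selecting applicable checklists from a
# vendor's public security automation data store. All field names and
# values below are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Checklist:
    model: str                                    # device model the guide covers
    os_version: str                               # operating system version covered
    settings: dict = field(default_factory=dict)  # expected setting values

def applicable_checklists(store, model, os_version):
    """Return every published checklist matching the endpoint's model/OS."""
    return [c for c in store if c.model == model and c.os_version == os_version]

# A toy "public data store" holding two published guides.
store = [
    Checklist("edge-router-x1", "4.2", {"ssh.v2_only": True}),
    Checklist("edge-router-x1", "5.0", {"ssh.v2_only": True,
                                        "telnet.enabled": False}),
]

matches = applicable_checklists(store, "edge-router-x1", "5.0")
```

In practice the query would run against a remote data store over a standardized protocol rather than an in-memory list, but the applicability match is the same idea.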
Automatable configuration checklists could also come from sources other than a device vendor, such as industry groups or regulatory authorities, or enterprises could develop their own checklists.

This usage scenario employs the following building blocks defined in Section 2.1.1 above:

Data Definition: To allow guidance to be defined using standardized or proprietary data models that will drive collection and evaluation.

Data Publication: Providing a mechanism to publish created guidance to a security automation data store.

Data Query: To locate and select existing guidance that may be reused.

Data Retrieval: To retrieve specific guidance from a security automation data store for editing.
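Taken together, these building blocks suggest a small guidance-store interface. The following is a non-normative sketch under stated assumptions: the method names (publish, query, retrieve) and the metadata field (publisher) are invented for illustration and do not correspond to any SACM-defined API:

```python
# Non-normative sketch of the publish/query/retrieve building blocks as
# a minimal in-memory guidance store. Method names and metadata fields
# are hypothetical.

import itertools

class GuidanceStore:
    def __init__(self):
        self._entries = {}
        self._ids = itertools.count(1)

    def publish(self, guidance, publisher):
        """Data Publication: store guidance, returning its identifier."""
        gid = next(self._ids)
        self._entries[gid] = {"publisher": publisher, "data": guidance}
        return gid

    def query(self, **criteria):
        """Data Query: list (id, entry) pairs matching all criteria."""
        return [(gid, e) for gid, e in self._entries.items()
                if all(e.get(k) == v or e["data"].get(k) == v
                       for k, v in criteria.items())]

    def retrieve(self, gid):
        """Data Retrieval: fetch one specific guidance entry for editing."""
        return self._entries[gid]["data"]

store = GuidanceStore()
gid = store.publish({"platform": "router-os", "check": "ssh.v2_only"}, "vendor-a")
hits = store.query(platform="router-os")
```

A real data store would add the access controls and publication metadata (create/modify dates) discussed in Section 2.1.1; this sketch only shows how the three operations relate.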
While each building block can be used in a manual fashion by a human operator, it is also likely that these capabilities will be implemented together in some form of a guidance editor or generator application.

2.2.2. Automated Checklist Verification

A financial services company operates a heterogeneous IT environment. In support of their risk management program, they utilize vendor-provided automatable security configuration checklists for each operating system and application used within their IT environment. Multiple checklists are used from different vendors to ensure adequate coverage of all IT assets. To identify what checklists are needed, they use automation to gather an inventory of the software versions utilized by all IT assets in the enterprise.
This data gathering will involve querying existing data stores of previously collected endpoint software inventory posture data and actively collecting data from reachable endpoints as needed, utilizing network and systems management protocols. Previously collected data may be provided by periodic data collection, network connection-driven data collection, or ongoing event-driven monitoring of endpoint posture changes.

Appropriate checklists are queried, located, and downloaded from the relevant guidance data stores. The specific data stores queried and the specifics of each query may be driven by data including:

o  collected hardware and software inventory data, and

o  associated asset characterization data that may indicate the organizationally defined functions of each endpoint.

Checklists may be sourced from guidance data stores maintained by an application or OS vendor, an industry group, a regulatory authority, or directly by the enterprise. The retrieved guidance is cached locally to reduce the need to retrieve the data multiple times.

Driven by the setting data provided in the checklist, a combination of existing configuration data stores and data collection methods are used to gather the appropriate posture attributes from (or pertaining to) each endpoint. Specific posture attribute values are gathered based on the defined enterprise function and software inventory of each endpoint. The collection mechanisms used to collect software inventory posture will be used again for this purpose.

Once the data is gathered, the actual state is evaluated against the expected state criteria defined in each applicable checklist. A checklist can be assessed as a whole, or a specific subset of the checklist can be assessed resulting in partial data collection and evaluation.

The results of checklist evaluation are provided to appropriate operators and applications to drive additional business logic. Specific applications for checklist evaluation results are out of scope for current SACM (Security Automation and Continuous Monitoring) efforts. Irrespective of specific applications, the availability, timeliness, and liveness of results are often of general concern. Network latency and available bandwidth often create operational constraints that require trade-offs between these concerns and need to be considered.

Uses of checklists and associated evaluation results may include, but are not limited to:

o  Detecting endpoint posture deviations as part of a change management program to identify:

   *  missing required patches,

   *  unauthorized changes to hardware and software inventory, and

   *  unauthorized changes to configuration items.

o  Determining compliance with organizational policies governing endpoint posture.

o  Informing configuration management, patch management, and vulnerability mitigation and remediation decisions.

o  Searching for current and historic indicators of compromise.

o  Detecting current and historic infection by malware and determining the scope of infection within an enterprise.

o  Detecting performance, attack, and vulnerable conditions that warrant additional network diagnostics, monitoring, and analysis.

o  Informing network access control decision-making for wired, wireless, or VPN connections.

This usage scenario employs the following building blocks defined in Section 2.1.1 above:

Endpoint Discovery: The purpose of discovery is to determine the type of endpoint to be posture assessed.

Endpoint Target Identification: To identify what potential endpoint targets the checklist should apply to based on organizational policies.

Endpoint Component Inventory: Collecting and consuming the software and hardware inventory for the target endpoints.
Posture Attribute Identification: To determine what datastores will needneeds to bequeriedcollected toretrieve applicable guidance. To query guidance it will be necessarysupport evaluation, the checklist is evaluated against the component inventory and other endpoint metadata todefine adetermine the set ofsearch criteria. This criteria will often utilize a logical combination of publication metadata (e.g. publishing identity, create time, modification time) and guidance data-specific criteria elements. Onceposture attribute values that are needed. Collection Guidance Acquisition: Based on thecriteria is defined, one or moreidentified posture attributes, the application will query appropriate security automation data storeswill needtobe queried generating a result set. Depending on howfind theresults"applicable" collection guidance for each endpoint in question. Posture Attribute Value Collection: For each endpoint, the values for the required posture attributes are collected. Posture Attribute Value Query: If previously collected posture attribute values are used,it may be desirable to returnthey are queried from thematching guidance directly, a snippet ofappropriate data stores for the target endpoint(s). Evaluation Guidance Acquisition: Any guidancematchingthat is needed to support evaluation is queried and retrieved. Posture Attribute Evaluation: The resulting posture attribute values from previous collection processes are evaluated using thequery, orevaluation guidance to provide aresolvable locationset of posture results. 2.2.3. Detection of Posture Deviations Example Corporation has established secure configuration baselines for each different type of endpoint within their enterprise including: network infrastructure, mobile, client, and server computing platforms. These baselines define an approved list of hardware, software (i.e., operating system, applications, and patches), and associated required configurations. 
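To make the baseline idea concrete, the following is a non-normative sketch of checking one endpoint's reported posture against an approved baseline of software and configuration items. The data layout, attribute names, and deviation labels are illustrative assumptions, not SACM-defined data models.

```python
# Illustrative only: an approved baseline covering software inventory and
# required configuration items (names are hypothetical examples).
APPROVED_BASELINE = {
    "software": {"os-10.2", "editor-1.4", "patch-2015-07"},
    "config": {"ssh.root_login": "disabled", "fw.enabled": "true"},
}

def find_deviations(reported):
    """Return baseline deviations for one endpoint's reported posture."""
    deviations = []
    # Software present on the endpoint but absent from the approved list.
    for item in reported.get("software", set()) - APPROVED_BASELINE["software"]:
        deviations.append(("unauthorized-software", item))
    # Configuration items whose actual value differs from the required one.
    for key, expected in APPROVED_BASELINE["config"].items():
        actual = reported.get("config", {}).get(key)
        if actual != expected:
            deviations.append(
                ("config-mismatch", f"{key}={actual!r}, expected {expected!r}"))
    return deviations

reported = {"software": {"os-10.2", "game-3.0"},
            "config": {"ssh.root_login": "enabled", "fw.enabled": "true"}}
for kind, detail in find_deviations(reported):
    print(kind, detail)
```

Here the unapproved "game-3.0" package and the "ssh.root_login" mismatch would be flagged to the device's operators as deviations from the baseline.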
When an endpoint connects to the network, the appropriate baseline configuration is communicated to the endpoint based on its location in the network, the expected function of the device, and other asset management data. It is checked for compliance with the baseline, and any deviations are indicated to the device's operators. Once the baseline has been established, the endpoint is monitored for any change events pertaining to the baseline on an ongoing basis. When a change occurs to posture defined in the baseline, updated posture information is exchanged, allowing operators to be notified and/or automated action to be taken.
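A minimal sketch of the ongoing monitoring described above: comparing the current posture attribute values against the previously recorded state yields a delta, and only that delta needs to be re-evaluated. The attribute names and the alert format are assumptions for illustration.

```python
def posture_delta(previous, current):
    """Return the posture attributes whose values changed, as
    {attribute: (old_value, new_value)}."""
    keys = set(previous) | set(current)
    return {k: (previous.get(k), current.get(k))
            for k in keys if previous.get(k) != current.get(k)}

# Hypothetical snapshots of one endpoint's monitored posture attributes.
previous = {"av.signatures": "2015-06-30", "fw.enabled": "true"}
current  = {"av.signatures": "2015-07-01", "fw.enabled": "true"}

delta = posture_delta(previous, current)
if delta:
    # An alert identifying only the changed attributes would be raised here,
    # triggering a partial (delta) assessment rather than a full one.
    print("posture-change alert:", sorted(delta))
```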
Like the Automated Checklist Verification usage scenario (see Section 2.2.2), this usage scenario supports assessment based on automatable checklists. It differs from that scenario by monitoring for specific endpoint posture changes on an ongoing basis. When the endpoint detects a posture change, an alert is generated identifying the specific changes in posture, thus allowing assessment of the delta to be performed instead of a full assessment as in the previous case.

This usage scenario employs the same building blocks as Automated Checklist Verification (see Section 2.2.2). It differs slightly in how it uses the following building blocks:

Endpoint Component Inventory:  Additionally, changes to the hardware and software inventory are monitored, with changes causing alerts to be issued.

Posture Attribute Value Collection:  After the initial assessment, posture attributes are monitored for changes. If any of the selected posture attribute values change, an alert is issued.

Posture Attribute Value Query:  The previous state of posture attributes is tracked, allowing changes to be detected.

Posture Attribute Evaluation:  After the initial assessment, a partial evaluation is performed based on changes to specific posture attributes.

This usage scenario highlights the need to query a data store to prepare a compliance report for a specific endpoint and also the need for a change in endpoint state to trigger Collection and Evaluation.

2.2.4.  Endpoint Information Analysis and Reporting

Freed from the drudgery of manual endpoint compliance monitoring, one of the security administrators at Example Corporation notices (not using SACM standards) that five endpoints have been uploading lots of data to a suspicious server on the Internet. The administrator queries data stores for specific endpoint posture to see what software is installed on those endpoints and finds that they all have a particular program installed. She then queries the appropriate data stores to see which other endpoints have that program installed. All these endpoints are monitored carefully (not using SACM standards), which allows the administrator to detect that the other endpoints are also infected. This is just one example of the useful analysis that a skilled analyst can do using data stores of endpoint posture.

This usage scenario employs the following building blocks defined in Section 2.1.1 above:

Posture Attribute Value Query:  Previously collected posture attribute values for the target endpoint(s) are queried from the appropriate data stores using a standardized method.
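The analyst's pivot query above can be sketched as follows. The in-memory "data store" of previously collected posture attribute values, the endpoint names, and the attribute fields are all illustrative assumptions; a real deployment would use a standardized query interface.

```python
# Hypothetical store of previously collected posture attribute values,
# keyed by endpoint identifier.
POSTURE_STORE = {
    "host-01": {"installed": ["prog-x", "browser"]},
    "host-02": {"installed": ["editor"]},
    "host-03": {"installed": ["prog-x"]},
}

def endpoints_with(program, store):
    """Query the posture data store for endpoints reporting a given
    program in their software inventory."""
    return sorted(ep for ep, attrs in store.items()
                  if program in attrs.get("installed", []))

# Which other endpoints have the suspicious program installed?
print(endpoints_with("prog-x", POSTURE_STORE))
```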
This usage scenario highlights the need to query a repository for attributes to see which attributes certain endpoints have in common.

2.2.5.  Asynchronous Compliance/Vulnerability Assessment at Ice Station Zebra

A university team receives a grant to do research at a government facility in the Arctic. The only network communications will be via an intermittent, low-speed, high-latency, high-cost satellite link. During their extended expedition, they will need to show continued compliance with the security policies of the university, the government, and the provider of the satellite network, as well as keep current on vulnerability testing. Interactive assessments are therefore not reliable, and since the researchers have very limited funding, they need to minimize how much money they spend on network data.

Prior to departure, they register all equipment with an asset management system owned by the university, which will also initiate and track assessments. On a periodic basis -- either after a maximum time delta or when the security automation data store has received a threshold level of new vulnerability definitions -- the university uses the information in the asset management system to put together a collection request for all of the deployed assets that encompasses the minimal set of artifacts necessary to evaluate all three security policies as well as vulnerability testing. In the case of new critical vulnerabilities, this collection request consists only of the artifacts necessary for those vulnerabilities, and collection is only initiated for those assets that could potentially have a new vulnerability.

(Optional) Asset artifacts are cached in a local configuration management database (CMDB). When new vulnerabilities are reported to the security automation data store, a request to the live asset is only made if the artifacts in the CMDB are incomplete and/or not current enough.

The collection request is queued for the next window of connectivity. The deployed assets eventually receive the request, fulfill it, and queue the results for the next return opportunity. The collected artifacts eventually make it back to the university, where the level of compliance and vulnerability exposure is calculated and asset characteristics are compared to what is in the asset management system for accuracy and completeness.

Like the Automated Checklist Verification usage scenario (see Section 2.2.2), this usage scenario supports assessment based on checklists. It differs from that scenario in how guidance, collected posture attribute values, and evaluation results are exchanged due to bandwidth limitations and availability.

This usage scenario employs the same building blocks as Automated Checklist Verification (see Section 2.2.2). It differs slightly in how it uses the following building blocks:

Endpoint Component Inventory:  It is likely that the component inventory will not change. If it does, this information will need to be batched and transmitted during the next communication window.

Collection Guidance Acquisition:  Due to intermittent communication windows and bandwidth constraints, changes to collection guidance will need to be batched and transmitted during the next communication window. Guidance will need to be cached locally to avoid the need for remote communications.

Posture Attribute Value Collection:  The specific posture attribute values to be collected are identified remotely and batched for collection during the next communication window. If a delay is introduced for collection to complete, results will need to be batched and transmitted.

Posture Attribute Value Query:  Previously collected posture attribute values will be stored in a remote data store for use at the university.

Evaluation Guidance Acquisition:  Due to intermittent communication windows and bandwidth constraints, changes to evaluation guidance will need to be batched and transmitted during the next communication window. Guidance will need to be cached locally to avoid the need for remote communications.

Posture Attribute Evaluation:  Due to the caching of posture attribute values and evaluation guidance, evaluation may be performed at both the university campus and the satellite site.

This usage scenario highlights the need to support low-bandwidth, intermittent, or high-latency links.

2.2.6.  Identification and Retrieval of Guidance

In preparation for performing an assessment, an operator or application will need to identify one or more security automation data stores that contain the guidance entries necessary to perform data collection and evaluation tasks. The location of a given guidance entry will either be known a priori, or known security automation data stores will need to be queried to retrieve applicable guidance. To query guidance, it will be
necessary to define a set of search criteria. These criteria will often utilize a logical combination of publication metadata (e.g., publishing identity, create time, modification time) and criteria elements specific to the guidance data. Once the criteria are defined, one or more security automation data stores will need to be queried, thus generating a result set. Depending on how the results are used, it may be desirable to return the matching guidance directly, a snippet of the guidance matching the query, or a resolvable location to retrieve the data at a later time. The guidance matching the query will be restricted based on the authorized level of access allowed to the requester.

If the location of guidance is identified in the query result set, the guidance will be retrieved when needed using one or more data retrieval requests. A variation on this approach would be to maintain a local cache of previously retrieved data. In this case, only guidance that is determined to be stale by some measure will be retrieved from the remote data store.

Alternately, guidance can be discovered by iterating over data published with a given context within a security automation data store. Specific guidance can be selected and retrieved as needed.

This usage scenario employs the following building blocks defined in Section 2.1.1 above:

Data Query:  Enables an operator or application to query one or more security automation data stores for guidance using a set of specified criteria.

Data Retrieval:  If data locations are returned in the query result set, then specific guidance entries can be retrieved and possibly cached locally.

2.2.7.  Guidance Change Detection

An operator or application may need to identify new, updated, or deleted guidance in a security automation data store that they have been authorized to access. This may be achieved by querying or iterating over guidance in a security automation data store, or through a notification mechanism that generates alerts when changes are made to a security automation data store.

Once guidance changes have been determined, data collection and evaluation activities may be triggered.

This usage scenario employs the following building blocks defined in Section 2.1.1 above:

Data Change Detection:  Allows an operator or application to identify guidance changes in a security automation data store that they have been authorized to access.

Data Retrieval:  If data locations are provided by the change detection mechanism, then specific guidance entries can be retrieved and possibly cached locally.

3.  Security Considerations

This memo documents, for informational purposes, use cases for security automation. Specific security and privacy considerations will be provided in related documents (e.g., requirements, architecture, information model, data model, protocol) as appropriate to
the function described in each related document.

One consideration for security automation is that a malicious actor could use the security automation infrastructure and related collected data to gain access to an item of interest. This may include personal data, private keys, software and configuration state that can be used to inform an attack against the network and endpoints, and other sensitive information. It is important that security and privacy considerations in the related documents indicate methods to both identify and prevent such activity.

For consideration are means for protecting the communications as well as the systems that store the information. For communications between the varying SACM components, there should be considerations for protecting confidentiality and data integrity and for peer entity authentication. For exchanged information, there should be a means to authenticate the origin of the information. This is important where tracking the provenance of data is needed. Also, for any systems that store information that could be used for unauthorized or malicious purposes, methods to identify and protect against unauthorized usage, inappropriate usage, and denial of service need to be considered.

4.  Informative References

[RFC3444]  Pras, A. and J. Schoenwaelder, "On the Difference between Information Models and Data Models", RFC 3444, DOI 10.17487/RFC3444, January 2003, <http://www.rfc-editor.org/info/rfc3444>.

[RFC3954]  Claise, B., Ed., "Cisco Systems NetFlow Services Export Version 9", RFC 3954, DOI 10.17487/RFC3954, October 2004, <http://www.rfc-editor.org/info/rfc3954>.

Acknowledgements

Adam Montville edited early versions of this document. Kathleen Moriarty and Stephen Hanna contributed text describing the scope of the document. Gunnar Engelbach, Steve Hanna, Chris Inacio, Kent Landfield, Lisa Lorenzin, Adam Montville, Kathleen Moriarty, Nancy Cam-Winget, and Aron Woland provided text about the use cases for various revisions of this document.

Authors' Addresses

David Waltermire
National Institute of Standards and Technology
100 Bureau Drive
Gaithersburg, Maryland 20877
United States

Email: david.waltermire@nist.gov

David Harrington
Effective Software
50 Harding Rd
Portsmouth, New Hampshire 03801
United States

Email: ietfdbh@comcast.net