Department of Homeland Security
Federal Network Security Branch

Continuous Asset Evaluation, Situational Awareness, and Risk Scoring Reference Architecture Report (CAESARS)

September 2010
Version 1.8
Document No. MP100146

Table of Contents

1 Introduction 1
  1.1 Objective 1
  1.2 Intended Audience 1
  1.3 References
  1.4 Review of FISMA Controls and Continuous Monitoring
  1.5 CAESARS Reference Architecture Concept of Operations
    1.5.1 Definition 4
    1.5.2 Operating Principles 4
    1.5.3 Relationship of CAESARS to CyberScope
    1.5.4 Cautionary Note – What Risk Scoring Can and Cannot Do
    1.5.5 CAESARS and Risk Management 7
    1.5.6 Risk Management Process 8
  1.6 The CAESARS Subsystems
  1.7 Document Structure: The Architecture of CAESARS 10
    1.7.1 CAESARS Sensor Subsystem 11
    1.7.2 CAESARS Database/Repository Subsystem 12
    1.7.3 CAESARS Analysis/Risk Scoring Subsystem 13
    1.7.4 CAESARS Presentation and Reporting Subsystem 13
2 The Sensor Subsystem 14
  2.1 Goals 14
    2.1.1 Definitions 14
    2.1.2 Operating Environment Assumptions for the Sensor Subsystem 15
  2.2 Solution Concept for the Sensor Subsystem 16
    2.2.1 Tools for Assessing Security Configuration Compliance 19
    2.2.2 Security Assessment Tools for Assessing Patch-Level Compliance 23
    2.2.3 Tools for Discovering and Identifying Security Vulnerabilities 25
    2.2.4 Tools for Providing Virus Definition Identification 29
    2.2.5 Other Sensors 30
    2.2.6 Sensor Controller 32
  2.3 Recommended Technology in the Sensor Subsystem 33
    2.3.1 Agent-Based Configuration 33
    2.3.2 Agentless Configuration 35
    2.3.3 Proxy-Hybrid Configuration 36
    2.3.4 NAC-Remote Configuration 37
3 CAESARS Database 39
  3.1 Goals 39
    3.1.1 Raw Data Collected and Stored Completely, Accurately, Automatically, Securely, and in a Timely Manner 39
    3.1.2 Modular Architecture 39
  3.2 Objects and Relations 39
    3.2.1 Repository of Asset Inventory Baselines 40
    3.2.2 Repository of System Configuration Baselines 41
    3.2.3 National Vulnerability Database 42
    3.2.4 Database of Findings 42
4 CAESARS Analysis Engine 49
  4.1 Goals 49
    4.1.1 Modular Analysis, Independent of Scoring and Presentation Technologies 49
    4.1.2 Make Maximal Use of Existing, In-Place, or Readily Available Sensors 50
    4.1.3 Harmonize Data from Different Sensors 50
    4.1.4 Develop Analysis Results that Are Transparent, Defensible, and Comparable 50
  4.2 Types of Raw Data and Data Consolidation and Reduction 51
5 CAESARS Risk Scoring Engine 53
  5.1 Goals 53
    5.1.1 Modular Analysis, Independent of Scoring and Presentation Technologies 53
    5.1.2 Common Base of Raw Data Can Be Accessed by Multiple Analytic Tools 54
    5.1.3 Multiple Scoring Tools Reflect Both Centralized and Decentralized Analyses 54
  5.2 Centralized Scoring that Is Performed Enterprise-Wide 54
    5.2.1 Allow Common Scoring for Consistent Comparative Results Across the Enterprise 54
    5.2.2 Show Which Components Are Compliant (or Not) with High-Level Policies 54
    5.2.3 Permit Decomposition of Scores into Shadow Prices 55
    5.2.4 Show Which Components Are Subject to Specific Threats/Attacks 55
    5.2.5 Allow Controlled Enterprise-Wide Change to Reflect Evolving Strategies 55
  5.3 Decentralized Analyses that Are Unique to Specific Enterprise Subsets 55
    5.3.1 Raw Local Data Is Always Directly Checkable by Local Administrators 55
    5.3.2 Different Processor Types or Zones Can Be Analyzed Separately 56
    5.3.3 Region- and Site-Specific Factors Can Be Analyzed by Local Administrators 56
    5.3.4 Users Can Create and Store Their Own Analysis Tools for Local Use 56
  5.4 Description of the iPost Implementation of the Analysis and Scoring Engines 56
    5.4.1 Synopsis of the iPost Scoring Methodology 57
    5.4.2 Using iPost for Centralized Analyses 58
    5.4.3 Using iPost for Decentralized Analyses 58
    5.4.4 The Scoring Methodologies for iPost Risk Components 59
6 CAESARS Presentation and Reporting Subsystem 61
  6.1 Goals 61
    6.1.1 Modular Presentation, Independent of Presentation and Reporting Subsystem Technologies 61
    6.1.2 Allow Either Convenient "Dashboard" Displays or Direct, Detailed View of Data 61
  6.2 Consistent Display of Enterprise-Wide Scores 62
    6.2.1 Device-Level Reporting 63
    6.2.2 Site-, Subsystem-, or Organizational-Level Reporting 63
    6.2.3 Enterprise-Level Reporting 64
    6.2.4 Risk Exception Reporting 64
    6.2.5 Time-Based Reporting 64
7 Areas for Further Study 65
8 Conclusions and Recommendations 67
Appendix A NIST-Specified Security Content Automation Protocol (SCAP) 68
Appendix B Addressing NIST SP 800-53 Security Control Families 69
Appendix C Addressing the Automatable Controls in the Consensus Audit Guidelines 71
Appendix D List of Applicable Tools 74
Appendix E Sample List of SCAP Security Configuration Checklists 82
Appendix F Sample Risk Scoring Formulas 84
Acronyms 90

List of Figures

Figure ES-1 Contextual Description of the CAESARS System x
Figure 1 Continuous Monitoring of a System's Security Posture in the NIST-Defined System Life Cycle and Risk Management Framework
Figure 2 Contextual Description of the CAESARS System 11
Figure 3 Relationships Between Security Configuration Benchmarks, Baseline, and RBDs 15
Figure 4 Contextual Description of the Sensor Subsystem 17
Figure 5 Contextual Description of Interfaces Between an FDCC Scanner Tool and the Database/Repository Subsystem 21
Figure 6 Contextual Description of Interfaces Between an Authenticated Security Configuration Scanner Tool and the Database/Repository Subsystem 23
Figure 7 Contextual Description of Interfaces Between an Authenticated Vulnerability and Patch Scanner Tool and the Database/Repository Subsystem 24
Figure 8 Contextual Description of Interfaces Between an Unauthenticated Vulnerability Scanner Tool and the Database/Repository Subsystem 25
Figure 9 Contextual Description of Interfaces Between a Web Vulnerability Scanner Tool and the Database/Repository Subsystem 28
Figure 10 Contextual Description of Interfaces Between a Database Vulnerability Scanner Tool and the Database/Repository Subsystem 29
Figure 11 Contextual Description of Interfaces Between an Authenticated Security Configuration Scanner Tool and the Database Subsystem 30
Figure 12 Contextual Description of Sensor Controller to Control Security Assessment Tools 32
Figure 13 Agent-Based Deployment Configuration 33
Figure 14 Agentless Deployment Configuration 35
Figure 15 Proxy-Hybrid Deployment Configuration – Agentless 37
Figure 16 NAC-Remote Deployment Configuration – Agent-Based 37
Figure 17 Contextual Description of Database/Repository Subsystem 40
Figure 18 Contextual Description of Interfaces Between the Database Subsystem and an FDCC Scanner Tool 43
Figure 19 Contextual Description of Interfaces Between the Database Subsystem and an Authenticated Security Configuration Scanner Tool 44
Figure 20 Contextual Description of Interfaces Between the Database Subsystem and an Authenticated Vulnerability and Patch Scanner Tool 45
Figure 21 Contextual Description of Interfaces Between the Database Subsystem and an Unauthenticated Vulnerability Scanner Tool 46
Figure 22 Contextual Description of Interfaces Between the Database Subsystem and a Web Vulnerability Scanner Tool 47
Figure 23 Contextual Description of Interfaces Between the Database Subsystem and a Database Vulnerability Scanner Tool 48

List of Tables

Table 1 Recommended Security Tools for Providing Data to Support Risk Scoring 18
Table 2 Currently Scored iPost Components 57
Table 3 Components Under Consideration for iPost Scoring 57
Table 4 Reportable Scoring Elements (Sample) 63

Executive Summary

"Continuous monitoring is the backbone of true security."
Vivek Kundra, Federal Chief Information Officer, Office of Management and Budget

A target-state reference architecture is proposed for security posture monitoring and risk scoring, based on the work of three leading federal agencies: the Department of State (DOS) Security Risk Scoring System; the Department of the Treasury, Internal Revenue Service (IRS) Security Compliance Posture Monitoring and Reporting (SCPMaR) System; and the Department of Justice (DOJ) use of BigFix and the Cyber Security Assessment and Management (CSAM) tool, along with related security posture monitoring tools for asset discovery and management of configuration, vulnerabilities, and patches.

The target reference architecture presented in this document – the Continuous Asset Evaluation, Situational Awareness, and Risk Scoring (CAESARS) reference architecture – represents the essential functional components of a security risk scoring system, independent of specific technologies, products, or vendors, and using the combined elements of the DOS, IRS, and DOJ approaches. The objective of the CAESARS reference architecture is to provide an abstraction of the various posture monitoring and risk scoring systems that can be applied by other agencies seeking to use risk scoring principles in their information security programs.

The reference architecture is intended to support managers and security administrators of federal information technology (IT) systems. It may be used to develop detailed technical and functional requirements and to build a detailed design for tools that perform similar functions of automated asset monitoring and situational awareness.

The CAESARS reference architecture and the information security governance processes that it supports differ from those in most federal agencies in key respects. Many agencies have automated tools to monitor and assess information security risk from factors such as missing patches, vulnerabilities, variance from approved configurations, or violations of security control policies. Some have automated tools for remediating vulnerabilities, either automatically or through some user action. These tools can provide current security status to network operations centers and security operations centers, but they typically do not support prioritized remediation actions and do not provide a direct incentive for improvements in risk posture. Remedial actions can be captured in Plans of Action and Milestones, but those plans are not based on a quantitative and objective assessment of the benefits of measurably reducing risk, because the potential risk reduction is not measured in a consistent way.

What makes CAESARS different is its integrated approach and end-to-end processes for:
- Assessing the actual state of each IT asset under management
- Determining the gaps between the current state and accepted security baselines
- Expressing in clear, quantitative measures the relative risk of each gap or deviation
- Providing simple letter grades that reflect the aggregate risk of every site and system
- Ensuring that the responsibility for every system and site is correctly assigned
- Providing targeted information for security and system managers to use in taking the actions to make the most critical changes needed to reduce risk and improve their grades

Making these assessments on a continuous or nearly continuous basis is a prerequisite for moving IT security management from isolated assessments, supporting infrequent authorization decisions, to continuous risk management as described in the current federal guidance of the National Institute of Standards and Technology (NIST) and Office of Management and Budget (OMB) mandates. The risk scoring and continuous monitoring capabilities that were studied for this document represent operational examples of a more generalized capability that could provide significant value to most or all federal agencies, or to any IT enterprise.

What makes risk scoring different from compliance posture reporting is providing information at the right level of detail, so that managers and system administrators can understand the state of the IT systems for which they are responsible, the specific gaps between actual and desired states of security protections, and the numerical value of every remediation action that can be taken to close the gaps. This enables responsible managers to identify the actions that will yield the highest added value in bringing their systems into compliance with standards, mandates, and security policies.

The reference architecture consists of four interconnected architectural subsystems, the functions and services within those subsystems, and the expected interactions between subsystems. The four subsystems are:
- Sensor Subsystem
- Database/Repository Subsystem
- Analysis/Risk Scoring Subsystem
- Presentation and Reporting Subsystem

The fundamental building blocks of all analysis and reporting are the individual devices that constitute the assets of the information system enterprise. An underlying assumption of risk scoring is that the total risk to an organization is an aggregate of the risks associated with every device in the system. The risk scoring system answers the questions:
- What are the devices that constitute the organization's IT assets?
- What is the current state of security controls (a subset of the technical controls) associated with those assets?
- How does their state deviate from the accepted baseline of security controls and configurations?
- What is the relative severity of the deviations, expressed as a numerical value?
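The following is a minimal, hypothetical sketch of that aggregation assumption: per-device scores (computed from sensor findings using formulas like the samples in Appendix F) are summed by site and mapped to a letter grade. The component scores, site names, and grade thresholds are illustrative placeholders, not values defined by CAESARS or by any agency implementation.

```python
from collections import defaultdict

# Hypothetical per-device risk scores, already computed from sensor findings
# (for example, with the sample formulas in Appendix F). Keys are (site, device).
device_scores = {
    ("Site-A", "ws-0001"): 12.5,
    ("Site-A", "ws-0002"): 48.0,
    ("Site-B", "srv-0101"): 3.2,
}

def site_totals(scores):
    """Aggregate device-level risk into a total score per site."""
    totals = defaultdict(float)
    for (site, _device), score in scores.items():
        totals[site] += score
    return dict(totals)

def letter_grade(avg_per_device):
    """Map an average per-device score to a letter grade (illustrative cut-offs only)."""
    for grade, limit in (("A", 10.0), ("B", 20.0), ("C", 40.0), ("D", 80.0)):
        if avg_per_device <= limit:
            return grade
    return "F"

if __name__ == "__main__":
    counts = defaultdict(int)
    for site, _device in device_scores:
        counts[site] += 1
    for site, total in site_totals(device_scores).items():
        print(site, total, letter_grade(total / counts[site]))
```

Dividing the site total by its device count keeps large and small sites comparable; the actual grading rules are defined by each implementing agency.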
At its core, CAESARS is a decision support system. The existing implementations on which it is based have proven to be effective means of visualizing technical details about the security posture of networked devices in order to derive actionable information that is prioritized, timely, and tailored. The core of the reference architecture is the database, which contains the asset status as reported by the sensors as well as the baseline configurations against which the asset status is compared, the rule sets for scrubbing data for consistency, the algorithms for computing the risk score of each asset, and the data that identifies, for each asset, the responsible organization, site, and/or individual who will initiate the organization's remediation procedure and monitor its completion. This assignment of responsibility is key to initiating and motivating actions that measurably improve the security posture of the enterprise. The subsystems interact with the database and, through it, with each other, as depicted in Figure ES-1.

Figure ES-1 Contextual Description of the CAESARS System

Using an implementation based on this reference architecture, risk scoring can complement and enhance the effectiveness of security controls that are susceptible to automated monitoring and reporting, comparing asset configurations with expected results from an approved security baseline. It can provide direct visualization of the effect of various scored risk elements on the overall posture of a site, system, or organization. Risk scoring is not a substitute for other essential operational and management controls, such as incident response, contingency planning, and personnel security. It cannot determine which IT systems have the most impact on agency operations, nor can it determine how various kinds of security failures – loss of confidentiality, integrity, and availability – will affect the functions and mission of the organization. In other words, risk scoring cannot score risks about which it has no information. However, when used in conjunction with other sources of information, such as the FIPS-199 security categorization and automated asset data repository and configuration ...

Appendix D List of Applicable Tools (Risk Management and Monitoring Technology)

For each tool, the original table records its Form (HW, SW, Network Appliance, etc.), whether it is on a GSA Schedule, whether it appears on the NIST SCAP Validated Product List (http://nvd.nist.gov/scapproducts.cfm), the Agency Basis of Information, its Function, and the CAESARS component(s) it supports: Sensor Subsystem (FDCC Scanner, Authenticated Configuration Scanner, Authenticated Vulnerability and Patch Scanner, Unauthenticated Vulnerability Scanner, Database Vulnerability Scanner, Web Vulnerability Scanner, Anti-Virus Tool, System Configuration Management Tool, Network Configuration Management Tool, Relational DBMS), Database/Repository Subsystem, Analysis/Risk Scoring Subsystem (Risk Scoring and Analysis), and Presentation and Reporting Subsystem (Reporting).
Tool | Form | GSA Schedule? | Agency Basis of Information | Function
Tenable Security Center | SW | YES | DOS | Authenticated Configuration Scanner / FDCC Scanner
Triumfant Resolution Manager | SW | YES | | Authenticated Configuration Scanner / FDCC Scanner
BMC BladeLogic Client Automation | SW | YES | | Authenticated Configuration Scanner / FDCC Scanner
ASG IA2 SCAP Module | SW | YES | | Authenticated Configuration Scanner / FDCC Scanner
SAINT Vulnerability Scanner | SW | YES | | Unauthenticated Vulnerability Scanner
nCircle IP360 | Network Appliance | YES | DOS, IRS | Unauthenticated Vulnerability Scanner
Rapid7 NeXpose | SW | YES | IRS | Unauthenticated Vulnerability Scanner
DbProtect | SW | YES | IRS | Database Vulnerability Scanner
AppDetective | SW | YES | IRS | Database Vulnerability Scanner
Cisco Works Campus Manager | | | | Network Management Tool
Tavve PReView | | NO | | Network Monitoring Tool
Niksun NetOmni | | NO | | Network Monitoring Tool
Microsoft SQL Server 2005, 2008 | SW | YES | DOS, IRS | Relational Database Management System (RDBMS)
Oracle 10g, 11i | SW | YES | IRS | Relational Database Management System (RDBMS)
BMC Remedy Service Desk | SW | YES | IRS | Trouble Ticketing (Helpdesk Workflow Management) System
Microsoft Active Directory Server | SW | YES | DOS, IRS | Directory Service
Microsoft Windows 2003 Enterprise Server | SW | | DOS, IRS | Operating System
Microsoft SQL Server 2005 | SW | YES | DOS, IRS | Relational Database Management System (RDBMS)
Microsoft SQL Server Integration Services | SW | YES | | RDBMS Service Component
Microsoft SQL Server Reporting Services | SW | YES | | RDBMS Service Component
Microsoft Internet Information Server (with load balancing and cluster services) | SW | YES | DOS, IRS | Web Server
ADO.NET | SW | YES | | Middleware
Active Server Pages (ASP).NET (C#) | SW | YES | DOS, IRS | Middleware
Microsoft .NET Framework version 3.5 | SW | YES | DOS, IRS | Middleware
iPost (PRSM) | SW | NO | DOS | GOTS
Agiliance RiskVision | SW | NO | IRS | Presentation Engine
Archer GRC Solution | SW | YES | IRS | Presentation / Risk Scoring Engine
ArcSight Enterprise Security Manager | SW | YES | IRS | Presentation / Security Information & Event Manager (SIEM)
Dundas Chart | SW | YES | | Data Visualization Tool

Appendix E Sample List of SCAP Security Configuration Checklists

The following is a partial list of SCAP security configuration checklists in the NIST NCP Repository and in government agency and industry offerings. It should be noted that this list does not encompass all available COTS software platforms; both government and industry are contributing to this repository on a regular basis.

Workstation Operating Systems
Platform | Source
Microsoft Windows XP | USGCB and NVD
Windows Vista | USGCB and NVD
Windows | CIS*
Apple Mac OS X 10.5 | CIS*

Server Operating Systems
Platform | Source
Microsoft Windows Server 2003 (32- and 64-bit platforms) | IRS**, CIS*
Windows Server 2008 (32- and 64-bit platforms) | IRS**, CIS*
Sun Solaris 10 | IRS**, CIS*
Sun Solaris 2.5.1 – 9.0 | CIS*
HP-UX 11 | IRS**, CIS*
IBM AIX | IRS**, CIS*
RedHat EL | IRS**, CIS*
Debian Linux | IRS**, CIS*
FreeBSD | CIS
Apple Mac OS X Snow Leopard Server Edition | CIS*

Network Operating Systems
Platform | Source
Cisco IOS 12 | CIS*

Workstation Applications
Platform | Source
Microsoft IE | USGCB and NVD
Microsoft IE | CIS*
Mozilla Firefox | CIS*
Microsoft Office 2003 | CIS*
Microsoft Office 2007 | CIS*
Symantec Norton AntiVirus 10.0 | CIS*
Symantec Norton AntiVirus 9.0 | CIS*
McAfee VirusScan 7.0 | CIS*

Server Applications
Platform | Source
Microsoft IIS 6.0, 7.0, and 7.5 | IRS**, CIS*
Apache 2.0, 1.3 | CIS*
Apache Tomcat 5.5, 6.0 | CIS*
BEA WebLogic Server 8.1, 7.0 | CIS*
Microsoft SQL Server 2005, 2008 | IRS**, CIS*
Oracle Database Server 9i, 10g, and 11g | IRS**, CIS*
IBM DB2 Version 8.0 and 9.5 | IRS**, CIS*
VMware ESX 3.5 | CIS*

* SCAP benchmarks by the Center for Internet Security (CIS) are proprietary and cannot be used by other SCAP-certified Authenticated Configuration Scanners, because the CIS Benchmark Audit Tools have OVAL components embedded into the tools. SCAP benchmarks by CIS are currently available as part of the CIS-CAT Benchmark Audit Tool.
** SCAP benchmarks by the IRS are designed to meet the IRS's security configuration policies. However, SCAP content such as OVAL, CCE, and CPE can be reused by other agencies with express agreement from the IRS. SCAP content is currently being developed by the IRS; the first formal release is scheduled for October 2010.

Appendix F Sample Risk Scoring Formulas

The table below provides examples of formulas used to calculate risk scores for workstations in a representative federal agency. They are intended to illustrate how different measurements can be combined into a numerical representation of relative risk for an asset of interest.

Sensor Derived Configuration Scores
Name: Vulnerability Score
Abbreviation: VUL
Description: Each vulnerability detected by the sensor is assigned a score from 1.0 to 10.0 according to the Common Vulnerability Scoring System (CVSS) and stored in the National Vulnerability Database (NVD). To provide greater separation between HIGH and LOW vulnerabilities (so that it takes many LOWs to equal one HIGH vulnerability), the raw CVSS score is transformed by raising it to the power of 3 and dividing by 100.
Individual Scores: VUL Score = 0.01 * (CVSS Score)^3
Host Score: Host VUL Score = SUM(VUL scores of all detected vulnerabilities)
Notes: The VUL score for each host is the sum of all VUL scores for that host.

Name: Patch Score
Abbreviation: PAT
Description: Each patch which the sensor detects is not fully installed on a host is assigned a score corresponding directly to its risk level.
Individual Scores:
Patch Risk Level | Risk Score
Critical | 10.0
High | 9.0
Medium | 6.0
Host Score: Host PAT Score = SUM(PAT scores of all incompletely installed patches)
Notes: None

Name: Security Compliance Score
Abbreviation: SCM
Description: Each security setting is compared to a template of required values, based on the operating system of the host. Scores are based on general comparable risk to the CVSS vulnerability scores, then algebraically transformed in the same way as the CVSS vulnerability scores, then uniformly scaled to balance the total Security Compliance score with the other scoring components.
Individual Scores: SCM Score for a failed check = Score of the check's Security Setting Category
Host Score: Host SCM Score = SUM(SCM scores of all FAILed checks)
Notes:
Security Setting Category | Initial CVSS-Based Score | Adjusted CVSS-Based Score | Final Agency Score
File Security | 10.0 | 4.310 | 0.8006
Group Membership | 10.0 | 4.310 | 0.8006
System Access | 10.0 | 4.310 | 0.8006
Registry Keys | 9.0 | 3.879 | 0.5837
Registry Values | 9.0 | 3.879 | 0.5837
Privilege Rights | 8.0 | 3.448 | 0.4099
Service General Setting | 7.0 | 3.017 | 0.2746
Event Audit | 6.0 | 2.586 | 0.1729
Security Log | 5.0 | 2.155 | 0.1001
System Log | 3.0 | 1.293 | 0.0216
Application Log | 2.0 | 0.862 | 0.0064
NOTE: There is no SCM score for a check that cannot be completed. Only a FAIL is scored.

Name: Anti-Virus Score
Abbreviation: AVR
Description: The date on the anti-virus signature file is compared to the current date. There is no score until a grace period of six days has elapsed. After six days, a score of 6.0 is assigned for each day since the last update of the signature file, starting with a score of 42.0 on day 7.
Individual Scores: Not applicable
Host Score: Host AVR Score = (IF Signature File Age > 6 THEN 1 ELSE 0) * 6.0 * Signature File Age
Notes: None

Name: Standard Operating Environment Compliance Score
Abbreviation: SOE
Description: Each product in the Standard Operating Environment is required and must be at the approved version. SOE Compliance scoring assigns a distinct score to each product that is either missing or has an unapproved version. Currently, each product has an identical score of 5.0. There are 19 products in the SOE, so a workstation with no correctly installed SOE products would score 19 * 5.0 = 95.0.
Individual Scores: Product SOE Score = 5.0 (for each product not at the approved version)
Host Score: Host SOE Score = SUM(SOE product scores)
Notes: Potential enhancement: Add Unapproved Software as a separate scoring component, adding five points for each unapproved software product detected. Product SOE Score = 5.0 (for each unapproved software product or version); Host SOE Score = SUM(SOE product scores).
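To make the sample formulas concrete, the short sketch below computes the sensor-derived components for a single host (VUL, PAT, AVR, and SOE) exactly as defined above. The findings fed into it are hypothetical, and the SCM component would follow the same pattern by summing the category scores of failed checks.

```python
def vul_score(cvss_scores):
    """Sum of 0.01 * CVSS^3 over all detected vulnerabilities."""
    return sum(0.01 * cvss ** 3 for cvss in cvss_scores)

# Per-patch scores keyed by patch risk level, as in the PAT table above.
PAT_SCORES = {"Critical": 10.0, "High": 9.0, "Medium": 6.0}

def pat_score(missing_patch_levels):
    """Sum of risk-level scores over all incompletely installed patches."""
    return sum(PAT_SCORES[level] for level in missing_patch_levels)

def avr_score(signature_age_days):
    """6.0 points per day of signature-file age once the six-day grace period has passed."""
    return 6.0 * signature_age_days if signature_age_days > 6 else 0.0

def soe_score(noncompliant_products):
    """5.0 points per SOE product that is missing or at an unapproved version."""
    return 5.0 * len(noncompliant_products)

# Hypothetical findings for one workstation.
host_total = (
    vul_score([9.3, 4.3, 2.1])            # three detected vulnerabilities
    + pat_score(["Critical", "Medium"])    # two patches not fully installed
    + avr_score(7)                         # signature file 7 days old -> 42.0
    + soe_score(["ProductA"])              # one SOE product out of compliance
)
print(round(host_total, 2))
```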
Password Age Scores

Name: User Password Age Score
Abbreviation: UPA
Description: By comparing the date each user password was changed to the current date, user account passwords not changed in more than 60 days are scored one point for every day over 60, unless:
- The user account is disabled, or
- The user account requires two-factor authentication for login.
Individual Scores: UPA Score = (IF PW Age > 60 THEN 1 ELSE 0) * 1.0 * (PW Age – 60)
Host Score: Same
Notes: If the date of the last password reset cannot be determined, e.g., if the user account has restrictive permissions, then a flat score of 200 is assigned.

Name: Computer Password Age Score
Abbreviation: CPA
Description: By means of Group Policy Objects (GPOs), workstations and servers refresh their computer account passwords on a fixed cycle (the server refresh is set to 30 days). By comparing the date the password was changed to the current date, a score of 1.0 is assigned for each day over 30 since the last password refresh, unless the computer account is disabled.
Individual Scores: CPA Score = (IF PW Age > 30 THEN 1 ELSE 0) * 1.0 * (PW Age – 30)
Host Score: Same
Notes: None

Incomplete Reporting Scores

Name: SMS Reporting
Abbreviation: SMS
Description: SMS Reporting monitors the health of the SMS client agent that is installed on every Windows host. This agent independently reports the following types of information:
- Hardware inventory data, e.g., installed memory and serial number
- Software inventory data, i.e., EXE files
- Patch status, i.e., which patches are applicable to the host and the installation status of those patches
The SMS Reporting Score serves as a surrogate measure of risk to account for unreported status. Error codes have been added to SMS to assist in identifying the reason that a client is not reporting. For each host, its error codes, if any, are examined to determine the score. If an error code has the form 1xx or 2xx, a score is assigned to the host. The score is a base of 100 points plus 10 points for every day since the last correct report (i.e., no 1xx or 2xx error codes).
Individual Scores: Error codes 1xx or 2xx
Host Score: Host SMS Score = (IF Error Code = 1xx/2xx THEN 1 ELSE 0) * (100.0 + 10.0 * (SMS Reporting Age))
Notes: SMS is necessary to score each of the following other scoring components:
- Patch
- Anti-Virus
- SOE Compliance
If there is a score for SMS Reporting, the scores for those three scoring components are set to 0.0, since any residual SMS data is not reliable.

Name: Vulnerability Reporting
Abbreviation: VUR
Description: Vulnerability Reporting measures the age of the most recent vulnerability scan of a host. This scan is conducted from outside the host rather than by an agent. It is therefore possible that a host may not have recent scan information for one of the following reasons:
- The host was powered off when the scan was attempted
- The host's IP address was not included in the range of the scan
- The scanner does not have sufficient permissions to conduct the scan on that host
The date of the most recent scan is used as a base date and compared to the current date. There is a conceptual grace period of two consecutive scans. Operationally, each host is scanned for vulnerabilities every seven days; therefore, a grace period of 15 days is allowed for VUR. After this period, a score of 5.0 is assigned for each subsequent missed scan.
Individual Scores: VUR Age
Host Score: Host VUR Score = (IF VUR Age > 15 THEN 1 ELSE 0) * 5.0 * FLOOR((VUR Age – 15) / 7)
Notes: If a host has never been scanned, e.g., the host is new on the network, the current date is used as the base date.
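The password-age and reporting-age components share a "days over a threshold" pattern. The minimal sketch below (hypothetical input values) encodes the UPA and VUR formulas given above; the Security Compliance Reporting score described next uses the same missed-scan logic with a 15-day scan interval and a 30-day grace period.

```python
import math

def upa_score(pw_age_days, disabled=False, two_factor=False):
    """User password age: 1 point per day beyond 60, unless the account is exempt."""
    if disabled or two_factor or pw_age_days <= 60:
        return 0.0
    return 1.0 * (pw_age_days - 60)

def vur_score(days_since_last_scan):
    """Vulnerability reporting: 5 points per missed weekly scan after a 15-day grace period."""
    if days_since_last_scan <= 15:
        return 0.0
    return 5.0 * math.floor((days_since_last_scan - 15) / 7)

print(upa_score(75))   # 15.0: password is 15 days past the 60-day threshold
print(vur_score(31))   # 10.0: two scan cycles missed beyond the grace period
```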
Name: Security Compliance Reporting
Abbreviation: SCR
Description: Security Compliance Reporting measures the age of the most recent security compliance scan of a host. This scan is conducted from outside the host rather than by an agent. It is therefore possible that a host may not have recent scan information for one of the following reasons:
- The host was powered off when the scan was attempted
- The host's IP address was not included in the range of the scan
- The scanner does not have sufficient permissions to conduct the scan on that host
The date of the most recent scan is used as a base date and compared to the current date. There is a conceptual grace period of two consecutive scans. Operationally, each host is scanned for security compliance every 15 days; therefore, a grace period of 30 days is allowed for SCR. After this period, a score of 5.0 is assigned for each subsequent missed scan.
Individual Scores: SCR Age
Host Score: Host SCR Score = (IF SCR Age > 30 THEN 1 ELSE 0) * 5.0 * FLOOR((SCR Age – 30) / 15)
Notes: If a host has never been scanned, e.g., the host is new on the network, the current date is used as the base date.

Acronyms

AD  Active Directory
ADC  Active Directory Computer
ADU  Active Directory User
AVR  Anti-Virus
CAG  Consensus Audit Guidelines
CAESARS  Continuous Asset Evaluation, Situational Awareness, and Risk Scoring
CCE  Common Configuration Enumeration
CERT  Computer Emergency Response Team
CI  Configuration Item
CIO  Chief Information Officer
CIS  Center for Internet Security
CISO  Chief Information Security Officer
COTS  Commercial Off-The-Shelf
CPE  Common Platform Enumeration
CSAM  Cyber Security Assessment and Management
CSC  Critical Security Control
CVE  Common Vulnerabilities and Exposures
CWE  Common Weakness Enumeration
DHS  Department of Homeland Security
DBMS  Database Management System
DISA  Defense Information Systems Agency
DOJ  Department of Justice
DOS  Department of State
ESB  Enterprise Service Bus
FDCC  Federal Desktop Core Configuration
FIPS  Federal Information Processing Standard
FISMA  Federal Information Security Management Act
FNS  Federal Network Security
HTML  Hypertext Markup Language
IA  Information Assurance
ID  Identifier
ID/R  Intrusion Detection and Response
IDS  Intrusion Detection System
IRS  Internal Revenue Service
ISSO  Information System Security Officer
IT  Information Technology
NIST  National Institute of Standards and Technology
NAC  Network Admission Control
NSA  National Security Agency
NVD  National Vulnerability Database
OMB  Office of Management and Budget
OS  Operating System
OVAL  Open Vulnerability and Assessment Language
PAT  Patch
P.L.  Public Law
POA&M  Plan of Action and Milestones
PWS  Performance Work Statement
RAT  Router Audit Tool
RBD  Risk-Based Decision
RFI  Request for Information
RFP  Request for Proposal
RMF  Risk Management Framework
SANS  SysAdmin, Audit, Network, Security (Institute)
SCAP  Security Content Automation Protocol
SCM  Security Compliance
SCPMaR  Security Compliance Posture Monitoring and Reporting
SCR  Security Compliance Reporting
SMS  Systems Management Server
SOA  Service Oriented Architecture
SOE  Standard Operating Environment
SOW  Statement of Work
SP  Special Publication
SQL  Structured Query Language
SRR  Security Readiness Review
SSH  Secure Shell
STIG  Security Technical Implementation Guide
SwA  Software Assurance
VPN  Virtual Private Network
VUL  Vulnerability
VUR  Vulnerability Reporting
WAN  Wide Area Network
WS  Web Service
XCCDF  Extensible Configuration Checklist Description Format
XML  Extensible Markup Language