A Cyber Engineering Primer: Vulnerability Management Lifecycle

June 07, 2018, Ben Scudera, Cyber Engineering Consultant, Coalfire

Part Three in a Series

According to the SANS Institute, “Vulnerability management is the process in which vulnerabilities in IT are identified and the risks of these vulnerabilities are evaluated. This evaluation leads to correcting the vulnerabilities and removing the risk or a formal risk acceptance by the management of an organization.”

At Coalfire, we consider vulnerability management to be a continuous process, as shown in the illustration. The frequency and speed at which an entity must proceed through this lifecycle is dictated by the organization’s policies and its underlying compliance needs. Some compliance frameworks require the cycle annually and others quarterly, but best practice is to run it continuously.

In this post, we will break down each step of the cycle.

Discover – Proper discovery procedures using automated tools allow for an accurate accounting of network-connected devices and their functions. Without a mature discovery process, an organization might miss vulnerabilities on hosts that are not being tracked. The number one issue we find when assessing organizations’ vulnerability management programs is inventory discrepancies.

To ensure that all IPs are enumerated, network scanners should be configured to scan entire subnets or use agent-based approaches. In some situations, organizations using Infrastructure as a Service (IaaS) cloud services can use their management consoles to identify provisioned assets.
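To make the idea concrete, here is a minimal Python sketch of generating a full-subnet target list. The CIDR block is a hypothetical example; any real scanner will have its own target syntax, but the principle of enumerating the whole range rather than known hosts is the same.

```python
import ipaddress

# Enumerate every usable host address in a subnet so the scanner's
# target list covers the full range, not just hosts we already know about.
def subnet_targets(cidr: str) -> list[str]:
    network = ipaddress.ip_network(cidr, strict=False)
    return [str(host) for host in network.hosts()]

targets = subnet_targets("10.0.20.0/28")  # hypothetical subnet
print(len(targets))                       # 14 usable host addresses in a /28
print(targets[0], targets[-1])            # 10.0.20.1 10.0.20.14
```

Feeding a generated list like this into the scan policy removes the chance of a host being skipped simply because no one added it by hand.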

These results can be used to improve an existing inventory or build a new one. Analysts compare the results against the system inventory to identify deltas. Inventory deltas can go either direction – if a device exists in the scan results but not in the inventory, then it may be a rogue device. If a device exists in the inventory but not the scan results, then it may have been decommissioned. (Other possibilities include the scan’s target list was not broad enough, network access restrictions might have blocked the scan traffic, or the system itself is not responding to ping sweeps and port scans.) A passive scanner monitoring network traffic can help identify these systems that do not respond to discovery attempts made by active scanners.
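The delta comparison above can be sketched as simple set arithmetic. The IP addresses here are illustrative placeholders, not real scan data:

```python
# Compare active-scan results against the asset inventory to surface
# deltas in either direction (hypothetical IPs for illustration).
inventory = {"10.0.20.5", "10.0.20.9", "10.0.20.12"}
scan_results = {"10.0.20.5", "10.0.20.9", "10.0.20.30"}

rogue_candidates = scan_results - inventory         # scanned but not inventoried
decommission_candidates = inventory - scan_results  # inventoried but not seen

print(sorted(rogue_candidates))         # possible rogue device
print(sorted(decommission_candidates))  # decommissioned, blocked, or non-responsive
```

Each item in either delta warrants investigation before the inventory is updated; as noted above, a missing host may simply be refusing to answer active probes.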

It is important to review and reconfigure discovery settings at each reiteration. When it comes to discovery settings, the broader the better. Finally, analysts need to ensure that they are discovering more than just IPs. Credentialed scans provide valuable information about databases, installed software, web applications, and ports/protocols/services.

Prioritize – The second step of the vulnerability management lifecycle focuses on identifying high-priority systems: typically those that are externally facing, lack fault tolerance, or store sensitive information such as customer data, Personally Identifiable Information (PII), or Protected Health Information (PHI). The key is not to chase vulnerabilities on low-priority systems while high-impact systems remain vulnerable. Every organization has limited personnel bandwidth and must account for this when planning remediation efforts. This step is part of the larger organizational risk assessment and should include performing a Business Impact Analysis, assigning values to high-value targets, documenting the results, and running table-top exercises for incident response.
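One way to operationalize this weighting is a simple scoring sketch. The fields and weights below are illustrative assumptions, not a standard formula; the point is that asset criticality multiplies raw scanner severity:

```python
# Weight raw scanner severity (e.g., a CVSS base score) by asset
# criticality so remediation effort lands on high-impact systems first.
# Weights are illustrative assumptions, not a standard.
def priority_score(cvss: float, externally_facing: bool,
                   stores_sensitive_data: bool, fault_tolerant: bool) -> float:
    weight = 1.0
    if externally_facing:
        weight += 0.5
    if stores_sensitive_data:
        weight += 0.5
    if not fault_tolerant:
        weight += 0.25
    return round(cvss * weight, 2)

# A high finding on an internet-facing PII system outranks a slightly
# higher raw score on an isolated, redundant box.
print(priority_score(7.0, True, True, False))   # 15.75
print(priority_score(8.0, False, False, True))  # 8.0
```

Even a rough model like this keeps the team from spending a sprint on low-priority hosts while an exposed system waits.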

Assess – The assessment step involves enumerating vulnerabilities through automated scans. To have a successful vulnerability management program, you must continuously ensure the breadth and depth of these scans. Breadth is achieved by scanning every asset in your environment. This includes acquiring dedicated tools to scan web applications, databases, and source code, as well as evaluating system compliance against baseline standards. Depth is achieved by providing administrator-level credentials and verifying that they succeed in each scan. Often, we see credentials supplied in a scan policy that fail due to locked accounts, insufficient permissions, or ports such as 22, 139, and 445 being closed on target systems. Successful authentication is one of the most important portions of this process. The difference in the quality of results between an uncredentialed and a credentialed scan is night and day. Another component of depth is ensuring that the tools are updated prior to each scan and all non-destructive checks are enabled. Most scan definitions have new releases daily to include checks for recently released patches and vulnerabilities.

Report – It is important to know your audience when building these reports. At least three different reports should be built from each Assess phase. Your scan tool should be able to export human-readable output for an executive report. Executive reports normally include data on trending and high-level security posture. Administrators will want more detailed reports with information on systems and their patching/compliance status. Publicly available or in-house parsing tools may be able to provide better reporting options using the XML results. Finally, the security team needs visibility into the breadth and depth of scan coverage and data to track remediation efforts.
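As a sketch of the parsing approach, the snippet below reads a simplified, hypothetical scan-export format with Python's standard library. Real exports from tools such as Nessus or Qualys use much richer schemas, but the pattern of reducing XML to per-host findings for an administrator report is the same:

```python
import xml.etree.ElementTree as ET

# A simplified, hypothetical scan-export format for illustration only;
# real scanner XML schemas are considerably more detailed.
SAMPLE = """\
<scan>
  <host ip="10.0.20.5">
    <finding severity="High" plugin="OpenSSH update available"/>
    <finding severity="Low" plugin="ICMP timestamp disclosure"/>
  </host>
  <host ip="10.0.20.9">
    <finding severity="Moderate" plugin="TLS 1.0 enabled"/>
  </host>
</scan>
"""

def findings_by_host(xml_text: str) -> dict[str, list[tuple[str, str]]]:
    """Reduce scan XML to {ip: [(severity, finding name), ...]}."""
    root = ET.fromstring(xml_text)
    return {
        host.get("ip"): [(f.get("severity"), f.get("plugin"))
                         for f in host.findall("finding")]
        for host in root.findall("host")
    }

report = findings_by_host(SAMPLE)
print(report["10.0.20.5"][0])  # ('High', 'OpenSSH update available')
```

From a structure like this, each audience's report is just a different aggregation of the same parsed data.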

Showing progress or regression in the environment can help identify issues in the development or operations/maintenance of a system. For example, when remediation depends on a vendor fix that is not forthcoming, the business may decide to move the affected systems elsewhere in the environment, invest in additional protections, or transition to different technologies.

Remediate – Remediation efforts should be based on the prioritized findings and should be tracked via a Plan of Action and Milestones (POA&M) or a similar method. Consider low-hanging fruit present on many devices, ease of fix and administration, severity of the finding, age of the finding, location, and criticality of the device. Due dates can be organizationally defined or driven by regulation. A good starting point is to remediate Highs within 30 days – which means before your next monthly scan; Moderates within 90 days; and Lows when time allows, 365 days maximum. Whenever possible, patches and configuration changes should be tested in a dev or test environment before being pushed into production. This helps avoid operational impacts and downtime.
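Those starting-point SLAs can be encoded directly into a finding tracker. A minimal sketch, using the 30/90/365-day windows above:

```python
from datetime import date, timedelta

# Map severity to the remediation windows suggested above (starting
# points; your organization's policy or regulations may differ).
SLA_DAYS = {"High": 30, "Moderate": 90, "Low": 365}

def due_date(severity: str, discovered: date) -> date:
    """Compute the remediation due date from the discovery date."""
    return discovered + timedelta(days=SLA_DAYS[severity])

found = date(2018, 6, 7)
print(due_date("High", found))      # 2018-07-07
print(due_date("Moderate", found))  # 2018-09-05
```

Computing due dates this way makes aging findings easy to flag automatically rather than relying on someone to notice an overdue ticket.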

Verify – During this step we confirm that remediation efforts have succeeded. Often this step will overlap with the next Assess phase one month after the first set of scans. Rescanning with the same breadth and depth is critical to ensuring that vulnerabilities are no longer present in the environment. After this verification, tickets should be closed and saved for future reference, and the finding tracker should be updated and similarly saved as reference material for future similar vulnerabilities. Any justifications for false positives and operational requirements should be reviewed regularly to make sure they are still valid.
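Verification is essentially a diff of two scan cycles. A small sketch, keying each finding by host and vulnerability identifier (the data here is illustrative):

```python
# Diff two scan cycles to confirm which remediation efforts succeeded
# and which findings persist. Hosts and CVE IDs are illustrative.
before = {("10.0.20.5", "CVE-2017-0144"), ("10.0.20.5", "CVE-2018-1111"),
          ("10.0.20.9", "CVE-2017-0144")}
after = {("10.0.20.9", "CVE-2017-0144")}

remediated = before - after   # safe to close these tickets
persisting = before & after   # still open; escalate or re-plan
introduced = after - before   # new since the last cycle

print(sorted(remediated))
print(sorted(persisting))
```

The `remediated` set drives ticket closure, while `persisting` feeds back into the Prioritize and Remediate steps of the next cycle.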

In summary, the vulnerability management process is important not just for meeting regulatory standards, but as a basic building block of every security program. An effective vulnerability management program enables an organization to mitigate risk and have higher confidence in the integrity of its infrastructure and the security of its systems and data.

