
A Cyber Engineering Primer: Vulnerability Management Lifecycle

Part Three in a Series

According to the SANS Institute, “Vulnerability management is the process in which vulnerabilities in IT are identified and the risks of these vulnerabilities are evaluated. This evaluation leads to correcting the vulnerabilities and removing the risk or a formal risk acceptance by the management of an organization.”

At Coalfire, we consider vulnerability management to be a continuous process, as shown in the illustration. The frequency and speed at which an organization must move through this lifecycle are dictated by its policies and underlying compliance needs. Some compliance frameworks require the cycle annually, others quarterly, but the best practice is to run it continuously.

In this post, we will break down each step of the cycle.

Discover – Proper discovery procedures using automated tools allow for an accurate accounting of network-connected devices and their functions. Without a mature discovery process, an organization can miss vulnerabilities on hosts that are not being tracked. The number one issue we find when assessing organizations’ vulnerability management programs is inventory discrepancies.

To ensure that all IPs are enumerated, network scanners should be configured to scan entire subnets or use agent-based approaches. In some situations, organizations using Infrastructure as a Service (IaaS) cloud services can use their management consoles to identify provisioned assets.
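
As a rough sketch of what automated discovery can look like, the Python snippet below shells out to nmap for a ping sweep of each subnet and collects the hosts that respond. It assumes nmap is installed and on the PATH, and the subnet ranges are placeholders to substitute with your own.

```python
import subprocess

# Placeholder subnets -- substitute your own ranges.
SUBNETS = ["10.0.0.0/24", "10.0.1.0/24"]

def discover_hosts(subnet: str) -> list[str]:
    """Run an nmap ping sweep (-sn) and return responding IPs."""
    result = subprocess.run(
        ["nmap", "-sn", "-oG", "-", subnet],
        capture_output=True, text=True, check=True,
    )
    hosts = []
    for line in result.stdout.splitlines():
        # Greppable output lists live hosts as "Host: <ip> (...) Status: Up"
        if line.startswith("Host:") and "Status: Up" in line:
            hosts.append(line.split()[1])
    return hosts

if __name__ == "__main__":
    for subnet in SUBNETS:
        for ip in discover_hosts(subnet):
            print(ip)
```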

These results can be used to improve an existing inventory or build a new one. Analysts compare the results against the system inventory to identify deltas. Inventory deltas can run in either direction: if a device exists in the scan results but not in the inventory, it may be a rogue device; if a device exists in the inventory but not the scan results, it may have been decommissioned. (Other possibilities include a target list that was not broad enough, network access restrictions that blocked the scan traffic, or a system that does not respond to ping sweeps and port scans.) A passive scanner monitoring network traffic can help identify systems that do not respond to discovery attempts made by active scanners.
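
The delta analysis itself is simple set arithmetic. A minimal illustration, using hypothetical IP sets for the scan results and the inventory:

```python
# Hypothetical inputs: IPs seen by the scanner vs. IPs in the inventory.
scanned = {"10.0.0.5", "10.0.0.9", "10.0.0.17"}
inventory = {"10.0.0.5", "10.0.0.9", "10.0.0.23"}

# In scan results but not inventory: possible rogue devices.
possible_rogues = scanned - inventory

# In inventory but not scan results: possibly decommissioned,
# out of scan scope, blocked by network restrictions, or unresponsive.
possibly_stale = inventory - scanned

print("Investigate as rogue:", sorted(possible_rogues))
print("Investigate as stale:", sorted(possibly_stale))
```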

It is important to review and reconfigure discovery settings at each iteration; when it comes to discovery settings, the broader the better. Finally, analysts need to ensure that they are discovering more than just IPs. Credentialed scans provide valuable information about databases, installed software, web applications, and ports/protocols/services.

Prioritize – The second step of the vulnerability management lifecycle is Prioritize. This step centers on identifying high-priority systems, which typically are externally facing, lack fault tolerance, or store sensitive information such as customer data, Personally Identifiable Information (PII), or Protected Health Information (PHI). The key is not to chase vulnerabilities on low-priority systems while high-impact systems remain vulnerable. Every organization has limited personnel bandwidth and must account for this when planning remediation efforts. This step is part of the larger organizational risk assessment and should include performing a Business Impact Analysis, assigning values to high-value targets, documenting the results, and running tabletop exercises for incident response.
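
One lightweight way to operationalize prioritization is a scoring function over the attributes called out above: external exposure, fault tolerance, and data sensitivity. The weights and asset records in this sketch are purely illustrative, not a prescribed model.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    external_facing: bool  # reachable from the internet
    fault_tolerant: bool   # has redundancy/failover
    sensitive_data: bool   # stores customer data, PII, or PHI

def priority_score(asset: Asset) -> int:
    """Toy additive score; the weights are illustrative only."""
    score = 0
    if asset.external_facing:
        score += 3
    if not asset.fault_tolerant:
        score += 2
    if asset.sensitive_data:
        score += 3
    return score

assets = [
    Asset("web-frontend", True, True, False),
    Asset("patient-db", False, False, True),
]
for a in sorted(assets, key=priority_score, reverse=True):
    print(a.name, priority_score(a))
```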

Assess – The assessment step involves enumerating vulnerabilities through automated scans. To have a successful vulnerability management program, you must continuously ensure both the breadth and the depth of these scans. Breadth is achieved by scanning every asset in your environment; this includes acquiring dedicated tools to scan web applications, databases, and source code, and to evaluate system compliance against baseline standards. Depth is achieved by providing administrator-level credentials and verifying that they succeed in each scan. We often see credentials supplied in a scan policy fail because of locked accounts, insufficient permissions, or ports such as 22, 139, and 445 being closed on target systems. Successful authentication is one of the most important parts of this process; the difference in quality between the results of an uncredentialed and a credentialed scan is night and day. Another component of depth is ensuring that the tools are updated prior to each scan and that all non-destructive checks are enabled. Most scan tools release updated definitions daily to include checks for recently released patches and vulnerabilities.
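
A quick pre-flight check can catch many credentialed-scan failures before the scan even runs. This sketch tests whether the authentication ports mentioned above (22 for SSH, 139/445 for SMB) accept TCP connections; the host list is a placeholder.

```python
import socket

# Ports the scanner typically needs for credentialed checks:
# 22 (SSH), 139/445 (SMB).
AUTH_PORTS = [22, 139, 445]

def open_ports(host: str, ports: list[int], timeout: float = 2.0) -> list[int]:
    """Return the subset of ports accepting TCP connections."""
    reachable = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                reachable.append(port)
        except OSError:
            pass  # closed, filtered, or timed out
    return reachable

for host in ["10.0.0.5", "10.0.0.9"]:  # placeholder hosts
    ports = open_ports(host, AUTH_PORTS)
    if not ports:
        print(f"{host}: no auth ports open -- credentialed checks will fail")
    else:
        print(f"{host}: reachable on {ports}")
```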

Report – It is important to know your audience when building these reports. At least three different reports should be built from each Assess phase. Your scan tool should be able to export human-readable output for an executive report; executive reports normally include trending data and high-level security posture. Administrators will want more detailed reports on individual systems and their patching/compliance status. Publicly available or in-house parsing tools may provide better reporting options by working from the XML results. Finally, the security team needs metrics on the breadth and depth of scan coverage, plus the detail required to track remediation efforts.
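
As one example of working from XML results, this sketch tallies findings by severity from a Nessus-style .nessus export; the filename is hypothetical, and the element and attribute names will differ for other scanners' schemas.

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Severity labels for Nessus-style exports (0=Info .. 4=Critical).
SEVERITY = {0: "Info", 1: "Low", 2: "Medium", 3: "High", 4: "Critical"}

def summarize(path: str) -> Counter:
    """Count findings per severity across all hosts in a .nessus file."""
    counts = Counter()
    root = ET.parse(path).getroot()
    for host in root.iter("ReportHost"):
        for item in host.iter("ReportItem"):
            counts[SEVERITY[int(item.get("severity", 0))]] += 1
    return counts

if __name__ == "__main__":
    for sev, n in summarize("latest_scan.nessus").most_common():
        print(f"{sev}: {n}")
```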

Showing progress or regression in the environment can help identify issues in the development and/or operations and maintenance of a system. For example, when vulnerabilities depend on vendor fixes and the vendor is not releasing updates, that can drive business decisions to move the affected systems elsewhere in the environment, invest in additional protections, or transition to different technologies.
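
Even a crude month-over-month comparison of severity counts can surface progress or regression at a glance. The counts below are hypothetical.

```python
# Hypothetical severity counts from two consecutive monthly scans.
last_month = {"Critical": 4, "High": 12, "Medium": 31, "Low": 57}
this_month = {"Critical": 1, "High": 15, "Medium": 28, "Low": 55}

# Positive delta = regression; negative delta = progress.
for sev in ["Critical", "High", "Medium", "Low"]:
    delta = this_month[sev] - last_month[sev]
    trend = "regression" if delta > 0 else "progress" if delta < 0 else "flat"
    print(f"{sev:8} {last_month[sev]:3} -> {this_month[sev]:3} ({delta:+d}, {trend})")
```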

Remediate – Remediation efforts should be based on the prioritized findings and tracked via a Plan of Action and Milestones (POA&M) or a similar method. Consider low-hanging fruit that appears on many devices, ease of fix and administration, severity of the finding, age of the finding, location, and criticality of the device. Due dates can be organizationally defined or driven by regulation. A good starting point is to remediate Highs within 30 days – which means before your next monthly scan – Moderates within 90 days, and Lows when time allows, 365 days maximum. Whenever possible, patches and configuration changes should be tested in a dev or test environment before being pushed to production; this avoids operational impacts and downtime.
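
Due-date tracking is easy to automate once the SLAs are defined. This sketch applies the illustrative 30/90/365-day windows above to a hypothetical finding record.

```python
from datetime import date, timedelta

# Illustrative SLAs from above: Highs 30 days, Moderates 90, Lows 365.
SLA_DAYS = {"High": 30, "Moderate": 90, "Low": 365}

def due_date(severity: str, discovered: date) -> date:
    """Compute the POA&M due date from discovery date and severity."""
    return discovered + timedelta(days=SLA_DAYS[severity])

# Hypothetical finding record.
finding = {"id": "VULN-042", "severity": "High", "discovered": date(2024, 3, 1)}
print(finding["id"], "due", due_date(finding["severity"], finding["discovered"]))
```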

Verify – During this step we confirm that remediation efforts have succeeded. This step often overlaps with the next Assess phase, one month after the first set of scans. Rescanning with the same breadth and depth is critical to ensuring that the vulnerabilities are no longer present in the environment. After verification, tickets should be closed and archived, and the finding tracker should be updated and retained as reference material for similar vulnerabilities in the future. Any justifications for false positives and operational requirements should be reviewed regularly to make sure they are still valid.
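
Verification again reduces to a set comparison, this time keyed on individual findings rather than hosts. The (host, finding ID) pairs below are hypothetical.

```python
# Hypothetical finding keys from two scan cycles: (host, finding ID).
previous = {("10.0.0.5", "CVE-2023-1234"), ("10.0.0.9", "CVE-2023-5678")}
current = {("10.0.0.9", "CVE-2023-5678")}

remediated = previous - current    # confirmed fixed -- close the tickets
persisting = previous & current    # still open -- keep tracking
new_findings = current - previous  # introduced since the last cycle

print("Close:", sorted(remediated))
print("Still open:", sorted(persisting))
print("New:", sorted(new_findings))
```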

In summary, the vulnerability management process is important not just for meeting regulatory standards, but as a basic building block of every security program. An effective vulnerability management program enables an organization to mitigate risk and have higher confidence in the integrity of its infrastructure and the security of its systems and data.
