Vulnerability scanning plays an important role in most enterprise threat and vulnerability management programmes: it helps internal security teams identify vulnerabilities and can also verify control performance. Associated vulnerability scoring systems, such as the Common Vulnerability Scoring System (CVSS), have gained widespread industry adoption because they are simple to understand and usually produce repeatable results.
Adopting vulnerability scoring systems universally for technical risk management, as we now commonly see, can result in failures to detect, manage and respond to security defects. The main reason is that vulnerability scoring systems are, as the name suggests, good at measuring vulnerabilities but poorly suited to handling weaknesses.
What is the difference between a vulnerability and a weakness? The MITRE Corporation defines a weakness simply as “a type of mistake in software that, in proper conditions, could contribute to the introduction of vulnerabilities within that software”. This definition can be expanded to the general notion that “weaknesses are errors that can lead to vulnerabilities”, making it applicable not just to software but to other assets, including systems, networks and controls.
CVSS v3, for example, cannot really be used to measure the characteristics and severity of a weakness that has no currently defined vulnerability. We encounter this problem routinely when customers request CVSS ratings for application penetration tests, where weaknesses are usually more evident.
How weaknesses are managed alongside vulnerabilities is critical to the success of technical risk management programmes. It is common to see weaknesses inadequately assessed, measured and remediated – often they are overlooked or fall off the radar completely as overstretched security teams prioritise remediation of critical and high-severity vulnerabilities with verified scores.
Let’s consider BlueKeep (CVE-2019-0708). If we ran a perimeter vulnerability scan today that identified an unpatched RDP service, the finding would be scored by CVSS as 9.8, or ‘Critical’. But how would the vulnerability scanner have reported the exposure of the same RDP service prior to BlueKeep’s public disclosure? Potentially in several different ways, but more than likely it would have misclassified the exposure, despite it requiring immediate treatment as an obvious weakness given RDP’s poor security reputation alone.
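The 9.8 figure is not arbitrary: the CVSS v3.1 base score formula is published, and for BlueKeep’s vector (AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H) the scope-unchanged calculation can be reproduced directly. A minimal sketch, using the standard metric weights:

```python
# CVSS v3.1 base score, scope-unchanged case, for BlueKeep (CVE-2019-0708):
# vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
AV_N, AC_L, PR_N, UI_N = 0.85, 0.77, 0.85, 0.85  # exploitability metric weights
C_H = I_H = A_H = 0.56                           # impact metric weights (High)

def roundup(x: float) -> float:
    """CVSS v3.1 'Roundup': smallest value to one decimal place >= x."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

iss = 1 - (1 - C_H) * (1 - I_H) * (1 - A_H)      # Impact Sub-Score
impact = 6.42 * iss                              # scope unchanged
exploitability = 8.22 * AV_N * AC_L * PR_N * UI_N
base = roundup(min(impact + exploitability, 10))
print(base)  # 9.8
```

Note that every input to this formula describes a verified vulnerability; there is simply nowhere in the calculation to express “exposed service with no known CVE yet”, which is exactly the gap this article is concerned with.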
Another example where problems arise is with unsupported systems on which vulnerabilities have not yet surfaced. The weakness here is obvious, but an unsupported system alone cannot be systematically scored. We often find that vulnerability scanners fudge high CVSS values to compensate; perhaps this is a pragmatic, qualitative approach to handling weaknesses that cannot be measured. If this qualitative approach is not applied consistently to all weaknesses, however, unidentified gaps and inconsistencies in the assurance activity are inevitable.
Both examples concern vulnerability scanners, which are intrinsically dependent on vulnerability scoring, but any service or security process that uses vulnerability scoring at its core is at risk of mishandling weaknesses.
Review any tools and internal processes that rely on vulnerability scoring to assess security defects. Understand how they identify and interpret the severity of weaknesses alongside vulnerabilities. Remember that CVSS assumes a vulnerability has already been discovered and verified; anything outside this scope may be misrepresented or missed entirely.
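One way to make that review concrete is to check whether the triage process has an explicit path for findings that CVSS cannot score. As a sketch (all names and categories here are hypothetical, not a prescribed process), findings without a verified vulnerability could be routed to a qualitative, expert-rated track rather than being forced into a numeric score:

```python
# Hypothetical triage sketch: findings with a verified vulnerability get a
# CVSS score; weaknesses without one go to a qualitative track instead of
# having a numeric score invented for them.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    title: str
    cvss_vector: Optional[str] = None  # present only for verified vulnerabilities

def triage(finding: Finding) -> str:
    if finding.cvss_vector:
        return "quantitative"  # score with CVSS as normal
    return "qualitative"       # expert-rated weakness; no fudged CVSS value

vuln = Finding("Unpatched RDP service",
               "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
weak = Finding("Unsupported OS with no current CVE")
print(triage(vuln), triage(weak))  # quantitative qualitative
```

The design point is simply that the qualitative path exists and is applied consistently, so weaknesses are tracked rather than dropped.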
Do not dismiss qualitative approaches in your threat and vulnerability management programme – they can be invaluable in gaining a comprehensive view of technical security issues and assurance. Although qualitative assessments also attract bad press, they can be pragmatic, particularly when conducted by subject matter experts.
A varied programme of technical assessments should provide a broader view of priorities, both short and longer term. Ensure your assurance programme delivers across all your assurance objectives by reviewing vendor assessment methodologies carefully. For example, high-quality penetration tests should provide context and visibility of application and system weaknesses over the longer term, not just a snapshot of verified vulnerabilities.