Beyond Vulnerability Management – What Can I CVE?


Vulnerability Treadmill

The reactive nature of vulnerability management, combined with delays from policies and processes, puts security teams under constant strain. Capacity is limited, and patching everything right away is a struggle. Analysis of our Vulnerability Operations Center (VOC) dataset identified 1,337,797 unique findings (security issues) across 68,500 unique customer assets. Of these, 32,585 are distinct CVEs, and 10,014 of those carry a CVSS score of 8 or higher. External assets account for 11,605 distinct CVEs and internal assets for 31,966. With this volume of CVEs, it is hardly surprising that some go unpatched and lead to compromise.

Why are we stuck in this situation, what can we do, and is there a better approach?

We examine the state of vulnerability reporting, how to prioritize vulnerabilities using threat and exploitation intelligence, and how statistical probability can help us reason about risk in plain terms. Finally, we consider solutions that minimize the impact of vulnerabilities while giving management teams flexibility in responding to a crisis. This should give a good overview, but if you want the full story, you can find it in our annual report, the Security Navigator.

What Can I CVE?

Western countries and organizations track and evaluate vulnerabilities using the Common Vulnerabilities and Exposures (CVE) list and the Common Vulnerability Scoring System (CVSS), overseen by US government-funded programs run by MITRE and NIST. By September 2024, the CVE program, active for 25 years, had issued over 264,000 CVEs, a figure that had grown to approximately 290,000 by April 15, 2025, including those marked "rejected" or "deferred."

NIST's National Vulnerability Database (NVD) relies on CVE Numbering Authorities (CNAs) to record CVEs and provide initial CVSS assessments. Disclosure of serious vulnerabilities is complicated by differences of opinion between researchers and vendors regarding impact, relevance, and accuracy, which affects the broader community (1, 2).

By April 2025, the NVD had accumulated a backlog of more than 24,000 CVEs (3, 4), a consequence of bureaucratic delays dating back to March 2024, when CVE enrichment was temporarily halted even as vulnerability reports kept arriving, starkly exposing the fragility of the system. The backlog created by that pause has still not been cleared.

On April 15, 2025, MITRE announced that the US Department of Homeland Security would not renew its contract with MITRE, directly affecting the CVE program (15). This created considerable uncertainty about the future of CVE and its implications for cybersecurity practitioners. Fortunately, funding for the CVE program has since been extended thanks to a strong response from the community and industry (16).

CVE and NVD are not the only sources of vulnerability intelligence. Many organizations, including ours, maintain independent capabilities that track far more vulnerabilities than MITRE's CVE program and NIST's NVD.

Since 2009, China has run its own vulnerability database, the CNNVD (5). While it can be a valuable technical resource (6, 7), political barriers make collaboration unlikely. Furthermore, not all vulnerabilities are disclosed immediately, and some are exploited without ever being detected, creating blind spots.

In 2023, Google's Threat Analysis Group (TAG) and Mandiant identified 97 zero-day exploits affecting mobile devices, operating systems, browsers, and other applications. Meanwhile, only about 6% of the vulnerabilities in the CVE dictionary have ever been exploited (8), and a 2022 study showed that half of organizations patch 15.5% or fewer of their vulnerabilities each month (9).

CVE is important for security managers, but it is an incomplete, largely voluntary system that is neither globally regulated nor universally adopted.

This blog also explores how to reduce our day-to-day dependency on it.

Prioritizing based on threats

Despite its shortcomings, the CVE system still offers valuable intelligence about vulnerabilities that can affect our security. However, with so many CVEs to address, we need to prioritize those most likely to be exploited by threat actors.


The Exploit Prediction Scoring System (EPSS), developed by a Special Interest Group (SIG) at the Forum of Incident Response and Security Teams (FIRST) (10), helps predict the likelihood that a vulnerability will be exploited in the wild. EPSS intelligence lets security managers either patch as many CVEs as possible for broad coverage, or focus on critical vulnerabilities to maximize efficiency and prevent exploitation. Both approaches have advantages and disadvantages.

Two datasets are required to demonstrate the trade-off between coverage and efficiency: one representing vulnerabilities that potentially need patching (the VOC dataset), and another representing actively exploited vulnerabilities, drawing on the CISA KEV catalog (10), ethical hacking findings, and data from the CERT Vulnerability Intelligence Watch service (12).

Security Navigator 2025 is here – Download now

The newly released Security Navigator 2025 provides important insights into current digital threats, documenting 135,225 incidents and 20,706 confirmed breaches. More than just a report, it serves as a guide to navigating a safer digital landscape.

What is inside?

  • In-depth analysis: Statistics from our CyberSOCs, vulnerability scans, pen tests, CERTs, and Cy-X and ransomware observations.
  • Forward-looking: Security predictions and stories from the field.
  • Security deep-dives: Stay informed about hacktivist activity and emerging trends around LLMs/generative AI.

Stay one step ahead in cybersecurity. Your essential guide awaits!

Get your copy now

An EPSS threshold is used to select the set of CVEs to patch based on the likelihood that they will be exploited in the wild. The overlap between this remediation set and the set of exploited vulnerabilities can then be used to calculate the efficiency, coverage, and effort of the selected strategy.
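As a minimal sketch of how that overlap translates into these metrics (the CVE identifiers, EPSS scores, and the 0.1 threshold below are purely illustrative, not drawn from the VOC dataset):

```python
# Sketch: coverage, efficiency, and effort for an EPSS-threshold patching strategy.
# CVE identifiers, scores, and the threshold are illustrative only.

voc_findings = {            # CVEs found in the environment, with their EPSS scores
    "CVE-2023-0001": 0.02,
    "CVE-2023-0002": 0.45,
    "CVE-2023-0003": 0.91,
    "CVE-2023-0004": 0.07,
    "CVE-2023-0005": 0.30,
}
exploited = {"CVE-2023-0002", "CVE-2023-0003", "CVE-2023-0004"}  # known-exploited set

threshold = 0.1
to_patch = {cve for cve, score in voc_findings.items() if score >= threshold}

effort = len(to_patch)                                  # how many CVEs we commit to fixing
coverage = len(to_patch & exploited) / len(exploited)   # share of exploited CVEs we fixed
efficiency = len(to_patch & exploited) / len(to_patch)  # share of our effort that mattered

print(f"effort={effort}, coverage={coverage:.0%}, efficiency={efficiency:.0%}")
```

Raising the threshold lowers effort but risks missing exploited CVEs (lower coverage); lowering it improves coverage at the cost of efficiency.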

EPSS predicts the probability that a vulnerability will be exploited somewhere in the wild, not on any particular system. That probability can, however, be "scaled up". For example, flipping one coin gives a 50% chance of heads, but flipping 10 coins gives a 99.9% chance of at least one heads. This scaling is calculated using the complement rule (13): find the probability of the desired outcome by subtracting the probability of complete failure from 1.
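Written out, the complement rule for n independent events with probabilities p_i is:

\[
P(\text{at least one event}) = 1 - \prod_{i=1}^{n} (1 - p_i)
\]

which reproduces the coin example: with ten coins the chance of no heads at all is 0.5^10, so the chance of at least one is about 99.9%.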

As FIRST explains, "EPSS can be scaled to estimate the threat to a server, subnet, or the whole enterprise by taking the predicted likelihood that individual vulnerabilities will be exploited and calculating the probability of at least one event." (14, 15)

Using the complement rule, we can similarly calculate the likelihood that at least one vulnerability from a list will be exploited.
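A minimal sketch of that calculation in Python, treating individual EPSS scores as independent probabilities (the scores below are invented for illustration):

```python
import math

def scaled_epss(epss_scores):
    """Probability that at least one listed vulnerability is exploited in the wild,
    treating individual EPSS scores as independent probabilities (complement rule)."""
    p_none = math.prod(1.0 - p for p in epss_scores)
    return 1.0 - p_none

# Illustrative values only: many low-scoring CVEs still add up quickly.
scores = [0.02] * 250        # 250 CVEs, each with a 2% predicted chance of exploitation
print(f"{scaled_epss(scores):.4f}")  # ~0.9936 -> above 99% despite low individual scores
```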

To demonstrate, we analyzed 397 vulnerabilities from VOC scan data for a managed client. As shown in the chart below, most vulnerabilities had low EPSS scores until a sharp rise around position 276. Moreover, the scaled probability of exploitation calculated with the complement rule effectively reaches 100% even if only the first 264 vulnerabilities are considered.

As the scaled EPSS curve in the chart (left) shows, the scaled probability that at least one of the CVEs under consideration will be exploited in the wild rises very rapidly as more CVEs are added. With 265 distinct CVEs under consideration, the probability that at least one of them will be exploited in the wild exceeds 99%. This level is reached before any of the individually high-EPSS vulnerabilities are even included: when the scaled EPSS value crosses 99% (position 260), the highest individual EPSS score is still below 11% (0.11).

Based on real client data about Internet-exposed vulnerabilities, this example shows how difficult prioritization becomes as the number of systems grows.

EPSS gives defenders a useful estimate of how likely a vulnerability is to be exploited in the wild, but we have seen how quickly that probability scales when multiple vulnerabilities are involved. With enough vulnerabilities in play, the combined probability becomes very real even when every individual EPSS score is low.


Like a weather forecast predicting a "chance of rain", the larger the area, the higher the likelihood of rain somewhere. Similarly, it may be practically impossible to push the probability of exploitation anywhere close to zero.

Attacker’s odds

We have identified three important truths that need to be factored into our examination of the vulnerability management process:

  • Attackers are not focused on one particular vulnerability; they aim to compromise systems.
  • Exploiting vulnerabilities is not the only way to achieve a compromise.
  • Attackers have different levels of skill and persistence.

These factors allow us to extend the EPSS probability analysis to account for the possibility that an attacker can compromise any given system, scale that up, and determine the probability of compromising at least one system on the network that grants access to the rest.

We can assume that each attacker has a certain "probability of success" against a single system, which increases with skill, experience, tooling, and time. The same probability scaling can then be applied to assess the attacker's chances of success against a broader computing environment.

Given a patient, undetected attacker, how many attempts are statistically required to breach one system that grants access to the rest of the graph? To answer this, we apply the binomial distribution, reworked into an equation for the number of attempts (16, 17).
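Solving the complement-rule expression 1 − (1 − p)^n ≥ P for n gives a form consistent with the worked examples below:

\[
n \ge \frac{\ln(1 - P)}{\ln(1 - p)}
\]

where p is the attacker's per-system success rate and P is the target overall probability of at least one successful compromise (99.99% in the examples that follow).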

This equation estimates the number of attempts an attacker at a given skill level requires. For example, if attacker A1 has a 5% (1 in 20) success rate per system, they would need to target around 180 systems to be 99.99% sure of at least one success.

Another attacker, A2, with a 10% success rate (1 in 10) needs around 88 targets to be similarly sure of at least one success, while a more skilled attacker, A3, with a 20% success rate (1 in 5) needs only around 42 targets.

These are probabilities: an attacker may succeed on the first attempt or need many more to reach the expected success rate. To gauge the real-world impact, we consulted a senior enterprise intrusion tester, who estimated the success rate against any given Internet-connected target at about 30%.

Assuming that a skilled attacker's chance of compromising a single machine ranges from 5% to 40%, we can estimate the number of targets needed to all but guarantee one successful compromise.
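A short sketch of that estimate in Python, using the relationship above (the 99.99% confidence level follows the earlier examples; the success rates span the assumed 5%–40% range):

```python
import math

def targets_needed(success_rate, confidence=0.9999):
    """Number of independent targets an attacker must attempt so that the
    overall probability of at least one successful compromise reaches `confidence`."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - success_rate))

for rate in (0.05, 0.10, 0.20, 0.30, 0.40):
    print(f"{rate:.0%} per-system success rate -> ~{targets_needed(rate)} targets")
# Prints roughly: 5% -> 180, 10% -> 88, 20% -> 42, 30% -> 26, 40% -> 19
```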

The implications are striking. With only 100 potential targets, even a moderately skilled attacker is almost certain to succeed at least once. In a typical company, that single compromise often provides access to the wider network, and companies usually have thousands of computers to consider.
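As a quick check on that claim, take a moderately skilled attacker with an assumed 10% per-system success rate across 100 targets:

\[
1 - (1 - 0.1)^{100} \approx 0.99997
\]

a better than 99.99% chance of at least one successful compromise.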

Rethinking vulnerability management

Looking to the future, we need to imagine environments and architectures that are not undermined by the compromise of individual systems. In the short term, we argue that the approach to vulnerability management needs to change.

Current approaches to vulnerability management are rooted in the name itself: a focus on "vulnerabilities" (as defined by CVE, CVSS, EPSS, misconfigurations, errors, and so on) and on their "management." However, because we cannot control the volume, velocity, or criticality of CVEs, we are forever reacting to new, chaotic intelligence.

EPSS helps prioritize vulnerabilities that are likely to be exploited in the wild. These represent real threats, and responding to them puts us in reactive mode. The mitigation acts on vulnerabilities, but what our response truly counters is the threat. This process should therefore be called threat mitigation.

As noted above, it is statistically impossible to effectively counter the threats facing a large enterprise simply by reacting to vulnerability intelligence. Reducing risk is the best we can do. Cyber risk arises from a threat exploiting a vulnerability in a targeted system or asset, together with the potential impact of such an attack. By addressing risk, we open up more areas that are under our control to manage and mitigate.

Threat Mitigation

Threat mitigation is a dynamic, continuous process of identifying threats, assessing their relevance, and acting to mitigate them. Responses include patching, reconfiguring, filtering, adding compensating controls, or removing vulnerable systems. EPSS is a valuable tool here, complementing other threat and vulnerability intelligence.


However, because of how probabilities scale, EPSS is less useful in large internal environments. Since it focuses on vulnerabilities likely to be exploited "in the wild," EPSS is most applicable to systems exposed directly to the Internet. Threat mitigation efforts should therefore primarily target externally exposed systems.

Risk reduction

Cyber risk is the product of threat, vulnerability, and impact. Threats are largely beyond our control, and patching individual vulnerabilities in a large environment does not meaningfully reduce the probability of compromise. Risk reduction should therefore focus on three key efforts:

  1. Reduce the attack surface: Since the probability of compromise increases with scale, it can be lowered by shrinking the attack surface. An important priority is identifying and removing unmanaged or unnecessary Internet-facing systems.
  2. Limit the impact: Lambert's maxim advises limiting an attacker's ability to access and traverse the "graph." This is achieved through segmentation at every level (network, authorization, applications, and data). Zero Trust architecture provides a practical reference model for this goal.
  3. Improve the baseline: Instead of reacting to specific vulnerabilities as they are reported or discovered, systematically reduce the overall number and severity of vulnerabilities, lowering the probability of compromise. This approach sets aside the acute threats of the moment in favor of long-term risk reduction, prioritizing efficiency and return on investment.

By separating threat mitigation from risk reduction, we free ourselves from the constant cycle of responding to each specific threat, adopt a more efficient and strategic approach, and free up resources for other priorities.

An efficient approach

This approach can be pursued systematically and with optimized resources. The focus shifts from "vulnerability management" to the design, implementation, and validation of resilient architectures and baseline configurations. Once security has defined these baselines, implementation and maintenance can be handed over to operational teams.

The key here is that the "trigger" to patch internal systems becomes a predefined plan, agreed with system owners, to upgrade to a new, approved baseline. This approach is far less disruptive and more efficient than constantly chasing the latest vulnerability.

Vulnerability scans remain important for maintaining an accurate asset inventory and identifying non-compliant systems. Rather than triggering ad hoc reactions, they support the existing, standardized processes.

Shaping the future

The overwhelming barrage of randomly discovered and reported vulnerabilities, represented by CVE, CVSS, and EPSS, strains our people, processes, and technology. We have been approaching vulnerability management this way for over 20 years, with only moderate success.

It's time to rethink how we design, build, and maintain our systems.

New Strategy Template

Key factors to consider in your security strategy for 2030 and beyond:

  • Start at the source
  • The human factor
    • Leverage human strengths and anticipate human weaknesses.
    • Secure support from senior management and executives.
    • Be an enabler, not a blocker.

  • Threat-based decision making
    • Learn from incidents and focus on what is actually being exploited.
    • Strengthen remediation strategies based on capability.
  • Threat modeling and simulation
    • Use threat models to understand potential attack paths.
    • Perform ethical hacking to test your environment against real threats.
  • System architecture and design
    • Apply threat models and simulations to validate assumptions about new systems.
    • Systematically reduce the attack surface.
    • Strengthen defense in depth by reviewing existing systems.
    • Treat SASE and Zero Trust as strategies, not just technologies.
  • Secure by demand / secure by default
    • Implement formal policies to embed security in the corporate culture.
    • Ensure your vendors and suppliers run active security improvement programs.

There's more to this story. This is just an excerpt from the vulnerability research in the Security Navigator 2025. Head to the download page to get the full report and learn more about vulnerabilities in 2025, how to regain control, how vulnerability scanning findings compare across industries, and how factors such as generative AI come into play.

Note: This article was expertly written and contributed by Wicus Ross, Senior Security Researcher at Orange Cyberdefense.
