As both threats and enterprise technology environments grow increasingly complex, there's new urgency to spot and prioritize security weaknesses across digital and operational technology (OT) environments and industrial control systems (ICS). Prioritizing vulnerabilities is a challenge everywhere, but perhaps nowhere is it more acute than in OT/ICS environments. And while newer scoring systems and frameworks promise to help, experts note that they also introduce new complexity, underscoring the urgent need for actionable guidance in vulnerability management.
For decades, vulnerability management programs rested primarily on the Common Vulnerability Scoring System (CVSS)—a universal 0–10 scale that measured the technical severity of software and firmware flaws. On the plus side, CVSS was straightforward to understand and automate, and compliance regimes quickly codified CVSS as the industry's default. Even today, most vulnerability scanning and ticketing platforms rank risks by CVSS score, driving remediation workflows across IT and OT alike.
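To see what that severity-only model looks like in practice, here is a minimal sketch of how a scanner or ticketing queue ranks findings purely by CVSS base score; the CVE IDs, scores, and asset names are placeholders:

```python
# Minimal sketch of severity-only triage: hypothetical findings
# ranked purely by CVSS base score, the model most scanners default to.
findings = [
    {"cve": "CVE-2024-0001", "cvss": 9.8, "asset": "test-vm"},
    {"cve": "CVE-2024-0002", "cvss": 7.5, "asset": "hmi-panel-3"},
    {"cve": "CVE-2024-0003", "cvss": 9.1, "asset": "historian-db"},
]

# Sort descending by severity. Note the queue says nothing about
# whether a flaw is exploitable or how much the asset matters.
for f in sorted(findings, key=lambda f: f["cvss"], reverse=True):
    print(f"{f['cve']}  CVSS {f['cvss']}  {f['asset']}")
```

The 9.8 on an expendable test VM outranks the 7.5 on an operator panel, which is exactly the blind spot the newer approaches described below try to correct.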
However, as the number of new vulnerabilities published annually soared past 40,000 in 2024, security teams found CVSS's severity-based model inadequate. For starters, only a small fraction of published CVEs are ever exploited in attacks, and many "critical" vulnerabilities prove irrelevant in specific operational contexts. This disconnect prompted renewed scrutiny of CVSS and a series of changes to how vulnerabilities are scored and prioritized.
The latest version of CVSS, CVSS 4.0, was released in late 2023 as a fundamental upgrade designed to deliver greater precision, context, and flexibility in assessing and prioritizing software vulnerabilities. It extends CVSS's original purpose—to score the technical severity of flaws—by introducing new metrics and categories that help organizations judge real-world risk more accurately.
However, two years after its release, many experts say CVSS 4.0 hasn't changed the situation for most enterprises. Wim Remes, principal consultant at cybersecurity consultancy Torreon, argues that CVSS 4.0 has not materially moved the needle.
"Organizations are reluctant to leverage threat, environmental, and supplemental metrics because those are adjusted mainly on gut feeling. We still would rather rely on what the tooling tells us versus what we think it should be," says Remes.
Appeasing auditors is another reason CVSS 4.0's contextual metrics haven't fully caught on. "Further, increasing compliance requirements include a focus on vulnerability management. Explaining to an auditor why you felt that a CVSS 9.0 should be bumped down to a 6.9 for you is not the kind of liability organizations are comfortable exposing themselves to," Remes explains.
Another vulnerability scoring development was the Exploit Prediction Scoring System (EPSS). Developed by FIRST (Forum of Incident Response and Security Teams), EPSS uses machine learning to estimate the likelihood that a vulnerability will be exploited, blending threat intelligence, historical exploit patterns, and signals from malware repositories into a dynamic score. For teams securing OT/ICS, EPSS provides a filter that focuses patching and mitigation on the subset of vulnerabilities attackers are most likely to target, rather than those merely labeled "high" or "critical" by CVSS.
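FIRST publishes EPSS scores through a public JSON API, so folding the probability into triage can be lightweight. A minimal sketch, using two well-known exploited CVEs and an illustrative (not recommended) cutoff:

```python
# Sketch: pull EPSS scores from FIRST's public API and keep only the
# findings above an illustrative probability threshold.
import json
import urllib.request

def epss_scores(cve_ids):
    """Return {cve_id: exploitation probability} via api.first.org."""
    url = "https://api.first.org/data/v1/epss?cve=" + ",".join(cve_ids)
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    # Each record under "data" carries "epss" as a string.
    return {row["cve"]: float(row["epss"]) for row in payload["data"]}

scores = epss_scores(["CVE-2021-44228", "CVE-2019-0708"])
urgent = {cve: p for cve, p in scores.items() if p >= 0.10}  # illustrative cutoff
print(urgent)
```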
"CISA KEV has been a useful filter for many organizations. Most organizations have a backlog of thousands of vulnerabilities in their VM tooling. Even more when you consider contributions from penetration tests, SAST and DAST tooling, and CPSM/ASM tooling. And CISA KEV achieved something that neither CVSS nor EPSS has done: ultimate prioritization."
—Wim Remes
"As a data geek, I truly love what EPSS does, but I'm not convinced of its ultimate utility in patching prioritization for organizations. Do you want to have a patch management process that applies a patch when it is released or one that throws a wrench in your wheels when a score reaches a certain threshold," Remes says.
Complementing EPSS is CISA’s Known Exploited Vulnerabilities (KEV) catalog—a curated list of vulnerabilities with evidence of active exploitation. Federal mandates now require organizations to prioritize remediation of KEV entries, regardless of their technical severity rating. This real-world test is vital for OT environments, where "internet-facing" vulnerabilities and those known to enable lateral movement are often the prime targets for ransomware and nation-state threat actors.
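CISA distributes the catalog as a machine-readable JSON feed, which makes cross-referencing a backlog straightforward. A sketch, assuming the feed URL and field names current at the time of writing (the backlog contents are placeholders):

```python
# Sketch: flag backlog CVEs that appear in CISA's KEV catalog.
import json
import urllib.request

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

with urllib.request.urlopen(KEV_URL) as resp:
    kev = {v["cveID"]: v for v in json.load(resp)["vulnerabilities"]}

backlog = ["CVE-2021-44228", "CVE-2024-0001"]  # placeholder scan output
for cve in backlog:
    if cve in kev:
        print(f"{cve}: in KEV, remediation due {kev[cve]['dueDate']}")
    else:
        print(f"{cve}: not in KEV; prioritize on other signals")
```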
"CISA KEV has been a useful filter for many organizations," Remes adds. "Most organizations have a backlog of thousands of vulnerabilities in their VM tooling. Even more when you consider contributions from penetration tests, SAST and DAST tooling, and CPSM/ASM tooling. And CISA KEV achieved something that neither CVSS nor EPSS has done: ultimate prioritization," he says.
He and others add that KEV helps eliminate debates over which vulnerabilities must be addressed first and makes clear which fixes, applied immediately, would most quickly lower an organization's exposure.
"CISA KEV aligns mostly with the answers to those questions. We can debate its update velocity and coverage to a certain extent, as its original goal was to inform and guide U.S. government asset owners. However, overall, it has significantly improved prioritization. Most vulnerability management tooling now incorporates CISA KEV data to provide users with prioritization reports as well," Remes says.
Michael Farnum, advisory CISO at cybersecurity consultancy A1Variant, agrees, and he values the addition of KEV and EPSS to an enterprise's vulnerability prioritization calculation. "I do think, however, a lot of companies, if not most, don't put enough emphasis on their internal context. That should be the deciding factor," Farnum says.
"In fact, companies have a big lack of understanding of their list of assets and their locations," Farnum says, speaking to the need for most organizations to boost asset visibility.
The message from these changes is clear: severity is only one piece of the puzzle. Today's risk-based frameworks combine technical attributes, exploitability predictions, and asset/business context, enabling OT/ICS professionals to prioritize what truly matters. Modern assessment platforms now support contextual scoring, integrating factors such as asset criticality, exposure level, and even downtime costs into risk models.
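There is no single standard formula for contextual scoring, but a simple sketch shows the idea; the weights, scales, and example inputs below are illustrative assumptions, not a published model:

```python
# Illustrative composite risk score blending severity, exploitation
# likelihood, and asset/business context. Weights are assumptions.
def contextual_risk(cvss, epss, in_kev, asset_criticality, exposure):
    """
    cvss: 0-10 base severity
    epss: 0-1 predicted exploitation probability
    in_kev: True if listed as actively exploited
    asset_criticality: 0-1 (safety-critical controller near 1.0)
    exposure: 0-1 (internet-facing near 1.0, air-gapped near 0.0)
    """
    likelihood = 1.0 if in_kev else epss      # KEV listing overrides prediction
    impact = (cvss / 10.0) * asset_criticality
    return round(100 * likelihood * impact * (0.5 + 0.5 * exposure), 1)

# A "critical" 9.8 on an isolated test box scores far below a 7.5
# that is KEV-listed on an exposed, safety-critical controller.
print(contextual_risk(9.8, 0.02, False, 0.2, 0.1))
print(contextual_risk(7.5, 0.45, True, 1.0, 0.9))
```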
The Stakeholder-Specific Vulnerability Categorization (SSVC) framework, for example, utilizes decision trees to consider not only exposure and exploitability but also the operational mission of the affected system and the broader impact on public safety or business continuity. This marks a key shift for OT: moving from "patch everything critical" to "act first on vulnerabilities that create the greatest operational risk."
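The full SSVC tree has more decision points and outcomes than fit here, but a drastically simplified sketch conveys the shape of the approach; the input values and branch logic are condensed for illustration:

```python
# Drastically simplified SSVC-style decision tree. The real framework
# (CMU SEI/CISA) uses richer decision points and the outcomes
# Track, Track*, Attend, and Act; this only shows the structure.
def ssvc_decision(exploitation, exposure, mission_impact):
    """
    exploitation: "none" | "poc" | "active"
    exposure: "open" | "controlled" | "small"
    mission_impact: "low" | "medium" | "high"
    """
    if exploitation == "active" and mission_impact == "high":
        return "Act"     # remediate immediately, out-of-band if needed
    if exploitation == "active" or (exploitation == "poc" and exposure == "open"):
        return "Attend"  # remediate ahead of the normal patch cycle
    if mission_impact == "high":
        return "Track*"  # monitor closely for changes in exploitation
    return "Track"       # fold into routine patching

print(ssvc_decision("active", "open", "high"))  # -> Act
print(ssvc_decision("none", "small", "low"))    # -> Track
```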
OT/ICS environments present distinct challenges. Patch windows are scarce and patching is highly disruptive, legacy systems are complex to update, and availability is everything. Asset inventories encompass thousands of devices across decades-old platforms, where downtime has severe consequences for production, safety, and even life-critical services.
Legacy vulnerability scoring systems fail to reflect these realities, leading to ineffective prioritization and remediation actions that could even negatively impact operations. The recent steps toward risk-based, contextual assessment are crucial—but adoption remains uneven, and frameworks are not always harmonized or well-integrated.
Still, new vulnerability assessment models enable smarter, risk-based prioritization and more effective use of limited resources. However, the complexity of today's scoring systems necessitates unified standards and integration, lest security teams succumb to analysis paralysis or leave critical assets vulnerable.
As threat actors increasingly target OT/ICS, there is no substitute for fast, context-aware vulnerability assessments. Industry, government, and vendors must continue to collaborate to establish shared frameworks and provide practical interoperability.
"Vulnerability Management is a specific problem set that, for a variety of reasons, has attracted specific attention over the years. In my view, because, unlike other cybersecurity disciplines, it has a very accessible and relatively complete data set that specifically invites analysis and facilitates limitless debate, but we have reached a point where we're overanalyzing the problems and putting effort into treating symptoms rather than root causes," says Remes.
Remes advises organizations to focus on a few core areas to strengthen their vulnerability management. First, work through iterations of the top 10, 25, or 50 vulnerability findings from your available data sources to maximize the risk reduction from each remediation cycle. "Don't overinvest in analysis and prioritization until you have a grip on the number of vulnerabilities," he says.
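A minimal sketch of that first recommendation, merging findings from multiple sources into one deduplicated, fixed-size work list (the source names and risk scores are placeholders):

```python
# Sketch: merge findings from several sources, dedupe by CVE keeping
# the highest-risk record, and hand the team a top-N list to work.
from itertools import chain

def top_n(sources, n=25):
    merged = {}
    for finding in chain.from_iterable(sources):
        cve = finding["cve"]
        if cve not in merged or finding["risk"] > merged[cve]["risk"]:
            merged[cve] = finding
    return sorted(merged.values(), key=lambda f: f["risk"], reverse=True)[:n]

scanner = [{"cve": "CVE-2024-0001", "risk": 71.2}]
pentest = [{"cve": "CVE-2024-0002", "risk": 55.0},
           {"cve": "CVE-2024-0001", "risk": 80.0}]
for f in top_n([scanner, pentest], n=10):
    print(f["cve"], f["risk"])
```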
Also, think architecture first. "Vulnerabilities thrive inside long-lived, stagnant, and complex infrastructure. Iteratively change infrastructure to make it resilient. Consider Sounil Yu's DIE principles: Distributed, Immutable, Ephemeral. This is a better investment in the long term than expanding vulnerability management/patch management capacity," he says.
Finally, recognize that security and vulnerability management require a significant amount of work. "The people that have to do the work don't necessarily require exact prioritization but actionable guidance: What do I need to do? When? How can I do it? What are the potential risks of doing it?" he says.
"We're way past the point of looking at vulnerabilities only. Our reality has become much more complex as our infrastructures have aged and vulnerability sources have expanded, if not exploded, he adds. "A decade ago, we were looking just at NVD data, which we knew was relatively incomplete. Today, we're also adding outputs from penetration tests, static application security testing, dynamic application security testing, cloud security posture management systems, attack surface monitoring tooling, and more. That's because risk is rooted in vulnerabilities as well as configuration flaws and faulty coding patterns. CVSS and EPSS do not stretch that far," he adds.
George V. Hulme is an award-winning journalist and internationally recognized information security and business technology writer. He has covered business, technology, and IT security topics for more than 20 years. His work has appeared in CSOOnline, ComputerWorld, InformationWeek, Security Boulevard, and dozens of other technology publications. He is also a founding editor at DevOps.com.