Introducing EPSS Version 4
The fourth iteration of the Exploit Prediction Scoring System (EPSS) is being released today. I have been working on EPSS for just over six years now. While I’d love to take you on a long meandering walk down memory lane, go into detail about all of the lessons we’ve learned along the way and introduce you to all of the wonderful people who’ve helped make EPSS better with each iteration, I’ll spare you the details and just offer a set of bullet points…
EPSS Evolution in a Nutshell
- Version 1: Started in 2018 with our paper, “Improving Vulnerability Remediation through Better Exploit Prediction” (https://weis2019.econinfosec.org/wp-content/uploads/sites/6/2019/05/WEIS_2019_paper_53.pdf), and presented the first EPSS paper (https://arxiv.org/abs/1908.04856) at Black Hat 2019. It was designed to be implemented in a spreadsheet, so it was simple, with only 16 variables. Sasha Romanosky and I started a special interest group through FIRST.org in the spring of 2020, and we began generating daily scores and publishing them in January of 2021.
- Version 2: Released in March of 2022, it dropped the need to run in a spreadsheet and migrated to a machine learning model to improve accuracy. This version had over 1,000 variables and was a strong improvement over version 1.
- Version 3: Marked a major leap forward in accuracy over both version 1 and version 2, still using machine learning. It is described in the paper “Enhancing Vulnerability Prioritization: Data-Driven Exploit Predictions with Community-Driven Insights” (https://arxiv.org/abs/2302.14172), began publishing scores in March of 2023, and used around 1,500 variables.
- Version 4: The accuracy of version 3 has slowly and slightly degraded over the last two years (though it was still easily outperforming version 2). Many things have changed in my personal and professional life, too, that make this version the most exciting for me.
Over the years, we branded the terms “efficiency” and “coverage” to measure the effectiveness of prioritization approaches. Really, we just rebranded “precision and recall” (https://en.wikipedia.org/wiki/Precision_and_recall) as vulnerability-specific terms: efficiency measures how well remediation resources are spent, and coverage measures how much of the exploited-vulnerability population gets addressed. We also found it helpful to talk about the amount of effort a prioritization system demands. But what does this look like in practice?
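These three metrics are just set arithmetic over CVE identifiers. Here is a minimal sketch; the function name and all the numbers are illustrative assumptions of mine, not real EPSS data:

```python
def score_strategy(prioritized: set, exploited: set, universe: set):
    """Return (efficiency, coverage, effort) for a prioritization strategy.

    prioritized: CVEs the strategy tells you to remediate
    exploited:   CVEs with exploitation activity in the following 30 days
    universe:    all published CVEs under consideration
    """
    hits = prioritized & exploited
    efficiency = len(hits) / len(prioritized)  # precision: prioritized effort that paid off
    coverage = len(hits) / len(exploited)      # recall: exploited vulns that were covered
    effort = len(prioritized) / len(universe)  # share of all CVEs you chose to remediate
    return efficiency, coverage, effort

# Toy example: 100 CVEs, 8 of them exploited, and a strategy that
# prioritizes half of everything (including all 8 exploited ones).
universe = {f"CVE-{i}" for i in range(100)}
exploited = {f"CVE-{i}" for i in range(8)}
prioritized = {f"CVE-{i}" for i in range(50)}
eff, cov, effort = score_strategy(prioritized, exploited, universe)
print(f"efficiency={eff:.1%} coverage={cov:.1%} effort={effort:.1%}")
# efficiency=16.0% coverage=100.0% effort=50.0%
```

The toy strategy achieves perfect coverage but wastes most of its effort, which is exactly the trade-off the circles below illustrate.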
How does EPSS compare to existing methods?

Using the exploitation activity we collect (with evidence of around 12,000 vulnerabilities exploited every month), we can measure the performance of any prioritization method (any that publishes its scores, anyway). It’s not hard to imagine a strategy that says “remediate CVSS 7 or higher,” so let’s start there. The left circle above represents that strategy. The large light gray circle is all published CVEs with a CVSS score, which is around 239K (using the most recent CVSS version available for each). The blue represents the CVEs scored 7 or higher, which is a lot of CVEs: just over half (“effort” is 50.7%). The red represents the exploitation activity recorded in the following 30 days. Notice the overlap between what was prioritized and what was exploited. That’s our target. With a CVSS 7-and-above strategy, only about 6% of the blue circle covers the red, meaning only 6% of the remediation effort addressed vulnerabilities with exploitation activity. That’s rather poor efficiency. Most organizations, however, want to focus on the red circle; they want to cover as much of it as they can afford to remediate. CVSS 7 and up isn’t too bad in that regard: at 74.6% coverage, it remediates 3 out of every 4 vulnerabilities that were observed exploited in the following 30 days.
To compare the CVSS strategy to EPSS, we can hold coverage steady so that roughly the same number of exploited vulnerabilities are prioritized for remediation. Look at what happens to the other two metrics. Effort drops more than eightfold, from 50.7% down to 6%, which makes efficiency jump to 47%. That’s a pretty big savings for roughly the same security outcome!
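Holding coverage steady amounts to sweeping the score threshold until the captured share of exploited vulnerabilities reaches the target (for example, the ~74.6% that “CVSS 7 or higher” achieves). A hedged sketch of that sweep; the function name and toy data are mine, not from EPSS:

```python
def threshold_for_coverage(scores: dict, exploited: set, target: float) -> float:
    """Find the highest score threshold whose coverage meets the target.

    scores:    maps CVE id -> score in [0, 1] (e.g. an EPSS probability)
    exploited: CVEs with observed exploitation activity
    target:    desired coverage (recall), e.g. 0.746
    """
    # Walk candidate thresholds from strict to permissive; stop as soon as
    # the exploited CVEs scoring at or above the threshold reach the target.
    for t in sorted(set(scores.values()), reverse=True):
        covered = sum(1 for cve in exploited if scores.get(cve, 0.0) >= t)
        if covered / len(exploited) >= target:
            return t
    return 0.0

# Toy data: four scored CVEs, two of which were exploited.
scores = {"CVE-A": 0.95, "CVE-B": 0.60, "CVE-C": 0.10, "CVE-D": 0.02}
exploited = {"CVE-A", "CVE-B"}
print(threshold_for_coverage(scores, exploited, 0.75))  # prints 0.6
```

With the threshold fixed this way, effort and efficiency can then be compared across strategies at equal coverage, which is the comparison made above.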
What makes v4 better?
- Vastly improved data ingestion, processing and monitoring
- Exploitation activity now includes malware activity and endpoint detections
- EPSS is collecting exploitation activity for 12K vulnerabilities a month!
- Added data from RSS/web mentions: hundreds of sources discussing vulnerabilities
- Using cve.org CNA/ADP information as backup to NVD enrichment (CPE/CVSS)
- Added vulnerabilities scanned by Shodan, and HackerOne Hacktivity reports
- CWEs now rolled up to the top 22 categories from the CWE-1400 view
- Removed a handful of sources that stopped updating.
- CVEs marked as REJECTED will no longer be scored.
- Long term support from my company, Empirical Security, for development and data services. Also, huge thanks to Cyentia Institute for the unbelievable support getting us to v4!
I hope you find EPSS useful, I hope you consider joining the EPSS SIG (https://portal.first.org/g/epss-sig), and I hope you’ll share any feedback you can.
Also, if your work depends on EPSS, Empirical is making enterprise support available for those who need hourly updates or version support. Please reach out (https://www.empiricalsecurity.com/contact) and ask for a demo!