MALWARE ATTACKS USED BY THE U.S. GOVERNMENT RETAIN POTENCY FOR MANY YEARS, NEW EVIDENCE INDICATES
by Kim Zetter
A NEW REPORT from Rand Corp. may help shed light on the government’s arsenal of malicious software, including the size of its stockpile of so-called “zero days” — hacks that hit undisclosed vulnerabilities in computers, smartphones, and other digital devices.
The report also provides evidence that such vulnerabilities are long-lasting. The findings are of particular interest because not much is known about the U.S. government’s controversial use of zero days. Officials have long refused to say how many such attacks are in the government’s arsenal or how long it uses them before disclosing information about the vulnerabilities they exploit so software vendors can patch the holes.
Rand’s report is based on unprecedented access to a database of zero days from a company that sells them to governments and other customers on the “gray market.” The collection contains about 200 entries — about the same number of zero days some experts believe the government to have. Rand found that the exploits had an average lifespan of 6.9 years before the vulnerability each targeted was disclosed to the software maker to be fixed, or before the vendor made upgrades to the code that unwittingly eliminated the security hole.
Some of the exploits survived even longer than this. About 25 percent had a lifespan of a decade or longer. But another 25 percent survived less than 18 months before they were patched or rendered obsolete through software upgrades.
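Those figures describe a skewed distribution: a mean lifespan of 6.9 years is consistent with a quarter of exploits dying within 18 months while another quarter survives past a decade. A minimal sketch with hypothetical lifespans (illustrative numbers, not Rand’s actual data) shows how such summary statistics fit together:

```python
# Hypothetical exploit lifespans in years -- illustrative only, not Rand's data.
# "Lifespan" runs from an exploit's creation until its vulnerability is
# disclosed and patched, or killed off by an unrelated code change.
from statistics import mean, quantiles

lifespans = [0.5, 1.2, 1.4, 3.0, 5.5, 6.0, 7.5, 9.0, 10.5, 12.0, 13.0, 13.2]

q1, median, q3 = quantiles(lifespans, n=4)  # quartile cut points
print(f"mean lifespan: {mean(lifespans):.1f} years")
print(f"shortest-lived quarter died before {q1:.1f} years")
print(f"longest-lived quarter survived past {q3:.1f} years")
```

With this made-up sample the mean lands near 6.9 years even though individual lifespans cluster at the extremes, mirroring the spread Rand reports.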
Rand’s researchers found no pattern in which exploits lived a long or short life: severe vulnerabilities were no more likely to be fixed quickly than minor ones, and neither were vulnerabilities in more widely available programs.
“The relatively long life expectancy of 6.9 years means that zero-day vulnerabilities — in particular the ones that exploits are created for [in the gray market] — are likely old,” write lead researchers Lillian Ablon and Andy Bogart in their paper “Zero Days, Thousands of Nights.”
Rand, a nonprofit research group, is the first to study in this manner a database of exploits that are in the wild and being actively used in hacking operations. Previous studies of zero days have used manufactured data or the vulnerabilities and exploits that get submitted to vendor bug bounty programs — programs in which software makers or website owners pay researchers for security holes found in their software or websites.
The database used in the study belongs to an anonymous company referred to in the report as “Busby,” which amassed the exploits over 14 years, going back to 2002. Busby’s full database actually has around 230 exploits in it, about 100 of which are still considered active, meaning they are unknown to the software vendors and therefore no patches are available to fix them. The Rand researchers had access to information on only 207 zero days — the rest are recently discovered exploits the company withheld from Rand’s set “due to operational sensitivity.”
While it’s not known how many of these exploits are in the U.S. government’s arsenal, Jason Healey, a senior research scholar at Columbia University’s School of International and Public Affairs, believes the U.S. government’s zero-day stockpile is comparable in size to Busby’s.
For many years, critics of the government’s use of zero days suspected the arsenal numbered in the thousands. But a report Healey published with his students last year, based in part on statistical analysis of the number of zero days that get discovered and disclosed each year to bug bounty programs, estimated that the government’s trove likely contained between two dozen and 225 zero-day exploits.
This would seem to jibe with statements made by government officials. Michael Daniel, former special adviser to President Obama on cybersecurity issues and a member of Obama’s National Security Council, has said in the past that “there’s often this image that the government has spent a lot of time and effort to discover vulnerabilities that we’ve stockpiled in huge numbers and similarly that we would be purchasing very, very large numbers of vulnerabilities on the open market, the gray market, the black market, whatever you want to call it. [But] the numbers are just not anywhere near what people believe they are.”
Shining a Light on the Government’s Zero-Day Policy
The government has long insisted that it discloses more than 90 percent of the vulnerabilities it finds or purchases, and that those it doesn’t disclose initially get reviewed on a regular basis to re-evaluate if they should be disclosed.
The problem with this is that the public doesn’t know how long the government is exploiting these security holes before they’re shared publicly — and therefore how long ordinary citizens are left exposed to Russian or Chinese nation-state hackers or cybercriminals who may discover the same vulnerabilities and exploit them.
One factor that can affect how quickly the government discloses vulnerabilities is their collision rate or rediscovery rate. This refers to how often the same vulnerabilities get discovered independently by two or more parties.
It’s a metric that is particularly important in the policy debate around the government’s use of zero-day exploits; if the U.S. knows about a vulnerability, there’s a good chance others do too and are quietly exploiting it. If the data shows there is high probability that criminal hackers or nation-state hackers from Russia or China could discover a vulnerability and create an exploit for it, this can be an argument for disclosing the vulnerability sooner rather than later to get it patched. But if that probability is low, the government can use it to justify nondisclosure and keeping people at risk longer.
The Rand researchers found that the collision rate for the exploits in the Busby database was indeed low. In a typical one-year period, only about 6 percent of the vulnerabilities got discovered by others. That figure jumped to 40 percent, however, when viewed across the entire 14 years of the database.
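As a rough sanity check (an illustrative model, not the report’s own methodology): if every vulnerability independently faced the same 6 percent chance of rediscovery each year, simple compounding over 14 years would predict a collision rate well above the 40 percent Rand observed, which hints that rediscovery odds are not uniform across vulnerabilities or over time:

```python
# Illustrative compounding model -- an assumption, not Rand's methodology.
# If rediscovery were an independent 6% chance each year, the probability of
# at least one collision over 14 years would be 1 - (0.94 ** 14).
annual_rate = 0.06
years = 14

p_no_collision = (1 - annual_rate) ** years
p_collision = 1 - p_no_collision
print(f"predicted 14-year collision rate: {p_collision:.0%}")
```

This naive model predicts roughly 58 percent, so the 40 percent figure Rand actually measured is lower than uniform compounding of the annual rate would suggest.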
But there’s a slight problem with this analysis, says Columbia University’s Healey. The Rand researchers determined the collision rate based on publicly disclosed vulnerabilities — those discovered and reported by researchers as part of a vendor bug bounty program or made public in some other way, such as at conferences or in news articles. But this isn’t the kind of collision that concerns critics of zero-day arsenals. They’re concerned about collisions with zero days that remain secret, such as those developed by other nation-state actors and criminal hackers and never publicly disclosed.
“The collision rate is absolutely fascinating, but this is the wrong way to talk about it,” says Healey.
Healey says Rand should be looking for collisions with the zero days found in other gray market databases held by other exploit sellers. He says the kinds of researchers who participate in bug bounty programs tend to be looking for different kinds of vulnerabilities than researchers who are looking for vulnerabilities for offensive hacking. The latter will have different needs and also better resources to look for vulnerabilities.
It’s worth noting that another study released this week by cryptographer Bruce Schneier and Trey Herr of the Harvard Kennedy School found a higher collision rate when looking at vulnerabilities found in browser software and mobile phones.
“Between 15 percent and 20 percent of all vulnerabilities in browsers have at least one duplicate,” they wrote. “For data available on Android between 2015 and 2016, 22 percent of vulnerabilities are rediscovered at least once an average of 2 months after their original disclosure. There are reasons to believe that the actual rate is even higher for certain types of software.”
But this study also involved vulnerabilities disclosed to bug bounty programs. Dan Guido, CEO of Trail of Bits, whose company does extensive consulting on iOS security, says, “I don’t think studying bug bounty collisions is representative of exploit use in the wild.”
Despite this limitation, Guido says, the collision test conducted by Rand is still illuminating for the very fact that it involved at least one set of data consisting of live, in-the-wild exploits.
“Even with the caveats around the collision rate, using the best available data we have now [with those live exploits], [it] is significantly lower than we expected,” he said.
This raises the question: is the rate low enough that the government would be justified in holding on to exploits for years without disclosing the vulnerabilities they attack?
Ari Schwartz, former senior director of cybersecurity in Obama’s White House, who participated in the so-called Vulnerabilities Equities Process, where the government makes these assessments, says even a low collision rate is a problem.
“Let’s say it’s just 10 percent; is it worth doing disclosure for 10 percent? I think it is,” he says. “That’s still pretty high if you think about it — 1 in 10.”
Healey says the Rand study is an incredible asset to other researchers because of its use of live exploits that are in the wild. It makes the data and analysis more realistic than studies that only simulate scenarios and guess at conclusions, such as what the consequences of not disclosing a vulnerability might be.
“We can theorize all we want about what’s good and what’s bad [in terms of disclosure], but this is going to shake things up, because now we can roll up our sleeves and actually come up with some real answers.”
He hopes it may also encourage the owners of other exploit databases to share their collections with researchers.