Richard Ford and Marco Carvalho present an idea for how to test products that claim to detect the unknown.
In the weeks running up to VB2014 (the 24th Virus Bulletin International Conference), we are looking at some of the research that will be presented at the event. Today, we look at the paper 'The three levels of exploit testing', by Richard Ford and Marco Carvalho from the Florida Institute of Technology.
Whether you're worried about China's 'Comment Crew' or state-sanctioned hackers from Fort Meade, MD, the use of zero-day exploits against your organisation is a worst-case scenario. Thankfully, a number of companies have developed solutions that claim to detect the unknown: they will detect attacks using unknown exploits of unknown vulnerabilities.
But are such products any good?
This question isn't easy to answer. From an attacker's point of view, the ideal zero-day exploit leaves no trace, so in a test there may be no way to tell that a product has missed it.
Getting hold of zero-days to use in a lab environment also comes with various ethical issues: the morally right thing to do is to report the vulnerabilities to the affected vendor. Moreover, it is an understatement to say that finding such vulnerabilities for testing purposes scales rather badly.
In their paper, Richard and Marco describe a new approach: they suggest taking popular open-source software and modifying its source code to deliberately insert a new vulnerability, one known to the tester but unknown to the product that is to be tested.
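To make the idea concrete, here is a minimal sketch of what such a seeded bug could look like; the function, its name and its context are invented for illustration and do not come from the paper. A bounds check is quietly removed from a C string-handling routine, turning safe code into a classic stack buffer overflow (CWE-121):

```c
#include <string.h>

#define NAME_MAX 32

/* Hypothetical routine in a modified open-source project. */
void parse_host_entry(const char *line)
{
    char host[NAME_MAX];

    /* Original (safe) version:
     *   strncpy(host, line, NAME_MAX - 1);
     *   host[NAME_MAX - 1] = '\0';
     *
     * Seeded vulnerability: an unbounded copy of attacker-controlled
     * input, which overflows `host` when strlen(line) >= NAME_MAX. */
    strcpy(host, line);

    /* ... the rest of the parser would use `host` here ... */
    (void)host;
}
```

Because the tester knows exactly where the bug lives and how to trigger it, they can craft a working exploit for it and then observe whether the product under test raises an alarm.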
By using the CVE list, the testers should be able to make sure the vulnerabilities they insert are similar in type to those that are commonly seen. They could also design a test that measures detection of a specific class of vulnerabilities.
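Purely as an illustration of targeting a specific class, a tester interested in uncontrolled format strings (CWE-134) might seed something like the following; again, the function and its context are hypothetical:

```c
#include <stdio.h>

/* Hypothetical logging routine in a modified open-source project. */
void log_request(const char *user_input)
{
    /* Original (safe) version:
     *   fprintf(stderr, "%s\n", user_input);
     *
     * Seeded vulnerability (CWE-134): user input is interpreted as the
     * format string itself, so input such as "%x %x %n" can leak or
     * corrupt memory. */
    fprintf(stderr, user_input);
    fputc('\n', stderr);
}
```

Repeating this with bugs drawn from different classes would allow a test to report detection rates per class rather than a single overall score.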
As everyone in security knows, almost all attacks seen in the wild exploit known (and usually patched) vulnerabilities. However, it is detection of zero-day vulnerabilities that people are most concerned about (and about which vendors tend to make very bold claims). Richard and Marco's paper presents a neat idea to test such claims.
Registration for VB2014 is still open.
Posted on 09 September 2014 by Martijn Grooten