2008-07-01
Abstract
'The purpose of the VB100 is to provide a regular measure of the competence, reliability and credibility of software vendors in the security field.' John Hawes, Virus Bulletin.
Copyright © 2008 Virus Bulletin
The VB100 certification system has come under fire in recent weeks, with much of the criticism focused on the WildList and its suitability as a basis for testing. It became quite clear from the stories that were published that there are several common misconceptions surrounding both the intended purpose of the VB100 certification and, in particular, the WildList.
One of the central criticisms levelled at the WildList is that it does not include every piece of malware. To do so, of course, would be an immeasurably huge task beyond even the vast resources of large globe-straddling corporations. It would also be quite beside the point of both the WildList and the certification schemes that rely on its steady and regular output.
There have been numerous other criticisms of the WildList, most of which focus on the range of malware types covered by the list and the activeness of its reporting sources. These are issues that the team behind the WildList is investing considerable effort in addressing. But even once the full range of improvements is on stream, the WildList will never pretend to cover the gamut of malicious software; rather, it is intended to provide a limited but unquestionable subset of the malware problem, containing items which are guaranteed to be affecting a significant proportion of real-world users and which are represented by a set of rigorously validated master samples.
Tests that pit products against the WildList have never claimed to prove that a given product can detect all known malware (which would be impossible to prove) and they do not attempt to rank products against one another on the basis of detecting more or fewer of the samples listed. The purpose of the VB100 and similar certification schemes is to provide a regular measure of the competence, reliability and credibility of software vendors in the security field – something which has become more important than ever in recent years with the growing tide of suspect software claiming to detect and remove malware.
Products are expected to be able to pass VB’s tests, and to pass regularly. With the level of co-operation and sample sharing going on across the industry, nothing on the list should be new to vendors. And given the comparatively tiny resources of the VB test lab in relation to the extensive research labs that AV vendors have at their disposal, no replication of complex viruses carried out by VB should be beyond the capabilities of a commercial malware lab.
Passing, or even failing, a single VB100 test means little in isolation – it is all about maintaining a steady record of passes over time, to demonstrate a long-term commitment to quality and reliability.
Of course, beyond these issues, there are far more complex and difficult problems facing testers. An ever-growing arsenal of weapons is being deployed in diverse ways as products adapt to combat the evolving threat. Testing these new weapons – and, just as importantly, interpreting and presenting the results in a manner comprehensible to the end-user – is a hard but vital task, and one that VB, like all testing bodies, is facing up to. We are hard at work developing a range of improvements and additions to the data we provide to our readers, and are currently hiring extra hands to cope with the requirements of testing a wider range of criteria and maintaining a broader, more up-to-the-minute sample collection.
Any such plan requires the input and co-operation of experts from across the industry, pooling both wisdom and resources for the greater good. Groups such as AMTSO provide great hope for the future, and a number of the presentations at this year’s VB conference will focus on the subject of testing. As we strive to provide useful and trustworthy data on the protection offered by a growing range of solutions to the security problem, we rely on the support of those whose performance we measure, just as they rely on independent tests to keep them informed of their successes and failings. As always, we gladly welcome new ideas and constructive criticism.