The following FAQs are designed to help you interpret VB100 certification results and to give you insight into how the VB100 certification programme is set up and runs.
Any product that meets the VB100 certification criteria can be assumed to have a good ability to detect the most common varieties of threat, and to do so with few false alarms.
To understand the figures, consider our testing process. We expose each tested product to various threats and non-threats to measure its malware detection capabilities, along with its ability to avoid false alarms. These test cases are assigned to one of two case sets: the Certification Set, which contains threats, and the Clean Set, which contains non-threat cases.
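For illustration, here is a minimal sketch of how the two headline figures relate to those two case sets. The function names and numbers are our own assumptions for the example, not part of the VB100 tooling or methodology.

```python
# Illustration only: how the two headline figures relate to the two case
# sets. Names and numbers are assumptions for this example, not VB100 code.

def detection_rate(detected: int, certification_set_size: int) -> float:
    """Share of Certification Set (threat) cases the product flagged."""
    return detected / certification_set_size

def false_positive_rate(false_alarms: int, clean_set_size: int) -> float:
    """Share of Clean Set (non-threat) cases the product wrongly flagged."""
    return false_alarms / clean_set_size

# Say a product detects 9,975 of 10,000 threats and raises 2 false alarms
# across 100,000 clean cases:
print(f"{detection_rate(9_975, 10_000):.2%}")    # 99.75%
print(f"{false_positive_rate(2, 100_000):.3%}")  # 0.002%
```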
In the context of VB100, 'static detection' means that we don’t actually execute malicious or clean test cases, but rather we 'show them' to the tested products. While this has several benefits, the coverage such testing provides is limited.
In practical terms, VB100 static testing offers greater statistical relevance than dynamic tests.
Simply put, the better the approximation of a real-world infection chain, the more resource-intensive each test case becomes. This often proves to be a limiting factor in the number of test cases a lab can evaluate.
By focusing on the static detection layer only, VB100 can evaluate a much greater number of test cases and therefore provide better resolution, meaning that blind chance plays a lesser role than it would with fewer samples. Resolution is particularly important for false positives, which generally occur in well under 0.1% of cases. With just 100 test cases, you are almost guaranteed to miss any false positives. At 100,000 test cases, not only do we have a good chance of detecting false positives, but we can reliably tell the difference between a product that generates a prohibitively high 1% false positive rate (that is, 1 out of 100 programs on average triggering a false alarm) and one that has a more modest 0.01% rate (1 out of 10,000).
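To see why sample size matters, here is a quick back-of-the-envelope check (an illustration of the statistics, not part of the VB100 methodology): if a product's true false positive rate is p, the probability of observing at least one false alarm across n independent clean cases is 1 − (1 − p)^n.

```python
# Illustration only: probability of observing at least one false alarm
# in n independent clean-set cases, given a true false positive rate p.

def p_at_least_one_fp(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

for n in (100, 100_000):
    for p in (0.01, 0.0001):  # 1% vs 0.01% true false positive rate
        print(f"n={n:>7,}  p={p:.2%}  "
              f"P(>=1 false alarm) = {p_at_least_one_fp(p, n):.4f}")
```

With 100 cases and a 0.01% rate, the probability of seeing even one false alarm is roughly 1%, so a careful product and a false-positive-prone one can look identical. At 100,000 cases the expected counts (around 1,000 false alarms versus around 10) are impossible to confuse.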
Since VB100 focuses only on static detection of Windows executables (PE files), the coverage it provides is limited to that single detection layer. Many products employ several additional advanced technologies that simply won’t kick in during the VB100 test process, such as URL reputation, exploit protection, behavioural/runtime analysis, sandboxing, contextual analysis, and many more.
While not fully comprehensive, we believe that the layer VB100 covers – static detection – is the most important one. This is because the vast majority of users face common, 'garden variety' threats, and because virtually all endpoint security products will detect common threats statically, without invoking more advanced security features.
In a certain sense, VB100 gives you only part of the story; however, it is the part of the story that we think is the most relevant in the average case.
We encourage you to read the test reports from other labs as well. AMTSO – an industry organization promoting good anti-malware testing – is a great starting point. Since every test is an opinionated approximation of real life, the best you can do is synthesize information from multiple sources to form the most complete picture you can.
No, you shouldn’t, because VB100 is not designed to tell you whether Product A or Product B performs better. Such comparative tests require certain guarantees (for example, concurrent evaluation of the same test cases) that VB100 does not provide.
It is more complicated than that. Security is often layered, and the static detection we test is among the first of those layers. Just because a threat makes it through the first layer does not mean that subsequent layers won’t stop it. To give an example, malware is often polymorphic, meaning the same malware may take a million forms through repackaging. A product that fails to detect a new, repackaged form in its static detection layer may very well detect the threat at runtime, when the malware unpacks itself from its packed form.
Because of this, one can reasonably state that a product that earns Grade A+ (with a detection threshold of 99.5%) is expected to detect at least 99.5% of threats; at the same time, a product that earns Grade C with, say, an 87.34% detection rate may provide the same level of protection through its multiple layers of security. VB100 detection grades recognise and classify static detection performance but will not give you the complete picture.
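As a toy illustration of how a grade maps to a static detection rate, consider the sketch below. Only the 99.5% Grade A+ threshold is stated in this FAQ; every other cut-off here is a hypothetical placeholder, not the real VB100 grading scale.

```python
# Toy grading sketch. Only the 99.5% Grade A+ threshold comes from this
# FAQ; the remaining cut-offs are hypothetical placeholders.

HYPOTHETICAL_THRESHOLDS = [
    (99.5, "A+"),  # stated above
    (97.0, "A"),   # placeholder
    (92.0, "B"),   # placeholder
]

def static_detection_grade(detection_rate_pct: float) -> str:
    for cutoff, grade in HYPOTHETICAL_THRESHOLDS:
        if detection_rate_pct >= cutoff:
            return grade
    return "C"

print(static_detection_grade(99.75))  # A+
print(static_detection_grade(87.34))  # C – overall protection may still be strong
```

Whatever the exact cut-offs, the point above stands: the grade classifies one layer's performance, not end-to-end protection.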
Our testing methodology is documented in detail on the VB100 Methodology page. It’s a bit of a dry read, but ultimately this is the most complete guide you can find to VB100.
The Anti-Malware Testing Standards Organization (AMTSO) is an international non-profit association that focuses on the objectivity, quality and relevance of anti-malware testing methodologies. AMTSO members include both vendors of infosecurity products and test labs.
Among other services, AMTSO maintains the AMTSO Testing Protocol Standard, which describes testing protocol and behaviour expectations for testers and vendors relating to the testing of anti-malware solutions.
The VB100 test is designed to be compliant with this standard. AMTSO regularly audits the execution of our VB100 tests and certifies them to be compliant if they meet the standard criteria.
AMTSO audits our tests regularly. This typically happens well after the test has been performed. As a result, the reports are released before they have been certified as AMTSO compliant. For this reason, you may find that the compliance information is missing for the newest tests listed on the AMTSO website.
If the report has no AMTSO reference at all, either it was executed prior to the introduction of AMTSO compliance auditing, or the vendor explicitly opted out of AMTSO-compliant testing.
We don’t: VB100 testing is voluntary, and is paid for by the vendor of the tested product.
Any conflict of interest here is easily resolved: the test ethics framework always prevails, because our ultimate responsibility is to the consumer of the report.
No. Any test that starts out as a public test must be followed through, and the test report is published regardless of whether it is favourable to the vendor.
Under certain circumstances, we might refrain from publishing a test report if we believe that the data we collected is not relevant to the reader – for example, if it is unsound (tainted by technical issues, operator errors, etc.) or was produced under circumstances that didn’t give the vendor a fair chance to succeed. We carefully consider the public interest in such cases, and if we invalidate the test results, we aim to repeat the test as soon as possible.
We don’t, because we believe that in all but special cases, tested vendors need to be represented in the process for testing to be sound and well founded – something that is quite difficult to secure if you are testing a product against the vendor’s will.
Absolutely, email us at [email protected].