VB100 methodology - ver.1.1 overview

Overview of changes included in the VB100 methodology ver.1.1.

Changes for the next revision of the VB100 methodology (ver.1.1) are highlighted below. For a complete list of changes, please read the new methodology document.


Changes to certification criteria

To better reflect real-world usage, the current all-or-nothing certification criteria will be changed to allow up to 0.5% false negatives (i.e. malware misses) and up to 0.01% false positives. Products that do not exceed these thresholds will earn certification.

Comparison with the current criteria:

                                   Previous criteria                      New criteria
  False negatives allowed          None                                   up to 0.5%
  False positives allowed          None                                   up to 0.01%
  Counting false negatives         per event (per part × per test platform)  1 per sample
  Counting false positives         per event (per platform)               1 per sample
  False positive (clean file) set  480,000 files, any file types          ~100,000 files, at least 25% PE


Based on recent tests, we estimate that the new permitted false negative rate (up to 0.5%) will allow for 12-16 malware sample misses, while the new permitted false positive rate (up to 0.01%) will allow for 9-10 false positives.
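The estimates above follow directly from the thresholds: the allowed number of misses is the permitted rate multiplied by the size of the test set, rounded down. A minimal sketch of that arithmetic, assuming hypothetical set sizes consistent with the estimates (roughly 2,400-3,200 certification samples and a ~100,000-file clean set — these figures are inferred, not stated in the methodology):

```python
import math

def allowed_misses(rate: float, set_size: int) -> int:
    """Maximum number of misses permitted at the given rate (rounded down)."""
    return math.floor(rate * set_size)

# False negatives at 0.5%, for two assumed certification-set sizes
print(allowed_misses(0.005, 2400))
print(allowed_misses(0.005, 3200))

# False positives at 0.01% against an assumed ~100,000-file clean set
print(allowed_misses(0.0001, 100000))
```

Rounding down reflects that exceeding the threshold, even fractionally, would fail certification.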


Successor to the RAP Test

We are retiring the Reactive/Proactive (RAP) Test and introducing a new test, known as the Diversity Test.

The RAP Test aimed to measure how well a tested product kept up with new threats after losing Internet access. Over the years, however, the relevance of this test has gradually decreased, and it will now be replaced by the Diversity Test.

The Diversity Test will provide supplementary data to the certification. While the Certification Test is performed using a quasi-standard body of samples (currently the WildList), the Diversity Test uses a much wider pool of threats to provide the reader with an idea of products' real-life static detection capabilities.

A feature-to-feature comparison of the Diversity Test and the RAP Test is shown below.

                                                      RAP Test                       Diversity Test
  Does it contribute towards certification criteria?  No                             No
  Can samples be disputed?                            No                             Yes
  Sample count                                        Several thousand               750-1,000 samples (to enable disputes)
  Does test include false positive testing?           No                             No
  Test output                                         Composite (hard to interpret)  Simple catch rate %


Other changes

In addition to the changes described above, the new methodology includes amended or newly added policies about:

  • The usage of custom product builds
  • Acceptable product updates
  • On-demand vs. on-access testing requirements
  • Recovery from technical issues through re-running of test parts
  • Amount of data sufficient for certification

A detailed change log is available at the end of the methodology document.
