2012-06-01
Abstract
The VB lab team expected to see more stable, better-behaved products on this month's server platform than in various other tests of late - 37 products lined up to be put through their paces and the team were treated to a relatively smooth ride. John Hawes has all the details.
Copyright © 2012 Virus Bulletin
After a rather trying time in the last comparative, we had pinned our hopes on a significant improvement in product performance and behaviour this month. Since this month’s was a server test – with many of the products taking part having been designed to protect vital systems – we expected to see far fewer problems here: serious administrators tasked with maintaining pivotal production servers would surely not stand for having them repeatedly crash, freeze or even slow down thanks to wobbly security solutions (and many would take issue with the poorly designed, inaccurate, cryptic or otherwise unusable reporting systems which so regularly cause us problems).
The test deadline was 18 April, which, as I write, seems a long time ago. Product submission day passed with few shocks, a medium-sized field of entrants turning up with a few surprise omissions, but the bulk of the regular entrants making an appearance. A single newcomer – a Chinese product with no English translation available – promised some interest, but since we knew it was based on an engine with a solid performance record we were hopeful of a reasonably simple job. As usual, much of the field was made up of OEM and white-label products, and although not quite as dominant as last time around, close to half of this month’s submissions included one or other of the two most popular engines for this kind of implementation. In the end, the total number of submissions was 37 – the lowest turnout for a Windows comparative since our first look at the Windows 2008 Server R2 platform two years ago, but still enough to keep us busy.
Just as we were about to make a start on testing, a window opened up for our long-anticipated and well overdue lab move. This proved to be a fairly major undertaking, with the old lab having played host to VB100 testing for close to a decade – but the new space promised more room, better cooling and power availability, and an all-round environmental improvement compared to the dark and dismal cave we used to call home. The move went fairly smoothly, but almost inevitably there were a number of teething problems getting set up in our new space, and it was more than two weeks before we could get the test underway properly.
Windows 2008 Server R2 has been around for a few years now, as the server equivalent of Windows 7, first released in late 2009. It has already been the focus of a handful of VB100 comparatives, and has received approval from the lab team for its general air of stability and ease of use. One of the few annoyances with its set-up is its insistence on creating a second partition at the start of the main system drive, which makes our re-imaging set-up a little more fiddly than normal. Otherwise, preparing the machines was fairly routine, with nothing beyond the latest available service pack (SP1) and a few handy tools such as archive unpacking and PDF viewing utilities added. As usual, test systems were defragmented and decluttered to make the process of applying a fresh image for each round of testing as rapid and painless as possible.
Test sets were prepared along the usual lines too, with a fairly small set of additions to the clean sets – these focused on business software appropriate to servers, but also included some items which would be more likely to be found in a domestic setting. A number of driver and utility CDs and DVDs received with a fresh batch of machines were also added. After the retirement of some older items, the size of the set remained fairly steady at around 180GB, just under 620,000 files. The speed sets and other samples used in our performance measures remained unchanged, and our processes for building the weekly RAP sets and daily Response sets were also left as they were (once the servers in our new lab space had been coaxed into behaving properly and the daily gathering, classification and storage of new samples was back online). A late start building the RAP sets meant there was more time for items to make their way through to the older test sets, and the ‘Week -3’ set made up more than half the total, though the weekly sets averaged around 25,000 samples each. The WildList sets were based on the March lists, released a couple of weeks before our deadline. These offered few surprises; once again, a number of items from the Extended list were removed well into the testing period after their appropriateness had been challenged by some reporters.
With much time and effort having gone into getting the lab back up to speed, our plans to extend the data included in this report – such as details of the stability rating system introduced in the last test – were put on hold. After a frenzied period of preparation, we were at least ready to perform our standard set of tasks though, and with no time to waste we got down to the nitty-gritty of putting the products through their paces.
Product version: 3939.602.1809
Update versions: 19/04/2012, 07/06/2012, 11/06/2012, 13/06/2012
Agnitum’s Outpost suite will need little introduction to our regular readers, the main features being a renowned firewall and anti-malware protection based on the VirusBuster engine. The product was submitted with latest updates included in a 136MB package. Installation was a little on the slow side, taking two or three minutes to complete when run offline; online updates were fairly zippy though, only adding an extra minute or so to the process. A reboot was required to complete the set-up.
The design is fairly clear and pleasant, with good access to a set of controls that is a little more detailed than the most basic of products. Responsiveness and stability were generally good throughout, with no serious issues to report. Speed measures showed some reasonable initial scan rates, which increased significantly in the warm runs, with on-access overheads not excessive to start with and showing an impressive improvement once the product had familiarized itself with its surroundings. Performance measures showed some fairly heavy use of memory and very high CPU consumption, with our set of activities taking a long time to complete.
Detection rates were less impressive but still reasonable, with a fairly decent showing in the Response sets. RAP scores were not bad either – in the reactive weeks at least, the proactive week showing a fairly steep decline. The WildList sets were covered perfectly though, and with no issues in the clean sets a VB100 award is comfortably earned. That gives Agnitum ten passes from ten attempts in the last two years, with only Linux tests missed, and earns the product a ‘solid’ rating for stability.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 7.0.1551/7.0.1617
Update versions: 120418-1, 120531-0
Avast's server solution strongly resembles the vendor’s wildly popular free home-user offerings in most respects. The package provided weighed in at 95MB including updates, and offered an ‘express’ install mode which got the job done with just a couple of clicks, completing in well under a minute. No reboot was required on initial install, but one of the later updates did request a restart to complete; updates were fairly speedy, making the average set-up time around four minutes.
The interface is attractive and well designed, providing an impeccable range of controls without sacrificing clarity or usability. Responsiveness was generally good, but in the RAP tests the system suffered a single, non-reproducible blue screen incident, which dents the product’s stability rating somewhat – although it did occur in unusually high-stress circumstances.
Speed measures were fairly decent, with very light overheads, at least until the archive scanning settings were turned up to the max, and resource use was fairly low, with an average impact on our set of tasks.
Detection rates were pretty good too, with excellent scores in the reactive part of the RAP sets, the proactive section falling a little short of the very best performers this month but still well ahead of most. Scores in the Response tests were not quite as impressive, but still decent, and with the WildList and clean sets dealt with appropriately, a VB100 award is earned without difficulty. Avast now has five passes and one fail in the last six tests; 11 passes in the last two years. Given the single blue screen event, the product earns just a ‘fair’ rating for stability under our new (yet to be fully documented) schema.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 2012.0.2127
Update versions: 2411/4940, 2433/5051, 2433/5056, 2433/5064
AVG is another big player in the free-for-home-use market, but it is the company’s business solution that is the more regular participant in our comparatives. The package provided measured a sizeable 173MB. Once again, an ‘express’ installation route was offered (accepting all default options) – but three or four clicks are required to get to this stage. The process takes quite some time to complete – more than three minutes in all, including fairly zippy updates. The interface itself has had a bit of a refresh of late and looks simpler, crisper and cleaner, with four main areas replacing the large number of overlapping categories previously provided. Customization is available in decent depth, and is pretty simple to navigate and operate, while stability seemed solid throughout testing.
Scanning speeds started off fairly slowly, but the product blasted through the sets once it had had an initial thorough check, and overheads were featherlight throughout. Resource use was pretty low, as was impact on our set of tasks.
Detection rates in the RAP sets very closely mirrored the company’s rival and near-neighbour Avast, being excellent in the reactive parts and very good in the proactive week. Response scores were very impressive too. The WildList was handled well, but in the clean sets a single item was alerted on as a trojan. Closer analysis showed this was an Islamic prayer time notifier, with very high download rates from the popular freeware sites – numbering in the tens of millions in some cases – but apparently no one in the AVG user-base had reported a problem with it. Nevertheless, this was enough to deny AVG a VB100 award this month, despite an otherwise solid showing. The vendor has five passes and one fail in the last six tests; 11 passes in the last two years. The product merits a ‘solid’ rating for stability.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 1
Product version: 12.0.0.2038
Update versions: 8.02.10.48/7.11.27.240, 8.02.10.80/7.11.32.28, 7.11.32.64, 7.11.32.152
The third in a trio of popular free solution providers, Avira provided a full-blown server solution this month. The base package was only 82MB including all updates, and with an express option provided it required only a couple of clicks to get going, completing very rapidly with updates taking place in the background (so quickly they were barely noticeable). The whole process was completed in under two minutes, with no need to reboot.
The interface uses the Microsoft Management Console (MMC), and is of necessity a little clunkier and less user-friendly than custom-built ones, but it manages to provide a thorough level of controls nevertheless. Most settings dialogs are provided as pop-outs from the rather wordy and fiddly console system, and navigation and operation were noticeably less smooth and more taxing than others seen this month.
Scanning speeds were pretty decent though, and overheads pretty light, with impressively low use of resources and minimal impact on our suite of tasks. Detection rates were excellent as usual, with a superb showing in the RAP sets and very good scores in the Response sets. With no issues in any of the core certification sets, a VB100 award is earned with flying colours. Recent tests show a flawless record of 12 passes in the last two years, and in this case stability was also flawless, earning the product a ‘solid’ rating.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 3.5.20.1
Update versions: 7076661, 7263693, 7270145, 7280998
Another full server solution, and again the MMC is used as the basis for the product interface. The 176MB set-up package ran through a fair number of steps, with updating and (even more unusually) reporting of incidents back to base both switched off by default. The process took a couple of minutes to complete but updates were reasonably fast and only took an extra minute or so.
The interface is about as well designed as it could be given the underlying MMC system, with some good use of colour and large, clear tabs separating functions. A very good level of configuration is provided. Scanning speeds were a little slow in some areas but very fast in others, while overheads started off fairly heavy but improved greatly in the warm runs. RAM use was fairly low, CPU use closer to average, and impact on our set of activities was noticeable, but far from extreme.
Detection rates were once again superlative, with a splendid showing in the RAP and Response sets. The WildList sets were covered impeccably, and with no issues in the clean sets another VB100 award is well deserved. With all of the last six tests passed, Bitdefender has 11 passes and a single fail in the last two years, and merits a ‘solid’ stability rating.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 3286
Update versions: N/A
A much less familiar product, Bkis is based in Vietnam and has made a few scattered appearances in our tests, the last of which was a little over a year ago. On the last few occasions we saw some distinctly odd results, with detections apparently depending on whether others were made in the same scan, and logs being adjusted and rewritten retrospectively so that results would differ if logs were looked at during or after a scan job. This time the product came as a jumbo 354MB installer, which seemed to run with only a single click, completing very rapidly but requesting a reboot to complete. On restart, the product interface opened and immediately suggested another restart was needed. Updates also seemed a little problematic, with the interface insisting the latest definitions had been acquired, but the version information and date of the definitions remaining constant, even several months after the initial installer was downloaded.
Testing itself was fairly straightforward, the product’s bright orange interface providing little by way of configuration but seeming to remain stable and responsive throughout testing. Scanning speeds were a little on the slow side, and overheads fairly hefty, with reasonable RAM use but high use of CPU cycles and an above average impact on our set of tasks.
Detection rates in the Response sets looked very impressive, despite our worries over whether updates had worked properly, and the WildList sets were dealt with very well too. In the clean sets however, a number of false alarms emerged, many of them simple autorun configuration files from a selection of legitimate CD back-ups included in our clean sets, but also including components of software packages from major providers such as IBM, Citrix and LG.
On contacting the developers, it was suggested that changes made to the product to disable automatic disinfection (to help speed along our tests) had in some way broken the logging component, causing it to erroneously log these items as detected, and it was pointed out that if scanned in isolation these files would not be alerted on. This was confirmed, but we also observed that if other clean items were interspersed among the RAP sets, many of them would now be detected as malicious, while some items in the RAP sets would no longer be detected. This complication led us to strike the RAP results out entirely as unreliable. Further discussions with the developers revealed that the ‘test’ version of the product provided to us also used different update servers from those used by real customers, which would not necessarily contain the same detection data available in the real world. This seemed to render our results considerably less than representative of the product’s true performance, and they should thus be taken with a hefty pinch of salt.
Based, if nothing else, on the false positives clearly recorded in the initial set of logs, no VB100 award can be granted to Bkis this month. This is the company’s first appearance for a while, but it now has two fails and three passes in the last two years. There were clearly some odd issues here, which must mark the product down to no more than a ‘stable’ rating.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 18
Product version: 12.0.218
Update versions: N/A
The first of a raft of products this month based on the popular Bitdefender engine, BullGuard is a fairly regular participant in our tests with a strong performance record. The current product was submitted as a 146MB package including updates, and again only required a couple of clicks to get going, zipping through in under a minute with no need to reboot. Updates were generally speedy and reliable, although on one occasion a run failed with a message stating that the update server had disconnected unexpectedly. Fortunately, a retry proved more successful, and even with this extra time updates averaged under two minutes.
The product interface bucks the standard approach somewhat, opting for a largely empty GUI dominated by a reassuring ‘Your computer is safe’ message, with just a few small links and buttons around the outside. With a little exploring, a fair degree of control can be accessed without undue difficulty.
Scanning speeds started off fairly sluggish in most areas, but in the warm runs it blasted through at an incredible pace. On-access overheads were likewise extremely light after an initial settling-in period. RAM use was low, but CPU use a little on the high side, with a fair impact on our set of tasks.
Detection rates, as expected, were stellar, with very little missed in the RAP sets and the Response sets very well covered too. With no issues in the WildList or clean sets, BullGuard easily makes the grade for VB100 certification, putting the vendor on five passes from five entries in the last six tests; eight from eight in the last two years. The product earns an effortless ‘solid’ rating.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 7.3.33
Update versions: 5.5.0/14.2.32, 5.5.1/15.0.46, 15.0.51, 15.0.53
Central Command’s Vexira has settled in as a firm regular which rarely misses a test these days, and generally puts in a decent showing. The underlying engine is from VirusBuster, and as usual for server tests, a full-blown server solution was provided. The installer was relatively compact at 69MB, with 92MB of updates also required for the offline parts of the test. Set-up was fairly straightforward, following through a standard selection of 10 or so steps. As noted in previous tests, the option to send feedback on detections and so on to the developers is rather deviously concealed on the EULA screen where one would usually expect a checkbox to accept the terms and conditions. While we would usually encourage people to participate in these schemes, some server admins (used to handling highly sensitive information) may find this approach more than a little sneaky. Assuming one is not required to be extra paranoid and read through each page in minute detail, the set-up process takes less than a minute, with a reboot required at the end. Once up and running, updates can be initiated manually, but on each of the five or six installs needed to complete testing, the first attempt failed with no information as to why. Leaving it alone for five minutes or so seemed to do the trick though, with a message saying it had at last managed to complete the required download of data.
The interface is another MMC-based monster, which manages to pull off the trick of making the console system even more awkward, fiddly, unintuitive and inconsistent than it needs to be. A decent level of controls is provided, but they must be wrestled out of what is at times a bewildering layout. As always with this product and its VirusBuster sister solution, an option to scan archives internally on access appears to be provided, but has no effect whatsoever.
Scanning speeds were sluggish in the archive and binary sets but pretty nippy in other areas, especially with the default settings where file extensions are trusted as a guide to file content. On-access overheads showed a similar pattern, with much time spent looking at some sets while others were mainly ignored. Memory consumption was fairly low, and CPU use not too high either, with a reasonable hit on our set of tasks.
Detection rates were not stellar, but not too disappointing either, with decent coverage of the reactive parts of the RAP sets, dropping off sharply into the proactive week, and a reasonable showing elsewhere. The WildList sets were well handled, and with no issues in the clean sets a VB100 award is earned. With a single hiccough a few tests back, Vexira boasts five passes and a single fail in the last six tests; 11 passes in the last two years. This month the product earns a ‘solid’ rating for stability.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 2.1.89
Update versions: 14.2.31, 15.0.46, 15.0.51, 15.0.53
The last comparative saw something of a glut of solutions from the same white-label factory, using the VirusBuster engine with a front end provided by Preventon; this month only a handful returned – most of them, including this one, among a small group of long-term semi-regulars. Clearsight’s version came as a 90MB installer, which ran through in a small number of steps, completing in just half a minute or so with no need to reboot. The interface is simple and basic, with a slightly more than minimal set of controls, but is generally easy to operate and responsive. On this occasion we observed a couple of interface crashes, both of which appeared when the licence code was entered shortly after installation – but this didn’t seem to cause any lasting effects and simply reopening it got things moving again. There were a few other oddities with the interface that we had not previously observed, such as the scan-time and files-scanned fields sitting empty in the scan dialog. Previously mentioned problems, such as the right-click scan ignoring options not to remove or quarantine detected items, remained present.
Scanning speeds were fairly mediocre, but overheads weren’t too heavy and memory use was low. CPU use was a little on the high side, but our set of tasks ran through in good time. Detection rates were a little below par but not terrible (as expected having seen several performances from the same engine so far), with acceptable rates in the Response sets. RAP scores were initially baffling, seeming reasonable in the reactive parts but terrible in the proactive week. Further investigation showed the scan had silently given up on encountering a tricky packed file in the set. Removal of this and several similar items gave us a more complete picture, which was much closer to our expectations, but still showed a fairly steep decline. The WildList sets were handled without any problems though, and with nothing to trouble us in the clean sets a VB100 award is duly granted. With a few stability issues noted, Clearsight is rated no more than ‘fair’ in our new system, but has five passes from five entries in the last six tests; seven from seven in the last two years.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 99.86%
False positives: 0
Product version: 5.1.16
Update versions: 5.3.14/201204190953, 201206061023, 201206081258, 201206120807
Another product based on a third-party engine, although with rather more additional protection rolled in, Commtouch has had a rather rocky time in our tests of late. This month’s offering was as compact as ever, with the main package only 14MB and offline updates just 28MB. The set-up process is fairly simple and speedy, needing no reboot, and updating online was pleasingly fast too, taking no more than four minutes to complete the whole install.
The interface is fairly basic, but has a reasonable set of controls, and seemed pretty sturdy under pressure. Scanning speeds were reasonable, but overheads were decidedly heavy with the on-access measures taking quite some time to complete. Although RAM use was low, CPU use was very high and our set of activities also took a very long time to get through.
Detection rates were not great – a little unpredictable through the RAP sets and not too impressive through the Response sets either, but the WildList at least was well handled. In the clean sets, a single item – apparently a fairly old, packed installer for drivers supplied with some much newer hardware – was alerted on with a heuristic flag, which was enough to deny Commtouch a VB100 award this month. That leaves the vendor looking rather wobbly, with two passes and three fails in the last six tests; four passes and five fails from nine entries in the last two years. Stability this month was fine though, earning the product a ‘solid’ rating.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 1
Product version: 3735.575.1669
Update versions: 19/04/2012, 07/06/2012, 11/06/2012, 13/06/2012
Sticking with an OEM theme, malware detection in the Defenx product is provided by VirusBuster by way of Agnitum, whose suite this strongly resembles. The 127MB installer needs only a couple of clicks but takes a while to complete, including updating and gathering ‘smart scan data’, with a reboot required at the end; the whole process takes around five minutes.
The interface is clear and well laid out, with good access to a decent level of controls, and we generally found that it responded well. At one point, after running an intense on-access job with multiple infections, we had some problems opening Notepad or WordPad to peruse plain-text logs. Many other functions were also unavailable, as if the high number of detections in such a short time had put the system into some sort of lockdown mode. After ten minutes or so this seemed to clear though, and usability was restored.
Scanning speeds were not super-fast initially, but they improved greatly in the warm runs, while file access lag times were a little high in the archive and binary sets, but very low elsewhere. RAM use was unexceptional, but CPU use was pretty high, with a heavy impact on our set of tasks. Detection rates were much as expected: decent in the early parts of the RAP sets with a steep decline into the proactive week, reasonable in the Response sets, with no issues in the core certification sets. A VB100 award was earned without complaint. Some oddities observed after heavy bombardment rule out a perfect score for stability, but they don’t drag it too far down the rankings, leaving it on ‘stable’. Our test history shows ten passes from ten entries in the last two years, only the annual Linux tests not entered.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 2.1.89
Update versions: 14.2.31, 15.0.46, 15.0.51, 15.0.53
Another product from the Preventon/VirusBuster stable, with a fairly long and respectable history in our tests, Digital Defender’s latest edition was submitted as an 88MB install package, which ran through quickly and simply with no reboot needed. The by now very familiar interface provides basic controls but is clearly and simply laid out, generally responding well under pressure. A few minor bugs were noted, but there were no serious issues or crashes until the latter stages of the RAP sets, where once again scans simply aborted with no alert or warning, and some files had to be removed before they could be coaxed through to the end.
Scanning speeds elsewhere were not bad, with overheads a little above average in some areas but very light in others; resource use and impact on our set of tasks were reasonable too. Detection rates held no surprises, dropping sharply through the RAP sets from a decent starting point. Response scores were mostly decent too, and with no issues in the WildList or clean sets a VB100 award is earned. That gives Digital Defender five passes from five entries in the last six tests; seven passes and three fails in the last two years, and a ‘stable’ rating.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 99.86%
False positives: 0
Product version: 5.0.2
Update versions: 1.1.2084, 1.1.2157, 1.1.2159, 1.1.2160
Not long after receiving the submission for this month’s test, we received word that eEye Digital Security had been acquired by BeyondTrust, so it seems likely that in future reports we will no longer see the eEye name. The product has put in a run of good showings of late. Malware detection mainly stems from the Norman engine incorporated in the product, including its Sandbox analysis system.
The installer submitted this month was a fair size, at 231MB, but it didn’t take too much effort to run through, completing in not much over a minute with no reboot required and a short set-up wizard at the end. Online updates weighed in at a hefty 200MB, but generally took no more than a couple of minutes to fetch.
The interface focuses mainly on the vulnerability management which is the product’s main strength, but reasonable space is given to the anti-malware configuration, with a decent basic level of controls. Scanning speeds were very slow over archives and binaries, but perfectly reasonable elsewhere; overheads were pretty hefty throughout the sets, although our set of tasks didn’t take too long to get through, with average RAM use and fairly high CPU consumption.
Detection rates were pretty decent in the Response sets; RAP tests were hindered by scans repeatedly snagging on a particular cluster of samples (the product sitting on one for an entire weekend), and even with dozens removed we were only able to get partial results for the oldest week of samples. This threw the scores out a little, but we would extrapolate those we did record to give pretty decent reactive rates, with the proactive week showing a fair decline. The core sets were handled well, and a VB100 award is easily earned. The only issues noted were with specific malicious files and would be unlikely to upset any real-world users, nevertheless a not-quite-perfect ‘stable’ rating is all that can be granted. Our test history shows a decent record of late, with four passes and one fail from five entries in the last six tests; seven passes and two fails in the last two years.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 6.5.0.11
Update versions: N/A
Emsisoft has been battling some bad luck with false positives in our tests for a while now, and comes into this comparative hoping for a change of fortunes. The current version measured 128MB, and installed with just a couple of clicks; the interface is a little quirky and unusual, but is reasonably easy to navigate, providing a decent basic level of controls.
Speeds were very quick on demand, with generally very light overheads on access, although much of this may be down to the default behaviour of not checking much on-read; with the settings turned up to a level of thoroughness similar to most other products, speeds decreased considerably. Our set of tasks still took some time to complete, but resource use was on the low side.
Detection rates were as excellent as we have come to expect from this product and the Ikarus engine included in it, with very impressive RAP and Response scores. With no issues in the WildList, a clean sheet in the false positive sets was enough to break Emsisoft’s run of misfortune, earning the product its first VB100 award for a while. This is the product’s first pass from five entries in the last six tests; longer term, we see three passes from nine entries. Stability was ‘solid’.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 11.0.1139.1184
Update versions: N/A
Sticking with third-party engines, eScan is another Bitdefender-based solution, although like the previous entry it includes much of its own input. The installer was towards the upper end of the scale at 180MB. It ran through in a number of stages with moments of action interspersed with further queries, completing with a speedy scan and a request for a reboot. Updates were a little on the slow side, averaging just over five minutes over all the runs.
The interface is fairly flashy and stylish, but is also simple to operate, providing an excellent degree of fine-tuning in a much more sensible fashion underneath the glitzy exterior. Scanning speeds were pretty good, and overheads pretty light, with low use of memory, average CPU use and a minimal effect on our set of activities. Scores were as excellent as expected in the RAP sets, but strangely disappointing in the Response sets, hinting that updating had not run as smoothly as we had been led to believe. Nevertheless, the core sets were properly dealt with, and a VB100 award is duly earned. The vendor’s recent history is very good, with six passes out of six this year; ten passes and two fails in the last two years. The product’s stability this month was rated ‘solid’.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 4.2.76.0
Update versions: 7067, 7200, 7206, 7215
Finally we come to a product including no third-party engine. Eset came into this comparative looking to maintain a record-breaking streak of successful appearances. The current product came in as a fairly lightweight 56MB installer, which ran through a very familiar set of steps, including the trademark insistence on explicitly choosing whether or not to alert on ‘potentially unwanted’ items. The process completed in good time with no reboot required; updates were likewise zippy to the point of being unnoticeable.
The interface is familiar from long exposure over the last few years, looking good and providing splendidly complete configuration with decent clarity of layout. A few controls still seem to overlap somewhat, and we continue to question whether we are using the on-access archive unpacking controls incorrectly, or whether they simply don’t work.
Testing tripped along nicely, with solid speeds on demand and incredibly light overheads on access. RAM use was lowish and CPU use not bad either, and our set of activities zipped through in splendid time. Detection rates were solid, with a good showing in the Response sets and strong scores in the RAP tests – a little behind the leaders in the reactive weeks but well up with the pace in the proactive week. The core sets were once again handled without issues, although a number of suspicious alerts were recorded in the clean sets, most of them quite accurately highlighting toolbars and other extras of dubious merit. Eset thus comfortably earns a VB100 award, extending its epic unbroken run still further – the product has 12 passes in the last two years. With no problems of note, Eset earns a ‘solid’ stability rating.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 2.5.0.18
Update versions: 12.3.20.1/425083.2012041810/7.41933/7073173.20120417, 425498.2012041913/7.41962/7077818.20120419
Returning after having some serious problems last time around, ALYac is another product based on the increasingly popular Bitdefender engine, again with some extra input in the form of the Korean company’s own ‘Tera’ engine. The installer submitted was 141MB, and the set-up process was fairly straightforward, following the standard pattern of stages, taking about a minute to get through and needing a reboot at the end. Running updates once again seemed unreliable, with several attempts taking several minutes before bailing out. Once again, the only way of telling this had happened was an error message which disappeared after five seconds – users not monitoring the process closely as it fetched several hundred files would easily have missed this, and would have been presented only with the misleading information that an update attempt had been run. Eventually, a complete update appeared to finish satisfactorily on each install.
The product interface is fairly attractive and reasonably well laid out, although a few oddities of translation make some sections less than lucid. Configuration is fairly rudimentary but the basics are available, and after some serious issues in the last comparative, stability this time was much improved.
Speeds were superb on demand, with on-access overheads pretty light; resource use was around normal, with an average impact on our set of activities. Detection rates were stunning in the RAP sets, as expected, but once again we saw a significant drop in the Response sets, implying that updating may not have been as successful as we had been led to believe. The core sets were properly dealt with though, and a VB100 award is duly earned. Officially, ESTsoft now has three passes from three entries in its first year of participation, although an entry in the last test was not included due to problems in getting it to work. Some suspect updating behaviour this month has apparently now been addressed by the developers, but it still marks the product down to only a ‘fair’ rating.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 4.1.3.146
Update versions: 4.3.396/15.439, 4.3.398/15.650, 15.655, 15.677
Fortinet has shown some impressive improvements of late, with scores rapidly catching up with those of its rivals apparently mainly due to heuristic rules being improved and turned up. The current product came as a very small 10MB main package with 33MB of updates, and set up in less than half a minute with just the standard set of stages to click through. No reboots were needed, and updates took only a couple of minutes extra to get done.
The interface is unflashy and businesslike, providing a decent degree of fine-tuning, but the default settings are pretty thorough and there was little we needed to adjust. Scanning speeds were not the fastest, and overheads were perhaps a fraction on the high side in some parts of the sets; while resource use was not exceptional, our set of activities took rather a long time to complete.
Detection rates were good though, well up with the leading pack on the RAP chart and with a strong showing in the Response sets too. No issues were encountered in the core sets, and a VB100 award is thus earned without fuss. Fortinet’s test history now shows five passes from five entries in the last six tests; nine from ten in the last two years. The product earns a ‘solid’ stability rating this month.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 6.0.96
Update versions: 4.6.5
Frisk’s product is one of the least changed in the last several years, and the latest version brought no surprises. The installer weighed in at 25MB, with 27MB of updates, and ran through just a handful of steps, taking less than half a minute to complete but needing a reboot at the end. The interface remains bleak and chilly, but with a certain charm to its simplicity. It provides only basic configuration, and it seemed mostly stable and responsive through the tests.
Scanning speeds were decent, and overheads fairly light, with low use of RAM but pretty high CPU consumption; our set of tasks ran very quickly indeed though, barely distinguishable from the baseline measures. Detection rates were less splendid, however, coming in towards the bottom of the pile in both the RAP and Response sets, the RAP scores once again showing a bewildering refusal to conform to the expected patterns. The core sets were properly handled though, and Frisk earns a VB100 award. With stability rated as ‘solid’, our test history for Frisk also looks good, with all of the last six tests passed; ten passes and two fails in the last two years.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 99.84%
False positives: 0
Product version: 11.5.2.133
Update versions: AVA 22.4657/AVL 22.857, AVA 22.5203/AVL 22.969, AVA 22.5256/AVL 22.974, AVA 22.5280/AVL 22.974
Another proper server-oriented product, this one came in as a monster 636MB executable. This turned out to be a full-blown management server administration utility which took some work to install, first attempts failing thanks to not having the .NET framework ready to go on the test systems. With the .NET framework in place, the process was re-run, and turned out to be reasonably simple, including the set-up of an MS SQL database system among many other tasks. No initial reboot was requested, but we later discovered that restarting the machine made things work much more smoothly. With the administrator set-up, we then deployed the client component to the local system. On about half of the installations performed, this was a simple process, with the local system immediately offered as a potential target, but on the other half the list of available targets was empty and the machine had to be specified by name; in these cases a reboot was also needed to enable the client to deploy. Once this stage was complete, the actual installation took place on a seemingly arbitrary schedule, and the machine had to be left alone for a while until it happened. On some occasions a reboot was needed to get the client interface available, while on others it materialized in the system tray of its own accord.
Configuration is available through the administration interface, including reporting and scheduling of tasks, but we opted to cede control to the client subsystem for simplicity. Updates, however, were performed from the administration unit, and again it was difficult to predict just how long they would take, both to download to the management system and to deploy to the client. Over all the installations performed, at least half an hour was needed to get from bare system to fully functioning, up-to-date client.
From there on things were much more straightforward, with scanning speeds reasonable from the off and blasting through in the warm runs. Overheads were initially a little heavy, but soon became barely perceptible. As predicted – and warned about by the product itself – turning the thoroughness of scanning up increased the overheads considerably. CPU and RAM use were a little on the high side – likely at least in part due to the administration system being present – and our set of tasks took somewhat longer than average to get through, though not disastrously so.
Detection rates were, as ever, pretty remarkable, with barely anything missed in the reactive part of the RAP sets, and the proactive part well covered too; Response scores were pretty decent. The WildList sets presented no difficulties, and with no false alarms either, another VB100 award goes to G Data, putting it on five passes from five entries in the last six tests; eight from ten in the last two years. The product earns a ‘stable’ rating thanks to the unpredictable deployment experience.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 5.0.5134/5.2.5162
Update versions: 3.9.2533.2/11809, 3.9.2539.2/12016, 12044, 12049
GFI’s VIPRE is a relative newcomer on the anti-virus scene, but in a few brief years it has picked up a solid reputation, with a decent record in our tests. As ever, the main package is very compact at just 11MB, with updates a little larger at around 80MB. The install process is slick and glitzy, with a single licensing screen followed by a quick slideshow boasting of the product’s prowess, which completes in under a minute with an offer to join another ‘ThreatNet’ data-gathering scheme.
The product interface is similarly slick and pretty, but is rather short on configuration for business purposes; the options that are available are at times less than obvious in their intent. With some experience, though, operation is reasonably straightforward – although, as usual, the combination of delayed on-access scanning (which initially allows access to some complex items) and logs curtailed to the last 100 entries made the on-access tests a little more trying than some. Awkward and wasteful XML logs for the on-demand scans have also been problematic in previous tests, but the provision of a simple parsing tool has made those worries a thing of the past.
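To give a flavour of what such a log-parsing helper does, the sketch below shows a minimal approach in Python. The XML layout used here (`<scan>` containing `<item>` elements with `path` and `result` attributes) is purely hypothetical – the real VIPRE log schema and the vendor-supplied parsing tool are not documented in this report – but the principle of extracting per-file verdicts from a verbose XML log is the same.

```python
# Minimal sketch of an XML scan-log parser. The <scan>/<item> layout and
# the 'result' attribute values are assumptions for illustration only.
import xml.etree.ElementTree as ET

def count_detections(xml_text):
    """Return (total items scanned, items flagged as infected)."""
    root = ET.fromstring(xml_text)
    items = root.findall(".//item")
    infected = [i for i in items if i.get("result") == "infected"]
    return len(items), len(infected)

sample = """<scan>
  <item path="C:\\samples\\a.exe" result="infected"/>
  <item path="C:\\samples\\b.exe" result="clean"/>
</scan>"""

print(count_detections(sample))  # -> (2, 1)
```

In practice a tool like this would stream a multi-megabyte log with `ET.iterparse` rather than load it whole, but the tallying logic is unchanged.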
Scanning speeds were decent throughout, and overheads pretty light, with very low use of memory, lowish CPU use and a low impact on our activities. Detection rates were pretty impressive – a little behind the very best this month but keeping pretty close to the pace in the RAP tests, with Response scores similarly impressive. The core sets were correctly dealt with, and with no major issues to report GFI earns another VB100 award. A few large scan jobs did fail to complete happily – invariably those dealing with large numbers of detections – so a ‘stable’ rating is the best we can offer this month. Our test history shows four passes from five entries in the last year; eight out of nine in the last two years.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 6.0.0.0
Update versions: N/A
Bucking the standard trend which sees 2013 products start to emerge when 2012 is little more than halfway through, Hauri sticks to a 2011 branding. With the vendor’s history in our tests sporadic and less than illustrious, we hoped to see some improvement from a product which, based on the popular and usually strong Bitdefender engine, should really do very well indeed. The installer provided was a fair size at 167MB, but didn’t take long to run through its business. Updates were a little on the slow side (the company perhaps focusing its efforts on its home market in Korea), but there didn’t seem to be any problems.
The interface is slick and professional-looking, echoing the black-and-red colour scheme of Bitdefender’s own product lines. It favours the large-icon-and-button approach, but a good level of configuration is provided under the covers and it generally responded well. Under pressure we did see the GUI crash out a couple of times, not always when dealing with large amounts of malware.
Speeds on demand were a little underwhelming, but on-access lag times were reasonable; resource use was fairly low and our set of tasks took a little longer than most to complete. Detection rates in the RAP sets were excellent, closely mirroring several other products based on the same technology, but again the Response sets showed much lower scores than expected, suggesting yet another issue with updating. The core sets were properly dealt with in spite of this though, and Hauri earns a VB100 award – breaking a streak of bad luck that has lasted a couple of years. A few minor crashes were observed, but nothing too severe and only at high-stress times, thus earning the product a ‘stable’ rating. Our test history for Hauri now shows one pass along with two fails in the last six tests; one pass from five entries in the last two years.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 2.0.134
Update versions: 1.1.118/80976, 81423, 81454, 81473
Ikarus is another product that has changed little since we first encountered it several years ago, although we have heard that a change of name is in the offing, and it may well be that a refresh of the interface will accompany it. For now, we stick with a familiar pattern: the product was provided as a complete CD ISO image measuring just over 200MB but containing much more besides the basics, along with updates in a separate 73MB archive. The installation process is pretty simple and speedy, running through a few initial steps then rushing to completion in moments. Online updates took a little longer, averaging around six minutes in total. An option to take part in a ‘Signature Quality Assurance Program’ (presumably another of the now ubiquitous feedback schemes) is hard to spot during the install process, being greyed out if the default update settings are accepted.
The interface itself uses the .NET framework and is thus a little wobbly at times, but seems to have improved greatly over the years, in stability if not in appearance. Fairly basic options are provided, but operation is pretty straightforward and requires minimal brainwork.
Scanning speeds were slow over archives and binaries but pretty quick elsewhere, with a similar pattern in the lag times, which were very heavy in the first two parts but not too bad over the remainder of the speed samples. RAM use was OK, CPU use very high, and our set of tasks took a fair while to get through.
Detection rates were very good indeed though, with some excellent scores in the RAP sets and impressive rates throughout the Response sets too, and with no problems in the clean sets (where issues have been all too common in the past) and nothing to complain about in the WildList sets either, Ikarus comfortably earns a VB100 award. The vendor now has two passes and two fails from the last six tests; four passes and four fails in the last two years. This month’s performance earns the product a ‘solid’ rating for stability.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 8.1.0.646
Update versions: N/A
One of our most venerable participants, Kaspersky maintained an unbroken run of appearances in our comparatives from the very start until a fateful day a couple of years ago, but has returned to reliable appearances ever since. The company’s business product was provided as a hefty 242MB install package, with a mirror of the full update system at 259MB but including a wide range of additional data. Setting it up required going through a fair number of steps but it only took a couple of minutes, including the building of a list of ‘trusted’ applications at the end. Updates were mostly fairly speedy but on one install they took over 15 minutes to complete, bringing the average up to seven minutes.
The interface is a little quirky, with some of the on-demand scan options hidden away separately from the rest of the controls, but with a little exploration and practice it soon becomes simple to operate, and it remained solid and stable throughout testing. Scanning speeds were rather slow, especially in the set of miscellaneous file types where we saw a total freeze in the last test, but they were much quicker over media and documents. Overheads were pretty light throughout, the only exception being archive files once the settings had been turned up to extremes. Resource use was fairly average, and our set of tasks ran through nice and quickly.
Detection rates were as splendid as we have come to expect, challenging the very best this month including some multi-engine solutions, and Response scores were solid too. With nothing of note in the clean or WildList sets, a VB100 award is duly granted. Stability was rated a firm ‘solid’, and in our test history for the company’s business line we see three passes and two fails in the last year; eight passes and three fails from 11 entries in the last two years.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 2.1.1116.0
Update versions: 1.123.1747.0, 1.127.1440.0, 1.127.1739.0, 1.127.1848.0
Microsoft has a much less lengthy history in our tests than many, but has established itself as a major heavyweight player in many sectors and regions, at the same time building an appropriately strong history of performances in our tests. The vendor’s business solution, Forefront, remains light though, with the main installer under 20MB, and offline updates at 65MB. Set-up is simple and very quick, with updates also speedy, the full process averaging less than two minutes across multiple installs with no reboots required.
Operation is similarly straightforward, with a fairly limited set of controls but the basics well covered, and stability was fine throughout. Scanning speeds were mostly decent, a little slow over archives which were delved into deeply by default, while lag times on access were barely noticeable. Resource use was low and our set of tasks blazed through in very impressive time, some runs completing quicker than some of our baseline measures and the average only just tipping over into positive numbers.
Detection rates were pretty good too, snapping close at the heels of the leading pack in the RAP sets and very respectable throughout the Response sets. With the core sets properly handled a VB100 award is comfortably earned by Microsoft. The product is also awarded another ‘solid’ stability rating, and in our history for Forefront (which usually only appears in server tests) we see two passes from two entries in the last six comparatives; five from five in the last two years.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 99.86%
False positives: 0
Product version: 9.00
Update versions: 6.08.06
Norman is another long-standing regular, and its business product seemed little different from the company’s home-user offerings. The 228MB install package provided included the latest updates and ran through the standard stages with just a couple of pauses, seeming to complete within a minute or so. This appearance was a little deceptive though, as things continued to be worked on in the background, the product itself not being fully ready for several more minutes after the install interface shut down.
The product’s own GUI is a little challenging, displaying in a browser-type setting and suffering from the difficulties of working in such a format. Settings changes were frequently lost thanks to changing pages without clicking the ‘save’ button (no warning was presented to prevent such errors), and occasionally lengthy loading periods had to be endured. The most glaring issue with it, however, was the requirement to click through no fewer than seven JavaScript error messages every time the interface was opened. For on-demand scans a proper interface was presented after running a right-click scan, which was then used for all further jobs.
Scanning speeds were pretty sluggish over archives, and also slow in the set of binaries, but they were reasonable elsewhere. Overheads were a little on the high side throughout. RAM use was low though, and CPU use pretty decent too, with our set of activities completed in good time. Detection rates continue to impress, having improved markedly in the last few tests, now threatening to catch up with the leading performers. The WildList and clean sets were dealt with without issues, and Norman earns a VB100 award, along with a ‘stable’ rating mainly thanks to interface wobbles. In our test history, Norman has five passes and one fail in the last six tests; ten passes and two fails in the last two years.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 4.3.89
Update versions: 14.2.31, 15.0.46, 15.0.51, 15.0.53
Parent of a couple of products already discussed here, with more still to come, Preventon has become a regular fixture on our test bench with its solution based on the VirusBuster engine. As usual, the installer measured around 90MB and ran through the standard set of steps to complete in good time, needing no reboot; updates averaged less than two minutes.
The interface is fairly simple and basic, providing no more than the minimum set of options. Some server admins may find this restricting, but the product generally responded well, suffering a few minor issues previously mentioned many times, most notably ignoring its own settings when running right-click scans. Speeds were decent and overheads not bad either, with resource use a tiny bit higher than many, and impact on our set of activities similarly slightly above average.
Detection rates were not very impressive but fairly respectable, with again some problems completing the RAP tests, since scans bailed out without notice when encountering some nasty samples. No issues were encountered in the core certification sets, earning Preventon another VB100 award. With a rating no higher than ‘stable’ thanks to a couple of minor interface issues, the product’s history looks decent with five passes from five entries in the last six tests; seven passes and two fails from nine attempts in the last two years.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 99.86%
False positives: 0
Product version: 13.00(6.0.0.4)
Update versions: N/A
Quick Heal’s VB100 history is another that stretches far back into the mists of time, and after a run of considerable reliability a rocky patch has emerged in the last few tests.
The latest version for servers came as a large 307MB installer, which set up in good time with little interaction required, completing in under a minute with no reboot called for. On one install, updates repeatedly failed, but after waiting half an hour for some unknown issue at the server side to resolve, everything seemed fine; other installs managed to update in minutes.
Scanning speeds were pretty quick, and overheads were pleasingly light too, although RAM use was high; CPU use was not much above average though, and our set of tasks got through fairly quickly. Detection rates were not too great in either the RAP or Response sets, but the core WildList and clean sets presented no shocks, bringing a brief spell of misfortune to an end and earning Quick Heal another VB100 award. Stability this month was rated ‘solid’, and things seem to be back on track with three passes and two fails in the last six tests; nine passes from 11 attempts in the last two years.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 2.5.0.18
Update versions: 12.3.20.1/425083.2012041810/7.41933/7073173.20120417, 12.3.20.1/432196.2012060719/7.42498/726393.20120606, 12.3.20.1/432664.2012061116/7.42555/7277649.20120611, 12.3.20.1/433117.2012061318/7.42574/7280998.20120612
Roboscan makes its first full appearance in a comparative this month, although its first submission was actually in the previous test, when we were unable to coax it into operating properly. Essentially a clone of ESTsoft’s ALYac targeting western markets, it is based on the Bitdefender engine with some additional detection technology courtesy of the Korean sister company. The installer measured 138MB, and presented no surprises with the steps that were run through, completing in a minute or so with an offer to open the product interface – which then failed to appear. Opening it manually proved more effective however, and testing proceeded unimpeded. Updates were once again rather worrying, running through a large number of data files and threatening to come to an abrupt and unannounced stop at any moment, but this time each run seemed to complete without issues, and on comparing the version details recorded in the process of compiling this report, they seem to have been much more successful than ESTsoft’s own.
The GUI is almost identical to that of ALYac, with only branding changes, and is reasonably clear and well laid out with a decent basic level of controls. Testing proved unproblematic, with impressive scanning speeds including improvements in the warm runs, light overheads and very low resource usage, with our set of activities running through quickly. Detection rates in the RAP sets were excellent, but once again the Response scores were way off the mark, implying that the updates had been less successful than they appeared. With the core sets handled properly though, Roboscan manages to earn a VB100 award at its first proper attempt. That makes the vendor’s first entry in our official test history look good, but it should be noted that a submission in the last test was discounted after we failed to get the product to function sufficiently well to include it in the report. Stability was much better this month, but some suspect updating behaviour marks it down to merely ‘stable’.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 10.0.3/10.0.5
Update versions: 3.30.0/4.76G, 3.31.20/4.78G
Back to more familiar fare, the latest offering from Sophos came as a fairly small 103MB installer, with updates very compact at just 3MB. The installation process was fairly standard and speedy, running through in under a minute with no need to reboot, and updates were also pretty zippy.
The interface is businesslike and efficient, simple to operate and navigate with an enormous wealth of fine-tuning available, and it ran smoothly throughout testing. Scanning speeds were a little slow over archives (which are scanned fairly deeply by default), and not super quick over binaries and similar items either, but they were fairly quick elsewhere, with lag times on access showing a similar pattern. RAM use was fairly low, CPU use a little above average, and impact on our set of tasks fairly light.
Detection rates were not bad – more impressive in the proactive week than elsewhere in the RAP sets, but Response scores were rather better thanks to cloud look-ups. The WildList and clean sets presented no issues, and a VB100 award is earned with minimal fuss. The product earns a ‘solid’ rating for stability this month, and in our test history we see four passes and two rare fails in the last year; ten passes in the last dozen tests.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 71.176
Update versions: N/A
Another from the Preventon/VirusBuster stable, albeit with a slightly different approach to the interface, SPAMfighter’s products have become a fairly regular sight on our test bench in recent years. Slightly larger than its cousins at 94MB, the install package ran through a similar process, the only exception being the request for an email address. The process completed in good time with no reboot required. Operation is fairly simple, with the interface pretty clear and well laid out, providing only basic controls. Stability was mostly OK, although when running scans of large (or even only fairly large) sets of malware the whole interface invariably froze up, requiring a reboot to return the system to full operation. On some occasions this proved frustrating, as the frozen on-access alert dialog covered the reboot dialog, which on this platform demands a reason for restarting the system, thus preventing a normal manual restart.
Otherwise there were few issues though, with scanning speeds not too bad, overheads a little heavy in places but reasonable in others, and the expected mediocre detection rates. Again, the RAP tests took some effort to complete after initial runs crashed out multiple times in one particular cluster of samples. The core sets were properly dealt with though, and a VB100 award is earned. The stability rating could be no more than ‘fair’, thanks to a number of issues. In our test history, five passes from five entries in the last six tests looks good; longer term we see six passes and two fails in the last two years.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 12.1.1000.157 RU1
Update versions: 06 June 2012 r34, 10 June 2012 r17, 11 June 2012 r34
Symantec has become something of a rare sight in our tests of late, its corporate solution having last appeared over a year ago. Returning to the test bench, an ample instruction document was provided to assist us in setting up and operating the product, which was very welcome but not really necessary for most tests, as all seemed relatively clear and straightforward. The installer was a fair size at 175MB, but ran through only a few stages including the offer to ‘join the fight against Cybercrime’, with the whole process including updates completed in under two minutes.
The interface is colourful but maintains a professional air, providing an excellent level of configuration in an accessible format. Parts of the interface met with rather surprised nods of approval from some members of the lab team, and it seemed to remain solid under pressure throughout testing. Scanning speeds were reasonable to start with but zoomed through the warm runs, while overheads were mostly light, increasing only when the archive scanning settings were turned up to the max.
The product did not participate in the RAP tests at the request of the submitters, as a considerable amount of its detection is provided by cloud look-ups, but scores in the Response test were respectable. The WildList sets were handled well, and the clean sets produced only ‘suspicious’ type alerts, most of which were labelled simply ‘Suspicious.Cloud’. These included a large number of components from a single package, Adobe Reader 9.0, as well as some other fairly well-known items such as Skype, Microsoft’s SharePoint and IBM’s Lotus Notes, with the remainder made up from various free and shareware items (many of which may well have suspicious components). As such alerts are permitted under our rules, a VB100 award is earned with no problems. It is worth noting that if suspicious alerts were counted when measuring detections, the Response set scores would have challenged the very best this month. Stability was impeccable, earning the product a ‘solid’ rating, but our test history is sparse for this product: this is its only entry in the last six tests, with three passes from three entries in the last two years.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 6.6.2281.201
Update versions: N/A
Tencent is an entirely new name on our roster, although not a complete unknown to the anti-malware world. This is the company behind the QQ instant messaging system and one of the world’s busiest web portals: the IM program boasted over 700 million active accounts by the end of last year (according to Wikipedia at least), while the qq.com website is the eighth busiest in the world at the time of writing, ahead of Twitter.com and Amazon.com, according to Alexa rankings. Outside the company’s native China, QQ is also well known to malware researchers as a popular target for worms and data-stealing malware. The company’s anti-malware product is currently available in Chinese only, but from a brief pre-test overview appears to have a wide range of features; we of course mainly focused on the malware detection element, which is based on the Avira engine.
The installer provided measured 111MB, putting it among the smaller solutions this month, and the set-up process ran through just a handful of steps, some of which we actually managed to divine the meaning of. No reboot was needed to complete.
We can say little about the interface, but it looks like a fairly standard design and we were just about able to navigate parts of it using the icons marking different sections. Thankfully, a detailed guide was provided by the developers which helped us through the more complex parts of our task. Scanning speeds were pretty impressive, only archives slowing things down a little thanks to thorough default settings, but the on-access lag times – which appear almost invisible in our charts – should mostly be ignored as the product does no scanning on-read. On-write scanning is clearly a little cumbersome though: our set of activities, which includes a number of copy and write tasks including unpacking of archives, took a very long time to complete. RAM use was fairly low, but CPU use was pretty high.
Detection rates were hugely impressive though, with RAP scores putting the product way up in the top corner of our chart, and Response scores were also excellent. The certification sets were brushed aside effortlessly, and Tencent, courtesy of Avira, comfortably earns its first VB100 award on its first attempt, along with a ‘solid’ rating for stability.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 12.0.0.833
Update versions: 1.6.0.1884/5778.0.0.0, 5852.0.0.0, 5858.0.0.0, 5863.0.0.0
Back to something a little more familiar, Total Defense Inc. seems to have completed its separation from CA, the originator of this product, and is in the process of parting ways with HCL, which was behind much of the development in the past. For now at least, much CA branding remains, and the product is essentially unchanged from its first appearance a few years back.
Installation is performed from a full DVD image, although thankfully these days we no longer have to install administration tools first, and for the purposes of the RAP test, updates were taken online on the deadline day. The set-up process is something of a chore, with a lot of information to fill in; some points seem almost deliberately irritating – such as the requirement to select a US state and provide a ZIP code even after a non-US country has been selected, and the inability of the licensing module to accept the full licence code in a single paste operation. Once it was all set up though, updates were reasonably speedy, completing in just a couple of minutes.
The interface is crisp and clear, with a decent level of controls and a fairly sensible layout. It seemed stable throughout testing, and as usual powered through our speed tests in superb time. Overheads were also fairly impressive, slowing down only when archive scanning was enabled on access. RAM use was a little high and CPU use also above average, but our set of tasks got through in decent time.
Detection rates were once again rather poor, coming in well down on the RAP chart, and underwhelming in the Response tests too. However, the core sets were handled better, with no issues in the WildList sets and no false alarms either, thus earning Total Defense a VB100 award. Stability, if not necessarily design, was sufficiently impressive to merit a ‘solid’ rating, and in our test history we see four passes and one fail from five entries in the last six comparatives; seven passes and three fails in the last two years.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
Product version: 12.0.0.4865/12.0.0.4872
Update versions: N/A
TrustPort is the second of only two products this month to use two separate third-party engines, and once again the high-performing Bitdefender engine is in the mix. The latest build came as a 184MB package including updates, and took around a minute to run after the usual handful of stages. Online updates were a little on the slow side, averaging around five minutes thanks to the need to update both components separately. On one occasion a further update was requested immediately after one had completed, but this did not appear to be the norm.
The product has a rather unusual approach to interface design: there is no main GUI, but a configuration tool provides most of the requirements in one spot. Overall, the system is fairly clear and usable, and it responded well under pressure, although as in previous tests we observed some rather odd changes in window focus behaviour – affecting other applications as well as the product itself – after we had run a few heavy tests.
Performance was less impressive, with fairly sluggish speeds on demand and overheads a tad high on access; resource use was above average but not excessive, and our set of tasks didn’t take too long either. Detection rates were as impressive as we have come to expect, with RAP scores some way ahead of a strong field and some very high numbers in the Response sets too. The WildList presented no difficulties, but in the clean sets a single false positive, shared with engine provider AVG, was enough to dash TrustPort’s hopes of a VB100 award this month, as well as earning it a strikethrough on the RAP chart. Some minor oddities in system behaviour were adjudged sufficient to mark the product down to ‘stable’, and our test history now shows two passes and two fails in the last six tests; the longer-term view is better though, with six passes and two fails from eight entries in the last two years.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 1
Product version: 3.0.89
Update versions: 14.2.31, 15.0.46, 15.0.51, 15.0.53
Another member of the Preventon/VirusBuster family, UtilTool has put in quite a few appearances of late. The 89MB installer produced no surprises, following the standard path in a speedy manner, but the interface was a little different from its sibling products – a slightly newer, somewhat boxier-looking version which we have seen in a few products recently. Operation was much the same however, and updating once again completed in a couple of minutes.
Scanning speeds were reasonable, while overheads were heavy in the archives and binaries but light elsewhere. Resource use was on the low side for RAM, a little high for CPU and our set of tasks ran through in surprisingly good time. Detection rates were distinctly mediocre almost across the board, the WildList sets being the only exception, and with no issues in the clean sets either UtilTool earns a VB100 award. That makes for three passes and a single fail from four entries since the vendor’s first appearance under a year ago. UtilTool earns a ‘solid’ stability rating this month.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 99.86%
False positives: 0
Product version: 7.4.52
Update versions: 5.5.0/14.2.32, 5.5.1/15.0.46, 15.0.51, 15.0.53
Finally this month comes VirusBuster itself. As ever, it was unlikely to produce many shocks thanks to multiple appearances of its engine in various guises already. An 88MB install package was combined with 92MB of updates. The set-up process required quite a few steps to be clicked through, but once that was done it completed in good time. Updates were frustrating again – failing initially, but eventually completing happily if left to their own devices. The interface is an MMC monstrosity, the cause of much head-scratching thanks to its unpredictable and inconsistent layout, but it did at least remain fairly stable.
Scanning speeds were impressive in some parts of the test sets, but slow over binaries and archives, with fairly hefty overheads on all but archives, which go unopened even with the setting to inspect them apparently activated. Detection rates were some way off the pace, but just about within respectable limits, and the core sets were managed properly, earning VirusBuster a VB100 award. The product maintains an impeccable record of 12 passes out of 12 in the last two years, and earns a ‘solid’ rating for stability.
ItW Std: 100.00%
ItW Std (o/a): 100.00%
ItW Extd: 100.00%
ItW Extd (o/a): 100.00%
False positives: 0
This test has been a much more pleasant experience for us than the last one. It seems rather premature to announce that our nascent stability rating scheme has reaped miraculous rewards (particularly given that the submission date for this test came before the publication of the last set of results), but we have certainly seen far fewer problems with the product line-up this time. Perhaps that can in part be put down to the platform, with servers requiring much more reliable solutions than can be foisted upon mere members of the public, or perhaps it’s no more than good luck that we should have had a relatively smooth run in a month where moving our lab ate up a large chunk of testing time.
Once again, the pass rate is fairly high compared to an average of around 70% in the last few years. We can only speculate as to the reasons for this, but it seems likely that it is partly due to a shift in focus for the WildList: the Extended list contains only non-replicating items, and few of the tricky polymorphic virus strains which generally wreak havoc have made their way onto the standard list. We have also had to be fairly lenient when putting together our Extended sets, as during the ongoing settling-in phase we continue to see changes made and items removed long after the list’s official release (further samples were still being struck off even as this final report was being compiled, several months on). We continue to do our best to make sure that our tests are as thorough and stringent as ever, but the VB100 award is, by design, something that any decent solution should routinely pass. So perhaps we shouldn’t be surprised to see so many products doing well – perhaps it indicates not so much a marked improvement in products’ detection rates as a decrease in errors, blunders and moments of lunacy by the developers (better ‘quality’ in the ‘assurance’ sense of the word).
Next up is another Windows 7 test, with the usual wide range of entries expected on a desktop platform, no doubt including a clutch of new faces. We hope to catch up with our schedule and get this one out in good time, and also to be able finally to finish the details of our stability rating scheme so we can include a nice, easily readable table of results. If there’s anything else our readers would like to see documented or tested, we remain, as ever, open to suggestions.
Test environment. All products were tested on identical machines with AMD Phenom II X2 550 processors, 4GB RAM, dual 80GB and 1TB hard drives, running Microsoft Windows Server 2008 R2, Enterprise Edition, with Service Pack 1. For the full testing methodology see http://www.virusbtn.com/vb100/about/methodology.xml.
Any developers interested in submitting products for VB's comparative reviews, or anyone with any comments or suggestions on the test methodology, should contact [email protected]. The current schedule for the publication of VB comparative reviews can be found at http://www.virusbtn.com/vb100/about/schedule.xml.