2009-08-01
Abstract
This month VB's test team put 35 products through their paces on Windows Vista. After sifting through product freezes, crashes, hangs, logging difficulties and false positives, John Hawes reveals the full results.
Copyright © 2009 Virus Bulletin
Windows Vista limps onto the test bench once more this month – perhaps not quite the lame duck it is reputed to be, but certainly far from a roaring success. As the release of its replacement (Windows 7) approaches fast, little nostalgia has accumulated for the platform, with the user base still barely troubling its aging predecessor XP. Most estimates put Vista on fewer than 30% of desktops, with XP holding onto more than 60% of the marketplace some eight years after its release and a year after the first stages of its withdrawal from sale. Popular opinion continues to belittle Vista’s accomplishments and most would-be upgraders seem content to wait for the new and improved version 7, due in just a few months’ time.
Our own previous experiences with the platform have done little to endear it to us, and presumably the developers of most anti-malware solutions have similar feelings, given the oddities, instabilities and general bizarreness we’ve seen on the platform in previous tests. We expected to see more of the same this time around, and hope that the advent of a replacement will mean that not many more comparatives on Vista are necessary. The arrival of a new service pack promised to bring a new level of unpredictability to the mix, with the added stability it was designed to provide counterbalanced by the likelihood of a whole new range of horrors.
Installing and preparing the test systems was a little less unpleasant this time thanks to a little experience, with many of the pitfalls – such as the tendency to go into deep sleep in the middle of an overnight scan – circumvented at an early stage. Applying the new service pack proved unproblematic, if rather long-winded, and the systems did seem more stable than on previous occasions. As usual, our Luddite tendencies led us to disable most of the funky graphical stylings and revert settings to the ‘classic’ style where possible, but the UAC system was left intact to monitor how well various products were integrated with it, knowing full well that in some cases it would produce numerous intrusions and in a few it might need to be disabled completely.
Several of the products submitted were unable to comply with our request for offline updating, so in addition to snapshots of the bare test system, several more snapshots were taken on the deadline date with these products installed and updated in situ, allowing them to be tested on a level playing field with the others.
The deadline for product submission was set for 24 June, which proved a more than usually busy day thanks to the extra tasks of installing products, connecting them to the web for updates and taking snapshots. We were relieved that the field of entrants was not as enormous as it might have been, with several of the occasional entrants of recent tests failing to turn up and some prospective newcomers deciding at the last minute that they were not quite ready to dip their corporate toes into the often chilly waters of the VB100. In the end a total of 37 products were entered for the test, but as in previous tests we reserved the right to exclude any which proved intractable or uncooperative, to allow enough time to test as many products as possible.
A measure taken for the first time this month has been to impose a nominal charge for multiple entries from the same vendor. We have no intention of breaking with the VB100 tradition of being free and open to all comers, but in recent tests a number of vendors have opted to submit multiple products, which has added significantly to the growing burden of testing. To avoid passing on to our readers the additional costs (in terms of hardware, space and manpower) of the ever-increasing field of competitors, we have opted to impose a per-product fee on the third and subsequent submissions from any single vendor (any vendor may submit up to two products to each test free of charge; a nominal fee is charged for each product beyond this number). This month just one vendor chose to enter three separate products and was duly requested to contribute to our running costs, but of course this was not allowed to influence our treatment of the product in any way, either in the opinions given in the write-up or in the results collected from it.
With the test systems prepared and the field of products gathered, the final stage of set-up was the compilation of the test sets, which as usual since the introduction of our RAP testing system was not completed until a week after the deadline for product submissions. The bulk of our test sets were already frozen, with the standard test set deadline set a few days prior to the product deadline, on 20 June. The May 2009 WildList was released a few days prior to this date, and was thus used as the basis for our core certification set. The list was remarkable for the large number of new items included, dominated as many recent lists have been by online gaming password-stealers. Of most note in the list were a handful of samples of Conficker (aka Downadup), whose headline-grabbing days seem well in the past now, along with some social network targeting items such as W32/Koobface. Of most interest to us, however, was the addition of a genuine and by all accounts highly tricky file-infecting virus – one of many sub-strains of the W32/Virut family which have caused a number of problems for major products in the past. With the ongoing development of our automated systems, we were able to include fairly large numbers of replicated samples in the test set. This meant that a list containing 677 items was represented by over 3,000 unique samples. As usual, percentages presented in the results tables are based on per-variant detections, rather than per sample, with the 2,500-odd Virut samples counted as a group with the same weighting as a single sample of the other entries, hence the rather fine percentage margins in some cases.
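For readers curious about the arithmetic, here is a minimal sketch (in Python, using illustrative figures rather than our actual tooling, and assuming for illustration that the list breaks down as 676 plain entries plus the one replicated Virut entry) of how a replicated group is collapsed into a single fractional entry:

```python
# Hedged sketch of per-variant weighting: each plain WildList entry counts
# as one unit, while a replicated group (such as the ~2,500 Virut samples)
# counts as a single unit, scaled by the fraction of its samples detected.
# The figures below are illustrative, not taken from any product's results.

def weighted_percentage(plain_hits, plain_total, group_hits, group_total):
    units = plain_total + 1                        # plain entries plus one group
    credit = plain_hits + group_hits / float(group_total)
    return 100.0 * credit / units

# A product detecting all 676 plain entries but only 2,400 of the 2,500
# replicants scores just short of 100% - hence the fine margins.
print(round(weighted_percentage(676, 676, 2400, 2500), 3))   # 99.994
```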
The growth in size was also seen elsewhere, with similarly large numbers of Virut samples added to the polymorphic set to represent some of the other sub-strains emerging in recent months. The RAP sets were compiled in the weeks leading up to and the week following the product submission deadline, and as usual fluctuated in size somewhat thanks to the unpredictable flow of samples into our various feeds. The trojan set was compiled from similar sources in the month or so prior to the RAP start date.
The greatest addition to the test sets this month was to the clean sets, with several hundred thousand new samples making their appearance after an ambitious period of sample gathering. The bulk of the samples came from well-known and widely used software brands and products, as part of a project to reorganize our clean sets by significance. While we expected few new false positives to emerge, it was of course impossible to rule out major and embarrassing slip-ups by some. We anticipated that the main impact of the enlargement of the set would be seen in scanning times and stability issues.
With all this squeezed onto the test systems, we prepared to shut ourselves away in the test lab, unlikely to see the sun for some time and with the prospect of a long and difficult month ahead.
First on the roster, Agnitum’s suite product is nice and thorough, and as such has a rather slow installation process, which somewhat unusually creates a restore point before it gets underway. The interface is nicely designed and clear, but gives little space to the anti-malware component in amongst the various other modules, and so provides fairly little by way of user configuration. Running the tests proved fairly unproblematic, but at one point, during on-access scanning of clean files, a nasty crash complete with blue screen was observed. With care, however, the tests were completed.
Detection rates proved pretty impressive, with a fairly steep drop in the +1 week of the RAP sets mitigated by some comfortingly even scores in the reactive portion, making for a strong overall average. Coverage of the large number of new Virut samples was impeccable, and the WildList was detected without issues, but in the clean sets a single file from the large swathe of new additions was alerted on as a trojan. The detection was recognizable to the practised eye as packer-based, but as the file is included with a recent version of Microsoft’s .NET framework – something which labs really should be tracking as part of their false positive mitigation regime – this was considered enough to deny Agnitum a VB100 this month.
AhnLab’s product has had something of an overhaul since we last looked at it, and presents a clean and appealing interface with a speedy installation process needing no reboot to complete. The interface has a few quirks of layout and also a few stability issues under heavy fire, suffering some lengthy freezes after longer scans and on-access runs. Logging also proved somewhat tricky, as the log viewer utility seemed unable to cope with large logs, spending some time trying to refresh its listings but eventually giving up. On a couple of occasions we also observed the test machine mysteriously shutting down during a long scan, which we attributed to overheating. To complete testing, some of the test sets had to be broken up into smaller chunks to ensure scans ran to the end and to enable accurate collection of data.
Scanning speeds were pretty decent though, and the interface generally proved simple to use and responsive, so testing completed in reasonable time despite the extra steps required. Although a fair number of samples of recent W32/Virut strains were missed, the specific variant included in the WildList set was handled without difficulty, and with no other misses and no false positives, AhnLab earns the first VB100 award of this month’s batch.
Alwil’s avast! remains pretty much unchanged after some time in its present form, with its somewhat quirky design still resembling a media player in its standard ‘basic’ layout. A major new version is due sometime soon, and we look forward to the opportunity of taking a look at it in the coming months. For now, the installation process remains fast and simple, with a reboot not specifically required but recommended in case of problems. The advanced interface required for much of our testing still has a tree layout with some oddities of its own, which we found somewhat confusing despite much practice, and which continues to have a few issues during longer scans: a lack of refreshing leaves useful data invisible and inaccessible until the end of a scan.
Detection rates were pretty strong – not quite up to the excellent standards achieved in recent months, at least on the more recent samples in the RAP sets, but still good, with an overall RAP average of 74%. The large number of new Virut samples presented no difficulty, and with no other issues in the WildList or extended clean sets, another VB100 award is granted to Alwil.
AVG’s product remains attractive and well designed. The offer of a toolbar somewhat dampened our enthusiasm, but its recommended rather than enforced nature made up for this a little. At the end of the rather sluggish install, a ‘first run wizard’ leads through some initial set-up steps for the various components, and once finally through to the main interface with its multiple module/icon layout we found it fairly intuitive to use, if a little over-complicated in places. Configuration options appeared a little limited for our tastes – for example, there is no option to scan archives on access, and the on-access scanner relies on file extensions to decide whether or not to scan things.
Speeds were a little below average, but detection rates more than made up for this, with superb levels across the board, the RAP average pushing close to 85% in a masterful display. With no issue in any of the new Virut strains, or anywhere else really, AVG comfortably earns our praise, and of course a VB100 award.
Avira’s AntiVir has a nice swift installation, with a requirements wizard which we found a pleasing touch. No reboot is required to get things going, and a swift check-up of the system ensures everything is up and running safely. The interface presents a sleek and easy-to-navigate layout, with an excellent level of configuration available without overwhelming the user. Again, file extensions are considered a reliable method of judging whether a file needs scanning.
Perhaps aided by this shortcut, scanning speeds proved excellent, and detection rates once again highly impressive, very nearly catching the whole of our trojans set and also close to 85% average in the RAP sets. With no difficulties in the WildList set, and only a handful of suspicious alerts in the clean sets, Avira also walks away with a VB100 award.
We have been begging for a new version of CA’s corporate product for some time, and have heard hints that a major overhaul may be on the horizon, so when this month’s submission arrived labelled ‘refresh’ there was some excitement in the lab. However, after the install, with its usual long chain of EULAs and data gathering, we were treated to no major changes beyond a slight adjustment to the look and feel. As observed in some earlier tests on Vista, the browser-based interface is quite a lot less sluggish to respond than on other platforms, but remains rather awkward to use for any serious purposes. While settings appear to be present in some depth, some obvious items are missing, and others, such as the option to scan archives on access, fail to work once they have been dug up and enabled.
Scanning speeds were as remarkable as ever, with the product powering through the test sets in incredible time, and detection rates in the older parts of the sets were decent. The RAP scores were somewhat disappointing, but in the core certification areas of the WildList and the clean sets no problems were encountered, and a VB100 award is duly granted to CA.
The latest version of CA’s home-user product is smooth and clean and generally very pleasing to look at, and setting up and running through the tests proved a fast and simple process. The on-access alert pop-ups had a tendency to recur rather more often than strictly necessary, but never caused any issues with the normal running of the system beyond their irritation value. In the standard set of tests, results were much the same as with the corporate version – remarkable speeds, reasonable to disappointing detection rates, but no major issues in any of the test sets, including the clean set.
However, the product submitted for this month’s test was the full Internet Security Suite, rather than the simpler anti-virus solution entered for previous tests. The suite includes, among other modules, an anti-spyware component. This component is pre-programmed to run a spyware scan on a schedule, which seems to be set up for a first run not long after installation. At the end of the scan, whether or not installed spyware is found on the machine (and indeed on other occasions, such as when attempting to disable the anti-spyware component) a pop-up appears, informing the user that unidentified, non-specific ‘threats’ have been discovered on the machine, which can only be removed by a fully licensed version of the product. Our test machines, although laden with malware sitting harmlessly on the hard disks, are in fact quite pure and free from infection, with no malware installed or even present in the system drive. On further testing, we found that the same pop-up appears on machines freshly installed with a clean copy of Windows and with no whisper of a ‘threat’ present. The issue appeared only to arise on systems disconnected from the Internet, and thus not fully ‘activated’, but this is a scenario in which real-world users may well find themselves – for example, when checking a suspect, quarantined machine for infections – and in which they may be misled.
Although we fully accept the developers’ insistence that the issue is a bug and that no deception is intended, the suggestion that these vague ‘threats’ are present is counted as an unspecified number of false positives, and CA’s home-user product is thus denied a VB100 award for this month.
eEye’s Blink is a rather complex product with a range of components, and the installation process reflects this in its duration and complexity. The process is accompanied by the product’s usual range of peach, sky blue and other delightful pastel tones, as is the main interface when it comes along. With many other modules to manage, including a firewall which appears to be entirely disabled by default, the anti-malware component (which is based on the Norman engine) is afforded few configuration options, but the basics are catered for and the defaults are pretty sensible.
Scanning speeds were somewhat slow, most likely thanks to the implementation of the Norman Sandbox for extra protection. This provided a solid level of detection in the less recent parts of the test set, with a slight dip in coverage of the newer samples in the RAP sets and a small number of the new Virut samples not covered, but the strain included in the WildList set was fully detected. With no other issues in the rest of the WildList or the clean sets, eEye earns a VB100 award.
The latest iteration of ESET’s product has changed little on the surface. The usual fairly smooth installation process was interrupted only by a UAC prompt for an unfamiliarly titled installer program halfway through, and the usual attractive, excellently designed interface was present with its wealth of in-depth configuration. As in some previous tests, the stability of the interface was somewhat questionable under pressure, with a few wobbly moments evident especially after heavy bombardment in the on-access tests. On a couple of occasions we had to resort to the task manager to kill the product in order to get access to the GUI again.
These minor quibbles (unlikely to affect the bulk of everyday users) were more than made up for by some stellar detection rates, with the standard sets covered almost impeccably and the RAP sets handled with similar excellence. With some decent, if not outstanding scanning speeds, and no problems in either the WildList or clean sets, ESET easily earns another VB100 award.
Filseclab’s somewhat oddly named Twister AntiTrojanVirus makes its second appearance in the VB100, having impressed last time around with its slick presentation and stable operation, if not with its detection rates. This time once again the install process was fast and smooth, although the UAC system presented some serious warnings about unknown and untrusted publishers. The main interface is clear and lucid, with a user-friendly and attractive design.
Once again the on-demand mode proved fast and stable, while the on-access mode presented something which we would later find to be a recurring issue in this test: the inability to block access to infected files. Twister is designed primarily as a behavioural and HIPS product, intended to monitor executing programs for malicious behaviour, with standard anti-virus-style file access hooking a more recent addition than much of the rest of the product. In this case the on-access component seems only to log attempts to access files, doing nothing to prevent them from being accessed. The logging proved reliable however, and speeds were decent in both modes, although as the on-access module was not actually preventing access, the speed measurements may not be strictly comparable with those of other products. Detection rates were also fairly decent, at least on the less recent items in the standard sets, although handling of polymorphic viruses was less than impressive. In the RAP sets detection rates were somewhat below par, but at least even and regular. The WildList was not fully covered, with fairly minimal coverage of the Virut variant included there, and in the clean sets a number of false positives turned up, denying Filseclab a VB100 award this time – but the product still looks a promising prospect.
Another product making its second appearance in our tests, Finport also had some issues with the UAC controls, requiring them to be turned off to allow the install process to complete successfully. While the main interface is pleasantly laid out and as simple as the title suggests, some aspects remain incomplete, with the EULA and some portions of the configuration and logging presented in the Cyrillic characters of the developers’ native Ukraine.
The controls are minimal but would be sufficient for many inexpert users, with a sensible set of defaults. Some areas of vital importance to us, such as stability and logging, seemed excellent, although some scans did present counts of ‘warnings’ at the end, with no details as to what we had been warned about. Detection remains pretty sketchy, particularly over the polymorphic sets, and there is much work to do to achieve full coverage of the WildList, but false positives were fairly few, and with some more work a VB100-worthy product could be within Finport’s reach.
Fortinet presented a pleasantly redesigned interface in the previous test, and it returned this month. Warning messages about unsigned drivers during installation also returned with a vengeance thanks to the UAC system, which had to be disabled to allow the install to complete properly. The design is good though, with an excellent level of configuration and some very thorough default settings befitting its primarily business audience. Some other issues emerged with the UAC interaction, including the requirement to be running as admin to access some system files, which resulted in a standard scan of the C: drive halting halfway through. Logging was also a bit of an issue, as what had been nice clear records appeared at one point to become compressed and encrypted without notice.
With careful saving of logs the full set of tests was completed and results obtained, showing some mid-range scanning speeds and a considerable improvement in detection over the trojans set from the product’s last few appearances. The RAP sets showed much work still to be done, and as in previous tests a quick recheck with the ‘extended databases’ enabled, along with heuristics and ‘grayware’ detection, showed a huge improvement – but this could not be counted towards the official scores as all are disabled by default. In the core areas, however, no problems were encountered, with a clean run over the WildList and clean sets, and a VB100 award is duly granted.
Frisk’s nice simple product installs at a reasonable pace and appears to be carrying out a lot of activity after installation, with a lengthy pause observed before the requested reboot was allowed to take place. The simplicity and relative shortage of configuration allows the interface to be clean and easy to use, although in a few places the available options seem a little esoteric. Logging is also fairly sparse, with no obvious recording of on-access detections, and the scanner remains somewhat prone to hangs and crashes; several error messages appeared during bigger scans of both clean and infected sets, and while in some cases a ‘continue’ button allowed scanning to complete, in others a restart was necessary.
Scanning speeds were only reasonable, and on-access overheads a little on the heavy side. Detection rates were generally pretty decent in the standard sets, although the RAP sets once again left something to be desired, hinting at issues with keeping up with the vast numbers of new samples appearing. The WildList, including the many Virut samples, was handled without issue though, and no problems were encountered in the clean set either, thus earning Frisk another VB100 award.
The first of two entries from F-Secure this month is the company’s standard desktop solution, which seems pretty much unchanged since we first encountered it a few years back. The install is surprisingly quick considering the number of components included, and requires a reboot to complete. The layout is fairly simple to navigate, and has a nice quirky but unfussy look and feel. The thoroughness of the detection took rather a heavy toll on our test systems, which seemed to wear out quite quickly and on a few occasions shut themselves down unexpectedly. We also noted a few issues with the product itself, which seemed to lose touch with its controls, the ‘scan target’ button regularly failing to bring up the required dialog if clicked on too soon after the completion of a scan. The thoroughness of the standard settings led to some rather slow scanning speeds, but on access, rather surprisingly, the product relies on file extensions to determine what to scan.
With the speed tests handled, the infected sets proved more difficult thanks to the rather shaky logging that has been mentioned in the past. This seemed a less serious issue this time though, and complete logs were obtained with only minimal moderation of scan sizes, while reboots between scans circumvented the issue of the failing scan button. The final results showed a handful of samples of recent Virut variants missed in the polymorphic set, but no problems with the WildList strain. Detection rates overall were excellent, with a pretty decent showing in the RAP sets, and with no false positives either F-Secure earns a VB100 award.
This is a more corporate-oriented version of the F-Secure product, ‘PSB’ standing for ‘Protection Service for Business’. On the surface it seems much the same as the Client Security (CS) product, with a rather slower install process interrupted by a UAC warning about an unidentified publisher. Although we had to refuse its request to connect to the Internet to validate itself, all appeared to be running just fine, and pretty similar results to the CS version were found in the speed tests.
Running the product over the infected sets, perhaps overconfident after some luck with the CS version, we once again ran into the dreadful logging issues previously discussed. With a scan of any length producing more than a few hundred notable events, the log viewing process seems unable to cope and produces heavily truncated logs. In a business environment this would be unacceptable, and it made things pretty tricky for us – once again forcing us to carry out a time-consuming series of cautiously small scans. Eventually, after many frustrating reruns of tests in an attempt to find a viable set size, the data was gathered, and provided much the same results as the Client version – overall very thorough detection rates and no false positives, thus earning F-Secure a second VB100 award. The experience was not the most pleasant, however, and we will be looking closely at our rules on logging accuracy for future tests.
G DATA’s installation was a little slow, and forced a restart on completion. The interface is sleek and stylish and provides a fair level of configuration, although more experienced users may wish for more control over the behaviour. The multi-engine product has some seriously thorough default settings and took some time plodding through the tests. It also took a heavy toll on system resources, on a couple of occasions causing unexpected shutdowns. Scanning speeds were thus rather slow, with some pretty hefty lag times on access too. Logging also proved a little fiddly.
Detection rates were impressive however, with virtually nothing missed in the standard sets and an overall average in the RAP sets of over 85%. With full detection of the WildList and no false positives, G DATA easily wins another VB100 award.
K7 has had a bit of a rollercoaster ride in the VB100 in the couple of years since it became a semi-regular entrant, with some excellent detection levels tempered by the odd unexpected drop and an occasional false positive. The product itself is hard to dislike, with a swift and simple install process requiring no reboot, and a colourful, easy-to-navigate interface with sensible defaults and a reasonable degree of configuration for a home-user product.
It zipped through the speed tests in pretty good time, and achieved some very good detection rates in the standard sets, although the RAP scores were a little down on previous performances. In the clean sets, a single file was misidentified as malware, a component of a suite of mobile phone software from Sony Ericsson. This is unfortunate for K7, as the sample in question is unlikely to trouble users in the company’s key market of Japan, but under the strict rules of the VB100 any false positive is enough to spoil a product’s chances of qualification, and K7 will have to wait a while to earn another VB100 award.
Kaspersky’s current product is a stylish and attractive beast, with a lovely shiny interface that is a pleasure to explore and provides plenty of data on the activities of its various components in the form of eye-catching graphs and charts. Set-up is not too complex and configuration is ample without becoming overwhelming. Despite some pretty thorough defaults, scanning proceeded at a good pace and on-access overheads didn’t seem too heavy. The intensity did show itself a few times however, with a number of unexpected shutdowns as experienced with a few other products this month.
With a little care taken not to overtax the product, these issues were soon overcome, and results were easily gathered. These showed things to be much as expected, with most scores a notch or two better than other products using the same engine – especially in the RAP week +1 set where a truly excellent level was attained. A couple of recent Viruts went undetected, but precious little else, and with the WildList and clean sets handled ably, a VB100 award is easily earned.
Kingsoft’s ‘Standard’ product has appeared in our tests before. This time it looked much the same, with a fairly standard install process that is not too taxing on the user, and a simple set-up wizard for basic configuration. The interface is well designed with a tabbed set-up which keeps all the required controls in easy reach.
Scanning speeds were fairly mediocre, and results likewise; the standard sets were handled fairly well, with some work needed on polymorphic viruses. Perhaps the best that can be said of the RAP figures is that they are consistent. No false positives were observed in the clean sets, but the trouble with polymorphic viruses extended to the Virut variant in the WildList set, and thus no VB100 award is granted for this performance.
Kingsoft’s advanced product has shown slight superiority over the standard edition in previous tests, although on the surface all seems much the same, with an identical appearance and no visible mention of its separate designation during install or use.
We quickly noticed that things were moving much faster this time, in the install process as well as in both sections of the speed test, and when the detection results were processed we saw a considerable improvement there too. Polymorphic detection rates were up, and a very creditable score was achieved in the trojans set. Even the RAP sets produced some decent figures, all without causing any new false alarms. The polymorphic improvement extended to full coverage of the Virut variant on the WildList, and for this edition Kingsoft earns a VB100 award.
This is the first appearance for the home-user version of McAfee’s product, but it is not entirely unfamiliar; having found it installed on a laptop received as a gift, I have spent some time wrestling with its Machiavellian removal process. For those not blessed with a free trial copy on new hardware, the product installs entirely from the Internet, so may not be suitable for anyone who likes to have their system protected at all times when connected to the wild wild web. The interface is curvy and colourful and fairly appealing at first sight, but navigation through what appears to be a wealth of options proved to be extremely difficult and rather disappointing – many of the controls we would have liked were either absent or too well hidden for the likes of us to discover. Problems with the destruction of our samples as well as some quirky on-access behaviour were overcome by careful analysis, sneaky workarounds and appeals to the developers for assistance. We eventually managed to get to the end of testing, though hampered once again by a number of surprise halts of the test system, generally in the middle of a long scan.
Checking over the results also proved something of a chore, as logging – once we had removed the rather low default size limit – seemed rather flaky, producing mangled text with lines crushed together, seemingly random use of case and other quirks. Satisfactory results were eventually obtained after multiple retests, and showed the expected very solid levels of detection, though with a fairly steady decline through the RAP sets as samples grew fresher. No problems were encountered with the WildList and no false positives emerged either, and a VB100 award is thus granted.
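Our clean-up of such logs is necessarily ad hoc, but the following sketch (in Python, with a purely hypothetical ‘detected:’ marker standing in for whatever a given product writes) gives a flavour of the kind of normalization involved:

```python
# Hedged sketch of normalizing a mangled scanner log: re-insert line breaks
# where records have been crushed together, and flatten the random casing.
# The 'detected:' marker and the record format are hypothetical.
import re

def normalize_log(raw):
    # break the text before every occurrence of the marker, case-insensitively
    marked = re.sub(r'(?i)(detected:)', r'\n\1', raw)
    return [line.strip().lower() for line in marked.splitlines() if line.strip()]

sample = 'DeTecTed: a.exe Trojan detected: B.EXE Worm'
print(normalize_log(sample))
# ['detected: a.exe trojan', 'detected: b.exe worm']
```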
McAfee’s corporate product is more familiar, and a welcome sight after its somewhat wayward sibling. Everything here is much simpler and more businesslike, providing a much more satisfactory level of control yet somehow remaining more accessible and navigable. It has remained the same for many years now; the developers seem to be sticking to the principle ‘if it ain’t broke, don’t fix it’ (most sensibly, in our opinion). The only minor issue we noted during testing was that the on-access protection seemed to shut down momentarily when the settings were changed – perhaps only for a few seconds, but long enough for us to notice by running our on-access test scripts too soon after an adjustment.
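The scripts themselves are in-house tools, but a minimal sketch of the kind of probe involved might look like the following (assuming a copy of the standard EICAR test file at a hypothetical path; timings are arbitrary):

```python
# Hedged sketch of an on-access probe: repeatedly attempt to open a harmless
# detection-trigger file (the standard EICAR test file) and record any moment
# at which the open succeeds, i.e. the on-access hook is not intercepting.
import time

EICAR_PATH = r'C:\testbed\eicar.com'   # hypothetical location of the test file

def probe(duration_s=10.0, interval_s=0.1):
    gaps = []
    end = time.time() + duration_s
    while time.time() < end:
        try:
            with open(EICAR_PATH, 'rb') as f:
                f.read(1)              # read succeeded: protection was absent
            gaps.append(time.time())
        except (IOError, OSError):
            pass                       # access denied: the scanner stepped in
        time.sleep(interval_s)
    return gaps

# Run immediately after changing a product setting; any timestamps returned
# mark windows in which the file was readable.
print(probe())
```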
No problems were encountered with the stability of the product or the test system, and as would be expected from a serious business product all detection activity is faithfully and accurately recorded. Detection rates seemed closely comparable with the home-user product. A few fractionally lower scores could be attributed to the offline updater package provided for the test being a few hours older than the updates applied to the home-user product during its brief time connected to the Internet on the deadline day. Again no problems were encountered with the WildList, and no false positives were generated either, thus McAfee earns a second VB100 award this month.
Microsoft’s corporate product is here on its own this month, with OneCare on its way into retirement and its replacement, code-named Morro but apparently now to be referred to as ‘Security Essentials’, anticipated very soon. Forefront’s install process is rather different for us than for standard users thanks to the set-up of our lab. This made for a rather complex process with multiple reboots, but the standard set-up should, one hopes, be rather smoother and less laborious. The product has a pretty basic interface with extremely limited configuration, including the rather cryptic option to ‘use the program’, which apparently provides the option to shut it down completely if required. With response to clicks somewhat sluggish, and set-up of scans not as simple as some, we would normally have been tempted to resort to the context menu scan (which has become something of a standard these days), but here for some reason it does not appear to be provided.
Nonetheless, we ploughed through testing without significant issues, although the ‘History’ option appears to be rather unreliable, on many occasions spending several minutes pondering after a lengthy scan only to present a blank screen and a message implying no detections had been recorded from a scan discovering several tens of thousands of infected files. Thankfully, full and reliable logging is buried in the product’s file structure, in a folder which Vista warned we should not be probing into but which was found thanks to some tips received from the developers. Parsing these showed some superb detection rates, continuing a long-term upward trend in the product’s prowess, with the RAP week +1 detection particularly noteworthy as the highest of any product tested this month. The WildList was handled without problems, although once again a fair number of the additional Virut sub-strains added to our polymorphic set this month were missed. This does not affect certification though, and with no false positives encountered Microsoft more than deserves its VB100 award.
MicroWorld’s eScan product has been settling in fairly well since the decision to rely entirely on the company’s own detection technology. The current version has a nice simple install process, which somehow feels a little old-fashioned next to some of the super-slick products appearing of late, and produces a few unexpected pop-ups of unpredictable appearance during the process, as well as lingering for several minutes at the ‘finishing’ stage.
Once up and running though, things are nicely laid out and simple to navigate, with a decent level of options. On-demand scanning is remarkably slow, perhaps not helped by the default option to log details of every item scanned rather than only infected or otherwise troublesome files, but on-access overheads seemed fairly light by comparison and there were no issues with stability.
Detection rates were most commendable, with no issues at all in the WildList, worms and bots or polymorphic sets, and precious few misses in the trojans set either. The RAP test was handled with considerable style, and with no false positives uncovered in the rather bulky logs of the clean sets, a VB100 award is duly granted.
A newcomer to the VB100 this month, Japan’s Nifty Corporation contacted us not long before the test deadline and bravely put its product on the line. With no English translation available, we used the standard Japanese-language product – with a user guide kindly provided by the developers and a Kanji dictionary to hand to look up any troublesome words in the interface. As no offline updater could be provided, we installed the product on the deadline date, updated it and took a snapshot for later transfer to the test systems, but this proved no big deal, with a smooth and easy installation and a fast, straightforward update process.
The main interface is quite attractive, and is a little unusual compared to much of the rest of the market, but this is of little surprise. Even with the Japanese characters only partially rendered on our English-language systems, it seemed fairly simple to navigate based on recognition of standard iconography and a basic if rather rusty understanding of the writing system. Logging is fairly minimal by default, but a simple registry tweak allows more detailed records to be passed to the event log, from which the required information was gathered without difficulty. The product is based on the Kaspersky engine, and thus, as one would expect, provides an excellent level of detection across the board, along with some impressive stability under pressure, although the thoroughness is naturally tempered by some rather slow scanning speeds and perhaps less than ideal on-access overheads. With no problems encountered in any of the test sets, Nifty Corporation takes away a VB100 award at its first attempt, and we look forward to its return.
Norman’s current suite installs rapidly and easily, with the only tricky question during the process being whether or not to enable the ‘Screen saver scanner’, designed to run a scan when the system is idle. Although on by default, we opted to disable this in case it interrupted our normal testing. The end of the process suggests that a reboot may be necessary once the attempt to update has completed. This was indeed the case, with a small and rather subtle pop-up prompting for the restart a minute or two after the install proper was done.
The interface itself greatly resembles that of the Norman appliance product reviewed in these pages last month (see VB, July 2009, p.21), making for a nice consistency across products. However, it seemed a little sluggish to respond at times, perhaps in part thanks to the general slowness observed in the browser rendering on which it relies. Navigation could not be simpler though, and with a fairly minimal set of controls little time was spent using the interface.
With no obvious option to set up scans of specific areas from within the GUI, context-menu scans were used for all on-demand tests. On a few occasions, returning to the GUI to check settings after a hefty scan found it whited-out and failing to respond, and in most cases a reboot was required to regain control, but protection seemed stable throughout. Rather amusingly, even while the main GUI is in this state, the licensing wizard – which has long been a regular feature during tests of Norman offerings, popping up every so often to pester the user into fully licensing the product – is blocked by the UAC system and requires user confirmation to commence its nagging.
Scanning speeds were generally pretty good, and detection rates decent, with a notable dip in coverage in the most recent parts of the RAP sets. Although a handful of the new Virut samples in the polymorphic set were missed even with the Sandbox system enabled, none of the WildList samples went undetected, and without false positives either, Norman earns a VB100 award.
The first of three entries this month from PC Tools, the plain vanilla AV product has long been our favourite of the range, mostly thanks to its relative ease of use and adherence to standard anti-malware functionality. The install is fairly quick, although there is a rather worrying pause at the end before the product finally appears. It has a pretty simple layout, and very little by way of configuration, which always makes us somewhat uneasy. The on-access tests proceeded with ease, however, trundling through the speed tests in fairly sluggish time and showing quite some slowdown in the general responsiveness of the test system. Once we reached the infected sets, things got rather worse, with the on-access scanner holding up for a few dozen samples before shutting itself down with a rather limp error message (perhaps overwhelmed by the cascade of alert messages it insisted on flooding down one side of the screen). This happened on numerous occasions, requiring at least one and occasionally several reboots to get the engine to reload and resume protection.
With much care though, we managed to get through the on-access tests with reasonable success, and the on-demand tests proved much smoother and more straightforward. On final analysis, detection levels were mediocre, with the RAP scores particularly low (but at least consistently so) across the weeks. The WildList was mostly handled well, but with a little under half of the samples of the new Virut variant missed, and a single false positive in the clean sets too, no VB100 award is forthcoming despite all our testing efforts.
The second PC Tools offering is the company’s complete suite, combining the anti-malware protection of the preceding product with additional anti-spyware and firewalling. The install is fairly similar, with a reminder to remove any competing products and the offer of a ‘browser defender’ toolbar. The interface looks much the same but is even shorter on controls for the anti-malware component, perhaps in part thanks to the additional modules taking up valuable space. This time we opted to run the on-demand tests first and these proved much as expected, with some slight improvements over the plain AV product in some areas but, rather surprisingly, some areas less well covered. We had a few moments of worry when we found that the log was fixed at a maximum size, but alternative logging was shown to be available as part of the ‘community’ program, designed to record data more accurately for the developers’ use.
Approaching the on-access scan with some caution, we soon found that although the product clearly states that it ‘monitors and blocks’ the launching, accessing, copying or moving of malicious items, it actually appears to do none of these. Our standard on-access tool, which performs a simple open on the test files, provoked no response, and copying around the system seemed similarly ineffective. We eventually noted some pop-ups and logged items, and the occasional denial of read or write privileges, and in the end resorted to a combination of copying around the local system and copying to the system across the network. The VB100 rules do not require blocking, only evidence that a malicious file has been noticed, so we were able to cobble results together from a combination of the product’s internal logs and counts of the files successfully written to, denied access to, disinfected and so on. This was by no means simple, as pop-ups and log entries continued to appear – claiming to have intercepted and blocked something – up to three hours after the supposed blocking had taken place. The results gathered may thus be somewhat inaccurate, but only to the extent that the product itself was; they tallied at least reasonably closely with those of the plain AV product, and again the missed Virut in the WildList and a single false positive were more than enough to deny the product a VB100 award.
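For the curious, the sketch below illustrates – in simplified, hypothetical form, with made-up paths and no claim to match our actual scripts – the sort of before-and-after comparison used to classify each sample as deleted, disinfected, denied or untouched:

```python
# Hedged sketch of file-set comparison for on-access results: build a manifest
# of the pristine test set, expose a copy to the product, then diff the two.
# Paths are hypothetical; classifications mirror those described above.
import hashlib
import os

def manifest(root):
    """Map each file's relative path to its MD5, or None if unreadable."""
    entries = {}
    for base, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(base, name)
            rel = os.path.relpath(path, root)
            try:
                with open(path, 'rb') as f:
                    entries[rel] = hashlib.md5(f.read()).hexdigest()
            except (IOError, OSError):
                entries[rel] = None        # present but unreadable: access denied
    return entries

def tally(before, after):
    """Classify each original sample by what the product did to its copy."""
    results = {'missing': [], 'denied': [], 'changed': [], 'untouched': []}
    for rel, digest in before.items():
        if rel not in after:
            results['missing'].append(rel)     # deleted or quarantined
        elif after[rel] is None:
            results['denied'].append(rel)      # read access blocked
        elif after[rel] != digest:
            results['changed'].append(rel)     # disinfected or modified
        else:
            results['untouched'].append(rel)   # no evidence of detection
    return results

before = manifest(r'C:\testsets\master')       # pristine reference copy
after = manifest(r'C:\testsets\onaccess')      # copy exposed to the product
print({k: len(v) for k, v in tally(before, after).items()})
```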
Testing PC Tools’ mid-level product, a combination of the plain anti-virus with the company’s longer-standing anti-spyware solution, was much the same fiddly, frustrating and occasionally frightening experience as testing the suite (from which it seems to differ only in lacking a firewall). This time, a Google toolbar is offered during installation, for those who feel their browser does not have enough gadgets and gizmos. Otherwise, the interface, controls and layout are much the same.
Again, on-demand testing proved reasonably straightforward and reliable, while on-access scanning was rather confusing and short of that all-important sense of security. Scores were once again gathered by repeatedly moving test sets around, cobbled together from untrustworthy logs and analysis of file sets for changes, and should again be treated as unreliable thanks to the extreme difficulty of obtaining repeatable results. Extra care was taken with the WildList samples to ensure complete accuracy, and eventually we arrived at a score directly matching the suite’s – fairly large numbers of the Virut strain going undetected. Coupled with the same false positive as the other two offerings, this means none of PC Tools’ trio of entries manages to win a VB100 award this month.
Quick Heal’s installer is pretty unusual in providing a little scan of the core system even before installation commences, and runs through its set-up in good time. The interface is unchanged from the last few tests, being fairly plain and simple to navigate but with a few quirks rendering some useful items rather obscure, and the whole is generally slightly sluggish to respond.
Scanning itself is lightning-fast as usual, more notably so over infected files than in the clean sets used for the official speed measurements, which come out as no more than good. Detection rates have lagged behind somewhat in recent tests but here were pretty good, with a fairly sharp drop in the last few weeks of the RAP sets. No problems were encountered with any Virut samples, and with no false alarms either Quick Heal earns a VB100 award.
Rising’s suite product has quite an involved and lengthy installation process, starting off with some serious warnings from the UAC system and a choice of languages, followed by complex licensing, a selection of installation options, a momentary disconnection from the LAN and a reboot. Once up, with the trademark cartoon lion prancing around in the corner of the screen, the main interface is fairly clear and usable, with the unusual but sensible precaution of a CAPTCHA being presented when important settings are changed, to ensure the action is intentional and not caused by a malevolent presence.
Unlike most other products, on-access protection is not sparked by simple file access, so detection was measured by copying files to the system, which meant that standard on-access overhead measurement was not possible. Logging was also rather odd, taking the form of databases rather than easily read and parsed plain text, but a fairly reliable log processor helped skirt around this even with large amounts of data to handle. In the end, fairly decent scores were observed in the standard sets, dropping in steps through the more recent samples in the RAP sets. Thanks to incomplete coverage of the latest Virut samples in the WildList and a single false positive in the clean sets, however, Rising does not quite make the grade for a VB100 award this month.
The main component of Sophos’s Endpoint Security and Control product, Sophos Anti-Virus continues to stick to the tried and trusted interface design which has graced many a VB100 in recent years. The installation is somewhat long-winded, offering removal of third-party software and the option of a firewall among its many stages. Once up and running, confirmation of a UAC pop-up is required before the main GUI can be accessed – and also, perhaps more surprisingly, before a context-menu scan is carried out. As ever, configuration is available in extreme depth for those seeking it, options are generally easy to find and apply, and no issues with stability were encountered. We have previously mentioned issues with the progress bar as our main gripe about this product, but this time, in line with a general theme developing this month, we should mention the rather awkward logging set-up instead. While it seems somewhat petty to complain about excessive detail, the product does produce a large, rather confusing log, with no option to record the results of a particular scan to a particular location, and no option to purge existing data. This may only be of interest to testers, of course.
Scanning speeds were excellent and overheads quite acceptable on access, and detection rates very impressive across all the sets. With no problems handling the WildList and an absence of false positives, Sophos is a worthy winner of a VB100 award.
Symantec’s corporate desktop product had a facelift not so long ago, and now more closely resembles a home-user product than a business tool, with its bright colours and curvy shapes. The install is simple, if a little slow, but once up and running things are fairly responsive and easy to use – the serious configuration areas eschew the slick and shiny stylings of the main interface in favour of more traditional, solid, serious greys and right angles.
Zipping through the speed tests proved something of a breeze, and on-access tests were also pretty speedy, but the on-demand scanner took some time, particularly over infected sets – taking several days to complete the biggest scan and causing some worries as the end of the test period approached fast. The poor test machine also grew increasingly hot as the scan proceeded. Logs are recorded in extreme depth, to such an extent that the log viewing utility within the product is barely usable for our purposes, taking hours at a time to convert all the information for display. Fortunately, the bare logs were easily parsed and showed some pretty superb detection levels in most areas, with only the proactive week of the RAP sets showing any decline from the heights of excellence – something which should be addressed by the various additional proactive technologies included with the product.
In the WildList however, a tiny number of the new Virut samples were not detected; further analysis from Symantec has shown that a minor adjustment to the detection routines for this item, in place only for a short time around the submission deadline, led to the possibility of a fraction of infected samples not being detected – as few as one in 100,000 by the developers’ reckoning. It is extremely unlucky, therefore, that our set of 2,500 contained two such samples, but our rules are clear and our sets are designed to test completeness of detection. After putting together a quite magnificent unbroken run of 44 VB100 passes stretching back to the last century, this month Symantec is denied an award by a whisker.
Trustport’s installation procedure manages to be swift and straightforward despite some unusual steps, which include some initial configuration for the duo of engines used to provide protection. After the required reboot a registration wizard is presented, and the various control facilities can be accessed from a menu placed in the system tray. The main configuration tool has both simple and advanced modes, which provide a reasonable level of configuration options with all the main areas covered.
The dual engine approach as usual resulted in some fairly lacklustre scanning speeds in both modes, and the added strain on our tired old test systems saw more of those unexpected shutdowns – quite frequently during lengthy scans of infected sets. With some careful management of strings of small scans and saving of logs, plenty of data was acquired however. On processing, the results showed the expected outstanding detection rates, taking pride of place at the top of the table for the reactive part of the RAP sets and a close second in the overall averages. With no problems encountered in the WildList or clean sets, Trustport comfortably earns another VB100 award.
VirusBuster takes its accustomed place at the end of the comparative roster, with its familiar product presenting the same old outlook to the world. Its install process is enlivened only by the customary UAC pop-up at the beginning, and runs slick and smooth to completion.
The rather quirky design no longer presents much difficulty, mainly thanks to experience, but some of its oddities can still take the unwary tester by surprise. Not least of these was the lack of an option to simply run a scan and log results; opting for the ‘interactive’ method rather than the automatic cleaning/removal mode, we left a long scan of the clean set to run overnight, only to find, on arrival the next morning, that it had spent most of the night sitting waiting for a response to an alert. Fortunately, once this was given it provided an option to suppress further alerts, and could safely be left to run for another night. This set-up could be slightly frustrating for those who wish to leave their massive external hard drive to be scanned overnight but for whom the risk of false positives is too much of a concern to trust the product to automatically delete items detected.
Apart from this minor glitch everything else went smoothly, with some decidedly impressive results in the standard sets and some strong signs of improvement in the RAP sets too. The product had no problem handling the gamut of new Virut samples added to various sets either. The alert discovered after the abortive overnight scan presents the only hiccup in an otherwise excellent run – as with Agnitum at the very beginning of this long journey, that single suspect file from Microsoft’s .NET package is wrongly described as a trojan, and a VB100 award is therefore denied this time around despite a very good performance.
A remarkable feeling of calm descends over the test lab as we reach the end of a long, tough month of testing. This has been a more than usually arduous comparative for many reasons, the most obvious being the sheer size of the field of submissions. Though not quite a record, it was still a lot of products to get through, and the field was actually larger than shown here thanks to a couple of additional products which were eventually excluded from the test but still took up precious testing time. One of these was excluded thanks to limitations on logging which, even with our usual willingness to make the effort and plod through our tests in smaller chunks, would simply have taken too long to work around; the other rendered the test machine completely unresponsive on reboot.
Many of the products in this test did prove stable, speedy and well behaved, but many others had issues far too serious to be classed as mere quirks and oddities. We experienced a large number of freezes, crashes and hangs, not just of the product interfaces or of specific scans but in many cases seeing the whole machine shutting down. At first we suspected this was simply some incompatibility between Vista and our standard test hardware, but as the test progressed it became clear that it was happening frequently with a small group of products and not at all with the rest, implying that the activities of those specific products were the main factor in the incidents. We continue to investigate some new test procedures which will focus on product stability and proper interaction with the operating system.
Another major issue this time has been logging difficulties, whether it be unreliable, unnecessarily truncated, bizarrely mangled or strangely formatted log files, encrypted log files only accessible via untrustworthy display systems, or downright peculiar layout and content. We are considering imposing some rules on logging requirements which must be satisfied by any product before it will be accepted into our tests as we feel that, while it may be a rare thing for the average home-user to encounter large logs with high numbers of detections listed, it is a simple requirement of any product that it be able to account for its behaviour and record its own history.
The bulk of this month’s products made the VB100 grade – some just scraping across the line and some galloping home with plenty to spare. A handful of false positives caused problems for a few, most of which came from the sizeable new additions to the clean sets. There were also a few products that didn’t quite cover the highly complex polymorphic file infector that found its way onto the WildList for this test. As always seems to be the case with these items, whenever they appear on the list there are a few casualties. This month has also seen another interesting batch of figures from our RAP testing, which will be added into the aggregate graphs displayed at http://www.virusbtn.com/vb100/rap-index.xml.
With nothing more to be tested, the lab team is set to begin the process of clearing up and beginning preparations for the next test. We can only hope that some of the more troublesome vendors will be paying attention, and will provide better products next time, not just for our sakes, but for those of all their users.
Test environment. All products were tested on identical systems with AMD Athlon64 X2 Dual Core 5200+ processors, 2GB RAM, dual 80GB and 400GB hard drives, running Microsoft Windows Vista Business Edition, Service Pack 2, 32 bit.
Any developers interested in submitting products for VB's comparative reviews should contact [email protected]. The current schedule for the publication of VB comparative reviews can be found at http://www.virusbtn.com/vb100/about/schedule.xml.