2005-10-01
Abstract
27 products squeeze onto the Windows 2003 Advanced Server testing bench this month. Matt Ham has the details.
Copyright © 2005 Virus Bulletin
With Windows Longhorn now renamed Windows Vista, and still not expected for years, Windows 2003 Server remains the most recent server platform and will do for the foreseeable future. Having been in production for several years now, I expected the tests to progress easily on this occasion, since mature platforms tend to be less prone to problems. In the event, however, a host of problems were encountered. Some of these were due to the inefficiency of the products, though rather more were the result of questionable design decisions.
The products included in this month's review were required to have publication dates no later than 31 August 2005 (both for the product itself and any database updates). The test sets were aligned with the most recent WildList published at the time, which was the June 2005 edition.
As expected, the bulk of additions to the test sets were W32/Mytob variants. This worm was of note more for its vast number of variants than the overwhelming success of any particular specimen - over 100 variants were added to the test sets. The majority of additional samples within the WildList (and added to the In the Wild [ItW] test set) were worms of one sort or another.
With the addition of the horde of W32/Mytob variants to the test sets, one feature of the scanners of particular interest was their ability to use efficient generic detection techniques. All in all, however, there were no great challenges in terms of detection.
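For the curious, the principle behind such generic detection can be sketched in a few lines of Python. The byte pattern below is invented purely for illustration - real engines bring unpacking, emulation and heuristics to bear - but it shows the essential trick of a family signature: fixed bytes with wildcards at the positions where variants differ.

    import re

    # Hypothetical family signature: fixed bytes with wildcard positions
    # where variants are known to differ. The pattern itself is invented.
    FAMILY_PATTERN = re.compile(rb"\x55\x8b\xec.{2}\x68.{4}\xe8", re.DOTALL)

    def matches_family(path):
        # Flag the file if the wildcard pattern appears anywhere within it.
        with open(path, "rb") as f:
            return FAMILY_PATTERN.search(f.read()) is not None

One such pattern can cover a hundred variants, which is considerably more efficient than maintaining a hundred exact signatures.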
As a special note, for the throughput tests on the zipped clean sets, most of the products scan within archives in their default state; the remainder had archive scanning activated for these tests alone. The products which do not scan archives by default are those produced by AhnLab, Eset, McAfee, Sophos and VirusBuster.
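The throughput difference boils down to whether archive members are extracted and examined individually. A minimal sketch of archive scanning in Python, assuming a scan_bytes() verdict function, runs as follows:

    import zipfile

    def scan_archive(path, scan_bytes):
        # scan_bytes(data) -> bool is an assumed verdict function which
        # returns True for infected content; nested archives are ignored.
        infected = []
        with zipfile.ZipFile(path) as archive:
            for member in archive.namelist():
                if scan_bytes(archive.read(member)):
                    infected.append(member)
        return infected

Every member must be decompressed before it can be examined, which explains why archive scanning weighs so heavily on throughput figures.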
The only major problem encountered with AhnLab's offering was with the logs produced during on-demand scanning. These note only the file name on each line, rather than the full path, thus making analysis lengthy. However, this problem does not affect detection rate and is likely to be of little relevance for most users. Of much more importance are the matters of detection and false positives, both areas where V3Net performed sufficiently well to be awarded a VB 100%.
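Why filename-only logging slows analysis is easily sketched: each logged name must first be matched back to the candidate full paths in the test set, and any name carried by more than one sample needs disambiguating by hand. A rough illustration in Python, with an invented directory name:

    import os
    from collections import defaultdict

    def build_name_index(test_set_root):
        # Map each bare file name to every full path carrying that name.
        index = defaultdict(list)
        for dirpath, _dirs, files in os.walk(test_set_root):
            for name in files:
                index[name].append(os.path.join(dirpath, name))
        return index

    # A bare name in the log may correspond to several samples, and each
    # such entry needs resolving by hand - hence the lengthy analysis.
    index = build_name_index("testsets")  # hypothetical directory name
    ambiguous = {name: paths for name, paths in index.items() if len(paths) > 1}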
Please refer to the PDF for test data
With absolutely no problems or outstanding issues in its operation, avast! is destined for a rather uneventful write-up in this review. A VB 100% award will, one hopes, go some way towards making up for the lack of discussion concerning the product.
Please refer to the PDF for test data
Again, the performance of Command AntiVirus produced nothing to comment on other than the full detection of viruses in the ItW set and the lack of false positives. Instead I will content myself with congratulating Authentium on adding a further VB 100% to its collection.
Please refer to the PDF for test data
Having detected all samples in all of the test sets in the last Windows review, Avira will be pleased to have repeated the performance on this occasion. The fact that false positives are counted only in the non-archived clean test sets turned out to be fortuitous for Avira, since one clean archive was declared to contain a sample of W32/Fosforo. Scanning was otherwise a little slow but uneventful, and a VB 100% award is thus winging its way to Avira's headquarters.
Please refer to the PDF for test data
eTrust AntiVirus is provided with two scanning engines which can be exchanged at will: one can be used for on-access and the other for on-demand scanning if so desired. The InoculateIT engine is not activated by default, though, and is thus not eligible for a VB 100% award. It did, however, detect all samples in the wild, with no false positives.
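A minimal sketch of such an exchangeable-engine arrangement - the wiring below is purely illustrative and not CA's implementation - might look like this:

    # Hypothetical wiring: one engine assigned per scanning context,
    # exchangeable at will. One possible assignment is shown.
    ENGINE_FOR_CONTEXT = {
        "on_access": "vet",          # the default engine
        "on_demand": "inoculateit",  # the optional alternative
    }

    def scan(path, context, engines):
        # engines maps an engine name to a callable verdict function.
        return engines[ENGINE_FOR_CONTEXT[context]](path)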
Please refer to the PDF for test data
The Vet engine in the eTrust product performed slightly better in terms of detection than its optional counterpart, while speed tests produced similar results. Customers should therefore find little to complain about over the choice of default engine. Likewise, CA 's developers will be unlikely to complain at receiving a VB 100% award for their efforts.
Please refer to the PDF for test data
The strangest thing to happen while testing Vet was the production, during the installation procedure, of a dialog which read 'Should not see me' while the machine was rebooting. That apart, detection and false positives were much the same here as when the engine was tested in its eTrust incarnation. A second VB 100% for a product based on Vet 's engine is the result.
Please refer to the PDF for test data
Quick Heal has established itself in VB's tests as a reliable regular which tends to produce no major problems in testing. I appreciated this more than usual on this occasion, since I managed to lose my initial results for Quick Heal and was forced to repeat the tests. The overall result was identical, with a VB 100% narrowly missed on both occasions. The offending file was a .EML sample of W32/Nimda.A, missed on demand.
Please refer to the PDF for test data
Dr.Web's detection rates have always been high, with a handful of misses in the last few tests being attributable to optimization of older virus detections. On this occasion the optimizations were clearly working well, since all samples were detected in all test sets. A continuing irritation is this product's on-access scanner which, although still requiring a reboot for any configuration changes, no longer announces this fact. Irritation aside, a VB 100% is well deserved by the product.
Please refer to the PDF for test data
NOD32 has a strong history of detecting all infected files, with only a few minor deviations over the years. Yet again, no infected samples were missed across this month's test sets. A VB 100% award for Eset is the predictable result.
Please refer to the PDF for test data
Fortinet's product is becoming a familiar subject in VB's tests and its scanning results reflect this, with further improvements likely in the future. A VB 100% is awarded to FortiClient - a result which is also becoming a regular one for the product.
Please refer to the PDF for test data
Unusually, several more files were missed by F-Prot while scanning on access than were missed on demand. FRISK's development team will no doubt be looking into this, although the problems occurred not in the ItW test sets but among very much older samples. Therefore, with no false positives generated, a VB 100% makes its way to Iceland for F-Prot.
Please refer to the PDF for test data
F-Secure's product has had a number of uncharacteristic non-detections in some recent tests, but the product's detection rate returned to its usual high level in this test, and with no false positives a VB 100% is the result.
Please refer to the PDF for test data
With its combination of two engines, AVK has sometimes seemed slightly slow while scanning, though on this occasion speed was not a problem. The engine combination has also traditionally paid off with good detection rates and in this there was no change - all infected files were detected in all test sets. With no false positives, the product qualified easily for a VB 100%.
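The detection benefit of the dual-engine approach is simply the union of the two engines' verdicts, as the following sketch (assuming two verdict functions) makes plain:

    def dual_engine_scan(path, engine_a, engine_b):
        # A file is flagged if either engine flags it: detections are the
        # union of the two engines' results.
        return engine_a(path) or engine_b(path)

The price of the union is paid on clean files, each of which must pass both engines before it can be declared innocent - which goes some way towards explaining why dual-engine products have tended to seem slow.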
Please refer to the PDF for test data
Unfortunately, AVG generated one false positive while scanning the clean set this month. Despite good performance in all the detection-based tests, this was sufficient to prevent Grisoft's product from achieving a VB 100% this time.
Please refer to the PDF for test data
Since AntiVir is all but identical to Avira, it came as no great surprise that the scanning results for the two products were identical - all infections were detected as such. Scanning speeds were also very similar, with differences easily attributable to background OS activity. Like its twin product, therefore, AntiVir gains a VB 100% award.
Please refer to the PDF for test data
ViRobot started the testing process disappointingly, with three false positives picked up in the clean set. The scanning of infected files was, if anything, more frustrating, since numerous files took well over a minute to be scanned. Scanning of the test set became progressively slower as it proceeded, with a virtual memory warning also appearing - a combination which suggests that bad things are afoot. Exclusions, too, seemed totally non-functional, requiring the product to be fully uninstalled before any manipulation of infected files could take place. It was perhaps not surprising that many files were missed on access, presumably due to timeouts during scanning.
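If an on-access scanner abandons a file once a time budget is exhausted, and fails open rather than closed, a slow scan registers as a miss. The following Python sketch of that failure mode is speculation as to the mechanism, with an invented time budget:

    from concurrent.futures import ThreadPoolExecutor, TimeoutError

    def scan_with_timeout(path, scan_fn, budget_seconds=60.0):
        # Return the scanner's verdict, or None if the budget runs out.
        # Treating None as 'allow access' (failing open) is what would
        # turn slow scans into apparent misses in an on-access test.
        pool = ThreadPoolExecutor(max_workers=1)
        future = pool.submit(scan_fn, path)
        try:
            return future.result(timeout=budget_seconds)
        except TimeoutError:
            return None  # scan abandoned - the file passes unexamined
        finally:
            pool.shutdown(wait=False)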
Please refer to the PDF for test data
The Kaspersky entry this month was a great surprise, consisting of a command-line scanner rather than the usual GUI. An optional 'free' GUI was offered to interface with this; however, it required a fully operational SQL database to be installed on the machine in question. While many servers will have SQL available, those which do not will require a new installation which is free neither financially nor in terms of manpower. Oddly enough, the command-line version seemed, by simple observation, to be slower at scanning infected files than the GUI versions more usually tested. All these oddities aside, KAV receives a VB 100% award.
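Driving a command-line scanner from a test harness generally means spawning a process per target and interpreting its exit code. The sketch below is generic: the scanner path and the exit-code convention are assumptions, not Kaspersky's documented interface.

    import subprocess

    def cli_scan(scanner_exe, target):
        # Run a hypothetical command-line scanner over a target path.
        # Exit-code conventions vary by vendor; 0 is assumed clean here
        # and anything else infected - check the manual before relying
        # on this.
        result = subprocess.run([scanner_exe, target],
                                capture_output=True, text=True)
        return result.returncode != 0, result.stdout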
Please refer to the PDF for test data
The greatest surprise when testing VirusScan was noted during on-access scanning, where many samples of W32/Etap were not detected. It is possible that timeouts are responsible for this behaviour. W32/Etap is not a member of the ItW test set, however, so these obscure missed detections still allow McAfee to take home a VB 100% award for its pains.
Please refer to the PDF for test data
MicroWorld's eScan is part of a suite of products - at least some of them rebadged - covering a variety of security functions. The anti-virus component is provided by a version of GDATA's AVK which, as in its original form, detected all samples that passed its way. It will come as little surprise, therefore, that a VB 100% is awarded to MicroWorld.
Please refer to the PDF for test data
Norman's product remains a solid workhorse, the only real complaint being that the scanning throughput is somewhat low. This is not the gravest of sins, however, and other areas of performance were sufficiently good that a VB 100% award is the result.
Please refer to the PDF for test data
A notable change in this version of BitDefender is the interface, which is much more akin to MMC than to the usual anti-virus GUI. This added some initial frustration to the process of scanning, though the frustration lessened substantially as the changes became more familiar. Novelty of interface aside, the underlying scanning capability of the program remains much the same, and a VB 100% award is the appropriate result.
Please refer to the PDF for test data
The new Sophos interface includes a quarantine function which has certain peculiarities. After the test sets had been scanned on demand, the summary declared that there were over 20,000 items in quarantine; a different area claimed that the total was 1,000, while inspection of the quarantine area itself showed precisely zero files in that location.
Scanning behaviour also showed changes: on access, several files were detected on this occasion which had not been detected in any previous default scan. Unfortunately, both on access and on demand, a sample of W32/Sdbot in the ItW test set was missed, thus denying the product a VB 100% award.
Please refer to the PDF for test data
Symantec's new engine seemed to bring few major changes to the process of scanning, and indeed none whatsoever in the results of those scans. With all infected files detected, however, an improvement would be hard to obtain and a VB 100% award impossible to deny.
Please refer to the PDF for test data
As has been the case with Trend's server products for some time, Server Protect needed to be within a domain for installation. My main complaint, however, was with the log file, which seemed to be truncated to the point of uselessness. This was bypassed by setting the scanner to delete infected objects, rather than relying on parsed logs for the detection calculations. Server Protect missed no ItW files and produced no false positives, and therefore receives a VB 100%.
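Counting detections by deletion rather than by log parsing is straightforward: list the test set before the scan, let the scanner delete what it detects, and treat whatever survives as a miss. A sketch in Python, with a hypothetical directory name:

    import os

    def list_files(root):
        # Collect every file path under the test set root.
        return {os.path.join(dirpath, name)
                for dirpath, _dirs, files in os.walk(root)
                for name in files}

    before = list_files("testset")  # hypothetical test set directory
    # ... run the scanner, configured to delete infected objects ...
    after = list_files("testset")

    detected = before - after       # deleted, therefore detected
    missed = after                  # survivors went undetected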
Please refer to the PDF for test data
The user interface of this product has changed slightly since the last time it was reviewed, offering an easier and more pleasant experience on this front. There was also an improvement in detection rates, although misses of ItW samples were still present, thus denying UNA a VB 100%.
Please refer to the PDF for test data
Unfortunately for VirusBuster, two false positives were noted in the clean test set and a VB 100% award was denied for this reason. VirusBuster is unusual in that it can use MMC as an interface for control; control through MMC, however, seems not to allow the choice of areas to scan. A standard GUI is also available - control here is irritatingly long-winded, but does allow the selection of scan areas.
Please refer to the PDF for test data
For such a stable and standard platform it was something of a surprise that so many problems showed themselves during testing. The usual caveat applies: our test scenarios tend to throw more infected files at the scanners than might be expected in the real world. In the case of a server-based scanner, however, the loads produced by our tests might very well be reproduced during a major outbreak, and under such circumstances some of the products tested here would be worthless. Scanning files at a rate of less than one per minute is far too slow, and a server crippled by the load of scanning infected objects will prove more of a frustration than a useful tool.
Apart from the cries of woe brought about by these technical problems, design decisions also took their toll on my sanity. In a disturbingly high percentage of the products, the interface has been changed substantially for the worse over the last year. The most common irritation was the length of time required to set up a scan of, for example, a single directory. However much the design gurus may suggest otherwise, it is counterproductive to force a user to spend several minutes producing a detailed scan setup which will never be used again. Certainly complex feature tweaking should be a possibility, but making it a necessity is fundamentally user-unfriendly.
Test environment. Identical 1.6 GHz Intel Pentium machines with 512 MB RAM, dual 20 GB hard disks, DVD/CD-ROM and 3.5-inch floppy drive, running Windows Server 2003 Web Edition V5.2 Build 3790.
Virus test sets. Complete listings of the test sets used can be found at http://www.virusbtn.com/vb100/archive/2005/10/testsets.
Results calculation protocol. A complete description of the results calculation protocol can be found at http://www.virusbtn.com/virusbulletin/archive/1998/01/vb199801-vb100-protocol.