2010-02-01
Abstract
After the last mammoth VB100 test on Windows 7, this month sees a smaller field of competitors for the less well supported Linux platform. John Hawes reports on a mixed bag of results.
Copyright © 2010 Virus Bulletin
Our annual excursion to the calm and balmy shores of Linux comes at the perfect time for us. The stresses and strains of the previous comparative – featuring a record number of participants on the shiny new Windows 7 platform – are gradually fading into a glorious, if rather painful, memory, while the prospect looms of the XP comparative in the spring, which promises an even larger field of products to slog through. Sandwiched between these two behemoths, the Linux test provides a welcome moment of respite.
With far fewer products available for the Linux platform than for most others we test on, we always expect a quiet month, but it appeared from the start that our choice of distribution would make for an even smaller test than anticipated. Although Novell’s SUSE Linux is one of the best-funded and most heavily marketed commercial server distributions, and its most recent edition (version 11) was released some nine months prior to the test deadline, it still presented enough of a challenge to put off several of those normally expected to take part in our tests. The two largest security firms, Symantec and McAfee, were unable to provide products supporting the platform, although Symantec promises a new edition with full support due for release very soon.
Several others also decided not to take part, citing timing issues and pending new releases.
Another major vendor also opted to skip this test, again with a significant upgrade to its Linux product close on the horizon but not quite ready for testing. This decision marks something of the end of an era: VB has been running VB100 comparative reviews since January 1998, more or less every two months, with a total of 67 sets of results published so far – this month’s test will be the first time that Kaspersky Lab has not submitted a product. A sad day indeed.
However, nine products did make their way to us in time for the test deadline, with most of the remaining regular participants present and no further surprises. Small numbers do not necessarily make for a simple test of course, and previous experience of Linux tests has led us to expect all manner of opaque and confusing installation procedures, unusual and esoteric implementations, and incomplete, inadequate and even well-hidden documentation. Hoping for a smooth and simple test (but as always prepared for the worst), we settled ourselves into the test lab for what was guaranteed to be an interesting month.
Novell’s acquisition of the SUSE distribution, announced in 2003 and finalized in early 2004, lifted SUSE into the top tier of commercially supported Linux flavours. Always among the market leaders in Europe, it now stands as one of the few likely challengers to Red Hat in the business sector. Successive releases have brought ever greater stability, completeness of vision and ease of use. The slick installer and the advanced YaST configuration tool bring most tasks involved in setting up and running a system, desktop or server within the grasp of even the most modest administrator. The platform has long been a favourite in the VB lab, with the fully open source variant openSUSE running on many of the servers that support our test networks. The more sober server edition is also the platform of choice for our anti-spam test network, and is selected by default if solution providers have no preference.

The task of preparing the test systems was therefore a straightforward one. Installing and cloning systems was a fast and easy process, and few adjustments were required beyond pointing a few network shares to the right places and sharing the storage areas for the test sets ready for on-access testing. A client system, running Windows XP SP3, was set up with these shares mounted as network drives, with the standard set of scripts to run the on-access tests, and things were ready to go.
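To give an idea of how little was involved, exporting a test-set storage area over Samba on the server amounts to appending a share definition and restarting the daemon. The sketch below is illustrative only; the share name and path are our own conventions rather than anything mandated by the products:

    # Sketch: export a test-set directory over Samba for on-access
    # testing (share name and path are illustrative).
    cat >> /etc/samba/smb.conf <<'EOF'
    [testsets]
        path = /space/testsets
        read only = no
        guest ok = yes
    EOF
    rcsmb restart    # SUSE init script wrapper for the Samba daemon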
Putting the test sets in place proved similarly free from problems. The latest additions to the WildList were reasonably unremarkable, with yet more Koobface and OnlineGames variants continuing to dominate the list, and the old guard – the reams of Netsky and Mytob variants which once held sway over the list – continuing to decline. The most significant items on the list remain the complex W32/Virut strains, of which yet another new variant was added in recent months; as usual, our automated replicator churned out several thousand examples ready to challenge the products’ detection abilities to the utmost.
Elsewhere in the test sets everything was much as normal. The trojan set was slightly larger than usual thanks to a longer than average gap since the last test and some further expansion of our sample-gathering efforts. Meanwhile, the RAP sets were kept to a reasonable size thanks to a notable decrease in the number of incoming samples over the Christmas and New Year period, when most of the samples were taken in (probably due to various labs taking time off over the festive season); numbers quickly climbed back to previous levels a few weeks into 2010, with many sources providing extra large bundles to catch up with backlogs.
The speed sets were left largely unchanged, other than a little cleaning of inappropriate files which had slipped in, and the clean set had a fairly average number of additions, most of which were popular freeware utilities and some software provided free with magazines and hardware acquired by the lab in recent months. A dedicated set of Linux files was also compiled for the speed tests, taken from the /bin, /sbin, /etc, /opt and /usr folders of one of the test machines.

As usual for the speed tests, scanning speeds using both default settings and ‘full’ settings were recorded – where necessary, settings were increased to include all file types with no size limit, scanning archives internally to the highest available depth. A product’s times were counted in the full column if files with non-standard extensions were detected and, for the archive set, if at least four of the eight archive types checked were scanned to a depth of at least four levels. Thanks to the fair number of compressed files in the Linux speed set, this set was treated in the same way as the archive set for this month’s test.
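For illustration, the basic throughput measurement is simply a scan wrapped in a timer. The scanner name and flag below are hypothetical placeholders rather than any product’s real command-line interface:

    # Sketch: timing an on-demand scan over a speed set; 'scanner' and
    # its flag are placeholders, not a real product CLI.
    start=$(date +%s)
    scanner --recursive /space/speedsets/binaries > /dev/null
    end=$(date +%s)
    echo "Scan completed in $((end - start)) seconds"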
With everything set up and more or less in order, we got down to business.
Alwil’s product was provided as a selection of RPM installer files, along with some instructions on compiling and installing the Dazuko filtering module required by the on-access scanner. Several steps were required, but thanks to the clear instructions everything ran through smoothly and we were soon underway with our first run through the tests.
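For readers unfamiliar with Dazuko, building and loading it generally follows the standard routine for an out-of-tree kernel module. This is a minimal sketch, assuming the kernel sources are installed; the version number is illustrative:

    # Sketch: typical Dazuko build-and-load steps (version illustrative;
    # kernel sources/headers must be present for the build).
    tar xzf dazuko-2.3.9.tar.gz
    cd dazuko-2.3.9
    ./configure
    make
    insmod dazuko.ko    # load the freshly built module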
Running scans from the command line and operating the on-access guard proved logical and simple, following predictable formats, and with decent speeds across all sets, results were gathered in pleasingly short order. Detection rates were as excellent as ever, and scanning speeds were even more impressive in both modes. No problems were encountered in the WildList or any of the clean sets, and Alwil is duly awarded another VB100 for its collection.
Avira’s Linux solution uses the Dazuko filter module in a slightly different fashion, but provides it pre-compiled and ready to go with just a minor tweak required to one of the set-up scripts. Installation is well automated and lucid, and documentation is similarly clear. Operation and control of the product is thus straightforward, and with good speeds throughout, testing once again took little time or effort.
Some truly superb detection figures were achieved, nearing perfection in the trojans set and pretty impressive in the RAP sets. With the WildList and clean sets causing no difficulties, Avira also comfortably earns another VB100.
CA’s product has a rather more involved and complex set-up procedure, but ample instructions were provided and the product was up and running without much ado. A GUI is provided, accessible from a built-in web server, which closely resembles that seen in numerous Windows tests over the last few years. This proved to offer all the functions required, but not in great depth and without the finer control provided by command-line and configuration file set-ups – the preferred method of more serious Linux admins, particularly at the server level. A command-line tool for on-demand scanning is provided, but seemed unwilling to work for us without deeper research, so we mostly made do with the GUI. This proved a rather frustrating path, as the interface was flaky in the extreme, with around one click in five (and sometimes as many as one in three) producing a page error and requiring a refresh of the browser and a retry at configuring. Nevertheless, we got there in the end, with the only lasting problem being an apparent lack of archive scanning on access despite options to enable it in the interface – a problem we have noted many times previously with the Windows version of the same product.
Speeds were as excellent as we have come to expect from CA’s remarkably quick engine, and while detection rates in the trojans and RAP sets were rather disappointing, no problems were encountered in the WildList set and no false positives turned up in the clean sets, thus qualifying CA for a VB100 award.
The Linux version of eScan is another complete and professional server package, again with a web-accessible GUI as well as a desktop interface for running on-demand scans, but where possible we went with the command line to enable better automation of tasks. Installation was fairly painless, with a set of RPM installers to run and the on-access protection provided on Samba shares with some small additions to make to the Samba configuration file. All of this is clearly documented in an accompanying PDF manual.
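The Samba-based approach typically hooks a scanner into each protected share as a VFS module. The snippet below is a generic illustration of the technique; the module and option names are hypothetical, not eScan’s actual settings:

    # Sketch: wiring an on-access scanner into a Samba share via a VFS
    # module; 'vscan-example' and its option are hypothetical names.
    cat >> /etc/samba/smb.conf <<'EOF'
    [protected]
        path = /space/testsets
        vfs objects = vscan-example
        vscan-example:config-file = /etc/samba/vscan-example.conf
    EOF
    rcsmb restart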
Scanning speeds were not the fastest, partly thanks to some very thorough default settings, but detection rates seemed generally good. However, on checking the results we found a number of unexpected misses in the WildList and worms and bots sets, with files not spotted on access but easily noted on demand. Deeper analysis showed that the on-access scanner is set to ignore files larger than 2048KB by default – presumably to mitigate slowdowns caused by the thorough scanning. Retrying the speed tests with the limit disabled resulted in numerous problems, including the scanning engine daemon shutting down and disabling all access to the Samba share. Since several samples in the WildList – including a nasty W32/Bagle worm – were larger than 2MB and thus ignored in the default setting, eScan is denied a VB100 award this month.
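Such size-related misses are at least easy to confirm from the shell; assuming our test-set layout, a quick search turns up every sample over the 2,048KB threshold:

    # List samples larger than the 2,048KB on-access limit (path
    # reflects our own test layout).
    find /space/testsets/wildlist -type f -size +2048k -exec ls -lh {} +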
ESET also offers some nice simple install scripts – not as straightforward as some, but still fairly easy to operate. On-access scanning can be provided either using the Dazuko module, allowing full system protection, or on Samba shares only; for simplicity we opted for the Samba method, and again it proved simple to set up and configure.
On-demand scanning speeds were pretty reasonable, and on-access overheads not too heavy, despite some pretty intensive default scanning levels. Detection rates were excellent, with the only problem encountered in the RAP sets, where a couple of files caused the engine to trip up with a segmentation fault. With these moved out of the way, RAP scores proved just as impressive as those in the main sets, and the clean sets threw up only a few (fairly accurate) warnings of potentially unwanted adware-type products (mostly toolbars included ‘free’ with some of the trialware products added this month). With no full-blown false positives, and the WildList handled with ease, ESET earns another VB100 award to add to its impressive haul.
Frisk’s Linux product, like its Windows versions, is simple in the extreme, with most of it quite happy to run from wherever the initial zip is unpacked, but a Perl installer script is provided to simplify the set-up process.
Again, the choice of Dazuko or Samba-based on-access protection is offered, and again we opted for the Samba method as all on-access tests were being run from a Windows client. Everything was up and running fairly easily despite a lack of clear documentation.
On-demand scanning speeds were excellent, and on-access overheads were not bad either. Detection rates proved decent, if not overwhelming. The clean sets and the WildList presented no difficulties, and as a result Frisk also earns itself another VB100 award.
Quick Heal’s product was one of the few to have dependencies, in the form of a compatibility library for some older C++ code, but this presented little problem. Dazuko was the on-access filtering method of choice, with another slightly different implementation, but it proved no problem to set up and get running. The module needs to be built and loaded manually before the installer script is run, but with it in place everything else runs like clockwork, with helpful and descriptive comments and instructions provided.
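Satisfying such a dependency on SUSE is typically a one-line job. The package name below is the usual SUSE compatibility package for the legacy C++ runtime, but we note it as an assumption since exact names vary between releases:

    # Sketch: installing the legacy C++ runtime on SUSE; the package
    # name (libstdc++33) is assumed and may differ between releases.
    zypper install libstdc++33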
As ever, Quick Heal sped through the tests, with some superb times recorded in the on-access tests, the protection barely registering. This effect may have been helped by a lack of archive scanning on access – something which seemed impossible to activate in the rather minimal configuration system. Detection rates were pretty good on demand, although a notable difference between on-demand and on-access scores hinted at either wildly different settings or some stability issues with the on-access implementation. Further investigation and re-tests showed more variations in detection and speeds, with the protection appearing rather unstable on access. The fast speeds recorded may be a side effect of this uncertain application of scanning to files being rapidly accessed.
Elsewhere, a fairly steep decline was observed across the RAP sets, although with a solid starting point the overall average was decent. The clean sets were handled accurately, but in the WildList set a single sample of W32/Virut, from the latest strain added to the list, went undetected both on demand and on access, and Quick Heal does not quite make the required grade for VB100 certification this month.
Sophos’s product had one of the slickest installation processes, running smoothly and cleanly through the required steps, including the selection and implementation of its own ‘Talpa’ on-access hooking set-up. With everything set up and operational in double-quick time, testing sped through thanks to pleasantly sensible and standardized command-line options for the on-demand scanner, and a slightly more fiddly but well-documented control system for the on-access component.
Scanning speeds were not bad in either mode, and detection rates proved very solid throughout the sets. However, a few anomalies were observed in the on-demand scan: the scanner relies on extension lists to decide whether or not to scan files, and it appeared that at least one viable executable extension had been missed off the list. When testing and replicating samples, we endeavour to capture copies of each sample with all the extensions it uses while spreading, precisely to check for such errors. With several pieces of malware using this extension to conceal spreading files from less sophisticated users, Sophos is lucky to have covered this month’s WildList without issues. As it is, several samples in the worms and bots set – all of which had been retired from the WildList in recent months – went undetected with the default settings.
A VB100 award is earned, but we hope to see the error fixed promptly.
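Probing for this kind of extension-list gap is straightforward using the standard EICAR test string; the extensions below are merely examples:

    # Sketch: write the standard EICAR test string out under several
    # executable extensions, then scan with default settings and note
    # which probes are flagged.
    eicar='X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*'
    for ext in exe com scr pif cmd; do
        printf '%s' "$eicar" > "probe.$ext"
    done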
As the name hints, VirusBuster’s SambaShield provides protection for Samba shares, which is just what is required for our test. The set-up process is simple and quick, with a nice installer script putting things in place and making the required tweaks to the Samba configuration file. Some rummaging around is subsequently required to find the scanner and other components. With these located, some fairly logical controls are provided for the on-access scanner, while the on-demand portion seemed less deeply configurable.
Testing zipped along nicely, and scanning speeds were pretty good in general, although slower than many at handling the large number of small files in the Linux speed set. Detection rates showed further improvement over those noted in recent months, with only a moderate decline across the RAP sets. The WildList, with its many tricky Virut samples, was handled with aplomb, but in the clean sets a couple of files – one from Roxio and another from an AOL set-up CD – were alerted on as trojans, thus spoiling VirusBuster’s chances of a VB100 award this month.
This month we saw some products with tricky and esoteric installation and control systems among a field dominated by a pleasant level of clarity and good design. We saw slow scanning times and heavy on-access lags, although most products were reasonably lightweight and nimble. We saw some disappointing detection rates alongside some highly impressive scores. Having expected a high pass rate – perhaps even a clean sweep – we saw problems with false positives, missed polymorphic samples in the WildList set, and an unfortunate default setting causing a product to fall short of the required standard for the award. All in all, a bit of a mixed bag.
A few interesting trends and patterns were noted. Unlike most of our tests on the Windows platform, where on-demand scores are almost invariably higher than those recorded on access, we saw several products doing less well with their on-demand scans. This, of course, is down to the way the products are operated: command-line scanners often ship with less thorough defaults than graphical ones, giving the operator more freedom and control to design scans as required. In the Linux context, the concept of a ‘default’ is perhaps a little less exact than in most of our tests, although of course all scanners have their own list of automatically enabled options.
The RAP tests produced some interesting results once again, with most products taking up familiar positions in the RAP quadrant. As our automation systems improve we are slowly increasing the size of the RAP sets, as well as fine-tuning their correlation with prevalence and telemetry data and thus their relevance, and hopefully this will continue to increase the value of the data provided.
We also hope soon to implement a long-planned series of improvements in the polymorphic and worms-and-bots sets, as well as the redesign and rebuilding of our clean, speed and false positive sets. Much of this should be in place in time for the next test – for which we expect a vastly expanded field of products, including a number of newcomers.
Test environment. All products were tested on identical systems with AMD Athlon64 X2 Dual Core 5200+ processors, 2GB RAM, dual 80GB and 400GB hard drives. Test systems were running Novell SUSE Linux Enterprise Server 11, 32-bit edition, with Linux kernel 2.6.27.19. On-access tests were performed from clients running Microsoft Windows XP, Service Pack 3, accessing network shares exported using Samba 3.2.7.
Any developers interested in submitting products for VB's comparative reviews should contact [email protected]. The current schedule for the publication of VB comparative reviews can be found at http://www.virusbtn.com/vb100/about/schedule.xml.