Is Antivirus Software Dead?
Always-on Internet connectivity is keeping malware concerns alive and well. We examine whether antivirus software is up to the task, or whether it's a security solution of the past.
Are we headed towards a future where the idea of an antivirus program, or security software in general, is simply not part of the picture? We might well be, but not for a good long time yet.
In the last few years, consumer computing has come under attack like never before, and the attacks are only growing more clever and more concentrated. In response, there need to be changes to the platforms on which we do most of our computing, and new approaches to securing the platforms we already use.
Antivirus isn't going away. It's just changing its shape to meet the times. Or rather, it had better change, because the other options are few and far between.
The History Of PC (In)security
For a long time, security as we know it today simply didn't exist in the PC world. Antivirus software was created because the software we used (DOS and Windows, and their attendant programs) was never designed to ward off attacks. These were single-user environments with little or no network connectivity, so anything that went wrong was typically the user's fault.
The few really severe pieces of malware that circulated during this time were things like Robert Morris's Internet worm of November 1988 -- a program that propagated itself through Unix machines by exploiting buffer overflow flaws in common Unix services. It worked the same way as its descendants: it exploited the always-on nature of networks and flaws in the machines through which it spread.
Once always-on connectivity and downloaded (rather than boxed) software became the norm, malware became commonplace, and the native insecurity of consumer computing became all too easy to see. Antivirus programs were retooled into generic system-protection suites: watchdogs that guarded everything from network connections to on-disk activity.
The problem was that such programs, by and large, were horribly obtrusive and a drain on system resources to boot. A PC might be safer, but it hardly mattered if the machine ran at what felt like half speed. Even worse was the false sense of security that such programs could create. It became easy to assume nothing could go wrong, to behave dangerously, and to consequently be hit by an attack that circumvented the whole defense system. (See "Zero-day attacks", below.)
Now the picture has started to change for the better, thanks to smarter operating system and program design. But there are serious doubts as to whether extant operating systems can be fixed to accommodate the kind of all-encompassing security that's best suited to fighting back against the way malware works today.
Limiting Privileges
The most significant system-protection change of late is the limiting of user and program privileges. A program should not, by default, be able to change any aspect of the system at will; it should only do what's required of it. If it wants to modify system settings, it should be able to do so only after explicit admin authorization.
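To make the principle concrete, here's a minimal sketch in Python of a program honoring least privilege: check for elevation before touching anything system-wide, and bow out gracefully otherwise. The config path is hypothetical, and a real installer would request elevation rather than simply exit.

```python
import ctypes
import os
import sys

def is_admin() -> bool:
    """Return True if the current process has administrative rights."""
    if os.name == "nt":
        # Windows: ask the shell whether our token is elevated.
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    # Unix-like systems: an effective UID of 0 means root.
    return os.geteuid() == 0

def write_system_setting(path: str, value: str) -> None:
    """Modify a system-wide setting -- permitted only when elevated."""
    if not is_admin():
        sys.exit("This change requires administrator rights; "
                 "re-run elevated, or skip the system-wide step.")
    with open(path, "w") as f:
        f.write(value)

if __name__ == "__main__":
    # Hypothetical system-wide config path, for illustration only.
    write_system_setting("/etc/example.conf", "setting=1\n")
```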
Linux, OS X, and the NT-based editions of Windows (NT, 2000, XP, and up) all have this sort of privilege segregation. Until recently, though, Windows made it too easy not to use the feature: most people simply logged in and ran as administrator because it was too much of a hassle not to. Too many programs were still written under the assumption that they could change everything, and would break unless they had admin privileges. By the time Vista and User Account Control rolled around, things had changed: Windows programmers were in the habit of writing apps that didn't need admin privileges to run. The burden of making computing safer fell to both the platform and application providers.
Several things are immediately noticeable when you run as a non-admin by default. For one, it stops the majority of "invisible" attacks committed by malicious programs that run silently in the background. Two, it's much harder to unthinkingly make systemwide changes. And three, the security problems that used to pile up silently under users' noses and then explode without warning mostly never accumulate in the first place. This isn't to say it's impossible to trick users into running malicious programs, but the most common ways of doing so have become harder.
I'll cite personal experience as evidence that this approach is hugely useful. I encouraged friends who used to run under the bad old security model (always logged in as administrator) to do the right thing and run as non-admin instead. They were running Windows XP or Windows 2000, and in every single case the number of malware infections and other security-related issues dropped off to just about nothing.
So does that mean UAC and similar technologies let you do without antivirus altogether? The short answer is "Yes, but not without some risk."
Zero-day Attacks
If operating systems were perfectly bug-free environments, then limiting user privileges might be a fairly bulletproof way to keep things secure. Unfortunately, bugs do exist, and the creators of malware have turned to exploiting newly revealed and as-yet-unpatched vulnerabilities -- the infamous "zero-day attacks" -- as their next big thing. Recent word about an OS X kernel flaw underscores this all the more: a bug like this could allow someone to write directly into kernel space and completely bypass mechanisms like limited privileges.

The risk is heavily mitigated by usage habits. If you aren't the sort of user who routinely exposes himself to danger -- you don't use file-sharing systems, don't open attachments without a pedigree, don't install software indiscriminately, don't visit Web sites of questionable provenance, and do use a late-model browser -- the risk goes way down. But it isn't completely gone. Even cautious people can get hit with the "drive-by infection," where an ad banner or other normally innocuous Web page element turns out to be a delivery mechanism for evil.
On top of zero-day attacks are a great many other vulnerabilities that remain chronically unpatched by the end user, and which can end up being an open door for the bad guys. Qualys, Inc., a network security firm, did its own research and found that the "half-life" of a given unpatched vulnerability -- the time it takes for half of the affected systems to be patched -- is about 30 days across the industries it surveyed. The most chronically under-patched products were, ironically enough, some of the most widely used: Microsoft Office, Windows Server 2003, Sun Java, and Adobe Acrobat.
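If we read that 30-day figure as roughly exponential decay -- an assumption on my part, not Qualys's stated model -- the arithmetic is easy to sketch:

```python
def fraction_unpatched(days: float, half_life: float = 30.0) -> float:
    """Fraction of affected systems still unpatched after `days`,
    assuming patching follows exponential decay with a 30-day half-life."""
    return 0.5 ** (days / half_life)

# Half the systems are still exposed a month after disclosure;
# roughly one in eight remains exposed after three months.
for t in (30, 60, 90):
    print(f"day {t:3d}: {fraction_unpatched(t):6.1%} still unpatched")
```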
Acrobat in particular was not only highly vulnerable and chronically unpatched, but remains a major target of attacks -- almost 50% of document-format attacks charted so far in 2009 use .PDFs as a vector. Some of them don't even require explicit user action: one recent .PDF flaw could be triggered simply by saving the document in question to the hard drive. Most people don't think of .PDFs as an attack vector, which is precisely what makes them dangerous.
The Limits Of Limited Privileges
How effective are reduced user privileges against such an attack? I talked to Didier Stevens, the researcher who conducted his own investigations into the .PDF vulnerability, and his answer was a little chilling: "It depends on the type of attack. Almost all malware requires local admin rights to execute properly, so it won't work. But if it's a targeted attack and the attacker knows you're running Vista, he can design the malware to perform its actions in this limited context. You don't need local admin rights to steal data, log keys or take screenshots. And a privilege escalation exploit can be used to gain system rights."
This points toward one of the major reasons why these attacks are taking place: they're not simply being done to ruin existing systems, but to steal things from their users. Therefore, many of the attacks that do the worst real-world damage (keylogging, information theft, financial crime) might not require privilege elevation to be effective in the first place. Taking control of the whole system is just a convenient bonus. So preventing privilege elevation alone isn't enough.
Still, how can you ensure some degree of system security without the tedium of scanning everything that moves?
Whitelisting
One relatively new approach to system security is whitelisting, where only programs in a pre-defined catalog (each identified by a hash or other cryptographic token) are allowed to run at all. Whitelisting makes it tougher for any unknown program to run, whether or not it requires privilege elevation to do its dirty deeds. That makes it a good local line of defense against, for instance, keyloggers. Whitelisting works like a members-only club where you need an invitation, rather than a sports arena where only the truly unruly and dangerous are ejected by security.
The problem with whitelisting is essentially the same as with blacklisting: a list needs to be created and maintained. The plus side is that whitelists tend to be far smaller, easier to maintain, and more effective than blacklists. Group Policy in Windows, for instance, makes it possible to whitelist applications based on file paths or file hashes.
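To illustrate the mechanics -- this is a sketch of the idea, not how Group Policy is actually implemented -- a hash-based whitelist agent amounts to little more than the following. The digest list here is a placeholder:

```python
import hashlib
import subprocess
import sys

# Placeholder whitelist: SHA-256 digests of admin-approved executables.
# In a real deployment the list would be distributed and signed.
APPROVED_HASHES = {
    "0" * 64,  # stand-in for a real digest
}

def file_sha256(path: str) -> str:
    """Hash the program file the way a whitelisting agent would."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def run_if_whitelisted(path: str, *args: str) -> None:
    """Refuse to launch anything whose hash isn't on the list."""
    if file_sha256(path) not in APPROVED_HASHES:
        sys.exit(f"Blocked: {path} is not on the whitelist.")
    subprocess.run([path, *args], check=False)

if __name__ == "__main__":
    run_if_whitelisted(sys.argv[1], *sys.argv[2:])
```

Note what this buys you: unlike a blacklist, nothing new ever has to be discovered or cataloged for an unknown program to be stopped.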
One approach is to have the list maintained by a third party. Kaspersky Lab, maker of a popular antivirus product, is using pre-created whitelists as a way to ensure that only known-good software is loaded and running. The local user can add known-good applications to the list, of course.
It seems unlikely that whitelisting will become a default course of action for most platforms. Rather, it will be a lockdown measure taken by an admin or an end user, and for an environment that should be tightly controlled anyway (e.g., corporate desktops) it makes plenty of sense. It might well be possible to create a whitelisting mechanism that works elegantly enough to be nearly invisible -- one that, like modern firewalls, only squawks at the user when something is manifestly wrong. But for now, whitelisting is an option, not a standard strategy.
Trust Models
Another approach, which can work in concert with whitelisting, is trust modeling. One can use the provenance of a file to build a model of how trustworthy it is -- where it was downloaded from, how the download was triggered, and so on.
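As a sketch of the idea -- the signals and weights below are hypothetical, not any vendor's actual heuristics -- a provenance-based trust score might combine a few observable facts about a file:

```python
from dataclasses import dataclass

@dataclass
class Provenance:
    """What we know about where a file came from (illustrative fields)."""
    source_domain: str    # where the file was downloaded from
    user_initiated: bool  # did the user deliberately start the download?
    signed: bool          # does it carry a valid publisher signature?

# Hypothetical weights -- real products tune these empirically and
# combine them with conventional scanning rather than replacing it.
TRUSTED_DOMAINS = {"example-vendor.com", "downloads.example.org"}

def trust_score(p: Provenance) -> float:
    """Combine provenance signals into a rough 0..1 trust estimate."""
    score = 0.0
    if p.source_domain in TRUSTED_DOMAINS:
        score += 0.4
    if p.user_initiated:  # drive-by downloads score lower
        score += 0.3
    if p.signed:
        score += 0.3
    return score

# Low scorers get full scanning; high scorers can be waved through.
f = Provenance("unknown-mirror.example", user_initiated=False, signed=False)
print("scan required" if trust_score(f) < 0.5 else "trusted")
```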
Existing antivirus packages (Norton 360, for example) already use similar heuristics in combination with conventional scanning -- but it's not likely that trust modeling will take over completely from conventional scanning. As security blogger Dr. Luke O'Connor put it, if scanning is scaled back on the strength of trust scores alone, "the likelihood of being infected by malware will actually increase simply because less scanning will be done and the risk factors will not correlate perfectly with the presence of malware." Still, it's simply not possible to scan everything that comes or goes without incurring an intolerable overhead, and the overhead will only get worse as time goes on.
In short, trust modeling is best used as one heuristic among many, not as an approach unto itself. It can augment an existing method, but unless it's radically reinvented it's not a whole defense strategy.
Retroactive Protection
If the concept of antivirus has been broadened to include generic "system protection," the concept of system protection itself has also been broadened to include more than just stopping bad activity in its tracks. It also now includes ways to gracefully recover from disaster, or to contain disaster.
The disaster-containing approach, "sandboxing," allows any software downloaded or installed to be run in a virtual space and have its behavior analyzed. If the system determines the program's not a threat, its actions can be merged out to the system at large and the program runs normally from then on. The program Sandboxie lets you do something very much like this right now -- but in the long run, it's something best developed into a full-blown platform feature rather than an application add-on.

The disaster-recovery approach assumes that something will go wrong, but makes it easy to pick up where you left off. Full-system incremental imaging -- like that available in Windows Vista and Windows 7, or Time Machine in OS X -- is a big aid here, but could benefit from more fine-grained control, or integration with the sandboxing technologies described above, to be even more effective. Example: being able to take snapshots of different aspects of a system's state, such as the states of individual programs, and selectively roll them back as needed if they are damaged.
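A toy version of the containment idea -- far weaker than a real sandbox, since it only isolates changes to one directory tree, not the whole system -- can be sketched in a few lines: run the untrusted program against a throwaway copy of its working data, then review what changed before merging anything back.

```python
import filecmp
import shutil
import subprocess
import tempfile
from pathlib import Path

def run_sandboxed(program: list[str], workdir: Path) -> Path:
    """Run `program` against a throwaway copy of `workdir`, returning
    the copy so its changes can be reviewed before being merged back."""
    sandbox = Path(tempfile.mkdtemp(prefix="sandbox-")) / "data"
    shutil.copytree(workdir, sandbox)
    subprocess.run(program, cwd=sandbox, check=False)
    return sandbox

def changed_files(original: Path, sandbox: Path) -> list[str]:
    """List files the sandboxed run modified or created -- the
    'behavior' a user or policy engine would review."""
    cmp = filecmp.dircmp(original, sandbox)
    return cmp.diff_files + cmp.right_only

# Usage: see what an untrusted tool changed before merging anything back.
# copy = run_sandboxed(["./untrusted-tool"], Path("project"))
# print(changed_files(Path("project"), copy))
```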
Linux, Mac: Uncharted Territory
One area where many of these new techniques might well be tested in a live context is not Windows or even Linux, but the Macintosh. Malware protection for OS X has typically been very meager, and a good deal of that is because the Mac simply hasn't been that big a target for malware. Yet. If it does become a bigger target, it will have more protection than Windows did simply by dint of having processes not run as admin by default.
But, as described above, that approach only goes so far. If the Mac becomes popular enough to also be a regular malware target, then it will experience the same baptism by fire that Windows did. Then Apple will either have to add new platform-level features to fight such things more elegantly (e.g., whitelisting), or add antivirus products as a regular presence there. That by itself would knock out one of the major selling points of the Mac as a platform: its general lack of malware and inherent security. For now, however, it's a safe place.
The same could be said for Linux. Its measurable desktop market share is far below that of Windows or the Mac, but that doesn't make it immune from being a target. And, as above, the fact that non-essential processes don't run as root is not a cure-all, and in many cases root isn't even required to do the kind of harm most malware authors are after. It might not be possible to know how secure the average desktop Linux stack is against concerted attack without it actually becoming broadly used and therefore broadly attacked. There's something of a paradox here: as long as few people use Linux, it remains relatively untargeted, but it also remains less use-tested in the real world, where attacks on computers are a way of life.
The ideal solution to malware would be a secure platform, where malware was a thing of the past. Unfortunately, software's very complexity makes a de facto secure platform almost impossible to guarantee.
The best long-term solutions will be platform-based. Such platforms can't be perfect, but they can approach a greater degree of security through continual, rigorous improvement (both internal and external). The most useful interim solutions, though, will still come from third parties. What will be -- and already is -- passé is the old-school approach to system security, the "scan everything that moves" philosophy that creates at least as many problems as it solves.
Antivirus isn't dead. But it must evolve into a true complement to the kind of computing we now do, and to the threats we're now trying to guard against.
For Further Reading:
Wolfe's Den Podcast: Trend Micro Takes Security To The Cloud
Think Your Anti-Virus Is Working? Think Again
Microsoft Offers Free Security Essentials
70 Of Top 100 Web Sites Spread Malware
Popular News Topics Become Malware Bait