Reputation measurement is one of the newer ways of blocking malware in a world where threats have grown more elaborate and cunning, and Web 2.0 has expanded the audience for rich media.
While the continuing usefulness of traditional virus-signature identification should not be underrated, at least two techniques now in common use aim specifically to evade it, according to Scott Montgomery, vice-president of global technical strategy at California-based Secure Computing.
The first technique is polymorphism: producing a piece of malware in a number of variants with the same payload and mode of attack, each written to present a slightly different signature to filters.
A more widespread problem, however, is packer malware. This involves compressing, and sometimes even encrypting, agents before sending them, so as to evade filters tuned to particular signatures. The key to unscramble the infective agent and make it operational is then sent in a separate transmission.
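A minimal sketch of why both techniques defeat signature matching: compressing the same payload, or encoding it with different keys (XOR stands in here for real encryption), yields byte streams with entirely different signatures. The payload and keys below are hypothetical illustrations, not taken from any real sample.

```python
import hashlib
import zlib

payload = b"identical 'malicious' payload bytes"  # hypothetical stand-in

# A signature filter matches on the bytes of the transmitted file, so
# a hash of those bytes stands in for a traditional signature here.
print("plain    :", hashlib.sha256(payload).hexdigest()[:16])

# Packing: the same payload, compressed, presents a different byte stream.
packed = zlib.compress(payload)
print("packed   :", hashlib.sha256(packed).hexdigest()[:16])

# "Encryption" with two different keys (XOR as a toy cipher): each variant
# carries a unique signature even though the decoded payload is identical.
for key in (0x41, 0x42):
    encoded = bytes(b ^ key for b in payload)
    print(f"key {key:#04x}:", hashlib.sha256(encoded).hexdigest()[:16])

# The key travels in a separate transmission; until it arrives and the
# agent is unscrambled, a signature-based filter has nothing to match.
```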
Measurements of malicious traffic by the German malware-testing lab AV-Test.org show that no less than 48% of recent malware arrived in packed and/or encrypted form, says Montgomery. Signature-based detectors are of little use against this type of malware; the best hope of detecting it is by its behavior.
“If it tries to change permissions on files, change registry entries or start up a shell, then it’s probably up to no good.”
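In code, that kind of behavioral judgment might look like the following sketch; the event names, weights and threshold are hypothetical illustrations of the approach Montgomery describes, not taken from any real product.

```python
# Hypothetical behavior-based scoring: suspicious actions accumulate
# weight, and a high total suggests the program is "up to no good".
SUSPICIOUS_ACTIONS = {
    "change_file_permissions": 3,
    "modify_registry": 4,
    "spawn_shell": 5,
    "read_document": 0,  # benign baseline activity
}

def behavior_score(observed_events):
    """Sum the weights of observed actions; unknown actions get weight 1."""
    return sum(SUSPICIOUS_ACTIONS.get(event, 1) for event in observed_events)

events = ["read_document", "modify_registry", "spawn_shell"]
score = behavior_score(events)
verdict = "probably up to no good" if score >= 5 else "probably benign"
print(score, verdict)
```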
The new weapon against malware, and a potent one, involves rating sources of traffic by reputation, says Montgomery. The domains of people and companies you know and trust, or that have a good commercial reputation, are given a high reputation score, while domains that send out a lot of traffic but receive little or none fall under suspicion of being spam sources.
Some reputation-scoring can be done by the receiving individual or organization, but companies like Secure Computing maintain a large population of sensors and build reputation scores internationally, by means of continuous monitoring.
Combining the two helps build up a picture of who can be trusted. “But it’s a question of risk analysis,” Montgomery says. Reputation analysis can only give a score; it is up to the receiver to decide where to draw the line between probably okay and probably not.
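That risk-analysis step might be sketched as follows, blending a locally observed score with a globally published one and comparing the result against a receiver-chosen threshold. The weights, scores and threshold are all hypothetical; a real system would tune each of them.

```python
def combined_reputation(local_score, global_score, local_weight=0.4):
    """Blend local observations with a vendor's global sensor-network score.
    Scores run from 0.0 (untrusted) to 1.0 (fully trusted)."""
    return local_weight * local_score + (1 - local_weight) * global_score

# The receiver, not the scoring system, decides where to draw the line.
THRESHOLD = 0.6

for domain, local, global_ in [
    ("known-supplier.example", 0.9, 0.95),  # trusted correspondent
    ("burst-sender.example", 0.2, 0.1),     # lots of outbound, no inbound
]:
    score = combined_reputation(local, global_)
    action = "accept" if score >= THRESHOLD else "quarantine"
    print(f"{domain}: {score:.2f} -> {action}")
```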
The set of developments known collectively as Web 2.0 involves richer media and a growing amount of dynamic content that runs interactively within the browser rather than on the server.
“The browser has become the new battleground,” says Montgomery.
Another analyst puts the proportion of browser and web vulnerabilities related to ActiveX at 89%, he says. ActiveX is a component object model for Windows platforms and a prevalent form of Internet Explorer plug-in; applications built on it can be launched from web pages, so banning it outright is not practical.
At the very least, says Montgomery, people should insist that all ActiveX control objects are digitally signed, attesting to their source and to the fact that they have not been tampered with.
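On Windows, a check along those lines can be scripted around the signtool utility that ships with the Windows SDK; the file name below is hypothetical, and signtool must be on the PATH.

```python
import subprocess

# Verify an ActiveX control's Authenticode signature with signtool.
# "verify /pa" applies the standard Authenticode verification policy;
# a non-zero exit code means the control is unsigned or has been
# tampered with since signing.
result = subprocess.run(
    ["signtool", "verify", "/pa", "control.ocx"],  # hypothetical file
    capture_output=True, text=True,
)
if result.returncode == 0:
    print("Signature verified:", result.stdout.strip())
else:
    print("Reject: unsigned or tampered control.", result.stderr.strip())
```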
Naturally, company policies have a role in instilling good practice here, such as banning the installation of unknown software and the reconnection of devices to the company network after they have been used outside it. However, policy alone offers no guarantees.
“It’s easy to say, ‘just educate the end-users’. But, year after year, end-users who have been ‘educated’ will still open an email from someone they don’t know and click on an executable attachment.”
Here again, Web 2.0 has worsened the situation by tying online content to users’ real-world reputations. A lot of people receiving an email headed “Saw you last night on YouTube”, for example, would be unable to resist following the link, he says.
Montgomery, who is based in Washington DC, is on a tour of Australia and New Zealand, conducting seminars on the characteristics of today’s malware and the precautions available to protect against it.