Do sandboxes and automated dynamic analysis systems provide the protection they promise?

04.09.2012
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.

If you're charged with keeping malware out of your organization, you're probably getting lots of visits from vendors of automated dynamic analysis systems for malware - the latest and greatest mouse trap being hawked by a slew of companies.

Automated dynamic analysis systems and sandboxes for malware are the latest "must-have" antivirus gap-filler. While signature-based detection and automated static analysis systems have continued to improve incrementally, and have largely kept pace with the threats they were designed to thwart, the overall percentage of malware threats they're capable of detecting has been declining for a decade.

As a detection technology, the combination of these two methods probably ends up finding 10% to 20% of malware threats within one week of the malware being created and released by the bad guys. Ten years ago, that figure was likely in the 60% to 80% range.

To address the growing detection gap, the much-touted solution is to use automated dynamic analysis systems (software- or appliance-based) to uncover the maliciousness of any binary file traversing the corporate network. The idea is to force any suspicious binary to run in a mock environment so it will exhibit its true behaviors, and if those behaviors are malicious, the file is classified as malware. To deliver this mock environment, just about all of the vendors hawking "better mouse trap" solutions use some form of operating system emulation or virtualization (e.g., VMware).
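As a rough illustration of that detonate-observe-classify flow - a sketch, not any particular vendor's product - the pipeline boils down to the following, where run_in_sandbox() is a hypothetical helper standing in for the VM instrumentation and the behavior names are invented for the example:

```python
# Minimal sketch of an automated dynamic analysis pipeline.
# Assumption: run_in_sandbox() is hypothetical; a real system would
# restore a clean VM snapshot, execute the binary, and hook API and
# network activity inside the guest.

MALICIOUS_BEHAVIORS = {
    "writes_to_system32",        # drops files into the OS directory
    "creates_autorun_key",       # persists via the registry
    "contacts_known_c2_domain",  # phones home to a blacklisted host
}

def run_in_sandbox(binary_path):
    """Detonate the sample in an isolated guest OS and return the set of
    behaviors observed (stubbed with a canned result for this sketch)."""
    return {"contacts_known_c2_domain"}

def classify(binary_path):
    # The file is flagged only if it *exhibits* malicious behavior during
    # analysis - the very assumption that evasive samples exploit.
    observed = run_in_sandbox(binary_path)
    return "malware" if observed & MALICIOUS_BEHAVIORS else "benign"

print(classify("suspicious.exe"))  # -> "malware"
```

The weakness is baked into the last step: a sample that behaves benignly while inside the sandbox sails through as "benign."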

The objective of the approach should be to detect malware that slips by the signature and static analysis systems, taking you from 10% to 20% detection back up to the glorious days of 60% to 80%. You'll even encounter some vendors claiming their solution will take you to the oxygen-deprived altitude of 99% to 100%.

Unfortunately, the reality of the situation is considerably different from the marketing pitch. The combination of signature detection, static analysis and automated dynamic analysis systems for malware detection yields different levels of success depending on the type of threat encountered.

Consider the corporate world, which faces three separate threat categories: generic Internet threats (any target, anywhere on the Internet, with no victim selection); infiltration threats (malware crafted to work against typical corporate defense-in-depth strategies); and espionage threats (malware tools tuned to operate within your specific organization).

For generic Internet threats, the combination of antivirus defenses probably thwarts 80% to 90% of threats within one week of release by criminal authors. For infiltration threats orchestrated by criminals looking for bigger monetary yields, those same defenses thwart around 40% to 50% of the malware they employ. Meanwhile, for targeted espionage threats, you'd be hard-pressed to detect even 10% of the malware tools used to conduct such an attack.

The numbers are scary - and they should be - because they reflect the reality of the situation and not some idealized marketing fluff. But context is also important here. Malware threats that come via the front door (for example, over unencrypted HTTP via a Web browser, or as email attachments) are the easiest to intercept and analyze and, purely from a volume perspective, you can expect 90% to 95% of those binaries to be generic Internet threats.

So, on a purely statistical basis, preventing 80% to 90% of the malware coming into your organization that way sounds great - but is that the threat you're really worried about? A piece of malware that scrapes Facebook login credentials will be blocked automatically by the host-based protection suite you've already deployed.

No, the threat to business lies elsewhere, and the tools being positioned to fill that legacy antivirus gap have significant weaknesses.

For some reason vendors continue to tap-dance around the weaknesses of automated dynamic analysis systems, labeling malware samples that evade detection as "sophisticated" and "advanced," as if you're unlikely to ever encounter them. Sure, the technical aspects of evading sandboxing and automated analysis platforms may be specialized, but evasion has been largely a commodity technique for at least the last five years (just do a Google search for "malware armoring").

Today, probably a third of all suspicious binaries traversing corporate networks that will eventually be categorized as infiltration or espionage threats are VM-aware, or are capable of bypassing not only the current generation of automated dynamic analysis systems but also any subsequent iteration of that technological path.

Not only are there umpteen subtle technical methods by which a malware author can detect the presence of the virtual analysis environment, but there is an almost unlimited number of unsophisticated ways to trivially achieve the same - ways that will be further commoditized and become commonplace in generic Internet threats in the very near future.
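To ground that claim, consider how little code a "VM-aware" check actually takes. The Python sketch below is illustrative only: the MAC address prefixes and guest driver files it tests are well-known VMware/VirtualBox artifacts, but the function names and exit behavior are invented for the example, not taken from any real sample.

```python
import os
import uuid

# MAC address prefixes (OUIs) assigned to VMware and VirtualBox NICs.
VM_MAC_PREFIXES = ("00:05:69", "00:0c:29", "00:50:56", "08:00:27")

# Driver files that typically exist only inside VMware/VirtualBox guests.
VM_ARTIFACT_FILES = (
    r"C:\Windows\System32\drivers\vmmouse.sys",
    r"C:\Windows\System32\drivers\vmhgfs.sys",
    r"C:\Windows\System32\drivers\VBoxGuest.sys",
)

def looks_like_analysis_vm():
    """Return True if common hypervisor artifacts are visible."""
    raw = "%012x" % uuid.getnode()  # primary NIC's MAC as 12 hex digits
    mac = ":".join(raw[i:i + 2] for i in range(0, 12, 2))
    if mac.startswith(VM_MAC_PREFIXES):  # startswith() accepts a tuple
        return True
    return any(os.path.exists(p) for p in VM_ARTIFACT_FILES)

if __name__ == "__main__":
    # An evasive sample would simply exit - i.e., act benignly - here.
    print("VM suspected" if looks_like_analysis_vm() else "proceed")
```

Dozens of equivalent tells exist (registry keys, CPUID hypervisor bits, virtual device names), which is why armoring toolkits can bundle them as a checkbox feature.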

For example, consider the classic Zeus or SpyEye DIY malware construction kits. These packs include malware creators, distributors, exploit packs and management consoles all in one. How easy would it be for the malware created from these (and similar packs) to include the following checks? (A short sketch after the list shows a few of them in code.)

· Detect whether the Web browser is open at the time the malware component is executed, and whether the URL of the infector site appears in the browser history. If not, then this malware obviously wasn't downloaded by this computer and it shouldn't act maliciously - so it won't be classed as malware by the automated analysis system.

· Check the computer's date and time, and if the malware installer component isn't being executed within a couple of seconds of its download from the infector site, conclude that this probably isn't the victim's computer.

· Check the Web browser history to ensure that the computer frequently browses the Web (especially on the day of infection) and that there are URLs relating to the affiliates that drove the victim to the infector site. If not, then it wasn't downloaded by this computer and... well, you know the drill.

· Wait until the letter "T" has been pressed 100 times within an hour and the mouse has traveled the equivalent of 10 meters before initiating any malicious activities.

· Have the malware agent created on the fly by the infector site, containing the equivalent of a license key that restricts its execution to a single computer - matching the IP address, Web browser user-agent information and Facebook user name.
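Here is a rough Python sketch of how four of those five checks might gate a sample's behavior (the keystroke-and-mouse check in the fourth bullet needs OS-level input hooks, so it is omitted). Every constant, URL and helper name is invented for illustration; the one real-world detail is Chrome's usual history-file location on Windows.

```python
import hashlib
import os
import time

# Hypothetical values the infector site would bake into each build.
INFECTOR_URL = "http://infector.example/dl"
DOWNLOAD_EPOCH = 1333929600        # moment the victim fetched the agent
EXPECTED_FINGERPRINT = "0" * 64    # SHA-256 bound to one victim machine

def history_mentions_infector():
    # Bullets 1 and 3: crude substring scan of the raw Chrome history
    # database for the infector URL; no history at all is suspicious too.
    path = os.path.join(os.environ.get("LOCALAPPDATA", ""),
                        r"Google\Chrome\User Data\Default\History")
    try:
        with open(path, "rb") as fh:
            return INFECTOR_URL.encode() in fh.read()
    except OSError:
        return False

def executed_promptly():
    # Bullet 2: the installer should run within seconds of its download.
    return abs(time.time() - DOWNLOAD_EPOCH) < 10

def fingerprint_matches(ip, user_agent, username):
    # Bullet 5: bind execution to a single machine's identifiers.
    digest = hashlib.sha256("|".join((ip, user_agent, username)).encode())
    return digest.hexdigest() == EXPECTED_FINGERPRINT

def should_detonate(ip, user_agent, username):
    # Behave maliciously only on the intended victim; everywhere else -
    # including inside a sandbox - the sample stays benign and looks clean.
    return (history_mentions_infector()
            and executed_promptly()
            and fingerprint_matches(ip, user_agent, username))

if __name__ == "__main__":
    print(should_detonate("203.0.113.7", "Mozilla/5.0", "victim.name"))
```

None of this requires novel engineering; each check is a few lines wrapped around ordinary OS APIs, which is exactly why it commoditizes so quickly.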

Obviously, the bad guys can be infinitely inventive. The point being that it will always be possible for the attackers to detect whether their malware agent is being analyzed on a computer that wasn't their intended target, and they can make the malware act benignly, thereby evading the automated analysis system.

It's not rocket science and it's not brain surgery; it's common sense being employed by a large number of very crafty individuals. Then, once it's packed into a DIY kit or armoring tool, it's just a commodity evasion technique available to all and sundry.

What does this mean to the folks charged with protecting their corporation from the broad malware threat? It means that there's a breed of mouse that figured out how to get your cheese from that better mouse trap quite some time ago, and they're training their skinny buddies to do likewise. Deploying the current generation of a better mouse trap isn't going to stop the evolving threat - but it will do two things: It will kill off the remaining skinny mice, and it will probably stop more salesmen from knocking on your door and trying to sell you their version of the better mouse trap. Perhaps it's worth it then?

Gunter Ollmann has more than 20 years of experience within the information technology industry and is a veteran of the security space. Prior to joining Damballa, Gunter held several strategic positions at IBM Internet Security Systems (IBM ISS), most recently Chief Security Strategist and director of X-Force; earlier he headed X-Force security assessment services for EMEA at ISS (which was acquired by IBM in 2006). Gunter has contributed to multiple leading international IT- and security-focused magazines and journals, and has authored, developed and delivered a number of highly technical courses on Web application security.
