Sunday, September 28, 2014

Predictive Research: Malware, You're Doing It Wrong

I sat down this weekend to document the thoughts behind a talk I gave at Next Generation Threats last week in Stockholm. The initial idea was to outline how today's threat detection systems work and how they are bound to fail in specific situations. The slides are pretty and contain cat pictures:


Yet they do not capture all the ideas that popped into my head, some of them only after the event.

As researchers are constantly mocking Anti-Virus solutions these days, let me point out what the malware authors themselves are actually doing wrong at the same time. 98% of the malware (numbers not based on empirical research) we see today is not exceptionally good software. What's more, as long as we can still see today's malware, it can't be all that genius, can it? 98% of malware (numbers not based on empirical research) has evil intentions, and it works, not more and not less. But the "even more sophisticated" malware spread over security software vendor pages is a myth, along with the even more sophisticated detection technology.

And while malware and malware defense seem like a race of least effort, it is the job of offensive research to push defenders to new innovations. I'm not an offensive person, and by no means do I intend to enable crooks to do their job better. I, and I bet 98% of everyone else (numbers not based on empirical research), wish for intelligent defenses that are predictive rather than knowledge based. In other words, who wouldn't want to be safe from future threats rather than only the currently known ones?


From a simplified and abstract viewpoint there are three modes of detection:

  • The binary is known
  • The binary is recognized
  • The behavior of the binary is recognized
All three of them doubtlessly work. There is an entire industry busy spreading samples and hashes of known threats to enable protection. Pattern recognition catches up to 90% of these known samples (numbers based on empirical research), while patterns consist of all sorts of data: plain bytes like opcodes or file header characteristics, heuristic attributes like entropy, or combinations of these. If none of them work, the behavior of a binary can still be extracted in various ways. Sandboxes and emulators do a great job nowadays of imitating real-world systems.
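
To make the heuristic-attribute idea concrete, here is a minimal sketch (mine, not from any particular product) of the entropy heuristic: packed or encrypted sections push the byte entropy towards the 8-bits-per-byte maximum, and a simple threshold flags them. The threshold value is an assumption for illustration.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 .. 8.0)."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(data).values())

def looks_packed(data: bytes, threshold: float = 7.2) -> bool:
    """Crude packer heuristic: compressed/encrypted data approaches 8 bits/byte.
    The 7.2 cutoff is an illustrative assumption, not an industry constant."""
    return shannon_entropy(data) >= threshold
```

Plain text or padding sits far below the threshold, while ciphertext-like data lands just under 8.0, which is exactly what makes this such a cheap first filter.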

However, all of these methods can be tricked, and as detection technology evolves, they will be tricked eventually. Let's not call this write-up offensive research, but predictive.


Let’s see what malware is doing wrong.


NOT BEING UNIQUE ENOUGH


As recent incidents picked up by public media have proven, if malware is not intended to go large scale, the need to evade static detection evaporates [1]. However, most successful attack vectors are not used only once. As soon as malware is out to infect more than a handful of systems, continued success can only be achieved by constant morphing.
Let me call what malware like Virut [2] uses 'Weapons of Match Destruction': a polymorphic engine, for example, which creates a totally new binary every time the infection spreads on, so new that fewer than 5-10 bytes of potential pattern remain equal. As far as I know, crypters producing different representations of one payload are easy to acquire. Note though, their use is more obfuscation than protection, really.
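
As a toy illustration of what such a crypter does, and nothing like a real polymorphic engine, the sketch below re-encrypts the same payload under a fresh random key on every run, so no two outputs share a usable byte pattern. All names are made up.

```python
import os

KEY_LEN = 16

def crypt_build(payload: bytes) -> bytes:
    """Produce a new representation of the same payload on every call:
    a fresh random key XOR-folds the body, and the stub carries the key."""
    key = os.urandom(KEY_LEN)
    body = bytes(b ^ key[i % KEY_LEN] for i, b in enumerate(payload))
    return key + body

def crypt_recover(blob: bytes) -> bytes:
    """What the stub does at runtime: strip the key, undo the XOR."""
    key, body = blob[:KEY_LEN], blob[KEY_LEN:]
    return bytes(b ^ key[i % KEY_LEN] for i, b in enumerate(body))
```

Two consecutive builds of the same payload share essentially no bytes, which is all a signature writer has to work with statically.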


ONE BINARY TO RULE FOREVER


Malware that can’t be caught by pattern matching will be caught by file hash matching. The most ancient form of malware detection is still effective on highly polymorphic samples. A malicious binary, once uncovered, is shared among researchers and software vendors and will be detected as soon as threat definitions (aka signatures) are formulated and pushed out to security solutions. The trick to flying under the radar takes some effort but is pretty simple actually: exchanging binaries every once in a while, before they get detected, escapes the malware detection cycle. Given that the binaries are robust against byte- and behavior-pattern detection, exchanging them faster than the signature update cycle can catch them will keep them undetected. True story, some variants of Zeus do so.
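
A two-line sketch shows why hash blacklists are so brittle: flip a single byte and the SHA-256 digest shares nothing with the original. The sample bytes are, of course, made up.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Two "builds" that differ in exactly one byte...
original = b"MZ\x90\x00...binary contents..."
tweaked  = original[:-1] + b"\x01"

# ...produce completely unrelated digests, so a hash blacklist that
# knows `original` is blind to `tweaked`.
```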


REPETITIVE ARTIFACTS


Threat detection technology is all about patterns. Legacy threat detection, after 20 years of practice, is pretty sophisticated at detecting binary patterns, but so is the malware at distorting its binary patterns. What malware is not very good at yet is hiding its artifacts. An infected machine will always show artifacts, be that files, registry keys, domain lookups on the network or modifications of hard disk sectors. Repetitive use of these artifacts is what eventually identifies a malware infection on a system and helps mapping it to known families. In any case, the following artifacts are subject to special attention:

  • Filenames
  • Domain names
  • Registry key names / value names
  • Infiltration methods
  • Persistence methods
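
From the defender's side, this is exactly why artifact lists work. Here is a minimal sketch, with an entirely hypothetical family name and indicator set, of mapping observed artifacts to known families by simple set overlap:

```python
# Hypothetical indicators; real ones come from shared threat intelligence.
KNOWN_ARTIFACTS = {
    "examplefam": {
        "filenames": {"winhlpr32.exe", "msupdte.dll"},
        "registry":  {r"HKCU\Software\Run\winhlpr"},
    },
}

def match_family(observed_files, observed_keys):
    """Score observed filenames and registry keys against each family's
    known artifacts; any overlap at all is worth an analyst's attention."""
    hits = {}
    for family, ind in KNOWN_ARTIFACTS.items():
        score = (len(ind["filenames"] & set(observed_files))
                 + len(ind["registry"] & set(observed_keys)))
        if score:
            hits[family] = score
    return hits
```

This is precisely the mechanism that repetitive artifacts defeat themselves on: reuse the same filename twice and the second infection identifies the first.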

RUN EVERYWHERE


The existence of sandboxes is not a secret any more. It might also have hit the public that most analysis-oriented systems, such as reverse engineers' boxes, rely on virtualized Windows XP. At the time of writing I am not aware of the number of actual productive systems running on VMware or Qemu, but in consideration of the trade-off, malware might want to refuse execution on these systems.


There is a long list of evasion techniques focusing on virtual environments, from delaying execution to detection of hypervisors. While I would not say that checking hard disk identifier names is a very smart technique, it is definitely a start. More successful would be a technique which doesn't leave readable string constants in the binary, or multiple techniques spread throughout the binary.
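
The "not very smart" variant looks roughly like the following sketch: scan a hardware identifier string for well-known hypervisor markers. The marker constants sit in plain sight in the binary, which is exactly why analysts, and pattern scanners, spot this technique so quickly.

```python
# Well-known hypervisor markers that show up in disk/BIOS identifier
# strings of virtualized guests. Stored as plain strings here, which is
# precisely the weakness discussed above.
VM_MARKERS = ("VMWARE", "VBOX", "QEMU", "VIRTUAL")

def disk_id_suggests_vm(disk_identifier: str) -> bool:
    """Return True if a hardware identifier string hints at a VM guest."""
    upper = disk_identifier.upper()
    return any(marker in upper for marker in VM_MARKERS)
```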


SINGULAR PERSISTENCE METHODS


Persistence mechanisms will, to a certain extent, always be visible on an infected machine: a registry key, an executable binary, a new service, a modification of the boot sector. For disinfection, essentially two things have to be undertaken: neutralize the binaries and deactivate the persistence / recovery techniques. The latter is fairly easy, given that the analyst sees the respective system modification and manages to remove it. It would be nothing like easy if the malware were able to apply a number of different techniques, to hide them well, and, even more, to revive destroyed persistence mechanisms.
 

A pretty cool approach has been shown by recent Backoff variants [3], which manage two copies of the malicious binary on disk and use either the Windows services or a mutex to ensure the system remains infected. However, the number of techniques available is large, and innovative combinations are possible.
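
An OS-agnostic sketch of the twin-copy idea (file-based here for illustration; Backoff itself relies on Windows services and a mutex): whenever one copy disappears, a watchdog restores it from its twin, so removing a single artifact never disinfects the machine.

```python
import os
import shutil

def ensure_twin(path_a: str, path_b: str) -> None:
    """Twin-copy persistence sketch: if exactly one of the two copies was
    removed, restore it from the surviving twin."""
    a_exists, b_exists = os.path.exists(path_a), os.path.exists(path_b)
    if a_exists and not b_exists:
        shutil.copyfile(path_a, path_b)
    elif b_exists and not a_exists:
        shutil.copyfile(path_b, path_a)
```

For a cleanup to stick, both copies (and whatever schedules the watchdog) must be removed in the same pass, which is the whole point of the technique.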

SHOWING STRINGS


The idea that readable string constants in the binary are not feasible when trying to be stealthy has already reached most malware authors. Interestingly though, the main mitigation method is to pack the binary with some crypter that won't leave a single byte where it was before. Yet this implies that as soon as the crypter is bypassed, the plain binary remains easy to analyze. And, seriously, most crypters one can get on the internet are not all that hard to bypass. Suitable methods exist to distort strings and only reconstruct them at runtime, so the static payload does not read like an open book.
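
A minimal sketch of such runtime reconstruction: even a single-byte XOR, about the weakest obfuscation imaginable, already keeps the string out of a static strings dump. Key and string are illustrative.

```python
KEY = 0x5A  # illustrative single-byte key

def obfuscate(s: str, key: int = KEY) -> bytes:
    """Build-time step: XOR-fold the string so it never appears in plain."""
    return bytes(b ^ key for b in s.encode())

# What ends up in the binary -- no readable "cmd.exe" anywhere.
BLOB = obfuscate("cmd.exe")

def reveal(blob: bytes, key: int = KEY) -> str:
    """Runtime step: reconstruct the string only when it is needed."""
    return bytes(b ^ key for b in blob).decode()
```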


SHOWING API NAMES


Right after string constants on the ladder of the analyst's favorite helpers come API calls. It is kind of hard not to understand the significance of CreateFileA, WriteFileA and CloseHandle in a row. Malware writers understand this just as well as the string issue: apply a runtime packer and all the API magic is hidden. Again, to a certain extent this is bound to fail, as long as you don't have a packer which is exceptionally hard to crack.


Not very complicated and tremendously effective against static detection are jump tables; better even, morphing jump tables, where API offsets change place every once in a while. In fact, there are a number of ways to hinder the disassembler / debugger from showing API names. Choose one. Or many.
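
Import-by-hash is one of those well-known ways of hiding API names: only numeric hashes are embedded in the binary, and a loader stub walks the export table at runtime comparing hashes, so names like CreateFileA never appear anywhere. A sketch of the classic ROR-13 hash, computed in Python purely for illustration:

```python
def ror13_hash(name: str) -> int:
    """Classic 32-bit ROR-13 hash, as used by shellcode to look up
    exported APIs by number instead of by name."""
    h = 0
    for ch in name:
        h = ((h >> 13) | (h << 19)) & 0xFFFFFFFF  # rotate right by 13
        h = (h + ord(ch)) & 0xFFFFFFFF
    return h

# Embedded at build time: hashes only, no readable API names.
WANTED = {ror13_hash(n) for n in ("CreateFileA", "WriteFile", "CloseHandle")}

def resolve(export_names):
    """Runtime step: walk the (here simulated) export table and keep the
    names whose hash matches one of the embedded values."""
    return {name for name in export_names if ror13_hash(name) in WANTED}
```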


CLEAR SEPARATION OF LAYERS


The most common architecture of malware is the following: one or more protection layers on top of one or more packer layers on top of the plain payload. This is because the entity providing the plain binary and the entity providing packing and protection are usually not the same. Simple workload distribution ensures that the specialist in packing does his job while the specialist in exfiltrating data can focus on his desired target.
 

Again, I am lacking the empirical data for such a statement, but my guess would be that the number of available crypters or crypter services out there is fairly limited. As most of these operate somewhat similarly, today's analysts don't have major problems cracking literally all of them. Therefore, just as with the strings, keeping anti-analysis and packing / unpacking up throughout all layers multiplies the analysis pain.

HARDCODED DOMAIN NAMES


I guess it is tempting not to bother with the command-and-control server management issue and just use a single hardcoded domain. However, looking at C&C domain names is by far the easiest way to classify malware. Domain generation algorithms weren't invented yesterday; I remember reading that even dictionary-based DGAs are doable. While their main purpose remains hiding the C&C infrastructure on the web, they also pose a significant headache for analysts. Gibberish domain names make it somewhat hard to map a sample to a given family, a mapping that would allow detection even if the sample itself is unknown.
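
A toy date-seeded DGA illustrates the principle: every infected machine derives the same fresh domain list per day, so nothing has to be hardcoded. The family salt, domain count and TLD are invented for the example.

```python
import hashlib
from datetime import date

def dga(day: date, count: int = 10, tld: str = ".com"):
    """Toy DGA: hash today's date plus a family-specific salt to derive
    a deterministic list of gibberish domains. Same day -> same list,
    on every infected machine, with no domain stored in the binary."""
    seed = f"{day.isoformat()}-examplefam".encode()  # hypothetical salt
    domains = []
    for i in range(count):
        digest = hashlib.sha256(seed + bytes([i])).hexdigest()
        domains.append(digest[:12] + tld)
    return domains
```

The operator, knowing the algorithm, registers one of tomorrow's domains in advance; a defender who does not know the seed sees only ever-changing lookups.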


LINEAR EXECUTION


Dynamic analysis is limited in several ways, such as time or virtual environments, or in the sense that it is not very dynamic in what it is actually analyzing. Emulators and sandboxes usually know only one way: execute and see what happens. Simple examples of how this can go wrong are applications which need certain input parameters, or DLLs which exhibit their maliciousness only when called on a certain export. And for what it's worth, human analysts usually work in the same fashion. Parallelized execution is a bitch, and conditional execution also raises analysis efforts significantly. No better way to trick sandboxes than showing malicious behavior only on Tuesdays at 3 p.m. and acting benign the whole rest of the week. Just sayin'.
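
The Tuesday-3-p.m. trigger is essentially a one-liner, which is what makes it so nasty for sandboxes with an execution window of a few minutes. A sketch:

```python
from datetime import datetime

def should_act(now: datetime) -> bool:
    """Time-gated trigger: act only on Tuesdays during the 15:00 hour,
    look perfectly benign the rest of the week."""
    return now.weekday() == 1 and now.hour == 15  # 1 == Tuesday
```

A sandbox that runs the sample for five minutes on a Thursday morning records nothing of interest, and honestly reports the binary as clean.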


STICKING TO KNOWN SPHERES

Last but not least, remaining in known spheres gives analysts the advantage of a level battlefield. There are hordes of engineers out there who know how to handle a sandbox, interpret API call traces or dig into x86/64 binaries. From my personal, borderline painful experience I can tell that the number of people familiar with virtual machine execution, kernel-land code, bootkits or BIOS malware is rather limited.


FOR THE DEFENDERS


Looking at the attacks we see today, again only those we can see, they reuse tools and techniques that have been there before. Still, a large part of malware goes undetected every day. Most of it shows one weakness or another that allows detection. The art in detection, though, is actually to see. Having visibility into the artifacts which indicate an attack or a successful breach is already half the way.


The outlined mistakes malware authors are making are fairly easy to fix. Suitable source code and descriptions of techniques are available on the internet. It is only a question of time until the crooks advance their creations to a level where current solutions are no longer able to find and analyze them. Looking into the future is not something humans are especially good at; however, the points I outline here are not generally unknown or anything innovative. Meanwhile, I am sure current security solutions cover some, but surely not all, of the mentioned weaknesses. Only new innovations and combined approaches can extend visibility and empower new ways of threat detection.

RESOURCES


[1]
Home Depot Data Breach Could Be The Largest Yet [NY Times]
http://bits.blogs.nytimes.com/2014/09/08/home-depot-confirms-that-it-was-hacked/?_php=true&_type=blogs&_r=0

[2]
Polymorphic Virut Malware [Avira Blog]
http://techblog.avira.com/2011/03/11/polymorphic-virut-malware/en/

[3]
New POS Malware Family – Backoff [Trustwave Blog]
https://gsr.trustwave.com/topics/backoff-pos-malware/backoff-malware-overview/
