Malware Naming, Shape Shifters & Sympathetic Magic


This is the paper on malware naming I presented at the 3rd Cybercrime Forensics Education & Training Conference (CFET 2009) in Canterbury: “The Game of the Name: Malware Naming, Shape Shifters and Sympathetic Magic”.

Here’s the abstract:

Once upon a time, one infection by specific malware looked much like another infection, to an antivirus scanner if not to the naked eye. Even back then, virus naming wasn’t very consistent between vendors, but at least virus encyclopaedias and third-party resources like vgrep made it generally straightforward to map one vendor’s name for a virus to another vendor’s name for the same malware.
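A vgrep-style cross-reference can be pictured as a simple lookup table: one row per sample, one column per vendor, so that one vendor’s name for a piece of malware can be translated into another’s. This is only a minimal sketch of the idea; all vendor and malware names below are invented for illustration.

```python
# Minimal sketch of a vgrep-style cross-reference table. All vendor
# and malware names here are invented for illustration only.
ALIASES = [
    # one row per sample: each vendor's name for the same malware
    {"VendorA": "W32/Examplo.A",
     "VendorB": "Win32.Examplo.1234",
     "VendorC": "Virus.Win32.Examplo.a"},
    {"VendorA": "W32/Otherware.B",
     "VendorB": "Win32.Otherware.77",
     "VendorC": "Trojan.Win32.Otherware.b"},
]

def translate(name, from_vendor, to_vendor):
    """Given one vendor's name for a sample, return another vendor's name."""
    for row in ALIASES:
        if row.get(from_vendor) == name:
            return row.get(to_vendor)
    return None  # no cross-reference known

print(translate("W32/Examplo.A", "VendorA", "VendorC"))
# Virus.Win32.Examplo.a
```

The sketch assumes exactly the one-name-per-vendor-per-sample world the abstract describes as historical: each row is a single sample, and the mapping is unambiguous in both directions. It is precisely this assumption that breaks down in the rest of the paper.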

In 2009, though, the threat landscape looks very different. Viruses and other replicative malware, while far from extinct, pose a manageable problem compared to other threats whose single common characteristic is malicious intent. Proof-of-concept code with sophisticated self-replicating mechanisms is of less interest to today’s malware authors than shape-shifting Trojans that change their appearance frequently to evade detection, and that are intended to make money for criminals rather than to earn adolescent admiration and bragging rights.

Sheer sample glut makes it impossible to categorize and standardize on naming for each and every unique sample out of tens of thousands processed each day.

Detection techniques such as generic signatures, heuristics and sandboxing have also changed the ways in which malware is detected, and therefore how it is classified, confounding the old assumption of a simple one-to-one relationship between a detection label and a malicious program. This presentation will explain how one-to-many, many-to-one and many-to-many models are at least as likely as the old one-detection-per-variant model; why “Do you detect Win32/UnpleasantVirus.EG?” is such a difficult question to answer; and why exact identification is not a prerequisite for the detection and remediation of malware, and actually militates against the most effective use of analysis and development time and resources. But what is the information that the end-user or end-site really needs to know about an incoming threat?
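The many-to-many relationship between detection labels and samples can be sketched as a pair of mappings built from detection reports. This is an illustrative model only; the labels and sample IDs are invented, not drawn from any real product.

```python
from collections import defaultdict

# Illustrative sketch only: the detection labels and sample IDs are
# invented. One generic or heuristic detection can cover many samples,
# and one sample can be reported under several labels, so the relation
# between labels and samples is many-to-many rather than one-to-one.
reports = [
    ("Win32/Generic.Downloader", "sample1"),
    ("Win32/Generic.Downloader", "sample2"),
    ("Heuristic.Suspect.Packer",  "sample2"),
    ("Heuristic.Suspect.Packer",  "sample3"),
]

samples_by_label = defaultdict(set)
labels_by_sample = defaultdict(set)
for label, sample in reports:
    samples_by_label[label].add(sample)
    labels_by_sample[sample].add(label)

print(sorted(samples_by_label["Win32/Generic.Downloader"]))
# ['sample1', 'sample2']  -> one label covers many samples
print(sorted(labels_by_sample["sample2"]))
# ['Heuristic.Suspect.Packer', 'Win32/Generic.Downloader']  -> one sample, many labels
```

In this model, “Do you detect Win32/UnpleasantVirus.EG?” has no single answer: a product may detect every sample another vendor files under that name, but under two or three of its own generic labels, none of which matches the name in the question.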

David Harley
Small Blue-Green World
ESET Senior Research Fellow

Execution Context in Anti-Malware Testing

This is one of my 2009 papers, presented by Randy Abrams and myself on behalf of ESET at the EICAR 2009 Conference in Berlin.


Anti-malware testing methodology remains a contentious area because many testers are insufficiently aware of the complexities of malware and anti-malware technology. This results in the frequent publication of comparative test results that are misleading and often totally invalid because they don’t accurately reflect the detection capability of the products under test. Because many tests are based purely on static testing, where products are tested by using them to scan presumed infected objects passively, those products that use more proactive techniques such as active heuristics, emulation and sandboxing are frequently disadvantaged in such tests, even assuming that sample sets are correctly validated.

Recent examples of misleading published statistical data include the ranking of anti-malware products according to reports returned by multi-scanner sample submission sites, even though the better examples of such sites are clear that this is not an appropriate use of their services, and the use of similar reports to generate other statistical data such as the assumed prevalence of specific malware. These problems, especially when combined with other testing problem areas such as accurate sample validation and classification, introduce major statistical anomalies.

This paper reviews the most common mainstream anti-malware detection techniques (search strings and simple signatures, generic signatures, passive heuristics, active heuristics and behaviour analysis) in the context of anti-malware testing for the purposes of single-product testing, comparative detection testing, and the generation of prevalence and global detection data. Specifically, issues around static and dynamic testing will be examined. Issues with additional impact, such as sample classification and false positives, will also be considered – not only the false identification of innocent applications as malware, but also contentious classification issues such as (1) the trapping of samples, especially corrupted or truncated honeypot and honeynet samples that are malicious in intent but unable to pose a direct threat to target systems, and (2) the use of criteria such as packing and obfuscation status as a primary heuristic for the identification of malware.
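The static-versus-dynamic gap can be illustrated with a toy model. Everything here is invented for illustration (the sample set, the detection layers and the counts); the point is only that a purely static scan exercises just the on-disk signature layer, so a product whose strength is runtime behaviour analysis appears weaker in such a test than it is in use.

```python
# Toy model, all figures invented for illustration: a product detects
# some samples statically (on-disk signatures) and more at execution
# time (behaviour analysis). A static-only test sees just the first
# layer and so understates the product's real protection.
SAMPLES = ["s%02d" % i for i in range(10)]

product = {
    "static_signatures": set(SAMPLES[:4]),   # caught by passive scanning
    "behaviour_analysis": set(SAMPLES[:9]),  # caught when samples execute
}

static_rate = len(product["static_signatures"]) / len(SAMPLES)
dynamic_rate = len(product["static_signatures"]
                  | product["behaviour_analysis"]) / len(SAMPLES)

print(f"static-only test: {static_rate:.0%}")   # 40%
print(f"execution test:   {dynamic_rate:.0%}")  # 90%
```

The same model shows why sample validation matters: if the corrupted or truncated samples mentioned above are left in SAMPLES, both rates are depressed for reasons that have nothing to do with the product’s detection capability.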

EICAR execution context paper

Phish Phodder: Is User Education Helping or Hindering?


David Harley & Andrew Lee, “Phish Phodder: Is User Education Helping or Hindering?”, September 2007, Virus Bulletin. Copyright is held by Virus Bulletin Ltd, but the document is made available on this site for personal use free of charge by permission of Virus Bulletin.

Mostly, security professionals can spot a phish a mile off. If they do err, it’s usually on the side of caution, for instance when real organizations fail to observe best practice and generate phish-like marketing messages. Many sites are now addressing the problem with phishing quizzes, intended to teach the everyday user to distinguish phish from phowl (sorry). Academic papers on why people fall for phishing mails and sites are something of a growth industry. Yet phishing attacks continue to increase, and while accurate and up-to-date figures for financial loss are hard to come by, indications are that losses from phishing and other forms of identity theft continue to climb.

This paper:
1. Evaluates current research on how end users are susceptible to phishing attacks and ID theft.
2. Evaluates a range of web-based educational and informational resources in general and summarizes the pros and cons of the quiz approach in particular.
3. Reviews the shared responsibility of phished institutions and phishing mail targets for reducing the impact of phishing scams. What constitutes best practice for finance-related mail-outs and e-commerce transactions? How far can we rely on detection technology?

Who Will Test The Testers?


Who Will Test The Testers? is a paper by myself and Andrew Lee on making anti-malware testers more accountable to their audiences, presented at the Virus Bulletin Conference in 2008 and published in the conference proceedings.

David Harley BA CISSP FBCS CITP & Andrew Lee CISSP, “Who Will Test The Testers?”, October 2008, Virus Bulletin. Copyright is held by Virus Bulletin Ltd, but the paper is made available on this site for personal use free of charge by permission of Virus Bulletin.


The anti-malware industry has been plagued since its earliest days by one poorly designed comparative test after another. In 2007, some of the best anti-malware researchers, comparative testers and product certification specialists took the first steps towards raising product testing standards with the formation of a group specifically focused on establishing standards and methodologies, educating both consumers and testers in discriminating between good and bad practice, and providing objective analyses of current testing practices. This paper summarizes current initiatives by the Anti-Malware Testing Standards Organization and other groups, but also considers next steps, going beyond objectifying methodology, educational issues and blowing away the fog of misinformation and fallacy, to the next level. Underlying these vital issues is a question: is it possible to make testers and certifying authorities more accountable for the quality of their testing methods and the accuracy of the conclusions they draw based on that testing?

This paper attempts to answer that question.

Teach Your Children Well



Teach Your Children Well is a paper by myself, Eddy Willems and Judith Harley, presented by Eddy and myself at the Virus Bulletin Conference in 2005 and published in the conference proceedings.

David Harley, Eddy Willems & Judith Harley, “Teach Your Children Well – ICT Security And The Younger Generation”, October 2005, Virus Bulletin. Copyright is held by Virus Bulletin Ltd, but the document is made available on this site for personal use free of charge by permission of Virus Bulletin.


An article by Eddy Willems in the August 2004 edition of VB discussed his research into the security awareness of Belgian children. The authors developed this theme by presenting a similar questionnaire to ICT pupils in the UK and using the results as the basis for an interactive presentation and discussion with several groups there; an assignment-based follow-up with different groups was undertaken in early March 2005.

The paper is not intended as a completed formal study, but considers this presentation and the issues that came up in this preliminary research as a basis for further study and teaching tools. It also considers a range of resources in the area of child safety, learning, attitudes and behaviour as they affect and are affected by the use of information and communications technology, and the influence of the media, government, and the Internet itself. While the preliminary research has largely focused on malware and email abuse, we will also consider how these areas are connected with other technologies and areas of concern among parents and educators.