Results 1 - 10 of 127
Autograph: Toward automated, distributed worm signature detection
- In Proceedings of the 13th USENIX Security Symposium, 2004
"... Today’s Internet intrusion detection systems (IDSes) monitor edge networks ’ DMZs to identify and/or filter malicious flows. While an IDS helps protect the hosts on its local edge network from compromise and denial of service, it cannot alone effectively intervene to halt and reverse the spreading o ..."
Cited by 362 (3 self)
Today’s Internet intrusion detection systems (IDSes) monitor edge networks’ DMZs to identify and/or filter malicious flows. While an IDS helps protect the hosts on its local edge network from compromise and denial of service, it cannot alone effectively intervene to halt and reverse the spreading of novel Internet worms. Generation of the worm signatures required by an IDS—the byte patterns sought in monitored traffic to identify worms—today entails non-trivial human labor, and thus significant delay: as network operators detect anomalous behavior, they communicate with one another and manually study packet traces to produce a worm signature. Yet intervention must occur early in an epidemic to halt a worm’s spread. In this paper, we describe Autograph, a system that automatically generates signatures for novel Internet worms that propagate using TCP transport. Autograph generates signatures by analyzing the prevalence of portions of flow payloads, and thus uses no knowledge of protocol semantics above the TCP level. It is designed to produce signatures that exhibit high sensitivity (high true positives) and high specificity (low false positives); our evaluation of the system on real DMZ traces validates that it achieves these goals. We extend Autograph to share port scan reports among distributed monitor instances, and using trace-driven simulation, demonstrate the value of this technique in speeding the generation of signatures for novel worms. Our results elucidate the fundamental trade-off between early generation of signatures for novel worms and the specificity of these generated signatures.
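To make the prevalence-analysis idea concrete, here is a minimal Python sketch of content-block partitioning and greedy signature selection in the spirit of Autograph's COPP stage. The toy windowed hash stands in for Rabin fingerprints, and all parameter values and names are illustrative, not the paper's:

```python
# Sketch of Autograph-style content-prevalence analysis: partition
# suspicious payloads at content-determined breakpoints, then greedily
# keep the most prevalent blocks as candidate signatures.
from collections import Counter

WINDOW = 8        # bytes hashed to decide a breakpoint
AVG_BLOCK = 64    # expected average block size
MIN_BLOCK = 16    # skip overly short, unspecific blocks
BREAKMARK = 0x13  # arbitrary residue marking a breakpoint

def window_hash(chunk: bytes) -> int:
    h = 0
    for b in chunk:
        h = (h * 263 + b) & 0xFFFFFFFF
    return h

def partition(payload: bytes):
    """Split a payload where the hash of the trailing WINDOW bytes hits
    BREAKMARK; boundaries depend only on local content, so identical
    substrings in different flows yield identical blocks."""
    blocks, start = [], 0
    for i in range(WINDOW, len(payload) + 1):
        if i - start >= MIN_BLOCK and \
           window_hash(payload[i - WINDOW:i]) % AVG_BLOCK == BREAKMARK:
            blocks.append(payload[start:i])
            start = i
    if start < len(payload):
        blocks.append(payload[start:])
    return blocks

def generate_signatures(suspicious_flows, coverage=0.9):
    """Greedily select the most prevalent content blocks until the
    chosen set covers `coverage` of the suspicious flow pool."""
    prevalence = Counter()
    for flow in suspicious_flows:
        prevalence.update(set(partition(flow)))
    signatures, covered = [], set()
    for block, _ in prevalence.most_common():
        if len(covered) >= coverage * len(suspicious_flows):
            break
        matched = {i for i, f in enumerate(suspicious_flows) if block in f}
        if matched - covered:
            signatures.append(block)
            covered |= matched
    return signatures
```

Flows that share an invariant exploit substring longer than the average block size will tend to produce identical high-prevalence blocks, which is what the greedy pass selects; the coverage threshold is one knob behind the sensitivity/specificity trade-off the abstract describes.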
Modeling Botnet Propagation Using Time Zones
- In Proceedings of the 13th Network and Distributed System Security Symposium (NDSS), 2006
"... Time zones play an important and unexplored role in malware epidemics. To understand how time and location affect malware spread dynamics, we studied botnets, or large coordinated collections of victim machines (zombies) controlled by attackers. Over a six month period we observed dozens of botnets ..."
Cited by 132 (10 self)
Time zones play an important and unexplored role in malware epidemics. To understand how time and location affect malware spread dynamics, we studied botnets, or large coordinated collections of victim machines (zombies) controlled by attackers. Over a six-month period we observed dozens of botnets representing millions of victims. We noted diurnal properties in botnet activity, which we suspect occur because victims turn their computers off at night. Through binary analysis, we also confirmed that some botnets demonstrated a bias in infecting regional populations. Clearly, computers that are offline are not infectious, and any regional bias in infections will affect the overall growth of the botnet. We therefore created a diurnal propagation model. The model uses diurnal shaping functions to capture regional variations in online vulnerable populations. The diurnal model also lets one compare propagation rates for different botnets, and prioritize response. Because of variations in release times and diurnal shaping functions particular to an infection, botnets released later in time may actually surpass other botnets that have an advanced start. Since response times for malware outbreaks are now measured in hours, being able to predict short-term propagation dynamics lets us allocate resources more intelligently. We used empirical data from botnets to evaluate the analytical model.
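As a rough illustration of what a diurnal shaping function does to propagation, here is a toy single-time-zone simulation: an SI-style epidemic whose contact rate and susceptible pool are scaled by an online-fraction function alpha(t). The shape of alpha and all parameters are invented for illustration, not the paper's fitted multi-zone model:

```python
# Toy diurnal propagation model: classic SI growth with the infection
# term modulated by alpha(t), the fraction of the vulnerable population
# online at hour t. Forward-Euler integration.
import math

def alpha(t_hours, peak=20.0, width=6.0):
    """Toy bell-shaped online fraction, peaking in the evening."""
    x = ((t_hours - peak + 12) % 24) - 12
    return math.exp(-(x / width) ** 2)

def simulate(N=100_000, I0=10, beta=0.8, hours=72, dt=0.1):
    """Integrate dI/dt = beta * alpha(t) * I * (N*alpha(t) - I) / N."""
    I, t, series = float(I0), 0.0, []
    while t < hours:
        a = alpha(t)
        dI = beta * a * I * max(N * a - I, 0.0) / N
        I = min(I + dI * dt, N)
        series.append((t, I))
        t += dt
    return series

for t, i in simulate()[::120]:  # print every 12 simulated hours
    print(f"t={t:5.1f}h  infected={i:10.0f}")
```

Running it shows growth stalling each "night" and resuming each "day", which is why a botnet released just before its victims' peak online hours can overtake one released earlier at an unfavorable time.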
Anomalous payload-based worm detection and signature generation
- In Proceedings of the 8th International Symposium on Recent Advances in Intrusion Detection (RAID), 2005
"... Abstract. New features of the PAYL anomalous payload detection sensor are presented and demonstrated to accurately detect and generate signatures for zero-day worm exploits. Experimental evidence is presented to demonstrate that “site-specific models ” trained and used for testing by PAYL are capabl ..."
Cited by 126 (13 self)
New features of the PAYL anomalous payload detection sensor are presented and demonstrated to accurately detect and generate signatures for zero-day worm exploits. Experimental evidence is presented to demonstrate that “site-specific models” trained and used for testing by PAYL are capable of detecting new worms with high accuracy in a collaborative security system. A new approach is proposed that correlates ingress/egress payload alerts to identify the worm’s initial propagation. The method also enables automatic signature generation very early in the worm’s propagation stage. These signatures can be deployed immediately to network firewalls and content filters to proactively protect other hosts. Finally, we also propose a collaborative security strategy whereby different hosts can themselves exchange PAYL signatures to increase accuracy and mitigate against false positives. The method used to represent these signatures is also privacy-preserving to enable cross-domain sharing. The important principle demonstrated is that the reduction of false positive alerts from an anomaly detector is not the central problem. Rather, correlating multiple alerts identifies true positives from the set of anomaly alerts and reduces incorrect decisions, producing accurate mitigation.
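PAYL models 1-gram byte-frequency distributions of normal payloads and scores new payloads with a simplified Mahalanobis distance. The sketch below shows that scoring step in Python, collapsing the real sensor's per-port, per-length conditioning into a single model; training data, smoothing constant, and class names are placeholders:

```python
# PAYL-style 1-gram payload modeling: per-model byte-frequency mean and
# standard deviation, scored with sum(|x_i - mean_i| / (std_i + r)).
import numpy as np

class PaylModel:
    def __init__(self, smoothing=0.001):
        self.r = smoothing

    @staticmethod
    def byte_freq(payload: bytes) -> np.ndarray:
        counts = np.bincount(np.frombuffer(payload, dtype=np.uint8),
                             minlength=256)
        return counts / max(len(payload), 1)

    def train(self, payloads):
        freqs = np.stack([self.byte_freq(p) for p in payloads])
        self.mean = freqs.mean(axis=0)
        self.std = freqs.std(axis=0)

    def score(self, payload: bytes) -> float:
        x = self.byte_freq(payload)
        return float(np.sum(np.abs(x - self.mean) / (self.std + self.r)))

model = PaylModel()
model.train([b"GET /index.html HTTP/1.0\r\n\r\n"] * 50 +
            [b"GET /style.css HTTP/1.0\r\n\r\n"] * 50)
print(model.score(b"GET /about.html HTTP/1.0\r\n\r\n"))      # low: normal-looking
print(model.score(b"\x90" * 40 + b"\xcc\xcc shellcode-ish"))  # high: anomalous
```

Payloads whose byte distribution sits far from the trained profile score high; the ingress/egress correlation the abstract describes then turns those raw anomaly scores into actionable worm alerts and signatures.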
The Internet Motion Sensor: A Distributed Blackhole Monitoring System
- In Proceedings of the Network and Distributed System Security Symposium (NDSS ’05), 2005
"... As national infrastructure becomes intertwined with emerging global data networks, the stability and integrity of the two have become synonymous. This connection, while necessary, leaves network assets vulnerable to the rapidly moving threats of today’s Internet, including fast moving worms, distrib ..."
Cited by 110 (16 self)
As national infrastructure becomes intertwined with emerging global data networks, the stability and integrity of the two have become synonymous. This connection, while necessary, leaves network assets vulnerable to the rapidly moving threats of today’s Internet, including fast-moving worms, distributed denial of service attacks, and routing exploits. This paper introduces the Internet Motion Sensor (IMS), a globally scoped Internet monitoring system whose goal is to measure, characterize, and track threats. The IMS architecture is based on three novel components. First, a Distributed Monitoring Infrastructure increases visibility into global threats. Second, a Lightweight Active Responder provides enough interactivity that traffic on the same service can be differentiated independent of application semantics. Third, a Payload Signatures and Caching mechanism avoids recording duplicated payloads, reducing overhead and assisting in identifying new and unique payloads. We explore the architectural tradeoffs of this system in the context of a three-year deployment across multiple dark address blocks ranging in size from /24s to a /8. These sensors represent a range of organizations and a diverse sample of the routable IPv4 space, including nine of the routable /8 address ranges. Data gathered from these deployments is used to demonstrate the ability of the IMS to capture and characterize several important Internet threats: the Blaster worm (August 2003), the Bagle backdoor scanning efforts …
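A minimal sketch of the payload-signature-and-caching idea: hash each observed payload and archive it only on first sight, so a worm that sends the same payload millions of times costs one stored copy plus cheap lookups. The class, method names, and choice of MD5 below are illustrative:

```python
# Payload cache in the spirit of the IMS component described above:
# store a payload only the first time its checksum is seen.
import hashlib

class PayloadCache:
    def __init__(self):
        self.seen = set()

    def observe(self, payload: bytes) -> bool:
        """Return True if this payload is new (and should be archived)."""
        digest = hashlib.md5(payload).digest()
        if digest in self.seen:
            return False
        self.seen.add(digest)
        return True

cache = PayloadCache()
for pkt in [b"\x90\x90worm-A", b"\x90\x90worm-A", b"worm-B"]:
    if cache.observe(pkt):
        print(f"new payload ({len(pkt)} bytes): archive it")
    else:
        print("duplicate payload: count it, skip storage")
```

Beyond saving storage, the cache makes novelty itself a signal: a checksum never seen before flags a new or unique payload worth an analyst's attention.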
Detecting targeted attacks using shadow honeypots
- In Proceedings of the 14th USENIX Security Symposium, 2005
"... We present Shadow Honeypots, a novel hybrid architecture that combines the best features of honeypots and anomaly detection. At a high level, we use a variety of anomaly detectors to monitor all traffic to a protected network/service. Traffic that is considered anomalous is processed by a “shadow ho ..."
Cited by 89 (17 self)
We present Shadow Honeypots, a novel hybrid architecture that combines the best features of honeypots and anomaly detection. At a high level, we use a variety of anomaly detectors to monitor all traffic to a protected network/service. Traffic that is considered anomalous is processed by a “shadow honeypot” to determine the accuracy of the anomaly prediction. The shadow is an instance of the protected software that shares all internal state with a regular (“production”) instance of the application, and is instrumented to detect potential attacks. Attacks against the shadow are caught, and any incurred state changes are discarded. Legitimate traffic that was misclassified will be validated by the shadow and will be handled correctly by the system, transparently to the end user. The outcome of processing a request by the shadow is used to filter future attack instances and could be used to update the anomaly detector. Our architecture allows system designers to fine-tune systems for performance, since false positives will be filtered by the shadow. Contrary to regular honeypots, our architecture can be used both for server and client applications. We demonstrate the feasibility of our approach in a proof-of-concept implementation of the Shadow Honeypot architecture for the Apache web server and the Mozilla Firefox browser. We show that despite a considerable overhead in the instrumentation of the shadow honeypot (up to 20% for Apache), the overall impact on the system is diminished by the ability to minimize the rate of false positives.
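A control-flow sketch of the architecture: a scalar anomaly score routes suspect requests to a shadow copy, attacks caught there are dropped, and a misclassified-but-legitimate request still gets a correct response. The toy detector, the MemoryError trap, and the deep copy (standing in for the paper's instrumented, state-sharing shadow instance) are all invented for illustration:

```python
# Shadow-honeypot control flow: anomalous requests are replayed against
# an instrumented shadow; its state changes commit only if no attack fires.
import copy

def anomaly_score(request: str) -> float:
    # stand-in detector: NOP-sled bytes or unusually long requests look odd
    return 1.0 if "\x90" in request or len(request) > 100 else 0.0

class Service:
    def __init__(self):
        self.state = {"visits": 0}
    def handle(self, request: str) -> str:
        if "\x90" in request:                     # instrumented check stands in
            raise MemoryError("overflow caught")  # for memory-violation traps
        self.state["visits"] += 1
        return f"ok ({self.state['visits']})"

def serve(production: Service, request: str, threshold=0.5) -> str:
    if anomaly_score(request) < threshold:
        return production.handle(request)         # fast path: no instrumentation
    shadow = copy.deepcopy(production)            # shadow sees production state
    try:
        response = shadow.handle(request)
    except MemoryError:
        return "attack confirmed: filtered"       # discard shadow state changes
    production.state = shadow.state               # legit: commit shadow's changes
    return response

svc = Service()
print(serve(svc, "GET /index.html"))              # normal path
print(serve(svc, "GET /" + "a" * 200))            # anomalous but legit: validated
print(serve(svc, "GET /\x90\x90\x90overflow"))    # attack: caught in the shadow
```

The design point is that the detector's false positives cost only a slower code path, not a dropped request, which is what lets operators tune it aggressively.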
The Architecture of PIER: an Internet-Scale Query Processor
- In CIDR, 2005
"... This paper presents the architecture of PIER , an Internetscale query engine we have been building over the last three years. PIER is the first general-purpose relational query processor targeted at a peer-to-peer (p2p) architecture of thousands or millions of participating nodes on the Internet. ..."
Cited by 88 (8 self)
This paper presents the architecture of PIER, an Internet-scale query engine we have been building over the last three years. PIER is the first general-purpose relational query processor targeted at a peer-to-peer (p2p) architecture of thousands or millions of participating nodes on the Internet. It supports massively distributed, database-style dataflows for snapshot and continuous queries. It is intended to serve as a building block for a diverse set of Internet-scale information-centric applications, particularly those that tap into the standardized data readily available on networked machines, including packet headers, system logs, and file names.
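One building block such an engine needs is a way to bring matching tuples together without a central site. A common approach, which DHT-based query processors like PIER use for equi-joins, is to rehash both relations on the join key so matches land on the same node. A toy sketch, with a mod-N hash standing in for real consistent hashing and all data invented:

```python
# Rehash step of a distributed equi-join: tuples from both relations are
# placed by a hash of the join key, so each node can join locally.
import hashlib

NODES = [f"node-{i}" for i in range(8)]

def owner(key: str) -> str:
    h = int.from_bytes(hashlib.sha1(key.encode()).digest()[:4], "big")
    return NODES[h % len(NODES)]

def rehash(relation, key_field):
    partitions = {n: [] for n in NODES}
    for tup in relation:
        partitions[owner(str(tup[key_field]))].append(tup)
    return partitions

R = [{"ip": "10.0.0.1", "port": 80}, {"ip": "10.0.0.2", "port": 22}]
S = [{"ip": "10.0.0.1", "alert": "scan"}]
r_parts, s_parts = rehash(R, "ip"), rehash(S, "ip")
for node in NODES:                    # each node joins only its own partition
    for r in r_parts[node]:
        for s in s_parts[node]:
            if r["ip"] == s["ip"]:
                print(node, {**r, **s})
```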
HoneyStat: Local Worm Detection Using Honeypots
- In Proceedings of the 7th International Symposium on Recent Advances in Intrusion Detection (RAID), 2004
"... Abstract. Worm detection systems have traditionally used global strategies and focused on scan rates. The noise associated with this approach requires statistical techniques and large data sets (e.g., monitored machines) to avoid false positives. Worm detection techniques for smaller local networks ..."
Cited by 86 (5 self)
Worm detection systems have traditionally used global strategies and focused on scan rates. The noise associated with this approach requires statistical techniques and large data sets (e.g., monitored machines) to avoid false positives. Worm detection techniques for smaller local networks have not been fully explored. We consider how local networks can provide early detection and complement global monitoring strategies. We describe HoneyStat, which uses modified honeypots to generate a highly accurate alert stream with low false positive rates. Unlike traditional honeypots, HoneyStat nodes are minimal, script-driven, and cover a large IP space. The HoneyStat nodes generate three classes of alerts: memory alerts (based on buffer overflow detection and process management), disk write alerts (such as writes to registry keys and critical files), and network alerts. Data collection is automated, and once an alert is issued, a time segment of previous traffic to the node is analyzed. A logit analysis determines what previous network activity explains the current honeypot alert. The result can indicate whether an automated or worm attack is present. We demonstrate HoneyStat’s improvements over previous worm detection techniques. First, using trace files from worm attacks on small networks, we demonstrate how it detects zero-day worms. Second, we show how it detects multi-vector worms that use combinations of ports to attack. Third, the alerts from HoneyStat provide more information than traditional IDS alerts, such as binary signatures, attack vectors, and attack rates. We also use extensive (year-long) trace files to show how the logit analysis produces very low false positive rates.
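To illustrate the logit step, the sketch below fits a logistic regression relating per-port activity in the window before each observation to whether a honeypot alert followed, then reads large positive coefficients as evidence for an attack vector. The ports, counts, and feature layout are invented, and it assumes scikit-learn is available; the paper's analysis is richer than this:

```python
# Toy logit analysis: which prior network activity explains honeypot alerts?
import numpy as np
from sklearn.linear_model import LogisticRegression

PORTS = [80, 135, 445, 1433]
# rows: observation windows; columns: connection counts per candidate port
X = np.array([
    [0, 12,  9, 0],   # windows that preceded honeypot alerts...
    [1, 15, 11, 0],
    [0, 10,  8, 1],
    [5,  0,  1, 0],   # ...and ordinary background windows
    [4,  1,  0, 0],
    [6,  0,  0, 1],
])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = window ended in a honeypot alert

logit = LogisticRegression().fit(X, y)
for port, coef in zip(PORTS, logit.coef_[0]):
    print(f"port {port}: coefficient {coef:+.2f}")
# Large positive coefficients (here ports 135/445) point at the traffic
# that explains the alerts, i.e. the likely worm attack vector.
```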
On the Design and Use of Internet Sinks for Network Abuse Monitoring
- In Proceedings of the 7th International Symposium on Recent Advances in Intrusion Detection (RAID), 2004
"... Monitoring unused or dark IP addresses offers opportunities to significantly improve and expand knowledge of abuse activity without many of the problems associated with typical network intrusion detection and firewall systems. ..."
Cited by 85 (10 self)
Monitoring unused or dark IP addresses offers opportunities to significantly improve and expand knowledge of abuse activity without many of the problems associated with typical network intrusion detection and firewall systems.
CloudAV: N-Version Antivirus in the Network Cloud
"... Antivirus software is one of the most widely used tools for detecting and stopping malicious and unwanted files. However, the long term effectiveness of traditional hostbased antivirus is questionable. Antivirus software fails to detect many modern threats and its increasing complexity has resulted ..."
Cited by 73 (6 self)
Antivirus software is one of the most widely used tools for detecting and stopping malicious and unwanted files. However, the long-term effectiveness of traditional host-based antivirus is questionable. Antivirus software fails to detect many modern threats, and its increasing complexity has resulted in vulnerabilities that are being exploited by malware. This paper advocates a new model for malware detection on end hosts based on providing antivirus as an in-cloud network service. This model enables identification of malicious and unwanted software by multiple, heterogeneous detection engines in parallel, a technique we term ‘N-version protection’. This approach provides several important benefits including better detection of malicious software, enhanced forensics capabilities, retrospective detection, and improved deployability and management. To explore this idea we construct and deploy a production-quality in-cloud antivirus system called CloudAV. CloudAV includes a lightweight, cross-platform host agent and a network service with ten antivirus engines and two behavioral detection engines. We evaluate the performance, scalability, and efficacy of the system using data from a real-world deployment lasting more than six months and a database of 7220 malware samples covering a one-year period. Using this dataset we find that CloudAV provides 35% better detection coverage against recent threats compared to a single antivirus engine and a 98% detection rate across the full dataset. We show that the average length of time to detect new threats by an antivirus engine is 48 days and that retrospective detection can greatly minimize the impact of this delay. Finally, we relate two case studies demonstrating how the forensics capabilities of CloudAV were used by operators during the deployment.
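The aggregation at the heart of N-version protection is simple to sketch: fan a sample out to several engines in parallel and flag it once a threshold of positive verdicts is reached. The engine callables, toy signature checks, and threshold policy below are illustrative, not CloudAV's actual engines:

```python
# N-version scan: run multiple detection engines in parallel and
# combine their verdicts under a configurable threshold.
from concurrent.futures import ThreadPoolExecutor

def n_version_scan(data: bytes, engines, threshold=1):
    """Return (is_malicious, per-engine verdicts)."""
    with ThreadPoolExecutor(max_workers=len(engines)) as pool:
        futures = [(name, pool.submit(fn, data)) for name, fn in engines.items()]
        verdicts = {name: fut.result() for name, fut in futures}
    return sum(verdicts.values()) >= threshold, verdicts

engines = {  # toy stand-in detectors
    "engine_a": lambda d: b"EICAR" in d,
    "engine_b": lambda d: d.startswith(b"MZ") and b"\x90\x90" in d,
    "engine_c": lambda d: len(d) > 0 and d.count(b"\x00") > len(d) // 2,
}
flagged, detail = n_version_scan(b"X5O!...EICAR...", engines)
print(flagged, detail)
```

Raising the threshold trades detection coverage for fewer false positives; keeping every sample's digest also enables the retrospective detection the abstract describes, by re-scanning old samples whenever an engine updates.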
Toward understanding distributed blackhole placement
- In Proceedings of the 2004 ACM Workshop on Rapid Malcode (WORM ’04), 2004
"... The monitoring of unused Internet address space has been shown to be an effective method for characterizing Internet threats including Internet worms and DDOS attacks. Because there are no legitimate hosts in an unused address block, traffic must be the result of misconfiguration, backscatter from s ..."
Cited by 60 (15 self)
The monitoring of unused Internet address space has been shown to be an effective method for characterizing Internet threats including Internet worms and DDoS attacks. Because there are no legitimate hosts in an unused address block, traffic must be the result of misconfiguration, backscatter from spoofed source addresses, or scanning from worms and other probing. This paper extends previous work characterizing traffic seen at specific unused address blocks by examining differences observed between these blocks. While past research has attempted to extrapolate the results from a small number of blocks to represent global Internet traffic, we present evidence that distributed address blocks observe dramatically different traffic patterns. This work uses a network of blackhole sensors which are part of the Internet Motion Sensor (IMS) collection infrastructure. These sensors are deployed in networks belonging to service providers, large enterprises, and academic institutions, representing a diverse sample of the IPv4 address space. We demonstrate differences in traffic observed along three dimensions: over all protocols and services, over a specific protocol and service, and over a particular worm signature. This evidence is then combined with additional experimentation to build a list of sensor properties providing plausible explanations for these differences. Using these properties, we conclude with recommendations for better understanding the implications of sensor placement.
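One simple way to quantify "dramatically different traffic patterns" across sensors is to compare their destination-port distributions, for example with cosine similarity. The sketch below does this over invented sensor data; it is an illustration of the kind of comparison the paper makes, not its methodology:

```python
# Compare two dark-address sensors by the similarity of their
# destination-port traffic distributions.
import math
from collections import Counter

def port_dist(packets):
    counts = Counter(p["dport"] for p in packets)
    total = sum(counts.values())
    return {port: n / total for port, n in counts.items()}

def cosine(d1, d2):
    ports = set(d1) | set(d2)
    dot = sum(d1.get(p, 0) * d2.get(p, 0) for p in ports)
    n1 = math.sqrt(sum(v * v for v in d1.values()))
    n2 = math.sqrt(sum(v * v for v in d2.values()))
    return dot / (n1 * n2)

sensor_a = [{"dport": 445}] * 80 + [{"dport": 135}] * 20
sensor_b = [{"dport": 1433}] * 60 + [{"dport": 445}] * 40
print(f"similarity: {cosine(port_dist(sensor_a), port_dist(sensor_b)):.2f}")
# Low similarity across sensors is the paper's point: one block's view
# does not extrapolate to global Internet traffic.
```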