Results 1 - 10 of 178
Vigilante: End-to-End Containment of Internet Worm Epidemics
, 2008
"... Worm containment must be automatic because worms can spread too fast for humans to respond. Recent work proposed network-level techniques to automate worm containment; these techniques have limitations because there is no information about the vulnerabilities exploited by worms at the network level. ..."
Abstract - Cited by 304 (6 self)
Worm containment must be automatic because worms can spread too fast for humans to respond. Recent work proposed network-level techniques to automate worm containment; these techniques have limitations because there is no information about the vulnerabilities exploited by worms at the network level. We propose Vigilante, a new end-to-end architecture to contain worms automatically that addresses these limitations. In Vigilante, hosts detect worms by instrumenting vulnerable programs to analyze infection attempts. We introduce dynamic data-flow analysis: a broad-coverage host-based algorithm that can detect unknown worms by tracking the flow of data from network messages and disallowing unsafe uses of this data. We also show how to integrate other host-based detection mechanisms into the Vigilante architecture. Upon detection, hosts generate self-certifying alerts (SCAs), a new type of security alert that can be inexpensively verified by any vulnerable host. Using SCAs, hosts can cooperate to contain an outbreak, without having to trust each other. Vigilante broadcasts SCAs over an overlay network that propagates alerts rapidly and resiliently. Hosts receiving an SCA protect themselves by generating filters with vulnerability condition slicing: an algorithm that performs dynamic analysis of the vulnerable program to identify control-flow conditions that lead to successful attacks.
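To make the dynamic data-flow idea concrete, here is a minimal, illustrative taint-tracking sketch in Python (not Vigilante's actual engine, which instruments binaries); the class and method names are invented for the example. Data arriving from the network is marked dirty, dirtiness propagates through derived values, and a dirty value used as a control-flow target is flagged as an infection attempt.

```python
# Illustrative taint-tracking sketch (assumed names; not Vigilante's implementation).

class TaintTracker:
    def __init__(self):
        self.dirty = set()          # ids of values derived from network input

    def from_network(self, value):
        """Mark data received from the network as dirty."""
        self.dirty.add(id(value))
        return value

    def propagate(self, dst, *srcs):
        """If any source of a copy/arithmetic op is dirty, the result is dirty."""
        if any(id(s) in self.dirty for s in srcs):
            self.dirty.add(id(dst))
        return dst

    def check_control_transfer(self, target):
        """Unsafe use: loading dirty data into a control-flow target."""
        if id(target) in self.dirty:
            raise RuntimeError("worm detected: control flow derived from network data")
        return target

tracker = TaintTracker()
msg = tracker.from_network(b"\xde\xad\xbe\xef")                  # attacker-controlled bytes
ret_addr = tracker.propagate(int.from_bytes(msg, "little"), msg)  # taint propagates to the copy
try:
    tracker.check_control_transfer(ret_addr)
except RuntimeError as e:
    print(e)                                                      # infection attempt detected
```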
Modeling Botnet Propagation Using Time Zones
- In Proceedings of the 13th Network and Distributed System Security Symposium (NDSS)
, 2006
"... Time zones play an important and unexplored role in malware epidemics. To understand how time and location affect malware spread dynamics, we studied botnets, or large coordinated collections of victim machines (zombies) controlled by attackers. Over a six month period we observed dozens of botnets ..."
Abstract - Cited by 132 (10 self)
Time zones play an important and unexplored role in malware epidemics. To understand how time and location affect malware spread dynamics, we studied botnets, or large coordinated collections of victim machines (zombies) controlled by attackers. Over a six-month period we observed dozens of botnets representing millions of victims. We noted diurnal properties in botnet activity, which we suspect occurs because victims turn their computers off at night. Through binary analysis, we also confirmed that some botnets demonstrated a bias in infecting regional populations. Clearly, computers that are offline are not infectious, and any regional bias in infections will affect the overall growth of the botnet. We therefore created a diurnal propagation model. The model uses diurnal shaping functions to capture regional variations in online vulnerable populations. The diurnal model also lets one compare propagation rates for different botnets, and prioritize response. Because of variations in release times and diurnal shaping functions particular to an infection, botnets released later in time may actually surpass other botnets that have an advanced start. Since response times for malware outbreaks are now measured in hours, being able to predict short-term propagation dynamics lets us allocate resources more intelligently. We used empirical data from botnets to evaluate the analytical model.
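As a rough illustration of the diurnal idea (not the paper's fitted model), the sketch below modulates a simple epidemic model with an assumed sinusoidal shaping function alpha(t) representing the fraction of a regional population that is online; all parameter values are invented.

```python
# Toy diurnal propagation sketch (assumed shaping function and parameters).
import math

N    = 100_000      # vulnerable hosts in one region
beta = 0.8          # pairwise infection rate per hour (assumed)
I    = 10.0         # initially infected hosts
dt   = 0.1          # time step in hours

def alpha(t_hours):
    """Assumed diurnal shaping function: fraction of hosts online, peaking mid-day."""
    return 0.5 + 0.4 * math.sin(2 * math.pi * (t_hours % 24) / 24)

steps = int(72 / dt)                     # simulate three days
for k in range(steps):
    t = k * dt
    online_susceptible = alpha(t) * (N - I)
    online_infected    = alpha(t) * I
    I = min(N, I + beta * online_infected * online_susceptible / N * dt)
    if (k + 1) % 120 == 0:               # report roughly every 12 simulated hours
        print(f"t={(k + 1) * dt:5.1f}h  infected ~ {I:,.0f}")
```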
Global Intrusion Detection in the DOMINO Overlay System
- In Proceedings of the Network and Distributed System Security Symposium (NDSS)
, 2004
"... Sharing data between widely distributed intrusion detection systems offers the possibility of significant improvements in speed and accuracy over isolated systems. In this paper, we describe and evaluate DOMINO (Distributed Overlay for Monitoring InterNet Outbreaks); an architecture for a distribute ..."
Abstract - Cited by 127 (4 self)
Sharing data between widely distributed intrusion detection systems offers the possibility of significant improvements in speed and accuracy over isolated systems. In this paper, we describe and evaluate DOMINO (Distributed Overlay for Monitoring InterNet Outbreaks), an architecture for a distributed intrusion detection system that fosters collaboration among heterogeneous nodes organized as an overlay network. The overlay design enables DOMINO to be heterogeneous, scalable, and robust to attacks and failures. An important component of DOMINO’s design is the use of active sink nodes which respond to and measure connections to unused IP addresses. This enables efficient detection of attacks from spoofed IP sources, reduces false positives, and enables attack classification and production of timely blacklists. We evaluate the capabilities and performance of DOMINO using a large set of intrusion logs collected from over 1600 providers across the Internet. Our analysis demonstrates the significant marginal benefit obtained from distributed intrusion data sources coordinated through a system like DOMINO. We also evaluate how to configure DOMINO in order to maximize performance gains from the perspectives of blacklist length, blacklist freshness and IP proximity. We perform a retrospective analysis of the 2002 SQL-Snake and 2003 SQL-Slammer epidemics that highlights how information exchange through DOMINO would have reduced the reaction time and false-alarm rates during outbreaks. Finally, we provide preliminary results from our prototype active sink deployment that illustrate the limited variability in the sink traffic and the feasibility of efficient classification and discrimination of attack types.
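As a toy sketch of the kind of data sharing DOMINO enables (not its actual protocol or message formats, which are not described here), the snippet below merges alerts from several sensor nodes into a short, fresh blacklist, ranking source addresses by how many independent nodes reported them within a freshness window.

```python
# Toy blacklist aggregation across distributed sensors (assumed record format).
from collections import defaultdict
from datetime import datetime, timedelta

def build_blacklist(alerts, max_len=10, max_age=timedelta(hours=1), now=None):
    """alerts: iterable of (reporting_node, source_ip, timestamp) tuples."""
    now = now or datetime.utcnow()
    reporters = defaultdict(set)
    for node, src_ip, ts in alerts:
        if now - ts <= max_age:                              # freshness window
            reporters[src_ip].add(node)
    # Rank by number of independent reporting nodes; keep the list short.
    ranked = sorted(reporters.items(), key=lambda kv: len(kv[1]), reverse=True)
    return [ip for ip, nodes in ranked[:max_len] if len(nodes) >= 2]

now = datetime.utcnow()
alerts = [
    ("node-a", "198.51.100.7", now - timedelta(minutes=5)),
    ("node-b", "198.51.100.7", now - timedelta(minutes=3)),
    ("node-a", "203.0.113.9",  now - timedelta(hours=2)),    # too old, ignored
]
print(build_blacklist(alerts))                               # ['198.51.100.7']
```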
Model-based evaluation: From dependability to security
- IEEE Transactions on Dependable and Secure Computing
, 2004
"... The development of techniques for quantitative, model-based evaluation of computer system dependability has a long and rich history. A wide array of model-based evaluation techniques are now available, ranging from combinatorial methods, which are useful for quick, rough-cut analyses, to state-based ..."
Abstract - Cited by 99 (5 self)
The development of techniques for quantitative, model-based evaluation of computer system dependability has a long and rich history. A wide array of model-based evaluation techniques are now available, ranging from combinatorial methods, which are useful for quick, rough-cut analyses, to state-based methods, such as Markov reward models, and detailed, discrete-event simulation. The use of quantitative techniques for security evaluation is much less common, and has typically taken the form of formal analysis of small parts of an overall design, or experimental red team-based approaches. Alone, neither of these approaches is fully satisfactory, and we argue that there is much to be gained through the development of a sound model-based methodology for quantifying the security one can expect from a particular design. In this work, we survey existing model-based techniques for evaluating system dependability, and summarize how they are now being extended to evaluate system security. We find that many techniques from dependability evaluation can be applied in the security domain, but that significant challenges remain, largely due to fundamental differences between the accidental nature of the faults commonly assumed in dependability evaluation, and the intentional, human nature of cyber attacks.
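As a tiny worked example of the state-based methods mentioned above (my own illustration, not from the survey): a two-state Markov model with an assumed failure rate lam and repair rate mu, solved numerically for its steady-state distribution and an availability reward, which matches the closed form mu / (lam + mu).

```python
# Minimal Markov availability sketch (illustrative rates, not from the paper).
import numpy as np

lam, mu = 1 / 1000.0, 1 / 8.0        # failure rate and repair rate, per hour (assumed)

# Generator matrix for a 2-state CTMC: state 0 = up, state 1 = down.
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])

# Steady-state distribution pi satisfies pi @ Q = 0 with pi summing to 1.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

reward = np.array([1.0, 0.0])        # reward 1 when up, 0 when down
print("availability =", pi @ reward)          # numeric steady-state reward
print("closed form  =", mu / (lam + mu))      # matches the analytic result
```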
An Advanced Hybrid Peer-to-Peer Botnet
- In Proceedings of the First Workshop on Hot Topics in Understanding Botnets
, 2007
"... ..."
(Show Context)
Worm Propagation Modeling and Analysis under Dynamic Quarantine Defense
, 2003
"... Due to the fast spreading nature and great damage of Internet worms, it is necessary to implement automatic mitigation, such as dynamic quarantine, on computer networks. Enlightened by the methods used in epidemic disease control in the real world, we present a dynamic quarantine method based on the ..."
Abstract - Cited by 89 (11 self)
Due to the fast spreading nature and great damage of Internet worms, it is necessary to implement automatic mitigation, such as dynamic quarantine, on computer networks. Enlightened by the methods used in epidemic disease control in the real world, we present a dynamic quarantine method based on the principle "assume guilty before proven innocent" --- we quarantine a host whenever its behavior looks suspicious by blocking traffic on its anomaly port. Then we will release the quarantine after a short time, even if the host has not been inspected by security staff yet. We present mathematical analysis of three worm propagation models under this dynamic quarantine method. The analysis shows that the dynamic quarantine can reduce a worm's propagation speed, which can give us precious time to fight against a worm before it is too late. Furthermore, the dynamic quarantine will raise a worm's epidemic threshold, thus it will reduce the chance for a worm to spread out. The simulation results verify our analysis and demonstrate the effectiveness of the dynamic quarantine defense.
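A crude numeric sketch of the effect being analyzed (assumed rates and a deliberately simplified model, not the paper's equations): infected hosts are quarantined at some rate and released shortly afterwards regardless of inspection, which keeps only a fraction of them infectious at any moment and slows the spread.

```python
# Toy worm model with dynamic quarantine (assumed rates; illustration only).
N      = 100_000   # vulnerable hosts
beta   = 0.8       # infection rate without quarantine, per hour
q_rate = 2.0       # rate at which suspicious hosts are quarantined, per hour
r_rate = 6.0       # rate at which quarantined hosts are released, per hour
dt     = 0.01      # time step in hours

I, Q_inf = 10.0, 0.0                     # infectious hosts, quarantined infected hosts
for step in range(int(48 / dt)):         # simulate two days
    S = N - I - Q_inf
    new_infections = beta * I * S / N * dt
    to_quarantine  = q_rate * I * dt
    released       = r_rate * Q_inf * dt
    I     += new_infections - to_quarantine + released
    Q_inf += to_quarantine - released
    if (step + 1) % int(12 / dt) == 0:   # report every 12 simulated hours
        print(f"t={(step + 1) * dt:4.0f}h  infectious ~ {I:,.0f}  quarantined ~ {Q_inf:,.0f}")
```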
Detecting targeted attacks using shadow honeypots
- In Proceedings of the 14th USENIX Security Symposium
, 2005
"... We present Shadow Honeypots, a novel hybrid architecture that combines the best features of honeypots and anomaly detection. At a high level, we use a variety of anomaly detectors to monitor all traffic to a protected network/service. Traffic that is considered anomalous is processed by a “shadow ho ..."
Abstract - Cited by 89 (17 self)
We present Shadow Honeypots, a novel hybrid architecture that combines the best features of honeypots and anomaly detection. At a high level, we use a variety of anomaly detectors to monitor all traffic to a protected network/service. Traffic that is considered anomalous is processed by a “shadow honeypot” to determine the accuracy of the anomaly prediction. The shadow is an instance of the protected software that shares all internal state with a regular (“production”) instance of the application, and is instrumented to detect potential attacks. Attacks against the shadow are caught, and any incurred state changes are discarded. Legitimate traffic that was misclassified will be validated by the shadow and will be handled correctly by the system transparently to the end user. The outcome of processing a request by the shadow is used to filter future attack instances and could be used to update the anomaly detector. Our architecture allows system designers to fine-tune systems for performance, since false positives will be filtered by the shadow. Contrary to regular honeypots, our architecture can be used both for server and client applications. We demonstrate the feasibility of our approach in a proof-of-concept implementation of the Shadow Honeypot architecture for the Apache web server and the Mozilla Firefox browser. We show that despite a considerable overhead in the instrumentation of the shadow honeypot (up to 20% for Apache), the overall impact on the system is diminished by the ability to minimize the rate of false positives.
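A schematic sketch of the routing logic described above (the handlers and state-copy scheme are stand-ins, not the authors' implementation): anomalous requests are diverted to an instrumented shadow copy whose state changes are thrown away when an attack is caught, while misclassified legitimate requests still receive a correct response.

```python
# Schematic shadow-honeypot request routing; the handlers below are invented stand-ins.
import copy

class AttackCaught(Exception):
    """Raised by the instrumented shadow when a memory-violation check fires."""

def production_process(request, state):
    state["hits"] = state.get("hits", 0) + 1
    return f"ok: {request!r}"

def instrumented_process(request, state):
    # Stand-in for the instrumented copy of the application (e.g. overflow checks enabled).
    if b"\x90\x90" in request:                       # pretend a NOP sled means a caught attack
        raise AttackCaught()
    return production_process(request, state)

def handle_request(request, app_state, anomaly_score, threshold=0.9):
    if anomaly_score < threshold:
        return production_process(request, app_state)    # fast path, no instrumentation
    shadow_state = copy.deepcopy(app_state)              # shadow works on a throwaway copy
    try:
        response = instrumented_process(request, shadow_state)
    except AttackCaught:
        return None                                      # attack: discard shadow state changes
    app_state.update(shadow_state)                       # legitimate but anomalous: keep changes
    return response

state = {}
print(handle_request(b"GET /", state, anomaly_score=0.1))          # fast path
print(handle_request(b"\x90\x90...", state, anomaly_score=0.95))   # caught by the shadow -> None
```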
Collapsar: A VM-Based Architecture for Network Attack Detention Center
- In Proceedings of the 13th USENIX Security Symposium
, 2004
"... The honeypot has emerged as an effective tool to provide insights into new attacks and current exploitation trends. Though effective, a single honeypot or multiple independently operated honeypots only provide a limited local view of network attacks. Deploying and managing a large number of coordina ..."
Abstract - Cited by 86 (13 self)
The honeypot has emerged as an effective tool to provide insights into new attacks and current exploitation trends. Though effective, a single honeypot or multiple independently operated honeypots only provide a limited local view of network attacks. Deploying and managing a large number of coordinating honeypots in different network domains will not only provide a broader and more diverse view, but also create potential in global network status inference, early network anomaly detection, and attack correlation at large scale. However, coordinated honeypot deployment and operation require close and consistent collaboration across participating network domains, in order to mitigate potential security risks associated with each honeypot and the non-uniform level of security expertise in different domains.
HoneyStat: Local Worm Detection Using Honeypots
- In Proceedings of the 7th International Symposium on Recent Advances in Intrusion Detection (RAID)
, 2004
"... Abstract. Worm detection systems have traditionally used global strategies and focused on scan rates. The noise associated with this approach requires statistical techniques and large data sets (e.g., monitored machines) to avoid false positives. Worm detection techniques for smaller local networks ..."
Abstract - Cited by 86 (5 self)
Worm detection systems have traditionally used global strategies and focused on scan rates. The noise associated with this approach requires statistical techniques and large data sets (e.g., monitored machines) to avoid false positives. Worm detection techniques for smaller local networks have not been fully explored. We consider how local networks can provide early detection and complement global monitoring strategies. We describe HoneyStat, which uses modified honeypots to generate a highly accurate alert stream with low false positive rates. Unlike traditional honeypots, HoneyStat nodes are minimal, script-driven and cover a large IP space. The HoneyStat nodes generate three classes of alerts: memory alerts (based on buffer overflow detection and process management), disk write alerts (such as writes to registry keys and critical files) and network alerts. Data collection is automated, and once an alert is issued, a time segment of previous traffic to the node is analyzed. A logit analysis determines what previous network activity explains the current honeypot alert. The result can indicate whether an automated or worm attack is present. We demonstrate HoneyStat’s improvements over previous worm detection techniques. First, using trace files from worm attacks on small networks, we demonstrate how it detects zero-day worms. Second, we show how it detects multi-vector worms that use combinations of ports to attack. Third, the alerts from HoneyStat provide more information than traditional IDS alerts, such as binary signatures, attack vectors, and attack rates. We also use extensive (year-long) trace files to show how the logit analysis produces very low false positive rates.
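To make the logit step concrete, here is a minimal sketch (invented features and data, assuming scikit-learn is available; this is not the paper's feature set) that fits a logistic model relating recent per-port activity to whether a honeypot alert fired.

```python
# Minimal logit sketch relating prior port activity to honeypot alerts (invented data/features).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [connections to port 445, connections to port 80] in the window before an alert.
X = np.array([[12, 0], [9, 1], [11, 0], [0, 3], [1, 4], [0, 2], [10, 1], [0, 5]])
y = np.array([1, 1, 1, 0, 0, 0, 1, 0])        # 1 = memory/disk alert fired on the honeypot

model = LogisticRegression().fit(X, y)
print("coefficients:", model.coef_)            # which port activity explains the alerts
print("P(alert | 8 hits on 445):", model.predict_proba([[8, 0]])[0, 1])
```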
A Taxonomy of Botnet Structures
- In Proceedings of the 23rd Annual Computer Security Applications Conference (ACSAC'07)
, 2007
"... We propose a taxonomy of botnet structures, based on their utility to the botmaster. We propose key metrics to measure their utility for various activities (e.g., spam, ddos). Using the performance metrics, we consider the ability of different response techniques to degrade or disrupt botnets. In pa ..."
Abstract - Cited by 81 (4 self)
We propose a taxonomy of botnet structures, based on their utility to the botmaster. We propose key metrics to measure their utility for various activities (e.g., spam, DDoS). Using the performance metrics, we consider the ability of different response techniques to degrade or disrupt botnets. In particular, our models show that for scale-free botnets, targeted responses are particularly effective. Further, botmasters’ efforts to improve the robustness of scale-free networks come at a cost of diminished transitivity. Botmasters do not appear to have any structural solutions to this problem in scale-free networks. We also show that random graph botnets (e.g., those using P2P formations) are highly resistant to both random and targeted responses. We evaluate the impact of responses on different topologies using simulation. We also perform some novel measurements of a P2P network to demonstrate the utility of our proposed metrics. Our analysis shows how botnets may be classified according to structure, and given rank or priority using our proposed metrics. This may help direct responses, and suggests which general remediation strategies are more likely to succeed.
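A small sketch of the kind of structural comparison the paper describes (graph sizes, removal fraction, and the giant-component metric are my choices, not the paper's): generate a scale-free and a random-graph "botnet" with networkx and compare how the largest connected component shrinks under random versus targeted (highest-degree) node removal.

```python
# Toy comparison of botnet topologies under random vs. targeted response (assumed sizes).
import random
import networkx as nx

def surviving_fraction(g, remove_frac=0.05, targeted=False):
    """Fraction of nodes left in the largest connected component after removals."""
    g = g.copy()
    n0 = g.number_of_nodes()
    k = int(remove_frac * n0)
    if targeted:
        victims = sorted(g.degree, key=lambda nd: nd[1], reverse=True)[:k]
        g.remove_nodes_from(n for n, _ in victims)      # take out the highest-degree hubs
    else:
        g.remove_nodes_from(random.sample(list(g.nodes), k))
    return len(max(nx.connected_components(g), key=len)) / n0

scale_free = nx.barabasi_albert_graph(10_000, 2)        # preferential-attachment botnet
p2p_like   = nx.gnm_random_graph(10_000, 20_000)        # random graph with similar edge count

for name, g in [("scale-free", scale_free), ("random graph", p2p_like)]:
    print(name,
          "| random response:",  round(surviving_fraction(g), 3),
          "| targeted response:", round(surviving_fraction(g, targeted=True), 3))
```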