Results 1 - 10 of 94
Mining anomalies using traffic feature distributions
- In ACM SIGCOMM, 2005
"... The increasing practicality of large-scale flow capture makes it possible to conceive of traffic analysis methods that detect and identify a large and diverse set of anomalies. However the challenge of effectively analyzing this massive data source for anomaly diagnosis is as yet unmet. We argue tha ..."
Abstract
-
Cited by 322 (8 self)
- Add to MetaCart
(Show Context)
The increasing practicality of large-scale flow capture makes it possible to conceive of traffic analysis methods that detect and identify a large and diverse set of anomalies. However, the challenge of effectively analyzing this massive data source for anomaly diagnosis is as yet unmet. We argue that the distributions of packet features (IP addresses and ports) observed in flow traces reveal both the presence and the structure of a wide range of anomalies. Using entropy as a summarization tool, we show that the analysis of feature distributions leads to significant advances on two fronts: (1) it enables highly sensitive detection of a wide range of anomalies, augmenting detections by volume-based methods, and (2) it enables automatic classification of anomalies via unsupervised learning. We show that using feature distributions, anomalies naturally fall into distinct and meaningful clusters. These clusters can be used to automatically classify anomalies and to uncover new anomaly types. We validate our claims on data from two backbone networks (Abilene and Géant) and conclude that feature distributions show promise as a key element of a fairly general network anomaly diagnosis framework.
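A minimal sketch of the entropy summarization described above (not the authors' code; the four-tuple flow format and the normalization are assumptions):

```python
import math
from collections import Counter

def normalized_entropy(values):
    """Sample entropy of a feature distribution, scaled to [0, 1]."""
    counts = Counter(values)
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    max_h = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return h / max_h

def feature_entropies(flows):
    """flows: iterable of (src_ip, dst_ip, src_port, dst_port) tuples."""
    flows = list(flows)
    return {
        "src_ip":   normalized_entropy([f[0] for f in flows]),
        "dst_ip":   normalized_entropy([f[1] for f in flows]),
        "src_port": normalized_entropy([f[2] for f in flows]),
        "dst_port": normalized_entropy([f[3] for f in flows]),
    }
```

Different anomalies leave opposite signatures in these four numbers: a port scan aimed at one host collapses destination-IP entropy while destination-port entropy rises, which is the kind of structure the paper's clustering step exploits.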
Vigilante: End-to-End Containment of Internet Worm Epidemics
, 2008
"... Worm containment must be automatic because worms can spread too fast for humans to respond. Recent work proposed network-level techniques to automate worm containment; these techniques have limitations because there is no information about the vulnerabilities exploited by worms at the network level. ..."
Abstract
-
Cited by 304 (6 self)
- Add to MetaCart
Worm containment must be automatic because worms can spread too fast for humans to respond. Recent work proposed network-level techniques to automate worm containment; these techniques have limitations because there is no information about the vulnerabilities exploited by worms at the network level. We propose Vigilante, a new end-to-end architecture to contain worms automatically that addresses these limitations. In Vigilante, hosts detect worms by instrumenting vulnerable programs to analyze infection attempts. We introduce dynamic data-flow analysis: a broad-coverage host-based algorithm that can detect unknown worms by tracking the flow of data from network messages and disallowing unsafe uses of this data. We also show how to integrate other host-based detection mechanisms into the Vigilante architecture. Upon detection, hosts generate self-certifying alerts (SCAs), a new type of security alert that can be inexpensively verified by any vulnerable host. Using SCAs, hosts can cooperate to contain an outbreak, without having to trust each other. Vigilante broadcasts SCAs over an overlay network that propagates alerts rapidly and resiliently. Hosts receiving an SCA protect themselves by generating filters with vulnerability condition slicing: an algorithm that performs dynamic analysis of the vulnerable program to identify control-flow conditions that lead to successful attacks.
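The detector's core, dynamic data-flow analysis, is a form of taint tracking. A toy Python illustration under that reading (Vigilante itself instruments x86 binaries; every name here is invented for the sketch):

```python
class TaintTracker:
    """Toy dynamic data-flow analysis: taint network input, propagate it
    through copies, and flag unsafe uses such as tainted jump targets."""

    def __init__(self):
        self.dirty = set()                 # addresses holding network data

    def recv(self, addr, length):
        """Network input arrives: mark the destination buffer dirty."""
        self.dirty.update(range(addr, addr + length))

    def move(self, dst, src):
        """A copy propagates taint; a clean overwrite clears it."""
        if src in self.dirty:
            self.dirty.add(dst)
        else:
            self.dirty.discard(dst)

    def jump(self, target_addr):
        """Control transfer through dirty data is an infection attempt."""
        if target_addr in self.dirty:
            raise RuntimeError("unsafe use of network data as jump target")
```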
Detecting targeted attacks using shadow honeypots
- In Proceedings of the 14th USENIX Security Symposium, 2005
"... We present Shadow Honeypots, a novel hybrid architecture that combines the best features of honeypots and anomaly detection. At a high level, we use a variety of anomaly detectors to monitor all traffic to a protected network/service. Traffic that is considered anomalous is processed by a “shadow ho ..."
Abstract
-
Cited by 89 (17 self)
- Add to MetaCart
(Show Context)
We present Shadow Honeypots, a novel hybrid architecture that combines the best features of honeypots and anomaly detection. At a high level, we use a variety of anomaly detectors to monitor all traffic to a protected network/service. Traffic that is considered anomalous is processed by a “shadow honeypot” to determine the accuracy of the anomaly prediction. The shadow is an instance of the protected software that shares all internal state with a regular (“production”) instance of the application, and is instrumented to detect potential attacks. Attacks against the shadow are caught, and any incurred state changes are discarded. Legitimate traffic that was misclassified will be validated by the shadow and will be handled correctly by the system, transparently to the end user. The outcome of processing a request by the shadow is used to filter future attack instances and could be used to update the anomaly detector. Our architecture allows system designers to fine-tune systems for performance, since false positives will be filtered by the shadow. Contrary to regular honeypots, our architecture can be used both for server and client applications. We demonstrate the feasibility of our approach in a proof-of-concept implementation of the Shadow Honeypot architecture for the Apache web server and the Mozilla Firefox browser. We show that despite a considerable overhead in the instrumentation of the shadow honeypot (up to 20% for Apache), the overall impact on the system is diminished by the ability to minimize the rate of false positives.
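A schematic of the request path the abstract describes, with hypothetical names (detector, shadow, production, attack_filter) standing in for the real components:

```python
def handle(request, detector, shadow, production, attack_filter):
    # Known attack instances are filtered before reaching either copy.
    if request.signature in attack_filter:
        return None
    # Anomalous traffic is diverted to the instrumented shadow instance.
    if detector.score(request) > detector.threshold:
        verdict, response = shadow.process(request)
        if verdict == "attack":
            attack_filter.add(request.signature)   # filter future instances
            shadow.rollback()                      # discard state changes
            return None
        # False positive: the shadow's output is valid, the user sees a
        # correct response, and the detector could be retrained here.
        return response
    # Everything else takes the fast path through the production instance.
    return production.process(request)
```

The design point is that the anomaly detector's threshold can be tuned aggressively, since misclassified legitimate requests are corrected by the shadow rather than dropped.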
Research Challenges for the Security of Control Systems
"... In this paper we attempt to answer two questions: (1) Why should we be interested in the security of control systems? And (2) What are the new and fundamentally different requirements and problems for the security of control systems? We also propose a new mathematical framework to analyze attacks ag ..."
Abstract
-
Cited by 69 (4 self)
- Add to MetaCart
(Show Context)
In this paper we attempt to answer two questions: (1) Why should we be interested in the security of control systems? And (2) What are the new and fundamentally different requirements and problems for the security of control systems? We also propose a new mathematical framework to analyze attacks against control systems. Within this framework we formulate specific research problems to (1) detect attacks, and (2) survive attacks.
Monitoring and Early Detection for Internet Worms
- IEEE/ACM Transactions on Networking
"... After several Internet-scale worm incidents in recent years, it is clear that a simple self-propagating worm can quickly spread across the Internet and cause severe damage to our society. Facing this great security threat, we must build an early detection system to detect the presence of a worm as q ..."
Abstract
-
Cited by 66 (2 self)
- Add to MetaCart
(Show Context)
After several Internet-scale worm incidents in recent years, it is clear that a simple self-propagating worm can quickly spread across the Internet and cause severe damage to our society. Facing this great security threat, we must build an early detection system to detect the presence of a worm as quickly as possible in order to give people enough time for counteractions. In this paper, we first present an Internet worm monitoring system. Then, based on the idea of "detecting the trend, not the burst" of monitored illegitimate traffic, we present a non-threshold-based "trend detection" methodology to detect a worm at its early stage by using Kalman filter estimation. In addition, for uniform scan worms such as Code Red and Slammer, we can effectively predict the overall vulnerable population size, and estimate accurately how many computers are really infected in the global Internet based on the biased monitored data. For monitoring of non-uniform scan worms such as Blaster, we show that the address space covered by a monitoring system should be as distributed as possible.
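A toy rendering of the trend-detection idea: model early growth as I_{t+1} = (1 + alpha) I_t and estimate alpha with a scalar Kalman filter, alarming when the estimate settles at a clearly positive value. The noise constants and alarm rule are illustrative assumptions, not the paper's exact filter:

```python
def estimate_growth_rate(counts, q=1e-4, r=0.05):
    """counts: per-interval infected-host observations from the monitor.
    Returns the running Kalman estimate of the growth rate alpha."""
    alpha, p = 0.0, 1.0                  # state estimate and its variance
    estimates = []
    for prev, cur in zip(counts, counts[1:]):
        if prev <= 0:
            continue
        z = cur / prev - 1.0             # noisy measurement of alpha
        p += q                           # predict (random-walk state model)
        k = p / (p + r)                  # Kalman gain
        alpha += k * (z - alpha)         # update with the new measurement
        p *= 1 - k
        estimates.append(alpha)
    return estimates

def worm_alarm(estimates, window=5, floor=0.05):
    """Trend, not burst: alarm only when the estimate stays positive."""
    tail = estimates[-window:]
    return len(tail) == window and all(a > floor for a in tail)
```

Because the alarm keys on a stable positive growth rate rather than a traffic threshold, a short burst of background noise does not trigger it.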
Routing worm: A fast, selective attack worm based on IP address information
, 2003
"... Most well-known Internet worms, such as Code Red, Slammer, and Blaster, infected vulnerable computers by scanning the entire Internet IPv4 space. In this paper, we present a new scan-based worm called “routing worm”, which can use information provided by BGP routing tables to reduce its scanning spa ..."
Abstract
-
Cited by 48 (5 self)
- Add to MetaCart
(Show Context)
Most well-known Internet worms, such as Code Red, Slammer, and Blaster, infected vulnerable computers by scanning the entire Internet IPv4 space. In this paper, we present a new scan-based worm called “routing worm”, which can use information provided by BGP routing tables to reduce its scanning space without ignoring any potential vulnerable computer. In this way, a routing worm can propagate twice to more than three times faster than a traditional worm. In addition, the geographic information of allocated IP addresses, especially BGP routing prefixes, enables a routing worm to conduct fine-grained selective attacks: hackers or terrorists can selectively impose heavy damage on vulnerable computers in a specific country, an Internet Service Provider, or an Autonomous System, without much collateral damage done to others. Routing worms can be easily implemented by attackers and could cause considerable damage to our Internet. Since routing worms are scan-based worms, we believe that an effective way to defend against them and all other scan-based worms is to upgrade IPv4 to IPv6: the vast address space of IPv6 (2^64 IP addresses for a single subnetwork) can prevent a worm from spreading through scanning.
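The scan-space reduction is straightforward to sketch: draw targets uniformly from announced prefixes rather than from all of IPv4. The prefix list below uses documentation ranges as a placeholder, not a real BGP table:

```python
import ipaddress
import random

# Placeholder prefixes standing in for addresses announced in BGP.
ANNOUNCED = [ipaddress.ip_network(p)
             for p in ("192.0.2.0/24", "198.51.100.0/24")]

def random_routable_address(prefixes):
    """Uniform draw over announced addresses: weight prefixes by size."""
    sizes = [p.num_addresses for p in prefixes]
    net = random.choices(prefixes, weights=sizes, k=1)[0]
    return net[random.randrange(net.num_addresses)]

# The speedup claimed in the abstract comes from shrinking the scan space
# from 2**32 addresses to sum(sizes) without excluding any reachable host.
```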
Worm detection, early warning and response based on local victim information
- In Proceedings of the 20th Annual Computer Security Applications Conference, 2004
"... Worm detection systems have traditionally focused on global strategies. In the absence of a global worm detection system, we examine the effectiveness of local worm detection and response strategies. This paper makes three contributions: (1) We propose a simple two-phase local worm victim detection ..."
Abstract
-
Cited by 44 (7 self)
- Add to MetaCart
(Show Context)
Worm detection systems have traditionally focused on global strategies. In the absence of a global worm detection system, we examine the effectiveness of local worm detection and response strategies. This paper makes three contributions: (1) We propose a simple two-phase local worm victim detection algorithm, DSC (Destination-Source Correlation), based on worm behavior in terms of both infection pattern and scanning pattern. DSC can detect zero-day scanning worms with a high detection rate and a very low false positive rate. (2) We demonstrate the effectiveness of early worm warning based on local victim information. For example, warning occurs with 0.19% infection of all vulnerable hosts on the Internet when using a /12 monitored network. (3) Based on local victim information, we investigate and evaluate the effectiveness of an automatic real-time local response in terms of slowing down global Internet worm propagation. (2) and (3) are general results, not specific to a particular detection algorithm such as DSC. We demonstrate (2) and (3) with both analytical models and packet-level network simulator experiments.
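A minimal sketch of the two-phase DSC correlation; the flow-event format and the fanout threshold are assumptions:

```python
from collections import defaultdict

class DSCDetector:
    """Two-phase sketch: a host hit on a port that later fans out on the
    same port matches the infection-then-scan pattern DSC looks for."""

    def __init__(self, scan_threshold=20):
        self.recently_hit = {}                 # (host, port) -> time hit
        self.out_counts = defaultdict(int)     # (host, port) -> fanout
        self.scan_threshold = scan_threshold

    def on_flow(self, t, src, dst, dport):
        # Phase 1: dst was contacted on dport (a possible infection).
        self.recently_hit[(dst, dport)] = t
        # Phase 2: if src was itself hit on dport earlier, count its fanout.
        key = (src, dport)
        if key in self.recently_hit and self.recently_hit[key] < t:
            self.out_counts[key] += 1
            if self.out_counts[key] >= self.scan_threshold:
                return f"suspected victim: {src} scanning port {dport}"
        return None
```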
Worm Origin Identification Using Random Moonwalks
- In IEEE Symposium on Security and Privacy, 2005
"... We propose a novel technique that can determine both the host responsible for originating a propagating worm attack and the set of attack flows that make up the initial stages of the attack tree via which the worm infected successive generations of victims. We argue that knowledge of both is importa ..."
Abstract
-
Cited by 43 (13 self)
- Add to MetaCart
(Show Context)
We propose a novel technique that can determine both the host responsible for originating a propagating worm attack and the set of attack flows that make up the initial stages of the attack tree via which the worm infected successive generations of victims. We argue that knowledge of both is important for combating worms: knowledge of the origin supports law enforcement, and knowledge of the causal flows that advance the attack supports diagnosis of how network defenses were breached. Our technique exploits the “wide tree” shape of a worm propagation emanating from the source by performing random “moonwalks” backward in time along paths of flows. Correlating the repeated walks reveals the initial causal flows, thereby aiding in identifying the source. Using analysis, simulation, and experiments with real-world traces, we show how the technique works against both today’s fast-propagating worms and stealthy worms that attempt to hide their attack flows among background traffic.
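A compact sketch of the moonwalk sampling itself (the flow format and time window are assumptions; the real system correlates many walks over large traces far more efficiently):

```python
import random
from collections import Counter

def moonwalks(flows, walks=10_000, window=30.0):
    """flows: list of (start_time, src, dst). Repeatedly walk backwards in
    time from a random flow to one that arrived at the current source just
    before it; flows on the worm's causal tree are traversed far more often
    than background traffic."""
    hits = Counter()
    for _ in range(walks):
        cur = random.choice(flows)
        while True:
            t, src, _ = cur
            preds = [f for f in flows
                     if f[2] == src and t - window <= f[0] < t]
            if not preds:
                break
            cur = random.choice(preds)
            hits[cur] += 1
    return hits.most_common(10)        # candidate initial causal flows
```

Each step moves strictly backward in time, so every walk terminates, and walks from anywhere in the "wide tree" funnel toward the same few early flows near the origin.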
Hit-List Worm Detection and Bot Identification in Large Networks Using Protocol Graphs
- In Proceedings of the 10th International Symposium on Recent Advances in Intrusion Detection (RAID), 2007
"... Abstract. We present a novel method for detecting hit-list worms using protocol graphs. In a protocol graph, a vertex represents a single IP address, and an edge represents communications between those addresses using a specific protocol (e.g., HTTP). We show that the protocol graphs of four diverse ..."
Abstract
-
Cited by 24 (6 self)
- Add to MetaCart
(Show Context)
We present a novel method for detecting hit-list worms using protocol graphs. In a protocol graph, a vertex represents a single IP address, and an edge represents communications between those addresses using a specific protocol (e.g., HTTP). We show that the protocol graphs of four diverse and representative protocols (HTTP, FTP, SMTP, and Oracle), as constructed from monitoring for fixed durations on a large intercontinental network, exhibit stable graph sizes and largest connected component sizes. Moreover, we demonstrate that worm propagations, even of a sophisticated hit-list variety in which the attacker has advance knowledge of his targets and always connects successfully, perturb these properties. We demonstrate that these properties can be monitored very efficiently even in very large networks, giving rise to a viable and novel approach for worm detection. We also demonstrate extensions by which the attacking hosts (bots) can be identified with high accuracy.
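A sketch of the per-window graph statistics using union-find, which is what makes this kind of monitoring cheap at scale (the alarm thresholds are assumptions):

```python
from collections import Counter

def graph_stats(edges):
    """edges: (ip_a, ip_b) pairs seen in one observation window.
    Returns (vertex count, size of the largest connected component)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]       # path halving
            x = parent[x]
        return x

    for a, b in edges:
        parent[find(a)] = find(b)               # union the two components

    comps = Counter(find(v) for v in parent)
    return len(parent), max(comps.values(), default=0)

# Assumed alarm rule, per the paper's idea: flag a window whose vertex count
# or largest-component size departs from its historically stable range.
```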
Host-Based Detection of Worms through Peer-to-Peer Cooperation
, 2005
"... ... that achieves negligible risk of false positives through peer-to-peer cooperation. We view correlation among otherwise independent peers' behavior as anomalous behavior, indication of a fast-spreading worm. We detect correlation by exploiting worms' temporal consistency, similarity (lo ..."
Abstract
-
Cited by 19 (0 self)
- Add to MetaCart
... that achieves negligible risk of false positives through peer-to-peer cooperation. We view correlation among otherwise independent peers' behavior as anomalous behavior, an indication of a fast-spreading worm. We detect correlation by exploiting worms' temporal consistency: similarity (low temporal variance) in worms' invocations of system calls. We evaluate our ideas on Windows XP with Service Pack 2 using traces of nine variants of worms and twenty-five non-worms, including ten commercial applications and fifteen processes native to the platform. We find that two peers, upon exchanging snapshots of their internal behavior, defined with frequency distributions of system calls, can decide that they are, more likely than not, executing a worm between 76% and 97% of the time. More importantly, we find that the probability that peers might err, judging a non-worm a worm, is negligible.
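A sketch of the snapshot comparison: each peer summarizes recent behavior as a system-call frequency distribution, and near-identical profiles on independent machines are treated as a worm indicator. The distance measure and threshold are assumptions for the sketch:

```python
import math
from collections import Counter

def syscall_snapshot(calls):
    """Summarize recent behavior as a system-call frequency distribution."""
    counts = Counter(calls)
    total = sum(counts.values())
    return {name: c / total for name, c in counts.items()}

def hellinger(p, q):
    """Distance between two frequency distributions, in [0, 1]."""
    names = set(p) | set(q)
    s = sum((math.sqrt(p.get(n, 0.0)) - math.sqrt(q.get(n, 0.0))) ** 2
            for n in names)
    return math.sqrt(s / 2)

def looks_like_worm(mine, theirs, threshold=0.1):
    # Independent workloads rarely match closely; near-identical profiles
    # across peers suggest both machines are running the same injected code.
    return hellinger(mine, theirs) < threshold
```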