Results 1-10 of 201
Denial of Service Resilience in Ad Hoc Networks. In Proc. of ACM MobiCom, 2004.
Abstract (Cited by 82, 4 self):
Significant progress has been made towards making ad hoc networks secure and DoS resilient. However, little attention has been focused on quantifying DoS resilience: Do ad hoc networks have sufficiently redundant paths and counter-DoS mechanisms to make DoS attacks largely ineffective? Or are there attack and system factors that can lead to devastating effects? In this paper, we design and study DoS attacks in order to assess the damage that difficult-to-detect attackers can cause. The first attack we study, called the JellyFish attack, is targeted against closed-loop flows such as TCP; although protocol compliant, it has devastating effects. The second is the Black Hole attack, which has effects similar to the JellyFish, but on open-loop flows. We quantify via simulations and analytical modeling the scalability of DoS attacks as a function of key performance parameters such as mobility, system size, node density, and counter-DoS strategy. One perhaps surprising result is that such DoS attacks can increase the capacity of ad hoc networks, as they starve multi-hop flows and only allow one-hop communication, a capacity-maximizing, yet clearly undesirable, situation.
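The JellyFish attack's key property is that it stays protocol compliant while crippling TCP, e.g. by delaying or reordering packets it is asked to forward. Below is a toy sketch of the reordering variant (my own illustration, not the paper's simulator; all names and parameters are made up):

```python
import random

def jellyfish_relay(packets, max_delay, rng):
    """Reordering JellyFish sketch: a malicious relay forwards every
    packet (protocol compliant) but holds each one for a random extra
    delay, scrambling arrival order for closed-loop flows like TCP."""
    # (arrival_time, seq) -> departure time = arrival + random hold time
    departures = [(t + rng.uniform(0, max_delay), seq) for t, seq in packets]
    departures.sort()  # the receiver sees packets in departure-time order
    return [seq for _, seq in departures]

def reordering_fraction(seqs):
    """Fraction of adjacent packet pairs delivered out of order."""
    inversions = sum(1 for a, b in zip(seqs, seqs[1:]) if a > b)
    return inversions / max(1, len(seqs) - 1)

rng = random.Random(7)
packets = [(i * 1.0, i) for i in range(50)]    # in-order packets, 1 s apart
honest = jellyfish_relay(packets, 0.0, rng)     # well-behaved relay
attacked = jellyfish_relay(packets, 10.0, rng)  # large random holds
```

TCP misreads the resulting reordering as loss and collapses its window, which is why the attack is devastating despite never dropping a packet.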
DSSS-based Flow Marking Technique for Invisible Traceback. In Proceedings of IEEE Symposium on Security and Privacy (S&P), 2007.
Abstract (Cited by 69, 26 self):
Law enforcement agencies need the ability to conduct electronic surveillance to combat crime, terrorism, or other malicious activities exploiting the Internet. However, the proliferation of anonymous communication systems on the Internet has posed significant challenges to providing such traceback capability. In this paper, we develop a new class of flow marking technique for invisible traceback based on Direct Sequence Spread Spectrum (DSSS), utilizing a Pseudo-Noise (PN) code. By interfering with a sender's traffic and marginally varying its rate, an investigator can embed a secret spread spectrum signal into the sender's traffic. The embedded signal is carried along with the traffic from the sender to the receiver, so the investigator can recognize the corresponding communication relationship, tracing the messages despite the use of anonymous networks. The secret PN code makes it difficult for others to detect the presence of such embedded signals, so the traceback, while available to investigators, is effectively invisible. We demonstrate a practical flow marking system which requires no training and can achieve both high detection rates and low false positive rates. Using a combination of analytical modeling, simulations, and experiments on Tor (a popular Internet anonymous communication system), we demonstrate the effectiveness of the DSSS-based flow marking technique.
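The DSSS marking idea can be illustrated with a toy spread-spectrum encoder/decoder over traffic rates: each mark bit is spread across the PN chips as small rate perturbations, and the investigator recovers it by correlating observed rates with the same PN code. This is only a noiseless sketch of the principle; the PN code, rates, and delta below are invented:

```python
def embed(bits, pn_code, base_rate, delta):
    """Spread each mark bit (+1/-1) over the PN chips: the product
    bit * chip decides a +delta or -delta nudge around base_rate."""
    rates = []
    for bit in bits:
        for chip in pn_code:  # chip in {+1, -1}
            rates.append(base_rate + delta * bit * chip)
    return rates

def despread(rates, pn_code, base_rate):
    """Correlate observed rate deviations with the PN code to
    recover each embedded bit (sign of the correlation)."""
    n = len(pn_code)
    bits = []
    for i in range(0, len(rates), n):
        corr = sum((rates[i + j] - base_rate) * pn_code[j] for j in range(n))
        bits.append(1 if corr > 0 else -1)
    return bits

pn = [1, -1, 1, 1, -1, -1, 1]   # hypothetical 7-chip PN code
msg = [1, -1, 1]                # secret mark bits
marked = embed(msg, pn, base_rate=100.0, delta=5.0)
recovered = despread(marked, pn, base_rate=100.0)
```

In the real system the perturbations are buried in natural rate jitter, which is what makes the mark invisible to anyone without the PN code.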
Exploiting the Transients of Adaptation for RoQ Attacks on Internet Resources. In Proceedings of the 12th IEEE International Conference on Network Protocols (ICNP'04), 2004.
Abstract (Cited by 59, 12 self):
In this paper, we expose an unorthodox adversarial attack that exploits the transients of a system's adaptive behavior, as opposed to its limited steady-state capacity. We show that a well-orchestrated attack could introduce significant inefficiencies that could potentially deprive a network element of much of its capacity, or significantly reduce its service quality, while evading detection by consuming an inconspicuously small fraction of that element's hijacked capacity. This type of attack stands in sharp contrast to traditional brute-force, sustained high-rate DoS attacks, as well as recently proposed attacks that exploit specific protocol settings such as TCP timeouts. We exemplify what we term Reduction of Quality (RoQ) attacks by exposing the vulnerabilities of common adaptation mechanisms. We develop control-theoretic models and associated metrics to quantify these vulnerabilities. We present numerical and simulation results, which we validate with observations from real Internet experiments. Our findings motivate the need for the development of adaptation mechanisms that are resilient to these new forms of attacks.
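The transient-exploiting idea can be sketched with a toy AIMD flow hit by periodic loss-inducing pulses: each pulse forces a multiplicative decrease just as the window has recovered, so average goodput collapses even though the attacker is active only a small fraction of the time. A rough illustration, not the paper's control-theoretic model; all parameters are made up:

```python
def aimd_under_pulses(rounds, capacity, pulse_period):
    """Toy AIMD flow on a link of `capacity` pkts/round. An attack
    pulse every `pulse_period` rounds overflows the link, triggering
    a multiplicative decrease right as the window has grown back."""
    w, sent = 1.0, 0.0
    for t in range(rounds):
        pulsed = pulse_period and t % pulse_period == 0
        if pulsed or w > capacity:   # pulse (or genuine overflow) -> loss
            w = max(1.0, w / 2.0)    # multiplicative decrease
        else:
            sent += w
            w += 1.0                 # additive increase
    return sent / rounds             # average goodput (pkts/round)

baseline = aimd_under_pulses(1000, capacity=50, pulse_period=0)
attacked = aimd_under_pulses(1000, capacity=50, pulse_period=10)
```

The pulses occupy one round in ten, yet they keep the window pinned near its floor, which is the transient inefficiency RoQ attacks exploit.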
Defense Against Spoofed IP Traffic Using Hop-Count Filtering
Abstract (Cited by 49, 2 self):
IP spoofing has often been exploited by Distributed Denial of Service (DDoS) attacks to (1) conceal flooding sources and dilute localities in flooding traffic, and (2) coax legitimate hosts into becoming reflectors, redirecting and amplifying flooding traffic. Thus, the ability to filter spoofed IP packets near victim servers is essential both to their own protection and to preventing them from becoming involuntary DoS reflectors. Although an attacker can forge any field in the IP header, he cannot falsify the number of hops an IP packet takes to reach its destination. More importantly, since hop-count values are diverse, an attacker cannot randomly spoof IP addresses while maintaining consistent hop-counts. On the other hand, an Internet server can easily infer the hop-count information from the Time-to-Live (TTL) field of the IP header. Using a mapping between IP addresses and their hop-counts, the server can distinguish spoofed IP packets from legitimate ones. Based on this observation, we present a novel filtering technique, called Hop-Count Filtering (HCF), which builds an accurate IP-to-hop-count (IP2HC) mapping table to detect and discard spoofed IP packets. HCF is easy to deploy, as it does not require any support from the underlying network. Through analysis using network measurement data, we show that HCF can identify close to 90% of spoofed IP packets, and then discard them with little collateral damage. We implement and evaluate HCF in the Linux kernel, demonstrating its effectiveness with experimental measurements.
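The core HCF check is easy to sketch: infer the sender's initial TTL from the observed TTL, subtract to get a hop count, and compare against the learned IP2HC table. A simplified illustration assuming only three common initial TTL values (the paper handles more cases, plus table construction and update; the addresses below are invented):

```python
# Simplifying assumption: only the three most common initial TTLs.
INITIAL_TTLS = (64, 128, 255)

def infer_hop_count(observed_ttl):
    """Guess the initial TTL as the smallest common value >= observed;
    hop count is then initial minus observed."""
    initial = min(t for t in INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

def is_spoofed(src_ip, observed_ttl, ip2hc):
    """Flag a packet whose inferred hop count disagrees with the
    previously learned hop count for its claimed source address."""
    expected = ip2hc.get(src_ip)
    return expected is not None and infer_hop_count(observed_ttl) != expected

ip2hc = {"10.1.2.3": 12}                         # learned IP2HC table entry
legit = is_spoofed("10.1.2.3", 64 - 12, ip2hc)   # 12 hops from a 64-TTL host
forged = is_spoofed("10.1.2.3", 255 - 3, ip2hc)  # attacker only 3 hops away
```

An attacker forging the source address cannot also forge the path length, so the inferred hop count betrays the spoof.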
On a New Class of Pulsing Denial-of-Service Attacks and the Defense. In Network and Distributed System Security Symposium (NDSS), 2005.
Abstract (Cited by 40, 3 self):
In this paper we analyze a new class of pulsing denial-of-service (PDoS) attacks that could seriously degrade the throughput of TCP flows. During a PDoS attack, periodic pulses of attack packets are sent to a victim. The magnitude of each pulse should be significant enough to cause packet losses. We describe two specific attack models according to the timing of the attack pulses with respect to TCP's congestion window movement: timeout-based and AIMD (additive-increase-multiplicative-decrease)-based. We show through an analysis that even a small number of attack pulses can cause significant throughput degradation. The second part of this paper presents a novel two-stage scheme to detect PDoS attacks on a victim network. The first stage uses a wavelet transform to extract the desired frequency components of the data traffic and ACK traffic; the second stage detects change points in the extracted components. Through both simulation and testbed experiments, we verify the feasibility and effectiveness of the detection scheme.
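The second detection stage, finding change points in the extracted traffic components, can be illustrated with a simple one-sided CUSUM detector watching for a collapse in the ACK rate. This is a generic change-point sketch, not the paper's actual statistic; all numbers are made up:

```python
def cusum_drop_detector(samples, mean, drift, threshold):
    """One-sided CUSUM: accumulate how far samples fall below the
    normal mean (minus a drift allowance) and report the first index
    where the cumulative drop statistic crosses the threshold."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (mean - drift - x))
        if s > threshold:
            return i
    return None

normal = [10.0, 11.0, 9.0, 10.5, 9.5, 10.2]   # ACK rate under normal load
under_attack = normal + [4.0, 3.5, 4.2, 3.8]  # ACK rate collapses during pulses
quiet = cusum_drop_detector(normal, mean=10.0, drift=1.0, threshold=10.0)
alarm = cusum_drop_detector(under_attack, mean=10.0, drift=1.0, threshold=10.0)
```

The drift term absorbs ordinary fluctuation so the detector stays silent on normal traffic but fires within a couple of samples of a sustained drop.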
Providing Packet Obituaries, 2004.
Abstract (Cited by 38, 5 self):
The Internet is transparent to success but opaque to failure. This veil of ignorance prevents ISPs from detecting failures by peering partners, and hosts from intelligently adapting their routes to adverse network conditions. To rectify this, we propose an accountability framework that would tell hosts where their packets have died. We describe a preliminary version of this framework and discuss its viability.
Phalanx: Withstanding Multimillion-Node Botnets
Abstract (Cited by 37, 3 self):
Large-scale distributed denial of service (DoS) attacks are an unfortunate everyday reality on the Internet. They are simple to execute and, with the growing prevalence and size of botnets, more effective than ever. Although much progress has been made in developing techniques to address DoS attacks, no existing solution is unilaterally deployable, works with the Internet model of open access and dynamic routes, and copes with the large numbers of attackers typical of today's botnets. In this paper, we present a novel DoS prevention scheme to address these issues. Our goal is to define a system that could be deployed in the next few years to address the danger from present-day massive botnets. The system, called Phalanx, leverages the power of swarms to combat DoS. Phalanx makes only the modest assumption that the aggregate capacity of the swarm exceeds that of the botnet. A client communicating with a destination bounces its packets through a random sequence of end-host mailboxes; because an attacker does not know the sequence, it can disrupt at most a fraction of the traffic, even for end-hosts with low-bandwidth access links. We use PlanetLab to show that this approach can be both efficient and capable of withstanding attack. We further explore scalability with a simulator running experiments on top of measured Internet topologies.
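The mailbox-hopping idea rests on the sender and receiver deriving the same unpredictable mailbox sequence from a shared secret, so an attacker cannot tell which mailboxes to flood next. A minimal sketch using a hash-based derivation (an assumption for illustration, not Phalanx's actual construction):

```python
import hashlib

def mailbox_sequence(shared_secret, n_mailboxes, length):
    """Derive a pseudorandom mailbox sequence by hashing the shared
    secret with a counter; without the secret, the next mailbox in
    the sequence is unpredictable."""
    seq = []
    for i in range(length):
        digest = hashlib.sha256(f"{shared_secret}:{i}".encode()).digest()
        seq.append(int.from_bytes(digest[:4], "big") % n_mailboxes)
    return seq

# Client and destination share the secret and agree on the hops;
# an attacker guessing the secret derives an unrelated sequence.
sender = mailbox_sequence("secret-key", 1000, 8)
receiver = mailbox_sequence("secret-key", 1000, 8)
attacker = mailbox_sequence("wrong-guess", 1000, 8)
```

Flooding any fixed subset of mailboxes therefore disrupts only the fraction of packets that happen to route through it.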
Defending against Low-Rate TCP Attacks: Dynamic Detection and Protection. In Proceedings of ICNP, 2004.
Abstract (Cited by 36, 4 self):
We consider a distributed approach to detect and defend against the low-rate TCP attack [7]. The low-rate TCP attack is essentially a periodic short burst that exploits the homogeneity of the minimum retransmission timeout (RTO) of TCP flows and forces all affected TCP flows to back off and enter the retransmission timeout state. This sort of attack is difficult to identify due to the large family of possible attack patterns. We propose a distributed detection mechanism that uses the dynamic time warping method to robustly and accurately identify the existence of this sort of attack. Once the attack is detected, a fair resource allocation mechanism is used so that (1) the number of affected TCP flows is minimized, and (2) sufficient resource protection is provided for the affected TCP flows. We report experimental results to quantify the robustness and accuracy of the proposed detection mechanism and the efficiency of the defense method.
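Dynamic time warping, the core of the proposed detector, aligns two time series while allowing stretching in time, so attack bursts whose periods vary still match a reference pulse pattern. Below is a standard textbook DTW implementation applied to toy traffic-rate series (the series themselves are invented):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two series: the classic
    O(n*m) dynamic program that permits one series to stretch or
    compress in time while matching the other."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # stretch a
                                 d[i][j - 1],      # stretch b
                                 d[i - 1][j - 1])  # step both
    return d[n][m]

reference = [0, 9, 9, 0, 0, 9, 9, 0]           # reference attack pulse pattern
stretched = [0, 9, 9, 9, 0, 0, 0, 9, 9, 9, 0]  # same pulses, warped in time
benign = [5, 5, 5, 5, 5, 5, 5, 5]              # smooth legitimate traffic
```

A time-warped burst pattern scores far closer to the reference than smooth traffic does, which is what lets one detector cover a large family of attack periods.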
Loss and Delay Accountability for the Internet. In Proc. IEEE International Conference on Network Protocols, 2007.
Abstract (Cited by 33, 5 self):
The Internet provides no information on the fate of transmitted packets, and end systems cannot determine who is responsible for dropping or delaying their traffic. As a result, they cannot verify that their ISPs are honoring their service level agreements, nor can they react to adverse network conditions appropriately. While current probing tools provide some assistance in this regard, they only give feedback on probes, not actual traffic. Moreover, service providers could, at any time, render their network opaque to such tools. We propose AudIt, an explicit accountability interface, through which ISPs can proactively supply feedback to traffic sources on loss and delay, at administrative-domain granularity. Notably, our interface is resistant to ISP lies and can be implemented with a modest NetFlow modification. On our Click-based prototype, playback of real traces from a Tier-1 ISP reveals less than 2% bandwidth overhead. Finally, our proposal benefits not only end systems, but also ISPs, who can now control the amount and quality of information revealed about their internals.
A Data Streaming Algorithm for Estimating Entropies of OD Flows. In IMC '07: Proceedings of the 7th ACM SIGCOMM Conference on Internet Measurement, 2007.
Abstract (Cited by 32, 7 self):
Entropy has recently gained considerable significance as an important metric for network measurement. Previous research has shown its utility in clustering traffic and detecting traffic anomalies. While measuring the entropy of the traffic observed at a single point has already been studied, an interesting open problem is to measure the entropy of the traffic between every origin-destination pair. In this paper, we propose the first solution to this challenging problem. Our sketch builds upon and extends the Lp sketch of Indyk with significant additional innovations. We present calculations showing that our data streaming algorithm is feasible for high link speeds using commodity CPU/memory at a reasonable cost. Our algorithm is shown to be very accurate in practice via simulations, using traffic traces collected at a tier-1 ISP backbone link.
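For intuition, the quantity such sketches approximate is the empirical entropy of the flow-key distribution; a skewed distribution (e.g. one flow dominating during an attack) drives it toward zero. Below is an exact, non-streaming computation for illustration only; the paper's contribution is approximating this without storing per-flow state:

```python
import math
from collections import Counter

def empirical_entropy(packets):
    """Exact empirical entropy H = -sum(p_i * log2(p_i)) over the
    observed flow keys; a streaming sketch would estimate this
    without keeping the full per-flow Counter."""
    counts = Counter(packets)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

uniform = ["f1", "f2", "f3", "f4"] * 25    # traffic spread evenly over 4 flows
skewed = ["f1"] * 97 + ["f2", "f3", "f4"]  # one flow dominates (e.g. a flood)
```

Four equally likely flows give exactly 2 bits of entropy, while the skewed mix collapses toward zero, which is why entropy is a useful anomaly signal.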