Results 1 - 10 of 493
Embracing wireless interference: Analog network coding
In ACM SIGCOMM, 2007
Cited by 358 (10 self)
Traditionally, interference is considered harmful. Wireless networks strive to avoid scheduling multiple transmissions at the same time in order to prevent interference. This paper adopts the opposite approach; it encourages strategically picked senders to interfere. Instead of forwarding packets, routers forward the interfering signals. The destination leverages network-level information to cancel the interference and recover the signal destined to it. The result is analog network coding because it mixes signals, not bits. So, what if wireless routers forward signals instead of packets? Theoretically, such an approach doubles the capacity of the canonical relay network. Surprisingly, it is also practical. We implement our design using software radios and show that it achieves significantly higher throughput than both traditional wireless routing and prior work on wireless network coding.
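A rough feel for the signal-level cancellation described in this abstract can be given numerically. The toy sketch below is not the authors' software-radio implementation (which must also handle channel gains and phase offsets); it simply has a relay forward the sum of two senders' symbols, after which each destination subtracts its own known transmission to recover the other's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy baseband symbols from Alice and Bob (illustrative only).
alice = rng.choice([-1.0, 1.0], size=16)
bob = rng.choice([-1.0, 1.0], size=16)

# Both transmit at once; the relay amplifies and forwards the interfering
# sum instead of decoding either packet.
relayed = alice + bob

# Bob knows what he sent, so he cancels his own contribution from the
# forwarded mixture and recovers Alice's symbols (Alice does the reverse).
alice_at_bob = relayed - bob
assert np.allclose(alice_at_bob, alice)
```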
Trading structure for randomness in wireless opportunistic routing
2007
Cited by 296 (7 self)
Opportunistic routing is a recent technique that achieves high throughput in the face of lossy wireless links. The current opportunistic routing protocol, ExOR, ties the MAC with routing, imposing a strict schedule on routers’ access to the medium. Although the scheduler delivers opportunistic gains, it misses some of the inherent features of the 802.11 MAC. For example, it prevents spatial reuse and thus may underutilize the wireless medium. It also eliminates the layering abstraction, making the protocol less amenable to extensions to alternate traffic types such as multicast. This paper presents MORE, a MAC-independent opportunistic routing protocol. MORE randomly mixes packets before forwarding them. This randomness ensures that routers that hear the same transmission do not forward the same packets. Thus, MORE needs no special scheduler to coordinate routers and can run directly on top of 802.11. Experimental results from a 20-node wireless testbed show that MORE’s median unicast throughput is 22% higher than ExOR’s, and the gains rise to 45% over ExOR when there is a chance of spatial reuse. For multicast, MORE’s gains increase with the number of destinations, and are 35-200% greater than ExOR’s.
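The random mixing step can be pictured as each forwarder emitting a random linear combination of the packets it has buffered. The sketch below is a minimal GF(2) version (XOR of a random subset); MORE itself codes over a larger finite field, and the batch size and packet length here are hypothetical.

```python
import os
import random

def random_mix(buffered_packets, size=1024):
    """Toy forwarder: emit one coded packet that is a random GF(2)
    combination (XOR of a random subset) of the buffered packets,
    together with its coefficient vector."""
    coeffs = [random.randint(0, 1) for _ in buffered_packets]
    coded = bytes(size)
    for c, pkt in zip(coeffs, buffered_packets):
        if c:
            coded = bytes(a ^ b for a, b in zip(coded, pkt))
    return coeffs, coded

# Two routers overhearing the same batch almost surely draw different
# coefficient vectors, so they forward different coded packets without
# any scheduler coordinating them.
batch = [os.urandom(1024) for _ in range(8)]
print(random_mix(batch)[0], random_mix(batch)[0])
```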
On coding for reliable communication over packet networks
2008
Cited by 217 (37 self)
We consider the use of random linear network coding in lossy packet networks. In particular, we consider the following simple strategy: nodes store the packets that they receive and, whenever they have a transmission opportunity, they send out coded packets formed from random linear combinations of stored packets. In such a strategy, intermediate nodes perform additional coding yet neither decode nor wait for a block of packets before sending out coded packets. Moreover, all coding and decoding operations have polynomial complexity. We show that, provided packet headers can be used to carry an amount of side-information that grows arbitrarily large (but independently of payload size), random linear network coding achieves packet-level capacity for both single unicast and single multicast connections and for both wireline and wireless networks. This result holds as long as packets received on links arrive according to processes that have average rates. Thus, packet losses on links may exhibit correlations in time or with losses on other links. In the special case of Poisson traffic with i.i.d. losses, we give error exponents that quantify the rate of decay of the probability of error with coding delay. Our analysis of random linear network coding shows not only that it achieves packet-level capacity, but also that the propagation of packets carrying “innovative” information follows the propagation of jobs through a queueing network, thus implying that fluid flow models yield good approximations.
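One concrete piece of the strategy above is deciding whether a received coded packet is "innovative", i.e. whether its coding vector is linearly independent of those already held. A minimal sketch of that test over GF(2), with coding vectors packed into integers, is shown below; the paper's analysis is not tied to this field or representation.

```python
def gf2_rank(vectors):
    """Rank over GF(2) of coding vectors packed as Python ints."""
    basis = []
    for v in vectors:
        for b in basis:
            v = min(v, v ^ b)      # reduce v against the current basis
        if v:
            basis.append(v)
    return len(basis)

def is_innovative(held, new):
    """A packet is innovative iff its coding vector raises the rank."""
    return gf2_rank(held + [new]) > gf2_rank(held)

held = [0b1010, 0b0110]
print(is_innovative(held, 0b1100))  # False: 1100 = 1010 XOR 0110
print(is_innovative(held, 0b0001))  # True: brings new information
```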
Network coding: An instant primer
In ACM SIGCOMM Computer Communication Review, 2006
Cited by 195 (7 self)
Network coding is a new research area that may have interesting applications in practical networking systems. With network coding, intermediate nodes may send out packets that are linear combinations of previously received information. There are two main benefits of this approach: potential throughput improvements and a high degree of robustness. Robustness translates into loss resilience and facilitates the design of simple distributed algorithms that perform well, even if decisions are based only on partial information. This paper is an instant primer on network coding: we explain what network coding does and how it does it. We also discuss the implications of theoretical results on network coding for realistic settings and show how network coding can be used in practice.
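The "linear combinations of previously received information" mentioned above can be sketched in a few lines: coded packets are weighted sums of source packets, and a receiver that collects enough independent combinations decodes by solving the linear system. Practical codes work over finite fields; floating point is used below only to keep the illustration short.

```python
import numpy as np

source = np.array([[3.0, 1.0, 4.0],   # packet 1 (three symbols)
                   [1.0, 5.0, 9.0],   # packet 2
                   [2.0, 6.0, 5.0]])  # packet 3

coeffs = np.random.default_rng(1).random((3, 3))  # coding coefficients
coded = coeffs @ source                           # what the network delivers

decoded = np.linalg.solve(coeffs, coded)          # receiver inverts the mix
assert np.allclose(decoded, source)
```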
Analyzing and Improving a BitTorrent Network’s Performance Mechanisms
2006
Cited by 177 (0 self)
In recent years, BitTorrent has emerged as a very scalable peer-to-peer file distribution mechanism. While early measurement and analytical studies have verified BitTorrent’s performance, they have also raised questions about various metrics (upload utilization, fairness, etc.), particularly in settings other than those measured. In this paper, we present a simulation-based study of BitTorrent. Our goal is to deconstruct the system and evaluate the impact of its core mechanisms, both individually and in combination, on overall system performance under a variety of workloads. Our evaluation focuses on several important metrics, including peer link utilization, file download time, and fairness amongst peers in terms of volume of content served. Our results confirm that BitTorrent performs near-optimally in terms of uplink bandwidth utilization and download time, except under certain extreme conditions. We also show that low-bandwidth peers can download more than they upload to the network when high-bandwidth peers are present. We find that the rate-based tit-for-tat policy is not effective in preventing unfairness. We show how simple changes to the tracker and a stricter, block-based tit-for-tat policy greatly improve fairness.
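The abstract mentions a stricter, block-based tit-for-tat policy without detailing it; the sketch below shows one plausible form of such an accounting rule, in which a peer stops serving a neighbor once the block deficit exceeds a threshold. The class, method names, and threshold are hypothetical, not the paper's exact mechanism.

```python
from collections import defaultdict

class BlockTitForTat:
    """Stop uploading to a peer once blocks sent exceed blocks received
    by more than a fixed deficit (illustrative policy only)."""

    def __init__(self, max_deficit=8):
        self.sent = defaultdict(int)
        self.received = defaultdict(int)
        self.max_deficit = max_deficit

    def on_block_received(self, peer):
        self.received[peer] += 1

    def on_block_sent(self, peer):
        self.sent[peer] += 1

    def may_send(self, peer):
        return self.sent[peer] - self.received[peer] < self.max_deficit
```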
Resilient Network Coding in the Presence of Byzantine Adversaries
"... Network coding substantially increases network throughput. But since it involves mixing of information inside the network, a single corrupted packet generated by a malicious node can end up contaminating all the information reaching a destination, preventing decoding. This paper introduces distribu ..."
Abstract
-
Cited by 169 (32 self)
- Add to MetaCart
(Show Context)
Network coding substantially increases network throughput. But since it involves mixing of information inside the network, a single corrupted packet generated by a malicious node can end up contaminating all the information reaching a destination, preventing decoding. This paper introduces distributed polynomial-time rate-optimal network codes that work in the presence of Byzantine nodes. We present algorithms that target adversaries with different attacking capabilities. When the adversary can eavesdrop on all links and jam z_O links, our first algorithm achieves a rate of C − 2z_O, where C is the network capacity. In contrast, when the adversary has limited eavesdropping capabilities, we provide algorithms that achieve the higher rate of C − z_O. Our algorithms attain the optimal rate given the strength of the adversary. They are information-theoretically secure. They operate in a distributed manner, assume no knowledge of the topology, and can be designed and implemented in polynomial time. Furthermore, only the source and destination need to be modified; non-malicious nodes inside the network are oblivious to the presence of adversaries and implement a classical distributed network code. Finally, our algorithms work over wired and wireless networks.
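The two rates quoted above differ only in the penalty paid for the adversary's eavesdropping power; a trivial helper (not part of the paper's construction) makes the comparison concrete.

```python
def achievable_rate(capacity, z_jammed, adversary_sees_all_links=True):
    """C - 2*z_O when the adversary eavesdrops on all links,
    C - z_O when its eavesdropping is limited."""
    penalty = 2 * z_jammed if adversary_sees_all_links else z_jammed
    return max(capacity - penalty, 0)

print(achievable_rate(10, 3))         # 4: adversary sees every link
print(achievable_rate(10, 3, False))  # 7: limited eavesdropping
```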
Rarest First and Choke Algorithms Are Enough
2006
Cited by 149 (15 self)
The performance of peer-to-peer file replication comes from its piece and peer selection strategies. Two such strategies have been introduced by the BitTorrent protocol: the rarest first and choke algorithms. Whereas it is commonly admitted that BitTorrent performs well, recent studies have proposed the replacement of the rarest first and choke algorithms in order to improve efficiency and fairness. In this paper, we use results from real experiments to advocate that the replacement of the rarest first and choke algorithms cannot be justified in the context of peer-to-peer file replication in the Internet. We instrumented a BitTorrent client and ran experiments on real torrents with different characteristics. Our experimental evaluation is peer oriented, instead of tracker oriented, which allows us to get detailed information on all exchanged messages and protocol events. We go beyond the mere observation of the good efficiency of both algorithms. We show that the rarest first algorithm guarantees close to ideal diversity of the pieces among peers. In particular, in our experiments, replacing the rarest first algorithm with source or network coding solutions cannot be justified. We also show that the choke algorithm in its latest version fosters reciprocation and is robust to free riders. In particular, the choke algorithm is fair and its replacement with a bit-level tit-for-tat solution is not appropriate. Finally, we identify new areas of improvement for efficient peer-to-peer file replication protocols.
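For readers unfamiliar with the rarest first rule discussed above, a minimal sketch: among the pieces a peer is still missing, request one with the fewest copies in its neighborhood, breaking ties at random. The function and data layout below are illustrative, not the instrumented client's code.

```python
import random
from collections import Counter

def rarest_first(my_pieces, neighbor_bitfields):
    """Pick a missing piece with the fewest copies among neighbors."""
    counts = Counter()
    for bitfield in neighbor_bitfields:   # bitfield: set of piece ids a neighbor holds
        counts.update(bitfield)
    candidates = [p for p in counts if p not in my_pieces]
    if not candidates:
        return None
    fewest = min(counts[p] for p in candidates)
    return random.choice([p for p in candidates if counts[p] == fewest])

print(rarest_first({0, 1}, [{0, 1, 2}, {1, 2, 3}, {2, 3}]))  # piece 3 is rarest
```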
Should Internet Service Providers Fear Peer-Assisted Content Distribution?
2005
Cited by 130 (3 self)
Recently, peer-to-peer (P2P) networks have emerged as an attractive solution to enable large-scale content distribution without requiring major infrastructure investments. While such P2P solutions appear highly beneficial for content providers and end-users, there seems to be a growing concern among Internet Service Providers (ISPs), which now need to support the distribution cost. In this work, we explore the potential impact of future P2P file delivery mechanisms as seen from three different perspectives: i) the content provider, ii) the ISPs, and iii) individual content consumers. Using a diverse set of measurements, including BitTorrent tracker logs and payload packet traces collected at the edge of a 20,000-user access network, we quantify the impact of peer-assisted file delivery on end-user experience and resource consumption. We further compare it with the performance expected from traditional distribution mechanisms based on large server farms and Content Distribution Networks (CDNs).
Improving traffic locality in BitTorrent via biased neighbor selection
In ICDCS ’06: Proceedings of the 26th IEEE International Conference on Distributed Computing Systems, 2006
Cited by 128 (0 self)
Peer-to-peer (P2P) applications such as BitTorrent ignore traffic costs at ISPs and generate a large amount of cross-ISP traffic. As a result, ISPs often throttle BitTorrent traffic to control the cost. In this paper, we examine a new approach to enhance BitTorrent traffic locality, biased neighbor selection, in which a peer chooses the majority, but not all, of its neighbors from peers within the same ISP. Using simulations, we show that biased neighbor selection maintains the nearly optimal performance of BitTorrent in a variety of environments, and fundamentally reduces the cross-ISP traffic by eliminating the traffic’s linear growth with the number of peers. Key to its performance is the rarest first piece replication algorithm used by BitTorrent clients. Compared with existing locality-enhancing approaches such as bandwidth limiting, gateway peers, and caching, biased neighbor selection requires no dedicated servers and scales to a large number of BitTorrent networks.
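Biased neighbor selection, as described above, amounts to drawing most of a peer's neighbors from its own ISP and only a few from outside. A minimal sketch, with an illustrative 90/10 split rather than the parameters studied in the paper:

```python
import random

def biased_neighbors(internal_peers, external_peers, k=35, internal_fraction=0.9):
    """Pick k neighbors, the majority (but not all) from the local ISP."""
    n_internal = min(int(k * internal_fraction), len(internal_peers))
    n_external = min(k - n_internal, len(external_peers))
    return (random.sample(internal_peers, n_internal) +
            random.sample(external_peers, n_external))

# internal_peers and external_peers are lists of peer addresses (hypothetical).
print(biased_neighbors(list(range(100)), list(range(100, 150))))
```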
The Importance of Being Opportunistic: Practical Network Coding for Wireless Environments
"... This paper applies network coding to wireless mesh networks and presents the first implementation results. It introduces COPE, an opportunistic approach to network coding, where each node snoops on the medium, learns the status of its neighbors, detects coding opportunities, and codes as long as the ..."
Abstract
-
Cited by 126 (10 self)
- Add to MetaCart
This paper applies network coding to wireless mesh networks and presents the first implementation results. It introduces COPE, an opportunistic approach to network coding, where each node snoops on the medium, learns the status of its neighbors, detects coding opportunities, and codes as long as the recipients can decode. This flexible design allows COPE to efficiently support multiple unicast flows, even when traffic demands are unknown and bursty, and the senders and receivers are dynamic. We evaluate COPE using both emulation and testbed implementation. Our results show that COPE substantially improves the network throughput, and as the number of flows and the contention level increase, COPE’s throughput becomes many times that of current 802.11 mesh networks.
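COPE's coding step, as summarized above, XORs packets headed to different next hops only when each recipient already has enough information to decode. A toy version of that decision, with hypothetical packet contents:

```python
def xor(p, q):
    return bytes(a ^ b for a, b in zip(p, q))

pkt_for_alice = b"TO-ALICE"
pkt_for_bob = b"TO-BOB!!"

# One broadcast replaces two unicast transmissions; the relay codes only
# because it knows Bob overheard pkt_for_alice and Alice overheard
# pkt_for_bob, so both can decode.
broadcast = xor(pkt_for_alice, pkt_for_bob)

assert xor(broadcast, pkt_for_alice) == pkt_for_bob   # Bob decodes
assert xor(broadcast, pkt_for_bob) == pkt_for_alice   # Alice decodes
```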