Results 1 - 10 of 307
NIRA: A New Internet Routing Architecture
, 2003
Abstract - Cited by 121 (1 self)
This paper presents the design of a new Internet routing architecture (NIRA). In today’s Internet, users can pick their own ISPs, but once the packets have entered the network, the users have no control over the overall routes their packets take. NIRA aims at providing end users the ability to choose the sequence of Internet service providers a packet traverses. User choice fosters competition, which imposes an economic discipline on the market, and fosters innovation and the introduction of new services. This paper explores various technical problems that would have to be solved to give users the ability to choose: how a user discovers routes and whether the dynamic conditions of the routes satisfy his requirements, how to efficiently represent routes, and how to properly compensate providers if a user chooses to use them. In particular, NIRA utilizes a hierarchical provider-rooted addressing scheme so that a common type of domain-level route can be efficiently represented by a pair of addresses. In NIRA, each user keeps track of the topology information on domains that provide transit service for him. A source retrieves the topology information of the destination on demand and combines this information with his own to discover end-to-end routes. This route discovery process ensures that each user does not need to know the complete topology of the Internet.
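The abstract's key mechanism, representing a common domain-level route by a single pair of addresses under provider-rooted hierarchical addressing, can be sketched as follows. This is an illustrative model, not NIRA's actual encoding; the chain representation and the `route_from_addresses` helper are our assumptions.

```python
# Illustrative sketch (not NIRA's wire format): in provider-rooted
# hierarchical addressing, an address encodes the provider chain from a
# top-level "core" provider down to the user's domain. A common
# "valley-free" domain-level route can then be read off a single
# (source address, destination address) pair: up the source's provider
# chain to the core, then down the destination's.

def route_from_addresses(src_addr, dst_addr):
    """src_addr/dst_addr are provider chains, core-first, e.g.
    ['CoreA', 'RegionalB', 'Net1']. Returns the domain-level route."""
    up = list(reversed(src_addr))    # user domain up toward the core
    down = list(dst_addr)            # core down toward the destination
    # If both chains hang off the same core provider, list it only once.
    if up and down and up[-1] == down[0]:
        down = down[1:]
    return up + down

route = route_from_addresses(['CoreA', 'RegionalB', 'Net1'],
                             ['CoreA', 'RegionalC', 'Net2'])
```

Switching routes then amounts to switching which of its assigned addresses a host uses, which is the property the abstract highlights.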
Secure or insure? A game-theoretic analysis of information security games
- In Proc. of the 17th International World Wide Web Conference (WWW2008), Beijing
, 2008
Abstract - Cited by 98 (27 self)
Despite general awareness of the importance of keeping one’s system secure, and widespread availability of consumer security technologies, actual investment in security remains highly variable across the Internet population, allowing attacks such as distributed denial-of-service (DDoS) and spam distribution to continue unabated. By modeling security investment decision-making in established (e.g., weakest-link, best-shot) and novel games (e.g., weakest-target), and allowing expenditures in self-protection versus self-insurance technologies, we can examine how incentives may shift between investment in a public good (protection) and a private good (insurance), subject to factors such as network size, type of attack, loss probability, loss magnitude, and cost of technology. We can also characterize Nash equilibria and social optima for different classes of attacks and defenses. In the weakest-target game, an interesting result is that, for almost all parameter settings, more effort is exerted at Nash equilibrium than at the social optimum. We may attribute this to the “strategic uncertainty” of players seeking to self-protect at just slightly above the lowest protection level.
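The protection-versus-insurance trade-off described above can be sketched with a simplified payoff function. The exact functional form, parameter names, and numbers here are ours, not the authors'; the point is only how the game type changes what protection buys.

```python
# Simplified security-investment payoffs (illustrative, not the paper's
# exact model): player i chooses protection e_i (a public good whose
# aggregate effect depends on the game) and self-insurance s_i (a private
# good). With attack probability p and loss L, expected utility is the
# endowment M minus expected loss minus the cost of both technologies.

def utility(i, e, s, game, p=0.5, L=10.0, b=1.0, c=1.0, M=20.0):
    if game == 'weakest-link':
        H = min(e)            # protection is only as good as the laxest player
    elif game == 'best-shot':
        H = max(e)            # one strong defender protects everyone
    else:                     # 'total-effort'
        H = sum(e) / len(e)
    expected_loss = p * L * (1 - H) * (1 - s[i])
    return M - expected_loss - b * e[i] - c * s[i]

# In a weakest-link game, one lax player nullifies the diligent player's
# spending; in a best-shot game the same spending protects both.
u_weakest = utility(0, e=[1.0, 0.0], s=[0.0, 0.0], game='weakest-link')
u_bestshot = utility(0, e=[1.0, 0.0], s=[0.0, 0.0], game='best-shot')
```

The gap between `u_weakest` and `u_bestshot` is the incentive shift the abstract refers to: under weakest-link dynamics, shifting spending into private self-insurance becomes attractive.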
Untangling the Web from DNS
, 2004
Abstract - Cited by 88 (12 self)
The Web relies on the Domain Name System (DNS) to resolve the hostname portion of URLs into IP addresses. This marriage of convenience enabled the Web's meteoric rise, but the resulting entanglement is now hindering both infrastructures---the Web is overly constrained by the limitations of DNS, and DNS is unduly burdened by the demands of the Web. There has been much commentary on this sad state of affairs, but dissolving the ill-fated union between DNS and the Web requires a new way to resolve Web references. To this end, this paper describes the design and implementation of Semantic Free Referencing (SFR), a reference resolution infrastructure based on distributed hash tables (DHTs).
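The core idea, naming objects by flat, meaning-free keys that a DHT resolves to locations, can be sketched in a few lines. This is a toy with an in-process table standing in for the DHT; the class, method names, and seed/location values are our illustrations, not SFR's API.

```python
# Minimal sketch of semantic-free referencing: a Web object is named by a
# flat, meaning-free tag (here a SHA-1 hash), and the resolution layer maps
# that tag to the object's current location. Because the tag carries no
# semantics, the object can move or change owners without being renamed,
# unlike a DNS-based URL.

import hashlib

class ToyResolver:
    def __init__(self):
        self.table = {}                           # stands in for the DHT

    @staticmethod
    def key_for(seed: bytes) -> str:
        return hashlib.sha1(seed).hexdigest()     # 160-bit semantic-free tag

    def publish(self, seed: bytes, location: str) -> str:
        k = self.key_for(seed)
        self.table[k] = location
        return k

    def resolve(self, k: str) -> str:
        return self.table[k]

r = ToyResolver()
tag = r.publish(b'my-homepage-v1', 'http://203.0.113.7/index.html')
loc = r.resolve(tag)
```

A real deployment would replace the dictionary with DHT put/get operations, but the lookup contract is the same.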
NIRA: A New Inter-Domain Routing Architecture
- IEEE/ACM TRANSACTIONS ON NETWORKING
, 2007
Abstract - Cited by 77 (2 self)
In today’s Internet, users can choose their local Internet service providers (ISPs), but once their packets have entered the network, they have little control over the overall routes their packets take. Giving a user the ability to choose between provider-level routes has the potential of fostering ISP competition to offer enhanced service and improving end-to-end performance and reliability. This paper presents the design and evaluation of a new Internet routing architecture (NIRA) that gives a user the ability to choose the sequence of providers his packets take. NIRA addresses a broad range of issues, including practical provider compensation, scalable route discovery, efficient route representation, fast route fail-over, and security. NIRA supports user choice without running a global link-state routing protocol. It breaks an end-to-end route into a sender part and a receiver part and uses address assignment to represent each part. A user can specify a route with only a source and a destination address, and switch routes by switching addresses. We evaluate NIRA using a combination of network measurement, simulation, and analysis. Our evaluation shows that NIRA supports user choice with low overhead.
Pathlet routing
- In Proc. SIGCOMM Workshop on Hot Topics in Networking
, 2008
Abstract - Cited by 70 (14 self)
We present a new routing protocol, pathlet routing, in which networks advertise fragments of paths, called pathlets, that sources concatenate into end-to-end source routes. Intuitively, the pathlet is a highly flexible building block, capturing policy constraints as well as enabling an exponentially large number of path choices. In particular, we show that pathlet routing can emulate the policies of BGP, source routing, and several recent multipath proposals. This flexibility lets us address two major challenges for Internet routing: scalability and source-controlled routing. When a router’s routing policy has only “local” constraints, it can be represented using a small number of pathlets, leading to very small forwarding tables and many choices of routes for senders. Crucially, pathlet routing does not impose a global requirement on what style of policy is used, but rather allows multiple styles to coexist. The protocol thus supports complex routing policies while enabling and incentivizing the adoption of policies that yield small forwarding plane state and a high degree of path choice.
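The concatenation step at the heart of the abstract can be sketched directly. The tuple representation of a pathlet here is our simplification, not the paper's wire format.

```python
# Illustrative sketch: networks advertise pathlets -- path fragments between
# virtual nodes -- and a sender builds an end-to-end source route by chaining
# pathlets whose endpoints match.

def concatenate(pathlets):
    """Each pathlet is (start, end, hops), where hops lists the vnodes after
    start, ending at end. Pathlets must chain: each one's end is the next
    one's start. Returns the full vnode sequence of the source route."""
    route = [pathlets[0][0]]
    at = pathlets[0][0]
    for start, end, hops in pathlets:
        if start != at:
            raise ValueError(f'pathlet starts at {start}, expected {at}')
        route.extend(hops)
        at = end
    return route

route = concatenate([
    ('A', 'C', ['B', 'C']),   # fragment advertised by one network
    ('C', 'E', ['D', 'E']),   # fragment advertised by another
])
```

Policy lives in which pathlets a network chooses to advertise; the sender's freedom is the set of valid chains, which grows combinatorially with the number of advertised fragments.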
Content Availability, Pollution and Poisoning in File Sharing Peer-to-Peer Networks
- Proceedings of the 6th ACM Conference on Electronic Commerce
, 2005
Abstract - Cited by 66 (2 self)
Copyright holders have been investigating technological solutions to prevent distribution of copyrighted materials in peer-to-peer file sharing networks. A particularly popular technique consists in “poisoning” a specific item (movie, song, or software title) by injecting a massive number of decoys into the peer-to-peer network, to reduce the availability of the targeted item. In addition to poisoning, pollution, that is, the accidental injection of unusable copies of files in the network, also decreases content availability. In this paper, we attempt to provide a first step toward understanding the differences between pollution and poisoning, and their respective impact on content availability in peer-to-peer file sharing networks. To that effect, we conduct a measurement study of content availability in the four most popular peer-to-peer file sharing networks, in the absence of poisoning, and then simulate different poisoning strategies.
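A back-of-the-envelope model makes the decoy effect concrete. The uniform-sampling formula and the numbers below are our assumptions for illustration, not results from the paper's measurements or simulations.

```python
# Toy availability model: if a title has g usable copies and an attacker
# injects d indistinguishable decoys, a user who picks sources uniformly at
# random gets a clean copy on one try with probability g / (g + d), and
# within k independent tries with probability 1 - (d / (g + d)) ** k.

def p_clean_within(g: int, d: int, k: int) -> float:
    p_bad = d / (g + d)
    return 1 - p_bad ** k

# 100 real copies swamped by 10,000 decoys: even 10 attempts rarely succeed.
p1 = p_clean_within(100, 10_000, 1)
p10 = p_clean_within(100, 10_000, 10)
```

This is the sense in which poisoning "reduces availability" without removing a single genuine copy: it dilutes the probability of ever fetching one.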
An adaptive communication architecture for wireless sensor networks
- in Proceedings of the Fifth ACM Conference on Networked Embedded Sensor Systems (SenSys 2007)
, 2007
Abstract - Cited by 66 (15 self)
As sensor networks move towards increasing heterogeneity, the number of link layers, MAC protocols, and underlying transportation mechanisms increases. System developers must adapt their applications and systems to accommodate a wide range of underlying protocols and mechanisms. However, existing communication architectures for sensor networks are not designed for this heterogeneity and therefore the system developer must redevelop their systems for each underlying communication protocol or mechanism. To remedy this situation, we present a communication architecture that adapts to a wide range of underlying communication mechanisms, from the MAC layer to the transport layer, without requiring any changes to applications or protocols. We show that the architecture is expressive enough to accommodate typical sensor network protocols. Measurements show that the increase in execution time over a non-adaptive architecture is small.
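The shape of such an adaptation layer can be sketched with a fixed application-facing interface and pluggable adapters underneath. The class and method names here are our illustration, not the paper's API.

```python
# Illustrative sketch: applications program against one fixed communication
# interface, and per-mechanism adapters translate it to whatever MAC or
# transport layer sits underneath, so swapping the underlying protocol
# requires no application changes.

class LinkAdapter:
    """Adapter base class: wraps one underlying communication mechanism."""
    def send(self, dest, payload):
        raise NotImplementedError

class LoopbackAdapter(LinkAdapter):
    """Trivial adapter used here in place of a real radio/MAC driver."""
    def __init__(self):
        self.delivered = []
    def send(self, dest, payload):
        self.delivered.append((dest, payload))
        return True

class CommStack:
    """The fixed interface the application sees, regardless of adapter."""
    def __init__(self, adapter: LinkAdapter):
        self.adapter = adapter
    def send(self, dest, payload):
        return self.adapter.send(dest, payload)

stack = CommStack(LoopbackAdapter())
ok = stack.send('node-7', b'reading:23.5C')
```

Replacing `LoopbackAdapter` with an adapter for a different MAC or transport leaves `CommStack` callers untouched, which is the adaptation property the abstract claims.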
Flow Rate Fairness: Dismantling a Religion
- ACM CCR
, 2007
Abstract - Cited by 65 (9 self)
Resource allocation and accountability keep reappearing on every list of requirements for the Internet architecture. The reason we never resolve these issues is a broken idea of what the problem is. The applied research and standards communities are using completely unrealistic and impractical fairness criteria. The resulting mechanisms don’t even allocate the right thing and they don’t allocate it between the right entities. We explain as bluntly as we can that thinking about fairness mechanisms like TCP in terms of sharing out flow rates has no intellectual heritage from any concept of fairness in philosophy or social science, or indeed real life. Comparing flow rates should never again be used for claims of fairness in production networks. Instead, we should judge fairness mechanisms on how they share out the ‘cost’ of each user’s actions on others.
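The 'cost' the abstract argues for is commonly operationalized as congestion volume: bytes sent weighted by the congestion level experienced. The formula and numbers below are a toy illustration of ours, not taken from the paper.

```python
# Toy illustration of cost-based accounting versus per-flow fairness:
# measure each sender's 'cost' as congestion volume (bytes sent times the
# loss or ECN-marking level experienced), not as an instantaneous flow rate.

def congestion_volume(bytes_sent: float, congestion_level: float) -> float:
    """Cost a sender imposes on others: volume weighted by congestion."""
    return bytes_sent * congestion_level

# Two users on a link with 2% congestion get equal per-flow rates, but user
# B opens 10 flows, moves 10x the bytes, and so imposes 10x the cost --
# a difference per-flow fairness never sees.
cost_a = congestion_volume(1e9, 0.02)      # 1 GB over one flow
cost_b = congestion_volume(10e9, 0.02)     # 10 GB over ten flows
```

Per-flow fairness calls A and B equal; cost accounting does not, which is exactly the distinction the abstract presses.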
A System for Authenticated Policy-Compliant Routing
, 2004
Abstract - Cited by 63 (6 self)
Internet end users and ISPs alike have little control over how packets are routed outside of their own AS, restricting their ability to achieve levels of performance, reliability, and utility that might otherwise be attained. While researchers have proposed a number of source-routing techniques to combat this limitation, there has thus far been no way for independent ASes to ensure that such traffic does not circumvent local traffic policies, nor to accurately determine the correct party to charge for forwarding the traffic. We present Platypus, an authenticated source routing system built around the concept of network capabilities. Network capabilities allow for accountable, fine-grained path selection by cryptographically attesting to policy compliance at each hop along a source route. Capabilities can be composed to construct routes through multiple ASes and can be delegated to third parties. Platypus caters to the needs of both end users and ISPs: users gain the ability to pool their resources and select routes other than the default, while ISPs maintain control over where, when, and whose packets traverse their networks. We describe how Platypus can be used to address several well-known issues in wide-area routing at both the edge and the core, and evaluate its performance, security, and interactions with existing protocols. Our results show that incremental deployment of Platypus can achieve immediate gains.
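The capability mechanism described above, a cryptographic attestation that a given source is authorized to route through a given waypoint, can be sketched with a keyed MAC. The construction is deliberately simplified: Platypus's actual capability format, key hierarchy, and delegation scheme differ, and the key and identifiers below are illustrative.

```python
# Simplified sketch of a network capability: the issuing AS MACs the
# authorized (source, waypoint) pair with a secret shared with its routers.
# A router on the path can then verify, per packet, that the source route
# segment was authorized -- without consulting any central authority.

import hmac, hashlib

AS_KEY = b'per-AS secret shared with its routers'   # illustrative only

def issue_capability(src: str, waypoint: str) -> bytes:
    msg = f'{src}|{waypoint}'.encode()
    return hmac.new(AS_KEY, msg, hashlib.sha256).digest()

def verify_capability(src: str, waypoint: str, cap: bytes) -> bool:
    return hmac.compare_digest(issue_capability(src, waypoint), cap)

cap = issue_capability('192.0.2.1', 'as7.exit-west')
ok = verify_capability('192.0.2.1', 'as7.exit-west', cap)
forged = verify_capability('198.51.100.9', 'as7.exit-west', cap)
```

Because verification needs only the AS's own key, each hop can attest policy compliance locally, which is what lets the ISP keep control over whose packets traverse its network.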
Negotiation-Based Routing Between Neighboring ISPs
- in Proc. NSDI
, 2005
Abstract - Cited by 61 (2 self)
We explore negotiation as the basis for cooperation between competing entities, for the specific case of routing between two neighboring ISPs. Interdomain routing is often driven by self-interest and based on a limited view of the internetwork, which hurts the stability and efficiency of routing. We present a negotiation framework in which adjacent ISPs share information using coarse preferences and jointly decide the paths for the traffic flows they exchange. Our framework enables pairs of ISPs to agree on routing paths based on their specific relationship, even if they have different optimization criteria. We use simulation with over sixty measured ISP topologies to evaluate our framework. We find that the quality of negotiated routing is close to that of globally optimal routing that uses complete, detailed information about both ISPs. We also find that ISPs have incentive to negotiate because both of them benefit compared to routing independently based on local information.
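The "coarse preferences" idea can be sketched as follows: each ISP maps candidate paths to small integer preference classes rather than revealing exact costs, and the pair deterministically agrees on the path with the best combined class. The scheme, tie-breaking rule, and path names below are our illustration, not the paper's protocol.

```python
# Toy negotiation over coarse preferences: each ISP assigns every candidate
# path a small preference class (lower is better) instead of exposing its
# true costs. Both sides compute the same winner: minimal combined class,
# ties broken deterministically by path id.

def negotiate(paths, prefs_a, prefs_b):
    """paths: list of path ids; prefs_*: dict mapping path id -> class."""
    return min(paths, key=lambda p: (prefs_a[p] + prefs_b[p], p))

paths = ['early-exit', 'late-exit', 'mid-exit']
prefs_isp1 = {'early-exit': 0, 'late-exit': 2, 'mid-exit': 1}
prefs_isp2 = {'early-exit': 2, 'late-exit': 1, 'mid-exit': 0}
chosen = negotiate(paths, prefs_isp1, prefs_isp2)
```

Neither ISP gets its unilateral favorite; the compromise path wins on combined preference, which mirrors the abstract's finding that both parties do better than routing independently on local information.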