Results 1 - 8 of 8
There is More to IXPs than Meets the Eye
Abstract - Cited by 8 (4 self)
This article is an editorial note submitted to CCR. It has NOT been peer reviewed. The authors take full responsibility for this article’s technical content. Comments can be posted through CCR Online. Internet eXchange Points (IXPs) are generally considered to be the successors of the four Network Access Points (NAPs) that were mandated as part of the decommissioning of the National Science Foundation Network (NSFNET) in 1994/95 to facilitate the transition from the NSFNET to the “public Internet” as we know it today. While this popular view does not tell the whole story behind the early beginnings of IXPs, what is true is that since around 1994, the number of operational IXPs worldwide has grown to more than 300 (as of May 2013), with the largest IXPs handling daily traffic volumes comparable to those carried by the largest Tier-1 ISPs. However, IXPs have never really attracted much attention from the networking research community. At first glance, this lack of interest ...
On the Benefits of Using a Large IXP as an Internet Vantage Point
In IMC, 2013
Abstract - Cited by 6 (5 self)
In the context of measuring the Internet, a long-standing question has been whether there exist well-localized physical entities in today’s network where traffic from a representative cross-section of the constituents of the Internet can be observed at a fine-enough granularity to paint an accurate and informative picture of how these constituents shape and impact much of the structure and evolution of today’s Internet and the actual traffic it carries. In this paper, we first answer this question in the affirmative by mining 17 weeks of continuous sFlow data from one of the largest European IXPs. Examining these weekly snapshots, we discover a vantage point with excellent visibility into the Internet, seeing week-in and week-out traffic from all 42K+ routed ASes, almost all 450K+ routed prefixes, from close to 1.5M servers, and around a quarter billion IPs from all around the globe. Second, to show the potential of such vantage points, we analyze the server-related portion of the traffic at this IXP, identify the server IPs and cluster them according to the organizations responsible for delivering the content. In the process, we observe a clear trend among many of the critical Internet players towards network heterogenization; that is, either hosting servers of third-party networks in their own infrastructures or pursuing massive deployments of their own servers in strategically chosen third-party networks. While the latter is a well-known business strategy of companies such as Akamai, Google, and Netflix, we show in this paper the extent of network heterogenization in today’s Internet and illustrate how it enriches the traditional, largely traffic-agnostic AS-level view of the Internet.
Quo vadis Open-IX?
Abstract
The recently launched initiative by the Open-IX Association (OIX) to establish the European-style Internet eXchange Point (IXP) model in the US suggests an intriguing strategy to tackle a problem that some Internet stakeholders in the US consider to be detrimental to their business; i.e., a lack of diversity in available peering opportunities. We examine in this paper the cast of Internet stakeholders that are bound to play a critical role in determining the fate of this Open-IX effort. These include the large content and cloud providers, CDNs, Tier-1 ISPs, the well-established and some of the newer commercial datacenter and colocation companies, and the largest IXPs in Europe. In particular, we comment on these different parties' current attitudes with respect to public and private peering and discuss some of the economic arguments that will ultimately determine whether or not the currently pursued strategy by OIX will succeed in achieving the main OIX-articulated goal: a more level playing field for private and public peering in the US such that the actual demand and supply for the different peering opportunities will be reflected in the cost structure.
Characterizing IPv4 Anycast Adoption and Deployment
Abstract
This paper provides a comprehensive picture of IP-layer anycast adoption in the current Internet. We carry out multiple IPv4 anycast censuses, relying on latency measurements from PlanetLab. Next, we leverage our novel technique for anycast detection, enumeration, and geolocation [17] to quantify anycast adoption in the Internet. Our technique is scalable and, unlike previous efforts that are bound to exploiting DNS, is protocol-agnostic. Our results show that major Internet companies (including Tier-1 ISPs, over-the-top operators, cloud providers and equipment vendors) use anycast: we find that a broad range of TCP services are offered over anycast, the most popular of which include HTTP and HTTPS by anycast CDNs that serve websites from the top-100k Alexa list. Additionally, we complement our characterization of IPv4 anycast with a description of the challenges we faced to collect and analyze large-scale delay measurements, and the lessons learned.
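The latency-based detection idea the abstract alludes to can be illustrated with a speed-of-light argument: if two distant vantage points both measure a low RTT toward the same IP, no single physical server location can explain both measurements, so the address must be anycast. A minimal sketch (the 100 km/ms propagation bound and the probe coordinates below are illustrative assumptions, not details taken from the paper):

```python
from math import radians, sin, cos, asin, sqrt

# Conservative bound: a packet cannot cover more than ~100 km per
# millisecond of RTT (light in fiber travels roughly 200 km/ms one way).
KM_PER_MS = 100.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def looks_anycast(probe_a, probe_b):
    """Each probe is (lat, lon, rtt_ms) toward the SAME target IP.

    If the two vantage points are farther apart than the combined
    distance their RTTs permit, no single server location can satisfy
    both measurements, so the target is served from multiple sites.
    """
    (lat1, lon1, rtt1), (lat2, lon2, rtt2) = probe_a, probe_b
    max_reach_km = (rtt1 / 2 + rtt2 / 2) * KM_PER_MS
    return haversine_km(lat1, lon1, lat2, lon2) > max_reach_km

# Frankfurt and Tokyo (~9,400 km apart) both seeing a 5 ms RTT to one
# IP is physically impossible for a single site -> anycast.
print(looks_anycast((50.1, 8.7, 5.0), (35.7, 139.7, 5.0)))    # True
print(looks_anycast((50.1, 8.7, 200.0), (35.7, 139.7, 200.0)))  # False
```

The paper's actual technique [17] generalizes this pairwise check to enumerate and geolocate the individual anycast sites; this sketch only shows the core inconsistency test.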
Measuring DANE TLSA Deployment
Abstract
The DANE framework uses DNSSEC to provide a source of trust, and with TLSA it can serve as a root of trust for TLS certificates. This serves to complement traditional certificate authentication methods, which is important given the risks inherent in trusting hundreds of organizations, risks already demonstrated with multiple compromises. The TLSA protocol was published in 2012, and this paper presents the first systematic study of its deployment. We studied TLSA usage, developing a tool that actively probes all signed zones in .com and .net for TLSA records. We find that TLSA use is still early: in our latest measurement, of the 485k signed zones, we find only 997 TLSA names. We characterize how it is being used so far, and find that around 7–13% of TLSA records are invalid. We find 33% of TLSA responses are larger than 1500 bytes and will very likely be fragmented.
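The probing described above relies on the well-known owner-name scheme of RFC 6698: a TLSA record for a TLS service is published at a name derived from the service's port, transport protocol, and host name. A minimal sketch of that naming scheme (a real probe would additionally issue DNSSEC-validated queries, e.g. with a DNS library; the host names are examples):

```python
def tlsa_name(host, port=443, proto="tcp"):
    """Build the DNS owner name at which a TLSA record is published
    for the given service (RFC 6698 naming convention)."""
    return f"_{port}._{proto}.{host}"

# The TLSA record guarding HTTPS on www.example.com is queried at:
print(tlsa_name("www.example.com"))        # _443._tcp.www.example.com
# ... and the one guarding SMTP on a mail host at:
print(tlsa_name("mail.example.com", 25))   # _25._tcp.mail.example.com
```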
Akamai Technologies
Abstract
Content Delivery Networks (CDNs) deliver much of the world’s web, video, and application content on the Internet today. A key component of a CDN is the mapping system that uses the DNS protocol to route each client’s request to a “proximal” server that serves the requested content. While traditional mapping systems identify a client using the IP of its name server, we describe our experience in building and rolling out a novel system called end-user mapping that identifies the client directly by using a prefix of the client’s IP address. Using measurements from Akamai’s production network during the roll-out, we show that end-user mapping provides significant performance benefits for clients who use public resolvers, including an eight-fold decrease in mapping distance, a two-fold decrease in RTT and content download time, and a 30% improvement in the time-to-first-byte. We also quantify the scaling challenges in implementing end-user mapping such as the 8-fold increase in DNS queries. Finally, we show that a CDN with a larger number of deployment locations is likely to benefit more from end-user mapping than a CDN with a smaller number of deployments.
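The key difference the abstract describes is the lookup key: traditional mapping keys on the resolver's IP, while end-user mapping keys on a prefix of the client's own address (as a resolver can convey via the EDNS0 Client Subnet mechanism). A minimal sketch, assuming a made-up prefix-to-cluster table and cluster names (a production mapping system computes this table from live measurements):

```python
import ipaddress

# Hypothetical mapping table: client /24 prefix -> nearby edge cluster.
# The prefixes and cluster names are illustrative only.
PREFIX_TO_CLUSTER = {
    ipaddress.ip_network("203.0.113.0/24"): "edge-syd",
    ipaddress.ip_network("198.51.100.0/24"): "edge-fra",
}

def end_user_mapping(client_ip, default="edge-default"):
    """Pick an edge cluster from the client's own /24 prefix rather
    than from the IP of the client's recursive resolver."""
    prefix = ipaddress.ip_network(f"{client_ip}/24", strict=False)
    return PREFIX_TO_CLUSTER.get(prefix, default)

print(end_user_mapping("203.0.113.57"))  # edge-syd
print(end_user_mapping("192.0.2.1"))     # edge-default (unknown prefix)
```

This is why clients behind public resolvers benefit most: their resolver's IP says little about where they actually are, but their own /24 prefix does.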
Assessing Affinity Between Users and CDN Sites
Abstract
Large web services employ CDNs to improve user performance. CDNs improve performance by serving users from nearby Front-End (FE) Clusters. They also spread users across FE Clusters when one is overloaded or unavailable and others have unused capacity. Our paper is the first to study the dynamics of the user-to-FE Cluster mapping for Google and Akamai from a large range of client prefixes. We measure how 32,000 prefixes associate with FE Clusters in their CDNs every 15 minutes for more than a month. We study geographic and latency effects of mapping changes, showing that 50–70% of prefixes switch between FE Clusters that are very distant from each other (more than 1,000 km), and that these shifts sometimes (28–40% of the time) result in large latency shifts (100 ms or more). Most prefixes see large latencies only briefly, but a few (2–5%) see high latency much of the time. We also find that many prefixes are directed to several countries over the course of a month, complicating questions of jurisdiction.
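The style of analysis the abstract describes, tracking one prefix's cluster assignment over periodic samples and flagging switches that come with large latency jumps, can be sketched as follows (the cluster names, RTTs, and 100 ms threshold are illustrative; the paper's own thresholds are stated in its text):

```python
def large_latency_switches(samples, threshold_ms=100.0):
    """Given periodic (cluster_id, rtt_ms) samples for one client
    prefix, return (number of cluster switches, number of switches
    accompanied by an RTT shift of at least threshold_ms)."""
    switches, big = 0, 0
    for (c1, r1), (c2, r2) in zip(samples, samples[1:]):
        if c1 != c2:                      # prefix remapped to another FE cluster
            switches += 1
            if abs(r2 - r1) >= threshold_ms:
                big += 1                  # switch also moved the user far, latency-wise
    return switches, big

# Hypothetical 15-minute samples: a brief remap from a near cluster
# to a distant one and back again.
samples = [("fra", 20), ("fra", 22), ("sin", 180), ("sin", 178), ("fra", 25)]
print(large_latency_switches(samples))    # (2, 2)
```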
Back-Office Web Traffic on The Internet
Abstract
Although traffic between Web servers and Web browsers is readily apparent to many knowledgeable end users, fewer are aware of the extent of server-to-server Web traffic carried over the public Internet. We refer to the former class of traffic as front-office Internet Web traffic and the latter as back-office Internet Web traffic (or just front-office and back-office traffic, for short). Back-office traffic, which may or may not be triggered by end-user activity, is essential for today’s Web as it supports a number of popular but complex Web services including large-scale content delivery, social networking, indexing, searching, advertising, and proxy services. This paper takes a first look at back-office traffic, measuring it from various vantage points, including from within ISPs, IXPs, and CDNs. We describe techniques for identifying back-office traffic based on the roles that this traffic plays in the Web ecosystem. Our measurements show that back-office traffic accounts for a significant fraction not only of core Internet traffic, but also of Web transactions in terms of requests and responses. Finally, we discuss the implications and opportunities that the presence of back-office traffic presents for the evolution of the Internet ecosystem.
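The role-based identification idea above can be reduced to a simple rule: if both endpoints of a Web flow are known server IPs, the flow is server-to-server (back-office); if only one endpoint is a server, it is browser-to-server (front-office). A minimal sketch, assuming a made-up set of known server addresses (the paper builds such sets from its vantage-point data; this only illustrates the classification step):

```python
# Hypothetical set of IPs known to belong to Web servers, e.g. learned
# from crawls or server fingerprinting at a vantage point.
KNOWN_SERVERS = {"192.0.2.10", "192.0.2.20", "198.51.100.5"}

def classify_flow(src_ip, dst_ip, servers=KNOWN_SERVERS):
    """Classify a Web flow by the roles of its endpoints."""
    src_is_srv = src_ip in servers
    dst_is_srv = dst_ip in servers
    if src_is_srv and dst_is_srv:
        return "back-office"    # server-to-server, e.g. a CDN fetching from an origin
    if src_is_srv or dst_is_srv:
        return "front-office"   # an end user's browser talking to a server
    return "unknown"            # neither endpoint recognized as a server

print(classify_flow("192.0.2.10", "198.51.100.5"))  # back-office
print(classify_flow("203.0.113.7", "192.0.2.20"))   # front-office
```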