Exploring EDNS-Client-Subnet Adopters in your Free Time
- In ACM IMC, 2013
Abstract - Cited by 8 (4 self)
The recently proposed DNS extension, EDNS-Client-Subnet (ECS), has been quickly adopted by major Internet companies such as Google to better assign user requests to their servers and improve end-user experience. In this paper, we show that the adoption of ECS also offers unique, but likely unintended, opportunities to uncover details about these companies' operational practices at almost no cost. A key observation is that ECS allows resolving domain names of ECS adopters on behalf of any arbitrary IP/prefix in the Internet. In fact, by utilizing only a single residential vantage point and relying solely on publicly available information, we are able to (i) uncover the global footprint of ECS adopters with very little effort, (ii) infer the DNS response cacheability and end-user clustering of ECS adopters for an arbitrary network in the Internet, and (iii) capture snapshots of user to server mappings as practiced by major ECS adopters. While pointing out such new measurement opportunities, our work is also intended to make current and future ECS adopters aware of which operational information gets exposed when utilizing this recent DNS extension.
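The resolution-on-behalf-of-any-prefix capability boils down to attaching an ECS option (RFC 7871) to an ordinary DNS query. As a rough illustration, the sketch below builds such a query at the wire level using only the Python standard library; the hostname, prefix, and query ID are placeholder values, not taken from the paper.

```python
# Sketch of a DNS A query carrying an EDNS-Client-Subnet option
# (RFC 7871 wire format). All concrete values are illustrative.
import struct


def build_ecs_query(qname: str, prefix: str, prefix_len: int) -> bytes:
    """Build a DNS A query with an ECS option for the given IPv4 prefix."""
    # Header: ID, flags (RD set), 1 question, 0 answers, 0 authority, 1 additional (OPT).
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 1)
    # Question: length-prefixed labels, root byte, QTYPE=A (1), QCLASS=IN (1).
    question = b"".join(
        bytes([len(label)]) + label.encode() for label in qname.split(".")
    ) + b"\x00" + struct.pack("!HH", 1, 1)
    # ECS option data: family=1 (IPv4), source prefix length, scope=0,
    # then only the significant address bytes of the prefix.
    addr_bytes = bytes(int(o) for o in prefix.split("."))[: (prefix_len + 7) // 8]
    ecs_data = struct.pack("!HBB", 1, prefix_len, 0) + addr_bytes
    ecs_option = struct.pack("!HH", 8, len(ecs_data)) + ecs_data  # option code 8 = ECS
    # OPT pseudo-RR: root name, type=41, UDP payload 4096, extended TTL=0.
    opt_rr = b"\x00" + struct.pack("!HHIH", 41, 4096, 0, len(ecs_option)) + ecs_option
    return header + question + opt_rr


query = build_ecs_query("example.com", "203.0.113.0", 24)
```

Sending such a packet over UDP port 53 to an ECS-aware resolver, while varying the embedded prefix, is what lets a single vantage point emulate clients located anywhere in the Internet.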
On the Benefits of Using a Large IXP as an Internet Vantage Point
- In IMC, 2013
Abstract - Cited by 6 (5 self)
In the context of measuring the Internet, a long-standing question has been whether there exist well-localized physical entities in today's network where traffic from a representative cross-section of the constituents of the Internet can be observed at a fine-enough granularity to paint an accurate and informative picture of how these constituents shape and impact much of the structure and evolution of today's Internet and the actual traffic it carries. In this paper, we first answer this question in the affirmative by mining 17 weeks of continuous sFlow data from one of the largest European IXPs. Examining these weekly snapshots, we discover a vantage point with excellent visibility into the Internet, seeing week-in and week-out traffic from all 42K+ routed ASes, almost all 450K+ routed prefixes, from close to 1.5M servers, and around a quarter billion IPs from all around the globe. Second, to show the potential of such vantage points, we analyze the server-related portion of the traffic at this IXP, identify the server IPs and cluster them according to the organizations responsible for delivering the content. In the process, we observe a clear trend among many of the critical Internet players towards network heterogenization; that is, either hosting servers of third-party networks in their own infrastructures or pursuing massive deployments of their own servers in strategically chosen third-party networks. While the latter is a well-known business strategy of companies such as Akamai, Google, and Netflix, we show in this paper the extent of network heterogenization in today's Internet and illustrate how it enriches the traditional, largely traffic-agnostic AS-level view of the Internet.
From Paris to Tokyo: On the Suitability of ping to Measure Latency
Abstract - Cited by 3 (0 self)
Monitoring Internet performance and measuring user quality of experience are drawing increased attention from both research and industry. To match this interest, large-scale measurement infrastructures have been constructed. We believe that this effort must be combined with a critical review and calibration of the tools being used to measure performance. In this paper, we analyze the suitability of ping for delay measurement. By performing several experiments on different source and destination pairs, we found cases in which ping gave very poor estimates of delay and jitter as they might be experienced by an application. In those cases, delay was heavily dependent on the flow identifier, even if only one IP path was used. For accurate delay measurement we propose to replace the ping tool with an adaptation of paris-traceroute which supports delay and jitter estimation, without being biased by per-flow network load balancing.
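The flow-identifier dependence described above comes from per-flow load balancing: routers pick among equal-cost paths by hashing header fields, and successive ping probes can change those fields (for ICMP, the checksum varies with the payload). The toy model below is purely illustrative (the hash function, addresses, and path count are all made up); it contrasts probes whose flow identifier varies with probes that hold it fixed, which is the idea behind paris-traceroute.

```python
# Toy model of per-flow ECMP load balancing. A router-style hash over
# header fields selects one of several equal-cost paths; everything here
# (CRC32 as the hash, the addresses, 4 paths) is an illustrative assumption.
import zlib


def ecmp_next_hop(src: str, dst: str, flow_id: int, n_paths: int) -> int:
    """Deterministically map a (src, dst, flow-id) tuple to a path index."""
    key = f"{src}|{dst}|{flow_id}".encode()
    return zlib.crc32(key) % n_paths


SRC, DST, PATHS = "192.0.2.1", "198.51.100.7", 4

# ping-like probing: the flow identifier changes per probe, so different
# probes may be hashed onto different paths (and thus different delays).
ping_paths = {ecmp_next_hop(SRC, DST, seq, PATHS) for seq in range(16)}

# paris-traceroute-like probing: the fields feeding the hash are held
# constant, so every probe follows the same path.
paris_paths = {ecmp_next_hop(SRC, DST, 0, PATHS) for _ in range(16)}

print(sorted(ping_paths), sorted(paris_paths))
```

When the per-probe identifier varies, delay samples are drawn from a mixture of paths, which is exactly why ping can misestimate delay and jitter under per-flow load balancing.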
Remote Peering: More Peering without Internet Flattening
Abstract - Cited by 2 (0 self)
The trend toward more peering between networks is commonly conflated with the trend of Internet flattening, i.e., reduction in the number of intermediary organizations on Internet paths. Indeed, direct peering interconnections bypass layer-3 transit providers and make the Internet flatter. This paper studies an emerging phenomenon that separates the two trends: we present the first systematic study of remote peering, an interconnection where remote networks peer via a layer-2 provider. Our measurements reveal significant presence of remote peering at IXPs (Internet eXchange Points) worldwide. Based on ground truth traffic, we also show that remote peering has a substantial potential to offload transit traffic. Generalizing the empirical results, we analytically derive conditions for economic viability of remote peering versus transit and direct peering. Because remote-peering services are provided on layer 2, our results challenge the traditional reliance on layer-3 topologies in modeling the Internet economic structure. We also discuss broader implications of remote peering for reliability, security, accountability, and other aspects of Internet research.
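The paper derives its viability conditions analytically; as a much-simplified, purely hypothetical sketch of the kind of comparison involved (the cost structure and prices are invented here, not the paper's model), remote peering pays a fixed recurring fee for an IXP port plus layer-2 transport, while transit is billed per Mbps:

```python
# Hypothetical break-even comparison between remote peering and transit.
# All parameter names and figures are illustrative assumptions.
def remote_peering_viable(offload_mbps: float,
                          transit_usd_per_mbps: float,
                          port_usd: float,
                          transport_usd: float) -> bool:
    """Remote peering wins when the avoided transit bill exceeds its fixed cost."""
    return offload_mbps * transit_usd_per_mbps > port_usd + transport_usd


# With $0.50/Mbps transit and $2,000/month of fixed remote-peering cost
# ($1,200 port + $800 transport), break-even sits at 4,000 Mbps of
# offloadable peering traffic.
print(remote_peering_viable(5000, 0.50, 1200, 800))
```

The qualitative takeaway matches the abstract: the more transit traffic an IXP's peers can offload, the sooner a fixed-cost remote-peering port pays for itself.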
Are We One Hop Away from a Better Internet?
- In IMC ’15
Abstract - Cited by 1 (0 self)
The Internet suffers from well-known performance, reliability, and security problems. However, proposed improvements have seen little adoption due to the difficulties of Internet-wide deployment. We observe that, instead of trying to solve these problems in the general case, it may be possible to make substantial progress by focusing on solutions tailored to the paths between popular content providers and their clients, which carry a large share of Internet traffic. In this paper, we identify one property of these paths that may provide a foothold for deployable solutions: they are often very short. Our measurements show that Google connects directly to networks hosting more than 60% of end-user prefixes, and that other large content providers have similar connectivity. These direct paths open the possibility of solutions that sidestep the headache of Internet-wide deployability, and we sketch approaches one might take to improve performance and security in this setting.
Peering at the Internet’s Frontier: A First Look at ISP Interconnectivity in Africa
Abstract - Cited by 1 (0 self)
In developing regions, the performance to commonly visited destinations is dominated by the network latency, which in turn depends on the connectivity from ISPs in these regions to the locations that host popular sites and content. We take a first look at ISP interconnectivity between various regions in Africa and discover many circuitous Internet paths that should remain local but instead detour through Europe. We investigate the causes of circuitous Internet paths and evaluate the benefits of increased peering and better cache proxy placement for reducing latency to popular Internet sites.
Quo vadis Open-IX?
Abstract
The recently launched initiative by the Open-IX Association (OIX) to establish the European-style Internet eXchange Point (IXP) model in the US suggests an intriguing strategy to tackle a problem that some Internet stakeholders in the US consider to be detrimental to their business; i.e., a lack of diversity in available peering opportunities. We examine in this paper the cast of Internet stakeholders that are bound to play a critical role in determining the fate of this Open-IX effort. These include the large content and cloud providers, CDNs, Tier-1 ISPs, the well-established and some of the newer commercial datacenter and colocation companies, and the largest IXPs in Europe. In particular, we comment on these different parties' current attitudes with respect to public and private peering and discuss some of the economic arguments that will ultimately determine whether or not the currently pursued strategy by OIX will succeed in achieving the main OIX-articulated goal: a more level playing field for private and public peering in the US such that the actual demand and supply for the different peering opportunities will be reflected in the cost structure.
Characterizing IPv4 Anycast Adoption and Deployment
Abstract
This paper provides a comprehensive picture of IP-layer anycast adoption in the current Internet. We carry out multiple IPv4 anycast censuses, relying on latency measurements from PlanetLab. Next, we leverage our novel technique for anycast detection, enumeration, and geolocation [17] to quantify anycast adoption in the Internet. Our technique is scalable and, unlike previous efforts that are bound to exploiting DNS, is protocol-agnostic. Our results show that major Internet companies (including tier-1 ISPs, over-the-top operators, Cloud providers and equipment vendors) use anycast: we find that a broad range of TCP services are offered over anycast, the most popular of which include HTTP and HTTPS by anycast CDNs that serve websites from the top-100k Alexa list. Additionally, we complement our characterization of IPv4 anycast with a description of the challenges we faced to collect and analyze large-scale delay measurements, and the lessons learned.
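A common latency-based detection idea (sketched here under assumed constants; see [17] for the actual technique) is a speed-of-light check: if two vantage points measure RTTs to the same IP that no single server location could satisfy, the IP must be served from at least two sites, i.e. it is anycast.

```python
# Illustrative speed-of-light anycast check. The propagation constant and
# the vantage-point coordinates below are assumptions for the sketch.
import math

C_FIBER_KM_PER_MS = 100.0  # ~2/3 of c: 1 ms of RTT bounds the server within ~100 km


def max_distance_km(rtt_ms: float) -> float:
    """Farthest a server can plausibly be, given a round-trip time."""
    return (rtt_ms / 2.0) * C_FIBER_KM_PER_MS


def great_circle_km(lat1, lon1, lat2, lon2) -> float:
    """Haversine distance between two points on Earth (radius 6371 km)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 6371.0 * 2.0 * math.asin(math.sqrt(a))


def looks_anycast(vp1, rtt1_ms, vp2, rtt2_ms) -> bool:
    """True if no single server location could explain both measured RTTs."""
    d = great_circle_km(*vp1, *vp2)
    return d > max_distance_km(rtt1_ms) + max_distance_km(rtt2_ms)


# Hypothetical probes: 5 ms RTT from both Paris and Tokyo (~9,700 km apart)
# cannot point at one server, so the target is flagged as anycast.
print(looks_anycast((48.85, 2.35), 5.0, (35.68, 139.69), 5.0))
```

The appeal of this geometric test is that it needs nothing protocol-specific, only RTTs, which is consistent with the abstract's point that the technique is protocol-agnostic.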
Satellite: Joint Analysis of CDNs and Network-Level Interference
Abstract
Satellite is a methodology, tool chain, and data-set for understanding global trends in website deployment and accessibility using only a single or small number of standard measurement nodes. Satellite collects information on DNS resolution and resource availability around the Internet by probing the IPv4 address space. These measurements are valuable in their breadth and sustainability: they do not require the use of a distributed measurement infrastructure, and therefore can be run at low cost and by multiple organizations. We demonstrate a clustering procedure which accurately captures the IP footprints of CDN deployments, and then show how this technique allows for more accurate determination of correct and incorrect IP resolutions. Satellite has multiple applications. It reveals the prevalence of CDNs by showing that 20% of the top 10,000 Alexa domains are hosted on shared infrastructure, and that CloudFlare alone accounts for nearly 10% of these sites. The same data-set detects 4,819 instances of ISP-level DNS hijacking in 117 countries.
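The clustering step can be pictured as grouping domains by the overlap of their resolved IP sets. The sketch below (domains, IPs, and the 0.5 threshold are invented for illustration, not Satellite's actual parameters) uses greedy single-link clustering on Jaccard similarity:

```python
# Toy footprint clustering: domains whose names resolve to heavily
# overlapping IP sets are grouped as sharing hosting infrastructure.
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two non-empty IP sets."""
    return len(a & b) / len(a | b)


def cluster_by_footprint(footprints: dict, threshold: float = 0.5) -> list:
    """Greedy single-link clustering of domains by resolved-IP overlap."""
    clusters = []
    for domain, ips in footprints.items():
        for cluster in clusters:
            # Join the first cluster containing a sufficiently similar domain.
            if any(jaccard(ips, footprints[d]) >= threshold for d in cluster):
                cluster.append(domain)
                break
        else:
            clusters.append([domain])
    return clusters


# Hypothetical resolutions: a and b share most of their IPs (one CDN),
# c resolves elsewhere (a different host).
footprints = {
    "a.example": {"198.51.100.1", "198.51.100.2"},
    "b.example": {"198.51.100.1", "198.51.100.2", "198.51.100.3"},
    "c.example": {"203.0.113.9"},
}
print(cluster_by_footprint(footprints))
```

Once domains are grouped this way, an IP returned by a suspect resolver can be checked against its domain's cluster footprint, which is how shared-infrastructure knowledge helps separate correct resolutions from hijacked ones.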
EONA: Experience-Oriented Network Architecture
Abstract
There is a growing recognition among researchers, industry practitioners, and service providers of the need to optimize user-perceived application experience. Network infrastructure owners (i.e., ISPs) have traditionally been left out of this equation, leading to repeated tussles between content providers and ISPs. In parallel, application providers have to deploy complex workarounds that reverse engineer the network's impact on application-level metrics. In this work, we make the case for EONA, a new network paradigm where application providers and network providers can collaborate meaningfully to improve application experience. We observe a confluence of technology trends that are enablers for EONA: the ability to collect large volumes of client-side application measurements, the emergence of novel "big data" platforms for real-time analytics, and new control plane capabilities for ISPs (e.g., SDN, IXPs, NFV). We highlight the challenges and opportunities in designing suitable EONA interfaces between infrastructure and application providers and EONA-enhanced control loops that leverage these interfaces to optimize user experience.