Hypertext Transfer Protocol -- HTTP/1.1. http://www.w3.org/Protocols/rfc2616/rfc2616.html

by R Fielding

Results 1 - 10 of 908

Linked Data -- The story so far

by Christian Bizer, et al.
"... The term Linked Data refers to a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the last three years, leading to the creation of a global data space containing billions of assertion ..."
Abstract - Cited by 739 (15 self) - Add to MetaCart
The term Linked Data refers to a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the last three years, leading to the creation of a global data space containing billions of assertions- the Web of Data. In this article we present the concept and technical principles of Linked Data, and situate these within the broader context of related technological developments. We describe progress to date in publishing Linked Data on the Web, review applications that have been developed to exploit the Web of Data, and map out a research agenda for the Linked Data community as it moves forward.
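
As a concrete illustration of the publishing practices the abstract describes: a Linked Data client dereferences an HTTP URI and uses content negotiation to request a structured representation instead of an HTML page. A minimal sketch using only the Python standard library (the DBpedia URI is just a well-known example, not taken from the article):

```python
# Minimal sketch of Linked Data dereferencing: ask an HTTP URI for a
# machine-readable representation via content negotiation.
import urllib.request

# Illustrative resource URI; any Linked Data URI is consumed the same way.
uri = "http://dbpedia.org/resource/Berlin"

req = urllib.request.Request(uri, headers={
    # Prefer RDF over the human-readable HTML representation.
    "Accept": "application/rdf+xml",
})

with urllib.request.urlopen(req) as resp:
    # Servers typically 303-redirect to a data document; urllib follows
    # the redirect automatically.
    print(resp.headers.get("Content-Type"))
    rdf = resp.read()   # RDF triples linking this resource to other URIs
```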

Principled design of the modern web architecture

by Roy T. Fielding, Richard N. Taylor - ACM Trans. Internet Techn
"... The World Wide Web has succeeded in large part because its software architecture has been designed to meet the needs of an Internet-scale distributed hypermedia system. The modern Web architecture emphasizes scalability of component interactions, generality of interfaces, independent deployment of c ..."
Abstract - Cited by 531 (14 self) - Add to MetaCart
The World Wide Web has succeeded in large part because its software architecture has been designed to meet the needs of an Internet-scale distributed hypermedia system. The modern Web architecture emphasizes scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components to reduce interaction latency, enforce security, and encapsulate legacy systems. In this paper, we introduce the Representational State Transfer (REST) architectural style, developed as an abstract model of the Web architecture to guide our redesign and definition of the Hypertext Transfer Protocol and Uniform Resource Identifiers. We describe the software engineering principles guiding REST and the interaction constraints chosen to retain those principles, contrasting them to the constraints of other architectural styles. We then compare the abstract model to the currently deployed Web architecture in order to elicit mismatches between the existing protocols and the applications they are intended to support.
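
Two of the constraints named here, statelessness and a uniform interface, are easy to make concrete. A toy sketch (the endpoint and token are hypothetical, not from the paper): every request carries all the context the server needs, and the same generic methods apply to any resource identifier.

```python
# Sketch of two REST constraints: stateless, self-descriptive requests
# and a uniform interface (generic methods on any resource URI).
import urllib.request

API = "https://api.example.com"   # hypothetical origin server
TOKEN = "abc123"                  # hypothetical credential

def request(method, uri, body=None):
    # No server-side session: authentication and media-type context
    # travel inside every message, so any intermediary can route or
    # cache it independently.
    req = urllib.request.Request(uri, data=body, method=method, headers={
        "Authorization": "Bearer " + TOKEN,
        "Accept": "application/json",
    })
    return urllib.request.urlopen(req)

# The same uniform interface works against any resource:
orders = request("GET", API + "/orders")
order  = request("GET", API + "/orders/17")
```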

The design and implementation of an intentional naming system

by William Adjie-Winoto, Elliot Schwartz, Hari Balakrishnan, Jeremy Lilley - 17th ACM Symposium on Operating Systems Principles (SOSP '99), published as Operating Systems Review, 34(5):186--201, Dec. 1999
"... This paper presents the design and implementation of the Intentional Naming System (INS), a resource discovery and service location system for dynamic and mobile networks of devices and computers. Such environments require a naming system that is (i) expressive, to describe and make requests based o ..."
Abstract - Cited by 518 (14 self) - Add to MetaCart
This paper presents the design and implementation of the Intentional Naming System (INS), a resource discovery and service location system for dynamic and mobile networks of devices and computers. Such environments require a naming system that is (i) expressive, to describe and make requests based on specific properties of services, (ii) responsive, to track changes due to mobility and performance, (iii) robust, to handle failures, and (iv) easily configurable. INS uses a simple language based on attributes and values for its names. Applications use the language to describe what they are looking for (i.e., their intent), not where to find things (i.e., not hostnames). INS implements a late binding mechanism that integrates name resolution and message routing, enabling clients to continue communicating with end-nodes even if the name-to-address mappings change while a session is in progress. INS resolvers self-configure to form an application-level overlay network, which they use to discover new services, perform late binding, and maintain weak consistency of names using soft-state name exchanges and updates. We analyze the performance of the INS algorithms and protocols, present measurements of a Java-based implementation, and describe three applications we have implemented that demonstrate the feasibility and utility of INS.

Citation Context

...ing information about a name-specifier: • The IP addresses for this name-specifier and a set of [port-number, transport-type] pairs for each IP address. The port number and transport type (e.g., HTTP [2], RTP [38], TCP [34], etc.) are returned to the client to allow it to implement early binding. • An application-advertised metric for intentional anycast and early binding that reflects any property t...
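
A much-simplified sketch of the naming idea: services advertise attribute-value names, and a resolver returns the addresses of every advertisement that satisfies a client's intent. (Real INS name-specifiers are hierarchical trees with an optimized lookup structure; this flat version is only illustrative.)

```python
# Simplified sketch of intentional naming: clients describe what they
# want (attributes), not where it is (hostnames). Advertisements are
# soft state that a real resolver would expire and refresh.

advertisements = [
    ({"service": "camera", "building": "ne43", "resolution": "high"},
     ("18.31.0.10", 5000)),
    ({"service": "printer", "building": "ne43", "color": "yes"},
     ("18.31.0.22", 9100)),
]

def resolve(query):
    """Return addresses of all names whose attributes satisfy the query."""
    return [addr for name, addr in advertisements
            if all(name.get(k) == v for k, v in query.items())]

# Intentional request: "a camera in building ne43", no hostname involved.
print(resolve({"service": "camera", "building": "ne43"}))
# -> [('18.31.0.10', 5000)]
```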

Proxy Prefix Caching for Multimedia Streams

by Subhabrata Sen, Jennifer Rexford, Don Towsley, 1999
"... Proxies are emerging as an important way to reduce user-perceived latency and network resource requirements in the Internet. While relaying traffic between servers and clients, a proxy can cache resources in the hope of satisfying future client requests directly at the proxy. However, existing techn ..."
Abstract - Cited by 288 (17 self) - Add to MetaCart
Proxies are emerging as an important way to reduce user-perceived latency and network resource requirements in the Internet. While relaying traffic between servers and clients, a proxy can cache resources in the hope of satisfying future client requests directly at the proxy. However, existing techniques for caching text and images are not appropriate for the rapidly growing number of continuous media streams. In addition, high latency and loss rates in the Internet make it difficult to stream audio and video without introducing a large playback delay. To address these problems, we propose that, instead of caching entire audio or video streams (which may be quite large), the proxy should store a prefix consisting of the initial frames of each clip. Upon receiving a request for the stream, the proxy immediately initiates transmission to the client, while simultaneously requesting the remaining frames from the server. In addition to hiding the latency between the server and the proxy, st...
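
The serving logic the abstract outlines can be sketched in a few lines: play out the cached prefix immediately, and fetch the suffix from the origin server starting where the prefix ends. (Function names and the frame granularity below are invented; a real proxy would fetch the suffix concurrently with prefix playback.)

```python
# Sketch of proxy prefix caching: serve cached initial frames at once,
# then fetch the remainder of the clip from the origin server.

PREFIX_FRAMES = 100        # initial frames of each clip kept at the proxy
prefix_cache = {}          # clip id -> list of cached prefix frames

def serve(clip_id, fetch_from_server, send_to_client):
    frames = prefix_cache.get(clip_id)
    if frames is None:
        # Cold start: stream everything from the server and retain the
        # first PREFIX_FRAMES frames for the next request.
        frames = []
        for i, frame in enumerate(fetch_from_server(clip_id, start=0)):
            if i < PREFIX_FRAMES:
                frames.append(frame)
            send_to_client(frame)
        prefix_cache[clip_id] = frames
        return
    # Warm start: playback begins immediately from the cached prefix,
    # hiding server latency; the suffix request would overlap with this
    # in a real proxy (sequential here for brevity).
    for frame in frames:
        send_to_client(frame)
    for frame in fetch_from_server(clip_id, start=len(frames)):
        send_to_client(frame)
```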

An Empirical Model of HTTP Network Traffic

by Bruce A. Mah, 1997
"... The workload of the global Internet is dominated by the Hypertext Transfer Protocol (HTTP), an application protocol used by World Wide Web clients and servers. Simulation studies of this environment will require a model of the traffic patterns of the World Wide Web, in order to investigate the perfo ..."
Abstract - Cited by 271 (1 self) - Add to MetaCart
The workload of the global Internet is dominated by the Hypertext Transfer Protocol (HTTP), an application protocol used by World Wide Web clients and servers. Simulation studies of this environment will require a model of the traffic patterns of the World Wide Web, in order to investigate the performance aspects of this increasingly popular application. We have developed an empirical model of network traffic produced by HTTP. Instead of relying on server or client logs, our approach is based on gathering packet traces of HTTP network conversations. Through traffic analysis, we have determined statistics and distributions for higher-level quantities such as the size of HTTP items retrieved, the number of items per "Web page", think time, and user browsing behavior. These quantities form a model that can then be used by simulations to mimic World Wide Web network applications in wide-area IP internetworks. Keywords: World Wide Web, HTTP, traffic model, traffic measurements, workload, Interne...
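
A sketch of how a simulator might consume such a model: draw the higher-level quantities (items per page, item size, think time) from fitted distributions and emit a synthetic request stream. The distribution families and parameters below are invented placeholders, not the paper's measured values.

```python
# Sketch of an empirical-model-driven HTTP workload generator.
# Distributions and parameters are placeholders, not fitted values.
import random

def items_per_page():
    return 1 + int(random.expovariate(1 / 3.0))   # ~4 items on average

def item_size_bytes():
    return int(random.lognormvariate(8.0, 1.5))   # heavy-tailed sizes

def think_time_s():
    return random.paretovariate(1.5)              # heavy-tailed idle time

def browse(pages=3):
    t = 0.0
    for page in range(pages):
        for item in range(items_per_page()):
            print(f"t={t:7.2f}s  GET page{page}/item{item}  "
                  f"{item_size_bytes()} bytes")
        t += think_time_s()                       # user reads the page

browse()
```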

Low-Cost Traffic Analysis Of Tor

by Steven J. Murdoch, George Danezis - In Proceedings of the 2005 IEEE Symposium on Security and Privacy. IEEE CS , 2005
"... Tor is the second generation Onion Router, supporting the anonymous transport of TCP streams over the Internet. Its low latency makes it very suitable for common tasks, such as web browsing, but insecure against trafficanalysis attacks by a global passive adversary. We present new traffic-analysis t ..."
Abstract - Cited by 231 (8 self) - Add to MetaCart
Tor is the second-generation Onion Router, supporting the anonymous transport of TCP streams over the Internet. Its low latency makes it very suitable for common tasks, such as web browsing, but insecure against traffic-analysis attacks by a global passive adversary. We present new traffic-analysis techniques that allow adversaries with only a partial view of the network to infer which nodes are being used to relay the anonymous streams and therefore greatly reduce the anonymity provided by Tor. Furthermore, we show that otherwise unrelated streams can be linked back to the same initiator. Our attack is feasible for the adversary anticipated by the Tor designers. Our theoretical attacks are backed up by experiments performed on the deployed, albeit experimental, Tor network. Our techniques should also be applicable to any low-latency anonymous network. These attacks highlight the relationship between the field of traffic analysis and more traditional computer security issues, such as covert channel analysis. Our research also highlights that the inability to directly observe network links does not prevent an attacker from performing traffic analysis: the adversary can use the anonymising network as an oracle to infer the traffic load on remote nodes in order to perform traffic analysis.

Citation Context

...s by deceiving anonymous users and making them access an attacker controlled server. This way, arbitrary data streams can be sent back and forth, and get detected. Where Tor is used to access an HTTP [19] (web) service, the attacks can be mounted much more simply, by including traffic-analysis bugs within the page, in the same way as web bugs [3, 12] are embedded today. These initiate a request for an...
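
At its core the attack reduces to a correlation test: does latency measured through a candidate relay track a traffic pattern the attacker injects into the victim's stream? A toy sketch with synthetic numbers (the real attack probes live Tor relays through the network itself):

```python
# Toy sketch of the correlation step in the traffic-analysis attack.
# Synthetic data stands in for live probe measurements of Tor relays.
import random

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

injected = [random.choice((0, 1)) for _ in range(200)]  # on/off pattern

# Relay actually carrying the stream: probe latency rises with the load.
on_path  = [0.05 + 0.04 * b + random.gauss(0, 0.01) for b in injected]
# Unrelated relay: probe latency is independent of the injected pattern.
off_path = [0.05 + random.gauss(0, 0.01) for _ in injected]

print("on-path relay :", round(pearson(injected, on_path), 2))   # high
print("off-path relay:", round(pearson(injected, off_path), 2))  # near 0
```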

Exploring the Bounds of Web Latency Reduction from Caching and Prefetching

by Thomas M. Kroeger, Darrell D. E. Long, Jeffrey C. Mogul, 1997
"... Prefetching and caching are techniques commonly used in I/O systems to reduce latency. Many researchers have advocated the use of caching and prefetching to reduce latency in the Web. We derive several bounds on the performance improvements seen from these techniques, and then use traces of Web prox ..."
Abstract - Cited by 226 (7 self) - Add to MetaCart
Prefetching and caching are techniques commonly used in I/O systems to reduce latency. Many researchers have advocated the use of caching and prefetching to reduce latency in the Web. We derive several bounds on the performance improvements seen from these techniques, and then use traces of Web proxy activity taken at Digital Equipment Corporation to quantify these bounds. We found that for these traces, local proxy caching could reduce latency by at best 26%, prefetching could reduce latency by at best 57%, and a combined caching and prefetching proxy could provide at best a 60% latency reduction. Furthermore, we found that how far in advance a prefetching algorithm was able to prefetch an object was a significant factor in its ability to reduce latency. We note that the latency reduction from caching is significantly limited by the rapid changes of objects in the Web. We conclude that for the workload studied caching offers moderate assistance in reducing latency. Prefetching can of...
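
The caching bound comes from replaying a trace against an idealized cache: infinite capacity, zero-cost hits, and a changed object always counted as a miss. A sketch of that bound computation (the trace tuple format is invented for illustration):

```python
# Sketch of an upper bound on latency reduction from caching: replay a
# proxy trace against an ideal cache (infinite size, free hits, and a
# modified object always misses).
def caching_bound(trace):
    """trace: iterable of (url, version, latency_seconds) tuples."""
    cached = {}            # url -> version currently held by the proxy
    total = saved = 0.0
    for url, version, latency in trace:
        total += latency
        if cached.get(url) == version:
            saved += latency           # ideal hit: latency fully avoided
        cached[url] = version          # fetch or refresh the local copy
    return saved / total

trace = [
    ("/a.html", "v1", 0.4), ("/b.gif", "v1", 0.2),
    ("/a.html", "v1", 0.4),            # unchanged object: ideal hit
    ("/a.html", "v2", 0.5),            # object changed: must re-fetch
]
print(f"best-case latency reduction: {caching_bound(trace):.0%}")
```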

SIP: session initiation protocol

by Jonathan Rosenberg, Henning Schulzrinne, Gonzalo Camarillo, Alan Johnston, Jon Peterson, Robert Sparks, Mark Handley - IETF RFC 3261, 2002
"... This document is an Internet-Draft and is in full conformance with all provisions of Section 10 of RFC2026. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet ..."
Abstract - Cited by 174 (18 self) - Add to MetaCart
This document is an Internet-Draft and is in full conformance with all provisions of Section 10 of RFC2026. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress". The list of current Internet-Drafts can be accessed at

Citation Context

...of terms to refer to the roles played by participants in SIP communications. The definitions of client, server and proxy are similar to those used by the Hypertext Transport Protocol (HTTP) (RFC 2616 [8]). The terms and generic syntax of URI and URL are defined in RFC 2396 [9]. The following terms have special significance for SIP. Back-to-Back user agent: A back-to-back user agent (B2BUA) is a logic...

Microreboot - A Technique for Cheap Recovery

by George Candea, Shinichi Kawamoto, Yuichi Fujiki, Greg Friedman, Armando Fox, 2004
"... A significant fraction of software failures in large-scale Internet systems are cured by rebooting, even when the exact failure causes are unknown. However, rebooting can be expensive, causing nontrivial service disruption or downtime even when clusters and failover are employed. In this work we sep ..."
Abstract - Cited by 171 (2 self) - Add to MetaCart
A significant fraction of software failures in large-scale Internet systems are cured by rebooting, even when the exact failure causes are unknown. However, rebooting can be expensive, causing nontrivial service disruption or downtime even when clusters and failover are employed. In this work we separate process recovery from data recovery to enable microrebooting -- a fine-grain technique for surgically recovering faulty application components, without disturbing the rest of the application.
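
The separation the abstract describes (process recovery apart from data recovery) is what makes the technique cheap: durable state lives outside the component, so destroying and re-creating one component loses nothing. A minimal sketch with invented component and store names:

```python
# Sketch of microrebooting: keep important state in an external store so
# a faulty component can be re-created surgically, without disturbing
# the rest of the application or losing data.

session_store = {}          # external state store; survives microreboots

class CartComponent:
    """Hypothetical application component; only soft state lives inside."""
    def __init__(self):
        self.scratch = {}   # safe to lose on a microreboot

    def add_item(self, user, item):
        session_store.setdefault(user, []).append(item)

components = {"cart": CartComponent()}

def microreboot(name):
    # Fine-grain recovery: re-initialize just this component while the
    # rest of the application keeps running; session_store is untouched.
    components[name] = type(components[name])()

components["cart"].add_item("alice", "book")
microreboot("cart")                        # cheap, localized recovery
components["cart"].add_item("alice", "pen")
print(session_store)                       # {'alice': ['book', 'pen']}
```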

The Cache Location Problem

by P. Krishnan, Danny Raz, Yuval Shavitt - IEEE/ACM Transactions on Networking
"... This paper studies the problem of where to place network caches. Emphasis is given to caches that are transparent to the clients since they are easier to manage and they require no cooperation from the clients. Our goal is to minimize the overall flow or the average delay by placing a given number o ..."
Abstract - Cited by 165 (6 self) - Add to MetaCart
This paper studies the problem of where to place network caches. Emphasis is given to caches that are transparent to the clients since they are easier to manage and they require no cooperation from the clients. Our goal is to minimize the overall flow or the average delay by placing a given number of caches in the network.

Citation Context

... client. Otherwise, the cache contacts the web server, refreshes its local copy, and sends the page to the client. Current protocols allow caches to validate the freshness of locally stored data [2], [16]. The performance of a caching scheme is a function of the network topology, the request pattern, the assignment of caches to requests, the cache sizes, and the cache replacement algorithms used. Our ...
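
One standard heuristic for this kind of placement problem is greedy selection: repeatedly add the cache location that most reduces total flow (requests weighted by distance). The paper develops exact and approximate algorithms; the greedy below is only an illustrative sketch on a toy line topology.

```python
# Toy sketch of transparent cache placement: choose k cache locations
# greedily to minimize total flow = requests x hop distance to the
# nearest copy (origin server or a cache).
from itertools import product

nodes = ["server", "a", "b", "c"]          # line network: server-a-b-c
dist = {(u, v): abs(nodes.index(u) - nodes.index(v))
        for u, v in product(nodes, repeat=2)}

demand = {"a": 10, "b": 40, "c": 25}       # requests per client node

def total_flow(caches):
    sites = {"server"} | caches            # each request is served by
    return sum(r * min(dist[(c, s)] for s in sites)   # the nearest copy
               for c, r in demand.items())

placed = set()
for _ in range(2):                         # place k = 2 caches
    candidates = [n for n in nodes if n != "server" and n not in placed]
    placed.add(min(candidates, key=lambda n: total_flow(placed | {n})))

print(sorted(placed), "flow:", total_flow(placed))   # ['b', 'c'] flow: 10
```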
