Results 11 - 20 of 1,416
On the Scale and Performance of Cooperative Web Proxy Caching
ACM Symposium on Operating Systems Principles, 1999
"... While algorithms for cooperative proxy caching have been widely studied, little is understood about cooperative-caching performance in the large-scale World Wide Web environment. This paper uses both trace-based analysis and analytic modelling to show the potential advantages and drawbacks of inter- ..."
Abstract
-
Cited by 313 (15 self)
While algorithms for cooperative proxy caching have been widely studied, little is understood about cooperative-caching performance in the large-scale World Wide Web environment. This paper uses both trace-based analysis and analytic modelling to show the potential advantages and drawbacks of inter-proxy cooperation. With our traces, we evaluate quantitatively the performance-improvement potential of cooperation between 200 small-organization proxies within a university environment, and between two large-organization proxies handling 23,000 and 60,000 clients, respectively. With our model, we extend beyond these populations to project cooperative caching behavior in regions with millions of clients. Overall, we demonstrate that cooperative caching has performance benefits only within limited population bounds. We also use our model to examine the implications of future trends in Web-access behavior and traffic.
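To make the population-scaling argument concrete, the toy simulation below (a minimal sketch, not the paper's model: it assumes Zipf-like document popularity and an ideal, infinite shared cache with no expiry or updates) shows why hit-rate gains from pooling more clients flatten out quickly.

```python
import random
from itertools import accumulate

def simulate_hit_rate(num_clients, reqs_per_client=200, num_docs=100_000,
                      zipf_alpha=0.8, seed=0):
    """Estimate the ideal shared-cache hit rate for a client population whose
    requests follow a Zipf-like popularity law (toy assumptions only)."""
    rng = random.Random(seed)
    # Popularity of document i proportional to 1 / i^alpha (Zipf-like).
    cum_weights = list(accumulate(1.0 / (i ** zipf_alpha)
                                  for i in range(1, num_docs + 1)))
    requests = rng.choices(range(num_docs), cum_weights=cum_weights,
                           k=num_clients * reqs_per_client)
    cached, hits = set(), 0
    for doc in requests:
        if doc in cached:
            hits += 1          # already fetched once by some client
        else:
            cached.add(doc)    # first request for a document is always a miss
    return hits / len(requests)

for n in (1, 10, 100, 1000):
    print(f"{n:5d} clients -> hit rate ~ {simulate_hit_rate(n):.2f}")
```

Because popular documents are requested early, a modest population already captures most of the achievable hit rate; adding clients mainly adds requests for the long tail, which is the qualitative effect the paper quantifies with real traces and a full analytic model.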
Deriving Traffic Demands for Operational IP Networks: Methodology and Experience
IEEE/ACM Transactions on Networking, 2001
"... Engineering a large IP backbone network without an accurate, network-wide view of the traffic demands is challenging. Shifts in user behavior, changes in routing policies, and failures of network elements can result in significant (and sudden) fluctuations in load. In this paper, we present a model ..."
Abstract
-
Cited by 297 (39 self)
Engineering a large IP backbone network without an accurate, network-wide view of the traffic demands is challenging. Shifts in user behavior, changes in routing policies, and failures of network elements can result in significant (and sudden) fluctuations in load. In this paper, we present a model of traffic demands to support traffic engineering and performance debugging of large Internet Service Provider networks. By defining a traffic demand as a volume of load originating from an ingress link and destined to a set of egress links, we can capture and predict how routing affects the traffic traveling between domains. To infer the traffic demands, we propose a measurement methodology that combines flow-level measurements collected at all ingress links with reachability information about all egress links. We discuss how to cope with situations where practical considerations limit the amount and quality of the necessary data. Specifically, we show how to infer interdomain traffic demands using measurements collected at a smaller number of edge links -- the peering links connecting to neighboring providers. We report on our experiences in deriving the traffic demands in the AT&T IP Backbone, by collecting, validating, and joining very large and diverse sets of usage, configuration, and routing data over extended periods of time. The paper concludes with a preliminary analysis of the observed dynamics of the traffic demands and a discussion of the practical implications for traffic engineering.
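The aggregation step described above can be sketched in a few lines; the record layout (ingress link, destination prefix, bytes) and the prefix-to-egress-set table below are hypothetical stand-ins for the paper's flow, configuration, and routing data.

```python
from collections import defaultdict

# Hypothetical inputs: flow measurements taken at ingress links, and
# reachability information derived from routing and configuration data.
flow_records = [
    # (ingress_link, destination_prefix, bytes)
    ("ingress-A", "10.1.0.0/16", 4_000_000),
    ("ingress-A", "192.0.2.0/24", 1_500_000),
    ("ingress-B", "10.1.0.0/16", 2_250_000),
]
egress_sets = {
    # destination_prefix -> set of egress links that can reach it
    "10.1.0.0/16": frozenset({"egress-1", "egress-2"}),
    "192.0.2.0/24": frozenset({"egress-3"}),
}

def derive_demands(flows, reachability):
    """Aggregate flow volumes into demands keyed by
    (ingress link, set of feasible egress links)."""
    demands = defaultdict(int)
    for ingress, prefix, nbytes in flows:
        egress = reachability.get(prefix)
        if egress is None:
            continue  # no reachability info; a real system must handle this
        demands[(ingress, egress)] += nbytes
    return dict(demands)

for (ingress, egress), volume in derive_demands(flow_records, egress_sets).items():
    print(f"{ingress} -> {sorted(egress)}: {volume} bytes")
```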
Flash: An efficient and portable Web server
1999
"... This paper presents the design of a new Web server architecture called the asymmetric multiprocess event-driven (AMPED) architecture, and evaluates the performance of an implementation of this architecture, the Flash Web server. The Flash Web server combines the high performance of single-process ev ..."
Abstract
-
Cited by 296 (27 self)
This paper presents the design of a new Web server architecture called the asymmetric multiprocess event-driven (AMPED) architecture, and evaluates the performance of an implementation of this architecture, the Flash Web server. The Flash Web server combines the high performance of single-process event-driven servers on cached workloads with the performance of multi-process and multithreaded servers on disk-bound workloads. Furthermore, the Flash Web server is easily portable since it achieves these results using facilities available in all modern operating systems. The performance of different Web server architectures is evaluated in the context of a single implementation in order to quantify the impact of a server's concurrency architecture on its performance. Furthermore, the performance of Flash is compared with two widely-used Web servers, Apache and Zeus. Results indicate that Flash can match or exceed the performance of existing Web servers by up to 50% across a wide range of real workloads. We also present results that show the contribution of various optimizations embedded in Flash.
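A schematic of the AMPED idea (a non-network toy, not the Flash code): one event-driven loop serves cache hits inline, and only blocking disk work is handed to a small pool of helper processes so the main loop never stalls.

```python
import multiprocessing as mp
import time

def disk_read(path):
    """Helper-process work: the only place blocking I/O is allowed."""
    time.sleep(0.05)                      # stand-in for a real disk read
    return path, f"<contents of {path}>"

def run_server(requests):
    cache = {"/index.html": "<cached index>"}   # warm in-memory cache entry
    pending = []                                # outstanding helper results
    with mp.Pool(processes=2) as helpers:       # asymmetric: few helpers, one main loop
        for path in requests:
            if path in cache:
                print("hit :", path)            # served inline, never blocks
            else:
                pending.append(helpers.apply_async(disk_read, (path,)))
        # The real server interleaves this with socket events; here we just drain.
        for result in pending:
            path, data = result.get()
            cache[path] = data
            print("miss:", path, "-> fetched by helper and cached")

if __name__ == "__main__":
    run_server(["/index.html", "/logo.png", "/news.html"])
```

In the real server the main loop multiplexes client sockets with select/poll and collects helper completions over IPC; the asymmetry is that helper processes exist only for operations that might block.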
Dynamically Forecasting Network Performance Using the Network Weather Service
1998
"... this paper, we outline its design and detail the predictive performance of the forecasts it generates. While the forecasting methods are general, we focus on their ability to predict the TCP/IP end-to-end throughput and latency that is attainable by an application using systems located at different ..."
Abstract
-
Cited by 291 (37 self)
In this paper, we outline its design and detail the predictive performance of the forecasts it generates. While the forecasting methods are general, we focus on their ability to predict the TCP/IP end-to-end throughput and latency that is attainable by an application using systems located at different sites. Such network forecasts are needed both to support scheduling [5], and by the metacomputing software infrastructure to develop quality-of-service guarantees [10, 17].
Keywords: scheduling, metacomputing, quality-of-service, statistical forecasting, network performance monitoring
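A minimal sketch of the kind of adaptive forecasting involved (illustrative only: the actual Network Weather Service maintains a larger family of predictors and its own measurement and error-tracking machinery): several cheap predictors run in parallel, and each forecast comes from whichever predictor has accumulated the lowest error so far.

```python
from collections import deque

class AdaptiveForecaster:
    """At each step, forecast with the predictor that has the lowest mean error."""

    def __init__(self, window=10):
        self.history = deque(maxlen=window)
        self.predictors = {
            "last_value":   lambda h: h[-1],
            "running_mean": lambda h: sum(h) / len(h),
            "median":       lambda h: sorted(h)[len(h) // 2],
        }
        self.abs_error = {name: 0.0 for name in self.predictors}
        self.count = 0

    def update(self, measurement):
        """Record a new measurement and score each predictor's implied forecast."""
        if self.history:
            for name, predict in self.predictors.items():
                self.abs_error[name] += abs(predict(self.history) - measurement)
            self.count += 1
        self.history.append(measurement)

    def forecast(self):
        """Return (best predictor name, its forecast for the next value)."""
        best = min(self.predictors,
                   key=lambda name: self.abs_error[name] / max(self.count, 1))
        return best, self.predictors[best](self.history)

f = AdaptiveForecaster()
for throughput in [5.1, 5.3, 4.9, 9.8, 5.0, 5.2]:   # e.g. Mb/s probe results
    f.update(throughput)
print(f.forecast())
```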
Real Life, Real Users, and Real Needs: A Study and Analysis of User Queries on the Web
2000
"... We analyzed transaction logs containing 51,473 queries posed by 18,113 users of Excite, a major Internet search service. We provide data on: (i) sessions- changes in queries during a session, number of pages viewed, and use of relevance feedback, (ii) queries- the number of search terms, and the u ..."
Abstract
-
Cited by 286 (25 self)
We analyzed transaction logs containing 51,473 queries posed by 18,113 users of Excite, a major Internet search service. We provide data on: (i) sessions -- changes in queries during a session, number of pages viewed, and use of relevance feedback; (ii) queries -- the number of search terms, and the use of logic and modifiers; and (iii) terms -- their rank/frequency distribution and the most highly used search terms. We then shift the focus of analysis from the query to the user to gain insight into the characteristics of the Web user. With these characteristics as a basis, we then conducted a failure analysis, identifying trends among user mistakes. We conclude with a summary of findings and a discussion of the implications of these findings.
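The mechanics of such a transaction-log analysis are easy to reproduce; the sketch below uses a made-up log layout (user id and query text separated by a tab, not Excite's actual fields) to compute terms per query and the term rank/frequency distribution.

```python
from collections import Counter

# Hypothetical log lines: "user_id <TAB> query string"
log_lines = [
    "u1\tcheap flights to boston",
    "u1\tcheap flights boston AND hotels",
    "u2\tjava applet tutorial",
    "u3\tcheap hotels",
]

queries = [line.split("\t", 1)[1] for line in log_lines]
term_counts = Counter(term.lower() for q in queries for term in q.split())

terms_per_query = sum(len(q.split()) for q in queries) / len(queries)
print(f"queries: {len(queries)}, mean terms/query: {terms_per_query:.2f}")

# Rank/frequency distribution (rank 1 = most frequent term).
for rank, (term, freq) in enumerate(term_counts.most_common(5), start=1):
    print(f"rank {rank}: {term!r} x{freq}")
```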
An Empirical Model of HTTP Network Traffic
1997
"... The workload of the global Internet is dominated by the Hypertext Transfer Protocol (HTTP), an application protocol used by World Wide Web clients and servers. Simulation studies of this environment will require a model of the traffic patterns of the World Wide Web, in order to investigate the perfo ..."
Abstract
-
Cited by 271 (1 self)
The workload of the global Internet is dominated by the Hypertext Transfer Protocol (HTTP), an application protocol used by World Wide Web clients and servers. Simulation studies of this environment will require a model of the traffic patterns of the World Wide Web, in order to investigate the performance aspects of this increasingly popular application. We have developed an empirical model of network traffic produced by HTTP. Instead of relying on server or client logs, our approach is based on gathering packet traces of HTTP network conversations. Through traffic analysis, we have determined statistics and distributions for higher-level quantities such as the size of HTTP items retrieved, the number of items per "Web page", think time, and user browsing behavior. These quantities form a model that can then be used by simulations to mimic World Wide Web network applications in wide-area IP internetworks.
Keywords: World Wide Web, HTTP, traffic model, traffic measurements, workload, Internet ...
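A model of this kind is used by drawing each quantity from its fitted distribution to drive a simulator. The sketch below shows the shape of that generation loop; the distribution families and parameters are placeholders, not the ones fitted in the paper.

```python
import random

rng = random.Random(1)

def pareto(alpha, scale):
    """Heavy-tailed sample via the inverse CDF: scale / U^(1/alpha)."""
    return scale / (rng.random() or 1e-12) ** (1.0 / alpha)

def generate_page_visit():
    """One 'Web page' visit: several HTTP items plus a user think time."""
    num_items = max(1, int(pareto(alpha=1.5, scale=1.0)))   # items per page
    item_sizes = [int(pareto(alpha=1.2, scale=1000)) for _ in range(num_items)]
    think_time = pareto(alpha=1.4, scale=2.0)               # seconds before next page
    return {"item_sizes_bytes": item_sizes, "think_time_s": round(think_time, 2)}

trace = [generate_page_visit() for _ in range(3)]
for visit in trace:
    print(visit)
```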
Dynamics of IP traffic: A study of the role of variability and the impact of control
1999
"... Using the ns-2-simulator to experiment with different aspects of user- or session-behaviors and network configurations and focusing on the qualitative aspects of a wavelet-based scaling analysis, we present a systematic investigation into how and why variability and feedback-control contribute to th ..."
Abstract
-
Cited by 271 (12 self)
Using the ns-2 simulator to experiment with different aspects of user- or session-behaviors and network configurations and focusing on the qualitative aspects of a wavelet-based scaling analysis, we present a systematic investigation into how and why variability and feedback-control contribute to the intriguing scaling properties observed in actual Internet traces (as our benchmark data, we use measured Internet traffic from an ISP). We illustrate how variability of both user aspects and network environments (i) causes self-similar scaling behavior over large time scales, (ii) determines a more or less pronounced change in scaling behavior around a specific time scale, and (iii) sets the stage for the emergence of surprisingly rich scaling dynamics over small time scales; i.e., multifractal scaling. Moreover, our scaling analyses indicate whether or not open-loop controls such as UDP or closed-loop controls such as TCP impact the local or small-scale behavior of the traffic and how the...
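The wavelet-based scaling analysis referred to above amounts to examining how the energy of detail coefficients grows with scale (a logscale diagram). A numpy-only sketch using the Haar wavelet on a synthetic byte-count series (illustrative; not the authors' analysis pipeline):

```python
import numpy as np

def haar_logscale(x, max_level=10):
    """Return (scale j, log2 mean energy of Haar detail coefficients at j).
    For self-similar traffic the points fall roughly on a line whose slope
    relates to the scaling (Hurst) exponent."""
    x = np.asarray(x, dtype=float)
    points = []
    for j in range(1, max_level + 1):
        if len(x) < 2:
            break
        x = x[: len(x) // 2 * 2]                             # even length
        pairs = x.reshape(-1, 2)
        detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)    # Haar detail at scale j
        x = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)         # approximation for scale j+1
        points.append((j, float(np.log2(np.mean(detail ** 2)))))
    return points

rng = np.random.default_rng(0)
series = rng.pareto(1.5, size=2 ** 14) * 1000    # toy heavy-tailed byte counts per slot
for j, log_energy in haar_logscale(series):
    print(f"scale {j:2d}: log2 energy = {log_energy:6.2f}")
```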
On the Relationship Between File Sizes, Transport Protocols, and Self-Similar Network Traffic
In Proc. IEEE International Conference on Network Protocols, 1996
"... Recent measurements of local-area and wide-area traffic have shown that network traffic exhibits variability at a wide range of scales. In this paper, we examine a mechanism that gives rise to self-similar network traffic and present some of its performance implications. The mechanism we study is th ..."
Abstract
-
Cited by 269 (23 self)
Recent measurements of local-area and wide-area traffic have shown that network traffic exhibits variability at a wide range of scales. In this paper, we examine a mechanism that gives rise to self-similar network traffic and present some of its performance implications. The mechanism we study is the transfer of files or messages whose size is drawn from a heavy-tailed distribution. First, we show that in a “realistic” client/server network environment—i.e., one with bounded resources and coupling among traffic sources competing for resources—the degree to which file sizes are heavy-tailed can directly determine the degree of traffic self-similarity at the link level. We show that this causal relationship is robust with respect to changes in network resources (bottleneck bandwidth and ...
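The mechanism is easy to reproduce in miniature: superpose ON/OFF sources whose ON periods are heavy-tailed and observe that the aggregate traffic stays bursty under time aggregation. A small sketch using the aggregated-variance view of the effect (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def on_off_source(length, alpha_on=1.4, mean_off=10.0):
    """One source: heavy-tailed (Pareto) ON durations transmitting 1 unit/slot,
    exponential OFF durations."""
    out, t = np.zeros(length), 0
    while t < length:
        on = int(rng.pareto(alpha_on) + 1)           # heavy-tailed transfer length
        out[t:t + on] = 1.0
        t += on + int(rng.exponential(mean_off)) + 1
    return out

traffic = sum(on_off_source(2 ** 15) for _ in range(50))   # aggregate of 50 sources

for m in (1, 4, 16, 64, 256):                        # aggregation level in slots
    agg = traffic[: len(traffic) // m * m].reshape(-1, m).mean(axis=1)
    print(f"m={m:4d}  variance={agg.var():.3f}")
# For short-range-dependent traffic the variance would fall roughly like 1/m;
# with heavy-tailed ON periods it decays much more slowly (self-similarity).
```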
Workload characterization of the 1998 World Cup Web site
1999
"... Web, workload characterization, performance, servers, caching, World Cup This paper presents a detailed workload characterization study of the 1998 World Cup Web site. Measurements from this site were collected over a three month period. During this time the site received 1.35 billion requests, maki ..."
Abstract
-
Cited by 252 (7 self)
Keywords: Web, workload characterization, performance, servers, caching, World Cup
This paper presents a detailed workload characterization study of the 1998 World Cup Web site. Measurements from this site were collected over a three month period. During this time the site received 1.35 billion requests, making this the largest Web workload analyzed to date. By examining this extremely busy site and through comparison with existing characterization studies we are able to determine how Web server workloads are evolving. We find that improvements in the caching architecture of the World-Wide Web are changing the workloads of Web servers, but that major improvements to that architecture are still necessary. In particular, we uncover evidence that a better consistency mechanism is required for World-Wide Web caches.
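An analysis like this starts from per-request log records. The sketch below assumes Common Log Format lines (the World Cup dataset itself is distributed in a compact binary format) and measures the share of 304 Not Modified responses, one indicator of how clients and proxies are revalidating cached content.

```python
import re
from collections import Counter

# Common Log Format:  host ident user [time] "request" status bytes
CLF = re.compile(r'\S+ \S+ \S+ \[[^\]]+\] "([A-Z]+) (\S+) [^"]*" (\d{3}) (\S+)')

sample_log = [
    '1.2.3.4 - - [26/Jun/1998:00:00:01 +0000] "GET /images/logo.gif HTTP/1.0" 304 -',
    '1.2.3.4 - - [26/Jun/1998:00:00:02 +0000] "GET /english/index.html HTTP/1.0" 200 8932',
    '5.6.7.8 - - [26/Jun/1998:00:00:03 +0000] "GET /images/logo.gif HTTP/1.0" 200 2326',
]

status_counts, bytes_sent = Counter(), 0
for line in sample_log:
    m = CLF.match(line)
    if not m:
        continue
    method, url, status, size = m.groups()
    status_counts[status] += 1
    bytes_sent += int(size) if size.isdigit() else 0

total = sum(status_counts.values())
print("status distribution:", dict(status_counts))
print(f"304 share: {status_counts['304'] / total:.1%}, bytes sent: {bytes_sent}")
```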
Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks
1998
"... Router mechanisms designed to achieve fair bandwidth allocations, like Fair Queueing, have many desirable properties for congestion control in the Internet. However, such mechanisms usually need to maintain state, manage buffers, and/or perform packet scheduling on a per flow basis, and this complex ..."
Abstract
-
Cited by 251 (11 self)
Router mechanisms designed to achieve fair bandwidth allocations, like Fair Queueing, have many desirable properties for congestion control in the Internet. However, such mechanisms usually need to maintain state, manage buffers, and/or perform packet scheduling on a per flow basis, and this complexity may prevent them from being cost-effectively implemented and widely deployed. In this paper, we propose an architecture that significantly reduces this implementation complexity yet still achieves approximately fair bandwidth allocations. We apply this approach to an island of routers -- that is, a contiguous region of the network -- and we distinguish between edge routers and core routers. Edge routers maintain per flow state; they estimate the incoming rate of each flow and insert a label into each packet header based on this estimate. Core routers maintain no per flow state; they use FIFO packet scheduling augmented by a probabilistic dropping algorithm that uses the packet labels an...
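The core-router side of the idea fits in a few lines: each arriving packet carries the rate label inserted at the edge, and the core drops it with probability max(0, 1 - alpha/label), where alpha is the current fair-share estimate. In the toy below the fair share is fixed by hand; the paper's algorithm estimates it from measured arrival and acceptance rates.

```python
import random

rng = random.Random(7)

def core_forward(label_rate, fair_share):
    """Probabilistically drop packets of flows whose labelled rate exceeds the
    fair share: drop probability = max(0, 1 - fair_share / label_rate)."""
    drop_prob = max(0.0, 1.0 - fair_share / label_rate)
    return rng.random() >= drop_prob          # True -> forward, False -> drop

# Three flows with edge-estimated rates (Mb/s) sharing a 10 Mb/s link, so a
# fair share of roughly 10/3 Mb/s. Flow rates and the fair share are toy values.
flows = {"flow-A": 1.0, "flow-B": 5.0, "flow-C": 9.0}
fair_share = 10.0 / 3.0

for name, rate in flows.items():
    forwarded = sum(core_forward(rate, fair_share) for _ in range(10_000))
    print(f"{name}: labelled {rate} Mb/s -> forwarded rate ~ "
          f"{rate * forwarded / 10_000:.2f} Mb/s")
```

Flows labelled below the fair share lose nothing, while faster flows are throttled to roughly the fair share, which is the approximate fairness the architecture targets without per-flow state in the core.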