CiteSeerX
Results 11 - 20 of 5,116

Evaluating content management techniques for Web proxy caches

by Martin Arlitt, Ludmila Cherkasova, John Dilley, Rich Friedrich, Tai Jin - In Proceedings of the 2nd Workshop on Internet Server Performance, 1999
"... The continued growth of the World-Wide Web and the emergence of new end-user technologies such as cable modems necessitate the use of proxy caches to reduce latency, network traffic and Web server loads. Current Web proxy caches utilize simple replacement policies to determine which files to retain ..."
Abstract - Cited by 92 (4 self)
in the cache. We utilize a trace of client requests to a busy Web proxy in an ISP environment to evaluate the performance of several existing replacement policies and of two new, parameterless replacement policies that we introduce in this paper. Finally, we introduce Virtual Caches, an approach for improving
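
The trace-driven evaluation this abstract describes can be prototyped in a few lines. The sketch below replays a hypothetical (url, size) request trace against a fixed-capacity cache under plain LRU and reports the hit rate; it is only an illustration of the methodology, not the parameterless replacement policies the paper introduces.

    from collections import OrderedDict

    def replay_lru(trace, capacity):
        """Replay a request trace against an LRU cache of `capacity` bytes.

        `trace` is an iterable of (url, size) pairs; returns the hit rate.
        Illustrative only -- not the policies evaluated in the paper.
        """
        cache = OrderedDict()           # url -> size, least recently used first
        used = 0
        hits = requests = 0
        for url, size in trace:
            requests += 1
            if url in cache:
                hits += 1
                cache.move_to_end(url)  # mark as most recently used
            else:
                if size > capacity:
                    continue            # object larger than the whole cache
                while used + size > capacity:
                    _, evicted_size = cache.popitem(last=False)  # evict LRU object
                    used -= evicted_size
                cache[url] = size
                used += size
        return hits / requests if requests else 0.0

    # Hypothetical trace: repeated requests to a small working set.
    trace = [("a", 100), ("b", 200), ("a", 100), ("c", 150), ("a", 100)]
    print(replay_lru(trace, capacity=300))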

My cache or yours? Making storage more exclusive

by Theodore M. Wong, John Wilkes - In Proceedings of the 2002 USENIX Annual Technical Conference, 2002
"... Modern high-end disk arrays often have several gigabytes of cache RAM. Unfortunately, most array caches use management policies which duplicate the same data blocks at both the client and array levels of the cache hierarchy: they are inclusive. Thus, the aggregate cache behaves as if it was only as ..."
Abstract - Cited by 125 (0 self)
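
The exclusive-caching idea can be illustrated with a two-level toy model: when the client cache evicts a block it demotes the block to the array instead of dropping it, and the array gives up its copy once the client reads the block back, so the two levels hold largely disjoint data. This is only a schematic sketch of demotion, not the protocol and policies evaluated in the paper.

    from collections import OrderedDict

    class Level:
        """A fixed-size LRU cache holding block ids."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.blocks = OrderedDict()

        def touch(self, block):
            self.blocks[block] = True
            self.blocks.move_to_end(block)
            if len(self.blocks) > self.capacity:
                return self.blocks.popitem(last=False)[0]   # evicted block id
            return None

    class ExclusiveHierarchy:
        """Client and array caches kept (mostly) disjoint via demotion."""
        def __init__(self, client_cap, array_cap):
            self.client = Level(client_cap)
            self.array = Level(array_cap)

        def read(self, block):
            if block in self.client.blocks:
                return "client hit"
            if block in self.array.blocks:
                del self.array.blocks[block]     # drop the array copy to keep levels exclusive
                result = "array hit"
            else:
                result = "disk read"
            victim = self.client.touch(block)
            if victim is not None:
                self.array.touch(victim)         # demote the victim instead of discarding it
            return result

    h = ExclusiveHierarchy(client_cap=2, array_cap=2)
    print([h.read(b) for b in ["x", "y", "z", "x"]])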

Caching on the World Wide Web

by Charu Aggarwal, Joel L. Wolf, Philip S. Yu - Journal of Distributed and Parallel Systems (IJDPS), Vol. 2, No. 6, 2000
"... Abstract—With the recent explosion in usage of the World Wide Web, the problem of caching Web objects has gained considerable importance. Caching on the Web differs from traditional caching in several ways. The nonhomogeneity of the object sizes is probably the most important such difference. In thi ..."
Abstract - Cited by 128 (1 self)
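
One way the nonhomogeneity of object sizes changes the eviction decision is captured by size-aware priorities such as GreedyDual-Size, where an object's priority is its retrieval cost divided by its size plus an aging term. The sketch below illustrates that general idea only; it is not the algorithm proposed in this paper.

    import heapq

    def greedy_dual_size(trace, capacity):
        """Size-aware eviction in the GreedyDual-Size style (uniform cost = 1):
        priority = inflation + cost / size. Illustration only."""
        pri = {}            # url -> current priority
        size = {}           # url -> object size
        heap = []           # (priority, url); may hold stale entries
        used = 0
        inflation = 0.0
        hits = 0
        for url, sz in trace:
            if url in pri:
                hits += 1
                pri[url] = inflation + 1.0 / sz
                heapq.heappush(heap, (pri[url], url))
                continue
            while used + sz > capacity and pri:
                p, victim = heapq.heappop(heap)
                if victim in pri and pri[victim] == p:   # skip stale heap entries
                    inflation = p                        # "age" the remaining objects
                    used -= size.pop(victim)
                    del pri[victim]
            if used + sz <= capacity:
                pri[url] = inflation + 1.0 / sz
                size[url] = sz
                used += sz
                heapq.heappush(heap, (pri[url], url))
        return hits

    trace = [("big", 800), ("small", 50), ("small", 50), ("big", 800)]
    print(greedy_dual_size(trace, capacity=1000))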

Architectural support for operating system-driven CMP cache management

by Nauman Rafique, Won-taek Lim, Mithuna Thottethodi - In Proc. of the International Conference on Parallel Architectures and Compilation Techniques, 2006
"... The role of the operating system (OS) in managing shared resources such as CPU time, memory, peripherals, and even energy is well motivated and understood [23]. Unfortu-nately, one key resource|lower-level shared cache in chip multi-processors|is commonly managed purely in hardware by rudimentary re ..."
Abstract - Cited by 88 (1 self)
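
A common form of architectural support for OS-driven management of a shared cache is quota enforcement over the ways of each set: the OS assigns each process or priority class a share of the ways, and the replacement logic evicts within the requester's own allotment once that quota is full. The sketch below is a generic illustration of such way-quota bookkeeping under assumed interfaces, not the specific hardware/OS interface proposed in the paper.

    class WayPartitionedSet:
        """One set of a shared, set-associative cache whose ways are split by
        per-owner quotas (each owner is assumed to get at least one way).

        `quotas` maps an owner id (e.g. a process or priority class chosen by
        the OS) to the number of ways it may occupy. Illustrative only.
        """
        def __init__(self, num_ways, quotas):
            self.num_ways = num_ways
            self.quotas = dict(quotas)
            self.ways = []                              # (owner, tag) pairs, LRU first

        def occupancy(self, owner):
            return sum(1 for o, _ in self.ways if o == owner)

        def access(self, owner, tag):
            if (owner, tag) in self.ways:
                self.ways.remove((owner, tag))
                self.ways.append((owner, tag))          # move to MRU position
                return "hit"
            if self.occupancy(owner) >= self.quotas[owner]:
                # Quota exhausted: the victim must come from the owner's own lines.
                victim = next(i for i, (o, _) in enumerate(self.ways) if o == owner)
                del self.ways[victim]
            elif len(self.ways) >= self.num_ways:
                del self.ways[0]                        # set full: evict the global LRU line
            self.ways.append((owner, tag))
            return "miss"

    s = WayPartitionedSet(num_ways=4, quotas={"A": 3, "B": 1})
    print([s.access("A", t) for t in (1, 2, 3)], s.access("B", 10), s.access("B", 11))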

Fido: A cache that learns to fetch

by Mark Palmer - In Proceedings of the 17th International Conference on Very Large Data Bases, 1991
"... This paper describes Fido, a predictive cache [Palmer 19901 that prefetches by employing an associative memory to recognize access patterns within a context over time. Repeated training adapts the associative memory contents to data and access pattern changes, allowing on-line access predictions for ..."
Abstract - Cited by 122 (1 self)
for prefetching. We discuss two salient elements of Fido: MLP, a replacement policy for managing prefetched objects, and Estimating Prophet, the component that recognizes patterns and predicts access. We then present some early simulation results which suggest that predictive caching works well and conclude
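
The flavor of learning access patterns for prefetching can be conveyed with something far simpler than Fido's associative memory: a first-order table that counts which object tends to follow which, and prefetches the most frequent successor. The sketch below is only a toy stand-in for the approach described in the abstract.

    from collections import defaultdict, Counter

    class SuccessorPredictor:
        """Learns 'B usually follows A' from an access stream and predicts the
        next object to prefetch. A toy stand-in for Fido's associative memory."""
        def __init__(self):
            self.successors = defaultdict(Counter)   # previous object -> Counter of next objects
            self.prev = None

        def observe(self, obj):
            if self.prev is not None:
                self.successors[self.prev][obj] += 1
            self.prev = obj

        def predict(self):
            """Return the most likely next object, or None if nothing is known."""
            counts = self.successors.get(self.prev)
            if not counts:
                return None
            return counts.most_common(1)[0][0]

    p = SuccessorPredictor()
    for obj in ["a", "b", "c", "a", "b", "c", "a"]:
        p.observe(obj)
    print(p.predict())   # after seeing "a", the learned successor is "b"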

Cache Management Policies for Semantic Caching

by Themis Palpanas, Per-Åke Larson, Jonathan Goldstein
"... Commercial database systems make extensive use of caching to speed up query execution. Semantic caching is the idea of caching actual query results in the hope of being able to reuse them to speed up subsequent queries. This paper deals with cache management policies, which refer to policies for adm ..."
Abstract
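
Semantic caching keys cached entries by query predicates rather than by page or tuple identifiers, so a new query can be answered from a cached result whose predicate contains it. A minimal containment check for one-dimensional range predicates might look like the sketch below; the admission and replacement policies the paper actually studies are not modeled here.

    class SemanticRangeCache:
        """Caches query results keyed by 1-D range predicates (lo, hi).

        A new query hits if some cached range contains it. Minimal sketch;
        the paper's admission and replacement policies are not modeled."""
        def __init__(self):
            self.entries = {}   # (lo, hi) -> list of cached rows

        def put(self, lo, hi, rows):
            self.entries[(lo, hi)] = rows

        def get(self, lo, hi):
            for (clo, chi), rows in self.entries.items():
                if clo <= lo and hi <= chi:          # containment: reuse the cached result
                    return [r for r in rows if lo <= r <= hi]
            return None                              # semantic miss: go to the database

    cache = SemanticRangeCache()
    cache.put(0, 100, rows=list(range(0, 101, 10)))   # result of "0 <= x <= 100"
    print(cache.get(20, 60))    # answered from the cache: [20, 30, 40, 50, 60]
    print(cache.get(90, 150))   # not contained -> None, must query the database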

A theory of friendly boards

by Renée B. Adams, Daniel Ferreira - Journal of Finance, 2007
"... We analyze the consequences of the board’s dual role as advisor as well as monitor of management. Given this dual role, the CEO faces a trade-off in disclosing information to the board: If he reveals his information, he receives better advice; however, an informed board will also monitor him more in ..."
Abstract - Cited by 198 (8 self)
intensively. Since an independent board is a tougher monitor, the CEO may be reluctant to share information with it. Thus, management-friendly boards can be optimal. Using the insights from the model, we analyze the differences between sole and dual board systems. We highlight several policy implications

CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms

by Rodrigo N. Calheiros, Rajiv Ranjan, Anton Beloglazov, César A. F. De Rose, Rajkumar Buyya, 2010
"... Cloud computing is a recent advancement wherein IT infrastructure and applications are provided as “services ” to endusers under a usage-based payment model. They can leverage virtualized services even on the fly based on requirements (workload patterns and QoS) varying with time. The application se ..."
Abstract - Cited by 199 (23 self)
of both single and inter-networked clouds (federation of clouds). Moreover, it exposes custom interfaces for implementing policies and provisioning techniques for allocation of VMs under inter-networked Cloud computing scenarios. Several researchers from organisations such as HP Labs in the USA are using

Application-Controlled File Caching Policies

by Pei Cao, Edward W. Felten, Kai Li - In Proc. of the 1994 Summer USENIX Technical Conference, 1994
"... We consider how to improve the performance of file caching by allowing user-level control over file cache replacement decisions. We use two-level cache management: the kernel allocates physical pages to individual applications (allocation), and each application is responsible for deciding how to use ..."
Abstract - Cited by 80 (5 self)
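
The two-level split described here can be sketched as a small protocol: the kernel picks a replacement candidate from among the application's own pages, and the application may accept it or name a different page it owns. The sketch below uses hypothetical names and only illustrates that division of labor, not the kernel interface from the paper.

    from collections import OrderedDict

    class AppControlledCache:
        """Kernel allocates pages per application; each application may override
        which of its own pages is replaced. Hypothetical interface, sketch only."""
        def __init__(self, pages_per_app):
            self.pages_per_app = pages_per_app
            self.resident = {}   # app -> OrderedDict of page -> True (LRU order)

        def access(self, app, page, choose_victim=None):
            pages = self.resident.setdefault(app, OrderedDict())
            if page in pages:
                pages.move_to_end(page)
                return "hit"
            if len(pages) >= self.pages_per_app:
                kernel_candidate = next(iter(pages))            # kernel suggests its LRU page
                victim = kernel_candidate
                if choose_victim is not None:
                    alternative = choose_victim(kernel_candidate, list(pages))
                    if alternative in pages:                    # app may only evict its own pages
                        victim = alternative
                del pages[victim]
            pages[page] = True
            return "miss"

    def keep_index(kernel_candidate, my_pages):
        # Hypothetical application policy: never evict the page named "index".
        return next(p for p in my_pages if p != "index")

    cache = AppControlledCache(pages_per_app=2)
    for p in ["index", "data1", "data2"]:
        cache.access("app1", p, choose_victim=keep_index)
    print(list(cache.resident["app1"]))   # "index" survives: ['index', 'data2']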

Managing Distributed, Shared L2 Caches through OS-Level Page Allocation

by Sangyeun Cho, Lei Jin - In IEEE/ACM International Symposium on Microarchitecture, 2006
"... This paper presents and studies a distributed L2 cache management approach through OS-level page allocation for future many-core processors. L2 cache management is a crucial multicore processor design aspect to overcome non-uniform cache access latency for good program performance and to reduce on-c ..."
Abstract - Cited by 134 (11 self)
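
OS-level page allocation can steer data toward a particular L2 slice because the slice a cache line maps to is determined by bits of its physical address; by choosing which physical frame backs a virtual page, the OS chooses the slice. The bit positions below (4 KB pages, 16 tiles selected by the low bits of the frame number) are assumptions for illustration, not the paper's concrete design.

    PAGE_SHIFT = 12          # assumed 4 KB pages
    NUM_SLICES = 16          # assumed one L2 slice per tile on a 16-tile CMP

    def slice_of(phys_addr):
        """Which L2 slice a physical address maps to, assuming the slice index
        is taken from the low bits of the physical page frame number."""
        return (phys_addr >> PAGE_SHIFT) % NUM_SLICES

    def alloc_page_in_slice(free_frames, target_slice):
        """OS-level placement: pick a free physical frame whose frame number
        maps to `target_slice`, so the page's data lands in that tile's slice."""
        for frame in free_frames:
            if frame % NUM_SLICES == target_slice:
                free_frames.remove(frame)
                return frame
        return None   # no suitably colored frame is free; fall back to any frame

    free = list(range(100, 140))                  # hypothetical free frame numbers
    frame = alloc_page_in_slice(free, target_slice=5)
    print(frame, slice_of(frame << PAGE_SHIFT))   # -> 101 5: the page maps to slice 5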