Results 11 - 20 of 207

DCD - Disk Caching Disk: A New Approach for Boosting I/O Performance

by Yiming Hu, Qing Yang - In Proceedings of the 23rd International Symposium on Computer Architecture, 1996
"... This paper presents a novel disk storage architecture called DCD, Disk Caching Disk, for the purpose of optimizing I/O performance. The main idea of the DCD is to use a small log disk, referred to as cache-disk, as a secondary disk cache to optimize write performance. While the cache-disk and the n ..."
Abstract - Cited by 101 (18 self)

Distributed Prefetch-buffer/Cache Design for High Performance Memory Systems

by Thomas Alexander, Gershon Kedem
"... Microprocessor execution speeds are improving at a rate of 50%-80% per year while DRAM access times are improving at a much lower rate of 5%-10% per year. Computer systems are rapidly approaching the point at which overall system performance is determined not by the speed of the CPU but by the memor ..."
Abstract - Cited by 32 (1 self)
small (32 KB) prediction cache we can get an effective main memory access time that is close to the access time of larger secondary caches.

The V-Way Cache: Demand-Based Associativity via Global Replacement

by Moinuddin K. Qureshi, David Thompson, Yale N. Patt - In Proceedings of the 32nd Annual International Symposium on Computer Architecture (ISCA), pp. 544–555, 2005
"... As processor speeds increase and memory latency becomes more critical, intelligent design and management of secondary caches becomes increasingly important. The efficiency of current set-associative caches is reduced because programs exhibit a non-uniform distribution of memory accesses across di ..."
Abstract - Cited by 18 (0 self)

A Fully Associative Software-Managed Cache Design

by unknown authors
"... As DRAM access latencies approach a thousand instruction-execution times and on-chip caches grow to multiple megabytes, it is not clear that conventional cache structures continue to be appropriate. Two key features, full associativity and software management, have been used successfully in the virtual ..."
Abstract
in the virtual-memory domain to cope with disk access latencies. Future systems will need to employ similar techniques to deal with DRAM latencies. This paper presents a practical, fully associative, software-managed secondary cache system that provides performance competitive with or superior to traditional

Instruction Cache Prefetching Using Multilevel Branch Prediction

by Alexander V. Veidenbaum - International Symposium on High Performance Systems, 1997
"... This paper presents an instruction cache prefetching mechanism capable of prefetching past branches in multiple-issue processors. Such processors at high clock rates often use small instruction caches which have significant miss rates. Prefetching from secondary cache can hide the instruction cache ..."
Abstract - Cited by 4 (1 self)

Caching Queues in Memory Buffers

by Rajeev Motwani, Dilys Thomas - In Proceedings of the 15th Annual ACM-SIAM Symposium on Discrete Algorithms, 2004
"... Motivated by the need for maintaining multiple, large queues of data in modern high-performance systems, we study the problem of caching queues in memory under the following simple, but widely applicable, model. At each clock-tick, any number of data items may enter the various queues, while data-it ..."
Abstract - Cited by 5 (2 self)
-items are consumed from the heads of the queues. Since the number of unconsumed items may exceed memory buffer size, some items in the queues need to be spilled to secondary storage and later moved back into memory for consumption. We provide online queue-caching algorithms under a number of interesting cost models.

FlashCache: A NAND flash memory file cache for low power web servers

by Taeho Kgil, Trevor Mudge - In International Conference on Compilers, Architecture, and Synthesis for Embedded Systems, 2006
"... We propose an architecture that uses NAND flash memory to reduce main memory power in web server platforms. Our architecture uses a two level file buffer cache composed of a relatively small DRAM, which includes a primary file buffer cache, and a flash memory secondary file buffer cache. Compared to ..."
Abstract - Cited by 52 (11 self)

Design and Evaluation of a Distributed Cache Architecture with Prediction

by Thomas Alexander, Gershon Kedem, 1994
"... We propose a secondary cache architecture that combines a predictive fetch strategy with a distributed cache to build a high performance memory system. The cache is partitioned into smaller units and distributed evenly in the main memory space. The architecture offers high bandwidth between the c ..."
Abstract - Cited by 1 (0 self)

An object-based processor cache

by Gordon Russell, Paul Shaw, G Xh, 1993
"... In the past, many persistent object-oriented architecture designs have been based on traditional processor technologies. Such architectures invariably attempt to insert an object-level abstraction mechanism over the traditional processor’s virtual addressing scheme; this results in an architecture ..."
Abstract - Cited by 1 (1 self)
via tag bits, object- and page-based locking, range checking, object to virtual mapping function, and use of a secondary descriptor cache. The cache design results in a processor which is no slower than conventional processors based on virtual memory. The design is then extensively analysed

Adaptive Spatial Sample Caching

by Andreas Dietrich
"... Figure 1: Three example scenes rendered using sample caching. Using caching during walkthrough animations allows for reusing shading results in subsequent frames, and to reduce the number of secondary rays to be cast by more than an order of magnitude. Despite tremendous progress in the last few yea ..."
Abstract

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University