Utility-Based Cache Partitioning: A Low-Overhead, High-Performance, Runtime Mechanism to Partition Shared Caches (2006)

by Moinuddin K. Qureshi, Yale N. Patt
Venue: IEEE/ACM International Symposium on Microarchitecture
Citations: 259 (5 self)

BibTeX

@INPROCEEDINGS{Qureshi06utility-basedcache,
    author = {Moinuddin K. Qureshi and Yale N. Patt},
    title = {Utility-Based Cache Partitioning: A Low-Overhead, High-Performance, Runtime Mechanism to Partition Shared Caches},
    booktitle = {IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE},
    year = {2006},
    pages = {423--432},
    publisher = {IEEE Computer Society}
}


Abstract

This paper investigates the problem of partitioning a shared cache between multiple concurrently executing applications. The commonly used LRU policy implicitly partitions a shared cache on a demand basis, giving more cache resources to the application that has a high demand and fewer cache resources to the application that has a low demand. However, a higher demand for cache resources does not always correlate with a higher performance from additional cache resources. It is beneficial for performance to invest cache resources in the application that benefits more from the cache resources rather than in the application that has more demand for the cache resources. This paper proposes utility-based cache partitioning (UCP), a low-overhead, runtime mechanism that partitions a shared cache between multiple applications depending on the reduction in cache misses that each application is likely to obtain for a given amount of cache resources. The proposed mechanism monitors each application at runtime using a novel, cost-effective, hardware circuit that requires less than 2kB of storage. The information collected by the monitoring circuits is used by a partitioning algorithm to decide the amount of cache resources allocated to each application. Our evaluation, with 20 multiprogrammed workloads, shows that UCP improves performance of a dual-core system by up to 23% and on average 11% over LRU-based cache partitioning.
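The core idea above — give each additional unit of cache to whichever application gains the most from it — can be sketched as a greedy way-allocation loop. This is a hypothetical simplification, not the paper's exact algorithm (UCP's partitioning step must also cope with non-convex utility curves); it assumes per-application miss curves of the kind the monitoring circuits would collect, where `miss_curves[a][w]` is the miss count of application `a` when granted `w` cache ways.

```python
def partition_ways(miss_curves, total_ways, min_ways=1):
    """Greedy utility-based way allocation (illustrative sketch).

    miss_curves[a][w] -- misses of application a with w ways allocated,
                         for w = 0 .. total_ways.
    Repeatedly hands the next way to the application whose miss count
    would drop the most, i.e. the one with the highest marginal utility.
    """
    n = len(miss_curves)
    alloc = [min_ways] * n                      # guarantee a minimum share
    for _ in range(total_ways - sum(alloc)):
        # marginal utility: miss reduction from one additional way
        gains = [miss_curves[a][alloc[a]] - miss_curves[a][alloc[a] + 1]
                 for a in range(n)]
        best = max(range(n), key=lambda a: gains[a])
        alloc[best] += 1
    return alloc
```

For example, an application with a steep miss curve (large benefit per way) will absorb most of the ways, while a streaming application whose misses barely change with cache size — high demand but low utility — is held near the minimum, which is exactly the behavior the abstract contrasts with demand-based LRU partitioning.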

Keyphrases

cache resource, utility-based cache partitioning, runtime mechanism, partition shared cache, dual-core system, demand basis, used LRU policy, monitoring circuit, low demand, multiple application, high demand, multiprogrammed workload, additional cache resource, mechanism monitor, cache miss, LRU-based cache partitioning, hardware circuit, partitioning algorithm
