Results 1 - 10 of 2,053
An adaptive, nonuniform cache structure for wire-delay dominated on-chip caches
- In International Conference on Architectural Support for Programming Languages and Operating Systems, 2002
"... Growing wire delays will force substantive changes in the designs of large caches. Traditional cache architectures assume that each level in the cache hierarchy has a single, uniform access time. Increases in on-chip communication delays will make the hit time of large on-chip caches a function of a ..."
Cited by 314 (39 self)
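The non-uniform idea is easiest to see as a latency map over the cache's banks: a hit costs the bank access time plus a wire/routing delay that grows with the bank's distance from the cache controller. The sketch below is a minimal illustration of that model, not the paper's design; the grid size, per-hop delay, and bank access time are made-up parameters.

```python
# Minimal sketch of a NUCA-style latency map: hit time depends on which
# bank the line lives in. Grid size and delays are illustrative only.
BANK_ACCESS_CYCLES = 3      # time to read a single bank (assumed)
HOP_CYCLES = 2              # wire/router delay per hop (assumed)
ROWS, COLS = 4, 4           # 16 banks arranged in a grid (assumed)
CONTROLLER = (0, 0)         # cache controller sits at one corner

def bank_hit_latency(row, col):
    """Hit latency = bank access + Manhattan-distance wire delay."""
    hops = abs(row - CONTROLLER[0]) + abs(col - CONTROLLER[1])
    return BANK_ACCESS_CYCLES + hops * HOP_CYCLES

latency_map = [[bank_hit_latency(r, c) for c in range(COLS)] for r in range(ROWS)]
for row in latency_map:
    print(row)   # near banks ~3 cycles, far corner 3 + 6*2 = 15 cycles
```

A uniform-access cache would charge every access the worst-case 15 cycles in this toy grid; exposing the spread is what makes placement and migration policies worthwhile.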
Managing Wire Delay in Large Chip-Multiprocessor Caches
- In IEEE/ACM International Symposium on Microarchitecture, 2004
"... In response to increasing (relative) wire delay, architects have proposed various technologies to manage the impact of slow wires on large uniprocessor L2 caches. Block migration (e.g., D-NUCA and NuRapid) reduces average hit latency by migrating frequently used blocks towards the lower-latency bank ..."
Cited by 157 (4 self)
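Block migration, as the snippet describes it, moves frequently used blocks toward lower-latency banks. One common way to approximate "frequently used" without counters is gradual promotion: on a hit, swap the block one bank closer to the requester. The sketch below illustrates that policy in the abstract; it is not D-NUCA's or NuRapid's actual mechanism, and the bank ordering is assumed to be a simple 1-D chain from closest to farthest.

```python
# Gradual-promotion sketch: banks[0] is the closest (fastest) bank,
# banks[-1] the farthest. On a hit, the block swaps one step closer.
# Illustration of the migration idea only, not a faithful D-NUCA model.
def lookup(banks, addr):
    for i, bank in enumerate(banks):
        if addr in bank:
            if i > 0:                               # promote one bank closer on every hit
                victim = next(iter(banks[i - 1]))   # pick any resident block to demote
                banks[i - 1].remove(victim)
                bank.remove(addr)
                banks[i - 1].add(addr)
                bank.add(victim)
            return i                                # bank index stands in for latency class
    return None                                     # miss: fill policy not modeled here

banks = [{"A"}, {"B"}, {"C"}, {"D"}]   # one block per bank for brevity
for _ in range(3):
    lookup(banks, "D")                 # three hits walk "D" to the closest bank
print(banks)                           # "D" ends up in banks[0]
```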
Temperature Dependent Wire Delay Estimation in
"... Abstract—Due to large variations in temperature in VLSI cir-cuits and the linear relationship between metal resistance and temperature, the delay through wires of the same length can be different. Traditional thermal aware floorplanning algorithms use wirelength to estimate delay and routability. In ..."
Abstract
- Add to MetaCart
Abstract—Due to large variations in temperature in VLSI cir-cuits and the linear relationship between metal resistance and temperature, the delay through wires of the same length can be different. Traditional thermal aware floorplanning algorithms use wirelength to estimate delay and routability
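The "linear relationship between metal resistance and temperature" the abstract leans on is the usual first-order model R(T) = R0 * (1 + alpha * (T - T0)), with alpha roughly 0.0039 per degree C for copper, and a distributed RC wire's delay is roughly 0.5 * R * C (Elmore), so a hot wire is proportionally slower than an identical cool one. The numbers below are illustrative assumptions, not values from the paper.

```python
# First-order temperature dependence of wire resistance and its effect on
# Elmore delay. All electrical values below are illustrative assumptions.
ALPHA_CU = 0.0039          # copper temperature coefficient of resistance, 1/degC
T_REF = 25.0               # reference temperature, degC

def wire_delay(r_per_mm, c_per_mm, length_mm, temp_c):
    """Distributed-RC (Elmore) delay of a wire at a given temperature."""
    r = r_per_mm * length_mm * (1.0 + ALPHA_CU * (temp_c - T_REF))
    c = c_per_mm * length_mm
    return 0.5 * r * c     # seconds

R_PER_MM = 200.0           # ohm/mm   (assumed)
C_PER_MM = 200e-15         # farad/mm (assumed)
for temp in (25.0, 85.0, 110.0):
    d = wire_delay(R_PER_MM, C_PER_MM, 5.0, temp)
    print(f"{temp:5.1f} degC -> {d * 1e12:6.1f} ps")
# The 85 degC wire is ~23% slower than the 25 degC one, which is why delay
# estimates that ignore temperature can mis-rank floorplan candidates.
```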
The Future of Wires
, 1999
"... this paper we first discuss the wire metrics of interest and examine them in a contemporary 0.25 µm process. We then discuss technology scaling over the next several generations, from SIA and other predictions, and how our wire metrics trend over that time. We will examine the delay and bandwidth lim ..."
Cited by 516 (7 self)
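The delay trend the paper examines follows from the distributed-RC model: an unrepeated wire's delay grows roughly as 0.4 * r * c * L^2, quadratically in length, while gate delay shrinks each generation, so wires get relatively slower. Breaking the wire into N repeated segments turns the wire component into N * 0.4 * r * c * (L/N)^2 = 0.4 * r * c * L^2 / N, roughly linear in L at the cost of repeater area and power. The sketch below just plays that arithmetic out with assumed per-mm values.

```python
# Why long wires hurt: unrepeated RC delay grows with the square of length,
# while inserting repeaters makes it roughly linear. Values are assumptions.
R_PER_MM = 150.0         # ohm/mm  (assumed)
C_PER_MM = 200e-15       # F/mm    (assumed)
T_REPEATER = 30e-12      # delay added by one repeater, seconds (assumed)

def unrepeated_delay(length_mm):
    return 0.4 * (R_PER_MM * length_mm) * (C_PER_MM * length_mm)

def repeated_delay(length_mm, segments):
    seg = length_mm / segments
    return segments * (unrepeated_delay(seg) + T_REPEATER)

for L in (1, 2, 5, 10, 20):
    print(f"{L:2d} mm: unrepeated {unrepeated_delay(L)*1e12:7.1f} ps, "
          f"repeated(x{L}) {repeated_delay(L, L)*1e12:7.1f} ps")
# Doubling the length quadruples the unrepeated delay but only doubles the
# repeated one -- the basic reason repeater insertion and pipelined wires
# became standard as relative wire delay grew.
```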
Reducing Wire Delay Penalty through Value Prediction
- In International Symposium on Microarchitecture, 2000
"... In this work we show that value prediction can be used to avoid the penalty of long wire delays by predicting the data that is communicated through these long wires and validating the prediction locally where the value is produced. Only in the case of misprediction, the long wire delay is experienced ..."
Cited by 25 (3 self)
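The mechanism in the abstract is: the consumer end of a long wire speculates on the value, so it need not wait for the wire, while the producer end checks the same prediction locally and only pays the wire delay to deliver a correction on a misprediction. A last-value predictor is the simplest instance; the sketch below shows the consume-speculatively / validate-at-producer split in that spirit. It is an illustration of the idea, not the paper's hardware design, and the latency is an assumed number.

```python
# Sketch of value prediction over a long wire with local validation.
# The consumer uses a last-value prediction immediately; the producer checks
# the same prediction against the real value and, only on a mismatch, sends
# the correct value across the slow wire.
WIRE_DELAY = 8            # cycles to cross the long wire (assumed)

class LastValuePredictor:
    def __init__(self):
        self.table = {}                       # tag -> last value seen
    def predict(self, tag):
        return self.table.get(tag, 0)
    def update(self, tag, value):
        self.table[tag] = value

def communicate(tag, produced_value, pred_consumer, pred_producer):
    """Return (value used by consumer, cycles the consumer had to wait)."""
    guess = pred_consumer.predict(tag)                         # consumer proceeds speculatively
    correct = (pred_producer.predict(tag) == produced_value)   # checked locally at the producer
    pred_consumer.update(tag, guess if correct else produced_value)
    pred_producer.update(tag, produced_value)
    if correct:
        return guess, 0                       # no wire traversal on the critical path
    return produced_value, WIRE_DELAY         # misprediction: pay the wire delay

cons, prod = LastValuePredictor(), LastValuePredictor()
for v in (7, 7, 7, 9):                        # a steady value predicts well; a change misses
    print(communicate("r3", v, cons, prod))
```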
Exploiting Low Entropy to Reduce Wire Delay
"... Abstract — Wires shrink less efficiently than transistors. Smaller dimensions increase relative delay and the probability of crosstalk. Solutions to this problem include adding additional latency with pipelining, using “fat wires ” at higher metal levels, and advances in process and material technol ..."
Abstract
- Add to MetaCart
Abstract — Wires shrink less efficiently than transistors. Smaller dimensions increase relative delay and the probability of crosstalk. Solutions to this problem include adding additional latency with pipelining, using “fat wires ” at higher metal levels, and advances in process and material
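"Low entropy" here refers to the information content of the values actually travelling over a wire or bus: if the value stream is highly predictable, far fewer bits genuinely need to cross the slow wire than the raw encoding suggests. The snippet below only shows how one would measure that empirical entropy for a value trace; how the paper exploits it is not reproduced here, and the trace is invented.

```python
# Empirical (Shannon) entropy of a value trace, in bits per symbol.
# A repetitive trace like the one below carries far less information than
# its 8-bit encoding suggests, which is the opening the title alludes to.
from collections import Counter
from math import log2

def empirical_entropy(values):
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in counts.values())

trace = [0, 0, 0, 1, 0, 0, 255, 0, 0, 0, 1, 0]   # illustrative bus values
print(f"{empirical_entropy(trace):.2f} bits/value vs 8.00 bits transmitted")
```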
Evaluation of the Raw Microprocessor: An Exposed-Wire-Delay Architecture for ILP and Streams
- In ISCA, 2004
"... This paper evaluates the Raw microprocessor. Raw addresses the challenge of building a general-purpose architecture that performs well on a larger class of stream and embedded computing applications than existing microprocessors, while still running existing ILP-based sequential programs with reasonable performance in the face of increasing wire delays. Raw approaches this challenge by implementing plenty of on-chip resources -- including logic, wires, and pins -- in a tiled arrangement, and exposing them through a new ISA, so that the software can take advantage of these resources for parallel ..."
Cited by 82 (14 self)
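"Exposed wire delay" means the cost of moving a value between tiles is visible to software, so a compiler or scheduler places operations to keep communicating instructions close together. The sketch below is a generic illustration of that cost model under assumed per-hop and per-op latencies; it is not Raw's ISA or its on-chip networks.

```python
# Generic sketch of the exposed-wire-delay idea: computation is spread over a
# grid of tiles, and moving an operand between tiles costs visible cycles per
# hop that the scheduler must account for.
HOP_CYCLES = 1          # cost of one tile-to-tile hop (assumed)
ALU_CYCLES = 1          # cost of one arithmetic op (assumed)

def route_cost(src, dst):
    """Cycles to move a value between tiles at grid coordinates src and dst."""
    return HOP_CYCLES * (abs(src[0] - dst[0]) + abs(src[1] - dst[1]))

# A two-operation dataflow: a*b computed on tile (0,0), result added to c on tile (2,1).
schedule = [
    ("mul", (0, 0)),
    ("route", (0, 0), (2, 1)),
    ("add", (2, 1)),
]
total = 0
for step in schedule:
    if step[0] == "route":
        total += route_cost(step[1], step[2])
    else:
        total += ALU_CYCLES
print(f"critical path: {total} cycles")   # 1 + 3 + 1 = 5; a better placement shortens it
```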
Managing Wire Delay in Chip Multiprocessor Caches
, 2006
"... Increasing on-chip wire delay and growing off-chip miss latency, present two key challenges in designing large Level-2 (L2) CMP caches. Currently, some CMPs use a shared L2 cache to maximize cache capacity and minimize off-chip misses. Others use private L2 caches, replicating data to limit the dela ..."
Abstract
-
Cited by 4 (1 self)
- Add to MetaCart
Increasing on-chip wire delay and growing off-chip miss latency, present two key challenges in designing large Level-2 (L2) CMP caches. Currently, some CMPs use a shared L2 cache to maximize cache capacity and minimize off-chip misses. Others use private L2 caches, replicating data to limit
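The shared-versus-private tension in the abstract reduces to average-memory-access-time arithmetic: a shared L2 keeps the full capacity (fewer off-chip misses) but most hits reach remote banks, so hit latency is higher; private L2s hit locally but replication shrinks effective capacity and raises the miss rate. The numbers below are invented purely to make the trade-off concrete.

```python
# Back-of-the-envelope AMAT comparison of shared vs. private L2 in a CMP.
# Every number here is an illustrative assumption, not data from the paper.
def amat(hit_latency, miss_rate, miss_penalty):
    return hit_latency + miss_rate * miss_penalty

MISS_PENALTY = 300                         # off-chip access, cycles (assumed)

shared  = amat(hit_latency=25, miss_rate=0.04, miss_penalty=MISS_PENALTY)
private = amat(hit_latency=12, miss_rate=0.07, miss_penalty=MISS_PENALTY)

print(f"shared  L2: {shared:5.1f} cycles")   # 25 + 0.04*300 = 37.0
print(f"private L2: {private:5.1f} cycles")  # 12 + 0.07*300 = 33.0
# Which side wins depends on sharing behaviour and miss penalty, which is why
# hybrid designs try to capture both properties.
```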
Managing Wire Delay in Large Chip-Multiprocessor Caches
"... In response to increasing (relative) wire delay, architects have proposed various technologies to manage the impact of slow wires on large uniprocessor L2 caches. Block migration (e.g., D-NUCA [27] and NuRapid [12]) reduces average hit latency by migrating frequently used blocks towards the lower-la ..."
Abstract
- Add to MetaCart
In response to increasing (relative) wire delay, architects have proposed various technologies to manage the impact of slow wires on large uniprocessor L2 caches. Block migration (e.g., D-NUCA [27] and NuRapid [12]) reduces average hit latency by migrating frequently used blocks towards the lower