Results 1–10 of 45
Pedram, Leakage current reduction in CMOS VLSI circuits by input vector control
IEEE Transactions on VLSI Systems
, 2004
Abstract

Cited by 44 (4 self)
Abstract—The first part of this paper describes two runtime mechanisms for reducing the leakage current of a CMOS circuit. In both cases, it is assumed that the system or environment produces a “sleep” signal that can be used to indicate that the circuit is in a standby mode. In the first method, the “sleep” signal is used to shift a new set of external inputs and preselected internal signals into the circuit, with the goal of setting the logic values of all of the internal signals so as to minimize the total leakage current in the circuit. This minimization is possible because the leakage current of a CMOS gate is strongly dependent on the input combination applied to its inputs. In the second method, nMOS and pMOS transistors are added to some of the gates in the circuit to increase the controllability of the internal signals of the circuit and decrease the leakage current of the gates using the “stack effect”. This is, however, done carefully so that the minimum leakage is achieved subject to a delay constraint for all input–output paths in the circuit. In both cases, Boolean satisfiability is used to formulate the problems, which are subsequently solved by employing a highly efficient SAT solver. Experimental results on the combinational circuits in the MCNC91 benchmark suite demonstrate that it is possible to reduce the leakage current in combinational circuits by an average of 25% with only a 5% delay penalty. The second part of this paper presents a design technique for applying the minimum leakage input to a sequential circuit. The proposed method uses the built-in scan chains in a VLSI circuit to drive it with the minimum leakage vector when it enters the sleep mode. The use of these scan registers eliminates the area and delay overhead of the additional circuitry that would otherwise be needed to apply the minimum leakage vector to the circuit.
Experimental results on the sequential circuits in the MCNC91 benchmark suite show that, by using the proposed method, it is possible to reduce the leakage by an average of 25% with practically no delay penalty.
Index Terms—Leakage current control, low-power design, minimum leakage vector, scan chain, VLSI circuits.
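The dependence of leakage on the applied input vector can be illustrated with a toy exhaustive search. The two-gate circuit and per-input leakage numbers below are hypothetical; the paper's actual method scales to large circuits via a SAT formulation rather than enumeration:

```python
from itertools import product

# Hypothetical leakage (arbitrary units) of a 2-input NAND as a function
# of its input combination: the stack effect makes (0, 0), with both
# series nMOS transistors off, the lowest-leakage state.
NAND2_LEAKAGE = {(0, 0): 1.0, (0, 1): 2.3, (1, 0): 1.8, (1, 1): 4.0}

def nand(a, b):
    return 1 - (a & b)

def circuit_leakage(a, b, c):
    """Leakage of a toy circuit: g1 = NAND(a, b); out = NAND(g1, c)."""
    g1 = nand(a, b)
    return NAND2_LEAKAGE[(a, b)] + NAND2_LEAKAGE[(g1, c)]

def minimum_leakage_vector():
    # Exhaustive search is feasible only for small input counts, which
    # is why the general problem is formulated for a SAT solver.
    return min(product([0, 1], repeat=3), key=lambda v: circuit_leakage(*v))
```

For this toy circuit the search returns the all-zero vector, since it puts both gates into low-leakage input states simultaneously.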
Microarchitecture-level power and thermal simulation considering a temperature-dependent leakage model
In Proceedings of ISLPED (Aug. 2003)
Abstract

Cited by 21 (5 self)
In this paper, we present power models with clock and temperature scaling, and develop the first coupled thermal and power simulation with a temperature-dependent leakage power model at the microarchitecture level. We show that leakage energy and total energy can differ by up to 2.5X and 2X, respectively, for temperatures between 90°C and 130°C. Given such large energy variations, no power model at the microarchitecture level is accurate without considering temperature-dependent leakage models.
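The roughly exponential growth of leakage with temperature that drives these results can be sketched with a toy scaling function; the reference temperature and doubling interval below are illustrative constants, not the paper's fitted model:

```python
def leakage_scale(temp_c, ref_c=90.0, doubling_c=25.0):
    """Toy temperature scaling for leakage power: assume leakage grows
    exponentially with temperature, doubling every `doubling_c` degrees
    above the reference temperature `ref_c`. Constants are illustrative.
    """
    return 2.0 ** ((temp_c - ref_c) / doubling_c)
```

Under these assumed constants, leakage at 115°C is twice its value at the 90°C reference, which is why a power model that ignores on-die temperature can misestimate leakage energy by a large factor.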
A Combined Gate Replacement and Input Vector Control Approach for Leakage Current Reduction
 IEEE Transactions on Very Large Scale Integration Systems
, 2006
Abstract

Cited by 19 (0 self)
Due to the increasing role of leakage power in a CMOS circuit’s total power dissipation, leakage reduction has attracted a lot of attention recently. Input vector control (IVC) takes advantage of the transistor stack effect to apply the minimum leakage vector (MLV) to the primary inputs of the circuit during the standby mode. However, IVC techniques become less effective for circuits of large logic depth because the MLV at the primary inputs has little impact on internal gates at high logic levels. In this paper, we propose a technique to overcome this limitation by directly controlling the inputs to the internal gates that are in their worst leakage states. Specifically, we propose a gate replacement technique that replaces such gates with other library gates while maintaining the circuit’s correct functionality in the active mode. This modification of the circuit does not require changes to the design flow, but it opens the door for further leakage reduction when the MLV is not effective. We then describe a divide-and-conquer approach that combines the gate replacement and input vector control techniques. It integrates an algorithm that finds the optimal MLV for tree circuits, a fast gate replacement heuristic, and a genetic algorithm that connects the tree circuits. We have conducted experiments on all the MCNC91 benchmark circuits. The results reveal that 1) the gate replacement technique itself can provide 10% more leakage current reduction over the best known IVC methods with no delay penalty and little area increase; 2) the divide-and-conquer approach outperforms the best pure IVC method by 24% and the existing control point insertion method by 12%; 3) when we obtain the optimal MLV for small circuits from exhaustive search, the proposed gate replacement alone can still reduce leakage current by 13%, while the divide-and-conquer approach reduces it by 17%. ∗Parts of this manuscript will appear in the 42nd ACM/IEEE Design Automation Conference.
Fixed priority scheduling for reducing overall energy on variable voltage processors
In 25th IEEE Real-Time Systems Symposium
, 2004
Abstract

Cited by 15 (3 self)
Abstract—While Dynamic Voltage Scaling (DVS) is an efficient technique for reducing the dynamic energy consumption of a CMOS processor, methods that employ DVS without considering leakage current are quickly becoming less efficient when considering the processor’s overall energy consumption. A leakage-conscious DVS voltage schedule may require the processor to run at a higher-than-necessary speed to execute a given set of real-time tasks, which can result in a large number of idle intervals. To effectively reduce the energy consumption during these idle intervals, and therefore the overall energy consumption, the DVS schedule must dictate that the processor both enter and leave the power-down state during these idle intervals, while carefully considering the time and energy cost of doing so. In this paper, we present a scheduling technique that can effectively reduce the overall energy consumption for hard real-time systems scheduled according to a fixed-priority (FP) scheme. Experimental results demonstrate that a processor using our strategy consumes as little as 15% of the idle energy of a processor employing the conventional strategy.
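The "time and energy cost" consideration the abstract mentions is, at its core, a break-even test on each idle interval. A minimal sketch, with all parameter names and the example numbers hypothetical:

```python
def should_power_down(idle_time, e_transition, p_idle, p_sleep, t_transition):
    """Decide whether entering the sleep state during an idle interval
    saves energy. Sleeping pays off when the energy saved over the
    interval exceeds the one-time cost of entering and leaving the
    power-down state, and the interval is long enough for the
    transition to complete at all. All names are illustrative.
    """
    if idle_time <= t_transition:
        return False
    saved = (p_idle - p_sleep) * idle_time
    return saved > e_transition

# Example: a 10 ms idle interval, 2 mJ transition cost, 400 mW idle vs
# 20 mW sleep power -> saved = 0.38 W * 0.010 s = 3.8 mJ > 2 mJ: sleep.
```

A leakage-aware scheduler can then deliberately run faster than strictly necessary so that idle intervals grow long enough to pass this test.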
A Heuristic to Determine Low Leakage Sleep State Vectors for CMOS Combinational Circuits
 in Proc. ICCAD
, 2003
Abstract

Cited by 13 (1 self)
Input vector control has been used to minimize the leakage power consumption of a circuit in the sleep state [1]. In this paper, we present a novel heuristic for determining a low-leakage vector to be applied to a circuit in the sleep state. The heuristic is a greedy search based on the controllability of nodes in the circuit and uses the functional dependencies among cells in the circuit to guide the search. Results on a set of ISCAS and MCNC benchmark circuits show that in all cases our heuristic returns a vector having a leakage within 5% of that of the vector obtained using an extensive random search, with orders of magnitude improvement in computational speed.
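A greedy low-leakage-vector search of this general flavor can be sketched as follows. The fixed left-to-right input order and the caller-supplied leakage estimator are stand-ins for the paper's controllability- and dependency-guided versions:

```python
def greedy_low_leakage_vector(n_inputs, leakage_of):
    """Greedy sketch: fix one input at a time to whichever value gives
    the lower estimated circuit leakage, holding already-fixed inputs
    and leaving not-yet-fixed inputs at a default of 0. `leakage_of`
    maps a full input tuple to an estimated leakage value.
    """
    vector = [0] * n_inputs
    for i in range(n_inputs):
        best = min((0, 1), key=lambda bit: leakage_of(tuple(
            bit if j == i else vector[j] for j in range(n_inputs))))
        vector[i] = best
    return tuple(vector)
```

Unlike exhaustive search, this visits only 2n candidate evaluations for n inputs, which is where the orders-of-magnitude speedup over extensive random search comes from; the quality then depends entirely on how well the search order and estimator reflect the circuit.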
Accurate Stacking Effect Macro-Modeling of Leakage Power
 in Proceedings of the 18th International Conference on VLSI Design
, 2005
Abstract

Cited by 12 (0 self)
Abstract—An accurate and efficient stacking-effect macro-model for leakage power in sub-100nm circuits is presented in this paper. Leakage power, including subthreshold leakage power and gate leakage power, is becoming more significant compared to dynamic power as technology scales down below 100nm. Consequently, fast and accurate leakage power estimation models, which are strongly dependent on precise modeling of the stacking effect on subthreshold leakage and gate leakage, are vital for evaluating optimizations. In this work, making use of the interactions between subthreshold leakage and gate leakage, we focus our attention on analyzing the effects of transistor stacking on the gate leakage between the channel and the gate and that between the drain/source and the gate. The contribution of the latter has been largely ignored in prior work, while our work shows that it is an important factor. Based on the stacking effect analysis, we propose a new best input vector to reduce the total leakage power, and an efficient and accurate leakage power estimation macro-model which achieves a mean error of 3.1% when compared to HSPICE.
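As rough intuition for the stack effect this macro-model captures, subthreshold leakage falls sharply with each extra series OFF transistor. A toy model with an illustrative constant attenuation per stage (the paper's macro-model is instead derived from the interaction of subthreshold and gate leakage and validated against HSPICE):

```python
def stack_leakage(i_single, n_off, reduction_per_stage=0.1):
    """Toy stack-effect model: each additional series OFF transistor
    attenuates subthreshold leakage by a constant factor relative to a
    single OFF device leaking `i_single`. Constants are illustrative.
    """
    if n_off < 1:
        raise ValueError("need at least one OFF transistor")
    return i_single * reduction_per_stage ** (n_off - 1)
```

This is why input vectors that turn off more transistors in series (e.g. both inputs low on a NAND pull-down stack) are preferred as low-leakage sleep states.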
Optimal Simultaneous Module and Multi-Voltage Assignment for Low Power
 ACM TODAES
, 2006
Abstract

Cited by 6 (4 self)
Reducing power consumption through high-level synthesis has attracted growing interest from researchers due to its large potential for power reduction. In this work we study functional unit binding (or module assignment) given a scheduled data flow graph under a multi-Vdd framework. We assume that each functional unit can be driven by different Vdd levels dynamically during run time to save dynamic power. We develop a polynomial-time optimal algorithm for assigning low Vdds to as many operations as possible under the resource and latency constraints, while at the same time minimizing total switching activity through functional unit binding. Our algorithm shows consistent improvement over a design flow that separates voltage assignment from functional unit binding. We also change the initial scheduling to examine power/energy-latency tradeoff scenarios under different voltage level combinations. Experimental results show that we can achieve 28.1% and 33.4% power reductions when the latency bound is the tightest with two and three Vdd levels, respectively, compared with the single-Vdd case. When latency is relaxed, multi-Vdd offers larger power reductions (up to 46.7%). We also show comparison data of energy consumption under the same experimental settings.
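As intuition for voltage assignment under a latency bound, here is a greedy sketch on a single chain of dependent operations. It is not the paper's optimal polynomial-time algorithm, and all names and delay values are illustrative:

```python
def assign_vdd(ops, deadline, vdd_delays):
    """Walk a chain of operations in order; use the low supply voltage
    (slower but lower power) whenever the accumulated delay still lets
    the remaining operations meet the latency bound at high Vdd,
    otherwise fall back to high Vdd. `vdd_delays` maps 'low'/'high'
    to a per-operation delay.
    """
    t, assignment = 0.0, []
    for _ in ops:
        remaining = len(ops) - len(assignment) - 1
        if t + vdd_delays['low'] + remaining * vdd_delays['high'] <= deadline:
            assignment.append('low')
            t += vdd_delays['low']
        else:
            assignment.append('high')
            t += vdd_delays['high']
    return assignment
```

The tension this exposes (spending slack on low-Vdd operations versus keeping it in reserve) is exactly what the paper resolves optimally, jointly with binding operations to functional units.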
Enhanced leakage reduction technique by gate replacement
 In Proc. of the 42nd DAC
, 2005
Abstract

Cited by 5 (0 self)
The input vector control (IVC) technique utilizes the stack effect in CMOS circuits to apply the minimum leakage vector (MLV) to the circuit in the sleep mode to reduce leakage. Additional logic gates can be inserted as control points to make it more effective. In this paper, we propose a gate replacement technique that further enhances the leakage reduction. The basic idea is to replace a gate that is in its worst leakage state with another library gate while keeping the circuit’s correct functionality in the active mode. We also develop a divide-and-conquer approach that integrates a fast gate replacement heuristic, an optimal MLV search strategy for tree circuits, and a genetic algorithm to connect the tree circuits. We conduct experiments on the MCNC91 benchmark circuits. The results reveal that our technique can reduce an additional 10% to 24% of leakage over the best known IVC methods and the optimal MLV, with no delay penalty and little area increase.
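The gate replacement idea can be illustrated on a single gate: a 2-input NAND stuck in its worst leakage state (1, 1) during sleep is swapped for a 3-input library NAND whose extra input is driven by an active-low sleep signal. This is a hypothetical example of the principle, not the paper's replacement rules:

```python
def nand2(a, b):
    return 1 - (a & b)

def nand3(a, b, c):
    return 1 - (a & b & c)

WORST_NAND2_STATE = (1, 1)  # fully-on pull-down stack: highest leakage

def replaced(a, b, slp):
    """NAND2 replaced by NAND3 with an active-low sleep signal as the
    third input. In active mode slp = 1, so the gate behaves exactly
    like the original NAND2; in sleep mode slp = 0 inserts an OFF
    transistor into the pull-down stack, invoking the stack effect.
    """
    return nand3(a, b, slp)

# Active-mode functional equivalence for every input pair:
assert all(replaced(a, b, 1) == nand2(a, b) for a in (0, 1) for b in (0, 1))
```

Because equivalence only needs to hold in active mode, the replacement gate has freedom in sleep mode to sit in a low-leakage state regardless of what the MLV can force onto its logic inputs.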
An efficient voltage scaling algorithm for complex SoCs with few number of voltage modes
 In Proceedings of the 2004 ISLPED
, 2004
Abstract

Cited by 5 (0 self)
Increasing demand for larger high-performance applications requires developing more complex systems with hundreds of processing cores on a single chip. To allow dynamic voltage scaling in each on-chip core individually, many on-chip voltage regulators must be used. However, the limitations in the implementation of on-chip inductors can reduce the efficiency, accuracy, and number of voltage modes generated by the regulators. Therefore, future voltage scheduling algorithms must be efficient, even in the presence of few voltage modes, and fast, in order to handle complex applications. Techniques proposed to date need many fine-grained voltage modes to produce energy-efficient results, and their quality degrades significantly as the number of modes decreases. This paper presents a new technique called Adaptive Stochastic Gradient Voltage and Task Scheduling (ASG-VTS) that quickly generates very energy-efficient results irrespective of the number of available voltage modes. The results of comparing our algorithm to the most efficient approaches (RVS and EEGLSA) show that in the presence of only four valid modes, ASG-VTS saves up to 26% and 33% more energy, respectively. Other approaches require at least ten modes to reach the same level of energy saving that ASG-VTS achieves with only four modes. Therefore our algorithm can also be used to explore and minimize the number of required voltage levels in the system.
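As a baseline for scheduling with only a few discrete modes, a common approach is to round the required speed up to the slowest feasible mode; the quality loss of this rounding is what grows as modes become coarser, and it is the gap techniques like ASG-VTS aim to close by also rescheduling tasks. A minimal sketch (mode values are illustrative normalized speeds):

```python
def pick_mode(required_speed, modes):
    """Return the slowest available voltage/frequency mode whose speed
    still meets the required speed; raise if none is fast enough.
    `modes` is an iterable of normalized speeds, e.g. [0.25, 0.5, 1.0].
    """
    feasible = [m for m in modes if m >= required_speed]
    if not feasible:
        raise ValueError("no mode fast enough for the required speed")
    return min(feasible)
```

For example, a task needing speed 0.6 on a regulator offering {0.25, 0.5, 0.75, 1.0} must run at 0.75, wasting the 0.15 of headroom that a finer-grained regulator would reclaim.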