
Logic synthesis and optimization benchmarks user guide, version 3.0 (1991)

by S. Yang
Results 1 - 10 of 331

Cost-effective approach for reducing soft error failure rate in logic circuits

by Kartik Mohanram, Nur A. Touba , 2003
Abstract - Cited by 108 (8 self)
In this paper, a new paradigm for designing logic circuits with concurrent error detection (CED) is described. The key idea is to exploit the asymmetric soft error susceptibility of nodes in a logic circuit. Rather than target all modeled faults, CED is targeted towards the nodes that have the highest soft error susceptibility to achieve cost-effective tradeoffs between overhead and reduction in the soft error failure rate. Under this new paradigm, we present one particular approach that is based on partial duplication and show that it is capable of reducing the soft error failure rate significantly with a fraction of the overhead required for full duplication. A procedure for characterizing the soft error susceptibility of nodes in a logic circuit, and a heuristic procedure for selecting the set of nodes for partial duplication are described. A full set of experimental results demonstrate the cost-effective tradeoffs that can be achieved.
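The partial-duplication idea summarized above can be illustrated with a small greedy sketch. The node names, susceptibility values, and area costs below are invented for illustration; the paper's actual characterization and selection heuristics are more involved.

```python
# Illustrative greedy node selection for partial duplication: duplicate the
# nodes with the highest soft error susceptibility until an area budget
# (a fraction of full-duplication cost) is exhausted. All values are made up.

def select_nodes_for_partial_duplication(susceptibility, area_cost, budget):
    """susceptibility: node -> estimated soft-error susceptibility
    area_cost: node -> cost of duplicating (and comparing) that node
    budget: total area allowed for the duplicated logic"""
    chosen, used = [], 0.0
    # Most susceptible nodes first: they dominate the failure rate.
    for node in sorted(susceptibility, key=susceptibility.get, reverse=True):
        if used + area_cost[node] <= budget:
            chosen.append(node)
            used += area_cost[node]
    return chosen

sus = {"n1": 0.40, "n2": 0.35, "n3": 0.15, "n4": 0.10}
cost = {"n1": 3.0, "n2": 2.0, "n3": 1.0, "n4": 1.0}
print(select_nodes_for_partial_duplication(sus, cost, budget=5.0))  # ['n1', 'n2']
```

A real flow would derive the susceptibility and cost figures from circuit analysis rather than hard-coded tables.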

MONA Implementation Secrets

by Nils Klarlund, Anders Møller, Michael Schwartzbach , 2000
Abstract - Cited by 84 (6 self)
The MONA tool provides an implementation of the decision procedures for the logics WS1S and WS2S. It has been used for numerous applications, and it is remarkably efficient in practice, even though it faces a theoretically non-elementary worst-case complexity. The implementation has matured over a period of six years. Compared to the first naive version, the present tool is faster by several orders of magnitude. This speedup is obtained from many different contributions working on all levels of the compilation and execution of formulas. We present a selection of implementation "secrets" that have been discovered and tested over the years, including formula reductions, DAGification, guided tree automata, three-valued logic, eager minimization, BDD-based automata representations, and cache-conscious data structures. We describe these techniques and quantify their respective effects by experimenting with separate versions of the MONA tool that in turn omit each of them.
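One of the techniques listed above, DAGification, amounts to hash-consing of subformulas: structurally identical subterms are built once and shared, so the formula tree becomes a DAG and repeated subformulas are processed a single time. A minimal sketch of the idea (the class and method names are made up, not MONA's API):

```python
# Hash-consing ("DAGification") sketch: a pool maps a node's structure to
# a single canonical object, so identical subformulas share one node.
class FormulaPool:
    def __init__(self):
        self._pool = {}

    def node(self, op, *children):
        key = (op,) + children          # children are already shared nodes
        if key not in self._pool:
            self._pool[key] = key       # first time: register a fresh node
        return self._pool[key]

p = FormulaPool()
x = p.node("var", "x")
a = p.node("and", x, p.node("not", x))
b = p.node("and", x, p.node("not", x))
print(a is b, len(p._pool))  # identical subformulas share one node: True 3
```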

Citation Context

...nc91 bbsse.mona – verification of sequential hardware circuits; the first verifies that an 8-bit von Neumann adder is equivalent to a standard carry-chain adder, the second is a benchmark from MCNC91 [29]. Provided by Sebastian Mödersheim. xbar theory.mona – encodes a part of a theory of natural languages in the Chomsky tradition. It was used to verify the theory and led to the discovery of mistakes i...

Temperature and supply voltage aware performance and power modeling at microarchitecture level

by Weiping Liao, Lei He, Kevin M. Lepak - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems , 2005
Abstract - Cited by 75 (5 self)
Performance and power are two primary design issues for systems ranging from server computers to handhelds. Performance is affected by both temperature and supply voltage because of the temperature and voltage dependence of circuit delay. Furthermore, as semiconductor technology scales down, leakage power's exponential dependence on temperature and supply voltage becomes significant. Therefore, future design studies call for temperature and voltage aware performance and power modeling. In this paper, we study microarchitecture-level temperature and voltage aware performance and power modeling. We present a leakage power model with temperature and voltage scaling, and show that leakage and total energy vary by 38% and 24%, respectively, between 65°C and 110°C. We study thermal runaway induced by the interdependence between temperature and leakage power, and demonstrate that without temperature-aware modeling, underestimation of leakage power may lead to the failure of thermal controls, and overestimation of leakage power may result in excessive performance penalties of up to 5.24%. All of these studies underscore the necessity of temperature-aware power modeling. Furthermore, we study optimal voltage scaling for best performance with dynamic power and thermal management under different packaging options. We show that dynamic power and thermal management allows designs to target the common-case thermal scenario among benchmarks and improves performance by 6.59% compared to designs targeted at the worst-case thermal scenario without dynamic power and thermal management. Additionally, the optimal supply voltage for best performance may not be the largest allowed by the given packaging platform, and advanced cooling techniques can improve throughput significantly. Index Terms: Floorplan, leakage power, microarchitecture, temperature, thermal management.
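The exponential temperature and voltage dependence of leakage described above can be caricatured with a toy scaling model. The functional form, coefficients, and reference point below are placeholders for illustration, not the paper's fitted formula.

```python
import math

# Illustrative (not the paper's) empirical leakage scaling: subthreshold
# leakage grows roughly exponentially with temperature and supply voltage.
# a, b, T0, V0 are made-up placeholder constants.
def leakage_scale(T, V, T0=338.0, V0=1.0, a=0.02, b=2.0):
    """Scale factor relative to the leakage at (T0, V0); T in kelvin."""
    return math.exp(a * (T - T0)) * math.exp(b * (V - V0))

# e.g. going from 65°C (~338 K) to 110°C (~383 K) at nominal voltage:
print(round(leakage_scale(383.0, 1.0), 2))  # 2.46, i.e. ~2.5x more leakage
```

The point of such a model is the one the abstract makes: ignoring the temperature term badly underestimates leakage at high operating temperatures.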

Citation Context

...rent is . Based on these observations, we propose the following formula for considering temperature and voltage scaling: Fig. 1. I of random logic. The circuits are selected from the MCNC'91 benchmark set [26], including circuits for ALU, control, multiplier, decoder, counter, etc. We apply a genetic algorithm presented in [25] to obtain the input vectors for both maximum and minimum leakage currents. First...

Symmetry detection and dynamic variable ordering of decision diagrams

by Shipra Panda, Fabio Somenzi, Bernard F. Plessier - In Proceedings of the International Conference on Computer-Aided Design , 1994
Abstract - Cited by 68 (2 self)
Knowing that some variables are symmetric in a function has numerous applications; in particular, it can help produce better variable orders for Binary Decision Diagrams (BDDs) and related data structures (e.g., Algebraic Decision Diagrams). It has been conjectured that there always exists an optimum order for a BDD wherein symmetric variables are contiguous. We propose a new algorithm for the detection of symmetries, based on dynamic reordering, and we study its interaction with the reordering algorithm itself. We show that combining sifting with an efficient symmetry check for contiguous variables results in the fastest symmetry detection algorithm reported to date and produces better variable orders for many BDDs. The overhead on the sifting algorithm is negligible.
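The symmetry property the abstract relies on can be stated directly: variables x_i and x_j are symmetric in f iff swapping them never changes f. A brute-force sketch over an explicit truth table (the efficient check in the paper works on the BDD representation instead of enumerating all inputs):

```python
from itertools import product

# f is symmetric in positions i and j iff f is invariant under swapping
# them on every input vector. f is a plain predicate over a bit tuple.
def symmetric(f, n, i, j):
    for bits in product([0, 1], repeat=n):
        swapped = list(bits)
        swapped[i], swapped[j] = bits[j], bits[i]
        if f(bits) != f(tuple(swapped)):
            return False
    return True

maj = lambda b: b[0] + b[1] + b[2] >= 2   # majority: totally symmetric
g   = lambda b: b[0] and not b[1]         # not symmetric in (0, 1)
print(symmetric(maj, 3, 0, 1), symmetric(g, 2, 0, 1))  # True False
```

For a totally symmetric function such as majority, any variable order with the symmetric variables contiguous is as good as any other, which is what the contiguity conjecture exploits.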

Citation Context

... for which the analysis of the circuit suggests that they should be kept close. 5 Experimental Results In this section we present experiments conducted on several circuits from the IWLS benchmark set [Yan91] and on some additional circuits. Tables 1 and 2 summarize the runs of different combinations of ordinary sifting and sifting combined with symmetry check. In all experiments, BDDs are built for the c...

Gate sizing to radiation harden combinational logic

by Quming Zhou, Kartik Mohanram , 2006
Abstract - Cited by 50 (3 self)
A gate-level radiation hardening technique for cost-effective reduction of the soft error failure rate in combinational logic circuits is described. The key idea is to exploit the asymmetric logical masking probabilities of gates, hardening gates that have the lowest logical masking probability to achieve cost-effective tradeoffs between overhead and soft error failure rate reduction. The asymmetry in the logical masking probabilities at a gate is leveraged by decoupling the physical from the logical (Boolean) aspects of soft error susceptibility of the gate. Gates are hardened to single-event upsets (SEUs) with specified worst case characteristics in increasing order of their logical masking probability, thereby maximizing the reduction in the soft error failure rate for specified overhead costs (area, power, and delay). Gate sizing for radiation hardening uses a novel gate (transistor) sizing technique that is both efficient and accurate. A full set of experimental results for process technologies ranging from 180 to 70 nm demonstrates the cost-effective tradeoffs that can be achieved. On average, the proposed technique has a radiation hardening overhead of 38.3%, 27.1%, and 3.8% in area, power, and delay for worst case SEUs across the four process technologies.
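The logical masking probability used above, the fraction of input vectors under which a flip at a gate's output never reaches a primary output, can be sketched exhaustively on a toy circuit. The circuit and fault site below are invented; the paper computes these probabilities on real benchmark netlists.

```python
from itertools import product

# Exhaustive logical-masking sketch for the 3-input circuit
# out = (a AND b) OR c, with the fault injected at the AND gate's output.
def masking_probability(n_inputs, out_good, out_faulty):
    total = masked = 0
    for v in product([0, 1], repeat=n_inputs):
        total += 1
        if out_good(v) == out_faulty(v):   # the flip did not propagate
            masked += 1
    return masked / total

out_good   = lambda v: (v[0] & v[1]) | v[2]          # fault-free circuit
out_faulty = lambda v: (1 - (v[0] & v[1])) | v[2]    # AND output flipped
print(masking_probability(3, out_good, out_faulty))  # 0.5: masked exactly when c = 1
```

A gate with a low masking probability (flips usually propagate) is hardened first, since that buys the largest failure-rate reduction per unit of overhead.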

A New Method to Express Functional Permissibilities for LUT based FPGAs and Its Applications

by Shigeru Yamashita, Hiroshi Sawada, Akira Nagoya - In International Conference on Computer Aided Design, pp. 254-261 , 1996
Abstract - Cited by 47 (5 self)
This paper presents a new method to express functional permissibilities for look-up table (LUT) based field programmable gate arrays (FPGAs). The method represents functional permissibilities by using sets of pairs of functions, not by incompletely specified functions. It makes good use of the properties of LUTs, such as the fact that their internal logic can be freely changed. The permissibilities expressed by the proposed method have the desired property that at many points of a network they can be simultaneously treated. Applications of the proposed method are also presented; a method to optimize networks and a method to remove connections that are obstacles at the routing step. Preliminary experimental results are given to show the effectiveness of our proposed method.

1 Introduction

Because of their low cost, re-programmability and rapid turnaround times, field programmable gate arrays (FPGAs) have emerged as an attractive means to implement low volume applications and prototypes [1]. FPGAs ...

Citation Context

...automatically. This approach raises the possibility of successful automatic routing. 5 Experimental Results We have implemented the methods presented here and performed preliminary experiments on MCNC [16] benchmark circuits. BDDs were used for representing functions, and the maximum number of usable BDD nodes was limited to 1,000,000. Therefore, some large circuits, e.g., C3540, C7552, C2670, etc., coul...

Phased logic: supporting the synchronous design paradigm with delay-insensitive circuitry

by D. Linder, J. Harden - IEEE Transactions on Computers , 1996
Abstract - Cited by 46 (0 self)
Abstract not found

Exact and heuristic algorithms for the minimization of incompletely specified state machines

by June-kyung Rho, Gary D. Hachtel, Fabio Somenzi, Reily M. Jacoby - IEEE Transactions on Computer-Aided Design , 1994
Abstract - Cited by 43 (0 self)
In this paper we present two exact algorithms for state minimization of FSMs. Our results prove that exact state minimization is feasible for a large class of practical examples, certainly including most hand-designed FSMs. We also present heuristic algorithms that can handle large, machine-generated FSMs. The possibly many different reduced machines with the same number of states have different implementation costs. We discuss two steps of the minimization procedure, called state mapping and solution shrinking, that have received little prior attention in the literature, though they play a significant role in delivering an optimally implemented reduced machine. We also introduce an algorithm whose main virtue is the ability to cope with very general cost functions, while providing high performance.
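The standard first step of such minimization, computing the compatible state pairs of an incompletely specified machine, can be sketched as an iterative refinement. Two states are compatible if their outputs agree wherever both are specified and every input leads to compatible (or unspecified) next states. The tiny machine below is invented for illustration; None marks a don't-care.

```python
from itertools import combinations

def compatible_pairs(states, inputs, delta, lam):
    """delta: state -> input -> next state (or None); lam: state -> input -> output (or None)."""
    # Start from all pairs whose specified outputs never conflict.
    comp = {(s, t) for (s, t) in combinations(sorted(states), 2)
            if all(lam[s][i] is None or lam[t][i] is None or lam[s][i] == lam[t][i]
                   for i in inputs)}
    changed = True
    while changed:          # refine: drop pairs whose successors conflict
        changed = False
        for (s, t) in list(comp):
            for i in inputs:
                ns, nt = delta[s][i], delta[t][i]
                if ns is None or nt is None or ns == nt:
                    continue
                if tuple(sorted((ns, nt))) not in comp:
                    comp.discard((s, t))
                    changed = True
                    break
    return comp

lam   = {"A": {0: 0, 1: 0}, "B": {0: 0, 1: None}, "C": {0: 1, 1: 0}}
delta = {"A": {0: "A", 1: "C"}, "B": {0: "B", 1: "C"}, "C": {0: "A", 1: "A"}}
print(sorted(compatible_pairs(["A", "B", "C"], [0, 1], delta, lam)))  # [('A', 'B')]
```

Exact minimization then searches for a minimum closed cover of compatibles, which is where the combinatorial difficulty, and the paper's contribution, lies.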

Pueblo: A hybrid pseudo-Boolean SAT solver

by Hossein M. Sheini, Karem A. Sakallah - Journal on Satisfiability, Boolean Modeling and Computation , 2006
Abstract - Cited by 36 (0 self)
This paper introduces a new hybrid method for efficiently integrating Pseudo-Boolean (PB) constraints into generic SAT solvers in order to solve PB satisfiability and optimization problems. To achieve this, we adopt the cutting-plane technique to draw inferences among PB constraints and combine it with generic implication graph analysis for conflict-induced learning. Novel features of our approach include a light-weight and efficient hybrid learning and backjumping strategy for analyzing PB constraints and CNF clauses in order to simultaneously learn both a CNF clause and a PB constraint with minimum overhead and use both to determine the backtrack level. Several techniques for handling the original and learned PB constraints are introduced. Overall, our method benefits significantly from the pruning power of the learned PB constraints, while keeping the overhead of adding them into the problem low. In this paper, we also address two other methods for solving PB problems, namely Integer Linear Programming (ILP) and pre-processing to CNF SAT, and present a thorough comparison between them and our hybrid method. Experimental comparison of our method against other hybrid approaches is also demonstrated. Additionally, we provide details of the MiniSAT-based implementation of our solver Pueblo to enable the reader to construct a similar one.
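The propagation step such a solver performs on a PB constraint can be sketched for a single constraint of the form sum(c_i * x_i) >= k with positive coefficients: if setting some free x_i = 0 would make the bound unreachable even with every other free variable at 1, that x_i is implied true. The function name and interface below are illustrative, not Pueblo's.

```python
# Unit propagation on one pseudo-Boolean constraint sum(c_i * x_i) >= k.
def pb_propagate(coeffs, k, assignment):
    """coeffs: var -> positive coefficient; assignment: var -> 0/1, or absent/None if free."""
    # Slack = best still-achievable total (unassigned and true vars at 1) minus k.
    slack = sum(c for v, c in coeffs.items() if assignment.get(v) != 0) - k
    if slack < 0:
        return "conflict", []                  # unsatisfiable under this assignment
    implied = [v for v, c in coeffs.items()
               if assignment.get(v) is None and c > slack]  # forced to 1
    return "ok", implied

# 3a + 2b + c >= 4 with b already false: slack = (3 + 1) - 4 = 0,
# so every free variable with a positive coefficient is forced true.
print(pb_propagate({"a": 3, "b": 2, "c": 1}, 4, {"b": 0}))  # ('ok', ['a', 'c'])
```

A CNF clause is the special case where every coefficient and the bound are 1, which is why a hybrid CNF/PB engine can share one propagation framework.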

Citation Context

...lts. 5. Experimental Analysis We present comprehensive experimental analysis of the methods described in this paper using the benchmarks in [31]. These benchmarks include instances of logic synthesis [38], prime DIMACS [30], FPGA and global routing [1], the progressive party problem [35] and model RB [37]. Detailed results and analysis of the performance of various solvers on these benchmarks are repo...

High-Level Area and Power Estimation for VLSI Circuits

by Mahadevamurty Nemani, Farid N. Najm , 1997
Abstract - Cited by 35 (4 self)
This paper addresses the problem of computing the area complexity of a multi-output combinational logic circuit, given only its functional description, i.e., Boolean equations, where area complexity is measured in terms of the number of gates required for an optimal multilevel implementation of the combinational logic. The proposed area model is based on transforming the given multi-output Boolean function description into an equivalent single-output function. The model is empirical, and results demonstrating its feasibility and utility are presented. Also, a methodology for converting the gate count estimates, obtained from the area model, into capacitance estimates is presented. High-level power estimates based on the total capacitance estimates and average activity estimates are also presented.

Citation Context

...el description of the function. In this paper we adopt the above model for estimating the power. The above power approximation was tested on several benchmark circuits from the ISCAS-89 [15] and MCNC [16] benchmark suites. These circuits (described at the gate level) were simulated under realistic gate delay models, for randomly generated vector sequences, for input probabilities ranging from 0.1 to 0...
