Results 1–10 of 32
Trustworthy Numerical Computation in Scala
Abstract

Cited by 13 (4 self)
Modern computing has adopted the floating point type as a default way to describe computations with real numbers. Thanks to dedicated hardware support, such computations are efficient on modern architectures, even in double precision. However, rigorous reasoning about the resulting programs remains difficult. This is in part due to a large gap between the finite floating point representation and the infinite-precision real-number semantics that serves as the developers’ mental model. Because programming languages do not provide support for estimating errors, some computations in practice are performed more and some less precisely than needed. We present a library solution for rigorous arithmetic computation. Our numerical data type library tracks a (double) floating point value, but also a guaranteed upper bound on the error between this value and the ideal value that would be computed in the real-value semantics. Our implementation involves a set of linear approximations based on an extension of affine arithmetic. The derived approximations cover most of the standard mathematical operations, including trigonometric functions, and are more comprehensive than any publicly available ones. Moreover, while interval arithmetic rapidly yields overly pessimistic estimates, our approach remains precise for several computational tasks of interest. We evaluate the library on a number of examples from numerical analysis and physical simulations. We found it to be a useful tool for gaining confidence in the correctness of the computation.
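The contrast between interval and affine arithmetic noted in this abstract is easy to demonstrate with a toy affine form (a minimal sketch of the general technique, not the Scala library described above):

```python
# Toy affine arithmetic: a value is a center plus linear terms in
# noise symbols eps_i, each ranging over [-1, 1]. This is a minimal
# illustration of the idea, not the library from the abstract.
class Affine:
    def __init__(self, center, noise=None):
        self.center = center
        self.noise = dict(noise or {})  # noise-symbol id -> coefficient

    def __add__(self, other):
        terms = dict(self.noise)
        for k, c in other.noise.items():
            terms[k] = terms.get(k, 0.0) + c
        return Affine(self.center + other.center, terms)

    def __sub__(self, other):
        terms = dict(self.noise)
        for k, c in other.noise.items():
            terms[k] = terms.get(k, 0.0) - c
        return Affine(self.center - other.center, terms)

    def radius(self):
        # Guaranteed bound on |true value - center|.
        return sum(abs(c) for c in self.noise.values())

# x in [1, 3]: center 2, one noise symbol with coefficient 1.
x = Affine(2.0, {0: 1.0})
print((x - x).radius())  # 0.0 -- the shared noise symbol cancels
```

Because both operands share the same noise symbol, x - x cancels exactly; plain interval arithmetic forgets the correlation and widens [1, 3] - [1, 3] to [-2, 2], which is one source of the pessimism mentioned above.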
Automatic Accuracy-Guaranteed Bit-Width Optimization for Fixed and Floating-Point Systems
 In Proc. FPL
, 2007
Abstract

Cited by 11 (1 self)
In this paper we present Minibit+, an approach that optimizes the bit-widths of fixed-point and floating-point designs, while guaranteeing accuracy. Our approach adopts different levels of analysis, giving the designer the opportunity to terminate it at any stage to obtain a result. Range analysis is achieved using a combined affine and interval arithmetic approach to reduce the number of bits. Precision analysis involves a coarse-grain and a fine-grain analysis. The best representation, in fixed-point or floating-point, for the numbers is then chosen based on the range, precision and latency. Three case studies are used: discrete cosine transform, B-Splines and RGB to YCbCr color conversion. Our analysis can run over 200 times faster than current approaches to this problem while producing more accurate results, on average within 2–3% of an exhaustive search.
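The range-analysis step common to such bit-width optimizers can be sketched as follows (an illustrative simplification using plain interval arithmetic and one common sizing rule; Minibit+'s combined affine/interval analysis is more refined):

```python
import math

def mul_range(a, b):
    # Interval product: the result range is spanned by endpoint products.
    (alo, ahi), (blo, bhi) = a, b
    prods = [alo * blo, alo * bhi, ahi * blo, ahi * bhi]
    return min(prods), max(prods)

def integer_bits(rng, signed=True):
    # Integer bits needed so a fixed-point format covers rng
    # (one common sizing rule, not the tool's exact rule).
    m = max(abs(rng[0]), abs(rng[1]))
    bits = max(1, math.ceil(math.log2(m + 1)))
    return bits + (1 if signed else 0)

# y = a * x with a in [-3, 3] and x in [-2, 2]  =>  y in [-6, 6].
y_range = mul_range((-3, 3), (-2, 2))
print(y_range, integer_bits(y_range))  # (-6, 6) 4
```

Propagating ranges forward like this sizes the integer part of each signal; the fractional part is then chosen by the separate precision analysis the abstract describes.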
Unobtrusive Methods
 in Social Research
, 2000
Abstract

Cited by 9 (1 self)
Neonates in the neonatal intensive care unit (NICU) frequently take nutrition via feeding tubes [1] and central venous catheters (CVCs) [2], and it is important that tubes and catheters be placed in their intended positions because misplacement can cause complications, such as aspiration or perforation of the gastrointestinal tract for feeding tubes [3], or sepsis or thromboembolism for CVCs [4]. Although plain radiography is generally accepted as a gold-standard method for determining the correct placement of tubes/catheters, it is sometimes difficult to detect their tips in radiographs, especially when diameters are extremely small. Moreover, the localization of tubes and catheters may be more difficult in an NICU setting in the absence of a dedicated high-resolution picture archiving and communication system (PACS) monitor, when neonates undergo radiography in a supine position with a portable device without any breath holding. The purpose of this study was to evaluate the abilities of pediatric residents to identify placement of nutrition tubes and intravenous (IV) nutrition catheters in neonates in an NICU setting using plain radiographs.
Synthesis of Minimal-Error Control Software
Abstract

Cited by 6 (1 self)
Software implementations of controllers for physical systems are at the core of many embedded systems. The design of controllers uses the theory of dynamical systems to construct a mathematical control law that ensures that the controlled system has certain properties, such as asymptotic convergence to an equilibrium point, and optimizes some performance criteria such as LQR-LQG. However, owing to quantization errors arising from the use of fixed-point arithmetic, the implementation of this control law can only guarantee practical stability: under the actions of the implementation, the trajectories of the controlled system converge to a bounded set around the equilibrium point, and the size of the bounded set is proportional to the error in the implementation. The problem of verifying whether a controller implementation achieves practical stability for a given bounded set has been studied before. In this paper, we change the emphasis from verification to automatic synthesis. We give a technique to synthesize embedded control software that is Pareto optimal w.r.t. both performance criteria and practical stability regions. Our technique uses static analysis to estimate quantization-related errors for specific controller implementations, and performs stochastic local search over the space of possible controllers using particle swarm optimization. The effectiveness of our technique is illustrated using several standard control system examples: in most examples, we find controllers with close-to-optimal LQR-LQG performance but with implementation errors, and hence regions of practical stability, several times smaller.
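The practical-stability phenomenon the abstract describes is easy to reproduce with a toy scalar system (the gain and bit-width here are invented for illustration and unrelated to the paper's benchmarks):

```python
def quantize(v, frac_bits):
    # Round-to-nearest fixed-point value with frac_bits fractional bits.
    scale = 1 << frac_bits
    return round(v * scale) / scale

a = 0.8391   # an arbitrary stable gain, |a| < 1 (invented for illustration)
x = 1.0
for _ in range(200):
    x = quantize(a * x, 8)   # each step rounds the product to 8 fractional bits

# In exact arithmetic x -> 0. The quantized trajectory instead settles at
# a small nonzero value, bounded roughly by half an LSB divided by 1 - |a|.
print(x)
```

The state converges not to the equilibrium but into a bounded set whose size scales with the quantization error, which is exactly why only practical stability can be guaranteed for such implementations.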
Instrumented multi-stage word-length optimization
 In Proceedings of the International Conference on Field-Programmable Technology
, 2007
Abstract

Cited by 4 (2 self)
In this paper we present a tool, LengthFinder, for optimizing word-lengths of hardware designs with fixed-point arithmetic based on analytical error models that guarantee accuracy. LengthFinder adopts a multi-stage approach, with four novel features. First, the code analysis stage selects loops to instrument, such that information about the number of iterations can be extracted to generate more accurate results. Second, aggressive heuristics are used to produce non-uniform word-lengths rapidly while meeting requirements from the guaranteed error functions. Third, a method capable of reducing the search space has been developed for data partitioning with a variable word-length reduction. Fourth, a genetic algorithm with selective crossover and high mutation probability is applied to obtain near-optimal results. The benefits of LengthFinder are illustrated with various case studies. We show that LengthFinder can run over 200 times faster than previous techniques [6], while producing more accurate results, relative to values obtained from integer linear programming.
Synthesis of Fixed-Point Programs
Abstract

Cited by 4 (1 self)
Several problems in the implementations of control systems, signal-processing systems, and scientific computing systems reduce to compiling a polynomial expression over the reals into an imperative program using fixed-point arithmetic. Fixed-point arithmetic only approximates real values, and its operators do not have the fundamental properties of real arithmetic, such as associativity. Consequently, a naive compilation process can yield a program that significantly deviates from the real polynomial, whereas a different order of evaluation can result in a program that is close to the real value on all inputs in its domain. We present a compilation scheme for real-valued arithmetic expressions to fixed-point arithmetic programs. Given a real-valued polynomial expression t, we find an expression t′ that is equivalent to t over the reals, but whose implementation as a series of fixed-point operations minimizes the error between the fixed-point value and the value of t over the space of all inputs. We show that the corresponding decision problem, checking whether there is an implementation t′ of t whose error is less than a given constant, is NP-hard. We then propose a solution technique based on genetic programming. Our technique evaluates the fitness of each candidate program using a static analysis based on affine arithmetic. We show that our tool can significantly reduce the error in the fixed-point implementation on a set of linear control system benchmarks. For example, our tool found implementations whose errors are only one half of the errors in the original fixed-point expressions.
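The sensitivity to evaluation order can be seen with a toy fixed-point model that rounds after every operation (the coefficients, input range and bit-width are invented for illustration; the paper's genetic-programming search and affine-arithmetic analysis are not reproduced here):

```python
FRAC = 6  # fractional bits; every operation rounds its result

def q(v):
    s = 1 << FRAC
    return round(v * s) / s

def qadd(a, b): return q(a + b)
def qmul(a, b): return q(a * b)

def exact(x):
    return 0.1 * x * x + 0.2 * x + 0.3

def expanded(x):     # ((0.1*x)*x + 0.2*x) + 0.3, all ops rounded
    return qadd(qadd(qmul(qmul(q(0.1), x), x), qmul(q(0.2), x)), q(0.3))

def horner(x):       # (0.1*x + 0.2)*x + 0.3, all ops rounded
    return qadd(qmul(qadd(qmul(q(0.1), x), q(0.2)), x), q(0.3))

xs = [i / 16 for i in range(-32, 33)]          # inputs in [-2, 2]
err_expanded = max(abs(expanded(v) - exact(v)) for v in xs)
err_horner = max(abs(horner(v) - exact(v)) for v in xs)
print(err_expanded, err_horner)
```

The two forms are equivalent over the reals yet generally commit different rounding error over the input domain, which is the gap the compilation scheme exploits by searching for the best-behaved equivalent expression.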
A Scalable Precision Analysis Framework
Abstract

Cited by 2 (1 self)
In embedded computing, typically some form of silicon area or power budget restricts the potential performance achievable. For algorithms with limited dynamic range, custom hardware accelerators manage to extract significant additional performance for such a budget via mapping operations in the algorithm to fixed-point. However, for complex applications requiring floating-point computation, the potential performance improvement over software is reduced. Nonetheless, custom hardware can still customise the precision of floating-point operators, unlike software, which is restricted to IEEE standard single or double precision, to increase the overall performance at the cost of increasing the error observed in the final computational result. Unfortunately, because it is difficult to determine if this error increase is tolerable, this task is rarely performed. We present a new analytical technique to calculate bounds on the range or relative error of output variables, enabling custom hardware accelerators to be tolerant of floating-point errors by design. In contrast to existing tools that perform this task, our approach scales to larger examples and obtains tighter bounds, within a smaller execution time. Furthermore, it allows a user to trade the quality of bounds with execution time of the procedure, making it suitable for both small and large-scale algorithms.
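The precision/error trade-off underlying such analyses can be sketched with the standard model of rounding (a first-order illustration only; the paper's technique computes much tighter, program-specific bounds):

```python
def gamma(n_ops, p):
    # Standard worst-case rounding factor gamma_n = n*u / (1 - n*u),
    # where u = 2^-p is the unit roundoff of a p-bit significand
    # (hidden bit included): p = 53 for IEEE double, p = 24 for single.
    u = 2.0 ** -p
    assert n_ops * u < 1.0
    return n_ops * u / (1.0 - n_ops * u)

# Shrinking the significand from double to single precision raises the
# bound on the relative error of a 10-operation chain by roughly nine
# orders of magnitude.
print(gamma(10, 53))   # ~1.1e-15
print(gamma(10, 24))   # ~6.0e-7
```

Evaluating such a bound for candidate significand widths is the kind of question a precision-analysis framework answers, only with per-variable bounds rather than this single worst-case factor.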
A Bit Too Precise? Bounded Verification of Quantized Digital Filters
Abstract

Cited by 2 (1 self)
Digital filters are simple yet ubiquitous components of a wide variety of digital processing and control systems. Errors in the filters can be catastrophic. Traditionally, digital filters have been verified using methods from control theory and extensive testing. We study two alternative verification techniques: bit-precise analysis and real-valued error approximations. In this paper, we empirically evaluate several variants of these two fundamental approaches for verifying fixed-point implementations of digital filters. We design our comparison to reveal the best possible approach towards verifying real-world designs of infinite impulse response (IIR) digital filters. Our study reveals broader insights into cases where bit-reasoning is absolutely necessary and suggests efficient approaches using modern satisfiability-modulo-theories (SMT) solvers.
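The real-valued error-approximation approach can be illustrated on a first-order IIR filter (coefficients and bit-width invented for illustration; this is a simulation-plus-analytic-bound sketch, not the paper's verification procedure):

```python
def q(v, frac_bits=10):
    # Round-to-nearest fixed-point rounding with frac_bits fractional bits.
    s = 1 << frac_bits
    return round(v * s) / s

def iir_step_response(a, b, n, rnd=lambda v: v):
    # First-order IIR filter y[k] = a*y[k-1] + b*x[k] on a unit-step input,
    # optionally rounding the accumulator after every step.
    y, out = 0.0, []
    for _ in range(n):
        y = rnd(a * y + b * 1.0)
        out.append(y)
    return out

a, b = 0.9, 0.1            # invented, stable filter coefficients
ideal = iir_step_response(a, b, 100)
fixed = iir_step_response(a, b, 100, rnd=q)
err = max(abs(u - v) for u, v in zip(ideal, fixed))

# Real-valued error approximation: a per-step rounding error of at most
# 2^-11 (half an LSB) accumulates to at most 2^-11 / (1 - |a|).
print(err)
```

When such an analytic bound already certifies the required accuracy, the expensive bit-precise (SMT-based) analysis can be skipped; the study above maps out when it cannot.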
Symbolic noise analysis approach to computational hardware optimization
 DAC 2008, 45th ACM/IEEE, 8–13 June 2008, pages 391–396
Abstract

Cited by 2 (0 self)
This paper addresses the problem of computational error modeling and analysis. Choosing different word-lengths for each functional unit in hardware implementations of numerical algorithms always results in an optimization problem of trading computational error with implementation costs. In this study, a symbolic noise analysis method is introduced for high-level synthesis, which is based on symbolic modeling of the error bounds, where the error symbols are considered to be specified with a probability distribution function over a known range. The ability to combine word-length optimization with high-level synthesis parameters and costs to minimize the overall design cost is demonstrated using case studies.
Novel algorithms for word-length optimization
 European Association for Signal Processing (EURASIP), 2011
Abstract

Cited by 2 (0 self)
Digital signal processing applications are specified with floating-point data types, but they are usually implemented in embedded systems with fixed-point arithmetic to minimize cost and power consumption. The floating-to-fixed-point conversion requires an optimization algorithm to determine a combination of optimum word-lengths, one for each operator. This paper proposes new algorithms based on the Greedy Randomized Adaptive Search Procedure (GRASP): accuracy-based GRASP and accuracy/cost-based GRASP. These algorithms are iterative stochastic local searches, and they produce the best results across many test cases, including IIR, NLMS and FFT filters.
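A GRASP-style word-length search can be sketched with a toy error/cost model (both models and all parameters are invented for illustration; the paper's accuracy- and cost-based variants are more elaborate): repeatedly build a greedy solution, choosing each word-length reduction at random from the feasible candidates, and keep the cheapest feasible solution found.

```python
import random

def error(w):
    # Toy accuracy model: each operator with word-length w_i
    # contributes 2^-w_i of quantization error.
    return sum(2.0 ** -wi for wi in w)

def cost(w):
    # Toy cost model: total number of bits.
    return sum(w)

def grasp(n_ops, err_budget, max_w=16, iters=30, seed=0):
    rng = random.Random(seed)
    best = [max_w] * n_ops
    for _ in range(iters):
        w = [max_w] * n_ops
        while True:
            # Candidate list: reductions that keep the error within budget.
            cand = []
            for i in range(n_ops):
                if w[i] > 1:
                    w[i] -= 1
                    if error(w) <= err_budget:
                        cand.append(i)
                    w[i] += 1
            if not cand:
                break
            w[rng.choice(cand)] -= 1   # randomized greedy step
        if cost(w) < cost(best):
            best = w
    return best

best = grasp(4, err_budget=0.01)
print(best, cost(best), error(best))
```

The randomized candidate choice lets repeated restarts escape the local optima that a purely greedy reduction would get stuck in, which is the essential GRASP ingredient.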