Results 1-10 of 4,733
64-bit Floating-Point FPGA Matrix Multiplication
In ACM/SIGDA Field-Programmable Gate Arrays, 2005
Cited by 48 (6 self)
Abstract: "... We introduce a 64-bit ANSI/IEEE Std 754-1985 floating-point design of a hardware matrix multiplier optimized for FPGA implementations. A general block matrix multiplication algorithm, applicable to an arbitrary matrix size, is proposed. The algorithm potentially enables optimum performance by exploi ..."
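The block-decomposition idea in this abstract is easy to illustrate in software. Below is a minimal NumPy sketch of blocked matrix multiplication; the block size and loop order here are illustrative choices, not taken from the paper:

```python
import numpy as np

def blocked_matmul(A, B, block=4):
    """Multiply A (m x k) by B (k x n) one sub-block at a time.

    Computing C tile by tile is what lets a fixed-size hardware
    multiplier handle arbitrarily large matrices: only small tiles
    of A and B need to be resident at any moment.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n))
    for i in range(0, m, block):
        for j in range(0, n, block):
            for p in range(0, k, block):
                C[i:i+block, j:j+block] += (
                    A[i:i+block, p:p+block] @ B[p:p+block, j:j+block]
                )
    return C
```

The result matches a direct product; NumPy slices safely clip at array boundaries, so ragged edge blocks need no special casing.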
64-bit Floating-Point Coprocessor Instruction Set
Abstract: "... The uMFPU64 floating-point coprocessor provides extensive support for 32-bit IEEE 754 compatible floating-point and integer operations, 64-bit IEEE 754 compatible floating-point and integer operations, and local peripheral device support. A typical calculation involves sending instructions and data ..."
Pipelined Datapath for an IEEE-754 64-Bit Floating-Point Jacobi Solver
Abstract: "... Solving linear equations is essential for certain embedded applications such as adaptive beamforming and synthetic aperture radar. When direct methods like Cholesky factorization are not viable, it becomes necessary to use an iterative approach. Even when the convergence of basic iterative methods like Jacobi or Gauss-Seidel cannot be guaranteed, they are often used as preconditioners for more advanced methods like generalized minimum residual (GMRES) [1]. This paper presents a binary tree datapath for an IEEE-754 64-bit floating-point Jacobi iterative solver. The datapath component ..."
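As background to this entry, the Jacobi iteration the title refers to can be sketched in a few lines. This is a plain NumPy illustration of the textbook method, not the paper's hardware datapath:

```python
import numpy as np

def jacobi(A, b, iters=100):
    """Textbook Jacobi iteration: x <- D^-1 (b - R x), where D is
    the diagonal of A and R = A - D is the off-diagonal remainder.

    Converges when A is strictly diagonally dominant; as the
    abstract notes, convergence is not guaranteed in general.
    """
    D = np.diag(A)              # diagonal entries, shape (n,)
    R = A - np.diag(D)          # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x
```

Each sweep touches every equation independently, which is exactly what makes the method attractive for a parallel pipelined datapath.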
A Survey on 64-Bit Floating-Point Multiplier Based on Vedic Multiplication Techniques
Abstract: "... Floating-point multiplication is the most important operation in areas such as graph theory, multidimensional graphics, digital signal processing, and high-performance computing. Computers use binary numbers, and although more precision is always desirable, binary numbers were found to be precise enough for most scientific and engineering calculations, so it was decided to double the amount of memory allocated. Binary floating-point numbers are represented in Single and Double formats: Single consists of 32 bits and Double consists of 64 bits ..."
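The Single (32-bit) and Double (64-bit) formats mentioned here can be inspected directly. This short Python sketch dumps a value's bit pattern in both formats; per IEEE 754 the fields are 1 sign + 8 exponent + 23 fraction bits for single and 1 sign + 11 exponent + 52 fraction bits for double:

```python
import struct

def float_bits(x):
    """Return the bit patterns of x in both IEEE 754 formats as
    hex strings: (single, double).

    Single: 1 sign + 8 exponent + 23 fraction bits (32 total).
    Double: 1 sign + 11 exponent + 52 fraction bits (64 total).
    """
    single = struct.unpack(">I", struct.pack(">f", x))[0]
    double = struct.unpack(">Q", struct.pack(">d", x))[0]
    return f"{single:08x}", f"{double:016x}"
```

For example, `float_bits(1.0)` returns `("3f800000", "3ff0000000000000")`: the same value 1.0, encoded with the two different exponent biases (127 and 1023).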
Exploiting Mixed Precision Floating Point Hardware in Scientific Computations
2007
Cited by 5 (0 self)
Abstract: "... By using a combination of 32-bit and 64-bit floating-point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here can apply not only to conventional proc ..."
Exploiting the Performance of 32-bit Floating Point Arithmetic in Obtaining 64-bit Accuracy
In Proceedings of the 2006 ACM/IEEE Conference on Supercomputing, 2006
Cited by 54 (9 self)
Abstract: "... Recent versions of microprocessors exhibit performance characteristics for 32-bit floating-point arithmetic (single precision) that are substantially higher than for 64-bit floating-point arithmetic (double precision). Examples include Intel's Pentium IV and M processors and AMD's Opteron architectures ..."
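This entry and the one above describe the same core technique: do the expensive solve in single precision and recover double-precision accuracy by iterative refinement. A minimal NumPy sketch of the idea (illustrative only; real implementations factor the matrix once and reuse the factors, and they monitor conditioning):

```python
import numpy as np

def mixed_precision_solve(A, b, refinements=5):
    """Solve Ax = b with single-precision solves plus
    double-precision residual correction (iterative refinement).

    The O(n^3) solves run in float32; only the O(n^2) residual is
    computed in float64, yet for reasonably conditioned A the
    refined answer reaches near double-precision accuracy.
    """
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(refinements):
        r = b - A @ x                                   # float64 residual
        d = np.linalg.solve(A32, r.astype(np.float32))  # cheap correction
        x = x + d.astype(np.float64)
    return x
```

In a production version the LU factorization of `A32` is computed once and reused for every correction solve; that reuse is where the single-precision speedup actually comes from.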
High Throughput Compression of Double-Precision Floating-Point Data
In Data Compression Conference, 2007
Cited by 16 (4 self)
Abstract: "... This paper describes FPC, a lossless compression algorithm for linear streams of 64-bit floating-point data. FPC is designed to compress well while at the same time meeting the high throughput demands of scientific computing environments. On our thirteen datasets, it achieves a substantially higher ..."
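The core trick in FPC-style compressors (predict each double, XOR with the prediction, and encode only the significant bytes) can be sketched compactly. This simplified illustration uses last-value prediction in place of FPC's FCM/DFCM hash-table predictors, and byte-granular packing in place of FPC's tighter encoding:

```python
import struct

def compress_stream(values):
    """Simplified FPC-style encoder: predict each 64-bit double
    with the previous one, XOR with the prediction, and store only
    the significant tail bytes after a one-byte length header.
    (Real FPC uses FCM/DFCM predictors and packs more tightly.)
    """
    prev = 0
    out = bytearray()
    for v in values:
        bits = struct.unpack(">Q", struct.pack(">d", v))[0]
        xor = bits ^ prev
        prev = bits
        tail = xor.to_bytes(8, "big").lstrip(b"\x00")
        out.append(len(tail))      # 0..8 significant bytes follow
        out.extend(tail)
    return bytes(out)

def decompress_stream(data, count):
    """Inverse of compress_stream for `count` values."""
    prev = 0
    vals = []
    i = 0
    for _ in range(count):
        n = data[i]
        i += 1
        xor = int.from_bytes(data[i:i+n], "big")
        i += n
        prev ^= xor
        vals.append(struct.unpack(">d", struct.pack(">Q", prev))[0])
    return vals
```

Streams whose consecutive values share high-order bits produce XORs full of leading zero bytes, and dropping those bytes is where the compression comes from; the scheme is lossless because it operates on the raw bit patterns.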
The Quickhull Algorithm for Convex Hulls
In ACM Transactions on Mathematical Software, 1996
Cited by 713 (0 self)
Abstract: "... The convex hull of a set of points is the smallest convex set that contains the points. This article presents a practical convex hull algorithm that combines the two-dimensional Quickhull algorithm with the general-dimension Beneath-Beyond algorithm. It is similar to the randomized, incremental algo ... is implemented with floating-point arithmetic, this assumption can lead to serious errors. We briefly describe a solution to this problem when computing the convex hull in two, three, or four dimensions. The output is a set of 'thick' facets that contain all possible exact convex hulls of the input. A variation ..."
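For readers unfamiliar with the algorithm, here is a compact two-dimensional Quickhull sketch. The paper's qhull handles general dimension and the floating-point robustness issues the abstract discusses; this toy version assumes hashable point tuples with exact coordinates (e.g. integers), where the sign of a cross product is reliable:

```python
def quickhull_2d(points):
    """Two-dimensional Quickhull: split the points by the line
    through the two extreme points, then recursively grow each
    chain with the point farthest from the current edge."""
    def side(p, q, r):
        # Cross product; positive when r lies left of the line p->q.
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

    def extend(p, q, pts):
        # pts lie strictly left of p->q; return the hull chain between them.
        if not pts:
            return []
        far = max(pts, key=lambda r: side(p, q, r))
        return (extend(p, far, [r for r in pts if side(p, far, r) > 0])
                + [far]
                + extend(far, q, [r for r in pts if side(far, q, r) > 0]))

    pts = sorted(set(points))
    if len(pts) < 3:
        return pts
    a, b = pts[0], pts[-1]          # extreme points in x (ties broken by y)
    upper = [r for r in pts if side(a, b, r) > 0]
    lower = [r for r in pts if side(b, a, r) > 0]
    return [a] + extend(a, b, upper) + [b] + extend(b, a, lower)
```

The function returns the hull vertices in order around the boundary, discarding interior and collinear points; with inexact floating-point coordinates the sign tests can misclassify near-collinear points, which is precisely the failure mode the paper's 'thick facet' output addresses.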