Results 1–10 of 298
Krylov Projection Methods For Model Reduction
, 1997
Cited by 213 (3 self)
This dissertation focuses on efficiently forming reduced-order models for large, linear dynamic systems. Projections onto unions of Krylov subspaces lead to a class of reduced-order models known as rational interpolants. The cornerstone of this dissertation is a collection of theory relating Krylov projection to rational interpolation. Based on this theoretical framework, three algorithms for model reduction are proposed. The first algorithm, dual rational Arnoldi, is a numerically reliable approach involving orthogonal projection matrices. The second, rational Lanczos, is an efficient generalization of existing Lanczos-based methods. The third, rational power Krylov, avoids orthogonalization and is suited for parallel or approximate computations. The performance of the three algorithms is compared via a combination of theory and examples. Independent of the precise algorithm, a host of supporting tools are also developed to form a complete model-reduction package. Techniques for choosing the matching frequencies, estimating the modeling error, ensuring the model's stability, treating multiple-input multiple-output systems, implementing parallelism, and avoiding a need for exact factors of large matrix pencils are all examined to various degrees.
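The moment-matching property at the heart of such Krylov projection methods can be illustrated with a minimal one-sided (Galerkin) projection at the expansion point s = 0. This sketch is illustrative only, not an algorithm from the dissertation; the test system and all names are made up here:

```python
import numpy as np

def arnoldi(M, v, m):
    """Orthonormal basis for the Krylov subspace span{v, Mv, ..., M^(m-1) v}."""
    n = len(v)
    V = np.zeros((n, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(1, m):
        w = M @ V[:, j - 1]
        w -= V[:, :j] @ (V[:, :j].T @ w)   # orthogonalize against earlier vectors
        w -= V[:, :j] @ (V[:, :j].T @ w)   # reorthogonalize for numerical safety
        V[:, j] = w / np.linalg.norm(w)
    return V

def reduce_at_zero(A, b, c, m):
    """Project onto K_m(A^{-1}, A^{-1} b); the reduced model then matches the
    first m moments of H(s) = c (sI - A)^{-1} b about s = 0."""
    Ainv = np.linalg.inv(A)
    V = arnoldi(Ainv, Ainv @ b, m)
    return V.T @ A @ V, V.T @ b, c @ V

def moment(A, b, c, k):
    # k-th moment of H about s = 0, up to sign convention: c (A^{-1})^{k+1} b
    return c @ np.linalg.matrix_power(np.linalg.inv(A), k + 1) @ b

# Illustrative test system: symmetric negative definite, so it is stable.
rng = np.random.default_rng(0)
n, m = 30, 5
R = rng.standard_normal((n, n))
A = -(R @ R.T + n * np.eye(n))
b, c = rng.standard_normal(n), rng.standard_normal(n)
Ar, br, cr = reduce_at_zero(A, b, c, m)
```

The reduced triple (Ar, br, cr) is only m × m, yet reproduces the first m Taylor coefficients of the transfer function at s = 0, which is the rational-interpolation property the dissertation develops in general.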
Lie-group methods
 ACTA NUMERICA
, 2000
Cited by 154 (24 self)
Many differential equations of practical interest evolve on Lie groups or on manifolds acted upon by Lie groups. The retention of Lie-group structure under discretization is often vital in the recovery of qualitatively correct geometry and dynamics and in the minimization of numerical error. Having introduced requisite elements of differential geometry, this paper surveys the novel theory of numerical integrators that respect Lie-group structure, highlighting theory, algorithmic issues and a number of applications.
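As a toy illustration of the structure-preservation point (not an algorithm taken from the survey), the simplest Lie-group integrator, the Lie–Euler method, advances Q' = Q·hat(ω) on SO(3) by multiplying by a group exponential, so each update is an exact rotation and orthogonality is preserved; the classical forward Euler update drifts off the group:

```python
import numpy as np

def hat(w):
    """Map a 3-vector to the corresponding skew-symmetric matrix in so(3)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Matrix exponential of hat(w) via the Rodrigues formula."""
    th = np.linalg.norm(w)
    W = hat(w)
    if th < 1e-12:
        return np.eye(3) + W
    return np.eye(3) + (np.sin(th) / th) * W + ((1 - np.cos(th)) / th**2) * (W @ W)

def lie_euler(Q, w, h, steps):
    # Lie-Euler: right-multiply by the group exponential; stays on SO(3)
    for _ in range(steps):
        Q = Q @ so3_exp(h * w)
    return Q

def classical_euler(Q, w, h, steps):
    # Forward Euler in the ambient matrix space; leaves SO(3)
    for _ in range(steps):
        Q = Q @ (np.eye(3) + h * hat(w))
    return Q

w = np.array([0.3, -0.2, 0.5])
Qg = lie_euler(np.eye(3), w, 0.1, 200)
Qe = classical_euler(np.eye(3), w, 0.1, 200)
drift_lie = np.linalg.norm(Qg.T @ Qg - np.eye(3))
drift_euler = np.linalg.norm(Qe.T @ Qe - np.eye(3))
```

For this constant ω the Lie–Euler map happens to be the exact flow; the point of the comparison is that the update stays on SO(3) by construction, which remains true when ω varies from step to step.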
The Classical Moment Problem as a Self-Adjoint Finite Difference Operator
, 1998
Cited by 148 (8 self)
This is a comprehensive exposition of the classical moment problem using methods from the theory of finite difference operators. Among the advantages of this approach is that the Nevanlinna functions appear as elements of a transfer matrix and convergence of Padé approximants appears as the strong resolvent convergence of finite matrix approximations to a Jacobi matrix. As a bonus of this, we obtain new results on the convergence of certain Padé approximants for series of Hamburger.
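The Jacobi-matrix viewpoint is easy to demonstrate concretely: truncating the Jacobi matrix of a measure to an N × N section and diagonalizing it recovers the N-point Gauss quadrature rule for that measure (the Golub–Welsch observation). A sketch for the Legendre weight on [-1, 1], whose recurrence coefficients are known in closed form:

```python
import numpy as np

# Jacobi matrix of the Legendre weight on [-1, 1]: zero diagonal,
# off-diagonal entries beta_n = n / sqrt(4 n^2 - 1).
N = 5
n_idx = np.arange(1, N)
beta = n_idx / np.sqrt(4.0 * n_idx**2 - 1.0)
J = np.diag(beta, 1) + np.diag(beta, -1)

# Eigenvalues of the truncated Jacobi matrix are the Gauss-Legendre nodes;
# the squared first components of the eigenvectors, times mu_0 = 2 (the
# total mass of the weight), are the quadrature weights.
nodes, vecs = np.linalg.eigh(J)
weights = 2.0 * vecs[0, :] ** 2
```

This is the finite-dimensional shadow of the paper's theme: spectral data of finite sections of the Jacobi matrix approximate the measure solving the moment problem.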
NONLINEAR SEQUENCE TRANSFORMATIONS FOR THE ACCELERATION OF CONVERGENCE AND THE SUMMATION OF DIVERGENT SERIES
, 2003
Cited by 71 (12 self)
Slowly convergent series and sequences as well as divergent series occur quite frequently in the mathematical treatment of scientific problems. In this report, a large number of mainly nonlinear sequence transformations for the acceleration of convergence and the summation of divergent series are discussed. Some of the sequence transformations of this report, for instance Wynn's ε-algorithm or Levin's sequence transformation, are well established in the literature on convergence acceleration, but the majority of them are new. Efficient algorithms for the evaluation of these transformations are derived. The theoretical properties of the sequence transformations in convergence acceleration and summation processes are analyzed. Finally, the performance of the sequence transformations of this report is tested by applying them to certain slowly convergent and divergent series, which are hopefully realistic models for a large part of the slowly convergent or divergent series that can occur in scientific problems and in applied mathematics.
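Wynn's ε-algorithm, one of the classical transformations the report covers, fits in a few lines. A minimal, unoptimized table implementation, applied here to partial sums of the alternating harmonic series (which converges to ln 2 only like 1/n):

```python
import numpy as np

def wynn_epsilon(s):
    """Wynn's epsilon algorithm. s holds partial sums S_0..S_{n-1}; returns
    the highest-order table entry eps_{n-1}^{(0)}, which is a meaningful
    estimate of the limit when n is odd (even-subscript eps are estimates)."""
    n = len(s)
    eps = np.zeros((n + 1, n))
    eps[1, :] = s   # row k stores eps_{k-1}; row 0 is the auxiliary eps_{-1} = 0
    for k in range(2, n + 1):
        for j in range(n - k + 1):
            eps[k, j] = eps[k - 2, j + 1] + 1.0 / (eps[k - 1, j + 1] - eps[k - 1, j])
    return eps[n, 0]

# Eleven partial sums of 1 - 1/2 + 1/3 - ... (limit ln 2).
terms = [(-1.0) ** (k + 1) / k for k in range(1, 12)]
partial = np.cumsum(terms)
accelerated = wynn_epsilon(partial)
```

With only eleven terms the raw partial sum is still off by about 0.04, while the ε-accelerated value agrees with ln 2 to many digits, which is the kind of gain the report quantifies systematically.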
Model reduction of state space systems via an Implicitly Restarted Lanczos method
 Numer. Algorithms
, 1996
Cited by 68 (8 self)
The nonsymmetric Lanczos method has recently received significant attention as a model reduction technique for large-scale systems. Unfortunately, the Lanczos method may produce an unstable partial realization for a given, stable system. To remedy this situation, inexpensive implicit restarts are developed which can be employed to stabilize the Lanczos-generated model.
Simulation of High-Speed Interconnects
 PROC. IEEE, MAY 2001
, 2001
Cited by 61 (3 self)
With the rapid developments in very large-scale integration (VLSI) technology and in design and computer-aided design (CAD) techniques, at both the chip and package level, operating frequencies are fast reaching the vicinity of gigahertz and switching times are getting to sub-nanosecond levels. The ever-increasing quest for high-speed applications has placed higher demands on interconnect performance and highlighted the previously negligible effects of interconnects, such as ringing, signal delay, distortion, reflections, and crosstalk. In this review paper, various high-speed interconnect effects are briefly discussed. In addition, recent advances in transmission line macromodeling techniques are presented. Also, simulation of high-speed interconnects using model-reduction-based algorithms is discussed in detail.
Approximating the logarithm of a matrix to specified accuracy
 SIAM J. Matrix Anal. Appl
, 2001
Cited by 47 (20 self)
The standard inverse scaling and squaring algorithm for computing the matrix logarithm begins by transforming the matrix to Schur triangular form in order to facilitate subsequent matrix square root and Padé approximation computations. A transformation-free form of this method that exploits incomplete Denman–Beavers square root iterations and aims for a specified accuracy (ignoring roundoff) is presented. The error introduced by using approximate square roots is accounted for by a novel splitting lemma for logarithms of matrix products. The number of square root stages and the degree of the final Padé approximation are chosen to minimize the computational work. This new method is attractive for high-performance computation since it uses only the basic building blocks of matrix multiplication, LU factorization and matrix inversion.
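The two building blocks, Denman–Beavers square-root iteration followed by inverse scaling and squaring with a Padé approximant, can be sketched as follows. This uses a fixed number of square roots and a crude [1/1] Padé step rather than the paper's incomplete iterations and adaptive degree selection:

```python
import numpy as np

def db_sqrt(A, iters=15):
    """Denman-Beavers iteration: Y converges to A^{1/2}, Z to A^{-1/2}."""
    Y = A.astype(float).copy()
    Z = np.eye(len(A))
    for _ in range(iters):
        # simultaneous update: both right-hand sides use the old Y and Z
        Y, Z = 0.5 * (Y + np.linalg.inv(Z)), 0.5 * (Z + np.linalg.inv(Y))
    return Y, Z

def logm_iss(A, k=6, iters=15):
    """Inverse scaling and squaring: take k square roots so A^(1/2^k) is close
    to I, apply the [1/1] Pade approximant log(I+E) ~ 2E(2I+E)^{-1}, scale back."""
    X = A.astype(float).copy()
    I = np.eye(len(A))
    for _ in range(k):
        X, _ = db_sqrt(X, iters)
    E = X - I
    return 2.0 ** k * (2.0 * E @ np.linalg.inv(2.0 * I + E))
```

Only matrix multiplications and inversions appear, which is the "basic building blocks" property the abstract emphasizes; the paper's contribution is doing this with controlled accuracy and incomplete square-root iterations.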
On the Laguerre method for numerically inverting Laplace transforms
 INFORMS Journal on Computing
, 1996
Cited by 42 (7 self)
The Laguerre method for numerically inverting Laplace transforms is an old, established method based on the 1935 Tricomi–Widder theorem, which shows (under suitable regularity conditions) that the desired function can be represented as a weighted sum of Laguerre functions, where the weights are coefficients of a generating function constructed from the Laplace transform using a bilinear transformation. We present a new variant of the Laguerre method based on: (1) using our previously developed variant of the Fourier-series method to calculate the coefficients of the Laguerre generating function, (2) developing systematic methods for scaling, and (3) using Wynn's ε-algorithm to accelerate convergence of the Laguerre series when the Laguerre coefficients do not converge to zero geometrically fast. These contributions significantly expand the class of transforms that can be effectively inverted by the Laguerre method. We provide insight into the slow convergence of the Laguerre coefficients as well as propose a remedy. Before acceleration, the rate of convergence can often be determined from the Laplace transform by applying Darboux's theorem. Even when the Laguerre coefficients converge to zero geometrically fast, it can be difficult to calculate the desired functions for large arguments because of roundoff errors. We solve this problem by calculating very small Laguerre coefficients with low relative error through appropriate scaling. We also develop another acceleration technique for the case in which the Laguerre coefficients converge to zero geometrically fast. We illustrate the effectiveness of our algorithm through numerical examples. Subject classifications: Mathematics, functions: Laplace transforms. Probability, distributions: calculation by transform inversion. Queues, algorithms: Laplace transform inversion.
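The basic (unscaled, unaccelerated) version of the method is compact: the bilinear substitution s = (1+z)/(2(1-z)) turns the transform into the Laguerre generating function Q(z) = F(s)/(1-z), its Taylor coefficients can be read off with an FFT on a circle of radius r < 1, and the Laguerre series is then summed directly. A sketch on F(s) = 1/(s+1), whose inverse is e^{-t}; the parameter choices N and r are illustrative, not the paper's:

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

def laguerre_invert(F, t, N=64, r=0.9):
    """Invert a Laplace transform F at time t via the Laguerre expansion
    f(t) = sum_n q_n e^{-t/2} L_n(t). The q_n are the Taylor coefficients of
    Q(z) = F((1+z)/(2(1-z))) / (1-z), recovered by an FFT on |z| = r."""
    z = r * np.exp(2j * np.pi * np.arange(N) / N)
    Q = F((1 + z) / (2 * (1 - z))) / (1 - z)
    # Trapezoidal rule on the circle; divide out r^n to undo the radius scaling
    q = np.fft.fft(Q).real / (N * r ** np.arange(N))
    return np.exp(-t / 2) * lagval(t, q)

f1 = laguerre_invert(lambda s: 1.0 / (s + 1.0), 1.0)
```

For this transform the coefficients decay like 3^{-n}, so the plain method already works well; the abstract's scaling and ε-algorithm machinery addresses the transforms for which they do not.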
Design of Hybrid Filter Banks for Analog/Digital Conversion
 IEEE Trans. on Signal Processing
, 1998
Cited by 39 (2 self)
This paper presents design algorithms for hybrid filter banks (HFBs) for high-speed, high-resolution conversion between analog and digital signals. The HFB is an unconventional class of filter bank that employs both analog and digital filters. When used in conjunction with an array of slower-speed converters, the HFB improves the speed and resolution of the conversion compared with the standard time-interleaved array conversion technique. The analog and digital filters in the HFB must be designed so that they adequately isolate the channels and do not introduce reconstruction errors that limit the resolution of the system. To design continuous-time analog filters for HFBs, a discrete-time-to-continuous-time ("Z-to-S") transform is developed to convert a perfect reconstruction (PR) discrete-time filter bank into a near-PR HFB; a computationally efficient algorithm based on the fast Fourier transform (FFT) is developed to design the digital filters for HFBs. A two-channel HFB is designed with sixth-order continuous-time analog filters and length-64 FIR digital filters that yield −86 dB average aliasing error. To design discrete-time analog filters (e.g., switched-capacitor or charge-coupled devices) for HFBs, a lossless factorization of a PR discrete-time filter bank is used so that reconstruction error is not affected by filter coefficient quantization. A gain normalization technique is developed to maximize the dynamic range in the finite-precision implementation. A four-channel HFB is designed with 9-bit (integer) filter coefficients. With these, internal aliasing error is −70 dB, and with the equivalent of 20 bits internal precision, maximum aliasing is −100 dB. The 9-bit filter coefficients degrade the stopband attenuation (compared with unquantized coefficients)...