Results 1 - 10 of 12,316
HOMOGENIZATION AND TWO-SCALE CONVERGENCE
, 1992
"... Following an idea of G. Nguetseng, the author defines a notion of "two-scale" convergence, which is aimed at a better description of sequences of oscillating functions. Bounded sequences in L2(Ω) are proven to be relatively compact with respect to this new type of convergence. A corrector ..."
Cited by 451 (14 self)
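For context, the notion this entry refers to can be stated as follows (a standard formulation of two-scale convergence, not quoted from the paper): a bounded sequence u_ε in L2(Ω) two-scale converges to u0 ∈ L2(Ω × Y) if, for every smooth test function ψ(x, y) that is Y-periodic in y,

```latex
\lim_{\varepsilon \to 0} \int_{\Omega} u_\varepsilon(x)\,
    \psi\!\left(x, \tfrac{x}{\varepsilon}\right) dx
  = \int_{\Omega} \int_{Y} u_0(x, y)\, \psi(x, y)\, dy\, dx .
```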
A NEW POLYNOMIAL-TIME ALGORITHM FOR LINEAR PROGRAMMING
 COMBINATORICA
, 1984
"... We present a new polynomial-time algorithm for linear programming. In the worst case, the algorithm requires O(n^3.5 L) arithmetic operations on O(L)-bit numbers, where n is the number of variables and L is the number of bits in the input. The running time of this algorithm is better than the ellipsoid ..."
Cited by 860 (3 self)
An iterative method for the solution of the eigenvalue problem of linear differential and integral operators
, 1950
"... The present investigation designs a systematic method for finding the latent roots and the principal axes of a matrix, without reducing the order of the matrix. It is characterized by a wide field of applicability and great accuracy, since the accumulation of rounding errors is avoided, through the process of "minimized iterations". Moreover, the method leads to a well-convergent successive approximation procedure by which the solution of integral equations of the Fredholm type and the solution of the eigenvalue problem of linear differential and integral operators may be accomplished. ..."
Cited by 537 (0 self)
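The "minimized iterations" procedure in this entry is the Lanczos method. A minimal sketch of the basic three-term recurrence follows; the function name and the NumPy-based setup are illustrative, not from the paper, and the sketch omits the reorthogonalization a robust implementation needs.

```python
import numpy as np

def lanczos(A, v0, m):
    """Lanczos sketch: build an m-step Krylov basis for symmetric A and
    return the tridiagonal matrix T whose eigenvalues approximate the
    extreme eigenvalues ("latent roots") of A, plus the basis V."""
    n = A.shape[0]
    V = np.zeros((n, m))
    alphas, betas = np.zeros(m), np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    V[:, 0] = v
    w = A @ v
    alphas[0] = v @ w
    w = w - alphas[0] * v
    for j in range(1, m):
        betas[j - 1] = np.linalg.norm(w)   # breakdown if this is ~0
        v = w / betas[j - 1]
        V[:, j] = v
        w = A @ v - betas[j - 1] * V[:, j - 1]
        alphas[j] = v @ w
        w = w - alphas[j] * v
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    return T, V
```

With m equal to the full dimension and a generic starting vector, the eigenvalues of T reproduce those of A; in practice m is kept much smaller.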
Bundle Adjustment - A Modern Synthesis
 VISION ALGORITHMS: THEORY AND PRACTICE, LNCS
, 2000
"... This paper is a survey of the theory and methods of photogrammetric bundle adjustment, aimed at potential implementors in the computer vision community. Bundle adjustment is the problem of refining a visual reconstruction to produce jointly optimal structure and viewing parameter estimates. Topics covered include: the choice of cost function and robustness; numerical optimization including sparse Newton methods, linearly convergent approximations, updating and recursive methods; gauge (datum) invariance; and quality control. The theory is developed for general robust cost functions rather than ..."
Cited by 562 (13 self)
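As a toy illustration of the refinement step this entry describes, a damped Gauss-Newton (Levenberg-Marquardt) iteration on a least-squares cost looks roughly as below. This is a generic sketch, not the paper's formulation: in real bundle adjustment the Jacobian is huge and sparse, while here it is dense and the problem is tiny.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-3, iters=50):
    """Damped Gauss-Newton sketch: minimize 0.5 * ||r(x)||^2 by repeatedly
    solving the damped normal equations (J'J + lam*I) dx = -J'r."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        A = J.T @ J + lam * np.eye(len(x))
        dx = np.linalg.solve(A, -J.T @ r)
        r_new = residual(x + dx)
        if r_new @ r_new < r @ r:
            x, lam = x + dx, lam * 0.5   # accept step, trust the model more
        else:
            lam *= 10.0                  # reject step, increase damping
    return x
```

For example, solving the small nonlinear system x0 + x1 = 3, x0 * x1 = 2 as a least-squares problem from a nearby starting point converges to the root (2, 1).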
LINEAR CONVERGENCE OF THE LZI ALGORITHM FOR WEAKLY POSITIVE TENSORS
, 2012
"... We define weakly positive tensors and study the relations among essentially positive tensors, weakly positive tensors, and primitive tensors. In particular, an explicit linear convergence rate of the Liu-Zhou-Ibrahim (LZI) algorithm for finding the largest eigenvalue of an irreducible nonnegative tensor ..."
Cited by 12 (10 self)
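The LZI algorithm is a shifted power iteration for the largest H-eigenvalue of a nonnegative tensor. A sketch for an order-3 tensor follows; the function name, shift value, and stopping rule (a fixed iteration count) are illustrative, not from the paper.

```python
import numpy as np

def shifted_tensor_power(A, rho=1.0, iters=200):
    """Shifted power iteration (the LZI idea) for the largest H-eigenvalue
    of a nonnegative order-3 tensor A of shape (n, n, n):
    iterate y = A x^2 + rho * x^[2], then set x to y^{1/2}, normalized."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        Ax2 = np.einsum('ijk,j,k->i', A, x, x)   # (A x^{2})_i
        y = Ax2 + rho * x**2                     # shift keeps iterates positive
        x = np.sqrt(y)
        x /= np.linalg.norm(x)
    Ax2 = np.einsum('ijk,j,k->i', A, x, x)
    return float(np.max(Ax2 / x**2)), x          # eigenvalue estimate, eigenvector
```

As a sanity check, the all-ones order-3 tensor of dimension n has largest H-eigenvalue n^2 (take x uniform in A x^2 = lambda x^[2]).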
The Linear Convergence of a Successive Linear Programming Algorithm
, 1996
"... We present a successive linear programming algorithm for solving constrained nonlinear optimization problems. The algorithm employs an Armijo procedure for updating a trust region radius. We prove the linear convergence of the method by relating the solutions of our subproblems to standard trust region ..."
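A toy version of the scheme this entry describes, restricted to the unconstrained case: with an infinity-norm trust region, the LP subproblem min g'd over ||d||_inf <= delta has the closed-form solution d = -delta * sign(g), and a ratio test grows or shrinks the radius. All names and the acceptance threshold are illustrative, not the paper's.

```python
import numpy as np

def slp_box(f, grad, x0, delta=1.0, iters=100, eta=0.1):
    """Successive-linear-programming sketch for unconstrained min f(x).
    Each step solves the box trust-region LP in closed form, then an
    Armijo-style ratio test updates the trust radius delta."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        g = grad(x)
        d = -delta * np.sign(g)    # LP minimizer over ||d||_inf <= delta
        pred = -(g @ d)            # decrease predicted by the linear model
        ared = f(x) - f(x + d)     # actual decrease
        if pred > 0 and ared >= eta * pred:
            x = x + d
            delta *= 2.0           # successful step: expand the radius
        else:
            delta *= 0.5           # reject: shrink the radius
    return x
```

On a smooth convex test function the iterates home in on the minimizer as delta is repeatedly halved.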
Local Linear Convergence of ADMM on Quadratic or Linear Programs
"... In this paper, we analyze the convergence of the Alternating Direction Method of Multipliers (ADMM) as a matrix recurrence for the particular case of a quadratic program or a linear program. We identify a particular combination of the vector iterates in the standard ADMM iteration that exhibits almo ..."
Cited by 2 (0 self)
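A minimal sketch of the ADMM iteration this entry analyzes, specialized to a box-constrained QP (the splitting, penalty parameter, and fixed iteration count are illustrative choices, not the paper's): the x-update solves an unconstrained quadratic, the z-update projects onto the box, and u accumulates the scaled dual residual.

```python
import numpy as np

def admm_box_qp(Q, q, lo, hi, rho=1.0, iters=200):
    """ADMM sketch for min 0.5 x'Qx + q'x subject to lo <= x <= hi."""
    n = len(q)
    x = z = u = np.zeros(n)
    M = np.linalg.inv(Q + rho * np.eye(n))   # factor once, reuse every iteration
    for _ in range(iters):
        x = M @ (rho * (z - u) - q)          # x-update: unconstrained quadratic
        z = np.clip(x + u, lo, hi)           # z-update: projection onto the box
        u = u + x - z                        # scaled dual update
    return z
```

For Q = I and q = (-3, -3) on the box [0, 1]^2, the unconstrained minimizer (3, 3) projects the iterates onto the active bound, and the method settles at (1, 1).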