Results 1–10 of 4,185,034
On a Variable Smoothing Procedure for Conjugate Gradient Type Methods
, 1995
"... The smoothing procedure has been introduced by Schonauer as an acceleration algorithm for the Generalized Conjugate Gradient methods. In this paper, after giving our definition of Conjugate Gradient type methods, and using a Generalized Hessenberg process, we introduce a Hybrid generalized Conjug ..."
Cited by 1 (0 self)
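The smoothing idea named in this abstract can be illustrated in a few lines. The sketch below is not the paper's hybrid method; it is a generic minimal-residual smoothing pass, in the spirit the abstract attributes to Schonauer, applied to an already-computed sequence of iterates for Ax = b. All names here are chosen for illustration:

```python
import numpy as np

def minimal_residual_smoothing(A, b, iterates):
    """Minimal-residual smoothing of a sequence of approximate solutions.

    Given iterates x_0, x_1, ... for Ax = b, produce smoothed iterates s_k
    whose residual norms are monotonically nonincreasing: each s_k is the
    point on the line through s_{k-1} and x_k with the smallest residual.
    """
    s = iterates[0].copy()
    t = b - A @ s                      # smoothed residual
    smoothed = [s.copy()]
    for x in iterates[1:]:
        r = b - A @ x
        d = r - t
        denom = d @ d
        # 1-D least squares: eta minimizes ||t + eta * d||
        eta = 0.0 if denom == 0.0 else -(t @ d) / denom
        s = s + eta * (x - s)
        t = t + eta * d                # equals b - A @ s by linearity
        smoothed.append(s.copy())
    return smoothed
```

Because eta = 0 and eta = 1 are always admissible, each smoothed residual norm is bounded by both its predecessor and the raw residual at that step, which is the acceleration effect the abstract refers to.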
MULTIGRID-CONJUGATE GRADIENT TYPE METHODS FOR REACTION-DIFFUSION SYSTEMS
, 2002
"... Multigrid-conjugate gradient type methods for reaction-diffusion systems ..."
On the performance of parallel normalized explicit preconditioned conjugate gradient type methods
 In IPDPS’2006, 20th International Parallel and Distributed Processing Symposium, Rhodes Island
, 2006
"... A new class of parallel normalized preconditioned conjugate gradient type methods in conjunction with normalized approximate inverses algorithms, based on normalized approximate factorization procedures, for solving sparse linear systems of irregular structure, which are derived from the finite elem ..."
Cited by 1 (0 self)
Minimization of edge-preserving regularization functional by conjugate gradient type methods
 Proc. 1st Int. Conf. PDE-based Image Proc
, 2006
"... Summary. Recently, a powerful two-phase method for removing impulse noise has been developed. It gives a satisfactory result even for images with 90% of pixels corrupted by impulse noise. However, the two-phase method is not computationally efficient, because it requires the minimization of a nonsmoo ..."
Cited by 6 (6 self)
convergence of the conjugate gradient type method applied to our functional. Simulation results show that our method is several times faster than the relaxation-based method when the noise ratio is high.
Solving general sparse linear systems using conjugate gradient-type methods
 In Proceedings of the 1990 International Conference on Supercomputing
, 1990
"... The problem of finding an approximation of z = A†b (where A† is the pseudoinverse of A ∈ R^{m×n} with m ≥ n and rank(A) = n) is discussed. It is assumed that A is sparse but has neither a special pattern (such as bandedness) nor a special property (such as symmetry or positive definiteness). In this paper it ..."
Cited by 7 (5 self)
it is shown that preconditioners obtained by neglecting small elements during the decomposition of A into easily invertible matrices can be used efficiently with conjugate gradient-type methods if an adaptive strategy for deciding when an element is small is implemented. The resulting precondi
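The pseudoinverse solution z = A†b that this abstract targets can be approximated with a conjugate-gradient-type iteration on the normal equations. The following CGLS sketch is not the paper's (preconditioned, adaptive) method; it only illustrates the unpreconditioned base iteration, without ever forming A^T A explicitly:

```python
import numpy as np

def cgls(A, b, tol=1e-10, max_iter=500):
    """Conjugate gradient on the normal equations A^T A x = A^T b (CGLS).

    For full-column-rank A, the result approximates the pseudoinverse
    solution x = A^+ b, i.e. the least-squares solution of Ax = b.
    """
    x = np.zeros(A.shape[1])
    r = b.copy()              # residual b - A x (x starts at zero)
    s = A.T @ r               # residual of the normal equations
    p = s.copy()
    gamma = s @ s
    for _ in range(max_iter):
        if np.sqrt(gamma) < tol:
            break
        q = A @ p
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```

In the sparse setting the paper addresses, `A @ p` and `A.T @ r` would be sparse matrix-vector products, and the incomplete-decomposition preconditioner described in the abstract would be applied around them.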
TWO CONJUGATE-GRADIENT-TYPE METHODS FOR UNSYMMETRIC LINEAR EQUATIONS
, 1988
"... Abstract. We propose two new conjugate-gradient-type methods for the solution of sparse unsymmetric linear systems. We present a new tridiagonalization process for unsymmetric matrices that is closely related to the Lanczos process. We use orthogonal factorizations of the tridiagonal matrix to deriv ..."
On a Conjugate Gradient-Type Method for Solving Complex Symmetric Linear Systems
, 1992
"... We consider large sparse linear systems Ax = b with complex symmetric coefficient matrices A = A^T which arise, e.g., from the discretization of partial differential equations with complex coefficients. For the solution of such systems we present a new conjugate gradient-type iterative method, CSY ..."
Cited by 13 (0 self)
Solution of Robust Linear Regression Problems by Preconditioned Conjugate Gradient Type Methods
, 2000
"... In this paper, we consider solving the robust linear regression problem by an iterative method. We show that the iteratively reweighted least squares method and Newton's method can each be combined with an iterative method to solve large, sparse, rectangular systems of linear, algebraic equations efficien ..."
. Finally, we give numerical results that demonstrate the effectiveness of the suggested preconditioners in solving robust linear regression problems. key words: Robust linear regression, Iteratively reweighted least squares method, Newton's method, New weighting function, Conjugate gradient least
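The iteratively reweighted least squares (IRLS) approach named in this abstract can be sketched briefly. This is not the paper's algorithm or its new weighting function: it is a generic IRLS loop with standard Huber weights, and it calls a dense least-squares solver where the paper would use a preconditioned conjugate-gradient-type inner iteration on the sparse weighted system:

```python
import numpy as np

def irls_robust(A, b, delta=1.0, n_iter=50):
    """Iteratively reweighted least squares with Huber weights.

    Each outer iteration solves a weighted least-squares problem; in the
    large sparse setting, that inner solve is where a CG-type method
    (e.g. CGLS with a preconditioner) would be substituted.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # ordinary LS start
    for _ in range(n_iter):
        r = b - A @ x
        # Huber weights: 1 inside the threshold, delta/|r| outside,
        # which downweights observations with large residuals (outliers)
        w = np.where(np.abs(r) <= delta,
                     1.0,
                     delta / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x
```

With a gross outlier present, the Huber weight on that row shrinks toward zero, so the fit is driven by the clean observations rather than dragged toward the outlier as ordinary least squares would be.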
Conjugate Gradient Type Methods for Solving Large Scale Eigenvalue Problems
, 2010
"... Let us recall that for given symmetric A, B ∈ R^{n×n} and B positive definite, the Rayleigh Quotient for the matrix pencil A − λB is defined by ρ(x) = x^T A x / x^T B x ..."
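The Rayleigh quotient in this snippet is the objective that gradient-based eigensolvers of this family minimize: its minimum over nonzero x is the smallest eigenvalue of the pencil A − λB. The sketch below is not the paper's method; it is plain gradient descent on ρ(x), included only to make the connection concrete:

```python
import numpy as np

def rayleigh_quotient(A, B, x):
    """Generalized Rayleigh quotient rho(x) = x^T A x / x^T B x."""
    return (x @ A @ x) / (x @ B @ x)

def rq_gradient_descent(A, B, x0, step=0.1, n_iter=2000):
    """Plain gradient descent on rho; the minimum is the smallest
    eigenvalue of the pencil A - lambda*B (B symmetric positive definite)."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(n_iter):
        rho = rayleigh_quotient(A, B, x)
        # gradient of rho at x: 2 (A x - rho B x) / (x^T B x)
        g = 2.0 * (A @ x - rho * (B @ x)) / (x @ B @ x)
        x = x - step * g
        x = x / np.linalg.norm(x)      # keep the iterate normalized
    return rayleigh_quotient(A, B, x), x
```

Conjugate-gradient-type eigensolvers replace the raw gradient step with conjugate search directions and an exact line search along them, which is what makes them competitive for large-scale problems.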
Learning to rank using gradient descent
 In ICML
, 2005
"... We investigate using gradient descent methods for learning ranking functions; we propose a simple probabilistic cost function, and we introduce RankNet, an implementation of these ideas using a neural network to model the underlying ranking function. We present test results on toy data and on data f ..."
Cited by 534 (17 self)
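The "simple probabilistic cost function" this abstract proposes is a pairwise cross-entropy on score differences. A minimal sketch of that cost, with the model's scoring network abstracted away to two scalar scores:

```python
import math

def ranknet_cost(o_i, o_j, p_target):
    """RankNet pairwise cross-entropy cost.

    o_i, o_j  : model scores for items i and j
    p_target  : target probability that i should rank above j
                (1.0, 0.0, or 0.5 for a tie)
    """
    diff = o_i - o_j
    p_model = 1.0 / (1.0 + math.exp(-diff))   # sigmoid of score difference
    eps = 1e-12                               # guard against log(0)
    return -(p_target * math.log(p_model + eps)
             + (1.0 - p_target) * math.log(1.0 - p_model + eps))
```

During training, the gradient of this cost with respect to the score difference is backpropagated through the underlying neural network, so the network learns a ranking function from pairwise preferences rather than absolute labels.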