Results 1 - 10 of 408,136 for "gradient maps"
"... In this paper we propose an extension for the algorithms of imagetogeometry registration by Mutual Information(MI) to improve the performance and the quality of the alignment. Proposed for the registration of multi modal medical images, in the last years MI has been adapted to align a 3D model to ..."
... of the acquisition environment; the characteristics of the image background, especially a non-uniform background, that can degrade the convergence of the registration. To improve the quality of the registration in these cases we propose to compute the MI between the gradient map of the 3D rendering and the gradient ...
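A rough, hedged sketch of that idea (not the authors' implementation; `rendering`, `photograph`, and the bin count are assumptions): the MI between two gradient-magnitude images can be estimated from their joint histogram.

```python
import numpy as np

def gradient_magnitude(img):
    """Per-pixel gradient magnitude via finite differences."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def mutual_information(a, b, bins=64):
    """MI estimated from the joint histogram of two images of the same shape."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of the first image
    py = pxy.sum(axis=0, keepdims=True)   # marginal of the second image
    nz = pxy > 0                          # skip empty bins to avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# registration score to maximize over the camera parameters (hypothetical arrays):
# score = mutual_information(gradient_magnitude(rendering), gradient_magnitude(photograph))
```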
CONVEXITY PROPERTIES OF GRADIENT MAPS
, 2007
"... We consider the action of a real reductive group G on a Kähler manifold Z which is the restriction of a holomorphic action of the complexified group G C. We assume that the induced action of a compatible maximal compact subgroup U of G C on Z is Hamiltonian. We have an associated gradient map µp: Z ..."
Cited by 4 (2 self)
Snakes, Shapes, and Gradient Vector Flow
 IEEE TRANSACTIONS ON IMAGE PROCESSING
, 1998
"... Snakes, or active contours, are used extensively in computer vision and image processing applications, particularly to locate object boundaries. Problems associated with initialization and poor convergence to boundary concavities, however, have limited their utility. This paper presents a new extern ..."
Cited by 743 (16 self)
... external force for active contours, largely solving both problems. This external force, which we call gradient vector flow (GVF), is computed as a diffusion of the gradient vectors of a gray-level or binary edge map derived from the image. It differs fundamentally from traditional snake external forces ...
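A hedged illustration of that diffusion (not the authors' reference code; `mu`, `dt`, and the iteration count are assumed values): the GVF field is obtained by iterating a discretized diffusion of the edge-map gradient.

```python
import numpy as np

def gradient_vector_flow(f, mu=0.2, dt=0.5, iters=200):
    """Diffuse the gradient of a 2D edge map f into a GVF field (u, v)."""
    fy, fx = np.gradient(f.astype(float))
    u, v = fx.copy(), fy.copy()
    mag2 = fx ** 2 + fy ** 2              # squared gradient magnitude of the edge map
    for _ in range(iters):
        # 5-point Laplacians (periodic boundaries, for brevity only)
        lap_u = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                 + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        lap_v = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                 + np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)
        # smoothness term everywhere; data term pins the field to grad(f) near edges
        u += dt * (mu * lap_u - mag2 * (u - fx))
        v += dt * (mu * lap_v - mag2 * (v - fy))
    return u, v
```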
Learning to rank using gradient descent
 In ICML
, 2005
"... We investigate using gradient descent methods for learning ranking functions; we propose a simple probabilistic cost function, and we introduce RankNet, an implementation of these ideas using a neural network to model the underlying ranking function. We present test results on toy data and on data f ..."
Cited by 510 (17 self)
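A minimal sketch of that pairwise probabilistic cost, with a plain linear scoring function standing in for the paper's neural network (all names and the learning rate are assumptions):

```python
import numpy as np

def ranknet_pair_step(w, x_i, x_j, lr=0.01):
    """One gradient-descent step on a pair where x_i should rank above x_j."""
    o = w @ x_i - w @ x_j              # score difference o_ij = s_i - s_j
    p = 1.0 / (1.0 + np.exp(-o))       # modelled probability that x_i outranks x_j
    # pairwise cross-entropy cost C = -log p for target probability 1; dC/do = p - 1
    w = w - lr * (p - 1.0) * (x_i - x_j)
    return w, -np.log(p)

# training loop over preference pairs (hypothetical data):
# w = np.zeros(n_features)
# for x_i, x_j in pairs:
#     w, cost = ranknet_pair_step(w, x_i, x_j)
```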
Greedy Function Approximation: A Gradient Boosting Machine
 Annals of Statistics
, 2000
"... Function approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest{descent minimization. A general gradient{descent \boosting" paradigm is developed for additi ..."
Cited by 951 (12 self)
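A hedged, squared-error instance of that gradient-descent boosting idea, using depth-1 trees; an illustration under assumed names, not Friedman's reference implementation.

```python
import numpy as np

def fit_stump(x, r):
    """Best single-feature threshold split against targets r (squared error)."""
    best_err, best_split = np.inf, None
    for j in range(x.shape[1]):
        for t in np.unique(x[:, j])[:-1]:      # largest value would leave the right side empty
            left = x[:, j] <= t
            pred = np.where(left, r[left].mean(), r[~left].mean())
            err = ((r - pred) ** 2).sum()
            if err < best_err:
                best_err, best_split = err, (j, t, r[left].mean(), r[~left].mean())
    return best_split

def gradient_boost(x, y, n_rounds=50, lr=0.1):
    """Stagewise additive model: each round fits a stump to the current residuals."""
    f = np.full(len(y), y.mean())              # initial constant model
    ensemble = []
    for _ in range(n_rounds):
        residual = y - f                       # negative gradient of the squared loss
        j, t, lval, rval = fit_stump(x, residual)
        f += lr * np.where(x[:, j] <= t, lval, rval)   # shrunken steepest-descent step
        ensemble.append((j, t, lval, rval))
    return f, ensemble
```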
Mean shift, mode seeking, and clustering
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1995
"... AbstractMean shift, a simple iterative procedure that shifts each data point to the average of data points in its neighborhood, is generalized and analyzed in this paper. This generalization makes some kmeans like clustering algorithms its special cases. It is shown that mean shift is a modeseeki ..."
Cited by 620 (0 self)
... mode-seeking process on a surface constructed with a “shadow” kernel. For Gaussian kernels, mean shift is a gradient mapping. Convergence is studied for mean shift iterations. Cluster analysis is treated as a deterministic problem of finding a fixed point of mean shift that characterizes the data. Applications ...
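A short sketch of that iteration with a Gaussian kernel (bandwidth, tolerance, and names are assumptions, not the paper's code):

```python
import numpy as np

def mean_shift_point(x, data, bandwidth=1.0, tol=1e-5, max_iter=500):
    """Shift x to the Gaussian-weighted average of its neighbours until it stops moving."""
    for _ in range(max_iter):
        w = np.exp(-np.sum((data - x) ** 2, axis=1) / (2 * bandwidth ** 2))
        x_new = (w[:, None] * data).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:    # reached a fixed point (a mode)
            return x_new
        x = x_new
    return x

# clustering: points whose iterations reach the same fixed point share a cluster
# modes = np.array([mean_shift_point(p, data) for p in data])
```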
Gradient flows in metric spaces and in the space of probability measures
 LECTURES IN MATHEMATICS ETH ZÜRICH, BIRKHÄUSER VERLAG
, 2005
"... ..."
Quasiregular Gradient Mappings and Strong Solutions of Elliptic Equations
 CONTEMPORARY MATHEMATICS
"... We prove that quasiregular gradient mappings exhibit higher degree of Hölder continuity than the one that is optimal for general quasiregular mappings. This improves a classical result of Morrey on the regularity of strong solutions of uniformly elliptic PDEs with measurable coefficients. Our Hölde ..."
Cited by 3 (2 self)
Improved methods for building protein models in electron density maps and the location of errors in these models
 Acta Crystallogr. Sect. A
, 1991
"... Map interpretation remains a critical step in solving the structure of a macromolecule. Errors introduced at this early stage may persist throughout crystallographic refinement and result in an incorrect structure. The normally quoted crystallographic residual is often a poor description for the q ..."
Cited by 1016 (9 self)
A scaled conjugate gradient algorithm for fast supervised learning
 NEURAL NETWORKS
, 1993
"... A supervised learning algorithm (Scaled Conjugate Gradient, SCG) with superlinear convergence rate is introduced. The algorithm is based upon a class of optimization techniques well known in numerical analysis as the Conjugate Gradient Methods. SCG uses second order information from the neural netwo ..."
Cited by 441 (0 self)
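Møller's SCG adds a scaled, Hessian-free step-size estimate on top of conjugate-gradient directions; the sketch below shows only the underlying Polak-Ribière direction update with a fixed step, as an assumption-laden illustration rather than SCG itself.

```python
import numpy as np

def nonlinear_cg(grad_fn, w0, lr=1e-2, iters=200):
    """Polak-Ribiere nonlinear conjugate gradient with a fixed step size."""
    w = w0.astype(float).copy()
    g = grad_fn(w)
    d = -g                                    # first direction: steepest descent
    for _ in range(iters):
        w = w + lr * d                        # SCG instead derives this step from scaled second-order information
        g_new = grad_fn(w)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # Polak-Ribiere+, restarts when negative
        d = -g_new + beta * d                 # conjugate direction update
        g = g_new
    return w

# example: minimize 0.5 * w'Aw - b'w, whose gradient is Aw - b (A, b hypothetical):
# w_star = nonlinear_cg(lambda w: A @ w - b, np.zeros(len(b)))
```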