Results 1–10 of 449,016
A. Convergence Proof
"... If we do gradient descent with η ∗ (t), then almost surely, the algorithm converges (for the quadratic model). To prove that, we follow classical techniques based on Lyapunov stability theory (Bucy, 1965). Notice that the expected loss follows E J θ (t+1))  θ (t)] [ ( (1 − η ∗ h)(θ (t) − θ ∗ ) + η ..."
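The contraction described in this snippet can be sketched in a few lines. The quadratic J(θ) = (h/2)(θ − θ∗)² and all constants below are illustrative (and the step size is held fixed, unlike the adaptive η∗(t) of the paper):

```python
# Gradient descent on a 1-D quadratic J(theta) = (h/2) * (theta - theta_star)**2.
# Each step contracts the error by (1 - eta*h), mirroring the recursion
# (theta(t+1) - theta_star) = (1 - eta*h) * (theta(t) - theta_star).
# h, eta, theta_star, theta0 are illustrative values, not from the cited paper.

def gradient_descent(theta0, h, eta, theta_star, steps):
    theta = theta0
    for _ in range(steps):
        grad = h * (theta - theta_star)   # dJ/dtheta for the quadratic
        theta -= eta * grad
    return theta

# Any fixed 0 < eta < 2/h gives |1 - eta*h| < 1, hence convergence to theta_star.
final = gradient_descent(theta0=5.0, h=2.0, eta=0.1, theta_star=1.0, steps=200)
```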
Mean Shift: Construction and Convergence Proof
, 2006
"... In most lowlevel computer vision problems, very little information (if any) is known about the true underlying probability density function, such as its shape, number of mixture components, etc.. Due to this lack of knowledge, parametric approaches are less relevant, rather one has to rely on nonp ..."
parametric methods. In this note we consider the construction and convergence proof of the nonparametric mean shift method which was developed by Fukunaga and Hostetler (Fukunaga & Hostetler, 1975), later adapted by Cheng (Cheng, 1995) for the purpose of image analysis and more recently popularized
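The mean shift iteration itself is compact enough to sketch. This is a 1-D version with a Gaussian kernel; the sample points and bandwidth are illustrative, not from the cited works:

```python
import math

# Minimal mean shift sketch: repeatedly move x to the kernel-weighted mean of
# the samples around it, climbing toward a mode of the kernel density estimate.
# The 1-D data and bandwidth `bw` below are illustrative.

def mean_shift_step(x, data, bw):
    weights = [math.exp(-((x - d) ** 2) / (2 * bw ** 2)) for d in data]
    return sum(w * d for w, d in zip(weights, data)) / sum(weights)

def mean_shift(x, data, bw, tol=1e-8, max_iter=1000):
    for _ in range(max_iter):
        x_new = mean_shift_step(x, data, bw)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

# Points clustered near 0 and near 10; starting at 2.0 the iteration
# climbs to the nearby mode at (approximately) 0.
data = [-0.5, 0.0, 0.5, 9.5, 10.0, 10.5]
mode = mean_shift(2.0, data, bw=1.0)
```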
Convergence Proofs For Numerical IVP Software
"... . The study of the running times of algorithms in computer science can be broken down into two broad types: worstcase and averagecase analyses. For many problems this distinction is very important as the orders of magnitude (in terms of some measure of the problem size) of the running times may di ..."
Cited by 3 (2 self)
The study of the running times of algorithms in computer science can be broken down into two broad types: worst-case and average-case analyses. For many problems this distinction is very important as the orders of magnitude (in terms of some measure of the problem size) of the running times may differ significantly in each case, providing useful information about the merits of the algorithm. Historically, average-case analyses were first done with respect to a measure on the input data; to counter the argument that it is often difficult to find a natural measure on the data, randomised algorithms were then developed. In this paper similar questions are studied for adaptive software used to integrate initial value problems for ODEs. In the worst case these algorithms may fail completely, giving O(1) errors. We consider the probability of failure for generic vector fields with random initial data chosen from a ball and perform average-case and worst-case analyses. We then perform a diffe...
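The kind of adaptive IVP software analyzed here can be sketched with step doubling and explicit Euler: take one full step and two half steps, compare them as a local error estimate, and shrink or grow the step accordingly. The ODE, tolerance, and step-control constants are illustrative:

```python
# Adaptive step-size sketch: explicit Euler with step doubling on y' = f(t, y).
# The local error estimate |small - big| drives accept/reject decisions.
# Tolerances and the growth/shrink factors are illustrative choices.

def euler_adaptive(f, t0, y0, t_end, h0=0.1, tol=1e-5):
    t, y, h = t0, y0, h0
    while t < t_end:
        h = min(h, t_end - t)
        big = y + h * f(t, y)                      # one full step
        mid = y + (h / 2) * f(t, y)
        small = mid + (h / 2) * f(t + h / 2, mid)  # two half steps
        err = abs(small - big)                     # local error estimate
        if err <= tol:
            t, y = t + h, small                    # accept, grow step
            h *= 1.5
        else:
            h /= 2                                 # reject, shrink step
    return y

# y' = y with y(0) = 1 over [0, 1]; the exact answer is e = 2.71828...
y = euler_adaptive(lambda t, y: y, 0.0, 1.0, 1.0)
```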
Convergence Proofs for Simulated Annealing Falsification of Safety Properties*
"... Abstract — The problem of falsifying temporal logic properties of hybrid automata can be posed as a minimization problem by utilizing quantitative semantics for temporal logics. Previous work has used a variation of Simulated Annealing (SA) to solve the problem. While SA is known to converge to the ..."
Cited by 1 (1 self)
to the global minimum of a continuous objective function over a closed and bounded search space, or when the search space is discrete, there do not exist convergence proofs for the cases addressed in that previous work. Namely, when the objective function is discontinuous, and when the objective is a vector
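The Simulated Annealing core used as the minimizer can be sketched on a toy 1-D objective. The objective, Gaussian proposal, and geometric cooling schedule below are illustrative only, not the temporal-logic robustness objective of the cited work:

```python
import math
import random

# Minimal simulated-annealing sketch: accept downhill moves always,
# uphill moves with Boltzmann probability exp(-delta/temp), and cool
# the temperature geometrically. All constants are illustrative.

def anneal(f, x0, temp0=1.0, cooling=0.995, steps=5000, seed=0):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    temp = temp0
    for _ in range(steps):
        cand = x + rng.gauss(0, 0.5)          # random neighbor proposal
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(temp, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        temp *= cooling
    return best, fbest

best, fbest = anneal(lambda x: (x - 3) ** 2, x0=-5.0)
```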
On the computational content of convergence proofs via Banach limits
"... This paper addresses new developments in the ongoing proof mining program, i.e. the use of tools from proof theory to extract effective quantitative information from prima facie ineffective proofs in analysis. Very recently, the current authors developed a method to extract rates of metastability (i ..."
Cited by 6 (3 self)
(in the sense of Tao) from convergence proofs in nonlinear analysis that are based on Banach limits and so (for all that is known) rely on the axiom of choice. In this paper we apply this method to a proof due to Shioji and Takahashi on the convergence of Halpern iterations in spaces X with a
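The Halpern iteration referenced here is x_{n+1} = α_{n+1}·u + (1 − α_{n+1})·T(x_n), commonly with α_n = 1/(n+1). A numerical sketch, with an illustrative map T (a simple contraction, hence in particular nonexpansive, with fixed point 2.0) and anchor u:

```python
# Halpern iteration sketch: x_{n+1} = a_{n+1}*u + (1 - a_{n+1})*T(x_n),
# with the standard choice a_n = 1/(n+1). T and u below are illustrative;
# the cited setting is far more general (nonexpansive maps in Banach spaces).

def halpern(T, u, x0, n_steps):
    x = x0
    for n in range(1, n_steps + 1):
        a = 1.0 / (n + 1)
        x = a * u + (1 - a) * T(x)
    return x

def T(x):
    return 0.5 * x + 1.0   # fixed point: x = 2

x = halpern(T, u=10.0, x0=0.0, n_steps=20000)
```

Convergence under α_n = 1/(n+1) is slow (roughly O(1/n) here), which is one reason quantitative "rates of metastability" are of interest.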
A Convergence Proof for the Softassign Quadratic Assignment Algorithm
, 1997
"... The softassign quadratic assignment algorithm has recently emerged as an effective strategy for a variety of optimization problems in pattern recognition and combinatorial optimization. While the effectiveness of the algorithm was demonstrated in thousands of simulations, there was no known proof of ..."
Cited by 13 (5 self)
of convergence. Here, we provide a proof of convergence for the most general form of the algorithm.
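At the core of softassign is Sinkhorn balancing: alternately normalizing the rows and columns of a positive matrix until it is approximately doubly stochastic. A minimal sketch with an illustrative 2×2 matrix:

```python
# Sinkhorn row/column normalization sketch, the inner loop of softassign:
# alternate row and column normalizations of a positive matrix; the iterates
# converge to a doubly stochastic matrix. The input matrix is illustrative.

def sinkhorn(m, iters=200):
    n = len(m)
    m = [row[:] for row in m]
    for _ in range(iters):
        for i in range(n):                          # normalize each row
            s = sum(m[i])
            m[i] = [v / s for v in m[i]]
        for j in range(n):                          # normalize each column
            s = sum(m[i][j] for i in range(n))
            for i in range(n):
                m[i][j] /= s
    return m

m = sinkhorn([[1.0, 2.0], [3.0, 4.0]])
row_sums = [sum(row) for row in m]
col_sums = [sum(m[i][j] for i in range(2)) for j in range(2)]
```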
Optimization Flow Control, I: Basic Algorithm and Convergence
 IEEE/ACM TRANSACTIONS ON NETWORKING
, 1999
"... We propose an optimization approach to flow control where the objective is to maximize the aggregate source utility over their transmission rates. We view network links and sources as processors of a distributed computation system to solve the dual problem using gradient projection algorithm. In thi ..."
Cited by 690 (64 self)
at different times and with different frequencies. We provide asynchronous distributed algorithms and prove their convergence in a static environment. We present measurements obtained from a preliminary prototype to illustrate the convergence of the algorithm in a slowly time-varying environment.
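The dual gradient-projection idea can be sketched for a single link: given a link price p, a source with utility w·log(x) responds with rate x = w/p, and the link moves its price toward matching aggregate rate to capacity. Weights, capacity, and step size below are illustrative:

```python
# Dual gradient projection sketch for flow control on one link:
# sources maximize w_i*log(x_i) - p*x_i, giving x_i = w_i / p;
# the link updates p <- [p + gamma*(aggregate rate - capacity)]^+.
# All constants are illustrative, not from the cited paper.

def flow_control(weights, capacity, gamma=0.01, steps=20000, p0=0.5):
    p = p0
    for _ in range(steps):
        rates = [w / p for w in weights]       # each source's optimal response
        excess = sum(rates) - capacity
        p = max(p + gamma * excess, 1e-9)      # projected price update
    return p, rates

# Equilibrium: sum_i(w_i)/p = capacity, so p* = 6/6 = 1 for these values.
p, rates = flow_control(weights=[1.0, 2.0, 3.0], capacity=6.0)
```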
Greedy Adaptive Critics for LQR Problems: Convergence Proofs
"... A number of success stories have been told where reinforcement learning has been applied to problems in continuous state spaces using neural nets or other sorts of function approximators in the adaptive critics. However, the theoretical understanding of why and when these algorithms work is inadequa ..."
Cited by 4 (0 self)
is inadequate. This is clearly exemplified by the lack of convergence results for a number of important situations. To our knowledge only two such results have been presented for systems in the continuous state space domain. The first is due to Werbos and is concerned with linear function approximation and heuristic
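For reference, the exact LQR quantity that adaptive critics approximate can be computed by iterating the discrete-time Riccati recursion. A scalar, model-based value-iteration sketch (not the adaptive-critic schemes discussed above; a, b, q, r are illustrative):

```python
# Scalar discrete-time Riccati value iteration:
# P <- q + a*P*a - (a*P*b)**2 / (r + b*P*b).
# Its fixed point P* defines the optimal LQR cost x'*P*x and the optimal gain
# k = a*P*b / (r + b*P*b). System and cost constants are illustrative.

def riccati_fixed_point(a, b, q, r, iters=1000):
    p = 0.0
    for _ in range(iters):
        p = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
    return p

p = riccati_fixed_point(a=0.9, b=1.0, q=1.0, r=1.0)
k = 0.9 * p / (1.0 + p)   # optimal feedback gain for these constants
```

For these values the fixed point satisfies p² − 0.81p − 1 = 0, and the closed loop a − b·k is stable (magnitude below 1).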
Convergence Proofs for the COSEM-ML and COSEM-MAP Algorithms
 Medical Imag. Proc. Lab., State Univ. of
, 2003
"... Statistical reconstruction has become increasingly popular in emission computed tomography (ECT) due to its ability to accurately model noise and the imaging physics. In addition, information regarding the object can be incorporated using Bayesian priors. In emission tomography, a Poisson loglikeli ..."
Cited by 6 (3 self)
Statistical reconstruction has become increasingly popular in emission computed tomography (ECT) due to its ability to accurately model noise and the imaging physics. In addition, information regarding the object can be incorporated using Bayesian priors. In emission tomography, a Poisson log-likelihood projection data model is widely used because the photon noise is independently Poisson at each detector bin. Given the Poisson likelihood,
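The classical way to maximize that Poisson likelihood is the ML-EM update λ_j ← (λ_j / Σ_i a_ij) · Σ_i a_ij·y_i/(Aλ)_i. A sketch of plain ML-EM (not the COSEM variants of this paper) on a tiny illustrative system matrix with noiseless data:

```python
# Plain ML-EM sketch for the Poisson likelihood in emission tomography:
# lambda_j <- (lambda_j / sum_i a_ij) * sum_i( a_ij * y_i / (A @ lam)_i ).
# The 2x2 system matrix, activities, and noiseless data are illustrative;
# the cited paper treats COSEM-ML/COSEM-MAP, not this basic update.

def mlem(a, y, lam, iters=500):
    n_bins, n_pix = len(a), len(a[0])
    sens = [sum(a[i][j] for i in range(n_bins)) for j in range(n_pix)]
    for _ in range(iters):
        proj = [sum(a[i][j] * lam[j] for j in range(n_pix)) for i in range(n_bins)]
        back = [sum(a[i][j] * y[i] / proj[i] for i in range(n_bins)) for j in range(n_pix)]
        lam = [lam[j] * back[j] / sens[j] for j in range(n_pix)]
    return lam

a = [[1.0, 0.2], [0.2, 1.0]]          # toy system matrix
true_lam = [4.0, 2.0]
y = [sum(a[i][j] * true_lam[j] for j in range(2)) for i in range(2)]  # noiseless data
lam = mlem(a, y, lam=[1.0, 1.0])
```

With noiseless data and an invertible system matrix, the iterates recover the true activities.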
A Convergence Proof for Linked Cluster Expansions
, 2008
"... We prove that for a general Ncomponent model on a ddimensional lattice Z d with pairwise nearestneighbor coupling and general local interaction obeying a stability bound the linked cluster expansion has a finite radius of convergence. The proof uses Mayer Montroll equations for connected Green fu ..."