Results 1–10 of 10
Fast approximate energy minimization via graph cuts
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2001
"... In this paper we address the problem of minimizing a large class of energy functions that occur in early vision. The major restriction is that the energy function’s smoothness term must only involve pairs of pixels. We propose two algorithms that use graph cuts to compute a local minimum even when v ..."
Abstract

Cited by 2127 (61 self)
In this paper we address the problem of minimizing a large class of energy functions that occur in early vision. The major restriction is that the energy function’s smoothness term must only involve pairs of pixels. We propose two algorithms that use graph cuts to compute a local minimum even when very large moves are allowed. The first move we consider is an α-β-swap: for a pair of labels α, β, this move exchanges the labels between an arbitrary set of pixels labeled α and another arbitrary set labeled β. Our first algorithm generates a labeling such that there is no swap move that decreases the energy. The second move we consider is an α-expansion: for a label α, this move assigns an arbitrary set of pixels the label α. Our second …
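A minimal brute-force sketch of these moves on a toy instance (all costs hypothetical): a 4-pixel chain with Potts smoothness, where the best α-expansion is found by enumerating pixel subsets. The paper's contribution is computing each such move with a single graph cut instead of this enumeration.

```python
from itertools import product

# Toy instance: 4 pixels on a chain, 3 labels, Potts smoothness (hypothetical costs).
N, LABELS = 4, (0, 1, 2)
data = [  # data[p][l]: cost of giving pixel p label l
    [0, 2, 2],
    [2, 0, 2],
    [2, 2, 0],
    [0, 2, 2],
]
LAM = 1  # Potts smoothness weight

def energy(lab):
    e = sum(data[p][lab[p]] for p in range(N))
    e += sum(LAM for p in range(N - 1) if lab[p] != lab[p + 1])
    return e

def best_alpha_expansion(lab, alpha):
    """Best move that relabels an arbitrary pixel subset to alpha (brute force)."""
    best = list(lab)
    for mask in product([False, True], repeat=N):
        cand = [alpha if m else l for m, l in zip(mask, lab)]
        if energy(cand) < energy(best):
            best = cand
    return best

# iterate expansion moves until no move decreases the energy (a local minimum
# in the expansion move space, as in the paper's second algorithm)
lab = [1, 1, 1, 1]
changed = True
while changed:
    changed = False
    for a in LABELS:
        new = best_alpha_expansion(lab, a)
        if energy(new) < energy(lab):
            lab, changed = new, True
print(lab, energy(lab))
```

On this instance the loop terminates at the labeling [0, 1, 2, 0] with energy 3, which no further expansion move can improve.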
Learning from Labeled and Unlabeled Data using Graph Mincuts
, 2001
"... Many application domains suffer from not having enough labeled training data for learning. However, large amounts of unlabeled examples can often be gathered cheaply. As a result, there has been a great deal of work in recent years on how unlabeled data can be used to aid classification. We consi ..."
Abstract

Cited by 336 (6 self)
Many application domains suffer from not having enough labeled training data for learning. However, large amounts of unlabeled examples can often be gathered cheaply. As a result, there has been a great deal of work in recent years on how unlabeled data can be used to aid classification. We consider an algorithm based on finding minimum cuts in graphs that uses pairwise relationships among the examples in order to learn from both labeled and unlabeled data. Our algorithm …
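A self-contained sketch of the mincut idea on a toy similarity graph (hypothetical weights): labeled examples are wired to a source/sink with infinite capacity, pairwise similarities become edges, and the s-t minimum cut labels every unlabeled example.

```python
from collections import deque, defaultdict

INF = float("inf")

def max_flow(cap, s, t):
    """Edmonds-Karp on a dict-of-dicts capacity map; returns the residual graph."""
    res = {u: dict(vs) for u, vs in cap.items()}
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return res
        # bottleneck and augment
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= b
            res.setdefault(v, {})[u] = res.get(v, {}).get(u, 0) + b

def mincut_labels(n, sims, pos, neg):
    cap = defaultdict(dict)
    for i, j, w in sims:        # symmetric similarity edges
        cap[i][j] = w
        cap[j][i] = w
    for p in pos:               # positively labeled: tied to the source
        cap["s"][p] = INF
    for m in neg:               # negatively labeled: tied to the sink
        cap[m]["t"] = INF
    res = max_flow(cap, "s", "t")
    # source side of the min cut = nodes reachable in the residual graph
    seen, stack = {"s"}, ["s"]
    while stack:
        u = stack.pop()
        for v, c in res.get(u, {}).items():
            if c > 0 and v not in seen:
                seen.add(v)
                stack.append(v)
    return ["+" if i in seen else "-" for i in range(n)]

# chain of 5 examples; the weak (2,3) similarity is where the cut falls
labels = mincut_labels(5, [(0, 1, 3), (1, 2, 3), (2, 3, 1), (3, 4, 3)], pos=[0], neg=[4])
print(labels)
```

Here the cut severs the weakest similarity edge, so examples 0-2 inherit the positive label and 3-4 the negative one.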
Efficient Graph-Based Energy Minimization Methods in Computer Vision
, 1999
"... ms (we show that exact minimization in NPhard in these cases). These algorithms produce a local minimum in interesting large move spaces. Furthermore, one of them nds a solution within a known factor from the optimum. The algorithms are iterative and compute several graph cuts at each iteration. Th ..."
Abstract

Cited by 115 (6 self)
… (we show that exact minimization is NP-hard in these cases). These algorithms produce a local minimum in interesting large move spaces. Furthermore, one of them finds a solution within a known factor from the optimum. The algorithms are iterative and compute several graph cuts at each iteration. The running time at each iteration is effectively linear due to the special graph structure. In practice it takes just a few iterations to converge. Moreover, most of the progress happens during the first iteration. For a certain piecewise constant prior we adapt the algorithms developed for the piecewise smooth prior. One of them finds a solution within a factor of two from the optimum. In addition we develop a third algorithm which finds a local minimum in yet another move space. We demonstrate the effectiveness of our approach on image restoration, stereo, and motion. For the data with ground truth, our methods significantly outperform standard methods.
Geometric Algorithms for Online Optimization
 Journal of Computer and System Sciences
, 2002
"... In this paper, we consider a natural online version of linear optimization: the problem has to be solved repeatedly over a sequence of periods, where the objective direction for the upcoming period is unknown. This models online versions of optimization problems, such as maxcut, variants of cluster ..."
Abstract

Cited by 38 (1 self)
In this paper, we consider a natural online version of linear optimization: the problem has to be solved repeatedly over a sequence of periods, where the objective direction for the upcoming period is unknown. This models online versions of optimization problems, such as max-cut, variants of clustering, and also the classic online binary search tree problem. We present algorithms for this problem that are (1 + ε)-competitive with the optimal static solution chosen in hindsight. Our algorithms and proofs are motivated by geometric considerations.
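The online protocol can be sketched with a plain follow-the-leader baseline on a hypothetical toy instance (decisions are vertices of the unit square, per-period payoff is a dot product with an objective revealed only after the decision). The cited work randomizes/perturbs this leader, using geometric arguments, to obtain its competitive guarantees; this is only the unadorned baseline.

```python
# Hypothetical decision set: vertices of the unit square.
DECISIONS = [(0, 0), (0, 1), (1, 0), (1, 1)]

def dot(x, c):
    return x[0] * c[0] + x[1] * c[1]

def follow_the_leader(objectives):
    total, payoff = (0.0, 0.0), 0.0
    for c in objectives:
        # pick the decision that would have maximized all past periods' payoff
        x = max(DECISIONS, key=lambda d: dot(d, total))
        payoff += dot(x, c)  # c is revealed only after x is chosen
        total = (total[0] + c[0], total[1] + c[1])
    return payoff

objs = [(1.0, 0.0)] * 10  # a constant objective sequence
ftl = follow_the_leader(objs)
best_static = max(sum(dot(d, c) for c in objs) for d in DECISIONS)
print(ftl, best_static)
```

On this constant sequence the leader loses only the first period (payoff 9 versus the hindsight-optimal static payoff of 10); adversarial sequences are exactly what the perturbation in the cited algorithms guards against.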
Comparing and unifying search-based and similarity-based approaches to semi-supervised clustering
 In Proceedings of the ICML-2003 Workshop on the Continuum from Labeled to Unlabeled Data in Machine Learning and Data Mining
, 2003
"... Semisupervised clustering employs a small amount of labeled data to aid unsupervised learning. Previous work in the area has employed one of two approaches: 1) Searchbased methods that utilize supervised data to guide the search for the best clustering, and 2) Similaritybased methods that use supe ..."
Abstract

Cited by 24 (3 self)
Semi-supervised clustering employs a small amount of labeled data to aid unsupervised learning. Previous work in the area has employed one of two approaches: 1) search-based methods that utilize supervised data to guide the search for the best clustering, and 2) similarity-based methods that use supervised data to adapt the underlying similarity metric used by the clustering algorithm. This paper presents a unified approach based on the K-Means clustering algorithm that incorporates both of these techniques. Experimental results demonstrate that the combined approach generally produces better clusters than either of the individual approaches.
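A minimal sketch of the search-based side (seeded K-Means): labeled examples fix the initial centroids, then standard K-Means runs on all points. A similarity-based variant would instead learn a distance metric from the same labels. The 1-D data below are hypothetical.

```python
def seeded_kmeans(points, seeds, iters=10):
    # one centroid per class, initialized from the labeled seed examples
    centroids = [sum(s) / len(s) for s in seeds]
    assign = [0] * len(points)
    for _ in range(iters):
        # assign every point (labeled or not) to its nearest centroid
        assign = [min(range(len(centroids)), key=lambda k: abs(p - centroids[k]))
                  for p in points]
        # re-estimate each centroid from its members
        for k in range(len(centroids)):
            members = [p for p, a in zip(points, assign) if a == k]
            if members:
                centroids[k] = sum(members) / len(members)
    return centroids, assign

pts = [0.0, 0.2, 0.4, 5.0, 5.2, 5.4]          # two well-separated groups
cents, assign = seeded_kmeans(pts, seeds=[[0.1], [5.1]])
print(cents, assign)
```

With one labeled seed per class the clustering is anchored to the supervised labels rather than to a random initialization.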
Approximate Classification via Earthmover Metrics
 In SODA ’04: Proceedings of the fifteenth annual ACM-SIAM symposium on Discrete algorithms
, 2004
"... Given a metric space (X, d), a natural distance measure on probability distributions over X is the earthmover metric. We use randomized rounding of earthmover metrics to devise new approximation algorithms for two wellknown classification problems, namely, metric labeling and 0extension. ..."
Abstract

Cited by 21 (4 self)
Given a metric space (X, d), a natural distance measure on probability distributions over X is the earthmover metric. We use randomized rounding of earthmover metrics to devise new approximation algorithms for two well-known classification problems, namely, metric labeling and 0-extension.
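For distributions over points on a line, the earthmover (Wasserstein-1) metric reduces to the area between the two CDFs, which gives a compact sketch of the quantity involved. The distributions below are hypothetical.

```python
def emd_1d(p, q, xs):
    """Earthmover distance between masses p and q at sorted positions xs."""
    assert abs(sum(p) - 1) < 1e-9 and abs(sum(q) - 1) < 1e-9
    dist, cum = 0.0, 0.0
    for i in range(len(xs) - 1):
        cum += p[i] - q[i]                    # CDF difference at xs[i]
        dist += abs(cum) * (xs[i + 1] - xs[i])  # mass transported across the gap
    return dist

# shifting half the mass one step and half another step costs 1.0 in total
d = emd_1d([0.5, 0.5, 0.0], [0.0, 0.5, 0.5], xs=[0, 1, 2])
print(d)
```

The general metric-space version used in the paper is a linear program over transport plans; the 1-D closed form above is the special case where that plan is determined by the CDFs.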
New algorithms for convex cost tension problem with application to computer vision
 hal-00530369, version 1, 28 Oct 2010
"... Motivated by various applications to computer vision, we consider the convex cost tension problem, which is the dual of the convex cost
ow problem. In this paper, we rst propose a primal algorithm for computing an optimal solution of the problem. Our primal algorithm iteratively updates primal vari ..."
Abstract

Cited by 13 (1 self)
Motivated by various applications to computer vision, we consider the convex cost tension problem, which is the dual of the convex cost flow problem. In this paper, we first propose a primal algorithm for computing an optimal solution of the problem. Our primal algorithm iteratively updates primal variables by solving associated minimum cut problems. We show that the time complexity of the primal algorithm is O(K · T(n, m)), where K is the range of primal variables and T(n, m) is the time needed to compute a minimum cut in a graph with n nodes and m edges. We then develop an improved version of the primal algorithm, called the primal-dual algorithm, by making good use of dual variables in addition to primal variables. Although its time complexity is the same as that of the primal algorithm, we can expect a better performance in practice. We finally consider an application to a computer vision problem called panoramic image stitching. Key words: minimum cost tension, minimum cost flow, discrete convex function, submodular function
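The "one cut-like subproblem per value level" structure behind the O(K · T(n, m)) bound can be illustrated on a hypothetical toy problem: minimize convex data terms plus total-variation coupling on a 3-node chain, by solving one binary subproblem per level k and stacking the results. The binary subproblem (a min-cut in the paper) is brute-forced here; the instance and costs are invented for illustration.

```python
from itertools import product

K = 3                      # range of the integer-valued primal variables
EDGES = [(0, 1), (1, 2)]   # chain coupling with |x_i - x_j| cost

def f(i, v):
    """Convex data terms (hypothetical)."""
    return [(v + 1) ** 2, (v - 2) ** 2, 2 * (v - 3) ** 2][i]

def binary_subproblem(level):
    """Level-k subproblem: minimize sum_i (f_i(k) - f_i(k-1)) y_i + sum |y_i - y_j|.
    This is the cut-like problem solved once per level; brute force for clarity."""
    best, best_e = None, float("inf")
    for y in product((0, 1), repeat=3):
        e = sum((f(i, level) - f(i, level - 1)) * y[i] for i in range(3))
        e += sum(abs(y[i] - y[j]) for i, j in EDGES)
        if e < best_e:
            best, best_e = y, e
    return best

# stack the K binary indicator solutions into the integer-valued optimum
x = [sum(binary_subproblem(k)[i] for k in range(1, K + 1)) for i in range(3)]
print(x)
```

Each of the K levels costs one binary minimization, matching the K minimum-cut computations of the primal algorithm; on this instance the stacked solution [0, 2, 3] attains the global optimum of the original convex problem.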
Quadratic Minimization for Labeling Problems
, 2002
"... Many tasks in Computer Vision can be formulated in the framework of Labeling Problems. Thereby we are asked to assign labels to objects. The assignment is based on a prior model for observationals in the sehensfeld and posteriori data. The labeling is to compute which minimizes ambiguities in the me ..."
Abstract

Cited by 1 (0 self)
Many tasks in Computer Vision can be formulated in the framework of labeling problems, in which we are asked to assign labels to objects. The assignment is based on a prior model and on a posteriori (observed) data. The goal is to compute the labeling that minimizes ambiguities in the measurements. This computation involves an appropriate functional over objects and labels, which defines a notion of consistency.
Submitted for Publication
"... Semisupervised clustering uses a small amount of supervised data to aid unsupervised learning. One typical approach speci es a limited number of mustlink and cannotlink constraints between pairs of examples. ..."
Abstract
Semi-supervised clustering uses a small amount of supervised data to aid unsupervised learning. One typical approach specifies a limited number of must-link and cannot-link constraints between pairs of examples.
Follow the Leader for Online Optimization
, 2002
"... Linear optimization is a central algorithmic problem with many applications. In this paper, we consider a natural online version: the optimization problem has to be solved repeatedly over a sequence of periods, where the objective direction for the upcoming period is unknown. This models online vers ..."
Abstract
Linear optimization is a central algorithmic problem with many applications. In this paper, we consider a natural online version: the optimization problem has to be solved repeatedly over a sequence of periods, where the objective direction for the upcoming period is unknown. This models online versions of well-known optimization problems, such as max-cut, variants of clustering, and also the classic online binary search tree problem. We present algorithms for this problem which are (1 + o(1))-competitive with the optimal static solution chosen in hindsight. Our algorithms and proofs are motivated by geometric considerations.