What energy functions can be minimized via graph cuts?
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004
Cited by 1048 (23 self)
Abstract:
In the last few years, several new algorithms based on graph cuts have been developed to solve energy minimization problems in computer vision. Each of these techniques constructs a graph such that the minimum cut on the graph also minimizes the energy. Yet, because these graph constructions are complex and highly specific to a particular energy function, graph cuts have seen limited application to date. In this paper, we characterize the energy functions that can be minimized by graph cuts. Our results are restricted to functions of binary variables. However, our work generalizes many previous constructions and is easily applicable to vision problems that involve large numbers of labels, such as stereo, motion, image restoration, and scene reconstruction. Among the energy functions that can be written as a sum of terms containing three or fewer binary variables, we give a precise characterization of those that can be minimized using graph cuts. We also provide a general-purpose construction to minimize such an energy function. Finally, we give a necessary condition for any energy function of binary variables to be minimized by graph cuts. Researchers who are considering the use of graph cuts to optimize a particular energy function can use our results to determine whether this is possible and then follow our construction to create the appropriate graph. A software implementation is freely available.
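For pairwise terms, the paper's characterization reduces to the regularity (submodularity) condition E(0,0) + E(1,1) ≤ E(0,1) + E(1,0). A minimal sketch of checking it, assuming a term is stored as a dictionary of label-pair costs (the names here are illustrative, not from the paper's software):

```python
def is_regular(theta):
    """Check the regularity condition for one pairwise term.

    theta[(a, b)] is the cost of assigning labels a, b in {0, 1}
    to the two variables of the term.
    """
    return theta[(0, 0)] + theta[(1, 1)] <= theta[(0, 1)] + theta[(1, 0)]

# A Potts-style smoothness term (penalize disagreement): submodular.
potts = {(0, 0): 0, (1, 1): 0, (0, 1): 1, (1, 0): 1}

# Rewarding disagreement violates the condition: not graph-representable.
anti = {(0, 0): 1, (1, 1): 1, (0, 1): 0, (1, 0): 0}

print(is_regular(potts))  # True
print(is_regular(anti))   # False
```

Only when every pairwise term passes this check can the energy be minimized exactly with a single minimum cut.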
Image Restoration with Discrete Constrained Total Variation Part I: Fast and Exact Optimization
2006
Cited by 94 (9 self)
Abstract:
This paper deals with the total variation minimization problem in image restoration for convex data fidelity functionals. We propose a new and fast algorithm which computes an exact solution in the discrete framework. Our method relies on the decomposition of an image into its level sets. It maps the original problem into independent binary Markov Random Field optimization problems, one at each level. Exact solutions of these binary problems are found via minimum-cost cut techniques in graphs. These binary solutions are proved to be monotone increasing with the level and thus yield an exact solution of the original discrete problem. Furthermore, we show that minimization of total variation under an L1 data fidelity term yields a self-dual, contrast-invariant filter. Finally, we present some results.
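The decomposition the method relies on is the threshold identity u = Σ_λ 1[u ≥ λ] for integer-valued images: one binary problem is solved per level, and stacking the solutions back recovers the full image. A toy sketch of the identity, with made-up data:

```python
import numpy as np

# A small integer-valued "image" with values in {0, ..., 3}.
u = np.array([[0, 1, 2],
              [3, 2, 1],
              [1, 0, 3]])

# Decompose into binary level sets: level lam is 1 where u >= lam.
levels = [(u >= lam).astype(int) for lam in range(1, u.max() + 1)]

# The level sets are monotone decreasing in lam, and summing them
# reconstructs the image exactly: u = sum_lam 1[u >= lam].
recon = sum(levels)
print(np.array_equal(recon, u))  # True
```

The paper's key point is that the binary *minimizers* are themselves monotone in the level, so re-stacking them is well defined and yields the exact solution of the original problem.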
Minimizing Sparse Higher Order Energy Functions of Discrete Variables
Cited by 74 (13 self)
Abstract:
Higher-order energy functions have the ability to encode high-level structural dependencies between pixels, which have been shown to be extremely powerful for image labeling problems. Their use, however, is severely hampered in practice by the intractable complexity of representing and minimizing such functions. We observe that higher-order functions encountered in computer vision are very often “sparse”, i.e., many labelings of a higher-order clique are equally unlikely and hence have the same high cost. In this paper, we address the problem of minimizing such sparse higher-order energy functions. Our method works by transforming the problem into an equivalent quadratic function minimization problem. The resulting quadratic function can be minimized using popular message passing or graph cut based algorithms for MAP inference. Although this is primarily a theoretical paper, it also shows how higher-order functions can be used to obtain impressive results for the binary texture restoration problem.
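A simplified illustration of the idea (not the paper's exact transformation): when a higher-order clique has only a few cheap labelings and a flat high cost everywhere else, the potential can be rewritten as a minimum over an auxiliary "pattern selector" variable, leaving only terms that each couple the selector to a single pixel, i.e. a quadratic (pairwise) problem. All names and costs below are hypothetical:

```python
from itertools import product

# Hypothetical sparse higher-order potential on 3 binary pixels:
# a few plausible labelings are cheap, everything else costs gamma_max.
patterns = {(0, 0, 0): 0.0, (1, 1, 1): 1.0}
gamma_max = 10.0

def f(x):
    """The original higher-order potential."""
    return patterns.get(x, gamma_max)

def h(x, z):
    """Lower-envelope rewrite: f(x) = min_z h(x, z).  Each term of h
    involves z and at most one pixel x_i, so h is a sum of unary and
    pairwise terms that standard MAP solvers can handle."""
    if z == "none":
        return gamma_max
    # Deviation penalty large enough that a mismatch is never preferred.
    return patterns[z] + sum(gamma_max for xi, zi in zip(x, z) if xi != zi)

states = list(patterns) + ["none"]
ok = all(f(x) == min(h(x, z) for z in states)
         for x in product((0, 1), repeat=3))
print(ok)  # True
```

The brute-force check confirms the rewrite is exact on every labeling; the payoff in the paper is that the rewritten energy only grows with the number of cheap labelings, not with 2^(clique size).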
Multi-document summarization via budgeted maximization of submodular functions
In Proceedings of Human Language Technologies: The Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT), 2010
Cited by 69 (14 self)
Abstract:
We treat the text summarization problem as maximizing a submodular function under a budget constraint. We show, both theoretically and empirically, that a modified greedy algorithm can efficiently solve the budgeted submodular maximization problem near-optimally, and we derive new approximation bounds in doing so. Experiments on the DUC'04 task show that our approach is superior to the best-performing method from the DUC'04 evaluation on ROUGE-1 scores.
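The modified greedy rule scales each element's marginal gain by its cost before comparing candidates, and guards against adversarial costs by also considering the best affordable singleton. A toy sketch with a set-cover-style objective (the paper's cost exponent r is fixed to 1 here; all data are made up):

```python
def coverage(selected, docs):
    """Submodular objective: number of distinct concepts covered."""
    covered = set()
    for d in selected:
        covered |= docs[d]
    return len(covered)

def budgeted_greedy(docs, cost, budget):
    """Greedy with cost-scaled gains: repeatedly add the element whose
    marginal gain per unit cost is largest among those that still fit,
    then keep the better of that set and the best affordable singleton."""
    chosen, spent = [], 0
    remaining = list(docs)
    while True:
        best, best_ratio = None, 0.0
        for d in remaining:
            if spent + cost[d] > budget:
                continue
            gain = coverage(chosen + [d], docs) - coverage(chosen, docs)
            if gain / cost[d] > best_ratio:
                best, best_ratio = d, gain / cost[d]
        if best is None:
            break
        chosen.append(best)
        spent += cost[best]
        remaining.remove(best)
    # Safeguard used to obtain the constant-factor guarantee:
    # compare against the single best element that fits the budget.
    singles = [d for d in docs if cost[d] <= budget]
    if singles and coverage(chosen, docs) < max(coverage([d], docs) for d in singles):
        chosen = [max(singles, key=lambda d: coverage([d], docs))]
    return chosen

# Toy "sentences" covering "concepts", with lengths as costs.
docs = {"a": {1, 2}, "b": {2, 3, 4}, "c": {5}, "d": {1, 2, 3, 4, 5}}
cost = {"a": 1, "b": 2, "c": 1, "d": 5}
print(budgeted_greedy(docs, cost, budget=4))  # ['a', 'b', 'c']
```

Note how the expensive all-covering sentence "d" is correctly rejected: it does not fit the budget, while three cheap sentences achieve the same coverage.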
A fully combinatorial algorithm for submodular function minimization
J. Combin. Theory
Cited by 65 (7 self)
Abstract:
This paper presents a new, simple algorithm for minimizing submodular functions. For integer-valued submodular functions, the algorithm runs in O(n^6 EO log(nM)) time, where n is the cardinality of the ground set, M is the maximum absolute value of the function, and EO is the time for function evaluation. The algorithm can be improved to run in O((n^4 EO + n^5) log(nM)) time. The strongly polynomial version of this faster algorithm runs in O((n^5 EO + n^6) log n) time for real-valued general submodular functions. These bounds are comparable to the best known running times for submodular function minimization. The algorithm can also be implemented in strongly polynomial time using only additions, subtractions, comparisons, and oracle calls for function evaluation. This is the first fully combinatorial submodular function minimization algorithm that does not rely on the scaling method.
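Here EO counts calls to a function-evaluation oracle. As a point of reference, the quantity such algorithms compute can be sketched with an exponential-time brute-force baseline, together with a direct check of the submodular inequality f(S) + f(T) ≥ f(S ∪ T) + f(S ∩ T), on a small cut function (the names are mine, not the paper's):

```python
from itertools import chain, combinations

def subsets(ground):
    """All subsets of the ground set, smallest first."""
    items = sorted(ground)
    return chain.from_iterable(combinations(items, r)
                               for r in range(len(items) + 1))

def is_submodular(f, ground):
    """Check f(S) + f(T) >= f(S | T) + f(S & T) for all pairs S, T."""
    subs = [frozenset(s) for s in subsets(ground)]
    return all(f(S) + f(T) >= f(S | T) + f(S & T) for S in subs for T in subs)

def brute_force_minimize(f, ground):
    """Exponential-time baseline for the problem the paper's algorithm
    solves with polynomially many oracle calls (EO)."""
    return min((frozenset(s) for s in subsets(ground)), key=f)

# A classic submodular function: the cut function of a small graph.
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
ground = {1, 2, 3, 4}

def cut(S):
    return sum(1 for u, v in edges if (u in S) != (v in S))

print(is_submodular(cut, ground))              # True
print(cut(brute_force_minimize(cut, ground)))  # 0
```

The brute force enumerates all 2^n subsets; the point of the algorithms surveyed here is to replace that with polynomially many oracle evaluations.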
Structured sparsity-inducing norms through submodular functions
In Advances in Neural Information Processing Systems, 2010
Cited by 58 (10 self)
Abstract:
Sparse methods for supervised learning aim at finding good linear predictors from as few variables as possible, i.e., with small cardinality of their supports. This combinatorial selection problem is often turned into a convex optimization problem by replacing the cardinality function by its convex envelope (tightest convex lower bound), in this case the ℓ1-norm. In this paper, we investigate set functions more general than the cardinality that may incorporate prior knowledge or structural constraints which are common in many applications: namely, we show that for non-increasing submodular set functions, the corresponding convex envelope can be obtained from its Lovász extension, a common tool in submodular analysis. This defines a family of polyhedral norms, for which we provide generic algorithmic tools (subgradients and proximal operators) and theoretical results (conditions for support recovery or high-dimensional inference). By selecting specific submodular functions, we can give a new interpretation to known norms, such as those based on rank statistics or grouped norms with potentially overlapping groups; we also define new norms, in particular ones that can be used as non-factorial priors for supervised learning.
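The Lovász extension used here has a simple closed form: sort the coordinates of w in decreasing order and take a telescoping sum of function gains (Edmonds' greedy formula). A sketch, using the cardinality and nonempty-indicator set functions to show how familiar norms are recovered (illustrative code, not the paper's):

```python
def lovasz_extension(f, w):
    """Evaluate the Lovász extension of set function f (f(empty) = 0)
    at the point w: sort coordinates in decreasing order and accumulate
    w[i] times the marginal gain of adding i to the growing prefix set."""
    order = sorted(range(len(w)), key=lambda i: -w[i])
    value, prev = 0.0, frozenset()
    for i in order:
        cur = prev | {i}
        value += w[i] * (f(cur) - f(prev))
        prev = cur
    return value

# For the cardinality f(S) = |S|, the extension is the linear function
# sum(w); on the nonnegative orthant this is the l1 norm.
card = lambda S: len(S)
print(round(lovasz_extension(card, [0.5, 0.2, 0.9]), 6))  # 1.6

# For f(S) = 1 if S is nonempty (else 0), the extension at nonnegative
# w is max(w), recovering the l-infinity norm.
ind = lambda S: 1.0 if S else 0.0
print(round(lovasz_extension(ind, [0.5, 0.2, 0.9]), 6))   # 0.9
```

Choosing other submodular f in the same formula is exactly how the paper generates its family of structured polyhedral norms.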
Bisubmodular Function Minimization
Mathematical Programming, 2000
Cited by 47 (4 self)
Abstract:
This paper presents the first combinatorial, polynomial-time algorithm for minimizing bisubmodular functions, extending the scaling algorithm for submodular function minimization due to Iwata, Fleischer, and Fujishige. A bisubmodular function arises as a rank function of a delta-matroid. The scaling algorithm naturally leads to the first combinatorial polynomial-time algorithm for testing membership in delta-matroid polyhedra. Unlike the case of matroid polyhedra, it remains open to develop a combinatorial strongly polynomial algorithm for this problem.
Approximating Rank-width and Clique-width Quickly
2006
Cited by 44 (4 self)
Abstract:
Rank-width was defined by Oum and Seymour [2006. Approximating clique-width and branch-width. J. Combin. Theory Ser. B 96, 4, 514–528] to investigate clique-width. They constructed an algorithm that either outputs a rank-decomposition of width at most f(k) for some function f or confirms that the rank-width is larger than k, in time O(|V|^9 log |V|) for an input graph G = (V, E) and a fixed k. We develop three separate algorithms of this kind with faster running times. We construct an O(|V|^4)-time algorithm with f(k) = 3k + 1 by constructing a subroutine for the previous algorithm; we avoid the generic submodular function minimization algorithms used by Oum and Seymour. Another is an O(|V|^3)-time algorithm with f(k) = 24k, obtained by giving a reduction from graphs to binary matroids and then using an approximation algorithm for matroid branch-width by Hliněný [2005. A parametrized algorithm for matroid branch-width. SIAM J. Comput. 35, 2, 259–277]. Finally, we construct an O(|V|^3)-time algorithm with f(k) = 3k − 1 by combining the ideas of the two cited papers above.
A Faster Scaling Algorithm for Minimizing Submodular Functions
2001
Cited by 43 (7 self)
Abstract:
Combinatorial strongly polynomial algorithms for minimizing submodular functions have been developed by Iwata, Fleischer, and Fujishige (IFF) and by Schrijver. The IFF algorithm employs a scaling scheme for submodular functions, whereas Schrijver's algorithm achieves a strongly polynomial bound with the aid of distance labeling. Subsequently, Fleischer and Iwata described a push/relabel version of Schrijver's algorithm to improve its time complexity. This paper combines the scaling scheme with the push/relabel framework to yield a faster combinatorial algorithm for submodular function minimization. The resulting algorithm improves over the previously best known bound by essentially a linear factor in the size of the underlying ground set.
Submodular Approximation: Sampling-based Algorithms and Lower Bounds
2008
Cited by 40 (0 self)
Abstract:
We introduce several generalizations of classical computer science problems obtained by replacing simpler objective functions with general submodular functions. The new problems include submodular load balancing, which generalizes load balancing or minimum-makespan scheduling; submodular sparsest cut and submodular balanced cut, which generalize their respective graph cut problems; as well as submodular function minimization with a cardinality lower bound. We establish upper and lower bounds for the approximability of these problems with a polynomial number of queries to a function-value oracle. The approximation guarantees for most of our algorithms are of the order of √(n/ln n). We show that this is the inherent difficulty of the problems by proving matching lower bounds. We also give an improved lower bound for the problem of approximately learning a monotone submodular function. In addition, we present an algorithm for approximately learning submodular functions with special structure, whose guarantee is close to the lower bound. Although quite restrictive, the class of functions with this structure includes the ones that are used for lower bounds both by us and in previous work. This demonstrates that if there are significantly stronger lower bounds for this problem, they rely on more general submodular functions.