Results 1 - 5 of 5
Markov Random Field Modeling, Inference & Learning in Computer Vision & Image Understanding: A Survey
, 2013
Fast Semidifferential-based Submodular Function Optimization
, 2013
Abstract

Cited by 14 (3 self)
We present a practical and powerful new framework for both unconstrained and constrained submodular function optimization based on discrete semidifferentials (sub- and superdifferentials). The resulting algorithms, which repeatedly compute and then efficiently optimize submodular semigradients, offer new and generalize many old methods for submodular optimization. Our approach, moreover, takes steps towards providing a unifying paradigm applicable to both submodular minimization and maximization, problems that historically have been treated quite distinctly. The practicality of our algorithms is important since interest in submodularity, owing to its natural and wide applicability, has recently been in ascendance within machine learning. We analyze theoretical properties of our algorithms for minimization and maximization, and show that many state-of-the-art maximization algorithms are special cases. Lastly, we complement our theoretical analyses with supporting empirical experiments.
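The minimization side of the semigradient idea can be made concrete: any permutation of the ground set induces a modular lower bound (subgradient) of a submodular f, and a modular function is trivially minimized by keeping its negative-weight elements; iterating this yields a local search. The sketch below is an illustrative assumption in this spirit, not code from the paper, and the coverage-minus-cost toy function is invented for the example.

```python
# Sketch of subgradient-based submodular minimization: a permutation of the
# ground set induces a modular lower bound of f via chain marginals, which is
# trivially minimized; iterate until no improvement.

def subgradient(f, perm):
    """Modular lower bound of f along the chain of prefixes of `perm`:
    h[j] = f(prefix + {j}) - f(prefix)."""
    h, chain, prev = {}, set(), f(frozenset())
    for j in perm:
        chain.add(j)
        cur = f(frozenset(chain))
        h[j] = cur - prev
        prev = cur
    return h

def minimize_submodular(f, ground):
    S = frozenset()
    while True:
        # place the current solution's elements first so the bound is tight at S
        perm = sorted(ground, key=lambda j: j not in S)
        h = subgradient(f, perm)
        T = frozenset(j for j, w in h.items() if w < 0)  # argmin of modular h
        if f(T) >= f(S):
            return S
        S = T

# toy submodular function (assumed for illustration): coverage minus a modular cost
cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}}
cost = {1: 1, 2: 3, 3: 1}

def f(S):
    covered = set().union(*(cover[j] for j in S)) if S else set()
    return len(covered) - sum(cost[j] for j in S)

best = minimize_submodular(f, [1, 2, 3])  # finds {2, 3} with value -2
```

On this toy instance the loop converges in two rounds to the global minimizer, but in general such semigradient schemes guarantee only a local optimum; the paper's analysis characterizes when more can be said.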
Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions
 IN NIPS
, 2013
Abstract

Cited by 9 (6 self)
We investigate three related and important problems connected to machine learning: approximating a submodular function everywhere, learning a submodular function (in a PAC-like setting [28]), and constrained minimization of submodular functions. We show that the complexity of all three problems depends on the “curvature” of the submodular function, and provide lower and upper bounds that refine and improve previous results [2, 6, 8, 27]. Our proof techniques are fairly generic. We either use a black-box transformation of the function (for approximation and learning), or a transformation of algorithms to use an appropriate surrogate function (for minimization). Curiously, curvature has been known to influence approximations for submodular maximization [3, 29], but its effect on minimization, approximation and learning has hitherto been open. We complete this picture, and also support our theoretical claims by empirical results.
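For a monotone submodular f with f({j}) > 0, the curvature in question is κ(f) = 1 - min_j f(j | V∖{j}) / f({j}): how much an element's marginal gain can shrink between being added first and being added last. A modular function has κ = 0; a function with a fully redundant element has κ = 1. The toy functions below are assumptions for illustration only.

```python
# Curvature of a monotone submodular function:
# kappa(f) = 1 - min_j [ f(j | V \ {j}) / f({j}) ].

def curvature(f, ground):
    V = frozenset(ground)
    ratios = []
    for j in V:
        gain_last = f(V) - f(V - {j})   # marginal gain of j when added last
        gain_first = f(frozenset({j}))  # marginal gain of j when added first
        ratios.append(gain_last / gain_first)
    return 1 - min(ratios)

# a modular (additive) function: marginals never shrink, so kappa = 0
weights = {1: 2.0, 2: 5.0, 3: 1.0}
modular = lambda S: sum(weights[j] for j in S)

# a coverage function where set 2 is fully redundant given 1 and 3: kappa = 1
cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}}
coverage = lambda S: len(set().union(*(cover[j] for j in S))) if S else 0
```

The abstract's point is that this single scalar governs the attainable approximation factors for learning and constrained minimization, interpolating between the easy modular case (κ = 0) and the worst case (κ = 1).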
Joint Energy-based Detection and Classification of Multilingual Text Lines
Abstract
Abstract. This paper proposes a new hierarchical MDL-based model for a joint detection and classification of multilingual text lines in images taken by handheld cameras. The majority of related text detection methods assume alphabet-based writing in a single language, e.g. in Latin. They use simple clustering heuristics specific to such texts: proximity between letters within one line, larger distance between separate lines, etc. We are interested in a significantly more ambiguous problem where images combine alphabet and logographic characters from multiple languages and typographic rules vary a lot (e.g. English, Korean, and Chinese). Complexity of detecting and classifying text lines in multiple languages calls for a more principled approach based on information-theoretic principles. Our new MDL model includes data costs combining geometric errors with classification likelihoods and a hierarchical sparsity term based on label costs. This energy model can be efficiently minimized by fusion moves. We demonstrate robustness of the proposed algorithm on a large new database of multilingual text images collected in the public transit system of Seoul.
Monotone Closure of Relaxed Constraints in Submodular Optimization: Connections Between Minimization and Maximization: Extended Version
, 2014
Abstract
It is becoming increasingly evident that many machine learning problems may be reduced to some form of submodular optimization. Previous work addresses generic discrete approaches and specific relaxations. In this work, we take a generic view from a relaxation perspective. We show a relaxation formulation and simple rounding strategy that, based on the monotone closure of relaxed constraints, reveals analogies between minimization and maximization problems, and includes known results as special cases and extends to a wider range of settings. Our resulting approximation factors match the corresponding integrality gaps. The results in this paper complement, in a sense explained in the paper, related discrete gradient based methods [30], and are particularly useful given the ever increasing need for efficient submodular optimization methods in very large-scale machine learning. For submodular maximization, a number of relaxation approaches have been proposed. A critical challenge for the practical applicability of these techniques, however, is the complexity of evaluating the multilinear extension. We show that this extension can be efficiently evaluated for a number of useful submodular functions, thus making these otherwise impractical algorithms viable for many real-world machine learning problems.
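To make the last point concrete: the multilinear extension is F(x) = E[f(S)], where each element j is included in S independently with probability x_j, so naive evaluation sums over all 2^n subsets. For a coverage function, however, linearity of expectation gives the closed form F(x) = Σ_u (1 - Π_{j covering u} (1 - x_j)). The check below is an illustrative sketch on an assumed toy instance, not the paper's algorithm.

```python
from itertools import combinations

# toy coverage instance (assumed for illustration)
cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}}
universe = set().union(*cover.values())

def f(S):
    """Coverage function: number of universe elements covered by the sets in S."""
    return len(set().union(*(cover[j] for j in S))) if S else 0

def multilinear_bruteforce(x):
    """F(x) = E[f(S)], each j included independently with prob x[j];
    exact by enumerating all 2^n subsets (small n only)."""
    items = list(x)
    total = 0.0
    for r in range(len(items) + 1):
        for S in combinations(items, r):
            p = 1.0
            for j in items:
                p *= x[j] if j in S else 1.0 - x[j]
            total += p * f(frozenset(S))
    return total

def multilinear_coverage(x):
    """Closed form for coverage: element u is covered unless every set
    containing u is excluded, so P(u covered) = 1 - prod(1 - x_j)."""
    total = 0.0
    for u in universe:
        miss = 1.0
        for j, elems in cover.items():
            if u in elems:
                miss *= 1.0 - x[j]
        total += 1.0 - miss
    return total

x = {1: 0.5, 2: 0.25, 3: 0.8}
# the closed form agrees with the exponential-time expectation
```

The closed form costs O(|universe| · n) per evaluation instead of O(2^n), which is the kind of speedup that makes continuous-relaxation maximization methods practical.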