Results 11–20 of 70
Algorithms for approximate minimization of the difference between submodular functions, with applications
2012
Cited by 12 (11 self)
Abstract
We extend the work of Narasimhan and Bilmes [30] for minimizing set functions representable as a difference between submodular functions. Like [30], our new algorithms are guaranteed to monotonically reduce the objective function at every step. We show, both empirically and theoretically, that the per-iteration cost of our algorithms is much lower than that of [30], and that our algorithms can be used to efficiently minimize a difference between submodular functions under various combinatorial constraints, a problem not previously addressed. We provide computational bounds and a hardness result on the multiplicative inapproximability of minimizing the difference between submodular functions. We show, however, that it is possible to give worst-case additive bounds by providing a polynomial-time computable lower bound on the minima. Finally, we show how a number of machine learning problems can be modeled as minimizing the difference between submodular functions. We experimentally validate our algorithms on the problem of feature selection with submodular-cost features.
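The iterative scheme the abstract describes can be illustrated on a toy instance. The sketch below is in the spirit of the "modular-modular" style procedures for minimizing f(X) - g(X) with f, g submodular, not the authors' exact algorithms: at each step f is replaced by a Nemhauser-style modular upper bound tight at the current set, g by a chain-based modular lower bound, and the modular surrogate is minimized exactly, which guarantees the objective never increases. The coverage function f and the concave-of-cardinality g are illustrative assumptions.

```python
# Toy difference-of-submodular minimization via modular bounds.
V = [0, 1, 2, 3]  # ground set
COVER = {0: {"a"}, 1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}}

def f(S):
    """Submodular coverage function (illustrative)."""
    covered = set()
    for e in S:
        covered |= COVER[e]
    return float(len(covered))

def g(S):
    """Submodular concave-of-cardinality function (illustrative)."""
    return 2.0 * len(S) ** 0.5

def modular_upper_bound_f(X):
    """Weights of a modular upper bound of f, tight at X (Nemhauser-style)."""
    w = {}
    for e in V:
        if e in X:
            w[e] = f(X) - f(X - {e})   # removal gain at X
        else:
            w[e] = f({e}) - f(set())   # singleton gain
    return w

def modular_lower_bound_g(X):
    """Weights of a chain-based modular lower bound of g, tight at X."""
    order = sorted(V, key=lambda e: e not in X)  # X's elements first
    w, prev = {}, set()
    for e in order:
        w[e] = g(prev | {e}) - g(prev)
        prev = prev | {e}
    return w

X = set()  # initial solution; the procedure only finds a local optimum
for _ in range(4 * len(V)):
    wf = modular_upper_bound_f(X)
    wg = modular_lower_bound_g(X)
    nxt = {e for e in V if wf[e] - wg[e] < 0}  # exact modular minimizer
    if nxt == X:
        break
    X = nxt
```

On this instance the procedure converges in two steps; each iteration costs only a few function evaluations, which is the kind of per-iteration saving the abstract emphasizes.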
Modeling Disease Progression via Fused Sparse Group Lasso
Cited by 11 (4 self)
Abstract
Alzheimer’s Disease (AD) is the most common neurodegenerative disorder associated with aging. Understanding how the disease progresses and identifying related pathological biomarkers for the progression is of primary importance in the clinical diagnosis and prognosis of Alzheimer’s disease. In this paper, we develop novel multi-task learning techniques to predict the disease progression measured by cognitive scores and select biomarkers predictive of the progression. In multi-task learning, the prediction of cognitive scores at each time point is considered as a task, and multiple prediction tasks at different time points are performed simultaneously to capture the temporal smoothness of the prediction models across different time points. Specifically, we propose a novel convex fused sparse group Lasso (cFSGL) …
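The ingredients the abstract names can be combined into an objective along the following lines. This is a hedged sketch: the squared fit, elementwise l1 term, temporal fused term, and l2,1 group term are the standard components of a fused sparse group Lasso, but the exact coupling and the weights lam1, lam2, lam3 are assumptions, not the paper's formulation.

```python
import numpy as np

def cfsgl_objective(W, X, Y, lam1, lam2, lam3):
    """Sketch of a fused-sparse-group-Lasso-style objective.
    W: (d, t) weights, one column per time point (task).
    X: (n, d) features; Y: (n, t) cognitive scores."""
    fit = 0.5 * np.linalg.norm(X @ W - Y, "fro") ** 2
    l1 = lam1 * np.abs(W).sum()                      # elementwise sparsity
    fused = lam2 * np.abs(np.diff(W, axis=1)).sum()  # temporal smoothness
    l21 = lam3 * np.linalg.norm(W, axis=1).sum()     # joint feature selection
    return fit + l1 + fused + l21

# Toy evaluation: 30 subjects, 8 biomarkers, 5 time points.
rng = np.random.default_rng(0)
X_feat = rng.normal(size=(30, 8))
Y = rng.normal(size=(30, 5))
val0 = cfsgl_objective(np.zeros((8, 5)), X_feat, Y, 1.0, 1.0, 1.0)
```

The fused term penalizes changes in a biomarker's weight between consecutive time points, while the row-wise l2,1 term zeroes out a biomarker across all time points at once, which is how joint biomarker selection arises.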
Structured Support Vector Machines for Noise Robust Continuous Speech Recognition
Cited by 10 (9 self)
Abstract
The use of discriminative models is an interesting alternative to generative models for speech recognition. This paper examines one form of these models, structured support vector machines (SVMs), for noise-robust speech recognition. One important aspect of structured SVMs is the form of the joint feature space. In this work, features based on generative models are used, which allows model-based compensation schemes to be applied to yield robust joint features. However, these features require the segmentation of frames into words, or sub-words, to be specified. In previous work this segmentation was obtained using generative models. Here the segmentations are refined using the parameters of the structured SVM. A Viterbi-like scheme for obtaining “optimal” segmentations, and modifications to the training algorithm that allow them to be used efficiently, are described. The performance of the approach is evaluated on a noise-corrupted continuous digit task: AURORA 2. Index Terms: speech recognition, structured SVMs, optimal alignment, large margin, log-linear model
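The "Viterbi-like scheme for obtaining optimal segmentations" can be sketched as a dynamic program over segment boundaries. The score function below is a stand-in for the structured-SVM joint feature score of labelling a span of frames with one word; the toy score and the `max_len` cap are assumptions for illustration.

```python
def best_segmentation(n_frames, words, segment_score, max_len=10):
    """Viterbi-like DP: best[t] = best score of segmenting frames [0, t).
    segment_score(s, t, w) -> score of frames [s, t) carrying word w."""
    NEG = float("-inf")
    best = [NEG] * (n_frames + 1)
    back = [None] * (n_frames + 1)
    best[0] = 0.0
    for t in range(1, n_frames + 1):
        for s in range(max(0, t - max_len), t):
            if best[s] == NEG:
                continue
            for w in words:
                cand = best[s] + segment_score(s, t, w)
                if cand > best[t]:
                    best[t], back[t] = cand, (s, w)
    # Trace back the arg-max segmentation.
    segs, t = [], n_frames
    while t > 0:
        s, w = back[t]
        segs.append((s, t, w))
        t = s
    return best[n_frames], segs[::-1]

# Toy instance: 3 frames, two words; the score rewards one segmentation.
def toy_score(s, t, w):
    return 1.0 if (s, t, w) in {(0, 2, "a"), (2, 3, "b")} else -1.0

val, segs = best_segmentation(3, ["a", "b"], toy_score)
```

The complexity is O(frames × max segment length × vocabulary), which is why bounding the segment length matters in practice.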
UNCONDITIONALLY STABLE SCHEMES FOR HIGHER ORDER INPAINTING
Cited by 9 (7 self)
Abstract
Inpainting methods based on third- and fourth-order equations have certain advantages over second-order equations, such as the smooth interpolation of image information even over large distances. Because of this, such methods have become very popular in the last couple of years. Solving higher-order equations numerically can be a computationally demanding task, though. Discretizing a fourth-order evolution equation with a brute-force method may restrict the time step to order (∆x)^4, where ∆x denotes the step size of the spatial grid. In this work we present a more educated way of discretizing, namely efficient semi-implicit schemes that are guaranteed to be unconditionally stable. We explain the main idea of these schemes and present applications in image processing for inpainting with the Cahn-Hilliard equation, TV-H^{-1} inpainting, and inpainting with LCIS (low curvature image simplifiers).
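The stability idea can be seen on a linear 1D analogue. The sketch below is not the paper's Cahn-Hilliard inpainting scheme; it treats the stiff fourth-order term of u_t = -u_xxxx implicitly in Fourier space on a periodic grid, so each mode is damped by 1/(1 + dt·k^4) and no time-step restriction of order (∆x)^4 arises.

```python
import numpy as np

def semi_implicit_step(u, dt, L=2 * np.pi):
    """One unconditionally stable implicit step for u_t = -u_xxxx."""
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # integer wavenumbers
    u_hat = np.fft.fft(u)
    u_hat /= 1.0 + dt * k**4                    # (I + dt * d^4/dx^4)^{-1}
    return np.real(np.fft.ifft(u_hat))

# Even a huge time step stays stable: amplitudes are only ever damped.
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(3 * x)
u_next = semi_implicit_step(u, dt=100.0)
```

For the nonlinear equations in the paper, the same idea is applied in a semi-implicit split: stiff linear higher-order terms implicit, nonlinearities explicit.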
Continuous Ratio Optimization via Convex Relaxation with Applications to Multi-view 3D Reconstruction
In: CVPR, 2009
Cited by 9 (0 self)
Abstract
We introduce a convex relaxation framework to optimally minimize continuous surface ratios. The key idea is to minimize the continuous surface ratio by solving a sequence of convex optimization problems. We show that such minimal ratios are superior to traditionally used minimal surface formulations in that they do not suffer from a shrinking bias and no longer require the choice of a regularity parameter. The absence of a shrinking bias in the minimal ratio model is proven analytically. Furthermore, we demonstrate that continuous ratio optimization can be applied to derive a new algorithm for reconstructing three-dimensional silhouette-consistent objects from multiple views. Experimental results confirm that our approach accurately reconstructs deep concavities even without the specification of tuning parameters.
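The "sequence of convex problems" strategy for a ratio is the classical parametric (Dinkelbach-style) idea: repeatedly minimize N(u) - λ·D(u) and update λ to the achieved ratio. Whether this matches the authors' relaxation exactly is not claimed; the sketch below replaces the convex subproblem with a brute-force search over a toy candidate set, assuming D > 0.

```python
def minimize_ratio(candidates, N, D, lam0=0.0, tol=1e-9, max_iter=100):
    """Dinkelbach-style minimization of N(u)/D(u), D(u) > 0 assumed.
    Each iteration solves the parametric subproblem min_u N(u) - lam*D(u)."""
    lam = lam0
    for _ in range(max_iter):
        u = min(candidates, key=lambda v: N(v) - lam * D(v))
        new_lam = N(u) / D(u)  # achieved ratio becomes the next parameter
        if abs(new_lam - lam) < tol:
            return u, new_lam
        lam = new_lam
    return u, lam

# Toy ratio (v^2 + 1) / v over integer candidates; minimized at v = 1.
u_star, ratio = minimize_ratio(list(range(1, 6)),
                               N=lambda v: v * v + 1,
                               D=lambda v: v)
```

The absence of a free regularity parameter in the abstract corresponds to λ here: it is not tuned by hand but determined by the fixed-point iteration itself.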
Learning Shared Body Plans
Cited by 9 (3 self)
Abstract
We cast the problem of recognizing related categories as a unified learning and structured prediction problem with shared body plans. When provided with detailed annotations of objects and their parts, these body plans model objects in terms of shared parts and layouts, simultaneously capturing a variety of categories in varied poses. We can use these body plans to jointly train many detectors in a shared framework with structured learning, leading to significant gains for each supervised task. Using our model, we can provide detailed predictions of objects and their parts for both familiar and unfamiliar categories.
Revisiting Bayesian Blind Deconvolution
arXiv:1305.2362, 2013. [Online]. Available: http://arxiv.org/abs/1305.2362
Parameter Learning and Convergent Inference for Dense Random Fields
Cited by 8 (0 self)
Abstract
Dense random fields are models in which all pairs of variables are directly connected by pairwise potentials. It has recently been shown that mean field inference in dense random fields can be performed efficiently and that these models enable significant accuracy gains in computer vision applications. However, parameter estimation for dense random fields is still poorly understood. In this paper, we present an efficient algorithm for learning parameters in dense random fields. All parameters are estimated jointly, thus capturing dependencies between them. We show that gradients of a variety of loss functions over the mean field marginals can be computed efficiently. The resulting algorithm learns parameters that directly optimize the performance of mean field inference in the model. As a supporting result, we present an efficient inference algorithm for dense random fields that is guaranteed to converge.
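The mean-field inference routine that such learning procedures differentiate through can be sketched naively as below. The O(n²) pairwise aggregation is an assumption for clarity; the efficiency results in this literature come from replacing it with fast Gaussian filtering, and the fixed kernel and Potts compatibility are illustrative.

```python
import numpy as np

def mean_field(unary, kernel, compat, n_iters=10):
    """Naive mean-field updates for a dense pairwise model.
    unary: (n, L) negative log unary potentials.
    kernel: (n, n) symmetric pairwise weights, zero diagonal.
    compat: (L, L) label compatibility (penalty) matrix."""
    Q = np.exp(-unary)
    Q /= Q.sum(axis=1, keepdims=True)       # initialize from unaries
    for _ in range(n_iters):
        msg = kernel @ Q                    # aggregate neighbours' beliefs
        energy = unary + msg @ compat       # pairwise message passing
        Q = np.exp(-energy)
        Q /= Q.sum(axis=1, keepdims=True)   # renormalize marginals
    return Q

# Toy dense model: 3 fully connected nodes, 2 labels, Potts penalty.
unary = np.array([[0.0, 2.0], [2.0, 0.0], [0.5, 0.6]])
kernel = np.ones((3, 3)) - np.eye(3)
compat = 1.0 - np.eye(2)
Q = mean_field(unary, kernel, compat)
```

Because every update is a composition of differentiable operations (matrix products, exp, normalization), gradients of losses on the marginals Q can be back-propagated through the iterations, which is the basis of the joint parameter learning the abstract describes.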
A Unified Optimization Framework for Robust Pseudo-Relevance Feedback Algorithms
Cited by 7 (0 self)
Abstract
We present a flexible new optimization framework for finding effective, reliable pseudo-relevance feedback models that unifies existing complementary approaches in a principled way. The result is an algorithmic approach that not only brings together different benefits of previous methods, such as parameter self-tuning and risk reduction from term-dependency modeling, but also allows a rich new space of model search strategies to be investigated. We compare the effectiveness of a unified algorithm to existing methods by examining iterative performance and risk-reward tradeoffs. We also discuss extensions for generating new algorithms within our framework.
Robust classification with adiabatic quantum optimization
Proc. 29th Int. Conf. on Machine Learning, 2012
Cited by 6 (1 self)
Abstract
We propose a non-convex training objective for robust binary classification of data sets in which label noise is present. The design is guided by the intention of solving the resulting problem by adiabatic quantum optimization. Two requirements are imposed by the engineering constraints of existing quantum hardware: training problems are formulated as quadratic unconstrained binary optimization, and model parameters are represented as binary expansions of low bit-depth. In the present work we validate this approach by using a heuristic classical solver as a stand-in for quantum hardware. Testing on several popular data sets and comparing with a number of existing losses, we find substantial advantages in robustness as measured by test error under increasing label noise. Robustness is enabled by the non-convexity of our hardware-compatible loss function, which we name q-loss.
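The hardware-compatibility step the abstract names can be sketched as follows: weights are written as low bit-depth binary expansions, turning a quadratic training loss into a QUBO over the bits, and a brute-force enumeration plays the role of the heuristic classical solver. A plain squared loss is used here instead of the paper's q-loss, and the fixed-point encoding (non-negative weights, powers of 1/2) is an assumption for brevity.

```python
import itertools
import numpy as np

def build_qubo(X, y, n_bits=2, scale=1.0):
    """Least-squares QUBO: encode w_j = scale * sum_b 2^{-b} * s_{j,b}."""
    n, d = X.shape
    B = np.zeros((d, d * n_bits))       # maps bit vector s to weights w
    for j in range(d):
        for b in range(n_bits):
            B[j, j * n_bits + b] = scale * 2.0 ** (-b)
    A = X @ B                           # loss = ||A s - y||^2 + const
    Q = A.T @ A
    Q[np.diag_indices_from(Q)] -= 2.0 * (A.T @ y)  # uses s_i^2 == s_i
    return Q, B

def solve_qubo_brute(Q):
    """Exhaustive QUBO solver, a stand-in for a heuristic/quantum solver."""
    m = Q.shape[0]
    best, best_s = np.inf, None
    for bits in itertools.product([0, 1], repeat=m):
        s = np.array(bits, dtype=float)
        val = s @ Q @ s
        if val < best:
            best, best_s = val, s
    return best_s

# Toy problem whose solution is exactly representable at 2 bits per weight.
Q, B = build_qubo(np.eye(2), np.array([1.5, 1.0]), n_bits=2)
s_best = solve_qubo_brute(Q)
w_best = B @ s_best
```

The diagonal correction works because binary variables satisfy s_i² = s_i, so the linear term of the expanded loss can be folded into the QUBO diagonal; the same construction applies to any loss that is quadratic in the bits.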