Results 1-10 of 15
What is optimized in tight convex relaxations for multi-label problems?
in Proc. CVPR, 2012
Abstract

Cited by 5 (2 self)
In this work we present a unified view on Markov random fields and recently proposed continuous tight convex relaxations for multi-label assignment in the image plane. These relaxations are far less biased towards the grid geometry than Markov random fields. It turns out that the continuous methods are nonlinear extensions of the local polytope MRF relaxation. In view of this result a better understanding of these tight convex relaxations in the discrete setting is obtained. Further, a wider range of optimization methods is now applicable to find a minimizer of the tight formulation. We propose two methods to improve the efficiency of minimization. One uses a weaker, but more efficient continuously inspired approach as initialization and gradually refines the energy where it is necessary. The other one reformulates the dual energy, enabling smooth approximations to be used for efficient optimization. We demonstrate the utility of our proposed minimization schemes in numerical experiments.
A Co-occurrence Prior for Continuous Multi-Label Optimization
Abstract

Cited by 4 (4 self)
To obtain high-quality segmentation results the integration of semantic information is indispensable. In contrast to existing segmentation methods which use a spatial regularizer, i.e. a local interaction between image points, the co-occurrence prior [15] imposes penalties on the coexistence of different labels in a segmentation. We propose a continuous domain formulation of this prior, using a convex relaxation multi-labeling approach. While the discrete approach [15] employs minimization by sequential alpha expansions, our continuous convex formulation is solved by efficient primal-dual algorithms, which are highly parallelizable on the GPU. Also, our framework allows isotropic regularizers which do not exhibit grid bias. Experimental results on the MSRC benchmark confirm that the use of co-occurrence priors leads to drastic improvements in segmentation compared to the classical Potts model formulation.
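The primal-dual algorithms this abstract refers to can be illustrated on a much smaller problem. The following sketch runs a Chambolle-Pock-style primal-dual iteration on 1D total-variation denoising; it is not the authors' GPU multi-label solver, and the function name, step sizes, and signal are purely illustrative.

```python
# Minimal primal-dual sketch for 1D TV denoising:
#   min_u  0.5 * ||u - f||^2 + lam * ||D u||_1
# where D is the forward-difference operator. Illustrative only.

def tv_denoise_1d(f, lam=1.0, iters=200, tau=0.25, sigma=0.25):
    n = len(f)
    u = list(f)          # primal variable
    u_bar = list(f)      # over-relaxed primal iterate
    p = [0.0] * (n - 1)  # dual variable, one entry per forward difference
    for _ in range(iters):
        # dual ascent followed by projection onto [-lam, lam]
        for i in range(n - 1):
            p[i] = max(-lam, min(lam, p[i] + sigma * (u_bar[i + 1] - u_bar[i])))
        u_old = list(u)
        # primal descent: gradient step on <p, D u>, then proximal step
        # of the quadratic data term 0.5 * ||u - f||^2
        for i in range(n):
            div = (p[i - 1] if i > 0 else 0.0) - (p[i] if i < n - 1 else 0.0)
            u[i] = (u[i] - tau * div + tau * f[i]) / (1.0 + tau)
        # over-relaxation step
        for i in range(n):
            u_bar[i] = 2.0 * u[i] - u_old[i]
    return u
```

Running it on a noisy two-plateau signal smooths the small oscillations while largely preserving the jump between plateaus, which is the behavior TV regularization is chosen for.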
Proximity priors for variational semantic segmentation and recognition
in ICCV Workshop, 2013
Abstract

Cited by 2 (2 self)
In this paper, we introduce the concept of proximity priors into semantic segmentation in order to discourage the presence of certain object classes (such as 'sheep' and 'wolf') 'in the vicinity' of each other. 'Vicinity' encompasses spatial distance as well as specific spatial directions simultaneously, e.g. 'plates' are found directly above 'tables', but do not fly over them. In this sense, our approach generalizes the co-occurrence prior by Ladicky et al. [3], which does not incorporate spatial information at all, and the non-metric label distance prior by Strekalovskiy et al.
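The core idea of penalizing incompatible labels that appear near each other can be sketched in a few lines. The toy evaluator below scores a 1D labeling against a set of incompatible label pairs within a given radius; the paper formulates this variationally over the image plane, so this discrete per-position version is only an assumed illustration.

```python
# Toy proximity-prior penalty on a 1D labeling: incompatible label
# pairs (e.g. 'sheep'/'wolf') are penalized whenever they occur within
# `radius` positions of each other. Illustrative, not the paper's model.

def proximity_penalty(labels, incompatible, radius, weight=1.0):
    pen = 0.0
    for i, a in enumerate(labels):
        # only look ahead up to `radius` positions (symmetric pairs are
        # checked in both orders below)
        for j in range(i + 1, min(i + radius + 1, len(labels))):
            b = labels[j]
            if (a, b) in incompatible or (b, a) in incompatible:
                pen += weight
    return pen
```

With radius 2, 'sheep' and 'wolf' separated by one 'grass' position still incur a penalty; with radius 1 they do not, which is the distance-dependence that distinguishes this prior from a plain co-occurrence prior.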
Convex optimization for scene understanding
in ICCV Workshop, 2013
Abstract

Cited by 2 (2 self)
In this paper we give a convex optimization approach for scene understanding. Since segmentation, object recognition and scene labeling strongly benefit from each other, we propose to solve these tasks within a single convex optimization problem. In contrast to previous approaches, we do not rely on preprocessing techniques such as object detectors or superpixels. The central idea is to integrate a hierarchical label prior and a set of convex constraints into the segmentation approach, which combine the three tasks by introducing high-level scene information. Instead of learning label co-occurrences from limited benchmark training data, the hierarchical prior comes naturally with the way humans see their surroundings.
What Is Optimized in Convex Relaxations for Multi-Label Problems: Connecting Discrete and Continuously Inspired MAP Inference
Submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence
Abstract

Cited by 2 (2 self)
In this work we present a unified view on Markov random fields and recently proposed continuous tight convex relaxations for multi-label assignment in the image plane. These relaxations are far less biased towards the grid geometry than Markov random fields (MRFs) on grids. It turns out that the continuous methods are nonlinear extensions of the well-established local polytope MRF relaxation. In view of this result a better understanding of these tight convex relaxations in the discrete setting is obtained. Further, a wider range of optimization methods is now applicable to find a minimizer of the tight formulation. We propose two methods to improve the efficiency of minimization. One uses a weaker, but more efficient continuously inspired approach as initialization and gradually refines the energy where it is necessary. The other one reformulates the dual energy, enabling smooth approximations to be used for efficient optimization. We demonstrate the utility of our proposed minimization schemes in numerical experiments. Finally, we generalize the underlying energy formulation from isotropic metric smoothness costs to arbitrary non-metric and orientation-dependent smoothness terms.
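The local polytope relaxation the abstract connects to the continuous formulations is known to be tight on tree-structured graphs, where the MAP labeling can be recovered exactly by min-sum dynamic programming. The sketch below does this on a chain MRF; the function names and the toy Potts costs are illustrative, not taken from the paper.

```python
# Exact MAP inference on a chain MRF via min-sum dynamic programming.
# On trees the local polytope LP relaxation is tight, so this DP attains
# the same optimum as the relaxation. Illustrative sketch.

def chain_map(unary, pairwise):
    """unary: per-node lists of label costs; pairwise(a, b): edge cost."""
    n, L = len(unary), len(unary[0])
    cost = list(unary[0])            # best cost of a prefix ending in each label
    back = []                        # backpointers, one list per edge
    for i in range(1, n):
        new_cost, ptr = [], []
        for b in range(L):
            best = min(range(L), key=lambda a: cost[a] + pairwise(a, b))
            new_cost.append(cost[best] + pairwise(best, b) + unary[i][b])
            ptr.append(best)
        cost, back = new_cost, back + [ptr]
    # backtrack the optimal labeling from the last node
    lab = [min(range(L), key=lambda b: cost[b])]
    for ptr in reversed(back):
        lab.append(ptr[lab[-1]])
    return list(reversed(lab)), min(cost)
```

For example, with a Potts pairwise term `lambda a, b: 0.0 if a == b else 0.25` and unaries `[[0, 1], [1, 0], [0, 1]]`, the optimum pays two small Potts penalties rather than override any unary preference.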
An Iteratively Reweighted Algorithm for Nonsmooth Nonconvex Optimization in Computer Vision
, 2014
Abstract

Cited by 2 (0 self)
Natural image statistics indicate that we should use nonconvex norms for most regularization tasks in image processing and computer vision. Still, they are rarely used in practice due to the challenge of optimization. Recently, iteratively reweighted ℓ1 minimization (IRL1) has been proposed as a way to tackle a class of nonconvex functions by solving a sequence of convex ℓ2-ℓ1 problems. We extend the problem class to the sum of a convex function and a (nonconvex) nondecreasing function applied to another convex function. The proposed algorithm sequentially optimizes suitably constructed convex majorizers. Convergence to a critical point is proved when the Kurdyka-Łojasiewicz property and additional mild restrictions hold for the objective function. The efficiency and practical importance of the algorithm are demonstrated in computer vision tasks such as image denoising and optical flow. Most applications seek smooth results with sharp discontinuities. This is achieved by combining nonconvexity
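The IRL1 idea (majorize the concave penalty, then solve a convex weighted-ℓ1 problem) can be shown on a scalar toy objective with a closed-form inner step. This is an assumed illustration of the general scheme, not the paper's algorithm; the log penalty, parameter names, and initialization are all choices made here for clarity.

```python
import math

# IRL1 sketch on the scalar toy problem
#   min_x 0.5 * (x - y)^2 + lam * log(1 + |x| / eps)
# The concave log penalty is majorized by its linearization at |x_k|,
# so each iteration is a convex weighted-l1 problem whose solution is
# soft-thresholding. Illustrative only.

def soft_threshold(y, t):
    return math.copysign(max(abs(y) - t, 0.0), y)

def irl1_log_penalty(y, lam=1.0, eps=0.1, iters=20):
    x = y  # initialize at the minimizer of the data term alone
    for _ in range(iters):
        w = lam / (eps + abs(x))   # slope of the majorizer at |x_k|
        x = soft_threshold(y, w)   # exact minimizer of the convex step
    return x
```

The nonconvex penalty behaves as the abstract suggests: small inputs are shrunk all the way to zero, while large inputs are barely penalized, giving "smooth results with sharp discontinuities" when applied per gradient component.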
MAP-Inference on Large Scale Higher-Order Discrete Graphical Models by Fusion Moves
Abstract

Cited by 2 (1 self)
Many computer vision problems can be cast into optimization problems over discrete graphical models, also known as Markov or conditional random fields. Standard methods are able to solve those problems quite efficiently. However, problems with huge label spaces and/or higher-order structure remain challenging or intractable even for approximate methods. We reconsider the work of Lempitsky et al. 2010 on fusion moves and apply it to general discrete graphical models. We propose two alternatives for calculating fusion moves that outperform the standard in several applications. Our generic software framework allows us to easily use different proposal generators, which span a large class of inference algorithms and thus make exhaustive evaluation feasible. Because these fusion algorithms can be applied to models with huge label spaces and higher-order terms, they might stimulate and support research of such models, which may not have been possible so far due to the lack of adequate inference methods.
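A single fusion move reduces a huge-label-space problem to a binary one: at every node, keep the label from proposal A or from proposal B, minimizing the energy over all such combinations. The sketch below solves that binary subproblem by brute force on a tiny chain; practical implementations use QPBO or graph cuts, and the energies and proposals here are illustrative.

```python
from itertools import product

# Sketch of one fusion move between two proposal labelings on a chain.
# The per-node binary choice (keep A's label or B's label) is solved by
# brute force here; real systems solve it with QPBO / graph cuts.

def energy(labels, unary, pairwise):
    e = sum(unary[i][l] for i, l in enumerate(labels))
    e += sum(pairwise(labels[i], labels[i + 1]) for i in range(len(labels) - 1))
    return e

def fusion_move(prop_a, prop_b, unary, pairwise):
    n = len(prop_a)
    best, best_e = None, float("inf")
    for choice in product([0, 1], repeat=n):   # 0 -> keep A, 1 -> keep B
        fused = [prop_b[i] if c else prop_a[i] for i, c in enumerate(choice)]
        e = energy(fused, unary, pairwise)
        if e < best_e:
            best, best_e = fused, e
    return best, best_e
```

By construction the fused labeling is never worse than either proposal (both are among the candidates), which is what makes iterated fusion with varied proposal generators a monotone improvement scheme.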
Convex Relaxation of Vectorial Problems with Coupled Regularization
, 2014
Abstract
We propose convex relaxations for nonconvex energies on vector-valued functions which are both tractable and as tight as possible. In contrast to existing relaxations, we can handle the combination of nonconvex data terms with coupled regularizers such as ℓ2 regularizers. The key idea is to consider a collection of hypersurfaces with a relaxation that takes into account the entire functional rather than separately treating the data term and the regularizers. We provide a theoretical analysis, detail the implementations for different functionals, present run time and memory requirements, and experimentally demonstrate that the coupled ℓ2 regularizers give systematic improvements regarding denoising, inpainting and optical flow estimation.