Results 1-10 of 23
Decoding binary node labels from censored edge measurements: Phase transition and efficient recovery
2014

Cited by 12 (6 self)

Abstract
We consider the problem of clustering a graph G into two communities by observing a subset of the vertex correlations. Specifically, we consider the inverse problem with observed variables Y = B_G x ⊕ Z, where B_G is the incidence matrix of a graph G, x is the vector of unknown vertex variables (with a uniform prior) and Z is a noise vector with Bernoulli(ε) i.i.d. entries. All variables and operations are Boolean. This model is motivated by coding, synchronization, and community detection problems. In particular, it corresponds to a stochastic block model or a correlation clustering problem with two communities and censored edges. Without noise, exact recovery (up to global flip) of x is possible if and only if the graph G is connected, with a sharp threshold at the edge probability log(n)/n for Erdős-Rényi random graphs. The first goal of this paper is to determine how the edge probability p needs to scale to allow exact recovery in the presence of noise. Defining the degree (oversampling) rate of the graph by α = np / log(n), it is shown that exact recovery is possible if and only if α > 2/(1 − 2ε)^2 + o(1/(1 − 2ε)^2). In other words, 2/(1 − 2ε)^2 is the information-theoretic threshold for exact recovery at low SNR. In addition, an efficient recovery algorithm based on semidefinite programming is proposed and shown to succeed in the threshold regime up to twice the optimal rate. For a deterministic graph G, defining the degree rate as α = d / log(n), where d is the minimum degree of the graph, it is shown that the proposed method achieves the rate α > 4((1 + λ)/(1 − λ)^2)/(1 − 2ε)^2 + o(1/(1 − 2ε)^2), where 1 − λ is the spectral gap of the graph G. A preliminary version of this paper appeared in ISIT 2014 [ABBS14].
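The measurement model Y = B_G x ⊕ Z is easy to simulate, and the simpler spectral relaxation (a weaker cousin of the SDP the paper analyzes) already illustrates the recovery phenomenon. A minimal pure-Python sketch, writing the labels as ±1 so that XOR becomes multiplication; the function name and parameter values are ours, not the paper's:

```python
import math
import random

def censored_recovery(n=80, p=0.5, eps=0.1, iters=100, seed=0):
    rng = random.Random(seed)
    x = [rng.choice([-1, 1]) for _ in range(n)]       # planted +-1 labels
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:                      # edge of G(n, p) observed
                y = x[i] * x[j]                       # noiseless correlation
                if rng.random() < eps:                # Bernoulli(eps) flip
                    y = -y
                A[i][j] = A[j][i] = y
    # power iteration for the leading eigenvector of the signed adjacency
    v = [rng.gauss(0, 1) for _ in range(n)]
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(t * t for t in w)) or 1.0
        v = [t / norm for t in w]
    xhat = [1 if t >= 0 else -1 for t in v]
    agree = sum(a == b for a, b in zip(x, xhat)) / n
    return max(agree, 1.0 - agree)                    # recovery up to a global flip
```

With n = 80, p = 0.5 and ε = 0.1 the oversampling rate α = np/log(n) ≈ 9 sits well above the 2/(1 − 2ε)^2 ≈ 3.1 threshold, so near-exact recovery is expected from the spectral method alone.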
Near-optimal joint object matching via convex relaxation. arXiv preprint arXiv:1402.1473
2014

Cited by 9 (1 self)

Abstract
Joint object matching aims at aggregating information from a large collection of similar instances (e.g. images, graphs, shapes) to improve the correspondences computed between pairs of objects, typically by exploiting global map compatibility. Despite some practical advances on this problem, from the theoretical point of view, the error-correction ability of existing algorithms is limited by a constant barrier: none of them can provably recover the correct solution when more than a constant fraction of input correspondences are corrupted. Moreover, prior approaches focus mostly on fully similar objects, while it is practically more demanding and realistic to match instances that are only partially similar to each other. In this paper, we propose an algorithm to jointly match multiple objects that exhibit only partial similarities, where the provided pairwise feature correspondences can be densely corrupted. By encoding a consistent partial map collection into a 0-1 semidefinite matrix, we attempt recovery via a two-step procedure, that is, a spectral technique followed by a parameter-free convex program called MatchLift. Under a natural randomized model, MatchLift exhibits near-optimal error-correction ability, i.e. it guarantees recovery of the ground-truth maps even when a dominant fraction of the inputs are randomly corrupted. We evaluate the proposed algorithm on various benchmark data sets, including synthetic examples and real-world examples, all of which confirm the practical applicability of the proposed algorithm.
Estimating Image Depth Using Shape Collections
Cited by 7 (4 self)

Abstract
Figure 1: We attribute a single 2D image of an object (left) with depth by transporting information from a 3D shape deformation subspace learned by analyzing a network of related but different shapes (middle). For visualization, we color-code the estimated depth with values increasing from red to blue (right). Images, while easy to acquire, view, publish, and share, lack critical depth information. This poses a serious bottleneck for many image manipulation, editing, and retrieval tasks. In this paper we consider the problem of adding depth to an image of an object, effectively ‘lifting’ it back to 3D, by exploiting a collection of aligned 3D models of related object shapes. Our key insight is that, even when the imaged object is not contained in the shape collection, the network of shapes implicitly characterizes a shape-specific deformation subspace that regularizes the problem and enables robust diffusion of depth information from the shape collection to the input image. We evaluate our fully automatic approach on diverse and challenging input images, validate the results against Kinect depth readings, and demonstrate several imaging applications including depth-enhanced image editing and image relighting.
Linear inverse problems on Erdős-Rényi graphs: Information-theoretic limits and efficient recovery
Cited by 7 (5 self)

Abstract
This paper considers the inverse problem with observed variables Y = B_G X ⊕ Z, where B_G is the incidence matrix of a graph G, X is the vector of unknown vertex variables with a uniform prior, and Z is a noise vector with Bernoulli(ε) i.i.d. entries. All variables and operations are Boolean. This model is motivated by coding, synchronization, and community detection problems. In particular, it corresponds to a stochastic block model or a correlation clustering problem with two communities and censored edges. Without noise, exact recovery of X is possible if and only if the graph G is connected, with a sharp threshold at the edge probability log(n)/n for Erdős-Rényi random graphs. The first goal of this paper is to determine how the edge probability p needs to scale to allow exact recovery in the presence of noise. Defining the degree (oversampling) rate of the graph by α = np / log(n), it is shown that exact recovery is possible if and only if α > 2/(1 − 2ε)^2 + o(1/(1 − 2ε)^2). In other words, 2/(1 − 2ε)^2 is the information-theoretic threshold for exact recovery at low SNR. In addition, an efficient recovery algorithm based on semidefinite programming is proposed and shown to succeed in the threshold regime up to twice the optimal rate. Full version available in [1].
Creating Consistent Scene Graphs Using a Probabilistic Grammar
Cited by 4 (3 self)

Abstract
Figure 1: Our algorithm processes raw scene graphs with possible oversegmentation (a), obtained from repositories such as the Trimble Warehouse, into consistent hierarchies capturing semantic and functional groups (b,c). The hierarchies are inferred by parsing the scene geometry with a probabilistic grammar learned from a set of annotated examples. Apart from generating meaningful groupings at multiple scales, our algorithm also produces object labels with higher accuracy compared to alternative approaches. Growing numbers of 3D scenes in online repositories provide new opportunities for data-driven scene understanding, editing, and synthesis. Despite the plethora of data now available online, most of it cannot be effectively used for data-driven applications because it lacks consistent segmentations, category labels, and/or functional groupings required for co-analysis. In this paper, we develop algorithms that infer such information via parsing with a probabilistic grammar learned from examples. First, given a collection of scene graphs with consistent hierarchies and labels, we train a probabilistic hierarchical grammar to represent the distributions of shapes, ...
Tightness of the maximum likelihood semidefinite relaxation for angular synchronization. Available online at arXiv:1411.3272 [math.OC]
2014

Cited by 3 (2 self)

Abstract
Many maximum likelihood estimation problems are, in general, intractable optimization problems. As a result, it is common to approximate the maximum likelihood estimator (MLE) using convex relaxations. Semidefinite relaxations are among the most popular. Sometimes, the relaxations turn out to be tight. In this paper, we study such a phenomenon. The angular synchronization problem consists of estimating a collection of n phases, given noisy measurements of some of the pairwise relative phases. The MLE for the angular synchronization problem is the solution of a (hard) non-bipartite Grothendieck problem over the complex numbers. It is known that its semidefinite relaxation enjoys worst-case approximation guarantees. In this paper, we consider a stochastic model on the input of that semidefinite relaxation. We assume there is a planted signal (corresponding to a ground-truth set of phases) and the measurements are corrupted with random noise. Even though the MLE does not coincide with the planted signal, we show that the relaxation is, with high probability, tight. This holds even for high levels of noise. This analysis explains, for the interesting case of angular synchronization, a phenomenon which has been observed without explanation in many other settings: namely, the fact that even when exact recovery of the ground truth is impossible, semidefinite relaxations for the MLE tend to be tight (in favorable noise regimes).
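The eigenvector relaxation of the same estimation problem (a weaker cousin of the SDP whose tightness this paper studies) fits in a few lines: under a planted model, the top eigenvector of the Hermitian measurement matrix correlates strongly with the planted phases. A pure-Python sketch; the additive Gaussian noise model, function name, and parameter values are our assumptions, not the paper's exact setup:

```python
import cmath
import math
import random

def angular_sync(n=40, sigma=0.5, iters=150, seed=1):
    rng = random.Random(seed)
    z = [cmath.exp(1j * rng.uniform(0, 2 * math.pi)) for _ in range(n)]
    # noisy relative-phase measurements: H_ij ~ z_i * conj(z_j) + noise
    H = [[0j] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            noise = complex(rng.gauss(0, sigma), rng.gauss(0, sigma))
            H[i][j] = z[i] * z[j].conjugate() + noise
            H[j][i] = H[i][j].conjugate()             # keep H Hermitian
    # spectral relaxation: power iteration for the top eigenvector of H
    v = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n)]
    for _ in range(iters):
        w = [sum(H[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(abs(t) ** 2 for t in w)) or 1.0
        v = [t / norm for t in w]
    # correlation with the truth, invariant to the global phase ambiguity
    return abs(sum(v[i].conjugate() * z[i] for i in range(n))) / math.sqrt(n)
```

The returned correlation lies in [0, 1]; it can be high even in noise regimes where exact recovery is impossible, which is the qualitative behavior the tightness result above speaks to.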
FlowWeb: Joint Image Set Alignment by Weaving Consistent, Pixelwise Correspondences
Cited by 2 (0 self)

Abstract
Given a set of poorly aligned images of the same visual concept without any annotations, we propose an algorithm to jointly bring them into pixelwise correspondence by estimating a FlowWeb representation of the image set. FlowWeb is a fully connected correspondence flow graph with each node representing an image, and each edge representing the correspondence flow field between a pair of images, i.e. a vector field indicating how each pixel in one image can find a corresponding pixel in the other image. Correspondence flow is related to optical flow but allows for correspondences between visually dissimilar regions if there is evidence they correspond transitively on the graph. Our algorithm starts by initializing all edges of this complete graph with an off-the-shelf, pairwise flow method. We then iteratively update the graph to force it to be more self-consistent. Once the algorithm converges, dense, globally consistent correspondences can be read off the graph. Our results suggest that FlowWeb improves alignment accuracy over previous pairwise as well as joint alignment methods.
Solving Random Quadratic Systems of Equations is nearly as easy as . . .
2015

Cited by 2 (1 self)

Abstract
We consider the fundamental problem of solving quadratic systems of equations in n variables, where y_i = ⟨a_i, x⟩^2, i = 1, ..., m, and x ∈ R^n is unknown. We propose a novel method which, starting from an initial guess computed by means of a spectral method, proceeds by minimizing a non-convex functional as in the Wirtinger flow approach [11]. There are several key distinguishing features, most notably a distinct objective functional and novel update rules, which operate in an adaptive fashion and drop terms bearing too much influence on the search direction. These careful selection rules provide a tighter initial guess, better descent directions, and thus enhanced practical performance. On the theoretical side, we prove that for certain unstructured models of quadratic systems, our algorithms return the correct solution in linear time, i.e. in time proportional to reading the data {a_i} and {y_i}, as soon as the ratio m/n between the number of equations and unknowns exceeds a fixed numerical constant. We extend the theory to deal with noisy systems in which we only have y_i ≈ ⟨a_i, x⟩^2 and prove that our algorithms achieve a statistical accuracy which is nearly unimprovable. We complement our theoretical study with numerical examples showing that solving random quadratic systems is both computationally and statistically not much harder than solving linear systems of the same size, hence the title of this paper. For instance, we ...
Controlling Singular Values with Semidefinite Programming
Cited by 2 (1 self)

Abstract
Controlling the singular values of n-dimensional matrices is often required in geometric algorithms in graphics and engineering. This paper introduces a convex framework for problems that involve singular values. Specifically, it enables the optimization of functionals and constraints expressed in terms of the extremal singular values of matrices. Towards this end, we introduce a family of convex sets of matrices whose singular values are bounded. These sets are formulated using Linear Matrix Inequalities (LMI), allowing optimization with standard convex Semidefinite Programming (SDP) solvers. We further show that these sets are optimal, in the sense that there exist no larger convex sets that bound singular values. A number of geometry processing problems are naturally described in terms of singular values. We employ the proposed framework to optimize and improve upon standard approaches. We experiment with this new framework in several applications: volumetric mesh deformations, extremal quasi-conformal mappings in three dimensions, non-rigid shape registration, and averaging of rotations. We show that in all applications the proposed approach leads to algorithms that compare favorably to state-of-the-art algorithms.
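A standard LMI behind such constructions is that σ_max(A) ≤ γ exactly when the block matrix [[γI, A], [A^T, γI]] is positive semidefinite, which is what lets a singular-value bound enter an SDP as a linear constraint. A small pure-Python check of this equivalence (the helper names are ours; a real application would hand the LMI to an SDP solver rather than test it numerically):

```python
import math
import random

def lam_max(M, iters=500, seed=0):
    # largest eigenvalue of a symmetric matrix M via shifted power iteration
    n = len(M)
    c = max(sum(abs(x) for x in row) for row in M) + 1.0  # Gershgorin shift
    rng = random.Random(seed)
    v = [rng.gauss(0, 1) for _ in range(n)]
    for _ in range(iters):
        # iterate with M + c*I so all eigenvalues are positive
        w = [sum(M[i][j] * v[j] for j in range(n)) + c * v[i] for i in range(n)]
        norm = math.sqrt(sum(t * t for t in w)) or 1.0
        v = [t / norm for t in w]
    Mv = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(v[i] * Mv[i] for i in range(n))            # Rayleigh quotient

def singular_value_lmi_holds(A, gamma, tol=1e-6):
    # sigma_max(A) <= gamma  <=>  [[gamma*I, A], [A^T, gamma*I]] is PSD
    m, n = len(A), len(A[0])
    N = m + n
    M = [[0.0] * N for _ in range(N)]
    for i in range(m):
        M[i][i] = gamma
        for j in range(n):
            M[i][m + j] = M[m + j][i] = A[i][j]
    for j in range(n):
        M[m + j][m + j] = gamma
    lam_min = -lam_max([[-x for x in row] for row in M])  # smallest eigenvalue
    return lam_min >= -tol
```

For a diagonal A = diag(3, 1) the block matrix has eigenvalues γ ± 3 and γ ± 1, so the check succeeds precisely when γ ≥ 3 = σ_max(A).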