Results 1–10 of 30
Learning spatiotemporal graphs of human activities
In ICCV, 2011
Cited by 64 (0 self)
Complex human activities occurring in videos can be defined in terms of temporal configurations of primitive actions. Prior work typically handpicks the primitives, their total number, and temporal relations (e.g., allowing only followed-by), and then only estimates their relative significance for activity recognition. We advance prior work by learning what activity parts and their spatiotemporal relations should be captured to represent the activity, and how relevant they are for enabling efficient inference in realistic videos. We represent videos by spatiotemporal graphs, where nodes correspond to multiscale video segments, and edges capture their hierarchical, temporal, and spatial relationships. Access to video segments is provided by our new, multiscale segmenter. Given a set of training spatiotemporal graphs, we learn their archetype graph, and pdf's associated with model nodes and edges. The model adaptively learns from data relevant video segments and their relations, addressing the "what" and "how." Inference and learning are formulated within the same framework, that of a robust least-squares optimization, which is invariant to arbitrary permutations of nodes in spatiotemporal graphs. The model is used for parsing new videos in terms of detecting and localizing relevant activity parts. We outperform the state of the art on the benchmark Olympic and UT human-interaction datasets, under a favorable complexity-vs.-accuracy tradeoff.
Two proposals for robust PCA using semidefinite programming, 2010
Cited by 47 (2 self)
The performance of principal component analysis (PCA) suffers badly in the presence of outliers. This paper proposes two novel approaches for robust PCA based on semidefinite programming. The first method, maximum mean absolute deviation rounding (MDR), seeks directions of large spread in the data while damping the effect of outliers. The second method produces a low-leverage decomposition (LLD) of the data that attempts to form a low-rank model for the data by separating out corrupted observations. This paper also presents efficient computational methods for solving these SDPs. Numerical experiments confirm the value of these new techniques.
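The failure mode motivating both proposals is easy to reproduce: classical PCA has a breakdown point of zero, so a single gross outlier can rotate the leading principal direction arbitrarily. A minimal NumPy sketch with synthetic data and plain SVD-based PCA (illustrative only; it does not implement the paper's MDR or LLD estimators):

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean data: 200 points spread mostly along the x-axis.
clean = rng.normal(size=(200, 2)) * np.array([5.0, 0.5])

# The same data plus one gross outlier far along the y-axis.
corrupted = np.vstack([clean, [[0.0, 500.0]]])

def leading_direction(X):
    """Top principal direction of mean-centered data."""
    Xc = X - X.mean(axis=0)
    # First right singular vector = direction of maximum variance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[0]

v_clean = leading_direction(clean)    # essentially the x-axis
v_bad = leading_direction(corrupted)  # dragged onto the y-axis

print(abs(v_clean[0]), abs(v_bad[1]))
```

One corrupted observation out of 201 is enough to swing the estimated direction by nearly 90 degrees, which is exactly the sensitivity the two SDP-based estimators are designed to damp.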
Todorovic: From Contours to 3D Object Detection and Pose Estimation
In IEEE International Conference on Computer Vision, 2011
Cited by 31 (1 self)
This paper addresses view-invariant object detection and pose estimation from a single image. While recent work focuses on object-centered representations of point-based object features, we revisit the viewer-centered framework, and use image contours as basic features. Given training examples of arbitrary views of an object, we learn a sparse object model in terms of a few view-dependent shape templates. The shape templates are jointly used for detecting object occurrences and estimating their 3D poses in a new image. Instrumental to this is our new mid-level feature, called bag of boundaries (BOB), aimed at lifting from individual edges toward their more informative summaries for identifying object boundaries amidst the background clutter. In inference, BOBs are placed on deformable grids both in the image and the shape templates, and then matched. This is formulated as a convex optimization problem that accommodates invariance to non-rigid, locally affine shape deformations. Evaluation on benchmark datasets demonstrates our competitive results relative to the state of the art.
EXACT AND STABLE RECOVERY OF ROTATIONS FOR ROBUST SYNCHRONIZATION
, 2012
Cited by 22 (9 self)
The synchronization problem over the special orthogonal group SO(d) consists of estimating a set of unknown rotations R_1, R_2, ..., R_n from noisy measurements of a subset of their pairwise ratios R_i^{-1} R_j. The problem has found applications in computer vision, computer graphics, and sensor network localization, among others. Its least-squares solution can be approximated by either spectral relaxation or semidefinite programming followed by a rounding procedure, analogous to the approximation algorithms for Max-Cut. The contribution of this paper is threefold: first, we introduce a robust penalty function involving the sum of unsquared deviations and derive a relaxation that leads to a convex optimization problem; second, we apply the alternating direction method to minimize the penalty function; finally, under a specific model of the measurement noise and the measurement graph, we prove that the rotations are exactly and stably recovered, exhibiting a phase transition behavior in terms of the proportion of noisy measurements. Numerical simulations confirm the phase transition behavior for our method as well as its improved accuracy compared to existing methods. Key words: synchronization of rotations; least unsquared deviation; semidefinite relaxation; alternating direction method.
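For intuition about the spectral relaxation mentioned in the abstract, here is a minimal NumPy sketch of the noiseless, fully observed case: stacking the ratios R_i^{-1} R_j into an (nd x nd) block matrix gives a rank-d Gram matrix whose top-d eigenvectors recover the rotations up to one global rotation. The dimensions and the complete, noise-free measurement graph are illustrative assumptions; the paper's actual contribution, the robust unsquared-deviation formulation, is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 3, 8

def random_rotation(rng, d):
    # QR of a Gaussian matrix, sign-corrected to det +1.
    Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1
    return Q

R = [random_rotation(rng, d) for _ in range(n)]

# Pairwise ratios R_i^{-1} R_j = R_i^T R_j stacked into G.
# G = S^T S with S = [R_1 | ... | R_n], so G is PSD of rank d.
G = np.zeros((n * d, n * d))
for i in range(n):
    for j in range(n):
        G[i*d:(i+1)*d, j*d:(j+1)*d] = R[i].T @ R[j]

# Top-d eigenpairs of G give a rank-d factor S_hat with
# S_hat^T S_hat = G, hence S_hat = O S for some orthogonal O.
w, V = np.linalg.eigh(G)
S_hat = (V[:, -d:] * np.sqrt(w[-d:])).T  # shape (d, n*d)

# The recovered blocks reproduce every pairwise ratio exactly,
# since the global factor O cancels in B_i^T B_j.
err = 0.0
for i in range(n):
    for j in range(n):
        Bi = S_hat[:, i*d:(i+1)*d]
        Bj = S_hat[:, j*d:(j+1)*d]
        err = max(err, np.abs(Bi.T @ Bj - R[i].T @ R[j]).max())
print(err)
```

With noise, the least-squares version of this eigenvector step degrades quickly, which is what motivates the paper's unsquared-deviation penalty and its semidefinite relaxation.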
LARGE DEVIATIONS OF VECTOR-VALUED MARTINGALES IN 2-SMOOTH NORMED SPACES
Submitted to the Annals of Probability, 2008
Cited by 19 (3 self)
In this paper, we derive exponential bounds on probabilities of large deviations for "light tail" martingales taking values in finite-dimensional normed spaces. Our primary emphasis is on the case where the bounds are dimension-independent or nearly so. We demonstrate that this is the case when the norm on the space can be approximated, within an absolute constant factor, by a norm which is differentiable on the unit sphere with a Lipschitz continuous gradient. We also present various examples of spaces possessing the latter property.
Low Rank Matrix-valued Chernoff Bounds and Approximate Matrix Multiplication
Cited by 14 (1 self)
In this paper we develop algorithms for approximating matrix multiplication with respect to the spectral norm. Let A ∈ R^{n×m} and B ∈ R^{n×p} be two matrices and ε > 0. We approximate the product A^⊤B using two sketches Ã ∈ R^{t×m} and B̃ ∈ R^{t×p}, where t ≪ n, such that ‖Ã^⊤B̃ − A^⊤B‖₂ ≤ ε ‖A‖₂ ‖B‖₂ with high probability. We analyze two different sampling procedures for constructing Ã and B̃; one of them is done by i.i.d. non-uniform sampling of rows from A and B and the other by taking random linear combinations of their rows. We prove bounds on t that depend only on the intrinsic dimensionality of A and B, that is, their rank and their stable rank. For achieving bounds that depend on rank when taking random linear combinations we employ standard tools from high-dimensional geometry such as concentration of measure arguments combined with elaborate ε-net constructions. For bounds that depend on the smaller parameter of stable rank this technology itself seems weak. However, we show that in combination with a simple truncation argument it is amenable to provide such bounds. To handle similar bounds for row sampling, we develop a novel matrix-valued Chernoff bound inequality which we call the low rank matrix-valued Chernoff bound. Thanks to this inequality, we are able to give bounds that depend only on the stable rank of the input matrices. We highlight the usefulness of our approximate matrix multiplication bounds by supplying two applications. First we give an approximation algorithm for the ℓ₂-regression problem that returns an approximate solution by randomly projecting the initial problem to dimensions linear in the rank of the constraint matrix. Second we give improved approximation algorithms for the low rank matrix approximation problem with respect to the spectral norm.
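The "random linear combinations of rows" construction can be illustrated directly: apply one t × n Gaussian sketching matrix to both A and B, then compare the sketched product with the exact one in the spectral norm. The sizes, the rank-5 factors, and the sketch size t below are illustrative assumptions, not values or constants from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, p, t = 2000, 30, 40, 400

# Low intrinsic dimension: both matrices have rank 5,
# the regime where a small sketch size t suffices.
A = rng.normal(size=(n, 5)) @ rng.normal(size=(5, m))
B = rng.normal(size=(n, 5)) @ rng.normal(size=(5, p))

# One t x n Gaussian sketch applied to both matrices,
# scaled so that E[S^T S] = I.
S = rng.normal(size=(t, n)) / np.sqrt(t)
A_sk, B_sk = S @ A, S @ B

exact = A.T @ B
approx = A_sk.T @ B_sk

# Spectral-norm error relative to ||A||_2 ||B||_2, matching
# the form of the guarantee in the abstract.
rel = (np.linalg.norm(approx - exact, 2)
       / (np.linalg.norm(A, 2) * np.linalg.norm(B, 2)))
print(rel)
```

Since the rank is 5 and t = 400, the relative error lands around sqrt(rank/t), i.e. on the order of 0.1, even though only a fifth of the rows' worth of sketched data is kept.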
Tail bounds for all eigenvalues of a sum of random matrices, 2011
Cited by 12 (2 self)
This work introduces the minimax Laplace transform method, a modification of the cumulant-based matrix Laplace transform method developed in [Tro11c] that yields both upper and lower bounds on each eigenvalue of a sum of random self-adjoint matrices. This machinery is used to derive eigenvalue analogs of the classical Chernoff, Bennett, and Bernstein bounds. Two examples demonstrate the efficacy of the minimax Laplace transform. The first concerns the effects of column sparsification on the spectrum of a matrix with orthonormal rows. Here, the behavior of the singular values can be described in terms of coherence-like quantities. The second example addresses the question of relative accuracy in the estimation of eigenvalues of the covariance matrix of a random process. Standard results on the convergence of sample covariance matrices provide bounds on the number of samples needed to obtain relative accuracy in the spectral norm, but these results only guarantee relative accuracy in the estimate of the maximum eigenvalue. The minimax Laplace transform argument establishes that if the lowest eigenvalues decay sufficiently fast, Ω(ε⁻² κ_ℓ² ℓ log p) samples, where κ_ℓ = λ₁(C)/λ_ℓ(C), are sufficient to ensure that the dominant ℓ eigenvalues of the covariance matrix of an N(0, C) random vector are estimated to within a factor of 1 ± ε with high probability.
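The covariance estimation setting of the second example is easy to simulate: draw samples from N(0, C) with a fast-decaying spectrum and check that the dominant eigenvalues of the sample covariance are accurate in a relative sense. The dimension, spectrum, and sample count below are illustrative assumptions, not the paper's constants, and no claim is made that they match the Ω(·) rate.

```python
import numpy as np

rng = np.random.default_rng(3)
p = 50

# Covariance with a fast-decaying spectrum: three dominant
# eigenvalues, the remaining bulk is tiny.
eigs = np.array([10.0, 5.0, 2.0] + [0.01] * (p - 3))
C = np.diag(eigs)

n_samples = 2000
X = rng.multivariate_normal(np.zeros(p), C, size=n_samples)
C_hat = X.T @ X / n_samples  # sample covariance (mean known to be 0)

est = np.sort(np.linalg.eigvalsh(C_hat))[::-1]

# Relative accuracy of the dominant eigenvalues, the quantity
# the minimax Laplace transform bounds control.
rel_err = np.abs(est[:3] - eigs[:3]) / eigs[:3]
print(rel_err)
```

With 2000 samples the three dominant eigenvalues come out within a few percent, while the spectral-norm criterion alone would only certify the largest one.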
Efficient Rounding for the Noncommutative Grothendieck Inequality (Extended Abstract), 2013
Cited by 8 (1 self)
The classical Grothendieck inequality has applications to the design of approximation algorithms for NP-hard optimization problems. We show that an algorithmic interpretation may also be given for a noncommutative generalization of the Grothendieck inequality due to Pisier and Haagerup. Our main result, an efficient rounding procedure for this inequality, leads to a constant-factor polynomial time approximation algorithm for an optimization problem which generalizes the Cut Norm problem of Frieze and Kannan, and is shown here to have additional applications to robust principal component analysis and the orthogonal Procrustes problem.
Global registration of multiple point clouds using semidefinite programming. arXiv:1306.5226 [cs.CV], 2013
Cited by 8 (4 self)
Consider N points in R^d and M local coordinate systems that are related through unknown rigid transforms. For each point we are given (possibly noisy) measurements of its local coordinates in some of the coordinate systems. Alternatively, for each coordinate system, we observe the coordinates of a subset of the points. The problem of estimating the global coordinates of the N points (up to a rigid transform) from such measurements comes up in distributed approaches to molecular conformation and sensor network localization, and also in computer vision and graphics. The least-squares formulation, though non-convex, has a well-known closed-form solution for the case M = 2 (based on the singular value decomposition). However, no closed-form solution is known for M ≥ 3. In this paper, we propose a semidefinite relaxation of the least-squares formulation, and prove conditions for exact and stable recovery for both this relaxation and for a previously proposed spectral relaxation. In particular, using results from rigidity theory and the theory of semidefinite programming, we prove that the semidefinite relaxation can guarantee recovery under more adversarial measurements compared to the spectral counterpart. We perform numerical experiments on simulated data to confirm the theoretical findings. We empirically demonstrate that (a) unlike the spectral relaxation, the relaxation gap is mostly zero for the semidefinite program (i.e., we are able to solve the original non-convex problem) up to a certain noise threshold, and (b) the semidefinite program performs significantly better than spectral and manifold-optimization methods, particularly at large noise levels.
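The closed-form M = 2 solution mentioned in the abstract is the classical SVD-based (Kabsch/Procrustes) alignment. A minimal noiseless NumPy sketch, with illustrative dimensions and a full set of correspondences assumed:

```python
import numpy as np

rng = np.random.default_rng(4)
d, N = 3, 10

# Ground-truth rigid transform taking frame 1 into frame 2:
# a random proper rotation Q and a translation t.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1
t = rng.normal(size=d)

P1 = rng.normal(size=(N, d))  # point coordinates in frame 1
P2 = P1 @ Q.T + t             # the same points in frame 2

# Closed-form least-squares alignment (the M = 2 case):
# center both clouds, take the SVD of the cross-covariance,
# and fix the sign so the estimate is a proper rotation.
c1, c2 = P1.mean(axis=0), P2.mean(axis=0)
H = (P1 - c1).T @ (P2 - c2)
U, _, Vt = np.linalg.svd(H)
D = np.eye(d)
D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))
Q_hat = Vt.T @ D @ U.T
t_hat = c2 - c1 @ Q_hat.T

print(np.abs(Q_hat - Q).max(), np.abs(t_hat - t).max())
```

For M ≥ 3 frames with partial overlaps, no such closed form exists, which is the gap the paper's semidefinite relaxation addresses.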