Results 1–10 of 29
Globally optimal solution to multi-object tracking with merged measurements
In ICCV, 2011
Abstract

Cited by 37 (1 self)
Multiple object tracking has been formulated recently as a global optimization problem, and solved efficiently with optimal methods such as the Hungarian Algorithm. A severe limitation is the inability to model multiple objects that are merged into a single measurement, and track them as a group, while retaining optimality. This work presents a new graph structure that encodes these multiple-match events as standard one-to-one matches, allowing computation of the solution in polynomial time. Since identities are lost when objects merge, an efficient method to identify groups is also presented, as a flow circulation problem. The problem of tracking individual objects across groups is then posed as a standard optimal assignment. Experiments show increased performance on the PETS 2006 and 2009 datasets compared to state-of-the-art algorithms.
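The optimal-assignment formulation this abstract refers to can be sketched in a few lines. This is a hedged, stdlib-only illustration: the brute-force search over permutations stands in for the polynomial-time Hungarian algorithm, and the track/detection coordinates are hypothetical data, not from the paper.

```python
import itertools
import math

def optimal_assignment(cost):
    """Minimum-cost one-to-one matching by exhaustive search over
    permutations (fine for tiny n; the papers cited solve the same
    problem in polynomial time with the Hungarian algorithm)."""
    n = len(cost)
    best = min(itertools.permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(enumerate(best))

# Hypothetical 2-track / 2-detection example: cost = Euclidean distance.
tracks = [(0.0, 0.0), (5.0, 5.0)]
detections = [(5.1, 4.9), (0.2, -0.1)]
cost = [[math.dist(t, d) for d in detections] for t in tracks]
matches = optimal_assignment(cost)  # track 0 -> detection 1, track 1 -> detection 0
```

The merged-measurement case the paper addresses is exactly what this one-to-one formulation cannot express without the paper's modified graph structure.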
Riffled Independence for Ranked Data
Abstract

Cited by 11 (2 self)
Representing distributions over permutations can be a daunting task due to the fact that the number of permutations of n objects scales factorially in n. One recent way to reduce storage complexity has been to exploit probabilistic independence, but as we argue, full independence assumptions impose strong sparsity constraints on distributions and are unsuitable for modeling rankings. We identify a novel class of independence structures, called riffled independence, which encompasses a more expressive family of distributions while retaining many of the properties necessary for performing efficient inference and reducing sample complexity. In riffled independence, one draws two permutations independently, then performs the riffle shuffle, common in card games, to combine the two permutations to form a single permutation. In ranking, riffled independence corresponds to ranking disjoint sets of objects independently, then interleaving those rankings. We provide a formal introduction and present algorithms for using riffled independence within the Fourier-theoretic frameworks that have been explored by a number of recent papers.
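The generative step the abstract describes — rank two disjoint item sets independently, then interleave — can be sketched directly. A minimal illustration assuming a uniform interleaving for simplicity (the riffled-independence model allows a general distribution over interleavings); the item names are hypothetical.

```python
import random

def riffle(ranking_a, ranking_b, rng):
    """Riffle-shuffle two rankings of disjoint item sets: pick an
    interleaving at random, preserving the relative order within each
    ranking (the card-game shuffle the abstract refers to)."""
    slots = ['a'] * len(ranking_a) + ['b'] * len(ranking_b)
    rng.shuffle(slots)  # uniform interleaving, for illustration only
    ia, ib = iter(ranking_a), iter(ranking_b)
    return [next(ia) if s == 'a' else next(ib) for s in slots]

rng = random.Random(0)
merged = riffle(['x', 'y'], ['u', 'v', 'w'], rng)
```

Whatever interleaving is drawn, the relative order within each input ranking survives — the defining property of riffled independence.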
Estimating Probabilities in Recommendation Systems
Abstract

Cited by 7 (4 self)
Modeling ranked data is an essential component in a number of important applications including recommendation systems and web search. In many cases, judges omit preference among unobserved items and between unobserved and observed items. This case of analyzing incomplete rankings is very important from a practical perspective and yet has not been fully studied due to considerable computational difficulties. We show how to avoid such computational difficulties and efficiently construct a nonparametric model for rankings with missing items. We demonstrate our approach and show how it applies in the context of collaborative filtering.
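The computational difficulty the abstract mentions comes from the fact that an incomplete ranking is consistent with many full rankings. A toy sketch of that set, by exhaustive enumeration (the paper's contribution is precisely avoiding this factorial blow-up); item labels are hypothetical.

```python
import itertools

def consistent_full_rankings(observed, items):
    """Enumerate the full rankings of `items` whose restriction to the
    observed items matches the observed order -- the completions an
    incomplete ranking leaves open. Exhaustive, so only viable for tiny n."""
    obs_set = set(observed)
    out = []
    for perm in itertools.permutations(items):
        restricted = [x for x in perm if x in obs_set]
        if restricted == list(observed):
            out.append(perm)
    return out

# Observing "1 ranked above 2" among items {1, 2, 3} leaves 3 of the
# 3! = 6 full rankings consistent with the data.
completions = consistent_full_rankings([1, 2], [1, 2, 3])
```

Any probabilistic query about the missing items is an expectation over this consistent set, which is why naive inference is intractable for realistic n.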
Efficient Probabilistic Inference with Partial Ranking Queries
Abstract

Cited by 4 (2 self)
Distributions over rankings are used to model data in various settings such as preference analysis and political elections. The factorial size of the space of rankings, however, typically forces one to make structural assumptions, such as smoothness, sparsity, or probabilistic independence, about these underlying distributions. We approach the modeling problem from the computational principle that one should make structural assumptions which allow for efficient calculation of typical probabilistic queries. For ranking models, typical queries predominantly take the form of partial ranking queries (e.g., given a user's top-k favorite movies, what are his preferences over remaining movies?). In this paper, we argue that riffled independence factorizations proposed in recent literature [7, 8] are a natural structural assumption for ranking distributions, allowing for particularly efficient processing of partial ranking queries.
Probabilistic models over ordered partitions with applications in document ranking and collaborative filtering
In Proc. of SIAM Conference on Data Mining (SDM), 2011
Incorporating Domain Knowledge in Matching Problems via Harmonic Analysis
Abstract

Cited by 3 (0 self)
Matching one set of objects to another is a ubiquitous task in machine learning and computer vision that often reduces to some form of the quadratic assignment problem (QAP). The QAP is known to be notoriously hard, both in theory and in practice. Here, we investigate if this difficulty can be mitigated when some additional piece of information is available: (a) that all QAP instances of interest come from the same application, and (b) the correct solution for a set of such QAP instances is given. We propose a new approach to accelerate the solution of QAPs based on learning parameters for a modified objective function from prior QAP instances. A key feature of our approach is that it takes advantage of the algebraic structure of permutations, in conjunction with special methods for optimizing functions over the symmetric group S_n in Fourier space. Experiments show that in practical domains the new method can outperform existing approaches.
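For reference, the standard QAP objective the abstract reduces matching to can be written down concretely. A hedged tiny-n sketch: the brute-force minimizer below is only the naive baseline, and the flow/distance matrices are invented illustration data, not from the paper (which instead learns a modified objective and optimizes over the symmetric group in Fourier space).

```python
import itertools

def qap_bruteforce(A, B):
    """Exhaustively minimize the QAP objective
    sum_{i,j} A[i][j] * B[p(i)][p(j)] over permutations p.
    Factorial cost -- tiny-n illustration only."""
    n = len(A)
    def objective(p):
        return sum(A[i][j] * B[p[i]][p[j]]
                   for i in range(n) for j in range(n))
    best = min(itertools.permutations(range(n)), key=objective)
    return list(best), objective(best)

# Hypothetical "flow" matrix A and "distance" matrix B.
A = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
B = [[0, 9, 4], [9, 0, 4], [4, 4, 0]]
perm, value = qap_bruteforce(A, B)  # places the interacting pair (0, 1) at cheap distance
```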
Agnostically learning under permutation invariant distributions, 2013
Abstract

Cited by 3 (0 self)
We generalize algorithms from computational learning theory that are successful under the uniform distribution on the Boolean hypercube {0,1}^n to algorithms successful on permutation invariant distributions, distributions where the probability mass remains constant upon permutations in the instances. While the tools in our generalization mimic those used for the Boolean hypercube, the fact that permutation invariant distributions are not product distributions presents a significant obstacle. Under the uniform distribution, halfspaces can be agnostically learned in polynomial time for constant ɛ. The main tools used are a theorem of Peres [Per04] bounding the noise sensitivity of a halfspace, a result of [KOS04] that this theorem implies Fourier concentration, and a modification of the Low-Degree algorithm of Linial, Mansour, and Nisan [LMN93] made by Kalai et al. [KKMS08]. These results are extended to arbitrary product distributions in [BOW10]. We prove analogous results for permutation invariant distributions; more generally, we work in the domain of the symmetric group. We define noise sensitivity in this setting, and show that noise sensitivity has a nice combinatorial interpretation in terms of Young tableaux. The main technical innovations involve techniques from the representation theory of the symmetric group, especially the combinatorics of Young tableaux. We show that low noise sensitivity implies concentration on "simple" components of the Fourier spectrum, and that this fact allows us to agnostically learn halfspaces under permutation invariant distributions to constant accuracy in roughly the same time as in the uniform distribution over the Boolean hypercube case.
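Noise sensitivity on the symmetric group can be made concrete with a toy estimator. This is a simplified stand-in, not the paper's definition: it perturbs a uniform permutation with a single random transposition and asks how often a function's value flips. The two example functions (constant and parity-of-inversions) are illustrative choices.

```python
import random

def transposition_noise_sensitivity(f, n, trials, rng):
    """Monte-Carlo estimate of Pr[f(p) != f(p')], where p is a uniform
    permutation of n elements and p' differs from p by one random
    transposition -- a simplified analogue of a noise operator on S_n."""
    flips = 0
    for _ in range(trials):
        p = list(range(n))
        rng.shuffle(p)
        i, j = rng.sample(range(n), 2)  # two distinct positions
        q = p[:]
        q[i], q[j] = q[j], q[i]
        flips += f(tuple(p)) != f(tuple(q))
    return flips / trials

def parity(p):
    """Sign of a permutation via inversion count (0 = even, 1 = odd)."""
    n = len(p)
    return sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n)) % 2

# A constant function never flips; parity flips under *every* transposition.
constant_ns = transposition_noise_sensitivity(lambda p: 0, 5, 200, random.Random(1))
parity_ns = transposition_noise_sensitivity(parity, 5, 200, random.Random(2))
```

These two extremes bracket the behavior the abstract cares about: low noise sensitivity is what permits agnostic learning, while maximally sensitive functions (like the sign character) do not concentrate on "simple" Fourier components.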
A generative statistical model for tracking multiple smooth trajectories
In CVPR, 2011
Abstract

Cited by 3 (0 self)
We present a general model for tracking smooth trajectories of multiple targets in complex data sets, where tracks potentially cross each other many times. As the number of overlapping trajectories grows, exploiting smoothness becomes increasingly important to disambiguate the association of successive points. However, in many important problems an effective parametric model for the trajectories does not exist. Hence we propose modeling trajectories as independent realizations of Gaussian processes with kernel functions which allow for arbitrary smooth motion. Our generative statistical model accounts for the data as coming from an unknown number of such processes, together with expectations for noise points and the probability that points are missing. For inference we compare two methods: a modified version of the Markov chain Monte Carlo data association (MCMCDA) method, and a Gibbs sampling method which is much simpler and faster, and gives better results by being able to search the solution space more efficiently. In both cases, we compare our results against the smoothing provided by linear dynamical systems (LDS). We test our approach on videos of birds and fish, and on 82 image sequences of pollen tubes growing in a Petri dish, each with up to 60 tubes with multiple crossings. We achieve 93% accuracy on image sequences with up to ten trajectories (35 sequences) and 88% accuracy when there are more than ten (42 sequences). This performance surpasses that of using an LDS motion model, and far exceeds a simple heuristic tracker.
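The smoothness assumption the abstract leans on is encoded entirely in the kernel of the Gaussian process. A minimal sketch of one such kernel, the squared-exponential; the length-scale value is an illustrative choice, not a hyperparameter from the paper.

```python
import math

def rbf_kernel(t1, t2, length_scale=1.0):
    """Squared-exponential (RBF) kernel on observation times. Under a GP
    prior with this kernel, sampled trajectories are smooth but otherwise
    unconstrained -- the non-parametric alternative to an explicit motion
    model that the abstract describes."""
    return math.exp(-((t1 - t2) ** 2) / (2.0 * length_scale ** 2))

def gram_matrix(times, length_scale=1.0):
    """Kernel (Gram) matrix over a list of timestamps; its Cholesky
    factor would be used to sample or condition trajectories."""
    return [[rbf_kernel(a, b, length_scale) for b in times] for a in times]

K = gram_matrix([0.0, 1.0, 2.0])
```

Nearby timepoints get higher prior correlation than distant ones, which is exactly what lets the model prefer smooth continuations when trajectories cross.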
Ranking with kernels in Fourier space
Abstract

Cited by 3 (0 self)
In typical ranking problems the total number n of items to be ranked is relatively large, but each data instance involves only k << n items. This paper examines the structure of such partial rankings in Fourier space. Specifically, we develop a kernel-based framework for solving ranking problems, define some canonical kernels on permutations, and show that by transforming to Fourier space, the complexity of computing the kernel between two partial rankings can be reduced from O(((n−k)!)^2) to O((2k)^(2k+3)).
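For intuition, here is one canonical kernel on full rankings, computed directly. This is a hedged baseline sketch, not the paper's method: the Kendall-style pairwise-agreement kernel below is O(n^2) on full rankings, and it is the extension of such kernels to partial rankings that naively costs O(((n−k)!)^2) and that the paper accelerates in Fourier space.

```python
import itertools

def kendall_kernel(p, q):
    """Kendall-style kernel between two full rankings: the fraction of
    item pairs ordered the same way in both, minus the fraction ordered
    oppositely. Ranges from +1 (identical order) to -1 (reversed)."""
    pos_p = {x: i for i, x in enumerate(p)}
    pos_q = {x: i for i, x in enumerate(q)}
    score = 0
    for a, b in itertools.combinations(p, 2):
        agree = (pos_p[a] < pos_p[b]) == (pos_q[a] < pos_q[b])
        score += 1 if agree else -1
    n = len(p)
    return score / (n * (n - 1) / 2)
```

Because it depends only on pairwise order comparisons, this kernel is right-invariant on the symmetric group, which is the structural property that makes a Fourier-space treatment possible.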