Results 1–10 of 33
Distance transforms of sampled functions
Cornell Computing and Information Science, 2004
Abstract

Cited by 173 (11 self)
This paper provides linear-time algorithms for solving a class of minimization problems involving a cost function with both local and spatial terms. These problems can be viewed as a generalization of classical distance transforms of binary images, where the binary image is replaced by an arbitrary sampled function. Alternatively, they can be viewed in terms of the minimum convolution of two functions, which is an important operation in grayscale morphology. A useful consequence of our techniques is a simple, fast method for computing the Euclidean distance transform of a binary image. The methods are also applicable to Viterbi decoding, belief propagation and optimal control.
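The one-dimensional squared-distance case the abstract describes can be sketched with the lower-envelope-of-parabolas idea: maintain the lower envelope of the parabolas (p - q)² + f(q) and read the minimum off the envelope. A minimal sketch, assuming a large finite constant in place of infinity so the envelope intersections stay well-defined (variable names are ours):

```python
def dt_1d(f):
    """Linear-time distance transform of a sampled function:
    d[p] = min_q (p - q)^2 + f[q], via the lower envelope of parabolas.
    Use a large finite value (e.g. 1e9) instead of true infinity in f."""
    n = len(f)
    INF = float("inf")
    d = [0] * n
    v = [0] * n            # locations of parabolas on the lower envelope
    z = [0.0] * (n + 1)    # boundaries between consecutive parabolas
    k = 0
    z[0], z[1] = -INF, INF
    for q in range(1, n):
        while True:
            # intersection of parabola from q with the rightmost envelope parabola
            s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
            if s <= z[k]:
                k -= 1     # parabola v[k] is hidden; pop it
            else:
                break
        k += 1
        v[k] = q
        z[k], z[k + 1] = s, INF
    k = 0
    for p in range(n):     # read the envelope left to right
        while z[k + 1] < p:
            k += 1
        d[p] = (p - v[k]) ** 2 + f[v[k]]
    return d
```

With f set to 0 on "on" pixels and a large constant elsewhere, this computes the squared Euclidean distance transform of a binary row; applying it along rows and then columns gives the 2D transform.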
Smoothing algorithms for state-space models
In Submission, IEEE Transactions on Signal Processing, 2004
Abstract

Cited by 60 (9 self)
A prevalent problem in statistical signal processing, applied statistics, and time series analysis is the calculation of the smoothed posterior distribution, which describes the uncertainty associated with a state, or a sequence of states, conditional on data from the past, the present, and the future. The aim of this paper is to provide a rigorous foundation for the calculation, or approximation, of such smoothed distributions, to facilitate a robust and efficient implementation. Through a cohesive and generic exposition of the scientific literature we offer several novel extensions such that one can perform smoothing in the most general case. Experimental results are provided for a Jump Markov Linear System, a comparison of particle smoothing methods, and parameter estimation using a particle implementation of the EM algorithm.
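For the special case of a discrete-state hidden Markov model, the smoothed posteriors the abstract refers to reduce to the classical forward–backward recursion. A minimal NumPy sketch (function name and per-step normalization scheme are ours, not the paper's):

```python
import numpy as np

def forward_backward(pi, A, B, obs):
    """Smoothed marginals p(x_t | y_{1:T}) for a discrete HMM.
    pi[i]  : initial distribution p(x_1 = i)
    A[i,j] : transition probability p(x_{t+1} = j | x_t = i)
    B[i,k] : emission probability p(y_t = k | x_t = i)
    obs    : observed symbol indices y_1, ..., y_T."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))
    # forward pass (filtering), normalized each step for numerical stability
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    # backward pass
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()
    # combine and renormalize: gamma[t, i] = p(x_t = i | y_{1:T})
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)
```

Particle smoothers generalize this same forward–backward structure to state spaces where the sums become intractable integrals.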
Fast Particle Smoothing: If I Had a Million Particles
In International Conference on Machine Learning (ICML), 2006
Abstract

Cited by 51 (7 self)
We propose efficient particle smoothing methods for generalized state-space models.
Temporal Dynamics of On-Line Information Streams
IN DATA STREAM MANAGEMENT: PROCESSING HIGH-SPEED DATA, 2006
Abstract

Cited by 47 (1 self)
A number of recent computing applications involve information arriving continuously over time in the form of a data stream, and this has led to new ways of thinking about traditional problems in a variety of areas. In some cases, the rate and overall volume of data in the stream may be so great that it cannot all be stored for processing, and this leads to new requirements for efficiency and scalability. In other cases, the quantities of information may still be manageable, but the data stream perspective takes what has generally been a static view of a problem and adds a strong temporal dimension to it. Our focus here is on some of the challenges that this latter issue raises in the settings of text mining, online information, and information retrieval. Many information sources have a stream-like structure, in which the way content arrives over time carries an essential part of its meaning. News coverage is a basic example; understanding the pattern of a developing news
Efficient MRF deformation model for non-rigid image matching
In IEEE International Conference on Pattern Recognition, 2007
Abstract

Cited by 37 (0 self)
We propose a novel MRF-based model for deformable image matching. Given two images, the task is to estimate a mapping from one image to the other maximizing the quality of the match. We consider mappings defined by a discrete deformation field constrained to preserve 2D continuity. We pose the task as finding MAP configurations of a pairwise MRF. We propose a more compact MRF representation of the problem which leads to a weaker, though computationally more tractable, linear programming relaxation – the approximation technique we choose to apply. The number of dual LP variables grows linearly with the search window side, rather than quadratically as in previous approaches. To solve the relaxed problem (suboptimally), we apply the TRW-S (Sequential Tree-Reweighted Message passing) algorithm [13, 5]. Using our representation and the chosen optimization scheme, we are able to match much wider deformations than were considered previously in a global optimization framework. We further elaborate on continuity and data terms to achieve a more appropriate description of smooth deformations. The performance of our technique is demonstrated on both synthetic and real-world experiments.
Simultaneous object detection, tracking, and event recognition
 CoRR
Abstract

Cited by 9 (8 self)
The common internal structure and algorithmic organization of object detection, detection-based tracking, and event recognition facilitate a general approach to integrating these three components. This supports multi-directional information flow between the components, allowing object detection to influence tracking and event recognition, and event recognition to influence tracking and object detection. The performance of the combination can exceed the performance of the components in isolation. This can be done with linear asymptotic complexity.
CarpeDiem: an Algorithm for the Fast Evaluation of SSL Classifiers
Abstract

Cited by 6 (3 self)
In this paper we present a novel algorithm, CarpeDiem. It significantly improves on the time complexity of the Viterbi algorithm while preserving the optimality of the result. This has consequences for Machine Learning systems that use the Viterbi algorithm during learning or classification. We show how the algorithm applies to the Supervised Sequential Learning task and, in particular, to the HMPerceptron algorithm. We illustrate CarpeDiem in full detail and provide experimental results that support the proposed approach.
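For reference, the baseline that CarpeDiem improves on is standard Viterbi decoding, which costs O(T·N²) for T observations and N states. A minimal log-domain sketch of that baseline (this is ordinary Viterbi, not CarpeDiem itself; names are ours):

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most likely state sequence for a discrete HMM, in log space.
    log_pi[i]  : log p(x_1 = i)
    log_A[i,j] : log p(x_{t+1} = j | x_t = i)
    log_B[i,k] : log p(y_t = k | x_t = i)
    Cost is O(T * N^2): every transition is scored at every step."""
    T, N = len(obs), len(log_pi)
    delta = np.zeros((T, N))
    back = np.zeros((T, N), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # scores[i, j]: come from i, go to j
        back[t] = scores.argmax(axis=0)          # best predecessor of each state j
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    # backtrace from the best final state
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

CarpeDiem's contribution is to avoid evaluating all N² transitions per step when the weights allow early pruning, while still returning the same optimal path.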
VOGUE: A Variable Order Hidden Markov Model with Duration based on Frequent Sequence Mining
Abstract

Cited by 5 (0 self)
We present VOGUE, a novel, variable-order hidden Markov model with state durations that combines two separate techniques for modeling complex patterns in sequential data: pattern mining and data modeling. VOGUE relies on a variable-gap sequence mining method to extract frequent patterns with different lengths and gaps between elements. It then uses these mined sequences to build a variable-order hidden Markov model that explicitly models the gaps. The gaps implicitly model the order of the HMM, and they explicitly model the duration of each state. We apply VOGUE to a variety of real sequence data taken from domains such as protein sequence classification, web usage logs, intrusion detection, and spelling correction. We show that VOGUE has superior classification accuracy compared to regular HMMs, higher-order HMMs, and even special-purpose HMMs like HMMER, which is a state-of-the-art method for protein classification. The VOGUE implementation and the datasets used in this paper are available as open-source at: www.cs.rpi.edu/~zaki/software/VOGUE.
On Table Arrangements, Scrabble Freaks, and Jumbled Pattern Matching
Abstract

Cited by 5 (0 self)
Abstract. Given a string s, the Parikh vector of s, denoted p(s), counts the multiplicity of each character in s. Searching for a match of Parikh vector q (a “jumbled string”) in the text s requires finding a substring t of s with p(t) = q. The corresponding decision problem is to verify whether at least one such match exists. So, for example, for the alphabet Σ = {a, b, c}, the string s = abaccbabaaa has Parikh vector p(s) = (6, 3, 2), and the Parikh vector q = (2, 1, 1) appears once in s, at position (1, 4). Like its more precise counterpart, the renowned Exact String Matching, Jumbled Pattern Matching has ubiquitous applications, e.g., string matching with a dyslexic word processor, table rearrangements, anagram checking, Scrabble playing and, allegedly, also analysis of mass spectrometry data. We consider two simple algorithms for Jumbled Pattern Matching and use very complicated data structures and analytic tools to show that they are not worse than the most obvious algorithm. We also show that we can achieve nontrivial efficient average-case behavior, but that’s less fun to describe in this abstract so we defer the details to the main part of the article, to be read at the reader’s risk... well, at the reader’s discretion.
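The "most obvious algorithm" the abstract alludes to is presumably a fixed-length sliding window: any match of q must be a substring of length m = Σ q(c), so one window pass over s suffices. A sketch (function name and the 1-based (start, end) output convention are ours, chosen to match the example positions above):

```python
from collections import Counter

def jumbled_matches(s, q):
    """All 1-based (start, end) positions of substrings of s whose Parikh
    vector equals q (a mapping char -> count), via an O(n) sliding window
    over windows of fixed length m = sum(q.values())."""
    m = sum(q.values())
    if m == 0 or m > len(s):
        return []
    win = Counter(s[:m])
    hits = []
    if win == q:
        hits.append((1, m))
    for i in range(m, len(s)):
        win[s[i]] += 1           # character entering the window
        win[s[i - m]] -= 1       # character leaving the window
        if win[s[i - m]] == 0:
            del win[s[i - m]]    # keep zero counts out so equality works
        if win == q:
            hits.append((i - m + 2, i + 1))
    return hits
```

On the abstract's example, s = abaccbabaaa with q = (2, 1, 1) over {a, b, c}, this reports the single occurrence at position (1, 4).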
Fast Methods for Inference in Graphical Models and Beat Tracking the Graphical Model
2004
Abstract

Cited by 4 (0 self)
This thesis presents two related bodies of work. The first is about methods for speeding up inference in graphical models, and the second is an application of the graphical model framework to the beat tracking problem in sampled music. Graphical models have become ubiquitous modelling tools; they are commonly used in computer vision, bioinformatics, coding theory, and speech recognition, and are central to many machine learning techniques. Graphical models allow statistical independence relationships between random variables to be expressed in a flexible, powerful, and intuitive manner. Given observations, there are standard algorithms to compute probability distributions over unknown states (marginals) or to find the most likely configuration (maximum a posteriori, MAP, state). However, if each node in the graphical model has N states, then these computations cost O(N²). This is a particular concern when dealing with continuous (or large discrete) state spaces. In such state spaces, Monte Carlo methods are of great use; these methods typically
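The O(N²) cost the abstract mentions comes from each message in belief propagation summing over every pair of states on an edge. An illustrative naive sum-product message (names are ours; this is the generic operation, not the thesis's accelerated version):

```python
import numpy as np

def pass_message(psi, m_in):
    """Sum-product message along one edge of a pairwise graphical model:
    m_out[j] = sum_i psi[i, j] * m_in[i], where psi is the N x N pairwise
    potential and m_in the incoming message. Written with explicit loops
    to make the O(N^2) cost visible."""
    N = len(m_in)
    m_out = np.zeros(N)
    for j in range(N):          # every (i, j) state pair is touched once
        for i in range(N):
            m_out[j] += psi[i, j] * m_in[i]
    return m_out / m_out.sum()  # normalize for numerical stability
```

Equivalently this is the matrix-vector product psiᵀ · m_in; the speed-up techniques the thesis studies exploit structure in psi (e.g. distance-based potentials) to beat the quadratic cost.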