Results 1–10 of 677
Feature detection with automatic scale selection
 International Journal of Computer Vision, 1998
Cited by 723 (34 self)
The fact that objects in the world appear in different ways depending on the scale of observation has important implications if one aims at describing them. It shows that the notion of scale is of utmost importance when processing unknown measurement data by automatic methods. In their seminal works, Witkin (1983) and Koenderink (1984) proposed to approach this problem by representing image structures at different scales in a so-called scale-space representation. Traditional scale-space theory building on this work, however, does not address the problem of how to select locally appropriate scales for further analysis. This article proposes a systematic methodology for dealing with this problem. A framework is proposed for generating hypotheses about interesting scale levels in image data, based on a general principle stating that local extrema over scales of different combinations of γ-normalized derivatives are likely candidates to correspond to interesting structures. Specifically, it is shown how this idea can be used as a major mechanism in algorithms for automatic scale selection, which ...
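The scale-selection principle described in this abstract can be illustrated in one dimension with a minimal sketch (the Gaussian test blob, signal length, and scale grid below are illustrative assumptions, not from the paper): the scale at which the γ-normalized second derivative attains its maximum magnitude tracks the size of the underlying structure.

```python
import numpy as np

def gaussian_kernel(sigma):
    # truncated, unit-sum Gaussian smoothing kernel
    radius = int(4 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def normalized_second_derivative(signal, sigma):
    # gamma-normalized second derivative (gamma = 1): sigma^2 * L_xx
    smoothed = np.convolve(signal, gaussian_kernel(sigma), mode="same")
    return sigma**2 * np.gradient(np.gradient(smoothed))

# assumed toy input: a 1-D Gaussian blob of standard deviation s
x = np.arange(256)
s = 6.0
blob = np.exp(-(x - 128.0)**2 / (2.0 * s**2))

sigmas = np.arange(2.0, 14.0, 0.25)
responses = [abs(normalized_second_derivative(blob, sg)[128]) for sg in sigmas]
selected = sigmas[int(np.argmax(responses))]
# for this amplitude-1 blob, the extremum over scales lands near sigma = sqrt(2) * s
```

The selected scale grows with the blob width, which is the hypothesis-generation mechanism the abstract describes: local extrema over scales of normalized derivative responses mark interesting structures and their sizes.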
The Lifting Scheme: A Construction Of Second Generation Wavelets
 1997
Cited by 539 (15 self)
We present the lifting scheme, a simple construction of second generation wavelets, wavelets that are not necessarily translates and dilates of one fixed function. Such wavelets can be adapted to intervals, domains, surfaces, weights, and irregular samples. We show how the lifting scheme leads to a faster, in-place calculation of the wavelet transform. Several examples are included.
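As a concrete illustration of lifting, here is a minimal sketch of the Haar transform built from a predict step and an update step (this particular factorization is the standard textbook example of the scheme, not code from the paper):

```python
def haar_lifting_forward(x):
    # split into even (coarse, s) and odd (detail, d) samples
    s, d = list(x[0::2]), list(x[1::2])
    # predict: each odd sample is predicted by its even neighbour
    d = [di - si for si, di in zip(s, d)]
    # update: restore the running average in the coarse channel
    s = [si + di / 2.0 for si, di in zip(s, d)]
    return s, d

def haar_lifting_inverse(s, d):
    # undo the lifting steps in reverse order with signs flipped
    s = [si - di / 2.0 for si, di in zip(s, d)]
    d = [di + si for si, di in zip(s, d)]
    x = [0.0] * (2 * len(s))
    x[0::2], x[1::2] = s, d
    return x
```

Each lifting step overwrites one of the two channels using only the other, which is what makes the faster, in-place computation mentioned in the abstract possible, and inversion is mechanical: run the steps backwards with opposite signs.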
Wavelet-based statistical signal processing using hidden Markov models
 IEEE Transactions on Signal Processing, 1998
Cited by 415 (50 self)
Wavelet-based statistical signal processing techniques such as denoising and detection typically model the wavelet coefficients as independent or jointly Gaussian. These models are unrealistic for many real-world signals. In this paper, we develop a new framework for statistical signal processing based on wavelet-domain hidden Markov models (HMMs) that concisely models the statistical dependencies and non-Gaussian statistics encountered in real-world signals. Wavelet-domain HMMs are designed with the intrinsic properties of the wavelet transform in mind and provide powerful, yet tractable, probabilistic signal models. Efficient expectation-maximization algorithms are developed for fitting the HMMs to observational signal data. The new framework is suitable for a wide range of applications, including signal estimation, detection, classification, prediction, and even synthesis. To demonstrate the utility of wavelet-domain HMMs, we develop novel algorithms for signal denoising, classification, and detection.
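The non-Gaussian marginal behaviour this abstract refers to can be sketched with a two-state, zero-mean Gaussian mixture fitted by EM. This is only the "independent mixture" simplification of such wavelet-domain models (no tree dependencies), and the state count, initialization, and synthetic data below are assumptions for illustration:

```python
import numpy as np

def fit_two_state_mixture(w, iters=100):
    # zero-mean mixture: a low-variance "small" state and a high-variance "large" state
    p_large = 0.5
    var = np.array([0.1 * w.var(), 10.0 * w.var()])  # [small, large]
    for _ in range(iters):
        # E-step: posterior probability that each coefficient is in the large state
        lik_s = np.exp(-w**2 / (2 * var[0])) / np.sqrt(2 * np.pi * var[0])
        lik_l = np.exp(-w**2 / (2 * var[1])) / np.sqrt(2 * np.pi * var[1])
        r = p_large * lik_l / (p_large * lik_l + (1 - p_large) * lik_s)
        # M-step: reestimate the state probability and the two variances
        p_large = r.mean()
        var[1] = (r * w**2).sum() / r.sum()
        var[0] = ((1 - r) * w**2).sum() / (1 - r).sum()
    return p_large, var

# synthetic "wavelet coefficients": 30% large-state, 70% small-state samples
rng = np.random.default_rng(1)
state = rng.random(20000) < 0.3
w = np.where(state, rng.normal(0.0, 3.0, 20000), rng.normal(0.0, 0.5, 20000))
p_large, var = fit_two_state_mixture(w)
```

The fitted mixture recovers the heavy-tailed, peaky coefficient histogram that a single Gaussian cannot; the models in the paper additionally link the hidden states across scales in a Markov structure.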
Splines: A Perfect Fit for Signal/Image Processing
 IEEE Signal Processing Magazine, 1999
Edge Detection and Ridge Detection with Automatic Scale Selection
 CVPR '96, 1996
Cited by 347 (24 self)
When extracting features from image data, the type of information that can be extracted may be strongly dependent on the scales at which the feature detectors are applied. This article presents a systematic methodology for addressing this problem. A mechanism is presented for automatic selection of scale levels when detecting one-dimensional features, such as edges and ridges. A novel concept of a scale-space edge is introduced, defined as a connected set of points in scale-space at which: (i) the gradient magnitude assumes a local maximum in the gradient direction, and (ii) a normalized measure of the strength of the edge response is locally maximal over scales. An important property of this definition is that it allows the scale levels to vary along the edge. Two specific measures of edge strength are analysed in detail. It is shown that by expressing these in terms of γ-normalized derivatives, an immediate consequence of this definition is that fine scales are selected for sharp edges (so as to reduce the shape distortions due to scale-space smoothing), whereas coarse scales are selected for diffuse edges, such that an edge model constitutes a valid abstraction of the intensity profile across the edge. With slight modifications, this idea can be used for formulating a ridge detector with automatic scale selection, having the characteristic property that the selected scales on a scale-space ridge instead reflect the width of the ridge.
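The behaviour claimed for diffuse edges can be checked in one dimension with a hedged sketch (the erf-shaped test edge, its diffuseness, and the scale grid are illustrative assumptions): using the γ-normalized gradient magnitude σ^γ·|L_x| with γ = 1/2 as the edge-strength measure, the scale maximizing the response over scales grows with the diffuseness of the edge.

```python
import numpy as np
from math import erf, sqrt

def gaussian_kernel(sigma):
    # truncated, unit-sum Gaussian smoothing kernel
    radius = int(4 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def edge_strength(signal, sigma, gamma=0.5):
    # gamma-normalized gradient magnitude: sigma^gamma * |L_x|
    smoothed = np.convolve(signal, gaussian_kernel(sigma), mode="same")
    return sigma**gamma * np.abs(np.gradient(smoothed))

# assumed toy input: a diffuse step edge (erf profile) with diffuseness s
s = 5.0
edge = np.array([0.5 * (1.0 + erf((xi - 128.0) / (s * sqrt(2.0))))
                 for xi in range(256)])

sigmas = np.arange(1.0, 12.0, 0.25)
responses = [edge_strength(edge, sg)[128] for sg in sigmas]
selected = sigmas[int(np.argmax(responses))]
# with gamma = 1/2 the selected scale approximately matches the edge diffuseness s
```

A sharp edge (small s) drives the selected scale down, a diffuse edge drives it up, which is exactly the coupling between edge sharpness and selected scale described in the abstract.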
Bayesian Tree-Structured Image Modeling using Wavelet-domain Hidden Markov Models
 IEEE Transactions on Image Processing, 1999
Cited by 184 (15 self)
Wavelet-domain hidden Markov models have proven to be useful tools for statistical signal and image processing. The hidden Markov tree (HMT) model captures the key features of the joint probability density of the wavelet coefficients of real-world data. One potential drawback to the HMT framework is the need for computationally expensive iterative training to fit an HMT model to a given data set (using the Expectation-Maximization algorithm, for example). In this paper, we greatly simplify the HMT model by exploiting the inherent self-similarity of real-world images. This simplified model specifies the HMT parameters with just nine meta-parameters (independent of the size of the image and the number of wavelet scales). We also introduce a Bayesian universal HMT (uHMT) that fixes these nine parameters. The uHMT requires no training of any kind. While extremely simple, we show using a series of image estimation/denoising experiments that these two new models retain nearly all of the key structure modeled by the full HMT. Finally, we propose a fast shift-invariant HMT estimation algorithm that outperforms other wavelet-based estimators in the current literature, both in mean-square error and visual metrics.
Efficient Iris Recognition by Characterizing Key Local Variations
 IEEE Transactions on Image Processing, 2004
Cited by 165 (8 self)
Unlike other biometrics such as fingerprints and face, the distinct aspect of the iris comes from randomly distributed features. This leads to its high reliability for personal identification and, at the same time, the difficulty in effectively representing such details in an image. This paper describes an efficient algorithm for iris recognition by characterizing key local variations. The basic idea is that local sharp variation points, denoting the appearing or vanishing of an important image structure, are utilized to represent the characteristics of the iris. The whole procedure of feature extraction includes two steps: 1) a set of one-dimensional intensity signals is constructed to effectively characterize the most important information of the original two-dimensional image; 2) using a particular class of wavelets, a position sequence of local sharp variation points in such signals is recorded as features. We also present a fast matching scheme based on the exclusive-OR operation to compute the similarity between a pair of position sequences. Experimental results on 2,255 iris images show that the performance of the proposed method is encouraging and comparable to the best iris recognition algorithm found in the current literature. Index Terms—Biometrics, iris recognition, local sharp variations, personal identification, transient signal analysis, wavelet transform.
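The exclusive-OR matching step can be sketched as a normalized Hamming distance between two binary codes (the code length and decision threshold below are illustrative assumptions; the paper's actual features are position sequences of sharp variation points, not arbitrary bit strings):

```python
def hamming_distance(code_a, code_b):
    # fraction of positions where the two binary codes disagree (XOR count)
    if len(code_a) != len(code_b):
        raise ValueError("codes must have equal length")
    disagreements = sum(a ^ b for a, b in zip(code_a, code_b))
    return disagreements / len(code_a)

def same_iris(code_a, code_b, threshold=0.3):
    # accept the pair if the codes disagree in fewer than an assumed
    # fraction of positions; the threshold here is purely illustrative
    return hamming_distance(code_a, code_b) < threshold
```

Because XOR and bit counting are cheap, this style of comparison scales to matching one probe against a large database, which is the appeal of the fast matching scheme the abstract mentions.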
Oversampled Filter Banks
 IEEE Transactions on Signal Processing, 1998
Cited by 127 (2 self)
Perfect reconstruction oversampled filter banks are equivalent to a particular class of frames in ℓ²(Z). These frames are the subject of this paper. First, necessary and sufficient conditions on a filter bank for implementing a frame or a tight frame expansion are established, as well as a necessary and sufficient condition for perfect reconstruction using FIR filters after an FIR analysis. Complete parameterizations of oversampled filter banks satisfying these conditions are given. Further, we study the condition under which the frame dual to the frame associated with an FIR filter bank is also FIR, and give a parameterization of a class of filter banks satisfying this property. Then, we focus on nonsubsampled filter banks. Nonsubsampled filter banks implement transforms similar to continuous-time transforms and allow for very flexible design. We investigate relations of these filter banks to continuous-time filtering and illustrate the design flexibility by giving a procedure for designing maximally flat two-channel filter banks that yield highly regular wavelets with a given number of vanishing moments.
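The nonsubsampled case is easy to verify numerically: without subsampling there is no aliasing, so perfect reconstruction in a two-channel bank only requires G0(z)H0(z) + G1(z)H1(z) = z^(-d). A minimal sketch with assumed Haar-like filters (chosen for illustration, not filters from the paper):

```python
import numpy as np

# assumed two-channel filters satisfying G0(z)H0(z) + G1(z)H1(z) = 1
h0 = np.array([0.5, 0.5])    # lowpass analysis
h1 = np.array([0.5, -0.5])   # highpass analysis
g0 = np.array([1.0])         # lowpass synthesis
g1 = np.array([1.0])         # highpass synthesis

def nsfb_analysis(x):
    # nonsubsampled: both channels keep the full sampling rate (2x redundancy)
    return np.convolve(x, h0), np.convolve(x, h1)

def nsfb_synthesis(y0, y1):
    # sum the filtered channels; with the filters above this returns the input
    return np.convolve(y0, g0) + np.convolve(y1, g1)

x = np.random.default_rng(0).standard_normal(64)
y0, y1 = nsfb_analysis(x)
xhat = nsfb_synthesis(y0, y1)
```

Here G0·H0 + G1·H1 reduces to (H0 + H1)(z) = 1, so the two channels sum back to the input exactly. The 2x-redundant representation is a simple instance of the frame expansions the paper characterizes, and the absence of subsampling is what yields shift-invariant behaviour.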
Ridge-Valley Lines on Meshes via Implicit Surface Fitting
 ACM Transactions on Graphics, 2004
Cited by 123 (8 self)
We propose a simple and effective method for detecting view- and scale-independent ridge-valley lines defined via first- and second-order curvature derivatives on shapes approximated by dense triangle meshes. A high-quality estimation of high-order surface derivatives is achieved by combining multilevel implicit surface fitting and finite difference approximations. We demonstrate that the ridges and valleys are geometrically and perceptually salient surface features and, therefore, can be potentially used for shape recognition, coding, and quality evaluation purposes.
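A heavily simplified sketch of the curvature-based idea, on a height field rather than a triangle mesh (the small-slope curvature approximation, the threshold, and the test surface are all assumptions; the paper estimates derivatives via multilevel implicit fits over meshes):

```python
import numpy as np

def crest_candidates(z, thresh=1e-3):
    # second-order derivatives of the height field via repeated central differences
    zy, zx = np.gradient(z)
    zyy, _ = np.gradient(zy)
    zxy, zxx = np.gradient(zx)
    # principal curvatures under a small-slope approximation (Hessian eigenvalues)
    mean = 0.5 * (zxx + zyy)
    diff = np.sqrt(0.25 * (zxx - zyy)**2 + zxy**2)
    k_min, k_max = mean - diff, mean + diff
    # ridge candidates: strong negative curvature that dominates the other direction
    return (k_min < -thresh) & (np.abs(k_min) > np.abs(k_max))

# test surface: a straight Gaussian-profile ridge running along the x axis at row 32
yy = np.arange(64, dtype=float).reshape(-1, 1)
z = np.tile(np.exp(-(yy - 32.0)**2 / (2.0 * 4.0**2)), (1, 64))

mask = crest_candidates(z)
```

Even this crude version flags the crest of the test ridge and nothing in the flat regions; the paper's contribution is making the second-derivative estimates accurate and stable enough on real meshes for the resulting lines to be perceptually meaningful.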