Results 1–10 of 84
Alfred Hero
, 2002
"... Matching a reference image to a secondary image extracted from a database of transformed exemplars constitutes an important image retrieval task. Two related problems are: specification of a general class of discriminatory image features and an appropriate similarity measure to rank the closeness of ..."
Abstract
 Add to MetaCart
Matching a reference image to a secondary image extracted from a database of transformed exemplars constitutes an important image retrieval task. Two related problems are: specification of a general class of discriminatory image features and an appropriate similarity measure to rank the closeness of the query to the database. In this paper we present a general method based on matching high dimensional image features, using entropic similarity measures that can be empirically estimated using entropic graphs such as the minimal spanning tree (MST). The entropic measures we consider are generalizations of the well-known Kullback-Leibler (KL) distance, the mutual information (MI) measure, and the Jensen difference. Our entropic graph approach has the advantage of being implementable for high dimensional feature spaces for which other entropy-based pattern matching methods are computationally difficult. We compare our technique to previous entropy matching methods for a variety of continuous and discrete feature sets including: single pixel gray levels; tag subimage features; and independent component analysis (ICA) features. We illustrate the methodology for multimodal face retrieval and ultrasound (US) breast image registration.
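The MST length functional behind this kind of estimator can be sketched as follows. This is a minimal illustration of a Beardwood-Halton-Hammersley-style power-weighted MST entropy estimate, not the authors' implementation; the normalizing constant `beta` is a placeholder (set to 1), so values are comparable only up to an additive offset.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_renyi_entropy(X, gamma=1.0, beta=1.0):
    """Rényi alpha-entropy estimate from the Euclidean MST of the sample X.

    Uses the classical asymptotic result that the power-weighted MST length
    L_gamma(X) of n i.i.d. points in R^d satisfies
        (1/(1-alpha)) * [log(L_gamma(X) / n^alpha) - log(beta)] -> H_alpha(f),
    with alpha = (d - gamma)/d.  beta is an unknown constant here (set to 1),
    so estimates carry a fixed additive offset.
    """
    n, d = X.shape
    alpha = (d - gamma) / d
    dists = squareform(pdist(X))          # dense pairwise-distance graph
    mst = minimum_spanning_tree(dists)    # sparse matrix holding the n-1 MST edges
    L = np.power(mst.data, gamma).sum()   # power-weighted total edge length
    return (np.log(L / n**alpha) - np.log(beta)) / (1.0 - alpha)

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
# Scaling a sample by c > 1 raises differential entropy by d*log(c);
# the MST estimate reproduces exactly that shift, since every edge scales by c.
h_tight = mst_renyi_entropy(X)
h_wide = mst_renyi_entropy(3.0 * X)
```

Because scaling the points by 3 scales every MST edge by 3 while leaving the tree unchanged, `h_wide - h_tight` equals `2*log(3)` here (with `d=2`, `gamma=1`, so `alpha=0.5`).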
Reconstructing Signaling Pathways from High Throughput Data Chair: Alfred O Hero III
, 2006
"... by ..."
WEIGHTED k-NN GRAPHS FOR RÉNYI ENTROPY ESTIMATION IN HIGH DIMENSIONS Kumar Sricharan, Alfred O. Hero III
"... Rényi entropy is an information-theoretic measure of randomness which is fundamental to several applications. Several estimators of Rényi entropy based on k-nearest neighbor (k-NN) distances have been proposed in the literature. For d-dimensional densities f, the variance of these Rényi entropy es ..."
Abstract
 Add to MetaCart
Rényi entropy is an information-theoretic measure of randomness which is fundamental to several applications. Several estimators of Rényi entropy based on k-nearest neighbor (k-NN) distances have been proposed in the literature. For d-dimensional densities f, the variance of these Rényi entropy estimators of f decays as O(M^-1), where M is the sample size drawn from f. On the other hand, the bias, because of the curse of dimensionality, decays as O(M^-1/d). As a result the bias dominates the mean square error (MSE) in high dimensions. To address this large bias in high dimensions, we propose a weighted k-NN estimator where the weights serve to lower the bias to O(M^-1/2), which then ensures convergence of the weighted estimator at the parametric rate of O(M^-1/2). These weights are determined by solving a convex optimization problem. We subsequently use the weighted estimator to perform anomaly detection in wireless sensor networks. Index Terms — Rényi entropy estimation, weighted k-NN graphs, curse of dimensionality, parametric convergence rate
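The base k-NN estimator being weighted can be sketched as below. This is a plug-in density-based sketch under stated assumptions, not the paper's estimator: the paper chooses the weights by a convex program to cancel bias terms, while here a uniform average over several k is used as a stand-in.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gamma as gamma_fn

def knn_renyi_entropy(X, k=5, alpha=2.0):
    """Plug-in Rényi alpha-entropy estimate from k-NN distances.

    Approximates each density value by f(x_i) ~ k / ((M-1) * c_d * r_k^d),
    where r_k is the distance from x_i to its k-th neighbor and c_d is the
    volume of the unit ball in R^d, then plugs into the Rényi functional.
    """
    M, d = X.shape
    c_d = np.pi**(d / 2) / gamma_fn(d / 2 + 1)
    # query k+1 neighbors: the nearest "neighbor" of x_i within X is x_i itself
    r_k = cKDTree(X).query(X, k=k + 1)[0][:, -1]
    f_hat = k / ((M - 1) * c_d * r_k**d)
    return np.log(np.mean(f_hat**(alpha - 1))) / (1.0 - alpha)

def averaged_knn_renyi_entropy(X, ks=(3, 4, 5, 6, 7), alpha=2.0):
    """Uniform average over several k -- a crude stand-in for the paper's
    convex-optimized weights, which are tuned to drive the bias to O(M^-1/2)."""
    return np.mean([knn_renyi_entropy(X, k, alpha) for k in ks])

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 1))
# Scaling a 1-D sample by 2 adds exactly log(2) to Rényi entropy of any order;
# every k-NN distance doubles, so the estimator shifts by exactly log(2) too.
h1 = averaged_knn_renyi_entropy(X)
h2 = averaged_knn_renyi_entropy(2.0 * X)
```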
HUB DISCOVERY IN PARTIAL CORRELATION GRAPHS ALFRED HERO AND BALA RAJARATNAM
"... Abstract. One of the most important problems in large scale inference problems is the identification of variables that are highly dependent on several other variables. When dependency is measured by partial correlations these variables identify those rows of the partial correlation matrix that have ..."
Abstract
 Add to MetaCart
Abstract. One of the most important problems in large scale inference is the identification of variables that are highly dependent on several other variables. When dependency is measured by partial correlations these variables identify those rows of the partial correlation matrix that have several entries with large magnitudes; i.e., hubs in the associated partial correlation graph. This paper develops theory and algorithms for discovering such hubs from a few observations of these variables. We introduce a hub screening framework in which the user specifies both a minimum (partial) correlation ρ and a minimum degree δ to screen the vertices. The choice of ρ and δ can be guided by our mathematical expressions for the phase transition correlation threshold ρc governing the average number of discoveries. They can also be guided by our asymptotic expressions for familywise discovery rates under the assumption of large number p of variables, fixed number n of multivariate samples, and weak dependence. Under the null hypothesis that the dispersion (covariance) matrix is sparse these limiting expressions can be used to enforce familywise error constraints and to rank the discoveries in order of increasing statistical significance. For n ≪ p the computational complexity of the proposed partial correlation screening method is low and is therefore highly scalable. Thus it can be applied to significantly larger problems than previous approaches. The theory is applied to discovering hubs in a high dimensional gene microarray dataset.
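The screening rule itself (threshold at ρ, then keep vertices of degree ≥ δ) can be sketched as follows. This is a toy illustration only: it forms partial correlations from the pseudo-inverse of the sample covariance, which is sensible for n > p as in the synthetic check below, whereas the paper's construction is specifically designed for the n ≪ p regime with analytic thresholds.

```python
import numpy as np

def partial_correlations(X):
    """Partial correlation matrix from the (pseudo-)inverted sample covariance.

    Adequate for n > p as used here; the paper's screening procedure targets
    n << p via a different construction that this sketch does not cover.
    """
    K = np.linalg.pinv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(K))
    P = -K / np.outer(d, d)
    np.fill_diagonal(P, 1.0)
    return P

def hub_screen(X, rho=0.25, delta=3):
    """Vertices whose degree in the thresholded |partial correlation| >= rho
    graph is at least delta -- the hub screening rule of the abstract."""
    P = partial_correlations(X)
    np.fill_diagonal(P, 0.0)
    degrees = (np.abs(P) >= rho).sum(axis=1)
    return np.flatnonzero(degrees >= delta)

# Synthetic check: a precision matrix with a hub at variable 0, connected to
# variables 1-3 with true partial correlations of -(-0.8)/2 = 0.4 each.
p = 6
K_true = 2.0 * np.eye(p)
K_true[0, 1:4] = K_true[1:4, 0] = -0.8
rng = np.random.default_rng(2)
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(K_true), size=2000)
hubs = hub_screen(X, rho=0.25, delta=3)
```

With ρ = 0.25 sitting far above the sampling noise (about 1/√n ≈ 0.02) and well below the true 0.4, only vertex 0 should survive the degree-3 screen.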
HUB DISCOVERY IN PARTIAL CORRELATION GRAPHICAL MODELS ALFRED HERO AND BALA RAJARATNAM
"... Abstract. One of the most important problems in large scale inference problems is the identification of variables that are highly dependent on several other variables. When dependency is measured by partial correlations these variables identify those rows of the partial correlation matrix that have ..."
Abstract
 Add to MetaCart
Abstract. One of the most important problems in large scale inference is the identification of variables that are highly dependent on several other variables. When dependency is measured by partial correlations these variables identify those rows of the partial correlation matrix that have several entries with magnitude close to one; i.e., hubs in the associated partial correlation graph. This paper develops theory and algorithms for discovering such hubs from a few observations of these variables. We introduce a hub screening framework in which the user specifies both a minimum (partial) correlation ρ and a minimum degree δ to screen the vertices. The choice of ρ and δ can be guided by our mathematical expressions for the phase transition correlation threshold ρc governing the average number of discoveries. We also give asymptotic expressions for familywise discovery rates under the assumption of large p, fixed number n of multivariate samples, and weak dependence. Under the null hypothesis that the covariance matrix is sparse these limiting expressions can be used to enforce FWER constraints and to rank these discoveries in order of statistical significance (p-value). For n ≪ p the computational complexity of the proposed partial correlation screening method is low and is therefore highly scalable. Thus it can be applied to significantly larger problems than previous approaches. The theory is applied to discovering hubs in a high dimensional gene microarray dataset. Keywords: Gaussian graphical models, correlation networks, nearest neighbor dependency, node de
Sublinear Time Algorithms for the Sparse Recovery Problem
, 2013
"... Foremost, I would like to express my deepest appreciation to my adviser, Prof. Martin Strauss, for his continuous support and guidance of my Ph.D. study and research. I am also indebted to Prof. Anna Gilbert deeply for introducing me to various workshop opportunities in addition to her guidance of m ..."
Abstract
 Add to MetaCart
very grateful to David Woodruff for the many long phone discussions. I would also like to thank my thesis committee members, Assoc. Prof. Kevin Compton, Prof. Alfred Hero III and Assoc. Prof. Yaoyun Shi, for their reviews and helpful comments. As a nondriver in Ann Arbor, I am obliged to Caoxie Zhang
Hero III. Shift and scale invariant detection
 In Proceedings of the 1997 IEEE International Conference on Acoustics, Speech and Signal Processing
"... Different signal realizations generated from a given source may not appear the same. Time shifts, frequency shifts, and scales are among the signal variations commonly encountered. Time-frequency distributions (TFDs) covariant to time and frequency shifts and scale changes reflect these variations i ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
Different signal realizations generated from a given source may not appear the same. Time shifts, frequency shifts, and scales are among the signal variations commonly encountered. Time-frequency distributions (TFDs) covariant to time and frequency shifts and scale changes reflect these variations in a predictable manner. Based on such TFDs, representations invariant to these signal distortions are possible. Presented here are two approaches for discriminating between signal classes where within-class translation and scale variation occur. The first method uses an autocorrelation followed by a scale transform to achieve the invariances. The second method treats the TFD as a two-dimensional probability density function and applies a transformation that removes the mean and variance to provide the shift and scale invariance. Each method employs discrimination mechanisms to yield powerful results.
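The idea of building shift- and scale-invariant representations can be sketched in one dimension. This is not the paper's TFD-based pipeline, just the underlying Fourier-Mellin intuition: a magnitude spectrum removes circular time shifts exactly, and resampling a spectrum onto a logarithmic axis turns a scale change into a shift, which a second magnitude spectrum then removes (up to interpolation error). The grid sizes below are arbitrary choices.

```python
import numpy as np

def shift_invariant(x):
    """Magnitude spectrum: a circular time shift only changes the phase of
    the DFT, so |FFT| is exactly shift-invariant."""
    return np.abs(np.fft.fft(x))

def scale_invariant(mag, n_log=256):
    """Resample a magnitude spectrum onto a logarithmic frequency axis and
    take a second magnitude spectrum (a discrete Fourier-Mellin sketch):
    a scale change becomes a translation on the log axis, which the outer
    magnitude spectrum removes up to interpolation error."""
    n = len(mag)
    log_axis = np.logspace(0, np.log10(n // 2), n_log)  # skip the DC bin
    resampled = np.interp(log_axis, np.arange(n), mag)
    return np.abs(np.fft.fft(resampled))

t = np.arange(512)
x = np.sin(2 * np.pi * t / 32) + 0.5 * np.sin(2 * np.pi * t / 11)
f1 = shift_invariant(x)
f2 = shift_invariant(np.roll(x, 100))   # circularly time-shifted copy
```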
Contents, STATISTICAL METHODS FOR SIGNAL PROCESSING © Alfred Hero 1999
, 2008
"... This set of notes is the primary source material for the course EECS564 “Estimation, filtering and ..."
Abstract
 Add to MetaCart
This set of notes is the primary source material for the course EECS564 “Estimation, filtering and
STATISTICAL METHODS FOR SIGNAL PROCESSING © Alfred Hero 1999, Contents
, 2006
"... This set of notes is the primary source material for the course EECS564 "Estimation, filtering and detection" used over the period 1999–2006 at the University of Michigan, Ann Arbor. The author ..."
Abstract
 Add to MetaCart
This set of notes is the primary source material for the course EECS564 "Estimation, filtering and detection" used over the period 1999–2006 at the University of Michigan, Ann Arbor. The author
Real-Time Forecast Averaging with ALFRED
"... This paper presents empirical evidence on the efficacy of forecast averaging using the ALFRED (ArchivaL Federal Reserve Economic Data) real-time database. The authors consider averages over a variety of bivariate vector autoregressive models. These models are distinguished from one another based on ..."
Abstract
 Add to MetaCart
This paper presents empirical evidence on the efficacy of forecast averaging using the ALFRED (ArchivaL Federal Reserve Economic Data) real-time database. The authors consider averages over a variety of bivariate vector autoregressive models. These models are distinguished from one another based
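Forecast averaging of the kind described can be sketched with a univariate stand-in: fit several autoregressions of different order and average their one-step forecasts with equal weights. The paper averages bivariate VARs estimated on ALFRED data vintages; the AR models, orders, and series below are invented for illustration.

```python
import numpy as np

def ar_forecast(y, p):
    """One-step-ahead forecast from a least-squares AR(p) fit (no intercept)."""
    # Design matrix: row t holds the lags [y_{t-1}, ..., y_{t-p}] predicting y_t
    Z = np.column_stack([y[p - j - 1:len(y) - j - 1] for j in range(p)])
    coeffs = np.linalg.lstsq(Z, y[p:], rcond=None)[0]
    # Forecast regressor: the last p observations, most recent first
    return coeffs @ y[-1:-p - 1:-1]

def averaged_forecast(y, orders=(1, 2, 3)):
    """Equal-weight average of AR(p) forecasts -- a simple stand-in for the
    model-averaging schemes whose real-time performance the paper compares."""
    return np.mean([ar_forecast(y, p) for p in orders])

# Noiseless AR(1) series with coefficient 0.5: every AR(p) fit reproduces the
# dynamics exactly, so the averaged one-step forecast is 0.5 * y[-1].
y = 0.5 ** np.arange(20)
fcast = averaged_forecast(y)
```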