Results 1-10 of 12
Statistical pattern recognition: A review
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2000
Abstract

Cited by 1035 (30 self)
The primary goal of pattern recognition is supervised or unsupervised classification. Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, neural network techniques and methods imported from statistical learning theory have been receiving increasing attention. The design of a recognition system requires careful attention to the following issues: definition of pattern classes, sensing environment, pattern representation, feature extraction and selection, cluster analysis, classifier design and learning, selection of training and test samples, and performance evaluation. In spite of almost 50 years of research and development in this field, the general problem of recognizing complex patterns with arbitrary orientation, location, and scale remains unsolved. New and emerging applications, such as data mining, web searching, retrieval of multimedia data, face recognition, and cursive handwriting recognition, require robust and efficient pattern recognition techniques. The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system and identify research topics and applications which are at the forefront of this exciting and challenging field.
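The "classifier design and learning" and "performance evaluation" stages named in this abstract can be illustrated with a minimal statistical classifier. The sketch below is an illustrative toy (the data, class means, and diagonal-Gaussian model are all assumptions, not from the paper): it estimates per-class Gaussian parameters from labeled samples and classifies new patterns by maximum likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic pattern classes in a 2-D feature space (illustrative data).
X0 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(200, 2))
X1 = rng.normal(loc=[3.0, 3.0], scale=1.0, size=(200, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

# "Training": estimate a per-class mean and (diagonal) variance.
means = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
vars_ = np.array([X[y == c].var(axis=0) for c in (0, 1)])

def classify(x):
    # Gaussian class-conditional log-likelihoods, equal priors assumed.
    ll = -0.5 * np.sum(np.log(2 * np.pi * vars_) + (x - means) ** 2 / vars_, axis=1)
    return int(np.argmax(ll))

# "Performance evaluation": fresh test samples from each class.
test0 = rng.normal([0.0, 0.0], 1.0, size=(100, 2))
test1 = rng.normal([3.0, 3.0], 1.0, size=(100, 2))
preds = np.array([classify(x) for x in np.vstack([test0, test1])])
accuracy = np.mean(preds == np.array([0] * 100 + [1] * 100))
print(f"test accuracy: {accuracy:.2f}")
```

With well-separated classes like these, the simple parametric model suffices; the review's point is that each stage (representation, learning, evaluation) involves design choices that matter far more on realistic data.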
Quasi-Convexity and Optimal Binary Fusion for Distributed Detection with Identical Sensors in Generalized Gaussian Noise
, 2001
Abstract

Cited by 12 (1 self)
In this correspondence, we present a technique to find the optimal threshold for the binary hypothesis detection problem with identical and independent sensors. The sensors all use an identical and single threshold to make local decisions, and the fusion center makes a global decision based on the local binary decisions. For generalized Gaussian noises and some non-Gaussian noise distributions, we show that for any admissible fusion rule, the probability of error is a quasi-convex function of the threshold. Hence, the problem decomposes into a series of quasi-convex optimization problems that may be solved using well-known techniques. Assuming equal a priori probability, we give a sufficient condition on the non-Gaussian noise distribution for the probability of error to be quasi-convex. Furthermore, this technique is extended to Bayes risk and Neyman-Pearson criteria. We also demonstrate that, in practice, it takes fewer than twice as many binary sensors to give the performance of infinite-precision sensors in our scenario.
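The quasi-convexity result can be illustrated numerically. The sketch below is an assumption-laden toy, not from the paper: it fixes unit-variance Gaussian noise, n = 5 sensors, signal levels ±1, equal priors, and a majority fusion rule, then evaluates the error probability on a grid of common thresholds. Because the curve is unimodal, a simple search finds the optimum.

```python
import math

def q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def p_error(t, m=1.0, n=5, k=3):
    # Per-sensor probability of voting "+1" under s = +m and s = -m
    # (unit-variance Gaussian noise assumed for illustration).
    p_plus, p_minus = q(t - m), q(t + m)
    def fuse(p):
        # Probability that at least k of the n sensors vote "+1".
        return sum(math.comb(n, j) * p**j * (1 - p)**(n - j)
                   for j in range(k, n + 1))
    # Equal priors: average the miss and false-alarm probabilities.
    return 0.5 * (1 - fuse(p_plus)) + 0.5 * fuse(p_minus)

# Grid search over the common threshold; the curve is quasi-convex,
# so the minimum found this way is the global one.
ts = [i / 100 for i in range(-300, 301)]
errs = [p_error(t) for t in ts]
best_t = ts[errs.index(min(errs))]
print(f"optimal threshold ~ {best_t:.2f}, error ~ {min(errs):.4f}")
```

By symmetry of this particular setup, the optimal threshold sits at zero; the paper's contribution is proving the unimodality that makes such one-dimensional searches sufficient.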
Optimality of KLT for High-Rate Transform Coding of Gaussian Vector-Scale Mixtures: Application to Reconstruction, Estimation and Classification
, 2006
Abstract

Cited by 11 (0 self)
The Karhunen-Loève transform (KLT) is known to be optimal for high-rate transform coding of Gaussian vectors for both fixed-rate and variable-rate encoding. The KLT is also known to be suboptimal for some non-Gaussian models. This paper proves high-rate optimality of the KLT for variable-rate encoding of a broad class of non-Gaussian vectors: Gaussian vector-scale mixtures (GVSM), which extend the Gaussian scale mixture (GSM) model of natural signals. A key concavity property of the scalar GSM (same as the scalar GVSM) is derived to complete the proof. Optimality holds under a broad class of quadratic criteria, which include mean squared error (MSE) as well as generalized f-divergence loss in estimation and binary classification systems. Finally, the theory is illustrated using two applications: signal estimation in multiplicative noise and joint optimization of classification/reconstruction systems.
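The mechanical property underlying these coding results is that the KLT decorrelates the source. The sketch below is illustrative (the synthetic correlated source is an assumption, not the paper's GVSM model): the eigenvectors of the sample covariance matrix serve as the transform basis, so the transform coefficients come out uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(1)

# A correlated Gaussian source (a stand-in for one component of a mixture).
n, dim = 5000, 4
A = rng.normal(size=(dim, dim))
X = rng.normal(size=(n, dim)) @ A.T  # covariance approx. A A^T

# KLT basis: eigenvectors of the sample covariance matrix.
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
Y = X @ eigvecs  # transform coefficients

# The coefficients are decorrelated: cov(Y) is diagonal up to float error.
cov_Y = np.cov(Y, rowvar=False)
off_diag = cov_Y - np.diag(np.diag(cov_Y))
print("max off-diagonal covariance:", np.abs(off_diag).max())
```

For a Gaussian source, decorrelation implies independence, which is why the KLT is high-rate optimal there; the paper's point is that the same transform remains optimal for the much broader GVSM class.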
Optimal Binary Distributed Detection
 Proceedings of the 33rd Asilomar Conference on Signals, Systems, and Computers
, 1999
Abstract

Cited by 4 (2 self)
In this paper, we present a technique to find the optimal threshold and fusion rule for local sensors in the distributed detection of s ∈ {−m, m}, where the ith of n local sensors observes x_i = s + z_i with i.i.d. additive noise z_i. The fusion center makes a decision based on the n local binary decisions. For generalized Gaussian noises and some non-Gaussian noise distributions, we show that for any admissible fusion rule, the probability of error is a quasi-convex function of the threshold. Hence, the problem decomposes into a series of n quasi-convex optimization problems that may be solved using well-known techniques. Our results indicate that at most one quasi-convex problem needs to be solved in practice, since the optimal fusion rule is always essentially majority vote.
1. Introduction. Consider distributed detection of s ∈ {−m, m}, where the ith of n local sensors observes x_i = s + z_i with i.i.d. noise z_i. The ith sensor compares x_i to a threshold to compute ...
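The claim that the optimal fusion rule is essentially majority vote can be checked in a small instance. The sketch below is a toy under assumed parameters (unit-variance Gaussian noise, n = 5, m = 1, equal priors; none of these specifics come from the paper): for each k-out-of-n fusion rule it optimizes the common threshold by grid search, then compares the resulting error probabilities.

```python
import math

def q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def p_error(t, k, m=1.0, n=5):
    # Error probability of the k-out-of-n fusion rule at common threshold t,
    # assuming unit-variance Gaussian noise and equal priors (illustrative).
    p_plus, p_minus = q(t - m), q(t + m)
    def fuse(p):
        # Probability that at least k of the n sensors vote "+1".
        return sum(math.comb(n, j) * p**j * (1 - p)**(n - j)
                   for j in range(k, n + 1))
    return 0.5 * (1 - fuse(p_plus)) + 0.5 * fuse(p_minus)

# Optimize the common threshold for each fusion rule k = 1..n by grid search;
# each search is a one-dimensional quasi-convex problem.
results = {}
for k in range(1, 6):
    results[k] = min((p_error(i / 100, k), i / 100) for i in range(-300, 301))

best_k = min(results, key=lambda r: results[r][0])
print("best fusion rule: k =", best_k)  # majority vote is k = 3 for n = 5
```

In this symmetric setup the majority rule (k = 3) wins at threshold zero, matching the abstract's observation that in practice only the majority-vote quasi-convex problem needs to be solved.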
On the Relationship between Dependence Tree Classification Error and Bayes Error Rate
 IEEE Trans. Patt. Anal. Mach. Intell.
"... Abstract: Wong and Poon ..."
ErrorProtection Techniques for Source and Channel Coding
, 2002
Abstract

Cited by 1 (1 self)
DEDICATION: To my parents, who love me; to Wei, who loves and supports me; to Frank, who likes smiling.
A general class of lower bounds on the probability of error in multiple hypothesis testing
 IEEE 25th Convention of Electrical and Electronics Engineers in Israel
, 2008
Abstract

Cited by 1 (1 self)
In this paper, a new class of lower bounds on the probability of error for m-ary hypothesis tests is proposed. Computation of the minimum probability of error, which is attained by the maximum a posteriori probability (MAP) criterion, is usually not tractable. The new class is derived using Hölder's inequality. The bounds in this class are continuous and differentiable functions of the conditional probability of error, and they provide a good prediction of the minimum probability of error in multiple hypothesis testing. It is shown that for the binary hypothesis testing problem this bound asymptotically coincides with the optimum probability of error provided by the MAP criterion. This bound is compared with other existing lower bounds in several typical detection and classification problems in terms of tightness and computational complexity.
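The quantity these bounds approximate, the minimum probability of error attained by the MAP rule, is easy to compute exactly when the observation space is small and discrete. The sketch below is an illustrative toy, not from the paper (the priors and random likelihoods are assumptions): it evaluates P_e = 1 − Σ_x max_i P(H_i) p(x|H_i) for a random 3-hypothesis problem.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy m-ary problem: 3 hypotheses, observations from a 6-symbol alphabet.
m, n_symbols = 3, 6
priors = np.array([0.5, 0.3, 0.2])
likelihoods = rng.dirichlet(np.ones(n_symbols), size=m)  # rows: p(x | H_i)

# Minimum probability of error attained by the MAP rule:
#   P_e = 1 - sum_x max_i P(H_i) p(x | H_i)
joint = priors[:, None] * likelihoods
p_e_map = 1.0 - joint.max(axis=0).sum()
print(f"MAP minimum probability of error: {p_e_map:.4f}")
```

For continuous or high-dimensional observations this maximization inside an integral is exactly what becomes intractable, which is why computable lower bounds of the kind proposed in the paper are useful.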
General Classes of Lower Bounds on the Probability of Error in Multiple Hypothesis Testing
Implementation of Pattern Recognition Techniques and Overview of Its Applications in Various Areas of Artificial Intelligence
Abstract
A pattern is an entity, vaguely defined, that could be given a name, e.g. a fingerprint image, a handwritten word, a human face, a speech signal, or a DNA sequence. Pattern recognition is the study of how machines can observe the environment, learn to distinguish patterns of interest from their background, and make sound and reasonable decisions about the categories of the patterns. The goal of pattern recognition research is to clarify the complicated mechanisms of decision-making processes and to automate these functions using computers. Pattern recognition systems can be designed using the following main approaches: template matching, statistical methods, syntactic methods, and neural networks. This paper reviews pattern recognition, including its process, design cycle, applications, and models, and focuses on the statistical method of pattern recognition.
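Of the design approaches listed, template matching is the simplest to sketch. The toy below is illustrative only (the image, template, and embedding location are assumptions, not from the paper): it slides a small template over an image and scores each position by the sum of squared differences, recovering the location where the pattern was embedded.

```python
import numpy as np

rng = np.random.default_rng(3)

# Tiny "image" with a known 3x3 pattern embedded at row 4, column 5.
image = rng.random((12, 12)) * 0.1
template = np.array([[1.0, 0.0, 1.0],
                     [0.0, 1.0, 0.0],
                     [1.0, 0.0, 1.0]])
image[4:7, 5:8] += template

# Template matching: slide the template, score by sum of squared differences.
h, w = template.shape
scores = np.empty((image.shape[0] - h + 1, image.shape[1] - w + 1))
for i in range(scores.shape[0]):
    for j in range(scores.shape[1]):
        patch = image[i:i + h, j:j + w]
        scores[i, j] = np.sum((patch - template) ** 2)

best = np.unravel_index(scores.argmin(), scores.shape)
print("best match at:", best)  # the embedded location (4, 5)
```

Template matching is rigid, which is exactly why the statistical methods this paper focuses on are preferred when patterns vary in orientation, scale, or appearance.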