Results 1–10 of 876
Accurate Methods for the Statistics of Surprise and Coincidence
COMPUTATIONAL LINGUISTICS, 1993
"... Much work has been done on the statistical analysis of text. In some cases reported in the literature, inappropriate statistical methods have been used, and statistical significance of results has not been addressed. In particular, asymptotic normality assumptions have often been used unjustifiably ..."
Cited by 1057 (1 self)
… work well, the likelihood ratio tests described here are nearly identical. This paper describes the basis of a measure based on likelihood ratios that can be applied to the analysis of text.
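The statistic this entry advocates can be sketched directly. A minimal Python version of the G² log-likelihood ratio for a 2×2 contingency table of co-occurrence counts (the function name and argument layout are illustrative, not from the paper):

```python
import math

def llr_2x2(k11, k12, k21, k22):
    """G^2 log-likelihood ratio for a 2x2 contingency table.

    k11: both events co-occur; k12/k21: one occurs without the other;
    k22: neither occurs. Large values indicate association.
    """
    def entropy_term(*counts):
        # sum k*log(k/N) over the given counts, skipping empty cells
        total = sum(counts)
        return sum(k * math.log(k / total) for k in counts if k > 0)

    rows = entropy_term(k11 + k12, k21 + k22)
    cols = entropy_term(k11 + k21, k12 + k22)
    cells = entropy_term(k11, k12, k21, k22)
    return 2.0 * (cells - rows - cols)
```

Unlike statistics that rely on asymptotic normality, G² stays usable for the sparse counts typical of text; under independence it is asymptotically chi-squared with one degree of freedom.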
The Laplacian Pyramid as a Compact Image Code
1983
"... We describe a technique for image encoding in which local operators of many scales but identical shape serve as the basis functions. The representation differs from established techniques in that the code elements are localized in spatial frequency as well as in space. Pixel-to-pixel correlations a ..."
Cited by 1388 (12 self)
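The encode/decode cycle the abstract describes can be sketched with toy REDUCE and EXPAND operators (the paper uses a small separable generating kernel; block averaging and nearest-neighbour upsampling stand in for it here):

```python
import numpy as np

def reduce2(img):
    # Toy REDUCE: 2x2 block average (stand-in for the paper's 5-tap kernel).
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def expand2(img, shape):
    # Toy EXPAND: nearest-neighbour upsampling back to `shape`.
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    pyr, g = [], img.astype(float)
    for _ in range(levels):
        g_next = reduce2(g)
        pyr.append(g - expand2(g_next, g.shape))   # band-pass residual
        g = g_next
    pyr.append(g)                                  # final low-pass residue
    return pyr

def reconstruct(pyr):
    g = pyr[-1]
    for lap in reversed(pyr[:-1]):
        g = lap + expand2(g, lap.shape)
    return g
```

Each pyramid level stores a band-pass residual, so the code elements are localized both in space and in spatial frequency, and the image is recovered exactly by summing expanded levels.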
Optimal Bi-Level Quantization of i.i.d. Sensor Observations for Binary Hypothesis Testing
IEEE Trans. Inform. Theory, 2002
"... We consider the problem of binary hypothesis testing using binary decisions from independent and identically distributed (i.i.d.) sensors. Identical likelihood-ratio quantizers with threshold are used at the sensors to obtain sensor decisions. Under this condition, the optimal fusion rule is known t ..."
Cited by 15 (2 self)
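A Monte Carlo sketch of this setting, assuming a Gaussian shift-in-mean observation model (the model, thresholds, and counts below are illustrative): each sensor applies an identical threshold likelihood-ratio quantizer, and the fusion center applies a counting (k-out-of-n) rule, the known optimal form for i.i.d. binary sensor decisions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sensor_decisions(x, tau):
    # Identical likelihood-ratio quantizers: for a Gaussian shift in mean,
    # the LLR is monotone in x, so thresholding x is a threshold LR test.
    return (x > tau).astype(int)

def fusion(decisions, k):
    # Counting (k-out-of-n) fusion rule over the binary sensor decisions.
    return int(decisions.sum() >= k)

# H1: each sensor sees N(1, 1); H0: N(0, 1); n i.i.d. sensors.
n, tau, k, trials = 9, 0.5, 5, 2000
h1 = np.array([fusion(sensor_decisions(rng.normal(1, 1, n), tau), k)
               for _ in range(trials)])
h0 = np.array([fusion(sensor_decisions(rng.normal(0, 1, n), tau), k)
               for _ in range(trials)])
pd, pf = h1.mean(), h0.mean()   # detection and false-alarm probabilities
```

The paper's contribution concerns optimizing the sensor threshold jointly with the fusion rule; the fixed `tau` here is just a placeholder.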
Grassmannian beamforming for multiple-input multiple-output wireless systems
IEEE TRANS. INFORM. THEORY, 2003
"... Transmit beamforming and receive combining are simple methods for exploiting the significant diversity that is available in multiple-input and multiple-output (MIMO) wireless systems. Unfortunately, optimal performance requires either complete channel knowledge or knowledge of the optimal beamformi ..."
Cited by 329 (38 self)
… beamforming vector which are not always realizable in practice. In this correspondence, a quantized maximum signal-to-noise ratio (SNR) beamforming technique is proposed where the receiver only sends the label of the best beamforming vector in a predetermined codebook to the transmitter. By using …
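The feedback step described above can be sketched as follows; the random unit-norm codebook is a placeholder (the paper constructs codebooks via Grassmannian line packing, which this sketch does not implement):

```python
import numpy as np

rng = np.random.default_rng(1)

def best_codeword(h, codebook):
    # Quantized max-SNR beamforming: the receiver picks the codebook entry
    # maximizing |w^H h| and feeds back only its index (label).
    gains = np.abs(codebook.conj() @ h)
    return int(np.argmax(gains))

# Hypothetical random unit-norm codebook for a 4-antenna MISO link.
nt, ncode = 4, 16
codebook = rng.normal(size=(ncode, nt)) + 1j * rng.normal(size=(ncode, nt))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

h = rng.normal(size=nt) + 1j * rng.normal(size=nt)   # channel realization
idx = best_codeword(h, codebook)                     # log2(16) = 4 feedback bits
snr_gain = np.abs(codebook[idx].conj() @ h) ** 2
```

The achieved gain is upper-bounded by the unquantized maximum-ratio value ‖h‖², and the gap to that bound is what a well-packed codebook minimizes.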
Performances of the Likelihood-ratio Classifier based on Different Data Modelings
"... Abstract—The classical likelihood ratio classifier easily collapses in many biometric applications, especially with independent training-test subjects. The reason lies in the inaccurate estimation of the underlying user-specific feature density. Firstly, the feature density estimation suffers from in ..."
… Index Terms—likelihood-ratio classifier, density estimation, quantization.
LIKELIHOOD-RATIO EMPIRICAL KERNELS FOR I-VECTOR BASED PLDA-SVM SCORING
"... Likelihood ratio (LR) scoring in PLDA speaker verification systems only uses the information of background speakers implicitly. This paper exploits the notion of empirical kernel maps to incorporate background speaker information into the scoring process explicitly. This is achieved by training a sc ..."
Quantization of Lie bialgebras
1996
"... This paper is a continuation of [EK14]. The goal of this paper is to define and study the notion of a quantum vertex operator algebra (VOA) in the setting of the formal deformation theory and give interesting examples of such algebras. Our definition of a quantum VOA is based on the ideas of the pa ..."
Cited by 168 (17 self)
… structure a braided VOA. However, a braided VOA does not necessarily satisfy the associativity property, which is one of the main properties of a usual VOA. More precisely, instead of associativity it satisfies a quasi-associativity identity, which differs from associativity …
Simulating ratios of normalizing constants via a simple identity: A theoretical exploration
Statistica Sinica, 1996
"... Abstract: Let p_i(w), i = 1, 2, be two densities with common support, where each density is known up to a normalizing constant: p_i(w) = q_i(w)/c_i. We have draws from each density (e.g., via Markov chain Monte Carlo), and we want to use these draws to simulate the ratio of the normalizing constants, c_1/c_2. ..."
Cited by 187 (3 self)
… We also introduce several generalizations of this identity for handling more complicated settings (e.g., estimating several ratios simultaneously) and pose several open problems that appear to have practical as well as theoretical value. Furthermore, we discuss related theoretical and empirical …
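The "simple identity" of the title can be demonstrated numerically: if p_i = q_i/c_i and we can draw from p_2, then E_{p2}[q_1(w)/q_2(w)] = c_1/c_2. A toy check with two unnormalized Gaussians (all constants illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Unnormalized densities q_i(w) = exp(-w^2 / (2 s_i^2)), so the true
# normalizing constants are c_i = s_i * sqrt(2*pi) and c1/c2 = s1/s2.
s1, s2 = 1.0, 2.0

def q1(w):
    return np.exp(-w ** 2 / (2 * s1 ** 2))

def q2(w):
    return np.exp(-w ** 2 / (2 * s2 ** 2))

# Identity: c1/c2 = E_{p2}[ q1(w) / q2(w) ], estimated from draws of p2.
# Drawing from the wider density keeps the estimator's variance finite.
w = rng.normal(0.0, s2, 200_000)
ratio_hat = np.mean(q1(w) / q2(w))   # should approach s1/s2 = 0.5
```

Which density is sampled matters a great deal for the simulation error; that sensitivity, and identities that mitigate it, are what the paper's theoretical exploration addresses.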
Quantization of Log-Likelihood Ratios to Maximize Mutual Information
"... Abstract—We propose a quantization scheme for log-likelihood ratios which optimizes the tradeoff between rate and accuracy in the sense of rate distortion theory: as distortion measure we use mutual information to determine quantization and decision levels maximizing mutual information for a given ..."
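A minimal sketch of the design criterion, assuming the common consistent-Gaussian LLR model N(±mu, 2·mu) for a binary-input channel (the model and the grid search are illustrative; the paper derives quantization and decision levels from the rate-distortion formulation):

```python
import math

def _phi(x, mean, sd):
    # Gaussian CDF, with infinite cell edges handled explicitly.
    if math.isinf(x):
        return 1.0 if x > 0 else 0.0
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

def mi_quantized(mu, thresholds):
    """I(B; Q(L)) for a symmetric binary-input channel whose LLR L is
    N(+mu, 2*mu) under B=0 and N(-mu, 2*mu) under B=1, quantized with
    the given cell boundaries."""
    sd = math.sqrt(2.0 * mu)
    edges = [-math.inf] + sorted(thresholds) + [math.inf]
    mi = 0.0
    for lo, hi in zip(edges, edges[1:]):
        p0 = _phi(hi, +mu, sd) - _phi(lo, +mu, sd)   # P(cell | B=0)
        p1 = _phi(hi, -mu, sd) - _phi(lo, -mu, sd)   # P(cell | B=1)
        p = 0.5 * (p0 + p1)
        for pc in (p0, p1):
            if pc > 0.0:
                mi += 0.5 * pc * math.log2(pc / p)
    return mi

# 2-bit quantizer with symmetric boundaries {-t, 0, +t}: grid-search t
# for the boundary that maximizes the mutual information.
mu = 2.0
best_t = max((t / 100.0 for t in range(1, 801)),
             key=lambda t: mi_quantized(mu, [-t, 0.0, t]))
```

Because any finer partition of the LLR axis can only increase I(B; Q(L)), the optimized 4-level quantizer is never worse than hard 1-bit decisions at threshold zero.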
Soft Learning Vector Quantization
NEURAL COMPUTATION, 2002
"... Learning Vector Quantization is a popular class of adaptive nearest prototype classifiers for multiclass classification, but learning algorithms from this family have so far been proposed on heuristic grounds. Here we take a more principled approach and derive two variants of Learning Vector Quantiz ..."
Cited by 55 (0 self)
… Quantization using a Gaussian mixture ansatz. We propose an objective function which is based on a likelihood ratio and we derive a learning rule using gradient descent. The new approach provides a way to extend the algorithms of the LVQ family to different distance measures and allows for the design of …
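One plausible reading of this snippet is a robust-soft-LVQ-style update: prototype posteriors under a Gaussian mixture ansatz, with gradient ascent on a log likelihood ratio. A toy sketch (the function names, constants, and data below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def soft_lvq_step(x, y, protos, labels, sigma2=0.5, lr=0.05):
    # Gradient-ascent step on log[p(x, y) / p(x)] under a Gaussian
    # mixture ansatz with one prototype per mixture component.
    d2 = ((protos - x) ** 2).sum(axis=1)
    g = np.exp(-d2 / (2.0 * sigma2))
    post_all = g / g.sum()                    # P(j | x)
    mask = (labels == y)
    post_y = np.where(mask, g, 0.0)
    post_y = post_y / post_y.sum()            # P(j | x, y)
    # Correct-class prototypes are attracted, others repelled.
    coef = np.where(mask, post_y - post_all, -post_all)
    protos += lr * coef[:, None] * (x - protos)
    return protos

# Toy 2-class data: class 0 around (-1, 0), class 1 around (+1, 0).
X = np.vstack([rng.normal((-1, 0), 0.3, (100, 2)),
               rng.normal((+1, 0), 0.3, (100, 2))])
Y = np.array([0] * 100 + [1] * 100)
protos = rng.normal(0.0, 0.1, (2, 2))         # one prototype per class
labels = np.array([0, 1])
for _ in range(5):
    for x, y in zip(X, Y):
        soft_lvq_step(x, y, protos, labels)
```

After training, classification is by nearest prototype, as in the rest of the LVQ family; the soft posteriors only shape the learning rule.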