Results 1 - 10 of 6,048
Improved Maximum Mutual Information Estimation Training of Continuous Density HMMs
- Proc. EUROSPEECH, 2001
"... In maximum mutual information estimation (MMIE) training, the currently widely used update equations derive from the Extended Baum-Welch (EBW) algorithm, which was originally designed for the discrete hidden Markov model (HMM) and was extended to continuous Gaussian density HMMs through approximation ..."
Abstract - Cited by 8 (4 self)
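As context for this entry and the two MMI entries below: the MMIE criterion referred to in the excerpt is conventionally written as the log posterior probability of the correct transcriptions under the model. The notation here is a standard textbook form, not taken from the paper itself:

```latex
% MMIE objective: over training utterances r with acoustics O_r and
% reference word sequence w_r, maximize the posterior of w_r.
% Maximizing F is what EBW-style update equations approximate.
F_{\mathrm{MMIE}}(\lambda)
  = \sum_{r=1}^{R}
    \log \frac{p_{\lambda}(O_r \mid w_r)\, P(w_r)}
              {\sum_{w} p_{\lambda}(O_r \mid w)\, P(w)}
```

With the language model P(w) held fixed, maximizing this criterion is equivalent to maximizing the empirical mutual information between acoustics and word sequences, which is where the name comes from.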
Maximum Mutual Information Estimation with Unlabeled Data for Phonetic Classification
"... This paper proposes a new training framework for mixed labeled and unlabeled data and evaluates it on the task of binary phonetic classification. Our training objective function combines Maximum Mutual Information (MMI) for labeled data and Maximum Likelihood (ML) for unlabeled data. Through the mod ..."
Abstract - Cited by 8 (3 self)
Margin-Enhanced Maximum Mutual Information Estimation for Hidden Markov Models
"... A discriminative training algorithm to estimate a continuous-density hidden Markov model (CDHMM) for automatic speech recognition is considered. The algorithm is based on a criterion called margin-enhanced maximum mutual information (MEMMI), and it estimates the CDHMM parameters by maximiz ..."
Abstract
Word Association Norms, Mutual Information, and Lexicography
- 1990
"... This paper will propose an objective measure, based on the information-theoretic notion of mutual information, for estimating word association norms from computer-readable corpora. (The standard method of obtaining word association norms, testing a few thousand subjects on a few hundred words, is b ..."
Abstract - Cited by 1144 (11 self)
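The association measure this entry describes is what is now usually called pointwise mutual information: PMI(x, y) = log2 P(x, y) / (P(x) P(y)), estimated from corpus counts. A minimal sketch, restricted for brevity to adjacent word pairs and using a hypothetical toy corpus (real estimates require large corpora):

```python
import math
from collections import Counter

def pmi_scores(tokens):
    """Pointwise mutual information for adjacent word pairs:
    PMI(x, y) = log2( P(x, y) / (P(x) * P(y)) ), with probabilities
    estimated from unigram and bigram counts of one token list."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n_uni, n_bi = len(tokens), len(tokens) - 1
    return {(x, y): math.log2((c / n_bi)
                              / ((unigrams[x] / n_uni) * (unigrams[y] / n_uni)))
            for (x, y), c in bigrams.items()}

# Hypothetical toy corpus: "strong tea" recurs, so it scores above chance.
tokens = "strong tea strong tea weak tea strong coffee".split()
scores = pmi_scores(tokens)
```

A positive score means the pair co-occurs more often than independence would predict; scores near zero indicate no association.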
The information bottleneck method
- 1999
"... We define the relevant information in a signal x ∈ X as being the information that this signal provides about another signal y ∈ Y. Examples include the information that face images provide about the names of the people portrayed, or the information that speech sounds provide about the words spoken. Understanding the signal x requires more than just predicting y; it also requires specifying which features of X play a role in the prediction. We formalize this problem as that of finding a short code for X that preserves the maximum information about Y. That is, we squeeze the information that X provides ..."
Abstract - Cited by 540 (35 self)
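The "short code that preserves maximum information" trade-off in this excerpt is usually stated as a variational problem over a stochastic encoder p(t | x), with T the compressed representation (the "bottleneck") and the Markov chain T - X - Y:

```latex
% Information bottleneck: compress X into T (small I(X;T)) while
% keeping T informative about Y (large I(T;Y)); the Lagrange
% multiplier beta sets the compression/relevance trade-off.
\min_{p(t \mid x)} \; I(X; T) - \beta\, I(T; Y)
```

Small β forces aggressive compression; large β recovers as much of the Y-relevant structure of X as the representation allows.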
Fast and robust fixed-point algorithms for independent component analysis
- IEEE Trans. Neural Netw., 1999
"... Independent component analysis (ICA) is a statistical method for transforming an observed multidimensional random vector into components that are statistically as independent from each other as possible. In this paper, we use a combination of two different approaches for linear ICA: Comon's information-theoretic approach and the projection pursuit approach. Using maximum entropy approximations of differential entropy, we introduce a family of new contrast (objective) functions for ICA. These contrast functions enable both the estimation of the whole decomposition by minimizing mutual information ..."
Abstract - Cited by 884 (34 self)
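The fixed-point iteration this entry refers to is widely known as FastICA. A minimal one-unit sketch on synthetic data, using the common tanh contrast (the mixing matrix, sources, and iteration counts here are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent non-Gaussian sources, linearly mixed.
n = 5000
S = np.vstack([np.sign(rng.standard_normal(n)) * rng.standard_normal(n) ** 2,
               rng.uniform(-1, 1, n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])  # assumed mixing matrix
X = A @ S

# Whiten: zero mean, identity covariance (standard ICA preprocessing).
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / n)
Z = E @ np.diag(d ** -0.5) @ E.T @ Xc

# One-unit fixed-point iteration with the tanh contrast:
#   w <- E[z g(w'z)] - E[g'(w'z)] w,  then renormalize.
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(200):
    wx = w @ Z
    g, g_prime = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
    w_new = (Z * g).mean(axis=1) - g_prime.mean() * w
    w_new /= np.linalg.norm(w_new)
    converged = abs(abs(w_new @ w) - 1.0) < 1e-10
    w = w_new
    if converged:
        break

y = w @ Z  # one estimated independent component (up to sign and scale)
```

At a fixed point the projection w'z maximizes the negentropy-based contrast, so y should align with one of the original sources up to sign and scale.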
Limited information estimators and exogeneity tests for simultaneous probit models
- 1988
"... A two-step maximum likelihood procedure is proposed for estimating simultaneous probit models and is compared to alternative limited information estimators. Conditions under which each estimator attains the Cramér-Rao lower bound are obtained. Simple tests for exogeneity based on the new two-step es ..."
Abstract - Cited by 464 (0 self)
Approximating discrete probability distributions with dependence trees
- IEEE Transactions on Information Theory, 1968
"... A method is presented to approximate optimally an n-dimensional discrete probability distribution by a product of second-order distributions, or the distribution of the first-order tree dependence. The problem is to find an optimum set of n-1 first-order dependence relationships among the n variables. It is shown that the procedure derived in this paper yields an approximation of a minimum difference in information. It is further shown that when this procedure is applied to empirical observations from an unknown distribution of tree dependence, the procedure is the maximum-likelihood estimate ..."
Abstract - Cited by 881 (0 self)
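The optimal tree in this entry is found by a maximum-weight spanning tree over pairwise mutual information (the Chow-Liu construction). A minimal sketch with empirical MI and Kruskal's algorithm, on assumed toy data:

```python
import math
import random
from collections import Counter
from itertools import combinations

def mutual_info(samples, i, j):
    """Empirical mutual information (in nats) between discrete columns i, j."""
    n = len(samples)
    ci = Counter(s[i] for s in samples)
    cj = Counter(s[j] for s in samples)
    cij = Counter((s[i], s[j]) for s in samples)
    return sum((c / n) * math.log((c / n) / ((ci[a] / n) * (cj[b] / n)))
               for (a, b), c in cij.items())

def chow_liu_tree(samples, k):
    """Maximum-weight spanning tree over pairwise MI (Kruskal + union-find)."""
    edges = sorted(((mutual_info(samples, i, j), i, j)
                    for i, j in combinations(range(k), 2)), reverse=True)
    parent = list(range(k))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:          # keep edge only if it joins two components
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Toy data (assumed for illustration): X1 copies X0 90% of the time,
# X2 is independent noise, so the tree must keep the (0, 1) edge.
random.seed(1)
samples = []
for _ in range(500):
    x0 = random.randint(0, 1)
    x1 = x0 if random.random() < 0.9 else 1 - x0
    samples.append((x0, x1, random.randint(0, 1)))

tree = chow_liu_tree(samples, 3)
```

Keeping the highest-MI edges is exactly what minimizes the "difference in information" (KL divergence) between the true distribution and its tree approximation.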
Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm
- IEEE Transactions on Medical Imaging, 2001
"... The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain magnetic resonance (MR) images because of its simple mathematical form and the piecewise constant nature of ideal brain MR images. However, being a histogram-based model, the FM has an intrinsic limi ... that the FM model is a degenerate version of the HMRF model. The advantage of the HMRF model derives from the way in which the spatial information is encoded through the mutual influences of neighboring sites. Although MRF modeling has been employed in MR image segmentation by other researchers, most reported ..."
Abstract - Cited by 639 (15 self)
Brain magnetic resonance imaging with contrast dependent on blood oxygenation
- Proc. Natl. Acad. Sci. USA, 1990
"... Paramagnetic deoxyhemoglobin in venous blood is a naturally occurring contrast agent for magnetic resonance imaging (MRI). By accentuating the effects of this agent through the use of gradient-echo techniques in high fields, we demonstrate in vivo images of brain microvasculature with imag ... to regional neural activity. Magnetic resonance imaging (MRI) is a widely accepted modality for providing anatomical information. Current research (1) involves extending MRI methods to provide information about biological function, in addition to the concomitant anatomical information. In addition ..."
Abstract - Cited by 648 (1 self)