Results 11–20 of 994
A Hidden Markov Model approach to variation among sites in rate of evolution.
Mol Biol Evol, 1996
Cited by 244 (1 self)
Abstract: The method of hidden Markov models is used to allow for unequal and unknown evolutionary rates at different sites in molecular sequences. Rates of evolution at different sites are assumed to be drawn from a set of possible rates, with a finite number of possibilities. The overall likelihood of a phylogeny is calculated as a sum of terms, each term being the probability of the data given a particular assignment of rates to sites, times the prior probability of that particular combination of rates. The probabilities of different rate combinations are specified by a stationary Markov chain that assigns rate categories to sites. While there will be a very large number of possible ways of assigning rates to sites, a simple recursive algorithm allows the contributions to the likelihood from all possible combinations of rates to be summed, in a time proportional to the number of different rates at a single site. Thus with 3 rates, the effort involved is no greater than 3 times that for a single rate. This "hidden Markov model" method allows for rates to differ between sites, and for correlations between the rates of neighboring sites. By summing over all possibilities it does not require us to know the rates at individual sites. However, it does not allow for correlation of rates at nonadjacent sites, nor does it allow for a continuous distribution of rates over sites. It is shown how to use the Newton-Raphson method to estimate branch lengths of a phylogeny, and to infer from a phylogeny which assignment of rates to sites has the largest posterior probability. An example is given using β-hemoglobin DNA sequences in 8 mammal species; the regions of high and low evolutionary rates are inferred, and also the average length of patches of similar rates.
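The recursion the abstract describes can be sketched in a few lines. Everything numeric below is invented for illustration: the per-site likelihoods stand in for values a substitution model on the phylogeny would supply, and the prior over categories would ideally be the chain's stationary distribution.

```python
import numpy as np

# P(data) = sum over rate assignments of P(data | rates) * P(rates),
# where rates follow a stationary Markov chain along the sites.
# Summing over all K^n assignments directly is infeasible; the forward
# recursion below does it in O(n * K^2) time.

K = 3                                   # number of rate categories
n_sites = 5                             # number of sites (toy example)
pi = np.array([0.2, 0.5, 0.3])          # prior over rate categories
A = np.array([[0.80, 0.15, 0.05],       # P(rate at site i+1 | rate at site i)
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

rng = np.random.default_rng(0)
# Hypothetical per-site likelihoods P(column i | rate k).
site_lik = rng.uniform(0.1, 1.0, size=(n_sites, K))

# Forward recursion: f[k] = P(columns 0..i, rate at site i = k)
f = pi * site_lik[0]
for i in range(1, n_sites):
    f = (f @ A) * site_lik[i]
likelihood = f.sum()
print(likelihood)
```

The key point matches the abstract: the cost per site is proportional to K (here, K² transition updates), not to the K^n possible rate assignments.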
Rate-distortion methods for image and video compression
IEEE Signal Process. Mag., 1998
Cited by 224 (7 self)
In this paper we provide an overview of rate-distortion (RD) based optimization techniques and their practical application to image and video coding. We begin with a short discussion of classical rate-distortion theory, and then we show how, in many practical coding scenarios, such as in standards-compliant coding environments, resource allocation can be put in an RD framework. We then introduce two popular techniques for resource allocation, namely, Lagrangian optimization and dynamic programming. After a discussion of these two techniques as well as some of their extensions, we conclude with a quick review of recent literature in these areas, citing a number of applications related to image and video compression and transmission. ...
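As a hedged illustration of the Lagrangian technique the abstract mentions (not code from the paper): when coding units are independent, minimizing total distortion under a rate budget reduces to picking, per unit, the operating point that minimizes the cost J = D + λR, then sweeping λ until the budget is met. The (rate, distortion) pairs below are made up.

```python
# Each coding unit has a discrete set of (rate, distortion) operating
# points (e.g., quantizer choices); Lagrangian allocation decouples the
# units: each independently minimizes D + lambda * R.

def allocate(options_per_unit, lam):
    """options_per_unit: list of lists of (rate, distortion) pairs."""
    choices = [min(opts, key=lambda rd: rd[1] + lam * rd[0])
               for opts in options_per_unit]
    total_rate = sum(r for r, _ in choices)
    total_dist = sum(d for _, d in choices)
    return choices, total_rate, total_dist

units = [
    [(1, 9.0), (2, 4.0), (4, 1.0)],   # quantizer choices for unit 0
    [(1, 6.0), (3, 2.0), (5, 0.5)],   # quantizer choices for unit 1
]
# Large lambda favors low rate; small lambda favors low distortion.
for lam in (0.1, 1.0, 10.0):
    _, R, D = allocate(units, lam)
    print(lam, R, D)
```

Sweeping λ traces out points on the lower convex hull of the achievable (R, D) region, which is why the method pairs naturally with a bisection search for the target rate.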
Improved low-density parity-check codes using irregular graphs
IEEE Trans. Inform. Theory, 2001
Cited by 223 (15 self)
Abstract: We construct new families of error-correcting codes based on Gallager’s low-density parity-check codes. We improve on Gallager’s results by introducing irregular parity-check matrices and a new rigorous analysis of hard-decision decoding of these codes. We also provide efficient methods for finding good irregular structures for such decoding algorithms. Our rigorous analysis based on martingales, our methodology for constructing good irregular codes, and the demonstration that irregular structure improves performance constitute key points of our contribution. We also consider irregular codes under belief propagation. We report the results of experiments testing the efficacy of irregular codes on both binary-symmetric and Gaussian channels. For example, using belief propagation, for rate 1/4 codes on 16,000 bits over a binary-symmetric channel, previous low-density parity-check codes can correct up to approximately 16% errors, while our codes correct over 17%. In some cases our results come very close to reported results for turbo codes, suggesting that variations of irregular low-density parity-check codes may be able to match or beat turbo code performance. Index Terms: Belief propagation, concentration theorem, Gallager codes, irregular codes, low-density parity-check codes.
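A minimal sketch of hard-decision (bit-flipping) decoding of the kind analyzed in the paper, using a toy parity-check matrix rather than one of the paper's irregular constructions: each iteration computes the syndrome and flips the bits that sit in the most unsatisfied checks.

```python
import numpy as np

# Toy parity-check matrix H (rows = checks, columns = code bits).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def bit_flip_decode(H, y, max_iters=20):
    y = y.copy()
    for _ in range(max_iters):
        syndrome = H @ y % 2                  # 1 = unsatisfied check
        if not syndrome.any():
            return y                          # all checks satisfied
        # Per bit, count the unsatisfied checks it participates in.
        unsat = syndrome @ H
        y[unsat == unsat.max()] ^= 1          # flip the worst offenders
    return y

codeword = np.zeros(6, dtype=int)             # all-zero word is a codeword
received = codeword.copy()
received[2] ^= 1                              # inject a single bit error
print(bit_flip_decode(H, received))
```

Irregular codes improve on this picture by giving some bits higher degree (more check protection), which is the structural change whose benefit the paper analyzes rigorously.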
An Introduction to Factor Graphs
IEEE Signal Processing Mag., Jan. 2004
Cited by 197 (34 self)
A large variety of algorithms in coding, signal processing, and artificial intelligence may be viewed as instances of the summary-product algorithm (or belief/probability propagation algorithm). ...
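A toy instance of the summary-product (sum-product) idea, on a hypothetical three-variable chain x0 – f01 – x1 – f12 – x2; since the graph is a tree, message passing recovers the exact marginal. The factor tables are made-up numbers.

```python
import numpy as np

# Pairwise factor tables (unnormalized); f01[x0, x1] and f12[x1, x2].
f01 = np.array([[1.0, 2.0],
                [0.5, 1.5]])
f12 = np.array([[2.0, 0.5],
                [1.0, 3.0]])

# Message from each factor toward x1: "summarize" (sum out) the other
# variable of the factor.
msg_f01_to_x1 = f01.sum(axis=0)   # sums out x0
msg_f12_to_x1 = f12.sum(axis=1)   # sums out x2

# The marginal at x1 is the product of incoming messages, normalized.
marg_x1 = msg_f01_to_x1 * msg_f12_to_x1
marg_x1 /= marg_x1.sum()
print(marg_x1)
```

On graphs with cycles the same local message updates are applied iteratively, which is how decoding algorithms such as belief propagation for LDPC and turbo codes arise as instances of this one algorithm.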
Preliminaries to a Theory of Speech Disfluencies
1994
Cited by 182 (7 self)
This thesis examines disfluencies (e.g., "um", repeated words, and a variety of forms of self-repair) in the spontaneous speech of adult normal speakers of American English. Despite their prevalence, disfluencies have traditionally been viewed as irregular events and have received little attention. The goal of the thesis is to provide evidence that, on the contrary, disfluencies show remarkably regular trends in a number of dimensions. These regularities have consequences for models of human language production; they can also be exploited to improve performance in speech applications. The method includes analysis of over 5000 hand-annotated disfluencies from a database (250,000 words) containing three different styles of spontaneous speech: task-oriented human-computer dialog, task-oriented human-human dialog, and human-human conversation on a prescribed topic. The approach is theory-neutral and strongly data-driven. The annotations correspond to observable characteristics ("features") ...
Per-Survivor Processing: A General Approach to MLSE in Uncertain Environments
IEEE Trans. Commun., 1995
Measurement and Analysis of the Error Characteristics of an In-Building Wireless Network
1996
Cited by 155 (4 self)
There is a general belief that networks based on wireless technologies have much higher error rates than those based on more traditional technologies such as optical fiber, coaxial cable, or twisted-pair wiring. This difference has motivated research on new protocol suites specifically for wireless networks. While the error characteristics of wired networks have been well documented, less experimental data is available for wireless LANs. ...
Optimal and Sub-Optimal Maximum A Posteriori Algorithms Suitable for Turbo Decoding
ETT, 1997
Cited by 155 (26 self)
For estimating the states or outputs of a Markov process, the symbol-by-symbol maximum a posteriori (MAP) algorithm is optimal. However, this algorithm, even in its recursive form, poses technical difficulties because of numerical representation problems, the necessity of nonlinear functions, and a high number of additions and multiplications. MAP-like algorithms operating in the logarithmic domain presented in the past solve the numerical problem and reduce the computational complexity, but are suboptimal, especially at low SNR (a common example is the Max-Log-MAP, because of its use of the max function). A further simplification yields the soft-output Viterbi algorithm (SOVA). In this paper, we present a Log-MAP algorithm that avoids the approximations in the Max-Log-MAP algorithm and hence is equivalent to the true MAP, but without its major disadvantages. We compare the (Log-)MAP, Max-Log-MAP, and SOVA from a theoretical point of view to illuminate their commonalities and differences. As a practical example forming the basis for simulations, we consider Turbo decoding, where recursive systematic convolutional component codes are decoded with the three algorithms, and we also demonstrate the practical suitability of the Log-MAP by including quantization effects. The SOVA is, at 10 ...
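The gap between Log-MAP and Max-Log-MAP comes down to the Jacobian logarithm ("max-star") versus a plain max; a small sketch with arbitrary values:

```python
import math

# Exact identity used by Log-MAP:
#   ln(e^a + e^b) = max(a, b) + ln(1 + e^(-|a - b|))
# Max-Log-MAP drops the correction term, keeping only max(a, b).
# The correction depends only on |a - b|, so it is typically tabulated,
# which is what keeps Log-MAP's complexity close to Max-Log-MAP's.

def max_star(a, b):
    """Jacobian logarithm: exact ln(e^a + e^b)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """Max-Log-MAP approximation."""
    return max(a, b)

a, b = 1.2, 0.9
exact = math.log(math.exp(a) + math.exp(b))
print(exact, max_star(a, b), max_log(a, b))
```

The approximation error is largest when a and b are close (up to ln 2), which is exactly the low-SNR regime where the abstract notes Max-Log-MAP becomes suboptimal.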
Multiresolution Markov Models for Signal and Image Processing
Proceedings of the IEEE, 2002
Cited by 153 (17 self)
This paper reviews a significant component of the rich field of statistical multiresolution (MR) modeling and processing. These MR methods have found application in, and permeated the literature of, a widely scattered set of disciplines, and one of our principal objectives is to present a single, coherent picture of this framework. A second goal is to describe how this topic fits into the even larger field of MR methods and concepts, in particular making ties to topics such as wavelets and multigrid methods. A third is to provide several alternate viewpoints for this body of work, as the methods and concepts we describe intersect with a number of other fields. The principal focus of our presentation is the class of MR Markov processes defined on pyramidally organized trees. The attractiveness of these models stems from both the very efficient algorithms they admit and their expressive power and broad applicability. We show how a variety of methods and models relate to this framework, including models for self-similar and 1/f processes. We also illustrate how these methods have been used in practice. We discuss the construction of MR models on trees and show how questions that arise in this context make contact with wavelets, state-space modeling of time series, system and parameter identification, and hidden ...
A Robust System for Natural Spoken Dialogue
Association for Computational Linguistics, 1996
Cited by 145 (12 self)
This paper describes a system that leads us to believe in the feasibility of constructing natural spoken dialogue systems in task-oriented domains. It specifically addresses the issue of robust interpretation of speech in the presence of recognition errors. Robustness is achieved by a combination of statistical error post-correction, syntactically and semantically driven robust parsing, and extensive use of the dialogue context. We present an evaluation of the system using time-to-completion and the quality of the final solution, which suggests that most native speakers of English can use the system successfully with virtually no training.