Results 1–10 of 26,846
Fitting a mixture model by expectation maximization to discover motifs in biopolymers.
 Proc Int Conf Intell Syst Mol Biol
, 1994
"... Abstract The algorithm described in this paper discovers one or more motifs in a collection of DNA or protein sequences by using the technique of expectation maximization to fit a two-component finite mixture model to the set of sequences. Multiple motifs are found by fitting a mixture model to th ..."
Cited by 947 (5 self)
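The two-component mixture idea in the entry above can be illustrated with a minimal EM loop. This is not the paper's motif model (which mixes a motif model with a background model over sequence positions): the sketch below fits a 1-D mixture of two unit-variance Gaussians, and every name and number in it is our own.

```python
import math
import random

def norm_pdf(x, mu):
    # Density of N(mu, 1) at x.
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def em_two_component(xs, mu0, mu1, pi=0.5, iters=100):
    """EM for a two-component, unit-variance Gaussian mixture."""
    for _ in range(iters):
        # E step: posterior responsibility of component 1 for each point.
        resp = []
        for x in xs:
            p1 = pi * norm_pdf(x, mu1)
            p0 = (1.0 - pi) * norm_pdf(x, mu0)
            resp.append(p1 / (p0 + p1))
        # M step: re-estimate the mixing weight and both component means.
        n1 = sum(resp)
        pi = n1 / len(xs)
        mu1 = sum(r * x for r, x in zip(resp, xs)) / n1
        mu0 = sum((1.0 - r) * x for r, x in zip(resp, xs)) / (len(xs) - n1)
    return pi, mu0, mu1

# Illustrative data: two well-separated clusters of 200 points each.
rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(200)] + \
       [rng.gauss(5.0, 1.0) for _ in range(200)]
pi, mu0, mu1 = em_two_component(data, mu0=-1.0, mu1=6.0)
```

The same E/M alternation carries over to the paper's setting; only the component likelihoods change.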
Maximizing the Spread of Influence Through a Social Network
 In KDD
, 2003
"... Models for the processes by which ideas and influence propagate through a social network have been studied in a number of domains, including the diffusion of medical and technological innovations, the sudden and widespread adoption of various strategies in game-theoretic settings, and the effects of ..."
Cited by 990 (7 self)
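The central algorithmic result of the Kempe, Kleinberg, and Tardos paper is that greedy hill-climbing on seed sets achieves a (1 − 1/e) approximation for influence maximization. A toy sketch under the independent cascade model; the graph, propagation probability, trial count, and all function names here are our own illustration, not the paper's code:

```python
import random

def ic_spread(graph, seeds, p=0.2, trials=200):
    # Monte Carlo estimate of expected activated-set size under the
    # independent cascade model: each newly active node gets one chance
    # to activate each currently inactive out-neighbor with probability p.
    rng = random.Random(0)  # fixed seed so the estimate is repeatable
    total = 0
    for _ in range(trials):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

def greedy_seeds(graph, k, p=0.2):
    # Greedy hill-climbing: repeatedly add the node with the largest
    # marginal gain in estimated spread.
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    seeds = []
    for _ in range(k):
        best = max(nodes - set(seeds),
                   key=lambda v: ic_spread(graph, seeds + [v], p))
        seeds.append(best)
    return seeds

# Demo: a directed star in which node 0 points at nodes 1..9.
graph = {0: [1, 2, 3, 4, 5, 6, 7, 8, 9]}
best = greedy_seeds(graph, k=1)
```

The approximation guarantee in the paper comes from the spread function being monotone and submodular, which is exactly what makes this greedy loop effective.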
Simultaneous Multithreading: Maximizing On-Chip Parallelism
, 1995
"... This paper examines simultaneous multithreading, a technique permitting several independent threads to issue instructions to a superscalar’s multiple functional units in a single cycle. We present several models of simultaneous multithreading and compare them with alternative organizations: a wide s ..."
Cited by 823 (48 self)
Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm
 IEEE TRANSACTIONS ON MEDICAL IMAGING
, 2001
"... The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain magnetic resonance (MR) images because of its simple mathematical form and the piecewise constant nature of ideal brain MR images. However, being a histogram-based model, the FM has an intrinsic limi ..."
Cited by 639 (15 self)
On the Maximal Models of Horn Formulas
, 1998
"... We investigate the problem of generating the maximal models of Horn formulas. Based on the Resolution Theorem of Kavvadias and Stavropoulos [Computer Technology Institute, CTI TR 98.2.4, February 1998], the generation of the maximal models of a Horn formula with constant number of variables with pos ..."
Cited by 2 (1 self)
Maximum entropy Markov models for information extraction and segmentation
, 2000
"... Hidden Markov models (HMMs) are a powerful probabilistic tool for modeling sequential data, and have been applied with success to many text-related tasks, such as part-of-speech tagging, text segmentation and information extraction. In these cases, the observations are usually modeled as multinomial ..."
Cited by 561 (18 self)
"... as multinomial distributions over a discrete vocabulary, and the HMM parameters are set to maximize the likelihood of the observations. This paper presents a new Markovian sequence model, closely related to HMMs, that allows observations to be represented as arbitrary overlapping features (such as word ..."
Mixed MNL models for discrete response
 Journal of Applied Econometrics 15: 447–470
, 2000
"... This paper considers mixed, or random coefficients, multinomial logit (MMNL) models for discrete response, and establishes the following results. Under mild regularity conditions, any discrete choice model derived from random utility maximization has choice probabilities that can be approximated as ..."
Cited by 487 (15 self)
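The mixed-logit approximation result in the entry above is used in practice via simulation: choice probabilities are the conditional logit formula averaged over draws of the random coefficients. A minimal sketch with one attribute and a normally distributed coefficient; the attribute values, coefficient distribution, and draw count are all illustrative, not from the paper:

```python
import math
import random

def mmnl_probs(x, draws=2000, mean=1.0, sd=0.5, seed=0):
    # Simulated mixed multinomial logit: average the conditional logit
    # probabilities over draws of a random coefficient b ~ N(mean, sd^2).
    # x[j] is the single attribute of alternative j.
    rng = random.Random(seed)
    acc = [0.0] * len(x)
    for _ in range(draws):
        b = rng.gauss(mean, sd)
        expu = [math.exp(b * xj) for xj in x]
        s = sum(expu)
        for j, e in enumerate(expu):
            acc[j] += e / s
    return [a / draws for a in acc]

# Three alternatives whose attribute increases from left to right.
probs = mmnl_probs([0.0, 1.0, 2.0])
```

With a positive mean coefficient, alternatives with larger attribute values receive higher simulated choice probabilities, and the probabilities still sum to one draw by draw.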
The genetical evolution of social behaviour. I
 J. Theor. Biol.
, 1964
"... A genetical mathematical model is described which allows for interactions between relatives on one another's fitness. Making use of Wright's Coefficient of Relationship as the measure of the proportion of replica genes in a relative, a quantity is found which incorporates the maximizing p ..."
Cited by 932 (2 self)
Hierarchical mixtures of experts and the EM algorithm
, 1993
"... We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM’s). Learning is treated as a maximum likelihood ..."
Cited by 885 (21 self)
A View of the EM Algorithm that Justifies Incremental, Sparse, and Other Variants
 Learning in Graphical Models
, 1998
"... The EM algorithm performs maximum likelihood estimation for data in which some variables are unobserved. We present a function that resembles negative free energy and show that the M step maximizes this function with respect to the model parameters and the E step maximizes it with respect to the d ..."
Cited by 993 (18 self)
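The function mentioned in this entry is Neal and Hinton's variational free energy. In the standard form (a sketch; q is any distribution over the unobserved variables z, x the observed data, H the entropy):

```latex
F(q, \theta)
  = \mathbb{E}_{q(z)}\!\left[\log p(x, z \mid \theta)\right] + H(q)
  = \log p(x \mid \theta) - \mathrm{KL}\!\left(q(z) \,\|\, p(z \mid x, \theta)\right).
```

The E step maximizes F over q with θ fixed (attained at q(z) = p(z | x, θ), where the KL term vanishes), and the M step maximizes F over θ with q fixed. Because any step that increases F cannot decrease the likelihood, this view licenses the incremental and sparse variants of the title, which update only part of q or θ at a time.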