Results 1–10 of 2,127,364
The many faces of Publish/Subscribe
2003
"... This paper factors out the common denominator underlying these variants: full decoupling of the communicating entities in time, space, and synchronization. We use these three decoupling dimensions to better identify commonalities and divergences with traditional interaction paradigms. The many v ..."
Cited by 727 (23 self)
An introduction to variational methods for graphical models
To appear in: M. I. Jordan (Ed.), Learning in Graphical Models
Graphical models, exponential families, and variational inference
2008
"... The formalism of probabilistic graphical models provides a unifying framework for capturing complex dependencies among random variables, and building large-scale multivariate statistical models. Graphical models have become a focus of research in many statistical, computational and mathematical fiel ..."
Cited by 800 (26 self)
From Few to many: Illumination cone models for face recognition under variable lighting and pose
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001
"... We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a smal ..."
Cited by 747 (12 self)
Normalization for cDNA microarray data: a robust composite method addressing single and multiple slide systematic variation
2002
"... There are many sources of systematic variation in cDNA microarray experiments which affect the measured gene expression levels (e.g. differences in labeling efficiency between the two fluorescent dyes). The term normalization refers to the process of removing such variation. A constant adjustment is ..."
Cited by 699 (9 self)
Convex Analysis
1970
"... In this book we aim to present, in a unified framework, a broad spectrum of mathematical theory that has grown in connection with the study of problems of optimization, equilibrium, control, and stability of linear and nonlinear systems. The title Variational Analysis reflects this breadth. For a lo ..."
Cited by 5350 (67 self)
Divergence measures based on the Shannon entropy
IEEE Transactions on Information Theory, 1991
"... A new class of information-theoretic divergence measures based on the Shannon entropy is introduced. Unlike the well-known Kullback divergences, the new measures do not require the condition of absolute continuity to be satisfied by the probability distributions involved. More importantly, ..."
Cited by 657 (0 self)
"... their close relationship with the variational distance and the probability of misclassification error are established in terms of bounds. These bounds are crucial in many applications of divergence measures. The new measures are also well characterized by the properties of nonnegativity, finiteness ..."
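As an aside on the snippet above: one widely used entropy-based divergence with exactly this property (finite even without absolute continuity between the two distributions) is the Jensen–Shannon divergence. The sketch below is an illustration of that idea, not code from the cited paper:

```python
import numpy as np

def kl(p, q):
    # KL divergence in bits; becomes infinite when q has a zero
    # where p does not (absolute continuity fails).
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

def js(p, q):
    # Jensen-Shannon divergence: average KL to the midpoint mixture.
    # Always finite, since m > 0 wherever p or q is positive,
    # and bounded by 1 bit.
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.5, 0.5, 0.0])
q = np.array([0.0, 0.5, 0.5])
# kl(p, q) would be infinite here (q[0] == 0 while p[0] > 0),
# but the Jensen-Shannon divergence stays finite:
print(js(p, q))  # → 0.5 bits
```

Note that `js(p, q)` never evaluates a zero denominator, because the mixture `m` is positive wherever either input is.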
Orthonormal bases of compactly supported wavelets
1993
"... Several variations are given on the construction of orthonormal bases of wavelets with compact support. They have, respectively, more symmetry, more regularity, or more vanishing moments for the scaling function than the examples constructed in Daubechies [Comm. Pure Appl. Math., 41 (1988), pp. 90 ..."
Cited by 2182 (27 self)
Why Do Some Countries Produce So Much More Output Per Worker Than Others?
1998
"... Output per worker varies enormously across countries. Why? On an accounting basis, our analysis shows that differences in physical capital and educational attainment can only partially explain the variation in output per worker — we find a large amount of variation in the level of the Solow residual ..."
Cited by 2363 (22 self)
A comparison of event models for Naive Bayes text classification
1998
"... Recent work in text classification has used two different first-order probabilistic models for classification, both of which make the naive Bayes assumption. Some use a multivariate Bernoulli model, that is, a Bayesian Network with no dependencies between words and binary word features (e.g. Larkey ..."
Cited by 1002 (27 self)
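The distinction this snippet draws between the two naive Bayes event models (multivariate Bernoulli vs. multinomial) can be sketched concretely. All names and probability values below are illustrative toy assumptions, not data from the paper:

```python
import numpy as np

# Toy vocabulary and one document represented as word counts.
vocab = ["ball", "game", "election", "vote"]
doc_counts = np.array([2, 1, 0, 0])  # word frequencies in the document

# Assumed per-class word probabilities (toy values, pre-smoothed).
p_word = {"sports":   np.array([0.4, 0.3, 0.2, 0.1]),
          "politics": np.array([0.1, 0.1, 0.4, 0.4])}

def log_multinomial(counts, p):
    # Multinomial event model: each word occurrence is an independent
    # draw, so word frequencies matter (the multinomial coefficient is
    # omitted since it is constant across classes for a fixed document).
    return np.sum(counts * np.log(p))

def log_bernoulli(counts, p):
    # Multivariate Bernoulli event model: only presence/absence of each
    # vocabulary word matters, and absent words are penalized too.
    present = (counts > 0).astype(float)
    return np.sum(present * np.log(p) + (1 - present) * np.log(1 - p))

for cls, p in p_word.items():
    print(cls, log_multinomial(doc_counts, p), log_bernoulli(doc_counts, p))
```

Both scores favor the "sports" class for this document, but they diverge in general: the multinomial model rewards repeated occurrences, while the Bernoulli model charges every vocabulary word, present or absent.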