Results 1-10 of 1,519,172
On the optimality of the simple Bayesian classifier under zero-one loss
 Machine Learning
, 1997
"... The simple Bayesian classifier is known to be optimal when attributes are independent given the class, but the question of whether other sufficient conditions for its optimality exist has so far not been explored. Empirical results showing that it performs surprisingly well in many domains containin ..."
Cited by 802 (26 self)
loss (misclassification rate) even when this assumption is violated by a wide margin. The region of quadratic-loss optimality of the Bayesian classifier is in fact a second-order infinitesimal fraction of the region of zero-one optimality. This implies that the Bayesian classifier has a much greater
The Sybil Attack
, 2002
"... Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby unde ..."
Cited by 1490 (1 self)
Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby
Robust Principal Component Analysis?
, 2009
"... This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse co ..."
Cited by 542 (26 self)
since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving
The strength of weak learnability
 Machine Learning
, 1990
"... This paper addresses the problem of improving the accuracy of a hypothesis output by a learning algorithm in the distribution-free (PAC) learning model. A concept class is learnable (or strongly learnable) if, given access to a source of examples of the unknown concept, the learner with high prob ..."
Cited by 852 (24 self)
probability is able to output a hypothesis that is correct on all but an arbitrarily small fraction of the instances. The concept class is weakly learnable if the learner can produce a hypothesis that performs only slightly better than random guessing. In this paper, it is shown that these two notions
A learning algorithm for Boltzmann machines
 Cognitive Science
, 1985
"... The computational power of massively parallel networks of simple processing elements resides in the communication bandwidth provided by the hardware connections between elements. These connections can allow a significant fraction of the knowledge of the system to be applied to an instance of a probl ..."
Cited by 573 (13 self)
The computational power of massively parallel networks of simple processing elements resides in the communication bandwidth provided by the hardware connections between elements. These connections can allow a significant fraction of the knowledge of the system to be applied to an instance of a
CURE: An Efficient Clustering Algorithm for Large Data sets
 Proceedings of the ACM SIGMOD Conference
, 1998
"... Clustering, in data mining, is useful for discovering groups and identifying interesting distributions in the underlying data. Traditional clustering algorithms either favor clusters with spherical shapes and similar sizes, or are very fragile in the presence of outliers. We propose a new clustering ..."
Cited by 709 (5 self)
and then shrinking them toward the center of the cluster by a specified fraction. Having more than one representative point per cluster allows CURE to adjust well to the geometry of non-spherical shapes, and the shrinking helps to dampen the effects of outliers. To handle large databases, CURE employs a combination
Least squares quantization in PCM
 IEEE Transactions on Information Theory
, 1982
"... It has long been realized that in pulse-code modulation (PCM), with a given ensemble of signals to handle, the quantum values should be spaced more closely in the voltage regions where the signal amplitude is more likely to fall. It has been shown by Panter and Dite that, in the limit as th ..."
Cited by 1342 (0 self)
as the number of quanta becomes infinite, the asymptotic fractional density of quanta per unit voltage should vary as the one-third power of the probability density per unit voltage of signal amplitudes. In this paper the corresponding result for any finite number of quanta is derived; that is, necessary
Decoding by Linear Programming
, 2004
"... This paper considers the classical error correcting problem which is frequently discussed in coding theory. We wish to recover an input vector f ∈ R^n from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to rec ..."
Cited by 1371 (16 self)
for some ρ > 0. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant
Econometric methods for fractional response variables with an application to 401(k) plan participation rates
, 1996
"... We develop attractive functional forms and simple quasi-likelihood estimation methods for regression models with a fractional dependent variable. Compared with log-odds type procedures, there is no difficulty in recovering the regression function for the fractional variable, and there is no need to ..."
Cited by 442 (8 self)
We develop attractive functional forms and simple quasi-likelihood estimation methods for regression models with a fractional dependent variable. Compared with log-odds type procedures, there is no difficulty in recovering the regression function for the fractional variable, and there is no need
Integration of trade and disintegration of production in the global economy
 Journal of Economic Perspectives
, 1998
"... The last few decades have seen a spectacular integration of the global economy through trade. The rising integration of world markets has brought with it a disintegration of the production process, however, as manufacturing or services activities done abroad are combined with those performed at home ..."
Cited by 482 (7 self)
seemingly small fraction of U.S. GDP. This is not surprising in view of the fact that large economies trade less with others, and more internally. But the modest share of trade in total national income hides the fact that merchandise trade as a share of merchandise value-added is quite high for the U