Results 1–10 of 2,074,515
Greed is Good: Algorithmic Results for Sparse Approximation
, 2004
"... This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries. It provides a sufficient condition under which both OMP and Donoho’s basis pursuit (BP) paradigm can recover the optimal representa ..."
Abstract

Cited by 916 (8 self)
 Add to MetaCart
This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries. It provides a sufficient condition under which both OMP and Donoho’s basis pursuit (BP) paradigm can recover the optimal
Square roots with many good approximants
, 2005
"... Let d be a positive integer that is not a perfect square. It was proved by Mikusinski in 1954 that if the period s(d) of the continued fraction expansion of sqrt(d) satisfies s(d)=1 or 2, then all Newton's approximants R_n = (p_n/q_n+dq_n/p_n)/2 are convergents of sqrt(d). If R_n is a convergen ..."
Abstract
 Add to MetaCart
convergent of sqrt(d), then we say that R n is a good approximant. Let b(d) denote the number of good approximants among the numbers R n , n = 0, 1, . . . , s(d)1. In this paper we show that the quantity b(d) can be arbitrary large. Moreover, we construct families of examples which show that for every
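Everything in the abstract above is computable exactly with rational arithmetic. The following is a small illustrative sketch (function names are my own; it assumes the standard facts that the continued fraction of sqrt(d) is periodic and that the period ends when the partial quotient reaches 2·a0):

```python
from fractions import Fraction
from math import isqrt

def sqrt_cf_period(d):
    """One full period [a0; a1, ..., a_s] of the continued fraction of sqrt(d)."""
    a0 = isqrt(d)
    terms, m, c, a = [a0], 0, 1, a0
    while a != 2 * a0:  # the period ends at partial quotient 2*a0
        m = c * a - m
        c = (d - m * m) // c
        a = (a0 + m) // c
        terms.append(a)
    return terms  # length s(d) + 1

def convergents(terms, count):
    """First `count` convergents p_n/q_n, cycling through the periodic part."""
    s = len(terms) - 1
    p0, q0, p1, q1 = 1, 0, terms[0], 1
    out, n = [Fraction(p1, q1)], 1
    while len(out) < count:
        a = terms[(n - 1) % s + 1]  # partial quotients repeat with period s
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        out.append(Fraction(p1, q1))
        n += 1
    return out

def good_approximants(d):
    """b(d): how many Newton approximants R_n, n = 0..s(d)-1, are convergents."""
    terms = sqrt_cf_period(d)
    s = len(terms) - 1
    cs = convergents(terms, 2 * s + 2)  # enough convergents to compare against
    count = 0
    for n in range(s):
        p, q = cs[n].numerator, cs[n].denominator
        r = (Fraction(p, q) + Fraction(d * q, p)) / 2  # Newton step on x^2 - d
        if r in set(cs):
            count += 1
    return count
```

For example, d = 2 and d = 3 have periods 1 and 2, so by Mikusinski's result every R_n is a good approximant, while for d = 7 (period 4) only some are.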
Loopy Belief Propagation for Approximate Inference: An Empirical Study
 In Proceedings of Uncertainty in AI
, 1999
"... Recently, researchers have demonstrated that "loopy belief propagation"  the use of Pearl's polytree algorithm in a Bayesian network with loops  can perform well in the context of errorcorrecting codes. The most dramatic instance of this is the near Shannonlimit performa ..."
Abstract

Cited by 680 (18 self)
 Add to MetaCart
limit performance of "Turbo Codes"  codes whose decoding algorithm is equivalent to loopy belief propagation in a chainstructured Bayesian network. In this paper we ask: is there something special about the errorcorrecting code context, or does loopy propagation work as an approximate
On the Private Provision of Public Goods
 Journal of Public Economics
, 1986
"... We consider a general model of the noncooperative provision of a public good. Under very weak assumptions there will always exist a unique Nash equilibrium in our model. A small redistribution of wealth among the contributing consumers will not change the equilibrium amount of the public good. Howe ..."
Abstract

Cited by 546 (8 self)
 Add to MetaCart
We consider a general model of the noncooperative provision of a public good. Under very weak assumptions there will always exist a unique Nash equilibrium in our model. A small redistribution of wealth among the contributing consumers will not change the equilibrium amount of the public good
The space complexity of approximating the frequency moments
 JOURNAL OF COMPUTER AND SYSTEM SCIENCES
, 1996
"... The frequency moments of a sequence containing mi elements of type i, for 1 ≤ i ≤ n, are the numbers Fk = �n i=1 mki. We consider the space complexity of randomized algorithms that approximate the numbers Fk, when the elements of the sequence are given one by one and cannot be stored. Surprisingly, ..."
Abstract

Cited by 855 (12 self)
 Add to MetaCart
The frequency moments of a sequence containing mi elements of type i, for 1 ≤ i ≤ n, are the numbers Fk = �n i=1 mki. We consider the space complexity of randomized algorithms that approximate the numbers Fk, when the elements of the sequence are given one by one and cannot be stored. Surprisingly
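The definition F_k = Σ_i m_i^k can be made concrete with a short Python sketch (names are my own). The first function computes F_k exactly; the second is a simplified AMS-style F_2 estimator that stores a random ±1 sign per item type, whereas the paper's construction uses hash functions to get genuinely sublinear space:

```python
import random
from collections import Counter

def frequency_moment(stream, k):
    """Exact F_k = sum_i m_i^k, where m_i is the multiplicity of item i."""
    counts = Counter(stream)
    return sum(m ** k for m in counts.values())

def ams_f2_estimate(stream, trials=400, seed=0):
    """One-pass AMS-style estimator for F_2: with Z = sum of a random +/-1
    sign per item type over the stream, E[Z^2] = F_2. Averaging Z^2 over
    independent trials reduces the variance."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        signs, z = {}, 0
        for x in stream:
            if x not in signs:
                signs[x] = rng.choice((-1, 1))
            z += signs[x]
        estimates.append(z * z)
    return sum(estimates) / trials
```

On the stream a, b, a, c, a, b the multiplicities are 3, 2, 1, so F_0 = 3 (distinct items), F_1 = 6 (length), and F_2 = 9 + 4 + 1 = 14, which the estimator should approach.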
Cooperation and Punishment in Public Goods Experiments
 AMERICAN ECONOMIC REVIEW
, 2000
"... This paper provides evidence that free riders are heavily punished even if punishment is costly and does not provide any material benefits for the punisher. The more free riders negatively deviate from the group standard the more they are punished. As a consequence, the existence of an opportunity f ..."
Abstract

Cited by 485 (36 self)
 Add to MetaCart
This paper provides evidence that free riders are heavily punished even if punishment is costly and does not provide any material benefits for the punisher. The more free riders negatively deviate from the group standard the more they are punished. As a consequence, the existence of an opportunity for costly punishment causes a large increase in cooperation levels because potential free riders face a credible threat. We show, in particular, that in the presence of a costly punishment opportunity almost complete cooperation can be achieved and maintained although, under the standard assumptions of rationality and selfishness, there should be no cooperation at all. We also show that free riding causes strong negative emotions among cooperators. The intensity of these emotions is the stronger the more the free riders deviate from the group standard. Our results provide, therefore, support for the hypothesis that emotions are guarantors of credible threats.
A Threshold of ln n for Approximating Set Cover
 JOURNAL OF THE ACM
, 1998
"... Given a collection F of subsets of S = f1; : : : ; ng, set cover is the problem of selecting as few as possible subsets from F such that their union covers S, and max kcover is the problem of selecting k subsets from F such that their union has maximum cardinality. Both these problems are NPhar ..."
Abstract

Cited by 778 (5 self)
 Add to MetaCart
hard. We prove that (1 \Gamma o(1)) ln n is a threshold below which set cover cannot be approximated efficiently, unless NP has slightly superpolynomial time algorithms. This closes the gap (up to low order terms) between the ratio of approximation achievable by the greedy algorithm (which is (1 \Gamma
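The greedy algorithm this abstract refers to is simple to state: repeatedly take the subset covering the most still-uncovered elements, which yields roughly a ln n approximation ratio; Feige's result shows that ratio is essentially the best possible for any efficient algorithm. A hypothetical sketch (function name and example sets are my own):

```python
def greedy_set_cover(universe, subsets):
    """Greedy set cover: repeatedly pick the subset that covers the most
    still-uncovered elements, until the universe is covered."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            raise ValueError("subsets do not cover the universe")
        chosen.append(set(best))
        uncovered -= set(best)
    return chosen
```

For instance, covering {1, 2, 3, 4, 5} from {1,2,3}, {2,4}, {3,4}, {4,5} takes two greedy picks: {1,2,3} first, then {4,5}.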
Greedy Function Approximation: A Gradient Boosting Machine
 Annals of Statistics
, 2000
"... Function approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest{descent minimization. A general gradient{descent \boosting" paradigm is developed for additi ..."
Abstract

Cited by 951 (12 self)
 Add to MetaCart
Function approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest{descent minimization. A general gradient{descent \boosting" paradigm is developed
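As a rough illustration of the stagewise additive, steepest-descent view described above, here is a toy boosting loop for squared loss, where the negative gradient at each point is simply the residual; the stump learner and all names are illustrative sketches, not Friedman's implementation:

```python
def fit_stump(x, residuals):
    """Best single-split regression stump on 1-D inputs under squared loss."""
    best = None
    for split in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= split]
        right = [r for xi, r in zip(x, residuals) if xi > split]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, split, lmean, rmean)
    _, split, lmean, rmean = best
    return lambda xi: lmean if xi <= split else rmean

def gradient_boost(x, y, rounds=50, lr=0.1):
    """Stagewise additive modeling: each round fits a stump to the current
    residuals (the negative gradient of squared loss) and adds it, scaled
    by a learning rate -- steepest descent in function space."""
    base = sum(y) / len(y)
    stumps = []
    def predict(xi):
        return base + sum(lr * s(xi) for s in stumps)
    for _ in range(rounds):
        residuals = [yi - predict(xi) for xi, yi in zip(x, y)]
        stumps.append(fit_stump(x, residuals))
    return predict
```

With a step function as the target (y = 0, 0, 1, 1 on x = 0, 1, 2, 3), the residuals shrink geometrically and the boosted predictor converges to the data.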
A Guided Tour to Approximate String Matching
 ACM COMPUTING SURVEYS
, 1999
"... We survey the current techniques to cope with the problem of string matching allowing errors. This is becoming a more and more relevant issue for many fast growing areas such as information retrieval and computational biology. We focus on online searching and mostly on edit distance, explaining t ..."
Abstract

Cited by 584 (38 self)
 Add to MetaCart
We survey the current techniques to cope with the problem of string matching allowing errors. This is becoming a more and more relevant issue for many fast growing areas such as information retrieval and computational biology. We focus on online searching and mostly on edit distance, explaining the problem and its relevance, its statistical behavior, its history and current developments, and the central ideas of the algorithms and their complexities. We present a number of experiments to compare the performance of the different algorithms and show which are the best choices according to each case. We conclude with some future work directions and open problems.
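The edit distance this survey centers on can be computed with the classic O(|a|·|b|) dynamic program; this is a textbook sketch for reference, not code from the survey:

```python
def edit_distance(a, b):
    """Levenshtein distance: dp[i][j] = min edits to turn a[:i] into b[:j]."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i  # delete all of a[:i]
    for j in range(len(b) + 1):
        dp[0][j] = j  # insert all of b[:j]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # match / substitution
    return dp[len(a)][len(b)]
```

For example, "survey" is two edits from "surgery" (substitute v with g, insert an r).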