Results 1 - 10 of 7,326
The theory and practice of corporate finance: Evidence from the field
Journal of Financial Economics, 2001
"... We survey 392 CFOs about the cost of capital, capital budgeting, and capital structure. Large firms rely heavily on present value techniques and the capital asset pricing model, while small firms are relatively likely to use the payback criterion. We find that a surprising number of firms use their ..."
Cited by 725 (23 self)
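The techniques this survey asks about are easy to make concrete. Below is a minimal sketch, not from the paper, contrasting the NPV rule with the payback criterion on a hypothetical project; the cash flows and the 10% cost of capital are made-up illustration values.

```python
# Illustrative sketch (not from the paper): comparing the NPV rule with the
# payback criterion on a hypothetical project's cash flows.

def npv(rate, cash_flows):
    """Net present value of cash_flows[0..T], where cash_flows[0] is the
    initial outlay (negative) at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """Number of periods until cumulative cash flow turns non-negative,
    or None if it never does (ignores the time value of money)."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return t
    return None

if __name__ == "__main__":
    project = [-1000.0, 300.0, 300.0, 300.0, 300.0, 300.0]  # hypothetical
    print("NPV at 10% cost of capital:", round(npv(0.10, project), 2))
    print("Payback period (years):", payback_period(project))
```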
The space complexity of approximating the frequency moments
Journal of Computer and System Sciences, 1996
"... The frequency moments of a sequence containing mi elements of type i, for 1 ≤ i ≤ n, are the numbers Fk = �n i=1 mki. We consider the space complexity of randomized algorithms that approximate the numbers Fk, when the elements of the sequence are given one by one and cannot be stored. Surprisingly, ..."
Cited by 845 (12 self)
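The quantity being approximated is straightforward to compute exactly when space is no concern. A minimal sketch, not from the paper, of the exact computation; it keeps a full count table and therefore uses space proportional to the number of distinct elements, which is the cost the streaming algorithms avoid.

```python
# Illustrative sketch (not from the paper): exact frequency moments F_k of a
# stream, computed with a full count table.
from collections import Counter

def frequency_moment(stream, k):
    """F_k = sum over element types i of m_i**k, where m_i is the number of
    occurrences of type i in the stream."""
    counts = Counter(stream)          # O(number of distinct elements) space
    return sum(m ** k for m in counts.values())

if __name__ == "__main__":
    stream = [1, 2, 2, 3, 3, 3]
    print(frequency_moment(stream, 0))  # F_0 = 3 distinct elements
    print(frequency_moment(stream, 1))  # F_1 = 6, the stream length
    print(frequency_moment(stream, 2))  # F_2 = 1 + 4 + 9 = 14
```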
Boosting the margin: A new explanation for the effectiveness of voting methods
In Proceedings of the International Conference on Machine Learning, 1997
"... One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this ..."
Cited by 897 (52 self)
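The margin in question is the normalized weighted vote in favor of the correct label. A minimal sketch, not from the paper, for a binary voting classifier; the vote weights and base predictions in the example are made up.

```python
# Illustrative sketch (not from the paper): the normalized margin of a
# weighted-vote (boosted) binary classifier on one example. Labels and base
# classifier outputs are in {-1, +1}; alphas are the vote weights.
def voting_margin(y, base_predictions, alphas):
    """margin = y * (sum_t alpha_t * h_t(x)) / (sum_t alpha_t), in [-1, 1];
    positive iff the weighted vote classifies the example correctly, and
    larger values mean a more confident correct vote."""
    total = sum(alphas)
    weighted_vote = sum(a * h for a, h in zip(alphas, base_predictions))
    return y * weighted_vote / total

if __name__ == "__main__":
    # Hypothetical example: three base classifiers, two vote +1, one votes -1.
    print(voting_margin(y=+1, base_predictions=[+1, +1, -1],
                        alphas=[0.5, 0.3, 0.2]))  # 0.6
```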
Compressive sampling
2006
"... Conventional wisdom and common practice in acquisition and reconstruction of images from frequency data follow the basic principle of the Nyquist density sampling theory. This principle states that to reconstruct an image, the number of Fourier samples we need to acquire must match the desired res ..."
Cited by 1441 (15 self)
resolution of the image, i.e. the number of pixels in the image. This paper surveys an emerging theory which goes by the name of “compressive sampling” or “compressed sensing,” and which says that this conventional wisdom is inaccurate. Perhaps surprisingly, it is possible to reconstruct images or signals
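A minimal sketch, not from the paper, of the sparse-recovery idea: reconstructing a signal from far fewer random measurements than its length. It uses greedy orthogonal matching pursuit rather than the l1-minimization programs the compressed-sensing literature analyzes, and the problem sizes below are arbitrary.

```python
# Illustrative sketch (not from the paper): recover a sparse signal from a
# small number of random linear measurements using greedy orthogonal
# matching pursuit (a substitute for the l1 programs discussed in the paper).
import numpy as np

def omp(A, y, sparsity):
    """Greedy recovery of x with `sparsity` nonzeros from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        correlations = np.abs(A.T @ residual)
        correlations[support] = 0          # do not pick the same column twice
        support.append(int(np.argmax(correlations)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 5                    # signal length, measurements, sparsity
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    y = A @ x
    print("recovery error:", np.linalg.norm(omp(A, y, k) - x))
```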
Error and attack tolerance of complex networks
2000
"... Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network [1]. C ..."
Cited by 1013 (7 self)
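A minimal sketch, not from the paper, contrasting random failures with targeted attacks on a scale-free graph, tracking how much of the largest connected component survives; the graph model, size, and removal fraction are arbitrary choices.

```python
# Illustrative sketch (not from the paper): random failures vs. targeted
# attacks on a scale-free graph, measured by the size of the largest
# connected component after node removal.
import networkx as nx
import random

def largest_component_fraction(G):
    if G.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

def remove_nodes(G, fraction, targeted):
    H = G.copy()
    n_remove = int(fraction * H.number_of_nodes())
    if targeted:   # attack: delete the highest-degree hubs first
        victims = sorted(H.degree, key=lambda kv: kv[1], reverse=True)[:n_remove]
        victims = [node for node, _ in victims]
    else:          # error: delete nodes uniformly at random
        victims = random.sample(list(H.nodes), n_remove)
    H.remove_nodes_from(victims)
    return H

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(2000, 2, seed=0)
    for targeted in (False, True):
        H = remove_nodes(G, 0.05, targeted)
        print("targeted" if targeted else "random  ",
              round(largest_component_fraction(H), 3))
```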
On Spectral Clustering: Analysis and an algorithm
Advances in Neural Information Processing Systems, 2001
"... Despite many empirical successes of spectral clustering methods  algorithms that cluster points using eigenvectors of matrices derived from the distances between the points  there are several unresolved issues. First, there is a wide variety of algorithms that use the eigenvectors in slightly ..."
Cited by 1713 (13 self)
the algorithm, and give conditions under which it can be expected to do well. We also show surprisingly good experimental results on a number of challenging clustering problems.
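A minimal sketch, not taken from the paper, of a normalized spectral clustering recipe of the kind analyzed here: Gaussian affinities, normalized Laplacian, top-k eigenvectors, row normalization, then k-means on the rows. The bandwidth sigma and the toy two-blob data are arbitrary.

```python
# Illustrative sketch (not from the paper): normalized spectral clustering
# with Gaussian affinities, followed by k-means on the embedded rows.
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(X, k, sigma=1.0):
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-sq_dists / (2 * sigma ** 2))
    np.fill_diagonal(A, 0.0)                          # no self-affinity
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = D_inv_sqrt @ A @ D_inv_sqrt                   # normalized affinity
    eigvals, eigvecs = np.linalg.eigh(L)
    V = eigvecs[:, -k:]                               # k largest eigenvectors
    V = V / np.linalg.norm(V, axis=1, keepdims=True)  # normalize rows
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(V)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(loc, 0.3, size=(50, 2)) for loc in (0.0, 5.0)])
    print(spectral_clustering(X, k=2))
```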
Statistical phrase-based translation
2003
"... We propose a new phrase-based translation model and decoding algorithm that enables us to evaluate and compare several, previously proposed phrase-based translation models. Within our framework, we carry out a large number of experiments to understand better and explain why phrase-based models outpe ..."
Cited by 944 (11 self)
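A minimal sketch, not from the paper, of the core phrase-based idea: segment the source sentence into phrases found in a phrase table and score a translation by the product of phrase probabilities. The toy table, probabilities, and sentence are made up, and real decoders also handle reordering and language-model scores.

```python
# Illustrative sketch (not from the paper): monotone phrase-based decoding
# over a toy phrase table, choosing the highest-probability segmentation.
import math

PHRASE_TABLE = {  # source phrase -> list of (target phrase, probability)
    ("das",): [("the", 0.7), ("that", 0.3)],
    ("haus",): [("house", 0.9)],
    ("das", "haus"): [("the house", 0.8)],
    ("ist", "klein"): [("is small", 0.9)],
}

def best_translation(words):
    """Best log-probability translation covering `words` left to right."""
    if not words:
        return 0.0, []
    best = (-math.inf, None)
    for i in range(1, len(words) + 1):
        src = tuple(words[:i])
        for tgt, p in PHRASE_TABLE.get(src, []):
            rest_score, rest_phrases = best_translation(words[i:])
            score = math.log(p) + rest_score
            if score > best[0]:
                best = (score, [tgt] + rest_phrases)
    return best

if __name__ == "__main__":
    score, phrases = best_translation(["das", "haus", "ist", "klein"])
    print(" ".join(phrases), f"(log-prob {score:.3f})")
```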
Graphs over Time: Densification Laws, Shrinking Diameters and Possible Explanations
2005
"... How do real graphs evolve over time? What are “normal” growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network, or in a very small number of snapshots; these include hea ..."
Cited by 541 (48 self)
, and we observe some surprising phenomena. First, most of these graphs densify over time, with the number of edges growing superlinearly in the number of nodes. Second, the average distance between nodes often shrinks over time, in contrast to the conventional wisdom that such distance parameters should
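The densification law in question says the edge count grows superlinearly in the node count, roughly E(t) ∝ N(t)^a with a > 1. A minimal sketch, not from the paper, that estimates the exponent from log-log snapshots; the node and edge counts are made-up data.

```python
# Illustrative sketch (not from the paper): estimate the densification
# exponent a in E(t) ~ N(t)**a by least squares on log-log values.
import numpy as np

def densification_exponent(nodes, edges):
    """Slope of log(edges) vs. log(nodes); a > 1 means superlinear growth."""
    slope, _intercept = np.polyfit(np.log(nodes), np.log(edges), deg=1)
    return slope

if __name__ == "__main__":
    nodes = np.array([1_000, 2_000, 4_000, 8_000, 16_000])   # made-up snapshots
    edges = np.array([3_000, 8_100, 21_900, 59_000, 160_000])  # ~2.7x per doubling
    print(round(densification_exponent(nodes, edges), 2))     # about 1.43
```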
On the optimality of the simple Bayesian classifier under zero-one loss
Machine Learning, 1997
"... The simple Bayesian classifier is known to be optimal when attributes are independent given the class, but the question of whether other sufficient conditions for its optimality exist has so far not been explored. Empirical results showing that it performs surprisingly well in many domains containin ..."
Cited by 818 (27 self)
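The independence assumption behind the classifier is easiest to see in code. A minimal sketch, not from the paper, of a categorical naive Bayes classifier with Laplace smoothing; the toy weather data is made up.

```python
# Illustrative sketch (not from the paper): a simple (naive) Bayes classifier
# with categorical features, predicting the class that maximizes
# P(class) * prod_j P(feature_j | class) under the independence assumption.
import math
from collections import Counter, defaultdict

class SimpleBayes:
    def fit(self, X, y):
        self.class_counts = Counter(y)
        self.feature_counts = defaultdict(Counter)  # (class, j) -> value counts
        for features, label in zip(X, y):
            for j, value in enumerate(features):
                self.feature_counts[(label, j)][value] += 1
        return self

    def predict(self, features):
        def log_score(label):
            n = self.class_counts[label]
            score = math.log(n / sum(self.class_counts.values()))
            for j, value in enumerate(features):
                counts = self.feature_counts[(label, j)]
                # Laplace-smoothed conditional probability
                score += math.log((counts[value] + 1) / (n + len(counts)))
            return score
        return max(self.class_counts, key=log_score)

if __name__ == "__main__":
    X = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "cool")]
    y = ["no", "no", "yes", "yes"]
    print(SimpleBayes().fit(X, y).predict(("rain", "hot")))  # -> "yes" on this toy fit
```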
Loopy belief propagation for approximate inference: An empirical study
In Proceedings of Uncertainty in AI, 1999
"... Abstract Recently, researchers have demonstrated that "loopy belief propagation" the use of Pearl's polytree algorithm in a Bayesian network with loops can perform well in the context of errorcorrecting codes. The most dramatic instance of this is the near Shannonlimit performanc ..."
Cited by 676 (15 self)
propagation converges, it gives a surprisingly good approximation to the correct marginals. Since the distinction between convergence and oscillation is easy to make after a small number of iterations, this may suggest a way of checking whether loopy propagation is appropriate for a given problem.
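A minimal sketch, not from the paper, of what "loopy" propagation means in practice: sum-product message passing run on a small pairwise model that contains a cycle, iterating until the messages stop changing. The 3-node cycle, binary variables, and potentials are arbitrary, and on a loopy graph the resulting beliefs are only approximations.

```python
# Illustrative sketch (not from the paper): sum-product belief propagation
# on a 3-node cycle, iterated until the messages converge.
import numpy as np

edges = [(0, 1), (1, 2), (2, 0)]                      # a single cycle
unary = {i: np.array([1.0, 2.0]) for i in range(3)}   # node potentials
pair = np.array([[2.0, 1.0], [1.0, 2.0]])             # same coupling on every edge

# messages[(i, j)] is the message from node i to node j (a length-2 vector)
messages = {(i, j): np.ones(2) for a, b in edges for (i, j) in [(a, b), (b, a)]}

def neighbors(i):
    return [b for a, b in edges if a == i] + [a for a, b in edges if b == i]

for iteration in range(100):
    new = {}
    for (i, j) in messages:
        incoming = np.prod([messages[(k, i)] for k in neighbors(i) if k != j], axis=0)
        msg = pair.T @ (unary[i] * incoming)           # sum over the sender's states
        new[(i, j)] = msg / msg.sum()                  # normalize for stability
    delta = max(np.abs(new[k] - messages[k]).max() for k in messages)
    messages = new
    if delta < 1e-8:                                   # converged
        break

for i in range(3):
    belief = unary[i] * np.prod([messages[(k, i)] for k in neighbors(i)], axis=0)
    print("node", i, "belief:", belief / belief.sum(), f"(after {iteration + 1} sweeps)")
```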