Results 11–20 of 4,887,681
Detection and Tracking of Point Features
 International Journal of Computer Vision
, 1991
"... The factorization method described in this series of reports requires an algorithm to track the motion of features in an image stream. Given the small interframe displacement made possible by the factorization approach, the best tracking method turns out to be the one proposed by Lucas and Kanade i ..."
Cited by 631 (2 self)
in 1981. The method defines the measure of match between fixed-size feature windows in the past and current frame as the sum of squared intensity differences over the windows. The displacement is then defined as the one that minimizes this sum. For small motions, a linearization of the image intensities
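The snippet above describes the sum-of-squared-differences match and its linearization. A minimal sketch of that idea, assuming numpy and two grayscale frames (the function name and window size are illustrative, not the paper's code):

```python
# Minimal sketch: the linearization reduces SSD minimization over a window
# to solving a 2x2 linear system built from image gradients.
import numpy as np

def track_window(prev, curr, y, x, half=7):
    """Estimate the (dy, dx) shift of a (2*half+1)^2 window between frames."""
    I = prev[y-half:y+half+1, x-half:x+half+1].astype(float)
    J = curr[y-half:y+half+1, x-half:x+half+1].astype(float)
    # Gradients of the previous-frame window (central differences).
    gy, gx = np.gradient(I)
    # Linearizing J(p + d) ~ J(p) + g . d turns the SSD minimization into
    # solving G d = e, with G the 2x2 gradient moment matrix.
    G = np.array([[np.sum(gy * gy), np.sum(gy * gx)],
                  [np.sum(gy * gx), np.sum(gx * gx)]])
    e = np.array([np.sum(gy * (I - J)), np.sum(gx * (I - J))])
    return np.linalg.solve(G, e)
```

For a one-pixel shift of a smooth image the recovered displacement is close to (1, 0); larger motions would need the iterative, coarse-to-fine refinements the literature describes.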
Stock Market Prices Do Not Follow Random Walks: Evidence from a Simple Specification Test
 REVIEW OF FINANCIAL STUDIES
, 1988
"... In this article we test the random walk hypothesis for weekly stock market returns by comparing variance estimators derived from data sampled at different frequencies. The random walk model is strongly rejected for the entire sample period (1962–1985) and for all subperiods for a variety of aggrega ..."
Cited by 512 (16 self)
of aggregate returns indexes and size-sorted portfolios. Although the rejections are due largely to the behavior of small stocks, they cannot be attributed completely to the effects of infrequent trading or time-varying volatilities. Moreover, the rejection of the random walk for weekly returns does
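The comparison of variance estimators at different sampling frequencies reduces to a variance ratio that is near one under a random walk. A minimal sketch, assuming numpy; the function name is illustrative and this omits the paper's heteroskedasticity-robust standard errors:

```python
# Minimal sketch: variance of q-period sums over q times the variance of
# 1-period returns; ~1 for uncorrelated returns, >1 under positive
# autocorrelation, <1 under mean reversion.
import numpy as np

def variance_ratio(returns, q):
    """VR(q) using overlapping q-period sums."""
    r = np.asarray(returns, dtype=float)
    q_sums = np.array([r[i:i + q].sum() for i in range(len(r) - q + 1)])
    return q_sums.var(ddof=1) / (q * r.var(ddof=1))
```

For i.i.d. returns the ratio hovers around 1; an AR(1) return series with positive persistence pushes it well above 1, which is the kind of departure the test detects.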
An algorithm for finding best matches in logarithmic expected time
 ACM Transactions on Mathematical Software
, 1977
"... An algorithm and data structure are presented for searching a file containing N records, each described by k real-valued keys, for the m closest matches or nearest neighbors to a given query record. The computation required to organize the file is proportional to kN log N. The expected number of recor ..."
Cited by 763 (2 self)
of records examined in each search is independent of the file size. The expected computation to perform each search is proportional to log N. Empirical evidence suggests that, except for very small files, this algorithm is considerably faster than other methods.
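The structure behind this result is the kd-tree: split on alternating coordinates, then prune any subtree whose splitting plane lies farther away than the best match found so far. A minimal sketch under those assumptions (names are illustrative, not the paper's code):

```python
# Minimal kd-tree sketch: nested (point, axis, left, right) tuples,
# nearest-neighbor search with plane-distance pruning.
import math

def build(points, depth=0):
    """Recursively build a kd-tree, splitting on the median of one axis."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid], axis,
            build(points[:mid], depth + 1),
            build(points[mid+1:], depth + 1))

def nearest(node, query, best=None):
    """Return the stored point closest to `query` in Euclidean distance."""
    if node is None:
        return best
    point, axis, left, right = node
    if best is None or math.dist(point, query) < math.dist(best, query):
        best = point
    near, far = (left, right) if query[axis] < point[axis] else (right, left)
    best = nearest(near, query, best)
    # Only descend the far side if the splitting plane is closer than `best`:
    # this pruning is what makes expected search cost logarithmic.
    if abs(query[axis] - point[axis]) < math.dist(best, query):
        best = nearest(far, query, best)
    return best
```

On well-distributed data most far subtrees are pruned, which matches the abstract's claim that the number of records examined per search is independent of file size.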
Panel Cointegration: Asymptotic and Finite Sample Properties of Pooled Time Series Tests, with an Application to the PPP Hypothesis: New Results. Working paper
, 1997
"... We examine properties of residual-based tests for the null of no cointegration for dynamic panels in which both the short-run dynamics and the long-run slope coefficients are permitted to be heterogeneous across individual members of the panel. The tests also allow for individual heterogeneous fixed ..."
Cited by 528 (13 self)
fixed effects and trend terms, and we consider both pooled within-dimension tests and group-mean between-dimension tests. We derive limiting distributions for these and show that they are normal and free of nuisance parameters. We also provide Monte Carlo evidence to demonstrate their small sample size
A comparison of event models for Naive Bayes text classification
, 1998
"... Recent work in text classification has used two different first-order probabilistic models for classification, both of which make the naive Bayes assumption. Some use a multivariate Bernoulli model, that is, a Bayesian Network with no dependencies between words and binary word features (e.g. Larkey ..."
Cited by 1026 (26 self)
comparing their classification performance on five text corpora. We find that the multivariate Bernoulli model performs well with small vocabulary sizes, but that the multinomial model usually performs even better at larger vocabulary sizes, providing on average a 27% reduction in error over the multi
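The two event models differ only in their likelihoods: the multinomial counts every token occurrence, while the Bernoulli scores presence and absence of each vocabulary word. A minimal sketch of that contrast on hand-made data; class names, vocabulary, and Laplace smoothing choices here are illustrative:

```python
# Minimal sketch of the two naive Bayes event models compared above.
import math
from collections import Counter

def train_multinomial(docs_by_class, vocab):
    """Per-class word-frequency parameters with Laplace smoothing."""
    params = {}
    for c, docs in docs_by_class.items():
        counts = Counter(w for d in docs for w in d)
        total = sum(counts.values())
        params[c] = {w: (counts[w] + 1) / (total + len(vocab)) for w in vocab}
    return params

def train_bernoulli(docs_by_class, vocab):
    """Per-class word-presence probabilities with Laplace smoothing."""
    params = {}
    for c, docs in docs_by_class.items():
        df = Counter(w for d in docs for w in set(d))
        params[c] = {w: (df[w] + 1) / (len(docs) + 2) for w in vocab}
    return params

def score_multinomial(params, doc):
    # Likelihood uses every token occurrence (counts matter).
    return sum(math.log(params[w]) for w in doc if w in params)

def score_bernoulli(params, doc):
    # Likelihood uses presence AND absence of every vocabulary word.
    present = set(doc)
    return sum(math.log(p if w in present else 1 - p)
               for w, p in params.items())
```

Both models pick the class with the higher log-likelihood; the abstract's finding is that the count-based multinomial pulls ahead as the vocabulary grows.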
Loopy belief propagation for approximate inference: An empirical study
 Proceedings of Uncertainty in AI,
, 1999
"... Recently, researchers have demonstrated that "loopy belief propagation" (the use of Pearl's polytree algorithm in a Bayesian network with loops) can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performanc ..."
Cited by 674 (15 self)
if this oscillatory behavior in the QMR-DT case was related to the size of the network: does loopy propagation tend to converge less for large networks than small networks? To investigate this question, we tried to cause oscillation in the toy QMR network. We first asked what, besides the size, is different between
Algorithms for Quantum Computation: Discrete Logarithms and Factoring
, 1994
"... A computer is generally considered to be a universal computational device; i.e., it is believed able to simulate any physical computational device with a cost in computation time of at most a polynomial factor. It is not clear whether this is still true when quantum mechanics is taken into consider ..."
Cited by 1107 (5 self)
Fit indices in covariance structure modeling: Sensitivity to underparameterized model misspecification
 Psychological Methods
, 1998
"... This study evaluated the sensitivity of maximum likelihood (ML), generalized least squares (GLS), and asymptotic distribution-free (ADF) based fit indices to model misspecification, under conditions that varied sample size and distribution. The effect of violating assumptions of asymptotic robustn ..."
Cited by 538 (0 self)
hat, Mc, or RMSEA (TLI, Mc, and RMSEA are less preferable at small sample sizes). With the ADF method, we recommend the use of SRMR, supplemented by TLI, BL89, RNI, or CFI. Finally, most of the ML-based fit indices outperformed those obtained from GLS and ADF
A model of growth through creative destruction
, 1990
"... This paper develops a model based on Schumpeter's process of creative destruction. It departs from existing models of endogenous growth in emphasizing obsolescence of old technologies induced by the accumulation of knowledge and the resulting process of industrial innovations. This has both ..."
Cited by 1936 (27 self)
a tendency for laissez-faire economies to generate too many innovations, i.e., too much growth. This "business-stealing" effect is partly compensated by the fact that innovations tend to be too small under laissez-faire. The model possesses a unique balanced growth equilibrium in which
Breaking and Fixing the Needham-Schroeder Public-Key Protocol using FDR
, 1996
"... In this paper we analyse the well-known Needham-Schroeder Public-Key Protocol using FDR, a refinement checker for CSP. We use FDR to discover an attack upon the protocol, which allows an intruder to impersonate another agent. We adapt the protocol, and then use FDR to show that the new protocol is s ..."
Cited by 718 (13 self)
is secure, at least for a small system. Finally we prove a result which tells us that if this small system is secure, then so is a system of arbitrary size. In a distributed computer system, it is necessary to have some mechanism whereby a pair of agents can be assured of each other
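The adaptation the abstract refers to is Lowe's fix: adding the responder's identity to the second message, so the initiator can reject a reply that names the wrong peer. A minimal sketch of the three-message exchange under that assumption; the "encryption" here is just tagging with the recipient's name (illustrative only, not the paper's CSP/FDR model):

```python
# Minimal sketch of Needham-Schroeder public-key messages with Lowe's fix.
def enc(recipient, payload):
    """Stand-in for public-key encryption: only `recipient` can read it."""
    return ("enc", recipient, payload)

def msg1(a, na, b):            # A -> B : {Na, A}_pk(B)
    return enc(b, (na, a))

def msg2_fixed(b, na, nb, a):  # B -> A : {Na, Nb, B}_pk(A)  (fix adds B)
    return enc(a, (na, nb, b))

def initiator_accepts(a, na, expected_b, message):
    """A accepts message 2 only if it is for A, echoes Na, and names expected_b."""
    tag, recipient, payload = message
    if tag != "enc" or recipient != a:
        return False
    got_na, _nb, claimed_b = payload
    return got_na == na and claimed_b == expected_b
```

In Lowe's man-in-the-middle attack, A runs the protocol with an intruder I while I replays A's nonce to B; the identity check makes A reject B's reply in that run, since it names B rather than the expected peer I.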