Results 1 - 10 of 9,344

The Cache Performance and Optimizations of Blocked Algorithms

by Monica S. Lam, Edward E. Rothberg, Michael E. Wolf - In Proceedings of the Fourth International Conference on Architectural Support for Programming Languages and Operating Systems , 1991
"... Blocking is a well-known optimization technique for improving the effectiveness of memory hierarchies. Instead of operating on entire rows or columns of an array, blocked algorithms operate on submatrices or blocks, so that data loaded into the faster levels of the memory hierarchy are reused. This ..."
Abstract - Cited by 574 (5 self)
is highly sensitive to the stride of data accesses and the size of the blocks, and can cause wide variations in machine performance for different matrix sizes. The conventional wisdom of trying to use the entire cache, or even a fixed fraction of the cache, is incorrect. If a fixed block size is used for a
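The blocking idea the abstract describes can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the function name `blocked_matmul` is hypothetical, and the block size here is an arbitrary constant rather than the model-derived choice the paper argues for.

```python
import numpy as np

def blocked_matmul(A, B, block=64):
    """Multiply A @ B by iterating over square submatrices (blocks),
    so a block of each operand is reused while it is still resident
    in the faster levels of the memory hierarchy."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i0 in range(0, n, block):
        for j0 in range(0, m, block):
            for k0 in range(0, k, block):
                # slices clamp automatically when n, m, k are not
                # multiples of the block size
                C[i0:i0+block, j0:j0+block] += (
                    A[i0:i0+block, k0:k0+block] @ B[k0:k0+block, j0:j0+block]
                )
    return C
```

The result is identical to an unblocked multiply; only the access order (and hence cache behavior) changes, which is why, as the snippet notes, performance varies with stride, block size, and matrix size.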

Shape and motion from image streams under orthography: a factorization method

by Carlo Tomasi, Takeo Kanade - INTERNATIONAL JOURNAL OF COMPUTER VISION , 1992
"... Inferring scene geometry and camera motion from a stream of images is possible in principle, but is an ill-conditioned problem when the objects are distant with respect to their size. We have developed a factorization method that can overcome this difficulty by recovering shape and motion under orth ..."
Abstract - Cited by 1094 (38 self)
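The core of the factorization method is that, under orthography, a registered 2F × P measurement matrix of tracked feature points has rank at most three, so a truncated SVD splits it into motion and shape. A sketch under that assumption (the helper name `factorize` is illustrative, and the paper's additional metric constraints, which resolve the remaining affine ambiguity, are omitted):

```python
import numpy as np

def factorize(W):
    """Rank-3 factorization of a 2F x P measurement matrix W under
    orthography: subtract per-row centroids (registration), then
    truncate the SVD to rank 3."""
    W_reg = W - W.mean(axis=1, keepdims=True)   # remove per-row translation
    U, s, Vt = np.linalg.svd(W_reg, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])               # motion matrix, 2F x 3
    S = np.sqrt(s[:3])[:, None] * Vt[:3]        # shape matrix, 3 x P
    return M, S
```

With noiseless orthographic data the product M @ S reproduces the registered measurements exactly; with noise, the rank-3 truncation is the least-squares fit, which is what makes the method well conditioned.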

Multivariable Feedback Control: Analysis and Design

by Sigurd Skogestad, Ian Postlethwaite , 2005
"... multi-input, multi-output feedback control design for linear systems using the paradigms, theory, and tools of robust control that have arisen during the past two decades. The book is aimed at graduate students and practicing engineers who have a basic knowledge of classical control design and st ..."
Abstract - Cited by 564 (24 self)
and state-space control theory for linear systems. A basic knowledge of matrix theory and linear algebra is required to appreciate and digest the material offered. This edition is a revised and expanded version of the first edition, which was published in 1996. The size of the

How much should we trust differences-in-differences estimates?

by Marianne Bertrand, Esther Duflo, Sendhil Mullainathan , 2003
"... Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on femal ..."
Abstract - Cited by 828 (1 self)
into account the auto-correlation of the data) works well when the number of states is large enough. Two corrections based on asymptotic approximation of the variance-covariance matrix work well for moderate numbers of states and one correction that collapses the time series information into a “pre” and “post
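The "collapse into pre and post" correction mentioned in the snippet can be illustrated with a toy sketch. The function names are hypothetical, and a real application would include covariates and state/year fixed effects rather than this bare two-group comparison:

```python
import numpy as np

def collapse_pre_post(y, years, law_year):
    """Collapse one group's outcome series into a 'pre' mean and a
    'post' mean around the (possibly placebo) law year, discarding
    the serially correlated within-period time series."""
    y, years = np.asarray(y, dtype=float), np.asarray(years)
    return y[years < law_year].mean(), y[years >= law_year].mean()

def dd_estimate(y_treat, y_ctrl, years, law_year):
    """Differences-in-differences on the collapsed two-period data."""
    pre_t, post_t = collapse_pre_post(y_treat, years, law_year)
    pre_c, post_c = collapse_pre_post(y_ctrl, years, law_year)
    return (post_t - pre_t) - (post_c - pre_c)
```

Because each group contributes only one pre and one post observation, the serial correlation that inflates conventional DD standard errors is averaged away before the comparison is made.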

Stable signal recovery from incomplete and inaccurate measurements

by Emmanuel J Candès , Justin K Romberg , Terence Tao - Comm. Pure Appl. Math., , 2006
"... Abstract Suppose we wish to recover a vector x0 ∈ R^m (e.g., a digital signal or image) from incomplete and contaminated observations y = Ax0 + e; A is an n × m matrix with far fewer rows than columns (n ≪ m) and e is an error term. Is it possible to recover x0 accurately based on the data y? To r ..."
Abstract - Cited by 1397 (38 self)

Sequential minimal optimization: A fast algorithm for training support vector machines

by John C. Platt - Advances in Kernel Methods-Support Vector Learning , 1999
"... This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possi ..."
Abstract - Cited by 461 (3 self)
possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because matrix computation

Using the Nyström Method to Speed Up Kernel Machines

by Christopher Williams, Matthias Seeger - Advances in Neural Information Processing Systems 13 , 2001
"... A major problem for kernel-based predictors (such as Support Vector Machines and Gaussian processes) is that the amount of computation required to find the solution scales as O(n³), where n is the number of training examples. We show that an approximation to the eigendecomposition of the Gram matrix ..."
Abstract - Cited by 434 (6 self)
matrix can be computed by the Nyström method (which is used for the numerical solution of eigenproblems). This is achieved by carrying out an eigendecomposition on a smaller system of size m < n, and then expanding the results back up to n dimensions. The computational complexity of a predictor using
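The subsample-and-expand step the snippet describes corresponds to the standard Nyström identity K ≈ C W⁺ Cᵀ, where C holds m landmark columns of the Gram matrix and W the landmark-landmark block. A sketch (the helper name `nystrom_approx` is illustrative; in practice the approximate eigendecomposition is fed into the kernel machine rather than rebuilding the full matrix):

```python
import numpy as np

def nystrom_approx(K, idx):
    """Nystrom approximation of an n x n Gram matrix K from the m
    landmark columns selected by idx: K ~ C @ pinv(W) @ C.T."""
    C = K[:, idx]                    # n x m: landmark columns
    W = K[np.ix_(idx, idx)]          # m x m: landmark-landmark block
    return C @ np.linalg.pinv(W) @ C.T
```

The approximation is exact whenever the landmark block captures the full rank of K, and costs O(m²n) instead of the O(n³) of a full eigendecomposition.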

Spectral clustering for a large data set by reducing the similarity matrix size

by Hiroyuki Shinnou, Minoru Sasaki - In Proc. of the 6th Int. Conf. on Language Resources and Evaluation (LREC , 2008
"... Spectral clustering is a powerful clustering method for document data set. However, spectral clustering needs to solve an eigenvalue problem of the matrix converted from the similarity matrix corresponding to the data set. Therefore, it is not practical to use spectral clustering for a large data se ..."
Abstract - Cited by 3 (0 self)
set. To overcome this problem, we propose a method to reduce the similarity matrix size. First, using k-means, we obtain a clustering result for the given data set. From each cluster, we pick some data points that are near the center of the cluster and treat them as a single data point. We call
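The reduction step can be sketched with pure-NumPy Lloyd iterations. This is an illustration under simplifying assumptions: the function name is hypothetical, and where the paper merges several near-center points per cluster, this sketch keeps only the single point nearest each centroid.

```python
import numpy as np

def reduce_by_kmeans(X, k, iters=20, seed=0):
    """Shrink a data set to k representatives: run Lloyd's k-means,
    then from each cluster keep the point nearest its centroid."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each point to its nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its members
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    reps = []
    for j in range(k):
        members = np.flatnonzero(labels == j)
        if len(members):
            d = np.linalg.norm(X[members] - centers[j], axis=1)
            reps.append(members[d.argmin()])
    return np.array(reps)
```

Spectral clustering is then run on the similarity matrix of the k representatives instead of the full n × n matrix, shrinking the eigenvalue problem accordingly.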

The benefits of coding over routing in a randomized setting

by Tracey Ho, Ralf Koetter, Muriel Médard, David R. Karger, Michelle Effros - In Proceedings of 2003 IEEE International Symposium on Information Theory , 2003
"... Abstract — We present a novel randomized coding approach for robust, distributed transmission and compression of information in networks. We give a lower bound on the success probability of a random network code, based on the form of transfer matrix determinant polynomials, that is tighter than the ..."
Abstract - Cited by 361 (44 self)

A review of algebraic multigrid

by K. Stüben , 2001
"... Since the early 1990s, there has been a strongly increasing demand for more efficient methods to solve large sparse, unstructured linear systems of equations. For practically relevant problem sizes, classical one-level methods had already reached their limits and new hierarchical algorithms had to b ..."
Abstract - Cited by 347 (11 self)
Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University