Results 1–10 of 913
Learning to predict by the methods of temporal differences
 Machine Learning, 1988
Cited by 1521 (56 self)
Abstract
This article introduces a class of incremental learning procedures specialized for prediction – that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
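The core update the abstract describes — moving each prediction toward the next prediction plus the observed reward, rather than toward the final outcome — can be sketched as a minimal TD(0) loop. The episode encoding, step size, and two-state chain below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def td0_predict(episodes, n_states, alpha=0.1, gamma=1.0):
    """TD(0): credit is assigned via the difference between temporally
    successive predictions, V[s_next] - V[s], not the final outcome."""
    V = np.zeros(n_states)
    for episode in episodes:
        # episode: list of (state, reward, next_state); next_state is
        # None at the terminal transition.
        for s, r, s_next in episode:
            target = r + (gamma * V[s_next] if s_next is not None else 0.0)
            V[s] += alpha * (target - V[s])  # temporal-difference update
    return V

# Toy two-state chain: 0 -> 1 -> terminal, reward 1 on the last step.
episodes = [[(0, 0.0, 1), (1, 1.0, None)]] * 200
V = td0_predict(episodes, n_states=2)
```

Both predictions converge toward the true expected return of 1: V[1] chases the terminal reward directly, and V[0] chases V[1].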
Information flow and cooperative control of vehicle formations.
 In Proceedings of the 15th IFAC Conference, 2002
Cited by 551 (11 self)
Abstract
We consider the problem of cooperation among a collection of vehicles performing a shared task using inter-vehicle communication to coordinate their actions. We apply tools from graph theory to relate the topology of the communication network to formation stability. We prove a Nyquist criterion that uses the eigenvalues of the graph Laplacian matrix to determine the effect of the graph on formation stability. We also propose a method for decentralized information exchange between vehicles. This approach realizes a dynamical system that supplies each vehicle with a common reference to be used for cooperative motion. We prove a separation principle that states that formation stability is achieved if the information flow is stable for the given graph and if the local controller stabilizes the vehicle. The information flow can be rendered highly robust to changes in the graph, thus enabling tight formation control despite limitations in inter-vehicle communication capability.
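The object at the heart of the stability criterion — the spectrum of the graph Laplacian of the communication topology — is easy to compute for a small example. The 4-vehicle path graph below is a made-up illustration, not a topology from the paper:

```python
import numpy as np

# Undirected communication graph on 4 vehicles: a path 0-1-2-3.
# The graph Laplacian is L = D - A (degree matrix minus adjacency);
# its eigenvalues enter the Nyquist-type stability criterion.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
eigvals = np.sort(np.linalg.eigvalsh(L))
# The smallest eigenvalue is always 0; the second-smallest (the
# algebraic connectivity) is positive iff the graph is connected.
```

For this path graph the eigenvalues are 0, 2 − √2, 2, and 2 + √2; a sparser or disconnected topology pushes the algebraic connectivity toward zero, which is what limits how tightly the formation can be controlled.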
A simple distributed autonomous power control algorithm and its convergence
 IEEE Transactions on Vehicular Technology, 1993
Cited by 477 (3 self)
Abstract
For wireless cellular communication systems, one seeks a simple effective means of power control of signals associated with randomly dispersed users that are reusing a single channel in different cells. By effecting the lowest interference environment, in meeting a required minimum signal-to-interference ratio of p per user, channel reuse is maximized. Distributed procedures for doing this are of special interest, since the centrally administered alternative requires added infrastructure, latency, and network vulnerability. Successful distributed powering entails guiding the evolution of the transmitted power level of each of the signals, using only local measurements, so that eventually all users meet the p requirement. The local per channel power measurements include that of the intended signal as well as the undesired interference from other users (plus receiver noise). For a certain simple distributed type of algorithm, whenever power settings exist for which all users meet the p requirement, we demonstrate exponentially fast convergence to these settings.
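A common textbook form of such a distributed iteration has each user scale its own power by the ratio of the target to its currently measured SIR; when a feasible power vector exists, the iteration converges to it exponentially fast. The link-gain matrix, noise level, and target below are invented for illustration:

```python
import numpy as np

# Hypothetical 3-user link gains G[i, j]: gain from user j's transmitter
# to user i's receiver. sigma2 is receiver noise; gamma is the SIR target.
G = np.array([[1.0, 0.1, 0.1],
              [0.1, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
sigma2, gamma = 0.01, 2.0
p = np.ones(3)  # arbitrary positive initial powers

for _ in range(100):
    # Each user needs only its own measured SIR: it rescales its power so
    # the target would be met if interference stayed fixed.
    interference = G @ p - np.diag(G) * p + sigma2
    sir = np.diag(G) * p / interference
    p = (gamma / sir) * p

sir_final = np.diag(G) * p / (G @ p - np.diag(G) * p + sigma2)
```

Because this example's system is feasible (the interference coupling is weak enough), every user's SIR converges to the target gamma using purely local measurements.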
A Class of Generalized Stochastic Petri Nets for the Performance Analysis of Multiprocessor Systems
 ACM Transactions on Computer Systems, 1984
Cited by 307 (4 self)
Abstract
A class of generalised stochastic Petri nets for the performance evaluation of multiprocessor systems
Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods
 1994
Cited by 268 (7 self)
Abstract
This document is the electronic version of the 2nd edition of the Templates book, which is available for purchase from the Society for Industrial and Applied Mathematics (SIAM).
Efficient and reliable schemes for nonlinear diffusion filtering
 IEEE Transactions on Image Processing, 1998
Cited by 231 (21 self)
Abstract
Nonlinear diffusion filtering is usually performed with explicit schemes. They are only stable for very small time steps, which leads to poor efficiency and limits their practical use. Based on a recent discrete nonlinear diffusion scale-space framework, we present semi-implicit schemes which are stable for all time steps. These novel schemes use an additive operator splitting (AOS), which guarantees equal treatment of all coordinate axes. They can be implemented easily in arbitrary dimensions, have good rotational invariance, and reveal a computational complexity and memory requirement which is linear in the number of pixels. Examples demonstrate that, under typical accuracy requirements, AOS schemes are at least ten times more efficient than the widely used explicit schemes.
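For the linear 1-D case, the contrast the abstract draws — explicit schemes limited to tiny time steps versus semi-implicit schemes stable for any step — can be sketched as one backward-Euler step. A dense solve stands in for the tridiagonal/AOS solvers of the paper, and the grid size and step size are arbitrary:

```python
import numpy as np

def semi_implicit_diffusion_step(u, tau):
    """One semi-implicit step: solve (I - tau*A) u_new = u for 1-D linear
    diffusion with reflecting boundaries. Unconditionally stable, so tau
    may far exceed the explicit limit (tau <= 0.5 on this grid)."""
    n = len(u)
    A = np.zeros((n, n))          # discrete Laplacian, zero row sums
    for i in range(n):
        if i > 0:
            A[i, i - 1] = 1.0
            A[i, i] -= 1.0
        if i < n - 1:
            A[i, i + 1] = 1.0
            A[i, i] -= 1.0
    return np.linalg.solve(np.eye(n) - tau * A, u)

u0 = np.zeros(50)
u0[25] = 1.0                      # unit impulse
u1 = semi_implicit_diffusion_step(u0, tau=10.0)  # huge step, still stable
```

The step conserves total mass (zero column sums of A) and keeps the signal nonnegative (the system matrix is an M-matrix), even at a step size twenty times the explicit limit.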
A sparse approximate inverse preconditioner for nonsymmetric linear systems
 SIAM Journal on Scientific Computing, 1998
Cited by 197 (22 self)
Abstract
This paper is concerned with a new approach to preconditioning for large, sparse linear systems. A procedure for computing an incomplete factorization of the inverse of a nonsymmetric matrix is developed, and the resulting factorized sparse approximate inverse is used as an explicit preconditioner for conjugate gradient–type methods. Some theoretical properties of the preconditioner are discussed, and numerical experiments on test matrices from the Harwell–Boeing collection and from Tim Davis’s collection are presented. Our results indicate that the new preconditioner is cheaper to construct than other approximate inverse preconditioners. Furthermore, the new technique ensures convergence rates of the preconditioned iteration which are comparable with those obtained with standard implicit preconditioners.
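The basic idea behind sparse approximate inverses can be sketched column by column: a prescribed sparsity pattern fixes which entries of each column of M may be nonzero, and a small least-squares problem minimizes the corresponding column of ||AM − I||_F. This is only the generic construction; the factorized form and pattern selection studied in the paper are not reproduced here:

```python
import numpy as np

def spai(A, pattern):
    """Right approximate inverse M minimizing ||A M - I||_F column by
    column; pattern[j] lists the rows where column j of M may be nonzero."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        rows = pattern[j]
        e_j = np.zeros(n)
        e_j[j] = 1.0
        # Small least-squares problem over the allowed entries only:
        # minimize ||A[:, rows] @ m - e_j||_2.
        m, *_ = np.linalg.lstsq(A[:, rows], e_j, rcond=None)
        M[rows, j] = m
    return M

# Toy demo: tridiagonal A, with the pattern of M taken equal to the
# pattern of A (a common default choice).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
pattern = [[0, 1], [0, 1, 2], [1, 2]]
M = spai(A, pattern)
residual = np.linalg.norm(A @ M - np.eye(3))
```

Each column's problem is independent, which is why such preconditioners parallelize well and, as the abstract notes, can be cheap to construct.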
Implicit-Explicit Methods for Time-Dependent PDEs
 SIAM Journal on Numerical Analysis, 1997
Cited by 178 (6 self)
Abstract
Implicit-explicit (IMEX) schemes have been widely used, especially in conjunction with spectral methods, for the time integration of spatially discretized PDEs of diffusion-convection type. Typically, an implicit scheme is used for the diffusion term and an explicit scheme is used for the convection term. Reaction-diffusion problems can also be approximated in this manner. In this work we systematically analyze the performance of such schemes, propose improved new schemes, and pay particular attention to their relative performance in the context of fast multigrid algorithms and of aliasing reduction for spectral methods. For the prototype linear advection-diffusion equation, a stability analysis for first-, second-, third-, and fourth-order multistep IMEX schemes is performed. Stable schemes permitting large time steps for a wide variety of problems and yielding appropriate decay of high-frequency error modes are identified. Numerical experiments demonstrate that weak decay of high frequency ...
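The prototype setup — implicit diffusion, explicit advection — reduces, for first-order IMEX Euler on a periodic grid, to one linear solve per step. The grid size, velocity, and viscosity below are arbitrary illustration choices, and dense matrices stand in for the spectral or multigrid machinery the paper considers:

```python
import numpy as np

# First-order IMEX Euler for u_t + a*u_x = nu*u_xx on a periodic grid:
# advection explicit (upwind), diffusion implicit, so only the advective
# CFL condition restricts the time step.
n, a, nu = 64, 1.0, 0.1
dx = 1.0 / n
dt = 0.5 * dx / a                 # advective CFL number 0.5
x = np.arange(n) * dx

shift_down = np.roll(np.eye(n), -1, axis=1)   # (shift_down @ u)[i] = u[i-1]
shift_up = np.roll(np.eye(n), 1, axis=1)      # (shift_up @ u)[i]   = u[i+1]
D1 = (np.eye(n) - shift_down) / dx            # upwind d/dx for a > 0
D2 = (shift_up - 2.0 * np.eye(n) + shift_down) / dx**2   # d^2/dx^2

u = np.sin(2.0 * np.pi * x)
lhs = np.eye(n) - dt * nu * D2                # implicit diffusion operator
for _ in range(50):
    u = np.linalg.solve(lhs, u - dt * a * (D1 @ u))
```

Treating the stiff diffusion term implicitly removes the restrictive diffusive step limit dt ≲ dx²/(2ν), which is the whole point of the IMEX splitting.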
Block preconditioning for the conjugate gradient method
 1982
Cited by 102 (5 self)
Abstract
Block preconditionings for the conjugate gradient method are investigated for solving positive definite block tridiagonal systems of linear equations arising from discretization of boundary value problems for elliptic partial differential equations. The preconditionings rest on the use of sparse approximate matrix inverses to generate incomplete block Cholesky factorizations. Carrying out the factorizations can be guaranteed under suitable conditions. Numerical experiments on test problems in two dimensions indicate that a particularly attractive preconditioning, which uses special properties of tridiagonal matrix inverses, can be computationally more efficient for the same computer storage than other preconditionings, including the popular point incomplete Cholesky factorization. Key words: conjugate gradient method, elliptic partial differential equations, incomplete factorization, iterative methods, preconditioning, sparse matrices
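The overall iteration can be sketched with a generic preconditioned CG loop plus a block-Jacobi preconditioner that inverts the diagonal blocks of a block tridiagonal model problem. This is a simplified stand-in: the incomplete block Cholesky construction studied in the paper is not reproduced here:

```python
import numpy as np

def pcg(A, b, M_solve, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradient; M_solve(r) applies the inverse
    of the (SPD) preconditioner to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Model problem: 2-D Laplacian on an m-by-m grid -- a positive definite
# block tridiagonal matrix with diagonal blocks T and off-diagonal
# blocks -I.
m = 5
T = 4.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
A = np.kron(np.eye(m), T) - np.kron(np.eye(m, k=1) + np.eye(m, k=-1), np.eye(m))

# Block-Jacobi preconditioner: apply T^{-1} to each block of the residual.
Tinv = np.linalg.inv(T)
M_solve = lambda r: np.concatenate([Tinv @ blk for blk in r.reshape(m, m)])

b = np.ones(m * m)
x = pcg(A, b, M_solve)
```

The paper's key refinement over this sketch is replacing the exact block inverses with sparse approximate inverses of the tridiagonal blocks, trading a little convergence speed for much lower storage.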