Results 1–10 of 7,425
Approximate Equivalence of Markov Decision Processes

Cited by 7 (0 self)
Abstract. We consider the problem of finding the minimal ε-equivalent MDP for an MDP given in its tabular form. We show that the problem is NP-hard and then give a bicriteria approximation algorithm for the problem. We suggest that the right measure for finding a minimal ε-equivalent model is the L1 …
Quasi-approximate Equivalence and Essential Equivalence of Operators
Abstract: It is known that the projective tensor product X ⊗̂π Y and the injective tensor product X ⊗̌ε Y of Banach lattices X and Y may not be Banach lattices, while the Fremlin projective tensor product X ⊗̂|π| Y and the Wittstock injective tensor product X ⊗̌|ε| Y of Banach lattices X and Y are always Banach lattices. In this talk, we will discuss under what circumstances X ⊗̂π Y and X ⊗̌ε Y are Banach lattices, and under what circumstances some geometric properties can be inherited from two Banach spaces X and Y by their tensor products X ⊗̂π Y and X ⊗̌ε Y, and from two Banach lattices X and Y by their tensor products X ⊗̂|π| Y and X ⊗̌|ε| Y.
A Method for Approximate Equivalence Checking
Abstract: An approximate equivalence checking method is developed based on the use of partial Haar spectral diagrams (HSDs). Partial HSDs are defined and used to represent a subset of the Haar spectral coefficients for two functions. Due to the uniqueness properties of the Haar transform, a necessary condition …
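The core idea can be sketched in a few lines of Python: equal functions must have equal Haar spectra, so comparing even a subset of coefficients gives a necessary condition for equivalence. This is only an illustration of the spectral-comparison idea, not the paper's partial-HSD data structure; the names `haar_spectrum` and `maybe_equivalent` are our own:

```python
def haar_spectrum(truth_table):
    """Unnormalized Haar spectrum of a Boolean function given as a
    truth table of length 2**n (values 0/1)."""
    assert len(truth_table) & (len(truth_table) - 1) == 0, "length must be 2**n"
    v = list(truth_table)
    details = []
    while len(v) > 1:
        sums = [v[i] + v[i + 1] for i in range(0, len(v), 2)]
        diffs = [v[i] - v[i + 1] for i in range(0, len(v), 2)]
        details = diffs + details  # coarse-scale details end up first
        v = sums
    return v + details  # [global sum, coarse details, ..., fine details]

def maybe_equivalent(f, g, k):
    """Necessary condition only: compare the first k Haar coefficients.
    False means the functions provably differ; True means 'not ruled out
    by this partial spectrum'."""
    return haar_spectrum(f)[:k] == haar_spectrum(g)[:k]

# f = x0 AND x1, g = the same function, h = x0 OR x1 (3-input truth tables)
f = [1 if (i & 1) and (i & 2) else 0 for i in range(8)]
g = list(f)
h = [1 if (i & 1) or (i & 2) else 0 for i in range(8)]
```

Even a single coefficient (the global sum) already distinguishes `f` from `h` here, while `f` and `g` agree on the full spectrum.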
Loopy belief propagation for approximate inference: An empirical study
 In Proceedings of Uncertainty in AI
, 1999

Cited by 676 (15 self)
Recently, researchers have demonstrated that "loopy belief propagation" (the use of Pearl's polytree algorithm in a Bayesian network with loops) can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performance of "Turbo Codes", codes whose decoding algorithm is equivalent to loopy belief propagation in a chain-structured Bayesian network. In this paper we ask: is there something special about the error-correcting code context, or does loopy propagation work as an approximate inference scheme …
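A minimal sketch of loopy belief propagation on a toy model (our own construction, not the paper's experiments): three binary variables in a single cycle, synchronous message passing with normalization, and brute-force exact marginals for comparison. On a loopy graph the beliefs are only approximations of the true marginals, which is the behavior the paper studies empirically:

```python
import itertools
import numpy as np

# Toy pairwise model: 3 binary variables in a cycle (a graph with one loop).
edges = [(0, 1), (1, 2), (2, 0)]
psi = {e: np.array([[1.2, 0.8], [0.8, 1.2]]) for e in edges}   # edge potentials
phi = [np.array([1.5, 0.5]), np.ones(2), np.ones(2)]           # node potentials

neighbors = {i: [j for e in edges for j in e if i in e and j != i] for i in range(3)}

def edge_pot(i, j):
    """Potential indexed as pot[x_i, x_j]."""
    return psi[(i, j)] if (i, j) in psi else psi[(j, i)].T

# Messages m[(i, j)]: message from node i to node j, initialized uniform.
m = {(i, j): np.ones(2) / 2 for i in range(3) for j in neighbors[i]}

for _ in range(200):
    new = {}
    for (i, j) in m:
        incoming = phi[i] * np.prod([m[(k, i)] for k in neighbors[i] if k != j], axis=0)
        msg = edge_pot(i, j).T @ incoming          # sum out x_i
        new[(i, j)] = msg / msg.sum()
    delta = max(np.abs(new[k] - m[k]).max() for k in m)
    m = new
    if delta < 1e-10:                              # messages stopped changing
        break

# Beliefs: approximate marginals from the converged messages.
beliefs = []
for i in range(3):
    b = phi[i] * np.prod([m[(k, i)] for k in neighbors[i]], axis=0)
    beliefs.append(b / b.sum())

# Exact marginals by brute-force enumeration, for comparison.
joint = np.zeros((2, 2, 2))
for x in itertools.product([0, 1], repeat=3):
    p = phi[0][x[0]] * phi[1][x[1]] * phi[2][x[2]]
    for (i, j) in edges:
        p *= edge_pot(i, j)[x[i], x[j]]
    joint[x] = p
joint /= joint.sum()
```

With these weak potentials the messages converge quickly and the beliefs land close to, but not exactly on, the true marginals.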
Discovering spatial relationships between approximately equivalent patterns
 In The 4th ACM SIGKDD Workshop on Data Mining in Bioinformatics (BIOKDD)
, 2004
Simultaneous Analysis of Lasso and Dantzig Selector
 Submitted to the Annals of Statistics
, 2007

Cited by 472 (11 self)
We exhibit an approximate equivalence between the Lasso estimator and the Dantzig selector. For both methods we derive parallel oracle inequalities for the prediction risk in the general nonparametric regression model, as well as bounds on the ℓp estimation loss for 1 ≤ p ≤ 2 in the linear model when the …
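The flavor of this equivalence shows up in a special case (our own illustration, not the paper's general result): for an orthonormal design with X′X = I, both the Lasso and the Dantzig selector reduce to the same coordinate-wise soft-thresholding of X′y, so the two estimators coincide exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 50, 10, 0.5

# Orthonormal design: X^T X = I_p (columns from a reduced QR factorization).
X, _ = np.linalg.qr(rng.standard_normal((n, p)))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]                  # sparse true coefficients
y = X @ beta_true + 0.1 * rng.standard_normal(n)

z = X.T @ y                                       # correlations with the response

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Lasso: argmin (1/2)||y - X b||^2 + lam ||b||_1.  With X^T X = I the
# objective separates coordinate-wise, so the solution is soft-thresholding.
beta_lasso = soft_threshold(z, lam)

# Dantzig selector: min ||b||_1  s.t.  ||X^T (y - X b)||_inf <= lam.
# With X^T X = I the constraint reads ||z - b||_inf <= lam, and the
# l1-minimal feasible point is again soft_threshold(z, lam).
beta_dantzig = soft_threshold(z, lam)

# The Lasso solution satisfies the Dantzig constraint with equality on
# the active coordinates (a KKT condition):
assert np.abs(X.T @ (y - X @ beta_lasso)).max() <= lam + 1e-10
```

In the general (non-orthonormal) setting the two solutions differ, and the paper's oracle inequalities quantify how closely they track each other.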
LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares
 ACM Trans. Math. Software
, 1982

Cited by 653 (21 self)
An iterative method is given for solving Ax ≈ b and min ‖Ax − b‖2, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerical …
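The algorithm described here is implemented in SciPy as `scipy.sparse.linalg.lsqr`. A minimal usage sketch on a consistent sparse system (the toy data is our own choice):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)

# An overdetermined sparse system: minimize ||A x - b||_2.
A = sparse_random(200, 50, density=0.05, random_state=0, format="csr")
x_true = rng.standard_normal(50)
b = A @ x_true                     # b lies in the range of A, so the
                                   # least-squares residual should be ~0

# First element of the returned tuple is the solution vector.
x = lsqr(A, b, atol=1e-12, btol=1e-12, iter_lim=2000)[0]

print(np.linalg.norm(A @ x - b))   # residual norm, near machine precision
```

For inconsistent systems the same call returns the least-squares solution; the `damp` parameter of `lsqr` adds Tikhonov regularization when A is ill-conditioned.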
Synthesis of Abstraction Hierarchies for Constraint Satisfaction by Clustering Approximately Equivalent Objects
 In Tenth International Conference on Machine Learning
, 1993

Cited by 11 (3 self)
Synthesis of Abstraction Hierarchies for Constraint Satisfaction by Clustering Approximately Equivalent Objects. Thomas Ellman, Department of Computer Science, Hill Center for Mathematical Sciences, Rutgers University, New Brunswick, NJ 08903. ellman@cs.rutgers.edu. LCSRTR200. Abstract: Abstraction techniques are important …
On the distribution of the largest eigenvalue in principal components analysis
 Ann. Statist.
, 2001

Cited by 422 (4 self)
Let x(1) denote the square of the largest singular value of an n × p matrix X, all of whose entries are independent standard Gaussian variates. Equivalently, x(1) is the largest principal component variance of the covariance matrix X′X, or the largest eigenvalue of a p-variate Wishart distribution …
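The stated equivalence between the squared largest singular value of X and the largest eigenvalue of X′X is easy to verify numerically (the dimensions here are our own choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50

# X: n x p with i.i.d. standard Gaussian entries.
X = rng.standard_normal((n, p))

s_max = np.linalg.svd(X, compute_uv=False)[0]  # largest singular value (sorted desc.)
l_max = np.linalg.eigvalsh(X.T @ X)[-1]        # largest eigenvalue of X'X (sorted asc.)

# x_(1) := s_max**2 is the largest eigenvalue of the Wishart matrix X'X,
# i.e. the largest principal component variance (up to the 1/n scaling convention).
assert np.isclose(s_max**2, l_max)
```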
An equivalence between sparse approximation and Support Vector Machines
 A.I. Memo 1606, MIT Artificial Intelligence Laboratory
, 1997

Cited by 243 (7 self)
This publication can be retrieved by anonymous ftp to publications.ai.mit.edu. The pathname for this publication is: ai-publications/1500-1999/AIM-1606.ps.Z. This paper shows a relationship between two different approximation techniques: the Support Vector Machines (SVM), proposed by V. Vapnik (1995), …