Results 1 - 10 of 14,763
Non-linear Least Squares Optimization Problem
"... Adaptive least squares matching as a non-linear least squares optimization problem ..."
On Variant Strategies To Solve The Magnitude Least Squares Optimization Problem In Parallel Transmission Pulse Design And Under Strict SAR And Power Constraints
, 2013
"... Abstract—Parallel transmission has been a very promising candidate technology to mitigate the inevitable radio-frequency (RF) field inhomogeneity in magnetic resonance imaging (MRI) at ultra-high field (UHF). For the first few years, pulse design utilizing this technique was expressed as a least squ ..."
Cited by 1 (0 self)
squares problem with crude power regularizations aimed at controlling the specific absorption rate (SAR), hence the patient safety. This approach being suboptimal for many applications sensitive mostly to the magnitude of the spin excitation, and not its phase, the magnitude least squares (MLS) problem
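The magnitude least squares (MLS) formulation this entry refers to replaces the usual complex-valued least squares target with a magnitude-only one, min_x || |Ax| - m ||^2. Below is a minimal sketch of one common strategy for it, variable exchange, with a plain Tikhonov term standing in for the strict SAR and power constraints the paper studies; the function and parameter names are illustrative, not the paper's.

```python
import numpy as np

def mls_variable_exchange(A, m, lam=1e-2, iters=50):
    """Magnitude least squares via variable exchange (illustrative sketch).

    A   : (N_voxels, N_channels) complex system matrix
    m   : (N_voxels,) desired excitation magnitude (real, >= 0)
    lam : Tikhonov weight standing in for power regularization
    Alternates a phase update with a regularized linear least squares solve
    to reduce || |A x| - m ||^2 + lam ||x||^2.
    """
    n = A.shape[1]
    x = np.linalg.lstsq(A, m.astype(complex), rcond=None)[0]   # init: plain LS fit
    AhA = A.conj().T @ A + lam * np.eye(n)
    for _ in range(iters):
        z = m * np.exp(1j * np.angle(A @ x))      # keep the magnitude target, adopt current phase
        x = np.linalg.solve(AhA, A.conj().T @ z)  # regularized LS toward the re-phased target
    return x
```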
Least squares quantization in PCM
- Bell Telephone Laboratories Paper
, 1982
"... Abstract-It has long been realized that in pulse-code modulation (PCM), with a given ensemble of signals to handle, the quantum values should be spaced more closely in the voltage regions where the signal amplitude is more likely to fall. It has been shown by Panter and Dite that, in the limit as t ..."
Cited by 1362 (0 self)
conditions are found that the quanta and associated quantization intervals of an optimum finite quantization scheme must satisfy. The optimization criterion used is that the average quantization noise power be a minimum. It is shown that the result obtained here goes over into the Panter and Dite result
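The two necessary conditions referred to here are that each quantum be the centroid of its quantization interval and each interval boundary the midpoint between adjacent quanta; alternating them is Lloyd's Method I. A minimal numpy sketch, with illustrative names and a fixed iteration count rather than a convergence test:

```python
import numpy as np

def lloyd_max(samples, levels=8, iters=100):
    """Lloyd's Method I sketch: alternate the two necessary conditions
    until the quantizer stops changing.

    samples : 1-D array of signal amplitudes (the 'ensemble of signals')
    levels  : number of quanta
    Returns (quanta, boundaries) that locally minimize the average
    quantization noise power.
    """
    q = np.quantile(samples, (np.arange(levels) + 0.5) / levels)  # initial quanta
    for _ in range(iters):
        b = 0.5 * (q[:-1] + q[1:])         # boundaries: midpoints between adjacent quanta
        idx = np.digitize(samples, b)      # assign each sample to its interval
        for k in range(levels):            # quanta: centroid (mean) of each interval
            cell = samples[idx == k]
            if cell.size:
                q[k] = cell.mean()
    return q, 0.5 * (q[:-1] + q[1:])
```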
Benchmarking Least Squares Support Vector Machine Classifiers
- Neural Processing Letters
, 2001
"... In Support Vector Machines (SVMs), the solution of the classification problem is characterized by a (convex) quadratic programming (QP) problem. In a modified version of SVMs, called Least Squares SVM classifiers (LS-SVMs), a least squares cost function is proposed so as to obtain a linear set of eq ..."
Cited by 476 (46 self)
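The linear set of equations mentioned in the abstract is the bordered KKT system of the LS-SVM dual, which replaces the QP of a standard SVM. A minimal sketch with an RBF kernel; `gamma`, `sigma`, and the function names are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def lssvm_train(X, y, gamma=1.0, sigma=1.0):
    """LS-SVM classifier sketch: the least squares cost and equality
    constraints reduce training to one linear (KKT) system."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-d2 / (2 * sigma ** 2))                     # RBF kernel matrix
    Omega = np.outer(y, y) * K
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = y, y                              # bordered KKT system
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]                                 # (alpha, bias)

def lssvm_predict(X_train, y, alpha, b, X_new, sigma=1.0):
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    return np.sign(K @ (alpha * y) + b)
```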
Valuing American options by simulation: A simple least-squares approach
- Review of Financial Studies
, 2001
"... This article presents a simple yet powerful new approach for approximating the value of America11 options by simulation. The kcy to this approach is the use of least squares to estimate the conditional expected payoff to the optionholder from continuation. This makes this approach readily applicable ..."
Cited by 517 (9 self)
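A minimal sketch of the least-squares Monte Carlo idea for an American put: at each exercise date, regress discounted future cash flows on polynomial functions of the asset price to estimate the continuation value, and exercise wherever the immediate payoff exceeds it. The parameter values and basis choice below are illustrative, not the paper's examples.

```python
import numpy as np

def lsm_american_put(S0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0,
                     steps=50, paths=100_000, seed=0):
    """Least-squares Monte Carlo sketch for an American put under GBM."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    disc = np.exp(-r * dt)
    z = rng.standard_normal((steps, paths))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=0))
    cash = np.maximum(K - S[-1], 0.0)                  # payoff if held to maturity
    for t in range(steps - 2, -1, -1):
        cash *= disc                                   # discount back one step
        itm = K - S[t] > 0.0                           # regress on in-the-money paths only
        if itm.any():
            basis = np.vander(S[t, itm], 3)            # regression basis: S^2, S, 1
            coeff = np.linalg.lstsq(basis, cash[itm], rcond=None)[0]
            continuation = basis @ coeff
            exercise = K - S[t, itm]
            ex_now = exercise > continuation           # exercise beats estimated continuation
            cash[np.where(itm)[0][ex_now]] = exercise[ex_now]
    return disc * cash.mean()
```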
Least-Squares Policy Iteration
- Journal of Machine Learning Research
, 2003
"... We propose a new approach to reinforcement learning for control problems which combines value-function approximation with linear architectures and approximate policy iteration. This new approach ..."
Cited by 462 (12 self)
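A minimal sketch of the combination described here: LSTD-Q fits linear weights for the action-value function of a fixed policy from a batch of samples, and approximate policy iteration alternates that evaluation step with greedy improvement. The feature map `phi`, the action set, and the sample format are caller-supplied assumptions of this sketch.

```python
import numpy as np

def lstdq(samples, phi, policy, n_features, gamma=0.95):
    """LSTD-Q sketch: fit w so that phi(s, a) @ w approximates Q^pi
    from a fixed batch of (s, a, r, s') samples."""
    A = np.zeros((n_features, n_features))
    b = np.zeros(n_features)
    for s, a, r, s_next in samples:
        f = phi(s, a)
        f_next = phi(s_next, policy(s_next))
        A += np.outer(f, f - gamma * f_next)
        b += r * f
    return np.linalg.solve(A + 1e-6 * np.eye(n_features), b)  # small ridge for stability

def lspi(samples, phi, actions, n_features, gamma=0.95, iters=20):
    """Approximate policy iteration: alternate LSTD-Q evaluation with
    greedy improvement until the weights stop changing."""
    w = np.zeros(n_features)
    for _ in range(iters):
        policy = lambda s, w=w: max(actions, key=lambda a: phi(s, a) @ w)  # greedy in current w
        w_new = lstdq(samples, phi, policy, n_features, gamma)
        if np.linalg.norm(w_new - w) < 1e-6:
            break
        w = w_new
    return w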
Least angle regression
, 2004
"... The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to s ..."
Cited by 1326 (37 self)
implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS
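LARS itself is more involved than fits here, so the sketch below only illustrates the Lasso objective it solves, 0.5 ||y - Xb||^2 + lam ||b||_1, via plain coordinate descent with soft-thresholding; it reaches a single value of lam rather than the whole path that LARS computes an order of magnitude more cheaply.

```python
import numpy as np

def lasso_cd(X, y, lam, iters=200):
    """Lasso sketch via coordinate descent, minimizing
    0.5 * ||y - X b||^2 + lam * ||b||_1.  Illustrates the penalized
    objective discussed above; it is not the LARS path algorithm."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]        # partial residual excluding feature j
            rho = X[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]  # soft-threshold
    return b
```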
Finite-time analysis of the multiarmed bandit problem
- Machine Learning
, 2002
"... Abstract. Reinforcement learning policies face the exploration versus exploitation dilemma, i.e. the search for a balance between exploring the environment to find profitable actions while taking the empirically best action as often as possible. A popular measure of a policy’s success in addressing ..."
Cited by 817 (15 self)
this dilemma is the regret, that is the loss due to the fact that the globally optimal policy is not followed all the times. One of the simplest examples of the exploration/exploitation dilemma is the multi-armed bandit problem. Lai and Robbins were the first ones to show that the regret for this problem has
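A minimal sketch of the UCB1 policy analyzed in this paper: play the arm with the highest empirical mean plus a confidence bonus, which the paper shows keeps the regret logarithmic in the horizon. The `pull` callback and rewards bounded in [0, 1] are assumptions of this sketch.

```python
import numpy as np

def ucb1(pull, n_arms, horizon):
    """UCB1 sketch: empirical mean plus sqrt(2 ln t / n_i) exploration bonus.
    `pull(arm)` is a caller-supplied function returning a reward in [0, 1]."""
    counts = np.zeros(n_arms)
    means = np.zeros(n_arms)
    for arm in range(n_arms):                         # play each arm once to initialize
        means[arm] = pull(arm)
        counts[arm] = 1
    for t in range(n_arms, horizon):
        bonus = np.sqrt(2.0 * np.log(t + 1) / counts)
        arm = int(np.argmax(means + bonus))           # optimism in the face of uncertainty
        r = pull(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # incremental mean update
    return means, counts
```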
Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming
- Journal of the ACM
, 1995
"... We present randomized approximation algorithms for the maximum cut (MAX CUT) and maximum 2-satisfiability (MAX 2SAT) problems that always deliver solutions of expected value at least .87856 times the optimal value. These algorithms use a simple and elegant technique that randomly rounds the solution ..."
Cited by 1211 (13 self)
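A minimal sketch of the MAX CUT algorithm: solve the semidefinite relaxation, factor the solution into unit vectors, and round with a random hyperplane, which in expectation retains at least about .87856 of the optimum. cvxpy is used for the SDP purely for illustration; the paper does not prescribe a solver.

```python
import numpy as np
import cvxpy as cp

def goemans_williamson_maxcut(W, rounds=100, seed=0):
    """Goemans-Williamson sketch: SDP relaxation plus random hyperplane rounding.
    W is a symmetric nonnegative weight matrix with zero diagonal."""
    n = W.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    objective = cp.Maximize(0.25 * cp.sum(cp.multiply(W, 1 - X)))
    cp.Problem(objective, [X >> 0, cp.diag(X) == 1]).solve()
    # Factor X ~= V V^T so each vertex gets a unit vector on the sphere.
    vals, vecs = np.linalg.eigh(X.value)
    V = vecs * np.sqrt(np.clip(vals, 0, None))
    rng = np.random.default_rng(seed)
    best, best_cut = -np.inf, None
    for _ in range(rounds):                      # random hyperplane rounding, keep the best cut
        side = np.sign(V @ rng.standard_normal(n))
        weight = 0.25 * np.sum(W * (1 - np.outer(side, side)))
        if weight > best:
            best, best_cut = weight, side
    return best_cut, best
```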