Results 1–10 of 27,198
On the infeasibility of training neural networks with small mean squared error
Abstract: "... the infeasibility of training neural networks with small mean squared error ..."
A competitive mean-squared error approach to beamforming
IEEE Transactions on Signal Processing. Cited by 5 (0 self).
Abstract: "... We treat the problem of beamforming for signal estimation where the goal is to estimate a signal amplitude from a set of array observations. Conventional beamforming methods typically aim at maximizing the signal-to-interference-plus-noise ratio (SINR). However, this does not guarantee a small mean-squared error (MSE), so that on average the resulting signal estimate can be far from the true signal. Here, we consider strategies that attempt to minimize the MSE between the estimated and unknown signal waveforms. The methods we suggest all maximize the SINR but at the same time ..."
Competitive mean-squared error beamforming
Presented at the 12th Annual Workshop on Adaptive Sensor Array Processing, 2004. Cited by 3 (3 self).
Abstract: "... We consider the problem of designing a linear beamformer to estimate a source signal s(t) from array observations. Conventional beamforming methods typically aim at maximizing the signal-to-interference-plus-noise ratio (SINR). However, this does not guarantee a small mean-squared error (MSE), hence ..."
Mutual information and minimum mean-square error in Gaussian channels
IEEE Trans. Inform. Theory, 2005. Cited by 288 (34 self).
Abstract: "... This paper deals with arbitrarily distributed finite-power input signals observed through an additive Gaussian noise channel. It shows a new formula that connects the input-output mutual information and the minimum mean-square error (MMSE) achievable by optimal estimation of the input given the output ..."
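For context, the "new formula" this abstract refers to is commonly known as the I-MMSE relation; stated for a scalar channel with mutual information in nats, it reads:

```latex
% I-MMSE relation: X is the finite-power input, N ~ N(0,1) the noise,
% and the channel output is Y = \sqrt{snr}\, X + N.
\frac{\mathrm{d}}{\mathrm{d}\,\mathsf{snr}}\, I\bigl(X;\, \sqrt{\mathsf{snr}}\, X + N\bigr)
  = \frac{1}{2}\, \mathrm{mmse}(\mathsf{snr}),
\qquad
\mathrm{mmse}(\mathsf{snr})
  = \mathbb{E}\!\left[\bigl(X - \mathbb{E}[X \mid \sqrt{\mathsf{snr}}\, X + N]\bigr)^{2}\right].
```

That is, the derivative of the mutual information with respect to the signal-to-noise ratio equals half the MMSE of estimating the input from the output, for any input distribution with finite power.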
Competitive Mean-Squared Error Beamforming
Abstract: "... Beamforming methods are used extensively in a variety of different areas, where one of their main goals is to estimate the source signal amplitude s(t) from the array observations y(t) = s(t)a + i(t) + e(t), t = 1, 2, ..., where a is the steering vector, i(t) is the interference, and e(t) is ..."
"... However, this approach does not guarantee a small MSE, so that on average, the resulting estimate of s(t) may be far from s(t). Instead, it would be desirable to design a robust beamformer whose performance is reasonably good across all possible signal powers. In our work, we propose a minimax regret ..."
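The SINR-maximizing baseline these beamforming abstracts argue against is the classical MVDR (Capon) beamformer. A minimal sketch under the abstract's signal model y(t) = s(t)a + i(t) + e(t) follows; the array geometry, dimensions, and noise level are illustrative assumptions, not taken from the papers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Signal model from the abstract: y(t) = s(t) a + i(t) + e(t).
# Dimensions and noise level below are illustrative assumptions.
M, T = 8, 2000                                        # sensors, snapshots
a = np.exp(1j * np.pi * np.arange(M) * np.sin(0.3))   # steering vector (hypothetical ULA)
s = rng.normal(size=T) + 1j * rng.normal(size=T)      # source amplitude s(t)
e = 0.1 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T)))
y = np.outer(a, s) + e                                # array observations

# Classical MVDR (Capon) weights: w = R^{-1} a / (a^H R^{-1} a).
# This maximizes SINR subject to w^H a = 1; as the papers above note,
# maximizing SINR alone does not guarantee a small MSE in s_hat.
R = y @ y.conj().T / T                                # sample covariance
Ri_a = np.linalg.solve(R, a)
w = Ri_a / (a.conj() @ Ri_a)

s_hat = w.conj() @ y                                  # estimate of s(t)
mse = np.mean(np.abs(s_hat - s) ** 2)
```

The distortionless constraint wᴴa = 1 holds by construction, so the residual error here is purely the filtered noise term wᴴe(t).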
Fit indices in covariance structure modeling: Sensitivity to underparameterized model misspecification
Psychological Methods, 1998. Cited by 543 (0 self).
Abstract: "... This study evaluated the sensitivity of maximum likelihood (ML), generalized least squares (GLS), and asymptotic distribution-free (ADF)-based fit indices to model misspecification, under conditions that varied sample size and distribution. The effect of violating assumptions of asymptotic robustn ..."
"... ), and the ML- and GLS-based gamma hat, McDonald's centrality index (1989; Mc), and root-mean-square error of approximation (RMSEA) were the most sensitive indices to models with misspecified factor loadings. With ML and GLS methods, we recommend the use of SRMR, supplemented by TLI, BL89, RNI, CFI, gamma ..."
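For reference, the RMSEA index named in this abstract is conventionally computed from the model chi-square statistic; a standard form (the Steiger-Lind formulation, stated here from general knowledge rather than from this paper) is:

```latex
% Root-mean-square error of approximation: \chi^2 is the model fit
% statistic, df the model degrees of freedom, N the sample size.
\mathrm{RMSEA} = \sqrt{\max\!\left( \frac{\chi^{2} - df}{df\,(N - 1)},\; 0 \right)}
```

Values near zero indicate close fit per degree of freedom; the max(·, 0) clamp prevents a negative radicand when the chi-square falls below its degrees of freedom.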
An empirical comparison of voting classification algorithms: Bagging, boosting, and variants
Machine Learning, 1999. Cited by 707 (2 self).
Abstract: "... Methods for voting classification algorithms, such as Bagging and AdaBoost, have been shown to be very successful in improving the accuracy of certain classifiers for artificial and real-world datasets. We review these algorithms and describe a large empirical study comparing several vari ..."
"... in the average tree size in AdaBoost trials and its success in reducing the error. We compare the mean-squared error of voting methods to non-voting methods and show that the voting methods lead to large and significant reductions in the mean-squared errors. Practical problems that arise in implementing boosting ..."
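The MSE reduction from voting that this abstract reports can be illustrated in miniature: averaging many bootstrap-trained predictors (bagging) shrinks the variance component of the error. The predictor below is a deliberately simple 1-nearest-neighbour rule on 1-D data, chosen only to keep the sketch self-contained; it is not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def predict_1nn(x_train, y_train, x_test):
    """Return the training label of each test point's nearest neighbour."""
    idx = np.abs(x_train[:, None] - x_test[None, :]).argmin(axis=0)
    return y_train[idx]

n, B = 200, 50                                  # training size, ensemble size
x = rng.uniform(-3, 3, n)
y = np.sin(x) + 0.3 * rng.normal(size=n)        # noisy regression targets
x_test = np.linspace(-3, 3, 100)
y_true = np.sin(x_test)

single = predict_1nn(x, y, x_test)              # one high-variance predictor

ensemble_sum = np.zeros_like(x_test)
for _ in range(B):                              # bagged (voted) ensemble
    i = rng.integers(0, n, n)                   # bootstrap resample
    ensemble_sum += predict_1nn(x[i], y[i], x_test)
bagged = ensemble_sum / B

mse_single = np.mean((single - y_true) ** 2)
mse_bagged = np.mean((bagged - y_true) ** 2)
```

Because each bootstrap replicate picks different nearest neighbours, the average over the ensemble behaves like a smoothed predictor, so `mse_bagged` comes out well below `mse_single` on this toy problem.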
Image denoising using a scale mixture of Gaussians in the wavelet domain
IEEE Trans. Image Processing, 2003. Cited by 513 (17 self).
Abstract: "... We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vecto ..."
"... published methods, both visually and in terms of mean squared error."
An Introduction to the Kalman Filter
University of North Carolina at Chapel Hill, 1995. Cited by 1146 (13 self).
Abstract: "... In 1960, R. E. Kalman published his famous paper describing a recursive solution to the discrete-data linear filtering problem. Since that time, due in large part to advances in digital computing, the Kalman filter has been the subject of extensive research and application, particularly in the area of autonomous or assisted navigation. The Kalman filter is a set of mathematical equations that provides an efficient computational (recursive) means to estimate the state of a process, in a way that minimizes the mean of the squared error. The filter is very powerful in several aspects: it supports ..."
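The recursive predict/correct structure this abstract describes can be sketched for the simplest possible case: a scalar random-walk state observed in additive Gaussian noise. The model and constants below are our illustrative assumptions, not the report's worked example:

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D Kalman filter: state model x_k = x_{k-1} + w_k (w_k ~ N(0, Q)),
# measurement model z_k = x_k + v_k (v_k ~ N(0, R)).
Q, R = 1e-4, 0.1
x_true = 0.5                 # constant true state (random walk with tiny Q)
x_hat, P = 0.0, 1.0          # initial estimate and error covariance

estimates = []
for _ in range(200):
    z = x_true + np.sqrt(R) * rng.normal()   # noisy measurement
    # Time update (predict): random-walk dynamics leave x_hat unchanged,
    # but uncertainty grows by the process noise.
    P = P + Q
    # Measurement update (correct): the Kalman gain K weighs the
    # prediction against the new measurement to minimize the MSE.
    K = P / (P + R)
    x_hat = x_hat + K * (z - x_hat)
    P = (1 - K) * P
    estimates.append(x_hat)
```

After enough measurements the estimate settles near the true state, with `P` converging to the filter's steady-state error covariance.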
Boosting the margin: A new explanation for the effectiveness of voting methods
In Proceedings, International Conference on Machine Learning, 1997. Cited by 897 (52 self).
Abstract: "... One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this ..."
"... that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins ..."