Results 1–10 of 10
De-Noising by Soft-Thresholding
, 1992
Abstract

Cited by 1279 (14 self)
Donoho and Johnstone (1992a) proposed a method for reconstructing an unknown function f on [0, 1] from noisy data d_i = f(t_i) + z_i, i = 0, …, n−1, t_i = i/n, with z_i iid N(0, 1). The reconstruction f̂_n is defined in the wavelet domain by translating all the empirical wavelet coefficients of d towards 0 by an amount √(2 log n)/√n. We prove two results about that estimator. [Smooth]: With high probability, f̂_n is at least as smooth as f, in any of a wide variety of smoothness measures. [Adapt]: The estimator comes nearly as close in mean square to f as any measurable estimator can come, uniformly over balls in each of two broad scales of smoothness classes. These two properties are unprecedented in several ways. Our proof of these results develops new facts about abstract statistical inference and its connection with an optimal recovery model.
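As an aside, the translate-towards-zero rule described in this abstract is soft thresholding. A minimal pure-Python sketch (the function name and the unit-scale normalization are illustrative assumptions, not the paper's notation):

```python
import math

def soft_threshold(y, t):
    """Shrink y towards 0 by t; anything within [-t, t] maps to 0."""
    return math.copysign(max(abs(y) - t, 0.0), y)

# Universal threshold sqrt(2 log n) / sqrt(n) from the abstract,
# applied to a few made-up empirical wavelet coefficients.
n = 1024
t = math.sqrt(2 * math.log(n)) / math.sqrt(n)
shrunk = [soft_threshold(c, t) for c in (0.5, -0.01, 0.2)]
```

Coefficients below the threshold in magnitude (here the −0.01) are set exactly to zero, which is what makes the reconstruction at least as smooth as f with high probability.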
Minimax Estimation via Wavelet Shrinkage
, 1992
Abstract

Cited by 321 (29 self)
We attempt to recover an unknown function from noisy, sampled data. Using orthonormal bases of compactly supported wavelets, we develop a nonlinear method which works in the wavelet domain by simple nonlinear shrinkage of the empirical wavelet coefficients. The shrinkage can be tuned to be nearly minimax over any member of a wide range of Triebel- and Besov-type smoothness constraints, and asymptotically minimax over Besov bodies with p ≤ q. Linear estimates cannot achieve even the minimax rates over Triebel and Besov classes with p < 2, so our method can significantly outperform every linear method (kernel, smoothing spline, sieve, …) in a minimax sense. Variants of our method based on simple threshold nonlinearities are nearly minimax. Our method possesses the interpretation of spatial adaptivity: it reconstructs using a kernel which may vary in shape and bandwidth from point to point, depending on the data. Least favorable distributions for certain of the Triebel and Besov scales generate objects with sparse wavelet transforms. Many real objects have similarly sparse transforms, which suggests that these minimax results are relevant for practical problems. Sequels to this paper discuss practical implementation, spatial adaptation properties and applications to inverse problems.
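The "simple threshold nonlinearities" mentioned in this abstract include hard thresholding (keep-or-kill) alongside soft thresholding. A pure-Python sketch contrasting the two (function names and example coefficients are illustrative):

```python
import math

def hard_threshold(y, t):
    """Keep-or-kill: zero out coefficients with |y| <= t, keep the rest unchanged."""
    return y if abs(y) > t else 0.0

def soft_threshold(y, t):
    """Shrink towards 0 by t, zeroing everything within [-t, t]."""
    return math.copysign(max(abs(y) - t, 0.0), y)

# Hard thresholding leaves surviving coefficients unbiased;
# soft thresholding shrinks them, trading bias for extra smoothness.
coeffs = [2.5, 0.3, -1.7, 0.05]
hard = [hard_threshold(c, 0.5) for c in coeffs]
soft = [soft_threshold(c, 0.5) for c in coeffs]
```

Both rules zero the same small coefficients; they differ only on the survivors, which is why both variants can be nearly minimax.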
Wavelet Shrinkage: Asymptopia?
 Journal of the Royal Statistical Society, Ser. B
, 1995
Abstract

Cited by 295 (36 self)
Considerable effort has been directed recently to develop asymptotically minimax methods in problems of recovering infinite-dimensional objects (curves, densities, spectral densities, images) from noisy data. A rich and complex body of work has evolved, with nearly or exactly minimax estimators being obtained for a variety of interesting problems. Unfortunately, the results have often not been translated into practice, for a variety of reasons: sometimes, similarity to known methods; sometimes, computational intractability; and sometimes, lack of spatial adaptivity. We discuss a method for curve estimation based on n noisy data: one translates the empirical wavelet coefficients towards the origin by an amount √(2 log n)/√n. The method is different from methods in common use today, is computationally practical, and is spatially adaptive; thus it avoids a number of previous objections to minimax estimators. At the same time, the method is nearly minimax for a wide variety of loss functions (e.g. pointwise error, global error measured in L^p norms, pointwise and global error in estimation of derivatives) and for a wide range of smoothness classes, including standard Hölder classes, Sobolev classes, and Bounded Variation. This is a much broader near-optimality than anything previously proposed in the minimax literature. Finally, the theory underlying the method is interesting, as it exploits a correspondence between statistical questions and questions of optimal recovery and information-based complexity.
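The full recipe sketched in this abstract (transform, translate detail coefficients towards the origin, invert) can be illustrated end-to-end in pure Python using the Haar wavelet. The Haar basis and the threshold normalization sigma·√(2 log n) are illustrative stand-ins here, not the paper's exact construction:

```python
import math

def haar_forward(x):
    """Orthonormal Haar DWT of a length-2^k list.
    Returns (coarsest approximation coefficient, detail levels fine-to-coarse)."""
    details = []
    while len(x) > 1:
        a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
        d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
        details.append(d)
        x = a
    return x[0], details

def haar_inverse(approx, details):
    """Invert haar_forward exactly."""
    x = [approx]
    for d in reversed(details):
        x = [v for ai, di in zip(x, d)
             for v in ((ai + di) / math.sqrt(2), (ai - di) / math.sqrt(2))]
    return x

def denoise(data, sigma=1.0):
    """Translate every detail coefficient towards 0 by sigma*sqrt(2 log n)."""
    t = sigma * math.sqrt(2 * math.log(len(data)))
    a, details = haar_forward(list(data))
    shrunk = [[math.copysign(max(abs(c) - t, 0.0), c) for c in level]
              for level in details]
    return haar_inverse(a, shrunk)
```

Because the threshold acts level by level on local differences, the effective smoothing adapts from point to point, which is the spatial adaptivity the abstract emphasizes.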
Nonlinear solution of linear inverse problems by wavelet-vaguelette decomposition
, 1992
Abstract

Cited by 251 (12 self)
We describe the Wavelet-Vaguelette Decomposition (WVD) of a linear inverse problem. It is a substitute for the singular value decomposition (SVD) of an inverse problem, and it exists for a class of special inverse problems of homogeneous type, such as numerical differentiation, inversion of Abel-type transforms, certain convolution transforms, and the Radon Transform. We propose to solve ill-posed linear inverse problems by nonlinearly "shrinking" the WVD coefficients of the noisy, indirect data. Our approach offers significant advantages over traditional SVD inversion in the case of recovering spatially inhomogeneous objects. We suppose that observations are contaminated by white noise and that the object is an unknown element of a Besov space. We prove that nonlinear WVD shrinkage can be tuned to attain the minimax rate of convergence, for L² loss, over the entire Besov scale. The important case of Besov spaces B_{p,q}, p < 2, which model spatial inhomogeneity, is included. In comparison, linear procedures, SVD included, cannot attain optimal rates of convergence over such classes in the case p < 2. For example, our methods achieve faster rates of convergence, for objects known to lie in the Bump Algebra or in Bounded Variation, than any linear procedure.
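The contrast with naive SVD inversion can be seen in a toy diagonal model y_i = s_i·x_i + ε_i, sketched below in pure Python. This uses the SVD basis rather than actual vaguelettes, and the decaying singular values, sparse object, noise draws, and threshold are all made-up illustrative numbers:

```python
import math

# Toy ill-posed problem: diagonal operator with decaying singular values.
s = [1.0, 0.5, 0.1, 0.01]              # singular values (illustrative)
x_true = [1.0, 0.8, 0.0, 0.0]          # sparse object (illustrative)
noise = [0.02, -0.01, 0.015, -0.02]    # fixed "noise" draws (illustrative)
y = [si * xi + e for si, xi, e in zip(s, x_true, noise)]

# Naive SVD inversion divides by s_i, amplifying noise in small-s_i modes.
x_naive = [yi / si for yi, si in zip(y, s)]

def soft(v, t):
    """Soft-threshold a coefficient before inverting."""
    return math.copysign(max(abs(v) - t, 0.0), v)

t = 0.05  # threshold (illustrative)
x_shrunk = [soft(yi, t) / si for yi, si in zip(y, s)]

def sq_err(est):
    return sum((a - b) ** 2 for a, b in zip(est, x_true))
```

The shrunken estimate kills the pure-noise modes that naive inversion blows up by 1/s_i, which is the mechanism behind the faster rates claimed for spatially inhomogeneous objects.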
Comparison of Feature Selection Methods in Support Vector Machines
, 2012
Abstract
Support vector machines (SVM) may perform poorly in the presence of noise variables; in addition, it is difficult to identify the importance of each variable in the resulting classifier. Feature selection can improve both the interpretability and the accuracy of SVM. Most existing studies concern feature selection in the linear SVM through penalty functions yielding sparse solutions. Note that in practice one usually adopts nonlinear kernels for classification accuracy, so feature selection is still desirable for nonlinear SVMs. In this paper, we compare the performance of nonlinear feature selection methods such as component selection and smoothing operator (COSSO) and kernel iterative feature extraction (KNIFE) on simulated and real data.
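The penalty-function route for the linear SVM mentioned in this abstract can be sketched with subgradient descent on hinge loss plus an L1 penalty. Everything here (the trainer, the step sizes, the toy data with one pure-noise feature) is an illustrative stand-in, not COSSO or KNIFE:

```python
import random

def train_l1_svm(X, y, lam=0.1, lr=0.01, epochs=200):
    """Subgradient descent on hinge loss + L1 penalty: a sketch of
    penalty-based (sparsity-inducing) feature selection for the linear SVM."""
    dim = len(X[0])
    w = [0.0] * dim
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
            for j in range(dim):
                grad = (-yi * xi[j]) if margin < 1 else 0.0
                grad += lam * ((w[j] > 0) - (w[j] < 0))  # L1 subgradient
                w[j] -= lr * grad
    return w

# Two informative features plus one pure-noise feature (made-up data).
random.seed(0)
X = [[1, 1, random.gauss(0, 1)] for _ in range(20)] + \
    [[-1, -1, random.gauss(0, 1)] for _ in range(20)]
y = [1] * 20 + [-1] * 20
w = train_l1_svm(X, y)
```

The L1 subgradient keeps pulling the noise-feature weight towards zero while the informative weights settle at the margin, which is the sparsity mechanism the abstract refers to.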
RANK ESTIMATING EQUATIONS FOR PARTIAL SPLINE MODELS WITH MONOTONICITY
, 1999
Abstract
 Add to MetaCart
(Show Context)
Abstract: For partial spline models with a monotone nonlinear component, a class of monotone estimating equations is proposed for estimating the slope parameters of the vector of covariables Z of the linear component, while adjusting for the corresponding ranks of the vector of covariables X of the nonlinear component. This approach avoids the technical complications due to the smoothing of estimators for the nonlinear component with monotonicity, as well as the curse of dimensionality. Also, computationally, our inferences do not involve the unknown error probability density function. As an R-estimator taking into account the rank correlation between Y and X, the asymptotic relative efficiency with respect to other estimators ignoring X is proportional to the Spearman correlation coefficient between them.
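The efficiency statement above involves the Spearman rank correlation between Y and X. A pure-Python sketch of its computation via the classical d² formula (assuming no ties; function names are illustrative):

```python
def ranks(v):
    """1-based rank of each value, assuming no ties."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation: 1 - 6*sum(d_i^2)/(n(n^2-1)), no ties."""
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

rho = spearman([1.0, 2.5, 3.1, 4.0], [2.0, 4.0, 6.0, 8.0])
```

Any strictly monotone relationship gives rho = 1, which is why rank-based adjustment sidesteps smoothing of the nonlinear component.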
TESTS FOR VARIANCE COMPONENTS IN VARYING COEFFICIENT MIXED MODELS
 doi:http://dx.doi.org/10.5705/ss.2009.229
Abstract
Abstract: We consider a general class of varying coefficient mixed models where random effects are introduced to account for between-subject variation. To address the question of whether a varying coefficient mixed model can be reduced to a simpler varying coefficient model, we develop one-sided tests for the null hypothesis that all the variance components are zero. In addition to the purely null-based standard quasi-score test (SQT), we propose an extended quasi-score test (EQT) by constructing estimators that are consistent under both the null and alternative hypotheses. No assumptions are required for the distributions of random effects and random errors. Both SQT and EQT are consistent for global alternatives and local alternatives distinct at certain rates from the null. Furthermore, the asymptotic null distributions are simple and easy to use in practice. For comparison, we also adapt the one-sided score test (SST) in Silvapulle and Silvapulle (1995) and the likelihood ratio test (LRT) in Fan, Zhang, and Zhang (2001). Extensive simulations indicate that all proposed tests perform well and the EQT is more powerful than SQT, SST, and LRT. A data example is analyzed for illustration. Key words and phrases: Extended quasi-likelihood, likelihood ratio test, longitudinal data, random effects, score test, smoothing spline, variance components, varying coefficient models.
BIVARIATE APPROXIMATION BY DISCRETE SMOOTHING PDE SPLINES
 Monografías del Seminario Matemático García de Galdeano 33, 169–176 (2006)
Abstract
Abstract. This paper deals with the construction and characterization of discrete PDE splines. For this purpose, we need a PDE (usually an elliptic one), certain boundary conditions, and a set of points to approximate. We give two results about the convergence of a discrete PDE spline to a function of a fixed space in two different cases: (1) when the approximation points are fixed; (2) when the boundary points are fixed. We provide a numerical and graphical example of approximation by discrete PDE splines.
An Iterative Nonlinear Regression Method for Microarray Data Normalization
 1874-1363/07 2007 Bentham Science Publishers Ltd.
Abstract
Abstract: Normalization is a prerequisite for almost all follow-up steps in microarray data analysis. Accurate normalization across different experiments and phenotypes assures a common base for comparative yet quantitative studies using gene expression data. In this paper, we report a novel normalization approach, namely the iterative nonlinear regression (INR) method, which exploits concurrent identification of invariantly expressed genes (IEGs) and implementation of nonlinear regression normalization. The INR scheme features an iterative process that performs the following two steps alternately: (1) selection of IEGs and (2) estimation of a nonlinear regression function for normalization. We demonstrate the principle and performance of the INR approach on two real microarray data sets. As compared to major peer methods (e.g., the linear regression method, the Loess method, and the iterative ranking method), the INR method shows improved performance in achieving low expression variance across replicates and excellent fold-change preservation for differentially expressed genes.
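The two-step alternation described in this abstract can be sketched in pure Python, with a plain least-squares line standing in for the nonlinear regression. The linear stand-in, the residual cutoff, and the toy expression values are all illustrative assumptions:

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept on the current invariant set."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return b, my - b * mx

def inr_normalize(x, y, cutoff=0.5, iters=5):
    """Alternate (1) selecting putative invariantly expressed genes
    (small residual) and (2) refitting the normalization curve on them."""
    idx = list(range(len(x)))                    # start from all genes
    for _ in range(iters):
        b, a = fit_line([x[i] for i in idx], [y[i] for i in idx])
        resid = [abs(y[i] - (a + b * x[i])) for i in range(len(x))]
        idx = [i for i, r in enumerate(resid) if r < cutoff] or idx
    return a, b, idx

# Mostly invariant genes near y = x, plus two "differentially expressed" ones.
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [1.0, 2.1, 2.9, 4.0, 5.1, 6.0, 10.0, 1.0]
a, b, invariant = inr_normalize(x, y)
```

After a few passes the two differentially expressed genes fall out of the invariant set, so the normalization curve is estimated only from genes that behave consistently across channels.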
Using Multiple Generalized Cross-Validation as a Method for Varying Smoothing Effects
, 2006
Abstract
The most commonly used method for the solution of ill-posed problems is the Tikhonov regularization method. The central idea of the Tikhonov scheme is the replacement of the original ill-posed problem

min_x ‖Kx − d‖₂²  (1)

with the well-posed problem

min_x (‖Kx − d‖₂² + λ²‖Lx‖₂²)  (2)

The solution of this regularization method depends on the choice of the prior, L, and the regularization parameter, λ. We show that rewriting the Tikhonov problem (2) in a multilevel-regularization approach results in

min_x (‖Kx − d‖₂² + Σᵢ₌₁^q λᵢ²‖Lᵢx‖₂²)  (3)

where q is the number of subdomains of the solution, Lᵢ is the local regularization matrix, and the regularization matrix Λ is diagonal with entries

Λ = diag(λ₁, …, λ_q).  (4)

The major difficulty in the solution of (3) is the determination of the regularization parameters λᵢ. For the case of a single (1-D) regularization parameter [1], there are two popular methods: the L-curve [3] and Generalized Cross-Validation (GCV) [1,2,4]. In this work, we use a multiple-GCV algorithm to determine the regularization parameters. This requires evaluating the GCV function

GCV(Λ) = (1/m)‖(I − K(KᵀK + Λ²)⁻¹Kᵀ)d‖₂² / [(1/m) trace(I − K(KᵀK + Λ²)⁻¹Kᵀ)]²
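For the single-parameter special case, GCV-based selection of λ can be sketched in pure Python on a toy diagonal problem, where the influence matrix K(KᵀK + λ²I)⁻¹Kᵀ is diagonal with entries sᵢ²/(sᵢ² + λ²). The singular values, data values, and λ grid are illustrative assumptions:

```python
# Toy diagonal problem: K = diag(s), so all matrices in GCV are diagonal.
s = [1.0, 0.6, 0.3, 0.1, 0.03]        # singular values (illustrative)
d = [0.95, 0.55, 0.2, 0.3, -0.25]     # noisy data (illustrative)
m = len(s)

def gcv(lam):
    """GCV(lam) = (1/m)||(I - A)d||^2 / ((1/m) tr(I - A))^2 for diagonal A."""
    residual_factor = [1 - si * si / (si * si + lam * lam) for si in s]
    num = sum((f * di) ** 2 for f, di in zip(residual_factor, d)) / m
    den = (sum(residual_factor) / m) ** 2
    return num / den

grid = [0.01 * k for k in range(1, 101)]          # lambda in (0, 1]
lam_best = min(grid, key=gcv)                     # grid minimizer of GCV
x_hat = [si * di / (si * si + lam_best ** 2) for si, di in zip(s, d)]
```

The multiple-GCV approach of the abstract generalizes this by minimizing over a vector Λ = diag(λ₁, …, λ_q) instead of a single scalar λ.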