CiteSeerX

Results 1 - 10 of 49,908

De-Noising By Soft-Thresholding

by David L. Donoho, 1992
"... Donoho and Johnstone (1992a) proposed a method for reconstructing an unknown function f on [0; 1] from noisy data di = f(ti)+ zi, iid i =0;:::;n 1, ti = i=n, zi N(0; 1). The reconstruction fn ^ is de ned in the wavelet domain by translating all the empirical wavelet coe cients of d towards 0 by an a ..."
Abstract - Cited by 1249 (14 self) - Add to MetaCart
Donoho and Johnstone (1992a) proposed a method for reconstructing an unknown function f on [0, 1] from noisy data d_i = f(t_i) + σ z_i, i = 0, …, n − 1, t_i = i/n, with z_i iid N(0, 1). The reconstruction f̂_n is defined in the wavelet domain by translating all the empirical wavelet coefficients of d towards 0
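
The thresholding rule itself is one line. Below is a minimal sketch of wavelet-domain soft-thresholding in Python; PyWavelets (pywt) is an assumed dependency, and the universal threshold σ√(2 log n) and the choice to leave the coarsest approximation coefficients untouched are standard conventions, not details taken from this listing.

    import numpy as np
    import pywt  # PyWavelets, assumed installed

    def soft_threshold(x, t):
        # Translate each coefficient towards 0 by t; coefficients
        # smaller than t in magnitude are set exactly to 0.
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def wavelet_denoise(d, wavelet="db4", sigma=1.0):
        # Decompose, shrink the detail coefficients, reconstruct.
        coeffs = pywt.wavedec(d, wavelet)
        t = sigma * np.sqrt(2.0 * np.log(len(d)))  # universal threshold
        shrunk = [coeffs[0]] + [soft_threshold(c, t) for c in coeffs[1:]]
        return pywt.waverec(shrunk, wavelet)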

An iterative thresholding algorithm for linear inverse problems with a sparsity constraint

by Ingrid Daubechies, Michel Defrise, Christine De Mol, 2008
"... ..."
Abstract - Cited by 752 (9 self) - Add to MetaCart
Abstract not found
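
No abstract is indexed here, but the title refers to the iterative soft-thresholding scheme now commonly called ISTA. A minimal sketch under the standard assumptions (operator norm of Φ below 1, objective ‖y − Φx‖² + 2λ‖x‖₁); the variable names and parameter values are illustrative:

    import numpy as np

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(Phi, y, lam=0.1, iterations=500):
        # Landweber step followed by soft-thresholding; converges to a
        # minimizer of ||y - Phi x||^2 + 2*lam*||x||_1 when ||Phi|| < 1.
        x = np.zeros(Phi.shape[1])
        for _ in range(iterations):
            x = soft(x + Phi.T @ (y - Phi @ x), lam)
        return x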

Wavelets and Subband Coding

by Martin Vetterli, Jelena Kovačević, 2007
"... ..."
Abstract - Cited by 608 (32 self) - Add to MetaCart
Abstract not found

A Signal Processing Approach To Fair Surface Design

by Gabriel Taubin, 1995
"... In this paper we describe a new tool for interactive free-form fair surface design. By generalizing classical discrete Fourier analysis to two-dimensional discrete surface signals -- functions defined on polyhedral surfaces of arbitrary topology --, we reduce the problem of surface smoothing, or fai ..."
Abstract - Cited by 668 (15 self) - Add to MetaCart
In this paper we describe a new tool for interactive free-form fair surface design. By generalizing classical discrete Fourier analysis to two-dimensional discrete surface signals (functions defined on polyhedral surfaces of arbitrary topology), we reduce the problem of surface smoothing, or fairing, to low-pass filtering. We describe a very simple surface signal low-pass filter algorithm that applies to surfaces of arbitrary topology. As opposed to other existing optimization-based fairing methods, which are computationally more expensive, this is a linear time and space complexity algorithm. With this algorithm, fairing very large surfaces, such as those obtained from volumetric medical data, becomes affordable. By combining this algorithm with surface subdivision methods we obtain a very effective fair surface design technique. We then extend the analysis, and modify the algorithm accordingly, to accommodate different types of constraints. Some constraints can be imposed without any modification of the algorithm, while others require the solution of a small associated linear system of equations. In particular, vertex location constraints, vertex normal constraints, and surface normal discontinuities across curves embedded in the surface can be imposed with this technique. CR Categories and Subject Descriptors: I.3.3 [Computer Graphics]: Picture/Image Generation - display algorithms; I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling - curve, surface, solid, and object representations; J.6 [Computer Applications]: Computer-Aided Engineering - computer-aided design. General Terms: Algorithms, Graphics.
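
A sketch of the flavor of algorithm the abstract describes: repeated local averaging used as a low-pass filter over a mesh. The two-factor λ/μ update follows Taubin's published scheme, but the parameter values, the uniform neighbor weights, and the adjacency-list representation here are illustrative assumptions.

    import numpy as np

    def taubin_smooth(vertices, neighbors, lam=0.5, mu=-0.53, iterations=10):
        # vertices: (n, 3) float array; neighbors: list of index lists.
        # Each pass applies a shrinking step (lam > 0) then an inflating
        # step (mu < -lam), low-pass filtering without overall shrinkage.
        v = np.asarray(vertices, dtype=float).copy()
        for _ in range(iterations):
            for factor in (lam, mu):
                lap = np.zeros_like(v)
                for i, nbrs in enumerate(neighbors):
                    if nbrs:
                        lap[i] = v[nbrs].mean(axis=0) - v[i]
                v = v + factor * lap
        return v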

Markov Random Field Models in Computer Vision

by S. Z. Li, 1994
"... . A variety of computer vision problems can be optimally posed as Bayesian labeling in which the solution of a problem is defined as the maximum a posteriori (MAP) probability estimate of the true labeling. The posterior probability is usually derived from a prior model and a likelihood model. The l ..."
Abstract - Cited by 515 (18 self) - Add to MetaCart
A variety of computer vision problems can be optimally posed as Bayesian labeling, in which the solution of a problem is defined as the maximum a posteriori (MAP) probability estimate of the true labeling. The posterior probability is usually derived from a prior model and a likelihood model. The latter relates to how data is observed and is problem domain dependent. The former depends on how various prior constraints are expressed. Markov random field (MRF) theory is a tool to encode contextual constraints into the prior probability. This paper presents a unified approach for MRF modeling in low and high level computer vision. The unification is made possible due to a recent advance in MRF modeling for high level object recognition. Such unification provides a systematic approach for vision modeling based on sound mathematical principles. Since its beginning in the early 1960s, computer vision research has been evolving from heuristic design of algorithms to syste...
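
The paper is a survey, but the MAP-MRF recipe it unifies can be made concrete with a toy example. The sketch below restores a noisy binary image by greedily minimizing a posterior energy (data term plus Ising smoothness prior) with iterated conditional modes; the energy weights and the choice of ICM rather than a global optimizer are illustrative, not taken from the paper.

    import numpy as np

    def icm_binary(observed, beta=1.0, sweeps=5):
        # MAP labeling of a noisy {0,1} image. The likelihood term charges
        # 1 for disagreeing with the observation; the MRF prior charges
        # beta per disagreeing 4-connected neighbor. ICM updates each
        # pixel to the label of lowest local energy.
        labels = observed.copy()
        h, w = labels.shape
        for _ in range(sweeps):
            for y in range(h):
                for x in range(w):
                    best, best_e = labels[y, x], float("inf")
                    for cand in (0, 1):
                        e = float(cand != observed[y, x])  # data term
                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w:
                                e += beta * (cand != labels[ny, nx])
                        if e < best_e:
                            best, best_e = cand, e
                    labels[y, x] = best
        return labels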

An application-specific protocol architecture for wireless networks

by Wendi Beth Heinzelman, 2000
"... ..."
Abstract - Cited by 1217 (18 self) - Add to MetaCart
Abstract not found

For Most Large Underdetermined Systems of Linear Equations the Minimal ℓ1-norm Solution is also the Sparsest Solution

by David L. Donoho - Comm. Pure Appl. Math, 2004
"... We consider linear equations y = Φα where y is a given vector in R n, Φ is a given n by m matrix with n < m ≤ An, and we wish to solve for α ∈ R m. We suppose that the columns of Φ are normalized to unit ℓ 2 norm 1 and we place uniform measure on such Φ. We prove the existence of ρ = ρ(A) so that ..."
Abstract - Cited by 560 (10 self) - Add to MetaCart
We consider linear equations y = Φα where y is a given vector in R^n, Φ is a given n by m matrix with n < m ≤ An, and we wish to solve for α ∈ R^m. We suppose that the columns of Φ are normalized to unit ℓ2 norm and we place uniform measure on such Φ. We prove the existence of ρ = ρ(A) so that for large n, and for all Φ's except a negligible fraction, the following property holds: for every y having a representation y = Φα0 by a coefficient vector α0 ∈ R^m with fewer than ρ · n nonzeros, the solution α1 of the ℓ1 minimization problem min ‖α‖1 subject to Φα = y is unique and equal to α0. In contrast, heuristic attempts to sparsely solve such systems (greedy algorithms and thresholding) perform poorly in this challenging setting. The techniques include the use of random proportional embeddings and almost-spherical sections in Banach space theory, and deviation bounds for the eigenvalues of random Wishart matrices.
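
The ℓ1 problem in the abstract is a linear program, which is how it is typically solved in practice. A sketch using SciPy (an assumed dependency) with the standard split α = u − v, u, v ≥ 0:

    import numpy as np
    from scipy.optimize import linprog

    def basis_pursuit(Phi, y):
        # min ||alpha||_1  s.t.  Phi @ alpha = y, written as the LP
        # min 1'u + 1'v  s.t.  Phi u - Phi v = y,  u, v >= 0.
        n, m = Phi.shape
        res = linprog(c=np.ones(2 * m),
                      A_eq=np.hstack([Phi, -Phi]), b_eq=y,
                      bounds=(0, None))
        u, v = res.x[:m], res.x[m:]
        return u - v

On a random Φ with a sufficiently sparse generating vector α0, the returned solution coincides with α0, which is the phenomenon the paper quantifies.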

The English noun phrase in its sentential aspect

by Richard Larson, Steven Paul Abney, 1987
"... This dissertation is a defense of the hypothesis that the noun phrase is headed by afunctional element (i.e., \non-lexical &quot; category) D, identi ed with the determiner. In this way, the structure of the noun phrase parallels that of the sentence, which is headed by In (ection), under assump ..."
Abstract - Cited by 509 (4 self) - Add to MetaCart
This dissertation is a defense of the hypothesis that the noun phrase is headed by a functional element (i.e., a "non-lexical" category) D, identified with the determiner. In this way, the structure of the noun phrase parallels that of the sentence, which is headed by Infl(ection), under assumptions now standard within the Government-Binding (GB) framework. The central empirical problem addressed is the question of the proper analysis of the so-called "Poss-ing" gerund in English. This construction possesses simultaneously many properties of sentences and many properties of noun phrases. The problem of capturing this dual aspect of the Poss-ing construction is heightened by current restrictive views of X-bar theory, which, in particular, rule out the obvious structure for Poss-ing, [NP NP VPing], by virtue of its exocentricity. Consideration of languages in which nouns, even the most basic concrete nouns, show agreement (AGR) with their possessors, points to an analysis

Domain Theory

by Samson Abramsky, Achim Jung - Handbook of Logic in Computer Science, 1994
"... Least fixpoints as meanings of recursive definitions. ..."
Abstract - Cited by 546 (25 self) - Add to MetaCart
Least fixpoints as meanings of recursive definitions.
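
That one-sentence abstract is the core idea of the subject, and it can be made concrete: a recursive definition denotes the least fixpoint of a functional, obtained as the limit of the Kleene chain ⊥, F(⊥), F²(⊥), …. A small illustration (factorial as a least fixpoint; encoding ⊥ as a function that raises is an illustrative convention):

    def F(f):
        # The functional whose least fixpoint is the factorial function.
        def g(n):
            return 1 if n == 0 else n * f(n - 1)
        return g

    def bottom(n):
        # The nowhere-defined function: bottom of the domain of partial functions.
        raise RecursionError("undefined")

    approx = bottom
    for _ in range(10):
        approx = F(approx)  # Kleene chain: bottom, F(bottom), F^2(bottom), ...

    print(approx(5))  # 120; F^k(bottom) agrees with factorial on inputs 0..k-1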

Boosting a Weak Learning Algorithm By Majority

by Yoav Freund, 1995
"... We present an algorithm for improving the accuracy of algorithms for learning binary concepts. The improvement is achieved by combining a large number of hypotheses, each of which is generated by training the given learning algorithm on a different set of examples. Our algorithm is based on ideas pr ..."
Abstract - Cited by 516 (15 self) - Add to MetaCart
We present an algorithm for improving the accuracy of algorithms for learning binary concepts. The improvement is achieved by combining a large number of hypotheses, each of which is generated by training the given learning algorithm on a different set of examples. Our algorithm is based on ideas presented by Schapire in his paper "The strength of weak learnability", and represents an improvement over his results. The analysis of our algorithm provides general upper bounds on the resources required for learning in Valiant's polynomial PAC learning framework, which are the best general upper bounds known today. We show that the number of hypotheses that are combined by our algorithm is the smallest number possible. Other outcomes of our analysis are results regarding the representational power of threshold circuits, the relation between learnability and compression, and a method for parallelizing PAC learning algorithms. We provide extensions of our algorithms to cases in which the conc...
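
A toy version of the idea in this abstract: train a weak learner on differently weighted views of the data and combine the resulting hypotheses by majority vote. The reweighting schedule below (double the weight of misclassified examples) is a simplification for illustration; Freund's boost-by-majority uses a more careful weighting scheme, and the decision-stump weak learner is an assumption.

    import numpy as np

    def fit_stump(X, y, w):
        # Weak learner: best single-feature threshold classifier under
        # example weights w. Labels y are in {-1, +1}.
        best, best_err = None, np.inf
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.sign(X[:, j] - t + 1e-12)
                    err = w[pred != y].sum()
                    if err < best_err:
                        best, best_err = (j, t, s), err
        return best

    def predict_stump(X, stump):
        j, t, s = stump
        return s * np.sign(X[:, j] - t + 1e-12)

    def boost_majority(X, y, rounds=25):
        w = np.full(len(y), 1.0 / len(y))
        stumps = []
        for _ in range(rounds):
            stump = fit_stump(X, y, w)
            w[predict_stump(X, stump) != y] *= 2.0  # emphasize mistakes
            w /= w.sum()
            stumps.append(stump)
        return stumps

    def predict(X, stumps):
        # Unweighted majority vote over all weak hypotheses.
        return np.sign(sum(predict_stump(X, s) for s in stumps))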