Results 1 - 10 of 13,580
Set Constraints on Regular Terms
"... Abstract. Set constraints are a useful formalism for verifying properties of programs. Usually, they are interpreted over the universe of finite terms. However, some logic languages allow infinite regular terms, so it seems natural to consider set constraints over this domain. In the paper we show t ..."
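For readers unfamiliar with the domain: a regular (rational) term is an infinite term with only finitely many distinct subterms, so it admits a finite cyclic representation. A minimal sketch in Python (the Term class is an illustrative assumption, not from the paper):

```python
class Term:
    """A term node; regular terms may refer back to themselves."""
    def __init__(self, functor, args=()):
        self.functor = functor
        self.args = list(args)

# The infinite regular term t = f(t) = f(f(f(...))) has only one
# distinct subterm, so one node with a self-edge represents it exactly.
t = Term("f")
t.args.append(t)
```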
Bias of Estimators and Regularization Terms
- in Proceedings of 1998 Workshop on Information-Based Induction Sciences (IBIS'98), Izu, 1998
"... : In this paper, a role of regularization terms (penalty terms) is discussed from the view point of minimizing the generalization error. First the bias of minimum training error estimation is clarified. The bias is caused by the nonlinearity of the learning system and depends on the number of traini ..."
Cited by 4 (0 self)
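As a concrete illustration of a regularization (penalty) term added to the training error, here is a minimal sketch, assuming a linear model and a quadratic penalty; the penalty form and the weight `lam` are illustrative, not taken from the paper:

```python
import numpy as np

def regularized_loss(w, X, y, lam):
    """Mean squared training error plus a quadratic penalty term.

    The penalty biases the fit away from the minimum-training-error
    solution, which is the trade-off the paper analyzes in terms of
    generalization error.
    """
    residual = X @ w - y
    training_error = 0.5 * np.mean(residual ** 2)
    penalty = 0.5 * lam * np.sum(w ** 2)   # the regularization term
    return training_error + penalty
```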
Hybrid Constraints for Regular Terms
"... We define a family of constraints over the domain of regular terms. This family is built by extending equality with general constraints over root labels. We say that the resulting constraints are hybrid. Under the assumption that these constraints are stable with respect to a partial ordering we ..."
Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems
- IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2007
"... Many problems in signal processing and statistical inference involve finding sparse solutions to under-determined, or ill-conditioned, linear systems of equations. A standard approach consists in minimizing an objective function which includes a quadratic (squared ℓ2) error term combined with a spa ..."
Abstract
-
Cited by 539 (17 self)
- Add to MetaCart
sparseness-inducing (ℓ1) regularization term.Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution, and compressed sensing are a few well-known examples of this approach. This paper proposes gradient projection (GP) algorithms for the bound
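The objective named in the abstract is min over x of 0.5‖y − Ax‖² + τ‖x‖1. A minimal fixed-step sketch of gradient projection on the standard split x = u − v with u, v ≥ 0 (the paper's GPSR variants use adaptive step-size rules; the fixed step and parameter names here are illustrative):

```python
import numpy as np

def gpsr(A, y, tau=0.1, iters=200):
    """Gradient projection for min 0.5*||y - A@x||^2 + tau*||x||_1,
    written over the split x = u - v with u, v >= 0; projection onto
    the nonnegative orthant is elementwise clipping at zero."""
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # step from the Lipschitz bound
    u = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])
    for _ in range(iters):
        r = y - A @ (u - v)                   # residual
        u = np.maximum(0.0, u - alpha * (-A.T @ r + tau))
        v = np.maximum(0.0, v - alpha * ( A.T @ r + tau))
    return u - v                              # recovered sparse x
```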
Bounded higher-order unification using regular terms
2013
"... We present a procedure for the bounded unification of higher-order terms [22]. The procedure extends G. P. Huet’s pre-unification procedure [11] with rules for the generation and folding of regular terms. The concise form of the procedure allows the reuse of the preunification correctness proof. Fur ..."
Cited by 1 (1 self)
Stable signal recovery from incomplete and inaccurate measurements
- Comm. Pure Appl. Math., 2006
"... Abstract Suppose we wish to recover a vector x 0 ∈ R m (e.g., a digital signal or image) from incomplete and contaminated observations y = Ax 0 + e; A is an n × m matrix with far fewer rows than columns (n m) and e is an error term. Is it possible to recover x 0 accurately based on the data y? To r ..."
Abstract
-
Cited by 1397 (38 self)
- Add to MetaCart
? To recover x 0 , we consider the solution x to the 1 -regularization problem where is the size of the error term e. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the vector x 0 is sufficiently sparse, then the solution is within the noise level As a first example
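Restated in display form, the recovery program and the stability guarantee quoted above are:

```latex
x^\sharp = \arg\min_x \|x\|_{\ell_1}
  \quad\text{subject to}\quad \|Ax - y\|_{\ell_2} \le \varepsilon,
\qquad
\|x^\sharp - x_0\|_{\ell_2} \le C \cdot \varepsilon,
```

where ε bounds the size of the error term e.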
Estimating the Support of a High-Dimensional Distribution
1999
"... Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S is bounded by some a priori specified between 0 and 1. We propo ..."
Abstract
-
Cited by 783 (29 self)
- Add to MetaCart
propose a method to approach this problem by trying to estimate a function f which is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length
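This estimator is available in scikit-learn as OneClassSVM, where the a-priori bound corresponds to the nu parameter. A minimal usage sketch (the data and hyperparameter values are illustrative):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))        # samples from P

# nu upper-bounds the fraction of training points left outside S
clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1)
clf.fit(X_train)

X_test = np.array([[0.0, 0.0], [4.0, 4.0]])
print(clf.predict(X_test))            # +1: inside estimated support, -1: outside
print(clf.decision_function(X_test))  # the kernel expansion f evaluated at x
```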
Quantization Index Modulation: A Class of Provably Good Methods for Digital Watermarking and Information Embedding
- IEEE TRANS. ON INFORMATION THEORY, 1999
"... We consider the problem of embedding one signal (e.g., a digital watermark), within another "host" signal to form a third, "composite" signal. The embedding is designed to achieve efficient tradeoffs among the three conflicting goals of maximizing information-embedding rate, mini ..."
Abstract
-
Cited by 496 (14 self)
- Add to MetaCart
, minimizing distortion between the host signal and composite signal, and maximizing the robustness of the embedding. We introduce new classes of embedding methods, termed quantization index modulation (QIM) and distortion-compensated QIM (DC-QIM), and develop convenient realizations in the form of what we
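In its simplest scalar form, QIM embeds a bit by quantizing each host sample with one of two interleaved uniform quantizers and decodes by nearest lattice. A minimal sketch (the step size delta is illustrative; the paper develops more general realizations such as dither modulation and DC-QIM):

```python
import numpy as np

def qim_embed(host, bit, delta=1.0):
    """Quantize the host with one of two interleaved quantizers,
    selected by the message bit; quantizer b has reconstruction
    points b*delta/2 + k*delta."""
    offset = bit * delta / 2.0
    return np.round((host - offset) / delta) * delta + offset

def qim_decode(received, delta=1.0):
    """Decode by choosing the quantizer whose lattice is nearest."""
    d0 = np.abs(received - qim_embed(received, 0, delta))
    d1 = np.abs(received - qim_embed(received, 1, delta))
    return (d1 < d0).astype(int)
```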
Tests Of Different Regularization Terms In Small Networks
"... Several regularization terms, some of them widely applied to neural networks, such as weight decay and weight elimination, and some others new, are tested when applied to networks with a small number of connections handling continuous variables. These networks are found when using additive algorithm ..."
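The two penalties named in the abstract have standard closed forms; a minimal sketch (lam and the scale w0 are illustrative hyperparameters):

```python
import numpy as np

def weight_decay(w, lam):
    """Classic quadratic penalty: lam * sum(w_i^2)."""
    return lam * np.sum(w ** 2)

def weight_elimination(w, lam, w0=1.0):
    """Penalty lam * sum((w/w0)^2 / (1 + (w/w0)^2)).

    Near zero it behaves like weight decay; for large weights it
    saturates, so it drives small weights to zero without shrinking
    large ones.
    """
    s = (w / w0) ** 2
    return lam * np.sum(s / (1.0 + s))
```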