Results 1 - 10 of 18,228
An inventory for measuring clinical anxiety: Psychometric properties
 JOURNAL OF CONSULTING AND CLINICAL PSYCHOLOGY
, 1988
"... The development of a 21-item self-report inventory for measuring the severity of anxiety in psychiatric populations is described. The initial item pool of 86 items was drawn from three preexisting scales: the Anxiety Checklist, the Physician's Desk Reference Checklist, and the Situational Anxiety ..."
Cited by 778 (1 self)
An extensive empirical study of feature selection metrics for text classification
 J. of Machine Learning Research
, 2003
"... Machine learning for text classification is the cornerstone of document categorization, news filtering, document routing, and personalization. In text domains, effective feature selection is essential to make the learning task efficient and more accurate. This paper presents an empirical comparison ..."
Cited by 496 (15 self)
performance goals, BNS is consistently a member of the pair; e.g., for greatest recall, the pair BNS + F1-measure yielded the best performance on the greatest number of tasks by a considerable margin.
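For illustration only, the BNS (Bi-Normal Separation) metric named in this snippet can be sketched in a few lines of Python: BNS is |F⁻¹(tpr) − F⁻¹(fpr)|, where F⁻¹ is the inverse standard normal CDF. The clamping constant `eps` below is an implementation assumption, not a value taken from this result list:

```python
from statistics import NormalDist

def bns(tp, fp, pos, neg, eps=0.0005):
    """Bi-Normal Separation score for one feature:
    |F^-1(tpr) - F^-1(fpr)|, with F^-1 the inverse standard normal CDF.
    Rates are clamped to (eps, 1 - eps) to keep the inverse CDF finite;
    the eps value here is a choice, not taken from the paper."""
    inv = NormalDist().inv_cdf
    tpr = min(max(tp / pos, eps), 1 - eps)
    fpr = min(max(fp / neg, eps), 1 - eps)
    return abs(inv(tpr) - inv(fpr))

# a feature present in 80% of positive documents but only 5% of negatives
score = bns(tp=80, fp=5, pos=100, neg=100)
```

A feature that separates the classes well (high tpr, low fpr) scores far above one that occurs at similar rates in both classes.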
Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
, 2004
"... Suppose we are given a vector f in R^N. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects: discrete digital signals, images, etc; how many linear m ..."
Cited by 1513 (20 self)
law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. A typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude |f|(1) ≥ |f|(2) ≥ ... ≥ |f|(N), and define the weak-ℓp ball
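The rearrangement in this snippet is easy to make concrete. A minimal Python sketch, using an illustrative power-law signal that is not from the paper: sort the entries by decreasing magnitude, compute the smallest weak-ℓp radius R with |f|(k) ≤ R·k^(−1/p), and measure the ℓ2 error of keeping only the k largest entries:

```python
def sorted_magnitudes(f):
    """|f|(1) >= |f|(2) >= ... >= |f|(N)."""
    return sorted((abs(v) for v in f), reverse=True)

def weak_lp_radius(f, p):
    """Smallest R such that |f|(k) <= R * k**(-1/p) for all k."""
    mags = sorted_magnitudes(f)
    return max(m * (k + 1) ** (1.0 / p) for k, m in enumerate(mags))

def best_k_term_error(f, k):
    """l2 error after keeping the k largest-magnitude entries of f."""
    tail = sorted_magnitudes(f)[k:]
    return sum(m * m for m in tail) ** 0.5

# an illustrative power-law signal: |f|(k) = 1/k, hence weak-l1 radius 1
f = [1.0 / k for k in range(1, 1001)]
```

For such power-law signals the best k-term approximation error falls off quickly with k, which is the behavior the snippet's reconstruction result exploits.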
Factoring polynomials with rational coefficients
 MATH. ANN
, 1982
"... In this paper we present a polynomial-time algorithm to solve the following problem: given a nonzero polynomial f ∈ Q[X] in one variable with rational coefficients, find the decomposition of f into irreducible factors in Q[X]. It is well known that this is equivalent to factoring primitive polynomia ..."
Cited by 961 (11 self)
polynomials f ∈ Z[X] into irreducible factors in Z[X]. Here we call f ∈ Z[X] primitive if the greatest common divisor of its coefficients (the content of f) is 1. Our algorithm performs well in practice, cf. [8]. Its running time, measured in bit operations, is O(n^12 + n^9 (log|f|)^3). Here f ∈ Z[X] is the polynomial
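The notions of content and primitivity used in this snippet can be sketched directly (a minimal illustration of the definitions, not of the factoring algorithm itself):

```python
from math import gcd
from functools import reduce

def content(coeffs):
    """Content of a polynomial in Z[X]: gcd of its integer coefficients."""
    return reduce(gcd, (abs(c) for c in coeffs))

def primitive_part(coeffs):
    """Divide out the content; the result is primitive (content 1)."""
    c = content(coeffs)
    return [a // c for a in coeffs]

# 6x^2 + 4x + 2 has content 2; its primitive part is 3x^2 + 2x + 1
```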
De-Noising By Soft-Thresholding
, 1992
"... Donoho and Johnstone (1992a) proposed a method for reconstructing an unknown function f on [0, 1] from noisy data d_i = f(t_i) + σ z_i, i = 0, ..., n − 1, t_i = i/n, with z_i iid N(0, 1). The reconstruction f̂_n is defined in the wavelet domain by translating all the empirical wavelet coefficients of d towards 0 by an a ..."
Cited by 1279 (14 self)
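The soft-thresholding rule quoted above is η_t(d) = sgn(d)·max(|d| − t, 0): translate each coefficient towards 0 by t, killing those with |d| ≤ t. A minimal Python sketch; the coefficients and threshold below are illustrative, and the paper's own threshold choice (on the order of σ√(2 log n)) is not derived here:

```python
import math

def soft_threshold(d, t):
    """Translate coefficient d towards 0 by t; set it to 0 if |d| <= t."""
    return math.copysign(max(abs(d) - t, 0.0), d)

def denoise(coeffs, t):
    """Apply soft thresholding to a list of (empirical wavelet) coefficients."""
    return [soft_threshold(d, t) for d in coeffs]

# small coefficients (mostly noise) vanish; large ones shrink by t
shrunk = denoise([5.0, -0.3, 0.1, -4.2], t=0.5)
```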
Decoding by Linear Programming
, 2004
"... This paper considers the classical error correcting problem which is frequently discussed in coding theory. We wish to recover an input vector f ∈ R^n from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to rec ..."
Cited by 1399 (16 self)
An iterative image registration technique with an application to stereo vision
 In IJCAI81
, 1981
"... Image registration finds a variety of applications in computer vision. Unfortunately, traditional image registration techniques tend to be costly. We present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of Newton ..."
Cited by 2897 (30 self)
The translational image registration problem can be characterized as follows: we are given functions F(x) and G(x) which give the respective pixel values at each location x in two images, where x is a vector. We wish to find the disparity vector h which minimizes some measure
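The Newton-type step behind this snippet can be sketched in one dimension: linearize F(x + h) ≈ F(x) + h·F′(x) and solve for the disparity h in least squares. A minimal Python illustration (the signals below are assumptions for demonstration; the paper works on 2-D images):

```python
def register_step(F, G):
    """One gradient-based estimate of the translational disparity h (1-D):
    minimize sum_x (F(x + h) - G(x))^2 after linearizing in h."""
    num = den = 0.0
    for x in range(1, len(F) - 1):
        dF = (F[x + 1] - F[x - 1]) / 2.0   # central-difference gradient
        num += dF * (G[x] - F[x])
        den += dF * dF
    return num / den

# G is F shifted by 0.3 samples; on a linear ramp one step recovers h exactly
F = [0.1 * x for x in range(20)]
G = [0.1 * (x + 0.3) for x in range(20)]
```

On non-linear signals the step is only approximate, and the full technique iterates it (typically in a coarse-to-fine scheme) until h converges.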
An Algorithm for Tracking Multiple Targets
 IEEE Transactions on Automatic Control
, 1979
"... An algorithm for tracking multiple targets in a cluttered environment is developed. The algorithm is capable of initiating tracks, accounting for false or missing reports, and processing sets of dependent reports. Clustering is the process of dividing the entire set of targets and measurements into independent groups ..."
Cited by 596 (0 self)
missing reports, and sets of dependent reports or clusters. Instead of solving one large problem, a number of smaller problems are solved in parallel. As each measurement is received, probabilities are calculated for the hypotheses that the measurement came from previously known targets in a target file, or from a new
The ratedistortion function for source coding with side information at the decoder
 IEEE Trans. Inform. Theory
, 1976
"... Let {(X_k, Y_k)}, k = 1, 2, ..., be a sequence of independent drawings of a pair of dependent random variables X, Y. Let us say that X takes values in the finite set 𝒳. It is desired to encode the sequence {X_k} in blocks of length n into a binary stream of rate R, which can in turn be decoded as a seque ..."
Cited by 1060 (1 self)
sequence {X̂_k}, where X̂_k ∈ 𝒳̂, the reproduction alphabet. The average distortion level is (1/n) Σ_{k=1}^{n} E[D(X_k, X̂_k)], where D(x, x̂) ≥ 0, x ∈ 𝒳, x̂ ∈ 𝒳̂, is a preassigned distortion measure. The special assumption made here is that the decoder has access to the side information {Y_k}. In this paper we determine
Similarity estimation techniques from rounding algorithms
 In Proc. of 34th STOC
, 2002
"... A locality sensitive hashing scheme is a distribution on a family F of hash functions operating on a collection of objects, such that for two objects x, y, Pr_{h∈F}[h(x) = h(y)] = sim(x, y), where sim(x, y) ∈ [0, 1] is some similarity function defined on the collection of objects. Such a scheme leads ..."
Cited by 449 (6 self)
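For the random-hyperplane family, the scheme described above gives Pr[h(x) = h(y)] = 1 − θ(x, y)/π, where θ is the angle between x and y. A minimal Python sketch that estimates this collision probability by sampling Gaussian hyperplanes (the sample count and seed below are arbitrary choices):

```python
import math, random

def hyperplane_hash(r, x):
    """One bit of the rounding-based scheme: h_r(x) = sign(r . x)."""
    return sum(ri * xi for ri, xi in zip(r, x)) >= 0

def collision_rate(x, y, n_hashes=5000, seed=0):
    """Monte Carlo estimate of Pr[h(x) = h(y)] over random Gaussian
    hyperplanes; should approach 1 - theta(x, y) / pi."""
    rng = random.Random(seed)
    d = len(x)
    hits = 0
    for _ in range(n_hashes):
        r = [rng.gauss(0.0, 1.0) for _ in range(d)]
        if hyperplane_hash(r, x) == hyperplane_hash(r, y):
            hits += 1
    return hits / n_hashes

x, y = [1.0, 0.0], [1.0, 1.0]            # angle pi/4 between them
expected = 1 - (math.pi / 4) / math.pi   # = 0.75
```

Each hyperplane contributes one bit, so concatenating many bits gives a compact sketch whose Hamming similarity estimates the angular similarity of the original vectors.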