Results 1–10 of 466,956
Source Coding With Encoder Side Information
 IEEE Transactions on Information Theory
"... We introduce the idea of distortion side information, which does not directly depend on the source but instead affects the distortion measure. We show that such distortion side information is not only useful at the encoder, but that under certain conditions, knowing it at only the encoder is as good ..."
Cited by 3 (0 self)
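As a toy illustration of why distortion side information can help at the encoder (our own sketch, not the paper's coding scheme), suppose the side information is a per-sample weight q_i entering the distortion measure d(x, x̂; q) = q·(x − x̂)². An encoder that knows q can spend its whole quantization budget on the samples that matter:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)                   # source samples
q = rng.integers(0, 2, n).astype(float)  # distortion side info: 1 = sample matters

def quantize(v, step):
    return step * np.round(v / step)

# Blind encoder: ignores q, spends a uniform step budget everywhere.
x_blind = quantize(x, 0.5)

# Informed encoder: describes only the samples with q == 1, so (with
# roughly half the samples active here) it can afford a finer step on
# them; zero-weight samples cost nothing and are reconstructed as 0.
x_informed = np.where(q == 1.0, quantize(x, 0.25), 0.0)

def weighted_distortion(x, x_hat, q):
    return float(np.mean(q * (x - x_hat) ** 2))

d_blind = weighted_distortion(x, x_blind, q)
d_informed = weighted_distortion(x, x_informed, q)
```

The rate accounting here is only heuristic; the paper's contribution is the information-theoretic version of this tradeoff.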
Capacity of Fading Channels with Channel Side Information
, 1997
"... We obtain the Shannon capacity of a fading channel with channel side information at the transmitter and receiver, and at the receiver alone. The optimal power adaptation in the former case is "water-pouring" in time, analogous to water-pouring in frequency for time-invariant frequency-sele ..."
Cited by 579 (23 self)
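The "water-pouring in time" power adaptation can be sketched numerically for a finite set of fading states. This is a minimal illustration assuming perfect transmitter side information; the function and variable names are ours:

```python
import numpy as np

def waterfill(snr, prob, pbar, iters=200):
    """Bisect for the cutoff g0 with sum_i prob[i]*max(1/g0 - 1/snr[i], 0) = pbar,
    then allocate power water-pouring style over the fading states."""
    def total(g0):
        return float(np.sum(prob * np.maximum(1.0 / g0 - 1.0 / snr, 0.0)))
    lo, hi = 1e-9, float(np.max(snr))    # total(lo) >> pbar >= total(hi) = 0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if total(mid) > pbar:
            lo = mid    # cutoff too small: allocating too much power
        else:
            hi = mid
    g0 = 0.5 * (lo + hi)
    return g0, np.maximum(1.0 / g0 - 1.0 / snr, 0.0)

snr = np.array([0.5, 1.0, 4.0])    # received SNR per unit power in each state
prob = np.array([0.2, 0.5, 0.3])   # state probabilities
g0, p = waterfill(snr, prob, pbar=1.0)
capacity = float(np.sum(prob * np.log2(1.0 + snr * p)))  # bits per channel use
```

States below the cutoff g0 get zero power; the best states get the most, which is exactly the time-domain analogue of water-pouring over frequency.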
The rate-distortion function for source coding with side information at the decoder
 IEEE Trans. Inform. Theory
, 1976
"... Abstract — Let {(X_k, Y_k)}_{k=1}^∞ be a sequence of independent drawings of a pair of dependent random variables X, Y. Let us say that X takes values in the finite set 𝒳. It is desired to encode the sequence {X_k} in blocks of length n into a binary stream of rate R, which can in turn be decoded as a seque ..."
Cited by 1055 (1 self)
as well as the decoder has access to the side information {Y_k}. In nearly all cases it is shown that when d > 0 then R*(d) > R_{X|Y}(d), so that knowledge of the side information at the encoder permits transmission of the {X_k} at a given distortion level using a smaller transmission rate
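For reference, the function this paper characterizes (now called the Wyner–Ziv rate-distortion function) is usually written in terms of an auxiliary variable Z with Z – X – Y a Markov chain and a decoder map f:

```latex
R^{*}(d) \;=\; \min_{\substack{p(z \mid x),\, f \,:\\ \mathbb{E}\, d\left(X,\, f(Y, Z)\right) \,\le\, d}} \bigl[\, I(X;Z) - I(Z;Y) \,\bigr],
\qquad
R_{X \mid Y}(d) \;=\; \min_{\substack{p(\hat{x} \mid x, y) \,:\\ \mathbb{E}\, d(X, \hat{X}) \,\le\, d}} I(X; \hat{X} \mid Y).
```

The abstract's inequality R*(d) > R_{X|Y}(d) for d > 0 quantifies exactly what encoder access to the side information buys.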
Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
, 2004
"... Suppose we are given a vector f in R^N. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects — discrete digital signals, images, etc.; how many linear m ..."
Cited by 1513 (20 self)
Suppose we are given a vector f in R^N. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects — discrete digital signals, images, etc.; how many linear measurements do we need to recover objects from this class to within accuracy ɛ? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal f ∈ F decay like a power-law (or if the coefficient sequence of f in a fixed basis decays like a power-law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. A typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude |f|_(1) ≥ |f|_(2) ≥ ... ≥ |f|_(N), and define the weak-ℓp ball as the class F of those elements whose entries obey the power decay law |f|_(n) ≤ C · n^(−1/p). We take measurements ⟨f, X_k⟩, k = 1, ..., K, where the X_k are N-dimensional Gaussian
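A small numerical sketch of the recovery problem (our own illustration: we use orthogonal matching pursuit as a stand-in solver, whereas the paper analyzes ℓ1 minimization; the dimensions and names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, S = 256, 100, 5                      # ambient dimension, measurements, sparsity
f = np.zeros(N)
support = rng.choice(N, size=S, replace=False)
f[support] = rng.choice([-1.0, 1.0], S) * rng.uniform(1.0, 2.0, S)

X = rng.normal(size=(K, N)) / np.sqrt(K)   # Gaussian measurement vectors
y = X @ f                                  # K measurements <f, X_k>

def omp(X, y, sparsity):
    """Greedy sparse recovery: repeatedly pick the column most correlated
    with the residual, then re-fit least squares on the selected columns."""
    residual, idx = y.copy(), []
    for _ in range(sparsity):
        idx.append(int(np.argmax(np.abs(X.T @ residual))))
        coef, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
        residual = y - X[:, idx] @ coef
    f_hat = np.zeros(X.shape[1])
    f_hat[idx] = coef
    return f_hat

f_hat = omp(X, y, S)
err = float(np.linalg.norm(f - f_hat))
```

With K on the order of S·log N Gaussian measurements, exact recovery of a 5-sparse vector in dimension 256 succeeds with overwhelming probability, which is the regime the paper's bounds describe.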
Encoder side information is useful in source coding
 in International Symposium on Information Theory
, 2004
"... Abstract — We introduce the idea of distortion side information, which does not directly depend on the source but instead affects the distortion measure. Such side information is not only useful at the encoder, but under many conditions of interest, knowing it at the encoder alone is sufficient and ..."
Cited by 3 (3 self)
Formal Ontology and Information Systems
, 1998
"... Research on ontology is becoming increasingly widespread in the computer science community, and its importance is being recognized in a multiplicity of research fields and application areas, including knowledge engineering, database design and integration, information retrieval and extraction. We sh ..."
Cited by 878 (11 self)
shall use the generic term information systems, in its broadest sense, to collectively refer to these application perspectives. We argue in this paper that so-called ontologies present their own methodological and architectural peculiarities: on the methodological side, their main peculiarity
Feeling and thinking: Preferences need no inferences
 American Psychologist
, 1980
"... ABSTRACT: Affect is considered by most contemporary theories to be postcognitive, that is, to occur only after considerable cognitive operations have been accomplished. Yet a number of experimental results on preferences, attitudes, impression formation, and decision making, as well as some cli ..."
Cited by 533 (2 self)
, and for lower organisms they are the dominant reactions. Affective reactions can occur without extensive perceptual and cognitive encoding, are made with greater confidence than cognitive judg
Text Chunking using Transformation-Based Learning
, 1995
"... Eric Brill introduced transformation-based learning and showed that it can do part-of-speech tagging with fairly high accuracy. The same method can be applied at a higher level of textual interpretation for locating chunks in the tagged text, including non-recursive "baseNP" chunks. For ..."
Cited by 509 (0 self)
. For this purpose, it is convenient to view chunking as a tagging problem by encoding the chunk structure in new tags attached to each word. In automatic tests using Treebank-derived data, this technique achieved recall and precision rates of roughly 92% for baseNP chunks and 88% for somewhat more complex chunks
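The "chunking as tagging" encoding can be made concrete with a tiny sketch (our own example; we use a simple B/I/O scheme, a close relative of the tag sets the paper experiments with):

```python
# Chunking as tagging: encode non-recursive chunk spans as per-word
# tags (B = first word of a chunk, I = inside a chunk, O = outside),
# so a sequence tagger can learn to predict chunk structure directly.
words = ["He", "reckons", "the", "current", "account", "deficit", "will", "narrow"]
chunks = [(0, 1), (2, 6)]   # [start, end) word spans of baseNP chunks

def spans_to_iob(n_words, spans):
    tags = ["O"] * n_words
    for start, end in spans:
        tags[start] = "B"
        for i in range(start + 1, end):
            tags[i] = "I"
    return tags

tags = spans_to_iob(len(words), chunks)
# tags -> ["B", "O", "B", "I", "I", "I", "O", "O"]
```

The inverse mapping (tags back to spans) is equally mechanical, which is what makes the tagging formulation attractive.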
A Volumetric Method for Building Complex Models from Range Images
, 1996
"... A number of techniques have been developed for reconstructing surfaces by integrating groups of aligned range images. A desirable set of properties for such algorithms includes: incremental updating, representation of directional uncertainty, the ability to fill gaps in the reconstruction, and robus ..."
Cited by 1018 (18 self)
with one range image at a time, we first scan-convert it to a distance function, then combine this with the data already acquired using a simple additive scheme. To achieve space efficiency, we employ a run-length encoding of the volume. To achieve time efficiency, we resample the range image to align
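The run-length trick for the volume can be sketched in a few lines (our own illustration; the voxel states and names are stand-ins, not the paper's exact data structures):

```python
# Run-length encoding of a scanline of voxel states: long constant runs
# (empty or unseen space dominates a volume) compress to (value, count)
# pairs, which is the space-saving idea the abstract mentions.
def rle(states):
    runs = []                        # list of [value, run_length]
    for v in states:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def rld(runs):
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out

scanline = [0, 0, 0, 0, 2, 1, 1, 2, 0, 0]   # one row of voxel states
runs = rle(scanline)
assert rld(runs) == scanline                 # lossless round trip
```

The paper's additive combination of per-image distance functions operates on top of such a compressed volume, decoding and re-encoding runs as new range images arrive.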
The x-kernel: An Architecture for Implementing Network Protocols
 IEEE Transactions on Software Engineering
, 1991
"... This paper describes a new operating system kernel, called the x-kernel, that provides an explicit architecture for constructing and composing network protocols. Our experience implementing and evaluating several protocols in the x-kernel shows that this architecture is both general enough to acc ..."
Cited by 663 (21 self)
, and manage the encoding and decoding of data. To help manage this complexity, network software is divi...
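The composition idea (each protocol managing only its own header on the way down and on the way up) can be caricatured in a few lines. This is a toy sketch; the class and method names are ours, and the real x-kernel is a C-language kernel interface:

```python
# Each protocol object pushes its own header on send and strips it on
# receive, so protocols stack without knowing each other's internals.
class Protocol:
    def __init__(self, name, lower=None):
        self.name, self.lower = name, lower

    def send(self, payload: bytes) -> bytes:
        msg = f"{self.name}|".encode() + payload       # push our header
        return self.lower.send(msg) if self.lower else msg

    def receive(self, msg: bytes) -> bytes:
        if self.lower:
            msg = self.lower.receive(msg)              # lower layers strip first
        header, _, payload = msg.partition(b"|")       # pop our header
        assert header == self.name.encode()
        return payload

# compose: transport on top of network on top of link
link = Protocol("ETH")
net = Protocol("IP", lower=link)
transport = Protocol("UDP", lower=net)

wire = transport.send(b"hello")
# wire == b"ETH|IP|UDP|hello": the lowest layer's header is outermost
```

The point of the architecture is that this composition is explicit and uniform, so new protocol graphs can be assembled without rewriting the layers themselves.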