Results 1–10 of 23,189
Improved algorithms for optimal winner determination in combinatorial auctions and generalizations
, 2000
"... Combinatorial auctions can be used to reach efficient resource and task allocations in multiagent systems where the items are complementary. Determining the winners is NPcomplete and inapproximable, but it was recently shown that optimal search algorithms do very well on average. This paper present ..."
Abstract

Cited by 598 (55 self)
 Add to MetaCart
Combinatorial auctions can be used to reach efficient resource and task allocations in multiagent systems where the items are complementary. Determining the winners is NPcomplete and inapproximable, but it was recently shown that optimal search algorithms do very well on average. This paper
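A minimal brute-force sketch of the winner-determination problem this abstract describes: pick the revenue-maximizing set of bids in which no item is sold twice. The toy bids and the helper name `best_allocation` are illustrative assumptions; the paper's contribution is an optimal search algorithm, not this enumeration.

```python
from itertools import combinations

# Hypothetical toy bids: (set of items, price). Not taken from the paper.
bids = [
    ({"A", "B"}, 10),
    ({"B", "C"}, 8),
    ({"C"}, 5),
    ({"A"}, 4),
]

def best_allocation(bids):
    """Exhaustively pick the revenue-maximizing set of non-conflicting bids."""
    best_value, best_set = 0, []
    for r in range(1, len(bids) + 1):
        for subset in combinations(bids, r):
            items = [item for bundle, _ in subset for item in bundle]
            if len(items) == len(set(items)):        # no item sold twice
                value = sum(price for _, price in subset)
                if value > best_value:
                    best_value, best_set = value, list(subset)
    return best_value, best_set

print(best_allocation(bids))   # value 15: {"A","B"} for 10 plus {"C"} for 5
```

Enumeration like this is exponential in the number of bids, which is exactly why the search and pruning techniques the paper studies matter.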
Graphical models, exponential families, and variational inference
, 2008
"... The formalism of probabilistic graphical models provides a unifying framework for capturing complex dependencies among random variables, and building largescale multivariate statistical models. Graphical models have become a focus of research in many statistical, computational and mathematical fiel ..."
Abstract

Cited by 800 (26 self)
 Add to MetaCart
fields, including bioinformatics, communication theory, statistical physics, combinatorial optimization, signal and image processing, information retrieval and statistical machine learning. Many problems that arise in specific instances — including the key problems of computing marginals and modes
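As a pocket-sized illustration of the marginal computation the excerpt mentions, the sketch below computes one marginal on a tiny three-variable chain, once by brute-force summation and once by sum-product message passing. The potentials and variable names are made-up assumptions, and this is exact inference on a tree rather than the variational approximations the paper surveys.

```python
import itertools
import numpy as np

# Hypothetical binary chain x1 - x2 - x3 with pairwise potentials psi.
psi12 = np.array([[1.0, 0.5], [0.5, 2.0]])   # indexed [x1, x2]
psi23 = np.array([[1.0, 0.3], [0.3, 1.5]])   # indexed [x2, x3]

# Brute-force marginal p(x2): sum the unnormalized joint over all configurations.
joint = np.zeros(2)
for x1, x2, x3 in itertools.product(range(2), repeat=3):
    joint[x2] += psi12[x1, x2] * psi23[x2, x3]
p_x2 = joint / joint.sum()

# The same marginal via sum-product messages into x2.
m1_to_2 = psi12.sum(axis=0)          # message from x1
m3_to_2 = psi23.sum(axis=1)          # message from x3
belief = m1_to_2 * m3_to_2
print(p_x2, belief / belief.sum())   # the two computations agree
```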
Computationally Manageable Combinatorial Auctions
, 1998
"... There is interest in designing simultaneous auctions for situations in which the value of assets to a bidder depends upon which other assets he or she wins. In such cases, bidders may well wish to submit bids for combinations of assets. When this is allowed, the problem of determining the revenue ma ..."
Abstract

Cited by 347 (1 self)
 Add to MetaCart
maximizing set of nonconflicting bids can be a difficult one. We analyze this problem, identifying several different structures of combinatorial bids for which computational tractability is constructively demonstrated and some structures for which computational tractability 1 Introduction Some auctions
Compressive sampling
, 2006
"... Conventional wisdom and common practice in acquisition and reconstruction of images from frequency data follow the basic principle of the Nyquist density sampling theory. This principle states that to reconstruct an image, the number of Fourier samples we need to acquire must match the desired res ..."
Abstract

Cited by 1427 (15 self)
 Add to MetaCart
Conventional wisdom and common practice in acquisition and reconstruction of images from frequency data follow the basic principle of the Nyquist density sampling theory. This principle states that to reconstruct an image, the number of Fourier samples we need to acquire must match the desired resolution of the image, i.e. the number of pixels in the image. This paper surveys an emerging theory which goes by the name of “compressive sampling” or “compressed sensing,” and which says that this conventional wisdom is inaccurate. Perhaps surprisingly, it is possible to reconstruct images or signals of scientific interest accurately and sometimes even exactly from a number of samples which is far smaller than the desired resolution of the image/signal, e.g. the number of pixels in the image. It is believed that compressive sampling has far reaching implications. For example, it suggests the possibility of new data acquisition protocols that translate analog information into digital form with fewer sensors than what was considered necessary. This new sampling theory may come to underlie procedures for sampling and compressing data simultaneously. In this short survey, we provide some of the key mathematical insights underlying this new theory, and explain some of the interactions between compressive sampling and other fields such as statistics, information theory, coding theory, and theoretical computer science.
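As a rough illustration of the recovery claim, the sketch below reconstructs a sparse signal from far fewer random measurements than its length. It uses greedy orthogonal matching pursuit rather than the ℓ1 minimization usually associated with compressed sensing, and the dimensions, seed, and function name are illustrative assumptions, not taken from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: a length-100 signal with only 5 nonzero entries,
# observed through 30 random linear measurements.
n, m, k = 100, 30, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)     # random sensing matrix
y = A @ x_true                               # far fewer samples than n

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: estimate a k-sparse x with A x ≈ y."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
# At this sparsity level, recovery from Gaussian measurements is typically exact.
print(np.allclose(x_hat, x_true, atol=1e-6))
```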
Fast Volume Rendering Using a Shear-Warp Factorization of the Viewing Transformation
, 1995
"... Volume rendering is a technique for visualizing 3D arrays of sampled data. It has applications in areas such as medical imaging and scientific visualization, but its use has been limited by its high computational expense. Early implementations of volume rendering used bruteforce techniques that req ..."
Abstract

Cited by 541 (2 self)
 Add to MetaCart
Volume rendering is a technique for visualizing 3D arrays of sampled data. It has applications in areas such as medical imaging and scientific visualization, but its use has been limited by its high computational expense. Early implementations of volume rendering used bruteforce techniques that require on the order of 100 seconds to render typical data sets on a workstation. Algorithms with optimizations that exploit coherence in the data have reduced rendering times to the range of ten seconds but are still not fast enough for interactive visualization applications. In this thesis we present a family of volume rendering algorithms that reduces rendering times to one second. First we present a scanlineorder volume rendering algorithm that exploits coherence in both the volume data and the image. We show that scanlineorder algorithms are fundamentally more efficient than commonlyused ray casting algorithms because the latter must perform analytic geometry calculations (e.g. intersecting rays with axisaligned boxes). The new scanlineorder algorithm simply streams through the volume and the image in storage order. We describe variants of the algorithm for both parallel and perspective projections and
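The central idea of the abstract, streaming through a sheared volume and an intermediate image in storage order while compositing slice by slice, can be sketched in a few lines. This is a toy illustration under assumed data and an assumed integer shear, not the thesis's renderer; it uses a parallel projection and omits the final 2-D warp into the viewing plane.

```python
import numpy as np

# Toy volume: (slices, y, x) densities and a per-voxel opacity. Illustrative only.
vol = np.random.default_rng(1).random((8, 32, 32))
alpha = 0.1 * vol
shear_per_slice = (1, 0)                 # integer (dy, dx) shear per slice

n_slices, ny, nx = vol.shape
max_dy = n_slices * abs(shear_per_slice[0])
max_dx = n_slices * abs(shear_per_slice[1])
intermediate = np.zeros((ny + max_dy, nx + max_dx))   # sheared intermediate image
remaining = np.ones_like(intermediate)                # transmittance so far

for k in range(n_slices):                # stream through the volume in storage order
    dy, dx = k * shear_per_slice[0], k * shear_per_slice[1]
    sl = np.s_[dy:dy + ny, dx:dx + nx]
    intermediate[sl] += remaining[sl] * alpha[k] * vol[k]   # front-to-back compositing
    remaining[sl] *= (1.0 - alpha[k])
# A final 2-D warp of `intermediate` would produce the displayed image (not shown).
```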
Data Security
, 1979
"... The rising abuse of computers and increasing threat to personal privacy through data banks have stimulated much interest m the techmcal safeguards for data. There are four kinds of safeguards, each related to but distract from the others. Access controls regulate which users may enter the system and ..."
Abstract

Cited by 611 (3 self)
 Add to MetaCart
The rising abuse of computers and increasing threat to personal privacy through data banks have stimulated much interest m the techmcal safeguards for data. There are four kinds of safeguards, each related to but distract from the others. Access controls regulate which users may enter the system and subsequently whmh data sets an active user may read or wrote. Flow controls regulate the dissemination of values among the data sets accessible to a user. Inference controls protect statistical databases by preventing questioners from deducing confidential information by posing carefully designed sequences of statistical queries and correlating the responses. Statlstmal data banks are much less secure than most people beheve. Data encryption attempts to prevent unauthorized disclosure of confidential information in transit or m storage. This paper describes the general nature of controls of each type, the kinds of problems they can and cannot solve, and their inherent limitations and weaknesses. The paper is intended for a general audience with little background in the area.
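One of the safeguards the abstract lists, inference control, can be illustrated with a minimal query-set-size check: refuse aggregate queries answered over too few records, since very small query sets let a questioner pin down individual values. The threshold, record layout, and helper name below are illustrative assumptions, not from the paper.

```python
MIN_QUERY_SET = 5   # assumed minimum number of records per statistical query

records = [
    {"dept": "sales", "salary": 52_000},
    {"dept": "sales", "salary": 61_000},
    {"dept": "hr",    "salary": 58_000},
    {"dept": "hr",    "salary": 55_000},
    {"dept": "it",    "salary": 70_000},
    {"dept": "it",    "salary": 67_000},
]

def mean_salary(predicate):
    """Answer an aggregate query only if enough records match it."""
    matched = [r["salary"] for r in records if predicate(r)]
    if len(matched) < MIN_QUERY_SET:
        raise PermissionError("query set too small; refusing to answer")
    return sum(matched) / len(matched)

print(mean_salary(lambda r: True))                 # allowed: all 6 records
try:
    mean_salary(lambda r: r["dept"] == "it")       # only 2 records match
except PermissionError as e:
    print(e)                                       # refused by the inference control
```

As the abstract notes, controls like this are weaker than they look: sequences of overlapping large queries can still be correlated to isolate an individual.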
A Guided Tour to Approximate String Matching
 ACM COMPUTING SURVEYS
, 1999
"... We survey the current techniques to cope with the problem of string matching allowing errors. This is becoming a more and more relevant issue for many fast growing areas such as information retrieval and computational biology. We focus on online searching and mostly on edit distance, explaining t ..."
Abstract

Cited by 584 (38 self)
 Add to MetaCart
We survey the current techniques to cope with the problem of string matching allowing errors. This is becoming a more and more relevant issue for many fast growing areas such as information retrieval and computational biology. We focus on online searching and mostly on edit distance, explaining the problem and its relevance, its statistical behavior, its history and current developments, and the central ideas of the algorithms and their complexities. We present a number of experiments to compare the performance of the different algorithms and show which are the best choices according to each case. We conclude with some future work directions and open problems.
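Since the survey centers on online search under the edit-distance model, a short sketch of that model may help: the first function computes Levenshtein distance, and the second reports where a pattern occurs in a text with at most k errors, using the classic dynamic-programming formulation. The function names and examples are mine, not the survey's.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: minimum insertions, deletions and substitutions
    turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # delete ca
                            curr[j - 1] + 1,             # insert cb
                            prev[j - 1] + (ca != cb)))   # substitute or match
        prev = curr
    return prev[-1]

def approx_find(pattern: str, text: str, k: int) -> list[int]:
    """1-based end positions in text where pattern matches with at most k errors."""
    col = list(range(len(pattern) + 1))
    hits = []
    for pos, ct in enumerate(text, start=1):
        new = [0]                                        # a match may start anywhere
        for i, cp in enumerate(pattern, start=1):
            new.append(min(col[i] + 1, new[i - 1] + 1, col[i - 1] + (cp != ct)))
        col = new
        if col[-1] <= k:
            hits.append(pos)
    return hits

print(edit_distance("survey", "surgery"))    # 2
print(approx_find("color", "colourful", 1))  # [4, 5, 6]: "colo", "colou", "colour"
```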
Applications Of Circumscription To Formalizing Common Sense Knowledge
 Artificial Intelligence
, 1986
"... We present a new and more symmetric version of the circumscription method of nonmonotonic reasoning first described in (McCarthy 1980) and some applications to formalizing common sense knowledge. The applications in this paper are mostly based on minimizing the abnormality of different aspects o ..."
Abstract

Cited by 536 (12 self)
 Add to MetaCart
We present a new and more symmetric version of the circumscription method of nonmonotonic reasoning first described in (McCarthy 1980) and some applications to formalizing common sense knowledge. The applications in this paper are mostly based on minimizing the abnormality of different aspects of various entities. Included are nonmonotonic treatments of isa hierarchies, the unique names hypothesis, and the frame problem. The new circumscription may be called formula circumscription to distinguish it from the previously defined domain circumscription and predicate circumscription. A still more general formalism called prioritized circumscription is briefly explored. 1 INTRODUCTION ANDNEW DEFINITION OF CIRCUMSCRIPTION (McCarthy 1980) introduces the circumscription method of nonmonotonic reasoning and gives motivation, some mathematical properties and some ex1 amples of its application. The present paper is logically selfcontained, but motivation may be enhanced by reading t...
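For reference, a standard statement of predicate circumscription, the formalism this abstract generalizes, is reproduced below. The notation follows common textbook presentations and may differ in detail from the paper's formula circumscription.

```latex
% Predicate circumscription of P in a sentence A(P,Z), with Z allowed to vary.
\[
  \mathrm{Circ}[A;\,P;\,Z] \;\equiv\;
  A(P,Z) \;\wedge\;
  \forall p\,\forall z\,\bigl[\,A(p,z) \wedge (p \leq P) \rightarrow (P \leq p)\,\bigr],
\]
\[
  \text{where}\quad p \leq P \;\equiv\; \forall x\,[\,p(x) \rightarrow P(x)\,].
\]
```

Informally: P is minimized, so P holds only where every model of A forces it to hold, which is what gives the nonmonotonic "abnormality is minimized" readings used in the paper's applications.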
An introduction to variable and feature selection
 Journal of Machine Learning Research
, 2003
"... Variable and feature selection have become the focus of much research in areas of application for which datasets with tens or hundreds of thousands of variables are available. ..."
Abstract

Cited by 1283 (16 self)
 Add to MetaCart
Variable and feature selection have become the focus of much research in areas of application for which datasets with tens or hundreds of thousands of variables are available.
Studies of transformation of Escherichia coli with plasmids
 J. Mol. Biol
, 1983
"... Factors that affect he probability of genetic transformation f Escherichia coli by plasmids have been evaluated. A set of conditions is described under which about one in every 400 plasmid molecules produces a transformed cell. These conditions include cell growth in medium containing elevated level ..."
Abstract

Cited by 1609 (1 self)
 Add to MetaCart
Factors that affect he probability of genetic transformation f Escherichia coli by plasmids have been evaluated. A set of conditions is described under which about one in every 400 plasmid molecules produces a transformed cell. These conditions include cell growth in medium containing elevated levels of Mg 2+. and incubation of the cells at 0 ~ in a solution of Mn 2+, ("a 2+, Rb + or K +, dimethyl sulfoxide, dithiothreitol, and hexamine cobalt (III). Transibrmation efficiency declines linearly with increasing plasmid size. Relaxed and supercoiled plasmids transfol'm with similar probabilities. Nontransforming DNAs compete consistent with mass. No significant variation is observed between competing DNAs of difi~rent source, complexity, length or form. Competition with both transforming and nontransforming plasmids indicates that each cell is capable of taking up many DNA molecules, and that the establishment of a transformation event is neither helped nor hindered significantly by the presence of multiple plasmids. 1. Introduct ion Both gramposit ive and gramnegative bacteria can take up and stably establish