Results 1–10 of 13,961
Random Walk on an Arbitrary Set
Abstract
Let I be a countably infinite set of points in R, and suppose that I has no points of accumulation and that its convex hull is the whole of R. It will be convenient to index I as {u_i : i ∈ Z}, with u_i < u_{i+1} for every i. Consider a continuous-time Markov chain Y = {Y(t) : t ≥ 0} on I
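The abstract leaves the chain's dynamics unspecified; as a hedged illustration only, the sketch below simulates a nearest-neighbour continuous-time random walk on an assumed point set {u_i}. The particular points, unit jump rates, and the truncation to a finite index window are all made up, not taken from the paper.

```python
import random

# Hypothetical point set u_i, strictly increasing in i (the paper's I is
# infinite with no accumulation points; we truncate to a finite window).
u = {i: i + 0.1 * (i % 3) for i in range(-50, 51)}

def simulate(t_max, i0=0, rate=1.0, seed=0):
    """Nearest-neighbour CTMC on {u_i}: hold an Exp(2*rate) time, then
    jump to u_{i-1} or u_{i+1} with equal probability (assumed rates)."""
    rng = random.Random(seed)
    t, i, path = 0.0, i0, [(0.0, u[i0])]
    while t < t_max:
        t += rng.expovariate(2 * rate)   # total jump rate out of u_i is 2*rate
        if t >= t_max:
            break
        i += rng.choice((-1, 1))         # jump left or right
        if i not in u:                   # stop at the truncation boundary
            break
        path.append((t, u[i]))
    return path

path = simulate(10.0)
```

Holding times are exponential because the process is Markov in continuous time; other rate structures (e.g. rates depending on the gaps u_{i+1} - u_i) would fit the same skeleton.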
Abstract Factorials of Arbitrary Sets of Integers
2007
Abstract
Given any subset of Z we associate to it another set on which we can define one or more (generally independent) abstract factorial functions. These associated sets are studied and arithmetic relations are revealed. In addition, we show that for an abstract factorial function of an infinite subset
The Characteristic Mapping Method for the Linear Advection of Arbitrary Sets
Abstract
In this paper, we present a new numerical method for advecting arbitrary sets in a vector field. The method computes a transformation of the domain instead of dealing with particular sets. We propose a way of decoupling the advection and representation steps of the computations, resulting
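As a rough illustration of the idea (not the paper's method), the sketch below advects a backward characteristic map of a periodic 1-D domain under a constant velocity, then recovers the advected set by composing the initial set's indicator with the map. The domain, velocity, time step, and initial interval are all assumptions.

```python
import numpy as np

# Instead of advecting a particular set S, advect the backward
# characteristic map X(x, t); the advected set at time t is then
# {x : indicator_S(X(x, t))}, so the representation of S is decoupled
# from the advection of the domain transformation.
n, L, v, dt, steps = 200, 1.0, 0.3, 0.01, 50
x = np.linspace(0.0, L, n, endpoint=False)
X = x.copy()                          # characteristic map, initially identity

for _ in range(steps):
    X = (X - v * dt) % L              # trace characteristics backward (periodic)

indicator = lambda p: ((p % L) > 0.2) & ((p % L) < 0.4)   # initial set (0.2, 0.4)
advected = indicator(X)               # membership of the advected set at each x
```

With constant velocity the result is simply the interval translated by v·dt·steps = 0.15; the point of the method is that the same map X serves any set's indicator without re-running the advection.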
Fast approximate energy minimization via graph cuts
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001
"... In this paper we address the problem of minimizing a large class of energy functions that occur in early vision. The major restriction is that the energy function’s smoothness term must only involve pairs of pixels. We propose two algorithms that use graph cuts to compute a local minimum even when v ..."
Abstract

Cited by 2120 (61 self)
very large moves are allowed. The first move we consider is an α-β swap: for a pair of labels α, β, this move exchanges the labels between an arbitrary set of pixels labeled α and another arbitrary set labeled β. Our first algorithm generates a labeling such that there is no swap move that decreases
General Characterizations of inductive Inference over Arbitrary Sets of Data Presentations
Abstract
General characterizations of inductive inference over arbitrary sets of data presentations
Boosting and differential privacy
2010
"... Boosting is a general method for improving the accuracy of learning algorithms. We use boosting to construct improved privacy-preserving synopses of an input database. These are data structures that yield, for a given set Q of queries over an input database, reasonably accurate estimates of the resp ..."
Abstract

Cited by 648 (14 self)
algorithm obtains a synopsis that is good for all of Q. We ensure privacy for the rows of the database, but the boosting is performed on the queries. We also provide the first synopsis generators for arbitrary sets of arbitrary low sensitivity
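For background on the kind of primitive being boosted, here is a minimal sketch of the standard Laplace mechanism applied to a set of low-sensitivity queries, splitting the privacy budget across queries by basic composition. This is not the paper's boosting construction; the database, the queries, and the budget split are assumptions for illustration.

```python
import math
import random

def laplace_mechanism(db, queries, epsilon, sensitivity=1.0, seed=0):
    """Answer each query with Laplace noise of scale k*sensitivity/epsilon,
    where k = len(queries); by basic composition this is epsilon-DP when
    each query has the stated sensitivity."""
    rng = random.Random(seed)
    scale = len(queries) * sensitivity / epsilon
    noisy = []
    for q in queries:
        u = rng.random() - 0.5                         # uniform in [-0.5, 0.5)
        # Inverse-CDF sample from Laplace(0, scale).
        noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        noisy.append(q(db) + noise)
    return noisy

# Hypothetical database of rows and two counting queries (sensitivity 1).
db = [1, 2, 3, 4, 7]
queries = [lambda rows: sum(1 for r in rows if r > 2),
           lambda rows: sum(1 for r in rows if r % 2 == 0)]
answers = laplace_mechanism(db, queries, epsilon=500.0)
```

Counting queries change by at most 1 when one row changes, which is what makes sensitivity-1 calibration valid; the paper's contribution is getting accurate answers for far larger query sets than this naive per-query budget split allows.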
Reduce and Boost: Recovering Arbitrary Sets of Jointly Sparse Vectors
2008
Abstract

Cited by 100 (41 self)
The rapidly developing area of compressed sensing suggests that a sparse vector lying in a high-dimensional space can be accurately and efficiently recovered from only a small set of non-adaptive linear measurements, under appropriate conditions on the measurement matrix. The vector model has been
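The single-vector recovery step can be illustrated with standard orthogonal matching pursuit (OMP). The paper's ReMBo algorithm reduces the joint-sparsity (multiple measurement vector) problem to repeated single-vector problems like this one, but is not reproduced here; the matrix sizes and sparse coefficients below are arbitrary.

```python
import numpy as np

def omp(A, y, k):
    """Greedy recovery of a k-sparse x from noiseless measurements y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))     # most correlated column
        support.append(j)
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)  # refit on the support
        residual = y - As @ coef                       # orthogonalised residual
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)       # random measurement matrix
x_true = np.zeros(100)
x_true[[5, 17, 63]] = [1.0, -2.0, 0.5]                 # 3-sparse ground truth
x_hat = omp(A, A @ x_true, k=3)
```

Refitting by least squares on the whole support at each step (rather than keeping earlier coefficients fixed) is what distinguishes OMP from plain matching pursuit.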
Approximation by Superpositions of a Sigmoidal Function
1989
Abstract

Cited by 1248 (2 self)
In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube; only mild conditions are imposed on the univariate
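A quick numerical illustration of the theorem's form, sum_j c_j σ(w_j x + b_j): fix steep, shifted sigmoids as the inner affine compositions and fit only the outer coefficients by least squares. This is a convenience sketch, not the paper's (non-constructive) argument; the unit count, steepness, grid, and target function are all assumptions.

```python
import numpy as np

sigma = lambda t: 1.0 / (1.0 + np.exp(-t))           # a sigmoidal function
target = lambda x: np.sin(2.0 * np.pi * x)           # continuous target on [0, 1]

n_units = 60
x = np.linspace(0.0, 1.0, 400)
centers = np.linspace(0.0, 1.0, n_units)
# Design matrix of sigmoids sigma(w*x + b) with w = 80, b = -80*center.
Phi = sigma(80.0 * (x[:, None] - centers[None, :]))
c, *_ = np.linalg.lstsq(Phi, target(x), rcond=None)  # fit outer coefficients only
approx = Phi @ c
err = float(np.max(np.abs(approx - target(x))))      # uniform error on the grid
```

This is exactly a one-hidden-layer network with fixed hidden weights; the theorem says the achievable uniform error goes to zero as the number of units grows, for any continuous target.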
Some optimal inapproximability results
2002
Abstract

Cited by 751 (11 self)
We prove optimal, up to an arbitrary ε > 0, inapproximability results for Max-Ek-Sat for k ≥ 3, maximizing the number of satisfied linear equations in an overdetermined system of linear equations modulo a prime p, and Set Splitting. As a consequence of these results we get improved lower bounds
Maximum entropy markov models for information extraction and segmentation
2000
"... Hidden Markov models (HMMs) are a powerful probabilistic tool for modeling sequential data, and have been applied with success to many text-related tasks, such as part-of-speech tagging, text segmentation and information extraction. In these cases, the observations are usually modeled as multinomial ..."
Abstract

Cited by 561 (18 self)
as multinomial distributions over a discrete vocabulary, and the HMM parameters are set to maximize the likelihood of the observations. This paper presents a new Markovian sequence model, closely related to HMMs, that allows observations to be represented as arbitrary overlapping features (such as word
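The core modelling move can be sketched as a per-state softmax over arbitrary, possibly overlapping observation features, i.e. a conditional next-state distribution P(s' | s, o) in place of the HMM's generative emission model. The state set, features, and weights below are invented for illustration; a real MEMM would learn the weights by maximum-entropy training.

```python
import numpy as np

STATES = ["B", "I", "O"]   # hypothetical chunking states

def features(obs):
    """Overlapping binary features of a word -- legal in a MEMM,
    awkward to accommodate in an HMM's multinomial emissions."""
    return np.array([obs[:1].isupper(),        # capitalised?
                     obs.isdigit(),            # all digits?
                     obs.endswith("ing"),      # -ing suffix?
                     1.0],                     # bias feature
                    dtype=float)

def next_state_dist(weights, prev_state, obs):
    """P(. | prev_state, obs) = softmax_s' of w[prev_state, s'] . f(obs)."""
    f = features(obs)
    scores = np.array([weights[(prev_state, s)] @ f for s in STATES])
    scores -= scores.max()                     # numerical stability
    p = np.exp(scores)
    return p / p.sum()

rng = np.random.default_rng(0)
weights = {(s, t): rng.normal(size=4) for s in STATES for t in STATES}
p = next_state_dist(weights, "O", "Running")
```

Because each distribution conditions on the observation rather than generating it, features may overlap and need not be independent, which is the flexibility the abstract highlights.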