Results 1–10 of 17
Sketching Valuation Functions
2011
Abstract
Cited by 23 (2 self)
Motivated by the problem of querying and communicating bidders' valuations in combinatorial auctions, we study how well different classes of set functions can be sketched. More formally, let f be a function mapping subsets of some ground set [n] to the nonnegative real numbers. We say that f′ is an α-sketch of f if for every set S, the value f′(S) lies between f(S)/α and f(S), and f′ can be specified by poly(n) bits. We show that for every subadditive function f there exists an α-sketch where α = n^{1/2} · O(polylog(n)). Furthermore, we provide an algorithm that finds these sketches with a polynomial number of demand queries. This is essentially the best we can hope for since: 1. We show that there exist subadditive functions (in fact, XOS functions) that do not admit an o(n^{1/2}) sketch. (Balcan and Harvey [3] previously showed that there exist functions belonging to the class of substitutes valuations that do not admit an O(n^{1/3}) sketch.) 2. We prove that every deterministic algorithm that accesses the function via value queries only cannot guarantee a sketching ratio better than n^{1−ε}. We also show that coverage functions, an interesting subclass of submodular functions, admit arbitrarily good sketches.
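The α-sketch condition above (f(S)/α ≤ f′(S) ≤ f(S) for every S) can be checked exhaustively on a toy instance. The functions and the ratio α = √2 below are hypothetical, chosen only to exercise the definition: f(S) = √|S| is subadditive, and rounding |S| down to a power of 2 gives a √2-sketch of it.

```python
from itertools import chain, combinations

n = 4
ground = list(range(n))

def f(S):
    # Toy subadditive valuation (hypothetical): f(S) = sqrt(|S|)
    return len(S) ** 0.5

def f_sketch(S):
    # Crude sketch: round |S| down to the nearest power of 2 before the sqrt.
    # Since the rounded size exceeds |S|/2, this loses at most a sqrt(2) factor.
    if not S:
        return 0.0
    k = 1
    while k * 2 <= len(S):
        k *= 2
    return k ** 0.5

alpha = 2 ** 0.5  # claimed sketching ratio for this toy pair

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Verify f(S)/alpha <= f'(S) <= f(S) on every subset of the ground set
for S in powerset(ground):
    assert f(S) / alpha - 1e-12 <= f_sketch(S) <= f(S) + 1e-12
```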
Optimal bounds on approximation of submodular and XOS functions by juntas
 CoRR
Abstract
Cited by 14 (5 self)
We investigate the approximability of several classes of real-valued functions by functions of a small number of variables (juntas). Our main results are tight bounds on the number of variables required to approximate a function f: {0, 1}^n → [0, 1] within ℓ2-error ε over the uniform distribution: • If f is submodular, then it is ε-close to a function of …
Representation, approximation and learning of submodular functions using low-rank decision trees
In Proceedings of the Conference on Learning Theory (COLT), 2013
Abstract
Cited by 13 (8 self)
We study the complexity of approximate representation and learning of submodular functions over the uniform distribution on the Boolean hypercube {0, 1}^n. Our main result is the following structural theorem: any submodular function is ε-close in ℓ2 to a real-valued decision tree (DT) of depth O(1/ε²). This immediately implies that any submodular function is ε-close to a function of at most 2^{O(1/ε²)} variables and has a spectral ℓ1-norm of 2^{O(1/ε²)}. It also implies the closest previous result, which states that submodular functions can be ε-approximated by polynomials of degree O(1/ε²) (Cheraghchi et al., 2012). Our result is proved by constructing an approximation of a submodular function by a DT of rank 4/ε² and a proof that any rank-r DT can be ε-approximated by a DT of depth (5/2)(r + log(1/ε)). We show that these structural results can be exploited to give an attribute-efficient PAC learning algorithm for submodular functions running in time Õ(n²) · 2^{O(1/ε⁴)}. The best previous algorithm for the problem requires n^{O(1/ε²)} time and examples (Cheraghchi et al., 2012) but works also in the agnostic setting. In addition, we give improved learning algorithms for a number of related settings. We also prove that our PAC and agnostic learning algorithms are essentially optimal via two lower bounds: (1) an information-theoretic lower bound of 2^{Ω(1/ε^{2/3})} on the complexity of learning monotone submodular functions in any reasonable model (including learning with value queries); (2) a computational lower bound of n^{Ω(1/ε^{2/3})} based on a reduction to learning of sparse parities with noise, widely believed to be intractable. These are the first lower bounds for learning of submodular functions over the uniform distribution.
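Submodularity, the property all of these results rely on, is the diminishing-returns condition: for all S ⊆ T and x ∉ T, the marginal gain of x on S is at least its gain on T. A minimal exhaustive check on a hypothetical toy function (the capped cardinality function min(|S|, 3), not one from the paper):

```python
from itertools import combinations

n = 5
ground = set(range(n))

def f(S):
    # Toy monotone submodular function (hypothetical): min(|S|, 3)
    return min(len(S), 3)

def marginal(S, x):
    # Marginal gain of adding element x to set S
    return f(S | {x}) - f(S)

# Diminishing returns: for all S ⊆ T and x ∉ T, gain on S >= gain on T
for rT in range(n + 1):
    for T in map(set, combinations(ground, rT)):
        for rS in range(rT + 1):
            for S in map(set, combinations(T, rS)):
                for x in ground - T:
                    assert marginal(S, x) >= marginal(T, x)
```

For this function the marginal gain is 1 while |S| < 3 and 0 afterwards, so the check passes on every pair S ⊆ T.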
Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions
In NIPS, 2013
Abstract
Cited by 9 (6 self)
We investigate three related and important problems connected to machine learning: approximating a submodular function everywhere, learning a submodular function (in a PAC-like setting [28]), and constrained minimization of submodular functions. We show that the complexity of all three problems depends on the “curvature” of the submodular function, and provide lower and upper bounds that refine and improve previous results [2, 6, 8, 27]. Our proof techniques are fairly generic. We either use a black-box transformation of the function (for approximation and learning), or a transformation of algorithms to use an appropriate surrogate function (for minimization). Curiously, curvature has been known to influence approximations for submodular maximization [3, 29], but its effect on minimization, approximation and learning has hitherto been open. We complete this picture, and also support our theoretical claims by empirical results.
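The abstract does not define curvature, but the standard notion of total curvature for a monotone submodular f with f(∅) = 0 is κ_f = 1 − min_j [f(V) − f(V∖{j})] / f({j}); the definition and the small coverage-style example below are illustrative assumptions, not taken from the paper.

```python
def curvature(f, ground):
    # Total curvature: 1 - min over j of (f(V) - f(V \ {j})) / f({j})
    V = set(ground)
    return 1 - min((f(V) - f(V - {j})) / f({j}) for j in ground)

# Hypothetical example: unweighted coverage of three sets over a small universe
sets = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c"}}

def cov(S):
    # Number of universe elements covered by the chosen sets
    covered = set().union(*(sets[i] for i in S)) if S else set()
    return len(covered)

print(curvature(cov, sets.keys()))  # → 1.0
```

Here element 1 (and 2) contributes nothing on top of the other sets, so the minimum ratio is 0 and the curvature is maximal (κ = 1); a modular (additive) function would instead have κ = 0.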
Learning pseudo-Boolean k-DNF and submodular functions
2013
Abstract
Cited by 6 (2 self)
We prove that any submodular function f: {0, 1}^n → {0, 1, ..., k} can be represented as a pseudo-Boolean 2k-DNF formula. Pseudo-Boolean DNFs are a natural generalization of the DNF representation for functions with integer range. Each term in such a formula has an associated integral constant. We show that an analog of Håstad's switching lemma holds for pseudo-Boolean k-DNFs if all constants associated with the terms of the formula are bounded. This allows us to generalize Mansour's PAC-learning algorithm for k-DNFs to pseudo-Boolean k-DNFs, and hence gives a PAC-learning algorithm with membership queries under the uniform distribution for submodular functions of the form f: {0, 1}^n → {0, 1, ..., k}. Our algorithm runs in time polynomial in n, k^{O(k log k/ε)} and log(1/δ) and works even in the agnostic setting. The line of previous work on learning submodular functions: [Balcan, Harvey (STOC '11); Gupta, Hardt, Roth, Ullman (STOC '11); Cheraghchi, Klivans, Kothari, Lee …
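A pseudo-Boolean DNF as described assigns each term an integral constant and evaluates to the largest constant among satisfied terms. A minimal evaluator under that reading (the two-term formula here is a made-up example, not one from the paper):

```python
# Each term: (variables that must be 1, variables that must be 0, integer constant)
terms = [
    ({0, 1}, set(), 2),   # x0 AND x1        -> constant 2
    ({2}, {0}, 1),        # x2 AND (NOT x0)  -> constant 1
]

def eval_pb_dnf(terms, x):
    # Value is the max constant over satisfied terms; 0 if none is satisfied
    best = 0
    for pos, neg, c in terms:
        if all(x[i] for i in pos) and not any(x[i] for i in neg):
            best = max(best, c)
    return best

print(eval_pb_dnf(terms, [1, 1, 0]))  # → 2
print(eval_pb_dnf(terms, [0, 0, 1]))  # → 1
```

Each term is a width-≤k conjunction, so this formula is a pseudo-Boolean 2-DNF with constants in {0, 1, 2}.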
Submodular Functions: Learnability, Structure, and Optimization
2012
Abstract
Cited by 6 (0 self)
Submodular functions are discrete functions that model laws of diminishing returns and enjoy numerous algorithmic applications. They have been used in many areas, including combinatorial optimization, machine learning, and economics. In this work we study submodular functions from a learning-theoretic angle. We provide algorithms for learning submodular functions, as well as lower bounds on their learnability. In doing so, we uncover several novel structural results revealing ways in which submodular functions can be both surprisingly structured and surprisingly unstructured. We provide several concrete implications of our work in other domains, including algorithmic game theory and combinatorial optimization. At a technical level, this research combines ideas from many areas, including learning theory (distributional learning and PAC-style analyses), combinatorics and optimization (matroids and submodular functions), and pseudorandomness (lossless expander graphs).
Efficiently learning from revealed preference
In Internet and Network Economics, 2012
Learning coverage functions and private release of marginals
In COLT, 2014
Abstract
Cited by 3 (1 self)
We study the problem of approximating and learning coverage functions. A function c: 2^[n] → R+ is a coverage function if there exists a universe U with nonnegative weights w(u) for each u ∈ U and subsets A1, A2, ..., An of U such that c(S) = Σ_{u ∈ ∪_{i∈S} Ai} w(u). Alternatively, coverage functions can be described as nonnegative linear combinations of monotone disjunctions. They are a natural subclass of submodular functions and arise in a number of applications. We give an algorithm that for any γ, δ > 0, given random and uniform examples of an unknown coverage function c, finds a function h that approximates c within factor 1 + γ on all but a δ-fraction of the points in time poly(n, 1/γ, 1/δ). This is the first fully-polynomial algorithm for learning an interesting class of functions in the demanding PMAC model of Balcan and Harvey [2012]. Our algorithms are based on several new structural properties of coverage functions. Using the results in [Feldman and Kothari, 2014], we also show that coverage functions are learnable agnostically with excess ℓ1-error ε over all product and symmetric distributions in time n^{log(1/ε)}. In contrast, we show that, without assumptions on the distribution, learning coverage functions is at least as hard as learning polynomial-size disjoint DNF formulas, a class of functions for which the best known algorithm runs in time 2^{Õ(n^{1/3})} [Klivans and Servedio, 2004]. As an application of our learning results, we give simple differentially-private algorithms for releasing monotone conjunction counting queries with low average error. In particular, for any k ≤ n, we obtain private release of k-way marginals with average error ᾱ in time n^{O(log(1/ᾱ))}.
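The coverage-function definition above, c(S) = Σ_{u ∈ ∪_{i∈S} Ai} w(u), is straightforward to instantiate directly; the universe, sets A_i, and weights below are hypothetical:

```python
# Universe U with nonnegative weights w(u)
w = {"u1": 1.0, "u2": 2.0, "u3": 0.5}

# Subsets A_1..A_n of the universe, indexed 0..n-1
A = [{"u1", "u2"}, {"u2", "u3"}, {"u3"}]

def c(S):
    # c(S) = total weight of elements covered by at least one A_i with i in S
    covered = set().union(*(A[i] for i in S)) if S else set()
    return sum(w[u] for u in covered)

print(c({0}))        # → 3.0
print(c({0, 1}))     # → 3.5
print(c({0, 1, 2}))  # → 3.5  (A_3 covers nothing new)
```

The last two calls illustrate why coverage functions are submodular: once an element is already covered, adding a set containing it contributes no additional weight.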