### Citations

985 | Maximizing the spread of influence through a social network
- Kempe, Kleinberg, et al.
- 2003
Citation Context: ...algorithms. 1 Introduction In recent years, there has been a surge of interest in machine learning methods that involve discrete optimization. In this realm, the evolving theory of submodular optimization has been a catalyst for progress in extraordinarily varied application areas. Examples include active learning and experimental design [9, 12, 14, 19, 20], sparse reconstruction [1, 6, 7], graph inference [23, 24, 8], video analysis [29], clustering [10], document summarization [21], object detection [27], information retrieval [28], network inference [23, 24], and information diffusion in networks [17]. The power of submodularity as a modeling tool lies in its ability to capture interesting application domains while maintaining provable guarantees for optimization. These guarantees, however, apply only to the case in which one has access to the exact function to optimize. In many applications, one does not have access to the exact version of the function, but rather to some approximate version of it. If the approximate version remains submodular, then the theory of submodular optimization clearly applies and modest errors translate to modest losses in the quality of approximation. But if the approximate version...

750 | An analysis of the approximations for maximizing submodular set functions
- Nemhauser, Wolsey, et al.
- 1978
Citation Context: ...functions we show that for any fixed β > 0, given access to a 1/n^{1/3−β}-approximately submodular function, no algorithm can obtain an approximation ratio strictly better than O(1/n^β) using polynomially many queries (Theorem 4). (Footnote 1) Observe that for an approximately submodular function F, there exist many submodular functions f of which it is an approximation. All such submodular functions f are called representatives of F. The conversion between an approximation guarantee for F and an approximation guarantee for a representative f of F holds for any choice of the representative. (Footnote 2) Specifically, [22] shows that it is possible to obtain a (1 − 1/e) approximation ratio for a cardinality constraint. The above results imply that even in cases where the objective function is arbitrarily close to being submodular as the number n of elements in N grows, reasonable optimization guarantees are unachievable. The second result shows that this is the case even when we aim to optimize coverage functions. Coverage functions are an important class of submodular functions which are used in numerous applications [11, 21, 18]. Approximation guarantees. The inapproximability results follow from two properties...

340 | Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies
- Krause, Singh, et al.
Citation Context: ...Approximation guarantees. The inapproximability results follow from two properties of the model: the structure of the function (submodularity), and the size of ε in the definition of approximate submodularity. A natural question is whether one can relax either condition to obtain positive approximation guarantees. We show that this is indeed the case: • In the general case of monotone submodular functions, we show that the greedy algorithm achieves a (1 − 1/e − O(δ)) approximation ratio when ε = δ/k (Theorem 5). Furthermore, this bound is tight: given a 1/k^{1−β}-approximately submodular function, the greedy algorithm no longer provides a constant-factor approximation guarantee...
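The greedy algorithm discussed in this context repeatedly adds the element with the largest marginal gain until the cardinality constraint k is reached. Below is a minimal sketch of that procedure on a toy coverage function; the function names and the small example instance are assumptions for illustration, not the cited authors' implementation.

```python
# Minimal sketch of greedy maximization under a cardinality constraint.
# With an exact monotone submodular oracle this is the classical
# (1 - 1/e)-approximation; the context above discusses how the guarantee
# degrades when the oracle is only approximately submodular.

def greedy_max(F, ground_set, k):
    """Greedily select up to k elements by marginal gain of F."""
    S = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for x in ground_set - S:
            gain = F(S | {x}) - F(S)
            if best is None or gain > best_gain:
                best, best_gain = x, gain
        if best is None:
            break
        S.add(best)
    return S

# Toy coverage function: f(S) = number of points covered by the chosen sets.
subsets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}, 4: {"d"}}
def coverage(S):
    return len(set().union(*(subsets[i] for i in S))) if S else 0

chosen = greedy_max(coverage, set(subsets), 2)
print(chosen, coverage(chosen))  # two sets covering 3 points
```

Each of the k rounds queries the oracle O(n) times, so with an ε-approximately submodular oracle the per-query error can compound across rounds, which is why the cited bound ties ε to 1/k.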

90 | A class of submodular functions for document summarization
- Lin, Bilmes
- 2011

89 | Near-optimal observation selection using submodular functions
- Krause, Guestrin
- 2007

69 | Adaptive submodularity: Theory and applications in active learning and stochastic optimization
- Golovin, Krause
- 2011

68 | Batch mode active learning and its application to medical image classification
- Hoi, Jin, et al.
- 2006

60 | Structured sparsity-inducing norms through submodular functions
- Bach
- 2010

50 | Nonmyopic active learning of Gaussian Processes: an exploration-exploitation approach
- Krause, Guestrin
- 2007

35 | Inferring networks of diffusion and influence
- Rodriguez, Leskovec, et al.
- 2010

29 | Submodular meets spectral: Greedy algorithms for subset selection, sparse approximation and dictionary selection
- Das, Kempe
- 2011

29 | Submodularity and its applications in optimized information gathering
- Krause, Guestrin
- 2011

27 | Learning submodular functions
- Balcan, Harvey
- 2011
Citation Context: ...monotone (f(S) ≤ f(T) for S ⊆ T). Approximate submodularity appears in various domains. • Optimization with noisy oracles. In these scenarios, we wish to solve optimization problems where one does not have access to a submodular function but to a noisy version of it. An example recently studied in [5] involves maximizing information gain in graphical models; this captures many Bayesian experimental design settings. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. • PMAC learning. In the active area of learning submodular functions initiated by Balcan and Harvey [3], the objective is to approximately learn submodular functions. Roughly speaking, the PMAC-learning framework guarantees that the learned function is a constant-factor approximation of the true submodular function with high probability. Therefore, after learning a submodular function, one obtains an approximately submodular function. • Sketching. Since submodular functions have, in general, exponential-size representations, [2] studied the problem of sketching submodular functions: finding a function with polynomial-size representation approximating a given submodular function. The resulting sket...

23 | Sketching valuation functions
- Badanidiyuru, Dobzinski, et al.
- 2012
Citation Context: ...Optimization of approximate submodularity. We focus on optimization problems of the form max_{S : |S| ≤ k} F(S) (2), where F is an ε-approximately submodular function and k ∈ N is the cardinality constraint. We say that a set S ⊆ N is an α-approximation to the optimal solution of (2) if |S| ≤ k and F(S) ≥ α · max_{|T| ≤ k} F(T). As is common in submodular optimization, we assume the value q...

15 | Simultaneous learning and covering with adversarial noise
- Guillory, Bilmes
- 2011

14 | Submodular optimization with submodular cover and submodular knapsack constraints
- Iyer, Bilmes
- 2013
Citation Context: ...• Since our query-complexity lower bound holds for coverage functions, which already contain a great deal of structure, we relax the structural assumption by considering functions with bounded curvature c; this is a common assumption in applications of submodularity to machine learning and has been used in prior work to obtain theoretical guarantees [15, 16]. Under this assumption, we give an algorithm which achieves an approximation ratio of (1 − c)((1 − ε)/(1 + ε))^2 (Proposition 8). We state our positive results for the case of a cardinality constraint of k. Similar results hold for matroids of rank k; the proofs can be found in the Appendix. Note that cardinality constraints are a special case of matroid constraints; therefore our lower bounds also apply to matroid constraints. 1.2 Discussion and additional related work. Before transitioning to the technical results, we briefly survey error in applications of submodularity and the implication...

14 | On learning to localize objects with minimal supervision
- Song, Girshick, et al.
- 2014

12 | Budgeted nonparametric learning from data streams
- Gomes, Krause
- 2010

9 | Curvature and optimal algorithms for learning and minimizing submodular functions
- Iyer, Jegelka, et al.
- 2013

9 | Learning mixtures of submodular functions for image collection summarization
- Tschiatschek, Iyer, et al.
- 2014

4 | Selecting diverse features via spectral relaxation
- Das, Dasgupta, et al.
- 2012

4 | A convex formulation for learning scale-free networks via submodular relaxation
- Defazio, Caetano
- 2012

3 | Escaping the local minima via simulated annealing: Optimization of approximately convex functions
- Belloni, Liang, et al.
- 2015
Citation Context: ...there is a coupling between approximate submodularity and erroneous evaluations of a submodular function: if one can evaluate a submodular function within a (multiplicative) accuracy of 1 ± ε, then this is an ε-approximately submodular function. Additive vs. multiplicative approximation. The definition of approximate submodularity in (1) uses a relative (multiplicative) approximation. We could instead consider an absolute (additive) approximation, i.e. require that f(S) − ε ≤ F(S) ≤ f(S) + ε for all sets S. This definition has been used in the related problem of optimizing approximately convex functions [4, 25], where functions are assumed to have normalized range. For un-normalized functions or functions whose range is unknown, a relative approximation is more informative. When the range is known, specifically if an upper bound B on f(S) is known, an ε/B-approximately submodular function is also an ε-additively approximately submodular function. This implies that our lower bounds and approximation results could equivalently be expressed for additive approximations of normalized functions. Error vs. noise. If we interpret Equation (1) in terms of error, we see that no assumption is made on the source o...
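The conversion between the two error models stated in this context can be checked numerically. The sketch below uses assumed toy values (not data from the paper): since f(S) ≤ B, a multiplicative error factor of 1 ± ε/B perturbs f(S) by at most ε in absolute terms.

```python
# Toy numerical check: if f is bounded above by B and F approximates f
# within a multiplicative factor 1 +/- eps/B, then F approximates f
# within an additive eps. The values of B, eps, and f below are assumed.
B, eps = 10.0, 0.5
f_vals = {"S1": 2.0, "S2": 7.0, "S3": 10.0}  # hypothetical f(S) values, all <= B
for fs in f_vals.values():
    for factor in (1 - eps / B, 1 + eps / B):  # worst-case multiplicative error
        F_s = factor * fs
        # additive error is at most eps (small slack for float rounding)
        assert abs(F_s - fs) <= eps + 1e-9
print("eps/B multiplicative error implies eps additive error on this example")
```

The worst case is attained at f(S) = B, where the multiplicative perturbation (ε/B) · B equals exactly ε; for smaller values of f the additive error is strictly smaller.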

3 | Submodular inference of diffusion networks from multiple trees
- Rodriguez, Schölkopf
- 2012

2 | Sequential information maximization: When is greedy near-optimal?
- Chen, Hassani, et al.
- 2015
Citation Context: ...f(S ∪ T) + f(S ∩ T) ≤ f(S) + f(T). We say that a function F : 2^N → R is ε-approximately submodular if there exists a submodular function f : 2^N → R such that for any S ⊆ N: (1 − ε)f(S) ≤ F(S) ≤ (1 + ε)f(S). (1) Unless otherwise stated, all submodular functions f considered are normalized (f(∅) = 0) and monotone (f(S) ≤ f(T) for S ⊆ T). Approximate submodularity appears in various domains...
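A small sketch of how an ε-approximately submodular oracle in the sense of Equation (1) can arise: start from a submodular coverage function f and apply per-set multiplicative noise. The ground set and the noise model below are assumptions for illustration only.

```python
# Build an eps-approximately submodular oracle F from a submodular
# coverage function f by scaling each value by a factor in [1-eps, 1+eps].
import itertools
import random

subsets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}

def f(S):  # coverage: submodular, monotone, normalized (f of the empty set is 0)
    return len(set().union(*(subsets[i] for i in S))) if S else 0

eps = 0.1
random.seed(0)
noise = {frozenset(S): random.uniform(1 - eps, 1 + eps)
         for r in range(len(subsets) + 1)
         for S in itertools.combinations(subsets, r)}

def F(S):
    return noise[frozenset(S)] * f(S)

# F satisfies the sandwich inequality (1) by construction:
for S in noise:
    assert (1 - eps) * f(S) <= F(S) <= (1 + eps) * f(S)
```

Note that F itself need not be submodular: the noise can break the diminishing-returns inequality even though every value stays within the (1 ± ε) sandwich, which is exactly what makes the optimization question nontrivial.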

2 | Information-theoretic lower bounds for convex optimization with erroneous oracles
- Singer, Vondrák
- 2015

2 | Submodular attribute selection for action recognition in video.
- Zheng, Jiang, et al.
- 2014
Citation Context ...2 we show an exponential query-complexity lower bound. In contrast, when ε < 1/k or under a stronger bounded-curvature assumption, we give constant approximation algorithms. 1 Introduction In recent years, there has been a surge of interest in machine learning methods that involve discrete optimization. In this realm, the evolving theory of submodular optimization has been a catalyst for progress in extraordinarily varied application areas. Examples include active learning and experimental design [9, 12, 14, 19, 20], sparse reconstruction [1, 6, 7], graph inference [23, 24, 8], video analysis [29], clustering [10], document summarization [21], object detection [27], information retrieval [28], network inference [23, 24], and information diffusion in networks [17]. The power of submodularity as a modeling tool lies in its ability to capture interesting application domains while maintaining provable guarantees for optimization. The guarantees, however, apply to the case in which one has access to the exact function to optimize. In many applications, one does not have access to the exact version of the function, but rather some approximate version of it. If the approximate version remains ...

1 | Submodular optimization under noise.
- Hassidim, Singer
- 2016
Citation Context ...ng submodular, it can be shown to be trivially inapproximable (e.g., maximize a function which takes a value of 1 for a single arbitrary set S ⊆ N and 0 elsewhere). The question is therefore: How close should a function be to submodular to retain provable approximation guarantees? In recent work, it was shown that for any constant ε > 0 there exists a class of ε-approximately submodular functions for which no algorithm using fewer than exponentially many queries has a constant approximation ratio for the canonical problem of maximizing a monotone submodular function under a cardinality constraint [13]. Such an impossibility result suggests two natural relaxations: the first is to make additional assumptions about the structure of errors, such as a stochastic error model. This is the direction taken in [13], where the main result shows that when errors are drawn i.i.d. from a wide class of distributions, optimal guarantees are obtainable. The second alternative is to assume the error is subconstant, which is the focus of this paper. 1.1 Overview of the results Our main result is a spoiler: even for ε = 1/n^(1/2−β) for any constant β > 0 and n = |N|, no algorithm can obtain a constant-factor appr...
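The "trivially inapproximable" example mentioned above can be sketched directly: a function that is 1 on a single hidden set S* and 0 everywhere else carries no structure, so queries reveal nothing until S* itself is hit. The instance below (ground-set size, hidden-set size, query budget) is hypothetical.

```python
import random

# A "needle" function: 1 on one hidden set S*, 0 elsewhere. Without
# submodular structure, value queries are uninformative until S* is found.
n = 20
ground = list(range(n))
rng = random.Random(1)
hidden = frozenset(rng.sample(ground, 5))  # the arbitrary hidden set S*

def needle(S):
    return 1.0 if frozenset(S) == hidden else 0.0

# A polynomial number of uninformed queries almost surely sees only zeros,
# so no such algorithm can guarantee any constant approximation ratio.
queries = [frozenset(rng.sample(ground, 5)) for _ in range(1000)]
values = [needle(S) for S in queries]
```

With C(20, 5) = 15,504 candidate sets, 1,000 blind queries hit S* only with small probability; scaling n makes the success probability of any polynomial-query algorithm vanish.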

1 | Noisy submodular maximization via adaptive sampling with applications to crowdsourced image collection summarization. arXiv preprint arXiv:1511.07211.
- Singla, Tschiatschek, et al.
- 2015
Citation Context ... F is such that F(S) = ξ_S f(S) where ξ_S is drawn independently for each set S from a distribution D. The key aspect of consistent noise is that the random draws occur only once: querying the same set multiple times always returns the same value. This definition is the one adopted in [13]; a similar notion is called persistent noise in [5]. • Inconsistent noise: in this model F(S) is a random variable such that f(S) = E[F(S)]. The noisy oracle can be queried multiple times, and each query corresponds to a new independent random draw from the distribution of F(S). This model was considered in [26] in the context of dataset summarization and is also implicitly present in [17], where the objective function is defined as an expectation and has to be estimated via sampling. 3 Formal guarantees for consistent noise have been obtained in [13]. A standard way to approach optimization with inconsistent noise is to estimate the value of each set used by the algorithm to an accuracy ε via independent randomized sampling, where ε is chosen small enough so as to obtain approximation guarantees. Specifically, assuming that the algorithm only makes polynomially many value queries and that the functio...
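The standard fix for inconsistent noise described above can be sketched in a few lines: since each query F(S) is an independent draw with mean f(S), averaging m draws estimates f(S) to accuracy shrinking like 1/√m by standard concentration bounds. The true value and noise distribution below are made up for illustration.

```python
import random
import statistics

rng = random.Random(0)
f_S = 10.0  # hypothetical true value f(S)

def noisy_oracle():
    """Inconsistent noise: a fresh multiplicative perturbation per query,
    with E[F(S)] = f(S)."""
    return f_S * rng.uniform(0.8, 1.2)

def estimate(m):
    """Average m independent oracle queries to estimate f(S)."""
    return statistics.fmean(noisy_oracle() for _ in range(m))

# More samples -> tighter estimate (Hoeffding-style concentration).
rough = estimate(10)
tight = estimate(10000)
```

Choosing m polynomial in n and 1/ε makes every estimate ε-accurate with high probability over all polynomially many queries, which is exactly the reduction the excerpt alludes to.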