Results 1–10 of 16
Algorithms in Discrete Convex Analysis
Math. Programming, 2000
Abstract

Cited by 158 (36 self)
The aim of this paper is to describe the fundamental results on M-convex and L-convex functions, with special emphasis on algorithmic aspects.
Learning Mixtures of Submodular Functions for Image Collection Summarization
Abstract

Cited by 9 (2 self)
We address the problem of image collection summarization by learning mixtures of submodular functions. Submodularity is useful for this problem since it naturally represents characteristics such as fidelity and diversity, desirable for any summary. Several previously proposed image summarization scoring methodologies, in fact, instinctively arrived at submodularity. We provide classes of submodular component functions (including some which are instantiated via a deep neural network) over which mixtures may be learnt. We formulate the learning of such mixtures as a supervised problem via large-margin structured prediction. As a loss function, and for automatic summary scoring, we introduce a novel summary evaluation method called V-ROUGE, and test both submodular and non-submodular optimization (using the submodular-supermodular procedure) to learn a mixture of submodular functions. Interestingly, using non-submodular optimization to learn submodular functions provides the best results. We also provide a new data set consisting of 14 real-world image collections along with many human-generated ground truth summaries collected using Amazon Mechanical Turk. We compare our method with previous work on this problem and show that our learning approach outperforms all competitors on this new data set. This paper provides, to our knowledge, the first systematic approach for quantifying the problem of image collection summarization, along with a new data set of image collections and human summaries.
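The mixture construction in this abstract can be illustrated with a minimal sketch: a non-negative weighted sum of submodular component functions is itself submodular. The toy images, tags, weights, and component functions below are hypothetical stand-ins, not the paper's learned components.

```python
import math

# Hypothetical toy data: each image is described by a set of tags.
images = {
    "img1": {"beach", "sunset"},
    "img2": {"beach", "people"},
    "img3": {"mountain"},
}

def coverage(summary):
    """Number of distinct tags covered -- a monotone submodular function."""
    covered = set()
    for i in summary:
        covered |= images[i]
    return len(covered)

def diversity(summary):
    """Concave function of summary size -- also submodular."""
    return math.sqrt(len(summary))

def mixture(summary, weights=(1.0, 2.0)):
    """Non-negative weighted sum of submodular components: still submodular."""
    return weights[0] * coverage(summary) + weights[1] * diversity(summary)
```

Learning then amounts to fitting the weights so that high-scoring summaries match human ground truth, which the paper does via large-margin structured prediction.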
Monotone Submodular Maximization over a Matroid via Non-Oblivious Local Search
, 2013
Abstract

Cited by 2 (1 self)
We present an optimal, combinatorial 1 − 1/e approximation algorithm for monotone submodular optimization over a matroid constraint. Compared to the continuous greedy algorithm (Calinescu, Chekuri, Pál and Vondrák, 2008), our algorithm is extremely simple and requires no rounding. It consists of the greedy algorithm followed by local search. Both phases are run not on the actual objective function, but on a related auxiliary potential function, which is also monotone and submodular. In our previous work on maximum coverage (Filmus and Ward, 2012), the potential function gives more weight to elements covered multiple times. We generalize this approach from coverage functions to arbitrary monotone submodular functions. When the objective function is a coverage function, both definitions of the potential function coincide. Our approach generalizes to the case where the monotone submodular function has restricted curvature. For any curvature c, we adapt our algorithm to produce a (1 − e^{−c})/c approximation. This matches results of Vondrák (2008), who has shown that the continuous greedy algorithm produces a (1 − e^{−c})/c approximation when the objective function has curvature c with respect to the optimum, and proved that achieving any better approximation ratio is impossible in the value oracle model.
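For reference, the plain greedy baseline that the abstract's first phase is built on can be sketched as follows. This is the cardinality-constrained special case with a hypothetical coverage instance, not the paper's non-oblivious variant, which runs greedy and local search on an auxiliary potential function instead of f itself.

```python
def greedy_max(f, ground, k):
    """Plain greedy for monotone submodular f under |S| <= k
    (a special case of a matroid constraint)."""
    S = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for e in ground - S:
            gain = f(S | {e}) - f(S)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no element strictly improves f
            break
        S.add(best)
    return S

# Hypothetical coverage instance: f(S) = number of items covered by S.
covers = {"a": {1, 2}, "b": {2, 3}, "c": {4}}

def f(S):
    covered = set()
    for e in S:
        covered |= covers[e]
    return len(covered)
```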
Non-monotone Adaptive Submodular Maximization (extended version with supplementary material)
, 2015
Abstract

Cited by 1 (0 self)
A wide range of AI problems, such as sensor placement, active learning, and network influence maximization, require sequentially selecting elements from a large set with the goal of optimizing the utility of the selected subset. Moreover, each element that is picked may provide stochastic feedback, which can be used to make smarter decisions about future selections. Finding efficient policies for this general class of adaptive optimization problems can be extremely hard. However, when the objective function is adaptive monotone and adaptive submodular, a simple greedy policy attains a 1 − 1/e approximation ratio in terms of expected utility. Unfortunately, many practical objective functions are naturally non-monotone; to our knowledge, no existing policy has provable performance guarantees when the assumption of adaptive monotonicity is lifted. We propose the adaptive random greedy policy for maximizing adaptive submodular functions, and prove that it retains the aforementioned 1 − 1/e approximation ratio for functions that are also adaptive monotone, while it additionally provides a 1/e approximation ratio for non-monotone adaptive submodular functions. We showcase the benefits of adaptivity on three real-world network data sets using two non-monotone functions, representative of two classes of commonly encountered non-monotone objectives.
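In the non-adaptive setting, the random-greedy idea that the policy builds on can be sketched like this: at each step, rank the remaining elements by marginal gain, keep the top k, and pick one of them uniformly at random. The graph-cut objective below is a hypothetical non-monotone submodular example; the paper's adaptive version additionally conditions on stochastic feedback, which this sketch omits.

```python
import random

def random_greedy(f, ground, k, rng=None):
    """Random-greedy sketch for (possibly non-monotone) submodular f under
    |S| <= k: choose uniformly among the k best marginal gains each step."""
    rng = rng or random.Random(0)
    S = set()
    for _ in range(k):
        gains = sorted(((f(S | {e}) - f(S), e) for e in ground - S), reverse=True)
        top = [e for _, e in gains[:k]]
        if not top:
            break
        e = rng.choice(top)
        if f(S | {e}) - f(S) > 0:  # skip elements with non-positive gain
            S.add(e)
    return S

# Hypothetical non-monotone submodular objective: a graph cut function.
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]

def cut(S):
    return sum(1 for u, v in edges if (u in S) != (v in S))
```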
Streaming Algorithms for Submodular Function Maximization
, 2015
Abstract

Cited by 1 (0 self)
We consider the problem of maximizing a non-negative submodular set function f: 2^N → R+ subject to a p-matchoid constraint in the single-pass streaming setting. Previous work in this context has considered streaming algorithms for modular functions and monotone submodular functions. The main result is for submodular functions that are non-monotone. We describe deterministic and randomized algorithms that obtain an Ω(1/p)-approximation using O(k log k) space, where k is an upper bound on the cardinality of the desired set. The model assumes value oracle access to f and membership oracles for the matroids defining the p-matchoid constraint.
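The single-pass model itself is easy to illustrate. The thresholding sketch below keeps a stream element whenever its marginal gain clears a fixed threshold and the budget allows, touching each item exactly once. It is only a cardinality-constrained illustration with a hypothetical coverage objective, not the paper's p-matchoid algorithm.

```python
def threshold_stream(stream, f, k, tau):
    """One pass over the stream: keep e if its marginal gain is at least tau
    and fewer than k elements have been kept. Memory holds only S."""
    S = set()
    for e in stream:
        if len(S) < k and f(S | {e}) - f(S) >= tau:
            S.add(e)
    return S

# Hypothetical coverage objective over a small universe.
covers = {"a": {1, 2}, "b": {2}, "c": {3, 4}}

def f(S):
    covered = set()
    for e in S:
        covered |= covers[e]
    return len(covered)
```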
Deep Submodular Functions: Definitions & Learning
Abstract
We propose and study a new class of submodular functions called deep submodular functions (DSFs). We define DSFs and situate them within the broader context of classes of submodular functions in relationship both to various matroid ranks and to sums of concave composed with modular functions (SCMs). Notably, we find that DSFs constitute a strictly broader class than SCMs, thus motivating their use, but that they do not comprise all submodular functions. Interestingly, some DSFs can be seen as special cases of certain deep neural networks (DNNs), hence the name. Finally, we provide a method to learn DSFs in a max-margin framework, and offer preliminary results applying this to both synthetic and real-world data instances.
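A minimal sketch of the nesting the abstract describes, under the assumption that a DSF stacks concave functions over weighted sums of lower-layer outputs: the first layer below is a one-layer SCM, and the second concave layer makes it "deep". The per-element feature weights are hypothetical.

```python
import math

# Hypothetical per-element weights for two features.
w = {"u": (1.0, 0.0), "v": (1.0, 1.0), "x": (0.0, 2.0)}

def scm(S):
    """Sum of concave (sqrt) composed with modular functions: one layer."""
    return sum(math.sqrt(sum(w[e][j] for e in S)) for j in range(2))

def dsf(S):
    """A second concave layer over the SCM layer: a two-layer DSF sketch."""
    return math.sqrt(scm(S))
```

The layered structure mirrors how a DNN composes simple units; the abstract's point is that adding layers strictly enlarges the class beyond one-layer SCMs while staying submodular.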
Causal meets Submodular: Subset Selection with Directed Information
Abstract
We study causal subset selection with directed information as the measure of prediction causality. Two typical tasks, causal sensor placement and covariate selection, are correspondingly formulated as cardinality-constrained directed information maximizations. To attack these NP-hard problems, we show that the first problem is submodular, though not necessarily monotonic, and that the second one is "nearly" submodular. To substantiate the idea of approximate submodularity, we introduce a novel quantity, namely the submodularity index (SmI), for general set functions. Moreover, we show that, based on SmI, the greedy algorithm has a performance guarantee for the maximization of possibly non-monotonic and non-submodular functions, justifying its usage for a much broader class of problems. We evaluate the theoretical results with several case studies, and also illustrate the application of the subset selection to causal structure learning.
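The idea of approximate submodularity can be made concrete with a brute-force check: submodularity is exactly non-negativity of every diminishing-returns gap, and the most negative gap plays the role that a quantity like SmI measures. The sketch below is illustrative only (it is not the paper's exact SmI definition, and the exhaustive loop is feasible only for tiny ground sets).

```python
from itertools import combinations

def min_dr_gap(f, ground):
    """Most negative value of [f(S+e)-f(S)] - [f(T+e)-f(T)] over S <= T, e not in T.
    f is submodular iff this is >= 0; a negative value measures how far f is
    from submodular (in the spirit of, but not identical to, SmI)."""
    ground = list(ground)
    subsets = [frozenset(c) for r in range(len(ground) + 1)
               for c in combinations(ground, r)]
    gap = float("inf")
    for S in subsets:
        for T in subsets:
            if not S <= T:
                continue
            for e in ground:
                if e in T:
                    continue
                gap = min(gap, (f(S | {e}) - f(S)) - (f(T | {e}) - f(T)))
    return gap
```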
On the Team Selection Problem
Abstract
We consider a team selection problem that requires hiring a team of individuals that maximizes a profit function defined as the difference between the utility of production and the cost of hiring. We show that for any monotone submodular utility of production and any increasing cost function of the team size with increasing marginal costs, a natural greedy algorithm guarantees a (1 − log(a)/(a − 1))-approximation when a ≤ e and a (1 − a/(e(a − 1)))-approximation when a ≥ e, where a is the ratio of the utility of production and the hiring cost of a profit-maximizing team selection. We also consider the class of test-score algorithms for maximizing a utility of production subject to a cardinality constraint, where the goal is to hire a team of given size based on greedy choices using individual test scores. We show that the existence of test scores that guarantee a constant-factor approximation is equivalent to the existence of a special type of test scores, so-called replication test scores. A set of sufficient conditions is identified that implies the existence of replication test scores that guarantee a constant-factor approximation. These sufficient conditions are shown to hold for a large number of classic models of team production, including a monotone concave function of total production, best-shot, and constant elasticity of substitution production functions. We also present some results on the performance of different kinds of test scores for different models of team production, and report empirical results using data from a popular online labour platform for software development.
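The profit objective in this abstract can be sketched with a simple stopping-rule greedy: keep hiring the candidate with the largest utility gain as long as profit strictly improves. The skills, utility, and cost below are hypothetical, and this illustrates the objective only, not the paper's analysis or its test-score algorithms.

```python
# Hypothetical candidates and their skills.
skills = {"ann": {"py"}, "bob": {"py", "ml"}, "eve": {"ux"}}

def utility(S):
    """Monotone submodular utility: 3 points per distinct skill on the team."""
    distinct = set()
    for p in S:
        distinct |= skills[p]
    return 3 * len(distinct)

def cost(n):
    """Hiring cost with increasing marginal costs, as the abstract requires."""
    return n * n

def select_team(candidates):
    """Greedily hire the best marginal candidate while profit strictly improves."""
    S = set()
    while candidates - S:
        e = max(candidates - S, key=lambda x: utility(S | {x}) - utility(S))
        if utility(S | {e}) - cost(len(S) + 1) <= utility(S) - cost(len(S)):
            break
        S.add(e)
    return S
```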