Results 1–10 of 11
Near-Optimal MAP Inference for Determinantal Point Processes
Abstract

Cited by 12 (3 self)
Determinantal point processes (DPPs) have recently been proposed as computationally efficient probabilistic models of diverse sets for a variety of applications, including document summarization, image search, and pose estimation. Many DPP inference operations, including normalization and sampling, are tractable; however, finding the most likely configuration (MAP), which is often required in practice for decoding, is NP-hard, so we must resort to approximate inference. This optimization problem, which also arises in experimental design and sensor placement, involves finding the largest principal minor of a positive semidefinite matrix. Because the objective is log-submodular, greedy algorithms have been used in the past with some empirical success; however, these methods only give approximation guarantees in the special case of monotone objectives, which correspond to a restricted class of DPPs. In this paper we propose a new algorithm for approximating the MAP problem based on continuous techniques for submodular function maximization. Our method involves a novel continuous relaxation of the log-probability function, which, in contrast to the multilinear extension used for general submodular functions, can be evaluated and differentiated exactly and efficiently. We obtain a practical algorithm with a 1/4-approximation guarantee for a more general class of nonmonotone DPPs; our algorithm also extends to MAP inference under complex polytope constraints, making it possible to combine DPPs with Markov random fields, weighted matchings, and other models. We demonstrate that our approach outperforms standard and recent methods on both synthetic and real-world data.
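The greedy baseline this abstract says has been "used in the past with some empirical success" is easy to sketch: repeatedly add the item that most increases log det of the selected kernel submatrix, stopping once no item still helps. A minimal illustration only, not the paper's continuous-relaxation algorithm; the kernel `L` and budget `k` are hypothetical inputs:

```python
import math

def det(M):
    """Determinant via Gaussian elimination with partial pivoting (small dense matrices)."""
    A = [row[:] for row in M]
    n, d = len(A), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        if abs(A[p][i]) < 1e-12:
            return 0.0
        if p != i:
            A[i], A[p] = A[p], A[i]
            d = -d
        d *= A[i][i]
        for r in range(i + 1, n):
            m = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= m * A[i][c]
    return d

def greedy_dpp_map(L, k):
    """Greedy MAP baseline for a DPP with PSD kernel L: repeatedly add the
    item with the largest gain in log det(L[S, S]); stop at k items or when
    no item still increases the (log-submodular, non-monotone) objective."""
    n = len(L)
    S, current = [], 0.0
    def logdet(idx):
        if not idx:
            return 0.0
        d = det([[L[i][j] for j in idx] for i in idx])
        return math.log(d) if d > 0 else float("-inf")
    for _ in range(k):
        gains = [(logdet(S + [j]) - current, j) for j in range(n) if j not in S]
        if not gains:
            break
        best_gain, best_j = max(gains)
        if best_gain <= 0:
            break  # non-monotone objective: adding anything else lowers probability
        S.append(best_j)
        current += best_gain
    return S
```

On a diagonal kernel the greedy order is simply by diagonal magnitude, which makes the sketch easy to sanity-check by hand.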
Fast algorithms for maximizing submodular functions
In SODA, 2014
Abstract

Cited by 11 (2 self)
There has been much progress recently on improved approximations for problems involving submodular objective functions, and many interesting techniques have been developed. However, the resulting algorithms are often slow and impractical. In this paper we develop algorithms that match the best known approximation guarantees, but with significantly improved running times, for maximizing a monotone submodular function f : 2^[n] → R_+ subject to various constraints. As in previous work, we measure the number of oracle calls to the objective function, which is the dominating term in the running time. Our first result is a simple algorithm that gives a (1 − 1/e − ε)-approximation for a cardinality constraint using O((n/ε) log(n/ε)) queries, and a 1/(p + 2ℓ + 1 + ε)-approximation for the intersection of a p-system and ℓ knapsack (linear) constraints using O((n/ε²) log²(n/ε)) queries. This is the first approximation for a p-system combined with linear constraints. (We also show that the factor of p cannot be improved for maximizing over a p-system.) The main idea behind these algorithms serves as a building block in our more sophisticated algorithms. Our main result is a new variant of the continuous greedy algorithm, which interpolates between the classical greedy algorithm and a truly continuous algorithm. We show how this algorithm can be implemented for matroid and knapsack constraints using Õ(n²) oracle calls to the objective function. (Previous variants and alternative techniques were known to use at least Õ(n⁴) oracle calls.) This leads to an O((n²/ε⁴) log²(n/ε))-time (1 − 1/e − ε)-approximation for a matroid constraint. For a knapsack constraint, we develop a more involved (1 − 1/e − ε)-approximation algorithm that runs in time O(n²((1/ε) log n)^poly(1/ε)).
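The oracle-call accounting above is what the classical "lazy greedy" trick already exploits: by submodularity, an element's marginal gain can only shrink as the solution grows, so stale gains stored in a max-heap act as upper bounds and most re-evaluations can be skipped. A hedged sketch; the coverage objective and ground set below are illustrative, not from the paper:

```python
import heapq

def lazy_greedy(f, ground, k):
    """Maximize a monotone submodular f under |S| <= k with lazy evaluations.
    Heap entries are (negated stale gain, element); submodularity guarantees
    each stale gain is an upper bound on the element's true marginal gain."""
    S, fS = [], f(frozenset())
    heap = [(-(f(frozenset([e])) - fS), e) for e in ground]
    heapq.heapify(heap)
    while len(S) < k and heap:
        _, e = heapq.heappop(heap)
        gain = f(frozenset(S) | {e}) - fS      # refresh only this element's gain
        if heap and gain < -heap[0][0]:
            heapq.heappush(heap, (-gain, e))   # still stale: retry later
        else:
            S.append(e)                        # fresh gain beats every upper bound
            fS += gain
    return S

# Illustrative coverage objective: f(S) = number of ground items covered.
cover = {0: {1, 2, 3}, 1: {3, 4}, 2: {5}}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
```

Each accepted element may cost only one fresh oracle call when its stale bound already dominates the heap, which is the source of the speedups in practice.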
Submodular Functions: Learnability, Structure, and Optimization
, 2012
Abstract

Cited by 6 (0 self)
Submodular functions are discrete functions that model laws of diminishing returns and enjoy numerous algorithmic applications. They have been used in many areas, including combinatorial optimization, machine learning, and economics. In this work we study submodular functions from a learning-theoretic angle. We provide algorithms for learning submodular functions, as well as lower bounds on their learnability. In doing so, we uncover several novel structural results revealing ways in which submodular functions can be both surprisingly structured and surprisingly unstructured. We provide several concrete implications of our work in other domains, including algorithmic game theory and combinatorial optimization. At a technical level, this research combines ideas from many areas, including learning theory (distributional learning and PAC-style analyses), combinatorics and optimization (matroids and submodular functions), and pseudorandomness (lossless expander graphs).
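The diminishing-returns law this abstract refers to can be stated concretely: f(A ∪ {x}) − f(A) ≥ f(B ∪ {x}) − f(B) whenever A ⊆ B and x ∉ B. A brute-force check over bitmask-encoded subsets; the example functions are ours, for illustration only:

```python
def is_submodular(f, n):
    """Brute-force test of diminishing returns over ground set {0..n-1}:
    for every A ⊆ B and x outside B, the gain of x at A must be at least
    the gain at B. Subsets are bitmasks; f takes a set of indices."""
    def to_set(mask):
        return {i for i in range(n) if mask >> i & 1}
    for B in range(1 << n):
        for A in range(1 << n):
            if A & B != A:        # skip unless A ⊆ B
                continue
            for x in range(n):
                if B >> x & 1:    # x must lie outside B
                    continue
                gain_A = f(to_set(A) | {x}) - f(to_set(A))
                gain_B = f(to_set(B) | {x}) - f(to_set(B))
                if gain_A < gain_B:
                    return False
    return True

# Coverage functions are submodular; |S|^2 has increasing gains, so it fails.
cover = {0: {1, 2}, 1: {2, 3}, 2: {4}}
coverage = lambda S: len(set().union(*(cover[i] for i in S))) if S else 0
```

The exponential enumeration is only viable for tiny ground sets, which is exactly why the learnability questions studied in the paper are nontrivial.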
A (k + 3)/2-approximation algorithm for monotone submodular k-set packing and general k-exchange systems
, 2012
Efficient Submodular Function Maximization under Linear Packing Constraints
Abstract

Cited by 4 (0 self)
We study the problem of maximizing a monotone submodular set function subject to linear packing constraints. An instance of this problem consists of a matrix A ∈ [0, 1]^(m×n), a vector b ∈ [1, ∞)^m, and a monotone submodular set function f : 2^[n] → R_+. The objective is to find a set S that maximizes f(S) subject to A x_S ≤ b, where x_S stands for the characteristic vector of the set S. A well-studied special case of this problem is when the objective function f is linear; this special case captures the class of packing integer programs. Our main contribution is an efficient combinatorial algorithm that achieves an approximation ratio of Ω(1/m^(1/W)), where W = min{b_i / A_ij : A_ij > 0} is the width of the packing constraints. This result matches the best known performance guarantee for the linear case. One immediate corollary of this result is that the algorithm under consideration achieves a constant-factor approximation when the number of constraints is constant or when the width of the packing constraints is sufficiently large. This motivates us to study the large-width setting, trying to determine its exact approximability. We develop an algorithm that has an approximation ratio of (1 − ε)(1 − 1/e) when W = Ω(ln m / ε²). This result (almost) matches the theoretical lower bound of 1 − 1/e, which already holds for maximizing a monotone submodular function subject to a cardinality constraint.
Monotone Submodular Maximization over a Matroid via Non-Oblivious Local Search
, 2013
Abstract

Cited by 2 (1 self)
We present an optimal, combinatorial 1 − 1/e approximation algorithm for monotone submodular optimization over a matroid constraint. Compared to the continuous greedy algorithm (Calinescu, Chekuri, Pál and Vondrák, 2008), our algorithm is extremely simple and requires no rounding. It consists of the greedy algorithm followed by local search. Both phases are run not on the actual objective function, but on a related auxiliary potential function, which is also monotone and submodular. In our previous work on maximum coverage (Filmus and Ward, 2012), the potential function gives more weight to elements covered multiple times. We generalize this approach from coverage functions to arbitrary monotone submodular functions. When the objective function is a coverage function, both definitions of the potential function coincide. Our approach generalizes to the case where the monotone submodular function has restricted curvature. For any curvature c, we adapt our algorithm to produce a (1 − e^(−c))/c approximation. This matches results of Vondrák (2008), who has shown that the continuous greedy algorithm produces a (1 − e^(−c))/c approximation when the objective function has curvature c with respect to the optimum, and proved that achieving any better approximation ratio is impossible in the value oracle model.
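The plain (oblivious) local-search phase this paper improves on is simple to state: starting from some feasible solution, keep swapping one chosen element for one outside element while the objective strictly improves. A sketch under a fixed-cardinality constraint (a matroid special case); the coverage objective below is ours, not the paper's non-oblivious potential:

```python
def local_search(f, ground, S):
    """Oblivious 1-swap local search: replace e in S by x outside S whenever
    that strictly improves f. Terminates because f rises at every swap."""
    S = set(S)
    improved = True
    while improved:
        improved = False
        for e in set(S):
            for x in set(ground) - S:
                T = (S - {e}) | {x}
                if f(T) > f(S):
                    S, improved = T, True
                    break
            if improved:
                break
    return S

# Illustrative coverage objective; {0, 1} is a deliberately poor start.
cover = {0: {1}, 1: {1, 2}, 2: {3, 4}}
f = lambda S: len(set().union(*(cover[i] for i in S))) if S else 0
```

The paper's key twist is that both greedy and this swap loop are run on an auxiliary potential rather than on f itself, which is what lifts the guarantee to 1 − 1/e.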
Improved Approximations for k-Exchange Systems (Extended Abstract)
Abstract

Cited by 1 (0 self)
Submodular maximization and set systems play a major role in combinatorial optimization. It has long been known that the greedy algorithm provides a 1/(k + 1)-approximation for maximizing a monotone submodular function over a k-system. For the special case of k-matroid intersection, a local search approach was recently shown to provide an improved approximation of 1/(k + δ) for arbitrary δ > 0. Unfortunately, many fundamental optimization problems are represented by a k-system which is not a k-intersection. An interesting question is whether the local search approach can be extended to include such problems. We answer this question affirmatively. Motivated by the b-matching and k-set packing problems, as well as the more general matroid k-parity problem, we introduce a new class of set systems called k-exchange systems, which includes k-set packing, b-matching, matroid k-parity in strongly base orderable matroids, and additional combinatorial optimization problems such as independent set in (k + 1)-claw-free graphs, asymmetric TSP, job interval selection with identical lengths, and frequency allocation on lines. We give a natural local search algorithm which improves upon the current greedy approximation for this new class of independence systems. Unlike known local search algorithms for similar problems, we use counting arguments to bound the performance of our algorithm. Moreover, we consider additional objective functions and provide improved approximations for them as well. In the case of linear objective functions, we give a non-oblivious local search algorithm that improves upon existing local search approaches for matroid k-parity.
Bounds on Double-Sided Myopic Algorithms for Unconstrained Non-monotone Submodular Maximization
, 2014
Abstract
Unconstrained submodular maximization captures many NP-hard combinatorial optimization problems, including Max-Cut, Max-DiCut, and variants of facility location problems. Recently, Buchbinder et al. [8] presented a surprisingly simple linear-time randomized greedy-like online algorithm that achieves a constant approximation ratio of 1/2, optimally matching the hardness result of Feige et al. [19]. Motivated by the algorithm of Buchbinder et al., we introduce a precise algorithmic model called double-sided myopic algorithms. We show that while the algorithm of Buchbinder et al. can be realized as a randomized online double-sided myopic algorithm, no such deterministic algorithm, even with adaptive ordering, can achieve the same approximation ratio. With respect to the Max-DiCut problem, we relate the Buchbinder et al. algorithm and our myopic framework to the online algorithm and inapproximation of Bar-Noy and Lampis [6].
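The deterministic counterpart of the Buchbinder et al. double-sided scheme (which gives 1/3 rather than the randomized 1/2) is short enough to sketch: a growing set X and a shrinking set Y process the ground set in order, committing each element to one side by comparing two marginal values. The cut-function example is ours, for illustration only:

```python
def double_greedy(f, n):
    """Deterministic double greedy for unconstrained non-negative submodular
    maximization (1/3-approximation, Buchbinder et al.). X grows from empty,
    Y shrinks from the full ground set; after element i is processed, X and Y
    agree on all items up to i, and at the end X == Y."""
    X, Y = set(), set(range(n))
    for i in range(n):
        a = f(X | {i}) - f(X)      # marginal value of adding i to X
        b = f(Y - {i}) - f(Y)      # marginal value of removing i from Y
        if a >= b:
            X = X | {i}
        else:
            Y = Y - {i}
    return X

# Illustrative non-monotone submodular objective: cut value of a path graph.
edges = [(0, 1), (1, 2)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)
```

On the 3-vertex path the sketch recovers the maximum cut {0, 2}; the abstract's point is precisely that no deterministic algorithm of this myopic form can reach the randomized 1/2 ratio.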
A Tight . . . Submodular Maximization Subject to a Matroid Constraint
Abstract
We present an optimal, combinatorial 1 − 1/e approximation algorithm for monotone submodular optimization over a matroid constraint. Compared to the continuous greedy algorithm (Calinescu, Chekuri, Pál and Vondrák, 2008), our algorithm is extremely simple and requires no rounding. It consists of the greedy algorithm followed by local search. Both phases are run not on the actual objective function, but on a related non-oblivious potential function, which is also monotone submodular. In our previous work on maximum coverage (Filmus and Ward, 2011), the potential function gives more weight to elements covered multiple times. We generalize this approach from coverage functions to arbitrary monotone submodular functions. When the objective function is a coverage function, both definitions of the potential function coincide. The parameters used to define the potential function are closely related to Padé approximants of exp(x) evaluated at x = 1. We use this connection to determine the approximation ratio of the algorithm.