Results 1–10 of 49
On Sparse Approximations To Randomized Strategies And Convex Combinations
, 1994
"... A randomized strategy or a convex combination may be represented by a probability vector p = (p 1 ; : : : ; pm ) . p is called sparse if it has only few positive entries. This paper presents an Approximation Lemma and applies it to matrix games, linear programming, computer chess, and uniform sampli ..."
Abstract

Cited by 37 (0 self)
A randomized strategy or a convex combination may be represented by a probability vector p = (p1, ..., pm). p is called sparse if it has only a few positive entries. This paper presents an Approximation Lemma and applies it to matrix games, linear programming, computer chess, and uniform sampling spaces. In all cases, arbitrary probability vectors can be replaced by sparse ones (with only logarithmically many positive entries) without losing too much performance.
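The sampling argument behind such a lemma can be sketched in a few lines of Python (an illustrative helper, not code from the paper): draw k i.i.d. indices from p and return the empirical distribution, which has at most k positive entries.

```python
import random
from collections import Counter

def sparsify(p, k, rng=random):
    """Approximate distribution p by the empirical distribution of k samples.

    The result has at most k positive entries; Chernoff-type bounds show
    that k logarithmic in the dimension already preserves quantities such
    as the value of a matrix game up to a small additive error.
    """
    m = len(p)
    samples = rng.choices(range(m), weights=p, k=k)
    counts = Counter(samples)
    return [counts.get(i, 0) / k for i in range(m)]
```

Entries with zero probability are never sampled, so the support of the output is contained in the support of p.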
Approximation Algorithms Via Randomized Rounding: A Survey
 Series in Advanced Topics in Mathematics, Polish Scientific Publishers PWN
, 1999
"... Approximation algorithms provide a natural way to approach computationally hard problems. There are currently many known paradigms in this area, including greedy algorithms, primaldual methods, methods based on mathematical programming (linear and semidefinite programming in particular), local i ..."
Abstract

Cited by 20 (2 self)
Approximation algorithms provide a natural way to approach computationally hard problems. There are currently many known paradigms in this area, including greedy algorithms, primal-dual methods, methods based on mathematical programming (linear and semidefinite programming in particular), local improvement, and "low distortion" embeddings of general metric spaces into special families of metric spaces. Randomization is a useful ingredient in many of these approaches, particularly in the form of randomized rounding of a suitable relaxation of a given problem. We survey this technique here, with a focus on correlation inequalities and their applications.
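As a minimal sketch of randomized rounding (on set cover, a standard textbook instance, not taken from the survey): given a fractional LP solution x, pick each set independently with probability proportional to its fractional value, scaled by a logarithmic factor.

```python
import math
import random

def round_set_cover(x, sets, universe, rng=random):
    """Randomized rounding for a fractional set-cover solution x.

    Each set i is picked independently with probability min(1, x[i] * scale),
    scale ~ ln|U|; repeating until the picks cover U gives an integral cover
    whose expected cost is O(log |U|) times the LP value.
    """
    scale = math.log(len(universe)) + 1.0
    while True:
        picked = [i for i, xi in enumerate(x) if rng.random() < min(1.0, xi * scale)]
        covered = set().union(*(sets[i] for i in picked)) if picked else set()
        if covered >= set(universe):
            return picked
```

The loop terminates quickly in expectation whenever x is a feasible fractional cover.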
The Geometry of Differential Privacy: The Sparse and Approximate Cases
, 2012
"... In this work, we study tradeoffs between accuracy and privacy in the context of linear queries over histograms. This is a rich class of queries that includes contingency tables and range queries, and has been a focus of a long line of work [BLR08,RR10,DRV10,HT10,HR10,LHR+10,BDKT12]. For a given set ..."
Abstract

Cited by 16 (5 self)
In this work, we study tradeoffs between accuracy and privacy in the context of linear queries over histograms. This is a rich class of queries that includes contingency tables and range queries, and has been the focus of a long line of work [BLR08, RR10, DRV10, HT10, HR10, LHR+10, BDKT12]. For a given set of d linear queries over a database x ∈ R^N, we seek the differentially private mechanism with minimum mean squared error. For pure differential privacy, [HT10, BDKT12] give an O(log² d) approximation to the optimal mechanism. Our first contribution is an O(log² d) approximation guarantee for the case of (ε, δ)-differential privacy. Our mechanism is simple and efficient, and adds carefully chosen correlated Gaussian noise to the answers. We prove its approximation guarantee relative to the hereditary discrepancy lower bound of [MN12], using tools from convex geometry. We next consider this question when the number of queries exceeds the number of individuals in the database, i.e. when d > n = ‖x‖₁. The lower bounds used in the previous approximation algorithm no longer apply, and in fact better mechanisms are known in this setting [BLR08, RR10, HR10, GHRU11, GRU12]. Our second main contribution is an (ε, δ)-differentially private mechanism that, for a given query set A and an upper bound n on ‖x‖₁, has mean squared error within polylog(d, N) of the optimal for A and n. This approximation is achieved by coupling the Gaussian noise addition approach with linear regression over the ℓ₁ ball. Additionally, we show a similar polylogarithmic approximation guarantee for the best ε-differentially private mechanism in this sparse setting. Our work also shows that for arbitrary counting queries, i.e. A with entries in {0, 1}, there is an ε-differentially private mechanism with expected error Õ(√n) per query, improving on the Õ(n^{2/3}) bound of [BLR08] and matching the lower bound implied by [DN03] up to logarithmic factors.
The connection between hereditary discrepancy and the privacy mechanism enables us to derive the first polylogarithmic approximation to the hereditary discrepancy of a matrix A.
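For context, the classical (independent-noise) Gaussian mechanism that the paper's correlated-noise construction refines can be sketched as follows; the calibration formula is the standard one, not the paper's.

```python
import math
import random

def gaussian_mechanism(A, x, eps, delta, rng=random):
    """Answer the linear queries Ax with i.i.d. Gaussian noise.

    Classical Gaussian mechanism: noise of standard deviation
    sigma = Delta2 * sqrt(2 * ln(1.25/delta)) / eps is (eps, delta)-DP,
    where Delta2, the l2 sensitivity, is the largest column norm of A
    (neighboring histograms differ by 1 in one coordinate). The paper's
    mechanism instead shapes *correlated* noise to the geometry of A.
    """
    delta2 = max(math.sqrt(sum(row[j] ** 2 for row in A)) for j in range(len(A[0])))
    sigma = delta2 * math.sqrt(2.0 * math.log(1.25 / delta)) / eps
    return [sum(a, 0.0) for a in []] or [
        sum(aij * xj for aij, xj in zip(row, x)) + rng.gauss(0.0, sigma) for row in A
    ]
```

With loose privacy parameters the noise is negligible and the answers approach Ax exactly.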
Polynomials with Littlewood-Type Coefficient Constraints
 MICHIGAN MATH. J
, 2001
"... This survey paper focuses on my contributions to the area of polynomials with Littlewoodtype coefficient constraints. It summarizes the main results from many of my recent papers some of which are joint with Peter Borwein. ..."
Abstract

Cited by 16 (4 self)
This survey paper focuses on my contributions to the area of polynomials with Littlewood-type coefficient constraints. It summarizes the main results from many of my recent papers, some of which are joint with Peter Borwein.
TRACES OF FINITE SETS: EXTREMAL PROBLEMS AND GEOMETRIC APPLICATIONS
, 1992
"... Given a hypergraph H and a subset S of its vertices, the trace of H on S is defined as HS = {E ∩ S: E ∈ H}. The Vapnik–Chervonenkis dimension (VCdimension) of H is the size of the largest subset S for which HS has 2 S edges. Hypergraphs of small VCdimension play a central role in many areas o ..."
Abstract

Cited by 15 (0 self)
Given a hypergraph H and a subset S of its vertices, the trace of H on S is defined as H|S = {E ∩ S : E ∈ H}. The Vapnik-Chervonenkis dimension (VC-dimension) of H is the size of the largest subset S for which H|S has 2^|S| edges. Hypergraphs of small VC-dimension play a central role in many areas of statistics, discrete and computational geometry, and learning theory. We survey some of the most important results related to this concept, with special emphasis on (a) hypergraph-theoretic methods and (b) geometric applications.
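These two definitions translate directly into code; the brute-force computation below (an illustrative sketch, exponential in the number of vertices) makes the shattering condition |H|S| = 2^|S| concrete.

```python
from itertools import combinations

def trace_on(H, S):
    """Trace of hypergraph H (a collection of sets) on a vertex subset S."""
    return {frozenset(E & S) for E in H}

def vc_dimension(H, vertices):
    """Size of the largest S shattered by H, i.e. with |trace| == 2**|S|."""
    for k in range(len(vertices), 0, -1):
        for S in combinations(vertices, k):
            if len(trace_on(H, set(S))) == 2 ** k:
                return k
    return 0
```

For example, the hypergraph of all intervals of {0, 1, 2} shatters pairs but no triple, so its VC-dimension is 2.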
Lattice Approximation and Linear Discrepancy of Totally Unimodular Matrices (Extended Abstract)
 IN PROCEEDINGS OF THE 12TH ANNUAL ACM-SIAM SYMPOSIUM ON DISCRETE ALGORITHMS (SODA)
, 2001
"... This paper shows that the lattice approximation problem for totally unimodular matrices A 2 R mn can be solved eciently and optimally via a linear programming approach. The complexity of our algorithm is O(log m) times the complexity of nding an extremal point of a polytope in R n described b ..."
Abstract

Cited by 9 (7 self)
This paper shows that the lattice approximation problem for totally unimodular matrices A ∈ R^{m×n} can be solved efficiently and optimally via a linear programming approach. The complexity of our algorithm is O(log m) times the complexity of finding an extremal point of a polytope in R^n described by 2(m + n) linear constraints. We also consider the worst-case approximability, called linear discrepancy. Here we derive an upper bound for the linear discrepancy of a totally unimodular m × n matrix A: lindisc(A) ≤ min{1 − 1/(n+1), 1 − 1/m}. This bound is sharp. It proves Spencer's conjecture lindisc(A) ≤ (1 − 1/(n+1)) herdisc(A) for totally unimodular matrices. It seems to be the first time that linear programming has been successfully used for a discrepancy problem.
Approximation of MultiColor Discrepancy
 Randomization, Approximation and Combinatorial Optimization (Proceedings of APPROXRANDOM 1999), volume 1671 of Lecture Notes in Computer Science
, 1999
"... . In this article we introduce (combinatorial) multicolor discrepancy and generalize some classical results from 2color discrepancy theory to c colors. We give a recursive method that constructs ccolorings from approximations to the 2color discrepancy. This method works for a large class of ..."
Abstract

Cited by 9 (7 self)
In this article we introduce (combinatorial) multi-color discrepancy and generalize some classical results from 2-color discrepancy theory to c colors. We give a recursive method that constructs c-colorings from approximations to the 2-color discrepancy. This method works for a large class of theorems, such as the six-standard-deviations theorem of Spencer, the Beck-Fiala theorem, and the results of Matoušek, Welzl and Wernisch for bounded VC-dimension. On the other hand, there are examples showing that discrepancy in c colors cannot be bounded in terms of two-color discrepancy, even if c is a power of 2. For the linear discrepancy version of the Beck-Fiala theorem the recursive approach also fails. Here we extend the method of floating colors to multicolorings and prove multi-color versions of the Beck-Fiala theorem and the Bárány-Grünberg theorem.
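The recursive construction of a c-coloring from 2-colorings can be sketched as follows (a structural sketch only; the low-discrepancy 2-coloring routine `two_color` is user-supplied, and c is assumed to be a power of 2):

```python
def multicolor(elements, two_color, c):
    """Build a c-coloring (c a power of 2) by recursive 2-coloring.

    two_color(elems) must return a partition (left, right); the discrepancy
    of the resulting c color classes is then bounded by accumulating the
    2-color discrepancies over the log2(c) levels of recursion.
    """
    if c == 1:
        return [list(elements)]
    left, right = two_color(list(elements))
    return multicolor(left, two_color, c // 2) + multicolor(right, two_color, c // 2)
```

Each recursion level splits every current class once, so after log2(c) levels the elements are partitioned into exactly c classes.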
Bin Packing via Discrepancy of Permutations
, 2011
"... A well studied special case of bin packing is the 3partition problem, where n items of size> 1 have to be packed in 4 a minimum number of bins of capacity one. The famous KarmarkarKarp algorithm transforms a fractional solution of a suitable LP relaxation for this problem into an integral solut ..."
Abstract

Cited by 8 (1 self)
A well studied special case of bin packing is the 3-partition problem, where n items of size > 1/4 have to be packed in a minimum number of bins of capacity one. The famous Karmarkar-Karp algorithm transforms a fractional solution of a suitable LP relaxation for this problem into an integral solution that requires at most O(log n) additional bins. The three-permutations conjecture of Beck is the following: given any 3 permutations on n symbols, one can color the symbols red and blue such that in any interval of any of those permutations, the numbers of red and blue symbols differ only by a constant. Beck's conjecture is well known in the field of discrepancy theory. We establish a surprising connection between bin packing and Beck's conjecture: if the latter holds true, then the additive integrality gap of the 3-partition linear programming relaxation is bounded by a constant.
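The quantity in Beck's conjecture is easy to evaluate for a given coloring (a checker sketch, not code from the paper): since every interval is a difference of two prefixes, the interval discrepancy is at most twice the worst prefix imbalance.

```python
def prefix_discrepancy(perms, coloring):
    """Largest |#red - #blue| over all prefixes of the given permutations.

    coloring maps each symbol to +1 (red) or -1 (blue). The interval
    discrepancy appearing in Beck's three-permutations conjecture is at
    most twice this prefix quantity.
    """
    worst = 0
    for perm in perms:
        running = 0
        for sym in perm:
            running += coloring[sym]
            worst = max(worst, abs(running))
    return worst
```

An alternating coloring keeps a single identity-like permutation perfectly balanced; the conjecture asks whether some coloring does comparably well for any three permutations at once.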
A Trace Bound for the Hereditary Discrepancy
, 2001
"... Let A be the incidence matrix of a set system with m sets and n points, m ≤ n, and let t = tr M, where M = AT A. Finally, let σ = tr M 2 be the sum of squares of the elements of M. We prove that the hereditary discrepancy of the set system is at least 1 4 cnσ/t2√ 1 t/n, with c =. This general trac ..."
Abstract

Cited by 7 (2 self)
Let A be the incidence matrix of a set system with m sets and n points, m ≤ n, and let t = tr M, where M = AᵀA. Finally, let σ = tr M², the sum of squares of the elements of M. We prove that the hereditary discrepancy of the set system is at least (1/4) c^{nσ/t²} √(t/n), with c = 1/324. This general trace bound allows us to resolve discrepancy-type questions for which spectral methods had previously failed. Also, by using this result in conjunction with the spectral lemma for linear circuits, we derive new complexity bounds for range searching.
• We show that the (red-blue) discrepancy of the set system formed by n points and n lines in the plane is Ω(n^{1/6}) in the worst case and always Õ(n^{1/6}).
• We give a simple explicit construction of n points and n halfplanes with hereditary discrepancy Ω̃(n^{1/4}).
• We show that in any dimension d = Ω(log n / log log n), there is a set system of n points and n axis-parallel boxes in R^d with discrepancy n^{Ω(1/log log n)}.
• Applying these discrepancy results together with a new variation of the spectral lemma, we derive a lower bound of Ω(n log n) on the arithmetic complexity of off-line range searching for points and lines (for nonmonotone circuits). We also prove a lower bound of Ω(n log n / log log n) on the complexity of orthogonal range searching in any dimension Ω(log n / log log n).