Geometric Discrepancy: An Illustrated Guide (1999)

by J Matousek
Results 11 - 20 of 77

Control variates for quasi-Monte Carlo

by Fred Hickernell, Christiane Lemieux, Art B. Owen, 2003
Abstract - Cited by 11 (3 self)
Quasi-Monte Carlo (QMC) methods have begun to displace ordinary Monte Carlo (MC) methods in many practical problems. It is natural and obvious to combine QMC methods with traditional variance reduction techniques used in MC sampling, such as control variates. There can,

Variance and Discrepancy with Alternative Scramblings

by Art B. Owen, 2002
Abstract - Cited by 8 (1 self)
This paper analyzes some schemes for reducing the computational burden of digital scrambling. Some such schemes have been shown not to affect the mean squared L² discrepancy. This paper shows that some discrepancy-preserving alternative scrambles can change the variance in scrambled net quadrature. Even the rate of convergence can be adversely affected by alternative scramblings. Finally, some alternatives reduce the computational burden and can also be shown to improve the rate of convergence for the variance, at least in dimension 1.

Tight hardness results for minimizing discrepancy

by Moses Charikar, Alantha Newman, Aleksandar Nikolov - In Proc. 22nd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2011
Abstract - Cited by 8 (2 self)
Abstract: In the Discrepancy problem, we are given M sets {S_1, …, S_M} on N elements. Our goal is to find an assignment χ of {−1, +1} values to elements, so as to minimize the maximum discrepancy max_j |∑_{i∈S_j} χ(i)|. Recently, Bansal gave an efficient algorithm for achieving O(√N) discrepancy for any set system where M = O(N) [Ban10], giving a constructive version of Spencer's proof that the discrepancy of any such set system is at most O(√N). We show that from the perspective of computational efficiency, these results are tight for general set systems where M = O(N). Specifically, we show that it is NP-hard to distinguish between such set systems with discrepancy zero and those with discrepancy Ω(√N). This means that even if the optimal solution has discrepancy zero, we cannot hope to efficiently find a coloring with discrepancy o(√N). We also consider the hardness of the Discrepancy problem on sets with bounded shatter function, and show that the upper bounds due to Matoušek [Mat95] are tight for these set systems as well. The hardness results in both settings are obtained from a common framework: we compose a family of high-discrepancy set systems with set systems for which it is NP-hard to distinguish instances with discrepancy zero from instances in which a large number of the sets (i.e. a constant fraction of the sets) have non-zero discrepancy. Our composition amplifies this zero versus non-zero gap.

Citation Context

...discrepancy max_j |∑_{i∈S_j} χ(i)|. Questions about the discrepancy of various set systems arise in several different areas of mathematics and theoretical computer science and have given rise to a rich body of research on the subject. For a comprehensive introduction to discrepancy and its applications, the reader is referred to [BS96, CMS95, Mat10]. A celebrated result of Spencer [Spe85] shows that any system of M = O(N) sets has discrepancy at most O(√N). Spencer's proof is non-constructive and until very recently, an efficient algorithm to construct a low discrepancy coloring was not known. In a recent breakthrough, Bansal [Ban10] gave an efficient algorithm to find a low discrepancy coloring. For a given set system of M = O(N), the discrepancy of the coloring produced is at most O(√N), giving a constructive version of Spencer's proof. Thus Bansal's algorithm gives a coloring with discrepancy that matches (within constants) the wor...
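The objective in this abstract is concrete enough to state in a few lines. Below is a minimal Python sketch (the function names are mine, and the exhaustive search is only feasible for tiny set systems; Bansal's algorithm is of course far more sophisticated):

```python
from itertools import product

def discrepancy(sets, coloring):
    """Max over the sets of |sum of the +/-1 colors of its elements|."""
    return max(abs(sum(coloring[i] for i in s)) for s in sets)

def min_discrepancy(sets, n):
    """Exhaustive search over all 2^n colorings chi: {0..n-1} -> {-1, +1}."""
    return min(discrepancy(sets, chi) for chi in product((-1, 1), repeat=n))

# Four sets on four elements; chi = (+1, -1, +1, -1) balances every set.
sets = [{0, 1}, {1, 2}, {2, 3}, {0, 3}]
print(min_discrepancy(sets, 4))  # prints 0
```

The hardness result above says precisely that no efficient procedure can, in general, get anywhere near this brute-force optimum when it is zero.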

Low-discrepancy curves and efficient coverage of space

by Subramanian Ramamoorthy, Ram Rajagopal, Qing Ruan, Lothar Wenzel - Workshop on Algorithmic Foundations of Robotics VII , 2006
Abstract - Cited by 7 (1 self)
Abstract: We introduce the notion of low-discrepancy curves and use it to solve the problem of optimally covering space. In doing so, we extend the notion of low-discrepancy sequences in such a way that sufficiently smooth curves with low discrepancy properties can be defined and generated. Based on a class of curves that cover the unit square in an efficient way, we define induced low-discrepancy curves in Riemannian spaces. This allows us to efficiently cover an arbitrarily chosen abstract surface that admits a diffeomorphism to the unit square. We demonstrate the application of these ideas by presenting concrete examples of low-discrepancy curves on some surfaces that are of interest in robotics.

Citation Context

... is applicable in a wide variety of applications, requiring only that we have a description of the abstract space in the form of a suitable Riemannian metric. Low discrepancy point sets and sequences [3] have a successful history within robotics. They have been successfully used in sampling based motion planning and area coverage applications. This work has been covered well in the past proceedings o...

Optimal sampling in the space of paths: Preliminary results

by Colin J. Green, Alonzo Kelly, 2006
Abstract - Cited by 7 (2 self)
Summary. While spatial sampling has received much attention in recent years, our understanding of sampling issues in the function space of trajectories remains limited. This paper presents a structured approach to the selection of a finite control set, derived from the infinite function space of possible controls, which is optimal in some useful sense. We show from first principles that the degree to which trajectories overlap spatially is directly related to the relative completeness that can be expected in sequential motion planning. We define relative completeness to mean the probability, taken over the population of all possible worlds, that at least one trajectory searched will not intersect an obstacle. Likewise, trajectories which are more separated from each other perform better in this regard than the alternatives. A suboptimal algorithm is presented which selects a control set from a dense sampling of the continuum of all possible paths. Results show that this algorithm produces control sets which perform significantly better than constant curvature arcs. The resulting control set has been deployed on an autonomous mobile robot operating in complex terrain in order to respond to situations when the robot is surrounded by a dense obstacle field.

Citation Context

...ne of selecting a finite sample of paths from the function space of possible paths with the goal of maximizing relative completeness. While spatial sampling has received much attention in recent years [3, 8], this type of sampling in the space of paths remains largely unexplored. 3 Theory Before we can begin to describe a better set of trajectories, we must first have a definition of what makes one traje...

On the largest empty axis-parallel box amidst n points

by Adrian Dumitrescu, Minghui Jiang, 2009
Abstract - Cited by 7 (3 self)
We give the first nontrivial upper and lower bounds on the maximum volume of an empty axis-parallel box inside an axis-parallel unit hypercube in R^d containing n points. For a fixed d, we show that the maximum volume is of the order Θ(1/n). We then use the fact that the maximum volume is Ω(1/n) in our design of the first efficient (1 − ε)-approximation algorithm for the following problem: Given an axis-parallel d-dimensional box R in R^d containing n points, compute the maximum-volume empty axis-parallel d-dimensional box contained in R. The running time of our algorithm is nearly linear in n, for small d. No previous efficient exact or approximation algorithms were known for this problem for d ≥ 4. Confirming our intuition and this status quo, recently Backer and Keil [5] have proved that the problem is NP-hard in arbitrarily high dimensions (i.e., when d is part of the input).
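For intuition, the d = 2 case of this problem can be solved exactly by brute force: a maximal empty rectangle can be expanded until every side touches an input point or a wall of the unit square, so it suffices to try rectangles whose sides pass through point coordinates. A naive Python sketch (names mine; this illustrates the problem statement, not the paper's near-linear-time approximation algorithm):

```python
from itertools import combinations

def largest_empty_box_2d(points):
    """Maximum area of an axis-parallel rectangle in [0,1]^2 whose open
    interior avoids all points; candidate sides come from the point
    coordinates and the square's walls."""
    xs = sorted({0.0, 1.0} | {px for px, _ in points})
    ys = sorted({0.0, 1.0} | {py for _, py in points})
    best = 0.0
    for x1, x2 in combinations(xs, 2):      # x1 < x2 by sortedness
        for y1, y2 in combinations(ys, 2):
            if not any(x1 < px < x2 and y1 < py < y2 for px, py in points):
                best = max(best, (x2 - x1) * (y2 - y1))
    return best

print(largest_empty_box_2d([(0.5, 0.5)]))  # prints 0.5
```

The Θ(1/n) bound above concerns the worst case over point sets; this sketch just evaluates a single instance, in time polynomial in n only for tiny inputs.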

Citation Context

...defined below. Refer to [4, 25] and [9, Lemma 4A, p. 11] for related results. Let C_n be the van der Corput set of n points [10, 11], with coordinates (x(k), y(k)), 0 ≤ k ≤ n−1, constructed as follows [7, 18]: Let x(k) = k/n. If k = ∑_{j≥0} a_j 2^j is the binary representation of k, where a_j ∈ {0, 1}, then y(k) = ∑_{j≥0} a_j 2^{−j−1}. Observe that all points in C_n lie in the unit square U = [0, 1]². Let ℓ(v) and ℓ...
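The van der Corput construction quoted in this context is easy to reproduce. A small Python sketch (function name mine), computing y(k) by reversing the binary digits of k across the radix point:

```python
def van_der_corput_set(n):
    """The n-point van der Corput set described above:
    x(k) = k/n, and y(k) = sum_{j>=0} a_j 2^(-j-1)
    where k = sum_{j>=0} a_j 2^j in binary."""
    points = []
    for k in range(n):
        y, weight, m = 0.0, 0.5, k
        while m:
            if m & 1:          # digit a_j of k
                y += weight    # contributes a_j * 2^(-j-1)
            m >>= 1
            weight /= 2
        points.append((k / n, y))
    return points

print(van_der_corput_set(4))  # k = 0..3 gives y = 0, 0.5, 0.25, 0.75
```

Every point lands in the unit square, as the snippet observes, since y(k) < 1 for all k.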

Bounded VC-dimension implies a fractional Helly theorem

by Jiri Matousek, 2002
Abstract - Cited by 7 (1 self)
We prove that every set system of bounded VC-dimension has a fractional Helly property. More precisely, if the dual shatter function of a set system F is bounded by o(m^k), then F has fractional Helly number k. This means that for every α > 0 there exists β > 0 such that if F_1, F_2, …, F_n ∈ F are sets with ∩_{i∈I} F_i ≠ ∅ for at least α(n choose k) sets I ⊆ {1, 2, …, n} of size k, then there exists a point common to at least βn of the F_i. This further implies a (p, k)-theorem: for every F as above and every p ≥ k there exists T such that if G ⊆ F is a finite subfamily where among every p sets, some k intersect, then G has a transversal of size T. The assumption about bounded dual shatter function applies, for example, to families of sets in R^d definable by a bounded number of polynomial inequalities of bounded degree; in this case, we obtain fractional Helly number d+1.

ε-samples for kernels

by Jeff M. Phillips - Proceedings 24th Annual ACM-SIAM Symposium on Discrete Algorithms , 2013
Abstract - Cited by 5 (5 self)
We study the worst case error of kernel density estimates via subset approximation. A kernel density estimate of a distribution is the convolution of that distribution with a fixed kernel (e.g. Gaussian kernel). Given a subset (i.e. a point set) of the input distribution, we can compare the kernel density estimates of the input distribution with that of the subset and bound the worst case error. If the maximum error is ε, then this subset can be thought of as an ε-sample (aka an ε-approximation) of the range space defined with the input distribution as the ground set and the fixed kernel representing the family of ranges. Interestingly, in this case the ranges are not binary, but have a continuous range (for simplicity we focus on kernels with range [0, 1]); these allow for smoother notions of range spaces. It turns out the use of this smoother family of range spaces has an added benefit of greatly decreasing the size required for ε-samples. For instance, in the plane the size is O((1/ε)^{4/3} log^{2/3}(1/ε)) for disks (based on VC-dimension arguments) but is only O((1/ε)√(log(1/ε))) for Gaussian kernels and for kernels with bounded slope that only affect a bounded domain. These bounds are accomplished by studying the discrepancy of these "kernel" range spaces, and here the improvement in bounds is even more pronounced. In the plane, we show the discrepancy is O(√(log n)) for these kernels, whereas for

Citation Context

...K(x, p) for a specific kernel K_x; often the subscript x is dropped when it is apparent. Then the minimum kernel discrepancy of a kernel range space is defined d(P, K) = min_χ d_χ(P, K). See Matoušek's [26] and Chazelle's [10] books for masterful treatments of this field when restricted to combinatorial discrepancy. Constructing ε-samples. Given a (binary) range space (P, A) an ε-sample (a.k.a. an ε-a...
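The quantity studied in this paper, the worst-case disagreement between the kernel density estimate of the input and of a candidate subset, can be sketched directly. A minimal 1-d Python illustration (names, bandwidth, and point sets are mine; the kernel takes values in [0, 1] as in the abstract):

```python
import math

def kde(points, x, sigma=0.05):
    """Kernel density estimate at x: average Gaussian kernel value
    K(x, p) = exp(-(x-p)^2 / (2 sigma^2)), which lies in [0, 1]."""
    return sum(math.exp(-(x - p) ** 2 / (2 * sigma ** 2)) for p in points) / len(points)

def kernel_eps(full, subset, queries, sigma=0.05):
    """Worst-case KDE error over the query points: the epsilon for which
    `subset` acts as an eps-sample of `full` for this kernel."""
    return max(abs(kde(full, q, sigma) - kde(subset, q, sigma)) for q in queries)

full = [i / 200 for i in range(200)]   # dense 1-d input point set
subset = full[::20]                    # a 10-point candidate eps-sample
queries = [i / 500 for i in range(501)]
print(kernel_eps(full, subset, queries))  # a small positive epsilon
```

The paper's contribution is showing that, for such smooth ranges, far smaller subsets suffice for a given ε than binary VC-dimension arguments would predict.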

Beck’s three permutations conjecture: A counterexample and some consequences

by Alantha Newman, Ofer Neiman, Aleksandar Nikolov, 2012
Abstract - Cited by 5 (1 self)
Given three permutations on the integers 1 through n, consider the set system consisting of each interval in each of the three permutations. In 1982, Beck conjectured that the discrepancy of this set system is O(1). In other words, the conjecture says that each integer from 1 through n can be colored either red or blue so that the number of red and blue integers in each interval of each permutation differs only by a constant. (The discrepancy of a set system based on two permutations is at most two.) Our main result is a counterexample to this conjecture: for any positive integer n = 3^k, we construct three permutations whose corresponding set system has discrepancy Ω(log n). Our counterexample is based on a simple recursive construction, and our proof of the discrepancy lower bound is by induction. This construction also disproves a generalization of Beck's conjecture due to Spencer, Srinivasan and Tetali, who conjectured that a set system corresponding to ℓ permutations has discrepancy O(√ℓ). Our work was inspired by an intriguing paper from SODA 2011 by Eisenbrand, Pálvölgyi and Rothvoß, who show a surprising connection between the discrepancy of three permutations
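The discrepancy of interval set systems from permutations, as defined in this abstract, is easy to evaluate by brute force for small n (Python; names are mine, and exhaustive search over 2^n colorings is only for illustration):

```python
from itertools import product

def interval_discrepancy(perms, chi):
    """Max |sum of chi over an interval| across all intervals of all
    permutations: the set system in Beck's conjecture."""
    worst = 0
    for perm in perms:
        prefix = [0]                       # prefix sums of colors along perm
        for v in perm:
            prefix.append(prefix[-1] + chi[v])
        for i in range(len(prefix)):       # every interval is a prefix difference
            for j in range(i + 1, len(prefix)):
                worst = max(worst, abs(prefix[j] - prefix[i]))
    return worst

def min_interval_discrepancy(perms, n):
    """Exhaustive search over all 2^n red/blue colorings (tiny n only)."""
    return min(interval_discrepancy(perms, chi)
               for chi in product((-1, 1), repeat=n))

one = [[0, 1, 2, 3]]
two = [[0, 1, 2, 3], [3, 1, 0, 2]]
print(min_interval_discrepancy(one, 4))  # prints 1 (alternating coloring)
print(min_interval_discrepancy(two, 4))  # prints 2
```

For a single permutation an alternating coloring achieves discrepancy 1, and two permutations admit discrepancy at most 2, as the abstract notes; the paper's result is that three permutations can force Ω(log n).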

Coding Theory And Uniform Distributions

by M. M. Skriganov, 1998
Abstract - Cited by 4 (0 self)
In the present paper we introduce and study finite point subsets of a special kind, called optimum distributions, in the n-dimensional unit cube. Such distributions are closely related to the known (#, s, n)-nets of low discrepancy. It turns out that optimum distributions have a rich combinatorial structure. Namely, we show that optimum distributions can be characterized completely as maximum distance separable codes with respect to a non-Hamming metric. Weight spectra of such codes can be evaluated precisely. We also consider linear codes and distributions and study their general properties, including the duality with respect to a suitable inner product. The corresponding generalized MacWilliams identities for weight enumerators are briefly discussed. Broad classes of linear maximum distance separable codes and linear optimum distributions are explicitly constructed in the paper by Hermite interpolation over finite fields. 1991 Mathematics Subject Classification. 11K38, 11T71, 94B60. Key...

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University