Results 1–10 of 14
Discrepancy estimates for variance bounding Markov chain quasi-Monte Carlo
, 2014
Finding optimal volume subintervals with k points and computing the star discrepancy are NP-hard problems
Journal of Complexity
, 2008
Abstract

The well-known star discrepancy is a common measure for the uniformity of point distributions. It is used, e.g., in multivariate integration, pseudorandom number generation, experimental design, statistics, or computer graphics. We study here the complexity of calculating the star discrepancy of point sets in the d-dimensional unit cube and show that this is an NP-hard problem. To establish this complexity result, we first prove NP-hardness of the following related problems in computational geometry: Given n points in the d-dimensional unit cube, find a subinterval of minimum or maximum volume that contains k of the n points. Our results for the complexity of the subinterval problems settle a conjecture of …
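The exact computation that this abstract shows to be NP-hard can be sketched by brute force: the supremum over anchored boxes is attained (in the limit) on a critical grid built from the point coordinates, so one can enumerate roughly n^d candidate corners. The following minimal Python sketch (names are illustrative, not taken from the paper) implements this exhaustive approach and makes the exponential dependence on the dimension d tangible:

```python
from itertools import product

def star_discrepancy(points):
    """Exact star discrepancy of a point set in [0,1]^d via exhaustive
    enumeration of the critical grid of candidate box corners.
    Runs in roughly n^d steps, which is only feasible for small d."""
    n = len(points)
    d = len(points[0])
    # Candidate corners: each coordinate is a point coordinate or 1.
    grids = [sorted({p[i] for p in points} | {1.0}) for i in range(d)]
    disc = 0.0
    for corner in product(*grids):
        vol = 1.0
        for c in corner:
            vol *= c
        # Points strictly inside [0, corner) vs. inside the closed box.
        open_cnt = sum(all(p[i] < corner[i] for i in range(d)) for p in points)
        closed_cnt = sum(all(p[i] <= corner[i] for i in range(d)) for p in points)
        disc = max(disc, vol - open_cnt / n, closed_cnt / n - vol)
    return disc

pts = [(0.25, 0.25), (0.75, 0.75)]
print(star_discrepancy(pts))  # → 0.4375
```

The extreme box here is the closed square [0, 0.25]^2, which contains one of the two points but has volume only 0.0625, giving local discrepancy 0.5 − 0.0625 = 0.4375.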
A new randomized algorithm to approximate the star discrepancy based on threshold accepting
SIAM J. Numer. Anal.
Probabilistic star discrepancy bounds for double infinite random matrices
, 2012
Hardness of discrepancy computation and epsilon-net verification in high dimension
CoRR
Abstract

Discrepancy measures how uniformly distributed a point set is with respect to a given set of ranges. There are two notions of discrepancy, namely continuous discrepancy and combinatorial discrepancy. Depending on the ranges, several possible variants arise, for example star discrepancy, box discrepancy, and discrepancy of halfspaces. In this paper, we investigate the hardness of these problems with respect to the dimension d of the underlying space. All these problems are solvable in time n^{O(d)}, but such a time dependency quickly becomes intractable for high-dimensional data. Thus it is interesting to ask whether the dependency on d can be moderated. We answer this question negatively by proving that the canonical decision problems are W[1]-hard with respect to the dimension. This is done via a parameterized reduction from the Clique problem. As the parameter stays linear in the input parameter, the results moreover imply that these problems require n^{Ω(d)} time, unless 3-SAT can be solved in 2^{o(n)} time. Further, we derive that testing whether a given set is an ε-net with respect to halfspaces takes n^{Ω(d)} time under the same assumption. As intermediate results, we discover the W[1]-hardness of other well-known problems, …
Construction of Minimal Bracketing Covers for Rectangles
Abstract

We construct explicit δ-bracketing covers with minimal cardinality for the set system of (anchored) rectangles in the two-dimensional unit cube. More precisely, the cardinality of these δ-bracketing covers is bounded from above by δ^{-2} + o(δ^{-2}). A lower bound for the cardinality of arbitrary δ-bracketing covers for d-dimensional anchored boxes from [M. Gnewuch, Bracketing numbers for axis-parallel boxes and applications to geometric discrepancy, J. Complexity 24 (2008) 154–172] implies the lower bound δ^{-2} + O(δ^{-1}) in dimension d = 2, showing that our constructed covers are (essentially) optimal. We also study other δ-bracketing covers for the set system of rectangles, deduce the coefficient of the most significant term δ^{-2} in the asymptotic expansion of their cardinality, and compute their cardinality for explicit values of δ.
High dimensional integration – the Quasi-Monte Carlo way
Acta Numerica
Abstract
This paper is a contemporary review of QMC (“Quasi-Monte Carlo”) methods, i.e., equal-weight rules for the approximate evaluation of high-dimensional integrals over the unit cube [0,1]^s, where s may be large, or even infinite. After a general introduction, the paper surveys recent developments in lattice methods, digital nets, and related themes. Among those recent developments are methods of construction of both lattices and digital nets, to yield QMC rules that have a prescribed rate of convergence for sufficiently smooth functions, and ideally also guaranteed slow growth (or no growth) of the worst-case error as s increases. A crucial role is played by parameters called “weights”, since a careful use of the weight parameters is needed to ensure that the worst-case errors in an appropriately weighted function space are bounded, or grow only slowly, as the dimension s increases. Important tools for the analysis are weighted function spaces, reproducing kernel Hilbert spaces, and discrepancy, …
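As a concrete illustration of the equal-weight rules the abstract describes, the following minimal sketch (not from the paper) estimates a smooth integral over [0,1]^2 using a Halton sequence, one of the standard low-discrepancy constructions. The rule is simply Q(f) = (1/N) Σ f(x_i), with all the work done by the choice of the point set:

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse: mirror the base-b digits of i
    about the radix point, e.g. i=3, base=2: 11 -> 0.11_2 = 0.75."""
    inv, denom = 0.0, 1.0
    while i > 0:
        i, digit = divmod(i, base)
        denom *= base
        inv += digit / denom
    return inv

def halton(n, bases=(2, 3)):
    """First n points of the Halton sequence in [0,1]^len(bases)."""
    return [tuple(radical_inverse(i, b) for b in bases)
            for i in range(1, n + 1)]

# Equal-weight QMC rule: average f over the low-discrepancy point set.
f = lambda x, y: x * y            # exact integral over [0,1]^2 is 1/4
pts = halton(1024)
qmc = sum(f(*p) for p in pts) / len(pts)
print(abs(qmc - 0.25))            # QMC error decays ~ (log N)^2 / N here
```

With N = 1024 points the error is far below what plain Monte Carlo's N^{-1/2} rate would typically give; the Koksma-Hlawka inequality bounds this error by the star discrepancy of the point set times the variation of f.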