Results 1–10 of 20
Settling the Complexity of Computing Two-Player Nash Equilibria
Cited by 88 (5 self)
We prove that Bimatrix, the problem of finding a Nash equilibrium in a two-player game, is complete for the complexity class PPAD (Polynomial Parity Argument, Directed version) introduced by Papadimitriou in 1991. Our result, building upon the work of Daskalakis, Goldberg, and Papadimitriou on the complexity of four-player Nash equilibria [21], settles a long-standing open problem in algorithmic game theory. It also serves as a starting point for a series of results concerning the complexity of two-player Nash equilibria. In particular, we prove the following theorems:
• Bimatrix does not have a fully polynomial-time approximation scheme unless every problem in PPAD is solvable in polynomial time.
• The smoothed complexity of the classic Lemke–Howson algorithm and, in fact, of any algorithm for Bimatrix is not polynomial unless every problem in PPAD is solvable in randomized polynomial time.
Our results also have a complexity implication in mathematical economics:
• Arrow–Debreu market equilibria are PPAD-hard to compute.
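The asymmetry in the abstract above can be made concrete: while finding a Nash equilibrium of a bimatrix game is PPAD-complete, verifying a candidate equilibrium is straightforward. A minimal sketch (the helper names and the Matching Pennies example are illustrative, not from the paper):

```python
# Verifying a Nash equilibrium of a bimatrix game: each player's mixed
# strategy must put positive weight only on pure best responses.

def expected_payoffs(A, y):
    # Row player's expected payoff for each pure row against mixed column y.
    return [sum(A[i][j] * y[j] for j in range(len(y))) for i in range(len(A))]

def is_nash(A, B, x, y, tol=1e-9):
    row_pay = expected_payoffs(A, y)
    # Column player's payoffs: transpose B and swap the roles of x and y.
    Bt = [[B[i][j] for i in range(len(B))] for j in range(len(B[0]))]
    col_pay = expected_payoffs(Bt, x)
    ok_row = all(x[i] < tol or row_pay[i] > max(row_pay) - tol
                 for i in range(len(x)))
    ok_col = all(y[j] < tol or col_pay[j] > max(col_pay) - tol
                 for j in range(len(y)))
    return ok_row and ok_col

# Matching Pennies: the unique equilibrium is uniform mixing for both players.
A = [[1, -1], [-1, 1]]   # row player's payoff matrix
B = [[-1, 1], [1, -1]]   # column player's payoff matrix
print(is_nash(A, B, [0.5, 0.5], [0.5, 0.5]))  # True
print(is_nash(A, B, [1.0, 0.0], [0.5, 0.5]))  # False
```

The check is polynomial-time; the hardness result concerns the search problem, not this verification.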
A discrete Laplace–Beltrami operator for simplicial surfaces
 Discrete Comput. Geom.
Cited by 59 (3 self)
We define a discrete Laplace–Beltrami operator for simplicial surfaces (Definition 16). It depends only on the intrinsic geometry of the surface, and its edge weights are positive. Our Laplace operator is similar to the well-known finite-elements Laplacian (the so-called "cotan formula"), except that it is based on the intrinsic Delaunay triangulation of the simplicial surface. This leads to new definitions of discrete harmonic functions, discrete mean curvature, and discrete minimal surfaces. The definition of the discrete Laplace–Beltrami operator depends on the existence and uniqueness of Delaunay tessellations in piecewise flat surfaces. While the existence is known, we prove the uniqueness. Using Rippa's theorem, we show that, as claimed, Musin's harmonic index provides an optimality criterion for Delaunay triangulations, and this can be used to prove that the edge-flipping algorithm also terminates in the setting of piecewise flat surfaces.
Keywords: Laplace operator, Delaunay triangulation, Dirichlet energy, simplicial surfaces, discrete differential geometry
1 Dirichlet energy of piecewise linear functions. Let S be a simplicial surface in 3-dimensional Euclidean space, i.e. a geometric simplicial complex in ℝ³ whose carrier S is a 2-dimensional submanifold, …
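The "cotan formula" mentioned in the abstract assigns each edge the weight w_ij = (cot α + cot β)/2, where α and β are the angles opposite the edge in its adjacent triangles. A minimal sketch of that weight computation (the paper's operator would first flip edges to the intrinsic Delaunay triangulation; this sketch uses the given triangulation as-is, and the function names are illustrative):

```python
import math

def cotan(a, b, c):
    # Cotangent of the angle at vertex a in triangle (a, b, c).
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    dot = sum(u[i] * v[i] for i in range(3))
    cross = [u[1]*v[2] - u[2]*v[1],
             u[2]*v[0] - u[0]*v[2],
             u[0]*v[1] - u[1]*v[0]]
    return dot / math.sqrt(sum(w * w for w in cross))

def cotan_weights(verts, tris):
    # Edge weight w_ij = (cot(alpha) + cot(beta)) / 2, summed over the
    # one or two triangles containing edge (i, j).
    w = {}
    for i, j, k in tris:
        for p, q, r in ((i, j, k), (j, k, i), (k, i, j)):
            e = (min(p, q), max(p, q))
            w[e] = w.get(e, 0.0) + 0.5 * cotan(verts[r], verts[p], verts[q])
    return w

# Two right triangles forming a unit square: the diagonal gets weight 0
# (both opposite angles are 90 degrees), each boundary edge gets 0.5.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
tris = [(0, 1, 2), (0, 2, 3)]
print(cotan_weights(verts, tris))
```

Positivity of the weights is exactly what the intrinsic Delaunay condition guarantees and what a non-Delaunay input triangulation can violate.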
Hardware-assisted visibility sorting for unstructured volume rendering
 IEEE Transactions on Visualization and Computer Graphics
, 2005
Cited by 54 (11 self)
Harvesting the power of modern graphics hardware to solve the complex problem of real-time rendering of large unstructured meshes is a major research goal in the volume visualization community. While, for regular grids, texture-based techniques are well-suited for current GPUs, the steps necessary for rendering unstructured meshes are not so easily mapped to current hardware. We propose a novel volume rendering technique that simplifies the CPU-based processing and shifts much of the sorting burden to the GPU, where it can be performed more efficiently. Our hardware-assisted visibility sorting algorithm is a hybrid technique that operates in both object space and image space. In object space, the algorithm performs a partial sort of the 3D primitives in preparation for rasterization. The goal of the partial sort is to create a list of primitives that generate fragments in nearly sorted order. In image space, the fragment stream is incrementally sorted using a fixed-depth sorting network. In our algorithm, the object-space work is performed by the CPU and the fragment-level sorting is done completely on the GPU. A prototype implementation of the algorithm demonstrates that the fragment-level sorting achieves rendering rates of between one and six million tetrahedral cells per second on an ATI Radeon 9800.
Index Terms: Volume visualization, graphics processors, visibility sorting.
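The key observation behind the image-space stage described above is that a nearly sorted stream can be fully sorted by a buffer of fixed depth: if every fragment arrives at most k positions from its sorted place, a buffer holding k+1 elements suffices. A CPU-side sketch of that invariant (a min-heap stands in for the GPU sorting network; names are illustrative):

```python
import heapq

def sort_nearly_sorted(depths, k):
    # Emits the fragment depths in fully sorted order, provided each
    # input element is at most k positions away from its sorted position.
    buf, out = [], []
    for d in depths:
        heapq.heappush(buf, d)
        if len(buf) > k + 1:          # fixed depth: never holds > k+1
            out.append(heapq.heappop(buf))
    while buf:                        # flush the remaining buffered values
        out.append(heapq.heappop(buf))
    return out

stream = [0.2, 0.1, 0.4, 0.3, 0.6, 0.5]   # each at most 1 position off
print(sort_nearly_sorted(stream, 1))      # [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
```

This is why the object-space partial sort matters: the better it bounds the disorder k, the smaller the fixed-depth network needed on the GPU.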
Connectivity-based Localization of Large-Scale Sensor Networks with Complex Shape
Cited by 32 (4 self)
We study the problem of localizing a large sensor network having a complex shape, possibly with holes. A major challenge with respect to such networks is to figure out the correct network layout, i.e., avoid global flips where a part of the network folds on top of another. Our algorithm first selects landmarks on the network boundaries with sufficient density, then constructs the landmark Voronoi diagram and its dual combinatorial Delaunay complex on these landmarks. The key insight is that the combinatorial Delaunay complex is provably globally rigid and has a unique realization in the plane. Thus an embedding of the landmarks, obtained by simply gluing the Delaunay triangles, properly recovers the faithful network layout. With the landmarks localized, the remaining nodes can easily localize themselves by trilateration to nearby landmark nodes. This leads to a practical and accurate localization algorithm for large networks using only network connectivity. Simulations on various network topologies show surprisingly good results. In comparison, previous connectivity-based localization algorithms, such as multidimensional scaling and rubber-band representation, generate globally flipped or distorted localization results.
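The final step mentioned in the abstract, trilateration to nearby landmarks, has a simple closed form in the plane: subtracting the circle equation of one landmark from the other two gives a 2×2 linear system. A sketch under that standard formulation (the function name and example coordinates are illustrative, not the paper's):

```python
def trilaterate(p1, p2, p3, d1, d2, d3):
    # Position of a node at distances d1, d2, d3 from three
    # non-collinear landmarks p1, p2, p3 in the plane.
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equation at p1 from those at p2 and p3
    # cancels the quadratic terms, leaving A [x, y]^T = b.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21   # nonzero for non-collinear landmarks
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Node at (1, 1), landmarks at (0,0), (4,0), (0,4).
print(trilaterate((0, 0), (4, 0), (0, 4), 2**0.5, 10**0.5, 10**0.5))
```

In the connectivity-only setting of the paper, the distances would themselves be estimated from hop counts, so a least-squares variant over more than three landmarks is the practical choice.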
Alexandrov’s theorem, weighted Delaunay triangulations, and mixed volumes
, 2007
Cited by 27 (7 self)
We present a constructive proof of Alexandrov's theorem on the existence of a convex polytope with a given metric on the boundary. The polytope is obtained by deforming certain generalized convex polytopes with the given boundary. We study the space of generalized convex polytopes and discover a connection with weighted Delaunay triangulations of polyhedral surfaces. The existence of the deformation follows from the non-degeneracy of the Hessian of the total scalar curvature of generalized convex polytopes with positive singular curvature. This Hessian is shown to be equal to the Hessian of the volume of the dual generalized polyhedron. We prove the non-degeneracy by applying the technique used in the proof of the Alexandrov–Fenchel inequality. Our construction of a convex polytope from a given metric is implemented in a computer program.
A Variational Principle for Weighted Delaunay Triangulations and Hyperideal Polyhedra
, 2008
Cited by 11 (0 self)
We use a variational principle to prove an existence and uniqueness theorem for planar weighted Delaunay triangulations (with non-intersecting site-circles) with prescribed combinatorial type and circle intersection angles. Such weighted Delaunay triangulations may be interpreted as images of hyperbolic polyhedra with one vertex on, and the remaining vertices beyond, the infinite boundary of hyperbolic space. Thus, the main theorem states necessary and sufficient conditions for the existence and uniqueness of such polyhedra with prescribed combinatorial type and dihedral angles. More generally, we consider weighted Delaunay triangulations in piecewise flat surfaces, allowing cone singularities with prescribed cone angles at the vertices. The material presented here extends work by Rivin on Delaunay triangulations and ideal polyhedra.
Overlaying Surface Meshes, Part II: Topology Preservation and Feature Detection
 International Journal on Computational Geometry and Applications
, 2004
Cited by 7 (3 self)
In Part I, we described an efficient and robust algorithm for computing a common refinement of two surface meshes. In this paper, we present a theoretical verification of the robustness of our algorithm by showing the topological preservation of the intersection principle, which we used to resolve topological inconsistencies caused by numerical errors. To enhance robustness in practice for complex geometries, we further propose techniques to detect and match geometric features, such as ridges, corners, and non-matching boundaries. We report experimental results using our enhanced overlay algorithm with feature matching for complex geometries from real-world applications.
Filtering Relocations on a Delaunay Triangulation
 Computer Graphics Forum
, 2009
Cited by 6 (2 self)
Updating a Delaunay triangulation when its vertices move is a bottleneck in several domains of application. Rebuilding the whole triangulation from scratch is, surprisingly, a very viable option compared to relocating the vertices. This can be explained by several recent advances in the efficient construction of Delaunay triangulations. However, when all points move with a small magnitude, or when only a fraction of the vertices move, rebuilding is no longer the best option. This paper considers the problem of efficiently updating a Delaunay triangulation when its vertices are moving under small perturbations. The main contribution is a set of filters based upon the concept of vertex tolerances. Experiments show that filtering relocations is faster than rebuilding the whole triangulation from scratch under certain conditions.
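The vertex-tolerance idea above can be sketched independently of any triangulation library: each vertex keeps the position it had when the triangulation was last certified, plus a radius within which its Delaunay certificates are assumed to stay valid; moves inside the radius are filtered out. The tolerance value and function names below are placeholders, not the paper's formula:

```python
# Sketch of a relocation filter based on per-vertex tolerances.
# A move within the tolerance radius of a vertex's certified position
# is "filtered" (no retriangulation work); a larger move would trigger
# an actual update of the triangulation.

def make_relocation_filter(positions, tolerance):
    certified = dict(positions)   # position at the last (re)triangulation

    def relocate(v, new_pos):
        cx, cy = certified[v]
        nx, ny = new_pos
        if (nx - cx) ** 2 + (ny - cy) ** 2 <= tolerance ** 2:
            return "filtered"     # small perturbation: certificates hold
        certified[v] = new_pos    # large move: re-certify at new position
        return "relocated"

    return relocate

relocate = make_relocation_filter({0: (0.0, 0.0), 1: (5.0, 0.0)},
                                  tolerance=0.1)
print(relocate(0, (0.05, 0.0)))   # filtered
print(relocate(0, (1.0, 0.0)))    # relocated
```

In the real algorithm, the tolerance is derived from how far the vertex can move before some empty-circumcircle certificate fails, which is what makes the filter safe rather than merely heuristic.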
Three-dimensional laser scanning for geometry documentation and construction management of highway tunnels during excavation
 Sensors 2012
Overlaying Surface Meshes, Part I: Algorithms
 Int. J. Comput. Geom. Appl.
, 2004
Cited by 5 (1 self)
We describe an efficient and robust algorithm for computing a common refinement of two meshes modeling the same surface of arbitrary shape by overlaying them on top of each other. A common refinement is an important data structure for transferring data between meshes that have different combinatorial structures. Our algorithm is optimal in time and space, with linear complexity, and is robust even with inexact computations, through the techniques of error analysis, detection of topological inconsistencies, and automatic resolution of such inconsistencies. We present the verification and some further enhancement of robustness in Part II.