Results 1–10 of 83
Near-optimal sensor placements in Gaussian processes
 In ICML
, 2005
"... When monitoring spatial phenomena, which can often be modeled as Gaussian processes (GPs), choosing sensor locations is a fundamental task. There are several common strategies to address this task, for example, geometry or disk models, placing sensors at the points of highest entropy (variance) in t ..."
Abstract

Cited by 333 (34 self)
When monitoring spatial phenomena, which can often be modeled as Gaussian processes (GPs), choosing sensor locations is a fundamental task. There are several common strategies to address this task, for example, geometry or disk models, placing sensors at the points of highest entropy (variance) in the GP model, and A-, D-, or E-optimal design. In this paper, we tackle the combinatorial optimization problem of maximizing the mutual information between the chosen locations and the locations which are not selected. We prove that the problem of finding the configuration that maximizes mutual information is NP-complete. To address this issue, we describe a polynomial-time approximation that is within (1 − 1/e) of the optimum by exploiting the submodularity of mutual information. We also show how submodularity can be used to obtain online bounds, and design branch-and-bound search procedures. We then extend our algorithm to exploit lazy evaluations and local structure in the GP, yielding significant speedups. We also extend our approach to find placements which are robust against node failures and uncertainties in the model. These extensions are again associated with rigorous theoretical approximation guarantees, exploiting the submodularity of the objective function. We demonstrate the advantages of our approach towards optimizing mutual information in a very extensive empirical study on two real-world data sets.
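The lazy-evaluation speedup this abstract mentions rests on submodularity: a marginal gain computed in an earlier round upper-bounds the current one, so heap entries only need recomputing when they reach the top. A minimal sketch of this lazy-greedy (CELF-style) selection, using a generic monotone submodular objective rather than the paper's mutual-information criterion; all names and the toy coverage objective below are illustrative, not from the paper:

```python
import heapq

def lazy_greedy(ground_set, f, k):
    """Pick k elements approximately maximizing a monotone submodular f.

    A stale marginal gain upper-bounds the fresh one (submodularity), so a
    popped entry whose gain was computed in the current round is provably
    the best element and can be taken without recomputation.
    """
    selected = set()
    base = f(frozenset())
    # heap of (-gain, element, round_when_gain_was_computed)
    heap = [(-(f(frozenset([e])) - base), e, 0) for e in ground_set]
    heapq.heapify(heap)
    rnd = 0
    while len(selected) < k and heap:
        neg_gain, e, stamp = heapq.heappop(heap)
        if stamp == rnd:
            selected.add(e)              # gain is fresh: it is the true maximum
            base = f(frozenset(selected))
            rnd += 1
        else:
            gain = f(frozenset(selected | {e})) - base   # recompute stale gain
            heapq.heappush(heap, (-gain, e, rnd))
    return selected

# Toy usage with a coverage objective (illustrative stand-in):
sets = {'a': {1, 2, 3}, 'b': {3, 4}, 'c': {5}}
cover = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
print(lazy_greedy(list(sets), cover, 2))
```

The greedy choice at each step carries the same (1 − 1/e) guarantee the abstract states, and in practice most elements never have their gains recomputed.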
An autonomous mobile robot with a 3D laser range finder for 3D exploration and digitalization of indoor environments
, 2003
"... Digital 3D models of the environment are needed in rescue and inspection robotics, facility managements and architecture. This paper presents an automatic system for gaging and digitalization of 3D indoor environments. It consists of an autonomous mobile robot, a reliable 3D laser range finder and t ..."
Abstract

Cited by 116 (23 self)
Digital 3D models of the environment are needed in rescue and inspection robotics, facility management and architecture. This paper presents an automatic system for gauging and digitalization of 3D indoor environments. It consists of an autonomous mobile robot, a reliable 3D laser range finder and three elaborated software modules. The first module, a fast variant of the Iterative Closest Points algorithm, registers the 3D scans in a common coordinate system and relocalizes the robot. The second module, a next-best-view planner, computes the next nominal pose based on the acquired 3D data while avoiding complicated obstacles. The third module, a closed-loop and globally stable motor controller, navigates the mobile robot to a nominal pose based on odometry and avoids collisions with dynamic obstacles. The 3D laser range finder acquires a 3D scan at this pose. The proposed method allows one to digitalize large indoor environments fast and reliably without any intervention and solves the SLAM problem. The results of two 3D digitalization experiments are presented using a fast octree-based visualization method.
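The registration step above is built on the Iterative Closest Points algorithm. A minimal point-to-point ICP sketch, not the paper's fast variant: brute-force nearest-neighbour matching plus a Kabsch/SVD rigid alignment, with all function names assumed for illustration:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares R, t with R @ p_i + t ≈ q_i (Kabsch / SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(source, target, iters=20):
    """Point-to-point ICP: match each source point to its nearest target
    point, solve for the best rigid motion, apply it, repeat."""
    src = source.copy()
    for _ in range(iters):
        # brute-force nearest neighbours; a k-d tree would replace this
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d2.argmin(axis=1)]
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
    return src
```

With roughly aligned scans, each iteration's closest-point matches improve, which is why the method converges quickly; the paper's contribution is making this loop fast enough for large 3D scans.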
Counting people in crowds with a real-time network of image sensors
 in Proc. of IEEE ICCV
, 2003
"... Estimating the number of people in a crowded environment is a central task in civilian surveillance. Most visionbased counting techniques depend on detecting individuals in order to count, an unrealistic proposition in crowded settings. We propose an alternative approach that directly estimates the ..."
Abstract

Cited by 80 (2 self)
Estimating the number of people in a crowded environment is a central task in civilian surveillance. Most vision-based counting techniques depend on detecting individuals in order to count, an unrealistic proposition in crowded settings. We propose an alternative approach that directly estimates the number of people. In our system, groups of image sensors segment foreground objects from the background, aggregate the resulting silhouettes over a network, and compute a planar projection of the scene’s visual hull. We introduce a geometric algorithm that calculates bounds on the number of persons in each region of the projection, after phantom regions have been eliminated. The computational requirements scale well with the number of sensors and the number of people, and only limited amounts of data are transmitted over the network. Because of these properties, our system runs in real-time and can be deployed as an untethered wireless sensor network. We describe the major components of our system, and report preliminary experiments with our first prototype implementation.
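The counting bounds can be illustrated on a toy version of the planar projection: treat it as a binary occupancy grid, where (once phantom regions are removed) each connected region contains at least one person and at most area/footprint people. This sketch is an illustration of the bound idea under that assumed grid representation, not the paper's geometric algorithm; `person_area` is a hypothetical parameter:

```python
def count_bounds(grid, person_area):
    """Lower/upper bounds on people in a binary projection grid: each
    connected region holds >= 1 person (phantoms assumed removed) and
    at most floor(region_area / person_area) people."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    lower = upper = 0
    for r0 in range(rows):
        for c0 in range(cols):
            if grid[r0][c0] and (r0, c0) not in seen:
                # flood-fill one connected region, measuring its area
                stack, area = [(r0, c0)], 0
                seen.add((r0, c0))
                while stack:
                    r, c = stack.pop()
                    area += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and grid[nr][nc] and (nr, nc) not in seen):
                            seen.add((nr, nc))
                            stack.append((nr, nc))
                lower += 1
                upper += max(1, area // person_area)
    return lower, upper
```

The appeal of region-level bounds is exactly what the abstract claims: no individual detection is needed, and the work grows with the number of regions rather than the number of people.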
Sampling-Based Sensor-Network Deployment
"... In this paper, we consider the problem of placing networked sensors in a way that guarantees coverage and connectivity. We focus on sampling based deployment and present algorithms that guarantee coverage and connectivity with a small number of sensors. We consider two different scenarios based on t ..."
Abstract

Cited by 39 (0 self)
In this paper, we consider the problem of placing networked sensors in a way that guarantees coverage and connectivity. We focus on sampling-based deployment and present algorithms that guarantee coverage and connectivity with a small number of sensors. We consider two different scenarios based on the flexibility of deployment. If deployment has to be accomplished in one step, like airborne deployment, then the main question becomes how many sensors are needed. If deployment can be implemented in multiple steps, then awareness of coverage and connectivity can be updated. For this case, we present incremental deployment algorithms which consider the current placement to adjust the sampling domain. The algorithms are simple, easy to implement, and require a small number of sensors. We believe the concepts and algorithms presented in this paper will provide a unifying framework for existing and future deployment algorithms which consider many practical issues not considered in the present work.
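One way to picture the incremental scheme: sample candidate positions at random, and accept a sample only if it covers something still uncovered and stays within communication range of the network placed so far. The sketch below is a simplified illustration under assumed disc sensing and communication models; the parameter names and acceptance rule are not the paper's algorithms:

```python
import math
import random

def incremental_deploy(area, r_sense, r_comm, grid_step=0.25,
                       max_tries=5000, seed=1):
    """Incrementally sample sensor positions in a (width, height) rectangle
    until every grid point is within r_sense of a sensor.

    A sample is accepted only if it covers some still-uncovered grid point
    and lies within r_comm of an already placed sensor (the first sensor is
    exempt), so the accepted network is connected by construction."""
    random.seed(seed)
    w, h = area
    pts = [(x * grid_step, y * grid_step)
           for x in range(int(w / grid_step) + 1)
           for y in range(int(h / grid_step) + 1)]
    uncovered = set(pts)
    sensors = []
    for _ in range(max_tries):
        if not uncovered:
            break
        s = (random.uniform(0, w), random.uniform(0, h))
        connected = not sensors or any(math.dist(s, q) <= r_comm
                                       for q in sensors)
        gain = {p for p in uncovered if math.dist(s, p) <= r_sense}
        if connected and gain:
            sensors.append(s)
            uncovered -= gain
    return sensors, uncovered
```

Rejecting samples with no coverage gain is what keeps the sensor count small, and the connectivity test is where awareness of the current placement "adjusts the sampling domain" in spirit.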
A general method for sensor planning in multisensor systems: Extension to random occlusion
, 2005
"... Abstract. Systems utilizing multiple sensors are required in many domains. In this paper, we specifically concern ourselves with applications where dynamic objects appear randomly and the system is employed to obtain some userspecified characteristics of such objects. For such systems, we deal wit ..."
Abstract

Cited by 33 (1 self)
Systems utilizing multiple sensors are required in many domains. In this paper, we specifically concern ourselves with applications where dynamic objects appear randomly and the system is employed to obtain some user-specified characteristics of such objects. For such systems, we deal with the tasks of determining measures for evaluating their performance and of determining good sensor configurations that would maximize such measures for better system performance. We introduce a constraint in sensor planning that has not been addressed earlier: visibility in the presence of random occluding objects. Two techniques are developed to analyze such visibility constraints: a probabilistic approach to determine “average” visibility rates and a deterministic approach to address worst-case scenarios. Apart from this constraint, other important constraints to be considered include image resolution, field of view, capture orientation, and algorithmic constraints such as stereo matching and background appearance. Integration of such constraints is performed via the development of a probabilistic framework that allows one to reason about different occlusion events and integrates different multi-view capture and visibility constraints in a natural way. Integration of the thus obtained capture quality measure across the region of interest yields a measure for the effectiveness of a sensor configuration, and maximization of this measure yields sensor configurations that are …
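The "average" visibility rate for a single sight line can be estimated by Monte Carlo under a simple random-occlusion model: occluder centres form a planar Poisson process of intensity λ, and each occluder is a disc of radius r. This toy model is an assumption for illustration, not the paper's framework; it has the closed form P(visible) = exp(−λ(2rd + πr²)) for a sight line of length d, which the simulation should reproduce:

```python
import math
import random

def poisson_sample(mean):
    """Knuth's algorithm for a Poisson-distributed integer."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def visibility_rate(d, lam, r, trials=20000, seed=2):
    """Monte Carlo probability that the segment (0,0)-(d,0) is unoccluded
    when occluder centres are Poisson with intensity lam per unit area and
    each occluder is a disc of radius r.

    Occluders are sampled in a box padded by r around the segment, which
    contains every centre that could possibly block it."""
    random.seed(seed)
    w, h = d + 2 * r, 2 * r
    mean_n = lam * w * h
    clear = 0
    for _ in range(trials):
        n = poisson_sample(mean_n)
        blocked = False
        for _ in range(n):
            x = random.uniform(-r, d + r)
            y = random.uniform(-r, r)
            dx = min(max(x, 0.0), d)       # closest point of the segment
            if math.hypot(x - dx, y) <= r:
                blocked = True
                break
        clear += not blocked
    return clear / trials
```

Integrating such a per-point visibility probability over a region of interest is the flavour of capture-quality measure the abstract describes.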
A constant-factor approximation algorithm for optimal terrain guarding
 In Proc. ACM-SIAM Symposium on Discrete Algorithms
, 2005
"... We present the first constantfactor approximation algorithm for a nontrivial instance of the optimal guarding (coverage) problem in polygons. In particular, we give an O(1)approximation algorithm for placing the fewest point guards on a 1.5D terrain, so that every point of the terrain is seen by ..."
Abstract

Cited by 32 (3 self)
We present the first constant-factor approximation algorithm for a non-trivial instance of the optimal guarding (coverage) problem in polygons. In particular, we give an O(1)-approximation algorithm for placing the fewest point guards on a 1.5D terrain, so that every point of the terrain is seen by at least one guard. While polylogarithmic-factor approximations follow from set cover results, our new results exploit the geometric structure of terrains to obtain a substantially improved approximation algorithm.
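For contrast with the paper's O(1) guarantee, the polylogarithmic-factor baseline it mentions is plain greedy set cover over candidate guards. A small sketch on a discretised 1.5D terrain, restricted to vertex guards and vertex-to-vertex visibility; the representation and names are illustrative, not the paper's algorithm:

```python
def sees(h, i, j):
    """On a 1.5D terrain with heights h (vertex k at (k, h[k])), vertices
    i and j see each other iff the segment between them never dips below
    the terrain."""
    if i > j:
        i, j = j, i
    for k in range(i + 1, j):
        # height of segment (i, h[i])-(j, h[j]) at x = k
        y = h[i] + (h[j] - h[i]) * (k - i) / (j - i)
        if y < h[k] - 1e-9:
            return False
    return True

def greedy_guards(h):
    """Greedy set cover: repeatedly place the vertex guard seeing the most
    still-unseen vertices. This gives an O(log n) approximation, not the
    paper's constant factor."""
    n = len(h)
    unseen = set(range(n))
    guards = []
    while unseen:
        g = max(range(n),
                key=lambda j: sum(1 for i in unseen if sees(h, i, j)))
        guards.append(g)
        unseen -= {i for i in unseen if sees(h, i, g)}
    return guards
```

On the sawtooth terrain `[0, 2, 0, 2, 0]`, each peak sees its own valley but not the far slope, so two guards are needed; a flat terrain needs only one.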
Performance of a distributed robotic system using shared communications channels
 IEEE Trans. on Robotics and Automation
, 2002
"... Abstract—We have designed and built a set of miniature robots, called Scouts, and have developed a distributed software system to control them. This paper addresses the fundamental choices we made in the design of the control software, describes experimental results in a surveillance task, and analy ..."
Abstract

Cited by 29 (15 self)
We have designed and built a set of miniature robots, called Scouts, and have developed a distributed software system to control them. This paper addresses the fundamental choices we made in the design of the control software, describes experimental results in a surveillance task, and analyzes the factors that affect robot performance. Space and power limitations on the Scouts severely restrict the computational power of their onboard computers, requiring a proxy-processing scheme in which the robots depend on remote computers for their computing needs. While this allows the robots to be autonomous, the fact that robots’ behaviors are executed remotely introduces an additional complication: sensor data and motion commands have to be exchanged over wireless communications channels. Communications channels cannot always be shared, thus requiring the robots to obtain exclusive access to them. We present experimental results on a surveillance task in which multiple robots patrol an area and watch for motion. We discuss how the limited communications bandwidth affects robot performance in accomplishing the task and analyze how performance depends on the number of robots that share the bandwidth. Index Terms—Multiple robots, mobile robots, distributed software architecture, resource allocation.
Approximation Algorithms for Two Optimal Location Problems in Sensor Networks
 in Broadnets, 2005. [Online]. Available: http://valis.cs.uiuc.edu/ sariel/papers/04/sensors
, 2004
"... This paper studies two problems that arise in optimization of sensor networks: First, we devise provable approximation schemes for locating a base station and constructing a network among a set of sensors each of which has a data stream to get to the base station. Subject to power constraints at ..."
Abstract

Cited by 28 (2 self)
This paper studies two problems that arise in optimization of sensor networks: First, we devise provable approximation schemes for locating a base station and constructing a network among a set of sensors each of which has a data stream to get to the base station. Subject to power constraints at the sensors, our goal is to locate the base station and establish a network in order to maximize the lifespan of the network.
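As a toy stand-in for the base-station location problem above, one can minimise the maximum sensor-to-base distance, a crude proxy for the worst per-sensor transmit power and hence lifespan under power constraints, by grid search. This is an illustration under that assumed proxy objective, not the paper's approximation scheme:

```python
import math

def best_base_station(sensors, step=0.05):
    """Grid search over the sensors' bounding box for the location that
    minimises the maximum distance to any sensor (the 1-center objective,
    used here as a proxy for worst-case transmit power)."""
    xs = [p[0] for p in sensors]
    ys = [p[1] for p in sensors]
    best, best_cost = None, float('inf')
    x = min(xs)
    while x <= max(xs) + 1e-9:
        y = min(ys)
        while y <= max(ys) + 1e-9:
            cost = max(math.dist((x, y), p) for p in sensors)
            if cost < best_cost:
                best, best_cost = (x, y), cost
            y += step
        x += step
    return best, best_cost
```

Refining `step` gives a simple approximation scheme for this proxy objective: the grid optimum is within O(step) of the true 1-center cost.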
Guarding galleries and terrains
 Information Processing Letters
, 2006
"... Let P be a polygon with n vertices. We say that two points of P see each other if the line segment connecting them lies inside (the closure of) P. In this paper we present efficient approximation algorithms for finding the smallest set G of points of P so that each point of P is seen by at least one ..."
Abstract

Cited by 25 (1 self)
Let P be a polygon with n vertices. We say that two points of P see each other if the line segment connecting them lies inside (the closure of) P. In this paper we present efficient approximation algorithms for finding the smallest set G of points of P so that each point of P is seen by at least one point of G, and the points of G are constrained to belong to the set of vertices of an arbitrarily dense grid. We also present similar algorithms for terrains and polygons with holes.
Simplifying Complex Environments using Incremental Textured Depth Meshes
 ACM TRANS. GRAPH
, 2003
"... We present an incremental algorithm to compute imagebased simplifications of a large environment. We use an optimizationbased approach to generate samples based on scene visibility, and from each viewpoint create textured depth meshes (TDMs) using sampled range panoramas of the environment. The op ..."
Abstract

Cited by 20 (0 self)
We present an incremental algorithm to compute image-based simplifications of a large environment. We use an optimization-based approach to generate samples based on scene visibility, and from each viewpoint create textured depth meshes (TDMs) using sampled range panoramas of the environment. The optimization function minimizes artifacts such as skins and cracks in the reconstruction. We also present an encoding scheme for multiple TDMs that exploits spatial coherence among different viewpoints. The resulting simplifications, incremental textured depth meshes (ITDMs), reduce preprocessing, storage, rendering costs and visible artifacts. Our algorithm has been applied to large, complex synthetic environments comprising millions of primitives. It is able to render them at 20–40 frames per second on a PC with little loss in visual fidelity.