Results 1–10 of 261
FastSLAM: A Factored Solution to the Simultaneous Localization and Mapping Problem
In Proceedings of the AAAI National Conference on Artificial Intelligence, 2002
Abstract

Cited by 599 (10 self)
The ability to simultaneously localize a robot and accurately map its surroundings is considered by many to be a key prerequisite of truly autonomous robots. However, few approaches to this problem scale up to handle the very large number of landmarks present in real environments. Kalman filter-based algorithms, for example, require time quadratic in the number of landmarks to incorporate each sensor observation. This paper presents FastSLAM, an algorithm that recursively estimates the full posterior distribution over robot pose and landmark locations, yet scales logarithmically with the number of landmarks in the map. This algorithm is based on a factorization of the posterior into a product of conditional landmark distributions and a distribution over robot paths. The algorithm has been run successfully on as many as 50,000 landmarks, environments far beyond the reach of previous approaches. Experimental results demonstrate the advantages and limitations of the FastSLAM algorithm on both simulated and real-world data.
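The factorization this abstract describes, a particle filter over robot paths in which each particle maintains its own independent low-dimensional Kalman filter per landmark, can be sketched in miniature. The following is an illustrative toy only (a 1-D world, a single landmark, and invented noise parameters), not the authors' implementation:

```python
import random

random.seed(0)

N_PARTICLES = 50
TRUE_LANDMARK = 5.0      # hypothetical 1-D landmark position
MEAS_NOISE_VAR = 0.25    # assumed variance of the range sensor

class Particle:
    """One robot-path hypothesis plus its own per-landmark Gaussian."""
    def __init__(self):
        self.pose = random.gauss(0.0, 0.1)       # noisy pose hypothesis
        self.lm_mean, self.lm_var = 0.0, 100.0   # independent landmark filter

    def update_landmark(self, z):
        # 1-D Kalman measurement update, conditioned on this particle's pose;
        # assumed measurement model: z = landmark - robot pose + noise
        innovation = z - (self.lm_mean - self.pose)
        gain = self.lm_var / (self.lm_var + MEAS_NOISE_VAR)
        self.lm_mean += gain * innovation
        self.lm_var *= (1.0 - gain)

particles = [Particle() for _ in range(N_PARTICLES)]
for _ in range(20):  # 20 range readings from a robot standing at 0.0
    z = TRUE_LANDMARK - 0.0 + random.gauss(0.0, MEAS_NOISE_VAR ** 0.5)
    for p in particles:
        p.update_landmark(z)

est = sum(p.lm_mean for p in particles) / N_PARTICLES
print(round(est, 2))   # close to the true landmark at 5.0
```

Because the landmark filters are conditionally independent given the path, each particle updates each landmark in constant time; organizing the landmark filters in a balanced tree is what yields the logarithmic scaling the abstract claims.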
Statistical design of reverse dye microarrays. Bioinformatics 2003;19:803–10
Abstract

Cited by 51 (10 self)
Motivation: In cDNA microarray experiments all samples are labelled with either Cy3 dye or Cy5 dye. Certain genes exhibit dye bias, a tendency to bind more efficiently to one of the dyes. The common reference design avoids the problem of dye bias by running all arrays ‘forward’, so that the samples being compared are always labelled with the same dye. But comparison of samples labelled with different dyes is sometimes of interest. In these situations, it is necessary to run some arrays ‘reverse’, with the dye labelling reversed, in order to correct for the dye bias. The design of these experiments will impact one’s ability to identify genes that are differentially expressed in different tissues or conditions. We address the design issues of how many specimens are needed, how many forward and reverse labelled arrays to perform, and how to optimally assign Cy3 and Cy5 labels to the specimens.

Results: We consider three types of experiments for which some reverse labelling is needed: paired samples, samples from two predefined groups, and reference design data when comparison with the reference is of interest. We present simple probability models for the data, derive optimal estimators for relative gene expression, and compare the efficiency of the estimators for a range of designs. In each case, we present the optimal design and sample size formulas. We show that reverse labelling of individual arrays is generally not required.

Contact:
Inference and Hierarchical Modeling in the Social Sciences, 1995
Abstract

Cited by 46 (7 self)
In this paper I (1) examine three levels of inferential strength supported by typical social science data-gathering methods, and call for a greater degree of explicitness, when HMs and other models are applied, in identifying which level is appropriate; (2) reconsider the use of HMs in school effectiveness studies and meta-analysis from the perspective of causal inference; and (3) recommend the increased use of Gibbs sampling and other Markov chain Monte Carlo (MCMC) methods in the application of HMs in the social sciences, so that comparisons between MCMC and better-established fitting methods, including full or restricted maximum likelihood estimation based on the EM algorithm, Fisher scoring, or iterative generalized least squares, may be more fully informed by empirical practice.
Parametric and semiparametric estimation of regression models fitted to survey data
Sankhya, 1999
Abstract

Cited by 42 (9 self)
SUMMARY. This paper proposes two new classes of estimators for regression models fitted to survey data. The proposed estimators account for the effect of nonignorable sampling schemes, which are known to bias standard estimators. Both classes derive from relationships between the population distribution and the sample distribution of the sample measurements. The first class consists of parametric estimators, obtained by extracting the sample distribution as a function of the population distribution and the sample selection probabilities and applying maximum likelihood theory to this distribution. The second class consists of semiparametric estimators, obtained by utilizing existing relationships between moments of the two distributions. New tests for sampling ignorability based on these relationships are developed. The proposed estimators and other estimators in common use are applied to real data and further compared in a simulation study. The simulations also make it possible to study the performance of the sampling ignorability tests and of bootstrap variance estimators.
Analysis of Stochastic Dual Dynamic Programming Method
Abstract

Cited by 36 (2 self)
In this paper we discuss statistical properties and rates of convergence of the Stochastic Dual Dynamic Programming (SDDP) method applied to multistage linear stochastic programming problems. We assume that the underlying data process is stagewise independent and consider the framework where a random sample from the original (true) distribution is first generated and the SDDP algorithm is then applied to the constructed Sample Average Approximation (SAA) problem.
A pseudo empirical likelihood approach to the effective use of auxiliary information in complex surveys, 1996
Abstract

Cited by 31 (2 self)
In this paper, we develop a pseudo empirical likelihood approach to incorporating auxiliary information into estimates from complex surveys. In simple random sampling without replacement, the method reduces to the empirical likelihood approach of ...
Adaptively scaling the Metropolis algorithm using expected squared jumped distance. To appear: Statistica Sinica, 2010
Abstract

Cited by 24 (0 self)
A good choice of the proposal distribution is crucial for the rapid convergence of the Metropolis algorithm. In this paper, given a family of parametric Markovian kernels, we develop an adaptive algorithm for selecting the best kernel that maximizes the expected squared jumped distance, an objective function that characterizes the Markov chain under its d-dimensional stationary distribution. The adaptive algorithm uses the information accumulated by a single path and adapts the choice of the parametric kernel in the direction of the local maximum of the objective function using multiple importance sampling techniques. We demonstrate the effectiveness of our method in several examples.
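The objective function named here, expected squared jumped distance (ESJD), is easy to estimate empirically. The sketch below is a simplified stand-in for the paper's method: instead of the adaptive multiple-importance-sampling update the abstract describes, it grid-searches a handful of random-walk proposal scales on a standard normal target and keeps the one with the largest empirical ESJD. All constants and the target are illustrative assumptions.

```python
import math
import random

random.seed(1)

def log_target(x):
    # standard normal target density, up to an additive constant
    return -0.5 * x * x

def esjd(scale, n=20000):
    """Empirical expected squared jumped distance of random-walk
    Metropolis with proposal standard deviation `scale`."""
    x, total = 0.0, 0.0
    for _ in range(n):
        prop = x + random.gauss(0.0, scale)
        if math.log(random.random()) < log_target(prop) - log_target(x):
            total += (prop - x) ** 2   # accepted move: nonzero squared jump
            x = prop
        # rejected moves contribute zero squared jump
    return total / n

scales = [0.1, 0.5, 1.0, 2.4, 5.0, 10.0]
best = max(scales, key=esjd)
print(best)   # classical scaling theory suggests roughly 2.4 in one dimension
```

Maximizing ESJD penalizes both timid proposals (small accepted jumps) and overambitious ones (frequent rejections), which is why it serves as a usable proxy for rapid mixing.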
Accuracy assessment for the U.S. Geological Survey regional land-cover mapping program: New York and New Jersey region, 2000
Abstract

Cited by 20 (3 self)
The U.S. Geological Survey, in cooperation with other government and private organizations, is producing a conterminous U.S. land-cover map using Landsat Thematic Mapper 30-meter data for the Federal regions designated by the U.S. Environmental Protection Agency. Accuracy assessment is to be conducted for each Federal region to estimate overall and class-specific accuracies. In Region 2, consisting of New York and New Jersey, the accuracy assessment was completed for 15 land-cover and land-use classes, using interpreted 1:40,000-scale aerial photographs as reference data. The methodology used for Region 2 features a two-stage, geographically stratified approach, with a general sample of all classes (1,033 sample sites) and a separate sample for rare classes (294 sample sites). A confidence index was recorded for each land-cover interpretation on the 1:40,000-scale aerial photography. The estimated overall accuracy for Region 2 was 63 percent (standard error 1.4 percent) using all sample sites, and 75.2 percent (standard error 1.5 percent) using only reference sites with a high-confidence index. User's and producer's accuracies for the general sample and user's accuracy for the sample of rare classes, as well as variances for the estimated accuracy parameters, were also reported. Narrowly defined land-use classes and heterogeneous conditions of land cover are the major causes of misclassification errors. Recommendations for modifying the accuracy assessment methodology for use in the other nine Federal regions are provided.
The design of a multilevel survey of children, families, and communities: The Los Angeles Family and Neighborhood Survey
 Social Science Research
Abstract

Cited by 18 (2 self)
Papers published in the OPR Working Paper Series reflect the views of individual authors. They may be cited in other publications, ...