Results 1-10 of 64,360
A Volumetric Method for Building Complex Models from Range Images, 1996
Cited by 1018 (18 self)
Abstract: A number of techniques have been developed for reconstructing surfaces by integrating groups of aligned range images. A desirable set of properties for such algorithms includes: incremental updating, representation of directional uncertainty, the ability to fill gaps in the reconstruction, and robustness in the presence of outliers. Prior algorithms possess subsets of these properties. In this paper, we present a volumetric method for integrating range images that possesses all of these properties. Our volumetric representation consists of a cumulative weighted signed distance function. Working with one range image at a time, we first scan-convert it to a distance function, then combine this with the data already acquired using a simple additive scheme. To achieve space efficiency, we employ a run-length encoding of the volume. To achieve time efficiency, we resample the range image to align with the voxel grid and traverse the range and voxel scanlines synchronously. We generate the f...
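The cumulative weighted signed distance update this abstract describes can be sketched as a per-voxel running weighted average: each new range image contributes a signed distance and a weight, and the volume folds them in additively. This is an illustrative sketch, not the paper's implementation; the grid, weights, and function names here are hypothetical.

```python
import numpy as np

def integrate(D, W, d_new, w_new):
    """Fold one scan-converted range image (signed distances d_new with
    weights w_new) into the cumulative volume (D, W) by weighted averaging."""
    W_next = W + w_new
    # Weighted average; where total weight is zero, keep D unchanged.
    D_next = np.where(W_next > 0,
                      (W * D + w_new * d_new) / np.maximum(W_next, 1e-12),
                      D)
    return D_next, W_next

# Toy 1-D "volume": two scans observe the surface at slightly different depths.
D = np.zeros(5)
W = np.zeros(5)
D, W = integrate(D, W, np.array([1.0, 0.5, 0.0, -0.5, -1.0]), np.ones(5))
D, W = integrate(D, W, np.array([1.2, 0.7, 0.2, -0.3, -0.8]), np.ones(5))
# The zero crossing of D approximates the fused surface.
```

Because the update is a running average, scans can be integrated incrementally in any order, which is the "incremental updating" property the abstract lists.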
Attention, Similarity, and the Identification-Categorization Relationship, 1986
Cited by 663 (28 self)
Abstract: A unified quantitative approach to modeling subjects' identification and categorization of multidimensional perceptual stimuli is proposed and tested. Two subjects identified and categorized the same set of perceptually confusable stimuli varying on separable dimensions. The identification dat...
Additive Logistic Regression: a Statistical View of Boosting, Annals of Statistics, 1998
Cited by 1719 (25 self)
Abstract: Boosting (Freund & Schapire 1996, Schapire & Singer 1998) is one of the most important recent developments in classification methodology. The performance of many classification algorithms can often be dramatically improved by sequentially applying them to reweighted versions of the input data, and taking a weighted majority vote of the sequence of classifiers thereby produced. We show that this seemingly mysterious phenomenon can be understood in terms of well known statistical principles, namely additive modeling and maximum likelihood. For the two-class problem, boosting can ...
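The reweighting-and-weighted-vote procedure the abstract summarizes is discrete AdaBoost. A minimal sketch with decision stumps, assuming labels in {-1, +1}; the helper names are hypothetical and the stump search is deliberately naive:

```python
import numpy as np

def stump_fit(X, y, w):
    """Exhaustively pick the (feature, threshold, sign) stump with lowest weighted error."""
    best = (0, 0.0, 1, np.inf)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.where(X[:, j] <= t, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, t, s, err)
    return best

def adaboost(X, y, rounds=10):
    n = len(y)
    w = np.full(n, 1.0 / n)          # uniform initial weights
    H = []
    for _ in range(rounds):
        j, t, s, err = stump_fit(X, y, w)
        err = max(err, 1e-12)        # avoid log(0) on a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        pred = s * np.where(X[:, j] <= t, 1, -1)
        w *= np.exp(-alpha * y * pred)   # upweight misclassified examples
        w /= w.sum()
        H.append((alpha, j, t, s))
    return H

def predict(H, X):
    """Weighted majority vote of the stump sequence."""
    agg = sum(a * s * np.where(X[:, j] <= t, 1, -1) for a, j, t, s in H)
    return np.sign(agg)

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
H = adaboost(X, y, rounds=3)
```

The paper's point is that this loop can be read as stagewise additive modeling under an exponential loss, rather than as a mysterious ensemble trick.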
Locally Weighted Learning, Artificial Intelligence Review, 1997
Cited by 594 (53 self)
Abstract: This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, ass...
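Locally weighted linear regression, the survey's focus, can be sketched as fitting a separate weighted least-squares model per query point, with a distance-based kernel supplying the weights. The Gaussian kernel, bandwidth `tau`, and ridge term below are illustrative choices, not prescriptions from the survey:

```python
import numpy as np

def lwlr(Xq, X, y, tau=0.5):
    """Predict at each query point by fitting a weighted linear model there.
    Lazy / memory-based: all fitting work happens at query time."""
    Xb = np.hstack([np.ones((len(X), 1)), X])          # add intercept column
    preds = []
    for xq in Xq:
        # Gaussian weighting function of squared distance to the query.
        w = np.exp(-np.sum((X - xq) ** 2, axis=1) / (2 * tau ** 2))
        Wm = np.diag(w)
        # Weighted least squares with a tiny ridge term for stability.
        beta = np.linalg.solve(Xb.T @ Wm @ Xb + 1e-8 * np.eye(Xb.shape[1]),
                               Xb.T @ Wm @ y)
        preds.append(np.array([1.0, *xq]) @ beta)
    return np.array(preds)

X = np.linspace(0, 1, 20).reshape(-1, 1)
y = 2 * X.ravel() + 1
yq = lwlr(np.array([[0.5]]), X, y, tau=0.3)
```

The smoothing parameter `tau` plays the role of the survey's bandwidth: small values give very local fits, large values approach ordinary global regression.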
Multiresolution Analysis of Arbitrary Meshes, 1995
Cited by 605 (16 self)
Abstract: In computer graphics and geometric modeling, shapes are often represented by triangular meshes. With the advent of laser scanning systems, meshes of extreme complexity are rapidly becoming commonplace. Such meshes are notoriously expensive to store, transmit, render, and are awkward to edit. Multire...
Text Classification from Labeled and Unlabeled Documents using EM, Machine Learning, 1999
Cited by 1033 (19 self)
Abstract: This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large qua... [...] and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However, these assumptions are often violated in practice.
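The EM loop the abstract describes (train on labeled documents, probabilistically label the unlabeled ones, retrain on everything, iterate) can be sketched with a multinomial naive Bayes model on word-count vectors. The data and function names are hypothetical; real text would need tokenization first:

```python
import numpy as np

def nb_fit(X, R):
    """Multinomial naive Bayes from soft class responsibilities R (docs x classes)."""
    prior = R.sum(axis=0) / R.sum()
    counts = R.T @ X + 1.0                         # Laplace smoothing
    cond = counts / counts.sum(axis=1, keepdims=True)
    return prior, cond

def nb_predict_proba(X, prior, cond):
    logp = np.log(prior) + X @ np.log(cond).T
    logp -= logp.max(axis=1, keepdims=True)        # stabilize before exp
    p = np.exp(logp)
    return p / p.sum(axis=1, keepdims=True)

def em_text(Xl, yl, Xu, n_classes=2, iters=10):
    Rl = np.eye(n_classes)[yl]                     # hard labels for labeled docs
    prior, cond = nb_fit(Xl, Rl)                   # initial classifier: labeled only
    for _ in range(iters):
        Ru = nb_predict_proba(Xu, prior, cond)     # E-step: probabilistic labels
        prior, cond = nb_fit(np.vstack([Xl, Xu]),  # M-step: retrain on all docs
                             np.vstack([Rl, Ru]))
    return prior, cond

# Toy corpus: 2-word vocabulary, one labeled document per class, two unlabeled.
Xl = np.array([[5.0, 0.0], [0.0, 5.0]])
yl = np.array([0, 1])
Xu = np.array([[4.0, 1.0], [1.0, 4.0]])
prior, cond = em_text(Xl, yl, Xu)
pred = nb_predict_proba(Xu, prior, cond).argmax(axis=1)
```

Using soft responsibilities in the M-step (rather than hard argmax labels) is what makes this expectation-maximization rather than simple self-training.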
Trade Liberalization, Exit, and Productivity Improvements: Evidence from Chilean Plants, Review of Economic Studies, 2002
Cited by 530 (14 self)
Abstract: This paper empirically investigates the effects of liberalized trade on plant productivity in the case of Chile. Chile presents an interesting setting to study this relationship since it underwent a massive trade liberalization that significantly exposed its plants to competition from abroad during the late 1970s and early 1980s. Methodologically, I approach this question in two steps. In the first step, I estimate a production function to obtain a measure of plant productivity. I estimate the production function semiparametrically to correct for the presence of selection and simultaneity biases in the estimates of the input coefficients required to construct a productivity measure. I explicitly incorporate plant exit in the estimation to correct for the selection problem induced by liquidated plants. These methodological aspects are important in obtaining a reliable plant-level productivity measure based on consistent estimates of the input coefficients. In the second step, I identify the impact of trade on plants' productivity in a regression framework allowing variation in productivity over time and across traded- and nontraded-goods sectors. Using plant-level panel data on Chilean manufacturers, I find evidence of within-plant productivity improvements that can be attributed to liberalized trade for the plants in the import-competing sector. In many cases, aggregate productivity improvements stem from the reshuffling of resources and output from less to more efficient producers.
Preference Parameters and Behavioral Heterogeneity: An Experimental Approach in the Health and Retirement Study, 1997
Cited by 524 (12 self)
Abstract: This paper reports measures of preference parameters relating to risk tolerance, time preference, and intertemporal substitution. These measures are based on survey responses to hypothetical situations constructed using an economic theorist's concept of the underlying parameters. The individual measures of preference parameters display heterogeneity. Estimated risk tolerance and the elasticity of intertemporal substitution are essentially uncorrelated across individuals. Measured risk tolerance is positively related to risky behaviors, including smoking, drinking, failing to have insurance, and holding stocks rather than Treasury bills. These relationships are both statistically and quantitatively significant, although measured risk tolerance explains only a small fraction of the variation of the studied behaviors.
Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming, Journal of the ACM, 1995
Cited by 1231 (13 self)
Abstract: We present randomized approximation algorithms for the maximum cut (MAX CUT) and maximum 2-satisfiability (MAX 2SAT) problems that always deliver solutions of expected value at least .87856 times the optimal value. These algorithms use a simple and elegant technique that randomly rounds the solution to a nonlinear programming relaxation. This relaxation can be interpreted both as a semidefinite program and as an eigenvalue minimization problem. The best previously known approximation algorithms for these problems had performance guarantees of ...
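The random rounding step the abstract refers to is hyperplane rounding: the SDP relaxation assigns each vertex a unit vector, and a random hyperplane through the origin splits the vertices into the two sides of the cut (the .87856 guarantee is the expected cut value relative to the optimum). Solving the SDP itself needs a solver, so this sketch assumes the unit vectors are already given; the toy vectors below are hypothetical:

```python
import numpy as np

def random_hyperplane_cut(V, edges, rng=None):
    """Round unit vectors V (one row per vertex, e.g. from an SDP relaxation)
    to a cut: draw a random normal r and split vertices by the sign of V @ r."""
    rng = np.random.default_rng(rng)
    r = rng.normal(size=V.shape[1])        # random hyperplane normal
    side = np.sign(V @ r)                  # which side each vertex falls on
    cut = sum(1 for (i, j) in edges if side[i] != side[j])
    return side, cut

# Toy relaxation output: vertices 0 and 1 share a vector, vertex 2 opposes them,
# so any hyperplane separates {0, 1} from {2}.
V = np.array([[1.0, 0.0], [1.0, 0.0], [-1.0, 0.0]])
edges = [(0, 1), (0, 2), (1, 2)]
side, cut = random_hyperplane_cut(V, edges, rng=0)
```

The probability that an edge is cut is proportional to the angle between its endpoints' vectors, which is where the .87856 bound comes from in the analysis.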