Results 1 – 10 of 72,137
Image-based visual hulls
In Proceedings of ACM SIGGRAPH 2000, 2000
Cited by 342 (18 self)
Abstract: In this paper, we describe an efficient image-based approach to computing and shading visual hulls from silhouette image data. Our algorithm takes advantage of epipolar geometry and incremental computation to achieve a constant rendering cost per rendered pixel. It does not suffer from the computati ...
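For contrast with the image-based method the abstract describes, the conventional volumetric baseline it is measured against is easy to sketch: carve a voxel grid against silhouettes so that only voxels projecting inside every silhouette survive. The unit sphere, the 32^3 grid, and the three axis-aligned orthographic views below are illustrative assumptions, not anything from the paper.

    import numpy as np

    # Toy voxel carving of a visual hull from three orthographic silhouettes.
    # This is the volumetric baseline the abstract contrasts with, not the
    # paper's image-based, epipolar-geometry algorithm.
    res = 32
    ax = np.linspace(-1, 1, res)
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
    inside_object = X**2 + Y**2 + Z**2 <= 0.5**2          # "true" object: a sphere

    # silhouettes = occupancy projected along each axis (any filled voxel on the ray)
    silhouettes = [inside_object.any(axis=a) for a in range(3)]

    # carve: a voxel survives only if it projects inside every silhouette
    hull = np.ones_like(inside_object)
    hull &= silhouettes[0][None, :, :]     # view down the x axis
    hull &= silhouettes[1][:, None, :]     # view down the y axis
    hull &= silhouettes[2][:, :, None]     # view down the z axis

    print("object voxels:", inside_object.sum(), "visual-hull voxels:", hull.sum())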
SMOTE: Synthetic Minority Over-sampling Technique
Journal of Artificial Intelligence Research, 2002
Cited by 614 (28 self)
Abstract: An approach to the construction of classifiers from imbalanced datasets is described. A dataset is imbalanced if the classification categories are not approximately equally represented. Often real-world data sets are predominately composed of "normal" examples with only a small percentage ... sampling the minority class involves creating synthetic minority class examples. Experiments are performed using C4.5, Ripper and a Naive Bayes classifier. The method is evaluated using the area under the Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy.
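The over-sampling step the abstract refers to can be sketched roughly as interpolation between a minority example and one of its k nearest minority neighbors. The function name, array shapes, and the k parameter below are illustrative assumptions, not the paper's exact procedure.

    import numpy as np

    def smote_like_oversample(X_min, n_new, k=5, rng=None):
        """Generate n_new synthetic minority examples by interpolating between a
        chosen minority point and one of its k nearest minority neighbors.
        A rough sketch of the SMOTE idea, not the paper's exact algorithm."""
        rng = np.random.default_rng(rng)
        X_min = np.asarray(X_min, dtype=float)
        # pairwise distances within the minority class only
        d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        neighbors = np.argsort(d, axis=1)[:, :k]       # k nearest minority neighbors
        synthetic = np.empty((n_new, X_min.shape[1]))
        for i in range(n_new):
            a = rng.integers(len(X_min))               # pick a minority example
            b = neighbors[a, rng.integers(k)]          # and one of its k neighbors
            gap = rng.random()                         # interpolation factor in [0, 1)
            synthetic[i] = X_min[a] + gap * (X_min[b] - X_min[a])
        return synthetic

    # usage: 20 synthetic points from a toy 2-D minority class
    X_minority = np.array([[1.0, 1.0], [1.2, 0.9], [0.8, 1.1],
                           [1.1, 1.3], [0.9, 0.8], [1.3, 1.2]])
    X_syn = smote_like_oversample(X_minority, n_new=20, k=3)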
The strength of weak learnability
Machine Learning, 1990
Cited by 861 (24 self)
Abstract: This paper addresses the problem of improving the accuracy of an hypothesis output by a learning algorithm in the distribution-free (PAC) learning model. A concept class is learnable (or strongly learnable) if, given access to a source of examples of the unknown concept, the learner with h ... well. In addition, the construction has some interesting theoretical consequences, including a set of general upper bounds on the complexity of any strong learning algorithm as a function of the allowed error ε.
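The central idea, turning a weak learner into a strong one, is easiest to see in a later descendant of this construction: an AdaBoost-style loop over decision stumps. The sketch below is that later variant, not Schapire's 1990 construction, and the stump learner and round count are arbitrary choices.

    import numpy as np

    def adaboost_stumps(X, y, n_rounds=20):
        """Toy boosting loop (AdaBoost-style). y must be in {-1, +1}; the weak
        learner is a one-feature threshold stump chosen by weighted error."""
        n, d = X.shape
        w = np.full(n, 1.0 / n)                     # example weights
        stumps = []                                 # (feature, threshold, sign, alpha)
        for _ in range(n_rounds):
            best = None
            for j in range(d):
                for t in np.unique(X[:, j]):
                    for s in (+1, -1):
                        pred = s * np.where(X[:, j] <= t, 1, -1)
                        err = w[pred != y].sum()
                        if best is None or err < best[0]:
                            best = (err, j, t, s)
            err, j, t, s = best
            err = max(err, 1e-12)
            if err >= 0.5:                          # no weak edge left, stop
                break
            alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak hypothesis
            pred = s * np.where(X[:, j] <= t, 1, -1)
            w *= np.exp(-alpha * y * pred)          # up-weight the misclassified examples
            w /= w.sum()
            stumps.append((j, t, s, alpha))
        return stumps

    def boosted_predict(stumps, X):
        votes = sum(a * s * np.where(X[:, j] <= t, 1, -1) for j, t, s, a in stumps)
        return np.sign(votes)

    # usage: noisy 1-D threshold concept
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (200, 1))
    y = np.where(X[:, 0] > 0.2, 1, -1)
    model = adaboost_stumps(X, y, n_rounds=10)
    print("train accuracy:", (boosted_predict(model, X) == y).mean())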
Systematic design of program analysis frameworks
In 6th POPL, 1979
Cited by 771 (52 self)
Abstract: Semantic analysis of programs is essential in optimizing compilers and program verification systems. It encompasses data flow analysis, data type determination, generation of approximate invariant ...
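A minimal flavour of the approximate semantic analysis the abstract mentions is a sign analysis run to a fixpoint over a one-variable loop. The lattice, transfer function, and toy program below are illustrative assumptions, not the framework developed in the paper.

    # Sign domain and fixpoint iteration for the loop "x = 1; while ...: x = x + 1".
    BOT, NEG, ZERO, POS, TOP = "bot", "neg", "zero", "pos", "top"

    def join(a, b):
        """Least upper bound in the flat sign lattice."""
        if a == BOT: return b
        if b == BOT: return a
        return a if a == b else TOP

    def abstract_add(a, b):
        """Abstract transfer function for '+' on signs (sound but imprecise)."""
        if BOT in (a, b): return BOT
        if a == b and a in (NEG, POS): return a
        if ZERO in (a, b): return a if b == ZERO else b
        return TOP

    # fixpoint for the abstract value of x at the loop head
    x_at_loop_head = BOT
    incoming = POS                      # abstract value of the literal 1
    while True:
        new = join(x_at_loop_head, incoming)
        if new == x_at_loop_head:       # reached a fixpoint
            break
        x_at_loop_head = new
        incoming = abstract_add(x_at_loop_head, POS)   # effect of x = x + 1

    print(x_at_loop_head)               # -> "pos": x stays positive in the loop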
Mean shift, mode seeking, and clustering
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1995
Cited by 620 (0 self)
Abstract: Mean shift, a simple iterative procedure that shifts each data point to the average of data points in its neighborhood, is generalized and analyzed in this paper. This generalization makes some k-means-like clustering algorithms its special cases. It is shown that mean shift is a mode-seeking process on a surface constructed with a "shadow" kernel. For Gaussian kernels, mean shift is a gradient mapping. Convergence is studied for mean shift iterations. Cluster analysis is treated as a deterministic problem of finding a fixed point of mean shift that characterizes the data. Applications in clustering and Hough transform are demonstrated. Mean shift is also considered as an evolutionary strategy that performs multistart global optimization.
Index Terms: Mean shift, gradient descent, global optimization, Hough transform, cluster analysis, k-means clustering.
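The procedure the abstract describes, moving each point to the kernel-weighted average of the data in its neighborhood, can be sketched directly. The Gaussian kernel, bandwidth, and stopping rule below are illustrative choices, not the paper's generalized formulation.

    import numpy as np

    def mean_shift(X, bandwidth=1.0, n_iter=50, tol=1e-5):
        """Plain mean-shift iteration with a Gaussian kernel: repeatedly move
        each point to the weighted average of the data in its neighborhood."""
        X = np.asarray(X, dtype=float)
        modes = X.copy()
        for _ in range(n_iter):
            # squared distances from every current point to every data point
            d2 = ((modes[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            w = np.exp(-d2 / (2.0 * bandwidth ** 2))       # Gaussian kernel weights
            new = (w @ X) / w.sum(axis=1, keepdims=True)   # weighted neighborhood means
            if np.abs(new - modes).max() < tol:            # points stopped moving
                modes = new
                break
            modes = new
        return modes                                       # each row has drifted toward a mode

    # usage: two blobs collapse onto (roughly) two modes
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
    print(np.round(mean_shift(X, bandwidth=0.5), 2))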
Primitives for the manipulation of general subdivisions and the computation of Voronoi diagrams
ACM Transactions on Graphics, 1985
Cited by 543 (11 self)
Abstract: The following problem is discussed: Given n points in the plane (the sites) and an arbitrary query point q, find the site that is closest to q. This problem can be solved by constructing the Voronoi diagram of the given sites and then locating the query point in one of its regions. Two algorithms are given, one that constructs the Voronoi diagram in O(n log n) time, and another that inserts a new site in O(n) time. Both are based on the use of the Voronoi dual, or Delaunay triangulation, and are simple enough to be of practical value. The simplicity of both algorithms can be attributed to the separation of the geometrical and topological aspects of the problem and to the use of two simple but powerful primitives, a geometric predicate and an operator for manipulating the topology of the diagram. The topology is represented by a new data structure for generalized diagrams, that is, embeddings of graphs in two-dimensional manifolds. This structure represents simultaneously an embedding, its dual, and its mirror image. Furthermore, just two operators are sufficient for building and modifying arbitrary diagrams.
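The nearest-site query posed in the abstract can be illustrated with off-the-shelf structures, assuming SciPy is available: a Delaunay triangulation (the Voronoi dual) plus a k-d tree for the point query. This is not the paper's quad-edge data structure or incremental algorithm, just the same problem solved with stock tools.

    import numpy as np
    from scipy.spatial import Delaunay, cKDTree   # assumes SciPy is installed

    rng = np.random.default_rng(1)
    sites = rng.random((100, 2))                   # n sites in the unit square

    tri = Delaunay(sites)                          # Delaunay triangulation (Voronoi dual)
    tree = cKDTree(sites)                          # spatial index for nearest-site queries
    print(len(tri.simplices), "Delaunay triangles")

    q = np.array([0.5, 0.5])                       # arbitrary query point
    dist, idx = tree.query(q)                      # site closest to q
    print("nearest site:", sites[idx], "at distance", dist)

    # brute-force check of the same query
    assert idx == np.argmin(np.linalg.norm(sites - q, axis=1))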
Quantile Regression
Journal of Economic Perspectives, Volume 15, Number 4, Fall 2001, pages 143–156
Cited by 937 (10 self)
Abstract: We say that a student scores at the τth quantile of a standardized exam if he performs better than the proportion τ of the reference group of students and worse than the proportion (1 − τ). Thus, half of students perform better than the median student and half perform worse. Similarly, the quartiles divide the population into four segments with equal proportions of the reference population in each segment. The quintiles divide the population into five parts; the deciles into ten parts. The quantiles, or percentiles, or occasionally fractiles, refer to the general case. Quantile regression as introduced by Koenker and Bassett (1978) seeks to extend these ideas to the estimation of conditional quantile functions: models in which quantiles of the conditional distribution of the response variable are expressed as functions of observed covariates. In Figure 1, we illustrate one approach to this task based on Tukey's boxplot (as in McGill, Tukey and Larsen, 1978). Annual compensation for the chief executive officer (CEO) is plotted as a function of the firm's market value of equity. A sample of 1,660 firms was split into ten groups of equal size according to their market capitalization. For each group of 166 firms, we compute the three quartiles of CEO compensation: salary, bonus and other compensation, including stock options (as valued by the Black-Scholes formula at the time of the grant). For each group, the bow-tie-like box represents the middle half of the salary distribution lying between the first and third quartiles. The horizontal line near the middle of each box represents the median compensation for each group of CEOs, and the ...
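Conditional quantile estimation as described here amounts to minimizing the asymmetric check (pinball) loss instead of squared error. The sketch below fits a τth-quantile line by direct minimization of that loss with a generic optimizer; Koenker and Bassett pose the problem as a linear program, so the optimizer, toy data, and function names here are illustrative shortcuts, not their method.

    import numpy as np
    from scipy.optimize import minimize   # assumes SciPy is installed

    def pinball_loss(beta, X, y, tau):
        """Check (pinball) loss whose minimizer is the tau-th conditional quantile."""
        r = y - X @ beta
        return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))

    def fit_quantile(X, y, tau):
        """Minimal quantile-regression fit by direct minimization of the check loss."""
        X1 = np.column_stack([np.ones(len(X)), X])          # add intercept column
        beta0 = np.zeros(X1.shape[1])
        res = minimize(pinball_loss, beta0, args=(X1, y, tau), method="Nelder-Mead")
        return res.x                                         # [intercept, slopes...]

    # usage: median (tau = 0.50) and upper-quartile (tau = 0.75) lines for toy data
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, 200)
    y = 2.0 * x + rng.standard_normal(200) * (1 + 0.3 * x)   # noise grows with x
    print("tau=0.50:", fit_quantile(x[:, None], y, 0.50))
    print("tau=0.75:", fit_quantile(x[:, None], y, 0.75))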
Graphical models, exponential families, and variational inference
2008
Cited by 800 (26 self)
Abstract: The formalism of probabilistic graphical models provides a unifying framework for capturing complex dependencies among random variables, and building large-scale multivariate statistical models. Graphical models have become a focus of research in many statistical, computational and mathematical fields, including bioinformatics, communication theory, statistical physics, combinatorial optimization, signal and image processing, information retrieval and statistical machine learning. Many problems that arise in specific instances, including the key problems of computing marginals and modes of probability distributions, are best studied in the general setting. Working with exponential family representations, and exploiting the conjugate duality between the cumulant function and the entropy for exponential families, we develop general variational representations of the problems of computing likelihoods, marginal probabilities and most probable configurations. We describe how a wide variety of algorithms (among them sum-product, cluster variational methods, expectation-propagation, mean field methods, max-product and linear programming relaxation, as well as conic programming relaxations) can all be understood in terms of exact or approximate forms of these variational representations. The variational approach provides a complementary alternative to Markov chain Monte Carlo as a general source of approximation methods for inference in large-scale statistical models.
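One of the key problems the abstract names, computing marginals, is exact and cheap on a chain via the sum-product recursions it mentions. The sketch below runs forward and backward messages on a small chain MRF; the potential arrays, their shapes, and the example numbers are assumptions for illustration, not notation from the paper.

    import numpy as np

    def chain_marginals(unaries, pairwise):
        """Sum-product (belief propagation) on a chain MRF: exact single-node
        marginals from forward and backward messages. unaries is a (T, K) array
        of node potentials; pairwise is a (K, K) array shared by all edges."""
        T, K = unaries.shape
        fwd = np.ones((T, K))
        bwd = np.ones((T, K))
        for t in range(1, T):                      # forward messages, left to right
            m = (fwd[t - 1] * unaries[t - 1]) @ pairwise
            fwd[t] = m / m.sum()
        for t in range(T - 2, -1, -1):             # backward messages, right to left
            m = pairwise @ (bwd[t + 1] * unaries[t + 1])
            bwd[t] = m / m.sum()
        beliefs = fwd * unaries * bwd              # combine incoming messages
        return beliefs / beliefs.sum(axis=1, keepdims=True)

    # usage: 4-node chain over K = 3 states with a "stay in the same state" coupling
    unaries = np.array([[0.7, 0.2, 0.1],
                        [0.3, 0.4, 0.3],
                        [0.2, 0.2, 0.6],
                        [0.1, 0.8, 0.1]])
    pairwise = np.full((3, 3), 0.1) + 0.8 * np.eye(3)
    print(np.round(chain_marginals(unaries, pairwise), 3))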