Results 1–10 of 12
Reconstruction and Representation of 3D Objects with Radial Basis Functions
Computer Graphics (SIGGRAPH ’01 Conf. Proc.), pages 67–76. ACM SIGGRAPH, 2001
Cited by 505 (1 self)
Abstract:
We use polyharmonic Radial Basis Functions (RBFs) to reconstruct smooth, manifold surfaces from point-cloud data and to repair incomplete meshes. An object's surface is defined implicitly as the zero set of an RBF fitted to the given surface data. Fast methods for fitting and evaluating RBFs allow us to model large data sets, consisting of millions of surface points, by a single RBF, previously an impossible task. A greedy algorithm in the fitting process reduces the number of RBF centers required to represent a surface and results in significant compression and further computational advantages. The energy-minimisation characterisation of polyharmonic splines results in a "smoothest" interpolant. This scale-independent characterisation is well-suited to reconstructing surfaces from non-uniformly sampled data. Holes are smoothly filled and surfaces smoothly extrapolated. We use a non-interpolating approximation when the data is noisy. The functional representation is in effect a solid model, which means that gradients and surface normals can be determined analytically. This helps generate uniform meshes, and we show that the RBF representation has advantages for mesh simplification and remeshing applications. Results are presented for real-world range-finder data.
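A minimal sketch of the idea, on hypothetical toy data: fit a biharmonic RBF (phi(r) = r) to on-surface and offset points by solving the dense interpolation system directly. This is only the basic formulation; the paper's fast fitting, fast evaluation and greedy center reduction are not reproduced here.

```python
import numpy as np

# Hypothetical toy "scan": 60 points on the unit sphere (for a sphere the
# outward normal at a point is the point itself).
rng = np.random.default_rng(0)
n = 60
pts = rng.normal(size=(n, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
normals = pts

# Off-surface constraints along the normals pin down the signed function.
d = 0.1
centers = np.vstack([pts, pts + d * normals, pts - d * normals])
values = np.concatenate([np.zeros(n), np.full(n, d), np.full(n, -d)])

# Interpolation system for s(x) = sum_i lam_i * |x - c_i| + p(x), p linear;
# the polynomial side conditions make the symmetric system square.
m = len(centers)
A = np.linalg.norm(centers[:, None] - centers[None, :], axis=2)
P = np.hstack([np.ones((m, 1)), centers])            # 1, x, y, z
K = np.block([[A, P], [P.T, np.zeros((4, 4))]])
coef = np.linalg.solve(K, np.concatenate([values, np.zeros(4)]))
lam, poly = coef[:m], coef[m:]

def s(x):
    """Evaluate the fitted implicit function; its zero set is the surface."""
    return lam @ np.linalg.norm(x - centers, axis=1) + poly[0] + poly[1:] @ x

# s vanishes on the sampled surface points and is negative inside.
print(s(np.zeros(3)) < 0)
```

Gradients (and hence normals) of s are available analytically, which is what makes the representation behave like a solid model.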
Regularization on discrete spaces
Pattern Recognition, 2005
Cited by 41 (1 self)
Abstract:
We consider the classification problem on a finite set of objects. Some of them are labeled, and the task is to predict the labels of the remaining unlabeled ones. Such an estimation problem is generally referred to as transductive inference. It is well-known that many meaningful inductive or supervised methods can be derived from a regularization framework, which minimizes a loss function plus a regularization term. In the same spirit, we propose a general discrete regularization framework defined on finite object sets, which can be thought of as a discrete analogue of classical regularization theory. A family of transductive inference schemes is then systematically derived from the framework, including our earlier algorithm for transductive inference, with which we obtained encouraging results on many practical classification problems. The discrete regularization framework is built on the discrete analysis and geometry that we developed, in which a number of discrete differential operators of various orders are constructed; these can be thought of as discrete analogues of their counterparts in the continuous case.
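One member of this family of transductive schemes, in the spirit of the authors' earlier local-and-global-consistency algorithm, has a simple closed form; the graph and labels below are hypothetical toy data.

```python
import numpy as np

# Two 3-node clusters joined by one weak edge; one labeled node per class.
W = np.array([[0, 1, 1, 0,   0, 0],
              [1, 0, 1, 0,   0, 0],
              [1, 1, 0, 0.1, 0, 0],
              [0, 0, 0.1, 0, 1, 1],
              [0, 0, 0,   1, 0, 1],
              [0, 0, 0,   1, 1, 0]], dtype=float)
Y = np.zeros((6, 2))
Y[0, 0] = 1.0   # node 0 labeled class 0
Y[5, 1] = 1.0   # node 5 labeled class 1

# Regularization on the graph: a fitting term ||F - Y||^2 plus a smoothness
# term built from the normalized Laplacian I - S gives the closed-form
# minimizer F = (I - alpha * S)^(-1) Y.
D = W.sum(axis=1)
S = W / np.sqrt(np.outer(D, D))
alpha = 0.9
F = np.linalg.solve(np.eye(6) - alpha * S, Y)
print(F.argmax(axis=1))   # labels propagate cluster-wise: [0 0 0 1 1 1]
```

The weak bridge edge carries little label mass, so each cluster adopts the label of its single labeled node.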
Transductive Link Spam Detection
2007
Cited by 14 (0 self)
Abstract:
Web spam can significantly deteriorate the quality of search engines. Early web spamming techniques mainly manipulate page content. Since linkage information is widely used in web search, link-based spamming has also developed. So far, many techniques have been proposed to detect link spam. Those approaches are basically variants of link-based web ranking methods. In contrast, we cast the link spam detection problem into a machine learning problem of classification on directed graphs. We develop discrete analysis on directed graphs, and construct a discrete analogue of classical regularization theory via this analysis. A classification algorithm for directed graphs is then derived from the discrete regularization. We have applied the approach to real-world link spam detection problems, and encouraging results have been obtained.
Control theoretic smoothing splines are approximate linear filters
Communications in Information and Systems, 2004
Cited by 5 (2 self)
Abstract:
The problem of constructing and approximating control theoretic smoothing splines is considered in this paper. It is shown that the optimal approximating function can be given as the solution of a forced Hamiltonian system, which can be explicitly solved using the Riccati transform, and an explicit linear filter can be constructed. We show that the bandwidth of the filter can be naturally controlled, and thus for control theoretic smoothing splines the far past and the far future are unimportant. Hence smoothing splines are “local” in nature rather than “global”. We conclude that while spline approximations are not causal, the far future is not important.
Key words: control theoretic smoothing splines, feedback, kernel approximation, forced Hamiltonian system
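The locality claim can be illustrated on a simple discrete stand-in for the paper's formulation: a smoother with a second-difference penalty is a linear filter whose hat-matrix rows (impulse responses) decay rapidly, so far-away samples get negligible weight.

```python
import numpy as np

# The minimizer of |y - f|^2 + lam * |D2 f|^2 is the linear filter f = H y
# with hat matrix H = (I + lam * D2'D2)^(-1).
n = 201
D2 = np.diff(np.eye(n), n=2, axis=0)                 # second differences
lam = 50.0
H = np.linalg.inv(np.eye(n) + lam * D2.T @ D2)

# Impulse response at the middle sample: concentrated near its center, so
# the "far past" and "far future" barely influence the estimate there.
row = H[n // 2]
print(row.argmax() == n // 2, abs(row[0]) < 1e-6 * row[n // 2])
```

Raising lam widens the effective bandwidth of the filter but the response still decays away from the center, which is the sense in which the spline is local rather than global.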
Generalised elastic nets
2003
Cited by 2 (0 self)
Abstract:
The elastic net was introduced as a heuristic algorithm for combinatorial optimisation and has been applied, among other problems, to biological modelling. It has an energy function which trades off a fitness term against a tension term. In the original formulation of the algorithm the tension term was implicitly based on a first-order derivative. In this paper we generalise the elastic net model to an arbitrary quadratic tension term, e.g. derived from a discretised differential operator, and give an efficient learning algorithm. We refer to these as generalised elastic nets (GENs). We give a theoretical analysis of the tension term for 1D nets with periodic boundary conditions, and show that the model is sensitive to the choice of finite difference scheme that represents the discretised derivative. We illustrate some of these issues in the context of cortical map models, by relating the choice of tension term to a cortical interaction function. In particular, we prove that this interaction takes the form of a Mexican hat for the original elastic net, and of progressively more oscillatory Mexican hats for higher-order derivatives. The results apply not only to generalised elastic nets but also to other methods using discrete differential penalties, and are expected to be useful in other areas, such as data analysis, computer graphics and optimisation problems. The elastic net was first proposed as a method to obtain good solutions to the travelling salesman problem (TSP; Durbin and Willshaw, 1987) and was subsequently also found to be a very successful cortical map model.
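How the derivative order shapes the coupling can be seen directly from the rows of D'D for a periodic 1D net. A row of this penalty matrix plays the role of the lateral interaction profile (up to sign and scale; the paper's exact correspondence to a cortical interaction function is more refined than this sketch).

```python
import numpy as np

def circ_diff(n, order):
    """Periodic finite-difference operator of the given order on n nodes."""
    D = np.eye(n)
    for _ in range(order):
        D = np.roll(D, -1, axis=1) - D
    return D

# The quadratic tension term f'(D'D)f couples each node to its neighbours;
# higher derivative orders give progressively more oscillatory profiles.
n = 16
for order in (1, 2):
    D = circ_diff(n, order)
    row = np.roll((D.T @ D)[0], n // 2)   # center the profile for display
    print(order, row[n // 2 - 3: n // 2 + 4].astype(int))
```

Order 1 yields the classic [-1, 2, -1] stencil (a positive center with a negative surround, i.e. a degenerate Mexican hat); order 2 yields [1, -4, 6, -4, 1], with an extra positive ring.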
Interactive Learning Using Manifold Geometry
Cited by 2 (2 self)
Abstract:
We present an interactive learning method that enables a user to iteratively refine a regression model. The user examines the output of the model, visualized as the vertical axis of a 2D scatterplot, and provides corrections by repositioning individual data instances to the correct output level. Each repositioned data instance acts as a control point for altering the learned model, using the geometry underlying the data. We capture the underlying structure of the data as a manifold, on which we compute a set of basis functions as the foundation for learning. Our results show that manifold-based interactive learning improves performance monotonically with each correction, outperforming alternative approaches.
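A sketch of the basis-function idea, assuming graph-Laplacian eigenvectors as the manifold basis (a common choice; the paper's exact basis construction may differ). A few user "control points" are fit by least squares over the smooth basis, so the corrections extend along the manifold. All data here are hypothetical.

```python
import numpy as np

# Data on a circle: a 1-D manifold embedded in 2-D.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 1, 100))
X = np.c_[np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)]

# Weighted graph over the data, then the unnormalized Laplacian; its
# low-frequency eigenvectors serve as smooth basis functions on the manifold.
d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
W = np.exp(-d2 / 0.01)
L = np.diag(W.sum(1)) - W
_, U = np.linalg.eigh(L)
B = U[:, :10]                       # the 10 smoothest basis functions

# User corrections: a few instances dragged to target output levels.
idx = np.array([5, 35, 65, 95])
target = np.sin(2 * np.pi * t[idx])
coef, *_ = np.linalg.lstsq(B[idx], target, rcond=None)
pred = B @ coef                     # refined model output over all data
print(np.abs(pred[idx] - target).max() < 1e-6)
```

Because the basis is smooth on the manifold, each control point reshapes the model in its manifold neighbourhood rather than only at the dragged instance.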
Parsimonious Additive Models
Computational Statistics & Data Analysis, 2007; 51(6): 2851–2870
Abstract:
We present a new method for function estimation and variable selection, specifically designed for additive models fitted by cubic splines. Our method involves regularizing additive models using the l1-norm, which generalizes Tibshirani’s lasso to the nonparametric setting. As in the linear case, it shrinks coefficients, driving some of them exactly to zero. It gives parsimonious models, selects significant variables, and reveals nonlinearities in the effects of predictors. Two strategies for finding parsimonious additive model solutions are proposed. Both algorithms are based on a fixed point algorithm, combined with a singular value decomposition that considerably reduces computation. The empirical behavior of parsimonious additive models is compared to the adaptive backfitting BRUTO algorithm. The results allow us to characterise the domains in which our approach is effective: it performs significantly better than BRUTO when model estimation is challenging. An implementation of this method is illustrated using real data from the Cophar 1 ANRS 102 trial. Parsimonious additive models are applied to predict the indinavir plasma concentration in HIV patients. Results suggest that our method is a promising technique for both research and applications.
Key words: model selection, supervised learning, nonparametric regression, function estimation, splines, smoothing, variable selection, lasso, penalization, interpretable models.
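The exact-zero behaviour comes from the l1 penalty's proximal map, soft-thresholding. A toy linear lasso solved by coordinate descent shows the mechanism (this is the linear special case on synthetic data, not the paper's spline fixed-point algorithm).

```python
import numpy as np

def soft(z, g):
    """Soft-thresholding: the proximal map of the l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

# Synthetic data: only variables 0 and 3 truly enter the model.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
beta_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
y = X @ beta_true + 0.1 * rng.normal(size=100)

# Coordinate descent for (1/2)||y - Xb||^2 + lam * ||b||_1: each update
# soft-thresholds the univariate least-squares solution on the residual.
lam = 40.0
b = np.zeros(5)
for _ in range(200):
    for j in range(5):
        r = y - X @ b + X[:, j] * b[j]          # partial residual
        b[j] = soft(X[:, j] @ r, lam) / (X[:, j] @ X[:, j])
print(np.nonzero(b)[0])   # the inactive coefficients are exactly zero
```

The surviving coefficients are shrunk toward zero by roughly lam/||X_j||^2, the price paid for the exact sparsity that makes the model interpretable.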