Results 1–10 of 472
Large steps in cloth simulation
 SIGGRAPH 98 Conference Proceedings
, 1998
Abstract

Cited by 578 (5 self)
The bottleneck in most cloth simulation systems is that time steps must be small to avoid numerical instability. This paper describes a cloth simulation system that can stably take large time steps. The simulation system couples a new technique for enforcing constraints on individual cloth particles with an implicit integration method. The simulator models cloth as a triangular mesh, with internal cloth forces derived using a simple continuum formulation that supports modeling operations such as local anisotropic stretch or compression; a unified treatment of damping forces is included as well. The implicit integration method generates a large, unbanded sparse linear system at each time step which is solved using a modified conjugate gradient method that simultaneously enforces particles' constraints. The constraints are always maintained exactly, independent of the number of conjugate gradient iterations, which is typically small. The resulting simulation system is significantly faster than previous accounts of cloth simulation systems in the literature. Keywords—Cloth, simulation, constraints, implicit integration, physically-based modeling.
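The implicit step the abstract describes can be sketched on a toy 1-D particle chain. This is an illustrative reconstruction, not the paper's code: forces are linear springs, and a direct solve stands in for the paper's modified conjugate gradient; all sizes and constants are invented for the example.

```python
import numpy as np

def implicit_euler_step(x, v, m, k, h):
    """One backward-Euler step for n particles on a line joined by springs.

    With linear forces f = -K x, the Baraff-Witkin system
    (M - h^2 df/dx) dv = h (f0 + h (df/dx) v0) becomes
    (M + h^2 K) dv = h (-K x0 - h K v0).
    """
    n = len(x)
    # Stiffness matrix of a fixed-ends spring chain (tridiagonal, SPD).
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] = 2.0 * k
        if i > 0:
            K[i, i - 1] = -k
        if i + 1 < n:
            K[i, i + 1] = -k
    M = np.diag(m)
    A = M + h * h * K                 # symmetric positive definite system
    b = h * (-K @ x - h * (K @ v))    # right-hand side
    dv = np.linalg.solve(A, b)        # stand-in for the modified CG solver
    v_new = v + dv
    x_new = x + h * v_new
    return x_new, v_new
```

Because the system matrix is solved implicitly, the step stays stable even for stiffness and step sizes that would make explicit integration diverge.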
Shallow Parsing with Conditional Random Fields
, 2003
Abstract

Cited by 575 (8 self)
Conditional random fields for sequence labeling offer advantages over both generative models like HMMs and classifiers applied at each sequence position. Among sequence labeling tasks in language processing, shallow parsing has received much attention, with the development of standard evaluation datasets and extensive comparison among methods. We show here how to train a conditional random field to achieve performance as good as any reported base noun-phrase chunking method on the CoNLL task, and better than any reported single model. Improved training methods based on modern optimization algorithms were critical in achieving these results. We present extensive comparisons between models and training methods that confirm and strengthen previous results on shallow parsing and training methods for maximum-entropy models.
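The core quantity behind linear-chain CRF training is the log partition function, computed by the forward algorithm. A generic sketch (not the paper's training code; the unary/transition score matrices are illustrative inputs):

```python
import numpy as np

def crf_log_partition(unary, trans):
    """Log Z for a linear-chain CRF via the forward algorithm.

    unary: (T, L) per-position label scores; trans: (L, L) transition scores.
    """
    alpha = unary[0].copy()
    for t in range(1, len(unary)):
        # log-sum-exp over the previous label, for each current label
        scores = alpha[:, None] + trans + unary[t][None, :]
        m = scores.max(axis=0)
        alpha = m + np.log(np.exp(scores - m).sum(axis=0))
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())
```

Gradients of log Z with respect to the scores give the expected feature counts needed by the modern optimizers (e.g. L-BFGS) the abstract credits for its results.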
Fast maximum margin matrix factorization for collaborative prediction
 In Proceedings of the 22nd International Conference on Machine Learning (ICML
, 2005
Abstract

Cited by 241 (8 self)
Maximum Margin Matrix Factorization (MMMF) was recently suggested (Srebro et al., 2005) as a convex, infinite-dimensional alternative to low-rank approximations and standard factor models. MMMF can be formulated as a semidefinite program (SDP) and learned using standard SDP solvers. However, current SDP solvers can only handle MMMF problems on matrices of dimensionality up to a few hundred. Here, we investigate a direct gradient-based optimization method for MMMF and demonstrate it on large collaborative prediction problems. We compare against results obtained by Marlin (2004) and find that MMMF substantially outperforms all nine methods he tested.
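The gradient-based idea can be sketched for the binary case: factor X = UVᵀ and descend on a smooth-hinge loss plus Frobenius regularization (the standard surrogate for the trace norm). This is an illustrative reconstruction, with made-up sizes and step size, not the authors' implementation:

```python
import numpy as np

def mmmf_gradient_step(U, V, Y, mask, C, lr):
    """One gradient step on the factored MMMF objective
       (||U||_F^2 + ||V||_F^2)/2 + C * sum_{observed ij} h(Y_ij * (U V^T)_ij)
    where h is the smooth hinge: 1/2 - z (z<=0), (1-z)^2/2 (0<z<1), 0 (z>=1).
    """
    Z = Y * (U @ V.T)
    # derivative of the smooth hinge with respect to its argument
    dh = np.where(Z <= 0, -1.0, np.where(Z < 1, Z - 1.0, 0.0))
    G = C * mask * Y * dh           # dLoss/dX, nonzero only on observed entries
    gU = U + G @ V                  # gradient wrt U (regularizer + loss)
    gV = V + G.T @ U                # gradient wrt V
    return U - lr * gU, V - lr * gV
```

Because each step only touches observed entries, the cost scales with the number of ratings rather than the full matrix, which is what lets the method go far beyond the few-hundred-row limit of generic SDP solvers.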
Training a support vector machine in the primal
 Neural Computation
, 2007
Abstract

Cited by 154 (5 self)
Most literature on Support Vector Machines (SVMs) concentrates on the dual optimization problem. In this paper, we would like to point out that the primal problem can also be solved efficiently, both for linear and nonlinear SVMs, and that there is no reason for ignoring this possibility. On the contrary, from the primal point of view new families of algorithms for large-scale SVM training can be investigated.
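A minimal sketch of primal training for a linear SVM, assuming a squared hinge loss so the objective is differentiable (the step size, data, and plain gradient descent here are illustrative choices, not the paper's method, which also covers Newton steps and nonlinear kernels):

```python
import numpy as np

def primal_svm_gd(X, y, lam=0.1, lr=1e-4, iters=2000):
    """Gradient descent on the primal L2-SVM objective
       lam/2 ||w||^2 + sum_i max(0, 1 - y_i <w, x_i>)^2.
    X: (n, d) inputs, y: (n,) labels in {-1, +1}.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        margins = 1.0 - y * (X @ w)
        active = margins > 0          # points currently violating the margin
        grad = lam * w - 2.0 * X[active].T @ (y[active] * margins[active])
        w -= lr * grad
    return w
```

Working in the primal means the variable count is the input dimension d rather than the number of training points, which is the source of the efficiency argument for linear SVMs.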
Mapping and Visualizing the Internet
 In Proceedings of the 2000 USENIX Annual Technical Conference
, 2000
Abstract

Cited by 113 (2 self)
We have been collecting and recording routing paths from a test host to each of over 90,000 registered networks on the Internet since August 1998. The resulting database contains interesting routing and reachability information, and is available to the public for research purposes. The daily scans cover approximately a tenth of the networks on the Internet, with a full scan run roughly once a month. We have also been collecting Lucent's intranet data, and applied these tools to understanding its size and connectivity. We also detected the loss of power to routers in Yugoslavia as a result of NATO bombing. A simulated spring-force algorithm lays out the graphs that result from these databases. This algorithm is well known, but has never been applied to such a large problem. The Internet graph, with around 88,000 nodes and 100,000 edges, is much larger than those previously considered tractable by the data visualization community. The resulting Internet layouts are pleasant, though rather cluttered. On smaller networks, like Lucent's intranet, the layouts present the data in a useful way. For the Internet data, we have tried plotting a minimum-distance spanning tree; by throwing away edges, the remaining graph can be made more accessible. Once a layout is chosen, it can be colored in various ways to show network-relevant data, such as IP address, domain information, location, ISPs, location of firewalls, etc. This paper expands and updates the description of the project given in [2].
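The spring-force idea can be illustrated in miniature: edges act as attracting springs and all node pairs repel, and the layout relaxes toward an equilibrium. A toy sketch with invented force constants and step size (real implementations at Internet scale need far more care):

```python
import numpy as np

def spring_layout(edges, n, iters=200, seed=0):
    """Toy spring embedder: linear attraction along edges, 1/r repulsion
    between all node pairs. Constants are illustrative, not tuned."""
    rng = np.random.default_rng(seed)
    pos = rng.standard_normal((n, 2))
    for _ in range(iters):
        disp = np.zeros((n, 2))
        # repulsion between every pair of nodes (self term is zero)
        for i in range(n):
            d = pos[i] - pos
            dist2 = np.maximum((d * d).sum(axis=1), 1e-6)
            disp[i] += (d / dist2[:, None]).sum(axis=0)
        # attraction along edges (linear springs)
        for a, b in edges:
            d = pos[a] - pos[b]
            disp[a] -= 0.5 * d
            disp[b] += 0.5 * d
        pos += 0.05 * disp
    return pos
```

The per-iteration cost of the naive all-pairs repulsion is O(n²), which is exactly why applying this family of algorithms to an 88,000-node graph was notable.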
The design and implementation of a generic sparse bundle adjustment software package based on the Levenberg-Marquardt algorithm
, 2004
Abstract

Cited by 111 (4 self)
The most recent revision of this document will always be found at
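The Levenberg-Marquardt iteration at the heart of bundle adjustment can be sketched in its dense, generic form (the sparse package the title describes exploits the block structure of the bundle-adjustment Jacobian, which this toy version does not):

```python
import numpy as np

def levenberg_marquardt(residual, jac, p0, iters=50, mu=1e-3):
    """Bare-bones LM: solve (J^T J + mu I) dp = -J^T r each iteration,
    shrinking the damping mu on success and growing it on failure."""
    p = np.asarray(p0, dtype=float)
    r = residual(p)
    cost = r @ r
    for _ in range(iters):
        J = jac(p)
        A = J.T @ J + mu * np.eye(len(p))
        dp = np.linalg.solve(A, -J.T @ r)
        r_new = residual(p + dp)
        c_new = r_new @ r_new
        if c_new < cost:                    # accept: behave like Gauss-Newton
            p, r, cost, mu = p + dp, r_new, c_new, mu * 0.5
        else:                               # reject: behave like gradient descent
            mu *= 10.0
    return p
```

For example, fitting a*exp(b*t) to noiseless samples recovers (a, b) to high precision in a few dozen iterations.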
Everything Old Is New Again: A Fresh Look at Historical Approaches
 in Machine Learning. PhD thesis, MIT
, 2002
Abstract

Cited by 106 (7 self)
Everything Old Is New Again: A Fresh Look at Historical
Exactly sparse delayed-state filters for view-based SLAM
 IEEE Transactions on Robotics
, 2006
Abstract

Cited by 102 (21 self)
This paper reports the novel insight that the simultaneous localization and mapping (SLAM) information matrix is exactly sparse in a delayed-state framework. Such a framework is used in view-based representations of the environment that rely upon scan-matching raw sensor data to obtain virtual observations of robot motion with respect to a place the robot has previously visited. The exact sparsity of the delayed-state information matrix is in contrast to other recent feature-based SLAM information algorithms, such as the sparse extended information filter or the thin junction-tree filter, since these methods have to make approximations in order to force the feature-based SLAM information matrix to be sparse. The benefit of the exact sparsity of the delayed-state framework is that it allows one to take advantage of the information-space parameterization without incurring any sparse-approximation error; it can therefore produce results equivalent to the full-covariance solution. The approach is validated experimentally using monocular imagery for two datasets: a test-tank experiment with ground truth, and a remotely operated vehicle survey of the RMS Titanic. Index Terms—Information filters, Kalman filtering, machine vision, mobile robot motion planning, mobile robots, recursive estimation, robot vision systems, simultaneous localization and mapping (SLAM), underwater vehicles.
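The exact-sparsity claim can be illustrated with a tiny example: when the state holds a chain of delayed poses and each scan-match constraint relates only two poses, the information matrix Λ = Σₖ JₖᵀJₖ is exactly block-tridiagonal, with no sparsification needed. The pose count and dimensions below are illustrative:

```python
import numpy as np

# Six 3-DOF delayed poses, sequential relative-pose constraints only.
T, d = 6, 3
Lam = np.zeros((T * d, T * d))
Lam[:d, :d] += np.eye(d)            # prior on the first pose anchors the chain
for k in range(T - 1):
    # residual x_{k+1} - x_k - u_k has Jacobian [-I, I] on poses k, k+1
    J = np.zeros((d, T * d))
    J[:, k * d:(k + 1) * d] = -np.eye(d)
    J[:, (k + 1) * d:(k + 2) * d] = np.eye(d)
    Lam += J.T @ J                  # each constraint fills only a 2x2 block band
```

Every block of Lam more than one pose away from the diagonal is exactly zero, which is the structural fact the paper exploits: the information form stays sparse without the approximations that feature-based information filters require.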
Regularized Least-Squares Classification
Abstract

Cited by 100 (1 self)
We consider the solution of binary classification problems via Tikhonov regularization in a Reproducing Kernel Hilbert Space using the square loss, and denote the resulting algorithm Regularized Least-Squares Classification (RLSC). We sketch
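With the square loss, the Tikhonov problem has a closed-form solution: the expansion coefficients solve a single linear system in the kernel matrix. A sketch with a Gaussian kernel (the lam*n scaling is one common convention; kernel choice and parameters here are illustrative):

```python
import numpy as np

def rlsc_fit(X, y, lam=0.1, gamma=1.0):
    """RLSC: solve (K + lam * n * I) c = y for the expansion coefficients c,
    where K is the Gaussian kernel matrix on the training points."""
    n = len(X)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    c = np.linalg.solve(K + lam * n * np.eye(n), y)
    return c

def rlsc_predict(c, X_train, X_test, gamma=1.0):
    """Classify by the sign of f(x) = sum_i c_i k(x_i, x)."""
    sq = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq) @ c
```

The entire training step is one symmetric linear solve, which is the simplicity argument usually made for RLSC relative to SVM training.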