Results 1–10 of 2,735
Normalized Cuts and Image Segmentation
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2000
"... ..."
(Show Context)
Regularization and variable selection via the Elastic Net
 Journal of the Royal Statistical Society, Series B
, 2005
"... Summary. We propose the elastic net, a new regularization and variable selection method. Real world data and a simulation study show that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation. In addition, the elastic net encourages a grouping effect, where ..."
Abstract

Cited by 967 (11 self)
Summary. We propose the elastic net, a new regularization and variable selection method. Real-world data and a simulation study show that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation. In addition, the elastic net encourages a grouping effect, where strongly correlated predictors tend to be in or out of the model together. The elastic net is particularly useful when the number of predictors (p) is much bigger than the number of observations (n). By contrast, the lasso is not a very satisfactory variable selection method in the p ≫ n case. An algorithm called LARS-EN is proposed for computing elastic net regularization paths efficiently, much as the LARS algorithm does for the lasso.
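The elastic net objective adds a ridge (squared ℓ2) term to the lasso's ℓ1 penalty, which is what produces the grouping effect. As an illustration only (not the paper's LARS-EN path algorithm), scikit-learn's ElasticNet solves the same penalized problem by coordinate descent; the data and parameter values below are made-up stand-ins:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
n, p = 50, 200                        # p >> n, the regime the abstract highlights
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 2.0                        # sparse ground truth
y = X @ beta + 0.1 * rng.standard_normal(n)

# Minimizes (1/2n)||y - Xb||^2 + alpha*(l1_ratio*||b||_1 + (1 - l1_ratio)/2*||b||_2^2)
model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
print(np.flatnonzero(model.coef_))    # indices of the selected predictors
```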
Blind Beamforming for Non-Gaussian Signals
 IEE Proceedings F
, 1993
"... This paper considers an application of blind identification to beamforming. The key point is to use estimates of directional vectors rather than resorting to their hypothesized value. By using estimates of the directional vectors obtained via blind identification i.e. without knowing the arrray mani ..."
Abstract

Cited by 721 (31 self)
This paper considers an application of blind identification to beamforming. The key point is to use estimates of directional vectors rather than resorting to their hypothesized values. By using estimates of the directional vectors obtained via blind identification, i.e., without knowing the array manifold, beamforming is made robust with respect to array deformations, distortion of the wave front, pointing errors, etc., so that neither array calibration nor physical modeling is necessary. Rather surprisingly, 'blind beamformers' may outperform 'informed beamformers' in a plausible range of parameters, even when the array is perfectly known to the informed beamformer. The key assumption blind identification relies on is the statistical independence of the sources, which we exploit using fourth-order cumulants. A computationally efficient technique is presented for the blind estimation of directional vectors, based on joint diagonalization of fourth-order cumulant matrices.
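The full method jointly diagonalizes a set of fourth-order cumulant matrices. As a toy sketch of just the underlying principle (after whitening, independent non-Gaussian sources can be recovered by the rotation that extremizes fourth-order statistics), the two-source example below uses a hypothetical mixing matrix and a simple grid search; it is not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
S = rng.uniform(-1, 1, (2, n))             # independent non-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])     # unknown mixing (stand-in for the array response)
X = A @ S

# Whitening: afterwards the residual mixing is a pure rotation.
d, E = np.linalg.eigh(np.cov(X))
Z = (E / np.sqrt(d)) @ E.T @ X

def contrast(theta):
    # Sum of squared fourth-order cumulants (kurtoses) of the rotated outputs;
    # it peaks when the outputs are statistically independent.
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    Y = R @ Z
    return np.sum((np.mean(Y ** 4, axis=1) - 3.0) ** 2)

theta_hat = max(np.linspace(0.0, np.pi / 2, 500), key=contrast)
R = np.array([[np.cos(theta_hat), -np.sin(theta_hat)],
              [np.sin(theta_hat),  np.cos(theta_hat)]])
Y = R @ Z                                  # recovered sources, up to permutation and sign
```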
A Signal Processing Approach to Fair Surface Design
, 1995
"... In this paper we describe a new tool for interactive freeform fair surface design. By generalizing classical discrete Fourier analysis to twodimensional discrete surface signals  functions defined on polyhedral surfaces of arbitrary topology , we reduce the problem of surface smoothing, or fai ..."
Abstract

Cited by 655 (15 self)
In this paper we describe a new tool for interactive free-form fair surface design. By generalizing classical discrete Fourier analysis to two-dimensional discrete surface signals (functions defined on polyhedral surfaces of arbitrary topology), we reduce the problem of surface smoothing, or fairing, to low-pass filtering. We describe a very simple surface-signal low-pass filter algorithm that applies to surfaces of arbitrary topology. As opposed to other existing optimization-based fairing methods, which are computationally more expensive, this algorithm has linear time and space complexity. With this algorithm, fairing very large surfaces, such as those obtained from volumetric medical data, becomes affordable. By combining this algorithm with surface subdivision methods we obtain a very effective fair surface design technique. We then extend the analysis, and modify the algorithm accordingly, to accommodate different types of constraints. Some constraints can be imposed without any modification of the algorithm, while others require the solution of a small associated linear system of equations. In particular, vertex location constraints, vertex normal constraints, and surface normal discontinuities across curves embedded in the surface can be imposed with this technique.
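The paper's filter is the λ|μ scheme: alternate an umbrella (Laplacian) smoothing step with positive weight λ and one with a slightly larger negative weight μ, which attenuates high frequencies without the shrinkage of plain Laplacian smoothing. A minimal sketch, assuming the mesh is given as vertex positions plus 1-ring adjacency lists, with commonly cited default weights used here only for illustration:

```python
import numpy as np

def taubin_smooth(V, neighbors, lam=0.33, mu=-0.34, iters=50):
    # V:         (N, 3) array of vertex positions.
    # neighbors: list of N index lists (each vertex's 1-ring; assumed non-empty).
    V = V.copy()
    for _ in range(iters):
        for weight in (lam, mu):
            # Umbrella operator: average of neighbors minus the vertex itself.
            delta = np.array([V[nb].mean(axis=0) - v
                              for v, nb in zip(V, neighbors)])
            V += weight * delta   # one low-pass step; cost is linear in mesh size
    return V
```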
V-BLAST: An Architecture for Realizing Very High Data Rates Over the Rich-Scattering Wireless Channel
 ISSSE
, 1998
"... ..."
(Show Context)
Power and centrality: A family of measures.
 American Journal of Sociology
, 1987
"... JSTOR is a notforprofit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about J ..."
Abstract

Cited by 596 (3 self)
Although network centrality is generally assumed to produce power, recent research shows that this is not the case in exchange networks. This paper proposes a generalization of the concept of centrality that accounts for both the usual positive relationship between power and centrality and Cook et al.'s recent exceptional results. I propose a family of centrality measures c(α, β) generated by two parameters, α and β. The parameter β reflects the degree to which an individual's status is a function of the statuses of those to whom he or she is connected. If β is positive, c(α, β) is a conventional centrality measure in which each unit's status is a positive function of the statuses of those with which it is in contact.
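Bonacich's family has the closed form c(α, β) = α(I − βA)⁻¹A·1 for adjacency matrix A, valid when |β| is smaller than the reciprocal of A's largest eigenvalue. A minimal sketch; the graph and parameter values are illustrative:

```python
import numpy as np

def bonacich_centrality(A, alpha=1.0, beta=0.1):
    # c(alpha, beta) = alpha * (I - beta*A)^{-1} A 1
    # beta > 0: conventional centrality, status flows from well-connected neighbors;
    # beta < 0: the exchange-network variant, where ties to weak others confer power.
    n = A.shape[0]
    return alpha * np.linalg.solve(np.eye(n) - beta * A, A @ np.ones(n))

# Example: a 4-node path graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(bonacich_centrality(A, beta=0.2))    # positive beta: conventional scores
print(bonacich_centrality(A, beta=-0.2))   # negative beta: exchange-network scores
```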
Large steps in cloth simulation
 SIGGRAPH 98 Conference Proceedings
, 1998
"... The bottleneck in most cloth simulation systems is that time steps must be small to avoid numerical instability. This paper describes a cloth simulation system that can stably take large time steps. The simulation system couples a new technique for enforcing constraints on individual cloth particle ..."
Abstract

Cited by 577 (5 self)
The bottleneck in most cloth simulation systems is that time steps must be small to avoid numerical instability. This paper describes a cloth simulation system that can stably take large time steps. The simulation system couples a new technique for enforcing constraints on individual cloth particles with an implicit integration method. The simulator models cloth as a triangular mesh, with internal cloth forces derived using a simple continuum formulation that supports modeling operations such as local anisotropic stretch or compression; a unified treatment of damping forces is included as well. The implicit integration method generates a large, unbanded sparse linear system at each time step, which is solved using a modified conjugate gradient method that simultaneously enforces the particles' constraints. The constraints are always maintained exactly, independent of the number of conjugate gradient iterations, which is typically small. The resulting simulation system is significantly faster than previously reported cloth simulation systems in the literature.
Keywords: cloth, simulation, constraints, implicit integration, physically-based modeling.
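The core numerical idea is that backward (implicit) Euler turns each step into a symmetric positive-definite linear solve, which conjugate gradients handles even for stiff springs and large steps. A minimal sketch on a hypothetical 1D chain of linear springs, omitting the paper's triangle-mesh forces and constraint filtering:

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import cg

n, h, k, m = 100, 0.1, 1000.0, 1.0    # particles, time step, stiffness, mass
K = k * diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))  # chain stiffness, pinned ends
M = m * identity(n)

x = np.zeros(n)                        # displacements from rest
v = np.zeros(n)
x[n // 2] = 0.5                        # pluck the middle particle

def implicit_euler_step(x, v):
    # Backward Euler for f(x) = -K x:
    #   (M + h^2 K) v_next = M v - h K x,   then   x_next = x + h v_next.
    # The system matrix is SPD, so conjugate gradients applies.
    v_next, _ = cg(M + h * h * K, M @ v - h * (K @ x))
    return x + h * v_next, v_next

for _ in range(50):                    # stable despite the large step
    x, v = implicit_euler_step(x, v)
```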
Consistency of spectral clustering
, 2004
"... Consistency is a key property of statistical algorithms, when the data is drawn from some underlying probability distribution. Surprisingly, despite decades of work, little is known about consistency of most clustering algorithms. In this paper we investigate consistency of a popular family of spe ..."
Abstract

Cited by 572 (15 self)
Consistency is a key property of statistical algorithms when the data is drawn from some underlying probability distribution. Surprisingly, despite decades of work, little is known about the consistency of most clustering algorithms. In this paper we investigate the consistency of a popular family of spectral clustering algorithms, which cluster the data with the help of eigenvectors of graph Laplacian matrices. We show that one of the two major classes of spectral clustering (normalized clustering) converges under some very general conditions, while the other (unnormalized clustering) is consistent only under strong additional assumptions, which, as we demonstrate, are not always satisfied in real data. We conclude that our analysis provides strong evidence for the superiority of normalized spectral clustering in practical applications. We believe that the methods used in our analysis will provide a basis for future exploration of Laplacian-based methods in a statistical setting.
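One common normalized variant embeds points using the bottom eigenvectors of the symmetric normalized Laplacian and then runs k-means. A minimal dense sketch; the Gaussian-affinity bandwidth sigma is an illustrative choice, not prescribed by the paper:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.cluster.vq import kmeans2

def normalized_spectral_clustering(X, k, sigma=1.0):
    # Gaussian affinities between all pairs of rows of X.
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L = np.eye(len(X)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    # Embed with the eigenvectors of the k smallest eigenvalues, then cluster.
    _, U = eigh(L, subset_by_index=[0, k - 1])
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    _, labels = kmeans2(U, k, minit='++', seed=0)
    return labels
```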
Lambertian Reflectance and Linear Subspaces
, 2000
"... We prove that the set of all reflectance functions (the mapping from surface normals to intensities) produced by Lambertian objects under distant, isotropic lighting lies close to a 9D linear subspace. This implies that, in general, the set of images of a convex Lambertian object obtained under a wi ..."
Abstract

Cited by 526 (20 self)
We prove that the set of all reflectance functions (the mapping from surface normals to intensities) produced by Lambertian objects under distant, isotropic lighting lies close to a 9D linear subspace. This implies that, in general, the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace, explaining prior empirical results. We also provide a simple analytic characterization of this linear space. We obtain these results by representing lighting using spherical harmonics and describing the effects of Lambertian materials as the analog of a convolution. These results allow us to construct algorithms for object recognition based on linear methods as well as algorithms that use convex optimization to enforce non-negative lighting functions. Finally, we show a simple way to enforce non-negative lighting when the images of an object lie near a 4D linear space.
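The analytic characterization means the subspace can be written down directly: the first nine real spherical harmonics evaluated at the surface normals, scaled order by order by the coefficients of the clamped-cosine (Lambertian) kernel. The sketch below builds just the nine basis images; the constants are the standard real-SH normalizations, and the per-order Lambertian scaling and albedo are omitted:

```python
import numpy as np

def sh9_basis(normals):
    # normals: (N, 3) array of unit surface normals (x, y, z).
    # Returns an (N, 9) matrix; the paper shows images of a convex
    # Lambertian object lie close to the span of these columns.
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),     # Y_0,0
        0.488603 * y,                   # Y_1,-1
        0.488603 * z,                   # Y_1,0
        0.488603 * x,                   # Y_1,1
        1.092548 * x * y,               # Y_2,-2
        1.092548 * y * z,               # Y_2,-1
        0.315392 * (3 * z * z - 1.0),   # Y_2,0
        1.092548 * x * z,               # Y_2,1
        0.546274 * (x * x - y * y),     # Y_2,2
    ], axis=1)

# A new image under unknown lighting can then be approximated by
# least squares onto this basis, e.g. np.linalg.lstsq(sh9_basis(N), img).
```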