Results 1–10 of 26
A Tour of Modern Image Filtering: New insights and methods, both practical and theoretical
 IEEE SIGNAL PROCESSING MAGAZINE [106]
, 2013
"... Recent developments in computational imaging and restoration have heralded the arrival and convergence of several powerful methods for adaptive processing of multidimensional data. Examples include moving least square (from graphics), the bilateral filter (BF) and anisotropic diffusion (from compute ..."
Abstract

Cited by 27 (2 self)
 Add to MetaCart
Recent developments in computational imaging and restoration have heralded the arrival and convergence of several powerful methods for adaptive processing of multidimensional data. Examples include moving least squares (from graphics), the bilateral filter (BF) and anisotropic diffusion (from computer vision), boosting, kernel, and spectral methods (from machine learning), nonlocal means (NLM) and its variants (from signal processing), Bregman iterations (from applied math), kernel regression, and iterative scaling (from statistics). While these approaches found their inspirations in diverse fields of nascence, they are deeply connected. In this article, I present a practical and accessible framework to understand some of the basic underpinnings of these methods, with the intention of leading the reader to a broad understanding of how they interrelate. I also illustrate connections between these techniques and more classical (empirical) Bayesian approaches. The proposed framework is used to arrive at new insights and methods, both practical and theoretical. In particular, several novel optimality properties of algorithms in wide use, such as block-matching and three-dimensional (3D) filtering (BM3D), and methods for their iterative improvement (or the nonexistence thereof) are discussed. A general approach is laid out to enable the performance analysis and subsequent improvement of many existing filtering algorithms. While much of the material discussed is applicable to the wider class of linear degradation models beyond noise (e.g., blur), to keep matters focused, we consider the problem of denoising here. DOI: 10.1109/MSP.2011.2179329. Date of publication: 5 December 2012.
A Tour of Modern Image Filtering
, 2011
Cited by 17 (5 self)
Abstract: Recent developments in computational imaging and restoration have heralded the arrival and convergence of several powerful methods for adaptive processing of multidimensional data. Examples include …
Sinkhorn distances: Lightspeed computation of optimal transport
 In Advances in Neural Information Processing Systems
, 2013
"... Abstract. Optimal transportation distances are a fundamental family of parameterized distances for histograms. Despite their appealing theoretical properties, excellent performance in retrieval tasks and intuitive formulation, their computation involves the resolution of a linear program whose cos ..."
Abstract

Cited by 13 (4 self)
 Add to MetaCart
(Show Context)
Optimal transportation distances are a fundamental family of parameterized distances for histograms. Despite their appealing theoretical properties, excellent performance in retrieval tasks, and intuitive formulation, their computation involves the resolution of a linear program whose cost is prohibitive whenever the histograms' dimension exceeds a few hundred. We propose in this work a new family of optimal transportation distances that look at transportation problems from a maximum-entropy perspective. We smooth the classical optimal transportation problem with an entropic regularization term, and show that the resulting optimum is also a distance which can be computed through Sinkhorn–Knopp's matrix scaling algorithm at a speed that is several orders of magnitude faster than that of transportation solvers. We also report improved performance over classical optimal transportation distances on the MNIST benchmark problem.
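As a rough illustration of the idea in this abstract, the sketch below computes an entropy-regularized transport distance between two histograms via Sinkhorn–Knopp fixed-point iterations. The function name, the regularization strength `lam`, and the iteration count are illustrative choices, not the paper's notation.

```python
import numpy as np

def sinkhorn_distance(r, c, M, lam=10.0, n_iter=200):
    """Entropy-regularized optimal transport between histograms r and c.

    r, c : nonnegative vectors summing to 1
    M    : ground cost matrix (M[i, j] = cost of moving mass from bin i to bin j)
    lam  : regularization strength (larger -> closer to exact OT)
    Returns the regularized transport cost and the transport plan.
    """
    K = np.exp(-lam * M)                 # elementwise Gibbs kernel
    u = np.ones_like(r)
    for _ in range(n_iter):              # Sinkhorn-Knopp fixed-point updates
        u = r / (K @ (c / (K.T @ u)))
    v = c / (K.T @ u)
    P = u[:, None] * K * v[None, :]      # plan = diag(u) K diag(v)
    return float(np.sum(P * M)), P
```

Each iteration costs only two matrix-vector products, which is the source of the claimed speedup over linear-programming transportation solvers; as `lam` grows, the smoothed plan approaches the unregularized optimum.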
Compass: A scalable simulator for an architecture for Cognitive Computing
"... Abstract—Inspired by the function, power, and volume of the organic brain, we are developing TrueNorth, a novel modular, nonvon Neumann, ultralow power, compact architecture. TrueNorth consists of a scalable network of neurosynaptic cores, with each core containing neurons, dendrites, synapses, an ..."
Abstract

Cited by 11 (3 self)
 Add to MetaCart
(Show Context)
Inspired by the function, power, and volume of the organic brain, we are developing TrueNorth, a novel modular, non-von Neumann, ultra-low-power, compact architecture. TrueNorth consists of a scalable network of neurosynaptic cores, with each core containing neurons, dendrites, synapses, and axons. To set sail for TrueNorth, we developed Compass, a multithreaded, massively parallel functional simulator and a parallel compiler that maps a network of long-distance pathways in the macaque monkey brain to TrueNorth. We demonstrate near-perfect weak scaling on a 16-rack IBM Blue Gene/Q (262,144 CPUs, 256 TB memory), achieving an unprecedented scale of 256 million neurosynaptic cores containing 65 billion neurons and 16 trillion synapses running only 388× slower than real time with an average spiking rate of 8.1 Hz. By using emerging PGAS communication primitives, we also demonstrate 2× better real-time performance over MPI primitives on a 4-rack Blue Gene/P (16,384 CPUs, 16 TB memory).
Symmetrizing Smoothing Filters
, 2013
"... We study a general class of nonlinear and shiftvarying smoothing filters that operate based on averaging. This important class of filters includes many wellknown examples such as the bilateral filter, nonlocal means, general adaptive moving average filters, and more. (Many linear filters such as ..."
Abstract

Cited by 9 (6 self)
 Add to MetaCart
(Show Context)
We study a general class of nonlinear and shift-varying smoothing filters that operate based on averaging. This important class of filters includes many well-known examples such as the bilateral filter, nonlocal means, general adaptive moving average filters, and more. (Many linear filters such as linear minimum mean-squared error smoothing filters, Savitzky–Golay filters, smoothing splines, and wavelet smoothers can be considered special cases.) They are frequently used in both signal and image processing as they are elegant, computationally simple, and high performing. The operators that implement such filters, however, are not symmetric in general. The main contribution of this paper is to provide a provably stable method for symmetrizing the smoothing operators. Specifically, we propose a novel approximation of smoothing operators by symmetric doubly stochastic matrices and show that this approximation is stable and accurate, even more so in higher dimensions. We demonstrate that there are several important advantages to this symmetrization, particularly in image processing/filtering applications such as denoising. In particular, (1) doubly stochastic filters generally lead to improved performance over the baseline smoothing procedure; (2) when the filters are applied iteratively, the symmetric ones can be guaranteed to lead to stable algorithms; and (3) symmetric smoothers allow an orthonormal eigendecomposition, which enables us to peer into the complex behavior of such nonlinear and shift-varying filters in a locally adapted basis using principal components. Finally, a doubly stochastic filter has a simple and intuitive interpretation: it implies the very natural property that every pixel in the given input image has the same sum total contribution to the output image.
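A minimal sketch of the symmetrization idea, assuming a symmetric nonnegative affinity kernel K (e.g., Gaussian weights): plain row normalization W = D⁻¹K gives a row-stochastic but asymmetric smoother, while Sinkhorn scaling yields a (near) doubly stochastic one that can then be symmetrized exactly. This is a generic illustration, not the paper's specific algorithm.

```python
import numpy as np

def sinkhorn_symmetrize(K, n_iter=200):
    """Scale a symmetric nonnegative affinity matrix K into an
    (approximately) doubly stochastic smoother, then symmetrize exactly."""
    r = np.ones(K.shape[0])
    c = np.ones(K.shape[0])
    for _ in range(n_iter):              # alternating diagonal scalings
        r = 1.0 / (K @ c)
        c = 1.0 / (K.T @ r)
    W = r[:, None] * K * c[None, :]      # diag(r) K diag(c)
    return 0.5 * (W + W.T)               # enforce exact symmetry

# Gaussian affinities on an irregular 1-D sample grid
x = np.array([0.0, 0.1, 0.15, 0.4, 0.55, 0.9])
K = np.exp(-np.subtract.outer(x, x) ** 2 / 0.05)
W_row = K / K.sum(axis=1, keepdims=True)   # usual row-stochastic filter
W_sym = sinkhorn_symmetrize(K)             # doubly stochastic filter
```

`W_row` is generally asymmetric on irregular data; `W_sym` is symmetric, so it admits an orthonormal eigendecomposition, and both its rows and columns sum (approximately) to one.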
A Fast Algorithm for Matrix Balancing
"... Abstract. As long as a square nonnegative matrix A contains sufficient nonzero elements, then the matrix can be balanced, that is we can find a diagonal scaling of A that is doubly stochastic. A number of algorithms have been proposed to achieve the balancing, the most well known of these being the ..."
Abstract

Cited by 8 (0 self)
 Add to MetaCart
(Show Context)
As long as a square nonnegative matrix A contains sufficient nonzero elements, the matrix can be balanced; that is, we can find a diagonal scaling of A that is doubly stochastic. A number of algorithms have been proposed to achieve the balancing, the most well known of these being the Sinkhorn–Knopp algorithm. In this paper we derive new algorithms based on inner-outer iteration schemes. We show that the Sinkhorn–Knopp algorithm belongs to this family, but other members can converge much more quickly. In particular, we show that while stationary iterative methods offer little or no improvement in many cases, a scheme using a preconditioned conjugate gradient method as the inner iteration can give quadratic convergence at low cost. Key words: matrix balancing, Sinkhorn–Knopp algorithm, doubly stochastic matrix, conjugate gradient iteration. AMS subject classifications: 15A48, 15A51, 65F10, 65H10.
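For reference, the baseline Sinkhorn–Knopp iteration that the paper's inner-outer schemes improve upon can be sketched as alternating row and column normalizations (each sweep is a pair of diagonal scalings). The stopping tolerance and iteration cap below are illustrative.

```python
import numpy as np

def sinkhorn_knopp_balance(A, tol=1e-10, max_iter=10_000):
    """Balance a nonnegative matrix A to doubly stochastic form by
    alternately normalizing rows and columns (Sinkhorn-Knopp).
    Returns the balanced matrix and the number of sweeps used."""
    P = np.array(A, dtype=float)
    for sweep in range(1, max_iter + 1):
        P /= P.sum(axis=1, keepdims=True)     # rows sum to 1
        P /= P.sum(axis=0, keepdims=True)     # columns sum to 1
        if np.abs(P.sum(axis=1) - 1.0).max() < tol:
            return P, sweep
    return P, max_iter
```

Convergence of this baseline is linear, which is why the paper's preconditioned conjugate gradient inner iteration can be much faster; and as the abstract notes, matrices with zeros are balanceable only with sufficient nonzero support.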
Offense-Defense Approach to Ranking Team Sports
"... The rank of an object is its relative importance to the other objects in the set. Often a rank is an integer assigned from the set {1, 2,..., n}. Ideally an assignment of available ranks ({1, 2,..., n}) to n objects is onetoone. However in certain circumstances it is possible that more than one o ..."
Abstract

Cited by 4 (2 self)
 Add to MetaCart
The rank of an object is its relative importance to the other objects in the set. Often a rank is an integer assigned from the set {1, 2, ..., n}. Ideally an assignment of available ranks ({1, 2, ..., n}) to n objects is one-to-one. However, in certain circumstances it is possible that more than one object is assigned the same rank. A ranking model is …
Nonparametric sparsification of complex multiscale networks
 PLoS ONE
"... Many realworld networks tend to be very dense. Particular examples of interest arise in the construction of networks that represent pairwise similarities between objects. In these cases, the networks under consideration are weighted, generally with positive weights between any two nodes. Visualizat ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
(Show Context)
Many real-world networks tend to be very dense. Particular examples of interest arise in the construction of networks that represent pairwise similarities between objects. In these cases, the networks under consideration are weighted, generally with positive weights between any two nodes. Visualization and analysis of such networks, especially when the number of nodes is large, can pose significant challenges, which are often met by reducing the edge set. Any effective "sparsification" must retain and reflect the important structure in the network. A common method is to simply apply a hard threshold, keeping only those edges whose weight exceeds some predetermined value. A more principled approach is to extract the multiscale "backbone" of a network by retaining statistically significant edges through hypothesis testing on a specific null model, or by appropriately transforming the original weight matrix before applying some sort of threshold. Unfortunately, approaches such as these can fail to capture multiscale structure in which there can be small but locally statistically significant similarity between nodes. In this paper, we introduce a new method for backbone extraction that does not rely on any particular null model, but instead uses the empirical distribution of similarity weight to determine and then retain statistically significant edges. We show that our method adapts to the heterogeneity of local edge weight distributions in several paradigmatic real-world networks, and in doing so retains their multiscale structure with relatively insignificant additional computational costs. We anticipate that this simple approach will be of great use in the analysis of massive, …
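The locally adaptive flavor of such backbone extraction can be illustrated with a simple per-node quantile rule: an edge survives if its weight lies in the upper tail of either endpoint's own weight distribution. This is a generic sketch of local thresholding, not the paper's exact statistical test; the function name and the `alpha` parameter are illustrative.

```python
import numpy as np

def local_quantile_backbone(W, alpha=0.25):
    """Keep edge (i, j) if its weight reaches the upper-alpha quantile of
    either endpoint's positive edge weights; W is a symmetric weight matrix."""
    n = W.shape[0]
    keep = np.zeros_like(W, dtype=bool)
    for i in range(n):
        w = W[i][W[i] > 0]                    # node i's own edge weights
        if w.size:
            keep[i] = W[i] >= np.quantile(w, 1.0 - alpha)
    keep |= keep.T                            # union over both endpoints
    return np.where(keep, W, 0.0)

# Two weight scales: a strong pair (0, 1) and a weakly tied pair (2, 3)
W = np.array([[0.0, 5.0, 1.0, 1.0],
              [5.0, 0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0, 0.2],
              [1.0, 1.0, 0.2, 0.0]])
B = local_quantile_backbone(W, alpha=0.25)
```

In this toy matrix the globally weak edges incident to nodes 2 and 3 survive because they are locally strong, whereas a single hard threshold above 1.0 would have removed them; the edge (2, 3) is dropped because it is weak even relative to its own endpoints.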
A Tour of Modern Image Processing
, 2011
"... Recent developments in computational imaging and restoration have heralded the arrival and convergence of several powerful methods for adaptive processing of multidimensional data. Examples include Moving Least Square (from Graphics), the Bilateral Filter and Anisotropic Diffusion (from Machine Visi ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
Recent developments in computational imaging and restoration have heralded the arrival and convergence of several powerful methods for adaptive processing of multidimensional data. Examples include moving least squares (from graphics), the bilateral filter and anisotropic diffusion (from machine vision), boosting and spectral methods (from machine learning), nonlocal means (from signal processing), Bregman iterations (from applied math), and kernel regression and iterative scaling (from statistics). While these approaches found their inspirations in diverse fields of nascence, they are deeply connected. In this paper I present a practical and unified framework to understand some of the basic underpinnings of these methods, with the intention of leading the reader to a broad understanding of how they interrelate. I also illustrate connections between these techniques and Bayesian approaches. The proposed framework is used to arrive at new insights and methods, both practical and theoretical. In particular, several novel optimality properties of algorithms in wide use, such as BM3D, and methods for their iterative improvement (or the nonexistence thereof) are discussed. Several theoretical results are discussed which will enable the performance analysis and subsequent improvement of any existing restoration algorithm. While much of the material discussed is applicable to a wider class of linear degradation models (e.g., noise, blur), in order to keep matters focused, we consider the problem of denoising here.
Asymptotic nearness of stochastic and doubly stochastic matrices (submitted)
"... at Stanford. In 2005 he founded MotionDSP Inc., which has brought stateofart video enhancement technology to consumer and forensic markets. His technical interests are in statistical signal, image and video processing, and computational vision. He is a Fellow of the IEEE. We prove that the set of ..."
Abstract

Cited by 2 (2 self)
 Add to MetaCart
(Show Context)
We prove that the set of n × n positive (row-)stochastic matrices and the corresponding set of doubly stochastic matrices are asymptotically close. More specifically, random matrices within each of these classes are arbitrarily close in sufficiently high dimensions.
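The claim is easy to probe numerically: normalize the rows of a random nonnegative matrix and measure how far its column sums are from 1 (for a doubly stochastic matrix they would be exactly 1); the deviation shrinks as the dimension grows. The helper name and the sample dimensions below are illustrative.

```python
import numpy as np

def colsum_deviation(n, rng):
    """Max |column sum - 1| of a random row-stochastic n x n matrix."""
    A = rng.random((n, n))
    A /= A.sum(axis=1, keepdims=True)   # rows now sum to exactly 1
    return float(np.abs(A.sum(axis=0) - 1.0).max())

rng = np.random.default_rng(0)
d_small = colsum_deviation(10, rng)     # low dimension: noticeable gap
d_large = colsum_deviation(2000, rng)   # high dimension: much smaller gap
```

Typical deviations scale on the order of 1/√n, consistent with the two classes drawing together in high dimensions.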