Results

### Chapter 2 Blind Source Separation Based on Dictionary Learning: A Singularity-Aware Approach

Abstract This chapter surveys recent works in applying sparse signal processing techniques, in particular dictionary learning algorithms, to solve the blind source separation problem. As a proof of concept, the focus is on the scenario where the number of mixtures is not less than that of the sources. Based on the assumption that the sources are sparsely represented by some dictionaries, we present a joint source separation and dictionary learning algorithm (SparseBSS) to separate the noise-corrupted mixed sources with very little extra information. We also discuss the singularity issue in the dictionary learning process, which is one major reason for algorithm failure. Finally, two approaches are presented to address the singularity issue. Blind source separation (BSS) has been investigated during the last two decades; many algorithms have been developed and applied in a wide range of applications including biomedical engineering, medical imaging, speech processing, astronomical ...
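As a rough illustration of the signal model this line of work builds on, the sketch below (plain NumPy) forms noisy mixtures of sources that are each sparse in a shared dictionary. All dimensions and variable names are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_mixtures, n_atoms, n_samples = 3, 4, 32, 256

# Shared dictionary with unit-norm atoms (an assumed setup, not the chapter's).
D = rng.standard_normal((n_samples, n_atoms))
D /= np.linalg.norm(D, axis=0)

# Sparse codes: each source is a combination of only a few atoms.
C = np.zeros((n_atoms, n_sources))
for j in range(n_sources):
    support = rng.choice(n_atoms, size=4, replace=False)
    C[support, j] = rng.standard_normal(4)

S = D @ C                                         # sources, (n_samples x n_sources)
A = rng.standard_normal((n_mixtures, n_sources))  # unknown mixing matrix
X = A @ S.T + 0.01 * rng.standard_normal((n_mixtures, n_samples))  # noisy mixtures

print(X.shape)  # (4, 256): more mixtures than sources, the surveyed scenario
```

A joint method such as SparseBSS would observe only `X` and recover estimates of `A`, `D`, and the sparse codes `C` together.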

### ANALYSIS DICTIONARY LEARNING BASED ON NESTEROV’S GRADIENT WITH APPLICATION TO SAR IMAGE DESPECKLING

We focus on the dictionary learning problem for the analysis model. A simple but effective algorithm based on Nesterov's gradient is proposed. This algorithm assumes that the analysis dictionary contains unit ℓ2-norm atoms and trains the dictionary iteratively with Nesterov's gradient. We show that our proposed algorithm is able to learn the dictionary effectively in experiments on synthetic data. We also present examples demonstrating the promising performance of our algorithm in despeckling synthetic aperture radar (SAR) images. Index Terms — Analysis model, analysis dictionary learning, Nesterov's gradient
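A minimal sketch of the idea under assumed details: Nesterov's accelerated gradient applied to a smooth surrogate cost, with each analysis atom (row) projected back onto the unit ℓ2 sphere after every update. The surrogate f(W) = ½‖WX‖²_F and the toy data (signals confined to a subspace) are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: signals confined to a 3-dim subspace of R^5, so good analysis
# atoms are (near-)orthogonal to every signal and W @ X becomes small/sparse.
B = rng.standard_normal((5, 3))
X = B @ rng.standard_normal((3, 200))

def project_rows(W):
    # Enforce the unit l2-norm constraint on each analysis atom (row).
    return W / np.linalg.norm(W, axis=1, keepdims=True)

W = project_rows(rng.standard_normal((4, 5)))  # 4 analysis atoms
Y, t = W.copy(), 1.0                           # Nesterov look-ahead point, momentum
step = 1.0 / np.linalg.norm(X @ X.T, 2)        # 1 / Lipschitz constant of the gradient

for _ in range(1000):
    grad = (Y @ X) @ X.T                       # gradient of 0.5 * ||Y X||_F^2
    W_new = project_rows(Y - step * grad)      # gradient step + projection
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    Y = W_new + ((t - 1.0) / t_new) * (W_new - W)  # Nesterov extrapolation
    W, t = W_new, t_new

# Each learned atom ends up nearly orthogonal to the signal subspace.
print(np.abs(W @ X).max())
```

The projection after each accelerated step is what keeps the unit-norm atom constraint satisfied throughout training.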

### Over th...

A block-based approach coupled with adaptive dictionary learning is presented for underdetermined blind speech separation. The proposed algorithm, derived as a multi-stage method, is established by reformulating the underdetermined blind source separation problem as a sparse coding problem. First, the mixing matrix is estimated in the transform domain by a clustering algorithm. Then a dictionary is learned by an adaptive learning algorithm, for which three algorithms have been tested, including the simultaneous codeword optimization (SimCO) technique that we have proposed recently. Using the estimated mixing matrix and the learned dictionary, the sources are recovered from the blocked mixtures by a signal recovery approach. The separated source components from all the blocks are concatenated to reconstruct the whole signal. The block-based operation has the advantage of considerably improving the computational efficiency of the source recovery process without degrading its separation performance. Numerical experiments are provided to show the competitive separation performance of the proposed algorithm, as compared with the state-of-the-art approaches. Using mutual coherence and sparsity index, the performance of a variety of dictionaries that are applied in underdetermined speech separation is compared and analysed, such as the dictionaries learned from speech mixtures and ground truth speech sources, as well as those predefined by mathematical transforms such as the discrete cosine transform (DCT) and short time Fourier transform (STFT).
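The block-based recovery stage can be sketched roughly as follows. This is a toy instantaneous-mixing case with a known mixing matrix and maximally sparse sources; the OMP sparse coder and all sizes are illustrative assumptions, not the proposed multi-stage algorithm:

```python
import numpy as np

def omp(Phi, y, k):
    # Greedy orthogonal matching pursuit: select k atoms, refit by least squares.
    residual, support = y.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    c = np.zeros(Phi.shape[1])
    c[support] = coef
    return c

rng = np.random.default_rng(2)
n_mix, n_src, T, block = 2, 3, 120, 30      # fewer mixtures than sources
A = rng.standard_normal((n_mix, n_src))
A /= np.linalg.norm(A, axis=0)              # unit-norm mixing columns

# Sparse sources: at each time instant only one source is active.
S = np.zeros((n_src, T))
active = rng.integers(0, n_src, size=T)
S[active, np.arange(T)] = rng.standard_normal(T)
X = A @ S                                   # underdetermined mixtures

# Block-based recovery: process each block independently, then concatenate.
blocks = []
for start in range(0, T, block):
    Xb = X[:, start:start + block]
    Sb = np.column_stack([omp(A, Xb[:, t], k=1) for t in range(Xb.shape[1])])
    blocks.append(Sb)
S_hat = np.concatenate(blocks, axis=1)
print(np.allclose(S_hat, S))
```

Because each block is recovered independently, the blocks could also be processed in parallel, which is the source of the efficiency gain the abstract describes.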

### A SPLIT-AND-MERGE DICTIONARY LEARNING ALGORITHM FOR SPARSE REPRESENTATION

In big data image/video analytics, we encounter the problem of learning an overcomplete dictionary for sparse representation from a large training dataset, which cannot be processed at once because of storage and computational constraints. To tackle the problem of dictionary learning in such scenarios, we propose an algorithm for parallel dictionary learning. The fundamental idea behind the algorithm is to learn a sparse representation in two phases. In the first phase, the whole training dataset is partitioned into small non-overlapping subsets, and a dictionary is trained independently on each small database. In the second phase, the dictionaries are merged to form a global dictionary. We show that the proposed algorithm is efficient in its usage of memory and computational complexity, and performs on par with the standard learning strategy operating on the entire data at a time. As an application, we consider the problem of image denoising. We present a comparative analysis of our algorithm with the standard learning techniques that use the entire database at a time, in terms of training and denoising performance. We observe that the split-and-merge algorithm results in a remarkable reduction of training time, without significantly affecting the denoising performance.
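A hedged sketch of the two-phase split-and-merge idea, with a deliberately crude per-subset learner (truncated SVD standing in for a real dictionary learning method such as K-SVD) and a simple coherence-based merge; all function names and thresholds here are illustrative, not the paper's:

```python
import numpy as np

def learn_subdictionary(Y, n_atoms):
    # Crude per-subset learner: top left singular vectors of the local data.
    # A stand-in for a real dictionary learning method; illustrative only.
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    return U[:, :n_atoms]

def merge_dictionaries(dicts, coherence_thresh=0.99):
    # Concatenate sub-dictionaries, dropping near-duplicate atoms so the
    # merged global dictionary does not contain redundant directions.
    merged = []
    for D in dicts:
        for atom in D.T:
            atom = atom / np.linalg.norm(atom)
            if all(abs(atom @ m) < coherence_thresh for m in merged):
                merged.append(atom)
    return np.column_stack(merged)

rng = np.random.default_rng(3)
data = rng.standard_normal((16, 1000))     # training set too large to hold "at once"
subsets = np.array_split(data, 4, axis=1)  # phase 1: partition into disjoint subsets
dicts = [learn_subdictionary(Y, n_atoms=8) for Y in subsets]
D = merge_dictionaries(dicts)              # phase 2: merge into a global dictionary
print(D.shape[0])  # 16
```

The memory saving comes from phase 1: each SVD (or real learner) only ever sees one subset, and the subsets could be processed on separate machines before the cheap merge step.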