
## Video Compression by Neural Networks

Citations: 2 (0 self)

### Citations

53 | Neural Network Approaches to Image Compression.
- Dony, Haykin
- 1995
Citation Context: ...ge overhead, giving a faster and computationally more convenient solution to the compression problem. Moreover, neural networks are able to adapt to long-term variations in the frame image statistics [3]. In the following, linear and nonlinear PCA are described. Linear PCA: Hebbian learning. Linear PCA is an efficient solution to eigendecomposition computation. In [2] a mechanism inspired by neurobiol...

50 | Image compression by backpropagation: An example of extensional programming.
- Cottrell, Munro, et al.
- 1988
Citation Context: ...the previous m-1. More details can be found in [74]. Nonlinear PCA: Multilayer Perceptron. In 1988, Cottrell, Munro and Zipser applied a two-layer perceptron to the PCA problem [5]. The net was trained with so-called autoassociative backpropagation. This work opened the way to a large number of future developments. Figure 14 shows the proposed architecture. In a first formu...
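The autoassociative training described in this context can be sketched as a two-layer perceptron that learns to reproduce its input through a narrow hidden layer, whose activations then serve as the compressed code. The layer sizes, learning rate, and the near-uniform toy blocks below are illustrative assumptions, not taken from [5]:

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 64, 16                     # 8x8 pixel block -> 16 hidden units (4:1)
W1 = rng.normal(0, 0.1, (M, N))   # encoder weights
W2 = rng.normal(0, 0.1, (N, M))   # decoder weights
lr = 0.1

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def reconstruct(X):
    return sigmoid(W2 @ sigmoid(W1 @ X.T)).T

# Toy training set: near-uniform blocks (one grey level plus a little noise),
# so a 16-unit bottleneck can represent them well.
levels = rng.random((500, 1))
X = np.clip(levels + 0.05 * rng.normal(size=(500, N)), 0.0, 1.0)

mse_before = np.mean((reconstruct(X) - X) ** 2)

for epoch in range(50):
    for x in X:
        h = sigmoid(W1 @ x)       # compressed code (what would be transmitted)
        y = sigmoid(W2 @ h)       # reconstructed block
        e = y - x                 # autoassociative: the target is the input
        d2 = e * y * (1 - y)      # standard backprop through the sigmoids
        d1 = (W2.T @ d2) * h * (1 - h)
        W2 -= lr * np.outer(d2, h)
        W1 -= lr * np.outer(d1, x)

mse_after = np.mean((reconstruct(X) - X) ** 2)
```

Because the target equals the input, the hidden layer is forced to find a compact code, which is what connects this architecture to PCA.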

37 | Image compression with neural networks - A survey
- Jiang
- 1999
Citation Context: ...ctors corresponding to the M largest eigenvalues, with M < N:

ŷ = Ŵx = Σᵢ₌₁ᴹ wᵢ xᵢ   (5)

The representation error is bounded by the sum of the squared eigenvalues corresponding to the discarded eigenvectors [1]. It can be shown that the output vector coefficients are uncorrelated and therefore the redundancy due to correlation between neighbouring pixels is removed. Unfortunately, application of the KLT to vide...
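Equation (5) is the standard truncated KLT/PCA projection; a minimal numpy sketch (the rank-6 correlation model and dimensions are illustrative) showing that the kept coefficients are decorrelated and the average reconstruction error equals the energy in the discarded eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated data: rank-6 mixing produces strong correlations between
# components, mimicking correlation between neighbouring pixels.
N, M = 16, 4                            # keep M = 4 components, M < N
A = rng.normal(size=(N, 6))
X = (A @ rng.normal(size=(6, 1000))).T
X -= X.mean(axis=0)                     # the KLT assumes zero-mean data

C = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)    # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]
W = eigvecs[:, order[:M]]               # eigenvectors of the M largest eigenvalues

Y = X @ W                               # compressed, decorrelated coefficients
X_hat = Y @ W.T                         # reconstruction as in Eq. (5)

mse = np.mean(np.sum((X - X_hat) ** 2, axis=1))
discarded = np.sort(eigvals)[::-1][M:].sum()   # energy left out by truncation
```

Here `mse` matches `discarded` up to sampling error, and the covariance of `Y` is diagonal, illustrating the decorrelation claim in the context.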

29 | Online learning algorithms of locally recurrent neural networks
- Campolucci, Uncini, et al.
- 1999
Citation Context: ...n Fig. 18 (locally recurrent neuron for multilayer neural networks). Several learning algorithms for recurrent architectures exist in the literature, although a comprehensive framework is still missing. In [9] a very effective algorithm was introduced for learning of locally recurrent neural networks. Learning is performed by a new gradient-based on-line algorithm [9], called causal recursive back-propagat...

23 | Multilayer Feedforward Networks with Adaptive Spline Activation Function
- Guarnieri, Piazza, et al.
- 1999
Citation Context: ...block. The output of each neuron is quantized with 4 bits. Learning capabilities were improved by the use of adaptable sigmoidal functions. Alternatively, spline adaptive models were fruitfully employed [8]. Performance in video compression is usually evaluated on the basis of the Peak Signal-to-Noise Ratio (PSNR), defined as:

PSNR = 10 · log₁₀( 256² / ( (1/(M·N)) · Σₘ Σₙ [pix_org(m, n) − pix_rec(m, n)]² ) )   (11)
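Equation (11) translates directly into code. Note that the text uses a peak value of 256; many references use 255 instead, so the peak is exposed as a parameter here. The toy image and noise level are illustrative:

```python
import numpy as np

def psnr(orig, rec, peak=256.0):
    """Eq. (11): PSNR = 10 * log10(peak^2 / MSE), in dB.
    Undefined (infinite) when the two images are identical (MSE = 0)."""
    orig = np.asarray(orig, dtype=np.float64)
    rec = np.asarray(rec, dtype=np.float64)
    mse = np.mean((orig - rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy check: Gaussian noise with std 4 gives MSE ~ 16, i.e. roughly 36 dB.
rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
rec = img + rng.normal(0.0, 4.0, size=img.shape)
value = psnr(img, rec)
```

Higher PSNR means lower distortion; compression schemes are typically compared by PSNR at a fixed bit rate.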

4 | Image compression using neural network with learning capability of variable function of a neural unit
- Kohno, Arai, et al.
- 1990
Citation Context: ...yer perceptron trained by autoassociative backpropagation. Other approaches developed neural networks with sigmoidal activation functions, yielding better results with respect to the linear network [3][4]. A critical issue in neural PCA is the fixed compression ratio of each processed block: the network performs the compression with low distortion on uniform blocks but produces higher distortion on ...

3 | Size-Adaptive Neural Network for Image Compression
- Parodi, Passaggio
- 1994
Citation Context: ...ssed block: the network performs the compression with low distortion on uniform blocks but produces higher distortion on less uniform ones. In order to overcome this problem, size-adaptive networks [6] can be employed to perform compression depending on block activity. This allows for higher compression of blocks with a low activity level and good reconstruction of blocks with a higher activity level. ...
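The block-activity idea above can be sketched as a dispatcher that routes each block to a code size based on a simple activity measure. The variance-based measure, the thresholds, and the candidate code sizes are all illustrative assumptions, not the actual scheme of [6]:

```python
import numpy as np

def block_activity(block):
    """Illustrative activity measure: pixel variance within the block."""
    return float(np.var(block))

def choose_code_size(block, low=1e-3, high=1e-2, sizes=(4, 8, 16)):
    """Route a block to a bottleneck width: low-activity (uniform) blocks
    get fewer coefficients (higher compression), busy blocks get more.
    Thresholds assume pixel values normalised to [0, 1]."""
    a = block_activity(block)
    if a < low:
        return sizes[0]
    if a < high:
        return sizes[1]
    return sizes[2]

flat = np.full((8, 8), 0.5)                      # uniform block -> small code
busy = np.random.default_rng(3).random((8, 8))   # detailed block -> large code
```

In a complete coder, each code size would be served by its own trained network (or a network with a variable bottleneck), and the chosen size would be signalled to the decoder as side information.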

2 | The Organization of Behavior
- Hebb
- 1949
Citation Context: ...ons in the frame image statistics [3]. In the following, linear and nonlinear PCA are described. Linear PCA: Hebbian learning. Linear PCA is an efficient solution to eigendecomposition computation. In [2] a mechanism inspired by neurobiology was proposed, where synaptic connections between neurons are modified by learning. Hebb's assumption consists of reinforcing the synaptic connection between two n...
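Hebbian learning of linear PCA can be sketched with Oja's normalized Hebbian rule, a standard formulation (not necessarily the exact variant discussed here): reinforcing the weight between correlated input and output, with a decay term that keeps the weight norm bounded, drives the weight vector toward the principal eigenvector. The 2-D toy data are an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(4)

# Zero-mean data with a dominant principal direction u.
u = np.array([3.0, 1.0]) / np.sqrt(10.0)
X = 0.1 * rng.normal(size=(5000, 2)) + np.outer(rng.normal(size=5000), u)

w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.01

for x in X:
    y = w @ x                       # output of the single linear neuron
    w += eta * y * (x - y * w)      # Hebbian term eta*y*x, plus a decay
                                    # that keeps ||w|| near 1 (Oja's rule)

alignment = abs(w @ u)              # -> 1 as w converges to the eigenvector
```

This is the sense in which a purely local, biologically motivated update performs eigendecomposition: no covariance matrix is ever formed explicitly.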

2 | Image compression with a hierarchical neural network
- Namphol, Chin, Arozullah
- 1996
Citation Context: ...[Fig. 17: Multilayer neural network for high-order data compression-decompression.] Hierarchical neural networks (HNNs) take into account the information about block contiguity [7]. The idea is to divide a scene into N disjoint sub-scenes, each one segmented into n×n pixel blocks. Blocks are processed together by the hierarchical structure shown in figure 17. The HNN consists of...

2 | FIR and IIR synapses, a new neural network architecture for time series modelling
- Back, Tsoi
- 1991
Citation Context: ...alled temporal depth, whereas the number of adaptable parameters divided by the temporal depth is named the temporal resolution. An example of an architecture used in this context is the IIR-MLP proposed in [10][11], where static synapses are replaced by conventional IIR adaptive filters, as depicted in figure 18. [Fig. 18: Locally recurrent neuron for multilayer neural networks.]
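The IIR-MLP neuron described above can be sketched as follows: each synapse is a small IIR filter, the filtered inputs are summed, and the sum is squashed by a sigmoid. The filter orders, coefficients, and class interfaces are illustrative, not the exact formulation of [10][11]:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class IIRSynapse:
    """A synapse realised as a direct-form IIR filter:
    y[n] = b[0]*x[n] + b[1]*x[n-1] + ... + a[0]*y[n-1] + a[1]*y[n-2] + ..."""
    def __init__(self, b, a):
        self.b = np.asarray(b, dtype=float)    # feedforward (FIR) taps
        self.a = np.asarray(a, dtype=float)    # feedback taps: infinite memory
        self.x_hist = np.zeros(len(self.b))    # x[n], x[n-1], ...
        self.y_hist = np.zeros(len(self.a))    # y[n-1], y[n-2], ...

    def step(self, x):
        self.x_hist = np.concatenate(([x], self.x_hist[:-1]))
        y = self.b @ self.x_hist + self.a @ self.y_hist
        self.y_hist = np.concatenate(([y], self.y_hist[:-1]))
        return float(y)

class IIRNeuron:
    """Locally recurrent neuron: each input passes through its own IIR
    synapse; the filtered signals are summed and squashed by a sigmoid."""
    def __init__(self, synapses):
        self.synapses = synapses

    def step(self, inputs):
        return sigmoid(sum(s.step(x) for s, x in zip(self.synapses, inputs)))
```

Because the feedback taps give each synapse an infinite impulse response, the neuron retains information from the whole past sequence with only a handful of adaptable parameters, which is the temporal-depth/temporal-resolution trade-off mentioned in the context.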