Results 1 - 10 of 316
Distributed source coding for sensor networks
- In IEEE Signal Processing Magazine
, 2004
"... n recent years, sensor research has been undergoing a quiet revolution, promising to have a significant impact throughout society that could quite possibly dwarf pre-vious milestones in the information revolution. MIT Technology Review ranked wireless sensor networks that con-sist of many tiny, low- ..."
Abstract
-
Cited by 224 (4 self)
- Add to MetaCart
(Show Context)
In recent years, sensor research has been undergoing a quiet revolution, promising to have a significant impact throughout society that could quite possibly dwarf previous milestones in the information revolution. MIT Technology Review ranked wireless sensor networks that consist of many tiny, low-power and cheap wireless sensors as the number one emerging technology. Unlike PCs or the Internet, which are designed to support all types of applications, sensor networks are usually mission driven and application specific (be it detection of biological agents and toxic chemicals; environmental measurement of temperature, pressure and vibration; or real-time area video surveillance). Thus they must operate under a set of unique constraints and requirements. For example, in contrast to many other wireless devices (e.g., cellular phones, PDAs, and laptops), in which energy can be recharged from time to time, the energy provisioned for a wireless sensor node is not expected to be renewed throughout its mission. The limited amount of energy available to wireless sensors has a significant impact on all aspects of a wireless sensor network, from the amount of information that the node can process, to the volume of wireless communication it can carry across large distances. Realizing the great promise of sensor networks requires more than a mere advance in individual technologies; it relies on many components working together in an efficient, unattended, comprehensible, and trustworthy manner. One of the enabling technologies for sensor networks is distributed source coding (DSC), which refers to the compression of multiple correlated sensor outputs [1]–[4] that do not communicate with each other (hence distributed coding). These sensors send their compressed outputs to a central point [e.g., the base station (BS)] for joint decoding.
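The compress-without-communicating idea can be made concrete with a minimal syndrome-based sketch in the Slepian-Wolf spirit; the (7,4) Hamming code, 7-bit words, and one-bit correlation model below are illustrative assumptions, not one of the constructions surveyed in the paper.

```python
# Toy distributed source coding via syndromes: the sensor reading x and the base
# station's side information y differ in at most one bit, so the sensor sends
# only a 3-bit Hamming syndrome instead of all 7 bits.
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column i holds the binary
# representation of i + 1, so a single differing bit is located by its syndrome.
H = np.array([[int(b) for b in format(i, "03b")] for i in range(1, 8)]).T  # 3 x 7

def encode(x):
    """Sensor side: transmit only the syndrome of x (3 bits instead of 7)."""
    return H @ x % 2

def decode(syndrome_x, y):
    """Base station: correct the side information y toward x using the syndrome."""
    s = (H @ y + syndrome_x) % 2          # syndrome of the difference x XOR y
    x_hat = y.copy()
    if s.any():                           # nonzero syndrome -> exactly one bit differs
        pos = int("".join(map(str, s)), 2) - 1
        x_hat[pos] ^= 1
    return x_hat

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 7)                 # sensor reading
y = x.copy()
y[rng.integers(0, 7)] ^= 1                # side information: one bit flipped
assert np.array_equal(decode(encode(x), y), x)
```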
The Distributed Karhunen-Loève Transform
- IEEE Trans. Inform. Theory
, 2003
"... The Karhunen-Loeve transform (KLT) is a key element of many signal processing tasks, including approximation, compression, and classification. Many recent applications involve distributed signal processing where it is not generally possible to apply the KLT to the signal; rather, the KLT must be ..."
Abstract
-
Cited by 91 (15 self)
- Add to MetaCart
(Show Context)
The Karhunen-Loeve transform (KLT) is a key element of many signal processing tasks, including approximation, compression, and classification. Many recent applications involve distributed signal processing where it is not generally possible to apply the KLT to the signal; rather, the KLT must be approximated in a distributed fashion.
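For reference, the centralized KLT that the distributed setting must approximate is just an eigendecomposition of the signal covariance; the sketch below shows that centralized baseline, with the data, dimensions, and rank k as illustrative assumptions (it is not the distributed approximation studied in the paper).

```python
# Centralized KLT sketch: estimate the covariance, take its top eigenvectors,
# and project the (zero-mean) data onto them.
import numpy as np

def klt_basis(samples, k):
    """Return the top-k KLT basis vectors (eigenvectors of the sample covariance)."""
    X = samples - samples.mean(axis=0)        # zero-mean the data
    cov = X.T @ X / (len(X) - 1)              # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :k]            # columns = principal directions

rng = np.random.default_rng(1)
data = rng.standard_normal((500, 8)) @ rng.standard_normal((8, 8))  # correlated source
U = klt_basis(data, k=2)
coeffs = (data - data.mean(axis=0)) @ U       # two transform coefficients per vector
approx = coeffs @ U.T + data.mean(axis=0)     # best rank-2 linear approximation
```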
Video compression -- From concepts to the H.264/AVC standard
- PROCEEDINGS OF THE IEEE
, 2005
"... Over the last one and a half decades, digital video compression technologies have become an integral part of the way we create, communicate, and consume visual information. In this paper, techniques for video compression are reviewed, starting from basic concepts. The rate-distortion performance of ..."
Abstract
-
Cited by 72 (1 self)
- Add to MetaCart
Over the last one and a half decades, digital video compression technologies have become an integral part of the way we create, communicate, and consume visual information. In this paper, techniques for video compression are reviewed, starting from basic concepts. The rate-distortion performance of modern video compression schemes is the result of an interaction between motion representation techniques, intra-picture prediction techniques, waveform coding of differences, and waveform coding of various refreshed regions. The paper starts with an explanation of the basic concepts of video codec design and then explains how these various features have been integrated into international standards, up to and including the most recent such standard, known as H.264/AVC.
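As a small illustration of the "waveform coding of differences" building block mentioned in the abstract, the sketch below transforms an 8x8 prediction residual and uniformly quantizes it; the floating-point DCT, block size, quantization step, and synthetic prediction are simplifying assumptions, whereas H.264/AVC itself uses a 4x4 integer transform and adaptive entropy coding.

```python
# Residual transform coding sketch: transform, quantize, dequantize, invert.
import numpy as np

N = 8
# Orthonormal DCT-II matrix: entry (k, n)
C = np.array([[np.sqrt((1 if k == 0 else 2) / N)
               * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
               for n in range(N)] for k in range(N)])

def code_residual(residual, qstep=16.0):
    coeffs = C @ residual @ C.T               # 2-D transform of the residual block
    return np.round(coeffs / qstep)           # uniform quantization (entropy-coded in practice)

def reconstruct(levels, qstep=16.0):
    return C.T @ (levels * qstep) @ C         # dequantize and inverse transform

rng = np.random.default_rng(2)
block = rng.integers(0, 256, (N, N)).astype(float)   # current block
prediction = block + rng.normal(0, 3, (N, N))        # motion/intra prediction (assumed)
levels = code_residual(block - prediction)           # what would be entropy-coded and sent
decoded = prediction + reconstruct(levels)           # decoder adds the residual back
```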
Trends and Perspectives in Image and Video Coding
- PROCEEDINGS OF THE IEEE
, 2005
"... ..."
(Show Context)
Multimedia Communication in Wireless Sensor Networks
"... The technological advances in Micro Electro-Mechanical Systems (MEMS) and wireless communications have enabled the realization of wireless sensor networks (WSN) comprised of large number of low-cost, low-power, multifunctional sensor nodes. These tiny sensor nodes communicate in short distances an ..."
Abstract
-
Cited by 35 (2 self)
- Add to MetaCart
The technological advances in Micro Electro-Mechanical Systems (MEMS) and wireless communications have enabled the realization of wireless sensor networks (WSN) comprised of a large number of low-cost, low-power, multifunctional sensor nodes. These tiny sensor nodes communicate over short distances and work collaboratively toward fulfilling the application-specific objectives of WSN. However, realizing the wide range of envisioned WSN applications requires effective communication protocols that can address the unique challenges posed by the WSN paradigm. Since many of these envisioned applications also involve collecting information in the form of multimedia such as audio, image, and video, the additional challenges arising from the unique requirements of multimedia delivery over WSN, e.g., diverse reliability requirements, time constraints, and high bandwidth demands, must be addressed as well. Thus far, the vast majority of research efforts has focused on the problems of conventional data communication in WSN. Therefore, there is an urgent need for research on the problems of multimedia communication in WSN. In this paper, a survey of the research challenges and the current status of the literature on multimedia communication in WSN is presented. More specifically, multimedia WSN applications, factors influencing multimedia delivery over WSN, and currently proposed solutions at the application, transport, and network layers are discussed, along with their shortcomings and open research issues.
Distributed Monoview and Multiview Video Coding
, 2007
"... A growing percentage of the world population now uses image and video coding technologies on a regular basis. These technologies are behind the success and quick deployment of services and products such as digital pictures, digital television, DVDs, and Internet video communications. Today’s digital ..."
Abstract
-
Cited by 34 (3 self)
- Add to MetaCart
A growing percentage of the world population now uses image and video coding technologies on a regular basis. These technologies are behind the success and quick deployment of services and products such as digital pictures, digital television, DVDs, and Internet video communications. Today’s digital video coding paradigm, represented by the ITU-T and MPEG standards, mainly relies on a hybrid of block-based transform and interframe predictive coding approaches. In this coding framework, the encoder has the task of exploiting both the temporal and spatial redundancies present in the video sequence, which is a rather complex exercise. As a consequence, all standard video encoders have a much higher computational complexity than the decoder (typically five to ten times more complex), mainly due to the temporal correlation exploitation tools, notably the motion estimation process. This type of architecture is well suited for applications where the video is encoded once and decoded many times, i.e., one-to-many topologies such as broadcasting or video-on-demand, where the cost of the decoder is more critical.
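The complexity asymmetry described in the abstract is dominated by motion estimation; a minimal full-search version is sketched below, with block size, search range, and the synthetic frames as illustrative assumptions.

```python
# Full-search block motion estimation: exhaustively test candidate displacements
# and keep the one with the smallest sum of absolute differences (SAD).
import numpy as np

def full_search(cur, ref, bx, by, bsize=16, srange=8):
    """Return the motion vector minimizing SAD for the block at (by, bx)."""
    block = cur[by:by + bsize, bx:bx + bsize]
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue                      # candidate falls outside the reference
            sad = np.abs(block - ref[y:y + bsize, x:x + bsize]).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

rng = np.random.default_rng(3)
ref = rng.integers(0, 256, (64, 64)).astype(float)
cur = np.roll(ref, (2, -3), axis=(0, 1))      # frame shifted down by 2, left by 3
print(full_search(cur, ref, bx=16, by=16))    # ((-2, 3), 0.0): match found in the reference
```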
Side information generation for multiview distributed video coding using a fusion approach
- The 7th Nordic Signal Processing Symposium
, 2007
"... Distributed Source Coding (DSC) aims at achieving efficient compression by locating the source redundancies at the decoder instead of the encoder. Moreover, DSC exhibits many properties like low-complexity encoding or embedded error resilience that make it very convenient for some emerging new appli ..."
Abstract
-
Cited by 26 (1 self)
- Add to MetaCart
(Show Context)
Distributed Source Coding (DSC) aims at achieving efficient compression by exploiting the source redundancies at the decoder instead of the encoder. Moreover, DSC exhibits properties such as low-complexity encoding and embedded error resilience that make it very convenient for some emerging applications. Among the many challenging topics related to DSC is the generation of the Side Information, an estimate made at the decoder of the data being decoded. In the particular field of Multiview Distributed Video Coding (Multiview DVC), this Side Information can be generated by inter-camera or intra-camera interpolation. This paper briefly describes both techniques and proposes two approaches that combine them by evaluating the reliability of each interpolation at the pixel level.
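A minimal sketch of pixel-level fusion of two side-information estimates is given below; it assumes each interpolation comes with a per-pixel error map and simply keeps, at every pixel, the estimate reported as more reliable. The reliability measures and fusion rules proposed in the paper are more elaborate.

```python
# Pixel-wise fusion of temporal and inter-camera side information.
import numpy as np

def fuse_side_information(si_temporal, si_intercam, err_temporal, err_intercam):
    """Pixel-wise selection between temporal and inter-camera interpolations."""
    take_temporal = err_temporal <= err_intercam      # boolean reliability mask
    return np.where(take_temporal, si_temporal, si_intercam)

# Tiny synthetic example with assumed 4x4 frames and error maps.
rng = np.random.default_rng(4)
si_t, si_c = rng.integers(0, 256, (2, 4, 4))          # two candidate side-information frames
err_t, err_c = rng.random((2, 4, 4))                  # their per-pixel error estimates
fused = fuse_side_information(si_t, si_c, err_t, err_c)
```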
Distributed compressed video sensing
- in Proc. of IEEE International Conference on Image Processing, Nov.
"... This paper proposes a novel framework called Distributed Compressed Video Sensing (DISCOS) – a solution for Distributed Video Coding (DVC) based on the recently emerging Compressed Sensing theory. The DISCOS framework compressively samples each video frame independently at the encoder. However, it ..."
Abstract
-
Cited by 24 (3 self)
- Add to MetaCart
(Show Context)
This paper proposes a novel framework called Distributed Compressed Video Sensing (DISCOS), a solution for Distributed Video Coding (DVC) based on the recently emerging Compressed Sensing theory. The DISCOS framework compressively samples each video frame independently at the encoder. However, it recovers video frames jointly at the decoder by exploiting an interframe sparsity model and by performing sparse recovery with side information. In particular, along with global frame-based measurements, the DISCOS encoder also acquires local block-based measurements for block prediction at the decoder. Our interframe sparsity model mimics state-of-the-art video codecs: the sparsest representation of a block is a linear combination of a few temporally neighboring blocks that are in previously reconstructed frames or in nearby key frames. This model enables a block to be optimally predicted from its local measurements by l1-minimization. The DISCOS decoder also employs sparse recovery with side information to jointly reconstruct a frame from its global measurements and its local block-based prediction. Simulation results show that the proposed framework outperforms the baseline compressed sensing-based scheme of intraframe coding and intraframe decoding by 8-10 dB. Finally, unlike conventional DVC schemes, our DISCOS framework can perform most encoding operations in the analog domain with very low complexity, making it a promising candidate for real-time, practical applications where analog-to-digital conversion is expensive, e.g., in Terahertz imaging. Index Terms: distributed video coding, Wyner-Ziv coding, compressed sensing, compressive sensing, sparse recovery with decoder side information, structurally random matrices.
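The block-prediction idea can be sketched as follows: a block is modeled as a sparse combination of candidate blocks from previously reconstructed frames and recovered from a few random measurements. The paper uses l1-minimization with structurally random matrices; this self-contained sketch substitutes orthogonal matching pursuit and a Gaussian measurement matrix, and all sizes below are assumptions.

```python
# Decoder-side block prediction from local compressive measurements.
import numpy as np

def omp(A, y, k):
    """Greedy sparse recovery: pick k columns of A that best explain y."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(5)
n_pix, n_cand, n_meas = 64, 20, 24                    # 8x8 block, candidates, measurements
D = rng.standard_normal((n_pix, n_cand))              # dictionary of neighboring blocks
alpha_true = np.zeros(n_cand)
alpha_true[[3, 11]] = [0.7, 0.3]                      # block = sparse mix of two candidates
block = D @ alpha_true
Phi = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_meas)  # measurement matrix
y = Phi @ block                                       # local block-based measurements
alpha_hat = omp(Phi @ D, y, k=2)                      # sparse coefficients from measurements
prediction = D @ alpha_hat                            # decoder-side block prediction
```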
Interactive streaming of stored multiview video using redundant frame structures
- IEEE Trans. Image Process
, 2011
"... Abstract—While much of multiview video coding focuses on the rate-distortion performance of compressing all frames of all views for storage or non-interactive video delivery over networks, we address the problem of designing a frame structure to enable interactive multiview streaming, where clients ..."
Abstract
-
Cited by 23 (13 self)
- Add to MetaCart
(Show Context)
While much of multiview video coding focuses on the rate-distortion performance of compressing all frames of all views for storage or non-interactive video delivery over networks, we address the problem of designing a frame structure to enable interactive multiview streaming, where clients can interactively switch views during video playback. Thus, as a client is playing back successive frames (in time) for a given view, it can send a request to the server to switch to a different view while continuing uninterrupted temporal playback. Noting that standard tools for random access (i.e., I-frame insertion) can be bandwidth-inefficient for this application, we propose a redundant representation of I-, P-, and “merge” frames, where each original picture can be encoded into multiple versions, appropriately trading off expected transmission rate with storage, to facilitate view switching. We first present ad hoc frame structures with good performance when the view-switching probabilities are either very large or very small. We then present optimization algorithms that generate more general frame structures with better overall performance for the general case. We show in our experiments that we can generate redundant frame structures offering a range of tradeoff points between transmission and storage, e.g., outperforming simple I-frame insertion structures by up to 45% in terms of bandwidth efficiency at twice the storage cost. Index Terms: Media interaction, multiview video coding, video streaming.
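A toy cost model (not the paper's optimization) makes the transmission/storage tradeoff at a single switch point concrete; the frame sizes, number of views, and switch probability below are assumed numbers.

```python
# Two ways of supporting a view switch at one frame position:
# (a) store one P-frame plus an inserted I-frame and send the I-frame only when
#     the client switches, or
# (b) store a redundant P-frame version per possible predecessor view and always
#     send a single P-frame.

def i_frame_fallback(size_i, size_p, p_switch):
    """(a) I-frame insertion: cheap when switches are rare, costly when frequent."""
    return {"storage": size_p + size_i,
            "expected_tx": (1 - p_switch) * size_p + p_switch * size_i}

def redundant_p(size_p, num_views):
    """(b) Redundant P-frames: more storage, but transmission stays at one P-frame."""
    return {"storage": num_views * size_p, "expected_tx": size_p}

for p in (0.05, 0.5):
    print(p,
          i_frame_fallback(size_i=100.0, size_p=20.0, p_switch=p),
          redundant_p(size_p=20.0, num_views=4))
```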