Real-time motion estimation and visualization on graphics cards (2004)

by R Strzodka, C Garbe
Venue: IEEE Vis

Results 1 - 10 of 28

A duality based approach for realtime tv-l1 optical flow

by C. Zach, T. Pock, H. Bischof - In Ann. Symp. German Association Patt. Recogn , 2007
Cited by 198 (15 self)
Abstract. Variational methods are among the most successful approaches to calculate the optical flow between two image frames. A particularly appealing formulation is based on total variation (TV) regularization and the robust L1 norm in the data fidelity term. This formulation can preserve discontinuities in the flow field and offers an increased robustness against illumination changes, occlusions and noise. In this work we present a novel approach to solve the TV-L1 formulation. Our method results in a very efficient numerical scheme, which is based on a dual formulation of the TV energy and employs an efficient point-wise thresholding step. Additionally, our approach can be accelerated by modern graphics processing units. We demonstrate the real-time performance (30 fps) of our approach for video inputs at a resolution of 320 × 240 pixels.
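The point-wise thresholding step mentioned in the abstract admits a compact vectorized form. Below is a minimal NumPy sketch following the usual TV-L1 presentation (λ as the data weight, θ as the coupling parameter); the variable names and the small epsilon guard are our choices, not the paper's exact notation:

```python
import numpy as np

def tvl1_threshold_step(u, rho, Ix, Iy, lam, theta):
    """Point-wise thresholding step of a duality-based TV-L1 scheme.

    u      : (H, W, 2) current flow estimate
    rho    : (H, W)    linearized data residual at u
    Ix, Iy : (H, W)    spatial derivatives of the (warped) second frame
    """
    grad_sq = Ix**2 + Iy**2 + 1e-12          # epsilon guards zero gradients
    step = lam * theta * grad_sq
    case1 = rho < -step                      # residual strongly negative
    case2 = rho > step                       # residual strongly positive
    case3 = ~(case1 | case2)                 # small residual: project onto constraint
    v = u.copy()
    for k, g in enumerate((Ix, Iy)):
        v[..., k] += lam * theta * g * case1
        v[..., k] -= lam * theta * g * case2
        v[..., k] -= (rho / grad_sq) * g * case3
    return v
```

In the full algorithm this step alternates with a dual-variable update of the TV term; both steps are pixel-wise independent, which is what makes the GPU mapping effective.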

Citation Context

...omous robot navigation, it is necessary to calculate displacement fields in real-time. Real-time optical flow techniques typically consider only the data fidelity term to generate displacement fields [10, 18]. One of the first variational approaches to compute the optical flow in realtime was presented by Bruhn et al. [8, 9]. In their work a highly efficient multigrid approach is employed to obtain real-t...

Full-frame video stabilization with motion inpainting

by Yasuyuki Matsushita, Eyal Ofek, Xiaoou Tang, Senior Member, Heung-yeung Shum - IEEE Trans. Patt. Anal. Mach. Intell , 2006
Cited by 59 (0 self)
Abstract—Video stabilization is an important video enhancement technology which aims at removing annoying shaky motion from videos. We propose a practical and robust approach to video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up producing smaller-size stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels of neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which can naturally keep the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos. Index Terms—Video analysis, video stabilization, video completion, motion inpainting, sharpening and deblurring, video enhancement.

Citation Context

...re visible in the original video. (b) The spots are removed from the entire sequence by masking out the spot areas and applying our video completion method. tion. Utilizing GPU power as it is done in [41], it will be possible to significantly improve the speed. 5.4 Other Video Enhancement Applications In addition to video stabilization, the video completion and deblurring algorithms we developed in th...

An improved algorithm for TV-L1 optical flow

by A. Wedel, T. Pock, C. Zach, H. Bischof, D. Cremers - In: Visual Motion Analysis Workshop, LNCS 5604 , 2009
Cited by 54 (5 self)
Fig. 1. Optical flow for the backyard and mini cooper scene of the Middlebury optical flow benchmark. Optical flow captures the dynamics of a scene by estimating the motion of every pixel between two frames of an image sequence. The displacement of every pixel is shown as displacement vectors on top of the commonly used flow color scheme (see Figure 5). Abstract. A look at the Middlebury optical flow benchmark [5] reveals that nowadays variational methods yield the most accurate optical flow fields between two image frames. In this work we propose an improved variant of the original duality based TV-L1 optical flow algorithm in [31] and provide implementation details. This formulation can preserve discontinuities in the flow field by employing total variation (TV) regularization. Furthermore, it offers robustness against outliers by applying the robust L1 norm in the data fidelity term. Our contributions are as follows. First, we propose to perform a structure-texture decomposition of the input images to get rid of violations in the optical flow constraint due to illumination changes. Second, we propose to integrate a median filter into the numerical scheme to further increase the robustness to sampling artefacts in the image data. We experimentally show that very precise and robust estimation of optical flow can be achieved with a variational approach in realtime. The numerical scheme and the implementation are described in a detailed way, which enables reimplementation of this high-end method.
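The paper's two stated contributions can each be sketched in a few lines. The NumPy sketch below is illustrative only: a box blur stands in for the ROF denoising the authors use to extract the structure part, and the parameter values are assumptions rather than the paper's:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def structure_texture(img, radius=5, alpha=0.95):
    """Blend out the low-frequency 'structure' so illumination changes
    affect the data term less. A box blur stands in for ROF denoising."""
    pad = np.pad(img, radius, mode="edge")
    win = sliding_window_view(pad, (2 * radius + 1, 2 * radius + 1))
    structure = win.mean(axis=(-2, -1))
    return img - alpha * structure           # mostly texture remains

def median_filter_flow(u, radius=1):
    """Median-filter each flow component between warping iterations to
    suppress outliers and sampling artefacts (the paper's second point)."""
    out = np.empty_like(u)
    for k in range(u.shape[-1]):
        pad = np.pad(u[..., k], radius, mode="edge")
        win = sliding_window_view(pad, (2 * radius + 1, 2 * radius + 1))
        out[..., k] = np.median(win, axis=(-2, -1))
    return out
```

Both operations are again pixel-wise local, so they fit the same GPU execution model as the flow updates they are interleaved with.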

Citation Context

...omous robot navigation, it is necessary to calculate displacement fields in real-time. Real-time optical flow techniques typically consider only the data fidelity term to generate displacement fields [12, 25]. One of the first variational approaches to compute the optical flow in real-time was presented by Bruhn et al. [10, 11]. In their work a highly effici...

Using graphics devices in reverse: GPU-based Image Processing and Computer Vision

by James Fung, Steve Mann - IEEE International Conference on Multimedia and Expo , 2008
Cited by 21 (0 self)
Graphics and vision are approximate inverses of each other: ordinarily Graphics Processing Units (GPUs) are used to convert “numbers into pictures” (i.e. computer graphics). In this paper, we discuss the use of GPUs in approximately the reverse way: to assist in “converting pictures into numbers” (i.e. computer vision). For graphical operations, GPUs currently provide many hundreds of gigaflops of processing power. This paper discusses how this processing power is being harnessed for Image Processing and Computer Vision, thereby providing dramatic speedups on commodity, readily available graphics hardware. A brief review of algorithms mapped to the GPU by using the graphics API for vision is presented. The recent NVIDIA CUDA programming model is then introduced as a way of expressing program parallelism without the need for graphics expertise.

Citation Context

...r used to create applications in Mediated Reality [15] where multiple graphics cards were used in parallel to run multiple algorithms concurrently and maintain interactive framerates. Strzodka et al. [16] implement a motion estimation algorithm that provides dense estimates from optical flow. They achieve a 2.8 times speedup on a GeForce 5800 Ultra GPU over an optimized Pentium 4 CPU implementation. Im...

Visual signatures in video visualization

by Min Chen, Ralf P. Botchen, Rudy R. Hashim, Daniel Weiskopf, Thomas Ertl, Ian M. Thornton - IEEE Transactions on Visualization and Computer Graphics , 2006
Cited by 17 (9 self)
Abstract — Video visualization is a computation process that extracts meaningful information from original video data sets and conveys the extracted information to users in appropriate visual representations. This paper presents a broad treatment of the subject, following a typical research pipeline involving concept formulation, system development, a path-finding user study, and a field trial with real application data. In particular, we have conducted a fundamental study on the visualization of motion events in videos. We have, for the first time, deployed flow visualization techniques in video visualization. We have compared the effectiveness of different abstract visual representations of videos. We have conducted a user study to examine whether users are able to learn to recognize visual signatures of motions, and to assist in the evaluation of different visualization techniques. We have applied our understanding and the developed techniques to a set of application video clips. Our study has demonstrated that video visualization is both technically feasible and cost-effective. It has provided the first set of evidence confirming that ordinary users can be accustomed to the visual features depicted in video visualizations, and can learn to recognize visual signatures of a variety of motion events. Index Terms—Video visualization, volume visualization, flow visualization, human factors, user study, visual signatures, video processing, optical flow, GPU rendering.

Citation Context

...ategories, namely object matching and optical flow [29]. The former (e.g., [1]) involves knowledge about known objects, such as their 3D geometry or IK-skeleton. The latter (e.g., [13], [2], [26]) is less accurate but can be applied to almost any arbitrary situation. To compute optical flow of a video, we adopted the gradient-based differential method by Horn and Schunck [13]. The original al...
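The gradient-based method of Horn and Schunck referenced in the excerpt iterates a pixel-wise update against a neighbourhood average of the flow. A minimal NumPy sketch follows; the derivative scheme (np.gradient), 4-neighbour averaging with wrap-around, and the parameter values are our simplifications, not the cited implementation:

```python
import numpy as np

def horn_schunck(I0, I1, alpha=10.0, n_iter=100):
    """Minimal dense Horn-Schunck optical flow between frames I0 and I1.

    alpha weights the smoothness term; returns per-pixel flow (u, v).
    """
    Ix = np.gradient(I0, axis=1)             # spatial derivatives
    Iy = np.gradient(I0, axis=0)
    It = I1 - I0                             # temporal derivative
    u = np.zeros_like(I0)
    v = np.zeros_like(I0)

    def avg(f):                              # 4-neighbour average (wraps at edges)
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                       + np.roll(f, 1, 1) + np.roll(f, -1, 1))

    for _ in range(n_iter):
        ubar, vbar = avg(u), avg(v)
        num = Ix * ubar + Iy * vbar + It     # data-term residual
        den = alpha**2 + Ix**2 + Iy**2
        u = ubar - Ix * num / den            # Jacobi-style point update
        v = vbar - Iy * num / den
    return u, v
```

Each iteration is a stencil operation over the whole field, which is exactly the access pattern that maps well onto GPU fragment processing, as the surveyed papers exploit.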

GPUCV: A FRAMEWORK FOR IMAGE PROCESSING ACCELERATION WITH GRAPHICS PROCESSORS

by Patrick Horain, Erwan Guehenneux, Yannick Alusse
Cited by 10 (0 self)
This paper presents a state-of-the-art report on using graphics hardware for image processing and computer vision. Then we describe GPUCV, an open library for easily developing GPU accelerated image processing and analysis operators and applications.

Citation Context

...shader which alters the depth buffer. An adaptive version is also presented. Performances are impressive and this method is up to 34 times faster depending on the complexity of the image. Strzodka and Garbe [9] introduced a motion estimation method based on eigenvector analysis in a spatio-temporal tensor. The sequence is transmitted frame by frame in video memory and eigenvectors, tensor and visualizati...

Real-Time Texture Detection Using the LU-Transform

by Alireza Tavakoli Targhi, Eric Hayman, Jan-Olof Eklundh
Cited by 5 (0 self)
Abstract. This paper introduces a fast texture descriptor, the LU-transform. It is inspired by previous methods, the SVD-transform and Eigen-transform, which yield measures of image roughness by considering the singular values or eigenvalues of matrices formed by copying greyvalues from a square patch around a pixel directly into a matrix of the same size. The SVD and Eigen-transforms therefore capture the degree to which linear dependencies are present in the image patch. In this paper we demonstrate that similar information can be recovered by examining the properties of the LU factorization of the matrix, and in particular the diagonal part of the U matrix. While the LU-transform yields an output qualitatively similar to those of the SVD and Eigen-transforms, it can be computed about an order of magnitude faster. It is a much simpler algorithm and well-suited to implementation on parallel architectures. We capitalise on these properties in an implementation of the algorithm on a Graphics Processor Unit (GPU) which makes it even faster than a CPU implementation, and frees the CPU for other computations.
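The core of the LU-transform can be sketched directly from the abstract: factor the patch and read off the diagonal of U. The sketch below uses Gaussian elimination with partial pivoting and averages the tail of |diag(U)|; this normalization and the pivoting choice are plausible assumptions, not necessarily the paper's exact recipe:

```python
import numpy as np

def lu_transform(patch, skip=1):
    """Texture roughness of a square grey-level patch from the diagonal
    of U in its LU factorization. A near-constant (linearly dependent)
    patch yields small trailing diagonal entries, hence a small score."""
    A = patch.astype(float)
    n = A.shape[0]
    for k in range(n - 1):
        p = k + int(np.argmax(np.abs(A[k:, k])))   # partial pivoting
        A[[k, p]] = A[[p, k]]                      # swap rows k and p
        piv = A[k, k]
        if piv != 0.0:
            A[k + 1:, k] /= piv                    # multipliers (L part)
            A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    d = np.abs(np.diag(A))                         # |diag(U)| after elimination
    return d[skip:].mean()                         # discard the leading entry
```

Like the SVD- and Eigen-transforms, the score reflects how close the patch is to being low-rank, but elimination is far cheaper than an eigendecomposition, which is the speed argument the abstract makes.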

Citation Context

...ral purpose processing. See [22] for more information on general purpose GPU initiatives. Some examples of applications in real-time computer vision exist, like depth matching [23], motion estimation [24] and figure-ground segmentation [25]. Unlike typical CPUs, most GPUs are based on single-instruction-multiple-data (SIMD) architectures, with multiple processing units working in parallel on different ...

GpuCV: A GPU-accelerated framework for image processing and computer vision

by Yannick Allusse, Patrick Horain, Ankit Agarwal, Cindula Saipriyadarshan - in Intl. Symp. on Advances in Visual Computing , 2008
Cited by 4 (0 self)
Abstract. This paper briefly describes the state of the art of accelerating image processing with graphics hardware (GPU) and discusses some of its caveats. Then it describes GpuCV, an open source multi-platform library for GPU-accelerated image processing and Computer Vision operators and applications. It is meant for computer vision scientists not familiar with GPU technologies. GpuCV is designed to be compatible with the popular OpenCV library by offering GPU-accelerated operators that can be integrated into native OpenCV applications. The GpuCV framework transparently manages hardware capabilities, data synchronization, activation of low level GLSL and CUDA programs, on-the-fly benchmarking and switching to the most efficient implementation, and finally offers a set of image processing operators with GPU acceleration available.

Citation Context

...ragment shader to compute a Fast Fourier Transform on GPU four times faster than on CPU. Strzodka presented a GPU-accelerated generalized bi-dimensional distance transform in [15] and motion estimation [16]. The GPU Gems book series [17] discusses image processing, including image filtering (color adjustment, anti-aliasing), image processing in the OpenVidia frame...

An hardware architecture for 3d object tracking and motion estimation

by P. Lanvin, J. -c. Noyer, M. Benjelloun - In Proc. of Intl. Conf. on Multimedia and Expo (ICME , 2005
Cited by 3 (0 self)
We present a method to track and estimate the motion of a 3D object with a monocular image sequence. The problem is based on the state equations and is solved by a sequential Monte Carlo method. The method uses a CAD model of the object whose projection can be compared directly with the pixels of the image. The advantage is to obtain a better accuracy and a direct estimation of the pose and motion in the 3D world. However, this algorithm needs a massive computing load. For real-time use, we develop in this paper a distributed algorithm that dispatches the processing between the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU) of a consumer-market computer. Some experimental results show that it is possible to obtain an accurate 3D tracking of the object with low computing costs.
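The sequential Monte Carlo scheme the abstract refers to is, in its simplest form, a bootstrap particle filter. A minimal generic sketch follows; the `likelihood` callback is where the paper's CAD-model rendering and pixel comparison would go (the part dispatched to the GPU), and all names, the random-walk motion model, and parameters here are our illustrative assumptions:

```python
import numpy as np

def particle_filter_step(particles, weights, motion_noise, likelihood, rng):
    """One predict/update/resample cycle of a bootstrap particle filter.

    particles : (N, D) state hypotheses;  weights : (N,) normalized weights.
    likelihood maps one state vector to p(observation | state).
    """
    # Predict: propagate each particle through a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_noise, particles.shape)
    # Update: reweight each particle by the observation likelihood.
    weights = weights * np.apply_along_axis(likelihood, 1, particles)
    weights = weights / weights.sum()
    # Resample (systematic) to counter weight degeneracy.
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)
```

The likelihood evaluations are independent across particles, which is why rendering and scoring the model projections on the GPU, as the paper proposes, parallelizes cleanly.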

Citation Context

...tement of the particle-based solution, which leads to significant speed-ups and better estimation accuracies. Modern GPUs are capable of complex 3D rendering and more general computer vision applications [7, 8]. They are theoretically more powerful than CPUs, with a high price/performance ratio. This architecture allows full use of the processors and exploits a form of parallelism between CPU and GPU. The g...

Analysis of Large Magnitude Discontinuous Non-rigid Motion

by Mani V. Thomas , 2008
Cited by 1 (1 self)
Abstract not found

Citation Context

...he throughput from motion analysis. Additionally, with the availability of programmable graphics hardware, motion can now be estimated at a significantly improved speed as shown by Strzodka and Garbe [135]. As new and varied sensors are continuously developed and deployed, extensions of many of these algorithms are inevitable. New models will have to be developed to handle the nuances o...


Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University