Results 1 - 10 of 17
Action-based multi-field video visualization
- IEEE Transactions on Visualization and Computer Graphics
"... Abstract — One challenge in video processing is to detect actions and events, known or unknown, in video streams dynamically. This paper proposes a visualization solution, where a video stream is depicted as a series of snapshots at a relatively sparse interval, and detected actions are highlighted ..."
Abstract
-
Cited by 20 (9 self)
Abstract — One challenge in video processing is to detect actions and events, known or unknown, in video streams dynamically. This paper proposes a visualization solution, where a video stream is depicted as a series of snapshots at a relatively sparse interval, and detected actions are highlighted with continuous abstract illustrations. The combined imagery and illustrative visualization conveys multi-field information in a manner similar to electrocardiograms (ECG) and seismographs. We thus name this type of video visualization the VideoPerpetuoGram (VPG). In this paper, we describe a system that handles the raw and processed information of the video stream in a multi-field visualization pipeline. As examples, we consider the needs for highlighting several types of processed information, including detected actions in video streams, and estimated relationships between recognized objects. We examine the effective means for depicting multi-field information in VPG, and support our choice of visual mappings through a survey. Our GPU implementation facilitates the VPG-specific viewing specification through a sheared object space, as well as volume bricking and combinational rendering of volume data and glyphs. Index Terms — Video visualization, multi-field visualization, volume rendering, GPU rendering, video processing, actions and events
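The sheared object space at the heart of the VPG layout can be illustrated with a small CPU-side sketch. The Python/NumPy function below is not the authors' GPU implementation; the function name, the fixed snapshot stride, and the simple alpha blending are illustrative assumptions, but it captures the basic idea of mapping the time axis to a horizontal shear when compositing sparse snapshots.

```python
import numpy as np

def vpg_sheared_composite(frames, stride=8, shear_px=4, alpha=0.35):
    """Composite sparse video snapshots with a horizontal offset proportional
    to time, mimicking a VPG-style sheared-object-space layout (illustrative
    sketch only; parameters are assumptions, not values from the paper).

    frames   : (T, H, W) array of grayscale frames in [0, 1]
    stride   : keep every `stride`-th frame as a snapshot
    shear_px : horizontal shift in pixels per selected snapshot (the shear)
    alpha    : blending weight of each snapshot
    """
    snaps = frames[::stride]
    t, h, w = snaps.shape
    canvas = np.zeros((h, w + shear_px * (t - 1)), dtype=np.float32)
    for i, snap in enumerate(snaps):
        x0 = i * shear_px                     # time index maps to a horizontal shear
        canvas[:, x0:x0 + w] = (1 - alpha) * canvas[:, x0:x0 + w] + alpha * snap
    return canvas
```

In the paper the equivalent viewing transform is applied in the GPU rendering pipeline, together with volume bricking, rather than by per-pixel compositing as sketched here.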
Contextualized Videos: Combining Videos with Environment Models to Support Situational Understanding
"... Abstract — Multiple spatially-related videos are increasingly used in security, communication, and other applications. Since it can be difficult to understand the spatial relationships between multiple videos in complex environments (e.g. to predict a person�s path through a building), some visualiz ..."
Abstract
-
Cited by 9 (0 self)
Abstract — Multiple spatially-related videos are increasingly used in security, communication, and other applications. Since it can be difficult to understand the spatial relationships between multiple videos in complex environments (e.g. to predict a person's path through a building), some visualization techniques, such as video texture projection, have been used to aid spatial understanding. In this paper, we identify and begin to characterize an overall class of visualization techniques that combine video with 3D spatial context. This set of techniques, which we call contextualized videos, forms a design palette which must be well understood so that designers can select and use appropriate techniques that address the requirements of particular spatial video tasks. We first identify user tasks in video surveillance that are likely to benefit from contextualized videos and discuss the video, model, and navigation related dimensions of the contextualized video design space. We then describe our contextualized video testbed which allows us to explore this design space and compose various video visualizations for evaluation. Finally, we describe the results of our process to identify promising design patterns through user selection of visualization features from the design space, followed by user interviews. Index Terms — situational awareness, videos, virtual environment models, design space, testbed design and evaluation.
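Video texture projection, one of the techniques this paper builds on for combining video with 3D context, reduces to projecting model geometry into the video camera's image plane. The sketch below is a hypothetical NumPy illustration rather than code from the paper's testbed; the function name and the assumption of a single combined projection-view matrix are mine.

```python
import numpy as np

def video_texture_coords(points_world, proj_view, eps=1e-6):
    """Project 3D model vertices into a video camera's image to obtain texture
    coordinates for video texture projection (illustrative sketch).

    points_world : (N, 3) vertex positions in world space
    proj_view    : (4, 4) combined projection @ view matrix of the video camera
    Returns (N, 2) texture coordinates in [0, 1] plus a mask of vertices that
    fall inside the camera frustum and can therefore receive video texels.
    """
    n = points_world.shape[0]
    homogeneous = np.hstack([points_world, np.ones((n, 1))])   # to homogeneous coords
    clip = homogeneous @ proj_view.T                            # clip-space positions
    ndc = clip[:, :3] / np.maximum(clip[:, 3:4], eps)           # perspective divide
    uv = 0.5 * (ndc[:, :2] + 1.0)                               # NDC [-1, 1] -> [0, 1]
    inside = np.all((ndc >= -1.0) & (ndc <= 1.0), axis=1) & (clip[:, 3] > 0)
    return uv, inside
```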
Interactive storyboard for overall time-varying data visualization
- In Proc. of IEEE PacificVis
, 2008
"... Large amounts of time-varying datasets create great challenges for users to understand and explore them. This paper proposes an efficient visualization method for observing overall data contents and changes throughout an entire time-varying dataset. We develop an interactive storyboard approach by c ..."
Abstract
-
Cited by 9 (0 self)
Large time-varying datasets create great challenges for users to understand and explore them. This paper proposes an efficient visualization method for observing overall data contents and changes throughout an entire time-varying dataset. We develop an interactive storyboard approach by composing sample volume renderings and descriptive geometric primitives that are generated through data analysis processes. Our storyboard system integrates automatic visualization generation methods and interactive adjustment procedures to provide new tools for visualizing and exploring time-varying datasets. We also provide a flexible framework to quantify data differences and automatically select representative datasets through exploring scientific data distribution features. Since this approach reduces the amount of visualized data to a more understandable size and format, it can be used to effectively visualize, represent, and explore a large time-varying dataset. Initial user study results show that our approach shortens the exploration time and reduces the number of datasets that users visualized individually. This visualization method is especially useful for situations that require close observation or do not permit interactive rendering, such as documentation and demonstration.
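The abstract does not state which difference metric drives the automatic selection of representative datasets, so the following is only one plausible instantiation: a hedged sketch that measures histogram (L1) distance between time steps and picks representatives greedily (farthest-point selection). The function name and the choice of metric are assumptions, not details from the paper.

```python
import numpy as np

def select_representatives(volumes, k, bins=64):
    """Greedily pick k representative time steps from a time-varying scalar
    dataset using histogram (L1) distance as a simple difference measure
    (one possible reading of the paper's idea, not its actual algorithm).

    volumes : (T, ...) array, one scalar volume per time step
    Returns sorted indices of the selected time steps.
    """
    t = volumes.shape[0]
    lo, hi = float(volumes.min()), float(volumes.max())
    hists = np.stack([np.histogram(v, bins=bins, range=(lo, hi), density=True)[0]
                      for v in volumes])
    chosen = [0]                                   # always keep the first time step
    while len(chosen) < min(k, t):
        # distance of every step to its nearest already-chosen representative
        d = np.min([np.abs(hists - hists[c]).sum(axis=1) for c in chosen], axis=0)
        d[chosen] = -1.0                           # never re-pick a chosen step
        chosen.append(int(np.argmax(d)))           # farthest-point (greedy) choice
    return sorted(chosen)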
State of the art report on video-based graphics and video visualization
- Comp. Graph. Forum
, 2012
"... Abstract In recent years, a collection of new techniques which deal with video as input data ..."
Abstract
-
Cited by 4 (1 self)
Abstract — In recent years, a collection of new techniques which deal with video as input data ...
Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop
- IEEE Transactions on Visualization and Computer Graphics
"... Abstract — Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical since videos may not be semantically annotated sufficiently, there may be a lack of suitable training dat ..."
Abstract
-
Cited by 3 (1 self)
Abstract — Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical since videos may not be semantically annotated sufficiently, there may be a lack of suitable training data, and the search requirements of the user may frequently change for different tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidate search results to support rapid interaction for active learning while minimizing the need to watch videos, and visualizing aggregated information of the search results. We demonstrate the system for searching spatiotemporal attributes from sports video to identify key instances of the team and player performance.
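The active machine learning step can be pictured with a generic pool-based uncertainty-sampling loop. The sketch below is not the paper's system; it assumes scikit-learn's LogisticRegression as a stand-in classifier and pre-extracted per-clip feature vectors, and the function and parameter names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_round(features, labels, labeled_idx, batch=10):
    """One round of pool-based active learning with least-confident sampling:
    fit a model on the clips labeled so far, then return the unlabeled clips
    the model is most uncertain about, so the analyst labels those next
    instead of watching every candidate video (illustrative sketch).

    features    : (N, D) feature vectors, e.g. spatiotemporal attributes per clip
    labels      : (N,) labels; only the entries indexed by labeled_idx are used
    labeled_idx : indices already labeled by the analyst (must cover 2+ classes)
    """
    model = LogisticRegression(max_iter=1000)
    model.fit(features[labeled_idx], labels[labeled_idx])
    pool = np.setdiff1d(np.arange(len(features)), labeled_idx)
    proba = model.predict_proba(features[pool])
    uncertainty = 1.0 - proba.max(axis=1)            # least-confident sampling
    query = pool[np.argsort(uncertainty)[::-1][:batch]]
    return model, query
```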
Space-time visual analytics of eye-tracking data for dynamic stimuli
- IEEE Transactions on Visualization and Computer Graphics
, 2013
"... Fig. 1. Space-time cube visualization of eye-tracking data for a video stimulus, enriched by spatiotemporal clustering of eye fixations. Abstract—We introduce a visual analytics method to analyze eye movement data recorded for dynamic stimuli such as video or animated graphics. The focus lies on the ..."
Abstract
-
Cited by 3 (1 self)
Fig. 1. Space-time cube visualization of eye-tracking data for a video stimulus, enriched by spatiotemporal clustering of eye fixations.
Abstract — We introduce a visual analytics method to analyze eye movement data recorded for dynamic stimuli such as video or animated graphics. The focus lies on the analysis of data of several viewers to identify trends in the general viewing behavior, including time sequences of attentional synchrony and objects with strong attentional focus. By using a space-time cube visualization in combination with clustering, the dynamic stimuli and associated eye gazes can be analyzed in a static 3D representation. Shot-based, spatiotemporal clustering of the data generates potential areas of interest that can be filtered interactively. We also facilitate data drill-down: the gaze points are shown with density-based color mapping and individual scan paths as lines in the space-time cube. The analytical process is supported by multiple coordinated views that allow the user to focus on different aspects of spatial and temporal information in eye gaze data. Common eye-tracking visualization techniques are extended to incorporate the spatiotemporal characteristics of the data. For example, heat maps are extended to motion-compensated heat maps and trajectories of scan paths are included in the space-time visualization. Our visual analytics approach is assessed in a qualitative user study with expert users, which showed the usefulness of the approach and uncovered that the experts applied different analysis strategies supported by the system. Index Terms — Eye-tracking, space-time cube, dynamic areas of interest, spatiotemporal clustering, motion-compensated heat map
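The abstract mentions shot-based spatiotemporal clustering of gaze data without naming the algorithm; as a hedged illustration, the sketch below clusters fixations in a scaled (x, y, t) space with DBSCAN from scikit-learn. The time_scale weighting and all names are assumptions rather than details from the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_fixations(x, y, t, eps=30.0, time_scale=0.1, min_samples=5):
    """Spatiotemporal clustering of eye fixations for a space-time cube view
    (illustrative sketch, not the paper's method).

    x, y       : fixation coordinates in pixels
    t          : fixation timestamps in milliseconds
    time_scale : weight turning time into pixel-equivalent units so a single
                 eps radius acts on the combined (x, y, t) space
    Returns one cluster label per fixation (-1 = noise); clusters can serve
    as candidate dynamic areas of interest.
    """
    points = np.column_stack([x, y, np.asarray(t, dtype=float) * time_scale])
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
```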
Video Visualization for Snooker Skill Training
"... We present a feasibility study on using video visualization to aid snooker skill training. By involving the coaches and players in the loop of intelligent reasoning, our approach addresses the difficulties of automated semantic reasoning, while benefiting from mature video processing techniques. Thi ..."
Abstract
-
Cited by 2 (1 self)
We present a feasibility study on using video visualization to aid snooker skill training. By involving the coaches and players in the loop of intelligent reasoning, our approach addresses the difficulties of automated semantic reasoning, while benefiting from mature video processing techniques. This work was conducted in conjunction with a snooker club and a sports scientist. In particular, we utilized the principal design of the VideoPerpetuoGram (VPG) to convey spatiotemporal information to the viewers through static visualization, removing the burden of repeated video viewing. We extended the VPG design to accommodate the need for depicting multiple video streams and respective temporal attribute fields, including silhouette extrusion, spatial attributes, and non-spatial attributes. Our results and evaluation have shown that video visualization can provide snooker coaching with visually quantifiable and comparable summary records, and is thus a cost-effective means for assessing skill levels and monitoring progress objectively and consistently. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture/Image Generation—Display algorithms I.3.m [Computer Graphics]: Miscellaneous—Video visualization
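The silhouette extrusion used in the extended VPG design presupposes per-frame silhouettes stacked along time. The sketch below is not the paper's pipeline; it assumes OpenCV's MOG2 background subtractor as a stand-in for whatever segmentation was actually used, and simply stacks the masks into a spatiotemporal volume.

```python
import cv2
import numpy as np

def silhouette_volume(frames):
    """Build a spatiotemporal silhouette volume from one snooker video stream:
    a background-subtraction mask per frame, stacked along time, ready to be
    extruded or volume-rendered in a VPG-style summary (illustrative sketch).

    frames : iterable of BGR frames of identical size (H, W, 3)
    Returns a (T, H, W) uint8 volume of binary foreground masks.
    """
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    masks = []
    for frame in frames:
        fg = subtractor.apply(frame)               # per-frame foreground mask
        masks.append((fg > 127).astype(np.uint8) * 255)
    return np.stack(masks)
```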
CycleStack: Inferring Periodic Behavior via Temporal Sequence Visualization in Ultrasound Video
"... A range of well-known treatment methods for destroying tumor and similar harmful growth in human body utilizes the coherence between the inherently periodic movement of the affected body part and periodic respiratory signal of the patient, with the objective of minimizing damage to surrounding norma ..."
Abstract
-
Cited by 1 (0 self)
A range of well-known treatment methods for destroying tumors and similar harmful growths in the human body utilizes the coherence between the inherently periodic movement of the affected body part and the periodic respiratory signal of the patient, with the objective of minimizing damage to surrounding normal tissues. Such methods require constant monitoring by an operator who observes the 3D body motion via its 2D projection onto an ultrasound imaging plane and studies the synchronism of this motion with the respiratory signal. Keeping an attentive eye on the respiratory signal as well as the ultrasound video for the entire treatment period is often inconvenient and burdensome. In this paper, we propose a video visualization technique called CycleStack Plot which reduces this cognitive overhead by blending the video and the signal together in a stack-like layout. This visualization reveals the inherent synchronism between the target’s movement and the respiratory signal, and visually highlights significant phase shifts of either of the two cyclic phenomena, with the aim of arresting the operator’s attention. Our proposed visualization also provides a visual overview for post-treatment analysis which enables educated users to quickly and effectively skim through an excessively long process. This paper demonstrates the utility of CycleStack Plot with a case study using real ultrasound videos. In addition, a user study has been performed to evaluate the merits and limitations of the proposed method with respect to the conventional way of watching a video and a signal side-by-side. Even though the motivation of the proposed visualization is the improvement of medical applications that use ultrasound, the core techniques discussed here have the potential to be extended to other application domains requiring analysis of cyclic patterns from videos.
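The phase shifts that the CycleStack Plot highlights can be quantified offline with a simple cross-correlation between the respiratory signal and the target displacement extracted from the video. The sketch below is an assumption-laden illustration (same-length, uniformly sampled signals; hypothetical function name), not part of the proposed system.

```python
import numpy as np

def phase_shift_frames(respiration, target_motion):
    """Estimate the lag, in frames, between a respiratory signal and the
    target's displacement along one axis, via the peak of the normalized
    cross-correlation. A large or drifting lag is the kind of phase shift a
    CycleStack-style view would make visually prominent (illustrative sketch;
    assumes both signals have the same length and sampling rate).
    """
    r = (respiration - respiration.mean()) / (respiration.std() + 1e-9)
    m = (target_motion - target_motion.mean()) / (target_motion.std() + 1e-9)
    corr = np.correlate(r, m, mode="full")      # full cross-correlation
    return int(np.argmax(corr)) - (len(m) - 1)  # peak offset relative to zero lag
```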
A Survey on Video-based Graphics and Video Visualization
, 2011
"... In recent years, a collection of new techniques which deal with video as input data, emerged in computer graphics and visualization. In this survey, we report the state of the art in video-based graphics and video visualization. We provide a comprehensive review of techniques for making photo-realis ..."
Abstract
-
Cited by 1 (0 self)
In recent years, a collection of new techniques which deal with video as input data has emerged in computer graphics and visualization. In this survey, we report the state of the art in video-based graphics and video visualization. We provide a comprehensive review of techniques for making photo-realistic or artistic computer-generated imagery from videos, as well as methods for creating summary and/or abstract visual representations to reveal important features and events in videos. We propose a new taxonomy to categorize the concepts and techniques in this newly-emerged body of knowledge. To support this review, we also give a concise overview of the major advances in automated video analysis, as some techniques in this field (e.g., feature extraction, detection, tracking and so on) have been featured in video-based modeling and rendering pipelines for graphics and visualization.
GPU-assisted Multi-field Video Volume Visualization
"... GPU-assisted multi-field rendering provides a means of generating effective video volume visualization that can convey both the objects in a spatiotemporal domain as well as the motion status of these objects. In this paper, we present a technical framework that enables combined volume and flow visu ..."
Abstract
- Add to MetaCart
GPU-assisted multi-field rendering provides a means of generating effective video volume visualization that can convey both the objects in a spatiotemporal domain and the motion status of these objects. In this paper, we present a technical framework that enables combined volume and flow visualization of a video to be synthesized using GPU-based techniques. A bricking-based volume rendering method is deployed for handling large video datasets in a scalable manner, which is particularly useful for synthesizing a dynamic visualization of a video stream. We have implemented a number of image processing filters, and in particular, we employ an optical flow filter for estimating motion flows in a video. We have devised mechanisms for combining volume objects in a scalar field with glyph and streamline geometry from an optical flow. We demonstrate the effectiveness of our approach with example visualizations constructed from two benchmark problems in computer vision.
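The optical flow filter and the glyph seeding it feeds can be sketched on the CPU. The example below is not the authors' GPU framework; it assumes OpenCV's Farnebäck dense flow as a stand-in estimator and samples the flow field on a coarse grid to obtain positions and vectors that arrow glyphs or streamline seeds could use.

```python
import cv2
import numpy as np

def flow_glyph_seeds(prev_gray, next_gray, step=16):
    """Estimate dense optical flow between two consecutive grayscale frames and
    sample it on a coarse grid, yielding (position, vector) pairs suitable for
    seeding arrow glyphs or streamlines over a video volume (illustrative sketch).
    """
    # Farneback parameters: pyr_scale, levels, winsize, iterations, poly_n,
    # poly_sigma, flags (typical defaults; not values from the paper).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
    vectors = flow[ys, xs]                        # sampled (dx, dy) per grid cell
    positions = np.stack([xs, ys], axis=-1)       # matching (x, y) grid positions
    return positions, vectors
```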