Results 1 - 3 of 3
Fusion of Foreground Object, Spatial and Frequency Domain Motion Information for Video Summarization
"... Abstract. Surveillance video camera captures a large amount of continuous video stream every day. To analyze or investigate any significant events from the huge video data, it is laborious and boring job to identify these events. To solve this problem, a video summarization technique combining foreg ..."
Abstract
Surveillance cameras capture large amounts of continuous video every day, and identifying significant events in this huge volume of data for analysis or investigation is a laborious and tedious job. To address this problem, this paper proposes a video summarization technique that combines foreground objects with motion information in the spatial and frequency domains. Foreground objects are extracted using background modeling, spatial-domain motion information is obtained from frame transitions, and frequency-domain motion information is acquired with the phase correlation (PC) technique. The foreground objects and the spatial- and frequency-domain motion cues are then fused, and key frames are extracted. Experimental results show that the proposed method outperforms the state-of-the-art method.
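The frequency-domain step in this abstract relies on standard phase correlation between consecutive frames. Below is a minimal sketch of that computation using NumPy FFTs; the function name, the use of the correlation peak as a motion-strength score, and the handling of frame loading are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of phase correlation (PC) between two grayscale frames.
# Background modeling and key-frame fusion are out of scope here.
import numpy as np

def phase_correlation(frame_a: np.ndarray, frame_b: np.ndarray):
    """Estimate the translation between two equally sized grayscale frames.

    Returns (dy, dx, peak), where peak is the correlation peak height and
    can serve as a simple motion-strength score for key-frame selection.
    """
    fa = np.fft.fft2(frame_a.astype(np.float64))
    fb = np.fft.fft2(frame_b.astype(np.float64))
    # Cross-power spectrum, normalized so only phase information remains.
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    peak = corr.max()
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak coordinates to signed shifts (wrap-around correction).
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx, peak
```

A larger translation or a lower peak between consecutive frames would indicate stronger motion, which is the kind of frequency-domain cue the abstract describes fusing with foreground and spatial-domain information.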
Event Detection and Highlight Detection of Broadcasted Game Videos
"... Efficient access of game videos is urgently demanded due to the emergence of live streaming platforms and the explosive numbers of gamers and viewers. In this work we facilitate efficient access from two aspects: game event detection and highlight detection. By recognizing predefined text displayed ..."
Abstract
Efficient access to game videos is in urgent demand due to the emergence of live streaming platforms and the explosive numbers of gamers and viewers. In this work we facilitate efficient access from two aspects: game event detection and highlight detection. By recognizing predefined text displayed on screen when certain events occur, we associate game events with timestamps to facilitate direct access. We jointly consider visual features, events, and viewers' reactions to construct two highlight models and enable compact game presentation. Experimental results show the effectiveness of the proposed methods. As one of the early attempts at analyzing broadcast game videos from the perspective of multimedia content analysis, our contributions are twofold. First, we design and extract game-specific features considering visual content, event semantics, and viewers' reactions. Second, we integrate clues from these three domains based on a psychological approach and a data-driven approach to characterize game highlights.
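The abstract describes fusing cues from three domains (visual content, detected events, viewer reaction) to score highlights. The sketch below shows one simple late-fusion scheme; the equal-style weights, min-max normalization, and fixed-length top-k segment selection are assumptions for illustration, not the paper's highlight models.

```python
# Minimal sketch of late fusion of per-frame cues (visual activity, detected
# events, viewer reaction) into a highlight score, followed by naive selection
# of the highest-scoring non-overlapping segments.
import numpy as np

def normalize(x: np.ndarray) -> np.ndarray:
    """Min-max normalize a 1-D score sequence to [0, 1]."""
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo + 1e-12)

def highlight_scores(visual, event, reaction, weights=(0.4, 0.3, 0.3)):
    """Fuse three per-frame score sequences into one highlight score."""
    cues = [normalize(np.asarray(s, dtype=float)) for s in (visual, event, reaction)]
    return sum(w * c for w, c in zip(weights, cues))

def top_segments(scores, segment_len=150, k=3):
    """Pick k non-overlapping fixed-length segments with the highest total score."""
    sums = np.convolve(scores, np.ones(segment_len), mode="valid")
    available = np.ones(len(sums), dtype=bool)
    picked = []
    for _ in range(k):
        if not available.any():
            break
        idx = int(np.argmax(np.where(available, sums, -np.inf)))
        picked.append((idx, idx + segment_len))
        # Block out frames that would overlap the chosen segment.
        available[max(0, idx - segment_len + 1):idx + segment_len] = False
    return picked
```

In practice the event cue could come from recognizing the predefined on-screen text mentioned in the abstract and the reaction cue from chat or audience activity, but those extraction steps are assumed here rather than implemented.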
Connotative Feature Extraction For Movie Recommendation
"... It is difficult to assess the emotions subject to the emotional responses to the content of the film by exploring the film connotative properties. Connotation is used to represent the emotions described by the audiovisual descriptors so that it predicts the emotional reaction of user. The connotativ ..."
Abstract
Assessing the emotional responses that a film elicits is difficult; exploring the film's connotative properties offers one way to do so. Connotation represents the emotions conveyed by audiovisual descriptors and can therefore be used to predict a user's emotional reaction, and these connotative features can in turn drive movie recommendation. Various methodologies exist for movie recommendation, and this paper gives a comparative analysis of some of them. It also introduces audio features that are useful for analyzing the emotions represented in movie scenes and shows how such features can be mapped to emotions. Specifically, the paper provides a methodology for mapping audio features to emotional states such as happiness, sleepiness, excitement, sadness, relaxation, anger, distress, fear, tension, and boredom, as well as comedy and fight scenes. The movie's audio is used for connotative feature extraction, which is extended to recognize emotions. Finally, the paper provides a comparative analysis of methods that can be used to recommend movies based on the user's emotions.
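The abstract centers on mapping audio features to emotional states. The following is a minimal sketch of such a mapping using two crude descriptors (RMS energy and zero-crossing rate) and a valence-arousal style rule; the feature choice, thresholds, and label groupings are hypothetical assumptions, not the paper's methodology.

```python
# Minimal sketch: map simple audio descriptors to coarse emotion labels.
# Thresholds and labels below are illustrative assumptions only.
import numpy as np

def audio_features(samples: np.ndarray) -> dict:
    """Compute two crude descriptors from a mono audio signal in [-1, 1]."""
    samples = samples.astype(np.float64)
    rms = float(np.sqrt(np.mean(samples ** 2)))                  # loudness proxy
    zcr = float(np.mean(np.abs(np.diff(np.sign(samples)))) / 2)  # brightness proxy
    return {"rms_energy": rms, "zero_crossing_rate": zcr}

def map_to_emotion(features: dict) -> str:
    """Map descriptors to a coarse emotion label with hypothetical thresholds."""
    energetic = features["rms_energy"] > 0.1        # assumed arousal cutoff
    bright = features["zero_crossing_rate"] > 0.05  # assumed valence cutoff
    if energetic and bright:
        return "excitement"
    if energetic:
        return "anger/tension"
    if bright:
        return "happiness/relaxation"
    return "sadness/boredom"
```

A real system would use richer descriptors (e.g., tempo, spectral features) and learned rather than hand-set thresholds, but the structure, features in and an emotion label out, matches the mapping the abstract describes.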