Results 1 - 4 of 4
Discovering Latent Domains for Multisource Domain Adaptation
Cited by 18 (2 self)
Abstract. Recent domain adaptation methods successfully learn cross-domain transforms to map points between source and target domains. Yet these methods are either restricted to a single training domain, or assume that the separation into source domains is known a priori. However, most available training data contains multiple unknown domains. In this paper, we present both a novel domain transform mixture model which outperforms a single-transform model when multiple domains are present, and a novel constrained clustering method that successfully discovers latent domains. Our discovery method is based on a novel hierarchical clustering technique that uses available object category information to constrain the set of feasible domain separations. To illustrate the effectiveness of our approach, we present experiments on two commonly available image datasets with and without known domain labels: in both cases our method outperforms baseline techniques which use no domain adaptation, or domain adaptation methods that presume a single underlying domain shift.
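The core idea, clustering data into latent domains while using category labels as constraints, can be illustrated with a minimal sketch. This is not the paper's actual algorithm; it substitutes a simple per-category k-means followed by centroid matching across categories, and the function name `discover_latent_domains` and all parameters are hypothetical.

```python
import numpy as np

def discover_latent_domains(X, categories, n_domains, n_iter=20):
    """Toy latent-domain discovery: cluster each category separately,
    then match clusters across categories by centroid proximity so that
    each discovered domain spans all categories (the category-based
    constraint, loosely interpreted)."""
    cats = np.unique(categories)
    per_cat = {}
    for c in cats:
        Xc = X[categories == c]
        # farthest-point initialization for a small, deterministic k-means
        idx = [0]
        for _ in range(1, n_domains):
            d = np.min(np.linalg.norm(Xc[:, None] - Xc[idx][None], axis=2), axis=1)
            idx.append(int(d.argmax()))
        centers = Xc[idx].copy()
        for _ in range(n_iter):
            d = np.linalg.norm(Xc[:, None] - centers[None], axis=2)
            assign = d.argmin(1)
            for k in range(n_domains):
                if np.any(assign == k):
                    centers[k] = Xc[assign == k].mean(0)
        per_cat[c] = (assign, centers)
    # align every category's clusters to the first category's clusters
    ref = per_cat[cats[0]][1]
    domains = np.empty(len(X), dtype=int)
    for c in cats:
        assign, centers = per_cat[c]
        mapping = np.linalg.norm(centers[:, None] - ref[None], axis=2).argmin(1)
        domains[categories == c] = mapping[assign]
    return domains
```

On synthetic data where each category appears in two well-separated feature clusters, this recovers a consistent domain label across categories; the paper's method instead uses constrained hierarchical clustering and a learned transform mixture.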
Action Recognition in the Presence of One Egocentric and Multiple Static Cameras
Abstract. In this paper, we study the problem of recognizing human actions in the presence of a single egocentric camera and multiple static cameras. Some actions are better presented in static cameras, where the whole body of an actor and the context of actions are visible. Some other actions are better recognized in egocentric cameras, where subtle movements of hands and complex object interactions are visible. In this paper, we introduce a model that can benefit from the best of both worlds by learning to predict the importance of each camera in recognizing actions in each frame. By joint discriminative learning of latent camera importance variables and action classifiers, our model achieves successful results in the challenging CMU-MMAC dataset. Our experimental results show significant gain in learning to use the cameras according to their predicted importance. The learned latent variables provide a level of understanding of a scene that enables automatic cinematography by smoothly switching between cameras in order to maximize the amount of relevant information in each frame.
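The fusion step described above, weighting each camera per frame by a predicted importance, can be sketched as a softmax gate over per-camera classifier scores. This is only an illustration of the idea under stated assumptions, not the paper's joint discriminative model; `fuse_camera_scores` and its inputs are hypothetical names.

```python
import numpy as np

def fuse_camera_scores(scores, importance_logits):
    """Combine per-camera action scores with per-frame camera importance.

    scores:            (n_frames, n_cameras, n_actions) classifier scores
    importance_logits: (n_frames, n_cameras) predicted camera importance

    Softmax the importance over cameras per frame, then take the
    importance-weighted sum of the camera scores."""
    w = np.exp(importance_logits - importance_logits.max(1, keepdims=True))
    w /= w.sum(1, keepdims=True)
    return (w[..., None] * scores).sum(1)  # (n_frames, n_actions)
```

In the paper the importance variables are latent and learned jointly with the action classifiers; here the logits are simply given, which is enough to show how the per-frame camera switch falls out of the weighting.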
Opened Coffee Jar
In this paper we present a model of action based on the change in the state of the environment. Many actions involve similar dynamics and hand-object relationships, but differ in their purpose and meaning. The key to differentiating these actions is the ability to identify how they change the state of objects and materials in the environment. We propose a weakly supervised method for learning the object and material states that are necessary for recognizing daily actions. Once these state detectors are learned, we can apply them to input videos and pool their outputs to detect actions. We further demonstrate that our method can be used to segment discrete actions from a continuous video of an activity. Our results outperform state-of-the-art action recognition and activity segmentation results.
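The pooling idea, scoring an action by the state change its video induces, can be sketched with a toy pooling rule: compare state-detector responses early in the clip against those late in the clip. This is a minimal illustration, not the paper's method; `action_score_from_states` and the thirds-based pooling are assumptions made for the sketch.

```python
import numpy as np

def action_score_from_states(frame_probs):
    """frame_probs: (n_frames, n_states) per-frame state-detector outputs.

    An action that changes state s_from -> s_to should show s_from active
    early and s_to active late. Pool by averaging the first and last thirds
    of the clip; positive entries are states the action produced, negative
    entries are states it removed."""
    n = len(frame_probs)
    third = max(1, n // 3)
    early = frame_probs[:third].mean(0)
    late = frame_probs[-third:].mean(0)
    return late - early
```

For a clip where a "closed" detector fires early and an "opened" detector fires late, the score vector is positive for "opened" and negative for "closed", which is the signature an action classifier would pool over.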
Learning Descriptive Models of Objects and Activities from Egocentric Video
, 2013
ACKNOWLEDGEMENTS Many thanks to my mother Sussan. I owe you all of this. Also thanks to my father Mohammad and my sister Shaghayegh.