Recognizing action at a distance (2003)

by Alexei A. Efros, Alexander C. Berg, Greg Mori, Jitendra Malik
Venue: Proceedings of the IEEE International Conference on Computer Vision
Citations: 504 (20 self)

BibTeX

@INPROCEEDINGS{Efros03recognizingaction,
    author = {Alexei A. Efros and Alexander C. Berg and Greg Mori and Jitendra Malik},
    title = {Recognizing action at a distance},
    booktitle = {PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION},
    year = {2003},
    pages = {726--733}
}

Abstract

Our goal is to recognize human actions at a distance, at resolutions where a whole person may be, say, 30 pixels tall. We introduce a novel motion descriptor based on optical flow measurements in a spatio-temporal volume for each stabilized human figure, and an associated similarity measure to be used in a nearest-neighbor framework. Making use of noisy optical flow measurements is the key challenge, which is addressed by treating optical flow not as precise pixel displacements, but rather as a spatial pattern of noisy measurements that are carefully smoothed and aggregated to form our spatio-temporal motion descriptor. To classify the action being performed by a human figure in a query sequence, we retrieve nearest neighbor(s) from a database of stored, annotated video sequences. We can also use these retrieved exemplars to transfer 2D/3D skeletons onto the figures in the query sequence, as well as to support two forms of data-based action synthesis: “Do as I Do” and “Do as I Say”. Results are demonstrated on ballet, tennis, and football datasets.
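The abstract's pipeline (smooth and aggregate a noisy per-frame flow field into a descriptor, then compare descriptors for nearest-neighbor retrieval) can be sketched roughly as below. This is an illustrative reconstruction, not the authors' code: the split of flow into four half-wave-rectified channels follows the paper's approach, while the blur width and the normalized-correlation similarity shown here are simplified assumptions for a single frame rather than a full spatio-temporal volume.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def motion_descriptor(flow, sigma=1.5):
    """Per-frame motion descriptor sketch (not the authors' exact code).

    flow: (H, W, 2) optical-flow field (Fx, Fy) for one stabilized,
    human-centered frame. Each component is half-wave rectified into
    positive/negative channels and Gaussian-blurred, so noisy flow is
    treated as a smoothed spatial pattern rather than exact pixel
    displacements. sigma is an assumed, illustrative blur width.
    """
    fx, fy = flow[..., 0], flow[..., 1]
    channels = [np.maximum(fx, 0), np.maximum(-fx, 0),
                np.maximum(fy, 0), np.maximum(-fy, 0)]
    return np.stack([gaussian_filter(c, sigma) for c in channels], axis=-1)

def similarity(desc_a, desc_b, eps=1e-8):
    """Normalized correlation between two frame descriptors."""
    a, b = desc_a.ravel(), desc_b.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def classify(query_desc, database):
    """Nearest-neighbor retrieval over (label, descriptor) exemplars."""
    return max(database, key=lambda item: similarity(query_desc, item[1]))[0]
```

In a full implementation the descriptors would span a temporal window of frames, with frame-to-frame similarities summed over that window before retrieving the nearest annotated exemplar.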

Keyphrases

query sequence, noisy optical flow measurement, video sequence, noisy measurement, human figure, precise pixel displacement, similarity measure, novel motion descriptor, data-based action synthesis, spatio-temporal volume, retrieved exemplar, key challenge, whole person, spatial pattern, football datasets, optical flow, spatio-temporal motion descriptor, optical flow measurement, human action, stabilized human figure, nearest-neighbor framework
