Results 1 - 10 of 14,502

TABLE 2 RESEARCH ON VISUAL LEARNING

in an article hosted on the SAGE Journals Online and HighWire Press platforms (this article cites 31 articles hosted there):
by Irvine Clarke, Theresa B. Flaherty, Michael Yankey

Table 1. Related work in visual learning.

in Visual learning by evolutionary feature synthesis
by Krzysztof Krawiec, Bir Bhanu 2003
"... In PAGE 1: ... Current recognition systems are mostly open-loop and human input in the design of these systems is still predominant. Only a few contributions, summarized in Table1 , attempt to close the feedback loop of the learning process at the highest (e.g.... ..."
Cited by 5
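
As a reading aid for what "closing the feedback loop" means here: the recognition accuracy obtained with candidate features is fed back to drive the search for better features. The sketch below is a toy evolutionary version of that idea; the operator set, data, and 1-NN fitness are illustrative assumptions, not Krawiec and Bhanu's actual representation.

    # Toy sketch of closed-loop evolutionary feature synthesis (all names invented).
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "images" are flat vectors; a feature extractor is a list of primitive ops.
    OPS = {
        "mean": lambda x: x.mean(axis=1, keepdims=True),
        "std": lambda x: x.std(axis=1, keepdims=True),
        "max": lambda x: x.max(axis=1, keepdims=True),
        "grad": lambda x: np.abs(np.diff(x, axis=1)).mean(axis=1, keepdims=True),
    }

    def extract(program, X):
        """Apply each primitive op and concatenate the results into feature vectors."""
        return np.hstack([OPS[name](X) for name in program])

    def fitness(program, X, y):
        """Feedback signal: leave-one-out 1-NN accuracy on the synthesized features."""
        F = extract(program, X)
        D = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=2)
        np.fill_diagonal(D, np.inf)            # a sample may not match itself
        return float((y[D.argmin(axis=1)] == y).mean())

    def mutate(program):
        p = list(program)
        p[rng.integers(len(p))] = rng.choice(list(OPS))
        return p

    # Synthetic two-class data: class 1 has higher variance, so spread-like features help.
    X = np.vstack([rng.normal(0, 1, (30, 64)), rng.normal(0, 2, (30, 64))])
    y = np.array([0] * 30 + [1] * 30)

    population = [list(rng.choice(list(OPS), size=3)) for _ in range(10)]
    for generation in range(20):               # accuracy feeds back into the search
        population.sort(key=lambda p: fitness(p, X, y), reverse=True)
        population = population[:5] + [mutate(p) for p in population[:5]]

    best = population[0]
    print("best program:", best, "LOO accuracy:", fitness(best, X, y))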

Table 2. Mean ratings and their SD for performance, effectiveness of visual space, and learning under the two camera conditions

in Dynamic shared visual spaces: Experimenting with automatic camera control in a remote repair task
by Abhishek Ranjan, Jeremy P. Birnholtz, Ravin Balakrishnan 2007
"... In PAGE 7: ... In all of these cases, workers were assessing the perceived utility of this information to their partners, since they themselves were not relying on the video view. As Table2 shows, participants generally did not find the video useful (as the mean rating is below the midpoint on the 7-point scale) in the static camera condition, but did find it to be useful in the automatic camera condition (F(1,20)=45.... ..."
Cited by 1
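
A note on the truncated statistic: with only two within-subject conditions, a repeated-measures F(1, n-1) equals the square of a paired t statistic, and F(1,20) implies 21 paired observations. The sketch below uses fabricated 7-point ratings, not the study's data.

    # Paired comparison of two within-subject conditions (fabricated ratings).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 21                                                   # F(1, 20) implies n = 21
    static = np.clip(rng.normal(3.0, 1.0, n).round(), 1, 7)  # 7-point scale ratings
    auto = np.clip(rng.normal(5.5, 1.0, n).round(), 1, 7)

    t, p = stats.ttest_rel(auto, static)                     # paired t-test
    print(f"t({n - 1}) = {t:.2f}, p = {p:.4f}, equivalent F(1,{n - 1}) = {t**2:.2f}")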

TABLE II DETERMINATION OF THE ADDRESSEE, BASED ON ESTIMATING THE VISUAL TARGET. RESULTS WITH TRUE AND LEARNED PARAMETERS.

in IEEE Transactions on Robotics, Special Issue on Human-Robot Interaction: Enabling Multimodal Human-Robot Interaction for the Karlsruhe Humanoid Robot
by Rainer Stiefelhagen, Hazım Kemal Ekenel, Christian Fügen, Petra Gieselmann, Hartwig Holzapfel, Florian Kraft, Kai Nickel, Michael Voit, Alex Waibel

Table 4 An Example of the Visual Concept GeometricConcept. Some geometric features are given. These features are used during the visual concept learning process. Restrictions on the domain of the features are also defined.

in Ontology Based Complex Object Recognition
by Nicolas Maillot, Monique Thonnat (INRIA Sophia Antipolis, Orion team)
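
As a rough illustration of the caption's idea, a visual concept can be represented as a set of geometric features, each restricted to an allowed domain, and an observed object matches the concept only if all restrictions hold. The feature names and ranges below are invented, not taken from the paper's ontology.

    # Hypothetical visual concept with feature-domain restrictions (names invented).
    from dataclasses import dataclass, field

    @dataclass
    class GeometricConcept:
        name: str
        restrictions: dict = field(default_factory=dict)  # feature -> (min, max)

        def matches(self, features: dict) -> bool:
            """True if every restricted feature is present and inside its domain."""
            return all(f in features and lo <= features[f] <= hi
                       for f, (lo, hi) in self.restrictions.items())

    concept = GeometricConcept("ElongatedBlob",
                               {"eccentricity": (0.8, 1.0), "area": (50.0, 500.0)})
    print(concept.matches({"eccentricity": 0.92, "area": 120.0}))  # True
    print(concept.matches({"eccentricity": 0.30, "area": 120.0}))  # False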

Table 4: Determination of the acoustical addressee, based on (visually) estimated head pose. Results with true and learned model parameters (distributions and priors) are given.

in Identifying the Addressee in Human-Human-Robot Interactions Based on Head Pose and Speech
by Michael Katzenmaier, et al.
"... In PAGE 4: ....4.3 Estimation of the addressee Since our previous experiments indicate that visual focus is a good indicator for the addressee of an utterance - es- pecially if the visual target was a human - we can use the estimated visual target as an estimate of the (acoustic) ad- dressee. Table4 summarizes the results of detection of the addressee based on estimating the visual target as described in the previous section. Result for both, hand-tuned true head pose distributions, as well as learned distributions and... ..."

Table 1. Texture clusters used in learning similarity. The visual similarity within each cluster was identified by the people in our research group. (Column headers: Cluster, Texture Class.)

in Learning Similarity for Texture Image Retrieval
by Guodong Guo , Stan Z. Li, Kap Luk Chan 2000
"... In PAGE 9: ...; ; 5). The dimension of the feature vector is thus 48. The 112 texture image classes are grouped into 32 clusters, each containing 1 to 8 similar textures. This classi cation was done manually and Table1 shows the various clusters and their corresponding texture classes. Note that we use all the 112 texture classes.... ..."
Cited by 5
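
The truncated snippet does not say which filters produce the 48 dimensions, but one standard construction that reaches exactly 48 is taking the mean and standard deviation of the magnitude responses of a 4-scale by 6-orientation Gabor filter bank (4 x 6 x 2 = 48). The bank layout below is therefore an assumption, not necessarily the paper's exact design.

    # 48-dim texture descriptor from an assumed 4-scale x 6-orientation Gabor bank.
    import numpy as np
    from skimage.filters import gabor

    def texture_features(image: np.ndarray) -> np.ndarray:
        """Return (mean, std) of each Gabor magnitude response: 4 * 6 * 2 = 48 values."""
        feats = []
        for frequency in (0.1, 0.2, 0.3, 0.4):         # 4 scales (assumed)
            for k in range(6):                         # 6 orientations (assumed)
                real, imag = gabor(image, frequency=frequency, theta=k * np.pi / 6)
                magnitude = np.hypot(real, imag)       # magnitude of complex response
                feats += [magnitude.mean(), magnitude.std()]
        return np.asarray(feats)                       # shape: (48,)

    demo = np.random.default_rng(0).random((64, 64))
    print(texture_features(demo).shape)                # -> (48,)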

Table 2. Visual features (visual object/technique)

in unknown title
by unknown authors
"... In PAGE 4: ...g., Winn and Holliday, 1982; Kosslyn, 1989; Keller and Keller, 1993; Tufte, 1997), we have identified and summa- rized a set of features for describing various visual objects and visual techniques ( Table2 ). Compared with previous studies, which usually use a qualitative analysis, our feature set describes visual objects and techniques at a much finer granularity.... In PAGE 6: ... From about 500 illustrations, we have identified 60 presen- tations that are related to visual revealing. Based on our learning goal in Formula (3), we initially extract five features from Table 1 and Table2 as our learn- ing input. We use one target output to specify one of the three Reveal techniques: Expose, Separate, and Overlay.... ..."
