Results 1 - 10 of 3,407

Table 8. Characteristics of the Movie Review Data Set

in unknown title (advised by: Peter Spirtes)
by Xue Bai 2005

Table 2: Movie review sentiment analysis mean-absolute-error for each author. (Columns: Dataset, l/u/test, SVR, SSL, Improvement)

in Semisupervised regression with order preferences
by Xiaojin Zhu, Andrew B. Goldberg 2006
Cited by 1
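
A minimal sketch of how a per-dataset mean-absolute-error (MAE) comparison like the one captioned above could be tabulated. The ratings and predictions below are invented placeholders, not data or code from Zhu and Goldberg.

    # Compare a supervised baseline (SVR) against a semisupervised model (SSL)
    # by mean-absolute-error on held-out ratings, reporting the relative improvement.
    def mean_absolute_error(y_true, y_pred):
        """Average absolute difference between true and predicted ratings."""
        return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

    # Hypothetical held-out ratings and model predictions for one author's reviews.
    y_test   = [3.0, 4.0, 2.0, 5.0, 1.0]
    svr_pred = [2.5, 3.0, 2.5, 4.0, 2.0]   # supervised baseline (placeholder values)
    ssl_pred = [2.8, 3.6, 2.2, 4.5, 1.4]   # semisupervised model (placeholder values)

    mae_svr = mean_absolute_error(y_test, svr_pred)
    mae_ssl = mean_absolute_error(y_test, ssl_pred)
    improvement = (mae_svr - mae_ssl) / mae_svr * 100.0
    print(f"SVR MAE={mae_svr:.3f}  SSL MAE={mae_ssl:.3f}  improvement={improvement:.1f}%")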

Table 1: Classification of movies in sample. Numbers in parentheses represent percentage of reviewed samples.

in Analysis of security vulnerabilities in the movie production and distribution process
by Simon Byers, Lorrie Cranor, Dave Kormann, Patrick Mcdaniel, Eric Cronin 2003
"... In PAGE 10: ... Likewise, those with overt watermarks or textual markers also had shorter lag times. Table1 shows the classifications of the movies in our data set along with the average lag times for each classification. Note that we have multiple sam- ples for about half of the movies in our data set, for example both a through-the-air quality sample and a DVD quality sample.... ..."
Cited by 4
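
A short sketch of the aggregation described in the excerpt: grouping the samples by classification and averaging their lag times. The records below are invented placeholders, not data from Byers et al.

    # Group movie samples by classification and report each group's share of the
    # sample and its mean lag time.
    from collections import defaultdict

    samples = [
        {"movie": "A", "classification": "DVD quality",     "lag_days": 12},
        {"movie": "A", "classification": "through-the-air", "lag_days": 3},
        {"movie": "B", "classification": "overt watermark", "lag_days": 5},
        {"movie": "C", "classification": "DVD quality",     "lag_days": 20},
    ]

    by_class = defaultdict(list)
    for s in samples:
        by_class[s["classification"]].append(s["lag_days"])

    for classification, lags in sorted(by_class.items()):
        share = 100.0 * len(lags) / len(samples)
        print(f"{classification:16s} n={len(lags)} ({share:.0f}%)  mean lag={sum(lags)/len(lags):.1f} days")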

Table 1: Classification of movies in sample. Numbers in parentheses represent percentage of reviewed samples.

in Analysis of Security Vulnerabilities in the Movie Production and Distribution Process
by Simon Byers, Lorrie Cranor, Dave Kormann, Patrick McDaniel 2003
Cited by 4

(Table 4), it is apparent that the system mostly confuses adjacent categories (e.g., 1 star with 2 stars, 4 stars with 5 stars, etc.). Confusion matrices of book and movie reviews show the same pattern [5]. In Section 5, we present an analysis of the features that contributed to the success of the music review binary rating classification experiment. Table 3: Rating classification results (column header: Experiment)

in TrendMine: Utilizing Authorship Profiling and Tone Analysis in Context
by Ozlem Uzuner, Michael Gamon, Julio Gonzalo, Iryna Gurevych, Gary Kacmarcik, Gilad Mishne, Yan Qu, Avik Sarkar, Kevyn Collins-thompson, James Shanahan, Navot Akiva, Johnathan Schler 2006
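
A hedged sketch of the adjacent-category pattern the excerpt describes: counting how many of a star-rating classifier's errors land on a neighbouring star value. The ratings below are placeholders, not the paper's system output.

    # Build a small confusion count and measure what fraction of the errors
    # confuse adjacent categories (1 vs 2 stars, 4 vs 5 stars, and so on).
    from collections import Counter

    true_stars = [1, 2, 2, 3, 4, 5, 5, 4, 3, 1]
    pred_stars = [2, 2, 1, 3, 5, 4, 5, 4, 2, 1]   # hypothetical system output

    confusion = Counter(zip(true_stars, pred_stars))
    errors = [(t, p) for t, p in zip(true_stars, pred_stars) if t != p]
    adjacent = [e for e in errors if abs(e[0] - e[1]) == 1]

    print("confusion counts:", dict(confusion))
    print(f"{len(adjacent)} of {len(errors)} errors confuse adjacent categories")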

Table 1: SO accuracy, per review type.

in Analyzing appraisal automatically
by Maite Taboada, Jack Grieve 2004
"... In PAGE 2: ... By averaging weighted SOs, and setting the split between negative and positive reviews at 0.228, we obtained the re- sults in Table1 . As we can see in the table, there are differ- ences between book, movie and music reviews on the one hand, and phones, cars and cookware on the other.... ..."
Cited by 10
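
A minimal sketch of the thresholding step the excerpt describes: average a review's weighted semantic-orientation (SO) values and label the review positive if the average clears the reported 0.228 split. The SO values and weights below are placeholders, not Taboada and Grieve's lexicon.

    # Classify a review by its weighted-average semantic orientation.
    def classify_review(weighted_sos, threshold=0.228):
        """weighted_sos: list of (so_value, weight) pairs for the words in one review."""
        total_weight = sum(w for _, w in weighted_sos)
        avg_so = sum(so * w for so, w in weighted_sos) / total_weight
        return ("positive" if avg_so >= threshold else "negative"), avg_so

    review = [(1.5, 1.0), (-0.4, 0.5), (0.9, 1.2), (-2.0, 0.3)]   # placeholder (SO, weight) pairs
    label, score = classify_review(review)
    print(f"average weighted SO = {score:.3f} -> {label}")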

Table 4: Top-10 results of the MAX algorithm for the query movies

in Using Non-Linear Dynamical Systems for Web Searching and Ranking
by unknown authors
"... In PAGE 10: ... The community that contains the seed node, and the co-citation of the seed node with the remaining nodes in the community determine the focus of the MAX algorithm. For example, Table4 (Appendix A) shows the top-10 results for the query movies . The seed node is the Internet Movie Database4 (IMDB), and the algorithm converges to a set of movie databases and movie reviews sites.... ..."

Table 1: Movie/Theater features and their meaning

in View through MetaLens: Usage Patterns for a Meta-Recommendation System
by J. Ben Schafer, Joseph A. Konstan, John Riedl 2004
"... In PAGE 10: ... On the preferences screen, users indicate their ephemeral requirements for their movie search. They do this by providing information concerning nineteen features of movies and theaters including genre, MPAA rating, critical reviews, and distance to the theater ( Table1 ). For each feature the user may indicate the specific factors he considers important (e.... In PAGE 28: ...75 (0.66) Table1... ..."
Cited by 1
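
A sketch of how per-feature importance weights of the kind the excerpt describes (genre, MPAA rating, critical reviews, distance to the theater) could be combined into one score for a movie/theater pair. The feature names, weights, and scoring scheme are illustrative assumptions, not MetaLens's actual formula.

    # Combine how well a candidate satisfies each feature (0..1) with the user's
    # stated importance for that feature (0 = doesn't matter, 1 = very important).
    user_importance = {
        "genre": 1.0,
        "mpaa_rating": 0.5,
        "critical_reviews": 0.8,
        "distance_to_theater": 0.6,
    }

    def score(candidate, importance):
        """Importance-weighted average of per-feature satisfaction values."""
        total = sum(importance.values())
        return sum(importance[f] * candidate.get(f, 0.0) for f in importance) / total

    candidate = {"genre": 1.0, "mpaa_rating": 1.0, "critical_reviews": 0.7, "distance_to_theater": 0.4}
    print(f"candidate score = {score(candidate, user_importance):.2f}")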

Table 2: Genre classification experiments

in TrendMine: Utilizing Authorship Profiling and Tone Analysis in Context
by Ozlem Uzuner, Michael Gamon, Julio Gonzalo, Iryna Gurevych, Gary Kacmarcik, Gilad Mishne, Yan Qu, Avik Sarkar, Kevyn Collins-thompson, James Shanahan, Navot Akiva, Johnathan Schler 2006
"... In PAGE 23: ...assigned status. Table2 : Reasoning components for the ComplaintEngine tools (Table 1). The right column contains the examples of rules for domain-independent complaint-specific ontology.... In PAGE 37: ... 2. EXPERIMENTS ON OBJECT GENRES Table2 provides an overview of the genre classification tests on the book, movie and music reviews (5250 reviews in total). The genres involved are shown in Table 1.... ..."