Results 1 - 10 of 3,407
Table 2: Movie review sentiment analysis mean-absolute-error for each author. (Column headers: Dataset, l/u/test, SVR, SSL, Improvement)
2006
Cited by 1
Table 1: Classification of movies in sample. Numbers in parentheses represent percentage of reviewed samples.
2003
"... In PAGE 10: ... Likewise, those with overt watermarks or textual markers also had shorter lag times. Table1 shows the classifications of the movies in our data set along with the average lag times for each classification. Note that we have multiple sam- ples for about half of the movies in our data set, for example both a through-the-air quality sample and a DVD quality sample.... ..."
Cited by 4
Table 1: Classification of movies in sample. Numbers in parentheses represent percentage of reviewed samples.
2003
Cited by 4
(Table 4), it is apparent that the system mostly confuses adjacent categories (e.g., 1 star with 2 stars, 4 stars with 5 stars, etc.). Confusion matrices of book and movie reviews also show the same pattern [5]. In Section 5, we present an analysis of the features that contributed to the success of the music review binary rating classification experiment. Table 3: Rating classification results (column header: Experiment)
2006
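The snippet above reads a pattern off a confusion matrix: most misclassifications fall on adjacent rating categories. The sketch below shows one way to quantify that share from a confusion matrix; the matrix values and the function name are illustrative assumptions, not figures from the cited paper.

```python
# Hedged sketch: measure how much of a rating classifier's error mass falls on
# adjacent categories (e.g., 1 star predicted as 2 stars). The matrix below is
# made up; rows are true ratings, columns are predicted ratings.

def adjacent_confusion_share(cm):
    """cm[i][j] = count of items with true class i predicted as class j."""
    n = len(cm)
    errors = sum(cm[i][j] for i in range(n) for j in range(n) if i != j)
    adjacent = sum(cm[i][j] for i in range(n) for j in range(n) if abs(i - j) == 1)
    return adjacent / errors if errors else 0.0

cm = [[50, 12,  3,  0,  0],
      [10, 40, 15,  2,  0],
      [ 2, 14, 45, 13,  1],
      [ 0,  3, 12, 42, 11],
      [ 0,  0,  2, 10, 48]]
print(f"{adjacent_confusion_share(cm):.1%} of errors fall on adjacent ratings")
```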
Table 1: SO accuracy, per review type.
2004
"... In PAGE 2: ... By averaging weighted SOs, and setting the split between negative and positive reviews at 0.228, we obtained the re- sults in Table1 . As we can see in the table, there are differ- ences between book, movie and music reviews on the one hand, and phones, cars and cookware on the other.... ..."
Cited by 10
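The snippet above describes classifying a review by averaging weighted semantic-orientation (SO) scores and splitting positive from negative at 0.228. A minimal sketch of that idea follows; the function name and the form of the per-phrase weights are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: classify a review by the weighted average of its phrases'
# semantic-orientation (SO) scores, splitting positive/negative at 0.228
# as reported in the snippet. Phrase scores and weights here are illustrative.

def classify_review(phrases, threshold=0.228):
    """phrases: list of (so_score, weight) pairs extracted from one review."""
    total_weight = sum(w for _, w in phrases)
    if total_weight == 0:
        return "negative"  # arbitrary fallback when no scored phrases are found
    avg_so = sum(so * w for so, w in phrases) / total_weight
    return "positive" if avg_so > threshold else "negative"

# Example usage with made-up phrase scores:
review = [(1.2, 1.0), (-0.4, 0.5), (0.8, 2.0)]
print(classify_review(review))  # -> "positive"
```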
Table 4: Top-10 results of the MAX algorithm for the query "movies"
"... In PAGE 10: ... The community that contains the seed node, and the co-citation of the seed node with the remaining nodes in the community determine the focus of the MAX algorithm. For example, Table4 (Appendix A) shows the top-10 results for the query movies . The seed node is the Internet Movie Database4 (IMDB), and the algorithm converges to a set of movie databases and movie reviews sites.... ..."
Table 1: Movie/Theater features and their meaning
2004
"... In PAGE 10: ... On the preferences screen, users indicate their ephemeral requirements for their movie search. They do this by providing information concerning nineteen features of movies and theaters including genre, MPAA rating, critical reviews, and distance to the theater ( Table1 ). For each feature the user may indicate the specific factors he considers important (e.... In PAGE 28: ...75 (0.66) Table1... ..."
Cited by 1
Table 2: Genre classification experiments
2006
"... In PAGE 23: ...assigned status. Table2 : Reasoning components for the ComplaintEngine tools (Table 1). The right column contains the examples of rules for domain-independent complaint-specific ontology.... In PAGE 37: ... 2. EXPERIMENTS ON OBJECT GENRES Table2 provides an overview of the genre classification tests on the book, movie and music reviews (5250 reviews in total). The genres involved are shown in Table 1.... ..."