Results 1 - 10 of 343
Making mashups with Marmite: towards end-user programming for the web
- In Proc. CHI ’07, 2007
"... There is a tremendous amount of web content available today, but it is not always in a form that supports end-users’ needs. In many cases, all of the data and services needed to accomplish a goal already exist, but are not in a form amenable to an end-user. To address this problem, we have developed ..."
Abstract
-
Cited by 121 (1 self)
- Add to MetaCart
(Show Context)
There is a tremendous amount of web content available today, but it is not always in a form that supports end-users’ needs. In many cases, all of the data and services needed to accomplish a goal already exist, but are not in a form amenable to an end-user. To address this problem, we have developed an end-user programming tool called Marmite, which lets end-users create so-called mashups that re-purpose and combine existing web content and services. In this paper, we present the design, implementation, and evaluation of Marmite. An informal user study found that programmers and some spreadsheet users had little difficulty using the system.
Author Keywords: mashup, end-user programming, web, spreadsheet, user study
ACM Classification Keywords: H5.m. Information interfaces and presentation (e.g., HCI)
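The paper describes a browser-based tool, but the dataflow idea it rests on is easy to sketch. Below is a minimal, hypothetical pipeline (all names invented here, not Marmite's API): a source feeds rows through a filter and an annotating operator, the way a Marmite-style mashup chains operators over web data.

```python
# Toy illustration of dataflow-style mashup composition: rows flow through
# a chain of operators, each filtering or adding columns. Hypothetical
# sketch only; this is not Marmite's actual code.

def source(rows):
    """A data source: yields dictionaries, one per row."""
    yield from rows

def select(rows, predicate):
    """Keep only rows matching the predicate."""
    return (r for r in rows if predicate(r))

def annotate(rows, fn):
    """Add derived columns to each row."""
    for r in rows:
        yield dict(r, **fn(r))

events = [
    {"title": "Jazz night", "city": "Pittsburgh"},
    {"title": "Film festival", "city": "Boston"},
]

pipeline = annotate(
    select(source(events), lambda r: r["city"] == "Pittsburgh"),
    lambda r: {"map_url": "https://maps.example.com/?q=" + r["city"]},
)

for row in pipeline:
    print(row)
```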
ConceptNet 3: a flexible, multilingual semantic network for common sense knowledge
- In the 22nd Conference on Artificial Intelligence, 2007
"... The Open Mind Common Sense project has been collecting common-sense knowledge from volunteers on the Internet since 2000. This knowledge is represented in a machine-interpretable semantic network called ConceptNet. We present ConceptNet 3, which improves the acquisition of new knowledge in ConceptNe ..."
Abstract
-
Cited by 98 (19 self)
- Add to MetaCart
(Show Context)
The Open Mind Common Sense project has been collecting common-sense knowledge from volunteers on the Internet since 2000. This knowledge is represented in a machine-interpretable semantic network called ConceptNet. We present ConceptNet 3, which improves the acquisition of new knowledge in ConceptNet and facilitates turning edges of the network back into natural language. We show how its modular design helps it adapt to different data sets and languages. Finally, we evaluate the content of ConceptNet 3, showing that the information it contains is comparable with WordNet and the …
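As a rough illustration of the "edges back into natural language" idea, the sketch below renders ConceptNet-style (relation, concept, concept) edges through per-relation templates. The relation names mirror ConceptNet's; the template strings and function are this example's own, not the project's generation code.

```python
# Render semantic-network edges back into English via relation templates.
# Illustrative sketch; templates are invented for the example.

TEMPLATES = {
    "IsA":        "{0} is a kind of {1}",
    "UsedFor":    "{0} is used for {1}",
    "AtLocation": "you are likely to find {0} in {1}",
}

def edge_to_text(relation, left, right):
    """Turn one (relation, left concept, right concept) edge into a sentence."""
    return TEMPLATES[relation].format(left, right)

print(edge_to_text("UsedFor", "a kettle", "boiling water"))
# -> a kettle is used for boiling water
```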
Adding semantics to detectors for video retrieval
- IEEE Transactions on Multimedia, 2007
"... Abstract — In this paper, we propose an automatic video retrieval method based on high-level concept detectors. Research in video analysis has reached the point where over 100 concept detectors can be learned in a generic fashion, albeit with mixed performance. Such a set of detectors is very small ..."
Abstract
-
Cited by 77 (14 self)
- Add to MetaCart
(Show Context)
In this paper, we propose an automatic video retrieval method based on high-level concept detectors. Research in video analysis has reached the point where over 100 concept detectors can be learned in a generic fashion, albeit with mixed performance. Such a set of detectors is still very small compared to the ontologies that aim to capture the full vocabulary a user has. We aim to build a bridge between the two fields by constructing a multimedia thesaurus, i.e., a set of machine-learned concept detectors enriched with semantic descriptions and semantic structure obtained from WordNet. Given a multimodal user query, we identify three strategies to select a relevant detector from this thesaurus, namely: text matching, ontology querying, and semantic visual querying. We evaluate the methods against the automatic search task of the TRECVID 2005 video retrieval benchmark, using a news video archive of 85 hours in combination with a thesaurus of 363 machine-learned concept detectors. We assess the influence of thesaurus size on video search performance, evaluate and compare the multimodal selection strategies for concept detectors, and finally discuss their combined potential using oracle fusion. The set of queries in the TRECVID 2005 corpus is too small to be definitive in our conclusions, but the results suggest promising new lines of research.
Index Terms: Video retrieval, concept learning, knowledge modeling, content analysis and indexing, multimedia information systems
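Of the three selection strategies, text matching is the easiest to illustrate. The sketch below scores each detector by word overlap between the query and a short detector description; the detector names, descriptions, and Jaccard scoring are invented for the example, not the paper's exact method.

```python
# Select a concept detector for a text query by lexical overlap with
# detector descriptions. Illustrative sketch only.

detectors = {
    "boat":     "boat ship vessel watercraft travel on water",
    "aircraft": "aircraft airplane plane vehicle that flies",
    "road":     "road street paved way for vehicles",
}

def select_detector(query, detectors):
    q = set(query.lower().split())
    def overlap(desc):
        d = set(desc.split())
        return len(q & d) / len(q | d)   # Jaccard similarity
    return max(detectors, key=lambda name: overlap(detectors[name]))

print(select_detector("find shots of a ship on the water", detectors))
# -> boat
```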
AnalogySpace: reducing the dimensionality of common sense knowledge
- In AAAI’08: Proceedings of the 23rd National Conference on Artificial Intelligence, 2008
"... Abstract We are interested in the problem of reasoning over very large common sense knowledge bases. When such a knowledge base contains noisy and subjective data, it is important to have a method for making rough conclusions based on similarities and tendencies, rather than absolute truth. We pres ..."
Abstract
-
Cited by 64 (28 self)
- Add to MetaCart
(Show Context)
We are interested in the problem of reasoning over very large common sense knowledge bases. When such a knowledge base contains noisy and subjective data, it is important to have a method for making rough conclusions based on similarities and tendencies, rather than absolute truth. We present AnalogySpace, which accomplishes this by forming the analogical closure of a semantic network through dimensionality reduction. It self-organizes concepts around dimensions that can be seen as making distinctions such as "good vs. bad" or "easy vs. hard", and generalizes its knowledge by judging where concepts lie along these dimensions. An evaluation demonstrates that users often agree with the predicted knowledge, and that its accuracy is an improvement over previous techniques.
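The core operation is a truncated SVD of a concept-by-feature matrix. A minimal sketch, with a toy matrix invented for illustration (the real system factors ConceptNet itself):

```python
# AnalogySpace-style generalization: factor a concept x feature matrix with
# a truncated SVD and read predicted assertion strengths off the low-rank
# reconstruction. Toy data invented for this sketch.

import numpy as np

concepts = ["dog", "cat", "car"]
features = ["IsA/pet", "HasA/tail", "UsedFor/transport"]

# 1.0: asserted in the knowledge base, 0.0: unknown. "cat HasA tail" is
# deliberately left unknown so the factorization has something to infer.
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                         # keep k principal dimensions
A_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# "cat" resembles "dog" along the retained dimensions, so the unknown
# entry gets a positive predicted strength (~0.45 here):
print(round(A_hat[concepts.index("cat"), features.index("HasA/tail")], 2))
```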
A goal-oriented web browser
- In Proc. of the SIGCHI Conf. on Human Factors in Computing Systems (CHI ’06), 2006
"... p0220 p0225 p0230 Many users are familiar with the interesting but limited functionality of data detector interfaces like Microsoft’s Smart Tags and Google’s AutoLink. In this chapter we significantly expand the breadth and functionality of this type of user interface through the use of large-scale ..."
Abstract
-
Cited by 52 (1 self)
- Add to MetaCart
(Show Context)
Many users are familiar with the interesting but limited functionality of data detector interfaces like Microsoft’s Smart Tags and Google’s AutoLink. In this chapter we significantly expand the breadth and functionality of this type of user interface through the use of large-scale knowledge bases of semantic information. The result is a Web browser that is able to generate personalized semantic hypertext, providing a goal-oriented browsing experience. We present (1) Creo, a programming-by-example system for the Web that allows users to create a general-purpose procedure with a single example; and (2) Miro, a data detector that matches the content of a page to high-level user goals. An evaluation with 34 subjects found that they were more efficient using our system, and that the subjects would use features like these if they were integrated into their Web browser.
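A data detector in this vein can be approximated, very roughly, as a set of typed patterns mapped to goal-oriented actions. The patterns and actions below are hypothetical stand-ins for the knowledge-base-driven matching the paper describes:

```python
# Minimal data-detector sketch in the Smart Tags / Miro vein: scan page
# text for typed entities and attach actions. Patterns and actions are
# invented for the example.

import re

DETECTORS = [
    (re.compile(r"\b\d{3}-\d{3}-\d{4}\b"), "call this number"),
    (re.compile(r"\b[A-Z][a-z]+ (?:Street|Ave)\b"), "show on a map"),
]

def detect(text):
    """Yield (matched entity, suggested action) pairs found in the text."""
    for pattern, action in DETECTORS:
        for match in pattern.finditer(text):
            yield match.group(0), action

page = "Visit us at 12 Main Street or call 412-555-0100."
for entity, action in detect(page):
    print(f"{entity!r} -> {action}")
```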
Commonsense reasoning in and over natural language
- In Proceedings of the 8th International Conference on Knowledge-Based Intelligent Information and Engineering Systems (KES 2004), 2004
"... ConceptNet is a very large semantic network of commonsense knowledge suitable for making various kinds of practical inferences over text. ConceptNet captures a wide range of commonsense concepts and relations like those in Cyc, while its simple semantic network structure lends it an ease-of-use co ..."
Abstract
-
Cited by 50 (3 self)
- Add to MetaCart
ConceptNet is a very large semantic network of commonsense knowledge suitable for making various kinds of practical inferences over text. ConceptNet captures a wide range of commonsense concepts and relations like those in Cyc, while its simple semantic network structure lends it an ease of use comparable to WordNet. To meet the dual challenge of encoding complex higher-order concepts while maintaining ease of use, we introduce a novel use of semi-structured natural language fragments as the knowledge representation of commonsense concepts. In this paper, we present a methodology for reasoning flexibly about these semi-structured natural language fragments. We also examine the tradeoffs associated with representing commonsense knowledge in formal logic versus in natural language. We conclude that the flexibility of natural language makes it a highly suitable representation for achieving practical inferences over text, such as context finding, inference chaining, and conceptual analogy.
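One of the inferences named above, inference chaining, amounts to walking relation edges outward from a concept. A minimal sketch over an invented mini-network of natural language fragments:

```python
# Inference chaining over ConceptNet-style edges: breadth-first walk from
# a starting concept. The mini-network below is invented for the example.

from collections import deque

EDGES = {
    "buy food":  [("UsedFor", "eat food")],
    "eat food":  [("Causes", "feel full")],
    "feel full": [("Causes", "feel sleepy")],
}

def chains(start, max_hops=3):
    """Yield chains (lists of concepts) of up to max_hops concepts."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        yield path
        if len(path) < max_hops:
            for _relation, nxt in EDGES.get(path[-1], []):
                queue.append(path + [nxt])

for path in chains("buy food"):
    print(" -> ".join(path))
# buy food
# buy food -> eat food
# buy food -> eat food -> feel full
```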
Modelling Relational Data using Bayesian Clustered Tensor Factorization
"... We consider the problem of learning probabilistic models for complex relational structures between various types of objects. A model can help us “understand ” a dataset of relational facts in at least two ways, by finding interpretable structure in the data, and by supporting predictions, or inferen ..."
Abstract
-
Cited by 42 (2 self)
- Add to MetaCart
(Show Context)
We consider the problem of learning probabilistic models for complex relational structures between various types of objects. A model can help us “understand” a dataset of relational facts in at least two ways: by finding interpretable structure in the data, and by supporting predictions, or inferences about whether particular unobserved relations are likely to be true. Often there is a tradeoff between these two aims: cluster-based models yield more easily interpretable representations, while factorization-based approaches have given better predictive performance on large data sets. We introduce the Bayesian Clustered Tensor Factorization (BCTF) model, which embeds a factorized representation of relations in a nonparametric Bayesian clustering framework. Inference is fully Bayesian but scales well to large data sets. The model simultaneously discovers interpretable clusters and yields predictive performance that matches or beats previous probabilistic models for relational data.
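The factorized half of such a model can be sketched as a bilinear score: each entity gets a vector, each relation a matrix, and the plausibility of (subject, relation, object) is a bilinear form. The Bayesian clustering machinery is omitted in this illustrative sketch:

```python
# Bare-bones factorized relational scoring. The random parameters stand in
# for learned ones; BCTF additionally clusters the entity and relation
# representations in a nonparametric Bayesian framework, omitted here.

import numpy as np

rng = np.random.default_rng(0)
d = 4                                   # latent dimensionality
entity = {"dog": rng.normal(size=d), "bone": rng.normal(size=d)}
relation = {"chews": rng.normal(size=(d, d))}

def score(subj, rel, obj):
    """Bilinear plausibility of the triple (subj, rel, obj)."""
    return entity[subj] @ relation[rel] @ entity[obj]

print(score("dog", "chews", "bone"))    # higher = more plausible, after training
```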
Towards automated game design
- In AI*IA 2007: Artificial Intelligence and Human-Oriented Computing, 2007
"... Game generation systems perform automated, intelligent design of games (i.e. videogames, boardgames), reasoning about both the abstract rule system of the game and the visual realization of these rules. Although, as an instance of the problem of creative design, game generation shares some common re ..."
Abstract
-
Cited by 37 (16 self)
- Add to MetaCart
(Show Context)
Game generation systems perform automated, intelligent design of games (e.g., videogames, boardgames), reasoning about both the abstract rule system of the game and the visual realization of those rules. As an instance of the problem of creative design, game generation shares research themes with other creative AI systems such as story and art generators, but it extends such work by having to reason about dynamic, playable artifacts. Like AI work on creativity in other domains, work on game generation sheds light on the human game design process, offering opportunities to make explicit the tacit knowledge involved in game design and to test game design theories. Game generation also enables new game genres that are radically customized to specific players or situations; notable examples are cell phone games customized for particular users and newsgames providing commentary on current events. We describe an approach to formalizing game mechanics and generating games using those mechanics, using WordNet and ConceptNet to assist in performing common-sense reasoning about game verbs and nouns. Finally, we demonstrate and describe in detail a prototype that designs micro-games in the style of Nintendo’s WarioWare series.
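The WordNet side of this common-sense reasoning can be sketched with off-the-shelf tools: look up a game verb's hypernyms to relate it to more abstract mechanics. This assumes nltk with its WordNet data installed (nltk.download('wordnet')); mapping hypernyms to mechanics this way is the example's simplification, not the paper's pipeline:

```python
# Relate a concrete game verb to more abstract ones via WordNet hypernyms,
# e.g. "dodge" generalizes toward "avoid"/"move". Requires nltk plus its
# WordNet corpus.

from nltk.corpus import wordnet as wn

def abstract_verbs(verb):
    """Collect hypernym lemmas for all verb senses of the given word."""
    lemmas = set()
    for synset in wn.synsets(verb, pos=wn.VERB):
        for hyper in synset.hypernyms():
            lemmas.update(hyper.lemma_names())
    return sorted(lemmas)

print(abstract_verbs("dodge"))
```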
Packet-Switched vs. Time-Multiplexed FPGA Overlay Networks
- In Proceedings of the IEEE Symposium on Field-Programmable Custom Computing Machines, 2006
"... Abstract — Dedicated, spatially configured FPGA interconnect is efficient for applications that require high throughput connections between processing elements (PEs) but with a limited degree of PE interconnectivity (e.g. wiring up gates and datapaths). Applications which virtualize PEs may require ..."
Abstract
-
Cited by 29 (10 self)
- Add to MetaCart
(Show Context)
Dedicated, spatially configured FPGA interconnect is efficient for applications that require high-throughput connections between processing elements (PEs) but with a limited degree of PE interconnectivity (e.g., wiring up gates and datapaths). Applications which virtualize PEs may require a large number of distinct PE-to-PE connections (e.g., using one PE to simulate 100s of operators, each requiring input data from thousands of other operators), but with each connection having low throughput compared with the PE’s operating cycle time. In these highly interconnected conditions, dedicating spatial interconnect resources for all possible connections is costly and inefficient. Alternatively, we can time-share physical network resources by virtualizing interconnect links, either by statically scheduling the sharing of resources prior to runtime or by dynamically negotiating resources at runtime. We explore the tradeoffs (e.g., area, route latency, route quality) between time-multiplexed and packet-switched networks overlaid on top of commodity FPGAs. We demonstrate modular and scalable networks which operate on a Xilinx XC2V6000-4 at 166 MHz. For our applications, time-multiplexed, offline scheduling offers up to a 63% performance increase over online, packet-switched scheduling for equivalent topologies. When applying designs to equivalent area, packet switching is up to 2× faster for small-area designs while time-multiplexing is up to 5× faster for larger-area designs. When limited to the capacity of a XC2V6000, if all communication is known, time-multiplexed routing outperforms packet switching; however, when the active set of links drops below 40% of the potential links, packet-switched routing can outperform time-multiplexing.
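A toy cost model (this example's own, not the paper's) captures the intuition behind the 40% crossover: a time-multiplexed schedule pays a slot for every potential link, while a packet-switched network pays, with some switching overhead, only for the links that are actually active:

```python
# Toy model, invented for illustration: compare cycle counts for a static
# time-multiplexed schedule vs. a packet-switched network as the fraction
# of active links varies. The overhead constant is hypothetical.

def tm_cycles(potential_links):
    return potential_links                        # one slot per scheduled link

def ps_cycles(potential_links, active_fraction, overhead=2.5):
    return potential_links * active_fraction * overhead

links = 1000
for frac in (0.2, 0.4, 0.8):
    print(frac, tm_cycles(links), round(ps_cycles(links, frac)))
# In this model, packet switching wins once fewer than 1/overhead (here 40%)
# of the potential links are active.
```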
How to Wreck a Nice Beach You Sing Calm Incense
- In Proceedings of the 10th International Conference on Intelligent User Interfaces, 2005
"... A principal problem in speech recognition is distinguishing between words and phrases that sound similar but have different meanings. Speech recognition programs produce a list of weighted candidate hypotheses for a given audio segment, and choose the "best " candidate. If the choi ..."
Abstract
-
Cited by 26 (4 self)
- Add to MetaCart
(Show Context)
A principal problem in speech recognition is distinguishing between words and phrases that sound similar but have different meanings. Speech recognition programs produce a list of weighted candidate hypotheses for a given audio segment and choose the "best" candidate. If the choice is incorrect, the user must invoke a correction interface that displays a list of the hypotheses and select the desired one. The correction interface is time-consuming, and accounts for much of the frustration of today's dictation systems. Conventional dictation systems prioritize hypotheses based on language models derived from statistical techniques such as n-grams and Hidden Markov Models. We propose a supplementary method for ordering hypotheses based on commonsense knowledge. We filter acoustical and word-frequency hypotheses by testing their plausibility with a semantic network derived from 700,000 statements about everyday life. This often filters out possibilities that "don't make sense" from the user's viewpoint, and leads to improved recognition. Reducing the hypothesis space in this way also makes possible streamlined correction interfaces that improve the overall throughput of dictation systems.
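The proposed re-ranking can be sketched as combining each hypothesis's recognizer score with a semantic plausibility score. The plausibility function below is a stand-in keyword check, not the paper's 700,000-statement semantic network:

```python
# Re-rank recognizer hypotheses by adding a commonsense plausibility bonus.
# PLAUSIBLE_PAIRS is a toy stand-in for a real semantic network.

PLAUSIBLE_PAIRS = {("wreck", "car"), ("recognize", "speech"), ("sing", "song")}

def plausibility(words):
    """Count word pairs in the hypothesis that the network finds plausible."""
    pairs = {(a, b) for a in words for b in words if a != b}
    return len(pairs & PLAUSIBLE_PAIRS)

def rerank(hypotheses, weight=0.5):
    """hypotheses: list of (text, acoustic score); returns best-first order."""
    return sorted(
        hypotheses,
        key=lambda h: h[1] + weight * plausibility(h[0].split()),
        reverse=True,
    )

hyps = [("how to wreck a nice beach", 0.52),
        ("how to recognize speech", 0.48)]
print(rerank(hyps)[0][0])   # -> how to recognize speech
```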