CiteSeerX
Results 1 - 10 of 7,321

Supervision, Training, and Management

by Michael Moran, Michael J. Moran
"... pages, including appendix and bibliography, $51 hardcover. The World of Culinary Supervision, ..."
Abstract

ON NEURAL NETWORK CLASSIFIERS WITH SUPERVISED TRAINING

by Marius Kloetzer, Octavian Pastravanu
"... Abstract: A study on classification capability of neural networks is presented, considering two types of architectures with supervised training, namely Multilayer Perceptron (MLP) and Radial-Basis Function (RBF). To illustrate the classifiers’ construction, we have chosen a problem that occurs in re ..."
Abstract

Weakly Supervised Training of Semantic Parsers

by Jayant Krishnamurthy, Tom M. Mitchell
"... We present a method for training a semantic parser using only a knowledge base and an unlabeled text corpus, without any individually annotated sentences. Our key observation is that multiple forms of weak supervision can be combined to train an accurate semantic parser: semantic supervision from a ..."
Abstract - Cited by 20 (0 self)

Unsupervised word sense disambiguation rivaling supervised methods

by David Yarowsky - In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics , 1995
"... This paper presents an unsupervised learning algorithm for sense disambiguation that, when trained on unannotated English text, rivals the performance of supervised techniques that require time-consuming hand annotations. The algorithm is based on two powerful constraints -- that words tend to have ..."
Abstract - Cited by 638 (4 self)
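Yarowsky's algorithm bootstraps from a handful of seed collocations, labels the contexts they match, and then harvests new high-confidence collocation rules from the labeled data, exploiting the "one sense per collocation" constraint the abstract mentions. A minimal self-training loop in that spirit (the seed rules, confidence threshold, and round count below are illustrative, not taken from the paper):

```python
from collections import Counter, defaultdict

def yarowsky_bootstrap(contexts, seeds, min_conf=0.9, rounds=5):
    """contexts: list of sets of words surrounding an ambiguous target word.
    seeds: dict mapping a collocation word -> sense label (initial seed rules).
    Returns one label (or None) per context, plus the learned rule set."""
    rules = dict(seeds)
    labels = [None] * len(contexts)
    for _ in range(rounds):
        # Step 1: label every context matched by exactly one rule
        # ("one sense per collocation" -- conflicting matches are skipped).
        for i, ctx in enumerate(contexts):
            hits = {rules[w] for w in ctx if w in rules}
            if len(hits) == 1:
                labels[i] = hits.pop()
        # Step 2: learn new collocation rules from confidently labeled data.
        counts = defaultdict(Counter)
        for ctx, lab in zip(contexts, labels):
            if lab is not None:
                for w in ctx:
                    counts[w][lab] += 1
        for w, c in counts.items():
            sense, n = c.most_common(1)[0]
            if n / sum(c.values()) >= min_conf and w not in rules:
                rules[w] = sense
    return labels, rules
```

For example, seeding "plant" with `{"factory": "industry", "leaf": "flora"}` lets contexts containing neither seed word acquire labels through intermediate collocations learned in earlier rounds.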

Supervised Training via Error Backpropagation

by unknown authors
"... t must be very small for each q. If this were all that there is to it, it would be a simple process, provided that we had a strategy that would adjust the weights properly. Unfortunately, the MLP architecture must be designed properly for the particular dataset to assure that the network will ..."
Abstract
will learn robustly and will be reasonably efficient. The main questions in laying out the architecture and then training the MLP are listed below. 1. How many layers of neurodes should we use? 2. How many input nodes should we use? 3. How many neurodes in the hidden layers should we use? 4. How
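The chapter's questions concern sizing an MLP that is then trained by plain error backpropagation. A minimal one-hidden-layer sketch of that training loop (the hidden-layer width, learning rate, and epoch count are illustrative choices, not values from the chapter):

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.5, epochs=3000, seed=0):
    """One-hidden-layer MLP with sigmoid units, trained by batch
    backpropagation on squared error. Returns a prediction function."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 1.0, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0, (hidden, 1))
    b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        # forward pass
        h = sig(X @ W1 + b1)
        out = sig(h @ W2 + b2)
        # backward pass: propagate the output error through each layer
        d_out = (out - y) * out * (1 - out)       # delta at output units
        d_h = (d_out @ W2.T) * h * (1 - h)        # delta at hidden units
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)
    return lambda Xq: sig(sig(Xq @ W1 + b1) @ W2 + b2)
```

The architecture questions the excerpt lists (layer count, input nodes, hidden width) correspond to `X.shape[1]`, `hidden`, and the number of weight matrices here; each choice trades capacity against training robustness.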

Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms

by Thomas G. Dietterich , 1998
"... This article reviews five approximate statistical tests for determining whether one learning algorithm outperforms another on a particular learning task. These tests are compared experimentally to determine their probability of incorrectly detecting a difference when no difference exists (type I err ..."
Abstract - Cited by 723 (8 self)
error). Two widely used statistical tests are shown to have high probability of type I error in certain situations and should never be used: a test for the difference of two proportions and a paired-differences t test based on taking several random train-test splits. A third test, a paired
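Among the alternatives Dietterich examines, McNemar's test on a single train/test split has low type I error. The standard chi-squared statistic with continuity correction, computed from the two off-diagonal counts of the paired-error contingency table (the function name and interface are mine):

```python
def mcnemar(n01, n10):
    """McNemar's test for two classifiers on one test set.
    n01: examples misclassified by A but not B.
    n10: examples misclassified by B but not A.
    Returns the chi-squared statistic with continuity correction;
    values above 3.841 reject equal error rates at p < 0.05 (1 df)."""
    if n01 + n10 == 0:
        return 0.0          # classifiers disagree on nothing
    return (abs(n01 - n10) - 1) ** 2 / (n01 + n10)
```

Note that only disagreements enter the statistic; examples both classifiers get right (or both get wrong) carry no information about which one is better.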

Semi-supervised training for the averaged perceptron POS tagger

by Drahomíra Spoustová, Jan Hajič, Jan Raab, Miroslav Spousta - In Proceedings of the EACL , 2009
"... This paper describes POS tagging experiments with semi-supervised training as an extension to the (supervised) averaged perceptron algorithm, first introduced for this task by (Collins, 2002). Experiments with an iterative training on standard-sized supervised (manually annotated) d ..."
Abstract - Cited by 24 (1 self)
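The supervised baseline this work extends is the averaged perceptron: after each example the current weights are accumulated, and the final model is their average, which damps late oscillations. A simplified per-token sketch (Collins's 2002 tagger is a structured perceptron with Viterbi decoding over tag sequences; this version classifies tokens independently for brevity):

```python
from collections import defaultdict

def train_avg_perceptron(data, tags, epochs=5):
    """data: list of (feature_list, gold_tag) pairs; features are strings.
    Returns averaged weights: dict mapping (feature, tag) -> weight."""
    w = defaultdict(float)      # current weights
    total = defaultdict(float)  # running sum of weights for averaging
    steps = 0

    def score(feats, tag):
        return sum(w[(f, tag)] for f in feats)

    for _ in range(epochs):
        for feats, gold in data:
            pred = max(tags, key=lambda t: score(feats, t))
            if pred != gold:
                # standard perceptron update: reward gold, penalize prediction
                for f in feats:
                    w[(f, gold)] += 1.0
                    w[(f, pred)] -= 1.0
            steps += 1
            # accumulate after every example, updated or not
            for k, v in w.items():
                total[k] += v
    return {k: v / steps for k, v in total.items()}
```

Prediction then scores each candidate tag against the averaged weights and takes the argmax. (The inner accumulation loop is O(|w|) per example; real implementations use lazy "last updated" bookkeeping instead.)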

Improving lightly supervised training for broadcast transcriptions

by Y. Long, M. J. F. Gales, P. Lanchantin, X. Liu, M. S. Seigel, P. C. Woodl - in Proc. Interspeech , 2013
"... This paper investigates improving lightly supervised acoustic model training for an archive of broadcast data. Standard lightly supervised training uses automatically derived decoding hypotheses using a biased language model. However, as the actual speech can deviate significantly from the original ..."
Abstract - Cited by 2 (2 self)

SEMI-SUPERVISED TRAINING IN LOW-RESOURCE ASR AND KWS

by unknown authors
"... In particular for “low resource ” Keyword Search (KWS) and Speech-to-Text (STT) tasks, more untranscribed test data may be available than training data. Several approaches have been proposed to make this data useful during system development, even when initial systems have Word Error Rates (WER) abo ..."
Abstract
of Tamil, when significantly more test data than training data is available, we integrated semi-supervised training and speaker adaptation on the test data, and achieved significant additional improvements in STT and KWS. Index Terms — spoken term detection, automatic speech recognition, low-resource LTs

The TreeBanker: a Tool for Supervised Training of Parsed Corpora

by David Carter , 1997
"... I describe the TreeBanker, a graphical tool for the supervised training involved in domain customization of the disambiguation component of a speech- or language-understanding system. The TreeBanker presents a user, who need not be a system expert, with a range of properties that distinguish c ..."
Abstract - Cited by 57 (6 self)

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University