Results 1 - 10 of 163,569
Table 3-5: Set of 8 broad classes used in the broad class recognition experiment shown in Table 3-4.
"... In PAGE 8: ...rame-based search with landmarks. . . . . . . . . . . . . . . . . . . . 34 3-5 Set of 8 broad classes used in the broad class recognition experiment shown in Table 3-4.... In PAGE 33: ... Optimizing the pruning threshold for each configuration should result in similar recognition error rates and similar relative computational improvements. Table 3-1 shows the results for the simulated frame-based Viterbi search and Table 3-2 shows the results for the true frame-based search. Both experiments were done for three different frame-rates.... In PAGE 33: ... Optimizing the pruning threshold for each configuration should result in similar recognition error rates and similar relative computational improvements. Table 3-1 shows the results for the simulated frame-based Viterbi search and Table 3-2 shows the results for the true frame-based search. Both experiments were done for three different frame-rates.... In PAGE 33: ... This can be attributed to a caching mechanism that prevents previously computed scores from being recomputed in the simulated frame-based search. The next table, Table 3-3, shows the results for the true frame-based search using landmarks. Comparing Table 3-2 and Table 3-3 shows that the landmarks did not significantly degrade error rate, but significantly reduced computation.... In PAGE 33: ... The next table, Table 3-3, shows the results for the true frame-based search using landmarks. Comparing Table 3-2 and Table 3-3 shows that the landmarks did not significantly degrade error rate, but significantly reduced computation. The last table, Table 3-4, shows broad-class recognition results using a true frame-based search with landmarks.... In PAGE 33: ... Comparing Table 3-2 and Table 3-3 shows that the landmarks did not significantly degrade error rate, but significantly reduced computation. The last table, Table 3-4, shows broad-class recognition results using a true frame-based search with landmarks. 
To conduct this experiment, the TIMIT reference phonetic transcriptions were converted into broad-class transcriptions according to Table 3-5.... In PAGE 34: ...Table 3-1: TIMIT dev set recognition results, using the frame-based search simulated with a segment-based search. Frame-rate Error Rate (%) Real-Time Factor 10ms 28.... In PAGE 34: ...0 1.09 Table 3-2: TIMIT dev set recognition results, using the true frame-based search. Frame-rate Error Rate (%) Real-Time Factor 10ms 28.... In PAGE 34: ...4 1.01 Table 3-3: TIMIT dev set recognition results, using the true frame-based search with landmarks. Frame-rate Error Rate (%) Real-Time Factor Landmarks 28.... In PAGE 34: ...5 0.92 Table 3-4: TIMIT dev set recognition results on broad classes, using the true frame-based search with landmarks. Frame-rate Error Rate (%) Real-Time Factor Landmarks 24.... ..."
Table 2: Broad sound classes used as garbage models.
1996
"... In PAGE 3: ... Instead of being monophones or states of keyword models as used in prior experiments in the literature, the models that were used covered broad classes of basic sounds found in American English. These are listed in Table 2. Such models provide good coverage of the English language and are amenable to training.... ..."
Cited by 32
Table 2. Broad sound classes used as garbage models.
"... In PAGE 5: ... We used 12 fillers (garbage models) to model extraneous speech in our experiment. Rather than using monophones or states of keyword models (as researchers have used in prior experiments), we used models that cover broad classes of basic sounds found in American English (listed in Table 2). Such models adequately cover the English language and are amenable to training.... ..."
Table 1: The four broad categorisations of audio used in the present study.
2005
"... In PAGE 2: ... This approach is based on the principles used by [13] but contains novel enhancements. The number of classification categories for each channel is increased from two (speech / nonspeech) to the four shown in Table 1. These additional classes increase the flexibility of the system and more closely guide future analysis (such as enhancement of crosstalk-contaminated speech).... In PAGE 2: ... MFCC, Energy and Zero Crossing Rate Similar to [13], MFCC features for 20 critical bands up to 8 kHz were extracted. MFCC vectors are used since they encode the spectral shape of the signal (a property which should change significantly between the four channel classifications in Table 1). The short-time log energy and zero crossing rate (ZCR) were also computed.... In PAGE 5: ... The training data consisted of one million frames per channel classification of conversational speech extracted at random from four ICSI meetings (bro012, bmr006, bed008, bed010). For each channel, a label file specifying the four different crosstalk categories (see Table 1) was automatically created from the existing ASR word-level transcriptions. For the feature selection experiments, the test data consisted of 15000 frames per channel classification extracted at random from one ICSI meeting (bmr001).... In PAGE 8: ...segments indicate performance equal to that obtained using the ground truth segments. To conclude, a multi-channel activity classification system has been described which can distinguish between the four activity categories shown in Table 1. Furthermore, the segmentation of speaker alone activity has been shown to be particularly reliable for speech recognition applications: ASR performances using the eHMM segments and the transcribed ground truth segments are extremely similar.... ..."
Cited by 8
Table 2: Syntactic categories and broad semantic categories used in the corpus analysis
"... In PAGE 56: ... ---------------------------------------------------------------------------------------------------- Insert Table 1 about here ---------------------------------------------------------------------------------------------------- For many of the analyses we wanted to examine both the syntactic and semantic coding categories by first dividing the items by the type of attachment and then breaking them down into larger semantic subclasses. The description of this system is given in Table 2. ---------------------------------------------------------------------------------------------------- Insert Table 2 about here ---------------------------------------------------------------------------------------------------- Results The children produced both NP-attached and VP-attached with-phrases in proportions roughly similar to those of the adults around them.... In PAGE 56: ... The description of this system is given in Table 2. ---------------------------------------------------------------------------------------------------- Insert Table 2 about here ---------------------------------------------------------------------------------------------------- Results The children produced both NP-attached and VP-attached with-phrases in proportions roughly similar to those of the adults around them. All three children used with-phrases productively before age 3 (1;10 for Eve, 2;7 for Adam; 2;9 for Sarah).... In PAGE 61: ... Coding: Each completion was coded as a modifier, instrument, other VP-attachment, a higher attachment or an ambiguous attachment. These codes were based on the same criteria as in the corpus analysis (see Table 2), except that ancillary instrument, undergoer, and objective responses were coded as instruments. This was done both because it was impossible to reliably code distinctions between these categories without a context and because they involved very similar PP-objects and relations.... ..."
Table 5-1: Set of 21 broad classes used for final JUPITER experiments.
"... In PAGE 58: ...using a set of 20 broad-classes shown in Table 5-1 were run. This configuration of the algorithm achieved an improvement in word error rate and number of segments at a much more reasonable level of computation.... In PAGE 58: ... This configuration of the algorithm achieved an improvement in word error rate and number of segments at a much more reasonable level of computation. Table 5-3 summarizes the test set results for JUPITER. In the table, the result with the computational constraint used the set of 20 broad-class models, and the result without the computational constraint used the full set of models.... In PAGE 59: ...Table 5-2: Final TIMIT recognition results on the test set. Error Rate (%) Segments/Second Baseline 29.... In PAGE 59: ...1 61.3 Table 5-3: Final JUPITER recognition results on the test set. Error Rate (%) Segments/Second Baseline 10.... ..."
Table 2: Syntactic categories and broad semantic categories used in the corpus analysis (columns: Syntactic category, Broad semantic category, Semantic coding category)
2004
Table 4.3: Summary of group membership dynamics and composition for the 6 larger broadcasts using the system.
2004
Cited by 2
Table 5.2: Zimbabwe: Electrical Energy used by Broad Consumer Groups (millions of kWh)