### Table 1: Details of the evaluation set and the context-free grammar.

in Tight Coupling of Speech Recognition and Dialog Management -- Dialog-Context Dependent Grammar . . .

### Table 1: An excerpt of the context-free grammar for the recognition of semantic interconnections.

### Table 1 Chomsky Hierarchy

2004

"... In PAGE 15: ... From Table 1, we can see that recognition algorithms for regular languages and context-free languages belong to tractable complexity classes. For these reasons, these two classes have become the most extensively studied formalisms.... ..."
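The tractability claim in the snippet above can be illustrated with the classic CYK algorithm, which decides membership for a context-free language in O(n³) time. A minimal sketch, assuming a hypothetical toy grammar in Chomsky normal form (every rule is either `A -> B C` or `A -> 'word'`); the grammar and sentence below are illustrative, not taken from the cited paper:

```python
# Hypothetical CNF grammar: binary rules map to tuples, lexical rules to strings.
GRAMMAR = {
    "S": [("NP", "VP")],
    "NP": [("Det", "N")],
    "VP": [("V", "NP")],
    "Det": ["the"],
    "N": ["dog", "cat"],
    "V": ["saw"],
}

def cyk_recognize(words, grammar, start="S"):
    """Return True iff `words` is derivable from `start` (CYK, O(n^3))."""
    n = len(words)
    # table[i][j] = set of nonterminals deriving words[i..j] inclusive
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):                  # lexical rules
        for lhs, rhss in grammar.items():
            if w in rhss:
                table[i][i].add(lhs)
    for span in range(2, n + 1):                   # increasing span length
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                  # split point
                for lhs, rhss in grammar.items():
                    for rhs in rhss:
                        if (isinstance(rhs, tuple)
                                and rhs[0] in table[i][k]
                                and rhs[1] in table[k + 1][j]):
                            table[i][j].add(lhs)
    return start in table[0][n - 1]
```

The cubic time bound comes directly from the three nested loops over span, start position, and split point.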

### Table 5.5: Performance comparison for English speech recognition in word error rate (WER), on the Switchboard (SWB) corpus only: on training data (Train), all own data (all own), and all available data (total) except the respectively tested set (Development or Evaluation), for the language model (LM) and the context-free grammar (CFG).


2004
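WER, the metric reported in Table 5.5 above, is the word-level Levenshtein distance between hypothesis and reference, normalized by the reference length. A minimal sketch of how it is usually computed:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference length,
    via standard dynamic-programming edit distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                      # delete all of ref[:i]
    for j in range(len(hyp) + 1):
        dp[0][j] = j                      # insert all of hyp[:j]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions.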

### Table 1: Grammar 1, which generates Language 1. Parentheses denote optional constituents, which occur with probability 0.5. A slash between two productions means they each occur with probability 0.5. Generation of a string is accomplished by expanding S. The grammar generates an infinite string by repeatedly generating a finite string and appending it to its previous output.

"... In PAGE 2: ... We take languages to be distributions of strings. Table 1 shows a grammar for generating Language 1, a context-free language which is not a finite-state language. Figure 1 shows an SLDA for processing Language 1.... ..."
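The generation scheme the caption describes can be sketched directly: optional constituents fire with probability 0.5, slash-separated alternatives are chosen uniformly, and the infinite string is built by repeatedly expanding S and appending the result. Grammar 1 itself is not reproduced in the excerpt, so the rules below are a made-up stand-in:

```python
import random

# Hypothetical stand-in grammar: "OptB" models an optional constituent,
# realized as either ["b"] or the empty production, each with prob. 0.5.
RULES = {
    "S": [["a", "OptB", "c"]],
    "OptB": [["b"], []],
}

def expand(symbol, rules):
    """Recursively expand one symbol into a list of terminals."""
    if symbol not in rules:                       # terminal symbol
        return [symbol]
    production = random.choice(rules[symbol])     # uniform over alternatives
    out = []
    for sym in production:
        out.extend(expand(sym, rules))
    return out

def generate_stream(rules, n_strings):
    """Approximate the paper's infinite string: generate n_strings finite
    strings from S and concatenate them."""
    out = []
    for _ in range(n_strings):
        out.extend(expand("S", rules))
    return " ".join(out)
```

Each expansion of S here yields `a c` or `a b c` with equal probability, mirroring the parenthesis convention in the caption.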

### Table 1. Sample Context Free Grammar

### Table 1. Probabilistic Context Free Grammar

"... In PAGE 4: ... 3.2 PCFG The PCFG used in the experiments is shown in Table 1. Nonterminals are labelled with capital letters, and terminals with lower-case.... In PAGE 7: ...3. This strategy assumes that the tree is rooted at the sentence rule in Table 1, and the... In PAGE 8: ... As is shown in Table 4, of the total of 15 runs for these sentences, 13 runs found complete parse trees. Sentences 4 and 5 in Table 3 have ambiguous parse trees, primarily caused by the WH rules in Table 1. Sentence 4 was successfully parsed in all these runs.... ..."
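In a PCFG, each rule carries a probability and the probabilities of all rules sharing a left-hand side sum to 1; the probability of a parse tree is the product of its rule probabilities. A minimal sketch under those assumptions (the toy rules below are illustrative, not the grammar from the cited paper):

```python
# Toy PCFG: (LHS, RHS-tuple) -> probability; probabilities per LHS sum to 1.
PCFG = {
    ("S", ("NP", "VP")): 1.0,
    ("NP", ("n",)): 0.7,
    ("NP", ("adj", "n")): 0.3,
    ("VP", ("v", "NP")): 1.0,
}

def tree_probability(tree, pcfg):
    """tree = (label, child, ...) where terminal leaves are plain strings.
    Returns the product of the probabilities of all rules used."""
    label, *children = tree
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    p = pcfg[(label, rhs)]
    for child in children:
        if not isinstance(child, str):            # recurse into subtrees
            p *= tree_probability(child, pcfg)
    return p

# Example parse: (S (NP n) (VP v (NP adj n)))
t = ("S", ("NP", "n"), ("VP", "v", ("NP", "adj", "n")))
```

For the example tree, the probability is 1.0 × 0.7 × 1.0 × 0.3 = 0.21; ambiguity, as in Sentences 4 and 5 above, simply means several distinct trees each have such a probability.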

### Table 2 Relationship between context-free grammars, parse trees, feature structures, DAGs and unification.

2001

"... In PAGE 12: ...2001b). Table 2 Relationship between context-free grammars, parse trees, feature structures, DAGs and unification.... In PAGE 37: ... Table 2 summarizes how context-free grammars, parse trees, feature structures, DAGs and unification relate to each other.... ..."

Cited by 6
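The unification operation that Table 2 relates to feature structures can be sketched over nested dictionaries; this simplification ignores reentrancy (shared substructure), so it covers tree-shaped feature structures rather than full DAGs:

```python
def unify(fs1, fs2):
    """Unify two feature structures (nested dicts with atomic leaves).
    Returns the merged structure, or None if any feature values clash."""
    if isinstance(fs1, dict) and isinstance(fs2, dict):
        out = dict(fs1)
        for key, val in fs2.items():
            if key in out:
                merged = unify(out[key], val)
                if merged is None:
                    return None               # feature clash: unification fails
                out[key] = merged
            else:
                out[key] = val                # feature only in fs2: just add it
        return out
    return fs1 if fs1 == fs2 else None        # atomic values must be equal
```

So `{"agr": {"num": "sg"}}` unifies with `{"agr": {"per": 3}}` to give `{"agr": {"num": "sg", "per": 3}}`, while `{"num": "sg"}` and `{"num": "pl"}` fail.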

### Table 1: Example Context Free Grammar G Used as Declarative Bias for Equation Discovery. The only restriction on the grammar G is that it has to generate expressions that are legal in the C programming language. This means that it can use all C built-in operators and functions. Additional functions, representing background knowledge about the addressed domain, can be used as long as they are defined in conjunction with the grammar. Note that the derived equations may be nonlinear in both the parameters and the system variables.

1997

"... In PAGE 3: ...

    begin with derivation tree T consisting of the node S
    repeat
        choose a non-terminal leaf node A in T
        choose a production p = A → A1 A2 … Al ∈ P_A
        expand A with the successors A1, A2, …, Al
    until all leaf nodes in T are terminals

Table 2: Algorithm for Deriving Expressions from a Context Free Grammar

The height of a derivation tree h(T) is the maximal length of a path from the root S to some of the leaf nodes. Figure 1 shows derivation trees for the expressions const*N/(const+N) and const*N*N/(const+N), generated with the grammar from Table 1. The height of both derivation trees is h(Ta) = h(Tb) = 4.... In PAGE 3: ... Then, we can use the following recursive formulas to calculate these numbers:

$$A \in T:\quad n_G(A, 0) = 1; \qquad n_G(A, h) = 0 \text{ for } h \ge 1$$

$$A \in N:\quad n_G(A, 0) = 0; \qquad n_G(A, 1) = \text{number of productions } A \to w,\ w \in T^{*}$$

$$h \ge 2:\quad n_G(A, h) = \sum_{A \to A_1 \ldots A_l \in P_A} \left[ \prod_{i=1}^{l} N_G(A_i, h-1) - \prod_{i=1}^{l} N_G(A_i, h-2) \right]$$

$$N_G(A, h) = \sum_{k=0}^{h} n_G(A, k)$$

The complexity of the search space is the number N_G(S, hmax) of derivation trees with the starting non-terminal symbol S in the root and height up to hmax. Using the formulas above, we can compare the number of expressions that can be derived using the example grammar G from Table 1 and the number of expressions derived using the universal grammar U:

    E → E + F | E − F | F
    F → F * T | F / T | T
    T → const | v | ( E )

Table 3 lists the numbers of equations at different heights for each of the two grammars.

| h  | N_G(E, h)   | N_U(E, h)    |
|----|-------------|--------------|
| 1  | 1           | 0            |
| 2  | 1           | 0            |
| 3  | 1           | 7            |
| 4  | 121         | 36           |
| 5  | 1831        | 7300         |
| 6  | 27481       | 14674005     |
| 7  | 412231      | 2.3607×10^12 |
| 8  | 6183481     | 5.5481×10^21 |
| 9  | 92752231    | 3.8267×10^38 |
| 10 | 1.3913×10^9 | 1.2462×10^68 |

Table 3: Search Space Size for Two Grammars

All arithmetical expressions involving common arith-... In PAGE 4: ... Optionally, the height of a refinement can be limited by parameter hmax. Consider the grammar in Table 1 again. The heights of the productions of the grammar are given in Table 5.... In PAGE 5: ...01. The grammar from Table 1 was used. Note that the monod function is defined, which represents background knowledge about population growth that comes from the area of ecological modeling.... ..."

Cited by 39
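The derivation loop from Table 2 can be sketched directly in Python over the universal grammar U quoted in the excerpt. One assumption here: the paper bounds derivations by a height parameter hmax, which this sketch approximates with a `max_depth` cutoff that forces the shortest production once the budget is spent:

```python
import random

# Universal grammar U from the excerpt, as a dict of productions.
U = {
    "E": [["E", "+", "F"], ["E", "-", "F"], ["F"]],
    "F": [["F", "*", "T"], ["F", "/", "T"], ["T"]],
    "T": [["const"], ["v"], ["(", "E", ")"]],
}

def derive(symbol, grammar, max_depth=6):
    """Expand `symbol` to a list of terminal tokens by repeatedly choosing
    a production for each nonterminal leaf (the Table 2 loop).  Beyond
    max_depth, the shortest production is forced so derivation terminates."""
    if symbol not in grammar:                     # terminal: emit as-is
        return [symbol]
    prods = grammar[symbol]
    prod = min(prods, key=len) if max_depth <= 0 else random.choice(prods)
    out = []
    for sym in prod:
        out.extend(derive(sym, grammar, max_depth - 1))
    return out
```

Each call produces one expression such as `const * v / ( const + v )`; the counting formulas above then measure how many distinct such derivation trees exist up to a given height.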

### Table 5: Results for learning a weighted context-free grammar on the Penn Treebank.

2005

"... In PAGE 29: ... To solve the argmax in line 6 of the algorithm, we use a modified version of the CKY parser of Mark Johnson.3 The results are given in Table 5. They show micro-averaged precision, recall, and F1 for the training and the test set.... In PAGE 29: ...n Taskar et al. (2004a). While their algorithm cannot optimize F1-score as the training loss, they report substantial gains from the use of complex features. In terms of training time, Table 5 shows that the total number of constraints added to the working set is small. It is roughly twice the number of training examples in all cases.... ..."

Cited by 49
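The micro-averaged precision, recall, and F1 reported in Table 5 pool counts over all sentences before scoring, rather than averaging per-sentence scores. A minimal sketch, assuming gold and predicted items (e.g. labelled constituent spans) are represented as sets per sentence:

```python
def micro_prf(gold_sets, pred_sets):
    """Micro-averaged precision/recall/F1: sum true positives, predicted
    counts, and gold counts over all examples, then score the totals."""
    tp = sum(len(g & p) for g, p in zip(gold_sets, pred_sets))
    n_pred = sum(len(p) for p in pred_sets)
    n_gold = sum(len(g) for g in gold_sets)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Because counts are pooled, long sentences with many constituents weigh more than short ones, unlike macro-averaging over sentences.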