### Table 2. NP-hard problems

1998

"... In PAGE 3: ... Hence, only these two problems are proved to be ordinarily NP-hard. The other NP-hard problems in Table 2, as well as the NP-hard J2 | no-wait, rj, pij = 1 | Cmax (which is equivalent to J2 | no-wait, pij = 1 | Lmax by symmetry) and J2 | no-wait, pij = 1 | ΣTj, are open for ordinary or strong NP-hardness. The letter C in the machine environment field denotes a cycle shop, a special case of a job shop where all the jobs have the same route passing through the machines, as in a flow shop, but repetitions of machines in the route are allowed. ... ..."

Cited by 10
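The excerpt above uses the standard three-field scheduling notation, e.g. J2 | no-wait, pij = 1 | Cmax: a two-machine job shop, no waiting allowed between consecutive operations of a job, unit processing times, minimizing makespan. As a minimal sketch of what such a schedule's Cmax means, the hypothetical helpers below check machine conflicts for given start times and brute-force tiny instances; the function names and the brute-force search are illustrative assumptions, not the cited paper's method.

```python
from itertools import product

def no_wait_cmax(routes, starts):
    """Cmax of a no-wait, unit-time job-shop schedule, or None on a
    machine conflict. Job j's k-th operation runs on machine
    routes[j][k] during the time slot starts[j] + k."""
    busy = set()
    for route, s in zip(routes, starts):
        for k, m in enumerate(route):
            if (m, s + k) in busy:
                return None          # two operations collide on machine m
            busy.add((m, s + k))
    return max(s + len(r) for r, s in zip(routes, starts))

def best_cmax(routes, horizon=6):
    """Brute-force the best start times within a small horizon."""
    return min(c for starts in product(range(horizon), repeat=len(routes))
               if (c := no_wait_cmax(routes, starts)) is not None)

# Two unit-time jobs in a 2-machine shop with opposite routes:
print(best_cmax([[0, 1], [1, 0]]))  # → 2
```

With opposite routes both jobs can start at time 0 without conflict; with identical routes ([[0, 1], [0, 1]]) the second job must be delayed one slot, giving Cmax = 3.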

### Table 10: Zero temperature SASAT on hard problems.

1993

Cited by 34

### Table 4: Zero temperature SASAT on hard problems.

1993

Cited by 34

### Table 1: GSAT and NNSAT on hard problems

1996

Cited by 1

### Table 3: Correlations with problem hardness

"... In PAGE 5: ... Problem hardness Problem hardness is taken to be the log of the number of search nodes required by satz. We present the most interesting correlations with problem hardness in Table 3, and discuss them in the text following. The strongest correlation with problem hardness we found was the size of the smallest strong backdoors. ... ..."
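The measure described in the snippet, hardness as the log of the satz search-node count correlated against smallest strong-backdoor size, can be sketched with a plain Pearson correlation; the per-instance numbers below are invented for illustration and are not data from the paper.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-instance data: satz search-node counts and
# smallest strong-backdoor sizes (illustrative values only).
search_nodes = [120, 5400, 31000, 880, 2100000]
backdoor_size = [2, 5, 9, 3, 14]

hardness = [math.log(n) for n in search_nodes]  # hardness = log(#nodes)
r = pearson(hardness, backdoor_size)
print(round(r, 3))
```

Taking the log first matters: raw node counts span orders of magnitude, so a single very hard instance would otherwise dominate the correlation.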

### Table 3: Performance of NLRo with NLRh and SBP on two hard problems.

1998

"... In PAGE 6: ... The interest of this result is that GP suggested an NLR in which the weaknesses of a supervised learning rule (SBP) are removed by combining it with an unsupervised learning rule (HB). Table 3 shows the results of the two algorithms (SBP and NLRo with NLRh) for the vowel and sonar problems. By changing the parameters of the NLRh ( , k1, and k2) we found that  = 0.01, k1 = 0.5, and k2 = 0 give the best results on the vowel problem, while on the sonar problem the best parameters are  = 0.1, k1 = 0.5, and k2 = 0. ... ..."

Cited by 1
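The combination the snippet describes, a supervised backprop-style term plus an unsupervised Hebbian term, can be sketched for a single linear unit as below; the particular blend, the roles assigned to k1 and k2, and the function name are assumptions for illustration, not the paper's actual NLRh rule.

```python
def combined_update(w, x, y_target, eta=0.01, k1=0.5, k2=0.0):
    """Hypothetical blend of a supervised (delta-rule) term and an
    unsupervised Hebbian term for one linear unit."""
    y = sum(wi * xi for wi, xi in zip(w, x))            # unit output
    delta = y_target - y                                # supervised error
    return [wi + eta * (k1 * delta * xi + k2 * y * xi)  # delta + Hebbian
            for wi, xi in zip(w, x)]

# With k2 = 0 the blend reduces to a scaled delta rule and converges
# on a toy linear target:
w = [0.0, 0.0]
for _ in range(200):
    w = combined_update(w, [1.0, 2.0], 5.0, eta=0.05)
print([round(wi, 2) for wi in w])  # → [1.0, 2.0]
```

Setting k2 = 0 disables the Hebbian term entirely, which matches the snippet's finding that k2 = 0 gave the best results on both benchmark problems.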

### Table 3. Performance of NLRo with NLRh and SBP on two hard problems.

1998

"... In PAGE 10: ... The interest of this result is that GP suggested a new learning rule in which the weaknesses of a supervised learning rule (SBP) are removed by combining it with an unsupervised learning rule (HB). Table 3 shows the results of the two algorithms (SBP and NLRo with NLRh) for the vowel and sonar problems. By changing the parameters of the NLRh ( , k1, and k2) we found that  = 0.01, k1 = 0.5, and k2 = 0 give the best results on the vowel problem, while on the sonar problem the best parameters are  = 0.1, k1 = 0.5, and k2 = 0. ... ..."

Cited by 1