Results 1 - 10 of 32,100

Table 2: Average generalisation capability.

in Improved Neural Network-based Interpretation of Colonoscopy Images Through On-Line Learning and Evolution
by George D. Magoulas, Vassilis P. Plagianakos, Michael N. Vrahatis 2001
"... In PAGE 5: ...6% success, and the best Rprop-trained network in the second experiment (trained off-line and tested using data from Frame 2) had 93% success. With regards to the on-line BP, although the results of on-line BP in Table2 are from networks that were trained and tested using data from only the corresponding frame, the performance of the method in terms of generalisation is not satisfactory. On the other hand, networks trained with on-line evolution are able to perform satisfactory in changing conditions, as data from different frames are presented to the same network.... ..."
Cited by 2

Table 5-2: Generalised Hebb Rule

in unknown title
by unknown authors 1993
"... In PAGE 50: ...60 0.16 6 - 163 149 7:4 Table5 {1: Maximum Spans for di erent Resetting Probabilities Figs 5{2 and 5{3 show the functions Q(R;; r) and S( ;;r) from the results of numerical analysis. The theoretical maximum span obtainable for a 512(9) net is apos; 163, which occurs when r =0:0016 and =6.... In PAGE 57: ....g. [Stanton amp; Sejnowski 89], that increases in synaptic strength, or Long-Term Potentiation (LTP), may be balanced by decreases in e cacy through Long-Term Depression (LDP) - although the results are often in debate [Willshaw amp; Morris 89]. However, it is possible to explore an abstract space of possibilities - for example [Palm 88] draws up a generalised framework for weight update rules (for continuous weighted nets) as shown in Table5 {2. In Palm apos;s formalism the variables w ; z correspond to real-valued weightchanges.... In PAGE 58: ... This would be achieved by converting the weight increments or decrements for real-valued weights into probabilities for triggering or resetting binary-valued weights. Then the variables in Table5 {2 would correspond to: w;; z = prob(w ij ! 1) x;; y = prob(w ij ! 0) The relevant probabilities for switches are shown schematically in Fig 5{9. Below are listed some of the particular values for the probabilities w ; z investigated:... In PAGE 64: ...0 0.17 6 178 168 7:3 Table5 {3: Maximum Spans for di erent Generalised Learning Methods smaller threshold of 6, the much better span of apos; 178 is obtained, with x =0:0875 ( 0:0025) instead. The following are worth noting: The prediction from Eq 5.... In PAGE 65: ... However, this may not be the case if patterns were correlated or pattern coding was less sparse. Palimpsest Scheme Optimal Parameters Span (approx) Random Resetting r =0:0016, z =1, =6 149 Weight Ageing r(A)=1$ A gt;1900 1,700 Generalised Learning x =0:088, z =1,w = y =0, =6 168 Table5 {4: Comparison of Palimpsest Schemes in WillshawNet 5.4 Summary All the palimpsest schemes discussed above, given certain choices of parameters, can allow a net to function as a short-term memory with a stable span.... ..."

Table 2: The break points of the rotation mode and their corresponding values of d

in A High-Speed CORDIC Algorithm and Architecture for DSP Applications
by Martin Kuhlmann, Keshab K. Parhi 1999
"... In PAGE 4: ... Obvi- ously, a small change in causes a large change in d. By taking the di erence between the two values in the third column, d can be computed d = 0:000001111101010011111011011111 = 0:03059360291809: (8) Table2 shows all break points and their corresponding values of d for a precision of 16 bits.... In PAGE 7: ...1 The Rotation Mode The pre-processing part of the new architecture (see Fig. 4) consists of a ROM of 26 entries in which the breakpoints bpi and the corresponding errors i and i+1 are stored, respectively (see Table2 ). The ROM is accessed by the ve MSB bits of .... ..."
Cited by 2
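As a quick sanity check on the arithmetic quoted in the snippet above (equation (8)), the binary fraction can be converted to decimal with a few lines of Python; the helper below is purely illustrative and not part of the paper.

    def binary_fraction_to_decimal(bits: str) -> float:
        """Convert the digits after a binary point into a float, MSB first."""
        return sum(int(b) * 2.0 ** -(k + 1) for k, b in enumerate(bits))

    d = binary_fraction_to_decimal("000001111101010011111011011111")
    print(d)  # ≈ 0.0305936029, matching the value 0.03059360291809 quoted above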

Table 1: Two-PC CFs, their rotational dofs, and corresponding rotational axes

in On Random Sampling in Contact Configuration Space
by Xuerong Ji, Jing Xiao 2000
Cited by 6

Table 2. The break points of the rotation mode and their corresponding values of d

in A Novel CORDIC Rotation Method for Generalized Coordinate Systems
by Martin Kuhlmann, Keshab K. Parhi
"... In PAGE 3: ... Obviously, a small change in causes a large change in d. By taking the difference between the two values in the third column, d can be computed d = 0:0000011111010100111110110111112 (8) = 0:0305936029180910: (9) Table2 shows all 26 possible break points and their cor- responding values of di for a precision of 16 bits. Every 0.... In PAGE 6: ...1 The Rotation Mode The pre-processing part of the new architecture (see Fig. 4) consists of a ROM of 26 entries in which the breakpoints bpi and the corresponding errors i and i?1 are stored, re- spectively (see Table2 ). The ROM is accessed by the five MSB bits of .... ..."

TABLE V COMPARISON OF AVERAGE PERFORMANCE (MEASURED IN NANO-SECONDS) OF ROTATED AND UNROTATED NETWORKS. ACNR CORRESPONDS TO AVERAGE-CASE MAPPING WITHOUT ROTATION. ACR CORRESPONDS TO AVERAGE-CASE MAPPING WITH ROTATION.

in Average-Case Technology Mapping of Asynchronous Burst-Mode Circuits
by Wei-chun Chou, Peter A. Beerel, Kenneth Y. Yun 1999
Cited by 2

Table 5-3: Maximum Spans for different Generalised Learning Methods

in unknown title
by unknown authors
"... In PAGE 50: ...60 0.16 6 - 163 149 7:4 Table5 {1: Maximum Spans for di erent Resetting Probabilities Figs 5{2 and 5{3 show the functions Q(R; r) and S( ; r) from the results of numerical analysis. The theoretical maximum span obtainable for a 512(9) net is apos; 163, which occurs when r = 0:0016 and = 6.... In PAGE 57: ...j = 0 v(O) j = 1 v(I) i = 0 w y v(I) i = 1 x z Table5 {2: Generalised Hebb Rule 5.3 Generalised Learning Generalised Learning methods are not random, but \directed quot; in the sense that the switches reset are in some way dependent on the particular pattern vectors presented at time t.... In PAGE 57: ....g. [Stanton amp; Sejnowski 89], that increases in synaptic strength, or Long-Term Potentiation (LTP), may be balanced by decreases in e cacy through Long-Term Depression (LDP) - although the results are often in debate [Willshaw amp; Morris 89]. However, it is possible to explore an abstract space of possibilities - for example [Palm 88] draws up a generalised framework for weight update rules (for continuous weighted nets) as shown in Table5 {2. In Palm apos;s formalism the variables w ? z correspond to real-valued weight changes.... In PAGE 58: ... This would be achieved by converting the weight increments or decrements for real-valued weights into probabilities for triggering or resetting binary-valued weights. Then the variables in Table5 {2 would correspond to: w; z = prob(wij ! 1) x; y = prob(wij ! 0) The relevant probabilities for switches are shown schematically in Fig 5{9. Below are listed some of the particular values for the probabilities w ? z investigated:... In PAGE 65: ...case if patterns were correlated or pattern coding was less sparse. Palimpsest Scheme Optimal Parameters Span (approx) Random Resetting r = 0:0016, z = 1, = 6 149 Weight Ageing r(A) = 1 $ A gt; 1900 1,700 Generalised Learning x = 0:088, z = 1, w = y = 0, = 6 168 Table5 {4: Comparison of Palimpsest Schemes in Willshaw Net 5.4 Summary All the palimpsest schemes discussed above, given certain choices of parameters, can allow a net to function as a short-term memory with a stable span.... ..."

Table 1: Two-PC CFs, their rotational dofs, and corresponding rotational axes

in On Random Sampling in Contact Configuration Space
by Xuerong Ji, Jing Xiao