
## Orthogonal forward selection for constructing the radial basis function network with tunable nodes

Venue: Proc. Int. Conf. Intell. Comput.

Citations: 3 (2 self)

### Citations

12870 | Statistical Learning Theory
- Vapnik
- 1998
Citation Context: …The algorithm is summarised as follows. Let $\mathbf{u}$ be the vector that contains $\boldsymbol{\mu}_n$ and $\boldsymbol{\Sigma}_n$. Give the following initial conditions: $e_k^{(0)} = y_k$ and $\eta_k^{(0)} = 1$, $1 \le k \le N$, and

$$J_0 = \frac{1}{N}\mathbf{y}^T\mathbf{y} = \frac{1}{N}\sum_{k=1}^{N} y_k^2 \qquad (11)$$

Specify the following algorithmic parameters: $P_S$ – population size, $N_G$ – number of generations in the repeated search, and $\xi_B$ – accuracy for terminating the weighted boosting search. Outer loop: gene…
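The initial conditions quoted in this excerpt (zero-term errors, unit LOO weightings, and the cost $J_0$ of (11)) are simple to set up. A minimal sketch in Python/NumPy; the function and variable names are mine, not the authors':

```python
import numpy as np

def init_ofs_loo(y):
    """Zero-term initial conditions for the OFS-LOO search:
    e_k^(0) = y_k, eta_k^(0) = 1, and J_0 = (1/N) * y^T y  (eq. 11)."""
    e0 = np.asarray(y, dtype=float).copy()   # zero-term modelling errors
    eta0 = np.ones_like(e0)                  # LOO modelling error weightings
    J0 = float(e0 @ e0) / e0.size            # mean squared output power
    return e0, eta0, J0
```

With these in hand, the outer loop grows the model one tunable node at a time, driving $J_n$ down from $J_0$.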

2679 | Atomic decomposition by basis pursuit.
- Chen, Donoho, et al.
- 1999
Citation Context: …Then 1. For $1 \le i \le P_S$, generate $\mathbf{g}_n^{(i)}$ from $\mathbf{u}_i^{[l]}$, the candidates for the $n$th model column, and orthogonalise them:

$$\alpha_{j,n}^{(i)} = \frac{\mathbf{p}_j^T \mathbf{g}_n^{(i)}}{\mathbf{p}_j^T \mathbf{p}_j}, \quad 1 \le j < n \qquad (12)$$

$$\mathbf{p}_n^{(i)} = \mathbf{g}_n^{(i)} - \sum_{j=1}^{n-1} \alpha_{j,n}^{(i)} \mathbf{p}_j \qquad (13)$$

$$\theta_n^{(i)} = \frac{(\mathbf{p}_n^{(i)})^T \mathbf{y}}{(\mathbf{p}_n^{(i)})^T \mathbf{p}_n^{(i)} + \lambda} \qquad (14)$$

2. For $1 \le i \le P_S$, calculate the LOO cost function value of each $\mathbf{u}_i^{[l]}$:

$$e_k^{(n)}(i) = e_k^{(n-1)} - p_n^{(i)}(k)\,\theta_n^{(i)}, \quad 1 \le k \le N \qquad (15)$$

$$\eta_k^{(n)}(i) = \eta_k^{(n-1)} - \ldots$$
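Steps (12)–(14) in this excerpt are a classical Gram–Schmidt orthogonalisation of each candidate column against the already-selected columns, followed by a regularised least-squares weight. A hedged sketch (my own naming, not the authors' code):

```python
import numpy as np

def orthogonalise_candidate(g, P, y, lam=1e-6):
    """Orthogonalise candidate column g against the selected orthogonal
    columns in P (eqs. 12-13) and compute its regularised weight (eq. 14)."""
    g = np.asarray(g, dtype=float)
    p = g.copy()
    for p_j in P:                        # eqs. (12)-(13): subtract projections
        alpha = (p_j @ g) / (p_j @ p_j)
        p -= alpha * p_j
    theta = (p @ y) / (p @ p + lam)      # eq. (14): regularised weight
    return p, theta
```

Because the columns already in `P` are mutually orthogonal, projecting against the original `g` (rather than the partially deflated `p`) is mathematically equivalent, which is exactly how (12) is written.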

1007 | Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond
- Schölkopf
- 2001
Citation Context: …$\mathbf{p}_n^{(i)} = \mathbf{g}_n^{(i)} - \sum_{j=1}^{n-1} \alpha_{j,n}^{(i)} \mathbf{p}_j$ (13), $\theta_n^{(i)} = \frac{(\mathbf{p}_n^{(i)})^T \mathbf{y}}{(\mathbf{p}_n^{(i)})^T \mathbf{p}_n^{(i)} + \lambda}$ (14). 2. For $1 \le i \le P_S$, calculate the LOO cost function value of each $\mathbf{u}_i^{[l]}$:

$$e_k^{(n)}(i) = e_k^{(n-1)} - p_n^{(i)}(k)\,\theta_n^{(i)}, \quad 1 \le k \le N \qquad (15)$$

$$\eta_k^{(n)}(i) = \eta_k^{(n-1)} - \frac{\big(p_n^{(i)}(k)\big)^2}{(\mathbf{p}_n^{(i)})^T \mathbf{p}_n^{(i)} + \lambda}, \quad 1 \le k \le N \qquad (16)$$

$$J_n^{(i)} = \frac{1}{N}\sum_{k=1}^{N}\left(\frac{e_k^{(n)}(i)}{\eta_k^{(n)}(i)}\right)^2 \qquad (17)$$

where $p_n^{(i)}(k)$ is the $k$th element of $\mathbf{p}_n^{(i)}$. …

947 | Sparse Bayesian learning and the relevance vector machine
- Tipping
- 2001
Citation Context: …from $\mathbf{u}_i^{[l]}$, the candidates for the $n$th model column, and orthogonalise them:

$$\alpha_{j,n}^{(i)} = \frac{\mathbf{p}_j^T \mathbf{g}_n^{(i)}}{\mathbf{p}_j^T \mathbf{p}_j}, \quad 1 \le j < n \qquad (12)$$

$$\mathbf{p}_n^{(i)} = \mathbf{g}_n^{(i)} - \sum_{j=1}^{n-1} \alpha_{j,n}^{(i)} \mathbf{p}_j \qquad (13)$$

$$\theta_n^{(i)} = \frac{(\mathbf{p}_n^{(i)})^T \mathbf{y}}{(\mathbf{p}_n^{(i)})^T \mathbf{p}_n^{(i)} + \lambda} \qquad (14)$$

2. For $1 \le i \le P_S$, calculate the LOO cost function value of each $\mathbf{u}_i^{[l]}$:

$$e_k^{(n)}(i) = e_k^{(n-1)} - p_n^{(i)}(k)\,\theta_n^{(i)}, \quad 1 \le k \le N \qquad (15)$$

$$\eta_k^{(n)}(i) = \eta_k^{(n-1)} - \frac{\big(p_n^{(i)}(k)\big)^2}{(\mathbf{p}_n^{(i)})^T \mathbf{p}_n^{(i)} + \lambda}, \quad 1 \le k$ …

422 | Orthogonal least squares learning algorithm for radial basis function networks
- Chen, Cowan, et al.
- 1991
Citation Context: …put data points as candidate RBF centres and employing a common variance for every RBF node. A parsimonious RBF network is then identified using the efficient orthogonal least squares (OLS) algorithm [7]–[10]. Similarly, the support vector machine (SVM) and other sparse kernel modelling methods [11]–[17] also fit the kernel centres to the training input data points and adopt a common variance for eve…

340 | Support vector machines for classification and regression
- Gunn
- 1998
Citation Context: …$= \frac{1}{P_S}$, $1 \le i \le P_S$, for the population. Then 1. For $1 \le i \le P_S$, generate $\mathbf{g}_n^{(i)}$ from $\mathbf{u}_i^{[l]}$, the candidates for the $n$th model column, and orthogonalise them:

$$\alpha_{j,n}^{(i)} = \frac{\mathbf{p}_j^T \mathbf{g}_n^{(i)}}{\mathbf{p}_j^T \mathbf{p}_j}, \quad 1 \le j < n \qquad (12)$$

$$\mathbf{p}_n^{(i)} = \mathbf{g}_n^{(i)} - \sum_{j=1}^{n-1} \alpha_{j,n}^{(i)} \mathbf{p}_j \qquad (13)$$

$$\theta_n^{(i)} = \frac{(\mathbf{p}_n^{(i)})^T \mathbf{y}}{(\mathbf{p}_n^{(i)})^T \mathbf{p}_n^{(i)} + \lambda} \qquad (14)$$

2. For $1 \le i \le P_S$, calculate the LOO cost function value of each $\mathbf{u}_i^{[l]}$:

$$e_k^{(n)}(i) = e_k^{(n-1)} - p_n^{(i)}(k)\,\theta_n^{(i)}, \quad \ldots$$

84 | Kernel matching pursuit
- Vincent, Bengio
Citation Context: …For $1 \le i \le P_S$, calculate the LOO cost function value of each $\mathbf{u}_i^{[l]}$:

$$e_k^{(n)}(i) = e_k^{(n-1)} - p_n^{(i)}(k)\,\theta_n^{(i)}, \quad 1 \le k \le N \qquad (15)$$

$$\eta_k^{(n)}(i) = \eta_k^{(n-1)} - \frac{\big(p_n^{(i)}(k)\big)^2}{(\mathbf{p}_n^{(i)})^T \mathbf{p}_n^{(i)} + \lambda}, \quad 1 \le k \le N \qquad (16)$$

$$J_n^{(i)} = \frac{1}{N}\sum_{k=1}^{N}\left(\frac{e_k^{(n)}(i)}{\eta_k^{(n)}(i)}\right)^2 \qquad (17)$$

where $p_n^{(i)}(k)$ is the $k$th element of $\mathbf{p}_n^{(i)}$. Inner loop: weighted boo…
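Equations (15)–(17) in this excerpt score a candidate column in O(N) without refitting the model. Reading (17) as the mean of the squared ratio $e_k^{(n)}/\eta_k^{(n)}$, which is the usual leave-one-out error in this family of orthogonal forward-selection algorithms, a sketch (my own naming):

```python
import numpy as np

def loo_score_candidate(e_prev, eta_prev, p, theta, lam=1e-6):
    """Recursively update the LOO modelling errors (eq. 15) and weightings
    (eq. 16) for a candidate orthogonal column p with weight theta,
    and return the LOO cost J (eq. 17)."""
    e = e_prev - p * theta                   # eq. (15)
    eta = eta_prev - p**2 / (p @ p + lam)    # eq. (16)
    J = float(np.mean((e / eta) ** 2))       # eq. (17): mean squared LOO error
    return e, eta, J
```

The candidate with the smallest `J` is kept as the n-th column; its `e` and `eta` then serve as `e_prev` and `eta_prev` for the next selection stage.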

48 | Sparse modeling using orthogonal forward regression with PRESS statistic and regularization
- Chen, Hong, et al.
- 2004
Citation Context: …data points as candidate RBF centres and employing a common variance for every RBF node. A parsimonious RBF network is then identified using the efficient orthogonal least squares (OLS) algorithm [7]–[10]. Similarly, the support vector machine (SVM) and other sparse kernel modelling methods [11]–[17] also fit the kernel centres to the training input data points and adopt a common variance for every ke…

46 | Sparse kernel regression modeling using combined locally regularized orthogonal least squares and D-optimality experimental design
- Chen, Hong, et al.
- 2003
Citation Context: …mance for the engine data set by the 15-node RBF network constructed by the OFS-LOO algorithm: (a) the model output $\hat{y}_k$ superimposed on the system output $y_k$, and (b) the modelling error $e_k = y_k - \hat{y}_k$. [9],[10] has shown that this data set can be modelled adequately as $y_i = f_s(\mathbf{x}_i) + e_i$ with $y_i = y(i)$, $\mathbf{x}_i = [y(i-1)\; u(i-1)\; u(i-2)]^T$, where $f_s(\bullet)$ describes the unknown underlying system to be identified…
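The excerpt above models the engine data as $y_i = f_s(\mathbf{x}_i) + e_i$ with regressor $\mathbf{x}_i = [y(i-1)\; u(i-1)\; u(i-2)]^T$. Assembling such NARX regressors from raw input/output sequences can be sketched as follows (the helper name is mine, not from the paper):

```python
import numpy as np

def build_narx_regressors(y, u):
    """Build targets y_i = y(i) and regressors x_i = [y(i-1), u(i-1), u(i-2)]^T
    for every i where all lagged samples exist (i >= 2, 0-based)."""
    y = np.asarray(y, dtype=float)
    u = np.asarray(u, dtype=float)
    X = np.column_stack([y[1:-1], u[1:-1], u[:-2]])  # lags y(i-1), u(i-1), u(i-2)
    t = y[2:]                                        # targets y(i)
    return X, t
```

Each row of `X` is one training input to the RBF network, paired with the corresponding entry of `t`.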

42 | Recursive hybrid algorithm for non-linear system identification using radial basis function networks
- Chen, Billings, et al.
- 1992
Citation Context: …gular matrix with unity diagonal elements and $\mathbf{P} = [\mathbf{p}_1\;\mathbf{p}_2\;\cdots\;\mathbf{p}_M]$ with the orthogonal columns that satisfy $\mathbf{p}_i^T\mathbf{p}_j = 0$ if $i \neq j$. The regression model (4) can alternatively be expressed as

$$\mathbf{y} = \mathbf{P}\boldsymbol{\theta} + \mathbf{e} \qquad (5)$$

where the weight vector $\boldsymbol{\theta} = [\theta_1\;\theta_2\;\cdots\;\theta_M]^T$ in the orthogonal model space satisfies the triangular system $\mathbf{A}\mathbf{w} = \boldsymbol{\theta}$. Sinc…

30 | The identification of linear and nonlinear models of a turbocharged automotive diesel engine
- Billings, Chen
- 1989
Citation Context: …del column $\mathbf{p}_n$, and the weight $\theta_n$, as well as the $n$-term modelling errors $e_k^{(n)}$ and associated LOO modelling error weightings $\eta_k^{(n)}$ for $1 \le k \le N$. 3 Modelling Examples. Example 1. The engine data set [19] was used to demonstrate the effectiveness of the proposed OFS-LOO algorithm. The data were collected from a Leyland TL11 turbocharged, direct injection diesel engine operated at low engine speed, whe…

25 | Evolving Space-Filling Curves to Distribute Radial Basis Functions Over an Input Space
- Whitehead, Choate
- 1994
Citation Context: …the weights that connect the RBF nodes to its output node. The parameters of a RBF network can be learned via nonlinear optimisation using the gradient-based algorithm [1], the evolutionary algorithm [2] or the E-M algorithm [3]. Such a nonlinear learning approach is computationally expensive and may encounter the local minima problem. Additionally, the network structure or the number of RBF nodes ha…

23 | Experiments with repeating weighted boosting search for optimization in signal processing applications
- Chen, Wang, et al.
- 2005
Citation Context: …Since this optimisation problem is non-convex, a gradient-based algorithm may become trapped at a local minimum. We adopt a global search algorithm called the repeated weighted boosting search (RWBS) [18] to determine $\boldsymbol{\mu}_n$ and $\boldsymbol{\Sigma}_n$. The algorithm is summarised as follows. Let $\mathbf{u}$ be the vector that contains $\boldsymbol{\mu}_n$ and $\boldsymbol{\Sigma}_n$. Give the following initial conditions: $e_k^{(0)} = y_k$ and $\eta_k^{(0)} = 1$, $1 \le k \le N$, and $J_0 = \frac{1}{N}$…

18 | Nonlinear time series modeling and prediction using Gaussian RBF networks with enhanced clustering and
- Chen
- 1995
Citation Context: …the number of RBF nodes has to be determined via other means. Alternatively, clustering algorithms can be applied to find the RBF centre vectors as well as the associated basis function variances [4]–[6]. This leaves the RBF weights to be determined by the usual linear least squares solution. Again, the number of the clusters has to be determined via other means, such as cross validation. A popular a…

15 | Combined genetic algorithm optimisation and regularised orthogonal least squares learning for radial basis function networks
- Chen, Wu, et al.
- 1999
Citation Context: …$e_i^{(n)}$ the usual $n$-term modelling error, and $\eta_i^{(n)}$ the LOO modelling error weighting. Note that $e_k^{(n)}$ and $\eta_k^{(n)}$ can be computed recursively using

$$e_k^{(n)} = y_k - \sum_{i=1}^{n}\theta_i p_i(k) = e_k^{(n-1)} - \theta_n p_n(k) \qquad (8)$$

and

$$\eta_k^{(n)} = 1 - \sum_{i=1}^{n}\frac{p_i^2(k)}{\mathbf{p}_i^T\mathbf{p}_i + \lambda} = \eta_k^{(n-1)} - \frac{p_n^2(k)}{\mathbf{p}_n^T\mathbf{p}_n + \lambda} \qquad (9)$$

respectively, where $\lambda \ge 0$ is a small regularisation parameter. Therefore, the computation of the LOO criterion $J_n$ is very …

13 | Robust maximum likelihood training of heteroscedastic probabilistic neural networks
- Yang, Chen
- 1998
(Show Context)
Citation Context ...the RBF nodes to its output node. The parameters of a RBF network can be learned via nonlinear optimisation using the gradient based algorithm [1], the evolutionary algorithm [2] or the E-M algorithm =-=[3]-=-. Such a nonlinear learning approach is computationally expensive and may encounter the local minima problem. Additionally, the network structure or the number of RBF nodes has to be determined via ot... |

12 | Time Series Analysis: Forecasting and Control. Holden-Day
- Box, Jenkins
- 1976
Citation Context: …1. Fig. 3 illustrates the modelling performance of the 15-node RBF network constructed by the OFS-LOO algorithm. Example 2. This example constructed a model for the gas furnace data set (Series J in [20]). The data set, depicted in Fig. 4, contained 296 pairs of input-output points. The input $u_k$ was the coded input gas feed rate and the output $y_k$ represented the CO₂ concentration from the gas furnace. Al…

9 | Comparative aspects of neural network algorithms for on-line modeling of dynamic processes
- An, Brown, et al.
- 1993
Citation Context: …the basis functions as well as the weights that connect the RBF nodes to its output node. The parameters of a RBF network can be learned via nonlinear optimisation using the gradient-based algorithm [1], the evolutionary algorithm [2] or the E-M algorithm [3]. Such a nonlinear learning approach is computationally expensive and may encounter the local minima problem. Additionally, the network structu…

6 | Fast Learning
- Moody, Darken
- 1989
Citation Context: …or the number of RBF nodes has to be determined via other means. Alternatively, clustering algorithms can be applied to find the RBF centre vectors as well as the associated basis function variances [4]–[6]. This leaves the RBF weights to be determined by the usual linear least squares solution. Again, the number of the clusters has to be determined via other means, such as cross validation. A popul…