FIGURE 5.2 Multilayer perceptron model (inputs $x_1, \ldots, x_n$; first and second hidden layers; outputs $y_1, \ldots, y_m$).
The neurons in the first layer of the multilayer perceptron perform computations, and the outputs of these neurons are given by

$$x_j^{(1)} = f_j^{(1)}\left(\sum_{i=1}^{n} w_{ij}^{(1)} x_i - \theta_j^{(1)}\right)$$

with $j = 1, 2, \ldots, n_1$. The neurons in the second layer of the multilayer perceptron perform computations, and the outputs of these neurons are given by

$$x_j^{(2)} = f_j^{(2)}\left(\sum_{i=1}^{n_1} w_{ij}^{(2)} x_i^{(1)} - \theta_j^{(2)}\right)$$

with $j = 1, 2, \ldots, n_2$. The neurons in the third layer of the multilayer perceptron perform computations, and the outputs of these neurons are given by

$$y_j = f_j\left(\sum_{i=1}^{n_2} w_{ij} x_i^{(2)} - \theta_j\right)$$

with $j = 1, 2, \ldots, m$.
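As a concrete sketch of these three layer computations (assuming, purely for illustration, the same logistic sigmoid activation in every layer, randomly chosen weights, and zero biases):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def mlp_forward(x, W1, t1, W2, t2, W3, t3):
    """Forward pass of the two-hidden-layer perceptron of Figure 5.2.

    W1 is n1 x n, W2 is n2 x n1, W3 is m x n2; t1, t2, t3 are the bias
    (theta) vectors. Entry [j, i] of each weight matrix holds w_ij, so
    row j of a matrix-vector product gives the weighted sum for neuron j.
    """
    x1 = sigmoid(W1 @ x - t1)     # first hidden layer outputs x_j^(1)
    x2 = sigmoid(W2 @ x1 - t2)    # second hidden layer outputs x_j^(2)
    return sigmoid(W3 @ x2 - t3)  # output layer outputs y_j

# Example with n = 3 inputs, n1 = 4 and n2 = 5 hidden neurons, m = 2 outputs.
rng = np.random.default_rng(0)
y = mlp_forward(rng.standard_normal(3),
                rng.standard_normal((4, 3)), np.zeros(4),
                rng.standard_normal((5, 4)), np.zeros(5),
                rng.standard_normal((2, 5)), np.zeros(2))
```

Because the output layer also uses the logistic sigmoid here, each $y_j$ lies strictly between 0 and 1.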
The parameters (scalar real numbers) $w_{ij}^{(1)}$ are called the weights of the first hidden layer. The $w_{ij}^{(2)}$ are called the weights of the second hidden layer. The $w_{ij}$ are called the weights of the output layer. The parameters $\theta_j^{(1)}$ are called the biases of the first hidden layer. The parameters $\theta_j^{(2)}$ are called the biases of the second hidden layer, and the $\theta_j$ are the biases of the output layer. The functions $f_j$ (for the output layer), $f_j^{(2)}$ (for the second hidden layer), and $f_j^{(1)}$ (for the first hidden layer) represent the activation functions. The activation functions can be different for each neuron in the multilayer perceptron (e.g., the first layer could have one type of sigmoid, while the next two layers could have different sigmoid functions or threshold functions). This completes the definition of the multilayer perceptron. Next, we will introduce the radial basis function neural network. After that we explain how both of these neural networks relate to the other topics covered in this book.
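To illustrate the remark that activation functions may differ across layers, consider a hypothetical 2-2-1 network (all weights and biases here are made up) with a logistic sigmoid in the first hidden layer, a hyperbolic tangent in the second, and a hard threshold at the output:

```python
import numpy as np

sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
threshold = lambda v: (v >= 0.0).astype(float)  # hard threshold: 0 or 1

# Hypothetical weights w_ij and biases theta_j for each layer.
W1, t1 = np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([0.0, 0.2])
W2, t2 = np.array([[1.0, 1.0], [-1.0, 2.0]]), np.array([0.1, 0.0])
W3, t3 = np.array([[2.0, -1.0]]), np.array([0.5])

x = np.array([0.3, -0.7])
x1 = sigmoid(W1 @ x - t1)    # first hidden layer: logistic sigmoid
x2 = np.tanh(W2 @ x1 - t2)   # second hidden layer: hyperbolic tangent
y = threshold(W3 @ x2 - t3)  # output layer: threshold function
```

The threshold at the output makes this network a classifier, while the sigmoidal hidden layers keep the internal computations smooth.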
5.3.2 Radial Basis Function Neural Networks

A locally tuned, overlapping receptive field is found in parts of the cerebral cortex, in the visual cortex, and in other parts of the brain. The radial basis function neural network model is based on these biological systems. A radial basis function neural network is shown in Figure 5.3. There, the inputs are $x_i$, $i = 1, 2, \ldots, n$, and the output is $y = f(x)$, where $f$ represents the processing by the entire radial basis function neural network. Let $x = [x_1, x_2, \ldots, x_n]^T$. The input to the $i$th receptive field unit is $x$, and its output is denoted with $R_i(x)$. It has what is called a "strength," which we denote by $\bar{y}_i$. Assume that there are $M$ receptive field units. Hence, from Figure 5.3,

$$y = f(x) = \sum_{i=1}^{M} \bar{y}_i R_i(x) \qquad (5.3)$$

is the output of the radial basis function neural network.
FIGURE 5.3 Radial basis function neural network model (inputs $x_1, \ldots, x_n$, output $y$).
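The sum in (5.3) can be sketched in code. Since the text at this point leaves $R_i$ abstract, Gaussian receptive fields are assumed here purely for illustration:

```python
import numpy as np

def rbf_output(x, centers, widths, strengths):
    """RBF network output y = sum_{i=1}^M ybar_i * R_i(x), as in Eq. (5.3).

    Assumes Gaussian receptive fields R_i(x) = exp(-||x - c_i||^2 / s_i^2),
    a common choice (not specified by the text). centers is M x n, while
    widths and strengths are length-M vectors (strengths holds the ybar_i).
    """
    R = np.exp(-np.sum((centers - x) ** 2, axis=1) / widths ** 2)  # R_i(x)
    return float(strengths @ R)  # weighted sum of receptive field outputs

# When x sits at the center of one well-separated unit, that unit's
# receptive field output is 1 and the others are nearly 0.
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
y = rbf_output(np.zeros(2), centers, np.array([1.0, 1.0]), np.array([1.0, 0.5]))
```

Each receptive field thus responds strongly only near its center, mirroring the locally tuned biological receptive fields that motivated the model.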