(i).
Consider a 3-layer perceptron neural network consisting of 1 input neuron, 2
hidden neurons with sigmoid activation functions, and 1 output neuron with a
linear activation function. Assume v1 and v2 represent the connection weights from
the hidden neurons to the output neuron, v0 represents the bias for the output
neuron, w11 and w21 represent the connection weights from the input neuron to
the hidden neurons, and w10 and w20 represent the biases for the 2 hidden
neurons. Let x represent the input to the neural network and y represent the output
from the neural network.
Suppose the vector of neural network weights is
w = [v0, v1, v2, w10, w11, w20, w21]^T = [0, 0.1, 0.5, 0, -1, 1, 0.2]^T.
Calculate the output y if input x = 0. Calculate y again for x = 1.
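One way to check the arithmetic is to evaluate the network directly. Below is a minimal Python sketch (the names sigmoid and forward are illustrative, not from the original problem) that implements y = v0 + v1*sigmoid(w10 + w11*x) + v2*sigmoid(w20 + w21*x) with the given weights:

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    # w = [v0, v1, v2, w10, w11, w20, w21]^T = [0, 0.1, 0.5, 0, -1, 1, 0.2]^T
    v0, v1, v2, w10, w11, w20, w21 = 0.0, 0.1, 0.5, 0.0, -1.0, 1.0, 0.2

    def forward(x):
        h1 = sigmoid(w10 + w11 * x)    # hidden neuron 1
        h2 = sigmoid(w20 + w21 * x)    # hidden neuron 2
        return v0 + v1 * h1 + v2 * h2  # linear output neuron

    print(forward(0))  # h1 = 0.5, h2 = sigmoid(1) ~ 0.7311, so y ~ 0.4155
    print(forward(1))  # h1 = sigmoid(-1) ~ 0.2689, h2 = sigmoid(1.2) ~ 0.7685, so y ~ 0.4112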
(ii).
For the same neural network as in (i), calculate the derivative of y with respect to each of the neural network weight parameters v0, v1, v2, w10, w11, w20, and w21 for x = 0. Calculate the derivatives again for x = 1.
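Writing h1 = sigmoid(w10 + w11*x) and h2 = sigmoid(w20 + w21*x), the output is y = v0 + v1*h1 + v2*h2, and since sigmoid'(z) = sigmoid(z)*(1 - sigmoid(z)), the chain rule gives dy/dv0 = 1, dy/dv1 = h1, dy/dv2 = h2, dy/dw10 = v1*h1*(1 - h1), dy/dw11 = v1*h1*(1 - h1)*x, dy/dw20 = v2*h2*(1 - h2), and dy/dw21 = v2*h2*(1 - h2)*x. A minimal sketch extending the forward-pass code above (the name gradients is illustrative):

    def gradients(x):
        h1 = sigmoid(w10 + w11 * x)
        h2 = sigmoid(w20 + w21 * x)
        dh1 = h1 * (1.0 - h1)  # sigmoid'(z1)
        dh2 = h2 * (1.0 - h2)  # sigmoid'(z2)
        return {
            "dy/dv0": 1.0,
            "dy/dv1": h1,
            "dy/dv2": h2,
            "dy/dw10": v1 * dh1,
            "dy/dw11": v1 * dh1 * x,
            "dy/dw20": v2 * dh2,
            "dy/dw21": v2 * dh2 * x,
        }

    # x = 0: dy/dv0 = 1, dy/dv1 = 0.5, dy/dv2 ~ 0.7311,
    #        dy/dw10 = 0.025, dy/dw11 = 0, dy/dw20 ~ 0.0983, dy/dw21 = 0
    # x = 1: dy/dv0 = 1, dy/dv1 ~ 0.2689, dy/dv2 ~ 0.7685,
    #        dy/dw10 ~ 0.0197, dy/dw11 ~ 0.0197, dy/dw20 ~ 0.0889, dy/dw21 ~ 0.0889
    print(gradients(0))
    print(gradients(1))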