tute4 — SC4001 Tutorial 4 (Computer Science, Nanyang Technological University, Nov 24, 2024; 2 pages)
SC4001 Tutorial 4: Deep neural networks

[Figure 1: diagram of a two-layer feedforward network with inputs x₁, x₂, bias units (+1), and the initial weight and bias values. The numeric values shown in the figure are 1, 2, −2, 0, 3, −1, 1, 1, 0, −2, −2, 3; their assignment to individual connections is not recoverable from the extracted text.]

1. The two-layer feedforward perceptron network shown in Figure 1 has weights and biases initialized as indicated and receives 2-dimensional inputs (x₁, x₂). The network is to respond with d₁ = (0, 1)ᵀ for input pattern x₁ = (1.0, 3.0)ᵀ and with d₂ = (1, 0)ᵀ for input pattern x₂ = (−2.0, −2.0)ᵀ. Analyse a single feedforward and feedback step of gradient descent learning on the two patterns by doing the following:

(a) Find the weight matrix V to the hidden layer and the weight matrix W to the output layer, and the corresponding biases.
(b) Calculate the synaptic input u and output z of the hidden layer, and the synaptic input v and output y = (y₁, y₂) of the output layer.
(c) Find the mean square error cost J between the outputs and the targets.
(d) Calculate the gradients ∇ᵥJ and ∇ᵤJ at the output layer and the hidden layer, respectively.
(e) Compute the new weights and biases.
(f) Write a program to continue the iterations until convergence and find the final weights and biases. Assume a learning rate of 0.05.

Repeat (a)–(f) for stochastic gradient descent learning.
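For part (f), the iteration can be sketched as below. This is a minimal sketch, not the official solution: it assumes sigmoid activations in both layers, and since the figure's weight assignment is not recoverable here, the initial values of `V`, `bh`, `W`, `bo` are placeholders — substitute the values given in Figure 1.

```python
import numpy as np

# Placeholder initial parameters -- replace with the values from Figure 1.
V = np.array([[0.1, 0.2], [-0.2, 0.0]])   # hidden-layer weights (2 inputs x 2 units)
bh = np.array([0.3, -0.1])                # hidden-layer biases
W = np.array([[0.1, 0.1], [0.0, -0.2]])   # output-layer weights
bo = np.array([-0.2, 0.3])                # output-layer biases

X = np.array([[1.0, 3.0], [-2.0, -2.0]])  # input patterns x1, x2 as rows
D = np.array([[0.0, 1.0], [1.0, 0.0]])    # target patterns d1, d2 as rows

alpha = 0.05  # learning rate given in the question

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

losses = []
for epoch in range(20000):
    U = X @ V + bh           # synaptic input u of the hidden layer
    Z = sigmoid(U)           # hidden-layer output z
    S = Z @ W + bo           # synaptic input v of the output layer
    Y = sigmoid(S)           # network output y
    J = np.mean((D - Y) ** 2)
    losses.append(J)
    if J < 1e-4:             # simple convergence criterion
        break
    # Deltas from the MSE cost (constant factors folded into the learning rate).
    grad_o = -(D - Y) * Y * (1.0 - Y)         # delta at the output layer
    grad_h = (grad_o @ W.T) * Z * (1.0 - Z)   # delta back-propagated to the hidden layer
    W -= alpha * Z.T @ grad_o
    bo -= alpha * grad_o.sum(axis=0)
    V -= alpha * X.T @ grad_h
    bh -= alpha * grad_h.sum(axis=0)

print("final cost:", losses[-1])
print("final outputs:\n", Y)
```

For the stochastic variant, update the weights after each pattern instead of after the whole batch (i.e. loop over the rows of `X` inside each epoch).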
2. A feedforward neural network with one hidden layer is to perform the following classification:

  class A: (1.0, 1.0), (0.0, 1.0)
  class B: (3.0, 4.0), (2.0, 2.0)
  class C: (2.0, −2.0), (−2.0, −3.0)

The network has a hidden layer consisting of three perceptrons and a softmax output layer. Initialize the weights V and biases b to the hidden layer, and the weights W and biases c to the output layer, as follows:

  V = [ −0.10  0.97  0.18
        −0.70  0.38  0.93 ],   b = (0.0, 0.0, 0.0)ᵀ

  W = [  1.01   0.09  −0.39
         0.79  −0.45  −0.22
         0.28   0.96  −0.07 ],  c = (0.0, 0.0, 0.0)ᵀ

Show one iteration of gradient descent learning, and plot the learning curves until convergence at a learning rate α = 0.1. Determine the weights and biases at convergence. Find the class labels predicted by the trained network for the patterns x₁ = (2.5, 1.5)ᵀ and x₂ = (−1.5, 0.5)ᵀ.

3. Design a deep neural network consisting of two ReLU hidden layers to approximate the following function:

  f(x, y) = 0.8x² − y³ + 2.5xy,  for −1.0 ≤ x, y ≤ 1.0.

Use 10 neurons in the first hidden layer and 5 neurons in the second, and a linear neuron at the output layer.

(a) Divide the input space equally into square regions of size 0.25 × 0.25 and use the grid points as data points to learn the function f.
(b) Train the network using gradient descent learning at learning rate α = 0.01, and plot the learning curve (mean square error vs. iterations) and the predicted data points.
(c) Compare the behaviour of learning by plotting the learning curves at rates α = 0.005, 0.001, 0.01, and 0.05.
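A possible skeleton for question 2 is sketched below. It assumes sigmoid hidden units and the cross-entropy cost (the usual pairing with a softmax output); adjust both to match the lecture notation if they differ.

```python
import numpy as np

# Initial parameters as given in the question.
V = np.array([[-0.10, 0.97, 0.18],
              [-0.70, 0.38, 0.93]])      # 2 inputs -> 3 hidden units
b = np.zeros(3)
W = np.array([[1.01, 0.09, -0.39],
              [0.79, -0.45, -0.22],
              [0.28, 0.96, -0.07]])      # 3 hidden units -> 3 classes
c = np.zeros(3)

X = np.array([[1.0, 1.0], [0.0, 1.0],      # class A
              [3.0, 4.0], [2.0, 2.0],      # class B
              [2.0, -2.0], [-2.0, -3.0]])  # class C
K = np.eye(3)[[0, 0, 1, 1, 2, 2]]          # one-hot targets (A=0, B=1, C=2)

alpha = 0.1  # learning rate given in the question

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def softmax(s):
    e = np.exp(s - s.max(axis=1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=1, keepdims=True)

losses = []
for epoch in range(5000):
    Z = sigmoid(X @ V + b)                 # hidden-layer output
    P = softmax(Z @ W + c)                 # class probabilities
    losses.append(-np.mean(np.sum(K * np.log(P), axis=1)))  # cross-entropy
    grad_o = (P - K) / len(X)              # delta at the softmax output
    grad_h = (grad_o @ W.T) * Z * (1 - Z)  # delta at the hidden layer
    W -= alpha * Z.T @ grad_o
    c -= alpha * grad_o.sum(axis=0)
    V -= alpha * X.T @ grad_h
    b -= alpha * grad_h.sum(axis=0)

# Predicted labels for the two test patterns (0=A, 1=B, 2=C).
Xt = np.array([[2.5, 1.5], [-1.5, 0.5]])
pred = softmax(sigmoid(Xt @ V + b) @ W + c).argmax(axis=1)
print("predicted classes:", pred)
```

Plotting `losses` against the epoch index gives the learning curve asked for; question 3 follows the same pattern with ReLU hidden layers, a linear output unit, and the MSE cost in place of cross-entropy.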