CS 188 Spring 2023 Regular Discussion 12

1 Naive Bayes

In this question, we will train a Naive Bayes classifier to predict class labels Y as a function of input features A and B. Y, A, and B are all binary variables, with domains 0 and 1. We are given 10 training points from which we will estimate our distribution.

A: 1 1 1 1 0 1 0 1 1 1
B: 1 0 0 1 1 1 1 0 1 1
Y: 1 1 0 0 0 1 1 0 0 0

1. What are the maximum likelihood estimates for the tables P(Y), P(A | Y), and P(B | Y)?

   Y   P(Y)
   0
   1

   A   Y   P(A | Y)
   0   0
   1   0
   0   1
   1   1

   B   Y   P(B | Y)
   0   0
   1   0
   0   1
   1   1

2. Consider a new data point (A = 1, B = 1). What label would this classifier assign to this sample?

3. Let's use Laplace Smoothing to smooth out our distribution. Compute the new distribution for P(A | Y) given Laplace Smoothing with k = 2.

   A   Y   P(A | Y)
   0   0
   1   0
   0   1
   1   1
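For a quick check of the arithmetic, here is a minimal Python sketch covering all three parts: the maximum-likelihood tables, the prediction for (A = 1, B = 1), and Laplace smoothing with k = 2. The code is not part of the worksheet; it simply re-derives the counts from the 10 training points above.

```python
# Minimal sketch (not from the worksheet): Naive Bayes estimates from counts.
from collections import Counter

# The 10 training points as (A, B, Y) triples, read off the table above.
data = [(1, 1, 1), (1, 0, 1), (1, 0, 0), (1, 1, 0), (0, 1, 0),
        (1, 1, 1), (0, 1, 1), (1, 0, 0), (1, 1, 0), (1, 1, 0)]

n = len(data)
y_counts = Counter(y for _, _, y in data)        # counts of Y
ay_counts = Counter((a, y) for a, _, y in data)  # joint counts of (A, Y)
by_counts = Counter((b, y) for _, b, y in data)  # joint counts of (B, Y)

# Part 1: maximum-likelihood estimates are just relative frequencies.
p_y = {y: y_counts[y] / n for y in (0, 1)}
p_a_given_y = {(a, y): ay_counts[(a, y)] / y_counts[y]
               for a in (0, 1) for y in (0, 1)}
p_b_given_y = {(b, y): by_counts[(b, y)] / y_counts[y]
               for b in (0, 1) for y in (0, 1)}

# Part 2: predict Y for (A=1, B=1) by comparing the joint scores
# P(Y) * P(A=1 | Y) * P(B=1 | Y) for each label.
scores = {y: p_y[y] * p_a_given_y[(1, y)] * p_b_given_y[(1, y)] for y in (0, 1)}
print("scores:", scores, "-> predict Y =", max(scores, key=scores.get))

# Part 3: Laplace smoothing with k = 2. Add k to every (A, Y) count;
# the denominator grows by k times |domain(A)| = 2.
k = 2
for a in (0, 1):
    for y in (0, 1):
        smoothed = (ay_counts[(a, y)] + k) / (y_counts[y] + 2 * k)
        print(f"P(A={a} | Y={y}) with Laplace k={k}: {smoothed:.3f}")
```

Running this reproduces the hand computation: the Y = 0 score (0.6 · 5/6 · 4/6 = 1/3) beats the Y = 1 score (0.4 · 3/4 · 3/4 = 0.225), so the classifier predicts Y = 0 for (A = 1, B = 1).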
Q2. Backpropagation

(a) Perform forward propagation on the neural network below for x = 1 by filling in the values in the table. Note that (i), ..., (vii) are outputs after performing the appropriate operation as indicated in the node.

[Figure: a network with input x, constant nodes 2, 3, and 4, and max, min, and max operation nodes; the intermediate outputs are labeled (i) through (vii).]

   (i)   (ii)   (iii)   (iv)   (v)   (vi)   (vii)

(b) Below is a neural network with weights a, b, c, d, e, f. The inputs are x_1 and x_2. The first hidden layer computes r_1 = max(c · x_1 + e · x_2, 0) and r_2 = max(d · x_1 + f · x_2, 0). The second hidden layer computes s_1 = 1 / (1 + exp(-a · r_1)) and s_2 = 1 / (1 + exp(-b · r_2)). The output layer computes y = s_1 + s_2. Note that the weights a, b, c, d, e, f are indicated along the edges of the neural network here.

Suppose the network has inputs x_1 = 1, x_2 = 1. The weight values are a = 1, b = 1, c = 4, d = 1, e = -2, f = -2. Forward propagation then computes r_1 = 2, r_2 = 0, s_1 = 0.9, s_2 = 0.5, y = 1.4. Note: some values are rounded.

[Figure: the network diagram with inputs x_1, x_2, hidden units r_1, r_2, s_1, s_2, output y, and the weights a, b, c, d, e, f along the edges.]

Using the values computed from forward propagation, use backpropagation to numerically calculate the following partial derivatives. Write your answers as a single number (not an expression). You do not need a calculator. Use scratch paper if needed.

Hint: For g(z) = 1 / (1 + exp(-z)), the derivative is ∂g/∂z = g(z)(1 - g(z)).

   ∂y/∂a   ∂y/∂b   ∂y/∂c   ∂y/∂d   ∂y/∂e   ∂y/∂f
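For reference, here is a minimal Python sketch (not the worksheet's solution) of part (b): it runs the forward pass with the given weights and then applies the chain rule by hand, using g'(z) = g(z)(1 - g(z)) for the sigmoid and the indicator (z > 0) as the ReLU derivative.

```python
# Minimal sketch (not from the worksheet): forward pass and manual backprop
# for the network in part (b).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Inputs and weights from the problem statement.
x1, x2 = 1.0, 1.0
a, b, c, d, e, f = 1.0, 1.0, 4.0, 1.0, -2.0, -2.0

# Forward pass.
z1 = c * x1 + e * x2                       # pre-activation of r1 (= 2)
z2 = d * x1 + f * x2                       # pre-activation of r2 (= -1)
r1, r2 = max(z1, 0.0), max(z2, 0.0)        # ReLU: r1 = 2, r2 = 0
s1, s2 = sigmoid(a * r1), sigmoid(b * r2)  # s1 ~ 0.88, s2 = 0.5
y = s1 + s2                                # ~ 1.38, rounded to 1.4 above

# Backward pass via the chain rule.
ds1 = s1 * (1.0 - s1)                      # d s1 / d(a * r1)
ds2 = s2 * (1.0 - s2)                      # d s2 / d(b * r2)
relu1 = 1.0 if z1 > 0 else 0.0             # ReLU derivative at z1
relu2 = 1.0 if z2 > 0 else 0.0             # ReLU derivative at z2

grads = {
    "dy/da": ds1 * r1,
    "dy/db": ds2 * r2,
    "dy/dc": ds1 * a * relu1 * x1,
    "dy/dd": ds2 * b * relu2 * x1,
    "dy/de": ds1 * a * relu1 * x2,
    "dy/df": ds2 * b * relu2 * x2,
}
for name, value in grads.items():
    print(f"{name} = {value:.3f}")
```

Note that the worksheet rounds s_1 to 0.9, so hand answers use s_1(1 - s_1) = 0.09 where the code prints 0.105. Also, ∂y/∂b vanishes because r_2 = 0, and ∂y/∂d and ∂y/∂f vanish because r_2's ReLU is inactive (its pre-activation is negative), so no gradient flows through that branch.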