Tanmay_ML_A2-merged
New York University, Course 6143 (Electrical Engineering), Feb 20, 2024
import numpy as np
import matplotlib.pyplot as plt

Loading the dataset from the .mat file into a NumPy array

import scipy.io

data3 = scipy.io.loadmat('/content/data3.mat')

type(data3)
dict

data3.keys()
dict_keys(['__header__', '__version__', '__globals__', 'data'])

data = data3['data']

Checking the dataset values and the shape of the dataset

data.shape
(200, 3)

type(data)
numpy.ndarray

Extracting the features into X and the target variable into y

X = data[:, :-1]
X = np.concatenate((np.ones((X.shape[0], 1)), X), axis=1)   # prepend a column of ones for the bias term
y = data[:, -1].reshape((-1, 1))

def stepFn(a):
    # Threshold in place: non-negative entries become +1, negative entries become -1
    a[a >= 0] = 1
    a[a < 0] = -1
    return a

import math

def linear_perceptron_reg(X, y, num_iters=6000, l_rate=1, random_theta=True, show_stats=False):
    m, n = X.shape
    iter_i = 0
    seed_value = 42
    np.random.seed(seed_value)
    w = np.random.random_sample((n, 1)) if random_theta else np.zeros((n, 1))
    perceptron_error_history, classification_error_history = [], []
    while iter_i < num_iters:
        # Pick a random sample; update w only if it is misclassified (perceptron rule)
        random_i = np.random.randint(m)
        output = X[random_i].reshape((1, 3)).dot(w)
        if stepFn(np.multiply(y[random_i], output))[0] == -1:
            w += l_rate * X[random_i].reshape((3, 1)).dot(y[random_i].reshape((1, 1)))
        # Track classification error and empirical perceptron risk over the whole dataset
        y_pred = X.dot(w)
        classification_error = float(np.count_nonzero(stepFn(y_pred) != y))
        classification_error_history.append(classification_error / m)
        risk = np.sum(stepFn(np.multiply(-y, y_pred)))
        perceptron_error_history.append(risk / m)
        if show_stats and (iter_i % math.ceil(num_iters / 10) == 0 or iter_i == (num_iters - 1)):
            print(f"Iteration {iter_i:5}: Perceptron Error {float(perceptron_error_history[-1]):8.2f}, "
                  f"Classification Error {float(classification_error_history[-1]):.6f}")
        if classification_error_history[-1] == 0:
            if show_stats:
                print(f'Convergence Reached! Iteration/(Max No of Iter): {iter_i}/{num_iters}')
                print(f"Iteration {iter_i:5}: Perceptron Error {float(perceptron_error_history[-1]):8.2f}, "
                      f"Classification Error {float(classification_error_history[-1]):.6f}")
            break
        iter_i += 1
    return w, perceptron_error_history, classification_error_history

def plot_data_and_decision_boundary(w, x, y):
    plt.figure(0)
    samples_num = x.shape[0]
    for i in range(samples_num):
        if y[i] == 1:
            plt.plot(x[i, 1], x[i, 2], 'g.', label='Class +1' if i == 0 else "")
        else:
            plt.plot(x[i, 1], x[i, 2], 'r.', label='Class -1' if i == 0 else "")
    # Decision boundary: w0 + w1*x1 + w2*x2 = 0, i.e. x2 = (-w0 - w1*x1) / w2
    min_x = min(x[:, 1])
    max_x = max(x[:, 1])
    y_min_x = (-w[0] - w[1] * min_x) / w[2]
    y_max_x = (-w[0] - w[1] * max_x) / w[2]
    plt.plot([min_x, max_x], [y_min_x, y_max_x], '-k', label='Decision Boundary')
    plt.xlabel('Feature X1')
    plt.ylabel('Feature X2')
    plt.legend(loc='upper left')
    plt.show()

def plot_classification_and_risk_error_trends(start_iterations, stop_iterations,
                                              perceptron_error_history, classification_error_history):
    plt.figure(1, figsize=(12, 8))
    plt.plot(range(start_iterations, stop_iterations), classification_error_history, 'b-', label='Classification Error')
    plt.plot(range(start_iterations, stop_iterations), perceptron_error_history, 'y-', label='Perceptron Error/Risk')
    plt.xlabel('Iterations')
    plt.title('Iterations vs Classification Error/Empirical Risk')
    plt.legend()
    plt.show()

w, J, B = linear_perceptron_reg(X, y, show_stats=True)

Iteration     0: Perceptron Error     0.09, Classification Error 0.545000
Iteration   600: Perceptron Error    -0.91, Classification Error 0.045000
Iteration  1200: Perceptron Error    -0.80, Classification Error 0.100000
Convergence Reached! Iteration/(Max No of Iter): 1295/6000
Iteration  1295: Perceptron Error    -1.00, Classification Error 0.000000

B[-1]
0.0

plot_data_and_decision_boundary(w, X, y)
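A quick check on the two quantities printed above: stepFn(-y * y_pred) is +1 for a misclassified sample and -1 for a correctly classified one (up to the edge case of a score exactly equal to 0), so the reported Perceptron Error is simply 2 * classification_error - 1, and the value of -1.00 reached at convergence corresponds to zero misclassifications. The short sketch below, which is not part of the original notebook, verifies this relationship for the trained weights:

scores = X.dot(w)
# stepFn mutates its argument in place, so use a copy when computing the predicted signs
cls_err = np.count_nonzero(stepFn(scores.copy()) != y) / X.shape[0]
perc_err = np.sum(stepFn(np.multiply(-y, scores))) / X.shape[0]
print(cls_err, perc_err, 2 * cls_err - 1)   # perc_err should equal 2 * cls_err - 1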
plot_classification_and_risk_error_trends(0, len(B), J, B)

last_n = 20
if len(B) >= last_n:
    start_index = len(B) - last_n
    plot_classification_and_risk_error_trends(start_index, len(B), J[start_index:], B[start_index:])
else:
    print(f"There are less than {last_n} iterations.")
Convergence Behaviour as we change Step Size (l_rate / Learning Rate)

step_sizes = [0.1, 0.01, 0.001, 1, 10]
plt.figure(1, figsize=(12, 8))
for l_rate in step_sizes:
    print(f'\nLearning Rate: {l_rate}')
    w, perceptron_error_history, classification_error_history = linear_perceptron_reg(
        X, y, num_iters=6000, l_rate=l_rate, random_theta=True, show_stats=True)
    plt.plot(classification_error_history, label=f'Step Size: {l_rate}')
plt.xlabel('Iterations')
plt.ylabel('Classification Error')
plt.title('Convergence Behavior with Different Step Sizes')
plt.legend()
plt.show()

Learning Rate: 0.1
Iteration     0: Perceptron Error     0.00, Classification Error 0.500000
Iteration   600: Perceptron Error    -0.71, Classification Error 0.145000
Convergence Reached! Iteration/(Max No of Iter): 929/6000
Iteration   929: Perceptron Error    -1.00, Classification Error 0.000000

Learning Rate: 0.01
Iteration     0: Perceptron Error     0.00, Classification Error 0.500000
Iteration   600: Perceptron Error     0.53, Classification Error 0.765000
Iteration  1200: Perceptron Error    -0.66, Classification Error 0.170000
Convergence Reached! Iteration/(Max No of Iter): 1399/6000
Iteration  1399: Perceptron Error    -1.00, Classification Error 0.000000

Learning Rate: 0.001
Iteration     0: Perceptron Error     0.00, Classification Error 0.500000
Iteration   600: Perceptron Error     0.10, Classification Error 0.550000
Iteration  1200: Perceptron Error     0.15, Classification Error 0.575000
Iteration  1800: Perceptron Error     0.21, Classification Error 0.605000
Iteration  2400: Perceptron Error     0.21, Classification Error 0.605000
Iteration  3000: Perceptron Error     0.31, Classification Error 0.655000
Iteration  3600: Perceptron Error     0.54, Classification Error 0.770000
Iteration  4200: Perceptron Error     0.77, Classification Error 0.885000
Iteration  4800: Perceptron Error     0.79, Classification Error 0.895000
Iteration  5400: Perceptron Error     0.73, Classification Error 0.865000
Iteration  5999: Perceptron Error     0.63, Classification Error 0.815000

Learning Rate: 1
Iteration     0: Perceptron Error     0.09, Classification Error 0.545000
Iteration   600: Perceptron Error    -0.91, Classification Error 0.045000
Iteration  1200: Perceptron Error    -0.80, Classification Error 0.100000
Convergence Reached! Iteration/(Max No of Iter): 1295/6000
Iteration  1295: Perceptron Error    -1.00, Classification Error 0.000000

Learning Rate: 10
Iteration     0: Perceptron Error     0.00, Classification Error 0.500000
Iteration   600: Perceptron Error    -0.85, Classification Error 0.075000
Convergence Reached! Iteration/(Max No of Iter): 943/6000
Iteration   943: Perceptron Error    -1.00, Classification Error 0.000000

For the very low learning rate (0.001), the perceptron error keeps fluctuating and never reaches -1 even after 6000 iterations: the algorithm does not converge within the iteration budget, so a step size this small is too slow to be practical for this problem.

fig, axs = plt.subplots(len(step_sizes), 1, sharex=True, figsize=(10, 15))
for i, l_rate in enumerate(step_sizes):
    w, perceptron_error_history, classification_error_history = linear_perceptron_reg(
        X, y, num_iters=6000, l_rate=l_rate, random_theta=True)
    axs[i].plot(classification_error_history, label=f'Step Size: {l_rate}')
    axs[i].set_ylabel('Classification Error')
    axs[i].legend()
    axs[i].set_yscale('log')
plt.xlabel('Iterations')
plt.suptitle('Convergence Behavior with Different Step Sizes')
plt.show()
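To complement the plots, here is a small sketch, not part of the original notebook, that re-runs the sweep and simply reports the iteration at which each step size first reaches zero classification error (or the final error if it never does within the 6000-iteration budget):

for l_rate in step_sizes:
    # Reuse linear_perceptron_reg defined above with the same settings as the sweep
    _, _, cls_hist = linear_perceptron_reg(X, y, num_iters=6000, l_rate=l_rate, random_theta=True)
    if cls_hist[-1] == 0:
        print(f"Step size {l_rate}: reached zero classification error at iteration {len(cls_hist) - 1}")
    else:
        print(f"Step size {l_rate}: no convergence in 6000 iterations "
              f"(final classification error {cls_hist[-1]:.3f})")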
On the other hand, a higher learning rate such as 10 converges quickly here, but if the learning rate is too large, each update can carry the parameters past the minimum to the other side of the loss curve. The optimization can then oscillate back and forth across the minimum (visible in the plot for step size 10) and may never settle there; in the worst case this leads to divergence, where the error increases with each step instead of decreasing. In short, higher learning rates converge faster but risk overshooting the minimum, while lower learning rates are more stable but slower. Based on this run, a learning rate of 1 or 0.1 offers a good balance for this problem, reaching zero classification error relatively quickly.

print('Tanmay Anil Rathi\nAssignment 2')

Tanmay Anil Rathi
Assignment 2