Some training data

The code below generates a simple 2D dataset of n positive examples followed by n negative examples. The cell after that plots them. The code also prepends a 1 to each example so that the bias term simply corresponds to the first weight.

In [ ]:
    n = 10
    X = np.concatenate((np.random.rand(n, 2), np.random.rand(n, 2) + 1))
    X = np.hstack((np.expand_dims(np.ones(2 * n), 1), X))
    Y = [1] * n + [-1] * n
    colors = ['r'] * n + ['g'] * n

    # Randomize the order of the instances just for fun
    rng = np.random.default_rng()
    state = rng.__getstate__()
    rng.shuffle(X)
    rng.__setstate__(state)
    rng.shuffle(Y)
    rng.__setstate__(state)
    rng.shuffle(colors)

In [ ]:
    plt.scatter(X[:, 1], X[:, 2], c=colors)

Task 3: Full gradient descent

If you've done everything above correctly, the code below will perform gradient descent to train the classifier. Modify this code so that it runs until one epoch produces no classification errors rather than running for a fixed number of iterations.

In [ ]:
    clf = Perceptron(3)
    for epoch in range(100):
        for x, y in zip(X, Y):
            clf.weights = gd_step(clf, x, y, 0.01, loss_hinge)
    print(clf.weights)

Task 4: Plot some hyperplanes

Run full gradient descent 5 times and write a routine to convert the weights into slope/intercept form. Then use the function below to plot the hyperplanes learned by the perceptron along with the data in one graph. The second cell below shows how that can be done. Write a paragraph explaining what you see in the plot, touching on how much variation there is from run to run and whether the separators seem like "good" ones.

In [ ]:
    def weights_to_slope_intercept(weights):
        pass

In [ ]:
    def abline(slope, intercept):
        """Plot a line from slope and intercept"""
        axes = plt.gca()
        x_vals = np.array(axes.get_xlim())
        y_vals = intercept + slope * x_vals
        plt.plot(x_vals, y_vals, '--')

In [ ]:
    plt.xlim([0, 2])
    plt.ylim([0, 2])
    plt.scatter(X[:, 1], X[:, 2], c=colors)
    abline(-1, 2)

Task 2: One step of gradient descent

Fill in the function below, which takes the following arguments:

• clf - An instance of the perceptron class above
• x - A training instance
• y - The corresponding true class label
• learning_rate - A learning rate in the range (0, 1)
• loss_fn - A function that takes as input the true class label and the activation and returns a real number, which is the loss on that example
• epsilon - The delta to use in the method of finite differences

The function must return a new set of weights to use in the perceptron after performing one step of gradient descent using the training example and loss function. To do that it will:

• Loop over each of the weights
• Compute the partial derivative of the loss with respect to that weight using the method of finite differences
• Use the computed gradient (the list of partials with respect to each of the weights) to compute a new weight vector as w = w - a*g, where w is the weight vector, a is the learning rate, and g is the computed gradient
• Return the new weight vector

Note: Be careful not to modify the weights of the perceptron in place in the routine below.

In [ ]:
    def gd_step(clf, x, y, learning_rate, loss_fn, epsilon=0.001):
        pass
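A minimal sketch of what gd_step could look like, assuming the perceptron's activation is simply the dot product of its weights with the bias-augmented input vector; if the class above computes the activation differently, substitute that computation. It uses a centered finite difference, though a one-sided difference would also satisfy the description above.

    def gd_step(clf, x, y, learning_rate, loss_fn, epsilon=0.001):
        # Work on a copy so the perceptron's weights are never modified in place
        weights = np.array(clf.weights, dtype=float)
        grad = np.zeros_like(weights)
        for i in range(len(weights)):
            # Perturb only weight i and measure how the loss changes
            w_plus = weights.copy()
            w_plus[i] += epsilon
            w_minus = weights.copy()
            w_minus[i] -= epsilon
            loss_plus = loss_fn(y, np.dot(w_plus, x))    # assumed activation: w . x
            loss_minus = loss_fn(y, np.dot(w_minus, x))
            grad[i] = (loss_plus - loss_minus) / (2 * epsilon)
        # One descent step: w = w - a*g
        return weights - learning_rate * grad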
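For Task 3, one way to keep training until an epoch produces no classification errors is sketched below. It assumes the predicted label is the sign of the activation (again taken to be the dot product of the weights with the example); adapt the error check to whatever prediction method the Perceptron class actually provides.

    clf = Perceptron(3)
    errors = 1
    while errors > 0:
        errors = 0
        for x, y in zip(X, Y):
            if np.sign(np.dot(clf.weights, x)) != y:  # assumed prediction rule
                errors += 1
            clf.weights = gd_step(clf, x, y, 0.01, loss_hinge)
    print(clf.weights)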
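For Task 4, one possible weights_to_slope_intercept is sketched below, assuming weights = [w0, w1, w2] with w0 the bias (matching the prepended column of ones), so the decision boundary is w0 + w1*x1 + w2*x2 = 0.

    def weights_to_slope_intercept(weights):
        # w0 + w1*x1 + w2*x2 = 0  =>  x2 = -(w1/w2)*x1 - w0/w2
        w0, w1, w2 = weights
        slope = -w1 / w2
        intercept = -w0 / w2
        return slope, intercept

Running the training loop five times, converting each final weight vector with this routine, and passing each (slope, intercept) pair to abline on the same scatter plot produces the comparison the task asks for.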