School: Laurentian University
Course: 5617
Subject: Computer Science
Date: Dec 6, 2023
Quiz 3 - Results
Attempt 1 of 1
Written Oct 4, 2023 7:40 PM - Oct 4, 2023 7:47 PM
Attempt Score: 3/5 (60%)
Overall Grade (Highest Attempt): 3/5 (60%)

Question 1 (1/1 point)
Which of the following statements are true regarding logistic regression? (Select all that apply.)
  ✓ Logistic regression is used for binary classification problems.
  ✓ The output of logistic regression is a probability score between 0 and 1.
  • In logistic regression, the linearity assumption implies that the decision boundary is always a straight line.
  ✓ Logistic regression uses the mean squared error as its primary loss function.

Question 2 (0/1 point)
Which of the following best describes underfitting?
  • It arises when the regression model is too complex with a high number of features, leading to high variance.
  • It refers to a scenario where the regression model fits the training data perfectly with zero residuals.
  → It happens when the model is too simplistic, failing to capture the underlying patterns in the data.
  ✗ It occurs when the model captures noise in the training data and performs poorly on new, unseen data.

Question 3 (1/1 point)
Which of the following best describes the primary function of the gradient descent optimization algorithm in machine learning?
  ✓ It incrementally adjusts model parameters to minimize a cost function by following the direction of steepest descent.
  • It predicts future values based on historical time-series data.
  • It computes the maximum likelihood estimate for a given dataset.
  • It is used to classify data into distinct clusters based on inherent patterns.

Question 4 (0/1 point)
In the context of regularization techniques for machine learning models, how do L1 and L2 regularization differ?
  ✗ L1 regularization adds the sum of the squared values of coefficients to the loss function, while L2 adds the absolute values of coefficients.
  • Neither L1 nor L2 regularization has any effect on the magnitude of the model coefficients.
  → L1 regularization has the effect of shrinking some of the model's coefficients to zero, whereas L2 tends to produce smaller coefficients but doesn't force them to zero.
  • Both L1 and L2 regularization will always perform better than a non-regularized model.

Question 5 (1/1 point)
In the context of linear regression optimized using gradient descent, what role does the learning rate play?
  • It determines the number of features to be used in the regression model.
  • It specifies the maximum number of iterations before the gradient descent algorithm terminates.
  ✓ It controls the magnitude of updates made to the model's parameters during each iteration.
  • It defines the correlation coefficient between the dependent and independent variables.
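Question 1 rests on two points: logistic regression outputs a probability between 0 and 1, and its usual loss is binary cross-entropy rather than mean squared error. A minimal NumPy sketch of both points; the weights, bias, and data below are made up purely for illustration:

```python
import numpy as np

def sigmoid(z):
    """Map a linear score to a probability between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(y_true, p_pred, eps=1e-12):
    """Binary cross-entropy, the standard loss for logistic regression
    (not mean squared error)."""
    p = np.clip(p_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

# Toy example: two features, assumed weights and bias.
X = np.array([[0.5, 1.2], [-1.0, 0.3], [2.0, -0.7]])
w = np.array([0.8, -0.4])
b = 0.1
p = sigmoid(X @ w + b)       # probabilities strictly inside (0, 1)
y = np.array([1, 0, 1])
print(p, log_loss(y, p))
```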
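Questions 3 and 5 both describe gradient descent: parameters are updated in the direction of steepest descent, and the learning rate scales the size of each update. A short sketch for plain linear regression; the data, learning rate, and iteration count are arbitrary choices for illustration, not anything taken from the quiz:

```python
import numpy as np

def gradient_descent(X, y, lr=0.1, n_iters=1000):
    """Minimize mean squared error for linear regression by stepping
    against the gradient. `lr` (the learning rate) controls the
    magnitude of each parameter update."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(n_iters):
        resid = X @ w + b - y              # prediction error
        grad_w = (2.0 / n) * X.T @ resid   # gradient w.r.t. weights
        grad_b = (2.0 / n) * resid.sum()   # gradient w.r.t. bias
        w -= lr * grad_w                   # step size set by lr
        b -= lr * grad_b
    return w, b

# Toy data: y is roughly 3x + 1 with a little noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 3 * X[:, 0] + 1 + 0.05 * rng.standard_normal(100)
print(gradient_descent(X, y, lr=0.1))
```

A learning rate that is too small makes convergence slow; one that is too large can make the updates overshoot and diverge, which is why it is tuned separately from the iteration limit.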
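The correct answer to Question 4 states that L1 regularization can drive some coefficients exactly to zero, while L2 only shrinks them. A quick scikit-learn comparison sketch; the toy data and the alpha value are assumptions chosen so the contrast is visible:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Toy data: only the first two of ten features actually matter.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = 4 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.standard_normal(200)

# L1 (Lasso) penalizes the absolute values of the coefficients and
# tends to zero out the irrelevant ones; L2 (Ridge) penalizes the
# squared values and only shrinks them toward zero.
lasso = Lasso(alpha=0.1).fit(X, y)
ridge = Ridge(alpha=0.1).fit(X, y)
print("L1 coefficients:", np.round(lasso.coef_, 3))
print("L2 coefficients:", np.round(ridge.coef_, 3))
```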