Question
We are given the following training examples:

(1.2, 3.2), (2.8, 8.5), (2, 4.7), (0.9, 2.9), (5.1, 11)

Suppose the weights are $w_0 = w_1 = 1$ initially. We want to minimize the cumulative loss $L_S(h, c)$ that corresponds to half of the residual sum of squares; i.e., we have that

$$L_S(h, c) = \frac{1}{2}\,\mathrm{RSS}_S(h, c) = \frac{1}{2} \sum_{i=1}^{m} \bigl(y_i - h(x_i)\bigr)^2.$$

(a) Using $\eta = 0.01$, perform three iterations of full gradient descent, listing $w_0$ and $w_1$ at each iteration.

(b) What value do we predict at the points $x_1 = 1.5$ and $x_2 = 4.5$?
Another Method: Gradient Descent

Unthresholded perceptron:

[Diagram: inputs $x_0 = 1, x_1, \dots, x_n$ with weights $w_0, w_1, \dots, w_n$ feeding a summation unit $\Sigma$.]

$$h(x) = o(x) = w^\top x = \sum_{i=0}^{n} w_i \cdot x_i$$

Define the cumulative loss $L_S(h, c)$ to be

$$L_S(h, c) = \frac{1}{2} \sum_{(x_d, y_d) \in S} \bigl(y_d - h(x_d)\bigr)^2.$$
We want to minimize this cumulative loss.
• Note that $L_S(h, c) = \frac{m}{2}\,\hat{R}_S(h, c)$, using the square loss $\ell_{\mathrm{sq}}$.
• This is very typical in machine learning. We usually minimize an
objective function that is a modification of the empirical risk.
• Here the modification is for mathematical convenience; we try to make
our update rule simpler. In other cases we will be introducing more
terms trying to satisfy additional goals/constraints.
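The slide's remark about a simpler update rule can be made concrete: differentiating the halved sum of squares cancels the factor of 2 from the chain rule. The following derivation is a standard reconstruction under that reading, not text from the slide:

$$\frac{\partial L_S}{\partial w_i} = \frac{1}{2} \sum_{(x_d, y_d) \in S} 2\bigl(y_d - h(x_d)\bigr)\bigl(-x_{d,i}\bigr) = -\sum_{(x_d, y_d) \in S} \bigl(y_d - h(x_d)\bigr)\, x_{d,i},$$

so the gradient-descent step takes the clean form

$$w_i \leftarrow w_i + \eta \sum_{(x_d, y_d) \in S} \bigl(y_d - h(x_d)\bigr)\, x_{d,i}.$$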