Question
Geometric Perspective of Simple Linear Regression

2. In Lecture 12, we viewed both the simple linear regression model and the multiple linear regression model through the lens of linear algebra. The key geometric insight was that if we train a model on some design matrix $X$ and true response vector $Y$, our predicted response $\hat{Y} = X\hat{\theta}$ is the vector in $\text{span}(X)$ that is closest to $Y$.

In the simple linear regression case, our optimal vector is $\hat{\theta} = [\hat{\theta}_0, \hat{\theta}_1]^T$, and our design matrix is

$$X = \begin{bmatrix} \mathbb{1}_n & \vec{x} \end{bmatrix} = \begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_n \end{bmatrix}$$

This means we can write our predicted response vector as $\hat{Y} = X \begin{bmatrix} \hat{\theta}_0 \\ \hat{\theta}_1 \end{bmatrix} = \hat{\theta}_0 \mathbb{1}_n + \hat{\theta}_1 \vec{x}$.

In this problem, $\mathbb{1}_n$ is the $n$-vector of all 1s and $\vec{x}$ refers to the $n$-length vector $[x_1, x_2, \ldots, x_n]^T$. Note, $\vec{x}$ is a feature, not an observation.

For this problem, assume we are working with the simple linear regression model, though the properties we establish here hold for any linear regression model that contains an intercept term.
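To make the setup concrete, here is a minimal numerical sketch, assuming NumPy and a small synthetic dataset (the data values and variable names are illustrative, not from the lecture or assignment), that builds the design matrix $X = [\mathbb{1}_n \;\; \vec{x}]$ and computes $\hat{Y}$ as the least-squares fit:

```python
import numpy as np

# Small synthetic dataset (illustrative values only).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
n = len(x)

# Design matrix X = [1_n  x]: a column of ones (intercept) and the feature column.
X = np.column_stack([np.ones(n), x])

# Least-squares estimate theta_hat = [theta_0, theta_1]: minimizes ||Y - X theta||.
theta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Predicted response: the vector in span(X) closest to Y.
Y_hat = X @ theta_hat

print("theta_hat:", theta_hat)
print("Y_hat:", Y_hat)
```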
(a) Recall in the last assignment, we showed that $\sum_{i=1}^{n} e_i = 0$ algebraically. In this question, explain why $\sum_{i=1}^{n} e_i = 0$ using a geometric property. (Hint: $e = Y - \hat{Y}$, and $e = [e_1, e_2, \ldots, e_n]^T$.)

(b) Similarly, show that $\sum_{i=1}^{n} e_i x_i = 0$ using a geometric property. (Hint: Your answer should be very similar to the above.)

(c) Briefly explain why the vector $\hat{Y}$ must also be orthogonal to the residual vector $e$.
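Continuing the sketch above, the three properties can be checked numerically. This only illustrates the underlying geometry, namely that the residual vector is orthogonal to every column of $X$ and hence to anything in $\text{span}(X)$; it is not a substitute for the geometric explanation the question asks for:

```python
# Residual vector e = Y - Y_hat.
e = Y - Y_hat

# (a) e is orthogonal to the all-ones column 1_n, so sum(e_i) = 1_n . e = 0.
print("sum of residuals:", np.sum(e))      # ~0 up to floating-point error

# (b) e is orthogonal to the feature column x, so sum(e_i * x_i) = x . e = 0.
print("sum of e_i * x_i:", np.dot(x, e))   # ~0

# (c) Y_hat = theta_0 * 1_n + theta_1 * x lies in span(X), so it is orthogonal to e as well.
print("Y_hat . e:", np.dot(Y_hat, e))      # ~0
```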