
Geometric Perspective of Simple Linear Regression
2. In Lecture 12, we viewed both the simple linear regression model and the multiple
linear regression model through the lens of linear algebra. The key geometric insight
was that if we train a model on some design matrix X and true response vector Y, our
predicted response $\hat{Y} = X\hat{\theta}$ is the vector in span(X) that is closest to Y.
In the simple linear regression case, our optimal vector is $\hat{\theta} = [\hat{\theta}_0, \hat{\theta}_1]^T$, and our design matrix is

$$X = \begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_n \end{bmatrix} = \begin{bmatrix} \mathbb{1}_n & \vec{x} \end{bmatrix}$$

This means we can write our predicted response vector as $\hat{Y} = X \begin{bmatrix} \hat{\theta}_0 \\ \hat{\theta}_1 \end{bmatrix} = \hat{\theta}_0 \mathbb{1}_n + \hat{\theta}_1 \vec{x}$.

In this problem, $\mathbb{1}_n$ is the $n$-vector of all 1s and $\vec{x}$ refers to the $n$-length vector $[x_1, x_2, \ldots, x_n]^T$. Note, $\vec{x}$ is a feature, not an observation.
For this problem, assume we are working with the simple linear regression model,
though the properties we establish here hold for any linear regression model that contains
an intercept term.
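To make this setup concrete, here is a minimal NumPy sketch that builds the design matrix $[\mathbb{1}_n \;\; \vec{x}]$ from assumed toy data and computes $\hat{Y} = X\hat{\theta}$ by least squares. The data and the names `x`, `y`, `theta_hat`, `y_hat` are assumptions made only for this illustration, not anything fixed by the problem.

```python
import numpy as np

# Toy data assumed only for this sketch; any x and y of equal length work.
rng = np.random.default_rng(0)
x = rng.normal(size=20)
y = 3.0 + 2.0 * x + rng.normal(scale=0.5, size=20)

# Design matrix X = [1_n, x]: a column of ones (intercept) next to the feature.
X = np.column_stack([np.ones_like(x), x])

# Least-squares fit: theta_hat = [theta_0_hat, theta_1_hat].
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predicted response: Y_hat = X @ theta_hat = theta_0_hat * 1_n + theta_1_hat * x,
# i.e. the vector in span(X) closest to y.
y_hat = X @ theta_hat
```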
(a) Recall in the last assignment, we showed that $\sum_{i=1}^{n} e_i = 0$ algebraically. In this question, explain why $\sum_{i=1}^{n} e_i = 0$ using a geometric property. (Hint: $e = Y - \hat{Y}$, and $e = [e_1, e_2, \ldots, e_n]^T$.)

(b) Similarly, show that $\sum_{i=1}^{n} e_i x_i = 0$ using a geometric property. (Hint: Your answer should be very similar to the above.)
(c) Briefly explain why the vector $\hat{Y}$ must also be orthogonal to the residual vector $e$.
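A quick numerical check of the three properties in (a), (b), and (c) is sketched below, again with assumed toy data (an illustration, not a proof): the residual vector's dot products with $\mathbb{1}_n$, with $\vec{x}$, and with $\hat{Y}$ all come out to zero up to floating-point error.

```python
import numpy as np

# Same assumed toy setup as in the sketch above.
rng = np.random.default_rng(0)
x = rng.normal(size=20)
y = 3.0 + 2.0 * x + rng.normal(scale=0.5, size=20)

X = np.column_stack([np.ones_like(x), x])        # [1_n, x]
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ theta_hat                            # projection of y onto span(X)
e = y - y_hat                                    # residual vector

# (a) e is orthogonal to the all-ones column, so the residuals sum to 0.
print(np.isclose(np.ones_like(x) @ e, 0.0))      # True
# (b) e is orthogonal to the feature column, so sum of e_i * x_i is 0.
print(np.isclose(x @ e, 0.0))                    # True
# (c) y_hat lies in span(X), so it is orthogonal to e as well.
print(np.isclose(y_hat @ e, 0.0))                # True
```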