Question
Regression/curve fitting. Please show all work.
a) In class we derived a least-squares (LS) estimate for the slope of a linear model. Suppose instead that we believe our data can be represented well by a quadratic model (a parabola), \( y = ax^2 \). Assuming that we have training data pairs \((x_i, y_i)\), \( i = 1, \ldots, N \), derive a general expression for the LS estimate \(\hat{a}\) of the model parameter \( a \).
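For reference, a minimal sketch of the standard derivation, assuming the usual sum-of-squared-errors criterion:

\[
E(a) = \sum_{i=1}^{N} \left( y_i - a x_i^2 \right)^2, \qquad
\frac{dE}{da} = -2 \sum_{i=1}^{N} x_i^2 \left( y_i - a x_i^2 \right) = 0
\;\Longrightarrow\;
\hat{a} = \frac{\sum_{i=1}^{N} x_i^2 y_i}{\sum_{i=1}^{N} x_i^4}.
\]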
b) Use your expression from part (a) to compute the LS estimate \(\hat{a}\) from the following noisy training data (\(N = 5\)). (The blank scratch columns of the original worksheet are omitted here.)
\[
\begin{array}{|c|c|c|}
\hline
i & x_i & y_i \\
\hline
1 & -2 & 5 \\
2 & 1 & 1 \\
3 & 0 & -0.5 \\
4 & 1 & 1 \\
5 & 2 & 3 \\
\hline
\end{array}
\]
Sketch the estimated model function \( y = \hat{a}x^2 \), and on the same graph plot the noisy data points \((x_i, y_i)\), \( i = 1, \ldots, 5 \).
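Not a substitute for the hand calculation the problem asks for, but a minimal numerical check of part (b), assuming NumPy and Matplotlib are available; the data values are taken from the table above:

```python
# Sketch: verify the closed-form LS estimate a_hat = sum(x^2 y) / sum(x^4)
# and plot the fitted parabola against the noisy training points.
import numpy as np
import matplotlib.pyplot as plt

# Training data from the table in part (b).
x = np.array([-2.0, 1.0, 0.0, 1.0, 2.0])
y = np.array([5.0, 1.0, -0.5, 1.0, 3.0])

# Closed-form LS estimate for the one-parameter model y = a x^2.
a_hat = np.sum(x**2 * y) / np.sum(x**4)
print(f"a_hat = {a_hat}")  # 34/34 = 1.0 for this data

# Plot the estimated model y = a_hat * x^2 together with the data.
xs = np.linspace(-2.5, 2.5, 200)
plt.plot(xs, a_hat * xs**2, label=r"$y = \hat{a}x^2$")
plt.scatter(x, y, color="red", label="training data")
plt.legend()
plt.show()
```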
c) State the definition of the k nearest-neighbor (k-NN) model for regression.
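For reference, the standard definition uses an unweighted average of the \(k\) nearest training responses:

\[
\hat{f}(x) = \frac{1}{k} \sum_{i \in \mathcal{N}_k(x)} y_i,
\]

where \(\mathcal{N}_k(x)\) is the set of indices of the \(k\) training points whose inputs \(x_i\) are closest to the query point \(x\). (Distance-weighted variants also exist.)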
d) Use this definition to compute the 2-NN (two nearest-neighbor) model of the function \( y = f(x) \) for the training data in part (b), and sketch the resulting function.
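Again only a hedged numerical sketch, not the by-hand construction the problem asks for; it assumes absolute-value distance in \(x\) and breaks distance ties by sort order:

```python
# Sketch: evaluate and plot the 2-NN regression function for the
# training data of part (b).
import numpy as np
import matplotlib.pyplot as plt

x = np.array([-2.0, 1.0, 0.0, 1.0, 2.0])
y = np.array([5.0, 1.0, -0.5, 1.0, 3.0])

def knn_predict(x_query, k=2):
    """Average the y-values of the k training points nearest to x_query."""
    nearest = np.argsort(np.abs(x - x_query))[:k]
    return np.mean(y[nearest])

# The 2-NN model is piecewise constant; a dense grid reveals the steps.
xs = np.linspace(-2.5, 2.5, 500)
plt.plot(xs, [knn_predict(x0) for x0 in xs], label="2-NN model")
plt.scatter(x, y, color="red", label="training data")
plt.legend()
plt.show()
```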
e) Which model do you expect to generalize better to data outside the training set, the LS model or the 2-NN model? What is the word used to describe careful fitting to training data, which results in failure to generalize to test data?