import numpy as np

def weighted_mse_grad(w, X, y, V):
    """
    Calculate the gradient of the weighted MSE loss at the given point w

    Parameters
    ----------
    w: array_like
        Current point at which we want to evaluate the gradient
        (vector with k+1 elements)
    X: array_like
        Design matrix with N rows and k+1 columns
    y: array_like
        Vector with N observations of the target variable
    V: array_like
        Diagonal matrix N x N, with the weights

    Returns
    -------
    out: ndarray
        Gradient vector with k+1 elements
    """
    # Gradient of L(w) = (1/N) * ||V^{1/2} (y - Xw)||^2
    return -2.0 / len(y) * X.T @ V @ (y - X @ w)
# TEST weighted_mse_grad()
w = np.zeros(5)
X = np.random.randn(10, 5)
y = np.ones(10)
V = np.diag(np.ones(10))
L_grad = weighted_mse_grad(w, X, y, V)
assert(L_grad.shape == (5,))
w = np.zeros(5)
X = np.eye(5)
y = np.ones(5)
V = np.diag(np.ones(5))
L_grad = weighted_mse_grad(w, X, y, V)
assert(L_grad.shape == (5,)), 'The shape of the output on the test case is wrong'
assert(np.allclose(5 * L_grad, -2 * y)), 'The values of the output on the test case are wrong'
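As an extra sanity check beyond the shape and value asserts above, the analytic gradient can be compared against central finite differences of the loss. This is a minimal sketch, not part of the original tests; the helper `weighted_mse_loss` is an assumed re-statement of the loss formula given in the exercise.

```python
import numpy as np

def weighted_mse_loss(w, X, y, V):
    # L(w) = (1/N) * ||V^{1/2} (y - Xw)||^2, written as the quadratic form r^T V r / N
    r = y - X @ w
    return r @ V @ r / len(y)

def weighted_mse_grad(w, X, y, V):
    # Analytic gradient: -(2/N) * X^T V (y - Xw)
    return -2.0 / len(y) * X.T @ V @ (y - X @ w)

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 5))
y = rng.standard_normal(10)
V = np.diag(rng.uniform(0.5, 2.0, size=10))
w = rng.standard_normal(5)

# Central finite differences along each coordinate direction e
eps = 1e-6
num_grad = np.array([
    (weighted_mse_loss(w + eps * e, X, y, V) -
     weighted_mse_loss(w - eps * e, X, y, V)) / (2 * eps)
    for e in np.eye(5)
])
assert np.allclose(num_grad, weighted_mse_grad(w, X, y, V), atol=1e-6)
```

Because the loss is quadratic in w, the central difference is exact up to floating-point rounding, so a tight tolerance is safe here.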
Calculate the gradient of the weighted MSE loss with respect to the parameters of the model:

    ∇_w L = ∇_w (1/N) ||V^{1/2} (y - Xw)||^2 = ?

Hints: You can use formulas from the lecture. Given a vector x ∈ R^n and a matrix A ∈ R^{k×n}:

    ∇ ||x||^2 = 2x
    ∇_x (Ax) = A^T

Using the formula for the gradient, implement the function weighted_mse_grad, which calculates the gradient for any given vector w.
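One way to fill in the "= ?" is to apply the two hinted formulas via the chain rule: letting x = V^{1/2}(y - Xw), the Jacobian of x with respect to w is -V^{1/2}X, and the gradient of ||x||^2 is 2x. This is a sketch of the derivation, not given explicitly in the exercise:

```latex
\nabla_w L
  = \nabla_w \frac{1}{N}\left\| V^{1/2}(y - Xw) \right\|^2
  = \frac{1}{N}\left(-V^{1/2}X\right)^{\top} \cdot 2\,V^{1/2}(y - Xw)
  = -\frac{2}{N}\, X^{\top} V \,(y - Xw)
```

using that V^{1/2⊤}V^{1/2} = V for a diagonal weight matrix. This expression is consistent with the second test case above: with X = I, V = I, y = 1 and w = 0 on N = 5 points, the gradient is -(2/5)y, i.e. 5 · ∇_w L = -2y.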