Question 1: Matrices as Linear Transformations

Let A: ℝ³ → ℝ⁴ be the linear transformation (links) mapping the 3-vector in the input layer to the 4-vector in the hidden layer, and let B: ℝ⁴ → ℝ² be the linear transformation (links) from the hidden layer to the output layer. Write the matrix representations of A and B, and label each edge in the image below with the corresponding entries of A and B.

[Figure: a network diagram with an input layer of three nodes (inputs x1, x2, x3), a hidden layer of four neurons, and an output layer of two nodes (outputs y1, y2); links connect each layer to the next.]

Question 2: Change of Basis between standard coordinates in ℝ³

Sometimes, preprocessing of raw data is useful before training a neural network. Perhaps we know something about the structure of our feature data and want to put it into a more convenient basis before training. For example, if we know that the data is cylindrically symmetric, it might make sense to train our algorithm based on distance to the center of the cylinder, height on the cylinder, and angle from the center, rather than the x, y, and z distances from the origin.

a. Write the change-of-basis matrix P: ℝ³ → ℝ³ which transforms the standard basis E = {i, j, k} to a new basis B, where B = {2 0 }

b. The transformation from Cartesian coordinates (x₁, x₂, x₃) to cylindrical coordinates (ρ, φ, x₃) is given by the following transformation T: ℝ³ → ℝ³:

    ρ = √(x₁² + x₂²),    φ = tan⁻¹(x₂/x₁),    x₃ = x₃.

Is T linear? If yes, prove it. If not, give a counterexample.

c. Consider the following matrix (the Jacobian matrix of this transformation):

    J = [ ∂ρ/∂x₁    ∂ρ/∂x₂    ∂ρ/∂x₃  ]
        [ ∂φ/∂x₁    ∂φ/∂x₂    ∂φ/∂x₃  ]
        [ ∂x₃/∂x₁   ∂x₃/∂x₂   ∂x₃/∂x₃ ]

Show that, if ρ: ℝ² → ℝ and φ: ℝ² → ℝ were replaced with linear functions, then J would be the matrix representation of T. (Since T is not linear, J can be thought of as a linear approximation of T.)
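Part (c) calls J a linear approximation of T. A minimal numerical sketch of that claim (the function names `T` and `jacobian` and the test point are my own, not part of the problem): for a small displacement h, the change T(x + h) − T(x) is well approximated by the linear map J(x) applied to h.

```python
import numpy as np

def T(x):
    """Cylindrical-coordinate map from part (b): (x1, x2, x3) -> (rho, phi, x3)."""
    x1, x2, x3 = x
    return np.array([np.hypot(x1, x2), np.arctan2(x2, x1), x3])

def jacobian(x):
    """The 3x3 matrix of partial derivatives from part (c), evaluated at x."""
    x1, x2, x3 = x
    r2 = x1**2 + x2**2
    r = np.sqrt(r2)
    return np.array([
        [x1 / r,   x2 / r,  0.0],  # d(rho)/dx_i
        [-x2 / r2, x1 / r2, 0.0],  # d(phi)/dx_i
        [0.0,      0.0,     1.0],  # d(x3)/dx_i
    ])

x = np.array([3.0, 4.0, 1.0])       # a point away from the x3-axis
h = np.array([1e-5, -2e-5, 3e-5])   # a small displacement

# T itself is not linear, but locally T(x + h) - T(x) ≈ J(x) @ h.
lhs = T(x + h) - T(x)
rhs = jacobian(x) @ h
print(np.allclose(lhs, rhs, atol=1e-9))  # prints True
```

The approximation error shrinks quadratically in |h|, which is exactly what "J is the linear approximation of T at x" means.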
Question
Key Concepts / Background: we study how to train a neural network to classify data with 3 features into 2 classes. Through this example, we will become more familiar with the descriptive power of linear transformations and observe that purely linear neural networks (without nonlinear activation functions) are not good models for machine learning. If you want to ignore details about applications to machine learning, you can skip the underlined or blue text. For each
u ∈ ℝ³ in the training set, the desired output of the neural network for u is either (1, 0)ᵀ or (0, 1)ᵀ.
Traditionally, neural networks are built as a composition of linear transformations of the data (neuron edge weights) and nonlinear transformations (activation functions), although we will not include activation functions here.
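The remark above about omitting activation functions can be sketched numerically (the weights here are randomly invented for illustration, not taken from the problem): composing the two linear layers of Question 1 is the same as applying the single matrix B·A, so a purely linear network collapses to one linear map.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))  # input layer -> hidden layer, as in Question 1
B = rng.standard_normal((2, 4))  # hidden layer -> output layer

x = np.array([1.0, -2.0, 0.5])   # a 3-feature input vector

two_layer = B @ (A @ x)          # forward pass through both "layers"
collapsed = (B @ A) @ x          # the single 2x3 matrix B·A applied once

print(np.allclose(two_layer, collapsed))  # prints True
```

This is why purely linear networks gain no expressive power from depth: any stack of linear layers is equivalent to one matrix, which motivates inserting nonlinear activations between layers.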