Question
Recall that the dot product of two vectors (1-d matrices) produces a scalar value. The dot product can be confusing because the scalar it produces has no fixed meaning on its own; it simply represents the mathematical operations of multiplying corresponding entries and summing the results. In other words, the dot product can represent the linear projection of a vector onto the number line. This interpretation will be used repeatedly throughout machine learning, where our main goal will be to take some features dotted with some weights/parameters. Once again, this can be thought of as projecting our features onto a number line, where the projection acts as our prediction! Keep this idea in mind, even if it does not make complete sense yet.

It is also important to know that the dot product can take on other meanings, such as a geometric one: it measures how similar two vectors are when one is projected onto the other, that is, how much one vector points in the direction of the other.
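
To make the multiply-and-sum interpretation concrete, here is a minimal NumPy sketch (the feature and weight values are made up for illustration; they are not part of the question):

```python
import numpy as np

# Hypothetical example: 3 features for one data sample and 3 weights/parameters.
features = np.array([0.5, 1.2, -0.3])   # made-up feature values
weights = np.array([2.0, -1.0, 0.5])    # made-up weights

# The dot product multiplies elementwise and sums: a projection onto a number line.
prediction = np.dot(features, weights)   # 0.5*2.0 + 1.2*(-1.0) + (-0.3)*0.5
print(prediction)                        # -0.35, the scalar "prediction"
```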

**Given the following two vectors, compute the dot product using the equation \( \langle x, w \rangle := x^T w \). Note that the symbols \( \langle \cdot, \cdot \rangle \) and \( \cdot \) are both used to represent the dot product.**

\[ x = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, w = \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix} \]

- ○ 9
- ○ -1
- ○ 10
- ○ 2
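
For reference, the multiply-and-sum can be checked numerically (a quick NumPy sketch using the vectors given in the question):

```python
import numpy as np

x = np.array([1, 2, 3])
w = np.array([1, 1, 2])

# <x, w> = x^T w: multiply elementwise, then sum.
print(np.dot(x, w))    # 9  (1*1 + 2*1 + 3*2)
print((x * w).sum())   # same value via explicit multiply-and-sum
```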
Furthermore, we can use the dot product definition

\[ \langle \mathbf{x}, \mathbf{y} \rangle = \|\mathbf{x}\|_2 \|\mathbf{y}\|_2 \cos(\theta) \]

to determine the angle between two vectors. For example, see the image below for the formula for computing the angle between two vectors using the dot product definition.

The diagram shows two vectors, \( \mathbf{x} \) and \( \mathbf{y} \), with an angle \( \theta \) between them. The equation for calculating the angle is given as:

\[ \theta = \arccos \left( \frac{\mathbf{x} \cdot \mathbf{y}}{|\mathbf{x}| |\mathbf{y}|} \right) \]

Machine learning often uses this idea of cosine similarity and the angle between vectors to compute the similarity between different data samples using their feature vectors (i.e., the columns of a dataset)!
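
As a sketch of how this angle computation looks in code (NumPy, with illustrative vectors that are not from the question):

```python
import numpy as np

def angle_degrees(x, y):
    """Angle between two vectors via the dot-product definition."""
    cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    # Clip to [-1, 1] to guard against floating-point drift before arccos.
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Illustrative example: perpendicular vectors should give 90 degrees.
print(angle_degrees(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # 90.0
```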

---

**Given the following two vectors, find the angle between them (in degrees). Use the equation above, but use the L2 norm \( \| \cdot \|_2 \) when computing \( |\mathbf{x}| \) and \( |\mathbf{y}| \). Recall that to compute the L2 norm of a vector \( \mathbf{x} \), we do the following:**

\[
\|\mathbf{x}\|_2 = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}.
\]

\[
\mathbf{x} = 
\begin{bmatrix}
1 \\
1
\end{bmatrix}, \quad \mathbf{y} = 
\begin{bmatrix}
-2 \\
-2
\end{bmatrix}.
\]

- [ ] 180
- [ ] -1
- [ ] 1
- [ ] 90
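
For a numerical sanity check of this question's vectors (a NumPy sketch reusing the arccos formula above):

```python
import numpy as np

x = np.array([1.0, 1.0])
y = np.array([-2.0, -2.0])

# L2 norms: ||x||_2 = sqrt(1^2 + 1^2), ||y||_2 = sqrt((-2)^2 + (-2)^2)
norm_x = np.linalg.norm(x)   # sqrt(2)
norm_y = np.linalg.norm(y)   # 2*sqrt(2)

cos_theta = np.dot(x, y) / (norm_x * norm_y)   # -4 / 4 = -1
theta = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
print(theta)   # 180.0: the vectors point in exactly opposite directions
```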