HW 2.2 - AE 353 | PrairieLearn

School: University of Illinois, Urbana-Champaign
Course: AE 353
Subject: Mathematics
Date: Feb 20, 2024
Matrix operations and matrix exponential (numeric)

Matrix operations with NumPy

Start by importing NumPy as usual.

In [1]: import numpy as np

Let's define three matrices: $$ L = \begin{bmatrix} 0 & 1 \\ -3 & 2 \end{bmatrix} \qquad M = \begin{bmatrix} 2 & 0 \\ -1 & 1 \end{bmatrix} \qquad N = \begin{bmatrix} -1 \\ 4 \end{bmatrix} $$

In [2]: L = np.array([[0., 1.], [-3., 2.]])
        M = np.array([[2., 0.], [-1., 1.]])
        N = np.array([[-1.], [4.]])

We can add two matrices:

In [3]: L + M
Out[3]: array([[ 2.,  1.], [-4.,  3.]])

We can subtract two matrices:

In [4]: L - M
Out[4]: array([[-2.,  1.], [-2.,  1.]])

BEWARE! NumPy will allow you to add or subtract matrices that you would normally think of as having incompatible sizes. For example, it should not be possible to take the sum $$ M + N $$ because $M$ has size $2 \times 2$ and $N$ has size $2 \times 1$. But look:

In [5]: M + N
Out[5]: array([[ 1., -1.], [ 3.,  5.]])

What has happened here? Notice that the first column of the result is the sum of $N$ and the first column of $M$, and that the second column of the result is the sum of $N$ and the second column of $M$.
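Since NumPy broadcasts silently, one defensive habit is to check shapes yourself before adding. The helper below is a hypothetical sketch (checked_add is not part of NumPy) that refuses mismatched shapes instead of broadcasting:

```python
import numpy as np

M = np.array([[2., 0.], [-1., 1.]])
N = np.array([[-1.], [4.]])

def checked_add(A, B):
    # Hypothetical helper: raise an error instead of letting NumPy
    # broadcast a sum of arrays with different shapes.
    if A.shape != B.shape:
        raise ValueError(f"shape mismatch: {A.shape} + {B.shape}")
    return A + B

# M + N broadcasts silently to a 2x2 result; checked_add raises instead.
try:
    checked_add(M, N)
except ValueError as e:
    print(e)
```

A check like this costs one line and catches exactly the kind of silent size mismatch described above.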
This is an example of broadcasting, a strategy used to speed up computation when performing the same operation many times. In this case, broadcasting was used to quickly add the same matrix $N$ to each column of $M$. NumPy saw that the shapes of $M$ and $N$ were compatible for broadcasting, so instead of raising an error it silently applied this column-by-column operation. So, be careful! Make sure that the matrix operation you are implementing is actually what you intended.

We can find the transpose of a matrix, for example $M^T$:

In [6]: M.T
Out[6]: array([[ 2., -1.], [ 0.,  1.]])

We can multiply matrices. For example, suppose we wanted to find the matrix product $$ MN = \begin{bmatrix} 2 & 0 \\ -1 & 1 \end{bmatrix} \begin{bmatrix} -1 \\ 4 \end{bmatrix} = \begin{bmatrix} -2 \\ 5 \end{bmatrix} $$ We do this as follows:

In [7]: M @ N
Out[7]: array([[-2.], [ 5.]])

Note the use of @ for matrix multiplication. This is shorthand for the function numpy.matmul.

BEWARE! NumPy uses the "normal" symbol for multiplication, *, for element-wise multiplication rather than matrix multiplication. Look:

In [8]: M * N
Out[8]: array([[-2., -0.], [-4.,  4.]])

Clearly, M * N is not the same as M @ N. In particular, notice that:

  • the first column of M * N is what you would get if you multiplied each element of $N$ by the corresponding element in the first column of $M$
  • the second column of M * N is what you would get if you multiplied each element of $N$ by the corresponding element in the second column of $M$

This is another example of broadcasting. So, be careful! Use @ for matrix multiplication with NumPy.

One common situation in which you want to use * is when multiplying a matrix and a scalar (e.g., taking the product $Ft$ when solving a linear system). For example, suppose we want to find
$$ M \cdot 5 $$

Here is how we do that:

In [9]: M * 5
Out[9]: array([[10.,  0.], [-5.,  5.]])

Matrix-vector operations with NumPy

Consider our usual state-space model: $$ \dot{x} = Ax + Bu $$ The parameters $A$ and $B$ are matrices. They are two-dimensional: $A$ has $n_x \times n_x$ elements and $B$ has $n_x \times n_u$ elements. Both in written work and in NumPy, it is customary to represent these parameters by matrices or two-dimensional arrays. For example, if $$ A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \qquad B = \begin{bmatrix} 5 \\ 6 \end{bmatrix} $$ then we would define:

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5.], [6.]])

The state and input are vectors. They are one-dimensional: the state $x$ has $n_x$ elements and the input $u$ has $n_u$ elements. In written work, it is customary to represent these vectors by column matrices: $$ x = \begin{bmatrix} x_1 \\ \vdots \\ x_{n_x} \end{bmatrix} \qquad u = \begin{bmatrix} u_1 \\ \vdots \\ u_{n_u} \end{bmatrix} $$ In NumPy, however, it is customary to represent these vectors by one-dimensional arrays. For example, if $$ x = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \qquad u = \begin{bmatrix} 3 \end{bmatrix} $$ then we would define:

x = np.array([1., 2.])
u = np.array([3.])

In particular, we would not define:

x = np.array([[1.], [2.]])
u = np.array([[3.]])

To be clear, this is a convention and not a rule. We could define x and u as two-dimensional arrays in NumPy, and all the matrix operations would work out. One reason not to do so is to avoid errors in your code. If $x$ and $u$ are never two-dimensional, then we should use one-dimensional
arrays to represent them; why allow ourselves to make a mistake with the sizes and shapes of things?

Let's convince ourselves that matrix-vector operations proceed as we expect. Define a vector: $$ v = \begin{bmatrix} 2 \\ -3 \end{bmatrix} $$

In [10]: v = np.array([2., -3.])

We can find the product $$ Lv = \begin{bmatrix} 0 & 1 \\ -3 & 2 \end{bmatrix} \begin{bmatrix} 2 \\ -3 \end{bmatrix} = \begin{bmatrix} -3 \\ -12 \end{bmatrix} $$ as follows:

In [11]: L @ v
Out[11]: array([ -3., -12.])

There are two things to note here. First, we used @ and not *, just as we did before (and as we would do for any matrix multiplication). This is very important. Second, since we represented the vector $v$ as a one-dimensional array, the resulting product $Lv$ is also a one-dimensional array. That is to say, the product of a matrix and a vector is a vector, and should be represented as such.

We can also find the product $Mv$:

In [12]: M @ v
Out[12]: array([ 4., -5.])

Again, the result is a one-dimensional array.

We can not, of course, find the product $$ Nv = \begin{bmatrix} -1 \\ 4 \end{bmatrix} \begin{bmatrix} 2 \\ -3 \end{bmatrix} $$ because $N$ and $v$ have incompatible sizes. If we try, we should get an error:

In [13]: N @ v
ValueError: matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 2 is different from 1)
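To see concretely that @ really implements the textbook definition of a matrix-vector product, here is a quick sketch that computes $(Lv)_i = \sum_k L_{ik} v_k$ with explicit loops and compares it to L @ v:

```python
import numpy as np

L = np.array([[0., 1.], [-3., 2.]])
v = np.array([2., -3.])

# Compute (Lv)_i = sum_k L_{ik} v_k directly from the definition.
manual = np.zeros(2)
for i in range(2):
    for k in range(2):
        manual[i] += L[i, k] * v[k]

assert np.allclose(manual, L @ v)   # the loop agrees with @
assert (L @ v).shape == (2,)        # 1-D vector in, 1-D vector out
```

The shape check confirms the convention above: multiplying a matrix by a one-dimensional array gives back a one-dimensional array.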
We can, however, find the product $$ N^Tv = \begin{bmatrix} -1 & 4 \end{bmatrix} \begin{bmatrix} 2 \\ -3 \end{bmatrix} $$ because $N^T$ and $v$ do have compatible sizes:

In [14]: N.T @ v
Out[14]: array([-14.])

BEWARE! NumPy will allow you to add or subtract one-dimensional arrays (vectors) and two-dimensional arrays (matrices), despite the fact that doing so would be fundamentally wrong. Here is an example:

In [15]: N + v
Out[15]: array([[ 1., -4.], [ 6.,  1.]])

The first column of N + v is the sum of $N$ with the first element of $v$, and the second column is the sum of $N$ with the second element of $v$. This is, yet again, an example of broadcasting. Be careful!

Matrix exponential with SciPy

In order to evaluate the matrix exponential, we need to import the linalg module from SciPy:

In [16]: from scipy import linalg

Now, we can find the matrix exponential of any square matrix using the function linalg.expm. For example, suppose we wanted to find $$ e^M $$ We would do so as follows:

In [17]: linalg.expm(M)
Out[17]: array([[ 7.3890561 ,  0.        ], [-4.67077427,  2.71828183]])

The result is a square matrix with the same size as $M$, as expected.

BEWARE! NumPy will happily take the exponential of a matrix with np.exp:

In [18]: np.exp(M)
Out[18]: array([[7.3890561 , 1.        ], [0.36787944, 2.71828183]])

Look closely: this is not the same result as before! Why not? Like *, np.exp works element by element: it takes the scalar exponential of each element of $M$. This is not the same as taking the matrix exponential.
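One way to convince ourselves that linalg.expm really computes the matrix exponential (and np.exp does not) is to sum the defining power series $e^M = I + M + \frac{M^2}{2!} + \frac{M^3}{3!} + \cdots$ directly. A small sketch:

```python
import numpy as np
from scipy import linalg

M = np.array([[2., 0.], [-1., 1.]])

# Partial sum of the defining series e^M = I + M + M^2/2! + M^3/3! + ...
S = np.eye(2)
term = np.eye(2)
for k in range(1, 30):
    term = term @ M / k      # term is now M^k / k!
    S = S + term

assert np.allclose(S, linalg.expm(M))   # matches the matrix exponential
assert not np.allclose(S, np.exp(M))    # does not match element-wise exp
```

Thirty terms is plenty here because the factorial in the denominator makes the series converge very quickly for a small matrix like this one.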
Take another look at $M$:

In [19]: print(M)
[[ 2.  0.]
 [-1.  1.]]

See the 0. in the top-right corner (i.e., first row, second column)? Its scalar exponential would be $$ e^0 = 1 $$ And that is exactly what you see in the top-right corner of np.exp(M), but it is not what you see in the top-right corner of linalg.expm(M). So, be careful! Use linalg.expm to take the matrix exponential and np.exp to take the scalar exponential.

Application to solve a linear system (numeric)

You know that the solution to $$ \dot{x} = Fx $$ with initial condition $$ x(0) = x_0 $$ is $$ x(t) = e^{Ft} x_0 $$ where $e^{(\cdot)}$ is the matrix exponential function. Evaluate this solution for given values of $F$, $t$, and $x_0$.

In [22]: #grade (write your code in this cell and DO NOT DELETE OR MODIFY THIS LINE)

         # Description of system, time, and initial condition (do not change)
         F = np.array([[0., 1.], [4., 3.]])
         t = 0.5
         x0 = np.array([-1., 2.])

         # Matrix exponential of F * t
         ex = linalg.expm(F * t)

         # Solution at time t
         x = ex @ x0

         # Remember that you can use "print" statements to check your work.
         print(x)

[0.74997443 6.63908167]

Matrix operations and matrix exponential (symbolic)

We can do all of these same things symbolically with SymPy as well.
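As an independent sanity check on the closed-form solution, we can integrate $\dot{x} = Fx$ numerically and compare the result at $t = 0.5$. This sketch assumes scipy.integrate.solve_ivp is available; it is not part of the graded cell:

```python
import numpy as np
from scipy import linalg
from scipy.integrate import solve_ivp

F = np.array([[0., 1.], [4., 3.]])
x0 = np.array([-1., 2.])
t1 = 0.5

# Closed-form solution x(t) = e^{Ft} x0 evaluated at t = t1
x_exact = linalg.expm(F * t1) @ x0
print(x_exact)  # [0.74997443 6.63908167]

# Numerical integration of xdot = F x as an independent check
sol = solve_ivp(lambda t, x: F @ x, (0., t1), x0, rtol=1e-10, atol=1e-10)
x_numeric = sol.y[:, -1]

assert np.allclose(x_exact, x_numeric, atol=1e-6)
```

If the two disagree, something is wrong with either the model or the way the matrix exponential was formed (a common bug is writing expm(F) * t instead of expm(F * t)).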
First, let's convert $L$, $M$, and $N$ from NumPy arrays to SymPy matrices:

In [23]: import sympy as sym

In [24]: L = sym.Matrix(L)
         M = sym.Matrix(M)
         N = sym.Matrix(N)

Take a look at $L$:

In [25]: L
Out[25]: $\displaystyle \left[\begin{matrix}0.0 & 1.0\\-3.0 & 2.0\end{matrix}\right]$

Note that it still has floating-point numbers. When doing symbolic math, it is best to work with rational numbers (i.e., ratios of integers). We can get there with one more conversion:

In [26]: L = sym.nsimplify(L, rational=True)
         M = sym.nsimplify(M, rational=True)
         N = sym.nsimplify(N, rational=True)

Now look at $L$:

In [27]: L
Out[27]: $\displaystyle \left[\begin{matrix}0 & 1\\-3 & 2\end{matrix}\right]$

We can add:

In [28]: L + M
Out[28]: $\displaystyle \left[\begin{matrix}2 & 1\\-4 & 3\end{matrix}\right]$

Unlike NumPy, SymPy does not use broadcasting for matrix addition, so it will throw an error if we try to add two matrices of incompatible sizes:

In [29]: M + N
ShapeError: Matrix size mismatch: (2, 2) + (2, 1)

We can subtract:

In [30]: L - M
Out[30]: $\displaystyle \left[\begin{matrix}-2 & 1\\-2 & 1\end{matrix}\right]$

We can find the transpose:

In [31]: M.T
Out[31]: $\displaystyle \left[\begin{matrix}2 & -1\\0 & 1\end{matrix}\right]$

We can multiply:

In [32]: M @ N
Out[32]: $\displaystyle \left[\begin{matrix}-2\\5\end{matrix}\right]$

BEWARE! SymPy allows the use of * for matrix multiplication and, because it does not do broadcasting, this will produce a correct result:

In [33]: M * N
Out[33]: $\displaystyle \left[\begin{matrix}-2\\5\end{matrix}\right]$

We strongly encourage the use of @ for matrix multiplication in SymPy, to be consistent with NumPy.

And finally, we can take the matrix exponential:

In [34]: sym.exp(M)
Out[34]: $\displaystyle \left[\begin{matrix}e^{2} & 0\\e - e^{2} & e\end{matrix}\right]$

It is often helpful to simplify the result:

In [35]: sym.exp(M).simplify()
Out[35]: $\displaystyle \left[\begin{matrix}e^{2} & 0\\e \left(1 - e\right) & e\end{matrix}\right]$

Is this the same as our result from numeric computation with NumPy? Let's check, by converting our result to a NumPy array:
In [36]: np.array(sym.exp(M).simplify(), dtype=np.float64)
Out[36]: array([[ 7.3890561 ,  0.        ], [-4.67077427,  2.71828183]])

Note the syntax: we simply called np.array on the SymPy matrix, using dtype=np.float64 to produce a NumPy array with floating-point numbers. In any case, yes, this result looks the same as what we found before (see above).

A wonderful thing about SymPy is that it will take the matrix exponential of matrices with symbolic variables. For example, suppose we wanted to find $$ e^{M t} $$ where $t$ is a scalar variable. Easy! First, we create a symbol to represent $t$:

In [37]: t = sym.symbols('t')

Then, we compute our result:

In [38]: sym.exp(M * t).simplify()
Out[38]: $\displaystyle \left[\begin{matrix}e^{2 t} & 0\\\left(1 - e^{t}\right) e^{t} & e^{t}\end{matrix}\right]$

Wow! Note that you can convert this result to a list and print it, in case you need to copy/paste it as an answer to a PrairieLearn question (this is just an FYI; we don't ask you to do so in this particular question):

In [39]: print(sym.exp(M * t).simplify().tolist())
[[exp(2*t), 0], [(1 - exp(t))*exp(t), exp(t)]]

Matrix-vector operations with SymPy

Unlike NumPy, SymPy makes no distinction between matrices and vectors. Everything is two-dimensional. That is to say, if you give it a vector, SymPy will immediately represent that vector as a column matrix. Let's convert $v$ from a one-dimensional NumPy array to a SymPy matrix:

In [40]: v = sym.Matrix(v)

Take a look at $v$:
In [41]: v
Out[41]: $\displaystyle \left[\begin{matrix}2.0\\-3.0\end{matrix}\right]$

Oops, we forgot to convert from floating-point to rational numbers! Let's do that:

In [42]: v = sym.nsimplify(v, rational=True)

Now look at $v$:

In [43]: v
Out[43]: $\displaystyle \left[\begin{matrix}2\\-3\end{matrix}\right]$

See how it looks exactly like a column matrix? Let's check its shape:

In [44]: v.shape
Out[44]: (2, 1)

Yep: when represented as a SymPy matrix, the vector $v$ is definitely treated as a two-dimensional array, a column matrix. So, everything we said about matrix operations applies equally to matrix-vector operations with SymPy. Just as one example, let's take the product $Lv$:

In [45]: L @ v
Out[45]: $\displaystyle \left[\begin{matrix}-3\\-12\end{matrix}\right]$

The result in SymPy is a $2 \times 1$ matrix, whereas in NumPy it was a vector (a one-dimensional array) of length $2$:

In [46]: (L @ v).shape
Out[46]: (2, 1)

Application to solve a linear system (symbolic)

You know that the solution to $$ \dot{x} = Fx $$ with initial condition $$ x(0) = x_0 $$ is $$ x(t) = e^{Ft} x_0 $$
where $e^{(\cdot)}$ is the matrix exponential function. Evaluate this solution for given values of $F$ and $x_0$, and express your result in terms of $t$.

In [48]: #grade (write your code in this cell and DO NOT DELETE OR MODIFY THIS LINE)

         # Description of system and initial condition (do not change)
         F = np.array([[0., 1.], [4., 3.]])
         x0 = np.array([-1., 2.])

         # Convert F and x0 to SymPy matrices with rational elements
         F = sym.Matrix(F)
         x0 = sym.Matrix(x0)
         F = sym.nsimplify(F, rational=True)
         x0 = sym.nsimplify(x0, rational=True)

         # Create a symbol to represent time
         t = sym.symbols('t')

         # Compute solution as a function of time (symbolic)
         ex = sym.exp(F * t).simplify()
         x_sym = ex @ x0

         # Remember that you can use "print" statements to check your work.
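A good way to build confidence in a symbolic answer is to substitute a specific time into it and compare with the numeric computation from the earlier part of this assignment. The sketch below substitutes $t = 1/2$ and checks against linalg.expm; the variable names are for illustration only:

```python
import numpy as np
import sympy as sym
from scipy import linalg

F_np = np.array([[0., 1.], [4., 3.]])
x0_np = np.array([-1., 2.])

t = sym.symbols('t')
F = sym.nsimplify(sym.Matrix(F_np), rational=True)
x0 = sym.nsimplify(sym.Matrix(x0_np), rational=True)

# Symbolic solution x(t) = e^{Ft} x0
x_sym = sym.exp(F * t) @ x0

# Substitute t = 1/2, evaluate numerically, and flatten to a 1-D array
x_half = np.array(x_sym.subs(t, sym.Rational(1, 2)).evalf(),
                  dtype=np.float64).flatten()

# Same quantity computed with the numeric matrix exponential
x_check = linalg.expm(F_np * 0.5) @ x0_np

assert np.allclose(x_half, x_check)
```

If the symbolic and numeric answers agree at one (or a few) sample times, it is very likely the symbolic expression is correct.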