Since last time was too elementary, this time I'd like to go over some of the main types of matrices. The direction may drift a bit, but I'll correct course along the way. I'll do my best. This article uses some terms that didn't appear last time, so please bear with me; see my other articles for anything I don't explain here.
As a concrete example, take the 2x2 matrix
\begin{pmatrix}
1 & 1 \\
0 & 3
\end{pmatrix}
The matrix obtained by flipping the components across the main diagonal is called the transpose (transposed matrix):
\begin{pmatrix}
1 & 0 \\
1& 3
\end{pmatrix}
Something like this; that's the picture. Now, in code:
import numpy as np

# Define matrix A
A = np.array([[1, 1],
              [0, 3]])

# Transpose of A
C = A.T
print(C)
It looks like this.
For any matrices A and B (of compatible sizes),
(A+B)^T=A^T+B^T\\
(AB)^T=B^TA^T\\
(A^T)^T=A
These identities hold; please try verifying them by actual calculation. There are more formulas besides these.
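As a quick sanity check, here is a minimal numpy sketch that verifies the three identities numerically for the example matrices above (the checks themselves are my addition):

import numpy as np

A = np.array([[1, 1],
              [0, 3]])
B = np.array([[1, 0],
              [1, 3]])

# (A + B)^T == A^T + B^T
print(np.array_equal((A + B).T, A.T + B.T))  # True

# (AB)^T == B^T A^T
print(np.array_equal((A @ B).T, B.T @ A.T))  # True

# (A^T)^T == A
print(np.array_equal(A.T.T, A))  # True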
I haven't explained matrix multiplication yet. For any two matrices A and B,
A=\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}
B=\begin{pmatrix}
e & f \\
g & h
\end{pmatrix}
their matrix product is
AB=\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}\begin{pmatrix}
e & f \\
g & h
\end{pmatrix}=\begin{pmatrix}
ae+bg & af+bh \\
ce+dg & cf+dh
\end{pmatrix}
That's how it works. Each entry is a row of the first matrix times a column of the second, so for a 2x2 product you do this "row times column" computation four times. It may be hard to follow in words, so please take a look at this site: https://mathwords.net/gyouretsuseki
import numpy as np

# Define matrix A
A = np.array([
    [1, 1],
    [0, 3]
])

# Define matrix B
B = np.array([
    [1, 0],
    [1, 3]
])

# Matrix product (product of A and B)
C = A @ B
print("Matrix product C")
print(C)
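To make the "row times column" picture concrete, here is a sketch that computes each of the four entries by hand and compares the result against numpy (the explicit loop is my addition, purely for illustration):

import numpy as np

A = np.array([[1, 1],
              [0, 3]])
B = np.array([[1, 0],
              [1, 3]])

# Each entry C[i, j] is row i of A dotted with column j of B
C = np.zeros((2, 2), dtype=int)
for i in range(2):
    for j in range(2):
        C[i, j] = A[i, 0] * B[0, j] + A[i, 1] * B[1, j]

print(C)
print(np.array_equal(C, A @ B))  # True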
If I remember correctly, the tools for finding an inverse matrix were Sarrus's rule (for the determinant), cofactor expansion, and the sweep-out method (Gaussian elimination). This time, I'll introduce the 2x2 case via cofactors.
A=\begin{pmatrix}
1 & 1 \\
0 & 3
\end{pmatrix}
Suppose we have this matrix. Using Sarrus's rule to find the determinant:
\begin{vmatrix}
1 & 1 \\
0 & 3
\end{vmatrix}=(1\times 3)-(1\times 0)=3=\det(A)
From this determinant, form the cofactor matrix, transpose it (this is the adjugate \tilde{A}), and divide by the determinant to obtain the inverse. Actually calculating:
A^{-1}=\frac{1}{\det(A)}\tilde{A}\\
\tilde{A}=\begin{pmatrix}
3 & -1 \\
0 & 1
\end{pmatrix}\\
A^{-1}=\frac{1}{3}\begin{pmatrix}
3 & -1 \\
0 & 1
\end{pmatrix}
import numpy as np

# Define matrix A
A = np.array([
    [1, 1],
    [0, 3]
])

# Inverse of A
D = np.linalg.inv(A)
print("Inverse matrix D")
print(D)
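To tie the adjugate formula to the numpy result, here is a minimal sketch of the 2x2 "adjugate over determinant" construction done by hand (the helper name inverse_2x2 is my own, just for illustration):

import numpy as np

def inverse_2x2(M):
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c  # Sarrus: ad - bc
    # Adjugate: transpose of the cofactor matrix
    adj = np.array([[d, -b],
                    [-c, a]])
    return adj / det

A = np.array([[1, 1],
              [0, 3]])
print(inverse_2x2(A))
print(np.allclose(inverse_2x2(A), np.linalg.inv(A)))  # True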
And that gives the inverse. I'll explain cofactor expansion properly later; this has ended up somewhat half-done. Next time, I'd like to give a rough explanation of linear independence, linear dependence, cofactors, and the sweep-out method.
Linear algebra seems like an interesting subject once you master it. I know how to do the calculations, but I don't understand the proofs at all; it feels like I got the course credit on computation alone. It's sad that I didn't actually learn much in my first year of college. Writing this article is a refreshing way to relearn the material. I had completely forgotten how to do cofactor expansion. That's not good, so I'll keep going.
On a different note, why is my partial differentiation article so popular? I'd also like people to see my Breakout article, which took about five times as long to make.
All my articles may be too half-finished for you to tell what I'm doing. I'd like to revise the previous articles and then post new ones again. For example, I haven't added a more fundamental explanation of cofactor expansion or of what a matrix is, and I think my lack of knowledge is quite exposed. I'll probably make mistakes even in my corrections, so please point them out with a cold, critical eye.