Solve the N-dimensional system of linear equations $ A x = b $, with matrix $ A $, for $ x $.

(1) Compute the inverse matrix $ A^{-1} $ of $ A $ with NumPy's `linalg.inv` method, then form the solution vector $ x = A^{-1} b $.
(2) Compute $ x $ directly with the `np.linalg.solve` method. **This is generally faster.**
```python
%%time
import numpy as np

"""
Find the vector x that satisfies A x = b via the inverse matrix A^-1
"""

# Generate the matrix A
a1_lis = [1, 0, 0]
a2_lis = [0, 2, 0]
a3_lis = [0, 0, 4]
A_matrix = np.array([a1_lis, a2_lis, a3_lis])

# Generate b as a row vector; .T is intended to turn it into a column vector
# (note: transposing a 1-D ndarray is a no-op, but np.dot handles a 1-D b correctly)
b = np.array([4, 5, 6])
b_vec = b.T

A_inv = np.linalg.inv(A_matrix)
x_vec = np.dot(A_inv, b_vec)  # compute the x vector (the @ operator can also be used for matrix products)
print(x_vec)
```
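To substantiate the claim that `solve` is generally faster than forming the inverse explicitly, one can time both approaches on a larger random system. This is a minimal sketch, not from the original article; the size `n = 500` and the random test data are my own arbitrary choices:

```python
import time
import numpy as np

rng = np.random.default_rng(0)
n = 500                                # illustrative problem size (assumption)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

t0 = time.perf_counter()
x_inv = np.linalg.inv(A) @ b           # explicit inverse, then multiply
t_inv = time.perf_counter() - t0

t0 = time.perf_counter()
x_solve = np.linalg.solve(A, b)        # LU-based direct solve
t_solve = time.perf_counter() - t0

print(f"inv+dot: {t_inv:.4f} s, solve: {t_solve:.4f} s")
print("solutions agree:", np.allclose(x_inv, x_solve))
```

Beyond speed, the direct solve also tends to be more numerically accurate, since it avoids explicitly forming $ A^{-1} $.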
Alternatively, the same calculation can be written with `np.matrix`, which is quite readable. However, `np.matrix` is often inconvenient in practice and its use is no longer recommended.
```python
import numpy as np

"""
Find the vector x that satisfies A x = b by computing x = A^-1 b
"""

# Generate the matrix A
a1_lis = [1, 0, 0]
a2_lis = [0, 2, 0]
a3_lis = [0, 0, 4]
A_matrix = np.matrix([a1_lis, a2_lis, a3_lis])

# Generate b as a row vector, then transpose it into a column vector
bb = np.matrix([4, 5, 6])
b_vec = bb.T

x_vec = A_matrix.I @ b_vec  # compute the x vector; @ is the matrix-product operator
print(x_vec)
```
```
[[ 4. ]
 [ 2.5]
 [ 1.5]]
```
```python
import numpy as np

"""
Find the vector x that satisfies A x = b using linalg.solve
"""

# Generate the matrix A
a1_lis = [1, 0, 0]
a2_lis = [0, 2, 0]
a3_lis = [0, 0, 4]
A_matrix = np.array([a1_lis, a2_lis, a3_lis])

# b vector (a 1-D array is fine here; solve treats it as a column vector)
b = np.array([4, 5, 6])

x_vec = np.linalg.solve(A_matrix, b)  # compute the x vector
print(x_vec)
```
Result of (2): `[ 4.   2.5  1.5]`
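Whichever method is used, the solution can be verified by checking that the residual $ A x - b $ vanishes. A minimal check, using the same example matrix as above:

```python
import numpy as np

A = np.array([[1, 0, 0],
              [0, 2, 0],
              [0, 0, 4]])
b = np.array([4, 5, 6])
x = np.linalg.solve(A, b)

# The residual A x - b should be zero up to floating-point error
print(np.allclose(A @ x, b))  # → True
print(x)
```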
**The `numpy.linalg.solve` method is internally a wrapper for the LAPACK routines `dgesv` and `zgesv` [1].**
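To see this wrapper relationship concretely, SciPy exposes the same LAPACK routine directly as `scipy.linalg.lapack.dgesv` (the double-precision real variant of `*gesv`). This is my own sketch of that low-level call, not part of the original article; consult the SciPy documentation if the return signature differs in your version:

```python
import numpy as np
from scipy.linalg.lapack import dgesv  # low-level LAPACK binding

A = np.array([[1., 0., 0.],
              [0., 2., 0.],
              [0., 0., 4.]])
b = np.array([4., 5., 6.])

# dgesv returns the LU factors, pivot indices, the solution, and a status flag
lu, piv, x, info = dgesv(A, b)
print(x, info)  # info == 0 signals success
```

In everyday code there is no reason to call this directly; `np.linalg.solve` performs the same factorize-and-solve with a friendlier interface.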
● Systems of linear equations come up frequently in computational physics and related fields. Various schemes exist depending on the nature of the matrix $ A $, but **when writing code, I think the `solve` method is usually sufficient.** Gaussian elimination, the Jacobi method, the Gauss-Seidel method, the conjugate gradient method, and so on are available as elementary numerical methods for solving simultaneous linear equations [2]; they are worth looking up when the need arises.
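As one example of the elementary schemes mentioned above, here is a minimal sketch of the Jacobi iteration. This is my own illustration, not from the original article; it assumes a (strictly) diagonally dominant $ A $, and the tolerance and iteration cap are arbitrary choices:

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b by Jacobi iteration (A should be diagonally dominant)."""
    D = np.diag(A)               # diagonal entries a_ii
    R = A - np.diagflat(D)       # off-diagonal part of A
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        # x_i <- (b_i - sum_{j != i} a_ij x_j) / a_ii
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[1., 0., 0.],
              [0., 2., 0.],
              [0., 0., 4.]])
b = np.array([4., 5., 6.])
print(jacobi(A, b))  # agrees with np.linalg.solve(A, b)
```

For the diagonal example matrix this converges in a single step; for general matrices the iteration converges only under conditions such as diagonal dominance, which is why direct methods like `solve` are the safer default.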
[1] Description of numpy.linalg.solve: https://docs.scipy.org/doc/numpy-1.7.0/reference/generated/numpy.linalg.solve.html
[2] Ichiro Kawakami, ["Numerical Calculation" (Mathematics Introductory Course for Science and Engineering 8)](https://www.amazon.co.jp/%E6%95%B0%E5%80%A4%E8%A8%88%E7%AE%97-%E7%90%86%E5%B7%A5%E7%B3%BB%E3%81%AE%E6%95%B0%E5%AD%A6%E5%85%A5%E9%96%80%E3%82%B3%E3%83%BC%E3%82%B9-8-%E5%B7%9D%E4%B8%8A-%E4%B8%80%E9%83%8E/dp/4000077783), Iwanami Shoten, 1989.
On the @ operator, see: [Scientific and technical computing in Python] Computing matrix products with the @ operator (Python 3.5+, NumPy).