In science and engineering, we often have to solve eigenvalue problems, for example in quantum mechanics and in the analysis of oscillator systems.
The power method is a standard algorithm for the eigenvalue problem: it finds the eigenvalue of maximum absolute value (the dominant eigenvalue) of a matrix. It is usually among the first methods you encounter when you open an introductory textbook in this area.

**It is very instructive material for learning elementary numerical solutions of the eigenvalue problem. In this article we examine this method.**
The method itself is very simple. The procedure and its key points are:

1. Prepare a suitable initial vector and normalize it.
2. Multiply the vector repeatedly by the coefficient matrix $A$.
3. Check the convergence of the direction of the resulting vector.

In short, **all we really do is multiply by $A$ and check the convergence of the direction vector.** The name "power method" comes from the fact that, as step 2 shows, the method evaluates powers of the coefficient matrix $A$.
The power method as described finds the eigenvalue of maximum absolute value of $A$. **With some ingenuity it can also be used to find the eigenvalue of smallest absolute value, the eigenvalue closest to a given complex number $z$, the second-largest eigenvalue, and so on.** I plan to cover these in a separate article.
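As a small preview of that "ingenuity": one common trick is the shifted inverse power method, in which the power iteration is applied to $(A - zI)^{-1}$, so that it converges to the eigenvalue of $A$ closest to the shift $z$. The following is a minimal sketch of my own (not the original article's code); it solves a linear system each step instead of forming the inverse explicitly.

```python
import numpy as np

def shifted_inverse_power(A, z, n_iter=100):
    """Power method applied to (A - z I)^{-1}: converges to the
    eigenvalue of A closest to the shift z."""
    n = A.shape[0]
    M = A - z * np.eye(n)
    u = np.ones(n)
    for _ in range(n_iter):
        u = np.linalg.solve(M, u)      # one application of (A - z I)^{-1}
        u = u / np.linalg.norm(u)      # keep the norm at 1
    # Rayleigh quotient with A recovers the eigenvalue of A itself
    return u @ A @ u / (u @ u)

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(shifted_inverse_power(A, z=0.0))   # eigenvalue closest to 0: (5 - sqrt(33))/2
```

With `z = 0` this is plain inverse iteration and yields the eigenvalue of smallest absolute value.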
Here we solve a simple eigenvalue problem with the power method and confirm that we obtain the eigenvalue of maximum absolute value and the corresponding eigenvector.
As the $A$ in $A\mathbf{u} = \lambda \mathbf{u}$, we consider

```math
A=
\left[
\begin{matrix}
1 & 2 \\
3 & 4
\end{matrix}
\right].
```
The exact value of the eigenvalue of maximum absolute value is $\frac{5+\sqrt{33}}{2} = 5.372281323...$.
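As a cross-check of this exact value, we can compare the root of the characteristic polynomial against `numpy.linalg.eigvals` (a small sketch, not part of the original article):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])

# Characteristic polynomial: lam^2 - 5*lam - 2 = 0  ->  lam = (5 ± sqrt(33))/2
exact = (5.0 + np.sqrt(33.0)) / 2.0       # = 5.372281323...

# Numerical eigenvalues, picking the one with maximum absolute value
lams = np.linalg.eigvals(A)
lam_max = lams[np.argmax(np.abs(lams))]

print(lam_max, exact)                     # both ≈ 5.372281323
```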
As the convergence test, we look at the change between iteration steps $k$ and $k+1$: the iteration stops when the norm of the change in the normalized direction vector, divided by $|\lambda|$, falls below a tolerance.
"""
Matrix eigenvalue problem:Power method:
"""
import numpy as np
A=np.array([[1,2],[3,4]])
x0 = np.array([1,0]); x1 = np.array([0,1])
u = 1.0*x0+2.0*x1 #The initial eigenvector. Appropriate.
rel_eps = 0.0001 #Eigenvalue convergence conditions
#Krylov column generation
rel_delta_u=100.0
while rel_delta_u >= rel_eps : #Main loop
uu = u/np.linalg.norm(u) #Normalization(Set the norm to 1)
print("u=",uu)
u = np.dot(A,uu.T)
eigen_value=np.dot(uu,u)/(np.dot(uu,uu.T))
print("eigen_value=",eigen_value)
delta_u_vec = uu-u/np.linalg.norm(u)
abs_delta_u_value= np.linalg.norm(delta_u_vec)
rel_delta_u=abs_delta_u_value/np.abs(eigen_value) #Relative change in eigenvalues for repeated steps
print("rel_delta_u_vec = ",rel_delta_u)
The last lines of the output are:

```
u= [ 0.41612395  0.9093079 ]
eigen_value= 5.37244655582
rel_delta_u_vec =  3.29180183204e-05
```
You can see that this agrees very well with the exact solution $5.372281323...$.
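The eigenvector can also be checked against `numpy.linalg.eig` (a sketch, not part of the original article; note that the sign and normalization of eigenvectors returned by `eig` are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
vals, vecs = np.linalg.eig(A)

i = np.argmax(np.abs(vals))   # index of the dominant eigenvalue
v = vecs[:, i]
v = v / np.linalg.norm(v)
if v[0] < 0:                  # fix the sign convention for comparison
    v = -v

print(vals[i])   # ≈ 5.3723
print(v)         # ≈ [0.4160, 0.9094], matching the power-method result
```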
The following books and articles were helpful in writing this article. [1] gives a simple, easy-to-understand description. [2] summarizes how to solve the eigenvalue problem using numpy and scipy.
[1] Gilbert Strang, ["World Standard MIT Textbook Strang: Introduction to Linear Algebra"](https://www.amazon.co.jp/dp/4764904055), Kindai Kagaku Sha, 2015.
[2] [Science and technical computing in Python: Solving the (generalized) eigenvalue problem using numpy/scipy](http://qiita.com/sci_Haru/items/034c6f74d415c1c10d0b)
The power method is a method that, for the standard eigenvalue problem

```math
A\mathbf{u} = \lambda \mathbf{u},
```

starts from an initial vector $\mathbf{u}_0$ and builds the sequence

```math
\mathbf{u}_0,\ A\mathbf{u}_0,\ A^2\mathbf{u}_0,\ A^3\mathbf{u}_0,\ \dots
```

to find the eigenvalue of maximum absolute value and the corresponding eigenvector. (This sequence $\mathbf{u}_0, A\mathbf{u}_0, A^2\mathbf{u}_0, A^3\mathbf{u}_0, \dots$ is called the [Krylov sequence](https://ja.wikipedia.org/wiki/%E3%82%AF%E3%83%AA%E3%83%AD%E3%83%95%E9%83%A8%E5%88%86%E7%A9%BA%E9%96%93).)
Now, suppose that in the eigenvalue equation

```math
A\mathbf{u}_i = \lambda_i \mathbf{u}_i \qquad (i = 1, 2, \dots, n)
```

**the eigenvalues are not degenerate**, that is,

```math
\lambda_i \neq \lambda_j \qquad (i \neq j).
```

Also, suppose the eigenvalues are ordered by magnitude as

```math
|\lambda_1| > |\lambda_2| > \dots > |\lambda_n|.
```
Now consider a suitable vector $\mathbf{u}_0$, and suppose the coefficients $c_i$ are chosen so that $\mathbf{u}_0$ expands as

```math
\mathbf{u}_0 = \sum_{i=1}^{n} c_i \mathbf{u}_i.
```

Applying $A^k$ (that is, $A$ applied $k$ times) to this gives

```math
A^k \mathbf{u}_0 = \sum_{i=1}^{n} c_i \lambda_i^k \mathbf{u}_i
= \lambda_1^k \left[ c_1 \mathbf{u}_1 + \sum_{i=2}^{n} c_i \left(\frac{\lambda_i}{\lambda_1}\right)^k \mathbf{u}_i \right].
```

Since $|\lambda_i/\lambda_1| < 1$ for $i \geq 2$, for sufficiently large $k$ we can expect

```math
A^k \mathbf{u}_0 \simeq c_1 \lambda_1^k \mathbf{u}_1.
```
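This convergence of the direction of $A^k \mathbf{u}_0$ can be observed numerically. A short sketch using the example matrix from this article:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
u = np.array([1.0, 2.0])               # initial vector u_0

for k in range(1, 9):
    u = A @ u                          # after this line, u = A^k u_0
    direction = u / np.linalg.norm(u)  # normalized direction
    print(k, direction)
# The direction settles toward the dominant eigenvector [0.4159..., 0.9093...]
```

Because $|\lambda_2/\lambda_1| \approx 0.069$ here, the non-dominant component shrinks by that factor every step, so the direction converges after only a handful of iterations.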
**By repeatedly applying $A$ to the initial trial vector $\mathbf{u}_0$ in this way, we obtain a vector parallel to the eigenvector corresponding to the eigenvalue of maximum absolute value.**
An eigenvector is always indeterminate up to a constant multiple (an arbitrary complex multiple; multiplying both sides of the eigenvalue equation by a complex number does not change the eigenvalue), so we pick whichever normalization suits our purpose. In most cases the vector with norm 1 is chosen.
For sufficiently large $k$, the eigenvalue $\lambda_1$ of maximum absolute value follows from the eigenvalue equation as

```math
\lambda_1 \simeq \frac{(A^k\mathbf{u}_0)^\mathrm{T} A\,(A^k \mathbf{u}_0)}{(A^k\mathbf{u}_0)^\mathrm{T}(A^k \mathbf{u}_0)}.
```

This quantity is called the [Rayleigh quotient](https://ja.wikipedia.org/wiki/%E3%83%AC%E3%82%A4%E3%83%AA%E3%83%BC%E5%95%86).
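As a small illustration, the Rayleigh quotient of a vector obtained after several multiplications by $A$ recovers $\lambda_1$ (a sketch assuming the example matrix from above):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])

# Take u = A^k u_0 for a moderately large k
u = np.linalg.matrix_power(A, 20) @ np.array([1.0, 2.0])

rayleigh = (u @ A @ u) / (u @ u)   # Rayleigh quotient of u with A
print(rayleigh)                    # ≈ (5 + sqrt(33))/2 = 5.372281323...
```

In practice (as in the code above) the vector is renormalized at every step, which avoids the overflow that raw powers $\lambda_1^k$ would eventually cause.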
In this way, the power method computes the eigenvalue of maximum absolute value of a standard eigenvalue problem and the corresponding eigenvector.