I have been trying various approaches to arbitrary-precision diagonalization (dense and sparse matrices). I came across a library called Eigen yesterday, installed and used it today, so I am leaving this log.
For arbitrary-precision diagonalization, MPACK implements diagonalization of dense symmetric and Hermitian matrices, but sparse matrices apparently cannot be solved efficiently, since LAPACK does not provide such routines in the first place.
After some research, it seems that a C++ template library called Eigen can diagonalize both dense and sparse matrices. Moreover, with the unsupported MPFR module it can apparently diagonalize with arbitrary precision, so this is my log of using Eigen.
The main reason for logging is that I am not a C++ user and am likely to forget all of this soon.
Environment: Ubuntu 14.04
Install
Download and extract the latest release from the "Get it" section on the Eigen main page.
I did not understand what a template library is in the first place, but reading Eigen's getting-started page, there seems to be no `make install` step for a template library: it is header-only, and a program compiled inside the extracted directory just works.
As a trial, I put the sample program from the getting-started page in the extracted directory (the one containing Eigen and README.md):
#include <iostream>
#include <Eigen/Dense>
using Eigen::MatrixXd;
int main()
{
  MatrixXd m(2,2);
  m(0,0) = 3;
  m(1,0) = 2.5;
  m(0,1) = -1;
  m(1,1) = m(1,0) + m(0,1);
  std::cout << m << std::endl;
}
I tried to compile
$ g++ test.cpp -o a.out
test.cpp:3:23: fatal error: Eigen/Dense: No such file or directory
compilation terminated.
Since the compilation did not pass, I changed the include to a quoted relative path:
#include "Eigen/Core"
so the header is found relative to the current directory (I think I ran into a similar phenomenon in C, but I forgot the details). Quoted includes search the current directory first; alternatively, the header directory can be passed to the compiler with g++'s -I option.
Next, a sample using Eigen's unsupported MPRealSupport module (MPFR C++) for arbitrary-precision calculations:
#include <iostream>
#include <Eigen/MPRealSupport>
#include <Eigen/LU>
using namespace mpfr;
using namespace Eigen;
int main()
{
  // set precision to 256 bits (double has only 53 bits)
  mpreal::set_default_prec(256);
  // Declare matrix and vector types with multi-precision scalar type
  typedef Matrix<mpreal,Dynamic,Dynamic> MatrixXmp;
  typedef Matrix<mpreal,Dynamic,1> VectorXmp;
  MatrixXmp A = MatrixXmp::Random(100,100);
  VectorXmp b = VectorXmp::Random(100);
  // Solve Ax=b using LU decomposition
  VectorXmp x = A.lu().solve(b);
  std::cout << "relative error: " << (A*x - b).norm() / b.norm() << std::endl;
  return 0;
}
I modified the includes as below:
#include "unsupported/Eigen/MPRealSupport"
#include "Eigen/LU"
and compiled:
$ g++ test.cpp -o a.out -lgmp -lmpfr
In file included from test.cpp:2:0:
unsupported/Eigen/MPRealSupport:15:22: fatal error: Eigen/Core: No such file or directory
compilation terminated.
But it still gives an error. Having no other choice, I made the sample program run by sudo-copying Eigen/ and unsupported/ under /usr/local/include/.
Since MPRealSupport needs the GMP and MPFR libraries linked at compile time, the sample must be compiled as follows:
g++ test.cpp -o a -lgmp -lmpfr
Matrix
The matrix
$$A = \begin{pmatrix} 0 & 1 & 0 \\ 2 & 0 & 2 \\ 0 & 1 & 0 \end{pmatrix}$$
has eigenvalues $\pm 2, 0$. Solving it in double precision, for example with numpy (Python):
>>> import numpy as np
>>> np.set_printoptions(precision=18)
>>> np.linalg.eigvals([[0,1,0],[2,0,2],[0,1,0]])
array([ -2.000000000000000000e+00, -1.732555071631836086e-16,
         1.999999999999999556e+00])
The rounding error contaminates the zero eigenvalue. The same happens when diagonalizing the machine-precision matrix in Mathematica:
In[15]:= Eigenvalues@N@{{0,1,0},{2,0,2},{0,1,0}}//FullForm
Out[15]//FullForm= List[-2.`,1.9999999999999996`,-1.732555071631836`*^-16]
Of course, Mathematica can also diagonalize with arbitrary ($\le \infty$) precision, though I am worried about its speed. A program to diagonalize using eigen3 and MPFR C++:
#include <iostream>
#include <Eigen/Eigenvalues>
#include <unsupported/Eigen/MPRealSupport>
#include <Eigen/Dense>
using namespace Eigen;
using namespace mpfr;
int main()
{
  // set precision to 256 bits (double has only 53 bits)
  mpreal::set_default_prec(256);
  // Declare matrix type with multi-precision scalar type
  typedef Matrix<mpreal,Dynamic,Dynamic> MatrixXmp;
  MatrixXmp A(3,3);
  A(0,0) = 0; A(0,1) = 1; A(0,2) = 0;
  A(1,0) = 2; A(1,1) = 0; A(1,2) = 2;
  A(2,0) = 0; A(2,1) = 1; A(2,2) = 0;
  std::cout << A << std::endl;
  EigenSolver<MatrixXmp> es(A);
  std::cout << es.eigenvalues() << std::endl;
  return 0;
}
It seems you can write it like this; the execution result is:
$ g++ eigenvalues-mpfrc-min.cpp -o a -lgmp -lmpfr
$ ./a
0 1 0
2 0 2
0 1 0
(-2,0)
(2,0)
(-3.28269e-77,0)
It can be seen that the eigenvalues are evaluated with high accuracy. If instead the precision is lowered to 53 bits,
// set precision to 53 bits (the same as double)
mpreal::set_default_prec(53);
then the execution result is
Then the execution result is
$ ./a
0 1 0
2 0 2
0 1 0
(-2,0)
(2,0)
(1.77435e-16,0)
It is only as accurate as double.
Interestingly, for the zero eigenvalue, the LAPACK (numpy) result and the machine-precision (double) Mathematica result agree within the displayed precision. If I remember correctly, Mathematica's machine-precision routines are implemented with LAPACK, which would explain the identical result.
On the other hand, eigen3 gives a visibly different value, as far as one can tell (I do not know how to increase the number of displayed digits). Is this because a different algorithm is used?
By the way, diagonalizing with the default precision (about 16 digits of mantissa) in mpmath (Python):
>>> import mpmath
>>> mpmath.eig(mpmath.matrix([[0,1,0],[2,0,2],[0,1,0]]))
([mpf('-1.9999999999999993'), mpf('1.6358461123585456e-16'), mpf('2.0')],
Interestingly, this result is different yet again.
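For reference, the exact eigenvalues of this matrix can be checked by hand from the characteristic polynomial (expanding the determinant along the first row):

```latex
\det\begin{pmatrix} -\lambda & 1 & 0 \\ 2 & -\lambda & 2 \\ 0 & 1 & -\lambda \end{pmatrix}
= -\lambda(\lambda^2 - 2) + 2\lambda
= -\lambda(\lambda^2 - 4)
```

so the spectrum is exactly $\{0, \pm 2\}$, and whatever each library reports in the third slot is pure rounding error.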
To do
Does MPRealSupport support only real numbers, as the name suggests, or are complex numbers also supported? It has only been half a day since I started using C++, so I do not know yet.
(Addendum)
I compared the speeds of diagonalization with eigen, MPACK, and Mathematica. Since the classes of matrices MPACK can solve are limited, the following symmetric matrix
was diagonalized as a dense matrix with 80 digits of precision.
MPACK is fastest after all. Since CPU usage reaches about 400% while the MPACK program runs, it is probably parallelized via BLAS. Eigen can apparently also use BLAS, but how should that be set up?
In the legend, "ditto (Arnoldi 20%)" is Mathematica's diagonalization routine Eigensystem with the Arnoldi method (Lanczos method) specified, computing the largest 20% of the eigenvalues; the comparison shown is when computing from the smaller side. With the Arnoldi/Lanczos method, diagonalizing a $10^5 \times 10^5$ matrix becomes realistic.