Involute function
For background, please see the introductory article of this series: "Various methods to numerically create the inverse function of a certain function: Introduction" on Qiita.
The first method I came up with was polynomial approximation. The inverse function I want to find is monotonically increasing, so I thought even a polynomial could approximate it with reasonable accuracy. In other words, the idea is to express the inverse function as $y = a_0 + a_1 x + a_2 x^2 + \cdots$ so that a value close to the true one can be computed.
The problem is how to determine the polynomial coefficients $a_0, a_1, a_2, \ldots$. They need to be chosen so that the difference between the $y$ value produced by the polynomial and the true value is as small as possible. scikit-learn is a machine learning library, but it turns out it can also be used to determine the polynomial coefficients automatically.
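As a point of reference (this is a sketch of my own, not the method used below), plain NumPy can do the same least-squares coefficient fit with np.polyfit:

import numpy as np

# Sample the involute function, then fit a degree-10 polynomial to the
# swapped pairs (x, y) by least squares. np.polyfit returns coefficients
# ordered from the highest-degree term down to the constant term.
y = np.linspace(-np.pi / 4, np.pi / 4, 1000)
x = np.tan(y) - y                  # x = inv α, y = α
coeffs = np.polyfit(x, y, deg=10)
y_fit = np.polyval(coeffs, x)      # evaluate the fitted polynomial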
First, use the involute function to generate the training data.
Notebook
import numpy as np

def involute(α):
    return np.tan(α) - α  # inv α = tan α − α
Notebook
y = np.linspace(-np.pi / 4, np.pi / 4, 1000)
x = involute(y)
Following machine learning convention, the input ($\mathrm{inv}\,\alpha$) is called $x$ and the output ($\alpha$) is called $y$. The values of $y$ ($\alpha$) are prepared first, the corresponding values of $x$ ($\mathrm{inv}\,\alpha$) are computed with the involute function, and the pairs are used as training data.
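This swap is valid because the involute function is strictly increasing on $(-\pi/4, \pi/4)$ (its derivative is $\tan^2\alpha$), so each $x$ corresponds to exactly one $y$. A quick numerical check, as a sketch using the x array from the cell above:

# If the involute function is strictly monotonic on this interval,
# the sampled x values must be strictly increasing.
assert np.all(np.diff(x) > 0)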
You can easily fit polynomials with scikit-learn.
First, import the required libraries.
Notebook
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
The training data generated above are one-dimensional arrays, but scikit-learn basically expects data as column vectors, so convert them.
Notebook
x_column = x.reshape(-1, 1)  # shape (1000,) -> (1000, 1)
y_column = y.reshape(-1, 1)
First, let's fit a 10th-order polynomial.
Notebook
model_poly = make_pipeline(PolynomialFeatures(degree=10), LinearRegression())
model_poly.fit(x_column, y_column)
model_poly.score(x_column, y_column)
output
0.95581676585298314
PolynomialFeatures(degree=10) converts the input into 10th-order polynomial features, and LinearRegression() creates a linear regression model; the make_pipeline function combines the two into a polynomial regression model.
The training data are passed to the fit method of the created model to fit the polynomial.
The score method evaluates the fit numerically: it returns the coefficient of determination $R^2$, which is 1.0 for a perfect fit.
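Incidentally, the fitted coefficients can be read back out of the pipeline; make_pipeline names each step after its lowercased class name, so (a sketch using the model_poly fitted above):

# The LinearRegression step inside the pipeline holds the fitted
# coefficients of the polynomial features.
reg = model_poly.named_steps['linearregression']
print(reg.intercept_, reg.coef_)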
Now let's plot the values estimated by the polynomial regression model.
Notebook
from bokeh.plotting import figure, show

y_pred = model_poly.predict(x_column).flatten()
fig = figure(width=400, height=400)
fig.scatter(x, np.degrees(y), size=1, legend_label='True value')
fig.line(x, np.degrees(y_pred), line_color='orange', legend_label='Estimated value')
fig.xaxis.axis_label = 'invα'
fig.yaxis.axis_label = 'Pressure angle α(deg)'
fig.legend.location = 'top_left'
show(fig)
The input values are passed to the predict method to obtain the estimates. The plot shows the result: the estimates oscillate around the true curve, and the estimation accuracy is clearly poor.
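To put the misfit in more tangible units than the $R^2$ score, the worst-case error can be checked in degrees (a quick sketch using the arrays above):

# Largest deviation between the estimated and true pressure angle.
max_err = np.max(np.abs(np.degrees(y_pred - y)))
print(f'max error: {max_err:.2f} deg')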
How about increasing the order to 20?
Notebook
model_poly = make_pipeline(PolynomialFeatures(degree=20), LinearRegression())
model_poly.fit(x_column, y_column)
model_poly.score(x_column, y_column)
output
0.97492606041826035
Notebook
y_pred = model_poly.predict(x_column).flatten()
fig = figure(width=400, height=400)
fig.scatter(x, np.degrees(y), size=1, legend_label='True value')
fig.line(x, np.degrees(y_pred), line_color='orange', legend_label='Estimated value')
fig.xaxis.axis_label = 'invα'
fig.yaxis.axis_label = 'Pressure angle α(deg)'
fig.legend.location = 'top_left'
show(fig)
Even at order 20, the oscillation only becomes slightly smaller; there is not much improvement. Simply increasing the order further is unlikely to help.
I think the steep gradient near the origin is the reason polynomial regression cannot estimate this function well. The inverse involute function has an infinite gradient at the origin, and as long as we use a polynomial, an infinite gradient can never be expressed.
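This can be made precise with the Taylor expansion of the involute function:

$$\mathrm{inv}\,\alpha = \tan\alpha - \alpha = \frac{\alpha^3}{3} + \frac{2\alpha^5}{15} + \cdots$$

Near the origin $\mathrm{inv}\,\alpha \approx \alpha^3/3$, so the inverse behaves like $\alpha \approx (3x)^{1/3}$, and its derivative $d\alpha/dx \approx (3x)^{-2/3}$ diverges as $x \to 0$. No finite-degree polynomial can reproduce this cube-root behavior at the origin.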
The notebook used for this explanation is uploaded to Gist: Involute Inverse Function Estimation_Polynomial Regression.ipynb