Supervised learning (regression) 2 Advanced edition

Aidemy 2020/10/28

Introduction

Hello, I'm Yope! I'm a liberal arts student, but I became interested in the possibilities of AI, so I enrolled at the AI-specialized school "Aidemy" to study. I'd like to share the knowledge I gained there, so I'm summarizing it on Qiita. I'm very happy that so many people read my previous summary article. Thank you! This is my second post on supervised learning. Nice to meet you.

What to learn this time
・Model generalization

Model generalization

(Review) What is generalization?

・Predictions in regression analysis come from a fitted function, but actual prices fluctuate within a range, so even identical input data can produce different results.
・Under these conditions, if the model relies too heavily on historical data, its predictions on new data will fail. This is called overfitting, and preventing overfitting is called generalization.

Regularization

・Regularization is used as a means of generalization in linear regression. Regularization attempts to generalize the model by penalizing the model's complexity.
・There are two types of regularization: __L1 regularization__ and __L2 regularization__.
・L1 regularization removes unnecessary information by driving the coefficients of features that have little effect on the prediction toward 0.
・L2 regularization prevents overfitting by putting a limit on the size of the coefficients.
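As an illustrative sketch (not from the article), the two penalties can be computed directly for a coefficient vector `w`; the values of `w` and `alpha` below are made up for illustration:

```python
import numpy as np

# Hypothetical coefficient vector and regularization strength
w = np.array([0.5, -1.2, 0.0, 3.0])
alpha = 0.1

# L1 penalty: sum of absolute values of the coefficients
l1_penalty = alpha * np.sum(np.abs(w))
# L2 penalty: sum of squared coefficients
l2_penalty = alpha * np.sum(w ** 2)

print(l1_penalty, l2_penalty)
```

Each model minimizes the usual squared prediction error plus one of these penalties, which is what discourages overly large or overly many coefficients.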

Lasso regression

・__Lasso regression__ refers to a regression model that uses L1 regularization.
・L1 regularization is highly effective when there is a lot of extra information, so, for example, lasso regression is used when the number of features (columns) is large relative to the number of data points (rows).
・Lasso regression is used like `model = Lasso()`.
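A minimal sketch of this, using synthetic data (my own example, generated with `make_regression`) where there are more columns than rows:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# Hypothetical data: 100 features but only 50 samples, with just 5
# features actually influencing the target
X, y = make_regression(n_samples=50, n_features=100, n_informative=5,
                       noise=0.5, random_state=0)

model = Lasso()  # default alpha=1.0 sets the penalty strength
model.fit(X, y)

# L1 regularization drives most coefficients to exactly 0
n_zero = (model.coef_ == 0).sum()
print(n_zero, "of", model.coef_.size, "coefficients are zero")
```

Because the penalty zeroes out uninformative coefficients, the fitted model effectively performs feature selection.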

Ridge regression

・__Ridge regression__ refers to a regression model that uses L2 regularization.
・L2 regularization generalizes easily because it puts an upper limit on the range of the coefficients.
・Ridge regression is used like `model = Ridge()`.
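A small sketch (again on synthetic data of my own) showing the shrinking effect: the ridge coefficients are smaller in overall magnitude than those of plain least squares fitted on the same data:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge

# Hypothetical noisy data
X, y = make_regression(n_samples=100, n_features=10, noise=10.0,
                       random_state=0)

ridge = Ridge().fit(X, y)          # default alpha=1.0
ols = LinearRegression().fit(X, y)  # no regularization

# The L2 penalty shrinks the coefficient vector toward 0
print(np.linalg.norm(ridge.coef_) < np.linalg.norm(ols.coef_))
```

Keeping the coefficients small is what makes the model less sensitive to noise in the training data.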

Elastic Net regression

・__ElasticNet regression__ refers to a regression model that uses a combination of L1 regularization and L2 regularization.
・It has the great merit of combining the feature selection of __L1 regularization__ with the ease of generalization of __L2 regularization__.
・ElasticNet regression is used like `model = ElasticNet()`.
・Also, if you pass an argument such as `l1_ratio=0.3`, you can specify the ratio between L1 regularization and L2 regularization.
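A short sketch of the `l1_ratio` argument on synthetic data of my own (the default is `l1_ratio=0.5`; `1.0` is pure L1, `0.0` pure L2):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet

# Hypothetical data for illustration
X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       noise=0.5, random_state=0)

# 30% of the penalty comes from L1, 70% from L2
model = ElasticNet(l1_ratio=0.3)
model.fit(X, y)

# score() returns the coefficient of determination R^2
print(model.score(X, y))
```

Tuning `l1_ratio` lets you trade off L1-style feature selection against L2-style shrinkage in a single model.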

・Execute the above three regression models ![Screenshot 2020-10-28 22.59.48.png](https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/698700/7aa30b99-e728-d0ab-7ade-8603f2fcca23.png)

・Result ![Screenshot 2020-10-28 22.58.30.png](https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/698700/7994a0d0-686a-64f1-ee41-ca25cb2e5832.png)
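In case the screenshots are hard to read, the experiment can be reconstructed roughly as follows. The dataset in the screenshots is not specified, so this sketch substitutes synthetic data and compares the three models on a held-out test set:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet, Lasso, Ridge
from sklearn.model_selection import train_test_split

# Hypothetical data standing in for the article's dataset
X, y = make_regression(n_samples=200, n_features=30, n_informative=10,
                       noise=5.0, random_state=0)
train_X, test_X, train_y, test_y = train_test_split(X, y, random_state=42)

scores = {}
for name, model in [("Lasso", Lasso()),
                    ("Ridge", Ridge()),
                    ("ElasticNet", ElasticNet(l1_ratio=0.3))]:
    model.fit(train_X, train_y)
    # R^2 on the test data: how well each model generalizes
    scores[name] = model.score(test_X, test_y)
    print(name, round(scores[name], 3))
```

Which model scores best depends on the data; the point of the comparison is simply that all three APIs are interchangeable: construct, `fit()`, then `score()`.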

Summary

・__Regularization__ is a means of generalization in linear regression.
・Regularization includes __L1 regularization__ and __L2 regularization__; regression using the former is called __Lasso regression__, regression using the latter is called __Ridge regression__, and regression using both is called __ElasticNet regression__.

That's all for this time. Thank you for reading to the end.
