Machine learning

**What is machine learning?**

A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E (Tom Mitchell, 1997).

  - Suppose data is fed into a computer program that solves task T.
  - When unknown data is input, the program produces output Y1.
  - Output Y1 can be measured by the performance index P.
  - After further experience, new data is input and output Y2 is produced.
  - If Y2 improves on Y1 as measured by the performance index P, the program can be said to have learned.

**Linear regression**

  1. **Regression problem**
  2. **Linear regression model**

**Training data** $$ \{(x_i, y_i);\ i = 1, \dots, n\} $$ **Parameters** (same number of dimensions as the input variables) $$ w = (w_1, w_2, \dots, w_m)^T \in ℝ^m $$ **Linear combination** (inner product of the unknown parameter w and the input x) $$ \hat{y} = w^Tx + w_0 = \sum_{j=1}^{m} w_jx_j + w_0 $$
  1. **Linear combination**
    - Sum of the inner product of the unknown parameter w and the input x.
    - Add the intercept $w_0$ (the intersection with the y-axis, which translates the fitted line along the y-axis).
    - Even if the input vector $x$ is multidimensional, the output is one-dimensional (a scalar).

  2. **Model parameters**
    - The set of weights $w_j$ (how each feature affects the prediction).
    - Estimate the best-fitting weights by the least squares method.
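The least-squares fit described above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the course; the toy data (true intercept 1, slope 2) and noise level are assumptions:

```python
import numpy as np

# Toy data: y = 2x + 1 plus small noise (values chosen for illustration)
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(50, 1))
y = 2.0 * X[:, 0] + 1.0 + rng.normal(0.0, 0.1, size=50)

# Prepend a column of ones so the intercept w_0 is estimated with the weights
Phi = np.hstack([np.ones((X.shape[0], 1)), X])

# Least squares solution: w = (Phi^T Phi)^{-1} Phi^T y
w = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)
# w[0] estimates the intercept, w[1] the slope
```

Solving the normal equations directly is fine for a small example; in practice `np.linalg.lstsq` is numerically safer.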

**Data split**
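A common hold-out scheme splits the data into training and validation sets before fitting. A minimal NumPy sketch, assuming an 80/20 split and dummy data:

```python
import numpy as np

# Hold-out split: shuffle indices, then reserve 20% for testing
rng = np.random.default_rng(42)
X = np.arange(100).reshape(100, 1)   # dummy feature matrix
y = np.arange(100)                   # dummy targets

idx = rng.permutation(len(X))        # random ordering of row indices
n_test = int(0.2 * len(X))
test_idx, train_idx = idx[:n_test], idx[n_test:]

X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
```

The same idea is provided ready-made by `sklearn.model_selection.train_test_split`.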

**Non-linear regression model**

  1. Basis functions $$ y_i = f(x_i) + \epsilon_i $$ Once x is passed through non-linear basis functions $\phi_j$, the inner product with w is taken as before: $$ y_i = w_0 + \sum_{j=1}^{m} w_j\phi_j(x_i) + \epsilon_i $$ Examples of basis functions: polynomial functions, Gaussian basis functions, (B-)spline functions, etc.

  2. Non-linear regression based on one-dimensional basis functions

  3. Non-linear regression based on two-dimensional basis functions
  4. Basis expansion method
  5. Overfitting and underfitting
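The basis expansion method above can be sketched as follows. This is an illustrative example, assuming Gaussian basis functions with arbitrarily chosen centers and width, fit by ordinary least squares:

```python
import numpy as np

def gaussian_basis(x, centers, width=0.3):
    # Column j holds phi_j(x) = exp(-(x - c_j)^2 / (2 * width^2))
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 60)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.05, size=x.shape)  # non-linear target

# Design matrix: bias column plus 9 Gaussian basis functions
centers = np.linspace(0.0, 1.0, 9)
Phi = np.hstack([np.ones((len(x), 1)), gaussian_basis(x, centers)])

# Linear least squares in the expanded feature space
w = np.linalg.lstsq(Phi, y, rcond=None)[0]
y_hat = Phi @ w
mse = np.mean((y_hat - y) ** 2)
```

The model stays linear in $w$; only the features are non-linear in $x$, which is why ordinary least squares still applies.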

**Regularization**

  1. Regularization method (penalization method)
    Add a penalty term to reduce model complexity: $$ S_\gamma = (y - \Phi w)^T(y - \Phi w) + \gamma R(w) $$

  2. Role of regularization term R

  3. The parameter $\gamma$ controls the size of the constraint surface: making $\gamma$ smaller enlarges the constraint surface, and making $\gamma$ larger shrinks it.

  4. Choosing the right model
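The effect of $\gamma$ can be demonstrated with ridge regression, the special case $R(w) = \|w\|^2$, which has the closed form $w = (\Phi^T\Phi + \gamma I)^{-1}\Phi^T y$. The polynomial basis and the two $\gamma$ values below are illustrative assumptions:

```python
import numpy as np

def ridge_fit(Phi, y, gamma):
    # Closed-form ridge solution: w = (Phi^T Phi + gamma I)^{-1} Phi^T y
    m = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + gamma * np.eye(m), Phi.T @ y)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, size=x.shape)

# Degree-9 polynomial basis: flexible enough to overfit 20 noisy points
Phi = np.vander(x, 10, increasing=True)

w_small = ridge_fit(Phi, y, gamma=1e-3)  # weak penalty: large constraint surface
w_large = ridge_fit(Phi, y, gamma=10.0)  # strong penalty: small constraint surface

# Increasing gamma shrinks the weight vector toward zero
shrunk = np.linalg.norm(w_large) < np.linalg.norm(w_small)
```

With an L1 penalty (Lasso) instead of L2, the penalty drives some weights exactly to zero, but there is no closed form.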

**Logistic regression**

  1. Classification problem **Explanatory variable (input)** $ x = (x_1, x_2, \dots, x_m)^T \in ℝ^m $ **Objective variable (output)** $ y \in \{0, 1\} $
  2. Logistic regression model
  3. Sigmoid function
  4. Logistic regression formulation $ P(Y = 1|x) = \sigma(w_0 + w_1x_1 + \dots + w_mx_m) $ (the probability that Y = 1 given the explanatory variable x, obtained by applying the sigmoid to a linear combination of the parameters and the data)
  5. Bernoulli distribution
  6. Maximum likelihood estimation (which p is most likely to have produced the data?)

**Probability that $y_1, \dots, y_n$ occur simultaneously in n trials** $$ P(y_1, y_2, \dots, y_n; p) = \prod_{i=1}^{n} p^{y_i}(1-p)^{1-y_i} $$ (here the outcomes $y_i$ are known and the parameter $p$ is unknown)

  1. Log-likelihood function
  2. Gradient descent
  3. Stochastic Gradient Descent (SGD)
  4. Model evaluation
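A minimal sketch of logistic regression trained by batch gradient descent on the negative log-likelihood; the data-generating parameters, sample size, and learning rate are assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n = 200
x = rng.normal(0.0, 1.0, size=(n, 1))
p_true = sigmoid(3.0 * x[:, 0] - 1.0)             # assumed P(Y=1|x) for the demo
y = (rng.uniform(size=n) < p_true).astype(float)  # Bernoulli draws

Phi = np.hstack([np.ones((n, 1)), x])             # prepend intercept column
w = np.zeros(2)
lr = 0.1
for _ in range(2000):
    # Gradient of the average negative log-likelihood: Phi^T (sigma(Phi w) - y) / n
    w -= lr * (Phi.T @ (sigmoid(Phi @ w) - y) / n)

acc = np.mean((sigmoid(Phi @ w) > 0.5) == (y > 0.5))
```

SGD replaces the full-batch gradient with the gradient on one sample (or a mini-batch) per step, trading noisier updates for much cheaper iterations.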

**Principal component analysis (PCA)**

  1. Dimensionality reduction
  2. Constrained optimization problem
  3. Contribution rate
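PCA and the contribution rate can be sketched via an eigendecomposition of the sample covariance matrix; the correlated toy data below is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated 2-D data: the second coordinate nearly duplicates the first
x1 = rng.normal(0.0, 1.0, size=300)
X = np.column_stack([x1, 2.0 * x1 + rng.normal(0.0, 0.1, size=300)])

Xc = X - X.mean(axis=0)                 # centre the data
cov = Xc.T @ Xc / (len(Xc) - 1)         # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]       # reorder to descending variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

contribution = eigvals / eigvals.sum()  # contribution rate per component
Z = Xc @ eigvecs[:, :1]                 # project onto the first principal component
```

Because the two coordinates are almost perfectly correlated, the first component's contribution rate is close to 1, so reducing to one dimension loses little information.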

**K-means method**

  1. Set the initial value of the cluster center
  2. For each data point, calculate the distance to each cluster center and assign the closest cluster
  3. Calculate the mean vector (center) of each cluster
  4. Repeat steps 2-3 until convergence.
    - If the initial cluster centers are changed, the clustering result also changes.
    - The result also changes if the value of K is changed.
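The four steps above can be sketched as follows; the toy data, K, and iteration cap are illustrative:

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: initialise centres with k distinct random data points
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Step 2: assign each point to its nearest centre
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Step 3: recompute each centre as the mean of its cluster
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # Step 4: stop when the centres no longer move
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Two well-separated blobs around (0, 0) and (5, 5)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(5.0, 0.3, (50, 2))])
labels, centers = kmeans(X, k=2)
```

Rerunning with a different `seed` can yield a different partition on harder data, which is exactly the initial-value sensitivity noted above.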

Study-AI
