Naive Bayes: training on discrete features (integer counts required)
alpha : float, optional -> smoothing parameter (default is 1.0)
fit_prior : boolean -> whether to learn class prior probabilities from the data (a uniform prior is used otherwise)
class_prior : array-like, size=[n_classes] -> prior probabilities of the classes
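A minimal sketch of passing these parameters to the constructor (the values here are arbitrary, chosen only to illustrate the three options):

```python
from sklearn.naive_bayes import MultinomialNB

# heavier smoothing plus a fixed uniform prior over 3 hypothetical classes;
# when class_prior is given, the priors are not learned from the data
clf = MultinomialNB(alpha=0.5, fit_prior=False, class_prior=[1/3, 1/3, 1/3])
print(clf.get_params()["alpha"])  # -> 0.5
```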
Example
import numpy as np
# random integers in the range 0-4, shape (6, 100): array([[first 100], ..., [sixth 100]])
X = np.random.randint(5, size=(6, 100))  # training data
Y = np.array([1, 2, 3, 4, 5, 6])  # classes -> X[0] is class 1, X[2] is class 3, and so on
from sklearn.naive_bayes import MultinomialNB
# naive Bayes classifier
clf = MultinomialNB()
# training
clf.fit(X, Y)  # the fitted MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True) is returned
print(clf.predict(X[2:3]))  # -> OK if [3] comes back (predict expects a 2-D array, so X[2:3] rather than X[2])
Method memo
fit(X, Y)  Trains the classifier. X -> training data, array or sparse matrix of shape n_samples x n_features. Y -> class labels: X[0] is assigned class Y[0], X[1] class Y[1], and so on.
get_params()  Gets the parameters (e.g. {'alpha': 1.0, 'class_prior': None, 'fit_prior': True})
predict(X)  Performs classification. X -> array whose width is the number of features. Return value -> the predicted class for each row.
predict_log_proba(X)  Probability of each class, as logs. Return value -> array whose width is the number of classes and whose elements are log(probability).
predict_proba(X)  Same as above, without the log.
score(X, y)  Mean accuracy on the given data. Takes the same arguments as fit.
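The methods above can be exercised end to end on the same kind of random count data as the earlier example (a fixed seed is used here only so the sketch is reproducible):

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

rng = np.random.RandomState(0)      # fixed seed for reproducibility
X = rng.randint(5, size=(6, 100))   # 6 samples, 100 count features
Y = np.array([1, 2, 3, 4, 5, 6])

clf = MultinomialNB().fit(X, Y)

proba = clf.predict_proba(X)        # shape (6, 6): one row per sample, one column per class
log_proba = clf.predict_log_proba(X)  # same shape, log(probability) entries
print(proba.shape)                  # -> (6, 6)
print(np.allclose(proba.sum(axis=1), 1.0))  # -> True: each row is a probability distribution
print(clf.score(X, Y))              # mean accuracy on the training data itself
```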