I'm new to Python / deep learning. Here are some notes on methods for verifying generalization performance that I looked into while implementing a neural network.
- There is a generalization performance verification method called k-fold cross-validation (kCV) (Reference 1).
- kCV divides the training data into k folds, uses k-1 folds for training and the remaining one for performance evaluation, and repeats the training k times so that each fold is used for evaluation once.
- I already knew that `sklearn.model_selection.train_test_split` (TTS) can split the data at hand into training data and test data to verify generalization performance (a minimal usage sketch follows this list).
- a. Is it basically correct to think of kCV as TTS repeated multiple times?
- b. Is it correct to think that kCV can evaluate a model's generalization more accurately than TTS?
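As a point of reference for the TTS half of the comparison, here is a minimal sketch of a single hold-out split. The iris dataset, the 80/20 ratio, and the fixed random seed are arbitrary choices for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Toy dataset, used purely for illustration
X, y = load_iris(return_X_y=True)

# Single hold-out split: 80% for training, 20% for testing (ratio is arbitrary)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
print(X_train.shape, X_test.shape)  # (120, 4) (30, 4)
```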
- a. I think so. In addition, with kCV all k folds are used for validation without exception (see the kCV sketch after this list).
- b. That seems to be the case.
- If TTS is used only once, the data held out for validation can never be used as training data, so depending on how the validation data is selected, an unwanted bias may arise in training. kCV overcomes this (Reference 2).
- A drawback of kCV has also been pointed out: if the data within each of the k folds is imbalanced (e.g., one fold contains only dog data and another contains only cat data), the learning result will be biased. Stratified k-fold cross-validation (Stratified kCV) is a countermeasure for this (Reference 1).
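To make the answers concrete, here is a minimal sketch of kCV and Stratified kCV using scikit-learn's `KFold` and `StratifiedKFold`. The logistic regression model, k=5, and the dataset are arbitrary choices for illustration; `cross_val_score` handles the "train on k-1 folds, validate on the remaining one" loop internally.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, StratifiedKFold, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)  # arbitrary model for illustration

# Plain kCV: each of the k=5 folds serves as the validation set exactly once
kf = KFold(n_splits=5, shuffle=True, random_state=42)
print(cross_val_score(model, X, y, cv=kf))  # one score per fold

# Stratified kCV: each fold preserves the class proportions of y,
# guarding against folds that contain only a single class
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
print(cross_val_score(model, X, y, cv=skf))
```

One nuance on question a: unlike TTS repeated with independent random splits, the kCV folds are disjoint, which is exactly why every sample is used for validation exactly once.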
All of this is fairly obvious, but I'm leaving it here as a memo.
- Reference 1: I tried to sort out the types of cross validation
- Reference 2: KFolds Cross Validation vs train_test_split