TY - JOUR
T1 - Deep InterBoost networks for small-sample image classification
AU - Li, Xiaoxu
AU - Chang, Dongliang
AU - Ma, Zhanyu
AU - Tan, Zheng-Hua
AU - Xue, Jing-Hao
AU - Cao, Jie
AU - Guo, Jun
PY - 2021
AB - Deep neural networks have recently shown excellent performance on numerous image classification tasks. Such networks typically have a large number of parameters to estimate and therefore require large amounts of training data. When the amount of training data is small, however, a highly flexible network quickly overfits the training data, resulting in large model variance and poor generalization. To address this problem, we propose a new, simple yet effective ensemble method called InterBoost for small-sample image classification. In the training phase, InterBoost first randomly generates two complementary sets of weights for the training data and uses them to separately train two base networks of the same structure; the two weight sets are then updated, through interaction between the two previously trained base networks, to refine subsequent training. This interactive training process continues iteratively until a stopping criterion is met. In the testing phase, the outputs of the two networks are combined into a single score for classification. Experimental results on four small-sample datasets, UIUC-Sports, LabelMe, 15Scenes and Caltech101, demonstrate that the proposed ensemble method outperforms existing ones. Moreover, Wilcoxon signed-rank tests show that our method is statistically significantly better than the compared methods. A detailed analysis is also provided for an in-depth understanding of the proposed method.
DO - 10.1016/j.neucom.2020.06.135
M3 - Journal article
JF - Neurocomputing
SN - 0925-2312
ER -