TY - JOUR
T1 - Winning solutions and post-challenge analyses of the ChaLearn AutoDL challenge 2019
AU - Liu, Zhengying
AU - Pavao, Adrien
AU - Xu, Zhen
AU - Escalera, Sergio
AU - Ferreira, Fabio
AU - Guyon, Isabelle
AU - Hong, Sirui
AU - Hutter, Frank
AU - Ji, Rongrong
AU - Jacques Junior, Julio C. S.
AU - Li, Ge
AU - Lindauer, Marius
AU - Luo, Zhipeng
AU - Madadi, Meysam
AU - Nierhoff, Thomas
AU - Niu, Kangning
AU - Pan, Chunguang
AU - Stoll, Danny
AU - Treger, Sebastien
AU - Wang, Jin
AU - Wang, Peng
AU - Wu, Chengling
AU - Xiong, Youcheng
AU - Zela, Arber
AU - Zhang, Yang
PY - 2021
Y1 - 2021
N2 - This paper reports the results and post-challenge analyses of ChaLearn's AutoDL challenge series, which helped sort out a profusion of AutoML solutions for Deep Learning (DL) that had been introduced in a variety of settings but lacked fair comparisons. All input data modalities (time series, images, videos, text, tabular) were formatted as tensors, and all tasks were multi-label classification problems. Code submissions were executed on hidden tasks with limited time and computational resources, pushing solutions that get results quickly. In this setting, DL methods dominated, though popular Neural Architecture Search (NAS) was impractical. Solutions relied on fine-tuned pre-trained networks, with architectures matched to the data modality. Post-challenge tests did not reveal improvements beyond the imposed time limit. While no component is particularly original or novel, a high-level modular organization emerged, featuring a 'meta-learner', 'data ingestor', 'model selector', 'model/learner', and 'evaluator'. This modularity enabled ablation studies, which revealed the importance of (off-platform) meta-learning, ensembling, and efficient data management. Experiments on heterogeneous module combinations further confirm the (local) optimality of the winning solutions. Our challenge legacy includes an everlasting benchmark (http://autodl.chalearn.org), the open-sourced code of the winners, and a free 'AutoDL self-service'.
KW - AutoML
KW - Benchmark testing
KW - Computer architecture
KW - Deep Learning
KW - Hyperparameter Optimization
KW - Internet
KW - Meta-learning
KW - Model Selection
KW - Neural Architecture Search
KW - Task analysis
KW - Tensors
KW - Videos
UR - http://www.scopus.com/inward/record.url?scp=85104569330&partnerID=8YFLogxK
U2 - 10.1109/TPAMI.2021.3075372
DO - 10.1109/TPAMI.2021.3075372
M3 - Journal article
SN - 1939-3539
VL - 43
SP - 3108
EP - 3125
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
IS - 9
M1 - 9415128
ER -