SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters

Shwai He, Liang Ding*, Daize Dong, Miao Zhang, Dacheng Tao

*Corresponding author for this work

Research output: Contribution to conference › Paper without publisher/journal › Research › peer-review

19 Citations (Scopus)

Abstract

Adapter Tuning, which freezes the pretrained language models (PLMs) and only fine-tunes a few extra modules, has become an appealing, efficient alternative to full model fine-tuning. Although computationally efficient, recent adapters often increase their parameter count (e.g. the bottleneck dimension) to match the performance of full model fine-tuning, which we argue goes against their original intention. In this work, we re-examine the parameter-efficiency of adapters through the lens of network pruning (we name this plug-in concept SparseAdapter) and find that SparseAdapter can achieve comparable or better performance than standard adapters even when the sparse ratio reaches up to 80%. Based on our findings, we introduce an easy but effective setting, “Large-Sparse”, to improve the model capacity of adapters under the same parameter budget. Experiments with five competitive adapters on three advanced PLMs show that, with a proper sparse method (e.g. SNIP) and ratio (e.g. 40%), SparseAdapter consistently outperforms the corresponding standard adapters. Encouragingly, with the Large-Sparse setting, we obtain further appealing gains, even outperforming full fine-tuning by a large margin. Our code will be released at: https://github.com/Shwai-He/SparseAdapter.
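The idea sketched in the abstract (prune adapter weights at initialization with a criterion such as SNIP, then train only the surviving connections) can be illustrated with a small PyTorch example. The following is a minimal, hypothetical sketch, not the authors' released implementation (see the repository linked above for the actual code); the adapter layout, the `snip_prune_adapter` helper, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """A standard bottleneck adapter: down-projection, nonlinearity, up-projection,
    wrapped in a residual connection."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.ReLU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

def snip_prune_adapter(adapter: Adapter, loss_fn, batch, sparse_ratio: float):
    """Prune adapter weights at initialization using SNIP-style connection
    sensitivity |grad * weight|, zeroing out the lowest-scoring fraction."""
    inputs, targets = batch
    loss = loss_fn(adapter(inputs), targets)
    loss.backward()  # one backward pass to obtain gradients w.r.t. adapter weights

    weights = [adapter.down.weight, adapter.up.weight]
    scores = torch.cat([(w.grad * w).abs().flatten() for w in weights])
    num_kept = max(1, int((1.0 - sparse_ratio) * scores.numel()))
    threshold = torch.topk(scores, num_kept).values.min()

    masks = []
    for w in weights:
        mask = ((w.grad * w).abs() >= threshold).float()
        with torch.no_grad():
            w.mul_(mask)   # remove pruned connections
        w.grad = None
        masks.append(mask)  # re-apply these masks after each update to keep the adapter sparse
    return masks

# Example (illustrative): prune 80% of the adapter weights at initialization.
adapter = Adapter(hidden_dim=768, bottleneck_dim=64)
dummy_batch = (torch.randn(8, 768), torch.randn(8, 768))
masks = snip_prune_adapter(adapter, nn.MSELoss(), dummy_batch, sparse_ratio=0.8)
```

Under the Large-Sparse setting described above, the bottleneck dimension and the sparse ratio would be raised together so that the number of nonzero parameters stays within the same budget; for example, doubling the bottleneck dimension from 64 to 128 while moving the sparse ratio from 40% to 70% keeps roughly the same number of active weights.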

Original language: English
Publication date: 2022
Number of pages: 7
Publication status: Published - 2022
Event: 2022 Findings of the Association for Computational Linguistics: EMNLP 2022 - Abu Dhabi, United Arab Emirates
Duration: 7 Dec 2022 – 11 Dec 2022

Conference

Conference: 2022 Findings of the Association for Computational Linguistics: EMNLP 2022
Country/Territory: United Arab Emirates
City: Abu Dhabi
Period: 07/12/2022 – 11/12/2022

Bibliographical note

Publisher Copyright:
© 2022 Association for Computational Linguistics.
