Abstract
Understanding how linguistic knowledge is encoded in language models is crucial for improving their generalisation capabilities. In this paper, we investigate the processing of morphosyntactic phenomena by leveraging a recently proposed method for probing language models via Shapley Head Values (SHVs). Using the English-language BLiMP dataset, we test our approach on two widely used models, BERT and RoBERTa, and compare how linguistic constructions such as anaphor agreement and filler-gap dependencies are handled. Through quantitative pruning and qualitative clustering analysis, we demonstrate that attention heads responsible for processing related linguistic phenomena cluster together. Our results show that SHV-based attributions reveal distinct patterns across both models, providing insights into how language models organise and process linguistic information. These findings support the hypothesis that language models learn subnetworks corresponding to linguistic theory, with potential implications for cross-linguistic model analysis and interpretability in Natural Language Processing (NLP).
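The abstract's SHV method assigns each attention head a Shapley value reflecting its marginal contribution to model performance across all head coalitions. As a rough toy illustration only (not the paper's implementation), Shapley values can be estimated by Monte Carlo permutation sampling; the head indices, the `value_fn` payoff, and all other names below are hypothetical:

```python
import random

def shapley_values(players, value_fn, n_samples=2000, seed=0):
    """Monte Carlo estimate of Shapley values via random permutations.

    players  : list of hashable ids (here: attention-head indices)
    value_fn : maps a frozenset of players to a scalar payoff
               (e.g. accuracy of a model keeping only those heads)
    """
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_samples):
        perm = players[:]
        rng.shuffle(perm)
        coalition = frozenset()
        prev = value_fn(coalition)
        for p in perm:
            # Marginal contribution of head p to the growing coalition.
            coalition = coalition | {p}
            cur = value_fn(coalition)
            phi[p] += cur - prev
            prev = cur
    return {p: v / n_samples for p, v in phi.items()}

# Toy additive payoff: heads 0 and 1 each contribute 0.4, head 2 nothing.
heads = [0, 1, 2]
payoff = lambda s: 0.4 * len(s & {0, 1})
shv = shapley_values(heads, payoff)
```

For an additive payoff like the toy one above, the estimate is exact (0.4, 0.4, 0.0); for real models, `value_fn` is expensive, which is why permutation sampling rather than exact enumeration over all `2^n` coalitions is used.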
Original language | English |
---|---|
DOI | |
Status | Submitted - 15 Oct 2024 |
Fingerprint
Dive into the research topics of 'Linguistically Grounded Analysis of Language Models using Shapley Head Values'. Together they form a unique fingerprint.

Projects
- 1 Active
-
Multilingual Modelling for Resource-Poor Languages
Bjerva, J. (PI (principal investigator)), Lent, H. C. (Project participant), Chen, Y. (Project participant), Ploeger, E. (Project participant), Fekete, M. R. (Project participant) & Lavrinovics, E. (Project participant)
01/09/2022 → 31/08/2025
Projects: Project › Research
-
1st Annual AAU NLP Symposium
Lavrinovics, E. (Organiser) & Bjerva, J. (Organiser)
3 Dec 2024
Activity: Participation in academic event › Organisation of or participation in conference
-
Do Language Models Dream With Linguistics?
Fekete, M. R. (Speaker)
2 Dec 2024
Activity: Talks and presentations › Conference presentation