Abstract
Language Confusion is a phenomenon where Large Language Models (LLMs) generate text that is neither in the desired language nor in a contextually appropriate language. This phenomenon presents a critical challenge in LLM text generation, often appearing as erratic and unpredictable behavior. We hypothesize that there are linguistic regularities underlying this inherent vulnerability in LLMs and shed light on patterns of language confusion across LLMs. We introduce a novel metric, Language Confusion Entropy, designed to directly measure and quantify this confusion, based on language distributions informed by linguistic typology and lexical variation. Comprehensive comparisons with the Language Confusion Benchmark (Marchisio et al., 2024) confirm the effectiveness of our metric, revealing patterns of language confusion across LLMs. We further link language confusion to LLM security and find patterns in the case of multilingual embedding inversion attacks. Our analysis demonstrates that linguistic typology offers a theoretically grounded interpretation and valuable insights into leveraging language similarities as a prior for LLM alignment and security.
Original language | English
---|---
Title of host publication | Findings of the Association for Computational Linguistics: NAACL 2025
Number of pages | 17
Publisher | Association for Computational Linguistics
Publication date | 29 Apr 2025
DOIs |
Publication status | Accepted/In press - Jan 2025
Event | The 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics, Albuquerque, United States. Duration: 29 Apr 2025 → 4 May 2025. https://2025.naacl.org/
Conference
Conference | The 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics
---|---
Country/Territory | United States
City | Albuquerque
Period | 29/04/2025 → 04/05/2025
Internet address | https://2025.naacl.org/
Keywords
- Large Language Models
- Cybersecurity
- Linguistics
Fingerprint
Dive into the research topics of 'Large Language Models are Easily Confused: A Quantitative Metric, Security Implications and Typological Analysis'. Together they form a unique fingerprint.
Projects
Multilingual Modelling for Resource-Poor Languages
Bjerva, J. (PI), Lent, H. C. (Project Participant), Chen, Y. (Project Participant), Ploeger, E. (Project Participant), Fekete, M. R. (Project Participant) & Lavrinovics, E. (Project Participant)
01/09/2022 → 31/08/2025
Project: Research