Description
Large Language Models (LLMs) are spreading like wildfire and are being pushed for implementation across society. While the potential of effectivization is clear, the risks involved when implementing this new technology are largely overlooked. Unlike traditional software, LLMs process natural language inputs from users, which opens a vast array of potential attack vectors, risking data leakage, user manipulation, and even medical misdiagnosis. These aspects can typically not be boiled down to a single line of code to be fixed, but constitute a complex interaction between AI architectures, training data, prompts, and manipulation thereof. We will discuss some of the security risks in LLMs, touching upon the importance of a multilingual perspective, providing an overview of proposed ethical guidelines for LLM security practitioners along the way.Period | 31 Oct 2024 |
---|---|
Event title | Digital Tech Summit 2024 |
Event type | Conference |
Location | Copenhagen, DenmarkShow on map |
Degree of Recognition | National |
Documents & Links
Related content
-
Projects
-
Multilingual Modelling for Resource-Poor Languages
Project: Research
-
Publications
-
Text Embedding Inversion Security for Multilingual Language Models
Research output: Contribution to book/anthology/report/conference proceeding › Article in proceeding › Research › peer-review
-
Against All Odds: Overcoming Typology, Script, and Language Confusion in Multilingual Embedding Inversion Attacks
Research output: Contribution to book/anthology/report/conference proceeding › Article in proceeding › Research › peer-review