Large Language Model Security in a Multilingual World

Activity: Talks and presentationsConference presentations

Description

Large Language Models (LLMs) are spreading like wildfire and are being pushed for implementation across society. While the potential of effectivization is clear, the risks involved when implementing this new technology are largely overlooked. Unlike traditional software, LLMs process natural language inputs from users, which opens a vast array of potential attack vectors, risking data leakage, user manipulation, and even medical misdiagnosis. These aspects can typically not be boiled down to a single line of code to be fixed, but constitute a complex interaction between AI architectures, training data, prompts, and manipulation thereof. We will discuss some of the security risks in LLMs, touching upon the importance of a multilingual perspective, providing an overview of proposed ethical guidelines for LLM security practitioners along the way.
Period31 Oct 2024
Event titleDigital Tech Summit 2024
Event typeConference
LocationCopenhagen, DenmarkShow on map
Degree of RecognitionNational