ALGEN: Few-shot Inversion Attacks on Textual Embeddings using Alignment and Generation

Yiyi Chen, Qiongkai Xu*, Johannes Bjerva

*Corresponding author

Publication: Working paper/Preprint

Abstract

With the growing popularity of Large Language Models (LLMs) and vector databases, private textual data is increasingly processed and stored as numerical embeddings.
However, recent studies have shown that such embeddings are vulnerable to inversion attacks, in which the original text is reconstructed to reveal sensitive information.
Previous research has largely assumed access to millions of sentences to train attack models, e.g., through data leakage or nearly unrestricted API access.
In contrast, we show that a single data point suffices for a partially successful inversion attack.
We present a Few-shot Textual Embedding Inversion Attack using ALignment and GENeration (ALGEN),
which aligns victim embeddings to the attack space and uses a generative model to reconstruct text.
With as few as 1,000 data samples, performance plateaus across a range of black-box encoders, without training on leaked data.
We find that ALGEN attacks can be effectively transferred across domains and languages, revealing key information.
We further examine a variety of defense mechanisms against ALGEN, and find that none are effective, highlighting the vulnerabilities posed by inversion attacks.
By significantly lowering the cost of inversion and proving that embedding spaces can be aligned through one-step optimization, we establish a new textual embedding inversion paradigm with broader applications for embedding alignment in NLP.
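The abstract's claim that embedding spaces can be aligned through one-step optimization can be read as an ordinary least-squares problem. The sketch below is a hypothetical illustration under that reading, not the authors' released code: given paired victim and attacker embeddings, a linear map between the two spaces is solved for in closed form, with no iterative training.

```python
import numpy as np

def align_embeddings(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """One-step alignment: return W minimizing ||X @ W - Y||_F^2.

    X: (n, d_victim) victim embeddings for n paired samples.
    Y: (n, d_attack) corresponding embeddings in the attacker's space.
    The least-squares solution is computed in closed form (no training loop).
    """
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d_v, d_a = 1000, 64, 32          # few-shot regime: ~1k paired samples
    X = rng.standard_normal((n, d_v))   # synthetic victim embeddings
    W_true = rng.standard_normal((d_v, d_a))
    Y = X @ W_true                      # synthetic attacker-space embeddings
    W = align_embeddings(X, Y)
    print(f"alignment residual: {np.linalg.norm(X @ W - Y):.2e}")
```

In the paper's setting, the aligned embeddings would then be fed to a generative decoder to reconstruct text; the dimensions, variable names, and the purely linear map here are assumptions for illustration only.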
Original language: English
DOI
Status: Submitted - Feb. 2025

