
Key-Logging Attack on LLMs

Michael Hannecke


Researchers from Israel have demonstrated an interesting and, in my opinion, critical "key-logging" attack on LLMs that are accessible via API over the internet.


Even though the API communication was encrypted with TLS, the researchers were able to reconstruct the exact communication in almost 30% of the requests and to at least understand its meaning in around 55%. All of this without attacking the encryption itself.


The trick here: the researchers focused on the packet lengths of the communication. Streaming APIs typically send each token in its own packet, so the length of each encrypted packet reveals the length of the token it carries. The packet lengths alone are not enough to reconstruct the encrypted text, but since the responses of an LLM follow typical patterns, a specially trained LLM was able to achieve astonishing results in "decrypting" the communication from the sequence and order of these lengths.
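To make the mechanism concrete, here is a minimal sketch of the length-recovery step, not the researchers' actual pipeline: it assumes a streaming API that sends one token per TLS record and a constant per-record framing overhead. The `OVERHEAD` value and the example sizes are illustrative assumptions; in practice the overhead depends on the cipher suite, and an attacker would extract record sizes from captured packets.

```python
# Hypothetical sketch of the token-length side channel, NOT the
# researchers' actual tooling. Assumes one token per TLS record and
# a constant framing overhead per record (illustrative value).

OVERHEAD = 29  # assumed bytes of TLS framing per record

def token_lengths(record_sizes: list[int]) -> list[int]:
    """Recover plaintext token lengths from observed ciphertext record sizes."""
    return [size - OVERHEAD for size in record_sizes]

# Example: record sizes observed for an encrypted streaming response.
observed = [33, 30, 37, 32, 34]
print(token_lengths(observed))
# [4, 1, 8, 3, 5] -> consistent with e.g. "Sure" "," " however" " to" " keep"
```

From such a length sequence, a model trained on typical assistant responses can rank candidate token sequences; because LLM answers are highly stereotyped, the search space collapses quickly.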


To use closed-source LLMs as securely as possible, it is therefore essential to deploy the API endpoints in a protected cloud environment, e.g. via Azure's OpenAI Service, AWS Bedrock, or Google's Vertex AI.


Alternatively, you can use a local LLM hosted on-premises or in a secure cloud environment.
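Independent of where the endpoint runs, the attack depends on token-by-token streaming, so a complementary mitigation is simply not to stream the response. A minimal client-side sketch, assuming the official `openai` Python client; the model name is only a placeholder:

```python
# Client-side sketch: request the full response at once instead of
# streaming it token by token, so an eavesdropper sees one large
# ciphertext rather than a sequence of per-token packets.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize TLS 1.3 briefly."}],
    stream=False,   # no per-token packets for an attacker to measure
)
print(response.choices[0].message.content)
```

Note that the total length of the response still leaks, so this narrows the side channel rather than closing it completely.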


The researchers have published a detailed description of their approach here: https://arxiv.org/pdf/2211.15139 – definitely a study worth reading.


