Researchers from Israel have demonstrated an interesting and, in my opinion, critical "key-logging" attack on LLMs accessible via API over the internet.
Even though the API communication was encrypted with TLS, the researchers were able to reconstruct the exact content of almost 30% of the intercepted responses and to at least grasp the meaning of around 55%, all without attacking the encryption itself.
The trick: the researchers looked only at the lengths and the order of the encrypted packets. Because the responses are typically streamed token by token, each packet's length reveals the length of the token it carries. A packet length alone is not enough to reconstruct the encrypted text, but since LLM responses follow typical patterns, a specially trained LLM was able to achieve astonishing results in "decrypting" the communication.
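To make the idea more concrete, here is a minimal sketch in Python of how such a length side channel can be read. It assumes a streaming API that sends one token per encrypted record with a roughly constant per-record overhead; all sizes and the vocabulary are made up, and the real attack uses a trained LLM rather than the naive length filter at the end.

```python
# Toy sketch of the length side channel, not the researchers' actual code.
# Assumption: each streamed token travels in its own encrypted record, and
# the per-record overhead (TLS framing, JSON envelope) is roughly constant.

# Ciphertext sizes of consecutive streamed records, as a passive observer
# on the network would see them (hypothetical values).
observed_record_sizes = [88, 86, 91, 89]

# Overhead an attacker would calibrate from traffic with known plaintext.
ESTIMATED_OVERHEAD = 84

# Step 1: packet length minus overhead ~ length of the token inside.
token_lengths = [size - ESTIMATED_OVERHEAD for size in observed_record_sizes]
print(token_lengths)  # -> [4, 2, 7, 5]

# Step 2: the researchers feed such length sequences into a model trained to
# reconstruct likely token sequences. A crude stand-in: filter a tiny
# vocabulary by length to show how much the lengths alone narrow things down.
tiny_vocabulary = ["I", "am", "not", "sure", "fine", "thanks,", "doing", "well."]
candidates_per_position = [
    [tok for tok in tiny_vocabulary if len(tok) == n] for n in token_lengths
]
print(candidates_per_position)
```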
To use closed-source LLMs as securely as possible, it is therefore essential to deploy the API endpoints in a protected cloud environment, e.g. via the Azure OpenAI Service, AWS Bedrock or Google Vertex AI.
Alternatively, you can use a local LLM hosted on-premises or in a secure cloud environment.
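For illustration, a minimal sketch of routing requests through such a managed endpoint, here Azure OpenAI via the official openai Python package. The endpoint URL, deployment name and API version are placeholders; the actual protection comes from the network setup around the endpoint (private endpoints, VNet integration), which is configured in Azure rather than in this code.

```python
from openai import AzureOpenAI  # pip install openai

# Placeholder values: point this at your own private Azure OpenAI resource,
# ideally reachable only via a private endpoint inside your VNet.
client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",  # hypothetical
    api_key="<your-key>",       # better: use Entra ID / managed identity
    api_version="2024-02-01",   # assumed API version, check your resource
)

response = client.chat.completions.create(
    model="example-gpt-4o-deployment",  # your deployment name (hypothetical)
    messages=[{"role": "user", "content": "Summarize our Q3 report."}],
)
print(response.choices[0].message.content)
```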
The researchers have published a detailed description of their approach here: https://arxiv.org/pdf/2211.15139. Definitely a study worth reading.