Researchers at the University of California have discovered that some third-party AI Vast Language Model (LLM) routers may pose security vulnerabilities that could lead to cryptocurrency theft.
A paper published Thursday by researchers measuring malicious indirect attacks on the LLM supply chain revealed four attack vectors, including malicious code injection and credential extraction.
“26 LLM Routers Secretly Inject Malicious Tool Calls and Steal Credentials” he said co-author of the article, Chaofan Shou, on X.
LLM agents are increasingly routing requests through third-party API intermediaries or routers that aggregate access to providers like OpenAI, Anthropic, and Google. However, these routers terminate TLS (Transport Layer Security) Internet connections and have full access to every plaintext message.
This means that developers using AI coding agents like Claude Code to work on shrewd contracts or wallets could pass private keys, seed phrases, and sensitive data through router infrastructure that has not been vetted or secured.
ETH stolen from a counterfeit cryptocurrency wallet
The researchers tested 28 paid routers and 400 free routers collected from public communities.
Their findings were surprising: nine routers were actively injecting malicious code, two were implementing adaptive evasion triggers, 17 had accessed Amazon Web Services credentials belonging to the researchers, and one was siphoning Ether (ETH) from a private key belonging to the researcher.
Related: Anthropic restricts access to the AI model due to concerns about cyberattacks
The researchers initially deposited Ethereum wallet “decoy keys” with nominal balances and reported that the value lost in the experiment was less than $50, but no further details, such as the transaction hash, were provided.
The authors also conducted two “poisoning studies” that showed that even benign routers become hazardous when they reuse credentials that have been leaked through frail relays.
It’s tough to tell if routers are malicious
Researchers said it’s not basic to detect router malware.
“The line between ‘credential servicing’ and ‘credential theft’ is invisible to the client because routers already read plaintext secrets as part of normal transmission.”
Another disturbing discovery was what scientists called “YOLO mode.” This is a setting used in many AI agent environments where the agent executes commands automatically without asking the user to confirm each command.
Previously legitimate routers can be silently weaponized, even without the operator’s knowledge, while free routers can steal credentials by offering budget-friendly API access as a bait, researchers have found.
“LLM API routers are at a critical threshold of trust, which the ecosystem now treats as transparent transport.”
The researchers recommended that developers using AI agents for coding strengthen client-side security, suggesting never allowing private keys or seed phrases to pass through an AI agent session.
The long-term solution is for AI companies to cryptographically sign their responses so that the instructions the agent executes can be mathematically verified as coming from the real model.
Warehouse: No one knows if quantum-secure cryptography will even work
