Key Takeaways
- University of California study identified 26 third-party LLM routing services executing malicious activities including credential theft and code injection
- Researchers lost Ether from a test wallet when one router executed a drainage attack
- These routing services maintain complete plaintext visibility of all transmitted data, exposing seed phrases and private keys
- The “YOLO mode” feature enables AI systems to execute instructions autonomously without requiring user approval
- Security experts strongly advise against transmitting sensitive cryptocurrency information through AI agent platforms
A team from the University of California has uncovered a significant security vulnerability in third-party artificial intelligence routing platforms that threatens cryptocurrency developers with credential theft and malicious code execution.
The research team published their discoveries this week in a comprehensive paper examining what they termed “malicious intermediary attacks” targeting the large language model supply chain infrastructure.
These LLM routing platforms function as intermediary services positioned between developers and major AI providers such as OpenAI, Anthropic, and Google. Their primary function involves managing and directing API traffic across various service providers.
The critical vulnerability stems from these routers terminating encrypted connections. This architectural design grants them complete, unencrypted visibility into every piece of data flowing through their systems.
Cryptocurrency developers utilizing AI-powered development environments like Claude Code for smart contract creation or wallet development may unknowingly expose private keys and recovery phrases through these intermediary services.
The research team conducted comprehensive testing on 28 commercial routing services and an additional 400 free alternatives collected from various online communities.
Results revealed nine routers actively inserting harmful code, two implementing sophisticated evasion mechanisms, and 17 successfully extracting researcher-controlled Amazon Web Services authentication credentials.
In one documented case, a router successfully drained Ether from a deliberately created honeypot wallet. The research team reported the financial loss totaled less than $50.
According to the researchers, distinguishing between legitimate credential processing and malicious theft proves virtually impossible for end users, given that routers inherently access sensitive information in plaintext during normal operations.
Understanding the YOLO Mode Vulnerability
The study highlighted a concerning feature prevalent in numerous AI agent frameworks known as “YOLO mode.” This configuration allows AI agents to autonomously execute commands without requiring user confirmation for each action.
This functionality significantly amplifies the threat landscape. When a router introduces malicious directives, YOLO mode enables automatic execution without human oversight or intervention.
Researchers also discovered that previously trustworthy routing services can be covertly compromised without operators’ awareness. Free routing services particularly may leverage discounted API pricing as bait for user acquisition while simultaneously extracting credentials surreptitiously.
Security Recommendations from Experts
The research team urged developers to enhance client-side security measures and establish strict protocols preventing private keys or recovery phrases from entering AI agent environments.
For sustainable security improvements, researchers advocated for AI providers to implement cryptographic signing of their outputs. This mechanism would enable developers to authenticate that agent-received instructions genuinely originated from the intended model provider.
Co-author Chaofan Shou announced on X that “26 LLM routers are secretly injecting malicious tool calls and stealing creds.”
The researchers emphasized that LLM API routing services occupy a crucial trust checkpoint that the artificial intelligence industry currently assumes secure without proper verification.
The published paper did not include specific transaction identifiers or blockchain evidence for the compromised wallet incident.


