As AI agents scale in crypto, researchers warn of a critical security gap
By Olivier Acuna
Published on April 13, 2026.
Researchers from the University of California, Santa Barbara, the University Of California, San Diego, Fuzzland and World Liberty Financial have warned of a security gap in the growing use of AI agents in the cryptocurrency industry as they increasingly assume they are interacting directly with AI models. The researchers found that so-called “LLM routers, or services that sit between users and AI models, can be exploited by malicious actors, posing as powerful attack points. These routers are designed to forward requests to models like OpenAI or Anthropic but also have full access to sensitive data, including private keys, API credentials, and wallet access tokens. The implications for crypto users are severe as these systems often pass through plain text. Researchers demonstrated how easy it is to manipulate parts of the router ecosystem to control downstream systems within hours. They highlighted the vulnerability of these routers and demonstrated how quickly these tools are capable of replacing benign commands with an attacker-controlled ones.
Read Original Article