The ten most critical security risks for applications built on large language models. As organisations deploy AI across their operations, these are the attack surfaces adversaries are already exploiting.
LLM01
Prompt Injection
Crafted inputs that override an LLM's instructions, causing it to leak data, execute unintended actions, or bypass safety controls. Covers both direct injection by the user and indirect injection through external content the model ingests, such as web pages or retrieved documents.
Strix angle: Our agents are architecturally isolated from the data they scan. They observe and correlate signals without accepting instruction from external inputs, eliminating the prompt injection surface.
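A common first line of defence is to treat all external content as data: fence it in delimiters and screen it for instruction-override phrasing before it reaches the model. The sketch below illustrates the idea; the pattern list and the `<external_data>` delimiter are illustrative assumptions, and a production system would use a trained classifier rather than regexes.

```python
import re

# Assumed patterns that commonly signal an instruction-override attempt.
# Illustrative only: real deployments use trained injection classifiers.
OVERRIDE_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now",
    r"reveal your (system )?prompt",
]

def flag_injection(untrusted_text: str) -> bool:
    """Return True if external content looks like a prompt-injection attempt."""
    lowered = untrusted_text.lower()
    return any(re.search(p, lowered) for p in OVERRIDE_PATTERNS)

def wrap_untrusted(untrusted_text: str) -> str:
    """Fence external content in delimiters so the model can be instructed to
    treat everything inside as data, never as instructions."""
    return f"<external_data>\n{untrusted_text}\n</external_data>"
```

Delimiting does not eliminate the risk on its own, which is why architectural isolation matters; it simply raises the bar for indirect injection via scraped or retrieved content.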
LLM02
Sensitive Information Disclosure
LLMs revealing confidential data from their training set, system prompts, or connected data sources. Includes PII leakage, proprietary information exposure, and credential disclosure through model outputs.
Strix angle: Our Dark Web Agent monitors for your organisation's sensitive data surfacing in leak markets and forums, whether the source is an AI system or a traditional breach.
LLM03
Supply Chain Vulnerabilities
Compromised training data, poisoned pre-trained models, vulnerable plugins, and tampered model registries. The AI supply chain introduces new dependency risks that traditional security tooling does not cover.
Strix angle: Our Supply Chain Agent extends monitoring to AI dependencies: model repositories, training data pipelines, and third-party AI services your organisation relies on.
LLM04
Data and Model Poisoning
Adversaries manipulating training data or fine-tuning datasets to embed backdoors, biases, or vulnerabilities into models. The effects can be subtle and persistent, surviving across model versions.
Strix angle: Strix uses multi-agent correlation rather than single-model classification. Poisoning one model does not compromise the consensus across six independent agents.
LLM05
Improper Output Handling
Trusting LLM output without validation. When model responses are passed directly to backends, browsers, or other systems, they become vectors for injection, XSS, SSRF, and privilege escalation.
Strix angle: Our Surface Agent detects exposed application endpoints where LLM outputs interact with backend systems, flagging misconfigurations before they become exploitable chains.
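The core mitigation is to treat model output like any other untrusted input. A minimal sketch, assuming an internal allowlist of fetchable hosts (`ALLOWED_FETCH_HOSTS` is a hypothetical name): escape output before it reaches a browser, and refuse server-side fetches of model-proposed URLs outside the allowlist.

```python
import html
from urllib.parse import urlparse

# Assumption: the application maintains an explicit allowlist of hosts
# that server-side code is permitted to fetch on the model's behalf.
ALLOWED_FETCH_HOSTS = {"api.example.com"}

def render_safe(model_output: str) -> str:
    """Escape model output before it reaches a browser, closing the XSS vector."""
    return html.escape(model_output)

def safe_to_fetch(url: str) -> bool:
    """Fetch a model-proposed URL only if the host is explicitly allowlisted,
    blocking SSRF against internal services such as cloud metadata endpoints."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_FETCH_HOSTS
```

The same principle extends to SQL, shell commands, and API parameters: validate against what the downstream system expects, never against what the model happened to produce.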
LLM06
Excessive Agency
Granting LLM-based systems too many permissions, too broad access, or too much autonomy. When an AI agent can execute code, access databases, or call APIs without proper constraints, compromise of the LLM means compromise of everything it can reach.
Strix angle: Strix agents operate on the principle of least privilege. Each agent has read-only access to its intelligence domain. They observe and report. They do not execute actions on your infrastructure.
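Least privilege for agents can be enforced mechanically: map each agent role to the only tools it may invoke, and reject everything else before execution. A minimal sketch, with hypothetical role and tool names:

```python
# Hypothetical least-privilege registry: each agent role maps to the only
# tools it may invoke. Any call outside the grant is rejected up front.
TOOL_PERMISSIONS: dict[str, set[str]] = {
    "surface_agent": {"read_dns_records", "read_tls_certificates"},
    "darkweb_agent": {"read_leak_feeds"},
}

def authorize(role: str, tool: str) -> None:
    """Raise before any tool call that exceeds the role's grant."""
    if tool not in TOOL_PERMISSIONS.get(role, set()):
        raise PermissionError(f"{role} may not invoke {tool}")
```

Checking the grant at the dispatch layer, rather than trusting the model to self-restrict, means a compromised prompt cannot widen an agent's reach.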
LLM07
System Prompt Leakage
Adversaries extracting the system prompt of an LLM application through carefully crafted queries. Leaked prompts reveal business logic, security controls, API structures, and sensitive configuration details.
Strix angle: Our reconnaissance agents scan for exposed AI endpoints, leaked configuration files, and API documentation that reveals system prompt patterns or internal agent architectures.
LLM08
Vector and Embedding Weaknesses
Attacks on retrieval-augmented generation systems through poisoned embeddings, manipulated vector databases, or adversarial documents that hijack RAG retrieval to inject malicious context.
Strix angle: As organisations adopt RAG architectures, we monitor for exposed vector database endpoints, unsecured embedding APIs, and data sources that could serve as injection points.
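One practical control on the retrieval side is provenance filtering: only admit retrieved chunks into the model's context if they come from a trusted source and clear a similarity threshold. The sketch below assumes a simple chunk schema (`source` and `score` keys) and a hypothetical `TRUSTED_SOURCES` set; real RAG stacks vary.

```python
# Assumption: each retrieved chunk is a dict carrying the source it was
# embedded from and a retrieval similarity score.
TRUSTED_SOURCES = {"internal-wiki", "product-docs"}  # hypothetical allowlist

def filter_chunks(chunks: list[dict], min_score: float = 0.75) -> list[dict]:
    """Drop chunks from untrusted sources or with weak similarity before
    they are concatenated into the model's context, so an attacker-planted
    document cannot hijack retrieval."""
    return [
        c for c in chunks
        if c["source"] in TRUSTED_SOURCES and c["score"] >= min_score
    ]
```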
LLM09
Misinformation
LLMs generating false, misleading, or fabricated content that appears authoritative. In security contexts, this means false threat reports, incorrect remediation advice, or hallucinated vulnerability details that waste response time.
Strix angle: Every Strix advisory is cross-verified across multiple agents and validated against real-world signals before reaching your team. No single model output becomes an alert without corroboration.
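The corroboration principle can be sketched simply: raise an advisory only when multiple independent sources report the same finding. This is an illustrative quorum check under an assumed schema of `(agent_name, finding_id)` pairs, not a description of Strix's internal pipeline.

```python
def corroborated(findings: list[tuple[str, str]], quorum: int = 2) -> set[str]:
    """findings: (agent_name, finding_id) pairs. Return the finding IDs
    reported by at least `quorum` distinct agents, so no single model
    output becomes an alert on its own."""
    agents_per_finding: dict[str, set[str]] = {}
    for agent, finding in findings:
        agents_per_finding.setdefault(finding, set()).add(agent)
    return {f for f, agents in agents_per_finding.items() if len(agents) >= quorum}
```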
LLM10
Unbounded Consumption
Denial-of-service through resource exhaustion: flooding LLM endpoints with expensive queries, triggering excessive token generation, or exploiting recursive tool calls that spiral compute costs.
Strix angle: Our Surface Agent identifies exposed AI endpoints and model-serving infrastructure that lack rate limiting, authentication, or usage controls before attackers weaponise them.
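The standard defence is a per-client budget that caps both request rate and total generated tokens within a window. A minimal in-memory sketch (limits and window size are assumptions; production systems would use a shared store such as Redis):

```python
import time

class TokenBudget:
    """Per-client budget: cap requests and total generated tokens per window,
    refusing work once either limit is hit. Limits here are illustrative."""

    def __init__(self, max_requests: int = 60, max_tokens: int = 20_000,
                 window_seconds: float = 60.0):
        self.max_requests = max_requests
        self.max_tokens = max_tokens
        self.window = window_seconds
        self.requests = 0
        self.tokens = 0
        self.window_start = time.monotonic()

    def allow(self, estimated_tokens: int) -> bool:
        now = time.monotonic()
        # Reset counters when the window rolls over.
        if now - self.window_start >= self.window:
            self.requests, self.tokens = 0, 0
            self.window_start = now
        if (self.requests + 1 > self.max_requests
                or self.tokens + estimated_tokens > self.max_tokens):
            return False
        self.requests += 1
        self.tokens += estimated_tokens
        return True
```

Capping estimated tokens, not just request count, is what blunts the expensive-query and recursive-tool-call variants of this attack.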