LangChain and LangGraph security vulnerabilities exposing enterprise AI frameworks to data theft attacks

The developer was just trying to load a prompt template. A simple, routine operation that LangChain handles millions of times per day across enterprises worldwide. But this particular template contained something extra - a carefully crafted path that didn't point to the intended file. Instead, it pointed to /etc/passwd, then to Docker configuration files, then to environment files containing API keys and database credentials.

Within seconds, the AI framework had dutifully served up sensitive system files to an attacker who never needed to authenticate, never needed to exploit a buffer overflow, and never needed sophisticated tooling. They just asked nicely, and LangChain complied.

Welcome to the new reality of AI framework security. On March 27, 2026, cybersecurity researchers at Cyera disclosed three critical vulnerabilities in LangChain and LangGraph - the open-source frameworks powering AI applications with over 84 million combined weekly downloads. The flaws expose three independent paths for attackers to drain sensitive data from enterprise deployments: filesystem access, secret exfiltration, and database compromise.

This isn't a theoretical concern. These vulnerabilities are being actively discussed in security communities, and the exploitation window is measured in hours, not days. If your organization uses LangChain or LangGraph, you need to act now.

The Three Paths to Your Data

CVE-2026-34070: Path Traversal in Prompt Loading (CVSS 7.5)

The first vulnerability resides in LangChain's prompt-loading API (langchain_core/prompts/loading.py). It's a classic path traversal flaw with a modern AI twist.

How It Works:
LangChain allows developers to load prompt templates from files - a common pattern for managing complex prompts. The vulnerability exists because the framework doesn't validate file paths before accessing them. An attacker can supply a specially crafted prompt template containing directory traversal sequences (../) to access arbitrary files on the system.

What Attackers Can Access:
System password files such as /etc/passwd, Docker and application configuration files, and environment files containing API keys and database credentials.

The Attack in Practice:
An attacker sends a request with a malicious prompt template containing paths like ../../../etc/environment or ../../../app/config/secrets.yaml. LangChain's prompt loader follows these paths without validation, returning the file contents as if they were legitimate prompt templates.
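Until you can upgrade, the standard defense is to resolve any user-influenced path and verify it stays inside an allowed base directory. A minimal sketch (the `resolve_within` helper is illustrative, not part of LangChain's API):

```python
from pathlib import Path

def resolve_within(base_dir: str, user_path: str) -> Path:
    """Resolve user_path and refuse anything that escapes base_dir."""
    base = Path(base_dir).resolve()
    # resolve() collapses any ../ segments, so a traversal attempt
    # ends up outside base and fails the containment check below
    target = (base / user_path).resolve()
    if not target.is_relative_to(base):  # requires Python 3.9+
        raise ValueError(f"path escapes prompt directory: {user_path}")
    return target

# A traversal attempt like the one described above is rejected:
# resolve_within("/app/prompts", "../../../etc/passwd")  -> ValueError
```

Note that the check runs on the *resolved* path, so it also catches absolute paths and symlink tricks that simple string filtering on "../" would miss.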

Patched In: langchain-core >= 1.2.22

CVE-2025-68664: Deserialization Attack on API Keys (CVSS 9.3)

The second vulnerability is significantly more severe, earning a 9.3 CVSS score. It allows attackers to extract API keys and environment secrets through a deserialization flaw - serious enough that Cyera gave it the codename "LangGrinch" when they first discovered it in December 2025.

How It Works:
LangChain uses serialization to pass objects between components. The vulnerability occurs when user input is deserialized without proper validation. An attacker can craft a data structure that tricks LangChain into interpreting it as an already-serialized object rather than regular user data. This deserialized object can then access and exfiltrate sensitive environment variables and API keys.
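The general mitigation, beyond patching, is to treat user-supplied data strictly as data and refuse to hand anything that resembles a pre-serialized object to a deserializer. A minimal sketch of that idea - the marker keys below are illustrative placeholders, so check your framework's actual serialization format before relying on them:

```python
# Keys that mark a payload as a serialized framework object (illustrative).
SERIALIZATION_MARKERS = {"lc", "type", "id"}

def reject_serialized_payloads(user_data):
    """Recursively refuse dicts that resemble serialized framework objects."""
    if isinstance(user_data, dict):
        # If a user-supplied dict carries all the marker keys, a downstream
        # deserializer might reconstruct it as a live object - reject it.
        if SERIALIZATION_MARKERS.issubset(user_data.keys()):
            raise ValueError("user input resembles a serialized object")
        for value in user_data.values():
            reject_serialized_payloads(value)
    elif isinstance(user_data, list):
        for item in user_data:
            reject_serialized_payloads(item)
    return user_data
```

Run a check like this at the trust boundary (the web handler), before user input ever reaches code that calls the framework's load/deserialize machinery.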

What Attackers Can Steal:
Environment variables, API keys for LLM providers such as OpenAI, and cloud credentials stored in the application's environment.

Why It's Devastating:
API keys are the keys to your AI kingdom. With stolen OpenAI API keys, attackers can run up massive bills, access your fine-tuned models, or exfiltrate data from your AI conversations. With cloud credentials, they can pivot to compromise entire infrastructure environments.

Patched In: langchain-core 0.3.81 and 1.2.5

CVE-2025-67644: SQL Injection in LangGraph (CVSS 7.3)

The third vulnerability affects LangGraph, the framework built on LangChain for sophisticated agentic workflows. It's an SQL injection flaw in the SQLite checkpoint implementation.

How It Works:
LangGraph uses SQLite checkpoints to persist conversation state and workflow progress. The vulnerability exists in how metadata filter keys are processed when querying these checkpoints. An attacker can manipulate SQL queries through specially crafted metadata filters, injecting arbitrary SQL commands.
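Because the injection point is the filter *keys* (identifiers, which cannot be bound as SQL parameters), the fix pattern is to allowlist keys and parameterize values. A sketch of that pattern - the schema and allowed keys here are illustrative, not LangGraph's actual checkpoint layout:

```python
import sqlite3

# Illustrative filter keys; not LangGraph's real checkpoint schema.
ALLOWED_KEYS = {"thread_id", "step", "source"}

def query_checkpoints(conn, filters: dict):
    """Build a WHERE clause safely: allowlist the keys (identifiers can't
    be bound as parameters) and bind every value as a parameter."""
    clauses, params = [], []
    for key, value in filters.items():
        if key not in ALLOWED_KEYS:
            raise ValueError(f"unexpected filter key: {key}")
        clauses.append(f"json_extract(metadata, '$.{key}') = ?")
        params.append(value)
    sql = "SELECT checkpoint FROM checkpoints"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return conn.execute(sql, params).fetchall()
```

A key like `thread_id') = '' OR 1=1 --` never reaches the SQL string, because it fails the allowlist check first.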

What Attackers Can Do:
Read or modify persisted checkpoint data, including stored conversation histories and workflow state, and execute arbitrary SQL against the checkpoint database.

The Business Impact:
Conversation histories often contain sensitive business information - customer data, internal decisions, proprietary analysis. For companies using LangGraph for customer service agents, this could mean exposing customer interactions. For internal tools, it could reveal strategic discussions and confidential data.

Patched In: langgraph-checkpoint-sqlite 3.0.1

The Ripple Effect: Why These Vulnerabilities Matter More Than You Think

The Dependency Web

LangChain doesn't exist in isolation. It sits at the center of a massive dependency ecosystem that stretches across the AI stack. According to PyPI statistics, LangChain and its related packages are downloaded over 84 million times per week.

Hundreds of libraries wrap LangChain, extend it, or depend on it. When a vulnerability exists in LangChain's core, it doesn't just affect direct users. It ripples outward through every downstream library, every wrapper, every integration that inherits the vulnerable code path.

The transitive dependency problem: Your application might not directly use LangChain, but if you use a library that uses LangChain, you're exposed. And many developers don't even realize LangChain is in their dependency tree.

The AI Security Blind Spot

These vulnerabilities expose a fundamental blind spot in AI security: the plumbing matters as much as the model. Security teams have focused heavily on prompt injection, model alignment, and AI-specific attacks. But underneath every AI application is traditional software - with traditional vulnerabilities like path traversal, deserialization flaws, and SQL injection.

The uncomfortable truth: Your AI security posture is only as strong as your foundational software security. The most sophisticated prompt injection defense means nothing if an attacker can steal your API keys through a deserialization bug.

The Speed of Exploitation

The LangChain disclosures come on the heels of another critical AI framework vulnerability: CVE-2026-33017 in Langflow, which was actively exploited within 20 hours of disclosure. Security researchers at Sysdig observed attackers building working exploits directly from the advisory description - no public proof-of-concept required.

This pattern is becoming the norm, not the exception. The window between disclosure and exploitation has collapsed from months to hours. Organizations relying on scheduled patch cycles are operating on timelines that attackers have already outpaced.

The Bigger Picture: AI Framework Security in Crisis

March 2026: The Month AI Infrastructure Fell Under Siege

The LangChain vulnerabilities are part of a broader crisis affecting AI infrastructure in March 2026:

March 17: CVE-2026-33017 disclosed in Langflow (CVSS 9.3), actively exploited within 20 hours
March 18-21: Nine CVEs disclosed for OpenClaw, including critical CVE-2026-22172 (CVSS 9.9)
March 19: TeamPCP compromises Trivy security scanner and LiteLLM packages in supply chain attack
March 27: LangChain and LangGraph vulnerabilities disclosed

That's 13+ critical vulnerabilities in AI infrastructure tools in just 10 days. The message is clear: AI frameworks have become prime targets, and attackers are moving faster than ever.

Why AI Frameworks Are Prime Targets

High Value: AI frameworks often have access to the most sensitive data in an organization - customer conversations, proprietary models, strategic analysis, and API keys to expensive AI services.

Centralized Access: Compromising an AI framework can provide access to multiple systems. A single LangChain deployment might connect to databases, vector stores, APIs, and cloud services.

Rapid Adoption: Organizations are deploying AI tools faster than they can secure them. The pressure to ship AI features often means security takes a back seat.

Complexity: AI applications have complex dependency chains. LangChain alone has dozens of optional dependencies for different use cases - vector stores, LLM providers, document loaders. Each dependency is a potential attack vector.

Immediate Actions: What You Need to Do Right Now

1. Update Immediately

Check your dependency versions and upgrade to patched versions:

# Check current versions
pip show langchain-core langgraph-checkpoint-sqlite

# Upgrade to patched versions
# (quotes keep the shell from treating >= as a redirect)
pip install "langchain-core>=1.2.22"
pip install "langgraph-checkpoint-sqlite>=3.0.1"

If you're using LangChain 0.3.x, upgrade to langchain-core 0.3.81 or later.

2. Audit Your Dependencies

Find out if LangChain is in your dependency tree even if you don't use it directly:

# pipdeptree is a third-party tool: pip install pipdeptree
pipdeptree | grep -i langchain

Review your requirements.txt or pyproject.toml for any packages that might wrap or extend LangChain.

3. Rotate Compromised Secrets

If you've been running vulnerable versions, assume your secrets may be compromised. Rotate LLM provider API keys, cloud credentials, database passwords, and anything else exposed through environment variables, then review provider billing and audit logs for signs of unauthorized use.

4. Implement Defense in Depth

Don't rely solely on patching. Add layers of defense:

Network Segmentation: Run AI frameworks in isolated network segments with limited access to sensitive systems.

Secret Management: Use dedicated secret management tools (HashiCorp Vault, AWS Secrets Manager) rather than environment variables. Implement short-lived credentials where possible.

Input Validation: Add your own input validation layers on top of framework-provided APIs. Don't trust user input, even if the framework should handle it.

Monitoring: Implement monitoring for unusual file access patterns, API key usage, and database queries.
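As a minimal starting point for that last layer, you can flag incoming request paths that match known exfiltration patterns before they reach the framework. The pattern list below is illustrative and should be tuned for your application:

```python
import re

# Request-path patterns worth alerting on (illustrative, not exhaustive).
SUSPICIOUS = [
    re.compile(r"\.\./"),                 # directory traversal sequences
    re.compile(r"/etc/(passwd|shadow)"),  # classic system-file targets
    re.compile(r"\.env\b"),               # environment files with secrets
]

def flag_suspicious(request_path: str) -> bool:
    """Return True if a request path matches a known exfiltration pattern."""
    return any(p.search(request_path) for p in SUSPICIOUS)
```

In practice you would wire this into request logging or a WAF rule and alert on matches rather than silently dropping them, so you also learn who is probing you.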

5. Review Your AI Security Posture

These vulnerabilities are a wake-up call. Take this opportunity to review your overall AI security posture: where AI frameworks run, what data they can reach, which secrets they hold, and how quickly you can patch them.

FAQ: LangChain and LangGraph Security

How do I know if I'm affected by these vulnerabilities?

Check your langchain-core and langgraph-checkpoint-sqlite versions with pip show. You're affected if langchain-core is below 1.2.22 on the 1.x line (or below 0.3.81 on the 0.3.x line), or if langgraph-checkpoint-sqlite is below 3.0.1.

If you're unsure, upgrade to the latest versions. The patches are backward compatible.
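One way to script this check uses Python's standard importlib.metadata. The minimum versions below are taken from the advisories discussed above; note the naive version parser only handles plain release versions like 1.2.22, and 0.3.x installs need the separate 0.3.81 fix:

```python
from importlib.metadata import PackageNotFoundError, version

# Patched minimums from the advisories (1.x line for langchain-core).
PATCHED = {"langchain-core": (1, 2, 22), "langgraph-checkpoint-sqlite": (3, 0, 1)}

def parse(v: str) -> tuple:
    """Naive numeric parse; sufficient for release versions like '1.2.22'."""
    return tuple(int(part) for part in v.split(".")[:3])

def check_installed() -> dict:
    """Report whether each monitored package is missing, vulnerable, or patched."""
    report = {}
    for name, minimum in PATCHED.items():
        try:
            installed = parse(version(name))
        except PackageNotFoundError:
            report[name] = "not installed"
            continue
        report[name] = "patched" if installed >= minimum else "vulnerable"
    return report
```

Running `check_installed()` in your deployment environment gives a quick per-package verdict; anything reported "vulnerable" should be upgraded immediately.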

Can these vulnerabilities be exploited remotely?

It depends on your deployment. If your LangChain/LangGraph application is exposed to the internet (e.g., via a web API), then yes - these can be exploited remotely without authentication. If your application is internal-only, exploitation requires some level of network access.

What's the difference between LangChain and LangGraph?

LangChain is the core framework for building LLM applications. LangGraph is built on top of LangChain and provides more sophisticated support for agentic workflows - loops, branching, and persistent state. If you use LangGraph, you also use LangChain.

How did these vulnerabilities make it into production code?

These are classic software security vulnerabilities that have affected software for decades. The difference is that LangChain's rapid adoption (0 to 84M+ weekly downloads) meant these vulnerabilities affected far more systems than typical library flaws. The AI gold rush has created pressure to ship features quickly, and security sometimes falls behind.

Should I stop using LangChain?

Not necessarily. LangChain remains a powerful and useful framework. The key is to use it securely: keep it updated, implement defense in depth, and treat AI frameworks with the same security scrutiny as any other critical infrastructure component.

What about other AI frameworks? Are they vulnerable too?

These vulnerabilities are specific to LangChain/LangGraph, but the pattern applies broadly. Any AI framework that processes user input, handles files, or manages secrets could have similar vulnerabilities. The lesson is to treat all AI infrastructure with appropriate security rigor.

How can I detect if I've been exploited?

Look for: unexpected reads of system files (such as /etc/passwd) or .env files, anomalous API key usage or unexplained provider billing, unusual queries against checkpoint databases, and outbound connections from AI application servers to unfamiliar hosts.

What's the relationship between these vulnerabilities and the Langflow CVE-2026-33017?

They're separate vulnerabilities in different frameworks, but they share a common theme: AI infrastructure tools with critical security flaws being rapidly exploited. The Langflow vulnerability was exploited within 20 hours of disclosure, demonstrating the urgency of patching AI framework vulnerabilities.

Are there any compensating controls I can implement while patching?

Yes. Until you can patch, restrict network access to LangChain/LangGraph services, add your own input validation in front of framework APIs, move secrets out of environment variables into a dedicated secret manager, and monitor file access and database queries for anomalies.

How do I stay informed about AI security vulnerabilities?

Watch the GitHub security advisories for the frameworks you depend on, subscribe to vulnerability feeds such as the NVD, and enable automated dependency alerts (for example, Dependabot) so patched releases reach you as soon as they ship.

The Path Forward: Securing AI Infrastructure

The LangChain vulnerabilities are not an indictment of the framework or its maintainers. They're a symptom of a larger challenge: AI infrastructure is maturing faster than our security practices can keep up.

What Framework Maintainers Can Do

Security-First Development: Implement secure coding practices, regular security audits, and bug bounty programs.

Rapid Disclosure: The LangChain team patched these vulnerabilities before public disclosure - this is good practice. Continue this approach while ensuring users are notified promptly.

Dependency Transparency: Make it clearer what dependencies are pulled in and what their security implications are.

What Organizations Can Do

Treat AI as Critical Infrastructure: AI frameworks should receive the same security attention as databases, payment systems, and other critical components.

Implement AI Security Programs: Establish dedicated AI security teams or functions with expertise in both AI and traditional security.

Vendor Security Assessment: Before adopting AI tools and frameworks, conduct thorough security assessments.

Incident Response Planning: Include AI-specific scenarios in your incident response plans. Know how to respond when an AI framework is compromised.

What the Industry Can Do

AI Security Standards: Develop industry standards for AI framework security, similar to OWASP for web applications.

Information Sharing: Create mechanisms for sharing threat intelligence specific to AI infrastructure attacks.

Security Tooling: Build tools specifically designed for AI security - dependency scanners, runtime monitors, and vulnerability databases focused on AI frameworks.

Conclusion: The AI Security Awakening

The March 2026 wave of AI infrastructure vulnerabilities marks a turning point. AI frameworks are no longer experimental tools - they're critical infrastructure powering enterprise applications. And attackers have noticed.

The LangChain vulnerabilities demonstrate that AI security isn't just about prompt injection and model alignment. It's about the same fundamentals that have always mattered: input validation, secure deserialization, parameterized queries, and least privilege access.

Organizations that thrive in the AI era will be those that embrace this reality. They'll implement defense in depth, maintain rapid patching capabilities, and treat AI infrastructure with the security rigor it demands.

The message is clear: Update your LangChain and LangGraph installations today. Audit your AI dependencies. Rotate your secrets. And recognize that AI security is now a core competency, not an afterthought.

Your AI is only as secure as the infrastructure it runs on. Secure the foundation.


Stay ahead of AI security threats. Subscribe to the Hexon.bot newsletter for weekly insights on securing your AI infrastructure.