[Illustration: dashboard-style view of a massive internet scan revealing exposed AI services, chatbots, Ollama APIs, and agent platforms]

1 Million Exposed AI Services: The Security Crisis Hiding in Plain Sight

A cybersecurity team just scanned 2 million internet hosts and found 1 million exposed AI services. What they found should terrify every CISO. Unauthenticated chatbots leaking enterprise conversations. Wide-open agent management platforms exposing entire business logic. Ollama APIs answering "Hello" with "Greetings, Master. Your command is my law." And authentication disabled by default in the most popular AI infrastructure tools.

This is not a theoretical vulnerability. This is the current state of AI infrastructure security, published on May 5, 2026, by the Intruder security research team. Their findings reveal that the furious pace of AI adoption is putting decades of security progress at risk - and most organizations deploying self-hosted AI have no idea how exposed they really are.

The Scale of the Exposure: 2 Million Hosts, 1 Million Services

The Scan Methodology

The Intruder team used certificate transparency logs to identify over 2 million hosts running 1 million exposed AI services. This is not a targeted audit of known vulnerable systems. This is a broad internet scan revealing the default security posture of AI infrastructure deployed by real organizations across government, finance, marketing, and technology sectors.

Key Stat: 1 million exposed AI services were found across 2 million hosts. The AI infrastructure scanned was more vulnerable, exposed, and misconfigured than any other software category Intruder has ever investigated.

Pro Tip: Certificate transparency logs are public records of SSL/TLS certificates issued for domains. They provide a comprehensive map of internet-facing services - including the ones your organization may have forgotten about or assumed were internal-only.
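
To make this concrete, here is a minimal sketch of a transparency-log lookup using crt.sh's public JSON endpoint. The endpoint is real but rate-limited and occasionally slow, so production tooling should add retries and caching; the domain below is a placeholder.

```python
# Query crt.sh's public JSON endpoint for certificates issued under a domain.
import requests

def subdomains_from_ct_logs(domain: str) -> set[str]:
    """Return hostnames seen in certificate transparency logs for a domain."""
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%.{domain}", "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    hosts = set()
    for entry in resp.json():
        # name_value may contain several newline-separated hostnames
        for name in entry["name_value"].splitlines():
            hosts.add(name.strip().lower())
    return hosts

if __name__ == "__main__":
    for host in sorted(subdomains_from_ct_logs("example.com")):
        print(host)
```

Comparing this output against your internal asset inventory is often the fastest way to surface forgotten or shadow deployments.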

The Authentication Crisis

The most alarming pattern was not a sophisticated vulnerability. It was the complete absence of authentication. A significant number of hosts had been deployed straight out of the box with no authentication in place - not because administrators forgot to enable it, but because authentication simply is not enabled by default in many popular AI infrastructure projects.

Real user data and company tooling were sitting exposed to anyone who knew where to look. In the wrong hands, the consequences range from reputational damage to full system compromise.

What Was Exposed: Four Critical Categories

1. Freely Accessible Chatbots Leaking Enterprise Conversations

A number of exposed instances involved chatbots that left user conversations completely accessible. One example based on OpenUI exposed a user's full LLM conversation history. While this might seem relatively innocent on the surface, chat histories in enterprise environments can reveal:

  1. Confidential business information and strategic plans
  2. Proprietary technical details
  3. Customer data and personally identifiable information
  4. Internal security procedures

More concerning were generic chatbots hosting a wide range of models - including multimodal LLMs - freely available for anyone to use. Malicious users can jailbreak most models to bypass safety guardrails for nefarious purposes, such as generating illegal imagery or soliciting advice with intent to commit crime. They do so without fear of repercussion because they are using someone else's infrastructure.

This is not hypothetical. People are already finding creative ways to abuse company chatbots to access more capable models without paying or having requests logged to their own accounts. The Lasso Security blog documented one case where an Amazon employee used a company chatbot to access capabilities they were not authorized to use personally.

There were also chatbots exposing large volumes of personal NSFW conversations. Even worse, the software running some Claude-powered bots disclosed their API keys in plaintext - meaning anyone who found the exposed service could also extract the credentials to access the underlying AI provider's API at the victim's expense.
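
If you operate a chatbot frontend, one quick self-check is to scan its unauthenticated responses for key-shaped strings. This is a minimal sketch: the prefixes match commonly published key formats (sk- for OpenAI, sk-ant- for Anthropic), and the URL is a hypothetical placeholder for a page or config endpoint your frontend serves.

```python
# Scan a response body for strings shaped like provider API keys.
import re
import requests

# sk-ant- must come first in the alternation so full Anthropic keys match.
KEY_PATTERN = re.compile(r"\b(sk-ant-[A-Za-z0-9_-]{20,}|sk-[A-Za-z0-9_-]{20,})\b")

def find_leaked_keys(url: str) -> list[str]:
    body = requests.get(url, timeout=10).text
    return KEY_PATTERN.findall(body)

for match in find_leaked_keys("https://chatbot.example.com/config"):
    # Print only a prefix so the scan itself never re-leaks the key.
    print(f"Possible leaked API key: {match[:12]}...")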

2. Wide-Open Agent Management Platforms

The scan also discovered exposed instances of agent management platforms including n8n and Flowise. Some instances that users clearly thought were internal had been exposed to the internet without any authentication.

One of the most egregious examples was a Flowise instance that exposed the entire business logic of an LLM chatbot service. The credential list was exposed too. While Flowise was hardened enough not to reveal stored credential values to unauthenticated visitors, an attacker could still use the tools connected to those credentials to exfiltrate sensitive information.

This is what makes these platforms particularly dangerous. There is a distinct absence of proper access management controls in AI tooling, meaning access to a bot integrated with a third-party system often means access to everything that bot touches.

In another example, the setup exposed internet parsing tools and potentially dangerous local functions such as file writes and code interpreters, making server-side code execution a realistic prospect.

Key Stat: Over 90 exposed instances were identified across sectors including government, marketing, and finance. All of those chatbots, their workflows, prompts, and outward access were completely open. An attacker could modify workflows, redirect traffic, expose user data, or poison responses.

3. Unsecured Ollama APIs: 31% Answer "Hello" Without Authentication

One of the more surprising findings was the sheer number of Ollama APIs exposed to the internet without authentication and with a model already connected. The researchers fired a single prompt - "Hello" - at every server that listed a connected model to see whether authentication would be required.

Of the 5,200+ servers queried, 31% answered.
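
You can run the same check against infrastructure you own. This sketch uses Ollama's documented /api/tags and /api/generate endpoints on the default port 11434; if it returns True from outside your network, the API is open to anyone.

```python
# Check whether one of YOUR OWN Ollama hosts answers unauthenticated prompts,
# mirroring the researchers' methodology.
import requests

def ollama_answers_hello(host: str, port: int = 11434) -> bool:
    base = f"http://{host}:{port}"
    # /api/tags lists connected models; an open response is itself a red flag.
    tags = requests.get(f"{base}/api/tags", timeout=10).json()
    models = [m["name"] for m in tags.get("models", [])]
    if not models:
        return False
    # Fire a single "Hello" prompt at the first connected model.
    resp = requests.post(
        f"{base}/api/generate",
        json={"model": models[0], "prompt": "Hello", "stream": False},
        timeout=60,
    )
    return resp.ok and bool(resp.json().get("response"))

print(ollama_answers_hello("ai.internal.example.com"))  # hypothetical host
```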

The responses gave a window into what these APIs were being used for. While the researchers could not ethically explore further, the implications are far-reaching.

Ollama does not store messages directly, so there is no immediate risk of conversation data being exposed through the API itself. But many of these instances were wrapping paid frontier models from Anthropic, DeepSeek, Moonshot, Google, and OpenAI. Of all the models identified across all servers, 518 were wrapping well-known frontier models - meaning attackers could abuse these exposed APIs to consume expensive AI credits at the victim's expense.

Common Mistake: Assuming that because Ollama is a local model runner, exposing it to the internet is safe. Any API that accepts prompts and returns responses can be abused for unauthorized access, credit consumption, or as a pivot point for further attacks.

4. Insecure by Design: The Root Cause

After triaging the results, the researchers spent time analyzing the source code of popular AI infrastructure tools. What they found was disturbing: authentication simply is not enabled by default in many of these projects. The tools are designed to work immediately upon installation, with security treated as an optional configuration rather than a default requirement.

This is a fundamental design philosophy problem. Traditional enterprise software has learned - often through painful breaches - that security must be opt-out, not opt-in. AI infrastructure tools, many of which originated as developer utilities or open-source projects, have not absorbed this lesson. They prioritize ease of setup over safe defaults, and organizations deploying them at scale are paying the price.

Why This Matters: The Attack Surface Multiplier

The Speed of AI Adoption

Businesses are moving fast to self-host LLM infrastructure, drawn by the promise of AI as a force multiplier and the pressure to deliver more value faster. But speed is coming at the expense of security. The scan results demonstrate that organizations are deploying AI infrastructure with the same maturity level as early web applications - before security-by-default became an industry standard.

Key Stat: The ClawdBot fiasco - a viral self-hosted AI assistant - is averaging 2.6 CVEs per day. When a single tool generates more vulnerabilities per day than many entire software categories, it signals a systemic security maturity problem.

The Unique Risk of AI Infrastructure

AI infrastructure creates risks that traditional software does not:

  1. Data Exfiltration Through Conversation - Chat histories contain some of the most sensitive information in an organization, and users treat AI conversations as private by default
  2. Credential Exposure - AI tools often integrate with multiple third-party services, meaning one exposed platform can compromise an entire toolchain
  3. Model Abuse - Exposed APIs can be used to generate harmful content, consume expensive credits, or conduct attacks that trace back to the victim's infrastructure
  4. Prompt Poisoning - Attackers with access to agent management platforms can modify system prompts to change behavior, exfiltrate data, or manipulate outputs
  5. Supply Chain Amplification - Compromised AI infrastructure can poison outputs that feed into downstream systems, creating cascading failures

The Visibility Gap

Many organizations do not know they have exposed AI services. The tools were deployed by individual teams, developers, or business units without central IT oversight. Certificate transparency logs reveal what internal asset inventories miss: the shadow AI infrastructure that exists outside formal governance.

Key Takeaway: If your organization has deployed any self-hosted AI tools - chatbots, model servers, agent platforms, or API wrappers - there is a meaningful probability that some of them are exposed to the internet without authentication. The only way to know is to scan for them proactively.

Who Is Affected: The Full Scope

Critical Risk: Organizations with Exposed AI Infrastructure

Any organization that has deployed self-hosted AI tools without explicit security hardening is potentially affected. This includes:

  1. Self-hosted chatbot frontends and the conversation histories they store
  2. Agent management platforms such as n8n and Flowise
  3. Model servers such as Ollama, especially instances wrapping paid frontier models
  4. Internal tools and API wrappers that hold provider API keys

High Risk: Multi-Cloud and Hybrid Environments

Organizations running AI infrastructure across multiple cloud providers or hybrid environments face elevated risk because:

  1. Asset inventories fragment across providers, making shadow deployments harder to spot
  2. Security defaults and network controls differ from one environment to the next
  3. A service assumed to be internal can be exposed by a single misconfigured route or load balancer

Medium Risk: Organizations Using Managed AI Services

Even organizations that rely primarily on managed AI services from OpenAI, Anthropic, or Google are not entirely immune. Exposed internal tools that integrate with these services can leak API keys, allowing attackers to access managed services at the victim's expense and with the victim's reputation on the line.

Immediate Defenses: How to Secure Your AI Infrastructure

Priority 1: Discover What You Have

You cannot secure what you cannot see. Start with a comprehensive inventory (a discovery sketch follows this list):

  1. Search certificate transparency logs for your domains using crt.sh or an attack surface management platform
  2. Interview development teams and business units about AI tools they have deployed
  3. Review cloud configurations and firewall rules for AI workloads
  4. Scan your network perimeter for AI-specific ports and endpoints
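
The sketch below probes a list of your own perimeter IPs for common self-hosted AI services. The ports are typical defaults and therefore assumptions - Ollama on 11434, Flowise on 3000, n8n on 5678 - so adjust them for your environment.

```python
# Probe your own hosts for services listening on common AI tool defaults.
import requests

PROBES = [
    ("Ollama", 11434, "/api/tags"),
    ("Flowise", 3000, "/"),
    ("n8n", 5678, "/"),
]

def probe_host(host: str) -> list[str]:
    findings = []
    for name, port, path in PROBES:
        url = f"http://{host}:{port}{path}"
        try:
            resp = requests.get(url, timeout=5)
        except requests.RequestException:
            continue  # closed or filtered: nothing listening
        if resp.status_code < 500:
            findings.append(f"{name} responding at {url} (HTTP {resp.status_code})")
    return findings

for host in ["203.0.113.10", "203.0.113.11"]:  # replace with your perimeter IPs
    for finding in probe_host(host):
        print(finding)
```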

Priority 2: Enable Authentication Everywhere

Authentication must be mandatory, not optional:

  1. Enable built-in authentication on every AI tool that supports it
  2. Front tools that lack native authentication with an authenticating reverse proxy or SSO gateway (a minimal sketch follows this list)
  3. Require API keys or tokens for every model-serving endpoint
  4. Rotate any credentials that may have been exposed while a service was open
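
This is a deliberately minimal sketch of an authenticating reverse proxy, assuming an Ollama upstream on localhost and a shared bearer token distributed out of band. It is not production code - a real deployment would use a hardened gateway such as oauth2-proxy or your SSO provider - but it shows how little it takes to stop serving the open internet.

```python
# Minimal token-checking reverse proxy in front of a tool that ships
# without authentication.
import os
import requests
from flask import Flask, Response, abort, request

app = Flask(__name__)
UPSTREAM = "http://127.0.0.1:11434"   # assumed Ollama upstream
TOKEN = os.environ["PROXY_TOKEN"]     # never hardcode secrets

@app.route("/", defaults={"path": ""}, methods=["GET", "POST"])
@app.route("/<path:path>", methods=["GET", "POST"])
def proxy(path: str) -> Response:
    # Reject anything without the expected bearer token.
    if request.headers.get("Authorization") != f"Bearer {TOKEN}":
        abort(401)
    upstream = requests.request(
        request.method,
        f"{UPSTREAM}/{path}",
        params=request.args,
        data=request.get_data(),
        headers={"Content-Type": request.headers.get("Content-Type", "")},
        timeout=120,
    )
    return Response(upstream.content, status=upstream.status_code)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8443)
```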

Priority 3: Network Segmentation

Assume breach and limit the blast radius:

  1. Keep AI tools off the public internet - behind a VPN or on internal networks
  2. Restrict which systems agent platforms can reach, since access to a bot often means access to everything it touches
  3. Separate experimental deployments from production systems and data

Priority 4: Configuration Hardening

Secure the tools themselves:

  1. Disable dangerous local functions such as file writes and code interpreters unless they are genuinely needed
  2. Keep every tool updated with security patches
  3. Review the credentials and third-party tools connected to agent platforms
  4. Never store provider API keys where an unauthenticated visitor could read them

Priority 5: Monitoring and Detection

Detect exposure and abuse:

  1. Watch certificate transparency logs for unexpected certificates under your domains
  2. Review access logs on AI services for unauthorized use
  3. Alert on anomalous token or credit consumption against provider APIs
  4. Include AI endpoints in regular attack surface scans (an exposure canary sketch follows this list)
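
A simple recurring check, run from outside your network, can confirm that your AI endpoints refuse unauthenticated requests. The hostnames and paths below are illustrative placeholders - substitute your own inventory.

```python
# Exposure canary: alert when an AI endpoint answers without authentication.
import requests

ENDPOINTS = {
    "flowise": "https://flowise.example.com/api/v1/chatflows",
    "ollama": "http://ollama.example.com:11434/api/tags",
    "n8n": "https://n8n.example.com/rest/workflows",
}

for name, url in ENDPOINTS.items():
    try:
        status = requests.get(url, timeout=10).status_code
    except requests.RequestException:
        continue  # unreachable from outside is the desired state
    if status == 200:
        print(f"ALERT: {name} answered an unauthenticated request: {url}")
    elif status in (401, 403):
        print(f"OK: {name} requires authentication")
```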

The Bigger Picture: AI Security Maturity Gap

The Speed vs. Security Paradox

The AI revolution is moving faster than security can keep up. Organizations are deploying AI infrastructure with the urgency of a competitive imperative, but without the security maturity that took decades to develop for traditional software. The result is a massive exposure surface that attackers are already exploiting.

Key Stat: 31% of exposed Ollama APIs answered an unauthenticated "Hello" prompt. This is not a subtle configuration issue. This is the digital equivalent of leaving your front door wide open with a sign saying "Come on in."

The Default Security Problem

The root cause is not administrator negligence. It is a design philosophy that prioritizes immediate functionality over safe defaults. When authentication is not enabled by default, the burden of security falls on every individual administrator - and human nature being what it is, many will not realize they need to act until after a breach.

The software industry learned this lesson with databases, web servers, and cloud storage. AI infrastructure is repeating the same mistakes at a much larger scale and with much higher stakes.

What Vendors Must Do

AI infrastructure vendors need to internalize the security-by-default principle:

  1. Ship with authentication enabled by default, not as an optional setting
  2. Require an explicit, loudly flagged opt-out for unauthenticated operation
  3. Refuse to bind to public interfaces until credentials are configured
  4. Document the secure deployment path as the primary path, not an appendix

What CISOs Should Do This Week

Immediate Actions (Next 24-48 Hours)

  1. Search certificate transparency logs for your domains and flag any unrecognized AI-related services
  2. Probe your perimeter for known AI endpoints, such as Ollama's default port 11434
  3. Take anything exposed offline or place it behind authentication immediately
  4. Rotate any credentials an exposed service could have leaked

Short-Term Actions (Next 2 Weeks)

  1. Build a complete inventory of self-hosted AI tooling, including shadow deployments by individual teams
  2. Enforce authentication and network segmentation on everything you find
  3. Review agent platform workflows, prompts, and connected credentials for exposure
  4. Brief development teams on the risks of insecure-by-default AI tools

Strategic Actions (Next Quarter)

  1. Bring AI deployments under formal governance and change control
  2. Add AI endpoints to continuous attack surface monitoring
  3. Make secure defaults a procurement requirement for AI tooling
  4. Treat AI infrastructure with the same security rigor as databases, networks, and endpoints

FAQ: Exposed AI Infrastructure Security

How do I know if my organization has exposed AI services?

Start with certificate transparency log searches for your domain. Tools like crt.sh or commercial attack surface management platforms can identify unexpected exposed services. Also interview development teams, review cloud configurations, and scan your network perimeter for AI-specific ports and endpoints.

Is Ollama safe to expose to the internet?

No. Ollama does not enable authentication by default, and exposing it to the internet allows anyone to send prompts to your models, consume your resources, and potentially use your infrastructure for attacks that trace back to you. Treat Ollama like any other API - place it behind authentication and network controls.

What is the risk of exposed chatbot conversations?

Exposed chatbot conversations can reveal confidential business information, proprietary technical details, customer data, security procedures, and strategic plans. Users treat AI conversations as private and will share information they would not put in an email or document. When those conversations are exposed, the damage can be severe.

How do I secure Flowise or n8n instances?

Enable authentication, place them behind a VPN or internal network, restrict access to authorized users, keep them updated with security patches, monitor access logs, and avoid connecting them to production systems without security review. Treat agent management platforms as critical infrastructure because they often have broad access to other systems.

What should I do if I find an exposed AI service?

Immediately restrict network access to authorized users, enable authentication, rotate any exposed credentials, review access logs for unauthorized use, assess what data or systems may have been compromised, and document the incident for security review. If the exposure included sensitive data, consider breach notification requirements.

Are managed AI services safer than self-hosted?

Managed services from reputable providers generally have better security defaults and operational security than self-hosted alternatives. However, they are not immune to risk - especially if your internal tools leak API keys or credentials that provide access to managed services. The safest approach is managed services plus strong credential management and access controls.

Why don't AI tools enable authentication by default?

Many AI infrastructure tools originated as developer utilities or open-source projects where ease of use was prioritized over security. The tools were designed for local development or trusted environments, then deployed to production without the security hardening that enterprise software requires. This is a maturity problem that the industry needs to address.

Conclusion: The AI Infrastructure Security Reckoning Is Here

The Intruder scan of 1 million exposed AI services reveals a security crisis hiding in plain sight. Organizations are deploying AI infrastructure at scale without the basic security controls that have been standard in other software categories for years. Unauthenticated APIs, exposed conversations, wide-open agent platforms, and default configurations that prioritize convenience over safety are the norm, not the exception.

This is not a problem that can be solved with a single patch or configuration change. It requires a fundamental shift in how organizations approach AI infrastructure security - from treating it as an afterthought to treating it as a critical security domain deserving the same rigor as databases, networks, and endpoints.

The good news is that the solutions are well understood. Authentication, network segmentation, monitoring, and secure defaults are not new concepts. They just need to be applied to AI infrastructure with the same discipline that has made other enterprise software categories more secure over time.

The 1 million exposed AI services are a wake-up call. The only question is whether your organization will act before becoming a statistic.


Stay ahead of AI infrastructure threats. Subscribe to the Hexon.bot newsletter for weekly cybersecurity insights.