97,000 AI Agents Exposed
Originally published on opena2a.org
TL;DR: We analyzed Shodan index data for 97,013 internet-facing hosts with AI agent infrastructure signatures. 14.4% showed indicators of security misconfiguration. 1,190 had agent configuration files (CLAUDE.md) indexed by Shodan. 645 had MCP tool definitions visible in cached responses. 32 had API key patterns detectable in HTTP response headers.
How We Collected This Data
We used 207 Shodan queries across 10 categories to identify internet-facing hosts with AI agent infrastructure signatures. Shodan is a publicly available search engine that indexes internet-connected devices and their response banners.
Shodan's cached banner data and HTTP response headers were analyzed for security-relevant patterns: exposed configuration files, agent instruction files, API key patterns in headers, MCP tool definitions, gateway endpoints, and debug mode indicators.
All findings are based on analysis of publicly indexed data. Our full methodology is documented on our methodology page.
What We Found
Across 11,100 scanned hosts, we confirmed 8,449 individual security findings.
| Finding | Count | Severity |
|---|---|---|
| Outdated API Endpoints | 5,042 | Medium |
| CLAUDE.md Exposed | 1,190 | High |
| Outdated Versions | 829 | Medium |
| MCP Tools Exposed | 645 | Critical |
| Gateway Exposed | 289 | Critical |
| Debug Mode Enabled | 272 | Medium |
| Unauthenticated MCP | 58 | Critical |
| Config Files Exposed | 54 | Critical |
| API Keys in Responses | 32 | Critical |
| WebSocket Control Exposed | 22 | Critical |
| MCP SSE Exposed | 14 | Critical |
1,190 Agent Configuration Files Indexed
CLAUDE.md files contain system instructions for AI agents, including behavioral rules, tool access policies, and configuration details. Shodan's index identified 1,190 hosts where these files appeared in HTTP response data or directory listings.
Publicly accessible agent configuration files pose several risks:
- Tool access surface — reveals what capabilities the agent has
- Decision logic — may expose authorization rules and guardrail implementations
- Infrastructure details — may reference internal service names and endpoints
- Credential exposure — configuration files sometimes contain hardcoded secrets
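A quick self-audit for this exposure can be sketched as follows. The path list and the "HTTP 200 means exposed" heuristic are illustrative assumptions; feed the function the results of probing your own host:

```javascript
// Sketch: classify probe results for agent config paths on your own host.
// The path list and the status heuristic are illustrative assumptions,
// not a complete audit.
const SENSITIVE_PATHS = ["/CLAUDE.md", "/.claude/", "/mcp.json", "/.env"];

// results: [{ path, status }] gathered by HEAD/GET requests you ran yourself
function findExposedConfigs(results) {
  return results
    .filter((r) => SENSITIVE_PATHS.includes(r.path) && r.status === 200)
    .map((r) => r.path);
}
```

A 403 or 404 on every path suggests your deny rules are working; any 200 means the file is world-readable.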
645 MCP Tool Definitions Exposed
The Model Context Protocol (MCP) is how AI agents connect to external tools. MCP servers expose a /tools endpoint that lists every available tool with its parameters.
Shodan's cached data showed 645 hosts with MCP tool definitions visible in HTTP response headers or body content. 58 of those showed no authentication indicators in their response headers, suggesting default configurations without access controls.
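The "no authentication indicators" check can be sketched as plain HTTP logic. This mirrors how one might read cached response data; the heuristic itself is an assumption, not proof of exposure:

```javascript
// Hedged heuristic sketch: an endpoint that answers 200 to an
// unauthenticated request, with no WWW-Authenticate challenge, is likely
// running without access controls. Status codes and header names are
// standard HTTP; the classification rule is an assumption.
function looksUnauthenticated(status, headers) {
  if (status === 401 || status === 403) return false; // access control present
  const hasChallenge = Object.keys(headers).some(
    (h) => h.toLowerCase() === "www-authenticate"
  );
  return status === 200 && !hasChallenge;
}
```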
289 Agent Gateways Indexed
AI agent frameworks like OpenClaw use gateway servers (typically on port 18789) to manage agent sessions, tool execution, and channel integrations. Shodan indexed 289 hosts with gateway signatures on this port. 22 of those also had WebSocket control plane signatures on port 18790.
Review of OpenClaw's open-source gateway code reveals that the default configuration does not require authentication. The config.getAPI method, by design, returns the full configuration object, which may include integration tokens. This is a documented default behavior in the project's source code.
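One mitigation for this class of behavior is to redact likely-secret fields before a config object ever leaves the server. This is a minimal sketch, not OpenClaw's code; the key-name pattern is an illustrative assumption and the real config schema may differ:

```javascript
// Sketch: strip likely-secret keys from a config object before returning
// it over an admin API. The key-name pattern is a heuristic assumption.
const SECRET_KEY_PATTERN = /token|secret|key|password|credential/i;

function redactSecrets(config) {
  const out = {};
  for (const [k, v] of Object.entries(config)) {
    if (SECRET_KEY_PATTERN.test(k)) {
      out[k] = "[REDACTED]";
    } else if (v && typeof v === "object" && !Array.isArray(v)) {
      out[k] = redactSecrets(v); // recurse into nested config sections
    } else {
      out[k] = v;
    }
  }
  return out;
}
```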
What We're Doing About It
Reporting vulnerabilities without contributing fixes is incomplete work. We are doing both.
Contributing Upstream: OpenClaw Skill Code Safety Scanner
We submitted PR #9806, a skill/plugin code safety scanner that detects dangerous patterns before they execute:
- `dangerous-exec`: `child_process` exec/spawn command injection
- `dynamic-code-execution`: `eval()` and `new Function()`
- `potential-exfiltration`: file read combined with outbound HTTP
- `env-harvesting`: `process.env` access combined with a network send
- `obfuscated-code`: hex-encoded strings, large base64 payloads
- `crypto-mining`: stratum protocol indicators
- `suspicious-network`: WebSocket connections to non-standard ports
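The detection approach can be approximated with regex heuristics. This sketch is not the PR's actual implementation; the patterns are illustrative assumptions covering four of the seven categories:

```javascript
// Sketch of regex-based pattern detection. These rules are illustrative
// assumptions, not the rules shipped in the upstream PR.
const RULES = {
  "dangerous-exec": /child_process|\bexec\s*\(|\bspawn\s*\(/,
  "dynamic-code-execution": /\beval\s*\(|new\s+Function\s*\(/,
  "obfuscated-code": /\\x[0-9a-f]{2}(?:\\x[0-9a-f]{2}){7,}/i,
  "crypto-mining": /stratum\+tcp:\/\//,
};

// Return the names of every rule whose pattern matches the source text.
function scanSource(code) {
  return Object.entries(RULES)
    .filter(([, re]) => re.test(code))
    .map(([name]) => name);
}
```

Static heuristics like these produce false positives by design; the point is to surface code for human review before it runs, not to block it automatically.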
HackMyAgent: Scan Your Own Infrastructure
Use HackMyAgent to audit your own AI agent infrastructure for the patterns described in this report.
Recommendations
If you are running AI agents in production:
- Audit your network exposure. Run `hackmyagent scan your-domain.com` to check what's reachable from the internet.
- Protect CLAUDE.md and config files. Configure your web server to deny access to `/.claude/`, `/CLAUDE.md`, `/mcp.json`, and `/.env`.
- Authenticate MCP endpoints. Every MCP server should require authentication. An exposed `/tools` endpoint is an invitation to invoke your agent's capabilities.
- Scan plugins before installing. Use static analysis to detect dangerous patterns in plugin code before execution.
- Don't use dangerous config flags in production. Flags like `dangerouslyDisableDeviceAuth` exist for local development only.
- Rotate exposed credentials immediately. If your config files were publicly accessible, assume any credentials in them are compromised.
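For the web-server recommendation above, a minimal sketch for nginx (an assumption; adapt the pattern to your own server and paths) could look like:

```nginx
# Deny public access to agent configuration paths.
# Return 404 rather than 403 so the paths' existence is not
# confirmed to scanners.
location ~ ^/(\.claude(/.*)?|CLAUDE\.md|mcp\.json|\.env)$ {
    return 404;
}
```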
Legal Notice: This research is based on analysis of data from the Shodan search engine, a publicly available internet index, combined with review of open-source project documentation and default configurations. No systems were accessed, tested, or exploited. No authentication mechanisms were bypassed. No private data was retrieved or stored. All statistics represent aggregate analysis of publicly indexed information.
Responsible Disclosure: If you believe your infrastructure is affected by the patterns described here, we encourage you to audit your own systems. Contact info@opena2a.org for coordinated disclosure inquiries.
About OpenA2A: OpenA2A builds open-source security tools for AI agents. Our projects include HackMyAgent (security scanner), AIM (agent identity management), and the OpenA2A platform (AI agent security).