Langflow RCE: Attackers Harvesting AI Keys Within 20 Hours of Disclosure
CVE-2026-33017 dropped on March 17. By March 18, attackers were already inside exposed Langflow instances, dumping environment variables and walking off with OpenAI, Anthropic, and AWS credentials.
No public proof-of-concept. No pre-made tooling. Attackers built working exploits from the advisory text alone and had shells in under a day.
What Happened
Langflow — the popular open-source AI workflow builder — patched a critical unauthenticated RCE last year (CVE-2025-3248, CVSS 9.8, since added to CISA's KEV catalog). The fix was simple: add an authentication check to the vulnerable endpoint. Done.
Except the underlying problem wasn't the endpoint. It was the architecture.
Security researcher Aviral Srivastava started with the already-patched code and looked for the same pattern elsewhere in the codebase. He found it in 20 minutes. Same `exec()` call. Same unsandboxed Python execution. Different endpoint — one that's unauthenticated *by design*.
The vulnerable endpoint is `POST /api/v1/build_public_tmp/{flow_id}/flow` in `langflow/api/v1/chat.py`. It exists to let anonymous users interact with public flows — the backbone of any Langflow-powered chatbot. No auth required. That's the feature.
The problem: the endpoint accepts an optional `data` parameter in the request body. If you provide it, the server uses *your* flow definition instead of the stored one. That definition can contain arbitrary Python code. The server compiles and executes it — through 10 function calls, bottoming out in a bare `exec(compiled_code, exec_globals)` with no sandboxing, no AST filtering, no module restrictions.
An assignment statement like `_x = os.system("id")` executes during graph building, before the flow even runs. One HTTP POST.
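The dangerous pattern can be sketched in a few lines. This is a simplified model of the class of bug, not Langflow's actual source: user-controlled source text gets compiled and exec'd, so any top-level statement fires the moment the "flow" is built.

```python
# Minimal sketch of the unsafe pattern described above -- NOT Langflow's
# actual code. User-controlled source text is compiled and exec'd with no
# sandboxing, so side effects fire as soon as the graph is built.
captured = []

def build_flow(user_code: str) -> dict:
    """Hypothetical stand-in for a graph builder that trusts its input."""
    exec_globals = {"captured": captured}  # whatever the server exposes
    compiled = compile(user_code, "<flow>", "exec")
    exec(compiled, exec_globals)           # arbitrary code runs right here
    return exec_globals

# A bare assignment in the submitted definition executes during the build,
# before any flow logic runs -- mirroring the one-POST exploit above.
build_flow('_x = captured.append("code ran at build time")')
print(captured[0])
```

The point of the sketch: there is no "run" step for the attacker to trigger. Building the graph *is* the execution.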
CVE-2026-33017 — CVSS 9.3, affects all Langflow versions through 1.8.1. Fixed in 1.9.0 via a one-line change: the `data` parameter was removed from the public endpoint entirely.
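One way to model the fix (a simplified illustration, not the actual 1.9.0 diff — function and variable names here are invented): the public build route resolves the flow strictly from server-side storage and refuses any client-supplied definition.

```python
# Simplified model of the 1.9.0 fix. Names are illustrative, not
# Langflow's real internals.
STORED_FLOWS = {"abc123": {"nodes": ["trusted-node"]}}

def build_public_flow(flow_id: str, body: dict) -> dict:
    # Pre-fix behavior: body.get("data") would override the stored flow
    # with an attacker-controlled definition.
    # Post-fix behavior: only the server-side definition is ever used.
    if "data" in body:
        raise ValueError("client-supplied flow definitions are not accepted")
    return STORED_FLOWS[flow_id]

print(build_public_flow("abc123", {}))
```

Whether the parameter is rejected or silently ignored, the invariant is the same: anonymous callers can choose *which* stored flow runs, never *what code* it contains.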
20 Hours to First Shell
Sysdig deployed honeypot Langflow instances across cloud providers hours after the advisory went public. Here's what they logged:
Phase 1, hours 20–21: Automated nuclei scanning from four IPs hitting within minutes of each other. No public nuclei template existed — these were privately authored and deployed at scale. Payloads ran `id`, base64-encoded the output, and exfiltrated it via interactsh callbacks.
Phase 2, hours 21–24: A different attacker at `83.98.164.238` running custom Python. Methodical: directory listing, credential file access, system fingerprinting, then stage-2 dropper delivery via `curl`. Pre-staged infrastructure, ready to go before they confirmed the first hit.
Phase 3, hours 24–30: Data harvesting. `env` dumps to pull all environment variables — database connections, API keys, cloud credentials. Targeted `find /app -name "*.db" -o -name "*.env"`. Both the Phase 2 and Phase 3 IPs exfiltrated to the same C2 at `143.110.183.86:8080`.
Six unique source IPs total. Six different hosting providers across Germany, Singapore, the Netherlands, and France. One coordinated operation.
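If you're triaging an exposed instance now, attempts leave a trail in access logs: POSTs to the public build endpoint. A rough filter (the log format shown is an assumption about your deployment):

```python
import re

# Rough triage helper: flag access-log lines that POST to the public
# build endpoint. The sample log format is an assumption; adapt the
# pattern to whatever your reverse proxy actually emits.
SUSPECT = re.compile(r'POST\s+/api/v1/build_public_tmp/[^/\s]+/flow')

def suspicious_lines(log_lines):
    return [line for line in log_lines if SUSPECT.search(line)]

sample = [
    '83.98.164.238 - - "POST /api/v1/build_public_tmp/abc123/flow HTTP/1.1" 200',
    '10.0.0.5 - - "GET /api/v1/flows HTTP/1.1" 200',
]
hits = suspicious_lines(sample)
print(len(hits))  # count of requests worth a closer look
```

Caveat: this endpoint also serves legitimate anonymous chatbot traffic, so a path match is only a starting point. Confirming exploitation requires seeing a `data` key in the request body, which means application-level logging, not just access logs.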
What They're Taking
Langflow instances are wired into your AI infrastructure by design. A shell isn't just server access — it's access to everything connected.
Sysdig documented attackers specifically targeting:
- LLM API keys for OpenAI and Anthropic, often with unrestricted spend limits
- AWS credentials and cloud tokens for lateral movement into S3 and connected services
- Database connection strings for PostgreSQL, MySQL, and vector databases
- `.env` files containing internal service URLs and deployment secrets
If your Langflow instance was publicly reachable between March 17 and now and hasn't been patched, treat it as compromised. Don't just patch — rotate everything.
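A quick way to turn "rotate everything" into a worklist is to sweep the environment for credential-shaped variable names. A minimal sketch — the regex hints are illustrative and should be tuned to your own naming conventions:

```python
import re

# Rough helper to build a rotation worklist from an environment dump.
# The name patterns are illustrative assumptions, not a complete list.
KEY_HINTS = re.compile(r"(API_KEY|SECRET|TOKEN|PASSWORD|AWS_)", re.I)

def rotation_candidates(env: dict) -> list[str]:
    """Return env var names that look like secrets an attacker harvested."""
    return sorted(name for name in env if KEY_HINTS.search(name))

sample_env = {
    "OPENAI_API_KEY": "sk-...",
    "AWS_SECRET_ACCESS_KEY": "...",
    "DATABASE_PASSWORD": "...",
    "PATH": "/usr/bin",
}
print(rotation_candidates(sample_env))
```

Run it against the same `env` output the attackers dumped: anything it flags on a compromised host should be assumed stolen.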
This Is Bigger Than Langflow
CVE-2026-33017 is a data point in a pattern.
n8n disclosed CVE-2026-27577 (CVSS 9.4) and CVE-2026-27493 (CVSS 9.5) in the same month — expression sandbox escapes and unauthenticated injection in form nodes. Shadowserver counted 24,700+ unpatched n8n instances exposed online. CVE-2025-68613, a related predecessor, is already in CISA KEV.
The common thread: AI orchestration tools treat code execution as a product feature. Users want to run custom Python. Builders want to enable that. Security gets bolted on after the fact, one endpoint at a time, while the underlying `exec()` stays untouched.
Srivastava's methodology is replicable and adversaries know it. Start with the CVE that got fixed. Find the same pattern where the developers didn't look.
What to Do Right Now
- Update to Langflow 1.9.0 immediately. Every version through 1.8.1 is vulnerable.
- Rotate all credentials on any instance that was internet-accessible before patching — API keys, database passwords, cloud tokens. All of them.
- Set `LANGFLOW_AUTO_LOGIN=false` in production. The default (`true`) lets unauthenticated users get a superuser token and create public flows, removing the only prerequisite for exploitation.
- Monitor for outbound connections to interactsh domains (`.oast.live`, `.oast.me`, `.oast.pro`, `.oast.fun`), `oastify.com`, and `interact.sh`. Block known C2 IPs: `143.110.183.86:8080` and `173.212.205.251:8443`.
- Never expose Langflow directly to the internet. Reverse proxy, authentication layer, IP allowlist. Pick at least two.
- Audit your entire AI tooling inventory. Any tool that executes user-supplied or LLM-generated code is a high-risk asset. Most of them aren't behind your standard security review process.
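The callback-domain monitoring above can be automated against DNS query logs. A minimal sketch, assuming you can export queried domains as a list — exact log plumbing will vary by resolver:

```python
# Quick filter for DNS query logs: flag lookups of interactsh-style
# callback domains from the IOC list above. How you extract queried
# domains from your resolver's logs is deployment-specific.
OAST_SUFFIXES = (".oast.live", ".oast.me", ".oast.pro", ".oast.fun",
                 ".oastify.com", ".interact.sh")

def flag_oast_queries(queried_domains):
    """Return domains matching known out-of-band callback suffixes."""
    return [d for d in queried_domains
            if d.lower().rstrip(".").endswith(OAST_SUFFIXES)]

sample = ["cdn.example.com", "x9k2.abc123.oast.live", "api.openai.com"]
print(flag_oast_queries(sample))  # -> ['x9k2.abc123.oast.live']
```

Any hit from a host running Langflow is a strong signal that an exploit payload phoned home, and should kick off the credential-rotation steps above.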
As of March 22, CVE-2026-33017 hasn't been added to CISA's KEV catalog despite confirmed active exploitation. Don't wait for the catalog to make the call for you.
*CybrPulse tracks thousands of security feeds daily. CVE-2026-33017 scored 9.0 in our weighted intelligence pipeline — one of the highest signals this week.*