AI Tools Are the New Attack Surface. Your Security Policies Haven't Caught Up.
This week our feed indexed 3,174 security articles. I read through the high-priority incidents looking for a pattern. It didn't take long.
Four separate stories, four different attack methods, one common thread: attackers are targeting AI tools and developer infrastructure specifically because security teams aren't watching them.
The extensions problem
On March 5, Microsoft published a Security Blog post documenting a wave of fake AI browser extensions — promoted as ChatGPT and DeepSeek productivity helpers — that had accumulated roughly 900,000 installs. Microsoft Defender telemetry found activity across more than 20,000 enterprise tenants. The extensions were harvesting LLM chat histories. In enterprise environments, those histories contain code reviews, internal documentation drafts, HR communications, customer data discussions. The kind of data that doesn't show up in your DLP policies because nobody thought to add "AI chat exports" to the watchlist.
Separately, two Chrome extensions — QuickLens and ShotBird — were found to have turned malicious after ownership transfer. Both had been legitimate. Both had passed Chrome Web Store review. A new owner took over, pushed a malicious update, and the existing install base received it automatically with no additional permissions prompt. The attack surface here isn't a vulnerability in Chrome. It's the extension marketplace's trust model: once an extension is installed, it can be updated to do anything.
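If you want to at least see what's installed across your fleet, the on-disk layout gives you a starting point. Here's a minimal sketch: it enumerates Chrome extension IDs from a profile directory and diffs them against an allowlist. The directory layout and the allowlist IDs are assumptions for illustration, not an authoritative inventory method — and note its limit, discussed below.

```python
from pathlib import Path

# Hypothetical allowlist of extension IDs your org has reviewed.
# These IDs are placeholders -- maintain the real list yourself.
APPROVED_IDS = {
    "cjpalhdlnbpafiamejdnhcphjbkeiagm",  # example entry only
}

def installed_extension_ids(profile_dir: Path) -> set[str]:
    """Enumerate extension IDs from a Chrome profile's Extensions folder.

    Assumed layout: <profile>/Extensions/<32-char id>/<version>/
    (e.g. ~/.config/google-chrome/Default on Linux -- path varies by OS).
    """
    ext_dir = profile_dir / "Extensions"
    if not ext_dir.is_dir():
        return set()
    # Chrome extension IDs are 32 lowercase letters; each gets its own dir.
    return {p.name for p in ext_dir.iterdir() if p.is_dir() and len(p.name) == 32}

def unapproved(installed: set[str], approved: set[str]) -> set[str]:
    """Extensions present on the machine but absent from the approved list."""
    return installed - approved
```

The catch is exactly the trust-model problem above: an ownership transfer doesn't change the extension ID, so an ID allowlist passes a malicious update straight through. Catching that requires managed installs with version control — Chrome's enterprise `ExtensionInstallAllowlist`/`ExtensionInstallBlocklist` policies are the usual lever — not just presence checks.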
The developer tools problem
The Blackbox AI VS Code extension has been installed over 4.7 million times. Three independent security research teams have documented critical vulnerabilities in it since October 2024. The vendor has not responded to a single disclosure attempt in over seven months.
CVE-2024-48139, published to the NVD in October 2024 and rated 7.5 (High), is still listed as "Awaiting Analysis" by NIST as of this week. A proof of concept is publicly available on GitHub. The most recently documented attack: a security researcher sent a crafted PNG image to the extension. The extension read the image, followed the instructions hidden inside it, downloaded a reverse shell from an attacker-controlled server, executed it, and then re-ran it with sudo when prompted. Root access. From a PNG. From software with 4.7 million installs.
Meanwhile, attackers are buying Google ads to surface fake Claude Code installation pages at the top of search results for "install Claude Code." The fake pages clone Anthropic's legitimate site exactly — same layout, same links, all redirecting to the real site. Only the installation instructions differ. Windows users who follow them download Amatera Stealer. Push Security, which documented the campaign, put it plainly: "Unless you're carefully reading the URL embedded in the install one-liner — and let's be honest, almost nobody does — the page is indistinguishable from the real one."
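Since almost nobody reads the URL in the one-liner, make a script do it. The sketch below pulls every URL out of a paste-and-run command and flags hosts that aren't on a trusted-domain list. The domains shown are examples of what such a list might contain, not an endorsed set, and this catches only the lazy case — it won't save you from a look-alike domain you've mistakenly trusted.

```python
import re
from urllib.parse import urlparse

# Example trusted domains for this tool -- adjust for your environment.
TRUSTED_DOMAINS = {"anthropic.com", "claude.ai"}

# Rough URL matcher for shell one-liners; stops at whitespace,
# quotes, pipes, and semicolons.
URL_RE = re.compile(r"https?://[^\s'\"|;]+")

def untrusted_urls(one_liner: str, trusted: set[str] = TRUSTED_DOMAINS) -> list[str]:
    """Return URLs in a command whose host is neither a trusted domain
    nor a subdomain of one."""
    flagged = []
    for url in URL_RE.findall(one_liner):
        host = urlparse(url).hostname or ""
        ok = any(host == d or host.endswith("." + d) for d in trusted)
        if not ok:
            flagged.append(url)
    return flagged
```

Run it on the install command before you run the install command. An empty result means the hosts matched your list — nothing more.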
The "encrypted" problem
Dutch intelligence services AIVD and MIVD issued a warning this week: Russian state-backed hackers are running a large-scale campaign against Signal and WhatsApp accounts of government officials, military personnel, and journalists.
They didn't break end-to-end encryption. They didn't need to.
The attacks use social engineering to get victims to hand over SMS verification codes or scan a QR code that adds the attacker's device as a "linked device" on the account. The victim keeps access. The attacker reads along in real time. Signal's encryption works exactly as advertised. The endpoint, the human on the other end, does not.
What this means
None of these attacks are novel. Malvertising has existed for decades. Extension marketplace abuse has been documented since at least 2017. Social engineering has been the leading attack vector for thirty years.
What's new is the target: AI tools. Developers and knowledge workers have adopted AI tools extremely fast, with almost no security review process. Your endpoint security team knows what your company's approved software list looks like. It probably doesn't include Blackbox AI or whatever extension your developers installed last Tuesday because it looked useful.
There's no single fix here. But the starting point is simple: AI tools installed by employees — browser extensions, IDE plugins, CLI tools — need to go through the same vetting as any other third-party software. They run on your machines, they access your data, and right now most organizations are treating them like they're a website bookmark.
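Vetting starts with knowing what's installed. For IDE plugins, the VS Code CLI already exposes the inventory via `code --list-extensions`; the sketch below diffs that output against an approved list to produce a review queue. The approved identifiers here are placeholders — substitute your organization's actual reviewed list.

```python
import subprocess

# Hypothetical approved list -- replace with the extensions your
# security team has actually reviewed.
APPROVED = {
    "ms-python.python",
    "dbaeumer.vscode-eslint",
}

def list_vscode_extensions() -> set[str]:
    """Ask the VS Code CLI for installed extension identifiers
    (publisher.name, one per line)."""
    out = subprocess.run(
        ["code", "--list-extensions"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line.strip() for line in out.splitlines() if line.strip()}

def review_queue(installed: set[str], approved: set[str] = APPROVED) -> set[str]:
    """Extensions that are installed but were never vetted."""
    return installed - approved
```

This is one machine and one tool class; the same diff-against-approved pattern extends to browser extensions and CLI tools once you have an inventory feed for each.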
They're not. This week proved it.
*CybrPulse tracks thousands of security publications daily. Sources: Microsoft Security Blog (March 5, 2026), barrack.ai research (March 2026), Push Security via HelpNetSecurity (March 9, 2026), Dutch AIVD/MIVD advisory via Malwarebytes (March 2026), The Hacker News Chrome extension report (March 2026), CVE-2024-48139 (NVD).*