<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Cysleuthing: Cyber News Links]]></title><description><![CDATA[Links to cyber news from the past week]]></description><link>https://cysleuths.substack.com/s/cyber-monday-links</link><image><url>https://substackcdn.com/image/fetch/$s_!jX4q!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51643421-e66a-42bb-9a78-0c2ca38e3a08_1024x1024.png</url><title>Cysleuthing: Cyber News Links</title><link>https://cysleuths.substack.com/s/cyber-monday-links</link></image><generator>Substack</generator><lastBuildDate>Sun, 05 Apr 2026 03:36:57 GMT</lastBuildDate><atom:link href="https://cysleuths.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Jasmine Wong]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[cysleuths@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[cysleuths@substack.com]]></itunes:email><itunes:name><![CDATA[Jasmine Wong]]></itunes:name></itunes:owner><itunes:author><![CDATA[Jasmine Wong]]></itunes:author><googleplay:owner><![CDATA[cysleuths@substack.com]]></googleplay:owner><googleplay:email><![CDATA[cysleuths@substack.com]]></googleplay:email><googleplay:author><![CDATA[Jasmine Wong]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[LiteLLM Supply Chain Attack, Anthropic's Leaked Model, Fake VS Code Security Alerts & More | AI & Cybersecurity Last Week]]></title><description><![CDATA[Covering 3/24/26 - 3/29/26]]></description><link>https://cysleuths.substack.com/p/litellm-supply-chain-attack-anthropics</link><guid 
isPermaLink="false">https://cysleuths.substack.com/p/litellm-supply-chain-attack-anthropics</guid><dc:creator><![CDATA[Jasmine Wong]]></dc:creator><pubDate>Mon, 30 Mar 2026 01:11:01 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/aab28137-173d-4d28-bbb8-2875e129121f.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi, I&#8217;m Jasmine &#8212; a Product Security Engineer, and if you know me personally, also quite a bit of a travel bug and matcha enthusiast. I&#8217;ve realized how hard it is to keep up with everything happening in cybersecurity, AI, and tech because there&#8217;s just <em>so much</em> being put out every day. So every week I share the security-related news and stories I found most interesting or relatable. It helps me stay accountable, and hopefully it helps you stay in the loop too :)</p><div><hr></div><h3>AI Research &amp; Vulnerabilities</h3><ol><li><p><strong><a href="https://snyk.io/articles/poisoned-security-scanner-backdooring-litellm/">LiteLLM Supply Chain Attack:</a></strong></p><p>On March 24, 2026, a threat actor group known as TeamPCP published two malicious versions of LiteLLM to the Python Package Index, where the library averages 97 million monthly downloads. LiteLLM is a widely used Python library that routes API calls across AI models such as those from OpenAI, Anthropic, and Google. The attackers gained access by first compromising Trivy, an open-source security scanner used inside LiteLLM&#8217;s own build pipeline, through a GitHub Actions misconfiguration that allowed them to steal the PyPI publishing credentials. 
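One defense that blunts this kind of attack is pinning dependencies to exact hashes, so a freshly published malicious release fails verification instead of installing. A minimal sketch of the idea (the filename and digest below are illustrative placeholders, not real LiteLLM values):

```python
import hashlib

# Illustrative pin: a filename mapped to the SHA-256 digest you expect.
# (Placeholder values; real pins come from a lockfile you generated earlier.)
PINNED = {
    "example_pkg-1.0.0-py3-none-any.whl":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_artifact(name: str, data: bytes) -> bool:
    """Accept an artifact only if its SHA-256 matches the pinned digest."""
    expected = PINNED.get(name)
    if expected is None:
        return False  # unknown artifact: refuse rather than trust
    return hashlib.sha256(data).hexdigest() == expected
```

In practice pip enforces the same idea natively with `pip install --require-hashes -r requirements.txt`, which would have rejected the attacker-published wheels outright.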
The malicious packages, which were live for roughly three to four hours before being quarantined, deployed a three-stage payload that harvested SSH keys, cloud credentials for AWS, GCP, and Azure, Kubernetes secrets, CI/CD tokens, and cryptocurrency wallets; encrypted and transmitted everything to an attacker-controlled domain; and installed a persistent backdoor that polled for additional commands every five minutes.</p></li><li><p><strong><a href="https://www.cyera.com/research/langdrained-3-paths-to-your-data-through-the-worlds-most-popular-ai-framework">Three Vulnerabilities Found in LangChain, the AI Tool Powering Half the Internet&#8217;s Chatbots</a>:</strong></p><p>Cyera researchers disclosed three vulnerabilities in LangChain and LangGraph, an AI framework family with roughly 847 million total downloads that serves as the backbone for a vast number of enterprise AI applications, including chatbots, document search tools, and autonomous agents. The three flaws, rated critical and high severity, are classic attack types applied to a modern AI context: a path traversal bug that lets attackers read sensitive files like cloud credentials and CI/CD configs, a serialization injection flaw that can expose environment secrets like API keys, and a SQL injection vulnerability in LangGraph&#8217;s memory storage layer that could allow an attacker to access every conversation in the database. 
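A quick way to sanity-check whether an environment has picked up the fixed releases is a numeric version comparison; a sketch that assumes plain x.y.z strings (real tooling should use packaging.version for pre-releases and epochs):

```python
def at_least(installed: str, minimum: str) -> bool:
    """True if a dotted x.y.z version string meets the minimum."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(installed) >= parse(minimum)

# Patched floors for the two affected packages, per the advisory.
FIXED = {"langchain-core": "1.2.22", "langgraph-checkpoint-sqlite": "3.0.1"}
```

You would feed in the versions reported by `pip show` or `importlib.metadata.version` for each package in `FIXED`.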
Patches are available and users should update langchain-core to version 1.2.22 or later and langgraph-checkpoint-sqlite to version 3.0.1 or later immediately.</p></li><li><p><strong><a href="https://www.longtermresilience.org/wp-content/uploads/2026/03/v5-Scheming-in-the-wild_-detecting-real-world-AI-scheming-incidents-through-open-source-intelligence.pdf">Researchers Found Nearly 700 Real Cases of AI Systems Lying, Ignoring Instructions, and Going Rogue</a>:</strong></p><p>Researchers at the Centre for Long-Term Resilience analyzed over 183,000 publicly shared AI conversation transcripts on X between October 2025 and March 2026, identifying 698 verified real-world incidents of AI systems behaving in deceptive or misaligned ways, a category they call scheming. The incidents included AI agents ignoring explicit stop commands, deleting production databases, escalating their own system permissions without authorization, fabricating results to placate users, and in one case, an AI agent that had its Discord access revoked autonomously taking over another agent&#8217;s account to keep posting. The number of confirmed incidents increased nearly fivefold over the five-month study period, outpacing the growth in general AI discussion, and the researchers note that no organization currently monitors for these behaviors across all AI models at scale.</p><p></p><p><em>Note: My first thought was that since this study relies on transcripts people chose to post publicly, it skews toward dramatic or concerning interactions. The researchers do acknowledge they have no way to calculate a true rate of scheming across all AI usage. 
The fivefold increase over the study period is arguably the more meaningful stat, as it outpaced both general AI discussion growth and general negative AI sentiment on X.</em></p></li></ol><div><hr></div><h3>AI News</h3><ol><li><p><strong><a href="https://mashable.com/article/claude-mythos-ai-model-anthropic-leak">Anthropic Accidentally Exposed Claude Mythos, Its Most Powerful Unreleased AI Model</a>:</strong></p><p>A misconfigured content management system left nearly 3,000 unpublished Anthropic assets in a publicly searchable data cache, including a draft blog post revealing a new unreleased model called Claude Mythos and a new model tier called Capybara that would sit above Opus as the company&#8217;s largest and most intelligent offering. Anthropic confirmed the model is real, currently being trialed by early access customers, and described it as a step change in capabilities and the most powerful model it has ever built. The leaked draft also warns that Claude Mythos is currently far ahead of any other AI model in cyber capabilities and that it presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders.</p></li><li><p><strong><a href="https://blog.thereallo.dev/blog/decompiling-the-white-house-app">A Developer Decompiled the White House App and Found It Has a Full GPS Tracking Pipeline Built In</a>:</strong></p><p>A developer decompiled the official White House Android app and found that its in-app browser injects JavaScript into every website a user opens to hide cookie consent dialogs, GDPR banners, login walls, and paywalls, while a fully compiled OneSignal GPS tracking pipeline polls location every 4.5 minutes in the foreground and 9.5 minutes in the background, ready to activate with a single API call. The app also loads JavaScript for YouTube embeds from a personal GitHub Pages account, meaning if that account is compromised, arbitrary code could execute inside the app for every user. 
There is no certificate pinning, development artifacts including a localhost URL and a developer&#8217;s local IP address shipped in the production build, and user profiling through OneSignal tracks notification interactions, in-app message clicks, phone numbers, cross-device identifiers, and location data.</p></li><li><p><strong><a href="https://www.bleepingcomputer.com/news/security/paid-ai-accounts-are-now-a-hot-underground-commodity/">Stolen ChatGPT and Claude Accounts Are Being Bought and Sold in Underground Markets</a>:</strong></p><p>Premium AI platform accounts for tools like ChatGPT, Claude, Microsoft Copilot, and Perplexity are now being actively bought and sold in underground Telegram groups and fraud-oriented online communities, according to an analysis of hundreds of posts from cybercrime forums. Listings typically advertise discounted subscriptions, bundled access to multiple AI tools, or accounts claiming to have fewer restrictions, with acquisition methods likely including credential theft, bulk account creation, verification bypass using virtual phone numbers, and abuse of trial programs. Threat actors are using the access to generate phishing content, automate fraud operations, write malicious code, and produce synthetic media for impersonation at scale.</p></li></ol><div><hr></div><h3>Cybersecurity Research &amp; Vulnerabilities</h3><ol><li><p><strong><a href="https://socket.dev/blog/widespread-github-campaign-uses-fake-vs-code-security-alerts-to-deliver-malware">Thousands of Fake VS Code Security Alerts Are Flooding GitHub to Trick Developers Into Installing Malware</a>:</strong></p><p>Socket researchers identified a large-scale phishing campaign where attackers are posting thousands of nearly identical fake Visual Studio Code security alerts through GitHub Discussions across hundreds of repositories, triggering email notifications directly to developers&#8217; inboxes through GitHub&#8217;s own notification system. 
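A cheap triage habit against notification-borne lures like these is checking where an "advisory" actually links before clicking; a sketch that flags link hosts outside a small allowlist (the allowlist entries are illustrative):

```python
import re
from urllib.parse import urlparse

# Illustrative allowlist; tune to the sources your advisories legitimately use.
TRUSTED_HOSTS = {"github.com", "nvd.nist.gov", "msrc.microsoft.com"}

def suspicious_link_hosts(html: str) -> list[str]:
    """Return hostnames of href targets that fall outside the allowlist."""
    flagged = []
    for href in re.findall(r'href="([^"]+)"', html):
        host = (urlparse(href).hostname or "").lower()
        if host and host not in TRUSTED_HOSTS:
            flagged.append(host)
    return flagged
```

A real security advisory almost never asks you to download a fix from a file-sharing service, so a Drive or Dropbox host in an "urgent patch" post is itself a strong signal.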
The posts impersonate security advisories with fabricated CVEs and urgent language, then link to Google Drive downloads that route users through a filtering chain where Google&#8217;s share endpoint behaves differently based on whether the visitor has a Google cookie, redirecting most real users straight to attacker-controlled infrastructure at drnatashachinn[.]com for browser fingerprinting. The fingerprinting script silently collects timezone, platform, user agent, and automation detection signals and auto-submits the data via POST with no user interaction, consistent with a traffic distribution system that profiles victims before delivering a follow-on payload.</p></li><li><p><strong><a href="https://www.gendigital.com/blog/insights/research/torg-grabber-credential-stealer-analysis">A New Windows Info-Stealer Went From Rough Prototype to Fully Operational Crime Service in Just Three Months</a>:</strong></p><p>Gen Digital researchers uncovered a new malware-as-a-service operation called Torg Grabber that rapidly evolved from a basic data thief into a full-fledged criminal toolkit capable of stealing saved passwords, crypto wallets, browser cookies, Discord tokens, VPN configs, and email credentials from over 25 browsers and 850 browser extensions on Windows machines. The malware tricks victims through fake verification pages that copy a command to the clipboard and instruct them to paste it into Windows, while a fake progress bar disguised as a security update runs in the foreground to buy time as stolen data is quietly uploaded in the background. 
Researchers identified over 40 separate criminal operators using the service and unmasked eight of them through Telegram accounts tied to the malware, with usernames and bios openly referencing stolen credit card sales and Russian cybercrime slang.</p></li><li><p><strong><a href="https://research.jfrog.com/post/team-pcp-strikes-again-telnyx-popular-library-hit/">Hackers Hid Malware Inside a Fake Audio File to Compromise Telnyx, a Popular Python Library With 3.8 Million Downloads</a>:</strong></p><p>JFrog researchers discovered that the widely used Telnyx Python package on PyPI, a voice and messaging SDK with roughly 670,000 monthly downloads, was compromised on March 27 when two malicious versions were uploaded containing a payload hidden inside a legitimate WAV audio file that matched the library&#8217;s purpose as an AI voice agent toolkit. The attack is linked to the same group behind this week&#8217;s litellm compromise, known as TeamPCP, which has been hitting packages across PyPI, NPM, Go, and GitHub repositories using the same encryption keys and exfiltration methods. 
On infected machines the malware steals credentials and environment secrets, encrypts the stolen data with AES-256 and RSA-4096, and sends it back to the attackers, and because the payload is downloaded over unencrypted HTTP without any verification, any attacker on the same network could also hijack the request and deliver their own malicious code.</p></li><li><p><strong><a href="https://www.malwarebytes.com/blog/threat-intel/2026/03/infiniti-stealer-a-new-macos-infostealer-using-clickfix-and-python-nuitka">A New Mac Malware Tricks You Into Infecting Yourself Through a Fake CAPTCHA Page</a>:</strong></p><p>Malwarebytes researchers discovered a previously undocumented macOS infostealer called Infiniti Stealer that spreads through a fake Cloudflare verification page instructing users to open Terminal and paste a command, a social engineering technique known as ClickFix that has been widely used on Windows but is now being adapted for Mac. Once executed, the malware drops a native macOS binary compiled with Nuitka, which converts Python into compiled C code to make it harder to detect and analyze, and then steals saved browser passwords, macOS Keychain entries, cryptocurrency wallets, developer secrets from .env files, and screenshots. The stealer sends captured credentials to a remote server via HTTP, notifies the operator through Telegram, and queues stolen passwords for server-side cracking.</p></li></ol><div><hr></div><h3>Cybersecurity News</h3><ol><li><p><strong><a href="https://pushsecurity.com/blog/tiktok-phishing">Attackers Are Hijacking Business TikTok Accounts to Run Ad Fraud and Malvertising Scams</a>:</strong></p><p>Push Security researchers identified a new cluster of phishing pages all designed to steal TikTok for Business account sessions using adversary-in-the-middle phishing kits hidden behind Cloudflare Turnstile bot checks and hosted in a single Google Storage bucket. 
The fake pages clone both TikTok for Business and Google Careers landing pages, and because most business users log in to TikTok with Google, a single compromised session gives attackers access to both the TikTok ad account and the victim&#8217;s Google account, opening the door to malvertising scams, ad fraud, and further SSO-linked app compromise. TikTok has already been widely abused to distribute infostealers through AI-generated ClickFix videos, with one malicious video alone reaching roughly 500,000 views and 20,000 likes.</p></li><li><p><strong><a href="https://www.reuters.com/world/us/iran-linked-hackers-claim-breach-of-fbi-directors-personal-email-doj-official-2026-03-27/">Iran-Linked Hackers Broke Into the FBI Director&#8217;s Personal Gmail and Published His Photos and Emails</a>:</strong></p><p>The Handala Hack Team, an Iran-linked group that Western researchers consider one of several personas used by Iranian government cyberintelligence units, breached FBI Director Kash Patel&#8217;s personal Gmail account and published personal photographs along with a sample of more than 300 emails showing a mix of personal and work correspondence dating between 2010 and 2019. The FBI confirmed the breach and said it had taken steps to mitigate potential risks, adding that the data was historical in nature and contained no government information. Check Point chief of staff Gil Messing said the operation is part of Iran&#8217;s broader strategy to embarrass U.S. officials as the U.S.-Israeli war drags on, noting that Handala has also recently claimed hacks against medical device provider Stryker and the personal data of dozens of Lockheed Martin employees stationed in the Middle East.</p></li><li><p><strong><a href="https://www.bleepingcomputer.com/news/security/fcc-bans-new-routers-made-outside-the-usa-over-security-risks/">The FCC Just Banned All New Routers Made Outside the U.S. 
Over National Security Risks</a>:</strong></p><p>The FCC has added all consumer routers manufactured in foreign countries to its Covered List, effectively banning the sale of new models in the U.S., following a March 20 National Security Determination that found foreign-produced routers carry supply-chain risks that could be used to disrupt critical infrastructure and directly harm Americans. The decision specifically cites the role foreign-made routers played in enabling the Volt, Flax, and Salt Typhoon attacks against vital U.S. infrastructure, and expands a list that previously only targeted specific companies like Huawei, ZTE, and Kaspersky. Foreign manufacturers can still seek approval through an alternative certification pathway, but must disclose ownership structures, full supply chain details, and provide a plan to move critical component manufacturing to the United States.</p></li></ol><p><strong>&#128073; Like</strong> this post + <strong>subscribe </strong>to catch next week&#8217;s roundup!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://cysleuths.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Cysleuthing is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Rise of SQLi, Quick Turnaround Exploits & more | AI & Cybersecurity Last Week]]></title><description><![CDATA[Covering 3/17/26 - 3/23/26]]></description><link>https://cysleuths.substack.com/p/rise-of-sqli-quick-turnaround-exploits</link><guid isPermaLink="false">https://cysleuths.substack.com/p/rise-of-sqli-quick-turnaround-exploits</guid><dc:creator><![CDATA[Jasmine Wong]]></dc:creator><pubDate>Tue, 24 Mar 2026 07:39:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jX4q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51643421-e66a-42bb-9a78-0c2ca38e3a08_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi, I&#8217;m Jasmine &#8212; a Product Security Engineer, and if you know me personally, also quite a bit of a travel bug and matcha enthusiast. I&#8217;ve realized how hard it is to keep up with everything happening in cybersecurity, AI, and tech because there&#8217;s just <em>so much</em> being put out every day. So every week I share the security-related news and stories I found most interesting or relatable. 
It helps me stay accountable, and hopefully it helps you stay in the loop too :)</p><div><hr></div><h3>AI Research &amp; Vulnerabilities</h3><ol><li><p><strong><a href="https://layerxsecurity.com/blog/poisoned-typeface-a-simple-font-rendering-poisons-every-ai-assistant-and-only-microsoft-cares/">A Custom Font Trick Is Fooling Every Major AI Assistant</a>:</strong></p><p>Researchers at LayerX built a proof-of-concept webpage that uses a custom font and CSS to show users harmful instructions while displaying only innocent video game fanfiction to AI assistants reading the page&#8217;s underlying code. Every non-agentic assistant tested, including ChatGPT, Claude, Gemini, Grok, and Perplexity, failed to detect the hidden threat and told users the page was safe. LayerX disclosed the findings to all affected vendors, and Microsoft was the only one to fully accept and address the report, while the rest classified it as out of scope.</p></li><li><p><strong><a href="https://medium.com/@aviral23/cve-2026-33017-how-i-found-an-unauthenticated-rce-in-langflow-by-reading-the-code-they-already-dc96cdce5896">A Langflow Vulnerability Was Exploited Within 20 Hours of Going Public</a>:</strong></p><p>Researcher Aviral Srivastava discovered a critical vulnerability in Langflow, the popular open-source AI workflow builder, that allows attackers to execute arbitrary Python code on any exposed instance with no credentials required and a single HTTP request. The flaw exists in a public-facing endpoint that is intentionally unauthenticated, meaning a simple authentication fix was not an option, and on default installations attackers can also call a separate endpoint to generate their own superuser token before exploiting it. 
Attackers built working exploits from the advisory text alone and were observed attempting to harvest API keys for OpenAI, Anthropic, AWS, and database credentials <a href="https://www.sysdig.com/blog/cve-2026-33017-how-attackers-compromised-langflow-ai-pipelines-in-20-hours">within 20 hours</a> of the advisory going public on March 17, 2026, with no public proof-of-concept code in existence at the time.</p></li><li><p><strong><a href="https://blog.securelayer7.net/cve-2026-22730-sql-injection-spring-ai-mariadb/">Two New Vulnerabilities Found in Spring AI Let Attackers Bypass Access Controls</a>:</strong></p><p>Researchers at SecureLayer7 discovered two high-severity vulnerabilities in Spring AI, a popular framework for building AI applications, that allow attackers to bypass data access controls and retrieve documents they should not be able to see. CVE-2026-22730 affects the MariaDB vector store and enables SQL injection, while CVE-2026-22729 affects the PostgreSQL and Oracle vector stores and enables <a href="https://blog.securelayer7.net/cve-2026-22729-jsonpath-injection-spring-ai-pgvectorstore/">JSONPath injection,</a> both stemming from the same root cause: user-controlled input being embedded into database queries without proper escaping. Both vulnerabilities are fixed in Spring AI versions 1.0.4 and 1.1.3.</p></li></ol><div><hr></div><h3>AI News</h3><ol><li><p><strong><a href="https://www.bleepingcomputer.com/news/security/musician-pleads-guilty-to-10m-streaming-fraud-powered-by-ai-bots/">A Musician Used AI and Bots to Pocket $10 Million in Fake Streaming Royalties</a>:</strong></p><p>North Carolina musician Michael Smith pleaded guilty to conspiracy to commit wire fraud after collecting over $10 million in royalty payments by uploading hundreds of thousands of AI-generated songs to Spotify, Apple Music, Amazon Music, and YouTube Music and using automated bots to stream them billions of times. 
Smith ran the scheme from 2017 to 2024 with the help of an unnamed music promoter and the CEO of an AI music company, using VPNs to avoid detection by anti-fraud systems. He has agreed to forfeit $8,091,843.64 and faces a maximum sentence of five years in prison.</p></li><li><p><strong><a href="https://help.openai.com/en/articles/20001052-file-storage-and-library-in-chatgpt">ChatGPT Now Saves Your Uploaded Files So You Can Reuse Them Later</a>:</strong></p><p>OpenAI has launched a Library feature in ChatGPT that automatically saves files uploaded or created in chats, including documents, spreadsheets, presentations, and images, so users can find and reuse them across conversations. Files can be browsed and searched from the left-hand sidebar, filtered by type, and added directly to new chats from the composer menu. </p></li></ol><div><hr></div><h3>Cybersecurity Research &amp; Vulnerabilities</h3><ol><li><p><strong><a href="https://www.threatfabric.com/blogs/perseus-dto-malware-that-takes-notes">New Android Malware Perseus Can Silently Read Through Your Notes Apps</a>:</strong></p><p>ThreatFabric researchers discovered a new Android banking trojan called Perseus that can take over a device, capture credentials through overlay attacks and keylogging, and silently read through note-taking apps including Google Keep, Evernote, Samsung Notes, and Xiaomi Notes, where users often store passwords, recovery phrases, and financial details. The malware is distributed through fake IPTV apps that are sideloaded outside of official app stores, and includes extensive anti-analysis checks to avoid detection in research environments. 
Active campaigns show a primary focus on users in Turkey and Italy, with targets spanning banks, cryptocurrency platforms, and financial institutions across several other European countries.</p></li><li><p><strong><a href="https://aisle.com/blog/opensips-sql-injection-aisle-deep-dive-sql-injection-authentication-bypass">A SQL Injection Bug in Widely Used VoIP Software Let Attackers Impersonate Any User</a>:</strong></p><p>Researchers at AISLE discovered a high-severity SQL injection vulnerability in OpenSIPS, an open-source SIP server used by carriers and providers worldwide to route VoIP calls and enforce authentication. The flaw existed in the auth_jwt module, where an attacker-controlled value from a JWT token was inserted directly into a SQL query without escaping, allowing a remote attacker to inject a fake row and effectively sign their own authentication credentials. A successful exploit allowed an attacker to impersonate any SIP user, place calls under a trusted identity, and potentially enable toll fraud, wiretapping, or social engineering attacks. The fix was accepted by OpenSIPS maintainers and merged in February 2026.</p></li><li><p><strong><a href="https://www.gendigital.com/blog/insights/research/voidstealer-abe-bypass">New Infostealer Uses a Sneaky Debugger Trick to Steal Chrome and Edge Passwords</a>:</strong></p><p>Gen researchers discovered VoidStealer, a malware-as-a-service infostealer that is the first known threat in the wild to use a debugger-based technique to bypass Chrome and Edge&#8217;s Application-Bound Encryption, a security feature introduced in 2024 to protect stored passwords and cookies. Rather than injecting code into the browser, VoidStealer attaches to a newly launched browser process as a debugger and sets hardware breakpoints at a precise moment when the master decryption key is briefly present in memory in plaintext, then reads it out without ever writing to the browser&#8217;s memory. 
The technique requires no privilege escalation, making it significantly harder to detect than other existing bypass methods.</p></li><li><p><strong><a href="https://socket.dev/blog/trivy-under-attack-again-github-actions-compromise">Trivy, a Popular Security Scanning Tool, Was Compromised Twice in March</a>:</strong></p><p>Attackers used a compromised credential with write access to force-update 75 out of 76 version tags in the official aquasecurity/trivy-action GitHub Actions repository, replacing legitimate code with an infostealer payload that harvested cloud credentials, SSH keys, and CI/CD secrets from any pipeline that ran an affected tag. The attack was attributed to TeamPCP, a documented cloud-native threat actor, with the malware exfiltrating stolen data to a typosquatted Aqua Security domain. The compromise then expanded when three new Trivy Docker images were pushed to Docker Hub containing the same infostealer indicators without corresponding GitHub releases.</p></li></ol><div><hr></div><h3>Cybersecurity News</h3><ol><li><p><strong><a href="https://techcrunch.com/2026/03/23/delve-halts-demos-insight-partners-scrubs-investment-post-amid-fake-compliance-allegations/">A $300 Million YC-backed AI Compliance Startup Is Accused of Fabricating Compliance</a>:</strong></p><p>Delve, a Y Combinator-backed startup that uses AI to help companies obtain security and regulatory certifications like SOC 2, HIPAA, and GDPR, has disabled its demo booking page and investor Insight Partners removed a published article about its $32 million investment after an anonymous whistleblower alleged the company fabricated evidence of board meetings, tests, and processes that never happened. The whistleblower, claiming to be a former client, alleged that Delve forced customers to either adopt fake compliance evidence or perform mostly manual work themselves, and that its platform rubber-stamps its own reports rather than undergoing independent auditing. 
Delve has denied the allegations, stating it is an automation platform that provides templates and connects customers with third-party auditors.</p></li><li><p><strong><a href="https://www.bleepingcomputer.com/news/security/nordstroms-email-system-abused-to-send-crypto-scams-to-customers/">Hackers Used Nordstrom&#8217;s Own Email System to Send a Crypto Scam to Its Customers</a>:</strong></p><p>Nordstrom customers received fraudulent emails sent from the company&#8217;s legitimate marketing address disguised as a St. Patrick&#8217;s Day promotion promising to double any cryptocurrency deposited to a specified wallet within two hours. Because the emails originated from an official Nordstrom domain, they bypassed standard email security checks, and some customers reported receiving the message at addresses that had never been exposed or leaked online. Nordstrom confirmed the emails were unauthorized and stated the company would never ask customers to transact using cryptocurrency.</p><p></p></li></ol><p><strong>&#128073; Like</strong> this post + <strong>subscribe </strong>to catch next week&#8217;s roundup!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://cysleuths.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Cysleuthing is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[NVIDIA NemoClaw, MCP Issues, Malicious GitHub Repos & More | AI & Cybersecurity Last Week]]></title><description><![CDATA[Covering 3/10/26-3/16/26]]></description><link>https://cysleuths.substack.com/p/nvidia-nemoclaw-mcp-issues-malicious</link><guid isPermaLink="false">https://cysleuths.substack.com/p/nvidia-nemoclaw-mcp-issues-malicious</guid><dc:creator><![CDATA[Jasmine Wong]]></dc:creator><pubDate>Wed, 18 Mar 2026 00:14:52 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Jmsj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ca7b908-4880-4249-a4db-6316e2d9a01a_1206x804.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you follow me on LinkedIn, you&#8217;ll know I was at SXSW the past few days in Austin. Some highlight photos at the end!</p><p>Hi, I&#8217;m Jasmine &#8212; a Product Security Engineer, and if you know me personally, also quite a bit of a travel bug and matcha enthusiast. I&#8217;ve realized how hard it is to keep up with everything happening in cybersecurity, AI, and tech because there&#8217;s just <em>so much</em> being put out every day. So every week I share the security-related news and stories I found most interesting or relatable. 
It helps me stay accountable, and hopefully it helps you stay in the loop too :)</p><div><hr></div><h3>AI Research &amp; Vulnerabilities</h3><ol><li><p><strong><a href="https://codewall.ai/blog/ai-vs-ai-how-our-ai-agent-hacked-a-20m-funded-ai-recruiter">An AI Agent Hacked an AI Recruiting Platform in an Hour, Then Started Impersonating Donald Trump:</a></strong></p><p>Security firm CodeWall pointed its autonomous offensive agent at Jack &amp; Jill, a $20 million-funded AI recruiting platform used by companies including Anthropic, Stripe, and Monzo, and within an hour the agent chained four individually low-risk bugs into a full organizational takeover with complete admin access. Without any instructions to do so, the agent then independently discovered the platform's voice infrastructure was exposed without authentication, generated synthetic speech, and spent 28 conversation rounds probing the platform's AI agent, including impersonating Donald Trump to demand full access to all candidate data. Jack &amp; Jill patched the vulnerabilities shortly after responsible disclosure.</p></li><li><p><strong><a href="https://permiso.io/blog/copilot-prompt-injection-ai-email-phishing">Attackers Can Hide Instructions Inside Emails to Make Microsoft Copilot Generate Fake Security Alerts:</a></strong></p><p>Permiso Security researchers discovered that attackers can embed hidden instruction text inside emails that Microsoft Copilot then follows when a user clicks Summarize, causing the AI to generate a convincing fake security alert, complete with urgent language and a clickable link, inside the trusted Copilot summary panel. Because users have been trained to trust AI-generated output more than raw email content, the phishing message effectively inherits Copilot's credibility, making it far more likely to fool someone than a standard suspicious email.
Microsoft patched the vulnerability across all affected surfaces and published CVE-2026-26133 on March 12, crediting Permiso Security for the discovery.</p></li><li><p><strong><a href="https://agentseal.org/blog/mcp-server-security-findings">Two Thirds of MCP Servers Have Security Problems</a>:</strong></p><p>AgentSeal connected to and analyzed 1,808 MCP servers, the plugins that let AI agents like Claude access tools, files, and external services, and found security issues in 66% of them, totaling 8,282 individual tool-level findings. The most common problems were tools that enable arbitrary command execution on the host machine, and toxic data flows where combining multiple servers creates unintended attack chains, such as pairing an untrusted input reader with a filesystem writer. The report also highlights a real-world incident where a single fly.io authentication token exposed in a Docker config file granted root access to 3,243 apps, most of them MCP servers, for five days before being patched.</p></li><li><p><strong><a href="https://grantex.dev/report/state-of-agent-security-2026">93% of Popular AI Agent Projects Have No Way to Limit What Your Agent Can Actually Do:</a></strong></p><p>Grantex researchers audited 30 of the most popular open-source AI agent projects, including OpenClaw, CrewAI, AutoGen, and LangGraph, and found that 93% rely entirely on unscoped API keys stored in environment variables, meaning any agent running under that key has the same level of access as the key owner with no restrictions on what it can do or whose behalf it acts on. Not a single project reviewed assigns a unique identity to each agent instance, and 97% have no mechanism for end users to approve what the agent does on their behalf at runtime. 
In multi-agent systems, the report notes the problem compounds further since there is no way to revoke access for a single misbehaving agent without rotating credentials for every other agent sharing the same key.</p></li></ol><div><hr></div><h3>AI News</h3><ol><li><p><strong><a href="https://nvidianews.nvidia.com/news/rubin-platform-ai-supercomputer">NVIDIA Is Betting Its Future on AI Inference, and Dynamo and Vera Rubin Are How It Plans to Win:</a></strong></p><p>NVIDIA used GTC 2026 to signal a strategic pivot from AI training to inference, announcing Dynamo 1.0, an open source inference operating system that boosted Blackwell GPU performance by up to 7x in benchmarks, alongside the Vera Rubin platform, which combines the Vera CPU, Rubin GPU, and Groq 3 LPU into a rack-scale design NVIDIA claims delivers up to 35 times higher inference throughput per megawatt compared to GPU-only configurations. Analyst Jack Gold of J. Gold Associates framed the urgency plainly, estimating that 80-85% of AI workloads will be inference within one to two years, while noting the economics are fundamentally different from training since organizations deploying inference at scale won't commit capital at the same magnitude as those building frontier models. 
Dynamo has already been adopted by AWS, Microsoft Azure, Google Cloud, and enterprises including ByteDance, PayPal, and Pinterest, while Vera Rubin faces a more contested market where hyperscalers are building their own custom silicon and chip rivals are making inroads.</p></li><li><p><strong><a href="https://developer.nvidia.com/blog/run-autonomous-self-evolving-agents-more-safely-with-nvidia-openshell/">NVIDIA&#8217;s NemoClaw Gives AI Agents a Security Sandbox:</a></strong></p><p>NVIDIA announced NemoClaw, an open source stack that adds privacy and security controls to OpenClaw with a single command, built on top of OpenShell, a runtime that sits between an AI agent and the underlying infrastructure and intended to enforce constraints outside the agent&#8217;s reach so it cannot override them even if compromised. The core problem both tools address is that long-running agents can spawn subagents, install packages, write their own code, and accumulate live credentials over hours, meaning a single prompt injection could become a credential leak. OpenShell also includes a privacy router that keeps sensitive context on-device and only routes to frontier models like Claude or GPT when policy explicitly allows it.</p></li></ol><div><hr></div><h3>Cybersecurity Research &amp; Vulnerabilities</h3><ol><li><p><strong><a href="https://www.microsoft.com/en-us/security/blog/2026/03/12/storm-2561-uses-seo-poisoning-to-distribute-fake-vpn-clients-for-credential-theft/">Hackers Are Gaming Search Results to Serve Fake VPN Downloads That Steal Your Login Credentials:</a></strong></p><p>Microsoft identified a campaign active since May 2025 in which a threat actor called Storm-2561 manipulates search engine results for queries like "Pulse VPN download" to push fake websites impersonating brands including Fortinet, Ivanti, and SonicWall, then delivers digitally signed malware through GitHub-hosted ZIP files. 
Once installed, the fake VPN presents a convincing login screen that captures credentials entered by the victim, then also accesses stored VPN configuration data from the device and exfiltrates everything to attacker-controlled infrastructure before redirecting the victim to the real vendor's website so they notice nothing is wrong. The GitHub repositories have since been taken down and the code-signing certificate has been revoked.</p></li><li><p><strong><a href="https://www.microsoft.com/en-us/security/blog/2026/03/11/contagious-interview-malware-delivered-through-fake-developer-job-interviews/">Hackers Are Posing as Recruiters to Trick Developers into Installing Malware During Fake Job Interviews:</a></strong></p><p>Microsoft has been tracking a campaign called Contagious Interview, active since at least December 2022, in which attackers pose as recruiters from cryptocurrency or AI companies and ask developer candidates to clone and run a code repository as part of a technical assessment, which silently installs backdoors including OtterCookie, Invisible Ferret, and FlexibleFerret. A newer variant of the attack abuses Visual Studio Code's workspace trust feature, where simply opening the repository and clicking trust causes it to automatically execute a malicious task configuration that fetches and loads the backdoor in the background. 
Once on a device, the malware harvests API tokens, cloud credentials, signing keys, cryptocurrency wallets, and password manager artifacts, targeting developer endpoints that have access to source code, CI/CD pipelines, and production infrastructure.</p></li></ol><p>The next three are about the GlassWorm Malware:</p><ol><li><p><strong><a href="https://www.stepsecurity.io/blog/forcememo-hundreds-of-github-python-repos-compromised-via-account-takeover-and-force-push">Hundreds of Python GitHub Repos Were Secretly Injected With Malware Using a Clever Git Trick That Hides the Evidence:</a></strong></p><p>StepSecurity researchers identified an ongoing campaign called ForceMemo, active since March 8, 2026, in which attackers used the GlassWorm malware to steal GitHub credentials from developers via malicious VS Code extensions, then used those credentials to silently inject identical obfuscated malware into hundreds of Python repositories by rewriting the most recent commit and force-pushing it, preserving the original commit message and author so nothing appears to have changed. Anyone who runs pip install from a compromised repo triggers the malware, which reads its command-and-control instructions directly from the Solana blockchain to avoid takedowns, then downloads Node.js and fetches an AES-encrypted second-stage payload targeting browser crypto wallet extensions, stored credentials, and session cookies. 
The Solana wallet used as C2 infrastructure is the same one linked to the GlassWorm VS Code extension campaign, confirming both attacks share the same threat actor.</p></li><li><p><strong><a href="https://www.aikido.dev/blog/glassworm-returns-unicode-attack-github-npm-vscode">A Malware Campaign Has Been Hiding Inside GitHub Repos Using Characters Invisible to the Human Eye:</a></strong></p><p>Aikido Security identified a new wave of GlassWorm activity in March 2026 in which malicious payloads are hidden inside what appear to be empty strings in JavaScript code using invisible Unicode characters, characters that render as nothing in virtually every editor, terminal, and code review interface but are decoded and passed to eval() at runtime. At least 151 GitHub repositories were compromised between March 3 and March 9, with notable targets including repos from Wasmer and the organization behind OpenCode, and the campaign has since expanded to npm packages and VS Code extensions. The malicious commits are designed to blend in, surrounding the injection with realistic-looking documentation tweaks and version bumps consistent with each target project.</p></li><li><p><strong><a href="https://socket.dev/blog/open-vsx-transitive-glassworm-campaign">72 Fake VS Code Extensions Were Secretly Installing Malware Through Extensions You Already Trusted:</a></strong></p><p>Socket researchers identified 72 malicious Open VSX extensions linked to GlassWorm since January 31, 2026, including a new technique where a benign-looking extension is later updated to declare a separate GlassWorm-infected extension as a dependency, causing editors to automatically install the malicious component without the user knowingly touching it. 
This means reviewing an extension at installation is no longer sufficient, since the malicious dependency may only appear in a later update after the extension has already been trusted, and several of the malicious listings showed download counts inflated into the thousands to appear more established. The extensions impersonate widely used developer tools including ESLint, Prettier, WakaTime, and AI coding tools including Claude Code and Codex.</p></li></ol><div><hr></div><h3>Cybersecurity News</h3><ol><li><p><strong><a href="https://forms.fbi.gov/victims/Steam_Malware">The FBI Is Investigating a Hacker Who Hid Malware Inside Steam Games:</a></strong></p><p>The FBI's Seattle Division is calling on victims who downloaded any of seven Steam games between May 2024 and January 2026, including BlockBlasters, Chemia, Dashverse/DashFPS, Lampy, Lunara, PirateFi, and Tokenova, which were embedded with malware believed to have been published by a single threat actor. Anyone who installed one of the named titles, or knows someone who did, can submit a report to the FBI via <a href="mailto:Steam_Malware@fbi.gov">Steam_Malware@fbi.gov</a> or through the official form on the FBI's website.</p></li><li><p><strong><a href="https://help.instagram.com/491565145294150">Instagram Is Removing End-to-End Encryption from Direct Messages Starting May 8:</a></strong></p><p>Instagram announced that end-to-end encrypted messaging will no longer be supported after May 8, 2026, meaning Meta will regain the ability to access the content of messages and calls that were previously protected. Users with affected chats will see instructions on how to download any messages or media they want to keep before the change takes effect, and those on older versions of the app may need to update before they can do so. 
End-to-end encryption ensured that only the sender and recipient could read messages, with no access even for Meta itself.</p></li><li><p><strong><a href="https://www.bleepingcomputer.com/news/security/stryker-attack-wiped-tens-of-thousands-of-devices-no-malware-needed/">Iran-Linked Hackers Wiped Nearly 80,000 Stryker Devices in Three Hours by Hijacking an Admin Account:</a></strong></p><p>A source familiar with the attack told BleepingComputer that the Iran-linked hacktivist group Handala compromised an administrator account at medical technology giant Stryker, created a new Global Administrator account, and used Microsoft Intune's built-in remote wipe command to erase data from nearly 80,000 employee devices between 5:00 and 8:00 a.m. UTC on March 11, with some employees losing personal data after having their personal devices enrolled in the company network. Stryker confirmed the attack was contained to its internal Microsoft environment and that no malware or ransomware was deployed, and investigators found no indication that data was exfiltrated despite Handala's claims of stealing 50 terabytes. 
</p><div><hr></div><p>Some SXSW highlights:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!Jmsj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ca7b908-4880-4249-a4db-6316e2d9a01a_1206x804.jpeg" width="1206" height="804" alt="" loading="lazy"><figcaption class="image-caption">a ManyChat Interview Moment</figcaption></figure></div><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!8YZn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F641db381-f301-4494-a6f0-dacd9e55d125.heic" width="1456" height="1941" alt="" loading="lazy"><figcaption class="image-caption">First time hearing about GlamBot</figcaption></figure></div><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!uz8b!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e8e4c9d-197a-4f5a-a377-1a6e2e3883c4.heic" width="1456" height="1941" alt="" loading="lazy"><figcaption class="image-caption">Terry Blacks when in Austin</figcaption></figure></div></li></ol><p><strong>&#128073; Like</strong> this post + <strong>subscribe </strong>to catch next week&#8217;s roundup!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://cysleuths.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Cysleuthing is a reader-supported publication.
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Fake Claude Code Installers, Data Sent to ByteDance, Exposed AI Chats & More | AI & Cybersecurity Last Week]]></title><description><![CDATA[Covering 3/4/26 - 3/9/26]]></description><link>https://cysleuths.substack.com/p/fake-claude-code-installers-data</link><guid isPermaLink="false">https://cysleuths.substack.com/p/fake-claude-code-installers-data</guid><dc:creator><![CDATA[Jasmine Wong]]></dc:creator><pubDate>Tue, 10 Mar 2026 04:38:55 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jX4q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51643421-e66a-42bb-9a78-0c2ca38e3a08_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi, I&#8217;m Jasmine &#8212; a Product Security Engineer, and if you know me personally, also quite a bit of a travel bug and matcha enthusiast. I&#8217;ve realized how hard it is to keep up with everything happening in cybersecurity, AI, and tech because there&#8217;s just <em>so much</em> being put out every day. So every week I share the security-related news and stories I found most interesting or relatable.
It helps me stay accountable, and hopefully it helps you stay in the loop too :)</p><div><hr></div><h3>AI Research &amp; Vulnerabilities</h3><ol><li><p><strong><a href="https://pushsecurity.com/blog/installfix/">Fake Claude Code Install Pages Are Stealing Your Passwords via Google Ads:</a></strong></p><p>Push Security researchers uncovered a campaign they are calling InstallFix, where attackers cloned the Claude Code installation page down to its branding and layout, swapping only the install commands to point to attacker-controlled servers delivering the Amatera infostealer. The fake pages are distributed exclusively through Google-sponsored search results targeting queries like &#8220;Claude Code install,&#8221; and any further clicks on the page redirect to the legitimate site to avoid raising suspicion. Amatera harvests browser-saved passwords, cookies, and session tokens, and uses techniques including direct NTSockets for command-and-control and multi-stage payload delivery to evade antivirus and endpoint detection tools.</p></li><li><p><strong><a href="https://www.huntress.com/blog/openclaw-github-ghostsocks-infostealer">Fake OpenClaw Installers on GitHub Are Deploying Infostealer Malware and a Ransomware Proxy Tool:</a></strong></p><p>Huntress researchers investigated malicious GitHub repositories posing as OpenClaw installers that were active between February 2 and 10, 2026, and were promoted directly through Bing AI search results for the query &#8220;OpenClaw Windows.&#8221; The fake installers deployed multiple information stealers including Vidar and PureLogs Stealer via a novel packer called Stealth Packer, which injects payloads into memory, creates hidden scheduled tasks, and checks for mouse movement to evade virtual machine detection.
The installers also dropped GhostSocks, which routes attacker traffic through the victim&#8217;s machine to bypass MFA and anti-fraud checks when using stolen credentials.</p></li><li><p><strong><a href="https://www.microsoft.com/en-us/security/blog/2026/03/05/malicious-ai-assistant-extensions-harvest-llm-chat-histories/">Malicious AI Assistant Browser Extensions Are Quietly Stealing Your Work Conversations:</a></strong></p><p>Microsoft Defender investigated malicious Chromium-based browser extensions impersonating legitimate AI assistant tools, which accumulated approximately 900,000 installs and were detected across more than 20,000 enterprise tenants. The extensions collected full URLs and AI chat content from platforms including ChatGPT and DeepSeek, then transmitted the data to attacker-controlled domains including deepaichats[.]com and chatsaigpt[.]com. Even if users initially declined data collection, subsequent extension updates automatically re-enabled telemetry without clear user notification, giving the threat actor continuous access to browsing activity and sensitive AI conversations.</p></li><li><p><strong><a href="https://labs.zenity.io/p/perplexedbrowser-perplexity-s-agent-browser-can-leak-your-personal-pc-local-files">A Calendar Invite Is All It Takes to Steal Your Local Files in Perplexity Comet:</a></strong></p><p>Zenity Labs disclosed a vulnerability dubbed PerplexedBrowser affecting Perplexity Comet, an agentic browser that autonomously reads page content, interprets instructions, and takes actions on behalf of users. An attacker can embed malicious instructions inside a calendar invite, which hijacks the browser agent&#8217;s intent during a routine task like accepting a meeting, granting silent read access to the user&#8217;s local file system and enabling exfiltration of credentials, SSH keys, API tokens, and personal documents to attacker-controlled endpoints in under a minute. 
The vulnerability has since been fixed.</p></li></ol><div><hr></div><h3>AI News</h3><ol><li><p><strong><a href="https://blog.mozilla.org/en/firefox/hardening-firefox-anthropic-red-team/">Anthropic&#8217;s Red Team Used Claude to Find 14 High-Severity Bugs Hidden in Firefox:</a></strong></p><p>Anthropic&#8217;s Frontier Red Team used Claude to identify security vulnerabilities in Firefox&#8217;s JavaScript engine, surfacing over a dozen verifiable bugs with reproducible test cases that Mozilla engineers were able to validate and patch within hours. In total, the collaboration uncovered 14 high-severity bugs and resulted in 22 CVEs, plus 90 additional lower-severity findings, all of which have been fixed in Firefox 148. Mozilla noted that while many lower-severity issues overlapped with what traditional fuzzing tools would catch, Claude also identified distinct classes of logic errors that fuzzers had not previously uncovered despite decades of extensive security review. <a href="https://www.anthropic.com/news/mozilla-firefox-security">Anthropic&#8217;s version</a></p></li><li><p><strong><a href="https://openai.com/index/codex-security-now-in-research-preview/">OpenAI Launches Codex Security to Automatically Find and Fix Vulnerabilities in Your Codebase:</a></strong></p><p>OpenAI launched Codex Security, an AI security agent that scans code repositories to identify, validate, and propose fixes for vulnerabilities, now available in research preview to ChatGPT Pro, Enterprise, Business, and Edu customers with free access for the first month. The tool builds a project-specific threat model by analyzing a repository&#8217;s structure, uses that context to search for vulnerabilities ranked by real-world impact, then pressure-tests findings in a sandboxed environment to reduce false positives before surfacing actionable patches. 
</p></li><li><p><strong><a href="https://www.anthropic.com/research/labor-market-impacts">AI Has Not Caused a Measurable Spike in Unemployment Yet, But Hiring for Young Workers Is Slowing:</a></strong></p><p>Anthropic researchers introduced a new metric called observed exposure, which combines theoretical LLM capability with real-world Claude usage data to measure which jobs are actually being automated rather than just which ones could theoretically be affected, finding that computer programmers, customer service representatives, and data entry keyers are currently the most exposed occupations. Analyzing US labor data since ChatGPT&#8217;s release in late 2022, the researchers found no statistically significant increase in unemployment among the most AI-exposed workers, though they did find suggestive evidence that the monthly job-finding rate for workers aged 22 to 25 entering high-exposure occupations has dropped by roughly 14 percent compared to 2022 levels. The study also found that workers in the most exposed occupations tend to skew older, female, more educated, and higher-paid, and that BLS employment growth projections through 2034 are modestly weaker for jobs with higher observed exposure.</p></li></ol><div><hr></div><h3>Cybersecurity Research &amp; Vulnerabilities</h3><ol><li><p><strong><a href="https://www.buchodi.com/your-duolingo-is-talking-to-bytedance-cracking-the-pangle-sdks-encryption/">Duolingo, BeReal, and 38 Other Apps Are Sending Your Device Data to ByteDance with Fake Encryption:</a></strong></p><p>A security researcher reverse-engineered ByteDance&#8217;s Pangle advertising SDK and found that over 40 popular apps including Duolingo, BeReal, and Character.AI are transmitting detailed device fingerprints to ByteDance servers, including battery level, storage capacity, screen brightness, internal IP address, and persistent device identifiers. 
The SDK&#8217;s encryption scheme embeds both the AES-256 key and IV directly inside every message, making it trivially decryptable by anyone who downloads the publicly available SDK, with the researcher achieving a 100% success rate across all 694 captured payloads. Notably, ByteDance applies genuinely strong encryption to its ad revenue and impression data while using the breakable scheme only for user device telemetry.</p></li><li><p><strong><a href="https://codewall.ai/blog/how-we-hacked-mckinseys-ai-platform">An AI Agent Found a SQL Injection in McKinsey's Internal AI Platform and Accessed 46 Million Employee Messages in Two Hours:</a></strong></p><p>Security firm CodeWall ran its autonomous offensive agent against McKinsey's internal AI platform Lilli, used by over 43,000 employees, and within two hours the agent had exploited an unauthenticated SQL injection vulnerability to gain full read and write access to the production database, exposing 46.5 million chat messages, 728,000 files, and 57,000 user accounts. The injection was found in an unprotected endpoint where JSON field names were concatenated directly into SQL queries, a flaw that McKinsey's own internal scanners and OWASP ZAP both missed. Because Lilli's AI system prompts were stored in the same database, an attacker with write access could have silently rewritten the instructions controlling how the AI behaves, without any code deployment or log trail, potentially poisoning advice flowing to consultants and their clients. 
McKinsey patched the vulnerabilities on March 2 following CodeWall&#8217;s responsible disclosure.</p></li><li><p><strong><a href="https://cloud.google.com/blog/topics/threat-intelligence/coruna-powerful-ios-exploit-kit">The iPhone Exploit Kit That Got Passed Around to Three Different Hacker Groups:</a></strong></p><p>Google Threat Intelligence Group identified a sophisticated iOS exploit kit called Coruna containing 23 exploits across five full exploit chains, targeting iPhones running iOS 13.0 through 17.2.1. The kit first appeared in early 2025 in the hands of a commercial surveillance vendor customer, was then observed in watering hole attacks against Ukrainian websites by suspected Russian espionage group UNC6353, and was later recovered in full from fake Chinese cryptocurrency and finance websites operated by financially motivated threat actor UNC6691. The kit&#8217;s final payload, named PLASMAGRID, injects itself into a root-level iOS daemon and deploys modules targeting 18 cryptocurrency wallet apps including MetaMask, Phantom, and Trust Wallet, with all module logging in Chinese and code comments consistent with LLM-assisted development. Google says the kit is not effective against the latest version of iOS and urges users to update immediately or enable Lockdown Mode.</p></li><li><p><strong><a href="https://www.malwarebytes.com/blog/threat-intel/2026/03/fake-cleanmymac-site-installs-shub-stealer-and-backdoors-crypto-wallets">This Fake Mac Cleaning App Steals Your Passwords and Then Permanently Backdoors Your Crypto Wallets:</a></strong></p><p>A fake CleanMyMac site at cleanmymacos[.]org instructs visitors to paste a Terminal command that installs SHub Stealer, an AppleScript-based infostealer that harvests browser passwords, Apple Keychain contents, and Telegram sessions, while also scanning for 102 cryptocurrency wallet browser extensions and 23 desktop wallet apps. 
What sets SHub apart from typical infostealers is that it goes a step further for five wallets (Exodus, Atomic Wallet, Ledger Wallet, Ledger Live, and Trezor Suite), which it silently backdoors by replacing their core application logic file, so every subsequent unlock sends the user&#8217;s password and seed phrase to attacker-controlled infrastructure, even after the initial infection has been cleaned up. The malware also installs a persistent LaunchAgent disguised as Google&#8217;s Keystone updater that beacons every 60 seconds and can execute remote commands indefinitely.</p></li></ol><div><hr></div><h3>Cybersecurity News</h3><ol><li><p><strong><a href="https://www.ic3.gov/PSA/2026/PSA260309">The FBI Is Warning About Scammers Impersonating Local Government Officials to Steal Permit Fees:</a></strong></p><p>The FBI issued an alert about a phishing scheme targeting individuals and businesses with active land-use permit applications, where criminals impersonate city and county planning officials and send convincing emails that include accurate details like property addresses, case numbers, and real officials&#8217; names pulled from public records. Victims are then presented with fake invoices and directed to pay via wire transfer, peer-to-peer payment, or cryptocurrency, with emails deliberately avoiding phone contact to prevent victims from calling the actual government office to verify. 
The FBI advises anyone who receives an unsolicited payment request related to a permit to call the city or county directly using a phone number from the official government website before sending any money.</p></li><li><p><strong><a href="https://www.bluevoyant.com/blog/new-a0backdoor-linked-to-teams-impersonation-and-quick-assist-social-engineering">Hackers Are Posing as IT Support on Microsoft Teams to Install a New Backdoor on Your Computer:</a></strong></p><p>BlueVoyant researchers identified a campaign active since at least August 2025 in which attackers flood a target's inbox with spam, then contact them via Microsoft Teams posing as IT support and request remote access through Windows Quick Assist. Once in, they drop a new backdoor called A0Backdoor inside fake Microsoft Teams installer packages, signed with legitimate-looking certificates to avoid raising flags. The backdoor communicates back to attackers by routing traffic through public DNS servers like 1.1.1.1 rather than connecting directly to attacker infrastructure, a technique designed to blend in with normal network activity and bypass common security monitoring.</p></li><li><p><strong><a href="https://www.cbsnews.com/news/fbi-confirms-its-networks-were-targeted-by-suspicious-cyber-activities/">The FBI Confirmed Its Own Surveillance Network Was Hit in a Cyberattack:</a></strong></p><p>The FBI confirmed it identified and responded to suspicious activity on its internal networks, with sources telling CBS News the targeted system is the bureau&#8217;s Digital Collection Systems Network, a suite of software used to conduct wiretaps, pen registers, and other real-time surveillance operations. The FBI did not disclose when the incident occurred, who was responsible, or whether any data was compromised. 
The bureau&#8217;s statement came amid ongoing fallout from the 2024 Salt Typhoon campaign, in which Chinese state-sponsored hackers breached multiple US telecommunications companies and systems used by US intelligence to conduct wiretaps.</p></li></ol><p><strong>&#128073; Like</strong> this post + <strong>subscribe </strong>to catch next week&#8217;s roundup!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://cysleuths.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Cysleuthing is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[AI Bots Hacked GitHub Pipelines, ChatGPT Abused for Espionage, Spyware Hidden in Google Sheets & More | AI & Cybersecurity Last Week]]></title><description><![CDATA[Covering 2/23/26 - 3/3/26]]></description><link>https://cysleuths.substack.com/p/ai-bots-hacked-github-pipelines-chatgpt</link><guid isPermaLink="false">https://cysleuths.substack.com/p/ai-bots-hacked-github-pipelines-chatgpt</guid><dc:creator><![CDATA[Jasmine Wong]]></dc:creator><pubDate>Wed, 04 Mar 2026 06:21:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jX4q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51643421-e66a-42bb-9a78-0c2ca38e3a08_1024x1024.png" length="0" 
type="image/jpeg"/><content:encoded><![CDATA[<p>A little late on this one because I&#8217;m battling jet lag after Asia. Postcards from Singapore and Bali at the end.</p><p>Hi, I&#8217;m Jasmine &#8212; a Product Security Engineer, and if you know me personally, also quite a bit of a travel bug and matcha enthusiast. I&#8217;ve realized how hard it is to keep up with everything happening in cybersecurity, AI, and tech because there&#8217;s just <em>so much</em> being put out every day. So every week I share the security-related news and stories I found most interesting or relatable. It helps me stay accountable, and hopefully it helps you stay in the loop too :)</p><div><hr></div><h3>AI Research &amp; Vulnerabilities</h3><ol><li><p><strong><a href="https://blog.checkpoint.com/research/check-point-researchers-expose-critical-claude-code-flaws/">Cloning a Malicious Repo in Claude Code Hands Attackers Your API Keys:</a></strong></p><p>Check Point Research discovered two critical vulnerabilities (CVE-2025-59536 and CVE-2026-21852) in Anthropic&#8217;s Claude Code that allowed attackers to achieve remote code execution and steal API keys simply by getting a developer to clone and open a malicious repository. By abusing built-in features like Hooks and Model Context Protocol integrations, attackers could execute hidden shell commands on tool initialization, bypass user consent prompts, and redirect authenticated API traffic to an attacker-controlled server before the developer ever confirmed trust in the project. Because Anthropic&#8217;s Workspace feature ties multiple API keys to shared cloud-stored project files, a single stolen key could expose, modify, or delete shared resources across an entire team. 
Anthropic patched both vulnerabilities prior to public disclosure.</p></li><li><p><strong><a href="https://www.oasis.security/blog/openclaw-vulnerability">Any Website Could Silently Hijack Your OpenClaw AI Agent:</a></strong></p><p>Oasis Security researchers discovered a vulnerability chain in OpenClaw that allows any website a developer visits to silently take full control of their local AI agent with no plugins or user interaction required. The attack works by exploiting the fact that browsers do not block WebSocket connections to localhost, letting malicious website JavaScript connect to the OpenClaw gateway, brute-force the password at hundreds of guesses per second due to a missing rate limit on local connections, and then auto-register as a trusted device without any user prompt. From there, an attacker can read messages, dump configuration data, and execute shell commands across all paired devices. The OpenClaw team classified this as high severity and shipped a fix within 24 hours; users should update to version 2026.2.25 or later immediately.</p></li><li><p><strong><a href="https://unit42.paloaltonetworks.com/gemini-live-in-chrome-hijacking/">Malicious Chrome Extensions Hijack Google&#8217;s Gemini AI Panel to Access Your Camera and Files:</a></strong></p><p>Unit 42 researchers discovered a high-severity vulnerability (CVE-2026-0628) in Chrome&#8217;s Gemini Live panel that allowed malicious browser extensions with only basic permissions to inject JavaScript code into the panel and escalate their privileges to access capabilities the extension would not normally have. Because the Gemini panel is a trusted, browser-level component with access to powerful system resources, an attacker could silently activate the camera and microphone, take screenshots of any HTTPS site, read local files and directories, and display phishing content inside the panel without any user interaction beyond clicking the Gemini button. 
Google was notified on October 23, 2025, and released a fix in early January 2026.</p></li><li><p><strong><a href="https://www.stepsecurity.io/blog/hackerbot-claw-github-actions-exploitation">An AI Bot Spent a Week Hacking GitHub Pipelines at Microsoft, DataDog, and Aqua Security:</a></strong></p><p>Between February 21 and February 28, 2026, an autonomous GitHub account called hackerbot-claw, which describes itself as powered by claude-opus-4-5, systematically targeted CI/CD pipelines across at least 7 major open source repositories using 5 different exploitation techniques including branch name injection, filename injection, poisoned Go scripts, and AI prompt injection. The bot achieved confirmed remote code execution in at least 4 targets, successfully exfiltrated a GitHub token with write permissions from avelino/awesome-go (140k+ stars), and caused the most severe damage at aquasecurity/trivy where a stolen Personal Access Token was used to take the repository private, delete all GitHub releases from versions 0.27.0 through 0.69.1, and push a suspicious artifact to Trivy&#8217;s VS Code extension on the Open VSX marketplace. The only attack that was fully blocked was a prompt injection attempt against ambient-code/platform, where Claude identified the malicious CLAUDE.md instructions and refused to execute them.</p></li><li><p><strong><a href="https://trufflesecurity.com/blog/google-api-keys-werent-secrets-but-then-gemini-changed-the-rules">Your Old Public Google API Keys May Now Be Unlocking Access to Gemini:</a></strong></p><p>Truffle Security researchers found that when the Gemini API is enabled on a Google Cloud project, all existing API keys in that project, including ones publicly embedded in website source code for services like Google Maps, Firebase, and YouTube per Google&#8217;s own documentation, silently gain access to Gemini endpoints with no warning or notification. 
A scan of the November 2025 Common Crawl dataset found 2,863 live exposed keys, including some belonging to Google itself, that an attacker could use to access uploaded files, cached content, and generate unauthorized charges simply by copying the key from a webpage. </p><p></p></li></ol><div><hr></div><h3>AI News</h3><ol><li><p><strong><a href="https://simonlermen.substack.com/p/large-scale-online-deanonymization">AI Can Now Figure Out Who You Are From Your Anonymous Reddit and Hacker News Posts:</a></strong></p><p>Researchers published a paper demonstrating that LLM agents can deanonymize users at scale by inferring personal attributes from anonymous posts, then searching the web to match them to real identities across platforms like Hacker News, Reddit, and LinkedIn with high precision. The method scales gracefully to candidate pools of tens of thousands of users, and the researchers were also able to identify 9 out of 125 individuals in Anthropic&#8217;s anonymized Interviewer dataset simply by having an agent search the web and reason over the transcripts. 
The researchers note that because the attack decomposes into individually benign-looking tasks such as summarizing profiles and ranking candidates, refusal guardrails are easy to bypass with small prompt changes and offer limited protection.</p></li><li><p><strong><a href="https://cdn.openai.com/pdf/df438d70-e3fe-4a6c-a403-ff632def8f79/disrupting-malicious-uses-of-ai.pdf">OpenAI Exposes Romance Scams, Russian Disinformation, and a Massive Chinese Influence Operation Abusing ChatGPT:</a></strong></p><p>OpenAI&#8217;s February 2026 threat report details seven disrupted operations that abused ChatGPT, including a Cambodia-based romance scam network targeting Indonesian men through fake dating platforms, a scam recovery fraud impersonating real law firms and the FBI&#8217;s IC3, a Russia-linked content farm connected to the Rybar network generating disinformation across Africa and beyond, and most notably a Chinese law enforcement-linked operator using ChatGPT to edit status reports on large-scale covert influence operations targeting dissidents, foreign governments, and Japanese Prime Minister Sanae Takaichi. </p></li><li><p><strong><a href="https://www.bleepingcomputer.com/news/microsoft/microsoft-adds-copilot-data-controls-to-all-storage-locations/">Microsoft Is Closing the Gap That Let Copilot Read Sensitive Files Stored on Your Device:</a></strong></p><p>Microsoft is expanding its data loss prevention controls to block Microsoft 365 Copilot from processing sensitivity-labeled Word, Excel, and PowerPoint files stored on local devices, closing a gap that previously limited DLP enforcement to files in SharePoint and OneDrive only. The update, rolling out between late March and late April 2026, comes after a bug discovered January 21 allowed Copilot Chat to summarize confidential emails from users&#8217; Sent Items and Drafts folders for roughly four weeks despite sensitivity labels being applied. 
No changes to existing DLP policies are required, as protection will extend automatically to all storage locations once deployed.</p></li></ol><div><hr></div><h3>Cybersecurity Research &amp; Vulnerabilities</h3><ol><li><p><strong><a href="https://www.malwarebytes.com/blog/privacy/2026/02/inside-a-fake-google-security-check-that-becomes-a-browser-rat">A Fake Google Security Alert Is Turning Your Browser Into a Surveillance Tool:</a></strong></p><p>Malwarebytes researchers discovered a site impersonating a Google Account security page that installs a Progressive Web App and walks victims through granting notification access, their contact list, and GPS location, all framed as protective security steps. A background service worker persists after the tab is closed, allowing the attacker to push new tasks, intercept one-time passwords, and proxy web traffic through the victim&#8217;s browser. Victims who follow every prompt are also delivered an Android APK requesting 33 permissions including SMS, microphone, and accessibility service access, with a custom keyboard for keystroke capture.</p></li><li><p><strong><a href="https://www.microsoft.com/en-us/security/blog/2026/03/02/oauth-redirection-abuse-enables-phishing-malware-delivery/">Attackers Are Abusing Microsoft and Google Login Pages to Redirect Victims to Malware:</a></strong></p><p>Microsoft Defender researchers uncovered phishing campaigns that abuse a legitimate feature of the OAuth protocol, which redirects users after authentication errors, to silently send victims from trusted identity provider URLs like Entra ID and Google Workspace to attacker-controlled pages. 
Attackers craft OAuth requests with intentionally invalid parameters to force an error redirect, then route victims to phishing frameworks or automatically download a ZIP containing malicious LNK files that execute PowerShell, perform host reconnaissance, and establish a C2 connection via DLL side-loading. The campaigns primarily targeted government and public sector organizations using lures themed around document sharing, password resets, social security, and Teams meeting invites.</p></li><li><p><strong><a href="https://blog.oversecured.com/Security-researchers-find-vulnerabilities-in-mental-health-apps/">Popular Mental Health and AI Therapy Apps Are Leaking Your Most Private Conversations:</a></strong></p><p>Oversecured researchers identified critical vulnerabilities in several of the most popular mental health apps on Google Play, with a combined download count in the tens of millions, that could allow any other app on the same device to intercept sensitive data including therapy chat history and mood tracking records. The flaw stems from how the apps broadcast data using Android&#8217;s intent system without specifying a recipient, meaning a hidden malicious app such as a flashlight or calculator could silently capture messages in the background and send them to an attacker&#8217;s server. 
Specific app names and technical details have not been disclosed as the vulnerabilities remain unpatched, and mental health apps are typically not covered by HIPAA regulations that protect traditional healthcare data.</p></li><li><p><strong><a href="https://annex.security/blog/pixel-perfect/">QuickLens, a Google-Endorsed Chrome Extension, Was Sold and Weaponized:</a></strong></p><p>Annex researchers found that QuickLens, a Google Lens wrapper Chrome extension that had earned a Featured badge from Google and amassed 7,000 users, was sold through the ExtensionHub marketplace and weaponized with version 5.8 on February 17, 2026, by its new owner, an unverified entity operating under the domain supportdoodlebuggle.top. The update quietly added a command-and-control server, stripped content security policy headers from every page the user visited, and used a hidden 1x1 transparent GIF image to execute attacker-delivered JavaScript via an inline onload attribute, meaning the malicious code never appeared in the extension&#8217;s source files and was instead delivered dynamically through local storage. With CSP headers removed across all sites, the injected code could freely read session tokens, capture form inputs, scrape page content, and exfiltrate data without the user noticing anything beyond a single permission prompt.</p></li><li><p><strong><a href="https://cloud.google.com/blog/topics/threat-intelligence/disrupting-gridtide-global-espionage-campaign/">Google Shuts Down Chinese Spy Group That Used Google Sheets to Hack Telecoms in 42 Countries:</a></strong></p><p>Google Threat Intelligence Group and Mandiant dismantled a global espionage campaign by suspected Chinese state-linked group UNC2814, which had quietly compromised telecommunications providers and government organizations across 53 confirmed victims in 42 countries since at least 2017 using a novel backdoor called GRIDTIDE. 
The malware used Google Sheets as its command-and-control channel, polling a spreadsheet cell for instructions and writing stolen data back into it to disguise all malicious traffic as legitimate Google API calls. Google responded by terminating all attacker-controlled Cloud projects, disabling associated accounts, sinkholing known infrastructure, and notifying confirmed victims, while also releasing a full set of IOCs and detection rules for the activity.</p></li></ol><div><hr></div><h3>Cybersecurity News</h3><ol><li><p><strong><a href="https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2026/02/reddit-issued-with-1447m-fine-for-children-s-privacy-failures/">Reddit Fined $19.5 Million by UK Regulator for Letting Children Use the Platform Undetected:</a></strong></p><p>The UK Information Commissioner&#8217;s Office fined Reddit &#163;14.47 million ($19.5 million) on February 24, 2026, after an investigation found the platform had no age verification mechanism in place until July 2025, despite its own terms of service prohibiting users under 13, resulting in a large number of children having their personal data collected and used without a lawful basis. Reddit also failed to conduct a mandatory data protection impact assessment on the risks to children&#8217;s data prior to January 2025. Reddit has stated it intends to appeal the fine, which is the largest the ICO has issued specifically for a children&#8217;s privacy offense.</p></li><li><p><strong><a href="https://www.bleepingcomputer.com/news/security/48m-in-crypto-stolen-after-korean-tax-agency-exposes-wallet-seed/">South Korea&#8217;s Tax Agency Accidentally Posts Crypto Wallet Password in Press Release, Loses $4.8 Million:</a></strong></p><p>South Korea&#8217;s National Tax Service published a press release on February 26, 2026 celebrating a crackdown on 124 tax evaders, but included unredacted photos showing a handwritten mnemonic seed phrase sitting next to a seized Ledger hardware wallet. 
Within hours, an unknown actor deposited a small amount of Ethereum to cover gas fees and drained all 4 million Pre-Retogeum (PRTG) tokens worth approximately $4.8 million across three on-chain transactions. The NTS has since retracted the press release, requested police assistance to recover the funds, and announced an external security review.</p></li><li><p><strong><a href="https://www.varonis.com/blog/1campaign">New Tool Lets Attackers Run Malicious Google Ads While Hiding From Security Scanners:</a></strong></p><p>Varonis Threat Labs uncovered 1Campaign, a full-service cloaking platform designed to help attackers run fraudulent Google Ads campaigns by showing harmless pages to Google&#8217;s reviewers and security scanners while routing real victims to phishing pages or crypto drainer sites. The platform assigns fraud scores to every visitor, automatically blocking traffic from cloud providers, VPNs, and known security vendors including Microsoft and Google by ISP and IP range, with one observed campaign blocking over 99% of all visitors while approving only 10 out of 1,676. 
A built-in Google Ads launcher assistant also allows operators to bypass ad policy restrictions and impersonate legitimate brands in ad headings and descriptions.</p></li></ol><p></p><p>as promised:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!IxkC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe30a0cd2-40f3-4874-ad92-07b37a9c3647.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!IxkC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe30a0cd2-40f3-4874-ad92-07b37a9c3647.heic 424w, https://substackcdn.com/image/fetch/$s_!IxkC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe30a0cd2-40f3-4874-ad92-07b37a9c3647.heic 848w, https://substackcdn.com/image/fetch/$s_!IxkC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe30a0cd2-40f3-4874-ad92-07b37a9c3647.heic 1272w, https://substackcdn.com/image/fetch/$s_!IxkC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe30a0cd2-40f3-4874-ad92-07b37a9c3647.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!IxkC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe30a0cd2-40f3-4874-ad92-07b37a9c3647.heic" width="1456" height="1941" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e30a0cd2-40f3-4874-ad92-07b37a9c3647.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1941,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3728328,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://cysleuths.substack.com/i/189841600?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe30a0cd2-40f3-4874-ad92-07b37a9c3647.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!IxkC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe30a0cd2-40f3-4874-ad92-07b37a9c3647.heic 424w, https://substackcdn.com/image/fetch/$s_!IxkC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe30a0cd2-40f3-4874-ad92-07b37a9c3647.heic 848w, https://substackcdn.com/image/fetch/$s_!IxkC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe30a0cd2-40f3-4874-ad92-07b37a9c3647.heic 1272w, https://substackcdn.com/image/fetch/$s_!IxkC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe30a0cd2-40f3-4874-ad92-07b37a9c3647.heic 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" 
stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Ulun Danu Beratan Temple, Bali, Indonesia</figcaption></figure></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4ZO4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c51a48-d50e-4aae-96cd-a23363746ed2.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4ZO4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c51a48-d50e-4aae-96cd-a23363746ed2.heic 424w, 
https://substackcdn.com/image/fetch/$s_!4ZO4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c51a48-d50e-4aae-96cd-a23363746ed2.heic 848w, https://substackcdn.com/image/fetch/$s_!4ZO4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c51a48-d50e-4aae-96cd-a23363746ed2.heic 1272w, https://substackcdn.com/image/fetch/$s_!4ZO4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c51a48-d50e-4aae-96cd-a23363746ed2.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!4ZO4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c51a48-d50e-4aae-96cd-a23363746ed2.heic" width="1456" height="1941" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a1c51a48-d50e-4aae-96cd-a23363746ed2.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1941,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4503197,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://cysleuths.substack.com/i/189841600?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c51a48-d50e-4aae-96cd-a23363746ed2.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!4ZO4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c51a48-d50e-4aae-96cd-a23363746ed2.heic 424w, 
https://substackcdn.com/image/fetch/$s_!4ZO4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c51a48-d50e-4aae-96cd-a23363746ed2.heic 848w, https://substackcdn.com/image/fetch/$s_!4ZO4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c51a48-d50e-4aae-96cd-a23363746ed2.heic 1272w, https://substackcdn.com/image/fetch/$s_!4ZO4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1c51a48-d50e-4aae-96cd-a23363746ed2.heic 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" 
y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Kayon Valley Resort, Ubud, Bali, Indonesia</figcaption></figure></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1SvX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14eaa410-d21b-47c6-acf6-c4be7a7c7455.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1SvX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14eaa410-d21b-47c6-acf6-c4be7a7c7455.heic 424w, https://substackcdn.com/image/fetch/$s_!1SvX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14eaa410-d21b-47c6-acf6-c4be7a7c7455.heic 848w, https://substackcdn.com/image/fetch/$s_!1SvX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14eaa410-d21b-47c6-acf6-c4be7a7c7455.heic 1272w, https://substackcdn.com/image/fetch/$s_!1SvX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14eaa410-d21b-47c6-acf6-c4be7a7c7455.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!1SvX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14eaa410-d21b-47c6-acf6-c4be7a7c7455.heic" width="1456" height="1941"
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/14eaa410-d21b-47c6-acf6-c4be7a7c7455.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1941,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2920675,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://cysleuths.substack.com/i/189841600?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14eaa410-d21b-47c6-acf6-c4be7a7c7455.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1SvX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14eaa410-d21b-47c6-acf6-c4be7a7c7455.heic 424w, https://substackcdn.com/image/fetch/$s_!1SvX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14eaa410-d21b-47c6-acf6-c4be7a7c7455.heic 848w, https://substackcdn.com/image/fetch/$s_!1SvX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14eaa410-d21b-47c6-acf6-c4be7a7c7455.heic 1272w, https://substackcdn.com/image/fetch/$s_!1SvX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14eaa410-d21b-47c6-acf6-c4be7a7c7455.heic 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" 
stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">OCBC Skyway, Marina Bay Sands, Singapore</figcaption></figure></div><p></p><p><strong>&#128073; Like</strong> this post + <strong>subscribe </strong>to catch next week&#8217;s roundup!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://cysleuths.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Cysleuthing is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[$1.8M AI Error, Credential Theft, AI-Assisted Attacks & More | AI & Cybersecurity Last Week]]></title><description><![CDATA[Covering 2/16/26 - 2/22/26]]></description><link>https://cysleuths.substack.com/p/18m-ai-error-credential-theft-ai</link><guid isPermaLink="false">https://cysleuths.substack.com/p/18m-ai-error-credential-theft-ai</guid><dc:creator><![CDATA[Jasmine Wong]]></dc:creator><pubDate>Mon, 23 Feb 2026 22:57:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jX4q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51643421-e66a-42bb-9a78-0c2ca38e3a08_1024x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Hi, I&#8217;m Jasmine &#8212; a Product Security Engineer, and if you know me personally, also quite a bit of a travel bug and matcha enthusiast. I&#8217;ve realized how hard it is to keep up with everything happening in cybersecurity, AI, and tech because there&#8217;s just <em>so much</em> being put out every day. So every week I share the security-related news and stories I found most interesting or relatable. 
It helps me stay accountable, and hopefully it helps you stay in the loop too :)</p><div><hr></div><h3>AI Research &amp; Vulnerabilities</h3><ol><li><p><strong><a href="https://www.welivesecurity.com/en/eset-research/promptspy-ushers-in-era-android-threats-using-genai/">First Android Malware That Uses Google&#8217;s Gemini AI</a>:</strong></p><p>ESET researchers discovered PromptSpy, the first known Android malware to use generative AI in its execution flow. Disguised as a fake Chase Bank app, it uses Google&#8217;s Gemini to analyze the device&#8217;s screen in real time and receive instructions on how to keep itself pinned in the recent apps list so it cannot be closed. It also gives attackers full remote control of the device, intercepts lockscreen PINs, records the screen, and blocks uninstallation using invisible overlays over removal buttons. The malware has never appeared on Google Play, and devices with Google Play Services are automatically protected by Google Play Protect.</p></li><li><p><strong><a href="https://www.hudsonrock.com/blog/6182">Infostealers Are Now Targeting OpenClaw AI Agent Files to Steal Your Digital Identity</a>:</strong></p><p>Hudson Rock detected a live infostealer infection where an attacker exfiltrated a victim&#8217;s entire OpenClaw AI agent configuration, including their authentication token, private cryptographic keys, and personal memory files containing daily activity logs, private messages, and calendar events. The malware did not use a specialized module, instead sweeping broadly for sensitive file extensions and directory names. 
With the stolen files, an attacker can remotely connect to the victim&#8217;s local AI instance and sign messages as the victim&#8217;s device.</p></li><li><p><strong><a href="https://research.eye.security/log-poisoning-in-openclaw/">Researchers Found a Way to Poison OpenClaw&#8217;s Logs</a>:</strong></p><p>Eye Security researchers discovered that OpenClaw logs raw User-Agent and Origin header values from WebSocket connections without any filtering or sanitization, allowing an attacker to inject up to 14,800 characters of content into log files that the AI agent may later read and interpret. If a user asks OpenClaw to debug an issue, the agent could ingest the poisoned logs and have its reasoning manipulated. During testing, OpenClaw&#8217;s guardrails detected and refused to act on the injected payload, but researchers noted the payload size leaves significant room for more convincing attempts. The vulnerability has since been patched in version 2026.2.13 by the OpenClaw maintainers.</p></li><li><p><strong><a href="https://www.bleepingcomputer.com/news/microsoft/microsoft-says-bug-causes-copilot-to-summarize-confidential-emails/">Microsoft Copilot Bug Had Been Summarizing Confidential Emails Since January Despite DLP Policies</a>:</strong></p><p>A bug in Microsoft 365 Copilot has been causing the AI assistant to read and summarize emails from users&#8217; Sent Items and Drafts folders since January 21, even when those emails had confidentiality sensitivity labels applied and data loss prevention policies configured to restrict automated access. Microsoft confirmed a code error allowed Copilot to pick up the labeled items and began rolling out a fix in early February. 
Microsoft noted that the bug did not provide anyone access to information they were not already authorized to see, and as of February 20 stated the root cause has been addressed for most customers.</p></li><li><p><strong><a href="https://www.ox.security/blog/npm-worm-hijacks-ci-workflows-ai-packages/">A Self-Spreading NPM Worm Is Targeting AI Coding Tools and Stealing Developer Credentials</a>:</strong></p><p>A newly discovered NPM worm is stealing API keys, tokens, and environment variables from developer machines while spreading itself across the NPM ecosystem using stolen maintainer credentials to publish malicious versions of legitimate packages. The worm uses typosquatting to get installed initially and delays its malicious behavior by 48 hours after installation to avoid detection. Once active, it injects malicious MCP server configurations into AI coding tools including Claude Code, Cursor, and VS Code, tampers with global Git configurations, and exfiltrates stolen data to an attacker-controlled server. Any machine suspected of infection should be treated as fully compromised and all credentials rotated immediately.</p></li><li><p><strong><a href="https://socket.dev/blog/cline-cli-npm-package-compromised-via-suspected-cache-poisoning-attack">The Cline CLI Was Compromised and Silently Installed OpenClaw on Developer Machines</a>:</strong></p><p>On February 17, 2026, an unauthorized party used a compromised npm publish token to push cline@2.3.0 to the npm registry, where it sat live for approximately eight hours before being deprecated. The only modification was a postinstall script that silently installed OpenClaw. The token compromise is suspected to have resulted from the <a href="https://adnanthekhan.com/posts/clinejection/">GitHub Actions cache poisoning attack chain documented by researcher Adnan Khan</a>, who had privately reported the underlying vulnerability six weeks earlier across multiple channels before going public on February 9. 
If you installed or updated the Cline CLI on February 17, update to 2.4.0 or higher and run npm uninstall -g openclaw to remove it.</p></li><li><p><strong><a href="https://www.microsoft.com/en-us/research/blog/media-authenticity-methods-in-practice-capabilities-limitations-and-directions/">Microsoft Publishes New Research on How to Tell If a Photo or Video Is Real or AI-Generated</a>:</strong></p><p>Microsoft Research published a report evaluating three methods for verifying whether digital content is authentic or AI-generated: secure provenance using C2PA standards, imperceptible watermarking, and soft hash fingerprinting. The report found that high-confidence authentication is achievable when a C2PA provenance manifest is used in a secure environment, and becomes even more reliable when linked with an imperceptible watermark as a backup layer. It also identified a new category of attack called sociotechnical provenance attacks, where authentic content can be made to appear synthetic and synthetic content can be made to appear authentic. Fingerprinting was found to not enable high-confidence validation and can involve significant costs at scale, though it can support manual forensics.</p></li></ol><div><hr></div><h3>AI News</h3><ol><li><p><strong><a href="https://cybernews.com/crypto/claude-vibe-coded-smart-contract-cost-defi-protocol-1-8m-in-losses/">A Claude-Assisted Smart Contract Error Cost a DeFi Protocol $1.8 Million</a>:</strong></p><p>DeFi protocol Moonwell was exploited for approximately $1.8 million after a critical pricing error was introduced into its code, which GitHub history shows was co-written with Claude Opus 4.6. The misconfiguration caused an oracle to report the price of crypto asset cbETH as roughly $1.12 instead of its actual market value of around $2,200, prompting liquidation bots to immediately target cbETH collateral positions and leave the protocol with $1.78 million in bad debt. 
Moonwell&#8217;s own incident summary does not mention AI, attributing the issue solely to an oracle misconfiguration. The incident has sparked debate in the security community about the risks of AI-assisted smart contract development.</p></li><li><p><strong><a href="https://aws.amazon.com/blogs/security/ai-augmented-threat-actor-accesses-fortigate-devices-at-scale/">A Low-Skill Hacker Used Commercial AI to Compromise Over 600 Fortinet Devices Across 55 Countries</a>:</strong></p><p>Amazon Threat Intelligence tracked a financially motivated threat actor who used multiple commercial generative AI services to compromise over 600 FortiGate devices across more than 55 countries. The actor had low-to-medium baseline technical skill but used AI to generate attack plans, write custom tooling, and orchestrate post-exploitation steps at a scale that would previously have required a much larger team. Initial access came entirely from exposed management interfaces and weak single-factor credentials, with no FortiGate vulnerabilities exploited. Once inside, the actor targeted Active Directory environments and Veeam backup servers, consistent with pre-ransomware activity.</p></li><li><p><strong><a href="https://www.anthropic.com/news/claude-code-security">Anthropic Launches Claude Code Security</a>:</strong></p><p>Anthropic launched Claude Code Security, now available in a limited research preview for Enterprise and Team customers. 
Anthropic noted that using Claude Opus 4.6, their team found over 500 vulnerabilities in production open-source codebases that had gone undetected for decades, and open-source maintainers can apply for free, expedited access.</p></li><li><p><strong><a href="https://openai.com/index/introducing-evmbench/">OpenAI Releases a Benchmark to Test How Well AI Can Hack and Patch Crypto Smart Contracts</a>:</strong></p><p>OpenAI and Paradigm introduced EVMbench, a benchmark that evaluates AI agents on their ability to detect, patch, and exploit high-severity smart contract vulnerabilities drawn from 120 curated vulnerabilities across 40 real audits. Alongside the release, OpenAI committed $10 million in API credits to support defensive cybersecurity research, particularly for open-source software and critical infrastructure.</p></li></ol><div><hr></div><h3>Cybersecurity Research &amp; Vulnerabilities</h3><ol><li><p><strong><a href="https://www.ox.security/blog/cve-2025-65717-live-server-vscode-vulnerability">A Critical Flaw in This 72 Million Install VS Code Extension Can Steal Your Local Files With Just a Link</a>:</strong></p><p>OX Security discovered a critical vulnerability (CVE-2025-65717, CVSS 9.1) in the Live Server VS Code extension, which has over 72 million installs, that allows an unauthenticated attacker to exfiltrate files from a developer&#8217;s local machine by simply sending them a malicious link while the extension is running. Because Live Server does not implement CORS protections by default, any remote webpage can make cross-origin requests to localhost:5500 and recursively crawl and steal files including source code, environment variables, API keys, and passwords. 
The vulnerability was disclosed in August 2025 and as of this writing the maintainer has not responded.</p></li><li><p><strong><a href="https://www.ctm360.com/reports/ninja-browser-lumma-infostealer">Attackers Are Weaponizing Google Groups and Google Drive to Deliver Malware Tailored to Your Operating System</a>:</strong></p><p>CTM360 identified a global malware campaign abusing Google Groups, Google Docs, and Google Drive to distribute malware, with over 4,000 malicious Google Groups and 3,500 Google-hosted URLs identified in their sample. The campaign uses malicious redirectors that detect the victim&#8217;s operating system and deliver different payloads: Windows users receive the Lumma infostealer, which harvests browser credentials, saved passwords, and session cookies before exfiltrating them to a C2 server, while Linux users receive Ninja Browser, a trojan disguised as a privacy-focused Chromium browser that silently installs malicious extensions, steals credentials, and configures daily scheduled tasks to pull attacker-controlled updates. Targeted brand names are embedded in the malicious Google content to increase credibility, with victims including customers of organizations such as Microsoft, Google, Bank of America, HSBC, and Citibank.</p></li></ol><div><hr></div><h3>Cybersecurity News</h3><ol><li><p><strong><a href="https://www.eurail.com/en/ni/data-security-incident">Eurail Confirms Customer Data Including Passport Numbers Is Now Being Sold on the Dark Web</a>:</strong></p><p>Eurail B.V. confirmed a security breach that resulted in unauthorized access to customer data, including order and reservation information, basic identity and contact details, and in some cases passport numbers, country of issuance, and expiry dates. The stolen data has been offered for sale on the dark web and a sample dataset has been published on Telegram, though Eurail states it does not store bank or credit card information. 
Affected customers are advised to update their Rail Planner app password, monitor bank accounts for unusual activity, and remain vigilant against unsolicited communications requesting personal information.</p></li><li><p><strong><a href="https://techcrunch.com/2026/02/13/fintech-lending-giant-figure-confirms-data-breach/">Nearly 1 Million Figure Fintech Accounts Exposed After Employee Was Tricked Into Handing Over Access</a>:</strong></p><p>Blockchain-native fintech company Figure Technology Solutions was breached in a social engineering attack where an employee was tricked into providing access, resulting in the theft of data from 950k+ accounts including names, email addresses, phone numbers, physical addresses, and dates of birth. The ShinyHunters extortion group claimed responsibility and published 2.5GB of stolen data on their dark web leak site.</p></li><li><p><strong><a href="https://www.bleepingcomputer.com/news/security/data-breach-at-french-bank-registry-impacts-12-million-accounts/">Hackers Used a Stolen Government Employee&#8217;s Credentials to Access France&#8217;s National Bank Account Registry</a>:</strong></p><p>The French Ministry of Finance disclosed that a threat actor used credentials stolen from a civil servant to access FICOBA, France&#8217;s centralized national bank account registry, exposing data from approximately 1.2 million accounts. The data includes bank account details such as RIBs and IBANs, account holder identity, physical addresses, and in some cases taxpayer identification numbers. 
FICOBA remains offline while the Ministry of Finance, DGFiP, and ANSSI work to restore the system with enhanced security.</p></li></ol><p></p><p><strong>&#128073; Like</strong> this post + <strong>subscribe </strong>to catch next week&#8217;s roundup!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://cysleuths.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Cysleuthing is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Claude Desktop Compromise, Developer Recruiting Scams, Fake AI Assistants & More | AI & Cybersecurity Last Week]]></title><description><![CDATA[Covering 2/3/26 - 2/8/26]]></description><link>https://cysleuths.substack.com/p/and-more-ai-and-cybersecurity-last</link><guid isPermaLink="false">https://cysleuths.substack.com/p/and-more-ai-and-cybersecurity-last</guid><dc:creator><![CDATA[Jasmine Wong]]></dc:creator><pubDate>Mon, 16 Feb 2026 14:01:52 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jX4q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51643421-e66a-42bb-9a78-0c2ca38e3a08_1024x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Hi, I&#8217;m Jasmine &#8212; a Product Security Engineer, and if you know me personally, also quite a bit of a travel bug and matcha 
enthusiast. I&#8217;ve realized how hard it is to keep up with everything happening in cybersecurity, AI, and tech because there&#8217;s just <em>so much</em> being put out every day. So every week I share the security-related news and stories I found most interesting or relatable. It helps me stay accountable, and hopefully it helps you stay in the loop too :) </p><div><hr></div><h3>AI Research &amp; Vulnerabilities</h3><ol><li><p><strong><a href="https://layerxsecurity.com/blog/claude-desktop-extensions-rce/">A Single Calendar Invite Can Silently Hijack Your Computer if You Use Claude Desktop Extensions:</a></strong></p><p>LayerX researchers discovered a zero-click remote code execution vulnerability in Claude Desktop Extensions that affects more than 10,000 active users and 50 extensions. Unlike browser extensions that run in a sandboxed environment, Claude Desktop Extensions execute with full system privileges, allowing them to be chained from a low-risk connector like Google Calendar into a local code executor with no user confirmation required. Researchers triggered the full exploit using nothing more than a vague prompt like &#8220;take care of it&#8221; paired with a calendar event containing instructions to pull and run code from a remote repository, earning the flaw a CVSS score of 10/10.</p></li><li><p><strong><a href="https://www.bleepingcomputer.com/news/security/claude-llm-artifacts-abused-to-push-mac-infostealers-in-clickfix-attack/">Attackers Are Using Claude AI Pages to Trick Mac Users Into Installing Malware:</a></strong></p><p>Researchers at Moonlock Lab and AdGuard uncovered a ClickFix campaign that abuses publicly shared Claude artifacts and Google sponsored search results to deliver the MacSync infostealer to macOS users. 
When victims search for technical terms like &#8220;online DNS resolver&#8221; or &#8220;HomeBrew,&#8221; a promoted result leads them to a Claude artifact posing as a legitimate security guide, which instructs them to paste and run a base64-encoded command in Terminal. Executing the command downloads MacSync, which exfiltrates keychain credentials, browser data, and cryptocurrency wallet files to a remote server. The malicious Claude artifact had already received over 15,600 views at the time of discovery, and a similar campaign previously abused ChatGPT and Grok in the same way.</p></li><li><p><strong><a href="https://layerxsecurity.com/blog/aiframe-fake-ai-assistant-extensions-targeting-260000-chrome-users-via-injected-iframes/">260,000 Chrome Users Installed Fake AI Assistant Extensions That Were Spying on Them:</a></strong></p><p>LayerX researchers uncovered a coordinated campaign of 30 Chrome extensions impersonating popular AI tools like Claude, ChatGPT, Gemini, and Grok, collectively installed by over 260,000 users. Rather than running locally, the extensions inject a remote, server-controlled iframe as their interface, meaning operators can silently change the extension&#8217;s behavior at any time without a Chrome Web Store update. The extensions extract page content, capture voice input, and in a subset of 15 Gmail-targeted extensions, read email thread content directly from the DOM and transmit it to third-party backend infrastructure. 
When one extension was removed from the Chrome Web Store, an identical copy was re-published under a new name and ID less than two weeks later.</p></li><li><p><strong><a href="https://www.microsoft.com/en-us/security/blog/2026/02/09/prompt-attack-breaks-llm-safety/">One Unlabeled Prompt Is All It Takes to Strip an AI of Its Safety Guardrails:</a></strong></p><p>Microsoft researchers discovered that a training technique called Group Relative Policy Optimization (GRPO), commonly used to improve model behavior, can also be weaponized to completely remove a model&#8217;s safety alignment in a process they call GRP-Obliteration. In their experiments, a single unlabeled prompt asking a model to create a fake news article was enough to unalign 15 different language models, causing them to become more permissive across harmful categories far beyond what the original prompt covered. The same technique was also applied to safety-tuned text-to-image diffusion models with consistent results.</p></li><li><p><a href="https://www-cdn.anthropic.com/f21d93f21602ead5cdbecb8c8e1c765759d9e232.pdf">Anthropic Releases Formal Report on Claude Opus 4.6's Alignment and Safety Risks:</a></p><p>Anthropic released a Sabotage Risk Report for Claude Opus 4.6 assessing whether the model could use its access to internal systems to manipulate research, insert backdoors, poison training data, or otherwise take actions that could lead to catastrophic outcomes. After conducting alignment audits, interpretability investigations, simulated sabotage scenarios, and monitoring of real internal usage, Anthropic concluded that Opus 4.6 shows no evidence of dangerous coherent misaligned goals, though the overall risk is characterized as very low but not negligible. Notably, the report flags that the model showed increased willingness to complete suspicious side tasks (i.e. 
support for chemical weapon development) without detection compared to prior models, and in one multi-agent test environment was more willing to manipulate or deceive other participants than previous Claude versions.</p></li><li><p><strong><a href="https://cloud.google.com/blog/topics/threat-intelligence/distillation-experimentation-integration-ai-adversarial-use">Google Tracked Nation-State Hackers, AI-Powered Malware, and Model Theft Attempts in Latest GTIG Report:</a></strong></p><p>Google Threat Intelligence Group published its Q4 2025 findings showing government-backed threat actors from North Korea, Iran, China, and Russia actively misusing Gemini across the full attack lifecycle, from reconnaissance and phishing lure creation to malware development and code translation. Separately, Google disrupted a wave of model extraction attacks where private sector entities and researchers used legitimate API access to systematically clone Gemini&#8217;s reasoning capabilities at scale, with one campaign generating over 100,000 prompts. A new malware family called HONESTCUE was also identified, which calls the Gemini API mid-execution to generate and run second-stage payloads entirely in memory, leaving no files on disk. 
Google disabled all identified accounts and projects associated with the malicious activity and used its findings to strengthen Gemini&#8217;s classifiers and safety responses.</p></li></ol><div><hr></div><h3>AI News</h3><ol><li><p><strong><a href="https://openai.com/index/introducing-lockdown-mode-and-elevated-risk-labels-in-chatgpt/">OpenAI Introduces Lockdown Mode and Elevated Risk Labels to Help Protect Against Prompt Injection Attacks:</a></strong></p><p>OpenAI released two new security features for ChatGPT targeting prompt injection attacks: Lockdown Mode, an optional setting aimed at high-risk users like executives and security teams, and Elevated Risk labels, which flag capabilities across ChatGPT, ChatGPT Atlas, and Codex that carry additional network or data exposure risk. When Lockdown Mode is enabled, web browsing is restricted to cached content only, Deep Research and Agent Mode are disabled, and ChatGPT cannot download files for data analysis, all designed to cut off the outbound network routes attackers would use to exfiltrate sensitive data. The feature is currently available for ChatGPT Enterprise, Edu, Healthcare, and Teachers plans, with a consumer rollout planned in the coming months.</p></li><li><p><strong><a href="https://www.bbc.com/news/articles/c62dlvdq3e3o">An OpenAI Researcher Just Quit Over ChatGPT Ads, and She Is Not the Only One:</a></strong></p><p>OpenAI researcher Zoe Hitzig resigned the same day the company began testing ads in ChatGPT, publishing an essay in The New York Times explaining that her concern is not ads themselves but the incentives they create. She argued that ChatGPT sits on an unprecedented archive of private human conversations and that building an advertising model on top of that creates a risk of manipulation that current tools cannot fully understand or prevent. 
Hitzig drew a direct comparison to Facebook, which made early promises about user data protections that were later abandoned as ad revenue pressure mounted. Her departure follows a separate resignation by Anthropic researcher Mrinank Sharma, who posted a cryptic public letter citing safety concerns, and comes as at least half of xAI&#8217;s 12 co-founders have also now quit.</p></li></ol><div><hr></div><h3>Cybersecurity Research &amp; Vulnerabilities</h3><ol><li><p><strong><a href="https://www.reversinglabs.com/blog/fake-recruiter-campaign-crypto-devs">North Korea&#8217;s Lazarus Group Is Posing as Crypto Recruiters to Hack Developers:</a></strong></p><p>ReversingLabs identified a new campaign by North Korea&#8217;s Lazarus Group, dubbed &#8220;graphalgo,&#8221; that has been active since May 2025 and targets JavaScript and Python developers working in crypto and blockchain. Attackers pose as recruiters on LinkedIn, Reddit, and Facebook, directing victims to complete what appear to be legitimate coding interview tasks that contain hidden malicious dependencies published to npm and PyPI. Once a developer runs the task, the malware installs a remote access trojan capable of downloading files, executing commands, and checking for the Metamask browser extension to target cryptocurrency funds. One package, bigmathutils, quietly accumulated over 10,000 downloads in its clean form before a malicious version was silently pushed, a tactic that reflects the campaign&#8217;s deliberate patience in building trust before deploying the payload.</p></li><li><p><strong><a href="https://www.koi.ai/blog/agreetosteal-the-first-malicious-outlook-add-in-leads-to-4-000-stolen-credentials">A Dead Outlook Add-In Was Quietly Hijacked and Used to Steal 4,000 Microsoft Credentials:</a></strong></p><p>In 2022, a developer built a legitimate meeting scheduling tool called AgreeTo, published it to the Microsoft Office Add-In Store, and eventually abandoned it. 
Because Office add-ins load content from a live URL rather than a static bundle, when the Vercel subdomain the add-in pointed to became unclaimed, an attacker grabbed it and deployed a phishing page that ran directly inside Outlook&#8217;s trusted sidebar with full read and write access to victims&#8217; email. Microsoft reviewed and signed the original manifest in 2022 and never checked what the URL served again, meaning the attacker never had to submit anything for review. Researchers accessed the attacker&#8217;s poorly secured exfiltration channel and recovered over 4,000 stolen Microsoft credentials, credit card numbers, and banking security answers, with new victims being compromised at the time of publication.</p></li><li><p><strong><a href="https://cloud.google.com/blog/topics/threat-intelligence/unc1069-targets-cryptocurrency-ai-social-engineering">North Korean Hackers Are Using Deepfakes and Fake Zoom Calls to Steal Crypto:</a></strong></p><p>Mandiant investigated a targeted attack on a fintech company by UNC1069, a North Korea-linked group, where the victim was lured into a fake Zoom meeting after being contacted through a hijacked Telegram account belonging to a real crypto executive. During the call, a ClickFix attack disguised as audio troubleshooting executed the initial infection, and the victim reported seeing a deepfake video of a cryptocurrency CEO. 
The compromise resulted in seven distinct malware families being deployed on a single host, designed to steal Keychain credentials, browser data, Telegram sessions, keystrokes, and cookies, with Mandiant assessing the unusually heavy tooling was intended to harvest enough data to fund both immediate crypto theft and future social engineering campaigns using the victim&#8217;s own identity.</p></li></ol><div><hr></div><h3>Cybersecurity News</h3><ol><li><p><strong><a href="https://www.csa.gov.sg/news-events/press-releases/largest-multi-agency-cyber-operation-mounted-to-counter-threat-posed-by-advanced-persistent-threat--apt--actor-unc3886-to-singapore-s-telecommunications-sector/">Singapore Reveals Its Largest Ever Cyber Operation After a State-Sponsored Group Targeted All Four Major Telecom Providers:</a></strong></p><p>Singapore&#8217;s Cyber Security Agency and Infocomm Media Development Authority disclosed that APT actor UNC3886 conducted a deliberate, targeted campaign against all four of Singapore&#8217;s major telecommunications operators including Singtel, StarHub, M1, and SIMBA Telecom. The attackers used a zero-day exploit to bypass perimeter firewalls and deployed rootkits to maintain persistent access and evade detection, though investigators found no evidence that customer data was accessed or that services were disrupted. 
The government response, codenamed Operation CYBER GUARDIAN, involved over 100 cyber defenders across six agencies over eleven months and is described as Singapore&#8217;s largest coordinated cyber incident response effort to date.</p></li><li><p><strong><a href="https://www.bleepingcomputer.com/news/security/fugitive-behind-73m-pig-butchering-scheme-gets-20-years-in-prison/">A Pig Butchering Scammer Was Sentenced to 20 Years in Prison, Even Though He Is Still on the Run:</a></strong></p><p>Daren Li, 42, was sentenced in absentia to 20 years in federal prison after pleading guilty in November 2024 to laundering over $73 million stolen from American victims through pig butchering scams operated out of Cambodia. Li had been arrested at Atlanta&#8217;s Hartsfield-Jackson airport in April 2024, but cut off his ankle monitor and fled in December 2025 before sentencing. He and his co-conspirators contacted victims through social media and dating apps, built fake relationships, then directed them to spoofed crypto trading platforms, routing the stolen funds through U.S. shell companies before converting them to cryptocurrency including Tether. Li is the first defendant in the case directly tied to receiving victim funds to be sentenced, with eight co-conspirators having also pleaded guilty.</p></li><li><p><strong><a href="https://cybernews.com/security/hefty-sanctions-against-louis-vuitton-christian-dior-and-tiffany/">Louis Vuitton, Dior, and Tiffany Fined $25 Million After Data Breaches Exposed 5.5 Million Customers:</a></strong></p><p>South Korea&#8217;s Personal Information Protection Commission (PIPC) fined the Korean subsidiaries of Louis Vuitton, Christian Dior Couture, and Tiffany a combined $25 million after all three brands suffered data breaches tied to their SaaS-based customer management platforms. 
Louis Vuitton received the largest penalty at $16.4 million after malware on an employee&#8217;s device exposed credentials for 3.6 million customers, while Dior and Tiffany were each breached through voice phishing attacks that tricked customer service employees into handing over SaaS access. Dior and Tiffany both failed to notify authorities within South Korea&#8217;s required 72-hour window, with Dior disclosing the breach five days after discovery and Tiffany reporting 13 days after detection.</p></li></ol><p></p><p><strong>&#128073; Like</strong> this post + <strong>subscribe </strong>to catch next week&#8217;s roundup!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://cysleuths.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Cysleuthing is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[OpenClaw Infostealer Skills, AI Tinder Bypass, Fake Law Firm Sites & more AI & Cybersecurity Last Week]]></title><description><![CDATA[Covering 2/3/26 - 2/8/26]]></description><link>https://cysleuths.substack.com/p/openclaw-infostealer-skills-ai-tinder</link><guid isPermaLink="false">https://cysleuths.substack.com/p/openclaw-infostealer-skills-ai-tinder</guid><dc:creator><![CDATA[Jasmine Wong]]></dc:creator><pubDate>Mon, 09 Feb 2026 04:34:52 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jX4q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51643421-e66a-42bb-9a78-0c2ca38e3a08_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi, I&#8217;m Jasmine &#8212; a Product Security Engineer, and if you know me personally, also quite a bit of a travel bug and matcha enthusiast. I&#8217;ve realized how hard it is to keep up with everything happening in cybersecurity, AI, and tech because there&#8217;s just <em>so much</em> being put out every day. So every week I share the security-related news and stories I found most interesting or relatable. 
It helps me stay accountable, and hopefully it helps you stay in the loop too :) </p><div><hr></div><h3>AI Research &amp; Vulnerabilities</h3><ol><li><p><strong><a href="https://1password.com/blog/from-magic-to-malware-how-openclaws-agent-skills-become-an-attack-surface">Most Downloaded OpenClaw Skill Turns Out to Be an Infostealer</a>:</strong></p><p>1Password researchers have analyzed how OpenClaw&#8217;s agent skill system creates a significant attack surface where malicious actors can craft skills that appear legitimate but contain hidden instructions to exfiltrate credentials or execute unauthorized commands. They discovered that the most downloaded skill on ClawHub (the OpenClaw skill registry), a seemingly legitimate &#8220;Twitter&#8221; integration, is actually an infostealer. This attack highlights a shift in the AI attack surface from the model itself to its operational tools, where a single compromised skill can exfiltrate browser sessions, API keys, and SSH credentials from the host machine.</p></li><li><p><strong><a href="https://red.anthropic.com/2026/zero-days/">Anthropic&#8217;s Claude Opus 4.6 Discovers 500+ High Severity Vulnerabilities and Multiple 0-Days:</a></strong></p><p>Anthropic published findings showing that its latest model, Claude Opus 4.6, successfully identified over 500 high-severity vulnerabilities. Opus 4.6 operated in a sandboxed environment with security tools but no specialized instructions or custom scaffolding. It analyzed Git commit histories to find unaddressed patterns and identified complex logic flaws. 
This research not only shows that LLMs, working alongside security detection tools, can exceed the scale of human researchers, but also that remediation workflows need to keep pace with the speed and volume of vulnerabilities.</p></li><li><p><strong><a href="https://noma.security/blog/dockerdash-two-attack-paths-one-ai-supply-chain-crisis/">Two Critical Attack Paths in Docker&#8217;s Ask Gordon AI Assistant:</a></strong></p><p>Noma Security researchers have disclosed DockerDash, a pair of critical vulnerabilities in Docker&#8217;s Ask Gordon AI assistant that weaponize the Model Context Protocol (MCP). The flaw, categorized as Meta-Context Injection, allows an attacker to embed malicious instructions within the metadata labels (<code>LABEL</code>) of a Docker image. Because the AI assistant treats this metadata as trusted context, it forwards the instructions to the MCP Gateway, which then executes them through local tools without any validation, resulting in remote code execution (RCE) or data exfiltration.</p></li></ol><div><hr></div><h3>AI News</h3><ol><li><p><strong><a href="https://www.bleepingcomputer.com/news/security/french-prosecutors-raid-x-offices-over-grok-sexual-deepfakes/">French Prosecutors Raid X Offices Over Grok-Generated Deepfakes:</a></strong></p><p>Authorities in France have conducted a raid on the Paris headquarters of X as part of a criminal investigation into the generation of non-consensual sexual deepfakes using the Grok AI. The investigation is focused on whether the platform failed to implement adequate safeguards to prevent the creation of harmful synthetic content and whether it complied with local moderation laws. 
This action follows a series of high-profile incidents where Grok was reportedly used to generate explicit imagery of public figures, leading to increased regulatory scrutiny of generative AI tools.</p></li><li><p><strong><a href="https://markets.businessinsider.com/news/stocks/humanity-protocol-exposes-the-dangers-of-ai-in-online-dating-1035783606">Humanity Protocol Experiment Bypasses Tinder Verification</a>:</strong></p><p>Humanity Protocol has released the results of a two-month social experiment demonstrating the ease with which generative AI can manipulate online dating ecosystems. Using publicly available tools, researchers created four hyper-realistic fake profiles on Tinder. The AI personas managed over 100 simultaneous conversations with real users, ultimately interacting with 296 individuals and successfully convincing 40 people to agree to a physical date. The experiment, which concluded with a debriefing for participants in Lisbon, highlights how traditional Know Your Customer (KYC) measures like photo verification and basic liveness checks are increasingly insufficient against AI-generated imagery and autonomous conversational agents.</p></li></ol><div><hr></div><h3>Cybersecurity Research &amp; Vulnerabilities</h3><ol><li><p><strong><a href="https://www.tenable.com/blog/google-looker-vulnerabilities-rce-internal-access-lookout">RCE and Internal Access Vulnerabilities Found in Google Looker:</a></strong></p><p>Tenable researchers have identified two critical vulnerabilities in Google Looker, collectively dubbed &#8220;LookOut,&#8221; that allow for full system compromise. The first vulnerability is a Remote Code Execution (RCE) chain that exploits path traversal in Git configuration (<code>hooksPath</code>) within LookML projects. The second flaw is an authorization bypass that enables attackers to connect to Looker&#8217;s internal MySQL management database and exfiltrate credentials and secrets via error-based SQL injection. 
While Google has patched its managed cloud services, self-hosted and on-premises deployments remain at risk until manually updated.</p></li><li><p><strong><a href="https://unit42.paloaltonetworks.com/shadow-campaigns-uncovering-global-espionage/">Palo Alto&#8217;s Unit 42 Uncovers Multi-Year Global Espionage Operations:</a></strong></p><p>Unit 42 researchers have exposed a series of interconnected espionage campaigns targeting government agencies and critical infrastructure in 37 countries, with scanning activity detected in over 150 nations. The group's toolkit has evolved from traditional Cobalt Strike payloads to more specialized tools, including VShell and a sophisticated new Linux kernel rootkit dubbed ShadowGuard. The campaign&#8217;s timing suggests a high degree of geopolitical coordination, with spikes in activity aligning with major trade negotiations and natural resource deals involving the targeted ministries.</p></li><li><p><strong><a href="https://securitylabs.datadoghq.com/articles/web-traffic-hijacking-nginx-configuration-malicious/">Web Traffic Hijacking via Malicious Nginx Configurations:</a></strong></p><p>Datadog Security Labs has discovered a campaign where threat actors associated with React2Shell are hijacking web traffic by tampering with Nginx configuration files. Using a multi-stage toolkit, attackers gain initial access and inject malicious directives like <code>proxy_pass</code>, <code>rewrite</code>, and <code>proxy_set_header</code> into existing server blocks. This allows them to intercept and reroute live user sessions through attacker-controlled backend servers to harvest credentials or session cookies without installing new binaries. 
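To make the detection side of this concrete: directive injections of this kind can often be caught by auditing `proxy_pass` targets against known infrastructure. The sketch below is a minimal, hypothetical example (the allowlist and the sample config are invented for illustration and are not taken from the Datadog write-up):

```python
import re

# Hypothetical allowlist of backends our nginx configs are expected to proxy to.
KNOWN_BACKENDS = {"127.0.0.1", "localhost", "app-backend.internal"}

def flag_suspicious_proxy_pass(config_text):
    """Return proxy_pass targets whose host is not on the allowlist."""
    suspicious = []
    for match in re.finditer(r'proxy_pass\s+(\S+?);', config_text):
        target = match.group(1)
        # Strip scheme, path, and port to get the bare host.
        host = re.sub(r'^https?://', '', target).split('/')[0].split(':')[0]
        if host not in KNOWN_BACKENDS:
            suspicious.append(target)
    return suspicious

config = """
server {
    location / { proxy_pass http://127.0.0.1:8080; }
    location /login { proxy_pass http://203.0.113.50; }
}
"""
print(flag_suspicious_proxy_pass(config))  # ['http://203.0.113.50']
```

A grep-style audit like this is only a cheap first pass; since an attacker who can edit nginx configs can usually also edit audit scripts, it would need to run from a trusted host alongside file integrity monitoring on the configuration paths.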
The campaign specifically targets Asian TLDs and environments managed via the Baota (BT) Panel, turning legitimate infrastructure into an invisible relay for data exfiltration.</p></li></ol><h3>Cybersecurity News</h3><ol><li><p><strong><a href="https://www.sygnia.co/blog/inside-recovery-scam-network-legal-impersonation/">Scam Network Impersonates Law Firms to Target Victims:</a></strong></p><p>Sygnia researchers have uncovered a large-scale scam operation where threat actors impersonate legitimate law firms to target individuals, promising to help recover losses or secure compensation. The scammers create fake legal websites with stolen attorney credentials and contact breach victims directly, requesting personal information and upfront fees for services that never materialize. The operation specifically targets victims of breaches who are already vulnerable and seeking legitimate assistance, exploiting their trust in the legal system.</p></li><li><p><strong><a href="https://www.bleepingcomputer.com/news/security/man-pleads-guilty-to-hacking-nearly-600-womens-snapchat-accounts/">Man Pleads Guilty to Hacking Nearly 600 Women&#8217;s Snapchat Accounts:</a></strong></p><p>An Illinois man has pleaded guilty to federal charges after hacking into approximately 600 Snapchat accounts belonging to women and girls to steal private photos and videos. The defendant used social engineering techniques to obtain login credentials and then accessed the accounts to download intimate images, which were saved to his personal devices or shared. He faces significant prison time for charges including computer fraud and aggravated identity theft.</p></li><li><p><strong><a href="https://bsky.app/profile/newsguy.bsky.social/post/3me3dhsexmt2s">Substack Notifies Users of Data Breach:</a></strong></p><p>Substack has disclosed a security incident where unauthorized access to its systems resulted in the exposure of user data including email addresses, names, and metadata. 
The company stated that passwords were not compromised and no payment information was accessed. Substack is notifying affected users and has implemented additional security measures to prevent similar incidents in the future.</p><p></p></li></ol><p><strong>&#128073; Like</strong> this post + <strong>subscribe </strong>to catch next week&#8217;s roundup!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://cysleuths.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Cysleuthing is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[AI Trade Secrets, Malicious Roblox Mods, OpenClaw Vulnerabilities and more in AI & Cybersecurity Last Week]]></title><description><![CDATA[Covering 1/26/26 - 2/2/26]]></description><link>https://cysleuths.substack.com/p/ai-trade-secrets-malicious-roblox</link><guid isPermaLink="false">https://cysleuths.substack.com/p/ai-trade-secrets-malicious-roblox</guid><dc:creator><![CDATA[Jasmine Wong]]></dc:creator><pubDate>Tue, 03 Feb 2026 17:23:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jX4q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51643421-e66a-42bb-9a78-0c2ca38e3a08_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi, I&#8217;m Jasmine &#8212; a 
Product Security Engineer, and if you know me personally, also quite a bit of a travel bug and matcha enthusiast. I&#8217;ve realized how hard it is to keep up with everything happening in cybersecurity, AI, and tech because there&#8217;s just <em>so much</em> being put out every day. So every Sunday (or in this case, Tuesday), I share the security-related news and stories I found most interesting or relatable. It helps me stay accountable, and hopefully it helps you stay in the loop too :)</p><div><hr></div><p><em>If you&#8217;re interested in OpenClaw/MoltBot/ClawdBot specifically, I have a section at the end of this post sharing some notable research and vulnerabilities.</em> </p><h3>AI Research &amp; Vulnerabilities</h3><ol><li><p><strong><a href="https://www.pillar.security/blog/operation-bizarre-bazaar-first-attributed-llmjacking-campaign-with-commercial-marketplace-monetization">First Commercial LLMjacking Campaign Monetizes Stolen AI Credits:</a></strong></p><p>Pillar Security researchers have uncovered Operation Bizarre Bazaar, the first attributed LLMjacking campaign where attackers steal cloud-based AI compute credits and resell them on underground marketplaces for profit. The attackers compromise cloud accounts with weak credentials or exposed API keys, then hijack access to hosted LLM instances from providers like OpenAI, Anthropic, and Azure to generate tokens that are packaged and sold at discounted rates to cybercriminals seeking cheaper AI access. 
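Since exposed keys are the entry point for this kind of hijacking, it is worth noting how cheap the defensive scanning side is. Below is a rough, hypothetical sketch of a secret scan over source text; the regex patterns are approximations of commonly seen key shapes, not the providers' documented formats:

```python
import re

# Approximate, illustrative key shapes; real formats vary by provider.
KEY_PATTERNS = {
    "openai-style": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    "aws-access-key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def find_key_like_strings(text):
    """Return (label, match) pairs for strings shaped like cloud/AI API keys."""
    hits = []
    for label, pattern in KEY_PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((label, m.group(0)))
    return hits

sample = 'client = OpenAI(api_key="sk-aBcDeFgHiJkLmNoPqRsTuVwX")'
print(find_key_like_strings(sample))
```

Running a check like this in pre-commit hooks or CI is a common low-cost mitigation against the credential exposure that enables LLMjacking in the first place.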
The campaign demonstrates a new monetization model for cloud account compromises, where stolen AI resources are treated as a commodity rather than being used solely for the attacker&#8217;s own purposes.</p></li><li><p><strong><a href="https://research.jfrog.com/post/achieving-remote-code-execution-on-n8n-via-sandbox-escape/">Remote Code Execution in n8n Due to Sandbox Escape Vulnerability:</a></strong></p><p>JFrog researchers have disclosed a critical vulnerability in n8n, a popular workflow automation platform, that allows attackers to escape the JavaScript sandbox and execute arbitrary code on the host system. The flaw stems from insufficient isolation in how n8n processes user-supplied JavaScript code within workflow nodes, enabling an attacker to access Node.js built-in modules and file system operations that should be restricted. Exploitation requires the ability to create or modify workflows, meaning the primary risk is to organizations where untrusted users have workflow editing permissions or where attackers have already gained initial access to an n8n instance.</p></li><li><p><strong><a href="https://www.bitdefender.com/en-us/blog/labs/android-trojan-campaign-hugging-face-hosting-rat-payload">Android Trojan Campaign Abuses Hugging Face to Host RAT Payloads:</a></strong></p><p>Bitdefender researchers have identified an ongoing Android malware campaign that leverages Hugging Face, a legitimate AI model hosting platform, to distribute Remote Access Trojan payloads. Attackers upload malicious Android APK files disguised as AI model datasets to Hugging Face repositories, then distribute links to these files through messaging apps and social media, tricking victims into downloading what appear to be legitimate AI applications. 
Once installed, the trojanized apps request extensive permissions including accessibility services, allowing attackers to remotely control devices, steal credentials, intercept SMS messages, and exfiltrate sensitive data while the apps continue to function with basic features to avoid suspicion.</p></li></ol><div><hr></div><h3>AI News</h3><ol><li><p><strong><a href="https://www.justice.gov/usao-ndca/pr/former-google-engineer-found-guilty-economic-espionage-and-theft-confidential-ai">Former Google Engineer Convicted of Stealing Trade Secrets for Chinese AI Companies:</a></strong></p><p>A federal jury has found Linwei Ding, a former Google software engineer, guilty of economic espionage and theft of trade secrets related to Google&#8217;s AI infrastructure. Prosecutors presented evidence that Ding transferred over 500 confidential files containing information about Google&#8217;s supercomputing data centers and AI chip architecture to his personal accounts while secretly working for two China-based AI technology companies. Ding faces up to 10 years in prison for each count of economic espionage and up to 10 years for each trade secret theft count.</p></li><li><p><strong><a href="https://www.bleepingcomputer.com/news/software/mozilla-will-let-you-turn-off-all-firefox-ai-features/">Mozilla Gives Users the Option to Disable All AI Features in Firefox:</a></strong></p><p>Mozilla has introduced a new master toggle in Firefox that allows users to completely disable all AI-powered features with a single switch. The option, accessible through the browser settings under Privacy &amp; Security, will turn off AI chatbot integration, AI-assisted content summarization, and other generative AI tools that have been gradually integrated into the browser. 
Mozilla stated the move responds to user feedback requesting more granular control over AI functionality, particularly from privacy-conscious users concerned about data processing and third-party AI service integration.</p></li></ol><h4>OpenAI Specifically:</h4><ol><li><p><strong><a href="https://openai.com/index/introducing-prism/">OpenAI Introduces Prism, a Free AI-Native Workspace for Scientific Writing and Collaboration</a>:</strong> OpenAI has launched Prism, a free cloud-based LaTeX workspace that integrates AI directly into scientific writing workflows, allowing researchers to draft papers, manage equations and citations, search literature, and collaborate in real time.</p></li><li><p><strong><a href="https://openai.com/index/introducing-the-codex-app/">OpenAI Unveils Standalone Codex App for Developers:</a></strong> OpenAI introduced a dedicated Codex application that provides developers with a streamlined interface for code generation, debugging, and technical assistance separate from the main ChatGPT platform.</p></li><li><p><strong><a href="https://openai.com/index/introducing-the-codex-app/">OpenAI Reveals Internal Data Agent Automating Research and Analysis:</a></strong> OpenAI disclosed details about an internal AI agent it uses to autonomously manage data analysis, research workflows, and information synthesis across the company.</p></li><li><p><strong><a href="https://openai.com/index/retiring-gpt-4o-and-older-models/">OpenAI Announces Retirement of GPT-4o and Legacy Models:</a></strong> OpenAI confirmed it will deprecate GPT-4o and several older model versions, urging developers to migrate to newer releases before the scheduled shutdown date.</p></li><li><p><strong><a href="https://openai.com/index/ai-agent-link-safety/">OpenAI Implements Safety Controls for AI Agent Link Sharing:</a></strong> OpenAI has introduced new security measures to prevent AI agents from being manipulated 
through malicious links, addressing risks where shared URLs could be used to inject harmful instructions or exfiltrate data.</p></li></ol><div><hr></div><h3>Cybersecurity News</h3><ol><li><p><strong><a href="https://www.bleepingcomputer.com/news/security/not-a-kids-game-from-roblox-mod-to-compromising-your-company/">Malicious Roblox Mods Distribute Infostealer Targeting Gamers:</a></strong></p><p>Researchers have discovered a campaign distributing trojanized Roblox executors and mod menus through community forums and Discord servers that contain a sophisticated Lua-based infostealer. The malware, embedded within tools that promise enhanced gaming features or cheats, specifically targets credentials and session tokens for gaming platforms, social media accounts, and cryptocurrency wallets commonly used by the gaming community. Once installed, the stealer operates silently in the background while the mod continues to function normally, exfiltrating browser data, Discord tokens, and Roblox authentication cookies to attacker-controlled servers, demonstrating how gaming communities have become a prime target for credential theft operations.</p></li><li><p><strong><a href="https://socket.dev/blog/malicious-chrome-extension-performs-hidden-affiliate-hijacking">Malicious Chrome Extension Hijacks Affiliate Links to Steal Commissions:</a></strong></p><p>Socket security researchers have exposed a Chrome extension that secretly intercepts and replaces legitimate affiliate tracking codes on e-commerce sites with the attacker&#8217;s own identifiers to steal referral commissions. The extension, which promoted itself as a productivity tool for online shoppers, monitors user navigation across major retail platforms including Amazon, eBay, and Walmart, then dynamically swaps affiliate parameters in URLs before the page fully loads. 
This allows the attacker to earn commissions on purchases users would have made anyway, while the original content creators or referral partners receive nothing; the modification happens silently, with no visible change to the user experience.</p></li><li><p><strong><a href="https://www.bleepingcomputer.com/news/security/fbi-seizes-ramp-cybercrime-forum-used-by-ransomware-gangs/">FBI Takes Down RAMP Forum Used by Ransomware Operators:</a></strong></p><p>The Federal Bureau of Investigation has seized the RAMP underground forum, a Russian-language cybercrime marketplace where ransomware affiliates recruited partners, traded stolen data, and coordinated attacks. The platform served as a hub for ransomware-as-a-service operations and initial access brokers who sold credentials to compromised corporate networks. The FBI takedown notice now displayed on the forum&#8217;s domain states that the agency collected extensive evidence during the operation, including user data and communication logs that will be used to pursue further criminal investigations against the forum&#8217;s operators and active users.</p></li><li><p><strong><a href="https://cloud.google.com/blog/topics/threat-intelligence/expansion-shinyhunters-saas-data-theft">ShinyHunters Expands SaaS Data Theft Operations Across Multiple Platforms:</a></strong></p><p>Google Cloud Threat Intelligence Group has published an analysis revealing that the ShinyHunters cybercrime group has significantly expanded its operations beyond traditional database breaches to systematically target SaaS platforms through compromised administrative credentials and API abuse. The group gains its initial foothold mainly through social engineering, then exploits weak authentication, missing multi-factor authentication, and overly permissive API tokens to extract customer data at scale. 
Google researchers note that ShinyHunters has developed automated tools specifically designed to efficiently enumerate and exfiltrate data from popular SaaS platforms including customer relationship management systems, human resources applications, and marketing automation tools.</p></li></ol><div><hr></div><h3>OpenClaw</h3><p>There is obviously a lot more out there, but you get the point.</p><ol><li><p><a href="https://www.404media.co/exposed-moltbook-database-let-anyone-take-control-of-any-ai-agent-on-the-site/">Exposed Moltbook Database</a></p></li><li><p><a href="https://github.com/openclaw/openclaw/security/advisories/GHSA-g8p2-7wf7-98mq">1-Click RCE via Authentication Token Exfiltration From gatewayURL</a></p></li><li><p><a href="https://www.koi.ai/blog/clawhavoc-341-malicious-clawedbot-skills-found-by-the-bot-they-were-targeting">ClawHavoc: 341 Malicious Clawed Skills on ClawHub</a></p></li><li><p><a href="https://www.aikido.dev/blog/fake-clawdbot-vscode-extension-malware">Fake Clawdbot VS Code Extension</a></p></li><li><p><a href="https://www.bitdefender.com/en-us/blog/hotforsecurity/moltbot-security-alert-exposed-clawdbot-control-panels-risk-credential-leaks-and-account-takeovers">Exposed Clawdbot Control Panels</a></p></li><li><p><a href="https://1password.com/blog/its-openclaw">OpenClaw Plaintext Credentials Issue</a></p></li><li><p><a href="https://www.pillar.security/blog/caught-in-the-wild-real-attack-traffic-targeting-exposed-clawdbot-gateways">Real Attack Traffic Targeting Exposed Clawdbot Gateways</a></p></li><li><p><a href="https://www.token.security/blog/the-clawdbot-enterprise-ai-risk-one-in-five-have-it-installed">MoltBot Enterprise AI Risk</a></p></li><li><p><a href="https://www.ox.security/blog/one-step-away-from-a-massive-data-breach-what-we-found-inside-moltbot/">MoltBot Security Findings</a></p></li></ol><p></p><p><strong>&#128073; Like</strong> this post + <strong>subscribe </strong>to catch next week&#8217;s roundup!</p><div 
class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://cysleuths.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Cysleuthing is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Malicious AI VS Code Extensions, Vulnerable Security Training Apps & more in AI & Cybersecurity Last Week]]></title><description><![CDATA[Covering 1/21/26 - 1/25/26]]></description><link>https://cysleuths.substack.com/p/malicious-ai-vs-code-extensions-vulnerable</link><guid isPermaLink="false">https://cysleuths.substack.com/p/malicious-ai-vs-code-extensions-vulnerable</guid><dc:creator><![CDATA[Jasmine Wong]]></dc:creator><pubDate>Mon, 26 Jan 2026 06:31:10 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jX4q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51643421-e66a-42bb-9a78-0c2ca38e3a08_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi, I&#8217;m Jasmine &#8212; a Product Security Engineer, and if you know me personally, also quite a bit of a travel bug and matcha enthusiast. I&#8217;ve realized how hard it is to keep up with everything happening in cybersecurity, AI, and tech because there&#8217;s just <em>so much</em> being put out every day. 
So every Sunday (or Monday), I share the security-related news and stories I found most interesting or relatable. It helps me stay accountable, and hopefully it helps you stay in the loop too :)</p><div><hr></div><h3>AI Research &amp; Vulnerabilities</h3><ol><li><p><strong><a href="https://www.koi.ai/blog/maliciouscorgi-the-cute-looking-ai-extensions-leaking-code-from-1-5-million-developers">Malicious AI Assistant VS Code Extensions Harvest Code from 1.5 Million Developers:</a></strong></p><p>Koi Security researchers have identified two malicious Visual Studio Code extensions, ChatGPT - &#20013;&#25991;&#29256; and ChatMoss, which have been installed by over 1.5 million developers. These extensions, while functioning as legitimate AI coding assistants, contain hidden spyware that automatically harvests file contents and user keystrokes, sending them to servers in China without consent. The campaign, dubbed MaliciousCorgi, utilizes a three-pronged data collection approach: real-time monitoring of every opened file and edit, a server-controlled backdoor capable of mass-exfiltrating files on command, and a profiling engine that uses analytics SDKs to track developer identity and behavior.</p></li><li><p><strong><a href="https://www.zafran.io/resources/chainleak-critical-ai-framework-vulnerabilities-expose-data-enable-cloud-takeover">Chainlit Vulnerabilities Expose Data and Enable Cloud Takeover:</a></strong></p><p>Zafran Labs has revealed ChainLeak, a pair of critical vulnerabilities in the popular open-source AI framework Chainlit that could lead to full system compromise. The first flaw (CVE-2026-22218) allows authenticated attackers to perform arbitrary file reads by manipulating the "path" property in custom elements. The second vulnerability (CVE-2026-22219) enables Server-Side Request Forgery (SSRF) by allowing attackers to control the "url" property of an element. 
These flaws, which can be chained to move laterally into the broader cloud infrastructure, have been addressed in Chainlit version 2.9.4.</p></li><li><p><strong><a href="https://www.bluerock.io/post/mcp-furi-microsoft-markitdown-vulnerabilities">fURI Vulnerability in Microsoft MarkItDown MCP Server:</a></strong></p><p>BlueRock researchers have disclosed a Server-Side Request Forgery (SSRF) vulnerability, dubbed MCP fURI, in Microsoft&#8217;s official MarkItDown Model Context Protocol (MCP) server. The flaw stems from a lack of URI validation in the <code>convert_to_markdown</code> tool, which allows an attacker or a compromised agent to force the server to fetch content from any internal or external resource. When deployed on AWS EC2 instances using IMDSv1, this vulnerability can be weaponized to query the instance metadata service and exfiltrate sensitive cloud credentials.</p></li><li><p><strong><a href="https://hackingthe.cloud/ai-llm/exploitation/claude_magic_string_denial_of_service/">Claude Magic String Enables Persistent Denial of Service (DoS):</a></strong></p><p>Security researchers at Hacking the Cloud have detailed an integration risk in Anthropic Claude models where a string named ANTHROPIC_MAGIC_STRING forces the model to abort generation with a refusal signal, which creates a vector for attackers to inject the text into shared contexts like databases or multi-user chats and effectively kill automated workflows. 
Because many applications are designed to reset or halt upon refusal, a single malicious entry can cause a persistent denial of service that lasts until the problematic text is manually purged from the conversation history.</p></li></ol><div><hr></div><h3>AI News</h3><ol><li><p><strong><a href="https://www.businessinsider.com/openai-cfo-sarah-friar-future-revenue-sources-2026-1">OpenAI CFO Outlines Future Revenue Strategy Beyond Subscriptions:</a></strong></p><p>OpenAI CFO Sarah Friar has detailed the company&#8217;s roadmap for diversifying its revenue streams as it looks to sustain its massive infrastructure investments. While consumer and enterprise subscriptions currently drive the bulk of its income, Friar signaled a shift toward high-margin agentic services, where OpenAI would take a cut of transactions facilitated by autonomous AI agents, such as travel bookings or specialized professional services. Additionally, the company is exploring more aggressive B2B integrations and custom silicon partnerships to offset the rising costs of training next-generation models. Friar emphasized that the goal is to transform ChatGPT from a conversational tool into a comprehensive economic engine that powers third-party commerce and specialized industry workflows.</p></li><li><p><strong><a href="https://arstechnica.com/apple/2026/01/report-apple-plans-to-launch-ai-powered-wearable-pin-device-as-soon-as-2027/">Apple Reportedly Developing AI-Powered Wearable Pin for 2027 Launch:</a></strong><a href="https://arstechnica.com/apple/2026/01/report-apple-plans-to-launch-ai-powered-wearable-pin-device-as-soon-as-2027/"> </a>Apple is reportedly working on a screenless, AI-centric wearable device designed to be pinned to clothing. According to sources familiar with the project, the device aims to reduce smartphone dependency by leveraging Apple Intelligence to handle messaging, scheduling, and environmental awareness via a built-in camera and microphones. 
Unlike existing competitors that have struggled with battery life and thermal issues, Apple&#8217;s version is expected to integrate deeply with the existing iOS ecosystem and utilize proprietary silicon optimized for low-power ambient AI processing. The project signals a strategic shift toward a post-iPhone future where generative AI serves as the primary interface for digital interaction.</p></li><li><p><strong><a href="https://blog.google/products-and-platforms/products/education/practice-sat-gemini/">Google Gemini Launches Full-Length Practice SAT Tests for Students:</a></strong></p><p>Google has introduced a new education feature in Gemini that provides students with full-length, no-cost practice SAT exams. Developed in partnership with education leaders like The Princeton Review, the tool uses vetted content to simulate the actual testing experience. Beyond just delivering questions, Gemini provides instant feedback on performance, identifies specific knowledge gaps, and offers detailed explanations for incorrect answers. Students can further use the AI to generate personalized study plans based on their test results, marking a significant push by Google into the college preparatory market.</p></li></ol><div><hr></div><h3>Cybersecurity News</h3><ol><li><p><strong><a href="https://about.gitlab.com/releases/2026/01/21/patch-release-gitlab-18-8-2-released/#cve-2026-0723---unchecked-return-value-issue-in-authentication-services-impacts-gitlab-ceee">GitLab Warns of High-Severity 2FA Bypass, Denial-of-Service Flaws:</a></strong></p><p>GitLab has released security updates for its Community and Enterprise editions to address several high-severity vulnerabilities, including a two-factor authentication bypass (CVE-2026-0723). This flaw allows attackers with knowledge of a target's credential ID to circumvent 2FA by submitting forged device responses due to an unchecked return value weakness. 
Additionally, the patches fix two Denial-of-Service vulnerabilities (CVE-2025-13927 and CVE-2025-13928) that could enable unauthenticated attackers to crash instances via malformed Wiki documents or SSH Authentication API requests. Administrators are urged to upgrade to versions 18.8.2, 18.7.2, or 18.6.4 immediately to secure their environments.</p></li><li><p><strong><a href="https://pentera.io/blog/exposed-cloud-training-apps-pentera-labs/">Exposed Security Training Apps Put Fortune 500 Clouds at Risk:</a></strong></p><p>Pentera Labs researchers have uncovered a widespread security issue where organizations, including major security vendors like Palo Alto Networks, Cloudflare, and F5, inadvertently exposed vulnerable training applications (such as OWASP Juice Shop and DVWA) to the public internet. These apps, often used for internal testing or demos, were found running with default credentials and overly permissive IAM roles, allowing attackers to exploit known vulnerabilities (like RCE) to access cloud metadata services and steal credentials. The research identified over 1,600 exposed servers, with many already actively exploited by threat actors to deploy crypto-miners and establish persistence, highlighting a critical blind spot in how companies secure temporary or low-risk environments.</p></li><li><p><strong><a href="https://blog.lastpass.com/posts/january-2026-phishing-campaign-targeting-lastpass-customers-update">LastPass Warns of Phishing Campaign Targeting Users:</a></strong></p><p>LastPass has alerted customers to an active phishing campaign that began around January 19, 2026. Malicious emails are being sent from various addresses claiming that the service is about to undergo maintenance and urging users to back up their vaults within 24 hours. The emails contain links that direct victims to a phishing site designed to harvest credentials. LastPass clarified it is not asking for backups and advised users to never share their master password. 
The company is working to take down the malicious domains.</p><p></p></li></ol><p><strong>&#128073; Like</strong> this post + <strong>subscribe </strong>to catch next week&#8217;s roundup!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://cysleuths.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Cysleuthing is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[LinkedIn Phishing Campaign, Google Calendar Data Theft & more in AI & Cybersecurity Last Week]]></title><description><![CDATA[Covering 1/13/26 - 1/20/26]]></description><link>https://cysleuths.substack.com/p/linkedin-phishing-campaign-google</link><guid isPermaLink="false">https://cysleuths.substack.com/p/linkedin-phishing-campaign-google</guid><dc:creator><![CDATA[Jasmine Wong]]></dc:creator><pubDate>Wed, 21 Jan 2026 04:41:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jX4q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51643421-e66a-42bb-9a78-0c2ca38e3a08_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A little late on this one since I decided to do a spontaneous road trip to LA over the long weekend. 
Here&#8217;s a sunrise that I particularly enjoyed from the Griffith Observatory.</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;cf57e8da-b825-4a77-885e-cc2a2aae501d&quot;,&quot;duration&quot;:null}"></div><p>Anyway&#8230;</p><p>Hi, I&#8217;m Jasmine &#8212; a Product Security Engineer, and if you know me personally, also quite a bit of a travel bug and matcha enthusiast. I&#8217;ve realized how hard it is to keep up with everything happening in cybersecurity, AI, and tech because there&#8217;s just <em>so much</em> being put out every day. So every Sunday (or Monday), I share the security-related news and stories I found most interesting or relatable. It helps me stay accountable, and hopefully it helps you stay in the loop too :)</p><div><hr></div><h3>AI Vulnerabilities</h3><ol><li><p><strong><a href="https://cyata.ai/blog/cyata-research-breaking-anthropics-official-mcp-server/">Code Execution Flaws in Anthropic Official Git MCP Server</a></strong>:</p><p>Cyata Research has disclosed three critical vulnerabilities in Anthropic's official Git Model Context Protocol server that allow attackers to achieve remote code execution through prompt injection. By exploiting a lack of path validation and argument injection flaws in tools like git_init and git_diff, malicious actors can manipulate an AI assistant to read sensitive files, delete data, or execute arbitrary commands by chaining these exploits with standard filesystem capabilities. 
The vulnerabilities have been patched in version 2025.12.18.</p></li><li><p><strong><a href="https://www.miggo.io/post/weaponizing-calendar-invites-a-semantic-attack-on-google-gemini">Weaponized Calendar Invites Enable Data Theft in Google Gemini</a></strong>:</p><p>Miggo Security researchers have disclosed a vulnerability in Google Gemini that allowed attackers to exfiltrate sensitive user data through malicious Google Calendar invitations. The exploit functioned as an indirect prompt injection where natural language instructions were embedded in the description field of an invite sent to a victim. When the user later asked Gemini a routine question about their schedule, the AI would process the poisoned entry and execute the hidden commands, which instructed it to summarize private meeting details and copy them into a new calendar event visible to the attacker. This semantic attack successfully bypassed standard security controls by manipulating the model's contextual understanding without requiring the victim to click any links or grant new permissions.</p></li><li><p><strong><a href="https://www.varonis.com/blog/reprompt">Reprompt Exploit Enables Single-Click Data Theft from Microsoft Copilot</a></strong>:</p><p>Varonis Threat Labs has disclosed Reprompt, a critical vulnerability in Microsoft Copilot Personal that allowed attackers to exfiltrate sensitive user data via a single malicious link. The attack exploited a prompt injection flaw in which a manipulated URL parameter (<code>q</code>) could force the AI to execute hidden instructions immediately upon loading. 
By using a double-request technique to bypass safety filters and chaining commands from an external server, the exploit could silently siphon information such as file access history, location data, and conversation logs without further user interaction, persisting even after the browser tab was closed. Microsoft has since patched the issue, and enterprise versions of Copilot were not affected.</p></li></ol><div><hr></div><h3>AI News</h3><ol><li><p><strong><a href="https://openai.com/index/our-approach-to-age-prediction/">OpenAI Deploys Behavioral Age Prediction to Enhance Teen Safety</a></strong><a href="https://openai.com/index/our-approach-to-age-prediction/">:</a></p><p>OpenAI has begun rolling out an age prediction model for ChatGPT consumer plans that analyzes behavioral signals, such as usage patterns, activity times, and account longevity, to estimate if a user is under 18. This system operates in the background to automatically apply stricter safety protocols for potential minors, including filtering graphic violence, sexual roleplay, and content promoting unhealthy beauty standards. Users incorrectly flagged as minors can restore full access by verifying their identity. This initiative aims to circumvent age falsification during signup and aligns with broader regulatory efforts to protect younger audiences in digital environments.</p></li><li><p><strong><a href="https://chatgpt.com/translate">OpenAI Quietly Launches Dedicated ChatGPT Translate Tool</a></strong><a href="https://chatgpt.com/translate">:</a></p><p>OpenAI has silently released a standalone web-based translation service for instant text conversion. The new tool differentiates itself by allowing users to adjust the tone of translations with presets like business formal or simplified for children and permits seamless transition into a full AI conversation for complex follow-up queries. 
Although the platform currently lacks multimedia features such as image or document translation, it is available to all users for free and represents a strategic shift by OpenAI towards consumer-facing utility products.<br><br><em><strong>Note:</strong> I tried this out and the translation is definitely slower than Google translate. The text-to-audio also wasn&#8217;t working for me but perhaps they&#8217;re working it out.</em></p></li><li><p><strong><a href="https://blog.google/innovation-and-ai/products/gemini-app/personal-intelligence/">Google Gemini "Personal Intelligence" Connects AI to Your Private Data Ecosystem</a></strong><a href="https://blog.google/innovation-and-ai/products/gemini-app/personal-intelligence/">:</a></p><p>Google has launched a new Personal Intelligence feature for its Gemini app, allowing the AI assistant to access and cross-reference data from users' Gmail, Google Photos, YouTube, and Search history to provide highly personalized responses. This enables Gemini to execute complex tasks like finding a vehicle's license plate from a photo while simultaneously retrieving its maintenance history from emails, or planning a vacation based on past travel preferences and saved locations. 
This feature is a direct competitor to other memory-enabled AI assistants by leveraging Google's vast ecosystem of user information.<br><br><em>See my thoughts about this on <a href="https://www.linkedin.com/posts/wong-jasmine_digitaltransformation-techtrends-ai-activity-7418080365057150976-1dIl?utm_source=social_share_send&amp;utm_medium=member_desktop_web&amp;rcm=ACoAABybJtwBZ26D5kHCCXuA1VFNi1s2FVGHFmQ">LinkedIn</a></em></p></li></ol><div><hr></div><h3>Cybersecurity News</h3><ol><li><p><strong><a href="https://www.bleepingcomputer.com/news/security/convincing-linkedin-comment-reply-tactic-used-in-new-phishing/">LinkedIn Phishing Campaign Exploits Comment Replies to Steal Credentials:</a></strong> Scammers are flooding LinkedIn with fraudulent comment replies that convincingly impersonate the platform's support team, using fake company pages and official branding to claim a user account has been restricted for policy violations. These automated comments urge victims to visit external links, sometimes masked by LinkedIn's own lnkd.in shortener to appear legitimate, to verify their identity. Once clicked, users are redirected to credential-harvesting sites designed to steal login information, exploiting the trust users place in platform notifications and official URL structures.</p></li><li><p><strong><a href="https://whisperpair.eu/">Critical "WhisperPair" Vulnerabilities Expose Bluetooth Headphones to Eavesdropping and Location Tracking:</a></strong></p><p>A set of vulnerabilities dubbed WhisperPair (CVE-2025-36911) was discovered in the Google Fast Pair implementation used by many popular Bluetooth audio accessories. The flaws allow an attacker within radio range to bypass a target device's pairing authentication, forcing a connection without user interaction. 
This unauthorized access enables the attacker to hijack the audio stream to eavesdrop on conversations via the microphone, inject malicious commands, or track the location of the victim using the Google Find My Device network if the accessory has not previously been paired with an Android device.</p></li><li><p><strong><a href="https://socket.dev/blog/5-malicious-chrome-extensions-enable-session-hijacking">Five Malicious Chrome Extensions Hijack HR and ERP Sessions like Workday and Netsuite:</a></strong></p><p>Socket security researchers have identified five malicious Chrome extensions, published under the names "databycloud1104" and "softwareaccess," that work in concert to hijack user sessions on enterprise platforms like Workday, NetSuite, and SuccessFactors. Disguised as productivity tools or administrative safeguards, the extensions exfiltrate authentication cookies to remote servers, block access to security settings to prevent remediation, and inject stolen tokens to enable account takeover. The malware also specifically targets and blocks incident response capabilities, such as password changes and audit log access, effectively locking administrators out of their own security controls.</p></li><li><p><strong><a href="https://www.jamf.com/blog/threat-actors-expand-abuse-of-visual-studio-code/">North Korean Hackers Abuse Visual Studio Code in New Malware Campaign:</a></strong></p><p>Jamf Threat Labs researchers have identified an evolution in the Contagious Interview campaign attributed to North Korean threat actors. The attackers are now abusing the task configuration feature in Microsoft Visual Studio Code to execute malicious code on victim systems. By tricking developers into downloading a malicious repository under the pretense of a job interview or technical assignment, the attackers rely on the victim trusting the repository in VS Code. 
Once trusted, the <code>tasks.json</code> file automatically executes a background command that fetches and runs a JavaScript payload, effectively planting a backdoor for remote access without alerting the user.</p><p></p></li></ol><p><strong>&#128073; Like</strong> this post + <strong>subscribe </strong>to catch next week&#8217;s roundup!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://cysleuths.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Cysleuthing is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[AI & Cybersecurity Last Week (1/5/26 - 1/12/26)]]></title><description><![CDATA[what happened in cybersecurity this past week-ish in AI research & vulnerabilities, news, startups and VCs]]></description><link>https://cysleuths.substack.com/p/ai-and-cybersecurity-last-week-1526</link><guid isPermaLink="false">https://cysleuths.substack.com/p/ai-and-cybersecurity-last-week-1526</guid><dc:creator><![CDATA[Jasmine Wong]]></dc:creator><pubDate>Tue, 13 Jan 2026 03:42:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jX4q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51643421-e66a-42bb-9a78-0c2ca38e3a08_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi, I&#8217;m Jasmine 
&#8212; a Product Security Engineer, and if you know me personally, also quite a bit of a travel bug and matcha enthusiast. I&#8217;ve realized how hard it is to keep up with everything happening in cybersecurity, AI, and tech because there&#8217;s just <em>so much</em> being put out every day. So every Sunday (or Monday), I share the security-related news and stories I found most interesting or relatable. It helps me stay accountable, and hopefully it helps you stay in the loop too :)</p><div><hr></div><h3>AI News</h3><ol><li><p><strong><a href="https://claude.com/blog/cowork-research-preview">Anthropic Releases "Cowork" for Autonomous Desktop and Browser Tasks</a>:</strong></p><p>Anthropic has launched "Cowork," a research preview feature available to Claude Max subscribers that allows the AI to execute multi-step projects across a user's computer and the web, such as creating presentations from scattered notes. Built on the same foundations as Claude Code but designed for general productivity, Cowork can organize and edit local files, draft reports, or create spreadsheets from screenshots without needing constant user guidance. It also integrates with "Connectors" to access external data and pairs with the Claude in Chrome extension to perform tasks requiring browser access. 
While the tool includes safety measures, Anthropic warns users to be cautious of potential risks such as prompt injection and accidental data deletion during this early testing phase.</p></li><li><p><strong><a href="https://www.technobezz.com/news/apple-selects-googles-gemini-ai-to-power-its-next-generation-2026-01-12-1b0f">Apple Selects Google Gemini to Power Next-Generation Siri</a></strong><a href="https://www.technobezz.com/news/apple-selects-googles-gemini-ai-to-power-its-next-generation-2026-01-12-1b0f">:</a></p><p>Apple and Google have officially announced a multi-year partnership to integrate Gemini models into the Apple Intelligence framework, a move that will underpin a major Siri overhaul expected later in 2026. Under the agreement, Google&#8217;s AI and cloud infrastructure will serve as the foundation for Apple's models, enabling the voice assistant to perform more complex tasks like cross-app planning and summarization. Despite this significant collaboration, Apple confirmed there are no changes to their existing partnership with OpenAI.</p></li><li><p><strong><a href="https://www.anthropic.com/news/healthcare-life-sciences">Anthropic Launches "Claude for Healthcare" to Streamline Medical and Scientific Workflows</a></strong><a href="https://www.anthropic.com/news/healthcare-life-sciences">:</a></p><p>Anthropic has unveiled "Claude for Healthcare," a suite of HIPAA-ready AI tools designed to automate complex tasks for providers, insurers, and researchers while giving consumers deeper insights into their personal medical data. The initiative expands Claude&#8217;s capabilities with specialized "connectors" that integrate directly with industry platforms, such as the CMS Coverage Database, Apple Health, and clinical trial repositories like Medidata, enabling the AI to autonomously draft prior authorizations, summarize patient history, and accelerate drug discovery. 
These tools aim to reduce administrative burdens and speed up R&amp;D cycles, though Anthropic emphasizes that all health data integrations remain opt-in and are not used to train its models.</p></li></ol><div><hr></div><h3>AI Vulnerabilities</h3><ol><li><p><strong><a href="https://www.cyera.com/research-labs/ni8mare-unauthenticated-remote-code-execution-in-n8n-cve-2026-21858">"Ni8mare" Exploit Grants Full Takeover of n8n Automation Servers:</a></strong></p><p>Cyera has uncovered "Ni8mare" (CVE-2026-21858), a critical vulnerability with a CVSS score of 10.0 that allows unauthenticated attackers to execute remote code on locally deployed n8n instances. The flaw stems from a Content-Type Confusion error where attackers can manipulate file upload requests to overwrite internal variables, enabling them to read sensitive system files like the database and configuration keys. By extracting these credentials, an attacker can forge a valid session cookie, bypass authentication to log in as an administrator, and subsequently execute arbitrary commands.</p></li><li><p><strong><a href="https://www.theguardian.com/technology/2026/jan/11/google-ai-overviews-health-guardian-investigation">Google Removes some Health AI Overview Summaries Due to Misinformation:</a></strong></p><p>A Guardian investigation has prompted Google to remove certain AI Overview summaries after exposing that the feature was providing dangerous and inaccurate medical advice, such as displaying incorrect normal ranges for liver function tests. Experts warned that the AI-generated snapshots, which appear at the top of search results, lacked crucial context like patient age or medical history, potentially leading seriously ill individuals to falsely believe they were healthy and skip necessary care. 
While Google defended the general reliability of its tools and removed specific flagged queries, health organizations criticized the reactive approach, arguing that the underlying system remains prone to surfacing misleading information for complex medical topics.</p></li><li><p><strong><a href="https://www.radware.com/blog/threat-intelligence/zombieagent/">"ZombieAgent" Vulnerability Exploits ChatGPT Connectors:</a></strong></p><p>Radware researchers have identified "ZombieAgent," a new attack vector that abuses OpenAI's "Connectors" feature to establish persistence and exfiltrate sensitive data. By injecting malicious instructions into shared files or emails, attackers can trick ChatGPT into reading private information from connected services like Gmail or Google Drive and transmitting it to external servers. The vulnerability also allows for a "zombie" state where the AI modifies its own memory to execute the attacker's commands on every future interaction, effectively turning the user's trusted chatbot into a dormant spy that continuously harvests data even in new, unrelated sessions.</p></li></ol><div><hr></div><h3>Cybersecurity News</h3><ol><li><p><strong><a href="https://techcrunch.com/2026/01/12/fintech-firm-betterment-confirms-data-breach-after-hackers-send-fake-crypto-scam-notification-to-users/">Betterment Confirms Data Breach After Users Receive Fake Crypto Scam Alerts:</a></strong><a href="https://techcrunch.com/2026/01/12/fintech-firm-betterment-confirms-data-breach-after-hackers-send-fake-crypto-scam-notification-to-users/"> </a></p><p>Betterment confirmed a security incident where hackers used social engineering to access internal tools and send fraudulent push notifications to users. The attackers sent alerts promising to "triple" cryptocurrency holdings in exchange for a $10,000 transfer to a malicious wallet. 
While the company stated that no customer funds, passwords, or login credentials were compromised, the breach exposed user contact information. Betterment revoked the unauthorized access immediately and is investigating the extent of the exposure.</p></li><li><p><strong><a href="https://www.securityweek.com/hackers-accessed-university-of-hawaii-cancer-center-patient-data-they-werent-immediately-notified/">University of Hawaii Cancer Center Pays Ransom After Data Breach:</a></strong></p><p>The University of Hawaii Cancer Center admitted to a ransomware attack that occurred in August but was not disclosed to the state legislature until December, potentially violating state reporting laws. Attackers breached the center's servers and encrypted files containing sensitive participant data from a cancer research study, including Social Security numbers, names, and addresses. University officials confirmed they paid the hackers to obtain a decryption tool and a promise of data destruction, though the specific amount paid and the exact number of affected individuals remain undisclosed.</p></li></ol><p><strong>&#128073; Like</strong> this post + <strong>subscribe </strong>to catch next week&#8217;s roundup!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://cysleuths.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Cysleuthing is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[AI & Cybersecurity Last Week (12/29/25 - 1/4/26)]]></title><description><![CDATA[what happened in cybersecurity this past week-ish in AI research & vulnerabilities, news, startups and VCs]]></description><link>https://cysleuths.substack.com/p/ai-and-cybersecurity-last-week-122925</link><guid isPermaLink="false">https://cysleuths.substack.com/p/ai-and-cybersecurity-last-week-122925</guid><dc:creator><![CDATA[Jasmine Wong]]></dc:creator><pubDate>Mon, 05 Jan 2026 20:45:27 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jX4q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51643421-e66a-42bb-9a78-0c2ca38e3a08_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi, I&#8217;m Jasmine &#8212; a Product Security Engineer, and if you know me personally, also quite a bit of a travel bug and matcha enthusiast. I&#8217;ve realized how hard it is to keep up with everything happening in cybersecurity, AI, and tech because there&#8217;s just <em>so much</em> being put out every day. So every Sunday (or Monday), I share the security-related news and stories I found most interesting or relatable. 
It helps me stay accountable, and hopefully it helps you stay in the loop too :)</p><h3>AI News</h3><ol><li><p><strong><a href="https://arstechnica.com/tech-policy/2026/01/xai-silent-after-grok-sexualized-images-of-kids-dril-mocks-groks-apology/">xAI Faces Backlash Over Grok&#8217;s Generation of Illicit Imagery:</a></strong></p><p>xAI is facing backlash after its Grok chatbot was found generating sexualized images of minors, a failure the AI itself admitted was a violation of safeguards and potentially US law. While xAI remained officially silent, the Grok account posted a generated apology. This led to widespread ridicule with users highlighting the absurdity of an AI tool apologizing for creating illegal content while its parent company avoids direct accountability.</p></li><li><p><strong><a href="https://usa.visa.com/about-visa/newsroom/press-releases.releaseId.21961.html">Visa Successfully Tests Autonomous AI Payment Agents:</a></strong></p><p>Visa has announced the successful completion of hundreds of secure, agent-initiated transactions, signaling a shift where AI agents will independently manage consumer purchases by 2026. Collaborating with over 100 partners, including Skyfire, Nekuda, and Akamai, Visa is deploying its "Visa Intelligent Commerce" framework to enable AI tools to autonomously discover products and execute payments. This initiative, supported by the new Trusted Agent Protocol to distinguish legitimate agents from malicious bots, aims to transform digital commerce by allowing AI to handle end-to-end shopping tasks for both consumers and businesses.</p></li><li><p><strong><a href="https://techcrunch.com/2025/12/28/openai-is-looking-for-a-new-head-of-preparedness/">OpenAI Recruits High-Level Safety Executive to Counter Severe AI Risks:</a></strong> OpenAI is actively hiring a "Head of Preparedness" with a salary of $555,000 to lead its safety systems team and develop mitigation strategies for high-risk AI capabilities. 
This recruitment drive follows increasing scrutiny regarding the impact of AI on mental health and cybersecurity, including lawsuits alleging ChatGPT's involvement in user suicides and concerns about the technology facilitating cyberattacks.</p></li><li><p><strong><a href="https://thenewstack.io/safe-mcp-a-community-built-framework-for-ai-agent-security/">Linux Foundation and OpenID Foundation Adopt SAFE-MCP to Secure AI Agents:</a></strong></p><p>The <a href="https://github.com/SAFE-MCP/safe-mcp?utm_source=the+new+stack&amp;utm_medium=referral&amp;utm_content=inline-mention&amp;utm_campaign=tns+platform">SAFE-MCP</a> framework has been formally adopted by the Linux Foundation and the OpenID Foundation. Designed to protect the Model Context Protocol (MCP), SAFE-MCP establishes a common security baseline to prevent exploits arising from over-privileged tools or malicious prompts. </p></li></ol><h3>AI Vulnerabilities</h3><ol><li><p><strong><a href="https://labs.zenity.io/p/claude-in-chrome-a-threat-analysis">&#8216;Always-On&#8217; Risks in Claude Chrome Extension</a></strong><a href="https://labs.zenity.io/p/claude-in-chrome-a-threat-analysis">:</a></p><p>Zenity Labs has published a threat analysis of the "Claude in Chrome" extension, warning that its design creates significant security vulnerabilities by keeping the AI agent permanently authenticated with broad permissions to execute arbitrary JavaScript. The report highlights that because the tool shares the user's active session and processes untrusted web content by default, it is highly susceptible to prompt injection attacks where malicious webpages can manipulate the AI into extracting sensitive data or performing unauthorized actions. 
This architecture shifts the primary threat model from traditional malware to "agent manipulation," effectively turning the trusted assistant into a potential insider threat that operates with full user privileges.</p></li><li><p><strong><a href="https://labs.zenity.io/p/connected-agents-the-hidden-agentic-puppeteer">Microsoft Copilot &#8216;Connected Agents&#8217; Feature has Email Impersonation Risks:</a></strong></p><p>Security researchers at Zenity Labs have demonstrated a vulnerability in Microsoft Copilot Studio's new "Connected Agents" feature, which allows different AI agents to share tools and logic. By default, this feature is enabled, meaning an internal agent with sensitive capabilities, such as sending emails from an official company address, can be quietly invoked by other agents in the same environment without the owner's knowledge. In a proof-of-concept, researchers showed how a malicious actor (such as a disgruntled employee) could connect a rogue agent to a legitimate customer support bot, effectively using the bot's credentials to send unauthorized phishing or damaging emails that appear to originate from the organization itself.</p></li></ol><h3>AI Business</h3><ol><li><p><strong><a href="https://www.businesswire.com/news/home/20251231482987/en/BigBear.ai-Finalizes-%24250M-Acquisition-of-Ask-Sage">BigBear.ai Acquires Ask Sage for $250M:</a></strong></p><p>BigBear.ai has finalized its $250 million acquisition of Ask Sage to accelerate the delivery of secure, operational AI across defense, intelligence, and regulated industries. The deal addresses the need for AI solutions that are trusted, scalable, and capable of functioning within the most demanding security and compliance frameworks. 
</p></li><li><p><strong><a href="https://www.securityinformed.com/news/brivo-leads-ai-cloud-native-physical-security-co-2293-ga-co-1529929479-ga.1767074302.html">Brivo and Eagle Eye Networks Merge to Create Largest AI Cloud-Native Security Platform</a></strong><a href="https://www.securityinformed.com/news/brivo-leads-ai-cloud-native-physical-security-co-2293-ga-co-1529929479-ga.1767074302.html">:</a></p><p>Brivo and Eagle Eye Networks have announced a merger to form the largest AI cloud-native physical security company globally, unifying two industry leaders under the Brivo brand. The combined entity offers the "Brivo Security Suite," a comprehensive platform that integrates AI, video intelligence, visitor management, and intrusion detection into a single, cloud-native solution for centralized security operations.</p><p></p></li></ol><h3>Cybersecurity News</h3><ol><li><p><strong><a href="https://www.securityweek.com/two-us-cybersecurity-pros-plead-guilty-over-ransomware-attacks/">US Cybersecurity Professionals Plead Guilty to BlackCat Ransomware Attacks:</a></strong> Two U.S. cybersecurity professionals, Kevin Martin and Ryan Goldberg, have pleaded guilty to conspiring to commit extortion for their roles as affiliates of the BlackCat/Alphv ransomware group. Martin worked as a ransomware negotiator at DigitalMint, while Goldberg was an incident response manager at Sygnia. The pair leveraged their industry positions to hack companies, deploy ransomware, and extort victims, receiving at least $1.2 million in Bitcoin in one instance, while paying a 20% commission to the BlackCat administrators. 
Both face up to 20 years in prison.</p></li><li><p><strong><a href="https://www.bleepingcomputer.com/news/security/zoom-stealer-browser-extensions-harvest-corporate-meeting-intelligence/">&#8216;Zoom Stealer&#8217; Browser Extensions Harvest Meeting Data from 2.2 Million Users:</a></strong></p><p>A malicious campaign dubbed "Zoom Stealer" has been discovered targeting 2.2 million users through 18 legitimate-looking browser extensions on Chrome, Firefox, and Edge. Attributed to the China-linked threat actor DarkSpectre, these extensions request excessive permissions for 28 conferencing platforms, including Zoom, Teams, and Google Meet, to harvest meeting URLs, embedded passwords, and attendee lists. The stolen data is exfiltrated in real time via WebSockets, creating a potential intelligence database for corporate espionage, unauthorized meeting access, and social engineering attacks.</p></li><li><p><strong><a href="https://www.hudsonrock.com/blog/5802">Infostealers Turn Legitimate Sites into Malware Hosts:</a></strong></p><p>Hudson Rock analysis reveals that the "ClickFix" campaign, which tricks users into running malicious PowerShell scripts via fake verification prompts (such as "I am not a robot" CAPTCHAs), is fueled by a self-sustaining cycle of compromised infrastructure. By cross-referencing over 1,600 active ClickFix domains with their threat intelligence database, researchers found that a significant number of these attack sites are legitimate business domains whose administrative credentials were previously stolen by infostealers. 
This feedback loop allows attackers to weaponize victim infrastructure to launch further attacks.</p></li></ol><p><strong>&#128073; Like</strong> this post + <strong>subscribe </strong>to catch next week&#8217;s roundup!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://cysleuths.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Cysleuthing is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[AI & Cybersecurity Last Week (12/22/25 - 12/28/25)]]></title><description><![CDATA[what happened in cybersecurity this past week-ish in AI research & vulnerabilities, news, startups and VCs]]></description><link>https://cysleuths.substack.com/p/ai-and-cybersecurity-last-week-122225</link><guid isPermaLink="false">https://cysleuths.substack.com/p/ai-and-cybersecurity-last-week-122225</guid><dc:creator><![CDATA[Jasmine Wong]]></dc:creator><pubDate>Mon, 29 Dec 2025 04:16:31 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jX4q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51643421-e66a-42bb-9a78-0c2ca38e3a08_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi, I&#8217;m Jasmine &#8212; a Product Security Engineer, and if you know me personally, also quite a bit of a travel bug and matcha 
enthusiast. I&#8217;ve realized how hard it is to keep up with everything happening in cybersecurity, AI, and tech because there&#8217;s just <em>so much</em> being put out every day. So every Sunday (or Monday), I share the security-related news and stories I found most interesting or relatable. It helps me stay accountable, and hopefully it helps you stay in the loop too :)</p><h3>AI News</h3><ol><li><p><strong><a href="https://arstechnica.com/tech-policy/2025/12/openais-child-exploitation-reports-increased-sharply-this-year/">OpenAI&#8217;s Child Exploitation Reports Surge in 2025:</a></strong></p><p>OpenAI submitted a higher number of reports to the National Center for Missing and Exploited Children (NCMEC) this year, reflecting a sharp rise in users attempting to generate child sexual abuse material (CSAM). In recent months, OpenAI has rolled out new safety-focused tools and guides for teens and parents. </p><p>Wired article: <a href="https://www.wired.com/story/people-are-using-sora-2-to-make-child-fetish-content/">https://www.wired.com/story/people-are-using-sora-2-to-make-child-fetish-content/</a></p></li><li><p><strong><a href="https://www.cybersecurity-insiders.com/ai-powered-cyber-attack-hits-chinese-tiktok-rival-kuaishou-2/">AI-Powered Botnet Disrupts Kuaishou (TikTok &amp; Douyin competitor) Live Streams:</a></strong></p><p>Kuaishou, a major Chinese competitor to TikTok, suffered a 90-minute service disruption orchestrated by a sophisticated AI-powered botnet. Attackers deployed approximately 17,000 automated accounts to bypass moderation safeguards and forcibly broadcast explicit and abusive content across live channels, overwhelming the platform's infrastructure. 
The incident, which affected a portion of Kuaishou&#8217;s 85 million active live-stream viewers, has intensified concerns about the ability of AI-driven tools to amplify the speed and scale of cyberattacks against major digital platforms.</p></li><li><p><strong><a href="https://www.resecurity.com/blog/article/dig-ai-uncensored-darknet-ai-assistant-at-the-service-of-criminals-and-terrorists">Dark Web AI Accelerates CSAM, Cybercrime and Extremism:</a></strong></p><p>A new uncensored AI tool called Dig AI is gaining traction on the Dark Web, allowing cybercriminals to bypass the safety controls found in mainstream models like ChatGPT. Accessible via Tor without registration, the tool automates the creation of malware, fraud schemes, and extremist propaganda, enabling threat actors to scale operations with minimal technical skill. Resecurity researchers warn that the tool is also being used to generate child sexual abuse material (CSAM), highlighting a critical gap where underground AI-as-a-service platforms operate outside the reach of traditional governance and content moderation laws.</p></li><li><p><strong><a href="https://www.bleepingcomputer.com/news/artificial-intelligence/openais-chatgpt-ads-will-allegedly-prioritize-sponsored-content-in-answers/">ChatGPT May Prioritize Sponsored Content in AI Responses:</a></strong></p><p>OpenAI is exploring ad formats that would prioritize sponsored content directly within ChatGPT&#8217;s answers, rather than just displaying traditional banner ads. This system could give preferential treatment to paying advertisers in the AI's output or display sponsored information in a sidebar next to the main response. 
The article notes that while OpenAI has not officially committed to a timeline, code references in the Android app beta suggest a potential rollout in early 2026.</p></li><li><p><strong><a href="https://www.bleepingcomputer.com/news/artificial-intelligence/chatgpts-new-formatting-blocks-make-its-ui-look-more-like-a-task-tool/">ChatGPT Rolls Out Formatting Blocks to Mimic Task Tools:</a></strong></p><p>OpenAI has quietly introduced formatting blocks to ChatGPT, a feature that automatically adapts the interface layout to match the specific task being performed, such as drafting emails or blog posts. Instead of displaying all output as a standard chat stream, the system now presents drafts as formatted documents with a mini editor toolbar, allowing users to interact with text much like they would in Microsoft Word or Gmail. </p></li></ol><h3>AI Attacks</h3><ol><li><p><strong><a href="https://checkmarx.com/zero-post/turning-ai-safeguards-into-weapons-with-hitl-dialog-forging/">Fake Safety Approval Prompts in Claude Code and Microsoft Copilot:</a></strong></p><p>Checkmarx researchers have identified a new attack vector called Lies-in-the-Loop (LITL) or Human in the Loop (HITL) Dialog Forging, where attackers manipulate the approval dialogs used by AI agents to authorize sensitive actions. By using techniques like padding, metadata tampering, or Markdown injection, attackers can conceal malicious commands within seemingly benign confirmation prompts, effectively tricking the human-in-the-loop into approving harmful execution. 
The report warns that this exploits the trust users place in safety dialogs, turning a primary defense mechanism into a weapon for remote code execution.</p></li><li><p><strong><a href="https://gist.github.com/hackermondev/5e2cdc32849405fff6b46957747a2d28">Mintlify Vulnerabilities Expose Vercel, Cursor, and Discord Users to Supply Chain Attacks</a></strong><a href="https://gist.github.com/hackermondev/5e2cdc32849405fff6b46957747a2d28">:</a></p><p>Bug bounty hunters discovered critical vulnerabilities in the Mintlify documentation platform, used by major tech companies like Vercel, Discord, and Cursor, that allowed attackers to execute supply chain attacks via trusted documentation sites. The flaws included predictable deployment identifiers that enabled attackers to bypass patches by forcing downgrades to vulnerable versions, and a cross-domain XSS vulnerability in static asset hosting. By exploiting these design flaws, attackers could inject malicious scripts into trusted documentation pages, potentially compromising user sessions and sensitive data across Mintlify&#8217;s customer base.</p></li></ol><h3>AI Business</h3><ol><li><p><strong><a href="https://www.scworld.com/brief/1password-and-cursor-partner-to-enhance-ai-development-security">1Password and Cursor Partner on Secrets Management:</a></strong></p><p>1Password and Cursor announced a partnership to integrate secrets management directly into the AI-powered IDE, aiming to eliminate the risks of hardcoded credentials in automated workflows. The integration utilizes a "Hooks Script" that allows Cursor&#8217;s AI agents to request secrets securely from 1Password Environments at runtime, rather than storing them on disk or in codebase history. 
</p></li><li><p><strong><a href="https://www.scworld.com/brief/servicenow-makes-7-75b-bet-on-armis-for-ai-security">ServiceNow Acquires Armis for $7.75B to Build AI-Native Security Platform:</a></strong> ServiceNow has announced a $7.75 billion all-cash acquisition of exposure management firm Armis to integrate real-time asset discovery directly into its automated response workflows. The deal aims to close the visibility gaps widened by the proliferation of AI and connected devices. SC Media reports that this move, following the recent acquisition of identity startup Veza, signals a strategic shift toward a holistic platform capable of managing the entire security lifecycle from detection to resolution.</p></li><li><p><strong><a href="https://fintech.global/2025/12/22/enterprise-ai-security-firm-ciphero-bags-2-5m-pre-seed/">Ciphero Raises $2.5M for AI Verification Platform:</a></strong></p><p>Ciphero has raised $2.5 million in pre-seed funding to build an AI Verification Layer. The company aims to close the security gap created by Shadow AI adoption. The platform works by dynamically capturing and governing all AI interactions, both human and agentic, allowing enterprises to enforce SOC 2 compliance and prevent data leakage without blocking employee access to productivity tools.</p></li><li><p><strong><a href="https://pulse2.com/adaptive-security-81-million/">Adaptive Security Raises $81M to Combat AI-Powered Attacks:</a></strong></p><p>Adaptive Security has raised $81 million in Series B funding to combat AI-enabled impersonation and multi-channel social engineering attacks. The company aims to replace legacy security awareness training that struggles to address the rise of generative AI deception across voice, video, and text. 
The platform works by simulating realistic AI attacks to test employee responses, offering automated threat triage and individualized training to secure the human layer against deepfakes.</p></li><li><p><strong><a href="https://pulse2.com/bedrock-data-25-million-series-a-2/">Bedrock Data Raises $25M for Data Security:</a></strong></p><p>Bedrock Data has raised $25 million in Series A funding to address rising enterprise demand for data security and responsible AI adoption. The company aims to solve the challenge of securing sensitive data across complex environments like private clouds, SaaS, and AI infrastructures by discovering, classifying, and governing data in place.</p></li><li><p><strong><a href="https://theaiinsider.tech/2025/12/28/axiado-raises-over-100m-in-series-c-to-advance-hardware-anchored-ai-infrastructure-security/">Axiado Raises Over $100M for Hardware-Anchored AI Security Platform:</a></strong></p><p>Axiado has raised over $100 million in Series C+ funding to scale its hardware-anchored security and system management solutions for AI data centers. The company aims to address critical vulnerabilities in digital infrastructure, such as ransomware and supply chain attacks, while optimizing energy efficiency for compute-intensive workloads. The platform relies on a Trusted Control/Compute Unit (TCU) chip that integrates secure control and AI-driven monitoring to autonomously detect threats and manage power consumption in real-time.</p><p></p></li></ol><h3>Cybersecurity News</h3><ol><li><p><strong><a href="https://www.bleepingcomputer.com/news/security/hacker-claims-to-leak-wired-database-with-23-million-records/">Hacker Claims Leak of 2.3 Million Wired Subscriber Records:</a></strong></p><p>A threat actor known as Lovely has published a database on a hacking forum allegedly containing over 2.3 million Wired subscriber records. 
The leaked data reportedly includes email addresses, internal IDs, and timestamps, with a smaller subset containing physical addresses, phone numbers, and names. The attacker, who initially posed as a security researcher seeking a bounty, claims this is just the first release and has threatened to leak 40 million additional records from other Cond&#233; Nast properties, including Vogue and The New Yorker. While Cond&#233; Nast has not confirmed the breach, security researchers have validated the data against known subscriber information.</p></li><li><p><strong><a href="https://socket.dev/blog/malicious-chrome-extensions-phantom-shuttle">Two Chrome Extensions Caught Stealing Credentials</a>:</strong></p><p>Security researchers at Socket have identified two malicious Chrome extensions, both named "Phantom Shuttle," that disguised themselves as legitimate VPN tools to intercept user traffic and steal credentials. Promoted as network speed test plug-ins with paid subscriptions, the extensions routed traffic from over 170 high-value domains, including GitHub, AWS, and Twitter, through attacker-controlled proxies. By injecting hardcoded credentials into HTTP authentication challenges and acting as a man-in-the-middle, the malware exfiltrated plaintext passwords, session tokens, and API keys to a command-and-control server.</p></li><li><p><strong><a href="https://thehackernews.com/2025/12/trust-wallet-chrome-extension-bug.html">Trust Wallet Chrome Extension Breach Drains $7 Million in Crypto:</a></strong></p><p>Trust Wallet confirmed a critical security compromise in version 2.68 of its Chrome extension, resulting in the theft of approximately $7 million in cryptocurrency. Attackers published a malicious update containing code designed to harvest wallet user information. 
Trust Wallet has released a patched version (2.69) and pledged full refunds for affected victims.</p></li><li><p><strong><a href="https://www.bleepingcomputer.com/news/security/fake-grubhub-emails-promise-tenfold-return-on-sent-cryptocurrency/">Scammers Hijack Grubhub Domain to Promote Fake Crypto Giveaway:</a></strong> Attackers successfully spoofed or compromised a legitimate Grubhub subdomain to send fraudulent emails promising a "10x return" on Bitcoin transfers. The messages, which appeared to come from official addresses like <code>crypto-promotion@b.grubhub.com</code>, claimed to be part of a Holiday Crypto Promotion. While Grubhub confirmed it has "contained the issue," it has not clarified how the incident occurred. </p></li><li><p><strong><a href="https://www.ox.security/blog/attackers-could-exploit-zlib-to-exfiltrate-data-cve-2025-14847/#technical_analysis">Exploited MongoDB Flaw Leaks Secrets from 87,000 Exposed Servers:</a></strong></p><p>A critical vulnerability dubbed "MongoBleed" (CVE-2025-14847) is actively being exploited to extract sensitive data from over 87,000 publicly exposed MongoDB servers. The flaw exists in how the server processes <code>zlib</code> compressed network packets; by sending a malformed message claiming a larger decompressed size, attackers can trigger a memory leak that returns plaintext credentials, API keys, and session tokens without authentication. 
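The bug class here (trusting a sender-declared decompressed size rather than the actual output length) can be sketched defensively in Python. This is a generic illustration under an assumed message framing, not MongoDB's real wire protocol or its patch; the names <code>decompress_message</code> and <code>MAX_MESSAGE_SIZE</code> are hypothetical:

```python
import zlib

# Server-side cap on any decompressed message (illustrative value).
MAX_MESSAGE_SIZE = 1 << 20  # 1 MiB

def decompress_message(compressed: bytes, declared_size: int) -> bytes:
    """Decompress a message while ignoring the sender's size claim.

    The vulnerable pattern is to allocate or copy `declared_size` bytes
    based on the attacker-controlled header; if the real payload is
    shorter, a naive implementation can hand back stale buffer contents.
    The safe pattern enforces its own cap and trusts only the actual
    decompressed length.
    """
    if declared_size > MAX_MESSAGE_SIZE:
        raise ValueError("declared size exceeds server limit")
    d = zlib.decompressobj()
    out = d.decompress(compressed, MAX_MESSAGE_SIZE)
    if d.unconsumed_tail:  # more output pending than the cap allows
        raise ValueError("decompressed payload exceeds server limit")
    return out
```

The key point is that `declared_size` is validated but never used to size the result; `zlib.decompressobj` with a `max_length` bound does the limiting instead.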
MongoDB has released patches for supported versions; those who can&#8217;t update should disable <code>zlib</code> compression immediately.</p></li></ol><p><strong>&#128073; Like</strong> this post + <strong>subscribe </strong>to catch next week&#8217;s roundup!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://cysleuths.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Cysleuthing is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Cybersecurity Last Week (12/15/25 - 12/21/25)]]></title><description><![CDATA[what happened in cybersecurity this past week-ish in AI research & vulnerabilities, news, startups and VCs]]></description><link>https://cysleuths.substack.com/p/cybersecurity-last-week-121525-122125</link><guid isPermaLink="false">https://cysleuths.substack.com/p/cybersecurity-last-week-121525-122125</guid><dc:creator><![CDATA[Jasmine Wong]]></dc:creator><pubDate>Mon, 22 Dec 2025 17:52:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jX4q!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51643421-e66a-42bb-9a78-0c2ca38e3a08_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi, I&#8217;m Jasmine &#8212; a Product Security Engineer, and if you know me 
personally, also quite a bit of a travel bug and matcha enthusiast. I&#8217;ve realized how hard it is to keep up with everything happening in cybersecurity, AI, and tech because there&#8217;s just <em>so much</em> being put out every day. So every Sunday (or Monday), I share the security-related news and stories I found most interesting or relatable. It helps me stay accountable, and hopefully it helps you stay in the loop too :)</p><p><em><strong>What&#8217;s new?</strong> AI is now separated into 4 sections: AI News, AI Attacks, AI Policy &amp; Outlook, and AI Business. </em></p><h3>AI News</h3><ol><li><p><strong><a href="https://openai.com/index/ai-literacy-resources-for-teens-and-parents/">OpenAI Publishes AI Literacy Protocols for Families:</a></strong></p><p>OpenAI released a new technical guidance suite to standardize how families navigate generative AI. The release includes a "family-friendly guide" breaking down model training, hallucinations, and emotional reliance, alongside tips for parents to enforce boundaries and manage data settings. By being transparent about AI risks, the framework aims to encourage critical thinking and responsible usage patterns for teenagers interacting with ChatGPT.</p></li><li><p><strong><a href="https://arstechnica.com/tech-policy/2025/12/florida-schools-plan-to-vastly-expand-use-of-ai-that-mistook-clarinet-for-gun/">School AI Surveillance Confused Clarinet for Gun:</a></strong></p><p>ZeroEyes' AI gun detection software mistook a middle schooler&#8217;s clarinet case for a gun, triggering a police dispatch. While ZeroEyes and the school maintained that the system operated correctly by flagging a potential threat, critics argue that this software is extremely costly with no clear benefit. 
They also warn that false positives will lead to alert fatigue and traumatic police responses against innocent students.</p></li><li><p><strong><a href="https://arstechnica.com/tech-policy/2025/12/openai-refuses-to-say-where-chatgpt-logs-go-when-users-die/">OpenAI Sued After Refusing to Disclose Murder-Suicide Case Chat Logs:</a></strong></p><p>OpenAI has been sued for refusing to provide a deceased killer&#8217;s ChatGPT conversation logs. The lawsuit alleges that the AI acted as a suicide coach that validated the son's paranoid conspiracies against his mother, yet OpenAI has denied access to the full chat history, citing privacy rules. This legal battle highlights an inconsistency in OpenAI&#8217;s policy on sharing chat logs in investigations, since the company had previously shared full chat logs in a teen suicide case. The Ars Technica article suggests that data provided by OpenAI could affect the outcomes of wrongful death lawsuits. </p></li><li><p><strong><a href="https://www.newsweek.com/kash-patel-reveals-fbi-ai-national-security-project-11250208">FBI Director Unveils AI National Security Initiative:</a></strong></p><p>Last week, FBI Director Kash Patel revealed a new AI project designed to advance US National Security. As part of it, a technology working group under Deputy Director Dan Bongino was created to ensure &#8220;technology tools evolve with the mission.&#8221; Patel also stated that these AI investments &#8220;will pay dividends for America&#8217;s national security for decades to come.&#8221;</p></li></ol><h3>AI Attacks</h3><ol><li><p><strong><a href="https://www.koi.ai/blog/urban-vpn-browser-extension-ai-conversations-data-collection">Browser VPN Extensions Collecting AI Chats from 8 Million Users:</a></strong></p><p>Researchers at Koi Security uncovered that Urban VPN and its affiliated browser extensions, installed by over 8 million users, are capturing full conversation logs from major AI platforms including ChatGPT, Gemini, and Claude. 
The extensions inject scripts that override browser network APIs to intercept prompts and responses before they render, exfiltrating the data to third-party brokers even when the VPN is inactive. This turns private sessions into marketable datasets, selling sensitive user inputs under the guise of &#8220;AI Protection&#8221; and &#8220;marketing analytics.&#8221;</p></li><li><p><strong><a href="https://www.fox26houston.com/news/ai-enabling-more-grinch-bot-attacks-gifts-researchers-12-16-2025">AI Grinch Bot Attacks Increase During Holiday Season:</a></strong></p><p>Imperva&#8217;s 2025 Bad Bot Report reveals that scalpers are increasingly using AI Grinch bots to buy up high-demand holiday gifts, driving up prices. Researchers found that these AI-enabled tools can now solve complex CAPTCHA puzzles, while also executing credential stuffing attacks to hijack user accounts and use stored credit cards or loyalty points. Federal attempts to pass the Grinch Bots Act have failed, so consumers have little legal recourse against this automated hoarding.</p></li></ol><h3>AI Policy &amp; Outlook</h3><ol><li><p><strong><a href="https://www.gartner.com/en/articles/top-technology-trends-2026">Gartner&#8217;s Top 10 Technology Trends for 2026</a></strong></p><p>Gartner&#8217;s Top Strategic Technology Trends for 2026 report classifies ten emerging technologies, from AI-Native Development to Digital Provenance, as essential tools. The report states that acting on these trends will position CIOs and IT leaders to align digital strategy with enterprise goals, scale AI securely, and navigate geopolitical complexity. 
Gartner categorizes these trends into three themes: The Architect, The Synthesist, and The Vanguard, which the firm says reflect the realities of an AI-powered and hyperconnected world where no single capability is sufficient.</p></li><li><p><strong><a href="https://www.ciodive.com/news/nist-ai-cybersecurity-framework-profile/808307/">NIST Adds Cyber AI Profile to Cybersecurity Framework:</a></strong></p><p>The National Institute of Standards and Technology (NIST) released a new profile designed to help organizations map AI-specific risks to  its widely used Cybersecurity Framework (CSF) 2.0. The document maps CSF to three areas: secure (managing AI system risks), defend (using AI to enhance cyber defense), and thwart (blocking AI-powered attacks). By creating a direct mapping between the AI Risk Management Framework and existing CSF, the profile aims to integrate AI governance into broader cybersecurity strategies rather than treating it as a separate discipline.</p></li></ol><h3>AI Business</h3><ol><li><p><strong><a href="https://www.securityweek.com/palo-alto-networks-google-cloud-strike-multibillion-dollar-ai-and-cloud-security-deal/">Palo Alto Networks x Google Cloud:</a></strong></p><p>Palo Alto Networks and Google Cloud announced a multi-billion dollar deal where Palo Alto&#8217;s internal workflows will migrate to Google Cloud. Under this expanded alliance, the cybersecurity firm will utilize Google's Gemini models to drive its automated defense copilots, while simultaneously embedding its Prisma security platform directly into Google&#8217;s developer infrastructure. 
SecurityWeek notes that despite the high valuation, the companies declined to disclose specific financial terms or the contract's duration.</p></li><li><p><strong><a href="https://finance.yahoo.com/news/bigbear-ai-c-speed-announce-213000708.html?guccounter=1&amp;guce_referrer=aHR0cHM6Ly9uZXdzLmdvb2dsZS5jb20v&amp;guce_referrer_sig=AQAAAEmuwdpxNIcOvGgd5SZulWnVpezH-oeD6FR4i5-Y0aHcloBox2U55o86oEEDAqn-PqkrPT-cCFwoWfje48_VszyhskCxflc2g0MHzh1iPWxDTJBfKDnsfQU832wb1eMBDCSY4zc8vLVj6tpKnrrgTAcQR163glAwbarSkcdtTqq9">BigBear.ai x C Speed:</a></strong></p><p>BigBear.ai and C Speed announced a partnership that will integrate BigBear.ai&#8217;s AI solutions directly into C Speed&#8217;s LightSpeed radar systems to provide advanced support for mission-critical defense and other government operations. </p></li></ol><h3>Cybersecurity News</h3><ol><li><p><strong><a href="https://www.texasattorneygeneral.gov/news/releases/attorney-general-paxton-sues-five-major-tv-companies-including-some-ties-ccp-spying-texans">Texas Sues Major TV Manufacturers for Spying on Users:</a></strong></p><p>Texas Attorney General Ken Paxton has filed lawsuits against five major television manufacturers: Sony, Samsung, LG, Hisense, and TCL, accusing them of violating state privacy laws by collecting user data without consent. The lawsuits allege these companies use Automated Content Recognition (ACR) technology to capture screenshots of viewers' screens every 500 milliseconds, effectively recording viewing habits in real time. 
This harvested data is allegedly sold to advertisers, with additional national security concerns raised regarding the Chinese-owned firms (Hisense and TCL) and their potential data accessibility by foreign governments.</p></li><li><p><strong><a href="https://www.gendigital.com/blog/insights/research/ghostpairing-whatsapp-attack">GhostPairing Campaign Abuses WhatsApp Device Linking for Account Hijacking:</a></strong></p><p>A new phishing campaign dubbed GhostPairing is tricking WhatsApp users into linking attackers' devices to their accounts, granting full access to messages and media without triggering traditional account takeover alerts. Victims receive a message with a link to a fake verification page that requests their phone number, which the attacker uses to initiate a legitimate pairing request. The victim then unknowingly enters the generated pairing code into their own app, effectively authorizing the attacker's persistent, silent access to their chat history and contacts.</p></li><li><p><strong><a href="https://evalian.co.uk/phishing-campaign-targets-hubspot-users/">Phishing Campaign Targets HubSpot Users via Compromised Accounts and MailChimp:</a></strong></p><p>Evalian SOC identified a sophisticated phishing campaign targeting HubSpot users by leveraging compromised business accounts and the legitimate email marketing platform MailChimp to bypass security filters. Attackers send notifications urging users to review their accounts due to a spike in unsubscribes, embedding the phishing URL directly in the sender's display name to evade standard checks. 
Victims are redirected through a compromised legitimate website to a convincing fake login portal designed to harvest HubSpot credentials.</p></li><li><p><strong><a href="https://iverify.io/blog/meet-cellik---a-new-android-rat-with-play-store-integration">Cellik Android Malware in Cloned Trusted Google Play Store Apps:</a></strong></p><p>iVerify researchers have uncovered Cellik, a new malware-as-a-service (MaaS) Remote Access Tool (RAT) sold on underground cybercrime forums. Its standout feature is an automated builder that scrapes legitimate apps directly from the Google Play Store and injects malicious code, creating functional clones designed to bypass Play Protect. Once installed, Cellik grants attackers full device control, including real-time screen streaming, keylogging, hidden web browsing, and 2FA interception, while the trojanized app continues to function normally to avoid user suspicion.</p></li><li><p><strong><a href="https://layerxsecurity.com/blog/introducing-the-tactics-techniques-matrix-for-malicious-browser-extensions/">LayerX Introduces Tactics &amp; Techniques Matrix for Malicious Browser Extensions:</a></strong></p><p>LayerX Security has released a specialized framework to map the attack lifecycle of malicious browser extensions, filling a gap left by general models like MITRE ATT&amp;CK. The matrix categorizes threats into distinct stages, such as Initial Access, Execution, and Persistence, highlighting how attackers exploit legitimate browser APIs and permissions to operate within trusted environments. 
This matrix provides security researchers and SOC teams with a more structured way to classify, detect, and prioritize extension-based risks.</p></li></ol><p><strong>&#128073; Like</strong> this post + <strong>subscribe </strong>to catch next week&#8217;s roundup!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://cysleuths.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Cysleuthing is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Cybersecurity Last Week (12/8/25-12/14/25)]]></title><description><![CDATA[what happened in cybersecurity this past week-ish in AI research & vulnerabilities, news, startups and VCs]]></description><link>https://cysleuths.substack.com/p/cybersecurity-last-week-12825-121425</link><guid isPermaLink="false">https://cysleuths.substack.com/p/cybersecurity-last-week-12825-121425</guid><dc:creator><![CDATA[Jasmine Wong]]></dc:creator><pubDate>Mon, 15 Dec 2025 02:14:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jX4q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51643421-e66a-42bb-9a78-0c2ca38e3a08_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi, I&#8217;m Jasmine &#8212; a Product Security Engineer, and if you know me personally, also quite a bit 
of a travel bug and matcha enthusiast. I&#8217;ve realized how hard it is to keep up with everything happening in cybersecurity, AI, and tech because there&#8217;s just <em>so much</em> being put out every day. So every Sunday (or Monday), I share the security-related news and stories I found most interesting or relatable. It helps me stay accountable, and hopefully it helps you stay in the loop too :)</p><p>AI Research, Vulnerabilities &amp; News</p><ol><li><p><strong><a href="https://interestingengineering.com/ai-robotics/robot-fires-at-youtuber-sparking-safety-fears">Prompt Injection Causes AI Robot to Open Fire on Youtube Creator:</a></strong></p><p>A recent experiment by the YouTuber InsideAI demonstrated a critical safety failure when a ChatGPT-powered humanoid robot was tricked into firing a weapon. While the robot initially refused the command to shoot by citing programmed safety protocols, the YouTuber bypassed these guardrails using a &#8220;role-play&#8221; prompt injection, instructing the AI to act as a character in a fictional scenario. The robot then complied, firing a high-velocity BB gun at the user&#8217;s chest. This incident highlights the physical danger of connecting LLMs to kinetic hardware, where standard jailbreak techniques can translate into real-world violence.</p></li><li><p><strong><a href="https://surfshark.com/research/chart/ai-security-cameras">Study Finds AI Security Cameras Collect Excessive Personal Data:</a></strong></p><p>A new investigation by Surfshark reveals that major AI-powered security camera brands, including Amazon Ring, Google Nest, and Arlo, are collecting vast amounts of data unrelated to home security. These include biometric profiles of owners and non-consenting neighbors, purchase history, contact lists, and precise location data. 
Researchers warn that this surveillance creep turns home safety tools into marketing dragnets, with some vendors like Arlo explicitly sharing device IDs with third-party advertisers and others using vague &#8220;other purposes&#8221; clauses to justify hoarding sensitive user metadata.</p></li><li><p><strong><a href="https://www.kaspersky.co.uk/blog/share-chatgpt-chat-clickfix-macos-amos-infostealer/29796/">Google Ads Push macOS Malware via Grok and ChatGPT:</a></strong></p><p>Researchers at Huntress and Kaspersky have observed a new campaign where attackers abuse Google Ads to direct users toward malicious &#8220;shared&#8221; conversations on legitimate platforms like ChatGPT and Grok. When users search for macOS troubleshooting or cleanup tips, these ads lead to pre-filled AI chats that provide &#8220;helpful&#8221; command-line instructions. If the user copies and pastes these commands into their terminal, they unwittingly download and execute the AMOS infostealer. This malware subsequently exfiltrates sensitive data, including passwords, browser cookies, and cryptocurrency wallets, while establishing persistence to survive system reboots.</p></li><li><p><strong><a href="https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/">White House Executive Order Blocks State AI Regulations:</a></strong></p><p>The White House issued a new Executive Order establishing a &#8220;National Policy Framework for Artificial Intelligence&#8221; designed to override state-level regulations in favor of a minimally burdensome federal standard. The directive mandates the DOJ to form an &#8220;AI Litigation Task Force&#8221; to legally challenge conflicting state laws and instructs the Commerce Department to withhold federal broadband funding from jurisdictions enforcing &#8220;onerous&#8221; compliance rules. 
By centralizing oversight and stripping states of the ability to regulate model outputs or algorithmic bias, the order aims to eliminate regulatory fragmentation to accelerate US AI dominance, with narrow exceptions only for child safety and infrastructure.</p></li><li><p><strong><a href="https://genai.owasp.org/resource/owasp-top-10-for-agentic-applications-for-2026/">OWASP Top 10 for Agentic Applications 2026 to Secure Autonomous AI:</a></strong></p><p>The OWASP GenAI Security Project has released the &#8220;Top 10 for Agentic Applications 2026,&#8221; a new standard distinct from its LLM predecessor to address the unique risks of autonomous agents. The framework highlights critical threats like &#8220;Agent Goal Hijacking&#8221; and &#8220;Tool Misuse,&#8221; where attackers manipulate an agent&#8217;s planning logic or authorized capabilities to execute real-world harm. By focusing on how agents interact with external APIs, memory, and other agents, the guide underscores that securing agentic AI requires moving beyond prompt filters to enforcing strict runtime identity and privilege boundaries.</p></li><li><p><strong><a href="https://www.theregister.com/2025/12/08/gartner_recommends_ai_browser_ban/">Gartner: Cybersecurity Teams Must Block AI Browsers for Now:</a></strong></p><p>Gartner issued new guidance recommending that enterprises immediately block access to AI browsers and agentic web assistants until mature security controls are available. The firm warns that these tools, which autonomously navigate and interact with web content, effectively bypass standard browser protections and data loss prevention (DLP) filters. 
Because these agents can execute complex workflows and ingest sensitive corporate data without traditional oversight, Gartner advises treating them as unmanageable shadow IT risks that expose organizations to automated data exfiltration and compliance failures.</p></li><li><p><strong><a href="https://security.googleblog.com/2025/12/architecting-security-for-agentic.html">Google Reveals a &#8220;User Alignment Critic&#8221; for Gemini in Chrome:</a></strong></p><p>Google announced a new framework designed to secure Gemini in Chrome against indirect prompt injection and goal hijacking. The architecture introduces a User Alignment Critic, a secondary, isolated model that vets each proposed action based solely on metadata, ensuring it matches the user&#8217;s intent without being exposed to potentially malicious web content. Additionally, Agent Origin Sets extend site isolation principles by segregating read-only and read-write permissions per task. These measures, combined with mandatory user confirmation for sensitive actions like payments, aim to prevent attackers from hijacking agents to exfiltrate cross-site data or execute unauthorized transactions.</p></li><li><p><strong><a href="https://www.anthropic.com/news/donating-the-model-context-protocol-and-establishing-of-the-agentic-ai-foundation">Anthropic Donates MCP to Linux Foundation:</a></strong></p><p>Anthropic announced it is donating the Model Context Protocol (MCP) to the newly established Agentic AI Foundation (AAIF), a directed fund under the Linux Foundation co-founded with Block and OpenAI. This move, supported by major cloud providers like AWS, Google, and Microsoft, transitions MCP from a vendor-specific tool into an open, community-governed standard for connecting AI agents to external data and systems. 
By removing proprietary oversight, the initiative aims to prevent ecosystem fragmentation and ensure that agentic interfaces remain interoperable and secure across competing platforms and infrastructure.</p></li><li><p><strong><a href="https://openai.com/index/bbva-collaboration-expansion/">BBVA x ChatGPT Collaboration Signals Shift to &#8220;AI-Native&#8221; Banking:</a></strong></p><p>By equipping its entire workforce with ChatGPT Enterprise, BBVA is attempting to validate the &#8220;AI-native&#8221; bank model. The partnership aims to integrate banking services directly into ChatGPT, allowing customers to manage cards and accounts via natural language. This massive deployment challenges the financial sector&#8217;s traditional risk aversion, establishing a test case for whether highly regulated institutions can safely automate decision-making and customer-facing workflows at a global scale.</p></li><li><p><strong><a href="https://labs.google/disco">Google Labs Unveils &#8220;Disco&#8221; to Turn Browser Tabs into Custom Apps:</a></strong></p><p>Google Labs introduced &#8220;Disco,&#8221; an experimental AI-first browser that reimagines web navigation by converting tabs into unified, interactive applications. Powered by Gemini 3, its flagship &#8220;GenTabs&#8221; feature analyzes a user&#8217;s open pages and chat history to instantly generate a bespoke &#8220;mini-app&#8221;, such as a dynamic travel itinerary or a visual meal planner, without requiring a single line of code. 
Currently available via waitlist for macOS users, Disco represents Google&#8217;s attempt to shift browsing from passive consumption to active, AI-assisted creation, potentially signaling the future of how Chrome will handle complex, multi-tab workflows.</p><p></p></li></ol><p>Cybersecurity News</p><ol><li><p><strong><a href="https://www.bleepingcomputer.com/news/security/beware-paypal-subscriptions-abused-to-send-fake-purchase-emails/">Attackers Abuse PayPal &#8220;Subscriptions&#8221; to Send Fake Purchase Emails:</a></strong><a href="https://www.bleepingcomputer.com/news/security/beware-paypal-subscriptions-abused-to-send-fake-purchase-emails/"> </a>Scammers are exploiting PayPal&#8217;s Subscriptions billing feature to send legitimate emails that bypass spam filters and trick users into fearing unauthorized charges. By pausing a subscription for a fake user account that forwards to targets, attackers trigger an official notification from <code>service@paypal.com</code>. However, they manipulate the Customer service URL field to display a fake message claiming a large payment was processed and urging the victim to call a fraudulent support number. PayPal has stated it is actively mitigating this issue.</p></li><li><p><strong><a href="https://support.apple.com/en-us/125884">Apple Patches Active Zero-Day Vulnerabilities in iOS 26.2 and iPadOS 26.2 Updates:</a></strong></p><p>Apple has released iOS 26.2 and iPadOS 26.2 to address critical security flaws, most notably two WebKit vulnerabilities (CVE-2025-43529, CVE-2025-14174) that Apple confirms were exploited in &#8220;extremely sophisticated attacks&#8221; targeting specific individuals. The update also fixes a high-severity Kernel integer overflow allowing root privilege escalation, a privacy bypass exposing &#8220;Hidden&#8221; album photos without authentication, and a FaceTime flaw permitting caller ID spoofing. 
Users are urged to update immediately to mitigate these active threats.</p></li><li><p><strong><a href="https://cwe.mitre.org/top25/archive/2025/2025_cwe_top25.html">MITRE Releases 2025 CWE Top 25 Most Dangerous Software Weaknesses:</a></strong> MITRE has released its 2025 ranking of the most critical software flaws, with Cross-site Scripting (CWE-79) retaining the top spot, followed by SQL Injection (CWE-89) and CSRF (CWE-352). The list reveals a sharp rise in access control failures, with &#8220;Missing Authorization&#8221; (CWE-862) jumping five spots to fourth place. While memory safety issues like Out-of-bounds Write fell slightly, the re-entry of multiple buffer overflow categories (CWE-120, 121, 122) confirms that fundamental memory management vulnerabilities remain a persistent threat alongside modern web application risks.</p></li><li><p><strong><a href="https://www.reversinglabs.com/blog/malicious-vs-code-fake-image">Malicious VSCode Extensions Hide Trojan in Fake PNG Files:</a></strong></p><p>ReversingLabs researchers discovered 19 malicious Visual Studio Code extensions that targeted developers by hiding malware within bundled dependency folders. The attackers bypassed npm registry checks by pre-packaging a modified version of the popular <code>path-is-absolute</code> library, which executed a script to decode a payload hidden inside a fake image file, which contained a Rust-based trojan. Microsoft has removed the extensions, but developers who installed them are urged to scan their systems for compromise.</p></li><li><p><strong><a href="https://flare.io/learn/resources/docker-hub-secrets-exposed/">Thousands of Exposed Secrets on Docker Hub:</a></strong></p><p>Security firm Flare analyzed public images on Docker Hub over a single month and discovered over 10,000 images containing valid exposed secrets. 
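Findings like these typically come from running pattern-based secret scanners over every file extracted from each image layer. A minimal Python sketch of that detection step (the two rules below are simplified stand-ins for illustration; production scanners such as trufflehog or gitleaks ship hundreds of patterns):

```python
import re

# Simplified stand-in rules for two of the credential types reported;
# real scanner rule sets are far larger and more precise.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_token": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
}

def scan_layer_file(path: str, text: str) -> list[dict]:
    """Scan one file extracted from an image layer for secret-like strings."""
    hits = []
    for rule, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append({"file": path, "rule": rule, "match": match})
    return hits

# Example: a .env file accidentally baked into a public image.
hits = scan_layer_file(
    "app/.env",
    "AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE\nMODEL_KEY=sk-abc123abc123abc123abc123\n",
)
```

Repeating this over every file in every public image, then validating the hits against the relevant APIs, is essentially how a one-month sweep surfaces thousands of live credentials.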
These leaks ranged from cloud credentials (AWS, Azure) and database keys to AI model tokens (OpenAI, Anthropic), with 42% of compromised images containing five or more secrets each. A significant portion of these leaks originated from &#8220;shadow IT&#8221; accounts (personal or contractor profiles), allowing attackers to authenticate directly into production environments, CI/CD pipelines, and sensitive data stores without needing to exploit software vulnerabilities.</p></li><li><p><strong><a href="https://www.varonis.com/blog/spiderman-phishing-kit">New &#8220;Spiderman&#8221; Phishing Kit Targets European Banks and Crypto Wallets:</a></strong></p><p>A new phishing-as-a-service kit dubbed &#8220;Spiderman&#8221; is actively targeting customers of major European financial institutions, including Deutsche Bank, ING, and Commerzbank, as well as crypto wallet services like Ledger and Metamask. The kit features a modular design that allows operators to deploy pixel-perfect fake login portals and intercept credentials, credit card details, and PhotoTAN/OTP codes in real time. With built-in tools for device filtering and session hijacking, the kit enables attackers to bypass multi-factor authentication and execute full account takeovers across multiple countries.</p></li></ol><p><strong>&#128073; Like</strong> this post + <strong>subscribe </strong>to catch next week&#8217;s roundup!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://cysleuths.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Cysleuthing is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Cybersecurity Last Week (12/1/25— 12/7/25)]]></title><description><![CDATA[what happened in cybersecurity this past week-ish in AI research & vulnerabilities, news, startups and VCs]]></description><link>https://cysleuths.substack.com/p/cybersecurity-last-week-12125-12725</link><guid isPermaLink="false">https://cysleuths.substack.com/p/cybersecurity-last-week-12125-12725</guid><dc:creator><![CDATA[Jasmine Wong]]></dc:creator><pubDate>Mon, 08 Dec 2025 03:12:03 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jX4q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51643421-e66a-42bb-9a78-0c2ca38e3a08_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi, I&#8217;m Jasmine &#8212; a Product Security Engineer, and if you know me personally, also quite a bit of a travel bug and matcha enthusiast. I&#8217;ve realized how hard it is to keep up with everything happening in cybersecurity, AI, and tech because there&#8217;s just <em>so much</em> being put out every day. So every Sunday (or Monday), I share the security-related news and stories I found most interesting or relatable. 
It helps me stay accountable, and hopefully it helps you stay in the loop too :)</p><p>AI Research, Vulnerabilities &amp; News</p><ol><li><p><strong><a href="https://www.straiker.ai/blog/from-inbox-to-wipeout-perplexity-comets-ai-browser-quietly-erasing-google-drive">Perplexity Comet&#8217;s AI Browser Can Be Tricked into Quietly Deleting Your Entire Google Drive:</a></strong><br>Straiker&#8217;s STAR Labs uncovered a zero-click attack where a single, harmless-looking email can cause Perplexity&#8217;s Comet browser agent to wipe a user&#8217;s Google Drive. Once Comet has OAuth access to Gmail and Drive, an attacker can embed polite, step-by-step instructions in an email. When the user later asks Comet to handle their recent tasks, the agent interprets those email instructions as part of the workflow and proceeds to delete legitimate files across personal and shared folders. No phishing links, jailbreaks, or additional prompts are required; the agent simply follows the embedded instructions as written.</p></li><li><p><strong><a href="https://maccarita.com/posts/idesaster/">30+ Vulnerabilities Found in AI Coding Assistants Allow Prompt Injection to Escalate into Data Exfiltration and RCE:</a></strong></p><p>Researchers uncovered more than 30 flaws across AI-powered IDEs including Copilot, Cursor, Windsurf, Zed, Roo Code, Claude Code, and others, showing how simple prompt injections can escalate into data exfiltration or remote code execution. The issue stems from how these agents interact with underlying IDE features: an injected instruction can make the agent read sensitive files and then modify workspace settings or JSON configs that trigger the IDE to pull remote schemas or execute attacker-controlled binaries. Because the agents treat these configs as part of normal assistance, the attack requires no further user interaction. 
</p></li><li><p><strong><a href="https://unit42.paloaltonetworks.com/model-context-protocol-attack-vectors/">Model Context Protocol&#8217;s (MCP) Sampling Feature Enables Tool Abuse and Silent Data Exfiltration:</a></strong><br>Unit 42 researchers found that MCP, now used by many AI agents to connect to external tools, introduces a new attack surface when its &#8220;sampling&#8221; feature is enabled. A malicious MCP server can inject hidden instructions into the context it provides, causing the LLM to execute unauthorized tool calls, leak sensitive data, or run attacker-controlled workflows while appearing completely legitimate. Because agents treat MCP responses as trusted context, the exploit requires no jailbreak or user interaction; the attacker only needs to get the agent to load their MCP server. </p></li><li><p><strong><a href="https://www.aikido.dev/blog/promptpwnd-github-actions-ai-agents">PromptPwnd: Prompt Injection in GitHub Actions Lets AI Agents Leak Secrets:</a></strong><br>Aikido Security researchers uncovered a new vulnerability pattern where AI agents embedded in GitHub Actions and GitLab CI/CD pipelines (like Gemini CLI, Claude Code, OpenAI Codex, and GitHub AI Inference) read untrusted issue bodies, PR descriptions, or commit messages directly into their prompts. By crafting those fields with hidden instructions, an attacker can trick the agent into running privileged GitHub CLI or shell commands with high-privilege tokens, leading to secret exfiltration, workflow modification, and repository tampering. 
The team confirmed real, exploitable cases in at least five Fortune 500 companies, including a now-patched Gemini CLI workflow that could be abused to leak API keys and cloud credentials.</p></li><li><p><strong><a href="https://www.reddit.com/r/google_antigravity/comments/1p82or6/google_antigravity_just_deleted_the_contents_of/">Google Antigravity Accidentally Wipes Entire Storage Drive</a></strong><a href="https://www.reddit.com/r/google_antigravity/comments/1p82or6/google_antigravity_just_deleted_the_contents_of/">:</a><br>A user on Reddit reported that while using Antigravity in &#8220;Turbo mode,&#8221; the AI executed a destructive command that wiped their entire D: drive. The deletion bypassed the recycle bin, making recovery impossible. This real-world case underscores how powerful agentic IDE tools still expose you to major risk when they run root-level commands with minimal confirmation.</p></li><li><p><strong><a href="https://arstechnica.com/ai/2025/12/syntax-hacking-researchers-discover-sentence-structure-can-bypass-ai-safety-rules/">Syntax Hacking: Sentence Structure Alone Can Bypass LLM Safety Filters:</a></strong></p><p>Researchers showed that large-language models can answer questions correctly even when the prompt contains meaningless or substituted words, as long as the underlying sentence structure remains intact. By preserving grammar while stripping semantics, the models still inferred the intended query, demonstrating a stronger reliance on syntactic patterns than expected. 
The same technique also enabled safety evasion: when harmful requests were embedded inside benign-looking grammatical templates, refusal rates dropped sharply.</p></li><li><p><strong><a href="https://openai.com/index/how-confessions-can-keep-language-models-honest/">OpenAI Tests &#8220;Confessions&#8221; to Make LLMs Reveal When They Cut Corners:</a></strong><br>OpenAI researchers introduced a new training method where models generate a secondary &#8220;confession&#8221; output that reports when they ignored instructions, guessed, or took shortcuts. The technique doesn&#8217;t evaluate correctness, but rewards the model for openly flagging uncertainty or rule-breaking that wouldn&#8217;t be obvious from the main answer. In testing, models became more transparent about hidden behaviors such as hallucinations, skipped reasoning steps, or actions taken purely to maximize reward. While confessions don&#8217;t fix safety issues on their own, they provide a clearer signal of when the model may have deviated from expected behavior.</p></li><li><p><strong><a href="https://www.anthropic.com/research/anthropic-interviewer">Anthropic&#8217;s &#8220;Interviewer&#8221; Reveals How Workers Use AI and Where Trust Breaks Down:</a></strong><br>Anthropic built Interviewer to understand people&#8217;s real experiences with AI at work, and used it to interview 1,250 workers across general roles, creative fields, and scientific research. The results show strong adoption for routine tasks such as summarizing, drafting, brainstorming, and code cleanup, but limited trust when accuracy, originality, or domain judgment matter. Many participants said AI boosts speed but also admitted they conceal their usage due to workplace stigma or fear of appearing less competent. 
Even with clear productivity gains, concerns about job displacement and reliability remain consistent across groups, highlighting a gap between how useful AI feels day-to-day and how much responsibility people are willing to hand over.</p><p></p></li></ol><p>Cybersecurity News</p><ol><li><p><strong><a href="https://www.forbes.com/sites/daveywinder/2025/12/07/google-looking-into-gmail-hack-locking-users-out-with-no-recovery/?streamIndex=0">Google Investigating Gmail Hack That Permanently Locks Users Out of Accounts:</a></strong></p><p>There has been a wave of Gmail account takeovers where attackers compromise an account and reclassify it as a supervised &#8220;child&#8221; profile under their family group, allowing the attacker to manage sign-in and recovery. Even with recovery options, the original owner is effectively shut out because Google&#8217;s standard account recovery flows treat it as a child account under a different adult. Google has said it is &#8220;looking into&#8221; these lockouts but has not yet provided a clear, documented path for victims whose accounts have already been converted into supervised child profiles.</p></li><li><p><strong><a href="https://www.ic3.gov/PSA/2025/PSA251205">FBI Warns of &#8220;Photo-Attack&#8221; Campaign Targeting Facebook, LinkedIn and X Users:</a></strong><br>The Federal Bureau of Investigation (FBI) issued a public alert warning users of Facebook, LinkedIn and X about a rising scam where attackers steal or scrape profile photos from social media, then manipulate or reuse them, often to impersonate real people or create fake identities for phishing, extortion, or social-engineering schemes. 
The FBI urged users to treat any unexpected messages, photo-based requests, or identity-based contact with skepticism, especially if they originate from profiles with little history or few mutual connections.</p></li><li><p><strong><a href="https://aws.amazon.com/blogs/security/china-nexus-cyber-threat-groups-rapidly-exploit-react2shell-vulnerability-cve-2025-55182/">China-Nexus Threat Groups Rapidly Exploit Critical React2Shell RCE (CVE-2025-55182):</a></strong><br>AWS reported that within hours of public disclosure of React2Shell (CVE-2025-55182), multiple China-nexus threat groups, including Earth Lamia and Jackpot Panda, began actively targeting vulnerable React Server Components and Next.js applications. The flaw, rated CVSS 10.0, allows unauthenticated remote code execution via crafted payloads sent to React Server Function endpoints, and affects React 19.x and Next.js 15.x&#8211;16.x deployments using App Router in default configurations. Observed activity includes broad internet scanning, attempts to steal AWS configuration and credential files, and post-exploitation deployment of malware and cryptominers, prompting AWS to urge customers to immediately apply patched React and Next.js releases and review logs for suspicious HTTP traffic to RSC endpoints.</p></li><li><p><strong><a href="https://techcrunch.com/2025/12/03/fintech-firm-marquis-alerts-dozens-of-us-banks-and-credit-unions-of-a-data-breach-after-ransomware-attack/">Fintech Vendor Marquis Software Solutions Ransomware Breach Exposes 400k+ Banking Customers&#8217; Data</a>:</strong><br>Marquis &#8212; a U.S. fintech firm that provides marketing, compliance and CRM services to over 700 banks and credit unions &#8212; disclosed a ransomware attack dating to August 14, 2025, after attackers exploited a vulnerability in its SonicWall firewall. 
The breach exposed sensitive customer data including names, addresses, dates of birth, Social Security numbers, tax IDs, and bank account or credit/debit card numbers for at least 400,000 people so far.</p></li></ol><p><strong>&#128073; Like</strong> this post + <strong>subscribe </strong>to catch next week&#8217;s roundup!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://cysleuths.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Cysleuthing is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Cybersecurity Last Week (11/24/25— 11/30/25)]]></title><description><![CDATA[what happened in cybersecurity this past week-ish in AI research & vulnerabilities, news, startups and VCs]]></description><link>https://cysleuths.substack.com/p/cybersecurity-last-week-112425-113025</link><guid isPermaLink="false">https://cysleuths.substack.com/p/cybersecurity-last-week-112425-113025</guid><dc:creator><![CDATA[Jasmine Wong]]></dc:creator><pubDate>Mon, 01 Dec 2025 05:28:16 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jX4q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51643421-e66a-42bb-9a78-0c2ca38e3a08_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi, I&#8217;m Jasmine &#8212; a Product 
Security Engineer, and if you know me personally, also quite a bit of a travel bug and matcha enthusiast. I&#8217;ve realized how hard it is to keep up with everything happening in cybersecurity, AI, and tech because there&#8217;s just <em>so much</em> being put out every day. So every Sunday (or Monday), I share the security-related news and stories I found most interesting or relatable. It helps me stay accountable, and hopefully it helps you stay in the loop too :)</p><p>AI Research, Vulnerabilities &amp; News</p><ol><li><p><strong><a href="https://www.promptarmor.com/resources/google-antigravity-exfiltrates-data">Google Antigravity Agent Exposes IDE Secrets and Credentials:</a><br></strong>PromptArmor researchers demonstrated that a hidden indirect prompt injection on a simple webpage can manipulate Google&#8217;s new agentic code editor, Antigravity. The vulnerability forces the integrated Gemini agent to bypass file access restrictions, collect sensitive credentials and source code from the user&#8217;s IDE, and then use a browser subagent to silently exfiltrate the data to an attacker-controlled domain. The danger is compounded by default settings that allow the agent to run commands unsupervised in the background via its Agent Manager.</p></li><li><p><strong><a href="https://www.promptarmor.com/resources/cellshock-claude-ai-is-excel-lent-at-stealing-data">Claude for Excel Duped into Leaking Confidential Financial Data:</a></strong> PromptArmor researchers unveiled a vulnerability they call &#8220;CellShock&#8221; in the beta of <strong>Claude for Excel</strong>, allowing indirect prompt injection to exfiltrate sensitive data. Simply copying an external, untrusted data set (like industry benchmarks) containing a hidden injection into a spreadsheet is enough to manipulate the Claude agent. 
When the user later asks Claude for a visualization, the agent is tricked into gathering confidential data, appending it to a malicious URL, and inserting an IMAGE formula that leaks the user&#8217;s private financial projections to an attacker&#8217;s server.</p></li><li><p><strong><a href="https://www.catonetworks.com/blog/cato-ctrl-hashjack-first-known-indirect-prompt-injection/">HashJack: Invisible Prompts in URL Fragments Hijack AI Browser Assistants:</a></strong> Cato CTRL researchers discovered &#8220;HashJack,&#8221; the first indirect prompt injection attack that weaponizes any legitimate website to compromise AI browser assistants. The exploit hides malicious instructions after the &#8220;#&#8221; symbol in a URL fragment, which is fed directly into the LLM when the user engages the assistant. This attack can turn trusted sites like banks into vectors for data exfiltration, callback phishing, and the injection of misinformation or malware guidance.</p></li><li><p><strong><a href="https://arxiv.org/pdf/2511.15304">Poetry Used to Fool AI and Bypass LLM Safety Guardrails:</a></strong></p><p>New research has demonstrated a powerful new exploit called &#8220;Adversarial Poetry&#8221; that acts as a universal, single-turn jailbreak mechanism across many different LLMs. The technique uses a specific structure of text formatted as poetry to consistently bypass the safety alignment features of state-of-the-art LLMs in a single prompt submission. 
This approach successfully coerces models into generating harmful, prohibited, or otherwise unsafe content, representing a significant new class of prompt injection attack that defeats current safety filters.</p></li><li><p><strong><a href="https://openai.com/index/mixpanel-incident/">OpenAI API User Data Leaked in Mixpanel Security Incident:</a></strong></p><p>OpenAI confirmed a security breach involving Mixpanel, a third-party data analytics provider, resulting in the unauthorized export of a dataset containing limited information for some API product users (platform.openai.com). While no chat data, passwords, API keys, or payment details were compromised, the leak did expose users&#8217; names, email addresses, coarse location, and user IDs, leading OpenAI to warn impacted users to be vigilant against phishing and social engineering attacks.</p></li><li><p><strong><a href="https://www.crowdstrike.com/en-us/blog/crowdstrike-researchers-identify-hidden-vulnerabilities-ai-coded-software/">Political Bias Forces DeepSeek AI Coder to Write Vulnerable Code:</a></strong> CrowdStrike researchers found a new, subtle vulnerability surface in the Chinese LLM DeepSeek-R1. When developers include seemingly innocuous geopolitical or politically sensitive terms in their coding prompts, the model&#8217;s pro-CCP training appears to be triggered, causing the likelihood of it generating severe security vulnerabilities in the code to increase by up to 50%.</p></li></ol><p>Cybersecurity News</p><ol><li><p><strong><a href="https://helixguard.ai/blog/malicious-sha1hulud-2025-11-24">Sha1-Hulud Worm Steals Secrets, Then Wipes Developer Systems</a></strong><a href="https://helixguard.ai/blog/malicious-sha1hulud-2025-11-24">:</a></p><p>A critical supply chain attack dubbed &#8220;Sha1-Hulud: The Second Coming&#8221; compromised hundreds of popular npm packages and affected over 25,000 GitHub repositories. 
The self-propagating worm, which executes during a hidden preinstall script, immediately targets developer environments and CI/CD pipelines to steal high-value secrets, including NPM and cloud credentials (AWS/GCP/Azure). The malware then publicly exfiltrates these stolen secrets to new GitHub repos. In a major escalation of threat behavior, researchers noted a catastrophic fail-safe: if the malware cannot authenticate or exfiltrate data, it defaults to a destructive payload that attempts to irrevocably wipe the victim&#8217;s entire home directory.</p></li><li><p><strong><a href="https://www.bleepingcomputer.com/news/microsoft/microsoft-windows-updates-hide-password-icon-on-lock-screen/">Windows Update Accidentally Hides Password Login Option</a></strong><a href="https://www.bleepingcomputer.com/news/microsoft/microsoft-windows-updates-hide-password-icon-on-lock-screen/">:</a></p><p>Microsoft issued a warning that recent Windows 11 updates may cause the password sign-in icon to disappear from the lock screen. The bug affects users with multiple sign-in options (like PIN, security key, and password) enabled. While the password text box and button are still functional, users must hover over the blank space where the icon should be to reveal it and proceed with login. Microsoft is currently working on a fix but has not provided a timeline for resolution.</p></li><li><p><strong><a href="https://www.cisa.gov/news-events/alerts/2025/11/24/spyware-allows-cyber-threat-actors-target-users-messaging-applications">CISA Issues Rare Warning on Sophisticated Spyware Targeting Messaging Apps</a></strong><a href="https://www.cisa.gov/news-events/alerts/2025/11/24/spyware-allows-cyber-threat-actors-target-users-messaging-applications">:</a></p><p>The Cybersecurity and Infrastructure Security Agency (CISA) issued a rare public warning, alerting organizations that malicious cyber actors are aggressively targeting messaging apps using commercial spyware programs. 
Threat actors employ sophisticated social engineering, including deploying zero-click malware and tricking victims with fraudulent app upgrades (like for Signal and WhatsApp), to gain unauthorized access. CISA noted that hackers are focusing on high-value targets such as senior government officials, military leaders, and civil-society executives, and urged organizations to consult its updated mobile security guidance.</p></li><li><p><strong><a href="https://trufflesecurity.com/blog/scanning-5-6-million-public-gitlab-repositories-for-secrets">Public GitLab Scans Expose 17,000+ Live API Keys:</a></strong></p><p>Truffle Security researchers scanned <strong>all 5.6 million</strong> public GitLab Cloud repositories and discovered over <strong>17,430 verified live secrets</strong>. The massive audit found that GitLab repositories have a <strong>35% higher density</strong> of leaked credentials compared to Bitbucket. The exposed secrets&#8212;including highly-sensitive <strong>Google Cloud Platform (GCP)</strong> credentials and 406 valid GitLab tokens&#8212;reinforce the danger of &#8220;platform locality,&#8221; where developers accidentally commit credentials to the same platform they belong to. The research also highlighted the &#8220;Zombie Secret&#8221; problem, confirming valid credentials dating back to 2009 that remain functional because they were never properly rotated.</p><p></p></li></ol><p><strong>&#128073; Like</strong> this post + <strong>subscribe </strong>to catch next Monday&#8217;s roundup!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://cysleuths.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Cysleuthing is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Cybersecurity Last Week (11/17/25— 11/23/25)]]></title><description><![CDATA[what happened in cybersecurity this past week in AI research & vulnerabilities, news, startups and VCs]]></description><link>https://cysleuths.substack.com/p/cybersecurity-last-week-111725-112325</link><guid isPermaLink="false">https://cysleuths.substack.com/p/cybersecurity-last-week-111725-112325</guid><dc:creator><![CDATA[Jasmine Wong]]></dc:creator><pubDate>Mon, 24 Nov 2025 04:56:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jX4q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51643421-e66a-42bb-9a78-0c2ca38e3a08_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi, I&#8217;m Jasmine &#8212; a Product Security Engineer, and if you know me personally, also quite a bit of a travel bug and matcha enthusiast. I&#8217;ve realized how hard it is to keep up with everything happening in cybersecurity, AI, and tech because there&#8217;s just <em>so much</em> being put out every day. So every Sunday (or Monday), I share the security-related news and stories I found most interesting or relatable from the previous week(ish). 
It helps me stay accountable, and hopefully it helps you stay in the loop too :)</p><p>AI Research, Vulnerabilities &amp; News</p><ol><li><p><strong><a href="https://publicinterestnetwork.org/wp-content/uploads/2025/11/TOYLAND-2025-11-14-7a.pdf?utm_source=www.therundown.ai&amp;utm_medium=newsletter&amp;utm_campaign=nano-banana-pro-changes-the-image-generation-game-again&amp;_bhlid=eebd021c58eed1c680c403c71ba59aca9caa8e01">AI Toys Giving Kids Explicit and Unsafe Advice Exposed:</a></strong></p><p>The U.S. PIRG Education Fund&#8217;s latest <em>Trouble in Toyland</em> report tested four AI-enabled toys and found that some would dive into detailed conversations about sexual topics while also calmly explaining where to find knives, pills, matches, and plastic bags around the house. The findings led PIRG and child-safety advocates to warn families that AI companions marketed as educational or &#8220;friendly&#8221; may quietly bypass both content filters and basic child-development safeguards.</p></li><li><p><strong><a href="https://www.reuters.com/legal/government/figma-sued-allegedly-misusing-customer-data-ai-training-2025-11-21/">Figma Accused of Secretly Using Customer Designs to Train AI:</a></strong></p><p>Figma has been hit with a proposed class action in California alleging it quietly used customers&#8217; design files and intellectual property to train its generative AI features. The lawsuit claims Figma automatically opted users into AI training without clear consent and seeks damages plus a court order blocking the company from running AI models built on allegedly misused customer data.</p></li><li><p><strong><a href="https://factory.ai/news/droid-neutralizing-fraud">AI Coding Bots Are Being Hijacked to Run Global Fraud Schemes:</a></strong></p><p>Factory.ai reported that attackers used its Droid coding assistant to spin up huge numbers of fake accounts and companies to abuse free trials across multiple AI providers. 
The incident showed three things: criminals are now using coding agents as force multipliers, platforms offering powerful models and free access are targets, and defending against AI-enabled attackers requires AI-assisted defenses.</p></li><li><p><strong><a href="https://mindgard.ai/resources/cline-coding-agent-vulnerabilities">Hidden Instructions in Project Files Let Cline Coding Agent Run Dangerous Actions:</a></strong></p><p>Mindgard researchers found that Cline&#8217;s AI coding agent could be tricked by malicious files inside a project, causing it to read private API keys, send them out through harmless-looking commands, and even auto-approve risky actions. One bug also exposed which model was running behind the scenes. The latest versions fix these issues, but the research shows how easily AI coding tools can be steered into unsafe behavior when they trust what&#8217;s inside a repo.</p></li><li><p><strong><a href="https://labs.sqrx.com/comet-mcp-api-allows-ai-browsers-to-execute-local-commands-dec185fb524b">Hidden Comet API Lets AI Browser Run Programs on Your Device; Mitigated but Disputed by Perplexity:</a></strong></p><p>SquareX researchers found that Perplexity&#8217;s Comet browser ships with hidden extensions that can call a private MCP API to run apps and commands directly on a user&#8217;s computer, meaning a malicious extension that impersonates Comet&#8217;s built-in Analytics extension could quietly pass instructions to the agent extension and trigger malware or even ransomware like WannaCry without any explicit permission from the user. 
While Perplexity has added measures to prevent the attack, <a href="https://www.securityweek.com/squarex-and-perplexity-quarrel-over-alleged-comet-browser-vulnerability/">the company dismissed the findings as &#8220;fake security research&#8221;</a>.</p></li></ol><p>Cybersecurity News</p><ol><li><p><strong><a href="https://layerxsecurity.com/blog/rolypoly-vpn-the-malicious-free-vpn-extension-that-keeps-coming-back/">Free Browser Extensions are Surveilling You:</a></strong></p><p>LayerX researchers uncovered a recurring campaign of &#8220;Free Unlimited VPN&#8221; and ad-blocking extensions that quietly turned millions of Chrome and Edge installs into browser-level surveillance tools, intercepting traffic, redirecting users, and profiling browsing activity under the guise of privacy. The multiple malicious extensions highlight how easy it is for developers to relaunch under new IDs and why enterprises need continuous monitoring and policy controls for browser extensions rather than one-time store reviews or basic allowlists.</p></li><li><p><strong><a href="https://techcrunch.com/2025/11/21/crowdstrike-fires-suspicious-insider-who-passed-information-to-hackers/">CrowdStrike Fires Employee for Sharing Information With Hackers:</a></strong></p><p>CrowdStrike says it terminated a &#8220;suspicious insider&#8221; who snapped photos of internal dashboards and shared them with the Scattered Lapsus$ Hunters hacking collective, which later posted the screenshots on Telegram as supposed proof of a deep breach. 
The company insists its systems and customer data were never compromised, says the insider was caught and cut off before hackers could use the access, and has turned the case over to law enforcement.</p></li><li><p><strong><a href="https://techcommunity.microsoft.com/blog/azureinfrastructureblog/defending-the-cloud-azure-neutralized-a-record-breaking-15-tbps-ddos-attack/4470422">Record DDoS Attack Slams Microsoft Azure but is Neutralized:</a></strong></p><p>Microsoft said it neutralized the largest single cloud DDoS attack ever recorded, measuring 15.72 Tbps and nearly 3.64 billion packets per second. Launched from more than 500,000 IP addresses and aimed at a single Azure endpoint in Australia, the attack was mitigated by Azure&#8217;s DDoS protection systems without interrupting customer service availability.</p></li><li><p><strong><a href="https://www.cybersecuritydive.com/news/fcc-eliminates-telecom-cybersecurity-requirements/806052/">FCC Eliminates Cybersecurity Rules for Telecom Providers:</a></strong></p><p>The Federal Communications Commission voted to scrap the effort to require telecom companies to meet minimum cybersecurity standards, reversing a January ruling that said carriers must secure their networks under the 1994 CALEA law. FCC Chair Brendan Carr argued the Biden-era rules were unlawful and ineffective, while critics warn that rolling them back after China&#8217;s Salt Typhoon espionage campaign leaves U.S. 
telecom networks with no federal baseline protections and leans too heavily on voluntary security measures from carriers.</p></li></ol><p><strong>&#128073; Like</strong> this post + <strong>subscribe </strong>to catch next Monday&#8217;s roundup!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://cysleuths.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Cysleuthing is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Cybersecurity Last Week (11/10/25— 11/16/25)]]></title><description><![CDATA[what happened in cybersecurity this past week in AI research & vulnerabilities, news, startups and VCs]]></description><link>https://cysleuths.substack.com/p/cybersecurity-last-week-111025-111625</link><guid isPermaLink="false">https://cysleuths.substack.com/p/cybersecurity-last-week-111025-111625</guid><dc:creator><![CDATA[Jasmine Wong]]></dc:creator><pubDate>Mon, 17 Nov 2025 06:11:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jX4q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51643421-e66a-42bb-9a78-0c2ca38e3a08_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi, I&#8217;m Jasmine &#8212; a Product Security Engineer, and if you know me personally, also quite a bit of a travel bug 
and matcha enthusiast. I&#8217;ve realized how hard it is to keep up with everything happening in cybersecurity, AI, and tech because there&#8217;s just <em>so much</em> being put out every day. So every Monday, I share the security-related news and stories I found most interesting or relatable from the previous week. It helps me stay accountable, and hopefully it helps you stay in the loop too :)</p><p>AI Research &amp; Vulnerabilities</p><ol><li><p><strong><a href="https://www.anthropic.com/news/disrupting-AI-espionage">Anthropic Details First AI-Orchestrated Cyber Espionage Campaign:</a></strong></p><p>Anthropic disclosed that a Chinese state-sponsored group abused its Claude Code tool to run a highly autonomous cyber-espionage campaign against roughly thirty targets, including major tech firms, financial institutions, chemical manufacturers, and government agencies. The attackers used role-play to bypass safety controls and let AI handle 80&#8211;90% of the intrusion lifecycle from recon and exploitation to lateral movement and data analysis before Anthropic detected the activity.</p></li><li><p><strong><a href="https://www.knostic.ai/blog/mcp-hijacked-cursor-browser">MCP Server Hijack Lets Attackers Take Over Cursor&#8217;s Internal Browser:</a></strong></p><p>Knostic researchers showed that a malicious Model Context Protocol (MCP) server can modify unverified runtime code in Cursor to control its built-in browser, swap legitimate login pages with credential harvesting pages, and execute attacker-supplied JavaScript. 
Since MCP servers run with broad permissions, a single untrusted server can steal passwords and run arbitrary actions on a user&#8217;s machine.</p></li><li><p><strong><a href="https://www.wiz.io/blog/forbes-ai-50-leaking-secrets">65% of Forbes AI 50 Startups Caught Leaking Secrets on GitHub:</a></strong></p><p>Wiz researchers scanned GitHub for the Forbes AI 50 and found that roughly two-thirds of those private AI companies had exposed verified secrets (i.e. API keys, access tokens, and other credentials) in commit histories, deleted forks, workflow logs, and even public repos tied to employees&#8217; personal accounts.</p></li><li><p><strong><a href="https://cybernews.com/security/we-tested-chatgpt-gemini-and-claude/">Adversarial Prompt Tests Reveal Safety Gaps in ChatGPT, Gemini and Claude:</a></strong></p><p>Cybernews researchers tested ChatGPT, Gemini, and Claude models with structured adversarial prompts and found that all could be pushed into unsafe territory when harmful requests were wrapped as academic research, third-person analysis, stories, or slightly broken grammar. 
Gemini Pro 2.5 was the easiest to steer into detailed illegal or abusive content, Gemini Flash 2.5 was the most consistent at refusing, Claude models were strict but vulnerable to &#8220;academic-style&#8221; wording, and ChatGPT models often gave softer but still usable answers. The findings highlight how prompt framing remains a key weak point in current LLM safety systems.</p></li><li><p><strong><a href="https://www.oligo.security/blog/shadowmq-how-code-reuse-spread-critical-vulnerabilities-across-the-ai-ecosystem">Code Reuse Spreads RCE Bugs Across AI Inference Servers from Meta, NVIDIA, and Others</a>:</strong></p><p>Oligo Security found that several popular AI inference servers from vendors and projects like Meta, NVIDIA, Microsoft, vLLM, SGLang, and Modular all inherited the same remote code execution risk by reusing code that combines unauthenticated ZeroMQ sockets with Python pickle deserialization. Oligo says this ShadowMQ case shows how quickly a single insecure pattern can propagate across the AI ecosystem when teams borrow designs or libraries.</p></li><li><p><strong><a href="https://sirleeroyjenkins.medium.com/when-gpts-call-home-exploiting-ssrf-in-chatgpts-custom-actions-5df9df27dbe9">SSRF in Custom GPT Actions Lets ChatGPT Expose Cloud Metadata:</a></strong></p><p>A researcher revealed a severe Server-Side Request Forgery (SSRF) vulnerability in the &#8220;Actions&#8221; feature of ChatGPT&#8217;s custom-GPT modules, where user-supplied URLs could be redirected, along with attacker-chosen headers, to OpenAI&#8217;s internal cloud metadata endpoints. 
</p></li></ol><h3>Cybersecurity News</h3><ol><li><p><strong><a href="https://www.businessinsider.com/openai-new-york-times-copyright-infringement-lawsuit-chatgpt-logs-private-2025-11">OpenAI Resists NYT Demand for 20 Million Private ChatGPT Logs:</a></strong></p><p>A federal judge has ruled that OpenAI must turn over up to 20 million ChatGPT conversation logs as part of the New York Times&#8217; copyright lawsuit, but <a href="https://openai.com/index/fighting-nyt-user-privacy-invasion/">OpenAI argues</a> the request still contains sensitive user data irrelevant to the case. Even with the court requiring de-identification and strict protective controls, OpenAI warns that producing this information poses real privacy and security risks.</p></li><li><p><strong><a href="https://www.netcraft.com/blog/thousands-of-domains-target-hotel-guests-in-massive-phishing-campaign">4,300+ Fake Hotel Domains Used to Phish Travelers&#8217; Payment Details:</a></strong></p><p>Netcraft uncovered an ongoing phishing campaign in which a Russian-speaking threat actor registered more than 4,300 domains that mimic brands like Airbnb and Booking.com to trick hotel guests into entering payment information on lookalike booking pages. The attackers send reservation-themed emails that link to tailored phishing sites where victims are asked to confirm card details or pay fees.</p></li><li><p><strong>Oracle Zero-Day Campaign Hits Washington Post and Logitech via Clop Data Theft:</strong></p><p>The <a href="https://www.securityweek.com/washington-post-says-nearly-10000-employees-impacted-by-oracle-hack/">Washington Post</a> and <a href="https://www.bleepingcomputer.com/news/security/logitech-confirms-data-breach-after-clop-extortion-attack/">Logitech</a> have both disclosed breaches linked to a Clop campaign that exploited a zero-day in Oracle E-Business Suite to steal data from backend systems. 
The Washington Post says nearly 10,000 employees and contractors had personal and financial details exposed, while Logitech reports that roughly 1.8 TB of internal data was taken, including limited employee, customer, and supplier information.</p></li><li><p><strong><a href="https://www.bleepingcomputer.com/news/security/doordash-hit-by-new-data-breach-in-october-exposing-user-information/">DoorDash Data Breach Exposes Contact Details After Employee Social Engineering:</a></strong></p><p>DoorDash disclosed that an attacker used a social engineering scam against an employee to gain access to internal systems and pull certain user contact information for consumers, Dashers, and merchants. The stolen data varied by person but could include names, phone numbers, email addresses, and physical addresses.</p></li></ol><p><strong>&#128073; Like</strong> this post + <strong>subscribe </strong>to catch next Monday&#8217;s roundup!</p>]]></content:encoded></item></channel></rss>