2026-07

February 13 - February 19, 2026

Subscribe
12
Total Articles
4
Topics
9
Sources
9 days
ago

This Week's Summary

The collision between AI systems and traditional security controls dominated this week, exposing fundamental gaps in how our compliance frameworks handle machine learning risks. Multiple articles demonstrated that bolting LLMs into products creates attack surfaces our current standards weren't designed to address: prompt injection attacks that mirror SQL injection from two decades ago, side-channel attacks that leak sensitive data through encrypted traffic patterns, and AI-generated passwords that look random but contain predictable patterns attackers can exploit in hours. If you're processing regulated data through third-party LLM APIs or trusting AI for security-critical functions like password generation, your risk assessments and vendor agreements need immediate updates. The compliance gap isn't that we lack controls—it's that "instructions" and "data" have merged in ways that make our existing separation-of-duties and data protection frameworks obsolete.

Browser extensions emerged as the new endpoint blind spot, with over 260,000 Chrome users installing malicious AI assistants that stole credentials and monitored emails while bypassing traditional security controls. Your EDR won't catch them, your SIEM won't log them, and most organizations don't even inventory what's installed. This is the "bring your own malware" scenario that should trigger immediate action: if you're not running an approved extension allowlist, especially for users touching customer data or business accounts, you're one download away from a reportable breach. The Odido breach—6.2 million records including identity documents and bank details—demonstrates what happens when access controls fail at scale, and it's a reminder that "enhanced monitoring" press releases don't stop years of identity fraud for affected customers.

The CIRCIA town halls starting March 9th represent a rare opportunity to influence regulatory definitions before they're set in stone, and CISA's track record suggests they actually listen to practitioner input. If you're in critical infrastructure, show up and help define what "covered entity" and "reportable incident" mean in ways you can operationalize, because the 72-hour reporting clock is coming regardless. Meanwhile, Sweden's shift to treating cybersecurity as continuous operations under sustained pressure—moving their national cyber center under intelligence control—highlights what compliance programs consistently miss: you can't checklist your way through persistent threats. The OpenSSL vulnerabilities discovered by AI research (including a critical 9.8 CVSS buffer overflow) and China's revived Tianfu Cup under government secrecy both reinforce that the gap between vulnerability discovery and your patch window is now your actual risk window, often with no disclosure warning.

The practical takeaway across all of this: if your compliance program assumes crypto libraries are solid, extensions are benign, and AI outputs are trustworthy, this week delivered three urgent corrections. Patch OpenSSL immediately, audit where it lives in your stack, implement browser extension controls, and treat any AI system processing regulated data as untrusted input requiring the same rigor you'd apply to user-submitted SQL queries. The frameworks will eventually catch up to these realities, but the attackers already have.

regulation update

2 articles

CISA Announces New Town Halls to Engage with Stakeholders on Cyber Incident Reporting for Critical Infrastructure

Feb 13, 2026 CISA News Score: 1.0

CISA announced a series of virtual town hall meetings beginning March 9, 2026 to gather stakeholder input on the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA) rulemaking process. The upcoming rule will require covered critical infrastructure organizations to report cyber incidents within 72 hours and ransom payments within 24 hours to CISA. These town halls aim to balance strengthening national cybersecurity posture while minimizing compliance burden on regulated entities.

My Take

If you're in critical infrastructure, actually show up to these town halls—CISA's track record suggests they do listen, and this is your shot to influence what "covered entity" and "reportable incident" actually mean before the rule drops. The 72-hour clock is coming either way; might as well help them define it in a way you can actually live with.

CIRCIA

Key Actions

  • • Attend CISA town hall meetings starting March 9, 2026 to provide stakeholder input on CIRCIA rule
  • • Review proposed CIRCIA rulemaking and check Federal Register for full town hall schedule
  • • Prepare incident reporting procedures to meet 72-hour cyber incident and 24-hour ransom payment reporting requirements

Europe must adapt to ‘permanent’ cyber and hybrid threats, Sweden warns

Feb 13, 2026 The Record Score: 0.9

Sweden's defense official warns that cyber and hybrid threats are now permanent features of Europe's security landscape, requiring societies to build resilience and continuous functionality under sustained pressure. Sweden is implementing a comprehensive 'total defense' approach involving civilian authorities, military, and the National Cyber Security Centre to protect critical infrastructure including healthcare, energy, transport, and communications. The country is reorganizing its cybersecurity governance by bringing the NCSC under control of its intelligence agency to improve coordination and effectiveness.

My Take

Sweden's moving their cyber center under intelligence control because they've figured out what most compliance programs miss: you can't checklist your way through a sustained attack. If you're still treating cybersecurity like an annual audit event instead of continuous operations under pressure, you're already behind.

SOC2 ISO27001

Key Actions

  • • Organizations should reassess assumption that cyber disruptions will be rare and build resilience planning accordingly
  • • Critical infrastructure operators in essential sectors must strengthen cybersecurity posture and civilian-military coordination
  • • Review organizational governance structures for cybersecurity to ensure effective coordination and response capabilities

security incident

7 articles

Phishing on the Edge of the Web and Mobile Using QR Codes

Feb 13, 2026 Unit 42 Threat Research Score: 0.9

This threat research article details the emerging threat of QR code-based phishing attacks (quishing) across web and mobile platforms, with attackers averaging over 11,000 malicious QR code detections daily. The research identifies three primary attack vectors: URL shorteners disguising malicious destinations, in-app deep links for credential theft, and direct downloads of malicious apps. Organizations must strengthen security awareness and implement controls to protect users from these attacks that exploit the weaker security posture of personal mobile devices outside corporate perimeters.

My Take

QR codes are just URLs wearing a disguise—the real problem is we've trained users to scan first and think never. If your security awareness training still focuses on "don't click suspicious links" but ignores the camera in everyone's pocket, you're fighting yesterday's war.

SOC2 ISO27001 GDPR

Key Actions

  • • Implement security awareness training focused on QR code scanning risks and quishing tactics
  • • Deploy advanced URL filtering and mobile threat detection solutions
  • • Establish policies restricting QR code scanning on corporate networks and enforce endpoint controls

The Promptware Kill Chain

Feb 16, 2026 Schneier on Security Score: 0.9

The article introduces the 'promptware kill chain,' a framework describing sophisticated attack techniques against large language models (LLMs) through prompt injection and jailbreaking. It explains how malicious instructions can be embedded in LLM inputs and retrieved data, exploiting the fundamental architecture of LLMs that fail to distinguish between trusted instructions and untrusted data. The multi-stage attack model mirrors traditional malware campaigns and highlights emerging security risks in AI systems.

My Take

If you're bolting LLMs into your product without treating user prompts as untrusted input, you're basically writing SQL queries with unvalidated user data circa 2003. The real compliance gap: none of our frameworks have caught up to the fact that "instructions" and "data" are now the same thing in these systems.

SOC2 ISO27001

Key Actions

  • • Develop architectural safeguards in LLM systems to separate trusted instructions from untrusted data inputs
  • • Implement detection and prevention mechanisms for indirect prompt injection attacks across multimodal inputs
  • • Establish security frameworks and vocabulary for addressing AI-based threats in organizational policies

Side-Channel Attacks Against LLMs

Feb 17, 2026 Schneier on Security Score: 0.9

Three research papers describe side-channel attacks against Large Language Models that can infer sensitive user data including conversation topics, personally identifiable information, and confidential content by analyzing encrypted network traffic timing patterns and packet characteristics. These attacks pose significant risks to organizations deploying LLMs in sensitive domains such as healthcare, legal services, and financial services. The research demonstrates vulnerabilities in popular systems including OpenAI's ChatGPT and Anthropic's Claude, with proposed mitigations including packet padding and token aggregation.

My Take

If you're processing PII or PHI through third-party LLM APIs, your DPA and BAA just got a lot more complicated—these side-channel attacks mean even encrypted traffic leaks sensitive data patterns. Time to either implement the proposed mitigations, move to on-premise models, or update your risk assessments to acknowledge what "confidential" actually means in this context.

SOC2 ISO27001 GDPR HIPAA

Key Actions

  • • Implement network traffic padding and obfuscation mechanisms to eliminate timing-based side channels
  • • Review and remediate speculative decoding implementations in LLM deployments
  • • Conduct security assessments of LLM inference channels for side-channel vulnerabilities

AI Found Twelve New Vulnerabilities in OpenSSL

Feb 18, 2026 Schneier on Security Score: 0.9

Twelve zero-day vulnerabilities were discovered in OpenSSL by an AI security research system and disclosed responsibly in January 2026. The vulnerabilities, including a CRITICAL severity stack buffer overflow (CVE-2025-15467 with CVSS 9.8), were found in widely-used encryption software that is foundational to compliance infrastructure. Organizations relying on OpenSSL for cryptographic operations across regulated environments must prioritize patching these vulnerabilities immediately.

My Take

If your compliance program assumes your crypto layer is solid because "it's OpenSSL," this is your wake-up call. Patch immediately, yes—but the real question is whether you even have an inventory of where OpenSSL lives in your stack (spoiler: it's everywhere).

SOC2 ISO27001 HIPAA PCI-DSS

Key Actions

  • • Immediately patch OpenSSL to the latest January 27, 2026 release to address the 12 disclosed vulnerabilities
  • • Prioritize patching CVE-2025-15467 (CVSS 9.8 CRITICAL) in all systems, as exploits are already available
  • • Audit systems for exposure to these vulnerabilities and verify patch deployment across all infrastructure

China Revives Tianfu Cup Hacking Contest Under Increased Secrecy

Feb 13, 2026 SecurityWeek Score: 0.9

China's Tianfu Cup hacking competition has resumed in 2026 under government oversight by the Ministry of Public Security, with significantly increased secrecy compared to previous iterations. The event identifies zero-day vulnerabilities in consumer devices, enterprise software, and infrastructure systems including smartphones, operating systems, cloud platforms, and security products. This development has implications for vulnerability disclosure practices and the security posture of targeted systems across multiple compliance frameworks.

My Take

State-sponsored vuln hunting under a secrecy veil means the days between discovery and your patch window just became your actual risk window—and you won't know it. If you're relying on responsible disclosure timelines in your risk assessments, it's time to assume someone already has the keys and hasn't told you.

ISO27001 SOC2

Key Actions

  • • Monitor vulnerability disclosures from Tianfu Cup targets for potential exploits
  • • Review zero-day vulnerability management policies for affected products and systems
  • • Assess risk posture for systems identified as Tianfu Cup targets (Windows 11, macOS, Chrome, VMware ESXi, Palo Alto Networks, Microsoft Exchange, etc.)

Fake AI Assistants in Google Chrome Web Store Steal Passwords and Spy on Emails

Feb 13, 2026 Infosecurity Magazine Score: 0.9

Over 260,000 Google Chrome users downloaded malicious fake AI assistant extensions in a coordinated campaign called AiFrame, capable of stealing credentials, monitoring emails, and enabling remote access. The extensions bypassed Chrome Web Store security measures using techniques like extension spraying and full-screen iframes to exfiltrate data to attacker-controlled servers. Organizations must assess whether affected users are employees or customers and evaluate potential data exposure under applicable privacy and security regulations.

My Take

Browser extensions remain the soft underbelly of endpoint security—your EDR won't catch them, your SIEM won't log them, and IT probably doesn't even inventory them. If you're not controlling what extensions your users can install (especially anything touching corporate email), you're one rogue ChatGPT helper away from a reportable breach.

SOC2 ISO27001 GDPR HIPAA CCPA

Key Actions

  • • Identify and audit any employees or users who may have downloaded affected AI assistant extensions from Chrome Web Store
  • • Assess potential credential compromise and enforce password resets for affected users with access to sensitive systems
  • • Review access logs and email activity for signs of unauthorized monitoring or data exfiltration

Malicious Chrome Extensions Caught Stealing Business Data, Emails, and Browsing History

Feb 13, 2026 The Hacker News Score: 0.9

A malicious Chrome extension named 'CL Suite' was discovered stealing sensitive business data including TOTP codes, 2FA credentials, contact lists, and analytics data from Meta Business Suite and Facebook Business Manager users. The extension exfiltrates data to attacker-controlled infrastructure without user knowledge or consent, potentially enabling account takeovers and targeted follow-on attacks. Despite low install numbers (33 users), the threat actor gains access to high-value business intelligence and authentication credentials.

My Take

If you're not maintaining an approved extension allowlist, you're basically letting employees install corporate surveillance devices themselves. This is exactly the kind of "bring your own malware" scenario that should trigger an immediate review of your browser security controls—especially for anyone touching customer data or business accounts.

SOC2 GDPR CCPA

Key Actions

  • • Immediately audit Chrome extensions installed across organization and remove 'CL Suite' (ID: jkphinfhmfkckkcnifhjiplhfoiefffl)
  • • Force password resets and review 2FA configurations for all Meta Business Suite and Facebook Business Manager accounts
  • • Monitor affected accounts for unauthorized access attempts and review audit logs for suspicious activity

data breach

1 articles

Dutch Carrier Odido Discloses Data Breach Impacting 6 Million

Feb 13, 2026 SecurityWeek Score: 0.9

Dutch mobile carrier Odido disclosed a data breach affecting approximately 6.2 million customers following unauthorized access to a customer contact system on February 7-8. The incident exposed sensitive personal information including names, addresses, phone numbers, email addresses, dates of birth, bank account numbers, and identity document details. The company has notified authorities and affected users, implemented additional security measures, and is monitoring for potential misuse of the stolen data.

My Take

Six million records including ID documents and bank details—this is exactly the kind of breach that turns into years of identity fraud for customers, not just "enhanced monitoring" PR-speak. When a telecom gets popped this badly, the real question isn't what they're doing now, it's how their access controls were so weak that someone walked out with the crown jewels.

GDPR CCPA

Key Actions

  • • Monitor credit and financial accounts for suspicious activity
  • • Remain alert to phishing attempts and social engineering
  • • Review identity theft protection options given exposure of passport/driver's license numbers

best practices

2 articles

A CISO's Playbook for Defending Data Assets Against AI Scraping

Feb 18, 2026 Dark Reading Score: 0.8

Article provides guidance for Chief Information Security Officers on protecting organizational data assets from unauthorized AI scraping activities. Covers defensive strategies and tactical approaches to prevent data exfiltration and misuse by AI systems. Relevant to multiple compliance frameworks focused on data protection and security controls.

My Take

AI scraping is just data exfiltration with better PR—your existing data loss prevention controls either work or they don't. Before spinning up a whole "AI defense strategy," audit whether you actually know where your sensitive data lives and who can access it (most organizations fail this basic test).

SOC2 ISO27001 GDPR CCPA

Key Actions

  • • Implement data classification and access controls to limit AI scraping exposure
  • • Monitor and detect unauthorized data extraction attempts
  • • Establish policies for data handling in AI/ML environments

Your AI-generated password isn't random, it just looks that way

Feb 18, 2026 The Register Security Score: 0.8

Research by Irregular security company reveals that AI-generated passwords from Claude, ChatGPT, and Gemini appear strong but contain predictable patterns that make them vulnerable to brute-force attacks within hours. The study found significant duplication, non-random character placement, and common patterns across multiple generations, contradicting the complexity indicators shown by standard password strength checkers.

My Take

If your password policy says "complex passwords required" but doesn't specify how they're generated, you've got a problem—and this research proves it. LLMs are pattern machines, not entropy sources, so any team relying on AI for credential generation needs to rethink that choice before an attacker with the same paper does it for them.

SOC2 ISO27001 GDPR HIPAA PCI-DSS CCPA

Key Actions

  • • Do not rely on AI chatbots for generating passwords for sensitive accounts
  • • Use dedicated password managers (1Password, Bitwarden, native mobile managers) instead
  • • Consider passphrase generation as alternative to AI-generated strings