  ██████╗██╗   ██╗██████╗ ██████╗     ██████╗██╗  ██╗
 ██╔════╝╚██╗ ██╔╝██╔══██╗██╔══██╗   ██╔════╝╚██╗██╔╝
 ██║      ╚████╔╝ ██████╔╝██████╔╝ ● ██║      ╚███╔╝ 
 ██║       ╚██╔╝  ██╔══██╗██╔══██╗   ██║      ██╔██╗ 
 ╚██████╗   ██║   ██████╔╝██║  ██║   ╚██████╗██╔╝ ██╗
  ╚═════╝   ╚═╝   ╚═════╝ ╚═╝  ╚═╝    ╚═════╝╚═╝  ╚═╝
────────────────────────────────── STAY SHARP ───

WordPress Plugin Flaw Lets Low-Level Users Seize Admin Control

Today's cybersecurity digest — CVEs, headline news, quantum computing, and something weird.

cybr.cx | Daily Digest — April 11, 2026


Critical Vulnerabilities

CVE-2026-5144 | BuddyPress Groupblog (WordPress) | CVSS 8.8 | HIGH
All versions up to and including 1.9.3 of the BuddyPress Groupblog plugin allow privilege escalation via unprotected parameters in the group blog settings handler. A Subscriber-level user who has been made a group admin can manipulate the groupblog-blogid, default-member, and groupblog-silent-add parameters to escalate privileges beyond their intended role. On any WordPress multisite running this plugin, this is a straightforward path to administrative access. Update or disable immediately.
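The underlying fix pattern is straightforward: role-sensitive request parameters should be stripped unless the caller holds the relevant capability. A minimal Python sketch of that pattern (the plugin itself is PHP; the parameter names come from the advisory, but the function and capability flag are hypothetical illustrations, not the plugin's actual code):

```python
# Parameters from the advisory that can alter roles or blog assignment.
ROLE_SENSITIVE = {"groupblog-blogid", "default-member", "groupblog-silent-add"}

def filter_settings(request_params: dict, user_can_manage_roles: bool) -> dict:
    """Return the settings a caller is allowed to submit.

    Role-sensitive keys are silently dropped unless the caller has been
    verified (server-side) to hold the role-management capability.
    """
    if user_can_manage_roles:
        return dict(request_params)
    return {k: v for k, v in request_params.items() if k not in ROLE_SENSITIVE}
```

The vulnerable handler effectively behaved as if `user_can_manage_roles` were always true for any group admin, including Subscriber-level accounts promoted within a single group.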

CVE-2026-5217 | Optimole Image Optimization (WordPress) | CVSS 7.2 | HIGH
The Optimole plugin through version 4.2.2 exposes an unauthenticated REST endpoint (/wp-json/optimole/v1/optimizations) that accepts a user-supplied s parameter without proper sanitisation or output escaping. An unauthenticated attacker can inject stored XSS payloads via the srcset descriptor, potentially hijacking admin sessions or executing arbitrary scripts in a victim's browser. Given how widely Optimole is deployed, the unauthenticated attack surface warrants urgent patching.
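The generic defence against this bug class is to sanitise the user-supplied value and escape it on output before it is written into the srcset attribute. A hedged Python sketch of the pattern (the real endpoint is PHP; the function name is hypothetical and this is an illustration, not the vendor's patch):

```python
import html
import re

def sanitise_srcset_url(value: str) -> str:
    """Sanitise a user-supplied URL destined for a srcset attribute.

    srcset URLs may not legitimately contain whitespace, quotes, or angle
    brackets, so those are stripped outright; the remainder is HTML-escaped
    so it cannot terminate the attribute or inject markup.
    """
    cleaned = re.sub(r'[\s"\'<>]', "", value)
    return html.escape(cleaned, quote=True)
```

The point of doing both steps is defence in depth: the strip handles characters with syntactic meaning inside srcset, and the escape guarantees the value stays inert even if it reaches the page through a different sink later.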

CVE-2026-5809 | wpForo Forum (WordPress) | CVSS 7.1 | HIGH
A two-step logic flaw in wpForo through version 3.0.2 allows authenticated users to trigger arbitrary file deletion on the server. The topic_add() and topic_edit() handlers accept unsanitised array values from $_REQUEST and store them as post metadata; because body is included in the allowed field set without type restrictions, an attacker can abuse this to reference and delete arbitrary files. On shared hosting environments in particular, the blast radius could extend beyond the WordPress installation itself.
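The standard mitigation for arbitrary-file-deletion bugs is path confinement: resolve the candidate path and refuse anything that escapes an allowed root. A hedged Python sketch of that containment check (the plugin is PHP and all names here are illustrative, not wpForo's code):

```python
import os

def is_contained(candidate: str, root: str) -> bool:
    """True only if candidate, resolved against root, stays inside root."""
    root = os.path.realpath(root)
    resolved = os.path.realpath(os.path.join(root, candidate))
    return os.path.commonpath([resolved, root]) == root

def safe_delete(candidate: str, root: str) -> bool:
    """Delete candidate only when the resolved path cannot escape root."""
    if not is_contained(candidate, root):
        return False  # traversal attempt: refuse and log upstream
    resolved = os.path.realpath(os.path.join(os.path.realpath(root), candidate))
    if os.path.isfile(resolved):
        os.remove(resolved)
        return True
    return False
```

Note that the check resolves symlinks via `realpath` before comparing, which is what distinguishes real containment from a naive string-prefix test.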


Headline News

ShinyHunters Claims Rockstar Games Breach via Snowflake Integration

ShinyHunters — the threat actor group behind a string of high-profile data thefts — is claiming responsibility for a breach of Rockstar Games, reportedly achieved by compromising a Snowflake cloud data warehouse integration. If confirmed, this would mark another chapter in the group's established playbook of targeting organisations through cloud data platform credentials rather than attacking core infrastructure directly. The breach vector is consistent with the pattern seen in prior ShinyHunters campaigns: credential theft or session token hijacking enabling lateral movement into cloud-hosted data environments. For practitioners, this reinforces that Snowflake tenants without MFA enforcement and network policy restrictions on account access remain high-value soft targets. Security teams managing cloud data warehouse integrations should audit service account permissions, rotate credentials, and verify that IP allowlisting is in place — today, not next sprint.

Android Banking Trojan Tied to Cambodia Scam Operations Reaches 21 Countries

An Android banking trojan with operational ties to Southeast Asian scam compound networks has now been observed targeting victims across 21 countries, researchers have found. The malware is designed to bypass standard Android security controls, intercept banking credentials, and exfiltrate funds — a capability set consistent with the industrialised fraud infrastructure that Cambodia-linked operations have been running at scale. What makes this campaign particularly notable is the forced-labour angle: the humans operating these scams are themselves often trafficking victims, meaning the threat actor ecosystem here operates across both cybercrime and organised crime domains simultaneously. For defenders, the geographic spread suggests active re-targeting of victim pools rather than opportunistic infection, and mobile SOC teams should ensure detection rules account for the specific bypass techniques being documented in the wild.

Anthropic Discloses Mythos and Glasswing: AI Security at an Inflection Point

Anthropic has made two significant security-adjacent announcements: a preview of Mythos, a non-public AI model whose capabilities surprised much of the industry, and details of a project called Glasswing. Veteran AI security researchers are describing the pair as the arrival of a moment they had long anticipated. The specifics of what these systems can do remain tightly controlled, but commentary from inside the AI security research community suggests the concern centres on autonomous capability thresholds: what these models can initiate, reason about, and execute without human prompting. For the security community, the practical question is immediate: as AI models cross thresholds of autonomous capability, the attack surface for AI agent abuse, prompt injection at scale, and adversarial model misuse expands qualitatively, not just quantitatively. Practitioners building on or defending against AI-integrated systems should be paying close attention to Anthropic's own safety documentation here — the company's willingness to be explicit about internal concerns is itself a data point worth noting.


Schrödinger's Feed

AI Unlocks "Waterfall" Error Correction in Quantum Systems

Harvard researchers have demonstrated that AI can trigger a cascade — described as a "waterfall" effect — in quantum error correction, sharply reducing both error rates and processing time in ways that previous correction schemes couldn't achieve efficiently. This matters to the cryptography community because quantum error correction has long been the practical bottleneck between current noisy intermediate-scale quantum (NISQ) hardware and the fault-tolerant machines capable of running Shor's algorithm at cryptographically relevant key sizes. A sudden qualitative improvement in error correction efficiency compresses the timeline estimates that underpin most organisations' post-quantum migration planning. Practitioners who have been treating PQC migration as a 2030+ concern should note that the gap between "interesting lab result" and "re-evaluate your assumptions" is narrowing faster than the consensus timeline suggests.


/dev/random

Researchers Broke Top AI Agent Benchmarks. Turns Out the Benchmarks Were Trusting.

A team from Berkeley's RDI lab has published a detailed account of how they systematically broke leading AI agent benchmarks — and, naturally, what that means for anyone who took those benchmark scores at face value when procuring or deploying AI systems. The short version: the benchmarks were optimised to be solvable in ways that didn't generalise to real-world robustness, and sufficiently motivated researchers found the cracks without breaking much of a sweat. It's a story as old as security itself — Goodhart's Law applied to AI capability measurement — except now the stakes include autonomous agents with access to enterprise tooling. The only surprising part is that anyone is surprised.