CRD #7
On GenAI's security and impact, ransomware stats, attack frequencies, cloud vulns, healthcare impacts & more.
Research highlights for security leaders from validated sources released between 8-14 October 2024, followed by a list of all monitored reports. The Cybersecurity Research Digests cuts through the bias and marketing fluff to bring you relevant and objective insights backed by data and based on proper research.
Fragmented ransomware ecosystem: more groups, more variability, but not more victims
Threat research reveals that over the past 12 months, 31 new groups have joined the ransomware ecosystem, contributing to a 30% year-over-year increase in active ransomware groups. This surge underscores the growing fragmentation of a cybercriminal landscape recently dominated by a few key players.
Fortunately, the increase has not (yet) caused a corresponding rise in the number of victims, likely because the actors are smaller and less capable. However, it is highlighted that defenders should be prepared for greater variability in the tactics used by ransomware groups as this fragmentation evolves.
Ransomware across Asia-Pacific: payment trends and country variations
A survey of 3,844 security leaders from organizations with 250+ employees in the Asia-Pacific revealed that over 41% experienced some form of data breach in 2023. As an indication of severity, 22% of breached were targeted by ransomware, and of these, 62% ended up paying the ransom.
While these figures broadly align with similar studies from other regions, significant variations were noted across countries. Organizations in India (69%), Hong Kong (67%), Malaysia (50%), and Indonesia (50%) were the most likely to pay, while those in South Korea (19%), Japan (19%), and New Zealand (22%) were the least likely to comply. These differences likely stem from national legal frameworks, ransomware policies, and organizational resilience maturity, but further research is required to fully understand the underlying causes.
Assessing cyberattack impacts on patient outcomes, including mortality
A survey of healthcare security practitioners identified five key ways cyber incidents—such as ransomware, business email compromise (BEC), and supply chain disruptions—are perceived to impact patient outcomes. Complicated procedures and delayed length of stay were the most reported (in over 50% of cases), followed by patient transfers (44%) and increased mortality rates (22%).
Notably, user negligence and accidents were cited nearly twice as often compared to coordinated threat actor activity (e.g., malicious insiders, social engineering, or phishing) as the leading causes of data loss and exfiltration incidents, which were also linked to higher mortality rates.
Insurance data on cyber incident severity and frequency
Vendor-commissioned surveys on organizations experiencing cyber incidents can be misleading, as they often lack clear definitions of attack types and fail to adequately measure the true impact of these incidents. This is why insights by cyber insures – who conduct research based on real and quantifiable data from their customers – is highly valuable.
A report by Coalition analyzing insurance claims from the first half of 2024 reveals useful insights, showing an overall claims frequency of around 1.6%, indicating the likelihood of a client submitting a claim due to a cyber incident. In H1 2024, 32% of reported events were business email compromise (BEC), 27% involved funds transfer fraud, and 18% were ransomware. The average financial impact of a BEC attack was $26,000, while the average ransom demand was €1.3M, with 40% of policyholders choosing to pay, negotiating the demand down by an average of 57%.
OpenAI's threat report: no significant breakthroughs for threat actors
In its second report of this kind, OpenAI outlines its activities and analysis regarding the misuse of its technologies by threat actors. While much of the report focuses on how threat actors leverage ChatGPT for malicious content creation—such as generating fake personas, social media accounts, posts, and websites—it also addresses specific technical uses within cybersecurity.
OpenAI states that it has not observed evidence of its models contributing to "meaningful breakthroughs" in creating substantially new malware. Instead, they describe the use of their models primarily in "intermediate phases" of cyber threat activity, such as reconnaissance, vulnerability research, scripting support, evading anomaly detection, phishing assistance, and malware development.
Attacking GenAI: 20% of jailbreak attempts successfully bypass guardrails
An analysis of telemetry data from over 2,000 LLM-powered applications, utilizing prominent GenAI models like GPT-4 and LLaMA-3, reveals several critical and exploited vulnerabilities. In 20% of jailbreak attempts, attackers successfully bypassed application guardrails to leak sensitive data, needing just 42 seconds and five interactions on average to achieve their goal.
The perceived main aims of malicious actors was to access proprietary data or create harmful content, using three main techniques: 1) ignoring or overriding previous instructions, 2) persistent and forceful requests, and 3) Base64 encoding to manipulate or obscure data.
Cloud workloads critically exposed: the toxic cloud triad
An analysis of billions of cloud assets highlights the critical need for cloud vulnerability monitoring, revealing that 38% of organizations have at least one cloud workload classified under the "toxic cloud triad"—publicly exposed, critically vulnerable, and highly privileged.
Additionally, the research shows that 80% of monitored workloads had an unremediated critical CVE at some point, underscoring the ongoing struggle with timely vulnerability management. The report also emphasizes the need for greater visibility into cloud access and identities, citing findings such as over 75% of organizations having unnecessarily exposed storage assets and 23% of identities holding excessive permissions.
Identity fraud on the rise, including the growing use of deepfakes
A survey of over 1,200 fraud decision-makers across Western Europe highlights a 74% increase in identity fraud over the last three years, with deepfake-based fraud now accounting for 6.5% of all fraud cases—a 20-fold rise in that period.
The report notes that while generative AI was primarily used to create fake static identities and documents three years ago, it is now widely employed to produce deepfakes for social engineering attacks. This shift has made deepfakes the most prevalent threat in electronic ID fraud, with banks and large enterprises being the primary targets.
Individual security habits: self-responsibility and questioning gamification
Despite increased access to security training, a global survey of over 7,000 participants shows minimal year-over-year improvement in personal security practices, such as using password managers, unique and strong passwords, or multi-factor authentication (MFA).
There has also been a slight decline in perceived self-responsibility for data protection. Instead, individuals increasingly see the responsibility falling to the tech industry, while IT and security departments are viewed as the primary protectors within workplaces.
When it comes to training formats, respondents favored video content and written materials, followed by periodic workshops. Interestingly, gamified experiences were the least preferred, challenging the idea that gamification enhances engagement in security training.
Reports monitored: 8-14 October 2024
To take a deeper dive in the topics most relevant for you, we've listed all the research reports that were published during the observed period.
About
evisec's Cybersecurity Research Digest provides security leaders verified strategic insights via a carefully curated weekly summary of evidence-led, unbiased and objective cybersecurity research publications. Read more about our service here.
✉️ Do you have suggestions or want to collaborate? Get in touch via LinkedIn or email (henry@evisec.xyz)