2024-10-17

OpenAI Details Threat Actors' Use of AI Models for Cyber Operations

Level: Strategic  |  Source: OpenAI  |  Global

In an update to its May 2024 threat report, OpenAI offers new insight into the state of AI-enabled cyber operations ahead of the 2024 U.S. elections. Groups such as SweetSpecter, CyberAv3ngers, and STORM-0817 are incorporating advanced AI technologies, including large language models (LLMs), into their malicious activities. The report, authored by Ben Nimmo and Michael Flossman, shows that these groups have escalated their operations, leveraging AI for tasks ranging from malware development to social engineering in ways that have improved their operational efficiency. "Their activity ranged from debugging malware, to writing articles for websites, to generating content that was posted by fake personas on social media accounts. Activities ranged in complexity from simple requests for content generation to complex, multi-stage efforts to analyze and reply to social media posts," the researchers note. OpenAI also reports that it has disrupted more than "20 operations and deceptive networks from around the world that attempted" to abuse its models.

Several case studies in OpenAI's report detail how individual threat groups use AI. SweetSpecter, suspected to be a China-based group, has been using OpenAI's models for vulnerability research, scripting support, and anomaly detection evasion. The report highlights SweetSpecter's spear-phishing attempts against OpenAI employees, which used malicious LNK files to deploy the SugarGh0st RAT. Although these attempts were unsuccessful due to OpenAI's security measures, the campaign demonstrates how adversaries are evolving their tactics with AI-driven tools to enhance reconnaissance and persistence. SweetSpecter's prompts included questions about the Log4Shell vulnerability, probes for vulnerabilities affecting car manufacturers, requests for debugging assistance, social engineering themes, and more.
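
The LNK-based delivery chain described above is a pattern defenders can hunt for independently of the AI angle. The sketch below is a minimal, hypothetical illustration and is not drawn from the OpenAI report: it scans an exported endpoint process log for script interpreters launched with a .lnk reference in the command line. The CSV file name, its column names, and the interpreter list are all assumptions made for the example.

```python
import csv

# Script interpreters commonly spawned by LNK-based droppers
# (illustrative list, not exhaustive).
SUSPICIOUS_CHILDREN = {"powershell.exe", "cmd.exe", "wscript.exe",
                       "cscript.exe", "mshta.exe"}

def flag_lnk_launches(events):
    """Yield process events where the command line references a .lnk
    shortcut and the launched image is a script interpreter -- a pattern
    consistent with LNK-based droppers such as the SugarGh0st chain."""
    for event in events:
        cmd = event.get("command_line", "").lower()
        image = event.get("image", "").lower().rsplit("\\", 1)[-1]
        if ".lnk" in cmd and image in SUSPICIOUS_CHILDREN:
            yield event

if __name__ == "__main__":
    # "process_events.csv" with columns image, command_line is an
    # assumed export format, not a real product artifact.
    with open("process_events.csv", newline="") as fh:
        for hit in flag_lnk_launches(csv.DictReader(fh)):
            print(hit["image"], "<-", hit["command_line"][:120])
```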

Another key actor, CyberAv3ngers, linked to Iran's Islamic Revolutionary Guard Corps (IRGC), has focused on industrial control systems (ICS) and programmable logic controllers (PLCs). The group has used OpenAI's models to research vulnerabilities in critical infrastructure and to debug scripts targeting PLCs in sectors such as water management and energy. The report details CyberAv3ngers' targeting of vulnerable ICS devices and their attempts to refine malicious scripts for attacks. However, OpenAI's analysis concludes that while AI played a role in their operations, it did not grant the attackers any novel capabilities beyond those already achievable with publicly available tools. CyberAv3ngers' prompts covered networking and industrial devices, including default passwords, direct requests for help exploiting a network or crafting files for exploitation, and specific techniques such as creating a copy of the SAM file or otherwise compromising credentials.
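
The SAM-copying technique mentioned in those prompts maps to a well-documented credential-access behavior (MITRE ATT&CK T1003.002) that leaves recognizable command-line artifacts. As a hedged illustration only, and again not taken from the OpenAI report, the sketch below greps a stream of logged command lines for common SAM hive export patterns; the input format (one command line per line on stdin) is an assumption.

```python
import re
import sys

# Command-line patterns associated with SAM hive export
# (MITRE ATT&CK T1003.002); illustrative, not exhaustive.
SAM_PATTERNS = [
    re.compile(r"reg(\.exe)?\s+save\s+hklm\\sam", re.IGNORECASE),
    re.compile(r"reg(\.exe)?\s+save\s+hklm\\system", re.IGNORECASE),
    re.compile(r"esentutl(\.exe)?.*\\sam\b", re.IGNORECASE),
]

def sam_dump_hits(lines):
    """Yield command lines matching known SAM-export patterns."""
    for line in lines:
        if any(p.search(line) for p in SAM_PATTERNS):
            yield line.strip()

if __name__ == "__main__":
    # Assumes one command line per stdin line, e.g. an exported log.
    for hit in sam_dump_hits(sys.stdin):
        print("possible SAM export:", hit)
```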

STORM-0817, an Iranian group, has also been flagged for using AI to assist in the development of Android malware and for scraping social media profiles. Their efforts to target activists and journalists align with their goal of gathering intelligence on specific individuals. The report highlights their use of OpenAI models to enhance their command-and-control infrastructure and debug malware components. Though still in its developmental stages, the operation reveals a focus on long-term espionage and surveillance, with AI helping streamline portions of their attack process.

Covert influence operations have also been a key focus, particularly ahead of the 2024 U.S. elections. While OpenAI has disrupted numerous influence operations aimed at shaping public opinion, the report indicates that these campaigns have largely failed to achieve substantial reach. On the Breakout Scale, which OpenAI uses to rate such operations from Category 1 to Category 6 (6 being the highest impact), most fall under Category 2, signifying limited online impact. Despite AI's potential to aid influence operations, the report concludes that these activities have yet to meaningfully alter the broader information landscape.
