2025-03-06

OpenAI Continues to Disrupt Cyber Threat Actors Exploiting AI for Influence Operations and Cybercrime

Level: Strategic | Source: OpenAI | Global

OpenAI has released a report detailing its efforts to prevent the misuse of its AI models by cyber threat actors engaged in influence operations, cybercrime, and surveillance. The findings describe how adversaries leverage AI for a range of malicious purposes, prompting OpenAI to ban accounts and collaborate with industry peers. The report presents case studies of actors from multiple countries, including China, Iran, Cambodia, and North Korea, who used AI to manipulate public discourse, run deceptive hiring schemes, and support cyber intrusion research.

One of the most relevant cases involves a deceptive employment scheme linked to North Korea, where AI was used to create fraudulent job applications, cover letters, and social media profiles to deceive employers. The scheme enabled North Korean IT workers to secure remote jobs in Western companies, allowing them to funnel income to the regime. The actors also used AI-generated content to craft fake references and explanations for inconsistencies in their behavior. "They also used our models to devise cover stories to explain unusual behaviors such as avoiding video calls, accessing corporate systems from unauthorized countries or working irregular hours," reports OpenAI. OpenAI identified and banned multiple accounts involved in this operation and shared intelligence with industry partners to prevent further exploitation.
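
Defenders screening for this kind of insider risk often start from the very anomalies OpenAI describes: corporate access from unauthorized countries and work at irregular hours. Below is a minimal, hypothetical Python sketch of that heuristic; the field names, country allowlist, and business-hours window are illustrative assumptions, not details from OpenAI's report.

```python
# Hypothetical sketch: flag corporate sign-ins from unapproved countries
# or at unusual hours. Field names ("user", "country", "timestamp") and
# the allowlist/hours values are assumptions for illustration only.
from datetime import datetime

APPROVED_COUNTRIES = {"US", "CA", "GB"}   # assumed allowlist
BUSINESS_HOURS = range(7, 20)             # assumed 07:00-19:59 local

def flag_suspicious_logins(events):
    """Return (event, reason) pairs violating the allowlist or business hours."""
    flagged = []
    for ev in events:
        ts = datetime.fromisoformat(ev["timestamp"])
        if ev["country"] not in APPROVED_COUNTRIES:
            flagged.append((ev, "unapproved country"))
        elif ts.hour not in BUSINESS_HOURS:
            flagged.append((ev, "outside business hours"))
    return flagged

if __name__ == "__main__":
    sample = [
        {"user": "jdoe", "country": "US", "timestamp": "2025-03-06T09:15:00"},
        {"user": "jdoe", "country": "KP", "timestamp": "2025-03-06T03:40:00"},
    ]
    for ev, reason in flag_suspicious_logins(sample):
        print(f"{ev['user']} @ {ev['timestamp']} ({ev['country']}): {reason}")
```

In practice, such rules would feed a broader identity-risk review rather than fire as standalone alerts, since legitimate travel and off-hours work produce the same signals.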

Continuing on the North Korean front, cyber threat actors leveraged ChatGPT to debug code and explore security vulnerabilities, particularly in remote access tools and authentication bypass techniques. Threat research of interest includes "debugging and development assistance for publicly available tools and code that could be used for Remote Desktop Protocol (RDP) brute force attacks, as well as assistance on the use of open-source Remote Administration Tools (RAT)," explains OpenAI. North Korean groups linked to this activity include Kimsuky (aka Emerald Sleet, Velvet Chollima) and APT38 (aka Sapphire Sleet, Stardust Chollima). Notable observed activity includes reconnaissance focused on application vulnerabilities, multiple RDP-related coding inquiries, and the use of PowerShell scripts. Targeting also indicates a financial interest in cryptocurrency and the individuals who hold it. OpenAI banned an undisclosed number of accounts and clarified that the information provided by its models was limited: "Prompts and queries from the actor were primarily based on existing open source information and the provided model generations either did not offer any novel capability or were refusals to respond."
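
For context on the RDP brute-force tradecraft mentioned above, the common defensive counterpart is thresholding failed logons per source. The sketch below is a hypothetical example of that heuristic over pre-parsed Windows Security events (Event ID 4625 with LogonType 10 indicates a failed RemoteInteractive/RDP logon); the event shape, window size, and threshold are assumptions, not OpenAI's detection logic.

```python
# Hypothetical sketch: alert when one source IP racks up many failed RDP
# logons (Event ID 4625, LogonType 10) inside a sliding time window.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)   # assumed detection window
THRESHOLD = 10                  # assumed failure count that triggers an alert

def detect_rdp_bruteforce(events):
    """Yield (source_ip, failure_count) whenever a source exceeds THRESHOLD
    failed RDP logons within WINDOW."""
    recent_failures = defaultdict(list)  # source_ip -> timestamps in window
    for ev in sorted(events, key=lambda e: e["timestamp"]):  # ISO strings sort correctly
        if ev["event_id"] != 4625 or ev["logon_type"] != 10:
            continue  # keep only failed RemoteInteractive (RDP) logons
        ts = datetime.fromisoformat(ev["timestamp"])
        window = [t for t in recent_failures[ev["source_ip"]] if ts - t <= WINDOW]
        window.append(ts)
        recent_failures[ev["source_ip"]] = window
        if len(window) >= THRESHOLD:
            yield ev["source_ip"], len(window)  # fires on each event past the threshold
```

A production rule would typically deduplicate repeat alerts per source and enrich hits with GeoIP and account context before escalation.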

OpenAI's disruptions also extend to Iranian influence operations that used AI to generate articles and tweets for campaigns tied to STORM-2035, a threat activity cluster, and to an operation tracked as the International Union of Virtual Media (IUVM). The report revealed a previously unreported connection between these operations, indicating potential coordination among Iranian actors. The AI-generated content was primarily pro-Iranian and anti-Western, covering topics such as U.S. foreign policy, Middle East conflicts, and support for Palestinian groups. Although the influence campaign struggled to gain significant engagement, its content reached multiple platforms, which OpenAI assessed as warranting continued monitoring. "Some of the accounts we banned only occasionally used our models to generate content for the influence operations. More often, they asked our models to help design materials for teaching English and Spanish as a foreign language," reports OpenAI. Five accounts were identified as supporting the Iran-related influence operations and were subsequently banned.

Chinese-linked activities were also detected, with a campaign labeled "Sponsored Discontent" using AI to generate anti-American Spanish-language articles published by Latin American news outlets. The actors also deployed AI-generated English-language social media comments criticizing Chinese dissident figures, mimicking previous disinformation campaigns. OpenAI noted that this was the first observed instance of a Chinese influence operation successfully planting long-form articles in mainstream Latin American media. Another China-linked case, "Peer Review," involved actors using AI to research and develop a surveillance tool designed to monitor protests and political discussions in Western countries, potentially feeding intelligence to Chinese authorities.

OpenAI also identified and banned accounts involved in a romance-baiting scam, also known as "pig butchering," originating from Cambodia. These actors used AI to generate and translate messages designed to lure victims into fraudulent investment schemes. OpenAI continues to collaborate with cybersecurity partners and government agencies to strengthen detection capabilities and prevent AI from being weaponized for malicious activity. The company describes the sharing of insights as a "force multiplier," with support from industry peers and the security community helping it take action on reported threats.
