San Francisco, CA – February 25, 2025 – OpenAI has taken decisive action against state-sponsored cyber threat actors by banning multiple accounts linked to hacking groups from North Korea. The company discovered that these malicious actors were leveraging its ChatGPT platform to conduct cybersecurity research, develop hacking tools, and orchestrate cyberattacks.
North Korean Hacking Groups Exploited ChatGPT
According to OpenAI’s February 2025 threat intelligence report, the company identified and banned accounts associated with Democratic People’s Republic of Korea (DPRK)-affiliated hacking groups, specifically VELVET CHOLLIMA (also tracked as Kimsuky and Emerald Sleet) and STARDUST CHOLLIMA (also tracked as APT38 and Sapphire Sleet). These groups, known for cyber espionage and financially motivated cybercrime, were found using ChatGPT for reconnaissance and technical research to enhance their hacking capabilities.
The accounts were flagged using intelligence shared by an industry partner, revealing that North Korean hackers were using ChatGPT for various malicious activities, including:
- Researching cybersecurity vulnerabilities and attack methodologies.
- Learning about cryptocurrency transactions and blockchain networks.
- Developing and refining hacking scripts for Remote Desktop Protocol (RDP) brute force attacks.
- Coding and debugging Remote Administration Tools (RATs) to infiltrate target systems.
- Identifying macOS attack techniques and debugging code that abuses Auto-Start Extensibility Point (ASEP) locations for persistence.
ChatGPT Used for Social Engineering & Phishing Campaigns
In addition to hacking research, OpenAI found evidence that North Korean cyber actors leveraged ChatGPT to craft highly targeted phishing campaigns. These included:
- Generating phishing emails designed to steal sensitive user information.
- Creating fake job postings and fraudulent IT worker profiles to infiltrate organizations.
- Writing scripts for PowerShell-based payload delivery, file transfers, and HTML obfuscation techniques.
- Seeking ways to bypass security warnings for unauthorized RDP access.
OpenAI noted that some banned accounts were linked to North Korean IT worker schemes, in which threat actors posed as legitimate employees at Western companies. These individuals used ChatGPT to assist with their day-to-day work while simultaneously developing cyberattack strategies. The scheme was likely intended to generate foreign revenue for the Pyongyang regime while covertly gathering intelligence on corporate infrastructure.
China & Iran Also Under OpenAI’s Radar
Beyond North Korea, OpenAI has also been monitoring state-sponsored influence operations from China and Iran. The company previously disrupted two Chinese disinformation campaigns in late 2024—“Peer Review” and “Sponsored Discontent”—which used ChatGPT to generate propaganda content and develop surveillance tools.
OpenAI also reported that, since early 2024, it has identified and disrupted over 20 cyber operations associated with Chinese and Iranian state-sponsored hackers.
AI in Cybersecurity: A Double-Edged Sword
While AI-powered tools like ChatGPT offer significant benefits in cybersecurity research and threat mitigation, the latest findings highlight the growing risks of AI misuse by cybercriminals. OpenAI has reaffirmed its commitment to preventing the abuse of AI models, stating that it will continue collaborating with industry partners and security experts to enhance detection and mitigation efforts.
As nation-state actors increasingly turn to AI for cyber warfare and disinformation, the role of AI security policies and proactive monitoring will be critical in safeguarding global cybersecurity.