OpenAI reports China-linked use of ChatGPT in cyberattack operations


OpenAI has disclosed that a ChatGPT account linked to individuals affiliated with Chinese law enforcement was used as part of a multi-faceted cyberattack and influence operation, according to its latest threat report on malicious AI use. The account was used to draft, plan and document activities that included online harassment, covert influence campaigns and operational coordination across hundreds of platforms, prompting OpenAI to ban the account and highlight how artificial intelligence tools are being abused to support complex malicious workflows.
The company’s threat intelligence findings show that while ChatGPT was not directly used to write code for network intrusion or exploit development, it played a central role in generating narratives, refining propaganda and tracking the status of widespread operations targeting critics of the Chinese government. OpenAI’s investigators matched drafts and “situation reports” created on ChatGPT with real-world activity across social networks, blogs and other online spaces, demonstrating that the model was integrated into broader cyber and influence campaigns.
OpenAI’s report underscores the evolving landscape of AI misuse, where threat actors combine generative tools with traditional cyber techniques and social media infrastructure. In one highlighted case, the banned account was used to compose extensive logs of what were described internally as “cyber special operations,” including attempts to intimidate dissidents overseas and fabricate news reporting on their deaths. These communications were then disseminated via coordinated accounts across multiple digital platforms, amplifying disinformation and harassment.
Although the misuse documented in this case did not involve autonomous exploitation of systems, security experts say the blending of generative models with coordinated online influence efforts marks a significant shift in how cyber and psychological operations are conducted. The techniques documented in OpenAI’s threat report reflect growing sophistication in narrative crafting and campaign tracking, particularly as threat actors adopt AI tools to increase the scale and precision of their operations.
The case linked to Chinese law enforcement is part of a broader pattern of state-linked actors abusing AI models for harmful purposes. Past intelligence reports have described operations where large language models were misused to assist in developing multilingual phishing messages, custom malware templates and spear-phishing campaigns that targeted organisations across North America, Europe and Asia. One group tracked by researchers as UTA0388 reportedly employed generative AI to produce highly customised phishing emails in multiple languages, scaling attacks that combined rapport-building tactics with malicious payloads.
Other threat clusters identified in previous industry analyses show that such misuse is not limited to one geography or actor type: Russia-linked and North Korean threat actors have also been observed using generative AI to refine malware code, draft phishing lures and generate fraudulent identities for social engineering. North Korean operators, for example, used ChatGPT to create deepfake military identification cards embedded in phishing campaigns targeting defence-related institutions, while Russian groups refined Windows malware with AI assistance.
OpenAI’s report emphasises that threat activity involving AI is rarely confined to a single platform or tool. Rather, malicious operators often incorporate multiple AI models and digital services at different stages of their workflows, combining generative text outputs with other technologies such as automated scripts, social accounts and bespoke malware. This integrated approach can magnify the reach and impact of influence operations, scams and cyberattacks, even if the generative model itself is not directly executing technical exploits.
The company’s threat intelligence initiative, which has been ongoing for more than two years, seeks to illuminate these emerging tactics and support broader efforts to detect and mitigate AI-assisted abuse. By documenting case studies and sharing insights, OpenAI aims to help industry partners and the wider security community adapt to how threat actors evolve their use of artificial intelligence. The report highlights that while model misuse is one component of a larger threat ecosystem, understanding behaviour patterns and operational workflows is critical to strengthening defences across the internet.
Experts note that the misuse of generative models for influence and cyber operations reflects a broader trend in the threat landscape, where the sophistication of social engineering and narrative attacks is rising. Academic research on generative language models has long warned that these tools can automate the creation of convincing, misleading text for influence operations, and OpenAI’s findings demonstrate how such capabilities are being harnessed by determined malicious actors.
The article OpenAI reports China-linked use of ChatGPT in cyberattack operations appeared first on Arabian Post.