Claude AI has been misused to run more than 100 fake political personas in a global influence campaign

5 Min Read

Anthropic, an artificial intelligence (AI) company, has revealed that unknown threat actors leveraged its Claude chatbot for an “influence-as-a-service” operation to engage with authentic accounts on Facebook and X.

The sophisticated activity, assessed as financially motivated, is said to have used the AI tool to orchestrate a network of some 100 distinct personas across the two social media platforms.

Anthropic researchers say the now-disrupted operation prioritized persistence and longevity over virality, and sought to amplify moderate political perspectives that support or undermine the interests of Europe, Iran, the United Arab Emirates (UAE), and Kenya.

These included promoting the UAE as a superior business environment while criticizing European regulatory frameworks, pushing energy security narratives to European audiences, and cultural identity narratives to Iranian audiences.

The effort also pushed narratives supporting Albanian figures, criticizing opposition figures in an unspecified European country, and advocating development initiatives and political figures in Kenya. The company added that these influence operations are consistent with state-affiliated campaigns, although exactly who is behind them remains unknown.

“What is especially novel is that this operation used Claude not just to generate content, but also to decide when social media bot accounts would comment on, like, or re-share posts from authentic social media users,” the company said.

“Claude was used as an orchestrator to decide what actions should be taken by social media bot accounts based on politically motivated personas.”
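To make that pattern concrete, here is a minimal sketch of what such an orchestration layer might look like. Anthropic has not published the operation's code; the persona fields, the decide_action helper, and the stand-in ask_model call below are all hypothetical.

```python
import json
from dataclasses import dataclass

@dataclass
class Persona:
    """Hypothetical persona record; the real operation's fields are unknown."""
    handle: str
    language: str
    political_leaning: str

def ask_model(prompt: str) -> str:
    """Stand-in for a hosted LLM call; returns a canned decision here so
    the sketch runs without any external API."""
    return '{"action": "ignore", "comment": ""}'

def decide_action(persona: Persona, post_text: str) -> dict:
    """Ask the model whether this persona should like, comment on, re-share,
    or ignore a post, i.e. the 'orchestrator' role described above."""
    prompt = (
        f"You are {persona.handle}, writing in {persona.language} "
        f"from a {persona.political_leaning} perspective.\n"
        f"Post: {post_text}\n"
        'Reply as JSON: {"action": "like|comment|reshare|ignore", "comment": "..."}'
    )
    return json.loads(ask_model(prompt))

if __name__ == "__main__":
    p = Persona(handle="example_persona", language="en", political_leaning="moderate")
    print(decide_action(p, "Sample post text"))
```

The notable division of labor is that both the wording of the content and the engagement decision itself are delegated to the model.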

Besides using Claude as a tactical engagement decision-maker, the operators had the chatbot generate politically aligned responses in each persona’s voice and native language, and create prompts for two popular image generation tools.


The operation is believed to be the work of a commercial service that caters to diverse clients across various countries. At least four distinct campaigns have been identified using this programmatic framework.

“The operation implemented a highly structured, JSON-based approach to persona management, allowing it to maintain continuity across platforms and establish consistent engagement patterns that mimic authentic human behavior,” the researchers said.

“By using this programmatic framework, operators could efficiently standardize and scale their efforts, enabling systematic tracking and updating of persona attributes, engagement history, and narrative themes across multiple accounts at the same time.”
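The researchers do not publish the actual schema, but a JSON persona record of the kind they describe might look something like the sketch below; every field name is an assumption for illustration, serialized with Python's standard json module.

```python
import json

# Hypothetical persona record illustrating a JSON-based persona-management
# framework of the kind described; the actual schema is not public.
persona_record = {
    "persona_id": "eu-energy-014",
    "handles": {"x": "@example_handle", "facebook": "example.page"},
    "language": "de",
    "narrative_themes": ["energy security"],
    "engagement_history": [
        {"post_id": "123", "action": "comment", "timestamp": "2025-03-01T10:00:00Z"}
    ],
}

# A structured record like this is what would let operators track and update
# attributes, engagement history, and themes across many accounts at once.
print(json.dumps(persona_record, indent=2))
```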

Another interesting aspect of the campaign is that it instructed the automated accounts to respond with humor and sarcasm to accusations from other accounts that they might be bots.

Anthropic said the operation underscores the need for new frameworks to assess influence operations that revolve around relationship building and community integration. It also warned that similar malicious activity could become common in the years to come as AI further lowers the barrier to running influence campaigns.

Elsewhere, the company noted that it banned a sophisticated threat actor who used its models to scrape leaked usernames and passwords associated with security cameras, and to devise ways to brute-force internet-facing targets using the stolen credentials.

The threat actor also used Claude to process posts from information stealer logs shared on Telegram, write scripts to scrape target URLs from websites, and improve their own systems to enhance search functionality.

Below are two other cases of misuse discovered by Anthropic in March 2025:

  • A recruitment fraud campaign that leveraged Claude to polish the content of scams targeting job seekers in Eastern European countries
  • A novice actor who leveraged Claude to build technical capabilities beyond their skill level, developing advanced malware able to scan the dark web, evade security controls, and generate hard-to-detect malicious payloads that maintain long-term, persistent access to compromised systems

“This case illustrates how AI can potentially flatten the learning curve for malicious actors, allowing individuals with limited technical knowledge to develop sophisticated tools and potentially accelerate their progression from low-level activities to more serious cybercriminal endeavors,” Anthropic said.
