[Image: Two fencing partners with faces covered, threat level unknown.]

A Practitioner's Perspective on AI: Examples of Misused AI Systems and Their Integration with Automation for Malicious Intent

There is a lot of talk about the overall ethics and use of AI, but how does that translate into real-world scenarios and usage? As a digital technical project manager who does a lot of work in digital marketing, I’ve witnessed the transformative power of artificial intelligence (AI) in optimizing processes and driving innovation. In fact, some AI tools have already become staples in my toolstack and workflows. As a digital marketing practitioner, I also understand how easily AI can be misused, and how significant the risks become when it is integrated with automation workflows. In this article I’ll provide a comprehensive overview of how AI can be exploited to influence public opinion and behavior, using practical examples and real-world case studies. Understanding these threats, and where and how we might encounter them, will help us all better safeguard our digital environments and the information we extract from them.

The Dual-Edged Sword of AI

There is no question that AI is a powerful tool that can enhance productivity and improve decision-making. At TPS, for example, we build workflows and processes that utilize AI to increase performance and efficiency. However, the capabilities of AI can be misused for deceptive purposes. According to OpenAI, the creators of ChatGPT, their models are sometimes abused in support of covert influence operations (IO) to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them. You see the problem, right? This dual nature of AI necessitates a vigilant approach to its deployment and use.

Case Studies of Malicious AI Use

And, friends, it’s happening already! Below are just a few known IOs:

Spamouflage Dragon

Spamouflage Dragon is a Chinese influence operation known for its extensive use of AI-generated content. This group targets global audiences with content that praises the Chinese government and criticizes its opponents. By generating large volumes of posts and fake engagement, Spamouflage Dragon attempts to create the illusion of widespread support.

Example: Spamouflage Dragon used AI to generate articles accusing Japan of environmental damage, which were then posted across various platforms, including Medium and Blogspot. This operation also utilized AI to debug code and analyze social media posts, enhancing its efficiency and reach.

Pro-Russia Doppelgänger

The Pro-Russia Doppelgänger operation focuses on spreading anti-Ukraine propaganda. This group uses AI to generate content in multiple languages and posts it across various social media platforms. Despite their efforts, these campaigns often fail to gain authentic engagement.

Example: Doppelgänger’s AI-generated posts on 9GAG and X included memes and comments designed to portray Ukraine negatively. These posts were typically met with critical responses from authentic users, highlighting the challenge of achieving genuine engagement.

Bad Grammar

Bad Grammar is a Russian influence operation that uses AI to generate and post comments on Telegram. This group focuses on political themes and targets audiences in Russia, Ukraine, the United States, and the Baltic States.

Example: The Bad Grammar network used AI to create comments in the voices of fake personas, posting them on pro-Russia channels. Despite their efforts, the campaign struggled to attract significant engagement, often being overshadowed by more authentic content.

How AI is Integrated with Automation for Malicious Intent

In practical, real-world terms, threat actors such as the IO groups mentioned above often combine AI with automation to scale their operations, increasing posting rates and automating the creation of comments and reactions. This integration allows them to generate vast amounts of content and distribute it automatically across multiple platforms.

Example: Spamouflage Dragon used AI to generate short comments in various languages and automated their posting across platforms like Telegram and X. By doing so, they aimed to create a false sense of engagement and support.
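To make the defensive side of this concrete, here is a minimal Python sketch of one way a moderation or marketing team might flag the fingerprint this kind of automation leaves behind: clusters of near-identical short comments from different accounts posted within a tight time window. The sample comments, thresholds, and helper names are illustrative assumptions, not a description of any platform’s actual detection system.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Illustrative sample data: (account, minutes since first post, comment text).
# In practice this would come from your platform's moderation or API export.
comments = [
    ("acct_a", 0, "Great point, this policy is clearly failing!"),
    ("acct_b", 2, "Great point! this policy is clearly failing"),
    ("acct_c", 3, "great point this policy is clearly failing!!"),
    ("acct_d", 90, "Loved this article, sharing it with my team."),
]

SIMILARITY_THRESHOLD = 0.85  # assumed cutoff; tune against real data
WINDOW_MINUTES = 15          # what counts as a "burst"; also an assumption

def normalize(text: str) -> str:
    """Lowercase and strip punctuation so trivial edits don't hide duplicates."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

def similarity(a: str, b: str) -> float:
    """Rough 0-1 similarity score between two normalized comments."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

# Flag pairs of near-identical comments from different accounts posted
# close together in time -- the classic coordinated-spam pattern.
for (acct1, t1, text1), (acct2, t2, text2) in combinations(comments, 2):
    if acct1 != acct2 and abs(t1 - t2) <= WINDOW_MINUTES:
        score = similarity(text1, text2)
        if score >= SIMILARITY_THRESHOLD:
            print(f"FLAG: {acct1} and {acct2} posted {score:.0%}-similar "
                  f"comments {abs(t1 - t2)} minutes apart")
```

Pairwise comparison like this is fine for a single thread or campaign; at platform scale, real systems would lean on hashing or embedding techniques instead, but the underlying signal is the same.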

The Impact on Social and Digital Platforms

After reading this, one can see that the malicious use of AI poses significant risks to social and digital platforms. By creating fake posts and fake engagement, threat actors can manipulate public opinion and spread disinformation. This not only undermines the integrity of these platforms but also erodes public trust.

Example: The Pro-Russia Doppelgänger operation’s AI-generated posts often included misleading information about the Ukraine conflict. Despite their widespread dissemination, these posts failed to achieve meaningful engagement, highlighting the challenge of influencing authentic audiences.

Our Role as Practitioners

Within an organization’s digital marketing ecosystem, project managers and digital marketing stakeholders play a crucial role in safeguarding against the malicious use of AI. When we understand the tactics threat actors use, we can recognize them in the wild and implement effective countermeasures to protect our digital ecosystems.

Practical Steps to Protect Against Threat Actors:

  • MONITOR, MONITOR, MONITOR: Regularly monitor your organization’s digital platforms for signs of AI-generated content and fake engagement. Investigate any spikes in engagement and confirm they come from that great piece of content your team cross-promoted, not from a threat actor’s agenda (see the sketch after this list for one simple way to flag suspicious spikes).
  • STAY IN COMMUNICATION & UP TO DATE: Collaborate with AI specialists, cybersecurity experts, and other stakeholders in your organization to share threat intelligence and develop robust defenses. It’s an environment that is changing every day!
  • ORGANIZATIONAL AWARENESS: Educate all members of your organization about the potential risks of AI and how it could be used maliciously against your organization. Then encourage your team to critically evaluate online content as it pertains to their jobs.
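As a concrete starting point for the monitoring step above, here is a minimal Python sketch of a simple engagement-spike check: compare each day’s engagement count against the mean and standard deviation of the preceding days and flag statistical outliers. The sample numbers, baseline window, and three-sigma threshold are assumptions for illustration; a real setup would pull metrics from your analytics platform and tune the threshold to your normal traffic.

```python
from statistics import mean, stdev

# Illustrative daily engagement counts (likes + comments + shares).
# In practice, pull these from your analytics or social platform export.
daily_engagement = [120, 135, 128, 140, 132, 125, 131, 630, 129, 138]

BASELINE_DAYS = 7    # days used to establish "normal"; an assumption
SIGMA_THRESHOLD = 3  # flag anything more than 3 standard deviations above baseline

for day in range(BASELINE_DAYS, len(daily_engagement)):
    baseline = daily_engagement[day - BASELINE_DAYS:day]
    mu, sigma = mean(baseline), stdev(baseline)
    today = daily_engagement[day]
    if sigma > 0 and (today - mu) / sigma > SIGMA_THRESHOLD:
        print(f"Day {day}: {today} engagements vs. baseline of about {mu:.0f} "
              f"(+{(today - mu) / sigma:.1f} sigma) -- investigate before celebrating!")
```

A flagged spike isn’t proof of an attack, of course; it’s simply a prompt to check whether the engagement traces back to real campaigns and real people.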

Conclusion

Like any wonderful thing, AI brings the bad along with the good. Its potential for misuse shows that we, as digital practitioners, must always be vigilant and proactive in our management of social platforms and digital spaces. By understanding the tactics used by threat actors and implementing robust organizational safeguards, practitioners can protect their organizations’ digital ecosystems and help ensure AI remains a force for good🦸🏻‍♀️.

Ready to safeguard your digital ecosystem? Contact TPS to get started!

References

  1. OpenAI. “Disrupting Deceptive Uses of AI by Covert Influence Operations.” OpenAI.
  2. The Washington Post. “OpenAI Disinfo Influence Operations in China and Russia.” The Washington Post.
  3. NPR. “Russia Propaganda Deepfakes Sham Websites Social Media Ukraine.” NPR.
  4. OpenAI. Threat Report (PDF).