Cracking Down on AI Misuse: How OpenAI Disrupted AI-Assisted Political Fraud

by iStudiosMedia Marketing Agency | Jun 4, 2024 | Blog

With the growing prevalence of AI in modern technology, we’ve seen a variety of improvements in our everyday lives; unfortunately, it can also mean new online threats. Over the last three months, OpenAI reportedly carried out a crackdown on five covert influence operations (IO) that were using its services. These anonymous operations used OpenAI’s programs to mass-produce highly political content, aiming to interfere with political outcomes in a number of countries, including the United States.

With industries rapidly shifting to accommodate advances in AI technology, what does this mean for the future of online security? And how can companies like OpenAI reduce the risk of harmful content generation?

The OpenAI Crackdown Operation


According to OpenAI’s report, the company identified and terminated the accounts of five IO groups that were using its software. Like many legitimate businesses over the past few years, these groups used AI programs to automate a variety of mundane tasks, such as debugging and translation; unlike legitimate businesses, however, they did so in order to create and spread inflammatory political content on social media sites like Instagram, Facebook, and X. These tasks included:

“…Generating short comments and longer articles in a range of languages, making up names and bios for social media accounts, conducting open-source research, debugging simple code, and translating and proofreading texts.”
– OpenAI | May 30th, 2024

OpenAI's Bad Actors

The IO groups whose accounts were terminated focused primarily on mass-generating content and faking engagement with their own posts. These groups included the following:

  • “Bad Grammar”: A Russian IO that used AI to debug its bot program, which wrote short political comments in English and Russian on Telegram Messenger.
  • Doppelganger: Another Russian IO; its operations included posting AI-generated political comments on X and 9GAG, as well as translating news articles that were posted on websites linked to the operation.
  • Spamouflage: A Chinese IO that used AI to generate text in various languages (including English, Chinese, Japanese, and Korean) and to research social media trends across various sites.
  • IUVM: An Iranian IO that generated and translated various resources for its website, including long-form articles and website tags.
  • “Zero Zeno”: An IO run by STOIC, a commercial company based in Israel, which used OpenAI’s models to generate articles and comments for a number of associated websites and social media platforms.

More information on OpenAI’s counter-IO operation, as well as further details regarding its security measures, can be found in OpenAI’s blog post and full report.

The Ethics of AI-Generated Content


The creation of highly advanced, accessible AI programs is one of the most revolutionary technological developments of the 21st century. As was the case with the internet and the smartphone, AI has the capacity to both help and harm our everyday lives; simply put, these programs can be used by virtually anyone, for virtually any purpose.

AI-generated content, including news articles, artwork, blog posts, and even entire websites, is becoming more and more commonplace. As AI-generated content becomes harder to distinguish from manually created content, all kinds of people are becoming more comfortable generating and spreading it for their own purposes, for better or for worse. Although models like ChatGPT have some safeguards in place to prevent the generation of inflammatory or inappropriate content, users are rapidly learning how to circumvent those safeguards with just a few extra words at the end of a prompt.

Because modern AI programs are still relatively new, companies like OpenAI are constantly adjusting their models to mitigate the risk of harmful content. However, until those measures are perfected and properly implemented, the responsibility to keep AI-generated content safe, legal, and ethical ultimately falls to the users.

The Big Question: Is AI Safe to Use?


As of now, yes: AI programs such as ChatGPT are generally safe to use. However, using these programs to deceive or harm others, for example through covert propaganda or political fraud, is still illegal and can be penalized as such.

At the end of the day, AI is merely a tool for people to use however they wish. As society shifts to accommodate these new technologies, it’s important for us as viewers and consumers to use AI safely and responsibly.

iStudios Media: Why It Matters to Us

At iStudios Media, we are firm advocates of safe, ethical, and legal usage of artificial intelligence. We are a Bay Area digital marketing firm that uses AI to assist us in content creation, outreach, and optimization; however, we also know the importance of the human element when it comes to building organic and meaningful consumer relationships.

Click here to visit our website and see what we have to offer.
