Artificial intelligence is making its way into just about every facet of life, and now there is concern that it could reach the 2024 elections as well.
Members of the Federal Election Commission are set to meet to discuss whether the panel needs to create rules that would protect voters in the upcoming election from deepfakes.
On May 16, the group Public Citizen filed a petition asking the FEC to enact a full ban on political deepfakes. To this point, the FEC has done little to address AI or the ways political campaigns can use content generated with it.
Republicans who sit on the FEC have said the agency can regulate AI-generated content under existing rules. Others are not so sure.
In 2024, many important elections will be held throughout the world, with nearly 1 billion people expected to cast a vote in one of them.
One of the major concerns heading into these elections is the divisive, hateful and sometimes false content posted on social media networks. In the past, all of this content was created directly by humans. With generative AI, though, it has become very easy to create a huge amount of content and distribute it globally.
Some voters' rights groups are concerned that many people could soon encounter a wide variety of AI-generated content: memes or ads that are both hyper-targeted and highly tested; video and audio clips that are deepfakes of political candidates; and even bot accounts on social media sites that can discuss, comment and, most concerningly, spew propaganda as convincingly as humans can.
Generative AI content is perhaps most concerning to voting rights groups because it is cheap to produce. That low cost could allow political operators who previously could not afford a large-scale influence operation to mount one in 2024.
What's more, voting rights groups are concerned because most people do not yet fully understand what generative AI can do. In other words, many people find it very hard to distinguish real content from content generated by artificial intelligence.
There also aren't many good regulations in place to oversee it. For instance, no law requires a political organization to label video, images or text generated by AI.
At the same time, the social media companies that played such an outsized role in influence campaigns during recent elections are cutting the teams dedicated to fighting misinformation.
Even the companies that created the AI technology haven’t yet established concrete guidelines on how their tools should and shouldn’t be used for all political content.
That's why Public Citizen wants the FEC to step in sooner rather than later. We'll see whether the commission decides to act.