AI in Politics: The fine line between reality and fiction

By: Diana Bello Aristizábal


One month before the elections that will decide, among other things, who will be the next president of the United States, so-called deepfakes continue to proliferate. The term refers to images, videos, or audio clips edited or generated with artificial intelligence (AI) tools to mimic a person’s appearance and voice. In politics, this technology can be devastating because it influences voter behavior.

 

This type of content sometimes borders on the absurd while also sparking fear, as was the case with the images that circulated in August showing candidates Kamala Harris and Donald Trump together as a couple, or world leaders, including Joe Biden, committing crimes.

 

In other instances, AI can generate an image or audio clip of a celebrity who appears to support a candidate, as happened with Taylor Swift: a fake image of her endorsing Donald Trump appeared in a post published by the candidate himself on his Truth Social account.

 

“When we no longer know how to differentiate between reality and fiction, we are talking about a complete and total collapse of our society and democracy. This is a dangerous moment, and we cannot trust anything,” says Richard Tapia, professor of political science and international relations at Miami Dade College.

 

The power is in our judgment.

Because we are constantly exposed to malicious and manipulative content online, experts say we can follow two paths: developing critical media literacy skills and regulating artificial intelligence.

 

“What people can do is educate themselves. Who is posting what you are seeing or hearing? Do you really trust the source? Does it match reality? And is it something you want to believe?” These, says Jovianna Gonzalez, a cybersecurity and digital forensics expert, are the questions we should all ask ourselves.

 

According to the founder and CEO of Digital Forensics Now, those who create misleading content use emotional manipulation to achieve their goal. “They use psychology in order to persuade people to do something. This is the reason why it’s very important to be mindful of what is happening inside of us when we are consuming content, receiving a text message or an email, and researching and verifying thoroughly before trusting and taking action,” she says.

 

But that is only step one, considering that AI tools are becoming increasingly sophisticated and that those who use them for dark purposes seek to make their deepfakes harder for the public to identify. Step two, therefore, is to become expert at reading the details.

 

This means watching for a lag in the voice, too many pauses, or audio that doesn’t match the movement of the lips; unnatural eye movements; or backgrounds in which something blurred flashes by for a split second. These signs are all red flags, according to the cybersecurity expert. “Perceiving them requires paying attention, but nothing guarantees that you will always be able to tell whether something is legitimate or not,” Gonzalez says.

 

Even so, the responsibility ultimately lies with software manufacturers, whose challenge is to develop products capable of detecting even the most sophisticated AI systems by default. The bad news, Gonzalez predicts, is that this coming election may be the last in which deepfakes are relatively easy to detect.

 

“Even with legislation like California’s to impose regulations on AI (at press time, the bill had been vetoed), bad actors will always be one step ahead. Therefore, the pressure should be put on the manufacturers.”

 

Voters, watch out!

In politics, bad actors polluting the Internet with deepfakes can have a devastating effect on democracy, sowing doubt among voters and providing cover for unprincipled politicians.

 

“Many politicians can use the deepfake argument in their favor, and this strategy is going to be used more and more. They will claim that comments or statements circulating are not their own, even when that’s not the case, just to protect themselves,” warns Tapia.

 

On the other side is content that really is fake and was made with artificial intelligence, which can leave an indelible mark. For example, if a candidate appears in an image hugging a person with a bad reputation, it will tarnish the candidate’s career even after corrective measures are attempted.

 

“They will be found guilty by association regardless of how real or fake the image is, and the damage is very difficult to control,” says Tapia. Every time AI-generated content is posted, the damage it can cause becomes irreversible: even if the content is later proven fake, there will always be a group of people who believe it to be real and act on that belief, potentially affecting voting intentions.

 

Artificial intelligence is also being used to build psychometric profiles of individual voters, so that candidates can target campaign messages to each person rather than to groups. “It’s a very powerful tool that can be used to better understand what the masses want, or to manipulate and confuse them,” says Tapia.

 

The issue is complex because there is a fine line between market segmentation (a practice similar to the profiling described above) and unethical manipulation. According to Tapia, the solution lies with Congress, which should implement controls on artificial intelligence.

 

“Nowadays, Congress is already talking about putting in place certain safeguards and barriers when, for example, these tools threaten national security or in cases of libel and slander. I think we are going to start seeing legislation in all states as the government is already aware of the danger this technology entails.”

 

But the goal is not to demonize artificial intelligence or to curtail freedom of expression and of the press to the point that AI cannot be used at all, given its many benefits. Rather, the challenge for the future will be to find a way to preserve democracy and “flee from the authoritarians, demagogues and Machiavellians who want to manipulate people,” as Tapia concludes.

 
