
How AI could manipulate voters and undermine elections, threatening democracy


It’s common knowledge that technology played a role in swaying voters in the 2016 and 2020 elections.

Adding another layer of complication to the upcoming U.S. elections, artificial intelligence is likely to play an even larger role.

While AI is already used in countless ways across society, there are growing concerns that generative AI could be used this election season to manipulate voters and undermine the elections.


Illustration of generative AI   (Kurt “CyberGuy” Knutsson)

What is generative AI?

Generative AI is artificial intelligence capable of generating photos, written text and other data. It is built on models that learn from and process large amounts of raw data, then produce new content in response to user prompts.
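To make that concrete, here is a minimal sketch of how a generative text model is typically prompted in code. It uses the Hugging Face transformers library and the small, publicly available GPT-2 model purely as an illustration; the model choice and the prompt are assumptions for this example, not something drawn from this article.

# Minimal sketch: prompting a generative text model (illustrative only).
# Assumes the Hugging Face "transformers" package is installed; the model
# name ("gpt2") and the prompt below are example assumptions.
from transformers import pipeline

# Load a small, publicly available text-generation model.
generator = pipeline("text-generation", model="gpt2")

# The model continues whatever prompt it is given, which is why large
# volumes of plausible-sounding text can be produced quickly and cheaply.
prompt = "Here is what voters should know about the upcoming election:"
result = generator(prompt, max_new_tokens=50, num_return_sequences=1)

print(result[0]["generated_text"])

The same prompt-and-generate pattern applies to image, audio and video models, which is what makes AI-generated election content so easy to mass-produce.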


How can generative AI be misused in this year’s election?

For every candidate using AI as a cost-saving measure, there are others who can put it to more malicious use. While AI can be used to identify and remove ineligible voters from registries and to match signatures, it may end up suppressing voters by removing people who are actually eligible, whether deliberately or by mistake.

Chatbots and algorithms can be used to feed voters incorrect information, which can sway them against certain candidates. In the worst-case scenario, AI can amplify hot-button issues and potentially stir up violence.


A hand putting in a ballot   (Kurt “CyberGuy” Knutsson)


How tech and AI companies are failing to protect election integrity

Tech companies aren’t investing enough in election integrity initiatives, and AI companies lack the connections and funding to manage the risks of how their tools are used in elections. That means there is less and less human oversight of what AI generates and of how that AI-generated information is used.

The American Constitution itself could come into direct conflict with AI this election season: free speech is part of the very fabric of American ideals, yet curbing misinformation is crucial to ensuring a fair election.

Not only is the classic mud-slinging between candidates likely, but other countries, such as China, Iran and Russia, have recently been caught trying to use AI-created content to manipulate U.S. voters.



Voters lined up to vote at the polls   (Kurt “CyberGuy” Knutsson)


Ways to prevent misuse of AI

Social media has undoubtedly changed the way election campaigns are run. Various platforms have their own processes in place to deal with election information and misinformation. YouTube has changed its policy and states: “We will stop removing content that advances false claims that widespread fraud, errors, or glitches occurred in the 2020 and other past US Presidential elections.”

YouTube’s parent company, Alphabet, requires election advertisers to prominently disclose when their ads include realistic synthetic content that has been digitally altered or generated, including by AI tools. Over the coming months, YouTube will also require creators to disclose when they have created realistic altered or synthetic content and will display a label that indicates to people that the content they’re watching is synthetic. 


Meta, which owns Facebook, Instagram and Threads, will put labels on images and ads made with AI to help people tell what is real from what is not, and to keep false or harmful information from spreading, especially during elections.

Several states have passed laws regulating the use of political deepfakes, including California, Michigan, Minnesota, Texas and Washington.


Kurt’s key takeaways

While there will always be potential for AI to be misused in any facet of society, it is most alarming when that misuse threatens our democracy. If pundits and voters alike stay aware of the potential for misuse, it may actually encourage more critical thinking, with voters examining candidates, issues and information with a more skeptical eye. That can make people more apt to do their own research rather than simply absorb what they are “fed” online or offline.

And since America’s election system isn’t centralized, with votes managed at the local level, it will be harder for AI to be misused at scale. At the end of the day, your vote will still matter.


What are your biggest concerns regarding the use of AI during this year’s election? Do you think you’ll see or feel the impact? Let us know by writing us at Cyberguy.com/Contact

For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter.

Ask Kurt a question, or let us know what stories you’d like us to cover


Copyright 2024 CyberGuy.com. All rights reserved.


