AI chatbots aren’t reliable for voting questions: Government officials

New York Attorney General Letitia James speaks during a press conference at the Office of the Attorney General in New York on February 16, 2024. 

Timothy A. Clary | AFP | Getty Images

With four days until the presidential election, U.S. government officials are cautioning against reliance on artificial intelligence chatbots for voting-related information.

In a consumer alert on Friday, the office of New York Attorney General Letitia James said it had tested “multiple AI-powered chatbots by posing sample questions about voting and found that they frequently provided inaccurate information in response.”

Election Day in the U.S. is Tuesday, and Republican nominee Donald Trump and Democratic nominee Vice President Kamala Harris are locked in a virtual dead heat.

“New Yorkers who rely on chatbots, rather than official government sources, to answer their questions about voting, risk being misinformed and could even lose their opportunity to vote due to the inaccurate information,” James’ office said.

It’s a major year for political campaigns worldwide, with elections taking place that affect upward of 4 billion people in more than 40 countries. The rise of AI-generated content has led to serious election-related misinformation concerns.

The number of deepfakes has increased 900% year over year, according to data from Clarity, a machine learning firm. Some included videos that were created or paid for by Russians seeking to disrupt the U.S. elections, U.S. intelligence officials say.

Lawmakers are particularly concerned about misinformation in the age of generative AI, which took off in late 2022 with the launch of OpenAI’s ChatGPT. Large language models are still new and routinely spit out inaccurate and unreliable information.

“Voters categorically should not look to AI chatbots for information about voting or the election — there are far too many concerns about accuracy and completeness,” Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, told CNBC. “Study after study has shown examples of AI chatbots hallucinating information about polling locations, accessibility of voting and permissible ways to cast your vote.”

In a July study, the Center for Democracy & Technology found that in response to 77 different election-related queries, more than one-third of answers generated by AI chatbots included incorrect information. The study tested chatbots from Mistral, Google, OpenAI, Anthropic and Meta.

“We agree with the NY Attorney General that voters should consult official channels to understand where, when, and how to vote,” an Anthropic spokesperson told CNBC. “For specific election and voting information, we direct users to authoritative sources as Claude is not trained frequently enough to provide real-time information about specific elections.”

OpenAI said in a recent blog post that, “Starting on November 5th, people who ask ChatGPT about election results will see a message encouraging them to check news sources like the Associated Press and Reuters, or their state or local election board for the most complete and up-to-date information.”

In a 54-page report published last month, OpenAI said that it’s disrupted “more than 20 operations and deceptive networks from around the world that attempted to use our models.” The threats ranged from AI-generated website articles to social media posts by fake accounts, the company wrote, though none of the election-related operations were able to attract “viral engagement.”

As of Nov. 1, Voting Rights Lab has tracked 129 bills in 43 state legislatures containing provisions intended to regulate the potential for AI to produce election disinformation.

WATCH: Google: More than a quarter of new code is now AI-generated
