OpenAI to launch anti-disinformation tools for 2024 elections


OpenAI's ChatGPT is one of the most powerful generative AI tools available to the public. Photo: Kirill KUDRYAVTSEV / AFP/File Source: AFP

Ahead of a string of elections this year in countries that are home to half the world's population, ChatGPT creator OpenAI has announced it will introduce tools to combat disinformation.

The explosive success of its text generator ChatGPT sparked a global artificial intelligence revolution, but it also raised concerns that such tools could flood the internet with misinformation and mislead voters.

OpenAI said Monday it will not allow its technology, including ChatGPT and the image generator DALL-E 3, to be used for political campaigns. Elections are due this year in countries including the United States, India and Britain.

In a blog post, OpenAI said: "We want to make sure that our technology is not used in a way that could undermine" the democratic process.

"We're still working to understand how effective our tools might be for personalized persuasion," it continued.

"Until we know more, we don't allow people to build applications for political campaigning and lobbying."

The World Economic Forum warned in a report last week that AI-driven misinformation and disinformation are the biggest short-term global risks and could destabilize newly elected governments in key countries.

Election disinformation has been a concern for years, but experts say the threat has grown now that powerful AI text and image generators are widely available, especially when people struggle to tell whether the content they see is fake or manipulated.


OpenAI said Monday it was developing tools to attach reliable attribution to text generated by ChatGPT and to let users detect whether an image was created with DALL-E 3.

"Early this year, we will implement the Coalition for Content Provenance and Authenticity's digital credentials -- an approach that encodes details about the content's provenance using cryptography," the company said.


The coalition, known as C2PA, aims to improve methods for identifying and tracing digital content. Its members include Microsoft, Sony, Adobe and the Japanese camera firms Nikon and Canon.
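In broad terms, a credential of this kind binds metadata about how a piece of content was produced to a cryptographic signature over the content itself, so that later edits invalidate the seal. The short sketch below is purely illustrative and does not follow the actual C2PA manifest format: the field names, and the use of an HMAC with a shared demo key rather than the certificate-based signatures real provenance systems use, are simplifying assumptions.

    import hashlib
    import hmac
    import json

    SIGNING_KEY = b"demo-secret-key"  # assumption: real credentials use certificate-based signatures

    def attach_provenance(content: bytes, generator: str) -> dict:
        # Record how the content was made and sign a hash of it (illustrative only).
        record = {
            "generator": generator,  # e.g. "DALL-E 3"
            "content_sha256": hashlib.sha256(content).hexdigest(),
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return record

    def verify_provenance(content: bytes, record: dict) -> bool:
        # Recompute the hash and signature; any change to the content breaks the credential.
        claimed = {k: v for k, v in record.items() if k != "signature"}
        if hashlib.sha256(content).hexdigest() != claimed["content_sha256"]:
            return False
        payload = json.dumps(claimed, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, record["signature"])

    if __name__ == "__main__":
        image_bytes = b"...generated image bytes..."
        credential = attach_provenance(image_bytes, "DALL-E 3")
        print(verify_provenance(image_bytes, credential))                # True
        print(verify_provenance(image_bytes + b"tampered", credential))  # False

The point of the sketch is that any modification to the content after signing makes verification fail, which is the property provenance credentials rely on.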

Guardrails


According to OpenAI, ChatGPT will direct users to authoritative sources when they ask procedural questions about US elections, such as where to vote.

"Lessons from this work will inform our approach in other countries and regions," the company said.

It also said DALL-E 3 includes "guardrails" that prevent users from generating images of real people, including candidates.

OpenAI's announcement follows measures unveiled last year by US internet giants Google and Facebook parent Meta to curb election interference, particularly involving the use of AI.

AFP has previously debunked deepfakes -- doctored videos -- purporting to show US President Joe Biden proposing a military draft and former secretary of state Hillary Clinton endorsing Florida Governor Ron DeSantis for president.

Manipulated audio and video of politicians circulated on social media ahead of this month's presidential election in Taiwan, according to AFP Fact Check.

Much of that content is low quality, and it is not always immediately clear whether it was made with AI programs, but experts say misinformation is fueling a crisis of trust in political institutions.