A coalition of 20 technology companies, including Meta, said on Friday that they have pledged to work together to prevent deceptive artificial intelligence content from interfering in elections around the world this year.
The rapid development of generative artificial intelligence (AI), which can generate text, images, and video in seconds in response to prompts, has raised concerns that the new technology could be used to sway major elections this year, when more than half of the world’s population is set to vote.
Signatories to the tech accord, which was announced at the Munich Security Conference, include companies that are building generative AI models used to create content, such as OpenAI, Microsoft (MSFT.O), and Adobe.
Other signatories include social media platforms that will face the challenge of keeping harmful content off their sites, such as Meta Platforms (META.O), TikTok, and X, formerly known as Twitter.
The accord includes commitments to collaborate on developing tools for detecting misleading AI-generated images, video, and audio, to run public awareness campaigns educating voters about deceptive content, and to take action against such content on their platforms.
Watermarking or embedding metadata could be used to identify and verify the provenance of AI-generated content, the companies said.
The accord does not set a timetable for meeting the commitments or specify how each company will implement them.
“I think the utility of this (accord) is the breadth of the companies signing up to it,” said Nick Clegg, Meta Platforms’ president of global affairs.
“It’s all good and well if individual platforms develop new policies of detection, provenance, labeling, watermarking, and so on, but unless there is a wider commitment to do so in a shared interoperable way, we’re going to be stuck with a hodgepodge of different commitments,” he said.
Generative AI is already being used to influence politics and even to discourage people from voting.
In January, a robocall containing phony audio of US President Joe Biden was distributed to New Hampshire voters, asking them to stay home during the state’s presidential primary election.
Despite the popularity of text-generation tools such as OpenAI’s ChatGPT, the companies will focus on countering the harmful effects of AI-generated photos, videos, and audio, in part because people tend to be more skeptical of text, said Dana Rao, Adobe’s chief trust officer.
“There’s an emotional connection to audio, video and images,” he went on to say. “Your brain is wired to believe that kind of media.”