Europe struck a provisional agreement on Friday on major European Union rules controlling the use of artificial intelligence, including governments’ use of AI in biometric surveillance and how to regulate AI systems like ChatGPT.
With the political agreement, the EU moves toward becoming the first major world power to enact laws governing AI. Friday’s deal between EU countries and European Parliament members came after nearly 15 hours of negotiations that followed an almost 24-hour debate the previous day.
The two sides are set to hash out details in the coming days, which could change the shape of the final legislation.
“Europe has positioned itself as a pioneer, understanding the importance of its role as a global standard setter. This is, yes, I believe, a historical day,” European Commissioner Thierry Breton stated.
The agreement requires foundation models such as ChatGPT and general-purpose AI systems (GPAI) to meet transparency requirements before they are released to the public. These include creating technical documentation, adhering to EU copyright legislation, and publishing detailed summaries of training content.
High-impact foundation models with systemic risk must conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the European Commission, ensure cybersecurity, and report on their energy efficiency.
To comply with the new regulation, GPAIs with systemic risk may rely on codes of practice.
Governments can use real-time biometric surveillance in public spaces only to identify victims of certain crimes, to prevent a genuine, present, or foreseeable threat such as a terrorist attack, and to search for people suspected of the most serious crimes.
The agreement prohibits cognitive behavioral manipulation, untargeted scraping of facial images from the internet or CCTV footage, social scoring, and biometric categorization systems used to infer political, religious, or philosophical beliefs, sexual orientation, or race.
Consumers would have the right to file complaints and receive meaningful explanations, with fines ranging from 7.5 million euros ($8.1 million) or 1.5% of global turnover to 35 million euros or 7% of global turnover.
The business organization DigitalEurope criticized the measures as yet another burden for companies on top of other recent legislation.
“We have an agreement, but at what cost? We fully supported a risk-based approach based on the uses of AI, not the technology itself, but the last-minute attempt to regulate foundation models has turned this on its head,” said Cecilia Bonefeld-Dahl, the organization’s Director General.
European Digital Rights, a privacy rights organization, was equally negative.
“It’s difficult to be excited about a law that, for the first time in the EU, has taken steps to legalize live public facial recognition across the bloc,” said the organization’s senior policy advisor, Ella Jakubowska.
“While the Parliament worked tirelessly to limit the damage, the overall package on biometric surveillance and profiling is in jeopardy.”
The legislation is expected to enter into force early next year, once both sides formally ratify it, and to apply two years after that.
Governments around the world are seeking to balance the benefits of the technology, which can hold human-like conversations, answer questions, and write computer code, against the need to put safeguards in place.
Europe’s ambitious AI guidelines come as businesses like OpenAI, in which Microsoft (MSFT.O) is a shareholder, continue to find new applications for their technology, eliciting both praise and criticism. Alphabet (GOOGL.O), the parent company of Google, announced Gemini, a new AI model to compete with OpenAI, on Thursday.
The EU regulation could serve as a model for other nations, as an alternative to the US’ light-touch policy and China’s temporary measures.
About us:
CIO News, a Mercadeo publication, produces award-winning content and resources for IT leaders across industries through print articles and recorded video interviews on technology topics such as Digital Transformation, Artificial Intelligence (AI), Machine Learning (ML), Cloud, Robotics, Cyber-security, Data, Analytics, SOC, and SASE.