
Artificial Intelligence and policymakers’ accountability and trust-building issues to be resolved

Artificial intelligence technology in law enforcement also has been questioned

A report authored by Adriana Bora, AI policy researcher and project manager at The Future Society, and David Alexandru Timis, outgoing curator at Brussels Hub, explores how to resolve accountability and trust-building issues with artificial intelligence (AI) technology.

Bora and Timis note there is “a need for sound mechanisms that will generate a comprehensive and collectively shared understanding of AI’s development and deployment cycle.” As a result, the two add, this governance “needs to be designed under continuous dialogue utilising multi-stakeholder and interdisciplinary methodologies and skills”.

The authors note that both sides need to speak the same language: while artificial intelligence creators have the information and understanding, the same does not extend to regulators.

“There are a limited number of policy experts who truly understand the full cycle of AI technology,” the authors write. “On the other hand, the technology providers lack clarity, and at times interest, in shaping AI policy with integrity by implementing ethics in their technological designs”.

Examples of unethical practices in artificial intelligence, where inherent bias is built into systems, are legion. MIT, in July, apologised for, and took offline, a dataset which trained artificial intelligence models with misogynistic and racist tendencies. Microsoft and Google have also admitted to errors with MSN News and YouTube respectively.

In law enforcement, artificial intelligence technology has also been questioned. An open letter was signed in June by more than 1,000 researchers, academics and experts questioning an upcoming paper which claimed to be able to predict criminality based on automated facial recognition. Separately, in the same month, the chief of Detroit Police admitted the force’s AI-powered facial recognition did not work the vast majority of the time.

Google has been under fire of late, adding to the negative publicity, with the firing last week of Margaret Mitchell, who co-led the company’s ethical artificial intelligence team. Mitchell confirmed her dismissal on Twitter. In a statement to Reuters, Google said the firing followed an investigation which found Mitchell had moved electronic files outside of the company.

Google had fired Timnit Gebru, another leading figure in ethical artificial intelligence development, in December; Gebru claimed she was dismissed over an unpublished paper and an email critical of the company’s practices. Mitchell had previously written an open letter detailing her ‘concern’ over the firing. As per an Axios report, the company made changes to how it handles issues around research, diversity and employee exits following Gebru’s dismissal. As this publication reported, Gebru’s departure prompted other employees to leave, including software engineer Vinesh Kannan and engineering director David Baker.

From the technology providers’ perspective, Bora and Timis emphasised the need for ‘ethics literacy’ and a ‘commitment to multidisciplinary research’.

“Through their training and during their careers, the technical teams behind AI developments are not methodically educated about the complexity of human social systems, how their products could negatively impact society, and how to embed ethics in their designs”.

“The process of understanding and acknowledging the social and cultural context in which AI technologies are deployed, sometimes with high stakes for humanity, requires patience and time,” Bora and Timis added. “With increased investments in artificial intelligence, technology companies are encouraged to identify the ethical consideration relevant to their products and transparently implement solutions before deploying them”.

This would help avert hasty withdrawals and profuse apologies when models behave unethically, yet the researchers also noted how policymakers need to step up.

“It is only by familiarising themselves with artificial intelligence and its potential benefits and risks that policymakers can draft sensible regulation that balances the development of AI within legal and ethical boundaries while leveraging its tremendous potential”. “Knowledge building is critical both for developing smarter regulations when it comes to artificial intelligence, for enabling policymakers to engage in dialogue with technology companies on an equal footing, and together set a framework of ethics and norms in which AI can innovate safely”.

With regard to solving algorithmic bias, innovation is taking place. As this publication reported in November, the Centre for Data Ethics and Innovation (CDEI) in the UK has created a ‘roadmap’ to tackle the issue. The CDEI report focused on policing, recruitment, financial services, and local government as the four sectors where algorithmic bias poses the biggest risk.



Khushbu Soni - https://www.cionews.co.in
Chief Editor - CIO News | Founder & CEO - Mercadeo

