A report authored by Adriana Bora, AI policy researcher and project manager at The Future Society, and David Alexandru Timis, outgoing curator at Brussels Hub, explores how to resolve accountability and trust-building issues with artificial intelligence (AI) technology.
Bora and Timis note there is “a need for sound mechanisms that will generate a comprehensive and collectively shared understanding of AI’s development and deployment cycle.” As a result, the two add, this governance “needs to be designed under continuous dialogue utilising multi-stakeholder and interdisciplinary methodologies and skills”.
The authors note that both sides need to speak the same language. Yet while artificial intelligence creators have the information and understanding, the same does not extend to regulators.
“There are a limited number of policy experts who truly understand the full cycle of AI technology,” the authors write. “On the other hand, the technology providers lack clarity, and at times interest, in shaping AI policy with integrity by implementing ethics in their technological designs.”
Examples of unethical artificial intelligence practice, where inherent bias is built into systems, are legion. In July, MIT apologised for, and took offline, a dataset which trained artificial intelligence models with misogynistic and racist tendencies. Microsoft and Google have also admitted to errors with MSN News and YouTube respectively.
Artificial intelligence technology in law enforcement has also been questioned. In June, more than 1,000 researchers, academics and experts signed an open letter questioning an upcoming paper which claimed to be able to predict criminality based on automated facial recognition. Separately, in the same month, the chief of Detroit Police admitted its AI-powered facial recognition did not work the vast majority of the time.
Google has been under fire of late, with the firing last week of Margaret Mitchell, who co-led the company’s ethical artificial intelligence team, adding to the negative publicity. Mitchell confirmed her dismissal on Twitter. In a statement to Reuters, Google said the firing followed an investigation which found Mitchell had moved electronic files outside of the company.
In December, Google had fired Timnit Gebru, another leading figure in ethical artificial intelligence development, who claimed she was dismissed over an unpublished paper and for sending an email critical of the company’s practices. Mitchell had previously written an open letter detailing her ‘concern’ over the firing. According to an Axios report, the company made changes to how it handles issues around research, diversity and employee exits following Gebru’s dismissal. As this publication reported, Gebru’s departure prompted other employees to leave, including software engineer Vinesh Kannan and engineering director David Baker.
From the technology providers’ perspective, Bora and Timis emphasised the need for ‘ethics literacy’ and a ‘commitment to multidisciplinary research’.
“Through their training and during their careers, the technical teams behind AI developments are not methodically educated about the complexity of human social systems, how their products could negatively impact society, and how to embed ethics in their designs,” they argue.
“The process of understanding and acknowledging the social and cultural context in which AI technologies are deployed, sometimes with high stakes for humanity, requires patience and time,” Bora and Timis added. “With increased investments in artificial intelligence, technology companies are encouraged to identify the ethical consideration relevant to their products and transparently implement solutions before deploying them.”
This would help avoid hasty withdrawals and fulsome apologies when models behave unethically, yet the researchers also noted that policymakers need to step up.
“It is only by familiarising themselves with artificial intelligence and its potential benefits and risks that policymakers can draft sensible regulation that balances the development of AI within legal and ethical boundaries while leveraging its tremendous potential,” the authors write. “Knowledge building is critical both for developing smarter regulations when it comes to artificial intelligence and for enabling policymakers to engage in dialogue with technology companies on an equal footing, and together set a framework of ethics and norms in which AI can innovate safely.”
With regard to solving algorithmic bias, innovation is taking place. As this publication reported in November, the UK’s Centre for Data Ethics and Innovation (CDEI) has created a ‘roadmap’ to tackle the issue. The CDEI report focused on policing, recruitment, financial services, and local government as the four sectors where algorithmic bias poses the biggest risk.