Contemporary AI systems are becoming human-competitive at general tasks, prompting a pointed question from some of technology's most prominent figures: should machines be allowed to flood our information channels with propaganda and untruth, and should we automate away all jobs, including the fulfilling ones?
A number of prominent figures in technology, including Elon Musk, have urged Artificial Intelligence labs to pause the development of systems that can match human intelligence.
In an open letter from the Future of Life Institute, signed by Musk, Apple co-founder Steve Wozniak, and 2020 presidential candidate Andrew Yang, Artificial Intelligence labs were urged to stop training models more powerful than GPT-4, the latest version of the large language model created by American startup OpenAI.
The letter reads, “Contemporary Artificial Intelligence systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful Artificial Intelligence systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
The letter continues, “Therefore, we call on all Artificial Intelligence labs to immediately pause for at least 6 months the training of Artificial Intelligence systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
According to the letter, Artificial Intelligence labs and independent experts should use the pause to develop and implement shared safety protocols for advanced Artificial Intelligence design and development. These protocols should ensure that systems are safe beyond a reasonable doubt. Artificial Intelligence research and development should focus on making today’s powerful systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
The letter also states that Artificial Intelligence developers must work with policymakers to develop robust Artificial Intelligence governance systems, including new and capable regulatory authorities, oversight and tracking of Artificial Intelligence systems, provenance and watermarking systems, an auditing and certification ecosystem, liability for AI-caused harm, public funding for technical Artificial Intelligence safety research, and well-resourced institutions.
The letter concludes: “Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.”
About us:
CIO News, a property of Mercadeo, produces award-winning content and resources for IT leaders across industries through print articles and recorded video interviews on topics in the technology sector such as Digital Transformation, Artificial Intelligence (AI), Machine Learning (ML), Cloud, Robotics, Cybersecurity, Data, Analytics, SOC, and SASE, among other technology topics.