How to Elucidate Board Mystery in Age of AI and Threats?

Generative AI is a potential fulcrum for significant economic and societal change

This is an exclusive interview conducted by Santosh Vaswani, Editor at CIO News, with Kapil Bareja, Cyber & Strategy Risk Advisory, Deloitte.

The latest developments in Artificial Intelligence (AI) promise to have a profound impact on business and society. They will test companies’ ability to address the business opportunities and risks associated with yet another transformative technology.

We’re now in the earliest stages of another profound change, the Age of AI. It’s analogous to those uncertain times before speed limits and seat belts. AI is changing so quickly that it isn’t clear exactly what will happen next. We’re facing big questions raised by the way current technology works, the ways people will use it for ill intent, and the ways AI will change us as a society and as individuals.

One thing that is clear from everything written so far about the risks of AI, and a lot has been written, is that no one has all the answers. It is equally clear that the future of AI is neither as grim as some fear nor as rosy as others hope. The risks are real, but boards can approach the mystery with optimism.

  • Many of the problems caused by AI have historical precedent. For example, AI will have a big impact on education, but so did handheld calculators a few decades ago and, more recently, computers in the classroom. We can learn from what has worked in the past.
  • Many of the problems caused by AI can also be managed with its help.
  • We’ll need to adapt old laws and adopt new ones—just as existing laws against fraud had to be tailored to the online world.

Boards can solve this mystery for their companies by maximising AI’s business benefits while responsibly balancing the welfare of the firm’s multiple stakeholders and society at large. Five practices can help:

  1. Engage with the technology for empirical understanding. For decades, companies have used AI to perform routine manual tasks and assist human decision-making. More recently, AI has come closer to mimicking human thinking itself, with tools predicting behaviour and now, with “generative AI” tools such as ChatGPT, creating written, oral, and visual content. For boards to provide effective oversight and make key decisions regarding their company’s use of AI, they need to understand it. Management should provide boards with the opportunity, in a controlled environment, not only to see the technology but also to engage with it firsthand and to do so on an ongoing basis as the tools evolve.
  2. Ask management to provide an overview—and then regular updates—on how AI intersects with the company’s business. Boards will need to develop a firm grasp of how AI affects the company’s activities in the marketplace (the products and services a company sells or buys), the workplace (operations and employees), and the public space (including government relations, communications, and corporate social responsibility). As they focus on AI, however, some boards may benefit from a more general update on how technology is transforming the business; according to a recent survey of 600 US C-suite executives, only 50% say their boards have a good understanding of how digital transformation is affecting their business.
  3. Integrate AI into Enterprise Risk Management (ERM). Boards should ensure that AI is considered as part of the company’s ERM programme. This should include not just generative AI but also predictive AI and other forms of AI that raise many of the same legal, ethical, and reputational issues. It is also critical to consider the multiple, interconnected areas of risk associated with AI. For example, AI may serve as a catalyst for boards to view data security, data privacy, intellectual property, and antitrust not in separate silos but collectively under the heading of “data protection” or, perhaps even more accurately, “knowledge protection.”
  4. Consider how to address AI at the board level. With the SEC imposing obligations on boards with respect to cybersecurity, boards will be taking a fresh look at their role with respect to data security. That should trigger a broader discussion of how they will address AI. Boards may want to begin by asking management to focus on AI as a discrete topic meriting “a deep dive” at a board meeting. Soon thereafter, management should begin incorporating AI into existing board reports and processes. AI might, however, prompt boards to consider broader changes in how they are structured and spend their time.
  5. Participate in the development of public policy. The government is not writing on a blank slate here, with numerous laws on the books relating to data security and privacy, intellectual property, and consumer protection. Nonetheless, companies will be addressing AI against fast-moving, and sometimes conflicting, public policy developments worldwide.

Generative AI is a potential fulcrum for significant economic and societal change. As Mo Gawdat wrote in Scary Smart: The Future of Artificial Intelligence and How You Can Save the World, one day machines will likely be smarter in many ways—more knowledgeable, more able to assess risks and opportunities, more analytical, and more creative—than any individual on the planet. That would put humankind in a new position in the pecking order, potentially changing our perspective on our role in the world. That moment, if it comes, is still years away. In the meantime, boards should approach AI with a steady hand, an open mind, clear values, and a culture of continuous human learning.

What’s next?

There are more reasons than not to be optimistic that we can manage the risks of AI while maximising its benefits. But we need to move fast.

Governments need to build up expertise in artificial intelligence so they can make informed laws and regulations that respond to this new technology. They’ll need to grapple with misinformation and deepfakes, security threats, changes to the job market, and the impact on education. To cite just one example: The law needs to be clear about which uses of deepfakes are legal and about how deepfakes should be labelled so everyone understands when something they’re seeing or hearing is not genuine.

Finally, boards should follow developments in AI as closely as possible. It’s the most transformative innovation any of us will see in our lifetimes, and a healthy public debate will depend on everyone being knowledgeable about the technology, its benefits, and its risks. The benefits will be massive, and the best reason to believe that we can manage the risks is that we have done it before.

About us:

CIO News, a property of Mercadeo, produces award-winning content and resources for IT leaders across industries through print articles and recorded video interviews on topics in the technology sector such as Digital Transformation, Artificial Intelligence (AI), Machine Learning (ML), Cloud, Robotics, Cyber-security, Data, Analytics, SOC, and SASE, among other technology topics.