ChatGPT and the future (present) of human-machine interaction

Calling ChatGPT “a chatbot” is a wild understatement

This is an exclusive interview conducted by the Editor Team of CIO News with Sergio Negri, Chief Information Officer at Dubai Holding.

OpenAI, an AI research and deployment company whose mission is to “ensure that artificial general intelligence benefits all of humanity,” recently released the latest version of their chatbot, ChatGPT. Now, calling ChatGPT “a chatbot” is a wild understatement.

So what is ChatGPT? Let’s start with what it’s not: it’s not Google Search, it’s not Wikipedia, and it does not access the Internet on the spot, so you can’t ask it about anything too recent, or even something as simple as “What is the temperature in Dubai right now?” It has, however, been fed a massive amount of data, and its simple user interface and pleasant feedback make it a useful tool.

A full list of the features can be found here (https://beta.openai.com/examples). It includes everything from writing a story from a few hints to writing code, explaining why a piece of code isn’t working, answering generic questions, performing sentiment analysis on text, translating between languages, and so on.

How was ChatGPT created? It was built with supervised learning, refined by reinforcement learning from human feedback. Supervised learning means that it was trained on countless examples paired with their corresponding answers. But the researchers did not stop there: when presented with a new example, the AI was asked to produce several versions of the answer, and a human ranked them from best to worst. This feedback proved extremely valuable in fine-tuning the model.
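
To make that ranking step concrete, here is a minimal sketch, in JavaScript, of how one human best-to-worst ranking can be expanded into pairwise comparisons for training a reward model. This illustrates the general technique only; the function name and data shapes are hypothetical, not OpenAI’s actual code.

```javascript
// Hypothetical illustration: turn a best-to-worst human ranking of
// candidate answers into pairwise preference data. A reward model is
// then trained so that reward(preferred) > reward(rejected) for each pair.
function rankingToPairs(rankedAnswers) {
  const pairs = [];
  for (let i = 0; i < rankedAnswers.length; i++) {
    for (let j = i + 1; j < rankedAnswers.length; j++) {
      pairs.push({ preferred: rankedAnswers[i], rejected: rankedAnswers[j] });
    }
  }
  return pairs;
}

// Three ranked answers yield three comparisons: A>B, A>C, B>C.
console.log(rankingToPairs(["answer A", "answer B", "answer C"]));
```

The appeal of this scheme is that ranking several candidate answers is far cheaper for a human than writing a perfect answer from scratch, yet every ranking yields multiple training signals.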

So let’s look at what ChatGPT is good at, what it isn’t so good at, what dangers it poses, and how to proceed.

What is it good at?

It is impressive in the way it responds, and it definitely passes the Turing test (a test of a machine’s ability to exhibit intelligent behaviour equivalent to or indistinguishable from that of a human). The grammar and the syntax are perfect, and the structure of the dialogue is the same as that of a human being.

Give it a few hints and ask it to write a story, and it will leave you amazed. For example, given the prompt “Tell me a story similar to Les Misérables, involving an astronaut and a flying cat,” it is able to extract the essence of Les Misérables (never give up against all odds) and write a very convincing story with the two aforementioned characters, despite the fact that putting both in the same story is obviously unconventional.

It is also very good at explaining in layman’s terms what a piece of code does or why it isn’t working (which makes it a great aid for quick bug fixing). It is even surprisingly good at writing code from a text prompt (e.g., “Write the code in JavaScript to sum all odd numbers between 1 and 49”).
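
For a prompt like that, the code it returns is typically along these lines (a plausible reconstruction, not ChatGPT’s verbatim output):

```javascript
// Sum all odd numbers between 1 and 49 (inclusive).
let sum = 0;
for (let n = 1; n <= 49; n += 2) {
  sum += n;
}
console.log(sum); // 625: the sum of the first 25 odd numbers is 25 * 25
```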

What is it not so good at?

If you try asking, “Give me the list of the most venomous animals in the UAE,” it might give you a list that includes the spitting cobra. Now ask, “Is the spitting cobra present in the UAE?” and it will confidently answer no. How is this possible? Similarly, if you try to be more specific and ask, “Give me the list of the most venomous animals that are endemic to the UAE,” it will wrongly include the spiny-tailed lizard; yet if you then ask whether that lizard is venomous, it will answer no (which is correct).

So it seems the AI is eager to give fluent, comprehensive-sounding answers even when they are not completely correct, and you really need to corner it to get the right one.

Unless this is fixed, it will be almost useless as a fact-answering machine, and any decision taken on the basis of its answers will be risky, especially if it is ever trusted to respond in a crisis (e.g., “Tell me what to do in case of a cardiac arrest”).

What are the risks?

Beyond the lack of accuracy, there is a bigger threat: it can answer the questions posed by online tests. I recently tried it on the trial test of the CISM (Certified Information Security Manager), a renowned ISACA certification that indicates “expertise in information security governance, programme development and management, incident management, and risk management”, and something employers seek when hiring a cyber-security professional. ChatGPT was able to answer 9 out of 10 questions correctly, which is above the threshold to pass. Anybody could take the full-fledged test and have a fairly good chance of earning the certification. The same applies to any test currently administered online, including university exams.

Furthermore, while it has come a long way in avoiding bias (it often refuses to answer loaded questions, or answers them in a balanced way), it has not completely removed the bias introduced by its human-generated training data, as the following example demonstrates:

[Screenshot: an example of a biased ChatGPT answer]

Where do we go from here?

ChatGPT has been extremely successful: it took just 5 days to reach 1 million users. By comparison, it took Instagram 2.5 months, Facebook 10 months, Twitter 24 months, and Netflix 41 months.

Fortunately, the company behind it has embedded “like” and “dislike” buttons, so anybody unhappy with a specific answer can flag it and explain why. Is there potential for misuse of these buttons? Of course! A bot can be written to punish the AI for perfectly balanced answers and bend it towards whatever distorted vision of reality the bot’s author desires. Such a bot takes a few hours to write and could cause tremendous havoc; it would be like brainwashing the model. Unless humans filter that feedback before it is fed back into the model, these inputs can do serious damage, and if human beings are used as gatekeepers, the solution does not scale. The real question is not whether ChatGPT is ready for humanity, but whether humanity is ready not to abuse ChatGPT.

Finally, there is the non-trivial question of monetisation. A chat currently costs one-tenth of a cent in computing power, and OpenAI is subsidising this. If they don’t find a way to monetise it quickly, they will soon face financial issues (or they will have to throttle the service significantly).
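
As a rough back-of-the-envelope illustration of why that subsidy matters, consider the arithmetic below. Only the per-chat cost comes from the paragraph above; the daily chat volume is purely hypothetical.

```javascript
// Back-of-the-envelope: the per-chat cost (one-tenth of a cent, i.e. $0.001)
// comes from the article; the daily volume is a pure guess.
const chatsPerDay = 10_000_000;              // hypothetical daily volume
const dailyCostDollars = chatsPerDay / 1000; // $0.001 per chat
console.log(`~$${dailyCostDollars} per day`); // "~$10000 per day"
```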
