OpenAI, a research and development company in which Elon Musk, the head of electric car giant Tesla and the world's richest man, holds a stake, said its latest chatbot, ChatGPT, attracted millions of users in less than a week.
However, despite billing the bot as capable of conversing with humans, OpenAI warned that its answers may sometimes be "problematic" and may even exhibit "biased" behavior. The company said it wants to "gather user feedback to help us improve the work of this system".
ChatGPT is the latest in a series of artificial intelligence systems the company calls GPT, an acronym for Generative Pre-trained Transformer. The system was fine-tuned from an initial version through conversations with human trainers.
Although Musk is no longer a member of OpenAI's board of directors, he said on Twitter that the system had also learned by accessing Twitter data, and that this access had since been suspended.
The chatbot has impressed many trial users. OpenAI CEO Sam Altman revealed the level of interest in the AI chatbot in a tweet: in less than a week, millions of people were already using it.
The chat format allows the AI bot to answer follow-up questions, admit mistakes, challenge incorrect premises and reject inappropriate requests, the developer said.
Mike Pearl, a reporter for the tech news site Mashable who tried the bot, reported that it was difficult to provoke it into saying anything offensive. Based on his tests, Pearl said the chatbot has a very sophisticated system for avoiding taboo subjects.
However, OpenAI warned that ChatGPT would sometimes write "plausible-sounding but incorrect or nonsensical answers". The company also said that the caution built into the bot's training sometimes leads it to decline questions it could have answered correctly.
A BBC reporter conducted a brief interview with ChatGPT, which proved a careful interviewee, expressing itself clearly and precisely in English.
So does the chatbot think artificial intelligence will take over the jobs of human writers? The answer was "no". The bot argued that "artificial intelligence systems like me can help writers by providing suggestions and ideas, but in the final analysis it is human writers who create the work".
Asked what impact AI systems like itself would have on society, it answered that this was "difficult to predict"; asked whether it had been trained on Twitter data, it said it did not know.
When the BBC asked about HAL 9000, the fictional malevolent artificial intelligence from the film 2001: A Space Odyssey, the chatbot seemed bewildered and did not know how to respond.