ChatGPT chatbot "like talking to a real person" sparks heated discussion online; the papers it produces can't even be detected by anti-plagiarism software


(Central News Agency) The chatbot ChatGPT has been online for a week and has caused heated discussion on social media. The tech community has praised the new tool for solving complex programming problems, with a question-and-answer flow that feels like conversing with a real person; some media outlets, however, have issued warnings that the artificial intelligence model can already write papers that even well-known anti-plagiarism software cannot detect.

OpenAI, the research organization co-founded by billionaire Elon Musk, developed the new text-generation language model tool ChatGPT. It can answer all kinds of questions, from writing code to writing articles, and, more importantly, it expresses itself in writing more fluently and precisely than previous generations. Within a week of launch, its registered users exceeded one million, and users shared their entertaining exchanges with the chatbot online.

Some people working in artificial intelligence called it incredible, saying that conversing with the chatbot feels so natural it is like talking to a real person, and that its output needs only slight adjustment before it can be used with confidence. One user asked the chatbot to write a poem for US President Joe Biden, and it produced an inspiring piece of writing.


The reporter asked it to draft a personal profile in English and immediately got a basic outline in return, with prompts such as: "What is your past experience? What project are you working on now? How do you solve problems?" When the reporter asked it to write a background introduction for a location, it likewise produced smooth, detailed content.

The New York Times published an analysis on the 8th of the new technology's strengths, noting that ChatGPT was pre-trained on a very large amount of data and, with its creator's techniques, can already produce "coherent articles" because it can predict what good writing should look like.

One obvious benefit is that the tool makes research easier and can help with writing papers and articles.

However, the New York Times also raised the potential risks of the new technology, pointing out that some experts believe humans may not be able to train artificial intelligence systems to do what humans actually want; if we are not careful, such systems could end up competing and conflicting with humans.

The American online outlet Vox also issued a warning on its technology news site, mentioning that a few weeks earlier, Wharton business school professor Ethan Mollick had his MBA students test whether earlier generations of the GPT model could write essays on the topics covered in class.

The essays GPT generated were not perfect, Mollick said, relying too heavily on the passive voice, but "at least they were fluent." Not only that, the essays also passed the checks of a very popular piece of anti-plagiarism software.

Beyond Professor Mollick's experience, the report also cited Kris Jordan, a professor of computer science at the University of North Carolina: Jordan gave his final exam questions to GPT, and it scored far above the median achieved by real students. Long before ChatGPT went online, students had already been using earlier versions to write their homework without being caught cheating.

When reporting on and reflecting on the use of new technologies, a more comprehensive question should be asked, namely: "What kinds of things do humans want to keep doing ourselves, and what kinds of problems do we hope new technologies will solve?"
