Colombian judge consults ChatGPT in trial, EU officials warn of chatbot risks
Colombian judge's consultation of chatbot ChatGPT in a trial case provokes controversy
(Central News Agency) A Colombian judge has sparked controversy by using the artificial intelligence (AI) chatbot ChatGPT while preparing his ruling in a case concerning a child's health care.
Judge Juan Manuel Padilla said he used the chatbot in a case over whether an autistic child should be exempted from fees for medical appointments, therapy and transportation because his parents lacked sufficient income, Agence France-Presse reported. The chatbot can generate text in response to written prompts.
In an interview with local radio station Blu Radio on January 31, he said ChatGPT and similar programs are helpful "when drawing up drafts of trial texts," but that "the goal is not to replace" judges.
Padilla ultimately ruled that the child was exempt from the fees. In his January 30 judgment he disclosed that he had consulted ChatGPT, but did not say how heavily he relied on the chatbot.
He also insisted that putting questions to the app does not mean people lose their capacity for judgment and independent thought.
ChatGPT uses artificial intelligence and massive data on the Internet to answer questions posed to it by human users.
In this case, Padilla said, he asked ChatGPT "whether minors with autism should be exempted from paying fees for their treatment," among other questions.
ChatGPT replied: "Yes, that is correct. Under Colombian regulations, minors diagnosed with autism are exempt from paying fees for their treatment."
Padilla said ChatGPT performs tasks previously handled by court secretaries, and that his approach is "organized, simple and methodical" and could improve the efficiency of the judicial system.
Juan David Gutierrez, a professor at Rosario University and an expert on artificial intelligence regulation, is among those skeptical of Padilla's approach. He said he put the same question to ChatGPT and got a different answer.
"It is certainly irresponsible or unethical for a judge to use ChatGPT in delivering rulings," Gutierrez tweeted. He also called for the urgent development of "digital literacy" among judges.
ChatGPT, a popular AI chatbot created by California-based developer OpenAI, can write prose, articles, poems and even computer code. It was released for free public testing in late November 2022 and quickly became popular around the world.
Critics fear the chatbot could be used as a cheating tool in schools and universities, and OpenAI itself has warned that ChatGPT can make mistakes.
But Padilla said: "I suspect many of my colleagues will join me in delivering judgments in an ethical manner with the help of artificial intelligence."
EU warns of ChatGPT risks, Italy bans chatbot Replika from using personal information
(Central News Agency) Italy on the 3rd banned the US chatbot Replika from using Italian users' personal information, on the grounds that it may endanger minors and emotionally vulnerable people. A European Union (EU) industry chief has also warned of the risks posed by the chatbot ChatGPT.
According to Reuters, Replika is an AI-powered chatbot launched in 2017 by San Francisco start-up Luka, offering users customized avatars that can talk to and listen to them.
Replika first became popular among English speakers. It is free to use, and the company earns revenue by selling additional features such as voice chat; monthly revenue currently stands at about US$2 million (approximately NT$60 million).
Replika is marketed as a "virtual friend" that claims to improve users' emotional well-being. But Italy's data protection authority said Replika's intervention in users' moods "may increase the risks for individuals still in a developmental stage or in a state of emotional fragility."
The Italian authority also pointed out that Replika has no age-verification mechanism, such as a filter for minors or a means of blocking users who do not explicitly declare their age.
Italy's data protection agency said Replika violated EU privacy regulations by processing personal data unlawfully: its processing cannot be based on a contract, even an implicit one, because minors are incapable of entering into contracts.
According to the statement, Luka must report within 20 days the measures it has taken to comply with Italy's restrictions, or face a fine of up to 20 million euros (approximately NT$648 million) or up to 4% of its global annual turnover.
Meanwhile, Thierry Breton, the European Commissioner for the Internal Market, told Reuters that new AI regulations would address concerns about the risks associated with ChatGPT and ensure Europeans can trust AI technology. It was the first time a senior EU official had commented on concerns related to ChatGPT.
ChatGPT was developed by OpenAI, a Microsoft-backed private American company, and was opened to the public for free at the end of November 2022. According to a UBS study citing data from market analytics firm Similarweb, ChatGPT became the fastest-growing consumer application in history within just two months of its launch.
When users pose questions to ChatGPT, it can respond by generating articles, essays, jokes and even poems.
In written comments to Reuters, Breton said the risks posed by ChatGPT underscore the urgent need for AI regulation. He proposed the relevant legislation in 2022, hoping to set a global standard for AI technology.
AI technology is used in everything from smartphones and self-driving vehicles to online shopping and factories. China and the United States currently lead the way in the technology.
Breton said: "As ChatGPT demonstrates, AI solutions can offer great opportunities for businesses and citizens, but can also pose risks. This is why we need a solid regulatory framework to ensure trustworthy AI based on high-quality data."