A court in Florence, Italy, has warned of the risks of using artificial intelligence (AI) in the legal field after a lawyer cited non-existent decisions in a hearing. The incident occurred during a trial over copyright infringement involving the use of designs on T-shirts.
The hearing was held in March, but the case only came to light this Thursday, the 17th. According to the legal website Diritto.it, the defense attorney cited alleged Supreme Court decisions in support of his arguments. The references, however, did not exist and had allegedly been generated by ChatGPT.
The judges checked the documents and found that the decisions were fabricated. When questioned, the lawyer stated that the material had been prepared by a colleague at the firm.
The court considered opening proceedings under the article that punishes those who act in bad faith during a trial. However, the judges decided not to proceed with the charges. They attributed the error to the incorrect use of artificial intelligence.
In a decision dated March 14, the judges drew attention to the phenomenon of AI “hallucinations”. The term refers to inaccurate or entirely false responses produced by such tools, which present information that is not supported by real data.
Justice and technology
The case has reignited the debate over the use of AI in the justice system. Although these tools offer powerful capabilities for legal research and analysis, their indiscriminate use can compromise the integrity of legal proceedings.
The court highlighted the importance of human validation of information extracted from these tools.