A regional Australian mayor said he may sue OpenAI if it does not correct ChatGPT’s false claims that he had served time in prison for bribery, in what would be the first defamation lawsuit against the automated text service.
Brian Hood, elected mayor of Hepburn Shire, 120km (75 miles) northwest of Melbourne, last November, became concerned about his reputation when members of the public told him ChatGPT had falsely named him as a guilty party in a foreign bribery scandal involving a subsidiary of the Reserve Bank of Australia in the early 2000s.
Hood did work for the subsidiary, Note Printing Australia, but was the person who notified authorities about payment of bribes to foreign officials to win currency printing contracts, and was never charged with a crime, lawyers representing him said.
The lawyers said they sent a letter of concern to ChatGPT owner OpenAI on March 21, which gave OpenAI 28 days to fix the errors about their client or face a possible defamation lawsuit.
A spokesperson for Microsoft, which has integrated ChatGPT into its Bing search engine, was not immediately available for comment.
“It would potentially be a landmark moment in the sense that it’s applying this defamation law to a new area of artificial intelligence and publication in the IT space,” James Naughton, a partner at Hood’s law firm Gordon Legal, told Reuters.
“He’s an elected official, his reputation is central to his role,” Naughton said. Hood relied on a public record of shining a light on corporate misconduct, “so it makes a difference to him if people in his community are accessing this material”.
Australian defamation damages payouts are generally capped at about AUD 400,000 (roughly Rs. 2,20,95,602). Hood did not know the exact number of people who had accessed the false information about him – a factor in determining the size of any payout – but the nature of the defamatory statements was serious enough that he may claim more than AUD 200,000 (roughly Rs. 1,09,70,118), Naughton said.
If Hood files a lawsuit, it would accuse OpenAI of giving ChatGPT users a false sense of accuracy by failing to include footnotes, Naughton said.
“It’s very difficult for somebody to look behind that to say, ‘How does the algorithm come up with that answer?’” said Naughton. “It’s very opaque.”