02-01-2024
WASHINGTON: Artificial intelligence represents a mixed blessing for the legal field, U.S. Supreme Court Chief Justice John Roberts said in a year-end report published on Sunday, urging “caution and humility” as the evolving technology transforms how judges and lawyers go about their work.
Roberts struck an ambivalent tone in his 13-page report. He said AI had the potential to increase access to justice for indigent litigants, revolutionize legal research and assist courts in resolving cases more quickly and cheaply, while also pointing to privacy concerns and the current technology’s inability to replicate human discretion.
“I predict that human judges will be around for a while,” Roberts wrote, “but with equal confidence I predict that judicial work, particularly at the trial level, will be significantly affected by AI.”
The chief justice’s commentary is his most significant discussion to date of the influence of AI on the law, and coincides with a number of lower courts contending with how best to adapt to a new technology capable of passing the bar exam but also prone to generating fictitious content, known as “hallucinations.”
Roberts emphasized that “any use of AI requires caution and humility.” He mentioned an instance where AI hallucinations had led lawyers to cite non-existent cases in court papers, which the chief justice said is “always a bad idea.” Roberts did not elaborate beyond saying the phenomenon “made headlines this year.”
Michael Cohen, Donald Trump’s former fixer and lawyer, for instance, said in court papers unsealed last week that he mistakenly gave his attorney fake case citations generated by an AI program that made their way into an official court filing. Other instances of lawyers including AI-hallucinated cases in legal briefs have also been documented.
A federal appeals court in New Orleans last month drew headlines by unveiling what appeared to be the first proposed rule by any of the 13 U.S. appeals courts aimed at regulating the use of generative AI tools like OpenAI’s ChatGPT by lawyers appearing before it.
The proposed rule by the 5th US Circuit Court of Appeals would require lawyers to certify that they either did not rely on artificial intelligence programs to draft briefs or that humans reviewed the accuracy of any text generated by AI in their court filings.
(Int’l News Desk)