In May v Costaras [2025] NSWCA 178 (8 August 2025), Bell CJ (Payne and McHugh JJA agreeing) wrote:
…
[12] After acknowledging that “[a]rtificial intelligence is likely to have a continuing and important role in the conduct of litigation in the future”, Dame Victoria Sharp, President of the King’s Bench Division of the High Court of Justice, has recently observed, in delivering the reasons of the Court in Ayinde v The London Borough of Haringey [2025] EWHC 1383 (Admin) at [5]–[9] (Ayinde) (omitting footnotes):
This comes with an important proviso however. Artificial intelligence is a tool that carries with it risks as well as opportunities. Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained. As Dias J said when referring the case of Al-Haroun to this court, the administration of justice depends upon the court being able to rely without question on the integrity of those who appear before it and on their professionalism in only making submissions which can properly be supported.
In the context of legal research, the risks of using artificial intelligence are now well known. Freely available generative artificial intelligence tools, trained on a large language model such as ChatGPT, are not capable of conducting reliable legal research. Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect. The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source.
Those who use artificial intelligence to conduct legal research notwithstanding these risks have a professional duty therefore to check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work (to advise clients or before a court, for example). Authoritative sources include the Government’s database of legislation, the National Archives database of court judgments, the official Law Reports published by the Incorporated Council of Law Reporting for England and Wales and the databases of reputable legal publishers.
This duty rests on lawyers who use artificial intelligence to conduct research themselves or rely on the work of others who have done so. This is no different from the responsibility of a lawyer who relies on the work of a trainee solicitor or a pupil barrister for example, or on information obtained from an internet search.
We would go further however. There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused. In those circumstances, practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities (such as heads of chambers and managing partners) and by those with the responsibility for regulating the provision of legal services. Those measures must ensure that every individual currently providing legal services within this jurisdiction (whenever and wherever they were qualified to do so) understands and complies with their professional and ethical obligations and their duties to the court if using artificial intelligence. For the future, in Hamid hearings such as these, the profession can expect the court to inquire whether those leadership responsibilities have been fulfilled.
[13] I would endorse these observations. The great care that is required by, and responsibility of, legal practitioners in New South Wales in the use of Generative AI is reflected in Practice Note SC Gen 23 — Use of Generative Artificial Intelligence (Gen AI).
[14] The appendix to the judgment in Ayinde contains non-exhaustive examples from England and Wales, the United States, Australia, New Zealand and Canada of material being placed before courts that has been generated by an artificial intelligence tool, but which is erroneous. At least two of those cases involved unrepresented litigants: Olsen v Finansiel Stabilitet A/S [2025] EWHC 42 (KB); Zzaman v Commissioners for His Majesty’s Revenue and Customs [2025] UKFTT 539 (TC). The list of such cases continues to grow.
[15] The problems of unverified use of artificial intelligence in the preparation of submissions are exacerbated where the technology is used by unrepresented litigants who are not subject to the professional and ethical responsibilities of legal practitioners and who, while subject to the Practice Note SC Gen 23, may be unaware of its terms. All litigants are under a duty not to mislead the court or their opponent: Vernon v Bosley (No 2) [1999] QB 18 at 37, 63, cited in Burragubba v Queensland (2016) 151 ALD 471; [2016] FCA 984 at [228]; see also, in relation to the obligations of unrepresented litigants, Barton v Wright Hassall LLP [2018] UKSC 12; [2018] 1 WLR 1119 at [18] (Barton). As Lord Sumption observed in Barton at [18], in a passage quoted in Mohareb v Saratoga Marine Pty Ltd [2020] NSWCA 235 at [39], “[u]nless the rules and practice directions are particularly inaccessible or obscure, it is reasonable to expect a litigant in person to familiarise himself with the rules which apply to any step he is about to take.”
[16] It is and will remain important for judicial officers to be conscious of the potential use of Generative AI by unrepresented litigants in legal proceedings and it is legitimate to inquire, as the Court did of the respondent in the present case, whether Generative AI has been used in the preparation of materials placed before the Court. Such use may introduce added cost and complexity to the proceedings and, where unverified, add to the burden of other parties and the Court in responding to it.
[17] At least at this stage in the development of the technology, notwithstanding the fact that Generative AI may contribute to improved access to justice which is itself an obviously laudable goal, the present case illustrates the need for judicial vigilance in its use, especially but not only by unrepresented litigants. It also illustrates the absolute necessity for practitioners who do make use of Generative AI in the preparation of submissions — something currently permitted under the Practice Note — to verify that all references to legal and academic authority, case law and legislation are only to such material that exists, and that the references are accurate, and relevant to the proceedings.
…
(emphasis added)