FEATURE ARTICLE
Issue 102: December 2025, Professional Conduct and Practice
The following guidelines were released by ‘Queensland Courts’ on 15 September 2025 for use by judges and members of the Supreme Court, District Court, Planning and Environment Court, Magistrates Courts, Land Court, Childrens Court, Industrial Court, Queensland Industrial Relations Commission and Queensland Civil and Administrative Tribunal.
Introduction
[1] Any use of Generative AI tools by judicial officers and their staff must be consistent with the fulfilment of the judiciary’s obligation to do equal justice to all persons according to law, including by facilitating the just and expeditious resolution of civil and criminal proceedings in accordance with applicable substantive and procedural law.
[2] Generative AI tools are already in use by lawyers and self-represented litigants in relation to their work before courts and tribunals. It is likely that the use of such tools will become more widespread. Some judicial officers and their staff either have used such tools or will seek to do so in the future. It is likely that such use will also become more widespread.
[3] These guidelines have been developed so that judicial officers may have a greater understanding of the risks concerning the use of Generative AI tools by themselves, their staff and the lawyers and non-lawyers who conduct work before courts and tribunals.
[4] These guidelines have been adopted by the Supreme Court, District Court, Planning and Environment Court, Magistrates Courts, Land Court, Childrens Court, Industrial Court, Queensland Industrial Relations Commission and Queensland Civil and Administrative Tribunal.
[5] Some terms should first be defined:
(a) Artificial Intelligence (AI): Computer systems able to perform tasks normally requiring human intelligence.
(b) Generative AI: A form of AI which enables users to generate new content, which can include text, images, sounds and computer code. Some Generative AI tools are designed to take actions.
(c) Generative AI chatbot: A computer program which simulates online human conversations using Generative AI.
(d) Public Generative AI chatbot: A Generative AI chatbot that is made available for general public use. Well-known examples include OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini and Microsoft Copilot.
(e) Large Language Model (LLM): LLMs are AI models which, through sophisticated pattern recognition and probabilistic calculations, learn to predict the next best word or part of a word in a sentence.
(f) Prompt: A short instruction entered into a Generative AI chatbot to obtain an answer or output.
[6] The following seven guidelines are explained under separate headings below:
- Understand AI and its limitations;
- Uphold confidentiality, suppression, and privacy;
- Ensure accountability and accuracy;
- Be aware of ethical issues;
- Maintain security;
- Take responsibility;
- Be aware that court users may have used AI tools.
1. Understand AI and its limitations
[7] Despite the name, Generative AI chatbots are not actually intelligent in the ordinary human sense. Nor is the way in which they provide answers analogous to the human reasoning process. It is important to note:
(a) Generative AI chatbots are built on LLMs. LLMs analyse a large amount of training text to predict the probability of the next best word in a sentence given the context. Just as Google offers to autocomplete your search, LLMs autocomplete repeatedly to form words, sentences, and paragraphs of text. (A simplified illustration of this prediction process appears after this list.)
(b) LLMs have been further trained on ideal human-written responses to prompts, and on survey results about which responses sound most natural or best mimic human dialogue.
(c) This means that the answers which Generative AI chatbots generate are those the chatbot predicts to be the most likely combination of words (based on the documents and data that it holds as source information), not necessarily the most accurate answers.
(d) And because their responses are based on probability calculations about the next best word in context, these tools are unable to reliably answer questions that require a nuanced understanding of language or context. They have no intrinsic understanding of what any word they output means, nor any conception of truth.
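By way of illustration only, the following sketch, written in Python, shows the principle of next-word prediction described in (a) above, using simple word-pair frequencies drawn from a tiny invented sample text. The sample text, names and output are assumptions made for this example; real LLMs use neural networks trained on vast datasets and operate on parts of words rather than whole words. The point the sketch demonstrates is that the output is the statistically most likely continuation of the input, not necessarily a true statement.

```python
# Toy illustration of next-word prediction (not how production LLMs are built).
# A real LLM uses a neural network trained on vast text; this sketch simply
# counts, in a tiny invented sample, which word most often follows each word,
# and then "autocompletes" by repeatedly choosing the most probable next word.
from collections import Counter, defaultdict

training_text = (
    "the court adjourned until monday . "
    "the court reserved its decision . "
    "the court adjourned until further order ."
)

# Count how often each word follows each preceding word in the sample text.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the sample text."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else "."

# Generate text by repeatedly appending the most probable next word.
sentence = ["the"]
for _ in range(5):
    sentence.append(predict_next(sentence[-1]))

# The result is the most statistically likely continuation of "the",
# whether or not the resulting sentence is actually true.
print(" ".join(sentence))
```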
[8] The answers provided by Generative AI chatbots depend on the content of the datasets on which they are trained. You should note the following limitations:
(a) Generally, the text used to train public Generative AI chatbots comes from various internet sources, such as webpages, online books, and social media posts. It does not necessarily come from authoritative or up-to-date databases.
(b) Public Generative AI chatbots may have limited access to training data on current Australian law or the procedural requirements that apply in Australian courts and tribunals.
(c) Generative AI chatbots cannot distinguish between facts, inferences and opinions contained in their source datasets. This means that text which they generate in response to a prompt may contain incorrect, opinionated, misleading or biased statements presented as fact.
[9] The quality of any answers you receive from Generative AI chatbots may depend on how you engage with the relevant AI tool, including the “quality” of the prompts you enter. Even with the best prompts, the information provided may be inaccurate, incomplete, misleading, or biased.
[10] The result is that, as with other information available on the internet, AI tools may be useful for finding material you would recognise as correct but do not have to hand, but they are a poor way of conducting research to find new information which you cannot otherwise verify. They may be best seen as a way of obtaining non-definitive confirmation of something, rather than as a source of immediately correct facts.
[11] Commercial legal publishers are incorporating Generative AI chatbots and other tools in their products. Although such AI tools may be expected to have been trained with more authoritative and current data sets than public Generative AI chatbots, and to address data security and confidentiality to a greater extent, user awareness of these issues will still be important.
2. Uphold confidentiality, suppression, and privacy
[12] Do not enter any information into a public Generative AI chatbot that is not already in the public domain. Take particular care not to enter any private, confidential, or suppressed information. Any information that you input into a public Generative AI chatbot should be seen as being published to all the world.
[13] The current public Generative AI chatbots can remember every question that you ask them, as well as any other information you put into them. They could then use that information to respond to queries from other users. As a result, anything you type into such a chatbot could become publicly known. This could result in breaches of suppression orders or accidental disclosure of confidential or sensitive information. You should disable the chat history in chatbots if this option is available.
[14] In the event of unintentional disclosure of confidential, suppressed, or sensitive information, your staff should report to you. You should notify your head of jurisdiction.
3. Ensure accountability and accuracy
[15] The accuracy of any information provided to you by a Generative AI chatbot must be checked before it is used or relied upon.
[16] Information provided by Generative AI chatbots may be inaccurate, incomplete, misleading or out of date. Even if it purports to represent Australian or Queensland law, it may not do so. Generative AI chatbots have in the past:
(a) made up fictitious cases, citations or quotes, or referred to legislation, articles or legal texts that do not exist;
(b) provided incorrect or misleading information regarding the law or how it might apply; and
(c) made factual errors.
[17] In straightforward areas of law, or for material you would recognise as correct but do not have to hand, AI tools might be able to produce helpful, high-level legal explanations or summaries of relevant legal principles, but:
(a) Care must be taken to provide accurate and reliable information to the chatbot.
(b) The quality of legal research produced will be influenced by the dataset from which the chatbot is operating. Thus Generative AI chatbots are limited by the date range, jurisdictional information, and type of legal materials they can access.
(c) It may be hoped that commercial AI tools from reputable legal publishers, which have been specifically trained on appropriate legal cases and legislation and which link answers directly to the source of the information, will assist with ensuring the accuracy of Generative AI outputs.
(d) The quality of legal research produced will also be influenced by the type of prompts the chatbot is given. Writing high quality prompts, and tailoring them in response to the answers received, is a skill that takes some practice.
(e) Presently, public Generative AI chatbots can produce some apparently high-quality outputs (for example, identifying and explaining relevant legal principles), but they are also prone to errors and “hallucination”. For example, it is currently common for such chatbots to generate fictitious legal citations. In doing so, the chatbot may appear authoritative.
(f) The use of such chatbots is not a substitute for conducting research using trusted sources such as academic texts or legal databases.
4. Be aware of ethical issues
[18] AI tools based on LLMs generate responses based on the datasets on which they are trained. Information generated by such tools will inevitably reflect whatever gaps, errors and biases are contained in their training data. You should always have regard to this possibility and to the need to correct for it.
[19] The use of AI tools based on LLMs may also raise copyright and plagiarism issues. For example, Generative AI chatbots can be very useful in condensing or summarising information or presenting the information in a different format. However, the following should be considered:
(a) Using a chatbot to summarise a portion of a textbook or other intellectual property could breach the author’s copyright.
(b) Any such use would need to be carefully reviewed to ensure the summarised passage carries the same meaning as the original content.
(c) Depending on context, the source may need to be acknowledged and citations added.
[20] Similarly, Generative AI chatbots can be a helpful tool in planning a speech and producing an outline of potential speaking points. They could then be used to elaborate further on potential content for a specific speaking point. However, it would be important to ensure that any AI-generated material was accurate and supported by reliable sources. And, again, depending on context, the source may need to be acknowledged and citations added.
5. Maintain security
[21] Follow best practices for maintaining your own and the Court’s security.
(a) Use work devices (rather than personal devices) to access AI chatbots.
(b) Use your work email address.
(c) If you have a paid subscription to an AI platform, use it rather than a public AI tool. (Paid subscriptions have been identified as generally more secure than non-paid ones.)
[22] If there has been a potential security breach your staff should report to you. You should notify your head of jurisdiction.
6. Take responsibility
[23] You are personally responsible for material which is produced in your name.
[24] Provided these guidelines are appropriately followed, there is no reason why you cannot use generative AI as a potentially useful secondary tool for research or preparatory work. However, you must ensure that any use of AI tools by you or your staff is consistent with the core judicial values of open justice, accountability, impartiality and equality before the law, procedural fairness, access to justice and efficiency.
[25] AI tools should not be used for decision-making nor used to develop or prepare reasons for decision. The development and expression of judicial reasoning must be done by the judicial officer themselves.
[26] If your staff are using AI tools in the course of their work for you, you should discuss it with them to ensure they are using such tools appropriately and taking steps to mitigate any risks.
7. Be aware that court users may have used AI tools
[27] Some kinds of AI tools have been used for a significant time without difficulty.
[28] Many aspects of AI are already in general use: for example, in search engines to autofill queries, in social media to select the content delivered to users, and in image recognition and predictive text. If you use an app to give you directions or transport options, you are using an AI tool.
[29] Technology Assisted Review is now part of the landscape of approaches to electronic disclosure in litigation. In this process, a machine learning system is trained on data created by lawyers who identify relevant documents manually; the tool then applies the learned criteria to identify other, similar documents within very large disclosure data sets.
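As an illustration only, the sketch below shows, in Python, the broad shape of that workflow: a simple classifier is trained on a handful of documents a lawyer has already labelled, and it then ranks unreviewed documents by predicted relevance. The example documents, labels and library choice (scikit-learn) are assumptions made for this sketch; commercial Technology Assisted Review platforms are substantially more sophisticated and use their own tooling.

```python
# Minimal sketch of the Technology Assisted Review workflow described in [29]:
# learn from lawyer-labelled documents, then rank unreviewed documents by
# predicted relevance. Illustrative only; not a production TAR system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical documents a lawyer has already reviewed and labelled.
reviewed_docs = [
    "email discussing the supply contract and delivery dates",
    "invoice for catering at the staff christmas party",
    "draft amendment to the supply contract pricing schedule",
    "newsletter about the office car park upgrade",
]
labels = [1, 0, 1, 0]  # 1 = relevant to the dispute, 0 = not relevant

# Hypothetical unreviewed documents to be ranked by the trained model.
unreviewed_docs = [
    "memo on penalties for late delivery under the supply contract",
    "reminder to submit leave requests before the holidays",
]

# Convert the labelled documents to word-frequency features and train a model.
vectoriser = TfidfVectorizer()
X_train = vectoriser.fit_transform(reviewed_docs)
model = LogisticRegression().fit(X_train, labels)

# Score the unreviewed documents; higher-scoring documents are reviewed first.
scores = model.predict_proba(vectoriser.transform(unreviewed_docs))[:, 1]
for doc, score in sorted(zip(unreviewed_docs, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```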
[30] The present issue of concern is the use of AI to produce submissions or other material aimed at persuading judicial officers.
[31] All legal representatives are responsible for the material they put before the courts and tribunals and have a professional obligation to ensure it is accurate and appropriate.
[32] Lawyers should be required to confirm that they have independently verified the accuracy of any research or case citations that have been generated with the assistance of AI.
[33] AI chatbots are now being used by self-represented litigants. They may be the only source of advice or assistance some litigants receive. Such litigants rarely have the knowledge or skills to independently verify legal information provided by AI chatbots and may not be aware that they are prone to error. This can result in content that (superficially at least) appears to be highly persuasive and well written, but on closer inspection contains obvious substantive errors. There is also a risk that inaccurate information has been included in affidavits or witness statements which have been prepared using an AI tool, and which have not been carefully checked by the deponent or witness before being finalised.
[34] If it appears an AI chatbot may have been used to prepare submissions or other documents, it may be appropriate to inquire about this, and ask what checks for accuracy have been undertaken (if any). Indications which may suggest an AI chatbot has been used include:
(a) references to cases that do not sound familiar, or have unfamiliar citations;
(b) citation of different bodies of case law in relation to the same legal issues;
(c) submissions that do not accord with your general understanding of the law in the area;
(d) elaborate language, or language which is inconsistent with the manner in which the person otherwise speaks or writes.
[35] AI tools are now being used to produce fake material, including text, images and video. Courts have always had to handle forgeries, and allegations of forgery, involving varying levels of sophistication. Judges should be aware of this new possibility and potential challenges posed by deepfake technology.
[36] Judges should also be alert to the use of AI by experts to assist in the generation or expression of an opinion contained in an expert report and consider whether the expert should be required to identify in their report the precise way in which they have used AI.
Further information is available on the Queensland Courts page on the use of generative AI.
Version history:
- Guidelines issued on 13 May 2024
- Revised version issued on 15 September 2025