Practice Note – Supreme Court


As we move into the last weeks of the court year, looking forward to some well-earned rest and time with family and friends, practitioners may be forgiven for not yet fully appreciating the positive impact which the publication by the Chief Justice of Practice Direction No 6 of 2025 – Bail Applications will have on the civil applications jurisdiction of the Supreme Court in Brisbane, not to mention bail and dangerous prisoner applications. You may download it here.

The problem which PD 6 of 2025 addresses

The Applications jurisdiction was, and remains, intended for short civil applications including interlocutory applications in proceedings on foot as well as applications for final relief. The jurisdiction is a vital component of the access to justice afforded by the Court to parties to civil proceedings. Two judges are of course listed to sit in Applications every sitting week in Brisbane and cases with estimates of up to two hours are routinely accommodated.

Traditionally, applications brought under the Bail Act 1980 (Qld) (which are also civil applications) have been listed to be heard in the Applications list. However, the number of such applications has steadily increased since 2014 and has risen sharply in the last three or so years. For example, there was an average of three hearings each day in July 2022, but this rose to between eight and nine last December (always the busiest month for bail applications) and was still sitting at between five and six daily hearings in the last reporting period (September 2025). In addition, cases arising under the Dangerous Prisoners (Sexual Offenders) Act 2003 (Qld) – i.e. new applications, reviews of continuing detention orders and contravention proceedings – are listed on Monday and, where necessary, Tuesday in the Civil list, but they are heard by one of the two judges sitting in Applications.

The upsurge in work from these two sources has meant the Court’s ability to list civil applications for hearing in Brisbane within a relatively short timeframe has been substantially diminished, so something needed to be done. That something is embodied in the restructuring to be trialled for six months under PD 6 of 2025. It is hoped this will go a long way towards freeing up the Applications list by reducing delays and enhancing the capacity of the judges sitting in that jurisdiction to entertain cases attended by real urgency.

The solution

From the first sitting week of 2026 until at least the mid-year vacation, all bail and dangerous prisoner cases filed in the Brisbane Registry will be taken out of the Applications list and heard separately by a judge who is allocated each sitting week to do so. The Applications jurisdiction will stand alone; two judges will continue to sit in that list and will be exclusively dedicated to short civil applications in the sense earlier explained. Dangerous prisoner cases are already the subject of a list which is case managed by a judge – see Practice Direction No 6 of 2012 – Applications made under the Dangerous Prisoners (Sexual Offenders) Act – but bail applications will now be case managed under their own list. PD 6 of 2025 also introduces several new measures to promote case resolution and hearing efficiency for bail applications.

The Bail/DPSOA pilot

The new regime for bail applications to be trialled under PD 6 of 2025 was in part the subject of a Criminal Practice Seminar held in the Banco Court on the evening of 25 November 2025. In a wonderful response, almost 300 practitioners attended either in person or online. The seminar was also recorded and may be viewed on the Court’s YouTube site here. What follows is a brief overview of the main features of the pilot.

PD 6 of 2025 commenced on 19 November 2025 and will apply to applications coming before the Court on and from 27 January 2026. Until that time, bail applications will continue to be heard in the Applications list. The Registry will transition to the new regime from Monday, 19 January 2026.

The pilot is limited to applications filed in the Brisbane Registry. A dedicated Bail List Manager has been assigned to manage the list under the supervision of a Bail List Judge (who will be me for at least the first six months of the pilot). The Bail List Judge will exercise oversight over the new Bail List and is also responsible for dealing with various issues as they arise (such as whether to grant an expedited hearing). The judge allocated to sit each week will conduct both reviews and substantive hearings.

A central feature of the pilot is mandatory review hearings. Once an application is accepted for filing, it will not be allocated court time for a substantive hearing until it has first been reviewed by a judge. Review hearings will be brief, usually allocated no more than ten minutes, and remote appearances may be approved. The primary goal is the timely and efficient disposition of applications. The judge reviewing an application will be concerned to ensure that all necessary evidence and material has been filed by the applicant, to manage the provision of response material by the respondent, to refine the issues for the substantive hearing and to make the directions necessary for an efficient hearing and determination. The judge may also finally dispose of the application by Order if that is possible (e.g., where the parties have arrived at an agreed position). Applications will generally be listed for a review hearing not less than five clear business days after filing. There is an exception for urgent applications; these will continue to be accommodated in an expedited way.

A key expectation under PD 6 of 2025 is that practitioners will file complete, ready-to-proceed material to accompany any application. This must include evidence regarding several essential matters which are specified in the PD, an outline of argument (not to exceed six pages) and (where the applicant is legally represented) a draft Order containing the conditions proposed for release on bail. Service on the respondent should be effected without delay, ideally on the same day, but in no case less than two clear days before the review hearing. The PD also regulates the content of the material to be filed in response to an application, and by when.

PD 6 of 2025 also makes it clear that practitioners should act responsibly before engaging the processes of the court. For example, where an application for bail has been refused in the Magistrates Court but, subsequently, the applicant asserts there has been a material change in circumstances, it is expected that any new application will be made in the Magistrates Court. Otherwise, and conformably with the obligations of parties to a proceeding under r 5 of the Uniform Civil Procedure Rules 1999 (Qld), each party to a bail application impliedly undertakes to the court and to the other parties to proceed in an expeditious way.

PD 6 of 2025 emphasises the front-loading of evidence followed by judge-led case management. It places a high premium on the completeness of filed material. Practitioners must treat the initial filing not as a preliminary step, but as a comprehensive presentation of the material relied on in support of the application so that it is ready for immediate review. Failure to meet these requirements will inevitably lead to unnecessary directions, adjournments and delays in the substantive hearing of the application. Meeting them should have the opposite effect.


“With the exception of nuclear DNA analysis, … no forensic method has been rigorously shown to have the capacity to consistently, and with a high degree of certainty, demonstrate a connection between evidence and a specific individual or source.”

“Strengthening Forensic Science in the United States: A Path Forward”, National Academy of Sciences, 2009


On 15 April 2024, Practice Direction 14 of 2024: Expert Evidence in Criminal Proceedings (Other than Sentences) (“PD14”) was issued by the Honourable Chief Justice. It commences operation on 15 July 2024 and applies to all Supreme Court criminal proceedings commenced by an indictment presented on or after that date. Its main purpose is to enhance the quality and reliability of expert evidence relied on by the prosecution and the accused in criminal trials and pre-trial hearings in the Supreme Court. It is also intended to encourage the early identification of disputed forensic issues and to help focus the expert evidence on those issues.

The Practice Direction was developed by the court’s Forensic Evidence Working Group, comprising the Chief Forensic Pathologist, representatives of Forensic Science Queensland, Forensic Medicine Queensland and the Queensland Police Service, judges from both the Court of Appeal and Trial Division and legal practitioners from the Office of the Director of Public Prosecutions, Legal Aid Queensland, Queensland Law Society and Bar Association of Queensland.

PD14 is not just an important initiative; it will go a long way towards enhancing the quality and reliability of forensic evidence advanced at trial, while also allowing for a much greater level of scrutiny of any limitations associated with that evidence. Only a handful of jurisdictions around the world have taken such a step through court regulation, with Victoria and England leading the way, but the need to do so in order to address what is now widely recognised as a serious problem cannot be doubted.

The Problem

In 2005, the US Congress authorised the National Academy of Sciences to conduct a study into forensic evidence after widespread concerns about the validity and reliability of much of that evidence were brought to prominence by initiatives such as the Innocence Project. Following a comprehensive review by a multi-disciplinary body assembled for the study, what has become known as the NAS Report was released in 2009. It was entitled Strengthening Forensic Science in the United States: A Path Forward, and it confirmed the suspicions of many stakeholders in the trial process around the world. Indeed, if anything, the problems with forensic evidence were far more fundamental and widespread than most had appreciated until that point, with the report being particularly critical of weaknesses in the scientific underpinnings of several forensic disciplines routinely used in the criminal justice system. The report wasted no time getting to the point. This appears in the preface:

“Often in criminal prosecutions and civil litigation, forensic evidence is offered to support conclusions about ‘individualization’ (sometimes referred to as ‘matching’ a specimen to a particular individual or other source) or about classification of the source of the specimen into one of several categories. With the exception of nuclear DNA analysis, however, no forensic method has been rigorously shown to have the capacity to consistently, and with a high degree of certainty, demonstrate a connection between evidence and a specific individual or source. In terms of scientific basis, the analytically based disciplines generally hold a notable edge over disciplines based on expert interpretation. But there are important variations among the disciplines relying on expert interpretation. For example, there are more established protocols and available research for fingerprint analysis than for the analysis of bite marks. There also are significant variations within each discipline. For example, not all fingerprint evidence is equally good, because the true value of the evidence is determined by the quality of the latent fingerprint image. These disparities between and within the forensic science disciplines highlight a major problem in the forensic science community: The simple reality is that the interpretation of forensic evidence is not always based on scientific studies to determine its validity. This is a serious problem. Although research has been done in some disciplines, there is a notable dearth of peer-reviewed, published studies establishing the scientific bases and validity of many forensic methods” [Emphasis added].

The NAS Report went on to make clear that these fundamental failings could not simply be attributed to a handful of rogue analysts or underperforming laboratories. They were systemic and pervasive. Shortcomings were especially prevalent among the feature-comparison disciplines, many of which lacked well-defined systems for determining error rates and had not done studies to establish the relative rarity or commonality of the features examined. In addition, proficiency testing, where it had been conducted, showed instances of poor performance by specific examiners. In short, the report concluded that “much forensic evidence – including, for example, bitemarks and firearm and toolmark identifications – is introduced in criminal trials without any meaningful scientific validation, determination of error rates, or reliability testing to explain the limits of the discipline”.

In 2015, President Obama asked his Council of Advisors on Science and Technology to consider whether additional steps could usefully be taken to strengthen the forensic-science disciplines and ensure the validity of forensic evidence used in the legal system. Two important gaps were identified: (1) the need for clarity about the scientific standards for the validity and reliability of forensic methods and (2) the need to evaluate specific forensic methods to determine whether they have been scientifically established to be valid and reliable.

The Council attempted to close these gaps in the case of “feature-comparison” methods – that is, methods that attempt to determine whether an evidentiary sample (e.g. from a crime scene) is or is not associated with a potential ‘source’ sample (e.g. from a suspect), based on the presence of similar patterns, impressions, or other features in the sample and the source (such as DNA, hair, latent fingerprints, firearms and spent ammunition, toolmarks and bite marks, shoeprints and tyre tracks and handwriting). In the Council’s report published a year later – Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods – serious flaws were reported in all methods examined (fingerprints, bite marks, shoe prints, blood splatter and hair analysis) with the exception of nuclear DNA analysis.

As to that, the report singled out the pivotal role played by nuclear DNA analysis in the development of this new awareness of the unreliability of much of what had previously passed as expert testimony:

“Ironically, it was the emergence and maturation of a new forensic science, DNA analysis, in the 1990s that first led to serious questioning of the validity of many of the traditional forensic disciplines. When DNA evidence was first introduced in the courts, beginning in the late 1980s, it was initially hailed as infallible; but the methods used in early cases turned out to be unreliable: testing labs lacked validated and consistently applied procedures for defining DNA patterns from samples, for declaring whether two patterns matched within a given tolerance, and for determining the probability of such matches arising by chance in the population. When, as a result, DNA evidence was declared inadmissible in a 1989 case in New York, scientists engaged in DNA analysis in both forensic and non-forensic applications came together to promote the development of reliable principles and methods that have enabled DNA analysis of single-source samples to become the ‘gold standard’ of forensic science for both investigation and prosecution.

Once DNA analysis became a reliable methodology, the power of the technology – including its ability to analyse small samples and to distinguish between individuals – made it possible not only to identify and convict true perpetrators but also to clear wrongly accused suspects before prosecution and to re-examine a number of past convictions. Reviews by the National Institute of Justice and others have found that DNA testing during the course of investigations has cleared tens of thousands of suspects and that DNA-based re-examination of past cases has led so far to the exonerations of 342 defendants. Independent reviews of these cases have revealed that many relied in part on faulty expert testimony from forensic scientists who had told juries incorrectly that similar features in a pair of samples taken from a suspect and from a crime scene (hair, bullets, bitemarks, tire or shoe treads, or other items) implicated defendants in a crime with a high degree of certainty.”

One of the many examples supplied was this:

“Starting in 2012, the Department of Justice and FBI undertook an unprecedented review of testimony in more than 3,000 criminal cases involving microscopic hair analysis. Their initial results, released in 2015, showed that FBI examiners had provided scientifically invalid testimony in more than 95 percent of cases where that testimony was used to inculpate a defendant at trial.”

The 2016 report reiterated the need to evaluate specific forensic methods to determine whether they had been scientifically established to be valid and reliable, noting:

“ … many forensic feature-comparison methods have historically been assumed rather than established to be foundationally valid based on appropriate empirical evidence. Only within the past decade has the forensic science community begun to recognize the need to empirically test whether specific methods meet the scientific criteria for scientific validity.”

The report concluded, unsurprisingly, that scientific validity and reliability require that a method has been subjected to empirical testing under conditions appropriate to its intended use. Like the NAS Report, it found serious flaws in all of the methods examined, with the exception of nuclear DNA analysis, and stated that “without actual empirical evidence of its ability to produce conclusions at a level of accuracy appropriate to its intended use, an examiner’s conclusion that two samples were likely to have come from the same source was scientifically meaningless”.

The Response

The findings in these two reports led ultimately to the establishment in the USA of the National Commission on Forensic Science. It was tasked to provide recommendations and advice to the Department of Justice concerning national methods and strategies for: strengthening the validity and reliability of the forensic sciences; enhancing quality assurance and quality control in forensic science laboratories and units; identifying and recommending scientific guidance and protocols for evidence seizure, testing, analysis, and reporting by forensic science laboratories and units; and identifying and assessing other needs of the forensic science communities to strengthen their disciplines and meet the increasing demands generated by the criminal and civil justice systems at all levels of government. It also served as a national forum where all stakeholders could come together, establish common ground, and find solutions for policy recommendations to strengthen the criminal justice system at the federal, state and local levels.

A good idea? Yes, but the Trump administration allowed the charter for the National Commission on Forensic Science to expire in 2017, and nothing has replaced it. It might therefore be a mistake to think that the heavy lifting in this area will continue to be led by the Americans. Much will always depend on the political will to allocate resources to the problem.

For an Australian example of this very thing, at a meeting of the Council of Attorneys-General in November 2019, the Attorneys agreed to share their experiences on the use of forensic evidence in criminal trials and to review existing laws and practices. Victoria was to lead the work, in close consultation with relevant bodies and representatives from all interested jurisdictions, and to report back. The review’s working group intended to examine the capability of juries to understand complex evidence and whether baseline standards should be introduced in courts to restrict the use of untested or speculative expert opinions. However, the review was shelved after only 16 months in favour of other “national priorities”.

In Australia, “[t]here has not been a systematic, independent review of the state of forensic science, and questions of reliability have only occasionally arisen for appellate consideration”.[1] Issues of reliability and validity of forensic evidence have been raised in some academic literature but overall they have received relatively little attention.[2] If “of more than 2,400 proven false convictions since 1989 recorded by the [US] National Registry of Exonerations, nearly 25% involved false or misleading forensic scientific evidence … [it] would not be unreasonable to think that, whatever the percentage might be in [Australia], there have been a significant number of wrongful convictions based upon flawed evidence of that kind”.[3]

Why there has been such inertia here is difficult to fathom. After all, Australia has had more than its fair share of miscarriages of justice due to unsound expert evidence – from the hanging of Colin Ross in the 1920s to the Chamberlains and then to David Eastman, to name a small selection.[4]

But there have been some notable exceptions.

The National Institute of Forensic Science has not been idle. In that regard, in 2020, Ballantyne and Wilson-Wilde wrote:

“[The NIFS] has been working with stakeholders for a number of years to address the concerns raised in the recent reports. The size of this work is significant and currently ANZPAA NIFS is maximising its resources and funding to achieve this aim. There is, however, an increasing risk to forensic science service providers globally, where gaps have and continue to be identified, but work is protracted due to resources and non-coordinated efforts.”[5]

They went on to note that the absence of empirical evidence does not necessarily imply unreliability. However, “until empirical studies are performed for a particular method, caution should be exercised when considering the information and opinions provided”.[6]

In Victoria, a Forensic Evidence Working Group comprising judges, forensic scientists, prosecutors and defence lawyers developed a Practice Note, Expert Evidence in Criminal Trials, to regulate the provision of forensic opinion evidence, and it has been in operation for a decade. Indeed, PD14 was closely modelled on it. In England, a similar approach was taken. Following a report by the UK Law Commission in 2011, a Practice Direction was issued which specifies the factors which the court may consider in determining the reliability of expert opinion evidence and provides that, in considering the reliability of expert scientific opinions, the court should be astute to identify potential flaws in any such opinions.[7]

The Solution

As the NAS Report made clear, the adversarial process relating to the admission and exclusion of scientific evidence is not really suited to the task of finding “scientific truth”:

“The judicial system is encumbered by, among other things, judges and lawyers who generally lack the scientific expertise necessary to comprehend and evaluate forensic evidence in an informed manner, trial judges (sitting alone) who must decide evidentiary issues without the benefit of judicial colleagues and often with little time for extensive research and reflection.”[8]

Furthermore, the evidence assembled in support of the vast majority of serious crime prosecutions in Australia includes some form of forensic evidence, and yet almost every defendant in those prosecutions is legally aided. Only rarely is funding advanced to properly comprehend, test or challenge that evidence. With few exceptions, defence counsel are not equipped by training or learning to mount an effective challenge to forensic evidence without expert assistance. Plainly, these resource constraints, and a lack of awareness and understanding by counsel about forensic evidence and its potential limitations, are productive of a marked imbalance at the bar table.[9]

The result, more often than not, is that forensic evidence is not properly scrutinised by the defence, not properly tested in court and, in many cases, not challenged at all.

Plainly enough, if the defence do not question the scientific foundation, or reliability, of particular expert evidence, the jury have no practical alternative but to treat the evidence as reliable. In turn, the comfort usually provided by the existence of avenues of appeal becomes no comfort at all because there is no evidentiary basis to scrutinise the unchallenged expert findings presented to the jury.[10]

In recognition of these practicalities, PD14 contains detailed specifications of what an expert report must contain, including a comprehensive statement of all limitations in the methodology or analysis reported: any qualification of an opinion expressed in the report without which the report would or might be incomplete or misleading; any limitation or uncertainty affecting the reliability of the methods or techniques used, or the data relied on, to arrive at the opinion in the report; and any limitation or uncertainty affecting the reliability of the opinion in the report as a result of insufficient research or insufficient data. PD14 also establishes procedures to enable defence counsel to confer with a prosecution expert before trial and enables the trial judge to direct experts to confer and prepare a joint report.


An information session in relation to ‘A New Regime for Expert Evidence in Supreme Court Criminal Proceedings’ was held on 19 June 2024. Watch this session


[1] Chris Maxwell AC, ‘Preventing Miscarriages of Justice: The Reliability of Forensic Evidence and the Role of the Trial Judge as Gatekeeper’ (2019) 93(8) Australian Law Journal 642, 644.

[2] Ibid. See, eg, G Edmond, ‘The Admissibility of Forensic Science and Medicine Evidence under the Uniform Evidence Law’ (2014) 38(3) Criminal Law Journal 136; G Edmond, ‘What Lawyers Should Know about the Forensic Sciences’ (2015) 36(1) Adelaide Law Review 33; G Edmond, ‘Forensic Science Evidence and the Conditions for Rational (Jury) Evaluation’ (2015) 39(1) Melbourne University Law Review 77; T Ward et al, ‘Forensic Science, Scientific Validity and Reliability: Advice from America’ (2017) 5 Criminal Law Review 357.

[3] Mark Weinberg, ‘Juries, Judges, and Junk Science – Expert Evidence on Trial’ (2021) 14(4) The Judicial Review 315, 318.

[4] See M Smith and G Urbas, ‘A Century of Science in Australian Criminal Trials’ (2019) 47(1) Australian Bar Review 72; D Hamer and G Edmond, ‘Forensic Science Evidence, Wrongful Convictions and Adversarial Process’ (2019) 38(2) University of Queensland Law Journal 185.

[5] K Ballantyne and L Wilson-Wilde, ‘Assessing the Reliability and Validity of Forensic Science – An Industry Perspective’ (2020) 52(3) Australian Journal of Forensic Sciences 275, 280.

[6] Ibid 277.

[7] Criminal Practice Directions CPD V Evidence 19A Expert Evidence, 19A.5.

[8] NAS Report, 110.

[9] G Edmond et al, ‘Forensic Science Evidence and the Limits of Cross-Examination’ (2019) 42(3) Melbourne University Law Review 858, 862.

[10] Note the exceptional course taken by the Victorian Court of Appeal in Vinaccia v The Queen [2022] VSCA 107, where the defence were permitted to adduce expert evidence on appeal which challenged (for the first time) the reliability of the prosecution’s expert evidence.