Judge Megan Goldish: What Happens When AI Meets High-Volume Courtrooms

AI use in courtrooms

Key Takeaways

  • High-volume courtrooms are experimenting with AI for scheduling, intake, and document handling.
  • AI-generated filings can introduce serious risks, including citation errors and missing legal context.
  • Judges must retain control over prioritization, overrides, and final legal judgment.
  • Confidential records and sensitive filings require strict limits on AI access and usage.
  • Courts should evaluate AI using performance metrics tied to speed, accuracy, and fairness.


Judge Megan Goldish serves on the Circuit Court of Cook County, where she presides in the Domestic Violence Division and manages a high-volume courtroom that handles both criminal and civil matters. She was elected to the bench in 2014 after approximately 17 years as a Cook County Assistant State’s Attorney, including time as a supervising attorney on a specialized treatment call. Her daily responsibilities involve reviewing confidential filings, conducting hearings on orders of protection, and ruling on time-sensitive motions that require careful attention under tight schedules.

In addition to her judicial duties, Judge Goldish is an adjunct instructor in trial advocacy and remains active in legal education and professional organizations. Her experience in fast-paced court settings provides relevant context for understanding how emerging technologies, including artificial intelligence, intersect with modern judicial workflows. As courts explore AI tools to assist with scheduling, document handling, and intake processes, the realities of high-volume dockets raise important questions about accuracy, confidentiality, oversight, and procedural fairness.

What Happens When AI Meets High-Volume Courtrooms

High-volume courtrooms, where dozens of matters move from intake to ruling each day, have begun introducing artificial intelligence as an aid. Many of these courtrooms handle criminal and civil dockets together, with emergency motions, protective orders, and preliminary hearings occurring in fast succession. Some court systems are exploring AI to support docket management, filing review, and document generation, automating intake checks and routine drafting. Still, its fit within fast-paced judicial workflows remains unsettled.

Judges in these courtrooms make rapid decisions based on rotating calls, midday hearings, and off-hours filings. They review petitions, evaluate pretrial conditions, and assess felony complaints during short breaks. Hearings often run late, and rulings follow in close sequence with little time between matters.

When configured carefully, AI-enabled scheduling tools can help organize calendars and surface conflicts, but many assume static workflows and fixed prioritization. Courtroom scheduling systems still need human reprioritization and override when volumes surge or specialized routing is triggered, consistent with court guidance that automation should remain under human supervision. Such overrides can pause a case and ripple across later calls.
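To make the override pattern concrete, here is a minimal sketch in Python: an automated priority score fills the docket queue, but a clerk's or judge's reprioritization always outranks it. The class names, case numbers, and priority scheme are hypothetical illustrations, not drawn from any actual court scheduling system.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class DocketEntry:
    priority: int                              # lower number = heard sooner
    case_id: str = field(compare=False)
    human_override: bool = field(compare=False, default=False)

class DocketQueue:
    """A priority queue where model-suggested priorities stay
    subordinate to human reprioritization."""

    def __init__(self):
        self._heap = []

    def add(self, case_id, model_priority):
        # Enqueue a case with the AI model's suggested priority.
        heapq.heappush(self._heap, DocketEntry(model_priority, case_id))

    def override(self, case_id, new_priority):
        # A clerk or judge reprioritizes; the flag records that a human
        # decision now outranks the model's suggestion.
        for entry in self._heap:
            if entry.case_id == case_id:
                entry.priority = new_priority
                entry.human_override = True
        heapq.heapify(self._heap)  # restore heap order after the edit

    def next_case(self):
        return heapq.heappop(self._heap)

q = DocketQueue()
q.add("24-DV-1001", model_priority=5)       # routine status hearing
q.add("24-DV-1002", model_priority=2)       # contested motion
q.override("24-DV-1001", new_priority=1)    # emergency matter: judge moves it up
assert q.next_case().case_id == "24-DV-1001"
```

Recording the override as an explicit flag, rather than silently rewriting the model's score, also preserves a trace of where human judgment intervened.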

Courts have reported problems with AI-written legal documents. Some mimic legal formatting while misapplying case law or citing nonexistent decisions. Judges and clerks may not have time to verify accuracy, increasing the risk of error at the ruling stage.

Emergency protection hearings require precision, but AI-generated documents may fall short of courtroom standards. When these tools produce timelines or message summaries, judges must assess whether the information is reliable and relevant, and whether it satisfies the jurisdiction’s statutory elements for the relief sought. If context is missing or sources cannot be verified, courts may find the filing legally insufficient.

In these same emergency contexts, confidentiality adds a second layer of concern. Domestic violence or stalking motions may reference Illinois Department of Children and Family Services (DCFS) reports, medical histories, or personnel files. Court policies increasingly restrict AI tools from directly accessing such records, and recent guidance recommends that any AI use follow strict evidence-handling and confidentiality rules to prevent unauthorized review or data exposure.

Clerks and courtroom staff face practical barriers as well. When AI-generated filings arrive with omissions, routing errors, or formatting issues, staff must intervene during intake. These fixes slow the case at hand and, on heavy calendars, push later start times.

Access inequities can also grow wider. Litigants who rely on walk-in assistance, language interpreters, or paper filing are often excluded from AI-centered tools. Judges committed to procedural fairness must weigh these gaps when evaluating how AI appears in case preparation or presentation.

Judicial oversight remains central. Judges must determine whether AI tools merely assisted with a filing, attempted to substitute for the legal reasoning courts apply under the law, or introduced unverified content into the record. As more filings rely on these systems, courts must clarify which tools are permitted, under what rules, and whether parties understand the risks of using them without human review.

Courts can pilot AI with clear success measures and public reporting by tracking time-to-hearing variance, intake-correction rates, citation-error incidents, interpreter access outcomes, and disclosure-compliance rates. Policies can require human verification attestations, incident reporting within 24 hours, and automatic pause rules if error thresholds or equity measures slip. Together, these metric-driven safeguards let courts expand or limit AI use based on observed performance, not assumptions, while protecting speed, fairness, and confidentiality.
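As one concrete illustration of the automatic pause rule, the short Python sketch below suspends AI assistance as soon as any tracked measure crosses its limit. The metric names, thresholds, and sample figures are hypothetical stand-ins for whatever a court's pilot policy actually defines.

```python
from dataclasses import dataclass

@dataclass
class PilotMetrics:
    citation_error_rate: float     # share of AI-assisted filings with bad citations
    intake_correction_rate: float  # share of filings staff had to fix at intake
    interpreter_access_gap: float  # drop in interpreter-assisted completions vs. baseline

# Hypothetical thresholds a court might write into its pilot policy.
LIMITS = {
    "citation_error_rate": 0.01,
    "intake_correction_rate": 0.05,
    "interpreter_access_gap": 0.02,
}

def breached_limits(metrics):
    """Return the names of any metrics over their limits; a non-empty
    result triggers the automatic pause rule."""
    return [name for name, limit in LIMITS.items()
            if getattr(metrics, name) > limit]

# A week of made-up pilot data: the citation-error rate slips past 1%.
week = PilotMetrics(citation_error_rate=0.02,
                    intake_correction_rate=0.03,
                    interpreter_access_gap=0.0)
breaches = breached_limits(week)
if breaches:
    print("Pause AI intake tools; thresholds breached:", breaches)
```

The point of encoding thresholds this way is that expanding or suspending an AI tool follows from observed numbers rather than from anyone's impression of how the pilot is going.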

FAQs

How are courts using AI in high-volume dockets?

Some courts use AI tools for scheduling, intake checks, and routine document drafting. These tools are meant to assist staff, not replace judicial decision-making.

What risks do AI-generated legal documents create?

They can include incorrect citations, missing context, or fabricated case law. In fast-paced courtrooms, these errors may not be caught in time.

Why is human oversight still required?

Court workflows change constantly and require reprioritization that automation cannot handle alone. Judges must also ensure legal reasoning and evidence standards are met.

How does confidentiality affect AI use in court?

Many filings contain sensitive records such as medical or child welfare information. Courts must restrict AI access to prevent unauthorized data exposure.

Could AI increase access or fairness problems?

Yes, especially for litigants who rely on in-person help, interpreters, or paper filing. Courts must consider these gaps before expanding AI-dependent processes.

About Judge Megan Goldish

Judge Megan Goldish is a Circuit Court judge in Cook County, Illinois, serving in the Domestic Violence Division. Elected in 2014, she previously spent nearly two decades as a Cook County Assistant State’s Attorney. She holds a Juris Doctor from Northwestern University Pritzker School of Law and teaches trial advocacy as an adjunct instructor. Judge Goldish remains active in legal education, professional associations, and community organizations focused on justice, equity, and public service.
