Beyond the Hype: Building Safe, Accountable AI in Australian Healthcare
Jurassic Tech: Why AI in Healthcare Needs Governance Before It Gets Loose
By Dr Bassam Nuseibeh
Emergency Physician | Entrepreneur | Autism Advocate
“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
— Jurassic Park (1993)
It’s 3pm in a busy ED. You’re two hours behind, managing a patient whose condition is on the edge of stability. The AI system built into your EMR flashes a suggestion:
“Low risk. Discharge appropriate.”
It’s not that you don’t want help—but the recommendation feels off. The patient looks tired, their history is complicated, and the model hasn’t accounted for either. You override the alert and move on. For the third time today.
AI is here, and it’s learning fast. But in healthcare, enthusiasm alone isn’t enough. This moment—where technology feels impressive but disconnected—is starting to feel a little too much like Jurassic Park.
The tech is powerful. The interface sleek. The promise real. But somewhere in the rush to build the future, we forgot to ask the question that matters most:
Just because we can—should we?
We’ve Seen This Movie Before
Movie: The Matrix (1999)
The last major digital disruption in healthcare was meant to be transformative. Electronic medical records promised a streamlined future: safer handovers, faster documentation, smarter systems.
Instead, many clinicians found themselves stuck in The Matrix: a digital reality that didn't reflect clinical logic or the way we actually work. The interface demanded more than it gave. It often created inefficiencies, divided attention, and drove burnout.
We’re now on the cusp of doing it again—but this time, with predictive algorithms and generative models. If we don’t design AI around the clinical experience, we’ll just repeat the cycle: high expectations, poor implementation, and clinicians forced to jam square pegs into round holes.
In Australia, that risk is magnified by fractured data systems. Our hospitals don’t speak the same language. Our datasets don’t link. Training AI on patchy, siloed, or biased data is like building a skyscraper on quicksand.
Governance Isn’t Bureaucracy—It’s Safety Engineering
Movie: The Martian (2015)
In The Martian, Mark Watney doesn’t survive Mars with brute force. He survives by applying logic, data, and structure to every single decision. He governs his chaos.
That’s what we need in AI governance: not red tape, but real-world safety engineering.
Governance in this context means:
Embedding clinician input from the design phase
Ensuring datasets reflect the real population—rural, regional, First Nations, vulnerable groups
Establishing clear oversight pathways and escalation points
Setting out who owns risk, and what accountability looks like
It’s the clinical equivalent of double-checking a paediatric medication dose: we do it not because we distrust the system, but because lives are at stake.
Just as we wouldn’t accept an unregulated device in our resus bay, we shouldn’t tolerate ungoverned algorithms influencing decisions at the bedside.
Designing for Augmentation, Not Automation
Movie: Iron Man (2008)
AI should be like the Iron Man suit: something that enhances our capabilities while leaving the decision-making—and the accountability—with us.
The power of AI is in augmentation: helping us detect subtle trends, handle complex data, and streamline documentation. What it should never do is replace clinical judgment. Once that line is crossed, we’re no longer co-pilots—we’re just passengers.
Too many AI tools are built without a clear philosophy of use. Are they meant to guide? Decide? Flag? Replace?
That ambiguity becomes dangerous, especially when clinicians feel pressured to follow recommendations they didn’t create, can’t fully interpret, and may not agree with. Without a feedback loop, we risk reducing the clinician’s role to that of a reluctant human override button.
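What would the alternative look like? Here is a minimal sketch, in Python, of a clinician-in-the-loop design. Every name and field in it (AIRecommendation, record_decision, and so on) is hypothetical, not any vendor's API; the point is the shape: the model proposes, the clinician decides, and an override is structured data that flows back to governance rather than a dismissed pop-up.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Hypothetical sketch - not a real EMR API. The model proposes, the
# clinician decides, and the decision (with reasons) becomes audit data.

@dataclass
class AIRecommendation:
    model_version: str   # exactly which model made this call
    patient_ref: str     # de-identified reference, never raw identifiers
    suggestion: str      # e.g. "low risk - discharge appropriate"
    rationale: str       # the model's stated basis, shown to the clinician
    issued_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class ClinicalDecision:
    recommendation: AIRecommendation
    accepted: bool
    override_reason: Optional[str] = None

def record_decision(rec: AIRecommendation, accepted: bool,
                    override_reason: Optional[str] = None) -> ClinicalDecision:
    """Log the clinician's decision alongside the AI's suggestion."""
    if not accepted and not override_reason:
        # The reason IS the feedback loop - an unexplained override
        # teaches the system (and its governors) nothing.
        raise ValueError("An override must state why.")
    decision = ClinicalDecision(rec, accepted, override_reason)
    # In a real deployment this would land in an auditable store that the
    # governance group reviews - the trail a coroner could actually follow.
    print(f"[audit] model={rec.model_version} accepted={accepted} "
          f"reason={override_reason or 'n/a'}")
    return decision
```

The design choice that matters is the mandatory override reason: it turns each reluctant override into feedback that can be counted, reviewed, and used to revalidate the model.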
The Quiet Creep of Risk Dumping
Movie: Minority Report (2002)
In Minority Report, decisions are made based on prediction. The system says someone will commit a crime, and that’s enough for action—no trial, no doubt.
We’re not quite there in healthcare, but the logic is creeping in. AI predicts deterioration. AI predicts suitability for discharge. AI says this test isn’t necessary. When it gets it right, we call it efficiency. When it gets it wrong, it’s the clinician’s problem.
This is risk dumping—a subtle but dangerous trend where institutions deploy tools that shift the cognitive burden and medico-legal exposure back onto clinicians without support, explanation, or accountability.
Imagine a tool that says “safe to discharge.” You follow it, and the patient deteriorates. The coroner asks why. The system’s logic? Not available. The developer? Not present. The responsibility? Yours.
Clinicians must be empowered to challenge AI, understand its limitations, and know that governance structures will support them—not punish them—for using their judgment.
Avoiding the Skynet Scenario
Movie: Terminator 2 (1991)
The threat in Terminator 2 wasn’t robots. It was a system—Skynet—that made decisions faster than humans could question them. It was optimised for one outcome, at the expense of all others.
This is not science fiction anymore.
Without oversight, AI systems in healthcare can become black boxes optimised for throughput or cost reduction, not care. They may unintentionally encode bias against certain groups, cut corners on uncertainty, or mask the very human complexity that defines patient care.
That’s why governance must be continuous—not a one-off ethics review. It needs to evolve as the system evolves. It must include real-time feedback, clinical validation, and open dialogue between frontline users, developers, and health system leaders.
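As a rough illustration of what “continuous” could mean in code (every name and threshold below is an assumption for the sketch, not a recognised standard): a recurring check that compares the model’s predictions against observed outcomes, watches the clinician override rate, and escalates to the governance group when either drifts.

```python
# Illustrative monitoring sketch - thresholds are placeholders, not policy.
# Governance as a recurring job: compare the model against reality, and
# treat clinician overrides as signal, not noise.

def review_model_health(predictions: list[bool], outcomes: list[bool],
                        overrides: int, total_recommendations: int,
                        agreement_floor: float = 0.85,
                        override_ceiling: float = 0.15) -> list[str]:
    """Return governance flags for this review period (empty list = no flags)."""
    if not predictions or total_recommendations == 0:
        return ["no data this period - that is itself a finding"]

    flags: list[str] = []

    agreement = sum(p == o for p, o in zip(predictions, outcomes)) / len(predictions)
    if agreement < agreement_floor:
        flags.append(f"prediction-outcome agreement {agreement:.0%} is below "
                     f"{agreement_floor:.0%}: trigger clinical revalidation")

    override_rate = overrides / total_recommendations
    if override_rate > override_ceiling:
        flags.append(f"override rate {override_rate:.0%} exceeds "
                     f"{override_ceiling:.0%}: frontline users are routing "
                     "around the tool - review the workflow with them")

    return flags
```

A rising override rate deserves the same attention as falling accuracy: it is the frontline telling the governance group that the tool and the clinical workflow have diverged.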
This Time, We Write the Script
Movie: Inception (2010)
In Inception, dreams are planted and realities reshaped. The protagonists learn that the only way to protect the mind is to take control of the architecture.
In healthcare, AI is already shaping our workflows, our decisions, and—ultimately—our outcomes. The question is: do we architect the system, or does it architect us?
Clinicians need a seat at the table—not just on governance boards but in data selection, model design, and feedback loops. We must advocate for:
Transparent AI that shows its workings
Equitable data that represents our patients
Tools that serve real clinical needs—not just abstract ones
Shared responsibility between people and systems
Governance is the scaffolding that lets us build something robust. Without it, we’re just improvising.
Final Scene: Let’s Not Be Extras in Our Own Story
We’ve seen what happens when healthcare tech is built without us. We’ve adapted to it, worked around it, and paid the price for it. This time, we don’t have to.
AI can be transformative—but only if it’s built with purpose, governed with rigour, and designed for people.
Let’s:
Treat governance as clinical safety, not compliance
Reject black boxes and demand transparency
Avoid risk dumping by aligning responsibility with control
And build tools that respect clinical judgment, not replace it
There’s no post-credits fix in healthcare. No alternate ending. Just us—clinicians, patients, developers—deciding what kind of system we want to build.
Let’s make it one worth working in.