Smarter Triage

In emergency medicine, triage is often the most consequential decision we make—usually within sixty seconds of a patient’s arrival. Who can safely wait? Who might deteriorate unnoticed? Who needs a resus bay—right now?

Historically, our answers have depended on structured tools like the Australasian Triage Scale, combined with clinical gestalt and protocol-driven decision trees. These have generally worked—but they're blunt instruments in the increasingly complex and overcrowded environments we manage.

Enter artificial intelligence.

Machine learning algorithms are now trained to predict critical outcomes, including early deterioration, ICU admission, and in-hospital mortality, using only initial vital signs and basic triage information. Remarkably, some AI models match or surpass experienced clinicians in controlled studies. Even large language models like GPT-4 have demonstrated diagnostic accuracy rivalling or exceeding that of junior doctors in simulated emergency scenarios.
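To make this concrete for readers who haven't seen such a model under the hood, here is a minimal, purely illustrative sketch of the approach: a logistic regression trained on synthetic vital-sign data with an invented deterioration label. None of the feature weights, thresholds, or data reflect any published triage model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic triage data: heart rate, systolic BP, respiratory rate, SpO2, age.
n = 2000
X = np.column_stack([
    rng.normal(90, 20, n),    # heart rate (bpm)
    rng.normal(125, 25, n),   # systolic BP (mmHg)
    rng.normal(18, 5, n),     # respiratory rate (breaths/min)
    rng.normal(96, 3, n),     # SpO2 (%)
    rng.integers(18, 95, n),  # age (years)
])

# Invented labelling rule (an assumption for this sketch): deterioration
# risk rises with tachycardia, hypotension, tachypnoea, hypoxia, and age.
logit = (0.04 * (X[:, 0] - 90) - 0.03 * (X[:, 1] - 125)
         + 0.10 * (X[:, 2] - 18) - 0.25 * (X[:, 3] - 96)
         + 0.02 * (X[:, 4] - 50) - 1.5)
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Predicted probability of deterioration for one hypothetical patient
# who is tachycardic, hypotensive, tachypnoeic, hypoxic, and elderly:
patient = np.array([[130, 85, 28, 89, 76]])
risk = model.predict_proba(patient)[0, 1]
print(f"Predicted deterioration risk: {risk:.2f}")
```

The point of the sketch is what it leaves out: everything the model sees is a structured number, which is exactly the limitation discussed below.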

It’s tempting to be captivated by such promise. Algorithms built on millions of data points, rapidly offering decision-making support or even replacing initial clinical judgement, appear especially attractive in understaffed and overwhelmed emergency departments.

Yet the reality is more nuanced.

Most AI models are trained retrospectively and validated on data similar to their training sets; they are rarely tested robustly in diverse real-world environments. A model that excels in one hospital can falter dramatically in another. Even more concerning, these algorithms risk inheriting biases embedded in their training data. If the dataset reflects historical inequities—under-triaging certain populations, for example—then the AI can inadvertently magnify such disparities.

There's a subtler danger too: loss of context. Consider a patient presenting with vague complaints, normal vital signs, and seemingly minor concerns. An AI-driven triage might confidently assign them low priority. But an experienced clinician notices subtleties—the pallor, the patient’s hesitancy, the strained voice. These human observations aren't captured in structured data points; they’re embedded in intuition shaped by years of patient interactions.

The answer isn't to reject AI outright. It's to reposition it.

AI should become our assistant, not our judge. It should highlight concerning patterns, question assumptions, and prompt clinicians to reconsider initial impressions. AI can dynamically assist in re-triaging as new information arrives, reinforcing rather than replacing clinical insight.
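One way to express "assistant, not judge" in software terms is a design where the model's output is only ever a flag prompting clinician review, recomputed as new observations arrive, and never overwrites the clinician's category. The sketch below is purely illustrative: the `model_risk` function is a stand-in for a trained model, and the thresholds and ATS-style categories are assumptions, not any deployed system.

```python
from dataclasses import dataclass, field

@dataclass
class TriageRecord:
    clinician_category: int                      # e.g. ATS 1 (most urgent) to 5
    observations: list = field(default_factory=list)
    flags: list = field(default_factory=list)

def model_risk(obs: dict) -> float:
    """Stand-in for a trained model: a crude early-warning-style score."""
    score = 0.0
    if obs.get("heart_rate", 0) > 120:
        score += 0.3
    if obs.get("systolic_bp", 120) < 90:
        score += 0.4
    if obs.get("spo2", 100) < 92:
        score += 0.3
    return min(score, 1.0)

def on_new_observation(record: TriageRecord, obs: dict) -> None:
    """Re-run the model on each new observation set. The AI only raises
    a flag prompting review; the clinician's category is never changed."""
    record.observations.append(obs)
    risk = model_risk(obs)
    if risk >= 0.5 and record.clinician_category >= 3:
        record.flags.append(
            f"Model risk {risk:.1f} vs category {record.clinician_category}: "
            "consider re-triage"
        )

record = TriageRecord(clinician_category=4)
on_new_observation(record, {"heart_rate": 88, "systolic_bp": 130, "spo2": 98})
on_new_observation(record, {"heart_rate": 128, "systolic_bp": 85, "spo2": 95})
print(record.flags)
```

Here the second, deteriorating set of observations raises a flag while the clinician's original category stays untouched: the software reinforces clinical insight rather than replacing it.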

We don’t require artificial intelligence that dictates action. We need intelligence that collaborates—AI that thinks alongside clinicians.

Because triage isn't just about categorising patients; it's about navigating urgency, uncertainty, and nuance. For now, and perhaps always, those remain profoundly human skills.
