A New Nexus

In the 21st century, AI may become the nexus of truth — or delusion.
— Yuval Noah Harari, Nexus

In his recent book Nexus, Yuval Noah Harari makes a stark observation: throughout history, the dominant force shaping societies hasn’t just been power or technology, but the networks of information we build - and the fictions we choose to believe. Whether religious narratives, national myths, or scientific paradigms, humans have always organised around shared truths and delusions. Now, Harari argues, we are handing over control of those narratives to artificial intelligence.

Healthcare - particularly emergency and critical care medicine - is becoming a front line in that transformation.

Emergency medicine is built on fast decisions, incomplete data, and constant triage. Every moment is a negotiation between uncertainty and action. Traditionally, we’ve leaned on heuristics, clinical gestalt, and protocols to make sense of the chaos. But now, AI models trained on vast datasets are being introduced to help clinicians predict outcomes, assign triage categories, and suggest diagnoses. Some perform better than junior doctors in controlled studies. Others flag impending clinical deterioration earlier than the most seasoned ICU nurses.

It’s an astonishing moment: we are beginning to share epistemic authority - our judgment of truth - with machines.

Harari’s warning is not that AI is evil, but that it’s powerful in subtle ways. Algorithms can reinforce truths, or they can reinforce delusions. In emergency care, that might look like:

  • An AI triage system that down-prioritises certain demographics because the training data was biased.

  • A chatbot advising antibiotics when they’re not needed, because its training data reflected years of historical overprescribing.

  • Or, more insidiously, a system that becomes so efficient that clinicians stop asking “why?”

The danger is not that AI will make us less human, but that it might make us less questioning.

Yet Harari also gives us a choice. History, he reminds us, is not deterministic. The trajectory of AI will be shaped by the decisions we make now - not just by engineers, but by clinicians, ethicists, patients, and leaders. In medicine, that means embedding AI tools into workflows in a way that supports, not supplants, human judgment. It means demanding transparency, not magic. It means remembering that pattern recognition is not the same as wisdom.

So, as AI systems increasingly enter the emergency department, the ambulance, the ICU - we must ask:
Are we building systems that deepen our understanding of patients, or ones that simply predict them? Are we designing tools that challenge our biases, or automate them?

Harari’s Nexus isn’t a book about medicine. But it is a book about power - who holds it, who surrenders it, and how that shapes the future. In emergency medicine, where minutes matter and data can deceive, we should take that lesson seriously.

Because what’s at stake isn’t just faster triage or earlier alerts. It’s what kind of care we’re creating - and who gets to define it.
