Louise Ryves is speaking on the panel “From Data to Dialogue: Building Public Trust and Value at the Clinical Service Level” at Digital Health Festival on 21 May.
Doctors report spending up to 40 per cent of their working hours on administrative tasks. In a ten-hour clinical day, that’s four hours not spent on patients. The clinical data that accumulates during those hours (consultation notes, patient histories, diagnostic results, referral patterns) sits in systems that were built to store it, not to put it to use when a clinician needs it.

There is no shortage of useful data being collected; the gap is the lack of intelligence applied to it. A specialist opens a patient record and sees raw information, not a summary of what’s changed since the last visit. A clinician manually cross-checks a medication history that a system could flag in seconds. A practice manager triages referrals by hand when the data to prioritise them already exists.
We’ve been capturing clinical data for decades. The next leap forward is making it do something useful, with the governance to match.
Something changed in the consulting room
Two years ago, patients weren’t being asked about data use during a consultation. They are now. AI transcription tools have introduced a direct conversation between doctor and patient about whether the consultation will be recorded and used to generate clinical notes. It’s on new-patient forms and it’s in the consult itself.
As significant as the take-up of AI medical scribes has been, transcription is just the entry point, not the destination. Once a system is capturing and structuring consultation data with consent, what follows matters more: decision support that uses historical unstructured patient data to provide contextual information at the point of care; patient summaries that pull relevant history to the front of a consultation instead of making a clinician scroll through years of notes; search that finds a specific result across an entire care record in seconds.
This all becomes possible once clinical data is structured and available where care is being delivered, and once the consent conversation has already happened.
The behaviour change we stopped needing
For a decade, the medical software industry assumed that clinician behaviour change had to come first. Get them entering data into structured fields. Get them using coded terminology instead of free text. Get them into the cloud. Then the data would be good enough to become helpful.
AI ruptured that sequence. A clinician who prefers to dictate can now generate structured, coded notes without changing how they consult. Free-text notes of the kind clinicians have always written can be interpreted and organised after the fact by systems that read them, rather than requiring a human to re-enter the information in a different format. The system adapts to the clinician.
That changes what’s possible inside a single practice. Software that recognises the patterns in a clinician’s workflow can put the right information in front of them at the right moment, not because they asked for it but because the consultation context makes it relevant. That’s the shift from software that records what happened to software that supports what happens next.
After years of talking about how to meet clinicians where they are, the tools to actually do it have arrived. The platforms that will earn adoption are the ones where intelligence is built into the clinical workflow with consent, not sold as a separate product with a separate login and a separate set of data handling promises.
The trust gap is real and it’s ours to close
Public trust in health data use is conditional, with a broad spectrum of comfort levels. Research from the University of Wollongong’s Australian Centre for Health Engagement, Evidence and Values has consistently found that community willingness to share clinical data drops when a private company is involved, compared with a government body or research institution. That’s a sentiment to be respected, and it needs to shape our approach to innovation.
Earning trust is genuinely hard, and rightly so. Governance has to be visible, specific, and verifiable. That means publishing exactly where patient data is stored, what it’s used for, whether it trains AI models, how identifiers are removed, and who has clinical oversight. The MSIA and MTAA AI Governance Code, which several vendors signed earlier this year, is an important step. But it’s just one, and the companies that publish their commitments in plain detail will be the ones that hold social licence when it’s tested.
What actually shifts trust isn’t reassurance alone; it’s the direct experience of a positive change. A patient whose history is summarised rather than scattered across systems. A clinician who finishes the day having spent less time typing. A specialist who catches an interaction that a manual check would have missed. Trust follows benefit, but only when the governance is already in place to withstand scrutiny.
The conversation has started
Two years ago, no one was asking patients about AI in the consulting room. Now it’s routine in thousands of practices. That shift happened through innovation, and it created something healthcare has needed for a long time: an active, consent-based dialogue between patients and clinicians about how data is used to power the technology that supports their care. The irony is that patients started having this conversation not because we designed a policy framework or ran a public awareness campaign, but because a product needed their permission. That’s a starting point the sector should take seriously, and one we should build upon with care.