AI continues to reshape our healthcare landscape, and Australia is now standing at the brink of another significant leap: the rise of “agentic AI” – systems that can proactively act and make decisions autonomously on behalf of a human (within defined parameters).
While traditional AI models already analyse data or assist with tasks, agentic AI can independently manage complex interactions, adapt in real time, and have the potential to change many aspects of healthcare delivery.

In the longer term, we could imagine automated clinical assistants that proactively monitor chronic disease indicators from various data sources, autonomously flagging potential health crises before they become acute. Such agents could significantly shift Australia’s healthcare paradigm from reactive treatment towards preventive and continuous monitoring. However, in the short-term, consumers may be fast adopters of agentic AI due to perceived benefits of auto-completion of tedious tasks such as perusing travel and shopping sites: OpenAI and Perplexity have both recently launched “agentic AI” systems that can perform operations within browsers. Such rapid consumer adoption may see pressure for the healthcare system to similarly adopt agentic AI, perhaps more rapidly than healthcare professionals and system operators might be anticipating.
While the Australian government has continued investing in our digital healthcare future with funds to modernise the My Health Record system, realising the full potential of digital health and agentic AI in particular will require eliminating the many pockets of the healthcare system that still rely heavily on legacy IT systems, or are based on paper and manual information flows, all of which can severely impede the integration of modern AI tools.
Fragmented, poor quality and incomplete data also remain critical challenges, as the effectiveness of any AI is fundamentally dependent on the quality of the data it learns from (for training), or depends upon at inference time.
In addition to the technological complexities of incorporating agentic systems into our current healthcare system, there are increased ethical and operational risks.
The need for strong guardrails
Despite its considerable promise, the autonomy and power of agentic AI will amplify current concerns around cybersecurity, transparency, accountability and bias. During their launch, OpenAI directly highlighted the potential for harm from their agentic AI systems such as via “prompt injection” attacks where the AI agents can be forced to spoof systems into nefarious behaviours. These concerns are not theoretical; they are at the heart of the Australian government’s recent proposals for “Introducing mandatory guardrails for AI in high-risk settings,” a category that explicitly includes healthcare.
Led by the Department of Industry, Science and Resources, this initiative acknowledges that existing regulations are insufficient for AI’s unique challenges and aims to build public trust by establishing clear expectations for safety and responsibility.
It is pleasing that key medical bodies are involved in actively shaping these regulations. The Australian Medical Association (AMA) has advocated for a co-regulatory model where government standards are complemented by sector-specific guidelines, stressing the need for meaningful clinician involvement in AI governance.
Similarly, the Royal Australian College of General Practitioners (RACGP) has highlighted the importance of distinguishing between high-risk AI, such as diagnostic tools, and lower-risk applications like appointment scheduling. A core principle underpinning these proposed guardrails is the need for robust human oversight, ensuring that clinicians remain central to patient care, with AI serving to augment, not replace, their expertise.
The ‘Black Box’ Problem: Is Explainable AI (XAI) still a realistic hope?
But we need more than human oversight or human-in-the-loop workflows. We need trust in the systems themselves.
A critical foundation to building trust in the safe use of AI in healthcare is transparency and explainability. That is, for humans to trust and responsibly use AI, they must be able to see and understand the reasoning behind its recommendations – a challenge known as the “black box” problem.
In Australia, significant efforts have been underway to advance Explainable AI (XAI).
CSIRO’s Responsible Innovation Future Science Platform, in collaboration with the Australian e-Health Research Centre, is actively researching how clinicians interact with AI explanations, particularly in high-stakes environments like intensive care units . After all, an explanation is only useful if it is timely, understandable, and relevant to the clinical decision at hand.
This means that involving clinicians directly in the design and testing of these systems is not just beneficial, but essential for their successful adoption, and such involvement is time consuming and costly.
However, even the most explainable and trustworthy systems will not be of benefit if they are so far behind the frontier models in capability that vendors simply do not incorporate them into their systems or workflows.
In my view, XAI technology deployments are always going to lag behind the frontier models of AI since this is not the core focus of the major technology developers who are instead fixated on their race to develop what the hype cycle calls AGI (Artificial General Intelligence) or Super Intelligence.
There is also the challenge of integrating XAI approaches into existing product workflows that often include legacy systems. So, I suspect the jury is still out as to whether XAI can truly offer the benefits we desire given the gulf between them and the increasingly powerful agentic frontier models.
Looking ahead: Making effective use of Agentic AI will require effort
Australian healthcare use of agentic AI stands at an exciting but complex crossroads. With careful planning, robust ethical guardrails and strategic investment, agentic AI can be a powerful partner in transforming our healthcare system. It holds the potential to shift our focus from reactive treatment to genuinely proactive, personalised and preventive care.
But the path forward requires an ongoing effort, eliminating those large islands of disconnected data, interrupted workflows and incorporation of private healthcare providers that so far have not adopted some of the digital backbones present in major public hospitals.
To consider agentic AI systems as powerful partners in our work routines will require trust in the overall systems and workflows that clinicians and regulators rightly demand. There is a short window because consumers will be mystified if the healthcare industry itself does not adopt at least some agentic workflows to increase the efficiency of healthcare e.g. to overcome administrative bottlenecks during intake and discharge.
GenAI has limitations and frustrations, but it is here to stay and will be adopted in various settings whether we like it or not. By proactively piloting and developing guard rails for this transformative technology (particularly agentic AI) with input from clinicians, patients, and policymakers, we can more effectively harness the power and understand the limitations of emerging agentic AI to achieve a safe and efficient healthcare future for all Australians.

Have your say with our Poll question: Is our healthcare system ready to harness the power of agentic AI?