With the growing incorporation of AI in healthcare, clinicians should remember that “the buck stops” with them. As such, healthcare professionals must ensure that they are informed about AI tools and their clinical implications. That was the message from an expert panel discussion at the recent National Health Summit 2026.
Clinicians will remain responsible for clinical decision-making, so they need to be able to interpret the outputs from an AI tool. “Ultimately the decision-making rests with them,” said Clare Harney, Principal Consultant with Strategic Health Tech.
“It shouldn’t be that clinicians need to learn to be technologists, but it is about raising AI and digital literacy so that they can understand the impact of certain things, but also when not to rely on AI,” she said.
She added that guidance, such as the proposed Health Information and Quality Authority (HIQA) guidelines, will go a long way towards building confidence in AI tools in healthcare, and will allow clinicians to interpret when AI is working well and when it is not.
‘Trust yourself’
“Because AI comes with a disclaimer; our clinicians do not. The buck still stops with the clinician, so building literacy and confidence, ensuring that we maintain a culture within the clinical and healthcare community where questioning AI is encouraged [is essential]. We don’t want people thinking, well that’s what AI says so it must be true even though my clinical judgement says otherwise. Trust yourself,” Ms Harney said.
HIQA’s draft National Guidance for the Responsible and Safe use of Artificial Intelligence in Health and Social Care Services is open for public consultation until March 5, 2026.
The purpose of the guidance is to promote awareness and build good practice among services and staff about the responsible and safe use of AI. It aims to prepare the health and social care sector for the changes that are coming in this area, including the processes needed to deliver robust governance, transparency, and public engagement. The guidance will also educate people using services on how AI can be used safely and responsibly during their care.
The draft guidance focuses on accountability, a human rights-based approach, safety and wellbeing, and responsiveness.
Promoting good practice
Linda Weir, Deputy Director for Health Information Standard with HIQA, said the guideline addresses “the need for trust and confidence among staff and service users”.
“It’s about promoting good practice but also raising awareness of what AI looks like within services, how it is potentially going to be used, setting out the need for strong governance arrangements, good accountability, clear lines of accountability, and clarity around roles and responsibilities,” she said.
“One of the key points that has emerged as part of this work is transparency, around traceability of the tool itself, and clinicians being able to understand how the tool actually works, how these outputs are being delivered, which helps them to interpret them and gives confidence in the tools that they’re using. It also helps them to recognise if a tool isn’t performing as it should and what they need to do in that instance, and gives them that level of comfort and confidence,” Ms Weir said.
She added that the HIQA guidance also aims to increase knowledge and confidence among service users by ensuring that people are “informed about how those tools are being used in their care, that they’re given information around the benefits but also potential limitations of those tools, and that that information is given to them in a way that they can understand.”
Human in the loop
Part of this approach will be increasing awareness about the risk of biases and hallucinations with AI.
“Hallucinations and biases are always going to be there. I don’t think there’s a way to eliminate them yet, so [a human] in the loop is very important,” said Tom Sadler, Data Science and AI Solution Lead with HP in Ireland and the UK.
“It is a digital transformation journey and, for that to be successful, people need to be the first layer – are the people ready? Are they able to embrace it? Then, do you have the data? Then, do you have the governance? And then, do you have the technology? I see far too often people have the technology and then they’re trying to force-feed it, and it tends to fail. Ultimately you need to bring the people along,” Mr Sadler said.
He said there should not be a rush to incorporate AI into clinical care. “With AI, we overestimate the short-term benefits and underestimate the long-term benefits,” he said.
“We need to embrace that it’s going to happen, but we don’t have to sprint to it. The technology is going to grow, it’s going to become better. AI has been around for 70 plus years now, since the Dartmouth conference in 1956. ChatGPT took seven years to make, and everyone’s like, it happened overnight… We’ve come quite far, but there’s still more, and we’re seeing mistakes, so I think we need to embrace it, but we don’t need to sprint to the finish line; it’s a marathon,” he said.
“Whether we like it or not, AI is here to stay,” said fellow panellist Dr Ronan Glynn, Health Sector Lead at EY Ireland. “If we want our clinicians to come on board, at the heart of that has to be a conversation about trust, particularly when it comes to AI… There needs to be transparency. Patients, the public, [and] healthcare professionals need to know what data is going to be used, what’s it going to be used for, what are the safeguards around it.”
Clear accountability
There also needs to be clear accountability. Clinicians need to know “who’s going to be responsible if something goes wrong? What are clinicians responsible for, and what are they not going to be responsible for in this agentic age? If we think that we’re just going to push AI onto clinicians, we’ll get a sharp wake-up call. They need to be involved from the outset,” Dr Glynn said.
“Clinicians need to be shown how to use the technology and how to critically appraise the technology, and they need to be confident that if they have issues with the technology, that those issues will be acted upon and the technology will be improved over time. That optimisation phase is really important. Clinicians need to be trained, they need to have the skills, but there’s a whole layer of work that needs to go underneath that, and it’s at least as important as the technology itself,” he said.