AI.Care 2023: Citizens’ jury convened for consumer view of AI in healthcare

A citizens’ jury convened by the University of Wollongong to provide an Australian consumer perspective on AI in healthcare has recommended the development of a charter for AI that respects the rights of both patients and clinicians when using the technology.

The citizens’ jury idea uses deliberative democratic methods devised by political scientists in the 1970s and 80s to help citizens inform their governments of their views and to determine the future of policy.

Stacy Carter, founding director of the Australian Centre for Health Engagement, Evidence and Values at the University of Wollongong, told the AI.Care conference in Melbourne today that the methodology was used in a National Health and Medical Research Council-funded process that brought together 30 representative Australians over an intensive period to hear from experts about what AI is and how it is likely to affect healthcare, and to come up with joint recommendations.

Some of the ideas that underpin deliberative democratic methods are that “people should have a say in the policies that affect them”, Professor Carter said, and “everyone should have an equal chance to have a say.”

To do this, researchers randomly invite households in the jurisdiction affected by a policy to express an interest in being involved. In this project, which is shortly to be published in the Medical Journal of Australia, 6000 representative households were invited by letter to take part. Just over 100 replied, and 30 were eventually selected.

They were provided with balanced, accurate information from expert witnesses, and were supported to develop clear and relevant recommendations.

“This jury came out of a National Health and Medical Research Council grant that was focused on diagnosis and screening, and particularly the ethical, legal and social implications of using AI for diagnosis and screening,” Prof Carter said.

“And because of that, the question for the jury was framed in this way: under what circumstances, if any, should AI be used in Australian health systems to detect or diagnose disease? You will note that this is a very open question.

“We also asked the jurors … what are the most important issues you’ve heard about from the information you’ve been exposed to? What do you think about the potential benefits, the potential harms and the potential effect of algorithmic bias on outcomes?”

After intensive, in-person discussions, the jurors came up with 10 categories, from which they made 15 recommendations.

The first recommendation called for a charter to underpin the use of AI that addresses the needs of marginalised groups, environmental sustainability, Australian sovereignty and security, and ethics and human rights.

“The charter should be managed by an independent body with an independent chair and diverse representation,” Prof Carter said. “And they wanted that to be dynamic, representative, equitable and free of conflict of interest or bias.”

One category was about balancing benefits and harms, in which the jury recommended continual evaluation to ensure there were benefits to both patients and healthcare professionals, that the benefits outweigh the harms, that there be an intention to include marginalised groups, and that informed choice be supported.

The jury also called for making access to AI in healthcare a universal right for all Australians, with guidelines for patient rights that are fair and inclusive, to make sure that everyone can access effective and affordable care.

They recommended training for healthcare workers before AI is implemented, oversight by relevant professional bodies, including oversight of the training process, and mandatory monitoring, auditing and reporting processes.

“The intention there was to ensure that care is safe, effective and evidence-based, but also to ensure accountability, including for potential misuse,” Prof Carter said. “That need for accountability was another thing that really ran through the recommendations.”

Under technical governance and standards, the jury called for regulators to scrutinise systems regarding their purpose, efficacy, training, data sets, flaws and limitations, to make sure that all of those things are transparent, she said.

“And AI systems should only be approved if they perform equal to or better than current standard healthcare practice, and if they don’t degrade the performance of the healthcare system.”

They also highlighted that AI training datasets should capture Australia’s multiculturalism and diversity to ensure that equity and fairness are upheld.

“In relation to evaluation and assessment, there were calls for very high quality evidence to underpin these systems,” she said. This recommendation resonated with the National AI in Healthcare roadmap released at the conference.

“The jurors called for very high quality evidence based on representative Australian data … and that international data could be used, but only where it’s necessary and justified.

“Trials will need to be designed that are relevant to real-world clinical settings, with transparent analysis and reporting and conclusions that reflect system performance. And the intention there was to guard against hype about the performance of systems, and instead to be assured that the evidence does demonstrate the performance that’s claimed.

“And then finally, there was a call for education and communication … a call for comprehensive and fully funded community education as part of a broader approach to digital health literacy. And the intention here was to bring the community along with these changes in the health system, and to recognise the needs of particular communities.”
