Dermatology AI tools are misdiagnosing skin cancer on people who look like me.
Not because the technology is broken. Because the training data didn’t include enough darker skin tones. Research from the University of Sydney, published in The Lancet Primary Care, makes it plain: these systems are being deployed without the data to diagnose us accurately.

That’s not a bug in the code. That’s a failure of governance, a failure to ask “whose bodies are in this training data?” before deployment.
As a woman of colour working in Australia’s tech workforce and in AI governance, I don’t experience this as an abstract policy debate. It’s personal. These algorithms could miss a melanoma on my skin.
And here’s what keeps me up at night: when that misdiagnosis happens, and it will, the engineer who built the model won’t be in the room explaining what went wrong.
I will. Or someone like me. The clinician. The health executive. The board member who signed off on procurement.
We’re the ones who’ll face the coroner. And right now, we’re flying blind.
That’s not me being dramatic. That’s Associate Professor Liliana Laranjo from the same University of Sydney research: “Without Australian data on how many GPs are using it or proper oversight, we’re flying blind on safety.”
An estimated 40 percent of Australian GPs are now using AI tools. AI scribes. ChatGPT for clinical queries. Patient-facing apps. And yet the people who will be held accountable when these systems fail, the clinicians, executives, and board members, aren’t the ones being equipped to govern them.
The accountability gap hiding in plain sight
We have a governance gap in Australian healthcare that nobody wants to name directly: we’re building AI capability in the wrong rooms.
Every AI upskilling initiative, every “AI literacy” program, every hackathon and innovation sprint is packed with the same faces. Engineers. Data scientists. Technical architects. The people who are genuinely excellent at building things but who will never, ever be held professionally accountable for what those things do to patients.
Meanwhile, the clinical leads, the risk managers, the ethics committees, the executives and board members who will be accountable are expected to govern technology they don’t understand, using frameworks that don’t exist, with language they haven’t been taught.
This is not a training problem. It’s a systemic failure of imagination about where AI risk actually lives.
Homogeneous teams miss homogeneous risks
The dermatology AI example isn’t an outlier. It’s a pattern.
When you build an AI system with a room full of engineers, you get engineering risks surfaced. Model drift. Data quality. Latency. Compute efficiency.
What you don’t get surfaced:
- The legal exposure when the model can’t explain its recommendation
- The clinical workflow implications when the AI contradicts the treating physician’s judgement
- The cultural safety risks when training data reflects historical patterns of discrimination
- The regulatory gap when deployment runs ahead of existing frameworks
- The human rights implications when algorithmic decisions affect access to care
- The question “whose bodies are missing from this dataset?”
Associate Professor Laranjo’s warning is stark: “Generative models like ChatGPT can sound convincing but be factually wrong. They often agree with users even when they’re mistaken, which is dangerous for patients and challenging for clinicians.”
Who catches that risk? Not the engineer optimising for fluency. It takes a clinician who understands the danger of a confidently wrong answer in a diagnostic context. It takes a cultural safety expert who asks whose experiences are missing from the training data. It takes a risk manager who understands liability when “convincingly wrong” becomes “harmfully wrong.”
These aren’t edge cases. They’re the primary risks of AI in healthcare. And they’re invisible to homogeneous technical teams not because those teams are incompetent, but because you can’t see what you haven’t been trained to look for.
Different professional lenses catch different risks. Lawyers see liability. Clinicians see workflow disruption and patient safety. Ethicists see values misalignment. Risk managers see systemic exposure. Indigenous health experts see colonial patterns encoded in data. Women of colour see whose bodies are missing from the training data.
When your AI governance table has one professional perspective, you get one category of risk identified. When it has five, you get five. This isn’t diversity as a nice-to-have. It’s diversity as a patient safety strategy.
The coders aren’t coming to save you
I want to be crystal clear: this is not an attack on engineers or data scientists. The technical community building AI in healthcare is, by and large, doing remarkable work under difficult conditions. Many of them are deeply concerned about the governance gaps they’re seeing.
But here’s the uncomfortable truth they’ll tell you privately: they cannot govern what they build.
Not because they lack capability, but because governance is not their job, not their training, not their professional accountability framework, and not their risk to carry.
When an AI system causes harm, the engineer doesn’t lose their medical registration. The data scientist doesn’t face the health complaints commissioner. The vendor’s technical lead doesn’t explain to a coronial inquest why the algorithm made the recommendation it did.
That falls to the clinicians and health leaders who deployed the system. Who trusted the procurement process. Who signed the contract. Who didn’t ask the questions they didn’t know to ask.
We have outsourced AI governance to the people least equipped to do it and least accountable when it fails. And then we wonder why healthcare AI adoption is stalling, why trust is eroding, and why the governance gap keeps widening.
What good looks like
The University of Sydney researchers are clear about what’s needed:
- Robust evaluation and real-world monitoring of AI tools
- Regulatory frameworks that keep pace with innovation
- Education for clinicians and the public to improve AI literacy
- Bias mitigation strategies to ensure equity in healthcare
Notice what’s at the centre of that list? Education. Capability. Governance literacy for the people actually using and overseeing these tools, not just the people building them.
I work closely with Dr Kobi Leins, who has been instrumental in developing ISO/IEC 42005, the international standard for AI impact assessment. Kobi doesn’t just teach AI governance theory; she writes the global rules that organisations will soon be required to follow.
What Kobi will tell you is this: AI governance is not a technical discipline. It’s a translation discipline.
The job is not to understand how a neural network works at the mathematical level. The job is to translate technical risk into clinical risk. To translate regulatory requirements into procurement questions. To translate ethical principles into operational controls.
That translation requires people who understand clinical workflows, legal liability, regulatory frameworks, ethical reasoning, and board-level accountability. It requires people who can sit in a room with engineers and ask the questions the engineers haven’t thought to ask because their professional training taught them different things to look for.
It requires, in other words, exactly the people we are currently failing to train.
A call to health leaders
Forty percent of Australian GPs are already using AI. It’s in your diagnostic imaging. Your pathology workflows. Your bed management systems. Your triage algorithms. Your clinical decision support. Your risk stratification models.
And we’re “flying blind on safety.”
Every one of those systems is making decisions that affect patient care. And every one of those systems will, eventually, be part of a conversation about what went wrong.
When that conversation happens, the engineer will not be in the room.
You will.
The question is whether you’ll have the governance capability to account for what happened or whether you’ll be another case study in what happens when accountability and capability don’t live in the same place.
At Tech Diversity Academy, we have built Australia’s first AI Governance Practitioners Programme specifically for the people who will be held accountable: lawyers, risk managers, compliance leads, clinicians, executives, and board members. Not-for-profit. Designed by Dr Kobi Leins. Built for health leaders who need to govern AI, not build it.
Because the algorithm doesn’t face the coroner.
You do.
Luli Adeyemo is Executive Director of Tech Diversity Academy, a not-for-profit foundation focused on diversity in technology. She is leading the development of Australia’s first Diverse AI Governance Practitioners Programme, designed by Dr Kobi Leins, in partnership with the Australian Computer Society.