
When AI becomes “the boss”: digital health’s next psychosocial risk

27 February 2026
By Alexander Amatus, Business Development Lead, TherapyNearMe

Healthcare has spent the last decade digitising care. Now we’re digitising management.

Not in the “new org chart” sense, but in the everyday reality of rosters, throughput targets, inbox triage, clinical documentation prompts, and performance dashboards. Increasingly, software (sometimes branded as “AI”) is being used to allocate work, monitor compliance, and “optimise” workflows. In other words, it’s edging into the territory traditionally held by line managers.


Digital health leaders should treat this shift as more than a productivity story. It’s also a psychological safety story, and in Australia that is no longer optional. Under the model WHS framework, organisations have clear obligations to manage psychosocial hazards at work, with practical guidance set out in the Safe Work Australia model Code of Practice on managing psychosocial hazards.

Why “AI management” feels different to staff

Classic “bad boss” dynamics are familiar in healthcare: unrealistic demands, inconsistent feedback, poor communication, and low psychological support. What changes when AI is in the loop is the texture of power. Rules become less negotiable. Decisions look objective. Appeals feel harder. And staff may not know who is accountable: “the system” or a human.

Evidence reviews of harmful supervisory behaviours (including abusive supervision) consistently associate them with negative outcomes for workers’ wellbeing and functioning, as summarised in systematic literature reviews such as Abusive supervision: a systematic literature review. The point isn’t that algorithms are “abusive”; it’s that poorly governed management systems can reproduce the same harms at scale.

In practical terms, many of the signals staff use to assess safety (tone, empathy, discretion, context) are the things AI struggles to deliver consistently. The result can be a workplace that feels tightly controlled, harder to navigate, and emotionally colder, even if headline metrics improve.

What the research says about algorithmic management and psychosocial risk

Internationally, there’s growing attention on “algorithmic management”: software that partially automates tasks traditionally done by managers. The OECD defines and examines this trend in Algorithmic management in the workplace, noting both the promised benefits (consistency, efficiency) and evidence of potential detrimental impacts depending on design and implementation.

In occupational health literature, emerging studies and reviews link certain forms of algorithmic management, especially performance monitoring and work intensification, to higher psychosocial risks. For example, recent work, including Algorithmic management is associated with psychological…, points to negative effects when monitoring and task control are implemented in ways that reduce autonomy and increase pressure.

Healthcare is particularly exposed because it already runs close to capacity. A “small” increase in administrative friction or workload intensity can translate into outsized impacts on fatigue, moral distress, and retention.

The “bad boss” problem is already big; AI can amplify it

Most experienced digital health leaders don’t need a lecture on burnout. But it’s worth highlighting that leadership and management quality is not a soft factor; it has measurable links to mental health outcomes. A meta-analysis on leadership and followers’ mental health (A meta-analysis of the relative contribution of leadership…) supports the idea that leadership styles make distinct contributions to worker mental health.

In Australian primary care training, for instance, an AJGP supplement reported high levels of burnout among GP supervisors, with findings discussed in Burnout and retention of general practice supervisors. Whether you agree with every definition, the direction of travel is clear: the workforce is strained, and management practices matter. 

This is where it’s useful to separate “boss behaviour” from personality. Many of the worst workplace dynamics are structural: unclear expectations, persistent overload, inconsistent rules, and weak feedback loops. A plain-language synthesis of what the evidence tends to show about harmful management patterns and protective steps is relevant here, because AI-enabled management can unintentionally replicate those patterns (e.g., constant monitoring, reduced discretion, and “numbers-first” prioritisation).

The risk for health IT is that AI becomes the accelerant: it makes high-pressure management easier to scale, easier to justify, and harder to challenge.

So what should Pulse+IT readers do differently?


The WHO’s Guidelines on mental health at work emphasise organisational interventions, manager capability, and system-level prevention, rather than placing the burden solely on individual resilience. That aligns closely with Australia’s WHS direction of travel.

For digital health and IT leaders, a practical approach is to treat “AI management” as a safety-critical system with a psychosocial risk register. Concretely:

1) Classify AI-enabled management tools as psychosocial risk controls or hazards

If a tool changes workload, autonomy, monitoring intensity, or decision transparency, it is operating in psychosocial hazard territory. Use the language of the Safe Work Australia psychosocial hazards framework to structure your assessment and controls.
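
As a rough sketch of what that classification could look like in practice, the register entry below records those dimensions for a single tool. The field names, scoring scale, and example values are illustrative assumptions, not taken from the Safe Work Australia code:

```python
from dataclasses import dataclass, field
from enum import Enum

class HazardDimension(Enum):
    """Psychosocial dimensions an AI-enabled management tool can affect."""
    WORKLOAD = "workload"
    AUTONOMY = "autonomy"
    MONITORING_INTENSITY = "monitoring_intensity"
    DECISION_TRANSPARENCY = "decision_transparency"

@dataclass
class RiskRegisterEntry:
    """One AI-enabled management tool, assessed as a psychosocial hazard.

    Fields are illustrative; map them onto your organisation's existing
    WHS risk-register template rather than treating this as a standard.
    """
    tool_name: str
    owner: str                            # an accountable human, not "the system"
    dimensions_affected: list[HazardDimension]
    likelihood: int                       # e.g. 1 (rare) to 5 (almost certain)
    consequence: int                      # e.g. 1 (minor) to 5 (severe)
    controls: list[str] = field(default_factory=list)
    review_due: str = ""                  # ISO date of the next scheduled review

    @property
    def risk_score(self) -> int:
        """Simple likelihood x consequence score used to triage reviews."""
        return self.likelihood * self.consequence

# Example: a rostering optimiser that changes workload and autonomy.
entry = RiskRegisterEntry(
    tool_name="auto-rostering engine",
    owner="Director of Nursing",
    dimensions_affected=[HazardDimension.WORKLOAD, HazardDimension.AUTONOMY],
    likelihood=3,
    consequence=4,
    controls=["human sign-off on published rosters", "fatigue-rule constraints"],
    review_due="2026-06-30",
)
print(entry.tool_name, entry.risk_score)  # -> auto-rostering engine 12
```

The discipline that matters here is less the score than the two fields a purely technical review tends to omit: a named accountable owner and a review date.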

2) Build “explainability” into management workflows, not just models

Staff need to know:

  • what data is being used
  • what the tool is optimising for
  • what human oversight exists
  • how to appeal or correct decisions

The OECD’s discussion of risks and design choices in algorithmic management is a good starting point for governance questions, not just technical ones.
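
One way to make those four questions operational, sketched here on the assumption that each AI-emitted decision can carry structured metadata, is to attach a human-readable “decision record” to every decision the tool makes. The schema below is hypothetical, not a reference to any product or standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionRecord:
    """Explainability metadata attached to each AI-emitted management decision.

    One field per question staff need answered; names are illustrative.
    """
    decision_id: str
    decision: str             # what the tool actually did
    data_used: list[str]      # what data is being used
    optimising_for: str       # what the tool is optimising for
    human_overseer: str       # what human oversight exists (a named role)
    appeal_route: str         # how to appeal or correct the decision

record = DecisionRecord(
    decision_id="2026-02-27-0042",
    decision="Inbox item escalated to same-day clinical review",
    data_used=["message category", "wait time", "clinic capacity"],
    optimising_for="median time-to-response",
    human_overseer="Practice Manager",
    appeal_route="flag in worklist; reviewed within one business day",
)
```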

3) Protect autonomy in clinical work

If AI pushes work allocation and timing too hard, without clinical context, it can undermine professional judgement and increase moral distress. Autonomy isn’t a luxury feature; it’s a resource that buffers strain.
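
A minimal sketch of one protective pattern, assuming a work-allocation tool: the system recommends, the clinician decides, and overrides are logged as design feedback rather than flagged as non-compliance. All names here are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Allocation:
    task_id: str
    recommended_to: str
    rationale: str            # shown to the clinician, never hidden

def allocate(allocation: Allocation, clinician_choice: str, override_log: list) -> str:
    """Apply the clinician's decision; record overrides as feedback for
    the tool's designers, not as a compliance failure for the clinician."""
    if clinician_choice != allocation.recommended_to:
        override_log.append({
            "task_id": allocation.task_id,
            "recommended": allocation.recommended_to,
            "chosen": clinician_choice,
            "at": datetime.now(timezone.utc).isoformat(),
        })
    return clinician_choice   # the human decision is always final

overrides: list = []
final = allocate(
    Allocation("T-118", recommended_to="RN pool", rationale="lowest current load"),
    clinician_choice="senior RN",   # clinical context the model lacked
    override_log=overrides,
)
```

A rising override rate is then a signal that the tool is missing clinical context, which is exactly the information a “numbers-first” deployment throws away.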

4) Measure what matters: don’t let efficiency become the only KPI

Track human outcomes alongside throughput:

  • reported workload sustainability
  • roster stability
  • sick leave and turnover
  • incident reports linked to fatigue or time pressure
  • staff perceptions of fairness and control

This approach aligns with psychosocial risk management guidance embedded in frameworks like ISO 45003, which focuses on managing psychosocial risk within an OH&S management system.
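
As an illustrative sketch only (the metric names and thresholds below are assumptions, not ISO 45003 requirements), human-outcome indicators can sit in the same review as throughput so that neither is read in isolation:

```python
# Illustrative only: pair throughput with human-outcome indicators so a
# review never sees efficiency numbers without the psychosocial ones.
throughput = {"tasks_closed_per_fte": 41.2, "median_response_hours": 3.1}

human_outcomes = {
    "workload_sustainability_score": 6.4,   # staff survey, 0-10
    "roster_changes_per_fortnight": 2.8,    # instability proxy
    "unplanned_leave_rate": 0.051,          # fraction of rostered hours
    "fatigue_linked_incidents": 3,          # from incident reports
    "fairness_perception_score": 5.9,       # staff survey, 0-10
}

# Hypothetical review rule: efficiency gains don't count if human
# indicators deteriorate past agreed thresholds.
ALERT_THRESHOLDS = {
    "workload_sustainability_score": 6.0,   # alert if below
    "fairness_perception_score": 6.0,       # alert if below
    "unplanned_leave_rate": 0.045,          # alert if above
}

alerts = []
for metric, threshold in ALERT_THRESHOLDS.items():
    value = human_outcomes[metric]
    breached = value > threshold if metric == "unplanned_leave_rate" else value < threshold
    if breached:
        alerts.append(f"{metric}: {value} breaches threshold {threshold}")

print(alerts)  # -> fairness and leave-rate alerts in this example
```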

5) Train human managers to manage “around” AI

Even the best tool will fail if managers outsource judgement to it. Manager training, explicitly recommended in the WHO guidelines, is the difference between AI as support and AI as an unaccountable authority.

The final word

In digital health, we are rightly excited about AI that improves clinical quality, reduces documentation burden, and accelerates access. But when AI starts shaping how people are managed (how work is allocated, monitored, and judged), it becomes a psychosocial design problem as much as a technical one.

The industry has a choice: implement AI in ways that quietly intensify work and erode autonomy, or implement it with governance that strengthens fairness, clarity, and psychological safety.

If AI is going to be “part of management,” it has to be managed like it matters.


Alexander Amatus, MBA, is Business Development Lead at TherapyNearMe.com.au. He works at the intersection of clinical operations, AI-enabled care pathways, and sustainable digital infrastructure. He is an AI expert who leads a team developing a proprietary AI-powered psychology assistant, psAIch.

