Some bias in artificial intelligence is unavoidable and essential for learning, says expert solution architect Philip Stalley-Gordon. However, harmful biases arising from historical or systemic prejudice must be identified and addressed.
“Everyone needs to be taught about bias in artificial intelligence, because otherwise we end up embedding it in our models… the horse will be bolted, and it’ll be very, very difficult for us to rein it back in,” he told a webinar last week.
Mr Stalley-Gordon is Principal Enterprise/Solution Architect and AI principal for Australia and New Zealand at Dedalus.

In the educational webinar Breaking the bias: Equitable AI at the heart of design, facilitated by Dr Louise Schaper, he talked through various forms of bias, including explicit (conscious) and implicit (unconscious) biases and how they entered AI systems through data.
“Bias itself is really important inside AI models… you do need those slight parts of bias. Where it becomes really problematic is where you have biases that are harmful,” he said.
“Explicit bias is where people have that conscious bias… implicit bias is a much more nuanced, subtle effect, where it’s unconscious… and when you have that bias in the data, it does propagate into AI models.
“You need to understand the different types of bias that can occur in data… data-induced bias… historical bias… representational bias… measurement bias… labelling bias… association bias.
“The proactive idea is to have this in your thoughts before you even get anywhere near the design process… so that embeds fairness and equity into the core of any AI model.
“Scrutinise the intended use case and think about the bias… look hard in the data… make sure data is unbiased and as representative as possible,” he said.
LEADERSHIP BUY-IN
Mr Stalley-Gordon stressed the importance of top-down commitment from leadership. He also called for diverse teams, because “without different life experiences and backgrounds, you can have blind spots”.
He said an interesting concept was to define success metrics beyond ‘accuracy’.
“When you are using equitable AI you need a broader lens, you’ve got to look a bit further beyond that.
“You need to integrate fairness metrics into the core objectives for success. It sounds counterintuitive but you could be 100% accurate in one way but the success is actually completely off.
“So you need a balance between robustness, which is the way the model performs in different scenarios, safety, and obviously fairness.
“You look at those and work them together. So you do ‘accuracy versus fairness’, and ‘accuracy versus robustness’, and ‘fairness versus robustness’, and every single different type of way to actually make sure that the outcomes become equitable for success rather than just looking at accuracy.”
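To make that concrete, the short sketch below (an editorial illustration, not from the webinar or Dedalus) reports accuracy alongside two simple group-fairness gaps – the demographic parity and equal opportunity measures mentioned in the next paragraph – for a binary classifier. All data, group labels and function names are made up for the example.

```python
# Minimal sketch: report fairness gaps alongside accuracy (illustrative only).
import numpy as np

def fairness_report(y_true, y_pred, group):
    """Compare accuracy with simple group-fairness gaps.

    y_true, y_pred : 0/1 arrays of true labels and model predictions
    group          : array of group identifiers (e.g. a protected attribute)
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    accuracy = (y_true == y_pred).mean()

    groups = np.unique(group)
    # Demographic parity: positive-prediction rate per group.
    pos_rate = {g: y_pred[group == g].mean() for g in groups}
    # Equal opportunity: true-positive rate per group.
    tpr = {g: y_pred[(group == g) & (y_true == 1)].mean() for g in groups}

    return {
        "accuracy": accuracy,
        "demographic_parity_gap": max(pos_rate.values()) - min(pos_rate.values()),
        "equal_opportunity_gap": max(tpr.values()) - min(tpr.values()),
    }

# Illustrative usage with made-up predictions for two groups.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(fairness_report(y_true, y_pred, group))
```

A model can score well on accuracy while showing large gaps between groups, which is the “accuracy versus fairness” trade-off he describes.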
He said it was vital to consider demographic parity, equal opportunity, equalised odds and counterfactual fairness.
“With counterfactual fairness, when you look at a model, you take an individual rather than the group, and you put it through the model, and change one of their protected attributes. If you change from male to female, or female to male, the result should be identical.”
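The check he describes can be sketched in a few lines. In the illustrative example below the scoring function, its feature names and weights are invented (a biased weight on ‘sex’ is included deliberately so the check has something to find); it is not Dedalus code.

```python
# Counterfactual-fairness sketch: flip one protected attribute and re-score.

def risk_score(record):
    """Toy scoring model: a weighted sum of features (illustrative only)."""
    score = 0.02 * record["age"] + 0.5 * record["prior_admissions"]
    score += 0.3 if record["sex"] == "male" else 0.0   # the bias the check should expose
    return score

def counterfactual_check(record, protected_attr="sex",
                         swap={"male": "female", "female": "male"}):
    """Compare a record's score with that of its counterfactual twin."""
    twin = dict(record)
    twin[protected_attr] = swap[record[protected_attr]]
    original, counterfactual = risk_score(record), risk_score(twin)
    return {"original": original,
            "counterfactual": counterfactual,
            "difference": abs(original - counterfactual)}  # should be ~0 for a fair model

patient = {"age": 60, "prior_admissions": 2, "sex": "female"}
print(counterfactual_check(patient))   # a non-zero difference flags the unfair attribute
```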
When it comes to good practices with data, he had the following advice:
- Ensure data is accurate, representative, and unbiased.
- Localisation is essential: models must be retrained with local data.
- Be cautious with synthetic data; it can introduce hidden bias.
- Maintain continuous monitoring because models drift over time (a simple drift check is sketched after this list).
- For risk mitigation, do a ‘pre-mortem’ rather than a ‘post-mortem’ analysis: think of every scenario before you build the model and ask ‘what if it goes wrong?’
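On the monitoring point, one common way to watch for drift is to compare a feature’s live distribution against its training-time distribution. The sketch below uses the Population Stability Index (PSI); the data, thresholds and variable names are illustrative assumptions, not a Dedalus method.

```python
# Minimal drift-monitoring sketch using the Population Stability Index (PSI).
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample and a recent sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log(0) / division by zero for empty bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Illustrative usage: baseline sample vs. a shifted live sample.
rng = np.random.default_rng(0)
baseline = rng.normal(50, 10, 5000)   # e.g. patient ages seen at training time
live = rng.normal(55, 12, 5000)       # shifted distribution in production
print(f"PSI = {population_stability_index(baseline, live):.3f}")
# Common rule of thumb: PSI above roughly 0.2 suggests material drift worth investigating.
```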
Mr Stalley-Gordon will speak at HIC2025 in Melbourne on Monday August 18 from 11.15am. Session 105. See HIC Program for more details.