The AI industry is facing a crisis of bias. It’s leading to skewed predictions and mistrust in technology. Here are 4 ways to address AI bias.
Imagine if your friend, a robot, tries to guess what ice cream flavor you’ll pick at the store. It’s a tough job because, hey, sometimes you just wake up feeling like chocolate instead of vanilla!

AI & ML are those robot friends, trying hard to guess what people will do next. But humans are tricky: we change our minds a lot and like to surprise. AI and ML love clear clues, but our behaviour is a puzzle. They dig through loads of information to make a good guess, but sometimes they get a little mixed up, especially if the info they have is like a story with missing pages or wrong details.
And oh, there’s another twist! Just like a game of telephone might change a message, the information can pick up wrong ideas from people along the way, making the guessing game even harder for our robot friends.
But before we get started: for a deeper dive into rapid AI software launch strategies that rejuvenate your business model, stop by my LinkedIn.
Data Quality & Bias
Predictability? It’s not a human’s strong suit.
AI thrives on patterns; humans are masters of going with the flow and acting in spite of what is expected of them. This makes it difficult for AI and ML algorithms to accurately predict human behaviour.
That’s because AI operates on massive amounts of data. That data, the lifeblood of AI, must be rich, accurate, and timely. This is one of the many challenges with modern AI: how can we accurately predict behaviour when the data is so uncertain?
At the same time, human bias, the unwanted guest, seeps into data, tilting AI perspectives. Inherent bias. Confirmation bias. Selection bias. They obscure the lens through which AI views our world.
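To make selection bias concrete, here is a minimal sketch (entirely synthetic data, made-up group labels) of what happens when the training sample over-represents one group: the model learns that group’s pattern and quietly misreads everyone else.

```python
# Minimal sketch of selection bias: the training sample over-represents
# "group A", so the model learns group A's pattern and misreads group B.
# All data here is synthetic, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, flip):
    """Toy group: one feature, with the label relationship flipped for group B."""
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > 0).astype(int)
    return x, (1 - y if flip else y)

# Biased collection: 950 rows from group A, only 50 from group B.
xa, ya = make_group(950, flip=False)
xb, yb = make_group(50, flip=True)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate on balanced, unseen samples from each group.
xa_test, ya_test = make_group(1000, flip=False)
xb_test, yb_test = make_group(1000, flip=True)
print("accuracy on group A:", model.score(xa_test, ya_test))  # high
print("accuracy on group B:", model.score(xb_test, yb_test))  # near chance or worse
```

The model isn’t “wrong” about the data it was shown; the data itself was a story with missing pages.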
Model Complexity & Interpretability
To get a grip on what AI can and can’t do in the context of understanding and predicting human behaviour, we need to understand how it works.
What is commonly referred to as AI is actually a collection of fields, each with its own methodologies for building “models”. These are bundles of algorithms and computer code that can predict the probability of a certain result, like the chance a person will perform a specific task based on the information they’re given.
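As a sketch of what “predicting the probability of a result” looks like in practice, here is a toy model built with scikit-learn. The feature names and numbers are invented for illustration only.

```python
# Toy sketch of a "model": a bundle of code that maps inputs to a probability.
# Feature names and numbers are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [hours_of_training, prior_attempts]; label: did the person finish the task?
X = np.array([[1, 0], [2, 1], [3, 1], [5, 2], [6, 3], [8, 4]])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)

# predict_proba returns [P(not finish), P(finish)] for each new person.
new_person = np.array([[4, 2]])
print("probability of finishing:", model.predict_proba(new_person)[0, 1])
```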
Now, most of the AI you hear about in the news, like ChatGPT, is powered by a specific kind of model called a deep neural network. These are models whose learnable parameters number in the millions, or even billions, spread across many layers. We have moved far beyond being able to fully trace how these models arrive at their answers.
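For a sense of scale, here is a minimal PyTorch sketch that counts the parameters of even a very small network. The layer sizes are arbitrary; production models scale this same idea up by several orders of magnitude.

```python
# Minimal sketch: count the learnable parameters in a tiny neural network.
# Real-world models scale this same idea up to billions of parameters.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 256),  # 128 inputs -> 256 hidden units
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, 1),    # single output, e.g. a predicted probability logit
)

total = sum(p.numel() for p in model.parameters())
print(f"parameters in this toy network: {total:,}")  # roughly 99 thousand already
```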
Human Factors & Feedback
While AI is doing incredible things in knowledge-based tasks, there are still difficulties when it starts to consider human factors. As we mentioned earlier, AI models require large amounts of consistent data and constant feedback. Two things humans are notoriously bad at: consistency and open feedback.
It’s not just the required engagement between AI and the humans it models that makes predicting behaviour difficult.
The human model (there’s that word again) for understanding the world is extremely complex. Top researchers in neurology have noted that it’s not just your mind that perceives the world but also every digit and inch of skin. Meaning we see the world through millions of data points every second.
Ethical & Social Implications
Making AI a genuinely positive force in our society is, at the moment, an uphill battle.
In the world of artificial intelligence there is a common saying: “garbage in, garbage out”. It refers to the fact that models mirror the data they are given: if bias is fed into an AI, the AI will be biased.
This has huge negative implications as AI is starting to be used to predict human behaviour. For example, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool predicted the likelihood that US criminal defendants would re-offend. In 2016, ProPublica investigated COMPAS and found that the system was far more likely to flag black defendants as at risk of reoffending than their white counterparts.
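The kind of disparity ProPublica reported can be made concrete with a small sketch: given predictions and actual outcomes labelled by group, compare the false positive rate, i.e. how often people who did not re-offend were flagged as high risk anyway. Every record below is invented for illustration, not real COMPAS data.

```python
# Sketch: compare false positive rates across two groups.
# Every record below is invented for illustration, not real COMPAS data.
# flagged = model said "high risk"; reoffended = what actually happened.
records = [
    # (group, flagged, reoffended)
    ("A", 1, 0), ("A", 1, 0), ("A", 1, 1), ("A", 0, 0), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 0, 0), ("B", 0, 0), ("B", 0, 0), ("B", 1, 0), ("B", 0, 1),
]

def false_positive_rate(group):
    """Among people in the group who did NOT re-offend, how many were flagged anyway?"""
    non_reoffenders = [flagged for g, flagged, reoffended in records
                       if g == group and reoffended == 0]
    return sum(non_reoffenders) / len(non_reoffenders)

for group in ("A", "B"):
    print(group, false_positive_rate(group))
# A gap like 0.60 vs 0.25 means one group's non-reoffenders are flagged far more often.
```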
Constant monitoring will be needed.
Want a free guide to nail product-market fit and launch your own AI software in just 8 weeks? Head over to my LinkedIn and grab a free copy.
All the best,
-Adrian