You can implement AI in your business, but if you don’t prioritize safety and fairness, you will not get sustainable growth
Hey guys, it’s Adrian here. If you appreciate my content, consider hitting the like button or sharing this article. It’s the only way the algorithm really notices me.

Safety constraints
One thing experts rarely talk about in AI & ML is how disastrously it has behaved in video games.
Early in its development, reinforcement learning was tested in video games because they offer cheap simulated environments for measuring an agent’s reactions and responsiveness. Reward functions were optimized, but safety constraints were NOT applied.
There are several famous examples. In a boat-racing game, the agent learned it could rack up more points by crashing into competitors and circling for bonuses than by actually winning the race. In Tetris, the AI would pause the game indefinitely rather than lose.
Safety constraints can always be bolted on afterwards, but it is difficult to clearly define every scenario where misalignment can occur. These agents ultimately need to be trained to evaluate safety on their own, as a form of self-constraint.
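To make the idea concrete, here is a minimal sketch of what one explicit safety constraint can look like in code: a reward wrapper for a Gymnasium-style environment that rewards real progress toward the goal and penalizes unsafe events. The info keys `progress` and `collision` are hypothetical stand-ins for whatever your simulator actually exposes.

```python
# Minimal sketch: shaping a reward so the agent is paid for the real goal
# (finishing the race) and penalized for unsafe behaviour (collisions).
# Assumes a Gymnasium-style environment; "progress" and "collision" are
# hypothetical info keys, not part of any specific game.
import gymnasium as gym


class ConstrainedRewardWrapper(gym.Wrapper):
    """Keep the raw game score from dominating the objective."""

    def __init__(self, env, collision_penalty=10.0, progress_bonus=1.0):
        super().__init__(env)
        self.collision_penalty = collision_penalty
        self.progress_bonus = progress_bonus

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        # Reward movement toward the actual goal, not just points scored.
        shaped = reward + self.progress_bonus * info.get("progress", 0.0)
        # Subtract a penalty whenever the agent does something unsafe.
        if info.get("collision", False):
            shaped -= self.collision_penalty
        return obs, shaped, terminated, truncated, info
```

The exact numbers matter less than the shape: the reward the agent optimizes should encode what you actually want from it, not just whatever score the game happens to report.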
Fairness metrics
Publishing AI models without checking for bias or fairness is an easy way to end up in legal trouble or lose your business.
Anyone with experience in machine learning knows that a model’s outputs are a direct reflection of the data fed into it. If the dataset is skewed, the same bias will be reproduced, and often amplified, once the model is deployed at scale.
A famous example is COMPAS, a risk-assessment tool used in criminal sentencing in the southern United States.
Originally intended to remove judges’ bias in sentencing, it flagged Black defendants as likely to reoffend at nearly double the rate of their white counterparts. The police reports and arrest records it was trained on carried that bias straight into the model.
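Checking for this does not require exotic tooling. Below is a minimal sketch of two common group-level fairness checks, the positive prediction rate (demographic parity) and the false positive rate per group. The labels, predictions, and group values are hypothetical placeholders for your own model’s outputs and a protected attribute.

```python
# Minimal sketch of two per-group fairness checks.
# y_true, y_pred, and group are hypothetical arrays standing in for your
# model's labels, predictions, and a protected attribute.
import numpy as np


def fairness_report(y_true, y_pred, group):
    """Per-group positive prediction rate and false positive rate."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    report = {}
    for g in np.unique(group):
        mask = group == g
        # Demographic parity: how often this group gets a positive prediction.
        positive_rate = y_pred[mask].mean()
        # False positive rate: positives predicted for truly negative members.
        true_negatives = mask & (y_true == 0)
        fpr = y_pred[true_negatives].mean() if true_negatives.any() else float("nan")
        report[str(g)] = {
            "positive_rate": float(positive_rate),
            "false_positive_rate": float(fpr),
        }
    return report


# Hypothetical data for illustration: group "b" gets more false positives.
print(fairness_report(
    y_true=[0, 0, 1, 1, 0, 1, 0, 0],
    y_pred=[0, 0, 1, 1, 1, 1, 0, 1],
    group=["a", "a", "a", "a", "b", "b", "b", "b"],
))
```

If the gaps between groups are large, that is your signal to go back to the data before the model ever ships.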
Privacy preservation
This is also where most people misunderstand ChatGPT and the other popular AI tools they use every day. These models don’t create material from nothing; they generate output from patterns in their training data, and they can regurgitate pieces of that data.
Big companies like OpenAI have put real effort into user privacy when building systems at this scale. Smaller open-source LLMs or other RL pipelines may not be so thorough.
If the data these models draw on is stored in an embeddings database (very likely in a retrieval setup), it can be surfaced at any point by the right query.
Professional teams know that any personal data needs to be scrubbed from the source before it is stored anywhere the AI model can access, because the model itself lacks the discernment to filter such information out.
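As a rough illustration, here is a minimal sketch of that scrubbing step: simple regular expressions that redact obvious personal identifiers before a record is embedded or stored. The patterns below are assumptions for the example, not a complete list; real pipelines usually layer a dedicated PII-detection tool on top of rules like these.

```python
# Minimal sketch: redact obvious personal data before a document is embedded
# or written anywhere a retrieval query could surface it.
# These regexes only catch simple patterns and are illustrative assumptions.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def scrub_pii(text: str) -> str:
    """Replace matched personal data with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


record = "Contact Jane at jane.doe@example.com or 555-867-5309 about her claim."
print(scrub_pii(record))
# -> Contact Jane at [EMAIL] or [PHONE] about her claim.
```

Run the scrubber before the embedding step, not after: once raw personal data sits in the vector store, you have to assume it can be retrieved.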