From automating tasks to improving decision-making and enhancing overall productivity, data and AI have the power to transform organisations for the better.
But as we continue to learn more about AI systems, the need to address AI bias becomes more pressing. The impact of bias creeping into AI models, and the lack of diversity in the data sources used to train them, is a growing concern. At a time when more businesses are looking to integrate these systems across their operations, it’s crucial that business leaders understand just how harmful biased models can be.
Take facial recognition, for example. In 2018, the Gender Shades project from the MIT Media Lab revealed discrepancies in how well facial recognition technology worked across different skin tones and sexes: the algorithms tested consistently performed worst for darker-skinned females and best for lighter-skinned males.
Another example comes from a 2019 study of a clinical algorithm widely used in US hospitals. It found that Black patients had to be considerably sicker than white patients to be recommended for the same level of care. This was because the algorithm used healthcare costs as a proxy for health needs, and its data reflected a history in which less was spent on the care of Black patients than on equally sick white patients, a result of longstanding wealth and income disparities. The algorithm has since been corrected and no longer uses cost as a proxy for need.
According to techUK, less than a tenth (8.5%) of senior leaders in UK tech are from ethnic minority groups, only 16% of IT professionals are female and less than one in ten (9%) of all IT specialists have a disability.
Not only that, but diversity in data and technology workforces around the world appears to be worsening year on year. According to Harnham’s 2023 Diversity in Data Report, in the UK alone the pool of entry-level Black, Asian and Minority Ethnic (BAME) professionals fell from 42% in 2022 to 12% in 2023. Similarly, as of 2023 only 17% of data professionals are women, down nine percentage points from 2022.
So, if our tech workforce – the people who actually build the models – doesn’t reflect all backgrounds, genders, ages, abilities and ethnicities, it’s hardly surprising that the technology ends up skewed towards a narrow set of viewpoints.
The output is only as good as the input
Bias in AI typically stems either from developers creating algorithms that reflect unintended human bias, or from a lack of varied datasets being used to train the system.
AI isn’t ‘making a choice’ to be fair, neutral or biased. The algorithms that make up large language models (LLMs) are only as good as the data they are trained on. Put simply, if the data is biased then, naturally, the AI model will be too.
When it comes to AI, data is not just a resource; it’s the knowledge bank from which systems learn, evolve and make decisions. The responsibility involved in curating a dataset is huge, because the dataset is what allows the system to accurately mirror and respond to the full diversity of society. In the development of AI models this is captured by the principle of intersectionality, which recognises that identity and experience are shaped by the interplay of race, gender and class. Fundamentally, if the dataset is not sufficiently diverse, the models and systems built on it will fall short of true representation.
For the model to form a general understanding of human experience, and to avoid bias, developers must train it on a combination of diverse, high-quality datasets. This helps ensure the system not only performs accurately but is also ethically responsible and sensitive to the richness of human identities and experiences.
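One practical way to act on this is to audit the composition of the training data before any training run. The sketch below is a minimal illustration of such a check, not a production tool: the subgroup labels, population shares and `representation_report` helper are all hypothetical, and a real audit would use properly defined demographic categories and more rigorous statistical tests.

```python
from collections import Counter

def representation_report(samples, population_shares, tolerance=0.5):
    """Flag subgroups whose share of the training data falls well below
    their share of the population the model is meant to serve.

    `samples` is one subgroup label per training example;
    `population_shares` maps subgroup -> expected share (summing to 1).
    """
    counts = Counter(samples)
    n = sum(counts.values())
    report = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / n
        # Flag the group if its observed share is less than
        # `tolerance` times its expected share.
        report[group] = (observed, observed < tolerance * expected)
    return report

# Hypothetical training set and target population shares.
samples = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50
population_shares = {"group_a": 0.5, "group_b": 0.3, "group_c": 0.2}

for group, (share, flagged) in representation_report(samples, population_shares).items():
    status = "UNDER-REPRESENTED" if flagged else "ok"
    print(f"{group}: {share:.1%} of training data ({status})")
```

A check like this won’t catch every source of bias, but it makes under-representation visible early, when it is still cheap to fix by collecting more data.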
Tackling bias one step at a time
Mitigating the threat of bias in AI doesn’t end at the development stage. The model needs to be monitored throughout its entire lifecycle.
Firstly, developers need to lead by example by encouraging diversity of lived experience within the development team. Filling your workforce with professionals of all backgrounds, genders, ethnicities and ages provides a range of perspectives that can be applied to machine learning.
The data side, however, is a tougher hill to climb. Developers must be careful in how they curate data and must consider the ethical consequences of their work, to ensure diversity and inclusion are at the heart of these models.
Regular testing and ongoing monitoring are essential to ensure the model works as intended. Couple this with a diverse workforce, and you’re on your way to developing a more accurate and reliable AI system that, used correctly, has the potential to revolutionise diversity, equity and inclusion (DEI) in the workplace.
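To make ‘regular testing’ concrete, here is a minimal sketch of a disaggregated evaluation, the kind of per-subgroup accuracy check that surfaced the Gender Shades findings. The group labels, records and `accuracy_by_group` helper are hypothetical stand-ins for a properly sampled evaluation benchmark.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute accuracy separately for each demographic subgroup.

    `records` is an iterable of (group, prediction, label) tuples.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, prediction, label in records:
        total[group] += 1
        if prediction == label:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Hypothetical evaluation records: (subgroup, model prediction, true label).
records = [
    ("lighter-skinned male", 1, 1),
    ("lighter-skinned male", 0, 0),
    ("darker-skinned female", 1, 0),
    ("darker-skinned female", 0, 0),
]

scores = accuracy_by_group(records)
gap = max(scores.values()) - min(scores.values())
for group, score in scores.items():
    print(f"{group}: {score:.0%} accuracy")
print(f"Gap between best- and worst-served groups: {gap:.0%}")
```

Running a check like this at every release, rather than once before launch, is what turns testing into the lifecycle-long monitoring described above: a widening gap between groups is an early warning that the model is drifting towards bias.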
Harnessing the power of AI
The benefits AI can bring to DEI in the workplace are significant. Take recruitment as one example: AI can improve hiring processes by rewording job descriptions to appeal to more diverse groups of candidates. It can also identify diversity gaps in the current workforce, which could be instrumental in addressing gender pay gaps by helping to equalise salaries across different levels of the business.
But to reap the benefits, inclusivity must be front of mind. Organisations have a responsibility to ensure bias in AI is reduced. By collaborating closely with developers, businesses can ensure diverse datasets are being used in the models they adopt.
Representation also plays a role in enhancing DEI in the workforce, particularly in traditionally male-dominated fields like tech. Generative AI is already being used to create positive portrayals of diverse groups in these roles; seen from a young age, such portrayals can act as role models and contribute to building a more inclusive workforce.
The responsibility for minimising bias is one that should be shared. By actively cultivating a diverse workforce with a variety of perspectives, ideas and experiences, your business can help propel the industry towards something more innovative and inclusive.