We’re feeding AI racist and sexist data and the results could be disastrous

We’re only scratching the surface of the ways that artificial intelligence (AI) can improve technology, automate boring processes and revolutionise our world. AI and machine learning are behind tech like the virtual assistants that live in our phones and powerful search engines like Google – and they have the potential to do so much more. But there’s a catch.

AI is only as good as the data it’s trained on. And, since the world is far from perfect, machines are beginning to pick up the biases already present in our society. We take a look at three ways AI bias has caused problems, and how we can fix them.


1. African Americans are more likely to be flagged by AI as ‘dangerous criminals’

What happened?

The US courts were worried about how human bias might affect decisions like prison sentencing, so they turned to technology to help alleviate that bias.

It turns out that the artificial intelligence program widely used to support those decisions was almost twice as likely to mistakenly label black defendants as likely re-offenders (45%) as it was to mislabel white defendants (24%).
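If you’re curious how a gap like that is measured, here’s a minimal sketch in Python. The handful of records below is completely made up (it’s not the real court data); the point is just that auditors calculate the false positive rate – people flagged as high risk who never re-offended – separately for each group, then compare.

```python
# A toy audit of per-group false positive rates.
# These records are invented for illustration only.
records = [
    {"group": "black", "flagged_high_risk": True,  "reoffended": False},
    {"group": "black", "flagged_high_risk": True,  "reoffended": False},
    {"group": "black", "flagged_high_risk": False, "reoffended": False},
    {"group": "white", "flagged_high_risk": True,  "reoffended": False},
    {"group": "white", "flagged_high_risk": False, "reoffended": False},
    {"group": "white", "flagged_high_risk": False, "reoffended": False},
    {"group": "white", "flagged_high_risk": False, "reoffended": False},
    # ... a real audit would have thousands of rows, including re-offenders
]

def false_positive_rate(rows):
    """Share of people who did NOT re-offend but were still flagged as high risk."""
    non_reoffenders = [r for r in rows if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["flagged_high_risk"]]
    return len(flagged) / len(non_reoffenders)

for group in ("black", "white"):
    rate = false_positive_rate([r for r in records if r["group"] == group])
    print(f"{group}: wrongly flagged {rate:.0%} of the time")
```

When those two percentages come out very different, as they did in the real-world reporting, that’s the bias.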

How?

It’s not clear what data is used to inform this algorithm, so there could be any number of sinister factors at play here.

What can we do about it?

The good news is that AI itself is not racist. But limited or skewed data sets can mean skewed results. If you’re keen to work in the field of machine learning, you’ll be deciding the data sets that drive these all-important decisions. You could be the one who ensures unfair considerations like income or postcode don’t factor into such an important algorithm.
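As a taste of what that looks like in practice, here’s a minimal sketch (using pandas, with entirely hypothetical column names and values) of a team deliberately leaving proxy fields like income and postcode out of the data a model is allowed to learn from.

```python
import pandas as pd

# Hypothetical records -- invented purely for illustration.
raw = pd.DataFrame({
    "prior_offences": [0, 2, 1, 3],
    "age":            [23, 31, 45, 29],
    "income":         [32000, 54000, 41000, 27000],      # potential proxy for class
    "postcode":       ["2170", "2026", "2913", "2770"],  # potential proxy for race/class
    "reoffended":     [0, 1, 0, 1],
})

# The team decides -- openly and on the record -- which fields
# the model is never allowed to see.
proxy_columns = ["income", "postcode"]

features = raw.drop(columns=proxy_columns + ["reoffended"])
labels = raw["reoffended"]

print(features.columns.tolist())   # ['prior_offences', 'age']
```

Dropping a column isn’t a complete fix on its own – other fields can encode the same information indirectly – but deciding, documenting and defending what a model is trained on is exactly the kind of judgement call machine learning professionals make.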

Machine learning careers have heaps of awesome applications for making the world a more inclusive place to live. Check them out here.


2. Autonomous vehicles are less likely to ‘see’ people with darker skin tones

What happened?

Three researchers from the Georgia Institute of Technology ran a study (posted on the arXiv preprint server) to test the effectiveness of object detection – the kind of technology behind the ‘sight’ functions in self-driving cars.

The researchers discovered that the systems were around 5% less likely to detect people with darker skin tones. That sounds like a small margin, but multiplied across millions of journeys it could result in a huge number of avoidable accidents.

How?

Since tech companies don’t make their algorithms and data available, we can’t know for sure. But it’s almost certainly because the algorithms aren’t given enough training images featuring people with darker skin tones.

Simply put, these algorithms are ‘fed’ thousands of labelled images to learn from and test their skills on. If they’re not shown enough pictures of people with darker skin tones, the tech will struggle to detect darker skin tones in practice.
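Here’s a minimal sketch of the kind of audit that exposes this gap. The detection results and skin-tone labels below are invented for illustration (the real study used a labelled pedestrian benchmark), but the idea is the same: compare how often the detector finds people in each group.

```python
# Toy audit of a pedestrian detector's performance by skin-tone group.
# Every value here is made up; a real evaluation uses thousands of
# labelled images from a benchmark dataset.
results = [
    {"skin_tone": "lighter", "detected": True},
    {"skin_tone": "lighter", "detected": True},
    {"skin_tone": "lighter", "detected": True},
    {"skin_tone": "lighter", "detected": False},
    {"skin_tone": "darker",  "detected": True},
    {"skin_tone": "darker",  "detected": True},
    {"skin_tone": "darker",  "detected": False},
    {"skin_tone": "darker",  "detected": False},
]

for group in ("lighter", "darker"):
    group_results = [r for r in results if r["skin_tone"] == group]
    detection_rate = sum(r["detected"] for r in group_results) / len(group_results)
    print(f"{group} skin tones: {len(group_results)} examples, "
          f"detected {detection_rate:.0%} of the time")
```

If one group has far fewer examples, or a noticeably lower detection rate, that’s the gap to fix – usually by collecting more representative training images.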

What can we do about it?

Thankfully, there’s time to work out these issues before autonomous vehicles hit the market. Researchers like Anjali Jaiprakash are exploring the next frontier of robotics by giving machines the ability to ‘see’. See how a career like Anjali’s could have you preparing robots for surgery, or making our future roads safer.


3. Amazon’s recruitment AI prioritised male applicants for tech roles

What happened?

Amazon tried to apply its signature innovative approach to recruitment. The company set up a team to build a machine learning algorithm that could automate hiring by comparing incoming candidates against 10 years of resumes from successful Amazon applicants.

The problem? Tech has long been a male-dominated industry, so the algorithm learned and reflected that bias.

How?

The algorithm began to downgrade resumes that made reference to women – for example, listing participation in a women’s soccer club, or mentioning a women’s-only university.

The tech team noticed that the AI was unfairly discounting eligible female candidates and corrected the glitch. But there was no way to guarantee that the complex algorithm wouldn’t apply bias in other, less visible ways, so the project was (thankfully!) canned.
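Here’s a minimal sketch of how this kind of bias creeps in, using scikit-learn and a handful of made-up resumes – nothing to do with Amazon’s actual system or data. Because the historical ‘hired’ labels skew male, a simple text model ends up learning words associated with women as negative signals.

```python
# Toy example: a resume screener trained on biased historical labels.
# The "resumes" and hired/rejected labels below are entirely invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of men's rugby club, java developer",
    "java developer, hackathon winner",
    "women's soccer club, python developer",
    "women's college graduate, python developer",
]
# 1 = historically "successful" applicant, 0 = rejected.
# If past hiring skewed male, labels like these bake that skew into the data.
hired = [1, 1, 0, 0]

vectoriser = CountVectorizer()
features = vectoriser.fit_transform(resumes)
model = LogisticRegression().fit(features, hired)

# Words with the most negative weights -- the ones the model has learned
# to penalise. With biased labels, "women" lands near the top of this list.
words = vectoriser.get_feature_names_out()
weights = model.coef_[0]
for weight, word in sorted(zip(weights, words))[:3]:
    print(f"{word}: {weight:.2f}")
```

In this toy example the model also penalises ‘python’, simply because the rejected resumes happened to mention it – a reminder that biased labels contaminate everything the model learns, not just the obviously gendered words.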

What can we do about it?

AI bias can sneak into an algorithm in unexpected ways. Data science professionals like CommBank’s Amy Shi-Nash are on the case, keeping their eyes peeled for AI bias and challenging the results that AI produces. In future, you could join teams like Amy’s at CommBank to actively fight AI bias and produce fairer machine learning applications.


How do we combat AI bias?

Cases of AI bias don’t prove that artificial intelligence is inherently flawed. Instead, they reflect flaws in opaque algorithms and in training data that carries biases already ingrained in society.

As Microsoft puts it in its inclusive design manifesto:

“If we use our own abilities and biases as a starting point [for tech design], we end up with products designed for people of a specific gender, age, language ability, tech literacy, and physical ability. Those with specific access to money, time, and a social network.”

With greater transparency around the data used to train AI, and a more diverse tech workforce, we can correct this imbalance and start to unlock the magical possibilities that AI has to offer.

So, where do we start? There is already a multitude of diverse Australians of all ages, genders, races, cultural identities, languages and abilities working in STEM. Meet them here, be inspired and maybe you can contribute to a more diverse tech workforce in future!


Author: Eliza Brockwell

Eliza is the Digital Producer for Careers with STEM. She is passionate about creating content that encourages diversity of representation in STEM.
