Racial bias: Correcting the code of facial recognition

Posted by Julia Sieger | 2 weeks ago



In this edition, as a faulty facial recognition match leads to the arrest of an innocent man in the US state of Michigan, we take a look at the underlying racial and gender bias of the technology. We dig deeper into the subject with MIT's Joy Buolamwini, founder of the Algorithmic Justice League, who is helping correct the code.

Joy Buolamwini is a graduate researcher at MIT who describes herself as a "poet of code". She has helped uncover racial and gender bias in AI services from companies like Microsoft, IBM and Amazon.
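Audits of this kind work by benchmarking a commercial classifier against a dataset labelled by demographic subgroup and comparing error rates across those subgroups. Below is a minimal, self-contained sketch of such a disparity audit in Python; the subgroup names echo those used in Buolamwini's Gender Shades study, but the records and numbers are invented for illustration and do not reproduce her actual results or methodology.

```python
# A hypothetical disparity audit: given benchmark records labelled by
# demographic subgroup, compare each subgroup's classification error rate.
# All data here is invented for illustration.

from collections import defaultdict

# Hypothetical records: (subgroup, true_label, predicted_label)
results = [
    ("lighter-skinned male",   "male",   "male"),
    ("lighter-skinned female", "female", "female"),
    ("darker-skinned male",    "male",   "male"),
    ("darker-skinned female",  "female", "male"),    # a misclassification
    ("darker-skinned female",  "female", "female"),
]

totals = defaultdict(int)
errors = defaultdict(int)

for subgroup, truth, prediction in results:
    totals[subgroup] += 1
    if prediction != truth:
        errors[subgroup] += 1

# Report per-subgroup error rates; a large gap between subgroups is the
# signal that the model is biased rather than uniformly inaccurate.
for subgroup in sorted(totals):
    rate = errors[subgroup] / totals[subgroup]
    print(f"{subgroup}: {rate:.0%} error ({errors[subgroup]}/{totals[subgroup]})")
```

The key point the audit makes is that an overall accuracy figure can hide the problem: a system can look acceptable on average while failing far more often for one subgroup than another.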

The topic is making headlines after an innocent man in Michigan was arrested and detained for 30 hours on the basis of a faulty facial recognition match. The cities of San Francisco and Boston have already banned use of the technology, which has proved widely inaccurate, especially for people of colour. We spoke to Buolamwini to find out more.

But first, like their American counterparts in Silicon Valley, French tech companies are searching for ways to become more diverse. Since 2017, the industry has been working with the government to make that happen, and a number of non-profit associations have joined the effort, though progress is proving slow and difficult.

Finally, in Test 24, Dhananjay Khadilkar tries Samsung's Sero TV, which rotates between landscape and portrait orientations depending on what kind of content you're watching.


Source: France 24


