This blog is inspired by a TED Talk presented by Joy Buolamwini.

An introduction to Joy Buolamwini:

She was an MIT grad student working with facial analysis software when she noticed a problem: the software didn’t detect her face, because the people who coded the algorithm hadn’t taught it to identify a broad range of skin tones and facial structures. Now she’s on a mission to fight bias in machine learning, a phenomenon she calls the “coded gaze.” It’s an eye-opening talk about the need for accountability in coding … as algorithms take over more and more aspects of our lives. Her research explores the intersection of social impact technology and inclusion.

Joy Buolamwini

What’s Algorithmic bias?

Algorithmic bias describes systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others. Bias can emerge due to many factors, including but not limited to the design of the algorithm or the unintended or unanticipated use or decisions relating to the way data is coded, collected, selected, or used to train the algorithm. Algorithmic bias is found across platforms, including but not limited to search engine results and social media platforms, and can have impacts ranging from inadvertent privacy violations to reinforcing social biases of race, gender, sexuality, and ethnicity. The study of algorithmic bias is most concerned with algorithms that reflect “systematic and unfair” discrimination.

Why does algorithmic bias matter?
It started when Joy Buolamwini tested a camera system built to detect and identify human faces for computational processing: the camera detected people with lighter skin, but demanded that darker-skinned people wear a white mask before it would detect their faces.

Joy Buolamwini during an undergraduate project to enable a robot to play peek-a-boo

The same thing happened to her during an undergraduate project to train a robot to play peek-a-boo (the robot played actively with her lighter-skinned roommates but often failed to detect her face), and it happened again at an entrepreneurship competition.

Joy Buolamwini at an entrepreneurship competition (Hong Kong)

What’s the Aspire Mirror?

The Aspire Mirror (a project by Joy Buolamwini) is a device that lets you look at yourself and see a reflection on your face based on what inspires you, or what you hope to empathize with. She built it using generic facial recognition software, which repeatedly failed to detect her face; the solution to this problem would be to create full-spectrum training sets that reflect a richer portrait of humanity.

Aspire Mirror project picture, as available on the internet
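The “full-spectrum training set” idea can be made concrete with a quick audit of how well a dataset covers the range of skin tones. Here is a minimal sketch in Python, assuming the training images have been annotated with Fitzpatrick-style phenotype labels; the label names, threshold, and data below are illustrative assumptions, not from the Aspire Mirror project:

```python
from collections import Counter

def underrepresented(labels, categories, min_share=0.10):
    """Return the categories whose share of the training set falls
    below min_share (an illustrative threshold)."""
    counts = Counter(labels)
    n = len(labels)
    return [c for c in categories if counts[c] / n < min_share]

# Hypothetical phenotype annotations for a 100-image training set.
labels = (["type I"] * 40 + ["type II"] * 35 +
          ["type IV"] * 20 + ["type VI"] * 5)
spectrum = ["type I", "type II", "type III",
            "type IV", "type V", "type VI"]

# Categories that need more data before the set is "full-spectrum".
print(underrepresented(labels, spectrum))
```

A skewed set like the one above flags the missing and scarce phenotypes, which is exactly the imbalance that made generic face detectors fail on darker skin.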

Why should people read the book “Weapons of Math Destruction” by Cathy O’Neil?

Because the author discusses the danger of WMDs (Widespread, Mysterious, and Destructive algorithms). She analyses how the use of big data and algorithms in a variety of fields, including insurance, advertising, education, and policing, can lead to decisions that harm the poor, reinforce racism, and amplify inequality.

Weapons of Math Destruction by Cathy O’Neil

Most troubling, they reinforce discrimination: If a poor student can’t get a loan because a lending model deems him too risky (by his zip code), he’s then cut off from the kind of education that could pull him out of poverty, and a vicious spiral ensues. Models are propping up the lucky and punishing the downtrodden, creating a “toxic cocktail for democracy.”

Joy Buolamwini wants people to be inclusive coders. She focuses on three aspects of inclusive development, and they are:

  1. Who codes matters
  2. How we code matters
  3. Why we code matters

By following the aspects above, we have a chance to undo inequality if we make social change a priority. We should also try to identify bias and build in inclusiveness through conscientious development, to fight inequality in algorithmic inferences. Joy Buolamwini invites people to join the Algorithmic Justice League (https://ajlunited.org) to fight inequality in the algorithms available in the market, with the message that technology should serve all of us, not just the privileged few.

What is Gender Shades?

Joy Buolamwini and Timnit Gebru; researchers from Gender Shades project

The full paper is titled “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Gender Shades is an approach to evaluating the bias present in automated facial analysis algorithms and datasets with respect to phenotypic subgroups.

From the abstract: The substantial disparities in the accuracy of classifying darker females, lighter females, darker males, and lighter males in gender classification systems require urgent attention if commercial companies are to build genuinely fair, transparent and accountable facial analysis algorithms.

Research outcome from Gender Shades

The Gender Shades research evaluated commercially available gender classification models from Microsoft, Face++, and IBM. The overall accuracies were Microsoft (93.7%) > Face++ (90.0%) > IBM (87.9%). All three companies performed better on males than on females, and better on lighter skin than on darker skin; in particular, all of them performed worst on darker-skinned females. A related real-world example of algorithmic gender inequality: women are less likely than men to be shown ads for high-paying jobs on Google (since most executives are male, an algorithm trained on that reality learns the same association).
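The core of the Gender Shades methodology, reporting accuracy disaggregated by intersectional subgroup rather than as a single overall number, can be sketched as follows. This is an illustrative re-implementation with toy data, not the paper’s actual code or results:

```python
from collections import defaultdict

def disaggregated_accuracy(records):
    """Compute gender-classification accuracy per intersectional subgroup.

    records is a list of (skin_tone, gender, predicted_gender) tuples;
    the field names and data here are illustrative, not from the paper.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for skin_tone, gender, predicted in records:
        group = (skin_tone, gender)
        total[group] += 1
        if predicted == gender:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Toy predictions illustrating the kind of gap Gender Shades reported:
# a classifier that defaults to "male" hurts darker-skinned females most.
records = [
    ("lighter", "male", "male"), ("lighter", "male", "male"),
    ("lighter", "female", "female"), ("lighter", "female", "male"),
    ("darker", "male", "male"), ("darker", "male", "male"),
    ("darker", "female", "male"), ("darker", "female", "male"),
]
print(disaggregated_accuracy(records))
```

A single aggregate accuracy over these toy records would look respectable, while the per-subgroup breakdown exposes a total failure on one group; that is the disparity the paper makes visible.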

Final thoughts,

We have entered a world of automation overconfident and underprepared. We should develop and train machines to make contextual decisions that are inclusive and practical in nature. Machine neutrality is expected across the globe, and hence technology should serve all of us, not just the privileged few.

Credits

  1. Algorithmic Bias https://en.wikipedia.org/wiki/Algorithmic_bias
  2. Joy Buolamwini Image https://upload.wikimedia.org/wikipedia/commons/a/ac/Joy_Buolamwini_-_Wikimania_2018_01.jpg
  3. Ted Talk https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms?language=en
  4. Algorithmic Justice League https://ajlunited.org
  5. Gender Shades http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
  6. When the Robot Doesn’t See Dark Skin https://www.nytimes.com/2018/06/21/opinion/facial-analysis-technology-bias.html
  7. AI, Ain’t I a Woman https://www.youtube.com/watch?v=QxuyfWoVV98
  8. Not Flawless https://www.notflawless.ai
  9. Aspire Mirror http://www.aspiremirror.com
  10. Race & Gender in Tech http://www.sigcas.org/wp-content/uploads/2019/05/LeeCSG2019.pdf