Good Code is a weekly podcast about ethics in our digital world. We look at ways in which our increasingly digital societies could go terribly wrong, and speak with those trying to prevent that. Each week, host Chine Labbé engages with a different expert on the ethical dilemmas raised by our ever-more pervasive digital technologies. Good Code is a dynamic collaboration between the Digital Life Initiative at Cornell Tech and journalist Chine Labbé.

Follow @goodcodepodcast on Twitter, Facebook, and Instagram.

On this episode:

For a long time, Timnit Gebru didn’t want to combine her interest in social justice with her technical work. But things changed when she read a ProPublica article showing how an algorithm used in certain courtrooms was biased against Black defendants.

Around the same time, Gebru also started noticing how little diversity existed in her field: there were very few women and very few people of color at AI conferences.

Gebru co-founded a group called “Black in AI”, and started looking at instances in which algorithms fail us. She also started thinking of ways to make sure we use AI to foster progress, without automating our biases.

You can listen to this episode on iTunes, Spotify, SoundCloud, Stitcher, Google Play, TuneIn, YouTube, and all of your favorite podcast platforms.

We talked about:

  • I am a big fan of French TV show The Bureau (Le Bureau des Légendes), and in my intro to this episode, I quote a simple (yet brilliant) definition of AI given by one of its characters. The Bureau follows the adventures of a deep-cover French agent after he comes back from a mission in Syria. Read about it here.
  • Timnit Gebru says a 2016 ProPublica article called Machine Bias was instrumental in bringing her to study the social impact of machine learning. It examined software used in certain US jurisdictions to predict a defendant’s likelihood of committing future crimes, and showed that the software was biased against Black defendants. The investigation was spearheaded by tech journalist Julia Angwin, the guest of our very first episode. In a shocking (and sad) turn of events, Julia Angwin, who came on our show a few months back to talk about her exciting new journalistic venture The Markup, was just ousted as editor-in-chief of the newsroom she co-founded. Five of its seven editorial staffers resigned in support. Angwin now says she is determined to remake The Markup.
  • Timnit Gebru also says she started worrying when she realized how few women and people of color attended NIPS, one of the biggest Artificial Intelligence conferences. NIPS stands for the Neural Information Processing Systems Conference.
  • To address the lack of diversity in the field, Timnit Gebru created “Black in AI”. It’s “a place for sharing ideas, fostering collaborations and discussing initiatives to increase the presence of Black people in the field of Artificial Intelligence,” according to the group’s website.
  • In this episode, Timnit Gebru mentions an AI-augmented hiring tool created by the company HireVue. HireVue says its “hiring Intelligence platform is transforming the way companies discover, hire, and develop talent.” But we don’t really know what its algorithm does, Gebru points out. And while it might very well be helping companies increase the diversity of their workforce, it could also be perpetuating biases.
  • Timnit Gebru also mentions her work with MIT researcher Joy Buolamwini on a paper called “Gender Shades”. They showed that several commercial facial recognition systems were far more accurate on light-skinned men than on dark-skinned women. Buolamwini specializes in algorithmic bias: she calls it the “coded gaze.”
  • In a paper called “Datasheets for Datasets”, Timnit Gebru calls for a standardized way of identifying the potential skews of a dataset. More generally, in this episode, she calls for some standardization of the field.

Read More:

  • There is a lot of talk about AI bias these days. But AI bias isn’t the real problem, this column argues: our biased society is. And until AI systems can detect our biases, and their proxies, it might be too dangerous to use them, it adds.
  • To build an ethical AI, this article argues, we need people to understand “systemic racism.”
  • In 2017, New York City passed a law establishing a task force charged with examining the algorithms used by the city government and recommending ways to prevent and address bias. But the effort is already fracturing, according to The Verge. “With nothing to study, critics say, the task force is toothless and able to provide only broad policy recommendations — the kind that experts would have been able to suggest without convening a task force at all.”
  • Body scanners at airports “are prone to false alarms for hairstyles popular among women of color,” according to this ProPublica investigation.
  • In March, Stanford launched a brand-new institute for Human-Centered Artificial Intelligence. But the lack of diversity among its initially announced faculty drew criticism. It “inadvertently showcas(ed) one of tech’s biggest problems,” according to Quartz.
  • Google also created an AI ethics board, but the controversial group lasted just over a week. At the heart of the outcry: one board member’s comments about trans people. Thousands of Google employees had signed a petition calling for that member’s removal.
  • A new bill could force companies to audit their algorithms for bias. Read about it here.