Good Code is a weekly podcast about ethics in our digital world. We look at ways in which our increasingly digital societies could go terribly wrong, and speak with those trying to prevent that. Each week, host Chine Labbé engages with a different expert on the ethical dilemmas raised by our ever-more pervasive digital technologies. Good Code is a dynamic collaboration between the Digital Life Initiative at Cornell Tech and journalist Chine Labbé.

Follow @goodcodepodcast on Twitter, Facebook, and Instagram.

On this episode:

David Robinson has a mild case of cerebral palsy, and from a very young age, he understood the tremendous power of technology and all the ways in which it could help people. That’s why he co-founded Upturn: to make sure technology remains an overall positive force.

Over the past few years, he’s looked at algorithms used to help guide decision-making in the public sector, especially in the criminal justice system.

Body cameras worn by police officers were among the first tools he studied. Since then, he has also analyzed predictive policing tools and algorithms used to predict the likelihood of someone being re-arrested if they are not kept in jail while awaiting trial.

You can listen to this episode on iTunes, Spotify, SoundCloud, Stitcher, Google Play, TuneIn, YouTube, and on all of your favorite podcast platforms.

We talked about:

  • In this episode, David Robinson talks about “bodycams” worn by police officers, cameras that were expected to improve accountability and transparency in cases of fatal encounters with the police. But it quickly appeared that they could also be used to “distort evidence”, as Upturn wrote in a November 2017 report. “Without carefully crafted policy safeguards in place, there is a real risk that body-worn cameras could be used in ways that threaten civil and constitutional rights and intensify the disproportionate surveillance of communities of color,” the report said.
  • Robinson also talks about risk-assessment algorithms used by judges in many jurisdictions across the country. As he explains, such tools raise many issues. One of them is that they rely on historical data, thus “copying the mistakes of the past”, as detailed in this article (see the sketch after this list).
  • Last summer, over 100 organizations signed a letter raising concerns about these tools. Read it here.
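
To make the “copying the mistakes of the past” point concrete, here is a minimal, hypothetical Python sketch. Every variable, number, and modeling choice below is invented for illustration; this is not how any jurisdiction’s actual tool works. It shows how a model trained on historical re-arrest records can assign higher “risk” to a more heavily policed group, even when underlying behavior is identical across groups:

```python
# Hypothetical illustration: a "risk score" trained on historical arrest
# data can encode past enforcement patterns rather than actual conduct.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with IDENTICAL underlying behavior...
group = rng.integers(0, 2, n)      # 0 or 1, purely a demographic label
behavior = rng.normal(0, 1, n)     # same distribution for both groups

# ...but historically, group 1 was surveilled more heavily, so the SAME
# behavior was more likely to result in a recorded re-arrest.
p_rearrest = 1 / (1 + np.exp(-(behavior + 1.5 * group - 1)))
rearrested = rng.random(n) < p_rearrest

# Train a "risk score" on the historical record.
X = np.column_stack([behavior, group])
model = LogisticRegression().fit(X, rearrested)

# Two people with identical behavior get different predicted risk,
# because the labels encode past enforcement intensity, not conduct.
same_behavior = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_behavior)[:, 1])  # group 1 scores higher
```

In this toy setup, the two groups behave identically, but the historical labels reflect unequal enforcement, so the learned score reproduces that inequality going forward.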

Read More:

  • The Partnership on AI, a group of organizations created to establish best practices in the field, wrote a report on risk-assessment tools. “These tools should not be used alone to make decisions to detain or to continue detention”, this report says. And “any use of these tools should address the bias, human-computer interface, transparency, and accountability concerns outlined in this report”, it adds.
  • Estonia is aggressively deploying AI in many areas of public life. And its most ambitious project to date is a “robot judge” for small claims disputes. The project should start later this year, according to this Wired article. How will it work? “The two parties will upload documents and other relevant information, and the AI will issue a decision that can be appealed to a human judge”, says the article.