Fairness in AI

Google has great resources for learning more about AI, both for developers and businesspeople: ai.google.

Through this site, I watched a lecture by Margaret Mitchell on fairness in AI. There are many stories about unintended bias in AI tools; a recent article about Amazon's struggle with bias in a recruiting tool made a lot of noise in the HR community. There are several types of human bias that can manifest in data:

  • Reporting bias: People report what they find interesting or notable, so the data doesn't reflect real-world frequencies
  • Selection bias: The training data for machine learning systems is not a random sample of the world but is skewed toward things we find interesting
  • Overgeneralization: Drawing a conclusion from limited information, or from information that isn't specific enough
  • Out-group homogeneity bias: We assume people in groups we don't interact with every day are more similar to each other than members of our in-group are
  • Confirmation bias: The tendency to search for, interpret, and favor information that confirms our own pre-existing beliefs and hypotheses
  • Automation bias: A preference for suggestions from automated systems, as if they were somehow more objective than other sources, such as humans

There are methods for designing fairness into machine learning systems, but this must be intentional – AI is not inherently unbiased.
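As one concrete illustration (my own sketch, not something from the lecture), a common starting point for such methods is simply measuring how a model's decisions differ across groups. The snippet below computes a toy "demographic parity" gap – the difference in positive-decision rates between groups; the group names and data are hypothetical:

```python
# Sketch: measuring one simple fairness metric, demographic parity,
# on a toy set of model decisions. Group names and decision data
# below are made up purely for illustration.

def selection_rate(decisions):
    """Fraction of positive decisions (1s) in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rate between any two groups."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Toy example: 1 = positive decision (e.g. "advance candidate"), 0 = negative
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # selection rate 5/8 = 0.625
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # selection rate 2/8 = 0.25
}

gap = demographic_parity_gap(decisions)
print(f"demographic parity gap: {gap:.3f}")  # 0.625 - 0.25 = 0.375
```

A gap near zero doesn't prove a system is fair – demographic parity is only one of several competing fairness definitions – but tracking a metric like this makes disparities visible instead of leaving them implicit in the data.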