Risks of AI in HR applications

As a follow-on to my post yesterday about responsible AI, I wanted to highlight some of the risks inherent in applying AI to HR. Working in people analytics, I've realized there are additional challenges given how sensitive the data is. I imagine some of these concerns overlap with those around analysis in healthcare, given the sensitivity of people data, but others are specific to HR.

  1. Algorithms are built on training data and learn from past behavior. Biased, discriminatory, punitive, or overly hierarchical management practices could be institutionalized without additional review and management of the training data. AI systems need levers and transparency so that users know how they work and can engage them appropriately (a minimal bias-check sketch follows this list).
  2. As with so many tools in HR, the use of people data carries a risk of exposing sensitive employee information.
  3. Possible misuse of data. Management may pull development opportunities from employees predicted to leave in the next six months. A hiring manager may decide not to make an offer to someone predicted to reject it. AI is appealing because it can inform decisions, but false positives have real-life consequences.
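
To make the first point concrete, here is a minimal sketch of the kind of check a people-analytics team might run on a model's recommendations before acting on them. The column names, sample data, and the four-fifths threshold are illustrative assumptions on my part, not a description of any particular vendor's tooling.

```python
# Minimal sketch of a disparate-impact check on model outputs.
# The data, column names, and 0.8 threshold (the "four-fifths" rule of thumb)
# are illustrative assumptions only.
import pandas as pd

# Hypothetical predictions: 1 = model recommends advancing the candidate.
preds = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "recommend": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Selection rate per group: share of candidates the model recommends.
rates = preds.groupby("group")["recommend"].mean()

# Disparate-impact ratio: lowest selection rate divided by the highest.
di_ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate-impact ratio: {di_ratio:.2f}")
if di_ratio < 0.8:  # four-fifths rule of thumb
    print("Flag for review: selection rates differ substantially across groups.")
```

A check like this doesn't fix biased training data, but it is one example of the levers and transparency I mean: a simple, reviewable signal that prompts a human to look closer before the model's output drives a decision.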

Many of the companies building AI tools for HR are thinking deeply about these issues. Frida Polli, founder of pymetrics, discussed the impact on diversity and inclusion in this interview at Davos. There is great thought leadership in the space.