Ethical concerns need to be at the forefront when implementing AI tools. Fortunately, this discussion is well under way: several organizations have published best practices for responsible, ethical AI. Given the breadth of AI applications, there are many issues to consider; focusing on HR, some clear concerns emerge:
- Inherent bias in the training data. AI learns from the data it is fed, so historical bias in that data becomes bias in the model. Amazon ran into this issue when exploring AI for recruiting: its experimental tool, trained on résumés from a historically male-dominated applicant pool, learned to downgrade résumés associated with women.
- Transparency in the hiring process. Companies need to be able to explain why they selected a subset of candidates for interviews, or why they discarded some applications without viewing them. When AI is deployed, companies will still need to explain these decisions, which means understanding what the algorithm is actually optimizing for.
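One concrete way to surface the bias concern above is to audit a screening tool's outcomes by group. The sketch below is a minimal, hypothetical illustration (the data and function names are invented for this example) of the "four-fifths rule" heuristic, under which no group's selection rate should fall below 80% of the highest group's rate:

```python
# Hypothetical sketch: auditing screening decisions for disparate impact
# using the four-fifths rule. All data below is illustrative, not real.

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs; returns selection rate per group."""
    totals, picked = {}, {}
    for group, selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        picked[group] = picked.get(group, 0) + int(selected)
    return {g: picked[g] / totals[g] for g in totals}

def passes_four_fifths(decisions, threshold=0.8):
    """True if every group's rate is at least `threshold` of the top group's rate."""
    rates = selection_rates(decisions)
    top = max(rates.values())
    return all(rate >= threshold * top for rate in rates.values())

# Example: group B is selected far less often than group A.
decisions = ([("A", True)] * 8 + [("A", False)] * 2
             + [("B", True)] * 3 + [("B", False)] * 7)
print(selection_rates(decisions))    # {'A': 0.8, 'B': 0.3}
print(passes_four_fifths(decisions)) # False
```

A check like this does not explain *why* the model disadvantages a group, but it flags that an explanation is owed, which is exactly the transparency obligation described above.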
Google AI has published recommended practices for responsible AI, including specific guidance for machine learning systems: https://ai.google/education/responsible-ai-practices.