Responsible AI

Ethical concerns need to be at the forefront when implementing AI tools. Thankfully, people are discussing this, and organizations have drafted best practices for responsible, ethical AI. Given the multitude of applications of AI, there are many issues to consider when thinking about responsibility. Focusing on HR, some clear concerns emerge:

  • Inherent bias in the training data. AI learns from the data it is fed, so historical bias in that data can be reproduced. Amazon ran into this issue when exploring AI for recruiting.
  • Transparency in the hiring process. Companies need to be able to explain why they selected a subset of candidates for interviews, why they discarded some applications without viewing them, and so on. When AI is deployed, companies will still need to explain these decisions and will need to understand what the algorithm is targeting (a rough sketch of what that kind of inspection could look like follows this list).
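As an illustration of what "understanding what the algorithm is targeting" might mean in practice, here is a minimal sketch (assuming scikit-learn) that trains a simple screening model on made-up resume features and prints the weight each feature carries. The features, data and model are hypothetical, not drawn from any vendor's product.

```python
# A minimal sketch, not a real screening system: train a simple model on
# hypothetical resume features and inspect which features drive its decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["years_experience", "num_certifications", "referral", "gpa"]

# Hypothetical historical data: rows are candidates, columns are the features above.
X = np.array([
    [5, 2, 1, 3.4],
    [1, 0, 0, 3.9],
    [8, 1, 0, 2.8],
    [3, 3, 1, 3.1],
    [10, 0, 1, 3.0],
    [2, 1, 0, 3.7],
])
y = np.array([1, 0, 1, 1, 1, 0])  # 1 = advanced to interview, 0 = not

model = LogisticRegression().fit(X, y)

# The coefficients show which features push the model toward "interview".
# This is the kind of explanation a company would need to be able to produce.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>20}: {coef:+.2f}")
```

Even this toy example shows the kind of artifact a company would need in order to explain why one candidate advanced and another did not.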

Google AI has drafted recommended practices for building AI, combining general software-development guidelines with guidance specific to machine learning: https://ai.google/education/responsible-ai-practices.

IBM Watson, five years later

In 2014 I wrote about my excitement over a demo of IBM Watson. I recently heard about some familiar consumer tools that are using Watson, so I revisited my post to see what's changed over five years.

There are great case studies on IBM Watson here. It’s no surprise that this technology has gotten a lot of traction over the last five years.

One caveat I had when reviewing this tool in 2014 was that the data is only as good as how it’s communicated, and this still stands. Many analytics tools are doing a much better job of showing the quality of the underlying data and how reliable the predictions are, but there is still a level of understanding that is required when handling big data sets.

Fairness in AI

Google has great resources for learning more about AI, both for developers and businesspeople: ai.google.

Through this site, I watched a lecture by Margaret Mitchell on fairness in AI. There are many stories about unintended bias in AI tools. A recent article about Amazon’s challenge with this made a lot of noise in the HR community. There are different types of human bias that can manifest in data:

  • Reporting bias: People report what they find interesting or notable, so the data doesn't reflect real-world frequencies
  • Selection bias: The training data for machine learning systems is not a random sample of the world but rather the things we find interesting
  • Overgeneralization: A conclusion is drawn from limited information or from information that isn't specific enough
  • Out-group homogeneity bias: We assume people in groups we don't interact with every day are more similar to each other than those in our in-group
  • Confirmation bias: The tendency to search for, interpret and favor information that confirms our pre-existing beliefs and hypotheses
  • Automation bias: A preference for suggestions from automated systems, as if they were somehow more objective than other sources, like humans

There are methods for designing fairness into machine learning, but this must be intentional; AI is not inherently unbiased.
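To make "designing in fairness" slightly more concrete, here is a minimal sketch of one common check: comparing selection rates across groups, sometimes called demographic parity. The groups and outcomes are made up, and real fairness work involves many more metrics (libraries such as fairlearn cover them); this only shows the basic idea.

```python
# Minimal sketch of a demographic parity check on hypothetical screening results.
from collections import defaultdict

# (group, outcome) pairs for hypothetical candidates: 1 = advanced, 0 = rejected.
outcomes = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
selected = defaultdict(int)
for group, outcome in outcomes:
    totals[group] += 1
    selected[group] += outcome

rates = {g: selected[g] / totals[g] for g in totals}
print("Selection rates:", rates)

# Demographic parity difference: the gap between the highest and lowest rates.
gap = max(rates.values()) - min(rates.values())
print(f"Selection-rate gap: {gap:.2f}")  # large gaps warrant investigation
```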


Sourcing with AI: HR tech companies in the space

And now, the convergence of my two areas of interest: recruiting and AI. The companies in this space, many of which are startups, are finding novel ways to apply AI to the recruiting process. I’ll break these out into a few categories, reflecting the stages of recruiting: sourcing, assessment and candidate experience. Today I’m highlighting sourcing.

Sourcing is a natural fit for AI because it’s an expensive activity for recruiting organizations and there is so much data available on potential candidates.

In traditional talent sourcing, a recruiter (or sourcer) looks far and wide across a population to find relevant talent for an available job. Once a qualified person has been identified, the recruiter then attempts to engage this person to see if they will consider the job. There are a few obstacles here: first, the pool of potential talent may be very large and difficult to comb through; second, it may be difficult to identify best-fit candidates; and third, it may be difficult to find people willing to engage.

Sourcing is a great fit for AI because it's a data-rich activity. Across the web there are social media profiles, forum posts, articles and white papers; the content that could flag someone as a relevant fit for a job is nearly limitless. Within companies there is plentiful data as well: data on existing employees can suggest what skills work well for which roles, and efficient mining of previous job candidates can surface a future hire for a different job.
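As a toy illustration of the matching these tools automate (not a claim about how any particular vendor works), here is a sketch that ranks made-up candidate profiles against a job description using simple text similarity, assuming scikit-learn is available.

```python
# Toy sketch: rank candidate profiles against a job description by text similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Senior recruiter with experience in technical sourcing and ATS tools"

candidates = {
    "Candidate A": "Technical sourcer, 6 years of recruiting, expert in ATS platforms",
    "Candidate B": "Marketing manager focused on brand campaigns and social media",
    "Candidate C": "Recruiting coordinator with ATS administration experience",
}

# Vectorize the job description together with the candidate profiles.
texts = [job_description] + list(candidates.values())
tfidf = TfidfVectorizer().fit_transform(texts)

# Compare each candidate to the job description and rank by similarity.
scores = cosine_similarity(tfidf[0:1], tfidf[1:]).flatten()
for (name, _), score in sorted(zip(candidates.items(), scores),
                               key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.2f}")
```

Real sourcing products layer on much richer signals, but the basic ranking idea is the same.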

Here are a few companies applying AI to sourcing activities:

  • LinkedIn: A leader in recruiting technology. LinkedIn Recruiter is a popular tool for recruiting organizations, and AI is at the heart of the recommendations made to recruiters as they search for new talent pools
  • Entelo: A startup that applies predictive analytics to identify those most receptive to a job opportunity
  • Restless Bandit: Analyzes resumes within a company applicant tracking system to match top candidates to open roles

Fun AI: Iconary

I'm ending the week on a light note with an AI game I found. Iconary is a Pictionary-like game developed by AllenAI in Washington. I had a lot of fun drawing and guessing, and it's surprising to see how closely my perceptions and guesses match those of my AI opponent, Allen.

This is the kind of thing that is so tricky for AI – reading meaning into symbols. AI can recognize a tree as a tree, but can it recognize a group of trees as a forest? This one can!

Here’s a great write-up on the game and the impressive AI behind it on TechCrunch.

How AI can revolutionize HR

My #AIFebruary project is focused not just on learning about artificial intelligence, but also on its applications in my field, HR and recruiting. With that in mind, I enjoyed these thoughts from the Forbes Human Resources Council from July. Members were asked what a future with AI might look like in our field. Some top answers:

Enhance efficiency. Stacey Browning, President of Paycor, advocates for humans and technology working together to scale a high-touch and responsive recruiting process.

Automation and a human touch don’t have to be mutually exclusive. Strategically combining them can deliver unrivaled results. In recruiting, automation’s infinitely scalable levels of efficiency mean that, regardless of the volume of candidates, each receives a timely correspondence. For candidates, being kept in the loop with a thoughtful and sincerely worded email is what makes the difference.

Stacey Browning

Reduce bias. Sherry Martin of the Denver Public School system highlights how assessments can be analyzed for bias in language and outcomes and adjusted over time to minimize adverse impact, ideally leading to a wider variety of job candidates.

Simpler sourcing. Sourcing is a popular aim for up-and-coming AI tools, and Heather Doshay at Rainforest QA talks about the impact of improving the ability to match candidates to jobs.

Sourcing is a time-intensive pain point for most talent professionals, and providing well-matched candidates to companies would significantly speed up the top of the recruiting funnel and increase the quality of hires.

Heather Doshay

Replace administrative tasks. This comment from John Feldmann at Insperity Jobs groups together the time-consuming but critical tasks that are part of nearly every recruiting process.

AI will be valuable in automating repetitive recruiting tasks such as sourcing resumes, scheduling interviews and providing feedback. This will allow recruiters and HR managers the opportunity to focus on strategic work that AI will most likely never replace, such as connecting with top talent, providing a more personalized interview experience and establishing training and mentoring programs.

John Feldmann

Stay compliant. Compliance is a critical concern in recruiting and Char Newell thinks that AI could help automate this aspect of the workload for recruiting organizations.

Defining AI

Machine learning and deep learning are two phrases related to AI, and I want to be clear on them before proceeding. Here's the quickest clip I could find on YouTube to get some clarity:

Video from Acadguild tutorial on Data Science

Artificial intelligence is any code, technique or algorithm that helps a machine mimic, develop and demonstrate human behavior.

Machine learning is the set of techniques and processes by which machines can learn the ways of humans.

Deep learning draws meaningful inferences from large data sets and requires artificial neural networks.

Deep learning is a subset of machine learning, which is a subset of artificial intelligence. These three terms are related but not interchangeable.
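To see that nesting in code rather than prose, here is a minimal sketch (assuming scikit-learn) that solves the same toy classification task twice: once with a classic machine-learning model and once with a small multi-layer neural network standing in for deep learning, which in practice uses far larger networks and data sets.

```python
# Minimal sketch: the same toy task solved with a classic ML model and with a
# small neural network (a stand-in for deep learning; real deep nets are far larger).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Classic machine learning: a linear model learns a decision boundary from data.
linear = LogisticRegression().fit(X_train, y_train)

# "Deep" learning: stacked layers of an artificial neural network learn the
# same mapping, trading interpretability for the ability to fit complex patterns.
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                    random_state=0).fit(X_train, y_train)

print("Logistic regression accuracy:", linear.score(X_test, y_test))
print("Neural network accuracy:     ", net.score(X_test, y_test))
```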

Robots will take our jobs

Is artificial intelligence going to displace humans? I hear this concern a lot, and this article in Mother Jones does a good job articulating the reality of robot colleagues.

Illustration by Roberto Parada

I realize now that a lot of these projections about the rapid acceleration of computer learning rely on Moore's Law, the historical observation that computing power (in the original case, the number of transistors on a chip) doubles about every two years. However, Moore's Law may eventually break down, and the outcomes of advancement don't always match our expectations. For instance, the rise of the computer age led many to assume that paper would soon be phased out… yet we are using more of it than ever.
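For a back-of-the-envelope sense of what doubling every two years implies, here is a tiny sketch of the compounding; the time spans are arbitrary.

```python
# Back-of-the-envelope compounding implied by Moore's Law: doubling every
# two years means a factor of 2 ** (years / 2) over any span of years.
for years in (2, 10, 20, 40):
    factor = 2 ** (years / 2)
    print(f"After {years:>2} years: ~{factor:,.0f}x the computing power")
```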

The most interesting section of this article was the markers we should look for if AI really is taking our jobs:

  • A steady decline in the share of the population that's employed
  • Fewer job openings than in the past
  • Middle-class incomes flattening in a race to the bottom
  • Corporations stockpiling more cash and investing less in new products and factories
  • Labor's share of national income declining while capital's share rises

And… hmm. A few of those markers are there, but 2019 is looking a bit better than 2013, when this article was published.

Intro to AI via cartoons

waitbutwhy.com always delivers the thoughtful and the funny

Kicking off this month with a primer on AI from waitbutwhy.com. This is from 2015, so there have probably been leaps in AI since this was written, but it is helpful as an anchor.

There are three types of AI:

  • Artificial Narrow Intelligence: We're here now. AI that can reliably deliver better outcomes than a human brain on a specific task, such as navigating with the map on your phone or serving customized music recommendations.
  • Artificial General Intelligence: This is AI that can reason and solve problems. We’re not here yet.
  • Artificial Superintelligence: This is intelligence that greatly exceeds human intelligence across all spectrums. This could be closer than we think.

This primer on AI goes into a lot of detail on the exponential increase of computing power over time as well.

When I think about AI's applications for recruiting and HR, I think of many areas where artificial narrow intelligence is just starting to be applied. Recommendations based on past choices can be generated with existing data. I also see the potential for recreating bias; this comes up over and over again as a concern with AI for recruiting. I'm curious to read more about how that can be addressed.