Thoughts

Impact of AI on jobs today

Last week I shared a 2013 article from Mother Jones about the fears of job automation. This week I want to share an article from LinkedIn about how this is now our reality.

This September 2018 post leans on LinkedIn data and the World Economic Forum Future of Jobs report to trace how AI is already making its way into industries.

An interesting highlight from the article is a comparison of the occupations with the highest and lowest growth over the past five years.

Image via LinkedIn

Among the fastest-growing jobs are Human Resources Specialist and Recruiter, which this article suggests are inherently difficult to automate and therefore less likely to feel the impact of AI.

These roles require an understanding of human behaviors and preferences—a skill set which fundamentally can’t be automated.

Igor Perisic, “How artificial intelligence is already impacting today’s jobs,” LinkedIn

I would agree that the top jobs on this list do require an understanding of human behavior that may insulate them in some ways. However, the growth of these jobs also increases the pressure to ensure they are as efficient as possible, and that is the benefit of applying AI in these fields.

Risks of AI in HR applications

As a follow-on to my post yesterday about responsible AI, I wanted to highlight some of the risks inherent in applying AI to HR. Working in people analytics, I have seen the additional challenges that come with such a sensitive data set. Some of these concerns resemble those around analysis in healthcare, given the sensitivity of people data, but others are specific to HR.

  1. Algorithms are built on training data and learn from past behavior. Biased, discriminatory, punitive or overly hierarchical management practices could be institutionalized without additional review and management of the training data; a simple check is sketched after this list. AI systems need levers and transparency so that users know how the system works and can engage with it appropriately.
  2. As with so many tools in HR, the use of people data presents a risk of data exposure.
  3. Possible misuse of data. Management may pull development opportunities from employees predicted to leave in the next six months. A hiring manager may decide not to extend an offer to someone predicted to reject it. AI is appealing because it can inform decisions, but false positives have real-life consequences.
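To make the first risk concrete, here is a minimal sketch of an adverse-impact check (the four-fifths rule used in US employment contexts) run on historical hiring outcomes before they become training labels. The column names and numbers are hypothetical placeholders, not a real data set:

```python
# A minimal sketch, assuming historical hiring decisions in a pandas
# DataFrame; "group" and "hired" are hypothetical column names.
import pandas as pd

def selection_rates(df, group_col, outcome_col):
    """Share of positive outcomes (e.g., hired = 1) within each group."""
    return df.groupby(group_col)[outcome_col].mean()

def adverse_impact_ratio(rates):
    """Lowest group rate divided by highest; below 0.8 flags potential bias."""
    return rates.min() / rates.max()

# Toy historical data standing in for training labels.
history = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

rates = selection_rates(history, "group", "hired")
ratio = adverse_impact_ratio(rates)
print(rates)
if ratio < 0.8:
    print(f"Adverse impact ratio {ratio:.2f}: review the data before training")
```

If the historical decisions fail a check like this, training a model on them without intervention simply institutionalizes the pattern.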

Many of the companies building AI tools for HR are thinking deeply about these issues. Frida Polli, founder of pymetrics, discussed the impact on diversity and inclusion in this interview at Davos. There is great thought leadership in the space.

How AI will impact the future of work

All February I’ve studied how AI can influence Human Resources, but a parallel and very interesting topic is how AI will impact the future of work. Here are some predictions from HR Technologist on ways AI will change the workplace:

  1. Recruiting. I’ve looked at this extensively this February, as this is my professional background.
  2. Internal communications and interactions across languages. I recently spent a work-from-home day alongside a friend in technical customer service. She was responding to questions from the product team in Japan, using Google Translate as the intermediary. As she said, it wasn’t perfect, but it got the job done and she resolved their issue across languages. This is becoming a built-in feature for employee collaboration.
  3. Streamline training and onboarding. AI can provide coaching tips in real-time. Think Gmail message auto-complete for work performance.
  4. Offer more robust problem-solving support. Beyond simplifying, AI can offer a wider view of potential solutions and approaches.
  5. Drive productivity. AI can automate tedious and repetitive workplace tasks, such as meeting scheduling and review, or answering common questions.
  6. Push for new regulations. Many of the areas that AI will touch are not well regulated. This will need to change as workers engage with it regularly.

AI and the candidate experience

It’s exciting to see the ways that AI can enhance recruiting capabilities, but just as important – and potentially more so – is how it impacts the job candidate’s experience. An article from last year on CNBC highlighted one candidate’s feelings of distance when interacting with an AI recruiting tool, in this case HireVue: “It felt weird. I was kind of talking into the void.”

Anecdotally, I’ve heard a variety of reactions, increasingly positive ones. As interacting with AI tools becomes a more common part of the recruiting process, people grow more comfortable with the idea that parts of it will be automated. I also suspect some generational differences; Millennials notoriously hate phone calls, and many of these AI solutions approximate a text conversation or a video chat.

A great approach to exploring these solutions is to look at the data. Are candidates less likely to continue with the recruiting process when presented with an AI tool? Do they rate their experience lower once these solutions are put in place? Individual companies can track this for themselves with some simple data collection.
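As a minimal sketch of that analysis, here is a comparison of continuation rates at the screening stage before and after introducing an AI tool, with a chi-square test for whether the difference is more than noise. The counts are illustrative placeholders; a real version would pull them from the applicant tracking system:

```python
# A minimal sketch: did candidates keep moving through the funnel after an
# AI screening step was introduced? All counts are hypothetical.
from scipy.stats import chi2_contingency

#            [continued, withdrew] at the screening stage
before_ai = [880, 120]  # traditional phone screen
after_ai = [830, 170]   # AI video/chat screen

chi2, p_value, _, _ = chi2_contingency([before_ai, after_ai])

rate_before = before_ai[0] / sum(before_ai)
rate_after = after_ai[0] / sum(after_ai)
print(f"Continuation rate before AI: {rate_before:.1%}, after: {rate_after:.1%}")
print(f"p-value for the difference: {p_value:.4f}")
```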

Public policy and AI

One of the largest AI conferences is NeurIPS, which happens annually in December. I spent some time browsing the presentations from the 2018 conference and found an interesting talk by Edward W. Felten of Princeton University titled “Machine Learning Meets Public Policy: What to Expect and How to Cope.”

Felten kicks off his talk by highlighting the messaging that people in public policy are hearing about AI, and overwhelmingly, it is a warning to put regulations in place. People like Henry Kissinger and Elon Musk have already sounded the alarm to policy makers.

His thesis is that the best policies will come from technical people partnering well with policy makers, with each side trusting the other’s expertise. That trust comes from being engaged and constructive in the policy making process over time.

It was interesting to see pushback on this thesis from some attendees. One counterpoint was that many industries have self-regulating bodies, such as FINRA, and that this could be an option for machine learning. Felten pushed back: self-regulating bodies work well only when they have public accountability, and those that lack it are easily replaced.

Responsible AI

Ethical concerns need to be at the forefront when implementing AI tools. Thankfully, people are discussing this, and organizations have drafted best practices for responsible, ethical AI. Given the multitude of applications of AI, there are many issues to consider when thinking about responsibility. Focusing on HR, some clear concerns emerge:

  • Inherent bias in the training data. AI learns from the data it is fed. Amazon ran into this issue when exploring AI for recruiting.
  • Transparency in the hiring process. Companies need to be able to explain why they selected a subset of candidates for interviews and why they discarded some applications without viewing them. When AI is deployed, companies will still need to explain these decisions and will need to understand what the algorithm is targeting; one way to inspect this is sketched below.
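One way to see what an algorithm is targeting is to use an interpretable model and read its weights. Here is a minimal sketch with hypothetical features and toy data; for black-box models, explanation tools such as SHAP serve a similar purpose:

```python
# A minimal sketch: inspect which (hypothetical) features drive a screening
# model by reading standardized logistic regression weights.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

features = ["years_experience", "skills_match_score", "referral"]
# Toy applicant data: rows are candidates, columns follow `features`.
X = np.array([
    [5, 0.9, 1], [2, 0.4, 0], [7, 0.8, 0], [1, 0.2, 0],
    [4, 0.7, 1], [3, 0.6, 0], [6, 0.9, 1], [0, 0.1, 0],
])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = advanced to interview

# Standardizing first makes the weights roughly comparable across features.
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
weights = model.named_steps["logisticregression"].coef_[0]
for name, weight in zip(features, weights):
    print(f"{name:>20}: {weight:+.2f}")
```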

Google AI has drafted recommended practices for building AI, pairing general software guidance with machine-learning-specific guidelines: https://ai.google/education/responsible-ai-practices.

IBM Watson, five years later

In 2014 I wrote about my excitement over a demo of IBM Watson. Having recently heard about some familiar consumer tools that use Watson, I revisited my post to see what has changed over five years.

There are great case studies on IBM Watson here. It’s no surprise that this technology has gotten a lot of traction over the last five years.

One caveat I had when reviewing this tool in 2014 was that the data is only as good as how it’s communicated, and this still stands. Many analytics tools are doing a much better job of showing the quality of the underlying data and how reliable the predictions are, but there is still a level of understanding that is required when handling big data sets.

Fairness in AI

Google has great resources for learning more about AI, both for developers and businesspeople: ai.google.

Through this site, I watched a lecture by Margaret Mitchell on fairness in AI. There are many stories about unintended bias in AI tools. A recent article about Amazon’s challenge with this made a lot of noise in the HR community. There are different types of human bias that can manifest in data:

  • Reporting bias: People report what they find interesting or notable, so the data doesn’t reflect real-world frequencies.
  • Selection bias: The training data for machine learning systems is not a random sample of the world, but rather the things we found interesting enough to collect.
  • Overgeneralization: A conclusion is drawn from information that is too limited or not specific enough.
  • Out-group homogeneity bias: We assume that people in groups we don’t interact with every day are more similar to each other than members of our in-group are.
  • Confirmation bias: The tendency to search for, interpret and favor information that confirms our pre-existing beliefs and hypotheses.
  • Automation bias: A preference for suggestions from automated systems, as if they were somehow more objective than other sources, such as humans.

There are methods for designing fairness into machine learning, but this must be intentional: AI is not inherently unbiased. Two common group-fairness checks are sketched below.
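As one example of being intentional, here is a minimal sketch of two group-fairness checks, demographic parity and equal opportunity, run over a model's decisions. All arrays are toy placeholders; `group` stands in for a protected attribute:

```python
# A minimal sketch of two group-fairness metrics on hypothetical decisions.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # actually qualified or not
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])  # the model's decisions
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

def demographic_parity_gap(y_pred, group):
    """Difference in positive-decision rates between groups."""
    rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
    return max(rates.values()) - min(rates.values())

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates among the actually qualified."""
    rates = {g: y_pred[(group == g) & (y_true == 1)].mean()
             for g in np.unique(group)}
    return max(rates.values()) - min(rates.values())

print(f"Demographic parity gap: {demographic_parity_gap(y_pred, group):.2f}")
print(f"Equal opportunity gap:  {equal_opportunity_gap(y_true, y_pred, group):.2f}")
```

Which metric matters depends on context: demographic parity compares decision rates overall, while equal opportunity compares them among the genuinely qualified.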


Sourcing with AI: HR tech companies in the space

And now, the convergence of my two areas of interest: recruiting and AI. The companies in this space, many of which are startups, are finding novel ways to apply AI to the recruiting process. I’ll break these out into a few categories, reflecting the stages of recruiting: sourcing, assessment and candidate experience. Today I’m highlighting sourcing.

Sourcing is a natural fit for AI because it’s an expensive activity for recruiting organizations and there is so much data available on potential candidates.

In traditional talent sourcing, a recruiter (or sourcer) looks far and wide across a population to find relevant talent for an open job. Once a qualified person has been identified, the recruiter attempts to engage that person to see if they will consider the job. There are a few obstacles here: first, the pool of potential talent may be very large and difficult to comb through; second, it may be difficult to identify best-fit candidates; and third, it may be difficult to find people willing to engage.

Sourcing is a great fit for AI because it’s a data-rich activity. Across the web there are social media profiles, forum posts, articles and white papers; the content that could flag someone as a relevant fit for a job is nearly limitless. Within companies there is plentiful data as well: data on existing employees can suggest which skills work well for which roles, and efficient mining of previous job candidates can surface a future hire for a different job. A toy version of this matching is sketched below.
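To make the matching idea concrete, here is a minimal sketch that ranks candidate profiles against a job description using TF-IDF and cosine similarity. It's a toy stand-in for what commercial sourcing tools do at far larger scale, with far richer signals than raw text:

```python
# A minimal sketch: rank (toy) candidate profiles against a job description
# by text similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job = "data scientist with python, machine learning and sql experience"
profiles = [
    "software engineer focused on java backend services",
    "analyst using python and sql to build machine learning models",
    "recruiter sourcing technical talent for startups",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job] + profiles)
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

for profile, score in sorted(zip(profiles, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {profile}")
```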

Here are a few companies applying AI to sourcing activities:

  • LinkedIn: A leader in recruiting technology. LinkedIn Recruiter is a popular tool for recruiting organizations and AI is at the heart of the recommendations to recruiters when they are searching for new talent pools
  • Entelo: A startup that applies predictive analytics to identify those most receptive to a job opportunity
  • Restless Bandit: Analyzes resumes within a company applicant tracking system to match top candidates to open roles