Public policy and AI

One of the largest AI conferences is NeurIPS, which happens annually in December. I spent some time browsing the presentations from the 2018 conference and found an interesting one by Edward W. Felten of Princeton University titled “Machine Learning Meets Public Policy: What to Expect and How to Cope.”

Felten kicks off his talk by highlighting the messaging that people in public policy are hearing about AI, and overwhelmingly, it is a warning to put regulations in place. Figures like Henry Kissinger and Elon Musk have already sounded the alarm to policy makers.

His thesis is that the best policies will come from technical people partnering well with policy makers, with each side trusting the other’s expertise. That trust comes from being engaged and constructive in the policy-making process over time.

It was interesting to see pushback on this thesis from some attendees. One counterpoint was that many industries have self-regulating bodies, such as FINRA, and that this could be an option for machine learning. Felten countered that self-regulating bodies work well only when they are publicly accountable and can be replaced when they are not.


Fun AI: Iconary

I’m ending the week on a light note with an AI game I found. Iconary is a Pictionary-like game developed by the Allen Institute for AI (AllenAI) in Washington. I had a lot of fun drawing and guessing, and it was surprising to see how closely my perception and guesses matched those of my AI opponent, Allen.

This is the kind of thing that is so tricky for AI – reading meaning into symbols. AI can recognize a tree as a tree, but can it recognize a group of trees as a forest? This one can!

Here’s a great write-up on the game and the impressive AI behind it on TechCrunch.

Fairness in AI

Google has great resources for learning more about AI, for both developers and businesspeople.

Through this site, I watched a lecture by Margaret Mitchell on fairness in AI. There are many stories of unintended bias in AI tools; a recent article about Amazon scrapping a biased recruiting tool made a lot of noise in the HR community. There are several types of human bias that can manifest in data:

  • Reporting bias: People report what they find interesting or notable, so the data doesn’t reflect real-world frequencies.
  • Selection bias: Training data for machine learning systems is not a random sample of the world but rather the things we find interesting.
  • Overgeneralization: A conclusion is drawn from information that is too limited or not specific enough.
  • Out-group homogeneity bias: We assume people in groups we don’t interact with every day are more similar to each other than those in our in-group.
  • Confirmation bias: The tendency to search for, interpret, and favor information that confirms our pre-existing beliefs and hypotheses.
  • Automation bias: A preference for suggestions from automated systems, as if they were somehow more objective than other sources, such as humans.

There are methods for designing fairness into machine learning systems, but this must be intentional – AI is not inherently unbiased.
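As a toy illustration of what designing for fairness can look like in practice, here is a minimal sketch (my own example, not from Mitchell’s lecture) that measures the gap in selection rates between two groups – one simple fairness check, sometimes called demographic parity. The data is made up for illustration:

```python
# Toy fairness check: compare selection rates across two candidate groups.
# Hypothetical data; in practice these would be a model's hiring decisions.

def selection_rate(decisions):
    """Fraction of candidates selected (1 = advanced, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def parity_gap(group_a, group_b):
    """Absolute difference in selection rates between two groups.
    A large gap flags a potential disparate impact worth investigating."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Made-up outcomes for two demographic groups
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 5/8 selected
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3/8 selected

print(f"Selection-rate gap: {parity_gap(group_a, group_b):.3f}")  # 0.250
```

Checking a metric like this is only one small piece of designing for fairness, but it shows the intentionality required: nothing in a standard training pipeline computes this gap unless someone decides it matters.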

Defining AI

Machine learning and deep learning are two phrases closely related to AI, and I want to be clear on them before proceeding. Here’s the quickest clip I could find on YouTube to get some clarity:

Video from an AcadGild tutorial on data science

Artificial intelligence is any code, technique, or algorithm that enables a machine to mimic and demonstrate human behavior.

Machine learning is the set of techniques and processes by which machines learn from data and experience, rather than relying only on explicitly programmed rules.

Deep learning draws meaningful inferences from large data sets using artificial neural networks.

Deep learning is a subset of machine learning, which is a subset of artificial intelligence. These three terms are related but not interchangeable.
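To make the idea of machines learning from examples concrete, here is a minimal sketch of one of the simplest machine learning methods, a one-nearest-neighbor classifier, written in plain Python. The features and labels are entirely made up for illustration:

```python
# One-nearest-neighbor: classify a new example by copying the label
# of the closest training example. No decision rule is hand-written;
# the behavior comes entirely from the labeled data.

def distance(p, q):
    """Squared Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def predict(train_points, train_labels, new_point):
    """Return the label of the training point nearest to new_point."""
    nearest = min(range(len(train_points)),
                  key=lambda i: distance(train_points[i], new_point))
    return train_labels[nearest]

# Made-up data: [years_experience, skills_score] -> interviewed?
points = [[1, 2], [2, 1], [8, 9], [9, 8]]
labels = ["no", "no", "yes", "yes"]

print(predict(points, labels, [7, 8]))  # -> yes
```

The point is that nothing in the code says what makes a candidate interview-worthy; the prediction is driven entirely by the labeled examples, which is the essence of machine learning. Deep learning replaces this simple distance comparison with many-layered neural networks, but the learn-from-data principle is the same.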

Welcome to AI February!

Having touched companies’ recruiting processes in some capacity for the entirety of my career, I’ve been fascinated by the ways recruiting has evolved in that time. As a snapshot of this change: ten years ago, as a new recruiter, I sometimes called an advertising agency to help post retail openings in the local newspaper. Today, in recruiting operations, we use Google Analytics and tracking pixels to measure the performance of online postings, and programmatic advertising can automate some placement decisions.

One of the most exciting advancements in the field is artificial intelligence. Already there are HR tech startups touting their use of AI to enhance and improve the recruiting experience. The tagline is generally something about removing human bias and increasing efficiency – great arguments for moving toward technology!

As a data analyst and a former recruiter, I am both excited and skeptical. Can these tools do what they promise? Do they truly apply AI, or is it advanced statistics dressed up? And finally – aren’t people a critical component of the hiring process?

To dive into some of these questions and familiarize myself with the state of AI, I’ve decided to commit my February to learning more about it. Every day throughout February, I will learn something about artificial intelligence and share it here. I’m focused on the following areas:

  • Defining artificial intelligence. What counts as artificial intelligence? What is the difference between AI and machine learning, or are they interchangeable?
  • What’s the state of artificial intelligence in recruiting? Who is already doing this and how successful is it?
  • What are the promises and pitfalls of AI?

I realize that each of these three areas is rich enough that I could probably devote a month (or more!) to it. My intention, though, is to explore broadly, so I’ll touch on all of them in one short month.

I’m looking forward to this month of learning and hope you follow along! Feel free to share any interesting tidbits, resources or your own interest with me here or via email or Twitter. Here’s to learning!