Vision of the Future

I often find it exhilarating to hear about visions of the future. To be fair, these can sometimes be dystopian, but I’m thinking more along the lines of the Disney World vision of “Look at these amazing things ahead!”

Visually these conversations bring to mind a scene from Epcot’s Spaceship Earth. Spaceship Earth is a ride inside the iconic geodesic dome at Epcot at Walt Disney World. The ride, at one point sponsored by AT&T (corporate sponsorships for rides, genius!), tells the story of the history of communication through animatronics: cave paintings in France, the lectures of Socrates, and the invention of the printing press are a few of the scenes presented. There is also a section that looks ahead to what is possible for human communication using technology.

One of these visions particularly thrilled me in the 1990s – a mother archaeologist, on site at a remote dig, video chatting with her child comfortably at home. This was a vision of the future I wanted to be a part of. First, a working mother who travels with an exciting job – amazing! But also the ability to connect with close family no matter the distance.

Well, it’s 2019 and we have FaceTime. So that dream has come true. But I get this same feeling of exhilaration reading some of these predictions about AI and its potential to enhance human connection and performance in this article.

Here are some particularly hopeful selections:

  • “By 2030, most social situations will be facilitated by bots — intelligent-seeming programs that interact with us in human-like ways. At home, parents will engage skilled bots to help kids with homework and catalyze dinner conversations. At work, bots will run meetings. A bot confidant will be considered essential for psychological well-being, and we’ll increasingly turn to such companions for advice ranging from what to wear to whom to marry.” —Judith Donath, Harvard University’s Berkman Klein Center for Internet & Society
  • “The developed world faces an unprecedented productivity slowdown that promises to limit advances in living standards. A.I. has the potential to play an important role in boosting productivity and living standards.” —Robert D. Atkinson, president of the Information Technology and Innovation Foundation
  • “People will increasingly realize the importance of interacting with each other and the natural world and they will program A.I. to support such goals, which will in turn support the ongoing emergence of the ‘slow movement.’ For example, grocery shopping and mundane chores will be allocated to A.I. (smart appliances), freeing up time for preparation of meals in keeping with the slow food movement. Concern for the environment will likewise encourage the growth of the slow goods/slow fashion movement. The ability to recycle, reduce, reuse will be enhanced by the use of in-home 3-D printers, giving rise to a new type of ‘craft’ that is supported by A.I.” —Dana Klisanin, psychologist, futurist and game designer

To be fair, many of the predictions in the article are not nearly so rosy. Many point out the potential for AI to be destructive and misery-making for much of humanity — scary stuff. Reading these comments from AI experts and futurists, I appreciate the spectrum of visions that seemed lacking when I was happily dreaming about my 2019 life during a Disney trip. I think (and I hope) that these warnings are driving conversations about how industries will shape AI as we move into the future.

#AIFebruary: Month in Review

Companies spend 40-60% of revenue on payroll, and much of this enormous expense is shaped by management decisions about recruiting, promoting and training that are made on gut feel. That said, HR is an area of active innovation. The growing discipline of people analytics and improved technology are just some of the ways that the function is becoming data savvy and predictive.

My goal this month was to understand more about what AI is and how it will impact HR and the future of work. My takeaways:

  1. Robots are not going to take our jobs. At least not in the near future. The most compelling applications of AI currently enhance human work. Jobs and the skill sets needed to excel in this environment will likely change, with a focus on skills that humans already excel at, like thoughtful communication and making judgment calls with complex information.
  2. The ethical implications of AI are a serious concern. There are many, many discussions on this topic but no clear guidelines have emerged.
  3. Regulation lags implementation. Similar to ethical concerns, it’s not yet clear how companies will be asked to stay compliant using AI tools.
  4. Interacting with AI can be fun! Concerns that AI leads to poor user experience or an inhuman touch seem to be unfounded for the most part. While there are complaints about highly automated recruiting processes, many platforms provide transparency and feedback at a scale that isn’t possible for traditional recruiting organizations.

Thanks for engaging in #AIFebruary. It’s been fun to hear from people interested in the topic and always amazing to find kindred hobbyists on the internet. Please continue to reach out to me with questions and comments! I’ll continue to share my thoughts on the topic here and on Twitter.

HR as a model for enterprise AI

This article by Tracy Malingo on HR Technologist caught my attention as an interesting approach to ensuring the ethical application of AI in the enterprise.

In “HR is the Ethics Model AI Needs,” Malingo makes a compelling case for placing AI under the purview of HR instead of IT. This seems like a radical notion but her arguments are solid:

  • HR is already tasked with steering company culture and acceptable behavior. HR has often been among the last to adopt innovation, and that limits its ability to help create better technology and innovative approaches to hiring, managing, developing and retaining the company’s valuable workforce.
  • Placing AI within HR provides a system of checks and balances, since HR is incentivized to prioritize employee relations.

I’m not sure if companies will implement this, but I think it’s a proposal worth considering.

Stephen Hawking’s take on AI

I’m reading Brief Answers to the Big Questions, Stephen Hawking’s last book, published in 2018. One of the questions he takes on is, “Will artificial intelligence outsmart us?” His answer is more nuanced than some media outlets give him credit for.

Hawking sees huge potential for artificial intelligence, especially in partnership with human cognition and if properly aligned with human interests.

If we can connect a human brain to the internet it will have all of Wikipedia as its resource.

Stephen Hawking, Brief Answers to the Big Questions

However, he acknowledges that creating this alignment is tricky and there are many risks in a technology that can quickly surpass human abilities and exponentially develop itself.

His key advice is that humanity needs to seriously consider the risks and impact of artificial intelligence alongside developing it if it is to be a beneficial rather than a destructive force.

For my notes on this book as well as the list of books I’ve read and my reviews, visit my Goodreads page.

AI blogs and newsletters for businesspeople

Keeping it brief today. I wanted to share some content I follow for information on AI developments. A challenge I’ve found as I learn about AI is that a lot of content skews technical – the intended audience is programmers, not businesspeople. The sites I’ve shared below are relevant to those focused on implementation and investment rather than creation. I may update this over time as I find additional sites.

  • Artificial Intelligence Weekly: Digest of relevant stories and investment news
  • VentureBeat’s AI Channel: Good coverage and a weekly newsletter recap from Khari Johnson, the AI staff writer
  • Work-bench blog: Work-bench is an enterprise-technology-focused VC in New York City. They’re not exclusively focused on AI, but it’s gotten a lot of attention lately and often comes up in their blog posts.

Impact of AI on jobs today

Last week I shared a 2013 article from Mother Jones about the fears of job automation. This week I want to share an article from LinkedIn about how this is now our reality.

This September 2018 post leans on LinkedIn data and the World Economic Forum Future of Jobs report to show some trends of the entrance of AI across industries.

An interesting highlight from the article is a comparison of the occupations with the highest and lowest growth over the past five years.

Image via LinkedIn

Among the fastest-growing jobs are Human Resources Specialist and Recruiter, which this article suggests are inherently difficult to automate and therefore less likely to see the impacts of AI.

These roles require an understanding of human behaviors and preferences—a skill set which fundamentally can’t be automated.

Igor Perisic, “How artificial intelligence is already impacting today’s jobs,” LinkedIn

I would agree that the top jobs on this list do require an understanding of human behavior that may insulate them in some ways. However, the growth of these jobs also increases the pressure to ensure they are as efficient as possible, and that is the benefit of applying AI in these fields.

Risks of AI in HR applications

As a follow-on to my post yesterday about responsible AI, I wanted to highlight some of the risks inherent in applying AI to HR applications. Working in people analytics, I realized there are additional challenges given the sensitive data set. I imagine some of these concerns resemble those around analytics in healthcare, given the sensitivity of people data, but others are specific to HR.

  1. Algorithms are built on training data and learn from past behavior. Biased, discriminatory, punitive or overly hierarchical management practices could be institutionalized without additional review and management of the training data. AI systems need levers and transparency so that users know how they work and can engage them appropriately.
  2. As with so many tools in HR, the use of people data presents a risk of data exposure.
  3. Possible misuse of data. Management may pull development opportunities from employees predicted to leave in the next six months. A hiring manager may decide not to make an offer to someone predicted to reject it. AI is appealing because it can inform decisions, but false positives have real-life consequences.
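One concrete way to review training data and model outputs for the bias described above is the “four-fifths rule” used in US employment selection: if one group’s selection rate falls below 80% of another’s, the process warrants review. Here is a minimal sketch of that check on made-up screening outcomes; the group names and records are hypothetical, not real data.

```python
# Hedged sketch: a four-fifths (80%) rule check on hypothetical
# screening outcomes, one simple way to audit for adverse impact.
from collections import Counter

# Hypothetical (group, passed_screen) records - illustrative only.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Fraction of each group that passed the screen."""
    totals, passes = Counter(), Counter()
    for group, passed in records:
        totals[group] += 1
        passes[group] += int(passed)
    return {g: passes[g] / totals[g] for g in totals}

def adverse_impact_ratio(records):
    """Lowest selection rate divided by the highest."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

ratio = adverse_impact_ratio(records)
print(f"Adverse impact ratio: {ratio:.2f}")  # below 0.80 warrants review
```

A real audit would run this across protected classes and at every stage of the funnel, but even this small check makes bias measurable rather than anecdotal.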

Many of the companies building AI tools for HR are thinking deeply about these issues. Frida Polli, founder of pymetrics, discussed the impact on diversity and inclusion in this interview at Davos. There is great thought leadership in the space.

How AI will impact the future of work

All February I’ve studied how AI can influence Human Resources, but a parallel and very interesting topic is how AI will impact the future of work. Here’s a prediction from HR Technologist on some ways AI will change the workplace:

  1. Recruiting. I’ve looked at this extensively this February, as this is my professional background.
  2. Internal communications and interactions across languages. I recently spent a work-from-home day alongside a friend in technical customer service. She was responding to questions from the product team in Japan and using Google Translate as the intermediary. As she said, it wasn’t perfect, but it got the job done and she was able to resolve their issue across languages. This is becoming a built-in feature of employee collaboration tools.
  3. Streamline training and onboarding. AI can provide coaching tips in real-time. Think Gmail message auto-complete for work performance.
  4. Offer more robust problem-solving support. Beyond simplifying, AI can offer a wider view of potential solutions and approaches.
  5. Drive productivity. AI can automate tedious and repetitive actions of the workplace – meeting scheduling and review, answering common questions.
  6. Push for new regulations. Many of the areas that AI will touch are not well regulated. This will need to change as workers engage with it regularly.

AI and the candidate experience

It’s exciting to see the ways that AI can enhance recruiting capabilities, but just as important – and potentially more so – is how it impacts the job candidate’s experience. An article from last year on CNBC highlighted one candidate’s feelings of distance when interacting with an AI recruiting tool, in this case HireVue: “It felt weird. I was kind of talking into the void.”

Anecdotally, I’ve heard a variety of reactions, increasingly positive. As interacting with AI tools becomes a more common experience during the recruiting process, people become more comfortable with the idea that part of the process will be automated. I also suspect some generational differences; Millennials notoriously hate phone calls, and many of these AI solutions approximate a text conversation or a video chat.

A great approach to exploring these solutions is to look at the data. Are candidates less likely to continue with the recruiting process when presented with an AI tool? Do they rate their experience lower once these solutions are put in place? Individual companies can track this for themselves with some simple data collection.
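The simple data collection described above can be sketched in a few lines: compare the share of candidates who continue past an AI-assisted screen versus a traditional one, and check whether any gap is statistically meaningful. The funnel counts below are invented for illustration; a real analysis would pull them from an applicant tracking system.

```python
# Hedged sketch: comparing candidate continuation rates between an
# AI-assisted screen and a traditional one, using made-up counts.
import math

def continuation_rate(continued, total):
    """Share of candidates who stayed in the process after this stage."""
    return continued / total

def two_proportion_z(c1, n1, c2, n2):
    """Two-proportion z-statistic for a difference in rates."""
    p1, p2 = c1 / n1, c2 / n2
    pooled = (c1 + c2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical funnel counts - illustrative only.
ai_rate = continuation_rate(continued=420, total=600)
traditional_rate = continuation_rate(continued=360, total=500)
z = two_proportion_z(420, 600, 360, 500)

print(f"AI-assisted screen:  {ai_rate:.0%}")
print(f"Traditional screen:  {traditional_rate:.0%}")
print(f"z-statistic: {z:.2f}")  # |z| < 1.96 here: no significant gap at 5%
```

With these invented numbers the gap is within noise; the point is that the question is cheaply answerable with data a recruiting team already has.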

Public policy and AI

One of the largest AI conferences is NeurIPS, which happens annually in December. I spent some time browsing the presentations from the 2018 conference and found an interesting presentation by Edward W. Felten from Princeton University titled, “Machine Learning Meets Public Policy: What to Expect and How to Cope.”

Felten kicks off his talk by highlighting the messaging that people in public policy are hearing about AI, and overwhelmingly, it is a warning to put regulations in place. People like Henry Kissinger and Elon Musk have already sounded the alarm to policy makers.

His thesis is that the best policies will come out of technical people partnering well with policy makers, with each side trusting the other’s expertise. This comes from being engaged and constructive in the policy-making process over time.

It was interesting to see pushback on this thesis from some attendees. One counterpoint was that many industries have self-regulating bodies, such as FINRA, and that this could be an option for machine learning. Felten pushed back in turn, arguing that self-regulating bodies work well only when they have public accountability and can be replaced when they fail to deliver it.