Variety Is the Spice of Life for AI, Particularly in Humanizing HR

B. Shimmin

Summary Bullets:

  • Digital assistants like Microsoft Cortana are a lot like people in that they are most interesting when they specialize and become experts in a given field.
  • This is certainly the case in human resources (HR), where many AI-driven chatbots are available, each able to address a specific problem (like employee feedback) or support a specific constituency (like millennial employees).

If there’s one lesson to be learned from this week’s announcement that Microsoft Cortana will be able to converse freely with Amazon Alexa, it’s that AI-driven personal assistants – like people – do well to specialize. Cortana is quite adept at setting appointments and Alexa is pretty good at turning lights off and on. But don’t ask Cortana to turn down the heat or Alexa to set up an Outlook meeting. Like people, AI platforms grow up under very different circumstances, each with its own unique philosophy, friends, and culture (in the world of IT, ‘philosophy’ means AI algorithms, ‘friends’ equals data sources, and ‘culture’ means domain of expertise).

Just as in the real world, variety is the spice of life for AI. That’s why there are so many unique chatbots built on top of AI assistant platforms like Cortana and Alexa, each specializing in a unique set of capabilities, each catering to a distinct constituency. A great place to see this in action is within the realm of human resources (HR). Here are a few notable HR-capable chatbots, most of which are natively embedded within the Slack collaboration tool.

  • Obie – This general-purpose question-and-answer chatbot uses ML to learn from past interactions in supporting user requests for information. It is in use among some very notable customers, including ESPN, Disney, NASA, and SAP.
  • Lucy Abbot – This Slack plug-in is geared toward tasks such as onboarding and uses a lexicon tailored to millennial and Gen-Z users.
  • Niles – Built as a knowledge-sharing chatbot, Niles works not just to answer questions but also to ensure that supporting content is kept up to date, identifying stale answers and reminding owners to provide an update.
  • Captain Feedback – Created to give employees an open forum for honest discussion of performance.

From an IT perspective, one of the benefits of intelligent chatbots like these is that they do not require a tremendous amount of data and domain expertise to build. IBM, for instance, uses the popular Botkit open source project to embed various IBM Watson AI chat capabilities within Facebook Messenger, Twitter, Slack, etc. Thanks to dialog flows and pre-trained content, these chatbots don’t need to be trained… at least not initially or extensively. However, because chatbots rely upon machine learning (ML) and deep learning (DL) routines, which constantly learn from past experience, it is imperative that IT view a chatbot as if it were a garden, ensuring that it is constantly nourished with new and accurate data and kept free of weeds through continuous feedback. This means refining training phrases (“I want candy” = “Candy I want”), marking incorrect answers as such, and adding new classifiers to better identify intent.
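To make that feedback loop concrete, here is a deliberately minimal sketch of intent classification with a correction step. All names are hypothetical illustrations, not the Botkit or Watson APIs; a toy token-overlap score stands in for a real ML model, showing why “I want candy” and “Candy I want” can resolve to the same intent and how marking a misrouted phrase retrains the bot.

```python
from collections import defaultdict

class ToyIntentClassifier:
    """Hypothetical stand-in for an ML intent classifier.

    Matches on token overlap, so word order is irrelevant
    ("I want candy" scores the same as "Candy I want").
    """

    def __init__(self):
        # intent name -> list of training phrases, stored as token sets
        self.examples = defaultdict(list)

    def train(self, phrase, intent):
        """Add a training phrase for an intent (the 'feedback' step)."""
        self.examples[intent].append(set(phrase.lower().split()))

    def classify(self, phrase):
        """Return the intent whose training phrases best overlap the input."""
        tokens = set(phrase.lower().split())
        best_intent, best_score = None, 0
        for intent, token_sets in self.examples.items():
            score = max(len(tokens & s) for s in token_sets)
            if score > best_score:
                best_intent, best_score = intent, score
        return best_intent

bot = ToyIntentClassifier()
# Initial "pre-trained" content, analogous to a vendor-supplied dialog flow.
bot.train("I want candy", "request_snack")
bot.train("set up a meeting", "schedule_meeting")

print(bot.classify("candy I want"))  # → request_snack (order does not matter)

# Gardening step: a phrase the bot misrouted is added under the correct intent,
# so future variations of it resolve properly.
bot.train("book a room for the team", "schedule_meeting")
print(bot.classify("book a room for the team please"))  # → schedule_meeting
```

A real deployment would swap the overlap score for a trained classifier, but the maintenance pattern is the same: collect misclassified utterances, label them, and retrain.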

Because chatbots are a lot like children, requiring an attentive, guiding hand, buyers should be wary of vendors promoting ML- and DL-driven chatbots. There are many solutions that appear to be AI-infused but, in actuality, deliver only rule-based heuristics, well-informed decision trees, or pattern-matching recommendations. True ML and DL systems (chatbot or otherwise) are not static entities reliant upon an extensive set of rules; rather, when presented with decisions that do not adhere to any rule, these systems are still able to draw meaningful conclusions and propose appropriate actions.

Enterprise buyers of AI services must be careful not to underestimate the complexity of adopting even the simplest AI capability, such as sentiment analysis. There is a tremendous amount of hype in the marketplace characterizing these services as no- or low-investment implementations in terms of expertise. And while many services are indeed easily built, the underlying, data-oriented work involved in creating and training the necessary data models for these services is in no way a trivial affair – a fact that is not likely to change going forward.

What do you think?
