• HPE announced plans to acquire MapR, augmenting its data analytics portfolio with proprietary file system technology.
• HPE’s purchase reinforces the message that to derive true value from an artificial intelligence (AI) implementation, enterprises need to master the basics of data management.
Life isn’t always as it seems, and the same can be said of AI. Sure, the sexy parts of AI are the platforms, the algorithms, the APIs, and the use cases. We are enamored with natural language processing, predictive maintenance, improved decision making, and the ability to deliver a more personalized customer experience. But there is also the intrigue. The seedy underbelly of AI comprises the ethical concerns that reveal the technology’s potential dark sides. What if models result in unfair bias against a specific gender or race? What about privacy concerns? What if AI is used for destructive rather than constructive purposes?
Businesses looking to adopt AI must not only evaluate the technology’s implications for job displacement and data security, but also consider that algorithms may unintentionally undermine the organization’s ethical standards.
Customers are quick to pass judgement; if unintentional biases become public, a company’s brand reputation may suffer significantly.
Much has been written about ethics and artificial intelligence (AI), and rightly so. With many organizations looking to adopt some form of AI technology in 2018, business leaders are wise to stay on top of emerging ethical concerns.
Job displacement is still a key consideration, as is safeguarding data. In a recent GlobalData survey, 23% of organizations indicated they had cut or not replaced employees because of AI; 57% indicated security as a top concern.
However, looking ahead, the question of ethics is the real challenge the AI community will need to tackle, and it is far more controversial than security or privacy. What happens when a self-driving car must choose between hitting a child who has run into the road and swerving at the risk of injuring its passenger? How proactive should a personal assistant be when it detects wrongdoing? If a user’s usage patterns suggest a serious offense has been committed, should the assistant alert the authorities?
Probably more relevant to business leaders is the concern that they may not know whether an AI-infused application will perform up to their organization’s ethical standards. It may contain unintentional bias – say, a financial algorithm that discriminates against a specific race, or an application that favors one gender over another. What should be done when a phrase that is acceptable from one demographic is completely unacceptable from another – can an algorithm be trained to reliably make that distinction? Maybe, but what happens when it makes a mistake?
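One simple way organizations can probe for this kind of unintentional bias is to compare a model’s decision rates across demographic groups – a check often called demographic parity. The sketch below is only an illustration of that idea, not a complete fairness audit; the loan decisions, group labels, and `approval_rate` helper are all made up for this example.

```python
# A minimal sketch of a demographic-parity check: compare a model's
# approval rates across demographic groups. All data below is
# hypothetical illustrative data, not from any real system.

def approval_rate(decisions, groups, group):
    """Fraction of applicants in `group` whose application was approved."""
    in_group = [d for d, g in zip(decisions, groups) if g == group]
    return sum(in_group) / len(in_group)

# 1 = approved, 0 = denied; "A" and "B" are hypothetical demographic groups
decisions = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rate_a = approval_rate(decisions, groups, "A")  # 4/5 = 0.8
rate_b = approval_rate(decisions, groups, "B")  # 1/5 = 0.2
gap = abs(rate_a - rate_b)

# A large gap does not prove discrimination, but it flags a disparity
# worth investigating before customers discover it first.
print(f"Group A: {rate_a:.0%}, Group B: {rate_b:.0%}, gap: {gap:.0%}")
```

In practice, teams would run checks like this on far larger samples and alongside other fairness metrics, since a single number can’t capture every way a model might cross an ethical line.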
On the one hand, unintentional results are not the fault of the organization using the AI solution; the responsibility may lie in the data used to train the underlying machine learning model. However, customers are quick to pass judgement. If and when these unintentional biases become public, customers will quickly assign blame to the company using them, potentially with enormous damage to the brand’s reputation.
Just as CEOs may take the blame for customer data breaches, and as a result may lose their jobs, senior leaders are also at risk of taking the fall when an AI solution implemented by their organization crosses an ethical line. It’s in their best interest to ensure that doesn’t happen – their reputation depends on it.
• How did a computer algorithm like Google’s AlphaZero manage to learn, master and then dominate the game of chess in just four hours?
• AlphaZero’s mastery of chess stemmed from the sheer, brute force of Google’s AI-specific Tensor Processing Units (TPUs) – 5,000 of them, to be exact.
“How about a nice game of chess?” With that iconic line of dialog from one of my favorite films, the 1983 Cold War sci-fi thriller WarGames, nuclear war was narrowly averted by a machine (named Joshua) capable of teaching itself how to play a game. This week another machine, one of Google’s DeepMind AI offspring, AlphaZero, did something similar: it took four hours to teach itself how to play chess and then proceeded to demolish Stockfish, the highest-rated chess engine. After 100 games, AlphaZero had racked up 28 wins and zero losses. So much for more than a millennium of human effort in teaching computers how to play chess. But how was this possible? Was this a fair match? How did a computer algorithm like AlphaZero manage to learn, master, and then dominate the game of chess in just four hours?
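The phrase “teach itself” refers to self-play: the program improves by playing games against itself and reinforcing the moves that led to wins. The toy sketch below shows that general idea on the much simpler game of Nim (take 1–3 stones; taking the last stone wins). It is an analogy only – AlphaZero itself combines deep neural networks, Monte Carlo tree search, and thousands of TPUs, none of which appear here.

```python
# A toy self-play learner for Nim, illustrating (by analogy only) the
# self-play idea behind AlphaZero. This tabular Monte Carlo scheme is
# not the DeepMind method.
import random

random.seed(0)

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

value = {}  # value[(stones, move)]: learned estimate of that move's worth

for _ in range(20000):  # self-play episodes: the program plays itself
    stones, history, player = 10, [], 0
    while stones > 0:
        moves = legal_moves(stones)
        if random.random() < 0.2:                       # explore sometimes
            move = random.choice(moves)
        else:                                           # else play greedily
            move = max(moves, key=lambda m: value.get((stones, m), 0.0))
        history.append((player, stones, move))
        stones -= move
        player = 1 - player
    winner = 1 - player  # the player who took the last stone wins
    for p, s, m in history:
        reward = 1.0 if p == winner else -1.0
        old = value.get((s, m), 0.0)
        value[(s, m)] = old + 0.1 * (reward - old)       # nudge toward result

# Nim from 10 stones is solved: the optimal opening is to take 2,
# leaving the opponent a multiple of 4. Self-play discovers this.
best_opening = max(legal_moves(10), key=lambda m: value[(10, m)])
print("learned opening move:", best_opening)
```

The point of the analogy: no chess (or Nim) knowledge is programmed in beyond the rules; everything else emerges from sheer volume of self-play, which is exactly where raw compute becomes the deciding factor.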
At its annual user conference, customer experience management player Genesys introduced Kate, a personified artificial intelligence (AI) platform tailored to augment and automate multimodal customer interactions.
Genesys Kate, however, is not meant to compete with AI platforms such as IBM Watson or Salesforce.com Einstein. Instead, Kate seeks to blend its own capabilities with those offerings, serving as an open platform.
Personified AI platforms – suddenly every technology vendor seems to have an AI persona that’s eager to strike up a one-on-one conversation. There’s of course IBM Watson, Amazon Alexa, Apple Siri, Google Assistant, Salesforce.com Einstein, and Adobe Sensei, but that somewhat lengthy list doesn’t even scratch the surface of what’s available when you bring in AI bots like Mitsuku, Poncho, Melody, Rose, and my personal favorite, Dr. AI. And now we have Kate, a personified AI platform introduced by customer experience management player Genesys this week during its annual user conference.