The Three Laws of Enterprise AI, or How I Learned to Stop Worrying and Love Machine Intelligence

B. Shimmin

Summary Bullets:

  • Microsoft’s venture fund for AI includes a number of stipulations concerning not just what AI can do, but also how it might impact humans and the future of humanity itself.
  • In the spirit of Isaac Asimov, we’ve translated Microsoft’s AI venture funding stipulations into our own three laws of robotics in the enterprise, positing some questions of our own regarding whether AI can actually save us from ourselves when it comes to cognitive bias.

I’m a big fan of science fiction authors like Philip K. Dick and Isaac Asimov because these gentlemen teach me, over and over, to believe fully in technology while also recognizing the dangers of rushing headlong into a future predicated upon the unbridled application of that technology. The outcome for fictional techno-eager civilizations is often a full-on dystopia (as in Dick’s Do Androids Dream of Electric Sheep?). Alternatively, and perhaps more terrifyingly, the resulting society might appear quite utopian on the surface but in fact operate as a dystopia. In Asimov’s ‘Foundation Trilogy,’ we definitely see that nice ice cream swirl combining both outcomes in one tasty treat, stemming from the development of a new branch of science called psychohistory, which could be used to predict the future of large groups of people by merging statistics, sociology, and history.

The more I learn about the application of artificial intelligence (AI) within the realm of enterprise data and analytics, the more I feel we are speeding toward our own chocolate-and-vanilla rendition of Asimov’s Galactic Empire from the Foundation Trilogy. Certainly, predicting the future, or even coming close to it, could greatly benefit humankind. But it could also cause great suffering if used either incorrectly or with malice. I think the folks at Microsoft understand this sort of worry. At least, that was my impression of this recent blog post trumpeting the company’s efforts to set up a Microsoft Ventures program supportive of AI entrepreneurs.

This program has already helped 19 companies get up and running with AI. Some are working on smart sales assistants, others on smart data preparation, and still others on self-contextualized content. It seems Microsoft is casting a wide net. But not every AI-curious company can apply. To quote Microsoft CEO Satya Nadella, Microsoft’s principles and goals for AI are that “AI must be designed to assist humanity; be transparent; maximize efficiency without destroying human dignity; provide intelligent privacy and accountability for the unexpected; and be guarded against biases.”

Those words seem awfully familiar. In fact, they read much like Asimov’s early but highly influential set of rules for robotics, introduced in his short story “Runaround.” I won’t repeat them here, but with those in mind, I’ve rephrased Nadella’s comments as The Three Laws of Enterprise AI:

  1. An enterprise AI must assist humanity, but it must do so without destroying human dignity.
  2. An enterprise AI must at all times be able to explain its decisions.
  3. An enterprise AI must protect us from ourselves, particularly from our own biases.

They lack Asimov’s lovely use of self-reference, but I think they summarize Microsoft’s concerns pretty well. Are they wholly achievable? That mostly depends upon us. The first law comes down to benevolent leadership and responsible corporate stewardship. The second law poses a problem we don’t yet know how to solve, because many of our most capable systems cannot articulate why they reached a given decision. Consider Google Brain, whose neural networks just created their own, opaque encryption algorithm. It’s the third law that worries me the most. No matter how smart our analytics software may become, at the end of the day the application of any resulting insights comes down to human judgment, which unfortunately depends upon an opaque algorithm that is private to each person and, quite frankly, not to be trusted.
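
To see what the second law demands in practice, contrast a model that can account for its own output with one that cannot. Below is a minimal, purely illustrative Python sketch; the “credit scoring” features, weights, and applicant data are all hypothetical, not drawn from any real system. A linear model can decompose every decision into visible, per-feature contributions, which is exactly the kind of accounting an opaque model does not naturally offer:

```python
# Illustrative sketch only: a hand-rolled linear scoring model whose every
# decision decomposes into visible per-feature contributions. The feature
# names and weights below are hypothetical.

FEATURES = ["income", "debt_ratio", "years_employed"]
WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
BIAS = -0.1

def score(applicant: dict) -> float:
    """Total score: a plain weighted sum, nothing hidden."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in FEATURES)

def explain(applicant: dict) -> None:
    """Attribute the decision to each feature, term by term."""
    for f in FEATURES:
        print(f"{f:>15}: {WEIGHTS[f] * applicant[f]:+.2f}")
    print(f"{'bias':>15}: {BIAS:+.2f}")
    print(f"{'total':>15}: {score(applicant):+.2f}")

# Hypothetical applicant, with features pre-scaled to the range [0, 1]
explain({"income": 0.8, "debt_ratio": 0.5, "years_employed": 0.3})
```

A deep neural network offers no such term-by-term accounting out of the box, which is why the second law names an open research problem rather than a checkbox feature.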

That private, untrustworthy algorithm is, of course, the set of one hundred plus cataloged cognitive biases, such as confirmation bias, the gambler’s fallacy, and the framing effect. And from what little I know, these happy outcomes of evolutionary psychology are not a problem AI can mitigate, let alone resolve. At present, the closest answer we have to the cognitive bias problem comes from the application of some admittedly crazy-sounding ideas such as behavioral economics, cognitive psychology, and neuroeconomics. Ironically, these sound like a solid starting point for a culture seeking to quantify how decisions are made. Is our ultimate endpoint something akin to psychohistory? Who knows? The point is that we humans must view our own fallible nature as an inseparable and hugely important variable within the ultimate equation of AI itself.
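
Even the humblest of these biases yields to a few lines of simulation. This sketch (again, purely illustrative) tests the gambler’s fallacy directly: after a streak of three heads, is tails any more likely on the next flip? It isn’t, no matter how strongly our intuition insists otherwise:

```python
import random

# Purely illustrative: simulate coin flips and check whether a streak of
# three heads makes tails any more likely on the next flip (it does not).
random.seed(42)  # reproducible run
flips = [random.choice("HT") for _ in range(1_000_000)]

# Collect the flip that follows every "HHH" streak
next_flips = [flips[i + 3] for i in range(len(flips) - 3)
              if flips[i:i + 3] == ["H", "H", "H"]]

tails_rate = next_flips.count("T") / len(next_flips)
print(f"P(tails after HHH) = {tails_rate:.3f}")  # hovers around 0.500
```

The arithmetic is indifferent to the streak; our intuition, sadly, is not.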

About Brad Shimmin
As Principal Analyst for Collaboration and Conferencing at Current Analysis, Brad analyzes the rapidly expanding use of collaboration software and services as a means of improving business agility, fostering employee optimization and driving business opportunities.
