Google’s Eavesdropping Home Mini: Who’s Watching the Watchers?

B. Shimmin

Summary Bullets:

  • Digital home assistants like Google Home Mini and Amazon Echo owe users much more than privacy; if they are to be truly trusted, they must also explain how they think and how they make decisions.
  • Fortunately, regulations such as the General Data Protection Regulation (GDPR) will soon begin asking such questions. The only problem is that artificial intelligence (AI) may not be able to provide any answers.

Google was quick to lay blame for its recent eavesdropping Home Mini fiasco on a ‘hardware bug,’ rolling out an update that purportedly prevents devices from inadvertently recording and reporting on overheard conversations should their owners accidentally press the wrong button. From now on, Google Home Mini will only record what you say after you capture its attention with “Hey Google” or “Okay Google.”
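For the curious, here is a minimal sketch in Python of the behavior change described above. It is purely illustrative: the function names, data structures, and trigger phrases are assumptions for the sake of the example, not Google's actual implementation. The point is simply that, after the update, an accidental press no longer opens the microphone; only a detected hotword does.

```python
# Illustrative sketch only: not Google's code. It models the behavior change
# described above, where recording is gated on a hotword instead of a touch press.
from dataclasses import dataclass

HOTWORDS = ("hey google", "okay google")  # trigger phrases cited in the article


@dataclass
class Utterance:
    text: str            # what the microphone heard (stand-in for raw audio)
    touch_pressed: bool   # whether the device registered a button/touch press


def should_record(utterance: Utterance, touch_trigger_enabled: bool) -> bool:
    """Return True if this utterance may be recorded and sent upstream."""
    if touch_trigger_enabled and utterance.touch_pressed:
        # Pre-update behavior: an accidental press starts a recording.
        return True
    # Post-update behavior: only an explicit hotword opens the microphone.
    return any(utterance.text.lower().startswith(h) for h in HOTWORDS)


overheard = Utterance("so anyway, about the surprise party...", touch_pressed=True)
command = Utterance("Okay Google, what's the weather?", touch_pressed=False)

print(should_record(overheard, touch_trigger_enabled=True))   # True  (the fiasco)
print(should_record(overheard, touch_trigger_enabled=False))  # False (after the fix)
print(should_record(command, touch_trigger_enabled=False))    # True  (normal use)
```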

In other words, it was a simple case of user error and a misunderstanding of how one intelligent home device in particular works. No more button, no more problem, right? Not even close.

With a tsunami of such devices – driven by AI from Google, Amazon, Apple, and Baidu – rapidly filling the global market at price points suited not just to any home but to multiple rooms within it, it is clear that consumers are extremely bullish on AI itself. A recent study conducted by Verdict and GlobalData confirms this: over 80% of those surveyed consider AI a ‘reward’ rather than a ‘risk.’

However, when asked “What do you see as the biggest risk of AI in the coming five to ten years?” almost 60% of those same respondents named information security as the biggest threat looming on the horizon.

You can view the full data in Verdict’s new digital magazine on AI, called Verdict AI.

The Google Home Mini incident certainly puts this concern into stark relief, calling for tighter information security practices for any device that collects data actively or (heaven forbid) passively. But that is only the tip of the iceberg. No one is talking about a bigger concern with these ‘always-on’ devices, one that goes well beyond data sovereignty and eavesdropping. There is a more difficult question to answer here, and it concerns the very AI algorithms driving these devices.

What do we know about the brains behind the digital assistant’s ‘record’ button? What do we know about the algorithms responsible for the decisions Google Home Mini makes when it listens to our private conversations and accesses our private information? How do they decide what to say, do, and show? Can a user know that a recommendation or declaration is free of bias and of any agenda operating on behalf of the AI’s owner?

Anticipating these questions back in 2016, Microsoft CEO Satya Nadella proposed an ethical approach to AI, suggesting that vendors must build solutions that:

— Are designed to assist humanity;
— Are transparent;
— Maximize efficiency without destroying human dignity;
— Provide intelligent privacy and accountability for the unexpected;
— Guard against biases.

These principles seem plausible in intent. Interestingly, when the European Union puts GDPR into effect this coming year, Nadella’s call for transparency will be put to the test. Under GDPR’s ‘Right to Explanation’ clause, users will be able to ask Google, Amazon, and others for an explanation of the inner workings of those AI black boxes.

This is all well and good, but the real problem is one of visibility, or rather a lack thereof.

We deserve to know how and why an AI algorithm decides when we need to leave in order to reach an appointment on time, or how it determines the best arrangement of smooth jazz hits for our morning commute. Unfortunately, the deep learning (DL) models used by these devices are black boxes, opaque even to their creators, who simply set down goals like getting to work on time and add data – lots and lots of data. These black boxes run many (many) layers of neural networks against all available data in order to reach those goals; knowing how the algorithms get there isn’t a priority, or even a possibility in any detailed sense, for anyone watching over the proceedings.
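To make that point concrete, here is a minimal sketch (a toy example, not anything a vendor ships) of why such models resist explanation: a tiny two-layer network learns a made-up ‘should I leave now?’ decision from synthetic data, and the only artifact of its ‘reasoning’ afterwards is a pile of weight matrices. The task, feature meanings, and network size are all assumptions chosen for illustration.

```python
# Toy sketch: a tiny two-layer neural network trained toward a goal.
# After training, its "explanation" is nothing but matrices of numbers.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: [minutes until appointment, travel estimate, traffic factor]
X = rng.uniform(0, 1, size=(500, 3))
# Toy ground truth the network must rediscover: leave now if time is tight.
y = ((X[:, 0] - X[:, 1] * (1 + X[:, 2])) < 0.1).astype(float).reshape(-1, 1)

# Two dense layers with a hidden nonlinearity.
W1, b1 = rng.normal(0, 0.5, (3, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass (cross-entropy loss), plain gradient descent
    grad_out = (p - y) / len(X)
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)
    W2 -= 0.5 * (h.T @ grad_out)
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ grad_h)
    b1 -= 0.5 * grad_h.sum(axis=0)

accuracy = ((p > 0.5) == y).mean()
print(f"accuracy: {accuracy:.2f}")
# The only record of "why" the model decides as it does is this blob of numbers:
print(W1.round(2))
```

Whatever accuracy it reaches, asking this trained model why it recommended leaving at a given moment only gets you those matrices back, which is roughly the predicament GDPR’s ‘Right to Explanation’ runs into at industrial scale.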

Consider Google’s AI research group, Google Brain, which in 2016 created its own encryption algorithm. Not even the Google engineers who built the system can reverse-engineer how it arrived at that new encryption scheme. At best, therefore, smart digital assistants like Amazon Alexa, Google Assistant, and Apple Siri can only explain the rules, goals, and data at hand, nothing more. Is that enough to avoid discrimination and build lasting trust in AI itself? Only time and transparency will tell. At least with legislation like GDPR asking difficult questions at the outset, the AI industry and those who benefit from AI can engage in an open discussion.

About Brad Shimmin
As Principal Analyst for Collaboration and Conferencing at Current Analysis, Brad analyzes the rapidly expanding use of collaboration software and services as a means of improving business agility, fostering employee optimization and driving business opportunities.
