Job Displacement And Privacy Were Yesterday’s AI Hot Buttons; Today, Ethics Issues Are Even More Important and Controversial

R. Bhattacharyya

Summary Bullets:

  • Businesses looking to adopt AI must not only evaluate the technology’s implications for job displacement and data security, but also consider that algorithms may unintentionally undermine the organization’s ethical standards.
  • Customers are quick to pass judgement; if unintentional biases become public, a company’s brand reputation may suffer significantly.

Much has been written about ethics and artificial intelligence (AI), and rightly so.  With many organizations looking to adopt some form of AI technology in 2018, business leaders are wise to stay on top of emerging ethical concerns. 

Job displacement is still a key consideration, as is safeguarding data. In a recent GlobalData survey, 23% of organizations said they had cut or not replaced employees because of AI; 57% cited security as a top concern.

However, looking ahead, the question of ethics is the real challenge the AI community will need to tackle, and it is far more controversial than security or privacy. What happens when a self-driving car must decide between hitting a child who has run into the road and swerving at the risk of injuring its passenger? How proactive should a personal assistant be when it detects wrongdoing? If a personal assistant concludes from a user’s usage patterns that a serious offense has been committed, should it alert the authorities?

Probably more relevant to business leaders is the concern that they may not know whether an AI-infused application will perform up to their organization’s ethical standards. It may contain unintentional bias – say, a financial algorithm that discriminates against a particular race, or an application that favors one gender over another. What should be done when a phrase that is acceptable from one demographic is completely unacceptable from another – can an algorithm be trained to reliably make that distinction? Maybe, but what happens when it makes a mistake?
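One way to make that concern tangible is to audit a model’s decisions before they reach customers. The sketch below is a minimal, hypothetical example of such a check in Python: it computes approval rates by demographic group for a toy set of lending decisions and derives a disparate impact ratio. The data, the column names (group, approved), and the 0.8 threshold (the common “four-fifths rule”) are illustrative assumptions, not a reference to any particular vendor’s tooling.

```python
import pandas as pd

# Toy decision log from a hypothetical lending model; column names are
# illustrative assumptions for this sketch.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   1,   0,   0,   0],
})

# Approval rate per demographic group.
rates = decisions.groupby("group")["approved"].mean()

# Disparate impact ratio: lowest approval rate divided by highest.
# Values below ~0.8 (the "four-fifths rule") are a common red flag.
di_ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate impact ratio: {di_ratio:.2f}")
```

A check like this will not prove a model is fair, but it gives leaders a simple, repeatable number to watch before an unintentional bias becomes a public incident.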

On the one hand, unintentional results are not the fault of the organization using the AI solution; the responsibility may lie in the data used to train the underlying machine learning model. However, customers are quick to pass judgement. If and when these unintentional biases become public, customers will quickly assign blame to the company deploying the solution, potentially with enormous impact on its brand’s reputation.

Just as CEOs may take the blame for customer data breaches, and as a result may lose their jobs, senior leaders are also at risk of taking the fall when an AI solution implemented by their organization crosses an ethical line.  It’s in their best interest to ensure that doesn’t happen – their reputation depends on it.

 

When It Comes to Data Processing, IoT Users Like to Live on the Edge

B. Shimmin

Summary Bullets:

  • The Internet of Things is not only changing how consumers interact with the world around them; it is also driving a tectonic shift in how companies process and analyze device data.
  • Traditional best practices for gathering and analyzing data, where information is stored and processed centrally, are no longer relevant. Forget big data warehouses. IoT customers are looking to analyze data as close to the source as possible, at the edge of the network.

Companies looking to jump on the burgeoning IoT bandwagon may have to reevaluate how they architect their solutions.
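To make the “analyze at the edge” idea concrete, here is a minimal, hypothetical sketch of an edge gateway loop that aggregates raw sensor readings locally and forwards only compact summaries (plus any out-of-range readings) to a backend, rather than streaming every sample to a central warehouse. The read_sensor and send_upstream functions, the window size, and the alert threshold are all assumptions made for illustration.

```python
import random
import statistics
import time

# Hypothetical stand-ins for real device and network I/O.
def read_sensor() -> float:
    return 20.0 + random.gauss(0, 1)       # e.g., a temperature reading

def send_upstream(payload: dict) -> None:
    print("-> backend:", payload)          # replace with a real transport

WINDOW_SIZE = 60        # samples per summary sent upstream
ALERT_THRESHOLD = 25.0  # forward individual readings above this value

window = []
while True:
    value = read_sensor()
    if value > ALERT_THRESHOLD:
        # Anomalies go upstream immediately; routine data stays local.
        send_upstream({"type": "alert", "value": value})
    window.append(value)

    if len(window) >= WINDOW_SIZE:
        # Only a small summary leaves the edge, not the raw stream.
        send_upstream({
            "type": "summary",
            "count": len(window),
            "mean": round(statistics.mean(window), 2),
            "max": round(max(window), 2),
        })
        window.clear()
    time.sleep(1)
```

The design choice is the point: the heavy lifting happens next to the device, and the backend sees only a fraction of the raw traffic.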

The Unity and IBM Partnership Is as Much About Business Apps as It Is About Gaming

R. Bhattacharyya

Summary Bullets:

  • The announced IBM and Unity partnership has the potential to expose a larger audience to the world of AI.
  • The implications of the deal go beyond gaming; it could change both the way consumers expect to interact with software – at home and at work – and the way developers design software in the future.

Last week, Unity and IBM announced a partnership that could have significant implications for the adoption of artificial intelligence (AI) in both consumer and business applications. The two companies launched the IBM Watson Unity SDK, which enables developers to integrate Watson’s cloud-based AI features into Unity applications. Developers can include features such as Watson’s speech-to-text, visual recognition, language translation, and language classification capabilities in their programs, changing how users issue commands and how software responds to them.
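The Unity SDK itself is written in C#, but the underlying Watson services are ordinary HTTPS endpoints, which is part of what makes them approachable for developers outside gaming. As a rough, hedged sketch of the general shape of such a call, the Python snippet below posts a short audio clip to a speech-to-text recognize endpoint and prints the transcript. The service URL, API key, and file name are placeholders, and the exact request and response formats should be confirmed against IBM’s current API documentation.

```python
import requests

# Placeholders: use your own service URL and API key from IBM Cloud.
SERVICE_URL = "https://<your-speech-to-text-instance>/v1/recognize"
API_KEY = "<your-api-key>"

# Send a short WAV clip and print the recognized text.
with open("command.wav", "rb") as audio:
    response = requests.post(
        SERVICE_URL,
        auth=("apikey", API_KEY),
        headers={"Content-Type": "audio/wav"},
        data=audio,
    )
response.raise_for_status()

for result in response.json().get("results", []):
    # Each result carries one or more alternative transcriptions.
    print(result["alternatives"][0]["transcript"])
```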

The Internet of Things Isn’t Driven by Devices as Much as by the Internet Itself

B. Shimmin

Summary Bullets:

  • Internet of Things (IoT) adoption is certainly being driven by the promise of real-time analytics and AI at scale, but its ultimate feasibility still depends on something much more mundane, namely how efficiently it can move data between connected devices and backend systems.
  • And yet, according to a recent GlobalData study, IoT practitioners haven’t yet learned that lesson, relying not on fit-for-purpose protocols like MQTT, but instead on the ubiquitous, now aging web standard, HTTP.

At Mobile World Congress this week, networking giant Cisco rolled out a new networking and device management platform for IoT practitioners that promises to enable extremely large-scale deployments without breaking the bank. IoT at scale is a no-brainer: more devices equal more data, and more data equals deeper business insights. But IoT at scale can also be expensive, particularly when it comes to basic device interconnectivity and management.
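To illustrate why a fit-for-purpose protocol matters at scale, here is a hedged sketch of sending the same sensor reading two ways in Python: over MQTT using the widely used paho-mqtt client, and over HTTP using requests. The broker hostname, topic, and endpoint URL are placeholders. The practical difference is that MQTT exchanges tiny, brokered messages designed for constrained devices, while every HTTP POST repeats verbose headers and its own request/response handshake.

```python
import json
import requests
from paho.mqtt import publish

reading = {"device_id": "sensor-42", "temperature_c": 21.7}

# Option 1: MQTT - lightweight, brokered messaging built for devices.
# Hostname and topic are placeholders for a real broker and topic scheme.
publish.single(
    topic="plant1/line3/temperature",
    payload=json.dumps(reading),
    hostname="broker.example.com",
    port=1883,
)

# Option 2: HTTP - ubiquitous and simple, but each POST carries full
# headers and its own connection overhead, which adds up at IoT scale.
requests.post("https://iot.example.com/api/readings", json=reading, timeout=5)
```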

Looking for a Magic Pill to Realize the Potential of AI? Look No Further Than Your Good Old Telephone Company

R. Bhattacharyya

Summary Bullets:

• Many organizations are unsure how best to incorporate AI to meet their industry-specific challenges – often because the use case options are so vast and so varied.

• Organizations – particularly mid-sized businesses, companies starting out on their analytics journeys, or those rolling out IoT solutions – should explore the services available from their telecom providers, many of which have built out their professional services capabilities around digital transformation.

Despite the consumer hype around AI, many organizations still have no idea how to apply AI in the corporate world. Technologies abound, but the means to apply them to business outcomes remain elusive without help from expensive third-party integrators with the necessary domain expertise. But help is already at hand thanks to ever-present telecom service providers, which have taken on AI as a core competency.

To Improve Data Quality, Sometimes the Best Place to Start is at the Very End

Brad Shimmin – Research Director, Business Technology and Software

Summary Bullets:

• With big data and analytics, older ideas like predictive analytics and AI are coming together to solve long-standing problems, most notably data quality.

• Sisense is adding another twist by taking advanced design and visualization concepts and putting those to work at the very beginning of the analytics lifecycle.

Invention invariably involves theft. Each generation of inventors stands on the shoulders of its predecessors, borrowing freely from their available pool of knowledge. Ideas are deconstructed, mixed up, and reapplied in new ways and within unexpected contexts to form, well, something new. Sometimes these new inventions are simply the opportunistic reinterpretation of an existing idea, taking something unique but impractical and turning it into something incredibly useful. That’s the way it was with the invention of the automobile, the light bulb, and the radio. And that’s how it is with big data and analytics, where older ideas are only now coming together to solve long-standing problems.
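As a generic illustration of putting predictive techniques to work on data quality (not a description of Sisense’s actual implementation), the sketch below scores each value in a column against that column’s own distribution and flags likely bad records. The toy data, column names, and the 3.5 threshold on the robust (median-based) score are assumptions made for this example.

```python
import pandas as pd

# Toy dataset: one clearly suspect order amount slipped in during entry.
orders = pd.DataFrame({
    "order_id": [101, 102, 103, 104, 105, 106],
    "amount":   [25.0, 30.5, 27.8, 2999.0, 31.2, 28.4],
})

# Robust (median-based) outlier score, so a single bad value cannot
# hide itself by inflating the mean and standard deviation.
median = orders["amount"].median()
mad = (orders["amount"] - median).abs().median()
orders["quality_score"] = 0.6745 * (orders["amount"] - median).abs() / mad

# Flag records whose score exceeds a conventional threshold (~3.5).
orders["quality_flag"] = orders["quality_score"] > 3.5

print(orders[orders["quality_flag"]])
```

The same idea scales up with proper models, but even a simple score like this surfaces the records most likely to pollute downstream analytics.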

New Capabilities, Not Cost Savings, Are Biggest Driver of AI Adoption

R. Bhattacharyya

Summary Bullets:

  • Recent survey results reveal that companies have high expectations when it comes to artificial intelligence; close to half expect the technology to bring new capabilities to their organization.
  • Vendors are bringing new solutions to market to help companies implement their AI vision, offering solutions that speed and ease adoption of artificial intelligence, machine learning, and deep learning in particular.

Businesses are looking at artificial intelligence (AI) as a truly disruptive technology, with the potential to change the way they run their organizations. Unlike many other new solutions, which are often adopted for their promise of cost savings, AI is being embraced so that companies can bring new capabilities to their teams or improve the support they provide to their customers and partners.

Let’s Drain the Database Swamp! (Okay, Just Kidding)

B. Shimmin

Summary Bullets:

  • Enterprise buyers looking to simplify their data integration woes through centralization are missing the value inherent in diversity.
  • Database diversity (actually, diversity across all workloads) should not only be welcomed but actively sought after as a means of blending opportunity with capability.

Back in the ’90s, the average enterprise maintained not one, not two, but seven databases: one for transactional information, one for data mining cubes, one for server logs, etc. Today, that number has grown dramatically thanks to the proliferation of NoSQL-style databases built to handle unstructured, semi-structured, and polymorphic data. Add to this the ever-expanding list of data storage options across public cloud data platforms, and you have an honest-to-goodness embarrassment of riches.

What Will the Enterprise Data and Analytics Market Look Like in 2018? In a Word, Practical

Brad Shimmin – Research Director, Business Technology and Software

Summary Bullets:

• In 2017, the enterprise data and analytics vendor community emphasized opportunity in the cloud and the democratization of data. What will 2018 bring?

• We expect to see a shift in focus toward quality: solving problems such as data governance and putting AI to work within tactical business workflows.

What does the coming year have in store for the enterprise data and analytics marketplace? Sometimes, the best way to predict the future is to look at the past. To that end, here’s what we predicted for 2017 back in December of last year.

• IoT success will ride on pre-built data models and packaged software
• Smaller players will drive cognitive software innovation
• Vendors will prioritize self-service data integration, prep and management
• Vertical markets and specific use cases will fuel data-as-a-service adoption

For the most part (aside from predicting that smaller players would drive cognitive software innovation), the year played out as anticipated, with a distinct emphasis on direct business outcomes and the broad adoption of analytics among business users. How will these trends move forward? In short, we don’t expect to see grand speculation and rabid investment in unproven ideas. Yes, we’re looking at you, blockchain!

The Chess Dominance of Google’s AlphaZero Teaches Us More About Chips Than About Brains

Brad Shimmin – Research Director, Business Technology and Software

Summary Bullets:

• How did a computer algorithm like Google’s AlphaZero manage to learn, master and then dominate the game of chess in just four hours?

• AlphaZero’s mastery of chess stemmed from the sheer, brute force of Google’s AI-specific Tensor Processing Units (TPUs) – 5,000 of them, to be exact.

“How about a nice game of chess?” With that iconic line of dialog from one of my favorite films, the 1983 Cold War sci-fi thriller WarGames, nuclear war was narrowly averted by a machine (named Joshua) capable of teaching itself how to play a game. This week another machine, one of Google’s DeepMind AI offspring, AlphaZero, did something similar: it took four hours to teach itself how to play chess and then proceeded to demolish the best, highest-rated chess computer, Stockfish. After 100 games, AlphaZero had racked up 28 wins and zero losses, with the remainder drawn. So much for more than a millennium of human effort in teaching a computer how to play chess. But how was this possible? Was this a fair match? How did a computer algorithm like AlphaZero manage to learn, master, and then dominate the game of chess in just four hours?