Enterprise big data and analytics cuts through the hype to make sense of data collection, storage, management, dissemination, and discovery technologies, all employed together to realize corporate efficiencies and uncover business opportunities.
• Successful AI projects take a village; project teams that include members from groups across the company are more likely to uncover the ‘what-if’ and ‘then what’ questions that are best addressed early.
• GlobalData’s 2018 survey found that close to 40% of businesses include all affected parties in decisions related to big data and analytics solutions.
We’ve all heard that not only are machine learning (ML) algorithms time-consuming to develop and train, but that they also need access to vast data lakes and specialized data scientists. With these requirements, it’s no wonder that businesses tend to focus on identifying the skilled IT-centric resources required for an AI deployment. But AI isn’t just the playground of data specialists; successful outcomes take a village. Project teams that include members from different groups across the company are more likely to uncover the ‘what-if’ and ‘then what’ questions that are best addressed early on. HR, legal, finance, customer service, operations, and other business units have much to contribute to a successful AI deployment. Continue reading “With AI Decisions, It Takes a Village”→
• At Google I/O this week, Sundar Pichai walked attendees through a number of impressive implementations of AI, one of which showed how Google Assistant could book a haircut and make a dinner reservation via an unnervingly convincing conversation between human and machine.
• What happens, then, if that assistant eventually learns how to pass itself off as you?
You know it’s spring when the cherry blossoms appear in force, the birds start singing in unison, and Google CEO Sundar Pichai takes the stage at Google I/O and nonchalantly demonstrates some new bit of technology that simultaneously manages to amaze and terrify. I’m talking about Google Duplex, an interesting blend of natural language understanding (NLU), deep learning (DL), and text-to-speech technology designed to do one thing: use AI to emulate at least one half of an actual human conversation. Continue reading “Google I/O 2018: Did Google AI Just Pass the Turing Test?”→
• Huawei showcased its Video Cloud Platform at its recent analyst event, touting its application for public safety.
• The company pointed to widespread adoption and success in China, but can it find a market for its solution overseas?
During Huawei’s Analyst Summit in Shenzhen, China, executive keynotes emphasized the role of artificial intelligence (AI) in the company’s vision to create a more connected, more intelligent world. The company’s vision is to use AI to improve people’s daily lives and to benefit society as a whole. Unlike some competitors, who often showcase the application of AI to improve the customer experience, or point to use cases that incorporate natural language processing or natural language generation, Huawei was keen to highlight its video strategy. The company has roughly 5,450 members of its staff involved in developing video solutions and eight research and development centers that focus on video technology (three in China, as well as sites in the US, France, Ireland, Russia, and Japan). Huawei envisions several use cases for the application of AI and video, including identification of abandoned objects, intrusion detection, crowd density monitoring, facial control/admission processing, and vehicle, facial, and physical attribute identification. Continue reading “Is Public Safety China’s New Export?”→
• There are many AI-savvy chipsets on the market right now, each fine-tuned to support specific AI workloads, development frameworks, or vendor platforms.
• But what if developers could flexibly combine AI-specific hardware resource pools on the fly, on-premises as well as online?
There’s certainly enough buzz in the industry right now about artificial intelligence (AI). If you look beyond the doomsday predictions of a machine uprising, the prevailing view is that AI is a veritable Swiss Army knife, able to cut through any and all problems, ready to assemble opportunity out of nothing more than data. It seems that every vendor has one or two machine learning (ML) and deep learning (DL) frameworks lying about. It’s no wonder: there’s TensorFlow, Caffe, Theano, Torch, and many, many more to choose from, most of which are open source and quite accessible to the broader developer community. Continue reading “It’s Time to Orchestrate AI Hardware for Maximum Effect”→
• There’s a race right now in high tech to build the first general-purpose quantum computer, with industry leaders IBM, Google, D-Wave Technologies, and Intel each building out very different implementations of a single, revolutionary idea — the use of qubits instead of plain old bits.
• But unlike most races, this one has no clear finish line, as we’re still figuring out the best approach to building quantum computers or the software that runs on them. Enter IT services powerhouse Atos, which is backing a pure, though as yet only simulated, vision of quantum computing in an effort to win what matters most: the hearts and minds of future quantum developers.
There’s an awful lot of noise in the technology industry right now regarding the promise of quantum computing. A sizable number of dissimilar technology and platform players, ranging from Intel to Google to Atom Computing (a 2018 startup) are all busy building increasingly capable computers that push and pull qubits rather than bits. And as you might expect from such a diverse cast, there are a lot of differing views on how to build such a beast and how to best put it to use. Continue reading “Atos Has a Secret Weapon, and It Rhymes with Awesome Computing”→
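To make “pushing and pulling qubits rather than bits” a little more concrete, here is a minimal, purely illustrative sketch in plain Python — not tied to Atos’s or any other vendor’s toolkit, and with invented function names — of what simulating a single qubit involves. Where a classical bit holds one of two values, a qubit’s state is a pair of complex amplitudes, and a gate such as the Hadamard can place it into an equal superposition of 0 and 1:

```python
import math

def hadamard(state):
    """Apply a Hadamard gate to a single-qubit state vector [a, b]."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def probabilities(state):
    """Measurement probabilities for outcomes |0> and |1>."""
    return [abs(amp) ** 2 for amp in state]

zero = [1.0, 0.0]            # the classical bit 0, written as a quantum state
superposed = hadamard(zero)  # equal superposition of 0 and 1
print(probabilities(superposed))  # roughly [0.5, 0.5]
```

This is, of course, exactly the kind of state-vector simulation a classical machine can run for a handful of qubits — which is what makes a simulated approach a plausible on-ramp for developers while the hardware race plays out.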
• During IBM Think, IBM made several AI-related announcements, some designed for enterprises with complex requirements, and others geared towards helping businesses deploy their first AI solution.
• Although IBM’s new capabilities and tools in support of deep learning are impressive, and position IBM as a thought leader, it’s the steps IBM is taking to help companies just getting started with AI that truly move the market forward.
IBM Think was promoted as an event that would bring together the greatest minds in AI. It featured technologies such as virtual assistants, machine learning (ML), and deep learning (DL), and also touched on hot button issues such as ethics and AI. During her keynote, CEO Ginni Rometty discussed the transformational role that AI will have on the IT market going forward, and she introduced Watson’s Law, describing it as a follow-on to Moore’s Law and Metcalfe’s Law. Continue reading “IBM Think 2018: Big Blue Looks to Help Companies Adopt Their First AI Project”→
• When it comes to swapping ones and zeros, quantum computing promises to outpace traditional processors in pure scale.
• Yet its true promise will play out when we learn how to invoke quantum phenomena in order to speed up artificial intelligence (AI).
At last week’s IBM Think conference in Las Vegas, Big Blue and AI chip manufacturer NVIDIA talked up the importance of hardware in resolving AI performance bottlenecks. As it turns out, building a smart AI system demands not only copious amounts of data but also the ability to rapidly run machine learning (ML) and deep learning (DL) algorithms against that data. The trouble is that quite often hardware gets in the way. Continue reading “This is Your Brain on Quantum Computing”→
• Domo remains as flamboyant as ever both in how it goes to market and in how it approaches BI as a business operating system.
• Yet a surprising new go-to-market message hints at a newfound maturity that underscores the company’s desire to play a crucial, central role in the success of its customers.
To say that the corporate culture at Domo is unique is to do a serious disservice to all Domo employees, or ‘Domosapiens,’ as they like to call themselves. Domo’s corporate culture is not your typical corporate attempt to feign a sense of style. Domo is downright wacky under the leadership of its enigmatic founder and CEO, Josh James. Case in point: at this year’s Domopalooza conference in Salt Lake City, Mr. James made a rather interesting entrance during the keynote. Not content to follow the opening entertainment act, put on by the KinJaz dance group, the Domo CEO danced a full routine with the group. Continue reading “Take Two Domo and Call Me in the Morning”→
• Businesses looking to adopt AI must not only evaluate the technology’s implications on job displacement and data security, but also consider that algorithms may unintentionally undermine the organization’s ethical standards.
• Customers are quick to pass judgement; if unintentional biases become public, a company’s brand reputation may suffer significantly.
Much has been written about ethics and artificial intelligence (AI), and rightly so. With many organizations looking to adopt some form of AI technology in 2018, business leaders are wise to stay on top of emerging ethical concerns.
Job displacement is still a key consideration, as is safeguarding data. In a recent GlobalData survey, 23% of organizations indicated they had cut or not replaced employees because of AI; 57% indicated security as a top concern.
However, looking ahead, the question of ethics is the real challenge the AI community will need to tackle, and it is far more controversial than security or privacy. What happens when a self-driving car must decide between hitting a child who has run into the road and swerving to risk injuring its passenger? How proactive should a personal assistant be when it detects wrongdoing? If a personal assistant concludes from a user’s behavior that a serious offense has been committed, should it alert the authorities?
Probably more relevant to business leaders is the concern that they may not know whether an AI-infused application will live up to their organization’s ethical standards. It may contain unintentional bias – say, a financial algorithm that discriminates against a specific race, or an application that favors one gender over another. What should be done when a phrase that is acceptable from one demographic is completely unacceptable from another – can an algorithm be trained to reliably make that distinction? Maybe, but what happens when it makes a mistake?
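As one illustration of how such bias might be surfaced before it becomes public, the sketch below (hypothetical audit data and invented function names, not any vendor’s API) applies the “four-fifths rule” commonly cited in fairness audits: compare approval rates across demographic groups and treat a ratio below roughly 0.8 as a red flag worth investigating.

```python
def approval_rate(decisions, groups, target_group):
    """Fraction of approvals (1 = approved, 0 = denied) for one group."""
    hits = [d for d, g in zip(decisions, groups) if g == target_group]
    return sum(hits) / len(hits) if hits else 0.0

def disparate_impact(decisions, groups, protected, reference):
    """Ratio of the protected group's approval rate to the reference
    group's; values below ~0.8 are a common audit red flag."""
    return (approval_rate(decisions, groups, protected)
            / approval_rate(decisions, groups, reference))

# hypothetical loan decisions for two demographic groups
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(decisions, groups, protected="B", reference="A")
print(round(ratio, 3))  # 0.667 — below 0.8, so this model warrants review
```

A check like this is only a first screen — it says nothing about why the disparity exists — but it is cheap enough to run before deployment rather than after a headline.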
On the one hand, unintentional results are not the fault of the organization using the AI solution; the responsibility may lie in the data used to train the underlying machine learning model. However, customers are quick to pass judgement. If and when these unintentional biases become public, customers will quickly assign blame to the company using them, potentially with enormous damage to the brand’s reputation.
Just as CEOs may take the blame for customer data breaches, and as a result may lose their jobs, senior leaders are also at risk of taking the fall when an AI solution implemented by their organization crosses an ethical line. It’s in their best interest to ensure that doesn’t happen – their reputation depends on it.
The Internet of Things is not only changing how consumers interact with the world around them; it is also driving a tectonic shift in how companies process and analyze device data.
Traditional best practices for gathering and analyzing data, where information is stored and processed centrally, are no longer relevant. Forget big data warehouses. IoT customers are looking to analyze data as close to the source as possible, at the edge of the network.
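A minimal sketch of what edge-side analysis can look like, assuming a hypothetical device that summarizes a window of sensor readings locally and forwards only a compact summary plus any anomalous samples, rather than shipping every raw reading to a central warehouse:

```python
def edge_summarize(readings, threshold):
    """Summarize a window of raw sensor readings on the device itself.

    Only this small summary (and the anomalous raw values) would be
    transmitted upstream; the bulk of the data never leaves the edge.
    """
    anomalies = [r for r in readings if r > threshold]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
        "anomalies": anomalies,
    }

# hypothetical temperature samples from one sensor window
window = [21.0, 21.3, 20.8, 35.2, 21.1]
print(edge_summarize(window, threshold=30.0))
```

The design trade-off is the usual one: the edge node spends a little compute to save a lot of bandwidth and latency, while the central system still receives enough signal (the summary and the outliers) to act on.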