• Concerns over the accuracy of facial analytics have prompted IBM to release a dataset of over one million facial images, including facial coding, that can be used to train facial analytics software.
• Improving the results of facial analytics will bolster public confidence in the technology, promoting adoption by enterprises.
IBM has released a dataset of over one million facial images to the global research community to combat bias in facial recognition software. The announcement comes after researchers from MIT and the University of Toronto claimed that a well-known competitor’s product misclassified women at a higher rate than men, with error rates for darker-skinned women far surpassing those for lighter-skinned women. With women accounting for roughly half of the world’s population, inaccuracies in their classification present a serious threat to facial recognition adoption. Continue reading “IBM Releases Images to Improve Facial Analytics Accuracy”→
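For readers curious how researchers quantify the bias described above, the standard approach is to compare misclassification rates across demographic subgroups. The sketch below is purely illustrative – the groups, labels, and figures are invented, not taken from the MIT/Toronto study:

```python
# Hypothetical audit: compare misclassification rates across demographic groups.
# All records below are invented for illustration only.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

sample = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "male", "female"),  # a misclassification
    ("darker-skinned women", "female", "female"),
]
rates = error_rates_by_group(sample)
# With this toy sample, the darker-skinned women subgroup shows a 50% error
# rate versus 0% for the comparison group.
```

Datasets like the one IBM released matter precisely because gaps of this kind typically trace back to under-representation of some subgroups in the training data.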
During its AI developer conference, Baidu made several announcements that demonstrate how it is moving the Chinese AI market forward, but the release of its Kunlun chip stands out as a key move that repositions it not only in the Chinese market, but also globally.
With Kunlun, Baidu joins the ranks of a select few companies that not only offer an AI platform that helps enterprises deploy AI-infused solutions, but that have also developed their own hardware to maximize AI processing.
Baidu is hot on the heels of the likes of Microsoft and Google. Although already known as an ambitious player in the AI realm, primarily in China, the search engine provider hasn’t managed to establish itself as a major force in the space, until now. Earlier this month, Baidu announced that it is bringing to market an AI-optimized chip, called Kunlun. With the move, Baidu joins the ranks of a select few companies that not only offer an AI platform that helps enterprises deploy AI-infused solutions, but have also developed their own hardware to maximize AI processing. Continue reading “Release of AI-optimized Kunlun Chip a Game Changer for Baidu”→
Successful AI projects take a village; project teams that include members from groups across the company are more likely to uncover the ‘what-if’ and ‘then what’ questions that are best addressed early.
GlobalData’s 2018 survey found that close to 40% of businesses include all affected parties in decisions related to big data and analytics solutions.
We’ve all heard that not only are machine learning (ML) algorithms time-consuming to develop and train, but that they also need access to vast data lakes and specialized data scientists. With these requirements, it’s no wonder that businesses tend to focus on identifying the skilled IT-centric resources required for undertaking an AI deployment. But AI isn’t just the playground of data specialists; successful outcomes take a village. Project teams that include members from different groups across the company are more likely to uncover the ‘what-if’ and ‘then what’ questions that are best addressed early on. HR, legal, finance, customer service, operations, and other business units have much to contribute to a successful AI deployment. Continue reading “With AI Decisions, It Takes a Village”→
• Key themes from the 15th Huawei Analyst Summit (HAS) in Shenzhen, China, included edge computing, hybrid cloud enablement, and the application of AI to data center technologies.
• To unlock commercial opportunities and reinforce the competitiveness of its solutions, Huawei would benefit from a stronger articulation of both its hybrid cloud and edge computing capabilities.
Judging by the themes of the 15th HAS in Shenzhen, China, 17-19 April, Huawei expects data center technologies to become increasingly intelligent, more distributed in the way they are deployed, and more diverse in the use cases they support. Key themes from the Summit, with particular relevance to data centers, included edge computing and the Internet of Things (IoT), multi-cloud and hybrid cloud enablement, and the application of artificial intelligence (AI) to both data centers and the use cases they support.
The Summit saw a recurring emphasis on the theme of “boundless computing”, reflecting Huawei’s commitment to a single infrastructure platform that blurs the boundaries between CPUs, servers, and data centers and supports the delivery of resources wherever they are required. There was considerable discussion of edge computing, which involves the maintenance and operation of IT resources at locations that are closer to the points of data generation, and to the end users of digital content and applications. Huawei already offers several solutions that support enterprise edge computing initiatives, including its Cloud Fabric SDN solution and a version of its hyperconverged infrastructure offering, FusionCube, which is specifically optimized for remote office and branch office (ROBO) and edge computing deployments. Continue reading “Huawei Analyst Summit 2018: Edge Computing, Hybrid Cloud, and AI are Central to Huawei’s Future Vision of the Data Center”→
Businesses looking to adopt AI must not only evaluate the technology’s implications on job displacement and data security, but also consider that algorithms may unintentionally undermine the organization’s ethical standards.
Customers are quick to pass judgement; if unintentional biases become public, a company’s brand reputation may suffer significantly.
Much has been written about ethics and artificial intelligence (AI), and rightly so. With many organizations looking to adopt some form of AI technology in 2018, business leaders are wise to stay on top of emerging ethical concerns.
Job displacement is still a key consideration, as is safeguarding data. In a recent GlobalData survey, 23% of organizations indicated they had cut or not replaced employees because of AI; 57% cited security as a top concern.
However, looking ahead, the question of ethics is the real challenge the AI community will need to tackle. And it is a challenge that is far more controversial than security or privacy. What happens when a self-driving car must decide between hitting a child who has run into the road and swerving, risking injury to its passenger? How proactive should a personal assistant be when it detects wrongdoing? What should be done when a personal assistant believes that a user’s usage pattern points to a serious offense – should it alert the authorities?
Probably more relevant to business leaders is the concern that they may not know whether an AI-infused application will perform up to their organization’s ethical standards. It may contain unintentional bias – say, a financial algorithm that discriminates against a specific race, or an application that demonstrates a preference for one gender over another. What should be done when a phrase that is acceptable when said by one demographic is completely unacceptable when uttered by another – can an algorithm be trained to reliably make this distinction? Maybe, but what happens when it makes a mistake?
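One common, if simplistic, screen for the kind of unintentional bias described above is a disparate-impact ratio: the rate of favorable outcomes for one group divided by the rate for the most favored group, often compared against the “four-fifths” threshold used in fairness auditing. A minimal sketch, with invented figures:

```python
# Hypothetical disparate-impact check on a model's favorable-outcome rates.
# The approval rates are invented; the 0.8 cutoff follows the common
# "four-fifths rule" used in fairness auditing.

def disparate_impact(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower favorable-outcome rate to the higher one."""
    low, high = sorted((rate_a, rate_b))
    return low / high

# e.g. a loan-approval model approves 60% of group A but only 45% of group B
ratio = disparate_impact(0.60, 0.45)
flagged = ratio < 0.8  # below four-fifths: the disparity warrants review
```

A check like this doesn’t explain why a model is biased, but it gives business leaders a concrete, auditable number to monitor before an embarrassing disparity surfaces publicly.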
On the one hand, unintentional results are not the fault of the organization using the AI solution. The responsibility may lie in the data used to train the underlying machine learning model. However, customers are quick to pass judgement. If and when these unintentional biases become public, customers will quickly assign blame to the company using them, potentially with enormous impact to a brand’s reputation.
Just as CEOs may take the blame for customer data breaches, and as a result may lose their jobs, senior leaders are also at risk of taking the fall when an AI solution implemented by their organization crosses an ethical line. It’s in their best interest to ensure that doesn’t happen – their reputation depends on it.
• Many organizations are unsure of how to best incorporate AI to meet their industry-specific challenges – often because the use case options are so vast and so varied.
• Organizations – particularly mid-sized businesses, companies starting out on their analytics journeys, or those rolling out IoT solutions – should explore the services available from their telecom provider, many of which have built out their professional services capabilities around digital transformation.
In 2018, rising enterprise demand for hybrid cloud solutions will fuel new and expanded partnerships between traditional infrastructure vendors and hyperscale public cloud providers.
Vendor initiatives will target the challenge of managing workloads across hybrid and increasingly distributed IT environments, along with ways of simplifying the procurement, deployment and consumption of IT.
2017 saw a growing recognition that private cloud technology is both a realistic and desirable way to manage enterprise workloads, and can be used more efficiently through effective integration with public cloud services. A common theme during the year’s industry events was envisaging and enabling multi- and hybrid cloud futures. At the same time, in 2017, data center infrastructure vendors from Cisco and Dell EMC to IBM and HPE continued to transform their solutions and services businesses. These transformations were a response to enterprise digitalization initiatives and recognition that in the future, IT will be hybrid, and must be able to span the full spectrum of enterprise locales from the cloud to core data centers to the network edge. In 2017, individual vendors went through quite different transformation processes: in addition to launching new solutions, technology companies acquired and integrated new businesses, and forged alliances with one another and with hyperscale cloud providers in order to fill out their portfolios. These developments were all driven by a competitive push to help enterprises modernize their traditional data center environments, capitalize on the benefits of hybrid cloud, and expand their ability to handle growing volumes of data at the edge of their networks. Continue reading “In 2018, Data Center Technology Will Become Smarter, Hybrid, More Distributed, and Easier to Consume”→
• How did a computer algorithm like Google’s AlphaZero manage to learn, master and then dominate the game of chess in just four hours?
• AlphaZero’s mastery of chess stemmed from the sheer, brute force of Google’s AI-specific Tensor Processing Units (TPUs) – 5,000 of them to be exact.
“How about a nice game of chess?” With that iconic line of dialog from what is one of my favorite films, the 1983 cold war sci-fi thriller WarGames, nuclear war was narrowly averted by a machine (named Joshua) capable of teaching itself how to play a game. This week another machine, one of Google’s DeepMind AI offspring, AlphaZero, did something similar in that it took four hours to teach itself how to play chess and then proceeded to demolish the highest-rated chess computer, Stockfish. After 100 games, AlphaZero racked up 28 wins and zero losses. So much for more than a millennium of human effort in teaching a computer how to play chess. But how was this possible? Was this a fair match? How did a computer algorithm like AlphaZero manage to learn, master and then dominate the game of chess in just four hours? Continue reading “The Chess Dominance of Google’s AlphaZero Teaches Us More About Chips Than About Brains”→
At its annual user conference, customer experience management player Genesys introduced Kate, a personified artificial intelligence (AI) platform tailored to augment and automate multimodal customer interactions.
Genesys Kate, however, is not meant to compete with AI platforms such as IBM Watson or Salesforce.com Einstein. Instead, Kate seeks to blend its own capabilities with those offerings, serving as an open platform.
Personified AI platforms – suddenly every technology vendor seems to have an AI persona that’s eager to strike up a one-on-one conversation. There’s of course IBM Watson, Amazon Alexa, Apple Siri, Google Assistant, Salesforce.com Einstein, and Adobe Sensei, but that somewhat lengthy list doesn’t even scratch the surface of what’s available when you bring in AI bots like Mitsuku, Poncho, Melody, Rose, and my personal favorite, Dr. AI. And now we have Kate, a personified AI platform introduced by customer experience management player Genesys this week during its annual user conference. Continue reading “Genesys Jumps on the AI Bandwagon, Invites Others Along for the Ride”→
• Google wants to democratize AI and operationalize machine learning (ML) with the release of Google Cloud Machine Learning Engine, a platform that includes developer-friendly APIs and pre-trained data models.
• But what the company really needs isn’t just data, algorithms, or even data scientists, but a new breed of developers who can build software that anticipates outcomes.
It’s always the same at the end of a company’s keynote address. After all of the important messages have been conveyed and all of the product announcements have been made, a mid-level corporate mouthpiece will take the stage and provide the audience with some positive reinforcement of what went before. It’s like the closing credits of a film, something that may contain a nugget of interest to the cinephile. More often, it serves as filler, a thematic soundtrack to accompany attendees as they make for the exits.