As Principal Analyst for Data Center Technology at Current Analysis, Chris is responsible for covering the emerging technologies that are remapping the traditional data center landscape. These include software and hardware products that are required to support public, private and hybrid cloud architectures, as well as the underlying virtualization and orchestration technology that is needed to enable process automation and workload management. He also covers the Converged Infrastructure market, with a focus on the latest generations of vendor pre-certified and optimized hardware/software stacks.
Blockchain digital ledger technology can make complex food supply chains more transparent, while delivering a range of benefits for food producers, distributors, retailers, and consumers.
However, these are early days for blockchain as a supply chain management technology, with limitations including the challenge of verifying the authenticity of data supplied to the platform.
Several innovative applications of blockchain, the distributed digital ledger technology, illustrate its potential beyond its cryptocurrency origins. One promising application is as a platform for improving the efficiency and transparency of global food supply chains, where early deployments already promise benefits to producers, retailers, and consumers. However, these are still early days for blockchain as a supply chain management tool, and it is important to be mindful not only of the technology’s potential and its early successes within food supply chains, but also of its risks and limitations. Chief among them is the challenge of verifying the authenticity of the data supplied to the blockchain. Continue reading “Your Food Chain on Blockchain: The Coming Shakeup”→
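The distinction between what a blockchain can and cannot guarantee is worth making concrete. A minimal sketch below (illustrative only; real supply chain platforms add consensus, signatures, and smart contracts) shows the core mechanism of a hash-chained ledger: once a record is appended, any later tampering is detectable, but the ledger has no way of knowing whether the original record was truthful. The producer and batch fields are invented for illustration.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents, excluding its own hash field
    payload = json.dumps({k: v for k, v in block.items() if k != "hash"},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain, record):
    # Each new block links back to the hash of the previous block
    block = {
        "index": len(chain),
        "record": record,
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = block_hash(block)
    chain.append(block)
    return block

def chain_is_valid(chain):
    # Recompute every hash and check every back-link
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, {"producer": "FarmCo", "batch": "A123", "temp_c": 4.0})
append_block(chain, {"carrier": "ColdShip", "batch": "A123", "temp_c": 4.2})

print(chain_is_valid(chain))        # → True
chain[0]["record"]["temp_c"] = 9.9  # tamper with a recorded reading
print(chain_is_valid(chain))        # → False
```

Note that if the sensor at FarmCo had reported 4.0°C while the produce actually sat at 9.9°C, the chain would validate perfectly — which is exactly the data-authenticity limitation described above.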
A newly autonomous AI algorithm operating Google’s data center cooling systems will be scrutinized to learn how AI can be applied to other areas of data center operations.
There are opportunities for Google to leverage its growing expertise in applying AI to internal operations by expanding the range of AI solutions it offers enterprise customers.
Google recently announced plans to give operational control over the cooling systems in its data centers to an artificial intelligence (AI) algorithm. Using this algorithm, Google has already achieved a considerable reduction in the energy consumed by its 15 globally distributed data centers, with both cost and environmental benefits. This latest move is significant, however, because it represents one of the first large-scale applications of AI to data center operational control systems. The initiative also provides another example of how AI can be deployed in data centers in ways that improve their operational efficiency, including their consumption of energy resources. Initiatives like this are becoming more common and are often grouped under the ‘AIOps’ (artificial intelligence for IT operations) banner, a term that refers to the use of big data analytics, machine learning (ML), and other AI technologies to automate the management of IT systems and processes. Continue reading “Google Gives More Power to AI Within Its Data Centers, but the Biggest Opportunities Lie Ahead”→
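To give a flavor of the analytics that fall under the AIOps banner, the sketch below flags anomalous data center telemetry using a simple rolling z-score. This is deliberately minimal and is not Google’s approach (which relies on trained neural network models); the temperature values are fabricated for illustration. The principle is the same: learn the normal behavior of a metric from recent history and surface deviations automatically, rather than relying on fixed manual thresholds.

```python
from statistics import mean, stdev

def detect_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate sharply from the recent trend.

    A reading is anomalous if it lies more than `threshold` standard
    deviations from the mean of the preceding `window` readings.
    """
    anomalies = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Simulated server inlet temperatures (deg C); a spike at index 7
temps = [22.1, 22.0, 22.2, 22.1, 21.9, 22.0, 22.1, 27.5, 22.0, 22.1]
print(detect_anomalies(temps))  # → [7]
```

In a production AIOps pipeline, a detection like this would typically feed an automated response — throttling a workload, adjusting a cooling setpoint, or opening an incident — which is where the operational-control step Google has now taken comes in.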
New data centers operated by Apple, Facebook, and Google will contribute to rising energy consumption in Denmark over the next decade, coinciding with a growing shortfall of renewable energy.
Despite concerns about their environmental impact, hyperscale Internet firms are supporting various energy efficiency initiatives, including energy recycling and new data center design and deployment methods.
• HPE’s new EdgeLine portfolio enhancements will enable customers to run storage-intensive applications and additional core data center functions within remote edge locations.
• HPE’s new GreenLake Hybrid Cloud offering will appeal to hybrid cloud customers that struggle with things like cost and management complexity but won’t disrupt the wider market.
At its Discover event in Las Vegas last week, Hewlett Packard Enterprise (HPE) unveiled several new solution updates and strategic initiatives which, it believes, will transform the way businesses consume, deploy and operate data center technologies. First, HPE announced plans to invest US$4 billion over the next four years to develop technologies that support enterprise edge computing. Edge computing promises to transform the way data centers are deployed and managed and the type of workloads they support. It enables the operation and allocation of enterprise IT resources – including compute, storage, networking, data management, and analytics – at locations that are closer to the points of data generation, and to the end users of digital content and applications.
HPE already has a number of products that support enterprise edge computing initiatives. These include its EdgeLine hyperconverged infrastructure systems, which are specifically designed for deployment in remote locations, often far from central data centers. In Vegas, HPE revealed that it was increasing the storage allocation available on its EL1000 and EL4000 models, from 4TB to 48TB, thanks to a new hardware add-on. The additional storage will allow EdgeLine to support more storage-intensive use cases at the edge of enterprise networks, including databases, artificial intelligence, and video applications. In addition, HPE announced that it had validated several enterprise software stacks for use with the EL1000 and EL4000 systems, including VMware, Microsoft SQL Server, SAP HANA and Citrix XenDesktop. By validating entire software stacks, rather than lighter, tailored versions, HPE aims to help customers run virtualization and compute functions at the network edge with the same tools they use in their primary data centers. Continue reading “HPE Sets Out to Master the Edge While Extending Managed, Metered IT Consumption to Hybrid Cloud”→
Last week’s KubeCon + CloudNativeCon Europe conference in Denmark demonstrated the extent to which Kubernetes has become the industry standard for orchestrating and managing cloud-native applications.
The conference saw Kubernetes announcements from Cisco, Red Hat, and Oracle, illustrating the growing commitment of data center infrastructure vendors to open source and application performance management (APM) technologies.
• Key themes from the 15th Huawei Analyst Summit (HAS) in Shenzhen, China, included edge computing, hybrid cloud enablement, and the application of AI to data center technologies.
• To unlock commercial opportunities and reinforce the competitiveness of its solutions, Huawei would benefit from a stronger articulation of both its hybrid cloud and edge computing capabilities.
Judging by the themes of the 15th HAS in Shenzhen, China, 17-19 April, Huawei expects data center technologies to become increasingly intelligent, more distributed in the way they are deployed, and more diverse in the use cases they support. Key themes from the Summit, with particular relevance to data centers, included edge computing and the Internet of Things (IoT), multi-cloud and hybrid cloud enablement, and the application of artificial intelligence (AI) to both data centers and the use cases they support.
The Summit saw a recurring emphasis on the theme of “boundless computing”, reflecting Huawei’s commitment to a single infrastructure platform that blurs the boundaries between CPUs, servers, and data centers and supports the delivery of resources wherever they are required. There was considerable discussion of edge computing, which involves the maintenance and operation of IT resources at locations that are closer to the points of data generation, and to the end users of digital content and applications. Huawei already offers several solutions that support enterprise edge computing initiatives, including its Cloud Fabric SDN solution and a version of its hyperconverged infrastructure offering, FusionCube, which is specifically optimized for remote office and branch office (ROBO) and edge computing deployments. Continue reading “Huawei Analyst Summit 2018: Edge Computing, Hybrid Cloud, and AI are Central to Huawei’s Future Vision of the Data Center”→
Although edge computing will decentralize IT, it will not replace traditional data centers or cloud-based architectures, instead operating as an additional tier of IT processing, storage, security, and analytics.
In addition to supporting IoT, edge computing use cases will include VR, AR, and connected car applications that are latency-sensitive and require high levels of performance.
In 2018, rising enterprise demand for hybrid cloud solutions will fuel new and expanded partnerships between traditional infrastructure vendors and hyperscale public cloud providers.
Vendor initiatives will target the challenge of managing workloads across hybrid and increasingly distributed IT environments, along with ways of simplifying the procurement, deployment and consumption of IT.
2017 saw a growing recognition that private cloud technology is a realistic and desirable way to manage enterprise workloads, and that it can be used more efficiently through effective integration with public cloud services. A common theme during the year’s industry events was envisaging and enabling multi- and hybrid cloud futures. At the same time, in 2017, data center infrastructure vendors from Cisco and Dell EMC to IBM and HPE continued to transform their solutions and services businesses. These transformations were a response to enterprise digitalization initiatives and recognition that in the future, IT will be hybrid, and must be able to span the full spectrum of enterprise locales, from the cloud to core data centers to the network edge. Individual vendors went through quite different transformation processes: in addition to launching new solutions, technology companies acquired and integrated new businesses, and forged alliances with one another and with hyperscale cloud providers in order to fill out their portfolios. These developments were all driven by a competitive push to help enterprises modernize their traditional data center environments, capitalize on the benefits of hybrid cloud, and expand their ability to handle growing volumes of data at the edge of their networks. Continue reading “In 2018, Data Center Technology Will Become Smarter, Hybrid, More Distributed, and Easier to Consume”→
• At its Discover event in Madrid, HPE communicated its vision and strategy to an industry eager to comprehend the impact of Meg Whitman’s decision to step down as CEO.
• In addition to its goal of making hybrid IT simple for enterprise customers, HPE, under its incoming CEO, Antonio Neri, will strengthen its focus on IoT, what it terms “intelligent IT”, edge computing, converged OT control systems, and analytics.
For HPE, last week’s Discover event in Madrid was an opportunity to communicate its vision and strategy to an industry eager to comprehend the impact of Meg Whitman’s announced decision, the week before, to step down as CEO. In her time as CEO, Whitman oversaw the company’s transformation from a provider of traditional data center infrastructure to a business focused on enterprise cloud and hybrid IT solutions. This transformation saw the creation of HPE at the end of 2015, followed by a further slimming down of the company via the disposal of non-core businesses. At the same time, HPE acquired several new companies, including wireless network infrastructure provider Aruba Networks and all-flash and hybrid storage array provider Nimble Storage. Continue reading “HPE Discover 2017: Under Antonio Neri, HPE Will Expand Its Focus on the Edge and Intelligent IT”→