As Principal Analyst for Data Center Technology at Current Analysis, Chris is responsible for covering the emerging technologies that are remapping the traditional data center landscape. These include software and hardware products that are required to support public, private and hybrid cloud architectures, as well as the underlying virtualization and orchestration technology that is needed to enable process automation and workload management. He also covers the Converged Infrastructure market, with a focus on the latest generations of vendor pre-certified and optimized hardware/software stacks.
Data center portfolios will become more distributed and decentralized in 2019, thanks to the growing use of hybrid and multi-cloud environments and edge computing deployments.
2019 will see an emphasis on enhancing the performance, security, and scalability of compute, storage, and networking resources in order to support new workloads, including AI.
GlobalData’s top predictions for data center technologies in 2019 include the expectation of increased competition in the provision of hybrid cloud, multi-cloud, and Kubernetes solutions; this will be partly driven by a growing interest in on-premises workloads from leading public cloud providers. We also anticipate increased innovation around edge computing, as well as the evolution and enhancement of data center platforms and architectures with technologies that improve their ability to support AI and other advanced workloads. Continue reading “GlobalData’s Top Predictions for Data Center Technologies in 2019”→
• In 2019, Amazon Web Services (AWS), Microsoft, and other leading cloud providers will accelerate their efforts to target emerging hybrid cloud opportunities among enterprise customers.
• AWS, Microsoft, and the other hyperscale clouds will face growing competition from China’s Alibaba, which in August announced the availability of its Apsara Stack on-premises offering outside China.
Hybrid cloud environments have rapidly evolved to support a range of IT and application requirements, and in 2019 a growing number of organizations will make use of them. Hybrid clouds combine dedicated private cloud infrastructure (often maintained on-premises within an organization’s own data center) with public cloud services provided by companies like AWS, Microsoft, and Google. Recognizing this, AWS, Microsoft, and other leading cloud providers will accelerate their efforts in 2019 to target emerging hybrid cloud opportunities among enterprise customers.
Although AWS is the largest provider of public cloud services globally, this does not guarantee its success in the hybrid cloud space. AWS already has several options for organizations that require a hybrid cloud solution and, in 2019, it plans to launch a new managed on-premises version of its cloud platform called AWS Outposts. Once available, Outposts will allow customers to deploy configurable compute and storage racks based on AWS hardware and software in their own data centers, and integrate their on-premises AWS environment with the AWS public cloud. However, information on features, services, pricing, and availability has yet to be provided. Continue reading “Hyperscale Cloud Firms Will Fiercely Target On-Premises Workloads in 2019”→
Blockchain digital ledger technology can make complex food supply chains more transparent, while delivering a range of benefits for food producers, distributors, retailers, and consumers.
However, these are early days for blockchain as a supply chain management technology, with limitations including the challenge of verifying the authenticity of data supplied to the platform.
Several innovative applications of blockchain, the distributed digital ledger technology, illustrate the technology’s potential use beyond its cryptocurrency origins. One promising application of blockchain is as a platform for improving the efficiency and transparency of global food supply chains. Here, we are already seeing numerous applications of blockchain that promise to bring benefits to producers, retailers, and consumers. However, these are still early days for blockchain as a supply chain management tool, and it is important to be mindful not only of the technology’s potential and its early successful applications within food supply chains, but also of its risks and limitations. Among them is the challenge of how to verify the authenticity of the data supplied to the blockchain. Continue reading “Your Food Chain on Blockchain: The Coming Shakeup”→
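The ledger mechanics behind these supply chain applications can be sketched in a few lines. The following is a minimal, hypothetical illustration (all names are invented for this example): each recorded event carries the hash of its predecessor, so any after-the-fact alteration is detectable. Note what the sketch also shows about the limitation raised above: the chain proves records were not altered after entry, but it cannot prove a producer’s original claim was true.

```python
import hashlib
import json

def hash_block(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

class FoodLedger:
    """Append-only chain of supply chain events; each block hashes its predecessor."""

    def __init__(self):
        self.chain = [{"index": 0, "event": "genesis", "prev_hash": "0" * 64}]

    def add_event(self, event: dict) -> dict:
        block = {
            "index": len(self.chain),
            "event": event,
            "prev_hash": hash_block(self.chain[-1]),
        }
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        """True only if no recorded block has been altered after the fact."""
        return all(
            self.chain[i]["prev_hash"] == hash_block(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )

ledger = FoodLedger()
ledger.add_event({"actor": "farm", "action": "harvested", "lot": "A-17"})
ledger.add_event({"actor": "distributor", "action": "received", "lot": "A-17"})
assert ledger.verify()

# Tampering with an earlier record breaks every later hash link:
ledger.chain[1]["event"]["lot"] = "B-99"
assert not ledger.verify()
```

A production blockchain adds distributed consensus and cryptographic signing on top of this basic structure, but the tamper-evidence property, and its blind spot at the point of data entry, are the same.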
A newly autonomous AI algorithm operating Google’s data center cooling systems will be scrutinized to learn how AI can be applied to other areas of data center operations.
There are opportunities for Google to leverage its growing expertise in applying AI to internal operations by expanding the range of AI solutions it offers enterprise customers.
Google recently announced plans to give operational control over the cooling systems in its data centers to an artificial intelligence (AI) algorithm. Using this algorithm, Google has already achieved a considerable reduction in the energy consumed by its 15 globally distributed data centers, with both cost and environmental implications. This latest move is significant because it represents one of the first large-scale applications of AI to data center operational control systems. The initiative also provides another example of how AI can be deployed in data centers in ways that improve their operational efficiency, including their consumption of energy resources. Initiatives like this are becoming more common and are often included under the ‘AIOps’ (artificial intelligence for IT operations) banner, a term that refers to the use of big data analytics, machine learning (ML), and other AI technologies to automate the management of IT systems and processes. Continue reading “Google Gives More Power to AI Within Its Data Centers, but the Biggest Opportunities Lie Ahead”→
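The control pattern underlying this class of AIOps system can be sketched simply. The following is a toy illustration, not a description of Google’s actual system: a model trained on telemetry predicts the energy cost of candidate cooling actions, and the controller applies the cheapest action that stays within safe operating limits. Both functions and all parameters here are hypothetical stand-ins.

```python
def predicted_energy_kw(setpoint_c: float, it_load_kw: float) -> float:
    # Toy surrogate model: cooling overhead falls as the setpoint rises,
    # scaled by IT load. A real system would learn this relationship
    # from historical sensor telemetry rather than use a fixed formula.
    return it_load_kw * (0.5 - 0.01 * (setpoint_c - 18.0))

def choose_setpoint(it_load_kw: float, candidates: list[float],
                    max_safe_c: float = 27.0) -> float:
    """Pick the candidate cooling setpoint with the lowest predicted
    energy draw that does not exceed the safe temperature limit."""
    safe = [c for c in candidates if c <= max_safe_c]
    return min(safe, key=lambda c: predicted_energy_kw(c, it_load_kw))

setpoint = choose_setpoint(1200.0, candidates=[20.0, 22.0, 24.0, 26.0, 28.0])
# With this toy model, warmer (but still safe) setpoints cost less,
# so 26.0 is selected and the unsafe 28.0 candidate is rejected.
```

The essential AIOps idea is the closed loop: telemetry trains the model, the model scores actions, and the safety constraint keeps the optimizer from trading reliability for efficiency.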
New data centers operated by Apple, Facebook, and Google will contribute to rising energy consumption in Denmark over the next decade, coinciding with a growing shortfall of renewable energy.
Despite concerns about their environmental impact, hyperscale Internet firms are supporting various energy efficiency initiatives, including energy recycling and new data center design and deployment methods.
• HPE’s new EdgeLine portfolio enhancements will enable customers to run storage-intensive applications and additional core data center functions within remote edge locations.
• HPE’s new GreenLake Hybrid Cloud offering will appeal to hybrid cloud customers that struggle with things like cost and management complexity but won’t disrupt the wider market.
At its Discover event in Las Vegas last week, Hewlett Packard Enterprise (HPE) unveiled several new solution updates and strategic initiatives which, it believes, will transform the way businesses consume, deploy and operate data center technologies. First, HPE announced plans to invest US$4 billion over the next four years to develop technologies that support enterprise edge computing. Edge computing promises to transform the way data centers are deployed and managed and the type of workloads they support. It enables the operation and allocation of enterprise IT resources – including compute, storage, networking, data management, and analytics – at locations that are closer to the points of data generation, and to the end users of digital content and applications.
HPE already has a number of products that support enterprise edge computing initiatives. These include its EdgeLine hyperconverged infrastructure systems, which are specifically designed for deployment in remote locations, often far from central data centers. In Vegas, HPE revealed that it was increasing the storage allocation available on its EL1000 and EL4000 models, from 4TB to 48TB, thanks to a new hardware add-on. The additional storage will allow EdgeLine to support more storage-intensive use cases at the edge of enterprise networks, including databases, artificial intelligence, and video applications. In addition, HPE announced that it had validated several enterprise software stacks for use with the EL1000 and EL4000 systems, including VMware, Microsoft SQL Server, SAP HANA and Citrix XenDesktop. By validating entire software stacks, rather than lighter, tailored versions, HPE aims to help customers run virtualization and compute functions at the network edge with the same tools they use in their primary data centers. Continue reading “HPE Sets Out to Master the Edge While Extending Managed, Metered IT Consumption to Hybrid Cloud”→
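One common pattern behind these edge deployments is worth making concrete: raw, high-volume data is processed locally on edge hardware, and only compact summaries travel back to the central data center, which reduces both bandwidth costs and response latency. The sketch below is a hypothetical illustration of that pattern; the data and threshold are invented.

```python
from statistics import mean

def summarize_at_edge(readings: list[float], threshold: float) -> dict:
    """Aggregate raw sensor readings locally and flag anomalies, so only
    a small summary (not the raw stream) is sent to the core data center."""
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": max(readings),
        "alerts": [r for r in readings if r > threshold],
    }

# A batch of raw vibration readings from factory-floor equipment:
raw = [0.8, 0.9, 1.1, 0.7, 3.2, 0.9]
summary = summarize_at_edge(raw, threshold=2.0)
# Instead of six raw samples, the core receives one compact record:
# {'count': 6, 'mean': 1.27, 'max': 3.2, 'alerts': [3.2]}
```

The same logic explains why edge systems need more local storage and full, validated software stacks: the heavier the processing pushed to the edge, the less data needs to cross the network, but the more the remote site must behave like a small data center.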
Last week’s KubeCon + CloudNativeCon Europe conference in Denmark demonstrated the extent to which Kubernetes has become the industry standard for orchestrating and managing cloud-native applications.
The conference saw Kubernetes announcements from Cisco, Red Hat, and Oracle, illustrating the growing commitment of data center infrastructure vendors to open source and application performance management (APM) technologies.
• Key themes from the 15th Huawei Analyst Summit (HAS) in Shenzhen, China, included edge computing, hybrid cloud enablement, and the application of AI to data center technologies.
• To unlock commercial opportunities and reinforce the competitiveness of its solutions, Huawei would benefit from a stronger articulation of both its hybrid cloud and edge computing capabilities.
Judging by the themes of the 15th HAS in Shenzhen, China, 17-19 April, Huawei expects data center technologies to become increasingly intelligent, more distributed in the way they are deployed, and more diverse in the use cases they support. Key themes from the Summit, with particular relevance to data centers, included edge computing and the Internet of Things (IoT), multi-cloud and hybrid cloud enablement, and the application of artificial intelligence (AI) to both data centers and the use cases they support.
The Summit saw a recurring emphasis on the theme of “boundless computing”, reflecting Huawei’s commitment to a single infrastructure platform that blurs the boundaries between CPUs, servers, and data centers and supports the delivery of resources wherever they are required. There was considerable discussion of edge computing, which involves the maintenance and operation of IT resources at locations that are closer to the points of data generation, and to the end users of digital content and applications. Huawei already offers several solutions that support enterprise edge computing initiatives, including its Cloud Fabric SDN solution and a version of its hyperconverged infrastructure offering, FusionCube, which is specifically optimized for remote office and branch office (ROBO) and edge computing deployments. Continue reading “Huawei Analyst Summit 2018: Edge Computing, Hybrid Cloud, and AI are Central to Huawei’s Future Vision of the Data Center”→
Although edge computing will decentralize IT, it will not replace traditional data centers or cloud-based architectures, instead operating as an additional tier of IT processing, storage, security, and analytics.
In addition to supporting IoT, edge computing use cases will include VR, AR, and connected car applications that are latency-sensitive and require high levels of performance.