HPE Announces a Milestone for Its “Machine” Project, but Future Success Is Far from Certain

C. Drake


Summary Bullets:

• HPE announced a major milestone for The Machine research project, which promises to transform future computing and data center architectures.

• Despite real achievements, it remains to be seen whether HPE can make a success of The Machine in the way the vendor originally envisaged.

Of the various announcements Hewlett Packard Enterprise (HPE) made at its Discover event in London in November 2016, one of the most interesting concerned “The Machine”, a Hewlett Packard Labs research project launched in 2014 that aspires to revolutionize the way computers are built and the way data centers of the future are architected. At the London event, HPE announced that it had reached a major milestone for The Machine, having built and successfully tested a prototype of the “world’s first memory-driven computing architecture”.

The Machine represents an effort to transform existing computer architectures so that they can support rising demand for processing power and ever-increasing volumes of data while simultaneously becoming more energy efficient. It abandons traditional architectures built around central processing capabilities and peripheral storage, replacing them with a distributed fabric of non-volatile semiconductor memory. Within this fabric, stored data stays in place: rather than being moved from server to server, individual processors are pointed at either all of the data or particular segments of it. Processors reach that memory over photonic interconnects (the transmission of information via light). Computer architectures of this kind are expected to offer hundreds of terabytes or even petabytes of memory, dwarfing the 24-48 terabytes that characterize current systems. Meanwhile, traditional computing based on scale-out and cluster architectures is expected to give way to systems in which processors attach to a memory-based fabric as peripherals, rather than systems constructed around central processors.
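To make that contrast concrete, the following is a minimal, purely illustrative Python sketch; it is not HPE code, and every name in it is a hypothetical stand-in. It contrasts a conventional cluster model, in which each server receives its own copy of the data before computing, with a memory-driven model, in which processors are simply pointed at segments of one shared memory fabric and compute over them in place.

```python
# Illustrative sketch only (assumed names, not HPE's implementation):
# traditional copy-data-to-compute vs. compute-in-place over a shared fabric.
from concurrent.futures import ThreadPoolExecutor

# A large, persistent "memory fabric" holding the data set; in a memory-driven
# design this would be byte-addressable non-volatile memory reached over
# photonic links rather than an in-process Python list.
memory_fabric = list(range(1_000_000))

def traditional_node(shard):
    # Conventional cluster: the data is copied to the server that processes it.
    local_copy = list(shard)
    return sum(local_copy)

def memory_driven_processor(start, end):
    # Memory-driven model: the processor is pointed at a segment of the shared
    # fabric and computes over it in place, with no bulk data movement.
    return sum(memory_fabric[i] for i in range(start, end))

if __name__ == "__main__":
    n = len(memory_fabric)
    segments = [(i * n // 4, (i + 1) * n // 4) for i in range(4)]

    # Traditional approach: each "node" works on its own copy of a shard.
    baseline = sum(traditional_node(memory_fabric[s:e]) for s, e in segments)

    # Memory-driven approach: four "processors" attached to the same fabric.
    with ThreadPoolExecutor(max_workers=4) as pool:
        partials = pool.map(lambda seg: memory_driven_processor(*seg), segments)

    # Both yield the same answer; what differs is how much data has to move.
    print(baseline, sum(partials))
```

The point of the sketch is the data-movement pattern, not the arithmetic: in the memory-driven case the only things that travel are pointers to segments and the small partial results, which is the efficiency argument behind the architecture.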

The benefits of the type of computing envisaged by The Machine project are said to include substantial improvements in compute performance, enhanced energy efficiency, improved IT security, the ability to extend computation to new workloads and the potential to conduct more sophisticated forms of analytics.

All of this has implications for the way servers and data centers are currently designed and built, affecting everything from chip design through to server boards, CPU-memory architectures and network fabrics. Servers are a core business not only for HPE but also for competitors such as Dell EMC, Cisco, IBM and Oracle. The potential ramifications of further advances in The Machine project are therefore industry-wide.

That said, it remains to be seen whether HPE can make a success of The Machine in the way the vendor originally envisaged. HPE has expressed a commitment to rapidly commercializing the technologies developed under The Machine research project into new and existing products. This process will play out over the next four years and, most likely, over an even longer timeframe. However, despite the potential for a range of applications, it looks doubtful that The Machine will emerge from current research and development as a single commercial product. Failing to do so, however, could significantly dilute HPE’s potential to transform the existing market for computing and data center technologies. It would also give alternative data center technology vendors opportunities to come forward with their own proposed solutions to the sort of challenges The Machine seeks to address.

About Chris Drake
As Principal Analyst for Data Center Technology at Current Analysis, Chris is responsible for covering the emerging technologies that are remapping the traditional data center landscape. These include software and hardware products that are required to support public, private and hybrid cloud architectures, as well as the underlying virtualization and orchestration technology that is needed to enable process automation and workload management. He also covers the Converged Infrastructure market, with a focus on the latest generations of vendor pre-certified and optimized hardware/software stacks.
