This is Your Brain on Quantum Computing

B. Shimmin

Summary Bullets:

• When it comes to swapping ones and zeros, quantum computing promises to outpace traditional processors in pure scale.

• Yet its true promise will play out when we learn how to invoke quantum phenomena in order to speed up artificial intelligence (AI).

At last week’s IBM Think conference in Las Vegas, Big Blue and AI chip manufacturer NVIDIA talked up the importance of hardware in resolving AI performance bottlenecks. As it turns out, building a smart AI system demands not only copious amounts of data but also the ability to rapidly run machine learning (ML) and deep learning (DL) algorithms against that data. The trouble is that quite often hardware gets in the way.

The industry has discovered that traditional CPUs can only go so far, managing perhaps one or two arithmetic operations per instruction cycle. That's why graphics processing units (GPUs) from NVIDIA and others, along with purpose-built accelerators like Google's TPUs, have become popular not just among cryptocurrency miners but also among data scientists modeling complex phenomena and casual users perusing the many wonders of Google Street View. These massively parallel chips can churn through on the order of 128,000 such operations per cycle. That makes a big difference for complex DL routines that can employ a sizable number of neural network layers.
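To get a feel for why that per-cycle parallelism matters, here is a minimal Python sketch using NumPy, whose vectorized kernels stand in (very loosely) for a GPU's wide arithmetic units. The array size and timings are illustrative assumptions, not benchmarks:

    import time
    import numpy as np

    x = np.random.rand(1_000_000)
    y = np.random.rand(1_000_000)

    # One multiply at a time, roughly how a scalar CPU pipeline proceeds
    t0 = time.perf_counter()
    slow = [a * b for a, b in zip(x, y)]
    print(f"scalar loop: {time.perf_counter() - t0:.3f}s")

    # One vectorized call; NumPy hands the whole array to data-parallel
    # machinery, the same idea a GPU takes to an extreme
    t0 = time.perf_counter()
    fast = x * y
    print(f"vectorized:  {time.perf_counter() - t0:.4f}s")

On most machines the vectorized call lands one to two orders of magnitude faster, and that gap is the whole argument for GPU-accelerated deep learning.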

But these chips have their limits as well. For example, moving data between CPUs and GPUs can impose its own performance lag. To that end, IBM and NVIDIA demonstrated a use case combining IBM POWER9 servers with NVIDIA Tesla V100 GPUs that promises to speed ML modeling routines at scale. IBM published a benchmark of this joint configuration using Criteo's open source advertising dataset, claiming a 46x speedup over previous results from Google's tensor processing units (TPUs) running on Google Cloud Platform. This is of particular importance for use cases that demand either a large number of ML models or constant retraining of ML models in near real time against sizable data sets.
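For a concrete picture of the workload class behind that benchmark, the sketch below trains a click-prediction model in the same spirit: binary logistic regression over tabular features. It is a minimal scikit-learn example with synthetic stand-in data; IBM's actual run used its own GPU-accelerated training stack on POWER9 hardware, which this does not reproduce:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for Criteo-style click logs: numeric features and a
    # binary "clicked" label (the real dataset is vastly larger and sparser)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100_000, 40))
    y = (X[:, :5].sum(axis=1) + rng.normal(size=100_000) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=0)
    model = LogisticRegression(max_iter=200).fit(X_tr, y_tr)
    print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

Scale that up to terabytes of click logs retrained in near real time and the CPU-GPU data-movement problem IBM and NVIDIA are attacking becomes obvious.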

Clearly, AI innovation and hardware innovation go hand in hand. So what's next? Further GPU-style processor refinements? Better algorithms that do more AI with less hardware (e.g., Knowm's neuromemristive processor)? Of course, but my long-term money is not on accelerating traditional hardware. For AI to truly deliver on tough problems such as the real-time optimization of a complex supply chain or the discovery of a new molecule capable of eradicating an intractable disease, we'll need some new hardware. We'll need quantum computing.

Running as close to absolute zero as possible and trading in bits for qubits (ones, zeros, and any quantum superposition of those two states), quantum computers from IBM, Google, Intel, D-Wave Systems, and Microsoft will one day revolutionize AI. They will do so not through pure speed alone but by exploiting the very nature of quantum mechanics (ideas like entanglement and tunneling) to enable brand new algorithms that do not have to obey the classic rules of linear execution. For heavy-lifting AI chores like classification, regression, and clustering, quantum computing will usher in an entirely new realm of performance and scale.
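Superposition and entanglement sound abstract, but they are easy to poke at in simulation. Here is a minimal Qiskit sketch (assuming the qiskit and qiskit-aer packages are installed; the API has shifted across versions) that entangles two qubits into a Bell state:

    from qiskit import QuantumCircuit
    from qiskit_aer import AerSimulator

    qc = QuantumCircuit(2, 2)
    qc.h(0)             # Hadamard puts qubit 0 into an equal superposition
    qc.cx(0, 1)         # CNOT entangles qubit 1 with qubit 0 (a Bell state)
    qc.measure([0, 1], [0, 1])

    counts = AerSimulator().run(qc, shots=1024).result().get_counts()
    print(counts)       # roughly half '00' and half '11'; never '01' or '10'

Until measurement, neither qubit holds a definite value on its own, yet their outcomes are perfectly correlated. That correlation is the raw material for algorithms that break with linear execution.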

The basic value proposition is this: instead of evaluating possibilities one after another, as traditional chips must, a quantum computer can use its qubits to hold many candidate states (superpositions) between one and zero and probe them in parallel. This can greatly speed up key facets of ML, such as reinforcement learning, because the computer can explore a far larger number of "possibilities" when determining a course of action that leads to a reward, say, winning at Go or chess. It can likewise speed up the time-consuming task of modeling and simulation, allowing researchers to test a myriad of possibilities without having to shuttle data back and forth between chipsets.
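A classical simulation makes the scale of that parallelism concrete, and also shows why it is hard to fake: representing n qubits classically takes 2^n numbers. The NumPy sketch below (an illustration, not a speedup) puts ten simulated qubits into a uniform superposition over all 1,024 bitstrings at once:

    import numpy as np

    n = 10                                        # simulated qubits
    h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # single-qubit Hadamard gate

    # A Hadamard on every qubit is the n-fold Kronecker product of h
    H_all = np.eye(1)
    for _ in range(n):
        H_all = np.kron(H_all, h)

    state = np.zeros(2 ** n)
    state[0] = 1.0           # start in the all-zeros basis state |00...0>
    state = H_all @ state    # a uniform superposition over all 2**n states

    print(f"{2 ** n} candidate bitstrings, probability {state[0] ** 2:.6f} each")

Each additional qubit doubles the number of candidates held in that single state vector, which is precisely the resource a quantum ML algorithm gets to exploit.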

Of course, much of this is still theoretical, as there are very few quantum computers available to test these ideas and even fewer that can be put to work in a production environment at scale. But if IBM's quantum computing efforts, which were very much on display during IBM Think, are a reliable bellwether for how the market will mature, we should see quantum ML enter the mainstream within the next five years. In the meantime, we'll just have to make do with the current crop of cloud-based supercomputers available from Google, Microsoft, IBM, Intel, NVIDIA, and others. Sure, it's not a 50-qubit quantum computer, but 180 teraflops of performance from a single Google TPU is nothing to sneeze at.

What do you think?
