IBM, Intel Papers Report AI Breakthroughs for Quantum Science

Hamartia Antidote

https://www.hpcwire.com/2019/03/13/ibm-intel-papers-report-ai-breakthroughs-for-quantum-science/

Fascinatingly, two announcements today show how AI (machine and deep learning) can influence quantum computing in quite different ways. IBM (et al.) reported developing AI algorithms that “demonstrate how noisy quantum computers can solve machine learning classification problems that classical computers cannot,” thus paving the way toward quantum advantage. Intel (et al.) reported having “mathematically proven that artificial intelligence can help us understand currently unreachable quantum physics phenomena,” which, among other things, could lead to better quantum computers.

The twin announcements closely track prestigious publications. The MIT, Oxford, and IBM-led paper, Supervised learning with quantum-enhanced feature spaces, was published in Nature today. The Intel-led paper, Quantum Entanglement in Deep Learning Architectures, was published in APS Physical Review Letters last month. Intel made its announcement in conjunction with Intel Mobileye co-founder/CEO Amnon Shashua’s keynote today at the National Academy of Sciences ‘Science of Deep Learning’ conference. Shashua is also a professor at Hebrew University and one of the paper’s authors.

IBM posted a blog by IBM researchers Kristan Temme and Jay Gambetta explaining the work.

“There are high hopes that quantum computing’s tremendous processing power will someday unleash exponential advances in artificial intelligence. AI systems thrive when the machine-learning algorithms used to train them are given massive amounts of data to ingest, classify and analyze. The more precisely that data can be classified according to specific characteristics, or features, the better the AI will perform. Quantum computers are expected to play a crucial role in machine learning, including the crucial aspect of accessing more computationally complex feature spaces – the fine-grain aspects of data that could lead to new insights,” write Temme and Gambetta.

“[In the paper] we describe developing and testing a quantum algorithm with the potential to enable machine learning on quantum computers in the near future. We’ve shown that as quantum computers become more powerful in the years to come, and their Quantum Volume increases, they will be able to perform feature mapping, a key component of machine learning, on highly complex data structures at a scale far beyond the reach of even the most powerful classical computers…Our methods were also able to classify data with the use of short-depth circuits, which opens a path to dealing with decoherence. Just as significantly, our feature-mapping worked as predicted: no classification errors with our engineered data, even as the IBM Q systems’ processors experienced decoherence.”
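The “feature mapping” Temme and Gambetta describe is, at heart, the classical kernel-method idea: embed data into a higher-dimensional feature space where classes that are tangled in the original coordinates become linearly separable. A minimal classical sketch of that idea (the function and variable names are illustrative, not code from the IBM paper — the paper's point is that quantum circuits can reach feature spaces classical machines cannot):

```python
import numpy as np

def feature_map(x):
    """Map a 2-D point into a 3-D feature space (a classical analogue
    of the feature mapping described above; illustrative only)."""
    x1, x2 = x
    return np.array([x1**2, np.sqrt(2) * x1 * x2, x2**2])

# Points on circles of radius 1 and radius 2 are not linearly
# separable in the plane, but their images are separable in
# feature space:
inner = np.array([np.cos(0.3), np.sin(0.3)])      # radius 1
outer = 2 * np.array([np.cos(1.1), np.sin(1.1)])  # radius 2

# In feature space the first plus the last coordinate equals the
# squared radius, so a single plane separates the two classes.
print(feature_map(inner)[0] + feature_map(inner)[2])  # ~1.0
print(feature_map(outer)[0] + feature_map(outer)[2])  # ~4.0
```

The quantum version replaces `feature_map` with a parameterized circuit whose output state lives in an exponentially large Hilbert space, which is where the claimed advantage comes from.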

Given the nature of the material, the IBM blog and paper are best read directly.

Intel’s work attacked a different issue and the paper’s authors do a nice job framing the challenge in this excerpt:

“A prominent approach for classically simulating many-body wave functions makes use of their entanglement properties in order to construct tensor network (TN) architectures that aptly model them in the thermodynamic limit. Though this method is successful in modeling one-dimensional (1D) systems that obey area-law entanglement scaling with subsystem size through the matrix product state (MPS) TN, it still faces difficulties in modeling two-dimensional (2D) systems due to intractability.

“In the seemingly unrelated field of machine learning, deep neural network architectures have exhibited an unprecedented ability to tractably encompass the convoluted dependencies that characterize difficult learning tasks such as image classification or speech recognition. A consequent machine learning inspired approach for modeling wave functions makes use of fully connected neural networks and restricted Boltzmann machines (RBMs), which represent relatively veteran machine learning constructs.

“In this Letter, we formally establish that highly entangled many-body wave functions can be efficiently represented by deep learning architectures that are at the forefront of recent empirical successes. Specifically, we address two prominent architectures in the form of convolutional neural networks (CNNs), commonly used over spatial inputs (e.g., image pixels), and recurrent neural networks (RNNs), commonly used over temporal inputs (e.g., phonemes of speech).”
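The matrix product state (MPS) the excerpt refers to writes the amplitude of each many-body basis configuration as a product of small per-site matrices, which is why it handles 1D area-law states so well. A minimal sketch of that contraction, assuming a chain with periodic boundary conditions (the names are illustrative, not code from the paper):

```python
import numpy as np

def mps_amplitude(tensors, config):
    """Amplitude of a basis configuration under a matrix product state.
    tensors[i] has shape (phys_dim, bond_dim, bond_dim); config[i]
    selects the physical index at site i.  Illustrative sketch only."""
    # Multiply the chain of matrices selected by the configuration,
    # then close the loop with a trace (periodic boundary conditions).
    mat = tensors[0][config[0]]
    for t, s in zip(tensors[1:], config[1:]):
        mat = mat @ t[s]
    return np.trace(mat)

# Example: a bond-dimension-1 MPS is just a product state.
site = np.zeros((2, 1, 1))
site[0, 0, 0] = 0.6  # amplitude for spin up at a site
site[1, 0, 0] = 0.8  # amplitude for spin down at a site
psi = [site, site]
amp = mps_amplitude(psi, (0, 1))  # 0.6 * 0.8
```

The difficulty the authors cite is that faithfully representing 2D systems this way forces the bond dimension to grow until the contraction becomes intractable.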

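The restricted Boltzmann machine (RBM) ansatz the authors call a “relatively veteran” construct assigns each spin configuration an unnormalized wave-function amplitude via a small network of visible and hidden units. A minimal sketch in the Carleo–Troyer style (parameter names are generic, not taken from the Intel paper):

```python
import numpy as np

def rbm_amplitude(s, a, b, W):
    """Unnormalized wave-function amplitude of spin configuration s
    (entries +/-1) under an RBM ansatz with visible biases a, hidden
    biases b, and coupling matrix W.  Illustrative sketch only."""
    s = np.asarray(s, dtype=float)
    # Hidden units are summed out analytically, leaving a product of
    # 2*cosh(...) factors over the hidden layer.
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(b + W @ s))

# With all parameters zero, every configuration gets the same
# amplitude: 2**(number of hidden units).
a = np.zeros(2)
b = np.zeros(3)
W = np.zeros((3, 2))
amp = rbm_amplitude([1, -1], a, b, W)  # 2**3 = 8.0
```

The Letter's contribution is showing that CNNs and RNNs, not just these older RBM constructs, can efficiently represent highly entangled states.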
Once again, this is a topic best examined by reading the original paper. That said, the implications are far-reaching, affecting many areas of research at the quantum level.
 

QasimTraveler

Really fascinating to see how the field of AI is interacting with so many varied arenas. Great.
 
