Opportunities and challenges for quantum-assisted machine learning in near-term quantum computers

Alejandro Perdomo-Ortiz,1,2,3,4,5,* Marcello Benedetti,1,2,4,5 John Realpe-Gómez,1,6,7 and Rupak Biswas1,8

1 Quantum Artificial Intelligence Lab., NASA Ames Research Center, Moffett Field, CA 94035, USA
2 USRA Research Institute for Advanced Computer Science, Mountain View, CA 94043, USA
3 Qubitera, LLC., Mountain View, CA 94041, USA
4 Department of Computer Science, University College London, WC1E 6BT London, UK
5 Cambridge Quantum Computing Limited, CB2 1UB Cambridge, UK
6 SGT Inc., Greenbelt, MD 20770, USA
7 Instituto de Matemáticas Aplicadas, Universidad de Cartagena, Bolívar 130001, Colombia
8 Exploration Technology Directorate, NASA Ames Research Center, Moffett Field, CA 94035, USA

* Correspondence: alejandro.perdomoortiz@nasa.gov

(Dated: March 20, 2018)

With quantum computing technologies nearing the era of commercialization and quantum supremacy, machine learning (ML) appears as one of the promising “killer” applications. Despite significant effort, there has been a disconnect between most quantum ML proposals, the needs of ML practitioners, and the capabilities of near-term quantum devices to demonstrate quantum enhancement in the near future. In this contribution to the focus collection on “What would you do with 1000 qubits?”, we provide concrete examples of intractable ML tasks that could be enhanced with near-term devices. We argue that to reach this target, the focus should be on areas where ML researchers are struggling, such as generative models in unsupervised and semi-supervised learning, instead of the popular and more tractable supervised learning techniques. We also highlight the case of classical datasets with potential quantum-like statistical correlations where quantum models could be more suitable. We focus on hybrid quantum-classical approaches and illustrate some of the key challenges we foresee for near-term implementations. Finally, we introduce the quantum-assisted Helmholtz machine (QAHM), an attempt to use near-term quantum devices to tackle high-dimensional datasets of continuous variables. Instead of using quantum computers to assist deep learning, as previous approaches do, the QAHM uses deep learning to extract a low-dimensional binary representation of data, suitable for relatively small quantum processors which can assist the training of an unsupervised generative model. Although we illustrate this concept on a quantum annealer, other quantum platforms could benefit as well from this hybrid quantum-classical framework.

I. INTRODUCTION

With quantum computing technologies nearing the era of commercialization and of quantum supremacy [1], it is important to think of potential applications that might benefit from these devices. Machine learning (ML) stands out as a powerful statistical framework to attack problems where exact algorithms are hard to develop. Examples of such problems include image and speech recognition [2, 3], autonomous systems [4], medical applications [5], biology [6], artificial intelligence [7], and many others. The development of quantum algorithms that can assist or entirely replace the classical ML routine is an ongoing effort that has attracted considerable interest in the quantum information community [8–40]. We restrict the scope of our perspective to this specific angle and refer to it hereafter as quantum-assisted machine learning (QAML).
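To give a concrete, if simplified, picture of the hybrid workflow motivating this perspective, the sketch below trains a small generative model over binary codes extracted from continuous data. It is a minimal illustration under stated assumptions, not the QAHM algorithm developed later: a fixed random projection stands in for the deep feature extractor, and a classical Gibbs sampler (model_samples, our own name) stands in for the quantum annealer, which would supply the model expectations, the so-called negative phase, of the likelihood gradient. All function names, sizes, and parameters are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "high-dimensional continuous" dataset: two Gaussian blobs in 50 dims.
X = np.vstack([rng.normal(-1.0, 0.3, size=(200, 50)),
               rng.normal(+1.0, 0.3, size=(200, 50))])

# Classical feature extractor (stand-in for a trained deep network):
# a fixed random projection plus a sign threshold yields 8 binary
# units in {-1, +1}, small enough for a near-term quantum processor.
W_proj = rng.normal(size=(50, 8))
Z = np.sign(X @ W_proj)

# Fully visible Boltzmann machine over the binary codes,
# E(z) = -sum_{i<j} J_ij z_i z_j - sum_i h_i z_i.
n = Z.shape[1]
J = np.zeros((n, n))
h = np.zeros(n)

def model_samples(J, h, n_samples=200, n_steps=50):
    """Classical Gibbs sampler standing in for the quantum annealer."""
    z = rng.choice([-1.0, 1.0], size=(n_samples, n))
    for _ in range(n_steps):
        for i in range(n):
            field = z @ J[:, i] + h[i]
            p_up = 1.0 / (1.0 + np.exp(-2.0 * field))
            z[:, i] = np.where(rng.random(n_samples) < p_up, 1.0, -1.0)
    return z

lr = 0.05
for epoch in range(30):
    zm = model_samples(J, h)
    # Log-likelihood gradient: data expectations minus model expectations;
    # the latter ("negative phase") is exactly what a sampler provides.
    dJ = Z.T @ Z / len(Z) - zm.T @ zm / len(zm)
    dh = Z.mean(axis=0) - zm.mean(axis=0)
    np.fill_diagonal(dJ, 0.0)
    J += lr * dJ
    h += lr * dh

print("learned biases:", np.round(h, 2))
```

On actual hardware, one would replace model_samples with samples drawn from the device programmed with couplings J and fields h; everything else in the training loop remains classical.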
Research in this field has focused on tasks such as classification [14], regression [11, 15, 18], Gaussian models [16], vector quantization [13], principal component analysis [17], and other strategies that are routinely used by ML practitioners nowadays. We do not think these approaches would be of practical use on near-term quantum computers. The very features that make these techniques so popular, e.g., their scalability and algorithmic efficiency on huge datasets, make them unlikely candidates for killer applications in QAML on devices in the range of 100-1000 qubits. In other words, regardless of claims of polynomial or even exponential algorithmic speedups, reaching industrially interesting applications would require millions or even billions of qubits. Such an advantage is therefore moot for real-world datasets on the quantum devices expected in the next few years, which will be in the few-thousands-of-qubits regime. As we elaborate in this paper, only a game changer, such as recent developments in hybrid quantum-classical algorithms, is likely to make a dent in speeding up ML tasks.

In this perspective, we propose and emphasize three approaches that maximize the possibility of finding killer applications on near-term quantum computers:

(i) Focus on problems that are currently hard and intractable for the ML community, for example, generative models in unsupervised and semi-supervised learning, as described in Sec. II.

(ii) Focus on datasets with potentially intrinsic