The Harrow–Hassidim–Lloyd algorithm or HHL algorithm is a quantum algorithm for numerically solving a system of linear equations, designed by Aram Harrow, Avinatan Hassidim, and Seth Lloyd. The algorithm estimates the result of a scalar measurement on the solution vector to a given linear system of equations.[1]

The algorithm is one of the main fundamental algorithms expected to provide a speedup over their classical counterparts, along with Shor's factoring algorithm, Grover's search algorithm, and the quantum Fourier transform. Provided the linear system is sparse[2] and has a low condition number $\kappa$, and that the user is interested in the result of a scalar measurement on the solution vector rather than the values of the solution vector itself, the algorithm has a runtime of $O(\log(N)\kappa^2)$, where $N$ is the number of variables in the linear system. This offers an exponential speedup over the fastest classical algorithm, which runs in $O(N\kappa)$ (or $O(N\sqrt{\kappa})$ for positive semidefinite matrices).

An implementation of the quantum algorithm for linear systems of equations was first demonstrated in 2013 by three independent publications.[3][4][5] The demonstrations consisted of simple linear equations on specially designed quantum devices.[3][4][5] The first demonstration of a general-purpose version of the algorithm appeared in 2018.[6]

Due to the prevalence of linear systems in virtually all areas of science and engineering, the quantum algorithm for linear systems of equations has the potential for widespread applicability.[7]

Procedure

The HHL algorithm tackles the following problem: given an $N \times N$ Hermitian matrix $A$ and a unit vector $\vec{b}$, prepare the quantum state $|x\rangle$ corresponding to the vector $\vec{x}$ that solves the linear system $A\vec{x} = \vec{b}$. More precisely, the goal is to prepare a state $|x\rangle$ whose amplitudes equal the elements of $\vec{x}$. This means, in particular, that the algorithm cannot be used to efficiently retrieve the vector $\vec{x}$ itself. It does, however, allow one to efficiently compute expectation values of the form $\langle x|M|x\rangle$ for some observable $M$.
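The problem statement can be made concrete with a small classical sketch. The matrix, vector, and observable below are hypothetical illustrative choices, not fixed by the algorithm; the point is that HHL gives access to the normalized solution state and scalar readouts on it, not to the raw entries of $\vec{x}$.

```python
import numpy as np

# Hypothetical 2x2 instance: a Hermitian matrix A and a unit vector b.
A = np.array([[1.0, -1.0 / 3.0],
              [-1.0 / 3.0, 1.0]])
b = np.array([1.0, 0.0])

# Classical reference solution of A x = b.
x = np.linalg.solve(A, b)

# HHL prepares the *normalized* solution |x>; only quantities that survive
# normalization, such as <x|M|x>, can be read out efficiently.
x_state = x / np.linalg.norm(x)

# Example scalar readout for a sample observable M (Pauli-Z here).
M = np.array([[1.0, 0.0],
              [0.0, -1.0]])
expectation = x_state.conj() @ M @ x_state
```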

First, the algorithm represents the vector $\vec{b}$ as a quantum state of the form:

$$|b\rangle = \sum_{i=1}^{N} b_i |i\rangle.$$

Next, Hamiltonian simulation techniques are used to apply the unitary operator $e^{iAt}$ to $|b\rangle$ for a superposition of different times $t$. The ability to decompose $|b\rangle$ into the eigenbasis of $A$ and to find the corresponding eigenvalues $\lambda_j$ is facilitated by the use of quantum phase estimation.

The state of the system after this decomposition is approximately:

$$\sum_{j=1}^{N} \beta_j |u_j\rangle |\lambda_j\rangle,$$

where $u_j$ is the eigenvector basis of $A$, and $|b\rangle = \sum_{j=1}^{N} \beta_j |u_j\rangle$.
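Classically, the same decomposition is an eigendecomposition of $A$. The sketch below uses a hypothetical 2x2 Hermitian instance to show that the amplitudes $\beta_j = \langle u_j|b\rangle$ exactly reconstruct $|b\rangle$:

```python
import numpy as np

# Hypothetical 2x2 Hermitian matrix and unit input state, for illustration.
A = np.array([[1.0, -1.0 / 3.0],
              [-1.0 / 3.0, 1.0]])
b_state = np.array([1.0, 0.0])

# Eigendecomposition A = U diag(lambda) U^dagger, columns of U are the u_j.
eigvals, eigvecs = np.linalg.eigh(A)

# beta_j = <u_j|b>: amplitudes of |b> in the eigenbasis of A, which is what
# quantum phase estimation exposes alongside the eigenvalue register.
beta = eigvecs.conj().T @ b_state

# The decomposition reconstructs |b> exactly and preserves the norm.
b_reconstructed = eigvecs @ beta
```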

We would then like to perform the linear map taking $|\lambda_j\rangle$ to $C\lambda_j^{-1}|\lambda_j\rangle$, where $C$ is a normalizing constant. The linear mapping operation is not unitary and thus will require a number of repetitions, as it has some probability of failing. After it succeeds, we uncompute the $|\lambda_j\rangle$ register and are left with a state proportional to:

$$\sum_{j=1}^{N} \beta_j \lambda_j^{-1} |u_j\rangle = A^{-1}|b\rangle = |x\rangle,$$

where $|x\rangle$ is a quantum-mechanical representation of the desired solution vector $\vec{x}$. To read out all components of $\vec{x}$ would require the procedure to be repeated at least $N$ times. However, it is often the case that one is not interested in $\vec{x}$ itself, but rather some expectation value of a linear operator $M$ acting on $\vec{x}$. By mapping $M$ to a quantum-mechanical operator and performing the quantum measurement corresponding to $M$, we obtain an estimate of the expectation value $\langle x|M|x\rangle$. This allows a wide variety of features of the vector $\vec{x}$ to be extracted, including normalization, weights in different parts of the state space, and moments, without actually computing all the values of the solution vector $\vec{x}$.
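The whole procedure can be mirrored classically on a hypothetical 2x2 instance: invert each amplitude by its eigenvalue, renormalize, and read out a scalar. The observable used is an illustrative choice, not part of the algorithm.

```python
import numpy as np

# Hypothetical 2x2 Hermitian instance used only for illustration.
A = np.array([[1.0, -1.0 / 3.0],
              [-1.0 / 3.0, 1.0]])
b_state = np.array([1.0, 0.0])

eigvals, eigvecs = np.linalg.eigh(A)
beta = eigvecs.conj().T @ b_state  # amplitudes of |b> in the eigenbasis

# Eigenvalue inversion: each amplitude beta_j becomes beta_j / lambda_j,
# which is exactly A^{-1}|b> up to normalization.
x_unnormalized = eigvecs @ (beta / eigvals)
x_state = x_unnormalized / np.linalg.norm(x_unnormalized)

# Scalar readout <x|M|x> for a sample observable (Pauli-X here).
M = np.array([[0.0, 1.0],
              [1.0, 0.0]])
expectation = np.real(x_state.conj() @ M @ x_state)
```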

Explanation

Initialization

Firstly, the algorithm requires that the matrix $A$ be Hermitian so that it can be converted into a unitary operator. In the case where $A$ is not Hermitian, define

$$\mathbf{C} = \begin{bmatrix} 0 & A \\ A^\dagger & 0 \end{bmatrix}.$$

As $\mathbf{C}$ is Hermitian, the algorithm can now be used to solve $\mathbf{C}\vec{y} = \begin{bmatrix} \vec{b} \\ 0 \end{bmatrix}$ to obtain $\vec{y} = \begin{bmatrix} 0 \\ \vec{x} \end{bmatrix}$.
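The embedding is easy to verify numerically. The matrix below is a hypothetical non-Hermitian example; solving the embedded system recovers the solution of the original one in the lower block.

```python
import numpy as np

# A hypothetical non-Hermitian matrix and right-hand side.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
b = np.array([1.0, 1.0])
N = A.shape[0]

# Hermitian embedding C = [[0, A], [A^dagger, 0]].
C = np.block([[np.zeros((N, N)), A],
              [A.conj().T, np.zeros((N, N))]])

# Solving C y = (b, 0)^T returns y = (0, x)^T with A x = b.
rhs = np.concatenate([b, np.zeros(N)])
y = np.linalg.solve(C, rhs)
x = y[N:]
```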

Secondly, the algorithm requires an efficient procedure to prepare $|b\rangle$, the quantum representation of $\vec{b}$. It is assumed that there exists some linear operator $B$ that can take some arbitrary quantum state $|\text{initial}\rangle$ to $|b\rangle$ efficiently, or that this algorithm is a subroutine in a larger algorithm and is given $|b\rangle$ as input. Any error in the preparation of state $|b\rangle$ is ignored.

Finally, the algorithm assumes that the state $|\Psi_0\rangle$ can be prepared efficiently, where

$$|\Psi_0\rangle = \sqrt{\frac{2}{T}} \sum_{\tau=0}^{T-1} \sin\frac{\pi\left(\tau + \tfrac{1}{2}\right)}{T} |\tau\rangle$$

for some large $T$. The coefficients of $|\Psi_0\rangle$ are chosen to minimize a certain quadratic loss function which induces error in the $U_{\text{invert}}$ subroutine described below.
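The amplitude profile above is straightforward to tabulate; in particular it is correctly normalized for any $T$. A small sketch (the register size is an arbitrary illustrative choice):

```python
import numpy as np

T = 64  # hypothetical register size; the algorithm takes T large
tau = np.arange(T)

# Amplitudes sqrt(2/T) * sin(pi (tau + 1/2) / T) of |Psi_0>; this shape
# controls the error incurred in the phase-estimation-based subroutine.
psi0 = np.sqrt(2.0 / T) * np.sin(np.pi * (tau + 0.5) / T)

# The amplitudes form a valid quantum state: squared norms sum to 1.
norm_squared = np.sum(psi0 ** 2)
```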

Hamiltonian simulation

Hamiltonian simulation is used to transform the Hermitian matrix $A$ into a unitary operator, which can then be applied at will. This is possible if $A$ is $s$-sparse and efficiently row computable, meaning it has at most $s$ nonzero entries per row and given a row index these entries can be computed in time $O(s)$. Under these assumptions, quantum Hamiltonian simulation allows $e^{iAt}$ to be simulated in time $\tilde{O}(\log(N)s^2 t)$.
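The Hermitian-to-unitary conversion itself is just matrix exponentiation: $e^{iAt}$ is unitary precisely because $A$ is Hermitian. A minimal numerical check, on a hypothetical matrix and evolution time:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical Hermitian matrix and evolution time.
A = np.array([[1.0, -1.0 / 3.0],
              [-1.0 / 3.0, 1.0]])
t = 0.7

# e^{iAt} is unitary exactly because A is Hermitian (A = A^dagger).
U = expm(1j * A * t)

# U U^dagger should be the identity.
unitarity_check = U @ U.conj().T
```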

Uinvert subroutine

The key subroutine to the algorithm, denoted $U_{\text{invert}}$, is defined as follows and incorporates a phase estimation subroutine:

1. Prepare $|\Psi_0\rangle^C$ on register $C$.

2. Apply the conditional Hamiltonian evolution $\sum_{\tau=0}^{T-1} |\tau\rangle\langle\tau|^C \otimes e^{iA\tau t_0/T}$.

3. Apply the Fourier transform to the register $C$. Denote the resulting basis states with $|k\rangle$ for $k = 0, \ldots, T-1$. Define $\tilde{\lambda}_k := 2\pi k/t_0$.

4. Adjoin a three-dimensional register $S$ in the state

$$|h(\tilde{\lambda}_k)\rangle^S := \sqrt{1 - f(\tilde{\lambda}_k)^2 - g(\tilde{\lambda}_k)^2}\,|\text{nothing}\rangle^S + f(\tilde{\lambda}_k)\,|\text{well}\rangle^S + g(\tilde{\lambda}_k)\,|\text{ill}\rangle^S.$$

5. Reverse steps 1–3, uncomputing any garbage produced along the way.

The phase estimation procedure in steps 1–3 allows for the estimation of eigenvalues of $A$ up to error $O(1/t_0)$.

The ancilla register in step 4 is necessary to construct a final state with inverted eigenvalues corresponding to the diagonalized inverse of $A$. In this register, the functions $f$ and $g$ are called filter functions. The states 'nothing', 'well' and 'ill' are used to instruct the loop body on how to proceed; 'nothing' indicates that the desired matrix inversion has not yet taken place, 'well' indicates that the inversion has taken place and the loop should halt, and 'ill' indicates that part of $|b\rangle$ is in the ill-conditioned subspace of $A$ and the algorithm will not be able to produce the desired inversion. Producing a state proportional to the inverse of $A$ requires 'well' to be measured, after which the overall state of the system collapses to the desired state by the extended Born rule.
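The effect of post-selecting on 'well' can be mimicked classically. The sketch below assumes the input lies entirely in the well-conditioned subspace, so the 'well' filter acts like $f(\lambda) \propto 1/\lambda$; the proportionality constant chosen here (the smallest eigenvalue magnitude, keeping $|f|\le 1$) is an illustrative choice, not the paper's exact filter.

```python
import numpy as np

# Hypothetical 2x2 Hermitian instance, fully in the well-conditioned subspace.
A = np.array([[1.0, -1.0 / 3.0],
              [-1.0 / 3.0, 1.0]])
b_state = np.array([1.0, 0.0])

eigvals, eigvecs = np.linalg.eigh(A)
beta = eigvecs.conj().T @ b_state

# Illustrative 'well' filter f(lambda) = C / lambda with C = min |lambda|,
# so that |f| <= 1 and the amplitudes stay valid.
C = np.min(np.abs(eigvals))
f = C / eigvals

# Probability of measuring 'well' before any amplitude amplification.
p_well = np.sum(np.abs(beta * f) ** 2)

# Conditioned on 'well', the state collapses to one proportional to A^{-1}|b>.
x_post = eigvecs @ (beta * f)
x_post = x_post / np.linalg.norm(x_post)
```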

Main loop

The body of the algorithm follows the amplitude amplification procedure: starting with $U_{\text{invert}} B |\text{initial}\rangle$, the following operation is repeatedly applied:

$$U_{\text{invert}} B R_{\text{init}} B^\dagger U_{\text{invert}}^\dagger R_{\text{succ}},$$

where

$$R_{\text{succ}} = I - 2|\text{well}\rangle\langle\text{well}|$$

and

$$R_{\text{init}} = I - 2|\text{initial}\rangle\langle\text{initial}|.$$

After each repetition, $S$ is measured and will produce a value of 'nothing', 'well', or 'ill' as described above. This loop is repeated until $|\text{well}\rangle$ is measured, which occurs with some probability $p$. Rather than repeating $1/p$ times to minimize error, amplitude amplification is used to achieve the same error resilience using only $O(1/\sqrt{p})$ repetitions.
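The quadratic saving is worth making concrete. Taking the illustrative worst case in which the post-selection succeeds with probability $\sim 1/\kappa^2$ (the condition-number value below is an arbitrary example):

```python
import numpy as np

# Illustrative worst case: post-selection success probability ~ (1/kappa)^2.
kappa = 25.0
p_success = 1.0 / kappa ** 2

# Naive repeat-until-success needs O(1/p) = O(kappa^2) trials...
naive_trials = int(round(1.0 / p_success))

# ...while amplitude amplification needs only O(1/sqrt(p)) = O(kappa) rounds.
amplified_rounds = int(round(1.0 / np.sqrt(p_success)))
```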

Scalar measurement

After successfully measuring 'well' on $S$, the system will be in a state proportional to:

$$\sum_{j=1}^{N} \beta_j \lambda_j^{-1} |u_j\rangle = A^{-1}|b\rangle = |x\rangle.$$

Finally, we perform the quantum measurement corresponding to $M$ and obtain an estimate of the value of $\langle x|M|x\rangle$.

Run time analysis

Classical efficiency

The best classical algorithm which produces the actual solution vector $\vec{x}$ is Gaussian elimination, which runs in $O(N^3)$ time.

If $A$ is $s$-sparse and positive semi-definite, then the conjugate gradient method can be used to find the solution vector $\vec{x}$ in $O(Ns\sqrt{\kappa}\log(1/\varepsilon))$ time by minimizing the quadratic function $\|A\vec{x}-\vec{b}\|_2^2$.

When only a summary statistic of the solution vector $\vec{x}$ is needed, as is the case for the quantum algorithm for linear systems of equations, a classical computer can find an estimate of $\vec{x}^\dagger M \vec{x}$ in $O(N\sqrt{\kappa})$ time.
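The classical baseline is easy to exercise. The system below is a hypothetical symmetric positive-definite example; conjugate gradient iteratively minimizes a quadratic whose unique minimizer solves $A\vec{x}=\vec{b}$.

```python
import numpy as np
from scipy.sparse.linalg import cg

# Hypothetical sparse, symmetric positive-definite system.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

# Conjugate gradient solves A x = b iteratively; info == 0 on convergence.
x, info = cg(A, b)
```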

Quantum efficiency

The runtime of the quantum algorithm for solving systems of linear equations originally proposed by Harrow et al. was shown to be $O(\kappa^2 \log(N)/\varepsilon)$, where $\varepsilon$ is the error parameter and $\kappa$ is the condition number of $A$. This was subsequently improved to $O(\kappa \log^3(\kappa)\log(N)/\varepsilon^3)$ by Andris Ambainis[8] and a quantum algorithm with runtime polynomial in $\log(1/\varepsilon)$ was developed by Childs et al.[9] Since the HHL algorithm maintains its logarithmic scaling in $N$ only for sparse or low-rank matrices, Wossnig et al.[10] extended the HHL algorithm based on a quantum singular value estimation technique and provided a linear system algorithm for dense matrices which runs in $O(\kappa^2\sqrt{N}\,\mathrm{polylog}(N)/\varepsilon)$ time compared to the $O(\kappa^2 N\,\mathrm{polylog}(N)/\varepsilon)$ of the standard HHL algorithm.

Optimality

An important factor in the performance of the matrix inversion algorithm is the condition number $\kappa$, which represents the ratio of $A$'s largest and smallest eigenvalues. As the condition number increases, the ease with which the solution vector can be found using gradient descent methods such as the conjugate gradient method decreases, as $A$ becomes closer to a matrix which cannot be inverted and the solution vector becomes less stable. This algorithm assumes that all singular values of the matrix $A$ lie between $1/\kappa$ and 1, in which case the claimed run-time proportional to $\kappa^2$ will be achieved. Therefore, the speedup over classical algorithms is increased further when $\kappa$ is $\mathrm{poly}(\log(N))$.[1]
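For a Hermitian matrix, the condition number can be computed directly from the eigenvalue spectrum; on a hypothetical example with eigenvalues $2/3$ and $4/3$, the ratio is exactly $2$ and agrees with the general 2-norm condition number:

```python
import numpy as np

# Hermitian example with eigenvalues 2/3 and 4/3, so kappa = (4/3)/(2/3) = 2.
A = np.array([[1.0, -1.0 / 3.0],
              [-1.0 / 3.0, 1.0]])

# Condition number = ratio of largest to smallest eigenvalue magnitude.
eig_mags = np.abs(np.linalg.eigvalsh(A))
kappa = eig_mags.max() / eig_mags.min()
```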

If the run-time of the algorithm were made poly-logarithmic in $\kappa$, then problems solvable on $n$ qubits could be solved in $\mathrm{poly}(n)$ time, causing the complexity class BQP to be equal to PSPACE.[1]

Error analysis

Performing the Hamiltonian simulation, which is the dominant source of error, is done by simulating $e^{iAt}$. Assuming that $A$ is $s$-sparse, this can be done with an error bounded by a constant, which will translate to the additive error achieved in the output state $|x\rangle$.

The phase estimation step errs by $O(1/t_0)$ in estimating $\lambda$, which translates into a relative error of $O(1/(t_0\lambda))$ in $\lambda^{-1}$. If $\lambda \geq 1/\kappa$, taking $t_0 = O(\kappa/\varepsilon)$ induces a final error of $\varepsilon$. This requires that the overall run-time efficiency be increased proportionally to $O(\kappa/\varepsilon)$ to minimize error.

Experimental realization

While there does not yet exist a quantum computer that can truly offer a speedup over a classical computer, implementation of a "proof of concept" remains an important milestone in the development of a new quantum algorithm. Demonstrating the quantum algorithm for linear systems of equations remained a challenge for years after its proposal until 2013 when it was demonstrated by Cai et al., Barz et al. and Pan et al. in parallel.

Cai et al.

Published in Physical Review Letters 110, 230501 (2013), Cai et al. reported an experimental demonstration of the simplest meaningful instance of this algorithm, that is, solving $2\times 2$ linear equations for various input vectors. The quantum circuit is optimized and compiled into a linear optical network with four photonic quantum bits (qubits) and four controlled logic gates, which is used to coherently implement every subroutine for this algorithm. For various input vectors, the quantum computer gives solutions for the linear equations with reasonably high precision, ranging from fidelities of 0.825 to 0.993.[11]

Barz et al.

On February 5, 2013, Stefanie Barz and co-workers demonstrated the quantum algorithm for linear systems of equations on a photonic quantum computing architecture. This implementation used two consecutive entangling gates on the same pair of polarization-encoded qubits. Two separately controlled NOT gates were realized where the successful operation of the first was heralded by a measurement of two ancillary photons. Barz et al. found that the fidelity in the obtained output state ranged from 64.7% to 98.1% due to the influence of higher-order emissions from spontaneous parametric down-conversion.[4]

Pan et al.

On February 8, 2013, Pan et al. reported a proof-of-concept experimental demonstration of the quantum algorithm using a 4-qubit nuclear magnetic resonance quantum information processor. The implementation was tested using simple linear systems of only 2 variables. Across three experiments they obtained the solution vector with over 96% fidelity.[5]

Wen et al.

Another experimental demonstration using NMR for solving an $8\times 8$ system was reported by Wen et al.[12] in 2018, using the algorithm developed by Subaşı et al.[13]

Applications

Quantum computers are devices that harness quantum mechanics to perform computations in ways that classical computers cannot. For certain problems, quantum algorithms supply exponential speedups over their classical counterparts, the most famous example being Shor's factoring algorithm. Few such exponential speedups are known, and those that are (such as the use of quantum computers to simulate other quantum systems) have so far found limited practical use due to the current small size of quantum computers. This algorithm provides an exponentially faster method of estimating features of the solution of a set of linear equations, which is a problem ubiquitous in science and engineering, both on its own and as a subroutine in more complex problems.

Electromagnetic scattering

Clader et al. presented a preconditioned version of the linear systems algorithm that provided two advances. First, they demonstrated how a preconditioner could be included within the quantum algorithm. This expands the class of problems that can achieve the promised exponential speedup, since the scaling of HHL and the best classical algorithms are both polynomial in the condition number. The second advance was the demonstration of how to use HHL to solve for the radar cross-section of a complex shape. This was one of the first end-to-end examples of how to use HHL to solve a concrete problem exponentially faster than the best known classical algorithm.[14]

Linear differential equation solving

Dominic Berry proposed a new algorithm for solving linear time-dependent differential equations as an extension of the quantum algorithm for solving linear systems of equations. Berry provides an efficient algorithm for solving the full time evolution under sparse linear differential equations on a quantum computer.[15]

Finite element method

The finite element method uses large systems of linear equations to find approximate solutions to various physical and mathematical models. Montanaro and Pallister demonstrate that the HHL algorithm, when applied to certain FEM problems, can achieve a polynomial quantum speedup. They suggest that an exponential speedup is not possible for problems with fixed dimensions in which the solution meets certain smoothness conditions.

Quantum speedups for the finite element method are higher for problems which include solutions with higher-order derivatives and large spatial dimensions. For example, problems in many-body dynamics require the solution of equations containing derivatives on orders scaling with the number of bodies, and some problems in computational finance, such as Black-Scholes models, require large spatial dimensions.[16]

Least-squares fitting

Wiebe et al. provide a new quantum algorithm to determine the quality of a least-squares fit, in which a continuous function is used to approximate a set of discrete points, by extending the quantum algorithm for linear systems of equations. As the number of discrete points increases, the time required to produce a least-squares fit using even a quantum computer running a quantum state tomography algorithm becomes very large. Wiebe et al. find that in many cases, their algorithm can efficiently find a concise approximation of the data points, eliminating the need for the higher-complexity tomography algorithm.[17]

Machine learning and big data analysis

Machine learning is the study of systems that can identify trends in data. Tasks in machine learning frequently involve manipulating and classifying a large volume of data in high-dimensional vector spaces. The runtime of classical machine learning algorithms is limited by a polynomial dependence on both the volume of data and the dimensions of the space. Quantum computers are capable of manipulating high-dimensional vectors using tensor product spaces and thus are well-suited platforms for machine learning algorithms.[18]

The quantum algorithm for linear systems of equations has been applied to a support vector machine, which is an optimized linear or non-linear binary classifier. A support vector machine can be used for supervised machine learning, in which a training set of already classified data is available, or unsupervised machine learning, in which all data given to the system is unclassified. Rebentrost et al. show that a quantum support vector machine can be used for big data classification and achieve an exponential speedup over classical computers.[19]

In June 2018, Zhao et al. developed an algorithm for performing Bayesian training of deep neural networks in quantum computers with an exponential speedup over classical training due to the use of the quantum algorithm for linear systems of equations,[6] providing also the first general-purpose implementation of the algorithm to be run in cloud-based quantum computers.[20]

Finance

Proposals for using HHL in finance include solving partial differential equations for the Black–Scholes equation and determining portfolio optimization via a Markowitz solution.[21]

Quantum chemistry

In 2023, Baskaran et al. proposed the use of the HHL algorithm for quantum chemistry calculations, via the linearized coupled cluster (LCC) method.[22] The connection between the HHL algorithm and the LCC method stems from the fact that the latter can be recast as a system of linear equations. A key factor that makes this approach useful for quantum chemistry is that the number of state register qubits is the natural logarithm of the number of excitations, offering an exponential suppression in the number of required qubits compared to the variational quantum eigensolver or quantum phase estimation algorithms. This leads to a 'coexistence across scales', where in a given quantum computing era, HHL-LCC could be applied to much larger systems whereas QPE-CASCI could be employed for smaller molecular systems but with better accuracy in predicting molecular properties. On the algorithmic side, the authors introduce the 'AdaptHHL' approach, which circumvents the need to expend an $\sim O(N^3)$ classical overhead associated with fixing a value for the parameter 'c' in the controlled-rotation module of the algorithm.

Critique

Recognizing the importance of the HHL algorithm in the field of quantum machine learning, Scott Aaronson[23] analyzes the caveats and factors that could limit the actual quantum advantage of the algorithm.

  1. the vector $\vec{b}$ has to be efficiently prepared as a quantum state. If the vector is not close to uniform, the state preparation is likely to be costly, and if it takes a number of steps polynomial in $N$, the exponential advantage of HHL would vanish.
  2. the quantum phase estimation step calls for the generation of the unitary $e^{iAt}$, and its controlled application. The efficiency of this step depends on the matrix $A$ being sparse and 'well conditioned' (low $\kappa$). Otherwise, the application of $e^{iAt}$ would grow as $O(N)$ and once again, the algorithm's quantum advantage would vanish.
  3. lastly, the vector $|x\rangle$ is not readily accessible. The HHL algorithm enables learning a 'summary' of the vector, namely the expectation value $\langle x|M|x\rangle$ of an operator $M$. If actual values of $\vec{x}$ are needed, then HHL would need to be repeated $O(N)$ times, killing the exponential speed-up. However, three ways of avoiding getting the actual values have been proposed: first, if only some properties of the solution are needed;[24] second, if the results are needed only to feed downstream matrix operations; third, if only a sample of the solution is needed.[25]

References

  1. ^ a b c Harrow, Aram W; Hassidim, Avinatan; Lloyd, Seth (2008). "Quantum algorithm for linear systems of equations". Physical Review Letters. 103 (15): 150502. arXiv:0811.3171. Bibcode:2009PhRvL.103o0502H. doi:10.1103/PhysRevLett.103.150502. PMID 19905613. S2CID 5187993.
  2. ^ Johnston, Eric (2019-07-03). Programming Quantum Computers: Essential Algorithms and Code Samples. O'Reilly Media. p. 267. ISBN 9781492039655.
  3. ^ a b Cai, X.-D; Weedbrook, C; Su, Z.-E; Chen, M.-C; Gu, Mile; Zhu, M.-J; Li, Li; Liu, Nai-Le; Lu, Chao-Yang; Pan, Jian-Wei (2013). "Experimental Quantum Computing to Solve Systems of Linear Equations". Physical Review Letters. 110 (23): 230501. arXiv:1302.4310. Bibcode:2013PhRvL.110w0501C. doi:10.1103/PhysRevLett.110.230501. PMID 25167475. S2CID 20427454.
  4. ^ a b c Barz, Stefanie; Kassal, Ivan; Ringbauer, Martin; Lipp, Yannick Ole; Dakić, Borivoje; Aspuru-Guzik, Alán; Walther, Philip (2014). "A two-qubit photonic quantum processor and its application to solving systems of linear equations". Scientific Reports. 4: 6115. arXiv:1302.1210. Bibcode:2014NatSR...4E6115B. doi:10.1038/srep06115. ISSN 2045-2322. PMC 4137340. PMID 25135432.
  5. ^ a b c Pan, Jian; Cao, Yudong; Yao, Xiwei; Li, Zhaokai; Ju, Chenyong; Peng, Xinhua; Kais, Sabre; Du, Jiangfeng; Du, Jiangfeng (2014). "Experimental realization of quantum algorithm for solving linear systems of equations". Physical Review A. 89 (2): 022313. arXiv:1302.1946. Bibcode:2014PhRvA..89b2313P. doi:10.1103/PhysRevA.89.022313. S2CID 14303240.
  6. ^ a b Zhao, Zhikuan; Pozas-Kerstjens, Alejandro; Rebentrost, Patrick; Wittek, Peter (2019). "Bayesian Deep Learning on a Quantum Computer". Quantum Machine Intelligence. 1 (1–2): 41–51. arXiv:1806.11463. doi:10.1007/s42484-019-00004-7. S2CID 49554188.
  7. ^ Quantum Computer Runs The Most Practically Useful Quantum Algorithm, by Lu and Pan.
  8. ^ Ambainis, Andris (2010). "Variable time amplitude amplification and a faster quantum algorithm for solving systems of linear equations". arXiv:1010.4458 [quant-ph].
  9. ^ Childs, Andrew M.; Kothari, Robin; Somma, Rolando D. (2017). "Quantum Algorithm for Systems of Linear Equations with Exponentially Improved Dependence on Precision". SIAM Journal on Computing. 46 (6): 1920–1950. arXiv:1511.02306. doi:10.1137/16m1087072. ISSN 0097-5397. S2CID 3834959.
  10. ^ Wossnig, Leonard; Zhao, Zhikuan; Prakash, Anupam (2018). "A quantum linear system algorithm for dense matrices". Physical Review Letters. 120 (5): 050502. arXiv:1704.06174. Bibcode:2018PhRvL.120e0502W. doi:10.1103/PhysRevLett.120.050502. PMID 29481180. S2CID 3714239.
  11. ^ Cai, X. -D; Weedbrook, Christian; Su, Z. -E; Chen, M. -C; Gu, Mile; Zhu, M. -J; Li, L; Liu, N. -L; Lu, Chao-Yang; Pan, Jian-Wei (2013). "Experimental Quantum Computing to Solve Systems of Linear Equations". Physical Review Letters. 110 (23): 230501. arXiv:1302.4310. Bibcode:2013PhRvL.110w0501C. doi:10.1103/PhysRevLett.110.230501. PMID 25167475. S2CID 20427454.
  12. ^ Wen, Jingwei; Kong, Xiangyu; Wei, Shijie; Wang, Bixue; Xin, Tao; Long, Guilu (2019). "Experimental realization of quantum algorithms for a linear system inspired by adiabatic quantum computing". Physical Review A. 99 (1): 012320.
  13. ^ Subaşı, Yiğit; Somma, Rolando D.; Orsucci, Davide (2019-02-14). "Quantum Algorithms for Systems of Linear Equations Inspired by Adiabatic Quantum Computing". Physical Review Letters. 122 (6): 060504. arXiv:1805.10549. Bibcode:2019PhRvL.122f0504S. doi:10.1103/physrevlett.122.060504. ISSN 0031-9007. PMID 30822089. S2CID 73493666.
  14. ^ Clader, B. D; Jacobs, B. C; Sprouse, C. R (2013). "Preconditioned Quantum Linear System Algorithm". Physical Review Letters. 110 (25): 250504. arXiv:1301.2340. Bibcode:2013PhRvL.110y0504C. doi:10.1103/PhysRevLett.110.250504. PMID 23829722. S2CID 33391978.
  15. ^ Berry, Dominic W (2010). "High-order quantum algorithm for solving linear differential equations". Journal of Physics A: Mathematical and Theoretical. 47 (10): 105301. arXiv:1010.2745. Bibcode:2014JPhA...47j5301B. doi:10.1088/1751-8113/47/10/105301. S2CID 17623971.
  16. ^ Montanaro, Ashley; Pallister, Sam (2016). "Quantum Algorithms and the Finite Element Method". Physical Review A. 93 (3): 032324. arXiv:1512.05903. Bibcode:2016PhRvA..93c2324M. doi:10.1103/PhysRevA.93.032324. S2CID 44004935.
  17. ^ Wiebe, Nathan; Braun, Daniel; Lloyd, Seth (2012). "Quantum Data Fitting". Physical Review Letters. 109 (5): 050505. arXiv:1204.5242. Bibcode:2012PhRvL.109e0505W. doi:10.1103/PhysRevLett.109.050505. PMID 23006156. S2CID 118439810.
  18. ^ Lloyd, Seth; Mohseni, Masoud; Rebentrost, Patrick (2013). "Quantum algorithms for supervised and unsupervised machine learning". arXiv:1307.0411 [quant-ph].
  19. ^ Rebentrost, Patrick; Mohseni, Masoud; Lloyd, Seth (2013). "Quantum support vector machine for big feature and big data classification". arXiv:1307.0471v2 [quant-ph].
  20. ^ "apozas/bayesian-dl-quantum". GitLab. Retrieved 30 October 2018.
  21. ^ Jacquier, Antoine (2022-10-31). Quantum Machine Learning and Optimisation in Finance: On the Road to Quantum Advantage. Packt. p. 349. ISBN 9781801817875.
  22. ^ Baskaran, N (2023). "Adapting the Harrow-Hassidim-Lloyd algorithm to quantum many-body theory". Physical Review Research. 5 (4): 043113. doi:10.1103/PhysRevResearch.5.043113.
  23. ^ Aaronson, Scott (2015). "Read the fine print". Nature Physics. 11 (4): 291–293. Bibcode:2015NatPh..11..291A. doi:10.1038/nphys3272. S2CID 122167250. Retrieved 2023-05-09.
  24. ^ Schuld, Maria (2018). Supervised Learning with Quantum Computers. Springer Publishing. p. 218. ISBN 9783319964249.
  25. ^ Schuld, Maria (2018). Supervised Learning with Quantum Computers. Springer Publishing. p. 219. ISBN 9783319964249.