The Harrow–Hassidim–Lloyd (HHL) algorithm is a quantum algorithm for obtaining certain information about the solution to a system of linear equations, introduced by Aram Harrow, Avinatan Hassidim, and Seth Lloyd. Specifically, the algorithm estimates quadratic functions of the solution vector to a given system of linear equations.[1]
The algorithm is one of the main fundamental algorithms expected to provide a speedup over its classical counterparts, along with Shor's factoring algorithm and Grover's search algorithm. Assuming the linear system is sparse[2] and has a low condition number κ, and that the user is interested in the result of a scalar measurement on the solution vector rather than the entire vector itself, the algorithm has a runtime of O(log(N)κ²), where N is the number of variables in the linear system. This offers an exponential speedup over the fastest classical algorithm, which runs in O(Nκ) (or O(N√κ) for positive semidefinite matrices).
An implementation of the quantum algorithm for linear systems of equations was first demonstrated in 2013 by three independent publications.[3][4][5] The demonstrations consisted of simple linear equations on specially designed quantum devices.[3][4][5] The first demonstration of a general-purpose version of the algorithm appeared in 2018.[6]
The HHL algorithm solves the following problem: given a Hermitian matrix A and a unit vector b, prepare the quantum state |x⟩ whose amplitudes equal the elements of the solution vector x to the linear system Ax = b. In particular, the algorithm cannot efficiently output the solution x itself. However, one can still efficiently compute products of the form x†Mx for some Hermitian matrix M.
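As a classical point of reference for what the algorithm is asked to produce, the following sketch solves Ax = b directly and evaluates the quadratic form on the normalized solution. The particular matrices A and M and the vector b are small illustrative choices, not examples from the HHL paper.

```python
import numpy as np

# Classical reference for the quantity HHL estimates: given Hermitian A
# and a unit vector b, solve A x = b and evaluate <x|M|x> on the
# normalized solution vector.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # Hermitian (here real symmetric)
b = np.array([1.0, 0.0])            # unit vector
M = np.array([[1.0, 0.0],
              [0.0, -1.0]])         # Hermitian observable

x = np.linalg.solve(A, b)           # classical dense solve, O(N^3)
x_norm = x / np.linalg.norm(x)      # HHL prepares the *normalized* state |x>
expectation = x_norm @ M @ x_norm   # the scalar <x|M|x> that HHL estimates
```

Note that the quantum algorithm never reveals x itself; only scalars of this form are efficiently accessible.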
The algorithm first prepares a quantum state corresponding to b of the form

|b⟩ = Σ_i b_i |i⟩.
Next, Hamiltonian simulation is used to apply the unitary operator e^{iAt} to |b⟩ for a superposition of different times t. The algorithm uses quantum phase estimation to decompose |b⟩ into the eigenbasis of A and find the corresponding eigenvalues λ_j. The state of the system after this decomposition is approximately

Σ_j β_j |u_j⟩ |λ_j⟩,

where the u_j are the eigenvectors of A and |b⟩ = Σ_j β_j |u_j⟩.
We would then like to apply the linear map taking |λ_j⟩ to Cλ_j⁻¹|λ_j⟩, where C is a normalizing constant. This linear map is not unitary, so it must be implemented using a quantum measurement with a nonzero probability of failure. After it succeeds, we uncompute the |λ_j⟩ register and are left with a state proportional to

Σ_j β_j λ_j⁻¹ |u_j⟩ = A⁻¹|b⟩ = |x⟩,

where |x⟩ is a quantum state corresponding to the desired solution vector x.
Retrieving all components of x would require the procedure to be repeated at least N times. However, often the full vector is not needed and one only requires the expectation value of a linear operator M acting on x. By performing the quantum measurement corresponding to M, we obtain an estimate of the product ⟨x|M|x⟩.
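The eigenbasis manipulation above can be mirrored classically: expand b in the eigenbasis of A, rescale each coefficient β_j by 1/λ_j, and recombine. The sketch below, on a small illustrative matrix, checks that this route reproduces A⁻¹b.

```python
import numpy as np

# Classical mirror of the HHL pipeline: decompose b in the eigenbasis of A,
# divide each coefficient beta_j by lambda_j, and compare with a direct solve.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # small Hermitian example (assumption)
b = np.array([1.0, 0.0])

lam, U = np.linalg.eigh(A)          # eigenvalues lambda_j, eigenvectors u_j
beta = U.T @ b                      # b = sum_j beta_j u_j
x_eig = U @ (beta / lam)            # sum_j (beta_j / lambda_j) u_j
x_direct = np.linalg.solve(A, b)    # should match the eigenbasis route
```

On a quantum computer the eigendecomposition is never written out explicitly; phase estimation performs it coherently in superposition.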
Firstly, the algorithm requires that the matrix A be Hermitian so that it can be converted into a unitary operator. In the case where A is not Hermitian, one can define the Hermitian matrix

C = ( 0    A )
    ( A†   0 ).

The algorithm can then be used to solve Cy = (b, 0)ᵀ to obtain y = (0, x)ᵀ.
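This Hermitian embedding is easy to verify numerically. The sketch below builds the block matrix C for a deliberately non-Hermitian example A, solves the enlarged system, and reads the solution x out of the lower half of y.

```python
import numpy as np

# Hermitian embedding of a non-Hermitian A: solve C y = (b, 0)^T with
# C = [[0, A], [A^dagger, 0]]; the solution is y = (0, x)^T with A x = b.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])          # deliberately non-Hermitian example
b = np.array([1.0, 1.0])

N = A.shape[0]
C = np.block([[np.zeros((N, N)), A],
              [A.conj().T, np.zeros((N, N))]])   # Hermitian by construction

y = np.linalg.solve(C, np.concatenate([b, np.zeros(N)]))
x = y[N:]                            # lower half carries the solution to A x = b
```

The top block of C y = (b, 0)ᵀ enforces Ax = b, while the bottom block forces the upper half of y to vanish.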
Secondly, the algorithm requires an efficient procedure to prepare |b⟩, the quantum representation of b. It is assumed either that |b⟩ has already been prepared or that there exists some unitary operation B that efficiently maps some initial quantum state to |b⟩. Any error in the preparation of the state |b⟩ is ignored.
Finally, the algorithm assumes that the state |Ψ₀⟩ can be prepared efficiently, where

|Ψ₀⟩ := √(2/T) Σ_{τ=0}^{T−1} sin(π(τ + 1/2)/T) |τ⟩

for some large T. The coefficients of |Ψ₀⟩ are chosen to minimize a certain quadratic loss function which induces error in the U_invert subroutine described below.
Hamiltonian simulation is used to transform the Hermitian matrix A into the unitary operator e^{iAt}, which can then be applied at will. This is possible if A is s-sparse and efficiently row computable, meaning it has at most s nonzero entries per row and, given a row index, these entries can be computed in time O(s). Under these assumptions, quantum Hamiltonian simulation allows e^{iAt} to be simulated in time Õ(log(N)s²t).
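The operator produced by Hamiltonian simulation can be sketched classically by exponentiating A through its eigendecomposition; this is a dense stand-in for illustration, not the sparse simulation method the algorithm actually uses.

```python
import numpy as np

# The unitary e^{iAt} that Hamiltonian simulation implements, built here by
# direct eigendecomposition: e^{iAt} = U diag(e^{i*lambda_j*t}) U^dagger.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])           # small Hermitian example (assumption)
t = 0.7                              # an arbitrary evolution time

lam, U = np.linalg.eigh(A)
U_t = U @ np.diag(np.exp(1j * lam * t)) @ U.conj().T   # e^{iAt}

# Because A is Hermitian, e^{iAt} is unitary: U_t^dagger U_t = I.
identity_check = U_t.conj().T @ U_t
```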
The key subroutine of the algorithm, denoted U_invert, is defined as follows and incorporates a phase estimation subroutine:
1. Prepare |Ψ₀⟩ on register C.
2. Apply the conditional Hamiltonian evolution Σ_{τ=0}^{T−1} |τ⟩⟨τ| ⊗ e^{iAτt₀/T}.
3. Apply the Fourier transform to register C. Denote the resulting basis states by |k⟩ for k = 0, ..., T − 1. Define λ̃_k := 2πk/t₀.
4. Adjoin a three-dimensional register S in the state

|h(λ̃_k)⟩ := √(1 − f(λ̃_k)² − g(λ̃_k)²) |nothing⟩ + f(λ̃_k) |well⟩ + g(λ̃_k) |ill⟩.
5. Reverse steps 1–3, uncomputing any garbage produced along the way.
The phase estimation procedure in steps 1–3 allows the eigenvalues of A to be estimated up to error 2π/t₀.
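The discretization behind that error bound can be illustrated classically: phase estimation reads eigenvalues off the grid λ̃_k = 2πk/t₀, so the estimate lands within half a grid spacing of the true value. The numbers below (t₀, T, and the target eigenvalue) are illustrative assumptions.

```python
import numpy as np

# Phase estimation reports eigenvalues on the grid 2*pi*k/t0; the nearest
# grid point is within half a spacing, i.e. the error is O(2*pi/t0).
t0 = 200.0                            # illustrative choice of t0
T = 64                                # number of grid points (assumption)
lam_true = 1.2345                     # an eigenvalue to be estimated

grid = 2 * np.pi * np.arange(T) / t0  # lambda_tilde_k = 2*pi*k / t0
lam_est = grid[np.argmin(np.abs(grid - lam_true))]
error = abs(lam_est - lam_true)       # bounded by half the grid spacing
```

Increasing t₀ refines the grid, which is exactly why the error analysis below takes t₀ = O(κ/ε).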
The ancilla register in step 4 is necessary to construct a final state with inverted eigenvalues corresponding to the diagonalized inverse of A. In this register, the functions f and g are called filter functions. The states 'nothing', 'well' and 'ill' are used to instruct the loop body on how to proceed; 'nothing' indicates that the desired matrix inversion has not yet taken place, 'well' indicates that the inversion has taken place and the loop should halt, and 'ill' indicates that part of |b⟩ is in the ill-conditioned subspace of A and the algorithm will not be able to produce the desired inversion. Producing a state proportional to the inverse of A requires 'well' to be measured, after which the overall state of the system collapses to the desired state by the extended Born rule.
The body of the algorithm follows the amplitude amplification procedure: starting with U_invert B|initial⟩, the following operation is repeatedly applied:

U_invert B R_init B† U_invert† R_succ,

where

R_succ = I − 2|well⟩⟨well|

and

R_init = I − 2|initial⟩⟨initial|.
After each repetition, the register S is measured and yields 'nothing', 'well', or 'ill' as described above. The loop is repeated until |well⟩ is measured, which occurs with some probability p. Rather than repeating O(1/p) times to minimize error, amplitude amplification achieves the same error resilience using only O(1/√p) repetitions.
After successfully measuring 'well' on S, the system will be in a state proportional to

Σ_j β_j λ_j⁻¹ |u_j⟩.

We can then perform the quantum measurement corresponding to M and obtain an estimate of ⟨x|M|x⟩.
The best classical algorithm which produces the actual solution vector x is Gaussian elimination, which runs in O(N³) time.
If A is s-sparse and positive semidefinite, then the conjugate gradient method can be used to find the solution vector x in time O(Ns√κ) by minimizing the quadratic function |Ax − b|².
When only a summary statistic of the solution vector x is needed, as is the case for the quantum algorithm for linear systems of equations, a classical computer can find an estimate of x†Mx in O(Nκ) (or O(N√κ) for positive semidefinite matrices).
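For concreteness, the classical competitor mentioned above can be sketched in a few lines. This is a minimal conjugate gradient implementation for a symmetric positive-definite system, written for illustration rather than performance; the example matrix and tolerances are assumptions.

```python
import numpy as np

# Minimal conjugate gradient solver for symmetric positive-definite A,
# the classical iterative method the text compares against.
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x                     # residual
    p = r.copy()                      # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)         # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p     # conjugate new search direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])            # symmetric positive definite (assumption)
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

The number of iterations needed grows with √κ, which is where the condition-number dependence in the classical runtime comes from.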
The runtime of the quantum algorithm for solving systems of linear equations originally proposed by Harrow et al. was shown to be O(κ² log(N)/ε), where ε is the error parameter and κ is the condition number of A. This was subsequently improved to O(κ log³(κ) log(N)/ε³) by Andris Ambainis,[7] and a quantum algorithm with runtime polynomial in log(1/ε) was developed by Childs et al.[8] Since the HHL algorithm maintains its logarithmic scaling in N only for sparse or low-rank matrices, Wossnig et al.[9] extended the HHL algorithm based on a quantum singular value estimation technique and provided a linear system algorithm for dense matrices which runs in O(√N log(N)κ²) time, compared to the O(log(N)κ²) of the standard HHL algorithm.
An important factor in the performance of the matrix inversion algorithm is the condition number κ, the ratio of A's largest to smallest eigenvalue. As the condition number increases, A becomes closer to a matrix which cannot be inverted and the solution vector becomes less stable, so finding it with gradient-descent methods such as the conjugate gradient method becomes harder. The algorithm assumes that all singular values of the matrix A lie between 1/κ and 1, in which case the claimed run-time proportional to κ² will be achieved. Therefore, the speedup over classical algorithms is increased further when κ is poly-logarithmic in N.[1]
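The condition number described above is straightforward to compute for a Hermitian matrix, as the sketch below does for an illustrative example, along with the rescaling that places the spectrum in [1/κ, 1] as the analysis assumes.

```python
import numpy as np

# The condition number kappa of a Hermitian A is the ratio of its largest
# to smallest eigenvalue magnitude; HHL's cost grows with kappa.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])            # small illustrative example

eigs = np.abs(np.linalg.eigvalsh(A))
kappa = eigs.max() / eigs.min()

# Rescale A so its spectrum lies in [1/kappa, 1], as the analysis assumes.
A_scaled = A / eigs.max()
```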
If the run-time of the algorithm were made poly-logarithmic in κ, then problems solvable on n qubits could be solved in poly(n) time, causing the complexity class BQP to be equal to PSPACE.[1]
Performing the Hamiltonian simulation, which is the dominant source of error, is done by simulating e^{iAt}. Assuming that A is s-sparse, this can be done with an error bounded by a constant ε, which will translate into an additive error of ε in the output state |x⟩.
The phase estimation step errs by O(1/t₀) in estimating λ, which translates into a relative error of O(1/(λt₀)) in λ⁻¹. If λ ≥ 1/κ, taking t₀ = O(κ/ε) induces a final error of ε. This requires that the overall run-time be increased by a factor proportional to κ/ε to minimize error.
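The step from additive eigenvalue error to relative error in λ⁻¹ can be checked numerically. The sketch below uses assumed values for a small eigenvalue λ and an additive perturbation δ, and confirms the first-order relation |Δ(1/λ)|/(1/λ) ≈ δ/λ that drives the κ dependence.

```python
import numpy as np

# An additive error delta in an eigenvalue lambda becomes a relative error
# of roughly delta/lambda in 1/lambda, which is why small eigenvalues
# (i.e. large kappa) demand finer phase estimation.
lam = 0.01                             # a small eigenvalue (assumption)
delta = 1e-4                           # additive phase-estimation error (assumption)

rel_error = abs(1.0 / (lam + delta) - 1.0 / lam) * lam   # |delta(1/lam)| / (1/lam)
predicted = delta / lam                # first-order prediction delta/lambda
```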
While there does not yet exist a quantum computer that can truly offer a speedup over a classical computer, implementation of a "proof of concept" remains an important milestone in the development of a new quantum algorithm. Demonstrating the quantum algorithm for linear systems of equations remained a challenge for years after its proposal until 2013 when it was demonstrated by Cai et al., Barz et al. and Pan et al. in parallel.
Published in Physical Review Letters 110, 230501 (2013), Cai et al. reported an experimental demonstration of the simplest meaningful instance of this algorithm, that is, solving 2×2 linear equations for various input vectors. The quantum circuit was optimized and compiled into a linear optical network with four photonic quantum bits (qubits) and four controlled logic gates, which was used to coherently implement every subroutine of the algorithm. For various input vectors, the quantum computer gave solutions for the linear equations with reasonably high precision, with fidelities ranging from 0.825 to 0.993.[10]
On February 5, 2013, Stefanie Barz and co-workers demonstrated the quantum algorithm for linear systems of equations on a photonic quantum computing architecture. This implementation used two consecutive entangling gates on the same pair of polarization-encoded qubits. Two separately controlled NOT gates were realized where the successful operation of the first was heralded by a measurement of two ancillary photons. Barz et al. found that the fidelity in the obtained output state ranged from 64.7% to 98.1% due to the influence of higher-order emissions from spontaneous parametric down-conversion.[4]
On February 8, 2013, Pan et al. reported a proof-of-concept experimental demonstration of the quantum algorithm using a 4-qubit nuclear magnetic resonance quantum information processor. The implementation was tested using simple linear systems of only 2 variables. Across three experiments they obtained the solution vector with over 96% fidelity.[5]
Another experimental demonstration using NMR for solving an 8×8 system was reported by Wen et al.[11] in 2018 using the algorithm developed by Subaşı et al.[12]
Several concrete applications of the HHL algorithm have been proposed, which analyze the algorithm's input assumptions and output guarantees for particular problems.
Clader et al. gave a version of the HHL algorithm which allows a preconditioner to be included, which can be used to improve the dependence on the condition number. The algorithm was applied to solving for the radar cross-section of a complex shape, one of the first applications of the HHL algorithm to a concrete problem.[13]
Berry proposed an algorithm for solving linear, time-dependent initial value problems using the HHL algorithm.[14]
Two groups proposed[15] efficient algorithms for numerically integrating dissipative nonlinear ordinary differential equations. Liu et al.[16] utilized Carleman linearization for second order equations and Lloyd et al.[17] used a mean field linearization method inspired by the nonlinear Schrödinger equation for general order nonlinearities. The resulting linear equations are solved using quantum algorithms for linear differential equations.
The finite element method approximates linear partial differential equations using large systems of linear equations. Montanaro and Pallister demonstrate that the HHL algorithm can achieve a polynomial quantum speedup for the resulting linear systems. Exponential speedups are not expected for problems in a fixed dimension or for which the solution meets certain smoothness conditions, such as certain high-order problems in many-body dynamics, or some problems in computational finance.[18]
Wiebe et al. gave a quantum algorithm to determine the quality of a least-squares fit. The optimal coefficients cannot be calculated directly from the output of the quantum algorithm, but the algorithm still outputs the optimal least-squares error.[19]
Machine learning is the study of systems that can identify trends in data. Tasks in machine learning frequently involve manipulating and classifying a large volume of data in high-dimensional vector spaces. The runtime of classical machine learning algorithms is limited by a polynomial dependence on both the volume of data and the dimensions of the space. Quantum computers are capable of manipulating high-dimensional vectors using tensor product spaces and thus are well-suited platforms for machine learning algorithms.[20]
The HHL algorithm has been applied to support vector machines. Rebentrost et al. show that a quantum support vector machine can be used for classification and achieve an exponential speedup over classical computers.[21]
In June 2018, Zhao et al. developed a quantum algorithm for Bayesian training of deep neural networks with an exponential speedup over classical training due to the use of the HHL algorithm.[6] They also gave the first implementation of the HHL algorithm to be run on cloud-based quantum computers.[22]
Proposals for using HHL in finance include solving partial differential equations for the Black–Scholes equation and determining portfolio optimization via a Markowitz solution.[23]
The linearized coupled cluster method in quantum chemistry can be recast as a system of linear equations. In 2023, Baskaran et al. proposed the use of HHL algorithm to solve the resulting linear systems.[24] The number of state register qubits in the quantum algorithm is the logarithm of the number of excitations, offering an exponential reduction in the number of required qubits when compared to using the variational quantum eigensolver or quantum phase estimation.
Recognizing the importance of the HHL algorithm in the field of quantum machine learning, Scott Aaronson[25] analyzes the caveats and factors that could limit the actual quantum advantage of the algorithm.