SciPost Submission Page

Solving Systems of Linear Equations: HHL from a Tensor Networks Perspective

by Alejandro Mata Ali, Iñigo Perez Delgado, Marina Ristol Roura, Aitor Moreno Fdez. de Leceta, Sebastián Vidal Romero

This is not the latest submitted version.

Submission summary

Authors (as registered SciPost users): Alejandro Mata Ali
Submission information
Preprint Link: scipost_202507_00016v1  (pdf)
Code repository: https://github.com/DOKOS-TAYOS/Tensor_Networks_HHL_algorithm
Data repository: https://github.com/DOKOS-TAYOS/Tensor_Networks_HHL_algorithm
Date submitted: July 5, 2025, 4:23 p.m.
Submitted by: Alejandro Mata Ali
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties:
  • Quantum Physics
Approach: Computational

Abstract

We present a new approach for solving systems of linear equations with tensor networks, based on the quantum HHL algorithm. We first develop a novel HHL in the qudit formalism, the generalization of qubits, and then transform its operations into an equivalent classical HHL, taking advantage of the non-unitary operations that tensor networks can apply. The main novelty of this proposal is to perform a classical simulation of the HHL that is as efficient as possible, in order to benchmark the algorithm's steps according to its input parameters and the input matrix. We apply this algorithm to three simulation problems, comparing it with an exact inversion algorithm, and we compare its performance against an implementation of the original HHL simulated in the Qiskit framework, providing both codes. Our results show that our approach can achieve a promising performance in computational efficiency to simulate HHL process without quantum noise, providing a lower bound.

Current status:
Has been resubmitted

Reports on this Submission

Report #2 by Anonymous (Referee 2) on 2025-09-01 (Invited Report)

Strengths

  1. Very well written.
  2. Codes are made available to all.
  3. Interesting ideas.

Weaknesses

The algorithm does not scale well relative to known approaches like conjugate gradient.

Report

The authors present a quantum-inspired classical algorithm to solve systems of linear equations based on qudits and tensor networks. The authors extend the qubit HHL algorithm to a qudit HHL by proposing that one can use 1 qudit for the state register and 1 or more qudits for the clock register, with the aim of reducing the quantum resources required for the circuit. This framework uses ideas from Ref. 8, and relies on qudit technology being available in the future. Then, they convert the entire qudit circuit into a tensor network (TN HHL) in order to extract the vector x efficiently.

The work is interesting, it is very well written and I enjoyed reading it (especially the first two sections; the complexity parts are also nicely discussed, for example, lines 220-226), and it is very nice that the code is available to everyone for reproducibility, but I am not sure how the work advances the field of solving systems of linear equations efficiently (the authors of course state this in the Introduction too). For example, Table I suggests that the complexity could even go as kappa times N^3 or more for the inversion of the matrix A, which is more expensive than Jacobi or conjugate gradient (the authors are again very honest and state this explicitly in Section 5.1). Very unfortunately, I cannot recommend publication in SciPost Physics Lecture Notes simply due to scaling.
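For concreteness, the classical baselines invoked here (direct O(N^3) inversion and conjugate gradient) can be sketched in a few lines of Python; this is a minimal illustration on a stand-in SPD matrix, not one of the paper's problems:

```python
import numpy as np
from scipy.sparse.linalg import cg

# Stand-in problem: conjugate gradient requires a symmetric
# positive definite A, so we build one explicitly.
rng = np.random.default_rng(0)
M = rng.normal(size=(16, 16))
A = M @ M.T + 16 * np.eye(16)
b = rng.normal(size=16)

x_direct = np.linalg.solve(A, b)   # direct solve, O(N^3)
x_cg, info = cg(A, b)              # CG, cheap per iteration
print(info == 0, np.linalg.norm(x_cg - x_direct))
```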

Requested changes

A few optional, very minor comments:
-Introduction, paragraph 1: Isn't kappa = lambda_max/|lambda_min|? (See the sketch after this list.)
-Introduction, paragraph 2: Wouldn't the cost of calculating an expectation value using, say, the Hadamard test, add further to the O(log(N) s^2 kappa^2 / epsilon)? Isn't the O(log(N) s^2 kappa^2 / epsilon) cost only for the HHL protocol itself?
-Line 81: What is n in n^3?
-Line 82: Maybe providing a reference would be helpful?
-Line 86: How does the number of iterations scale with system size for the Jacobi method? (Also illustrated in the sketch after this list.)
-Line 88: 'definite positive' --> 'positive definite'?
-TN HHL vs HHL section: Perhaps the authors can add that HHL scales much better in principle, as it is a quantum algorithm with the log(N) benefit over classical approaches that scale as N or more?
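Two of the comments above (the definition of kappa and the Jacobi iteration count) can be made concrete with a short sketch; the helper names here are illustrative, not taken from the paper:

```python
import numpy as np

def condition_number(A):
    """Spectral condition number kappa = |lambda|_max / |lambda|_min,
    the quantity discussed in the first comment above."""
    lam = np.abs(np.linalg.eigvals(A))
    return lam.max() / lam.min()

def jacobi(A, b, iters=1000):
    """Plain Jacobi iteration x <- D^{-1} (b - R x); its iteration
    count to reach a fixed accuracy grows with the conditioning
    of the problem."""
    D = np.diag(A)
    R = A - np.diag(D)
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x
```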

Recommendation

Reject

  • validity: ok
  • significance: low
  • originality: good
  • clarity: top
  • formatting: excellent
  • grammar: excellent

Author:  Alejandro Mata Ali  on 2025-09-02  [id 5769]

(in reply to Report 2 on 2025-09-01)
Category: remark, answer to question

The main point of the work is to provide an efficient way to simulate the best possible performance of the HHL, not to give an algorithm that improves the resolution of systems of linear equations; perhaps I should clarify this better in future versions. This work is important because it provides a faster way to test and benchmark the HHL algorithm and, given the results, shows that the HHL algorithm is not a good algorithm to use straightforwardly, due to its low precision in practice. If an algorithm that should always perform at least as well as the HHL without errors is still bad, the original algorithm cannot be accurate in its real applications. Perhaps I should include more experiments on the dependence on the hyperparameters to show the bad performance of the original algorithm. Another important point is that the algorithm is far faster than the direct implementation in Qiskit, with backends such as the Aer Simulator. This is the main motivation for the work. So the comparison must be 'This algorithm is better than the state-of-the-art for simulating the HHL', not 'This algorithm is better than the state-of-the-art for solving systems of linear equations'.

For the other comments:
- Yes, I will correct the absolute value.
- Yes, a factor 1/nu must be included.
- It must be N; I will correct it.
- I will include it.
- To the best of our knowledge, it is typically quadratic in N, but it depends on the problem.
- Yes, I will change it.
- I will include it.

Report #1 by Anonymous (Referee 1) on 2025-08-19 (Invited Report)

Disclosure of Generative AI use

The referee discloses that the following generative AI tools have been used in the preparation of this report:

Asked GPT-5 specific questions about HHL and tensor network simulation of quantum algorithms. However, AI was not used to assist in the evaluation of the paper, nor in the writing of the report.

Strengths

1- Clearly written, the text flows smoothly

2- Honest account of what has been studied, how, and what some of the specific outcomes are (but lacking vision, see weaknesses)

Weaknesses

1- No clear message: in which context should the proposed algorithm be used? Which specific alternatives should it be preferred over?

2- Some content appears irrelevant (e.g. details of a classical algorithm against which theirs hasn't been benchmarked, redundant figures)

3- Comparison points are black boxes (Tensorflow, Qiskit Aer)

Report

In this paper, the authors consider a qudit version of the HHL algorithm, a key algorithm in quantum computing to solve linear systems of equations of the form Ax=b, which they turn into a tensor network simulation dubbed TN HHL.

It is stressed that this dequantised version of HHL, being comparatively slow, does not aim to compete with efficient classical solvers such as the Conjugate Gradient method.

Using TN HHL, the authors carry out matrix inversion on 3 example problems (forced H.O., forced damped H.O. and heat equation) in their discretised versions. They compare the resulting x (in terms of relative RMS) with that obtained from the exact inversion of A (Tensorflow, PyTorch). They find problem-dependent values ranging from about 10^-5 to about 10^-3, at a far longer simulation time (typically 10^3 times longer).
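A minimal sketch of this exact-inversion baseline and of the relative-RMS metric, assuming a discretised 1D heat-equation matrix as a stand-in (the paper's exact setups may differ):

```python
import torch

# Stand-in discretised problem: 1D Laplacian (heat-equation) matrix.
N = 64
A = (2 * torch.eye(N)
     - torch.diag(torch.ones(N - 1), 1)
     - torch.diag(torch.ones(N - 1), -1))
b = torch.ones(N)

x_exact = torch.linalg.solve(A, b)   # the 'exact inversion' reference

def relative_rms(x, x_ref):
    # Relative RMS error, as in the 10^-5 .. 10^-3 figures quoted above.
    return (torch.linalg.norm(x - x_ref) / torch.linalg.norm(x_ref)).item()
```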

Then, they compare the results of TN HHL with HHL results obtained using Qiskit's Aer simulator. For this, they use random 16x16 matrices. They observe on average a slight increase in the quality of the solution using TN HHL rather than plain HHL.
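A plausible way to generate such random 16x16 instances, assuming (as HHL requires) a Hermitian A and a normalised b; the exact sampling used in the paper may differ:

```python
import numpy as np

rng = np.random.default_rng(42)
M = rng.normal(size=(16, 16)) + 1j * rng.normal(size=(16, 16))
A = (M + M.conj().T) / 2        # Hermitian by construction
b = rng.normal(size=16)
b /= np.linalg.norm(b)          # HHL prepares the state |b>
```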

I find the article to lack a clear direction and consistency. It seems that the authors simply do matrix inversion by contracting tensor networks in a way inspired by HHL. Errors arise because, just as in HHL, the method is tuned by the hyperparameters mu (number of eigenvalues) and tau (scaling parameter), but the effect of these on the accuracy of the solution is not studied - only the best results are provided.

Here are more specific comments:
1- I feel like Part II shouldn't be a part in its own right, as porting HHL to qudits seems straightforward (which translates into, e.g., Fig. 2 being redundant with Fig. 1).
2- The choices of physics examples to apply the method to are not explained (why forced, then forced+damped H.O., for instance? What is to be expected?).
3- The information content is confusing: the CG method to solve for x in Ax=b is thoroughly introduced in II, and Table I compares TN HHL with CG, but the experiments are made against 'exact matrix inversion' instead (without any reference to what the actual algorithm running behind the PyTorch and Tensorflow inverse methods is); Fig. 7 is, I believe, another visualisation of the data reported in Fig. 6; etc.
4- Sometimes matrix inversion is done with PyTorch, other times with Tensorflow, without any mention of the reason behind this change in benchmarking tools.
5- Some quantities are not introduced, such as c in Eq. (17) and the hyperparameters C and t referenced in VI.D.
6- Many statements are qualitative and vague; see for instance the last sentence of the intro: 'Our results show that our approach can achieve a promising performance in computational efficiency to simulate HHL process without quantum noise, providing a lower bound.'
7- In Sec. VI.D, results are benchmarked against HHL run on Qiskit's Aer simulator, but the authors do not specify the backend. I assume this is the default, the statevector simulator. Yet there exists a tensor network backend in Qiskit's Aer. How does the specific TN HHL presented here compare to that one?

Requested changes

I believe the study is too preliminary for publication, and I'd encourage further investigation for a message to emerge.

If the goal is to advocate for the use of a TN HHL simulator to simulate the performance of HHL in a noise-free setting, then, since HHL does not aim at providing x itself, I believe the authors should aim for results in terms of operations on the solution.

As far as I know, a great strength of TN methods in the simulation of quantum algorithms lies in the possibility to mirror the effect of noise in the hardware by adapting the bond dimension (see eg https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuantum.4.020304). This is advantageous since usual noisy simulation (eg via Lindblad) is very demanding in terms of resources.
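The mechanism referred to here is bond-dimension truncation; a minimal numpy sketch of the SVD step behind it:

```python
import numpy as np

def truncate_bond(theta, chi):
    """Truncate a two-site tensor (reshaped to a matrix) to bond
    dimension chi; lowering chi discards small singular values,
    which is the knob used to mimic hardware noise."""
    U, S, Vh = np.linalg.svd(theta, full_matrices=False)
    k = min(chi, S.size)
    return U[:, :k], S[:k], Vh[:k, :]
```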

The rationale behind using a TN method to simulate the noise-free behaviour of the QC with regard to the solution x, on the other hand, is not obvious to me. If there are specific types of errors that a noise-free quantum computation displays and that the TN simulation is able to capture at a lower simulation cost than statevector simulation, then it would be relevant to turn to TNs to simulate HHL. This is, however, neither argued nor shown in the paper.

On a side note, I spotted the following typos/English mistakes:
- I believe the construction 'Ax=b, being A an invertible matrix' is grammatically incorrect and should be replaced, e.g., by 'Ax=b, A being an invertible matrix'.
- 'eficient' in the caption of Fig. 3 (a) is missing a second f.
- 'base' instead of 'basis' below Eq. (4) on p. 2.
- 'Ssec' in VI.B.

Recommendation

Ask for major revision

  • validity: low
  • significance: -
  • originality: -
  • clarity: low
  • formatting: excellent
  • grammar: good

Author:  Alejandro Mata Ali  on 2025-09-02  [id 5770]

(in reply to Report 1 on 2025-08-19)
Category: remark, answer to question, objection

The main point of the algorithm is to provide an efficient way to benchmark the HHL algorithm, because typical software tools and algorithms are slow at this task. For example, the statevector and MPS backends in Qiskit Aer are far slower, due to the large number of high-dimensional operations. The tensor network backend often simply fails.
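For reference, these Aer methods are selected explicitly as follows (a minimal sketch with qiskit-aer; the default method otherwise depends on the circuit):

```python
from qiskit_aer import AerSimulator

sv_backend = AerSimulator(method="statevector")            # exact statevector simulation
mps_backend = AerSimulator(method="matrix_product_state")  # MPS backend
# Aer also exposes method="tensor_network" (GPU-based in recent releases),
# the backend said above to fail frequently.
```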

For the specific comments:
1. Yes, the qudit version does not need a subsection; it may be included as part of the HHL explanation.
2. These examples are chosen to test the algorithm on toy problems whose matrix structures are similar to those of the real problems the HHL is intended to solve, such as physics simulation. The three simplest cases are these.
3. PyTorch uses the LU factorization for the inversion of matrices (see the sketch after this list). The CG is presented and compared to show the best known algorithm, but it is not general (as shown in the explanation), so it cannot be applied to these inversion problems. And yes, Fig. 7 is another visualization of Fig. 6; perhaps I should merge them.
4. The Tensorflow mention is a typo left over from previous versions; all experiments are made with PyTorch, because we do not use a GPU and it is not needed.
5. The c in Eq. (17) is a typo; it must be mu. C and t are part of the differential equation definition: they are the amplitude of the force and the time.
6. It is a lower bound for its error. This algorithm is always equal to or better than the original one (since it works in the no-errors regime and avoids the post-selection). If this algorithm works badly, the HHL must always work even worse.
7. Yes, they are benchmarked against statevector, due to the first points of this reply.
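A sketch of the LU route mentioned in point 3, using PyTorch's factor-then-solve calls (torch.linalg.solve performs an equivalent factorisation internally):

```python
import torch

A = torch.randn(16, 16, dtype=torch.float64)
b = torch.randn(16, 1, dtype=torch.float64)

LU, pivots = torch.linalg.lu_factor(A)    # PA = LU factorisation
x = torch.linalg.lu_solve(LU, pivots, b)  # solve using the factors
assert torch.allclose(A @ x, b)
```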

For the requested changes:
- The point is to benchmark the state provided by the HHL, because everything that follows in the algorithm depends on it.
- The reason for using noise-free simulation is that, if the algorithm does not perform well without noise, it will be worse with it. Noise simulations with tensor networks are not that straightforward, and most of the time they are not so interesting. In this case, a noisy simulation would not be the best idea, because the main HHL does not perform well even without noise. Our simulation requires fewer resources than statevector for noise-free simulations, and this is the main focus of interest.
- The typos and mistakes will be corrected.
