SciPost Submission Page

A causality-based divide-and-conquer algorithm for nonequilibrium Green's function calculations with quantics tensor trains

by Ken Inayoshi, Maksymilian Środa, Anna Kauch, Philipp Werner, Hiroshi Shinaoka

Submission summary

Authors (as registered SciPost users): Ken Inayoshi · Maksymilian Środa
Submission information
Preprint Link: https://arxiv.org/abs/2509.15028v2  (pdf)
Date submitted: Sept. 25, 2025, 12:28 p.m.
Submitted by: Ken Inayoshi
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties:
  • Condensed Matter Physics - Theory
  • Condensed Matter Physics - Computational
Approaches: Theoretical, Computational
Disclosure of Generative AI use

The author(s) disclose that the following generative AI tools have been used in the preparation of this submission:

In the main text, GitHub Copilot in VS Code (ChatGPT-4.1) was used for spelling and grammar checking.

Abstract

We propose a causality-based divide-and-conquer algorithm for nonequilibrium Green's function calculations with quantics tensor trains. This algorithm enables stable and efficient extensions of the simulated time domain by exploiting the causality of Green's functions. We apply this approach within the framework of nonequilibrium dynamical mean-field theory to the simulation of quench dynamics in symmetry-broken phases, where long-time simulations are often required to capture slow relaxation dynamics. We demonstrate that our algorithm allows us to extend the simulated time domain without a significant increase in the cost of storing the Green's function.

Author indications on fulfilling journal expectations

  • Provide a novel and synergetic link between different research areas.
  • Open a new pathway in an existing or a new research direction, with clear potential for multi-pronged follow-up work
  • Detail a groundbreaking theoretical/experimental/computational discovery
  • Present a breakthrough on a previously-identified and long-standing research stumbling block
Current status:
In refereeing

Reports on this Submission

Report #2 by Anonymous (Referee 2) on 2025-11-26 (Invited Report)

Strengths

1- Feasible and useful extension of the QTT strategy for solving the KBE
2- Accurate benchmarks against time-stepping methods

Weaknesses

The paper lacks a discussion of the versatility of the QTT strategy in addressing more realistic Hamiltonians

Report

I have read with interest the paper entitled "A causality-based divide-and-conquer algorithm for nonequilibrium Green’s function calculations with quantics tensor trains" by Inayoshi et al.

The work presents an extremely useful advance in the quantics tensor trains (QTT) strategy for solving the Kadanoff–Baym equations. In essence, the authors have incorporated causality into the original QTT approach. This achievement allows them to extend the propagation time without the need to re-converge the Dyson equation over the entire extended domain.

The authors provide a thorough numerical analysis of convergence with respect to the number of iterations, successful benchmarks against the time-stepping method -- as implemented in the NESSi code -- and evidence that both particle number and energy are conserved during time propagation.

The paper is very well written, and I can recommend it for publication as is. The authors may, however, wish to include a discussion on scaling and memory requirements for simulating realistic systems (e.g., k-dependent Green’s functions and self-energies, multiple bands, long-range interactions etc.). Such a discussion would both broaden the scope of the QTT methodology and highlight the main challenges that must be addressed in order to make the KBE a competitive ab initio method. Related to this, the authors may also wish to discuss how QTT performs for other self-energies such as GW.

Requested changes

See report

Recommendation

Publish (surpasses expectations and criteria for this Journal; among top 10%)

  • validity: high
  • significance: high
  • originality: top
  • clarity: top
  • formatting: excellent
  • grammar: excellent

Report #1 by Anonymous (Referee 1) on 2025-11-19 (Invited Report)

Strengths

1. This paper improves a previous method for finding nonequilibrium Green’s functions with quantics tensor trains by using a causality-based divide-and-conquer algorithm, which in practice corresponds to solving the Green’s function globally up to a given time $t_{max}$, then fixing the Green’s function inside this time domain, extending the time domain by a small step $\Delta t$, and updating only the part of the Green’s function corresponding to this $\Delta t$.
2. They use this method to find the non-equilibrium DMFT Green’s functions of the Hubbard model in the AFM phase and compare their results with the conventional approach implemented with NESSi.
3. They compare the data size of the Green’s functions found by conventional methods with those found by QTT methods, finding an improvement of almost three orders of magnitude when compressing the data with QTTs.
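The causality argument behind the divide-and-conquer extension can be illustrated with a toy discretized Dyson-like equation, in which causal (retarded) kernels become lower-triangular matrices: the converged solution on the initial time block is then exactly unaffected by extending the grid. This is a minimal sketch under that assumption, not the authors' QTT implementation:

```python
import numpy as np

def dyson_fixed_point(g, sigma, tol=1e-12, max_iter=500):
    """Solve G = g + g @ sigma @ G by fixed-point iteration."""
    G = g.copy()
    for _ in range(max_iter):
        G_new = g + g @ sigma @ G
        if np.max(np.abs(G_new - G)) < tol:
            return G_new
        G = G_new
    return G

rng = np.random.default_rng(0)
N, M = 8, 4                      # initial grid size and extension
# Lower-triangular matrices mimic causal kernels: entry (i, j)
# couples time t_i only to earlier (or equal) times t_j.
make_causal = lambda n: np.tril(0.1 * rng.standard_normal((n, n)))
g_big = make_causal(N + M)
s_big = make_causal(N + M)

# Step 1: converge on the small domain [0, N).
G_small = dyson_fixed_point(g_big[:N, :N], s_big[:N, :N])
# Step 2: converge on the extended domain [0, N + M).
G_big = dyson_fixed_point(g_big, s_big)
# Causality: the old block is unchanged, so the small-domain solution can be
# frozen and only the newly added block needs updating when the domain grows.
print(np.allclose(G_big[:N, :N], G_small))   # True
```

The block-lower-triangular structure guarantees that the top-left block of the extended solution coincides with the small-domain solution, which is the property the divide-and-conquer algorithm exploits.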

Weaknesses

  1. Although they estimate the runtime memory (or the number of operations) that the QTT method needs at each iteration (it scales as $\mathcal{O}(L D^3)$), they never discuss or estimate the approximate number of operations that traditional methods require to achieve the final result. It would be interesting to know how the two methods compare in the number of operations.
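The comparison the referee asks for amounts to back-of-the-envelope arithmetic. Conventional Kadanoff-Baym time stepping scales cubically in the number of time steps $N_t$ (from the memory integrals), while the report quotes $\mathcal{O}(L D^3)$ per QTT iteration; all concrete numbers below ($N_t$, $L$, $D$, iteration count) are illustrative assumptions, not values from the paper:

```python
# Back-of-the-envelope operation counts.
# Conventional KBE time stepping: O(N_t**3) in the number of time steps N_t.
# QTT solver (per the report): O(L * D**3) per iteration.
# N_t, L, D, and n_iter below are assumed for illustration only.

def conventional_ops(N_t):
    return N_t ** 3

def qtt_ops(L, D, n_iter):
    return L * D ** 3 * n_iter

N_t = 2 ** 12                 # 4096 time steps on a dense grid
L, D = 24, 64                 # assumed QTT length and bond dimension
n_iter = 100                  # assumed number of sweeps to converge

print(f"conventional ~{conventional_ops(N_t):.1e} ops, "
      f"QTT ~{qtt_ops(L, D, n_iter):.1e} ops")
```

With these (assumed) numbers the dense method needs about $7 \times 10^{10}$ operations versus roughly $6 \times 10^{8}$ for the QTT sweeps, which is the kind of order-of-magnitude statement the referee requests.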

Report

In this work, the authors use a method based on quantics tensor trains to simulate quench dynamics in symmetry-broken phases with DMFT. This method improves on the previous method published by some of the authors [M. Murray, H. Shinaoka and P. Werner (2024), arXiv:2412.14032]: by using a causality-based divide-and-conquer algorithm, they are able to avoid the “excessive number of iterations” required for the earlier method to converge. Furthermore, they show results for the quench dynamics when the Hubbard interaction U is suddenly changed from U=2 to U=1.5, and show that the data size of the calculated Green’s functions is two to three orders of magnitude smaller than with conventional state-of-the-art methods (NESSi). The data size does not seem to scale with $t_{max}$, although they report that for very long times, energy and particle-number conservation start to degrade.
All in all, I would recommend this paper for publication after a couple of minor issues are addressed.

Requested changes

Minor changes
1. The addition of a discussion comparing the number of operations required by conventional methods and by the QTT block time-stepping method to find the Green’s functions. It does not need to be exhaustive; I would be happy with a number similar to the one you give for the QTT block time-stepping method, where you report the approximate number of operations required per iteration.
Very small changes in the manuscript
2. In the first paragraph of the introduction, you discuss the data size and computational cost scaling with the total number of time steps, but at the end you say that it is difficult to simulate nonequilibrium dynamics in “large lattice systems and long times” without explaining why large lattice systems are difficult or giving an idea of how the memory and operation costs scale with $N_x$.
3. In Section 2.1, when you talk about the bond dimension: typically one would like $D \ll 2^R$, so that the data size is $\ll \mathcal{O}(4L 2^{2R})$.
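The point in item 3 is simple arithmetic. Assuming a quantics tensor train of $L$ cores with local dimension 4 (the two time axes interleaved) and uniform bond dimension $D$, the storage is at most $4LD^2$ elements, versus $2^{2R}$ for the dense two-time grid; with $D = 2^R$ the tensor train degenerates to $\mathcal{O}(4L 2^{2R})$, which is why $D \ll 2^R$ is needed for compression. The core count, interleaving convention, and numbers below are illustrative assumptions, not values from the paper:

```python
# Storage estimate for a quantics tensor train (QTT) representing a
# two-time function on a 2**R x 2**R grid. Each of the L cores is a
# 4 x D x D tensor, so the QTT holds at most 4 * L * D**2 elements,
# while the dense grid holds 2**(2*R). Numbers are illustrative only.

def qtt_elements(L, D):
    return 4 * L * D ** 2

def dense_elements(R):
    return 2 ** (2 * R)

R = 12          # 4096 time points per axis
L = R           # one core per quantics digit (interleaved convention, assumed)
D = 64          # a modest bond dimension, D << 2**R

ratio = dense_elements(R) / qtt_elements(L, D)
print(f"dense: {dense_elements(R):.2e}, QTT: {qtt_elements(L, D):.2e}, "
      f"compression: {ratio:.0f}x")
```

Even this modest example already compresses by nearly two orders of magnitude; the gain grows rapidly with $R$ as long as $D$ stays bounded.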

Recommendation

Ask for minor revision

  • validity: high
  • significance: good
  • originality: good
  • clarity: high
  • formatting: excellent
  • grammar: excellent
