SciPost Submission Page

The Cytnx Library for Tensor Networks

by Kai-Hsin Wu, Chang-Teng Lin, Ke Hsu, Hao-Ti Hung, Manuel Schneider, Chia-Min Chung, Ying-Jer Kao, Pochung Chen

Submission summary

Authors (as registered SciPost users): Pochung Chen
Submission information
Preprint Link: scipost_202312_00013v1  (pdf)
Code repository: https://github.com/Cytnx-dev/Cytnx
Date submitted: 2023-12-06 06:36
Submitted by: Chen, Pochung
Submitted to: SciPost Physics Codebases
Ontological classification
Academic field: Physics
Specialties:
  • Condensed Matter Physics - Computational
Approaches: Theoretical, Computational

Abstract

We introduce a tensor network library designed for classical and quantum physics simulations called Cytnx (pronounced as /sci-tens/). This library provides an almost identical interface and syntax for both C++ and Python, allowing users to switch effortlessly between the two languages. Aiming at a quick learning process for new users of tensor network algorithms, the interfaces resemble popular Python scientific libraries like NumPy, SciPy, and PyTorch. Not only can multiple global Abelian symmetries be easily defined and implemented, but Cytnx also provides a new tool called Network that allows users to store large tensor networks and perform tensor network contractions in an optimal order automatically. With the integration of cuQuantum, tensor calculations can also be executed efficiently on GPUs. We present benchmark results for tensor operations on both devices, CPU and GPU. We also discuss features and higher-level interfaces to be added in the future.

Current status:
Awaiting resubmission

Reports on this Submission

Anonymous Report 2 on 2024-4-8 (Invited Report)

Strengths

1 - accessible presentation, quick learning curve

2 - flexibility of GPU implementation at essentially zero-cost

3 - similarity to the widespread notation of other scientific libraries, hopefully easing compatibility

4 - presentation of benchmarks

Weaknesses

1 - scarce discussion of the relation to other works, especially when it comes to automatic selection of contraction order

2 - syntax is in some places slippery, with commands differing only slightly and thus being prone to generate bugs

3 - benchmarks are a bit limited, and might be extended to other typical tensor network cases (and should be presented for both programming languages)

Report

The Authors present here a tensor network library (Cytnx) developed in Python and C++ that aims at a quick learning process for new users. It resembles other popular libraries in most of its syntax, and provides a smooth integration of GPU calculations. It leverages established formalisms and concepts in tensor networks (e.g., the definition of multiple Abelian symmetries), and promises to be extended soon with others (e.g., the treatment of fermions and of non-Abelian groups, or support for automatic differentiation). It also integrates an automatic evaluation of the optimal contraction order for large tensor networks. Benchmarks are offered for a couple of typical operations in the most common tensor network algorithm, namely the density matrix renormalization group over matrix product states.

Overall, the Manuscript presents itself as a detailed User's guide, with valuable code snippets that smooth the learning curve and offer a quick hands-on introduction to everyone. The overall flow of reading feels quite natural, although certain things might be defined in a different order / not anticipated so much -- but that is certainly only a matter of personal taste. It falls, however, a bit short when it comes to putting things in the broader context of other existing works (see below for some suggestions).

In general, the Library seems to allow for a lot of smart syntax to ease the life of programmers, e.g., a flexible naming of the indices and the possibility of forgetting about their precise position for most uses. In some places, though, the command naming seems at first sight a bit slippery (see below). Some additional benchmarks are desirable, e.g., for typical contractions in two-dimensional algorithms, and for the automated selection of the contraction order.

Summarising, while it certainly fits the criteria for SciPost Physics Codebases and has the potential to belong to its higher end, the Manuscript could still be improved / enriched before publication. I do not see major hindrances that could make the process other than smooth: The work presented here is very good, and would simply profit from a polishing round. Let me list some more specific points below, in the hope that the Authors will see my point and could perhaps spot further knobs to be tuned.

Incidentally, I sincerely apologise for the delay incurred in preparing this Report, which was not in any way related to the quality of the Manuscript.

Requested changes

1) Concerning other works that discuss the implementation of tensor network contractions and the conservation of quantum numbers, I find it remarkable that works by Montangero, Weichselbaum, and others (some of which appeared on SciPost) are not mentioned. The same applies, even more urgently, when discussing a feature like the automated selection of the optimal contraction sequence: a work by Pfeifer, Haegeman and Verstraete bearing almost this exact title should be mentioned.

2) Minor: The naming UniTensor suggests at first reading some relation to unitarity, which is not at all the case: what is the reason behind the choice? Almost surely it is a legacy of the Uni10 library, which is however not discussed (sorry if I accidentally overlooked it).

3) In Listing 2 (and elsewhere, too), why are such similar names like “Contract” and “Contracts” chosen? A typo is extremely likely to occur, causing an easily avoidable bug. Is there any control in place to prevent this from happening? (The same applies later to “.relabel_” and “.relabels_”, just to cite one example.) By the way, at this stage of the presentation, it is not clear how “Contracts” will decide on the contraction order.
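
For concreteness, a minimal sketch of the two call patterns as I understand them from the Manuscript (the constructor keywords and exact signatures are my paraphrase of the listings, not verified against the library):

    import cytnx

    # Three small UniTensors with labelled bonds (syntax paraphrased).
    A = cytnx.UniTensor(cytnx.ones([2, 3]), labels=["a", "b"])
    B = cytnx.UniTensor(cytnx.ones([3, 4]), labels=["b", "c"])
    C = cytnx.UniTensor(cytnx.ones([4, 2]), labels=["c", "d"])

    AB = cytnx.Contract(A, B)         # pairwise: contracts the shared label "b"
    ABC = cytnx.Contracts([A, B, C])  # list version: one stray "s" away from the above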

4) The remark “One typically needs to first find an optimal contraction order before implementing the contractions” should ideally be accompanied by the warning that the optimal order is (by far!) not the one exemplified in Listing 2, if all bonds have similar dimensions…

5) Speaking of slippery notation, and possible typos: is there any specific meaning attached to the “;” between the indices of TOUT in Listing 3? The indices of the other two tensors are separated by “,” instead.

6) Minor: When discussing the merging of bonds in Sec. 4, the natural question is how this is handled when symmetries are present. Might a link to the proper place in Sec. 6 be useful here? Incidentally, is the reverse operation (i.e., splitting a bond into multiple ones) implemented, and how?
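
For dense tensors the reverse operation is, at heart, a plain reshape; the interesting question is precisely the symmetric case. A generic NumPy sketch of what I mean (not Cytnx syntax):

    import numpy as np

    T = np.random.randn(2, 3, 4)
    merged = T.reshape(2 * 3, 4)     # merge the first two bonds into one
    split = merged.reshape(2, 3, 4)  # splitting requires the original dimensions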

7) Minor: A few commands seem a bit involved, like the apparent need in Listing 16 to explicitly assign a name to a UniTensor that is identical to the variable name on the lhs of the “=”. Or is there any arbitrariness that could be useful in specific cases? If so, a comment would be desirable.

8) When discussing the possible initialisations of a tensor in Sec. 5, a natural question is whether one could add a bit of noise on top of existing entries loaded from a file (e.g., tensors coming from another simulation, or in the case of single-site DMRG algorithms with so-called subspace expansion). In other words, can one perform an element-by-element sum or rescaling of a tensor? It might be related to Secs. 5.5 and 5.6, but it could be useful to mention it more clearly.
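
What I have in mind is something along these lines (a purely illustrative sketch; the file name is hypothetical and the Load/get_block/put_block NumPy round-trip is my paraphrase of the interface, to be corrected as appropriate):

    import numpy as np
    import cytnx

    # Load a tensor saved by a previous simulation (file name hypothetical).
    uT = cytnx.UniTensor.Load("mps_site.cytnx")

    # Rescale and perturb the dense block element by element via NumPy.
    block = uT.get_block().numpy()
    block = 0.99 * block + 1e-3 * np.random.randn(*block.shape)
    uT.put_block(cytnx.from_numpy(block))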

9) Minor: on page 15, there is a discussion about “rowrank” deciding over the left/right position of indices in the pictorial representation of the tensor (“We observe that bond "a" is attached to the left of the tensor, while bonds "b" and "c" are attached to the right…”), but this does not correspond to the illustration in Listing 22. Please amend.

10) In Listings 24 and 25, two notations seem to be present for indicating a specific element of a UniTensor, namely “uT.at(["a","b"],[0,2]).value” and “uT[0,2]”: are they truly identical? If yes, why should one ever want to use the first? If not, what is the crucial difference?

11) As a practitioner of tensor networks with symmetries, I am interested in understanding why “users can choose not to group them [sectors with the same quantum number] by setting the argument is_grp=False”. When is that useful? Can the Authors provide a concrete example for the Readership?

12) Minor: Is there any constraint on the dimensions of incoming / outgoing bonds of symmetric tensors, or are they completely free?

13) When discussing linear algebra operations with symmetric tensors, the case of the truncated SVD should be discussed in more detail. The comparison of the values to be thrown away should be performed across different sectors, to exclude the risk of spurious symmetry-breaking effects. This is discussed in multiple sources about DMRG and other TN algorithms, and should be recalled to the reader here, in the spirit of allowing for a “quick learning process for new users”.
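
To fix ideas, the usual recipe in generic NumPy terms (deliberately not Cytnx syntax): collect the singular values of all sectors, rank them globally, and derive a per-sector truncation from the global ranking.

    import numpy as np

    def truncate_across_sectors(blocks, chi):
        """Decide how many singular values to keep per symmetry sector.

        blocks: dict mapping quantum number -> dense matrix of that sector.
        chi:    total number of singular values to keep across all sectors.
        """
        spectra = {qn: np.linalg.svd(M, compute_uv=False)
                   for qn, M in blocks.items()}
        # Global ranking: pair each value with its sector, sort descending.
        ranked = sorted(((s, qn) for qn, svals in spectra.items() for s in svals),
                        reverse=True)
        keep = {qn: 0 for qn in blocks}
        for _, qn in ranked[:chi]:
            keep[qn] += 1
        return keep

    # Example: two sectors, keep the 10 globally largest values.
    blocks = {0: np.random.randn(8, 8), 1: np.random.randn(6, 6)}
    print(truncate_across_sectors(blocks, chi=10))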

14) In Sec. 8 about the “Network” object, its relation to long-standing community routines like “ncon” and the like might be worth discussing. The same applies, even more importantly, to the similarities and differences between the automated generation of contraction orders here and the one proposed a decade ago in works by Pfeifer et al.

15) In Sec. 9.2, it might be worth mentioning the possibility of using Golub-Kahan-Lanczos bidiagonalization (as in KrylovKit.jl, for example) for obtaining a truncated SVD without the need to compute the full one first. While this is commonly not useful in a DMRG algorithm with a small physical dimension, it might come in handy when the fraction of singular values to be kept is small (e.g., in a PEPS or TTN application, and also for DMRG with a large local dimension). The spirit is very similar to what the Authors hint at when discussing the diagonalization of the lowest spectrum via Lanczos in Sec. 9.3.
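
For the Readership, the pattern is available off the shelf, e.g., via SciPy's iterative sparse SVD (a minimal sketch; matrix size and rank are illustrative):

    import numpy as np
    from scipy.sparse.linalg import svds

    # Effective matrix obtained by reshaping a tensor for the split.
    A = np.random.randn(4000, 4000)

    # Leading k singular triplets only, via an iterative (Lanczos-type)
    # method; the full dense SVD of A is never computed.
    U, s, Vh = svds(A, k=64)
    order = np.argsort(s)[::-1]   # svds returns values in ascending order
    U, s, Vh = U[:, order], s[order], Vh[order, :]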

16) A benchmark exclusively based on C++ seems a bit partial: given the popularity of Python (and Julia), it would be very informative to see the analogue of Figs. 18-19 for these languages, too. Only then could the interested reader / user make an informed decision on what to use. Incidentally, why is the benchmark performed only against ITensor and none of the other listed libraries (e.g., TensorKit or TeNPy)?

17) Side comment on Fig. 18: actually, the plot makes it very apparent that it is usually not a good idea to use multiple threads, at least for “simple” DMRG algorithms: The gain is most often far smaller than the number of threads, which means that multithreading would mostly result in a higher number of core-hours being billed on HPC machines, without tangible benefit (e.g., a speedup of 2.5 on 8 threads consumes 8/2.5 ≈ 3.2 times the core-hours of a serial run).

18) Looking at Fig. 19, could the Authors provide an estimate of the bond dimension at which the memory limitations of a GPU will kick in and invert the trend?

19) It is desirable to see benchmarks of the different libraries for TN algorithms beyond the "simple" DMRG, especially in the direction of two-dimensional systems (PEPS, TTN, etc.). The more complicated the operations, the more the advantage of an implementation could become apparent.

20) Similarly, it would be good to have concrete examples where the automatic selection of the contraction sequence brings a concrete advantage; no doubt it is a nice feature to have, but it is good to see it in action.
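
Even a toy case makes the point, e.g., with NumPy's own path optimizer (a generic sketch, not Cytnx; the dimensions are chosen so that the naive left-to-right order costs about a hundred times more than the optimal one):

    import numpy as np

    # A chain with one "fat" bond: contracting A with B first creates a
    # large 1000 x 1000 intermediate, while contracting B with C first
    # keeps every intermediate small.
    A = np.random.randn(1000, 10)
    B = np.random.randn(10, 1000)
    C = np.random.randn(1000, 10)

    path, info = np.einsum_path('ab,bc,cd->ad', A, B, C, optimize='optimal')
    print(info)   # reports naive vs optimized FLOP counts and the chosen order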

21) Side comment: When mentioning automated differentiation among future features to be implemented, I cannot agree more, especially given the prominent role AD plays in unlocking (making more accessible and flexible) the variational optimization of complex TN structures like PEPS. Mentioning it could be helpful for interested readers.

  • validity: high
  • significance: high
  • originality: good
  • clarity: high
  • formatting: excellent
  • grammar: excellent

Anonymous Report 1 on 2024-2-6 (Invited Report)

Strengths

Cytnx combines easy-to-use APIs for tensor operations with advanced performance features like memory optimization, catering to both novices and experts. It supports symmetric tensors and uses GPU acceleration, greatly improving computational speed and scalability. I would say this library is an important contribution to computational physics. The technical methodologies employed in the development of the Cytnx library appear sound. The benchmarks provided demonstrate competitive performance with ITensor.

Weaknesses

1) The documentation could benefit from including implementations of common tensor network algorithms such as DMRG (Density Matrix Renormalization Group) and TDVP (Time-Dependent Variational Principle) for non-trivial problems. Their absence misses an opportunity to provide users with practical starting points for using the library in well-established computational frameworks.

2) Although the authors mention its potential applicability to higher-dimensional tensor network techniques like PEPS, there are no explicit demonstrations that Cytnx can provide a competitive edge.

3) In general, the library seems well equipped, but there is no clear demonstration that it can solve challenging problems. Perhaps future versions will include this.

4) It would be great if the authors could expand on the discussion of potential applications in quantum computing and machine learning to highlight the library's versatility and potential impact further.

Report

The manuscript presents a valuable contribution to the field of computational physics, introducing a versatile and efficient tool for tensor network simulations. With some revisions and/or comments that address its weaknesses, it would be a suitable candidate for publication in SciPost.

  • validity: high
  • significance: good
  • originality: good
  • clarity: good
  • formatting: excellent
  • grammar: reasonable
