SciPost Submission Page
Efficient and scalable Path Integral Monte Carlo Simulations with worm-type updates for Bose-Hubbard and XXZ models
by Nicolas Sadoune, Lode Pollet
Submission summary
Authors (as registered SciPost users):  Lode Pollet 
Submission information  

Preprint Link:  https://arxiv.org/abs/2204.12262v3 (pdf) 
Code repository:  https://github.com/LodePollet/worm 
Date accepted:  2022-10-05 
Date submitted:  2022-10-03 08:52 
Submitted by:  Pollet, Lode 
Submitted to:  SciPost Physics Codebases 
Ontological classification  

Academic field:  Physics 
Approach:  Computational 
Abstract
We present a novel and open-source implementation of the worm algorithm, which is an algorithm to simulate Bose-Hubbard and sign-positive spin models using a path integral representation of the partition function. The code can deal with arbitrary lattice structures and assumes spin-exchange terms, or bosonic hopping amplitudes, between nearest-neighbor sites, and local or nearest-neighbor interactions of the density-density type. We explicitly demonstrate the near-linear scaling of the algorithm with respect to the system volume and the inverse temperature and analyze the autocorrelation times in the vicinity of a U(1) second-order phase transition. The code is written in such a way that extensions to other lattice models as well as closely related sign-positive models can be done straightforwardly on top of the provided framework.
Author comments upon resubmission
The first Referee made some additional remarks, which we agree with and address below. We thank this Referee in particular for their renewed careful review of our paper.
We hope that our manuscript is now ready for publication.
(i) In Figs. 5-7, the unit for the autocorrelation time, e.g. "tau_W^2=55", apparently still needs to be specified (updates?).
Reply:
We added the following sentence to the text:
The unit for the autocorrelation time is one sweep, i.e., one completed worm update from INSERTWORM to GLUEWORM.
(ii) Fig. 8: It would be helpful to roughly know the proportionality factor, or alternatively the actual memory consumption for some system size.
Reply: We changed the text as follows:
The total average memory consumption can be estimated from the basic data structure, which contains 4 integers (which the user can specify), 1 double, and $2d$ (more generally, the coordination number) \texttt{C++} iterators. How much memory is required for this data structure is hence lattice, user, compiler, and hardware dependent. Note that the \texttt{C++} operator \texttt{sizeof(Element)} can provide this information. Assuming 4 bytes for an integer, 8 bytes for a double, and 8 bytes for an iterator, the size of an element is 72 bytes for a cubic lattice and 56 bytes for a square lattice.
For the linear system size $L=96$ in Fig.~\ref{fig:efficiency}, the average memory usage for storing the configuration is then slightly less than 70 megabytes. Doubling this number to account for fluctuations in kinetic energy gives a realistic estimate for the required memory resources for storing the configuration, excluding the Monte Carlo measurements and smaller overheads.
(iii) Updates per second in the text: hardware should again be specified, similar to v1.
Reply: we modified the text as follows:
We observe no loss in performance when increasing the system volume; the total number of updates per second is slightly above $2 \times 10^7$ for a thermalized system, obtained on a single node of an iMac with a 3.1 GHz Intel Core i5 processor and 24 GB of 1667 MHz DDR4 memory.
List of changes
see (i), (ii), (iii) above
Published as SciPost Phys. Codebases 9 (2022), SciPost Phys. Codebases 9-r1.0 (2022)