SciPost Submission Page
Reweighting Monte Carlo Predictions and Automated Fragmentation Variations in Pythia 8
by Christian Bierlich, Philip Ilten, Tony Menzo, Stephen Mrenna, Manuel Szewc, Michael K. Wilkinson, Ahmed Youssef, Jure Zupan
Authors (as registered SciPost users): Christian Bierlich · Philip Ilten · Tony Menzo
This work reports on a method for uncertainty estimation in simulated collider-event predictions. The method is based on a Monte Carlo veto algorithm and extends previous work on uncertainty estimates in parton showers by including uncertainty estimates for the Lund string-fragmentation model. This method is advantageous from the perspective of simulation costs: a single ensemble of generated events can be reinterpreted as though it were obtained using a different set of input parameters, with each event accompanied by a corresponding weight. This allows for a robust exploration of the uncertainties arising from the choice of input model parameters, without the need to rerun full simulation pipelines for each parameter choice. Such explorations are important when determining the sensitivities of precision physics measurements. Accompanying code is available at https://gitlab.com/uchep/mlhad-weights-validation.
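The veto-based reweighting described in the abstract can be illustrated with a minimal toy sketch (my own illustration, not the authors' Pythia implementation): every accepted or rejected trial recorded while generating an event with the baseline acceptance probability contributes a multiplicative factor to the event weight under alternative parameters.

```python
def veto_reweight(trials, p_base, p_alt):
    """Multiplicative event weight for a baseline accept/reject history.

    `trials` is a list of (x, accepted) pairs recorded while generating
    the event with acceptance probability p_base(x); the returned weight
    reinterprets the event as if p_alt(x) had been used instead.
    Toy sketch of the veto-reweighting idea, not the Pythia interface.
    """
    w = 1.0
    for x, accepted in trials:
        if accepted:
            w *= p_alt(x) / p_base(x)
        else:
            w *= (1.0 - p_alt(x)) / (1.0 - p_base(x))
    return w

# Example: one accepted and one rejected trial, constant acceptance rates.
w = veto_reweight([(0.1, True), (0.2, False)],
                  p_base=lambda x: 0.5,
                  p_alt=lambda x: 0.25)
print(w)  # (0.25/0.5) * (0.75/0.5) = 0.75
```

The key point, as in the parton-shower case, is that rejected trials also carry information: their weight factors compensate so that the weighted baseline ensemble reproduces the alternative-parameter distribution.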
Submission & Refereeing History
Reports on this Submission
Strengths
1. In the manuscript, the authors present for the first time the possibility of reweighting Monte Carlo hadronisation predictions.
2. This approach opens up a new avenue to obtain hadronisation uncertainties efficiently (especially if full detector simulation is considered) and can potentially be used for tuning (fitting) hadronisation models.
3. The manuscript is supplemented with publicly available code (gitlab.com/uchep/mlhad-weights-validation).
Weaknesses
1. The authors consider a simplified version of the string model (i.e. they reweight just a few of its parameters).
2. It is unclear how the method can be applied when the baseline distribution is zero on part of its domain.
3. Some citations are missing (see also the other report).
Report
In the manuscript “Reweighting Monte Carlo Predictions and Automated Fragmentation Variations in Pythia 8” the authors present for the first time a framework for reweighting Monte Carlo hadronisation predictions. This approach opens up new possibilities for efficiently obtaining hadronisation uncertainties (especially if a full detector simulation is considered) and can potentially be used to tune (fit) hadronisation models. The results presented are, in my opinion, very interesting. However, before the article is published, I have some comments/questions (see requested changes), the answers to which I think would help to further improve the article.
Requested changes (in order of appearance in the text):
1. In the Introduction, the authors write that the proposed approach can, for example, be applied to a multiparton-interaction (MPI) model. This does not seem obvious to me. It is also not clear to me how the approach could be applied to colour reconnection, which could be considered part of hadronisation or of MPI. Could the authors please elaborate on this?
2. It would be good to add references to more modern versions of the software, and to add references to parton-shower/generator uncertainty studies by other groups.
3a. The authors consider a simplified version of the string model. For example, they neglect to reweight the flavour parameters of the model. What is the reason for this? The string model has many more parameters (for example, the Monash tune, which the authors use as the default setting for the Lund model, involved tuning more than 20 hadronisation parameters). How would the method work for such a large number of parameters? What are the potential problems in such a more realistic situation?
3b. Related to this question is the problem described at the bottom of page 4 concerning low-mass strings. The authors write openly about this problem, but it is not clear what limitations it imposes on estimating hadronisation uncertainties with the proposed method.
4. Some of the parameters of the string model are discrete. How can the method be applied to discrete parameters?
5. In section 3.1 the authors write:
“This agreement breaks down, if the Lund fragmentation function for the alternative parameter values is large in a range where the Lund fragmentation function approaches zero for the baseline parameter values, as shown in fig. 2 (bottom left) and fig. 3 (bottom right). The reweighting then requires large weights and samples the phase space poorly.” The situation is even worse when the baseline function is exactly zero in a region where the function for the alternative parameter values is non-zero.
Clearly, in that situation the method cannot be applied. This appears to be a serious limitation of the method. It would be interesting to see what solutions the authors propose for such a situation.
6. In Fig. 2, in the bottom panel for the case of b = 0.58, there is a large error bar on the $w'$ line between charged multiplicities 30 and 35. What is the reason for that?
7. In perturbative calculations, variation of the renormalisation and factorisation scales is usually used as a rough estimate of the uncertainty. What variations of the Lund-model parameters would the authors suggest for estimating the associated uncertainties?
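The support problem raised in points 2 and 5 above can be seen in a toy example with hypothetical densities: if the baseline distribution vanishes where the alternative does not, no reweighting of baseline events can recover the missing region, and reweighted estimates are irrecoverably biased.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy densities: baseline uniform on [0, 1], alternative
# uniform on [0, 2].  The alternative is non-zero on (1, 2], where the
# baseline vanishes, so baseline events can never populate that region
# no matter how they are weighted.
x = rng.uniform(0.0, 1.0, size=100_000)   # events drawn from the baseline
w = 0.5 / 1.0                             # density ratio p_alt/p_base on [0, 1]
est = np.sum(w * x) / x.size              # reweighted estimate of E_alt[x]
print(est)                                # ~0.25, but the true mean under
                                          # the alternative is 1.0
```

Unlike the large-weight pathology, which degrades statistics but remains formally correct, this mismatch of supports introduces a bias that no amount of additional baseline events can remove.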
In summary, I would like to say that I find the paper very interesting. However, there are still some issues which have to be addressed before publication. Therefore, I recommend that the authors address the points raised above before resubmitting their paper.
Strengths
1. The authors provide a first implementation and validation of a hadronisation code that can generate alternative event weights for relevant parameter variations, thus greatly reducing the cost of subsequent detector-simulation steps in the simulated-events pipeline at high-energy colliders such as the LHC. This lays the groundwork for establishing automated on-the-fly hadronisation uncertainties for particle-level simulation results, similar to what has been established for on-the-fly hard-process and parton-shower uncertainties in recent years.
2. The authors point out potential numerical issues with the ansatz when the baseline and target probability distributions do not overlap well, as has been observed for shower variations, and attempt to find diagnostics to spot pathological cases. This is also studied in the given validation results.
Weaknesses
1. Some citations are missing (see "Requested changes").
2. The criterion of how far the mean is from 1 is not well argued in my opinion, or perhaps I don't understand the point the authors are trying to make here. The actual issue, i.e. the weaker statistical significance due to the wider weight distribution, will of course lead to larger fluctuations of $\mu$ around 1 (as seen from the MC error $\sigma$ given for $(1-\mu)$ for the pathological cases), but then the more reliable and direct measure would be $\sigma$, not $(1-\mu)$, which might be arbitrarily close to 0 given its statistical nature. Alternatively, quoting the effective sample size might be the most natural measure and would at the same time communicate the loss of significance for the alternative-weight event sample in a clear way.
Report
This submission reports the first development and application of reweighting methods to a hadronisation model (here, Lund string hadronisation, which is one of two main models in wide use) and the validation of an implementation that enables for the first time the generation of alternative-weight samples for hadronisation uncertainty studies.
Mostly the same methods have been developed for the perturbative parts of the Monte Carlo simulation toolchain (matrix elements, parton showers, matching/merging) between 2011 and 2016 and their use has become a standard for large-scale simulated event sample production at the LHC by ATLAS and CMS and a useful tool for phenomenological studies alike. I expect that the calculation of alternative weights for the hadronisation part will have a similar impact.
Therefore, the submission is a highly relevant contribution. It is of a high quality and definitely worth publishing in SciPost Physics. However, I include a few points in "Requested changes" which were confusing to me and/or that I think could be improved, and I would ask the authors to address them in a minor revision.
1. In the introduction on page 2, after "efficient methods exist for the hard process and the parton shower", it is in my opinion not sufficient to cite only the VINCIA and PYTHIA reweighting-related publications [3, 4]. As for the hard process, LO reweighting is trivial, but publications that developed NLO reweighting should be added, e.g. [1310.7439] for the reweighting of NLO Monte Carlo simulations using Catani-Seymour subtraction. When it comes to the shower (which might include reweighting in the context of matching and merging, which should perhaps be mentioned too), the implementations in the two other general-purpose generators heavily in use at the LHC, in [1605.08256] (HERWIG) and [1606.08753] (SHERPA), should be cited.
2. The previous point also applies to the use of the same citation group [3, 4] on page 2 after the sentence "The presented method is similar to the one used previously for parton shower uncertainty estimates", i.e. [1605.08256] and [1606.08753] should be cited, too. Here, as you now refer to the reweighting of the veto algorithm, one should also cite [0912.3501], which predates any of the parton shower uncertainty papers by about two years and uses the same "modified veto algorithm" method, albeit to bias shower emissions to generate additional photons. See App. B of [0912.3501].
3. Since, to my (limited) understanding, the HERWIG cluster-hadronisation model is itself an implementation based on ideas from the 1980s [Nucl. Phys. B214 (1983) 201, Nucl. Phys. B239 (1984) 349, Nucl. Phys. B288 (1987) 729, Nucl. Phys. B238 (1984) 492], it is unclear why it was singled out as an example, instead of also citing the second implementation of this method in a widely used general-purpose event generator, i.e. in SHERPA [hep-ph/0311085].
4. As mentioned in the "Weaknesses" part of the review, I am not convinced by the arguments around eqs. (13)-(15) for putting forward the deviation of the mean from unity as the most prominent/straightforward criterion to assess the quality of the reweighting. Isn't it much clearer and more straightforward (and less dependent on random fluctuations, which might send the mean $\mu$ arbitrarily close to unity even if $\sigma$ is large) to use the MC error $\sigma$ of the $w'$ sample itself, or the effective sample size? Eqs. (13)-(15) are not even required to establish this, as it is self-evident that the weight distribution is widened by the reweighting. So I would ask the authors to either (i) point out what I have misunderstood, (ii) clarify in the draft why the deviation of the mean itself is the relevant criterion here, or (iii) just use the greater Monte Carlo error and/or reduced effective sample size as a criterion and discuss it for the given results. In the latter case, it would be interesting to quote the effective sample size in Tab. 1.
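The statistical point in this request can be made concrete with a toy ensemble (hypothetical lognormal weights, not the paper's samples): across repeated replicas, the mean weight $\mu$ fluctuates around 1 and some replicas land very close to it, while the spread $\sigma$ stays stably large, so $\sigma$ (or the effective sample size) is the more robust diagnostic.

```python
import numpy as np

rng = np.random.default_rng(3)

# 200 replicas of a toy weight sample with E[w] = 1 exactly: mu wanders
# around unity, but sigma remains stably large in every replica.
mus, sigmas = [], []
for _ in range(200):
    w = rng.lognormal(mean=-0.5, sigma=1.0, size=5000)  # E[w] = 1
    mus.append(w.mean())
    sigmas.append(w.std())

print(min(abs(m - 1.0) for m in mus))  # can be arbitrarily close to 0
print(float(np.mean(sigmas)))          # stays near sqrt(e - 1) ~ 1.31
```

A quality cut on $|1-\mu|$ would therefore pass some of these wide-weight replicas purely by chance, whereas a cut on $\sigma$ or on the effective sample size flags all of them consistently.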