SciPost Submission Page

Amplitude Uncertainties Everywhere All at Once

by Henning Bahl, Nina Elmer, Tilman Plehn, Ramon Winterhalder

Submission summary

Authors (as registered SciPost users): Henning Bahl · Nina Elmer · Tilman Plehn · Ramon Winterhalder
Submission information
Preprint Link: scipost_202509_00024v1  (pdf)
Date submitted: Sept. 10, 2025, 10:35 a.m.
Submitted by: Nina Elmer
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties:
  • High-Energy Physics - Phenomenology
Approaches: Theoretical, Computational

Abstract

Ultra-fast, precise, and controlled amplitude surrogates are essential for future LHC event generation. First, we investigate the noise reduction and biases of network ensembles and outline a new method to learn well-calibrated systematic uncertainties for them. We also establish evidential regression as a sampling-free method for uncertainty quantification. In a second part, we tackle localized disturbances for amplitude regression and demonstrate that learned uncertainties from Bayesian networks, ensembles, and evidential regression all identify numerical noise or gaps in the training data.
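
As a generic illustration of the ensemble-based uncertainty estimate mentioned in the abstract (a minimal sketch, not the authors' implementation; the .predict interface is hypothetical):

    import numpy as np

    def ensemble_predict(models, x):
        # `models`: K independently trained regressors with a hypothetical
        # .predict(x) method; the spread of their predictions serves as an
        # epistemic uncertainty estimate.
        preds = np.stack([m.predict(x) for m in models])  # shape (K, n)
        return preds.mean(axis=0), preds.std(axis=0, ddof=1)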

Author indications on fulfilling journal expectations

  • Provide a novel and synergetic link between different research areas.
  • Open a new pathway in an existing or a new research direction, with clear potential for multi-pronged follow-up work
  • Detail a groundbreaking theoretical/experimental/computational discovery
  • Present a breakthrough on a previously-identified and long-standing research stumbling block
Current status:
In refereeing

Reports on this Submission

Report #1 by Anonymous (Referee 1) on 2025-11-21 (Invited Report)

Strengths

  1. The paper presents a thorough investigation of uncertainty estimation for amplitude surrogates.

  2. The authors consider different scenarios that can impede the training and accuracy of the surrogate models, including localised inaccuracies in the training data or missing data.

Weaknesses

  1. Section 3 reports a bias without investigating (or reporting on an investigation of) its possible source.

Report

The manuscript presents an investigation worthy of publication, provided the requested changes are addressed. I enjoyed reviewing it.

Requested changes

Can the authors please

  • specify how many trainings were performed for the bands in Figure 2? The manuscript only mentions "multiple times".

  • use a logarithmic scale for the upper panels of Figure 2

  • investigate the source of the bias reported in Section 3:

Section 3.2 reports and investigates a bias in the ensemble method. In my opinion this bias is likely due to the fact that the authors fit the logarithm of the amplitude and not the amplitude itself. The effect of this transformation is investigated in their Appendix A, from which it is clear that fitting the logarithm of the amplitude and transforming back will yield a positive bias in the amplitude (see the numerical sketch after this list). Can the authors check whether this is the source of the bias, or clearly exclude this possibility by reporting the values in what they call "l-space" and possibly reporting the value of the bias induced by the transformation alongside the one they measure?

The transformation also makes the discussion in Section 3.3 more difficult: \sigma_stat applies not to the amplitude but to its logarithm, and I would question the validity of the derivation. The following section could then be the solution to a nonexistent problem.

Can the authors also clarify whether the ensemble average for the amplitude is obtained by exponentiating the average of the logarithm predictions or by averaging the exponentiated logarithms?

  • check whether the scale of the y axis of Figure 4 (right) is correct? It would indicate that the mean relative error for amplitudes > 10^5 is larger than 100%.

  • compare the "bias floor" they found in Figure 5 with that expected from the logarithmic transformation?

  • clarify the meaning of "channel" in the captions of Figures 5 and 6.

  • make the dashed blue line in Figure 5 more visible (or state behind which other line it is hiding).

  • elaborate on the discussion of why \sigma_syst should converge to |A_NN - A_train|? The reader is pointed to an explanation in Section 2.1, but I could not find one. If the authors mean to refer to the explanation in Appendix D of their reference 65, perhaps they can remove this level of indirection. The description in that appendix refers to this effect as "ideally ...", so it would be useful for the authors to justify why the formulation of the globally learned systematic uncertainty allows for this ideal case while other strategies do not.
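
For concreteness, a minimal numerical sketch of the points raised above about the logarithmic transformation (assuming Gaussian scatter in l-space; illustrative only, not taken from the manuscript):

    import numpy as np

    rng = np.random.default_rng(0)
    A_true, sigma_l = 1.0, 0.3  # assumed true amplitude and l-space scatter
    l = np.log(A_true) + sigma_l * rng.normal(size=100_000)

    # Jensen's inequality: E[exp(l)] = exp(mu + sigma_l**2/2) > exp(mu),
    # so transforming back to amplitude space gives a positive bias.
    print(np.exp(l).mean())      # ~1.046, not 1.0

    # First-order propagation: sigma_A ≈ A * sigma_l, so an l-space
    # sigma_stat does not carry over to the amplitude unchanged.
    print(np.exp(l).std(), A_true * sigma_l)  # comparable for small sigma_l

    # The two ensemble-averaging conventions differ by the same bias:
    print(np.exp(l.mean()))      # exponentiated average of logs, ~1.0
    print(np.exp(l).mean())      # average of exponentiated logs, biased high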

Recommendation

Ask for minor revision

  • validity: good
  • significance: good
  • originality: good
  • clarity: good
  • formatting: excellent
  • grammar: excellent
