
SciPost Submission Page

Testing new-physics models with global comparisons to collider measurements: the Contur toolkit

by A. Buckley, J. M. Butterworth, L. Corpe, M. Habedank, D. Huang, D. Yallup, M. Altakach, G. Bassman, I. Lagwankar, J. Rocamonde, H. Saunders, B. Waugh, G. Zilgalvis


Submission summary

Authors (as registered SciPost users): Andy Buckley · Jonathan Butterworth · Louie Corpe · Juan Rocamonde
Submission information
Preprint Link: https://arxiv.org/abs/2102.04377v2  (pdf)
Date accepted: 2021-05-10
Date submitted: 2021-04-19 09:12
Submitted by: Corpe, Louie
Submitted to: SciPost Physics Core
Ontological classification
Academic field: Physics
Specialties:
  • High-Energy Physics - Experiment
  • High-Energy Physics - Phenomenology
Approaches: Computational, Phenomenological

Abstract

Measurements at particle collider experiments, even if primarily aimed at understanding Standard Model processes, can have a high degree of model independence, and implicitly contain information about potential contributions from physics beyond the Standard Model. The Contur package allows users to benefit from the hundreds of measurements preserved in the Rivet library to test new models against the bank of LHC measurements to date. This method has proven to be very effective in several recent publications from the Contur team, but ultimately, for this approach to be successful, the authors believe that the Contur tool needs to be accessible to the wider high energy physics community. As such, this manual accompanies the first user-facing version: Contur v2. It describes the design choices that have been made, as well as detailing pitfalls and common issues to avoid. The authors hope that with the help of this documentation, external groups will be able to run their own Contur studies, for example when proposing a new model, or pitching a new search.

Author comments upon resubmission

We thank all the referees and the editors for their comments on the manuscript. We have submitted a revised version which addresses the points raised, and we believe the changes have collectively improved the manual.

----------- Referee #1 -----------

We thank Referee #1 for reviewing our paper and for their report. They did not raise any specific concerns.

----------- Referee #2 -----------

We are grateful to Referee #2 for their comments, for trying out the tutorial included in the paper, and for finding remaining bugs, which were logged as issues on our GitLab repository. We have fixed these issues and therefore hope that future readers will have no problem following the tutorial.
Referee #2 asked a number of specific questions, which we respond to here:

>>One question out of interest (no requirement regarding the document): Can the scan utility be extended to iteratively find the least constrained parameter points? Maybe using similar techniques to tuning tools like Professor/Apprentice?

Response: Regarding extending the scanning tool to locate the least-constrained parameter point: this can be done by making use of the CONTUR code interface and connecting it to a numerical scanner or minimiser. There are many possible extensions (and attendant issues) in doing so, for example the existence of many minima rather than a unique point. More efficient scanning of multi-dimensional parameter spaces is an area of active research in the CONTUR team and in the reinterpretation community more widely. We have now added a short paragraph about these possible extensions at the end of Section 4.2 of the manual.
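
To illustrate the kind of coupling we have in mind (this is not part of the released code), a minimal sketch in Python is given below, assuming the user writes their own wrapper around a single-point CONTUR evaluation; the function run_contur_point and the parameter values are hypothetical placeholders, not a CONTUR API.

    # Hypothetical sketch: hand a single-point CONTUR evaluation to a generic
    # minimiser to search for the least-constrained parameter point.
    # run_contur_point is a user-written placeholder, not a CONTUR API call.
    from scipy.optimize import minimize

    def run_contur_point(params):
        m_zprime, g_coupling = params
        # In a real study: generate events, run Rivet, run CONTUR, and return
        # the exclusion (e.g. CLs) for this parameter point. Dummy value here.
        return 0.0

    # Minimising the exclusion drives the scan towards weakly constrained
    # regions; note that many local minima may exist rather than a unique point.
    result = minimize(run_contur_point, x0=[1000.0, 0.1], method="Nelder-Mead")
    print("Least-constrained point candidate:", result.x)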

>>You describe the operating mode where only the BSM MC is produced, and the SM background is either taken from HepData or the data used as a proxy. It is clear how this "stacking" works for differential cross sections. But how is this achieved for other measurements, for example even normalised cross sections can't be used because you can't simply add the BSM MC a posteriori, right? Let alone profile histograms or similar objects?

Response: Normalised histograms are complemented with a fiducial cross-section factor in the analysis database (when this is provided by the experiment), which allows them to be rescaled to the differential cross-section so that the BSM contribution can be added, after which the result is re-normalised. Ratio plots have a similar special treatment, and a profile-histogram treatment is being developed in the same way. We have added some comments about this at the end of Sec 3.0 in the manual.
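
For illustration only, the arithmetic on a normalised distribution amounts to the following sketch (the bin contents, unit bin widths and fiducial cross-section are invented numbers, not the actual CONTUR implementation):

    import numpy as np

    # Illustrative arithmetic only: rescale a normalised measurement using the
    # fiducial cross-section, stack the BSM contribution, then re-normalise.
    norm_sm   = np.array([0.5, 0.3, 0.2])   # normalised distribution (unit bin widths)
    sigma_fid = 10.0                        # fiducial cross-section from the database [pb]
    bsm_diff  = np.array([0.1, 0.4, 0.2])   # BSM differential contribution [pb]

    diff_sm      = norm_sm * sigma_fid      # back to a differential cross-section
    stacked      = diff_sm + bsm_diff       # add the BSM signal
    stacked_norm = stacked / stacked.sum()  # re-normalise for comparison with data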

>> In Sec. 3.1: I was a bit confused when reading the part about "... final states arising from essentially the same events ..." is this referring to correlations between *data* events within different measurements, or *BSM MC* events populating different observables/regions simultaneously? I assume the former, but might be nice to phrase explicitly.

Response: You are right, this refers to measurements where the data events entering the selections may partially overlap. We have rephrased the statement to resolve the ambiguity.

>> You only include LHC analyses. Would non-LHC analyses provide no improvement, or is that a pragmatic decision because their documentation/Rivet analyses are often not rigorous enough?

Response: In principle this method could be extended to non-LHC analyses, although we would need to add additional running modes for the relevant collision energies, etc. We have added a statement about this at the end of Sec 3.1 in the manual.

>> Since the document has the form of a manual, it would be good to refer the user to the README.md in git for setup instructions to quickly get started. Currently this is not mentioned in the publication nor referenced on the homepage, but is the only place that describes how to install Contur. Even better would be to unify all documentation. Currently there are at least three documentation starting points for the user: This manuscript, the README.md, and the homepage's "Using Contur".

Response: The reason that we did not directly link to the GitLab page in the manual is that there is always a risk that the code may migrate elsewhere at some point in the future, so such a link may not remain stable. Instead, we propose to link to the CONTUR homepage in the introduction and summary, which then links to the most recent documentation, setup instructions, and codebase.

>> Minor glitches in the Contur tool/documentation to be resolved: https://gitlab.com/hepcedar/contur/-/issues?state=all&author_username=fsiegert

Response: The glitches pointed out by the referee have all been resolved.

----------- Referee #3 -----------

We thank the referee for their careful reading of the paper and helpful comments, which we have addressed to make the manual easier to digest. In particular, we feel that comments 1-3 in the "Weaknesses" section can be addressed with a detailed overview diagram, which we have created and added to the appendix. This diagram gives an overall view of the CONTUR workflow and ecosystem, while indicating who is responsible for executing each step (the user, an external tool, or CONTUR itself, and in the latter case which executable). We hope this clarifies these points. For item 4, the appendix "Example Contur study with Herwig" provides an end-to-end example of the single-point workflow. We do not propose to put detailed installation instructions in the manual, since these may become out of date over time. Instead, we propose to link to the CONTUR homepage, which points the user to the latest source code and setup instructions. We hope the reviewer is satisfied with this suggestion.

We reply to the specific points raised by Referee #3 below.

>> 1. In the main body, provide a technical workflow of Contur following the conceptual workflow in Section 2.1. A flowchart depicting the technical workflow will be very useful. The flowchart could include input files, output files and packages/routines that process and produce those.

Response: We thank the referee for their suggestion and agree this would be very helpful. Such a diagram is now provided in Appendix A, showing all steps of the workflow, the input/output files and their formats, and which executables or external tools are used at each step.

>> 2. Related to the previous point: Provide the source code repository already in the main body of the text. The main body starts with conceptual physics descriptions, but especially starting with section 4, a lot of names for Contur and Rivet functionalities are referenced. It is difficult to follow these for a first time reader without understanding their position in the Contur or Rivet packages.

Response: The reason that we did not directly link to the GitLab page in the manual is that there is always a risk that the code may migrate elsewhere at some point in the future, so such a link may not remain stable. Instead, we propose to link to the CONTUR homepage in the introduction and summary, which then links to the most recent documentation, setup instructions, and codebase. In combination with the flowchart described above, we hope this gives a clear guide to each step and the technical terms.

>> 3. Please also better clarify the task division between Contur and Rivet, i.e. in the flowchart. What is done by either package is mentioned in various places in the text, but it would help to have a concrete, dedicated description.

Response: We now address this in the flowchart.


>> 4. In the main body, it is not always clear which Contur functionalities are controlled by the user and which functionalities belong to the internal workflow of the code.

Response: We now address this in the proposed flowchart by specifying which executable to use at each step, or whether an external tool is used.

>> 5. It would be helpful to present the content of an example YODA file. I understand that the YODA files are generated both for the experimental data by Rivet and for the BSM models by Contur via Rivet. Is that correct? Please clarify.

Response: Since YODA files are a standard output of Rivet and one of the possible output formats of HEPData, and are not specific to CONTUR, the authors do not think this is the right place to document their contents exhaustively. Nonetheless, we have added a few brief lines in Section 3 which give more information about the current YODA format, for the benefit of the reader.
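
For readers who want a quick look at what such a file contains, one can inspect it via the YODA Python bindings (a minimal sketch, assuming the yoda package is installed; the file name is a placeholder):

    import yoda

    # Read a YODA file into a dict mapping analysis-object paths to objects.
    aos = yoda.read("Rivet.yoda")
    for path, ao in aos.items():
        print(path, type(ao).__name__)   # e.g. /ATLAS_.../d01-x01-y01  Scatter2D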

List of changes

- At the end of Section 1 (Introduction) and also in the Summary, we added a pointer to the CONTUR homepage, which will host the latest installation instructions and links to the source code.
- In the second paragraph of Section 2 (Overview), we now provide a sentence highlighting the existence of the detailed flowchart in Appendix A. This is also pointed out at the end of the second paragraph of Sec 2.1, and in the caption of Fig. 1.
- In Section 3 (Rivet analyses), at the end of the first paragraph, we give a few lines of explanation about the YODA format. Also in this section, we updated the list of Rivet reference analyses, since more have become available since the original submission of the manuscript and are now included in CONTUR.
- At the end of Sec 3.0, we added a short discussion of how BSM contributions are stacked for data types other than differential cross-sections.
- At the end of Sec 3.1, we remark on how one could in principle use the CONTUR framework for results from other experimental facilities such as LEP or HERA.
- At the end of Sec 4.2, we remark upon how the CONTUR scanning machinery could be extended.
- The caption of Fig. 3 has been rewritten in accordance with the requests of the Referees.
- Fig. 4 has been updated to account for the latest measurements which have been made available in Rivet/CONTUR.
- Sec 6.2.1/6.2.2: As pointed out by the referees, the "Data functions" and "Theory functions" functionality was confusingly named. We have renamed them "Plotting external grids" and "Plotting external functions" respectively, in the code and in the manual. A small amount of additional content has been added to those sections to clarify their use.
- Added new Appendix A / Fig. 5: the workflow flowchart.

Published as SciPost Phys. Core 4, 013 (2021)


Reports on this Submission

Anonymous Report 2 on 2021-5-3 (Invited Report)

Report

I am happy with the changes made by the authors based on my previous comments. I recommend the manuscript for publication.

  • validity: -
  • significance: -
  • originality: -
  • clarity: -
  • formatting: -
  • grammar: -

Report 1 by Frank Siegert on 2021-4-30 (Invited Report)

Report

I am happy with the new version and how the authors have addressed my previous comments. Thanks also for fixing the problems I encountered when running the tutorial so quickly!

  • validity: high
  • significance: high
  • originality: good
  • clarity: top
  • formatting: excellent
  • grammar: perfect
