
SciPost Submission Page

Towards a method to anticipate dark matter signals with deep learning at the LHC

by Ernesto Arganda, Anibal D. Medina, Andres D. Perez, Alejandro Szynkman


Submission summary

Authors (as registered SciPost users): Andres Daniel Perez
Submission information
Preprint Link: https://arxiv.org/abs/2105.12018v3
Date accepted: 2021-12-23
Date submitted: 2021-12-06 14:50
Submitted by: Perez, Andres Daniel
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties:
  • High-Energy Physics - Phenomenology
Approaches: Computational, Phenomenological

Abstract

We study several simplified dark matter (DM) models and their signatures at the LHC using neural networks. We focus on the usual monojet plus missing transverse energy channel, but to train the algorithms we organize the data in 2D histograms instead of event-by-event arrays. This results in a large performance boost when distinguishing between standard model (SM) only and SM plus new physics signals. We use the kinematic monojet features as input data, which allows us to describe families of models with a single data sample. We find that the neural network performance does not depend on the simulated number of background events if the results are presented as a function of $S/\sqrt{B}$, where $S$ and $B$ are the numbers of signal and background events per histogram, respectively. This gives the method flexibility, since testing a particular model then only requires knowing its new physics monojet cross section. Furthermore, we also discuss the network performance under incorrect assumptions about the true DM nature. Finally, we propose multimodel classifiers to search for and identify new signals in a more general way, for the next LHC run.
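For illustration, the histogram-based input described above can be sketched as follows. The feature choice (leading-jet pT and missing transverse energy), the toy distributions, and the binning are assumptions made for this example only, not the exact configuration used in the paper.

```python
import numpy as np

# Hypothetical per-event arrays of two monojet kinematic features,
# e.g. leading-jet pT and missing transverse energy (illustrative choice).
rng = np.random.default_rng(0)
jet_pt = rng.exponential(scale=300.0, size=1000)  # GeV, toy distribution
met = rng.exponential(scale=250.0, size=1000)     # GeV, toy distribution

# Bin the events into a single 2D histogram ("image") that the network
# sees, instead of an event-by-event array.
hist, _, _ = np.histogram2d(
    jet_pt, met,
    bins=(50, 50),
    range=((0.0, 1500.0), (0.0, 1500.0)),
)

# Normalize so histograms built from different event counts are comparable.
image = hist / max(hist.sum(), 1)
print(image.shape)  # (50, 50) input for a CNN
```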

List of changes

Re: Anonymous Report 1 on 2021-11-21
SciPost Physics

We thank the referee for her/his positive suggestions to improve the paper. Below we address the specific points raised:

1. Table 2, please mention how many neurons were used in the dense layer of the CNN.

We thank the reviewer again for pointing out this issue. We have added the missing number of neurons in the dense layer of the CNN to Table 2.
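As an aside, a minimal Keras sketch of a CNN of this type is shown below. The filter counts, kernel sizes, and the width of the dense layer are illustrative placeholders, not the values quoted in Table 2 of the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Minimal CNN sketch for 50x50 single-channel histogram "images".
model = models.Sequential([
    layers.Input(shape=(50, 50, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),   # the dense layer in question
    layers.Dense(1, activation="sigmoid"),  # SM-only vs SM + DM
])
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.AUC()],
)
model.summary()
```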

2. I am not very sure about the statement that the histogram method is totally independent of the number of background events. For example, in Fig. 10, I think there is no significant difference because you are comparing the performance of histograms with 1000 events against 50k, and 1000 is already a large number. Could you please also compare the performance of histograms with 20 or 100 events? It seems the conclusion about robustness against the number of background events is most likely true only when a histogram is formed from a reasonably large number of events. As mentioned in the paper, the total number of background events is set by the luminosity, but a priori there is no fixed number of events to form a histogram.

We have clarified this issue in the paper and remarked that the statement holds in the range of interest. We have added two panels to Fig. 10 to include the cases with 20 and 100 events, and a few sentences at the end of the second paragraph of Section 4.1. As the referee expected, there is a decrease in performance for these small values of B, since the algorithm needs a sufficiently large number of events to properly represent the underlying distribution and perform the classification. We also added short sentences in Section 6 'Discussion' and Section 7 'Conclusions' to specify the range of interest.
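Schematically, this robustness check can be reproduced with a toy setup like the one below, where $S/\sqrt{B}$ is held fixed (up to rounding) while B is varied over the values discussed above. The event distributions here are toy placeholders, not the simulated monojet samples.

```python
import numpy as np

rng = np.random.default_rng(1)

def background(n):
    # Toy SM-like monojet spectra; placeholders for simulated events.
    return rng.exponential(250.0, n), rng.exponential(200.0, n)

def signal(n):
    # Toy harder-spectrum signal; placeholder for a simulated DM model.
    return rng.exponential(400.0, n), rng.exponential(350.0, n)

# Keep S/sqrt(B) fixed while varying B, to test whether the classifier
# performance depends on B itself or only on the significance.
k = 2.0
for B in (20, 100, 1000, 50_000):
    S = int(round(k * np.sqrt(B)))
    xb, yb = background(B)
    xs, ys = signal(S)
    hist, _, _ = np.histogram2d(
        np.concatenate([xb, xs]), np.concatenate([yb, ys]),
        bins=50, range=((0.0, 1500.0), (0.0, 1500.0)),
    )
    image = hist / max(hist.sum(), 1)  # one normalized histogram per B
    print(f"B={B:6d}  S={S:4d}  S/sqrt(B)={S / np.sqrt(B):.2f}")
```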

3. Section 4.3, Fig. 12: is it true that the trained models should have an accuracy greater than or equal to 70%, given that the DNN trained on the ALP model shows more variation?

In Fig. 12 we show ratios between AUCs, so it is difficult to extract precise accuracy values for the different models. However, the ALP results can be interpreted as lower limits, as we explain below.

On the top-left panel of Fig. 12 we always use DNNs trained on the ALP model and test them with samples generated with the other models. Since the ALP distribution is the closest to the SM one (see Fig. 1), we obtain a significant decrease in performance. This means that the algorithm trained on ALP does not generalize well enough to discriminate the other models from the SM. Notice that the largest decrease, up to 25% (blue curve), occurs for the model whose distribution differs most from the SM (see Fig. 1).

In other words, if we had to choose a single model to train an algorithm and then use it to distinguish background from signal ensembles generated with different models, ALP would be the worst option. In that sense, the ALP results can be interpreted as a first-order lower limit.

The other panels show the results when using DNNs trained on the models with a mediator. We can see in the top-right, bottom-left, and bottom-right panels of Fig. 12 that these DNNs do generalize correctly and can handle test samples generated with a 'wrong' model.
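For reference, the AUC ratios shown in Fig. 12 amount to the following comparison, sketched here for any trained Keras-style binary classifier; the function and variable names are hypothetical.

```python
from sklearn.metrics import roc_auc_score

def auc_ratio(model, X_same, y_same, X_wrong, y_wrong):
    """AUC on ensembles from a 'wrong' DM model, divided by the AUC on
    ensembles from the model the network was trained on. A ratio near 1
    indicates the classifier generalizes across models; a ratio well
    below 1 (as for the ALP-trained DNNs) indicates it does not."""
    auc_same = roc_auc_score(y_same, model.predict(X_same).ravel())
    auc_wrong = roc_auc_score(y_wrong, model.predict(X_wrong).ravel())
    return auc_wrong / auc_same

# Example usage with hypothetical test sets:
# ratio = auc_ratio(dnn_alp, X_alp_test, y_alp_test, X_zp_test, y_zp_test)
```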

Minor comments:

- Figure 8 replaced with a higher-quality version.
- Section 3 renamed 'Machine Learning algorithms'.
- Typos corrected.

Published as SciPost Phys. 12, 063 (2022)
