We investigate the analogy between the renormalization group (RG) and deep neural networks, wherein subsequent layers of neurons are analogous to successive steps along the RG. In particular, we quantify the flow of information by explicitly computing the relative entropy or Kullback-Leibler divergence in both the one- and two-dimensional Ising models under decimation RG, as well as in a feedforward neural network as a function of depth. We observe qualitatively identical behavior characterized by a monotonic increase to a parameter-dependent asymptotic value. On the quantum field theory side, the monotonic increase confirms the connection between the relative entropy and the c-theorem. For the neural networks, the asymptotic behavior may have implications for various information-maximization methods in machine learning, as well as for disentangling compactness and generalizability. Furthermore, while both the two-dimensional Ising model and the random neural networks we consider exhibit non-trivial critical points, the relative entropy appears insensitive to the phase structure of either system. In this sense, more refined probes are required to fully elucidate the flow of information in these models.
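As a rough illustration of the kind of computation described above, the sketch below tracks the Kullback-Leibler divergence between Boltzmann distributions of a small one-dimensional Ising chain as the coupling flows under decimation, using the standard recursion tanh K' = tanh² K. The chain length, the choice of the initial-coupling distribution as the reference, and the helper names `boltzmann` and `kl_divergence` are assumptions made purely for illustration; this is not the authors' implementation.

```python
# Minimal sketch (not the paper's code): relative entropy along decimation RG
# for a small periodic 1d Ising chain. The reference distribution is taken to
# be the one at the initial coupling K0, an illustrative choice.
import numpy as np
from itertools import product

def boltzmann(K, n_spins):
    """Exact Boltzmann distribution of an n_spins periodic Ising chain at dimensionless coupling K."""
    configs = np.array(list(product([-1, 1], repeat=n_spins)))
    # Nearest-neighbour bond sum with periodic boundary conditions; K = beta * J.
    bonds = np.sum(configs * np.roll(configs, -1, axis=1), axis=1)
    weights = np.exp(K * bonds)
    return weights / weights.sum()

def kl_divergence(p, q):
    """Relative entropy D(p || q) = sum_i p_i log(p_i / q_i)."""
    return float(np.sum(p * np.log(p / q)))

K0, n_spins, n_steps = 1.0, 8, 6
reference = boltzmann(K0, n_spins)   # distribution at the initial coupling
K = K0
for step in range(1, n_steps + 1):
    K = np.arctanh(np.tanh(K) ** 2)  # decimation recursion for the 1d Ising chain
    D = kl_divergence(boltzmann(K, n_spins), reference)
    print(f"step {step}: K = {K:.4f}  D = {D:.4f}")
```

As the coupling flows toward zero, the printed divergence grows away from zero and approaches the value obtained for the uniform (infinite-temperature) distribution relative to the reference, mirroring the saturation to an asymptotic value described above.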
Affiliations:
- 1 Julius-Maximilians-Universität Würzburg / University of Würzburg
- 2 Würzburg-Dresden Cluster of Excellence [ct.qmat]
- 3 Max-Planck-Institut für Physik komplexer Systeme / Max Planck Institute for the Physics of Complex Systems
- 4 Stockholm University
- 5 Nordisk Institut for Teoretisk Fysik / Nordic Institute for Theoretical Physics [NORDITA]