Deep and Wide Neural Networks Covariance Estimation

Argimiro Arratia*, Alejandra Cabaña, José Rafael León

*Corresponding author for this work

Research output: Chapter in Book › Chapter › Research › peer-review

Abstract

It has been shown recently that a deep neural network with i.i.d. random parameters is equivalent to a Gaussian process in the limit of infinite network width. This Gaussian process is fully described by a recursive covariance kernel that is determined by the architecture of the network and expressed in terms of an expectation. We give a numerically workable analytic expression for this recursive covariance based on Hermite polynomials, and we derive its explicit form for neural networks with Heaviside, ReLU and sigmoid activation functions.
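To make the recursion concrete, below is a minimal sketch (not the authors' code) of the standard infinite-width covariance recursion K^{l+1}(x, y) = σ_b² + σ_w² · E[φ(f^l(x)) φ(f^l(y))], specialized to ReLU, where the expectation has the well-known closed form of the arc-cosine kernel (Cho & Saul, 2009; Lee et al., 2018). The function name and the variance parameters sigma_w2, sigma_b2 are illustrative, not taken from the paper.

```python
import numpy as np

def nngp_kernel_relu(x, y, depth, sigma_w2=1.0, sigma_b2=0.0):
    """Recursive covariance K^L(x, y) of an infinitely wide ReLU network.

    Sketch of the standard NNGP recursion; parameter names are
    illustrative, not from the paper.
    """
    d = x.shape[0]
    # Layer-0 (input) covariances.
    kxx = sigma_b2 + sigma_w2 * np.dot(x, x) / d
    kyy = sigma_b2 + sigma_w2 * np.dot(y, y) / d
    kxy = sigma_b2 + sigma_w2 * np.dot(x, y) / d
    for _ in range(depth):
        # Angle between the two pre-activations under the current kernel.
        cos_t = np.clip(kxy / np.sqrt(kxx * kyy), -1.0, 1.0)
        theta = np.arccos(cos_t)
        # Closed-form E[relu(u) relu(v)] for jointly Gaussian (u, v)
        # with variances kxx, kyy and covariance kxy (arc-cosine kernel).
        e_xy = np.sqrt(kxx * kyy) * (np.sin(theta)
                                     + (np.pi - theta) * cos_t) / (2.0 * np.pi)
        e_xx = kxx / 2.0  # E[relu(u)^2] = K/2 for u ~ N(0, K)
        e_yy = kyy / 2.0
        # One layer of the recursion: K^{l+1} = sigma_b^2 + sigma_w^2 * E[...].
        kxy = sigma_b2 + sigma_w2 * e_xy
        kxx = sigma_b2 + sigma_w2 * e_xx
        kyy = sigma_b2 + sigma_w2 * e_yy
    return kxy

# Example usage:
x, y = np.array([1.0, 0.5]), np.array([0.2, -1.0])
print(nngp_kernel_relu(x, y, depth=3))
```

For the Heaviside and sigmoid activations the expectation has its own expression (the paper's Hermite-polynomial expansion covers the general case); only the closed-form line for e_xy would change.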

Original language: English
Title of host publication: Artificial Neural Networks and Machine Learning – ICANN 2020 – 29th International Conference on Artificial Neural Networks, Proceedings
Editors: Igor Farkaš, Paolo Masulli, Stefan Wermter
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 195-206
Number of pages: 12
ISBN (Print): 9783030616083
DOIs
Publication status: Published - 2020

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12396 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Keywords

  • Deep neural networks
  • Gaussian process
  • Hermite polynomials
  • Kernels
