Representer theorems for deep networks

Stefano Vigogna, University of Rome Tor Vergata

Studying the function spaces defined by neural networks offers a perspective for understanding the corresponding learning models and their inductive bias. While in certain limits neural networks correspond to function spaces that are reproducing kernel Hilbert spaces, these regimes do not capture the properties of the networks used in practice. In contrast, I will show that deep neural networks define suitable reproducing kernel Banach spaces. These spaces are equipped with norms that enforce a form of sparsity, enabling them to adapt to potential latent structures within the input data and their representations. In particular, we derive representer theorems that justify the finite architectures commonly employed in applications.
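To convey the flavor of such a result, the following is a minimal sketch of a representer theorem in the shallow (two-layer) case, in the style of variation-norm constructions from the RKBS literature; the notation (activation sigma, representing measure mu, regularization weight lambda, sample size n) is assumed here for illustration and is not taken from the talk itself.

% Sketch of a shallow representer theorem (assumed notation, not from the talk).
% Infinite-width two-layer networks, normed by the total variation
% of the representing measure:
\[
  \mathcal{F} = \Big\{ f : f(x) = \int_{\mathbb{S}^{d-1} \times \mathbb{R}}
    \sigma(w^\top x - b)\, d\mu(w,b) \Big\},
  \qquad
  \|f\|_{\mathcal{F}} = \inf_{\mu \,:\, f = f_\mu} \|\mu\|_{\mathrm{TV}} .
\]
% Regularized empirical risk minimization over this Banach space,
\[
  \min_{f \in \mathcal{F}} \; \sum_{i=1}^{n} \ell\big(f(x_i), y_i\big)
    + \lambda \, \|f\|_{\mathcal{F}} ,
\]
% admits a minimizer supported on finitely many neurons,
\[
  f^\star(x) = \sum_{j=1}^{N} a_j \, \sigma(w_j^\top x - b_j),
  \qquad N \le n ,
\]
% i.e. a finite-width network whose width is bounded by the sample size.

Under this reading, the bound N <= n is what justifies finite architectures: the infinite-dimensional variational problem is already solved by a network with at most as many neurons as data points.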

Area: IS3 - Mathematics of Machine Learning (Andrea Agazzi)

Keywords: Neural networks, reproducing kernel Banach spaces, representer theorems

The paper is covered by copyright.