Approximation spaces of deep neural networks

Rémi Gribonval, Gitta Kutyniok, M Nielsen, Felix Voigtlaender

Research output: Working paper › Research

Abstract

We study the expressivity of deep neural networks. Measuring a network’s complexity by its number of connections or by its number of neurons, we consider the class of functions for which the error of best approximation with networks of a given complexity decays at a certain rate when increasing the complexity budget. Using results from classical approximation theory, we show that this class can be endowed with a (quasi)-norm that makes it a linear function space, called approximation space. We establish that allowing the networks to have certain types of “skip connections” does not change the resulting approximation spaces. We also discuss the role of the network’s nonlinearity (also known as activation function) on the resulting spaces, as well as the role of depth. For the popular ReLU nonlinearity and its powers, we relate the newly constructed spaces to classical Besov spaces. The established embeddings highlight that some functions of very low Besov smoothness can nevertheless be well approximated by neural networks, if these networks are sufficiently deep.
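For orientation, the following is a minimal sketch of the classical approximation-space construction the abstract alludes to, written in the style of DeVore and Lorentz; the symbols Σ_n, E(f, Σ_n)_X and the normalization below are standard conventions from approximation theory and are not necessarily identical to the precise definitions used in the paper. Here Σ_n stands for the set of functions realizable by networks whose complexity (number of connections or neurons) is at most n.

\[
E(f, \Sigma_n)_X := \inf_{g \in \Sigma_n} \| f - g \|_X ,
\qquad
\| f \|_{A^\alpha_q(X, \Sigma)} := \Big( \sum_{n \ge 1} \big[ n^\alpha \, E(f, \Sigma_{n-1})_X \big]^q \, \tfrac{1}{n} \Big)^{1/q}
\quad (\alpha > 0,\ 0 < q < \infty),
\]

with the usual modification (a supremum over n) when q = ∞, and with the convention Σ_0 = {0}, so that E(f, Σ_0)_X = ||f||_X. The approximation space A^α_q(X, Σ) collects all f ∈ X with finite quasi-norm; under mild structural assumptions on the sets Σ_n (nestedness, closure under scalar multiplication, and Σ_n + Σ_n ⊂ Σ_{cn} for some constant c), this quasi-norm indeed makes A^α_q(X, Σ) a linear space, which is the mechanism the abstract refers to.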
Original language: English
Number of pages: 63
Publication status: Published - 3 May 2019

Keywords

  • deep neural networks
  • sparsely connected networks
  • approximation spaces
  • Besov spaces
  • direct estimates
  • inverse estimates
  • piecewise polynomials
  • ReLU activation function

Cite this

Gribonval, R., Kutyniok, G., Nielsen, M., & Voigtlaender, F. (2019). Approximation spaces of deep neural networks.