Description
Neural Networks (NNs), the backbone of Deep Learning, define field theories through their output ensembles at initialization. Certain limits of NN architecture give rise to free field theories via the Central Limit Theorem (CLT), whereas other regimes give rise to weakly coupled and non-perturbative field theories via small and large deviations from the CLT, respectively. I will present a systematic construction of free, weakly interacting, and non-perturbative field theories by tuning different attributes of NN architectures, bringing in methods from statistical physics and a new set of Feynman rules. Certain interacting field theories of our choice can be engineered exactly at initialization by parametrically deforming the distributions over stochastic variables in NN architectures. As an example, I will present the construction of $\lambda\phi^4$ scalar field theory by breaking the statistical independence of NN parameters in the infinite-width limit. Lastly, I will introduce free and interacting regimes in Grassmann field theories defined via initialized Grassmann NN architectures.
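The free-theory and near-free regimes mentioned above can be illustrated numerically: for a single-hidden-layer network with i.i.d. Gaussian parameters and $1/\sqrt{\text{width}}$ output scaling, the CLT drives the output distribution at any fixed input toward a Gaussian, and the connected four-point function (excess kurtosis here) shrinks roughly as 1/width, the "small deviation from the CLT" regime associated with weakly coupled theories. Below is a minimal NumPy sketch under these assumptions; the tanh activation, unit-variance parameters, single input point, and ensemble size are illustrative choices, not details taken from the talk.

```python
import numpy as np

def sample_outputs(n_nets, width, x, rng):
    """Sample outputs of single-hidden-layer nets at initialization.

    f(x) = (1/sqrt(width)) * sum_i b_i * tanh(w_i * x),
    with w_i, b_i ~ N(0, 1) i.i.d.  Returns an array of shape (n_nets,).
    """
    W = rng.normal(size=(n_nets, width))   # input-to-hidden weights
    b = rng.normal(size=(n_nets, width))   # hidden-to-output weights
    return (b * np.tanh(W * x)).sum(axis=1) / np.sqrt(width)

def excess_kurtosis(samples):
    """Connected four-point function at one input, normalized by variance^2.

    Vanishes for a Gaussian (free) ensemble; finite-width corrections
    are expected to decay roughly as 1/width.
    """
    c = samples - samples.mean()
    var = (c ** 2).mean()
    return (c ** 4).mean() / var ** 2 - 3.0

rng = np.random.default_rng(0)
x = 0.3  # single illustrative input point
for width in (2, 10, 100):
    f = sample_outputs(100_000, width, x, rng)
    print(f"width={width:4d}  excess kurtosis ~ {excess_kurtosis(f):+.4f}")
```

Running the sketch shows the excess kurtosis dropping toward zero as the width grows, consistent with the Gaussian (free) limit at infinite width and weak non-Gaussianity at large but finite width.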