Description
In this talk, we approach deep neural networks from the perspective of physics, introducing the bulk–boundary decomposition as a novel framework for analyzing their training dynamics. By treating the learning process as a physical system, we show that stochastic gradient descent admits a Lagrangian formulation whose action separates cleanly into a data-independent bulk action and a data-dependent boundary action. The bulk governs the intrinsic dynamics dictated by the network's architecture, whereas the boundary encapsulates the stochastic interactions driven by the training data. This decomposition reveals the fundamentally local and homogeneous structure underlying deep networks. Building on this insight, we construct a field-theoretic formulation of neural dynamics, offering a new physical paradigm for interpreting modern artificial intelligence.
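Schematically, the claimed separation can be sketched as follows. This is only an illustrative reading of the abstract, and the notation (\phi for the network's degrees of freedom, \mathcal{D} for the training set, and the subscript names) is our assumption, not necessarily the speaker's:

\[
  S[\phi] \;=\; \underbrace{S_{\text{bulk}}[\phi]}_{\text{fixed by the architecture}} \;+\; \underbrace{S_{\text{bdry}}[\phi;\mathcal{D}]}_{\text{sourced by the training data}},
\]

so that, under this reading, extremizing S[\phi] would reproduce the stochastic gradient-descent dynamics, with all data dependence confined to the boundary term.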