Research Focus
I am a PhD candidate at the University of Münster, Germany, focusing on Probability Theory in Machine Learning and Neural Network Training Dynamics.
My current research and PhD thesis focus on understanding the training dynamics of ultra-wide neural networks using probability theory. In particular, I study the emergence of the Neural Tangent Kernel (NTK) regime and the µP regime as characterized by Greg Yang. I show that even in rich settings like the µP regime, the layers decouple stochastically for all training times. Furthermore, I draw parallels to the lazy and rich training regimes and to the phenomenon of Grokking.
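The NTK mentioned above is, empirically, just the Gram matrix of parameter gradients of the network output. A minimal sketch for a one-hidden-layer network, with hand-computed gradients (the toy architecture, width, and all names here are illustrative assumptions, not the thesis setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def ntk_entry(x1, x2, w, a):
    """Empirical NTK entry K(x1, x2) = <grad_theta f(x1), grad_theta f(x2)>
    for the scalar network f(x) = a . tanh(w * x) / sqrt(m)."""
    m = w.shape[0]

    def grads(x):
        act = np.tanh(w * x)                      # hidden activations
        g_a = act / np.sqrt(m)                    # df/da_i
        g_w = a * (1.0 - act**2) * x / np.sqrt(m) # df/dw_i (tanh' = 1 - tanh^2)
        return np.concatenate([g_a, g_w])

    return grads(x1) @ grads(x2)

m = 10_000                        # width; the empirical NTK concentrates as m grows
w = rng.standard_normal(m)        # hidden weights at initialization
a = rng.standard_normal(m)        # output weights at initialization
print(ntk_entry(0.5, 0.8, w, a))
```

In the NTK regime this kernel stays (approximately) frozen at its initialization value throughout training, which is what makes the infinite-width dynamics tractable.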
Publications
Momentum-SAM: Sharpness Aware Minimization without Computational Overhead
The recently proposed optimization algorithm for deep neural networks, Sharpness Aware Minimization (SAM), perturbs parameters by a gradient ascent step before the gradient calculation to guide the optimization into regions of the parameter space with flat loss. While significant generalization improvements, and thus a reduction of overfitting, have been demonstrated, the computational cost is doubled due to the additionally required gradient calculation, making SAM infeasible when computational capacity is limited. Motivated by Nesterov Accelerated Gradient (NAG), we propose Momentum-SAM (MSAM), which perturbs parameters in the direction of the accumulated momentum vector to achieve low sharpness without significant computational overhead or memory demands over SGD or Adam. We evaluate MSAM in detail and reveal insights into the separable mechanisms of NAG, SAM, and MSAM regarding training optimization and generalization.
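The idea in the abstract can be sketched in a few lines: where SAM needs an extra gradient pass to compute the ascent perturbation, MSAM reuses the already-accumulated momentum vector as the perturbation direction, so each step still costs a single gradient evaluation. The following is a schematic sketch, not the paper's reference implementation; the function names, the exact update order, and the hyperparameter values are assumptions:

```python
import numpy as np

def msam_step(params, grad_fn, momentum, lr=0.1, beta=0.9, rho=0.05):
    """One schematic Momentum-SAM step.

    momentum accumulates raw gradients (v = beta*v + g), so it points
    roughly uphill; perturbing along its normalized direction plays the
    role of SAM's ascent step without a second gradient evaluation.
    """
    norm = np.linalg.norm(momentum) + 1e-12       # avoid division by zero
    perturbed = params + rho * momentum / norm    # ascent-like perturbation
    g = grad_fn(perturbed)                        # the single gradient pass
    momentum = beta * momentum + g                # standard momentum update
    params = params - lr * momentum               # descent step at the unperturbed point
    return params, momentum
```

For comparison, SAM would compute `grad_fn(params)` first just to build the perturbation, then `grad_fn(perturbed)` for the update, i.e. two passes per step; the sketch above keeps it at one.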