Maxim Bergues

Werner-Heisenberg-Gymnasium

 

Title of the research project:

School: TUM School of Computation, Information and Technology

Department: Department of Mathematics

Research group: Professur für die Numerik komplexer Systeme

Supervision: Prof. Dr. Oliver Junge

Abstract of the research project

Differential equations are central to modeling physical systems, yet classical numerical solvers trade computational cost for accuracy. In this work, we investigate whether contemporary neural architectures can approximate the time evolution of chaotic systems described by differential equations with competitive accuracy at lower computational cost. We train feed-forward neural networks and transformer models to predict the evolution of (i) the Lorenz system and (ii) the one-dimensional Kuramoto–Sivashinsky (KS) equation, using fourth-order Runge–Kutta (RK4) simulations as ground truth. Models are evaluated on one-step and multi-step errors, error growth rates, and long-term stability. We find that for one-step predictions, the feed-forward networks achieve a mean squared error (MSE) comparable to that of RK4 in two dimensions at the same step size. For multi-step predictions, transformers show low short-term accuracy (26% relative error) but maintain a bounded mean error over long trajectories, demonstrating stable long-term behavior. We quantify computational cost and accuracy, and identify training protocols that yield usable models on consumer-grade hardware.
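To illustrate the role of RK4 as ground truth, the following Python sketch integrates the Lorenz system with a classical fourth-order Runge–Kutta step and builds (state, next state) pairs of the kind a one-step surrogate model would be trained and evaluated on. The step size, trajectory length, and initial condition are illustrative assumptions; the abstract does not specify the values used in the project, and the standard Lorenz-63 parameters are used here.

```python
import numpy as np

# Standard Lorenz-63 parameters; step size H is an assumption for illustration.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
H = 0.01

def lorenz_rhs(x):
    """Right-hand side of the Lorenz system dx/dt = f(x)."""
    return np.array([
        SIGMA * (x[1] - x[0]),
        x[0] * (RHO - x[2]) - x[1],
        x[0] * x[1] - BETA * x[2],
    ])

def rk4_step(x, h=H):
    """One classical fourth-order Runge-Kutta step."""
    k1 = lorenz_rhs(x)
    k2 = lorenz_rhs(x + 0.5 * h * k1)
    k3 = lorenz_rhs(x + 0.5 * h * k2)
    k4 = lorenz_rhs(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def make_one_step_dataset(x0, n_steps):
    """Generate (state, next state) pairs as ground-truth training data."""
    states = np.empty((n_steps + 1, 3))
    states[0] = x0
    for i in range(n_steps):
        states[i + 1] = rk4_step(states[i])
    return states[:-1], states[1:]

def one_step_mse(predict, inputs, targets):
    """Mean squared one-step error of a surrogate model `predict`."""
    preds = np.array([predict(x) for x in inputs])
    return float(np.mean((preds - targets) ** 2))

if __name__ == "__main__":
    # Hypothetical setup: a trained network would replace `rk4_step` in the
    # call to one_step_mse; against this dataset, RK4 itself has zero error.
    X, Y = make_one_step_dataset(np.array([1.0, 1.0, 1.0]), n_steps=10_000)
    print(one_step_mse(rk4_step, X[:100], Y[:100]))
```

Multi-step evaluation follows the same pattern: the model's own predictions are fed back in autoregressively, and the resulting trajectory is compared against the RK4 reference to measure error growth and long-term stability.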