Munich, Germany & Beijing, China – Jointly optimizing multiple loss terms is a pervasive challenge in numerous deep learning applications, from Physics-Informed Neural Networks (PINNs) to Multi-Task Learning (MTL) and Continual Learning (CL). However, conflicting gradients between these loss terms often lead to optimization stagnation, trapping models in local optima and even causing training failures. A collaborative research team from the Technical University of Munich (TUM) and Peking University (PKU) is addressing this critical issue with a novel approach called ConFIG, aiming to pave the way for conflict-free training in deep learning.
The research, spotlighted at ICLR 2025, is authored by Qiang Liu, a doctoral student at TUM, and Mengyu Chu, an Assistant Professor at PKU specializing in physics-enhanced deep learning algorithms designed to improve the flexibility, accuracy, and generalization of numerical simulations. The corresponding author is Professor Nils Thuerey of TUM, a renowned expert in the intersection of deep learning and physical simulation, particularly fluid dynamics. Professor Thuerey’s contributions to efficient fluid effects simulation earned him an Academy Award for Technical Achievement. His current research focuses on differentiable physics simulations and advanced generative models for physical applications.
The core problem addressed by the TUM-PKU team lies in the conflicts that arise when multiple loss functions are minimized simultaneously: when the gradients of individual loss terms point in opposing directions, a parameter update that reduces one loss can increase another. In scenarios like PINNs, researchers often rely on manually tuned loss weights to mitigate these conflicts. While various weighting strategies have been proposed, based for example on numerical stiffness, differences in loss convergence rates, or neural network initialization, no consensus on an optimal weighting strategy has emerged.
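To make the notion of "conflicting gradients" concrete, here is a minimal numerical sketch (not the ConFIG algorithm itself, whose details are described in the paper): two hypothetical loss gradients with negative cosine similarity conflict, and simply summing them lets the larger gradient dominate, so the combined step can still move against the smaller loss's descent direction.

```python
import numpy as np

# Hypothetical flattened gradients of two loss terms (illustrative values).
g1 = np.array([10.0, 0.0])  # gradient of loss term 1 (large magnitude)
g2 = np.array([-1.0, 1.0])  # gradient of loss term 2 (small magnitude)

def cosine(a, b):
    """Cosine similarity between two gradient vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Negative cosine similarity: the two loss terms pull the parameters
# in opposing directions.
print(cosine(g1, g2))  # negative -> the gradients conflict

# Naive summation: the larger gradient dominates, and the combined
# update still points against g2's descent direction, i.e. it locally
# increases loss term 2.
g_sum = g1 + g2
print(float(g_sum @ g2))  # negative -> the summed step hurts loss 2
```

This is exactly the failure mode that motivates conflict-free update methods such as ConFIG: the goal is an update direction that makes progress on every loss term at once, rather than letting one term's gradient overwhelm the others.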
[Further details about the ConFIG method and its potential impact are expected to be revealed at ICLR 2025.]
The Team Behind ConFIG:
- Qiang Liu (TUM): First Author, Ph.D. Candidate at the Technical University of Munich.
- Mengyu Chu (PKU): Second Author, Assistant Professor at Peking University, specializing in physics-enhanced deep learning.
- Nils Thuerey (TUM): Corresponding Author, Professor at the Technical University of Munich, expert in deep learning and physical simulation.
Why This Matters:
The ability to effectively train deep learning models with multiple, potentially conflicting, objectives is crucial for advancing research in various fields. ConFIG’s promise of conflict-free training could unlock significant improvements in:
- Physics-Informed Neural Networks (PINNs): Enabling more accurate and efficient solutions to complex physical problems.
- Multi-Task Learning (MTL): Allowing models to learn multiple tasks simultaneously, improving generalization and reducing training time.
- Continual Learning (CL): Facilitating the development of models that can learn new tasks without forgetting previously learned ones.
The ICLR 2025 presentation of ConFIG is highly anticipated, with researchers eager to learn more about the method’s implementation, performance, and potential to revolutionize multi-objective deep learning training.
(Note: This article is based on preliminary information and will be updated with more details following the ICLR 2025 presentation.)