COLLOQUIUM 656
Data-driven mechanics and physics of materials

21–23 May 2025, Gothenburg, Sweden

Speakers

We will have three plenary talks at the conference:

Surya R. Kalidindi (Georgia Institute of Technology, United States)
Title: Digital twins for mechanics of materials applications

Abstract: This presentation will expound the challenges involved in generating digital twins (DTs) as valuable tools for supporting innovation and providing informed decision support in optimizing the mechanical properties and/or performance of advanced material systems. It will describe the foundational AI/ML (artificial intelligence/machine learning) concepts and frameworks needed to formulate and continuously update the DT of a selected material system. The central challenge stems from the need to establish reliable models for predicting the effective (macroscale) mechanical response of the heterogeneous material system, which is expected to exhibit highly complex, stochastic, nonlinear behavior. This task demands a rigorous statistical treatment (i.e., uncertainty reduction, quantification, and propagation through a network of human-interpretable models) and the fusion of insights extracted from inherently incomplete (i.e., limited available information), uncertain, and disparate data (gathered from diverse sources at different times and fidelities, such as physical experiments, numerical simulations, and domain expertise) used in calibrating multiscale mechanics of materials models. The presentation will illustrate with examples how a suitably designed Bayesian framework combined with emergent AI/ML toolsets can uniquely address this challenge.
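
The kind of Bayesian data fusion sketched in the abstract can be illustrated with a toy conjugate Gaussian update: a single effective property is estimated by combining an expert prior with two data sources of different fidelity. This is a minimal sketch, not the speaker's framework; all numbers, names, and noise levels below are hypothetical.

```python
import numpy as np

# Toy Bayesian fusion: estimate one effective material property (here a
# hypothetical macroscale stiffness, in GPa) from two data sources of
# different fidelity. All numbers are illustrative.
prior_mean, prior_var = 200.0, 50.0**2  # weak expert prior

sources = [
    (np.array([205.0, 198.0, 202.0]), 4.0**2),           # experiments, low noise
    (np.array([210.0, 215.0, 208.0, 212.0]), 10.0**2),   # simulations, higher noise
]

# Conjugate Gaussian update: precisions add, and the posterior mean is a
# precision-weighted combination of the prior and all observations.
post_prec = 1.0 / prior_var
post_mean_num = prior_mean / prior_var
for y, noise_var in sources:
    post_prec += y.size / noise_var
    post_mean_num += y.sum() / noise_var
post_var = 1.0 / post_prec
post_mean = post_mean_num * post_var

print(post_mean, np.sqrt(post_var))  # posterior mean and its uncertainty
```

The higher-fidelity source dominates the posterior, and the posterior variance is smaller than that of any single source, which is the essence of uncertainty reduction through data fusion.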

Bernhard Mehlig (University of Gothenburg, Sweden)
Title: How deep neural networks learn — a dynamical-systems perspective

Abstract: After giving an introduction to deep learning, I will discuss how deep networks learn. This can be analysed and understood, in part, using concepts from dynamical-systems and random-matrix theory [1]. For deep neural networks, the maximal finite-time Lyapunov exponent forms geometrical structures in input space, akin to coherent structures in dynamical systems such as turbulent flow. Ridges of large positive exponents divide input space into different regions that the network associates with different classes in a classification task. The ridges visualise the geometry that deep networks construct in input space, and help to quantify how the learning depends on the network depth and width [2].

[1] Bernhard Mehlig, Machine Learning with Neural Networks, Cambridge University Press (2021).
[2] Storm, Linander, Bec, Gustavsson & Mehlig, Finite-time Lyapunov exponents of deep neural networks, Phys. Rev. Lett. 132 (2024) 057301.
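
As a rough illustration of the finite-time Lyapunov exponents discussed in the abstract, the following numpy sketch propagates the input-output Jacobian through a random fully connected tanh network and reads off the maximal FTLE as the growth rate of the Jacobian's largest singular value, with layer index playing the role of time. The network architecture, its width and depth, and the random input are all hypothetical choices for illustration, not the construction used in [2].

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical random deep network: L layers of width N with tanh activations.
L, N = 10, 50
weights = [rng.normal(0.0, 1.0 / np.sqrt(N), (N, N)) for _ in range(L)]

def forward_with_jacobian(x):
    """Propagate input x and the input-output Jacobian through the network."""
    J = np.eye(N)
    for W in weights:
        z = W @ x
        x = np.tanh(z)
        # Chain rule: the local Jacobian of tanh(W x) is diag(tanh'(z)) @ W.
        J = np.diag(1.0 - np.tanh(z) ** 2) @ W @ J
    return x, J

x0 = rng.normal(size=N)
_, J = forward_with_jacobian(x0)

# Maximal finite-time Lyapunov exponent: exponential growth rate of the
# largest singular value of the Jacobian over L layers.
sigma_max = np.linalg.svd(J, compute_uv=False)[0]
ftle = np.log(sigma_max) / L
print(ftle)
```

Repeating this over a grid of inputs x0 would map out the ridges of large positive exponents that the abstract describes.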


WaiChing Sun (Columbia University, United States)
Title: Interpretable machine learning for solid mechanics: from representation to forecast and back

Abstract: This talk explores the various ways in which high-fidelity constitutive laws for a wide range of solids, such as soil, rock, alloys, and polymer composites, can be represented, and how the choice of representation influences the accuracy, robustness, and data/computational efficiency of computer simulations of solids. To represent material models as points, we adopt a model-free approach that enables physical simulations of material behaviors without a smooth constitutive law. In this case, pointwise stress-strain pairs are selected at the Gauss points of finite elements so as to be compatible with the conservation laws. To represent material models as meshes, we introduce a latent diffusion model in which previous material models and experimental data are used to guide the reverse generation of models. This mesh-based material model is particularly efficient for non-smooth plasticity, where projection onto segments can lead to significantly faster simulations. To represent material models as equations, we use the neural additive model in the projected space of strain measures. This technique enables us to search for hyperelasticity models in high-dimensional space without sacrificing the expressivity of neural networks. We show that the proposed model can reproduce any polynomial of arbitrary order and dimension and thus achieves universal approximation via the Stone-Weierstrass theorem. Through a series of 1D post-hoc symbolic regressions, we obtain symbolic material models that significantly reduce the inference time for hydrocodes. The pros and cons of these techniques for various practical applications will be discussed.
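
The model-free, point-based representation mentioned in the abstract can be illustrated with a minimal solver in the spirit of distance-minimizing data-driven computing: for a two-element bar with prescribed end displacements, the element states alternate between the nearest stress-strain data points and the closest compatible, equilibrated states. The data set, geometry, and metric parameter below are all hypothetical, chosen only to make the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic material data: noisy samples of a linear law sigma = E * eps.
E = 100.0
eps_data = np.linspace(-0.05, 0.05, 201)
sig_data = E * eps_data + rng.normal(0.0, 0.05, eps_data.size)

# Two-element bar, unit lengths and areas, ends held at u = 0 and u = U:
# the element strains are (u1, U - u1) and equilibrium at the free middle
# node forces the two element stresses to be equal.
U = 0.06
C = E  # numerical metric parameter weighting strain vs. stress distance

def nearest(eps, sig):
    """Index of the data point closest to (eps, sig) in the C-weighted metric."""
    d = 0.5 * C * (eps_data - eps) ** 2 + 0.5 / C * (sig_data - sig) ** 2
    return np.argmin(d)

# Start from an arbitrary data assignment, then alternate projections.
idx = np.array([0, 0])
for _ in range(50):
    es, ss = eps_data[idx], sig_data[idx]
    # Compatible strains closest to the assigned data strains.
    u1 = (es[0] - es[1] + U) / 2.0
    eps = np.array([u1, U - u1])
    # Equilibrated (equal) stresses closest to the assigned data stresses.
    sig = np.full(2, ss.mean())
    new_idx = np.array([nearest(eps[e], sig[e]) for e in range(2)])
    if np.array_equal(new_idx, idx):  # assignment stopped changing
        break
    idx = new_idx

print(eps, sig)  # both near eps = 0.03 and sigma near 3.0 for this data set
```

The solver never fits a constitutive law: it only searches the data set, and the converged state satisfies compatibility and equilibrium exactly while staying as close as possible to the measured stress-strain pairs.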