Research

Current and previous research projects.

Digitales Lehren und Lernen an der Universität Stuttgart: Boost. Skills. Support

  • Co-PI, Measure M2.2 "Daten und Maschinelles Lernen" (Data and Machine Learning)
  • Duration: 10/2021 - 09/2024

digit@L

A data-driven optimization framework for improving the adaptation of the neuromuscular system in brain pathology

Joint project with Oliver Röhrle and Miriam Schulte.
Start: 10/2021
End: 09/2024

MARDI

Mathematical Research Data Initiative

  • Co-Spokesperson of TA4 "Cooperation with other disciplines"
  • Start: 2021
  • Funding period: until 2026

Mathematical research data is vast, complex and multifaceted. It emerges within mathematical sciences but also in other scientific areas such as physics, chemistry, life sciences and the Arts. Standardised formats, data interoperability and application programming interfaces need to be developed to ensure the ease of use of data across broad disciplines.

With this in mind, the Mathematical Research Data Initiative (MaRDI) is being established as the consortium initiative of the mathematical sciences. Its mission is to:

  1. develop a robust Mathematical Research Data Infrastructure that would be useful within mathematics and other disciplines as well as non-scientific fields.
  2. set standards and confirmable workflows for certified Mathematical Research Data and
  3. provide services to both the mathematical and wider scientific community.

All of these are essential for creating and establishing collaborative platforms that are crucial for knowledge dissemination, quality control and scientific discourse.

MaRDI’s Vision:
Building a community that embraces a FAIR data culture and research workflow through the sustainable realization of MaRDI findings.

Renewable Energies and Data Science

Solvers for Hybrid Modelling in Porous Media – Design, Implementation, Analysis

  • International project between the Cluster of Excellence SimTech and the University of Bergen
  • Joint project with Prof. Dr. Christian Rohde and Prof. Dr. Adrian Florin Radu
  • Start: October 2021
  • Funding period: until October 2024

We explore solver aspects for PDE- and data-based numerical schemes that provide discretisations for coupled multi-phase flow problems. This includes free-flow/porous-media-flow coupling and coupled porous-media domains with two-phase flow hierarchies. The mathematical models are fundamental for large-scale simulations in the context of geothermal energy storage and production as well as other green-energy settings.

We aim at the development of a new class of solvers that combine physics- and data-based preconditioning techniques, and pursue both concepts based on the monolithic and the partitioned/splitting paradigms. Preconditioners will either be tailored to the differential operators in the underlying equations in a block-like fashion or be induced by Schwarz-type domain decomposition methods. We strive for data-driven techniques that act as surrogate models or determine numerical parameters optimising the performance of the discretisation on the fly.

Finally, we aim to underpin our research by rigorous analysis, which is challenged by the combination of PDE- and data-based modelling. This project is a joint effort with Florin Radu from the University of Bergen.
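As a toy illustration of the Schwarz-type preconditioners mentioned above (a sketch only: the 1D Laplacian, subdomain sizes and overlap are invented for illustration and are not the project's actual operators), a one-level additive Schwarz preconditioner can be assembled from overlapping local solves and handed to a Krylov method:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Toy stand-in for a discretised operator: a 1D Laplacian with Dirichlet BCs.
n = 64
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

# Overlapping index blocks; subdomain size and overlap are illustrative choices.
size, overlap = 16, 4
subdomains = [np.arange(max(0, s - overlap), min(n, s + size + overlap))
              for s in range(0, n, size)]

def additive_schwarz(r):
    """One-level additive Schwarz: sum of exact solves on overlapping blocks."""
    z = np.zeros_like(r)
    for idx in subdomains:
        z[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    return z

M = LinearOperator((n, n), matvec=additive_schwarz)
x, info = cg(A, b, M=M)   # Schwarz-preconditioned conjugate gradients
```

A production variant would add a coarse-level correction so that iteration counts stay bounded as the number of subdomains grows; the sketch omits this.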

Provenance-integrated adaptation of numerical approximations of differential equation models

  • Project application within the Cluster of Excellence SimTech
  • Joint project with Prof. Melanie Herschel
  • Start: March 2020
  • Funding period: until May 2023
  • Funding code: PN 7-3

This project explores how to leverage metadata collected a priori and during the execution of a simulation of a differential equation model, with the goal of using these metadata to adapt, improve and predict the simulation. Such metadata, commonly referred to as provenance, include ‘low-level’ performance metrics obtained by monitoring convergence rate, runtime or memory consumption, as well as novel ‘high-level’ measures and derived metrics that help quantify the estimated difficulty of a solution or the similarity of tasks/components in different simulations. We will contribute novel methods and measures to capture these metadata, as well as corresponding analysis algorithms, to ultimately advance the fundamental problems of deciding when a (possibly expensive) provenance capture is useful to improve overall performance or to enable more informed design decisions, and of adapting parameter settings to a given problem. The results of this project will pave the way towards multi-adaptive simulations, in particular in project networks 5 and 7. Furthermore, the project delivers input for SimTech’s openDASH data and software hub, and thus contributes towards the reproducibility and traceability of simulations.
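A minimal sketch of what such ‘low-level’ provenance capture could look like (hypothetical code, not the project's implementation; the Jacobi solver and the recorded fields are illustrative assumptions): the solver is instrumented to record per-iteration residuals, runtime, and a derived convergence-rate metric.

```python
import time
import numpy as np

def solve_with_provenance(A, b, tol=1e-8, max_iter=500):
    """Jacobi iteration instrumented with 'low-level' provenance:
    per-iteration residual norms, runtime, and a derived convergence rate."""
    prov = {"solver": "jacobi", "n": len(b), "residuals": []}
    D = np.diag(A)                 # diagonal part
    R = A - np.diag(D)             # off-diagonal remainder
    x = np.zeros_like(b)
    t0 = time.perf_counter()
    for _ in range(max_iter):
        x = (b - R @ x) / D        # one Jacobi sweep
        res = float(np.linalg.norm(b - A @ x))
        prov["residuals"].append(res)
        if res < tol:
            break
    prov["runtime_s"] = time.perf_counter() - t0
    prov["iterations"] = len(prov["residuals"])
    r = prov["residuals"]
    prov["conv_rate"] = (r[-1] / r[0]) ** (1.0 / max(len(r) - 1, 1))  # mean reduction factor
    return x, prov

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small diagonally dominant test system
b = np.array([1.0, 2.0])
x, prov = solve_with_provenance(A, b)
```

Derived metrics of this kind (here the mean residual-reduction factor) are the sort of input an adaptive layer could use to decide whether a more expensive capture, or a different solver configuration, is worthwhile.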

On-The-Fly Model Modification, Error Control, and Simulation Adaptivity

  • Project application within the Cluster of Excellence SimTech
  • Joint work with Prof. Miriam Mehl
  • Start: October 2019
  • Funding period: until March 2023
  • Funding code: PN 5-1

The goal of this project is to analyse, realise and combine unconventional numerical and high-performance computing methods to develop truly adaptive simulation software, with an important focus on novel optimality criteria. Together with partner projects, the entire hardware spectrum will be targeted, from conventional CPU-based clusters and accelerator hardware to mobile, resource-constrained devices. Optimisation in the underlying multi-dimensional discrete-continuous space, spanned by multiple choices ranging from model types over discretisation methods and solvers via variable floating point formats to implementation, is going to be developed incrementally in this first project phase. The core results of the project will be (a) highly efficient simulation software modules incorporating a wide range of hardware characteristics in terms of compute and communication, and (b) the definition of a high-dimensional parameter space, a selection of objective functions, and initial optimisation techniques. This will serve as a basis for advancing the first research question of PN 5 (accuracy-resource trade-off) towards the fourth one (full accuracy-precision-resource trade-off) in a follow-up project.

Completed projects

  • Joint application within the DFG-ANR French-German call, together with D. Komatitsch (Marseille) and S. Chevrot (Toulouse)
  • Start: November 2017
  • Funding period: 3 years
  • Funding code: GO 1758/4-1

Details:

Imaging what is inaccessible to direct observation, based on elastic waves, is a major issue with a wide range of applications of high societal and economic impact. In this project we aim at drastically improving the resolution of seismic tomography to produce enhanced, finely-resolved images: regional seismological tomography at unprecedented resolution, in particular for passive seismic acquisition by dense arrays. We go beyond classical passive imaging approaches, such as ambient noise tomography or receiver function migration, by performing high-frequency full waveform imaging of the shallow or deep Earth, to help investigate the deep roots of continental orogens or the extended fault sources of large earthquakes. To achieve this goal, we extend our imaging techniques to high frequencies, and derive data-driven simulation schemes and novel techniques for highly unstructured irregular problems in high-performance computing.

Coupling porous-media and free flow is a common denominator of CFD research within SFB 1313, with a multitude of application domains. To address these problems computationally, the ‘coupled until proven uncoupled’ paradigm has been established. One way to achieve full coupling also in a numerical scheme is the monolithic approach, which is particularly attractive if the porous medium is described by some averaged formulation, such as Darcy’s law in the simplest case: all underlying physics, such as variants of the Navier-Stokes equations in the free-flow domain and Darcy-like equations for the porous medium, are assembled into one nonlinear system. However, it turns out that this system, and linearised variants thereof, are notoriously hard to solve with classical iterative schemes for linear(ised) systems. This is due to the extreme difference in the rates at which the flow evolves in the two domains, and to the difference in scale of the domains of interest. In this auxiliary project, we primarily examine novel, specifically tailored preconditioning techniques to solve monolithic systems with iterative schemes. In addition, we examine variants of monolithic Newton-like approaches for the nonlinear loop.
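To illustrate why tailored preconditioning helps for such monolithic systems, here is a deliberately simplified sketch: two 1D diffusion blocks with vastly different coefficients stand in for the free-flow and Darcy operators (all matrices and the coupling strength are invented for illustration), and a block-diagonal preconditioner that inverts each physics block separately is compared against unpreconditioned GMRES.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100

def lap(k):
    """1D diffusion block with coefficient k (toy stand-in for one physics domain)."""
    return k * sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

A_ff = lap(1.0)                # "free-flow" block
A_pm = lap(1e-6)               # "porous-medium" block on a vastly different scale
C = 1e-11 * sp.eye(n)          # weak coupling between the domains
K = sp.bmat([[A_ff, C], [C.T, A_pm]], format="csr")   # monolithic system
b = np.ones(2 * n)

# Block-diagonal preconditioner: invert each physics block separately.
solve_ff = spla.factorized(A_ff.tocsc())
solve_pm = spla.factorized(A_pm.tocsc())
M = spla.LinearOperator((2 * n, 2 * n),
                        matvec=lambda r: np.concatenate([solve_ff(r[:n]),
                                                         solve_pm(r[n:])]))

iters = {"plain": 0, "block": 0}
def count(key):
    def cb(pr_norm): iters[key] += 1
    return cb

x_plain, info_plain = spla.gmres(K, b, restart=50, maxiter=5,
                                 callback=count("plain"), callback_type="pr_norm")
x_block, info_block = spla.gmres(K, b, M=M, restart=50, maxiter=5,
                                 callback=count("block"), callback_type="pr_norm")
# the block-preconditioned solve converges in a handful of iterations,
# while the unpreconditioned one stagnates on this badly scaled system
```

Real Stokes-Darcy preconditioners are considerably more involved (the coupling blocks cannot always be ignored), but the scale disparity the paragraph describes is already visible in this toy setting.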

  • Joint application to the "Baden-Württemberg Stiftung gGmbH" together with Oliver Röhrle (Institute of Applied Mechanics), Miriam Mehl (Institute for Parallel and Distributed Systems) and Thomas Ertl (Visualisation Research Centre) at the University of Stuttgart.
  • Start: 11/2016
  • Funding period: 3 years

A realistic digital model of a human would provide tremendous benefits in many ways; it would certainly constitute a quantum leap in personalised healthcare. This grand vision is shared between many researchers worldwide, including the Cluster of Excellence for Simulation Technology (SimTech) at the University of Stuttgart through SimTech’s “Overall Human Model” vision. With this proposal, we want to contribute to this grand vision by developing realistic models of the neuromuscular system. This poses not only significant modelling and computational challenges, but also challenges in visualising the simulation results.

More specifically, we employ HPC to extend existing computational and modelling frameworks, which are currently only running on small-scale clusters, to achieve realistic large-scale, biophysical, multi-scale simulations of skeletal muscle mechanics. The aim is to build up a 3D continuum-mechanical skeletal muscle model that contains a realistic number of muscle fibres (~1,000,000) and is controlled by a model of the central nervous system. To do so, the research proposal unifies model improvements, HPC on large-scale (heterogeneous) hardware architectures, efficient numerical techniques, dynamic load balancing and advanced visualisation techniques. In the sense of the overall grand vision, this model provides the basis for many physiological investigations. Further, extensions to this project are simulations of musculoskeletal systems containing multiple synergistic and antagonistic muscles revealing internal loading conditions.

 
  • Joint application within the Priority Programme "Software for Exascale Computing" (SPP-1648), together with P. Bastian (Heidelberg), S. Turek (Dortmund), M. Ohlberger and Ch. Engwer (Münster), O. Ippisch (Clausthal) and O. Iliev (Kaiserslautern)
  • Start: 01/2016
  • Funding period: 3 years
  • Funding code: GO 1758/2-2
  • Link: http://www.sppexa.de/

In this interdisciplinary project consisting of computer scientists, mathematicians and domain experts from the open source projects DUNE and FEAST we develop, analyse, implement and optimise new numerical algorithms and software for the scalable solution of partial differential equations (PDEs) on future exascale systems exhibiting a heterogeneous massively parallel architecture.
The DUNE software framework combines flexibility and generality with high efficiency through the use of state-of-the-art programming techniques and interchangeable components conforming to a common interface. Incorporating the hardware-oriented numerical techniques of the FEAST project into these components allows us, already during the first funding phase, to optimally exploit the performance of heterogeneous architectures with their three-level parallelism (SIMD vectorisation, multithreading, message passing), while at the same time supporting a variety of different applications from the steadily growing DUNE user community.
In order to cope with the increased probability of hardware failures, a central aim in the second funding period is to add flexible, application-oriented resilience capabilities to the framework which, based on a common infrastructure, include ready-to-use self-stabilising iterative solvers on the one hand and global and local checkpoint-restart techniques on the other. Continuous improvement of the underlying hardware-oriented numerical methods is achieved by combining matrix-free, sum-factorisation-based high-order discontinuous Galerkin discretisations with matrix-based algebraic multigrid low-order subspace correction schemes, resulting in solvers that are both robust and performant. On top of that, extreme scalability is facilitated by exploiting the massive coarse-grained parallelism offered by multiscale and uncertainty quantification methods, where we now focus on the adaptive choice of the coarse/fine scale and the overlap region as well as the combination of local reduced basis multiscale methods and the multilevel Monte Carlo algorithm.
As an integral part of the project we propose to bring together our scalable PDE solver components in a next-generation land-surface model including subsurface flow, vegetation, evaporation and surface runoff. This development is carried out in close cooperation with the Helmholtz Centre for Environmental Research (UFZ) in Halle, which provides the additional modelling expertise as well as measurement data from multiple sources (experimental sites, geophysical data, remote sensing, ...). Together we set out to provide the environmental research community with an open source tool that contributes to the solution of problems with high social relevance.

 
  • Project application within the Cluster of Excellence SimTech
  • Start: 11/2017
  • Funding period: until Dec. 31, 2018
  • Funding code: PN 2-24

Mixed-precision schemes are favourable on all current and future architectures, as they potentially allow the bulk of computations to be executed in lower than working precision, resulting in more efficient exploitation of hardware resources in terms of both time-to-solution and energy. By halving the storage per value, they effectively double the working set size that fits into the cache hierarchy, and double the effective memory throughput. However, mixed-precision schemes are currently not widely used because of the a-priori analysis they require. In this project, we derive, analyse and implement a set of tools towards the automatic selection of the “best mixture” of floating point formats in complex applications for guaranteed accuracy.
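The classic example of such a scheme is mixed-precision iterative refinement, sketched below (an illustration of the general idea, not this project's tooling; the test matrix is an invented, well-conditioned example): the solves run in float32, while residuals and corrections are accumulated in float64.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Iterative refinement: the bulk of the work (the solves) runs in
    float32, while residuals and corrections accumulate in float64."""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                    # residual in float64
        d = np.linalg.solve(A32, r.astype(np.float32))   # low-precision correction
        x += d.astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) + 50.0 * np.eye(50)    # well-conditioned test matrix
b = rng.standard_normal(50)
x = mixed_precision_solve(A, b)
# the final residual reaches float64-level accuracy although every solve ran in float32
```

For well-conditioned systems this recovers working-precision accuracy after a few refinement steps; deciding automatically when and where such a scheme is safe is exactly the a-priori analysis the paragraph refers to.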

  • Project application within the Cluster of Excellence SimTech
  • Start: 11/2015
  • Funding period: 3 years up to Oct. 31, 2017
  • Funding code: PN2-17

As the hardware at the bottom of the simulation pipeline becomes increasingly fine-grained parallel and heterogeneous, a multitude of portability, performance and usability challenges arise. Several of these will be addressed in this project. The overall intention is to develop efficient numerical methodology, data structures and implementation techniques that enable better future-proof exploitation of the underlying hardware for adaptive problems. We will mainly focus on aspects arising in finite element modelling on GPUs as current example hardware, without neglecting the general applicability of the developed techniques: As the implementation will be done in the DUNE software framework, these “low-level” improvements will be available for a wide range of applications, within and beyond the SimTech Cluster of Excellence.

  • Joint application in cooperation with S. Turek (TU Dortmund)
  • Start: 08/2013
  • Funding period: 3 years
  • Funding code: GO 1758/3-1

This joint project examines numerical methods for massively-parallel multigrid methods for finite element discretisations of variable order. Special emphasis is placed on techniques that enable robustness and uniform scalability on modern heterogeneous hardware architectures, in particular on hybrid systems comprising conventional CPU-like processors combined with throughput-optimised accelerator designs such as graphics processors (GPUs). The goal of uniform scalability is very challenging and embraces aspects of numerical scalability (convergence rates independent of problem size and problem partitioning), the minimisation or even avoidance of sequential components on all parallelism layers of hybrid systems, the equal degree of utilisation of all compute resources, and numerically stable, robust, asynchronous and fault-tolerant parallel execution.

In addition, novel numerical methods along with suitable implementation techniques are developed and analysed (hardware-oriented numerics), so that discretisation and solution techniques that are efficient simultaneously with respect to numerics, parallelism and hardware can be provided for a broad range of flow problems. Joint work in this research project is incorporated both in independently usable libraries and in the common FEAST software package, which has been developed intensively over the last years, so that a numerically robust, scalable and recursively configurable methodology for massively-parallel multigrid methods on heterogeneous hardware platforms can be realised and analysed.
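A minimal two-grid cycle for the 1D Poisson problem sketches the multigrid building blocks the project is concerned with (illustrative toy code, unrelated to the FEAST implementation; grid size and smoother settings are invented for the example):

```python
import numpy as np

def jacobi(u, f, h, sweeps, omega=2/3):
    """Damped Jacobi smoother for -u'' = f with homogeneous Dirichlet BCs."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid_cycle(u, f, h):
    """One two-grid cycle: pre-smooth, coarse-grid correction, post-smooth."""
    u = jacobi(u, f, h, sweeps=3)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)  # fine-grid residual
    # Full-weighting restriction to the coarse grid (every other point)
    rc = r[::2].copy()
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    # Exact coarse solve (a real multigrid recurses here instead)
    nc, hc = rc.size, 2 * h
    Ac = (2 * np.eye(nc - 2) - np.eye(nc - 2, k=1) - np.eye(nc - 2, k=-1)) / hc**2
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])
    # Linear-interpolation prolongation back to the fine grid
    u += np.interp(np.arange(u.size), np.arange(u.size)[::2], ec)
    return jacobi(u, f, h, sweeps=3)

n = 65                               # fine grid points (odd, so coarsening nests)
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)     # exact solution: u(x) = sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid_cycle(u, f, h)
```

Recursing on the coarse solve instead of solving it exactly turns this into a V-cycle; the project's concerns (variable order, parallel partitioning, heterogeneous hardware) sit on top of exactly these components.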

  • Joint application within the Priority Programme "Software for Exascale Computing" (SPP-1648), together with P. Bastian and O. Ippisch (Heidelberg), S. Turek (Dortmund), M. Ohlberger and Ch. Engwer (Münster) and O. Iliev (Kaiserslautern)
  • Start: 01/2013
  • Funding period: 3 years
  • Funding code: GO 1758/2-1
  • Link: http://www.sppexa.de/

The aim of this interdisciplinary project, bringing together experts from the open source projects DUNE and FEAST, is to develop, analyse and realise new numerical, algorithmic and computational techniques to enable exascale computing for partial differential equations (PDEs) on heterogeneous massively parallel architectures. As the lifetime of PDE software is typically much longer than that of hardware, flexible but nevertheless hardware-specific software components are developed based on the DUNE platform, which uses state-of-the-art programming techniques to achieve great flexibility and high efficiency to the advantage of a steadily growing user community. Hardware-oriented numerical techniques of the FEAST project are integrated to optimally exploit the performance of the local (heterogeneous) nodes (multi-core multi-purpose CPUs, special-purpose acceleration units like GPUs, etc.) w.r.t. the specific structures of the given PDEs. The introduction of a hardware abstraction layer will make it possible to perform the necessary hardware-specific changes of essential components at compile time, with at most minimal changes to the application code. Further adding to the benefits of combining the strengths of DUNE and FEAST, modern numerical discretisations and solver approaches like adaptive multigrid, localised spectral methods (e.g. higher-order discontinuous Galerkin schemes) and a hybrid parallel grid will increase the scalability. The EXA-DUNE toolbox is extended from petascale towards exascale computing by introducing multilevel Monte Carlo methods for uncertainty quantification and multiscale techniques, which both add an additional layer of coarse-grained parallelism, as they require the solution of many weakly coupled problems.
The new methodologies and software concepts are applied to flow and transport processes in porous media (fuel cells, CO2 sequestration, large scale water transport), which are grand challenge problems of high relevance to society.

  • Individual proposal for a kick-off funding within the 2nd announcement of the Mercator Research Center Ruhr (MERCUR)
  • Start: 06/2013
  • Funding period: 1 year
  • Funding code: An-2013-0019

Modern computer systems are increasingly heterogeneous, parallel, dynamic and unreliable. To efficiently exploit their potential performance, the underlying numerical and algorithmic methodology has to be explicitly adapted and extended. The scope of this project is the numerical simulation of partial differential equations, in particular the combination of finite element methods with hierarchical multigrid methods. Components of this type often dominate compute times in modern, fast and accurate simulation schemes for application problems, e.g. in continuum mechanics. While in the past few years significant progress has been achieved in terms of uniform scalability, fine-granular parallelisation (GPUs) and runtime efficiency, we now tackle the even more complex challenges of fault tolerance, asynchronicity and communication avoidance: resilience in case of partial hardware faults will be integrated directly into more robust numerical schemes, and the dramatically increasing disparity between raw floating point performance and data transfers between heterogeneous memory hierarchies mandates substantial research efforts to develop solution methods that are simultaneously flexible and highly efficient. All implementations are integrated into open-source software and are thus widely available for application codes.

  • Individual proposal within the "CUDA Teaching Program", NVIDIA Research
  • Start: 07/2012
  • Funding period: 1 year
  • Funding code: n/a
Dominik Göddeke

Prof. Dr. rer. nat.

Head of Institute and Head of Group
