Details about our current projects and research topics

AFEM Analysis: Complexity and Convergence Analysis of Adaptive Methods


The aim of this project is the design, mathematical analysis, and implementation of rate-optimal adaptive finite element methods for the efficient simulation of parabolic problems. We focus on the design, convergence-rate analysis, and comparison of different adaptive discretizations and semidiscretizations. The analysis will deepen the mathematical insight into the properties of adaptive finite element methods that are needed to guarantee rate-optimal convergence. The derived algorithms will be applied to a variety of problems in order to investigate their robustness and performance.

Researchers: F.D. Gaspoz, K.G. Siebert

Cooperations: C. Kreuzer (Bochum), A. Veeser (Milano)

This project is funded by the DFG with reference number SI-814/4-1.

During the last decade there has been substantial progress in the analysis of adaptive finite element methods. This includes a posteriori error estimation as well as the analysis of convergence and optimality of the adaptive algorithm. Our aim is to mathematically prove, for a large class of problems, convergence of the standard adaptive methods that are used in practice. In addition, we are investigating optimal error decay in terms of the number of degrees of freedom.
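The standard adaptive method analysed here is the loop SOLVE → ESTIMATE → MARK → REFINE, where the marking step is typically Dörfler (bulk) marking. As a rough illustration (a minimal sketch, not the project's implementation; all names are ours), the marking step selects elements with the largest error indicators until a fixed fraction θ of the total estimated error is covered:

```python
def doerfler_mark(indicators, theta=0.5):
    """Greedy Doerfler (bulk) marking: return indices of a set M with
    sum_{T in M} eta_T^2 >= theta * sum_T eta_T^2."""
    total = sum(eta ** 2 for eta in indicators)
    # visit elements in order of decreasing indicator
    order = sorted(range(len(indicators)),
                   key=lambda i: indicators[i], reverse=True)
    marked, acc = [], 0.0
    for i in order:
        marked.append(i)
        acc += indicators[i] ** 2
        if acc >= theta * total:
            break
    return marked

# toy error indicators on six "elements"
eta = [0.9, 0.1, 0.4, 0.05, 0.6, 0.2]
print(doerfler_mark(eta, theta=0.7))  # -> [0, 4]
```

The marked elements are then refined, the problem is re-solved, and the loop repeats; the parameter θ steers how aggressively the mesh is adapted.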

AFEM Software: Design and Implementation of Advanced Simulation Tools for Diffusive Processes

Partitioned Grid

In this project, we develop and improve open-source software infrastructure for parallel-adaptive finite element simulations on unstructured grids. Our main focus is on diffusion-dominated, time-dependent and time-independent multi-field processes. For this problem class, which requires conforming discretizations, parallel-adaptive simulation environments for unstructured grids are not yet well established. The ultimate goal of this project is to develop a mathematically grounded software infrastructure, based on DUNE and DUNE-FEM, which also enables other scientists to perform parallel-adaptive FEM simulations with low overhead.

Researchers: M. Alkämper, C.J. Heine, S. Hilb, K.G. Siebert

This project is funded by SimTech.

AFEM Application: Validation and Use of Modern Methods in an Interdisciplinary Context

Single Lens DNAPL Infiltration

We are going to develop, analyze, and implement a discontinuous Galerkin finite element solver for the efficient simulation of multi-phase flows in strongly heterogeneous porous media. We aim at a fully implicit, locally conservative, higher-order discretization on adaptively generated meshes. The implementation is based on DUNE and DUMUX. The project participates in the research programme of the SimTech Cluster of Excellence. It is located at the Chair for High Performance Computing at the Institute for Applied Analysis and Numerical Simulation, and is a collaboration with the Institute for Hydromechanics and Modelling of Hydrosystems.

Researchers: B. Kane, K.G. Siebert

This project is funded by SimTech.

Adaptive Mesh of an Image

Non-smooth minimisation problems occur, for example, in image reconstruction, where the minimisation of the total variation plays a fundamental role: the total variation allows edges and discontinuities in the solution to be preserved. We are developing finite element methods for such optimisation problems and analysing their behaviour and properties. The implementation is based on DUNE::ACFem.

Researchers: M. Alkämper, S. Hilb, A. Langer

In several applications, for example in image processing, one solves a minimization problem of the type

min_u H(u) + αR(u),

where H represents a data fidelity term, which enforces consistency between the recovered and the measured data, R is an appropriate regularization term, which prevents over-fitting, and α>0 is a regularization parameter weighting the importance of the two terms. The solution of this problem clearly depends on the choice of this parameter. In image reconstruction in particular, a large α not only removes noise but also eliminates details, leading to an over-smoothed reconstruction. On the other hand, a small α yields a solution that fits the given data closely but retains noise in homogeneous regions. Hence a good reconstruction is obtained by choosing α as a compromise between these two effects. A scalar regularization parameter, however, might not be the best choice for every application. Images, for example, usually contain large homogeneous regions as well as parts with many details. This suggests that α should be small in parts with fine features, in order to preserve the details, and large in homogeneous parts, in order to remove noise effectively. With such a spatially varying weight we expect better reconstructions than with a globally constant parameter.
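The role of α can be seen in a minimal 1-D sketch (ours, not the group's implementation): with an L2 fidelity H(u) = ½‖u−f‖² and a smoothed total variation as R, plain gradient descent already shows that the reconstruction trades data fit against oscillation; a pixelwise α_i could be substituted into the same loop to weight this trade-off locally.

```python
def denoise_tv(f, alpha=0.5, eps=1e-2, steps=300, tau=0.05):
    """Gradient descent on the smoothed ROF-type energy
    J(u) = 0.5*||u-f||^2 + alpha * sum_i sqrt((u[i+1]-u[i])^2 + eps)."""
    u = list(f)
    n = len(u)
    for _ in range(steps):
        grad = [u[i] - f[i] for i in range(n)]       # fidelity part
        for i in range(n - 1):                       # smoothed TV part
            d = u[i + 1] - u[i]
            g = alpha * d / (d * d + eps) ** 0.5
            grad[i] -= g
            grad[i + 1] += g
        u = [u[i] - tau * grad[i] for i in range(n)]
    return u

def tv(u):
    """Discrete total variation of a 1-D signal."""
    return sum(abs(u[i + 1] - u[i]) for i in range(len(u) - 1))

noisy = [0.0, 1.0, 0.1, 0.9, 0.0, 1.1, 0.2, 1.0]
smooth = denoise_tv(noisy, alpha=0.5)
print(tv(smooth), "<", tv(noisy))   # the total variation is reduced
```

A larger α drives the result toward a piecewise-constant signal; a smaller one keeps it close to the noisy data.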

Researchers: A. Langer

Usually images are corrupted by different types of noise, such as Gaussian noise, Poisson noise, and impulse noise. This contamination typically happens during image acquisition, which describes the process of capturing an image by a camera and converting it into a measurable entity, and during image transmission. For the application of simultaneously removing Gaussian and impulse noise we proposed minimizing a functional consisting of a combined L1 and L2 data fidelity term and a total variation term. This new model, called the L1-L2-TV model, has noticeable advantages over popular models.
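A discrete 1-D evaluation of such a combined energy can be sketched as follows (an illustrative sketch; the weight names alpha1, alpha2, lam are our own, not from the model's publication):

```python
def l1_l2_tv_energy(u, f, alpha1=1.0, alpha2=1.0, lam=0.5):
    """Evaluate J(u) = alpha1*||u-f||_1 + (alpha2/2)*||u-f||_2^2 + lam*TV(u)
    for 1-D signals u (candidate) and f (data)."""
    l1 = sum(abs(ui - fi) for ui, fi in zip(u, f))           # robust to impulse noise
    l2 = 0.5 * sum((ui - fi) ** 2 for ui, fi in zip(u, f))   # suited to Gaussian noise
    tv = sum(abs(u[i + 1] - u[i]) for i in range(len(u) - 1))
    return alpha1 * l1 + alpha2 * l2 + lam * tv

f = [0.0, 0.0, 5.0, 0.0]   # one impulse-corrupted sample
u = [0.0, 0.0, 0.0, 0.0]   # candidate reconstruction
print(l1_l2_tv_energy(u, f, alpha1=1.0, alpha2=0.1, lam=0.5))  # -> 6.25
```

The L1 term grows only linearly in the outlier, so a reconstruction that ignores the impulse is penalized far less than under a pure L2 fidelity.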

Researchers: A. Langer

These methods are relevant and important when one has to solve extremely large problems, such as 4D imaging (spatial plus temporal dimensions) from functional magnetic resonance in nuclear medical imaging, astronomical imaging, or global terrestrial seismic tomography. There exist subspace correction and domain decomposition methods which converge to the solution of PDEs with smooth, strictly convex energies, while this is not true in the case of nonsmooth and nonseparable energies. However, we successfully introduced domain decomposition methods for total variation minimization and achieved the first proof of convergence for such methods for the nonsmooth and nonseparable total variation. This clarifies the possibility of using parallel computation for solving problems where a total variation constraint is involved.
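The basic idea of such a subspace-correction iteration can be conveyed by a toy sketch (ours, with a smoothed TV energy so that plain gradient steps apply; this is not the group's algorithm): the unknowns are split into two blocks, and the energy is decreased by steps restricted to one block at a time, so the subproblems could be distributed across subdomains.

```python
def energy(u, f, alpha=0.5, eps=1e-2):
    """Smoothed ROF-type energy on a 1-D signal."""
    fid = 0.5 * sum((ui - fi) ** 2 for ui, fi in zip(u, f))
    tv = sum(((u[i + 1] - u[i]) ** 2 + eps) ** 0.5 for i in range(len(u) - 1))
    return fid + alpha * tv

def block_step(u, f, block, alpha=0.5, eps=1e-2, tau=0.05):
    """One gradient step restricted to the coordinates in `block`."""
    n = len(u)
    grad = [u[i] - f[i] for i in range(n)]
    for i in range(n - 1):
        d = u[i + 1] - u[i]
        g = alpha * d / (d * d + eps) ** 0.5
        grad[i] -= g
        grad[i + 1] += g
    return [u[i] - tau * grad[i] if i in block else u[i] for i in range(n)]

f = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
u = list(f)
left, right = set(range(3)), set(range(3, 6))   # two subdomains
e0 = energy(u, f)
for _ in range(50):                             # alternate between the subdomains
    u = block_step(u, f, left)
    u = block_step(u, f, right)
print(energy(u, f) < e0)                        # the alternating iteration decreases the energy
```

The mathematical difficulty addressed in the project is precisely that for the genuine (nonsmooth, nonseparable) total variation such blockwise progress does not automatically add up to convergence of the whole iteration.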

Researchers: S.Hilb, A. Langer

This project is partially funded by the MWK through the RISC-project "Automatische Erkennung von bewegten Objekten in hochauflösenden Bildsequenzen mittels neuer Gebietszerlegungsverfahren".


Prof. Dr. Kunibert Siebert

Head of Group


Brit Steiner

Secretary's Office