The call for PhD applications takes place each spring.

Subject: Using artificial intelligence to detect and characterise large samples of dwarf galaxies from the Euclid and LSST missions

Deadline for application: April 26, 2026

Supervision

Pierre-Alain Duc (OBAS, Strasbourg)

Laboratory and team

OBAS, Strasbourg - Team "GALHECOS"

Subject description

Dwarf galaxies are by far the most numerous galaxies in the Universe, but probably also the most difficult to find and characterise. Indeed, their small size and low surface brightness make them particularly difficult to detect outside our Local Group. However, two instruments are about to overcome these difficulties. The Euclid space mission is already a game changer. Thanks to its large field of view and excellent image quality, it can clearly identify large populations of dwarf galaxies, as well as their nuclei and globular cluster populations. At the same time, the ground-based LSST experiment with the Rubin telescope will collect complementary data, in particular detailed colour information. The combination of LSST and Euclid photometry will, for example, help to rule out background galaxies as dwarf candidates and make it easier to identify the population of globular clusters (GCs).

The PhD student will have privileged access to both data sets at a critical time, thanks to the PhD supervisor's tickets. The long-term aim of the thesis, which will be carried out in close collaboration with the international Euclid and LSST consortia, will be to:

- Identify the population of dwarf galaxies in a specific region of the sky, around the Fornax supercluster. This region has been chosen because its density minimises the risk of contamination by background objects and maximises the chances of finding globular-cluster-rich objects, for which numerous follow-up studies can be carried out.

- Study the spatial distribution of dwarfs relative to the most massive galaxies, and its implications for nucleus and globular cluster content. Due to a lack of statistics, the causes of the large variations in the GC content of the dwarf population are still largely unknown.

- Determine the most relevant parameters to fully characterise the dwarf populations (e.g. effective radius, surface brightness, colour, dark matter content), providing insightful criteria for their identification in the remaining Euclid and LSST surveys. 

One of the main challenges of this work is the sheer volume of data now available. The analysis of this data set requires the development of dedicated artificial intelligence tools. They are needed to:

- detect and segment the low-surface-brightness (LSB) candidates in the images;

- validate their status as dwarf galaxies, excluding background objects, with the caveat that their distances are not known;

- determine their properties automatically.

Two approaches have been explored so far:

- Using pre-trained foundation models (e.g. Galaxy Zoo) on image cutouts (when catalogues of sources are available), followed by visual validation with customised visualisation tools.

- Using dedicated neural networks and ad hoc filters (e.g. Gabor filters) to directly detect/segment the candidates, taking into account foreground or background contaminants (e.g. Galactic cirrus), without the need for pre-defined cutouts.

Both methods will be compared, and their effectiveness as a function of image depth and of the availability of multi-spectral information will be analysed.
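As a minimal illustration of the second approach, the sketch below applies a small bank of hand-built Gabor filters to a synthetic image containing a faint extended blob. All image parameters, filter frequencies and the thresholding rule are illustrative choices for this sketch, not the project's actual detection pipeline.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(frequency, theta, sigma=8.0, size=31):
    """Real part of a 2D Gabor filter: a plane wave modulated by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * frequency * xr)

# Synthetic "survey" image: a faint, extended Gaussian blob (a stand-in
# for a low-surface-brightness dwarf) on top of pixel noise.
rng = np.random.default_rng(0)
npix = 128
yy, xx = np.mgrid[0:npix, 0:npix]
image = 0.5 * np.exp(-((xx - 64)**2 + (yy - 64)**2) / (2.0 * 12.0**2))
image += rng.normal(0.0, 0.1, image.shape)

# Maximum response over a few orientations: extended low-frequency
# structure stands out against the pixel-scale noise.
responses = [fftconvolve(image, gabor_kernel(0.05, t), mode="same")
             for t in np.linspace(0.0, np.pi, 4, endpoint=False)]
response = np.max(responses, axis=0)

# Crude segmentation by thresholding the response map.
mask = response > response.mean() + 2.0 * response.std()
print("candidate pixels:", int(mask.sum()))
```

In a real pipeline the filter bank, threshold and post-processing would of course be learned or tuned on survey data rather than fixed by hand.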

Related mathematical skills

Basic knowledge of AI / deep learning methods is needed. Some interest in the analysis of astronomical data and, more generally, in astrophysics would be an asset.

 

Subject: Supermassive Black Holes & Galaxy Formation

Deadline for application: April 26, 2026

Supervision

Christian Boily (OBAS, Strasbourg)

Laboratory and team

OBAS, Strasbourg - Team "GALHECOS"

Subject description

The strong gravity of a massive black hole disturbs stars up to the point of break-up. The project follows a theoretical approach to the integration of stellar orbits in general relativity, applied to a realistic population of stars in the core of a galaxy. A key objective is to map these high-energy events to high accuracy, with a view to applying machine learning methods to probe real galaxies as observed by e.g. the James Webb Space Telescope (JWST). To reach that goal, this project aims to quantify the growth rate of black holes in order to explain their very existence in the early Universe (< 1 Gyr), a key challenge of modern astrophysics.
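As a hedged sketch of the kind of computation involved, the snippet below integrates the standard Schwarzschild orbit equation in Binet form, d^2u/dphi^2 + u = M/h^2 + 3 M u^2 with u = 1/r (geometrized units G = c = 1), using SciPy's implicit high-order Radau method, a common choice for stiff systems. The mass and angular momentum values are purely illustrative, not tied to any specific galaxy model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Schwarzschild geodesic in Binet form (G = c = 1):
#   d^2 u / d phi^2 + u = M / h^2 + 3 M u^2,   u = 1 / r
# The 3*M*u^2 term is the general-relativistic correction responsible
# for perihelion precession.
M = 1.0          # black-hole mass (illustrative)
h = 4.0          # specific angular momentum (illustrative)
u0 = M / h**2    # Newtonian circular-orbit value of u

def rhs(phi, y):
    u, du = y
    return [du, M / h**2 + 3.0 * M * u**2 - u]

# Slightly eccentric orbit, followed over ten revolutions with the
# implicit high-order Radau IIA method and tight tolerances.
sol = solve_ivp(rhs, (0.0, 20.0 * np.pi), [1.2 * u0, 0.0],
                method="Radau", rtol=1e-10, atol=1e-12, dense_output=True)

print("integration success:", sol.success)
print("final u:", sol.y[0, -1])
```

The orbit oscillates around the relativistic equilibrium value of u, slightly above the Newtonian one; tracking many such orbits to high accuracy near disruption is where stiff, high-order solvers become essential.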

Related mathematical skills

The project would benefit from expertise in high-order solution techniques for stiff differential equations. Experience with machine learning techniques to map mock images to real data would also be a valued asset.

Subject: Hybrid numerical–learning methods for cosmological radiative transfer and reionization

Deadline for application: April 26, 2026

Supervision

Emmanuel Franck (IRMA, Strasbourg)

Laboratory and team

IRMA, Strasbourg - Team "MOCO"

Subject description

Radiative transfer is a key ingredient in models describing the formation of large-scale structures and the epoch of reionization, during which the first astrophysical sources progressively ionized the intergalactic medium. Accurate modeling of this phenomenon requires solving a kinetic equation posed in a high-dimensional space, depending on position, propagation direction, and time. In current astrophysical codes, this complexity is usually reduced by using moment models, in particular the M1 closure, which provide a good compromise between computational cost and accuracy in isotropic regimes involving isolated sources. However, recent work has highlighted the fundamental limitations of these approaches whenever the radiation field exhibits complex angular structures, such as in the presence of shadows cast by dense structures or when ionization fronts from nearby sources meet each other or interact with a photon background. These situations are common in reionization simulations and require angular descriptions richer than those accessible through classical closures. In parallel, several exploratory studies have shown that scientific learning approaches, and in particular physics-informed neural networks (PINNs), are capable of efficiently representing solutions of high-dimensional partial differential equations while capturing fine-scale structures that are difficult to approximate with traditional methods. However, a lack of guarantees and error control limits the applicability of these approaches.

We therefore propose to consider hybrid approaches combining numerical discretizations and neural models, in order to overcome the current limitations of moment models while maintaining a computational cost compatible with large-scale simulations. The objective of this thesis is to develop a numerical and algorithmic framework for cosmological radiative transfer based on such hybridization. A first step will consist in designing a semi-Lagrangian scheme for transport in phase space using a hybrid representation. The spatio-angular domain will be decomposed into macro-cells in which the solution is approximated by small neural networks. The macro-cells will be coupled using a discontinuous Galerkin approach. This strategy should preserve the locality, conservation, and parallelization properties of DG methods while benefiting from the ability of neural networks to represent complex functions with a reduced number of degrees of freedom. In a second step, the thesis will introduce a micro–macro formulation of radiative transfer. The idea will be to describe the global dynamics using a robust macroscopic model of M1 type, responsible for capturing near-equilibrium regimes, while deviations from this equilibrium, which carry fine angular information, will be represented by a DG–PINN approximation. Such a decomposition should make it possible to concentrate the most expensive computations only in regions where they are needed. Finally, a central perspective of the project will be to dynamically learn equilibrium distributions, which amounts to learning models that generalize M1, becoming increasingly rich in order to progressively reduce the microscopic part of the solution. These M1-type models, which rely on only a reduced number of angular moments, should further reduce computational cost and will be introduced as new closures for the community.
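For reference, the M1 closure mentioned above fixes the Eddington factor as a function of the reduced flux f = |F|/(cE) via Levermore's formula, chi(f) = (3 + 4 f^2) / (5 + 2 sqrt(4 - 3 f^2)); a minimal implementation:

```python
import numpy as np

def eddington_factor_m1(f):
    """Levermore (1984) M1 closure: Eddington factor chi as a function of
    the reduced flux f = |F| / (c E), with f in [0, 1].
    chi interpolates between the diffusion limit (chi = 1/3 at f = 0)
    and the free-streaming limit (chi = 1 at f = 1)."""
    f = np.asarray(f, dtype=float)
    return (3.0 + 4.0 * f**2) / (5.0 + 2.0 * np.sqrt(4.0 - 3.0 * f**2))

print(eddington_factor_m1(0.0))  # 1/3, isotropic radiation
print(eddington_factor_m1(1.0))  # 1, free streaming
```

The closures envisioned in the thesis would generalize this scalar relation, learning richer equilibrium distributions from a reduced number of angular moments.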

The thesis lies at the interface between scientific computing, numerical astrophysics, and machine learning for partial differential equations. It will combine numerical analysis, scheme design, scientific learning, and high-performance implementation, with the final goal of integrating these new methods into representative configurations of reionization simulations and, in particular, inverse problems. 

References:

C. D. Levermore, Relating Eddington factors to flux limiters, Journal of Quantitative Spectroscopy and Radiative Transfer (1984). 
J. Rosdahl, J. Blaizot, D. Aubert, T. Stranex, R. Teyssier, RAMSES-RT: radiation hydrodynamics in the RAMSES code, MNRAS (2013).
M. Palanque, P. Ocvirk, E. Franck, P. Gerhard, D. Aubert, O. Marchalt, Higher order methods for Radiative Transfer in Astrophysical simulations: Pn vs M1, arXiv preprint 2025.
M. Raissi, P. Perdikaris, G. E. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, JCP 2019.

Related mathematical skills

Numerical analysis
Machine learning
Python

Subject: Stochastic Analysis of Blockchain Systems

Deadline for application: April 17, 2026

Supervision

Denis Villemonais (IRMA, Strasbourg)

Laboratory and team

IRMA, Strasbourg - Team "PROBA"

Subject description

A blockchain is a distributed ledger maintained by a peer-to-peer network, where the nodes apply a consensus algorithm to agree on the recording of new data. Three properties define these systems: efficiency (transaction throughput), decentralization (distribution of control across nodes), and security (resistance to adversarial attacks). The original consensus mechanism, Proof of Work (PoW), relies on computational competition among nodes to solve cryptographic puzzles—a process that incurs substantial energy costs. This PhD project focuses on Proof of Stake (PoS), the consensus protocol underpinning Ethereum and other next-generation blockchains. Unlike PoW, PoS selects validators probabilistically, with selection probabilities scaled by their staked cryptocurrency holdings. The chosen validator proposes the next block and earns a reward in return. While this design might intuitively favor wealthier participants, empirical and theoretical analyses reveal a counterintuitive property: the long-term stake distribution converges to a stable equilibrium on average—a phenomenon mathematically captured by Pólya urn models, a class of reinforced stochastic processes. Modern variants of Proof of Stake (such as Algorand or Ouroboros) lack a unified mathematical description. The objective is to develop a general framework to model these algorithms, drawing on the theory of reinforced stochastic processes and multi-color (or even infinitely many-color) Pólya urns to represent the large number of participants. 

The goal is to study their limiting behaviors and fluctuations to identify conditions that ensure fair decentralization. The work will also extend current theoretical results to better account for the heterogeneous and dynamic structure of blockchains (temporal variations in activity or transactional preferences). The second research axis adopts a queueing-theoretic framework to quantify blockchain transactional efficiency. Here, pending transactions are modeled as customers in a G/G/1 queue, where blocks act as batch-service events. We will analyze the arrival-process dynamics (e.g., transaction submission rates) and service discipline (block propagation and validation delays) to derive key performance metrics, including the mean confirmation time (time-to-inclusion in a block) and system congestion (mempool size evolution). Our approach begins with tractable M/M/1 and M/D/1 models (Poisson arrivals with exponential or deterministic service times) to establish baseline results, then extends to non-Markovian settings (e.g., heavy-tailed distributions for bursty traffic). This progression will yield closed-form approximations for waiting-time distributions and throughputs. To bridge the gap between theory and practice, the model will integrate two features: fee-based prioritization, as transactions compete for block inclusion via dynamic gas markets (Ethereum) or fee-per-byte auctions (Bitcoin); and time-sensitive abandonment, as nodes may discard transactions with suboptimal fees after exceeding empirically observed patience thresholds (e.g., 95th-percentile waiting times).
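As a baseline illustration (with illustrative rates, not calibrated to any blockchain), the sketch below simulates an M/M/1 queue via the Lindley recursion and compares the empirical mean sojourn time with the textbook value 1/(mu - lam):

```python
import numpy as np

# M/M/1 baseline: Poisson arrivals at rate lam, exponential service at
# rate mu. Theory: mean sojourn (waiting + service) time = 1 / (mu - lam).
rng = np.random.default_rng(1)
lam, mu, n = 0.6, 1.0, 200000

interarrivals = rng.exponential(1.0 / lam, n)
services = rng.exponential(1.0 / mu, n)

# Lindley recursion for waiting times:
#   W_{k} = max(0, W_{k-1} + S_{k-1} - A_k)
wait = np.zeros(n)
for k in range(1, n):
    wait[k] = max(0.0, wait[k - 1] + services[k - 1] - interarrivals[k])

sojourn = wait + services
print("simulated mean sojourn:", sojourn.mean())
print("theoretical 1/(mu - lam):", 1.0 / (mu - lam))
```

Batch service (blocks), fee-based priorities and abandonment would each modify this recursion; the thesis aims at closed-form or approximate results for those richer models.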

This project has two main objectives: 
- A rigorous mathematical characterization of Proof-of-Stake mechanisms, focusing on decentralization properties through stochastic reinforcement models. 
- A quantitative framework for blockchain efficiency, grounded in queueing theory to analyze transactional dynamics. 
By combining probabilistic modeling with empirical validation, the work will deliver both theoretical insights and practical metrics—essential for designing next-generation blockchains that balance fairness, scalability, and sustainability.

Related mathematical skills

Probability and stochastic processes: martingales, Markov processes, convergence in distribution, limit theorems, and possibly elements of branching processes
Queueing theory (M/M/1 models, Little’s law, queues with abandonment/priority)
Scientific programming language (Python, R, Julia, Rust, C or C++)
Statistical methods for stochastic processes (estimation, data-driven calibration, distribution fitting) 

Subject: Bohr-Sommerfeld conditions in the multiple focus-focus case

Deadline for application: April 17, 2026

Supervision

Yohann Le Floch (IRMA, Strasbourg)

Laboratory and team

IRMA, Strasbourg - Team ANA

Subject description

The proposed thesis lies at the interface between the theory of Liouville integrable systems and the semiclassical analysis of geometric quantization. The goal is to describe the joint spectrum of certain pairs of operators acting on spaces of holomorphic sections of large tensor powers of some complex line bundle and quantizing an integrable system with two degrees of freedom whose momentum map possesses a singular value of focus-focus type with multiple singular points on the corresponding level. This description would help in studying inverse questions for the spectral theory of such operators. More precisely, on a four-dimensional quantizable compact Kähler manifold, we consider two commuting Berezin-Toeplitz operators whose joint principal symbol is the momentum map for an integrable system, and assume that there exists a singular value of this momentum map of focus-focus type for which the corresponding level is connected and contains several singular points. The goal is to describe the joint spectrum of the operators in the semiclassical limit, locally near this singular value. The case of semiclassical pseudodifferential operators, with a single singular point on the critical level, was investigated in the early 2000s; the case of several singular points has never been explored, and neither has the single critical point case in the setting of Berezin-Toeplitz operators, which would constitute an interesting first step. The multiple singular points case should involve the semi-local symplectic invariants recently obtained by Pelayo and Tang. To illustrate the results, it will be possible and interesting to rely on the numerous examples investigated in the last few years, for some of which the aforementioned invariants have been computed.
As a natural follow-up, the results could be applied to the inverse spectral problem for Berezin-Toeplitz operators; a natural question is to understand whether the knowledge of the semiclassical joint spectrum of two such operators (which commute), whose joint principal symbol is the momentum map for a semitoric integrable system, determines this integrable system up to isomorphism. A positive answer has been given recently by Le Floch and Vũ Ngọc in the case where every focus-focus singular level has a single singular point, but the lack of description of the joint spectrum near focus-focus values led to working with a double limit (first on the semiclassical parameter, then when a regular value goes to a given focus-focus value). A first application for the results of the thesis would be to obtain a more straightforward proof, without using this double limit. A second application would be to determine if a similar inverse result could be obtained in the case where the focus-focus singular levels may contain several singular points.

Related mathematical skills

Advanced differential geometry (fiber bundles, symplectic and Kähler geometry, etc.)

Foundations of semiclassical analysis

Subject: Hexahedral meshes based on medial axis

Deadline for application: April 17, 2026

Supervision

Dominique Bechmann (ICube, Strasbourg)

Laboratory and team

ICube, Strasbourg - Team IGG

Subject description

The construction of a volumetric mesh for a given geometric domain is a complex problem that has been addressed for many years. The generation of purely hexahedral meshes for domains of any shape is still an open problem. Such meshes would be very useful in numerical simulations such as fluid dynamics. As part of the work proposed in this thesis, we aim to develop an efficient and automatic algorithm that, starting from a domain defined by a surface mesh or a point cloud, uses the variational approach [4-HKTB24] to obtain a skeleton, which is then used as a scaffold [2-VKB23] to construct a hexahedral volume mesh.

Numerous problems must be solved in order to obtain a complete and integrated solution.

I. A rigorous mathematical demonstration of the robustness of the algorithm could prove useful in ensuring the long-term viability of our method.

II. The remeshing of the internal topology of the skeleton composed of segments (1D) and triangles (2D), obtained by the variational method, will need to be implemented for its coupling with mesh generation. In addition, the management of special cases that we have identified in order to maintain compatibility with our scaffolding structure needs to be studied rigorously.

III. Particular attention must be paid to preserving the topological properties of meshes, which is necessary if we wish to retain specialised optimisations for simulation. In this context, methods for subdividing and adapting mesh sampling will need to be explored.

IV. Characterisation of the geometric domains that can be represented by skeletons (1D-2D) and then meshed by our algorithm is also required in order to control the domain of validity of the methodology. 

V. Finally, validating the results by applying simulation codes to the meshes produced by experts would enable practical validation of the work and might lead to the discovery of new problems to be solved. 


[4-HKTB24] Q. Huang, P. Kraemer, S. Thery, D. Bechmann, Dynamic Skeletonization via Variational Medial Axis Sampling, Full paper at ACM SIGGRAPH ASIA 2024, Tokyo, Japan, December 2024.
[2-VKB23] P. Viville, P. Kraemer, D. Bechmann, Meso-Skeleton Guided Hexahedral Mesh Design, Full paper at Pacific Graphics 2023, Computer Graphics Forum, Volume 42, Number 7.

Related mathematical skills

The candidate should hold a master's degree in computer science with expertise in computer graphics, specifically geometric modeling.

He or she should possess the skills necessary to address scientific problems and develop 3D applications (C++ and graphics programming).

Mathematical skills in geometry would also be a major asset for this position.

 

Subject: Active fluid models for cell dynamics: modeling and analysis

Deadline for application: April 17, 2026

Supervision

Laurent Navoret and Benjamin Melinand (IRMA, Strasbourg)

Laboratory and team

IRMA, Strasbourg - Teams MOCO and Analysis

Subject description

During embryogenesis and healing processes, cellular tissues are the site of large-scale cellular movements. Identifying the key biological or physical principles underlying these movements is the subject of much current fundamental research. The PhD thesis will particularly focus on the role of boundaries: for instance, surrounding active actin cables have been shown to play a key role in the development of collective motions. As the number of cells involved can be of the order of hundreds or thousands, fluid-like models have been considered. These describe the time evolution of the macroscopic density and mean velocity, and their analysis can provide concrete criteria for the emergence of collective motion. The main steps of the PhD will be: modeling (appropriate boundary conditions), analysis of the model (stability and global existence properties around stationary solutions), and numerical simulations.

Related mathematical skills

The PhD student must have advanced knowledge in functional analysis and partial differential equations. He or she must also have an appetite for modeling and numerical simulations.

Subject: Methods for probing dark matter halos through cosmic time in the perspective of future great observatories

Deadline for application: April 17, 2026

Supervision

Jonathan Freundlich (ObAS, Strasbourg)

Laboratory and team

ObAS, Strasbourg - Team “GALHECOS”

Subject description

The cosmological model based on the existence of cold dark matter describes the large-scale structure of the Universe with great success, but it faces several challenges at the galactic scale. In particular, dark-matter-only simulations predict density profiles that are particularly steep at the center of dark matter halos — known as "cusps" — while some observations favor "cores" of constant density at the center. Introducing processes such as star formation and feedback phenomena resulting from stellar evolution and active galactic nuclei (which include stellar winds, radiation effects, supernova explosions, and jets) into simulations can alleviate this tension by reproducing cores. However, simulations agree neither on the intensity of these processes nor on their effect on the dark matter distribution. Furthermore, observations indicate a diversity of dark matter density profiles for a given total mass, contrary to expectations if these processes were the same from one galaxy to another. Finally, some observations seem to indicate the presence of dark matter cores early in the history of the Universe, which requires halo transformation mechanisms that are sufficiently rapid. These challenges raise fundamental questions about the formation mechanisms of dark matter cores, feedback phenomena, and more generally, the very nature of dark matter. The methods used to infer dark matter density profiles from galaxy kinematics rely on various physical assumptions, particularly regarding the equilibrium of galaxies, their axisymmetry, and the ratio between circular motion and velocity dispersion. They also rely on Markov Chain Monte Carlo (MCMC) methods, which prove to be too costly in terms of time and computing capacity when studying samples of more than a dozen galaxies.
Indeed, the data used are three-dimensional multispectral cubes (two spatial dimensions, one velocity dimension; the pixels are spectra whose components are Doppler-shifted due to gas movements) which are particularly voluminous, and the models used require dozens of parameters per galaxy. Yet, future great observatories such as the Square Kilometre Array (SKA), the Extremely Large Telescope (ELT), as well as new instrumentation for the Very Large Telescope (VLT), will soon produce unprecedented amounts of data from which it will, in principle, be possible to deduce dark matter density profiles and their evolution over a large part of the history of the Universe. 

The aim of this thesis is, on the one hand, to test the physical assumptions of the methods used to deduce dark matter density profiles using simulations of isolated galaxies (produced as part of the thesis) and existing cosmological simulations; and on the other hand, to optimize these methods so they can be applied to future observational surveys. Several directions are envisioned in this regard: porting existing methods to GPUs, accelerating Monte Carlo methods by optimally sampling the probability distribution, and exploring new methods such as Bayesian neural networks.
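To make the cost of such methods concrete, here is a deliberately minimal Metropolis-Hastings sampler for a single parameter with a Gaussian likelihood. Real rotation-curve fits involve dozens of parameters per galaxy and full 3D spectral cubes; every name and value below is illustrative, and the loop shown is precisely the part that GPU porting or smarter sampling aims to accelerate.

```python
import numpy as np

rng = np.random.default_rng(7)

def log_posterior(theta, data, sigma=1.0):
    # Flat prior; Gaussian likelihood around the model prediction theta.
    return -0.5 * np.sum((data - theta) ** 2) / sigma**2

data = rng.normal(3.0, 1.0, 50)   # synthetic "observations", true value 3.0
theta, samples = 0.0, []
for _ in range(20000):
    # Random-walk proposal, accepted with the Metropolis ratio.
    proposal = theta + rng.normal(0.0, 0.5)
    log_alpha = log_posterior(proposal, data) - log_posterior(theta, data)
    if np.log(rng.uniform()) < log_alpha:
        theta = proposal
    samples.append(theta)

burned = np.array(samples[5000:])  # discard burn-in
print("posterior mean:", burned.mean())
```

With a flat prior the posterior mean coincides with the sample mean of the data; the point of the sketch is that each step requires a full likelihood evaluation, which is what becomes prohibitive for cube-sized data and many-parameter models.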

This thesis will be carried out within the framework of the IRMIA++ Interdisciplinary Thematic Institute, which brings together mathematicians, computer scientists, and astrophysicists. It will notably benefit from a collaboration within the IRMA statistics team with Augustin Chevallier.

Related mathematical skills

Dynamic modeling of galaxies 
Hydrodynamic simulations
Bayesian statistics
MCMC methods

 

Subject: Computing in Homotopy Type Theory

Deadline for application: April 17, 2026

Supervision

Nicolas Magaud (ICube, Strasbourg) and Viktoria Heu (IRMA, Strasbourg)

Laboratory and team

ICube, Strasbourg - Team “IGG” and IRMA, Strasbourg - Team “AGA”

Subject description

Homotopy Type Theory (HoTT) is a contemporary approach to the foundations of mathematics that draws on ideas from logic, theoretical computer science, and homotopy theory. HoTT is built on type theory, which already serves as the foundation for proof assistants such as Lean and Rocq, and introduces a radical change: instead of considering the fundamental objects of the theory to be sets, HoTT uses spaces (infinity-groupoids). The entire language thus takes on a topological character: proofs of equalities become paths, statements are automatically homotopy-invariant, and so on. This shift in perspective is doubly useful: 
- For the computer scientist, it fills some gaps in Martin-Löf type theory by introducing quotient types, function extensionality, and the ability to do representation-independent reasoning. 
- For the mathematician, it provides a framework for developing homotopy theory in a synthetic and constructive manner, without having to rely on combinatorial models such as simplicial sets. 
In technical terms, HoTT adds two axioms to the language of type theory: the univalence axiom, which identifies equality between types with equivalences, and higher inductive types (HITs), which allow for the definition of higher-dimensional types [5]. However, HoTT leaves out an important aspect of the theory: its effectiveness. Indeed, the language of type theory has a well-defined computational behavior, which lets us evaluate any proof of a concrete statement and obtain an explicit value. But the axioms added by HoTT have no clear computational content. Today, the only way to evaluate a proof written in HoTT is through cubical type theory (CTT) [3], which is a different extension of type theory that implements the HoTT axioms while retaining computational content. However, in practice, it is not always easy to compute with CTT!

The goal of this thesis is to study how to compute efficiently with HoTT and CTT. We propose three approaches: 

I. Computing topological invariants with CW complexes. Thanks to its effectiveness, it is theoretically possible to use CTT to compute topological invariants such as homotopy groups and (co)homology groups based on their mere definitions (provided they are constructive). However, a naive definition will not yield an efficient computational algorithm: a well-known example is the Brunerie number [1], an integer defined in CTT via homotopic constructions which we know mathematically should evaluate to -2, but whose evaluation fills up the computer's memory. We propose to investigate definitions more suitable for computation, such as the definition of homology groups of CW complexes [4] in terms of cellular homology and matrix operations.
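As a toy version of the cellular approach in item I (in Python rather than in a proof assistant, and over the rationals, so torsion is ignored), Betti numbers can be read off the integer boundary matrices of a CW complex by a simple rank computation:

```python
import numpy as np

def betti_numbers(dims, boundaries):
    """Rational Betti numbers of a cellular chain complex.
    dims[k] = number of k-cells; boundaries[k-1] = matrix of d_k : C_k -> C_{k-1}.
    Over Q, b_k = dim C_k - rank d_k - rank d_{k+1}.
    (Torsion would require Smith normal form; ranks suffice over Q.)"""
    ranks = [0] + [np.linalg.matrix_rank(d) if d.size else 0 for d in boundaries]
    ranks.append(0)  # d_{top+1} = 0
    return [dims[k] - ranks[k] - ranks[k + 1] for k in range(len(dims))]

# CW structure on the torus: 1 vertex, 2 edges (loops a, b), 1 face
# attached along the commutator a b a^-1 b^-1 -> both boundary maps vanish.
d1 = np.zeros((1, 2))  # edges are loops: zero boundary
d2 = np.zeros((2, 1))  # commutator attaching map: zero cellular boundary
print(betti_numbers([1, 2, 1], [d1, d2]))  # [1, 2, 1]
```

The thesis would carry out this kind of computation formally inside CTT, where the chain complexes themselves arise from higher inductive types and every step is certified.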

II. Evaluating Cubical Type Theory using Rocq. In addition to the use of overly naive definitions, one reason for the inefficiency of CTT is that the only available implementation, Cubical Agda, is not optimized for computation. In contrast, Rocq and Lean offer powerful tools for evaluating definitions by compiling them down to native code. We propose to define a syntactic translation from CTT to Rocq's type theory, based on the method outlined in [2]. This will, on the one hand, provide the first fully formal framework for studying the semantics of CTT, and on the other hand, allow us to leverage Rocq's capabilities to evaluate definitions in CTT.

III. Conservativity of Cubical Type Theory. CTT is currently the only available tool for evaluating proofs written in HoTT. However, since CTT's language is richer than that of HoTT, there is a priori no guarantee that a result obtained by computation in CTT corresponds to an equality which is provable in HoTT. Conservativity results exist for integers, but the problem remains wide open for more complex types. We propose the use of syntactic methods (as in Part II) to address this problem. In the longer run, we could envision an automated tool that takes a proof in CTT and reconstructs a proof of the same statement in HoTT.

[1] Guillaume Brunerie. Sur les groupes d'homotopie des sphères en théorie des types homotopiques. PhD thesis, Université Nice Sophia Antipolis, 2016.
[2] Loïc Pujet. Computing with Extensionality Principles in Type Theory. PhD thesis, Université de Nantes, 2022. 
[3] Cyril Cohen, Thierry Coquand, Simon Huber, and Anders Mörtberg. Cubical type theory: A constructive interpretation of the univalence axiom. 2017.
[4] Axel Ljungström and Loïc Pujet. Cellular methods in homotopy type theory, 2026. submitted. 
[5] The Univalent Foundations Program. Homotopy Type Theory: Univalent Foundations of Mathematics. 2013.

Related mathematical skills

Knowledge of type theory and fluency with proof assistants such as Rocq/Coq, Lean or Agda are required for successful completion of the PhD. Some basic knowledge of topology would be appreciated, but is not mandatory.

These subjects are proposed by ITI IRMIA++ members for PhD contracts starting in September/October 2026.

In order to apply, please click on the button below to access the application form and select the profile PhD Position Call for projects / Candidate:

Application form

You will need to provide the following information and documents:

  • the subject you are applying for (Please note: applications that do not specify the chosen subject will not be considered)
  • your resume
  • a cover letter
  • full transcript of Master's degree grades
  • recommendation letters from your references.

Referees may submit their letters of recommendation directly. To do so, please click on the button above to access the application form and select the profile PhD Position Call for projects / External support for a candidate.

Deadline for application: please refer to the date indicated for each subject.


If you are interested in a subject that is not in the list above, please contact the researchers and team you would like to work with directly.

 

If you start a PhD in an ITI IRMIA++ team, we can offer you financial assistance with your relocation!
More information on the dedicated page.