# The DU IRMIA++ students' journal

This page presents a collection of articles written by the DU IRMIA++ students and based on the interdisciplinary seminar.

## 02/12/2021 - Optimal Transport and astrophysics

This article was written by Claire SCHNOEBELEN and Nicolas STUTZ, based on the presentations given by Nicolas JUILLET and Bruno LEVY on December 2nd, 2021.

The theory of optimal transport began at the end of the 1770s. The mathematician Gaspard Monge formalized the problem of "cut and fill". Historically, one sought to move a pile of earth from an initial region of space (called excavation), possibly modifying its shape, to a final region (called embankment) in an optimal way. This is in fact a displacement problem that describes more general situations, where the mass to be transported can be considered as a gas, a liquid, a cloud of points, or even grey levels in the context of image processing.

The so-called matching problems are natural examples of transport problems, and we will describe one of them. Fix an integer n and consider n mines x_1, ..., x_n from which an ore is extracted, as well as n factories y_1, ..., y_n using this ore as raw material. We try to associate to each mine x_i a factory y_j so as to minimize the total transportation cost. To model the problem, let us denote by c(x_i, y_j) = c_ij the cost of moving the ore from a mine x_i to a factory y_j. Solving the problem then reduces to finding a permutation σ_opt that is a solution of the following minimization problem:

$$\min_{\sigma \in S_n} \; \sum_{i=1}^{n} c\big(x_i, y_{\sigma(i)}\big).$$
Note that the problem and its potential solutions depend on the chosen cost function. In our example, we could identify the mines x_i and the factories y_j with points of a Euclidean space and consider cost functions of "distance" type, i.e.

$$c(x_i, y_j) = \| x_i - y_j \|.$$
A remarkable property of discrete transport theory appears in this context as a consequence of the triangle inequality: the optimal transport paths do not intersect. Moreover, we can reformulate the above problem. Formally, we look for a map T_opt that is a solution of the minimization problem:

$$\min_{T \colon \{x_i\} \to \{y_j\} \text{ bijective}} \; \sum_{i=1}^{n} c\big(x_i, T(x_i)\big).$$
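The matching problem above can be made concrete with a brute-force sketch: the instance below (mine and factory positions, Euclidean cost) is an assumed toy example, and the exhaustive search over all n! permutations illustrates why this direct combinatorial approach only works for tiny n.

```python
import itertools, math

# Toy instance of the matching problem: n mines, n factories,
# with the Euclidean distance as transport cost (an assumed choice).
mines = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
factories = [(2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]

def total_cost(perm):
    """Cost of sending mine i to factory perm[i]."""
    return sum(math.dist(mines[i], factories[j]) for i, j in enumerate(perm))

# Exhaustive search over all n! permutations: only viable for tiny n,
# which is exactly the combinatorial bottleneck discussed later in the text.
best = min(itertools.permutations(range(len(factories))), key=total_cost)
print(best, round(total_cost(best), 3))
```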
We can generalize the previous problem as follows. We still consider n plants to be delivered but there are now m ≠ n mines producing the ore. With the previous modeling, if m < n, the optimal transport function will not be surjective and some plants will not be delivered.

Kantorovich’s formulation of the transportation problem takes this case into account: it allows the ore from each mine to be distributed over several plants. More precisely, we assign masses p_1, ..., p_m (resp. q_1, ..., q_n) to the mines x_1, ..., x_m (resp. to the plants y_1, ..., y_n), imposing that there is as much ore at the source as at the demand:

$$\sum_{i=1}^{m} p_i = \sum_{j=1}^{n} q_j.$$
The problem then becomes to assign masses to the m × n transport routes, i.e. to find a matrix, called a "transport plan",

$$M = (M_{ij})_{1 \le i \le m,\; 1 \le j \le n}$$

(where M_ij ≥ 0 represents the mass transported from x_i to y_j) that is a solution of the minimization problem:

$$\min_{M} \; \sum_{i=1}^{m} \sum_{j=1}^{n} M_{ij} \, c_{ij},$$

subject to the conservation of the outgoing and incoming masses:

$$\sum_{j=1}^{n} M_{ij} = p_i \quad (1 \le i \le m), \qquad \sum_{i=1}^{m} M_{ij} = q_j \quad (1 \le j \le n).$$
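This linear program can be sketched numerically. The snippet below is an assumed toy instance solved with Sinkhorn iterations, i.e. an entropy-regularized relaxation of the Kantorovich problem (the exact linear program could instead be handed to an LP solver such as `scipy.optimize.linprog`); the resulting plan satisfies the two marginal constraints and its cost is close to the unregularized optimum.

```python
import numpy as np

# Entropy-regularized Kantorovich problem solved by Sinkhorn iterations,
# a standard relaxation of the linear program above. The data are an
# assumed toy example.
p = np.array([0.5, 0.5])           # masses p_i at the mines
q = np.array([0.25, 0.25, 0.5])    # masses q_j at the plants, sum(q) == sum(p)
C = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0]])    # costs c_ij of the m x n routes

eps = 0.05                         # regularization strength
K = np.exp(-C / eps)
u, v = np.ones_like(p), np.ones_like(q)
for _ in range(2000):              # alternately enforce the two marginals
    u = p / (K @ v)
    v = q / (K.T @ u)
M = u[:, None] * K * v[None, :]    # transport plan, M_ij >= 0

print(M.round(3))
print("cost:", (C * M).sum())
```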
So far we have only dealt with discrete transport problems. There is a more general formalism in which the displaced mass is represented by probability measures, which can be discrete or continuous. The Kantorovich transport problem in its general formulation then takes the following form: given two probability measures

$$\mu \in \mathcal{P}(X), \qquad \nu \in \mathcal{P}(Y)$$

(thinking of the first as the excavation and the second as the embankment), we look for a measure π_opt (a transport plan) that is a solution of the minimization problem

$$\min_{\pi} \; \int_{X \times Y} c(x, y) \, d\pi(x, y),$$

with the following marginal conditions (mass conservation conditions):

$$\pi(A \times Y) = \mu(A), \qquad \pi(X \times B) = \nu(B),$$

for all measurable sets A ⊂ X and B ⊂ Y.
The algorithmic aspect of the optimal transport problem has been the source of numerous works since those of the mathematician Leonid Kantorovich in the middle of the 20th century. In astrophysics, the theory of optimal transport is notably used to reconstruct the great movements of matter in the early Universe. More precisely, the goal is to find the transport process which, starting from a uniform spatial distribution at the beginning of the Universe, has led to the present structures.

In a classical approach, the present matter clusters are represented by a set of points in space Y = (y_j)_{j∈J}, distributed according to the present observations. In the same way, the matter at the beginning of the Universe is represented by a set X = (x_i)_{i∈I} of points uniformly distributed in space. We then have to find the map T : X → Y which associates to each point of X the point of Y onto which the corresponding matter was sent over time.

To give a satisfactory account of the transport process, X would have to contain a large number of points. But finding T amounts to finding the corresponding map I → J and is therefore a matter of combinatorics. The algorithms exploiting a discrete-discrete modeling of this problem are of quadratic complexity, so it quickly becomes numerically very expensive to solve the problem with X of large size.

One way to avoid this problem is to consider the "limiting case" in which X contains all the points of space. The reformulated problem is one of semi-discrete transport, in which the initial distribution ρ(x) is a uniform density and the arrival distribution

$$\nu = \sum_{j \in J} q_j \, \delta_{y_j}$$

is discrete. In this new problem, the cost minimized by the transport map T is given by:

$$C(T) = \int_{V} \rho(x) \, \| x - T(x) \|^2 \, dx,$$

where x represents a mass element of the space V ⊂ R³ and ρ the initial density. It is then possible to show that in this case, portions of space, called Laguerre cells, are sent to the points y_j.

In order to determine the Laguerre cells associated with the optimal transport, the previous problem is reformulated in its dual form, as a maximization problem. This problem can then be solved efficiently with a Newton algorithm. Finally, the numerical reconstruction of the transport of matter in the Universe can be done in O(N log N) operations, where N denotes the cardinality of the set Y, which is considerable progress compared to combinatorial algorithms.
We see here that a change of point of view turns the initial discrete problem into an a priori more complex one, because it involves a continuous space, but in fact allows us to treat it numerically in a much more efficient way.
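The semi-discrete situation can be illustrated in one dimension, where everything is explicit. The sketch below assumes a uniform density on [0, 1] and a quadratic cost: the optimal map is then the monotone rearrangement, so each "Laguerre cell" is simply the interval whose length equals the mass of its target point (the data are an assumed toy example).

```python
from itertools import accumulate

# 1D toy version of semi-discrete transport: uniform density on [0, 1]
# pushed onto weighted points y_j. For a convex cost such as |x - y|^2 the
# optimal map is monotone, so each Laguerre cell is the interval whose
# length equals the mass q_j of its target point.
y = [0.1, 0.5, 0.9]          # target points (sorted)
q = [0.25, 0.5, 0.25]        # masses, summing to 1

bounds = [0.0] + list(accumulate(q))   # cell boundaries in [0, 1]
cells = list(zip(bounds[:-1], bounds[1:]))

def T(x):
    """Optimal transport map: send x to the target point of its cell."""
    for (a, b), yj in zip(cells, y):
        if a <= x <= b:
            return yj

print(cells)           # [(0.0, 0.25), (0.25, 0.75), (0.75, 1.0)]
print(T(0.1), T(0.6))  # whole intervals are sent to single points
```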

References:

  Lévy, B., Mohayaee, R., & von Hausegger, S. (2021). A fast semi-discrete optimal transport algorithm for a unique reconstruction of the early Universe. Monthly Notices of the Royal Astronomical Society.

## 18/11/2021 - Wavelets and astrophysics

This article was written by Robin RIEGEL and Claire SCHNOEBELEN, based on the presentations given by Erwan ALLYS and Philippe HELLUY on November 18th, 2021.

The analysis of an image, represented by a matrix coding the gray level of each pixel, consists of a series of mathematical operations highlighting the important components useful for subsequent analyses and treatments. In that regard, the wavelet method is a remarkably efficient tool. Through successive convolutions of the signal with maps called "wavelets", this method manages to obtain information relating to the amplitudes and phases at different scales, as well as interactions between scales. For a signal composed of 2^J points, the method consists in computing J + 1 series of 2^j coefficients, noted s_{j,k} for j going from 0 to J and k going from 0 to 2^j − 1. These coefficients are obtained by convolving the signal with maps φ_{j,k}, called "daughter wavelets", all obtained by translation and dilation of a single map φ, called the "mother wavelet".

Different mother wavelets can be used. The Daubechies wavelets are for example used for image compression. In this case, the coefficients are in practice obtained by recurrence, starting from the level-J series, made up of the points of the signal: the coefficients of the level-j series are obtained by performing a linear combination of neighboring coefficients in the level-(j+1) series. Another linear combination of the same coefficients gives the "details" d_{j,k}. The last step thus produces a single coefficient s_0 and 2^J − 1 details. Unlike Haar’s method, in which the details vanish only when the signal is locally constant, the linear combinations involved in Daubechies’ method are chosen so that the details also vanish when the signal is locally linear. The original signal can be fully reconstructed from the s_0 coefficient and the non-zero details computed along the way. Moreover, removing the coefficients close to zero results in a minimal loss of information, which makes this a very efficient method for signal compression. This algorithm, applied to the rows and then to the columns of pixels of an image, is in particular used for compression in the JPEG2000 storage format.
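One step of this averages-and-details scheme can be sketched in the simplest case, the Haar wavelet mentioned above (Daubechies wavelets use longer linear combinations, but the structure is the same). The signal below is an assumed example; note that the details vanish exactly where the signal is locally constant, and that the two linear combinations invert each other.

```python
# One step of the Haar transform: averages s and details d, computed by
# linear combinations of neighbouring samples. Thresholding small details
# is the compression step; the reconstruction below inverts the two
# combinations exactly.
def haar_step(s):
    avg = [(s[2 * k] + s[2 * k + 1]) / 2 for k in range(len(s) // 2)]
    det = [(s[2 * k] - s[2 * k + 1]) / 2 for k in range(len(s) // 2)]
    return avg, det

def haar_inverse(avg, det):
    s = []
    for a, d in zip(avg, det):
        s += [a + d, a - d]
    return s

signal = [4.0, 4.0, 6.0, 8.0, 9.0, 9.0, 5.0, 3.0]
avg, det = haar_step(signal)
print(avg, det)  # details are zero where the signal is locally constant
print(haar_inverse(avg, det) == signal)
```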

Another mother wavelet, the one we will discuss in the following, is the Morlet wavelet, obtained by multiplying a Gaussian by a complex exponential:

$$\psi(x) = e^{i\, n \cdot x}\; e^{-\|x\|^2 / (2\sigma^2)},$$

with n a vector of the plane and σ > 0 the scale parameter of the Gaussian. The convolution with one of these wavelets acts as a band-pass filter on the planar signal, which allows one to quantify the variations in the direction n at the scale 1/σ.
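The band-pass behaviour can be checked numerically. The sketch below is a 1D analogue of the 2D Morlet filter described above (all parameters are assumed for illustration): convolving with the wavelet gives a large response for an oscillation at the wavelet's own frequency and a negligible one for a much slower oscillation.

```python
import numpy as np

# 1D simplified Morlet wavelet: a Gaussian envelope times a complex
# exponential at frequency k0. Convolution keeps oscillations near k0
# and damps the others, i.e. it acts as a band-pass filter.
sigma, k0 = 1.0, 6.0
t = np.arange(-8, 8, 0.05)
morlet = np.exp(1j * k0 * t) * np.exp(-t**2 / (2 * sigma**2))

x = np.arange(0, 200, 0.05)
matched = np.cos(k0 * x)       # oscillates at the wavelet's frequency
mismatched = np.cos(0.5 * x)   # much slower oscillation

r1 = np.abs(np.convolve(matched, morlet, mode="valid")).mean()
r2 = np.abs(np.convolve(mismatched, morlet, mode="valid")).mean()
print(r1 > 10 * r2)  # True: the filter selects the matching scale
```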

This type of wavelet can thus be used to highlight the interactions between scales in a given signal. It is therefore of particular interest in the analysis of astrophysical images, especially in the analysis of the structures of the interstellar medium (ISM). Indeed, on the images we have, the latter does not look like a Gaussian field, entirely characterized by its power spectrum and without interactions between scales. On the contrary, the constructive interference between the different scales induces complex structures that we would like to analyze. A very efficient technique to study images is to use neural networks, but it is difficult to implement in our case because of the limited amount of data available. Thanks to the wavelet method, it is however possible to imitate these treatments in a totally controlled way and from a single image.

This involves calculating three series of coefficients, the Wavelet Scattering Transform (WST), by convolving the image I with the wavelets discussed in the previous paragraph:

$$S_0 = \langle I \rangle, \qquad S_1(j_1, \theta_1) = \big\langle\, | I \ast \psi_{j_1, \theta_1} |\, \big\rangle, \qquad S_2(j_1, j_2, \theta_1, \theta_2) = \big\langle\, \big|\, | I \ast \psi_{j_1, \theta_1} | \ast \psi_{j_2, \theta_2} \big|\, \big\rangle,$$

where j_1, j_2 and θ_1, θ_2 are respectively the scales and the directions of the considered wavelets. The first coefficient accounts for the average value of the image I, and those of the second series allow one to isolate the components of the image related to a given direction and scale. The third set of coefficients accounts for interactions between scales and directions. The key point in these formulas is the use of the modulus of the images, which introduces a non-linearity. Just as in neural networks, this non-linear operation is an integral part of the process in that it reveals the interferences between the different scales.

Another non-linear operation that one can introduce is the phase harmonic. This operation allows one to build statistics called Wavelet Phase Harmonics (WPH) in four steps. One starts by filtering an image at two different scales, then applies a phase harmonic, which modifies the phase of one of the two filtered images so that the two images, now of the same phase, become correlated. One then computes the different covariance moments of these images and builds the WPH statistics from them. The computed moments quantify the interactions between two given scales, and the WPH statistics of an image characterize its structure. Knowing these statistics, it is then possible to generate a large quantity of images with the same properties. We thus obtain a generative model.

A major application of this generative model is the study of the Cosmic Microwave Background (CMB), which allows us to learn more about the formation of our Universe and its large structures. Indeed, it is difficult to separate the CMB from foreground structures such as the Milky Way or stars. The WPH statistics can then be used to simulate a large number of images of these foregrounds from a single other image. The latter having a non-Gaussian structure, unlike the CMB, one can train a neural network to perform the separation and thus obtain data useful for the study of the cosmological background.

The wavelet method is thus a central tool for the multiscale analysis of images. Particularly adapted for image compression, it requires extensions to better account for interferences between scales and could thus help to better observe the cosmological background for a better understanding of the evolution of the Universe.

References:

  Allys, E., Marchand, T., Cardoso, J.-F., Villaescusa-Navarro, F., Ho, S., & Mallat, S. (2020). New interpretable statistics for large-scale structure analysis and generation. Physical Review D, 102(10), 103506.

## 04/11/2021 - Discrete geometry for images and point clouds

This article was written by Victor LE GUILLOUX and Lucas PALAZOLLO, based on the presentations given by Etienne BAUDRIER and Quentin MERIGOT on November 4th, 2021.

Discrete geometry makes it possible to develop various mathematical tools to characterise the geometry and topology of objects contained in an image, i.e. the shape of a subset of pixels or voxels. These tools are used, in particular, in biomedical imaging and materials science. From an image partition or even a point cloud, the aim is to obtain an efficient model, thanks to numerous estimators, in order to calculate lengths or to model precisely the curves and edges of a shape.

In order to calculate discrete geometric quantities of an object, such as its volume, length or curvature, it is essential to use estimators that converge to the continuous geometric quantities we are seeking as the resolution of the image increases. This type of convergence is called multigrid convergence. Let us denote by h the discretization step of the grid (i.e. the size of the pixels), by Ɛ̃ the discrete estimator, by Ɛ the continuous geometrical characteristic, by D_h(X) the digitization of a shape X at step h, and by F a family of shapes. Then the convergence of the estimator can be written:

$$\forall X \in F, \quad \big| \tilde{\mathcal{E}}\big(D_h(X)\big) - \mathcal{E}(X) \big| \xrightarrow[h \to 0]{} 0.$$
Consider the example of the perimeter estimator. First, we can think of counting the number of elementary moves: we count the number of horizontal and vertical moves made while following the pixels at the edge of the studied shape. However, we then end up calculating the perimeter of the rectangle in which our shape is inscribed, which does not make it a reliable estimator at all. More generally, it can be shown that estimators based on elementary moves are doomed to failure, because they would only converge for a countable set of segments.
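The failure of the elementary-move estimator is easy to demonstrate numerically. The sketch below (an assumed toy experiment) digitizes a disk of radius 1 and counts the exposed pixel edges: the estimate converges to 8r, the perimeter of the bounding square, instead of the true perimeter 2πr ≈ 6.28r, no matter how fine the grid.

```python
import math

# Elementary-move estimator on a digitized disk of radius r: count the
# exposed pixel edges (horizontal/vertical moves along the boundary) and
# multiply by the pixel size h. For a convex shape this converges to the
# perimeter of the bounding box (8r here), not to the true value 2*pi*r.
def staircase_perimeter(r, h):
    inside = lambda i, j: (i * h) ** 2 + (j * h) ** 2 <= r ** 2
    n = int(r / h) + 2
    edges = 0
    for i in range(-n, n):
        for j in range(-n, n):
            if inside(i, j):
                edges += sum(not inside(i + di, j + dj)
                             for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)])
    return edges * h

est = staircase_perimeter(1.0, 0.01)
print(est, 2 * math.pi)  # about 8.0 versus 6.28...
```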

One solution is to estimate the length by polygonalization, i.e. by approximating the curve by line segments. The line segments form a polygon whose perimeter is easy to calculate. To do this, an algorithm is used to detect whether pixels belong to the same discretised line segment, and the set of maximal line segments is sought. It can be shown that this estimator converges in O(h^{1/2}). In this algorithm, the maximal line segments are used as a pattern, but it is possible to use other methods by changing the patterns.

When the shape is obtained from measurements, it is no longer defined as a connected subset of pixels but as a point cloud. Specific methods have therefore been proposed, such as the so-called distance-based geometric inference methods. Given an object K, one can for example retrieve information about its curvature by using a tubular neighborhood of K, i.e. by considering all points located at less than a given distance (the scale parameter) from the object K. If r ≥ 0 denotes the scale parameter, let us denote by K^r the tubular neighborhood of K for the parameter r. The first result on these tubular neighborhoods is Weyl’s Theorem, stating that the d-dimensional volume of the neighborhood K^r is a polynomial function of degree d on a certain interval, whose size is specified by Federer’s Theorem. More precisely, Federer’s Theorem describes the boundary measure of K. This measure gives a weight to any subset B of K as follows:

$$\mu_{K,r}(B) = \mathcal{H}^d\big( K^r \cap p_K^{-1}(B) \big),$$

where H^d denotes the d-dimensional volume and p_K denotes the projection onto K. This measure contains information about the local curvature of K in B, because it gives a higher weight to subsets of strong curvature. Federer’s Theorem then states that there are signed measures Φ_0, ..., Φ_d such that, for r small enough,

$$\mu_{K,r}(B) = \sum_{i=0}^{d} r^{\,d-i}\, \omega_{d-i}\, \Phi_i(B),$$

where ω_k denotes the volume of the k-dimensional unit ball,
and it is these signed measures that contain the information about the local curvature of K, in particular about the possible edges of K.

The question then arises of how to calculate, in practice, the boundary measure of an object. To compute it, we use a Monte-Carlo-type method. This method consists in choosing a large number of points in the neighborhood of the object and using them to compute a discrete estimator of the boundary measure: each point is, in a way, projected onto the edge of the object and gives a positive weight to the projection location. The larger the number of sample points, the higher the probability of obtaining a measure that approximates the boundary measure well.
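The Monte-Carlo estimator above can be sketched for a finite point cloud, where projection onto K is just a nearest-neighbour search. In the assumed toy cloud below (three collinear points, with an offset radius large enough for the disks to overlap), the two end points capture larger regions of the tubular neighborhood than the middle point, illustrating how the measure concentrates where the shape bends or stops.

```python
import math, random

# Monte-Carlo sketch of the boundary measure of a point cloud K: sample
# points uniformly in a box containing the offset K^r, keep those within
# distance r of K, and give each one weight at its nearest point of K.
random.seed(0)
K = [(-1.0, 0.0), (0.0, 0.0), (1.0, 0.0)]   # three collinear points
r = 0.8                                     # scale parameter (disks overlap)
weights = [0.0] * len(K)
n_samples = 20000
for _ in range(n_samples):
    x = (random.uniform(-2, 2), random.uniform(-1, 1))
    dists = [math.dist(x, k) for k in K]
    i = min(range(len(K)), key=dists.__getitem__)
    if dists[i] <= r:
        weights[i] += 1 / n_samples

print(weights)  # the two end points carry more mass than the middle one
```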

These examples show the diversity of methods used in the discrete estimation of continuous objects in order to retrieve their geometrical and topological properties. Discrete geometry provides robust and efficient tools. Moreover, extensive mathematical studies are needed to determine the accuracy of the different estimators and to ensure their relevance to the problems considered.

References:

  Lachaud, J.O., & Thibert, B. (2016). Properties of Gauss digitized shapes and digital surface integration. Journal of Mathematical Imaging and Vision, 54(2), 162-180.

  Mérigot, Q., Ovsjanikov, M., & Guibas, L.J. (2010). Voronoi-based curvature and feature estimation from point clouds. IEEE Transactions on Visualization and Computer Graphics, 17(6), 743-756.

## 21/10/2021 - Lung ventilation modelling and high performance computing

This article was written by Nina KESSLER and Samuel STEPHAN, based on the presentations given by Céline GRANDMONT and Hugo LECLERC on October 21st, 2021.

The lungs allow the oxygenation of the blood and the removal of carbon dioxide produced by the body. To achieve this, the lung tissue has a special tree-like structure that allows air to travel from the trachea to the alveoli in the lungs. This tree has an average of 23 generations of increasingly smaller bronchi for a human lung. The fact that we can breathe effortlessly at rest is due to the pressure difference between the alveoli and the surrounding air, as well as the elasticity of the tissues. However, certain pathologies can interfere with this process, such as asthma, which causes the diameter of the bronchial tubes to decrease, reducing breathing capacity. Dust particles or pollution can also affect breathing. In order to better understand the behaviour of the pulmonary system and to provide analysis tools in pathological cases, mathematical modelling has been proposed.

The lungs can be modelled as a tree of N = 23 generations of bronchi, each branch of which represents a bronchus, with, in the last generation, alveoli represented by masses attached to springs. To a first approximation, the airflow in each bronchus is characterised by a single parameter, the resistance R. The flow rate Q is then directly related to the pressure difference ∆P between the entrance and the exit of the bronchus by the relation ∆P = RQ. Air transport can therefore be modelled mathematically by means of a tree of 2^N resistances. The resistance associated with the i-th generation bronchi is of the form R_i = R_{i−1} β^{−3} = R_0 β^{−3i} with β = 0.85. Therefore, at the level of the alveoli, since the flow is equal to Q_23 = Q_0/2^23, the pressure difference across the last generation is equal to R_23 Q_23 = R_0 Q_0/(2β³)^23 = (1.63/2)^23 R_0 Q_0. It is then possible to determine the movement of the alveoli under the effect of the pressure. The position u_i(t) of the i-th alveolus satisfies a differential equation of the form

$$m\, \ddot{u}_i(t) + k\, u_i(t) = f_i(t) - p_i(t),$$

where the pressure term

$$p_i(t) = \sum_{j} A_{ij}\, \dot{u}_j(t)$$

takes into account the coupling through the bronchial tree: the matrix A encodes the resistances of the tree and acts as a non-local damping. The model can be implemented numerically and compared with experimental data from dynamic lung imaging. These data allow us to determine the parameters of the model, test its limitations and then improve it. Some of the limitations are due to the boundary conditions, i.e. the way the bronchial tree and the air at its entrance interact. But others are practical, as we will see later, due to the computational cost of the numerical simulations.
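The per-generation pressure drops of the resistance tree are easy to check numerically. The sketch below assumes R_0 = Q_0 = 1 (normalized units): the flow halves at each bifurcation while the resistance grows as β^{−3} per generation, so each generation contributes (β^{−3}/2)^i ≈ (1.63/2)^i of the first one, a geometrically decreasing amount.

```python
# Pressure drop per generation in the resistance-tree model: the flow
# halves at each bifurcation (Q_i = Q_0 / 2**i) while the resistance grows
# as R_i = R_0 * beta**(-3*i) with beta = 0.85, so generation i contributes
# Delta_P_i = R_i * Q_i = (beta**-3 / 2)**i * R_0 * Q_0.
beta, R0, Q0 = 0.85, 1.0, 1.0                       # R0, Q0 normalized to 1
drops = [(beta ** (-3 * i) / 2 ** i) * R0 * Q0 for i in range(24)]

print(drops[23])   # contribution of the alveolar generation (tiny)
print(sum(drops))  # total pressure difference along one root-to-alveolus path
```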

The previous model was used to study respiratory treatments based on Heliox, a mixture of helium and oxygen that could make it easier for people with lung diseases to breathe. Heliox is less dense than air, although more viscous. At low flow rates, its passage through a pipe remains laminar, so the overall resistance of the bronchial tree is reduced: the gas then lowers the effort required to breathe. However, when the gas flow rate becomes high, the benefits of the product are considerably reduced because of its viscosity. By providing a description of the overall dynamics of ventilation, the model allows a more detailed analysis of the impact of Heliox on breathing and helps to determine when it is actually beneficial to use it.

The application of the models described above to concrete cases requires a more precise description of the flow in the first bronchi, and therefore the acquisition of data on patients. This can be done with the help of tomography, which allows a 3D image to be reconstructed from a sequence of 2D images. A segmentation algorithm must then be applied to create a volume mesh of the bronchial tree. However, the basic algorithms for reconstructing a CT image are not suitable here: the movements of the organs within the thoracic wall make the data acquisition noisy. A recently proposed segmentation method consists of starting with a small sphere contained in a zone of the tree and then progressively enlarging it while minimising its total surface area at each step.

For this problem, as for many others in mathematics, the quantity of data to be manipulated can be very large, leading to long algorithm execution times. To reduce these execution times, it is necessary to manage more finely the calculations carried out by the processors and the accesses to RAM. Optimal use of a processor involves parallelizing the calculations it is given to perform. It may therefore be useful to vectorise calculations, i.e. to allow the processor to perform simultaneous operations on neighbouring data. It is also sometimes necessary to redefine the numerical methods themselves to take advantage of the new performance of computing tools. For example, it is possible to create very precise numerical methods using symbolic computation. These methods potentially require more computation but less communication with memory.
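The vectorisation idea can be sketched with an assumed toy computation: the same element-wise update written as an explicit Python loop and as a single array expression that lets the underlying library apply one operation to neighbouring data at once (the numbers and the axpy-style update are illustrative, not taken from the lung model).

```python
import numpy as np

# The same update computed element by element and as one vectorised
# expression: identical results, but the second form exposes the
# "one instruction on neighbouring data" pattern discussed above.
n = 10 ** 5
a, x, y = 2.0, np.arange(n, dtype=float), np.ones(n)

looped = np.empty(n)
for i in range(n):            # element-by-element: poor data throughput
    looped[i] = a * x[i] + y[i]

vectorised = a * x + y        # one fused array expression
print(np.array_equal(looped, vectorised))
```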

The example of lung tissue is a perfect illustration of the different steps required to study complex physiological phenomena. Even with many approximations on mechanics and geometry, this approach makes it possible to establish simplified models that can explain certain data and then eventually orient certain therapeutic studies. This work therefore involves multiple skills, ranging from model analysis to a thorough mastery of computer tools to accelerate the numerical simulations of the models.

References:

  Grandmont, C., Maury, B., & Meunier, N. (2006). A viscoelastic model with non-local damping application to the human lungs. ESAIM: Mathematical Modelling and Numerical Analysis, 40(1), 201-224.

## 07/10/2021 - Turbulence, aeronautics and fluid-particle dynamics

This article was written by Raphaël COLIN and Thomas LUTZ-WENTZEL, based on the presentations given by Abderahmane MAROUF and Markus UHLMANN on October 7th, 2021.

The experiments of Osborne Reynolds (1883) showed the existence of two regimes of fluid flow: laminar and turbulent. By running a fluid through a cylindrical pipe at a controlled rate, he noticed a change in the behavior of the flow as the flow rate increased: the fluid goes from a regular state to a disordered state, which correspond respectively to the laminar and turbulent regimes.

The phenomenon of turbulence refers to the state of a liquid or a gas in which the velocity presents a vortical character at any point. These vortices, present at different length scales, up to the microscopic scale, see their size, their orientation and their localization change constantly in time.

Reynolds introduced, following his experiments, the Reynolds number:

$$Re = \frac{\rho U D}{\mu},$$

where ρ is the density of the fluid, U its characteristic velocity, μ its viscosity coefficient and D the reference length (for example the diameter of the cylinder in the Reynolds experiment). This number characterizes the behavior of the fluid in a given domain: a large value of Re is associated with turbulent behavior.
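The formula is simple enough to evaluate directly. The sketch below uses assumed textbook values for water in a small pipe; the laminar/turbulent threshold near Re ≈ 2300 is the classical order of magnitude for pipe flow, not a sharp constant.

```python
# Reynolds number Re = rho * U * D / mu for the pipe-flow setting above.
def reynolds(rho, U, D, mu):
    return rho * U * D / mu

# Water (rho ~ 1000 kg/m^3, mu ~ 1e-3 Pa.s) at 1 m/s in a 1 cm pipe:
Re = reynolds(1000.0, 1.0, 0.01, 1e-3)
print(Re, "turbulent" if Re > 2300 else "laminar")  # about 1e4: turbulent
```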

The Navier-Stokes equations describe the behavior of incompressible fluids. They are written:

$$\frac{\partial u}{\partial t} + (u \cdot \nabla) u = -\frac{1}{\rho} \nabla p + \nu \Delta u, \qquad \nabla \cdot u = 0.$$

The numerical solution of these equations in the case of a turbulent fluid is difficult. Indeed, one must discretize the space using a spatial mesh. One then obtains cells (small volumes) in which the velocity and pressure are constant and evolve over time. A good approximation requires controlling what happens inside these cells: we want the fluid to be regular there. The presence of vortices down to the microscopic scale leads us to consider very fine meshes, which considerably increases the computational cost.

Various methods are available for numerical resolution, for instance DNS (direct numerical simulation) and RANS (Reynolds Averaged Navier-Stokes).

On the one hand, DNS is the more precise of the two: it consists in solving the Navier-Stokes equations directly, using a mesh fine enough to capture the smallest vortex structures. It has the disadvantage of being computationally expensive, which can make it unusable in practice.

The RANS method, on the other hand, is based on a local statistical description of the velocity field. To do this, the velocity in each cell is decomposed as the sum of a mean velocity and a fluctuation term:

$$u = \overline{u} + u'.$$

By injecting this decomposition into the equations and after simplifications, we obtain the following averaged equation:

$$\frac{\partial \overline{u}_i}{\partial t} + \overline{u}_j \frac{\partial \overline{u}_i}{\partial x_j} = -\frac{1}{\rho} \frac{\partial \overline{p}}{\partial x_i} + \nu \Delta \overline{u}_i - \frac{\partial \overline{u'_i u'_j}}{\partial x_j}.$$

The last term of this equation, involving the Reynolds stress tensor, encodes the effect of the fluctuations around the mean velocity, and RANS methods propose to model this term using eddy-viscosity assumptions. The resulting models allow the use of coarser meshes that still capture the majority of vortex structures. This method is less accurate than DNS but has a more reasonable computational cost.
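The Reynolds decomposition can be sketched on a synthetic velocity record (the signals below are assumed for illustration): subtracting the mean leaves the fluctuations, and the correlation of two fluctuation components is the kind of quantity that enters the Reynolds stress tensor.

```python
import math

# Reynolds decomposition on a synthetic record: u = u_mean + u', and the
# correlation <u'v'> that feeds the Reynolds stress tensor (up to -rho).
N = 10000
u = [5.0 + math.sin(0.1 * k) for k in range(N)]        # streamwise velocity
v = [0.5 * math.sin(0.1 * k + 0.2) for k in range(N)]  # correlated fluctuation

u_mean = sum(u) / N
v_mean = sum(v) / N
u_fluc = [ui - u_mean for ui in u]
v_fluc = [vi - v_mean for vi in v]

corr = sum(a * b for a, b in zip(u_fluc, v_fluc)) / N  # <u'v'>
print(round(u_mean, 2), round(corr, 3))
```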

These numerical methods are used in many applications. For example, in aeronautics, the boundary layer phenomenon gives rise to turbulent airflow on the surface of aircraft wings. This can lead to a decrease in aerodynamic performance and even to the breakage of the wing via the resonance phenomenon. Bio-morphing techniques aim to develop wings that are adaptable over time, in order to reduce the impact of turbulence on them. For instance, through a controlled variation of the temperature of the materials constituting the wing, we can modify their rigidity during the flight. This allows their deformation into a shape adapted to the flight phase considered.

It is also interesting to study the behavior of particles in turbulent fluids, since this type of system is omnipresent in nature (clouds, sediment transport, etc.) or in industrial systems (solar furnaces). Fluids and particles interact in a complex way since turbulence influences the dynamics of particles and particles can themselves modify the turbulent properties of the fluid.

The spatial distribution of the particles is for example modified by the centrifugal force due to the vortex structures of the turbulent fluid, but also by the wake effect: particles that follow others are less slowed down by the fluid, which tends to group the particles together. Turbulent fluids also influence the settling velocity of the particles (the velocity reached when the fluid resistance reduces the acceleration of the particle to zero, so that its velocity no longer evolves): for example, some particles are attracted by certain parts of the vortex structures, which affects their settling velocity. The effect of particles on turbulence is less well understood and requires the use of different models for the particles and for the fluid, which must then be successfully combined. The particles can reduce or amplify the turbulent behavior of the fluid (increase the speed of the vortices or dissipate them).

To simulate such fluid-particle systems, there are several interactions to be computed: the effects of the fluid on the particles and of the particles on the fluid, and the particle-particle interactions. For the fluid-particle interactions, one can use the immersed boundary method, where, in addition to the mesh sampling the space, the surface of the particles is sampled at points on which the applied forces are computed. The motion of the particles can be calculated with a discrete version of Newton’s equations. Other methods also exist, in particular methods that use an adaptive space mesh (finer around the particles and coarser elsewhere) to reduce the computational cost. The computation of interactions between particles requires more classical methods of collision detection and application of forces.

We have seen that DNS methods are very expensive in terms of computational resources, so it is appropriate to focus on the performance of such simulations. It is possible to divide the regular grid on which we work into a number of subdomains, each of which can be associated with a processor, in order to process the calculations in parallel. New problems then arise, for example the treatment of particles when they travel from one subdomain to another, and the balancing of the computational load (indeed, what should be done if the majority of the particles lie in a single subdomain?).
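The load-balancing issue can be sketched with an assumed toy decomposition: particles are binned into the subdomain that contains them, and if their positions cluster (here a Gaussian cloud, chosen for illustration), one processor ends up with most of the work.

```python
import random

# Toy view of the load-balancing issue in a domain decomposition:
# clustered particles concentrate on one subdomain, leaving the others idle.
random.seed(1)
n_sub = 4                                        # split [0, 1) into 4 subdomains
particles = [random.gauss(0.2, 0.05) for _ in range(1000)]   # clustered cloud
loads = [0] * n_sub
for x in particles:
    s = min(max(int(x * n_sub), 0), n_sub - 1)   # index of the subdomain
    loads[s] += 1

imbalance = max(loads) / (sum(loads) / n_sub)    # 1.0 would be perfect balance
print(loads, round(imbalance, 2))
```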

Turbulence is a phenomenon that occurs in many fields, such as aeronautics or solar furnace design. The presence of vortex structures at small scales requires the development of efficient numerical methods, with optimized computational implementations or with simplified models.

References:

  Marouf, A., Simiriotis, N., Tô, J.-B., Rouchon, J.-F., Braza, M., Hoarau, Y. (2019). Morphing for smart wing design through RNAS/LES methods. ERCOFTAC Bulletin.

 Chouippe, A., Uhlmann, M. (2019). On the influence of forced homogeneous-isotropic turbulence on the settling and clustering of finite-size particles. Acta Mechanica, 230(2), 387-412.

## 23/09/2021 - Symplectic geometry and astrophysics

This article was written by Robin RIEGEL and Nicolas STUTZ, based on the presentations given by Rodrigo IBATA and Alexandru OANCEA on September 23rd, 2021.

Dark matter makes up about 25% of the Universe but is still very mysterious. The standard model of cosmology called ΛCDM provides a theory to account for the interactions of dark matter with the structures of the Universe.

In order to validate this model, one possibility is to study the residues of low-mass star clusters orbiting the Milky Way, called stellar streams, which interact with dark matter and whose trajectories have been recorded by the Gaia satellite. But this is very difficult: the ΛCDM model predicts the accelerations of these stars, but measuring such accelerations for these structures with our current instruments would take thousands or even millions of years.

Another approach is then considered, and we will see that the tools of symplectic geometry play an integral part.

Instead of using the data to validate a theoretical model, astrophysicists try to build a model from the data. More precisely, it is a question of determining the Hamiltonian function H(q,p), or energy of the system, function of the positions q = (q1,...,qN) and impulses p = (p1,...,pN). The associated Hamiltonian dyamic is the following differential system:

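Hamilton's equations can be integrated numerically; symplectic schemes are preferred precisely because, like the exact flow, they preserve the symplectic 2-form. A minimal sketch for a simple pendulum, H(q, p) = p²/2 − cos q (the example and scheme are illustrative, not taken from the talk):

```python
import math

def symplectic_euler(q, p, dH_dq, dH_dp, dt, steps):
    """Integrate dq/dt = dH/dp, dp/dt = -dH/dq with the semi-implicit
    (symplectic) Euler scheme, which preserves the symplectic 2-form."""
    traj = [(q, p)]
    for _ in range(steps):
        p = p - dt * dH_dq(q)   # kick: update the impulse first
        q = q + dt * dH_dp(p)   # drift: then the position, with the new impulse
        traj.append((q, p))
    return traj

# Simple pendulum, H(q, p) = p**2 / 2 - cos(q)
traj = symplectic_euler(q=1.0, p=0.0,
                        dH_dq=lambda q: math.sin(q),
                        dH_dp=lambda p: p,
                        dt=0.01, steps=5000)

energy = lambda q, p: 0.5 * p * p - math.cos(q)
drift = max(abs(energy(q, p) - energy(1.0, 0.0)) for q, p in traj)
```

Because the scheme is symplectic, the energy only oscillates around its initial value instead of drifting, which is what makes such integrators suitable for long orbital computations.
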
As it is, this system is too complicated to learn H from the data. However, the tools of symplectic geometry allow us to simplify its study by expressing it in new coordinates. The goal is to find coordinates (P(p,q), Q(p,q)) that yield a simpler system to solve while giving more information about the dynamics. The flows of a Hamiltonian system preserve the symplectic 2-form, and thus some associated properties, such as the volume. Coordinate changes of the form (P, Q) = φ(p, q), where φ preserves the symplectic 2-form (as Hamiltonian flows do), are therefore the preferred transformations. Such a coordinate change is called a canonical transformation.

The natural coordinates to look for are the action-angle coordinates (θ(p,q), J(p,q)), solutions of a Hamiltonian system:

$$\dot{\theta}_i = \frac{\partial K}{\partial J_i}, \qquad \dot{J}_i = -\frac{\partial K}{\partial \theta_i}.$$

Indeed, for this coordinate system, the Hamiltonian K(θ,J) then does not depend on θ and the integration is easier. Moreover, J is constant along the trajectories of each star.
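
A standard textbook example (not from the talk) makes this concrete: for the one-dimensional harmonic oscillator $H(q,p) = (p^2 + \omega^2 q^2)/2$, the action-angle coordinates are explicit,

$$q = \sqrt{2J/\omega}\,\sin\theta, \qquad p = \sqrt{2J\omega}\,\cos\theta, \qquad K(\theta, J) = H(q, p) = \omega J,$$

so that $\dot\theta = \partial K / \partial J = \omega$ and $\dot J = 0$: the motion reduces to a uniform rotation of the angle θ at constant action J.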

This coordinate system gives a lot of information: it makes it possible to clearly identify the objects under consideration and, finally, to deduce the acceleration field.

How, then, can these action-angle coordinates be found from the data? About 200 years ago, Jacobi introduced the so-called generating functions S(q, P) of a canonical transformation (P, Q) = φ(p, q).

This function verifies the relations

$$p = \frac{\partial S}{\partial q}(q, P), \qquad Q = \frac{\partial S}{\partial P}(q, P).$$

These conditions result from the fact that the symplectic form vanishes on the set

$$\Lambda = \left\{ (q, p, Q, P) \;:\; p = \frac{\partial S}{\partial q}(q, P), \; Q = \frac{\partial S}{\partial P}(q, P) \right\}.$$

This generating function thus ensures that the coordinate change preserves the symplectic structure. To determine the coordinates (θ,J) from the data, it is possible to use a neural network-based deep learning method that computes the generating function by imposing that J is invariant on each trajectory.
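
As a quick sanity check of these relations (an illustrative example, not from the talk), the simplest generating function produces the identity transformation:

$$S(q, P) = qP \quad\Longrightarrow\quad p = \frac{\partial S}{\partial q} = P, \qquad Q = \frac{\partial S}{\partial P} = q,$$

i.e. (Q, P) = (q, p), which is trivially canonical.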

Thus, starting from the observations of stellar streams and studying the Hamiltonian system, we can determine the coordinates (θ,J) and recover the Hamiltonian of the system. The latter contains all the physics of the system, in particular the effect of dark matter on the dynamics of the stars. This is why the study of stellar streams is a very promising way to probe dark matter, by confronting the computed acceleration fields with those predicted by the ΛCDM model.

References:
  Ibata, R., Malhan, K., Martin, N., Aubert, D., Famaey, B., Bianchini, P., Monari, G., Siebert, A., Thomas, G.F., Bellazzini, M., Bonifacio, P., Renaud, F. (2021). Charting the Galactic Acceleration Field. I. A Search for Stellar Streams with Gaia DR2 and EDR3 with Follow-up from ESPaDOnS and UVES. The Astrophysical Journal, 914(2), 123.

 Ibata, R., Diakogiannis, F. I., Famaey, B., & Monari, G. (2021). The ACTIONFINDER: An unsupervised deep learning algorithm for calculating actions and the acceleration field from a set of orbit segments. The Astrophysical Journal, 915(1), 5.

## 09/09/2021 - Discrete differential geometry

This article was written by Philippe FUCHS and Claire SCHNOEBELEN, based on the presentations given by Mathieu DESBRUN and Franck HETROY-WHEELER on September 9th, 2021.

It is frequent that a real-life object is modelled by a surface for the needs of animations or 3D simulations. In order to be manipulated by computers, whose computation and storage capacities are limited, the contours of this object are approximated by a finite number of juxtaposed polygons, the whole of which is called a "mesh". It is this discrete representation of the object's surface that is then used numerically.

To analyse and manipulate these surfaces, it is interesting to approach them from the point of view of differential geometry, which characterizes regular surfaces, in particular through the notions of curvature. However, in order to use these tools on our polygonal surfaces, their definitions must first be extended. Consider for example the curvature, a central notion in the study of surfaces: as it is defined on a smooth surface, it has no meaning on a piecewise planar surface. As we will see, it is nevertheless possible to redefine the notions of curvature using the Gauss-Bonnet formula and the discrete Laplace-Beltrami operator.

Let us now see how curvature is defined for a regular surface. On such a surface, each point x is contained in a neighbourhood V parametrized by a smooth map X from an open set U of the plane to V. The set of local parametrizations X describes an embedding of the surface in R3. The first derivatives of X determine the so-called first fundamental form, a quadratic form from which a distance on the surface is defined. The second derivatives of X determine the shape operator of the surface, from which the different notions of curvature are defined. The Gaussian curvature and the mean curvature at x are thus defined as, respectively, the determinant and half the trace of this operator.

Of these two notions of curvature, only one is independent of the embedding: the Gaussian curvature. It can in fact be expressed as a function of the coefficients of the first fundamental form and its derivatives which, unlike the shape operator, do not depend on the embedding of the surface in R3. For this reason, the Gaussian curvature is said to be intrinsic and the mean curvature extrinsic. In general, a quantity is extrinsic if it depends on the embedding of the surface in R3, in other words if the position of the surface in space must be considered to measure it. On the contrary, intrinsic quantities are those which can be defined on the surface as a set with its own properties, without recourse to the surrounding space, and which do not depend on the way the surface is observed from the outside. The plane and the cylinder, for example, have the same first fundamental form, and thus the same Gaussian curvature, but not the same mean curvature. This illustrates the fact that they are two locally isometric surfaces embedded differently in R3.

Let us now see how to define the curvature on our polygonal surfaces. It is not possible to define a shape operator on them as was done on regular surfaces. To extend the two previous notions of curvature, we must approach them differently. We know for example that the Gaussian curvature K is involved in the Gauss-Bonnet formula:

$$\int_\Omega K \, \mathrm{d}A + \int_{\partial\Omega} k_g \, \mathrm{d}s + \sum_i \varepsilon_i = 2\pi \, \chi(\Omega),$$

where χ(Ω) represents the Euler characteristic of the domain Ω, k_g the geodesic curvature of its boundary and the εi the exterior angles of the boundary of Ω. By choosing for Ω a small, simply connected neighbourhood of x in our polygonal surface, with piecewise geodesic boundary (so that χ(Ω) = 1 and the geodesic curvature term vanishes), we can define the Gaussian curvature at x as

$$K(x) = \frac{1}{A} \left( 2\pi - \sum_i \varepsilon_i \right),$$

where A is the area of Ω. In the same way, we can extend the definition of the mean curvature by starting from the following equality:

$$\Delta_S \, x = 2H \, \mathbf{n},$$

where Δ_S is the Laplace-Beltrami operator on the surface and n a suitably oriented unit normal. It is possible to give an equivalent of this operator in the case of polygonal surfaces: the discrete Laplace-Beltrami operator. The previous formula, applied to this new operator, allows us to define the mean curvature of our surface at x.
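
In the discrete setting, the Gaussian curvature concentrated at a vertex is simply the classical angle defect: 2π minus the sum of the face angles incident to the vertex. A minimal sketch (computing the integrated curvature, without the area normalisation; the example is illustrative):

```python
import math

def angle_defect(vertex, ring):
    """Integrated discrete Gaussian curvature at a mesh vertex:
    2*pi minus the sum of the angles, at `vertex`, formed with
    consecutive neighbours in the cyclic list `ring`."""
    total = 0.0
    for a, b in zip(ring, ring[1:] + ring[:1]):
        u = tuple(ai - vi for ai, vi in zip(a, vertex))
        v = tuple(bi - vi for bi, vi in zip(b, vertex))
        dot = sum(x * y for x, y in zip(u, v))
        nu = math.sqrt(sum(x * x for x in u))
        nv = math.sqrt(sum(x * x for x in v))
        total += math.acos(dot / (nu * nv))
    return 2.0 * math.pi - total

# Corner of a cube: three right angles meet, defect = 2*pi - 3*(pi/2) = pi/2
defect = angle_defect((0.0, 0.0, 0.0),
                      [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)])
```

At a vertex whose incident faces are coplanar the angles sum to 2π and the defect vanishes, consistent with the flatness of the surrounding mesh.
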

The process which consists in exploiting some properties of a notion of differential geometry in order to propose a definition in the discrete case can be used for other generalizations. For example, to smooth out the noise of a surface scan, we would like to apply a method similar to the nonlinear diffusion used on regular surfaces, given by

$$\frac{\partial u}{\partial t} = 2H \, \mathbf{n},$$

where n is a normal vector to the surface. However, this equation cannot be immediately applied to polygonal surfaces. Returning to its origin, we see that the motion u of the surface that it describes is in fact the one that decreases the area the fastest. Having noticed this, it is easy, by using the gradient of the area, to develop a method for denoising polygonal surfaces which, as the mesh becomes finer and tends towards a regular surface, amounts to applying the nonlinear diffusion we started with.
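
A minimal illustration of this idea one dimension lower (a closed polygon instead of a mesh, an assumption made here for brevity): moving each vertex toward the midpoint of its neighbours shortens the curve, the polygon analogue of the area-minimizing diffusion above.

```python
import math

def laplacian_smooth(points, lam=0.5, iterations=10):
    """Discrete analogue, on a closed polygon, of the area-minimizing
    diffusion: each vertex moves a fraction `lam` of the way toward the
    midpoint of its two neighbours, damping the high-frequency noise."""
    pts = [tuple(p) for p in points]
    n = len(pts)
    for _ in range(iterations):
        pts = [tuple(pts[i][d]
                     + lam * (0.5 * (pts[i - 1][d] + pts[(i + 1) % n][d]) - pts[i][d])
                     for d in range(2))
               for i in range(n)]
    return pts

def perimeter(points):
    n = len(points)
    return sum(math.dist(points[i], points[(i + 1) % n]) for i in range(n))

# A noisy hexagonal outline: smoothing strictly shrinks its perimeter.
noisy = [(0.0, 0.0), (1.0, 0.0), (1.2, 0.5), (1.0, 1.0), (0.0, 1.0), (-0.3, 0.5)]
smoothed = laplacian_smooth(noisy, lam=0.5, iterations=5)
```

On a triangle mesh, the same steepest-descent-of-area idea leads to the cotangent-weighted discrete Laplace-Beltrami operator instead of the uniform averaging used in this sketch.
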

The extension of the concepts of differential geometry to polygonal surfaces has many other practical applications. Suppose, for example, that we want to draw the coat of an animal in an animated film. We model the animal in question by a polygonal surface and its coat by a vector field on this surface. In order to render it realistically, we would like to build a field that is smooth almost everywhere, with singularities determined in advance.

The most common way to do this is to exploit parallel transport. This notion, first defined on regular surfaces, is easily generalized to piecewise planar surfaces. On a plane, the parallel transport associated with the Levi-Civita connection takes the very simple form of a translation. To define it on the whole surface, it suffices to determine how to transport a vector v from a face P to one of its neighbours P′. To do this, we consider the two faces as two pieces of a plane that has been folded. Unfold it for a moment: the two faces now lie in the same plane, on which v can be transported by translation from one face to the other. Now fold the two faces back. The vector obtained after translating v is a tangent vector to P′, and it is this vector that we define as the parallel transport of v from P to P′.
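
The "unfold, translate, refold" operation can be expressed without actually unfolding: it amounts to rotating the vector by the rotation that takes the first face's normal onto the second's. A minimal sketch using Rodrigues' rotation formula (an illustrative implementation, not the one from the talk):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def transport_across_edge(v, n1, n2):
    """Parallel-transport a tangent vector v from a face with unit normal n1
    to an adjacent face with unit normal n2: rotate v by the rotation taking
    n1 onto n2, about the axis n1 x n2 (Rodrigues' rotation formula)."""
    axis = cross(n1, n2)
    s = math.sqrt(dot(axis, axis))   # sine of the dihedral rotation angle
    c = dot(n1, n2)                  # cosine of that angle
    if s < 1e-12:                    # coplanar faces: plain translation
        return list(v)
    k = [a / s for a in axis]        # unit rotation axis, along the shared edge
    kxv = cross(k, v)
    kv = dot(k, v)
    return [v[i] * c + kxv[i] * s + k[i] * kv * (1.0 - c) for i in range(3)]

# A vector along the shared edge is unchanged; one across it tilts with the fold.
along = transport_across_edge([1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0])
across = transport_across_edge([0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0])
```

The rotation axis is the direction of the shared edge, which is why a vector lying along that edge is transported to itself, exactly as in the unfolding picture.
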

In defining parallel transport in this way, we come up against the notion of holonomy. Consider a vertex of our polygonal surface and the set of faces attached to it. By transporting a vector from one face to another with the process described above, it is possible that the vector obtained after a complete turn around the vertex, passing from face to face, is not collinear with the original vector. This will in fact be the case as soon as these faces are not all coplanar. In order to control this phenomenon, we add a step to the construction. For each pair of adjacent faces, we compose the translation on the unfolded faces with a rotation, chosen so that the parallel transport of a vector around a vertex coincides with the original vector. It is possible to define these rotations on the whole surface by ensuring that around almost every vertex the transport of a vector coincides with itself: the angles of these rotations are solutions of a large linear system. With this method, it is also possible to prescribe in advance the vertices around which the holonomy will not be trivial. By proceeding in this way, one can, from a single vector transported to all faces, construct a vector field that is smooth everywhere on the surface except at a few predetermined points.

The two previous examples illustrate the interest of extending the tools of differential geometry to polygonal surfaces: the application of the tools of discrete differential geometry has allowed us to build two efficient and elegant methods to solve concrete 3D imaging problems.

References:
 Meyer, M., Desbrun, M., Schröder, P., & Barr, A. H. (2003). Discrete differential-geometry operators for triangulated 2-manifolds. In Visualization and mathematics III (pp. 35-57). Springer, Berlin, Heidelberg.
 Crane, K., Desbrun, M., & Schröder, P. (2010, July). Trivial connections on discrete surfaces. In Computer Graphics Forum (Vol. 29, No. 5, pp. 1525-1533). Oxford, UK: Blackwell Publishing Ltd.