These are mostly blog posts, notes, talk slides, nice pictures, and various other things about mathematics, statistics, CS, and machine learning.

**Diffusion models** (March 2023) *A small mathematical summary.*

**The double descent phenomenon** (November 2021) *Why do overparametrized networks do well?*

**The dimension of invariant and equivariant linear layers** (July 2021) *We compute the dimension of equivariant linear layers in neural architectures.*

**The ConvMixer architecture: 🤷** (December 2021) *Training a deep network on a GPU with the Flux.jl library. Two takeaway messages: 1) patches are all you need; 2) in Julia, the ConvMixer **largely** fits in one tweet.*

**Gradient descent I: strongly convex functions** (April 2022) *For strongly convex functions, the speed of convergence is determined by the condition number of the Hessian.*
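A minimal sketch of this claim (mine, not code from the post): gradient descent on a quadratic with Hessian eigenvalues `mu` and `L` contracts the error at the classical rate (κ−1)/(κ+1), where κ = L/μ is the condition number.

```python
import math

# Gradient descent on f(x) = (mu*x1^2 + L*x2^2)/2, whose Hessian diag(mu, L)
# has condition number kappa = L/mu = 100.
mu, L = 1.0, 100.0
step = 2.0 / (mu + L)        # classical optimal fixed step size
x = [1.0, 1.0]

for _ in range(500):
    grad = [mu * x[0], L * x[1]]
    x = [x[0] - step * grad[0], x[1] - step * grad[1]]

# With this step size, each iteration contracts the error norm
# by exactly (kappa - 1)/(kappa + 1) = (L - mu)/(L + mu).
err = math.hypot(x[0], x[1])
print(err)
```

The ill-conditioned case (κ = 100) gives a contraction factor 99/101 per step, hence the slow convergence the post discusses.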

**Gradient descent II: stochastic gradient descent for convex functions** (October 2023) *Stochastic gradient descent on strongly convex functions behaves nearly like gradient descent.*

**Gradient descent III: SGD for Polyak-Łojasiewicz functions** (October 2023) *Convex functions are not my friends anymore. Now I am best friends with Polyak-Łojasiewicz functions.*

**Importance sampling ⚖️** (June 2023) *On the sample size required to get a good approximation.*
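A toy illustration of the technique (my sketch, not the post's example): estimating the Gaussian tail probability P(X > 3) for X ~ N(0, 1) by sampling from the shifted proposal N(3, 1) and reweighting each draw by the likelihood ratio p(x)/q(x) = exp(4.5 − 3x). Plain Monte Carlo would need enormous sample sizes for such a rare event.

```python
import math
import random

random.seed(0)

# Importance sampling estimate of P(X > 3), X ~ N(0,1),
# using the proposal N(3,1) centered on the rare region.
N = 100_000
total = 0.0
for _ in range(N):
    x = random.gauss(3.0, 1.0)            # draw from the proposal q = N(3,1)
    if x > 3.0:
        total += math.exp(4.5 - 3.0 * x)  # importance weight p(x)/q(x)
estimate = total / N
print(estimate)  # close to the true value, about 1.35e-3
```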

**Importance sampling ⚖️⚖️: The Jarzynski connection** (March 2022) *Change of measure for out-of-equilibrium systems.*

**🐘 The Elephant Random Walk** (May 2023) *Long-time memory results in non-diffusivity.*

**Robbins' version of the Stirling approximation** (November 2022) *A handy, easy-to-remember estimate for the error in Stirling's approximation.*
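For reference, Robbins' bounds state that for every n ≥ 1, √(2πn)(n/e)ⁿ e^{1/(12n+1)} < n! < √(2πn)(n/e)ⁿ e^{1/(12n)}. A quick numerical check (my sketch, not code from the post):

```python
import math

# Robbins (1955): Stirling's formula with explicit, easy-to-remember
# error bounds exp(1/(12n+1)) and exp(1/(12n)).
def robbins_bounds(n: int) -> tuple[float, float]:
    stirling = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    return stirling * math.exp(1 / (12 * n + 1)), stirling * math.exp(1 / (12 * n))

lo, hi = robbins_bounds(10)
print(lo, math.factorial(10), hi)  # the bounds bracket 10! = 3628800
```

The two bounds are already within a relative gap of about 10⁻⁴ at n = 10.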

**Super-Catalan** (January 2022) *An open question, 150 years old, and probably quite useless.*

**Random analytic functions: Ryll-Nardzewski's theorem** (April 2021) *What happens at the boundary of the disk of convergence of random analytic series.*

**Mercer's theorem and the kernel trick** (October 2023) *The representation theorem for positive kernels.*

**🏋🏼 Heavy tails I: extreme events and randomness** (November 2023) *A presentation of heavy tails, how they behave, and a short list of where they come from.*

**🏋🏼 Heavy tails II: is it really heavy?** (December 2023) *A presentation of Hill's estimator for the heavy-tail index.*

**🏋🏼 Heavy tails III: Kesten's theorem** (November 2023) *Solutions of the distributional equation X = AX + B can have heavy tails: a sketch of the proof, plus a presentation of the renewal theorem.*

**Gaussian conditioning** (September 2023) *The conditional distribution of one part of a Gaussian vector given the other.*
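In the bivariate case the classical formula reads: if (X1, X2) is Gaussian with means (μ1, μ2), standard deviations (σ1, σ2) and correlation ρ, then X1 | X2 = x2 is Gaussian with mean μ1 + ρ(σ1/σ2)(x2 − μ2) and variance σ1²(1 − ρ²). A small helper (my sketch, not the post's code):

```python
# Conditional law of X1 given X2 = x2 for a bivariate Gaussian:
#   X1 | X2 = x2  ~  N( mu1 + rho*(s1/s2)*(x2 - mu2),  s1^2 * (1 - rho^2) )
def gaussian_conditional(mu1, mu2, s1, s2, rho, x2):
    mean = mu1 + rho * (s1 / s2) * (x2 - mu2)
    var = s1 ** 2 * (1 - rho ** 2)
    return mean, var

m, v = gaussian_conditional(0.0, 0.0, 1.0, 1.0, 0.5, 2.0)
print(m, v)  # 1.0, 0.75
```

Note that the conditional variance does not depend on the observed value x2; this is a special feature of the Gaussian case.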

**The Kullback-Leibler divergence between Gaussians** (June 2022) *I'll know once and for all where to find this damn formula.*
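In the univariate case the formula is KL(N(μ1, σ1²) ‖ N(μ2, σ2²)) = log(σ2/σ1) + (σ1² + (μ1 − μ2)²)/(2σ2²) − 1/2. A small helper to sanity-check it (mine, not the post's):

```python
import math

# KL divergence between univariate Gaussians N(mu1, s1^2) and N(mu2, s2^2).
def kl_gauss(mu1, s1, mu2, s2):
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

print(kl_gauss(0, 1, 0, 1))  # 0.0 (identical Gaussians)
print(kl_gauss(0, 1, 1, 1))  # 0.5 (mean shift of 1, same variance)
```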

**Brownian motion I 📈: with a wavelet basis** (September 2023) *A generalization of Paul Lévy's construction: we build a continuous Brownian motion with the help of an orthonormal basis.*

**Brownian motion II 📈📈: the Karhunen-Loève representation** (October 2023) *This time we construct a Brownian motion directly in an orthonormal basis, and not implicitly as in Paul Lévy's construction.*

**An inverse visualization for the elliptic law** (March 2021) *A beautiful color plot of the characteristic polynomial of random matrices.*

**Random lines on the plane** (August 2021) *How can we draw random lines on the plane?*

**Waves on donuts** (August 2021) *A nice plot of random Laplace eigenfunctions on the torus, also called random arithmetic waves.*

**Maths & ML Gems** (2024) *My personal curated list of old and recent outstanding papers in applied mathematics.*

**Tips and tricks in the Julia language** (August 2022) *A personal collection of nice tricks in Julia.*

**The point of view of Professor Parapine** (November 2022) *An interesting vision of science from one century ago.*