Oussama Zekri
I am currently a final-year student researcher at ENS Paris-Saclay, in the mathematics department, and an intern at UW with Prof. Zaid Harchaoui.
Prior to that, I was an intern at Imperial College London, under the supervision of Nicolas Boullé. During this internship, I worked on discrete diffusion and authored the paper Fine-Tuning Discrete Diffusion Models with Policy Gradient Methods.
I was also an intern in the GTSBrain team at Huawei, under the supervision of Dr. Ievgen Redko. During this internship, I first-authored two papers: Large Language Models as Markov Chains and this Workshop paper!
I was also an intern at Kyoto University, working on convex optimization, with Dr. Ellen H. Fukuda, Dr. Bruno F. Lourenço and Dr. Tianxiang Liu.
I was also an intern at Centre Borelli, working on time series and optimal transport, with Prof. Laurent Oudre and Dr. Thibaut Germain.
I'm interested in machine learning, particularly in generative modeling. I also enjoy sharing my knowledge (see this page, for example).
I also maintain a research blog, called logB, with my friend Ambroise Odonnat!
For more details, you can view my resume/CV. Feel free to contact me by e-mail at: oussama.zekri@ens-paris-saclay.fr
News:
(04/02/2025) New preprint! Fine-Tuning Discrete Diffusion Models with Policy Gradient Methods was done during my 3-month internship at Imperial College.
(16/10/2024) New preprint! Incredible work by Abdelhakim & friends at Noah's Ark Lab: Zero-shot Model-based Reinforcement Learning using Large Language Models.
(04/10/2024) New preprint! Large Language Models as Markov Chains was done during my internship at Noah's Ark Lab.
(18/06/2024) Workshop paper accepted at ICML 2024 (1st Workshop on In-Context Learning)!