Michelangelo Conserva

"Caminante, no hay camino: se hace camino al andar."
 

Ph.D. candidate at Queen Mary University of London, United Kingdom.

Bayesian statistician off on an adventure in the land of Reinforcement Learning.

Tireless hiker, bookworm, political junkie, shy dancer, and creative cook.


m.[lastname]@qmul.ac.uk

Selected papers

  • M.C., Paulo Rauber
    Markov Decision Processes Hardness: Theory and Practice [Paper] [Code]
    Neural Information Processing Systems, 2022

  • M.C., Marc Deisenroth, K S Sesh Kumar
    The Graph Cut Kernel for Ranked Data [Paper] [Code]
    Transactions on Machine Learning Research, 2022

  • Aditya Ramesh, Paulo Rauber, M.C., Jürgen Schmidhuber
    Recurrent Neural-Linear Posterior Sampling for Non-Stationary Contextual Bandits [Paper] [Code]
    Neural Computation, 2022

About me

I'm currently pursuing my Ph.D. at Queen Mary University of London under the guidance of Paulo Rauber. The principal objective of my research is to develop practical methodologies with a strong theoretical backing for sequential decision making under uncertainty.

Below is a picture of me looking for an efficient way to compute the Bayes-adaptive policy.


Education

  • Ph.D., Queen Mary University of London, UK, Sep. 2020 - present.

  • MSc in Computational Statistics and Machine Learning, University College London, UK, Sep. 2019 - Sep. 2020, Distinction.

  • BSc in Statistics, Economics and Finance, Sapienza University of Rome, Italy, Sep. 2016 - Jul. 2019, 110 cum laude.

For my MSc thesis, I developed a novel kernel method for ranked data under the supervision of Sesh Kumar and Marc Deisenroth. For my BSc thesis, I empirically investigated state-of-the-art reinforcement learning agents from a game-theoretic perspective.