Victor Livernoche | Ph.D. Student at Mila
I’m Victor Livernoche, a Montreal-born Ph.D. student at McGill University and Mila, supervised by Prof. Reihaneh Rabbany. Outside of research, I enjoy working out, playing sports, and making music. Academically, my work centers on generative modeling, anomaly and deepfake detection, and temporal graph learning. I’m especially interested in how large-scale generative systems can be used responsibly, and how we can design models and datasets that make AI more trustworthy and socially impactful.
Now
- Finishing up the theoretical justification section of a paper on kurtosis-guided noise selection for denoising score-matching in tabular anomaly detection. The core idea builds on my earlier work on Diffusion Time Estimation.
- Exploring two early-stage directions in deepfake detection: label-space diffusion and uncertainty estimation for realism scoring.
- TA-ing COMP 511 at McGill, which means managing project proposal submissions and peer review through OpenReview.
- On the side, starting a project on AI-generated image provenance signals, and developing a research direction on political deepfakes that follows up on our work around the 2025 Canadian federal election.
Montréal, QC
Last updated: March 26th, 2026
About Me
Education
Ph.D., Computer Science
Machine learning research supervised by Prof. Reihaneh Rabbany.
M.Sc. (Thesis), Computer Science
Machine learning research supervised by Prof. Siamak Ravanbakhsh.
B.Sc., Honours Computer Science (Physics minor)
Experience
Research Scientist Student
Focused on diffusion models for anomaly detection; developed a new diffusion-based anomaly detection method and applied it to galactic star anomalies. Member of Mila’s Mental Health Committee.
Research Intern
Parametrized the BabyAI reinforcement learning environment in Prof. Yoshua Bengio’s group.
Undergraduate Research Assistant
Analyzed data compaction methods in large databases (with Prof. Oana Balmau).
Research Intern
Supported research operations (admin tasks, simulations, funding processes, partner communications) with Prof. Pierre‑Majorique Léger.
Research Interests
- Generative modeling for images and multimodal generation
- Energy‑based generative models (theory and applications)
- Deepfake detection against misinformation
- Temporal graph representation learning
- Anomaly detection
Skills
Publications
Deepfakes in the 2025 Canadian Election: Prevalence, Partisanship, and Platform Dynamics
We analyze visual deepfakes during the 2025 Canadian federal election across 187,778 posts on X, Bluesky, and Reddit. We find that 5.86% of election-related images were synthetic, with right-leaning accounts sharing them more frequently (8.66% vs. 4.42%). Most deepfakes were benign, and harmful ones had limited reach (0.12% of views on X). However, the most realistic fabrications drew disproportionately high engagement.
OpenFake: An Open Dataset and Platform Toward Real-World Deepfake Detection
OpenFake is a politically focused benchmark for modern deepfake detection. It pairs ~3M real images with captions and 963k high-quality synthetic images from proprietary and open-source generators, maps the misinformation modalities seen on social media, and includes a human-perception study showing that images from recent proprietary generators are hard to distinguish from real ones. A crowdsourced adversarial platform continually adds challenging fakes to keep detectors robust. Overall, our results offer encouraging evidence that detectors trained on high-quality data can generalize to real-world social-media distributions.
On Diffusion Modeling for Anomaly Detection
This work explores using diffusion models for anomaly detection in unsupervised and semi-supervised settings. It introduces Diffusion Time Estimation (DTE), a simplified and efficient alternative to DDPM that estimates a diffusion-time density to score anomalies. DTE performs faster than DDPM and achieves top results on ADBench, showing diffusion-based methods are competitive and scalable.
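The core mechanism behind scoring anomalies by diffusion time can be sketched in a few lines. This is a simplified illustration of the general idea, not the paper's implementation: the noise schedule, the toy data, and the use of scikit-learn's `MLPRegressor` as the time-regressor are all assumptions made here for brevity.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy inliers: tightly clustered points standing in for normal data.
X_train = rng.normal(scale=0.1, size=(2000, 8))

# A DDPM-style variance-preserving forward-diffusion schedule.
T = 100
betas = np.linspace(1e-4, 5e-2, T)
alpha_bar = np.cumprod(1.0 - betas)

# Noise each training point at a random timestep t.
t = rng.integers(0, T, size=len(X_train))
eps = rng.normal(size=X_train.shape)
x_noisy = (np.sqrt(alpha_bar[t])[:, None] * X_train
           + np.sqrt(1.0 - alpha_bar[t])[:, None] * eps)

# Regress the (normalized) diffusion time from the noisy sample.
timenet = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                       random_state=0)
timenet.fit(x_noisy, t / T)

# At test time, the predicted diffusion time of a *clean* sample is its
# anomaly score: points far from the data manifold look "already diffused".
X_in = rng.normal(scale=0.1, size=(200, 8))   # normal points
X_out = rng.normal(size=(200, 8))             # off-manifold points
score_in = timenet.predict(X_in)
score_out = timenet.predict(X_out)
```

A single regression pass replaces the iterative denoising of a full DDPM at inference, which is where the speed advantage over reconstruction-based diffusion scoring comes from.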
A Reproduction of Automatic Multi-Label Prompting: Simple and Interpretable Few-Shot Classification
We reproduce and extend AMuLaP, an automatic label-prompting method for few-shot classification with pretrained language models. We confirm the original results on three GLUE tasks and evaluate on two new datasets. Despite some setup friction, the approach is reproducible, efficient, and promising for broader real-world NLP applications.
Other Projects
Neural Network from Scratch
A Jupyter notebook implementing a neural network from scratch using NumPy.
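The core of such a from-scratch implementation is a manual forward and backward pass with plain NumPy arrays. A minimal sketch (not the notebook's actual code) that fits XOR with one hidden layer:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR toy problem: not linearly separable, so a hidden layer is required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters of a 2 -> 8 -> 1 network.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: binary cross-entropy + sigmoid gives (p - y) directly.
    dp = (p - y) / len(X)
    dW2 = h.T @ dp
    db2 = dp.sum(axis=0)
    dh = (dp @ W2.T) * (1.0 - h**2)   # tanh derivative
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)
    # Gradient-descent step.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

Everything here is the chain rule applied by hand; swapping in a framework mostly automates the six gradient lines.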
Get In Touch
I'm always interested in discussing research opportunities, collaborations, or innovative projects in temporal graphs and machine learning.