Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

Published:

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
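The relevant setting, assuming the standard Jekyll _config.yml layout this theme uses, is a single top-level key:

```yaml
# _config.yml — set to false to hide posts dated in the future
future: false
```

With future: false, Jekyll skips any post whose date is later than the build time; rebuilding after that date publishes it.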

Blog Post number 4

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Portfolio

Publications

Playing the Lottery of a Lifetime: The Effect of Socially Induced Aspiration on Q-Learning Agents

Published in Proceedings of the Annual Meeting of the Cognitive Science Society, 2022

Access paper here

Recommended citation: Yosi Hatekar and Rachit Dubey and Ted Sumers and Ilia Sucholutsky, "Playing the Lottery of a Lifetime: The Effect of Socially Induced Aspiration on Q-Learning Agents." Proceedings of the Annual Meeting of the Cognitive Science Society, 2022. https://escholarship.org/uc/item/21j6j1tg

Getting aligned on representational alignment

Published in arXiv preprint arXiv:2310.13018, 2023

Access paper here

Recommended citation: Ilia Sucholutsky and Lukas Muttenthaler and Adrian Weller and Andi Peng and Andreea Bobu and Been Kim and Bradley C Love and Erin Grant and Iris Groen and Jascha Achterberg and Joshua B Tenenbaum and Katherine M Collins and Katherine L Hermann and Kerem Oktar and Klaus Greff and Martin N Hebart and Nori Jacoby and Qiuyi Zhang and Raja Marjieh and Robert Geirhos and Sherol Chen and Simon Kornblith and Sunayana Rane and Talia Konkle and Thomas P O'Connell and Thomas Unterthiner and Andrew K Lampinen and Klaus-Robert Müller and Mariya Toneva and Thomas L Griffiths, "Getting aligned on representational alignment." arXiv preprint arXiv:2310.13018, 2023. https://arxiv.org/abs/2310.13018

What language reveals about perception: Distilling psychophysical knowledge from large language models

Published in Proceedings of the Annual Meeting of the Cognitive Science Society, 2023

Access paper here

Recommended citation: Raja Marjieh and Ilia Sucholutsky and Pol van Rijn and Nori Jacoby and Tom Griffiths, "What language reveals about perception: Distilling psychophysical knowledge from large language models." Proceedings of the Annual Meeting of the Cognitive Science Society, 2023. https://escholarship.org/uc/item/6dk5q565

Characterizing similarities and divergences in conversational tones in humans and LLMs by sampling with people

Published in Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

Access paper here

Recommended citation: Dun-Ming Huang and Pol Van Rijn and Ilia Sucholutsky and Raja Marjieh and Nori Jacoby, "Characterizing similarities and divergences in conversational tones in humans and LLMs by sampling with people." Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024. https://aclanthology.org/2024.acl-long.565.pdf

Multilevel interpretability of artificial neural networks: leveraging framework and methods from neuroscience

Published in arXiv preprint arXiv:2408.12664, 2024

Access paper here

Recommended citation: Zhonghao He and Jascha Achterberg and Katie Collins and Kevin Nejad and Danyal Akarca and Yinzhu Yang and Wes Gurnee and Ilia Sucholutsky and Yuhan Tang and Rebeca Ianov and George Ogden and Chole Li and Kai Sandbrink and Stephen Casper and Anna Ivanova and Grace W Lindsay, "Multilevel interpretability of artificial neural networks: leveraging framework and methods from neuroscience." arXiv preprint arXiv:2408.12664, 2024. https://arxiv.org/abs/2408.12664

AI Impact on Human Proof Formalization Workflows

Published in The 5th Workshop on Mathematical Reasoning and AI at NeurIPS 2025, 2025

Access paper here

Recommended citation: Katherine M Collins and Simon Frieder and Jonas Bayer and Jacob Loader and Jeck Lim and Peiyang Song and Fabian Zaiser and Lexin Zhou and Shanda Li and Shi-Zhuo Looi and Jose Hernandez-Orallo and Joshua B Tenenbaum and Cameron Freer and Umang Bhatt and Adrian Weller and Valerie Chen and Ilia Sucholutsky, "AI Impact on Human Proof Formalization Workflows." The 5th Workshop on Mathematical Reasoning and AI at NeurIPS 2025, 2025. https://openreview.net/forum?id=D7I8fVkMVs

Humanity's Last Exam

Published in arXiv preprint arXiv:2501.14249, 2025

Access paper here

Recommended citation: Long Phan and Alice Gatti and Ziwen Han and Nathaniel Li and Josephina Hu and Hugh Zhang and Chen Bo Calvin Zhang and Mohamed Shaaban and John Ling and Sean Shi and Michael Choi and Anish Agrawal and Arnav Chopra and Adam Khoja and Ryan Kim and Richard Ren and Jason Hausenloy and Oliver Zhang and Mantas Mazeika and Dmitry Dodonov and Tung Nguyen and Jaeho Lee and Daron Anderson and Mikhail Doroshenko and Alun Cennyth Stokes and Mobeen Mahmood and Oleksandr Pokutnyi and Oleg Iskra and Jessica P Wang and John-Clark Levin and Mstyslav Kazakov and Fiona Feng and Steven Y Feng and Haoran Zhao and Michael Yu and Varun Gangal and Chelsea Zou and Zihan Wang and Serguei Popov and Robert Gerbicz and Geoff Galgon and Johannes Schmitt and Will Yeadon and Yongki Lee and Scott Sauers and Alvaro Sanchez and Fabian Giska and Marc Roth and Søren Riis and Saiteja Utpala and Noah Burns and Gashaw M Goshu and Mohinder Maheshbhai Naiya and Chidozie Agu and Zachary Giboney and Antrell Cheatom and Francesco Fournier-Facio and Sarah-Jane Crowson and Lennart Finke and Zerui Cheng and Jennifer Zampese and Ryan G Hoerr and Mark Nandor and Hyunwoo Park and Tim Gehrunger and Jiaqi Cai and Ben McCarty and Alexis C Garretson and Edwin Taylor and Damien Sileo and Qiuyu Ren and Usman Qazi and Lianghui Li and Jungbae Nam and John B Wydallis and Pavel Arkhipov and Jack Wei Lun Shi and Aras Bacho and Chris G Willcocks and Hangrui Cao and Sumeet Motwani and Emily de Oliveira Santos and Johannes Veith and Edward Vendrow and Doru Cojoc and Kengo Zenitani and Joshua Robinson and Longke Tang and Yuqi Li and Joshua Vendrow and Natanael Wildner Fraga and Vladyslav Kuchkin and Andrey Pupasov Maksimov and Pierre Marion and Denis Efremov and Jayson Lynch and Kaiqu Liang and Aleksandar Mikov and Andrew Gritsevskiy and Julien Guillod and Gözdenur Demir and Dakotah Martinez and Ben Pageler and Kevin Zhou and Saeed Soori and Ori Press and Henry Tang and Paolo Rissone and Sean R Green and Lina Brüssel and Moon Twayana and Aymeric Dieuleveut and Joseph Marvin Imperial and Ameya Prabhu and Jinzhou Yang and Nick Crispino and Arun Rao and Dimitri Zvonkine and Gabriel Loiseau and Mikhail Kalinin and Marco Lukas and Ciprian Manolescu and Nate Stambaugh and Subrata Mishra and Tad Hogg and Carlo Bosio and Brian P Coppola and Julian Salazar and Jaehyeok Jin and Rafael Sayous and Stefan Ivanov and Philippe Schwaller and Shaipranesh Senthilkuma and Andres M Bran and Andres Algaba and Kelsey Van den Houte and Lynn Van Der Sypt and Brecht Verbeken and David Noever and Alexei Kopylov and Benjamin Myklebust and Bikun Li and Lisa Schut and Evgenii Zheltonozhskii and Qiaochu Yuan and Derek Lim and Richard Stanley and Tong Yang and John Maar and Julian Wykowski, "Humanity's Last Exam." arXiv preprint arXiv:2501.14249, 2025. https://arxiv.org/abs/2501.14249

Large language models surpass human experts in predicting neuroscience results

Published in Nature Human Behaviour, 2025

Access paper here

Recommended citation: Xiaoliang Luo and Akilles Rechardt and Guangzhi Sun and Kevin K Nejad and Felipe Yáñez and Bati Yilmaz and Kangjoo Lee and Alexandra O Cohen and Valentina Borghesani and Anton Pashkov and Daniele Marinazzo and Jonathan Nicholas and Alessandro Salatiello and Ilia Sucholutsky and Pasquale Minervini and Sepehr Razavi and Roberta Rocca and Elkhan Yusifov and Tereza Okalova and Nianlong Gu and Martin Ferianc and Mikail Khona and Kaustubh R Patil and Pui-Shee Lee and Rui Mata and Nicholas E Myers and Jennifer K Bizley and Sebastian Musslick and Isil Poyraz Bilgin and Guiomar Niso and Justin M Ales and Michael Gaebler and N Apurva Ratan Murty and Leyla Loued-Khenissi and Anna Behler and Chloe M Hall and Jessica Dafflon and Sherry Dongqi Bao and Bradley C Love, "Large language models surpass human experts in predicting neuroscience results." Nature human behaviour, 2025. https://www.nature.com/articles/s41562-024-02046-9

Measuring and mitigating overreliance is necessary for building human-compatible AI

Published in arXiv preprint arXiv:2509.08010, 2025

Access paper here

Recommended citation: Lujain Ibrahim and Katherine M Collins and Sunnie SY Kim and Anka Reuel and Max Lamparth and Kevin Feng and Lama Ahmad and Prajna Soni and Alia El Kattan and Merlin Stein and Siddharth Swaroop and Ilia Sucholutsky and Andrew Strait and Q Vera Liao and Umang Bhatt, "Measuring and mitigating overreliance is necessary for building human-compatible AI." arXiv preprint arXiv:2509.08010, 2025. https://arxiv.org/abs/2509.08010

Representational Alignment Supports Effective Teaching

Published in ICLR 2025 Workshop on Bidirectional Human-AI Alignment, 2025

Access paper here

Recommended citation: Ilia Sucholutsky and Katherine M Collins and Maya Malaviya and Nori Jacoby and Weiyang Liu and Theodore Sumers and Michalis Korakakis and Umang Bhatt and Mark K Ho and Joshua B Tenenbaum and Bradley C Love and Zachary Pardos and Adrian Weller and Thomas L Griffiths, "Representational Alignment Supports Effective Teaching." ICLR 2025 Workshop on Bidirectional Human-AI Alignment, 2025. https://openreview.net/forum?id=7zxUVXFPez

Using the tools of cognitive science to understand large language models at different levels of analysis

Published in arXiv e-prints, 2025

Access paper here

Recommended citation: Alexander Ku and Declan Campbell and Xuechunzi Bai and Jiayi Geng and Ryan Liu and Raja Marjieh and R Thomas McCoy and Andrew Nam and Ilia Sucholutsky and Veniamin Veselovsky and Liyi Zhang and Jian-Qiao Zhu and Thomas L Griffiths, "Using the tools of cognitive science to understand large language models at different levels of analysis." arXiv e-prints, 2025. https://ui.adsabs.harvard.edu/abs/2025arXiv250313401K/abstract

Talks

Deep Learning for Lost Data Restoration and Imputation

Published:

Lossy, noisy, and missing data are common phenomena in many areas of statistics, ranging from sampling to statistical learning. Instead of simply ignoring these missing values, it can be useful to attempt to recover or impute them. Meanwhile, deep learning is increasingly shown to be adept at learning latent representations or distributions of data. These patterns or representations can often be too complex to be recognized manually or through classical statistical techniques. We will discuss practical deep learning approaches to the problem of lossy data restoration or imputation with examples of several different types of datasets. We will compare the results to classical techniques to see if deep learning can really be used to perform higher quality imputation.
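The classical techniques the talk compares against start from baselines as simple as mean imputation. A minimal sketch of that baseline (the function name and toy data are illustrative, not the talk's actual code):

```python
def mean_impute(values):
    """Classical baseline: fill missing entries (None) with the observed mean."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

print(mean_impute([1.0, None, 3.0]))  # → [1.0, 2.0, 3.0]
```

Deep-learning imputers aim to beat this by modeling the joint distribution of the data rather than a single per-column summary statistic.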

Breaking Into Deep Learning: Five Projects to get you Inspired

Published:

We will go over five exciting projects from very different areas, and examine the deep learning algorithms underlying them, as inspiration for how you can enter the field regardless of where your interests or expertise currently lie.

Making the Most of Graduate Research in AI

Published:

Should you pursue graduate research in AI? What should you expect if you do? Most importantly, how do you ensure that it is a beneficial and positive experience for you? I hope to help you answer some of these questions by sharing my own experiences and introducing you to the very diverse set of AI projects happening at the University of Waterloo that you could work on as a graduate student.

ConvART: Improving Adaptive Resonance Theory for Unsupervised Image Clustering

Published:

While supervised learning techniques have become increasingly adept at separating images into different classes, these techniques require large amounts of labelled data which may not always be available. We propose a novel neuro-dynamic method for unsupervised image clustering by combining two biologically-motivated models: Adaptive Resonance Theory (ART) and Convolutional Neural Networks (CNN). ART networks are unsupervised clustering algorithms that have high stability in preserving learned information while quickly learning new information. Meanwhile, a major property of CNNs is their translation and distortion invariance, which has led to their success in the domain of vision problems. By embedding convolutional layers into an ART network, the useful properties of both networks can be leveraged to identify different clusters within unlabelled image datasets and classify images into these clusters. In exploratory experiments, we demonstrate that this method greatly increases the performance of unsupervised ART networks on a benchmark image dataset.

Deep Learning for System Trace Restoration

Published:

Most real-world datasets, and particularly those collected from physical systems, are full of noise, packet loss, and other imperfections. However, most specification mining, anomaly detection and other such algorithms assume, or even require, perfect data quality to function properly. Such algorithms may work in lab conditions when given clean, controlled data, but will fail in the field when given imperfect data. We propose a method for accurately reconstructing discrete temporal or sequential system traces affected by data loss, using Long Short-Term Memory Networks (LSTMs). The model works by learning to predict the next event in a sequence of events, and uses its own output as an input to continue predicting future events. As a result, this method can be used for data restoration even with streamed data. Such a method can reconstruct even long sequences of missing events, and can also help validate and improve data quality for noisy data. The output of the model will be a close reconstruction of the true data, and can be fed to algorithms that rely on clean data. We demonstrate our method by reconstructing automotive CAN traces consisting of long sequences of discrete events. We show that given even small parts of a CAN trace, our LSTM model can predict future events with an accuracy of almost 90%, and can successfully reconstruct large portions of the original trace, greatly outperforming a Markov Model benchmark. We separately feed the original, lossy, and reconstructed traces into a specification mining framework to perform downstream analysis of the effect of our method on state-of-the-art models that use these traces for understanding the behavior of complex systems.
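The reconstruction loop described above (predict the next event, then feed that prediction back in as input) can be sketched with a toy bigram frequency model standing in for the LSTM; the trace, function names, and model here are illustrative assumptions, not the talk's actual code:

```python
from collections import Counter, defaultdict

def train_bigram(trace):
    """Count next-event frequencies per event; a toy stand-in for the LSTM."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(trace, trace[1:]):
        counts[prev][nxt] += 1
    return counts

def reconstruct(seed, counts, n_missing):
    """Autoregressively predict events, feeding each prediction back in."""
    events = list(seed)
    for _ in range(n_missing):
        prev = events[-1]
        if not counts[prev]:  # no observed successor: stop early
            break
        events.append(counts[prev].most_common(1)[0][0])
    return events

model = train_bigram(list("ABCABCABC"))
print(reconstruct(["A"], model, 5))  # → ['A', 'B', 'C', 'A', 'B', 'C']
```

The same feed-output-back-as-input structure is what lets the LSTM version restore long gaps and operate on streamed data; the bigram model here plays the role of the Markov-style benchmark the talk compares against.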

Teaching

STAT 231

Undergraduate course, University of Waterloo, Department of Statistics and Actuarial Science, 2020

I taught a section of STAT 231 in Winter 2020.