Developing AI that learns and thinks with people
AITP Lab members
Developing AI systems that learn and think with people
Probing the limits of learning and teaching with minimal examples
Studying parallels between human and artificial intelligence representations
Published in The Stata Journal, 2017
Recommended citation: Matthias Schonlau and Nick Guenther and Ilia Sucholutsky, "Text mining with n-gram variables." The Stata Journal, 2017. https://journals.sagepub.com/doi/abs/10.1177/1536867X1801700406
Published in Journal of Computational Vision and Imaging Systems, 2018
Recommended citation: Ilia Sucholutsky and Matthias Schonlau, "ConvART: Improving Adaptive Resonance Theory for Unsupervised Image Clustering." Journal of Computational Vision and Imaging Systems, 2018. https://openreview.net/forum?id=SkTqMwkDf
Published in 2019 International Joint Conference on Neural Networks (IJCNN), 2019
Recommended citation: Ilia Sucholutsky and Apurva Narayan and Matthias Schonlau and Sebastian Fischmeister, "Deep Learning for System Trace Restoration." 2019 International Joint Conference on Neural Networks (IJCNN), 2019. https://arxiv.org/abs/1904.05411
Published in PeerJ Computer Science, 2019
Recommended citation: Ilia Sucholutsky and Apurva Narayan and Matthias Schonlau and Sebastian Fischmeister, "Pay attention and you won’t lose it: a deep learning approach to sequence imputation." PeerJ Computer Science, 2019. https://peerj.com/articles/cs-210/
Published in 2021 International Joint Conference on Neural Networks (IJCNN), 2021
Recommended citation: Ilia Sucholutsky and Matthias Schonlau, "Soft-Label Dataset Distillation and Text Dataset Distillation." 2021 International Joint Conference on Neural Networks (IJCNN), 2021. https://ieeexplore.ieee.org/abstract/document/9533769/
Published in PhD thesis, University of Waterloo, 2021
Recommended citation: Ilia Sucholutsky, "Learning From Almost No Data." PhD thesis, University of Waterloo, 2021. https://uwspace.uwaterloo.ca/items/3c75c1a6-54e7-4b0c-9c68-1b60ddb9c6fb
Published in Proceedings of the AAAI Conference on Artificial Intelligence, 2021
Recommended citation: Ilia Sucholutsky and Matthias Schonlau, "'Less Than One'-Shot Learning: Learning N Classes From M < N Samples." Proceedings of the AAAI Conference on Artificial Intelligence, 2021. https://ojs.aaai.org/index.php/AAAI/article/view/17171
Published in 2021 International Joint Conference on Neural Networks (IJCNN), 2021
Recommended citation: Ilia Sucholutsky and Nam-Hwui Kim and Ryan P Browne and Matthias Schonlau, "One Line To Rule Them All: Generating LO-Shot Soft-Label Prototypes." 2021 International Joint Conference on Neural Networks (IJCNN), 2021. https://ieeexplore.ieee.org/abstract/document/9534284/
Published in PeerJ Computer Science, 2021
Recommended citation: Ilia Sucholutsky and Matthias Schonlau, "Optimal 1-NN prototypes for pathological geometries." PeerJ Computer Science, 2021. https://peerj.com/articles/cs-464/
Published in Proceedings of the AAAI Conference on Artificial Intelligence, 2021
Recommended citation: Ilia Sucholutsky and Matthias Schonlau, "SecDD: Efficient and secure method for remotely training neural networks (student abstract)." Proceedings of the AAAI Conference on Artificial Intelligence, 2021. https://ojs.aaai.org/index.php/AAAI/article/view/17945
Published in arXiv preprint arXiv:2209.14821, 2022
Recommended citation: Raja Marjieh and Ilia Sucholutsky and Thomas A Langlois and Nori Jacoby and Thomas L Griffiths, "Analyzing diffusion as serial reproduction." arXiv preprint arXiv:2209.14821, 2022. https://arxiv.org/abs/2209.14821
Published in arXiv preprint arXiv:2202.04670, 2022
Recommended citation: Maya Malaviya and Ilia Sucholutsky and Kerem Oktar and Thomas L Griffiths, "Can humans do less-than-one-shot learning?." arXiv preprint arXiv:2202.04670, 2022. https://arxiv.org/abs/2202.04670
Published in Uncertainty in Artificial Intelligence, 2023
Recommended citation: Katherine M Collins and Umang Bhatt and Weiyang Liu and Vihari Piratla and Ilia Sucholutsky and Bradley Love and Adrian Weller, "Human-in-the-Loop Mixup." Uncertainty in Artificial Intelligence, 2023. https://proceedings.mlr.press/v216/collins23a.html
Published in Proceedings of the Annual Meeting of the Cognitive Science Society, 2022
Recommended citation: Yosi Hatekar and Rachit Dubey and Ted Sumers and Ilia Sucholutsky, "Playing the Lottery of a Lifetime: The Effect of Socially Induced Aspiration on Q-Learning Agents." Proceedings of the Annual Meeting of the Cognitive Science Society, 2022. https://escholarship.org/uc/item/21j6j1tg
Published in arXiv preprint arXiv:2202.04728, 2022
Recommended citation: Raja Marjieh and Ilia Sucholutsky and Theodore R Sumers and Nori Jacoby and Thomas L Griffiths, "Predicting human similarity judgments using large language models." arXiv preprint arXiv:2202.04728, 2022. https://arxiv.org/abs/2202.04728
Published in arXiv preprint arXiv:2206.04105, 2022
Recommended citation: Raja Marjieh and Pol Van Rijn and Ilia Sucholutsky and Theodore R Sumers and Harin Lee and Thomas L Griffiths and Nori Jacoby, "Words are all you need? Language as an approximation for human similarity judgments." arXiv preprint arXiv:2206.04105, 2022. https://arxiv.org/abs/2206.04105
Published in Advances in Neural Information Processing Systems, 2023
Recommended citation: Ilia Sucholutsky and Tom Griffiths, "Alignment with human representations supports robust few-shot learning." Advances in Neural Information Processing Systems, 2023. https://proceedings.neurips.cc/paper_files/paper/2023/hash/e8ddc03b001d4c4b44b29bc1167e7fdd-Abstract-Conference.html
Published in arXiv preprint arXiv:2302.01614, 2023
Recommended citation: Pol van Rijn and Yue Sun and Harin Lee and Raja Marjieh and Ilia Sucholutsky and Francesca Lanzarini and Elisabeth André and Nori Jacoby, "Around the world in 60 words: A generative vocabulary test for online research." arXiv preprint arXiv:2302.01614, 2023. https://arxiv.org/abs/2302.01614
Published in arXiv preprint arXiv:2310.20059, 2023
Recommended citation: Sunayana Rane and Mark Ho and Ilia Sucholutsky and Thomas L Griffiths, "Concept alignment as a prerequisite for value alignment." arXiv preprint arXiv:2310.20059, 2023. https://arxiv.org/abs/2310.20059
Published in 2023
Recommended citation: Dibyanshu Shekhar and Sree Harsha Nelaturu and Ashwath Shetty and Ilia Sucholutsky, "End-to-End Learnable Masks With Differentiable Indexing." 2023. https://openreview.net/forum?id=EyliiBqhFz
Published in arXiv preprint arXiv:2310.13018, 2023
Recommended citation: Ilia Sucholutsky and Lukas Muttenthaler and Adrian Weller and Andi Peng and Andreea Bobu and Been Kim and Bradley C Love and Erin Grant and Iris Groen and Jascha Achterberg and Joshua B Tenenbaum and Katherine M Collins and Katherine L Hermann and Kerem Oktar and Klaus Greff and Martin N Hebart and Nori Jacoby and Qiuyi Zhang and Raja Marjieh and Robert Geirhos and Sherol Chen and Simon Kornblith and Sunayana Rane and Talia Konkle and Thomas P O'Connell and Thomas Unterthiner and Andrew K Lampinen and Klaus-Robert Müller and Mariya Toneva and Thomas L Griffiths, "Getting aligned on representational alignment." arXiv preprint arXiv:2310.13018, 2023. https://arxiv.org/abs/2310.13018
Published in Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 2023
Recommended citation: Katherine Maeve Collins and Matthew Barker and Mateo Espinosa Zarlenga and Naveen Raman and Umang Bhatt and Mateja Jamnik and Ilia Sucholutsky and Adrian Weller and Krishnamurthy Dvijotham, "Human uncertainty in concept-based AI systems." Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 2023. https://dl.acm.org/doi/abs/10.1145/3600211.3604692
Published in Mineral Resource Estimation Conference 2023, 2023
Recommended citation: David First and Ilia Sucholutsky and Daniel Mogilny and Farzi Yusufali, "Introducing deep learning and interpreting the patterns – a mineral deposit perspective." Mineral Resource Estimation Conference 2023, 2023.
Published in Proceedings of the Annual Meeting of the Cognitive Science Society, 2023
Recommended citation: Mathew Hardy and Ilia Sucholutsky and Bill Thompson and Tom Griffiths, "Large language models meet cognitive science: LLMs as tools, models, and participants." Proceedings of the Annual Meeting of the Cognitive Science Society, 2023. https://escholarship.org/uc/item/6dp9k2gz
Published in Uncertainty in Artificial Intelligence, 2023
Recommended citation: Ilia Sucholutsky and Ruairidh M Battleday and Katherine M Collins and Raja Marjieh and Joshua Peterson and Pulkit Singh and Umang Bhatt and Nori Jacoby and Adrian Weller and Thomas L Griffiths, "On the informativeness of supervision signals." Uncertainty in Artificial Intelligence, 2023. https://proceedings.mlr.press/v216/sucholutsky23a.html
Published in Proceedings of the Annual Meeting of the Cognitive Science Society, 2023
Recommended citation: Raja Marjieh and Ilia Sucholutsky and Pol van Rijn and Nori Jacoby and Tom Griffiths, "What language reveals about perception: Distilling psychophysical knowledge from large language models." Proceedings of the Annual Meeting of the Cognitive Science Society, 2023. https://escholarship.org/uc/item/6dk5q565
Published in arXiv preprint arXiv:2402.06992, 2024
Recommended citation: Raja Marjieh and Pol van Rijn and Ilia Sucholutsky and Harin Lee and Thomas L Griffiths and Nori Jacoby, "A rational analysis of the speech-to-song illusion." arXiv preprint arXiv:2402.06992, 2024. https://arxiv.org/abs/2402.06992
Published in arXiv preprint arXiv:2409.08212, 2024
Recommended citation: Andi Peng and Belinda Z Li and Ilia Sucholutsky and Nishanth Kumar and Julie A Shah and Jacob Andreas and Andreea Bobu, "Adaptive language-guided abstraction from contrastive explanations." arXiv preprint arXiv:2409.08212, 2024. https://arxiv.org/abs/2409.08212
Published in arXiv preprint arXiv:2403.19669, 2024
Recommended citation: Allison Chen and Ilia Sucholutsky and Olga Russakovsky and Thomas L Griffiths, "Analyzing the roles of language and vision in learning from limited data." arXiv preprint arXiv:2403.19669, 2024. https://arxiv.org/abs/2403.19669
Published in Nature Human Behaviour, 2024
Recommended citation: Katherine M Collins and Ilia Sucholutsky and Umang Bhatt and Kartik Chandra and Lionel Wong and Mina Lee and Cedegao E Zhang and Tan Zhi-Xuan and Mark Ho and Vikash Mansinghka and Adrian Weller and Joshua B Tenenbaum and Thomas L Griffiths, "Building machines that learn and think with people." Nature Human Behaviour, 2024. https://www.nature.com/articles/s41562-024-01991-9
Published in Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024
Recommended citation: Dun-Ming Huang and Pol Van Rijn and Ilia Sucholutsky and Raja Marjieh and Nori Jacoby, "Characterizing similarities and divergences in conversational tones in humans and LLMs by sampling with people." Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024. https://aclanthology.org/2024.acl-long.565.pdf
Published in 2024
Recommended citation: Raja Marjieh and Pol van Rijn and Ilia Sucholutsky and Harin Lee and Nori Jacoby and Thomas L Griffiths, "Characterizing the Large-Scale Structure of Grounded Semantic Networks." 2024. https://europepmc.org/article/ppr/ppr831252
Published in arXiv preprint arXiv:2401.08672, 2024
Recommended citation: Sunayana Rane and Polyphony J Bruna and Ilia Sucholutsky and Christopher Kello and Thomas L Griffiths, "Concept alignment." arXiv preprint arXiv:2401.08672, 2024. https://arxiv.org/abs/2401.08672
Published in Decision, 2024
Recommended citation: Kerem Oktar and Ilia Sucholutsky and Tania Lombrozo and Thomas L Griffiths, "Dimensions of disagreement: Divergence and misalignment in cognitive science and artificial intelligence." Decision, 2024. https://psycnet.apa.org/record/2025-13905-001
Published in ICLR 2024 Workshops, 2024
Recommended citation: Erin Grant and Ilia Sucholutsky and Jascha Achterberg and Katherine Hermann and Lukas Muttenthaler, "First Workshop on Representational Alignment (Re-Align)." ICLR 2024 Workshops, 2024. https://openreview.net/forum?id=bTkdoh5CuG
Published in Proceedings of the National Academy of Sciences, 2024
Recommended citation: Steve Rathje and Dan-Mircea Mirea and Ilia Sucholutsky and Raja Marjieh and Claire E Robertson and Jay J Van Bavel, "GPT is an effective tool for multilingual psychological text analysis." Proceedings of the National Academy of Sciences, 2024. https://www.pnas.org/doi/abs/10.1073/pnas.2308950121
Published in arXiv preprint arXiv:2406.17055, 2024
Recommended citation: Ryan Liu and Jiayi Geng and Joshua C Peterson and Ilia Sucholutsky and Thomas L Griffiths, "Large language models assume people are more rational than we really are." arXiv preprint arXiv:2406.17055, 2024. https://arxiv.org/abs/2406.17055
Published in Scientific Reports, 2024
Recommended citation: Raja Marjieh and Ilia Sucholutsky and Pol van Rijn and Nori Jacoby and Thomas L Griffiths, "Large language models predict human sensory judgments across six modalities." Scientific Reports, 2024. https://www.nature.com/articles/s41598-024-72071-1
Published in Advances in Neural Information Processing Systems, 2024
Recommended citation: Andrea Hui Wynn, "Learning human-like representations to enable learning human values." Advances in Neural Information Processing Systems, 2024. https://proceedings.neurips.cc/paper_files/paper/2024/hash/3578fd44b2381db12bf16e28a667c934-Abstract-Conference.html
Published in arXiv preprint arXiv:2402.18759, 2024
Recommended citation: Andi Peng and Ilia Sucholutsky and Belinda Z Li and Theodore R Sumers and Thomas L Griffiths and Jacob Andreas and Julie A Shah, "Learning with language-guided state abstractions." arXiv preprint arXiv:2402.18759, 2024. https://arxiv.org/abs/2402.18759
Published in arXiv preprint arXiv:2410.21333, 2024
Recommended citation: Ryan Liu and Jiayi Geng and Addison J Wu and Ilia Sucholutsky and Tania Lombrozo and Thomas L Griffiths, "Mind your step (by step): Chain-of-thought can reduce performance on tasks where thinking makes humans worse." arXiv preprint arXiv:2410.21333, 2024. https://arxiv.org/abs/2410.21333
Published in arXiv preprint arXiv:2407.12804, 2024
Recommended citation: Katherine M Collins and Valerie Chen and Ilia Sucholutsky and Hannah Rose Kirk and Malak Sadek and Holli Sargeant and Ameet Talwalkar and Adrian Weller and Umang Bhatt, "Modulating language model experiences through frictions." arXiv preprint arXiv:2407.12804, 2024. https://arxiv.org/abs/2407.12804
Published in arXiv preprint arXiv:2408.12664, 2024
Recommended citation: Zhonghao He and Jascha Achterberg and Katie Collins and Kevin Nejad and Danyal Akarca and Yinzhu Yang and Wes Gurnee and Ilia Sucholutsky and Yuhan Tang and Rebeca Ianov and George Ogden and Chole Li and Kai Sandbrink and Stephen Casper and Anna Ivanova and Grace W Lindsay, "Multilevel interpretability of artificial neural networks: leveraging framework and methods from neuroscience." arXiv preprint arXiv:2408.12664, 2024. https://arxiv.org/abs/2408.12664
Published in Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024
Recommended citation: Andi Peng and Andreea Bobu and Belinda Z Li and Theodore R Sumers and Ilia Sucholutsky and Nishanth Kumar and Thomas L Griffiths and Julie A Shah, "Preference-conditioned language-guided abstraction." Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024. https://dl.acm.org/doi/abs/10.1145/3610977.3634930
Published in Proceedings of the AAAI Symposium Series, 2024
Recommended citation: Maya Malaviya and Ilia Sucholutsky and Thomas L Griffiths, "Pushing the Limits of Learning from Limited Data." Proceedings of the AAAI Symposium Series, 2024. https://ojs.aaai.org/index.php/AAAI-SS/article/view/31276
Published in arXiv preprint arXiv:2411.07483, 2024
Recommended citation: Pasan Dissanayake and Faisal Hamman and Barproda Halder and Ilia Sucholutsky and Qiuyi Zhang and Sanghamitra Dutta, "Quantifying knowledge distillation using partial information decomposition." arXiv preprint arXiv:2411.07483, 2024. https://arxiv.org/abs/2411.07483
Published in arXiv e-prints, 2024
Recommended citation: Barproda Halder and Faisal Hamman and Pasan Dissanayake and Qiuyi Zhang and Ilia Sucholutsky and Sanghamitra Dutta, "Quantifying spuriousness of biased datasets using partial information decomposition." arXiv e-prints, 2024. https://ui.adsabs.harvard.edu/abs/2024arXiv240700482H/abstract
Published in Proceedings of the Annual Meeting of the Cognitive Science Society, 2024
Recommended citation: Jakob Niedermann and Ilia Sucholutsky and Raja Marjieh and Elif Celen and Thomas L Griffiths and Nori Jacoby and Pol van Rijn, "Studying the Effect of Globalization on Color Perception using Multilingual Online Recruitment and Large Language Models." Proceedings of the Annual Meeting of the Cognitive Science Society, 2024. https://escholarship.org/uc/item/4hs755zz
Published in arXiv preprint arXiv:2407.00482, 2024
Recommended citation: Barproda Halder and Faisal Hamman and Pasan Dissanayake and Qiuyi Zhang and Ilia Sucholutsky and Sanghamitra Dutta, "Towards formalizing spuriousness of biased datasets using partial information decomposition." arXiv preprint arXiv:2407.00482, 2024. https://arxiv.org/abs/2407.00482
Published in Proceedings of the Annual Meeting of the Cognitive Science Society, 2024
Recommended citation: Ilia Sucholutsky and Bonan Zhao and Tom Griffiths, "Using compositionality to learn many categories from few examples." Proceedings of the Annual Meeting of the Cognitive Science Society, 2024. https://escholarship.org/uc/item/6kj0s042
Published in AAAI-24 Spring Symposium on Human-Like Learning, 2024
Recommended citation: Ilia Sucholutsky and Thomas L Griffiths, "Why should we care if machines learn human-like representations?." AAAI-24 Spring Symposium on Human-Like Learning, 2024. https://cocosci.princeton.edu/papers/Sucholutsky2024a.pdf
Published in PeerJ Computer Science, 2024
Recommended citation: Tiancheng Yang and Ilia Sucholutsky and Kuang-Yu Jen and Matthias Schonlau, "exKidneyBERT: a language model for kidney transplant pathology reports and the crucial role of extended vocabularies." PeerJ Computer Science, 2024. https://peerj.com/articles/cs-1888/
Published in The 5th Workshop on Mathematical Reasoning and AI at NeurIPS 2025, 2025
Recommended citation: Katherine M Collins and Simon Frieder and Jonas Bayer and Jacob Loader and Jeck Lim and Peiyang Song and Fabian Zaiser and Lexin Zhou and Shanda Li and Shi-Zhuo Looi and Jose Hernandez-Orallo and Joshua B Tenenbaum and Cameron Freer and Umang Bhatt and Adrian Weller and Valerie Chen and Ilia Sucholutsky, "AI Impact on Human Proof Formalization Workflows." The 5th Workshop on Mathematical Reasoning and AI at NeurIPS 2025, 2025. https://openreview.net/forum?id=D7I8fVkMVs
Published in Cognitive Science, 2025
Recommended citation: Raja Marjieh and Pol van Rijn and Ilia Sucholutsky and Harin Lee and Nori Jacoby and Thomas L Griffiths, "Characterizing the Large-Scale Structure of Multimodal Semantic Networks." Cognitive Science, 2025. https://onlinelibrary.wiley.com/doi/abs/10.1111/cogs.70131
Published in Proceedings of the National Academy of Sciences, 2025
Recommended citation: Xuechunzi Bai and Angelina Wang and Ilia Sucholutsky and Thomas L Griffiths, "Explicitly unbiased large language models still form biased associations." Proceedings of the National Academy of Sciences, 2025. https://www.pnas.org/doi/abs/10.1073/pnas.2416228122
Published in arXiv preprint arXiv:2501.14249, 2025
Recommended citation: Long Phan and Alice Gatti and Ziwen Han and Nathaniel Li and Josephina Hu and Hugh Zhang and Chen Bo Calvin Zhang and Mohamed Shaaban and John Ling and Sean Shi and Michael Choi and Anish Agrawal and Arnav Chopra and Adam Khoja and Ryan Kim and Richard Ren and Jason Hausenloy and Oliver Zhang and Mantas Mazeika and Dmitry Dodonov and Tung Nguyen and Jaeho Lee and Daron Anderson and Mikhail Doroshenko and Alun Cennyth Stokes and Mobeen Mahmood and Oleksandr Pokutnyi and Oleg Iskra and Jessica P Wang and John-Clark Levin and Mstyslav Kazakov and Fiona Feng and Steven Y Feng and Haoran Zhao and Michael Yu and Varun Gangal and Chelsea Zou and Zihan Wang and Serguei Popov and Robert Gerbicz and Geoff Galgon and Johannes Schmitt and Will Yeadon and Yongki Lee and Scott Sauers and Alvaro Sanchez and Fabian Giska and Marc Roth and Søren Riis and Saiteja Utpala and Noah Burns and Gashaw M Goshu and Mohinder Maheshbhai Naiya and Chidozie Agu and Zachary Giboney and Antrell Cheatom and Francesco Fournier-Facio and Sarah-Jane Crowson and Lennart Finke and Zerui Cheng and Jennifer Zampese and Ryan G Hoerr and Mark Nandor and Hyunwoo Park and Tim Gehrunger and Jiaqi Cai and Ben McCarty and Alexis C Garretson and Edwin Taylor and Damien Sileo and Qiuyu Ren and Usman Qazi and Lianghui Li and Jungbae Nam and John B Wydallis and Pavel Arkhipov and Jack Wei Lun Shi and Aras Bacho and Chris G Willcocks and Hangrui Cao and Sumeet Motwani and Emily de Oliveira Santos and Johannes Veith and Edward Vendrow and Doru Cojoc and Kengo Zenitani and Joshua Robinson and Longke Tang and Yuqi Li and Joshua Vendrow and Natanael Wildner Fraga and Vladyslav Kuchkin and Andrey Pupasov Maksimov and Pierre Marion and Denis Efremov and Jayson Lynch and Kaiqu Liang and Aleksandar Mikov and Andrew Gritsevskiy and Julien Guillod and Gözdenur Demir and Dakotah Martinez and Ben Pageler and Kevin Zhou and Saeed Soori and Ori Press and Henry Tang and Paolo Rissone and Sean R Green and Lina Brüssel and Moon Twayana and Aymeric Dieuleveut and Joseph Marvin Imperial and Ameya Prabhu and Jinzhou Yang and Nick Crispino and Arun Rao and Dimitri Zvonkine and Gabriel Loiseau and Mikhail Kalinin and Marco Lukas and Ciprian Manolescu and Nate Stambaugh and Subrata Mishra and Tad Hogg and Carlo Bosio and Brian P Coppola and Julian Salazar and Jaehyeok Jin and Rafael Sayous and Stefan Ivanov and Philippe Schwaller and Shaipranesh Senthilkuma and Andres M Bran and Andres Algaba and Kelsey Van den Houte and Lynn Van Der Sypt and Brecht Verbeken and David Noever and Alexei Kopylov and Benjamin Myklebust and Bikun Li and Lisa Schut and Evgenii Zheltonozhskii and Qiaochu Yuan and Derek Lim and Richard Stanley and Tong Yang and John Maar and Julian Wykowski, "Humanity's last exam." arXiv preprint arXiv:2501.14249, 2025. https://arxiv.org/abs/2501.14249
Published in arXiv preprint arXiv:2505.16899, 2025
Recommended citation: Kerem Oktar and Katherine M Collins and Jose Hernandez-Orallo and Diane Coyle and Stephen Cave and Adrian Weller and Ilia Sucholutsky, "Identifying, Evaluating, and Mitigating Risks of AI Thought Partnerships." arXiv preprint arXiv:2505.16899, 2025. https://arxiv.org/abs/2505.16899
Published in Nature Human Behaviour, 2025
Recommended citation: Xiaoliang Luo and Akilles Rechardt and Guangzhi Sun and Kevin K Nejad and Felipe Yáñez and Bati Yilmaz and Kangjoo Lee and Alexandra O Cohen and Valentina Borghesani and Anton Pashkov and Daniele Marinazzo and Jonathan Nicholas and Alessandro Salatiello and Ilia Sucholutsky and Pasquale Minervini and Sepehr Razavi and Roberta Rocca and Elkhan Yusifov and Tereza Okalova and Nianlong Gu and Martin Ferianc and Mikail Khona and Kaustubh R Patil and Pui-Shee Lee and Rui Mata and Nicholas E Myers and Jennifer K Bizley and Sebastian Musslick and Isil Poyraz Bilgin and Guiomar Niso and Justin M Ales and Michael Gaebler and N Apurva Ratan Murty and Leyla Loued-Khenissi and Anna Behler and Chloe M Hall and Jessica Dafflon and Sherry Dongqi Bao and Bradley C Love, "Large language models surpass human experts in predicting neuroscience results." Nature Human Behaviour, 2025. https://www.nature.com/articles/s41562-024-02046-9
Published in Proceedings of the Annual Meeting of the Cognitive Science Society, 2025
Recommended citation: Ilia Sucholutsky and Bonan Zhao and Hee Seung Hwang and Allison Chen and Olga Russakovsky and Tom Griffiths, "Learning a Doubly-Exponential Number of Concepts From Few Examples." Proceedings of the Annual Meeting of the Cognitive Science Society, 2025. https://escholarship.org/uc/item/011374xq
Published in arXiv preprint arXiv:2509.08010, 2025
Recommended citation: Lujain Ibrahim and Katherine M Collins and Sunnie SY Kim and Anka Reuel and Max Lamparth and Kevin Feng and Lama Ahmad and Prajna Soni and Alia El Kattan and Merlin Stein and Siddharth Swaroop and Ilia Sucholutsky and Andrew Strait and Q Vera Liao and Umang Bhatt, "Measuring and mitigating overreliance is necessary for building human-compatible AI." arXiv preprint arXiv:2509.08010, 2025. https://arxiv.org/abs/2509.08010
Published in arXiv preprint arXiv:2502.20502, 2025
Recommended citation: Lance Ying and Katherine M Collins and Lionel Wong and Ilia Sucholutsky and Ryan Liu and Adrian Weller and Tianmin Shu and Thomas L Griffiths and Joshua B Tenenbaum, "On benchmarking human-like intelligence in machines." arXiv preprint arXiv:2502.20502, 2025. https://arxiv.org/abs/2502.20502
Published in ICLR 2025 Workshop on Bidirectional Human-AI Alignment, 2025
Recommended citation: Ilia Sucholutsky and Katherine M Collins and Maya Malaviya and Nori Jacoby and Weiyang Liu and Theodore Sumers and Michalis Korakakis and Umang Bhatt and Mark K Ho and Joshua B Tenenbaum and Bradley C Love and Zachary Pardos and Adrian Weller and Thomas L Griffiths, "Representational Alignment Supports Effective Teaching." ICLR 2025 Workshop on Bidirectional Human-AI Alignment, 2025. https://openreview.net/forum?id=7zxUVXFPez
Published in arXiv preprint arXiv:2501.10476, 2025
Recommended citation: Katherine M Collins and Umang Bhatt and Ilia Sucholutsky, "Revisiting Rogers' Paradox in the Context of Human-AI Interaction." arXiv preprint arXiv:2501.10476, 2025. https://arxiv.org/abs/2501.10476
Published in Nature Computational Science, 2025
Recommended citation: Ilia Sucholutsky and Katherine M Collins and Nori Jacoby and Bill D Thompson and Robert D Hawkins, "Using LLMs to advance the cognitive science of collectives." Nature Computational Science, 2025. https://www.nature.com/articles/s43588-025-00848-z
Published in arXiv e-prints, 2025
Recommended citation: Alexander Ku and Declan Campbell and Xuechunzi Bai and Jiayi Geng and Ryan Liu and Raja Marjieh and R Thomas McCoy and Andrew Nam and Ilia Sucholutsky and Veniamin Veselovsky and Liyi Zhang and Jian-Qiao Zhu and Thomas L Griffiths, "Using the tools of cognitive science to understand large language models at different levels of analysis." arXiv e-prints, 2025. https://ui.adsabs.harvard.edu/abs/2025arXiv250313401K/abstract
Published in arXiv preprint arXiv:2502.01540, 2025
Recommended citation: Raja Marjieh and Veniamin Veselovsky and Thomas L Griffiths and Ilia Sucholutsky, "What is a Number, That a Large Language Model May Know It?." arXiv preprint arXiv:2502.01540, 2025. https://arxiv.org/abs/2502.01540
Published in arXiv preprint arXiv:2503.13577, 2025
Recommended citation: Umang Bhatt and Sanyam Kapoor and Mihir Upadhyay and Ilia Sucholutsky and Francesco Quinzan and Katherine M Collins and Adrian Weller and Andrew Gordon Wilson and Muhammad Bilal Zafar, "When should we orchestrate multiple agents?." arXiv preprint arXiv:2503.13577, 2025. https://arxiv.org/abs/2503.13577
Published in arXiv preprint arXiv:2602.10001, 2026
Recommended citation: Chenyi Li and Raja Marjieh and Haoyu Hu and Mark Steyvers and Katherine M Collins and Ilia Sucholutsky and Nori Jacoby, "Human-AI Synergy Supports Collective Creative Search." arXiv preprint arXiv:2602.10001, 2026. https://arxiv.org/abs/2602.10001
Published in arXiv preprint arXiv:2603.12229, 2026
Recommended citation: Elizabeth Mieczkowski and Katherine M Collins and Ilia Sucholutsky and Natalia Vélez and Thomas L Griffiths, "Language Model Teams as Distributed Systems." arXiv preprint arXiv:2603.12229, 2026. https://arxiv.org/abs/2603.12229
Published in arXiv preprint arXiv:2602.21262, 2026
Recommended citation: Sasha Robinson and Kerem Oktar and Katherine M Collins and Ilia Sucholutsky and Kelsey R Allen, "Under the Influence: Quantifying Persuasion and Vigilance in Large Language Models." arXiv preprint arXiv:2602.21262, 2026. https://arxiv.org/abs/2602.21262
Published in arXiv preprint arXiv:2602.10473, 2026
Recommended citation: Haoyu Hu and Raja Marjieh and Katherine M Collins and Chenyi Li and Thomas L Griffiths and Ilia Sucholutsky and Nori Jacoby, "Why Human Guidance Matters in Collaborative Vibe Coding." arXiv preprint arXiv:2602.10473, 2026. https://arxiv.org/abs/2602.10473
Published:
An introduction to Generative Adversarial Networks intended for a technical audience with little to no background knowledge in neural networks.
Published:
Lossy, noisy, or missing data are common phenomena in many areas of statistics, ranging from sampling to statistical learning. Instead of simply ignoring missing values, it can be useful to attempt to recover or impute them. Meanwhile, deep learning has increasingly been shown to be adept at learning latent representations or distributions of data, patterns that are often too complex to recognize manually or with classical statistical techniques. We will discuss practical deep learning approaches to lossy data restoration and imputation, with examples from several different types of datasets, and compare the results against classical techniques to see whether deep learning can really deliver higher-quality imputation.
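As a minimal sketch of one family of such approaches (a denoising autoencoder trained to reconstruct masked entries), consider the following PyTorch example. The architecture, layer sizes, and masking scheme are illustrative assumptions, not the specific models discussed in the talk.

```python
# Minimal sketch (illustrative, not from the talk): a denoising autoencoder
# that learns to impute missing entries by reconstructing complete vectors
# from randomly masked ones.
import torch
import torch.nn as nn

class ImputationAutoencoder(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(model, optimizer, batch, drop_prob=0.2):
    # Simulate lossy data by randomly zeroing entries, then train the model
    # to recover the original values (loss on the masked entries only).
    mask = (torch.rand_like(batch) < drop_prob).float()
    corrupted = batch * (1 - mask)
    recon = model(corrupted)
    loss = ((recon - batch) ** 2 * mask).sum() / mask.sum().clamp(min=1)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = ImputationAutoencoder(n_features=10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.randn(256, 10)  # toy stand-in for a real dataset
for epoch in range(100):
    train_step(model, optimizer, data)

# At inference time, zero out the genuinely missing entries, run the model,
# and read imputed values off the reconstruction at the masked positions.
```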
Published:
We will go over five exciting projects from very different areas, and examine the deep learning algorithms underlying them, as inspiration for how you can enter the field regardless of where your interests or expertise currently lie.
Published:
Should you pursue graduate research in AI? What should you expect if you do? Most importantly, how do you ensure that it is a beneficial and positive experience for you? I hope to help you answer some of these questions by sharing my own experiences and introducing you to the very diverse set of AI projects happening at the University of Waterloo that you could work on as a graduate student.
Published:
While supervised learning techniques have become increasingly adept at separating images into different classes, these techniques require large amounts of labelled data, which may not always be available. We propose a novel neuro-dynamic method for unsupervised image clustering by combining two biologically motivated models: Adaptive Resonance Theory (ART) and Convolutional Neural Networks (CNN). ART networks are unsupervised clustering algorithms that have high stability in preserving learned information while quickly learning new information. Meanwhile, a major property of CNNs is their translation and distortion invariance, which has led to their success in the domain of vision problems. By embedding convolutional layers into an ART network, the useful properties of both networks can be leveraged to identify different clusters within unlabelled image datasets and classify images into these clusters. In exploratory experiments, we demonstrate that this method greatly increases the performance of unsupervised ART networks on a benchmark image dataset.
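For intuition, here is a heavily simplified sketch of the ART-style assignment loop such a method runs over CNN feature vectors. The vigilance threshold, learning rate, and single-winner search are assumptions made for exposition, not the paper's exact ConvART architecture.

```python
# Simplified ART-style clustering over feature vectors (illustrative only,
# not the paper's exact ConvART model). An input is assigned to the
# best-matching prototype if it passes a vigilance test; otherwise a new
# cluster is recruited. This is ART's stability/plasticity mechanism.
import numpy as np

def art_cluster(features, vigilance=0.75, lr=0.5):
    """features: (n_samples, d) array with each row scaled to [0, 1]."""
    prototypes, labels = [], []
    for x in features:
        if prototypes:
            # Choice function: rank prototypes by fuzzy overlap with the input.
            scores = [np.minimum(x, w).sum() / (0.001 + w.sum()) for w in prototypes]
            j = int(np.argmax(scores))
            # Vigilance test: accept the winner only if it matches closely enough.
            if np.minimum(x, prototypes[j]).sum() / (x.sum() + 1e-9) >= vigilance:
                # Resonance: nudge the winning prototype toward the input.
                prototypes[j] = (1 - lr) * prototypes[j] + lr * np.minimum(x, prototypes[j])
                labels.append(j)
                continue
        # Mismatch (or empty network): recruit a new cluster for this input.
        prototypes.append(x.copy())
        labels.append(len(prototypes) - 1)
    return np.array(labels), prototypes

# In the ConvART setting, `features` would be activations from convolutional
# layers (e.g., a CNN's penultimate layer, rescaled to [0, 1]), not raw pixels.
rng = np.random.default_rng(0)
labels, protos = art_cluster(rng.uniform(size=(100, 32)))
print(f"{len(protos)} clusters found")
```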
Published:
Most real-world datasets, and particularly those collected from physical systems, are full of noise, packet loss, and other imperfections. However, most specification mining, anomaly detection, and other such algorithms assume, or even require, perfect data quality to function properly. Such algorithms may work in lab conditions when given clean, controlled data, but will fail in the field when given imperfect data. We propose a method for accurately reconstructing discrete temporal or sequential system traces affected by data loss, using Long Short-Term Memory networks (LSTMs). The model works by learning to predict the next event in a sequence of events, and uses its own output as an input to continue predicting future events. As a result, this method can be used for data restoration even with streamed data. It can reconstruct even long sequences of missing events, and can also help validate and improve data quality for noisy data. The output of the model is a close reconstruction of the true data and can be fed to algorithms that rely on clean data. We demonstrate our method by reconstructing automotive CAN traces consisting of long sequences of discrete events. We show that, given even small parts of a CAN trace, our LSTM model can predict future events with an accuracy of almost 90%, and can successfully reconstruct large portions of the original trace, greatly outperforming a Markov model benchmark. We separately feed the original, lossy, and reconstructed traces into a specification mining framework to perform downstream analysis of the effect of our method on state-of-the-art models that use these traces to understand the behavior of complex systems.
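As a rough illustration of the core mechanism described above (an LSTM that predicts the next discrete event and feeds its own predictions back in to fill a gap), here is a hedged PyTorch sketch. The layer sizes, greedy decoding, and helper names are assumptions, not the paper's exact configuration.

```python
# Illustrative sketch of autoregressive trace reconstruction: an LSTM trained
# to predict the next event in a sequence is run on the observed prefix, then
# its own outputs are fed back in to generate the missing events.
import torch
import torch.nn as nn

class NextEventLSTM(nn.Module):
    def __init__(self, n_events: int, embed: int = 32, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(n_events, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_events)

    def forward(self, seq, state=None):
        out, state = self.lstm(self.embed(seq), state)
        return self.head(out), state

@torch.no_grad()
def reconstruct(model, prefix, n_missing):
    """Given observed events `prefix` (1D LongTensor), predict the next
    `n_missing` events by repeatedly feeding the model its own output."""
    logits, state = model(prefix.unsqueeze(0))   # warm up on the prefix
    next_event = logits[0, -1].argmax()          # greedy choice of next event
    filled = []
    for _ in range(n_missing):
        filled.append(int(next_event))
        # Feed the prediction back in, carrying the hidden state forward,
        # which also makes this usable on streamed data.
        logits, state = model(next_event.view(1, 1), state)
        next_event = logits[0, -1].argmax()
    return filled

# Training would minimize cross-entropy between the model's output at each
# step and the following event in the trace, e.g.:
#   logits, _ = model(trace[:, :-1])
#   loss = F.cross_entropy(logits.reshape(-1, n_events), trace[:, 1:].reshape(-1))
```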
Undergraduate course, University of Waterloo, Department of Statistics and Actuarial Science, 2020
I offered a section of STAT 231 in Winter 2020.
Undergraduate course, Princeton University, Computer Science, 2022
I offered COS IW 10: Deep learning with small data in Spring 2022.
Graduate course, NYU, Center for Data Science, 2024
I offered DS-GA 3001.011: Special Topics in Data Science - Learning from small data in Fall 2024.
Graduate course, NYU, Center for Data Science, 2025
I’m offering DS-GA 3001.011: Special Topics in Data Science - Learning from small data in Fall 2025.