Hello. I’m Ilia.

I’m doing a postdoc in the CoCoSci Lab at Princeton University supervised by Dr. Tom Griffiths.

Before that, I did my PhD in Statistics at the University of Waterloo under the supervision of Dr. Matthias Schonlau, defending my thesis on ‘Learning From Almost No Data’.

I’m fascinated by deep learning and its ability to reach superhuman performance on so many different tasks. I want to better understand how neural networks achieve such impressive results… and why they sometimes don’t. To do that, I’m exploring what kind of information or knowledge is contained in the datasets we train our models on, and how much of that knowledge our models actually need.

I started out using deep learning to restore lost data from cars in order to improve anomaly detection algorithms and make cars safer. The ability to restore lost data suggests that knowledge is duplicated across a dataset.

Then, I worked on improving dataset distillation, the process of learning tiny synthetic datasets that contain all the knowledge of much larger datasets. If knowledge is duplicated across a dataset, then it should be possible to represent that knowledge using fewer samples.
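To give a flavor of the mechanics, here is a minimal sketch of the bi-level optimization idea behind dataset distillation. Everything in it is illustrative (the toy data, sizes, and learning rates are assumptions, and this is not the exact procedure from our papers): the synthetic points are learnable parameters, a fresh model takes one differentiable gradient step on them, and the synthetic points are updated so that the resulting model fits the real data.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy "real" dataset: 500 two-dimensional points from two Gaussian blobs.
real_x = torch.cat([torch.randn(250, 2) + 2.0, torch.randn(250, 2) - 2.0])
real_y = torch.cat([torch.zeros(250, dtype=torch.long),
                    torch.ones(250, dtype=torch.long)])

# The distilled dataset: just 4 learnable synthetic points with fixed labels.
syn_x = torch.randn(4, 2, requires_grad=True)
syn_y = torch.tensor([0, 0, 1, 1])
outer_opt = torch.optim.Adam([syn_x], lr=0.05)

for step in range(300):
    # Inner step: a fresh linear model takes one gradient step on the
    # synthetic data. create_graph=True keeps this step differentiable.
    w = torch.zeros(2, 2, requires_grad=True)
    b = torch.zeros(2, requires_grad=True)
    inner_loss = F.cross_entropy(syn_x @ w + b, syn_y)
    gw, gb = torch.autograd.grad(inner_loss, (w, b), create_graph=True)
    w_new, b_new = w - 1.0 * gw, b - 1.0 * gb

    # Outer step: the one-step-trained model should fit the *real* data,
    # so the loss gradient flows back into the synthetic points themselves.
    outer_loss = F.cross_entropy(real_x @ w_new + b_new, real_y)
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()

print(syn_x.detach())  # 4 points standing in for 500
```

Real dataset distillation unrolls many inner steps and trains full networks, but the core trick, differentiating the learner’s training procedure with respect to its training data, is the same.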

Now, I work on “less than one”-shot learning, an extreme form of few-shot learning where the goal is for models to learn N new classes using M < N training samples. If models can generalize from a small number of synthetic samples, can they also generalize from a small number of real samples?
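Here is a toy illustration of the soft-label prototype idea behind LO-shot learning (a sketch with made-up numbers, not the configuration from the paper): two prototypes whose labels are probability distributions over three classes produce three decision regions under a distance-weighted soft-label nearest-neighbor rule, so the middle class is learned without a single dedicated example.

```python
import numpy as np

# Two prototypes (M = 2) carrying soft labels over three classes (N = 3).
# All coordinates and label distributions are invented for illustration.
protos = np.array([[0.0, 0.0],
                   [4.0, 0.0]])
soft_labels = np.array([[0.6, 0.4, 0.0],   # mostly class 0, some class 1
                        [0.0, 0.4, 0.6]])  # mostly class 2, some class 1

def classify(x):
    """Distance-weighted soft-label kNN over all prototypes
    (in the spirit of the SLaPkNN classifier)."""
    d = np.linalg.norm(protos - x, axis=1) + 1e-9  # avoid division by zero
    w = 1.0 / d                                    # closer prototypes count more
    scores = (w[:, None] * soft_labels).sum(axis=0)
    return int(np.argmax(scores))

# Left region -> class 0, right region -> class 2, and the middle
# -> class 1, a class with no prototype of its own.
for x in ([-1.0, 0.0], [2.0, 0.0], [5.0, 0.0]):
    print(x, "->", classify(np.array(x)))
```

With hard labels, M points can separate at most M classes under a 1-NN rule; the soft labels are what let two points encode a third decision region.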

Check out my publications to see my progress so far. If you find something you’re interested in discussing, shoot me an email and I’d be happy to chat. The best way to reach me is at isucholu@uwaterloo.ca.

News

April 2022: Our paper, “Predicting Human Similarity Judgments Using Large Language Models”, was accepted to the First Workshop on Learning with Natural Language Supervision at ACL 2022.

March 2022: I received an NSERC Postdoctoral Fellowship to support my research on STEM-AI: Scientific Theory Encoding Methods for Artificial Intelligence.

January 2022: I’m also joining Josh Tenenbaum’s CoCoSci Lab at MIT as a research affiliate.

July 2021: I’m joining Tom Griffiths’s CoCoSci Lab at Princeton University as a postdoc.

June 2021: I defended my thesis on ‘Learning From Almost No Data’.

May 2021: I’m joining Kin-Keepers as an AI Advisor to support their mission of developing a solution that will enable dementia sufferers to communicate and feel understood.

April 2021: Two of our papers were accepted to IJCNN 2021: “Soft-Label Dataset Distillation and Text Dataset Distillation” (preprint) and “One Line To Rule Them All: Generating LO-Shot Soft-Label Prototypes” (preprint).

March 2021: Our paper, “Optimal 1-NN Prototypes for Pathological Geometries”, was accepted for publication in PeerJ Computer Science.

January 2021: Our research was profiled in Scientific American as a pathway towards democratizing AI.

December 2020: Our paper on ‘Less Than One’-Shot Learning was accepted to AAAI-21!

November 2020: Our extended abstract on privacy-preserving dataset distillation was accepted for poster presentation in the AAAI-21 Student Abstract and Poster Program.

October 2020: Our LO-shot learning research was featured in MIT Technology Review, Digital Trends, and several other outlets!

August 2020: I’m joining Stratum AI as VP of Research and will be leading the development of ML/DL methods to make mining more efficient and environmentally sustainable.