Jesse Hoogland
Executive Director, Timaeus
Berkeley, CA, USA
[email protected] • +1 415 424 0316
Website • GitHub • LessWrong • Twitter • LinkedIn
Summary
I'm an AI safety researcher and one of the founders of the Singular Learning Theory (SLT) for AI safety research agenda. I co-founded and now direct Timaeus, a nonprofit research organization pursuing this agenda. With my team, I've scaled the organization from 3 to 16 staff and raised $3.5M+ in grants. In addition to co-authoring papers, I'm an experienced communicator with 20+ talks at frontier AI labs and academic venues.
Core Skills
- Research: Singular Learning Theory • Interpretability & Alignment • Statistical Physics
- Operations: Non-profit leadership • Fundraising • Research management
- Communication: Technical writing • Public speaking • Data visualization
- Technical: PyTorch • TPUs • MLOps (HuggingFace, Wandb) • DevOps (AWS, GCP, Docker, K8s)
Current Role
Timaeus — Executive Director
Berkeley, CA | Jul 2023 - Present
I run Timaeus, a nonprofit AI safety research organization working on singular learning theory (SLT) for alignment. SLT establishes a connection between the geometry of the loss landscape and internal structure in models, which we are using to develop scalable, rigorous tools for evaluating, interpreting, and aligning neural networks.
My main responsibilities include:
- Outreach: My primary responsibility is making sure our research reaches external stakeholders (AI safety researchers, scaling labs, funders, etc.). In practice, this means writing distillations and explainers (see below), giving regular talks, and conducting targeted outreach to specific individuals.
- Research: I work closely alongside our director of research, Daniel Murfet, on developing and implementing our research agenda. For a full list of our papers, see here.
- Operations: I work closely alongside our director of operations, Stan van Wingerden, on operations, recruitment, and fundraising. Cumulatively, we've raised ~$3.5M via the SFF, Manifund, LTFF, Open Phil, and AISTOF. We've scaled the team from 3 FTE to 16 FTE as of July 2025.
History of Timaeus
Timaeus was founded in October 2023 after we raised funding from Manifund and later SFF.
Our initial focus was validating basic scientific predictions made by SLT (Do phase transitions really exist in models trained by SGD? Can we really learn new things by studying development? Etc.). This was a basic prerequisite to the rest of the agenda (applying this understanding to safety). As we describe below in "Timaeus in 2024", this effort was successful.
Subsequently, we turned to the next major bottleneck: scaling our techniques to larger models. As of early 2025, we successfully reached the 10B parameter range. We will release our first safety-relevant results from this work in mid-2025.
Past updates:
- Timaeus in 2024
February 2025 • Jesse Hoogland, Stan van Wingerden, Alexander Gietelink Oldenziel, Daniel Murfet
- Timaeus's First Four Months
February 2024 • Jesse Hoogland, Stan van Wingerden, Alexander Gietelink Oldenziel, Daniel Murfet
- Announcing Timaeus
October 2023 • Jesse Hoogland, Daniel Murfet, Stan van Wingerden, Alexander Gietelink Oldenziel
Events
I was introduced to singular learning theory (SLT) and the rest of Timaeus's founding team by my co-founder, Alexander Gietelink Oldenziel. Alexander proposed the idea of an SLT for Alignment conference, which became the first occasion on which the founding team worked together and met in person.
- ILIAD 2024
August 2024
At ILIAD, I was in charge of organizing and coordinating the SLT track, and I also gave two talks.
- The 2023 Oxford Conference
November 2023
I co-organized a one-week summit on developmental interpretability at Wytham Abbey in Oxford. Videos are available here. My responsibilities were similar to those for the Berkeley conference (fundraising, logistics, and content). This conference's main impact was to accelerate our initial round of papers and disseminate intermediate progress.
- The 2023 Berkeley Conference
June 2023
I co-organized a two-week summit (June 19th - July 2nd, 2023) on singular learning theory and its applications to AI safety, together with the rest of the team that would go on to found Timaeus. With Alexander Gietelink Oldenziel, I raised on the order of $40k. With Daniel Murfet, I prepared a curriculum, giving six talks myself (see below; videos available here). With Stan van Wingerden, I arranged the logistics of the conference, including the venue, the food, and the schedule. We ultimately brought together around 40 people in person and more than 150 people virtually to learn about singular learning theory and its applications to alignment. This directly culminated in our proposing the "developmental interpretability" research agenda, our initial funding, and the founding of Timaeus.
Prior Experience
SERI MATS 3.0 & 3.1 — Scholar
Amsterdam, NL; Berkeley, USA | Oct 2022 - Jul 2023
I was in Evan Hubinger's Deceptive Alignment track. This is when I first began investigating developmental interpretability and, later, SLT, which I've been working on ever since. I co-organized the first SLT and Alignment conference, which led to the subsequent development of the SLT for Alignment research agenda and the founding of Timaeus.
During this time, I also:
- Worked as a Research Assistant at David Krueger's group at the University of Cambridge with Lauro Langosco and Xander Davies on studying the links between grokking and (epoch-wise) double descent.
- Contributed to the "Single-Agent Control" chapter for Dan Hendrycks's textbook at the Center for AI Safety. I wrote most of the chapter and coordinated its completion with other writers.
FTX Future Fund — Grant Recipient
Amsterdam, NL | Oct 2022 - Mar 2023
I received a grant from the FTX Future Fund to bridge my transition to AI safety research. I worked through several textbooks (e.g., Mathematics for Machine Learning, Pattern Recognition & Machine Learning, Artificial Intelligence, Reinforcement Learning) and courses (ARENA virtual).
Health Curious — CTO
Amsterdam, NL; SF, USA; Brasília, BR | Mar 2021 - Aug 2022
In 2021, as I was finishing my Master's, I started a company with my now wife, Robin Laird, to help small and independent healthcare providers build their own virtual care programs. We focused on bariatric surgery, the leading intervention for obesity. We decided to close Health Curious after a year and a half, when I realized I found the problem uninspiring, the work a poor personal fit, and AI too important to ignore.
Bit — Software Developer & Coach
Amsterdam, NL | Mar 2020 - Oct 2022
During my Bachelor's and Master's, I worked part-time at Bit, an applied consulting studio where companies go to quickly prototype new products and solutions with fresh young talent. I worked on two projects with SURF, coached high school students, and taught bootcampers in Bit's educational offshoot, the Bit Academy.
Education
University of Amsterdam — Master of Science in Physics (Theoretical Physics)
Amsterdam, NL | Sep 2019 - Aug 2021
I focused on dynamical systems theory and conducted my research under the supervision of Dr. Greg Stephens, whose group studies the physics of animal behavior.
GPA: 4.0
Thesis: The Ergodic Theory of Random Neural Networks (8.5/10)
Amsterdam University College — Bachelor of Science (Physics & CS)
Amsterdam, NL | Sep 2016 - Jun 2019
I majored in theoretical physics and computer science and graduated as salutatorian (defined as among the top-10 students of my class). My thesis was my program's exclusive nomination for the VU thesis prize (out of a class of 300+ students).
GPA: 4.0
Thesis: Restricted Boltzmann Machines and the Renormalization Group: Learning Relevant Information in Statistical Physics (9.7/10)
Fox Lane High School — High School Diploma
Bedford, NY | Sep 2012 - Jun 2016
I graduated as salutatorian, with the second-highest GPA out of a class of roughly 400 students.
GPA: 4.0
Research
I work on and help direct the Singular Learning Theory (SLT) for Alignment research agenda, which was developed by Daniel Murfet. As part of this agenda, I helped establish developmental interpretability, an approach to interpretability grounded in singular learning theory that studies how model structure changes over the course of learning.
Publications
- Studying Small Language Models with Susceptibilities
April 2025
Garrett Baker=, George Wang=, Jesse Hoogland, Daniel Murfet
- You Are What You Eat – AI Alignment Requires Understanding How Data Shapes Structure and Generalisation
February 2025
Simon Pepin Lehalleur=, Jesse Hoogland=, Matthew Farrugia-Roberts=, Susan Wei, Alexander Gietelink Oldenziel, Stan van Wingerden, George Wang, Zach Furman, Liam Carroll, Daniel Murfet
- Dynamics of Transient Structure in In-Context Linear Regression Transformers
January 2025
Liam Carroll, Jesse Hoogland, Matthew Farrugia-Roberts, Daniel Murfet
- Differentiation and Specialization of Attention Heads via the Refined Local Learning Coefficient
October 2024 • ICLR • Spotlight
George Wang=, Jesse Hoogland=, Stan van Wingerden=, Zach Furman, Daniel Murfet
- Loss Landscape Degeneracy Drives Stagewise Development in Transformers
February 2024 • TMLR 2025 • Best Paper @ HiLD Workshop
Jesse Hoogland=, George Wang=, Matthew Farrugia-Roberts, Liam Carroll, Susan Wei, Daniel Murfet
Talks
- Singular Learning Theory & AI Safety
June 2025 • SLT Seminar
- Embryology of AI
June 2025 • The Cognitive Revolution
- Jesse Hoogland on Singular Learning Theory
December 2024 • AXRP
- Jesse Hoogland - Singular Learning Theory
July 2024 • SciFuture
- The Case for AI X-risk (Alignment 1)
June 2024 • SLT Summit 2023
- The Physics of Intelligence (Physics 1)
June 2024 • SLT Summit 2023
- Singular Learning Theory: Overview And Recent Evidence
May 2024 • Plectics Labs
- [Series] Growth and Form in Neural Networks
February 2024 • Various organizations including Topos Institute, OpenAI, DeepMind (Virtual), Anthropic, MATS, FAR, 80,000 Hours (Virtual), Constellation, Carnegie Mellon University (Virtual)
- The Plan
June 2023 • SLT Summit 2023
- The State of AI Safety (Alignment 2)
June 2023 • SLT Summit 2023
- Singularities and Nonlinear Dynamics (Physics 3)
June 2023 • SLT Summit 2023
- Statistical Mechanics, Boltzmann Distribution, Free Energy, Phases and Phase Transitions (Physics 2)
June 2023 • SLT Summit 2023
- Jesse Hoogland – AI Risk, Interpretability
June 2023 • The Inside View
- The Physics of Intelligence
May 2023 • Imperial College London
Other Writing
- SLT for AI Safety
- The Sweet Lesson: AI Safety Should Scale With Compute
- Timaeus is hiring researchers & engineers
- o1: A Technical Primer
- New o1-like model (QwQ) beats Claude 3.5 Sonnet with only 32B parameters
- Timaeus is hiring!
- Stagewise Development in Neural Networks
- Generalization, from thermodynamics to statistical physics
- You’re Measuring Model Complexity Wrong
- Open Call for Research Assistants in Developmental Interpretability
- Towards Developmental Interpretability
- Approximation is expensive, but the lunch is cheap
- Singularities against the Singularity: Announcing Workshop on Singular Learning Theory and Alignment
- Empirical risk minimization is fundamentally confused
- The shallow reality of 'deep learning theory'
- Gradient surfing: the hidden role of regularization
- Spooky action at a distance in the loss landscape
- Neural networks generalize because of this one weird trick
- No, human brains are not (much) more efficient than computers
Projects
Personal Website — JesseHoogland.com
Dec 2020 - Ongoing
I publish much of what I write to my personal website.
Data Visualization Tools
Since reading Bret Victor's What Can a Technologist Do About Climate Change?, I've been obsessed with the idea of developing better tools for embedding dynamic models in writing. I've made several attempts to improve on the sorry state of affairs: first, remark-tangle, which adds support for Tangle in the remark ecosystem; then ddmd, an attempt to subsume Tangle altogether; later, an unpublished library to subsume ddmd (in favor of Solid.js's powerful reactive model); and in 2022, obsidian-squiggle, which adds support for the probabilistic programming language Squiggle in the note-taking app Obsidian.
Language-Learning Tools
I love learning languages. A lot. I built my own frontend for Anki to speed up flashcard creation (by pulling in content from Google searches, Wiktionary, Forvo, etc.). When I was ready to start learning Mandarin, I created a tone trainer for myself. At another point, I set out to convert Wiktionary into machine-readable semantic triples so I could automatically pull more helpful information into my flashcards. That turned out to be pretty audacious (I haven't finished the wikitext parser yet), so it's currently on the backburner.
Note-Taking Tools
Note-taking is an addiction. My personal Obsidian vault is at about 10,000 notes. My personal Anki deck is at about 50,000 flashcards and 500,000 total reviews (encompassing an entire month of my life). I've made a few plugins / modifications (obsidian-squiggle, obsidian-export, an unpublished obsidian-export v2, fork of obsidian-linter, etc.).
Skills
Research
Interpretability, Alignment, Science of Deep Learning, Singular Learning Theory
Machine Learning
PyTorch, JAX, XLA, TPUs, Wandb, HuggingFace
Devops
Docker, K8s, AWS, GCP
Web Development
JS/TS, React, Next.js, Astro, FastAPI
Communications
Writing, Lecturing, Lots of Emailing
Operations/Management
Agile/Scrum, Linear
Languages
English
Native
Dutch
Native
French
B2
Spanish
B2
Portuguese
B2
Italian
B1
Japanese
A1
Mandarin
A1