0. The shallow reality of 'deep learning theory'
Produced as part of the SERI ML Alignment Theory Scholars Program, Winter 2022 Cohort.
Most results under the umbrella of "deep learning theory" are not actually deep, about learning, or even theories.
This is because classical learning theory makes the wrong assumptions, takes the wrong limits, uses the wrong metrics, and aims for the wrong objectives. Learning theorists are stuck in a rut of one-upmanship, vying for vacuous bounds that say nothing about any system of actual interest.
Yudkowsky tweeting about statistical learning theorists. (Okay, not really.)
In particular, I'll argue throughout this sequence that:
- Empirical risk minimization is the wrong framework, and risk is a weak foundation.
- In approximation theory, the universal approximation results are too general (they do not constrain efficiency), while the "depth separation" results meant to demonstrate the role of depth are too specific (they involve constructing contrived, unphysical target functions).
- Generalization theory has only two tricks, and both are limited:
  - Uniform convergence is the wrong approach, and model class complexities (VC dimension, Rademacher complexity, and covering numbers) are the wrong metric. Understanding deep learning requires looking at the microscopic structure within model classes.
  - Robustness to noise is an imperfect proxy for generalization, and techniques that rely on it (margin theory, sharpness/flatness, compression, PAC-Bayes, etc.) are oversold.
- Optimization theory is a bit better, but training-time guarantees involve questionable assumptions, and the obsession with second-order optimization is delusional. Also, the NTK (neural tangent kernel) is bad. Get over it.
- At a higher level, the obsession with deriving bounds for approximation/generalization/learning behavior is misguided. These bounds serve mainly as political benchmarks rather than as a source of theoretical insight. More attention should go towards explaining empirically observed phenomena like double descent (which, to be fair, is starting to happen).
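For concreteness, here is a minimal sketch (my own toy illustration, not part of the sequence) of the kind of class-level complexity measure criticized above: a Monte Carlo estimate of the empirical Rademacher complexity of a small class of one-dimensional threshold classifiers. All names and parameters below are my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model class: threshold classifiers h_t(x) = sign(x - t),
# evaluated on a fixed sample of n points.
n = 50
X = rng.uniform(-1.0, 1.0, size=n)
thresholds = np.linspace(-1.0, 1.0, 21)
H = np.sign(X[None, :] - thresholds[:, None])  # shape (num_hypotheses, n)
H[H == 0] = 1.0

# Monte Carlo estimate of the empirical Rademacher complexity:
#   R_hat(H) = E_sigma[ max_h (1/n) * sum_i sigma_i * h(x_i) ]
# i.e., how well the class can correlate with random sign labels.
num_draws = 2000
sigmas = rng.choice([-1.0, 1.0], size=(num_draws, n))
correlations = sigmas @ H.T / n  # shape (num_draws, num_hypotheses)
rademacher = correlations.max(axis=1).mean()
print(f"empirical Rademacher complexity ~ {rademacher:.3f}")
```

Note that this number is a single scalar attached to the whole model class on a given sample; it shrinks as `n` grows, which is exactly what uniform-convergence bounds exploit, and it says nothing about the internal structure of any individual model.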
That said, there are new approaches that I'm more optimistic about. In particular, I think that singular learning theory (SLT) is the most likely path to lead to a "theory of deep learning" because it (1) has stronger theoretical foundations, (2) engages with the structure of individual models, and (3) gives us a principled way to bridge between this microscopic structure and the macroscopic properties of the model class^{1}. I expect the field of mechanistic interpretability and the eventual formalism of phase transitions and "sharp left turns" to be grounded in the language of SLT.
Why theory?
A mathematical theory of learning and intelligence could be a valuable tool in the alignment arsenal, one that helps us:
- Develop and scale interpretability tools.
- Inspire better experiments (i.e., focus our bits of attention more effectively).
- Establish a common language between experimentalists and theorists.
That's not to say that the right theory of learning is risk-free:
- A good theory could inspire new capabilities. We didn't need a theory of mechanics to build the first vehicles, but we couldn't have gotten to the moon without it.
- The wrong theory could mislead us. Just as theory tells us where to look, it also tells us where not to look. The wrong theory could cause us to neglect important parts of the problem.
- It could be one prolonged nerd-snipe that draws attention and resources away from other critical areas in the field. Brilliant string theorists aren't exactly helping advance standards of living and technology by computing the partition functions of black holes in 5D de Sitter spaces.^{2}
All that said, I think the benefits currently outweigh the risks, especially if we put the right infosec policy in place if and when learning theory starts showing signs of any practical utility. It's fortunate, then, that we haven't seen those signs yet.
Outline
My aims are:
- To discourage other alignment researchers from wasting their time.
- To argue for what makes singular learning theory different and why I think it is the likeliest contender for an eventual grand unified theory of learning.
- To invoke Cunningham's law — i.e., to get other people to tell me where I'm wrong and what I've been missing in learning theory.
There's also the question of integrity: if I am to criticize an entire field of people smarter than I am, I had better present a strong argument and ample evidence.
Throughout the rest of this sequence, I'll be drawing on notes I compiled from lecture notes by Telgarsky, Moitra, Grosse, Mossel, Ge, and Arora, books by Roberts et al. and Hastie et al., a festschrift of Chervonenkis, and a litany of articles.^{3}
The sequence follows the threefold division of approximation, generalization, and optimization preferred by the learning theorists. There's an additional preface on why empirical risk minimization is flawed (up next) and an epilogue on why singular learning theory seems different.
 0. The shallow reality of 'deep learning theory' (You are here)
 1. Empirical risk minimization is fundamentally confused
 2. On approximation — the cosmic waste of universal approximation
 3. On generalization — against PAC learning
 4. On optimization — the NTK is bad, actually
 5. What makes singular learning theory different?
Footnotes

1. This sequence was inspired by my worry that I had focused too singularly on singular learning theory. I went on a journey through the broader sea of "learning theory" hopeful that I would find other signs of useful theory. My search came up mostly empty, which is why I decided to write >10,000 words on the subject. ↩

2. Though, granted, string theory keeps popping up in other branches like condensed matter theory, where it can go on to motivate practical results in materials science (and in singular learning theory, for that matter). ↩

3. I haven't gone through all of these sources in equal detail, but the content I cover is representative of what you'll learn in a typical course on deep learning theory. ↩