Singular Learning Theory
NOTE: Work in Progress
Abstract
Introduction
Regular learning theory is lying to you: "overparametrized" models actually aren't overparametrized, and generalization is not just a question of broad basins.
The standard explanation for neural networks is that gradient descent settles in flat basins of the loss function. On the left, in a sharp minimum, the updates bounce the model around. Performance will vary wildly with new examples. On the right, in a flat minimum, the updates settle to zero. Performance is stable under small perturbations.
That's because loss basins actually aren't basins but valleys, and at the base of these valleys lie manifolds of constant, minimum loss. The higher the dimension of these "rivers", the lower the effective dimensionality of your model. Generalization is a balance between expressivity (more effective parameters) and simplicity (fewer effective parameters).
Singular directions lower the effective dimensionality of your model. In this example, a line of degenerate points effectively restricts the two-dimensional loss surface to one dimension.
These manifolds correspond to the internal symmetries of NNs: continuous variations of a given network that perform the same calculation. Many of these symmetries are predetermined by the architecture and so are always present. We call these "generic". The more interesting symmetries are non-generic symmetries, which the model can form or break during training.
In this light, part of the power of NNs is that they can vary their effective dimensionality (thus also expressivity). Generality comes from a kind of "forgetting" in which the model throws out unnecessary dimensions. At the risk of being elegance-sniped, SLT seems like a promising route to develop a better understanding of training dynamics (and phenomena such as sharp left turns and path-dependence). If we're lucky, SLT may even enable us to construct a grand unified theory of scaling.
A lot still needs to be done (esp. in terms of linking the Bayesian presentation of singular learning theory to conventional machine learning), but, from an initial survey, singular learning theory feels meatier than other explanations of generalization.^{1} So let me introduce you to the basics…
Singular Models
Maximum likelihood estimation is KL-divergence minimization.
We're aiming for a shallow introduction of questionable rigor. For full detail, I recommend Carroll's MSc thesis here (whose notation I am adopting).
The setting is Bayesian, so we'll start by translating the setup of a "standard" regression problem into more appropriate Bayesian language.
We have some true distribution $q(y|x)$ and some model $p(y|x, w)$ parametrized by weights, $w \in W \subseteq \mathbb R^D$. Our aim is to learn the weights that make $p$ as "close" as possible to $q$.
Given a dataset $\mathcal D = \{(x_i, y_i)\}_{i=1}^n$, frequentist learning is usually formulated in terms of the empirical likelihood of our data (which assumes that each sample is i.i.d.):

$$p(\mathcal D \mid w) = \prod_{i=1}^n p(y_i \mid x_i, w).$$
The aim of learning is to find the weights that maximize this likelihood (hence "maximum likelihood estimator"):

$$w^{(0)} = \arg\max_{w \in W}\, p(\mathcal D \mid w).$$
That is: we want to find the weights which make our observations as likely as possible.
In practice, because sums are easier than products and because we like our bits to be positive, we end up trying to minimize the negative log likelihood instead of maximizing the vanilla likelihood. That is, we're minimizing average bits of information rather than maximizing probabilities:

$$L_n(w) = -\frac{1}{n} \sum_{i=1}^n \log p(y_i \mid x_i, w).$$
If we define the empirical entropy, $S_n$, of the true distribution,

$$S_n = -\frac{1}{n} \sum_{i=1}^n \log q(y_i \mid x_i),$$
then, since $S_n$ is independent of $w$, we find that minimizing $L_n(w)$ is equivalent to minimizing the empirical Kullback-Leibler divergence, $K_n(w)$, between our model and the true distribution:

$$K_n(w) = L_n(w) - S_n = \frac{1}{n} \sum_{i=1}^n \log \frac{q(y_i \mid x_i)}{p(y_i \mid x_i, w)}.$$
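To make this concrete, here's a minimal numeric sketch (my own, not from the text): a toy setup where the true distribution is $q(y|x) = \mathcal N(0, 1)$ and the model is $p(y|x, w) = \mathcal N(w, 1)$, checking that $L_n$ and $K_n$ differ only by the $w$-independent constant $S_n$.

```python
# Toy check (assumed setup, not from the original): q = N(0, 1), model
# p(y|x, w) = N(w, 1). The NLL and the empirical KL divergence differ
# by the empirical entropy S_n, which does not depend on w, so both
# have the same minimizer.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=1000)          # samples from q

def log_gaussian(y, mu):                     # log density of N(mu, 1)
    return -0.5 * np.log(2 * np.pi) - 0.5 * (y - mu) ** 2

def L_n(w):                                  # negative log likelihood
    return -np.mean(log_gaussian(y, w))

def K_n(w):                                  # empirical KL divergence
    return np.mean(log_gaussian(y, 0.0) - log_gaussian(y, w))

S_n = -np.mean(log_gaussian(y, 0.0))         # empirical entropy of q

for w in [-1.0, 0.3, 2.0]:
    assert np.isclose(L_n(w) - K_n(w), S_n)  # differ by a constant
```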
So maximizing the likelihood is not just some half-assed frequentist heuristic. It's actually an attempt to minimize the most straightforward information-theoretic "distance" between the true distribution and our model.
The advantage of working with the KL-divergence is that it's bounded below: $K(w) \geq 0$ (writing $K(w)$ for the $n \to \infty$ limit of $K_n(w)$), with equality iff $p(y|x, w) = q(y|x)$ almost everywhere.
In this frame, our learning task is not simply to minimize the KL-divergence, but to find the true parameters:

$$W_0 = \{w \in W : K(w) = 0\}.$$
Note that it is not necessarily the case that a set of true parameters actually exists. If your model is insufficiently expressive, then the true model need not be realizable: your best fit may have some nonzero KLdivergence.
Still, from the perspective of generalization, it makes more sense to talk about true parameters than simply the KLdivergenceminimizing parameters. It's the true parameters that give us perfect generalization (in the limit of infinite data).
The Bayesian Information Criterion is a lie.
One of the main strengths of the Bayesian frame is that it lets you enforce a prior $\varphi(w)$ over the weights, which you can integrate out to derive a parameter-free model:

$$p(y \mid x) = \int_W p(y \mid x, w)\, \varphi(w)\, \mathrm dw.$$
One of the main weaknesses is that this integral is almost always intractable. So Bayesians make a concession to the frequentists with a much more tractable Laplace approximation (i.e., you approximate your posterior as quadratic/Gaussian in the vicinity of the maximum likelihood estimator (MLE), $w^{(0)}$):^{2}

$$p(w \mid \mathcal D) \approx \mathcal N\left(w^{(0)},\ \frac{1}{n} I(w^{(0)})^{-1}\right),$$
where $I(w)$ is the Fisher information matrix:

$$I_{jk}(w) = \mathbb E\left[ \frac{\partial}{\partial w_j} \log p(y \mid x, w)\; \frac{\partial}{\partial w_k} \log p(y \mid x, w) \right].$$
The Laplace approximation is a probability theorist's Taylor approximation.
From this approximation, a bit more math gives us the Bayesian information criterion (BIC):

$$\mathrm{BIC} = n L_n(w^{(0)}) + \frac{D}{2} \log n.$$
The BIC (like the related Akaike information criterion) is a criterion for model selection that penalizes complexity. Given two models, the one with the lower BIC tends to overfit less (/"generalize better").
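As a sanity check, here's a toy model-selection example (my own sketch, using one common convention, $\mathrm{BIC} = n L_n + \frac{D}{2}\log n$): on truly linear data, the extra parameters of a quintic fit cost more than they help.

```python
# Toy BIC comparison (my own example): fit degree-1 and degree-5
# polynomials to linear data and compare BIC = NLL + (D/2) * log(n),
# where NLL is the total Gaussian negative log likelihood at the MLE.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 2.0 * x + rng.normal(0.0, 0.3, size=x.size)   # truly linear data

def bic(degree):
    n = x.size
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma2 = np.mean(resid ** 2)                   # MLE of noise variance
    nll = 0.5 * n * (np.log(2 * np.pi * sigma2) + 1)  # total NLL at MLE
    d = degree + 2                                 # coefficients + variance
    return nll + 0.5 * d * np.log(n)

bic_linear, bic_quintic = bic(1), bic(5)
```

With the complexity penalty included, the simpler model should win (`bic_linear < bic_quintic`) even though the quintic fits the training data slightly better.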
The problem with regular learning theory is that deriving the BIC invokes the inverse, $I^{-1}(w^{(0)})$, of the information matrix. If $I(w^{(0)})$ is noninvertible, then the BIC and all the generalization results that depend on it are invalid.
As it turns out, information matrices are pretty much never invertible for deep neural networks. So, we have to rethink our theory.
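Here's a miniature illustration of why (my own toy example): for the one-unit ReLU "network" $f(x) = b\,\mathrm{ReLU}(a x)$, the gradient $(\partial f/\partial a, \partial f/\partial b)$ is parallel to $(b, a)$ at every input, so the Fisher matrix (proportional to $\mathbb E[\nabla f\, \nabla f^\top]$ under Gaussian noise) has rank 1 and is not invertible.

```python
# For f(x) = b * relu(a * x), grad f = (b*x, a*x) * 1[a*x > 0], which is
# always parallel to (b, a). So E[grad f grad f^T] -- the Fisher matrix
# up to the noise scale -- is rank 1: singular, hence non-invertible.
import numpy as np

a, b = 1.3, -0.7
rng = np.random.default_rng(0)
xs = rng.normal(size=10_000)

def grad_f(x):
    active = float(a * x > 0)                           # ReLU gate
    return np.array([b * x * active, max(a * x, 0.0)])  # (df/da, df/db)

fisher = np.mean([np.outer(grad_f(x), grad_f(x)) for x in xs], axis=0)
eigvals = np.linalg.eigvalsh(fisher)

assert abs(eigvals[0]) < 1e-10   # zero eigenvalue: not invertible
assert eigvals[1] > 1e-3         # but not the zero matrix either
```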
Singularities in the context of Algebraic Geometry
For an analytic function $K : W \to \mathbb R$, a point $x \in W$ is a critical point of $K$ if the gradient vanishes there, $\nabla K(x) = 0$. A singularity is a critical point at which $K$ is also zero, $K(x) = 0$.
Under these definitions, any true parameter $w^*$ is a singularity of the KL-divergence: $K(w^*)=0$ follows from the definition of $w^*$, and $\nabla K(w^*) = 0$ follows from the lower bound, $K(w) \geq 0$ (a smooth function cannot have a nonzero gradient at a point where it attains its minimum).
So another advantage of the KL-divergence over the NLL is that it gives us a cleaner lower bound, under which every true parameter $w^*$ is a singularity.
We are interested in degenerate singularities — singularities that occupy a common manifold. For degenerate singularities, there is some continuous change to $w^*$ which leaves $K(w^*)$ unchanged. That is, the surface is not locally parabolic.
Nondegenerate singularities are locally parabolic. Degenerate singularities are not.
In terms of $K$, this means that the Hessian at the singularity has at least one zero eigenvalue (equivalently, it is non-invertible). For the KL-divergence, the Hessian at a true parameter is precisely the Fisher information matrix we just saw.
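A toy loss makes this concrete (my example, not from the text): $K(w_1, w_2) = w_1^2 w_2^2$ vanishes on the whole cross $\{w_1 = 0\} \cup \{w_2 = 0\}$, and its Hessian on that set has a zero eigenvalue.

```python
# K(w1, w2) = (w1 * w2)^2 has a degenerate singularity set: the two
# coordinate axes. At a point on the line w2 = 0, the Hessian has a
# zero eigenvalue along the flat direction, so it is non-invertible.
import numpy as np

def K(w1, w2):
    return (w1 * w2) ** 2

def hessian(w1, w2, h=1e-4):
    # Central finite differences for the 2x2 Hessian.
    def d2(i, j, w):
        e = np.eye(2) * h
        return (K(*(w + e[i] + e[j])) - K(*(w + e[i] - e[j]))
                - K(*(w - e[i] + e[j])) + K(*(w - e[i] - e[j]))) / (4 * h * h)
    w = np.array([w1, w2], dtype=float)
    return np.array([[d2(i, j, w) for j in range(2)] for i in range(2)])

H = hessian(1.0, 0.0)            # a point on the singular line w2 = 0
eigs = np.linalg.eigvalsh(H)
assert abs(eigs[0]) < 1e-6       # flat (degenerate) direction
assert eigs[1] > 1.0             # curved direction (2 * w1^2 = 2)
```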
Generic symmetries of NNs
Neural networks are full of symmetries that let you change the model's internals without changing the overall computation. This is where our degenerate singularities come from.
The most obvious symmetry is that you can permute weights without changing the overall computation. Given any two compatible linear transformations, $A$ and $B$ (i.e., weight matrices), an elementwise activation function, $\phi$, and any permutation matrix, $P$,

$$B\, \phi(A x) = (B P^{-1})\, \phi((P A)\, x),$$
because permutations commute with $\phi$. The nonlinearity of $\phi$ means this isn't the case for invertible transformations in general.
An example using the identity function for $\phi$.
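A quick numeric check of the permutation symmetry (my own sketch, with $A$, $B$, $P$, and $\phi$ as above):

```python
# Permuting hidden units (rows of A) and undoing the permutation in the
# columns of B leaves the computation B . phi(A . x) unchanged, because
# permutations commute with the elementwise nonlinearity phi.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3))          # first weight matrix
B = rng.normal(size=(2, 4))          # second weight matrix
x = rng.normal(size=3)
phi = lambda z: np.maximum(z, 0)     # elementwise activation (ReLU here)

P = np.eye(4)[rng.permutation(4)]    # permutation matrix; P^-1 = P^T
original = B @ phi(A @ x)
permuted = (B @ P.T) @ phi((P @ A) @ x)

assert np.allclose(original, permuted)
```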
At least for this post, we'll ignore this symmetry as it is discrete, and we're interested in continuous symmetries that can give us degenerate singularities.
A more promising continuous symmetry is the following (for models that use ReLUs): for any $\alpha > 0$,

$$B\, \mathrm{ReLU}(A x) = \left(\frac{1}{\alpha} B\right) \mathrm{ReLU}((\alpha A)\, x),$$

since $\mathrm{ReLU}(\alpha z) = \alpha\, \mathrm{ReLU}(z)$ elementwise.
For a ReLU layer, you can continuously scale the incoming preactivation as long as you inversely scale the outgoing activation. Since this symmetry is present over the entire parameter space, $W$, nowhere in the weight space is safe from degeneracy.
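A numeric sketch (mine) of this rescaling symmetry:

```python
# relu(alpha * z) = alpha * relu(z) for alpha > 0, so scaling the
# incoming weights by alpha and the outgoing weights by 1/alpha leaves
# the output unchanged, for every alpha and every weight setting.
import numpy as np

relu = lambda z: np.maximum(z, 0)
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3))
B = rng.normal(size=(2, 4))
x = rng.normal(size=3)

for alpha in [0.1, 1.0, 7.3]:
    assert np.allclose((B / alpha) @ relu(alpha * (A @ x)),
                       B @ relu(A @ x))
```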
As an exercise for the reader, can you think of any other generic symmetries in deep neural networks?^{3}
Both of these symmetries are generic. They're an aftereffect of the architecture choice, and are always active. The more interesting symmetries are non-generic symmetries — those that depend on $w$.
Non-Generic Symmetries
The key observation of singular learning theory for neural networks is that neural networks can vary their effective parameter count.
Real Log Canonical Threshold (RLCT)
Zeta function of $K(w)$:

$$\zeta(z) = \int_W K(w)^z\, \phi(w)\, \mathrm dw,$$

where $\forall w \in W: \phi(w)>0$.
Analytically continue this to the whole complex plane; the continuation is meromorphic, and its largest (least negative) pole, $z = -\lambda$, gives the RLCT, $\lambda$ (with multiplicity $m$).
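A small worked example (my own, not from the source) makes this concrete. Take $D = 1$, $K(w) = w^{2k}$ on $W = [-1, 1]$ with uniform prior $\phi(w) = \frac{1}{2}$:

$$\zeta(z) = \int_{-1}^{1} \left(w^{2k}\right)^{z} \frac{1}{2}\, \mathrm dw = \frac{1}{2kz + 1},$$

which has a single pole at $z = -\frac{1}{2k}$, so $\lambda = \frac{1}{2k}$. For $k = 1$ (a regular, quadratic minimum), this recovers $\lambda = \frac{1}{2} = \frac{D}{2}$; more degenerate minima ($k > 1$) have smaller RLCTs.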
Missing good image of what is going on here. Yes, the pole is the location of a singularity ($\zeta(z) \to \infty$) and its multiplicity is the order of the polynomial you need to approximate the local behavior of the corresponding zero of $\zeta^{-1}$. But how do I actually interpret $z$?
Real log canonical threshold ($\lambda$)
 The RLCT is the volume codimension (the number of effective parameters near the most singular point of $W_0$).
 For regular (nonsingular) models, the RLCT is precisely $D/2$
 Why divide by two?
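To see the "volume codimension" reading numerically, here's a Monte Carlo sketch (my own construction): estimate the exponent $\lambda$ in $\mathrm{vol}\{w : K(w) < \epsilon\} \sim \epsilon^\lambda$ for a regular two-parameter loss versus one with a line of minima.

```python
# Near the minimum, vol{w : K(w) < eps} ~ eps^lambda. A regular
# 2-parameter loss has lambda = D/2 = 1; a loss that ignores one
# direction (a whole line of minima) has lambda = 1/2, i.e. fewer
# effective parameters.
import numpy as np

rng = np.random.default_rng(0)
w = rng.uniform(-1, 1, size=(2_000_000, 2))

def lam_estimate(K_vals, e1=1e-3, e2=1e-2):
    # Fit the local scaling exponent from two volume measurements.
    v1 = np.mean(K_vals < e1)
    v2 = np.mean(K_vals < e2)
    return np.log(v2 / v1) / np.log(e2 / e1)

lam_regular = lam_estimate(w[:, 0] ** 2 + w[:, 1] ** 2)  # K = w1^2 + w2^2
lam_singular = lam_estimate(w[:, 0] ** 2)                # K = w1^2 (ignores w2)

assert abs(lam_regular - 1.0) < 0.1
assert abs(lam_singular - 0.5) < 0.1
```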
Orientation-reversing symmetries
Using ReLUs, we can imagine our network performing a kind of piecewise-linear, high-dimensional spline approximation of the target function. In higher dimensions, the linear pieces are bounded by hyperplanes (the activation boundaries).
Each activation boundary is described by a linear equation in the parameters (the set where the preactivation sums to zero). Such an equation has an orientation: reversing the orientation (negating the parameters) yields the same boundary.
Degenerate nodes
In addition to these symmetries, when the model has more hidden nodes than the true model, the excess nodes are either degenerate or share their activation boundary with another node.
NOTE: I'm still confused here.
INSERT comment on equivariance (link Distill article)
Glossary
 If these parameters exist at all (i.e., this set is nonempty, and there is some choice of weights $w_0$ for which $K(w_0) = 0$), we say our model is realizable. We'll assume this is the case from now on.
 When every choice of weights corresponds to a unique model (i.e., the map $w \mapsto p(y|x, w)$ is injective), we say our model is identifiable.
 If a model is identifiable and its Fisher information matrix is positive definite, then the model is regular. Otherwise, the model is strictly singular.
Singular learning theory kicks into gear when our models are singular. In terms of the Hessian of $K$: if the Hessian is strictly positive definite everywhere (no zero eigenvalues for any $w$), then an identifiable model is called regular; a non-regular model is called strictly singular.
Example: Pendulum
Our objects of study are triples of the form $(p(y|x,w),\ q(y|x),\ \phi(w))$: a model, $p(y|x, w)$,^{1} of some unknown true distribution, $q(y|x)$, with a prior over the weights, $\phi(w)$.
The model itself is a regression model on $f$:

$$p(y \mid x, w) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2}\, \|y - f(x, w)\|^2\right).$$
We have some true distribution $q(y|x)$ and a model $p(y|x, w)$ parameterized by $w$. Our aim is to learn the weights so as to capture the true distribution, and we assume $q$ is realizable.
Let's start with an example to understand how learning changes when your models are singular.
You have a pendulum (of unit length, in the small-angle regime) with some initial angular displacement $x_0$ and velocity $v_0$. Newton tells us that at time $t$, it'll be in position

$$x_t = x_0 \cos(\sqrt{g}\, t) + \frac{v_0}{\sqrt{g}} \sin(\sqrt{g}\, t).$$
The problem is that we live in the real world. Our measurement of $x_t$ is noisy:

$$y_t = x_t + \epsilon, \qquad \epsilon \sim \mathcal N(0, \sigma^2).$$
What we'd like to do is to learn the "true" parameters $(x_0^*, v_0^*, g^*, t^*)$ from our observations of $x_t$. That gives us a problem: for any $\alpha > 0$, the following two parameter settings output the same value:

$$(x_0,\ v_0,\ g,\ t) \quad \text{and} \quad \left(x_0,\ \alpha v_0,\ \alpha^2 g,\ \frac{t}{\alpha}\right).$$
That is: our map from parameters to models is noninjective. Multiple parameters determine the same model. We call these models strictly singular.

Aim: Modeling a pendulum given a noisy estimate of $(x, y)$ at time $t$.

The parameters of our model are $(\lambda, g, t)$ (initial velocity, gravitational acceleration, time of measurement), and the model is:

$$f(\lambda, g, t) = \frac{\lambda}{\sqrt{g}} \sin\left(\sqrt{g}\, t\right).$$
 The map from parameters to models is noninjective. That is, the function, $f$, is exactly the same after a suitable mapping (like $g \to 4g$, $\lambda \to 2 \lambda$, $t \to t/2$).
 You can reparameterize this model to get rid of the degeneracy ($\lambda' = \lambda/\sqrt{g}$, $t' = \sqrt g \cdot t$):

$$f(\lambda', t') = \lambda' \sin(t').$$
 But that may actually make the parameters less useful to reason about, and, in general, may make the "true" model harder to find.
(If you look at the $t$-$\lambda$ plane, you get straight-line level sets; same for the $t$-$\sqrt g$ plane.)
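A quick numeric check of this degeneracy (mine; it assumes the pendulum model takes the form $f(\lambda, g, t) = \frac{\lambda}{\sqrt g}\sin(\sqrt g\, t)$, consistent with the reparameterization above):

```python
# Assumed model form: f(lambda, g, t) = (lambda / sqrt(g)) * sin(sqrt(g) * t).
# The rescaling lambda -> alpha*lambda, g -> alpha^2*g, t -> t/alpha leaves
# f unchanged for every alpha > 0, so the parameter-to-model map is
# non-injective.
import numpy as np

def f(lam, g, t):
    return lam / np.sqrt(g) * np.sin(np.sqrt(g) * t)

lam, g = 0.8, 9.8
ts = np.linspace(0.0, 3.0, 100)
for alpha in [0.5, 2.0, 3.0]:
    assert np.allclose(f(lam, g, ts),
                       f(alpha * lam, alpha**2 * g, ts / alpha))
```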
Relation to Hessian of $K(w)$ (KL-divergence between a parameter and the true model)
…TODO
Connection to basin broadness
…TODO
Emphasize this early on.
Question
 Does SGD actually reach these solutions? For a given loss, if we are uniformly distributed across all weights with that loss, we should end up in simpler solutions, right? Does this actually happen though?
 Is part of the value of depth that it creates more ReLU-like symmetries? Can you create equally successful shallow, wide models if you hardcode additional symmetries?
Footnotes

E.g.: that explicit regularization enforces simpler solutions (weight decay is a Gaussian prior over weights), that SGD settles in broader basins that are more robust to changes in parameters (=new samples), that NNs have Solomonofflike inductive biases [1], or that highly correlated weight matrices act as implicit regularizers [2]. ↩ ↩^{2}

This is just a secondorder Taylor approximation modified for probability distributions. That is, the Fisher information matrix gives you the curvature of the negative log likelihood: it tells you how many bits you gain (=how much less likely your dataset becomes) as you move away from the minimum in parameter space. ↩

Hint: normalization layers, the encoding/unencoding layer of transformers / anywhere else without a privileged basis. ↩