Singular Learning Theory
NOTE: Work in Progress
Regular learning theory is lying to you: "overparametrized" models actually aren't overparametrized, and generalization is not just a question of broad basins.
The standard explanation for neural networks is that gradient descent settles in flat basins of the loss function. On the left, in a sharp minimum, the updates bounce the model around. Performance will vary wildly with new examples. On the right, in a flat minimum, the updates settle to zero. Performance is stable under small perturbations.
That's because loss basins actually aren't basins but valleys, and at the base of these valleys lie manifolds of constant, minimum loss. The higher the dimension of these "rivers", the lower the effective dimensionality of your model. Generalization is a balance between expressivity (more effective parameters) and simplicity (fewer effective parameters).
Singular directions lower the effective dimensionality of your model. In this example, a line of degenerate points effectively restricts the two-dimensional loss surface to one dimension.
These manifolds correspond to the internal symmetries of NNs: continuous variations of a given network that perform the same calculation. Many of these symmetries are predetermined by the architecture and so are always present. We call these "generic". The more interesting symmetries are non-generic symmetries, which the model can form or break during training.
In this light, part of the power of NNs is that they can vary their effective dimensionality (and thus their expressivity). Generality comes from a kind of "forgetting" in which the model throws out unnecessary dimensions. At the risk of being elegance-sniped, SLT seems like a promising route to develop a better understanding of training dynamics (and phenomena such as sharp left turns and path-dependence). If we're lucky, SLT may even enable us to construct a grand unified theory of scaling.
A lot still needs to be done (esp. in terms of linking the Bayesian presentation of singular learning theory to conventional machine learning), but, from an initial survey, singular learning theory feels meatier than other explanations of generalization.1 So let me introduce you to the basics…
Maximum likelihood estimation is KL-divergence minimization.
We're aiming for a shallow introduction of questionable rigor. For full detail, I recommend Carroll's MSc thesis here (whose notation I am adopting).
The setting is Bayesian, so we'll start by translating the setup of a "standard" regression problem into more appropriate Bayesian language.
We have some true distribution $q(x)$ and some model $p(x|w)$ parametrized by weights, $w \in W \subseteq \mathbb{R}^d$. Our aim is to learn the weights that make $p(x|w)$ as "close" as possible to $q(x)$.
Given a dataset $D_n = \{x_1, \dots, x_n\}$, frequentist learning is usually formulated in terms of the empirical likelihood of our data (which assumes that each sample is i.i.d.):

$$p(D_n|w) = \prod_{i=1}^n p(x_i|w).$$
The aim of learning is to find the weights that maximize this likelihood (hence "maximum likelihood estimator"):

$$\hat{w} = \operatorname*{arg\,max}_{w \in W}\ p(D_n|w).$$
That is: we want to find the weights which make our observations as likely as possible.
In practice, because sums are easier than products and because we like our bits to be positive, we end up trying to minimize the negative log likelihood instead of the vanilla likelihood. That is, we're minimizing average bits of information rather than maximizing probabilities:

$$L_n(w) = -\frac{1}{n} \sum_{i=1}^n \log p(x_i|w).$$
If we define the empirical entropy, $S_n$, of the true distribution,

$$S_n = -\frac{1}{n} \sum_{i=1}^n \log q(x_i),$$

then, since $S_n$ is independent of $w$, we find that minimizing $L_n(w)$ is equivalent to minimizing the empirical Kullback-Leibler divergence, $K_n(w)$, between our model and the true distribution:

$$K_n(w) = \frac{1}{n} \sum_{i=1}^n \log \frac{q(x_i)}{p(x_i|w)} = L_n(w) - S_n.$$
So maximizing the likelihood is not just some half-assed frequentist heuristic. It's actually an attempt to minimize the most straightforward information-theoretic "distance" between the true distribution and our model.
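As a quick sanity check, here's a sketch in NumPy (a toy Bernoulli setup assumed purely for illustration) showing that the NLL and the empirical KL divergence differ only by a $w$-independent constant, so they share a minimizer:

```python
import numpy as np

# Toy sketch (assumed setup): a Bernoulli model p(x|w) fit to coin flips
# from a true distribution q with heads-probability 0.7. We check that
# L_n(w) and K_n(w) = L_n(w) - S_n share the same minimizer.
rng = np.random.default_rng(0)
data = rng.random(10_000) < 0.7          # i.i.d. draws from q

grid = np.linspace(0.01, 0.99, 99)       # candidate weights w

def nll(w):
    # L_n(w): average surprisal of the data under p(x|w)
    return -np.mean(np.where(data, np.log(w), np.log(1 - w)))

S_n = nll(0.7)                           # empirical entropy of the true distribution
L_n = np.array([nll(w) for w in grid])
K_n = L_n - S_n                          # empirical KL divergence

w_mle = grid[np.argmin(L_n)]
assert w_mle == grid[np.argmin(K_n)]     # same minimizer
```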
The advantage of working with the KL-divergence is that it's bounded from below: $K(w) \geq 0$, with equality iff $p(x|w) = q(x)$ almost everywhere.
In this frame, our learning task is not simply to minimize the KL-divergence, but to find the true parameters:

$$W_0 = \{w \in W : K(w) = 0\}.$$
Note that it is not necessarily the case that a set of true parameters actually exists. If your model is insufficiently expressive, then the true model need not be realizable: your best fit may have some non-zero KL-divergence.
Still, from the perspective of generalization, it makes more sense to talk about true parameters than simply the KL-divergence-minimizing parameters. It's the true parameters that give us perfect generalization (in the limit of infinite data).
The Bayesian Information Criterion is a lie.
One of the main strengths of the Bayesian frame is that it lets you enforce a prior, $\varphi(w)$, over the weights, which you can integrate out to derive a parameter-free model:

$$p(D_n) = \int_W p(D_n|w)\, \varphi(w)\, dw.$$
One of the main weaknesses is that this integral is almost always intractable. So Bayesians make a concession to the frequentists with a much more tractable Laplace approximation (i.e., you approximate your model as quadratic/Gaussian in the vicinity of the maximum likelihood estimator (MLE), $\hat{w}$):2

$$p(D_n) \approx p(D_n|\hat{w})\, \varphi(\hat{w}) \left(\frac{2\pi}{n}\right)^{d/2} \left(\det I(\hat{w})\right)^{-1/2},$$
where $I(w)$ is the Fisher information matrix:

$$I(w)_{jk} = \mathbb{E}_{x \sim p(x|w)}\!\left[\frac{\partial \log p(x|w)}{\partial w_j}\, \frac{\partial \log p(x|w)}{\partial w_k}\right].$$
The Laplace approximation is a probability theorist's Taylor approximation.
From this approximation, a bit more math gives us the Bayesian information criterion (BIC):

$$\mathrm{BIC} = n L_n(\hat{w}) + \frac{d}{2} \log n.$$
The BIC (like the related Akaike information criterion) is a criterion for model selection that penalizes complexity. Given two models, the one with the lower BIC tends to overfit less (/"generalize better").
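A sketch of the criterion in action (a toy Gaussian-mean comparison assumed for illustration, using the convention $\mathrm{BIC} = n L_n(\hat{w}) + \frac{d}{2}\log n$ with $L_n$ the average negative log likelihood):

```python
import numpy as np

# Toy model comparison (assumed setup): data from N(1, 1).
# Model A (d = 1): fit the mean.  Model B (d = 0): mean fixed at 0.
rng = np.random.default_rng(0)
n = 500
data = rng.normal(loc=1.0, scale=1.0, size=n)

def avg_nll(mu):
    # average negative log likelihood under N(mu, 1)
    return 0.5 * np.log(2 * np.pi) + 0.5 * np.mean((data - mu) ** 2)

bic_a = n * avg_nll(data.mean()) + (1 / 2) * np.log(n)  # pays a complexity penalty
bic_b = n * avg_nll(0.0)                                # d = 0: no penalty

# The penalty is worth paying here: model A fits far better.
assert bic_a < bic_b
```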
The problem with regular learning theory is that deriving the BIC invokes the inverse, $I(\hat{w})^{-1}$, of the information matrix. If $I(\hat{w})$ is non-invertible, then the BIC and all the generalization results that depend on it are invalid.
As it turns out, information matrices are pretty much never invertible for deep neural networks. So, we have to rethink our theory.
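Here's a minimal numerical illustration (a toy one-hidden-unit ReLU model, assumed for the sake of example) of a degenerate information matrix: the loss is minimized along the whole curve $ab = 1$, so the Hessian there has a zero eigenvalue and cannot be inverted.

```python
import numpy as np

# Toy sketch (assumed setup): a one-hidden-unit ReLU "network"
# y = b * relu(a * x) fit to data generated by a * b = 1. The scaling
# symmetry (a, b) -> (alpha*a, b/alpha) makes the minimum a curve {ab = 1},
# so the Hessian of the loss there has a zero eigenvalue.
x = np.linspace(0.1, 1.0, 50)           # positive inputs keep the ReLU active
y = x                                    # true model: a = b = 1

def loss(a, b):
    return np.mean((b * np.maximum(a * x, 0.0) - y) ** 2)

def hessian(f, w, eps=1e-4):
    # central finite-difference Hessian of f at w
    w = np.asarray(w, dtype=float)
    H = np.zeros((len(w), len(w)))
    for i in range(len(w)):
        for j in range(len(w)):
            e_i, e_j = np.eye(len(w))[i] * eps, np.eye(len(w))[j] * eps
            H[i, j] = (f(*(w + e_i + e_j)) - f(*(w + e_i - e_j))
                       - f(*(w - e_i + e_j)) + f(*(w - e_i - e_j))) / (4 * eps**2)
    return H

eigvals = np.linalg.eigvalsh(hessian(loss, [1.0, 1.0]))
# One direction is flat (along the valley ab = 1), one is curved (across it).
assert abs(eigvals[0]) < 1e-4 and eigvals[1] > 0.1
```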
Singularities in the context of Algebraic Geometry
For an analytic function $f: W \to \mathbb{R}$, a point $w^*$ is a critical point of $f$ if its gradient vanishes there, $\nabla f(w^*) = 0$. A singularity is a critical point at which the function is also zero, $f(w^*) = 0$.
Under these definitions, any true parameter $w^{(0)} \in W_0$ is a singularity of the KL divergence: $K(w^{(0)}) = 0$ follows from the definition of $W_0$, and $\nabla K(w^{(0)}) = 0$ follows from the lower bound, $K(w) \geq 0$.
So another advantage of the KL divergence over the NLL is that it gives us a cleaner lower bound, under which $w^{(0)}$ is a singularity of $K$ for any true parameter $w^{(0)} \in W_0$.
We are interested in degenerate singularities — singularities that occupy a common manifold. For a degenerate singularity, there is some continuous change to $w$ which leaves $K(w)$ unchanged. That is, the surface is not locally parabolic.
Non-degenerate singularities are locally parabolic. Degenerate singularities are not.
In terms of $K$, this means that the Hessian, $\nabla^2 K(w^*)$, at the singularity has at least one zero eigenvalue (equivalently, it is non-invertible). For the KL-divergence, the Hessian at a true parameter is precisely the Fisher information matrix we just saw.
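To make "degenerate" concrete, here's a standard toy example (my own illustration, not part of the original derivation): take $K(w_1, w_2) = w_1^2 w_2^2$. Then

$$\nabla K = \left(2 w_1 w_2^2,\ 2 w_1^2 w_2\right), \qquad W_0 = \{w_1 = 0\} \cup \{w_2 = 0\},$$

so every point on the two axes is a singularity, and the Hessian

$$\nabla^2 K = \begin{pmatrix} 2 w_2^2 & 4 w_1 w_2 \\ 4 w_1 w_2 & 2 w_1^2 \end{pmatrix}$$

has a zero eigenvalue everywhere on $W_0$ (at the origin it vanishes entirely): a one-dimensional "river" of degenerate singularities.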
Generic symmetries of NNs
Neural networks are full of symmetries that let you change the model's internals without changing the overall computation. This is where our degenerate singularities come from.
The most obvious symmetry is that you can permute weights without changing the overall computation. Given any two compatible linear transformations, $A$ and $B$ (i.e., weight matrices), an element-wise activation function, $\phi$, and any permutation matrix, $P$,

$$B\,\phi(Ax) = (BP^{-1})\,\phi(PAx),$$

because permutations commute with $\phi$: $\phi(Px) = P\,\phi(x)$. The non-linearity of $\phi$ means this isn't the case for invertible transformations in general.
An example using the identity function for $\phi$.
At least for this post, we'll ignore this symmetry as it is discrete, and we're interested in continuous symmetries that can give us degenerate singularities.
A more promising continuous symmetry is the following (for models that use ReLUs): for any $\alpha > 0$,

$$B\,\mathrm{ReLU}(Ax) = \left(\frac{1}{\alpha} B\right) \mathrm{ReLU}(\alpha A x).$$

For a ReLU layer, you can continuously scale the incoming pre-activation as long as you inversely scale the outgoing activation. Since this symmetry is present over the entire parameter space, $W$, nowhere in the weight space is safe from degeneracy.
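Both symmetries are easy to verify numerically. A minimal sketch (random weights and a made-up layer shape, assumed for illustration):

```python
import numpy as np

# Sketch (assumed toy setup): numerically check the two generic symmetries
# of a one-hidden-layer network f(x) = B @ relu(A @ x).
rng = np.random.default_rng(0)
A, B = rng.normal(size=(5, 3)), rng.normal(size=(2, 5))
x = rng.normal(size=3)
relu = lambda z: np.maximum(z, 0.0)

f = B @ relu(A @ x)

# 1. Permutation symmetry: permute hidden units, un-permute the next layer.
P = np.eye(5)[rng.permutation(5)]
f_perm = (B @ P.T) @ relu(P @ A @ x)

# 2. ReLU scaling symmetry: scale pre-activations up, scale outputs down.
alpha = 3.7
f_scale = (B / alpha) @ relu(alpha * A @ x)

assert np.allclose(f, f_perm) and np.allclose(f, f_scale)
```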
As an exercise for the reader, can you think of any other generic symmetries in deep neural networks?3
Both of these symmetries are generic. They're an after-effect of the architecture choice, and are always active. The more interesting symmetries are non-generic symmetries — those that depend on the weights $w$.
The key observation of singular learning theory for neural networks is that neural networks can vary their effective parameter count.
Real Log Canonical Threshold (RLCT)
Zeta function of $K$:

$$\zeta(z) = \int_W K(w)^z\, \varphi(w)\, dw$$

Analytically continue this to the whole complex plane with a Laurent expansion; then the first (largest) pole, $z = -\lambda$, gives the RLCT, $\lambda$.
Missing good image of what is going on here. Yes, the pole is the location of a singularity ($z = -\lambda$) and its multiplicity is the order of the polynomial you need to approximate the local behavior of the corresponding zero of $K$. But how do I actually interpret $\lambda$?
Real log canonical threshold ($\lambda$)
- The RLCT is the volume co-dimension (the number of effective parameters near the most singular point of $W_0$).
- For regular (non-singular) models, the RLCT is precisely $\lambda = d/2$.
- Why divide by two?
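A one-dimensional sketch of the computation (a standard example, assuming a uniform prior on $[-1, 1]$): for $K(w) = w^{2k}$,

$$\zeta(z) = \int_{-1}^{1} \left(w^{2k}\right)^z \tfrac{1}{2}\, dw = \int_0^1 w^{2kz}\, dw = \frac{1}{2kz + 1},$$

which has a single pole at $z = -\frac{1}{2k}$, so $\lambda = \frac{1}{2k}$. In the regular case $k = 1$, this recovers $\lambda = \frac{d}{2} = \frac{1}{2}$; the more degenerate the zero (larger $k$), the smaller the RLCT.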
Using ReLUs, we can imagine our network performing a kind of piece-wise, high-dimensional spline approximation of the input function. For higher dimensions, we're looking at constant hypersurfaces.
The intersections of these surfaces are described by a linear equation of the parameters that sum to zero. That is, there's an orientation. If we reverse this orientation, we get the same line.
In addition to these symmetries, when the model has more hidden nodes than the true model, the excess nodes are either degenerate or share an activation boundary with another node.
NOTE: I'm still confused here.
INSERT comment on equivariance (link Distill article)
- If these parameters exist at all (i.e., the set $W_0$ is non-empty, and there is some choice of weights for which $p(x|w) = q(x)$), we say our model is realizable. We'll assume this is the case from now on.
- When every choice of weights corresponds to a unique model (i.e., the map $w \mapsto p(x|w)$ is injective), we say our model is identifiable.
- If a model is identifiable and its Fisher information matrix is positive definite, then the model is regular. Otherwise, the model is strictly singular.
Singular learning theory kicks into gear when our models are singular — when the true parameters are degenerate singularities of $K$.
If the Hessian is always strictly positive definite (it has no zero eigenvalues for any $w$), then an identifiable model is called regular. A non-regular model is called strictly singular.
Our objects of study are triples of the kind $(p, q, \varphi)$, where $p(x|w)$ is a model of some unknown true distribution, $q(x)$, with a prior over the weights, $\varphi(w)$.
The model itself is a regression model on $x$:

$$y = f(x, w) + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2).$$
We have some probability distribution $q(x)$ and a model $p(x|w)$ parameterized by $w \in W$. Our aim is to learn the weights so as to capture the true distribution, and we assume $q$ is realizable.
Let's start with an example to understand how learning changes when your models are singular.
You have a pendulum with some initial angular displacement $x_0$ and velocity $v_0$. Newton tells us that (in the small-angle, unit-length regime) at time $t$, it'll be in position

$$x(t) = x_0 \cos\!\left(\sqrt{g}\, t\right) + \frac{v_0}{\sqrt{g}} \sin\!\left(\sqrt{g}\, t\right).$$
The problem is that we live in the real world. Our measurement of $x$ is noisy:

$$y = x(t) + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2).$$
What we'd like to do is to learn the "true" parameters $(v_0, g, t)$ from our observations $y$. That gives us a problem: for any set of true parameters and any $\alpha > 0$, the following would output the same value:

$$\left(\frac{v_0}{\alpha},\ \frac{g}{\alpha^2},\ \alpha t\right).$$
That is: our map from parameters to models is non-injective. Multiple parameters determine the same model. We call these models strictly singular.
Aim: Modeling a pendulum given a noisy estimate $y$ of its position $x$ at time $t$.
The parameters of our model are $w = (v_0, g, t)$ (initial velocity, gravitational acceleration, time of measurement), and the model is:

$$f(w) = x_0 \cos\!\left(\sqrt{g}\, t\right) + \frac{v_0}{\sqrt{g}} \sin\!\left(\sqrt{g}\, t\right).$$
- The map from parameters to models is non-injective. That is, the function, $f(w)$, is exactly the same after a suitable mapping (like $v_0 \to v_0/\alpha$, $g \to g/\alpha^2$, $t \to \alpha t$).
- You can reparameterize this model to get rid of the degeneracy ($\omega := \sqrt{g}\, t$, $u := v_0/\sqrt{g}$):

$$f = x_0 \cos \omega + u \sin \omega.$$
- But that may actually make the parameters less useful to reason about, and, in general, may make the "true" model harder to find.
(If you look at the $(v_0, g)$ plane, you get straight-line level sets; same for the $(g, t)$ plane).
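A quick numerical check of this non-injectivity (toy parameter values assumed):

```python
import numpy as np

# Sketch (assumed small-angle, unit-length model):
# x(t) = x0*cos(sqrt(g)*t) + (v0/sqrt(g))*sin(sqrt(g)*t).
# Rescaling (v0, g, t) -> (v0/alpha, g/alpha^2, alpha*t) leaves x unchanged,
# so the map from parameters to models is non-injective.
def pendulum(x0, v0, g, t):
    w = np.sqrt(g)
    return x0 * np.cos(w * t) + (v0 / w) * np.sin(w * t)

x0, v0, g, t = 0.1, 0.3, 9.8, 1.7
for alpha in (0.5, 2.0, 3.7):
    assert np.isclose(pendulum(x0, v0, g, t),
                      pendulum(x0, v0 / alpha, g / alpha**2, alpha * t))
```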
Relation to Hessian of $K(w)$ (KL-divergence between a parameter and the true model)
Connection to basin broadness
Emphasize this early on.
- Does SGD actually reach these solutions? For a given loss, if we are uniformly distributed across all weights with that loss, we should end up in simpler solutions, right? Does this actually happen though?
- Is part of the value of depth that you create more ReLU-like symmetries? Can you create equally successful shallow, wide models if you hardcode additional symmetries?
E.g.: that explicit regularization enforces simpler solutions (weight decay is a Gaussian prior over weights), that SGD settles in broader basins that are more robust to changes in parameters (=new samples), that NNs have Solomonoff-like inductive biases, or that highly correlated weight matrices act as implicit regularizers. ↩ ↩2
This is just a second-order Taylor approximation modified for probability distributions. That is, the Fisher information matrix gives you the curvature of the negative log likelihood: it tells you how many bits you gain (=how much less likely your dataset becomes) as you move away from the minimum in parameter space. ↩
Hint: normalization layers, the encoding/unencoding layer of transformers / anywhere else without a privileged basis. ↩