Feed

2022-Q3

A lot has changed for me in the past month. My partner and I decided to close the business we had started together, and I've thrown myself full-force at AI safety.

We weren't seeing the traction we needed, I was nearing the edge of burnout (web development is not the thing for me1), and, at the end of the day, I did not care enough about our users. It's hard to stay motivated to help a few patients today when you think there's a considerable risk that the world might end tomorrow. And I think the world might end soon — not tomorrow, but more likely than not in the next few decades.2 Eventually, I reached the point where I could no longer look away, and I had to do something.

So I reached out to the 80,000 Hours team, who connected me with people studying AI safety in my area and helped me apply to the ███████ ████ ██████████ for a six-month, $25,000 upskilling grant to kickstart my transition into AI.

Now, I'm not a novice (my Bachelor's and Master's theses applied techniques from statistical physics to understand neural networks), but I could definitely use the time to refresh & catch up on the latest techniques. A year is a long time in AI.

Besides "upskilling" in ML proper, I need time to dive deep into AI safety: there's overlap with the conventional ML literature, but there's also a lot of unfamiliar material.

Finally, I need time to brush up my CV and prepare to apply to AI labs and research groups. My current guess is that I'll be best suited to empirical/interpretability research, which I think is likely to be compute-constrained, so working at a larger lab is crucial. That's not to mention the benefits of working alongside people smarter than you are. Unfortunately (for me), the field is competitive, and a "gap year" in an unrelated field after your master's is likely to be perceived as a weakness. There's a signaling game at hand, and it's play or be played. In sum, spending time on intangibles like "networking" and tangibles like "publications"3 will be a must.

To keep myself focused, I'll be keeping track of my goals and progress here. To start, let's take a look at my current plan for the next half year.

Learning Plan

Like all good plans, this one consists of three parts:

  1. Mathematics/Theory of ML
  2. Implementation/Practice of ML
  3. AI Safety

There's also an overarching theme of "community-building" (i.e., attending EAGs and other events in the space) and of "publishing".

Resources

Textbooks

  • Mathematics for Machine Learning by Deisenroth, Faisal, and Ong (2020).
    • I was told that this book is most valuable for its first half, but I'm ready to consume it in full.
  • Pattern Recognition and Machine Learning by Bishop (2006)
    • I was advised to focus on chapters 1-5 and 9, but I'm aiming to at least skim the entirety.
  • Cracking the Coding Interview by McDowell (2015)
    • One specification I'm going to have to game is the interview. I'm also taking this as an opportunity to master Rust, as I think having a solid understanding of low-level systems programming is going to be an important enabler when working with large models.

ML/DL Courses

There are a bunch more, but these are the only ones I'm currently committing to finishing. The rest can serve as supplementary material after.

AI Safety Courses

Miscellaneous

Publishing

I'm not particularly concerned about publishing in prestigious journals, but getting content out there will definitely help. Most immediately, I'm aiming to adapt my Master's thesis for an AI safety/interpretability audience. I'm intrigued by the possibility that perspectives like the Lyapunov spectrum can help us enforce constraints like "forgetfulness" (which may be a stronger condition than myopia), analyze the path-dependence of training, and detect sensitivity to adversarial attacks and improbable inputs; that random matrix theory might offer novel ways to analyze the dynamics of training; and, more generally, that statistical physics is an un(der)tapped source of interpretability insight.

In some of these cases, I think it's likely that I can come to original results within the next half year. I'm going to avoid overcommitting to any particular direction just yet, as I'm sure my questions will get sharper with my depth in the field.

Beyond this, I'm reaching out to several researchers in the field and offering myself up as a research monkey. I trust that insiders will have better ideas than I can yet form, but not enough resources to execute them (in particular, I'm thinking of PhD students), and that if I make myself useful, karma will follow.

Timeline

Over the next three months, my priority is input — to complete the textbooks and courses mentioned above (which means taking notes, making flashcards, doing exercises). Over the subsequent three months, my priority is output — to publish & apply.

Of course, this is a simplification; research is a continuous process: I'll start producing output before the first three months are up, and I'll keep absorbing plenty of input after they end. Still, heuristics are useful.

I'll be checking in here on a monthly basis — reviewing my progress over the previous month & updating my goals for the next. Let's get the show on the road.

Month 1 (October)

Highlights

References

Footnotes

  1. At least not as a full-time occupation. I like creating things, but I also like actually using my brain, and too much of web development is mindless twiddling (even post-Copilot).

  2. More on why I think this soon.

  3. Whether in formal journals or informal blogs.

  4. I'm including less formal / "easier" sources because I need some fallback fodder (for when my brain can no longer handle the harder stuff) that isn't Twitter or Hacker News.

No, human brains are not more efficient than computers

Epistemic status: grain of salt. There's lots of uncertainty in how many FLOP/s the brain can perform.

In informal debate, I've regularly heard people say something like, "oh but brains are so much more efficient than computers" (followed by a variant of "so we shouldn't worry about AGI yet"). Putting aside the weakly argued AGI skepticism, brains actually aren't all that much more efficient than computers (at least not in any way that matters).

The first problem is that these people are usually comparing the energy requirements of training large AI models to the power requirements of running the normal waking brain. These two things don't even have the same units (energy is measured in joules; power in watts).

The only fair comparison is between the trained model and the waking brain or between training the model and training the brain. Training the brain is called evolution, and evolution isn't particularly known for its efficiency.

Let's start with the easier comparison: a trained model vs. a trained brain. Joseph Carlsmith estimates that the brain delivers roughly 1 petaFLOP/s ($=10^{15}$ floating-point operations per second)1. If you eat a normal diet, you're expending roughly $10^{-13}$ J/FLOP.

Meanwhile, the supercomputer Fugaku delivers 450 petaFLOP/s at 30 MW, which comes out to about $7 \times 10^{-11}$ J/FLOP… So I was wrong? Computers require roughly 500 times more energy per FLOP than humans?
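
For the curious, here's the back-of-the-envelope arithmetic behind those two numbers (round point estimates; the Squiggle snippets in the appendix redo this with uncertainty ranges):

$$
\text{Human: } \frac{\sim 9{,}000\ \text{kJ/day}}{86{,}400\ \text{s/day}} \approx 104\ \text{W}, \qquad \frac{104\ \text{W}}{10^{15}\ \text{FLOP/s}} \approx 10^{-13}\ \text{J/FLOP}
$$

$$
\text{Fugaku: } \frac{3\times 10^{7}\ \text{W}}{4.5\times 10^{17}\ \text{FLOP/s}} \approx 6.7\times 10^{-11}\ \text{J/FLOP}
$$

The point estimates give a ratio of roughly 650; with the appendix's uncertainty ranges, the median lands closer to 500.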

$\frac{\text{Supercomputer J/FLOP}}{\text{Human J/FLOP}}$

Pasted image 20220906142829.png

What this misses is an important practical point: supercomputers can tap pretty much directly into sunshine; human food calories are heavily-processed hand-me-downs. We outsource most of our digestion to mother nature and daddy industry.

Even the most whole-foods-grow-your-own-garden vegan is 2-3 orders of magnitude less efficient at capturing calories from sunlight than your average device2. That's before animal products, industrial processing, or any of the other Joules it takes to run a modern human.
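
Roughly (see footnote 2 and the `humanFoodEfficiency` / `computerEfficiency` snippets in the appendix): the plant-based path captures about $0.01 \times 0.1 = 10^{-3}$ of incoming sunlight as usable calories (photosynthesis, then one trophic step), while a solar-powered computer captures about $0.2 \times 0.9 \approx 0.18$ as electricity (panel efficiency, then transmission losses) — a gap of a bit over two orders of magnitude before animal products or processing enter the picture.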

After this correction, humans and computers are about head-to-head in energy/FLOP, and it's only getting worse for us humans. The fact that the brain runs on so little actual juice suggests there's plenty of room left for us to explore specialized architectures, but it isn't the damning case many think it is. (We're already seeing early neuromorphic chips out-perform neurons' efficiency by four orders of magnitude.)

$\frac{\text{Electronic efficiency}}{\text{Biological efficiency}}$

Pasted image 20220906143040.png

But what about training neural networks? Now that we know the energy costs per FLOP are about equal, all we have to do is compare FLOPs required to evolve brains to the FLOPs required to train AI models. Easy, right?

Here's how we'll estimate this:

  1. For a given, state-of-the-art NN (e.g., GPT-3, PaLM), determine how many FLOP/s it performs when running normally.
  2. Find a real-world brain which performs a similar number of FLOP/s.
  3. Determine how long that real-world brain took to evolve.
  4. Compare the number of FLOPs (not FLOP/s) performed during that period to the number of FLOPs required to train the given AI.

Fortunately, we can piggyback off the great work done by Ajeya Cotra on forecasting "Transformative" AI. She calculates that GPT-3 performs about $10^{12}$ FLOP/s3, or about as much as a bee.

Going off Wikipedia, social insects evolved only about 150 million years ago. Counting the compute performed by the whole ancestral population from the first neurons (roughly a billion years ago) up to that point, this translates to between $10^{38}$ and $10^{44}$ FLOPs. GPT-3, meanwhile, took about $10^{23.5}$ FLOPs to train. That means evolution is $10^{15}$ to $10^{22}$ times less efficient.
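
As a rough midpoint, using the central values from the `evolution` snippet in the appendix (everything in log10): $\sim 10^{21}$ ancestors $\times$ $\sim 10^{4}$ FLOP/s each $\times$ $\sim 10^{7.5}$ s/year $\times$ $\sim 10^{8.9}$ years $\approx 10^{41}$ FLOPs — squarely inside the $10^{38}$–$10^{44}$ range, and about $10^{17.5}$ times GPT-3's training compute.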

$\log_{10}\left(\text{total FLOPs to evolve bee brains}\right)$

Pasted image 20220906143416.png

Now, you may have some objections. You may consider bees to be significantly more impressive than GPT-3. You may want to select a reference animal that evolved earlier in time. You may want to compare unadjusted energy needs. You may even point out the fact that the Chinchilla results suggest GPT-3 was "significantly undertrained".

Object all you want, and you still won't be able to explain away the >15 OOM gap between evolution and gradient descent. This is no competition.

What about other metrics besides energy and power? Consider that computers are about 10 million times faster than human brains. Or that if the human brain can store a petabyte of data, S3 can do so for about $20,000 (2022). Even FLOP for FLOP, supercomputers already underprice humans.4 There's less and less for us to brag about.
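
To put rough numbers on that last claim (using the same figures as footnote 4 and the appendix): the brain delivers $\sim 10^{15}$ FLOP/s at a statistical-life price of $\sim\$7.5$ million, or about $\$7.5 \times 10^{-9}$ per FLOP/s, while Fugaku's $\sim\$1$ billion buys $4.5 \times 10^{17}$ FLOP/s, or about $\$2.2 \times 10^{-9}$ per FLOP/s — roughly a quarter to a third of the human price.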

$\frac{\$/(\text{Human FLOP/s})}{\$/(\text{Supercomputer FLOP/s})}$

Pasted image 20220906143916.png

Brains are not magic. They're messy wetware, and hardware ~~will catch up~~ has caught up.

Postscript: brains actually might be magic. Carlsmith assigns less than 10% (but non-zero) probability that the brain computes more than $10^{21}$ FLOP/s. In this case, brains would currently still be vastly more efficient, and we'd have to update in favor of additional theoretical breakthroughs being needed before AGI.

If we include the uncertainty in brain FLOP/s, the graph looks more like this:

$\frac{\text{Supercomputer J/FLOP}}{\text{Human J/FLOP}}$

Pasted image 20220906150914.png

(With a mean of ~$10^{19}$ and a median of 830.)

Appendix

Squiggle snippets used to generate the above graphs (used in conjunction with obsidian-squiggle).

brainEnergyPerFlop = {
	humanBrainFlops = 15; // log10(FLOP/s); swap in `10 to 23` for the uncertain version (median 15; P(>21) < 10%)
	humanBrainFracEnergy = 0.2; // unused below — the calculation charges the body's entire intake to the brain
	humanEnergyPerDay = 8000 to 10000; // daily kJ consumption
	humanBrainPower = humanEnergyPerDay / (60 * 60 * 24); // kJ/day ÷ s/day = kW (whole-body power, really)
	humanBrainPower * 1000 / (10 ^ humanBrainFlops) // W per (FLOP/s) = J/FLOP
}

supercomputerEnergyPerFlop = {
	// https://www.top500.org/system/179807/
	power = 25e6 to 30e6; // W (Fugaku draws ~30 MW)
	flops = 450e15 to 550e15; // FLOP/s
	power / flops // J/FLOP
}

supercomputerEnergyPerFlop / brainEnergyPerFlop // ratio shown in the first plot (~500)

humanFoodEfficiency = {
	photosynthesisEfficiency = 0.001 to 0.03 // sunlight -> edible plant calories
	trophicEfficiency = 0.1 to 0.15 // energy lost going up one trophic level (plants -> you)
	photosynthesisEfficiency * trophicEfficiency
}

computerEfficiency = {
	solarEfficiency = 0.15 to 0.20 // sunlight -> electricity at the panel
	transmissionEfficiency = 1 - (0.08 to 0.15) // grid transmission losses
	solarEfficiency * transmissionEfficiency
}

computerEfficiency / humanFoodEfficiency // ratio shown in the second plot (2-3 OOM)

evolution = {
	// Based on Ajeya Cotra's "Forecasting TAI with biological anchors".
	// All calculations are in log10 space.

	secInYear = log10(365 * 24 * 60 * 60);

	// We assume that the average ancestor pop. FLOP per year is ~constant.
	// (cf. humans, in log10: brain FLOP/s exponent ~10 to 20, population exponent ~7 to 10)
	ancestorsAveragePop = uniform(19, 23); // Tomasik estimates ~1e21 nematodes
	ancestorsAverageBrainFlops = 2 to 6; // ~ C. elegans-scale nervous systems
	ancestorsFlopPerYear = ancestorsAveragePop + ancestorsAverageBrainFlops + secInYear;

	years = log10(850e6); // 1 billion years ago to 150 million years ago
	ancestorsFlopPerYear + years // log10(total FLOPs to evolve bee brains)
}

humanLife$ = 1e6 to 10e6 // value of a statistical life, $ (FEMA: ~$7.5M)
humanBrainFlops = 1e15 // FLOP/s
humanBrain$PerFlops = humanLife$ / humanBrainFlops // $ per (FLOP/s)

supercomputer$ = 1e9 // Fugaku's approximate price tag, $
supercomputerFlops = 450e15 // FLOP/s
supercomputer$PerFlops = supercomputer$ / supercomputerFlops // $ per (FLOP/s)

supercomputer$PerFlops / humanBrain$PerFlops // ratio shown in the last plot

References

Footnotes

  1. Watch out for FLOP/s (floating point operations per second) vs. FLOPs (floating point operations). I'm sorry for the potential confusion, but FLOPs usually reads better than FLOP.

  2. Photosynthesis has an efficiency around 1%, and jumping up a trophic level means another order of magnitude drop. The most efficient solar panels have above 20% efficiency, and electricity transmission loss is around 10%.

  3. Technically, it's FLOP per "subjective second" — i.e., a second of equivalent natural thought. This can be faster or slower than "true thought."

  4. Compare FEMA's value of a statistical life at $7.5 million to the $1 billion price tag of the Fugaku supercomputer, and we come out to the supercomputer being a fourth the cost per FLOP/s.

Rationalia starter pack

LessWrong has gotten big over the years: 31,260 posts, 299 sequences, and more than 120,000 users.1 It has budded offshoots like the alignment and EA forums and earned itself recognition as a "cult". Wonderful!

There is a dark side to this success: as the canon grows, it becomes harder to absorb newcomers (like myself).2 I imagine this was the motivation for the recently launched "highlights from the sequences".

To make it easier on newcomers (veterans, you're also welcome to join in), I've created an Obsidian starter-kit for taking notes on the LessWrong core curriculum (the Sequences, the Codex, HPMOR, the best-of collections, concepts, various jargon, and other odds and ends).

There's built-in support to export notes & definitions to Anki, goodies for tracking your progress through the notes, useful metadata/linking, and pretty visualizations of rationality space…

vault-graph.png

It's not perfect — I'll be doing a lot of fine-tuning as I work my way through all the content — but there should be enough in place that you can find some value. I'd love to hear your feedback, and if you're interested in contributing, please reach out! I'll also soon be adding support for the Alignment Forum (AF) and the EA Forum (EAF).

More generally, I'd love to hear your suggestions for new aspiring rationalists. For example, there was a round of users proposing alternative reading orders about a decade ago (by Academian, jimrandomh, and XiXiDu), which may be worth revisiting in 2022.

References

Footnotes

  1. From what I can tell using the graphql endpoint.

  2. Already a decade ago, jimrandomh was worrying about LW's intimidation factor — we're now about an order of magnitude ahead.

My Personality

Most personality tests are bullshit. Even the Big-5 are a bit overhyped. Take it from the experts:

"Personality scales tend to show longterm retest correlations from .30 to .80 over intervals of up to 30 years." [1]

".30 to .80" sounds good until you remember that even the upper limit means the first test score explains only about 64% of the variance in later test scores. At the median retest correlation of .57, almost 70% of your personality is explained by something other than your continuity of existence. Granted, these numbers are great by the standards of psychology, but they're rather dismal for any substantive field.

As for the rest: Myers-Briggs, Enneagram, and RIASEC... it's total nonsense. It's still fun — maybe even useful as a vague suggestion of behavioral flavor frozen in time — but ultimately such hogwash that it raises the question, "why take the time to publish this?"

The Big 5+1

aka: "OCEAN", "HEXACO"

Myers-Briggs

ENTJ-A (Commander)

Enneagram

Holland Types

aka: "RIASEC"

References

  1. Costa, Paul T., and Robert R. McCrae. “Personality Stability and Its Implications for Clinical Psychology.” Clinical Psychology Review, Special Issue Personality Assessment in the 80’s: Issues and Advances, 6, no. 5 (January 1, 1986): 407–23. https://doi.org/10.1016/0272-7358(86)90029-2.

2022-Q2

I fell off the wagon.

This was supposed to be the year of the quantified self. I set out to track every minute of my time across several dozen goals in thirteen categories.

The effort began strong. For three months I was off social media, exercising super consistently, timing nearly everything, and on track toward my goals.

Then... something happened.

Between April and July, the tracking vanished. I returned to my vices, and if anything, I became more distractible than I've been in ages. Not a second of tracking, and all hopes of achieving my goals went in the trash.

What happened?

My leading theory is that it was moving-related. In February, I moved to Brasília for two months. In April, I moved back to the US, and by June, I was back in the Netherlands.

Brasília was in many ways a delight: great weather, amazing fruit, a sauna and gym two minutes from my door, and everything extremely affordable. Other things were less than great: the internet speeds (at least at first), my workstation (I appropriated a low-res TV for a monitor on a minuscule kitchen table), no AC.

These things seem minor, but they add up over time. If it takes 60 seconds to install a new package, you open up Hacker News and end up wasting five minutes. Each five-minute session chips away at your attentional capacity. The frustration builds and burns you out. DX (developer experience) matters.

Still, mostly I was on track.

What probably caused the discipline to falter was the disruption of coming back. Moves are great opportunities to change behaviors, but this works in either direction, and the asymmetry of habit formation means you have to be extra careful.

When you're moving a bunch in a short period of time, you have to be even more careful, because ego depletion1 comes into play: you exhaust your willpower and become more susceptible to developing bad habits with each successive move.

Seasoned digital nomads probably have their tricks to get around this, but that's not me yet. Lesson learned.

A few other problems at play:

  • My Obsidian vault has again become a disordered wreck. I keep trying to impose fragile top-down hierarchies on my notes, and it ends up breaking everything.
  • In a related vein, I've come to the conclusion that using Obsidian for both task management and knowledge management is bad practice. Tasks should vanish when done; if they linger, they muck up your access to the more important persistent knowledge. I've moved to trying out Linear instead.
  • Tracking was much too manual. I was tracking in Obsidian, which proved too unstructured (a similar concern to "not using Obsidian for task management"), so I moved on to Google Sheets, which is a nightmare (as you know). This time, I'm going to give Airtable a shot (which takes inspiration & validation from the professionals).
  • At the start of the year, I redesigned my website because using Next.js for a static site was overkill, but then I went too far in the opposite direction (towards raw, uncut HTML). The problem is that regularly publishing is the best way for me to orient my review and tracking processes; when the output process becomes too unergonomic, it clogs up the rest of the pipeline. I'm now using Astro with a custom, simplified export pipeline (a successor to my previous solution & a set of plugins to recreate Obsidian-flavored markdown in the Unified.js ecosystem). This isn't public yet, but it will be when I've ironed out the kinks.

Whatever the reason, it's in the past, and every day is a chance to start fresh.

Let's try again

We've still got basically half a year left. What can we recover?

  1. 🛑 No more scrolling (YouTube, Reddit, Porn, etc.):
    • Right. That failed miserably — I even ended up caving and finally getting on Twitter. 🤷‍♂️ I still agree with the intent, but I can't deny the value of staying up to date with Hacker News, tech Twitter, and edu-YouTube.
    • My idea of a solution was to use Inoreader. The problem there was that it filled up much too quickly and that much of the content was low quality. This time around, I need to be more diligent about removing feeds that don't serve me.
    • I'm going to try again. This time, I'll allow myself a dash of Hacker News a day — call it part of the job requirements. YouTube I'll get through Inoreader, and the rest, hopefully never. (I may relax this further and allow myself some maximum amount of time per day on these trash platforms.)
  2. 🚪 Screen time:
    • I'm scrapping the limit for desktop (because programming is my job).
    • The main obstacle to actually tracking this was that I was manually copying the information every week. This is a perfect opportunity for automation (there is fortunately an API, but you have to call it on device). Until I get access to the data, I'm not going to require myself to track this, but the goal stands: Less than an hour a day as a baseline; less than two as a stretch.
  3. Self-monitoring:
    • Toggl was easy and intuitive; my main obstacle was that I had defined too many different projects and types of tasks.
    • Time to simplify: Only three projects (work, personal, misc). Only a handful of allowed labels: programming, reading, watching, wasting time, organizing, meeting, writing.
    • Also, no more manually copying stuff over — I'm a programmer and should know better. Same goes for the Apple Health information about exercise.
  4. 📚 Books (1 book per week):
    • I'm a bit behind schedule — 19 books behind, in fact. But that's OK; there's plenty of time to catch up. That said, I am scrapping all of the specific goals like "read X books in French" or "Y books by this author". I'll just read what I want to read.
  5. 🗃 PKM:
    • I'm going to remove all specific goals and just commit to regular maintenance.
  6. ✍️ Writing:
    • I haven't been writing, but I have plenty of room to catch up with my goal of 6 articles.
    • I missed M4-M7 & Q1, but whatever. For the rest of the year, I'm forbidding myself from including any quantitative result in my reviews that I haven't automated.
  7. 🗣 Languages:
    • I'm scrapping this goal. It was too ambitious from the start. I do want to catch up again, but I have one or two tools I want to finish up before I actually start learning Chinese.
    • My main goal in this category is to just catch up on Anki again & to have no overdue cards in any of my principal decks (General, French, Portuguese, Dutch). Stretch goal if I can work in Italian and German.
  8. 🏃 Moving
    • Subjectively, I'm happy enough with my movement. I'm going to avoid setting quantitative goals until I've automated the information capture.
  9. 🍽 Fasting
    • When I fell off the productivity wagon, I also fell off the IF (intermittent fasting) wagon for the first time in 5 years (but I'm back again).
    • This has fallen by the wayside but it's totally recoverable. I'm going to start committing to one day (Monday) a week for the rest of the year.
  10. 🌏 Diet:
    • I was tracking meat & alcohol consumption. In hindsight, it required a bit too much input. I'm going to drop this until next year.
  11. 👓 Myopia:
    • The initial progress I've made seems to have been undone by staring at the computer screen for ungodly amounts of time. We'll fix this at some future point.
  12. 👥 Relationships:
    • Mentorship & community: I'd actually say that I've achieved these goals though not in the way originally envisioned. I've found my mentors in the right software development streamers & my community in the right discords. 2022, eh? I'm crossing this off as completed.
  13. 💰 Money:
    • We moved back to the (comparatively) cheap Netherlands, I got a side job for about one day a week, and we're golden. It's a lot easier if you decide you don't have to live in the US.

References

Footnotes

  1. I've read this has been somewhat debunked, so take it with the proper grain of salt.