No, human brains are not more efficient than computers

Epistemic status: grain of salt. There's lots of uncertainty in how many FLOP/s the brain can perform.

In informal debate, I've regularly heard people say something like, "oh but brains are so much more efficient than computers" (followed by a variant of "so we shouldn't worry about AGI yet"). Putting aside the weakly argued AGI skepticism, brains actually aren't all that much more efficient than computers (at least not in any way that matters).

The first problem is that these people are usually comparing the energy required to train large AI models with the power required to run the normal waking brain. These two quantities don't even have the same units: one is measured in joules, the other in watts.

The only fair comparison is between the trained model and the waking brain or between training the model and training the brain. Training the brain is called evolution, and evolution isn't particularly known for its efficiency.

Let's start with the easier comparison: a trained model vs. a trained brain. Joseph Carlsmith estimates that the brain delivers roughly 1 petaFLOP/s ($=10^{15}$ floating-point operations per second).1 If you eat a normal diet, you're expending roughly $10^{-13}$ J/FLOP.

Meanwhile, the supercomputer Fugaku delivers $450$ petaFLOP/s at $30$ MW, which comes out to about $10^{-11.5}$ J/FLOP… So I was wrong? Computers require almost $500$ times more energy per FLOP than humans?
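Here's a back-of-the-envelope version of this calculation in Python (mine, not from the post), using point estimates: a ~9,000 kJ/day diet, Carlsmith's median brain estimate of $10^{15}$ FLOP/s, and Fugaku at 450 petaFLOP/s on 30 MW. The exact multiple depends on which inputs you pick, but it lands in the same few-hundred-fold ballpark.

```python
# Back-of-the-envelope: energy per FLOP, brain vs. supercomputer.
# Point estimates only; the post's graphs use full distributions instead.

diet_j_per_day = 9_000e3        # ~9,000 kJ/day, a normal diet
brain_flops = 1e15              # Carlsmith's median estimate, FLOP/s
seconds_per_day = 24 * 60 * 60

brain_j_per_flop = (diet_j_per_day / seconds_per_day) / brain_flops
# ≈ 1e-13 J/FLOP

fugaku_watts = 30e6             # ~30 MW power draw
fugaku_flops = 450e15           # ~450 petaFLOP/s
fugaku_j_per_flop = fugaku_watts / fugaku_flops
# ≈ 7e-11 J/FLOP

ratio = fugaku_j_per_flop / brain_j_per_flop
print(f"brain:  {brain_j_per_flop:.1e} J/FLOP")
print(f"Fugaku: {fugaku_j_per_flop:.1e} J/FLOP")
print(f"Fugaku uses ~{ratio:.0f}x more energy per FLOP")
```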

$$\frac{\text{Supercomputer J}/\text{FLOP}}{\text{Human J}/\text{FLOP}}$$

![[Pasted image 20220906142829.png]]

What this misses is an important practical point: supercomputers can tap pretty much directly into sunshine; human food calories are heavily-processed hand-me-downs. We outsource most of our digestion to mother nature and daddy industry.

Even the most whole-foods-grow-your-own-garden vegan is 2-3 orders of magnitude less efficient at capturing calories from sunlight than your average device.2 That's before animal products, industrial processing, or any of the other Joules it takes to run a modern human.

After this correction, humans and computers are about head-to-head in energy/FLOP, and it's only getting worse for us humans. The fact that the brain runs on so little actual juice suggests there's plenty of room left to explore specialized architectures, but it isn't the damning case many think it is. (We're already seeing early neuromorphic chips exceed neurons' energy efficiency by four orders of magnitude.)
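The sunlight-capture correction can be sketched the same way. These point estimates are mine, taken from the footnote's figures: ~1% photosynthesis efficiency, ~10% surviving a trophic jump, ~20% solar panels, ~10% grid loss.

```python
# Sunlight -> usable energy: human food chain vs. solar panels + grid.
photosynthesis = 0.01      # ~1% of sunlight becomes plant calories
trophic = 0.10             # ~10% survives each jump up a trophic level
human_efficiency = photosynthesis * trophic    # best-case vegan

solar_panel = 0.20         # top panels exceed 20% efficiency
transmission = 0.90        # ~10% transmission loss
computer_efficiency = solar_panel * transmission

advantage = computer_efficiency / human_efficiency
print(f"human food chain: {human_efficiency:.3f}")
print(f"solar + grid:     {computer_efficiency:.2f}")
print(f"computers capture sunlight ~{advantage:.0f}x more efficiently")
```

With these inputs the gap is a bit over two orders of magnitude, consistent with the 2-3 OOM range above.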

$$\frac{\text{Electronic efficiency}}{\text{Biological efficiency}}$$

![[Pasted image 20220906143040.png]]

But what about training neural networks? Now that we know the energy costs per FLOP are about equal, all we have to do is compare FLOPs required to evolve brains to the FLOPs required to train AI models. Easy, right?

Here's how we'll estimate this:

  1. For a given, state-of-the-art NN (e.g., GPT-3, PaLM), determine how many FLOP/s it performs when running normally.
  2. Find a real-world brain which performs a similar number of FLOP/s.
  3. Determine how long that real-world brain took to evolve.
  4. Compare the number of FLOPs (not FLOP/s) performed during that period to the number of FLOPs required to train the given AI.

Fortunately, we can piggyback off the great work done by Ajeya Cotra on forecasting "Transformative" AI. She calculates that GPT-3 performs about $10^{12}$ FLOP/s,3 or about as much as a bee.

Going off Wikipedia, social insects evolved only about 150 million years ago. That translates to between $10^{38}$ and $10^{44}$ FLOPs. GPT-3, meanwhile, took about $10^{23.5}$ FLOPs to train. That means evolution is $10^{15}$ to $10^{22}$ times less efficient.
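Cotra-style, the estimate works in log10 space. Here's a rough point version (my sketch, using the same anchors as the Squiggle snippet at the end of the post: ~1e19-1e23 nematode-scale ancestors at ~1e2-1e6 FLOP/s each, over ~850 million years); it lands within an order of magnitude of the range quoted above.

```python
import math

# All quantities are log10. Ancestor population ~1e19 to 1e23 individuals,
# each running ~1e2 to 1e6 FLOP/s (roughly C. elegans scale).
log_sec_per_year = math.log10(365 * 24 * 60 * 60)   # ≈ 7.5
log_years = math.log10(850e6)                        # ≈ 8.9

lo = 19 + 2 + log_sec_per_year + log_years   # low end of evolution FLOPs
hi = 23 + 6 + log_sec_per_year + log_years   # high end

log_gpt3_train = 23.5                        # GPT-3 training FLOPs (log10)
print(f"evolution: 10^{lo:.1f} to 10^{hi:.1f} FLOPs")
print(f"gap vs GPT-3: 10^{lo - log_gpt3_train:.1f} to 10^{hi - log_gpt3_train:.1f}")
```

Even the low end leaves a gap of roughly 14-15 orders of magnitude over gradient descent.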

$$\log_{10}\left(\text{total FLOPs to evolve bee brains}\right)$$

![[Pasted image 20220906143416.png]]

Now, you may have some objections. You may consider bees to be significantly more impressive than GPT-3. You may want to select a reference animal that evolved earlier in time. You may want to compare unadjusted energy needs. You may even point out the fact that the Chinchilla results suggest GPT-3 was "significantly undertrained".

Object all you want, and you still won't be able to explain away the >$15$ OOM gap between evolution and gradient descent. This is no competition.

What about other metrics besides energy and power? Consider that computers are about 10 million times faster than human brains. Or that, if the human brain can store a petabyte of data, S3 can store the same for about $20,000 (2022). Even FLOP for FLOP, supercomputers already underprice humans.4 There's less and less for us to brag about.
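The price comparison from footnote 4, as a quick sketch (my point estimates: FEMA's ~$7.5M value of a statistical life, Fugaku's ~$1B price tag):

```python
# Dollars per (FLOP/s): a statistical human life vs. the Fugaku supercomputer.
human_price = 7.5e6          # FEMA's value of a statistical life, USD
human_flops = 1e15           # FLOP/s (Carlsmith's median)

fugaku_price = 1e9           # USD
fugaku_flops = 450e15        # FLOP/s

human_cost = human_price / human_flops        # $/(FLOP/s)
fugaku_cost = fugaku_price / fugaku_flops
print(f"human:  ${human_cost:.1e} per FLOP/s")
print(f"Fugaku: ${fugaku_cost:.1e} per FLOP/s")
print(f"the supercomputer is ~{human_cost / fugaku_cost:.1f}x cheaper per FLOP/s")
```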

$$\frac{\$/(\text{Human FLOP/s})}{\$/(\text{Supercomputer FLOP/s})}$$

![[Pasted image 20220906143916.png]]

Brains are not magic. They're messy wetware, and hardware ~~will catch up~~ has caught up.

Postscript: brains actually might be magic. Carlsmith assigns less than 10% (but non-zero) probability that the brain computes more than $10^{21}$ FLOP/s. In this case, brains would currently still be vastly more efficient, and we'd have to update in favor of additional theoretical breakthroughs before AGI.

If we include the uncertainty in brain FLOP/s, the graph looks more like this:

$$\frac{\text{Supercomputer J}/\text{FLOP}}{\text{Human J}/\text{FLOP}}$$

![[Pasted image 20220906150914.png]]

(With a mean of ~$10^{19}$ and a median of $830$.)


Squiggle snippets used to generate the above graphs (used in conjunction with obsidian-squiggle):

brainEnergyPerFlop = {
	humanBrainFlops = 15; // log10; 10 to 23; median 15; P(>21) < 10%
	humanBrainFracEnergy = 0.2; // fraction of bodily energy used by the brain (unused: we charge the brain for the whole diet, per the text)
	humanEnergyPerDay = 8000 to 10000; // daily kJ consumption
	humanBrainPower = humanEnergyPerDay / (60 * 60 * 24); // kW
	humanBrainPower * 1000 / (10 ^ humanBrainFlops) // J/FLOP
}

supercomputerEnergyPerFlop = {
	power = 25e6 to 30e6; // W
	flops = 450e15 to 550e15;
	power / flops
}

supercomputerEnergyPerFlop / brainEnergyPerFlop

humanFoodEfficiency = {
	photosynthesisEfficiency = 0.001 to 0.03;
	trophicEfficiency = 0.1 to 0.15;
	photosynthesisEfficiency * trophicEfficiency
}

computerEfficiency = {
	solarEfficiency = 0.15 to 0.20;
	transmissionEfficiency = 1 - (0.08 to 0.15);
	solarEfficiency * transmissionEfficiency
}

computerEfficiency / humanFoodEfficiency

evolution = {
	// Based on Ajeya Cotra's "Forecasting TAI with biological anchors"
	// All calculations are in log10 space.
	secInYear = log10(365 * 24 * 60 * 60);
	// We assume that the average ancestor pop. FLOP per year is ~constant.
	// cf. humans at 10 to 20 (log10 FLOP/s) & 7 to 10 (log10 population)
	ancestorsAveragePop = uniform(19, 23); // Tomasik estimates ~1e21 nematodes
	ancestorsAverageBrainFlops = 2 to 6; // ~ C. elegans
	ancestorsFlopPerYear = ancestorsAveragePop + ancestorsAverageBrainFlops + secInYear;
	years = log10(850e6); // 1 billion years ago to 150 million years ago
	ancestorsFlopPerYear + years
}

humanLife$ = 1e6 to 10e6
humanBrainFlops = 1e15
humanBrain$PerFlops = humanLife$ / humanBrainFlops

supercomputer$ = 1e9
supercomputerFlops = 450e15
supercomputer$PerFlop = supercomputer$ / supercomputerFlops




  1. Watch out for FLOP/s (floating-point operations per second) vs. FLOPs (floating-point operations). I'm sorry for the confusion, but "FLOPs" usually reads better than "FLOP".

  2. Photosynthesis has an efficiency around 1%, and jumping up a trophic level means another order of magnitude drop. The most efficient solar panels have above 20% efficiency, and electricity transmission loss is around 10%.

  3. Technically, it's FLOP per "subjective second", i.e., a second of equivalent natural thought. This can be faster or slower than a second of real thought.

  4. Compare FEMA's value of a statistical life at $7.5 million to the $1 billion price tag of the Fugaku supercomputer, and the supercomputer comes out to about a fourth the cost per FLOP/s.

Rationalia starter pack

LessWrong has gotten big over the years: 31,260 posts, 299 sequences, and more than 120,000 users.1 It has budded offshoots like the alignment and EA forums and earned itself recognition as a "cult". Wonderful!

There is a dark side to this success: as the canon grows, it becomes harder to absorb newcomers (like myself).2 I imagine this was the motivation for the recently launched "highlights from the sequences".

To make it easier on newcomers (veterans, you're also welcome to join in), I've created an Obsidian starter kit for taking notes on the LessWrong core curriculum (the Sequences, the Codex, HPMOR, best-of collections, concepts, various jargon, and other odds and ends).

There's built-in support to export notes & definitions to Anki, goodies for tracking your progress through the notes, useful metadata/linking, and pretty visualizations of rationality space…


It's not perfect — I'll be doing a lot of fine-tuning as I work my way through all the content — but there should be enough in place that you can find some value. I'd love to hear your feedback, and if you're interested in contributing, please reach out! I'll also soon be adding support for the AF and the EAF.

More generally, I'd love to hear your suggestions for new aspiring rationalists. For example, there was a round of users proposing alternative reading orders about a decade ago (by Academian, jimrandomh, and XiXiDu) that may be worth revisiting in 2022.



  1. From what I can tell using the graphql endpoint.

  2. Already a decade ago, jimrandomh was worrying about LW's intimidation factor — we're now about an order of magnitude ahead.

We need a taxonomy for principles

When you start collecting principles, a natural question arises: how to organize these principles? Clear organization is not just useful for quicker access but — when the collecting is crowd-sourced — critical to ensuring that the database of principles grows healthily and sustainably. We need a balance between the extremes of hairballs and orphan principles.

Now, there are books written on this subject, knowledge management (I promise, it's not nearly as dull (or settled) a subject as you might think). That said, one thing at a time. In this post, all I want to do is propose a few dimensions I think might be useful for classifying principles in the future.

Here they are:

  • Normative vs. Descriptive
  • Universal vs. Situational (or "First" and "Derived")
  • Deterministic vs. Stochastic

Normative and Descriptive

There's a big difference between principles that tell you how the world *is* and how it (or you) *should be*. The former are the domain of the traditional sciences. It's what we mean when we talk about principles and postulates in physics. The latter are the domain of decision theory/philosophy/etc.

There's a bridging principle between the two in that accomplishing any normative goals requires you to have an accurate descriptive view of how the world is. Still, in general, we can make a pretty clean break between these categories.

Universal and Situational ("First" and "Derived")

The universe looks different at different length scales: the discrete, quantum atoms at the Angström scale give rise to continuous, classical fluids at meter scales and might yet contain continuous strings at Planck lengths.

Physics gives us a formal way of linking the descriptive principles of one length scale to those of another—the Renormalization Group. This is a (meta-)principled approach to constructing "coarse-grained", higher-order principles out of base principles. In this way, the postulates of quantum gravity would give rise to those of classical mechanics, but also to those of chemistry, in turn biology, psychology, etc.

The same is true on the normative end. "Do no harm" can look very different in different situations, and the Golden Rule has more subtleties and gradations than I can count.

In general, the "first principles" in these chains of deduction tend to be more universal (they apply across a wider range of phenomena). Evolution doesn't just apply to biological systems but to any replicators, be they cultures, cancers, or memes.1

![[Final Project — Anthropology of Science and Tech through …|700]]

Deterministic and Stochastic

One of the main failure modes of a "principles-driven approach" is becoming overly rigid—seeing principles as ironclad laws that never change or break.

I believe one of the main reasons for this error is that we tend to think of principles as deterministic "rules". We tend to omit qualifiers like "usually", "sometimes", and "occasionally" from our principles because they sound weaker. But randomness plays an important role in both description (the quantum randomness of measurement, or the effective randomness of chaotic systems) and prescription (e.g., divination rituals may have evolved as randomizing devices to improve decision-making).

So we shouldn't shy away from statements like "play tit-for-tat with 5% leakiness"—nor from less precise statements like "avoid refined sugars, but, hey, it's okay to have a cheat day every once in a while because you also deserve to take it easy on yourself."

A Few Examples

Using these classifications, we can make more thorough sense of the initial set of Open Principles divisions:

"Generic"/"situational" principles and "mental models" are descriptive principles that differ in how universal they are. "Values" and "virtues" are universal normative principles with "habits" as their derived counterparts. "Biases" are a specific type of derived descriptive principle restricted to the domain of agents.

A few more examples:


Call to Action

A few things that might help us keep the Open Principles healthy:

  • Decide what not to include as a principle. Constraints can be wonderfully liberating.
  • Define and contrast terms like axioms, postulates, laws, hypotheses, heuristics, biases, fallacies, aphorisms, adages, maxims, platitudes, etc.
  • Read up on Knowledge Management. Wikipedia is an excellent starting point. In particular, I think we might benefit from a more faceted approach.
  • Vigorously disagree with everything I just wrote to start a bit of antifragilizing debate.

Cheers, Jesse



  1. This isn't always true: the real world is not very quantum mechanical. But it's probably a good enough starting point for now.

Introduction to Atomic Workflows

An ongoing trend in the tech-productivity space is productivity gurus sharing their workflows.


Superficially, these digital crib-tours act as a reference for an audience that wants to implement similar workflows.

The Expectations Are Too High

But the tours risk setting too high a target for the beginner. Rather than take these examples as inspiration, the beginner interprets them as instruction: "in order to be 'productive,' you have to use this tool in this way."

In their defense, these tours really can be a source of motivation and insight. But the workflows themselves are often too complex and time-intensive for the budding productivitist to copy exactly. And when the beginner sets too high a target, they are less likely to persist and realize a lasting routine.

We need a more structured approach to building workflows. In this article, I'll suggest an approach I call "atomic workflows" (after James Clear's Atomic Habits).

Let's take a step back. What are habits and what are workflows?

🗿 Habits are behavioral routines that usually operate subconsciously. In contrast with workflows, habits are behaviorally monolithic: they involve single (or very similar) actions with clear outcomes.


  • 🚲 You bike to the gym.
  • 📖 You read at night in bed.
  • 💅 You bite your nails raw.
  • 🚬 You smoke your lungs black.

🎡 Workflows also involve behavioral routines that may (or may not) operate subconsciously. What sets workflows apart from habits is that workflows are orchestrated collections of interdependent habits. They involve habits that would not function in isolation, and they are "orchestrated" in that workflows require the executive ability to choose the right habits at the right times.


  • 🗃 The Zettelkasten is a workflow for writing. Its habits include finding and reading content, taking and managing notes, and drafting and editing texts.
  • 📥 Getting Things Done (GTD) is a workflow for managing time. Its habits include adding tasks to the inbox, processing the inbox, and reviewing your progress.
  • 📈 Spaced Repetition Systems (SRS) are a workflow for memorizing. Its habits include acquiring content, forming questions, creating notes, and reviewing cards.
  • 🏷 Scrum (along with other agile frameworks) is a workflow for managing teams in product development. Its habits include meeting together, writing "stories", and managing time.

The asymmetry of habit-formation

Good habits are hard enough to develop as they are.

Because workflows involve multiple habits that can depend intricately on each other, good workflows are even harder to develop.

Clear gave us the answer to forming habits in Atomic Habits. His process combines first-principles thinking with the precision of a surgeon: (1) Strip a habit to its minimum set of activities, and (2) build it up from there.


  • 🚲 Gym: Start by regularly biking to the gym, but don't do anything else. Then, add two minutes of jumping jacks. Move on to a five minute core routine. Etc.
  • 📖 Reading: Read one page before lights out every night, then 2, then 4…

So too, Clear's insight offers the answer to forming workflows: "atomic workflows". This adds one additional starting step: (1) Strip a workflow to its minimum set of habits, (2) strip those habits to their minimum sets of activities, and (3) build them up from there.


  • Atomic Zettelkasten:
    • Taking literature notes: Unless you already have a strict note-taking practice, begin by taking literature notes in the margins of your texts. Disallow yourself literal quoting; this will force you to be sufficiently concise. It will also shorten the time you need before you write, which is the end goal and the source of feedback.
    • Taking permanent notes: Give yourself a consistent time in the day to add at least 5 notes.
    • Writing (drafting + editing): The most essential habit. Keep yourself to short blog articles so you keep the feedback present.
    • Organizing your notes: Imposing a top-level structure is non-essential. Defer this until later.
    • Only when this is running smoothly, expand your literature note-taking, spend more time pruning and organizing your Zettelkasten, and let a top-level structure emerge organically.
  • Atomic GTD:
    • Processing the inbox (daily): start with a simple daily to-do list on a sheet of scrap paper.
    • Managing an inbox: use the other side of your daily to-do list to note tasks for the next day.
    • Reviewing your progress: start with a daily transfer session.
    • Gradually level up to an additional weekly to-do list, then monthly, etc. At the same time, add some form of priority labels and time estimates.
  • Atomic SRS:
    • Reviewing cards: This is the most essential activity. Start with someone else's deck.
    • Gathering content: Find something simple like a list of vocabulary words and filter for the words you don't know.
    • Making cards: Start with simple uni- and bidirectional notes (containing only a pair of a word and a picture plus definition).
    • When this feels comfortable, explore making cloze cards and your own note templates. Then, consider larger projects (that require multiple kinds of notes), like learning a language.
  • Atomic Scrum:
    • Reviewing the sprint: Choose a simple retrospective structure and stick to it religiously (for a while).
    • Planning the sprint (writing cards): skip the user story (when you're early on in development, the acceptance criteria usually stand on their own), and skip the how-to-demo (HTD) (trusting that your founding team are all A-players).
    • Keeping each other up-to-date: Choose a time (or several) and get in the habit of standing up even if you have nothing pressing to share.
    • As complexity and manpower increase, add ideas like a user story and HTD back in. Introduce longer-term review sessions and new review formats. Adopt stricter asynchronous communication guidelines.


12. Be radically honest and transparent


"One sincere and honest move will cover over dozens of dishonest ones. Open-hearted gestures of honesty and generosity bring down the guard of even the most suspicious people. Once your selective honesty opens a hole in their armor, you can deceive and manipulate them at will. A timely gift—a Trojan horse—will serve the same purpose." — The 48 Laws of Power (12. Use selective honesty and generosity to disarm your victim)

In the upside-down world of the 48 Laws of Power, even honesty becomes a tool of the dishonest—just another weapon to deceive the insufficiently cynical. To prove his point, Greene shares how Count Victor Lustig conned Al Capone. One day, Lustig approached Capone with a dubious money-making proposal. Give him two months, Lustig promised, and he would double Capone's $50,000 investment. Smooth talker as he was, Lustig secured the money and promptly locked it in a safety box where it remained untouched for the full two months.

Lustig returned to Capone appearing contrite and humbled. The plan had failed, he admitted. But before Capone had time to flay Lustig alive (or to inflict whatever method he preferred for slow, torturous death), the con artist pulled out the original $50,000 and returned it penny for penny. Capone was shocked. Honest men did not cross his threshold often.

Lustig had correctly calculated that Capone would soften at a display of honesty. After returning the money, Lustig secured a smaller gift of $5,000—the true aim of his con all along.

It's a pretty story and apt anecdote. The only problem1 in viewing it as lesson material is that it assumes you want to spend time in the company of ruthless criminals. If you want to have a constructive impact on society, there are easier paths.

Creative power requires a different kind of honesty—not selective and Trojan but radical and unrelenting.

"I'm talking about a specific extra type of integrity that is not lying, but bending over backwards to show how you're maybe wrong, that you ought to have when acting as a scientist. And this is our responsibility as scientists, certainly to other scientists, and I think to laymen" — Richard Feynman

I'm talking about Feynman's kind of honesty—the radical honesty underpinning the scientific project that led us out of the dark ages. Whether you are a pollyannaish progress worshipper or a climate-fearing progress denier, we need that very same honesty to wage an effective climate dialogue—all that stands between us and a prompt return to the dark ages.

But there's no reason to restrict this boon to only scientists. More generally, radical honesty is our responsibility as citizens, colleagues, and children, as partners, parents, and people of the Earth. It builds more resilient, trusting, and happier companies and communities.

Observance of the Law #1

Perhaps the best example of this principle in practice is Bridgewater, one of the world's most successful investment management firms. When asked, founder Ray Dalio often cites "radical transparency" as the most important factor in his company's culture.

As Dalio writes in his Principles:

"Provide people with as much exposure as possible to what’s going on around them. Allowing people direct access lets them form their own views and greatly enhances accuracy and the pursuit of truth. Winston Churchill said, “There is no worse course in leadership than to hold out false hopes soon to be swept away.” The candid question-and-answer process allows people to probe your thinking. You can then modify your thinking to get at the best possible answer, reinforcing your confidence that you’re on the best possible path."

Radical transparency is not easy. Newcomers typically need 18 months to adjust to the new expectations, and many never complete the transition—turnover at Bridgewater is almost double the industry average. But those who remain, Dalio might say, are stronger for it. They have survived the hazing and excised their deceitful tendencies to become more productive coworkers and businesspeople.

Interpretation # 1

The main purpose of radical transparency at Bridgewater is to foster clearer communication. In our information age, conflict stems less frequently from resource scarcity—the usual cause in premodern societies—than from simple misunderstanding.

Large corporations like Bridgewater are liable to fracture into bureaucratically isolated strata. As a result, information ends up taking painfully convoluted paths to get from A to B. It's a game of telephone sure to corrupt the original message.

Which is why Elon Musk advocates against overly hierarchical company structures:

"A major source of issues is poor communication between depts. The way to solve this is allow free flow of information between all levels. If, in order to get something done between depts, an individual contributor has to talk to their manager, who talks to a director, who talks to a VP, who talks to another VP, who talks to a director, who talks to a manager, who talks to someone doing the actual work, then super dumb things will happen. It must be ok for people to talk directly and just make the right thing happen."

It's a straightforward consequence of the Information Inequality:

$$I(X) \geq I(f(X)).$$

The information, $I$, contained in a signal, $X$, is always at least the information contained in any function/modification of that signal, $f(X)$. By reducing the number of interlocutors, $f_i$, we can more tightly hug the upper bound:

I(X)I(f1(X))I(f2(f1(X))))I(f3(f2(f1(X))))).I(X) \geq I(f_1(X)) \geq I(f_2(f_1(X)))) \geq I(f_3(f_2(f_1(X)))))\geq \dots.

A different approach is to make the modification functions $f_i$ less lossy. Every additional filter we impose on the signal—our sense of what is decent, what will offend people, what is or is not relevant, etc.—makes us a worse conveyor of information. The solution, then, is to strip out as many filters as possible and make each $f_i$ as information-preserving as we can: i.e., to adopt radical transparency.2
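A tiny illustration of the inequality (my example, not from the post): entropy can only fall when a message passes through a deterministic filter. Here a uniform four-symbol source loses half a bit when a "polite filter" merges two of its symbols.

```python
import math
from collections import Counter

def entropy(symbols):
    """Shannon entropy (bits) of the empirical distribution of a sequence."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# X: a source uniform over four kinds of message -> 2 bits of entropy.
X = ["praise", "critique", "idea", "question"] * 100

# f: a polite filter that collapses anything uncomfortable into "fine".
def f(msg):
    return "fine" if msg in ("critique", "question") else msg

print(entropy(X))                   # 2.0 bits
print(entropy([f(x) for x in X]))   # 1.5 bits: information lost in transit
```

Stacking more filters (more interlocutors, more self-censorship) can only repeat this loss; it can never restore the missing half bit.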

“By lying, we deny others a view of the world as it is. Our dishonesty not only influences the choices they make, it often determines the choices they can make—and in ways we cannot always predict. Every lie is a direct assault upon the autonomy of those we lie to.” — Sam Harris

Observance #2

Another company celebrated for its culture of radical transparency is Netflix. Here's an excerpt from its famous culture deck:

“In most situations, both social and work, those who consistently say what they really think about people are quickly isolated and banished. We work hard to get people to give each other professional, constructive feedback—up, down and across the organization—on a continual basis. Leaders demonstrate that we are all fallible and open to feedback. People frequently ask others, ‘What could I be doing better?’ and themselves, ‘What feedback have I not yet shared?’”

Yet more concisely, co-CEO Reed Hastings wrote in a memo:

”You only say things about fellow employees you say to their face.”

This is also the company famous for its "Keeper Test"—managers regularly ask themselves which of their employees they would fight to keep if those employees were preparing to leave for another company. Anyone who doesn't make the cut is promptly fired with a considerable severance package. Better to buy off detractors and free up room for star players than slowly sink into the sunk-cost swamp. "Adequate performance gets a generous severance package."

Despite the continual risk of being fired and the at-times brutal peer reviews (or, who knows, maybe because of them), Netflix is consistently ranked as one of the tech world's favorite places to work. Maybe radical transparency works.

Interpretation #2

For Netflix, radical "candor" is less about clarity in communication than making room for personal growth and trust. Most of us tend towards stagnation because we surround ourselves with people unwilling to critique us. Perhaps it comes from a good place: our friends don't want to hurt us. Perhaps it comes from a more nefarious place: we self-select an entourage of yes-men to feel better about ourselves. Whatever the cause, the end result is complacency at best and spiritual death at worst. Honest third-party feedback is the fuel of personal development.

If for no other reason, we should strive towards radical honesty because honest people are more pleasant to be around. In the long run, one sincere and consistently honest person will outweigh a dozen pieces of potentially hurtful feedback.

“Honest people are a refuge: You know they mean what they say; you know they will not say one thing to your face and another behind your back; you know they will tell you when they think you have failed—and for this reason their praise cannot be mistaken for mere flattery.” — Sam Harris


With both Bridgewater and Netflix, the work culture probably isn't all that the press mythologizes it to be.

Bridgewater's radical transparency has a number of rather creepy corollaries. For one, almost every encounter is recorded on video, so it can potentially serve as training material in the future. This leads to a near-Orwellian surveillance state with "Truespeak" substituted for Orwell's original "Newspeak"—Big Brother butts in only when its subjects become too conventional and self-moderating.

The problem with this particular incarnation of radical transparency is that trust needs autonomy to flourish; constant surveillance breeds suspicion.

In addition, consider how rigidly adherence to Dalio's Principles is enforced:

Each day, employees are tested and graded on their knowledge of the Principles. They walk around with iPads loaded with the rules and an interactive rating system called “dots” to evaluate peers and supervisors. The ratings feed into each employee’s permanent record, called the “baseball card.”

Two dozen Principles “captains” are responsible for enforcing the rules. Another group, “overseers,” some of whom report to Mr. Dalio, monitor department heads.

Maybe this really is something you can get used to after 18 months of living it. But I'm inclined to think that this period serves more to select for those people already distrusting enough that they can tolerate a work culture so clearly inspired by the Stasi.

Another risk is that Bridgewater's notorious public condemnations are almost universally less effective than private feedback. However radically transparent you think your culture is, human nature is more receptive to feedback delivered in a small, private setting than in front of a tribe of coworkers.

Netflix's variety of radical candor can get near-cultish. In particular, the willingness to fire has led to a pervasive fear of dismissal. After asking a group of people how many of them feared being fired, Karen Barragan doubled down with the declaration that it was "[g]ood, because fear drives you."

Within limits, Ms. Barragan, within limits. It definitely isn't good when it leads to a situation like the following:

One former employee remembers seeing a woman who was just fired crying, packing up her boxes, while the rest of her team shied away from the scene without offering any support. They feared that “helping her would put a target on their back,” the employee said. “I just couldn’t believe it.”

Radical candor should not have to mean emotional blunting, but it requires active work to keep the two apart. Every strategy has its pitfalls.

Like any strategy, radical honesty is an imperfect decision procedure. There will always be cases where omitting information is the best course of action—or even actively lying (if the Nazis are at the door asking for the location of the family you are hiding). That said, in general, not lying comes pretty close to perfect.

“Lying is, almost by definition, a refusal to cooperate with others. It condenses a lack of trust and trustworthiness into a single act. It is both a failure of understanding and an unwillingness to be understood. To lie is to recoil from relationship.” — Sam Harris




  1. There are actually other problems. Many of them. For example, if Lustig had calculated incorrectly, he would be dead, and we likely would never have heard this story. Always factor in survivorship bias.

  2. Ok, this needs a bit more rigor, since human beings are not quite well-behaved functions $f(X)$ (we can spit out different results for the same inputs and might introduce information of our own). Really, this should be expressed in terms of mutual information: $H(X) = I(X; X) \geq I(X; f(X))$ — the entropy (average information) of $X$ (equal to the average amount of information about $X$ contained in $X$ itself) is always at least the average information about $X$ contained in a function of $X$.