Thursday, July 20, 2017

Detecting Intelligence with an Unknown Objective

This is the second post in a series on theory for adaptive systems. The previous post argued that the lack of good adaptive systems theory is the main bottleneck to scientific progress today. The main goal for the next few posts is to lay out questions and problems, and to suggest possible approaches toward quantitative solutions.

Today’s question: How can we recognize adaptive systems in the wild?

To be more concrete: suppose I run a ridiculously huge Game of Life simulation with random initial conditions. What function can I run on the output in order to detect adaptive system behavior within the simulation? Specifically, we’re looking for subsystems of the Game of Life which:
  • Learn from their environment
  • Use what they learn to optimize for some objective
I see two major difficulties to this problem:
  1. We don’t know the system’s objective.
  2. We don’t know what defines the “system”.
I’ll focus on the first part for now; defining the “system” will be a running theme which I will revisit toward the end of these posts.

Example 1: Street Map
Imagine cars driving around on a street map which looks roughly like a tree: a central intersection, with roads branching outward from it.
Suppose two types of cars drive around this map. The first type wanders about, picking a random direction at each intersection until it reaches its destination. The second type knows what the map looks like, and takes the shortest path from its starting point to its destination. Looking at their paths as they drive, how could we tell the two apart? In particular, how could we tell the two apart without knowing the destination?

It would be tedious but straightforward to build an elaborate statistical test for this particular problem, but it wouldn’t generalize. Instead, I’ll point out a heuristic: the intelligent cars, the cars which take the shortest route, will almost always start by driving toward the center, and almost always finish by driving away from the center.

Why? Pick two points at random on the map. Look at the shortest path between them. A majority of the time, it will go through the center point. Even when it doesn’t, it almost always goes first toward the center, then away - it never gets closer to the center, then farther, then closer again.

(For the mathematically inclined: you can prove this by viewing the map as a tree rooted at the center, so that “distance to center” is just depth within the tree.)
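For the computationally inclined, here’s a quick sanity check of that claim - a minimal sketch, with a randomly generated tree standing in for the map and node 0 standing in for the center:

```python
import random

def random_tree(n, seed=0):
    """Build a random tree on nodes 0..n-1; node 0 is the center/root.
    Returns parent[] and depth[] arrays."""
    random.seed(seed)
    parent = [None] * n
    depth = [0] * n
    for v in range(1, n):
        p = random.randrange(v)      # attach each new node to an earlier one
        parent[v] = p
        depth[v] = depth[p] + 1
    return parent, depth

def path_depths(u, v, parent, depth):
    """Depths along the unique tree path from u to v (walk both ends up to the meeting point)."""
    left, right = [u], [v]
    uu, vv = u, v
    while uu != vv:
        if depth[uu] >= depth[vv]:
            uu = parent[uu]; left.append(uu)
        else:
            vv = parent[vv]; right.append(vv)
    # both walks end at the lowest common ancestor; drop the duplicate copy
    nodes = left + right[-2::-1]
    return [depth[x] for x in nodes]

parent, depth = random_tree(1000)
for _ in range(100):
    u, v = random.randrange(1000), random.randrange(1000)
    d = path_depths(u, v, parent, depth)
    # depth should only ever fall and then rise: toward the center, then away
    turned = False
    for a, b in zip(d, d[1:]):
        if b > a:
            turned = True
        assert not (turned and b < a), "path moved away from the center, then back toward it"
```

The assertion never fires: in a tree, the path from u to v runs up to their lowest common ancestor and back down, so “distance to center” traces out a single valley.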

Example 2: Shortest Paths
In the street map example, we can detect “intelligent” behavior by looking for cars which first go towards the center of the map. This behavior is statistical evidence that the car is following a relatively short path to some destination.

Can we generalize this? “Intelligent” cars only start by going toward the center because that’s the shortest path. Even on a more general map, we could look for statistical patterns among shortest paths. On a real-world road map, “shortest paths” over significant distances usually hop onto a highway for most of the drive. Even locally, there are more central and less central roads. Without diving into any statistics, it seems like we could take a typical road map and develop a statistical test to tell whether a car is following a short path between two points, without needing to know the car’s destination.
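To make that concrete, here’s one crude statistic - a sketch only, with a toy grid standing in for the road map and made-up “cars”: take any two points along a car’s observed trajectory and compare the shortest-path distance between them to the number of steps the car actually took. A navigator scores near 1 on every stretch of its trip; a wanderer scores far below 1. Note that this uses only the trajectory and the map, not the destination.

```python
import random
from collections import deque

def bfs_dist(grid_n, src):
    """Shortest-path distances from src to every cell of an n-by-n grid (4-connected)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        x, y = q.popleft()
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if 0 <= nx < grid_n and 0 <= ny < grid_n and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1
                q.append((nx, ny))
    return dist

def efficiency(path, grid_n, window=20):
    """Average ratio of graph distance to steps taken, over windows of the observed path.
    Near 1.0 means the car moves about as directly as possible; near 0 means wandering."""
    ratios = []
    for i in range(0, len(path) - window, window):
        a, b = path[i], path[i + window]
        ratios.append(bfs_dist(grid_n, a)[b] / window)
    return sum(ratios) / len(ratios)

def random_walk(start, grid_n, steps):
    path, (x, y) = [start], start
    for _ in range(steps):
        x, y = random.choice([(x+1, y), (x-1, y), (x, y+1), (x, y-1)])
        x, y = min(max(x, 0), grid_n-1), min(max(y, 0), grid_n-1)
        path.append((x, y))
    return path

def direct_path(start, goal):
    path, (x, y) = [start], start
    while (x, y) != goal:                 # move one step closer on each axis in turn
        if x != goal[0]: x += 1 if goal[0] > x else -1
        elif y != goal[1]: y += 1 if goal[1] > y else -1
        path.append((x, y))
    return path

random.seed(0)
n = 50
print("wanderer:", efficiency(random_walk((0, 0), n, 400), n))    # typically well below 1
print("navigator:", efficiency(direct_path((0, 0), (40, 45)), n)) # close to 1
```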

But what makes the short path “intelligent” at all? Why do we intuitively associate short paths with intelligent behavior, as opposed to wandering randomly around the map?

Example 3: Resource Acquisition
Let’s look at the problem from a different angle. One characteristic behavior of living creatures, from animals to bacteria, is a tendency to acquire resources.

In biology, the main types of resources acquired are energy and certain standard biochemicals. Each of these resources is stored - e.g. energy is stored as starch, fat, ATP, an electric potential difference, etc.

Why would adaptive systems in general want to acquire and store resources? Because resources give the system more options. A human who accumulates lots of currency has more options available than a human without any currency. A bacterium with a store of energy has more options than one without. Ultimately, those resources can be used in a variety of different ways in order to achieve the system’s objective.

Whether it’s a human taking a vacation or buying a car, or a bacterium reproducing or growing, a pool of resources offers options suited to many different situations. Intuitively, we expect adaptive systems to accumulate resources, because those resources will give the system many more options in the future.

Example 4: Time as a Resource
One universal resource is time. In this view, saving time is a special case of accumulating resources: time saved can be spent in a wide variety of ways, offering more options in the future.

This ties back to the shortest path example. We expect “intelligent” systems to take short paths in order to save time. They save time because time is a universal resource - time saved can almost always be “spent” on something else useful to the system’s goal.

In the street map example, we run into a more unusual resource: “centrality” in the road map. (Mathematically: height in the tree.) A more central location is closer to most points. By moving toward the center of the map, a car accumulates centrality. It can then cash in that centrality for time savings, converting one resource (centrality) into another (time).

A Little Formalization
We now have a handful of examples of intuitively “intelligent” behavior - short paths, energy and currency accumulation, saving time. These examples all amount to the same thing: accumulating some useful resource. Can we formalize this intuition somewhat? Can we generalize it further?

In AI theory, there’s a duality between constraint relaxation and heuristics. A constraint relaxation would be something like “what could the system do if it had more of resource X?”. The amount of X is constrained, and we “relax” that constraint to see if more X would be useful. That constraint relaxation has a corresponding heuristic: “accumulate X”. That heuristic is useful exactly when relaxing the constraint on X is useful.
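For readers who haven’t seen this duality before, the textbook example from search: relax the “no driving through walls” constraint, solve the relaxed problem exactly (that’s just Manhattan distance), and use that solution as the heuristic guiding A* on the real problem. A minimal sketch, with a made-up maze:

```python
import heapq

# A tiny maze: '#' is a wall, '.' is open. The real problem forbids moving through walls.
MAZE = ["....#....",
        ".##.#.##.",
        ".#..#..#.",
        ".#.###.#.",
        ".#.....#.",
        ".#.###.#.",
        "....#...."]

def manhattan(a, b):
    # Exact solution to the *relaxed* problem (walls removed) = heuristic for the real one.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def astar(start, goal):
    frontier = [(manhattan(start, goal), 0, start)]
    best = {start: 0}
    while frontier:
        _, g, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return g
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < len(MAZE) and 0 <= nc < len(MAZE[0]) and MAZE[nr][nc] == '.':
                ng = g + 1
                if ng < best.get((nr, nc), float('inf')):
                    best[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + manhattan((nr, nc), goal), ng, (nr, nc)))
    return None

print(astar((0, 0), (6, 8)))   # length of the shortest legal path
```

The relaxed problem is easy, and its answer tells the search which directions are promising in the hard problem - the same pattern as “more of resource X would help, so accumulate X”.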

All of our resource accumulation examples can be viewed as heuristics of that same form: “accumulate X”. Each of them has a corresponding constraint relaxation: “what could the system do if it had more of resource X?”.

In principle, any formal heuristic can be viewed as a resource. But the examples above are more specific than heuristics in general; they share some features which generic heuristics need not have:
  • Each resource is highly fungible. Energy, currency and time are easy to trade for a very wide range of other things, and other things are easy to trade back into energy, currency, and/or time.
  • Each resource can be stored efficiently. Cream is not a good resource for humans to accumulate; it spoils quickly.
  • Each resource is scarce. Bacteria need water, and they could accumulate water, but they’re usually surrounded by unlimited amounts of water anyway. No point storing it up.
In some ways, these are just criteria for what makes a good formal heuristic. In order for an “accumulate X” heuristic to accelerate planning significantly, the resource X needs to be scarce, storable, and fungible. And in order for something to be a good resource to accumulate, “accumulate X” should be a useful heuristic for planning problems.

Problems. Plural.

Remember where we started this post: we want to detect adaptive systems without necessarily knowing the systems’ objectives in advance. All the resources listed above make good heuristics not just for one problem, but for a wide variety of different problems. Why? What do they have in common, beyond generic formal heuristics?

The Ultimate Resource
Let’s go back to where formal heuristics come from: constraint relaxation. Intuitively, by accumulating resources, by following a heuristic, by relaxing a constraint, a system gives itself more options. That’s why it’s useful to have more energy, more currency, more time: the system can choose from among a wider variety of possible actions. The action space is larger.

This is the ultimate resource: accessible action space. The more possible actions available to an adaptive system, the better. A good resource to accumulate is, in general, one which dramatically expands the accessible action space. Fungibility, storability, and scarcity are all key criteria for something to significantly expand the action space.
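One crude way to put a number on “accessible action space”, purely as a sketch: model the system’s situation as a node in a transition graph, and count how many distinct states it could reach within some budget of steps. The graph and budget below are invented for illustration; the idea is related in spirit to what the reinforcement learning literature calls empowerment.

```python
from collections import deque

def reachable_states(graph, start, budget):
    """Count distinct states reachable from `start` in at most `budget` steps.
    A crude proxy for the size of the accessible action space."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        state, d = frontier.popleft()
        if d == budget:
            continue
        for nxt in graph.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return len(seen)

# Toy example: with no stored energy the only legal move is to wait;
# with stored energy, many more actions (and hence states) open up.
graph = {
    "broke":       ["broke"],
    "has_energy":  ["broke", "moved", "reproduced", "built_store"],
    "moved":       ["has_energy", "broke"],
    "reproduced":  ["broke"],
    "built_store": ["has_energy"],
}
print(reachable_states(graph, "broke", 3))       # 1
print(reachable_states(graph, "has_energy", 3))  # considerably more
```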

Redux
Time to go back to the opening question: suppose I run a ridiculously huge Game of Life simulation with random initial conditions. What function can I run on the output in order to detect adaptive system behavior within the simulation?

This post has only addressed one tiny piece of that problem: an unknown objective. Later posts will focus more on information processing, learning, and defining the system. But already, we have a starting point.

We expect optimizing systems to accumulate resources. These resources will be fungible, storable, and scarce in the environment. The system will accumulate these resources in order to expand its action space.

So what might we look for in the Game of Life? Very different kinds of resources could be useful, depending on the scale and nature of the system. But we would certainly look for statistical anomalies - resources are scarce. Those anomalies should be persistent - resources can be stored. Finally, the extent of those anomalies should grow and shrink over time - resources are acquired and spent.
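Here’s a very rough sketch of what such a detector might look like in code. The statistic (local live-cell density), the block size, and the 3-sigma threshold are all placeholder assumptions, not a serious proposal - the point is just the shape of the computation: compare local statistics to the random-soup baseline, then track persistence and extent over time.

```python
import numpy as np

def life_step(board):
    """One step of Conway's Game of Life on a toroidal board."""
    n = sum(np.roll(np.roll(board, dr, 0), dc, 1)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0))
    return ((n == 3) | ((board == 1) & (n == 2))).astype(np.uint8)

def block_stats(board, block=16):
    """Mean live-cell density in each block-by-block tile of the board."""
    h, w = board.shape
    return board[:h - h % block, :w - w % block] \
        .reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def anomaly_history(board, steps=500, block=16):
    """For each step, flag blocks whose density is far from the global baseline.
    Persistent flags ~ stored resources; growth/shrinkage of the flagged region ~ acquisition/spending."""
    history = []
    for _ in range(steps):
        board = life_step(board)
        d = block_stats(board, block)
        baseline, spread = d.mean(), d.std() + 1e-9
        history.append(np.abs(d - baseline) / spread > 3)   # crude 3-sigma flag
    return np.array(history)

rng = np.random.default_rng(0)
board = (rng.random((256, 256)) < 0.35).astype(np.uint8)
flags = anomaly_history(board)
persistence = flags.mean(axis=0)     # fraction of time each block was anomalous
extent = flags.sum(axis=(1, 2))      # how much of the board is anomalous at each step
print("most persistent block:", persistence.max())
print("anomalous extent over first 10 steps:", extent[:10])
```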

It’s not much, but it’s a starting point. Hopefully it suggests a flavor for what sort of things could be involved in research on the subject. Or better yet - hopefully it gives you ideas for better approaches to tackle the problem.

Next post will talk about how to extract an adaptive system’s internal probabilistic model.

Friday, June 30, 2017

Technical vs Economic Bottlenecks

What were the major barriers faced by the Manhattan project?

I would point to two broad classes of problems. The first is a set of economic challenges: obtaining funding, resources, and people with specialized knowledge; coordinating work; and so forth. The second is a set of technical challenges: what materials to use, the mechanism of the bomb, all the technical issues required to produce a blueprint. Both kinds of problems had to be solved in order for the Manhattan project to succeed.

In response to my post on the scientific bottleneck, a few people mentioned various research/commentary on bottlenecks to scientific progress. Different people linked me to very different works with very different purposes, but they shared a common theme: the limiting factors to scientific progress boil down to getting knowledge from the people who have it to the people who need it.

I totally agree with that idea. So why am I writing a series of posts on a “scientific bottleneck” which does not seem particularly related to this problem?

Just like the Manhattan project, scientific progress requires overcoming both economic and technical challenges. I argued in my previous post that the theory of adaptive systems (or lack thereof) is the limiting technical challenge across multiple fields currently on the scientific frontier. But I didn’t address economic challenges in that post at all.

I’m focused on technical challenges mainly because the major economic bottlenecks to scientific progress are exactly the same as the major economic bottlenecks in any other industry: coordination problems. In the linked post, I claimed that coordination problems are the primary bottleneck in nearly every industry today. As a result, if you want to add lots of value to your company, the natural starting point is to look for coordination problems. In particular:
  • Is there information which some people have and other people need?
  • Are there communication difficulties?
  • Are different specialized groups supporting each other as needed?

This piece is one example account of scientific bottlenecks. Its thesis:
“Behavioral science researchers are now recognizing that it is impossible to find and incorporate all related disciplinary knowledge. … There are simply too many overlapping research areas across disciplines for any single person to integrate or utilize.”
And later:
“a conservative evaluation found 70 differently named self-efficacy constructs, and only eight of these construct names were used more than once.”
One more:
“when the independent variables of most of these disciplines are examined, there is enormous overlap.”

Sound familiar? I’m cherry-picking quotes to make a point, but these are pretty representative of the essay. The major economic bottlenecks to scientific progress are:
  • Getting information from people who have it to people who need it.
  • Communication difficulties, especially different/obscure terminology.
  • Bridging the gap between groups with different specializations.

If you’re a scientist looking for career advice, this is it: look for coordination problems.

Wednesday, June 28, 2017

The Scientific Bottleneck

Imagine you’re in a sci-fi universe in the style of Star Trek or Stargate or the like. You’ve bumped into a new alien species, drama ensued, and now you’re on their ship and need to hack into their computer system. Actually, to simplify the discussion, let’s say you’re the aliens, and you’re hacking into the humans’ computer system.

Let’s review just how difficult this problem is.

You’re looking at billions of tiny electronic wires and switches and capacitors. You have a rough idea of the high-level behavior they produce - controlling the ship, navigating via the stars, routing communications, etc. But you need to figure out how that behavior is built up out of wires and switches and electronic pulses and whatnot. As a first step, you’ll probably scan the whole CPU and produce a giant map of all the switches and wires and maybe even run a simulation of the system. But this doesn’t really get you any closer to understanding the system or, more to the point, any closer to hacking it.

So how can we really understand the computer system? Well, you’ll probably notice pretty quickly that there are regular patterns on the CPU. At the low level, there are things like wires and switches. You might also measure the voltages in those wires and switches, and notice that the exact voltage level doesn’t matter much; there are high voltages and low voltages, and the details don’t seem to matter once you know whether a given voltage is high or low. Then you might notice some higher-level structures: patterns of wires and switches which form standard building blocks, like memory cells and logic gates. But eventually you’re going to exhaust the “hardware” properties, and you’ll need to start mapping the “software”. That problem will be even harder: you’ll basically be doing reverse compilation, except you’ll need to reverse-compile the operating system at the same time as the programs running on it, and without knowing what language(s) any of those programs were written in.

That’s basically the state of biology research today.

There are millions of researchers poking at this molecule or that molecule, building very detailed pictures of small pieces of the circuitry of living organisms. But we don’t seem much closer to decoding the higher-level language. We don’t seem any closer to assigning meaning to the signals propagating around in the code of living organisms.

Of course, part of the problem is that organisms weren’t written in any higher level language. They were evolved. It’s not clear that it’s possible to assign meaning to a single molecular signal in a cell, any more than you could assign meaning to a single electron in a circuit. There certainly is meaning somewhere in the mess - organisms model their environments, so the information they’re using is in there somewhere. But it’s not obvious how to decode that information.

All that said, biologists have a major advantage over aliens trying to hack human computer systems: software written by humans is *terrible*. (Insert obligatory Java reference here.) Sure, there’s lots of abstraction levels, lots of patterns to find, but there’s no universal guiding principle.

Organisms, on the other hand, all came about by evolution. That means they’re a mad hodgepodge of random bits and pieces, but it also means that every single piece in that hodgepodge is *optimized*. Every single piece has been tweaked toward the same end goal.

The Problem: General
There’s a more general name for systems which arise by optimization: adaptive systems. Typical examples include biological organisms, economic/financial systems, the brain, and machine learning/AI systems.

Each of these fields faces the same fundamental problem as biology: we have loads of data on the individual components of a big, complicated system. Maybe it’s protein expression and signalling in organisms, maybe it’s financial data on individual assets in an economy, maybe it’s connectivity and firing data on neurons in a brain, maybe it’s parameters in a neural network. In each case, we know that the system somehow processes information into a model of the world around it, and acts on that model. In some cases, we even know the exact utility function. But we don’t have a good way to back out the system’s internal model.

What we need is some sort of universal translator: a way to take in protein expression data or neuron connectivity or what have you, and translate it into a human-readable description of the system’s internal model of the world.

Note that this is fundamentally a theory problem. The limiting factor is not insufficient data or insufficient computing power. Google throws tremendous amounts of data and computational resources into training neural networks, but decoding the internal models used by those networks? We lack the mathematical tools to even know where to start.

Bottleneck
A while ago I wrote a post on the hierarchy of the sciences, featuring a diagram of the scientific fields with a dotted line drawn around some of them.

The dotted line is what I called the “real science and engineering frontier”. The fields within the line are built on robust experiments and quantitative theory. Their foundations and core principles are well-understood, enough that engineering disciplines have been built on top of them. The fields outside have not yet reached that point. Fields right on the frontier or just outside are exciting places to be - these are the fields which are, right now, crossing the line from crude experiments and incomplete theories to robust, quantitative sciences.

What’s really interesting is that the fields on or just outside the frontier - biology, AI, economics, and psychology - are exactly the fields which study adaptive systems. And they are all stuck on qualitatively similar problems: decoding the internal models of complex systems.

This suggests that the lack of mathematical tools for decoding adaptive systems is the major bottleneck limiting scientific progress today.

Removing that bottleneck - developing useful theory for decoding adaptive systems - would unblock progress in at least four fields. It would revolutionize AI and biology almost overnight, and economics and psychology would likely see major advances shortly thereafter.

Questions
Let’s make the problem a little more concrete. Here are a few questions which a solid theory of adaptive systems should be able to answer.
  1. How can we recognize adaptive systems in the wild? What universal behaviors indicate an adaptive optimizer?
  2. There are already strong theoretical reasons to believe that any adaptive system which predicts effectively has learned to approximate some Bayesian model; the history of machine learning provides plenty of evidence supporting the theory as well (a toy illustration of this premise appears after the list). Given a fully specified adaptive system, e.g. a trained neural network, how can we back out the Bayesian model which it approximates?
  3. Bayesian models are constrained by the rules of probability, but we can also add the rules of causality. How can we tell when an adaptive system (e.g. a neural net) has learned to approximate a causal model, and how can we back out that model?
  4. Outside of machine learning/AI, utility functions are generally unknown. We know that e.g. a bacterium is evolved to maximize evolutionary fitness, but how can we estimate the shape of the fitness function based on parameters of the optimized system?
  5. Under what conditions will an adaptive system learn models with levels of abstraction? How can those abstractions be translated into something human-readable?
  6. Once the fitness function and internal models used by a bacterium have been decoded, how can new information or objectives be passed back into the cell via chemical concentrations or genetic modification? More generally, how can human-readable information (including probabilities, causal relationships, utility, and abstractions) be translated back into the parameter space of an adaptive system?
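As a toy illustration of the premise behind question 2 (everything here - the model, the numbers, the predictor - is invented purely for illustration): generate data from a known probabilistic model, train a generic predictor on it without telling it the model, and check that the predictor’s output probabilities track the true Bayesian posterior. The open problem is the reverse direction: recovering the model from the trained parameters when we don’t already know it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known generative model: x ~ N(0, I); P(y=1 | x) = sigmoid(w_true . x)
w_true = np.array([2.0, -1.0, 0.5])
X = rng.normal(size=(5000, 3))
p_true = 1 / (1 + np.exp(-X @ w_true))          # the true Bayesian posterior P(y=1 | x)
y = (rng.random(5000) < p_true).astype(float)

# Train a generic predictor (logistic regression fit by gradient descent),
# telling it nothing about w_true.
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * (X.T @ (p - y)) / len(y)

p_learned = 1 / (1 + np.exp(-X @ w))
print("mean |learned prob - true posterior|:", np.abs(p_learned - p_true).mean())
```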

Obviously this list is just a start, but it captures the flavor of the major problems. Over the next few weeks, I’ll have a few more posts on specific issues and possible approaches to these problems.

Monday, June 19, 2017

Prerequisites for Universal Basic Income

When I hear people talk about UBI, they often paint a picture of a future in which most labor has been automated, from agriculture to shipping to construction to manufacturing. A handful of people can produce everything needed to support the entire population. That handful of people are rewarded handsomely, and the rest of the population lives out their days in leisure, supported by UBI.

Generally speaking, I like that picture. It leaves out important questions, like what the majority of the population does with their time, but that’s a question for another day. It certainly seems silly to have an economy in which most people work jobs they hate, when you could have an economy in which most people don’t need to work at all unless they want to.

On the other hand, that picture depends on very high productivity, driven by very high automation of labor. In the middle ages, it would not have been possible to build this sort of UBI-society, because the vast majority of the population needed to work just to produce enough food to feed everyone. If most people need to work just to keep everyone alive, then no amount of clever distribution is going to create a society of leisure.

But where’s the line? Somewhere between the middle ages and the future, UBI should become possible… but how can we tell whether we’ve passed that line yet? And what happens if we try to institute UBI before crossing that line?

Key Pieces
I sat down at one point and worked out the math for UBI in a very simple model economy. The answer was, in retrospect, pretty intuitive. The key questions are (1) how many people need to work, and (2) what motivates those people to work? Conceptually, the logic goes like this:
  1. Enough people need to work to produce and distribute all the necessary goods needed by the population as a whole. This includes food, housing, medical, military/police, etc.
  2. Extra goods, above and beyond those consumed by the general populace, must be produced in order to incentivize the workers to work. Additional workers are needed to produce these incentive goods, and enough incentive goods must be produced for both the workers producing the necessary goods and the additional workers producing the incentive goods.
The key piece here is that, under UBI, nobody is forced to work - everyone has the option of simply not working, and everyone can live a comfortable life without working. But at the same time, we need some people to work - the economy isn’t 100% automated, and worker productivity isn’t infinite. So we need a positive incentive to convince people to work: workers must be able to afford some goods which are not available to the populace as a whole, those goods must be attractive enough to make working preferable to not working, and they must attract enough people to produce both the necessary goods and the incentive goods themselves.

Note that the UBI amount is a key variable here. Is the UBI amount set to include internet and cell service? Travel? A car? Every additional good provided to the population as a whole requires more people to work in order to produce that good. On the other hand, every good provided to the population as a whole is one less good available to serve as an incentive. There have to be things which workers can afford but non-workers cannot - otherwise there’s no incentive to work. As the UBI amount goes up, more workers are needed, but fewer people will want to work. Economically, the more goods covered by the UBI amount, the higher the productivity and the more automation required in order for that UBI amount to be possible.
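Here’s a minimal sketch of that arithmetic, with a single aggregate “goods” unit and entirely made-up numbers. Let p be output per worker, b the basket of goods covered by UBI (per person), and i the extra basket a worker receives as incentive. The workers must produce the UBI basket for everyone plus the incentive basket for themselves, so W*p = N*b + W*i, which pins down the required working fraction W/N = b / (p - i):

```python
def required_worker_fraction(p, b, i):
    """Fraction of the population that must work so that output covers
    the UBI basket for everyone plus the incentive basket for each worker.
    Solves W*p = N*b + W*i for W/N.  (All quantities in the same 'goods' unit.)"""
    if p <= i:
        return float("inf")   # each worker consumes more than they produce: impossible
    return b / (p - i)

# Entirely made-up numbers, just to show the shape of the tradeoff:
for b in (20, 40, 60):        # a more generous UBI basket...
    frac = required_worker_fraction(p=100, b=b, i=30)
    print(f"UBI basket {b}: {frac:.0%} of the population must work")
# ...requires a larger share of the population to work, while (per the post)
# a more generous basket also makes fewer people *want* to work.
```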

Failure Modes
To make these requirements more intuitive, I’ll outline what happens when UBI fails, i.e. when UBI is implemented without meeting the economic prerequisites.

Scenario: One Time Inflation
In this scenario, UBI is set to some fixed dollar amount, sufficient to live comfortably at pre-UBI prices. As soon as UBI is put into effect, most of the population quits their job. Prices skyrocket across the board, reflecting shortages in every good. Wages also skyrocket, with companies desperate to fill positions.

Very quickly, prices on consumer goods increase enough that the UBI amount no longer covers living costs. The population reluctantly returns to work, and the economy basically ends up where it started. Inflation has rendered the UBI amount too small to have a significant impact.

Scenario: Runaway Inflation
In this scenario, UBI is indexed to inflation. As before, everybody quits, prices shoot up, etc. But this time, every few months, the UBI amount jumps up to reflect price growth.

This won’t make a substantive difference. Everybody knows the UBI will increase, so prices stay ahead of it. As before, the economy ends up roughly where it started, except inflation is so high we have to ask Zimbabwe to return our wheelbarrow.

Scenario: Price Controls
In the worst case, price controls are instituted. Now people quit their jobs, don’t go back, and things get very ugly. The economy does not produce enough goods for everyone, and prices cannot adjust, leading to shortages. Food riots are likely.

Political Failure Mode
There’s one more important failure mode, separate from the economic failure modes above. In this case, the economy is capable of supporting UBI, but there’s no politically stable equilibrium.

As before, UBI is set to some fixed amount, sufficient to live comfortably at pre-UBI prices. Lots of people quit their jobs, prices go up, but it’s not a full failure. Enough people still work, and the UBI amount is still enough to live off.

But now a huge chunk of the population has lots of time on their hands and not much to do. Maybe they want to travel, but the UBI amount isn’t enough to travel much. Maybe they want to take classes in music or a language, but the UBI amount won’t cover that either. Maybe they just want to eat out more often.

One way or another, a huge chunk of the population is left with lots of time on their hands, and they’re all going to want more money for something. They may not want more money badly enough to work, or they may not be able to work, but they’ll still vote. So every politician with a hope of winning is going to promise to raise the UBI amount.

It shouldn’t take much imagination to see that, in a country like e.g. the US, the UBI amount is going to increase and keep increasing. Sooner or later, it will cross the threshold, and the economic failure modes discussed above will kick in. Politics will raise the UBI amount, inflation will kick in to effectively lower it, back and forth, back and forth.  That’s a stable equilibrium, and not a terrible one, inflation aside. But it’s only a matter of time before some clever politician tries to outsmart inflation, and the food riots kick in.

I see two versions of UBI which politics is unlikely to ruin.

The first is less-ambitious UBI, intended more to replace welfare than to overhaul the economy. In this case, people able to work are generally expected to keep working, and the UBI amount is intentionally limited to a living wage, not intended for comfortable living. The key here is that living off UBI would have to be tight enough that pretty much everyone would prefer to work if they can. This would still need to meet the economic prerequisites, but with most of the population still working, hopefully the political issues wouldn’t be a limiting factor.

The other solution is when automation is so complete that hardly any people are needed. If only a thousand people need to work to support the entire population, then we could plausibly get a thousand people to just volunteer, without needing extra goods to incentivize them. We’re certainly nowhere near that point today - even just looking at food distribution, we’d never get enough volunteers to drive all the big rigs needed. But it’s not out of the question for the future.

Monday, June 12, 2017

Be More Evil

Spoiler warning: significant spoilers for Avengers: Age of Ultron.

The Road to Hell
Everyone thinks of themselves as a hero of the story.

Gandhi thought of himself as a good person. So did Lenin. So has every president of the United States, from Jackson to Lincoln to FDR. Your parents see themselves as good. Your annoying neighbors see themselves as good. Everyone sees themselves as good.

This is a problem.

People tend to model their identity - and their life - after stories. Alas, the tropes which make fun stories are not representative of the real world. People grow up with stories of heroes fighting villains, heroes fighting monsters, heroes fighting alien invaders. In the stories, nine times out of ten the problems are caused by antagonists. So of course, people turn to the real world, and they see problems, and they look for antagonists. They blame society’s problems on the rich, the politicians, the religious, the sinful, etc.

We’re a world full of heroes in search of villains.

What if what we really need is more villains?

Remember that scene in Avengers: Age of Ultron, where Tony and Cap argue about how best to defend the world from invasions by alien armies? Tony argues that Earth has no viable defense against an invasion, and Cap argues that the Avengers can handle it.

Really? Six people? How are six people going to stop an invading army?

“Together”, replies Cap, against a backdrop of dramatic music.

Yeah. Great plan ya got there, Cap. All that togetherness makes for a real solid planetary defense strategy.


But it’s not Cap being a moron that’s notable here. Heroes act that stupid more often than not. What’s really surprising is that one of the good guys - Tony - is not a complete moron. Normally, it would be a villain’s job to point out that six people and some togetherness do not constitute a military defense strategy.

But it’s not a total departure from literary norms - Tony’s unusual common sense is portrayed as a character flaw. Tony overcoming that character flaw is one of the main lines of character development in the film, as well as in Iron Man 3.

Apparently the only way a superhero is allowed to display real intelligence is as a character flaw.

What’s really alarming about all this is that these are the stories which people use to model their own identities… and everyone thinks of themselves as a hero. We have a world full of people trying to be Captain America, people who want to save the world by (usually metaphorically) punching villains in the face. If the punching doesn’t work, then maybe we need some more togetherness?

They say politics is the mind-killer, but it’s broader than that. Morality is the mind-killer. Everyone is trying to be the hero, and the vast majority of the heroes we see are morons. It’s no surprise that the moment morality comes up, everyone scrambles to grab the idiot ball.

Trolley Problems
In addition to behaving like morons in general, heroes have a contractual obligation to make very poor decisions in certain situations.

Going back to Ultron, there’s a trolley problem near the end of the film. The villain is levitating a mid-size city. Once it gets high enough, the villain plans to drop it, generating a big enough boom to wipe out Europe (or something like that). Tony suggests nuking the whole thing while it’s still near the ground. Cap says “No! There’s civilians in that city, we need to evacuate them!” Of course, there’s no real doubt for the viewer - everyone knows they’re going to evacuate the city first. When a hero faces a trolley problem, they save the baby and then punch the trolley in the face.

Heroes, in general, are very bad at tradeoffs. Mosquito nets can save a life for something like $5000, but what hero would leave a baby on a train track in order to save a briefcase full of money? It’s hardly surprising that most altruism is so ineffective, when everybody’s trying to mimic heroes who have no idea how to handle tradeoffs.

Planning Ahead
The nature of fiction dictates that protagonists mostly be reactive, rather than proactive.

When the hero sets out to foil the villain’s dastardly plan, they don’t know the plan yet. The plan is a mystery, gradually revealed over the course of the story. It makes for a good story.

The converse would be a hero making a plan. Imagine: the first half of the story consists of the hero running various scenarios and putting backup plans in place for each of them. Finally, the plan actually kicks off, and the second half of the story consists of watching the plan work more or less as outlined earlier.

You know what we call that? A heist story. Funny coincidence, the genre where the protagonists plan things is also the genre where the protagonists are villains.

Heist stories aside, hero plans do not usually make for a good story. At most, they are small in scope, limited to laying a trap for the villain. Villains have plans, heroes try to break them; that’s how the story works.

When people try to act heroic, their first thought is not “you know what we need? A plan!”. Maybe they’ll throw together a small plan to stop their perceived villain, but nobody sits down to write a detailed, quantitative plan to eliminate poverty.

And if someone did write a detailed, quantitative plan to eliminate poverty, they would probably be a villain.

Join the Dark Side
Time for the pitch.

Join the dark side! You’ll immediately receive:
  • 15 IQ points!
  • Special Ability: Make Tradeoffs! (Includes: Resistance to Dutch Book Attacks!)
  • Special Ability: Plan Ahead more than Five Minutes!
… and many other bonuses.

You don’t need to take over the world. You don’t need a secret lair. You just need to ask yourself - what would a villain do? When faced with a problem, you just need to consider the Evil approach.

Even if your goal is world peace, or eliminating poverty. Villainy does not judge you on your aims, only on your methods. Ruthless efficiency, the pursuit of your objective above all else, doing what works - that is what the Dark Side is all about.

So the next time you want to do something about poverty, don’t volunteer at the soup kitchen or march to “spread awareness” or write a scathing facebook post about Bad People. That won’t fix poverty. Instead, do what a villain would do. Sit down and research the problem. Learn the underlying causes. Run the numbers. Make a detailed, quantitative plan. Find a devious way to make people help, whether they want to end poverty or not. If you need resources, acquire them. Make the necessary tradeoffs. And above all, be smart - it’s not about punching Bad People in the face, it’s not about togetherness or love, it’s about achieving the goal.


Ruthlessly.