Friday, June 30, 2017

Technical vs Economic Bottlenecks

What were the major barriers faced by the Manhattan Project?

I would point to two broad classes of problems. The first is economic challenges: obtaining funding, resources, and people with specialized knowledge; coordinating work; and so forth. The second is technical challenges: what materials to use, the mechanism of the bomb, all the technical issues which had to be resolved to produce a blueprint. Both kinds of problems had to be solved in order for the Manhattan Project to succeed.

In response to my post on the scientific bottleneck, a few people pointed me to research and commentary on bottlenecks to scientific progress. They linked very different works with very different purposes, but those works shared a common theme: the limiting factors boil down to getting knowledge from the people who have it to the people who need it.

I totally agree with that idea. So why am I writing a series of posts on a “scientific bottleneck” which does not seem particularly related to this problem?

Just like the Manhattan Project, scientific progress requires overcoming both economic and technical challenges. I argued in my previous post that the theory of adaptive systems (or lack thereof) is the limiting technical challenge across multiple fields currently on the scientific frontier. But I didn’t address economic challenges in that post at all.

I’m focused on technical challenges mainly because the major economic bottlenecks to scientific progress are exactly the same as the major economic bottlenecks in any other industry: coordination problems. In the linked post, I claimed that coordination problems are the primary bottleneck in nearly every industry today. As a result, if you want to add lots of value to your company, the natural starting point is to look for coordination problems. In particular:
  • Is there information which some people have and other people need?
  • Are there communication difficulties?
  • Are different specialized groups supporting each other as needed?

This piece gives one example account of the bottlenecks to scientific progress. Its thesis:
“Behavioral science researchers are now recognizing that it is impossible to find and incorporate all related disciplinary knowledge. … There are simply too many overlapping research areas across disciplines for any single person to integrate or utilize.”
And later:
“a conservative evaluation found 70 differently named self-efficacy constructs, and only eight of these construct names were used more than once.”
One more:
“when the independent variables of most of these disciplines are examined, there is enormous overlap.”

Sound familiar? I’m cherry-picking quotes to make a point, but these are pretty representative of the essay. The major economic bottlenecks to scientific progress are:
  • Getting information from people who have it to people who need it.
  • Communication difficulties, especially different/obscure terminology.
  • Bridging the gap between groups with different specializations.

If you’re a scientist looking for career advice, this is it: look for coordination problems.

Wednesday, June 28, 2017

The Scientific Bottleneck

Imagine you’re in a sci-fi universe in the style of Star Trek or Stargate or the like. You’ve bumped into a new alien species, drama ensued, and now you’re on their ship and need to hack into their computer system. Actually, to simplify the discussion, let’s say you’re the aliens, and you’re hacking into the humans’ computer system.

Let’s review just how difficult this problem is.

You’re looking at billions of tiny electronic wires and switches and capacitors. You have a rough idea of the high-level behavior they produce - controlling the ship, navigating via the stars, routing communications, etc. But you need to figure out how that behavior is built up out of wires and switches and electronic pulses and whatnot. As a first step, you’ll probably scan the whole CPU and produce a giant map of all the switches and wires and maybe even run a simulation of the system. But this doesn’t really get you any closer to understanding the system or, more to the point, any closer to hacking it.

So how can we really understand the computer system? Well, you’ll probably notice pretty quickly that there are regular patterns on the CPU. At the low level, there are things like wires and switches. You might also measure the voltages in those wires and switches, and notice that the exact voltage level doesn’t matter much; there are high voltages and low voltages, and the details don’t seem to matter once you know whether a given voltage is high or low. Then you might notice some higher-level structures: patterns of wires and switches which form standard elements, like memory elements and logic gates. But eventually, you’re going to exhaust the “hardware” properties, and you’ll need to start mapping “software”. That problem will be even harder: you’ll basically be doing reverse compilation, except you’ll need to reverse-compile the operating system at the same time as the programs running on it, and without knowing what language(s) any of those programs were written in.
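
To make the voltage step concrete, here’s a minimal sketch of discovering the digital abstraction from analog measurements. The voltage distribution and the crude two-means clustering are invented for illustration; real reverse-engineering tooling is far more sophisticated.

```python
# Toy "find the digital abstraction" step: raw analog voltages cluster into
# two levels, and only the cluster label (the bit) matters. All numbers here
# are invented.
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=5000)  # hidden logic levels we hope to recover
volts = np.where(bits == 1, 3.3, 0.0) + rng.normal(0, 0.15, size=5000)

# Simple 1-D two-means clustering: alternate assignment and re-centering.
centers = np.array([volts.min(), volts.max()])
for _ in range(20):
    labels = (np.abs(volts - centers[1]) < np.abs(volts - centers[0])).astype(int)
    centers = np.array([volts[labels == 0].mean(), volts[labels == 1].mean()])

# The recovered labels match the hidden bits almost perfectly: the analog
# details wash out, and only "high vs low" survives.
print(f"levels = {centers.round(2)} V, accuracy = {(labels == bits).mean():.3f}")
```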

That’s basically the state of biology research today.

There are millions of researchers poking at this molecule or that molecule, building very detailed pictures of small pieces of the circuitry of living organisms. But we don’t seem much closer to decoding the higher-level language. We don’t seem any closer to assigning meaning to the signals propagating around in the code of living organisms.

Of course, part of the problem is that organisms weren’t written in any higher-level language. They were evolved. It’s not clear that it’s possible to assign meaning to a single molecular signal in a cell, any more than you could assign meaning to a single electron in a circuit. There certainly is meaning somewhere in the mess - organisms model their environments, so the information they’re using is in there somewhere. But it’s not obvious how to decode that information.

All that said, biologists have a major advantage over aliens trying to hack human computer systems: software written by humans is *terrible*. (Insert obligatory Java reference here.) Sure, there’s lots of abstraction levels, lots of patterns to find, but there’s no universal guiding principle.

Organisms, on the other hand, all came about by evolution. That means they’re a mad hodgepodge of random bits and pieces, but it also means that every single piece in that hodgepodge is *optimized*. Every single piece has been tweaked toward the same end goal.

The Problem: General
There’s a more general name for systems which arise by optimization: adaptive systems. Typical examples include biological organisms, economic/financial systems, the brain, and machine learning/AI systems.

Each of these fields faces the same fundamental problem as biology: we have loads of data on the individual components of a big, complicated system. Maybe it’s protein expression and signaling in organisms, maybe it’s financial data on individual assets in an economy, maybe it’s connectivity and firing data on neurons in a brain, maybe it’s parameters in a neural network. In each case, we know that the system somehow processes information into a model of the world around it, and acts on that model. In some cases, we even know the exact utility function. But we don’t have a good way to back out the system’s internal model.

What we need is some sort of universal translator: a way to take in protein expression data or neuron connectivity or what have you, and translate it into a human-readable description of the system’s internal model of the world.

Note that this is fundamentally a theory problem. The limiting factor is not insufficient data or insufficient computing power. Google throws tremendous amounts of data and computational resources into training neural networks, but decoding the internal models used by those networks? We lack the mathematical tools to even know where to start.

Bottleneck
A while ago I wrote a post on the hierarchy of the sciences, featuring this diagram:

[Diagram: the hierarchy of the sciences, with a dotted line marking the “real science and engineering frontier”.]

The dotted line is what I called the “real science and engineering frontier”. The fields within the line are built on robust experiments and quantitative theory. Their foundations and core principles are well-understood, enough that engineering disciplines have been built on top of them. The fields outside have not yet reached that point. Fields right on the frontier or just outside are exciting places to be - these are the fields which are, right now, crossing the line from crude experiments and incomplete theories to robust, quantitative sciences.

What’s really interesting is that the fields on or just outside the frontier - biology, AI, economics, and psychology - are exactly the fields which study adaptive systems. And they are all stuck on qualitatively similar problems: decoding the internal models of complex systems.

This suggests that the lack of mathematical tools for decoding adaptive systems is the major bottleneck limiting scientific progress today.

Removing that bottleneck - developing useful theory for decoding adaptive systems - would unblock progress in at least four fields. It would revolutionize AI and biology almost overnight, and economics and psychology would likely see major advances shortly thereafter.

Questions
Let’s make the problem a little more concrete. Here are a few questions which a solid theory of adaptive systems should be able to answer.
  1. How can we recognize adaptive systems in the wild? What universal behaviors indicate an adaptive optimizer?
  2. There are already strong theoretical reasons to believe that any adaptive system which predicts effectively has learned to approximate some Bayesian model, and the history of machine learning provides plenty of supporting evidence. Given a fully specified adaptive system, e.g. a trained neural network, how can we back out the Bayesian model which it approximates? (A toy version of this question is sketched after the list.)
  3. Bayesian models are constrained by the rules of probability, but we can also add the rules of causality. How can we tell when an adaptive system (e.g. a neural net) has learned to approximate a causal model, and how can we back out that model?
  4. Outside of machine learning/AI, utility functions are generally unknown. We know that e.g. a bacterium has evolved to maximize evolutionary fitness, but how can we estimate the shape of the fitness function from the parameters of the optimized system?
  5. Under what conditions will an adaptive system learn models with multiple levels of abstraction? How can those abstractions be translated into something human-readable?
  6. Once the fitness function and internal models used by a bacterium have been decoded, how can new information or objectives be passed back into the cell via chemical concentrations or genetic modification? More generally, how can human-readable information (including probabilities, causal relationships, utility, and abstractions) be translated back into the parameter space of an adaptive system?

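Here’s a toy version of question 2, in the one setting where we can check the answer: synthetic data whose true Bayesian posterior is known in closed form. Everything here (the generative model, the tiny “adaptive system”) is an invented minimal example, not a proposed general method.

```python
# Does a trained predictor recover the true Bayesian posterior?
# Toy setup: y ~ Bernoulli(1/2), x | y ~ Normal(2y - 1, 1), so Bayes' rule
# gives P(y=1 | x) = sigmoid(2x) exactly.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
y = rng.integers(0, 2, size=n)
x = rng.normal(2.0 * y - 1.0, 1.0)

def true_posterior(x):
    return 1.0 / (1.0 + np.exp(-2.0 * x))

# "Adaptive system": logistic regression trained by gradient descent.
w, b = 0.0, 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.5 * np.mean((p - y) * x)  # gradient of average log loss wrt w
    b -= 0.5 * np.mean(p - y)        # gradient of average log loss wrt b

# The learned parameters approximate the Bayes-optimal w=2, b=0: the system
# has implicitly learned the true posterior, and here we can verify it.
gap = np.abs(1.0 / (1.0 + np.exp(-(w * x + b))) - true_posterior(x)).mean()
print(f"learned w={w:.2f}, b={b:.2f}; mean gap from true posterior {gap:.4f}")
```

The open problem is the reverse direction: given only the trained parameters of a much bigger system, back out the generative model it implicitly encodes.
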
Obviously this list is just a start, but it captures the flavor of the major problems. Over the next few weeks, I’ll have a few more posts on specific issues and possible approaches to these problems.

Monday, June 19, 2017

Prerequisites for Universal Basic Income

When I hear people talk about UBI, they often paint a picture of a future in which most labor has been automated, from agriculture to shipping to construction to manufacturing. A handful of people can produce everything needed to support the entire population. That handful of people are rewarded handsomely, and the rest of the population lives out their days in leisure, supported by UBI.

Generally speaking, I like that picture. It leaves out important questions, like what the majority of the population does with their time, but that’s a question for another day. It certainly seems silly to have an economy in which most people work jobs they hate, when you could have an economy in which most people don’t need to work at all unless they want to.

On the other hand, that picture depends on very high productivity, driven by very high automation of labor. In the Middle Ages, it would not have been possible to build this sort of UBI society, because the vast majority of the population needed to work just to produce enough food to feed everyone. If most people need to work just to keep everyone alive, then no amount of clever distribution is going to create a society of leisure.

But where’s the line? Somewhere between the Middle Ages and the future, UBI should become possible… but how can we tell whether we’ve passed that line yet? And what happens if we try to institute UBI before crossing that line?

Key Pieces
I sat down at one point and worked out the math for UBI in a very simple model economy. The answer was, in retrospect, pretty intuitive. The key questions are (1) how many people need to work, and (2) what motivates those people to work. Conceptually, the logic goes like this:
  1. Enough people need to work to produce and distribute all the goods needed by the population as a whole. This includes food, housing, medical care, military/police, etc.
  2. Extra goods, above and beyond those consumed by the general populace, must be produced in order to incentivize the workers to work. Additional workers are needed to produce these incentive goods, and enough incentive goods must be produced for both the workers producing necessary goods and the additional workers producing the incentive goods themselves.
The key piece here is that, under UBI, nobody is forced to work - everyone has the option of simply not working, and everyone can live a comfortable life without working. But at the same time, we need some people to work - the economy isn’t 100% automated, and worker productivity isn’t infinite. So we need a positive incentive: people who work must be able to afford some goods which are not available to the populace as a whole, those goods must be attractive enough to make working worthwhile, and they must attract enough workers to produce both the necessary goods and the incentive goods.
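
To make that concrete, here’s the algebra in a stripped-down version of the model (my reconstruction - the symbols are illustrative, and the model ignores everything except production and consumption). Suppose each worker produces $p$ units of goods, every person (worker or not) consumes a baseline bundle $c$ covered by UBI, and each worker consumes an extra incentive bundle $i$. With population $N$ and workforce $W$, production must cover consumption:

$$ pW \;=\; cN + iW \quad\Longrightarrow\quad W \;=\; \frac{cN}{p - i} $$

This is feasible only if $p > i$, and since $W \le N$, we also need $p \ge c + i$: each worker must produce their own baseline bundle, their own incentive bundle, and a share of the non-workers’ baselines. On top of the arithmetic, the incentive bundle $i$ must be attractive enough that at least $W$ people actually choose to work.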

Note that the UBI amount is a key variable here. Is the UBI amount set to include internet and cell service? Travel? A car? Every additional good provided to the population as a whole requires more people to work in order to produce that good. On the other hand, every good provided to the population as a whole is one less good available to serve as an incentive. There have to be things which workers can afford but non-workers cannot - otherwise there’s no incentive to work. As the UBI amount goes up, more workers are needed, but fewer people will want to work. Economically, the more goods covered by the UBI amount, the higher the productivity and automation required in order for that UBI amount to be possible.

Failure Modes
To make these requirements more intuitive, I’ll outline what happens when UBI fails, i.e. when UBI is implemented without meeting the economic prerequisites.

Scenario: One Time Inflation
In this scenario, UBI is set to some fixed dollar amount, sufficient to live comfortably at pre-UBI prices. As soon as UBI is put into effect, most of the population quits their job. Prices skyrocket across the board, reflecting shortages in every good. Wages also skyrocket, with companies desperate to fill positions.

Very quickly, prices on consumer goods increase enough that the UBI amount no longer covers living costs. The population reluctantly returns to work, and the economy basically ends up where it started. Inflation has rendered the UBI amount too small to have a significant impact.

Scenario: Runaway Inflation
In this scenario, UBI is indexed to inflation. As before, everybody quits, prices shoot up, etc. But this time, every few months, the UBI amount jumps up to reflect price growth.

This won’t make a substantive difference. Everybody knows the UBI will increase, so prices stay ahead of it. As before, the economy ends up roughly where it started, except inflation is so high we have to ask Zimbabwe to return our wheelbarrow.
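
As a cartoon of why indexing loses this race, consider a two-line difference equation: the UBI re-indexes to the current price level each period, and sellers then set prices a fixed markup ahead of it. The 20% markup is invented; only the dynamics matter.

```python
# Toy model of indexed UBI chasing prices. All numbers are invented.
ubi0, price = 1000.0, 1.0  # initial monthly UBI and price level
markup = 1.2               # hypothetical: prices stay 20% ahead of indexed UBI
for month in range(1, 13):
    ubi = ubi0 * price     # UBI re-indexed to the current price level
    price *= markup        # prices jump ahead of the new UBI again
    print(f"month {month:2d}: price level {price:6.2f}, real UBI {ubi / price:.2f}")
```

Nominal prices grow geometrically while the real value of the UBI sits pinned at ubi0/markup, below the intended level - the economy ends up roughly where it started, minus a hyperinflation.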

Scenario: Price Controls
In the worst case, price controls are instituted. Now people quit their jobs, don’t go back, and things get very ugly. The economy does not produce enough goods for everyone, and prices cannot adjust, leading to shortages. Food riots are likely.

Political Failure Mode
There’s one more important failure mode, separate from the economic failure modes above. In this case, the economy is capable of supporting UBI, but there’s no politically stable equilibrium.

As before, UBI is set to some fixed amount, sufficient to live comfortably at pre-UBI prices. Lots of people quit their jobs, prices go up, but it’s not a full failure. Enough people still work, and the UBI amount is still enough to live off.

But now a huge chunk of the population has lots of time on their hands and not much to do. Maybe they want to travel, but the UBI amount isn’t enough to travel much. Maybe they want to take classes in music or a language, but the UBI amount won’t cover that either. Maybe they just want to eat out more often.

One way or another, a huge chunk of the population is left with lots of time on their hands, and they’re all going to want more money for something. They may not want more money badly enough to work, or they may not be able to work, but they’ll still vote. So every politician with a hope of winning is going to promise to raise the UBI amount.

It shouldn’t take much imagination to see that, in a country like the US, the UBI amount is going to increase and keep increasing. Sooner or later, it will cross the threshold, and the economic failure modes discussed above will kick in. Politics will raise the UBI amount, inflation will kick in to effectively lower it, back and forth, back and forth. That’s a stable equilibrium, and not a terrible one, inflation aside. But it’s only a matter of time before some clever politician tries to outsmart inflation, and the food riots kick in.

I see two scenarios in which politics is unlikely to ruin UBI.

The first is a less-ambitious UBI, intended more to replace welfare than to overhaul the economy. In this case, people able to work are generally expected to keep working, and the UBI amount is intentionally limited: enough to live on, but not comfortably. The key here is that living off UBI would have to be tight enough that pretty much everyone would prefer to work if they can. This would still need to meet the economic prerequisites, but with most of the population still working, hopefully the political issues wouldn’t be a limiting factor.

The other solution is when automation is so complete that hardly any people are needed. If only a thousand people need to work to support the entire population, then we could plausibly get a thousand people to just volunteer, without needing extra goods to incentivize them. We’re certainly nowhere near that point today - even just looking at food distribution, we’d never get enough volunteers to drive all the big rigs needed. But it’s not out of the question for the future.

Monday, June 12, 2017

Be More Evil

Spoiler warning: significant spoilers for Avengers: Age of Ultron.

The Road to Hell
Everyone thinks of themselves as a hero of the story.

Gandhi thought of himself as a good person. So did Lenin. So has every president of the United States, from Jackson to Lincoln to FDR. Your parents see themselves as good. Your annoying neighbors see themselves as good. Everyone sees themselves as good.

This is a problem.

People tend to model their identity - and their life - after stories. Alas, the tropes which make fun stories are not representative of the real world. People grow up with stories of heroes fighting villains, heroes fighting monsters, heroes fighting alien invaders. In the stories, nine times out of ten the problems are caused by antagonists. So of course, people turn to the real world, and they see problems, and they look for antagonists. They blame society’s problems on the rich, the politicians, the religious, the sinful, etc.

We’re a world full of heroes in search of villains.

What if what we really need is more villains?

Remember that scene in Avengers: Age of Ultron, where Tony and Cap argue about how best to defend the world from invasions by alien armies? Tony argues that Earth has no viable defense against an invasion, and Cap argues that the Avengers can handle it.

Really? Six people? How are six people going to stop an invading army?

“Together”, replies Cap, against a backdrop of dramatic music.

Yeah. Great plan ya got there, Cap. All that togetherness makes for a real solid planetary defense strategy.


But it’s not Cap being a moron that’s notable here. Heroes act that stupid more often than not. What’s really surprising is that one of the good guys - Tony - is not a complete moron. Normally, it would be a villain’s job to point out that six people and some togetherness do not constitute a military defense strategy.

But it’s not a total departure from literary norms - Tony’s unusual common sense is portrayed as a character flaw. Tony overcoming that character flaw is one of the main lines of character development in the film, as well as in Iron Man 3.

Apparently the only way a superhero is allowed to display real intelligence is as a character flaw.

What’s really alarming about all this, is that these are the stories which people use to model their own identities… and everyone thinks of themselves as a hero. We have a world full of people trying to be Captain America, people who want to save the world by (usually metaphorically) punching villains in the face. If the punching doesn’t work, then maybe we need some more togetherness?

They say politics is the mind-killer, but it’s broader than that. Morality is the mind-killer. Everyone is trying to be the hero, and the vast majority of the heroes we see are morons. It’s no surprise that the moment morality comes up, everyone scrambles to grab the idiot ball.

Trolley Problems
In addition to behaving like morons in general, heroes have a contractual obligation to make very poor decisions in certain situations.

Going back to Ultron, there’s a trolley problem near the end of the film. The villain is levitating a mid-size city. Once it gets high enough, the villain plans to drop it, generating a big enough boom to wipe out Europe (or something like that). Tony suggests nuking the whole thing while it’s still near the ground. Cap says “No! There’s civilians in that city, we need to evacuate them!”. Of course, there’s no real doubt for the viewer - everyone knows they’re going to evacuate the city first. When a hero faces a trolley problem, they save the baby and then punch the trolley in the face.

Heroes, in general, are very bad at tradeoffs. Mosquito nets can save a life for something like $5000, but what hero would leave a baby on a train track in order to save a briefcase full of money? It’s hardly surprising that most altruism is so ineffective, when everybody’s trying to mimic heroes who have no idea how to handle tradeoffs.

Planning Ahead
The nature of fiction dictates that protagonists mostly be reactive, rather than proactive.

When the hero sets out to foil the villain’s dastardly plan, they don’t know the plan yet. The plan is a mystery, gradually revealed over the course of the story. It makes for a good story.

The reverse would be a hero making a plan. Imagine: the first half of the story consists of the hero running various scenarios and putting backup plans in place for each of them. Finally, the plan actually kicks off, and the second half of the story consists of watching the plan work more or less as outlined earlier.

You know what we call that? A heist story. Funny coincidence, the genre where the protagonists plan things is also the genre where the protagonists are villains.

Heist stories aside, hero plans do not usually make for a good story. At most, they are small in scope, limited to laying a trap for the villain. Villains have plans, heroes try to break them; that’s how the story works.

When people try to act heroic, their first thought is not “you know what we need? A plan!”. Maybe they’ll throw together a small plan to stop their perceived villain, but nobody sits down to write a detailed, quantitative plan to eliminate poverty.

And if someone did write a detailed, quantitative plan to eliminate poverty, they would probably be a villain.

Join the Dark Side
Time for the pitch.

Join the dark side! You’ll immediately receive:
  • 15 IQ points!
  • Special Ability: Make Tradeoffs! (Includes: Resistance to Dutch Book Attacks!)
  • Special Ability: Plan Ahead more than Five Minutes!
… and many other bonuses.

You don’t need to take over the world. You don’t need a secret lair. You just need to ask yourself - what would a villain do? When faced with a problem, you just need to consider the Evil approach.

Even if your goal is world peace, or eliminating poverty. Villainy does not judge you on your aims, only on your methods. Ruthless efficiency, the pursuit of your objective above all else, doing what works - that is what the Dark Side is all about.

So the next time you want to do something about poverty, don’t volunteer at the soup kitchen or march to “spread awareness” or write a scathing Facebook post about Bad People. That won’t fix poverty. Instead, do what a villain would do. Sit down and research the problem. Learn the underlying causes. Run the numbers. Make a detailed, quantitative plan. Find a devious way to make people help, whether they want to end poverty or not. If you need resources, acquire them. Make the necessary tradeoffs. And above all, be smart - it’s not about punching Bad People in the face, it’s not about togetherness or love, it’s about achieving the goal.


Ruthlessly.

Wednesday, June 7, 2017

Be More Amoral

Morality Projection
“If everyone cared and nobody cried
If everyone loved and nobody lied
If everyone shared and swallowed their pride
Then we'd see the day when nobody died”
- I have officially sunk to quoting Nickelback

Intuitively, humans tend to think that bad things happen because of bad people. If only everyone were caring and loving and humble and shared with each other, then cancer would be magically cured. Apparently technical issues ranging from cytokine signals to senescence-autophagy choice to drug specificity can all easily be resolved by sufficient loving and sharing.

Of course, it sounds completely stupid when you put it like that, but Nickelback just takes the usual stupidity and stretches it into hyperbole. Ever notice how people think marching in the street will somehow make it easier to cure cancer?

I call this sort of thinking morality projection. People think of the world in terms of Good and Bad: doing Good things will cause everybody to be happier and healthier and generally better off, while doing Bad things will cause everybody to be sadder and die sooner and be generally worse off. Conversely, if people are unhappy, it must be because of Bad People doing Bad things, or at least not enough Good People doing Good things.

This post is about how to avoid morality projection in your own thinking.

Taboo Morality
A few years ago, I decided to taboo all moralizing terms in my own head, just as an experiment for a week. If I caught myself thinking “X is good”, then I had to cross out that thought and replace it with “I would like X” or “X would result in Y, which I would like” or “X would result in Y, which lots of people would like”. Similarly with “X is bad”, or right/wrong, or “should”. Especially “should” - that one was particularly insidious. The goal was not simply to replace morally-flavored words, but to reduce moral concepts down to peoples’ preferences wherever they appeared.

I was shocked by the extent of morality projection in my own head. I was expecting political thoughts to be the main offender, but there was so much more - choices of food, clothing, social interaction, work habits, sleep schedule, financial habits... moralization was hiding everywhere. Everywhere were long-since-absorbed social lessons on the “right” thing to eat or to wear, “good” habits, all the little things one “should” do. All these lessons, absorbed when I was too young to question them, were suddenly thrust back into my awareness and re-examined.

Of course, I also started to notice morality projection in others - and I started to notice myself projecting onto others as well. I caught myself thinking of others as “bad” when they engaged in “bad” habits, or ate the “wrong” foods, or didn’t act as they “should”. Even after recognizing the flaws in many of society’s lessons, it’s still hard to adjust the standards you hold others to.

Halfway through the week, I knew this experiment had to become permanent. Turns out, a large chunk of the little things society teaches us are either pointless, situational, or just plain counterproductive.

I’m not going to write out a long list here, because people will just argue with it. When you’ve been trained from childhood to view some foods as good and others as bad, some habits as good and others as bad, and so forth, challenges to that worldview just trigger cached responses. I bet most of the people reading this got fired up when I criticized marching for cancer, for instance.

So I’ll just say this: try it. Just try it for a week. Taboo all the little “good” and “bad” and “should” thoughts, ask yourself whether each little thing actually achieves something you want.

Here are some examples to start off:
  • “X is a good idea” -> “X would make it easier to achieve goal Y”
  • “X is bad” -> “X would make lots of people unhappy”
  • “I should do X” -> “X would make it easier to achieve goal Y”
  • “I should do X” -> “X would make it easier to achieve lots of my goals”
  • “I should do X” -> “If I don’t do X, lots of people will be angry at me”
  • “They should do X” -> “If they do X, it will make it easier to achieve goal Y”
  • “They should do X” -> “X needs to be done in order to achieve Y, and it will be easiest for them”
  • “X is healthy” -> “X has high vitamin content”
  • “X is polite” -> “X avoids confrontation”
  • “X would be a nice thing to do” -> “X would make someone feel happy, which is something I want”
In general, replace anything that conveys a positive feeling without a specific physical interpretation. Words like “good”, “should”, “healthy”, “polite”, “nice”, etc. all feel positive, but don’t mean anything specific. Phrases like “I want” or “they want” are fine, emotions are fine, anything with a specific physical meaning is fine.

One final note. Some clever person is bound to say “Why don’t we just define ‘good’ as whatever makes people happier/live longer/generally better off?” That is a perfectly decent definition of “good”, but it doesn’t necessarily have anything to do with any of the things we usually consider “good” or otherwise virtuous. So you’re welcome to define good that way, but you’ll still need to go through and check that all the things we usually think of as “good” meet the new definition… and that’s going to be a lot harder with an overloaded word floating around.