Wednesday, April 11, 2018

Blog Moving; What to Read Here

I've finally become tired enough of Blogger's terrible interface to write elsewhere. For now, I'm posting on Medium and on Lesswrong (mostly crossposting the same stuff to both).

If you're here looking for my older writings, here's an overview of what I consider the best posts on this blog. They are ordered chronologically within categories. If you're only going to read one, go with either From Personal to Prison Gangs or The Broken Chain Problem.

Posts on How the World Works
  • Theory of Extreme Wealth: High-wealth occupations mainly solve coordination problems.
  • Rich People Pay Consumption Tax: Running a business gives a person de-facto tax options which cannot be changed by any reasonable tax code.
  • Coordination Economy: The main economic bottlenecks across most industries most of the time are coordination problems.
  • College Costs, part I and part II: I follow the money to find the root cause of college cost growth, and find a Cambrian explosion in course topics driving small class sizes.
  • From Personal to Prison Gangs: Increased regulation, litigation, licensing, credentialism, stereotyping and tribal identity are all driven by community growth.
  • Post-Scarcity: "Post-scarcity" worlds, as we usually think of them, will still have scarcity in the form of signalling goods, and developed countries are already most of the way to such a world.
  • Computational Limits of Empire: Pre-modern empires tend to max out around 60M people. The US hit 60M around 1890 - right when IBM was created to handle the census.
Political or Semipolitical Posts
  • The Problem with Atheism and The Value of Religion, By an Atheist: Atheists need to accept that religion does offer real value, God or no, and change messaging to say "look, you can still get this value even without God existing per se".
  • How to Implement a National Popular Vote: Could probably be done by half a dozen people working full-time for a year.
  • Summers' Hypothesis: Summers' hypothesis seeks to explain the STEM gender gap by the difference in IQ variance across genders. I got so tired of seeing people incorrectly "refute" this by looking at means (not variances) that I decided to run some numbers myself.
  • The Immigrant Superbug: A parable of science and politics.
  • Prerequisites for UBI: Universal basic income would be great in a sufficiently post-scarcity economy, but what exactly does "sufficiently post-scarcity" mean? This post answers.
Other Posts

Wednesday, February 21, 2018

John's Tips for Low-Effort Housekeeping

Many people out there are endlessly fascinated with organizing things into shelves, boxes, and shelves within boxes. Some of them write blog posts about the joy of organizing their living spaces.

I am not one of those people. This post suggests a couple ideas for people who think folding laundry is a waste of time, and who like to “clean” by picking up everything on the table and dropping it in a pile somewhere else.

Two-Hamper Technique
I’m sure we’ve all wondered, at some point in our lives, why people fold laundry. You’re just going to unfold it again as soon as you use it! What’s the point?

To be fair, folded clothes are more convenient for storage - they fit better, and are easier to sift through. On the other hand, I only use a fraction of my clothes on a regular basis. I’m perfectly willing to accept somewhat less efficient storage for those clothes, in exchange for not having to fold them.

I present: the two-hamper technique. Clean clothes go in one hamper. Dirty clothes go in the other hamper. On laundry day, I dump the dirty hamper into the washer, run the wash and dry cycles, and empty the dryer into the clean hamper. No folding required.

I keep less-often-used clothes folded or hanging in the closet. They don’t take up hamper space, and since they’re rarely used, the overhead of folding them is minimal.

I’ve also tried a few ways to generalize the two-hamper technique to other areas, but they haven’t worked out. Dishes are a good example - keeping dirty dishes in the sink and clean dishes in the dishwasher failed for multiple reasons. First, when I use dishes, I use too broad a variety of dishes to fit all of them in the dishwasher - whereas all the clothes I regularly wear do fit easily in one hamper. Second, I usually don’t use dishes at all - I mostly go out to eat. When I put dirty dishes in the sink, they end up sitting there growing unpleasant. I eventually just switched to disposables, which suit my infrequent use much better.

Recency Cache and Cleaning
From time to time I used to wish I had room to work at my desk, rather than covering literally all of it with stacks of paper, folders, and random objects.

One day it dawned on me: this is a caching problem.

I gave various surfaces in my apartment different cache levels:
  • L0 is table and desk
  • L1 is bookshelf and counter
  • L2 covers cabinets (lower shelves are L2a, upper are L2b)
  • L3 is the black hole, a.k.a. the closet.
When I want to clean something, I simply empty all of its contents into the next higher cache. For instance, to clean my desk, I literally pick up all the shit on my desk and move it to the counter. That’s it. Done.

Then, time passes. As I use things, I put them down wherever is convenient, which usually means L0 or L1. Pretty soon, the things which I use most often have migrated back to convenient low-numbered cache locations. Things I never use gradually move to higher and higher numbered locations, until they get buried in the closet, leaving behind space for more oft-used items.
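The migration dynamic can be sketched as a toy cache model (hypothetical Python; the item names and level assignments are made up for illustration):

```python
# Toy model of the recency-cache cleaning scheme: cleaning a surface dumps
# its contents one level up; using an item brings it back to the desk (L0).
class RecencyCache:
    def __init__(self, num_levels=4):
        # 0 = desk/table, 1 = bookshelf/counter, 2 = cabinets, 3 = closet
        self.levels = [set() for _ in range(num_levels)]

    def use(self, item):
        """Using an item: it gets put down somewhere convenient (L0)."""
        for level in self.levels:
            level.discard(item)
        self.levels[0].add(item)

    def clean(self, level):
        """Cleaning: dump the surface's entire contents into the next cache up."""
        if level + 1 < len(self.levels):
            self.levels[level + 1] |= self.levels[level]
        self.levels[level].clear()

cache = RecencyCache()
for item in ["stapler", "tax forms", "novelty mug"]:
    cache.use(item)              # everything starts out on the desk
cache.clean(0)                   # clean the desk: all three move to L1
cache.use("stapler")             # the stapler gets used, returns to the desk
cache.clean(1)                   # unused items drift upward...
cache.clean(2)                   # ...and eventually land in the closet (L3)
```

After a few cleanings, the stapler sits on the desk while the never-used items have migrated to the closet, exactly the sorting-by-usage the scheme relies on.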

This has worked pretty well so far.

The main hiccup was that I realized a second use case for high-priority cache locations. It isn’t just about making things easy to retrieve. I also leave things on my desk as a reminder to look at them later or to check them regularly. In retrospect, this never worked very well. After noticing this use-case, I’ve started looking for more effective (but still non-intrusive) ways to handle such reminders.

Monday, January 22, 2018

Trump 1-yr Retrospective

Shortly after Trump’s election, I wrote a few pieces on the subject, including “Why I Like Trump… AND Hillary” and “What Might Trump Actually Do?”. Both of these included predictions, so it’s time to evaluate how those predictions played out.

Qualitative Expectations

My main argument in favor of Trump was:
“If ever there was a complete huckster, a con-man who is master at the art of schmoozing and suckering, it's Trump. One of the two things that I really like about a Trump presidency is that there's no way in hell this guy is keeping his campaign promises. [...] The other thing I really like about Trump is that he has an established reputation for bringing in the most competent people to do the actual work. [...] if Trump's presidency goes anything like I expect, he'll be offloading all the work to extremely competent people, and he'll spend his time going around blustering and bullshitting and generally telling the public whatever they want to hear. In the best case, the competent people will get a great deal of freedom to do what needs to be done, while Trump bullshits the media.”

Over the past year, I’ve sat down at least four times to write a post saying I was wrong about Trump and he’s an awful president. Every time, I started by re-reading the above. And every time, I thought “actually, that’s mostly still true”. In every case, Trump was doing something really awful under a huge media spotlight… but ultimately with little impact. (Most notable examples are the Muslim travel ban and the trans military ban, both of which withered away in court.)

That said, I definitely got some parts wrong. Even without explicitly predicting much competence from Trump himself, I still overestimated his general competence and underestimated his awfulness. The trans military ban in particular was completely indefensible, even if it ultimately had little impact other than a media circus. Also, covfefe.

On the “hiring competent people” front, six months ago I was totally ready to admit that didn’t happen. But since then, the incompetent people have largely been fired - most notably Bannon. Tillerson’s great, Kelly’s great, Mnuchin’s solid, Gorsuch’s stellar. I don’t like Sessions, but I can’t fault his competence (and I’m sure many of you would say the same for some of the other names I’ve listed). It’s still not 100%, but overall, the “Trump serves as media shit umbrella while competent people do their thing” model seems to be up and running.

Specific Prediction Performance

Alright, time to march down the list of more specific predictions from “What Might Trump Actually Do?”. Here were the main predictions, by header:
  • Term limits/lobbying/etc. Predictions: no term limits, lobbying & fundraising limits unlikely, hiring and regulation freezes plausible. Result: no term limits, no lobbying & fundraising limits, hiring and regulation freezes both happened. 
  • Trade, jobs, EPA. Predictions: abandoning TPP and renegotiating NAFTA were plausible, and cutting various environmental regulations and funding was likely. Result: NAFTA is still on hold, but this stuff has mostly happened. 
  • Immigration & Misc. Predictions: pro-judicial-constraint, mostly ignore abortion & gay marriage, lots of noise but not much substantive change on immigration. Result: judicial appointment specifically known for “textualism”, mostly ignored abortion & gay marriage, mostly noise on immigration so far. 
To be fair, I hedged by not giving numerical probabilities for these. Overall, things I found “unlikely” didn’t happen, things I found “plausible” or “likely” almost all happened. The biggest single thing I was wrong about was immigration: there’s already been more substantive damage there than I expected, and likely more to come. Even so, “mostly noise” still seems like an accurate description.

Next, I had a list of predictions about Trump’s legislative agenda. I won’t go through the whole list - most of them haven’t seen any major bill in Congress, which wasn’t a possibility I accounted for. The big two have obviously been healthcare and taxes; I predicted both of these would be high priorities and would definitely pass with all-Republican president and Congress. Obviously, I was very wrong about one of those, and very right about the other. Overall, I don’t think I outperformed (or underperformed) pundit predictions on the legislative front.

Finally, I predicted that Trump wouldn’t do anything particularly awful which wasn’t on the list. The trans military ban proved me wrong on that front, though happily it’s been the one exception to a generally-accurate rule. With that one exception, I think I had a generally accurate idea of what we were in for under a Trump presidency.

Main conclusion, one year later: damn, that was a LOT of media noise.

Would I do it again?
At this point… not sure. If you’d asked me a year ago, I would have predicted that Trump would perform worst relative to Hillary over the first year. Hillary’s big advantage was already having the day-to-day presidential skill set and knowledge base. I expect the worst of Trump has passed, and we’re just now getting to the point where the good parts might shine. We’ll see.

Tuesday, January 16, 2018

Perspective

There was some trouble posting this earlier; Blogger did something weird to the formatting. It's still not quite right, but I figured having something here is better than nothing, especially for people who use the RSS feed.

The first US census was taken in 1790. Boston, according to the census, housed 18,320 people. The famous Battle of Bunker Hill, Boston’s main battle in the American Revolution, saw about 2,400 colonial militia face off against at least 3,000 British redcoats.

Let’s put that in perspective. In 1970, Kent State University had about 21,000 students - slightly more than 1790 Boston. The protest which ended with the Kent State shootings drew about 2,000 students.

So, comparing the Battle of Bunker Hill and the Kent State shootings, we see communities of comparable size, and “rebel” forces of comparable size.

In 1775, a few strong writers and orators (e.g. Samuel and John Adams) could rile up the entire city of Boston to the point of armed rebellion. Imagine this today - despite the daydreams of protesters and organizers, it seems pretty unlikely that a major city would be driven to arms by politicians and activists. There are just too many people to reach them all. But in 1775, the entire city of Boston was only as big as a mid-size modern university. The entire community could be riled up by a handful of writers and speakers.

The comparison between 1775 and today grows even stranger when we think about the battle itself. 2,000 militia - roughly comparable to a campus protest, but with guns - went toe-to-toe with the military of the world’s dominant empire. When the war was over, the student-protesters-with-guns came out ahead, and an entire new nation was founded.

In 1775, a person, a pen, and a soapbox could make that sort of thing happen. Today, no way. Why? Population growth. When the third-largest city in the region was smaller than today’s universities, communities were small enough that a few people could mobilize a large fraction of the population. But as populations grew, the methods which once mobilized tight communities no longer worked.

One more piece of perspective, to drive home the point. In the 1790 US census, New York City had a population of 33,131. That’s comparable to today’s Claremont, CA, where I went to college. Claremont isn’t a tiny town - most people don’t know each other. On the other hand, their kids all go to the same high school (Claremont High). That was New York City, the largest city in the US, in 1790: small enough that everybody’s kids would have fit in one modern-day high school.

Some of the teachers at Claremont High have probably met a majority of the people in Claremont. Shaking hands with everyone in the city is quite feasible, and politicians in 1790 New York probably did just that.

And that was the largest city in the nation! When Jefferson called the US a nation of small farmers, he wasn’t waxing poetic or fantasizing. The whole nation had 3.9 million people in 1790; the 24 largest cities housed just 0.2 million. 90% of the population worked on farms - I’ll have more to say about this in a future post.

Friday, December 15, 2017

Bitcoin Future-Spot Divergence



As of last night’s close, the price for a January bitcoin future was $1,171.21 higher than the price for a bitcoin. That means anyone could:
  • Buy one bitcoin for about $16,500
  • Sell a bitcoin future for about $17,500
  • Wait until January
  • Sell off the bitcoin and pay off the future contract
  • Pocket over $1,000.
That’s a return of ~6% in just one month, and you can lock it in instantly - once the bitcoin is bought and the future sold, their values will move in tandem, so there’s no risk of losing money. Normally, we expect these kinds of arbitrage opportunities to disappear quickly - especially for assets like bitcoin, which are easy to acquire and cost nothing to store. So why haven’t the prices converged?
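The arithmetic behind that ~6%, as a quick sanity check (prices rounded as in the steps above):

```python
spot = 16_500      # cost to buy one bitcoin today
future = 17_500    # price locked in by selling one January future

profit = future - spot              # locked in the moment both trades execute
monthly_return = profit / spot
print(profit)                       # 1000
print(round(monthly_return, 3))     # 0.061, i.e. ~6% in about a month
```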


If you want to know why an apparent arbitrage opportunity hasn’t disappeared, an easy way to find out is to try to exploit it, and see what stops you.


In this case, the main issue is that a bitcoin future is a future. Futures require both parties to hold margin: money available to cover their side of the contract as prices move. Usually, margin on a future isn’t huge, since prices aren’t too volatile. But bitcoin? Very volatile. That means very high margin requirements.


As usual, Interactive Brokers has everything you need to know on one page: margin requirement to sell a single bitcoin future is $40,000. Now, that margin can still earn interest while it’s sitting around, so it isn’t a “cost” per se. You don’t need to spend it in order to take advantage of the arbitrage opportunity. But you do need to have it available, and you can only arbitrage one bitcoin per $40,000 available.
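To see how the margin requirement dilutes the trade, here's a rough sketch of the return on total committed capital (it ignores the interest earned on the idle margin, so it slightly understates the true return):

```python
profit = 1_000       # locked-in arbitrage profit per bitcoin
spot = 16_500        # capital spent buying the bitcoin
margin = 40_000      # capital that must sit available per futures contract sold

capital_per_unit = spot + margin
return_on_capital = profit / capital_per_unit
print(round(return_on_capital, 3))   # 0.018: under 2% per month on committed capital
```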


This is still a pretty good opportunity, but you can only put so much money into it. That explains why small traders aren’t wiping out this arbitrage opportunity. So, next question: why aren’t the usual big institutions arbitraging away that price difference?


Usually, here’s how the situation would play out. A trader would buy a bitcoin and sell a future. Now, the trader would like to repeat this trade in order to make more money. So, the trader would go to a banker and say “hey, I have this low-risk arbitrage opportunity, I’d like to take out a loan collateralized by my one bitcoin in order to leverage my position.” Banks love collateral, so they’d give the trader a loan, and the trader would make the same trade again. Now the trader has another bitcoin, gets another loan, rinse, lather, repeat. In actual practice, many of the middle steps happen automatically, and the whole process is called “leveraging”.


With bitcoin, this is not so easy. Good luck finding a banker who will make a loan collateralized by bitcoin (i.e., look for a margin account which allows direct bitcoin trading). Even setting aside that issue, a trader would also have to borrow for the futures’ margin requirement. Now, the whole arbitrage together is actually very low risk, so it should be possible in theory to get a loan to do this… but it would require a personal relationship with a banker who understands the nitty-gritty and is willing to dip their toe in untested waters.

Put it all together, and we have a beautiful, persistent arbitrage opportunity limited by liquidity. It will disappear eventually, but it’s going to take time for the bankers to warm up.

Tuesday, December 5, 2017

Lemons, In-Group Signals and Marketing

“Professor Quirrell didn't care what your expression looked like, he cared which states of mind made it likely.” - Harry Potter and the Methods of Rationality, chapter 26


Quick, which slogan will yield more sales:
  • “Be smart, buy X!”
  • “Not Your Grandma’s X”
Got a guess? Good, remember it.


This post is going to present some background game theory on signalling, and then talk about what that theory predicts for the slogans above.


The Lemons Game

What can a used car dealer say to convince you it's not a lemon?

Consider a game with two players: a prospective car buyer, and a seller. The seller begins with either a working car or a broken car - a “lemon” - at random (50% chance for each). The seller knows whether or not the car is a lemon, and considers a working car more valuable. So, for instance, maybe the seller is willing to sell only above $10k if the car is working, but will sell a lemon as low as $5k. On the other side, the buyer is willing to pay up to $12k for a working car, or up to $6k for a lemon.


One little wrinkle: the buyer has no way to check whether or not the car is a lemon before deciding whether to buy. Mechanical problems may not be immediately obvious during a test drive.


What happens?


Well, think it through from the buyer’s perspective. The car has a 50% chance of being a lemon, a priori. Ignoring risk aversion, a buyer would pay $9k for a 50/50 chance of a working car… but at that price, the seller wouldn’t be willing to part with a working car. So if the buyer offers $9k, then she will only end up with either no sale or a lemon! So, the buyer will only bid somewhere between $5k and $6k in the first place - since she’s only going to get lemons anyway, she only offers enough to buy a lemon.


The sad thing is, you may have an honest seller on one side trying to sell a working car for $11k, and a buyer on the other side who would love to buy a working car for $11k… but the deal won’t happen, because there’s no way for the seller to convince the buyer that the car isn’t a lemon. Anything the seller could say which would convince the buyer, a dishonest seller with a lemon could also say.
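The no-trade logic can be checked numerically with the post's numbers (seller reservation prices of $10k/$5k, buyer values of $12k/$6k, 50/50 odds). A minimal Python sketch:

```python
P_LEMON = 0.5
SELLER_MIN = {"working": 10_000, "lemon": 5_000}   # seller reservation prices
BUYER_VALUE = {"working": 12_000, "lemon": 6_000}  # buyer's willingness to pay

def expected_surplus(offer):
    """Buyer's expected surplus from a take-it-or-leave-it offer."""
    surplus = 0.0
    for kind, prob in [("working", 1 - P_LEMON), ("lemon", P_LEMON)]:
        if offer >= SELLER_MIN[kind]:   # seller accepts only at/above reservation
            surplus += prob * (BUYER_VALUE[kind] - offer)
    return surplus

print(expected_surplus(9_000))   # -1500.0: only lemon-sellers accept $9k
best_offer = max(range(0, 12_001, 500), key=expected_surplus)
print(best_offer)                # 5000: the buyer's best offer only ever buys lemons
```

At $9k, only lemon-sellers accept, so the buyer overpays for a lemon; searching over offers confirms the buyer does best bidding in lemon territory, and working cars never trade.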


Cheap Talk vs Signalling
The lemons game illustrates a key concept: even when you let two people communicate freely, it may be impossible to convey relevant information between them.


This problem comes up whenever someone might be motivated to bluff. In the lemon game, a seller with a lemon is motivated to bluff - whatever a seller with a working car might say to sell for $11k, the seller with a lemon will also say in an attempt to get $11k for their lemon. Thus the phrase “cheap talk”: talking can’t actually convey any useful information here.


In the real world, we have various ways around this.


Among the simplest is Carfax: a trusted third party which can tell the buyer whether the car is a lemon. A seller with a working car will happily pay Carfax $50 to certify it. The certified car will then sell somewhere around $11k.


But barring trusted third parties (Carfax isn’t perfect), how else can a seller signal that their car is not a lemon? Remember, the key here is that it must be something which a seller with a lemon could not, or would not, do!


Another simple answer: offer to cover the cost of any mechanical issues for some time after the sale. That would be expensive for lemon-sellers, so they won’t agree to it. Any seller willing to cover mechanical costs must be selling a working car. This is useful, but it creates a new problem: the buyer will be incentivized not to take very good care of the car, since the seller is covering repair costs anyway.


Here’s a more interesting answer: whenever the car needs repairs, the buyer pays for the repairs and then sends the receipt to the seller. The seller takes enough money out of their bank account to cover the repairs, puts the money in a fireplace, and burns it. As before, this is a bad deal for lemon-sellers, so they won’t agree to it. Only sellers with working cars, expecting few mechanical issues, will agree - ideally, this means little or no money will actually need to be burned! What matters is the seller’s willingness to bet on the quality of the car, which signals the car’s quality to the buyer.
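To check that such a contract actually separates the two seller types, plug in some illustrative numbers (the expected repair/burn costs below are assumptions for the sketch, not figures from the post):

```python
# Assumed expected repair/burn costs over the contract period (illustrative).
EXPECTED_BURN = {"working": 500, "lemon": 7_000}
SELLER_MIN = {"working": 10_000, "lemon": 5_000}   # reservation prices from the post
PRICE_WITH_CONTRACT = 11_000                       # what a convinced buyer will pay

def gain_from_signing(kind):
    """Seller's net gain from signing the burn contract and selling at $11k,
    relative to just selling at their reservation price without it."""
    premium = PRICE_WITH_CONTRACT - SELLER_MIN[kind]
    return premium - EXPECTED_BURN[kind]

print(gain_from_signing("working"))   # 500: the honest seller comes out ahead
print(gain_from_signing("lemon"))     # -1000: the lemon-seller refuses to sign
```

As long as the expected burn exceeds the lemon-seller's premium but not the honest seller's, only working-car sellers sign, which is exactly what makes the signal credible.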


Marketing and In-Group Signalling
In the lemons game, the key to effective signalling is that the signal - whether a carfax report, a contract to cover breakdowns, or a contract to burn money in the event of breakdowns - must be very expensive for a lemon-seller, but not very expensive for the seller of a working car. This is critical. Anything which a dishonest lemon-seller could afford to say is cheap talk, and buyers won’t buy cheap talk.


This has interesting implications for in-group signalling.


Suppose I want to signal to my goth friends that I’m in their boat. So, I put on the most over-the-top outfit I can manage, chains and black makeup, the whole shebang - the key being that such an outfit would definitely not fit in any non-goth social circle. (Politics offers better examples, but I don’t want to derail this post.)


If someone wants to signal their membership in a group, then the best way to do that is with something which would be prohibitively expensive for someone outside the group. In these situations, we’re not usually talking about monetary expense. Instead, the “cost” is in social capital with other groups. In other words: the best way to signal membership in an in-group, is to do something which completely ruins one’s chance with the out-group.


Which brings us to marketing.


Truism: it’s better to have 10% of the population 100% interested in your product than to have 100% of the population 10% interested. Nice heuristic, but the model which usually underlies it in practice is in-group signalling. If you can signal that your product is affiliated with some group, then group members will buy your brand religiously. Apple, Converse, Starbucks… many a household name has made a fortune on this principle. But the all-important key to an in-group product is that it must not target everyone. Like the lemons game, if anyone can send the signal, if the signal is no more expensive for the out-group than for the in-group, then the signal is not a signal at all - it’s just cheap talk. Signals must cost something.


If you want to signal that your product is great for <in-group>, then the best way to do that is to offend <out-group>. The more blatant, the better. “Duck Dynasty” figured this out better than most, and gathered a truly ridiculous following for a show which could generously have been called a non-entity. Part of the key to Starbucks’ success is all the people who hate it and hate everything a $5 coffee stands for. That’s the beauty of it: offend the out-group’s sensibilities, and you send a strong signal of in-group status. It’s the equivalent of offering to burn money if the car breaks down. (Just make sure to pick an actual in-group first; offending random people is no more useful than randomly burning money!)


Let’s say, hypothetically, you want to get young people to use product X. Easy tagline: “Not Your Grandma’s X”. Conversely, for targeting less-young people, “X for Grownups”, ideally delivered with an ad making fun of teenagers for being idiots (I remember a great Old Spice campaign along these lines). Humans have great intuition for this sort of thing: we see our outgroup mocked, and automatically assume that the mocker is “on our side”.


A few other ideas, to convey the flavor:
  • “Moms love X!”
  • “The X for people who like trucks”
  • “X: for true <sports team> fans only”
Note that these don’t always “offend” the outgroup per se; but they do all but guarantee that nobody in the out-group will ever buy your product. Indeed, the more they discourage out-group members from buying the product, the better they work. Non-moms will almost never buy “X for moms”. By way of contrast, consider a useless slogan like “Be smart, buy X!”. Everyone wants to be smart! Unless your advertising manages to convey a very group-identity-loaded concept of “smart”, enough to actually turn away non-”smart” consumers, it’s going to come off as generic cheap talk and fail to tap into any identity at all.


The takeaway:

  • Signalling should be costly to fake; otherwise it’s just cheap talk.
  • In the case of in-group signalling, the “cost” is usually to push away out-group members.
  • Humans have strong intuition for this stuff.
  • In-group-specific marketing should push away people not in the group.
  • More generally, in marketing, any signal which costs only ad spend dollars will be seen as cheap talk - ad spend is cheap for fakers.

Wednesday, November 22, 2017

Computational Limits of Empire

The tabulation of the 1880 US census took 8 years to complete. As preparation began for the 1890 census, it was estimated that tabulation would not be complete until after the 1900 census began! The computational load was declared to be too great; an alternative approach was needed.

The problem was solved by a mechanical computer based on punch cards. A company was founded specifically to build the contraption; that company would later become IBM.

I was thinking about this story, and I wondered: just how large was the US population in 1890? Did other nations reach that population level before? How did they handle the problem?

The 1890 US census counted 63M people, in total (source). How large did the Roman empire grow? Well, the Roman empire seems to have reached its peak around… 60M people. At this point I really started to get suspicious, and looked up population statistics for the ancient Persian empire and the Chinese empires. 50M people for the Achaemenid empire (Persian). China had 30-85M under the Han dynasty, stabilized around 50M for a few centuries, then grew from 45 to 80M under the Tang dynasty.

Next, I pulled up wikipedia’s list of largest empires and Business Insider’s list of top 10 greatest empires. I had to google around for population stats, many of which were not immediately available, but here are the big ones, excluding empires from 1700 or later:
There were a number of smaller “empires”, mainly the predecessors and/or successors of empires on this list. But on the other end, only the Mongols managed to scrape together an empire of over 100M people, and that empire split within a generation (spinning off the 60M-person Yuan dynasty).

Yes, this is a far cry from systematic. Yes, there’s room to complain about selection. Nonetheless, there is at least a very noticeable tendency for pre-modern empires to max out in the 50-70M population range.

Is the empire population cap due to computational limits in governance? I’m not sure how to properly test that hypothesis, but it does seem awfully suspicious that the founding event of the modern computing industry was triggered specifically by the US passing that 60M population mark.

One interesting question to pursue next: how did other modern nations/empires handle passing the 60M population mark? India and China both achieved sustained growth and built stable nations of over 100M people during the early modern era. Presumably the British empire’s population was also beyond 100M during much of the 19th century. Did these states also face computational blockades? What techniques did they introduce which might explain their ability to overcome the 60M person cap?