Wednesday, September 30, 2015

Abstract Human

I'm going to present a psychological model. I've never seen it in a professional publication (nor have I looked), and I have no hard evidence for it, but I do believe it to be true.

According to the Machiavellian intelligence hypothesis, the main problem that drove human brain evolution was predicting and outmaneuvering other human brains. Unsurprisingly, this evolutionary process left us with specialized hardware for modelling other humans: mirror neurons. When we see someone else doing something, mirror neurons fire in our own heads to simulate the activity. When you "put yourself in someone else's shoes" and imagine yourself as someone else, you are using your mirror neurons.

Let's create a more abstract model of the mirror neuron. We start with a black box representing the human brain. The box is quite complicated, and to this day we do not understand its internal functions. We do know that there are rather a lot of these boxes in the world, not identical but quite similar. The boxes constantly talk, compete, cooperate, scheme, fight, bicker, etc...

Each box is equipped with advanced planning capacity. A box can imagine hypothetical environments, and imagine what it would do in those environments. (The technical term for such a what-if scenario is a "counterfactual"; we have a very firm mathematical understanding of them.) The box runs its usual programs within this counterfactual world, and the output is its own behavior in that environment. This is very helpful when the box makes plans. In humans, we sometimes call it "daydreaming" when one spends too much time in counterfactual mode.
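
For concreteness, here's a toy sketch of counterfactual mode in Python. Everything in it (the Box class, the decide() stub, the parameter names) is my own illustrative invention, just a cartoon of the idea:

```python
# A toy model of a box with "counterfactual mode". Purely illustrative;
# decide() stands in for all the machinery we don't understand.

def decide(parameters, environment):
    # Placeholder for the box's actual (and poorly understood) decision machinery.
    return f"what a box with {parameters} does in {environment!r}"

class Box:
    def __init__(self, parameters):
        # 'parameters' stands in for everything going on inside the box.
        self.parameters = dict(parameters)

    def act(self, environment):
        # The box's usual program: situation in, behavior out.
        return decide(self.parameters, environment)

    def imagine(self, hypothetical_environment):
        # Counterfactual mode: the same program run on an imagined input.
        # The output is "what I would do if the world were like this".
        return self.act(hypothetical_environment)
```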

A related piece of hardware allows for even more advanced planning: the box can simulate other boxes. Of course, other boxes are extremely complex, so they cannot be simulated from scratch... but because the boxes are so similar, another box can be emulated directly on the hardware of any single box. A box simply goes into counterfactual mode, changes a few internal parameters to mimic the other box, and then runs in the counterfactual world normally. The box keeps detailed lists of parameters to change in order to simulate each of the boxes in its social circle. These internal parameter change lists are the intuition underlying what we call "personality".
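
One way to picture a change list is as a small set of parameter overrides applied on the way into counterfactual mode. Continuing the toy sketch above (the override mechanics and the example parameters are invented for illustration):

```python
def simulate_other(box, change_list, hypothetical_environment):
    # Temporarily overwrite a few parameters, run the box's own machinery
    # in counterfactual mode, then restore. The change list is tiny
    # compared to the full parameter set, which is what makes this cheap.
    # (Assumes the overridden parameters already exist in the box.)
    saved = {name: box.parameters[name] for name in change_list}
    box.parameters.update(change_list)
    try:
        return box.imagine(hypothetical_environment)
    finally:
        box.parameters.update(saved)

# Alice keeps a short change list for each box in her social circle.
alice = Box({"risk_tolerance": 0.2, "talkativeness": 0.7})
bob_as_seen_by_alice = {"risk_tolerance": 0.9}   # "Bob is reckless"
prediction = simulate_other(alice, bob_as_seen_by_alice, "an icy road")
```

The only point of the sketch is that predicting Bob never requires a separate model of Bob; Alice reuses her own machinery with a handful of tweaks.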

Now we get to the interesting part. Turns out, the box has an internal change list representing itself. Remember, all this hardware evolved primarily for modelling other boxes. When the box goes into counterfactual mode, a change list is applied automatically; having no changes at all is not an option. Some of those changes override components normally attached directly to the physical world; those components must be circumvented in order for the counterfactual processing to remain counterfactual. So, if the box wants to model itself, it needs a change list like any other. This change list is the box's abstract social representation of itself. We might even call it the box's "identity".
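
In the toy sketch, this just means the box carries one more change list, keyed to itself, and self-simulation routes through it like any other (again, all names are invented):

```python
class SocialBox(Box):
    def __init__(self, parameters, self_change_list):
        super().__init__(parameters)
        # The box's "identity": its outside-view representation of itself.
        self.self_change_list = dict(self_change_list)

    def imagine_self(self, hypothetical_environment):
        # There is no "apply no changes" option: imagining yourself goes
        # through the same override machinery as imagining anyone else.
        # (The overrides that disconnect real senses and motors are not
        # modeled here.)
        return simulate_other(self, self.self_change_list,
                              hypothetical_environment)
```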

Notice that the contents of the box's self-representing change list need not be accurate. It's a change list like any other, representing a box as seen from the outside. The self-representing change list is learned, just like the others, by observing behavior (primarily social interactions). Of course, the self-representing change list is used in virtually all planning, so its contents also affect behavior. The result is a complicated feedback interaction: self-identity informs behavior, and behavior informs self-identity. On top of that, self-identity also learns heavily from interactions with other boxes. If box A and box B have very different change lists for box A, then box A will behave according to its own list, but will simultaneously update that list throughout the interaction to account for box B's representation of box A. Oh, and A might sometimes change a parameter or two in its self-identity just to try it out.
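
I won't pretend to know the actual update rule, but the shape of the feedback loop looks something like this. The blending rule and the learning rate below are made up; the claim is only that some mutual updating happens:

```python
LEARNING_RATE = 0.1   # invented; the real rate is anyone's guess

def blend(own, observed, rate=LEARNING_RATE):
    # Nudge each parameter override toward an observed estimate of it.
    keys = set(own) | set(observed)
    return {k: (1 - rate) * own.get(k, 0.0)
               + rate * observed.get(k, own.get(k, 0.0))
            for k in keys}

def infer_changes_from(behavior):
    # Placeholder: the same learning process that builds change lists
    # for other boxes, here applied to one's own observed behavior.
    return {}

def interact(a, environment, a_as_seen_by_b):
    # A behaves. (In the full story, A's planning already consulted its
    # self change list, so identity has shaped this behavior.)
    behavior = a.act(environment)
    # A learns about itself from its own behavior...
    a.self_change_list = blend(a.self_change_list,
                               infer_changes_from(behavior))
    # ...and drifts toward the representation B holds of it.
    a.self_change_list = blend(a.self_change_list, a_as_seen_by_b)
```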

Ok, deep breath. Direct modelling of interactions is definitely going to be very, very complicated. Let's ignore that problem and consider another angle.

What if there are subpopulations with similar parameters? Then a box can simplify its bookkeeping by keeping a single change list for each such group. This shared change list applies to the whole group; we might call it a "group identity". Of course, any box may belong to multiple groups. A change list for one box might look like "apply the change lists for groups X, Y, and Z, then apply the following individual changes...". In practice, change lists consist mainly of group memberships; special-case changes are less efficient, so we try to avoid them.
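
As a sketch (the groups and the numbers are invented), expanding a change list then amounts to applying the group lists first and the personal tweaks last:

```python
# Shared change lists for groups of similar boxes ("group identities").
GROUP_CHANGE_LISTS = {
    "accountants":   {"risk_tolerance": 0.2, "punctuality": 0.9},
    "motorcyclists": {"risk_tolerance": 0.8},
}

def expand_change_list(group_memberships, individual_changes):
    # Later entries win: group lists first, special-case tweaks last.
    # Special cases are the expensive part, so most lists end up being
    # little more than a handful of group names.
    expanded = {}
    for group in group_memberships:
        expanded.update(GROUP_CHANGE_LISTS[group])
    expanded.update(individual_changes)
    return expanded

# "Carol is an accountant and a motorcyclist, and unusually chatty."
carol_as_seen_by_alice = expand_change_list(
    ["accountants", "motorcyclists"], {"talkativeness": 0.9})
```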

This means that a box's self-identity also consists mainly of group memberships (although research shows most boxes are much more tolerant of special-case changes in their self-identity). And remember, the self-identity, like any other change list, is constantly learned. So a box can change its self-identity by changing its group memberships, or even just by pretending to change them.

Notice that both group membership and group parameters are constantly learned. So, if box A suddenly starts wearing leather jackets, nearby boxes will update their change lists to increase parameters which tend to cause leather-jacket-wearing, including group memberships. In fact, A itself will update its own self-identity based on the new clothes. As many people have observed, trying on new clothes is trying on a new identity, and a change in clothing style will cause a change in behavior. Clever companies even take advantage of this in their ad campaigns; the Converse shoe-box company is especially good at it.
