This is a birthday present to myself: I’m 33 today!
I used to really dislike emotions. For context: my favorite fictional character is Star Trek‘s Mr. Spock, I think Danny Kahneman’s Thinking Fast and Slow is a masterpiece and I’m a big fan of Eliezer Yudkowsky‘s writings on rationality. I’ve just always thought emotions were a really outdated information processing mechanism and should be totally discarded for logic.
I’ve been interested in cognitive biases mostly as a way to “think better”; to see reality more clearly. My underlying mental model for how I worked was that there was some signal distortion (i.e. cognitive biases) between my true intentions and what I ended up doing; by understanding the nature of the signal distortions, I would be able to correct for this and achieve my goals. In essence, I was assuming that there was a “me” (my prefrontal cortex) that was giving instructions to this machine (everything else in my body).
News flash: this isn’t a very good description of how I actually work.
My new working model is pretty much Jonathan Haidt’s “rider and the elephant” metaphor. My conscious/analytic/rational side isn’t an engineer keying in instructions to the faulty robot that is my body – it is a skinny, frail rider sitting astride a huge elephant, trying to get it to move in one direction or another.
The elephant is not a robot. The elephant wants things. It gets in a bad mood. It learns its own lessons from experience. It decides when it pays attention to the rider. It prefers that the rider ask for things nicely.
The rider is not an engineer in complete command; it’s just a skinny dude along for the ride, trying to have an influence. The worst thing about the rider is that the guy likes to think he’s in charge; since the elephant generally moves in the direction the rider intends, the rider thinks he’s commanding a robot. When the elephant doesn’t quite do what the rider asks, the rider just thinks his robot has some design flaws. The rider will generally prefer every alternative explanation to these “errors” to the simplest, most likely one: he’s not in complete control.
This post makes it sound like I don’t like the rider, which isn’t true: if I were going to design a human being from the ground up, I’d probably go with the engineer/robot model and make the rider king, just like he’d like! But no one’s offered me that job yet.
So, on the margin, I think a better model for how I work is that I am both the rider and the elephant. Thinking this way, the rider will probably be a lot more effective at giving directions now that he knows he’s not riding a mecha; the elephant will probably have a nicer time because of it, maybe even work a little harder. They’re not a bad combination, if you ask me.