Forest and Trees

caught
a net
my mother’s gaze, my silence; my son’s distance
the splintered table a graveyard of sunlight

my fingers on the net, grasping gently
the fibers inside them – inside? – wound more tightly still

one more crow and the flock arises
an eyebrow is raised
but we live by flocks, forests, villages
yet touch a feather, a fir’s lime-fresh needle, a warm cheek smiling

my ruler bends at such tangles
we’ve always started with the alpha, omega
and left the middle
to some other guy

this ruler is borrowed
but I think I will bend it
(gently, my fingers)
some more


The Is-Ought Problem and Moral Advocacy

David Hume once made the very useful observation that you can’t derive a conclusion about what ought to be solely from statements about what is. Ultimately, you need a stated assumption about what you believe is good or ideal or desirable, and then you can use logic to draw out the implications. It’s called the Is-Ought problem.

So, much to my chagrin, I can’t tell people they shouldn’t like mayonnaise based on strictly logical facts. Somewhere in my arguments there will be some (usually hidden) assumption about what I think is right (e.g. the destruction of all mayo or of all creamy white things), with the rest of my argument being constructed on that premise. And because it’s not really a matter of logic, people can agree or disagree with that premise.

I think Hume’s idea is true, but it sure does make moral advocacy more difficult. It means I can’t just tell people that being misogynist or racist, or doing a lot of the other things I hate, is irrational. But I’m not a sophist, and common sense tells me we need to keep telling people to do “the right thing”, so I need a different approach.

Here’s what I came up with:

  1. Start with the standard existentialist position: there’s no externally imposed meaning, so each person has to impose meaning internally. I can keep telling people to stop being racist or misogynist or into mayo – but I have to own it. I think those things are wrong; it’s not God or Reason or any other higher authority that does the work or gets the blame.
  2. If I own it and am trying to get people to change, it’s incumbent upon me to persuade them. I can’t just call them irrational and go home, basking in superiority. If I didn’t persuade them, I’ve failed.
  3. I think people can be persuaded; I think human beings share a lot of the same wiring. If you dig deep enough past the culture and experience shaping a person, I think you get pretty similar basic drives. So, putting it roughly, there’s probably a lot of shared ought between me and most people.
  4. Start from the common ground and build up. Start with finding the common ground of oughts and then try to work your way towards the point you’re advocating for by using logical arguments. Don’t start with the arguments until you’ve hit on genuine common ground. You’re essentially leveraging human beings’ need for internal consistency to change their point of view.
  5. ???
  6. Profit!

Admittedly, this is difficult. There may be no common ground, which means you’re done (I don’t think this is the issue most of the time). More likely, people will stop listening or engaging with your arguments (and punch you) whenever those arguments make them uncomfortable (something about horses not drinking water). Or it may turn out you have no valid arguments, in which case you’ve got to do more thinking.

Its chances of working aren’t great most of the time, but I’m guessing they beat those of the alternative approach (telling people “it’s illogical to be racist”). Of course, you could also just bully people into thinking what you think, but I think that’s wrong.

The Elephant is Not a Robot

This is a birthday present to myself: I’m 33 today!

I used to really dislike emotions. For context: my favorite fictional character is Star Trek‘s Mr. Spock, I think Danny Kahneman’s Thinking, Fast and Slow is a masterpiece and I’m a big fan of Eliezer Yudkowsky‘s writings on rationality. I’ve just always thought emotions were a really outdated information-processing mechanism and should be totally discarded in favor of logic.

I’ve been interested in cognitive biases mostly as a way to “think better”; to see reality more clearly. My underlying mental model for how I worked was that there was some signal distortion (i.e. cognitive biases) between my true intentions and what I ended up doing; by understanding the nature of the signal distortions, I would be able to correct for this and achieve my goals. In essence, I was assuming that there was a “me” (my prefrontal cortex) that was giving instructions to this machine (everything else in my body).

News flash: this isn’t a very good description of how I actually work.

My new working model is pretty much Jonathan Haidt’s “rider and the elephant” metaphor. My conscious/analytic/rational side isn’t an engineer keying in instructions to the faulty robot that is my body – it is a skinny, frail rider sitting astride a huge elephant, trying to get it to move in one direction or another.

The elephant is not a robot. The elephant wants things. It gets in a bad mood. It learns its own lessons from experience. It decides when it pays attention to the rider. It prefers that the rider ask for things nicely.

The rider is not an engineer in complete command; he’s just a skinny dude along for the ride, trying to have an influence. The worst thing about the rider is that the guy likes to think he’s in charge; since the elephant generally moves in the direction the rider intends, the rider thinks he’s commanding a robot. When the elephant doesn’t quite do what the rider asks, the rider just thinks his robot has some design flaws. The rider will generally prefer any alternative explanation for these “errors” over the simplest, most likely one: he’s not in complete control.

This post makes it sound like I don’t like the rider, which isn’t true: if I were going to design a human being from the ground up, I’d probably go with the engineer/robot model and make the rider king, just like he’d like! But no one’s offered me that job yet.

So, on the margin, I think a better model for how I work is that I am both the rider and the elephant. Thinking this way, the rider will probably be a lot more effective at giving directions now that he knows he’s not riding a mecha; the elephant will probably have a nicer time because of it, maybe even work a little harder. They’re not a bad combination, if you ask me.