Friday, November 30, 2012

Irrational Rationality

Well, I thought I had 'tao' basically finished, and then I found out that Jay Earley from IFS also practices the diamond approach. His article on the relationship between the two is quite a bit different from mine, and mentions some things that I didn't. Apparently his books are primarily focused on IFS as 'therapy', ie goal/problem oriented, whereas the DA is very open-ended. He has worked with some DA students, but generally he keeps IFS and DA separate. Since I have the opportunity, I'm planning to shoot him some questions and see if I can get some clear(er) answers before I rewrite (and yes, I've been slacking).

Today, however, I want to talk about a disturbing trend that I've seen gurgling from the annals of the interwebz. There are a few people who hold 'rationality' as an ideal, and profess that all people should live with 'rationality' as their primary standard. In IFS terms, you might say they have an over-analytical part, and moreover that they're trying to convince everyone else to be led by over-analytical parts as well. Speaking 'rationally', though, my bigger complaint is that they often use logic in ways it is not meant to be used, or in ways which are incorrect or not logic at all. At the same time, they have no problem regularly begging for donations for their vaporware projects. There are two main offenders that I'm aware of: Stefan Molyneux of Freedomain Radio, and the folks at lesswrong, who also run the Singularity Institute and the Center for Applied Rationality. After I've deconstructed them to death, I'll discuss what 'rationality' really is and what it really means.

Stefan Molyneux, 'Universally Preferable Behavior', and 'Real-time Relationships'

Stefan Molyneux is a libertarian activist who has an endless collection of podcasts about libertarianism, Austrian economics, psychology, atheism, and what he deems 'objective ethics'. While he makes some good points about government, and introduced me to anarcho-capitalism and Austrian economics, I can't say that I agree with much else that he says. Several Austrian economists have pointed out that he has no idea what he's talking about, both in regard to economics (a topic for another time) and to ethics, which is a consistent pattern with most anything he says.

"UPB"

Molyneux's pride, glory, and primary source of income is convincing people that he has discovered something miraculous: a system of 'objective' and 'universal' ethics, which he calls "Universally Preferable Behavior". His definitions of 'objective' and 'universal' are far from standard, however, as is the 'logic' he uses to justify his conclusions.

At first, he acknowledges Hume's observation that you cannot derive an 'ought' from an 'is', and that there is no objective quality of 'better':

As Hume famously pointed out, it is impossible to derive an “ought” from an “is.” What he meant by that was that preference in no way can be axiomatically derived from existence. It is true that a man who never exercises and eats poorly will be unhealthy. Does that mean that he “ought” to exercise and eat well? No. The “ought” is conditional upon the preference. If he wants to be healthy, he ought to exercise and eat well. It is true that if a man does not eat, he will die – we cannot logically derive from that fact a binding principle that he ought to eat. If he wants to live, then he must eat. However, his choice to live or not remains his own.
Similarly, there is no such thing as a universally “better” direction – it all depends upon the preferred destination. If I want to drive to New York from San Francisco, I “ought” to drive east. If I want to drive into the ocean from San Francisco, I “ought” to drive west. Neither “east” nor “west” can be considered universally “better.”
It is true that very few people do drive into the ocean, but that does not mean that it is universally true that nobody ought to drive into the ocean. Principles are not democratic – or, if they are, we once more face the problem of rank subjectivism, and must throw the entire concept of ethics out the window.


and acknowledges that a theory must be consistent with observed phenomena in order to be considered valid:
If I say that gravity affects matter, it must affect all matter. If even one pebble proves immune to gravity, my theory is in trouble.
and that preferences cannot be objective:
Preferences do not exist objectively within reality.
but then (actually out of order in the book) he contradicts himself, saying:
When I say that some preferences may be objective, I do not mean that all people follow these preferences at all times. If I were to argue that breathing is an objective preference, I could be easily countered by the example of those who commit suicide by hanging themselves. If I were to argue that eating is an objective preference, my argument could be countered with examples of hunger strikes and anorexia.
and makes arbitrary leaps over the is/ought gap:
If you correct me on an error that I have made, you are implicitly accepting the fact that it would be better for me to correct my error. Your preference for me to correct my error is not subjective, but objective, and universal.
Never mind the fact that he originally states that it is behavior which is objective, universal, and to be preferred; here he clearly states that it is the subjective preference itself which is 'objective and universal'.

And, to finish it off, he 'proves' his theory in his favorite way: circular logic (a fallacy, for those not aware):
  1. The proposition is: the concept “universally preferable behaviour” must be valid.
  2. Arguing against the validity of universally preferable behaviour demonstrates universally preferable behaviour.
  3. Therefore no argument against the validity of universally preferable behaviour can be valid.
Overall, the book is nothing but a bunch of pseudo-logical rhetoric designed to convince over-analytical people that morality is objective and universal. The actual content of the book is nowhere near as organized as my deconstruction of it (which I strained to manage), which makes it much easier for the self-contradictions and blatant fallacies to hide. In spite of its "logical" facade, the actual arguments are predominantly emotional, with very little real logic or reasoning and absolutely zero evidence to support the claims he makes. The rest of the book only goes downhill from here, with false dichotomies and the same fallacies repeated in variations.

RTR

Molyneux's other major work he calls "Real-time Relationships" or RTR for short. In spite of its name, the primary focus of RTR is neither being in the moment, nor mindful awareness of one's feelings. Instead it encourages people to analyze every social interaction they have and to judge whether the other person/people are interacting morally or not.

In spite of the fact that Stefan often refers to IFS, both UPB and RTR make assumptions and recommendations that are in direct contradiction to the theory and teachings of Dick Schwartz. Molyneux often advises people to disown their families and to leave abusive relationships. While I agree that you have no obligations to stay in abusive relationships, he also promotes an attitude of judgment towards everyone (x is a bad/immoral person) and blame especially towards 'bad' parents. He also likes to play therapist, although he has never been trained in IFS or anything else, and obviously shows little comprehension of psychology in general.


Lesswrong, SI and CAR

In contrast to Stefan Molyneux, the folks at lesswrong are techno-socialists. They profess that in short order we will develop a superhuman AI which will either make human effort completely obsolete or else destroy humanity, Terminator-style. They too promote an absolute standard of 'rationality', which they claim will solve all of humanity's problems. However, the fallacies that plague lesswrong's arguments differ in some general ways from those of Molyneux. Their writing is also somewhat better organized, which makes it much easier to dissect.

Lesswrong's Bayesian Machine

The cornerstone of lesswrong's 'rationality' is Bayesian logic and game theory. In their distorted view of reality, human-level intelligence, and for that matter all truth, can be achieved using only these tools. They seem blissfully ignorant of visually-driven thought experiments, meta-logic, epistemology, and ontology. They only vaguely acknowledge that psychology exists, mentioning obscure concepts like 'ugh fields', which describe a very real aversion to confronting psychological 'burdens', but relegate their importance to the sole fact that they interfere with 'rationality' by producing cognitive bias. Their model is extremely oversimplified, to say the least, and not nearly as useful as they claim it to be.
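To be fair about what the machinery actually is: Bayesian updating is just a rule for revising a probability in light of evidence. Here is a minimal sketch, with made-up numbers of my own (not anything from lesswrong):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior P(H|E) from a prior P(H) and the
    likelihoods of the evidence under H and not-H (Bayes' theorem)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# A hypothesis with a 1% prior, and evidence that is 99% likely
# if the hypothesis is true but still 5% likely if it is false:
posterior = bayes_update(0.01, 0.99, 0.05)
print(round(posterior, 3))  # → 0.167
```

Note that the rule itself is perfectly sound; the problem is that it only relates probabilities you already have. It says nothing about where the hypotheses, models, or meanings come from, which is exactly the gap in treating it as the whole of intelligence.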

The Singularity Institute Mugs You

Springboarding off their dramatically oversimplified view of psychology and intelligence, they take a great leap into fantasy land, predicting that in 50 years we will have a superhuman AI that will be either the ultimate nanny or the Terminator. Thus, they claim, we need to start developing 'friendly AI' now, just like in an Isaac Asimov book. Even if you ignore the economic and physical problems with their vaporous theory, the fact that they aren't actually doing AI research, and the fact that real human intelligence cannot be summarized with Bayesian networks, their explanation of how we will achieve this is an insult to anyone with any knowledge of computer science.

Namely, they invoke Moore's law, claiming that since computing power tends to double every two years or so, we will necessarily have superintelligent computers just around the corner. Ignoring (and this is a lot of ignoring) the problem that Moore's law will soon run into the hard physical limits of the atomic scale, the real problem is that Moore's law does not apply to AI research at all. Yes, desktop computers have become much more powerful since they were first introduced; orders of magnitude more. However, computers still perform basically the same mechanical, deterministic tasks. Computing power does not equal intelligence, so no matter how fast or how complex a desktop computer becomes, it will not become a shred more intelligent, except perhaps by means of very complex software. Software which depends on AI research that they're not doing, and which moves far slower than Moore's law.
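For scale, here is what the raw doubling argument actually buys you over their 50-year horizon, sketched under the generous assumption that the doubling continues every two years the whole time:

```python
def moores_law_factor(years, doubling_period_years=2):
    """Raw speedup factor if capacity doubles every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# 50 years of uninterrupted doubling every two years:
print(moores_law_factor(50))  # → 33554432.0, i.e. 2^25
```

A machine roughly 33 million times faster running the same deterministic software is still just the same software finishing sooner; the speedup multiplies throughput, not understanding.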

In fact, SI doesn't actually do anything at all except self-flagellation and 'brand building' (ie, taking people's money). GiveWell declined to recommend SI, commenting that it hadn't actually accomplished anything, and furthermore that its core argument was a Pascal's mugging. I consider that an accurate assessment.
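For those unfamiliar with the term, a Pascal's mugging is an argument that pairs a vanishingly small probability with an astronomically large payoff, so that a naive expected-value calculation 'justifies' handing over your money. A sketch with illustrative numbers of my own:

```python
def naive_expected_value(probability, payoff):
    """Expected value as probability times payoff, with no sanity checks."""
    return probability * payoff

# "Donate $10 and there's a one-in-ten-billion chance you save a
# quadrillion future lives" - the naive expected value looks enormous:
print(naive_expected_value(1e-10, 1e15))  # → 100000.0
```

The arithmetic is trivially correct; the mugging lies in letting the claimant pick an arbitrarily huge payoff to swamp any skepticism encoded in the tiny probability.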

The Center for Applied Fallacy

As part of their campaign to 'rationalize the world', they split off an extra branch of SI aimed at imposing 'rationality training' on unsuspecting layfolk. Through this effort, they hope to brainwash the world into becoming good little 'rational' techno-socialists, ready to accept the blessings of their all-benevolent technological singularity dictator in 50 years. They also have an entrepreneurship program, associated with Y Combinator, which has become known for ripping off investors with fraudulent tech startups. The only good thing I can say about it is that, given their general ignorance of psychology and complete lack of charisma, they aren't likely to succeed.

So What Is 'Rationality'?

In NLP, we would call the word 'rationality' a nominalization. A nominalization is a verb which, through the magic of linguistic ambiguity, has been converted into a noun. The verb from which 'rationality' is derived is 'to reason'. So what does it mean to reason? Put simply, reasoning is the capacity to represent and solve problems without having the actual problem in front of you. Reasoning includes thought experiments, meta-analysis, logic, questioning, and hypothesizing.

In the past there has been much controversy over whether reasoning or empiricism is more useful for discovering truth, and even whether 'truth' can be known at all (it can't, but that's a topic for another time). Over the past century, however, we've pretty much come to the conclusion that they must be used together (reason + evidence = science) in order to achieve results, and even then there is an uncertain element that requires imagination (ie, coming up with theories that interpret the raw data).

Far from being an ideal to live up to, reasoning is something which requires constant questioning of everything in order to remain useful. It is, at best, combined with evidence under proper conditions in order to come to a better understanding of how things work, and at worst fundamentally limited in accuracy thanks to the map-territory distinction and recursive complexity. While there is a definite place for reason and evidence in discovering truth, there is also far more to life and being human. Both lesswrong and Stefan Molyneux seem to be trying, in contrast, to reduce humans to mere machines.
