“A man is well equipped for all the real necessities of life if he trusts his senses, and so cultivates them that they remain worthy of being trusted.” -Goethe
“The only real valuable thing is intuition.” -Albert Einstein
“It is through science that we prove, but through intuition that we discover.” -Henri Poincaré
The first post in this series argued that people might lack good introspective access to their own will not because introspection is inherently weak, but, on the contrary, because they introspect too rarely and are out of practice. The Less Wrongians’ argument is rather vapid, as you certainly wouldn’t criticize Calculus just because most people are terrible at it. Instead, you would demand they practice their Calculus, not read a bunch of heuristics and biases literature to find out why they don’t care to practice their Calculus! They seem to think that humans have “instincts” instead of sensitive imprinting periods for certain cognitive skills, and proceed to criticize these instincts in order to convince us how relevant Less Wrong is and how much we need its deep wisdom to overcome our animal nature. If Pinker’s “language instinct” is right, should we criticize this instinct when people speak poorly, or should we tell them to buy a dictionary and simply try to express themselves more eloquently? Sadly, the Less Wrongians prefer the former in every case I’ve reviewed, attacking introspection, intuitions, and even, in this case, the whole of philosophy, as it often involves arguments based on intuition. The latter post attacks the greater part of both Analytic and Continental philosophy as being useless, but what they really mean is that it won’t help you build an AI or become more “rational.” Would they similarly criticize romantic poetry as well? Totally useless because it won’t make you more rational or help you code the next iPad app? This is laughable.
We do not have instinctual intuitions; our intuition can be informed and educated. What better tool for this than philosophy? Even reading through bad arguments helps update your intuition: this is why they still teach you outmoded physical theories when you study Physics at university. By this author’s logic we should also scrap “Science” and start over, given the pessimistic meta-induction.
Do they have no respect for Einstein, who read Schopenhauer, Nietzsche and many others; a man who claimed that he thought with his muscles? Do they have no respect for cognitive scientists like Pinker, whose books are replete with references and ideas from mainstream philosophy? They claim to have respect for Dan Dennett, but don’t seem to have read any of his books, which are also replete with references to mainstream philosophy quoted with great admiration. The book “Inside Jokes,” which Michael Vassar lent to me, admits that it doesn’t really make much progress on Arthur Schopenhauer’s incongruity-resolution model, despite having the advantage of all of this lofty “cognitive science.”
Here is a fun line to think about:
What is wrong with “cognitive algorithms” and why is this so important for philosophers to understand? Birds that need to migrate South have such cognitive algorithms in their little noggins and they appear to be highly reliable. Humans have far more highly evolved algorithms and can even update them. Not all algorithms are of equal utility or quality, so shouldn’t the philosopher simply be more interested in distinguishing good from bad intuitions, or better still, which intuitions to use when? As it happens, this is precisely what they do all day. It’s as if the author of this post doesn’t think that philosophers can be skeptical about their own minds and knowledge! This is preposterous! Ever heard of a philosophical school called Skepticism or a branch of philosophy called Epistemology? Ever consider that these gave birth to your precious Science? Ever consider that epistemology and skepticism were first intuitions? Apparently not.
The post reveals its own biggest bias in its claim that “humans are loaded with biases.” “Loaded” with biases? My god, it’s a wonder we all don’t just accidentally choose to bite into our own flesh or put our babies in the microwave! “Loaded” with biases? Not even cognitive scientists talk like this: they respectfully point out where we naturally make bad inferences and so forth, while acknowledging how truly magnificent the human brain is most of the time. Scientists similarly point out where our senses tend to prove unreliable, but they hardly claim that our senses are “loaded” with error, my god!
Less Wrong is clearly its own dogmatic religion, something it ironically attacks with immense glee. Oddly, it doesn’t seem to have a very nuanced understanding of neuroscience. Certain posts mention the “baloney generator” in the left hemisphere, referring to Gazzaniga’s “interpreter module,” but then glorify left hemisphere capabilities to the complete exclusion of the right hemisphere, somehow failing to see that it is the former that produces the tendencies of religious dogmatism. They need to read some McGilchrist.
The post claims that “a few naturalistic philosophers are doing some useful work.” “Some useful work”!?! Naturalistic philosophers built Science, humanism, the enlightenment, and Less Wrong writes the enterprise off as largely useless? This is preposterous. Have some respect. This post is just impossibly arrogant:
Philosophy has grown into an abnormally backward-looking discipline. Scientists like to put their work in the context of what old dead guys said, too, but philosophers have a real fetish for it. Even naturalists spend a fair amount of time re-interpreting Hume and Dewey yet again.
So looking back is some kind of vice? History contains no lessons? This author could use a “fair amount of time re-interpreting Hume and Dewey yet again” himself, not to mention quite a few other “old dead guys.” Oh but he has, claiming to have based some of his hypotheses (such as the above) on his “thousands of hours in the literature.” He doesn’t seem to have learned much in those thousands of hours if his central hypothesis is that these thousands of hours were a waste of his time. Does he not see that he undermines his own authority here? He doesn’t even recommend reading Quine, one of the few philosophers he seems to have some respect for:
Update: To be clear, though, I don’t recommend reading Quine. Most people should not spend their time reading even Quinean philosophy; learning statistics and AI and cognitive science will be far more useful.
“Useful” for what? That is the meat of the issue. He means, “useful for building AI or building a new iPad app.” Go figure, studying AI will be more “useful” for building AI! As for building a soul worth having, I’d recommend reading some old dead guys. As for trying to understand your deepest self, I’d recommend looking to authors who wouldn’t write this self off as just “cognitive algorithms,” as if that was very useful or enlightening.
Bob: “What is your deepest longing in this world, your brightest dream, Dave?” Dave: “Who cares, longings and dreams are just cognitive algorithms.”
Bob: “What is ‘love,’ Dave?” Dave: “Just pair bonding instincts, Bob.”
Is this what passes for knowledge, wisdom and enlightenment these days? Is this what passes for “useful” and “cutting-edge” philosophy? The author of this post thinks that instead of studying the dry words of dead men we should make progress by “scrapping the whole mess and starting from scratch with a correct understanding of language, physics, and cognitive science.” Does he not realize that this “correct understanding of language, physics, and cognitive science” was born of “the whole mess” and would be nothing without it? The author continues:
Eliezer made most of his philosophical progress on his own, in order to solve problems in AI, and only later looked around in philosophy to see which standard position his own theory was most similar to.
Aha. “In order to solve problems in AI.” Again we meet the heart of the issue. Go figure that Eliezer would have to blaze new trails in contributing to a brand new field. But was he really blazing trails, given that he found “standard position[s]” that “his own theory was…similar to”?
AI is useful because it keeps you honest: you can’t write confused concepts or non-natural hypotheses in a programming language.
Really? You can’t write confused concepts in a programming language? Ever used Windows Vista?
Does this guy not realize that Dennett has enormous respect for much of philosophy and defends a compatibilist notion of Free Will?
But if you’re looking to solve cutting-edge problems, mainstream philosophy is one of the last places you should look. Try to find the answer in the cognitive science or AI literature first.
Did this guy never think to himself before leveling broad attacks at “mainstream philosophy” that it might be responsible for solving all of those used-to-be-cutting-edge problems that now make way for the new ones he endeavors to work on? Baffling.
Swimming the murky waters of mainstream philosophy is perhaps a job best left for those who already spent several years studying it – that is, people like me. I already know what things are called and where to look, and I have an efficient filter for skipping past the 95% of philosophy that isn’t useful to me. And hopefully my rationalist training will protect me from picking up bad habits of thought.
Ah, I see…so I should avoid reading most of philosophy and instead simply rely on you, oh great swami!?! What makes him think that “what is useful to [him]” is useful to the rest of us? Many people might be seeking truth, wisdom, insight, moral growth, and the expansion of their consciousness and mental freedom instead of confining the “useful” to building a superintelligence that might make up for his dearth of intelligence, given the pathetic meat-computer he is working with, “loaded with biases” and all.
After finally admitting that philosophy is unavoidable, as we all wake up in the morning as Homo sapiens, with the problems endemic to our species, he goes and says the following:
you’re probably better off trying to solve the problem by thinking like a cognitive scientist or an AI programmer than by ingesting mainstream philosophy.
So the really tough problems in one’s life, like “what is a meaningful use of my time here?” or “should I stay with this girl?” or “why should I not pursue my narrow self-interest alone?”…these problems are better solved by thinking like an AI programmer? This encapsulates all of my aversion to the Less Wrong blog: it preaches to the choir, advocating that computer programmers and math geeks study more programming and math in order to better their lives, instead of studying their own human nature and grappling with the fact that they are Homo sapiens. No, better to study statistics and long for a transhumanist future free of the problems of being human than to simply master your own humanity, order your will, and build a soul worth having.
But why must I rely on Less Wrong, given that “nearly all these terms and ideas have standard names outside of Less Wrong”? What have they contributed to philosophy outside of their ideas about AI? If this is what “cutting edge philosophy” looks like, you can count me out. The job of philosophy is not to promote science nor to build a computer superintelligence. The job of philosophy is to build better men, free their minds, and contribute meaning, passion, and clarity to their lives. This is what is “useful” about it. Furthermore, as Russell would say, “there is much pleasure to be gained from useless knowledge,” so perhaps “useful” should not be the goal governing all of your learning. To the Less Wrongian wondering what use philosophy might have for him, I would again turn to Russell for advice:
To teach how to live without certainty, and yet without being paralyzed by hesitation, is perhaps the chief thing that philosophy, in our age, can still do for those who study it.