The Blog Monster

Friday, September 2nd, 2005

12:06p
Bayesian religion
One of the more interesting characters in the transhumanist community is Eliezer Yudkowsky, head of the Singularity Institute for Artificial Intelligence. He's a polymath with one of the sharpest minds you're likely to come across this side of the Singularity. And he has a plan for creating an AI that he says is based on Bayesian reasoning.

Bayesian reasoning is, simply, the correct way of drawing conclusions from evidence. Eliezer has a good, if over-long, explanation of Bayes' law on one of his many web pages. Most people - the vast majority of people, including most trained scientists - don't understand it, and make bad decisions as a result. In some cases trained scientists are MORE likely to make these errors, because they've been trained, in faulty non-Bayesian fashion, to believe things like "the absence of evidence in favor of a proposition is not evidence against that proposition." Which, Bayes' law tells us, is wrong: if observing B would raise your probability for A, then failing to observe B must lower it. (As Eliezer explains it: P(A|~B) < P(A) iff P(A|B) > P(A).)
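
Here's a minimal numerical check of that claim - a sketch in Python, with the prior and likelihoods invented purely for illustration:

```python
# Check that "absence of evidence is evidence of absence":
# if observing B raises P(A), then not observing B lowers it.
# All numbers below are made up for illustration.

def posteriors(prior_a, p_b_given_a, p_b_given_not_a):
    """Return (P(A|B), P(A|~B)) via Bayes' law."""
    p_b = p_b_given_a * prior_a + p_b_given_not_a * (1 - prior_a)
    p_a_given_b = p_b_given_a * prior_a / p_b
    p_a_given_not_b = (1 - p_b_given_a) * prior_a / (1 - p_b)
    return p_a_given_b, p_a_given_not_b

prior = 0.30                                    # P(A)
p_ab, p_a_not_b = posteriors(prior, 0.80, 0.40)

print(f"P(A)    = {prior:.3f}")
print(f"P(A|B)  = {p_ab:.3f}      (evidence observed)")
print(f"P(A|~B) = {p_a_not_b:.3f}      (evidence absent)")
assert (p_ab > prior) == (p_a_not_b < prior)    # the iff, for 0 < P(B) < 1
```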

It's often difficult to have a rational conversation with someone who doesn't understand Bayes' law, because they aren't, strictly speaking, rational. This is perhaps why our public officials avoid discussing the issues whenever possible: even assuming that they understand the issues, and economics, and Bayes' law, and that they have honest reasons for their policies, those reasons can make no sense to voters who don't.

However, in my professional opinion as an AI researcher, saying that you take a "Bayesian approach to AI" because you're in favor of using Bayes' law to make choices when given probabilities makes about as much sense as saying you have an "arithmetic approach to AI" because you know how to implement multiplication and addition correctly. The hard part of AI is not updating the probabilities assigned to propositions. The hard parts are observing the world, listening to the things people say, abstracting or classifying all of that into the symbols that make up those propositions (if you're using propositions), and figuring out what initial probabilities to give them in the first place.
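
To make the contrast concrete, here's a sketch in Python of where the work actually lies. The function names are hypothetical placeholders of my own, not anyone's actual system:

```python
# The Bayesian update is the easy part: a one-line formula.
def bayes_update(prior, likelihood, marginal):
    """P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / marginal

# Everything that feeds the formula is the hard part. These stubs stand
# in for the unsolved problems; they are not real code.
def observe_world():
    """Turn raw pixels and sound into something usable."""
    raise NotImplementedError

def extract_propositions(observation):
    """Abstract and classify observations into symbols/propositions."""
    raise NotImplementedError

def assign_initial_probabilities(propositions):
    """Decide what priors those propositions should get."""
    raise NotImplementedError
```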

The puzzle is how someone as smart as Eliezer could decide that using Bayesian reasoning gives you even a leg up on creating an AI, when the verdict has been in for two decades, backed by more lines of evidence than I can list here, that it doesn't. This was worrying away at the back of my mind as I was reading "Sex and Cognition" by Doreen Kimura on the subway. Her opening chapter is about how research on sex differences is held to different standards of evidence than other research. Many scientists believe that research on sex differences shouldn't be published unless the evidence is exceptionally strong. And in general, they believe that the more serious the social consequences of scientific research, the higher the level of supporting evidence should be before we publish it.

Well, the correct way to evaluate the evidence is to use Bayes' law. All that a "higher standard of evidence" does is throw out mountains of evidence and lead us to make worse decisions. This has real social consequences, all bad. (It's even worse than that, because the evidence is thrown out in a biased manner that appears to lend support to the currently accepted hypothesis.)
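
A toy simulation makes the cost visible. Everything here - the noise model, the threshold, the numbers - is invented for illustration, but the pattern is general: one reasoner updates on every study, the other only counts studies that clear an "exceptionally strong" bar, and over many runs the filtering reasoner's probabilities score worse.

```python
# Two reasoners see the same noisy studies of a hypothesis H. One sums
# every study's log likelihood ratio; the other ignores any study whose
# evidence isn't "exceptionally strong". Brier score (lower is better)
# measures how good their final probabilities are, averaged over many
# simulated worlds.
import math
import random

random.seed(1)
N_TRIALS, N_STUDIES, THRESHOLD = 2000, 30, 2.0

def prob(log_odds):
    return 1.0 / (1.0 + math.exp(-log_odds))

brier_all = brier_filtered = 0.0
for _ in range(N_TRIALS):
    h_true = random.random() < 0.5                 # is H true in this world?
    mean = 0.2 if h_true else -0.2                 # studies weakly favor the truth
    llrs = [random.gauss(mean, 1.0) for _ in range(N_STUDIES)]

    p_all = prob(sum(llrs))                        # update on everything
    p_strong = prob(sum(x for x in llrs if abs(x) > THRESHOLD))  # "higher standard"

    brier_all += (p_all - h_true) ** 2
    brier_filtered += (p_strong - h_true) ** 2

print(f"Brier score, all evidence:      {brier_all / N_TRIALS:.3f}")
print(f"Brier score, filtered evidence: {brier_filtered / N_TRIALS:.3f}")
```

(The sketch models only the discarding; the biased filtering described in the parenthetical would skew the surviving evidence as well as thin it.)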

Which led to an "Aha!" moment in which I think I understood how someone could seize on Bayes' law as the basis for an AI. The appeal isn't that Bayesian reasoning solves any key unsolved problem in AI - it doesn't. The appeal is the vision of a nation of rational Bayesian reasoners, who could elect competent officials, set rational public policies, guide scientific inquiry by reason rather than feeling, and with whom one could hold decent conversations. The pursuit of Bayesian AI isn't a science but a Utopian vision, like the pursuit of the true communist state.
