Modularity, My Dear Watson

Earlier this month, IBM’s Watson beat Ken Jennings and Brad Rutter on Jeopardy! on national television, giving rise to the usual barrage of jokes about Skynet and HAL 9000 and how Watson and its progeny will eventually enslave us all.

My interest is less in the potential that Watson will bring about the Third Human-Cylon War, and more in a couple of possible links between what Watson does and what brains do.

The first thing I thought was interesting was that Watson had to be designed so that it not only generated a best answer to each Jeopardy! question – or is that a best question to each answer? – but also assigned a probability to each answer. Because there is a penalty for answering incorrectly, Watson had to decide whether or not to buzz in, which in turn required some sort of summary variable of how sure it was that it had the answer right. You can see how this looked to viewers of Jeopardy! in the picture.
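
Just to make the idea concrete – and to be clear, this is my own toy sketch with made-up numbers, not anything IBM has published – the confidence bar implies a decision rule something like this: buzz in only when your estimated probability of being right makes the expected value of answering positive, given the penalty for getting it wrong.

```python
# Toy sketch of a confidence-based buzz decision. This is NOT Watson's
# actual decision logic; the rule and the numbers are illustrative only.

def should_buzz(p_correct: float, clue_value: int, wrong_penalty: int) -> bool:
    """Buzz only if the expected value of answering is positive.

    p_correct     -- estimated probability that the top answer is right
    clue_value    -- dollars gained for a correct response
    wrong_penalty -- dollars lost for an incorrect response
    """
    expected_value = p_correct * clue_value - (1 - p_correct) * wrong_penalty
    return expected_value > 0

# An $800 clue where a wrong answer also costs $800: break-even is 50% confidence.
print(should_buzz(p_correct=0.62, clue_value=800, wrong_penalty=800))  # True
print(should_buzz(p_correct=0.41, clue_value=800, wrong_penalty=800))  # False
```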

What is it like to be a Watson?

It seems to me that this is a really nice illustration of something – but I’m not sure exactly what – about phenomenology. I like to think of “experience” or qualia or “raw feels” or whatever as the felt readout of some variable. I alluded to an example of this in a previous post about holding one’s breath until you die. Somehow, the “need to breathe” – perhaps more formally, something like the probability of catastrophic failure conditional on not breathing – is computed from the information available to the nervous system, and this value gets higher and higher as – in this case – carbon dioxide accumulates. I take it that there are many such summary variables having to do with deciding on various kinds of actions. It seems to me that if it felt like something to be Watson, then those confidence bars would feel, more or less, like what we refer to as the “feeling of knowing,” as Jonah Lehrer discussed last year. As the confidence in the answer increases, I suppose the urge to buzz in would increase as well. I sort of like to think about all these summary variables being produced in our heads as action is demanded on various issues, making consciousness seem like a clearinghouse for all the different possible courses of action one might take… apropos…

The New York Times ran a story last year about Watson, and I thought it was pretty good. In particular, what caught my eye were the hints about how Watson works. According to Ferrucci, the head of the project, Watson sounds more or less like it has a very modular architecture:

Watson’s speed allows it to try thousands of ways of simultaneously tackling a “Jeopardy!” clue. Most question-answering systems rely on a handful of algorithms, but Ferrucci decided this was why those systems do not work very well: no single algorithm can simulate the human ability to parse language and facts. Instead, Watson uses more than a hundred algorithms at the same time to analyze a question in different ways, generating hundreds of possible solutions.
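
To get a feel for what “more than a hundred algorithms at the same time” might look like, here is a cartoon in code. The strategies and their outputs below are invented for illustration; Watson’s real pipeline is obviously far more elaborate.

```python
# Cartoon of many independent strategies each proposing candidate answers
# for the same clue. The strategies and their outputs are invented for
# illustration; this is not IBM's code.

from typing import Callable, List

Clue = str
Strategy = Callable[[Clue], List[str]]

def keyword_lookup(clue: Clue) -> List[str]:
    return ["Toronto", "Chicago"]      # stand-in for a text-search strategy

def date_reasoner(clue: Clue) -> List[str]:
    return ["Chicago"]                 # stand-in for a temporal-reasoning strategy

def pun_detector(clue: Clue) -> List[str]:
    return ["Chicago", "O'Hare"]       # stand-in for a wordplay strategy

STRATEGIES: List[Strategy] = [keyword_lookup, date_reasoner, pun_detector]

def generate_candidates(clue: Clue) -> List[List[str]]:
    """Run every strategy on the same clue and collect each one's guesses."""
    return [strategy(clue) for strategy in STRATEGIES]

print(generate_candidates("Its largest airport is named for a World War II hero"))
# [['Toronto', 'Chicago'], ['Chicago'], ['Chicago', "O'Hare"]]
```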

Not only that, but figuring out the answer and figuring out how confident to be in that answer seem to be functionally separated:

Another set of algorithms ranks these answers according to plausibility; for example, if dozens of algorithms working in different directions all arrive at the same answer, it’s more likely to be the right one.
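
Again, purely as a toy sketch rather than IBM’s actual ranking method: the “lots of algorithms converging on the same answer” idea can be captured by a superordinate step that just tallies agreement across the candidate lists from the sketch above and turns it into a rough confidence.

```python
# Toy "superordinate module": tally how many independent strategies proposed
# each answer and turn agreement into a rough confidence score. Illustrative
# only; Watson's real ranking was far more sophisticated than a vote count.

from collections import Counter
from typing import Dict, List

def rank_by_agreement(candidate_lists: List[List[str]]) -> Dict[str, float]:
    """Score each candidate by the fraction of strategies that proposed it."""
    votes: Counter = Counter()
    for candidates in candidate_lists:
        for answer in set(candidates):     # one vote per strategy per answer
            votes[answer] += 1
    total = len(candidate_lists)
    return {answer: count / total for answer, count in votes.most_common()}

print(rank_by_agreement([["Toronto", "Chicago"], ["Chicago"], ["Chicago", "O'Hare"]]))
# Chicago scores 1.0 (all three strategies agree); Toronto and O'Hare about 0.33 each
```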

The quoted passage suggests something like superordinate modules that are tabulating the outputs of other modules, which is quite cool. The article continues:

At best, Ferrucci suspects that Watson might be simulating, in a stripped-down fashion, some of the ways that our human brains process language. Modern neuroscience has found that our brain is highly “parallel”: it uses many different parts simultaneously, harnessing billions of neurons whenever we talk or listen to words. “I’m no cognitive scientist, so this is just speculation,” Ferrucci says, but Watson’s approach — tackling a question in thousands of different ways — may succeed precisely because it mimics the same approach.

Hm… lots of different parts… tackling questions in different ways… hm…

Anyway, I’m looking forward to understanding better how our future mechanical overlords work. All the better to serve them (in case they’re reading this).

28. February 2011 by kurzbanepblog