Never Ending Language Learner
by Tim Moran
Watson, meet NELL. Watson, of course, is the IBM supercomputer designed to beat humans at Jeopardy! by using "more than a hundred algorithms at the same time to analyze a question in different ways, generating hundreds of possible solutions."
NELL, on the other hand, is the creation of researchers at Carnegie Mellon University, and it is actually being taught to learn.
NELL--which is short for Never Ending Language Learner--is not just accessing vast amounts of information manipulated by fancy algorithms, as Watson does. Rather, it is "reading the internet and learning from it in much the same way that humans learn language and acquire knowledge. Basically by soaking it all up and figuring it out."
So how is NELL doing? It has about a "C" average thus far, having learned more than "440,000 separate things with an accuracy of 74 per cent."
Nevertheless, while NELL is "getting brainier every day," the computer is having some difficulty making real sense of the facts it encounters. This is, perhaps, a problem that it shares with humans:
"Some [facts] it promotes to the idea of being beliefs if they come from a single trusted source, or if they come from several less reliable sources." TechEye.net suggests that this makes NELL more of a "rumour mill than a trusted source," and it's giving the boffins a good laugh.
To further complicate matters, NELL can't unlearn anything or make a change in the way it thinks--again, sounding very much like some people we all know. Once the computer changes a fact to a belief, it stays a belief in its mind: "NELL's human handlers had to tell NELL that Klingon is not an ethnic group, despite the fact that many earthlings think it is." I am laughing right along with the boffins.
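The promotion rule described above--a fact becomes a belief if it comes from a single trusted source or from several less reliable ones, and beliefs are never revised--can be sketched as a toy Python function. The source labels, the two-source threshold, and the class structure here are illustrative assumptions, not NELL's actual implementation:

```python
# Toy sketch of the one-way "fact to belief" promotion rule described
# above. The trust labels and the two-source threshold are assumptions
# for illustration, not NELL's real logic.

TRUSTED = {"encyclopedia"}  # hypothetical set of trusted sources


class KnowledgeBase:
    def __init__(self):
        self.beliefs = set()  # once promoted, a belief is never removed

    def consider(self, fact, sources):
        """Promote a candidate fact to a belief if it comes from a
        single trusted source, or from several less reliable ones."""
        if any(s in TRUSTED for s in sources) or len(sources) >= 2:
            self.beliefs.add(fact)

    def unlearn(self, fact):
        """NELL-style: promotion is one-way, so this does nothing."""
        pass  # beliefs stay beliefs


kb = KnowledgeBase()
# Two unreliable sources are enough to create a belief...
kb.consider("Klingon is an ethnic group", ["forum_a", "forum_b"])
# ...and "unlearning" has no effect.
kb.unlearn("Klingon is an ethnic group")
print("Klingon is an ethnic group" in kb.beliefs)  # True
```

The sketch makes the "rumour mill" complaint concrete: agreement among weak sources is treated the same as one strong source, and there is no path back out of the belief set.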
TechEye.net is edited by one of the founding editors of the U.K.'s snarky and infamous The Register; one commenter took offense at its cavalier attitude toward serious computer research: "Shame on you for writing such a poor article about this technology. The least you could do is report on some of the findings...Instead, we're treated to a dumbed-down, poorly written commentary by someone whose command of the English language is most likely surpassed by NELL's."
So in order not to be similarly chastised, here's a link to the paper -- "Toward an Architecture for Never-Ending Language Learning" -- describing the researchers' goals and techniques in attempting to deal with "the problem of building a never-ending language learner; that is, an intelligent computer agent that runs forever and that each day must (1) extract, or read, information from the web to populate a growing structured knowledge base, and (2) learn to perform this task better than on the previous day."
We can mock their results, but we should never mock those who strive to learn.