Teaching Aliens To Take Over The Earth

We’ve all seen these movies. In them, a horrible-looking alien lands from outer space, finds a sleeping human being or, alternatively, kills one, then while the body is lying there, places a tentacle on top of its head and in about 15 seconds, right before your eyes, the horrible alien reconfigures itself into this very pleasant human being, with all its quirks, smiles, jokes and intelligence.
Then it walks off. And for the rest of the movie, until the very end, we are all in very, very deep trouble.
Now you would not tend to think that any humans on the planet today would want to create such a monster with these abilities. But you would be wrong.
In a laboratory at Carnegie Mellon University out in Pittsburgh, scientists are actually getting paid to teach a robot to do exactly that. They have convinced themselves that what they are doing is an advancement in science that will greatly benefit mankind, and so they are pressing on.
Their delusion is based on semantics. They are not using words such as aliens and monsters. They are using words like computer and software and mainframe. They are about to kill us all.
I read all about this in an article in The New York Times, which appeared on October 4 last fall. It was headlined AIMING TO LEARN AS WE DO, A MACHINE TEACHES ITSELF. Yes, this machine, or whatever it is, has been given some tools that enable it to “learn.” And, day by day, in a sort of very slow-motion version of that horrible scene in many science fiction movies, it is moving along.
As we know from the sci-fi movies, what is slow motion today is, in just a very short time, a procedure that can be shortened to just, say, 20 seconds. Watch.
The people working on this project seem very satisfied with themselves. They are doing good work. According to the article, the team is headed up by Dr. Tom M. Mitchell, who is a computer scientist and chairman of the Machine Learning Department of Carnegie Mellon University.
“Our computer is called NELL,” he says. “NELL stands for Never-Ending Language Learning system.”
He might have said robot. Or alien. But he says computer.
NELL is, in fact, a silver-grey metal computer that is calculating, analyzing and soaking up billions and trillions of bits of information 24 hours a day and is working feverishly to understand the scientists who are feeding it not only information but ‘right’ answers so as to teach it to think like a human being.
The key to it, Dr. Mitchell explains, is that NELL has been programmed to gather information, process it, think about it, build upon it and put the pieces of it together. It does not act upon it. Yet.
It is programmed to do this relentlessly, non-stop, and when something doesn’t seem quite right, it is programmed to go back and do it again until it does get it right—or at least nearly right.
“NELL is operating with 85% accuracy,” Dr. Mitchell says. “It has the ability to go back and fix itself, but then it goes forward with 85% accuracy again. If left alone, it can go off on a tangent in some direction from which it cannot recover.”
Dr. Mitchell gave an example of this. NELL was loaded with information about food. Within this category were sub-categories such as vegetables, meats, fowl and baked goods. Under baked goods, it was asked to accept such things as pies, cookies, cakes, breads and muffins.
“We left NELL alone with all this stuff for six months,” Dr. Mitchell said, “and when we came back we found NELL had most of it right, but had created a blind loop when it encountered the phrase ‘internet cookies.’ It put this into the category of baked goods. And that led it off into all these mistakes. For example, it decided that ‘file’ was a baked good, since it was used in the phrase ‘internet file.’ We had to go in and find the original error and fix it. After that it re-set itself and began again to move forward without this mistake.”
NELL is also being provided with emotions. She learns that “anger is an emotion.”  She learns that “bliss is an emotion.” She tries to relate this category of things with other things.
She is slowly succeeding at this.
The scientists have also seeded NELL with a group of ‘truths.’ NELL can compare new things to those ‘truths,’ and if they match up, add them to the database as ‘truths’ too. And then she can use those in other ways.
I really should at this point explain how the scientists came to teach NELL in this unique and unorthodox new way. They’d concluded, correctly, that much human learning is connected to semantics and probability. Humans interpret words with other words. Then they make decisions about them.
The scientists mimicked this ability to interpret within NELL. For example, they put a category into NELL called ‘mountain.’ With ‘mountain’ in there, they tried through trial and error to get NELL to understand that Pike’s Peak is a mountain. NELL went through its vast database and noted that the word ‘peak’ very often appears with the word ‘mountain.’ Then it noticed that the word just before ‘peak’ had an apostrophe ‘s’ at the end, which indicates possession or ownership. Thus it concluded there was some probability that ‘Pike’s Peak’ was the name of a mountain. And it could move on from there. Soon, NELL was putting things into context.
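As a toy illustration of that kind of inference (this is my own sketch, not NELL’s actual code; the sentences, function name and scoring rules are all invented), a program can count textual clues that a phrase names a mountain:

```python
import re

# Tiny invented corpus standing in for NELL's vast database.
SENTENCES = [
    "We climbed Pike's Peak last summer.",
    "Pike's Peak is a mountain in Colorado.",
    "The peak of the mountain was covered in snow.",
]

def mountain_score(candidate, sentences):
    """Count textual clues that `candidate` names a mountain."""
    score = 0
    for s in sentences:
        if candidate in s:
            # Clue 1: the candidate itself ends in a mountain-like word.
            if re.search(r"\b(Peak|Mountain|Mount)\b", candidate):
                score += 1
            # Clue 2: the surrounding sentence also mentions 'mountain'.
            if "mountain" in s.lower().replace(candidate.lower(), ""):
                score += 1
    return score

print(mountain_score("Pike's Peak", SENTENCES))  # a higher score means more probably a mountain
```

The more clues pile up, the higher the probability, which is all “putting things into context” really means here.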
Until now, computers have just been task-oriented. The data goes in. Certain behavior comes back out. It’s been known for quite a while, for example, that a computer can best a human in chess. Chess involves the study of variables. The number of variables is finite, too vast for a human, but not too vast for a computer. A computer can memorize tons of them. Thus computers beat humans at chess.
But computers, until now, could not think. Dr. Mitchell gave as examples the two phrases “the girl caught the butterfly with the spots,” and “the girl caught the butterfly with the net.”
A human knows from experience that the spots belong with the butterfly and the net belongs with the girl. But a computer does not. It gets confused, or the metallic equivalent of confused.
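A computer can get past that confusion the same way NELL does everything else: with counts and probability. Here is a minimal sketch (the co-occurrence counts and the function are mine, invented for illustration), where the modifier attaches to whichever noun it appears alongside more often in text:

```python
# Invented co-occurrence counts: how often each noun-modifier pair
# shows up together in some large pile of text.
cooccur = {
    ("butterfly", "spots"): 90,
    ("girl", "spots"): 5,
    ("girl", "net"): 80,
    ("butterfly", "net"): 10,
}

def attach(nouns, modifier):
    """Pick the noun most often seen together with `modifier`."""
    return max(nouns, key=lambda n: cooccur.get((n, modifier), 0))

print(attach(["girl", "butterfly"], "spots"))  # prints: butterfly
print(attach(["girl", "butterfly"], "net"))    # prints: girl
```

The spots go with the butterfly and the net goes with the girl, not because the machine understands either one, but because the numbers say so.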
The key therefore was semantics and probability. NELL would every day scan and re-scan words and phrases putting them into its hard drives and memory chips in different categories or relationships to one another. NELL has begun to learn to make judgments about them. That’s because the scientists not only seeded NELL with about 150 categories and relationships, they seeded NELL with things that were ‘right.’ NELL keeps trying to link things up, over and over and over. And, ultimately, it gets them ‘right.’
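The loop described above, seeded ‘right’ answers yielding patterns, patterns yielding new candidates, over and over, can be sketched in a few lines (a toy of my own invention, not NELL’s real pipeline or data). Note that it even reproduces the kind of drift the scientists describe, where something like ‘the stairs’ sneaks into the mountain category:

```python
# Tiny invented corpus standing in for NELL's database.
CORPUS = [
    "I climbed Pike's Peak",
    "I climbed Mount Rainier",
    "I climbed the stairs",
    "hikers love Mount Rainier",
    "hikers love Denali",
]

def learn(corpus, seeds, rounds=3):
    """Bootstrap a category from seeded 'right' answers."""
    known = set(seeds)
    for _ in range(rounds):
        # 1. Learn extraction patterns from sentences naming known members.
        patterns = {s.replace(m, "{}") for s in corpus for m in known if m in s}
        # 2. Apply each pattern to pull out new candidate members.
        for s in corpus:
            for p in patterns:
                prefix, _, suffix = p.partition("{}")
                if s.startswith(prefix) and s.endswith(suffix) and s != p:
                    known.add(s[len(prefix):len(s) - len(suffix)])
    return known

mountains = learn(CORPUS, {"Pike's Peak"})
```

Seeded only with Pike’s Peak, the loop finds Mount Rainier, then Denali, and also, wrongly, ‘the stairs,’ which is exactly the sort of error the scientists have to go in and fix by hand.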
“It learns that saying ‘I climbed Pike’s Peak’ relates to a mountain. And it learned ‘I climbed the stairs,’ relates to building parts,” Dr. Mitchell said.
NELL is, by trial and error, developing reason. She can speculate that something is related to something else. Then, after scanning her database over and over, billions of times a day, she can move it from speculation to probability and then to near certainty. Eventually, with a little help from the scientists, who shove NELL back onto the track when she makes a mistake, there comes certainty. And after certainty? Well, we are not there yet.
But NELL is getting help with all this not only from Carnegie Mellon University, but from several other giant organizations which are joining in. They include Yahoo, the Defense Advanced Research Projects Agency and Google.
“This technology is really maturing and will soon grow into understanding,” said Alfred Spector, who is the vice president of Research at Google, about NELL.
“What’s exciting and significant about NELL,” said Oren Etzioni, a computer scientist at the University of Washington, “is the continuous learning, as if NELL is exercising curiosity on its own, with little human help.”
The scientists say there is soon going to be a very big payday for the first company that can harness NELL and make use of her in areas such as personal assistants, advanced data search and shopping, to name just a few.
The scientists really have no idea what they are on track to create, in my opinion. They are on track to create a monster, one with enormous intelligence and emotion, who could push us aside and take over the world.
And no doubt run it far better than we do. I just hope I live to see the day.
