Software that learns by doing

06.02.2006
Attempts to create self-improving software date to the 1960s. But "machine learning," as it's often called, has remained mostly the province of academic researchers, with only a few niche applications in the commercial world, such as speech recognition and credit card fraud detection. Now, researchers say, better algorithms, more powerful computers and a few clever tricks will move it further into the mainstream.

And as the technology grows, so does the need for it. "In the past, someone would look at a problem, write some code, test it, improve it by hand, test it again and so on," says Sebastian Thrun, a computer science professor at Stanford University and the director of the Stanford Artificial Intelligence Laboratory. "The problem is, software is becoming larger and larger and less and less manageable. So there's a trend to make software that can adapt itself. This is a really big item for the future."

Thrun used several new machine-learning techniques in software that literally drove an autonomous car 132 miles across the desert to win a US$2 million prize for Stanford in a recent contest put on by the Defense Advanced Research Projects Agency. The car learned road-surface characteristics as it went. And machine-learning techniques gave his team a productivity boost as well, Thrun says. "I could develop code in a day that would have taken me half a month to develop by hand," he says.

Computer scientist Tom Mitchell, director of the Center for Automated Learning and Discovery at Carnegie Mellon University, says machine learning is useful for the kinds of tasks that humans do easily -- speech and image recognition, for example -- but have trouble spelling out as explicit software rules. In machine-learning applications, software is "trained" on test cases devised and labeled by humans, scored so it knows what it got right and wrong, and then sent out to solve real-world cases.
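
In rough outline, that train-score-deploy cycle looks something like the short Python sketch below. The tiny data set and the choice of classifier (an off-the-shelf logistic regression from the scikit-learn library) are purely illustrative and are not drawn from Mitchell's work.

from sklearn.linear_model import LogisticRegression

# Training cases devised and labeled by humans (toy feature vectors).
train_features = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]]
train_labels = [1, 0, 1, 0]

model = LogisticRegression()
model.fit(train_features, train_labels)       # "trained" on the labeled cases

# Scoring on held-out cases tells developers what it got right and wrong.
test_features = [[0.3, 0.7], [0.7, 0.3]]
test_labels = [1, 0]
print(model.score(test_features, test_labels))

# Then the trained model is sent out to classify new, real-world cases.
print(model.predict([[0.4, 0.6]]))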

Mitchell is testing the concept of having two classes of learning algorithms in essence train each other, so that together they can do better than either would alone. For example, one learning algorithm classifies a Web page by considering the words on it. A second one looks at the words on the hyperlinks that point to the page. The two share clues about a page and express their confidence in their assessments.

Mitchell's experiments have shown that such "co-training" can reduce errors by more than a factor of two. The breakthrough, he says, is software that learns from training cases labeled not by humans, but by other software.
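
The loop can be sketched roughly as follows in Python with scikit-learn. The two "views" stand in for a page's own words and the anchor text of links pointing to it; the data, the confidence threshold and the choice of classifier are invented for illustration rather than taken from Mitchell's experiments.

import numpy as np
from sklearn.naive_bayes import MultinomialNB

def co_train(view_a, view_b, labels, labeled, rounds=5, confidence=0.9):
    # labels: class per page (-1 where unknown); labeled: boolean mask of
    # the few pages that were labeled by hand.
    y, labeled = labels.copy(), labeled.copy()
    clf_a, clf_b = MultinomialNB(), MultinomialNB()
    for _ in range(rounds):
        clf_a.fit(view_a[labeled], y[labeled])     # learns from page words
        clf_b.fit(view_b[labeled], y[labeled])     # learns from link anchor text
        for clf, view in ((clf_a, view_a), (clf_b, view_b)):
            unlabeled = np.where(~labeled)[0]
            if unlabeled.size == 0:
                break
            # Each classifier labels the pages it is confident about, and
            # those machine-made labels become training data for the other.
            probs = clf.predict_proba(view[unlabeled])
            sure = probs.max(axis=1) >= confidence
            y[unlabeled[sure]] = clf.classes_[probs[sure].argmax(axis=1)]
            labeled[unlabeled[sure]] = True
    return clf_a, clf_b

# Toy demo: six pages, word-count features, only two labeled by hand.
rng = np.random.default_rng(0)
page_words = rng.integers(0, 5, size=(6, 4))   # counts of words on each page
link_words = rng.integers(0, 5, size=(6, 4))   # counts of words in link anchors
labels = np.array([0, 1, -1, -1, -1, -1])
co_train(page_words, link_words, labels, labels >= 0)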

Stuart Russell, a computer science professor at the University of California, Berkeley, is experimenting with languages in which programmers write code for the functions they understand well but leave gaps for murky areas. Into the gaps go machine-learning tools, such as artificial neural networks.

Russell has implemented his "partial programming" concepts in a language called Alisp, an extension of Lisp. "For example, I want to tell you how to get to the airport, but I don't have a map," he says. "So I say, 'Drive along surface streets, stopping at stop signs, until you get to a freeway on-ramp. Drive on the freeway till you get to an airport exit sign. Come off the exit and drive along surface streets till you get to the airport.' There are lots of gaps left in that program, but it's still extremely useful." Researchers specify the learning algorithms at each gap, but techniques might be developed that let the system choose the best method, Russell says.
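
Alisp itself extends Lisp, but the shape of a partial program can be suggested in a few lines of Python. Here the hand-written structure of the airport directions is complete, while the fuzzy decisions are left to a choose() gap that a learning algorithm would gradually fill in; the car interface, the action tables and the value lookup below are entirely hypothetical.

import random
from collections import defaultdict

# Hypothetical "gaps": for each gap, a table of learned action values that a
# reinforcement-learning algorithm would fill in over many trial runs.
ACTIONS = {"surface_streets": ["go", "stop", "turn_left", "turn_right"],
           "freeway": ["stay_in_lane", "change_lane"]}
learned_values = defaultdict(float)

def choose(gap, observation):
    # Pick the action the learner currently values most in this situation;
    # before any learning has happened, this falls back to a random choice.
    best = max(ACTIONS[gap], key=lambda a: learned_values[(gap, observation, a)])
    return best if learned_values[(gap, observation, best)] > 0 else random.choice(ACTIONS[gap])

def drive_to_airport(car):
    # The hand-written part: the programmer knows the overall recipe.
    while not car.at_freeway_onramp():
        car.do(choose("surface_streets", car.observe()))   # learned gap
    while not car.sees_airport_exit():
        car.do(choose("freeway", car.observe()))           # learned gap
    while not car.at_airport():
        car.do(choose("surface_streets", car.observe()))   # learned gap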

The computationally intensive nature of machine learning has prompted Yann LeCun, a professor at New York University's Courant Institute of Mathematical Sciences, to invent "convolutional networks," a type of artificial neural network that he says uses fewer resources and works better than traditional neural nets for applications like image recognition. With most neural nets, the software must be trained on a huge number of cases for it to learn the many variations -- size and position of an object, angle of view, background and so on -- it's likely to encounter.

LeCun's technique, which is used today in bank check readers and airport surveillance systems, divides each image of interest into small regions -- a nose, say -- and then combines them to produce higher-level features. The result is a more flexible system that requires less training, he says.
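
The flavor of such a network can be sketched in Python with the PyTorch library. The layer sizes are arbitrary, and the code is a generic illustration of convolution and pooling, not a description of the systems LeCun has fielded.

import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=5),    # small filters scan local regions of the image
    nn.ReLU(),
    nn.MaxPool2d(2),                   # pooling adds tolerance to shifts in position
    nn.Conv2d(8, 16, kernel_size=5),   # combines local features into higher-level ones
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 4 * 4, 10),         # e.g. ten object classes
)

image = torch.randn(1, 1, 28, 28)      # one 28x28 grayscale image
print(net(image).shape)                # torch.Size([1, 10])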

Intelligent design -- not

Meanwhile, research is pushing forward in a branch of machine learning called genetic programming (GP), in which software evolves in a Darwinian fashion. Multiple versions of a program -- often thousands of them, generated at random -- are set to work on a problem. Most of them do poorly, but evolutionary processes pick two of the best and combine them to produce a better generation of programs. The process continues for hundreds of generations with no human intervention, and the results improve with each generation.
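
A toy version of that generate-evaluate-recombine loop is sketched below in Python. Real GP systems evolve full program trees; this sketch merely evolves tiny arithmetic expressions toward a target function, with arbitrary numbers throughout, to show the Darwinian mechanics.

import random

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def random_expr(depth=3):
    # A tiny random "program": a nested arithmetic expression over x.
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.5 else random.randint(-2, 2)
    return (random.choice(list(OPS)), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == "x":
        return x
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))

def error(expr):
    # Lower is better: squared error against the target function x*x + x.
    return sum((evaluate(expr, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def size(expr):
    return 1 if not isinstance(expr, tuple) else 1 + size(expr[1]) + size(expr[2])

def crossover(a, b):
    # Crude stand-in for GP crossover: graft part of b onto a.
    if not isinstance(a, tuple) or random.random() < 0.3:
        return b
    op, left, right = a
    if random.random() < 0.5:
        return (op, crossover(left, b), right)
    return (op, left, crossover(right, b))

population = [random_expr() for _ in range(200)]
for generation in range(50):
    population.sort(key=error)                        # evaluate every program
    best = population[:20]                            # evolutionary selection
    children = [crossover(random.choice(best), random.choice(best)) for _ in range(180)]
    # Oversized offspring are replaced with fresh random programs to keep things small.
    population = best + [c if size(c) < 60 else random_expr() for c in children]
population.sort(key=error)
print(population[0], error(population[0]))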

GP pioneer John Koza, a consulting professor in electrical engineering at Stanford, has used the method to design circuits, controllers, optical systems and antennas that perform as well as or better than those with patented designs. He recently was awarded a patent for a controller design created entirely by GP.

It is, like biological evolution, a slow process. Until recently, computer power was too expensive for GP to be practical for complex problems. Koza can do simple problems on laptop PCs in a few hours, but the controller design took a month on a 1,000-node cluster of Pentium processors.

"We started GP in the late 1980s, and now we have 1 million times more computer power," Koza says. "We think sometime [within] 10 years we ought to be able to play in the domain of real engineers."