Software that learns by doing

06.02.2006

Stuart Russell, a computer science professor at the University of California, Berkeley, is experimenting with languages in which programmers write code for the functions they understand well but leave gaps for murky areas. Into the gaps go machine-learning tools, such as artificial neural networks.

Russell has implemented his "partial programming" concepts in a language called Alisp, an extension of Lisp. "For example, I want to tell you how to get to the airport, but I don't have a map," he says. "So I say, 'Drive along surface streets, stopping at stop signs, until you get to a freeway on-ramp. Drive on the freeway till you get to an airport exit sign. Come off the exit and drive along surface streets till you get to the airport.' There are lots of gaps left in that program, but it's still extremely useful." For now, researchers specify the learning algorithm at each gap, but Russell says techniques might eventually let the system choose the best method on its own.
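Alisp itself extends Lisp, but the idea carries over to any language. Here is a loose sketch in Python -- not Alisp syntax; the Gap class, its bandit-style learner and the one-dimensional "road" are all invented for illustration. The programmer hand-codes the loop structure that is well understood and leaves a learned choice point where the knowledge runs out:

    import random

    class Gap:
        """A learned choice point standing in for murky knowledge."""
        def __init__(self, actions, epsilon=0.2):
            self.actions = list(actions)
            self.value = {a: 0.0 for a in self.actions}
            self.count = {a: 0 for a in self.actions}
            self.epsilon = epsilon

        def choose(self):
            # Mostly exploit the best-looking action, sometimes explore.
            if random.random() < self.epsilon:
                return random.choice(self.actions)
            return max(self.actions, key=self.value.get)

        def feedback(self, action, reward):
            # Incremental average of the rewards seen for this action.
            self.count[action] += 1
            self.value[action] += (reward - self.value[action]) / self.count[action]

    steer = Gap(actions=[-1, 0, +1])  # drift left, go straight, drift right

    def drive_to_on_ramp(position=0, ramp=10, limit=100):
        """Hand-written skeleton: loop until the on-ramp is reached.
        HOW to steer at each step is left to the learned gap."""
        steps = 0
        while position != ramp and steps < limit:
            desired = 1 if position < ramp else -1
            action = steer.choose()
            position += action
            steer.feedback(action, 1.0 if action == desired else -1.0)
            steps += 1
        return steps

    for episode in range(20):  # repeated trips let the gap's policy improve
        steps = drive_to_on_ramp()
    print("steps on the last trip:", steps)

Over repeated trips the gap's reward estimates converge on the action that makes progress, so the hand-written skeleton plus the learned choice behaves like a complete program.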

The computationally intensive nature of machine learning has prompted Yann LeCun, a professor at New York University's Courant Institute of Mathematical Sciences, to invent "convolutional networks," a type of artificial neural network that he says uses fewer resources and works better than traditional neural nets for applications like image recognition. With most neural nets, the software must be trained on a huge number of cases to learn the many variations -- size and position of an object, angle of view, background and so on -- that it's likely to encounter.

LeCun's technique, which is used today in bank check readers and airport surveillance systems, divides each image of interest into small regions -- a nose, say -- and then combines them to produce higher-level features. The result is a more flexible system that requires less training, he says.
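The mechanism can be sketched in a few lines of Python with numpy. This is a toy illustration, not LeCun's actual implementation: one small filter is applied to every local region of the image, so the same weights are reused at every position -- which is what cuts the training burden -- and the responses are then subsampled into a coarser map of higher-level features. The 3x3 edge filter and the image size are arbitrary choices.

    import numpy as np

    def convolve2d(image, kernel):
        """Slide one small kernel over every local region of the image.
        Sharing the same weights across positions means far fewer
        parameters to train than in a fully connected net."""
        kh, kw = kernel.shape
        ih, iw = image.shape
        out = np.zeros((ih - kh + 1, iw - kw + 1))
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                out[y, x] = np.sum(image[y:y+kh, x:x+kw] * kernel)
        return out

    def max_pool(fmap, size=2):
        """Subsampling: keep the strongest response in each block, so a
        feature is detected regardless of small shifts in position."""
        h, w = fmap.shape[0] // size, fmap.shape[1] // size
        return fmap[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

    image = np.random.rand(8, 8)      # stand-in for a patch of pixels
    edge = np.array([[1, 0, -1],      # toy vertical-edge detector
                     [2, 0, -2],
                     [1, 0, -1]], dtype=float)

    # Convolve, squash, subsample: a coarser map of local edge features.
    features = max_pool(np.tanh(convolve2d(image, edge)))
    print(features.shape)             # (3, 3)

Stacking several such layers combines local responses -- edges into a nose, a nose and eyes into a face -- which is the region-combining behavior LeCun describes.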
