Geek's Garden

22.05.2006

They developed a "process" model of consciousness, says Metta. This assumes that objects in the environment are not real physical objects as such; rather, they are part of a process of perception.

The practical upshot is that while other models describe consciousness as perception, cognition and then action, the ADAPT model sees it as action, cognition, then perception. And it's how babies act, too. The team used Babybot to test the model, providing a minimal set of instructions -- just enough for Babybot to act on the environment. For the senses, the team used sound, vision and touch and focused on simple objects within the environment.
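The reversed cycle the article describes can be caricatured in a few lines of code. This is only an illustrative sketch, not the ADAPT implementation or the Babybot software; the agent, the environment, and all the command names here are invented for the example. The point is the ordering: the agent commits to an action first, then reasons about what happened, and only then forms a percept.

```python
import random

class ActionFirstAgent:
    """Toy agent ordered as action -> cognition -> perception."""

    def __init__(self):
        self.knowledge = {}  # maps each tried action to its observed effects

    def step(self, environment):
        # 1. Action: issue a motor command before "understanding" the scene.
        command = random.choice(["push", "touch", "grasp"])
        effect = environment.apply(command)
        # 2. Cognition: record what the action did to the world.
        self.knowledge.setdefault(command, []).append(effect)
        # 3. Perception: the sensed outcome, interpreted via past actions.
        return effect

class ToyEnvironment:
    """Stands in for the real world; responses are hard-coded."""

    def apply(self, command):
        effects = {"push": "object moved",
                   "touch": "contact felt",
                   "grasp": "object lifted"}
        return effects[command]

agent = ActionFirstAgent()
env = ToyEnvironment()
for _ in range(5):
    agent.step(env)
print(sorted(agent.knowledge))
```

Note that the agent needs no prior model of the objects: everything it "knows" accumulates as a side effect of acting, which mirrors the minimal-instruction setup the team gave Babybot.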

By simply interacting with the environment, Babybot did its engineering parents proud: it demonstrated that it could learn to separate objects from the background. Once the visual scene was segmented, Babybot could start learning about specific properties of objects that would, for instance, allow the robot to grasp them. Grasping opens up a wider world, for robots and infants alike.
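One intuition for how interaction helps segmentation: if the robot pokes the scene, the pixels that move together belong to an object, while the static remainder is background. The sketch below is a minimal frame-differencing toy built on that idea; the function name, threshold, and the 8x8 "images" are all invented for illustration and have nothing to do with Babybot's actual vision system.

```python
def segment_by_motion(before, after, threshold=10):
    """Pixels that changed after the robot's poke form the object mask."""
    return [[abs(a - b) > threshold for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(after, before)]

# Two 8x8 toy frames: a bright square that the poke shifts by one pixel.
before = [[0] * 8 for _ in range(8)]
after = [[0] * 8 for _ in range(8)]
for r in range(2, 4):
    for c in range(2, 4):
        before[r][c] = 200   # object before the poke
for r in range(3, 5):
    for c in range(3, 5):
        after[r][c] = 200    # object after the poke

mask = segment_by_motion(before, after)
print(sum(map(sum, mask)))  # count of pixels flagged as the moving object
```

Real systems need far more than differencing two frames, of course, but the principle is the same: the robot's own action supplies the motion cue that separates figure from ground.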

"Ultimately, this work will have a huge range of applications, from virtual reality, robotics and AI to psychology and the development of robots as tools for neuroscientific research," concludes Metta.

Groves of academe: Purdue scientists produce micropump to cool chips