Microsoft interns invent smarter photo tagging

So many pictures, so little time to tag all those smartphone-generated memories. A pair of Microsoft interns have come up with a remedy.

The TagSense app, created by Duke University and University of South Carolina researchers using Google Nexus phones, exploits the motion, light and other sensors on a user's mobile phone, and on nearby phones, to piece together information about the photos being taken. This, they say, will help users more easily search for photos in the future.

The prototype app was discussed at a conference held last month in Washington, D.C., and its details are explained in an accompanying paper.

"When a TagSense-enabled phone takes a picture, it automatically detects the human subjects and the surrounding context by gathering sensor information from nearby phones," said Xuan Bao, a Ph.D. student in computer science at Duke, in  

The phone’s accelerometer, for example, can detect whether a person in a photo was stationary or moving, and other sensors can be used to determine everything from lighting to weather to whether the people in a picture were silent or laughing. Such attributes feed into the tagging system and can complement rapidly improving facial recognition technologies to help categorize and later identify photos.
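To make the idea concrete, here is a minimal sketch of how sensor readings from the camera phone and nearby phones might be mapped to candidate tags and merged. All field names, thresholds, and tag labels are illustrative assumptions, not details from the published TagSense system.

```python
# Hypothetical sketch of TagSense-style tagging: each phone contributes
# sensor readings, which are mapped to candidate tags and merged.
# Thresholds and field names are invented for illustration.

def infer_tags(reading):
    """Map one phone's raw sensor readings to candidate photo tags."""
    tags = []
    # High accelerometer variance suggests the subject was moving.
    tags.append("moving" if reading["accel_variance"] > 0.5 else "stationary")
    # Ambient light level hints at the lighting conditions.
    tags.append("bright" if reading["light_lux"] > 1000 else "dim")
    # Microphone energy hints at talking or laughter vs. silence.
    tags.append("talking" if reading["audio_energy"] > 0.2 else "quiet")
    return tags

def tag_photo(nearby_readings):
    """Merge candidate tags gathered from all phones near the shot."""
    merged = set()
    for reading in nearby_readings:
        merged.update(infer_tags(reading))
    return sorted(merged)

photo_tags = tag_photo([
    {"accel_variance": 0.8, "light_lux": 2400, "audio_energy": 0.4},
    {"accel_variance": 0.1, "light_lux": 2200, "audio_energy": 0.05},
])
print(photo_tags)  # ['bright', 'moving', 'quiet', 'stationary', 'talking']
```

The merged tag set reflects that different subjects in the same photo can contribute different context: one phone reports motion and speech, another reports stillness and quiet, and both agree the scene was bright.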