Sunday, November 18, 2007

Face recognition

MONTEREY, Calif.--Get ready for a new era in which your camera knows not just when you took a picture but who's in it, too.

Many cameras today can detect the faces of those being photographed, which is handy for guiding the camera to set its exposure, focus, and color balance properly. But face recognition--the harder challenge of identifying whose faces they are--is most useful after the photo has been taken.

That's because of a concept called autotagging, one of a number of technologies that make digital photography qualitatively different from the film photography of the past.

Tags of descriptive data can be attached to digital photos, and they help people find and organize pictures. The only problem is that tagging your photos, today a laborious manual task, is like eating your vegetables. It's good for you, but a lot of people don't like it.

With autotagging, the camera attaches tags as the pictures are taken. Today, cameras embed timestamps in photos, which makes it possible to sift through pictures by date. But be honest here--how reliably can you remember exactly when, a year or two ago, you took that picture of your darling daughter that you'd like to e-mail to her grandparents? Being able to screen for photos only of a particular person could dramatically speed up the search process.
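To see why a person tag beats a timestamp for this kind of search, here's a minimal sketch. The photo library, file names, and tag fields are all hypothetical--no camera format is implied--but the lookup itself shows the point: with tags, finding every shot of one person requires no memory of dates at all.

```python
from datetime import datetime

# Hypothetical photo library: each entry carries the timestamp cameras
# already embed, plus the kind of person tags autotagging could add.
photos = [
    {"file": "IMG_0001.jpg", "taken": datetime(2006, 7, 4), "people": ["Alice"]},
    {"file": "IMG_0214.jpg", "taken": datetime(2007, 3, 12), "people": ["Alice", "Bob"]},
    {"file": "IMG_0460.jpg", "taken": datetime(2007, 3, 12), "people": ["Carol"]},
]

def photos_of(person, library):
    """Return every file tagged with a given person, regardless of date."""
    return [p["file"] for p in library if person in p["people"]]

print(photos_of("Alice", photos))  # finds both Alice shots, no date guessing
```

A date search, by contrast, only narrows the pile--you still have to remember roughly when the shot was taken and then look through everything from that period.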

Face recognition requires computational horsepower that is hard to fit into the confines of a digital camera, but one company likely to help make it a reality is Fotonation, which already supplies face-detection software for dozens of camera models from Samsung, Pentax, and others.

The computational challenge is reduced by the fact that most folks tend to photograph the same set of 25 or 30 people, Eric Zarakov, Fotonation's vice president of marketing, said in an interview here at the 6sight digital imaging conference. A camera could be "trained" to recognize just those particular people.
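Zarakov's point about a small, fixed gallery can be illustrated with a generic sketch--this is not Fotonation's method, and the feature vectors and threshold below are made up. Recognition against 25 or 30 known people reduces to comparing a face's features against a short list and picking the closest match, which is far cheaper than open-ended identification.

```python
import math

# Hypothetical "trained" gallery: one feature vector per familiar face.
# Real systems derive such vectors from face images; these numbers are invented.
gallery = {
    "Alice": [0.10, 0.80, 0.30],
    "Bob":   [0.90, 0.20, 0.50],
}

def identify(features, gallery, threshold=0.5):
    """Match a face's feature vector to the closest person in a small
    gallery, or return None if nobody known is close enough."""
    best_name, best_dist = None, float("inf")
    for name, ref in gallery.items():
        dist = math.dist(features, ref)  # Euclidean distance, Python 3.8+
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

print(identify([0.12, 0.78, 0.33], gallery))  # close to Alice's vector: Alice
print(identify([0.50, 0.50, 0.90], gallery))  # nobody close enough: None
```

With only a few dozen gallery entries, the comparison loop is trivial--the expensive step a camera would still need is computing the feature vector from the image in the first place.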

He wouldn't comment on whether Fotonation plans to sell such software to camera makers, but it sure looks likely. "We're looking at a lot of stuff. That would be a natural extension" of today's product lines, Zarakov said.

One camera maker willing to mention its interest in autotagging is Panasonic. "A lot of thought is going into how to tag photos so you can retrieve them at a moment's notice," said Alex Fried, national marketing manager for imaging at Panasonic's Consumer Electronics Co. But he wouldn't go into specifics: "There are things we have in the works that will help benefit consumers going forward."

And faces aren't the only aspect of autotagging that's likely to show up in cameras. Location is another useful attribute that can be attached to photos, through a process called geotagging. Geotagging can be used both to look for photos whose location you know and to figure out what exactly is in a photo you already have at hand.

Today, geotagging is generally a laborious manual task that requires geographic data to be merged with photos after the fact using a computer. But more power-efficient approaches will lead to in-camera GPS systems that will enable automatic geotagging, predicted Kanwar Chadha, founder of GPS chip designer SiRF Technology.
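The after-the-fact merge Chadha describes boils down to lining up two clocks: each photo's timestamp is matched against a GPS track log, and the photo inherits the coordinates of the nearest-in-time fix. Here's a minimal sketch of that matching step--the track points and coordinates are invented, and real tools also handle clock offsets and interpolation between fixes.

```python
from datetime import datetime

# Hypothetical GPS track log: (time, latitude, longitude) fixes
# recorded by a logger carried alongside the camera.
track = [
    (datetime(2007, 11, 18, 10, 0),  36.600, -121.894),
    (datetime(2007, 11, 18, 10, 5),  36.603, -121.898),
    (datetime(2007, 11, 18, 10, 10), 36.610, -121.901),
]

def geotag(photo_time, track):
    """Assign a photo the coordinates of the nearest-in-time GPS fix --
    the merge a desktop geotagging tool performs after the fact."""
    fix = min(track, key=lambda t: abs((t[0] - photo_time).total_seconds()))
    return fix[1], fix[2]

print(geotag(datetime(2007, 11, 18, 10, 6), track))  # nearest fix is the 10:05 sample
```

An in-camera GPS would make this whole step unnecessary: the coordinates would be written into each photo at the moment of capture, just like the timestamp is today.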

"A location stamp is much more important than a time stamp in most cases. A year down the road, you have no idea where those pictures were taken and no way to search for location," Chadha said.

Face recognition is an area of active research and some commercialization. Start-up Riya is working on technology to search through online photo albums to try to identify individuals. Polar Rose is trying to improve recognition by generating 3D models of faces. And 3VR wants to apply face recognition to what's become a highly lucrative market, security.


At the 6sight conference, Marian Stewart Bartlett showed results of her research into not just face detection, but expression detection. Her work at the Machine Perception Lab at the University of California-San Diego lets a computer monitor 30 of the 46 codified components of facial expressions. That includes movements such as raised eyebrows and wrinkled noses.

In the demonstration, software tracked Bartlett's face from a video camera and recorded expression parameters. Analyzing the data, the computer can draw conclusions about people. For example, when comparing a video of a man's face as he experienced actual pain from immersing his hand in cold water to another in which he faked the pain, people had about an even chance of guessing which showed the authentic pain. The computer, though, had 72 percent accuracy, she said.

That level of sophistication is beyond a camera's abilities today, requiring a full-fledged computer run by people with Ph.D. degrees. But given that Sony already has introduced a camera with smile detection, it's not hard to imagine a day when your photos could be tagged "delighted" or "disgusted," too.

Stephen Shankland