Student working to give robots ability to see
Babies can still do things the smartest of robots struggle with - but a Victoria University PhD student hopes to change that.
At present, robots and computer programs cannot see and recognise the world around them in the way that even the youngest of humans can.
But computer engineer Syed Saud Naqvi and supervisor Will Browne have been developing an algorithm for two years that will help robots overcome that hurdle.
Many of today's gadgets use means other than light to navigate the world around them, Browne says.
"One is [the robot] just wanders around bumping into things - the simple, vacuum-cleaning robot, that's all it does."
Other methods include bouncing back ultrasonic pulses or infrared light, and using laser scanners or depth sensors, as the Xbox Kinect gameplay system does.
But give them an image, and computers "see" something very flat, finding it hard to pick out one object from another.
Yet the next generation of technologies - anything from photo-recognition software to driverless cars or home-helper bots - will need to pick up cues from patterns of light to function.
"Humans have created a visual world," Browne says. "We colour things interestingly to stand out, through shapes and signs, so we can recognise important things like a stop sign."
The challenge is there are more than 100 visual cues that a computer could use to pick out what is important in an image - anything from its colour, or how many clear edges it has, to its texture.
"If you had a system that had to search through each [cue] one by one by one, it takes far too long. So what we're trying to do is find a subset of features."
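The idea of searching for a useful subset of cues rather than checking all of them can be sketched in code. This is an illustrative example only: the cue names, the toy data, and the greedy selection criterion below are invented for demonstration and are not taken from the researchers' actual algorithm.

```python
# Illustrative sketch of greedy feature-subset selection for saliency.
# All cue names, maps, and the scoring rule are hypothetical.

def agreement(cue_map, ground_truth):
    """Fraction of pixels where a thresholded cue map matches the mask."""
    hits = sum(1 for c, g in zip(cue_map, ground_truth) if (c > 0.5) == g)
    return hits / len(ground_truth)

def select_cues(cues, ground_truth, k=2):
    """Greedily pick up to k cues whose averaged map best matches the mask."""
    chosen = []
    while len(chosen) < k:
        best_name, best_score = None, -1.0
        for name in cues:
            if name in chosen:
                continue
            trial = chosen + [name]
            # Average the candidate subset's maps pixel by pixel.
            avg = [sum(cues[n][i] for n in trial) / len(trial)
                   for i in range(len(ground_truth))]
            score = agreement(avg, ground_truth)
            if score > best_score:
                best_name, best_score = name, score
        chosen.append(best_name)
    return chosen

# Toy 1-D "image" of 6 pixels; True marks the salient object.
truth = [False, False, True, True, False, False]
cues = {
    "colour":  [0.1, 0.2, 0.9, 0.8, 0.2, 0.1],   # agrees well with truth
    "edges":   [0.6, 0.1, 0.7, 0.9, 0.1, 0.4],   # partial agreement
    "texture": [0.9, 0.8, 0.2, 0.1, 0.7, 0.9],   # mostly wrong
}
print(select_cues(cues, truth, k=2))  # → ['colour', 'edges']
```

Rather than scoring all 100-plus cues on every image, a system like this settles on a small subset that carries most of the signal, which is what makes the search fast enough to be practical.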
In the final year of his PhD, Saud will program his new algorithm into a bot and test how well it picks out the critical objects it needs to.
Algorithms such as his are likely to supplement, or even replace, current navigation systems in the near future, Browne says.
- The Dominion Post