We've been promised for years that robots will soon move from factories into our everyday lives (maybe even white-collar offices), and yet so far, the closest thing we have to Rosie, the Jetsons' robot maid, is the Roomba.
In addition to dexterity and the ability to walk, one of the biggest hurdles to personal robotics has been human-machine interaction. For a machine to enter human space, it has to understand certain niceties. (You don't want a robot chef that can't tell if you gag when you take a bite of its food, do you?)
And if a robot with social skills can be built, it could have a huge effect on our classrooms.
An ideal social robot responds not just to what you say but to how you say it, factoring in social cues like intonation, gestures and facial expressions. The robot can then respond with appropriate body language. We take this kind of interaction for granted in science fiction, with C-3PO, for example. (Bender from Futurama might be better termed the antisocial robot.)
In reality, it looks like Leonardo, a robot built by MIT's Cynthia Breazeal, which smiles and reaches for a puppet it's told is good. Or Kismet, which hangs its head in shame when scolded.
We like treating robots as though they are people. Even with today's simplest robots, researchers have seen study participants give their machines names and carry on one-sided conversations with them. Compare that to the ambivalence people feel about their computer screens (and no, you do not really love your smartphone) and it's easy to see the potential for robots to keep people engaged.
Our affinity for robots also appears to affect how we learn from them. Researchers at Yale recently found that people doing cognitive tasks like logic puzzles learn more effectively when guided by a physical robot than they do with the same help from an on-screen avatar.
The study doesn't draw conclusions as to why the physical robot was a more effective teacher than its on-screen version, but one guess is that the physical presence of a teaching robot lends it a degree of authority that participants didn't sense from the digital instructor.
The way we divide the world between animate and inanimate objects plays a major role in how we learn, says Yale professor Brian Scassellati, who worked on the study.
We're used to identifying computers as inanimate and teachers as animate, but social robots don't fit neatly into either category. Their physical presence and interactivity make them seem animate without actually being alive, and they can respond to people and social cues, but often only in fixed and superficial ways.
A National Science Foundation project led by Scassellati, Breazeal and University of Southern California professor Maja Mataric aims to push these limits. The team is working to develop robots that can help children with disabilities learn social and cognitive skills.
To carry out meaningful interactions, though, these robots have to be able to learn on their own so they can understand an individual's personality traits and social cues.
Juyang Weng, co-founder of Michigan State University's Embodied Intelligence Laboratory, is studying how robotic learning and cognitive development can look more like human learning in order to strengthen the connection between children and robot teachers. Traditional robots, even ones used in education, aren't really interactive, he says, but are often just a new interface for familiar computer programs.
Creating artificial intelligence based on learning and social skills, though, will set a new standard for interactive educational technologies. "Eventually if a robot can develop its mind, then the robot can be a very close friend of a child," Weng says. "The robot can be a teacher in a very fundamental way."
If social robots can make the move from science fiction to reality, they promise to be a powerful force for education.