Stop me if you’ve heard this one: A robot rolls into an Airbnb…

Maybe you like your Airbnb to come with a nice big living room, or lots of light, or his-and-her sinks. If you’re a robot, though, you just want a little variety. A carpet here, a hardwood floor there. Because you’re a pioneer, not just a tourist.

At least, if you’re a very special robot from a team at Carnegie Mellon University. To train their machines to manipulate objects in the real world, they needed access to lots of houses—their own homes, and their friends’ homes, until they ran out of homes. So when their robot was all trained up, they put it to the test in the unfamiliar world of Airbnbs.

Video by Abhinav Gupta/Adithya Murali/Dhiraj Gandhi/Lerrel Pinto/CMU

Before you ask: Yes, the homeowners knew who their robotic renters were. And yes, the robots—which look a lot like a Roomba with an arm attached—were very lovely guests. (They stayed for around a day and a half, accompanied by a few scientists testing them.) “I remember they were really excited and wanted to see how the robot does at different parts of the house,” says CMU roboticist Lerrel Pinto. “They told us that we were free to use the robot in other parts of the house.” The researchers did, by the way. “Also some were curious and checking out the robot and how it moved and asked if it can pick up trash from the floor for them.”

It couldn’t. But what the robot could do was show off how it had learned to manipulate novel objects the researchers brought along, from staplers to stuffed toys to spray bottles. These the researchers placed on various kinds of flooring, be it carpet or hardwood, which gave the robot varied backgrounds to work with.

You can teach machines to grasp in one of two ways, generally speaking: in a simulation, or with real-world practice. Simulations are good because they’re fast; you can get a digital model of a robot to test many hundreds of grips in the time it would take a pokey physical machine to move its elbow slightly, move its wrist slightly, and see what happens. Unfortunately, you can’t perfectly model the real world digitally: Physical tests are the only way to make sure that training truly matches up with real-world physics. (You can also do something called imitation learning, where you joystick a robot around and it learns how to move that way—lotta work, though.)

And the ultimate physical test takes robots outside the sterile environments constructed for lab tests and into the messy, disorderly lives of humans. “We need to take our robots into homes,” says Abhinav Gupta, a roboticist at CMU who helped develop the new system. “We need to collect lots of data of manipulation in a real setting where the floors can be different—sometimes it can be carpet, sometimes it can be tiled floor, sometimes it can be wooden floor.”

When these researchers were training the robot in their homes, it came loaded with some prior knowledge. For instance, that a grasp entails seeing an object with machine vision, reaching down, and grabbing it. The question was where to grab an object. “It will choose a random location and try to close its fingers and see if it’s able to successfully grasp it or not,” says Gupta. “Basically pick it up from the floor or not.” The robot can tell whether a grip succeeded thanks to a force sensor in its gripper, and by seeing the object in its hand.

“Initially it’s random, but then after a few thousand iterations, it will learn where it is successful and where it is not,” Gupta adds. Thus a robot can teach itself with real-world objects, then use that data to inform how it tackles different things it comes across in the home. Unlike in the lab, it’s doing all this with different lighting and flooring, so it’s gathering richer data that more accurately represents the environments where robots will one day work—decluttering homes for the elderly, for instance. So once it lands in an Airbnb—an unfamiliar environment—it can adapt instead of freaking out. Here it was able to successfully grasp novel objects 62 percent of the time, while a model trained in the lab could only manage 18.5 percent.
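The trial-and-error loop Gupta describes can be sketched in a few lines of toy Python. Everything below is invented for illustration—a fake one-dimensional floor, a stand-in force sensor, a crude averaging "learner"—not the CMU team's actual code:

```python
import random

# Toy sketch of self-supervised grasp learning: try random grasp
# points, label each attempt with the force sensor, then use the
# labeled data to pick better grasps. All names and numbers are
# hypothetical.

random.seed(0)

FLOOR_WIDTH = 100.0
OBJECT_POS = 42.0       # hidden object location the robot must discover
GRASP_TOLERANCE = 5.0   # a grasp within this distance "succeeds"

def force_sensor(grasp_x):
    """Stand-in for the gripper's force sensor: did we pick something up?"""
    return abs(grasp_x - OBJECT_POS) < GRASP_TOLERANCE

def collect_trials(n_trials):
    """Try random grasp points and record (position, success) labels."""
    data = []
    for _ in range(n_trials):
        x = random.uniform(0.0, FLOOR_WIDTH)  # random grasp location
        data.append((x, force_sensor(x)))     # self-supervised label
    return data

def best_grasp(data):
    """Crude 'learning': average the grasp points that worked."""
    wins = [x for x, ok in data if ok]
    return sum(wins) / len(wins) if wins else None

data = collect_trials(2000)
learned = best_grasp(data)
print(f"learned grasp point: {learned:.1f}")
```

The point of the sketch is the labeling scheme: the robot generates its own training signal from whether each grasp succeeded, no human annotation required.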

That doesn’t mean lab testing is passé by any means; sophisticated robots that can execute tasks with accuracy down to a few millimeters are critical for research into grasping in general, which remains a big problem for robots. But those kinds of bots are both oversized and overpriced—running into the tens of thousands of dollars—for experimenting in the home. The CMU researchers pieced this more mobile home robot together for the low, low price of $3,000.

That came with compromises, like cheaper motors accurate to a centimeter rather than a millimeter. That’s not great—imagine being a centimeter off-target as you go in to grab a can of soda. But “what we tried to do is model the noise,” Gupta says. “We’re not only trying to learn how to grasp, but we’re also trying to learn what are the errors in the controllers.” Once they could model those errors, they could correct the robot’s slightly wayward movements accordingly.
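That error-modeling idea can be sketched the same way. In this hypothetical toy, the "motors" land off-target by a hidden systematic offset plus some jitter; the robot estimates the offset from its own moves and subtracts it before commanding the arm. All names and numbers are invented for illustration:

```python
import random

# Toy sketch of modeling controller noise: cheap motors have a
# hidden bias, so we fit the simplest possible error model (the
# mean offset) and compensate for it. Hypothetical, not the CMU
# team's actual method.

random.seed(0)

TRUE_BIAS = 1.2   # hidden systematic offset (cm) of the cheap motors
JITTER = 0.3      # random per-move noise (cm)

def move_arm(commanded_x):
    """Where the arm actually ends up for a commanded position."""
    return commanded_x + TRUE_BIAS + random.gauss(0.0, JITTER)

def estimate_bias(n_moves):
    """Fit the error model: average (actual - commanded) over test moves."""
    targets = [random.uniform(0.0, 50.0) for _ in range(n_moves)]
    errors = [move_arm(x) - x for x in targets]
    return sum(errors) / len(errors)

bias = estimate_bias(200)

def corrected_move(target_x):
    """Subtract the learned bias before commanding the motors."""
    return move_arm(target_x - bias)

residual = abs(corrected_move(10.0) - 10.0)
print(f"estimated bias: {bias:.2f} cm, residual error: {residual:.2f} cm")
```

After correction, what's left is mostly the random jitter—which is the idea: you can't buy your way out of cheap hardware, but you can learn around its predictable mistakes.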

“By factoring the noise in such uncontrolled environments and low cost hardware, the paper shows how data collection for robotics can be taken out of the lab, which can allow for more highly scalable, diverse, and generalizable data,” says Xavier Puig, who’s working on robot learning in simulation at MIT CSAIL.

Great for robots, and great for the owners of those Airbnbs. Robots, after all, would never dare leave the toilet seat up.
