Imagine, for a moment, the simple act of picking up a playing card from a table. You have a couple of options: Maybe you jam your fingernail under it for leverage, or drag it over the edge of the table.

Now imagine a robot trying to do the same thing. Tricky: Most robots don’t have fingernails, or friction-facilitating fingerpads that perfectly mimic ours. So many of these delicate manipulations continue to escape robotic control. But engineers are making steady progress in getting the machines to manipulate our world. And now, you can help them from the comfort of your own home.

UC Berkeley and Siemens researchers have launched something called Dex-Net as a Service, a beta program that computes how and where a robot should grip objects like vases and turbine housings. You can even upload designs of your own objects. The goal: to one day get the robot in your home to call up to the cloud for tips on how to manipulate novel objects. Maybe we can even keep them from destroying the delicates.

Check out the simulator on the Dex-Net site. You’ll see a spray bottle as a robot might see it: Each colored bar going through the bottle marks a spot where a robot pincer could attempt a grip. The line enters where one of its finger pads could rest, and exits where the other would go. Pinching, basically. The colors correspond to the probability of a successful grasp at that location: green is good, red is bad, yellow is in between.
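If you’re curious what that color coding amounts to, here’s a toy sketch in Python. The 0.25 and 0.75 cutoffs are invented for illustration; the real simulator computes a continuous probability from physics, not hard thresholds.

```python
# Toy sketch (not Dex-Net's actual code): map an estimated
# grasp-success probability to the simulator's color scheme.
# The 0.25 and 0.75 cutoffs are invented for illustration.
def grasp_color(p_success: float) -> str:
    if p_success >= 0.75:
        return "green"   # likely to hold
    if p_success >= 0.25:
        return "yellow"  # borderline
    return "red"         # likely to slip

print(grasp_color(0.9))  # -> green
```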


The quality of the grasp depends on a few things. A robot’s sensors are never perfectly calibrated, and their readings come with a bit of noise, so there’s always some uncertainty in where the robot thinks an object is. Then, as the robot approaches, there’s no guarantee it will perfectly follow orders. “If you command a robot to go to some point in space, it’ll get there pretty close but never perfectly,” says UC Berkeley roboticist Ken Goldberg. And then there’s the variability in the physical world; push a pen with your finger across a table and it’ll move differently every time.

So this simulator is looking for spots that are “robust” to all of these factors. “In other words, even if the robot is slightly off, if the object is slightly off, if the physics are slightly off, the grasp still has a high probability of success,” says Goldberg.

In the presence of these uncertainties, the system calculates what would happen if the robot gripped an object at a certain spot—and lots of spots nearby. “We say, ‘What if we perturb it? If we sort of move everything around a little bit, does the grasp still work?’” Goldberg says.
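In spirit, that’s a Monte Carlo estimate: sample lots of small random perturbations, test the grasp under each one, and report the fraction that still hold. Here’s a minimal Python sketch of the idea; the `grasp_succeeds` physics check is a hypothetical stand-in for the real contact analysis, and the noise magnitudes are made up.

```python
import random

def robustness(grasp, obj, grasp_succeeds, n_samples=1000):
    """Estimate grasp robustness by Monte Carlo perturbation.

    `grasp_succeeds` is a hypothetical stand-in for the real
    contact-physics check; the noise scales below are invented.
    """
    successes = 0
    for _ in range(n_samples):
        pose_jitter = [random.gauss(0, 0.002) for _ in range(3)]  # ~2 mm object-pose noise
        grip_jitter = [random.gauss(0, 0.003) for _ in range(3)]  # ~3 mm gripper-positioning noise
        if grasp_succeeds(grasp, obj, pose_jitter, grip_jitter):
            successes += 1
    return successes / n_samples  # estimated probability the grasp holds
```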

Take a look at the spray bottle again. If you move the “grasp robustness” slider all the way left you’ll see red lines pop up—bad grasps. Notice where they are, up at the head of the bottle. The system has determined that’s a spot that wouldn’t hold up well to perturbations. The green whiskers down at the bulbous bottom, though, those have a higher likelihood of a successful grasp.

Interestingly, that’s not where you or I would go by default. Most humans would probably grip the neck, which is designed with those nice finger holds. But for the robot’s two-pronged gripper in simulation, the base is best.

And out in the real world, a robot will need options if, for instance, it can’t reach part of an object. Calculating perturbations for many different grasps on just one spray bottle takes a whole lot of computing power. “You quickly get into billions of computations per object,” says Goldberg.

Which is where so-called fog robotics comes in: Some computation would be done by the robot itself, and some done in the cloud. (Fog, get it?) Goldberg sees Dex-Net as a Service working like software as a service—something like Google Docs, where calculations are done in the cloud and beamed down to your computer.


So say your shiny new home robot gets to work decluttering your floors, and it comes across a teddy bear, which it’s never seen before. “What it does is it takes an image or scans it in three dimensions, uploads that into the cloud, and the cloud does this analysis,” says Goldberg. The service says, here’s what the object is, here’s how to grasp it, here’s where it goes in the house. It might also work in a factory setting, allowing production lines to more fluidly adapt to new parts that robots have to manipulate.
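As a rough sketch of what that round trip could look like from the robot’s side, here’s a hypothetical Python client. The endpoint URL, request fields, and response format are all invented for illustration; the actual Dex-Net beta API may look nothing like this.

```python
import requests  # third-party HTTP library: pip install requests

# Hypothetical endpoint; not Dex-Net's actual API.
SERVICE_URL = "https://example.com/api/grasps"

def request_grasps(mesh_path: str):
    """Upload a 3D scan and get back candidate grasps, ranked by robustness."""
    with open(mesh_path, "rb") as f:
        resp = requests.post(SERVICE_URL, files={"mesh": f}, timeout=30)
    resp.raise_for_status()
    grasps = resp.json()["grasps"]  # assumed schema: [{"pose": ..., "robustness": ...}, ...]
    # Try the most robust grasp first, and fall back to the next if it's unreachable.
    return sorted(grasps, key=lambda g: g["robustness"], reverse=True)
```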

“We are delighted to see Berkeley taking this initiative to crowdsource efficient grasping of a variety of products,” says Anurag Maunder, SVP of engineering at Kindred, which uses machine learning techniques to get robots to better manipulate objects. “The simulator they have created can form the basis for creating training sets for more advanced scenarios.”

Dex-Net as a Service has some limitations (again, it’s in beta). For one, it doesn’t precisely model friction between the gripper and the object. And it doesn’t calculate the object’s center of mass—which would come in handy if you wanted the bot to handle something like, say, a hammer.

But because you can upload your own designs to fiddle with them, you too can help Goldberg and his colleagues tackle one of the biggest problems in robotics. “We're going to be looking at these examples because we can learn from them,” he says. “We'll look at where it fails, where it succeeds, and it will help us fine-tune the system.”

Still, it’ll be a long, long while before robots can manipulate with the dexterity of humans. But bit by bit, we can all help them get there. Next stop: card-dealing robots that don’t give themselves panic attacks.

