New York, March 7: Researchers, including one of Indian origin, have designed a system that lets robots learn complicated tasks, such as setting a dinner table, that would otherwise confuse them with too many conditions and rules to follow.
The new system, called Planning with Uncertain Specifications (PUnS), gives robots a human-like planning ability to simultaneously weigh many ambiguous, and potentially contradictory, requirements to reach end goals, according to their study, published in the journal IEEE Robotics and Automation Letters.
With the new system, robots choose the most likely action to take based on a “belief” about some probable specifications for the task they are supposed to perform, said the researchers from the Massachusetts Institute of Technology (MIT) in the US.
In the study, the scientists compiled a dataset with information about how eight objects — a mug, glass, spoon, fork, knife, dinner plate, small plate, and bowl — could be placed on a table in various configurations.
A robotic arm first observed randomly selected human demonstrations of setting the table with the objects, the study noted.
The researchers then tasked the arm with automatically setting a table in a specific configuration, in real-world experiments and in simulation, based on what it had seen.
To succeed, the robot had to weigh many possible placement orderings, even when the items were purposely removed, stacked, or hidden, they said.
While conditions like these would normally confuse robots, the new system helped the robot make no mistakes over several real-world experiments, and only a handful of errors over tens of thousands of simulated test runs, the researchers said.
“The vision is to put programming in the hands of domain experts, who can program robots through intuitive ways, rather than describing orders to an engineer to add to their code,” said study first author Ankit Shah from MIT.
“That way, robots won’t have to perform preprogrammed tasks anymore. Factory workers can teach a robot to do multiple complex assembly tasks. Domestic robots can learn how to stack cabinets, load the dishwasher, or set the table from people at home,” Shah said.
According to the scientists, for robots, learning to set a table by observing demonstrations is full of uncertain specifications.
Items must be placed in certain spots, depending on the menu and where guests are seated, and in certain orders, depending on an item’s immediate availability or social conventions, they explained.
The researchers said current approaches to planning are not capable of dealing with such uncertain specifications.
Using the new PUnS system enables a robot to hold a “belief” over a range of possible specifications, the scientists said.
“The robot is essentially hedging its bets in terms of what’s intended in a task, and takes actions that satisfy its belief, instead of us giving it a clear specification,” Shah said.
According to Shah and his team, the new system is built on “linear temporal logic” (LTL), a language that enables robotic reasoning about current and future outcomes.
The researchers defined templates in LTL that model various time-based conditions, such as what must happen now, must eventually happen, and must happen until something else occurs.
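As a rough illustration of how such templates work (this is not the authors’ code, and the names below are hypothetical), LTL-style conditions can be evaluated as predicates over a trace of world states:

```python
# A minimal sketch of LTL-style temporal templates, assuming a task is
# recorded as a trace (list) of world states. Names are illustrative,
# not taken from the PUnS paper.

def eventually(pred, trace):
    """F pred: pred must hold at some point in the trace."""
    return any(pred(state) for state in trace)

def always(pred, trace):
    """G pred: pred must hold at every point in the trace."""
    return all(pred(state) for state in trace)

def until(pred_a, pred_b, trace):
    """pred_a U pred_b: pred_a must hold until pred_b first holds."""
    for i, state in enumerate(trace):
        if pred_b(state):
            return all(pred_a(s) for s in trace[:i])
    return False

# Example: the fork must not appear until the dinner plate is down.
def placed(item):
    return lambda state: item in state["on_table"]

trace = [
    {"on_table": set()},
    {"on_table": {"dinner plate"}},
    {"on_table": {"dinner plate", "fork"}},
]
no_fork = lambda s: "fork" not in s["on_table"]
print(until(no_fork, placed("dinner plate"), trace))  # True
```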
The robot’s observations of 30 human demonstrations for setting the table yielded a probability distribution over 25 different LTL formulas, they said.
Each formula, according to the scientists, encoded a slightly different preference — or specification — for setting the table.
That probability distribution serves as the robot’s belief, the researchers explained.
“Each formula encodes something different, but when the robot considers various combinations of all the templates, and tries to satisfy everything together, it ends up doing the right thing eventually,” Shah said.
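A loose sketch of that idea (again, not the authors’ implementation): the belief is a set of (formula, probability) pairs, and each candidate next placement is scored by the probability mass of the formulas its resulting trace would satisfy:

```python
# A toy sketch of acting under a belief over specifications: weigh each
# candidate formula by its probability and greedily pick the placement
# that keeps the most probability mass satisfied. Formulas here map a
# partial trace of placed items to True/False; all weights are made up.

def expected_satisfaction(trace, belief):
    """Sum the probability of every formula the trace satisfies."""
    return sum(p for formula, p in belief if formula(trace))

def best_action(trace, candidates, belief):
    """Greedy one-step lookahead over the next item to place."""
    return max(candidates,
               key=lambda item: expected_satisfaction(trace + [item], belief))

# Two conflicting preferences: plate-before-fork (70%) vs fork-before-
# plate (30%); a partial trace vacuously satisfies a formula whose
# ordering it has not yet violated.
belief = [
    (lambda tr: "fork" not in tr
        or ("plate" in tr and tr.index("plate") < tr.index("fork")), 0.7),
    (lambda tr: "plate" not in tr
        or ("fork" in tr and tr.index("fork") < tr.index("plate")), 0.3),
]
print(best_action([], ["fork", "plate"], belief))  # "plate"
```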
In simulations asking the robot to set the table in different configurations, it only made six mistakes out of 20,000 tries, the study noted.
In the real-world experiments, the researchers said, the robot showed behaviour similar to how a human would perform the task.
If an item, say the fork, wasn’t initially visible, the robot would finish setting the rest of the table without it, the scientists said.
Then, when the fork was revealed, it would place the fork in the proper spot, they added.
“That’s where flexibility is very important. Otherwise it would get stuck when it expects to place a fork and not finish the rest of table setup,” Shah said.
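A hedged sketch of that flexibility, with hypothetical available() and place() helpers standing in for the robot’s perception and manipulation: missing items are deferred and retried rather than blocking the rest of the task:

```python
# A toy sketch of the deferred-placement behaviour described above:
# place whatever is visible now, and keep retrying missing items.
# available() and place() are hypothetical stand-ins for perception
# and manipulation, not part of the PUnS system's API.

def set_table(items, available, place):
    pending = list(items)
    while pending:
        progressed = False
        for item in list(pending):
            if available(item):   # e.g. the fork has just been revealed
                place(item)
                pending.remove(item)
                progressed = True
        if not progressed:
            break                 # nothing placeable yet; wait or replan
    return pending                # any items still missing
```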
The scientists hope to modify the system to help robots change their behaviour based on verbal instructions, corrections, or a user’s assessment.
“Say a person demonstrates to a robot how to set a table at only one spot. The person may say, ‘do the same thing for all other spots,’ or, ‘place the knife before the fork here instead,’” Shah said.
“We want to develop methods for the system to naturally adapt to handle those verbal commands, without needing additional demonstrations,” he added.