Robots learn social skills through a new machine-learning system

Web Desk BOL News

06th Nov, 2021. 07:01 pm

MIT researchers have incorporated social interactions into a framework for robotics, enabling simulated machines to understand what it means to help or hinder one another, and to learn to perform these social behaviors on their own. Credit: MIT

A new machine-learning system helps simulated robots learn to perform certain social interactions.

Robots can now deliver food to your doorstep and hit a hole in one on the golf course, but even the most sophisticated robot still cannot manage the simple social interactions of everyday human life.

Researchers at MIT have now incorporated social interactions into a framework for robotics, enabling simulated machines to understand what it means to help or hinder one another, and to learn to perform these social behaviors on their own.

“Robots will live in our world soon enough, and they really need to learn how to communicate with us on human terms. They need to understand when it is time for them to help and when it is time for them to see what they can do to prevent something from happening. This is very early work and we are barely scratching the surface, but I feel like this is the first very serious attempt for understanding what it means for humans and machines to interact socially,” says Boris Katz, principal research scientist and head of the InfoLab Group in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and a member of the Center for Brains, Minds, and Machines (CBMM).

The researchers are developing a system with 3D agents that will allow many more types of social interactions. They also plan to modify their model to account for the possibility that tasks can fail.

The researchers are also working to incorporate a neural network-based robot planner into the model, which learns from experience and performs faster.

“Hopefully, we will have a benchmark that allows all researchers to work on these social interactions and inspire the kinds of science and engineering advances we’ve seen in other areas such as object and action recognition,” says Andrei Barbu, a research scientist at CSAIL and CBMM.

“I think this is a lovely application of structured reasoning to a complex yet urgent challenge,” says Tomer Ullman, assistant professor in the Department of Psychology at Harvard University and head of the Computation, Cognition, and Development Lab, who was not involved with this research. “Even young infants seem to understand social interactions like helping and hindering, but we don’t yet have machines that can perform this reasoning at anything like human-level flexibility. I believe models like the ones proposed in this work, that have agents thinking about the rewards of others and socially planning how best to thwart or support them, are a good step in the right direction.”
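The idea Ullman describes, agents that weigh the rewards of others when planning, can be illustrated with a minimal sketch. This is not the MIT team's actual model; it is a hypothetical toy in which a "social" agent on a number line picks an action for another agent based on a social weight: a positive weight makes it help the other agent toward its goal, a negative weight makes it hinder.

```python
def best_action(other_pos: int, other_goal: int, social_weight: float) -> int:
    """Pick the move (-1, 0, or +1) to apply to another agent's position.

    The other agent's reward is modeled as the negative distance to its
    goal. The social agent maximizes social_weight * (other's reward):
    social_weight > 0 means helping, social_weight < 0 means hindering.
    """
    actions = [-1, 0, 1]

    def other_reward(pos: int) -> float:
        # Closer to the goal is better for the other agent.
        return -abs(pos - other_goal)

    # Choose the action that maximizes the weighted social reward.
    return max(actions, key=lambda a: social_weight * other_reward(other_pos + a))


# A helper (weight +1) nudges the other agent toward its goal at 3;
# a hinderer (weight -1) pushes it away.
print(best_action(0, 3, +1.0))  # helping: moves the agent toward the goal
print(best_action(0, 3, -1.0))  # hindering: moves the agent away
```

In the actual research the other agent's goal is not given but must be inferred from its behavior, which is what makes social reasoning hard; this sketch assumes the goal is known for simplicity.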