Can robots ever have a true sense of self? Scientists are making progress

By Vishwanathan Mohan - Lecturer in Computer Science, University of Essex

Having a sense of self lies at the heart of what it means to be human. Without it, we couldn’t navigate, interact, empathise or ultimately survive in an ever-changing, complex world of others. We need a sense of self when we are taking action, but also when we are anticipating the consequences of potential actions, by ourselves or others.

Image Credit: Seanbatty via Pixabay

Given that we want to incorporate robots into our social world, it’s no wonder that creating a sense of self in artificial intelligence (AI) is one of the ultimate goals for researchers in the field. If these machines are to be our carers or companions, they must inevitably have an ability to put themselves in our shoes. While scientists are still a long way from creating robots with a human-like sense of self, they are getting closer.

Researchers behind a new study, published in Science Robotics, have developed a robotic arm with knowledge of its physical form – a basic sense of self. Rudimentary though it is, this is an important step.

There is no perfect scientific explanation of what exactly constitutes the human sense of self. Emerging studies from neuroscience show that cortical networks in the motor and parietal areas of the brain are activated in many contexts in which we are not physically moving. For example, hearing words such as “pick” or “kick” activates the motor areas of the brain, as does observing someone else acting.

The hypothesis emerging from this is that we understand others as if we ourselves were acting – a phenomenon scientists refer to as “embodied simulation”. In other words, we reuse our own ability to act with our bodily resources in order to attribute meanings to the actions or goals of others. The engine that drives this simulation process is a mental model of the body or the self. And that is exactly what researchers are trying to reproduce in machines.

The physical self

The team behind the new study used a deep learning network to create a self model in a robotic arm from data generated by random movements. Importantly, the AI was not fed any information about its geometrical shape or underlying physics; it learned gradually as it moved and bumped into things – similar to a baby learning about itself by observing its hands.

It could then use this self model, containing information about its shape, size and movement, to make predictions about the outcomes of future actions, such as picking something up with a tool. When the scientists made physical changes to the robot arm, the contradictions between the robot’s predictions and reality triggered the learning loop to start over, enabling the robot to adapt its self model to its new body shape.
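The idea can be sketched in a few lines of Python. This is not the authors’ code or architecture – just a minimal illustration, assuming a toy two-joint arm as a stand-in for the real robot: a small network learns to predict the sensory consequences of random motor commands, and when its predictions stop matching reality (because the “body” has changed), the same learning loop simply runs again.

```python
import torch
import torch.nn as nn

# Toy stand-in for the robot body: forward kinematics of a 2-link planar arm.
# The link lengths are the "physical form" the self model has to discover.
def arm_hand_position(joint_angles: torch.Tensor, lengths=(1.0, 0.8)) -> torch.Tensor:
    a1, a2 = joint_angles[..., 0], joint_angles[..., 1]
    x = lengths[0] * torch.cos(a1) + lengths[1] * torch.cos(a1 + a2)
    y = lengths[0] * torch.sin(a1) + lengths[1] * torch.sin(a1 + a2)
    return torch.stack([x, y], dim=-1)

# The self model: a small network mapping motor commands (joint angles) to their
# sensory consequences (hand position). It is never told the arm's geometry;
# it has to infer it from randomly generated movements.
class SelfModel(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, joint_angles):
        return self.net(joint_angles)

def babble_and_learn(model, lengths, steps=3000):
    """Motor babbling: try random joint angles, observe outcomes, fit the model."""
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        angles = torch.rand(64, 2) * 3.14            # random exploratory movements
        observed = arm_hand_position(angles, lengths) # what the "body" actually does
        loss = nn.functional.mse_loss(model(angles), observed)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
    return loss.item()

model = SelfModel()
print("error after babbling:", babble_and_learn(model, lengths=(1.0, 0.8)))

# "Change" the body: alter a link length. Predictions now contradict reality,
# which – in the spirit of the study – triggers the learning loop to start over.
new_lengths = (1.0, 1.2)
test = torch.rand(256, 2) * 3.14
error = nn.functional.mse_loss(model(test), arm_hand_position(test, new_lengths))
print("prediction error after body change:", error.item())
print("error after re-adapting:", babble_and_learn(model, lengths=new_lengths))
```

The names and the toy arm here are assumptions for illustration; the actual study used a physical robotic arm and a deeper self-modelling network, but the principle – learn from your own random movements, and relearn when your predictions about yourself start failing – is the same.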

While the present study used a single arm, similar models are also being developed for humanoid robots through a process of self-exploration (dubbed sensorimotor babbling), inspired by studies in developmental psychology.

One of our beloved robots on Mars - Image Credit: NASA/JPL-Caltech/MSSS

The complete self

Even so, a robotic sense of self does not come close to the human one. Like an onion, our self has several mysterious layers. These include the ability to identify with the body, to be located within the physical boundaries of that body, and to perceive the world from the visuo-spatial perspective of that body. But it also involves processes that go beyond this, including the integration of sensory information, continuity in time through memories, agency and ownership of one’s actions, and privacy (people can’t read our thoughts).

While the quest to engineer a robotic sense of self that encompasses all these layers is still in its infancy, building blocks such as the body schema demonstrated in the new study are being created. Machines can also be made to imitate others, predict the intentions of others, or adopt their perspective. Such developments, along with a growing episodic memory, are also important steps towards building socially cognitive robotic companions.

Interestingly, this research can also help us learn more about the human sense of self. We now know that robots can adapt their physical self model when changes are made to their bodies. An alternative way to think about this is in the context of tool use by animals, where diverse external objects are coupled to the body (sticks, forks, swords or smartphones).

Imaging studies show that neurons active during hand grasping in monkeys also become active when they grasp using pliers, as if the pliers were now the fingers. The tool becomes a part of the body, and the physical sense of self has been altered. It is similar to how we consider an avatar on the screen to be ourselves while playing video games.

An intriguing idea, originally proposed by the Japanese neuroscientist Atsushi Iriki, is that the ability to literally incorporate external objects into one’s body and the ability to objectify other bodies as tools are two sides of the same coin. Remarkably, this blurred distinction requires the emergence of a virtual concept – the self – to act as a placeholder between the subject/actor and objects/tools. Tweaking the self by adding or removing tools can therefore help us probe how it operates.

Robots learning to use tools as extensions of their bodies are fertile test beds for validating such emerging data and theories from neuroscience and psychology. At the same time, the research will lead to the development of more intelligent, cognitive machines that work for and with us in diverse domains.

Perhaps this is the most important aspect of the new research. It ultimately brings together psychology, neuroscience and engineering to understand one of the most fundamental questions in science: Who am I?

Source: The Conversation
