Social robots are increasingly making their way into our homes, schools, and healthcare institutions. But how do people actually interact with this technology? Do people truly engage with robots, or do they mainly treat them as tools? In her PhD research, Navya Sharan sheds new light on these questions and reveals that the relationship between humans and social robots is not necessarily social. Sharan will defend her dissertation on Wednesday 23 April at the University of Amsterdam.

Our understanding of human-robot interaction is largely based on research that is now more than thirty years old. That research led to the development of the Computers-Are-Social-Actors (CASA) theory, which predicts that people respond socially to technology that displays social cues. “Since the 1990s, CASA has shaped the way we think about human-machine communication,” says Sharan. “But is this theory still valid now that people have become so used to interacting with technology?”

In a series of experiments, Sharan tested whether the CASA theory holds up in interactions with today's advanced social robots. She first examined how people respond to social robots as part of a team, and whether people conform to the robot. Whether, and to what extent, people engage with a robot appears to depend on the robot's perceived cognitive abilities: the more intelligent people believed the robot to be, the more they conformed.

Reciprocity and realism

In another experiment, Sharan focused on reciprocity: if a robot helps a human every time, will the human return the favour? The results showed that they do not—despite what CASA would predict. Sharan explains: “People don’t treat robots the same way they treat other people, even if the robots display social characteristics. We’re so used to technology being around us that we don’t expect to respond to it socially.”

In response to the limitations of CASA, a new theory called MASA—Media Are Social Actors—was developed. MASA suggests that social signals, such as a voice or facial expression, can automatically elicit social responses. This model goes beyond cognitive capacities and includes human-like features. In a third experiment, Sharan investigated whether people are more helpful towards a robot with a face than one without. The presence of a human-like feature (such as a face) made no difference.

Children and robots

Sharan's research also included a long-term study on human-robot interaction. Over the course of eight weeks, 400 children aged 8 to 9 took home a small robot (Cozmo). Sharan expected that over time, the children would develop a bond with the robot. However, the data told a different story: the children reported no clear sense of companionship or social presence. "This raises the question of to what extent humans and robots are truly capable of forming relationships," says Sharan.

Looking ahead

Social robots are not people—and people don’t treat them as such. “People tend to see robots more as tools than as social partners,” says Sharan. “For the development of social robots, this means that the focus shouldn’t be solely on mimicking human traits, but rather on how people experience and use these robots in practice.”

Defence details

Navya Sharan, 2025: 'How Human Are Our Machines? Rethinking How We Communicate with Social Robots'. Supervisor: Prof. J. Peter. Co-supervisor: Dr C.L. van Straten.