A new study shows that humans can infer specific information when a genetically closely related species vocally expresses emotions. In an experiment in which over 3000 human listeners judged 155 vocalisations from 66 chimpanzees, participants were able to accurately infer behavioural contexts such as threat, play, and food discovery.
Researchers from the University of Amsterdam, the University of York and the Max Planck Institute for Evolutionary Anthropology in Leipzig provide evidence that human listeners can infer the behavioural context in which chimpanzee vocalisations were produced, such as being attacked by another chimpanzee or discovering a large food source, and can correctly judge both arousal (excitement) and valence (positive or negative). The results of the study have now been published in Proceedings of the Royal Society B.
When we hear a cat hissing, a dog barking, or a person laughing, we infer information from these vocalisations about the affective state of the individual and the kind of situation they are in. From earlier studies we already know that humans can accurately infer arousal and emotional valence from the vocalisations of many different species. Some studies have also tested whether human listeners are able to recognise behavioural contexts, such as discovering food, from vocalisations. These studies showed that listeners can indeed correctly classify the production contexts of dogs’ barks, cats’ meows, and pigs’ vocalisations.
However, those studies were all limited to domesticated animals, which are only distantly related to humans. Kamiloğlu et al. now examined the ability of human listeners to infer behavioural context, in addition to arousal (excited, relaxed) and emotional valence (positive, negative), from the vocalisations of chimpanzees, one of humans’ closest living genetic relatives.
The authors tested two hypotheses.
To test these hypotheses, Kamiloğlu et al. conducted two experiments, asking 3430 naive participants to make specific judgements about chimpanzees solely by listening to their vocalisations. The stimuli comprised 155 vocalisations produced by 66 chimpanzees in 10 different positive and negative contexts at high, medium, or low arousal levels.
Human listeners failed to categorise the production contexts of the vocalisations in the forced-choice task of Experiment 1. They were, however, able to match vocalisations to most behavioural contexts in the simpler yes/no match-to-context task of Experiment 2.
In addition, human listeners accurately inferred the arousal levels (high, medium, low) and valence (positive, negative) of the chimpanzee vocalisations. Listeners were able to match vocalisations produced while eating high- and low-value food, discovering a large food source, being refused access to food, being attacked by another chimpanzee, and threatening an aggressive chimpanzee or predator to their corresponding contexts. Accuracy was especially high for highly aroused negative vocalisations, which may signal immediate, potentially dangerous situations.
The authors also conducted an acoustic analysis to investigate which features shape human listeners’ perception of affective information in chimpanzee vocalisations. The results showed that listeners made use of the brightness, duration, and noisiness of the vocalisations when making behavioural context judgements, and relied on pitch to infer arousal level and valence.
Overall, the results suggest that human listeners can infer affective information from chimpanzee vocalisations beyond the core affect dimensions. Human listeners can judge whether a vocalisation was produced in a positive or negative context, and whether the chimpanzee was excited or relaxed. Furthermore, listeners can accurately infer particular behavioural contexts in which these vocalisations were produced. These findings suggest the preservation of acoustic features that map onto specific behavioural contexts, in addition to features characterising valence and arousal levels.
“Human listeners’ perception of behavioural context and core affect dimensions in chimpanzee vocalizations” by Roza G. Kamiloğlu, Katie E. Slocombe, Daniel B. M. Haun and Disa A. Sauter, published in Proc. R. Soc. B (http://dx.doi.org/10.1098/rspb.2020.1148).