Sign languages, even those that are unrelated, encode inherent meaning components of verbs in such a way that even non-signers can extract these meanings. This is one of the surprising results of research conducted by an international group of linguists, which includes UvA researcher Roland Pfau. The team’s results were recently published in the academic journal PNAS.
Since classical times, verbs have been classified into two broad categories, namely telic and atelic. Telic verbs such as ‘decide’ and ‘sell’ encode a logical endpoint, whereas atelic verbs like ‘think’ or ‘negotiate’ do not and can therefore continue indefinitely. Known as telicity, this distinction is important in all spoken languages, yet is not explicitly signalled by verbs themselves. Across sign languages, however, verbs appear to encode telicity by means of so-called systematic form properties. For example, in American Sign Language (ASL) telic verbs are characterised by a clear gestural boundary such as an abrupt stop in movement or contact with a body part, while atelic verbs clearly lack such a boundary and involve repetitive movements.
In their study, the researchers looked at whether non-signing (hearing) subjects without any prior knowledge of sign language are able to identify telic and atelic verbs on the basis of their visual form. To do this, they conducted several experiments in which subjects were presented with verbs from three unrelated sign languages (Italian Sign Language, Sign Language of the Netherlands and Turkish Sign Language) and two meaning choices for each verb, one of which matched the telicity of the presented verb. In some of the experiments, the meaning choices included the meaning of the presented verb, while in others they did not. For example, subjects were shown the Italian Sign Language verb ‘discuss’ (atelic) paired with the meaning choices ‘discuss’ (atelic) and ‘sell’ (telic), or the same verb coupled with the meaning choices ‘imagine’ (atelic) and ‘forget’ (telic).
In all of the experiments, the subjects showed a strong tendency to choose the meaning with the matching telicity. The same pattern was observed in an additional experiment in which the subjects were presented with non-existent yet possible signs that were designed to display the presence or absence of gestural boundaries. Once again, the subjects clearly chose the meaning that matched the implied telicity of the invented sign.
Pfau: ‘Our results confirm that fine-grained aspects of verb meaning surface visually across unrelated sign languages, thanks to identical mappings between meaning and visual form. They also reveal that even non-signers can extract these meanings from entirely unfamiliar signs. Taken together, this suggests that signers and non-signers share universally accessible notions of telicity and “mapping biases” between telicity and visual form.’
Brent Strickland, Carlo Geraci, Emmanuel Chemla, Philippe Schlenker, Meltem Kelepir and Roland Pfau (2015): ‘Event representations constrain the structure of language: sign language as a window into universally accessible linguistic biases’, Proceedings of the National Academy of Sciences USA (online early edition, 27 April). doi: 10.1073/pnas.1423080112