Gathering around a table, conversing, and sharing food are among the most widespread and universal social experiences. The project COmputational Models of Commensality for artificial Agents (COCOA), funded by the Italian Ministry of University and Research, aims to investigate human-human interactions in commensal settings using state-of-the-art AI methods and to develop artificial commensal companions (e.g., social robots) capable of engaging with human commensals. We will use machine learning, deep learning, and computer vision methods to develop computational models, e.g., for recognizing commensal activities and analyzing social relations. We will also develop artificial commensal companions able to recognize human actions and attitudes and to maintain engaging interactions with human commensal partners. You can read more about commensality in our previous papers:
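As a toy illustration of one step in video-based activity recognition, the sketch below shows majority-vote smoothing of noisy per-frame classifier outputs, a common post-processing technique; the activity labels and predictions are invented for the example and are not data from the project.

```python
from collections import Counter

def smooth_predictions(frame_labels, window=5):
    """Majority-vote smoothing over a sliding window of per-frame
    activity predictions, removing short spurious flickers."""
    half = window // 2
    smoothed = []
    for i in range(len(frame_labels)):
        lo, hi = max(0, i - half), min(len(frame_labels), i + half + 1)
        smoothed.append(Counter(frame_labels[lo:hi]).most_common(1)[0][0])
    return smoothed

# Hypothetical per-frame outputs from an activity classifier, with noisy flickers.
raw = ["eating", "eating", "talking", "eating", "eating", "eating",
       "talking", "talking", "eating", "talking", "talking", "talking"]
print(smooth_predictions(raw))
```

With a 5-frame window, the isolated "talking" and "eating" flickers are voted away, leaving two stable activity segments.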
Human expressivity and social interaction patterns vary significantly across cultures, making the development of computational models highly challenging. Most existing datasets and models are monocultural, which limits their applicability and can reinforce cultural biases or stereotypes. A cross-cultural approach is therefore essential, especially in today’s globalized world, where intercultural interactions are increasingly common. Collecting comparable data across cultures not only enables the development of more robust and fair models but also deepens our understanding of cultural diversity. Computational modeling can quantify subtle social differences, such as gaze patterns, revealing how behaviors that are appropriate in one culture may be perceived differently in another. This project aims to: (1) collect novel datasets for affective and social modeling using shared protocols in Italy and India; (2) develop and test computational models of affective and social behavior in cross-cultural settings; and (3) computationally analyze cross-cultural differences in social interaction.
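To sketch how a subtle difference such as gaze duration could be quantified across two cultural groups, the example below runs a simple two-sided permutation test; the per-glance durations are invented placeholder values, not data collected by the project.

```python
import random
import statistics

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Two-sided permutation test for a difference in mean gaze duration.

    Returns the observed mean difference (seconds) and an approximate p-value.
    """
    rng = random.Random(seed)
    observed = statistics.mean(group_a) - statistics.mean(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:])
        if abs(diff) >= abs(observed):
            count += 1
    return observed, count / n_permutations

# Hypothetical per-glance gaze durations (seconds) for two cultural groups.
culture_a = [1.8, 2.1, 1.6, 2.4, 1.9, 2.2, 1.7, 2.0]
culture_b = [1.1, 0.9, 1.4, 1.2, 1.0, 1.3, 0.8, 1.5]

diff, p_value = permutation_test(culture_a, culture_b)
print(f"mean difference: {diff:.2f} s, p ~ {p_value:.4f}")
```

A permutation test makes no distributional assumptions, which is convenient for small, exploratory behavioral samples like these.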
The form of an action conveys important information about the agent performing it. Humans may execute the same action in different ways, for example vigorously, gently, or rudely. This fundamental component of action has been termed vitality forms. To date, despite the crucial role of vitality forms in social communication, the kinematic features that characterize them have rarely been investigated. This project, led by Prof. Giuseppe Di Cesare (University of Parma) in collaboration with Alessandra Sciutti (IIT), aims to create new multimodal datasets of various vitality forms; investigate the spatiotemporal characteristics of actions performed with different vitality forms; and develop classification and synthesis models for artificial agents that enable them to recognize human attitudes as well as communicate their own attitudes toward humans.
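As a minimal sketch of how kinematic features might separate vitality forms, the example below classifies a movement as "gentle" or "rude" with a nearest-centroid rule over (peak velocity, duration) features; the labels, features, and numbers are all hypothetical choices for illustration, not the project's actual model.

```python
import math

# Hypothetical kinematic features per trial: (peak velocity m/s, duration s).
# Gentle executions are assumed to have lower peak velocity and longer duration;
# rude/vigorous executions the opposite.
TRAINING = {
    "gentle": [(0.4, 1.2), (0.5, 1.0), (0.3, 1.4), (0.45, 1.1)],
    "rude":   [(1.6, 0.5), (1.8, 0.4), (1.5, 0.6), (1.7, 0.45)],
}

def centroid(points):
    """Mean feature vector of a set of 2-D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def classify(features):
    """Assign the vitality-form label whose centroid is nearest in feature space."""
    return min(CENTROIDS, key=lambda lbl: math.dist(features, CENTROIDS[lbl]))

print(classify((0.5, 1.3)))   # close to the gentle centroid
print(classify((1.9, 0.5)))   # close to the rude centroid
```

In practice such models would be trained on full motion-capture trajectories rather than two hand-picked features, but the sketch shows the basic idea of mapping kinematics to attitude labels.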