Inclusion and Computer Science

The project aims to investigate the impact of computer science, and in particular Artificial Intelligence (AI) and Natural Language Processing (NLP), on diversity, equity, and inclusion. Inclusion and accessibility are now at the forefront of research in computer science and artificial intelligence, as shown by the thematic events organized within the main international conferences in the field [1].

On the one hand, the project aims to analyze the critical issues that computational tools and artificial intelligence techniques can introduce or even amplify in terms of inclusion, studying in particular the presence of social biases (such as those related to gender, race, religion, or disability) in computational models trained on large amounts of content [2,3].
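
As an illustration of the kind of bias analysis involved, the sketch below computes a WEAT-style association score in the spirit of Caliskan et al. [2]: it measures whether two sets of target words associate differently with two sets of attribute words in an embedding space. The tiny word vectors are invented placeholders; an actual study would use embeddings extracted from a trained model.

```python
# Minimal WEAT-style sketch (in the spirit of Caliskan et al. [2]).
# The 3-dimensional vectors below are invented placeholders for illustration;
# a real analysis would load embeddings from a trained model.
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # s(w, A, B): mean similarity to attribute set A minus mean similarity to B
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # Effect size: difference of mean associations of the two target sets,
    # normalized by the standard deviation over all targets
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y)

# Placeholder embeddings (purely illustrative)
X = [np.array([0.9, 0.1, 0.0]), np.array([0.8, 0.2, 0.1])]  # target set X
Y = [np.array([0.1, 0.9, 0.0]), np.array([0.2, 0.8, 0.1])]  # target set Y
A = [np.array([1.0, 0.0, 0.0])]                              # attribute set A
B = [np.array([0.0, 1.0, 0.0])]                              # attribute set B

print(f"WEAT effect size: {weat_effect_size(X, Y, A, B):.3f}")
```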

On the other hand, the project aims to develop and implement concrete tools, methods, technologies, and computational models for automatic tasks that promote inclusion and improve the accessibility of content. Examples of such activities include automatic text simplification, information extraction from textual and/or multimodal content, and the presentation of content in more accessible formats.
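
As a purely illustrative example of one such activity, the sketch below applies a pretrained sequence-to-sequence model to automatic text simplification via the Hugging Face transformers library. The model identifier is a placeholder assumption, not a tool produced by the project.

```python
# Illustrative sketch of automatic text simplification with a pretrained
# sequence-to-sequence model (Hugging Face transformers). The model name below
# is a placeholder assumption, not a tool developed by the project.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "your-org/text-simplification-model"  # placeholder identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def simplify(text: str, max_new_tokens: int = 128) -> str:
    """Return a simplified rewrite of `text` produced by the model."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(simplify("The committee postponed the deliberation owing to insufficient quorum."))
```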

Furthermore, the project develops methodologies for applying digital technologies appropriately to enhance the accessibility of content, especially of educational materials in teaching contexts [4].

Group leader: Marco Rospocher

Internal members: 

  • Marco Bombieri (research fellow) 
  • Manuel Boschiero 
  • Simone Rebora 
  • Massimo Salgaro

External members: 

  • Olga Forlani (Direzione ITC @ UniVR) 
  • Davide Quaglia (Dipartimento Informatica @ UniVR) 
  • Silvano Pasquali (Direzione ITC @ UniVR)

Actions: WP1.13 

References: 

[1] Diversity and Inclusion Forum, IJCAI 2021, https://aim.um6p.ma/en/ijcai-2021-diversity-and-inclusion-forum/

[2] Caliskan, A., Bryson, J. J. & Narayanan, A. Semantics derived automatically from language corpora contain human-like biases. Science 356, 183–186 (2017), https://www.science.org/doi/10.1126/science.aal4230

[3] Venkit, P., Srinath, M. & Wilson, S. A Study of Implicit Language Model Bias Against People With Disabilities. COLING 2022, https://aclanthology.org/2022.coling-1.113.pdf

[4] Boschiero, M., Rospocher, M., Forlani, O. L. & Pasquali, S. Migliorare l’accessibilità dei materiali didattici digitali nel contesto universitario: un caso di studio [Improving the accessibility of digital teaching materials in the university context: a case study]. In Atti di “MoodleMoot Italia 2023”, Firenze, Italia, 14–16 dicembre 2023 (2024), pp. 61–70, https://eventi.aium.it/event/1/contributions/44/attachments/12/17/Pagine%20da%20MoodleMoot%20Italia%202023%20-%20Atti%20del%20Convegno-9.pdf