Recent years have seen growing interest in modeling visitor engagement in museums with multimodal learning analytics. In parallel, there has been growing concern about issues of fairness and encoded bias in machine learning models. In this paper, we investigate bias detection and mitigation techniques to address issues of algorithmic fairness in multimodal models of museum visitor visual attention. We employ slicing analysis using the Absolute Between-ROC Area (ABROCA) statistic to detect encoded bias in multimodal models of visitor visual attention trained with facial expression and posture data from visitor interactions with a game-based museum exhibit about environmental sustainability. We investigate instances of gender bias that arise across different combinations of modalities and several machine learning techniques. We also measure the effectiveness of two debiasing strategies, learned fair representations and reweighing, when applied to the trained multimodal visitor attention models. Results indicate that patterns of bias can arise across different modality combinations and visitor visual attention models, and that there is often an inherent tradeoff between predictive accuracy and ABROCA. Analyses suggest that debiasing strategies tend to be more effective on multimodal models of visitor visual attention than on their unimodal counterparts.
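To make the abstract's two core quantities concrete, below is a minimal sketch (not the paper's implementation) of how the ABROCA statistic and Kamiran-Calders reweighing are commonly computed for a binary protected attribute such as gender. The function names `abroca` and `reweighing_weights`, and the use of scikit-learn and NumPy, are illustrative assumptions; learned fair representations, the paper's other debiasing strategy, involves a learned encoder and is omitted here.

```python
import numpy as np
from sklearn.metrics import roc_curve


def abroca(y_true, y_score, group):
    """Absolute Between-ROC Area between two subgroups.

    Integrates |ROC_a(t) - ROC_b(t)| over the false-positive-rate axis
    t in [0, 1], after interpolating each group's ROC curve onto a
    shared FPR grid. Inputs are NumPy arrays of equal length.
    """
    groups = np.unique(group)
    assert len(groups) == 2, "ABROCA is defined for exactly two groups"
    grid = np.linspace(0.0, 1.0, 10_000)  # common FPR grid
    tprs = []
    for g in groups:
        mask = group == g
        fpr, tpr, _ = roc_curve(y_true[mask], y_score[mask])
        tprs.append(np.interp(grid, fpr, tpr))
    return np.trapz(np.abs(tprs[0] - tprs[1]), grid)


def reweighing_weights(y, group):
    """Kamiran & Calders reweighing: w(g, y) = P(g) * P(y) / P(g, y).

    Upweights (group, label) combinations that are rarer than they
    would be if group and label were independent. The returned array
    can be passed as sample_weight when training a classifier.
    """
    weights = np.empty(len(y), dtype=float)
    for g in np.unique(group):
        for label in np.unique(y):
            cell = (group == g) & (y == label)
            joint = np.mean(cell)
            expected = np.mean(group == g) * np.mean(y == label)
            weights[cell] = expected / max(joint, 1e-12)
    return weights
```

Under this sketch, a slicing analysis amounts to training a model, scoring a held-out set, and reporting `abroca(y_true, y_score, gender)` alongside overall accuracy; the reweighing variant retrains the same model with `sample_weight=reweighing_weights(y_train, gender_train)` and compares the two.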