AI Fairness for People with Disabilities: Investigating Facial Recognition System Performance

Abstract:

Disabled individuals, who comprise approximately 16% of the global population, may be underrepresented in the data used to develop AI systems such as facial recognition technologies. Researchers suggest that such systems may fail to perform effectively for people with distinct facial features, such as those with Down Syndrome or Achondroplasia, and could discriminate against them if they are not included during model training and evaluation. In this study, we explore the performance of state-of-the-art facial recognition models when applied to individuals with disabilities. Our analysis involves three datasets: the first includes 100 subjects with disabilities, each with 7 to 10 facial images sourced from Creative Commons-licensed YouTube videos and websites, while the other two are LFW and CelebA. To ensure a fair comparison, we assessed the quality of each dataset using two distinct algorithms. We then evaluated the performance of three CNN architectures: VggFace, FaceNet, and ArcFace. Our findings reveal that VggFace and FaceNet perform similarly for individuals with disabilities as they do for subjects in the LFW and CelebA datasets, indicating that these models generalize well across diverse populations and suggesting no evidence of bias or discrimination against disabled individuals.
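
For readers who wish to run a comparable verification experiment, the sketch below shows one possible evaluation loop; it is not the authors' pipeline. It assumes the open-source deepface library and hypothetical image paths, and it compares image pairs under each of the three architectures named above.

```python
# Minimal sketch of a face-verification evaluation loop (not the authors' code).
# Assumes the open-source `deepface` library; image paths are hypothetical placeholders.
from deepface import DeepFace

MODELS = ["VGG-Face", "Facenet", "ArcFace"]  # architectures named in the abstract

# Hypothetical genuine/impostor pairs drawn from a dataset of facial images.
pairs = [
    ("subject_001/img_1.jpg", "subject_001/img_2.jpg", True),   # same person
    ("subject_001/img_1.jpg", "subject_042/img_3.jpg", False),  # different people
]

for model_name in MODELS:
    correct = 0
    for img1, img2, same_person in pairs:
        result = DeepFace.verify(
            img1_path=img1,
            img2_path=img2,
            model_name=model_name,
            distance_metric="cosine",  # cosine distance between face embeddings
        )
        if result["verified"] == same_person:
            correct += 1
    accuracy = correct / len(pairs)
    print(f"{model_name}: verification accuracy = {accuracy:.2%}")
```

In practice, verification accuracy would be computed per dataset (the disability dataset, LFW, and CelebA) and compared across them to probe for performance gaps.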