Ethical assessments and mitigation strategies for biases in AI-systems used during the COVID-19 pandemic

Alicia de Manuel*, Janet Delgado, Iris Parra Jounou, Txetxu Ausín, David Casacuberta, Maite Cruz, Ariel Guersenzvaig, Cristian Moyano, David Rodríguez-Arias, Jon Rueda, Angel Puyol

*Corresponding author for this work

Research output: Contribution to journal › Article › Research › peer-review

6 Citations (Scopus)

Abstract

The main aim of this article is to reflect on the impact of biases related to artificial intelligence (AI) systems developed to tackle issues arising from the COVID-19 pandemic, with special focus on those developed for triage and risk prediction. A secondary aim is to review assessment tools that have been developed to prevent biases in AI systems. In addition, we provide a conceptual clarification of some terms related to biases in this particular context. We focus mainly on non-racial biases, which have received less attention in the existing literature on biases in AI systems. We found that bias in AI systems used for COVID-19 can result in algorithmic injustice, and that the legal frameworks and strategies developed to prevent the emergence of bias have failed to adequately consider social determinants of health. Finally, we make recommendations on how to include more diverse professional profiles in the development of AI systems, in order to increase the epistemic diversity needed to tackle AI biases during the COVID-19 pandemic and beyond.

Original language: English
Pages (from-to): 1-11
Number of pages: 11
Journal: Big Data and Society
Volume: 10
Issue number: 1
DOIs
Publication status: Published - 12 Jun 2023

Keywords

  • AI systems
  • bias
  • COVID-19
  • social determinants of health
  • triage and risk prediction

