Care work in long-term care (LTC) is considered a genuinely human-centred activity, requiring empathy, emotional investment, physical encounters, and intimate, trust-based relationships between care givers and care recipients. Artificial Intelligence (AI) technologies are being introduced into this professional field to assist care workers in their daily activities and to provide an additional measure of care for clients. This has changed the provision of care, affecting care givers and recipients alike. So far, little research has examined the biases that emerge from AI in this field or the risks that algorithmic governance of care poses to the profession. When decisions are based on data generated by AI technologies, unfair outcomes can go unnoticed in the process of linking different big data sets, leading to ethical and social problems in LTC. ALGOCARE’s goal is to understand the functionality and bias of algorithmic systems for governing care and their effects on care givers and recipients. Insights from qualitative case study research in LTC will provide an understanding of the impact of AI systems and of care needs in relation to them. The project explores the use-value of explainable AI (xAI) methods (trustworthiness, fairness, explainable procedures) and different levels of transparency, whether provided by the model itself or by methods applied before or after model development. Based on these insights, tools and metrics are developed to evaluate the explainability of AI for care.
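To illustrate the post-hoc end of this transparency spectrum, the following minimal sketch (not project code; the model, features, and data are hypothetical placeholders) shows how a trained care-allocation classifier could be probed with permutation importance, a model-agnostic method applied after model development:

```python
# Minimal sketch of a post-hoc explainability check (hypothetical data and
# model, not ALGOCARE project code). Permutation importance measures how much
# predictive performance drops when each input feature is shuffled, exposing
# which attributes the model actually relies on.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical client attributes: age, mobility score, and a sensitive
# attribute that should NOT drive care decisions.
X = np.column_stack([
    rng.integers(65, 95, n),      # age
    rng.uniform(0, 10, n),        # mobility score
    rng.integers(0, 2, n),        # sensitive attribute
])
y = (X[:, 1] < 4).astype(int)     # synthetic "needs intensive care" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Post-hoc, model-agnostic explanation: a large importance score on the
# sensitive attribute would flag a potential source of unfair decisions.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in zip(["age", "mobility", "sensitive_attr"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

In this hedged example, an auditor would expect the mobility score to dominate; any non-negligible importance on the sensitive attribute is exactly the kind of hidden bias the project aims to surface and measure.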
Principal Investigator:
Institution:
Project title:
Co-Principal Investigator(s):
Status: Ongoing (01.12.2021 – 30.06.2025)
GrantID: 10.47379/ICT20055
Funding volume: € 429,940