Algorithmic governance of care
Artificial intelligence (AI) is becoming increasingly common in health and care settings, promising greater efficiency, improved safety, and better support for care workers. But how do these technologies actually work in practice, and what effects do they have on care relationships, work routines, and the lives of older adults? The ALGOCARE project set out to answer these questions by studying AI systems already in use in Austrian and international long-term care (LTC) contexts.
Rather than starting from technical models or laboratory settings, ALGOCARE took an innovative approach: we went directly into care homes and observed AI “in action”. Our team conducted 40 interviews with care workers, managers, residents, developers, and other stakeholders, and spent over 80 hours in participant observation – accompanying care staff during day and night shifts, taking field notes, and documenting everyday interactions with AI tools.
We examined three types of AI technologies:
- Fall-detection sensors – systems using 3D sensors and behavioural analysis to detect irregularities, such as falls (a minimal sketch follows this list).
- Social robots – humanoid and animal-like robots designed to interact with residents using AI-based speech and facial recognition.
- AI-based pain recognition – developed to detect pain in residents who may not be able to communicate it verbally.
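To make the first category more concrete, here is a minimal, hypothetical sketch of the kind of rule a fall-detection system might apply to 3D sensor data: a rapid drop in a tracked person's height, followed by a sustained low posture, is flagged as a possible fall. The thresholds, data structure, and function names are illustrative assumptions, not a description of the actual products studied in ALGOCARE.

```python
# Hypothetical sketch of rule-based fall detection on 3D sensor data.
# Thresholds and structure are illustrative assumptions, not any
# specific product studied in the ALGOCARE project.

from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float    # seconds
    head_height: float  # estimated head height of the tracked person, metres

def detect_fall(frames: list[Frame],
                drop_threshold: float = 0.8,   # metres of sudden height loss
                drop_window: float = 1.0,      # seconds in which the drop occurs
                low_posture: float = 0.5,      # "on the floor" height, metres
                min_low_duration: float = 5.0  # seconds spent low to confirm
                ) -> bool:
    """Flag a fall: a rapid height drop followed by sustained low posture."""
    for i, start in enumerate(frames):
        for later in frames[i + 1:]:
            if later.timestamp - start.timestamp > drop_window:
                break
            if start.head_height - later.head_height >= drop_threshold:
                # Rapid drop found; check the available frames show the
                # person staying near the floor afterwards.
                low = [f for f in frames
                       if later.timestamp <= f.timestamp
                       <= later.timestamp + min_low_duration]
                if low and all(f.head_height <= low_posture for f in low):
                    return True
    return False

# Example: a person standing (1.7 m) drops to floor level and stays there.
frames = [Frame(t * 0.5, 1.7) for t in range(4)]          # standing
frames += [Frame(2.0 + t * 0.5, 0.3) for t in range(14)]  # on the floor
print(detect_fall(frames))  # True
```

Even this toy version shows why such systems produce false alarms in practice: sitting down quickly, or a sensor briefly losing track of a person, can satisfy the same rule.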
Our research revealed that these technologies are never “just technical tools.” They are shaped by – and shape – social, economic, and institutional contexts. For example, the pain recognition system, while marketed as advanced AI, functioned more like a digital questionnaire. It standardised daily pain assessments, replacing more flexible checks, and made daily care routines more efficient. However, it also reinforced the idea of residents as passive recipients of care, raised ethical questions around consent, and relied on technology’s perceived neutrality to legitimise decisions.
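To illustrate what “functioned more like a digital questionnaire” means, here is a hypothetical sketch of a standardised daily pain assessment, loosely modelled on observational scales such as PAINAD. The item names, score ranges, and escalation threshold are illustrative assumptions and do not reproduce the actual system studied.

```python
# Hypothetical sketch of a questionnaire-style pain assessment,
# loosely modelled on observational scales such as PAINAD.
# Items, score ranges, and the threshold are illustrative assumptions,
# not the actual system studied in ALGOCARE.

# Each observed item is rated 0 (absent) to 2 (pronounced) by the carer.
ITEMS = ["breathing", "vocalisation", "facial_expression",
         "body_language", "consolability"]

def assess_pain(ratings: dict[str, int]) -> dict:
    """Sum the per-item ratings into a single standardised score."""
    for item in ITEMS:
        if ratings.get(item) not in (0, 1, 2):
            raise ValueError(f"{item}: rating must be 0, 1, or 2")
    total = sum(ratings[item] for item in ITEMS)
    return {
        "total": total,      # 0–10
        "flag": total >= 4,  # illustrative escalation threshold
    }

# Example: one standardised daily check, the same items every time.
print(assess_pain({"breathing": 0, "vocalisation": 1,
                   "facial_expression": 2, "body_language": 1,
                   "consolability": 0}))
# {'total': 4, 'flag': True}
```

The point of the sketch is that such a tool fixes in advance what counts as pain: the same items, the same scale, every day, regardless of the resident’s individual situation.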
One key finding is that explainability – often discussed in AI research as a technical goal – had little relevance for people working in or receiving care unless approached from a broader socio-technical perspective. In LTC, trustworthiness and participation mattered more to staff and residents than technical explanations of how a system arrives at its outputs.

ALGOCARE also broadened the understanding of “bias” in AI. Bias does not only stem from data or algorithms but is embedded in the entire socio-technical “assemblage” of care: the relationships between people, institutions, technologies, and even the physical spaces in which care happens. Recognising this complexity is essential to designing AI that truly supports dignity, autonomy, and meaningful human interaction in care settings.
Through interdisciplinary collaboration among social scientists, computer scientists, and healthcare partners, ALGOCARE developed new ways of studying AI in real-world contexts. The project’s insights contribute to international debates on how to create human-centred, ethically grounded AI – and how to ensure that new technologies in ageing and care are developed and implemented with the participation and needs of older adults at their core.