By Scott DeLahunta, Alison Halford, Teoma Naccarato, Kathryn Stamp, Petra A. Wark
Six copyright-free images, shared with focus group participants in our AI and healthcare sector study, are placed throughout this post. Participants were invited to select images that represented the current state of AI in healthcare, the future of AI in healthcare, and the clinician-patient relationship and their perception of that experience. The images included here are the ones most frequently selected; they triggered a range of conversations, including about participants' own experiences of healthcare, the lack of knowledge of clinicians, where things are going, and the need to always have some human interaction.
The rapid pace of technological development has become a given, and in no area is this more evident than Artificial Intelligence (AI), which has triggered significant societal concerns about where our future relationship with technology is headed. What is unusual about AI is that, alongside the technological progress, a range of rapidly evolving initiatives related to its ethical development has emerged, aiming to ensure that AI is good for society. These initiatives come from various sectors, including UK Research and Innovation calling for fundamental research joining the social sciences and humanities with technical development “so that we better understand how AI is being deployed and its impact on society and the economy” (UKRI 2021 report, p.6).
GAP-E[thics] has been running as a cross-centre transdisciplinary collaboration since 2021, bringing Coventry University researchers from the Centre for Dance Research (C-DaRE) and the Centre for Computational Science and Mathematical Modelling (CSMM) together to explore emerging questions related to the ethical development of AI. Our first study together explored the concept of “human centred-ness” as it appears in the research literature, with a focus on human-computer interaction design, human-in-the-loop and human-centred computing. The study probed this literature for how the human is imagined, conceptualised and modelled. A key finding was general confusion about the role, position and definition of the human, as well as tendencies to privilege certain narratives and marginalise others.[1] This confusion appeared to undermine the transformative potential of being “human centred” as a way to innovate design and thinking. The study also allowed us to explore differences in our disciplinary perspectives.[2]
While we were busy researching “human centred-ness”, concerns about AI were increasing, with government, industry and research organisations all discussing its potential dangers and benefits. By that time several seminal studies (e.g. Jobin et al 2019) had shown there was general agreement about WHAT the ethical issues with AI were, but much less was known about HOW to make change happen in the AI community (e.g. Morley et al 2020). Taking a cue from the confusion about “human centred-ness” reflected in the literature in our first study, we hypothesised that gaps in understanding about AI ethics were contributing to this problem. We decided first to pursue the short-term aim of understanding and highlighting these gaps in selected industry sectors developing and deploying AI. The long-term ambition is to develop transdisciplinary approaches that might contribute to resolving them.
CSMM has strong connections to the energy sector, making it the logical choice for our next study, commissioned by the EnergyREV consortium, a network of universities and industrial collaborators working on smart local energy systems. The aim of the study was to conceptualise ethical frameworks that can aid fair and just transitions in the digitalisation of the UK energy landscape, and the research team used qualitative methods such as phenomenological framing, which focusses on the perceptions and feelings held by people. The team identified a range of issues, including confusion around ethics and AI and an interest in regulation and legislation as monitoring practices. These findings are written up, alongside recommendations such as using training to develop an ethics-conscious culture, in this July 2022 Report.
Healthcare was another sector we were interested in, since C-DaRE does research in the areas of dance and disability, health, well-being and embodiment. An understanding of the ethical issues related to AI is crucial in this sector, given the vulnerability of patients and the centrality of their safety and care. We had begun to explore a collaboration with the Centre for Intelligent Healthcare (CIH) through exchanges about their NIHR-funded project using AI to help predict preterm birth. The opportunity to undertake a healthcare sector-focussed study came when GAP-E was awarded Research Excellence Development Funding for this purpose in late 2022. The focus of the study was “the practices and perceptions of AI and Ethics in the Health Care Sector”, with an expanded team bringing GAP-E members together with the Centre for Healthcare and Communities (CHC).
Our proposed transdisciplinary study into AI and the healthcare sector was timely, reflecting current debates that have led to investment in major research initiatives such as Enabling a Responsible AI Ecosystem, which was set up to establish collaborations between the arts, humanities and the STEM disciplines seen as “crucial to responsible AI” (Luger & Vallor 2023). We planned to replicate key components of the energy study, including a rapid literature review and data collection with focus groups. Using the literature review, we narrowed the scope of our focus within the sector to the use of natural language processing (NLP), and then narrowed it further to the use of NLP in connection with Electronic Health Records (EHR). The use of AI in the form of NLP to process these records, in particular the unstructured texts entered by clinicians, featured in the literature as a key Smart Healthcare research area.[3]
What drew us to EHR was the possibility of gaining a perspective on AI from the context of in-person embodied encounters between the clinician and the patient. Focussing on the clinician (doctor, nurse or another healthcare practitioner) documenting these encounters, whether on paper or electronically, offered us a way of thinking about the ethics of AI in healthcare at the level of phenomenological experience and experiential knowledge. Clinician training understandably stresses the need for standard terminology and systematic approaches to documentation. However, we thought there could be a space for asking what happens to the experiential and embodied knowledge of the clinician who can “read the body language” of the patient (for example, in a preoperative dialogue). Such insights would be difficult to record using standardised coding, but might end up as unstructured notes. Some of the NLP-related literature on record keeping seemed to affirm this, e.g. The Elephant in the Record by Cabitza et al. 2019.
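To make concrete what processing unstructured notes with NLP can involve, here is a minimal, purely illustrative sketch (our own, not drawn from the study or from any real clinical system): it pulls candidate symptom mentions out of free text and applies a crude negation check, loosely in the spirit of rule-based approaches such as NegEx. The symptom list, negation cues and example note are all hypothetical.

```python
import re

# Illustrative, hypothetical vocabularies -- a real clinical NLP pipeline
# would use curated terminologies (e.g. SNOMED CT) and trained models.
SYMPTOMS = ["anxious", "chest pain", "nausea", "shortness of breath"]
NEGATION_CUES = ["no ", "denies ", "without "]

def extract_mentions(note: str) -> list:
    """Return symptom mentions found in a free-text note, flagging whether
    a simple negation cue appears in the 20 characters before the mention."""
    mentions = []
    lowered = note.lower()
    for term in SYMPTOMS:
        for match in re.finditer(re.escape(term), lowered):
            window = lowered[max(0, match.start() - 20):match.start()]
            negated = any(cue in window for cue in NEGATION_CUES)
            mentions.append({"term": term, "negated": negated})
    return mentions

note = ("Patient appears anxious before surgery; denies chest pain. "
        "Reports shortness of breath when climbing stairs.")
print(extract_mentions(note))
# [{'term': 'anxious', 'negated': False},
#  {'term': 'chest pain', 'negated': True},
#  {'term': 'shortness of breath', 'negated': False}]
```

Even this toy example illustrates the tension discussed above: the observation that the patient “appears anxious before surgery” is exactly the kind of embodied reading that resists standardised coding, and recovering it from free text is both possible and lossy.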
It was not our intention to work with patients directly, in part due to the complex ethical issues involved. Instead, we began to think about focus groups composed of individuals working in the field: for example, a start-up developing an NLP chatbot for the health market, a service organisation providing safe digital health applications, and perhaps a research group using NLP to analyse Electronic Health Records. We expanded on the qualitative method of phenomenological framing used for the EnergyREV study to include more arts-based methods, inspired by a dive into the literature on graphic drawing, photo elicitation and narrative research with a focus on embodiment. We developed an interview guide with questions such as “if AI and healthcare were an object, what would it be?”, which invited the participants to draw the object. They were then shown a selection of photos and asked “which image best represents the effects of AI on clinician-patient relationships?”
A focus group with an emphasis on NLP in healthcare was held in May, and another with a broader scope was held in June.[4] Our initial analysis of the transcripts suggests the methods were effective in drawing out the perceptions and feelings of participants in ways that moved beyond the common tendency simply to express confusion about AI. Working with images seemed to offer participants a way to articulate difficult-to-elucidate concepts, and letting the photographs speak appeared to give space for participants to identify shared and divergent values. Given our interest in evoking the embodied know-how of the clinician in relation to the patient, it was interesting to hear participants discuss the clinician's lack of knowledge as compared to AI. Overall, this approach looks like a productive way forward for further research in this sector and one we would recommend.
Upon reflection, ours was a slow-moving study. As AI debates and initiatives swarmed around us, we were relatively unhurried in our processes. Alongside testing methods for getting closer to our focus group participants, we took time for activities such as engaging other AI ethics researchers in conversation about how embodied and practice research approaches might overlap with feminist and queer perspectives on technology. In the early 2000s, Lars Hallnäs & Johan Redström published a seminal paper, Slow Technology – Designing for Reflection, which drew attention to the need to make more room for “reflection and moments of mental rest” in the human-computer interface design process. Slowness might seem completely antithetical to research on AI, yet responding to the call for truly unbiased human-centred research in the context of ethical AI development requires special resources. Should these resources just be more funding? If we wish to integrate the arts and humanities with engineering and computing science in a transdisciplinary way, we need the conditions to engage in reflective, mindful deliberation together. Perhaps slow research has a role to play here.
[1] Such tendencies were also discussed in the C-DaRE Invites roundtable Bodies, AI, Ethics and Diversity.
[2] See: Transdisciplinary Conversations on AI & Ethics.
[3] Link to relevant NLP literature (scroll down the Airtable). See also Revival of the Notes Field (Assale et al. 2019).
[4] Coventry University Ethics Approval. EnergyREV P130364. Healthcare P150619.