Transdisciplinary Conversations on AI and Ethics

A street scene with people's faces circled by facial recognition software

James Brusey & Scott deLahunta

We are part of a transdisciplinary research project looking at the gaps that exist in implementing ethical approaches to developing and deploying AI. We are looking at these gaps from different perspectives. James is a computer scientist specialising in reinforcement learning and wireless sensing. Scott is an arts and humanities researcher specialising in the study of dance creation and its digital documentation. In May 2022, we took part in a public panel on the theme of transdisciplinary approaches to AI and ethics, in which we were invited to give a short overview of how we understand the ethics and AI debates.

James presented the following two ideas on the panel:

  1. AI laws focus on obvious problems, but ignore less obvious ones
  2. Ethical review doesn’t always occur, even when it “should”


Mastercard recently announced that it is exploring facial recognition at the checkout. Rather than show your credit card or scan your smartphone, this system would require you simply to look at a camera; the system would then identify you and charge the purchases to your account. There is some debate about whether such a system poses an ethical risk from the point of view of its accuracy or bias.

To explain this further, facial recognition accuracy refers to the combined problems of failing to identify someone or misidentifying one person as someone else. Bias refers to the problem that accuracy varies depending on such things as gender or ethnicity. Since bias causes systems to behave differently to different groups of users, it raises legitimate ethical concerns about the long-term effect on those groups.
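The distinction above can be made concrete with a small sketch. The code below is purely illustrative (the groups, numbers, and function are all hypothetical, not drawn from any real system): it computes recognition accuracy separately for two demographic groups and reports the gap between them, which is one simple way bias of this kind is quantified.

```python
# Illustrative sketch: quantifying bias as a per-group accuracy gap.
# All groups and figures below are made up for illustration only.

def per_group_accuracy(results):
    """results: list of (group, correct) pairs, where `correct` is True
    when the system identified the person correctly (i.e. neither a
    failure to identify nor a misidentification)."""
    totals, hits = {}, {}
    for group, correct in results:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + (1 if correct else 0)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical evaluation data: the system performs worse for group B.
data = ([("A", True)] * 95 + [("A", False)] * 5
        + [("B", True)] * 80 + [("B", False)] * 20)

acc = per_group_accuracy(data)
gap = max(acc.values()) - min(acc.values())
print(acc)                 # accuracy per group
print(round(gap, 2))       # accuracy gap between best and worst group
```

A gap like this is what makes bias an ethical rather than merely a technical issue: the same deployed system delivers systematically different error rates to different groups of users.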

From the point of view of the recent EU AI Act, which aims to be the first comprehensive regulatory framework for AI worldwide, this probably falls into the category of high-risk AI. Falling into such a category doesn't cause a system to be banned, but does expose it to more scrutiny. Risk arises because past AI systems, such as those used to screen CVs, have been found to replicate bias that was in the original data; the basis of screening decisions wasn't transparent and favoured white males over other categories. Similarly, facial recognition has been shown to perform more accurately for white faces than for those of other ethnicities. The negative consequences of bias have been well documented.

While the EU AI Act is clear about the risk associated with known problems (such as facial recognition), it says little about issues yet to be uncovered (e.g., when the first AI-based recruitment decision systems were developed, it wasn't obvious that they would simply replicate or amplify the prejudices in the training data). Risk assessment is subtle and should not be left to the imagination of regulators. So what other options are there apart from regulation?


To an academic, ethical review appears to be a given. We are taught, and we teach each other, that ethical review is a pillar of good science. It should never be left up to individual scientists to decide whether their research crosses an ethical line.

On a recent trip to a developing nation to teach a course to research managers, a striking fact emerged: ethical review processes were only observed rigorously for medical sciences. Indeed, the research managers seemed to accept this as a norm simply because the existing review processes were so bureaucratic and time-consuming that applying them to all research projects would produce a logjam, and ordinary scientific endeavour would come to a halt. However, this left a situation where research projects had no form of ethical supervision merely because they were to do with engineering or computer science. Yet we know today that computer science, and particularly artificial intelligence, carries plenty of risks: to privacy, from biased decision-making systems, and to the well-being of users.

Perhaps we are fortunate that much research is conducted at universities where ethical review for engineering and computer science is considered necessary. However, increasingly important AI research is coming from private companies. Some of those companies are quite large and some try to include an ethical review process. However, the dominant paradigm for the tech industry is that if it is legal to do a thing, then it is allowed. This shifts the ethical concerns to the consequences of doing something that is illegal. But is a legal framework the best way to set ethical boundaries?

Scott presented the following ideas on the panel:

  1. There are calls for more involvement of the arts and humanities in AI research
  2. Methods from the arts and humanities might offer ideas for an ‘ethics-in-practice’


How much to regulate? How to set ethical boundaries? These two questions point toward issues of major concern for non-profit organisations, research institutions and policy makers trying to ensure AI works for the betterment of society. James has outlined the risks above, and anyone listening to Stuart Russell's Reith Lectures, Living with Artificial Intelligence, at the end of last year would be aware of the seriousness of these and other issues. They would have also heard Russell say at the end of his final lecture that living with AI will "need all the writers and filmmakers and poets" to come up with new metaphors to guide us in this process. Metaphors shape and enrich our perceptions of things. Russell's call is an acknowledgement that perceptions of AI matter enough to engage the arts and humanities[1] as much as science and technology in its development.

This idea that the arts and the humanities should be collaborating on AI research is becoming more common.[2] Horizon Europe funding schemes such as AI for Human Empowerment explicitly reference the essential contribution the humanities should be making to this research. The AHRC has a webpage dedicated to the AI research projects they are currently supporting, including a three-year programme titled Enabling a responsible AI System. However, it is difficult to find concrete evidence of the impact the arts and humanities are making on AI research so far.


The arts and humanities can do more than come up with new metaphors to help change perceptions. They can directly engage various groups using participatory and creative research methods that focus as much on processes and practices as on their outcomes.[3] Knowing how is as important as knowing what. In working to create or build anything together, we cope with all kinds of situations by making decisions that are as much embodied, intuitive, and emotional as they are deliberate and cognitive.

This well-known fact is often missing from our representations of scientific and engineering practices. One area of research the arts and humanities might contribute to is practice. Practice research is not solely the domain of the arts and humanities; it also encompasses other fields such as medicine and engineering. However, as written in the 2021 PRAG-UK Report on Practice Research, contemporary developments have been driven largely by the arts and humanities.

So, going back to the two questions posed by James: an alternative way to think about regulation and the setting of ethical boundaries might be to work from the bottom up. Rather than changing practices through compliance with external laws, we might change them by reflecting on how AI work happens in practice in specific contexts, drawing on participatory and creative research methods combined with rigorous evaluation of their impact. What might an effective ethics-in-practice informed by arts and humanities research approaches look like?


There are challenges to working across disciplines in this transdisciplinary way, as there are no standard approaches that we are aware of. Each project needs the time to develop a bespoke relationship based on mutual trust and respect. Another challenge is, of course, that the solutions will never be as simple as top-down versus bottom-up. There is a clear need for ethical principles to follow, and for boundaries and regulations to make sure they are followed. But if we rely only on regulation, then we miss the opportunity to engage in truly transformative change: to fill the gaps in our understanding of AI and ethics such that ethics becomes a way of doing things. It is important that computing science and the arts and humanities work together to fill this gap. Our GAP-E research project has made some first steps in this direction.

[1] At the end of his lecture, Russell says of the social sciences and humanities at Berkeley: "many of them want to leave their home department, move into the College of Engineering (…) and help us figure out how to navigate the next 30 years safely."

[2] The Ada Lovelace Institute published three blog posts on what the humanities can bring to ethics and AI, by Alison Powell, Shannon Vallor, and John Tasioulas.

[3] Examples of this kind of research involving the public include data walking and Performing AI.