Human+ Tech Talk Explores the Global Ethical AI Landscape

The third Human+ Tech Talk was led by programme Fellow Dr Nicola Palladino, who works on embedding human values and ethical standards in AI. He was joined by his academic supervisors Prof Blanaid Clarke, McCann FitzGerald Chair and Professor of Law at TCD, and Prof Dave Lewis, Associate Professor of Computer Science at TCD and Deputy Director of the SFI ADAPT Research Centre, as well as his enterprise mentor, Dr Kenneth McKenzie, Research Portfolio Lead at the Human Sciences Studio in Accenture. Also on the panel was Dr David Filip, Chair of NSAI TC 02/SC 18 AI and Convenor of ISO/IEC JTC 1/SC 42/WG 3 AI Trustworthiness, who has deep expertise in ICT standardisation.

Opening the session as moderator on Tuesday the 1st of November, Prof Clarke noted that “we are living in a time where there is a great crisis of trust in technology,” citing the chaotic takeover of Twitter as a prime example. This crisis of trust was evidently a guiding concern for all four panellists, each of whom explored the considerable challenges of building trust around AI in an effort to realise its full potential while minimising possible risks to people and communities.

Dr Palladino paved the way for the discussion by first giving an overview of the ethical AI landscape. He invited the audience to consider which aspects of an AI system our trust rests on. Trust, he argued, should not be blind faith: “[trustworthiness] should be based on the characteristics that we see in the agent, that give us some guarantee that in the end we can complete our task.” It is necessary to ascertain what level of performance we expect, what guarantees of safety there are, and under what circumstances, including breakdown, AI will continue to perform. He outlined a hybrid governance model of trustworthiness in AI, operating across the wider industry landscape, the organisations within it, and the specific teams leading individual projects. Instilling trust is a shared responsibility, Dr Palladino argued, with multiple entities and stakeholders playing a variety of roles.

Prof Dave Lewis carried the discussion forward from a regulatory standpoint. He illustrated a complex ethical AI landscape crowded not just with numerous pieces of legislation and frameworks, but also with the wide range of stakeholders involved in enhancing the trustworthiness of AI. Trustworthiness will depend on how effectively these stakeholders communicate with each other, he emphasised, which in turn helps us understand what the AI provider of the future might look like. Prof Lewis also called for further discussion to work out the missing elements of the AI Act (a common regulatory and legal framework proposed by the EU) and other standardisation policies, so that we can continually learn, analyse, and improve.

Dr Kenneth McKenzie then approached the topic from an enterprise angle. For citizens to trust AI, they first need to understand it. News articles sometimes describe things as AI when the technology involved is simply data analytics with some inferential statistics. He suggested that cognitive anchors, in the form of age-old systems, could help citizens understand AI better. Court proceedings do not necessarily help people understand tax laws, but they do help them understand when they might be breaking them: the same could be done for AI. “We will only make progress, when we make use of these cognitive anchors to help people understand AI better,” he emphasised. Dr McKenzie identified universities as a site for understanding AI, recalling when problem-based learning was introduced at TCD. We need to reconsider how we train computer scientists and sociologists: is the university timetable currently set up for this?

Dr David Filip followed with a talk from an ICT standardisation perspective. As a member and leader of several technical committees working on AI standardisation, Dr Filip shed light on the day-to-day experience of professionals in the area. The task is not as simple as bureaucrats telling a technical committee that AI must not violate human rights: human rights are a complex subject that requires expert discussion to navigate and comply with. Moreover, technical professionals cannot be expected to provide diplomatic and political solutions, which is why dialogue is needed. He went on to highlight the dangers of pushing regulation through too quickly without understanding its implications. “Such legislation will undermine trust instead of increasing it,” he said.

The panel then answered pressing questions from moderator Prof Clarke, further illustrating, on the one hand, the importance of public discourse in this domain and, on the other, the danger of ethics washing. Ultimately, the group agreed on the need for more interdisciplinary discussion in this area, to keep informing and transforming AI regulatory frameworks now and in the future.

The next and final Human+ Tech Talk will explore AI-enhanced personalisation in online tutoring systems, led by Human+ programme Fellow Dr Qian Xiao and an expert interdisciplinary panel.

Register Now

Human+ is a five-year international, inter- and transdisciplinary fellowship programme conducting ground-breaking research into human-centric approaches to technology development. Human+ is led by the Trinity Long Room Hub Arts and Humanities Research Institute and ADAPT, the Science Foundation Ireland Centre for Digital Content Innovation, at Trinity College Dublin. The HUMAN+ project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 945447. The programme is further supported by unique relationships with HUMAN+ Enterprise Partners.