Get on the AI & Big Data Ethics Bandwagon!

Bryn Williams-Jones calls on bioethicists to contribute to research and public discussions on the ethics of Artificial Intelligence and Big Data.

__________________________________________

Not a day goes by without a news story about the ethics of artificial intelligence (AI) and the use of Big Data. Concerns – voiced by industry and government alike – range from biases in the algorithms that may be guiding autonomous vehicles, to the risks that follow data breaches, to self-learning medical decision-aids that doctors can no longer understand, and of course, killer robots. Clearly not every problematic technology involves AI, but learning algorithms are increasingly embedded in new technological innovations such as smart watches, cellphones, and medical diagnostic aids. These innovations are grounded in ever-growing data sets that combine personal information to inform decision making, and they raise real concerns about governance.

There is much hope and hype about the potential social and economic benefits of AI and Big Data. Major technology companies – Google, Apple, Facebook, and Amazon, amongst others – are investing heavily in the sector. Municipalities are courting these companies (and smaller start-ups), and provincial and national governments are investing, all with a view to stimulating innovation and economic growth. “Move over Big Pharma – AI is here!”


Photo credit: Gerd Altmann. Image description: A face of a human-like robot with algorithms on the background.

Interestingly, this enthusiasm is also being accompanied by attention to social, legal, and ethical issues, and thus by research opportunities. One might easily disparage this as “ethics washing”, a critique that has also been directed at government (and industry) investments in the ethics of genomics, stem cells, and nanotech. On this negative view, bioethics funding is a band-aid applied to ensure public acceptability of technological innovations that will happen regardless of ethics and social science critiques.

As a bioethicist who “grew up” academically during the genetics wave of the 1990s and 2000s, and who benefited directly from the substantial attention to these areas, I am all too aware of the opportunities – but also the risks – of riding the next technological wave, of joining the bandwagon. While the money is flowing, there are resources and opportunities to do innovative bioethics research, to influence scientific and policy debates, and to help shape public opinion. But the risk is also very real of becoming the “token ethicist” on the big science project: the one who is brought in to help win the grant, to justify the innovation, and to build public acceptability (the moral imprimatur), but whose critiques fall on deaf ears. And when interest in the topic wanes, so does research funding.

What’s interesting this time around is that scientists, engineers, and companies at the leading edge of innovation – rather than government – are pushing the AI and Big Data ethics agenda. These players are seeing the ethical challenges in the lab and realising that they need help. If they get it wrong, they risk having their research falter and losing public trust (e.g., because of fears of bias, violations of privacy, and killer robots). As bioethicists, we can and should help. Yes, there’s money for us in AI and Big Data ethics, but there’s also an opportunity to do good research and scholarship, to get our students funded, and to help shape science and policy for the better. Great examples are the recent Montreal Declaration for Responsible AI and the new Observatory on the Societal Impacts of AI and Digital Technology, which have started galvanising the research and policy communities in Quebec, as well as across Canada and internationally.

But let’s also be explicit about the fact that even if the topic is new, the issues are not. Concerns about choice and consent, about implicit bias and the stigmatisation of minority groups, the problems of resource allocation, and the difficulty of building and maintaining trust are all major issues in AI and Big Data. But these issues are also present in human participant research and clinical ethics, in genetics and genomics, and in stem cell research and nanotechnology. Over the last forty years, bioethics has developed rich areas of specialisation – clinical ethics, research ethics, professional ethics, and public health ethics – which have given us robust and tested conceptual, analytic, and decision tools that must be brought to bear. Ethicists are used to working across disciplines and with a diversity of actors to analyse complex issues and to mediate diverging views.

So I call on my fellow bioethicists to join the AI and Big Data ethics bandwagon! Let’s show that we already have effective and practical “applied ethics” tools to address most of the issues at hand, and that where needed we can adapt these tools and develop new ones. Finally, let’s get out into the public space and engage with citizens. (I’m doing this with three Bioethics Cafes during 2019.) We can help empower the public to ask the right questions of companies and government, to demand changes where needed, and so ensure that innovations in AI and Big Data actually serve the public interest, not just private interests.

__________________________________________

Bryn Williams-Jones is Professor and Director of the Bioethics Program at the School of Public Health, Université de Montréal, Editor-in-Chief of the Canadian Journal of Bioethics/Revue canadienne de bioéthique, and Co-director of the Ethics Branch in the new Observatory on the Societal Impacts of AI and Digital Technology. @BrynWJones