Late last month, Pope Francis gave an address on the ethics of artificial intelligence (AI). Speaking to participants in the Minerva Dialogues, a gathering of theologians, ethicists, scientists, engineers, business leaders, and lawyers organized by the Vatican’s Dicastery for Culture and Education, he pointed to his social encyclicals Laudato Si’ and Fratelli Tutti as a reference for ethical reasoning about AI. He noted that although AI is a legitimate manifestation of human creativity and of the drive to better the world, it is also prone to ethical risks, most notably that of reinforcing the structures of inequality that undermine human dignity and social participation. Francis called for an inclusive dialogue to establish ethical guidelines for the development and use of AI.
It certainly seems that we have reached an inflection point in the Catholic Church’s engagement with AI, among both church leaders like Pope Francis and theologians. For example:
In February, 2020, the Pontifical Academy for Life organized a conference on ethics and artificial intelligence, held in the Vatican, that included not just ethicists, but key business leaders, politicians, and representatives of international organizations. This conference resulted in the Rome Call for AI Ethics, a statement of ethical guidelines signed by the Pontifical Academy for Life, Microsoft, IBM, the Food and Agriculture Organization of the United Nations, and the Italian Ministry of Innovation, representing the Italian government. The Rome Call focuses on six ethical principles: transparency, inclusion, responsibility, impartiality, reliability, and security and privacy.
Beginning in March, 2020, what was then called the Pontifical Council for Culture and the Markkula Center for Applied Ethics at Santa Clara University, located in the heart of Silicon Valley, began a series of online dialogues on AI.
Late in 2022, the Journal of Moral Theology published a special issue focused specifically on the ethical issues raised by AI, co-edited by Brian Green, who works at the Markkula Center, and Matthew Gaudet, who teaches ethics at Santa Clara’s School of Engineering. In addition to articles addressing various ethical and theological aspects of AI by a range of scholars, this issue also featured edited transcripts of some of the conversations of the Pontifical Council for Culture-Markkula Center group.
Earlier this month, the Catholic Theological Society of America hosted a virtual event for its members on “Theology and Teaching in Light of Chat GPT,” featuring a panel of scholars including Anne Carpenter, Stephen Okey, and David Turnbloom, and moderated by Mary Kate Holman (whom I interviewed on unrelated matters here). ChatGPT is a “chatbot” developed by the company OpenAI that can generate sophisticated responses to prompts based on patterns learned from the large quantities of human-generated text provided to it by its designers.
And this list is not exhaustive. It is fair to say that the rapid development of AI is a “sign of the times” in the sense intended by the Second Vatican Council: a development in human history that may be a source of peril but could also be a source of authentic human development, and an event that demands a response from Christians drawing on the spiritual resources of our faith tradition.
But what is artificial intelligence? A standard definition is that it is the demonstration of intelligence by machines, rather than by humans or other forms of animal life. But this raises the question of what counts as a demonstration of intelligence, a problem that has bedeviled artificial intelligence research since its beginnings in the 1950s. Today when we talk about artificial intelligence, we are usually talking about machine learning (ML), in which a machine learns to perform a task by being fed “training data” that it analyzes using complex algorithms. For example, a machine could learn to identify which patients are at risk for a certain health condition by being provided large quantities of data from patients previously diagnosed with that condition; the machine detects patterns in the data, too complex for the human brain to discern, that can then be used to flag future patients who are at risk of the condition. Machine learning can also involve providing the machine with feedback on which of its predictions proved correct, helping it refine its algorithms.
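To make this a little more concrete, here is a minimal sketch in Python (using the scikit-learn library) of the kind of learning just described. All of the patient data, variable names, and numbers are made up for illustration; this is not any real diagnostic system, just a toy showing how a model is trained on past cases, checked against known outcomes, and then used to flag new ones.

```python
# A minimal, hypothetical sketch of the machine learning described above.
# All "patient" data here is randomly generated for illustration only;
# a real diagnostic system would be trained on actual clinical records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# "Training data": each row is a past patient (age, blood pressure, BMI),
# and each label records whether that patient was later diagnosed.
features = rng.normal(loc=[55, 130, 27], scale=[12, 15, 4], size=(1000, 3))
risk = 0.04 * features[:, 0] + 0.02 * features[:, 1] + 0.1 * features[:, 2]
labels = (risk + rng.normal(scale=0.5, size=1000) > 7.5).astype(int)

# The machine "learns" patterns linking the features to the outcomes.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)
model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# Feedback: comparing predictions against known outcomes on held-out cases
# is how designers measure and refine the model.
print(f"Accuracy on held-out patients: {model.score(X_test, y_test):.2f}")

# A new (hypothetical) patient can then be flagged as at risk or not.
new_patient = np.array([[62.0, 145.0, 31.0]])
print("Predicted at-risk:", bool(model.predict(new_patient)[0]))
```

The point of the toy example is simply that the “intelligence” here is statistical pattern-finding over past cases, which is why the quality and provenance of the training data matter so much for the ethical questions discussed below.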
Today the focus of conversation is also usually on what is called narrow AI, as opposed to artificial general intelligence (AGI). AGI would mean that a machine matches or exceeds the multiple forms of intelligence exhibited by the human mind. This is what most people probably think of when they hear the phrase “artificial intelligence,” and it has often been portrayed in science fiction, for example HAL 9000 in 2001: A Space Odyssey and at least some of the droids in the Star Wars series. Although some researchers, including those at OpenAI, continue to strive toward some form of AGI, the increasing awareness of the complexity of human intelligence, as well as the acknowledgement that intelligence might take forms different from human intelligence, has led to a greater focus on “narrow AI,” or AI focused on specific tasks. ChatGPT and the medical diagnostic AI mentioned above are both examples of narrow AI. Other examples include the algorithms used by banks to extend loans, the algorithms used by insurance companies to assess risk, and the algorithms used in self-driving cars.
It should not be difficult to see why theologians should take great interest in the rapid development of AI that we are witnessing. I think it is helpful to group the questions raised by AI that are most interesting to theologians into two broad categories: ontological questions and ethical questions. By ontological questions I mean those related to what it means to be an intelligent being, including the previously noted question of how to define “intelligence” and the question of what it means to have a relationship with an intelligent machine. Ethical questions are those related to how the design, training, and outputs of learning machines impact human beings, society, and the natural environment. Of course, these two categories can overlap. For example, the ontological question of the status of a machine with AI, and especially AGI, raises the ethical question of whether a machine can have rights. And how we evaluate the use of AI and its impact on human beings will depend on how we understand the meaning of being human. One of the most profound aspects of the dialogues conducted by the Pontifical Council for Culture and the Markkula Center and published in the JMT is the participants’ reflections on how thinking about artificial intelligence can help us better understand what it means to be human, both in our likenesses to machines and in our differences from them.
The possibility of AGI, in particular, raises critical ontological questions. How do we know if a machine has achieved intelligence comparable to our own? How do we recognize consciousness? Perhaps no theologian has thought through these issues more deeply than Noreen Herzfeld, who published In Our Image: Artificial Intelligence and the Human Spirit back in 2002. Herzfeld skeptically challenges the dominant models of intelligence used in AI research for neglecting the role of both the physical body and relationships with others in the development of human intelligence. The Franciscan theologian Ilia Delio has taken a more positive approach to AGI, seeing its development as a stage in the spiritual evolution of the cosmos, for example in her book Re-Enchanting the Earth: Why AI Needs Religion.
If AGI is a more remote prospect than once thought, or even impossible, then some of the more profound ontological questions lose some of their salience, but others remain quite interesting. For example, the philosopher Luciano Floridi has argued that artificial entities such as programs, algorithms, or robots should be considered beings (independently of the question of whether they are “persons”) whose existence or destruction should be considered morally good or bad, respectively. (Floridi has also argued that “information” can be considered the ultimate nature of reality, a thesis that might resonate with those influenced by the more Platonic side of the Christian tradition, like Augustine or Bonaventure.) Floridi also considers at least some of these entities to be “agents,” an ontological claim that raises further ethical implications.
As I already noted, ontological questions about AI can easily turn into questions about ourselves. During the CTSA virtual event, Anne Carpenter said that she was less dismayed by the possibility that students might use ChatGPT to cheat on a paper than by the fact that students choose not to engage in the questioning, contemplating, and trial and error that good theology always requires. After all, these are characteristically human traits. And at least for now, they are uniquely human traits; ChatGPT and similar programs do not wonder about things, and in fact seem to struggle with questions of truth. Carpenter’s remarks are provocative because they take the occasion of the emergence of a seemingly human-like technology to ask why we are so often satisfied with less than fully human intelligence from ourselves.
The six principles elucidated in the Rome Call for AI Ethics provide a thorough overview of the ethical questions that have arisen with the development of AI, and in particular recent forms of narrow AI:
Transparency: As I noted earlier, machine learning programs are able to recognize subtle patterns in data. In the special issue of the JMT, Jordan Joseph Wales offers a wonderful reflection, from an Augustinian perspective, on what this tells us about reality. Even when a machine recognizes these patterns and develops predictive algorithms based on them, however, those algorithms remain hidden from human beings, a phenomenon sometimes referred to as the “black box” of machine learning. This raises the possibility that the algorithm may not be working as intended, or may be working in a biased way, without anyone being able to tell. It is therefore essential for engineers to develop ways to make these algorithms transparent.
Inclusion: The benefits of AI must be put to use for the good of all, and not just a few. All people, regardless of wealth and status, should be able to access AI technologies and their benefits. Otherwise, AI will simply reinforce and widen existing inequalities.
Responsibility: There must be humans who take responsibility for what a machine does, potentially including the engineers and designers, the company that markets the machine, and the operators. Who is responsible, for example, if a medical diagnostic AI misdiagnoses a patient, leading to negative outcomes? As I have pointed out elsewhere, lethal autonomous weapons, that is, weapons that use AI to identify targets without human assistance, create a vacuum of accountability when civilians are accidentally targeted (which is by no means the most important ethical issue raised by lethal autonomous weapons…). Some ethicists have argued that there should always be a human operator “in the loop” with ultimate decision-making authority over AI systems that have a dramatic impact on human life, but this still leaves unaddressed the responsibilities of designers and marketers.
Impartiality: One of the most talked-about ethical problems with AI is the potential for bias. This problem is particularly salient because AI is sometimes understood as a tool for avoiding human bias. The problem arises because the training data used to “teach” the machine may itself reflect human biases, and the machine’s algorithms will then reproduce and reinforce those biases. In a famous example, Amazon created an AI tool to identify quality job applicants, training the model on ten years’ worth of applicant resumes. Amazon found, however, that the algorithm reproduced the gender bias already reflected in its hiring practices: it downgraded resumes that included the word “women’s” and favored resumes containing words more commonly used by men, such as “executed” (a toy sketch of this mechanism appears after the list of principles below). Similarly, racial biases have been found in AI algorithms designed to evaluate applicants for home loans.
Reliability: AI algorithms should perform the tasks they are designed to perform, and mechanisms should be in place for monitoring the reliability of AI.
Security and Privacy: AI obviously involves the collection and analysis of huge amounts of data, and much of that data could be considered personal data, such as medical data, financial data, and so on. People have a reasonable expectation that their data be kept private, even if they have consented to its use for purposes of data analysis. This is an area of special interest to me, and I have recently been attempting to develop a theological account of digital privacy (for example, see here).
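Returning for a moment to the impartiality principle above: the mechanism behind the Amazon example can be shown in a few lines of code. The sketch below is a fabricated toy (it is not Amazon’s system, and the resumes and hiring decisions are invented), but it shows how a model trained on a biased hiring history ends up penalizing a word like “women’s” for no reason other than that the history penalized it.

```python
# A toy illustration (not Amazon's actual system) of how bias in training
# data becomes bias in a model. The resumes and hiring labels are fabricated.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Fabricated hiring history (1 = hired). The invented pattern: past decisions
# disfavored resumes that mention "women's" organizations.
resumes = [
    "executed product launch and led engineering team",
    "captain of women's chess club, built a compiler",
    "executed data migration, managed cloud infrastructure",
    "president of women's coding society, shipped a mobile app",
    "led regional sales team, executed growth strategy",
    "volunteered with women's shelter, strong data analysis experience",
]
hired = [1, 0, 1, 0, 1, 0]

# The model simply learns whatever pattern the history contains.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: "women" is penalized and "executed" rewarded,
# not because either predicts job performance, but because the history says so.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print("weight for 'women':   ", round(weights["women"], 2))
print("weight for 'executed':", round(weights["executed"], 2))
```

Scaled up to millions of records and far subtler proxies, this same dynamic is what the impartiality principle is meant to guard against.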
Because much of my own scholarly work, on a variety of topics, has been interdisciplinary, I always have to come back to the question, “In the end, what makes this work theological?” What unique contribution does theology make to this conversation? (On a side note, Myles Werntz has some interesting thoughts on speaking competently about other fields, as a theologian, that I found helpful here at his Substack Christian Ethics in the Wild).
It is right to ask, then: what does theology add to the conversation about AI? Herzfeld’s work, focusing on what it means that we are created in the image of God and on how our relationship with God is reflected in our relationship with technology, provides one type of answer drawing on theological anthropology. In the JMT special issue, John Slattery likewise draws on the theological anthropology of M. Shawn Copeland to address bias and other forms of injustice in the implementation of AI. Pope Francis makes what I think is a crucial anthropological point in his recent address:
The concept of human dignity – and this is central – requires us to recognize and respect the fact that a person’s fundamental value cannot be measured by data alone. In social and economic decision-making, we should be cautious about delegating judgements to algorithms that process data, often collected surreptitiously, on an individual’s makeup and prior behaviour. Such data can be contaminated by societal prejudices and preconceptions. A person’s past behaviour should not be used to deny him or her the opportunity to change, grow and contribute to society. We cannot allow algorithms to limit or condition respect for human dignity, or to exclude compassion, mercy, forgiveness, and above all, the hope that people are able to change.
Another approach focuses on whether Christian faith provides us a distinctive way of seeing the world. In my favorite essay from the JMT special issue, Levi Checketts critiques the way AI researchers understand “intelligence,” arguing that they fail to recognize that intelligence is always socio-economically situated. Intelligence that presumes to be abstracted from concrete reality reflects the privileges of the dominant in society. Christians, however, are called to see the world from the perspective of the poor. Similarly, I have written about how the Christian preferential option for the poor should guide how we collect digital data and protect individual privacy. Whatever the approach, the theological contributions to our conversations about AI will only grow.
Of Interest…
Pope Francis has made significant changes to the voting process at the upcoming Synod of Bishops (on Synodality) in October. Whereas voting was previously limited to bishops and ten priests from institutes of consecrated life representing the various organizations of religious, voting will now be extended to five religious men and five religious women, as well as seventy other Catholics, including priests, consecrated women, deacons, and lay faithful. Among other things, this represents the first time women will be allowed to vote at a synod. It does raise the interesting question of whether it is truly a Synod of Bishops or something else! Reported by Gerard O’Connell at America.
Last week the interfaith advocacy group Faith in Public Life hosted an online event, “Catholic Women: Reclaiming Debates about Abortion and Reproductive Justice.” The event was moderated by Jeanné Lewis and included theologians Natalia Imperatori-Lee, Kimberly Lymore, and Emily Reimer-Barry, and journalist Mollie Wilson O’Reilly. A recording of the event is available here. The event followed the publication of an open letter signed by a number of Catholic women scholars arguing for a more comprehensive conversation about abortion and social policy, including access to quality health care and social services, racial justice, and sexual violence. The letter also called for creating a culture of respect and listening in which women’s experiences and narratives can be shared and heard. Reporting on both the Faith in Public Life event and the open letter by Katie Collins Scott at the National Catholic Reporter.
Coming Soon…
For nearly two years, a committee of the Catholic Theological Society of America (of which I am a member) has been exploring the prospect of the Society divesting from the fossil fuel industry. Last June, the CTSA’s board agreed to a proposal presented by the committee to divest from fossil fuels, and in the months since then, the committee has been working on a proposal explaining how the CTSA could transition toward divestment. Earlier this week, the committee presented its proposal to CTSA members at a virtual event. Next week in the newsletter, I want to outline the reasons behind the decision to divest and some of the issues we have faced in working out our proposal for how the Society should divest. My hope is that other Catholic institutions, including colleges and universities, can learn from our experience.
I apologize for missing a post earlier this week; I was sidelined by a sinus infection or similar malady, but I am now happily back on my feet.